161 Commits

Author SHA1 Message Date
Alexander Kukushkin
6300ec4dbf Implement missing tests for watchdog (#487)
and fix one bug
2017-07-27 12:41:46 +02:00
Ants Aasma
70d718a058 Simplify watchdog code (#452)
* Only activate watchdog while master and not paused

We don't really need the protections while we are not master. This way
we only need to tickle the watchdog when we are updating the leader key or
while a demotion is happening.

As implemented, we might fail to notice that we should shut down the
watchdog if someone demotes postgres and removes the leader key behind
Patroni's back. There are probably other similar cases. Basically, if the
administrator is being actively stupid they might get unexpected restarts.
That seems fine.

* Add configuration change support. Change MODE_REQUIRED to disable leader eligibility instead of closing Patroni.

Changes the watchdog timeout during the next keepalive when the ttl is changed. The watchdog driver and requirement can also be switched online.

When watchdog mode is `required` and the watchdog setup does not work, the effect is similar to nofailover. Add watchdog_failed to the status API to signify this. It is True only when the watchdog does not work **AND** is required.

* Reset implementation when config changed while active.

* Add watchdog safety margin configuration

Defaults to 5 seconds. Basically this is the maximum amount of time
that can pass between the calls to `dcs.update_leader()` and
`watchdog.keepalive()`, which are called right after each other. Should
be safe for pretty much any sane scenario and allows the default
settings to not trigger watchdog when DCS is not responding.

* Cancel bootstrap if watchdog activation fails

The system would have demoted itself anyway on the next HA loop. Doing it
during bootstrap at least gives some other node a chance to try bootstrapping
in the hope that it is configured correctly.

If all nodes are unable to activate the watchdog they will continue to try
until the disk is filled with moved datadirs. Perhaps not ideal behavior, but
as the situation is unlikely to resolve itself without administrator
intervention it doesn't seem too bad.
2017-07-27 12:16:11 +02:00
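The timing budget described above can be sketched in a few lines of Python (a minimal illustration; `watchdog_timeout` is a hypothetical helper, not Patroni's actual code):

```python
# Sketch of the safety-margin arithmetic: dcs.update_leader() and
# watchdog.keepalive() run back to back, so the watchdog timeout is the
# leader key ttl minus a margin covering the gap between the two calls.

def watchdog_timeout(ttl, safety_margin=5):
    """Seconds the watchdog waits before resetting the node.

    safety_margin defaults to 5 seconds, as in this commit.
    """
    timeout = ttl - safety_margin
    if timeout <= 0:
        raise ValueError('ttl too small for the configured safety margin')
    return timeout
```

With the default ttl of 30 this yields a 25-second watchdog timeout, i.e. the node resets before the leader key would expire in the DCS.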
Alexander Kukushkin
e2feac87bc Block callbacks during bootstrap (#483)
It wasn't a big issue when on_start was called during a normal bootstrap
with initdb, because usually such a process is very fast. But the situation
changes when we run a custom bootstrap, because a long time may pass between
the cluster becoming connectable and the end of recovery and promote.

Actually the situation was even worse than that: on_start was called with
the `replica` argument and on_role_change was never called later,
because the promote wasn't performed by Patroni.

As a solution to this problem we will block any callbacks during
bootstrap and explicitly call on_start after the leader lock was taken.
2017-07-24 14:19:19 +02:00
Alexander Kukushkin
cb360f089c Restart postgres after custom bootstrap if hba_file is defined in configuration (#482)
In addition to that always use absolute paths to config files.

Fixes https://github.com/zalando/patroni/issues/481
2017-07-22 09:46:05 +02:00
Alexander Kukushkin
d5b3d94377 Custom bootstrap (#454)
The task of restoring a cluster from a backup or cloning an existing cluster into a new one had been floating around for some time. It was kind of possible to achieve by doing a lot of manual actions, and it was very error prone. So I came up with the idea of making the way we bootstrap a new cluster configurable.

In short: we want to run a custom script instead of running initdb.
2017-07-18 15:12:58 +02:00
Alexander Kukushkin
acc6d7c2c2 Watchdog unit-tests, bugfixes and questions (#449)
Implement missing unit-tests for the watchdog and drop unused code
2017-07-11 10:00:30 +02:00
jouir
4ca94a5dab Add config_dir option for configuration files location (#466)
On Debian, the configuration files (postgresql.conf, pg_hba.conf, etc.) are not stored in the data directory. It would be great to be able to configure the location of this separate directory. Patroni can then override existing configuration files where they used to be.

The default is to store configuration files in the data directory. This setting targets custom installations like Debian's and any others that move configuration files out of the data directory.

Fixes #465
2017-07-04 16:14:17 +02:00
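The lookup this option enables can be sketched as follows (`resolve_config_file` is an illustrative helper, not Patroni's actual function):

```python
import os

# Sketch: config_dir defaults to the data directory; Debian-style
# installations point it at e.g. /etc/postgresql/9.6/main instead.

def resolve_config_file(name, data_dir, config_dir=None):
    """Return the full path of a postgres config file such as pg_hba.conf."""
    base = config_dir or data_dir
    return os.path.join(base, name)
```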
Alexander Kukushkin
b576e69362 Manage pg_hba.conf via patroni config or dynamic_configuration (#458)
So far Patroni was populating pg_hba.conf only when running bootstrap code, and after that it was not very handy to manage its content, because it was necessary to log in to every node, change pg_hba.conf manually and run pg_ctl reload.

This commit intends to fix it and give Patroni control over pg_hba.conf. It is possible to define pg_hba.conf content via `postgresql.pg_hba` in the patroni configuration file or in the `DCS/config` (dynamic configuration).

If the `hba_file` is defined in the `postgresql.parameters`, Patroni will ignore `postgresql.pg_hba`.
2017-06-23 12:38:25 +02:00
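The precedence rule can be sketched like this (`effective_pg_hba` is a hypothetical helper; the real logic lives inside Patroni's config handling):

```python
# Sketch: an explicit hba_file in postgresql.parameters wins, in which case
# Patroni leaves the externally managed file alone and ignores pg_hba.

def effective_pg_hba(postgresql_config):
    """Return the pg_hba lines Patroni should write, or None if it
    should not manage pg_hba.conf at all."""
    parameters = postgresql_config.get('parameters', {})
    if 'hba_file' in parameters:
        return None
    return postgresql_config.get('pg_hba')
```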
Alexander Kukushkin
681b6b507b Support unix sockets when connecting to a local postgres cluster (#457)
For backward compatibility this feature is not enabled by default. To enable it you have to set `postgresql.use_unix_socket: true`.
If the feature is enabled and `unix_socket_directories` is defined and non-empty, Patroni will use the first suitable value from it to connect to the local postgres cluster.
If `unix_socket_directories` is not defined, Patroni will assume that the default value should be used, will not pass `host` in the command line arguments, and will omit it from the connection url.

Solves: https://github.com/zalando/patroni/issues/61

In addition to the above, this commit fixes a couple of bugs:
* manual failover with pg_rewind in the paused state was broken
* psycopg2 (or libpq, I am not really sure which exactly) doesn't mark a cursor's connection as closed when we use a unix socket and an `OperationalError` occurs. We will close such connections on our own.
2017-06-22 11:47:57 +02:00
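The host-selection logic can be sketched as follows (a simplified illustration; the function name and signature are assumptions, not Patroni's API):

```python
# Sketch: with use_unix_socket enabled, the first usable entry of
# unix_socket_directories becomes the host; if none is configured at all,
# host is omitted so libpq falls back to its compiled-in default.

def local_conn_kwargs(use_unix_socket, unix_socket_directories, tcp_host, port):
    kwargs = {'port': port}
    if not use_unix_socket:
        kwargs['host'] = tcp_host
        return kwargs
    if unix_socket_directories:
        dirs = [d.strip() for d in unix_socket_directories.split(',') if d.strip()]
        if dirs:
            kwargs['host'] = dirs[0]
    return kwargs
```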
Alexander Kukushkin
5bd9aa7547 BUGFIX: pg_rewind wasn't working when data page checksum is not enabled (#456)
pg_controldata output depends on the postgres major version, and for old postgres versions some of the parameters are prefixed with 'Current '.

Bug was introduced by commit 37c1552.
Fixes https://github.com/zalando/patroni/issues/455
2017-06-16 10:25:54 +02:00
Alexander Kukushkin
e3a01727a9 Implement missing tests and add pg-10 support to wale_restore (#446)
In addition to that, get rid of two modules and fix the formatting of tests.
2017-05-22 12:01:02 +02:00
Alexander Kukushkin
cd84dc82b6 Implement postgresql-10 support (#444)
Mainly it handles rename of xlog to wal.
In the API and inside DCS it is still named xlog (for compatibility).

* Address feedback
2017-05-19 17:04:53 +02:00
Alexander Kukushkin
7633b19213 Support change of superuser and replication credentials on reload (#445)
Fixes: https://github.com/zalando/patroni/issues/353
and: https://github.com/zalando/patroni/issues/443
2017-05-19 16:32:35 +02:00
Alexander Kukushkin
37c1552c0a Smart pg_rewind (#417)
Previously we were running pg_rewind only in a limited number of cases:
 * when we knew postgres was a master (no recovery.conf in the data dir)
 * when we were doing a manual switchover to a specific node (no
   guarantee that this node is the most up-to-date)
 * when a given node has the nofailover tag (it could be ahead of the new master)

This approach was kind of working in most cases, but sometimes we
were executing pg_rewind when it was not necessary, and in some other
cases we were not executing it although it was needed.

The main idea of this PR is to first figure out whether we really need
to run pg_rewind, by analyzing the timeline id, LSN and history file on the
master and replica, and to run it only if it's needed.
2017-05-19 16:32:06 +02:00
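The decision can be sketched roughly like this (heavily simplified; the real check also parses the history file to find the divergence LSN, and the helper name is an assumption):

```python
# Sketch: a rewind is only needed when the local timeline diverged from
# the master's and we wrote WAL past the point of divergence.

def need_rewind(local_timeline, local_lsn, master_timeline, diverged_lsn=None):
    if local_timeline == master_timeline:
        return False   # same timeline: plain replication catches up
    if local_timeline > master_timeline:
        return True    # we were promoted past the master: must rewind
    # local timeline is older: rewind only if we wrote past the divergence point
    return diverged_lsn is not None and local_lsn > diverged_lsn
```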
Alexander Kukushkin
44a7142a9d Synchronous mode strict (#435)
If synchronous_mode_strict==true then '*' will be written as synchronous_standby_names when the last replication host dies.
2017-04-27 14:32:15 +02:00
Alexander Kukushkin
1c5d5f1dae BUGFIX: pg_drop_replication_slot may not be called if slot is active (#427)
The default value of wal_sender_timeout is 60 seconds, while we try to remove the replication slot after 30 seconds (ttl=30). That means postgres might think the slot is still active and do nothing, while Patroni at the same time thought it was removed successfully.

If the drop replication slot query doesn't return a single row, we must fetch the list of existing physical replication slots from postgres on the next iteration of the HA loop.

Fixes: issue #425
2017-04-18 12:45:24 +02:00
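The fix can be sketched as follows (`handle_drop_result` and the `state` dict are hypothetical, used only to illustrate the idea of forcing a slot re-check on the next loop):

```python
# Sketch: if the drop query affected no rows, postgres may still consider
# the slot active (wal_sender_timeout=60s > ttl=30s), so schedule a fresh
# load of the slot list on the next HA loop instead of assuming success.

def handle_drop_result(rowcount, state):
    """Return True only when the drop reported exactly one affected slot."""
    if rowcount != 1:
        state['schedule_load_slots'] = True
        return False
    return True
```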
Alexander Kukushkin
3ece35c0a6 Reassemble postgresql parameters when major version became known (#395)
* Reassemble postgresql parameters when major version became known

Otherwise we were writing some "unknown" parameters into postgresql.conf
and postgres was refusing to start. Only 9.3 was affected.

In addition to that, move the rename of wal_level from hot_standby to replica
into the get_server_parameters method. Now this rename is handled in a
single place.

* Bump etcd and consul versions
2017-02-16 17:07:21 +01:00
Alexander Kukushkin
a5e79bce9d * bugfix: pass arguments to a callback 2016-12-16 15:44:04 +01:00
Alexander Kukushkin
8c0712047e Serialize callback execution (#366)
If the previous callback is still running, kill it.
This also fixes a problem with zombie processes when executing callbacks from the main thread.
2016-12-16 14:29:53 +01:00
Alexander Kukushkin
1e984c3f00 Take the max of xlog_receive and xlog_replay (#363) 2016-12-12 16:27:36 +01:00
Ants Aasma
1290b30b84 Introduce starting state and master start timeout. (#295)
Previously pg_ctl waited for a timeout and then happily trotted on, considering PostgreSQL to be running. This caused PostgreSQL to show up in listings as running when it actually was not, and caused a race condition that resulted in either a failover, a crash recovery, or a crash recovery interrupted by failover and a missed rewind.

This change adds a master_start_timeout parameter and introduces a new state for the main run_cycle loop: starting. When master_start_timeout is zero we fail over as soon as there is a failover candidate. Otherwise PostgreSQL will be started, but once master_start_timeout expires we stop it and release the leader lock if failover is possible. Once failover succeeds or fails (no leader and no one to take the role) we continue with normal processing. While we are waiting for the master start timeout we handle manual failover requests.

* Introduce timeout parameter to restart.

When the restart timeout is set, the master becomes eligible for failover after that timeout expires, regardless of master_start_timeout. Immediate restart calls will wait for this timeout to pass, even when the node is a standby.
2016-12-08 14:44:27 +01:00
Alexander Kukushkin
c6417b2558 Add postgres-9.6 support (#357)
Starting from 9.6 we need wal_level = 'replica', which is an alias for 'hot_standby'. It was working without problems before, but if somebody changed wal_level to replica, Patroni exposed the pending_restart flag, although a restart in this case is not necessary.

* bump versions of consul and etcd to the latest for travis integration-tests
2016-11-25 12:35:01 +01:00
Ants Aasma
7e53a604d4 Add synchronous replication support. (#314)
Adds a new configuration variable synchronous_mode. When enabled, Patroni will manage synchronous_standby_names to enable synchronous replication whenever there are healthy standbys available. With synchronous mode enabled, Patroni will automatically fail over only to a standby that was synchronously replicating at the time of master failure. This effectively means zero lost user-visible transactions.

To enforce the synchronous failover guarantee, Patroni stores the current synchronous replication state in the DCS using strict ordering: first enable synchronous replication, then publish the information. A standby can use this to verify that it was indeed a synchronous standby before the master failed and is allowed to fail over.

We can't enable multiple standbys as synchronous, allowing PostgreSQL to pick one, because we can't know which one was actually synchronous on the master when it failed. This means that on standby failure, commits will be blocked on the master until the next run_cycle iteration. TODO: figure out a way to poke Patroni to run sooner, or allow PostgreSQL to pick one without the possibility of lost transactions.

On graceful shutdown standbys will disable themselves by setting a nosync tag for themselves and waiting for the master to notice and pick another standby. This adds a new mechanism for Ha to publish dynamic tags to the DCS.

When the synchronous standby goes away or disconnects a new one is picked and Patroni switches master over to the new one. If no synchronous standby exists Patroni disables synchronous replication (synchronous_standby_names=''), but not synchronous_mode. In this case, only the node that was previously master is allowed to acquire the leader lock.

Added acceptance tests and documentation.

Implementation by @ants with extensive review by @CyberDem0n.
2016-10-19 16:12:51 +02:00
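The ordering guarantee boils down to a simple check on the standby side. A sketch, with an assumed shape for the published sync record (the field names here are illustrative, not the actual DCS schema):

```python
# Sketch: because synchronous replication was enabled on the master
# *before* the sync standby name was published, a standby named in the
# record is guaranteed to have been synchronous when the master failed.

def can_failover(member_name, dcs_sync_state):
    """dcs_sync_state: the published sync record, e.g.
    {'leader': 'node1', 'sync_standby': 'node2'}"""
    return dcs_sync_state.get('sync_standby') == member_name
```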
Alejandro Martínez
48a6af6994 Add post_init configuration parameter on bootstrap (#296)
* Add bootstrap post_init configuration parameter
* Add documentation

By @zenitraM
2016-09-28 15:42:23 +02:00
Alexander Kukushkin
453e68637a Don't try to remove leader key when running ctl on the leader node (#302) 2016-09-19 13:33:24 +02:00
Alexander Kukushkin
5c8399e4fa Make sure data directory is empty before trying to restore backup (#307)
We make a number of attempts when trying to initialize a replica using
different methods. Any of these attempts may create and put something into
the data directory, which causes subsequent attempts to fail.
In addition to that, improve logging when creating a replica.
2016-09-19 13:32:27 +02:00
Alexander Kukushkin
c2b91d0195 Merge branch 'master' of github.com:zalando/patroni into feature/disable-automatic-failover 2016-09-05 16:03:55 +02:00
Oleksii Kliukin
3f7fa4b41f Avoid retries when syncing replication slots. (#282)
* Avoid retries when syncing replication slots.

Do not retry postgres queries that fetch, create and drop slots at the end of
the HA cycle. The complete run_cycle routine executes while holding the
async_executor lock. This lock is also used when scheduling operations like
reinit or restart in different threads. It looks like CPython's threading lock
has fairness issues when multiple threads try to acquire the same lock and one
of them executes long-running actions while holding it: the others have little
chance of acquiring the lock in order. To get around this issue, the long
action (i.e. retrying the query) is removed.

Investigation by Ants Aasma and Alexander Kukushkin.
2016-09-02 17:00:37 +02:00
Alexander Kukushkin
db9b62b7ed Merge branch 'master' of github.com:zalando/patroni into feature/disable-automatic-failover 2016-09-01 11:09:09 +02:00
Alexander Kukushkin
33ff372ef6 Always try to rewind on manual failover 2016-09-01 11:08:26 +02:00
Oleksii Kliukin
46f1c5b690 Merge pull request #269 from zalando/feature/replica-info
Return replication information on the api
2016-08-31 13:58:19 +02:00
Ants Aasma
fa6bd51ad1 Appease Quantifiedcode about stylistic issues 2016-08-30 00:40:19 +03:00
Ants Aasma
e428c8d0fa Replace invalid characters in member names for replication slot names
PostgreSQL replication slot names only allow names consisting of [a-z0-9_].
Invalid characters cause replication slot creation and standby startup to fail.
This change substitutes the invalid characters with underscores or unicode
codepoints. In case multiple member names map to identical replication slot
names, the master log will contain a corresponding error message.

Motivated by wanting to use hostnames as member names. Hostnames often
contain periods and dashes.
2016-08-30 00:21:33 +03:00
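The substitution can be sketched as follows (close to, but not necessarily identical to, the actual implementation):

```python
import re

# Sketch: lowercase the member name, map periods and dashes to underscores,
# and encode any other invalid character as its decimal unicode codepoint
# so that distinct member names rarely collide.

def slot_name_from_member_name(member_name):
    def replace(match):
        c = match.group(0)
        return '_' if c in '-.' else 'u{:04d}'.format(ord(c))
    return re.sub(r'[^a-z0-9_]', replace, member_name.lower())
```

For example, a hostname-style member name `node-1.example.com` maps to the valid slot name `node_1_example_com`.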
Alexander Kukushkin
74166e996c Fix tests and formatting 2016-08-25 10:09:32 +02:00
Feike Steenbergen
1fc8b43b36 Return replication information on the api
To enable better monitoring, it is useful to have replication statistics.
Addresses issue #261
2016-08-24 09:31:49 +02:00
Alexander Kukushkin
8ef7178ddf Refactor code dealing with database connection string/params (#255)
In the original code we were parsing/deparsing url-style connection
strings back and forth. That was not really resource greedy, but rather
annoying. Also it was not really obvious how to switch all local
connections to unix sockets (preferably).

This commit isolates the different use-cases of working with connection
strings and minimizes the amount of code parsing and deparsing them. It also
introduces one new helper method in the `Member` object: `conn_kwargs`.
This method can accept a dict object with credentials
(username and password) as a parameter. It returns a dict object which can
be used by `psycopg2.connect` or for building connection urls for
pg_rewind, pg_basebackup or some other replica creation methods.

Params for the local connection are built in the `_local_connect_kwargs`
method and can easily be changed to a unix socket later.
2016-08-10 10:19:52 +02:00
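The `conn_kwargs` idea can be sketched like this (a simplified Member with an assumed constructor; the real class carries more fields):

```python
# Sketch: a member exposes its connection parameters as a dict, optionally
# merged with credentials, ready for psycopg2.connect(**kwargs) or for
# assembling a connection url.

class Member:
    def __init__(self, host, port, dbname='postgres'):
        self._kwargs = {'host': host, 'port': port, 'dbname': dbname}

    def conn_kwargs(self, auth=None):
        kwargs = dict(self._kwargs)
        if auth:
            kwargs.update(auth)
        return kwargs
```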
Oleksii Kliukin
c91eda8d78 Merge branch 'master' into feature/scheduled_restarts 2016-07-11 12:56:24 +02:00
Alexander Kukushkin
659f7617f5 New option: remove_data_directory_on_rewind_failure
One more try to fix pg_rewind
2016-07-05 12:11:15 +02:00
Oleksii Kliukin
8834f929aa Improve the unit tests/coverage. 2016-07-05 10:07:29 +02:00
Alexander Kukushkin
b84e22c4ea Implement more checks in the follow method
Although such a situation should not happen in reality (the follow method is
not supposed to be called when the node is holding the leader lock and
postgres is running), it is better to be on the safe side and
implement as many checks as possible, because this method could
potentially remove the data directory.
2016-07-04 10:56:37 +02:00
Alexander Kukushkin
ee529669d2 Start readonly when holding leader lock
Not starting postgres was causing a situation where there was no
master running...
2016-07-01 12:28:02 +02:00
Alexander Kukushkin
aa10f42913 checkpoint method returns a string status message 2016-06-30 10:45:54 +02:00
Alexander Kukushkin
4b67008488 Try to cover as much as possible pg_rewind corner-cases
Rewind is not possible when:
1) trying to rewind from itself
2) the leader is not reachable
3) the leader is_in_recovery

All these cases were leading to removal of the data directory...
In all cases except 1) it should "retry" when the leader becomes
available and is not is_in_recovery.
2016-06-29 14:29:31 +02:00
Alexander Kukushkin
0318749b56 bugfix: api must report role=master during pg_ctl stop
In addition to that, make the pg_ctl --timeout option configurable.
If the stop or start doesn't succeed within the given timeout when demoting
the master, the role will be forcibly changed to 'unknown' and all needed
callbacks executed.
2016-06-28 14:14:42 +02:00
Alexander Kukushkin
bd1e658080 Bugfix: obviously sys.hexversion was one character shorter
plus remove some unneeded code
2016-06-17 12:18:41 +02:00
Alexander Kukushkin
57807ff337 Don't expose replication user/passwd in DCS 2016-06-15 09:34:04 +02:00
Alexander Kukushkin
c64170ef33 Extend list of postgres parameters controlled by Patroni
These parameters usually must be the same across all cluster nodes and
therefore must be set only via the global configuration and always passed as
a list of postgres arguments (via pg_ctl), to make it impossible to
accidentally change them with 'ALTER SYSTEM'.
2016-06-13 10:33:14 +02:00
Alexander Kukushkin
57c6641683 Reimplement pg_ctl status in python
subprocess.call was causing problems when the server was running under high
load.
2016-06-09 08:28:11 +02:00
Alexander Kukushkin
16771f37d5 Compare old and new user-defined parameters to avoid a reload
when the parameters didn't change.
Plus get wal_segment_size from pg_settings instead of hardcoding its value.
2016-06-03 12:11:14 +02:00
Alexander Kukushkin
2e5ce4a303 "Smart" compare of postgres parameters
to decide whether we need a reload/restart
2016-06-02 16:34:34 +02:00
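The comparison can be sketched as follows (illustrative only; the real code reads each parameter's context from pg_settings instead of relying on a hardcoded set):

```python
# Sketch: only parameters whose values actually differ matter, and a
# restart is required only when one of the changed parameters cannot be
# applied with a plain reload (SIGHUP).

RESTART_REQUIRED = {'max_connections', 'shared_buffers', 'wal_level'}  # assumed subset

def pending_action(old, new):
    """Return None, 'reload', or 'restart' for a parameter change."""
    changed = {k for k in set(old) | set(new) if old.get(k) != new.get(k)}
    if not changed:
        return None
    return 'restart' if changed & RESTART_REQUIRED else 'reload'
```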