Previously pg_ctl waited for a timeout and then happily carried on, considering PostgreSQL to be running. This caused PostgreSQL to show up in listings as running when it was actually not, and caused a race condition that resulted in either a failover, a crash recovery, or a crash recovery interrupted by a failover and a missed rewind.
This change adds a master_start_timeout parameter and introduces a new state for the main run_cycle loop: starting. When master_start_timeout is zero we will fail over as soon as there is a failover candidate. Otherwise PostgreSQL will be started, but once master_start_timeout expires we will stop it and release the leader lock if failover is possible. Once failover succeeds or fails (no leader and no one to take the role) we continue with normal processing. While we are waiting for the master start timeout we handle manual failover requests.
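A minimal sketch of this decision logic, assuming hypothetical helpers such as `has_failover_candidate` and a recorded start time (this is not the actual Patroni API):

```python
import time

def handle_starting_state(config, cluster, started_at):
    """Illustrative only: what to do while postgres is still starting as a master."""
    timeout = config.get('master_start_timeout', 300)
    if timeout == 0 and cluster.has_failover_candidate():
        return 'release the leader key and fail over immediately'
    if time.time() - started_at > timeout and cluster.has_failover_candidate():
        # the master did not start in time: stop it and give up the lock
        return 'stop postgres and release the leader key'
    # keep waiting, but still honor manual failover requests
    return 'continue in the starting state'
```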
* Introduce timeout parameter to restart.
When the restart timeout is set, the master becomes eligible for failover after that timeout expires, regardless of master_start_timeout. Immediate restart calls will wait for this timeout to pass, even when the node is a standby.
* Fix broken WAL directory symlinks after WAL-E restore.
* Add unit-tests for wale_restore.
* Reduce the number of MagicMocks to a single one (for psycopg2.connect)
* Make the WAL-E restore process more robust.
Allow retries only on WAL-E failures.
Sleep after each attempt.
* Update the tests.
* Change WAL-E behavior when master is absent, tests.
- Challenge the use of WAL-E even when the 'no_master' flag is set. This flag in
fact does not indicate that the master is absent. In order to check whether the
master is absent, the script checks whether the connection string is empty.
- Retry on a failure to fetch the current xlog position from the master. The
reason it has to be separate from the retries in the main loop is that we don't
just retry the connection attempt, but also make a decision once it either
succeeded or all attempts are exhausted (see the sketch after this list).
- Remove wrong usages of PropertyMocks from the tests.
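A hedged sketch of that separate retry loop; the connection-string check, the number of attempts and the sleep interval are assumptions, not the actual wale_restore code:

```python
import time
import psycopg2

def fetch_master_xlog_position(connstring, attempts=3, sleep=5):
    """Return the xlog position from the master, or None if all attempts fail."""
    if not connstring:          # an empty connection string means "no master"
        return None
    for _ in range(attempts):
        try:
            with psycopg2.connect(connstring) as conn:
                with conn.cursor() as cur:
                    cur.execute("SELECT pg_current_xlog_location()")
                    return cur.fetchone()[0]
        except psycopg2.Error:
            time.sleep(sleep)   # sleep between attempts
    return None                 # the caller decides what to do when exhausted
```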
* Avoid redundant output of the exception message in logger.exception
* Address issues uncovered by flake8
* Add https and auth support for etcd
Also implement support for the PATRONI_ETCD_URL and PATRONI_ETCD_SRV
environment variables
* Implement etcd.proxy, etcd.cacert, etcd.cert and etcd.key support
Now it should be possible to set up a fully encrypted connection to etcd
with authentication.
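A rough sketch of how PATRONI_ETCD_URL could be mapped onto the etcd.* settings above; the PATRONI_ETCD_CACERT/CERT/KEY/PROXY variable names are used here purely for illustration:

```python
import os
from urllib.parse import urlparse

def etcd_params_from_env():
    url = os.environ.get('PATRONI_ETCD_URL')
    if not url:
        return {}
    r = urlparse(url)
    params = {'protocol': r.scheme or 'http',
              'host': r.hostname,
              'port': r.port or 2379}
    if r.username and r.password:
        params.update(username=r.username, password=r.password)
    # client certificates for the fully encrypted, authenticated setup
    for key in ('cacert', 'cert', 'key', 'proxy'):
        value = os.environ.get('PATRONI_ETCD_' + key.upper())
        if value:
            params[key] = value
    return params
```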
Starting from 9.6 we need wal_level = 'replica', which is an alias for 'hot_standby'. It was working before without problems, but if somebody changed wal_level to replica, Patroni would expose the pending_restart flag, although a restart is not necessary in this case.
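A tiny sketch of the alias-aware comparison implied here (not the actual Patroni code):

```python
# 'replica' is an alias for 'hot_standby' starting with 9.6
WAL_LEVEL_ALIASES = ({'hot_standby', 'replica'},)

def wal_level_requires_restart(old, new):
    if old == new:
        return False
    return not any(old in group and new in group for group in WAL_LEVEL_ALIASES)

assert wal_level_requires_restart('hot_standby', 'replica') is False
```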
* bump versions of consul and etcd to the latest for travis integration-tests
If Patroni is started in a Docker container with pid=1 it will execute itself
with the same arguments. The original process will take care of init
process duties, i.e. handle SIGCHLD and reap dead orphan processes.
It will also forward SIGINT, SIGHUP, SIGTERM and some other signals to
the real Patroni process.
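A simplified sketch of this pid=1 behaviour (not the actual Patroni entry point), assuming the re-executed child is simply `sys.argv` run again:

```python
import os
import signal
import subprocess
import sys

if os.getpid() == 1:
    child = subprocess.Popen([sys.executable] + sys.argv)  # the real Patroni

    def forward(signo, frame):
        child.send_signal(signo)          # pass the signal on to the real Patroni

    for sig in (signal.SIGINT, signal.SIGHUP, signal.SIGTERM):
        signal.signal(sig, forward)

    while True:
        try:
            pid, status = os.wait()       # reap any dead orphan process
        except OSError:
            break
        if pid == child.pid:              # the real Patroni exited
            sys.exit(status >> 8)
```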
Previously replicas were always watching the leader key (even if postgres
was not running there). It was not a big issue, but it was not possible to
interrupt such a watch when postgres started up or stopped successfully. It
was also delaying the update_member call, so we had somewhat stale
information in the DCS for up to `loop_wait` seconds. This commit changes
that behavior. If the async_executor is busy starting, stopping or
restarting postgres, we will not watch the leader key but instead wait for
an event from the async_executor for up to `loop_wait` seconds. The async
executor will fire such an event only if the function it was calling
returned something that evaluates to boolean True.
This functionality is really needed to change the way we decide whether
pg_rewind is necessary. That decision requires a local postgres to be
running, and it is really important for us to get such a notification as
soon as possible.
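A condensed sketch of that wait logic, assuming the async executor exposes a `threading.Event` that is only set when the background function returns a truthy value (the names are illustrative):

```python
import threading

class AsyncExecutorSketch:
    """Illustrative stand-in for the real async executor."""

    def __init__(self):
        self.event = threading.Event()

    def run(self, func, *args):
        self.event.clear()
        if func(*args):          # wake the main loop only on a "meaningful" result
            self.event.set()

def wait_for_next_cycle(async_executor, dcs, loop_wait, busy):
    if busy:
        # postgres is starting/stopping/restarting: don't watch the leader key,
        # just wait until the action finishes or loop_wait expires
        async_executor.event.wait(loop_wait)
    else:
        dcs.watch(loop_wait)     # the usual watch on the leader key
```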
* Replace pytz.UTC with dateutil.tz.tzutc; it helps to reduce memory usage by more than 4 MB...
* fix check of python version: 0x0300000 => 0x3000000
* Update leader key before restart and demote
Adds a new configuration variable synchronous_mode. When enabled Patroni will manage synchronous_standby_names to enable synchronous replication whenever there are healthy standbys available. With synchronous mode enabled Patroni will automatically fail over only to a standby that was synchronously replicating at the time of master failure. This effectively means zero lost user visible transactions.
To enforce the synchronous failover guarantee Patroni stores the current synchronous replication state in the DCS, using strict ordering: first enable synchronous replication, then publish the information. A standby can use this to verify that it was indeed a synchronous standby before the master failed and is therefore allowed to fail over.
We can't enable multiple standbys as synchronous and let PostgreSQL pick one, because we can't know which one was actually set to be synchronous on the master when it failed. This means that on standby failure commits will be blocked on the master until the next run_cycle iteration. TODO: figure out a way to poke Patroni to run sooner, or allow PostgreSQL to pick one without the possibility of lost transactions.
On graceful shutdown standbys will disable themselves by setting a nosync tag for themselves and waiting for the master to notice and pick another standby. This adds a new mechanism for Ha to publish dynamic tags to the DCS.
When the synchronous standby goes away or disconnects a new one is picked and Patroni switches master over to the new one. If no synchronous standby exists Patroni disables synchronous replication (synchronous_standby_names=''), but not synchronous_mode. In this case, only the node that was previously master is allowed to acquire the leader lock.
Added acceptance tests and documentation.
Implementation by @ants with extensive review by @CyberDem0n.
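A minimal sketch of the "enable first, publish second" ordering described above; `set_synchronous_standby_names`, `write_sync_state` and the member attributes are placeholders rather than the real Patroni/DCS API:

```python
def pick_sync_candidate(cluster):
    """Placeholder: choose the healthiest streaming standby (or None)."""
    members = [m for m in cluster.members if m.is_streaming and not m.nosync]
    return members[0].name if members else None

def update_synchronous_replication(postgresql, dcs, cluster):
    candidate = pick_sync_candidate(cluster)
    # 1. make the standby synchronous on the master ...
    postgresql.set_synchronous_standby_names(candidate or '')
    # 2. ... and only then publish it to the DCS, so that after a master crash a
    #    standby can prove it really was synchronous and is allowed to fail over
    dcs.write_sync_state(leader=postgresql.name, sync_standby=candidate)
```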
..the same way as for etcd
Change the HTTPClient implementation from using `requests.session` to
`urllib3.PoolManager`, because the reference implementation from python-consul
didn't really work with timeouts and was blocking the HA loop...
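For illustration, a sketch of a `urllib3.PoolManager` client with explicit connect/read timeouts, which is what keeps a slow Consul endpoint from blocking the HA loop (the URL and values are just examples):

```python
import urllib3

http = urllib3.PoolManager(num_pools=10,
                           timeout=urllib3.Timeout(connect=3.0, read=10.0),
                           retries=False)

# a slow or unreachable agent now fails fast instead of hanging the HA loop
response = http.request('GET', 'http://127.0.0.1:8500/v1/status/leader')
print(response.status, response.data)
```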
If the etcd node is partitioned from the rest of the cluster it is still
possible to read from it (though it returns some stale information),
but it is not possible to write to it.
Previously Patroni was trying to fetch the new cluster view from the DCS in
order to figure out whether it is still the leader or not, and etcd always
returned stale info in which the node still owns the leader key, but with a
negative TTL.
This weird bug clearly shows how dangerous premature optimization is.
We make a number of attempts when trying to initialize a replica using
different methods. Any of these attempts may create and leave something in the
data directory, which causes the next attempts to fail.
In addition to that, improve logging when creating a replica.
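A minimal sketch of the cleanup between attempts described above (the method interface is illustrative):

```python
import shutil

def create_replica(data_dir, replica_methods):
    """Try each configured method; clean the data directory between attempts."""
    for method in replica_methods:
        ret = method()                   # e.g. wale_restore, basebackup, ...
        if ret == 0:
            return 0
        # a failed attempt may have left files behind, which would break the
        # next method, so remove them before retrying
        shutil.rmtree(data_dir, ignore_errors=True)
    return 1
```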
* reap children before and after running HA loop
When Patroni is running in a docker container with pid=1 it is
also responsible for reaping all dead processes. We can't call
os.waitpid immediately after receiving SIGCHLD because it breaks the
subprocess module: it simply stops receiving the exit codes of the processes
it executes, because those processes have already been reaped. That's why we
just register the fact of receiving SIGCHLD and reap children only after the
execution of the HA loop.
If the postmaster was dying for some reason, Patroni was able to detect
this fact only on the next iteration of the HA loop, because the zombie
process was still there and it was possible to send signal 0 to it.
To avoid such a situation we should also reap all dead processes before
executing the HA loop.
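A small sketch of this approach, assuming a module-level flag (not the actual Patroni signal handling): only note the SIGCHLD in the handler and reap outside of it, before and after the HA loop:

```python
import os
import signal

sigchld_received = False

def sigchld_handler(signo, frame):
    global sigchld_received
    sigchld_received = True            # don't call os.waitpid here

def reap_children():
    global sigchld_received
    while sigchld_received:
        sigchld_received = False
        try:
            while os.waitpid(-1, os.WNOHANG) != (0, 0):
                pass                   # collect every zombie that is ready
        except OSError:                # no more children
            break

signal.signal(signal.SIGCHLD, sigchld_handler)
```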
* Don't rely on _cursor_holder when closing connection
it can happen that the connection has been opened but the cursor has not...
* Don't "retry" when fetching current xlog location and it fails
On every iteration of the HA loop we are updating the member key in DCS and,
among other data, the current xlog location is stored in the value.
If postgres has died for some reason it is not possible to fetch the
xlog position and we are just wasting retry_timeout/2 = 5 seconds there.
If this information is missing from DCS for the duration of one HA
loop nothing should break. Patroni is not relying on this information
anyway. When it is doing a manual or automatic failover it always
communicates with the other nodes directly to get the most recent
information.
* Don't try to update leader optime when postgres is not 100% healthy
The `update_lock` method not only updates the leader lock but
also writes the most recent value of the xlog position into the optime/leader
key. If we know that postgres may not be 100% healthy because it is in the
process of a restart or recovery, we should not try to fetch the current xlog
position and update 'optime/leader'. Previously we were using the
`AsyncExecutor.busy` property to avoid such an action, but I think
we should be more explicit and do the update only if we know that
postgres is 100% healthy.
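An illustrative sketch of the explicit check (the attribute and method names on `postgresql` and `dcs` are placeholders):

```python
def update_lock(dcs, postgresql):
    ret = dcs.update_leader()
    if ret and postgresql.state == 'running' and postgresql.role == 'master':
        # only touch optime/leader when postgres is known to be 100% healthy
        dcs.write_leader_optime(postgresql.last_operation())  # current xlog position
    return ret
```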
* Avoid retries when syncing replication slots.
Do not retry the postgres queries that fetch, create and drop slots at the end
of the HA cycle. The complete run_cycle routine executes with the async_executor
lock held. This lock is also used when scheduling operations like reinit or
restart from different threads. It looks like the CPython threading lock has
fairness issues when multiple threads try to acquire the same lock and one of
them executes long-running actions while holding it: the others have little
chance of acquiring the lock in order. To get around this issue, the long
action (i.e. retrying the query) is removed.
Investigation by Ants Aasma and Alexander Kukushkin.
This error is sent by etcd when Patroni is doing a "watch" on the leader key,
which is never updated after creation, while the etcd cluster receives a lot of
updates, which cleans up the history of events.
Instead of doing the watch on modifiedIndex + 1 we will do the watch on
X-Etcd-Index, which is probably still available...
Instead of emptying the stale failover key as a master and bailing
out, continue with the healthiest node evaluation. This should make
the actual master acquire the leader key faster. Emit a warning
message as well and add unit tests.
PostgreSQL replication slot names only allow names consisting of [a-z0-9_].
Invalid characters cause replication slot creation and standby startup to fail.
This change substitutes the invalid characters with underscores or unicode
codepoints. In case multiple member names map to identical replication slot
names, the master log will contain a corresponding error message.
Motivated by wanting to use hostnames as member names. Hostnames often
contain periods and dashes.
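A hedged sketch of such a substitution for a hostname-style member name; the exact mapping Patroni uses may differ:

```python
import re

def slot_name_from_member_name(member_name):
    def replace_char(match):
        c = match.group(0)
        return '_' if c in '-.' else 'u{:04x}'.format(ord(c))
    return re.sub('[^a-z0-9_]', replace_char, member_name.lower())

print(slot_name_from_member_name('node-1.example.com'))  # node_1_example_com
```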
Any node of the cluster will maintain its member key as long as Patroni is
running there.
The master node will also maintain the leader key as long as postgres is
running as a master. If there is no postgres or it is running in recovery,
Patroni will release the leader lock.
Bootstrap of a new cluster will work (it is possible to specify
paused: true in `bootstrap.dcs`). Replicas will also be able to join
the cluster if the leader lock exists.
If postgres is not running on the node, Patroni will not try to bring it
up. It also disables reinitialize and all kinds of scheduled actions, i.e.
scheduled restart and scheduled failover.
In case the DCS stops being reachable, Patroni will not "demote" the master
if automatic failover is disabled.
Patroni will not stop postgres on exit.
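A condensed sketch of the paused run cycle described above; `has_lock`, `is_running` and the other helpers are illustrative only:

```python
def run_cycle_paused(ha):
    ha.touch_member()                        # the member key is always maintained
    if ha.has_lock():
        if ha.postgresql.is_running() and ha.postgresql.role == 'master':
            ha.update_lock()                 # keep the leader key
        else:
            ha.release_leader_lock()         # no postgres, or it is in recovery
    # no automatic start of postgres, no reinitialize, no scheduled actions
    return 'PAUSE: no action'
```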