mirror of
https://github.com/optim-enterprises-bv/patroni.git
synced 2026-01-09 09:01:40 +00:00
Adds a new configuration variable synchronous_mode. When enabled, Patroni will manage synchronous_standby_names to enable synchronous replication whenever there are healthy standbys available. With synchronous mode enabled, Patroni will automatically fail over only to a standby that was synchronously replicating at the time of master failure. This effectively means zero lost user-visible transactions.

To enforce the synchronous failover guarantee, Patroni stores the current synchronous replication state in the DCS using strict ordering: first enable synchronous replication, then publish the information. A standby can use this to verify that it was indeed a synchronous standby before the master failed and is therefore allowed to fail over. We can't enable multiple standbys as synchronous and let PostgreSQL pick one, because we can't know which one was actually set as synchronous on the master when it failed. This means that on standby failure, commits will be blocked on the master until the next run_cycle iteration. TODO: figure out a way to poke Patroni to run sooner, or allow PostgreSQL to pick one without the possibility of lost transactions.

On graceful shutdown, standbys will disable themselves by setting a nosync tag and waiting for the master to notice and pick another standby. This adds a new mechanism for Ha to publish dynamic tags to the DCS.

When the synchronous standby goes away or disconnects, a new one is picked and Patroni switches the master over to it. If no synchronous standby exists, Patroni disables synchronous replication (synchronous_standby_names = ''), but not synchronous_mode. In this case, only the node that was previously master is allowed to acquire the leader lock.

Added acceptance tests and documentation. Implementation by @ants with extensive review by @CyberDem0n.
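The strict ordering described above can be sketched as follows. This is not Patroni's actual implementation; the class and method names (FakePostgres, FakeDCS, promote_to_sync, write_sync_state) are hypothetical stand-ins used only to illustrate why the master must enable synchronous replication in PostgreSQL before advertising the standby's name in the DCS:

```python
# Sketch of the "enable sync first, publish second" ordering. All names
# here are illustrative, not Patroni's real API.

class FakePostgres:
    """Stand-in for the local PostgreSQL instance."""
    def __init__(self, name, log):
        self.name = name
        self.log = log

    def set_synchronous_standby(self, standby):
        # In reality: update synchronous_standby_names and reload config.
        self.log.append(("postgres", standby))

class FakeDCS:
    """Stand-in for the distributed configuration store (etcd, ZooKeeper, ...)."""
    def __init__(self, log):
        self.log = log

    def write_sync_state(self, leader, sync_standby):
        self.log.append(("dcs", sync_standby))

def promote_to_sync(postgres, dcs, standby_name):
    # Strict ordering: PostgreSQL first, DCS second. If the master crashes
    # between the two steps, the DCS still names the *previous* synchronous
    # standby, which is safe: a standby never appears in the DCS unless it
    # really was synchronous, so failing over to it loses no transactions.
    postgres.set_synchronous_standby(standby_name)
    dcs.write_sync_state(leader=postgres.name, sync_standby=standby_name)

log = []
promote_to_sync(FakePostgres("postgres0", log), FakeDCS(log), "postgres1")
print(log)  # [('postgres', 'postgres1'), ('dcs', 'postgres1')]
```

Publishing in the reverse order would open a window where the DCS names a standby that PostgreSQL has not yet made synchronous; a failover in that window could lose committed transactions.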
15 lines
792 B
Gherkin
Feature: cascading replication
  We should check that patroni can do base backup and streaming from the replica

  Scenario: check a base backup and streaming replication from a replica
    Given I start postgres0
    And postgres0 is a leader after 10 seconds
    And I configure and start postgres1 with a tag clonefrom true
    And replication works from postgres0 to postgres1 after 20 seconds
    And I create label with "postgres0" in postgres0 data directory
    And I create label with "postgres1" in postgres1 data directory
    And "members/postgres1" key in DCS has state=running after 12 seconds
    And I configure and start postgres2 with a tag replicatefrom postgres1
    Then replication works from postgres0 to postgres2 after 30 seconds
    And there is a label with "postgres1" in postgres2 data directory