Allow defining labels that will be assigned to a Postgres instance pod while it is in the 'initializing new cluster', 'running custom bootstrap script', 'starting after custom bootstrap', or 'creating replica' state.
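A minimal sketch of how such labels might be configured; the `bootstrap_labels` key name under the `kubernetes` section is an assumption for illustration, not confirmed by this change:
```yaml
kubernetes:
  # hypothetical key name: labels applied while the pod is bootstrapping,
  # running a custom bootstrap script, or creating a replica
  bootstrap_labels:
    availability: unready
```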
If a logical replication slot is created with the failover => true option, the respective field is set to true in the `pg_replication_slots` view.
By not interacting with such slots we keep the logical failover slots feature fully functional in PG17.
The problem existed because the _build_retain_slots() method wrongly relied on members being present in DCS, while on failover the member key of the former leader expires at exactly the same time.
Additionally, run the on_role_change callback in post_recover() for a primary
that failed to start after a crash, to increase the chances that the callback is executed
even if the subsequent start as a replica fails.
---------
Co-authored-by: Alexander Kukushkin <cyberdemn@gmail.com>
Since `3.2.0` Patroni has been able to create physical replication slots on replica nodes just in case such a node at some moment becomes the primary.
There are two potential problems with having such slots:
1. They prevent recycling of WAL files.
2. They may affect vacuum on the primary if hot_standby_feedback is enabled.
The first class of issues is already addressed by periodically calling the pg_replication_slot_advance() function.
However, the second class of issues doesn't happen instantly, but only when the old primary is switched to a replica. In this case physical replication slots that were at some moment active will hold a NOT NULL value of `xmin`, which will be propagated to the primary via the hot_standby_feedback mechanism.
To address the second problem we detect physical replication slots that are not supposed to be active but have a NOT NULL `xmin`, and drop/recreate them.
Close #3146
Close #3153
Co-authored-by: Polina Bungina <27892524+hughcapet@users.noreply.github.com>
This commit is a breaking change:
1. `role` in DCS is written as "primary" instead of "master".
2. `role` in REST API responses is also written as "primary".
3. REST API no longer accepts role=master in requests (for example switchover/failover/restart endpoints).
4. `/metrics` REST API endpoint will no longer report `patroni_master`.
5. `patronictl` no longer accepts `--master` argument.
6. `no_master` option in declarative configuration of custom replica creation methods is no longer treated as a special option, please use `no_leader` instead.
7. `patroni_wale_restore` doesn't accept `--no_master` anymore.
8. `patroni_barman` doesn't accept `--role=master` anymore.
9. Callback scripts will be executed with role=primary instead of role=master.
10. On Kubernetes Patroni will by default set the role label to primary. If you want to keep the old behavior and avoid downtime or lengthy, complex migrations, you can configure `kubernetes.leader_label_value` and `kubernetes.standby_leader_label_value` to `master`, as sketched below.
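A minimal configuration sketch that keeps the old label values (both options are named in the item above; the values are up to the operator):
```yaml
kubernetes:
  # keep the old label values so existing selectors and tooling keep working
  leader_label_value: master
  standby_leader_label_value: master
```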
However, a few exceptions regarding master are still in place:
1. `GET /master` REST API endpoint will continue to work.
2. `master_start_timeout` and `master_stop_timeout` in global configuration are still accepted.
3. `master` tag is still preserved in Consul services in addition to `primary`.
Rationale for these exceptions: the DBA doesn't always fully control the infrastructure and can't always adjust the configuration.
To enable quorum commit:
```diff
$ patronictl.py edit-config
---
+++
@@ -5,3 +5,4 @@
use_pg_rewind: true
retry_timeout: 10
ttl: 30
+synchronous_mode: quorum
Apply these changes? [y/N]: y
Configuration changed
```
By default Patroni will use `ANY 1(list,of,standbys)` in `synchronous_standby_names`. That is, only one node out of the listed replicas will be used for the quorum.
If you want to increase the number of quorum nodes, you can do so with:
```diff
$ patronictl edit-config
---
+++
@@ -6,3 +6,4 @@
retry_timeout: 10
synchronous_mode: quorum
ttl: 30
+synchronous_node_count: 2
Apply these changes? [y/N]: y
Configuration changed
```
Good old `synchronous_mode: on` is still supported.
Close https://github.com/patroni/patroni/issues/664
Close https://github.com/zalando/patroni/pull/672
* Ensure that nofailover will always be used if both nofailover and
failover_priority tags are provided
* Call _validate_failover_tags from reload_local_configuration() as well
* Properly check values in _validate_failover_tags(): the nofailover value should be cast to boolean, as is done when it is accessed in other places
Prior to this commit, if a user wanted to add parameters to the custom bootstrap script call, they needed to configure Patroni like this:
```
bootstrap:
  method: custom_method_name
  custom_method_name:
    command: /path/to/my/custom_script --arg1=value1 --arg2=value2 ...
```
This commit extends that so we achieve behavior similar to what is seen when using `create_replica_methods`, i.e., we also allow the following syntax:
```
bootstrap:
  method: custom_method_name
  custom_method_name:
    command: /path/to/my/custom_script
    arg1: value1
    arg2: value2
```
All keys in the mapping that are not recognized by Patroni will be treated as additional named arguments to be passed down to the `command` call.
References: PAT-218.
The priority is configured with the `failover_priority` tag. Possible values range from `0` to infinity, where `0` means that the node will never become the leader, which is the same as the `nofailover` tag set to `true`. As a result, one should set only one of the `failover_priority` or `nofailover` tags in the configuration file.
The failover priority kicks in only when more than one node has the same receive/replay LSN and is ahead of the other nodes in the cluster. In this case the node with the higher value of `failover_priority` is preferred. If there is a node with a higher receive/replay LSN, it will become the new leader even if it has a lower value of `failover_priority` (except when the priority is set to 0).
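For illustration, a `tags` section in patroni.yaml assigning an intermediate priority (the value is arbitrary):
```yaml
tags:
  # 0 would mean "never promote" (equivalent to nofailover: true);
  # higher values only break ties between nodes with equal receive/replay LSN
  failover_priority: 2
```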
Close https://github.com/zalando/patroni/issues/2759
- Fixed issues with the has_permanent_slots() method. It didn't take into account the case of permanent physical slots for members, falsely concluding that there are no permanent slots.
- Write only LSNs of permanent slots to the status key (not LSNs of every slot that exists on the primary).
- Include pg_current_wal_flush_lsn() in the slots feedback, so that slots on standby nodes can be advanced
- Improved behave tests:
- Verify that permanent slots are properly created on standby nodes
- Verify that permanent slots are properly advanced, including DCS failsafe mode
- Verify that only permanent slots are written to the `/status`
- get rid of sleeps
- reduce retry_timeout
- avoid graceful Patroni shutdown while DCS is "paused"; just kill
Patroni and after that gracefully stop Postgres
- don't try to delete the Pod when Patroni is killed. If the K8s API is paused it takes ages
The run time on my laptop is reduced from 2m to 1m28s.
Create permanent physical replication slots on standby nodes and use the `pg_replication_slot_advance()` function to move them forward.
The `restart_lsn` is advanced based on values stored in the `/status` key by the primary node.
When a slot is created on a replica it could be ahead of the same slot on the primary, and therefore there is some period of time when it doesn't protect WAL files from being recycled.
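For reference, a permanent physical slot as it might be declared in the dynamic (DCS) configuration; the slot name is a placeholder:
```yaml
slots:
  # a permanent physical slot that, with this change, is also pre-created
  # and advanced on standby nodes
  my_physical_slot:
    type: physical
```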
When starting, check whether a node with the same name is registered in DCS and try to query its REST API.
If the REST API is accessible, exit with an error.
Close #1804
* Reduce flakiness of citus behave tests
- make a few attempts with timeout when checking registered nodes
- get rid of the artificial sleep
- allow check_registration() function to check secondaries
These changes are useful for quorum-based failover (#2668) and a future PR
that enhances Citus support by registering secondaries in `pg_dist_node`.
* Ignore D401 in flake8-docstrings
* Fix newly reported flake8 issues, ignore the old W503 rule
* rely on concatenation of adjacent strings
* Format behave scripts
* Reformat ha.py according to new rules
Co-authored-by: Alexander Kukushkin <cyberdemn@gmail.com>
We made the incorrect assumption that `citus_set_coordinator_host()` would trigger a `pg_dist_node` sync. Instead we should also use `citus_update_node()` and call `citus_set_coordinator_host()` only during bootstrap.
Adjust behave tests to verify that coordinator failover is visible on workers.
* bump version
* update release notes
* removed 2.7, 3.4, 3.5, and 3.6 from supported versions in setup.py
* switched GH actions back to ubuntu-latest, removed tests with 2.7 and 3.6, and added 3.11
* some little fixes in Citus documentation and behave tests
Keep as much backward compatibility as possible.
The following changes were made:
1. All internal checks are performed as `role in ('master', 'primary')`
2. All internal variables/functions/methods are renamed
3. `GET /metrics` endpoint returns `patroni_primary` in addition to `patroni_master`.
4. Logs are changed to use leader/primary/member/remote depending on the context
5. Unit tests use only role = 'primary' instead of 'master' to verify that point 1 works.
6. patronictl still supports old syntax, but also accepts `--leader` and `--primary`.
7. `master_(start|stop)_timeout` is automatically translated to `primary_(start|stop)_timeout` if the latter is not set.
8. updated the documentation and some examples
Future plan: in the next major release switch role name from `master` to `primary` and maybe drop `master` altogether.
The Kubernetes implementation will require more work and keep two labels in parallel. Label values should probably be configurable as described in https://github.com/zalando/patroni/issues/2495.
The Citus cluster (coordinator and workers) will be stored in DCS as a fleet of Patroni clusters logically grouped together:
```
/service/batman/
/service/batman/0/
/service/batman/0/initialize
/service/batman/0/leader
/service/batman/0/members/
/service/batman/0/members/m1
/service/batman/0/members/m2
/service/batman/
/service/batman/1/
/service/batman/1/initialize
/service/batman/1/leader
/service/batman/1/members/
/service/batman/1/members/m1
/service/batman/1/members/m2
...
```
Where 0 is the Citus group of the coordinator and 1, 2, etc. are worker groups.
Such a hierarchy allows reading the entire Citus cluster with a single call to DCS (except for ZooKeeper).
The get_cluster() method will read the entire Citus cluster on the coordinator because it needs to discover workers. For a worker cluster it will read only the subtree of its own group.
Besides that, we introduce a new method, get_citus_coordinator(), which will be used only by worker clusters.
Since there are no hierarchical structures on K8s, we will use the Citus group suffix on all objects that Patroni creates.
E.g.
```
batman-0-leader # the leader config map for the coordinator
batman-0-config # the config map holding initialize, config, and history "keys"
...
batman-1-leader # the leader config map for worker group 1
batman-1-config
...
```
Citus integration is enabled from patroni.yaml:
```yaml
citus:
  database: citus
  group: 0  # 0 is for coordinator, 1, 2, etc are for workers
```
If enabled, Patroni will create the database and the citus extension in it, and will INSERT INTO `pg_dist_authinfo` the information required for Citus nodes to communicate with each other, i.e. 'password', 'sslcert', and 'sslkey' for the superuser if they are defined in the Patroni configuration file.
When the new Citus coordinator/worker is bootstrapped, Patroni adds `synchronous_mode: on` to the `bootstrap.dcs` section.
Besides that, Patroni takes over management of some Postgres GUCs (see the sketch after this list):
- `shared_preload_libraries` - Patroni ensures that "citus" is added in the first place
- `max_prepared_transactions` - if not set or set to 0, Patroni changes the value to `max_connections*2`
- `wal_level` - automatically set to logical. It is used by Citus to move/split shards. Under the hood Citus creates/removes replication slots, and they are automatically added by Patroni to the `ignore_slots` configuration to avoid accidental removal.
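An illustrative sketch of the effective parameters on a Citus-enabled node, assuming the default max_connections of 100 (the exact values depend on the local configuration):
```yaml
postgresql:
  parameters:
    shared_preload_libraries: citus,pg_stat_statements  # "citus" forced to the first place
    max_prepared_transactions: 200                       # max_connections * 2 when unset or 0
    wal_level: logical                                   # required by Citus to move/split shards
```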
The coordinator primary actively discovers worker primary nodes and registers/updates them in the `pg_dist_node` table using
citus_add_node() and citus_update_node() functions.
Patroni running on the coordinator provides the new REST API endpoint: `POST /citus`. It is used by workers to facilitate controlled switchovers and restarts of worker primaries.
When the worker primary needs to shut down Postgres because of a restart or switchover, it calls the `POST /citus` endpoint on the coordinator, and Patroni on the coordinator starts a transaction and calls `citus_update_node(nodeid, 'host-demoted', port)` in order to pause client connections that work with the given worker.
Once the new leader is elected or Postgres is started back, another call to the `POST /citus` endpoint is performed, which does another `citus_update_node()` call with the actual hostname and port and commits the transaction. After the transaction is committed, the coordinator reestablishes connections to the worker node and client connections are unblocked.
If clients don't run long transactions, the operation finishes without client-visible errors, only with a short latency spike.
All operations on `pg_dist_node` are serialized by Patroni on the coordinator. This allows more control and makes it possible to ROLLBACK a transaction in progress if its lifetime exceeds a certain threshold and there are other worker nodes that should be updated.
If enabled, it will allow Patroni to cope with DCS outages.
In case of a DCS outage the leader tries to call all remaining members of the cluster via the REST API, and if all of them respond with success the leader will not be demoted.
The failsafe_mode can be enabled by running
```sh
patronictl edit-config -s failsafe_mode=true
```
or by calling the `/config` REST API endpoint.
Co-authored-by: Polina Bungina <bungina@gmail.com>
They frequently fail because sometimes replicas are a bit slow to realize that they are synchronous. Instead of introducing more sleeps we will poll for the required HTTP status code with some timeout.
If the openssl binary is available, use it to generate a self-signed certificate and use that certificate to protect the Patroni REST API
(`verify_client: required`).
If Postgres is compiled with SSL support, enable it in the configuration and configure pg_hba.conf to check client certificates (`verify-ca`) in addition to passwords. Also configure the superuser/replication/rewind users to use client certificates and to verify the server certificate (`verify-ca`).
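A sketch of the resulting REST API settings in patroni.yaml (file paths are placeholders):
```yaml
restapi:
  certfile: /etc/patroni/rest-api.pem  # self-signed certificate generated with openssl
  keyfile: /etc/patroni/rest-api.key
  verify_client: required              # clients must present a valid certificate
```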
There is a known [vector of attack](https://pganalyze.com/blog/5mins-postgres-security-patch-releases-pgspot-pghostile) based on creating functions and/or operators in the public schema with the same name and signature as corresponding objects in `pg_catalog`.
Since Patroni relies heavily on superuser connections we want to mitigate this by enforcing `search_path=pg_catalog` for all connections created by Patroni (except replication connections). This is achieved by introducing a new function that wraps psycopg.connect() and appends ` -c search_path=pg_catalog` to the `options` parameter.
In addition to that, we set connection.autocommit to True before returning it.
Fix an issue of Patroni configuring the watchdog with defaults when bootstrapping a new cluster rather than taking the configuration used to bootstrap the DCS.
Also log changes to the watchdog configuration based on the calculated timeout value.
Close #2470
Windows doesn't support `SIGTERM`, but our behave tests in the majority of cases rely on graceful Patroni shutdown.
In order to emulate this behaviour we introduced a new REST API endpoint, `POST /sigterm`. The endpoint works only on Windows and only when the `BEHAVE_DEBUG` environment variable is set.
Besides that, some minor adjustments in behave tests were done, mainly related to backslash/slash handling.
In addition to that, improve test coverage on Windows by properly mocking access to the filesystem and avoiding calls to
`subprocess.call()`. Specifically, symlink creation on Windows requires Admin privileges and there is no `true.exe`.
Effectively, this PR consists of a few changes:
1. The easy part:
If permanent logical slots are defined in the global configuration, Patroni on the primary will not only create them, but also periodically update DCS with the current values of `confirmed_flush_lsn` for all these slots (see the configuration sketch after this list).
In order to reduce the number of interactions with DCS, a new `/status` key was introduced. It contains a JSON object with `optime` and `slots` keys. For backward compatibility `/optime/leader` will still be updated if there are members running old Patroni versions in the cluster.
2. The tricky part:
On replicas that are eligible for a failover, Patroni creates the logical replication slot by copying the slot file from the primary and restarting the replica. In order to copy the slot file Patroni opens a connection to the primary with `rewind` or `superuser` credentials and calls the `pg_read_binary_file()` function.
When the logical slot already exists on the replica, Patroni periodically calls the `pg_replication_slot_advance()` function, which moves the slot forward.
3. Additional requirements:
In order to ensure that the primary doesn't clean up tuples from pg_catalog that are required for logical decoding, Patroni enables `hot_standby_feedback` on replicas with logical slots and on cascading replicas if they are used for streaming by replicas with logical slots.
4. When logical slots are copied from the primary to the replica, there is a timeframe when it might not be safe to use them after promotion. Right now there is no protection against promoting such a replica, but Patroni will show a warning with the names of the slots that might not be safe to use.
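For illustration, a permanent logical slot as referenced in part 1 above, as it might be declared in the global configuration (slot name, database, and plugin are placeholders):
```yaml
slots:
  # a permanent logical slot: created on the primary, copied to and
  # periodically advanced on failover-eligible replicas
  my_logical_slot:
    type: logical
    database: mydb
    plugin: pgoutput
```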
Compatibility.
The `pg_replication_slot_advance()` function is only available starting from PostgreSQL 11. For older Postgres versions Patroni will refuse to create the logical slot on the primary.
The old "permanent slots" feature, which creates logical slots right after promotion and before allowing connections, was removed.
Close: https://github.com/zalando/patroni/issues/1749
This commit makes it possible to configure the maximum lag (`maximum_lag_on_syncnode`) after which Patroni will "demote" the node from synchronous and replace it with another node.
The previous implementation always tried to stick to the same synchronous nodes (even if they were not the optimal ones).
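A minimal sketch of the dynamic configuration, assuming the threshold is expressed in bytes like `maximum_lag_on_failover` (the value is illustrative):
```yaml
synchronous_mode: on
# replace a synchronous candidate whose replication lag exceeds ~16MB
maximum_lag_on_syncnode: 16777216
```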
When there is no config key in DCS, Patroni shouldn't try to access ignore_slots, otherwise an exception is raised.
In addition to that, implement missing unit tests and fix linting issues in behave tests.
There are sometimes good reasons to manage replication slots externally
to Patroni. For example, a consumer may wish to manage its own slots (so
that it can more easily track when a failover has occurred and whether
it is ahead of or behind the WAL position on the new primary).
Additionally tooling like pglogical actually replicates slots to all
replicas so that the current position can be maintained on failover
targets (this also aids consumers by supplying primitives so that they
can verify data hasn't been lost or a split brain occurred relative to
the physical cluster).
To support these use cases this new feature allows configuring Patroni
to entirely ignore sets of slots specified by any subset of name,
database, slot type, and plugin.
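A configuration sketch of the described behavior; slot names and attributes below are placeholders, and each entry may specify any subset of name, database, plugin, and type:
```yaml
ignore_slots:
  # ignore one specific externally managed logical slot
  - name: pgl_slot
    database: orders
    plugin: pglogical
    type: logical
  # ignore any slot with this name, regardless of type, database, or plugin
  - name: externally_managed_slot
```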
The new parameter `synchronous_node_count` is used by Patroni to manage the number of synchronous standby databases. It is set to 1 by default. It has no effect when synchronous_mode is set to off. When enabled, Patroni manages the precise number of synchronous standby databases based on the synchronous_node_count parameter and adjusts the state in DCS and `synchronous_standby_names` as members join and leave.
This functionality can be further extended to support priority-based (FIRST n) and quorum-based (ANY n) synchronous replication in the future.
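For example, a dynamic configuration sketch asking Patroni to maintain two synchronous standbys (both parameters are named in the paragraph above):
```yaml
synchronous_mode: on
# keep exactly two synchronous standbys listed in synchronous_standby_names
synchronous_node_count: 2
```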
* Implement proper tests for `multiprocessing.set_start_method()`
* Exclude some watchdog code from coverage (it is used only for behave tests)
* properly use os.path.join for windows compatibility
* import DCS modules in `features/environment.py` on demand. It allows running behave tests against a chosen DCS without installing all dependencies.
* remove some unused behave code
* fix some minor issues in the dcs.kubernetes module