This commit is a breaking change:
1. `role` in DCS is written as "primary" instead of "master".
2. `role` in REST API responses is also written as "primary".
3. REST API no longer accepts role=master in requests (for example switchover/failover/restart endpoints).
4. `/metrics` REST API endpoint will no longer report `patroni_master`.
5. `patronictl` no longer accepts `--master` argument.
6. The `no_master` option in the declarative configuration of custom replica creation methods is no longer treated as a special option; please use `no_leader` instead.
7. `patroni_wale_restore` doesn't accept `--no_master` anymore.
8. `patroni_barman` doesn't accept `--role=master` anymore.
9. Callback scripts will be executed with `role=primary` instead of `role=master`.
10. On Kubernetes Patroni will by default set the role label to `primary`. If you want to keep the old behavior and avoid downtime or lengthy, complex migrations, you can configure `kubernetes.leader_label_value` and `kubernetes.standby_leader_label_value` to `master`, as shown below.
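A minimal sketch of the corresponding configuration (keeping the old label value):
```yaml
kubernetes:
  leader_label_value: master
  standby_leader_label_value: master
```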
However, a few exceptions regarding `master` are still in place:
1. `GET /master` REST API endpoint will continue to work.
2. `master_start_timeout` and `master_stop_timeout` in global configuration are still accepted.
3. `master` tag is still preserved in Consul services in addition to `primary`.
Rationale for these exceptions: DBAs don't always fully control the infrastructure and may not be able to adjust the configuration.
There are two cases when libpq may search for "localhost":
1. When `host` is not specified in the connection string and libpq uses the default socket directory path.
2. When the specified `host` matches the default socket directory path.
Since we don't know the value of the default socket directory path and effectively can't detect case 2, the best strategy to mitigate the problem is to add "localhost" whenever the detected `host` is a unix socket directory (i.e., it starts with the '/' character).
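A rough sketch of that mitigation (hypothetical helper, not Patroni's actual code):
```python
def add_localhost_if_socket_dir(hosts):
    """Append 'localhost' if any host looks like a unix socket directory."""
    if any(host.startswith('/') for host in hosts):
        return hosts + ['localhost']
    return hosts


# e.g. ['/var/run/postgresql'] becomes ['/var/run/postgresql', 'localhost']
```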
Close #3134
- don't register secondaries with the `noloadbalance` tag (see the example below).
- mention in the documentation that secondaries are also registered in `pg_dist_node`.
- update the docker/kubernetes README files to include examples with secondaries being registered in `pg_dist_node`.
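For illustration, a member excluded from registration would carry the standard Patroni tag in its configuration:
```yaml
tags:
  noloadbalance: true  # this secondary will not be registered in pg_dist_node
```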
A current problem of Patroni that strikes many people is that it removes the replication slot for a member whose key has expired from DCS. As a result, when the replica comes back from scheduled maintenance, the required WAL segments may already be absent, and it can't continue streaming without pulling files from the archive.
With PostgreSQL 16 and newer we get another problem: a logical slot on a standby node can be invalidated if the physical replication slot on the primary was removed (and `pg_catalog` vacuumed).
The most problematic environment is Kubernetes, where the slot is removed almost instantly when the member Pod is deleted.
So far, one of the recommended solutions was to configure permanent physical slots with names that match member names, to avoid removal of replication slots. It works, but depending on the environment it might be non-trivial to implement (for example, when members may change their names).
This PR implements support for the `member_slots_ttl` global configuration parameter, which controls how long member replication slots are kept when the member key is absent. The default value is `30min`.
The feature is supported only on PostgreSQL 11 and newer, because we want to retain slots not only on the leader node but on all nodes that could potentially become the new leader, and those slots must be moved forward using the `pg_replication_slot_advance()` function.
One can disable the feature and get back to the old behavior by setting `member_slots_ttl` to `0`.
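For example, disabling it via `patronictl edit-config` (the hunk context is illustrative):
```diff
$ patronictl edit-config
---
+++
@@ -5,3 +5,4 @@
 retry_timeout: 10
 ttl: 30
+member_slots_ttl: 0
Apply these changes? [y/N]: y
Configuration changed
```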
Because `postgres --describe-config` doesn't show GUCs defined with `GUC_NO_SHOW_ALL | GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE`, Patroni always ignored some GUCs that a user might want to configure with non-default values.
- remove the `postgres --describe-config` validation.
- define minor versions for availability bounds of some back-patched GUCs.
1. All nodes with role == 'replica' and state == 'running' are registered. If the state isn't running, the node is removed.
2. In case of failover/switchover we always update the primary first.
3. When switching to a registered secondary we call `citus_update_node()` three times: rename the primary to primary-demoted, put the primary name into the promoted secondary's row, and put the promoted secondary's name into the primary row (sketched below).
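A sketch of that three-step swap, assuming hypothetical `pg_dist_node` rows with nodeid 1 (the primary row, `pg1`) and nodeid 2 (the secondary row, `pg2`, which is being promoted):
```sql
SELECT citus_update_node(1, 'pg1-demoted', 5432); -- 1: move the primary row's name aside
SELECT citus_update_node(2, 'pg1', 5432);         -- 2: old primary name into the promoted secondary's row
SELECT citus_update_node(1, 'pg2', 5432);         -- 3: promoted secondary's name into the primary row
```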
State transitions are produced by the `transition()` method. First of all, the method makes sure that the actual primary is registered in the metadata. If the primary didn't change for a given group, the method registers new secondaries and removes secondaries that are gone. It prefers to use the `citus_update_node()` UDF to replace gone secondaries with newly added ones.
Communication protocol between primary nodes remains the same and all old features work without any changes.
To enable quorum commit:
```diff
$ patronictl.py edit-config
---
+++
@@ -5,3 +5,4 @@
use_pg_rewind: true
retry_timeout: 10
ttl: 30
+synchronous_mode: quorum
Apply these changes? [y/N]: y
Configuration changed
```
By default Patroni will use `ANY 1(list,of,standbys)` in `synchronous_standby_names`. That is, only one node out of the listed replicas will be used for quorum.
If you want to increase the number of quorum nodes, you can do so with:
```diff
$ patronictl edit-config
---
+++
@@ -6,3 +6,4 @@
retry_timeout: 10
synchronous_mode: quorum
ttl: 30
+synchronous_node_count: 2
Apply these changes? [y/N]: y
Configuration changed
```
Good old `synchronous_mode: on` is still supported.
Close https://github.com/patroni/patroni/issues/664
Close https://github.com/zalando/patroni/pull/672
There was one oversight in #2781: to influence external tools that Patroni may execute, we set the global `umask` value based on permissions of the `$PGDATA` directory. As a result, it also influenced permissions of log files created by Patroni.
To address the problem we implement two measures:
1. Make `log.mode` configurable (see the example below).
2. If the value is not set, calculate permissions from the original value of the `umask` setting.
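For example (a hypothetical snippet; the value is illustrative):
```yaml
log:
  mode: 0600  # permissions for log files created by Patroni
```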
The last one is only available since psycopg 2.8, while the first one since 2.0.8.
For backward compatibility, monkeypatch the connection object returned by psycopg3.
Close https://github.com/patroni/patroni/issues/3116
It could happen that there is "something" streaming from the current primary node with an `application_name` that matches the name of the current primary, for instance due to a faulty configuration. When processing `pg_stat_replication` we only checked that the `application_name` matches the name of one of the member nodes, but we forgot to exclude our own name.
As a result, there were the following side effects:
1. The current primary could be declared as a synchronous node.
2. As a result of [1], it wasn't possible to do a switchover.
3. During shutdown the current primary was waiting for itself to be released from the synchronous nodes.
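A minimal sketch of the corrected check described above (hypothetical names, not Patroni's actual code):
```python
def sync_candidates(pg_stat_replication, member_names, my_name):
    """Rows qualify only if application_name matches a member name and is not our own."""
    return [
        row for row in pg_stat_replication
        if row['application_name'] in member_names
        and row['application_name'] != my_name  # the previously missing exclusion
    ]
```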
Close #3111
Pass the `Cluster` object instead of `Leader`.
It will help to implement a new feature, "Configurable retention of replication slots for cluster members".
Besides that, fix a couple of issues with docstrings.
Let's consider the following replication setup:
```
primary->standby1->standby2(replicatefrom: standby1)
```
In this case the `primary` will not create a physical replication slot for `standby2`, because it is streaming from `standby1`.
Things will look differently if we have the following dynamic configuration:
```yaml
slots:
primary:
type: physical
standby1:
type: physical
standby2:
type: physical
```
In this case the `primary` will also have a `standby2` physical replication slot, which must periodically be advanced. So far this worked by taking the value of `xlog_location` from the `/members/standby2` key in DCS.
But when DCS is down and failsafe mode is activated, the `standby2` physical slot on the `primary` will not be moved, because there was no way to get the latest value of `xlog_location`.
This PR addresses the problem by making replica nodes return their `xlog_location` as the `lsn` header in the response to the `POST /failsafe` REST API request. The current primary will use these values to advance replication slots for nodes with the `replicatefrom` tag.
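Schematically (the LSN value is illustrative):
```
POST /failsafe            <- sent by the current primary while DCS is unreachable
200 OK
lsn: 0/4000060            <- the replica's current xlog_location
```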
This constant was imported in `postgresql/__init__.py` and used in the `can_advance_slots` property.
But after the refactoring in #2958 we pass around a reference to `Postgresql` instead of `major_version`, and therefore we can just rely on the `can_advance_slots` property instead of reimplementing its logic in other places.
The `Status` class was introduced in #2853, but we kept old properties in the `Cluster` object in order to have fewer changes in the rest of the code.
This PR finishes that refactoring.
The following adjustments were made:
- Introduced the `Status.is_empty()` method, which is used in `Cluster.is_empty()` instead of checking actual values, to simplify the introduction of further fields to the `Status` object.
- Removed the `Cluster.last_lsn` property.
- Changed the `Cluster.slots` property to always return a dict and perform sanity checks on values.
Besides that, this PR addresses a couple of problems:
- the `AbstractDCS.get_cluster()` method was accessing some properties without holding a lock on `_cluster_thread_lock`.
- the `Cluster.__permanent_slots` property was setting 'lsn' from all cluster members, while it should do that only for members with the `replicatefrom` tag.
The `SlotsAdvanceThread` asynchronously calls `pg_replication_slot_advance()` and provides feedback about logical replication slots that must be reinitialized by copying from the primary. That is, the parent thread learns about slots to be copied only when scheduling the next `pg_replication_slot_advance()` call.
As a result, it was possible for a logical slot to be copied (with a PostgreSQL restart) more than once.
To improve this we implement the following measures:
1. Do not schedule a slot sync if the slot is in the list to be copied.
2. Remove to-be-copied slots from the `self._scheduled` structure.
3. Clean the state of `SlotsAdvanceThread` when slot files are copied.
Since PG16, logical replication slots on a standby can be invalidated due to the horizon. In this case, `pg_replication_slot_advance()` fails with `ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE`. We should force a slot copy (i.e., recreation) of such slots.
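A rough sketch of that handling (hypothetical helper names, not Patroni's actual code):
```python
import psycopg2.errors


def advance_slot(cursor, name, lsn, force_copy):
    try:
        cursor.execute("SELECT pg_replication_slot_advance(%s, %s)", (name, lsn))
    except psycopg2.errors.ObjectNotInPrerequisiteState:
        # PG16+: the slot was invalidated on the standby; schedule recreation
        # by copying the slot from the primary.
        force_copy(name)
```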
Since `synchronous_mode` was introduced to Patroni, plain Postgres synchronous replication has no longer worked.
The issue occurs because `process_sync_replication` always resets the value of `synchronous_standby_names` in Postgres when `synchronous_mode` is disabled in Patroni.
This commit fixes that issue by applying the user-configured value of `synchronous_standby_names`, if one is set, when `synchronous_mode` is disabled.
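For example, a user-provided value (node names are illustrative) that is now left intact when `synchronous_mode` is off:
```yaml
synchronous_mode: false
postgresql:
  parameters:
    synchronous_standby_names: 'ANY 1 (node2, node3)'
```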
Closes #3093
References: PAT-254.
* Update release notes
* Bump version
* Bump pyright version and solve reported issues
---------
Co-authored-by: Alexander Kukushkin <cyberdemn@gmail.com>
The `get_address()` function was called when `config_generator.py` was loaded, because it was required to initialize the `_HOSTNAME` and `_IP` properties of `AbstractConfigGenerator`; in some cases this made unit tests very slow.
`synchronous_standby_names` and synchronous replication only work on a real primary node; in the case of cascading replication they are simply ignored by Postgres.
This fact was already addressed by `global_config.is_synchronous_mode`, but when the `/sync` key in DCS is not empty in a standby cluster, `patronictl list` and `GET /cluster` were falsely reporting some nodes as synchronous because this check was missing.
Close https://github.com/zalando/patroni/issues/3078
- updated list of GUCs
- updated regex for filtering backend processes by name
- `primary_conninfo` will contain `dbname` parameter
The last one is required for synchronization of logical replication slots by the slotsync worker and doesn't create problems on older versions.
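For illustration, a resulting `primary_conninfo` would now also carry `dbname` (all values hypothetical):
```
primary_conninfo = 'host=10.0.0.1 port=5432 user=replicator application_name=node2 dbname=postgres'
```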
Besides that:
1. fix a problem with the `is_physical_slot()` method; it was returning false positives for logical slots.
2. fix a small issue with the `replicatefrom` docs.
Close https://github.com/zalando/patroni/issues/3068
Etcd keeps old revisions unless instructed to delete them. If we don't delete old revisions, etcd memory usage will keep growing forever due to keepalive updates. Since Patroni does not really need to roll back to older revisions, we can safely delete them.
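For reference, the equivalent manual operation (the revision number is hypothetical):
```
$ etcdctl compact 123456
```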
- monkey patch `jsonlogger.RESERVED_ATTRS` to hide the new attribute in `LogRecord`
- "silence" the warning about `datetime.datetime.utcnow()`
- run some tests with Python 3.12
- bump actions versions to silence complaints about the Node version
- fix the PATH to Postgres binaries on macOS