We should ignore the former leader with a higher priority when it reports the same LSN as the current node.
This bug could be a contributing factor to the issues described in #3295.
In addition to that, mock the socket.getaddrinfo() call in test_api.py to avoid hitting DNS servers.
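Below is a minimal sketch of the intended comparison during a leader race; the `Candidate` tuple, its fields, and `loses_to()` are illustrative names, not Patroni's actual API:
```python
from typing import NamedTuple


class Candidate(NamedTuple):
    name: str
    received_lsn: int
    failover_priority: int
    is_former_leader: bool


def loses_to(me: Candidate, other: Candidate) -> bool:
    """Return True if *other* should win the leader race over *me*."""
    if other.received_lsn > me.received_lsn:
        return True
    if other.received_lsn == me.received_lsn:
        # the fix: a former leader reporting the same LSN must not win
        # merely because of a higher failover priority
        if other.is_former_leader:
            return False
        return other.failover_priority > me.failover_priority
    return False


me = Candidate('postgresql1', 100, 1, False)
old_leader = Candidate('postgresql0', 100, 2, True)
print(loses_to(me, old_leader))  # False: ignore the former leader at the same LSN
```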
1. When evaluating whether there are healthy nodes for a leader race before demoting, we need to take quorum requirements into account. Without that, the former leader may end up in recovery surrounded by asynchronous nodes.
2. QuorumStateResolver wasn't correctly handling the case when a replica node quickly joined and disconnected, which resulted in the following errors:
```
File "/home/akukushkin/git/patroni/patroni/quorum.py", line 427, in _generate_transitions
yield from self.__remove_gone_nodes()
File "/home/akukushkin/git/patroni/patroni/quorum.py", line 327, in __remove_gone_nodes
yield from self.sync_update(numsync, sync)
File "/home/akukushkin/git/patroni/patroni/quorum.py", line 227, in sync_update
raise QuorumError(f'Sync {numsync} > N of ({sync})')
patroni.quorum.QuorumError: Sync 2 > N of ({'postgresql2'})
2025-02-14 10:18:07,058 INFO: Unexpected exception raised, please report it as a BUG
File "/home/akukushkin/git/patroni/patroni/quorum.py", line 246, in __iter__
transitions = list(self._generate_transitions())
File "/home/akukushkin/git/patroni/patroni/quorum.py", line 423, in _generate_transitions
yield from self.__handle_non_steady_cases()
File "/home/akukushkin/git/patroni/patroni/quorum.py", line 281, in __handle_non_steady_cases
yield from self.quorum_update(len(voters) - self.numsync, voters)
File "/home/akukushkin/git/patroni/patroni/quorum.py", line 184, in quorum_update
raise QuorumError(f'Quorum {quorum} < 0 of ({voters})')
patroni.quorum.QuorumError: Quorum -1 < 0 of ({'postgresql1'})
2025-02-18 15:50:48,243 INFO: Unexpected exception raised, please report it as a BUG
```
Allow defining labels that will be assigned to a Postgres instance pod while it is in the 'initializing new cluster', 'running custom bootstrap script', 'starting after custom bootstrap', or 'creating replica' state.
The first one is available starting from PostgreSQL v13 and contains the
real write LSN. We prefer it over the value returned by
pg_last_wal_receive_lsn(), which is in fact the flush LSN.
The second one is available starting from PostgreSQL v9.6 and points to the
WAL flush position on the source host. In the case of the primary it allows a
better calculation of the replay lag, because values stored in the DCS are
updated only every loop_wait seconds.
Consider the following situation: there is a permanent logical slot, and both the primary and the replica are temporarily down.
When Patroni is started on the former primary, it starts Postgres in standby mode, which leads to removal of the physical replication slot for the replica because it has xmin set.
We should postpone removal of such physical slots (see the sketch below):
- on a replica, until there is a leader in the cluster
- on the primary, until Postgres is promoted
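A minimal sketch of the postponement rules listed above; the function and its arguments are illustrative, not Patroni's actual code:
```python
def may_drop_physical_slot(is_primary: bool, promoted: bool, cluster_has_leader: bool) -> bool:
    """Decide whether it is already safe to drop such a physical slot."""
    if is_primary:
        # on the (former) primary: wait until Postgres is promoted
        return promoted
    # on a replica: wait until there is a leader in the cluster
    return cluster_has_leader
```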
- fix unit tests (logging now uses time.time_ns() instead of time.time())
- update setup.py
- update tox.ini
- enable unit and behave tests with Python 3.13
Close https://github.com/patroni/patroni/issues/3243
Test whether the config (file) parsed with yaml_load() contains a valid Mapping
object; otherwise Patroni throws an explicit exception. It also makes
Patroni's output more explicit when such an "invalid" configuration is used.
```console
$ touch /tmp/patroni.yaml
$ patroni --validate-config /tmp/patroni.yaml
/tmp/patroni.yaml does not contain a dict
invalid config file /tmp/patroni.yaml
```
reportUnnecessaryIsInstance is explicitly ignored since we can't
determine what yaml_safeload can return from a YAML config (list,
dict, ...).
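A minimal sketch of the validation idea, assuming plain yaml.safe_load() and a generic exception (the real code uses Patroni's own loading helpers and error types):
```python
from collections.abc import Mapping

import yaml


def load_config(path: str) -> dict:
    with open(path) as f:
        config = yaml.safe_load(f)
    # safe_load() may return None, a list, a scalar, ... - only a Mapping is valid
    if not isinstance(config, Mapping):
        raise ValueError(f'{path} does not contain a dict')
    return dict(config)
```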
* Compatibility with python-json-logger>=3.1
After the refactoring the old API still works, but it produces warnings
and pyright also fails.
Besides that, improve coverage of watchdog/base.py and ctl.py.
* Stick to ubuntu 22.04
* Please pyright
1. Implemented compatibility.
2. Constrained the upper version in requirements.txt to avoid future failures.
3. Set up an additional pipeline to check with the latest ydiff.
Close #3209 Close #3212 Close #3218
Additionally, run the on_role_change callback in post_recover() for a primary
that failed to start after a crash, to increase the chances that the callback is executed
even if the subsequent start as a replica fails.
---------
Co-authored-by: Alexander Kukushkin <cyberdemn@gmail.com>
python-consul has been unmaintained for a long time and py-consul is its official replacement.
However, we still keep backward compatibility with python-consul.
Close: #3189
There are cases when we may send the same PATCH request more than once to the K8s API server, and it could happen that the first request actually updated the target successfully but we gave up while waiting for the response. The second PATCH request in this case will fail due to a resource_version mismatch.
So far our strategy for the update_leader() method was to re-read the object and repeat the request with the new resource_version. However, we can avoid the update by comparing the annotations of the re-read object with the annotations we wanted to set.
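A minimal sketch of that comparison; `read_leader_object()`, `patch_leader_object()`, and `wanted_annotations` are hypothetical names used only for illustration:
```python
def retry_update_leader(api, wanted_annotations: dict) -> bool:
    # re-read the object to get the current state and resource_version
    obj = api.read_leader_object()
    current = obj.metadata.annotations or {}
    if all(current.get(k) == v for k, v in wanted_annotations.items()):
        # the first PATCH succeeded even though we never saw the response;
        # nothing left to do, no need to repeat the request
        return True
    return api.patch_leader_object(obj.metadata.resource_version, wanted_annotations)
```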
Since `3.2.0` Patroni is able to create physical replication slots on replica nodes, just in case such a node at some moment becomes the primary.
There are two potential problems with having such slots:
1. They prevent recycling of WAL files.
2. They may affect vacuum on the primary if hot_standby_feedback is enabled.
The first class of issues is already addressed by periodically calling the pg_replication_slot_advance() function.
However, the second class of issues doesn't happen instantly, but only after the old primary has switched to a replica. In this case physical replication slots that were active at some moment will hold a NOT NULL value of `xmin`, which will be propagated to the primary via the hot_standby_feedback mechanism.
To address the second problem we will detect a physical replication slot that is not supposed to be active but has a NOT NULL `xmin`, and drop/recreate it.
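A minimal sketch of the detection query; `cursor` is any DB-API cursor and `expected_inactive` is a hypothetical set of slot names that should not be active on this node:
```python
def slots_to_recreate(cursor, expected_inactive: set) -> list:
    # physical slots that should be inactive here but still hold xmin
    cursor.execute(
        "SELECT slot_name FROM pg_catalog.pg_replication_slots"
        " WHERE slot_type = 'physical' AND NOT active AND xmin IS NOT NULL"
    )
    return [name for name, in cursor.fetchall() if name in expected_inactive]
```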
Close #3146 Close #3153
Co-authored-by: Polina Bungina <27892524+hughcapet@users.noreply.github.com>
This commit is a breaking change:
1. `role` in DCS is written as "primary" instead of "master".
2. `role` in REST API responses is also written as "primary".
3. REST API no longer accepts role=master in requests (for example switchover/failover/restart endpoints).
4. `/metrics` REST API endpoint will no longer report `patroni_master`.
5. `patronictl` no longer accepts `--master` argument.
6. `no_master` option in declarative configuration of custom replica creation methods is no longer treated as a special option, please use `no_leader` instead.
7. `patroni_wale_restore` doesn't accept `--no_master` anymore.
8. `patroni_barman` doesn't accept `--role=master` anymore.
9. Callback scripts will be executed with role=primary instead of role=master.
10. On Kubernetes Patroni will by default set the role label to primary. If you want to keep the old behavior and avoid downtime or lengthy complex migrations, you can set `kubernetes.leader_label_value` and `kubernetes.standby_leader_label_value` to `master`.
However, a few exceptions regarding master are still in place:
1. `GET /master` REST API endpoint will continue to work.
2. `master_start_timeout` and `master_stop_timeout` in global configuration are still accepted.
3. `master` tag is still preserved in Consul services in addition to `primary`.
Rationale for these exceptions: the DBA doesn't always fully control the infrastructure and can't always adjust the configuration.
There are two cases when libpq may search for "localhost":
1. When the host in the connection string is not specified and libpq is using the default socket directory path.
2. When the specified host matches the default socket directory path.
Since we don't know the value of the default socket directory path and effectively can't detect case 2, the best strategy to mitigate the problem is to add "localhost" whenever we detect that a "host" is a unix socket directory (i.e., it starts with the '/' character).
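A minimal sketch of that mitigation, assuming `hosts` is the list of host entries we build; the function name is illustrative:
```python
def maybe_add_localhost(hosts: list) -> list:
    result = list(hosts)
    # a host that is a unix socket directory may make libpq look up "localhost"
    if any(host.startswith('/') for host in hosts) and 'localhost' not in result:
        result.append('localhost')
    return result
```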
Close #3134
A long-standing problem of Patroni that strikes many people is that it removes the replication slot for a member whose key has expired from DCS. As a result, when the replica comes back from scheduled maintenance, WAL segments may already be absent and it can't continue streaming without pulling files from the archive.
With PostgreSQL 16 and newer we get another problem: a logical slot on a standby node could be invalidated if the physical replication slot on the primary was removed (and `pg_catalog` vacuumed).
The most problematic environment is Kubernetes, where the slot is removed nearly instantly when the member Pod is deleted.
So far, one of the recommended solutions was to configure permanent physical slots with names that match member names to avoid removal of replication slots. It works, but depending on the environment it might be non-trivial to implement (for example, when members may change their names).
This PR implements support for the `member_slots_ttl` global configuration parameter, which controls for how long member replication slots should be kept when the member key is absent. The default value is `30min`.
The feature is supported only starting from PostgreSQL 11 and newer, because we want to retain slots not only on the leader node but on all nodes that could potentially become the new leader, and those slots have to be moved forward using the `pg_replication_slot_advance()` function.
One can disable the feature and get back to the old behavior by setting `member_slots_ttl` to `0`.
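A minimal sketch of the retention decision; `member_absent_since` (when the member key disappeared from DCS) is a hypothetical input and the TTL is expressed in seconds for simplicity:
```python
import time


def should_keep_member_slot(member_absent_since: float, member_slots_ttl: int) -> bool:
    # member_slots_ttl == 0 disables retention and restores the old behavior
    if member_slots_ttl <= 0:
        return False
    return time.time() - member_absent_since < member_slots_ttl
```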
Because postgres --describe-config does not show GUCs defined with GUC_NO_SHOW_ALL | GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE, Patroni was always ignoring some GUCs that a user might want to configure with non-default values.
- remove postgres --describe-config validation.
- define minor versions for availability bounds of some back-patched GUCs
1. All nodes with role == 'replica' and state == 'running' are registered. If the state isn't running, the node is removed.
2. In case of failover/switchover we always update the primary first.
3. When switching to a registered secondary we call citus_update_node() three times: rename the primary to primary-demoted, put the primary name into the promoted secondary's row, and put the promoted secondary's name into the primary's row (see the sketch after this paragraph).
State transitions are produced by the transition() method. First of all, the method makes sure that the actual primary is registered in the metadata. If the primary didn't change for a given group, the method registers new secondaries and removes secondaries that are gone. It prefers to use the citus_update_node() UDF to replace gone secondaries with newly added ones.
The communication protocol between primary nodes remains the same and all old features work without any changes.
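A minimal sketch of the three-step swap from item 3 above; the node ids, names, and cursor are hypothetical, and the real code resolves them from the Citus metadata and runs them within a transaction:
```python
def swap_primary(cursor, primary_node_id: int, secondary_node_id: int,
                 primary_name: str, secondary_name: str, port: int) -> None:
    # 1. rename the current primary row to a temporary "-demoted" name
    cursor.execute("SELECT citus_update_node(%s, %s, %s)",
                   (primary_node_id, primary_name + '-demoted', port))
    # 2. put the primary name into the promoted secondary's row
    cursor.execute("SELECT citus_update_node(%s, %s, %s)",
                   (secondary_node_id, primary_name, port))
    # 3. put the promoted secondary's name into the primary's row
    cursor.execute("SELECT citus_update_node(%s, %s, %s)",
                   (primary_node_id, secondary_name, port))
```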
To enable quorum commit:
```diff
$ patronictl.py edit-config
---
+++
@@ -5,3 +5,4 @@
use_pg_rewind: true
retry_timeout: 10
ttl: 30
+synchronous_mode: quorum
Apply these changes? [y/N]: y
Configuration changed
```
By default Patroni will use `ANY 1(list,of,standbys)` in `synchronous_standby_names`. That is, only one node out of the listed replicas will be used for quorum.
If you want to increase the number of quorum nodes, it is possible to do so with:
```diff
$ patronictl edit-config
---
+++
@@ -6,3 +6,4 @@
retry_timeout: 10
synchronous_mode: quorum
ttl: 30
+synchronous_node_count: 2
Apply these changes? [y/N]: y
Configuration changed
```
Good old `synchronous_mode: on` is still supported.
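For illustration, a sketch of how a quorum-based `synchronous_standby_names` value could be composed (the exact formatting and quoting in Patroni may differ):
```python
def quorum_ssn(sync_nodes: list, synchronous_node_count: int = 1) -> str:
    members = ', '.join(f'"{name}"' for name in sync_nodes)
    return f'ANY {synchronous_node_count} ({members})'


print(quorum_ssn(['postgresql1', 'postgresql2'], 2))  # ANY 2 ("postgresql1", "postgresql2")
```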
Close https://github.com/patroni/patroni/issues/664
Close https://github.com/zalando/patroni/pull/672
There was one oversight in #2781: to influence external tools that Patroni may execute, we set the global `umask` value based on the permissions of the $PGDATA directory. As a result, it also influenced the permissions of log files created by Patroni.
To address the problem we implement two measures:
1. Make `log.mode` configurable.
2. If the value is not set, calculate permissions from the original value of the umask setting (see the sketch below).
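A minimal sketch of measure 2, assuming the default mode is derived from the umask captured at startup (names are illustrative, not Patroni's actual code):
```python
import os


def default_log_mode() -> int:
    umask = os.umask(0)    # read the current umask...
    os.umask(umask)        # ...and restore it immediately
    return 0o666 & ~umask  # the permissions a regular open() would have produced
```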
The last one is only available since psycopg 2.8, while the first one since 2.0.8.
For backward compatibility, monkeypatch the connection object returned by psycopg3.
Close https://github.com/patroni/patroni/issues/3116
It could happen that there is "something" streaming from the current primary node with an `application_name` that matches the name of the current primary, for instance due to a faulty configuration. When processing `pg_stat_replication` we only checked that the `application_name` matches the name of one of the member nodes, but we forgot to exclude our own name (see the sketch below).
As a result, there were the following side effects:
1. The current primary could be declared as a synchronous node.
2. As a result of [1] it wasn't possible to do a switchover.
3. During shutdown the current primary was waiting for itself to be released from the synchronous nodes.
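A minimal sketch of the fixed filtering; `rows`, `members`, and `my_name` are hypothetical inputs used only for illustration:
```python
def eligible_sync_candidates(rows: list, members: set, my_name: str) -> list:
    return [row for row in rows
            if row['application_name'] in members
            and row['application_name'] != my_name]  # never pick ourselves
```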
Close #3111
Pass the `Cluster` object instead of `Leader`.
It will help to implement a new feature, "Configurable retention of replication slots for cluster members".
Besides that, fix a couple of issues in docstrings.