They can be useful to eliminate "unhealthy" pods from the subset addresses when K8s services with label selectors are used.
Real-life example: the node where the primary was running has failed and is being shut down, and Patroni can't update (remove) the role label.
Therefore on OpenShift the leader service will have two pods assigned, one of them being the failed primary.
With the readiness probe defined, the failed primary pod will be excluded from the list.
PostgreSQL 13 finally introduced the possibility to change `primary_conninfo` without a restart: a reload is enough. However, when the role is changing from `replica` to `standby_leader` we want to call only the `on_role_change` callback and skip `on_reload`, because they would duplicate each other.
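A minimal sketch of that decision (the function and helper names below are illustrative, not Patroni's actual API):

```python
def apply_primary_conninfo_change(old_role, new_role, reload_config, call_callback):
    # On PostgreSQL 13+ a plain reload is enough to apply the new primary_conninfo.
    reload_config()
    if old_role == 'replica' and new_role == 'standby_leader':
        # The role changed, so fire only on_role_change and skip on_reload,
        # otherwise both callbacks would report essentially the same event.
        call_callback('on_role_change', new_role)
    else:
        call_callback('on_reload', new_role)
```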
It could happen that the WAL segment required for `pg_rewind` doesn't exist in `pg_wal` anymore and therefore `pg_rewind` can't find the checkpoint location before the diverging point.
Starting from PostgreSQL 13 `pg_rewind` can use `restore_command` for fetching missing WALs, but we can do better than that.
On older PostgreSQL versions Patroni will parse the stdout and stderr of the failed rewind attempt, try to fetch the missing WAL by calling the `restore_command`, and repeat the attempt.
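The retry logic could look roughly like the sketch below (the error-message regex and the `%f`/`%p` substitution are illustrative; the real implementation may parse the output differently):

```python
import re
import subprocess

MISSING_WAL = re.compile(r'could not open file ".*?([0-9A-F]{24})"')

def rewind_with_wal_fetch(rewind_cmd, restore_command, pg_wal_dir):
    attempt = subprocess.run(rewind_cmd, capture_output=True, text=True)
    if attempt.returncode == 0:
        return True
    # Try to find the name of the missing WAL segment in stdout/stderr.
    match = MISSING_WAL.search(attempt.stdout + attempt.stderr)
    if not match:
        return False
    segment = match.group(1)
    # Substitute %f/%p the same way PostgreSQL does for restore_command.
    fetch = restore_command.replace('%f', segment)
    fetch = fetch.replace('%p', '{0}/{1}'.format(pg_wal_dir, segment))
    if subprocess.run(fetch, shell=True).returncode != 0:
        return False
    # The segment is in place now, repeat the rewind attempt.
    return subprocess.run(rewind_cmd).returncode == 0
```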
1. Between the get_cluster() and update_leader() calls the K8s leader object might be updated from outside, and therefore the resource version will not match (error code=409). Since we are watching for all changes, the ObjectCache will likely have the most up-to-date version and we take advantage of that. There is still a chance to hit a race condition, but it is smaller than before. Other DCS are free of this issue: in Etcd the update is based on value comparison, while ZooKeeper and Consul rely on the session mechanism.
2. If the update still fails, recheck the resource version of the leader object, verify that the current node is still the leader there, and repeat the call.
P.S. The leader race still relies on the version of the leader object as it was during the get_cluster() call.
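A rough sketch of the retry idea from point 1, with placeholder names (`patch_leader`, `leader_name`, `object_cache`) standing in for the real API:

```python
class ApiConflict(Exception):
    """Stands in for a Kubernetes 409 'resource version conflict' response."""

def update_leader(k8s_api, object_cache, my_name, leader):
    try:
        return k8s_api.patch_leader(resource_version=leader.resource_version)
    except ApiConflict:
        # Our stored version is stale; the WATCH-fed cache is likely newer.
        cached = object_cache.get('leader')
        if cached is not None and cached.leader_name == my_name:
            return k8s_api.patch_leader(resource_version=cached.resource_version)
        raise
```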
In addition to that, the handling of K8s API errors is fixed: we should retry on 500, not on 502.
Close https://github.com/zalando/patroni/issues/1589
The `SSLSocket` performs the handshake immediately on accept. Effectively, this blocks the whole API thread if the client side doesn't send any data.
In order to solve the issue we defer the handshake until the thread serving the request has started.
The solution is a bit hacky, but thread-safe.
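Conceptually it boils down to something like this sketch (standard library only, not Patroni's exact code): wrap the listening socket with `do_handshake_on_connect=False` so accept() returns immediately, and run the TLS handshake in the per-request thread instead.

```python
import ssl
from http.server import HTTPServer
from socketserver import ThreadingMixIn

class DeferredTLSServer(ThreadingMixIn, HTTPServer):
    """Accept returns immediately; the TLS handshake runs in the worker thread."""

    def __init__(self, address, handler, certfile, keyfile):
        super().__init__(address, handler)
        context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        context.load_cert_chain(certfile, keyfile)
        self.socket = context.wrap_socket(self.socket, server_side=True,
                                          do_handshake_on_connect=False)

    def finish_request(self, request, client_address):
        request.do_handshake()  # executed in the per-request thread
        super().finish_request(request, client_address)
```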
Close https://github.com/zalando/patroni/issues/1545
It is possible to specify a custom hba_file and ident_file in the postgresql configuration parameters, and Patroni considers these files to be managed externally. It could happen that the locations of these files match the default locations of pg_hba.conf and pg_ident.conf. In this case we ignore the custom values and fall back to the default workflow, i.e. Patroni will overwrite them.
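The check could be as simple as the sketch below (the helper name is made up):

```python
import os

def is_externally_managed(path, data_dir, default_name):
    """Treat hba_file/ident_file as custom only if it points outside the default location."""
    if not path:
        return False
    default = os.path.join(data_dir, default_name)
    return os.path.realpath(path) != os.path.realpath(default)
```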
Close: https://github.com/zalando/patroni/issues/1544
Patroni had already been doing that before creating users for a long time, but post_init was an oversight. It will help all utilities relying on libpq and reduce end-user confusion.
The ZooKeeper implementation heavily relies on a cached version of the cluster view in order to minimize the number of requests. Having stale member information is fine for the Patroni workflow because it basically relies only on member names and tags.
`GET /cluster` is a different case: being exposed to the outside, it might be used for monitoring purposes, and therefore we should show up-to-date member information.
We don't need to rewind when:
1. the replayed location of the former replica is not ahead of the switchpoint
2. the end of the checkpoint record of the former primary is the same as the switchpoint
In order to get the end of the checkpoint record we use `pg_waldump` and parse its output.
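A rough illustration of the parsing (the regex and the alignment arithmetic below are approximations; Patroni's real code may differ):

```python
import re

WALDUMP_CHECKPOINT = re.compile(r'len \(rec/tot\):\s*\d+/\s*(\d+).*?'
                                r'lsn: ([0-9A-Fa-f]+)/([0-9A-Fa-f]+).*?'
                                r'desc: CHECKPOINT_SHUTDOWN')

def checkpoint_end_location(waldump_output):
    for line in waldump_output.splitlines():
        match = WALDUMP_CHECKPOINT.search(line)
        if match:
            tot_len = int(match.group(1))
            lsn = (int(match.group(2), 16) << 32) + int(match.group(3), 16)
            # End of the record: start LSN plus the total length, 8-byte aligned.
            return lsn + (tot_len + 7) // 8 * 8
    return None
```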
Close https://github.com/zalando/patroni/issues/1493
The standby cluster doesn't know about leader elections in the main cluster and therefore the usual mechanisms of detecting divergences don't work. For example, it could happen that the standby cluster is ahead of the new primary of the main cluster and must be rewound.
There is a way to know that a new timeline has been created: check for the presence of a history file in pg_wal. If the new file is there, we start the usual procedure of making sure that we can continue streaming, or run pg_rewind otherwise.
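In essence (the directory layout and helper name are illustrative):

```python
import os

def new_timeline_history_exists(pg_wal_dir, current_timeline):
    # Promotion to timeline N creates an N.history file, e.g. 00000003.history.
    history_file = '%08X.history' % (current_timeline + 1)
    return os.path.exists(os.path.join(pg_wal_dir, history_file))
```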
`touch_member()` could be called from the finally block of `_run_cycle()`. If it raised an exception, the whole Patroni process crashed.
In order to avoid future crashes we wrap `_run_cycle()` into a try..except block and ask the user to report a BUG.
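A sketch of the guard, with `_run_cycle()` stubbed out (the wrapper shape follows the description above; it is not the literal Patroni code):

```python
import logging

logger = logging.getLogger(__name__)

class Ha(object):
    def _run_cycle(self):
        raise NotImplementedError  # the real HA logic lives here

    def run_cycle(self):
        try:
            return self._run_cycle()
        except Exception:
            logger.exception('Unexpected exception in the HA loop')
            return 'Unexpected exception raised, please report it as a BUG'
```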
Close https://github.com/zalando/patroni/issues/1529
This PR makes split_host_port return IPv6 addresses without the enclosing brackets.
This is due to the fact that e.g. socket.* functions expect the host not to contain them when called with an IPv6 address.
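An illustrative version of such a helper (assuming IPv6 hosts always arrive bracketed, as in `[::1]:8008`):

```python
def split_host_port(value, default_port):
    host, _, port = value.rpartition(':')
    if not host:            # no colon at all: only a host was given
        host, port = port, ''
    # Strip the enclosing brackets so socket.* functions accept the host as-is.
    return host.strip('[]'), int(port) if port else default_port

# split_host_port('[2001:db8::1]:5432', 5432) -> ('2001:db8::1', 5432)
```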
Close: #1532
When deciding whether the running replica is able to stream from the new primary or must be rewound, we should use the replayed location; therefore we extract the received and replayed locations independently.
Reuse the part of the query that extracts the timeline and locations in the REST API.
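The locations part of such a query could look roughly like this (PostgreSQL 10+ function names; not necessarily the exact statement Patroni runs):

```python
RECOVERY_STATE_SQL = (
    "SELECT pg_catalog.pg_is_in_recovery(), "
    "       pg_catalog.pg_last_wal_receive_lsn(), "
    "       pg_catalog.pg_last_wal_replay_lsn()"
)
```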
There is no expire mechanism available on K8s, therefore we implement a soft leader lock: every pod is "watching" for changes of the leader object, and when there are no changes during the TTL it starts the leader race.
Before we switched to the LIST+WATCH approach in #1189 and #1276, we only watched the leader object, and every time it was updated the main thread of the HA loop woke up. As a result, all replica pods were synchronized and started the leader race more or less at the same time.
The new approach made all pods "unsynchronized", and its biggest downside is that in the worst case it takes `ttl + loop_wait` to detect a leader failure.
This commit makes all pods in one cluster sync their HA loops again, based on updates of the leader object.
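A toy sketch of the wake-up mechanism (object and variable names are made up):

```python
import threading

wakeup = threading.Event()

def on_kubernetes_event(obj):
    # Called by the WATCH thread for every object change it receives.
    # The '-leader' suffix is a hypothetical naming convention.
    if obj.get('metadata', {}).get('name', '').endswith('-leader'):
        wakeup.set()

def sleep_until_next_cycle(loop_wait):
    # Wakes up early whenever the leader object was updated, so all pods
    # run their HA cycles roughly in lockstep.
    wakeup.wait(loop_wait)
    wakeup.clear()
```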
So far Patroni has been parsing `recovery.conf` or querying `pg_settings` in order to get the current values of recovery parameters. On PostgreSQL older than 12 it could easily happen that the value of `primary_conninfo` in `recovery.conf` has nothing to do with reality. Luckily for us, on PostgreSQL 9.6+ there is the `pg_stat_wal_receiver` view, which contains the current values of `primary_conninfo` and `primary_slot_name`. The password field is masked though, but this is fine, because authentication happens only when the connection is opened. All other parameters we compare as usual.
Another advantage of `pg_stat_wal_receiver` is that it contains the current timeline, therefore on 9.6+ we don't need to use the replication connection trick if the walreceiver process is alive.
If there is no walreceiver process or it is not streaming, we stick to the old methods.
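For example, the live settings can be read with a query along these lines (the columns exist in `pg_stat_wal_receiver` on 9.6+; the comparison logic itself is omitted):

```python
WALRECEIVER_SQL = (
    "SELECT status, received_tli, slot_name, conninfo "
    "FROM pg_catalog.pg_stat_wal_receiver"
)
```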
When Patroni is trying to figure out whether pg_rewind is necessary, it could write the content of the history file from the primary into the log. The history file grows with every failover/switchover and eventually starts taking up too many lines in the log, most of which are not very useful.
Instead of showing the raw data, we will show only 3 lines before the current replica timeline and 2 lines after.
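Illustrative trimming of already-parsed history lines (the exact boundary handling in Patroni may differ):

```python
def trim_history(history_lines, replica_timeline, before=3, after=2):
    # history_lines: parsed (timeline, lsn, reason) tuples from the history file.
    # Keep only a small window of lines around the replica's timeline.
    idx = next((i for i, line in enumerate(history_lines)
                if int(line[0]) >= replica_timeline), len(history_lines))
    return history_lines[max(0, idx - before):idx + after]
```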
Replicas are waiting for the checkpoint indication via the leader's member key in DCS. The key is normally updated only once per HA loop.
Without waking the main thread up, replicas would have to wait up to `loop_wait` seconds longer than necessary.
Without supplying the --enable-v2=true flag to etcd on startup, Patroni cannot find etcd.
After running `etcd --data-dir=data/etcd` in one terminal and `patroni postgres0.yaml` in another, etcd starts fine, but the Postgres instance cannot find etcd:
```
patroni postgres0.yaml
2020-05-09 15:58:48,560 ERROR: Failed to get list of machines from http://127.0.0.1:2379/v2: EtcdException('Bad response : 404 page not found\n')
2020-05-09 15:58:48,560 INFO: waiting on etcd
```
If etcd is passed the flag `--enable-v2=true` on startup, everything works fine.
Initial bootstrap by attaching Patroni to a running Postgres was causing the following error:
```
File "/xx/lib/python3.8/site-packages/patroni/ha.py", line 529, in update_cluster_history
history = history[-self.cluster.config.max_timelines_history:]
AttributeError: 'NoneType' object has no attribute 'max_timelines_history'
```
Skip missing values from pg_controldata
When calling controldata(), it may return an empty dictionary, which in turn can cause the following error:
```
effective_configuration
cvalue = parse_int(data[cname])
KeyError: 'max_wal_senders setting'
```
Instead of crashing in this case, we now log the error and continue.
This is the full output of the error:
```
2020-04-17 14:31:54,791 ERROR: Exception during execution of long running task restarting after failure
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/patroni/async_executor.py", line 97, in run
wakeup = func(*args) if args else func()
File "/usr/lib/python3/dist-packages/patroni/postgresql/__init__.py", line 707, in follow
self.start(timeout=timeout, block_callbacks=change_role, role=role)
File "/usr/lib/python3/dist-packages/patroni/postgresql/__init__.py", line 409, in start
configuration = self.config.effective_configuration
File "/usr/lib/python3/dist-packages/patroni/postgresql/config.py", line 983, in effective_configuration
cvalue = parse_int(data[cname])
KeyError: 'max_wal_senders setting'
```
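The defensive handling could look like this sketch (the helper name is made up; the real change lives in `effective_configuration`):

```python
import logging

logger = logging.getLogger(__name__)

def controldata_int(data, name):
    # `data` is the dict parsed from pg_controldata output; it may be empty
    # or miss individual keys, e.g. 'max_wal_senders setting'.
    value = data.get(name)
    if value is None:
        logger.warning('%s is missing from pg_controldata output', name)
        return None
    try:
        return int(value)
    except ValueError:
        logger.warning('Bad value of %s in pg_controldata output: %r', name, value)
        return None
```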
In dynamic environments it is common that during a rolling upgrade etcd nodes change their IP addresses. If the etcd node Patroni is currently connected to is upgraded last, it could happen that the cached topology doesn't contain any live node anymore, and therefore the request can't be retried and totally fails, usually resulting in demotion of the primary.
In order to partially overcome the problem, Patroni is already doing a periodic (every 5 minutes) rediscovery of the etcd cluster topology, but in case of very fast node rotation there was still a possibility to hit the issue.
This PR is an attempt to address the problem. If the list of nodes is exhausted, Patroni will try to repeat the initial discovery via an external mechanism, like resolving A or SRV DNS records, and if the new list is different from the original one, Patroni will use it as the new etcd cluster topology.
In order to deal with TCP issues the connect_timeout is set to max(read_timeout/2, 1). It makes the list of members exhaust faster, but leaves time to perform topology rediscovery and another attempt.
The third issue addressed by this PR: it could happen that the DNS names of the etcd nodes didn't change but the IP addresses are new, therefore we clean up the internal DNS cache when doing topology rediscovery.
Besides that, this commit makes the `_machines_cache` property pretty much static: it is updated only when the topology has changed, which helps to avoid concurrency issues.
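A simplified sketch of the rediscovery fallback (A-record resolution only; the real code also handles SRV records and cleans its own DNS cache):

```python
import socket

def rediscover_machines(configured_hosts, port, machines_cache):
    fresh = set()
    for host in configured_hosts:
        try:
            for *_, sockaddr in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):
                fresh.add('http://{0}:{1}'.format(sockaddr[0], sockaddr[1]))
        except socket.gaierror:
            continue  # this name doesn't resolve right now, try the next one
    # Replace the cached topology only if the freshly resolved set differs.
    return sorted(fresh) if fresh and fresh != set(machines_cache) else machines_cache
```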
It is safe to call pg_rewind on the replica only when pg_control on the primary contains information about the latest timeline. Postgres usually performs an immediate checkpoint right after promote, and in most cases it works just fine. Unfortunately, we regularly receive complaints that it takes too long (minutes) until the checkpoint is done and replicas can't perform the rewind, while issuing the checkpoint manually helps immediately. So Patroni starts doing the same: once the promotion has happened and Postgres is no longer running in recovery, we explicitly issue a CHECKPOINT.
We are intentionally not using the AsyncExecutor here, because we want the HA loop to continue its normal flow.
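Conceptually (the connection factory is a placeholder; any libpq-based driver with a DB-API cursor works):

```python
from threading import Thread

def checkpoint_after_promote(connect):
    """Issue CHECKPOINT in a throwaway thread so the HA loop keeps running."""
    def worker():
        conn = connect()
        try:
            conn.autocommit = True
            with conn.cursor() as cur:
                cur.execute('CHECKPOINT')
        finally:
            conn.close()

    Thread(target=worker, daemon=True).start()
```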
## Feature: Postgres stop timeout
A switchover/failover operation hangs on the signal_stop (or checkpoint) call when the postmaster doesn't respond or hangs for some reason (issue described in [1371](https://github.com/zalando/patroni/issues/1371)). This leads to service loss for an extended period of time, until the hung postmaster starts responding or is killed by some other actor.
### master_stop_timeout
The number of seconds Patroni is allowed to wait when stopping Postgres; effective only when synchronous_mode is enabled. When set to > 0 and synchronous_mode is enabled, Patroni sends SIGKILL to the postmaster if the stop operation runs for longer than the value of master_stop_timeout. Set the value according to your durability/availability tradeoff. If the parameter is not set or is set to a value <= 0, master_stop_timeout does not apply.
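A rough sketch of the behaviour (the `postmaster` object here is assumed to look like `subprocess.Popen`; the synchronous_mode check is elided):

```python
import signal
import time

def stop_postmaster(postmaster, master_stop_timeout):
    postmaster.send_signal(signal.SIGINT)       # request a fast shutdown
    deadline = time.time() + master_stop_timeout
    while postmaster.poll() is None:
        if master_stop_timeout > 0 and time.time() > deadline:
            postmaster.kill()                   # SIGKILL after the timeout
            break
        time.sleep(1)
```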
```
$ python3 patronictl.py -c postgresql0.yml list
Error: Provided config file postgresql0.yml not existing or no read rights. Check the -c/--config-file parameter
```
Right now it's not really clear from --help what it does: in a software context, flushing usually means persisting, whereas here it's actually the opposite, so make the intention more explicit.