1. The `ttl` was incorrectly returned 1000 times higher than it should be.
2. The `watch()` method must return True if the parent method returned True. Not doing so resulted in the incorrect calculation of sleep time.
3. Move the mock of the Exhibitor API to features/environment.py; it simplifies testing with behave.
This patch fixes the error handling of cases where there are runtime errors in `socketserver`, for example when creating a new thread (to handle a request) fails.
`get_request` handles SSL connections by replacing the new client socket with a tuple `(server_socket, new_client_socket)`, in order to deal with handshakes later in `process_request_thread`.
While processing a request, the socketserver `BaseServer` calls `handle_request`, which calls `_handle_request_noblock`, which in turn calls the following functions (https://github.com/python/cpython/blob/3.8/Lib/socketserver.py#L303):
```
request, client_address = self.get_request()
if self.verify_request(request, client_address):
    try:
        self.process_request(request, client_address)
    except Exception:
        self.handle_error(request, client_address)
        self.shutdown_request(request)
else:
    self.shutdown_request(request)
```
- `get_request` is overridden in Patroni and returns `request` as a tuple in case of SSL connections
- `verify_request` defaults to `return True`; it would need fixing if it were actually used, but that is fine in this case
- `process_request` just calls `process_request_thread` (which is overridden in Patroni and handles tuple-style requests)
- `handle_error` is overridden in Patroni and handles tuple-style requests
- but `shutdown_request` is not overridden and thus lacks support for tuple-style requests
This patch adds support for tuple-style requests to the Patroni API.
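A minimal sketch of the kind of override meant here, not the actual Patroni code (the class name and the exact tuple handling are illustrative):

```python
import socketserver


class APIServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    # Hypothetical server class, shown only to illustrate the override.

    def shutdown_request(self, request):
        # For SSL connections get_request() is assumed to have returned
        # (server_socket, new_client_socket); only the client socket
        # needs to be shut down here.
        if isinstance(request, tuple):
            request = request[1]
        super(APIServer, self).shutdown_request(request)
```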
There are sometimes good reasons to manage replication slots externally
to Patroni. For example, a consumer may wish to manage its own slots (so
that it can more easily track when a failover has occurred and whether
it is ahead of or behind the WAL position on the new primary).
Additionally, tooling like pglogical actually replicates slots to all
replicas so that the current position can be maintained on failover
targets (this also aids consumers by supplying primitives so that they
can verify that data hasn't been lost or that a split brain has occurred
relative to the physical cluster).
To support these use cases, this new feature allows configuring Patroni
to entirely ignore sets of slots specified by any subset of name,
database, slot type, and plugin.
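A hypothetical configuration sketch of such rules; the section placement and key names are assumptions, shown only to illustrate matching on a subset of attributes:

```yaml
ignore_slots:
  # Ignore any logical slot created by pglogical in the "orders" database,
  # regardless of its name.
  - type: logical
    database: orders
    plugin: pglogical
  # Ignore one externally managed physical slot by name.
  - name: consumer_slot
    type: physical
```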
Previously the only documentation of how to run tests was the
implementation in the Travis configuration file. Here we add
instructions and move the development dependencies into a separate
requirements.dev.txt file that is easy to use and is shared with the
Travis config.
It is not very common, but the master Postgres might "crash" for various reasons, like OOM or running out of disk space. Of course, there is a chance that the current node holds some unreplicated data, and therefore Patroni by default prefers to start Postgres on the leader node rather than doing a failover.
To be on the safe side, Patroni always starts Postgres in recovery, no matter whether the current node owns the leader lock or not. If Postgres wasn't shut down cleanly, starting in recovery might fail; therefore, in some cases, as a workaround Patroni performs crash recovery by starting Postgres in single-user mode.
A few times we ended up in the following situation:
1. The master Postgres crashed because it ran out of disk space
2. Patroni started crash recovery in single-user mode
3. While doing crash recovery, Patroni kept updating the leader lock
This leaves Patroni stuck on step 3, and manual intervention is required to recover the cluster.
Patroni already has the `master_start_timeout` option, which controls how long we let Postgres stay in the `starting` state; after that Patroni might decide to release the leader lock if there are healthy replicas available that could take it over.
This PR makes the `master_start_timeout` option also work for crash recovery.
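For illustration, a dynamic (DCS) configuration sketch; the value is arbitrary:

```yaml
# Give the primary at most 120 seconds to leave the `starting` state
# (now including single-user crash recovery) before Patroni considers
# failing over to a healthy replica.
master_start_timeout: 120
```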
1. Don't call bootstrap if PGDATA is missing/empty, because that might be intentional and someone/something might be working on it.
2. Consider Postgres running as a leader in pause not healthy if the pg_control sysid doesn't match the /initialize key (an empty initialize key would allow the "race", and the leader would "restore" the initialize key).
3. Don't exit on sysid mismatch in pause; only log a warning.
4. Cover corner cases when Patroni is started in pause with an empty PGDATA that is then restored by somebody else.
5. An empty string is a valid `recovery_target`.
An unhandled exception prevented demoting the primary.
In addition, wrap the update_leader call in the HA loop in a try..except block and implement a test case.
Fixes https://github.com/zalando/patroni/issues/1684
* update release notes
* bump version
* change the default alignment in patronictl table output to `left`
* add missing tests
* add missing pieces to the documentation
It is now also possible to point the configuration path at a directory instead of a file.
Patroni will find all YAML files in the directory and apply them in sorted order.
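A rough sketch of how such a directory could be processed (illustration only, not the actual implementation; the real merge semantics may differ):

```python
import glob
import os

import yaml


def load_config_dir(path):
    """Load every *.yml file from `path` in sorted order, with later files
    overriding keys from earlier ones (shallow merge for brevity)."""
    config = {}
    for name in sorted(glob.glob(os.path.join(path, '*.yml'))):
        with open(name) as f:
            config.update(yaml.safe_load(f) or {})
    return config
```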
Close https://github.com/zalando/patroni/issues/1669
So far Patroni compared the old value (from `pg_settings`) with the new value (from the Patroni configuration or from DCS) in order to figure out whether a reload or restart is required when a parameter has been changed. If the given parameter was missing from `pg_settings`, Patroni ignored it and did not write it into `postgresql.conf`.
If Postgres was not running, no validation was performed and parameters and values were written into the config as-is.
It is not a very common mistake, but people tend to mistype parameter names or values.
Also, it happens that some parameters are removed in specific Postgres versions and some new ones are added (e.g. `checkpoint_segments` was replaced with `min_wal_size` and `max_wal_size` in 9.5, and `wal_keep_segments` was replaced with `wal_keep_size` in 13).
Writing nonexistent parameters or invalid values into `postgresql.conf` makes Postgres unable to start.
This change doesn't solve the issue 100%, but it comes very close to that goal.
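An illustrative sketch of the kind of validation meant here; the metadata table and function are made up for the example:

```python
# Hypothetical per-parameter metadata: (type, version added, version removed),
# versions in the server_version integer format (e.g. 130000 for 13).
KNOWN_PARAMETERS = {
    'wal_keep_segments': (int, 0, 130000),  # removed in 13
    'wal_keep_size': (int, 130000, None),   # added in 13
    'max_connections': (int, 0, None),
}


def validate_parameter(name, value, server_version):
    """Return True if the parameter exists in this Postgres version and the
    value can be cast to the expected type; otherwise it is not written."""
    meta = KNOWN_PARAMETERS.get(name)
    if meta is None:
        return False
    type_, added, removed = meta
    if server_version < added or (removed is not None and server_version >= removed):
        return False
    try:
        type_(value)
    except (TypeError, ValueError):
        return False
    return True
```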
Call a fencing script after acquiring the leader lock. If the script doesn't finish successfully, don't promote, but remove the leader key instead.
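A configuration sketch, assuming the script is exposed via a `pre_promote` key in the `postgresql` section (the key name and script path are assumptions here):

```yaml
postgresql:
  # Executed after the leader lock is acquired and before promotion;
  # a non-zero exit code aborts the promotion and the leader key is removed.
  pre_promote: /usr/local/bin/fence_old_primary.sh
```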
Close https://github.com/zalando/patroni/issues/1567
Patroni fails if pg_hba.conf happens to be located outside the PGDATA directory after a custom bootstrap:
```
2020-08-27 23:25:06,966 INFO: establishing a new patroni connection to the postgres cluster
2020-08-27 23:25:06,997 INFO: running post_bootstrap
2020-08-27 23:25:07,009 ERROR: post_bootstrap
Traceback (most recent call last):
File "../lib/python3.8/site-packages/patroni/postgresql/bootstrap.py", line 357, in post_bootstrap
os.unlink(postgresql.config.pg_hba_conf)
FileNotFoundError: [Errno 2] No such file or directory: '../pgrept1/sysdata/pg_hba.conf'
2020-08-27 23:25:07,016 INFO: removing initialize key after failed attempt to bootstrap the cluster
```
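One way to make the cleanup tolerate a missing `pg_hba.conf` is sketched below; this only illustrates the failing call from the traceback, not necessarily the actual fix:

```python
import os


def unlink_if_exists(path):
    # post_bootstrap only needs the file gone; an already missing file is fine.
    try:
        os.unlink(path)
    except FileNotFoundError:
        pass
```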
On K8s the `Cluster.leader` is a valid object even if the cluster has no leader, because we need to know the `resourceVersion` for future CAS operations. Such a non-empty object broke the HA loop and made other nodes think that the leader was there.
The right way to identify a missing leader, which works reliably across all DCS implementations, is to check that the leader's name is empty.
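A small illustration of the check; the attribute names follow the description above, the helper itself is hypothetical:

```python
def cluster_has_leader(cluster):
    # On Kubernetes `cluster.leader` may be a non-empty placeholder object
    # (kept for its resourceVersion), so the reliable test is the name.
    return bool(cluster.leader and cluster.leader.name)
```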
For init processes that use a symlinked WAL directory, or custom scripts that create new tablespaces, these directories should also be renamed after a failed init attempt. Currently the following errors occur if the first init attempt failed, even though a second one might have succeeded:
```
fixing permissions on existing directory /var/lib/postgresql/data ... ok
initdb: error: directory "/var/lib/postgresql/wal/pg_wal" exists but is not empty
[...]
  File "/usr/lib/python3/dist-packages/patroni/ha.py", line 1173, in post_bootstrap
    self.cancel_initialization()
  File "/usr/lib/python3/dist-packages/patroni/ha.py", line 1168, in cancel_initialization
    raise PatroniException('Failed to bootstrap cluster')
patroni.exceptions.PatroniException: 'Failed to bootstrap cluster'
```
In the remove_data_directory function the same already happens when removing the data directory; the same kind of thing should also happen when moving a data directory.
To ensure the moved data directory can still be used, the symlinks will point to the renamed directories.
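A rough sketch of renaming a symlinked directory together with its target (not Patroni's code; error handling omitted):

```python
import os


def move_dir_with_symlink(path, suffix):
    """Rename the directory a symlink points to and re-point the symlink,
    so the moved-aside data directory stays usable."""
    if os.path.islink(path):
        target = os.path.realpath(path)
        os.rename(target, target + suffix)   # rename the real directory
        os.unlink(path)
        os.symlink(target + suffix, path)    # keep the link valid
    else:
        os.rename(path, path + suffix)
```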
So far Patroni enforced the same value of `wal_keep_segments` on all nodes in the cluster. If the parameter was missing from the global configuration, the default value `8` was used.
In pg13 beta3 `wal_keep_segments` was renamed to `wal_keep_size`, which broke Patroni.
If `wal_keep_segments` happens to be present in the configuration for pg13, Patroni will recalculate the value into `wal_keep_size`, assuming that `wal_segment_size` is 16MB. Sure, it is possible to get the real value of `wal_segment_size` from pg_control, but since we are dealing with a case of misconfiguration, it is not worth spending time on it.
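The recalculation amounts to simple arithmetic; a sketch assuming the default 16MB segment size (the function name is illustrative):

```python
def wal_keep_segments_to_size(wal_keep_segments, wal_segment_size_mb=16):
    # e.g. wal_keep_segments = 8 -> wal_keep_size = '128MB'
    return '{0}MB'.format(int(wal_keep_segments) * wal_segment_size_mb)
```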
When running on K8s, Patroni communicates with the API via the `kubernetes` service, whose address is exposed via the
`KUBERNETES_SERVICE_HOST` environment variable. Like any other service, the `kubernetes` service is handled by `kube-proxy`, which, depending on the configuration, relies either on a userspace program or on `iptables` for traffic routing.
During a K8s upgrade, when master nodes are replaced, it is possible that `kube-proxy` doesn't update the service configuration in time, and as a result Patroni fails to update the leader lock and demotes Postgres.
In order to improve the user experience and get more control over the problem, we make it possible to bypass the `kubernetes` service and connect directly to the API nodes.
The strategy is very simple:
1. Resolve the list of API node IPs from the kubernetes endpoint on every iteration of the HA loop.
2. Stick to one of these IPs for API requests.
3. Switch to a different IP if the currently used IP is no longer in the list.
4. If a request fails, switch to another IP and retry.
Such a strategy is already used for Etcd and has proven to work quite well.
To enable the feature, either set `kubernetes.bypass_api_service` to `true` in the Patroni configuration file or set the `PATRONI_KUBERNETES_BYPASS_API_SERVICE` environment variable.
If for some reason `GET /default/endpoints/kubernetes` isn't allowed, Patroni will disable the feature.
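Enabling the feature in the configuration file looks like this:

```yaml
kubernetes:
  bypass_api_service: true
```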
The new parameter `synchronous_node_count` is used by Patroni to manage the number of synchronous standby databases. It is set to 1 by default and has no effect when `synchronous_mode` is set to off. When enabled, Patroni manages the precise number of synchronous standby databases based on `synchronous_node_count` and adjusts the state in DCS and `synchronous_standby_names` as members join and leave.
This functionality can be further extended in the future to support priority-based (FIRST n) and quorum-based (ANY n) synchronous replication.
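A dynamic configuration sketch using the new parameter:

```yaml
synchronous_mode: true
# Keep exactly two synchronous standbys in synchronous_standby_names.
synchronous_node_count: 2
```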
Patroni appeared to be creating a dormant slot (when `slots` is defined) for the leader node when the member name contains special characters such as '-' (e.g. "abc-us-1").
The newest versions of docker-compose want some values double-quoted in the env file, while old versions fail to process such files.
The solution is simple: move some of the parameters to `docker-compose.yml` and rely on anchors for inheritance.
Since the main idea behind the env files was to keep "secret" information out of the main YAML, we also get rid of any non-secret stuff, which was mainly located in etcd.env.
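A sketch of the anchor-based inheritance meant here; the service layout and variable values are illustrative, not the actual docker-compose.yml:

```yaml
# Shared, non-secret settings live in the compose file itself...
x-patroni-env: &patroni_env
  PATRONI_SCOPE: demo
  PATRONI_RESTAPI_LISTEN: 0.0.0.0:8008

services:
  patroni1:
    env_file: patroni.env        # ...while secrets stay in the env file
    environment:
      <<: *patroni_env
      PATRONI_NAME: patroni1
```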
For DCS other than `kubernetes` it failed with an exception due to `cluster.config` being `None`, but on Kubernetes it happily created the config annotation and prevented writing the bootstrap configuration after the bootstrap had finished.
* pg_rewind error messages contain '/' as directory separator
* fix Raft unit tests on win
* fix validator unit tests on win
* fix keepalive unit tests on win
* make standby cluster behave tests less shaky
The only python-etcd3 client, which works directly via gRPC, still supports only a single endpoint, which is not very nice for high availability.
Since Patroni is already using a heavily hacked version of python-etcd with smart retries and auto-discovery out of the box, I decided to enhance the existing code with limited support for the v3 protocol via gRPC-gateway.
Unfortunately, watches via gRPC-gateway require us to open and keep a second connection to etcd.
Known limitations:
* The minimal supported version is 3.0.4. On earlier versions transactions don't work due to bugs in grpc-gateway, and without transactions we can't do atomic operations, i.e. leader locks.
* Watches work only starting from 3.1.0.
* Authentication works only starting from 3.3.0.
* gRPC-gateway does not support authentication using the TLS Common Name, because gRPC-proxy terminates TLS from its client, so all clients share the cert of the proxy: https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/authentication.md#using-tls-common-name
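Configuration-wise the v3 support could be enabled with something like the following sketch (the `etcd3` section name is an assumption based on the existing `etcd` section):

```yaml
etcd3:
  hosts:
    - 10.0.1.11:2379
    - 10.0.1.12:2379
    - 10.0.1.13:2379
```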
* a new node can join the cluster dynamically and become a part of the consensus
* it is also possible to join only the Patroni cluster (without adding the node to the Raft consensus); just comment out or remove `raft.self_addr` for that (see the configuration sketch after this list)
* when a node joins the cluster, it uses the values from `raft.partner_addrs` only for the initial discovery
* it is possible to run Patroni and Postgres on two nodes plus one node with `patroni_raft_controller` (without Patroni and Postgres); in such a setup one can temporarily lose one node without affecting the primary
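A configuration sketch for a node that participates in the consensus; `self_addr` and `partner_addrs` are named above, the `data_dir` key is an assumption:

```yaml
raft:
  data_dir: /var/lib/patroni/raft
  self_addr: 10.0.2.11:5010      # comment out or remove to join Patroni only
  partner_addrs:                 # used only for the initial discovery
    - 10.0.2.12:5010
    - 10.0.2.13:5010
```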
1. Put the `*` character into pgpass if the actual value is empty (see the example after this list).
2. Re-raise fatal exceptions from the HA loop (we need to exit if, for example, cluster initialization failed).
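For illustration, a generated pgpass line where e.g. the port is not set explicitly, so the field is written as the `*` wildcard that libpq accepts (host, user and password are placeholders):

```
/var/run/postgresql:*:*:replicator:secret
```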
Close https://github.com/zalando/patroni/issues/1617
The official Python Kubernetes client contains a lot of auto-generated code and is therefore very heavy, while we need only a small fraction of it.
A naive implementation that covers all the API methods we use takes about 250 LoC, and about half of it is responsible for handling configuration files.
Disadvantage: if somebody was using `patronictl` outside of the pod (on their own machine), it might not work anymore (depending on the environment).
If Postgres is not running due to a crash or panic, or because it was stopped by some external force, we need to change the internal state.
So far we always relied on the state being set in the `Postgresql.start()` method. Unfortunately, this method is not reachable in all cases. For example, `Postgresql.follow()` could raise an exception when calling `write_recovery_conf()` if there is not enough disk space to write the file.
To solve this, we explicitly set the state when we detect that Postgres is not alive. In `pause` we set it to `stopped`, otherwise to `crashed` if it was `running` or `starting` before.
- ``GET /replica?lag=<max-lag>``: replica check endpoint.
- ``GET /asynchronous?lag=<max-lag>`` or ``GET /async?lag=<max-lag>``: asynchronous standby check endpoint.
Checks the replication latency and returns status code **200** only when the latency is below the specified value. For performance reasons the key leader_optime from DCS is used as the leader WAL position to compute the latency on the replica. Please note that the value in leader_optime might be a couple of seconds old (based on loop_wait).
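A usage sketch of such a health check; host, port and the lag value are placeholders, and the `requests` library is assumed to be available:

```python
import requests

# Returns 200 only when the replica's replication latency is below 1048576 bytes.
response = requests.get('http://127.0.0.1:8008/replica', params={'lag': '1048576'})
print(response.status_code)
```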
Co-authored-by: Alexander Kukushkin <cyberdemn@gmail.com>
Patroni caches the cluster view in the DCS object because not all operations require the most up-to-date values. The cached version is valid for TTL seconds. So far this worked quite well; the only known problem was that the `last_leader_operation` for some DCS implementations was not very up-to-date:
* Etcd: since the `/optime/leader` key is updated right after the `/leader` key, replicas usually get the value from the previous HA loop, so the value is somewhere between `loop_wait` and `loop_wait*2` old. We improve it by adding an artificial 10ms sleep after receiving the watch notification for the `compareAndSwap` operation on the leader key. This usually gives the primary enough time to update `/optime/leader`. On average that makes the cached version `loop_wait/2` old.
* ZooKeeper: Patroni itself is not so much interested in the most up-to-date values of the member and leader/optime ZNodes. In case of a leader race it just reads everything from ZooKeeper, but during normal operation it relies on the cache. In order to see the recent value, replicas watch the `leader/optime` ZNode and re-read it after it has been updated by the primary. On average that makes the cached version `loop_wait/2` old.
* Kubernetes: last_leader_operation is stored in the same object as the leader key itself, therefore the update is atomic and we always see the latest version. That makes the cached version `loop_wait/2` old on average.
* Consul: the HA loops on the primary and replicas are not synchronized, therefore at the moment we read the cluster state from the Consul KV we see a last_leader_operation value that is between 0 and loop_wait old. On average that makes the cached version `loop_wait` old. Unfortunately we can't make it much better without performing periodic updates from Consul, which might have negative side effects.
Since `optime/leader` is updated at most once per HA loop cycle, the value stored in the DCS is usually `loop_wait/2` old on average. For the majority of DCS implementations we can promise that the cached version in Patroni will match the value in DCS most of the time, therefore there is no need to make additional requests. The only exception is Consul, but probably we can just document it, so that anyone relying on the last_leader_operation value to check the replication lag can adjust their thresholds accordingly.
Will help to implement #1599