We are using only one function from there, `find_executable()`, and it is better to implement a similar function in Patroni than to add the `distutils` module to requirements.txt.
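A minimal sketch of such a replacement could look like the following (this is an illustration of the idea, not Patroni's actual implementation):

```python
import os


def find_executable(executable, path=None):
    # Sketch of a distutils.spawn.find_executable() replacement:
    # search each directory of PATH for the named binary.
    _, ext = os.path.splitext(executable)
    if os.name == 'nt' and ext != '.exe':
        executable += '.exe'

    # a path with a directory component is checked as-is
    if os.path.isfile(executable):
        return executable

    for d in (path or os.environ.get('PATH', os.defpath)).split(os.pathsep):
        candidate = os.path.join(d, executable)
        if os.path.isfile(candidate):
            return candidate
    return None
```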
It might happen that `statement_timeout` on the server is set to some small value, and one of the statements executed by pg_rewind gets canceled.
I have already proposed a patch fixing pg_rewind itself, but it would also be good to have a workaround in Patroni.
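One possible workaround (a sketch, not Patroni's actual code) is to disable `statement_timeout` for the session pg_rewind uses, via the libpq `options` connection parameter:

```python
def rewind_connection_kwargs(conn_kwargs):
    # Sketch: pass "-c statement_timeout=0" through the libpq "options"
    # parameter, so a server-level statement_timeout cannot cancel
    # statements executed during pg_rewind.
    kwargs = dict(conn_kwargs)
    options = kwargs.get('options', '')
    kwargs['options'] = (options + ' -c statement_timeout=0').strip()
    return kwargs
```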
* use `SHOW primary_conninfo` instead of parsing config file on pg12
* strip out standby and recovery parameters from postgresql.auto.conf before starting the postgres 12
Patroni config remains backward compatible.
Even though, for example, `restore_command` was converted to a GUC starting from PostgreSQL 12, you can still keep it in the `postgresql.recovery_conf` section of the Patroni configuration.
If you put it into `postgresql.parameters.restore_command`, that will also work, but it is important not to mix both ways:
```yaml
# is OK
postgresql:
  parameters:
    restore_command: my_restore_command
    archive_cleanup_command: my_archive_cleanup_command

# is OK
postgresql:
  recovery_conf:
    restore_command: my_restore_command
    archive_cleanup_command: my_archive_cleanup_command

# is NOT ok
postgresql:
  parameters:
    restore_command: my_restore_command
  recovery_conf:
    archive_cleanup_command: my_archive_cleanup_command
```
On every replica Patroni periodically opens the recovery.conf file and checks that it contains the correct primary_conninfo.
So far the correctness check was pretty naive: it basically did a full string comparison. That could lead to a restart of the replica when Patroni is "joining" an already running postgres process.
Instead of comparing strings, we now parse the actual primary_conninfo value from the file and check that all parameters match the desired value.
In addition, we stop reading and parsing recovery.conf on every iteration if its modification time didn't change.
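The approach can be sketched as follows (the conninfo parser here is deliberately naive and does not handle quoting or escapes; the class and method names are illustrative, not Patroni's actual ones):

```python
import os


def parse_conninfo(value):
    # naive parser: space-separated key=value pairs, no quoting support
    return dict(p.split('=', 1) for p in value.split())


class RecoveryConfChecker:
    def __init__(self, path):
        self.path = path
        self._mtime = None
        self._conninfo = {}

    def primary_conninfo(self):
        # re-read and re-parse the file only when its mtime changed
        mtime = os.stat(self.path).st_mtime
        if mtime != self._mtime:
            self._mtime = mtime
            self._conninfo = {}
            with open(self.path) as f:
                for line in f:
                    if line.startswith('primary_conninfo'):
                        value = line.split('=', 1)[1].strip().strip("'")
                        self._conninfo = parse_conninfo(value)
        return self._conninfo

    def matches(self, desired):
        # compare parameter-by-parameter instead of full-string equality
        return self.primary_conninfo() == desired
```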
The main purpose of this PR is to simplify #1068.
It is mostly necessary for the future support of pg12, where there will be no recovery.conf anymore, but the `keep_existing_recovery_conf` parameter still needs to be supported for backward compatibility.
A few times we observed that the Patroni HA loop was blocked for a few minutes because it could not write logs to stderr. This is a very rare condition, which we have hit so far only on k8s. This commit makes Patroni resilient to such problems: all log messages are first written into an in-memory queue and later asynchronously flushed to stderr or a file from a separate thread.
The maximum queue size is configurable, with a default of 1000. This should be enough to keep more than one hour of log messages with default settings when the Patroni cluster operates normally (without big issues).
If the queue reaches its maximum size, further log messages are discarded until the queue size is reduced. The number of discarded messages is reported to the log later.
In addition to that, the number of non-flushed and discarded messages (if there are any), will be reported via Patroni REST API as:
```json
"logger_queue_size": X,
"logger_records_lost": Y`
```
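The mechanism described above can be sketched with the standard library's `QueueHandler`/`QueueListener` (a simplified illustration, not Patroni's actual code; the class name is hypothetical):

```python
import logging
import queue
from logging.handlers import QueueHandler, QueueListener


class DroppingQueueHandler(QueueHandler):
    """Sketch: bounded, non-blocking log queue that drops records when full."""

    def __init__(self, log_queue):
        super().__init__(log_queue)
        self.records_lost = 0

    def enqueue(self, record):
        try:
            self.queue.put_nowait(record)
        except queue.Full:
            # never block the caller: drop the record and count the loss
            self.records_lost += 1


# wiring: a listener thread drains the queue into the real handler
log_queue = queue.Queue(maxsize=1000)
handler = DroppingQueueHandler(log_queue)
listener = QueueListener(log_queue, logging.StreamHandler())
```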
The `compare_values` function is already smart enough to handle all the tricky cases where the unit is not only `B` or `kB` but also blocks. Therefore we don't need to query `wal_segment_size` from pg_settings.
Log the final exception when either the number of attempts or the timeout is reached; this will hopefully help to debug issues when communication with the DCS fails.
* integer variables could be specified as floats: work_mem = '30.1GB'
* memory-based units could be in B and MB
* time-based variables now allow microseconds (us)
* vacuum_cost_delay and autovacuum_vacuum_cost_delay converted to real with base unit ms
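A sketch of such unit-aware parsing (the unit tables and function name are illustrative, not Patroni's actual implementation):

```python
import re

# illustrative conversion tables: memory in bytes, time in milliseconds
MEMORY_UNITS = {'B': 1, 'kB': 1024, 'MB': 1024 ** 2,
                'GB': 1024 ** 3, 'TB': 1024 ** 4}
TIME_UNITS = {'us': 0.001, 'ms': 1, 's': 1000,
              'min': 60000, 'h': 3600000, 'd': 86400000}


def parse_with_units(value, units):
    # accepts integers and floats ("30.1GB"), with or without a unit suffix
    m = re.match(r'^\s*(\d+(?:\.\d+)?)\s*([a-zA-Z]+)?\s*$', str(value))
    if not m:
        return None
    number = float(m.group(1))
    unit = m.group(2)
    factor = units.get(unit, 1 if unit is None else None)
    return None if factor is None else number * factor
```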
* Convert postgresql.py into a package
* Factor out cancellable process into a separate class
* Factor out connection handler into a separate class
* Move postmaster into postgresql package
* Factor out pg_rewind into a separate class
* Factor out bootstrap into a separate class
* Factor out slots handler into a separate class
* Factor out postgresql config handler into a separate class
* Move callback_executor into postgresql package
This is just a careful refactoring, without code changes.
1. use the default port 5432 when only `standby_cluster.host` is defined
2. check whether a standby-cluster replica can be bootstrapped without a connection to the standby-cluster leader against the `create_replica_methods` defined in the `standby_cluster` config instead of the `postgresql` section.
3. Don't fall back to the `create_replica_methods` defined in the `postgresql` section when bootstrapping a member of the standby cluster.
4. Make sure we specify the database when connecting to the leader.
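For reference, a standby-cluster configuration exercising these rules might look like this (hosts and method names are illustrative):

```yaml
bootstrap:
  dcs:
    standby_cluster:
      host: 10.0.0.1          # port defaults to 5432 when omitted
      create_replica_methods:  # consulted instead of the postgresql section
        - basebackup
```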
* expose the current patroni version in DCS
* expose `checkpoint_after_promote` flag in DCS as an indicator that pg_rewind could be safely executed
* other nodes will wait until this flag is set instead of connecting as superuser and issuing the CHECKPOINT
* define `postgresql.authentication.rewind` with credentials for pg_rewind in patroni configuration files.
* create user for pg_rewind if postgres is 11+
* grant execute on functions required for pg_rewind to rewind user
* make it possible to create users without passwords
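The rewind credentials section would look similar to this (the values are placeholders):

```yaml
postgresql:
  authentication:
    rewind:
      username: rewind_user
      password: rewind_password
```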
* put `krbsrvname` into the connection string if it is specified in the config
* update postgres?.yml example files to mention `krbsrvname`
This functionality works similarly to the `pg_hba`:
If the `postgresql.pg_ident` is defined in the config file or DCS, Patroni will write its value to pg_ident.conf, however, if `postgresql.parameters.ident_file` is defined, Patroni will assume that pg_ident is managed from outside and not update the file.
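For example, the managed pg_ident.conf contents could be specified as a list of lines (the map and user names here are placeholders):

```yaml
postgresql:
  pg_ident:
    - mapname1 systemuser1 pguser1
```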
Using `ssl.wrap_socket` is deprecated and was still allowing soon-to-be-deprecated protocols like TLS 1.1.
Now using `ssl.create_default_context()` to produce a secure SSL context to wrap the REST API server's socket.
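A minimal sketch of this change (the function name and file paths are illustrative):

```python
import ssl


def make_server_context(certfile=None, keyfile=None):
    # Sketch of a replacement for the deprecated ssl.wrap_socket():
    # a context with secure defaults (old SSL/TLS protocols disabled)
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)
    return ctx

# usage (sketch):
# secure_sock = make_server_context('server.crt', 'server.key') \
#     .wrap_socket(sock, server_side=True)
```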
Patroni is great behind a load balancer like HAProxy. API routes are handy to direct the traffic to a valid node depending on its role. We could have one route for writes directed to the primary node and one route for reads directed to the replicas. In a two nodes setup, when we lose a node, there's one host left: the primary. Then, the read traffic will be dropped, even if the primary can handle it. This commit adds read-only and read-write routes. Read-only route enables reads on the primary. Read-write route is an alias for '/master', '/primary' or '/leader' route.
As @ants complained on Slack, it breaks the workflow when the same location is used for bootstrap and further archiving.
For the pg_upgrade we can achieve the same results by modifying postgres config in the upgrade script, which is part of spilo.
The recently released psycopg2 was split into two different packages, psycopg2 and psycopg2-binary, which could be installed at the same time into the same place on the filesystem. To reduce the dependency-hell problem, we let the user choose how to install psycopg2. There are a few options available, and they are reflected in the documentation.
This PR also changes the following behavior:
* `pip install patroni` will fail if psycopg2 is not installed
* Patroni will check psycopg2 upon start and fail if it can't be found or is outdated.
Closes https://github.com/zalando/patroni/issues/1021
1. Fix race condition on shutdown. It is very annoying when you cancel behave tests but postgres remains running.
2. Dump pg_controldata output to logs when "recovering" stopped postgres. It will help to investigate some annoying issues.
First of all, this patch changes the behavior of the `on_start`/`on_restart` callbacks: they will be called only when postgres is started or restarted without a role change. When the member is promoted or demoted, only the `on_role_change` callback will be executed.
Before this change, `on_role_change` was never called for a standby leader; only `on_start`/`on_restart` were, and with a wrong role argument.
In addition, the REST API will return the `standby_leader` role for the leader of the standby cluster.
Closes https://github.com/zalando/patroni/issues/988