* Add failsafe_mode_is_active to /patroni and /metrics
* Add patroni_primary to /metrics
* Add examples showing that failsafe_mode_is_active and cluster_unlocked
are only shown for /patroni when the value is "true"
* Update /patroni and /config examples
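For reference, a minimal way to peek at both endpoints (a sketch assuming the REST API listens on the default localhost:8008):
```python
# Minimal sketch, assuming the Patroni REST API listens on the default
# localhost:8008: fetch the monitoring endpoints and print their payloads.
import json
from urllib.request import urlopen

with urlopen("http://localhost:8008/patroni", timeout=3) as resp:
    status = json.load(resp)
# cluster_unlocked and failsafe_mode_is_active show up here only when true
print(json.dumps(status, indent=2))

with urlopen("http://localhost:8008/metrics", timeout=3) as resp:
    print(resp.read().decode())  # includes patroni_primary among the exported metrics
```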
`Cluster.get_replication_slots()` didn't take into account that there cannot be logical replication slots on replicas in a standby cluster. It was only skipping logical slots for the standby_leader, while replicas still expected to have to copy them over.
Also, on replicas in a standby cluster these logical slots were falsely added to the `_replication_slots` dict.
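A rough sketch of the intended filtering (illustrative names, not the actual Patroni code):
```python
# Rough sketch, illustrative only (not the actual Patroni code): in a standby
# cluster no member can own logical replication slots, so they are skipped for
# every node, not only for the standby leader.
def filter_slots(slot_definitions: dict, is_standby_cluster: bool) -> dict:
    if not is_standby_cluster:
        return dict(slot_definitions)
    return {name: params for name, params in slot_definitions.items()
            if params.get('type') != 'logical'}
```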
1. Stop using the same cursor all the time; sharing it creates problems when it is not carefully used from different threads.
2. Introduce a `query()` method in the `Connection` class and make it return a result set when possible.
3. Refactor most of the code that relies (directly or indirectly) on the `Connection` object to use the `query()` method as much as possible.
This refactoring reduces code complexity and will help with the future introduction of a separate database connection for the REST API thread. The latter will improve reliability when the system is under significant stress, i.e. when simple monitoring queries take seconds to execute and the REST API starts blocking the main thread.
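A rough sketch of the shape the new `query()` method could take (simplified, not the actual Patroni code):
```python
# Simplified sketch, not the actual Patroni code: every call opens its own
# cursor, so concurrent callers never share cursor state, and a result set is
# returned whenever the statement produces one.
class Connection:
    def __init__(self, conn):
        self._conn = conn  # an already established DB-API connection

    def query(self, sql, *params):
        with self._conn.cursor() as cur:
            cur.execute(sql, params or None)
            if cur.description is not None:  # the statement returned rows
                return cur.fetchall()
```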
Prior to this commit, `IntValidator` would always consider the value `0` invalid, even when it was within the allowed range.
The problem was that `parse_int` was returning `0` in the following line:
```python
value = parse_int(value, self.base_unit) or ""
```
However, since `0` is falsy, the `or ""` part evaluated to an empty string.
As `parse_int` returns an `int` if it is able to parse the value, or `None` otherwise, the `isinstance(value, int)` check is enough to error out when the value is not a valid `int`.
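A condensed, self-contained sketch of the idea behind the fix (illustrative names and error handling, not the actual validator code):
```python
# Condensed, self-contained sketch (illustrative names, not the actual
# validator code): parse_int() returns an int on success and None otherwise,
# so an isinstance check is enough -- no `or ""` fallback that would also
# reject a perfectly valid 0.
from typing import Optional


def parse_int(value: str) -> Optional[int]:
    try:
        return int(value)
    except ValueError:
        return None


def validate(raw: str) -> int:
    value = parse_int(raw)
    if not isinstance(value, int):
        raise ValueError(f"{raw!r} is not a valid integer")
    return value


assert validate("0") == 0  # previously rejected because 0 is falsy
```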
Closes #2817
Besides adding docstrings to `patroni.config`, a few side changes
have been applied:
* Reference `config_file` property instead of internal attribute
`_config_file` in method `_load_config_file`;
* Have `_AUTH_ALLOWED_PARAMETERS[:2]` as default value of `params`
argument in method `_get_auth` instead of using
`params or _AUTH_ALLOWED_PARAMETERS[:2]` in the body;
* Use `len(PATRONI_ENV_PREFIX)` instead of a hard-coded `8` when
removing the prefix from environment variable names;
* Fix documentation of the `wal_log_hints` setting. The previous docs
  mentioned it was a dynamic setting that could be changed. However, it is
  managed by Patroni, which forces it to `on`.
References: PAT-123.
Expanding on the addition of docstrings in the code, this adds Python module API docs to the Sphinx documentation.
A developer can preview what this might look like by running this locally:
```
tox -m docs
```
The option `-W` is added to the tox env so that warning messages are considered errors.
Adds doc generation using the above method to the test GitHub workflow to catch documentation problems on PRs.
Some docstrings have been reformatted and fixed to resolve the errors raised by the above setup.
For historical reasons (the flush LSN function was not available before 9.6) we used the `pg_current_wal_lsn()`/`pg_current_xlog_location()` functions to get the current WAL LSN on the primary. But this LSN is not necessarily synced to disk, and could be lost if the primary node crashed.
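The commit does not spell out the replacement here, but the WAL flush LSN (`pg_current_wal_flush_lsn()` since Postgres 10, `pg_current_xlog_flush_location()` in 9.6) is the position guaranteed to be on disk. A minimal sketch comparing the two, assuming psycopg2 and a local connection to the primary:
```python
# Minimal sketch, assuming psycopg2 and a local connection to the primary:
# pg_current_wal_lsn() reports the write position, which may not yet be
# flushed, while pg_current_wal_flush_lsn() reports what is durably on disk.
import psycopg2

with psycopg2.connect("dbname=postgres") as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT pg_current_wal_lsn(), pg_current_wal_flush_lsn()")
        write_lsn, flush_lsn = cur.fetchone()
        print(f"write LSN: {write_lsn}, flush LSN: {flush_lsn}")
```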
A common pitfall for new users of Patroni is that the `bootstrap.dcs` section is only used to initialize the configuration in the DCS. This moves the note about that into an info block so it is more visible to the reader.
New patroni.py option that allows one to:
* generate a patroni.yml configuration file with the values from a running cluster
* generate a sample patroni.yml configuration file
* Refactor is_failover_possible()
Move all the member filtering inside the function.
* Remove the check_synchronous parameter
* Add sync_mode_is_active() method and use it wherever appropriate
* Reduce nesting
Co-authored-by: Alexander Kukushkin <cyberdemn@gmail.com>
Consider the following scenario (with failsafe_mode enabled):
0. node1 (primary) - node2 (replica)
1. stop all etcd nodes; wait ttl seconds; start all etcd nodes; (node2's failsafe will contain the info about node1)
2. switchover to node2; (node2's failsafe still contains the info about node1)
3. stop all etcd nodes; wait ttl seconds; start all etcd nodes;
4. node2 will demote itself because it considers node1 to be the primary
Resetting failsafe state when running as a primary fixes the issue.
1. Unit tests should not try to access any external resources.
2. Doing so results in significantly longer execution time of the unit tests on Windows.
In addition to that, perform the request with a 3s timeout. Usually this is more than enough to figure out whether a resource is accessible.
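A minimal sketch of such a probe (urllib here is just an example client):
```python
# Minimal sketch (urllib is just an example client): a 3s timeout is usually
# enough to decide whether a resource is reachable without stalling the tests.
from urllib.request import urlopen


def resource_is_accessible(url: str) -> bool:
    try:
        with urlopen(url, timeout=3):
            return True
    except OSError:  # URLError, timeouts and connection errors derive from OSError
        return False
```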
Followup on #2724
This PR is an attempt at refactoring the docs about migrating to Patroni.
These are a few enhancements that we propose through this PR:
* The docs used to say the procedure could only be performed on a single-node cluster. We changed that so the procedure considers a cluster composed of a primary and standbys;
* Teach how to deal with pre-existing replication slots;
* Explain how to create the user for `pg_rewind`, if the user intends to enable `use_pg_rewind`.
References: PAT-143.
Postgres supports two permission modes for its files:
1. owner only
2. group readable
By default the first one is used because it provides better security. But sometimes people want to run a backup tool as a user different from postgres, and in that case the second option becomes very useful. Unfortunately it didn't work correctly, because Patroni was creating files with owner-only permissions.
This PR changes the behavior: permissions on files and directories created by Patroni are now calculated based on the permissions of PGDATA, i.e. they will get group-readable access when necessary.
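A rough sketch of the described calculation (not the actual Patroni code):
```python
# Rough sketch, not the actual Patroni code: derive the mode for newly created
# files and directories from the permissions of PGDATA, so a group-readable
# data directory keeps group access on everything Patroni creates inside it.
import os
import stat


def modes_for(pgdata: str):
    pgdata_mode = stat.S_IMODE(os.stat(pgdata).st_mode)
    if pgdata_mode & stat.S_IRWXG:      # PGDATA allows group access
        return 0o640, 0o750             # file mode, directory mode
    return 0o600, 0o700                 # owner-only (the default)
```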
Close #1899, Close #1901
The docs of `slots` configuration used to have this mention:
```
my_slot_name: the name of replication slot. If the permanent slot name
matches with the name of the current primary it will not be created.
Everything else is the responsibility of the operator to make sure that
there are no clashes in names between replication slots automatically
created by Patroni for members and permanent replication slots.
```
However, that is not true: Patroni does not check for clashes between
`my_slot_name` and the names of replication slots created for replicating
changes among members. If you specify a slot name that clashes with the name
of a replication slot used by a member, Patroni will make the slot permanent
on the primary even if the member key expires from the DCS.
Through this commit we also enhance the docs by explaining that permanent
physical slots are maintained only on the primary, while logical replication
slots are copied from the primary to standbys.
Signed-off-by: Israel Barth Rubio <israel.barth@enterprisedb.com>