735 Commits

Author SHA1 Message Date
Alexander Kukushkin
c2a78ee652 Bugfix: GET /cluster was showing stale member info in zookeeper (#1573)
The ZooKeeper implementation heavily relies on a cached version of the cluster view in order to minimize the number of requests. Having stale member information is fine for the Patroni workflow, because it basically relies only on member names and tags.

The `GET /cluster` endpoint is a different case. Being exposed to the outside, it might be used for monitoring purposes, and therefore we should show up-to-date member information.
2020-06-05 09:23:54 +02:00
Alexander Kukushkin
cd1b2741fa Improve timeline divergence check (#1563)
We don't need to rewind when:
1. the replayed location of the former replica is not ahead of the switchpoint
2. the end of the checkpoint record of the former primary is the same as the switchpoint

In order to get the end of the checkpoint record we run `pg_waldump` and parse its output, as sketched below.

Close https://github.com/zalando/patroni/issues/1493
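Roughly, the parsing can be sketched as follows (an illustrative sketch, not the actual Patroni code; the helper name and the exact `pg_waldump` invocation are assumptions):

```python
import re
import subprocess

def checkpoint_end_lsn(pg_waldump, wal_dir, timeline, checkpoint_lsn):
    """Illustrative sketch: find where the shutdown checkpoint record ends.

    We ask pg_waldump to dump two records starting at the checkpoint LSN;
    the `lsn:` of the second record marks the end of the checkpoint record.
    """
    out = subprocess.run([pg_waldump, '-p', wal_dir, '-t', str(timeline),
                          '-s', checkpoint_lsn, '-n', '2'],
                         capture_output=True, text=True, check=True).stdout
    lsns = re.findall(r'\blsn: ([0-9A-Fa-f]+/[0-9A-Fa-f]+)', out)
    # lsns[0] is the checkpoint record itself, lsns[1] is where it ends
    return lsns[1] if len(lsns) > 1 else None
```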
2020-05-29 14:15:10 +02:00
Alexander Kukushkin
98c2081c67 Detect a new timeline in the standby cluster (#1522)
The standby cluster doesn't know about leader elections in the main cluster, and therefore the usual mechanisms for detecting divergence don't work. For example, it could happen that the standby cluster is ahead of the new primary of the main cluster and must be rewound.
There is a way to detect that a new timeline has been created: check for the presence of a new history file in pg_wal. If the new file is there, we start the usual procedure of making sure that we can continue streaming, or run pg_rewind otherwise.
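A minimal sketch of that check (the function name and the way the timeline number is obtained are assumed for illustration):

```python
import os

def new_history_file_exists(data_dir, new_timeline):
    """Check whether a history file for the new timeline appeared in pg_wal.

    History files are named after the timeline as an 8-digit hex number,
    e.g. 00000003.history for timeline 3.
    """
    history = '{0:08X}.history'.format(new_timeline)
    return os.path.exists(os.path.join(data_dir, 'pg_wal', history))
```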
2020-05-29 14:14:47 +02:00
Alexander Kukushkin
c6207933d1 Properly handle the exception raised from refresh_session (#1531)
`touch_member()` could be called from the finally block of `_run_cycle()`. If it raised an exception, the whole Patroni process was crashing.
In order to avoid future crashes we wrap `_run_cycle()` into a try..except block and ask the user to report a BUG.

Close https://github.com/zalando/patroni/issues/1529
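Conceptually the guard looks like this (a sketch; the wording of the log message is illustrative, not the exact text used by Patroni):

```python
import logging

logger = logging.getLogger(__name__)

def run_cycle_guarded(ha):
    """Run one HA cycle and never let an unexpected exception kill Patroni."""
    try:
        return ha._run_cycle()
    except Exception:
        logger.exception('Unexpected exception in the HA loop')
        return 'Unexpected exception raised, please report it as a BUG'
```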
2020-05-29 14:14:11 +02:00
Alexander Kukushkin
6a0d2924a0 Separate received and replayed location (#1514)
When deciding whether a running replica is able to stream from the new primary or must be rewound, we should use the replayed location; therefore we extract the received and replayed locations independently.

Reuse the part of the query that extracts the timeline and locations in the REST API.
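On PostgreSQL 10+ the two locations can be fetched independently with a query along these lines (a simplified sketch; the real query also has to handle older releases, where the functions are named `pg_last_xlog_*_location()`):

```python
# Simplified sketch: fetch the received and replayed locations separately.
RECEIVE_REPLAY_LSN_SQL = """\
SELECT pg_catalog.pg_last_wal_receive_lsn(),
       pg_catalog.pg_last_wal_replay_lsn()"""

def receive_replay_lsn(cursor):
    cursor.execute(RECEIVE_REPLAY_LSN_SQL)
    received, replayed = cursor.fetchone()
    return received, replayed
```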
2020-05-27 13:33:37 +02:00
Alexander Kukushkin
ad5c686c11 Take advantage of pg_stat_wal_receiver (#1513)
So far Patroni was parsing `recovery.conf` or querying `pg_settings` in order to get the current values of recovery parameters. On PostgreSQL earlier than 12 it could easily happen that the value of `primary_conninfo` in `recovery.conf` has nothing to do with reality. Luckily for us, on PostgreSQL 9.6+ there is a `pg_stat_wal_receiver` view, which contains the current values of `primary_conninfo` and `primary_slot_name`. The password field is masked, but this is fine, because authentication happens only when opening the connection. All other parameters are compared as usual.

Another advantage of `pg_stat_wal_receiver` is that it contains the current timeline; therefore on 9.6+ we don't need to use the replication connection trick if the walreceiver process is alive.

If there is no walreceiver process or it is not streaming, we stick to the old methods.
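A simplified sketch of reading the view (the listed columns exist in `pg_stat_wal_receiver` since 9.6; the surrounding function is an assumption):

```python
WALRECEIVER_SQL = """\
SELECT status, received_tli, slot_name, conninfo
  FROM pg_catalog.pg_stat_wal_receiver"""

def walreceiver_state(cursor):
    """Return walreceiver status, timeline and effective conninfo/slot name.

    If the query returns no row, there is no walreceiver process and the
    caller falls back to the old methods (recovery.conf / pg_settings).
    """
    cursor.execute(WALRECEIVER_SQL)
    row = cursor.fetchone()
    if row is None:
        return None
    status, timeline, slot_name, conninfo = row
    return {'status': status, 'timeline': timeline,
            'slot_name': slot_name, 'conninfo': conninfo}
```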
2020-05-15 18:04:24 +02:00
Alexander Kukushkin
08b3d5d20d Move ensure_clean_shutdown into rewind module (#1528)
Logically fits there better
2020-05-15 16:22:57 +02:00
Alexander Kukushkin
30aa355eb5 Shorten and beautify history log output (#1526)
When Patroni is trying to figure out whether pg_rewind is necessary, it could write the content of the history file from the primary into the log. The history file grows with every failover/switchover and eventually starts taking up too many lines in the log, most of which are not very useful.
Instead of showing the raw data, we will show only 3 lines before the current replica timeline and 2 lines after.
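The trimming itself is just slicing around the entry for the current replica timeline, roughly like this (a sketch, assuming the history file has already been parsed into one entry per line with the timeline number as the first field):

```python
def trim_history(lines, replica_timeline, before=3, after=2):
    """Keep only a few history entries around the current replica timeline."""
    idx = next((i for i, line in enumerate(lines)
                if int(line[0]) == replica_timeline), len(lines))
    return lines[max(0, idx - before):idx + after + 1]
```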
2020-05-15 16:14:25 +02:00
Alexander Kukushkin
7cf0b753ab Update optime/leader with checkpoint location after clean shut down (#1527)
Potentially this information could be used in order to make sure that there is no data loss on switchover.
2020-05-15 16:13:16 +02:00
Alexander Kukushkin
285bffc68d Use pg_rewind with --restore-target-wal on 13 if possible (#1525)
On PostgreSQL 13 check if restore_command is configured and tell pg_rewind to use it
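A sketch of how the command line could be assembled (`--restore-target-wal` is a real pg_rewind option on PostgreSQL 13; the surrounding function is an assumption):

```python
def build_pg_rewind_command(pg_rewind, data_dir, source_conninfo,
                            major_version, restore_command):
    """Build a pg_rewind command, adding --restore-target-wal on v13+.

    With restore_command configured, pg_rewind on PostgreSQL 13 can fetch
    missing WAL segments from the archive instead of failing.
    """
    cmd = [pg_rewind, '-D', data_dir, '--source-server', source_conninfo]
    if major_version >= 130000 and restore_command:
        cmd.append('--restore-target-wal')
    return cmd
```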
2020-05-15 16:05:07 +02:00
Alexander Kukushkin
e6ef3c340a Wake up the main thread after checkpoint is done (#1524)
Replicas wait for the checkpoint indication via the leader's member key in DCS. The key is normally updated only once per HA loop.
Without waking the main thread up, replicas would have to wait up to `loop_wait` seconds longer than necessary.
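The wake-up boils down to signalling the event the main HA loop sleeps on, roughly as below (a sketch with assumed names; the real loop uses Patroni's own wait/wakeup primitives):

```python
import threading

wakeup = threading.Event()  # the HA loop sleeps on this event


def run_cycle():
    """Placeholder for one iteration of the HA loop."""


def ha_loop(loop_wait):
    while True:
        run_cycle()
        wakeup.wait(loop_wait)   # sleep, but allow an early wake-up
        wakeup.clear()


def on_checkpoint_done():
    # Called once CHECKPOINT has finished: wake the HA loop immediately so
    # the leader updates its member key without waiting up to loop_wait.
    wakeup.set()
```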
2020-05-15 16:02:17 +02:00
Alexander Kukushkin
0d957076ca Improve compatibility with PostgreSQL 12 and 13 (#1523)
There were two new connection parameters introduced (see the sketch after this list):
1. `gssencmode` in 12
2. `channel_binding` in 13
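Conceptually these parameters just need to be version-gated when building or comparing connection strings; a sketch (the mapping and the helper function are assumptions, the version numbers come from the list above):

```python
# Minimum server major version (numeric form) for the newer conninfo parameters.
CONNINFO_PARAM_SINCE = {
    'gssencmode': 120000,        # introduced in PostgreSQL 12
    'channel_binding': 130000,   # introduced in PostgreSQL 13
}

def supported_conninfo(params, server_version):
    """Drop conninfo parameters that the target version doesn't know about."""
    return {name: value for name, value in params.items()
            if server_version >= CONNINFO_PARAM_SINCE.get(name, 0)}
```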
2020-05-13 13:13:25 +02:00
Alexander Kukushkin
fe23d1f2d0 Release 1.6.5 (#1503)
* bump version
* update release notes
* implement missing unit-tests and format code.
2020-04-23 16:02:01 +02:00
Alexander Kukushkin
be4c078d95 Etcd smart refresh members (#1499)
In dynamic environments it is common that during a rolling upgrade etcd nodes change their IP addresses. If the etcd node Patroni is currently connected to is upgraded last, it could happen that the cached topology doesn't contain any live nodes anymore; the request can't be retried and fails completely, usually resulting in demotion of the primary.

In order to partially overcome the problem, Patroni is already doing a periodic (every 5 minutes) rediscovery of the etcd cluster topology, but in case of very fast node rotation there was still a possibility to hit the issue.

This PR is an attempt to address the problem. If the list of nodes is exhausted, Patroni tries to perform initial discovery via an external mechanism, like resolving A or SRV DNS records, and if the new list is different from the original, Patroni uses it as the new etcd cluster topology.

In order to deal with TCP issues the connect_timeout is set to max(read_timeout/2, 1). This makes the list of members exhaust faster, but leaves time to perform topology rediscovery and another attempt.

The third issue addressed by this PR: it could happen that the DNS names of the etcd nodes didn't change but the IP addresses are new, therefore we clean up the internal DNS cache when doing topology rediscovery.

Besides that, this commit makes the `_machines_cache` property pretty much static; it is updated only when the topology has changed, which helps to avoid concurrency issues.
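The timeout and rediscovery logic described above can be sketched as follows (illustrative helpers, not the real dcs.etcd code):

```python
def connect_timeout_for(read_timeout):
    """connect_timeout = max(read_timeout / 2, 1), as described above.

    A shorter connect timeout makes a list of dead nodes exhaust faster,
    leaving time for topology rediscovery and one more attempt.
    """
    return max(read_timeout / 2.0, 1)


def maybe_refresh_topology(current_nodes, rediscovered_nodes):
    """Adopt the externally discovered topology (A/SRV DNS records) only if
    it actually differs from the cached one."""
    if set(rediscovered_nodes) != set(current_nodes):
        return list(rediscovered_nodes)   # new etcd cluster topology
    return list(current_nodes)
```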
2020-04-23 12:51:05 +02:00
Alexander Kukushkin
80fbe90056 Issue CHECKPOINT explicitly after promote happened (#1498)
It is safe to call pg_rewind on a replica only when pg_control on the primary contains information about the latest timeline. Postgres usually does an immediate checkpoint right after promote, and in most cases it works just fine. Unfortunately we regularly receive complaints that it takes too long (minutes) until the checkpoint is done and replicas can't perform a rewind, while doing the checkpoint manually helped immediately. So Patroni starts doing the same: when the promotion has happened and postgres is no longer running in recovery, we explicitly issue a CHECKPOINT.

We are intentionally not using the AsyncExecutor here, because we want the HA loop to continue its normal flow.
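The essence of the change fits into a couple of lines (a sketch; cursor handling is simplified):

```python
def checkpoint_after_promote(cursor):
    """Issue an explicit CHECKPOINT once postgres has left recovery.

    This makes pg_control on the new primary reflect the latest timeline
    quickly, so replicas can run pg_rewind without waiting for the
    post-promote checkpoint to finish on its own.
    """
    cursor.execute('SELECT pg_catalog.pg_is_in_recovery()')
    if not cursor.fetchone()[0]:
        cursor.execute('CHECKPOINT')
```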
2020-04-20 11:55:05 +02:00
Alexander Kukushkin
337f9efc9e Improve patronictl list output (#1486)
The redundant `Cluster` column is now presented in the table header.

Depending on the output format, `Tags` are serialized differently:
* For the *pretty* format YAML is used, every element on a new line
* For the *tsv* format YAML is also used, but all elements are on the same line (similar to JSON)
* For the *json* and *yaml* formats `Tags` are serialized into the appropriate format.

<details><summary>Examples of output in pretty formats:</summary>

```bash
$ patronictl list
+ Cluster: batman (6813309862653668387) +---------+----+-----------+---------------------+
|    Member   |      Host      |  Role  |  State  | TL | Lag in MB | Tags                |
+-------------+----------------+--------+---------+----+-----------+---------------------+
| postgresql0 | 127.0.0.1:5432 | Leader | running |  3 |           | clonefrom: true     |
|             |                |        |         |    |           | noloadbalance: true |
|             |                |        |         |    |           | nosync: true        |
+-------------+----------------+--------+---------+----+-----------+---------------------+
| postgresql1 | 127.0.0.1:5433 |        | running |  3 |       0.0 |                     |
+-------------+----------------+--------+---------+----+-----------+---------------------+

$ patronictl list badclustername
+ Cluster: badclustername (uninitialized) ------+
| Member | Host | Role | State | TL | Lag in MB |
+--------+------+------+-------+----+-----------+
+--------+------+------+-------+----+-----------+
```
</details>

<details><summary>Example in tsv format:</summary>

```bash
Cluster Member  Host    Role    State   TL      Lag in MB       Pending restart Tags
batman  postgresql0     127.0.0.1:5432  Leader  running 2
batman  postgresql1     127.0.0.1:5433          running 2       0               {clonefrom: true, nofailover: true, noloadbalance: true, replicatefrom: postgresql0}
batman  postgresql2     127.0.0.1:5434          running 2       0       *       {replicatefrom: postgres1}
```
</details>

In addition to that, the `patronictl list` command stops showing keys with empty values in the `json` and `yaml` formats.
<details><summary>Examples:</summary>

```yaml
$ patronictl list -f yaml
- Cluster: batman
  Host: 127.0.0.1:5432
  Member: postgresql0
  Role: Leader
  State: running
  TL: 2
- Cluster: batman
  Host: 127.0.0.1:5433
  Lag in MB: 0
  Member: postgresql1
  State: running
  TL: 2
  Tags:
    clonefrom: true
    nofailover: true
    noloadbalance: true
    replicatefrom: postgresql0
- Cluster: batman
  Host: 127.0.0.1:5434
  Lag in MB: 0
  Member: postgresql2
  Pending restart: '*'
  State: running
  TL: 2
  Tags:
    replicatefrom: postgres1
```

```json
$ patronictl list -f json | jq .
[
  {
    "Cluster": "batman",
    "Member": "postgresql0",
    "Host": "127.0.0.1:5432",
    "Role": "Leader",
    "State": "running",
    "TL": 2
  },
  {
    "Cluster": "batman",
    "Member": "postgresql1",
    "Host": "127.0.0.1:5433",
    "State": "running",
    "TL": 2,
    "Lag in MB": 0,
    "Tags": {
      "nofailover": true,
      "noloadbalance": true,
      "replicatefrom": "postgresql0",
      "clonefrom": true
    }
  },
  {
    "Cluster": "batman",
    "Member": "postgresql2",
    "Host": "127.0.0.1:5434",
    "State": "running",
    "TL": 2,
    "Lag in MB": 0,
    "Pending restart": "*",
    "Tags": {
      "replicatefrom": "postgres1"
    }
  }
]
```
</details>
2020-04-15 12:19:18 +02:00
ksarabu1
e3335bea1a Master stop timeout (#1445)
## Feature: Postgres stop timeout

A switchover/failover operation hangs on the signal_stop (or checkpoint) call when the postmaster doesn't respond or hangs for some reason (issue described in [1371](https://github.com/zalando/patroni/issues/1371)). This leads to service loss for an extended period of time until the hung postmaster starts responding or is killed by some other actor.

### master_stop_timeout

The number of seconds Patroni is allowed to wait when stopping Postgres; effective only when synchronous_mode is enabled. When set to a value > 0 and synchronous_mode is enabled, Patroni sends SIGKILL to the postmaster if the stop operation runs for longer than master_stop_timeout. Set the value according to your durability/availability tradeoff. If the parameter is not set or set to a value <= 0, master_stop_timeout does not apply.
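The resulting stop logic can be sketched roughly as follows (illustrative; `wait_for_exit()` and the other helper names are assumptions):

```python
import signal

def stop_postgres(postmaster, synchronous_mode, master_stop_timeout):
    """Stop postgres, escalating to SIGKILL if the stop hangs (a sketch).

    master_stop_timeout only applies when it is set to a value > 0 and
    synchronous_mode is enabled; otherwise we wait indefinitely as before.
    """
    postmaster.send_signal(signal.SIGTERM)      # fast shutdown request
    timeout = master_stop_timeout \
        if synchronous_mode and master_stop_timeout and master_stop_timeout > 0 \
        else None
    if not postmaster.wait_for_exit(timeout):   # assumed helper: True if exited
        postmaster.send_signal(signal.SIGKILL)  # give up on the hung postmaster
        postmaster.wait_for_exit(None)
```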
2020-04-15 12:18:49 +02:00
Alexander Kukushkin
27cda08ece Improve unit-tests (#1479)
* tests were failing on windows and macos
* improve coverage
2020-04-09 10:34:35 +02:00
Alexander Kukushkin
369a93ce2a Handle cases when conn_url is not defined (#1482)
On K8s, when one of the Patroni pods is starting, there is no valid annotation yet, which could cause a failure in patronictl.
In addition to that, handle the case when the port isn't specified in the standby_cluster configuration.

Close https://github.com/zalando/patroni/issues/1100
Close https://github.com/zalando/patroni/issues/1463
2020-04-09 10:34:12 +02:00
Kaarel Moppel
d58006319b Patronictl - fail if a config file is specified explicitly but not found (#1467)
```bash
$ python3 patronictl.py -c postgresql0.yml list
Error: Provided config file postgresql0.yml not existing or no read rights. Check the -c/--config-file parameter
```
2020-04-01 15:52:43 +02:00
Alexander Kukushkin
b020874486 Small improvement in tests (#1423)
which actually revealed a small issue in the validator
2020-03-10 12:07:40 +01:00
Alexander Kukushkin
ab38ab2e97 Apply 1 second backoff if LIST failed (#1424)
It is mostly necessary to avoid flooding the logs, but it also helps to prevent starvation of the main thread.
2020-03-10 12:07:26 +01:00
Alexander Kukushkin
613634c26b Reset rewind state if postgres started after successful pg_rewind (#1408)
Close https://github.com/zalando/patroni/issues/1406
2020-02-27 12:24:17 +01:00
Alexander Kukushkin
4a29caa9d3 On role change callback didn't fire on failed primary (#1420)
Bug was introduced in https://github.com/zalando/patroni/pull/703
Close https://github.com/zalando/patroni/issues/1418
2020-02-27 12:22:44 +01:00
Alexander Kukushkin
80ce61876e Don't create permanent physical slot with name of the primary (#1392)
It is a common issue that the primary recycles WALs when one of the replicas is down for a long time. So far there were only two solutions to this problem, and neither of them is perfect:
1. Increase `wal_keep_segments`, but it is hard to guess the good value.
2. Use continuous archiving and PITR, but it is not always possible.

This PR introduces a way to solve the problem for static clusters, with a fixed number of nodes and names that never change. You just need to list the names of all nodes in `slots`, so the primary will not remove a slot when the corresponding node is down (not registered in DCS).
Of course, the primary will not create the permanent slot matching its own name.

Usage example: let's assume you have a cluster with nodes named *abc1*, *abc2*, and *abc3*.
You have to run `patronictl edit-config` and put the following snippet into the configuration:
```yaml
slots:
  abc1:
    type: physical
  abc2:
    type: physical
  abc3:
    type: physical
```

If the node *abc2* is the primary, it will always create slots for *abc1* and *abc3* even if they are not running, but will not create slot *abc2*.
Other nodes will behave the same.

Close #280
2020-02-20 10:07:43 +01:00
Igor Yanchenko
ffde403a0a Config validator implemented (#1314) 2020-02-20 09:40:44 +01:00
Michael Banck
f419d73465 Set postgresql.pgpass to ./pgpass (#1386)
This avoids test failures if $HOME is not available (fixes: #1385).
2020-02-13 15:06:36 +01:00
Alexander Kukushkin
6aa3f809d4 Configure keepalive for connections to K8s API (#1366)
If we get nothing from the socket for TTL seconds, the connection should be considered dead.
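On Linux this boils down to setting TCP keepalive socket options on the connection to the K8s API; a sketch (how exactly the TTL is split between idle time, probe interval and probe count is an assumption):

```python
import socket

def keepalive_socket_options(ttl):
    """TCP keepalive options (Linux) so that a silent connection is detected
    as dead after roughly `ttl` seconds."""
    idle = max(int(ttl / 3), 1)       # seconds of silence before probing
    interval = max(int(ttl / 3), 1)   # seconds between probes
    count = 3                         # failed probes before giving up
    return [
        (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1),
        (socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle),
        (socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval),
        (socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count),
    ]
```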
2020-01-27 09:25:08 +01:00
Alexander Kukushkin
902411239f More compatibility with windows (#1367)
* unix-domain sockets are not yet supported
* signal.SIGQUIT doesn't exist
2020-01-24 12:52:55 +01:00
Igor Yanchenko
16fe180ed6 implemented stop signal using pg_ctl for non posix systems (#1342)
Use pg_ctl to send the stop signal on non-POSIX operating systems.
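A sketch of the idea (pg_ctl's `kill` mode delivers a named signal to a PID and works on Windows; the helper itself is an assumption):

```python
import subprocess

# Postgres shutdown modes and the signals they correspond to:
# smart -> TERM, fast -> INT, immediate -> QUIT.
def send_stop_signal(pg_ctl, signal_name, postmaster_pid):
    """Deliver a stop signal via `pg_ctl kill` on non-POSIX systems (a sketch)."""
    return subprocess.call([pg_ctl, 'kill', signal_name, str(postmaster_pid)]) == 0
```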
2020-01-16 14:35:47 +01:00
Alexander Kukushkin
1c4d395d5a Handle exception from Ha.shutdown (#1351)
During shutdown Patroni tries to update its status in the DCS.
If the DCS is inaccessible, an exception might be raised; the lack of exception handling prevented the logger thread from stopping.

Fixes https://github.com/zalando/patroni/issues/1344
2020-01-16 14:34:58 +01:00
Alexander Kukushkin
1461d7d4b8 Allow certain recovery parameters be defined in the custom_conf (#1335)
Fixes https://github.com/zalando/patroni/issues/1333
2020-01-15 12:41:07 +01:00
Igor Yanchenko
ea76a40845 Make sure postgresql.pgpass is a file or it does not exist (#1337)
Also make sure that it is located in a writable directory.
2020-01-15 12:40:41 +01:00
Alexander Kukushkin
16d1ffdde7 Update timeline on standby cluster (#1332)
Fixes https://github.com/zalando/patroni/issues/1031
2019-12-20 12:56:00 +01:00
Igor Yanchenko
26b6e00575 wait option for patronictl reinit implemented (#1339)
Wait for `reinit` to finish if the `--wait` option is used.
Every 2 seconds it polls the status from the Patroni REST API and reports it to the console.
2019-12-20 12:05:39 +01:00
Igor Yanchenko
7ff27d9e10 Make sure unix_socket_directories and stats_temp_directory exist (#1293)
Upon the start of Patroni and Postgres, make sure that unix_socket_directories and stats_temp_directory exist or try to create them. Patroni will exit if it fails to create them.

Close https://github.com/zalando/patroni/issues/863
2019-12-11 12:26:17 +01:00
Alexander Kukushkin
08d6e5e50e BUGFIX: don't leak password when running pg_rewind (#1321)
In addition to that:
* enforce security settings from `postgresql.authentication`
* update release notes
* bump version
* close https://github.com/zalando/patroni/issues/1320
2019-12-05 18:19:38 +01:00
Alexander Kukushkin
0693fe7dd0 Housekeeping (#1315)
* Reduce memory usage by patroni init process
* More cleanup in setup.py
* Implement missing tests
2019-12-04 11:28:46 +01:00
Igor Yanchenko
49d3968c23 Make it possible to configure log level for exception tracebacks (#1311)
If you set `log.traceback_level=DEBUG`, the tracebacks will be visible only when `log.level=DEBUG`. The default behavior remains the same.
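Roughly, this maps onto the standard logging machinery like this (a sketch, not the actual patroni.log code):

```python
import logging

def log_exception(logger, msg, traceback_level=logging.DEBUG):
    """Log the message at ERROR, attaching the traceback only when the logger
    is enabled for `traceback_level` (e.g. traceback_level=DEBUG means the
    traceback shows up only if log.level is DEBUG as well)."""
    logger.error(msg, exc_info=logger.isEnabledFor(traceback_level))
```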
2019-12-03 15:13:42 +01:00
Alexander Kukushkin
f1819443ef Avoid spawning semaphore tracker process (#1299)
We are not using semaphores, therefore we don't need to track them.
2019-12-02 12:16:18 +01:00
Alexander Kukushkin
e1d569ad75 Inherit CaseInsensitiveDict from urllib3 HTTPHeaderDict (#1302)
It might look like a hack, but the API is stable enough and didn't change in the past 3+ years.
2019-12-02 12:14:59 +01:00
Igor Yanchenko
726ee46111 Implemented patroni --version (#1291)
That required a refactoring of the `Config` and `Patroni` classes. Now one has to explicitly create an instance of `Config` before creating `Patroni`.

The Config file can optionally call the validate function.
2019-12-02 12:14:19 +01:00
Alexander Kukushkin
7793887ea7 Fix tests on windows (#1303)
and disable junit, it produces a deprecation warning
2019-11-27 14:57:33 +01:00
Alexander Kukushkin
90a4208390 Get rid of the requests module (#1296)
It wasn't used for anything critical anyway, so it doesn't make a lot of sense to keep it as an explicit dependency.
2019-11-22 15:31:55 +01:00
Alexander Kukushkin
474ac3cc11 Move multiprocessing.set_start_method() back to main (#1295)
It is not possible to call it from a forked process.
2019-11-21 17:28:14 +01:00
Alexander Kukushkin
412c720d3a Avoid importing all DCS modules (#1286)
We will try to import only the module that has a configuration section.
I.e. if there is only a zookeeper section in the config, Patroni will try to import only `patroni.dcs.zookeeper` and skip `etcd`, `consul`, and `kubernetes`.
This approach has two benefits (see the sketch after this list):
1. When no dependencies are installed, Patroni was showing INFO messages like `Failed to import smth`, which looks scary.
2. It reduces memory usage, because some dependencies are heavy.
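A minimal sketch of such on-demand importing (the helper is illustrative, not the real factory in `patroni.dcs`):

```python
import importlib

DCS_SECTIONS = ('etcd', 'zookeeper', 'consul', 'kubernetes')

def load_dcs_module(config):
    """Import only the DCS module whose section is present in the config.

    Unrelated modules (and their heavy dependencies) are never imported, so
    there are no scary 'Failed to import ...' messages and less memory used.
    """
    for name in DCS_SECTIONS:
        if name in config:
            return importlib.import_module('patroni.dcs.' + name)
    raise RuntimeError('No DCS section found in the configuration')
```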
2019-11-21 14:39:37 +01:00
Alexander Kukushkin
183adb7848 Housekeeping (#1284)
* Implement proper tests for `multiprocessing.set_start_method()`
* Exclude some watchdog code from coverage (it is used only for behave tests)
* properly use os.path.join for windows compatibility
* import DCS modules in `features/environment.py` on demand. It allows running behave tests against the chosen DCS without installing all dependencies.
* remove some unused behave code
* fix some minor issues in the dcs.kubernetes module
2019-11-21 13:27:55 +01:00
Alexander Kukushkin
2f9a48fae4 Release 1.6.1 (#1281)
* Bump version to 1.6.1
* Update release notes
2019-11-15 12:48:00 +01:00
Maciej Kowalczyk
efcd05ace2 Use "spawn" multiprocessing start method (#1279)
workaround https://bugs.python.org/issue6721

Fixes #1278
2019-11-15 10:56:18 +01:00
Alexander Kukushkin
66d77697ae Use LIST + WATCH when working with K8s API (#1276)
There is an opinion that LIST requests with a labelSelector to the K8s API are expensive, and Patroni was doing two such requests per HA loop (LIST pods and LIST endpoints/configmaps).
To efficiently detect object changes we will switch to the LIST+WATCH approach.
The initial LIST request populates the ObjectCache and events from the WATCH request update it.

In addition to that, the ObjectCache will be updated after performing UPDATE operations on the K8s objects. To avoid race conditions, every update of the ObjectCache compares the resource_version of the old and the new object and is rejected if the new resource_version value is smaller than the old one.

The disadvantage of such an approach is that it will require keeping three connections to the K8s API from each Patroni Pod (previously it was two).

Yesterday I deployed this feature branch on our biggest K8s cluster, with ~300 Patroni pods.
The CPU Utilization on K8s master nodes immediately dropped from ~20% to ~10% (two times), and the incoming traffic on master nodes dropped ~7-8 times!

Last but not least, we get more or less the same impact on the etcd cluster behind the K8s master nodes: the CPU utilization dropped nearly twofold and the outgoing traffic ~7-8 times.
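The race-condition guard described above is essentially a compare-before-store on resource_version; a simplified sketch (illustrative, not the real ObjectCache):

```python
import threading

class ObjectCache:
    """Keep the freshest version of every K8s object seen via LIST, WATCH or
    our own UPDATE responses (a simplified, illustrative sketch)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._objects = {}  # name -> (resource_version, object)

    def set(self, name, resource_version, obj):
        """Store the object unless its resource_version is older than the cached one."""
        with self._lock:
            old = self._objects.get(name)
            if old is None or int(resource_version) >= int(old[0]):
                self._objects[name] = (resource_version, obj)
                return True
            return False  # stale event, reject it
```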
2019-11-14 14:54:57 +01:00