It could happen that the ttl provided in the Patroni configuration is smaller
than the minimum supported by Consul. In such a case the Consul agent fails to
create a new session and responds with 500 Internal Server Error, with an
http body containing something like: "Invalid Session TTL '3000000000',
must be between [10s=24h0m0s]". Without a session Patroni is not able to
create member and leader keys in the Consul KV store, which means the
cluster becomes completely unhealthy.
As a workaround we handle such an exception, adjust the ttl to the minimum
possible value and retry session creation.
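Below is a minimal sketch of that workaround, assuming a python-consul style client with a `session.create(ttl=...)` call and that the minimum can be parsed out of the agent's error message; the helper name and the parsing are illustrative only.
```python
import re


def create_session_with_retry(client, ttl):
    # illustrative helper: retry once with the minimum ttl reported by the agent
    try:
        return client.session.create(ttl=ttl)
    except Exception as e:
        # the agent answered with something like "must be between [10s=24h0m0s]"
        match = re.search(r'must be between \[(\d+)s', str(e))
        if not match:
            raise
        min_ttl = int(match.group(1))
        return client.session.create(ttl=max(ttl, min_ttl))
```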
In addition to that, make it possible to define a custom log format via the environment variable `PATRONI_LOGFORMAT`.
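For illustration only, a hedged sketch of how such a variable could feed into Python's logging setup (the fallback format string here is made up):
```python
import logging
import os

# fall back to a made-up default when PATRONI_LOGFORMAT is not set
fmt = os.environ.get('PATRONI_LOGFORMAT', '%(asctime)s %(levelname)s: %(message)s')
logging.basicConfig(format=fmt, level=logging.INFO)
```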
When Patroni calculates whether it should run pg_rewind or not, it relies on pg_controldata output or gets the necessary information from a replication connection.
In some cases (for example when postgres running as a master was killed), we can't use the pg_controldata output immediately, but have to try to start postgres first. Such a start could fail with the following error:
```
LOG,00000,"ending log output to stderr",,"Future log output will go to log destination ""csvlog"".",,,,,,,""
LOG,00000,"database system was interrupted; last known up at 2017-09-16 22:35:22 UTC",,,,,,,,,""
LOG,00000,"restored log file ""00000006.history"" from archive",,,,,,,,,""
LOG,00000,"entering standby mode",,,,,,,,,"" 2017-09-18 08:00:39.433 UTC,,,57,,59bf7d26.39,4,,2017-09-18 08:00:38 UTC,,0,LOG,00000,"restored log file ""00000006.history"" from archive",,,,,,,,,""
FATAL,XX000,"requested timeline 6 is not a child of this server's history","Latest checkpoint is at 29/1A000178 on timeline 5, but in the history of the requested timeline, the server forked off from that timeline at 29/1A000140.",,,,,,,,""
LOG,00000,"startup process (PID 57) exited with exit code 1",,,,,,,,,""
LOG,00000,"aborting startup due to startup process failure",,,,,,,,,""
LOG,00000,"database system is shut down",,,,,,,,,""
```
In this case pg_controldata will still report `Database cluster state: in production`,
and all further attempts to start postgres will fail. Such a situation can only be fixed by starting postgres without recovery; for safety we do it in single-user mode.
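A hedged sketch of completing crash recovery in single-user mode before any real start is attempted; the helper name and the surrounding plumbing are illustrative, `postgres --single` itself is the standard single-user invocation.
```python
import subprocess


def complete_crash_recovery(data_dir):
    # run postgres in single-user mode so it performs crash recovery and
    # leaves the data directory in a cleanly shut down state
    proc = subprocess.Popen(['postgres', '--single', '-D', data_dir, 'postgres'],
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    proc.communicate(b'\n')  # single-user mode exits once stdin is closed
    return proc.returncode == 0
```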
The second problem is: if postgres was running as a master, but later we started and stopped it, then pg_controldata will report:
```
Database cluster state: shut down in recovery
Minimum recovery ending location: 0/0
Min recovery ending loc's timeline: 0
```
This info can't be used for the calculation. In this case we should use
`Latest checkpoint location` and `Latest checkpoint's TimeLineID` instead.
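A sketch of picking a usable (timeline, LSN) pair in this situation, assuming the pg_controldata output has already been parsed into a dict keyed by the field names above:
```python
def local_timeline_and_lsn(controldata):
    # when postgres was stopped while still in recovery, the minimum recovery
    # fields may be zeroed out, so fall back to the latest checkpoint fields
    timeline = controldata.get("Min recovery ending loc's timeline")
    lsn = controldata.get('Minimum recovery ending location')
    if timeline == '0' and lsn == '0/0':
        timeline = controldata.get("Latest checkpoint's TimeLineID")
        lsn = controldata.get('Latest checkpoint location')
    return timeline, lsn
```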
A misunderstanding of the ioctl() call interface: if mutable_flag=False, fcntl.ioctl() actually returns the arg buffer contents back rather than an error code.
This accidentally worked on Python 2 because comparing int and str did not raise an error.
Error reporting is actually done by raising IOError on Python 2 and OSError on Python 3.
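A small illustration of that interface, using a hypothetical request constant; with an immutable argument the result comes back as the returned bytes, and failures surface as exceptions:
```python
import fcntl
import struct

SOME_IOCTL_REQUEST = 0x0000  # hypothetical request number, for illustration only


def read_int_via_ioctl(dev_fd):
    # pass an immutable buffer: ioctl() returns a bytes object of the same
    # length containing the result; errors are reported by raising OSError
    # (IOError on Python 2), not via the return value
    result = fcntl.ioctl(dev_fd, SOME_IOCTL_REQUEST, struct.pack('i', 0))
    return struct.unpack('i', result)[0]
```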
* Properly handle errors in set_timeout(), have them result in only a warning if watchdog support is not required.
* Improve watchdog device driver name display on Python 3
* Eliminate race condition in watchdog feature tests.
The pinged/closed states were not getting reset properly if the checks ran too quickly.
Add explicit reset points in the feature test so the check is unambiguous.
In addition to that, implement additional checks around manual failover and recovery when synchronous_mode is enabled.
* Comparison must be case insensitive
* Do not send keepalives if watchdog is not active
* Avoid activating watchdog in pause mode
* Set correct postgres state in pause mode
* Don't try to run queries from the API if postgres is stopped
Originally fetch_nodes_statuses was returning a tuple; later it was
wrapped into the namedtuple _MemberStatus, and recently _MemberStatus was
extended with a watchdog_failed field, but api.py was still relying on
the plain tuple and checking failover limitations on its own instead of
calling the `failover_limitation` method.
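A rough sketch of the intended pattern, with the fields simplified, so the REST API reuses the same logic instead of re-deriving it from tuple positions:
```python
from collections import namedtuple


class _MemberStatus(namedtuple('_MemberStatus', 'member reachable in_recovery tags watchdog_failed')):

    def failover_limitation(self):
        # return the reason why this member can not become the leader, or None
        if not self.reachable:
            return 'not reachable'
        if self.tags.get('nofailover', False):
            return 'not allowed to promote'
        if self.watchdog_failed:
            return 'not watchdog capable'
        return None
```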
* Only activate watchdog while master and not paused
We don't really need the protections while we are not master. This way
we only need to tickle the watchdog when we are updating the leader key or
while a demotion is happening.
As implemented we might fail to shut down the watchdog if
someone demotes postgres and removes the leader key behind Patroni's back.
There are probably other similar cases. Basically, if the administrator
is being actively stupid they might get unexpected restarts. That seems
fine.
* Add configuration change support. Change MODE_REQUIRED to disable leader eligibility instead of closing Patroni.
Changes the watchdog timeout during the next keepalive when the ttl is changed. The watchdog driver and the requirement can also be switched online.
When watchdog mode is `required` and the watchdog setup does not work, the effect is similar to nofailover. Add watchdog_failed to the status API to signify this. It is True only when the watchdog does not work **AND** it is required.
* Reset implementation when config changed while active.
* Add watchdog safety margin configuration
Defaults to 5 seconds. Basically this is the maximum amount of time
that can pass between the calls to `dcs.update_leader()` and
`watchdog.keepalive()`, which are called right after each other. It should
be safe for pretty much any sane scenario and allows the default
settings to not trigger the watchdog when the DCS is not responding.
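One hedged reading of how that margin relates the watchdog timeout to the ttl (the formula and the helper are my own sketch, not necessarily the implemented one):
```python
def effective_watchdog_timeout(ttl, safety_margin=5):
    # sketch: the leader key expires `ttl` seconds after dcs.update_leader(),
    # the watchdog fires `timeout` seconds after watchdog.keepalive(), and
    # keepalive() may lag update_leader() by at most `safety_margin` seconds,
    # so timeout + safety_margin <= ttl guarantees the watchdog fires before
    # the leader key expires
    return ttl - safety_margin
```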
* Cancel bootstrap if watchdog activation fails
The system would have demoted itself anyway on the next HA loop. Doing it
in bootstrap at least gives some other node a chance to try bootstrapping
in the hope that it is configured correctly.
If all nodes are unable to activate the watchdog, they will continue to try
until the disk is filled with moved data directories. Perhaps not ideal
behavior, but as the situation is unlikely to resolve itself without
administrator intervention it doesn't seem too bad.
It wasn't a big issue when on_start was called during a normal bootstrap
with initdb, because usually such a process is very fast. But the situation
changes when we run a custom bootstrap, because a long time might pass
between the cluster becoming connectable and the end of recovery and promote.
Actually the situation was even worse than that: on_start was called with
the `replica` argument and on_role_change was never called afterwards,
because the promote wasn't performed by Patroni.
As a solution for this problem we block any callbacks during
bootstrap and explicitly call on_start after the leader lock has been taken.
The task of restoring a cluster from a backup or cloning an existing cluster into a new one had been floating around for some time. It was kind of possible to achieve with a lot of manual actions, but very error prone. So I came up with the idea of making the way we bootstrap a new cluster configurable.
In short: we want to be able to run a custom script instead of running initdb.
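A rough sketch of that idea; the configuration keys and the command-line contract here are invented for illustration, not the real interface.
```python
import shlex
import subprocess


def bootstrap(bootstrap_config, data_dir):
    # 'method' picks a section of the bootstrap configuration that describes
    # how to produce the initial data directory (all names are illustrative)
    method = bootstrap_config.get('method', 'initdb')
    if method == 'initdb':
        cmd = ['initdb', '-D', data_dir]
    else:
        cmd = shlex.split(bootstrap_config[method]['command']) + ['--datadir', data_dir]
    return subprocess.call(cmd) == 0
```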
On Debian, the configuration files (postgresql.conf, pg_hba.conf, etc.) are not stored in the data directory. It would be great to be able to configure the location of this separate directory, so that Patroni can override the existing configuration files in the place where they used to be.
The default is to store configuration files in the data directory. This setting targets custom installations like Debian and any others that move configuration files out of the data directory.
Fixes #465
So far Patroni was populating pg_hba.conf only when running the bootstrap code, and after that it was not very handy to manage its content, because it was necessary to log in to every node, change pg_hba.conf manually and run pg_ctl reload.
This commit intends to fix that and give Patroni control over pg_hba.conf. It is possible to define the pg_hba.conf content via `postgresql.pg_hba` in the Patroni configuration file or in the `DCS/config` (dynamic configuration).
If `hba_file` is defined in `postgresql.parameters`, Patroni will ignore `postgresql.pg_hba`.
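A minimal sketch of what that control amounts to, assuming `postgresql.pg_hba` is a list of plain hba lines and leaving out the reload that has to follow:
```python
import os


def write_pg_hba(data_dir, pg_hba_lines):
    # rewrite pg_hba.conf from the configuration; postgres still needs a
    # reload afterwards for the new rules to take effect
    with open(os.path.join(data_dir, 'pg_hba.conf'), 'w') as f:
        f.write('# Managed by Patroni, do not edit manually!\n')
        for line in pg_hba_lines:
            f.write(line + '\n')
```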
For backward compatibility this feature is not enabled by default. To enable it you have to set `postgresql.use_unix_socket: true`.
If the feature is enabled and `unix_socket_directories` is defined and non-empty, Patroni will use the first suitable value from it to connect to the local postgres cluster.
If `unix_socket_directories` is not defined, Patroni will assume that the default value should be used, will not pass `host` in the command line arguments and will omit it from the connection URL.
Solves: https://github.com/zalando/patroni/issues/61
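A hedged sketch of the selection logic described above; the notion of a "suitable" directory is simplified here to the first non-empty entry:
```python
def local_connect_host(server_parameters):
    # return the value for the 'host' connection parameter, or None when the
    # compiled-in default socket directory should be used
    directories = server_parameters.get('unix_socket_directories')
    if directories is None:
        return None  # omit host entirely and rely on the libpq/postgres default
    for d in directories.split(','):
        d = d.strip()
        if d:
            return d
```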
In addition to the above, this commit fixes a couple of bugs:
* manual failover with pg_rewind in a pause state was broken
* psycopg2 (or libpq, I am not really sure which exactly) doesn't mark the cursor's connection as closed when we use a unix socket and an `OperationalError` occurs. We close such a connection on our own.
because `boto.exception` is not an exception, but a Python module.
+ increase retry timeout to 5 minutes
+ refactor unit tests to cover the case with retries.
pg_controldata output depends on the postgres major version, and for old postgres versions some of the parameters are prefixed by 'Current '.
The bug was introduced by commit 37c1552.
Fixes https://github.com/zalando/patroni/issues/455
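One way to handle this is to normalize the keys while parsing, so the rest of the code does not care which postgres version produced the output; a sketch:
```python
def parse_controldata(output):
    # strip the version-dependent 'Current ' prefix from parameter names
    data = {}
    for line in output.splitlines():
        if ':' in line:
            key, _, value = line.partition(':')
            key = key.strip()
            if key.startswith('Current '):
                key = key[len('Current '):]
            data[key] = value.strip()
    return data
```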
Previously we were running pg_rewind only in a limited number of cases:
* when we knew postgres was a master (no recovery.conf in the data dir)
* when we were doing a manual switchover to a specific node (no
guarantee that this node is the most up-to-date)
* when a given node has the nofailover tag (it could be ahead of the new master)
This approach was kind of working in most cases, but sometimes we
were executing pg_rewind when it was not necessary, and in some other
cases we were not executing it although it was needed.
The main idea of this PR is to first figure out whether we really need
to run pg_rewind by analyzing the timeline id, LSN and history file on the master
and the replica, and run it only if it's needed.
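A simplified, hedged sketch of that decision; the real implementation also has to obtain the master's history file over a replication connection and deal with parsing, which is omitted here:
```python
def rewind_needed(local_timeline, local_lsn, master_timeline, master_history):
    # master_history: list of (timeline, switchpoint_lsn) pairs taken from the
    # master's timeline history file (simplified representation)
    if local_timeline == master_timeline:
        return False  # same timeline, plain replication catch-up is enough
    if local_timeline > master_timeline:
        return True   # we are on a "future" timeline, definitely diverged
    for timeline, switchpoint in master_history:
        if timeline == local_timeline:
            # the master forked off our timeline at `switchpoint`; if we wrote
            # past that point, our history has diverged and pg_rewind is needed
            return local_lsn > switchpoint
    return False
```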
The current UI to change the cluster configuration is somewhat unfriendly: it involves a curl command, knowing the REST API endpoint, knowing the specific syntax to call it with, and writing a JSON document. I added two commands in this branch to make this a bit easier, `show-config` and `edit-config` (the names are merely placeholders, any opinions on better ones?).
* `patronictl show-config clustername` fetches the config from DCS, formats it as YAML and outputs it.
* `patronictl edit-config clustername` fetches the config, formats it as YAML, invokes $EDITOR on it, then shows the user the diff and, after confirmation, applies the changed config to DCS, guarding against concurrent modifications.
* `patronictl edit-config clustername --set synchronous_mode=true --set postgresql.use_slots=true` will set the specific key-value pairs.
There are also some UI capabilities I'm less sure of, but I included them here as I had already implemented them.
* If the output is a tty then the diffs are colored. I'm not sure if this feature is cool enough to pull the weight of adding a dependency on cdiff. Or maybe someone knows of another, more task-focused diff coloring library?
* `patronictl edit-config clustername --pg work_mem=100MB` - Shorthand for `--set postgresql.parameters.work_mem=100MB`
* `patronictl edit-config clustername --apply changes.yaml` - apply changes from a yaml file.
* `patronictl edit-config clustername --replace new-config.yaml` - replace config with new version.
wal-e outputs in CSV format using the 'excel-tab' dialect: 3164de6852/wal_e/operator/backup.py (L63)
The ISO date may be written with a space instead of 'T' as the delimiter between date
and time, which causes the old parsing to fail.
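A sketch of parsing such output with the stated dialect while tolerating both delimiters; the column name used here is assumed for illustration:
```python
import csv
from datetime import datetime


def parse_backup_list(output):
    backups = []
    for row in csv.DictReader(output.splitlines(), dialect='excel-tab'):
        # the timestamp may look like "2017-09-16 22:35:22..." or use a 'T'
        # as the separator; normalize before parsing and drop the sub-second
        # and timezone parts for simplicity
        ts = row['last_modified'].replace('T', ' ')
        row['last_modified'] = datetime.strptime(ts[:19], '%Y-%m-%d %H:%M:%S')
        backups.append(row)
    return backups
```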
When all etcd servers refuse connections during a watch, the call will fail with an exception and will be immediately retried. This creates a huge amount of log spam, potentially creating additional issues on top of losing the DCS. This patch takes note of whether etcd failures are repeating and, starting from the second failure, sleeps for a second before retrying. It additionally omits the stack trace after the first failure in a streak of failures.
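A hedged sketch of that damping behavior, with the watch call and the exception type left abstract:
```python
import logging
import time


def watch_with_backoff(do_watch):
    failures = 0
    while True:
        try:
            do_watch()        # blocks until a change or a timeout
            failures = 0      # the streak is over, reset the counter
        except Exception:
            failures += 1
            if failures == 1:
                logging.exception('watch failed')  # stack trace only once per streak
            else:
                logging.error('watch failed again')
                time.sleep(1)  # stop hammering the dead cluster and spamming the log
```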
The default value of wal_sender_timeout is 60 seconds, while we are trying to remove the replication slot after 30 seconds (ttl=30). That means postgres might think that the slot is still active and do nothing, while Patroni at the same time thinks that it was removed successfully.
If the drop replication slot query didn't return a single row, we must fetch the list of existing physical replication slots from postgres on the next iteration of the HA loop.
Fixes: issue #425
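A hedged sketch of that check, assuming a psycopg2-style cursor; it only drops inactive slots and treats "no row returned" as "the slot may still exist":
```python
def drop_replication_slot(cursor, name):
    cursor.execute("SELECT pg_drop_replication_slot(slot_name)"
                   " FROM pg_replication_slots"
                   " WHERE slot_name = %s AND NOT active", (name,))
    # if postgres still considers the slot active (wal_sender_timeout > ttl),
    # no row comes back and we must re-read pg_replication_slots on the next
    # iteration of the HA loop
    return cursor.rowcount == 1
```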
* Reassemble postgresql parameters when the major version becomes known
Otherwise we were writing some "unknown" parameters into postgresql.conf
and postgres was refusing to start. Only 9.3 was affected.
In addition to that, move the rename of wal_level from hot_standby to replica
into the get_server_parameters method, so this rename is handled in a
single place.
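A hedged sketch of doing that rename in one place, keyed off the server version number:
```python
def map_wal_level(wal_level, major_version):
    # 'replica' only exists from 9.6 on; 'hot_standby' is its pre-9.6 spelling
    if wal_level in ('hot_standby', 'replica'):
        return 'replica' if major_version >= 90600 else 'hot_standby'
    return wal_level
```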
* Bump etcd and consul versions
Replacing hostnames with IP addresses was causing certificate verification to
fail. Instead of doing that, we will rather monkey patch the urllib3
functionality which does name resolution. It should work without
problems even for https connections.
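A hedged sketch of that kind of monkey patching: the connection is opened to a mapped address while the URL, and thus the certificate hostname check, keeps talking about the original name. The host mapping here is illustrative.
```python
import urllib3.util.connection

_original_create_connection = urllib3.util.connection.create_connection
_HOSTS = {'etcd': '127.0.0.1'}  # illustrative hostname -> address mapping


def _patched_create_connection(address, *args, **kwargs):
    host, port = address
    # resolve via our mapping instead of DNS; everything above the socket
    # layer (including TLS verification) still sees the original hostname
    return _original_create_connection((_HOSTS.get(host, host), port), *args, **kwargs)


urllib3.util.connection.create_connection = _patched_create_connection
```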