The recently released psycopg2 is split into two different packages, psycopg2 and psycopg2-binary, which could be installed at the same time into the same place on the filesystem. To reduce the dependency-hell problem, we let the user choose how to install psycopg2. There are a few options available, and they are reflected in the documentation.
This PR also changes the following behavior:
* `pip install patroni` will fail if psycopg2 is not installed
* Patroni will check psycopg2 on startup and fail if it can't be found or is outdated (a sketch of such a check follows below).
Closes https://github.com/zalando/patroni/issues/1021
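A minimal sketch of the kind of startup check described above; the minimum version and the error messages are illustrative assumptions, not Patroni's actual values:

```python
# Illustrative sketch of a psycopg2 startup check; the minimum version
# and the messages are assumptions, not Patroni's actual values.
import sys


def verify_psycopg2(min_version=(2, 5, 4)):
    try:
        import psycopg2
    except ImportError:
        sys.exit('psycopg2 is missing: pip install psycopg2 or psycopg2-binary')
    # psycopg2.__version__ looks like '2.8.6 (dt dec pq3 ext lo64)'
    version = tuple(int(x) for x in psycopg2.__version__.split(' ')[0].split('.'))
    if version < min_version:
        sys.exit('psycopg2 must be >= ' + '.'.join(map(str, min_version)))
```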
* Use `shutil.move` instead of `os.replace`, which is only available from Python 3.3
* Introduce standby-leader health-check and Consul service
* Improve unit tests; some lines were not covered
* Rename `assertEquals` -> `assertEqual` due to a deprecation warning
* Use ConfigMaps or Endpoints for leader election and to keep cluster state
* Label pods with a postgres role
* Change the behavior of `pip install`. From now on it will not install all dependencies; you have to explicitly specify the DCS you want to use Patroni with: `pip install patroni[etcd,zookeeper,kubernetes]` (sketched below)
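A rough sketch of how such optional DCS dependencies can be declared as setuptools extras; the package lists here are illustrative, not Patroni's actual ones:

```python
# setup.py fragment: optional DCS client libraries declared as extras,
# so `pip install patroni[etcd]` pulls in only what that DCS needs.
# The package lists are illustrative, not Patroni's actual ones.
from setuptools import setup

setup(
    name='patroni',
    version='0.0.0',
    extras_require={
        'etcd': ['python-etcd'],
        'zookeeper': ['kazoo'],
        'kubernetes': ['kubernetes'],
        'consul': ['python-consul'],
    },
)
```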
It will capture output to stdout and stderr and print it when a test
fails (see the sketch below). Please set the LOGLEVEL env variable to
INFO or DEBUG if you want to see everything (as it was before).
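A conceptual sketch of this behaviour, not the actual test-suite code: buffer anything written to stdout/stderr during a test and replay the buffer only if the test failed, with the log level taken from LOGLEVEL:

```python
# Conceptual sketch, not the actual test-suite code: capture anything
# written to sys.stdout/sys.stderr during a test, replay it on failure.
import io
import logging
import os
import sys
import unittest
from contextlib import redirect_stderr, redirect_stdout

# honour the LOGLEVEL env variable mentioned above
logging.basicConfig(level=os.environ.get('LOGLEVEL', 'WARNING'))


class CapturingTestCase(unittest.TestCase):

    def run(self, result=None):
        result = result or self.defaultTestResult()
        before = len(result.failures) + len(result.errors)
        buf = io.StringIO()
        with redirect_stdout(buf), redirect_stderr(buf):
            super(CapturingTestCase, self).run(result)
        if len(result.failures) + len(result.errors) > before:
            sys.stderr.write(buf.getvalue())  # the test failed: replay output
        return result
```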
Although this release was very buggy, it has some really nice features:
* an `EtcdWatchTimedOut` exception is raised when a `watch` call times out
* it supports SRV autodiscovery
Since we have already implemented our own SRV discovery, this feature
is not really interesting for us, but it solves the problem of having
two requirements files for different Python versions, because
python-etcd will install dnspython or dnspython3 as a dependency.
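For example, the new exception makes a blocking `watch` call easy to bound; the key name here is illustrative:

```python
# Example of the new EtcdWatchTimedOut exception; the key is illustrative.
import etcd

client = etcd.Client(host='127.0.0.1', port=2379)
try:
    event = client.watch('/service/mycluster/leader', timeout=5)
except etcd.EtcdWatchTimedOut:
    event = None  # nothing changed within 5 seconds; loop and watch again
```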
In order to fix https://github.com/jplana/python-etcd/issues/152 and
https://github.com/jplana/python-etcd/pull/154 I had to override the
`api_execute` method.
If one member of the cluster is not available, it will retry with
another one and fetch the new cluster configuration. The default
timeout for all requests to etcd is 5 seconds.
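A conceptual sketch of that retry behaviour, not the actual Patroni override; it leans on python-etcd internals (`_base_uri`, `_machines_cache`) that may differ between versions:

```python
# Conceptual sketch of retrying against other cluster members; not the
# actual Patroni implementation. _base_uri and _machines_cache are
# python-etcd internals and may differ between versions.
import etcd


class RetryingClient(etcd.Client):

    def api_execute(self, path, method, params=None, timeout=None):
        last_exc = None
        for base_uri in [self._base_uri] + list(self._machines_cache):
            self._base_uri = base_uri  # point the client at the next member
            try:
                return super(RetryingClient, self).api_execute(
                    path, method, params=params, timeout=timeout or 5)
            except etcd.EtcdConnectionFailed as e:
                last_exc = e  # this member is unreachable, try the next one
        raise last_exc
```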
The initial cluster configuration can be resolved through:
1) a /v2/members call on one of the cluster members on a client port
2) when it is possible to resolve the hostname into multiple IPs, it
will iterate through the list and try to perform the action from 1)
3) if `discovery_srv` is defined in the etcd section of the config
file, it will resolve the peer addresses of all cluster members and
fetch the cluster configuration using the peer protocol, by doing a
/members call on a peer port (a sketch of this step follows below)
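A minimal sketch of the SRV lookup from step 3), assuming dnspython is installed; etcd publishes its peers under the `_etcd-server._tcp` SRV record, and the domain here is illustrative:

```python
# Sketch of resolving etcd peer addresses via DNS SRV (step 3 above),
# assuming dnspython; the domain name is illustrative.
import dns.resolver


def discover_etcd_peers(discovery_srv):
    answer = dns.resolver.query('_etcd-server._tcp.' + discovery_srv, 'SRV')
    return ['{0}:{1}'.format(str(r.target).rstrip('.'), r.port) for r in answer]


peers = discover_etcd_peers('example.com')  # e.g. ['etcd1.example.com:2380', ...]
```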