* expose the current patroni version in DCS
* expose `checkpoint_after_promote` flag in DCS as an indicator that pg_rewind could be safely executed
* other nodes will wait until this flag is set instead of connecting as superuser and issuing the CHECKPOINT
* define `postgresql.authentication.rewind` with credentials for pg_rewind in the Patroni configuration files (see the sketch after this list)
* create a dedicated user for pg_rewind if postgres is 11+
* grant EXECUTE on the functions required by pg_rewind to the rewind user
* make it possible to create users without passwords
* put `krbsrvname` into the connection string if it is specified in the config
* update postgres?.yml example files to mention `krbsrvname`
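A rough sketch of how these pieces could look in the Patroni YAML configuration; the user name, password, and the placement of `krbsrvname` are illustrative assumptions, not a verbatim excerpt from the sample files:

  postgresql:
    authentication:
      superuser:
        username: postgres
        password: secret
      replication:
        username: replicator
        password: secret
      rewind:                   # credentials used for pg_rewind instead of the superuser
        username: rewinder
        password: secret
    # krbsrvname: postgres      # assumed placement; put into the connection string when set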
Fix the discrepancy between the values of max_wal_senders and max_replication_slots in the sample postgres.yml files and the hard-coded defaults in Patroni, bumping the former to 10.
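For illustration, after this change both the sample files and the built-in defaults carry the following values (the surrounding structure is a sketch, not a verbatim excerpt):

  postgresql:
    parameters:
      max_wal_senders: 10
      max_replication_slots: 10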
Contributed by @dtseiler
On Debian, the configuration files (postgresql.conf, pg_hba.conf, etc.) are not stored in the data directory, so it would be great to be able to configure the location of this separate directory. Patroni could then overwrite the existing configuration files in the place where they already live.
The default is still to store configuration files in the data directory. This setting targets custom installations like Debian's, and any others that move configuration files out of the data directory.
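A minimal sketch of what such a setting could look like; the setting name config_dir and the Debian paths are assumptions made for illustration:

  postgresql:
    data_dir: /var/lib/postgresql/9.6/main
    config_dir: /etc/postgresql/9.6/main   # assumed name; postgresql.conf, pg_hba.conf, etc. live here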
Fixes #465
v1.0 was released more than a month ago and the new version is coming, so it doesn't make much sense to keep the configuration files in the old format anymore.
In addition, I've also commented out all the lines enabling and configuring archiving, to avoid incidents like the one described here:
https://github.com/zalando/patroni/issues/264
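For illustration, the archiving-related lines in the sample files now look roughly like this (the archive_command shown is an assumption, not the exact content of the samples):

  postgresql:
    parameters:
      # archive_mode: "on"
      # archive_command: mkdir -p ../wal_archive && cp %p ../wal_archive/%f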
Originally Exhibitor was supported inside the ZooKeeper class, and the configuration for Exhibitor was also taken from the `zookeeper` section of the YAML config file. In fact Exhibitor just extends ZooKeeper, and this is now reflected in the code; Exhibitor also got its own section in the config.yaml file. This will make it easier to configure Exhibitor hosts and port via environment variables once PR#211 is merged.
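A sketch of what the separate section could look like; the host names and port value are placeholders, and the key names are assumptions for this commit:

  exhibitor:
    hosts: [exhibitor1, exhibitor2, exhibitor3]
    port: 8181    # Exhibitor REST port, assumed to be the same on every node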
Previously we explicitly injected a replication record into pg_hba.conf, which did not let users fully control their own configuration. With this change Patroni just writes the lines specified by the user.
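A minimal sketch of user-specified pg_hba lines in the Patroni configuration; the entries themselves are only examples:

  postgresql:
    pg_hba:
      - host replication replicator 192.168.0.0/24 md5
      - host all all 0.0.0.0/0 md5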
Rename follow_the_leader to just follow, since the node being followed is not necessarily the leader anymore. Extend the code that manages replication slots to non-master nodes if they are mentioned in at least one replicatefrom tag.
Add a third sample configuration in order to be able to run cascading replicas.
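For illustration, a cascading replica could then be configured roughly like this (the member name is a placeholder):

  tags:
    replicatefrom: postgresql1   # stream from this member instead of the leader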
In the initial implementation we were using only the --encoding=UTF8 option. In order to have pg_rewind working with postgresql-9.3 we have to enable data checksums. The naive approach would be to enable them globally, but given the resulting performance degradation it is better not to do that and instead make it configurable.
In addition, fix all problems with setting up the password of the default postgres user: execute CREATE ROLE or ALTER ROLE depending on the content of pg_authid.
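A hedged sketch of how the now-configurable initdb options might be written; the exact key layout is an assumption for this commit and may differ from the sample files:

  bootstrap:
    initdb:
      - encoding: UTF8
      - data-checksums   # needed for pg_rewind with postgresql-9.3; enabled per cluster, not globally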
Tags are labels assigned to individual members in order to alter their default behavior, e.g. to exclude a member from the leader election or to indicate that base backups may be taken from it.
This commit only adds support for setting tags in the configuration file, exposes the tags under the DCS /member subkey, and returns them in the response to the API request. At the moment the tag names are neither validated nor interpreted in any way.
Support for setting tags via the API is also in the scope
of further work.
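For illustration, tags could be set in a member's configuration like this; the names nofailover and clonefrom are only examples, since tag names are not yet validated or interpreted:

  tags:
    nofailover: true   # example: keep this member out of the leader election
    clonefrom: true    # example: base backups may be taken from this member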
A user:password pair should be configured in the restapi section of the main configuration file in the following format:

  restapi:
    auth: 'username:password'
In addition, a simple routing mechanism is implemented:
GET /foo => do_GET_foo()
POST /bar => do_POST_bar()
The list of ZooKeeper nodes can be periodically updated from Exhibitor. Since we know that each Exhibitor instance accompanies one ZooKeeper node, a list of Exhibitor nodes is maintained as well. Exhibitor assumes that all ZooKeeper nodes use the same client port, 2181. The same assumption is valid for Exhibitor itself: it should always listen on the same port on all nodes.
The original list of Exhibitor nodes is cached and used as a fallback when querying information with the maintained list fails.
This implementation uses the same interface (AbstractDCS) as the Etcd class, which means there should be no problem implementing another plugin to work against Consul, for example.
If one member of the cluster is not available, it will retry with another one and fetch the new cluster configuration. The default timeout for all requests to etcd is 5 seconds.
The initial cluster configuration can be resolved through:
1) a /v2/members call on one of the cluster members on the client port
2) when a hostname resolves to multiple IPs, iterating through the list and trying to perform the action from 1)
3) if discovery_srv is defined in the etcd section of the config file, resolving the peer addresses of all cluster members and fetching the cluster configuration via the peer protocol, i.e. a /members call on the peer port
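For illustration, the etcd section could look like one of the following; the host address, port, and domain are placeholders:

  etcd:
    host: 127.0.0.1:4001        # one cluster member; the rest are discovered via /v2/members

  # or, with DNS SRV discovery:
  etcd:
    discovery_srv: example.com  # peer addresses of all members are resolved via SRV records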