When in automatic mode we probably don't need to warn the user about a failure to set up the watchdog. This is the common case, and the warning makes many users think that this feature is somehow necessary to run Patroni safely. For most users it is completely fine to run without it, and it makes sense to reduce their log spam.
- don't register secondaries that have the `noloadbalance` tag.
- mention in the documentation that secondaries are also registered in `pg_dist_node`.
- update docker/kubernetes README files to include examples with secondaries being registered in `pg_dist_node`.
Etcd keeps old revisions unless instructed to delete them. If we don't delete old revisions then etcd memory usage will keep growing forever due to keepalive updates. Since Patroni does not really need to roll back to older revisions we can safely delete them.
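As an illustration of the kind of request involved, a compaction through the etcd v3 JSON/gRPC-gateway might look roughly like the sketch below (the URL prefix varies between etcd versions, and the endpoint address and revision number here are made up):

```python
import requests

ETCD_URL = 'http://127.0.0.1:2379'  # assumed local etcd endpoint

def compact(revision: int, physical: bool = True) -> None:
    # JSON mapping of the KV.Compact gRPC call; older etcd releases expose it
    # under /v3beta or /v3alpha instead of /v3.
    resp = requests.post(f'{ETCD_URL}/v3/kv/compaction',
                         json={'revision': revision, 'physical': physical})
    resp.raise_for_status()

compact(123456)  # made-up revision: everything older than it gets discarded
```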
Citus cluster (coordinator and workers) will be stored in the DCS as a fleet of Patroni clusters logically grouped together:
```
/service/batman/
/service/batman/0/
/service/batman/0/initialize
/service/batman/0/leader
/service/batman/0/members/
/service/batman/0/members/m1
/service/batman/0/members/m2
/service/batman/1/
/service/batman/1/initialize
/service/batman/1/leader
/service/batman/1/members/
/service/batman/1/members/m1
/service/batman/1/members/m2
...
```
Where 0 is the Citus group of the coordinator and 1, 2, etc. are worker groups.
Such a hierarchy allows reading the entire Citus cluster with a single call to the DCS (except for ZooKeeper).
The get_cluster() method will read the entire Citus cluster on the coordinator, because it needs to discover workers; for a worker cluster it will read only the subtree of its own group.
Besides that, we introduce a new method, get_citus_coordinator(), which will be used only by worker clusters.
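As a rough, hypothetical illustration (not the actual Patroni code), splitting the keys returned by a single prefix read into per-group structures could look like this:

```python
from collections import defaultdict

# Hypothetical sketch: group keys read with one prefix query under
# /service/batman/ by their Citus group number.
def split_by_citus_group(keys):
    groups = defaultdict(dict)
    for key, value in keys.items():
        # key looks like /service/batman/<group>/<leaf...>
        _, _, scope, group, leaf = key.split('/', 4)
        groups[int(group)][leaf] = value
    return groups

cluster = split_by_citus_group({
    '/service/batman/0/leader': 'm1',
    '/service/batman/0/members/m1': '{"role": "master"}',
    '/service/batman/1/leader': 'm2',
})
print(cluster[0]['leader'])   # coordinator leader
print(cluster[1]['leader'])   # leader of worker group 1
```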
Since there are no hierarchical structures on K8s, we will use the Citus group as a suffix on all objects that Patroni creates, e.g.:
```
batman-0-leader # the leader config map for the coordinator
batman-0-config # the config map holding initialize, config, and history "keys"
...
batman-1-leader # the leader config map for worker group 1
batman-1-config
...
```
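For example, the per-group objects can be inspected by name prefix with the Kubernetes Python client (the namespace, scope name, and prefix-based filtering are assumptions of this sketch):

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

def group_config_maps(namespace: str, scope: str, group: int):
    # List the config maps belonging to one Citus group of the cluster,
    # relying on the "<scope>-<group>-" naming shown above.
    prefix = f'{scope}-{group}-'
    return [cm.metadata.name
            for cm in v1.list_namespaced_config_map(namespace).items
            if cm.metadata.name.startswith(prefix)]

print(group_config_maps('default', 'batman', 0))  # e.g. ['batman-0-config', 'batman-0-leader']
```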
Citus integration is enabled from patroni.yaml:
```yaml
citus:
  database: citus
  group: 0 # 0 is for coordinator, 1, 2, etc are for workers
```
If enabled, Patroni will create the database and the Citus extension in it, and will INSERT INTO `pg_dist_authinfo` the information required for Citus nodes to communicate with each other, i.e. the superuser's 'password', 'sslcert', and 'sslkey' if they are defined in the Patroni configuration file.
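Roughly, the inserted row could look like the sketch below (an illustration only, not the literal statement Patroni runs; the DSN, node id, and placeholder values are made up):

```python
import psycopg2

conn = psycopg2.connect('dbname=citus user=postgres')  # assumed DSN
with conn, conn.cursor() as cur:
    # pg_dist_authinfo rows hold libpq connection options that Citus uses
    # when connecting to other nodes; here for the superuser role.
    cur.execute(
        "INSERT INTO pg_dist_authinfo (nodeid, rolename, authinfo) "
        "VALUES (%s, %s, %s)",
        (0,  # node id; 0 and -1 have special meanings in Citus
         'postgres',
         'password=<superuser-password> sslcert=<path> sslkey=<path>'))
```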
When a new Citus coordinator/worker is bootstrapped, Patroni adds `synchronous_mode: on` to the `bootstrap.dcs` section.
Besides that, Patroni takes over management of some Postgres GUCs (a simplified sketch of these adjustments follows this list):
- `shared_preload_libraries` - Patroni ensures that "citus" is placed first
- `max_prepared_transactions` - if not set or set to 0, Patroni changes the value to `max_connections*2`
- `wal_level` - automatically set to `logical`. It is needed by Citus to move/split shards. Under the hood Citus creates/removes replication slots, and they are automatically added by Patroni to the `ignore_slots` configuration to avoid accidental removal.
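A simplified, hypothetical sketch of these adjustments (Patroni's real implementation differs in detail):

```python
def adjust_citus_gucs(gucs: dict) -> dict:
    # Make sure "citus" is the first entry of shared_preload_libraries.
    libraries = [v.strip() for v in gucs.get('shared_preload_libraries', '').split(',') if v.strip()]
    gucs['shared_preload_libraries'] = ','.join(['citus'] + [v for v in libraries if v != 'citus'])

    # max_prepared_transactions: if unset or 0, use max_connections * 2.
    if not int(gucs.get('max_prepared_transactions') or 0):
        gucs['max_prepared_transactions'] = int(gucs['max_connections']) * 2

    gucs['wal_level'] = 'logical'  # required by Citus to move/split shards
    return gucs

print(adjust_citus_gucs({'max_connections': 100,
                         'shared_preload_libraries': 'pg_stat_statements'}))
# -> shared_preload_libraries='citus,pg_stat_statements',
#    max_prepared_transactions=200, wal_level='logical'
```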
The coordinator primary actively discovers worker primary nodes and registers/updates them in the `pg_dist_node` table using the citus_add_node() and citus_update_node() functions.
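A hedged illustration of the registration side (the hostname, port, group, and DSN are made up; the groupid argument of citus_add_node() places the node into the corresponding Citus group):

```python
import psycopg2

conn = psycopg2.connect('dbname=citus user=postgres')  # assumed DSN to the coordinator
with conn, conn.cursor() as cur:
    # Register a newly discovered worker primary in pg_dist_node, placing it
    # into Citus group 1 (matching the Patroni group of that worker cluster).
    cur.execute("SELECT citus_add_node(%s, %s, groupid => %s)",
                ('worker-1.example.com', 5432, 1))
```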
Patroni running on the coordinator provides a new REST API endpoint, `POST /citus`, which is used by workers to facilitate controlled switchovers and restarts of worker primaries.
When a worker primary needs to shut down Postgres because of a restart or switchover, it calls the `POST /citus` endpoint on the coordinator; Patroni on the coordinator then starts a transaction and calls `citus_update_node(nodeid, 'host-demoted', port)` in order to pause client connections that work with the given worker.
Once the new leader is elected or Postgres is started back up, another call to the `POST /citus` endpoint is made, which performs another `citus_update_node()` call with the actual hostname and port and commits the transaction. After the transaction is committed, the coordinator reestablishes connections to the worker node and client connections are unblocked.
If clients don't run long transactions, the operation finishes without client-visible errors, only with a short latency spike.
All operations on `pg_dist_node` are serialized by Patroni on the coordinator. This gives more control and makes it possible to ROLLBACK a transaction in progress if its lifetime exceeds a certain threshold and there are other worker nodes that should be updated.
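The pause/resume sequence described above could be sketched roughly as follows (an illustration of the mechanism, not Patroni's code; the node id, hostnames, ports, and DSN are made up):

```python
import psycopg2

conn = psycopg2.connect('dbname=citus user=postgres')  # assumed DSN to the coordinator
conn.autocommit = False
cur = conn.cursor()

# 1. Worker is about to shut down: point pg_dist_node at a non-resolvable host
#    inside an open transaction, pausing client connections to that worker.
cur.execute("SELECT citus_update_node(%s, %s, %s)",
            (2, 'worker-1.example.com-demoted', 5432))

# ... switchover/restart happens; the new leader (or restarted node) is known ...

# 2. Put the real hostname/port back and commit; the coordinator reconnects
#    and the paused client connections are unblocked.
cur.execute("SELECT citus_update_node(%s, %s, %s)",
            (2, 'worker-2.example.com', 5432))
conn.commit()
```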
The newest versions of docker-compose want some values double-quoted in the env file, while old versions fail to process such files.
The solution is simple: move some of the parameters to `docker-compose.yml` and rely on YAML anchors for inheritance.
Since the main idea behind the env files was to keep "secret" information off the main YAML, we also get rid of any non-secret stuff, which was mainly located in etcd.env.
The only python-etcd3 client that works directly via gRPC still supports only a single endpoint, which is not great for high availability.
Since Patroni is already using a heavily hacked version of python-etcd with smart retries and auto-discovery out-of-the-box, I decided to enhance the existing code with limited support of v3 protocol via gRPC-gateway.
Unfortunately, watches via gRPC-gateway require us to open and keep a second connection to etcd.
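For illustration, a prefix read through the JSON/gRPC-gateway could look like the sketch below (keys and values are base64-encoded on the wire; the /v3 URL prefix is /v3beta or /v3alpha on older etcd releases, and the endpoint address is made up):

```python
import base64
import requests

ETCD_URL = 'http://127.0.0.1:2379'  # assumed endpoint

def b64(data: bytes) -> str:
    return base64.b64encode(data).decode()

def prefix_range(prefix: bytes):
    # range_end for a prefix read is the prefix with its last byte incremented
    # (not handling the 0xff edge case in this sketch).
    range_end = prefix[:-1] + bytes([prefix[-1] + 1])
    resp = requests.post(f'{ETCD_URL}/v3/kv/range',
                         json={'key': b64(prefix), 'range_end': b64(range_end)})
    resp.raise_for_status()
    return {base64.b64decode(kv['key']).decode(): base64.b64decode(kv['value']).decode()
            for kv in resp.json().get('kvs', [])}

print(prefix_range(b'/service/batman/'))  # the whole cluster in one call
```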
Known limitations:
* The minimum supported version is 3.0.4; on earlier versions transactions don't work due to bugs in grpc-gateway. Without transactions we can't do atomic operations, e.g. taking the leader lock.
* Watches work only starting from 3.1.0
* Authentication works only starting from 3.3.0
* gRPC-gateway does not support authentication using the TLS Common Name. This is because gRPC-proxy terminates TLS from its client, so all clients share the proxy's certificate: https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/authentication.md#using-tls-common-name
1. Multi-stage build with an extensive cleanup of useless files and optional image compression
2. Start three-node etcd cluster
3. Start three-node Patroni cluster
4. One container with haproxy
5. All container names are prefixed with "demo-" and don't have suffixes
6. Decommission the dev_patroni_cluster.sh script; docker-compose is now the de facto standard.
7. Provide more examples in the docker/README.md
Make it easy to start up a cluster with docker-compose, and unify the Dockerfile and scripts so that they work both with docker-compose and with the old dev_patroni_cluster.sh script.
For easier development using Docker, the $HOSTNAME variable will be used to name the running Patroni instance. Bumped some *_segments PostgreSQL settings to ensure WAL files are not removed too quickly.
Increased the timeout for POST requests to Patroni, as some operations (failover) may take considerable time to complete.
Failover to a specific member was broken in patronictl, as it used the wrong key to specify the member to fail over to.
Fixed pretty-printing of the xlog lag to prevent false negatives from showing up and to keep the alignment good.
For managing Patroni clusters, the Patroni API can be used. For many tasks, a command-line interface for this API would be a useful addition. This commit adds patroncli (the name is still under debate).
The command-line interface needs access to the DCS; this is required for any operation. For some tasks it also needs access to the Patroni API.
A small summary of the additions to get the cli/ctl started:
* Updated the Docker image to use 'true' as the archive_command, to ensure the disk does not fill up during failover testing.
* The cli can currently list members, fail over a master, and remove a given cluster from the DCS.
* The cli can be configured with a command, for repeated access to the same DCS.
* Added some simple tests for the cli; code coverage is still very low.
As the Dockerfile is there mainly to support developers, we want to build the image from the current working directory instead of from a previously released version.
To help in developing features, the Dockerfile and its entrypoint have been extended.
The README.md explains things in detail; in short:
- you can now run a Patroni cluster with a single command