Change master->primary, take two (#3127)

This commit is a breaking change:
1. `role` in DCS is written as "primary" instead of "master".
2. `role` in REST API responses is also written as "primary".
3. REST API no longer accepts role=master in requests (for example switchover/failover/restart endpoints).
4. `/metrics` REST API endpoint will no longer report `patroni_master`.
5. `patronictl` no longer accepts `--master` argument.
6. `no_master` option in declarative configuration of custom replica creation methods is no longer treated as a special option, please use `no_leader` instead.
7. `patroni_wale_restore` doesn't accept `--no_master` anymore.
8. `patroni_barman` doesn't accept `--role=master` anymore.
9. Callback scripts are executed with role=primary instead of role=master.
10. On Kubernetes, Patroni by default sets the role label to primary. If you want to keep the old behavior and avoid downtime or a lengthy, complex migration, you can set `kubernetes.leader_label_value` and `kubernetes.standby_leader_label_value` to `master`.
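Item 10 can be addressed with a small configuration change; the following is an illustrative sketch of a `patroni.yml` excerpt (only the relevant keys are shown) that keeps the pre-4.0 `master` label values so existing Service selectors keep matching:

```yaml
# Illustrative patroni.yml fragment: retain the old pod label values
# so Services selecting "role: master" continue to route traffic.
kubernetes:
  role_label: role                    # default label key
  leader_label_value: master          # new default would be "primary"
  standby_leader_label_value: master  # new default would be "primary"
```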

However, a few master-related exceptions remain in place:
1. `GET /master` REST API endpoint will continue to work.
2. `master_start_timeout` and `master_stop_timeout` in global configuration are still accepted.
3. `master` tag is still preserved in Consul services in addition to `primary`.

Rationale for these exceptions: DBAs don't always fully control the infrastructure and may be unable to adjust the dependent configuration.
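As a rough sketch (not Patroni's actual dispatch code), the first exception means the legacy and new leader endpoints resolve to the same health check:

```python
# Illustrative only: both the legacy /master and the new /primary REST
# endpoints return 200 on the leader and 503 elsewhere.
def leader_endpoint_status(path: str, role: str) -> int:
    if path in ("/master", "/primary"):  # legacy alias kept for compatibility
        return 200 if role == "primary" else 503
    if path == "/replica":
        return 200 if role == "replica" else 503
    return 404

print(leader_endpoint_status("/master", "primary"))  # -> 200
```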
Author: Alexander Kukushkin (committed by GitHub)
Date: 2024-08-28 17:19:00 +02:00
Parent: 835d93951d
Commit: b470ade20e
34 changed files with 135 additions and 154 deletions

@@ -55,7 +55,7 @@ Consul
- **PATRONI\_CONSUL\_CONSISTENCY**: (optional) Select consul consistency mode. Possible values are ``default``, ``consistent``, or ``stale`` (more details in `consul API reference <https://www.consul.io/api/features/consistency.html/>`__)
- **PATRONI\_CONSUL\_CHECKS**: (optional) list of Consul health checks used for the session. By default an empty list is used.
- **PATRONI\_CONSUL\_REGISTER\_SERVICE**: (optional) whether or not to register a service with the name defined by the scope parameter and the tag master, primary, replica, or standby-leader depending on the node's role. Defaults to **false**
- **PATRONI\_CONSUL\_SERVICE\_TAGS**: (optional) additional static tags to add to the Consul service apart from the role (``master``/``primary``/``replica``/``standby-leader``). By default an empty list is used.
- **PATRONI\_CONSUL\_SERVICE\_TAGS**: (optional) additional static tags to add to the Consul service apart from the role (``primary``/``replica``/``standby-leader``). By default an empty list is used.
- **PATRONI\_CONSUL\_SERVICE\_CHECK\_INTERVAL**: (optional) how often to perform health check against registered url
- **PATRONI\_CONSUL\_SERVICE\_CHECK\_TLS\_SERVER\_NAME**: (optional) override SNI host when connecting via TLS, see also `consul agent check API reference <https://www.consul.io/api-docs/agent/check#tlsservername>`__.
@@ -113,11 +113,11 @@ Kubernetes
- **PATRONI\_KUBERNETES\_NAMESPACE**: (optional) Kubernetes namespace where the Patroni pod is running. Default value is `default`.
- **PATRONI\_KUBERNETES\_LABELS**: Labels in format ``{label1: value1, label2: value2}``. These labels will be used to find existing objects (Pods and either Endpoints or ConfigMaps) associated with the current cluster. Also Patroni will set them on every object (Endpoint or ConfigMap) it creates.
- **PATRONI\_KUBERNETES\_SCOPE\_LABEL**: (optional) name of the label containing cluster name. Default value is `cluster-name`.
- **PATRONI\_KUBERNETES\_ROLE\_LABEL**: (optional) name of the label containing role (master or replica or other custom value). Patroni will set this label on the pod it runs in. Default value is ``role``.
- **PATRONI\_KUBERNETES\_LEADER\_LABEL\_VALUE**: (optional) value of the pod label when Postgres role is `master`. Default value is `master`.
- **PATRONI\_KUBERNETES\_ROLE\_LABEL**: (optional) name of the label containing role (`primary`, `replica` or other custom value). Patroni will set this label on the pod it runs in. Default value is ``role``.
- **PATRONI\_KUBERNETES\_LEADER\_LABEL\_VALUE**: (optional) value of the pod label when Postgres role is `primary`. Default value is `primary`.
- **PATRONI\_KUBERNETES\_FOLLOWER\_LABEL\_VALUE**: (optional) value of the pod label when Postgres role is `replica`. Default value is `replica`.
- **PATRONI\_KUBERNETES\_STANDBY\_LEADER\_LABEL\_VALUE**: (optional) value of the pod label when Postgres role is ``standby_leader``. Default value is ``master``.
- **PATRONI\_KUBERNETES\_TMP\_ROLE\_LABEL**: (optional) name of the temporary label containing role (master or replica). Value of this label will always use the default of corresponding role. Set only when necessary.
- **PATRONI\_KUBERNETES\_STANDBY\_LEADER\_LABEL\_VALUE**: (optional) value of the pod label when Postgres role is ``standby_leader``. Default value is ``primary``.
- **PATRONI\_KUBERNETES\_TMP\_ROLE\_LABEL**: (optional) name of the temporary label containing role (`primary` or `replica`). Value of this label will always use the default of corresponding role. Set only when necessary.
- **PATRONI\_KUBERNETES\_USE\_ENDPOINTS**: (optional) if set to true, Patroni will use Endpoints instead of ConfigMaps to run leader elections and keep cluster state.
- **PATRONI\_KUBERNETES\_POD\_IP**: (optional) IP address of the pod Patroni is running in. This value is required when `PATRONI_KUBERNETES_USE_ENDPOINTS` is enabled and is used to populate the leader endpoint subsets when the pod's PostgreSQL is promoted.
- **PATRONI\_KUBERNETES\_PORTS**: (optional) if the Service object has the name for the port, the same name must appear in the Endpoint object, otherwise service won't work. For example, if your service is defined as ``{Kind: Service, spec: {ports: [{name: postgresql, port: 5432, targetPort: 5432}]}}``, then you have to set ``PATRONI_KUBERNETES_PORTS='[{"name": "postgresql", "port": 5432}]'`` and Patroni will use it for updating subsets of the leader Endpoint. This parameter is used only if `PATRONI_KUBERNETES_USE_ENDPOINTS` is set.

@@ -109,7 +109,7 @@ digraph G {
subgraph cluster_process_healthy_cluster {
label = "process_healthy_cluster"
"healthy_has_lock" [label="Am I the owner of the leader lock?", shape=diamond]
"healthy_is_leader" [label="Is Postgres running as master?", shape=diamond]
"healthy_is_leader" [label="Is Postgres running as primary?", shape=diamond]
"healthy_no_lock" [label="Follow the leader (async,\ncreate/update recovery.conf and restart if necessary)"]
"healthy_has_lock" -> "healthy_no_lock" [label="no" color="red"]
"healthy_has_lock" -> "healthy_update_leader_lock" [label="yes" color="green"]
@@ -119,7 +119,7 @@ digraph G {
"healthy_update_success" -> "healthy_is_leader" [label="yes" color="green"]
"healthy_update_success" -> "healthy_demote" [label="no" color="red"]
"healthy_demote" [label="Demote (async,\nrestart in read-only)"]
"healthy_failover" [label="Promote Postgres to master"]
"healthy_failover" [label="Promote Postgres to primary"]
"healthy_is_leader" -> "healthy_failover" [label="no" color="red"]
}
"healthy_demote" -> "update_member"
@@ -134,10 +134,10 @@ digraph G {
"unhealthy_leader_race" [label="Try to create leader key"]
"unhealthy_leader_race" -> "unhealthy_acquire_lock"
"unhealthy_acquire_lock" [label="Was I able to get the lock?", shape="diamond"]
"unhealthy_is_leader" [label="Is Postgres running as master?", shape=diamond]
"unhealthy_is_leader" [label="Is Postgres running as primary?", shape=diamond]
"unhealthy_acquire_lock" -> "unhealthy_is_leader" [label="yes" color="green"]
"unhealthy_is_leader" -> "unhealthy_promote" [label="no" color="red"]
"unhealthy_promote" [label="Promote to master"]
"unhealthy_promote" [label="Promote to primary"]
"unhealthy_is_healthiest" -> "unhealthy_follow" [label="no" color="red"]
"unhealthy_follow" [label="try to follow somebody else()"]
"unhealthy_acquire_lock" -> "unhealthy_follow" [label="no" color="red"]

Binary file not shown (image changed: 507 KiB before, 524 KiB after).

@@ -37,7 +37,7 @@ Patroni Kubernetes :ref:`settings <kubernetes_settings>` and :ref:`environment v
Customize role label
^^^^^^^^^^^^^^^^^^^^
By default, Patroni will set corresponding labels on the pod it runs in based on node's role, such as ``role=master``.
By default, Patroni will set corresponding labels on the pod it runs in based on node's role, such as ``role=primary``.
The key and value of label can be customized by `kubernetes.role_label`, `kubernetes.leader_label_value`, `kubernetes.follower_label_value` and `kubernetes.standby_leader_label_value`.
Note that if you migrate from default role labels to custom ones, you can reduce downtime by following migration steps:
@@ -48,8 +48,8 @@ Note that if you migrate from default role labels to custom ones, you can reduce
labels:
cluster-name: foo
role: master
tmp_role: master
role: primary
tmp_role: primary
2. After all pods have been updated, modify the service selector to select the temporary label.
@@ -57,7 +57,7 @@ Note that if you migrate from default role labels to custom ones, you can reduce
selector:
cluster-name: foo
tmp_role: master
tmp_role: primary
3. Add your custom role label (e.g., set `kubernetes.leader_label_value=primary`). Once pods are restarted they will get the following new labels set by Patroni:
@@ -66,7 +66,7 @@ Note that if you migrate from default role labels to custom ones, you can reduce
labels:
cluster-name: foo
role: primary
tmp_role: master
tmp_role: primary
4. After all pods have been updated again, modify the service selector to use new role value.
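Step 4 then amounts to a selector change along these lines (hypothetical Service manifest fragment, matching the labels used in the steps above):

```yaml
# After the rollout, point the Service back at the customized role label.
selector:
  cluster-name: foo
  role: primary
```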

@@ -103,9 +103,9 @@ The ``GET /patroni`` is used by Patroni during the leader race. It also could be
$ curl -s http://localhost:8008/patroni | jq .
{
"state": "running",
"postmaster_start_time": "2023-08-18 11:03:37.966359+00:00",
"role": "master",
"server_version": 150004,
"postmaster_start_time": "2024-08-18 11:03:37.966359+00:00",
"role": "primary",
"server_version": 160004,
"xlog": {
"location": 67395656
},
@@ -134,7 +134,7 @@ The ``GET /patroni`` is used by Patroni during the leader race. It also could be
},
"database_system_identifier": "7268616322854375442",
"patroni": {
"version": "3.1.0",
"version": "4.0.0",
"scope": "demo",
"name": "patroni1"
}
@@ -147,9 +147,9 @@ The ``GET /patroni`` is used by Patroni during the leader race. It also could be
$ curl -s http://localhost:8008/patroni | jq .
{
"state": "running",
"postmaster_start_time": "2023-08-18 11:09:08.615242+00:00",
"postmaster_start_time": "2024-08-18 11:09:08.615242+00:00",
"role": "replica",
"server_version": 150004,
"server_version": 160004,
"xlog": {
"received_location": 67419744,
"replayed_location": 67419744,
@@ -182,7 +182,7 @@ The ``GET /patroni`` is used by Patroni during the leader race. It also could be
},
"database_system_identifier": "7268616322854375442",
"patroni": {
"version": "3.1.0",
"version": "4.0.0",
"scope": "demo",
"name": "patroni1"
}
@@ -195,9 +195,9 @@ The ``GET /patroni`` is used by Patroni during the leader race. It also could be
$ curl -s http://localhost:8008/patroni | jq .
{
"state": "running",
"postmaster_start_time": "2023-08-18 11:09:08.615242+00:00",
"postmaster_start_time": "2024-08-18 11:09:08.615242+00:00",
"role": "replica",
"server_version": 150004,
"server_version": 160004,
"xlog": {
"location": 67420024
},
@@ -228,7 +228,7 @@ The ``GET /patroni`` is used by Patroni during the leader race. It also could be
},
"database_system_identifier": "7268616322854375442",
"patroni": {
"version": "3.1.0",
"version": "4.0.0",
"scope": "demo",
"name": "patroni1"
}
@@ -241,9 +241,9 @@ The ``GET /patroni`` is used by Patroni during the leader race. It also could be
$ curl -s http://localhost:8008/patroni | jq .
{
"state": "running",
"postmaster_start_time": "2023-08-18 11:09:08.615242+00:00",
"postmaster_start_time": "2024-08-18 11:09:08.615242+00:00",
"role": "replica",
"server_version": 150004,
"server_version": 160004,
"xlog": {
"location": 67420024
},
@@ -273,7 +273,7 @@ The ``GET /patroni`` is used by Patroni during the leader race. It also could be
},
"database_system_identifier": "7268616322854375442",
"patroni": {
"version": "3.1.0",
"version": "4.0.0",
"scope": "demo",
"name": "patroni1"
}
@@ -287,16 +287,13 @@ Retrieve the Patroni metrics in Prometheus format through the ``GET /metrics`` e
# HELP patroni_version Patroni semver without periods. \
# TYPE patroni_version gauge
patroni_version{scope="batman",name="patroni1"} 020103
patroni_version{scope="batman",name="patroni1"} 040000
# HELP patroni_postgres_running Value is 1 if Postgres is running, 0 otherwise.
# TYPE patroni_postgres_running gauge
patroni_postgres_running{scope="batman",name="patroni1"} 1
# HELP patroni_postmaster_start_time Epoch seconds since Postgres started.
# TYPE patroni_postmaster_start_time gauge
patroni_postmaster_start_time{scope="batman",name="patroni1"} 1657656955.179243
# HELP patroni_master Value is 1 if this node is the leader, 0 otherwise.
# TYPE patroni_master gauge
patroni_master{scope="batman",name="patroni1"} 1
# HELP patroni_primary Value is 1 if this node is the leader, 0 otherwise.
# TYPE patroni_primary gauge
patroni_primary{scope="batman",name="patroni1"} 1
@@ -335,7 +332,7 @@ Retrieve the Patroni metrics in Prometheus format through the ``GET /metrics`` e
patroni_postgres_in_archive_recovery{scope="batman",name="patroni1"} 0
# HELP patroni_postgres_server_version Version of Postgres (if running), 0 otherwise.
# TYPE patroni_postgres_server_version gauge
patroni_postgres_server_version{scope="batman",name="patroni1"} 140004
patroni_postgres_server_version{scope="batman",name="patroni1"} 160004
# HELP patroni_cluster_unlocked Value is 1 if the cluster is unlocked, 0 if locked.
# TYPE patroni_cluster_unlocked gauge
patroni_cluster_unlocked{scope="batman",name="patroni1"} 0
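Because `patroni_master` is gone, dashboards and alerts must scrape `patroni_primary` instead. A minimal parsing sketch (illustrative, not part of Patroni) that tolerates both names during a mixed-version rollout:

```python
# Determine leadership from a Prometheus text exposition, accepting the
# old (patroni_master) and new (patroni_primary) gauge names.
def is_leader(metrics_text: str) -> bool:
    for line in metrics_text.splitlines():
        if line.startswith(("patroni_primary{", "patroni_master{")):
            return line.rsplit(" ", 1)[1] == "1"
    return False

sample = 'patroni_primary{scope="batman",name="patroni1"} 1'
print(is_leader(sample))  # -> True
```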
@@ -497,18 +494,18 @@ Let's check that the node processed this configuration. First of all it should s
{
"pending_restart": true,
"database_system_identifier": "6287881213849985952",
"postmaster_start_time": "2016-06-13 13:13:05.211 CEST",
"postmaster_start_time": "2024-08-18 13:13:05.211 CEST",
"xlog": {
"location": 2197818976
},
"patroni": {
"version": "1.0",
"version": "4.0.0",
"scope": "batman",
"name": "patroni1"
},
"state": "running",
"role": "master",
"server_version": 90503
"role": "primary",
"server_version": 160004
}
Removing parameters:

@@ -104,7 +104,7 @@ Most of the parameters are optional, but you have to specify one of the **host**
- **consistency**: (optional) Select consul consistency mode. Possible values are ``default``, ``consistent``, or ``stale`` (more details in `consul API reference <https://www.consul.io/api/features/consistency.html/>`__)
- **checks**: (optional) list of Consul health checks used for the session. By default an empty list is used.
- **register\_service**: (optional) whether or not to register a service with the name defined by the scope parameter and the tag master, primary, replica, or standby-leader depending on the node's role. Defaults to **false**.
- **service\_tags**: (optional) additional static tags to add to the Consul service apart from the role (``master``/``primary``/``replica``/``standby-leader``). By default an empty list is used.
- **service\_tags**: (optional) additional static tags to add to the Consul service apart from the role (``primary``/``replica``/``standby-leader``). By default an empty list is used.
- **service\_check\_interval**: (optional) how often to perform health check against registered url. Defaults to '5s'.
- **service\_check\_tls\_server\_name**: (optional) override SNI host when connecting via TLS, see also `consul agent check API reference <https://www.consul.io/api-docs/agent/check#tlsservername>`__.
@@ -178,11 +178,11 @@ Kubernetes
- **namespace**: (optional) Kubernetes namespace where Patroni pod is running. Default value is `default`.
- **labels**: Labels in format ``{label1: value1, label2: value2}``. These labels will be used to find existing objects (Pods and either Endpoints or ConfigMaps) associated with the current cluster. Also Patroni will set them on every object (Endpoint or ConfigMap) it creates.
- **scope\_label**: (optional) name of the label containing cluster name. Default value is `cluster-name`.
- **role\_label**: (optional) name of the label containing role (master or replica or other custom value). Patroni will set this label on the pod it runs in. Default value is ``role``.
- **leader\_label\_value**: (optional) value of the pod label when Postgres role is ``master``. Default value is ``master``.
- **role\_label**: (optional) name of the label containing role (`primary`, `replica`, or other custom value). Patroni will set this label on the pod it runs in. Default value is ``role``.
- **leader\_label\_value**: (optional) value of the pod label when Postgres role is ``primary``. Default value is ``primary``.
- **follower\_label\_value**: (optional) value of the pod label when Postgres role is ``replica``. Default value is ``replica``.
- **standby\_leader\_label\_value**: (optional) value of the pod label when Postgres role is ``standby_leader``. Default value is ``master``.
- **tmp\_role\_label**: (optional) name of the temporary label containing role (master or replica). Value of this label will always use the default of corresponding role. Set only when necessary.
- **standby\_leader\_label\_value**: (optional) value of the pod label when Postgres role is ``standby_leader``. Default value is ``primary``.
- **tmp\_role\_label**: (optional) name of the temporary label containing role (`primary` or `replica`). Value of this label will always use the default of corresponding role. Set only when necessary.
- **use\_endpoints**: (optional) if set to true, Patroni will use Endpoints instead of ConfigMaps to run leader elections and keep cluster state.
- **pod\_ip**: (optional) IP address of the pod Patroni is running in. This value is required when `use_endpoints` is enabled and is used to populate the leader endpoint subsets when the pod's PostgreSQL is promoted.
- **ports**: (optional) if the Service object has the name for the port, the same name must appear in the Endpoint object, otherwise service won't work. For example, if your service is defined as ``{Kind: Service, spec: {ports: [{name: postgresql, port: 5432, targetPort: 5432}]}}``, then you have to set ``kubernetes.ports: [{"name": "postgresql", "port": 5432}]`` and Patroni will use it for updating subsets of the leader Endpoint. This parameter is used only if `kubernetes.use_endpoints` is set.

@@ -65,7 +65,7 @@ Feature: basic replication
Then I receive a response returncode 0
And postgres2 role is the primary after 24 seconds
And Response on GET http://127.0.0.1:8010/history contains recovery after 10 seconds
And there is a postgres2_cb.log with "on_role_change master batman" in postgres2 data directory
And there is a postgres2_cb.log with "on_role_change primary batman" in postgres2 data directory
When I issue a PATCH request to http://127.0.0.1:8010/config with {"synchronous_mode": null, "master_start_timeout": 0}
Then I receive a response code 200
When I add the table bar to postgres2

@@ -69,7 +69,7 @@ Feature: citus
Scenario: check that in-flight transaction is rolled back after timeout when other workers need to change pg_dist_node
Given I start postgres4 in citus group 2
Then postgres4 is a leader in a group 2 after 10 seconds
And "members/postgres4" key in a group 2 in DCS has role=master after 3 seconds
And "members/postgres4" key in a group 2 in DCS has role=primary after 3 seconds
When I run patronictl.py edit-config batman --group 2 -s ttl=20 --force
Then I receive a response returncode 0
And I receive a response output "+ttl: 20"

@@ -10,7 +10,7 @@ Feature: ignored slots
When I shut down postgres1
And I start postgres1
Then postgres1 is a leader after 10 seconds
And "members/postgres1" key in DCS has role=master after 10 seconds
And "members/postgres1" key in DCS has role=primary after 10 seconds
# Make sure Patroni has finished telling Postgres it should be accepting writes.
And postgres1 role is the primary after 20 seconds
# 1. Create our test logical replication slot.
@@ -38,7 +38,7 @@ Feature: ignored slots
# cycle we don't accidentally rewind to before the slot creation.
And replication works from postgres1 to postgres0 after 20 seconds
When I shut down postgres1
Then "members/postgres0" key in DCS has role=master after 10 seconds
Then "members/postgres0" key in DCS has role=primary after 10 seconds
# 2. After a failover the server (now a replica) still has the slot.
When I start postgres1
@@ -54,7 +54,7 @@ Feature: ignored slots
# 3. After a failover the server (now a primary) still has the slot.
When I shut down postgres0
Then "members/postgres1" key in DCS has role=master after 10 seconds
Then "members/postgres1" key in DCS has role=primary after 10 seconds
And postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds
And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds
And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds

@@ -7,7 +7,7 @@ Scenario: check API requests on a stand-alone server
When I issue a GET request to http://127.0.0.1:8008/
Then I receive a response code 200
And I receive a response state running
And I receive a response role master
And I receive a response role primary
When I issue a GET request to http://127.0.0.1:8008/standby_leader
Then I receive a response code 503
When I issue a GET request to http://127.0.0.1:8008/health
@@ -17,7 +17,7 @@ Scenario: check API requests on a stand-alone server
When I issue a POST request to http://127.0.0.1:8008/reinitialize with {"force": true}
Then I receive a response code 503
And I receive a response text I am the leader, can not reinitialize
When I run patronictl.py switchover batman --master postgres0 --force
When I run patronictl.py switchover batman --primary postgres0 --force
Then I receive a response returncode 1
And I receive a response output "Error: No candidates found to switchover to"
When I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0"}

@@ -12,7 +12,7 @@ Feature: recovery
Then postgres0 role is the primary after 10 seconds
When I issue a GET request to http://127.0.0.1:8008/
Then I receive a response code 200
And I receive a response role master
And I receive a response role primary
And I receive a response timeline 1
And "members/postgres0" key in DCS has state=running after 12 seconds
And replication works from postgres0 to postgres1 after 15 seconds

@@ -26,7 +26,7 @@ Feature: standby cluster
Scenario: Detach exiting node from the cluster
When I shut down postgres1
Then postgres0 is a leader after 10 seconds
And "members/postgres0" key in DCS has role=master after 5 seconds
And "members/postgres0" key in DCS has role=primary after 5 seconds
When I issue a GET request to http://127.0.0.1:8008/
Then I receive a response code 200

@@ -119,7 +119,7 @@ def check_response(context, component, data):
@step('I issue a scheduled switchover from {from_host:w} to {to_host:w} in {in_seconds:d} seconds')
def scheduled_switchover(context, from_host, to_host, in_seconds):
context.execute_steps(u"""
Given I run patronictl.py switchover batman --master {0} --candidate {1} --scheduled "{2}" --force
Given I run patronictl.py switchover batman --primary {0} --candidate {1} --scheduled "{2}" --force
""".format(from_host, to_host, datetime.now(tzutc) + timedelta(seconds=int(in_seconds))))

@@ -46,7 +46,7 @@ Example session:
$ kubectl get pods -L role
NAME READY STATUS RESTARTS AGE ROLE
patronidemo-0 1/1 Running 0 34s master
patronidemo-0 1/1 Running 0 34s primary
patronidemo-1 1/1 Running 0 30s replica
patronidemo-2 1/1 Running 0 26s replica
@@ -119,12 +119,12 @@ Example session:
$ kubectl get pods -l cluster-name=citusdemo -L role
NAME READY STATUS RESTARTS AGE ROLE
citusdemo-0-0 1/1 Running 0 105s master
citusdemo-0-0 1/1 Running 0 105s primary
citusdemo-0-1 1/1 Running 0 101s replica
citusdemo-0-2 1/1 Running 0 96s replica
citusdemo-1-0 1/1 Running 0 105s master
citusdemo-1-0 1/1 Running 0 105s primary
citusdemo-1-1 1/1 Running 0 101s replica
citusdemo-2-0 1/1 Running 0 105s master
citusdemo-2-0 1/1 Running 0 105s primary
citusdemo-2-1 1/1 Running 0 101s replica
$ kubectl exec -ti citusdemo-0-0 -- bash

@@ -458,7 +458,7 @@ metadata:
application: patroni
cluster-name: citusdemo
citus-type: worker
role: master
role: primary
spec:
type: ClusterIP
selector:

@@ -36,7 +36,7 @@ objects:
labels:
application: ${APPLICATION_NAME}
cluster-name: ${PATRONI_CLUSTER_NAME}
name: ${PATRONI_MASTER_SERVICE_NAME}
name: ${PATRONI_PRIMARY_SERVICE_NAME}
spec:
ports:
- port: 5432
@@ -45,7 +45,7 @@ objects:
selector:
application: ${APPLICATION_NAME}
cluster-name: ${PATRONI_CLUSTER_NAME}
role: master
role: primary
sessionAffinity: None
type: ClusterIP
status:
@@ -289,12 +289,12 @@ parameters:
displayName: Cluster Name
name: PATRONI_CLUSTER_NAME
value: patroni-ephemeral
- description: The name of the OpenShift Service exposed for the patroni-ephemeral-master container.
displayName: Master service name.
name: PATRONI_MASTER_SERVICE_NAME
value: patroni-ephemeral-master
- description: The name of the OpenShift Service exposed for the patroni-ephemeral-primary container.
displayName: Primary service name.
name: PATRONI_PRIMARY_SERVICE_NAME
value: patroni-ephemeral-primary
- description: The name of the OpenShift Service exposed for the patroni-ephemeral-replica containers.
displayName: Replica service name.
displayName: Replica service name.
name: PATRONI_REPLICA_SERVICE_NAME
value: patroni-ephemeral-replica
- description: Maximum amount of memory the container can use.
@@ -321,7 +321,7 @@ parameters:
displayName: Replication Password
name: PATRONI_REPLICATION_PASSWORD
value: postgres
- description: Service account name used for pods and rolebindings to form a cluster in the project.
- description: Service account name used for pods and rolebindings to form a cluster in the project.
displayName: Service Account
name: SERVICE_ACCOUNT
value: patroniocp

@@ -34,7 +34,7 @@ objects:
labels:
application: ${APPLICATION_NAME}
cluster-name: ${PATRONI_CLUSTER_NAME}
name: ${PATRONI_MASTER_SERVICE_NAME}
name: ${PATRONI_PRIMARY_SERVICE_NAME}
spec:
ports:
- port: 5432
@@ -43,7 +43,7 @@ objects:
selector:
application: ${APPLICATION_NAME}
cluster-name: ${PATRONI_CLUSTER_NAME}
role: master
role: primary
sessionAffinity: None
type: ClusterIP
status:
@@ -107,7 +107,7 @@ objects:
initContainers:
- command:
- sh
- -c
- -c
- "mkdir -p /home/postgres/pgdata/pgroot/data && chmod 0700 /home/postgres/pgdata/pgroot/data"
image: docker-registry.default.svc:5000/${NAMESPACE}/patroni:latest
imagePullPolicy: IfNotPresent
@@ -196,7 +196,7 @@ objects:
terminationGracePeriodSeconds: 0
volumes:
- name: ${APPLICATION_NAME}
persistentVolumeClaim:
persistentVolumeClaim:
claimName: ${APPLICATION_NAME}
volumeClaimTemplates:
- metadata:
@@ -313,12 +313,12 @@ parameters:
displayName: Cluster Name
name: PATRONI_CLUSTER_NAME
value: patroni-persistent
- description: The name of the OpenShift Service exposed for the patroni-persistent-master container.
displayName: Master service name.
name: PATRONI_MASTER_SERVICE_NAME
value: patroni-persistent-master
- description: The name of the OpenShift Service exposed for the patroni-persistent-primary container.
displayName: Primary service name.
name: PATRONI_PRIMARY_SERVICE_NAME
value: patroni-persistent-primary
- description: The name of the OpenShift Service exposed for the patroni-persistent-replica containers.
displayName: Replica service name.
displayName: Replica service name.
name: PATRONI_REPLICA_SERVICE_NAME
value: patroni-persistent-replica
- description: Maximum amount of memory the container can use.
@@ -345,11 +345,11 @@ parameters:
displayName: Replication Password
name: PATRONI_REPLICATION_PASSWORD
value: postgres
- description: Service account name used for pods and rolebindings to form a cluster in the project.
- description: Service account name used for pods and rolebindings to form a cluster in the project.
displayName: Service Account
name: SERVICE_ACCOUNT
value: patroni-persistent
- description: The size of the persistent volume to create.
- description: The size of the persistent volume to create.
displayName: Persistent Volume Size
name: PVC_SIZE
value: 5Gi

@@ -15,9 +15,9 @@ pipeline {
script {
openshift.withCluster() {
openshift.withProject() {
def pgbench = openshift.newApp( "https://github.com/stewartshea/docker-pgbench/", "--name=pgbench", "-e PGPASSWORD=postgres", "-e PGUSER=postgres", "-e PGHOST=patroni-persistent-master", "-e PGDATABASE=postgres", "-e TEST_CLIENT_COUNT=20", "-e TEST_DURATION=120" )
def pgbench = openshift.newApp( "https://github.com/stewartshea/docker-pgbench/", "--name=pgbench", "-e PGPASSWORD=postgres", "-e PGUSER=postgres", "-e PGHOST=patroni-persistent-primary", "-e PGDATABASE=postgres", "-e TEST_CLIENT_COUNT=20", "-e TEST_DURATION=120" )
def pgbenchdc = openshift.selector( "dc", "pgbench" )
timeout(5) {
timeout(5) {
pgbenchdc.rollout().status()
}
}

@@ -326,8 +326,8 @@ class RestApiHandler(BaseHTTPRequestHandler):
response.get('role') == 'replica' and response.get('state') == 'running' else 503
if not cluster and response.get('pause'):
leader_status_code = 200 if response.get('role') in ('master', 'primary', 'standby_leader') else 503
primary_status_code = 200 if response.get('role') in ('master', 'primary') else 503
leader_status_code = 200 if response.get('role') in ('primary', 'standby_leader') else 503
primary_status_code = 200 if response.get('role') == 'primary' else 503
standby_leader_status_code = 200 if response.get('role') == 'standby_leader' else 503
elif patroni.ha.is_leader():
leader_status_code = 200
@@ -435,7 +435,7 @@ class RestApiHandler(BaseHTTPRequestHandler):
"""
patroni: Patroni = self.server.patroni
is_primary = patroni.postgresql.role in ('master', 'primary') and patroni.postgresql.is_running()
is_primary = patroni.postgresql.role == 'primary' and patroni.postgresql.is_running()
# We can tolerate Patroni problems longer on the replica.
# On the primary the liveness probe most likely will start failing only after the leader key expired.
# It should not be a big problem because replicas will see that the primary is still alive via REST API call.
@@ -532,8 +532,7 @@ class RestApiHandler(BaseHTTPRequestHandler):
* ``patroni_version``: Patroni version without periods, e.g. ``030002`` for Patroni ``3.0.2``;
* ``patroni_postgres_running``: ``1`` if PostgreSQL is running, else ``0``;
* ``patroni_postmaster_start_time``: epoch timestamp since Postmaster was started;
* ``patroni_master``: ``1`` if this node holds the leader lock, else ``0``;
* ``patroni_primary``: same as ``patroni_master``;
* ``patroni_primary``: ``1`` if this node holds the leader lock, else ``0``;
* ``patroni_xlog_location``: ``pg_wal_lsn_diff(pg_current_wal_flush_lsn(), '0/0')`` if leader, else ``0``;
* ``patroni_standby_leader``: ``1`` if standby leader node, else ``0``;
* ``patroni_replica``: ``1`` if a replica, else ``0``;
@@ -580,13 +579,9 @@ class RestApiHandler(BaseHTTPRequestHandler):
postmaster_start_time = (postmaster_start_time - epoch).total_seconds() if postmaster_start_time else 0
metrics.append("patroni_postmaster_start_time{0} {1}".format(labels, postmaster_start_time))
metrics.append("# HELP patroni_master Value is 1 if this node is the leader, 0 otherwise.")
metrics.append("# TYPE patroni_master gauge")
metrics.append("patroni_master{0} {1}".format(labels, int(postgres['role'] in ('master', 'primary'))))
metrics.append("# HELP patroni_primary Value is 1 if this node is the leader, 0 otherwise.")
metrics.append("# TYPE patroni_primary gauge")
metrics.append("patroni_primary{0} {1}".format(labels, int(postgres['role'] in ('master', 'primary'))))
metrics.append("patroni_primary{0} {1}".format(labels, int(postgres['role'] == 'primary')))
metrics.append("# HELP patroni_xlog_location Current location of the Postgres"
" transaction log, 0 if this node is not the leader.")
@@ -863,7 +858,7 @@ class RestApiHandler(BaseHTTPRequestHandler):
* ``schedule``: timestamp at which the restart should occur;
* ``role``: restart only nodes which role is ``role``. Can be either:
* ``primary`` (or ``master``); or
* ``primary``; or
* ``replica``.
* ``postgres_version``: restart only nodes which PostgreSQL version is less than ``postgres_version``, e.g.
@@ -912,9 +907,9 @@ class RestApiHandler(BaseHTTPRequestHandler):
status_code = _
break
elif k == 'role':
if request[k] not in ('master', 'primary', 'replica'):
if request[k] not in ('primary', 'standby_leader', 'replica'):
status_code = 400
data = "PostgreSQL role should be either primary or replica"
data = "PostgreSQL role should be either primary, standby_leader, or replica"
break
elif k == 'postgres_version':
try:
@@ -1271,11 +1266,11 @@ class RestApiHandler(BaseHTTPRequestHandler):
``initdb failed``, ``running custom bootstrap script``, ``custom bootstrap failed``,
``creating replica``, or ``unknown``;
* ``postmaster_start_time``: ``pg_postmaster_start_time()``;
* ``role``: ``replica`` or ``master`` based on ``pg_is_in_recovery()`` output;
* ``role``: ``replica`` or ``primary`` based on ``pg_is_in_recovery()`` output;
* ``server_version``: Postgres version without periods, e.g. ``150002`` for Postgres ``15.2``;
* ``xlog``: dictionary. Its structure depends on ``role``:
* If ``master``:
* If ``primary``:
* ``location``: ``pg_current_wal_flush_lsn()``
@@ -1327,7 +1322,7 @@ class RestApiHandler(BaseHTTPRequestHandler):
result = {
'state': postgresql.state,
'postmaster_start_time': row[0],
'role': 'replica' if row[1] == 0 else 'master',
'role': 'replica' if row[1] == 0 else 'primary',
'server_version': postgresql.server_version,
'xlog': ({
'received_location': row[4] or row[3],
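For consumers of the ``/metrics`` endpoint this hunk is a breaking change: the ``patroni_master`` gauge disappears and only ``patroni_primary`` remains. A monitoring check can stay compatible across the upgrade with a small fallback; this is a hypothetical helper sketch, not part of Patroni:

```python
def leader_flag(metrics_text: str) -> int:
    """Return 1 if the node reports holding the leader lock.

    Prefers the new ``patroni_primary`` gauge and falls back to the
    pre-rename ``patroni_master`` name, so one check works against
    both old and new Patroni releases.
    """
    values = {}
    for line in metrics_text.splitlines():
        if not line or line.startswith('#'):
            continue  # skip blank lines and HELP/TYPE comments
        name, _, value = line.rpartition(' ')
        values[name.split('{')[0]] = int(float(value))
    for gauge in ('patroni_primary', 'patroni_master'):
        if gauge in values:
            return values[gauge]
    return 0
```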

@@ -282,7 +282,7 @@ arg_cluster_name = click.argument('cluster_name', required=False,
option_default_citus_group = click.option('--group', required=False, type=int, help='Citus group',
default=lambda: _get_configuration().get('citus', {}).get('group'))
option_citus_group = click.option('--group', required=False, type=int, help='Citus group')
role_choice = click.Choice(['leader', 'primary', 'standby-leader', 'replica', 'standby', 'any', 'master'])
role_choice = click.Choice(['leader', 'primary', 'standby-leader', 'replica', 'standby', 'any'])
@click.group(cls=click.Group)
@@ -486,7 +486,7 @@ def get_all_members(cluster: Cluster, group: Optional[int], role: str = 'leader'
:param group: filter which Citus group we should get members from. If ``None`` get from all groups.
:param role: role to filter members. Can be one among:
* ``primary`` or ``master``: the primary PostgreSQL instance;
* ``primary``: the primary PostgreSQL instance;
* ``replica`` or ``standby``: a standby PostgreSQL instance;
* ``leader``: the leader of a Patroni cluster. Can also be used to get the leader of a Patroni standby cluster;
* ``standby-leader``: the leader of a Patroni standby cluster;
@@ -497,16 +497,15 @@ def get_all_members(cluster: Cluster, group: Optional[int], role: str = 'leader'
clusters = {0: cluster}
if is_citus_cluster() and group is None:
clusters.update(cluster.workers)
if role in ('leader', 'master', 'primary', 'standby-leader'):
if role in ('leader', 'primary', 'standby-leader'):
# In the DCS the members' role can be one among: ``primary``, ``master``, ``replica`` or ``standby_leader``.
# ``primary`` and ``master`` are the same thing, so we map both to ``master`` to have a simpler ``if``.
# In a future release we might remove ``master`` from the available roles for the DCS members.
role = {'primary': 'master', 'standby-leader': 'standby_leader'}.get(role, role)
# ``primary`` and ``master`` are the same thing.
role = {'standby-leader': 'standby_leader'}.get(role, role)
for cluster in clusters.values():
if cluster.leader is not None and cluster.leader.name and\
(role == 'leader'
or cluster.leader.data.get('role') != 'master' and role == 'standby_leader'
or cluster.leader.data.get('role') != 'standby_leader' and role == 'master'):
or cluster.leader.data.get('role') not in ('primary', 'master') and role == 'standby_leader'
or cluster.leader.data.get('role') != 'standby_leader' and role == 'primary'):
yield cluster.leader.member
return
@@ -608,8 +607,7 @@ def get_cursor(cluster: Cluster, group: Optional[int], connect_parameters: Dict[
row = cursor.fetchone()
in_recovery = not row or row[0]
if in_recovery and role in ('replica', 'standby', 'standby-leader')\
or not in_recovery and role in ('master', 'primary'):
if in_recovery and role in ('replica', 'standby', 'standby-leader') or not in_recovery and role == 'primary':
return cursor
conn.close()
@@ -1376,7 +1374,7 @@ def failover(cluster_name: str, group: Optional[int], candidate: Optional[str],
@ctl.command('switchover', help='Switchover to a replica')
@arg_cluster_name
@option_citus_group
@click.option('--leader', '--primary', '--master', 'leader', help='The name of the current leader', default=None)
@click.option('--leader', '--primary', 'leader', help='The name of the current leader', default=None)
@click.option('--candidate', help='The name of the candidate', default=None)
@click.option('--scheduled', help='Timestamp of a scheduled switchover in unambiguous format (e.g. ISO 8601)',
default=None)
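The leader-matching logic in ``get_all_members`` after this change can be condensed as follows. This is an illustrative sketch (a hypothetical ``leader_matches`` helper, not Patroni's actual function): ``patronictl`` no longer accepts ``--role=master``, but a leader whose DCS entry still carries the old ``master`` spelling continues to satisfy ``--role=primary``:

```python
def leader_matches(requested: str, leader_role: str) -> bool:
    """Does a cluster leader satisfy the requested patronictl role filter?

    'standby-leader' is normalized to the DCS spelling 'standby_leader';
    entries written by older Patroni may still say 'master'.
    """
    requested = {'standby-leader': 'standby_leader'}.get(requested, requested)
    if requested == 'leader':
        return True  # any leader, including a standby leader
    if requested == 'standby_leader':
        return leader_role not in ('primary', 'master')
    if requested == 'primary':
        return leader_role != 'standby_leader'
    return False  # replica-type filters are handled elsewhere
```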

@@ -1111,7 +1111,7 @@ class Cluster(NamedTuple('Cluster',
if global_config.is_standby_cluster or self.get_slot_name_on_primary(postgresql.name, tags) is None:
return self.permanent_physical_slots if postgresql.can_advance_slots or role == 'standby_leader' else {}
return self.__permanent_slots if postgresql.can_advance_slots or role in ('master', 'primary') \
return self.__permanent_slots if postgresql.can_advance_slots or role == 'primary' \
else self.__permanent_logical_slots
def _get_members_slots(self, name: str, role: str, nofailover: bool,
@@ -1150,7 +1150,7 @@ class Cluster(NamedTuple('Cluster',
# if the node does only cascading and can't become the leader, we
# want only to have slots for members that could connect to it.
members = [m for m in members if not nofailover or m.replicatefrom == name]
elif role in ('master', 'primary', 'standby_leader'): # PostgreSQL is older than 11
elif role in ('primary', 'standby_leader'): # PostgreSQL is older than 11
# on the leader want to have slots only for the nodes that are supposed to be replicating from it.
members = [m for m in members if m.replicatefrom is None
or m.replicatefrom == name or not self.has_member(m.replicatefrom)]

@@ -532,9 +532,7 @@ class Consul(AbstractDCS):
check['TLSServerName'] = self._service_check_tls_server_name
tags = self._service_tags[:]
tags.append(role)
if role == 'master':
tags.append('primary')
elif role == 'primary':
if role == 'primary':
tags.append('master')
self._previous_loop_service_tags = self._service_tags
self._previous_loop_token = self._client.token
@@ -553,7 +551,7 @@ class Consul(AbstractDCS):
return self.deregister_service(params['service_id'])
self._previous_loop_register_service = self._register_service
if role in ['master', 'primary', 'replica', 'standby-leader']:
if role in ['primary', 'replica', 'standby-leader']:
if state != 'running':
return
return self.register_service(service_name, **params)
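The Consul tag computation above implements one of the deliberate exceptions from the commit message: the role is now always spelled ``primary``, but a legacy ``master`` tag is still registered next to it for consumers that resolve services by the old tag. A minimal sketch of the resulting behavior (hypothetical helper, not Patroni's method):

```python
def service_tags(base_tags, role):
    """Tags attached to the Consul service for a node of the given role.

    'primary' additionally gets the legacy 'master' tag; the old
    role == 'master' branch is gone because that role is never
    produced anymore.
    """
    tags = list(base_tags)
    tags.append(role)
    if role == 'primary':
        tags.append('master')  # kept for backward compatibility
    return tags
```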

@@ -756,9 +756,9 @@ class Kubernetes(AbstractDCS):
self._label_selector = ','.join('{0}={1}'.format(k, v) for k, v in self._labels.items())
self._namespace = config.get('namespace') or 'default'
self._role_label = config.get('role_label', 'role')
self._leader_label_value = config.get('leader_label_value', 'master')
self._leader_label_value = config.get('leader_label_value', 'primary')
self._follower_label_value = config.get('follower_label_value', 'replica')
self._standby_leader_label_value = config.get('standby_leader_label_value', 'master')
self._standby_leader_label_value = config.get('standby_leader_label_value', 'primary')
self._tmp_role_label = config.get('tmp_role_label')
self._ca_certs = os.environ.get('PATRONI_KUBERNETES_CACERT', config.get('cacert')) or SERVICE_CERT_FILENAME
super(Kubernetes, self).__init__({**config, 'namespace': ''}, mpp)
@@ -1312,8 +1312,8 @@ class Kubernetes(AbstractDCS):
cluster = self.cluster
if cluster and cluster.leader and cluster.leader.name == self._name:
role = self._standby_leader_label_value if data['role'] == 'standby_leader' else self._leader_label_value
tmp_role = 'master'
elif data['state'] == 'running' and data['role'] not in ('master', 'primary'):
tmp_role = 'primary'
elif data['state'] == 'running' and data['role'] != 'primary':
role = {'replica': self._follower_label_value}.get(data['role'], data['role'])
tmp_role = data['role']
else:
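The Kubernetes label defaults changing to ``primary`` is the one place where the commit message recommends an explicit opt-out to avoid relabeling pods (e.g. when Services select on ``role=master``). A sketch of how the label values resolve (hypothetical condensation of the constructor logic above):

```python
def role_label_values(kubernetes_config):
    """Resolve the role-label values written to pods, with the new defaults.

    Both leader labels now default to 'primary'. Deployments that cannot
    afford a migration can pin the old value in the `kubernetes:` section
    via leader_label_value / standby_leader_label_value.
    """
    return (kubernetes_config.get('leader_label_value', 'primary'),
            kubernetes_config.get('standby_leader_label_value', 'primary'))
```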

@@ -493,7 +493,7 @@ class Ha(object):
ret = self.dcs.touch_member(data)
if ret:
new_state = (data['state'], {'master': 'primary'}.get(data['role'], data['role']))
new_state = (data['state'], data['role'])
if self._last_state != new_state and new_state == ('running', 'primary'):
self.notify_mpp_coordinator('after_promote')
self._last_state = new_state
@@ -636,7 +636,7 @@ class Ha(object):
and data.get('Database cluster state') in ('in production', 'in crash recovery',
'shutting down', 'shut down')\
and self.state_handler.state == 'crashed'\
and self.state_handler.role in ('primary', 'master')\
and self.state_handler.role == 'primary'\
and not self.state_handler.config.recovery_conf_exists():
# We know 100% that we were running as a primary a few moments ago, therefore could just start postgres
msg = 'starting primary after failure'
@@ -733,7 +733,7 @@ class Ha(object):
if not (self._rewind.is_needed and self._rewind.can_rewind_or_reinitialize_allowed)\
or self.cluster.is_unlocked():
if is_leader:
self.state_handler.set_role('master')
self.state_handler.set_role('primary')
return 'continue to run as primary without lock'
elif self.state_handler.role != 'standby_leader':
self.state_handler.set_role('replica')
@@ -1096,12 +1096,12 @@ class Ha(object):
if self.state_handler.is_primary():
# Inform the state handler about its primary role.
# It may be unaware of it if postgres is promoted manually.
self.state_handler.set_role('master')
self.state_handler.set_role('primary')
self.process_sync_replication()
self.update_cluster_history()
self.state_handler.mpp_handler.sync_meta_data(self.cluster)
return message
elif self.state_handler.role in ('master', 'promoted', 'primary'):
elif self.state_handler.role in ('primary', 'promoted'):
self.process_sync_replication()
return message
else:
@@ -1109,7 +1109,7 @@ class Ha(object):
# Somebody else updated sync state, it may be due to us losing the lock. To be safe,
# postpone promotion until next cycle. TODO: trigger immediate retry of run_cycle.
return 'Postponing promotion because synchronous replication state was updated by somebody else'
if self.state_handler.role not in ('master', 'promoted', 'primary'):
if self.state_handler.role not in ('primary', 'promoted'):
# reset failsafe state when promote
self._failsafe.set_is_active(0)
@@ -1157,7 +1157,7 @@ class Ha(object):
:returns: the reason why caller shouldn't continue as a primary or the current value of received/replayed LSN.
"""
if self.state_handler.state == 'running' and self.state_handler.role in ('master', 'primary'):
if self.state_handler.state == 'running' and self.state_handler.role == 'primary':
return 'Running as a leader'
self._failsafe.update(data)
return self._last_wal_lsn
@@ -1936,7 +1936,7 @@ class Ha(object):
self.state_handler.cancellable.cancel()
return 'lost leader before promote'
if self.state_handler.role in ('master', 'primary'):
if self.state_handler.role == 'primary':
logger.info('Demoting primary during %s', self._async_executor.scheduled_action)
if self._async_executor.scheduled_action in ('restart', 'starting primary after failure'):
# Restart needs a special interlocking cancel because postmaster may be just started in a
@@ -1962,7 +1962,7 @@ class Ha(object):
if not self.state_handler.is_running():
self.watchdog.disable()
if self.has_lock():
if self.state_handler.role in ('master', 'primary', 'standby_leader'):
if self.state_handler.role in ('primary', 'standby_leader'):
self.state_handler.set_role('demoted')
self._delete_leader()
return 'removed leader key after trying and failing to start postgres'
@@ -1987,7 +1987,7 @@ class Ha(object):
if not self.state_handler.is_primary():
return 'waiting for end of recovery after bootstrap'
self.state_handler.set_role('master')
self.state_handler.set_role('primary')
ret = self._async_executor.try_run_async('post_bootstrap', self.state_handler.bootstrap.post_bootstrap,
args=(self.patroni.config['bootstrap'], self._async_response))
return ret or 'running post_bootstrap'
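The first hunk of this file drops the ``{'master': 'primary'}`` normalization in ``touch_member`` because the role now arrives from the state handler already spelled ``primary``. A sketch of the resulting state-transition check (hypothetical helper, not Patroni's code):

```python
def member_state_transition(last_state, data):
    """New (state, role) tuple for a member, plus whether it just became primary.

    No role normalization is needed anymore; the 'after_promote' MPP
    notification fires only on the transition into ('running', 'primary').
    """
    new_state = (data['state'], data['role'])
    just_promoted = last_state != new_state and new_state == ('running', 'primary')
    return new_state, just_promoted
```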

@@ -128,19 +128,19 @@ class Postgresql(object):
# we know that PostgreSQL is accepting connections and can read some GUC's from pg_settings
self.config.load_current_server_parameters()
self.set_role('master' if self.is_primary() else 'replica')
self.set_role('primary' if self.is_primary() else 'replica')
hba_saved = self.config.replace_pg_hba()
ident_saved = self.config.replace_pg_ident()
if self.major_version < 120000 or self.role in ('master', 'primary'):
if self.major_version < 120000 or self.role == 'primary':
# If PostgreSQL is running as a primary or we run PostgreSQL that is older than 12 we can
# call reload_config() once again (the first call happened in the ConfigHandler constructor),
# so that it can figure out if config files should be updated and pg_ctl reload executed.
self.config.reload_config(config, sighup=bool(hba_saved or ident_saved))
elif hba_saved or ident_saved:
self.reload()
elif not self.is_running() and self.role in ('master', 'primary'):
elif not self.is_running() and self.role == 'primary':
self.set_role('demoted')
@property
@@ -223,7 +223,7 @@ class Postgresql(object):
" pg_catalog.pg_stat_get_activity(w.pid)"
" WHERE w.state = 'streaming') r)").format(self.wal_name, self.lsn_name)
if global_config.is_synchronous_mode
and self.role in ('master', 'primary', 'promoted') else "'on', '', NULL")
and self.role in ('primary', 'promoted') else "'on', '', NULL")
if self._major_version >= 90600:
extra = ("pg_catalog.current_setting('restore_command')" if self._major_version >= 120000 else "NULL") +\
@@ -348,7 +348,7 @@ class Postgresql(object):
elif self.config.recovery_conf_exists():
return 'replica'
else:
return 'master'
return 'primary'
@property
def server_version(self) -> int:
@@ -414,8 +414,7 @@ class Postgresql(object):
return deepcopy(self.config.get(method, {}) or EMPTY_DICT.copy())
def replica_method_can_work_without_replication_connection(self, method: str) -> bool:
return method != 'basebackup' and bool(self.replica_method_options(method).get('no_master')
or self.replica_method_options(method).get('no_leader'))
return method != 'basebackup' and bool(self.replica_method_options(method).get('no_leader'))
def can_create_replica_without_replication_connection(self, replica_methods: Optional[List[str]]) -> bool:
""" go through the replication methods to see if there are ones
@@ -567,7 +566,7 @@ class Postgresql(object):
return bool(self._cluster_info_state_get('timeline'))
except PostgresConnectionException:
logger.warning('Failed to determine PostgreSQL state from the connection, falling back to cached role')
return bool(self.is_running() and self.role in ('master', 'primary'))
return bool(self.is_running() and self.role == 'primary')
def replay_paused(self) -> bool:
return self._cluster_info_state_get('replay_paused') or False
@@ -668,7 +667,7 @@ class Postgresql(object):
if self.callback and cb_type in self.callback:
cmd = self.callback[cb_type]
role = 'master' if self.role == 'promoted' else self.role
role = 'primary' if self.role == 'promoted' else self.role
try:
cmd = shlex.split(self.callback[cb_type]) + [cb_type, role, self.scope]
self._callback_executor.call(cmd)
@@ -1136,7 +1135,7 @@ class Postgresql(object):
# and we know for sure that postgres was already running before, we will only execute on_role_change
# callback and prevent execution of on_restart/on_start callback.
# If the role remains the same (replica or standby_leader), we will execute on_start or on_restart
change_role = self.cb_called and (self.role in ('master', 'primary', 'demoted')
change_role = self.cb_called and (self.role in ('primary', 'demoted')
or not {'standby_leader', 'replica'} - {self.role, role})
if change_role:
self.__cb_pending = CallbackAction.NOOP
@@ -1162,7 +1161,7 @@ class Postgresql(object):
for _ in polling_loop(wait_seconds):
data = self.controldata()
if data.get('Database cluster state') == 'in production':
self.set_role('master')
self.set_role('primary')
return True
def _pre_promote(self) -> bool:
@@ -1197,7 +1196,7 @@ class Postgresql(object):
def promote(self, wait_seconds: int, task: CriticalTask,
before_promote: Optional[Callable[..., Any]] = None) -> Optional[bool]:
if self.role in ('promoted', 'master', 'primary'):
if self.role in ('promoted', 'primary'):
return True
ret = self._pre_promote()
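The callback hunk in this file is visible to operators: callback scripts are now invoked with ``role=primary`` instead of ``role=master``, so scripts that match on the role argument must be updated. A sketch of the argument vector construction, assuming the same ``shlex``-based splitting as the hunk above (hypothetical helper):

```python
import shlex

def callback_command(script, cb_type, role, scope):
    """Argument vector used to invoke a callback script.

    A freshly promoted node ('promoted') is reported to callbacks as
    'primary'; before this change it was reported as 'master'.
    """
    role = 'primary' if role == 'promoted' else role
    return shlex.split(script) + [cb_type, role, scope]
```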

@@ -301,7 +301,7 @@ class Bootstrap(object):
"datadir": self._postgresql.data_dir,
"connstring": connstring})
else:
for param in ('no_params', 'no_master', 'no_leader', 'keep_data'):
for param in ('no_params', 'no_leader', 'keep_data'):
method_config.pop(param, None)
params = ["--{0}={1}".format(arg, val) for arg, val in method_config.items()]
try:
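Note the behavioral consequence of this hunk: ``no_master`` is no longer a special key in declarative replica creation methods. A sketch of the argument filtering (hypothetical helper mirroring the loop above):

```python
def replica_method_args(method_config):
    """Command-line arguments forwarded to a custom replica creation method.

    Only 'no_params', 'no_leader' and 'keep_data' are stripped as special
    options; a leftover 'no_master' key is now passed through to the
    script verbatim as --no_master=... instead of being consumed.
    """
    method_config = dict(method_config)  # don't mutate the caller's config
    for param in ('no_params', 'no_leader', 'keep_data'):
        method_config.pop(param, None)
    return ['--{0}={1}'.format(arg, val) for arg, val in method_config.items()]
```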

@@ -1009,7 +1009,7 @@ class ConfigHandler(object):
synchronous_standby_names = self._server_parameters.get('synchronous_standby_names')
if synchronous_standby_names is None:
if global_config.is_synchronous_mode_strict\
and self._postgresql.role in ('master', 'primary', 'promoted'):
and self._postgresql.role in ('primary', 'promoted'):
parameters['synchronous_standby_names'] = '*'
else:
parameters.pop('synchronous_standby_names', None)
@@ -1277,7 +1277,7 @@ class ConfigHandler(object):
As a workaround we will start it with the values from controldata and set `pending_restart`
to true as an indicator that current values of parameters are not matching expectations."""
if self._postgresql.role in ('master', 'primary'):
if self._postgresql.role == 'primary':
return self._server_parameters
options_mapping = {

@@ -171,8 +171,7 @@ def main() -> None:
config_switch_parser.add_argument(
"role",
type=str,
choices=["master", "primary", "promoted", "standby_leader", "replica",
"demoted"],
choices=["primary", "promoted", "standby_leader", "replica", "demoted"],
help="Name of the new role of this node (automatically filled by "
"Patroni)",
)
@@ -210,7 +209,7 @@ def main() -> None:
choices=["promoted", "demoted", "always"],
help="Controls under which circumstances the 'on_role_change' callback "
"should actually switch config in Barman. 'promoted' means the "
"'role' is either 'master', 'primary' or 'promoted'. 'demoted' "
"'role' is either 'primary' or 'promoted'. 'demoted' "
"means the 'role' is either 'replica' or 'demoted' "
"(default: '%(default)s')",
dest="switch_when",

@@ -56,7 +56,7 @@ def _should_skip_switch(args: Namespace) -> bool:
:returns: if the operation should be skipped.
"""
if args.switch_when == "promoted":
return args.role not in {"master", "primary", "promoted"}
return args.role not in {"primary", "promoted"}
if args.switch_when == "demoted":
return args.role not in {"replica", "demoted"}
return False

@@ -343,7 +343,7 @@ def main() -> int:
parser.add_argument('--threshold_megabytes', type=int, default=10240)
parser.add_argument('--threshold_backup_size_percentage', type=int, default=30)
parser.add_argument('--use_iam', type=int, default=0)
parser.add_argument('--no_leader', '--no_master', type=int, default=0)
parser.add_argument('--no_leader', type=int, default=0)
args = parser.parse_args()
exit_code = None

@@ -595,10 +595,6 @@ class TestBarmanConfigSwitchCli(unittest.TestCase):
args = MagicMock()
for role, switch_when, expected in [
("master", "promoted", False),
("master", "demoted", True),
("master", "always", False),
("primary", "promoted", False),
("primary", "demoted", True),
("primary", "always", False),

@@ -262,9 +262,8 @@ class TestConsul(unittest.TestCase):
d['state'] = 'running'
d['role'] = 'bla'
self.assertIsNone(self.c.update_service({}, d))
for role in ('master', 'primary'):
d['role'] = role
self.assertTrue(self.c.update_service({}, d))
d['role'] = 'primary'
self.assertTrue(self.c.update_service({}, d))
@patch.object(consul.Consul.KV, 'put', Mock(side_effect=ConsulException))
def test_reload_config(self):

@@ -43,7 +43,7 @@ def get_default_config(*args):
@patch('patroni.ctl.load_config', get_default_config)
@patch('patroni.dcs.AbstractDCS.get_cluster', Mock(return_value=get_cluster_initialized_with_leader()))
class TestCtl(unittest.TestCase):
TEST_ROLES = ('master', 'primary', 'leader')
TEST_ROLES = ('primary', 'leader')
@patch('socket.getaddrinfo', socket_getaddrinfo)
def setUp(self):

@@ -338,13 +338,13 @@ class TestKubernetesConfigMaps(BaseTestKubernetes):
self.k.touch_member({'role': 'standby_leader'})
mock_patch_namespaced_pod.assert_called()
self.assertEqual(mock_patch_namespaced_pod.call_args[0][2].metadata.labels['isMaster'], 'false')
self.assertEqual(mock_patch_namespaced_pod.call_args[0][2].metadata.labels['tmp_role'], 'master')
self.assertEqual(mock_patch_namespaced_pod.call_args[0][2].metadata.labels['tmp_role'], 'primary')
mock_patch_namespaced_pod.rest_mock()
self.k.touch_member({'role': 'primary'})
mock_patch_namespaced_pod.assert_called()
self.assertEqual(mock_patch_namespaced_pod.call_args[0][2].metadata.labels['isMaster'], 'true')
self.assertEqual(mock_patch_namespaced_pod.call_args[0][2].metadata.labels['tmp_role'], 'master')
self.assertEqual(mock_patch_namespaced_pod.call_args[0][2].metadata.labels['tmp_role'], 'primary')
def test_initialize(self):
self.k.initialize()