Compare commits


11 Commits

Author SHA1 Message Date
Jeff McCune
0d7bbbb659 (#48) Disable pg spec.dataSource for standby cluster
Problem:
The standby cluster on k2 fails to start.  A pgbackrest pod first
restores the database from S3, then the pgha nodes try to replay the WAL
as part of the standby initialization process.  This fails because the
PGDATA directory is not empty.

Solution:
Specify the spec.dataSource field only when the cluster is configured as
a primary cluster.
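
A minimal sketch of the change, mirroring the postgres component diff
further down (here `Cluster.primary` stands in for the boolean the
platform schema computes from `primaryCluster.name`):

```
// Sketch only: assume this is a standby cluster such as k2.
let Cluster = {primary: false}

spec: {
	standby: {
		repoName: "repo2" // the external S3 repo
		if Cluster.primary {enabled: false}
		if !Cluster.primary {enabled: true}
	}
	// The pgbackrest restore job (dataSource) is emitted only on the primary,
	// so standby clusters go straight to patroni replaying the WAL.
	if Cluster.primary {
		dataSource: pgbackrest: {
			stanza: "db"
			repo: name: "repo2"
		}
	}
}
```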

Result:
Non-primary clusters are standby clusters; they skip the pgbackrest job
that restores from S3 and move straight to patroni replaying the WAL from
S3 in the pgha pods.

One of the two pgha pods becomes the "standby leader" and restores the
WAL from S3.  The other is a cascading standby and then restores the
same WAL from the standby leader.

After 8 minutes both pods are ready.

```
❯ k get pods
NAME                               READY   STATUS    RESTARTS   AGE
zitadel-pgbouncer-d9f8cffc-j469g   2/2     Running   0          11m
zitadel-pgbouncer-d9f8cffc-xq29g   2/2     Running   0          11m
zitadel-pgha1-27w7-0               4/4     Running   0          11m
zitadel-pgha1-c5qj-0               4/4     Running   0          11m
zitadel-repo-host-0                2/2     Running   0          11m
```
2024-03-11 17:56:47 -07:00
Jeff McCune
3f3e36bbe9 (#48) Split workload into foundation and accounts
Problem:
The k3 and k4 clusters are getting the Zitadel components, which are
really only intended for the core cluster pair.

Solution:
Split the workload subtree into two, named foundation and accounts.  The
core cluster pair gets foundation+accounts while the kX clusters get
just the foundation subtree.

Result:
prod-zitadel-iam is no longer managed on k3 and k4
2024-03-11 15:20:35 -07:00
Jeff McCune
9f41478d33 (#48) Restore from Monday morning after Gary and Nate registered
Set the restore point to `time="2024-03-11T17:08:58Z" level=info
msg="crunchy-pgbackrest ends"`, which is just after Gary and Nate
registered and were granted the cluster-admin role.
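
In the postgres component this timestamp becomes the pgbackrest restore
target; a minimal excerpt matching the diff below:

```
// Restore options. Set the timestamp to a known good point in time.
// time="2024-03-11T17:08:58Z" level=info msg="crunchy-pgbackrest ends"
let RestoreOptions = ["--type=time", "--target=\"2024-03-11 17:10:00+00\""]
dataSource: pgbackrest: options: RestoreOptions
```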
2024-03-11 10:18:45 -07:00
Jeff McCune
b86fee04fc (#48) v0.55.4 to rebuild k3, k4, k5 2024-03-11 08:48:07 -07:00
Jeff McCune
c78da6949f Merge pull request #51 from holos-run/jeff/48-zitadel-backups
(#48) Custom PGO Certs for Zitadel
2024-03-10 23:08:29 -07:00
Jeff McCune
7b215bb8f1 (#48) Custom PGO Certs for Zitadel
The [Streaming Standby][standby] architecture requires custom TLS certs
so the two clusters in the two regions can connect to each other.

This patch manages the custom certs following the configuration
described in the article [Using Cert Manager to Deploy TLS for Postgres
on Kubernetes][article].
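
Concretely, the PostgresCluster spec points at the cert-manager issued
secrets; a minimal excerpt matching the postgres component diff below:

```
spec: {
	// Certs issued by the shared root CA so the standby can authenticate to the primary.
	customTLSSecret: name: "zitadel-primary-tls"
	customReplicationTLSSecret: name: "zitadel-repl-tls"
}
```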

NOTE: One thing not mentioned anywhere in the Crunchy documentation is
how custom TLS certs interact with pgbouncer.  The pgbouncer service uses
a TLS certificate issued by the PGO root CA, not by the custom
certificate authority.

For this reason, we use kustomize to patch the zitadel Deployment and
the zitadel-init and zitadel-setup Jobs.  The patch projects the CA
bundle from the `zitadel-pgbouncer` secret into the zitadel pods at
`/pgbouncer/ca.crt`.
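
A condensed sketch of the patch, mirroring the `DatabaseCACertPatch` and
`#Kustomize` wiring in the zitadel component diff below:

```
package holos

import "encoding/yaml"

// JSON patch that mounts the pgbouncer frontend CA into the zitadel pods.
let DatabaseCACertPatch = [
	{
		op:   "add"
		path: "/spec/template/spec/volumes/-"
		value: {
			name: "pgbouncer"
			secret: {
				secretName: "zitadel-pgbouncer"
				items: [{key: "pgbouncer-frontend.ca-roots", path: "ca.crt"}]
			}
		}
	},
	{
		op:   "add"
		path: "/spec/template/spec/containers/0/volumeMounts/-"
		value: {name: "pgbouncer", mountPath: "/pgbouncer"}
	},
]

// Applied to the Deployment and the init/setup Jobs via kustomize patches.
#Kustomize: patches: [{
	target: {group: "apps", version: "v1", kind: "Deployment", name: "zitadel"}
	patch: yaml.Marshal(DatabaseCACertPatch)
}]
```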

[standby]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery#streaming-standby-with-an-external-repo
[article]: https://www.crunchydata.com/blog/using-cert-manager-to-deploy-tls-for-postgres-on-kubernetes
2024-03-10 22:54:06 -07:00
Jeff McCune
78cec76a96 (#48) Restore ZITADEL from point in time full backup
A full backup was taken using:

```
kubectl annotate postgrescluster zitadel postgres-operator.crunchydata.com/pgbackrest-backup="$(date)"
```

And completed with:

```
❯ k logs -f zitadel-backup-5r6v-v5jnm
time="2024-03-10T21:52:15Z" level=info msg="crunchy-pgbackrest starts"
time="2024-03-10T21:52:15Z" level=info msg="debug flag set to false"
time="2024-03-10T21:52:15Z" level=info msg="backrest backup command requested"
time="2024-03-10T21:52:15Z" level=info msg="command to execute is [pgbackrest backup --stanza=db --repo=2 --type=full]"
time="2024-03-10T21:55:18Z" level=info msg="crunchy-pgbackrest ends"
```

This patch verifies the point-in-time backup is robust in the face of
the following operations:

1. pg cluster zitadel was deleted (whole namespace emptied)
2. pg cluster zitadel was re-created _without_ a `dataSource`
3. pgo initialized a new database and backed up the blank database to
   S3.
4. pg cluster zitadel was deleted again.
5. pg cluster zitadel was re-created with `dataSource` `options: ["--type=time", "--target=\"2024-03-10 21:56:00+00\""]` (just after the full backup completed)
6. Restore completed successfully.
7. Applied the holos zitadel component.
8. Zitadel came up successfully and user login worked as expected.

- [x] Perform an in-place [restore][restore] from [S3][bucket].
- [x] Set `repo1-retention-full` to clear the warning.

[restore]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#restore-properties
[bucket]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#cloud-based-data-source
2024-03-10 17:42:54 -07:00
Jeff McCune
0e98ad2ecb (#48) Zitadel Backups
This patch configures backups suitable to support the [Streaming Standby
with an External Repo][0] architecture.

- [x] PGO [Multiple Backup Repositories][1] to k8s pv and s3 (see the sketch after this list).
- [x] [Encryption][2] of backups to S3.
- [x] [Remove SUPERUSER][3] role from the zitadel-admin pg user so it works with pgbouncer.  Resolves the zitadel-init job failure.
- [x] Take a [Manual Backup][5]
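
A condensed sketch of the resulting pgbackrest configuration (the full
version is in the postgres component diff below):

```
backups: pgbackrest: {
	configuration: [{secret: name: "pgo-s3-creds"}]
	global: {
		// One full backup on the PV; 14 days of encrypted fulls in the bucket.
		"repo1-retention-full":      "1"
		"repo2-retention-full":      "14"
		"repo2-retention-full-type": "time" // days
		"repo2-cipher-type":         "aes-256-cbc"
	}
	repos: [
		// Local PV repo.
		{
			name: "repo1"
			volume: volumeClaimSpec: {
				accessModes: ["ReadWriteOnce"]
				resources: requests: storage: "1Gi"
			}
		},
		// External S3 repo required for the streaming standby architecture.
		{
			name: "repo2"
			s3: {
				bucket:   "\(#Platform.org.name)-zitadel-backups"
				region:   #Backups.s3.region
				endpoint: "s3.dualstack.\(region).amazonaws.com"
			}
		},
	]
}
```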

[0]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery#streaming-standby-with-an-external-repo
[1]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups#set-up-multiple-backup-repositories
[2]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups#encryption
[3]: https://github.com/CrunchyData/postgres-operator/issues/3095#issuecomment-1904712211
[4]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#streaming-standby-with-an-external-repo
[5]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backup-management#taking-a-one-off-backup
2024-03-10 16:38:56 -07:00
Jeff McCune
30bb3f183a (#50) Describe type as strings to match others 2024-03-10 11:29:19 -07:00
Jeff McCune
1369338f3c (#50) Add -n shorthand for --namespace for secrets
It's annoying that `holos get secret -n foo` doesn't work the way
`kubectl get secret -n foo` does.

Closes: #50
2024-03-10 10:45:49 -07:00
Jeff McCune
ac03f64724 (#45) Configure ZITADEL to use pgbouncer 2024-03-09 09:44:33 -08:00
70 changed files with 487 additions and 176 deletions

View File

@@ -44,7 +44,8 @@ package holos
_name: string
_cluster: string
_wildcard: true | *false
metadata: name: string | *"\(_cluster)-\(_name)"
// Enforce this value
metadata: name: "\(_cluster)-\(_name)"
metadata: namespace: string | *"istio-ingress"
spec: {
commonName: string | *"\(_name).\(_cluster).\(#Platform.org.domain)"

View File

@@ -0,0 +1,13 @@
package holos
#InputKeys: component: "postgres-certs"
#KubernetesObjects & {
apiObjects: {
ExternalSecret: {
"\(_DBName)-primary-tls": _
"\(_DBName)-repl-tls": _
"\(_DBName)-client-tls": _
"\(_DBName)-root-ca": _
}
}
}

View File

@@ -0,0 +1,171 @@
package holos
#InputKeys: component: "postgres"
#DependsOn: "postgres-certs": _
let Cluster = #Platform.clusters[#ClusterName]
let S3Secret = "pgo-s3-creds"
let ZitadelUser = _DBName
let ZitadelAdmin = "\(_DBName)-admin"
// This must be an external storage bucket for our architecture.
let BucketRepoName = "repo2"
// Restore options. Set the timestamp to a known good point in time.
// time="2024-03-11T17:08:58Z" level=info msg="crunchy-pgbackrest ends"
let RestoreOptions = ["--type=time", "--target=\"2024-03-11 17:10:00+00\""]
#KubernetesObjects & {
apiObjects: {
ExternalSecret: "pgo-s3-creds": _
PostgresCluster: db: #PostgresCluster & HighlyAvailable & {
metadata: name: _DBName
metadata: namespace: #TargetNamespace
spec: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.2-0"
postgresVersion: 16
// Custom certs are necessary for streaming standby replication which we use to replicate between two regions.
// Refer to https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#streaming-standby
customTLSSecret: name: "\(_DBName)-primary-tls"
customReplicationTLSSecret: name: "\(_DBName)-repl-tls"
// Refer to https://access.crunchydata.com/documentation/postgres-operator/latest/references/crd/5.5.x/postgrescluster#postgresclusterspecusersindex
users: [
{name: ZitadelUser},
// NOTE: Users with SUPERUSER role cannot log in through pgbouncer. Use options that allow zitadel admin to use pgbouncer.
// Refer to: https://github.com/CrunchyData/postgres-operator/issues/3095#issuecomment-1904712211
{name: ZitadelAdmin, options: "CREATEDB CREATEROLE", databases: [_DBName, "postgres"]},
]
users: [...{databases: [_DBName, ...]}]
instances: [{
replicas: 2
dataVolumeClaimSpec: {
accessModes: ["ReadWriteOnce"]
resources: requests: storage: string | *"1Gi"
}
}]
standby: {
repoName: BucketRepoName
if Cluster.primary {
enabled: false
}
if !Cluster.primary {
enabled: true
}
}
// Restore from backup if and only if the cluster is primary
if Cluster.primary {
dataSource: pgbackrest: {
stanza: "db"
configuration: backups.pgbackrest.configuration
// Restore from known good full backup taken
options: RestoreOptions
global: {
"\(BucketRepoName)-path": "/pgbackrest/\(#TargetNamespace)/\(metadata.name)/\(BucketRepoName)"
"\(BucketRepoName)-cipher-type": "aes-256-cbc"
}
repo: {
name: BucketRepoName
s3: backups.pgbackrest.repos[1].s3
}
}
}
// Refer to https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups
backups: pgbackrest: {
configuration: [{secret: name: S3Secret}]
// Defines details for manual pgBackRest backup Jobs
manual: {
// Note: the repoName value must match the config keys in the S3Secret.
// This must be an external repository for backup / restore / regional failovers.
repoName: BucketRepoName
options: ["--type=full", ...]
}
// Defines details for performing an in-place restore using pgBackRest
restore: {
// Enables triggering a restore by annotating the postgrescluster with postgres-operator.crunchydata.com/pgbackrest-restore="$(date)"
enabled: true
repoName: BucketRepoName
}
global: {
// Store only one full backup in the PV because it's more expensive than object storage.
"\(repos[0].name)-retention-full": "1"
// Store 14 days of full backups in the bucket.
"\(BucketRepoName)-retention-full": string | *"14"
"\(BucketRepoName)-retention-full-type": "count" | *"time" // time in days
// Refer to https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups#encryption
"\(BucketRepoName)-cipher-type": "aes-256-cbc"
// "The convention we recommend for setting this variable is /pgbackrest/$NAMESPACE/$CLUSTER_NAME/repoN"
// Ref: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups#understanding-backup-configuration-and-basic-operations
"\(BucketRepoName)-path": "/pgbackrest/\(#TargetNamespace)/\(metadata.name)/\(manual.repoName)"
}
repos: [
{
name: "repo1"
volume: volumeClaimSpec: {
accessModes: ["ReadWriteOnce"]
resources: requests: storage: string | *"1Gi"
}
},
{
name: BucketRepoName
// Full backup weekly on Sunday at 1am, differential daily at 1am every day except Sunday.
schedules: full: string | *"0 1 * * 0"
schedules: differential: string | *"0 1 * * 1-6"
s3: {
bucket: string | *"\(#Platform.org.name)-zitadel-backups"
region: string | *#Backups.s3.region
endpoint: string | *"s3.dualstack.\(region).amazonaws.com"
}
},
]
}
}
}
}
}
// Refer to https://github.com/holos-run/postgres-operator-examples/blob/main/kustomize/high-availability/ha-postgres.yaml
let HighlyAvailable = {
apiVersion: "postgres-operator.crunchydata.com/v1beta1"
kind: "PostgresCluster"
metadata: name: string
spec: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.2-0"
postgresVersion: 16
instances: [{
name: "pgha1"
replicas: 2
dataVolumeClaimSpec: {
accessModes: ["ReadWriteOnce"]
resources: requests: storage: "1Gi"
}
affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: [{
weight: 1
podAffinityTerm: {
topologyKey: "kubernetes.io/hostname"
labelSelector: matchLabels: {
"postgres-operator.crunchydata.com/cluster": metadata.name
"postgres-operator.crunchydata.com/instance-set": name
}
}
}]
}]
backups: pgbackrest: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.49-0"
}
proxy: pgBouncer: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-pgbouncer:ubi8-1.21-3"
replicas: 2
affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: [{
weight: 1
podAffinityTerm: {
topologyKey: "kubernetes.io/hostname"
labelSelector: matchLabels: {
"postgres-operator.crunchydata.com/cluster": metadata.name
"postgres-operator.crunchydata.com/role": "pgbouncer"
}
}
}]
}
}
}

View File

@@ -9,12 +9,12 @@ package holos
{
name: "ZITADEL_DATABASE_POSTGRES_HOST"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)"
valueFrom: secretKeyRef: key: "host"
valueFrom: secretKeyRef: key: "pgbouncer-host"
},
{
name: "ZITADEL_DATABASE_POSTGRES_PORT"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)"
valueFrom: secretKeyRef: key: "port"
valueFrom: secretKeyRef: key: "pgbouncer-port"
},
{
name: "ZITADEL_DATABASE_POSTGRES_DATABASE"
@@ -35,26 +35,30 @@ package holos
// The postgres component configures privileged postgres user creds.
{
name: "ZITADEL_DATABASE_POSTGRES_ADMIN_USERNAME"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-postgres"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)-admin"
valueFrom: secretKeyRef: key: "user"
},
{
name: "ZITADEL_DATABASE_POSTGRES_ADMIN_PASSWORD"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-postgres"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)-admin"
valueFrom: secretKeyRef: key: "password"
},
// CA Cert issued by PGO which issued the pgbouncer tls cert
{
name: "ZITADEL_DATABASE_POSTGRES_USER_SSL_ROOTCERT"
value: "/\(_PGBouncer)/ca.crt"
},
{
name: "ZITADEL_DATABASE_POSTGRES_ADMIN_SSL_ROOTCERT"
value: "/\(_PGBouncer)/ca.crt"
},
]
// Refer to https://zitadel.com/docs/self-hosting/manage/database
zitadel: {
// Zitadel master key
masterkeySecretName: "zitadel-masterkey"
// Note the tls configuration is a challenge to use externally issued certs from the provisioner cluster.
// We intentionally use pgo managed certs and intend to backup the ca key to the provisioner and restore it for
// cross cluster replication. The problems seemed to arise from specifying the user and admin tls secrets in
// addition to the ca cert secret.
dbSslCaCrtSecret: "\(_DBName)-cluster-cert"
// dbSslCaCrtSecret: "pgo-root-cacert"
// All settings: https://zitadel.com/docs/self-hosting/manage/configure#runtime-configuration-file
// Helm interface: https://github.com/zitadel/zitadel-charts/blob/zitadel-7.4.0/charts/zitadel/values.yaml#L20-L21

View File

@@ -0,0 +1,102 @@
package holos
import "encoding/yaml"
let Name = "zitadel"
#InputKeys: component: Name
#DependsOn: postgres: _
// Upstream helm chart doesn't specify the namespace field for all resources.
#Kustomization: spec: targetNamespace: #TargetNamespace
#HelmChart & {
namespace: #TargetNamespace
chart: {
name: Name
version: "7.9.0"
repository: {
name: Name
url: "https://charts.zitadel.com"
}
}
values: #Values
apiObjects: {
ExternalSecret: "zitadel-masterkey": _
VirtualService: "\(Name)": {
metadata: name: Name
metadata: namespace: #TargetNamespace
spec: hosts: ["login.\(#Platform.org.domain)"]
spec: gateways: ["istio-ingress/default"]
spec: http: [{route: [{destination: host: Name}]}]
}
}
}
// TODO: Generalize this common pattern of injecting the istio sidecar into a Deployment
let IstioInject = [{op: "add", path: "/spec/template/metadata/labels/sidecar.istio.io~1inject", value: "true"}]
_PGBouncer: "pgbouncer"
let DatabaseCACertPatch = [
{
op: "add"
path: "/spec/template/spec/volumes/-"
value: {
name: _PGBouncer
secret: {
secretName: "\(_DBName)-pgbouncer"
items: [{key: "pgbouncer-frontend.ca-roots", path: "ca.crt"}]
}
}
},
{
op: "add"
path: "/spec/template/spec/containers/0/volumeMounts/-"
value: {
name: _PGBouncer
mountPath: "/" + _PGBouncer
}
},
]
#Kustomize: {
patches: [
{
target: {
group: "apps"
version: "v1"
kind: "Deployment"
name: Name
}
patch: yaml.Marshal(IstioInject)
},
{
target: {
group: "apps"
version: "v1"
kind: "Deployment"
name: Name
}
patch: yaml.Marshal(DatabaseCACertPatch)
},
{
target: {
group: "batch"
version: "v1"
kind: "Job"
name: "\(Name)-init"
}
patch: yaml.Marshal(DatabaseCACertPatch)
},
{
target: {
group: "batch"
version: "v1"
kind: "Job"
name: "\(Name)-setup"
}
patch: yaml.Marshal(DatabaseCACertPatch)
},
]
}

View File

@@ -11,11 +11,7 @@ package holos
githubConfigSecret: "controller-manager"
githubConfigUrl: "https://github.com/" + #Platform.org.github.orgs.primary.name
}
apiObjects: {
ExternalSecret: controller: #ExternalSecret & {
_name: values.githubConfigSecret
}
}
apiObjects: ExternalSecret: "\(values.githubConfigSecret)": _
chart: {
// Match the gha-base-name in the chart _helpers.tpl to avoid long full names.
// NOTE: Unfortunately the INSTALLATION_NAME is used as the helm release

View File

@@ -18,9 +18,7 @@ let Cert = #PlatformCerts[SecretName]
#KubernetesObjects & {
apiObjects: {
ExternalSecret: httpbin: #ExternalSecret & {
_name: Cert.spec.secretName
}
ExternalSecret: "\(Cert.spec.secretName)": _
Deployment: httpbin: #Deployment & {
metadata: Metadata
spec: selector: matchLabels: MatchLabels

View File

@@ -63,7 +63,7 @@ let RedirectMetaName = {
// https-redirect
_APIObjects: {
Gateway: {
httpsRedirect: #Gateway & {
"\(RedirectMetaName.name)": #Gateway & {
metadata: RedirectMetaName
spec: selector: GatewayLabels
spec: servers: [{
@@ -79,7 +79,7 @@ _APIObjects: {
}
}
VirtualService: {
httpsRedirect: #VirtualService & {
"\(RedirectMetaName.name)": #VirtualService & {
metadata: RedirectMetaName
spec: hosts: ["*"]
spec: gateways: [RedirectMetaName.name]

View File

@@ -0,0 +1,10 @@
package holos
// Components under this directory are part of this collection
#InputKeys: project: "iam"
// Shared dependencies for all components in this collection.
#DependsOn: _Namespaces
// Common Dependencies
_Namespaces: Namespaces: name: "\(#StageName)-secrets-namespaces"

View File

@@ -0,0 +1,101 @@
package holos
// Manage an Issuer for the database.
// Both cockroach and postgres handle tls database connections with cert manager
// PGO: https://github.com/CrunchyData/postgres-operator-examples/tree/main/kustomize/certmanager/certman
// CRDB: https://github.com/cockroachdb/helm-charts/blob/3dcf96726ebcfe3784afb526ddcf4095a1684aea/README.md?plain=1#L196-L201
// Refer to [Using Cert Manager to Deploy TLS for Postgres on Kubernetes](https://www.crunchydata.com/blog/using-cert-manager-to-deploy-tls-for-postgres-on-kubernetes)
#InputKeys: component: "postgres-certs"
let SelfSigned = "\(_DBName)-selfsigned"
let RootCA = "\(_DBName)-root-ca"
let Orgs = ["Database"]
#KubernetesObjects & {
apiObjects: {
// Put everything in the target namespace.
[_]: {
[Name=_]: {
metadata: name: Name
metadata: namespace: #TargetNamespace
}
}
Issuer: {
"\(SelfSigned)": #Issuer & {
_description: "Self signed issuer to issue ca certs"
metadata: name: SelfSigned
spec: selfSigned: {}
}
"\(RootCA)": #Issuer & {
_description: "Root signed intermediate ca to issue mtls database certs"
metadata: name: RootCA
spec: ca: secretName: RootCA
}
}
Certificate: {
"\(RootCA)": #Certificate & {
_description: "Root CA cert for database"
metadata: name: RootCA
spec: {
commonName: RootCA
isCA: true
issuerRef: group: "cert-manager.io"
issuerRef: kind: "Issuer"
issuerRef: name: SelfSigned
privateKey: algorithm: "ECDSA"
privateKey: size: 256
secretName: RootCA
subject: organizations: Orgs
}
}
"\(_DBName)-primary-tls": #DatabaseCert & {
// PGO managed name is "<cluster name>-cluster-cert" e.g. zitadel-cluster-cert
spec: {
commonName: "\(_DBName)-primary"
dnsNames: [
commonName,
"\(commonName).\(#TargetNamespace)",
"\(commonName).\(#TargetNamespace).svc",
"\(commonName).\(#TargetNamespace).svc.cluster.local",
"localhost",
"127.0.0.1",
]
usages: ["digital signature", "key encipherment"]
}
}
"\(_DBName)-repl-tls": #DatabaseCert & {
spec: {
commonName: "_crunchyrepl"
dnsNames: [commonName]
usages: ["digital signature", "key encipherment"]
}
}
"\(_DBName)-client-tls": #DatabaseCert & {
spec: {
commonName: "\(_DBName)-client"
dnsNames: [commonName]
usages: ["digital signature", "key encipherment"]
}
}
}
}
}
#DatabaseCert: #Certificate & {
metadata: name: string
metadata: namespace: #TargetNamespace
spec: {
duration: "2160h" // 90d
renewBefore: "360h" // 15d
issuerRef: group: "cert-manager.io"
issuerRef: kind: "Issuer"
issuerRef: name: RootCA
privateKey: algorithm: "ECDSA"
privateKey: size: 256
secretName: metadata.name
subject: organizations: Orgs
}
}

View File

@@ -0,0 +1,7 @@
# Database Certs
This component issues postgres certificates from the provisioner cluster using certmanager.
The purpose is to define customTLSSecret and customReplicationTLSSecret to provide certs that allow the standby to authenticate to the primary. For this type of standby, you must use custom TLS.
Refer to the PGO [Streaming Standby](https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#streaming-standby) tutorial.

View File

@@ -0,0 +1,6 @@
package holos
#TargetNamespace: #InstancePrefix + "-zitadel"
// _DBName is the database name used across multiple holos components in this project
_DBName: "zitadel"

View File

@@ -1,91 +0,0 @@
package holos
#InputKeys: component: "postgres"
#KubernetesObjects & {
apiObjects: {
PostgresCluster: db: #PostgresCluster & HighlyAvailable & {
metadata: name: _DBName
metadata: namespace: #TargetNamespace
spec: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.2-0"
postgresVersion: 16
users: [
{name: "postgres"},
{name: _DBName},
]
users: [...{databases: [_DBName]}]
instances: [{
replicas: 2
dataVolumeClaimSpec: {
accessModes: ["ReadWriteOnce"]
resources: requests: storage: "1Gi"
}
}]
backups: pgbackrest: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.49-0"
repos: [{
name: "repo1"
volume: volumeClaimSpec: {
accessModes: ["ReadWriteOnce"]
resources: requests: storage: "1Gi"
}
}]
}
}
}
}
}
// Refer to https://github.com/holos-run/postgres-operator-examples/blob/main/kustomize/high-availability/ha-postgres.yaml
let HighlyAvailable = {
apiVersion: "postgres-operator.crunchydata.com/v1beta1"
kind: "PostgresCluster"
metadata: name: string | *"hippo-ha"
spec: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.2-0"
postgresVersion: 16
instances: [{
name: "pgha1"
replicas: 2
dataVolumeClaimSpec: {
accessModes: ["ReadWriteOnce"]
resources: requests: storage: "1Gi"
}
affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: [{
weight: 1
podAffinityTerm: {
topologyKey: "kubernetes.io/hostname"
labelSelector: matchLabels: {
"postgres-operator.crunchydata.com/cluster": "hippo-ha"
"postgres-operator.crunchydata.com/instance-set": "pgha1"
}
}
}]
}]
backups: pgbackrest: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.49-0"
repos: [{
name: "repo1"
volume: volumeClaimSpec: {
accessModes: ["ReadWriteOnce"]
resources: requests: storage: "1Gi"
}
}]
}
proxy: pgBouncer: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-pgbouncer:ubi8-1.21-3"
replicas: 2
affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: [{
weight: 1
podAffinityTerm: {
topologyKey: "kubernetes.io/hostname"
labelSelector: matchLabels: {
"postgres-operator.crunchydata.com/cluster": "hippo-ha"
"postgres-operator.crunchydata.com/role": "pgbouncer"
}
}
}]
}
}
}

View File

@@ -1,52 +0,0 @@
package holos
import "encoding/yaml"
let Name = "zitadel"
#InputKeys: component: Name
// Upstream helm chart doesn't specify the namespace field for all resources.
#Kustomization: spec: targetNamespace: #TargetNamespace
#HelmChart & {
namespace: #TargetNamespace
chart: {
name: Name
version: "7.9.0"
repository: {
name: Name
url: "https://charts.zitadel.com"
}
}
values: #Values
apiObjects: {
ExternalSecret: masterkey: #ExternalSecret & {
_name: "zitadel-masterkey"
}
VirtualService: zitadel: #VirtualService & {
metadata: name: Name
metadata: namespace: #TargetNamespace
spec: hosts: ["login.\(#Platform.org.domain)"]
spec: gateways: ["istio-ingress/default"]
spec: http: [{route: [{destination: host: Name}]}]
}
}
}
// TODO: Generalize this common pattern of injecting the istio sidecar into a Deployment
let Patch = [{op: "add", path: "/spec/template/metadata/labels/sidecar.istio.io~1inject", value: "true"}]
#Kustomize: {
patches: [
{
target: {
group: "apps"
version: "v1"
kind: "Deployment"
name: Name
}
patch: yaml.Marshal(Patch)
},
]
}

View File

@@ -82,6 +82,7 @@ _apiVersion: "holos.run/v1alpha1"
}
#NamespaceObject: #ClusterObject & {
metadata: name: string
metadata: namespace: string
...
}
@@ -237,19 +238,40 @@ _apiVersion: "holos.run/v1alpha1"
pool?: string
// region is the geographic region of the cluster.
region?: string
// primary is true if name matches the primaryCluster name
primary: bool
}
// #Platform defines the primary lookup table for the platform. Lookup keys should be limited to those defined in #KeyTags.
#Platform: {
// org holds user defined values scoped organization wide. A platform has one and only one organization.
org: {
name: string
// e.g. "example"
name: string
// e.g. "example.com"
domain: string
contact: email: string
// e.g. "Example"
displayName: string
// e.g. "platform@example.com"
contact: email: string
// e.g. "platform@example.com"
cloudflare: email: string
// e.g. "example"
github: orgs: primary: name: string
}
clusters: [ID=_]: #ClusterSpec & {
name: string & ID
// Only one cluster may be primary at a time. All others are standby.
// Refer to [repo based standby](https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#repo-based-standby)
primaryCluster: {
name: string
}
clusters: [Name=_]: #ClusterSpec & {
name: string & Name
if Name == primaryCluster.name {
primary: true
}
if Name != primaryCluster.name {
primary: false
}
}
stages: [ID=_]: {
name: string & ID
@@ -263,6 +285,15 @@ _apiVersion: "holos.run/v1alpha1"
}
}
// #Backups defines backup configuration.
// TODO: Consider the best place for this, possibly as part of the site platform config. This represents the primary location for backups.
#Backups: {
s3: {
region: string
endpoint: string | *"s3.dualstack.\(region).amazonaws.com"
}
}
// #APIObjects is the output type for api objects produced by cue. A map is used to aid debugging and clarity.
#APIObjects: {
// apiObjects holds each of the api objects produced by cue.
@@ -272,6 +303,9 @@ _apiVersion: "holos.run/v1alpha1"
kind: Kind
}
}
ExternalSecret?: [Name=_]: #ExternalSecret & {_name: Name}
VirtualService?: [Name=_]: #VirtualService & {metadata: name: Name}
Issuer?: [Name=_]: #Issuer & {metadata: name: Name}
}
// apiObjectMap holds the marshalled representation of apiObjects
@@ -428,6 +462,12 @@ _apiVersion: "holos.run/v1alpha1"
...
}
// Certificate name should always match the secret name.
#Certificate: {
metadata: name: _
spec: secretName: metadata.name
}
// By default, render kind: Skipped so holos knows to skip over intermediate cue files.
// This enables the use of holos render ./foo/bar/baz/... when bar contains intermediary constraints which are not complete components.
// Holos skips over these intermediary cue instances.

View File

@@ -33,7 +33,7 @@ func NewCreateCmd(hc *holos.Config) *cobra.Command {
cfg.trimTrailingNewlines = flagSet.Bool("trim-trailing-newlines", true, "trim trailing newlines if true")
cmd.Flags().SortFlags = false
cmd.Flags().AddGoFlagSet(flagSet)
cmd.Flags().AddFlagSet(flagSet)
cmd.RunE = makeCreateRunFunc(hc, cfg)
return cmd

View File

@@ -31,7 +31,7 @@ func NewGetCmd(hc *holos.Config) *cobra.Command {
cfg.extractTo = flagSet.String("extract-to", ".", "extract to directory")
cmd.Flags().SortFlags = false
cmd.Flags().AddGoFlagSet(flagSet)
cmd.Flags().AddFlagSet(flagSet)
cmd.RunE = makeGetRunFunc(hc, cfg)
return cmd
}

View File

@@ -1,8 +1,8 @@
package secret
import (
"flag"
"github.com/holos-run/holos/pkg/holos"
"github.com/spf13/pflag"
)
const NameLabel = "holos.run/secret.name"
@@ -24,10 +24,10 @@ type config struct {
extractTo *string
}
func newConfig() (*config, *flag.FlagSet) {
func newConfig() (*config, *pflag.FlagSet) {
cfg := &config{}
flagSet := flag.NewFlagSet("", flag.ContinueOnError)
cfg.namespace = flagSet.String("namespace", holos.DefaultProvisionerNamespace, "namespace in the provisioner cluster")
flagSet := pflag.NewFlagSet("", pflag.ContinueOnError)
cfg.namespace = flagSet.StringP("namespace", "n", holos.DefaultProvisionerNamespace, "namespace in the provisioner cluster")
cfg.cluster = flagSet.String("cluster-name", "", "cluster name selector")
return cfg, flagSet
}

View File

@@ -13,6 +13,11 @@ func (i *StringSlice) String() string {
return fmt.Sprint(*i)
}
// Type implements the pflag.Value interface and describes the type.
func (i *StringSlice) Type() string {
return "strings"
}
// Set implements the flag.Value interface.
func (i *StringSlice) Set(value string) error {
for _, str := range strings.Split(value, ",") {

View File

@@ -1 +1 @@
55
56

View File

@@ -1 +1 @@
2
0