Compare commits

...

52 Commits

Author SHA1 Message Date
Jeff McCune
b86fee04fc (#48) v0.55.4 to rebuild k3, k4, k5 2024-03-11 08:48:07 -07:00
Jeff McCune
c78da6949f Merge pull request #51 from holos-run/jeff/48-zitadel-backups
(#48) Custom PGO Certs for Zitadel
2024-03-10 23:08:29 -07:00
Jeff McCune
7b215bb8f1 (#48) Custom PGO Certs for Zitadel
The [Streaming Standby][standby] architecture requires custom tls certs
for two clusters in two regions to connect to each other.

This patch manages the custom certs following the configuration
described in the article [Using Cert Manager to Deploy TLS for Postgres
on Kubernetes][article].

NOTE: One thing not mentioned anywhere in the crunchy documentation is
how custom tls certs work with pgbouncer.  The pgbouncer service uses a
tls certificate issued by the pgo root cert, not by the custom
certificate authority.

For this reason, we use kustomize to patch the zitadel Deployment and
the zitadel-init and zitadel-setup Jobs.  The patch projects the ca
bundle from the `zitadel-pgbouncer` secret into the zitadel pods at
/pgbouncer/ca.crt

[standby]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery#streaming-standby-with-an-external-repo
[article]: https://www.crunchydata.com/blog/using-cert-manager-to-deploy-tls-for-postgres-on-kubernetes
2024-03-10 22:54:06 -07:00
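A strategic-merge patch along these lines projects the CA bundle (a hedged sketch; the container name and the key inside the `zitadel-pgbouncer` secret are assumptions, not taken from the repository):

```yaml
# kustomization.yaml fragment (sketch); the same patch shape applies to the
# zitadel-init and zitadel-setup Jobs.
patches:
  - target:
      kind: Deployment
      name: zitadel
    patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: zitadel
      spec:
        template:
          spec:
            containers:
              - name: zitadel            # assumed container name
                volumeMounts:
                  - name: pgbouncer-ca
                    mountPath: /pgbouncer
                    readOnly: true
            volumes:
              - name: pgbouncer-ca
                secret:
                  secretName: zitadel-pgbouncer
                  items:
                    - key: pgbouncer-frontend.ca-roots  # assumed key name
                      path: ca.crt
```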
Jeff McCune
78cec76a96 (#48) Restore ZITADEL from point in time full backup
A full backup was taken using:

```
kubectl annotate postgrescluster zitadel postgres-operator.crunchydata.com/pgbackrest-backup="$(date)"
```

And completed with:

```
❯ k logs -f zitadel-backup-5r6v-v5jnm
time="2024-03-10T21:52:15Z" level=info msg="crunchy-pgbackrest starts"
time="2024-03-10T21:52:15Z" level=info msg="debug flag set to false"
time="2024-03-10T21:52:15Z" level=info msg="backrest backup command requested"
time="2024-03-10T21:52:15Z" level=info msg="command to execute is [pgbackrest backup --stanza=db --repo=2 --type=full]"
time="2024-03-10T21:55:18Z" level=info msg="crunchy-pgbackrest ends"
```

This patch verifies the point in time backup is robust in the face of
the following operations:

1. pg cluster zitadel was deleted (whole namespace emptied)
2. pg cluster zitadel was re-created _without_ a `dataSource`
3. pgo initialized a new database and backed up the blank database to
   S3.
4. pg cluster zitadel was deleted again.
5. pg cluster zitadel was re-created with `dataSource` `options: ["--type=time", "--target=\"2024-03-10 21:56:00+00\""]` (Just after the full backup completed)
6. Restore completed successfully.
7. Applied the holos zitadel component.
8. Zitadel came up successfully and user login worked as expected.

- [x] Perform an in place [restore][restore] from [s3][bucket].
- [x] Set repo1-retention-full to clear warning

[restore]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#restore-properties
[bucket]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#cloud-based-data-source
2024-03-10 17:42:54 -07:00
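Step 5 corresponds roughly to a `dataSource` stanza like the following on the PostgresCluster (a hedged sketch following the PGO disaster-recovery docs; the repo name and `options` values come from the steps above, everything else is a placeholder):

```yaml
spec:
  dataSource:
    pgbackrest:
      stanza: db
      repo:
        name: repo2
        s3:
          bucket: example-bucket      # placeholder
          endpoint: s3.example.com    # placeholder
          region: us-east-1           # placeholder
      options:
        - --type=time
        - --target="2024-03-10 21:56:00+00"
```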
Jeff McCune
0e98ad2ecb (#48) Zitadel Backups
This patch configures backups suitable to support the [Streaming Standby
with an External Repo][0] architecture.

- [x] PGO [Multiple Backup Repositories][1] to k8s pv and s3.
- [x] [Encryption][2] of backups to S3.
- [x] [Remove SUPERUSER][3] role from zitadel-admin pg user to work with pgbouncer.  Resolves zitadel-init job failure.
- [x] Take a [Manual Backup][5]

[0]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery#streaming-standby-with-an-external-repo
[1]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups#set-up-multiple-backup-repositories
[2]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups#encryption
[3]: https://github.com/CrunchyData/postgres-operator/issues/3095#issuecomment-1904712211
[4]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#streaming-standby-with-an-external-repo
[5]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backup-management#taking-a-one-off-backup
2024-03-10 16:38:56 -07:00
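Combined, the checklist above amounts to a `backups.pgbackrest` spec roughly like this (a sketch after the PGO backup tutorial; secret, bucket, and size values are placeholders):

```yaml
spec:
  backups:
    pgbackrest:
      configuration:
        - secret:
            name: pgo-s3-creds          # S3 key and cipher passphrase; placeholder name
      global:
        repo1-retention-full: "2"       # clears the retention warning
        repo2-cipher-type: aes-256-cbc  # encrypt backups sent to S3
      repos:
        - name: repo1                   # in-cluster persistent volume
          volume:
            volumeClaimSpec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 10Gi
        - name: repo2                   # external S3 repo for streaming standby
          s3:
            bucket: example-bucket      # placeholder
            endpoint: s3.example.com    # placeholder
            region: us-east-1           # placeholder
```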
Jeff McCune
30bb3f183a (#50) Describe type as strings to match others 2024-03-10 11:29:19 -07:00
Jeff McCune
1369338f3c (#50) Add -n shorthand for --namespace for secrets
It's annoying that `holos get secret -n foo` doesn't work like
`kubectl get secret -n foo` does.

Closes: #50
2024-03-10 10:45:49 -07:00
Jeff McCune
ac03f64724 (#45) Configure ZITADEL to use pgbouncer 2024-03-09 09:44:33 -08:00
Jeff McCune
bea4468972 (#42) Remove cert manager db ca components
Simpler to let postgres manage the certs.  TLS is in verify-full mode
with the pgo configured certs.
2024-03-08 21:34:26 -08:00
Jeff McCune
224adffa15 (#42) Add holos components for zitadel with postgres
To establish the canonical https://login.ois.run identity issuer on the
core cluster pair.

Custom resources for PGO have been imported with:

    timoni mod vendor crds -f deploy/clusters/core2/components/prod-pgo-crds/prod-pgo-crds.gen.yaml

Note, the zitadel tls connection took some considerable effort to get
working.  We intentionally use pgo issued certs to reduce the toil of
managing certs issued by cert manager.

The default tls configuration of pgo is pretty good with verify full
enabled.
2024-03-08 21:29:25 -08:00
Jeff McCune
b4d34ffdbc (#42) Fix incorrect ceph pool for core2 cluster
The core2 cluster cannot provision pvcs because it's using the k8s-dev
pool when it has credentials valid only for the k8s-prod pool.

This patch adds an entry to the platform cluster map to configure the
pool for each cluster, with a default of k8s-dev.
2024-03-08 13:14:27 -08:00
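In CUE such a cluster map can be sketched as follows (field names are hypothetical; the actual platform schema is not shown in this diff):

```cue
package holos

// Each cluster gets a ceph pool, defaulting to k8s-dev.
#ClusterPools: [ClusterName=string]: pool: string | *"k8s-dev"

// core2 holds credentials valid only for the prod pool.
#ClusterPools: core2: pool: "k8s-prod"
```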
Jeff McCune
a85db9cf5e (#42) Add KustomizeBuild holos component type to install pgo
PGO uses plain yaml and kustomize as the recommended installation
method.  Holos supports upstream by adding a new PlainFiles component
kind, which simply copies files into place and lets kustomize handle the
generation of the api objects.

Cue is responsible for very little in this kind of component, basically
allowing overlay resources if needed and deferring everything else to
the holos cli.

The holos cli in turn is responsible for executing kubectl kustomize
build on the input directory to produce the rendered output, then writes
the rendered output into place.
2024-03-08 11:27:42 -08:00
Jeff McCune
990c82432c (#40) Fix go releaser with standard arc runners
Standard arc runner image is missing gpg and git.
2024-03-07 22:59:15 -08:00
Jeff McCune
e3673b594c Merge pull request #41 from holos-run/jeff/40-actions-runners
(#40) Actions Runner Controller (Runner Scale Sets)
2024-03-07 22:43:16 -08:00
Jeff McCune
f8cf278a24 (#40) bump to v0.54.0 2024-03-07 22:37:51 -08:00
Jeff McCune
b0bc596a49 (#40) Update workflow to run on arc runner set
Matches the value of the github/arc/runner component helm release, which
is the installation name.
2024-03-07 22:37:51 -08:00
Jeff McCune
4501ceec05 (#40) Use baseline security context for GitHub arc
Without this patch the arc controller fails to create a listener.  The
template for the listener doesn't appear to be configurable from the
chart.

Could patch the listener pod template with kustomize, do this as a
follow up feature.

With this patch we get the expected two pods in the runner system
namespace:

```
❯ k get pods
NAME                                 READY   STATUS    RESTARTS   AGE
gha-rs-7db9c9f7-listener             1/1     Running   0          43s
gha-rs-controller-56bb9c77d9-6tjch   1/1     Running   0          8s
```
2024-03-07 22:37:50 -08:00
Jeff McCune
4183fdfd42 (#40) Note the helm release name is the installation name
Which is the value of the `runs-on` field in workflows.
2024-03-07 22:37:50 -08:00
Jeff McCune
2595793019 (#40) Do not force the namespace with kustomize
To avoid confining the custom resource definitions to a namespace.
2024-03-07 22:37:50 -08:00
Jeff McCune
aa3d1914b1 (#40) Manage the actions runner scale sets 2024-03-07 22:37:49 -08:00
Jeff McCune
679ddbb6bf (#40) Use Restricted pod security for arc runners
Might as well put the restriction in place before deploying the runners
to see what breaks.
2024-03-07 22:37:49 -08:00
Jeff McCune
b1d7d07a04 (#40) Add field for helm chart release name
The resource names for the arc controller are too long:

❯ k get pods -n arc-systems
NAME                                                              READY   STATUS    RESTARTS   AGE
gha-runner-scale-set-controller-gha-rs-controller-6bdf45bd6jx5n   1/1     Running   0          59m

Solve the problem by allowing components to set the release name to
`gha-rs-controller` which requires an additional field from the cue code
to differentiate from the chart name.
2024-03-07 20:40:31 -08:00
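Conceptually the chart spec gains an optional release field that defaults to the chart name (a sketch consistent with the `release: name` usage in the HelmChart test fixture later in this diff):

```cue
chart: {
	name:    "gha-runner-scale-set-controller"
	version: string
	// Helm release name; set to a short value like "gha-rs-controller"
	// to keep generated resource names under the 63-character limit.
	release: string | *name
}
```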
Jeff McCune
5f58263232 (#40) Create arc namespaces
Named after the upstream install guide, though arc-systems makes me
twitch for arc-system.
2024-03-07 20:37:35 -08:00
Jeff McCune
b6bdd072f7 (#40) Include crds when running helm template
Might need to make this a configurable option, but for now just always
do it.
2024-03-07 20:37:35 -08:00
Jeff McCune
509f2141ac (#40) Actions Runner Controller
This patch adds support for helm oci images which are used by the
gha-runner-scale-set-controller.

For example, arc is installed normally with:

```
NAMESPACE="arc-systems"
helm install arc \
    --namespace "${NAMESPACE}" \
    --create-namespace \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
```

This patch caches the oci image in the same way as the repository based
method.

Refer to: https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/quickstart-for-actions-runner-controller
2024-03-07 20:37:35 -08:00
Jeff McCune
4c2bc34d58 (#32) SecretStore Component
Separate the SecretStore resources from the namespaces component because
it creates a deadlock.  The secretstore crds don't get applied until the
eso component is managed.

The namespaces component should have nothing but core api objects, no
custom resources.
2024-03-07 16:01:22 -08:00
Jeff McCune
d831070f53 Trim trailing newlines from files when creating secrets
Without this patch, the pattern of echoing data (without -n) or editing
files in a directory to represent the keys of a secret results in a
trailing newline in the kubernetes Secret.

This patch trims off the trailing newline by default, with the option to
preserve it with the --trim-trailing-newlines=false flag.
2024-03-06 11:21:32 -08:00
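The default behavior can be sketched in Go (a minimal sketch, not the actual holos implementation; the function name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// trimValue drops a single trailing newline from a secret value,
// mirroring the default --trim-trailing-newlines=true behavior
// described above. Passing trim=false preserves the data verbatim.
func trimValue(data string, trim bool) string {
	if trim {
		return strings.TrimSuffix(data, "\n")
	}
	return data
}

func main() {
	fmt.Printf("%q\n", trimValue("hunter2\n", true))  // "hunter2"
	fmt.Printf("%q\n", trimValue("hunter2\n", false)) // "hunter2\n"
}
```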
Jeff McCune
340715f76c (#36) Provide certs to Cockroach DB and Zitadel with ExternalSecrets
This patch switches CockroachDB to use certs provided by ExternalSecrets
instead of managing Certificate resources in-cluster from the upstream
helm chart.

This paves the way for multi-cluster replication by moving certificates
outside of the lifecycle of the workload cluster cockroach db operates
within.

Closes: #36
2024-03-06 10:38:47 -08:00
Jeff McCune
64ffacfc7a (#36) Add Cockroach Issuer for Zitadel to provisioner cluster
Issuing mtls certs for cockroach db moves to the provisioner cluster so
we can more easily support cross cluster replication in the future.
crdb certs will be synced same as public tls certs, using ExternalSecret
resources.
2024-03-06 09:36:20 -08:00
Nate McCurdy
54acea42cb Merge pull request #37 from holos-run/nate/preflight
Add 'holos preflight' command, check for GitHub CLI
2024-03-06 09:32:54 -08:00
Nate McCurdy
5ef8e75194 Fix Actions warning during Lint by updating golangci-lint-action
Warning:
> Node.js 16 actions are deprecated. Please update the following actions to use Node.js 20: golangci/golangci-lint-action@v3. For more information see: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/.
2024-03-05 17:42:30 -08:00
Nate McCurdy
cb2b5c0f49 Add the 'preflight' subcommand; check for GitHub access
This adds a new holos subcommand: preflight

Initially, this just checks that the GitHub CLI is installed and
authenticated.

The preflight command will be used to validate that the user has the
necessary CLI tools, access, and authorization to start using Holos and
set up a Holos cluster.
2024-03-05 17:40:08 -08:00
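The check can be sketched in Go with the standard library (a sketch under the assumption that preflight shells out to the `gh` binary; function names are hypothetical):

```go
package main

import (
	"fmt"
	"os/exec"
)

// checkCLI returns an error when the named command is not on PATH.
// Preflight applies this to "gh"; verifying authentication would then
// run `gh auth status` and inspect its exit code.
func checkCLI(name string) error {
	if _, err := exec.LookPath(name); err != nil {
		return fmt.Errorf("%s not found on PATH: %w", name, err)
	}
	return nil
}

func main() {
	if err := checkCLI("gh"); err != nil {
		fmt.Println("preflight:", err)
		return
	}
	fmt.Println("preflight: gh found")
}
```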
Jeff McCune
fd5a2fdbc1 (#36) Sync certs as ExternalSecrets from workload clusters
This patch replaces the httpbin and login cert on the workload clusters
with an ExternalSecret to sync the tls cert from the provisioner
cluster.
2024-03-05 17:05:10 -08:00
Jeff McCune
eb3e272612 (#36) Dynamically generate cluster certs from Platform spec
Each cluster should be more or less identical, configure certs from the
dynamic list of platform clusters.
2024-03-05 16:44:35 -08:00
Nate McCurdy
9f2a51bde8 Move the RunCmd function to the util package
More than one Holos package needs to execute commands, so pull out the
runCmd from builder and move it to the util package.

This commit adds the following to the util package:
* util.RunCmd func
* util.runResult struct
2024-03-05 15:12:14 -08:00
Jeff McCune
2b3b5a4887 (#36) Issue login and httpbin certs
This patch uses cert manager in the provisioner cluster to provision tls
certs for https://login.example.com and https://httpbin.k2.example.com

The certs are not yet synced to the clusters.  Next step is to replace
the Certificate resources with ExternalSecret resources, then remove
cert manager from the workload clusters.
2024-03-05 14:27:37 -08:00
Jeff McCune
7426e8f867 (#36) Move cert-manager to the provisioner cluster
This patch moves certificate management to the provisioner cluster to
centralize all secrets into the highly secured cluster.  This change
also simplifies the architecture in a number of ways:

1. Certificate lifetimes are now completely independent of cluster
   lifecycle.
2. Remove the need for bi-directional sync to save cert secrets.
3. Workload clusters no longer need access to DNS.
2024-03-05 12:51:58 -08:00
Jeff McCune
cf0c455aa2 (#34) Add test for print secret data 2024-03-05 11:14:37 -08:00
Jeff McCune
752a3f912d (#34) Remove debug info logs 2024-03-05 11:05:51 -08:00
Jeff McCune
7d5852d675 (#34) Print secret data as json
Closes: #34
2024-03-05 11:03:47 -08:00
Jeff McCune
66b4ca0e6c (#31) Fix helm missing in actions workflow
Causing test failures that should pass.
2024-03-05 10:11:43 -08:00
Jeff McCune
b3f682453d (#31) Inject istio sidecar into Deployment zitadel using Kustomize
Multiple holos components rely on kustomize to modify the output of the
upstream helm chart, for example patching a Deployment to inject the
istio sidecar.

The new holos cue based component system did not support running
kustomize after helm template.  This patch adds the kustomize execution
if two fields are defined in the helm chart kind of cue output.

The API spec is pretty loose in this patch but I'm proceeding for
expedience and to inform the final API with more use cases as more
components are migrated to cue.
2024-03-05 09:56:39 -08:00
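Conceptually, the rendered helm output becomes a kustomize resource that a patch then modifies, e.g. injecting the istio sidecar label (a sketch; file names and the patch target are illustrative, not the holos API):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - helm.gen.yaml            # output of `helm template`; name assumed
patches:
  - target:
      kind: Deployment
      name: zitadel
    patch: |-
      - op: add
        path: /spec/template/metadata/labels/sidecar.istio.io~1inject
        value: "true"
```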
Jeff McCune
0c3181ae05 (#31) Add VirtualService for Zitadel
Also import the Kustomize types using:

    cue get go sigs.k8s.io/kustomize/api/types/...
2024-03-04 17:18:46 -08:00
Jeff McCune
18cbff0c13 (#31) Add tls cert for zitadel to connect to cockroach db
Cockroach DB uses tls certs for client authentication.  Issue one for
Zitadel.

With this patch Zitadel starts up but is not yet exposed with a
VirtualService.

Refer to https://zitadel.com/docs/self-hosting/manage/configure
2024-03-04 14:46:49 -08:00
Jeff McCune
b4fca0929c (#31) ExternalSecret for zitadel-masterkey 2024-03-04 14:31:27 -08:00
Jeff McCune
911d65bdc6 (#31) Setup login.ois.run with basic istio default Gateway
The istio default Gateway is the basis for what will become a dynamic
set of server entries specified from cue project data integrated with
extauthz.

For now we simply need to get the identity provider up and running as
the first step toward identity and access management.
2024-03-04 13:59:17 -08:00
Jeff McCune
2a5eccf0c1 (#33) Helm stderr logging
Log error messages from helm when building and rendering holos
components.

Closes: #33
2024-03-04 13:16:51 -08:00
Jeff McCune
9db4873205 (#31) Add Cockroach DB for Zitadel
Following https://github.com/zitadel/zitadel-charts/blob/main/examples/4-cockroach-secure/README.md
2024-03-04 10:31:39 -08:00
Jeff McCune
f90e83e142 (#30) Add httpbin Gateway and VirtualService
There isn't a default Gateway yet, so use a specific `httpbin` gateway
to test istio instead.
2024-03-02 21:12:03 -08:00
Jeff McCune
bdd2964edb (#30) Add httpbin Service for ns istio-ingress 2024-03-02 20:39:55 -08:00
Jeff McCune
56375b82d8 (#30) Fix httpbin Deployment selector match labels
Without this patch the deployment fails with:

```
Deployment/istio-ingress/httpbin dry-run failed, reason: Invalid:
Deployment.apps "httpbin" is invalid: spec.template.metadata.labels:
Invalid value:
map[string]string{"app.kubernetes.io/component":"httpbin",
"app.kubernetes.io/instance":"prod-mesh-httpbin",
"app.kubernetes.io/name":"mesh", "app .kubernetes.io/part-of":"prod",
"holos.run/component.name":"httpbin", "holos.run/project.name":"mesh",
"holos.run/stage.name":"prod", "sidecar.istio.io/inject":"true"}:
`selector` does not match template `labels`
```
2024-03-02 20:23:23 -08:00
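The underlying rule is that every key in `spec.selector.matchLabels` must also appear with the same value in `spec.template.metadata.labels`; extra template labels are allowed. A fragment for illustration:

```yaml
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: mesh
      app.kubernetes.io/component: httpbin
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mesh
        app.kubernetes.io/component: httpbin
        sidecar.istio.io/inject: "true"   # extra labels are fine
```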
Jeff McCune
dc27489249 (#30) Add httpbin Deployment in istio-ingress namespace
This patch gets the Deployment running with a restricted seccomp
profile.
2024-03-02 20:17:16 -08:00
131 changed files with 33919 additions and 305 deletions


@@ -1,6 +1,7 @@
 ---
+# https://github.com/golangci/golangci-lint-action?tab=readme-ov-file#how-to-use
 name: Lint
-on:
+"on":
   push:
     branches:
       - main
@@ -14,7 +15,7 @@ permissions:
 jobs:
   golangci:
     name: lint
-    runs-on: [self-hosted, k8s]
+    runs-on: gha-rs
     steps:
       - uses: actions/checkout@v4
       - uses: actions/setup-go@v5
@@ -22,6 +23,6 @@ jobs:
           go-version: stable
           cache: false
       - name: golangci-lint
-        uses: golangci/golangci-lint-action@v3
+        uses: golangci/golangci-lint-action@v4
         with:
           version: latest


@@ -2,17 +2,20 @@ name: Release
 on:
   push:
-    # Run only against tags
-    tags:
-      - '*'
+    branches:
+      - release
 permissions:
   contents: write
 jobs:
   goreleaser:
-    runs-on: [self-hosted, k8s]
+    runs-on: gha-rs
     steps:
+      - name: Provide GPG and Git
+        run: sudo apt update && sudo apt -qq -y install gnupg git
       - name: Checkout
         uses: actions/checkout@v4
         with:


@@ -13,7 +13,7 @@ permissions:
 jobs:
   test:
-    runs-on: [self-hosted, k8s]
+    runs-on: gha-rs
     steps:
       - name: Checkout code
         uses: actions/checkout@v4
@@ -23,5 +23,14 @@ jobs:
         with:
           go-version: stable
+      - name: Provide unzip for Helm
+        run: sudo apt update && sudo apt -qq -y install curl zip unzip tar bzip2
+      - name: Set up Helm
+        uses: azure/setup-helm@v4
+      - name: Set up Kubectl
+        uses: azure/setup-kubectl@v3
       - name: Test
         run: ./scripts/test


@@ -0,0 +1,281 @@
# Want helm errors to show up
! exec holos build .
stderr 'Error: execution error at \(zitadel/templates/secret_zitadel-masterkey.yaml:2:4\): Either set .Values.zitadel.masterkey xor .Values.zitadel.masterkeySecretName'
-- cue.mod --
package holos
-- zitadel.cue --
package holos

cluster: string @tag(cluster, string)

apiVersion: "holos.run/v1alpha1"
kind:       "HelmChart"
metadata: name: "zitadel"
namespace: "zitadel"
chart: {
	name:    "zitadel"
	version: "7.9.0"
	release: name
	repository: {
		name: "zitadel"
		url:  "https://charts.zitadel.com"
	}
}
-- vendor/zitadel/templates/secret_zitadel-masterkey.yaml --
{{- if (or (and .Values.zitadel.masterkey .Values.zitadel.masterkeySecretName) (and (not .Values.zitadel.masterkey) (not .Values.zitadel.masterkeySecretName)) ) }}
{{- fail "Either set .Values.zitadel.masterkey xor .Values.zitadel.masterkeySecretName" }}
{{- end }}
{{- if .Values.zitadel.masterkey -}}
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: zitadel-masterkey
  {{- with .Values.zitadel.masterkeyAnnotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  labels:
    {{- include "zitadel.labels" . | nindent 4 }}
stringData:
  masterkey: {{ .Values.zitadel.masterkey }}
{{- end -}}
-- vendor/zitadel/Chart.yaml --
apiVersion: v2
appVersion: v2.46.0
description: A Helm chart for ZITADEL
icon: https://zitadel.com/zitadel-logo-dark.svg
kubeVersion: '>= 1.21.0-0'
maintainers:
  - email: support@zitadel.com
    name: zitadel
    url: https://zitadel.com
name: zitadel
type: application
version: 7.9.0
-- vendor/zitadel/values.yaml --
# Default values for zitadel.
zitadel:
  # The ZITADEL config under configmapConfig is written to a Kubernetes ConfigMap
  # See all defaults here:
  # https://github.com/zitadel/zitadel/blob/main/cmd/defaults.yaml
  configmapConfig:
    ExternalSecure: true
    Machine:
      Identification:
        Hostname:
          Enabled: true
        Webhook:
          Enabled: false
  # The ZITADEL config under secretConfig is written to a Kubernetes Secret
  # See all defaults here:
  # https://github.com/zitadel/zitadel/blob/main/cmd/defaults.yaml
  secretConfig:
  # Annotations set on secretConfig secret
  secretConfigAnnotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "0"
  # Reference the name of a secret that contains ZITADEL configuration.
  configSecretName:
  # The key under which the ZITADEL configuration is located in the secret.
  configSecretKey: config-yaml
  # ZITADEL uses the masterkey for symmetric encryption.
  # You can generate it for example with tr -dc A-Za-z0-9 </dev/urandom | head -c 32
  masterkey: ""
  # Reference the name of the secret that contains the masterkey. The key should be named "masterkey".
  # Note: Either zitadel.masterkey or zitadel.masterkeySecretName must be set
  masterkeySecretName: ""
  # Annotations set on masterkey secret
  masterkeyAnnotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "0"
  # The CA Certificate needed for establishing secure database connections
  dbSslCaCrt: ""
  # The Secret containing the CA certificate at key ca.crt needed for establishing secure database connections
  dbSslCaCrtSecret: ""
  # The db admins secret containing the client certificate and key at tls.crt and tls.key needed for establishing secure database connections
  dbSslAdminCrtSecret: ""
  # The db users secret containing the client certificate and key at tls.crt and tls.key needed for establishing secure database connections
  dbSslUserCrtSecret: ""
  # Generate a self-signed certificate using an init container
  # This will also mount the generated files to /etc/tls/ so that you can reference them in the pod.
  # E.G. KeyPath: /etc/tls/tls.key CertPath: /etc/tls/tls.crt
  # By default, the SAN DNS names include, localhost, the POD IP address and the POD name. You may include one more by using additionalDnsName like "my.zitadel.fqdn".
  selfSignedCert:
    enabled: false
    additionalDnsName:

replicaCount: 3

image:
  repository: ghcr.io/zitadel/zitadel
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

chownImage:
  repository: alpine
  pullPolicy: IfNotPresent
  tag: "3.19"

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

# Annotations to add to the deployment
annotations: {}

# Annotations to add to the configMap
configMap:
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "0"

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "0"
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}
podAdditionalLabels: {}

podSecurityContext:
  runAsNonRoot: true
  runAsUser: 1000

securityContext: {}

# Additional environment variables
env:
  []
  # - name: ZITADEL_DATABASE_POSTGRES_HOST
  #   valueFrom:
  #     secretKeyRef:
  #       name: postgres-pguser-postgres
  #       key: host

service:
  type: ClusterIP
  # If service type is "ClusterIP", this can optionally be set to a fixed IP address.
  clusterIP: ""
  port: 8080
  protocol: http2
  annotations: {}
  scheme: HTTP

ingress:
  enabled: false
  className: ""
  annotations: {}
  hosts:
    - host: localhost
      paths:
        - path: /
          pathType: Prefix
  tls: []

resources: {}

nodeSelector: {}
tolerations: []
affinity: {}
topologySpreadConstraints: []

initJob:
  # Once ZITADEL is installed, the initJob can be disabled.
  enabled: true
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "1"
  resources: {}
  backoffLimit: 5
  activeDeadlineSeconds: 300
  extraContainers: []
  podAnnotations: {}
  # Available init commands :
  # "": initialize ZITADEL instance (without skip anything)
  # database: initialize only the database
  # grant: set ALL grant to user
  # user: initialize only the database user
  # zitadel: initialize ZITADEL internals (skip "create user" and "create database")
  command: ""

setupJob:
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "2"
  resources: {}
  activeDeadlineSeconds: 300
  extraContainers: []
  podAnnotations: {}
  additionalArgs:
    - "--init-projections=true"
  machinekeyWriter:
    image:
      repository: bitnami/kubectl
      tag: ""
    resources: {}

readinessProbe:
  enabled: true
  initialDelaySeconds: 0
  periodSeconds: 5
  failureThreshold: 3

livenessProbe:
  enabled: true
  initialDelaySeconds: 0
  periodSeconds: 5
  failureThreshold: 3

startupProbe:
  enabled: true
  periodSeconds: 1
  failureThreshold: 30

metrics:
  enabled: false
  serviceMonitor:
    # If true, the chart creates a ServiceMonitor that is compatible with Prometheus Operator
    # https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.ServiceMonitor.
    # The Prometheus community Helm chart installs this operator
    # https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#kube-prometheus-stack
    enabled: false
    honorLabels: false
    honorTimestamps: true

pdb:
  enabled: false
  # these values are used for the PDB and are mutally exclusive
  minAvailable: 1
  # maxUnavailable: 1
  annotations: {}


@@ -0,0 +1,33 @@
# Kustomize is a supported holos component kind
exec holos render --cluster-name=mycluster . --log-level=debug
# Want generated output
cmp want.yaml deploy/clusters/mycluster/components/kstest/kstest.gen.yaml
-- cue.mod --
package holos
-- component.cue --
package holos

cluster: string @tag(cluster, string)

apiVersion: "holos.run/v1alpha1"
kind:       "KustomizeBuild"
metadata: name: "kstest"
-- kustomization.yaml --
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: mynamespace
resources:
  - serviceaccount.yaml
-- serviceaccount.yaml --
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test
-- want.yaml --
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test
  namespace: mynamespace


@@ -0,0 +1,975 @@
// Code generated by timoni. DO NOT EDIT.
//timoni:generate timoni vendor crd -f /home/jeff/workspace/holos-run/holos-infra/deploy/clusters/core2/components/prod-pgo-crds/prod-pgo-crds.gen.yaml
package v1beta1
import "strings"
// PGAdmin is the Schema for the pgadmins API
#PGAdmin: {
// APIVersion defines the versioned schema of this representation
// of an object. Servers should convert recognized schemas to the
// latest internal value, and may reject unrecognized values.
// More info:
// https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
apiVersion: "postgres-operator.crunchydata.com/v1beta1"
// Kind is a string value representing the REST resource this
// object represents. Servers may infer this from the endpoint
// the client submits requests to. Cannot be updated. In
// CamelCase. More info:
// https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
kind: "PGAdmin"
metadata!: {
name!: strings.MaxRunes(253) & strings.MinRunes(1) & {
string
}
namespace!: strings.MaxRunes(63) & strings.MinRunes(1) & {
string
}
labels?: {
[string]: string
}
annotations?: {
[string]: string
}
}
// PGAdminSpec defines the desired state of PGAdmin
spec!: #PGAdminSpec
}
// PGAdminSpec defines the desired state of PGAdmin
#PGAdminSpec: {
// Scheduling constraints of the PGAdmin pod. More info:
// https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node
affinity?: {
// Describes node affinity scheduling rules for the pod.
nodeAffinity?: {
// The scheduler will prefer to schedule pods to nodes that
// satisfy the affinity expressions specified by this field, but
// it may choose a node that violates one or more of the
// expressions. The node that is most preferred is the one with
// the greatest sum of weights, i.e. for each node that meets all
// of the scheduling requirements (resource request,
// requiredDuringScheduling affinity expressions, etc.), compute
// a sum by iterating through the elements of this field and
// adding "weight" to the sum if the node matches the
// corresponding matchExpressions; the node(s) with the highest
// sum are the most preferred.
preferredDuringSchedulingIgnoredDuringExecution?: [...{
// A node selector term, associated with the corresponding weight.
preference: {
// A list of node selector requirements by node's labels.
matchExpressions?: [...{
// The label key that the selector applies to.
key: string
// Represents a key's relationship to a set of values. Valid
// operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.
operator: string
// An array of string values. If the operator is In or NotIn, the
// values array must be non-empty. If the operator is Exists or
// DoesNotExist, the values array must be empty. If the operator
// is Gt or Lt, the values array must have a single element,
// which will be interpreted as an integer. This array is
// replaced during a strategic merge patch.
values?: [...string]
}]
// A list of node selector requirements by node's fields.
matchFields?: [...{
// The label key that the selector applies to.
key: string
// Represents a key's relationship to a set of values. Valid
// operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.
operator: string
// An array of string values. If the operator is In or NotIn, the
// values array must be non-empty. If the operator is Exists or
// DoesNotExist, the values array must be empty. If the operator
// is Gt or Lt, the values array must have a single element,
// which will be interpreted as an integer. This array is
// replaced during a strategic merge patch.
values?: [...string]
}]
}
// Weight associated with matching the corresponding
// nodeSelectorTerm, in the range 1-100.
weight: int
}]
requiredDuringSchedulingIgnoredDuringExecution?: {
// Required. A list of node selector terms. The terms are ORed.
nodeSelectorTerms: [...{
// A list of node selector requirements by node's labels.
matchExpressions?: [...{
// The label key that the selector applies to.
key: string
// Represents a key's relationship to a set of values. Valid
// operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.
operator: string
// An array of string values. If the operator is In or NotIn, the
// values array must be non-empty. If the operator is Exists or
// DoesNotExist, the values array must be empty. If the operator
// is Gt or Lt, the values array must have a single element,
// which will be interpreted as an integer. This array is
// replaced during a strategic merge patch.
values?: [...string]
}]
// A list of node selector requirements by node's fields.
matchFields?: [...{
// The label key that the selector applies to.
key: string
// Represents a key's relationship to a set of values. Valid
// operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.
operator: string
// An array of string values. If the operator is In or NotIn, the
// values array must be non-empty. If the operator is Exists or
// DoesNotExist, the values array must be empty. If the operator
// is Gt or Lt, the values array must have a single element,
// which will be interpreted as an integer. This array is
// replaced during a strategic merge patch.
values?: [...string]
}]
}]
}
}
// Describes pod affinity scheduling rules (e.g. co-locate this
// pod in the same node, zone, etc. as some other pod(s)).
podAffinity?: {
// The scheduler will prefer to schedule pods to nodes that
// satisfy the affinity expressions specified by this field, but
// it may choose a node that violates one or more of the
// expressions. The node that is most preferred is the one with
// the greatest sum of weights, i.e. for each node that meets all
// of the scheduling requirements (resource request,
// requiredDuringScheduling affinity expressions, etc.), compute
// a sum by iterating through the elements of this field and
// adding "weight" to the sum if the node has pods which match
// the corresponding podAffinityTerm; the node(s) with the
// highest sum are the most preferred.
preferredDuringSchedulingIgnoredDuringExecution?: [...{
// Required. A pod affinity term, associated with the
// corresponding weight.
podAffinityTerm: {
// A label query over a set of resources, in this case pods.
labelSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// A label query over the set of namespaces that the term applies
// to. The term is applied to the union of the namespaces
// selected by this field and the ones listed in the namespaces
// field. null selector and null or empty namespaces list means
// "this pod's namespace". An empty selector ({}) matches all
// namespaces.
namespaceSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// namespaces specifies a static list of namespace names that the
// term applies to. The term is applied to the union of the
// namespaces listed in this field and the ones selected by
// namespaceSelector. null or empty namespaces list and null
// namespaceSelector means "this pod's namespace".
namespaces?: [...string]
// This pod should be co-located (affinity) or not co-located
// (anti-affinity) with the pods matching the labelSelector in
// the specified namespaces, where co-located is defined as
// running on a node whose value of the label with key
// topologyKey matches that of any node on which any of the
// selected pods is running. Empty topologyKey is not allowed.
topologyKey: string
}
// weight associated with matching the corresponding
// podAffinityTerm, in the range 1-100.
weight: int
}]
// If the affinity requirements specified by this field are not
// met at scheduling time, the pod will not be scheduled onto the
// node. If the affinity requirements specified by this field
// cease to be met at some point during pod execution (e.g. due
// to a pod label update), the system may or may not try to
// eventually evict the pod from its node. When there are
// multiple elements, the lists of nodes corresponding to each
// podAffinityTerm are intersected, i.e. all terms must be
// satisfied.
requiredDuringSchedulingIgnoredDuringExecution?: [...{
// A label query over a set of resources, in this case pods.
labelSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// A label query over the set of namespaces that the term applies
// to. The term is applied to the union of the namespaces
// selected by this field and the ones listed in the namespaces
// field. null selector and null or empty namespaces list means
// "this pod's namespace". An empty selector ({}) matches all
// namespaces.
namespaceSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// namespaces specifies a static list of namespace names that the
// term applies to. The term is applied to the union of the
// namespaces listed in this field and the ones selected by
// namespaceSelector. null or empty namespaces list and null
// namespaceSelector means "this pod's namespace".
namespaces?: [...string]
// This pod should be co-located (affinity) or not co-located
// (anti-affinity) with the pods matching the labelSelector in
// the specified namespaces, where co-located is defined as
// running on a node whose value of the label with key
// topologyKey matches that of any node on which any of the
// selected pods is running. Empty topologyKey is not allowed.
topologyKey: string
}]
}
// Describes pod anti-affinity scheduling rules (e.g. avoid
// putting this pod in the same node, zone, etc. as some other
// pod(s)).
podAntiAffinity?: {
// The scheduler will prefer to schedule pods to nodes that
// satisfy the anti-affinity expressions specified by this field,
// but it may choose a node that violates one or more of the
// expressions. The node that is most preferred is the one with
// the greatest sum of weights, i.e. for each node that meets all
// of the scheduling requirements (resource request,
// requiredDuringScheduling anti-affinity expressions, etc.),
// compute a sum by iterating through the elements of this field
// and adding "weight" to the sum if the node has pods which
// match the corresponding podAffinityTerm; the node(s) with
// the highest sum are the most preferred.
preferredDuringSchedulingIgnoredDuringExecution?: [...{
// Required. A pod affinity term, associated with the
// corresponding weight.
podAffinityTerm: {
// A label query over a set of resources, in this case pods.
labelSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// A label query over the set of namespaces that the term applies
// to. The term is applied to the union of the namespaces
// selected by this field and the ones listed in the namespaces
// field. null selector and null or empty namespaces list means
// "this pod's namespace". An empty selector ({}) matches all
// namespaces.
namespaceSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// namespaces specifies a static list of namespace names that the
// term applies to. The term is applied to the union of the
// namespaces listed in this field and the ones selected by
// namespaceSelector. null or empty namespaces list and null
// namespaceSelector means "this pod's namespace".
namespaces?: [...string]
// This pod should be co-located (affinity) or not co-located
// (anti-affinity) with the pods matching the labelSelector in
// the specified namespaces, where co-located is defined as
// running on a node whose value of the label with key
// topologyKey matches that of any node on which any of the
// selected pods is running. Empty topologyKey is not allowed.
topologyKey: string
}
// weight associated with matching the corresponding
// podAffinityTerm, in the range 1-100.
weight: int
}]
// If the anti-affinity requirements specified by this field are
// not met at scheduling time, the pod will not be scheduled onto
// the node. If the anti-affinity requirements specified by this
// field cease to be met at some point during pod execution (e.g.
// due to a pod label update), the system may or may not try to
// eventually evict the pod from its node. When there are
// multiple elements, the lists of nodes corresponding to each
// podAffinityTerm are intersected, i.e. all terms must be
// satisfied.
requiredDuringSchedulingIgnoredDuringExecution?: [...{
// A label query over a set of resources, in this case pods.
labelSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// A label query over the set of namespaces that the term applies
// to. The term is applied to the union of the namespaces
// selected by this field and the ones listed in the namespaces
// field. null selector and null or empty namespaces list means
// "this pod's namespace". An empty selector ({}) matches all
// namespaces.
namespaceSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// namespaces specifies a static list of namespace names that the
// term applies to. The term is applied to the union of the
// namespaces listed in this field and the ones selected by
// namespaceSelector. null or empty namespaces list and null
// namespaceSelector means "this pod's namespace".
namespaces?: [...string]
// This pod should be co-located (affinity) or not co-located
// (anti-affinity) with the pods matching the labelSelector in
// the specified namespaces, where co-located is defined as
// running on a node whose value of the label with key
// topologyKey matches that of any node on which any of the
// selected pods is running. Empty topologyKey is not allowed.
topologyKey: string
}]
}
}
// Configuration settings for the pgAdmin process. Changes to any
// of these values will be loaded without validation. Be careful,
// as you may put pgAdmin into an unusable state.
config?: {
// Files allows the user to mount projected volumes into the
// pgAdmin container so that files can be referenced by pgAdmin
// as needed.
files?: [...{
// configMap information about the configMap data to project
configMap?: {
// items if unspecified, each key-value pair in the Data field of
// the referenced ConfigMap will be projected into the volume as
// a file whose name is the key and content is the value. If
// specified, the listed keys will be projected into the
// specified paths, and unlisted keys will not be present. If a
// key is specified which is not present in the ConfigMap, the
// volume setup will error unless it is marked optional. Paths
// must be relative and may not contain the '..' path or start
// with '..'.
items?: [...{
// key is the key to project.
key: string
// mode is Optional: mode bits used to set permissions on this
// file. Must be an octal value between 0000 and 0777 or a
// decimal value between 0 and 511. YAML accepts both octal and
// decimal values, JSON requires decimal values for mode bits. If
// not specified, the volume defaultMode will be used. This might
// be in conflict with other options that affect the file mode,
// like fsGroup, and the result can be other mode bits set.
mode?: int
// path is the relative path of the file to map the key to. May
// not be an absolute path. May not contain the path element
// '..'. May not start with the string '..'.
path: string
}]
// Name of the referent. More info:
// https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
name?: string
// optional specifies whether the ConfigMap or its keys must
// be defined
optional?: bool
}
downwardAPI?: {
// Items is a list of DownwardAPIVolume file
items?: [...{
// Required: Selects a field of the pod: only annotations, labels,
// name and namespace are supported.
fieldRef?: {
// Version of the schema the FieldPath is written in terms of,
// defaults to "v1".
apiVersion?: string
// Path of the field to select in the specified API version.
fieldPath: string
}
// Optional: mode bits used to set permissions on this file, must
// be an octal value between 0000 and 0777 or a decimal value
// between 0 and 511. YAML accepts both octal and decimal values,
// JSON requires decimal values for mode bits. If not specified,
// the volume defaultMode will be used. This might be in conflict
// with other options that affect the file mode, like fsGroup,
// and the result can be other mode bits set.
mode?: int
// Required: Path is the relative path name of the file to be
// created. Must not be absolute or contain the '..' path. Must
// be utf-8 encoded. The first item of the relative path must not
// start with '..'
path: string
// Selects a resource of the container: only resources limits and
// requests (limits.cpu, limits.memory, requests.cpu and
// requests.memory) are currently supported.
resourceFieldRef?: {
// Container name: required for volumes, optional for env vars
containerName?: string
// Specifies the output format of the exposed resources, defaults
// to "1"
divisor?: (int | string) & {
=~"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$"
}
// Required: resource to select
resource: string
}
}]
}
// secret information about the secret data to project
secret?: {
// items if unspecified, each key-value pair in the Data field of
// the referenced Secret will be projected into the volume as a
// file whose name is the key and content is the value. If
// specified, the listed keys will be projected into the
// specified paths, and unlisted keys will not be present. If a
// key is specified which is not present in the Secret, the
// volume setup will error unless it is marked optional. Paths
// must be relative and may not contain the '..' path or start
// with '..'.
items?: [...{
// key is the key to project.
key: string
// mode is Optional: mode bits used to set permissions on this
// file. Must be an octal value between 0000 and 0777 or a
// decimal value between 0 and 511. YAML accepts both octal and
// decimal values, JSON requires decimal values for mode bits. If
// not specified, the volume defaultMode will be used. This might
// be in conflict with other options that affect the file mode,
// like fsGroup, and the result can be other mode bits set.
mode?: int
// path is the relative path of the file to map the key to. May
// not be an absolute path. May not contain the path element
// '..'. May not start with the string '..'.
path: string
}]
// Name of the referent. More info:
// https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
name?: string
// optional field specifies whether the Secret or its key must
// be defined
optional?: bool
}
// serviceAccountToken is information about the
// serviceAccountToken data to project
serviceAccountToken?: {
// audience is the intended audience of the token. A recipient of
// a token must identify itself with an identifier specified in
// the audience of the token, and otherwise should reject the
// token. The audience defaults to the identifier of the
// apiserver.
audience?: string
// expirationSeconds is the requested duration of validity of the
// service account token. As the token approaches expiration, the
// kubelet volume plugin will proactively rotate the service
// account token. The kubelet will start trying to rotate the
// token if the token is older than 80 percent of its time to
// live or if the token is older than 24 hours. Defaults to 1 hour
// and must be at least 10 minutes.
expirationSeconds?: int
// path is the path relative to the mount point of the file to
// project the token into.
path: string
}
}]
// A Secret containing the value for the LDAP_BIND_PASSWORD
// setting. More info:
// https://www.pgadmin.org/docs/pgadmin4/latest/ldap.html
ldapBindPassword?: {
// The key of the secret to select from. Must be a valid secret
// key.
key: string
// Name of the referent. More info:
// https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
name?: string
// Specify whether the Secret or its key must be defined
optional?: bool
}
// Settings for the pgAdmin server process. Keys should be
// uppercase and values must be constants. More info:
// https://www.pgadmin.org/docs/pgadmin4/latest/config_py.html
settings?: {
...
}
}
// Defines a PersistentVolumeClaim for pgAdmin data. More info:
// https://kubernetes.io/docs/concepts/storage/persistent-volumes
dataVolumeClaimSpec: {
// accessModes contains the desired access modes the volume should
// have. More info:
// https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes?: [...string]
// dataSource field can be used to specify either: * An existing
// VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)
// * An existing PVC (PersistentVolumeClaim) If the provisioner
// or an external controller can support the specified data
// source, it will create a new volume based on the contents of
// the specified data source. If the AnyVolumeDataSource feature
// gate is enabled, this field will always have the same contents
// as the DataSourceRef field.
dataSource?: {
// APIGroup is the group for the resource being referenced. If
// APIGroup is not specified, the specified Kind must be in the
// core API group. For any other third-party types, APIGroup is
// required.
apiGroup?: string
// Kind is the type of resource being referenced
kind: string
// Name is the name of resource being referenced
name: string
}
// dataSourceRef specifies the object from which to populate the
// volume with data, if a non-empty volume is desired. This may
// be any local object from a non-empty API group (non core
// object) or a PersistentVolumeClaim object. When this field is
// specified, volume binding will only succeed if the type of the
// specified object matches some installed volume populator or
// dynamic provisioner. This field will replace the functionality
// of the DataSource field and as such if both fields are
// non-empty, they must have the same value. For backwards
// compatibility, both fields (DataSource and DataSourceRef) will
// be set to the same value automatically if one of them is empty
// and the other is non-empty. There are two important
// differences between DataSource and DataSourceRef: * While
// DataSource only allows two specific types of objects,
// DataSourceRef allows any non-core object, as well as
// PersistentVolumeClaim objects. * While DataSource ignores
// disallowed values (dropping them), DataSourceRef preserves all
// values, and generates an error if a disallowed value is
// specified. (Beta) Using this field requires the
// AnyVolumeDataSource feature gate to be enabled.
dataSourceRef?: {
// APIGroup is the group for the resource being referenced. If
// APIGroup is not specified, the specified Kind must be in the
// core API group. For any other third-party types, APIGroup is
// required.
apiGroup?: string
// Kind is the type of resource being referenced
kind: string
// Name is the name of resource being referenced
name: string
}
// resources represents the minimum resources the volume should
// have. If RecoverVolumeExpansionFailure feature is enabled
// users are allowed to specify resource requirements that are
// lower than previous value but must still be higher than
// capacity recorded in the status field of the claim. More info:
// https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
resources?: {
// Limits describes the maximum amount of compute resources
// allowed. More info:
// https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits?: {
[string]: (int | string) & =~"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$"
}
// Requests describes the minimum amount of compute resources
// required. If Requests is omitted for a container, it defaults
// to Limits if that is explicitly specified, otherwise to an
// implementation-defined value. More info:
// https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests?: {
[string]: (int | string) & =~"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$"
}
}
// selector is a label query over volumes to consider for binding.
selector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// storageClassName is the name of the StorageClass required by
// the claim. More info:
// https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
storageClassName?: string
// volumeMode defines what type of volume is required by the
// claim. Value of Filesystem is implied when not included in
// claim spec.
volumeMode?: string
// volumeName is the binding reference to the PersistentVolume
// backing this claim.
volumeName?: string
}
// The image name to use for pgAdmin instance.
image?: string
// ImagePullPolicy is used to determine when Kubernetes will
// attempt to pull (download) container images. More info:
// https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
imagePullPolicy?: "Always" | "Never" | "IfNotPresent"
// The image pull secrets used to pull from a private registry.
// Changing this value causes all running PGAdmin pods to
// restart.
// https://k8s.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets?: [...{
// Name of the referent. More info:
// https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
name?: string
}]
// Metadata contains metadata for custom resources
metadata?: {
annotations?: {
[string]: string
}
labels?: {
[string]: string
}
}
// Priority class name for the PGAdmin pod. Changing this value
// causes PGAdmin pod to restart. More info:
// https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/
priorityClassName?: string
// Resource requirements for the PGAdmin container.
resources?: {
// Limits describes the maximum amount of compute resources
// allowed. More info:
// https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits?: {
[string]: (int | string) & =~"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$"
}
// Requests describes the minimum amount of compute resources
// required. If Requests is omitted for a container, it defaults
// to Limits if that is explicitly specified, otherwise to an
// implementation-defined value. More info:
// https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests?: {
[string]: (int | string) & =~"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$"
}
}
// ServerGroups for importing PostgresClusters to pgAdmin. To
// create a pgAdmin with no selectors, leave this field empty. A
// pgAdmin created with no `ServerGroups` will not automatically
// add any servers through discovery. PostgresClusters can still
// be added manually.
serverGroups?: [...{
// The name for the ServerGroup in pgAdmin. Must be unique in the
// pgAdmin's ServerGroups since it becomes the ServerGroup name
// in pgAdmin.
name: string
// PostgresClusterSelector selects clusters to dynamically add to
// pgAdmin by matching labels. An empty selector like `{}` will
// select ALL clusters in the namespace.
postgresClusterSelector: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
}]
// Tolerations of the PGAdmin pod. More info:
// https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration
tolerations?: [...{
// Effect indicates the taint effect to match. Empty means match
// all taint effects. When specified, allowed values are
// NoSchedule, PreferNoSchedule and NoExecute.
effect?: string
// Key is the taint key that the toleration applies to. Empty
// means match all taint keys. If the key is empty, operator must
// be Exists; this combination means to match all values and all
// keys.
key?: string
// Operator represents a key's relationship to the value. Valid
// operators are Exists and Equal. Defaults to Equal. Exists is
// equivalent to wildcard for value, so that a pod can tolerate
// all taints of a particular category.
operator?: string
// TolerationSeconds represents the period of time the toleration
// (which must be of effect NoExecute, otherwise this field is
// ignored) tolerates the taint. By default, it is not set, which
// means tolerate the taint forever (do not evict). Zero and
// negative values will be treated as 0 (evict immediately) by
// the system.
tolerationSeconds?: int
// Value is the taint value the toleration matches to. If the
// operator is Exists, the value should be empty, otherwise just
// a regular string.
value?: string
}]
}
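
A minimal sketch of a tolerations entry under the schema above (the taint key and value are hypothetical, invented for illustration):

```cue
tolerations: [{
	// Tolerate a NoSchedule taint on dedicated database nodes.
	key:      "dedicated" // hypothetical taint key
	operator: "Equal"
	value:    "database" // hypothetical taint value
	effect:   "NoSchedule"
}]
```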


@@ -0,0 +1,632 @@
// Code generated by timoni. DO NOT EDIT.
//timoni:generate timoni vendor crd -f /home/jeff/workspace/holos-run/holos-infra/deploy/clusters/core2/components/prod-pgo-crds/prod-pgo-crds.gen.yaml
package v1beta1
import "strings"
// PGUpgrade is the Schema for the pgupgrades API
#PGUpgrade: {
// APIVersion defines the versioned schema of this representation
// of an object. Servers should convert recognized schemas to the
// latest internal value, and may reject unrecognized values.
// More info:
// https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
apiVersion: "postgres-operator.crunchydata.com/v1beta1"
// Kind is a string value representing the REST resource this
// object represents. Servers may infer this from the endpoint
// the client submits requests to. Cannot be updated. In
// CamelCase. More info:
// https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
kind: "PGUpgrade"
metadata!: {
name!: strings.MaxRunes(253) & strings.MinRunes(1) & {
string
}
namespace!: strings.MaxRunes(63) & strings.MinRunes(1) & {
string
}
labels?: {
[string]: string
}
annotations?: {
[string]: string
}
}
// PGUpgradeSpec defines the desired state of PGUpgrade
spec!: #PGUpgradeSpec
}
// PGUpgradeSpec defines the desired state of PGUpgrade
#PGUpgradeSpec: {
// Scheduling constraints of the PGUpgrade pod. More info:
// https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node
affinity?: {
// Describes node affinity scheduling rules for the pod.
nodeAffinity?: {
// The scheduler will prefer to schedule pods to nodes that
// satisfy the affinity expressions specified by this field, but
// it may choose a node that violates one or more of the
// expressions. The node that is most preferred is the one with
// the greatest sum of weights, i.e. for each node that meets all
// of the scheduling requirements (resource request,
// requiredDuringScheduling affinity expressions, etc.), compute
// a sum by iterating through the elements of this field and
// adding "weight" to the sum if the node matches the
// corresponding matchExpressions; the node(s) with the highest
// sum are the most preferred.
preferredDuringSchedulingIgnoredDuringExecution?: [...{
// A node selector term, associated with the corresponding weight.
preference: {
// A list of node selector requirements by node's labels.
matchExpressions?: [...{
// The label key that the selector applies to.
key: string
// Represents a key's relationship to a set of values. Valid
// operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
operator: string
// An array of string values. If the operator is In or NotIn, the
// values array must be non-empty. If the operator is Exists or
// DoesNotExist, the values array must be empty. If the operator
// is Gt or Lt, the values array must have a single element,
// which will be interpreted as an integer. This array is
// replaced during a strategic merge patch.
values?: [...string]
}]
// A list of node selector requirements by node's fields.
matchFields?: [...{
// The label key that the selector applies to.
key: string
// Represents a key's relationship to a set of values. Valid
// operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
operator: string
// An array of string values. If the operator is In or NotIn, the
// values array must be non-empty. If the operator is Exists or
// DoesNotExist, the values array must be empty. If the operator
// is Gt or Lt, the values array must have a single element,
// which will be interpreted as an integer. This array is
// replaced during a strategic merge patch.
values?: [...string]
}]
}
// Weight associated with matching the corresponding
// nodeSelectorTerm, in the range 1-100.
weight: int
}]
requiredDuringSchedulingIgnoredDuringExecution?: {
// Required. A list of node selector terms. The terms are ORed.
nodeSelectorTerms: [...{
// A list of node selector requirements by node's labels.
matchExpressions?: [...{
// The label key that the selector applies to.
key: string
// Represents a key's relationship to a set of values. Valid
// operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
operator: string
// An array of string values. If the operator is In or NotIn, the
// values array must be non-empty. If the operator is Exists or
// DoesNotExist, the values array must be empty. If the operator
// is Gt or Lt, the values array must have a single element,
// which will be interpreted as an integer. This array is
// replaced during a strategic merge patch.
values?: [...string]
}]
// A list of node selector requirements by node's fields.
matchFields?: [...{
// The label key that the selector applies to.
key: string
// Represents a key's relationship to a set of values. Valid
// operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
operator: string
// An array of string values. If the operator is In or NotIn, the
// values array must be non-empty. If the operator is Exists or
// DoesNotExist, the values array must be empty. If the operator
// is Gt or Lt, the values array must have a single element,
// which will be interpreted as an integer. This array is
// replaced during a strategic merge patch.
values?: [...string]
}]
}]
}
}
// Describes pod affinity scheduling rules (e.g. co-locate this
// pod in the same node, zone, etc. as some other pod(s)).
podAffinity?: {
// The scheduler will prefer to schedule pods to nodes that
// satisfy the affinity expressions specified by this field, but
// it may choose a node that violates one or more of the
// expressions. The node that is most preferred is the one with
// the greatest sum of weights, i.e. for each node that meets all
// of the scheduling requirements (resource request,
// requiredDuringScheduling affinity expressions, etc.), compute
// a sum by iterating through the elements of this field and
// adding "weight" to the sum if the node has pods which match
// the corresponding podAffinityTerm; the node(s) with the
// highest sum are the most preferred.
preferredDuringSchedulingIgnoredDuringExecution?: [...{
// Required. A pod affinity term, associated with the
// corresponding weight.
podAffinityTerm: {
// A label query over a set of resources, in this case pods.
labelSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// A label query over the set of namespaces that the term applies
// to. The term is applied to the union of the namespaces
// selected by this field and the ones listed in the namespaces
// field. null selector and null or empty namespaces list means
// "this pod's namespace". An empty selector ({}) matches all
// namespaces.
namespaceSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// namespaces specifies a static list of namespace names that the
// term applies to. The term is applied to the union of the
// namespaces listed in this field and the ones selected by
// namespaceSelector. null or empty namespaces list and null
// namespaceSelector means "this pod's namespace".
namespaces?: [...string]
// This pod should be co-located (affinity) or not co-located
// (anti-affinity) with the pods matching the labelSelector in
// the specified namespaces, where co-located is defined as
// running on a node whose value of the label with key
// topologyKey matches that of any node on which any of the
// selected pods is running. Empty topologyKey is not allowed.
topologyKey: string
}
// weight associated with matching the corresponding
// podAffinityTerm, in the range 1-100.
weight: int
}]
// If the affinity requirements specified by this field are not
// met at scheduling time, the pod will not be scheduled onto the
// node. If the affinity requirements specified by this field
// cease to be met at some point during pod execution (e.g. due
// to a pod label update), the system may or may not try to
// eventually evict the pod from its node. When there are
// multiple elements, the lists of nodes corresponding to each
// podAffinityTerm are intersected, i.e. all terms must be
// satisfied.
requiredDuringSchedulingIgnoredDuringExecution?: [...{
// A label query over a set of resources, in this case pods.
labelSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// A label query over the set of namespaces that the term applies
// to. The term is applied to the union of the namespaces
// selected by this field and the ones listed in the namespaces
// field. null selector and null or empty namespaces list means
// "this pod's namespace". An empty selector ({}) matches all
// namespaces.
namespaceSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// namespaces specifies a static list of namespace names that the
// term applies to. The term is applied to the union of the
// namespaces listed in this field and the ones selected by
// namespaceSelector. null or empty namespaces list and null
// namespaceSelector means "this pod's namespace".
namespaces?: [...string]
// This pod should be co-located (affinity) or not co-located
// (anti-affinity) with the pods matching the labelSelector in
// the specified namespaces, where co-located is defined as
// running on a node whose value of the label with key
// topologyKey matches that of any node on which any of the
// selected pods is running. Empty topologyKey is not allowed.
topologyKey: string
}]
}
// Describes pod anti-affinity scheduling rules (e.g. avoid
// putting this pod in the same node, zone, etc. as some other
// pod(s)).
podAntiAffinity?: {
// The scheduler will prefer to schedule pods to nodes that
// satisfy the anti-affinity expressions specified by this field,
// but it may choose a node that violates one or more of the
// expressions. The node that is most preferred is the one with
// the greatest sum of weights, i.e. for each node that meets all
// of the scheduling requirements (resource request,
// requiredDuringScheduling anti-affinity expressions, etc.),
// compute a sum by iterating through the elements of this field
// and adding "weight" to the sum if the node has pods which
// match the corresponding podAffinityTerm; the node(s) with
// the highest sum are the most preferred.
preferredDuringSchedulingIgnoredDuringExecution?: [...{
// Required. A pod affinity term, associated with the
// corresponding weight.
podAffinityTerm: {
// A label query over a set of resources, in this case pods.
labelSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// A label query over the set of namespaces that the term applies
// to. The term is applied to the union of the namespaces
// selected by this field and the ones listed in the namespaces
// field. null selector and null or empty namespaces list means
// "this pod's namespace". An empty selector ({}) matches all
// namespaces.
namespaceSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// namespaces specifies a static list of namespace names that the
// term applies to. The term is applied to the union of the
// namespaces listed in this field and the ones selected by
// namespaceSelector. null or empty namespaces list and null
// namespaceSelector means "this pod's namespace".
namespaces?: [...string]
// This pod should be co-located (affinity) or not co-located
// (anti-affinity) with the pods matching the labelSelector in
// the specified namespaces, where co-located is defined as
// running on a node whose value of the label with key
// topologyKey matches that of any node on which any of the
// selected pods is running. Empty topologyKey is not allowed.
topologyKey: string
}
// weight associated with matching the corresponding
// podAffinityTerm, in the range 1-100.
weight: int
}]
// If the anti-affinity requirements specified by this field are
// not met at scheduling time, the pod will not be scheduled onto
// the node. If the anti-affinity requirements specified by this
// field cease to be met at some point during pod execution (e.g.
// due to a pod label update), the system may or may not try to
// eventually evict the pod from its node. When there are
// multiple elements, the lists of nodes corresponding to each
// podAffinityTerm are intersected, i.e. all terms must be
// satisfied.
requiredDuringSchedulingIgnoredDuringExecution?: [...{
// A label query over a set of resources, in this case pods.
labelSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// A label query over the set of namespaces that the term applies
// to. The term is applied to the union of the namespaces
// selected by this field and the ones listed in the namespaces
// field. null selector and null or empty namespaces list means
// "this pod's namespace". An empty selector ({}) matches all
// namespaces.
namespaceSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// namespaces specifies a static list of namespace names that the
// term applies to. The term is applied to the union of the
// namespaces listed in this field and the ones selected by
// namespaceSelector. null or empty namespaces list and null
// namespaceSelector means "this pod's namespace".
namespaces?: [...string]
// This pod should be co-located (affinity) or not co-located
// (anti-affinity) with the pods matching the labelSelector in
// the specified namespaces, where co-located is defined as
// running on a node whose value of the label with key
// topologyKey matches that of any node on which any of the
// selected pods is running. Empty topologyKey is not allowed.
topologyKey: string
}]
}
}
// The major version of PostgreSQL before the upgrade.
fromPostgresVersion: uint & >=10 & <=16
// The image name to use for major PostgreSQL upgrades.
image?: string
// ImagePullPolicy is used to determine when Kubernetes will
// attempt to pull (download) container images. More info:
// https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
imagePullPolicy?: "Always" | "Never" | "IfNotPresent"
// The image pull secrets used to pull from a private registry.
// Changing this value causes all running PGUpgrade pods to
// restart.
// https://k8s.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets?: [...{
// Name of the referent. More info:
// https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
name?: string
}]
// Metadata contains metadata for custom resources
metadata?: {
annotations?: {
[string]: string
}
labels?: {
[string]: string
}
}
// The name of the cluster to be updated
postgresClusterName: strings.MinRunes(1)
// Priority class name for the PGUpgrade pod. Changing this value
// causes the PGUpgrade pod to restart. More info:
// https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/
priorityClassName?: string
// Resource requirements for the PGUpgrade container.
resources?: {
// Limits describes the maximum amount of compute resources
// allowed. More info:
// https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits?: {
[string]: (int | string) & =~"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$"
}
// Requests describes the minimum amount of compute resources
// required. If Requests is omitted for a container, it defaults
// to Limits if that is explicitly specified, otherwise to an
// implementation-defined value. More info:
// https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests?: {
[string]: (int | string) & =~"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$"
}
}
// The image name to use for PostgreSQL containers after upgrade.
// When omitted, the value comes from an operator environment
// variable.
toPostgresImage?: string
// The major version of PostgreSQL to be upgraded to.
toPostgresVersion: uint & >=10 & <=16
// Tolerations of the PGUpgrade pod. More info:
// https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration
tolerations?: [...{
// Effect indicates the taint effect to match. Empty means match
// all taint effects. When specified, allowed values are
// NoSchedule, PreferNoSchedule and NoExecute.
effect?: string
// Key is the taint key that the toleration applies to. Empty
// means match all taint keys. If the key is empty, operator must
// be Exists; this combination means to match all values and all
// keys.
key?: string
// Operator represents a key's relationship to the value. Valid
// operators are Exists and Equal. Defaults to Equal. Exists is
// equivalent to wildcard for value, so that a pod can tolerate
// all taints of a particular category.
operator?: string
// TolerationSeconds represents the period of time the toleration
// (which must be of effect NoExecute, otherwise this field is
// ignored) tolerates the taint. By default, it is not set, which
// means tolerate the taint forever (do not evict). Zero and
// negative values will be treated as 0 (evict immediately) by
// the system.
tolerationSeconds?: int
// Value is the taint value the toleration matches to. If the
// operator is Exists, the value should be empty, otherwise just
// a regular string.
value?: string
}]
}
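
A hypothetical PGUpgrade manifest validated against the `#PGUpgrade` definition above (the name, namespace, and versions are invented for illustration; only the required fields are set):

```cue
package v1beta1

// Hypothetical manifest: upgrade the "zitadel" cluster from 15 to 16.
upgrade: #PGUpgrade & {
	metadata: {
		name:      "zitadel-upgrade" // hypothetical resource name
		namespace: "zitadel"         // hypothetical namespace
	}
	spec: {
		postgresClusterName: "zitadel"
		fromPostgresVersion: 15
		toPostgresVersion:   16
	}
}
```

`apiVersion` and `kind` are fixed by the schema, so the concrete value only needs metadata and spec.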


@@ -0,0 +1,7 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
_#_BuiltinPluginLoadingOptions_name: "BploUndefinedBploUseStaticallyLinkedBploLoadFromFileSys"


@@ -0,0 +1,10 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// ConfigMapArgs contains the metadata of how to generate a configmap.
#ConfigMapArgs: {
#GeneratorArgs
}


@@ -0,0 +1,10 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
// Package types holds the definition of the kustomization struct and
// supporting structs. It's the k8s API conformant object that describes
// a set of generation and transformation operations to create and/or
// modify k8s resources.
// A kustomization file is a serialization of this struct.
package types


@@ -0,0 +1,29 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// FieldSpec completely specifies a kustomizable field in a k8s API object.
// It helps define the operands of transformations.
//
// For example, a directive to add a common label to objects
// will need to know that a 'Deployment' object (in API group
// 'apps', any version) can have labels at field path
// 'spec/template/metadata/labels', and further that it is OK
// (or not OK) to add that field path to the object if the
// field path doesn't exist already.
//
// This would look like
// {
// group: apps
// kind: Deployment
// path: spec/template/metadata/labels
// create: true
// }
#FieldSpec: {
path?: string @go(Path)
create?: bool @go(CreateIfNotPresent)
}
#FsSlice: [...#FieldSpec]
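
The commonLabels example from the comment above can be expressed as a value, using only the fields present in this vendored `#FieldSpec` (path and create):

```cue
package types

// Label path for a Deployment, per the example in the comment above.
labelPath: #FieldSpec & {
	path:   "spec/template/metadata/labels"
	create: true
}
```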


@@ -0,0 +1,33 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// GenerationBehavior specifies generation behavior of configmaps, secrets and maybe other resources.
#GenerationBehavior: int // #enumGenerationBehavior
#enumGenerationBehavior:
#BehaviorUnspecified |
#BehaviorCreate |
#BehaviorReplace |
#BehaviorMerge
#values_GenerationBehavior: {
BehaviorUnspecified: #BehaviorUnspecified
BehaviorCreate: #BehaviorCreate
BehaviorReplace: #BehaviorReplace
BehaviorMerge: #BehaviorMerge
}
// BehaviorUnspecified is an Unspecified behavior; typically treated as a Create.
#BehaviorUnspecified: #GenerationBehavior & 0
// BehaviorCreate makes a new resource.
#BehaviorCreate: #GenerationBehavior & 1
// BehaviorReplace replaces a resource.
#BehaviorReplace: #GenerationBehavior & 2
// BehaviorMerge attempts to merge a new resource with an existing resource.
#BehaviorMerge: #GenerationBehavior & 3
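
A short sketch of how the enum constrains values (the field name is hypothetical): a `#GenerationBehavior` unifies with exactly one of the four constants above, so an out-of-range integer fails validation.

```cue
package types

// Resolves to the integer 3 under the definitions above.
mergeBehavior: #GenerationBehavior & #BehaviorMerge
```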


@@ -0,0 +1,27 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// GeneratorArgs contains arguments common to ConfigMap and Secret generators.
#GeneratorArgs: {
// Namespace for the resource, optional
namespace?: string @go(Namespace)
// Name - actually the partial name - of the generated resource.
// The full name ends up being something like
// NamePrefix + this.Name + hash(content of generated resource).
name?: string @go(Name)
// Behavior of generated resource, must be one of:
// 'create': create a new one
// 'replace': replace the existing one
// 'merge': merge with the existing one
behavior?: string @go(Behavior)
#KvPairSources
// Local overrides to global generatorOptions field.
options?: null | #GeneratorOptions @go(Options,*GeneratorOptions)
}


@@ -0,0 +1,22 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// GeneratorOptions modify behavior of all ConfigMap and Secret generators.
#GeneratorOptions: {
// Labels to add to all generated resources.
labels?: {[string]: string} @go(Labels,map[string]string)
// Annotations to add to all generated resources.
annotations?: {[string]: string} @go(Annotations,map[string]string)
// DisableNameSuffixHash if true disables the default behavior of adding a
// suffix to the names of generated resources that is a hash of the
// resource contents.
disableNameSuffixHash?: bool @go(DisableNameSuffixHash)
// Immutable, if true, adds the immutable field to all generated resources.
immutable?: bool @go(Immutable)
}
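
A hypothetical `#GeneratorOptions` value (the label is invented for illustration) that pins a common label and keeps generated names stable by disabling the content-hash suffix:

```cue
package types

// Hypothetical generator options: add a label, keep stable names.
opts: #GeneratorOptions & {
	labels: app: "zitadel" // hypothetical label
	disableNameSuffixHash: true
}
```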


@@ -0,0 +1,116 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
#HelmDefaultHome: "charts"
#HelmGlobals: {
// ChartHome is a file path, relative to the kustomization root,
// to a directory containing a subdirectory for each chart to be
// included in the kustomization.
// The default value of this field is "charts".
// So, for example, kustomize looks for the minecraft chart
// at {kustomizationRoot}/{ChartHome}/minecraft.
// If the chart is there at build time, kustomize will use it as found,
// and not check version numbers or dates.
// If the chart is not there, kustomize will attempt to pull it
// using the version number specified in the kustomization file,
// and put it there. To suppress the pull attempt, simply ensure
// that the chart is already there.
chartHome?: string @go(ChartHome)
// ConfigHome defines a value that kustomize should pass to helm via
// the HELM_CONFIG_HOME environment variable. kustomize doesn't attempt
// to read or write this directory.
// If omitted, {tmpDir}/helm is used, where {tmpDir} is some temporary
// directory created by kustomize for the benefit of helm.
// Likewise, kustomize sets
// HELM_CACHE_HOME={ConfigHome}/.cache
// HELM_DATA_HOME={ConfigHome}/.data
// for the helm subprocess.
configHome?: string @go(ConfigHome)
}
#HelmChart: {
// Name is the name of the chart, e.g. 'minecraft'.
name?: string @go(Name)
// Version is the version of the chart, e.g. '3.1.3'
version?: string @go(Version)
// Repo is a URL locating the chart on the internet.
// This is the argument to helm's `--repo` flag, e.g.
// `https://itzg.github.io/minecraft-server-charts`.
repo?: string @go(Repo)
// ReleaseName replaces RELEASE-NAME in chart template output,
// making a particular inflation of a chart unique with respect to
// other inflations of the same chart in a cluster. It's the first
// argument to the helm `install` and `template` commands, i.e.
// helm install {RELEASE-NAME} {chartName}
// helm template {RELEASE-NAME} {chartName}
// If omitted, the flag --generate-name is passed to 'helm template'.
releaseName?: string @go(ReleaseName)
// Namespace sets the target namespace for a release. It is .Release.Namespace
// in the helm template.
namespace?: string @go(Namespace)
// AdditionalValuesFiles are local file paths to values files to be used in
// addition to either the default values file or the values specified in ValuesFile.
additionalValuesFiles?: [...string] @go(AdditionalValuesFiles,[]string)
// ValuesFile is a local file path to a values file to use _instead of_
// the default values that accompanied the chart.
// The default values are in '{ChartHome}/{Name}/values.yaml'.
valuesFile?: string @go(ValuesFile)
// ValuesInline holds value mappings specified directly,
// rather than in a separate file.
valuesInline?: {...} @go(ValuesInline,map[string]interface{})
// ValuesMerge specifies how to treat ValuesInline with respect to Values.
// Legal values: 'merge', 'override', 'replace'.
// Defaults to 'override'.
valuesMerge?: string @go(ValuesMerge)
// IncludeCRDs specifies if Helm should also generate CustomResourceDefinitions.
// Defaults to 'false'.
includeCRDs?: bool @go(IncludeCRDs)
// SkipHooks sets the --no-hooks flag when calling helm template. This prevents
// helm from erroneously rendering test templates.
skipHooks?: bool @go(SkipHooks)
// ApiVersions is the list of Kubernetes API versions used for Capabilities.APIVersions.
apiVersions?: [...string] @go(ApiVersions,[]string)
// KubeVersion is the Kubernetes version used by Helm for Capabilities.KubeVersion.
kubeVersion?: string @go(KubeVersion)
// NameTemplate is for specifying the name template used to name the release.
nameTemplate?: string @go(NameTemplate)
// SkipTests skips tests from templated output.
skipTests?: bool @go(SkipTests)
}
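
The fields above map onto a `helmCharts` entry in a `kustomization.yaml`. A minimal sketch, reusing the minecraft chart the comments cite (release name, namespace, and values are illustrative):

```yaml
# kustomization.yaml -- illustrative values, not taken from this repo
helmCharts:
  - name: minecraft
    version: 3.1.3
    repo: https://itzg.github.io/minecraft-server-charts
    releaseName: moria        # replaces RELEASE-NAME in the template output
    namespace: games          # becomes .Release.Namespace
    includeCRDs: true
    valuesInline:             # merged per valuesMerge (default 'override')
      minecraftServer:
        eula: true
```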
// HelmChartArgs contains arguments to helm.
// Deprecated. Use HelmGlobals and HelmChart instead.
#HelmChartArgs: {
chartName?: string @go(ChartName)
chartVersion?: string @go(ChartVersion)
chartRepoUrl?: string @go(ChartRepoURL)
chartHome?: string @go(ChartHome)
chartRepoName?: string @go(ChartRepoName)
helmBin?: string @go(HelmBin)
helmHome?: string @go(HelmHome)
values?: string @go(Values)
valuesLocal?: {...} @go(ValuesLocal,map[string]interface{})
valuesMerge?: string @go(ValuesMerge)
releaseName?: string @go(ReleaseName)
releaseNamespace?: string @go(ReleaseNamespace)
extraArgs?: [...string] @go(ExtraArgs,[]string)
}


@@ -0,0 +1,40 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
#Cloud: string // #enumCloud
#enumCloud:
#GKE
#GKE: #Cloud & "gke"
// IAMPolicyGeneratorArgs contains arguments to generate a GKE service account resource.
#IAMPolicyGeneratorArgs: {
// which cloud provider to generate for (e.g. "gke")
cloud: #Cloud @go(Cloud)
// information about the kubernetes cluster for this object
kubernetesService: #KubernetesService @go(KubernetesService)
// information about the service account and project
serviceAccount: #ServiceAccount @go(ServiceAccount)
}
#KubernetesService: {
// the name used for the Kubernetes service account
name: string @go(Name)
// the name of the Kubernetes namespace for this object
namespace?: string @go(Namespace)
}
#ServiceAccount: {
// the name of the new cloud provider service account
name: string @go(Name)
// The ID of the project
projectId: string @go(ProjectId)
}


@@ -0,0 +1,26 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// Image contains an image name, a new name, a new tag or digest,
// which will replace the original name and tag.
#Image: {
// Name is a tag-less image name.
name?: string @go(Name)
// NewName is the value used to replace the original name.
newName?: string @go(NewName)
// TagSuffix is the value used to suffix the original tag
// If Digest and NewTag is present an error is thrown
tagSuffix?: string @go(TagSuffix)
// NewTag is the value used to replace the original tag.
newTag?: string @go(NewTag)
// Digest is the value used to replace the original image tag.
// If digest is present NewTag value is ignored.
digest?: string @go(Digest)
}
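
A kustomization might use these fields to rename an image or pin it by digest; per the comment above, `digest` takes precedence over `newTag` (names and digest below are placeholders):

```yaml
# kustomization.yaml -- illustrative image transformations
images:
  - name: cockroachdb/cockroach
    newTag: v23.1.14                       # replaces the original tag
  - name: nginx
    newName: registry.example.com/mirror/nginx   # hypothetical mirror
    digest: sha256:0000000000000000000000000000000000000000000000000000000000000000  # placeholder; newTag would be ignored here
```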


@@ -0,0 +1,163 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
#KustomizationVersion: "kustomize.config.k8s.io/v1beta1"
#KustomizationKind: "Kustomization"
#ComponentVersion: "kustomize.config.k8s.io/v1alpha1"
#ComponentKind: "Component"
#MetadataNamespacePath: "metadata/namespace"
#MetadataNamespaceApiVersion: "v1"
#MetadataNamePath: "metadata/name"
#OriginAnnotations: "originAnnotations"
#TransformerAnnotations: "transformerAnnotations"
#ManagedByLabelOption: "managedByLabel"
// Kustomization holds the information needed to generate customized k8s api resources.
#Kustomization: {
#TypeMeta
// MetaData is a pointer to avoid marshalling empty struct
metadata?: null | #ObjectMeta @go(MetaData,*ObjectMeta)
// OpenAPI contains information about what kubernetes schema to use.
openapi?: {[string]: string} @go(OpenAPI,map[string]string)
// NamePrefix will prefix the names of all resources mentioned in the kustomization
// file including generated configmaps and secrets.
namePrefix?: string @go(NamePrefix)
// NameSuffix will suffix the names of all resources mentioned in the kustomization
// file including generated configmaps and secrets.
nameSuffix?: string @go(NameSuffix)
// Namespace to add to all objects.
namespace?: string @go(Namespace)
// CommonLabels to add to all objects and selectors.
commonLabels?: {[string]: string} @go(CommonLabels,map[string]string)
// Labels to add to all objects but not selectors.
labels?: [...#Label] @go(Labels,[]Label)
// CommonAnnotations to add to all objects.
commonAnnotations?: {[string]: string} @go(CommonAnnotations,map[string]string)
// Deprecated: Use the Patches field instead, which provides a superset of the functionality of PatchesStrategicMerge.
// PatchesStrategicMerge specifies the relative path to a file
// containing a strategic merge patch. Format documented at
// https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md
// URLs and globs are not supported.
patchesStrategicMerge?: [...#PatchStrategicMerge] @go(PatchesStrategicMerge,[]PatchStrategicMerge)
// Deprecated: Use the Patches field instead, which provides a superset of the functionality of JSONPatches.
// JSONPatches is a list of JSONPatch for applying JSON patch.
// Format documented at https://tools.ietf.org/html/rfc6902
// and http://jsonpatch.com
patchesJson6902?: [...#Patch] @go(PatchesJson6902,[]Patch)
// Patches is a list of patches, where each one can be either a
// Strategic Merge Patch or a JSON patch.
// Each patch can be applied to multiple target objects.
patches?: [...#Patch] @go(Patches,[]Patch)
// Images is a list of (image name, new name, new tag or digest)
// for changing image names, tags or digests. This can also be achieved with a
// patch, but this operator is simpler to specify.
images?: [...#Image] @go(Images,[]Image)
// Deprecated: Use the Images field instead.
imageTags?: [...#Image] @go(ImageTags,[]Image)
// Replacements is a list of replacements, which will copy nodes from a
// specified source to N specified targets.
replacements?: [...#ReplacementField] @go(Replacements,[]ReplacementField)
// Replicas is a list of {resourcename, count} that allows for simpler replica
// specification. This can also be done with a patch.
replicas?: [...#Replica] @go(Replicas,[]Replica)
// Deprecated: Vars will be removed in future release. Migrate to Replacements instead.
// Vars allow things modified by kustomize to be injected into a
// kubernetes object specification. A var is a name (e.g. FOO) associated
// with a field in a specific resource instance. The field must
// contain a value of type string/bool/int/float, and defaults to the name field
// of the instance. Any appearance of "$(FOO)" in the object
// spec will be replaced at kustomize build time, after the final
// value of the specified field has been determined.
vars?: [...#Var] @go(Vars,[]Var)
// SortOptions change the order that kustomize outputs resources.
sortOptions?: null | #SortOptions @go(SortOptions,*SortOptions)
// Resources specifies relative paths to files holding YAML representations
// of kubernetes API objects, or specifications of other kustomizations
// via relative paths, absolute paths, or URLs.
resources?: [...string] @go(Resources,[]string)
// Components specifies relative paths to specifications of other Components
// via relative paths, absolute paths, or URLs.
components?: [...string] @go(Components,[]string)
// Crds specifies relative paths to Custom Resource Definition files.
// This allows custom resources to be recognized as operands, making
// it possible to add them to the Resources list.
// CRDs themselves are not modified.
crds?: [...string] @go(Crds,[]string)
// Deprecated: Anything that would have been specified here should be specified in the Resources field instead.
bases?: [...string] @go(Bases,[]string)
// ConfigMapGenerator is a list of configmaps to generate from
// local data (one configMap per list item).
// The resulting resource is a normal operand, subject to
// name prefixing, patching, etc. By default, the name of
// the map will have a suffix hash generated from its contents.
configMapGenerator?: [...#ConfigMapArgs] @go(ConfigMapGenerator,[]ConfigMapArgs)
// SecretGenerator is a list of secrets to generate from
// local data (one secret per list item).
// The resulting resource is a normal operand, subject to
// name prefixing, patching, etc. By default, the name of
// the map will have a suffix hash generated from its contents.
secretGenerator?: [...#SecretArgs] @go(SecretGenerator,[]SecretArgs)
// HelmGlobals contains helm configuration that isn't chart specific.
helmGlobals?: null | #HelmGlobals @go(HelmGlobals,*HelmGlobals)
// HelmCharts is a list of helm chart configuration instances.
helmCharts?: [...#HelmChart] @go(HelmCharts,[]HelmChart)
// HelmChartInflationGenerator is a list of helm chart configurations.
// Deprecated. Auto-converted to HelmGlobals and HelmCharts.
helmChartInflationGenerator?: [...#HelmChartArgs] @go(HelmChartInflationGenerator,[]HelmChartArgs)
// GeneratorOptions modify behavior of all ConfigMap and Secret generators.
generatorOptions?: null | #GeneratorOptions @go(GeneratorOptions,*GeneratorOptions)
// Configurations is a list of transformer configuration files
configurations?: [...string] @go(Configurations,[]string)
// Generators is a list of files containing custom generators
generators?: [...string] @go(Generators,[]string)
// Transformers is a list of files containing transformers
transformers?: [...string] @go(Transformers,[]string)
// Validators is a list of files containing validators
validators?: [...string] @go(Validators,[]string)
// BuildMetadata is a list of strings used to toggle different build options
buildMetadata?: [...string] @go(BuildMetadata,[]string)
}
_#deprecatedWarningToRunEditFix: "Run 'kustomize edit fix' to update your Kustomization automatically."
_#deprecatedWarningToRunEditFixExperimential: "[EXPERIMENTAL] Run 'kustomize edit fix' to update your Kustomization automatically."
_#deprecatedBaseWarningMessage: "# Warning: 'bases' is deprecated. Please use 'resources' instead. Run 'kustomize edit fix' to update your Kustomization automatically."
_#deprecatedImageTagsWarningMessage: "# Warning: 'imageTags' is deprecated. Please use 'images' instead. Run 'kustomize edit fix' to update your Kustomization automatically."
_#deprecatedPatchesJson6902Message: "# Warning: 'patchesJson6902' is deprecated. Please use 'patches' instead. Run 'kustomize edit fix' to update your Kustomization automatically."
_#deprecatedPatchesStrategicMergeMessage: "# Warning: 'patchesStrategicMerge' is deprecated. Please use 'patches' instead. Run 'kustomize edit fix' to update your Kustomization automatically."
_#deprecatedVarsMessage: "# Warning: 'vars' is deprecated. Please use 'replacements' instead. [EXPERIMENTAL] Run 'kustomize edit fix' to update your Kustomization automatically."
_#deprecatedCommonLabelsWarningMessage: "# Warning: 'commonLabels' is deprecated. Please use 'labels' instead. Run 'kustomize edit fix' to update your Kustomization automatically."


@@ -0,0 +1,37 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// KvPairSources defines places to obtain key value pairs.
#KvPairSources: {
// LiteralSources is a list of literal
// pair sources. Each literal source should
// be a key and literal value, e.g. `key=value`
literals?: [...string] @go(LiteralSources,[]string)
// FileSources is a list of file "sources" to
// use in creating a list of key, value pairs.
// A source takes the form: [{key}=]{path}
// If the "key=" part is missing, the key is the
// path's basename. If the "key=" part is present,
// it becomes the key (replacing the basename).
// In either case, the value is the file contents.
// Specifying a directory will iterate each named
// file in the directory whose basename is a
// valid configmap key.
files?: [...string] @go(FileSources,[]string)
// EnvSources is a list of file paths.
// The contents of each file should be one
// key=value pair per line, e.g. a Docker
// or npm ".env" file or a ".ini" file
// (wikipedia.org/wiki/INI_file)
envs?: [...string] @go(EnvSources,[]string)
// Older, singular form of EnvSources.
// On edits (e.g. `kustomize fix`) this is merged into the plural form
// for consistency with LiteralSources and FileSources.
env?: string @go(EnvSource)
}
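
These sources surface in the `configMapGenerator` and `secretGenerator` fields; a sketch combining all three forms (file paths are illustrative):

```yaml
# kustomization.yaml -- one KvPairSources-backed generator entry
configMapGenerator:
  - name: app-config
    literals:
      - LOG_LEVEL=info                # explicit key=value pair
    files:
      - settings.json                 # key is the basename: settings.json
      - nginx.conf=conf/nginx.conf    # "key=" part replaces the basename
    envs:
      - .env                          # one key=value pair per line
```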


@@ -0,0 +1,23 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
#Label: {
// Pairs contains the key-value pairs for labels to add
pairs?: {[string]: string} @go(Pairs,map[string]string)
// IncludeSelectors indicates whether the transformer should include
// the fieldSpecs for selectors. Custom fieldSpecs specified by
// FieldSpecs will be merged with builtin fieldSpecs if this
// is true.
includeSelectors?: bool @go(IncludeSelectors)
// IncludeTemplates indicates whether the transformer should include
// the spec/template/metadata fieldSpec. Custom fieldSpecs specified by
// FieldSpecs will be merged with the spec/template/metadata fieldSpec if this
// is true. If IncludeSelectors is true, IncludeTemplates is not needed.
includeTemplates?: bool @go(IncludeTemplates)
fields?: [...#FieldSpec] @go(FieldSpecs,[]FieldSpec)
}
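
Unlike `commonLabels`, the `labels` field lets each entry choose whether selectors and pod templates are touched. A sketch with an illustrative label pair:

```yaml
# kustomization.yaml -- label pods but leave spec.selector untouched
labels:
  - pairs:
      app.kubernetes.io/part-of: my-app   # hypothetical label
    includeSelectors: false               # default; selectors stay as-is
    includeTemplates: true                # also label spec/template/metadata
```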


@@ -0,0 +1,34 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// Restrictions on what things can be referred to
// in a kustomization file.
//
//go:generate stringer -type=LoadRestrictions
#LoadRestrictions: int // #enumLoadRestrictions
#enumLoadRestrictions:
#LoadRestrictionsUnknown |
#LoadRestrictionsRootOnly |
#LoadRestrictionsNone
#values_LoadRestrictions: {
LoadRestrictionsUnknown: #LoadRestrictionsUnknown
LoadRestrictionsRootOnly: #LoadRestrictionsRootOnly
LoadRestrictionsNone: #LoadRestrictionsNone
}
#LoadRestrictionsUnknown: #LoadRestrictions & 0
// Files referenced by a kustomization file must be in
// or under the directory holding the kustomization
// file itself.
#LoadRestrictionsRootOnly: #LoadRestrictions & 1
// The kustomization file may specify absolute or
// relative paths to patch or resources files outside
// its own tree.
#LoadRestrictionsNone: #LoadRestrictions & 2
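
These enum values correspond to kustomize's `--load-restrictor` flag; the default is `LoadRestrictionsRootOnly`. The overlay path below is illustrative:

```shell
# Allow the kustomization to reference files outside its own directory tree.
kustomize build ./overlay --load-restrictor LoadRestrictionsNone
```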


@@ -0,0 +1,7 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
_#_LoadRestrictions_name: "LoadRestrictionsUnknownLoadRestrictionsRootOnlyLoadRestrictionsNone"


@@ -0,0 +1,14 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// ObjectMeta partially copies apimachinery/pkg/apis/meta/v1.ObjectMeta
// No need for a direct dependence; the fields are stable.
#ObjectMeta: {
name?: string @go(Name)
namespace?: string @go(Namespace)
labels?: {[string]: string} @go(Labels,map[string]string)
annotations?: {[string]: string} @go(Annotations,map[string]string)
}


@@ -0,0 +1,11 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// Pair is a key value pair.
#Pair: {
Key: string
Value: string
}


@@ -0,0 +1,23 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// Patch represent either a Strategic Merge Patch or a JSON patch
// and its targets.
// The content of the patch can either be from a file
// or from an inline string.
#Patch: {
// Path is a relative file path to the patch file.
path?: string @go(Path)
// Patch is the content of a patch.
patch?: string @go(Patch)
// Target points to the resources that the patch is applied to
target?: #Target | #Selector @go(Target,*Selector)
// Options is a list of options for the patch
options?: {[string]: bool} @go(Options,map[string]bool)
}
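
Both patch forms can appear under the `patches` field; the target selector picks the objects to patch (resource names below are illustrative, echoing the zitadel Deployment patching described in the commit message):

```yaml
# kustomization.yaml -- one inline JSON 6902 patch, one file-based patch
patches:
  - target:
      kind: Deployment
      name: zitadel
    patch: |-
      - op: add
        path: /spec/template/metadata/labels/patched
        value: "true"
  - path: patch-resources.yaml   # relative path to a strategic merge patch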


@@ -0,0 +1,10 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// PatchStrategicMerge represents a relative path to a
// strategic merge patch with the format
// https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md
#PatchStrategicMerge: string


@@ -0,0 +1,27 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
#HelmConfig: {
Enabled: bool
Command: string
ApiVersions: [...string] @go(,[]string)
KubeVersion: string
}
// PluginConfig holds plugin configuration.
#PluginConfig: {
// PluginRestrictions distinguishes plugin restrictions.
PluginRestrictions: #PluginRestrictions
// BpLoadingOptions distinguishes builtin plugin behaviors.
BpLoadingOptions: #BuiltinPluginLoadingOptions
// FnpLoadingOptions sets the way function-based plugins behave.
FnpLoadingOptions: #FnPluginLoadingOptions
// HelmConfig contains metadata needed for allowing and running helm.
HelmConfig: #HelmConfig
}


@@ -0,0 +1,87 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// Some plugin classes
// - builtin: plugins defined in the kustomize repo.
// May be freely used and re-configured.
// - local: plugins that aren't builtin but are
// locally defined (presumably by the user), meaning
// the kustomization refers to them via a relative
// file path, not a URL.
// - remote: require a build-time download to obtain.
// Unadvised, unless one controls the
// serving site.
//
//go:generate stringer -type=PluginRestrictions
#PluginRestrictions: int // #enumPluginRestrictions
#enumPluginRestrictions:
#PluginRestrictionsUnknown |
#PluginRestrictionsBuiltinsOnly |
#PluginRestrictionsNone
#values_PluginRestrictions: {
PluginRestrictionsUnknown: #PluginRestrictionsUnknown
PluginRestrictionsBuiltinsOnly: #PluginRestrictionsBuiltinsOnly
PluginRestrictionsNone: #PluginRestrictionsNone
}
#PluginRestrictionsUnknown: #PluginRestrictions & 0
// Non-builtin plugins completely disabled.
#PluginRestrictionsBuiltinsOnly: #PluginRestrictions & 1
// No restrictions, do whatever you want.
#PluginRestrictionsNone: #PluginRestrictions & 2
// BuiltinPluginLoadingOptions distinguish ways in which builtin plugins are used.
//go:generate stringer -type=BuiltinPluginLoadingOptions
#BuiltinPluginLoadingOptions: int // #enumBuiltinPluginLoadingOptions
#enumBuiltinPluginLoadingOptions:
#BploUndefined |
#BploUseStaticallyLinked |
#BploLoadFromFileSys
#values_BuiltinPluginLoadingOptions: {
BploUndefined: #BploUndefined
BploUseStaticallyLinked: #BploUseStaticallyLinked
BploLoadFromFileSys: #BploLoadFromFileSys
}
#BploUndefined: #BuiltinPluginLoadingOptions & 0
// Desired in production use for performance.
#BploUseStaticallyLinked: #BuiltinPluginLoadingOptions & 1
// Desired in testing and development cycles where it's undesirable
// to generate static code.
#BploLoadFromFileSys: #BuiltinPluginLoadingOptions & 2
// FnPluginLoadingOptions sets the way function-based plugins are restricted.
#FnPluginLoadingOptions: {
// Allow running executables
EnableExec: bool
// Allow running Starlark functions
EnableStar: bool
// Allow container access to network
Network: bool
NetworkName: string
// list of mounts
Mounts: [...string] @go(,[]string)
// list of env variables to pass to fn
Env: [...string] @go(,[]string)
// Run as uid and gid of the command executor
AsCurrentUser: bool
// Run in this working directory
WorkingDir: string
}
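
Exec and container function plugins are gated behind build-time flags; a sketch using flags that exist in current kustomize releases (the overlay path is illustrative):

```shell
# Function plugins stay disabled unless explicitly enabled.
kustomize build ./overlay --enable-alpha-plugins --enable-exec
# Container-based functions may additionally be granted network access.
kustomize build ./overlay --enable-alpha-plugins --network
```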


@@ -0,0 +1,7 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
_#_PluginRestrictions_name: "PluginRestrictionsUnknownPluginRestrictionsBuiltinsOnlyPluginRestrictionsNone"


@@ -0,0 +1,57 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
#DefaultReplacementFieldPath: "metadata.name"
// Replacement defines how to perform a substitution
// where it is from and where it is to.
#Replacement: {
// The source of the value.
source?: null | #SourceSelector @go(Source,*SourceSelector)
// The N fields to write the value to.
targets?: [...null | #TargetSelector] @go(Targets,[]*TargetSelector)
}
// SourceSelector is the source of the replacement transformer.
#SourceSelector: {
// Structured field path expected in the allowed object.
fieldPath?: string @go(FieldPath)
// Used to refine the interpretation of the field.
options?: null | #FieldOptions @go(Options,*FieldOptions)
}
// TargetSelector specifies fields in one or more objects.
#TargetSelector: {
// Include objects that match this.
select?: null | #Selector @go(Select,*Selector)
// From the allowed set, remove objects that match this.
reject?: [...null | #Selector] @go(Reject,[]*Selector)
// Structured field paths expected in each allowed object.
fieldPaths?: [...string] @go(FieldPaths,[]string)
// Used to refine the interpretation of the field.
options?: null | #FieldOptions @go(Options,*FieldOptions)
}
// FieldOptions refine the interpretation of FieldPaths.
#FieldOptions: {
// Used to split/join the field.
delimiter?: string @go(Delimiter)
// Which position in the split to consider.
index?: int @go(Index)
// TODO (#3492): Implement use of this option
// None, Base64, URL, Hex, etc
encoding?: string @go(Encoding)
// If field missing, add it.
create?: bool @go(Create)
}
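
Putting the selectors together, a `replacements` entry copies a node from one source into N targets; all resource and field names below are illustrative:

```yaml
# kustomization.yaml -- copy a Service name into a container env var
replacements:
  - source:
      kind: Service
      name: zitadel                # hypothetical source object
      fieldPath: metadata.name
    targets:
      - select:
          kind: Deployment         # include every Deployment...
        reject:
          - name: unrelated-app    # ...except this hypothetical one
        fieldPaths:
          - spec.template.spec.containers.[name=app].env.[name=SERVICE_NAME].value
        options:
          create: true             # add the field if it is missing
```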


@@ -0,0 +1,10 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
#ReplacementField: {
#Replacement
path?: string @go(Path)
}


@@ -0,0 +1,17 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// Replica specifies a modification to a replica config.
// The number of replicas of a resource whose name matches will be set to count.
// This struct is used by the ReplicaCountTransform, and is meant to supplement
// the existing patch functionality with a simpler syntax for replica configuration.
#Replica: {
// The name of the resource to change the replica count
name?: string @go(Name)
// The number of replicas required.
count: int64 @go(Count)
}
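
The corresponding kustomization field is a short list of name/count pairs (resource name is illustrative):

```yaml
# kustomization.yaml -- simpler than a patch for replica counts
replicas:
  - name: zitadel   # matches the resource's name field
    count: 3
```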


@@ -0,0 +1,19 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// SecretArgs contains the metadata of how to generate a secret.
#SecretArgs: {
#GeneratorArgs
// Type of the secret.
//
// This is the same field as the secret type field in v1/Secret:
// It can be "Opaque" (default), or "kubernetes.io/tls".
//
// If type is "kubernetes.io/tls", then "literals" or "files" must have exactly two
// keys: "tls.key" and "tls.crt"
type?: string @go(Type)
}
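
Per the type rules above, a `kubernetes.io/tls` generator must supply exactly the `tls.key` and `tls.crt` keys (secret name and file paths are illustrative):

```yaml
# kustomization.yaml -- TLS secret from local cert files
secretGenerator:
  - name: zitadel-tls
    type: kubernetes.io/tls
    files:
      - tls.crt=certs/tls.crt
      - tls.key=certs/tls.key
```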


@@ -0,0 +1,20 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// Selector specifies a set of resources.
// Any resource that matches intersection of all conditions
// is included in this set.
#Selector: {
// AnnotationSelector is a string that follows the label selection expression
// https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#api
// It matches with the resource annotations.
annotationSelector?: string @go(AnnotationSelector)
// LabelSelector is a string that follows the label selection expression
// https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#api
// It matches with the resource labels.
labelSelector?: string @go(LabelSelector)
}


@@ -0,0 +1,36 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// SortOptions defines the order that kustomize outputs resources.
#SortOptions: {
// Order selects the ordering strategy.
order?: #SortOrder @go(Order)
// LegacySortOptions tweaks the sorting for the "legacy" sort ordering
// strategy.
legacySortOptions?: null | #LegacySortOptions @go(LegacySortOptions,*LegacySortOptions)
}
// SortOrder defines different ordering strategies.
#SortOrder: string // #enumSortOrder
#enumSortOrder:
#LegacySortOrder |
#FIFOSortOrder
#LegacySortOrder: #SortOrder & "legacy"
#FIFOSortOrder: #SortOrder & "fifo"
// LegacySortOptions define various options for tweaking the "legacy" ordering
// strategy.
#LegacySortOptions: {
// OrderFirst selects the resource kinds to order first.
orderFirst: [...string] @go(OrderFirst,[]string)
// OrderLast selects the resource kinds to order last.
orderLast: [...string] @go(OrderLast,[]string)
}
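
A sketch of the `legacy` ordering strategy with both tweaks set; the `fifo` strategy instead emits resources in input order and takes no options (kind lists are illustrative):

```yaml
# kustomization.yaml -- tweak the legacy sort order
sortOptions:
  order: legacy
  legacySortOptions:
    orderFirst:
      - Namespace
      - CustomResourceDefinition
    orderLast:
      - ValidatingWebhookConfiguration
```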


@@ -0,0 +1,12 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// TypeMeta partially copies apimachinery/pkg/apis/meta/v1.TypeMeta
// No need for a direct dependence; the fields are stable.
#TypeMeta: {
kind?: string @go(Kind)
apiVersion?: string @go(APIVersion)
}


@@ -0,0 +1,45 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go sigs.k8s.io/kustomize/api/types
package types
// Var represents a variable whose value will be sourced
// from a field in a Kubernetes object.
#Var: {
// Value of identifier name e.g. FOO used in container args, annotations
// Appears in pod template as $(FOO)
name: string @go(Name)
// ObjRef must refer to a Kubernetes resource under the
// purview of this kustomization. ObjRef should use the
// raw name of the object (the name specified in its YAML,
// before addition of a namePrefix and a nameSuffix).
objref: #Target @go(ObjRef)
// FieldRef refers to the field of the object referred to by
// ObjRef whose value will be extracted for use in
// replacing $(FOO).
// If unspecified, this defaults to fieldPath: $defaultFieldPath
fieldref?: #FieldSelector @go(FieldRef)
}
// Target refers to a kubernetes object by Group, Version, Kind and Name
// gvk.Gvk contains Group, Version and Kind
// APIVersion is added to keep the backward compatibility of using ObjectReference
// for Var.ObjRef
#Target: {
apiVersion?: string @go(APIVersion)
name: string @go(Name)
namespace?: string @go(Namespace)
}
// FieldSelector contains the fieldPath to an object field.
// This struct is added to keep the backward compatibility of using ObjectFieldSelector
// for Var.FieldRef
#FieldSelector: {
fieldPath?: string @go(FieldPath)
}
// byName is a sort interface which sorts Vars by name alphabetically
_#byName: [...#Var]


@@ -1,6 +1,6 @@
package v1
#Deployment: {
apiVersion: "apps/v1"
kind: "Deployment"
apiVersion: "apps/v1"
kind: "Deployment"
}


@@ -1,11 +1,11 @@
package v1
#CronJob: {
apiVersion: "batch/v1"
kind: "CronJob"
apiVersion: "batch/v1"
kind: "CronJob"
}
#Job: {
apiVersion: "batch/v1"
kind: "Job"
apiVersion: "batch/v1"
kind: "Job"
}


@@ -1,26 +1,26 @@
package v1
#Namespace: {
apiVersion: "v1"
kind: "Namespace"
apiVersion: "v1"
kind: "Namespace"
}
#ConfigMap: {
apiVersion: "v1"
kind: "ConfigMap"
apiVersion: "v1"
kind: "ConfigMap"
}
#ServiceAccount: {
apiVersion: "v1"
kind: "ServiceAccount"
apiVersion: "v1"
kind: "ServiceAccount"
}
#Pod: {
apiVersion: "v1"
kind: "Pod"
apiVersion: "v1"
kind: "Pod"
}
#Service: {
apiVersion: "v1"
kind: "Service"
apiVersion: "v1"
kind: "Service"
}


@@ -0,0 +1,15 @@
package types
#Patch: {
// Path is a relative file path to the patch file.
path?: string @go(Path)
// Patch is the content of a patch.
patch?: string @go(Patch)
// Target points to the resources that the patch is applied to
target?: #Target | #Selector @go(Target,*Selector)
// Options is a list of options for the patch
options?: {[string]: bool} @go(Options,map[string]bool)
}


@@ -0,0 +1,7 @@
package types
#Target: {
group?: string @go(Group)
version?: string @go(Version)
kind?: string @go(Kind)
}


@@ -0,0 +1,62 @@
package holos
#PlatformCerts: {
// Globally scoped platform services are defined here.
login: #PlatformCert & {
_name: "login"
_wildcard: true
_description: "Cert for Zitadel oidc identity provider for iam services"
}
// Cluster scoped services are defined here.
for cluster in #Platform.clusters {
"\(cluster.name)-httpbin": #ClusterCert & {
_name: "httpbin"
_cluster: cluster.name
_description: "Test endpoint to verify the service mesh ingress gateway"
}
}
}
// #PlatformCert provisions a cert in the provisioner cluster.
// Workload clusters use ExternalSecret resources to fetch the Secret tls key and cert from the provisioner cluster.
#PlatformCert: #Certificate & {
_name: string
_wildcard: true | *false
metadata: name: string | *_name
metadata: namespace: string | *"istio-ingress"
spec: {
commonName: string | *"\(_name).\(#Platform.org.domain)"
if _wildcard {
dnsNames: [commonName, "*.\(commonName)"]
}
if !_wildcard {
dnsNames: [commonName]
}
secretName: metadata.name
issuerRef: kind: "ClusterIssuer"
issuerRef: name: string | *"letsencrypt"
}
}
// #ClusterCert provisions a cluster specific certificate.
#ClusterCert: #Certificate & {
_name: string
_cluster: string
_wildcard: true | *false
// Enforce this value
metadata: name: "\(_cluster)-\(_name)"
metadata: namespace: string | *"istio-ingress"
spec: {
commonName: string | *"\(_name).\(_cluster).\(#Platform.org.domain)"
if _wildcard {
dnsNames: [commonName, "*.\(commonName)"]
}
if !_wildcard {
dnsNames: [commonName]
}
secretName: metadata.name
issuerRef: kind: "ClusterIssuer"
issuerRef: name: string | *"letsencrypt"
}
}
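
A hedged sketch of declaring a concrete cert against these definitions; the service and cluster names are hypothetical, and `#Platform.org.domain` is assumed to be defined elsewhere in the package:

```cue
package holos

// Hypothetical one-off value unified with #ClusterCert.
grafanaCert: #ClusterCert & {
	_name:        "grafana" // illustrative service name
	_cluster:     "k3"      // illustrative cluster name
	_description: "Cert for an observability dashboard"
	// metadata.name is enforced as "k3-grafana"; _wildcard defaults to
	// false, so commonName "grafana.k3.\(#Platform.org.domain)" is the
	// only entry in dnsNames.
}
```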


@@ -0,0 +1,20 @@
package holos
#InputKeys: component: "crdb"
#HelmChart & {
namespace: #TargetNamespace
chart: {
name: "cockroachdb"
version: "11.2.3"
repository: {
name: "cockroachdb"
url: "https://charts.cockroachdb.com/"
}
}
values: #Values
apiObjects: {
ExternalSecret: node: #ExternalSecret & {_name: "cockroachdb-node"}
ExternalSecret: root: #ExternalSecret & {_name: "cockroachdb-root"}
}
}


@@ -0,0 +1,606 @@
package holos
#Values: {
// Generated file, DO NOT EDIT. Source: build/templates/values.yaml
// Overrides the chart name against the label "app.kubernetes.io/name: " placed on every resource this chart creates.
nameOverride: ""
// Override the resource names created by this chart which originally is generated using release and chart name.
fullnameOverride: string | *""
image: {
repository: string | *"cockroachdb/cockroach"
tag: "v23.1.13"
pullPolicy: "IfNotPresent"
credentials: {}
}
// registry: docker.io
// username: john_doe
// password: changeme
// Additional labels to apply to all Kubernetes resources created by this chart.
labels: {}
// app.kubernetes.io/part-of: my-app
// Cluster's default DNS domain.
// You should overwrite it if you're using a different one,
// otherwise CockroachDB nodes discovery won't work.
clusterDomain: "cluster.local"
conf: {
// An ordered list of CockroachDB node attributes.
// Attributes are arbitrary strings specifying machine capabilities.
// Machine capabilities might include specialized hardware or number of cores
// (e.g. "gpu", "x16c").
attrs: []
// - x16c
// - gpu
// Total size in bytes for caches, shared evenly if there are multiple
// storage devices. Size suffixes are supported (e.g. `1GB` and `1GiB`).
// A percentage of physical memory can also be specified (e.g. `.25`).
cache: "25%"
// Sets a name to verify the identity of a cluster.
// The value must match between all nodes specified via `conf.join`.
// This can be used as an additional verification when either the node or
// cluster, or both, have not yet been initialized and do not yet know their
// cluster ID.
// To introduce a cluster name into an already-initialized cluster, pair this
// option with `conf.disable-cluster-name-verification: yes`.
"cluster-name": ""
// Tell the server to ignore `conf.cluster-name` mismatches.
// This is meant for use when opting an existing cluster into starting to use
// cluster name verification, or when changing the cluster name.
// The cluster should be restarted once with `conf.cluster-name` and
// `conf.disable-cluster-name-verification: yes` combined, and once all nodes
// have been updated to know the new cluster name, the cluster can be restarted
// again with `conf.disable-cluster-name-verification: no`.
// This option has no effect if `conf.cluster-name` is not specified.
"disable-cluster-name-verification": false
// The addresses for connecting CockroachDB nodes to an existing cluster.
// If you are deploying a second CockroachDB instance that should join a first
// one, use the list below to join the existing instance.
// Each item in the array should be a FQDN (and port if needed) resolvable by
// new Pods.
join: []
// New logging configuration.
log: {
enabled: false
// https://www.cockroachlabs.com/docs/v21.1/configure-logs
config: {}
// file-defaults:
// dir: /custom/dir/path/
// fluent-defaults:
// format: json-fluent
// sinks:
// stderr:
// channels: [DEV]
}
// Logs at or above this threshold are sent to STDERR. Ignored when "log" is enabled.
logtostderr: "INFO"
// Maximum storage capacity available to store temporary disk-based data for
// SQL queries that exceed the memory budget (e.g. joins and sorts are
// sometimes able to spill intermediate results to disk).
// Accepts numbers interpreted as bytes, size suffixes (e.g. `32GB` and
// `32GiB`) or a percentage of disk size (e.g. `10%`).
// The location of the temporary files is within the first store dir.
// If expressed as a percentage, `max-disk-temp-storage` is interpreted
// relative to the size of the storage device on which the first store is
// placed. The temp space usage is never counted towards any store usage
// (although it does share the device with the first store) so, when
// configuring this, make sure that the size of this temp storage plus the size
// of the first store don't exceed the capacity of the storage device.
// If the first store is an in-memory one (i.e. `type=mem`), then this
// temporary "disk" data is also kept in-memory.
// A percentage value is interpreted as a percentage of the available internal
// memory.
// max-disk-temp-storage: 0GB
// Maximum allowed clock offset for the cluster. If observed clock offsets
// exceed this limit, servers will crash to minimize the likelihood of
// reading inconsistent data. Increasing this value will increase the time
// to recovery of failures as well as the frequency of uncertainty-based
// read restarts.
// Note that this value must be the same on all nodes in the cluster.
// In order to change it, all nodes in the cluster must be stopped
// simultaneously and restarted with the new value.
// max-offset: 500ms
// Maximum memory capacity available to store temporary data for SQL clients,
// including prepared queries and intermediate data rows during query
// execution. Accepts numbers interpreted as bytes, size suffixes
// (e.g. `1GB` and `1GiB`) or a percentage of physical memory (e.g. `.25`).
"max-sql-memory": "25%"
// An ordered, comma-separated list of key-value pairs that describe the
// topography of the machine. Topography might include country, datacenter
// or rack designations. Data is automatically replicated to maximize
// diversities of each tier. The order of tiers is used to determine
// the priority of the diversity, so the more inclusive localities like
// country should come before less inclusive localities like datacenter.
// The tiers and order must be the same on all nodes. Including more tiers
// is better than including fewer. For example:
// locality: country=us,region=us-west,datacenter=us-west-1b,rack=12
// locality: country=ca,region=ca-east,datacenter=ca-east-2,rack=4
// locality: planet=earth,province=manitoba,colo=secondary,power=3
locality: ""
// Run CockroachDB instances in standalone mode with replication disabled
// (replication factor = 1).
// Enabling this option causes the following values to be ignored:
// - `conf.cluster-name`
// - `conf.disable-cluster-name-verification`
// - `conf.join`
//
// WARNING: Enabling this option makes each deployed Pod a STANDALONE
// CockroachDB instance, so the StatefulSet does NOT FORM A CLUSTER.
// Don't use this option for production deployments unless you clearly
// understand what you're doing.
// Usually, this option is intended to be used in conjunction with
// `statefulset.replicas: 1` for temporary one-time deployments (like
// running E2E tests, for example).
"single-node": false
// If non-empty, create a SQL audit log in the specified directory.
"sql-audit-dir": ""
// CockroachDB's port to listen to inter-communications and client connections.
port: 26257
// CockroachDB's port to listen to HTTP requests.
"http-port": 8080
// CockroachDB's data mount path.
path: "cockroach-data"
// CockroachDB's storage configuration https://www.cockroachlabs.com/docs/v21.1/cockroach-start.html#storage
// Uses --store flag
store: {
enabled: false
// Should be empty or 'mem'
type: null
// Required for type=mem. If type and size are empty, storage.persistentVolume.size is used
size: null
// Arbitrary strings, separated by colons, specifying disk type or capability
attrs: null
}
}
statefulset: {
replicas: 3
updateStrategy: type: "RollingUpdate"
podManagementPolicy: "Parallel"
budget: maxUnavailable: 1
// List of additional command-line arguments you want to pass to the
// `cockroach start` command.
args: []
// - --disable-cluster-name-verification
// List of extra environment variables to pass into container
env: []
// - name: COCKROACH_ENGINE_MAX_SYNC_DURATION
// value: "24h"
// List of Secrets names in the same Namespace as the CockroachDB cluster,
// which shall be mounted into `/etc/cockroach/secrets/` for every cluster
// member.
secretMounts: []
// Additional labels to apply to this StatefulSet and all its Pods.
labels: {
"app.kubernetes.io/component": "cockroachdb"
}
// Additional annotations to apply to the Pods of this StatefulSet.
annotations: {}
// Affinity rules for scheduling Pods of this StatefulSet on Nodes.
// https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity
nodeAffinity: {}
// Inter-Pod Affinity rules for scheduling Pods of this StatefulSet.
// https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity
podAffinity: {}
// Anti-affinity rules for scheduling Pods of this StatefulSet.
// https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity
// You may either toggle options below for default anti-affinity rules,
// or specify the whole set of anti-affinity rules instead of them.
podAntiAffinity: {
// The topologyKey to be used.
// Can be used to spread across different nodes, AZs, regions etc.
topologyKey: "kubernetes.io/hostname"
// Type of anti-affinity rules: either `soft`, `hard` or empty value (which
// disables anti-affinity rules).
type: "soft"
// Weight for `soft` anti-affinity rules.
// Does not apply for other anti-affinity types.
weight: 100
}
// Node selection constraints for scheduling Pods of this StatefulSet.
// https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}
// PriorityClassName given to Pods of this StatefulSet
// https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
// Taints to be tolerated by Pods of this StatefulSet.
// https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
// https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
topologySpreadConstraints: {
maxSkew: 1
topologyKey: "topology.kubernetes.io/zone"
whenUnsatisfiable: "ScheduleAnyway"
}
// Uncomment the following resources definitions or pass them from
// command line to control the CPU and memory resources allocated
// by Pods of this StatefulSet.
resources: {}
// limits:
// cpu: 100m
// memory: 512Mi
// requests:
// cpu: 100m
// memory: 512Mi
// Custom Liveness probe
// https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request
customLivenessProbe: {}
// httpGet:
// path: /health
// port: http
// scheme: HTTPS
// initialDelaySeconds: 30
// periodSeconds: 5
// Custom Readiness probe
// https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes
customReadinessProbe: {}
// httpGet:
// path: /health
// port: http
// scheme: HTTPS
// initialDelaySeconds: 30
// periodSeconds: 5
securityContext: {
enabled: true
}
serviceAccount: {
// Specifies whether this ServiceAccount should be created.
create: true
// The name of this ServiceAccount to use.
// If not set and `create` is `true`, a name is generated automatically.
// If not set and `create` is `false`, the default ServiceAccount is used.
name: ""
// Additional serviceAccount annotations (e.g. for attaching AWS IAM roles to pods)
annotations: {}
}
}
service: {
ports: {
// You can set different external and internal gRPC ports and their names.
grpc: {
external: {
port: 26257
name: "grpc"
}
// If the port number differs from `external.port`, then it will be
// named `internal.name` in the Service.
internal: {
port: 26257
// If using Istio set it to `cockroach`.
name: "grpc-internal"
}
}
http: {
port: 8080
name: "http"
}
}
// This Service is meant to be used by clients of the database.
// It exposes a ClusterIP that will automatically load balance connections
// to the different database Pods.
public: {
type: "ClusterIP"
// Additional labels to apply to this Service.
labels: {
"app.kubernetes.io/component": "cockroachdb"
}
// Additional annotations to apply to this Service.
annotations: {}
}
// This service only exists to create DNS entries for each pod in
// the StatefulSet such that they can resolve each other's IP addresses.
// It does not create a load-balanced ClusterIP and should not be used directly
// by clients in most circumstances.
discovery: {
// Additional labels to apply to this Service.
labels: {
"app.kubernetes.io/component": "cockroachdb"
}
// Additional annotations to apply to this Service.
annotations: {}
}
}
// CockroachDB's ingress for the web UI.
ingress: {
enabled: false
labels: {}
annotations: {}
// kubernetes.io/ingress.class: nginx
// cert-manager.io/cluster-issuer: letsencrypt
paths: ["/"]
hosts: []
// - cockroachlabs.com
tls: []
}
// - hosts: [cockroachlabs.com]
// secretName: cockroachlabs-tls
prometheus: {
enabled: true
}
securityContext: enabled: true
// CockroachDB's Prometheus operator ServiceMonitor support
serviceMonitor: {
enabled: false
labels: {}
annotations: {}
interval: "10s"
// scrapeTimeout: 10s
// Limits the ServiceMonitor to the current namespace if set to `true`.
namespaced: false
// tlsConfig: TLS configuration to use when scraping the endpoint.
// Of type: https://github.com/coreos/prometheus-operator/blob/main/Documentation/api.md#tlsconfig
tlsConfig: {}
}
// CockroachDB's data persistence.
// If neither `persistentVolume` nor `hostPath` is used, then data will be
// persisted in ad-hoc `emptyDir`.
storage: {
// Absolute path on host to store CockroachDB's data.
// If not specified, then `emptyDir` will be used instead.
// If specified, but `persistentVolume.enabled` is `true`, then it has no effect.
hostPath: ""
// If `enabled` is `true` then a PersistentVolumeClaim will be created and
// used to store CockroachDB's data, otherwise `hostPath` is used.
persistentVolume: {
enabled: true
size: string | *"100Gi"
// If defined, then `storageClassName: <storageClass>`.
// If set to "-", then `storageClassName: ""`, which disables dynamic
// provisioning.
// If undefined or empty (default), then no `storageClassName` spec is set,
// so the default provisioner will be chosen (gp2 on AWS, standard on
// GKE, AWS & OpenStack).
storageClass: ""
// Additional labels to apply to the created PersistentVolumeClaims.
labels: {}
// Additional annotations to apply to the created PersistentVolumeClaims.
annotations: {}
}
}
// Kubernetes Job which initializes multi-node CockroachDB cluster.
// It's not created if `statefulset.replicas` is `1`.
init: {
// Additional labels to apply to this Job and its Pod.
labels: {
"app.kubernetes.io/component": "init"
}
// Additional annotations to apply to this Job.
jobAnnotations: {}
// Additional annotations to apply to the Pod of this Job.
annotations: {}
// Affinity rules for scheduling the Pod of this Job.
// https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity
affinity: {}
// Node selection constraints for scheduling the Pod of this Job.
// https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}
// Taints to be tolerated by the Pod of this Job.
// https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
// The init Pod runs at cluster creation to initialize CockroachDB. It finishes
// quickly and doesn't continue to consume resources in the Kubernetes
// cluster. Normally, you should leave this section commented out, but if your
// Kubernetes cluster uses Resource Quotas and requires all pods to specify
// resource requests or limits, you can set those here.
resources: {}
// requests:
// cpu: "10m"
// memory: "128Mi"
// limits:
// cpu: "10m"
// memory: "128Mi"
securityContext: {
enabled: true
}
provisioning: {
enabled: false
// https://www.cockroachlabs.com/docs/stable/cluster-settings.html
clusterSettings: null
// cluster.organization: "'FooCorp - Local Testing'"
// enterprise.license: "'xxxxx'"
users: []
// - name:
// password:
// # https://www.cockroachlabs.com/docs/stable/create-user.html#parameters
// options: [LOGIN]
databases: []
// - name:
// # https://www.cockroachlabs.com/docs/stable/create-database.html#parameters
// options: [encoding='utf-8']
// owners: []
// # https://www.cockroachlabs.com/docs/stable/grant.html#parameters
// owners_with_grant_option: []
// # Backup schedules are not idempotent for now and will fail on next run
// # https://github.com/cockroachdb/cockroach/issues/57892
// backup:
// into: s3://
// # Enterprise-only option (revision_history)
// # https://www.cockroachlabs.com/docs/stable/create-schedule-for-backup.html#backup-options
// options: [revision_history]
// recurring: '@always'
// # Enterprise-only feature. Remove this value to use `FULL BACKUP ALWAYS`
// fullBackup: '@daily'
// schedule:
// # https://www.cockroachlabs.com/docs/stable/create-schedule-for-backup.html#schedule-options
// options: [first_run = 'now']
}
}
// Whether to run securely using TLS certificates.
tls: {
enabled: true
copyCerts: image: "busybox"
certs: {
// Bring your own certs scenario. If provided, tls.init section will be ignored.
provided: true | *false
// Secret name for the client root cert.
clientRootSecret: "cockroachdb-root"
// Secret name for node cert.
nodeSecret: "cockroachdb-node"
// Secret name for CA cert
caSecret: "cockroach-ca"
// Enable if the secret is a dedicated TLS secret.
// TLS secrets are created by cert-manager, for example.
tlsSecret: true | *false
// Enable if you want CockroachDB to create its own certificates
selfSigner: {
// If set, CockroachDB will generate its own certificates
enabled: false | *true
// Run selfSigner as non-root
securityContext: {
enabled: true
}
// If set, the user should provide the CA certificate to sign other certificates.
caProvided: false
// Name of the secret holding the CA certs. If caProvided is set, this cannot be empty.
caSecret: ""
// Minimum certificate duration for all the certificates; every cert's duration is validated against this.
minimumCertDuration: "624h"
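// A hedged observation on the defaults below (not upstream documentation):
// the client certificate has the smallest effective rotation lifetime,
// 672h - 48h = 624h, which matches minimumCertDuration. Node certs
// (8760h - 168h = 8592h) and CA certs (43800h - 648h = 43152h) sit well
// above it.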
// Duration of CA certificates in hours
caCertDuration: "43800h"
// Window before actual expiry during which CA certs should be rotated.
caCertExpiryWindow: "648h"
// Duration of client certificates in hours
clientCertDuration: "672h"
// Window before actual expiry during which client certs should be rotated.
clientCertExpiryWindow: "48h"
// Duration of node certificates in hours
nodeCertDuration: "8760h"
// Window before actual expiry during which node certs should be rotated.
nodeCertExpiryWindow: "168h"
// If set, the cockroachdb cert selfSigner will rotate the certificates before expiry.
rotateCerts: true
// Wait time for each cockroachdb replica to become ready once it is running. Only considered when rotateCerts is true
readinessWait: "30s"
// Wait time for each cockroachdb replica to reach the running state. Only considered when rotateCerts is true
podUpdateTimeout: "2m"
// ServiceAccount annotations for selfSigner jobs (e.g. for attaching AWS IAM roles to pods)
svcAccountAnnotations: {}
}
// Use cert-manager to issue certificates for mTLS.
certManager: true | *false
// Specify an Issuer or a ClusterIssuer to use, when issuing
// node and client certificates. The values correspond to the
// issuerRef specified in the certificate.
certManagerIssuer: {
group: "cert-manager.io"
kind: "Issuer"
name: string | *"cockroachdb"
// Set to false when you are providing your own CA issuer
isSelfSignedIssuer: true
// Duration of client certificates in hours
clientCertDuration: "672h"
// Window before actual expiry during which client certs should be rotated.
clientCertExpiryWindow: "48h"
// Duration of node certificates in hours
nodeCertDuration: "8760h"
// Window before actual expiry during which node certs should be rotated.
nodeCertExpiryWindow: "168h"
}
}
selfSigner: {
// Additional annotations to apply to the Pod of this Job.
annotations: {}
// Affinity rules for scheduling the Pod of this Job.
// https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity
affinity: {}
// Node selection constraints for scheduling the Pod of this Job.
// https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}
// Taints to be tolerated by the Pod of this Job.
// https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
// Placeholder image for the selfSigner utility. This will change once the CI workflows for the image are in place.
image: {
repository: "cockroachlabs-helm-charts/cockroach-self-signer-cert"
tag: "1.5"
pullPolicy: "IfNotPresent"
credentials: {}
// username: john_doe
// password: changeme
registry: "gcr.io"
}
}
networkPolicy: {
enabled: false
ingress: {
// List of sources which should be able to access the CockroachDB Pods via
// gRPC port. Items in this list are combined using a logical OR operation.
// Rules for allowing inter-communication are applied automatically.
// If empty, then connections from any Pod are allowed.
grpc: []
// - podSelector:
// matchLabels:
// app.kubernetes.io/name: my-app-django
// app.kubernetes.io/instance: my-app
// List of sources which should be able to access the CockroachDB Pods via
// HTTP port. Items in this list are combined using a logical OR operation.
// If empty, then connections from any Pod are allowed.
http: []
// - namespaceSelector:
// matchLabels:
// project: my-project
}
}
// To put the admin interface behind Identity-Aware Proxy (IAP) on Google Cloud Platform,
// make sure to set ingress.paths: ['/*']
iap: {
enabled: false
}
}
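Several of the values above (`conf.cache`, `conf.max-sql-memory`, `max-disk-temp-storage`) accept byte sizes with suffixes or percentages. A minimal sketch of how such a value could be parsed, purely illustrative (this is not CockroachDB's actual parser, and `parse_mem_value` is a hypothetical helper):

```python
import re


def parse_mem_value(value: str, total_bytes: int) -> int:
    """Parse a CockroachDB-style memory value: '25%', '.25', '1GB', '1GiB'.

    Returns an absolute byte count relative to total_bytes for percentages
    and fractions. Illustrative sketch only.
    """
    value = value.strip()
    # Percentage of the total, e.g. "25%".
    if value.endswith("%"):
        return int(total_bytes * float(value[:-1]) / 100)
    # A bare fraction like ".25" is treated as a fraction of the total.
    if re.fullmatch(r"0?\.\d+", value):
        return int(total_bytes * float(value))
    # Size suffixes: "GB" is decimal (1000^3), "GiB" is binary (1024^3).
    m = re.fullmatch(r"(\d+(?:\.\d+)?)([KMGT]i?B)", value, re.IGNORECASE)
    if not m:
        raise ValueError(f"unrecognized size: {value!r}")
    num, unit = float(m.group(1)), m.group(2)
    base = 1024 if "i" in unit.lower() else 1000
    exponent = {"k": 1, "m": 2, "g": 3, "t": 4}[unit[0].lower()]
    return int(num * base ** exponent)
```

For example, `parse_mem_value("25%", 8 * 1024**3)` yields a quarter of 8 GiB, matching the chart's `cache: "25%"` default on an 8 GiB node.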


@@ -0,0 +1,23 @@
package holos
#Values: {
image: repository: "quay.io/holos/cockroachdb/cockroach"
fullnameOverride: #ComponentName
tls: {
enabled: true
certs: {
// https://github.com/cockroachdb/helm-charts/blob/3dcf96726ebcfe3784afb526ddcf4095a1684aea/README.md?plain=1#L204-L215
selfSigner: enabled: false
certManager: false
provided: true
tlsSecret: true
}
}
storage: persistentVolume: {
enabled: true
size: "1Gi"
}
}


@@ -0,0 +1,10 @@
package holos
// Components under this directory are part of this collection
#InputKeys: project: "iam"
// Shared dependencies for all components in this collection.
#DependsOn: _Namespaces
// Common Dependencies
_Namespaces: Namespaces: name: "\(#StageName)-secrets-namespaces"


@@ -0,0 +1,101 @@
package holos
// Manage an Issuer for the database.
// Both cockroach and postgres handle tls database connections with cert manager
// PGO: https://github.com/CrunchyData/postgres-operator-examples/tree/main/kustomize/certmanager/certman
// CRDB: https://github.com/cockroachdb/helm-charts/blob/3dcf96726ebcfe3784afb526ddcf4095a1684aea/README.md?plain=1#L196-L201
// Refer to [Using Cert Manager to Deploy TLS for Postgres on Kubernetes](https://www.crunchydata.com/blog/using-cert-manager-to-deploy-tls-for-postgres-on-kubernetes)
#InputKeys: component: "postgres-certs"
let SelfSigned = "\(_DBName)-selfsigned"
let RootCA = "\(_DBName)-root-ca"
let Orgs = ["Database"]
#KubernetesObjects & {
apiObjects: {
// Put everything in the target namespace.
[_]: {
[Name=_]: {
metadata: name: Name
metadata: namespace: #TargetNamespace
}
}
Issuer: {
"\(SelfSigned)": #Issuer & {
_description: "Self signed issuer to issue ca certs"
metadata: name: SelfSigned
spec: selfSigned: {}
}
"\(RootCA)": #Issuer & {
_description: "Root signed intermediate ca to issue mtls database certs"
metadata: name: RootCA
spec: ca: secretName: RootCA
}
}
Certificate: {
"\(RootCA)": #Certificate & {
_description: "Root CA cert for database"
metadata: name: RootCA
spec: {
commonName: RootCA
isCA: true
issuerRef: group: "cert-manager.io"
issuerRef: kind: "Issuer"
issuerRef: name: SelfSigned
privateKey: algorithm: "ECDSA"
privateKey: size: 256
secretName: RootCA
subject: organizations: Orgs
}
}
"\(_DBName)-primary-tls": #DatabaseCert & {
// PGO managed name is "<cluster name>-cluster-cert" e.g. zitadel-cluster-cert
spec: {
commonName: "\(_DBName)-primary"
dnsNames: [
commonName,
"\(commonName).\(#TargetNamespace)",
"\(commonName).\(#TargetNamespace).svc",
"\(commonName).\(#TargetNamespace).svc.cluster.local",
"localhost",
"127.0.0.1",
]
usages: ["digital signature", "key encipherment"]
}
}
"\(_DBName)-repl-tls": #DatabaseCert & {
spec: {
commonName: "_crunchyrepl"
dnsNames: [commonName]
usages: ["digital signature", "key encipherment"]
}
}
"\(_DBName)-client-tls": #DatabaseCert & {
spec: {
commonName: "\(_DBName)-client"
dnsNames: [commonName]
usages: ["digital signature", "key encipherment"]
}
}
}
}
}
#DatabaseCert: #Certificate & {
metadata: name: string
metadata: namespace: #TargetNamespace
spec: {
duration: "2160h" // 90d
renewBefore: "360h" // 15d
issuerRef: group: "cert-manager.io"
issuerRef: kind: "Issuer"
issuerRef: name: RootCA
privateKey: algorithm: "ECDSA"
privateKey: size: 256
secretName: metadata.name
subject: organizations: Orgs
}
}
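Given `_DBName: "zitadel"` (set elsewhere in this project), `#DatabaseCert` renders roughly the following cert-manager resource for the client cert. This is a hedged sketch; the namespace shown is a hypothetical placeholder for `#TargetNamespace`:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: zitadel-client-tls
  namespace: example-zitadel  # placeholder for #TargetNamespace
spec:
  commonName: zitadel-client
  dnsNames:
    - zitadel-client
  duration: 2160h    # 90d
  renewBefore: 360h  # 15d
  issuerRef:
    group: cert-manager.io
    kind: Issuer
    name: zitadel-root-ca
  privateKey:
    algorithm: ECDSA
    size: 256
  secretName: zitadel-client-tls
  subject:
    organizations:
      - Database
  usages:
    - digital signature
    - key encipherment
```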


@@ -0,0 +1,7 @@
# Database Certs
This component issues postgres certificates from the provisioner cluster using cert-manager.
The purpose is to define `customTLSSecret` and `customReplicationTLSSecret`, providing certs that allow the standby to authenticate to the primary. For this type of standby, custom TLS is required.
Refer to the PGO [Streaming Standby](https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#streaming-standby) tutorial.
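
For reference, a sketch of how the issued secrets are typically wired into a `PostgresCluster` spec (field names from the PGO API; the secret names match the certs issued by this component):

```yaml
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: zitadel
spec:
  customTLSSecret:
    name: zitadel-primary-tls      # primary server cert issued by this component
  customReplicationTLSSecret:
    name: zitadel-repl-tls         # replication cert, CN must be _crunchyrepl
```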


@@ -0,0 +1,6 @@
package holos
#TargetNamespace: #InstancePrefix + "-zitadel"
// _DBName is the database name used across multiple holos components in this project
_DBName: "zitadel"


@@ -0,0 +1,20 @@
package holos
// Provision all platform certificates.
#InputKeys: component: "certificates"
// Certificates usually go into the istio-system namespace, but they may go anywhere.
#TargetNamespace: "default"
// Depends on issuers
#DependsOn: _LetsEncrypt
#KubernetesObjects & {
apiObjects: {
for k, obj in #PlatformCerts {
"\(obj.kind)": {
"\(obj.metadata.namespace)/\(obj.metadata.name)": obj
}
}
}
}


@@ -0,0 +1,43 @@
package holos
// https://cert-manager.io/docs/
#TargetNamespace: "cert-manager"
#InputKeys: {
component: "certmanager"
service: "cert-manager"
}
#HelmChart & {
values: #Values & {
installCRDs: true
startupapicheck: enabled: false
// Must not use kube-system on GKE Autopilot. GKE Warden authz blocks access.
global: leaderElection: namespace: #TargetNamespace
}
namespace: #TargetNamespace
chart: {
name: "cert-manager"
version: "1.14.3"
repository: {
name: "jetstack"
url: "https://charts.jetstack.io"
}
}
}
// https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-resource-requests#min-max-requests
#PodResources: {
requests: {
cpu: string | *"250m"
memory: string | *"512Mi"
"ephemeral-storage": string | *"100Mi"
}
}
// https://cloud.google.com/kubernetes-engine/docs/how-to/autopilot-spot-pods
#NodeSelector: {
"kubernetes.io/os": "linux"
"cloud.google.com/gke-spot": "true"
}


@@ -1,6 +1,6 @@
 package holos
-#UpstreamValues: {
+#Values: {
 // +docs:section=Global
 // Default values for cert-manager.
@@ -51,7 +51,7 @@ package holos
 leaderElection: {
 // Override the namespace used for the leader election lease
-namespace: "kube-system"
+namespace: string | *"kube-system"
 }
 }
@@ -246,7 +246,7 @@ package holos
 // memory: 32Mi
 //
 // ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
-resources: {}
+resources: #PodResources
 // Pod Security Context
 // ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
@@ -310,9 +310,7 @@ package holos
 // This default ensures that Pods are only scheduled to Linux nodes.
 // It prevents Pods being scheduled to Windows nodes in a mixed OS cluster.
 // +docs:property
-nodeSelector: {
-"kubernetes.io/os": "linux"
-}
+nodeSelector: #NodeSelector
 // +docs:ignore
 ingressShim: {}
@@ -408,7 +406,7 @@ package holos
 enabled: true
 servicemonitor: {
 // Create a ServiceMonitor to add cert-manager to Prometheus
-enabled: false
+enabled: true | *false
 // Specifies the `prometheus` label on the created ServiceMonitor, this is
 // used when different Prometheus instances have label selectors matching
@@ -652,7 +650,7 @@ package holos
 // memory: 32Mi
 //
 // ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
-resources: {}
+resources: #PodResources
 // Liveness probe values
 // ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
@@ -685,9 +683,7 @@ package holos
 // This default ensures that Pods are only scheduled to Linux nodes.
 // It prevents Pods being scheduled to Windows nodes in a mixed OS cluster.
 // +docs:property
-nodeSelector: {
-"kubernetes.io/os": "linux"
-}
+nodeSelector: #NodeSelector
 // A Kubernetes Affinity, if required; see https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#affinity-v1-core
 //
@@ -959,7 +955,7 @@ package holos
 // memory: 32Mi
 //
 // ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
-resources: {}
+resources: #PodResources
 // The nodeSelector on Pods tells Kubernetes to schedule Pods on the nodes with
 // matching labels.
@@ -968,9 +964,7 @@ package holos
 // This default ensures that Pods are only scheduled to Linux nodes.
 // It prevents Pods being scheduled to Windows nodes in a mixed OS cluster.
 // +docs:property
-nodeSelector: {
-"kubernetes.io/os": "linux"
-}
+nodeSelector: #NodeSelector
 // A Kubernetes Affinity, if required; see https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#affinity-v1-core
 //
@@ -1098,7 +1092,7 @@ package holos
 startupapicheck: {
 // Enables the startup api check
-enabled: true
+enabled: *true | false
 // Pod Security Context to be set on the startupapicheck component Pod
 // ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
@@ -1151,7 +1145,7 @@ package holos
 // memory: 32Mi
 //
 // ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
-resources: {}
+resources: #PodResources
 // The nodeSelector on Pods tells Kubernetes to schedule Pods on the nodes with
 // matching labels.
@@ -1160,9 +1154,7 @@ package holos
 // This default ensures that Pods are only scheduled to Linux nodes.
 // It prevents Pods being scheduled to Windows nodes in a mixed OS cluster.
 // +docs:property
-nodeSelector: {
-"kubernetes.io/os": "linux"
-}
+nodeSelector: #NodeSelector
 // A Kubernetes Affinity, if required; see https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#affinity-v1-core
 //


@@ -0,0 +1,78 @@
package holos
// Let's Encrypt certificate issuers for public TLS certs
#InputKeys: component: "letsencrypt"
#TargetNamespace: "cert-manager"
let Name = "letsencrypt"
// The cloudflare api token is platform scoped, not cluster scoped.
#SecretName: "cloudflare-api-token-secret"
// Depends on cert manager
#DependsOn: _CertManager
#KubernetesObjects & {
apiObjects: {
ClusterIssuer: {
letsencrypt: #ClusterIssuer & {
metadata: name: Name
spec: {
acme: {
email: #Platform.org.contact.email
server: "https://acme-v02.api.letsencrypt.org/directory"
privateKeySecretRef: name: Name
solvers: [{
dns01: cloudflare: {
email: #Platform.org.cloudflare.email
apiTokenSecretRef: name: #SecretName
apiTokenSecretRef: key: "api_token"
}}]
}
}
}
letsencryptStaging: #ClusterIssuer & {
metadata: name: Name + "-staging"
spec: {
acme: {
email: #Platform.org.contact.email
server: "https://acme-staging-v02.api.letsencrypt.org/directory"
privateKeySecretRef: name: Name + "-staging"
solvers: [{
dns01: cloudflare: {
email: #Platform.org.cloudflare.email
apiTokenSecretRef: name: #SecretName
apiTokenSecretRef: key: "api_token"
}}]
}
}
}
}
}
}
// _HTTPSolvers are disabled in the provisioner cluster; DNS is the method supported by holos.
_HTTPSolvers: {
letsencryptHTTP: #ClusterIssuer & {
metadata: name: Name + "-http"
spec: {
acme: {
email: #Platform.org.contact.email
server: "https://acme-v02.api.letsencrypt.org/directory"
privateKeySecretRef: name: Name
solvers: [{http01: ingress: class: "istio"}]
}
}
}
letsencryptHTTPStaging: #ClusterIssuer & {
metadata: name: Name + "-http-staging"
spec: {
acme: {
email: #Platform.org.contact.email
server: "https://acme-staging-v02.api.letsencrypt.org/directory"
privateKeySecretRef: name: Name + "-staging"
solvers: [{http01: ingress: class: "istio"}]
}
}
}
}


@@ -0,0 +1,13 @@
package holos
// Components under this directory are part of this collection
#InputKeys: project: "mesh"
// Shared dependencies for all components in this collection.
#DependsOn: _Namespaces
// Common Dependencies
_Namespaces: Namespaces: name: "\(#StageName)-secrets-namespaces"
_CertManager: CertManager: name: "\(#InstancePrefix)-certmanager"
_LetsEncrypt: LetsEncrypt: name: "\(#InstancePrefix)-letsencrypt"
_Certificates: Certificates: name: "\(#InstancePrefix)-certificates"


@@ -0,0 +1,9 @@
package holos
// GitHub Actions Runner Controller
#InputKeys: project: "github"
#DependsOn: Namespaces: name: "prod-secrets-namespaces"
#TargetNamespace: #InputKeys.component
#HelmChart: namespace: #TargetNamespace
#HelmChart: chart: version: "0.8.3"


@@ -0,0 +1,26 @@
package holos
#InputKeys: component: "arc-runner"
#Kustomization: spec: targetNamespace: #TargetNamespace
#HelmChart & {
values: {
#Values
controllerServiceAccount: name: "gha-rs-controller"
controllerServiceAccount: namespace: "arc-system"
githubConfigSecret: "controller-manager"
githubConfigUrl: "https://github.com/" + #Platform.org.github.orgs.primary.name
}
apiObjects: ExternalSecret: "\(values.githubConfigSecret)": _
chart: {
// Match the gha-base-name in the chart _helpers.tpl to avoid long full names.
// NOTE: Unfortunately the INSTALLATION_NAME is used as the helm release
// name and GitHub removed support for runner labels, so the only way to
// specify which runner a workflow runs on is using this helm release name.
// The quote is "Update the INSTALLATION_NAME value carefully. You will use
// the installation name as the value of runs-on in your workflows." Refer to
// https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/quickstart-for-actions-runner-controller
release: "gha-rs"
name: "oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set"
}
}


@@ -0,0 +1,192 @@
package holos
#Values: {
//# githubConfigUrl is the GitHub url for where you want to configure runners
//# ex: https://github.com/myorg/myrepo or https://github.com/myorg
githubConfigUrl: string | *""
//# githubConfigSecret is the Kubernetes secret to use when authenticating with the GitHub API.
//# You can choose to use GitHub App or a PAT token
githubConfigSecret: string | {
//## GitHub Apps Configuration
//# NOTE: IDs MUST be strings, use quotes
//github_app_id: ""
//github_app_installation_id: ""
//github_app_private_key: |
//## GitHub PAT Configuration
github_token: ""
}
//# If you have a pre-defined Kubernetes secret in the same namespace where the gha-runner-scale-set is going to deploy,
//# you can also reference it via `githubConfigSecret: pre-defined-secret`.
//# You need to make sure your predefined secret has all the required secret data set properly.
//# For a pre-defined secret using GitHub PAT, the secret needs to be created like this:
//# > kubectl create secret generic pre-defined-secret --namespace=my_namespace --from-literal=github_token='ghp_your_pat'
//# For a pre-defined secret using GitHub App, the secret needs to be created like this:
//# > kubectl create secret generic pre-defined-secret --namespace=my_namespace --from-literal=github_app_id=123456 --from-literal=github_app_installation_id=654321 --from-literal=github_app_private_key='-----BEGIN CERTIFICATE-----*******'
// githubConfigSecret: pre-defined-secret
//# proxy can be used to define proxy settings that will be used by the
//# controller, the listener and the runner of this scale set.
//
// proxy:
// http:
// url: http://proxy.com:1234
// credentialSecretRef: proxy-auth # a secret with `username` and `password` keys
// https:
// url: http://proxy.com:1234
// credentialSecretRef: proxy-auth # a secret with `username` and `password` keys
// noProxy:
// - example.com
// - example.org
//# maxRunners is the max number of runners the autoscaling runner set will scale up to.
// maxRunners: 5
//# minRunners is the min number of idle runners. The target number of runners created will be
//# calculated as a sum of minRunners and the number of jobs assigned to the scale set.
// minRunners: 0
// runnerGroup: "default"
//# name of the runner scale set to create. Defaults to the helm release name
// runnerScaleSetName: ""
//# A self-signed CA certificate for communication with the GitHub server can be
//# provided using a config map key selector. If `runnerMountPath` is set, for
//# each runner pod ARC will:
//# - create a `github-server-tls-cert` volume containing the certificate
//# specified in `certificateFrom`
//# - mount that volume on path `runnerMountPath`/{certificate name}
//# - set NODE_EXTRA_CA_CERTS environment variable to that same path
//# - set RUNNER_UPDATE_CA_CERTS environment variable to "1" (as of version
//# 2.303.0 this will instruct the runner to reload certificates on the host)
//#
//# If any of the above had already been set by the user in the runner pod
//# template, ARC will observe those and not overwrite them.
//# Example configuration:
//
// githubServerTLS:
// certificateFrom:
// configMapKeyRef:
// name: config-map-name
// key: ca.crt
// runnerMountPath: /usr/local/share/ca-certificates/
//# Container mode is an object that provides out-of-the-box configuration
//# for dind and kubernetes mode. Template will be modified as documented under the
//# template object.
//#
//# If any customization is required for dind or kubernetes mode, containerMode should remain
//# empty, and configuration should be applied to the template.
// containerMode:
// type: "dind" ## type can be set to dind or kubernetes
// ## the following is required when containerMode.type=kubernetes
// kubernetesModeWorkVolumeClaim:
// accessModes: ["ReadWriteOnce"]
// # For local testing, use https://github.com/openebs/dynamic-localpv-provisioner/blob/develop/docs/quickstart.md to provide dynamic provision volume with storageClassName: openebs-hostpath
// storageClassName: "dynamic-blob-storage"
// resources:
// requests:
// storage: 1Gi
// kubernetesModeServiceAccount:
// annotations:
//# template is the PodSpec for each listener Pod
//# For reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec
// listenerTemplate:
// spec:
// containers:
// # Use this section to append additional configuration to the listener container.
// # If you change the name of the container, the configuration will not be applied to the listener,
// # and it will be treated as a side-car container.
// - name: listener
// securityContext:
// runAsUser: 1000
// # Use this section to add the configuration of a side-car container.
// # Comment it out or remove it if you don't need it.
// # Spec for this container will be applied as is without any modifications.
// - name: side-car
// image: example-sidecar
//# template is the PodSpec for each runner Pod
//# For reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec
template: {
//# template.spec will be modified if you change the container mode
//# with containerMode.type=dind, we will populate the template.spec with following pod spec
//# template:
//# spec:
//# initContainers:
//# - name: init-dind-externals
//# image: ghcr.io/actions/actions-runner:latest
//# command: ["cp", "-r", "-v", "/home/runner/externals/.", "/home/runner/tmpDir/"]
//# volumeMounts:
//# - name: dind-externals
//# mountPath: /home/runner/tmpDir
//# containers:
//# - name: runner
//# image: ghcr.io/actions/actions-runner:latest
//# command: ["/home/runner/run.sh"]
//# env:
//# - name: DOCKER_HOST
//# value: unix:///run/docker/docker.sock
//# volumeMounts:
//# - name: work
//# mountPath: /home/runner/_work
//# - name: dind-sock
//# mountPath: /run/docker
//# readOnly: true
//# - name: dind
//# image: docker:dind
//# args:
//# - dockerd
//# - --host=unix:///run/docker/docker.sock
//# - --group=$(DOCKER_GROUP_GID)
//# env:
//# - name: DOCKER_GROUP_GID
//# value: "123"
//# securityContext:
//# privileged: true
//# volumeMounts:
//# - name: work
//# mountPath: /home/runner/_work
//# - name: dind-sock
//# mountPath: /run/docker
//# - name: dind-externals
//# mountPath: /home/runner/externals
//# volumes:
//# - name: work
//# emptyDir: {}
//# - name: dind-sock
//# emptyDir: {}
//# - name: dind-externals
//# emptyDir: {}
//#####################################################################################################
//# with containerMode.type=kubernetes, we will populate the template.spec with following pod spec
//# template:
//# spec:
//# containers:
//# - name: runner
//# image: ghcr.io/actions/actions-runner:latest
//# command: ["/home/runner/run.sh"]
//# env:
//# - name: ACTIONS_RUNNER_CONTAINER_HOOKS
//# value: /home/runner/k8s/index.js
//# - name: ACTIONS_RUNNER_POD_NAME
//# valueFrom:
//# fieldRef:
//# fieldPath: metadata.name
//# - name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
//# value: "true"
//# volumeMounts:
//# - name: work
//# mountPath: /home/runner/_work
//# volumes:
//# - name: work
//# ephemeral:
//# volumeClaimTemplate:
//# spec:
//# accessModes: [ "ReadWriteOnce" ]
//# storageClassName: "local-path"
//# resources:
//# requests:
//# storage: 1Gi
spec: {
containers: [{
name: "runner"
image: "ghcr.io/actions/actions-runner:latest"
command: ["/home/runner/run.sh"]
}]
}
}
}

View File

@@ -0,0 +1,15 @@
package holos
#TargetNamespace: "arc-system"
#InputKeys: component: "arc-system"
#HelmChart & {
values: #Values & #DefaultSecurityContext
namespace: #TargetNamespace
chart: {
// Match the gha-base-name in the chart _helpers.tpl to avoid long full names.
release: "gha-rs-controller"
name: "oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller"
version: "0.8.3"
}
}

View File

@@ -0,0 +1,127 @@
package holos
#Values: {
// Default values for gha-runner-scale-set-controller.
// This is a YAML-formatted file.
// Declare variables to be passed into your templates.
labels: {}
// leaderElection will be enabled when replicaCount>1,
// so only one replica will be in charge of reconciliation at a given time.
// leaderElectionId will be set to {{ define gha-runner-scale-set-controller.fullname }}.
replicaCount: 1
image: {
repository: "ghcr.io/actions/gha-runner-scale-set-controller"
pullPolicy: "IfNotPresent"
// Overrides the image tag whose default is the chart appVersion.
tag: ""
}
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
env: null
//# Define environment variables for the controller pod
// - name: "ENV_VAR_NAME_1"
// value: "ENV_VAR_VALUE_1"
// - name: "ENV_VAR_NAME_2"
// valueFrom:
// secretKeyRef:
// key: ENV_VAR_NAME_2
// name: secret-name
// optional: true
serviceAccount: {
// Specifies whether a service account should be created for running the controller pod
create: true
// Annotations to add to the service account
annotations: {}
// The name of the service account to use.
// If not set and create is true, a name is generated using the fullname template
// You cannot use the default service account for this.
name: ""
}
podAnnotations: {}
podLabels: {}
podSecurityContext: {}
// fsGroup: 2000
securityContext: {...}
// capabilities:
// drop:
// - ALL
// readOnlyRootFilesystem: true
// runAsNonRoot: true
// runAsUser: 1000
resources: {}
//# We usually recommend not to specify default resources and to leave this as a conscious
//# choice for the user. This also increases chances charts run on environments with little
//# resources, such as Minikube. If you do want to specify resources, uncomment the following
//# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
// limits:
// cpu: 100m
// memory: 128Mi
// requests:
// cpu: 100m
// memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
// Mount volumes in the container.
volumes: []
volumeMounts: []
// Leverage a PriorityClass to ensure your pods survive resource shortages
// ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
// PriorityClass: system-cluster-critical
priorityClassName: ""
//# If `metrics:` object is not provided, or commented out, the following flags
//# will be applied to the controller-manager and listener pods with empty values:
//# `--metrics-addr`, `--listener-metrics-addr`, `--listener-metrics-endpoint`.
//# This will disable metrics.
//#
//# To enable metrics, uncomment the following lines.
// metrics:
// controllerManagerAddr: ":8080"
// listenerAddr: ":8080"
// listenerEndpoint: "/metrics"
flags: {
//# Log level can be set here with one of the following values: "debug", "info", "warn", "error".
//# Defaults to "debug".
logLevel: "debug"
//# Log format can be set with one of the following values: "text", "json"
//# Defaults to "text"
logFormat: "text"
//# Restricts the controller to only watch resources in the desired namespace.
//# Defaults to watch all namespaces when unset.
// watchSingleNamespace: ""
//# Defines how the controller should handle upgrades while having running jobs.
//#
//# The strategies available are:
//# - "immediate": (default) The controller will immediately apply the change causing the
//# recreation of the listener and ephemeral runner set. This can lead to an
//# overprovisioning of runners, if there are pending / running jobs. This should not
//# be a problem at a small scale, but it could lead to a significant increase of
//# resources if you have a lot of jobs running concurrently.
//#
//# - "eventual": The controller will remove the listener and ephemeral runner set
//# immediately, but will not recreate them (to apply changes) until all
//# pending / running jobs have completed.
//# This can lead to a longer time to apply the change but it will ensure
//# that you don't have any overprovisioning of runners.
updateStrategy: "immediate"
}
}

View File

@@ -0,0 +1,7 @@
package holos
// Components under this directory are part of this collection
#InputKeys: project: "iam"
// Shared dependencies for all components in this collection.
#DependsOn: namespaces: name: "\(#StageName)-secrets-namespaces"

View File

@@ -0,0 +1,17 @@
# IAM
The IAM service provides identity and access management for a Holos-managed platform. Zitadel is the identity provider, which integrates tightly with:
1. AuthorizationPolicy at the level of the service mesh.
2. Application-level OIDC login (ArgoCD, Grafana, etc.)
3. Cloud provider IAM via OIDC.
## Preflight
The Zitadel master key must be stored in a Secret named `zitadel-masterkey` with a data key named `masterkey`.
```bash
holos create secret zitadel-masterkey --namespace prod-iam-zitadel --append-hash=false --data-stdin <<EOF
{"masterkey":"$(tr -dc A-Za-z0-9 </dev/urandom | head -c 32)"}
EOF
```
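
ZITADEL expects the masterkey to be exactly 32 characters, which is why the generation command above takes `head -c 32`. A quick sanity check of that command (assumes a Linux-style `/dev/urandom`):

```shell
# Generate a candidate masterkey the same way as above and verify its length.
key="$(tr -dc A-Za-z0-9 </dev/urandom | head -c 32)"
printf '%s\n' "${#key}"   # prints 32
```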

View File

@@ -0,0 +1,13 @@
package holos
#InputKeys: component: "postgres-certs"
#KubernetesObjects & {
apiObjects: {
ExternalSecret: {
"\(_DBName)-primary-tls": _
"\(_DBName)-repl-tls": _
"\(_DBName)-client-tls": _
"\(_DBName)-root-ca": _
}
}
}

View File

@@ -0,0 +1,161 @@
package holos
#InputKeys: component: "postgres"
#DependsOn: "postgres-certs": _
let Cluster = #Platform.clusters[#ClusterName]
let S3Secret = "pgo-s3-creds"
let ZitadelUser = _DBName
let ZitadelAdmin = "\(_DBName)-admin"
#KubernetesObjects & {
apiObjects: {
ExternalSecret: "pgo-s3-creds": _
PostgresCluster: db: #PostgresCluster & HighlyAvailable & {
// This must be an external storage bucket for our architecture.
let BucketRepoName = spec.backups.pgbackrest.manual.repoName
metadata: name: _DBName
metadata: namespace: #TargetNamespace
spec: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.2-0"
postgresVersion: 16
// Custom certs are necessary for streaming standby replication which we use to replicate between two regions.
// Refer to https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#streaming-standby
customTLSSecret: name: "\(_DBName)-primary-tls"
customReplicationTLSSecret: name: "\(_DBName)-repl-tls"
// Refer to https://access.crunchydata.com/documentation/postgres-operator/latest/references/crd/5.5.x/postgrescluster#postgresclusterspecusersindex
users: [
{name: ZitadelUser},
// NOTE: Users with the SUPERUSER role cannot log in through pgbouncer. Use options that allow the zitadel admin to use pgbouncer.
// Refer to: https://github.com/CrunchyData/postgres-operator/issues/3095#issuecomment-1904712211
{name: ZitadelAdmin, options: "CREATEDB CREATEROLE", databases: [_DBName, "postgres"]},
]
users: [...{databases: [_DBName, ...]}]
instances: [{
replicas: 2
dataVolumeClaimSpec: {
accessModes: ["ReadWriteOnce"]
resources: requests: storage: string | *"1Gi"
}
}]
standby: {
repoName: BucketRepoName
if Cluster.primary {
enabled: false
}
if !Cluster.primary {
enabled: true
}
}
// Restore from a backup
dataSource: pgbackrest: {
stanza: "db"
configuration: [{secret: name: S3Secret}]
// Restore from known good full backup taken in https://github.com/holos-run/holos/issues/48#issuecomment-1987375044
options: ["--type=time", "--target=\"2024-03-10 21:56:00+00\""]
global: {
"\(BucketRepoName)-path": "/pgbackrest/\(#TargetNamespace)/\(metadata.name)/\(BucketRepoName)"
"\(BucketRepoName)-cipher-type": "aes-256-cbc"
}
repo: {
name: BucketRepoName
s3: {
bucket: string | *"\(#Platform.org.name)-zitadel-backups"
region: string | *#Backups.s3.region
endpoint: string | *"s3.dualstack.\(region).amazonaws.com"
}
}
}
// Refer to https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups
backups: pgbackrest: {
configuration: dataSource.pgbackrest.configuration
manual: {
// Note: the repoName value must match the config keys in the S3Secret.
// This must be an external repository for backup / restore / regional failovers.
repoName: "repo2"
options: ["--type=full", ...]
}
restore: {
enabled: true
repoName: BucketRepoName
}
global: {
// Store only one full backup in the PV because it's more expensive than object storage.
"\(repos[0].name)-retention-full": "1"
// Store 14 days of full backups in the bucket.
"\(BucketRepoName)-retention-full": string | *"14"
"\(BucketRepoName)-retention-full-type": "count" | *"time" // time in days
// Refer to https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups#encryption
"\(BucketRepoName)-cipher-type": "aes-256-cbc"
// "The convention we recommend for setting this variable is /pgbackrest/$NAMESPACE/$CLUSTER_NAME/repoN"
// Ref: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups#understanding-backup-configuration-and-basic-operations
"\(BucketRepoName)-path": "/pgbackrest/\(#TargetNamespace)/\(metadata.name)/\(manual.repoName)"
}
repos: [
{
name: "repo1"
volume: volumeClaimSpec: {
accessModes: ["ReadWriteOnce"]
resources: requests: storage: string | *"1Gi"
}
},
{
name: BucketRepoName
// Full backup weekly on Sunday at 1am, differential daily at 1am every day except Sunday.
schedules: full: string | *"0 1 * * 0"
schedules: differential: string | *"0 1 * * 1-6"
s3: dataSource.pgbackrest.repo.s3
},
]
}
}
}
}
}
// Refer to https://github.com/holos-run/postgres-operator-examples/blob/main/kustomize/high-availability/ha-postgres.yaml
let HighlyAvailable = {
apiVersion: "postgres-operator.crunchydata.com/v1beta1"
kind: "PostgresCluster"
metadata: name: string
spec: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.2-0"
postgresVersion: 16
instances: [{
name: "pgha1"
replicas: 2
dataVolumeClaimSpec: {
accessModes: ["ReadWriteOnce"]
resources: requests: storage: "1Gi"
}
affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: [{
weight: 1
podAffinityTerm: {
topologyKey: "kubernetes.io/hostname"
labelSelector: matchLabels: {
"postgres-operator.crunchydata.com/cluster": metadata.name
"postgres-operator.crunchydata.com/instance-set": name
}
}
}]
}]
backups: pgbackrest: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.49-0"
}
proxy: pgBouncer: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-pgbouncer:ubi8-1.21-3"
replicas: 2
affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: [{
weight: 1
podAffinityTerm: {
topologyKey: "kubernetes.io/hostname"
labelSelector: matchLabels: {
"postgres-operator.crunchydata.com/cluster": metadata.name
"postgres-operator.crunchydata.com/role": "pgbouncer"
}
}
}]
}
}
}

View File

@@ -0,0 +1,10 @@
package holos
#TargetNamespace: #InstancePrefix + "-zitadel"
// _DBName is the database name used across multiple holos components in this project
_DBName: "zitadel"
// The canonical login domain for the entire platform. Zitadel will be active
// on a single cluster at a time, but always accessible from this domain.
#ExternalDomain: "login.\(#Platform.org.domain)"

View File

@@ -0,0 +1,251 @@
package holos
#Values: {
// Default values for zitadel.
zitadel: {
// The ZITADEL config under configmapConfig is written to a Kubernetes ConfigMap
// See all defaults here:
// https://github.com/zitadel/zitadel/blob/main/cmd/defaults.yaml
configmapConfig: {
ExternalSecure: true
Machine: Identification: {
Hostname: Enabled: true
Webhook: Enabled: false
}
}
// The ZITADEL config under secretConfig is written to a Kubernetes Secret
// See all defaults here:
// https://github.com/zitadel/zitadel/blob/main/cmd/defaults.yaml
secretConfig: null
// Annotations set on secretConfig secret
secretConfigAnnotations: {
"helm.sh/hook": "pre-install,pre-upgrade"
"helm.sh/hook-delete-policy": "before-hook-creation"
"helm.sh/hook-weight": "0"
}
// Reference the name of a secret that contains ZITADEL configuration.
configSecretName: null
// The key under which the ZITADEL configuration is located in the secret.
configSecretKey: "config-yaml"
// ZITADEL uses the masterkey for symmetric encryption.
// You can generate it for example with tr -dc A-Za-z0-9 </dev/urandom | head -c 32
masterkey: ""
// Reference the name of the secret that contains the masterkey. The key should be named "masterkey".
// Note: Either zitadel.masterkey or zitadel.masterkeySecretName must be set
masterkeySecretName: string | *""
// Annotations set on masterkey secret
masterkeyAnnotations: {
"helm.sh/hook": "pre-install,pre-upgrade"
"helm.sh/hook-delete-policy": "before-hook-creation"
"helm.sh/hook-weight": "0"
}
// The CA Certificate needed for establishing secure database connections
dbSslCaCrt: ""
// The Secret containing the CA certificate at key ca.crt needed for establishing secure database connections
dbSslCaCrtSecret: string | *""
// The db admins secret containing the client certificate and key at tls.crt and tls.key needed for establishing secure database connections
dbSslAdminCrtSecret: string | *""
// The db users secret containing the client certificate and key at tls.crt and tls.key needed for establishing secure database connections
dbSslUserCrtSecret: string | *""
// Generate a self-signed certificate using an init container
// This will also mount the generated files to /etc/tls/ so that you can reference them in the pod.
// E.G. KeyPath: /etc/tls/tls.key CertPath: /etc/tls/tls.crt
// By default, the SAN DNS names include localhost, the pod IP address, and the pod name. You may include one more by using additionalDnsName like "my.zitadel.fqdn".
selfSignedCert: {
enabled: false
additionalDnsName: null
}
}
replicaCount: 3
image: {
repository: "ghcr.io/zitadel/zitadel"
pullPolicy: "IfNotPresent"
// Overrides the image tag whose default is the chart appVersion.
tag: ""
}
chownImage: {
repository: "alpine"
pullPolicy: "IfNotPresent"
tag: "3.19"
}
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
// Annotations to add to the deployment
annotations: {}
// Annotations to add to the configMap
configMap: {
annotations: {
"helm.sh/hook": "pre-install,pre-upgrade"
"helm.sh/hook-delete-policy": "before-hook-creation"
"helm.sh/hook-weight": "0"
}
}
serviceAccount: {
// Specifies whether a service account should be created
create: true
// Annotations to add to the service account
annotations: {
"helm.sh/hook": "pre-install,pre-upgrade"
"helm.sh/hook-delete-policy": "before-hook-creation"
"helm.sh/hook-weight": "0"
}
// The name of the service account to use.
// If not set and create is true, a name is generated using the fullname template
name: ""
}
podAnnotations: {}
podAdditionalLabels: {}
podSecurityContext: {
runAsNonRoot: true
runAsUser: 1000
}
securityContext: {}
// Additional environment variables
env: [...]
// - name: ZITADEL_DATABASE_POSTGRES_HOST
// valueFrom:
// secretKeyRef:
// name: postgres-pguser-postgres
// key: host
service: {
type: "ClusterIP"
// If service type is "ClusterIP", this can optionally be set to a fixed IP address.
clusterIP: ""
port: 8080
protocol: "http2"
annotations: {}
scheme: "HTTP"
}
ingress: {
enabled: false
className: ""
annotations: {}
hosts: [{
host: "localhost"
paths: [{
path: "/"
pathType: "Prefix"
}]
}]
tls: []
}
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
topologySpreadConstraints: []
initJob: {
// Once ZITADEL is installed, the initJob can be disabled.
enabled: true
annotations: {
"helm.sh/hook": "pre-install,pre-upgrade"
"helm.sh/hook-delete-policy": "before-hook-creation"
"helm.sh/hook-weight": "1"
}
resources: {}
backoffLimit: 5
activeDeadlineSeconds: 300
extraContainers: []
podAnnotations: {}
// Available init commands:
// "": initialize the ZITADEL instance (without skipping anything)
// database: initialize only the database
// grant: set ALL grant to user
// user: initialize only the database user
// zitadel: initialize ZITADEL internals (skip "create user" and "create database")
command: ""
}
setupJob: {
annotations: {
"helm.sh/hook": "pre-install,pre-upgrade"
"helm.sh/hook-delete-policy": "before-hook-creation"
"helm.sh/hook-weight": "2"
}
resources: {}
activeDeadlineSeconds: 300
extraContainers: []
podAnnotations: {}
additionalArgs: ["--init-projections=true"]
machinekeyWriter: {
image: {
repository: "bitnami/kubectl"
tag: ""
}
resources: {}
}
}
readinessProbe: {
enabled: true
initialDelaySeconds: 0
periodSeconds: 5
failureThreshold: 3
}
livenessProbe: {
enabled: true
initialDelaySeconds: 0
periodSeconds: 5
failureThreshold: 3
}
startupProbe: {
enabled: true
periodSeconds: 1
failureThreshold: 30
}
metrics: {
enabled: false
serviceMonitor: {
// If true, the chart creates a ServiceMonitor that is compatible with Prometheus Operator
// https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.ServiceMonitor.
// The Prometheus community Helm chart installs this operator
// https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#kube-prometheus-stack
enabled: false
honorLabels: false
honorTimestamps: true
}
}
pdb: {
enabled: false
// these values are used for the PDB and are mutually exclusive
minAvailable: 1
// maxUnavailable: 1
annotations: {}
}
}

View File

@@ -0,0 +1,89 @@
package holos
#Values: {
// Database credentials
// Refer to https://access.crunchydata.com/documentation/postgres-operator/5.2.0/architecture/user-management/
// Refer to https://zitadel.com/docs/self-hosting/manage/database#postgres
env: [
// Connection
{
name: "ZITADEL_DATABASE_POSTGRES_HOST"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)"
valueFrom: secretKeyRef: key: "pgbouncer-host"
},
{
name: "ZITADEL_DATABASE_POSTGRES_PORT"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)"
valueFrom: secretKeyRef: key: "pgbouncer-port"
},
{
name: "ZITADEL_DATABASE_POSTGRES_DATABASE"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)"
valueFrom: secretKeyRef: key: "dbname"
},
// The <db>-pguser-<db> secret contains creds for the unprivileged zitadel user
{
name: "ZITADEL_DATABASE_POSTGRES_USER_USERNAME"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)"
valueFrom: secretKeyRef: key: "user"
},
{
name: "ZITADEL_DATABASE_POSTGRES_USER_PASSWORD"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)"
valueFrom: secretKeyRef: key: "password"
},
// The postgres component configures privileged postgres user creds.
{
name: "ZITADEL_DATABASE_POSTGRES_ADMIN_USERNAME"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)-admin"
valueFrom: secretKeyRef: key: "user"
},
{
name: "ZITADEL_DATABASE_POSTGRES_ADMIN_PASSWORD"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)-admin"
valueFrom: secretKeyRef: key: "password"
},
// CA Cert issued by PGO which issued the pgbouncer tls cert
{
name: "ZITADEL_DATABASE_POSTGRES_USER_SSL_ROOTCERT"
value: "/\(_PGBouncer)/ca.crt"
},
{
name: "ZITADEL_DATABASE_POSTGRES_ADMIN_SSL_ROOTCERT"
value: "/\(_PGBouncer)/ca.crt"
},
]
// Refer to https://zitadel.com/docs/self-hosting/manage/database
zitadel: {
// Zitadel master key
masterkeySecretName: "zitadel-masterkey"
// dbSslCaCrtSecret: "pgo-root-cacert"
// All settings: https://zitadel.com/docs/self-hosting/manage/configure#runtime-configuration-file
// Helm interface: https://github.com/zitadel/zitadel-charts/blob/zitadel-7.4.0/charts/zitadel/values.yaml#L20-L21
configmapConfig: {
// NOTE: You can change the ExternalDomain, ExternalPort and ExternalSecure
// configuration options at any time. However, for ZITADEL to be able to
// pick up the changes, you need to rerun ZITADEL's setup phase. Do so with
// kubectl delete job zitadel-setup, then re-apply the new config.
//
// https://zitadel.com/docs/self-hosting/manage/custom-domain
ExternalSecure: true
ExternalDomain: #ExternalDomain
ExternalPort: 443
TLS: Enabled: false
// Database connection credentials are injected via environment variables from the db-pguser-db secret.
Database: postgres: {
MaxOpenConns: 25
MaxIdleConns: 10
MaxConnLifetime: "1h"
MaxConnIdleTime: "5m"
// verify-full verifies the host name matches the cert DNS names in addition to the root CA signature
User: SSL: Mode: "verify-full"
Admin: SSL: Mode: "verify-full"
}
}
}
}

View File

@@ -0,0 +1,102 @@
package holos
import "encoding/yaml"
let Name = "zitadel"
#InputKeys: component: Name
#DependsOn: postgres: _
// Upstream helm chart doesn't specify the namespace field for all resources.
#Kustomization: spec: targetNamespace: #TargetNamespace
#HelmChart & {
namespace: #TargetNamespace
chart: {
name: Name
version: "7.9.0"
repository: {
name: Name
url: "https://charts.zitadel.com"
}
}
values: #Values
apiObjects: {
ExternalSecret: "zitadel-masterkey": _
VirtualService: "\(Name)": {
metadata: name: Name
metadata: namespace: #TargetNamespace
spec: hosts: ["login.\(#Platform.org.domain)"]
spec: gateways: ["istio-ingress/default"]
spec: http: [{route: [{destination: host: Name}]}]
}
}
}
// TODO: Generalize this common pattern of injecting the istio sidecar into a Deployment
let IstioInject = [{op: "add", path: "/spec/template/metadata/labels/sidecar.istio.io~1inject", value: "true"}]
_PGBouncer: "pgbouncer"
let DatabaseCACertPatch = [
{
op: "add"
path: "/spec/template/spec/volumes/-"
value: {
name: _PGBouncer
secret: {
secretName: "\(_DBName)-pgbouncer"
items: [{key: "pgbouncer-frontend.ca-roots", path: "ca.crt"}]
}
}
},
{
op: "add"
path: "/spec/template/spec/containers/0/volumeMounts/-"
value: {
name: _PGBouncer
mountPath: "/" + _PGBouncer
}
},
]
#Kustomize: {
patches: [
{
target: {
group: "apps"
version: "v1"
kind: "Deployment"
name: Name
}
patch: yaml.Marshal(IstioInject)
},
{
target: {
group: "apps"
version: "v1"
kind: "Deployment"
name: Name
}
patch: yaml.Marshal(DatabaseCACertPatch)
},
{
target: {
group: "batch"
version: "v1"
kind: "Job"
name: "\(Name)-init"
}
patch: yaml.Marshal(DatabaseCACertPatch)
},
{
target: {
group: "batch"
version: "v1"
kind: "Job"
name: "\(Name)-setup"
}
patch: yaml.Marshal(DatabaseCACertPatch)
},
]
}
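
For reference, `yaml.Marshal(IstioInject)` renders the JSON 6902 patch below. The `~1` sequence is the JSON Pointer escape for `/` (RFC 6901), needed because the label key `sidecar.istio.io/inject` itself contains a slash:

```yaml
- op: add
  path: /spec/template/metadata/labels/sidecar.istio.io~1inject
  value: "true"
```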

View File

@@ -1,61 +0,0 @@
package holos
// Lets Encrypt certificate issuers for public tls certs
#InputKeys: component: "certissuers"
#TargetNamespace: "cert-manager"
let Name = "letsencrypt"
// The cloudflare api token is platform scoped, not cluster scoped.
#SecretName: "cloudflare-api-token-secret"
// Depends on cert manager
#DependsOn: _CertManager
#KubernetesObjects & {
apiObjects: {
ClusterIssuer: {
letsencrypt: #ClusterIssuer & {
metadata: name: Name
spec: {
acme: {
email: #Platform.org.contact.email
server: "https://acme-v02.api.letsencrypt.org/directory"
privateKeySecretRef: name: Name + "-istio"
solvers: [{http01: ingress: class: "istio"}]
}
}
}
letsencryptStaging: #ClusterIssuer & {
metadata: name: Name + "-staging"
spec: {
acme: {
email: #Platform.org.contact.email
server: "https://acme-staging-v02.api.letsencrypt.org/directory"
privateKeySecretRef: name: Name + "-staging-istio"
solvers: [{http01: ingress: class: "istio"}]
}
}
}
letsencryptDns: #ClusterIssuer & {
metadata: name: Name + "-dns"
spec: {
acme: {
email: #Platform.org.contact.email
server: "https://acme-v02.api.letsencrypt.org/directory"
privateKeySecretRef: name: Name + "-istio"
solvers: [{
dns01: cloudflare: {
email: #Platform.org.cloudflare.email
apiTokenSecretRef: name: #SecretName
apiTokenSecretRef: key: "api_token"
}}]
}
}
}
}
ExternalSecret: "\(#SecretName)": #ExternalSecret & {
_name: #SecretName
}
}
}

View File

@@ -1,25 +0,0 @@
package holos
// https://cert-manager.io/docs/
#TargetNamespace: "cert-manager"
#InputKeys: {
component: "certmanager"
service: "cert-manager"
}
#HelmChart & {
values: #UpstreamValues & {
installCRDs: true
}
namespace: #TargetNamespace
chart: {
name: "cert-manager"
version: "1.14.3"
repository: {
name: "jetstack"
url: "https://charts.jetstack.io"
}
}
}

View File

@@ -0,0 +1,35 @@
package holos

// The primary istio Gateway, named default
let Name = "gateway"

#InputKeys: component: Name
#TargetNamespace: "istio-ingress"
#DependsOn: _IngressGateway

let LoginCert = #PlatformCerts.login

#KubernetesObjects & {
    apiObjects: {
        ExternalSecret: login: #ExternalSecret & {
            _name: "login"
        }
        Gateway: default: #Gateway & {
            metadata: name:      "default"
            metadata: namespace: #TargetNamespace
            spec: selector: istio: "ingressgateway"
            spec: servers: [
                {
                    hosts: [for dnsName in LoginCert.spec.dnsNames {"prod-iam-zitadel/\(dnsName)"}]
                    port: name:     "https-prod-iam-login"
                    port: number:   443
                    port: protocol: "HTTPS"
                    tls: credentialName: LoginCert.spec.secretName
                    tls: mode:           "SIMPLE"
                },
            ]
        }
    }
}
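The default Gateway above renders to roughly the following Istio resource. This is a sketch: the hostname and credential name are hypothetical stand-ins for the values carried by `LoginCert` (`#PlatformCerts.login`).

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: default
  namespace: istio-ingress
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    # One entry per LoginCert.spec.dnsNames, scoped to the zitadel namespace.
    - prod-iam-zitadel/login.example.com  # hypothetical dnsName
    port:
      name: https-prod-iam-login
      number: 443
      protocol: HTTPS
    tls:
      credentialName: login  # hypothetical LoginCert.spec.secretName
      mode: SIMPLE
```

Prefixing each host with `prod-iam-zitadel/` restricts the server entry so only VirtualServices in that namespace may bind to it, rather than the default of any namespace.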


@@ -1,32 +1,72 @@
package holos

let Name = "httpbin"
let Host = Name + "." + #ClusterDomain
let SecretName = #InputKeys.cluster + "-" + Name
let MatchLabels = {app: Name} & #SelectorLabels
let Metadata = {
    name:      Name
    namespace: #TargetNamespace
    labels: app: Name
}

#InputKeys: component: Name
#TargetNamespace: "istio-ingress"
#DependsOn: _IngressGateway
#Metadata: namespace: #TargetNamespace

SecretName: #InputKeys.cluster + "-" + Name

let Cert = #PlatformCerts[SecretName]

#KubernetesObjects & {
    apiObjects: {
        Certificate: {
            httpbin: #Certificate & {
                metadata: {
                    #Metadata
                    name: SecretName
                }
                spec: {
                    commonName: Host
                    dnsNames: [Host]
                    secretName: SecretName
                    issuerRef: kind: "ClusterIssuer"
                    issuerRef: name: "letsencrypt"
                }
        ExternalSecret: "\(Cert.spec.secretName)": _
        Deployment: httpbin: #Deployment & {
            metadata: Metadata
            spec: selector: matchLabels: MatchLabels
            spec: template: {
                metadata: labels: MatchLabels
                metadata: labels: #CommonLabels
                metadata: labels: #IstioSidecar
                spec: securityContext: seccompProfile: type: "RuntimeDefault"
                spec: containers: [{
                    name:  Name
                    image: "quay.io/holos/mccutchen/go-httpbin"
                    ports: [{containerPort: 8080}]
                    securityContext: {
                        seccompProfile: type: "RuntimeDefault"
                        allowPrivilegeEscalation: false
                        runAsNonRoot: true
                        runAsUser:    1337
                        runAsGroup:   1337
                        capabilities: drop: ["ALL"]
                    }
                }]
            }
        }
        Service: httpbin: #Service & {
            metadata: Metadata
            spec: selector: MatchLabels
            spec: ports: [
                {port: 80, targetPort: 8080, protocol: "TCP", name: "http"},
            ]
        }
        Gateway: httpbin: #Gateway & {
            metadata: Metadata
            spec: selector: istio: "ingressgateway"
            spec: servers: [
                {
                    hosts: [for host in Cert.spec.dnsNames {"\(#TargetNamespace)/\(host)"}]
                    port: name:     "https-\(#InstanceName)"
                    port: number:   443
                    port: protocol: "HTTPS"
                    tls: credentialName: Cert.spec.secretName
                    tls: mode:           "SIMPLE"
                },
            ]
        }
        VirtualService: httpbin: #VirtualService & {
            metadata: Metadata
            spec: hosts: [for host in Cert.spec.dnsNames {host}]
            spec: gateways: ["\(#TargetNamespace)/\(Name)"]
            spec: http: [{route: [{destination: host: Name}]}]
        }
    }
}


@@ -63,7 +63,7 @@ let RedirectMetaName = {
// https-redirect
_APIObjects: {
Gateway: {
httpsRedirect: #Gateway & {
"\(RedirectMetaName.name)": #Gateway & {
metadata: RedirectMetaName
spec: selector: GatewayLabels
spec: servers: [{
@@ -79,7 +79,7 @@ _APIObjects: {
}
}
VirtualService: {
httpsRedirect: #VirtualService & {
"\(RedirectMetaName.name)": #VirtualService & {
metadata: RedirectMetaName
spec: hosts: ["*"]
spec: gateways: [RedirectMetaName.name]


@@ -526,7 +526,7 @@ package holos
base: {
// For istioctl usage to disable istio config crds in base
enableIstioConfigCRDs: true
enableIstioConfigCRDs: *true | false
// If enabled, gateway-api types will be validated using the standard upstream validation logic.
// This is an alternative to deploying the standalone validation server the project provides.


@@ -16,8 +16,8 @@ package holos
remotePilotAddress: ""
}
base: {
// Include the CRDs in the helm template output
enableCRDTemplates: true
// holos includes crd templates with the --include-crds helm flag.
enableCRDTemplates: false
// Validation webhook configuration url
// For example: https://$remotePilotAddress:15017/validate
validationURL: ""


@@ -0,0 +1,6 @@
package holos
#DependsOn: Namespaces: name: "prod-secrets-namespaces"
#DependsOn: CRDS: name: "\(#InstancePrefix)-crds"
#InputKeys: component: "controller"
{} & #KustomizeBuild


@@ -0,0 +1,21 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: postgres-operator

labels:
  - includeTemplates: true
    pairs:
      app.kubernetes.io/name: pgo
      # The version below should match the version on the PostgresCluster CRD
      app.kubernetes.io/version: 5.5.1
      postgres-operator.crunchydata.com/control-plane: postgres-operator

resources:
- ./rbac/cluster
- ./manager

images:
- name: postgres-operator
  newName: registry.developers.crunchydata.com/crunchydata/postgres-operator
  newTag: ubi8-5.5.1-0
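Because the label transformer sets `includeTemplates: true`, the pairs above are applied both to each resource's `metadata.labels` and to the pod template labels inside workloads (but not to selectors, since `includeSelectors` is left at its default of false). The operator Deployment should therefore come out of `kustomize build` looking roughly like this excerpt (a sketch, not the full rendered manifest):

```yaml
# Excerpt of the pgo Deployment after kustomize build (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgo
  namespace: postgres-operator
  labels:
    app.kubernetes.io/name: pgo
    app.kubernetes.io/version: 5.5.1
    postgres-operator.crunchydata.com/control-plane: postgres-operator
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/name: pgo
        app.kubernetes.io/version: 5.5.1
        postgres-operator.crunchydata.com/control-plane: postgres-operator
```

Leaving selectors untouched matters here: the Deployment's `matchLabels` stays pinned to the control-plane label alone, so bumping `app.kubernetes.io/version` on upgrade does not invalidate the immutable selector.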


@@ -0,0 +1,5 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- manager.yaml


@@ -0,0 +1,56 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgo
  labels:
    postgres-operator.crunchydata.com/control-plane: postgres-operator
spec:
  replicas: 1
  strategy: { type: Recreate }
  selector:
    matchLabels:
      postgres-operator.crunchydata.com/control-plane: postgres-operator
  template:
    metadata:
      labels:
        postgres-operator.crunchydata.com/control-plane: postgres-operator
    spec:
      containers:
      - name: operator
        image: postgres-operator
        env:
        - name: PGO_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CRUNCHY_DEBUG
          value: "true"
        - name: RELATED_IMAGE_POSTGRES_15
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-15.6-0"
        - name: RELATED_IMAGE_POSTGRES_15_GIS_3.3
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-15.6-3.3-0"
        - name: RELATED_IMAGE_POSTGRES_16
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.2-0"
        - name: RELATED_IMAGE_POSTGRES_16_GIS_3.3
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-16.2-3.3-0"
        - name: RELATED_IMAGE_POSTGRES_16_GIS_3.4
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-16.2-3.4-0"
        - name: RELATED_IMAGE_PGADMIN
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-pgadmin4:ubi8-4.30-22"
        - name: RELATED_IMAGE_PGBACKREST
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.49-0"
        - name: RELATED_IMAGE_PGBOUNCER
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-pgbouncer:ubi8-1.21-3"
        - name: RELATED_IMAGE_PGEXPORTER
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres-exporter:ubi8-0.15.0-3"
        - name: RELATED_IMAGE_PGUPGRADE
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-upgrade:ubi8-5.5.1-0"
        - name: RELATED_IMAGE_STANDALONE_PGADMIN
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-pgadmin4:ubi8-7.8-3"
        securityContext:
          allowPrivilegeEscalation: false
          capabilities: { drop: [ALL] }
          readOnlyRootFilesystem: true
          runAsNonRoot: true
      serviceAccountName: pgo


@@ -0,0 +1,7 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- service_account.yaml
- role.yaml
- role_binding.yaml


@@ -0,0 +1,146 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: postgres-operator
rules:
- apiGroups:
  - ''
  resources:
  - configmaps
  - persistentvolumeclaims
  - secrets
  - services
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - ''
  resources:
  - endpoints
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - watch
- apiGroups:
  - ''
  resources:
  - endpoints/restricted
  - pods/exec
  verbs:
  - create
- apiGroups:
  - ''
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - ''
  resources:
  - pods
  verbs:
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - ''
  resources:
  - serviceaccounts
  verbs:
  - create
  - get
  - list
  - patch
  - watch
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - jobs
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - postgres-operator.crunchydata.com
  resources:
  - pgadmins
  - pgupgrades
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - postgres-operator.crunchydata.com
  resources:
  - pgadmins/finalizers
  - pgupgrades/finalizers
  - postgresclusters/finalizers
  verbs:
  - update
- apiGroups:
  - postgres-operator.crunchydata.com
  resources:
  - pgadmins/status
  - pgupgrades/status
  - postgresclusters/status
  verbs:
  - patch
- apiGroups:
  - postgres-operator.crunchydata.com
  resources:
  - postgresclusters
  verbs:
  - get
  - list
  - patch
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - rolebindings
  - roles
  verbs:
  - create
  - get
  - list
  - patch
  - watch


@@ -0,0 +1,14 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: postgres-operator
  labels:
    postgres-operator.crunchydata.com/control-plane: postgres-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: postgres-operator
subjects:
- kind: ServiceAccount
  name: pgo


@@ -0,0 +1,7 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pgo
  labels:
    postgres-operator.crunchydata.com/control-plane: postgres-operator


@@ -0,0 +1,7 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- service_account.yaml
- role.yaml
- role_binding.yaml


@@ -0,0 +1,146 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: postgres-operator
rules:
- apiGroups:
  - ''
  resources:
  - configmaps
  - persistentvolumeclaims
  - secrets
  - services
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - ''
  resources:
  - endpoints
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - watch
- apiGroups:
  - ''
  resources:
  - endpoints/restricted
  - pods/exec
  verbs:
  - create
- apiGroups:
  - ''
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - ''
  resources:
  - pods
  verbs:
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - ''
  resources:
  - serviceaccounts
  verbs:
  - create
  - get
  - list
  - patch
  - watch
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - jobs
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - postgres-operator.crunchydata.com
  resources:
  - pgadmins
  - pgupgrades
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - postgres-operator.crunchydata.com
  resources:
  - pgadmins/finalizers
  - pgupgrades/finalizers
  - postgresclusters/finalizers
  verbs:
  - update
- apiGroups:
  - postgres-operator.crunchydata.com
  resources:
  - pgadmins/status
  - pgupgrades/status
  - postgresclusters/status
  verbs:
  - patch
- apiGroups:
  - postgres-operator.crunchydata.com
  resources:
  - postgresclusters
  verbs:
  - get
  - list
  - patch
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - rolebindings
  - roles
  verbs:
  - create
  - get
  - list
  - patch
  - watch


@@ -0,0 +1,14 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: postgres-operator
  labels:
    postgres-operator.crunchydata.com/control-plane: postgres-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: postgres-operator
subjects:
- kind: ServiceAccount
  name: pgo


@@ -0,0 +1,7 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pgo
  labels:
    postgres-operator.crunchydata.com/control-plane: postgres-operator


@@ -0,0 +1,7 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- bases/postgres-operator.crunchydata.com_postgresclusters.yaml
- bases/postgres-operator.crunchydata.com_pgupgrades.yaml
- bases/postgres-operator.crunchydata.com_pgadmins.yaml


@@ -0,0 +1,6 @@
package holos
// Refer to https://github.com/CrunchyData/postgres-operator-examples/tree/main/kustomize/install/crd
#InputKeys: component: "crds"
{} & #KustomizeBuild


@@ -0,0 +1,4 @@
package holos
// Crunchy Postgres Operator
#InputKeys: project: "pgo"

Some files were not shown because too many files have changed in this diff.