Compare commits


25 Commits

Author SHA1 Message Date
Jeff McCune
104bda459f (#69) Go Types for CUE/Holos API contract
This patch refactors the Go structs used to decode CUE output for
processing by the holos cli.  For context, the purpose of the structs
is to inform holos how the data from CUE should be modeled and
processed as a rendering pipeline that provides rendered yaml to
configure kubernetes api objects.

The structs share common fields in the form of the HolosComponent
embedded struct.  The three main holos component kinds today are:

 1. KubernetesObjects - CUE outputs a nested map where each value is a
    single rendered api object (resource).
 2. HelmChart - CUE outputs the chart name and values.  Holos calls helm
    template to render the chart.  Additional api objects may be
    overlaid into the rendered output.  Kustomize may also optionally be
    called at the end of the render pipeline.
 3. KustomizeBuild - CUE outputs data to construct a kustomize build.
    The holos component contains raw yaml files to use as kustomization
    resources.  CUE optionally defines additional patches, common
    labels, etc.

With the Go structs in place, CUE may import the definitions directly,
making it easier to keep the CUE definitions in sync with what the
holos cli expects to receive.

The holos component types may be imported into cue using:

    cue get go github.com/holos-run/holos/api/v1alpha1/...
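
Once vendored, a component written in CUE can unify with the imported
definitions so `cue vet` catches drift between the CUE code and what
the holos cli expects.  A minimal sketch, assuming the import path
above; the component name and field values are illustrative:

```
package holos

import v1 "github.com/holos-run/holos/api/v1alpha1"

// Unify a concrete component with the imported schema.
component: v1.#KubernetesObjects & {
	apiVersion: "holos.run/v1alpha1"
	kind:       "KubernetesObjects"
	metadata: name: "example"
}
```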
2024-03-20 17:21:10 -07:00
Jeff McCune
bd2effa183 (#61) Improve ks prod-iam-zitadel robustness with flux health checks
Without this patch ks/prod-iam-zitadel often gets blocked waiting for
jobs that will never complete.  In addition, flux should not manage the
zitadel-test-connection Pod, which is an unnecessary artifact of the
upstream helm chart.

We'd disable helm hooks, but they're necessary to create the init and
setup jobs.

This patch also changes the default behavior of Kustomizations from
`wait: true` to `wait: false`.  Waiting is expensive for the api server
and slows down the reconciliation process considerably.

Component authors should use ks.spec.healthChecks to target specific
important resources to watch and wait for.
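
In CUE a component might opt in along these lines.  A hedged sketch,
assuming `ks` unifies into the flux Kustomization resource; the
healthChecks entry fields are the standard flux ones:

```
// Skip the blanket wait and watch only the resources that matter.
ks: spec: {
	wait: false
	healthChecks: [{
		apiVersion: "apps/v1"
		kind:       "Deployment"
		name:       "zitadel"
		namespace:  "prod-iam-zitadel"
	}]
}
```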
2024-03-15 15:56:43 -07:00
Jeff McCune
562412fbe7 (#57) Run gha-rs scale set only on the primary cluster
This patch fixes the problem of the actions runner scale set listener
pod failing every 3 seconds.  See
https://github.com/actions/actions-runner-controller/issues/3351

The solution is not ideal: if the primary cluster is down, workflows
will not execute.  The primary cluster shouldn't go down, though, so
this is the trade-off: lower log spam and resource usage, by
eliminating the failing pods on other clusters, in exchange for lower
availability if the primary cluster is unavailable.

We could let the pods loop so that if the primary became unavailable
another would quickly pick up the role, but it doesn't seem worth it.
2024-03-15 13:13:25 -07:00
Jeff McCune
fd6fbe5598 (#57) Allow gha-rs scale set to fail on all but one cluster
The effect of this patch is limited to refreshing credentials only for
namespaces that exist in the local cluster.  There is structure in place
in the CUE code to allow for namespaces bound to specific clusters, but
this is used only by the optional Vault component.

This patch was an attempt to work around
https://github.com/actions/actions-runner-controller/issues/3351 by
deploying the runner scale sets into unique namespaces.

This effort was a waste of time: only one listener pod successfully
registered for a given scale set name / group combination.

Because we have only one group, named Default, we can only have one
listener pod globally for a given scale set name.

Because we want our workflows to execute regardless of the availability
of a single cluster, we're going to let this fail for now.  The pod
retries every 3 seconds.  When a cluster is destroyed, another cluster
will quickly register.

A follow-up patch will look at expanding this retry behavior.
2024-03-15 12:53:16 -07:00
Jeff McCune
67472e1e1c (#60) Disable flux reconciliation of deployment/zitadel on standby clusters 2024-03-14 21:58:32 -07:00
Jeff McCune
d64c3e8c66 (#58) Zitadel Failover RunBook 2024-03-14 15:25:38 -07:00
Jeff McCune
f344f97374 (#58) Restore last zitadel database backup
When the cluster is provisioned, restore the most recent backup instead
of a fixed point in time.
2024-03-14 11:40:17 -07:00
Jeff McCune
770088b912 (#53) Clean up nested if statements with && 2024-03-13 10:35:20 -07:00
Jeff McCune
cb9b39c3ca (#53) Add Vault as an optional service on the core clusters
This patch migrates the vault component from [holos-infra][1] to a
CUE-based component.  Vault is optional in the reference platform, so
this patch also defines an `#OptionalServices` struct to conditionally
manage a service across multiple clusters in the platform.

The primary use case for optional services is managing a namespace to
provision and provide secrets across clusters.

[1]: https://github.com/holos-run/holos-infra/tree/v0.5.0/components/core/core/vault
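
A hedged sketch of the shape such a struct could take; everything here
except the `#OptionalServices` name is illustrative:

```
// Declare optional services once, then enable them per platform.
#OptionalServices: [Name=string]: {
	name:     Name
	enabled:  bool | *false
	clusters: [...string]
}

#OptionalServices: vault: {
	enabled: true
	clusters: ["core1", "core2"]
}
```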
2024-03-12 17:18:38 -07:00
Jeff McCune
0f34b20546 (#54) Disable helm hooks when rendering components
Pods are unnecessarily created when deploying helm-based holos
components and often fail.  Prevent these test pods by disabling helm
hooks with the `--no-hooks` flag.
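
The `#HelmChart` schema elsewhere in this compare exposes this as
`enableHooks`, so a component whose chart genuinely depends on hooks
can opt back in.  An illustrative sketch:

```
// Hooks are skipped (--no-hooks) unless the component opts in.
chart: #HelmChart & {
	enableHooks: true // e.g. a chart whose init Jobs are hook driven
}
```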

Closes: #54
2024-03-12 14:14:20 -07:00
Jeff McCune
0d7bbbb659 (#48) Disable pg spec.dataSource for standby cluster
Problem:
The standby cluster on k2 fails to start.  A pgbackrest pod first
restores the database from S3, then the pgha nodes try to replay the WAL
as part of the standby initialization process.  This fails because the
PGDATA directory is not empty.

Solution:
Specify the spec.dataSource field only when the cluster is configured as
a primary cluster.
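
In CUE the guard might look like this; a sketch, where `primary` and
the repo details are assumed names for illustration:

```
primary: bool | *false

spec: {
	if primary {
		// Only the primary restores from S3; standbys replay the WAL.
		dataSource: pgbackrest: {
			stanza: "db"
			repo: name: "repo2"
		}
	}
}
```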

Result:
Non-primary clusters are standbys; they skip the pgbackrest job that
restores from S3 and move straight to patroni replaying the WAL from S3
in the pgha pods.

One of the two pgha pods becomes the "standby leader" and restores the
WAL from S3.  The other is a cascading standby and then restores the
same WAL from the standby leader.

After 8 minutes both pods are ready.

```
❯ k get pods
NAME                               READY   STATUS    RESTARTS   AGE
zitadel-pgbouncer-d9f8cffc-j469g   2/2     Running   0          11m
zitadel-pgbouncer-d9f8cffc-xq29g   2/2     Running   0          11m
zitadel-pgha1-27w7-0               4/4     Running   0          11m
zitadel-pgha1-c5qj-0               4/4     Running   0          11m
zitadel-repo-host-0                2/2     Running   0          11m
```
2024-03-11 17:56:47 -07:00
Jeff McCune
3f3e36bbe9 (#48) Split workload into foundation and accounts
Problem:
The k3 and k4 clusters are getting the Zitadel components, which are
really only intended for the core cluster pair.

Solution:
Split the workload subtree into two, named foundation and accounts.  The
core cluster pair gets foundation+accounts while the kX clusters get
just the foundation subtree.

Result:
prod-iam-zitadel is no longer managed on k3 and k4
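
One hedged way to express the split in CUE; the subtree names mirror
the prose above, the structure itself is illustrative:

```
// Core clusters get both subtrees; kX clusters get only foundation.
#Workloads: [Name=string]: subtrees: [...string]
#Workloads: {
	core2: subtrees: ["foundation", "accounts"]
	k3:    subtrees: ["foundation"]
	k4:    subtrees: ["foundation"]
}
```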
2024-03-11 15:20:35 -07:00
Jeff McCune
9f41478d33 (#48) Restore from Monday morning after Gary and Nate registered
Set the restore point to `time="2024-03-11T17:08:58Z" level=info
msg="crunchy-pgbackrest ends"`, which is just after Gary and Nate
registered and were granted the cluster-admin role.
2024-03-11 10:18:45 -07:00
Jeff McCune
b86fee04fc (#48) v0.55.4 to rebuild k3, k4, k5 2024-03-11 08:48:07 -07:00
Jeff McCune
c78da6949f Merge pull request #51 from holos-run/jeff/48-zitadel-backups
(#48) Custom PGO Certs for Zitadel
2024-03-10 23:08:29 -07:00
Jeff McCune
7b215bb8f1 (#48) Custom PGO Certs for Zitadel
The [Streaming Standby][standby] architecture requires custom tls certs
for two clusters in two regions to connect to each other.

This patch manages the custom certs following the configuration
described in the article [Using Cert Manager to Deploy TLS for Postgres
on Kubernetes][article].

NOTE: One thing not mentioned anywhere in the crunchy documentation is
how custom tls certs work with pgbouncer.  The pgbouncer service uses a
tls certificate issued by the pgo root cert, not by the custom
certificate authority.

For this reason, we use kustomize to patch the zitadel Deployment and
the zitadel-init and zitadel-setup Jobs.  The patch projects the ca
bundle from the `zitadel-pgbouncer` secret into the zitadel pods at
`/pgbouncer/ca.crt`.

[standby]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery#streaming-standby-with-an-external-repo
[article]: https://www.crunchydata.com/blog/using-cert-manager-to-deploy-tls-for-postgres-on-kubernetes
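
A hedged sketch of the projected volume the patch produces; the secret
key name follows PGO conventions and is an assumption here:

```
// Kustomize patch fragment: project the pgbouncer CA into the pods.
spec: template: spec: {
	volumes: [{
		name: "pgbouncer-ca"
		secret: {
			secretName: "zitadel-pgbouncer"
			// Assumed key; PGO stores the frontend CA under this name.
			items: [{key: "pgbouncer-frontend.ca-roots", path: "ca.crt"}]
		}
	}]
	containers: [{
		name: "zitadel"
		volumeMounts: [{name: "pgbouncer-ca", mountPath: "/pgbouncer"}]
	}]
}
```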
2024-03-10 22:54:06 -07:00
Jeff McCune
78cec76a96 (#48) Restore ZITADEL from point in time full backup
A full backup was taken using:

```
kubectl annotate postgrescluster zitadel postgres-operator.crunchydata.com/pgbackrest-backup="$(date)"
```

And completed with:

```
❯ k logs -f zitadel-backup-5r6v-v5jnm
time="2024-03-10T21:52:15Z" level=info msg="crunchy-pgbackrest starts"
time="2024-03-10T21:52:15Z" level=info msg="debug flag set to false"
time="2024-03-10T21:52:15Z" level=info msg="backrest backup command requested"
time="2024-03-10T21:52:15Z" level=info msg="command to execute is [pgbackrest backup --stanza=db --repo=2 --type=full]"
time="2024-03-10T21:55:18Z" level=info msg="crunchy-pgbackrest ends"
```

This patch verifies the point-in-time backup is robust in the face of
the following operations:

1. pg cluster zitadel was deleted (whole namespace emptied)
2. pg cluster zitadel was re-created _without_ a `dataSource`
3. pgo initialized a new database and backed up the blank database to
   S3.
4. pg cluster zitadel was deleted again.
5. pg cluster zitadel was re-created with `dataSource` `options: ["--type=time", "--target=\"2024-03-10 21:56:00+00\""]` (just after the full backup completed)
6. Restore completed successfully.
7. Applied the holos zitadel component.
8. Zitadel came up successfully and user login worked as expected.

- [x] Perform an in place [restore][restore] from [s3][bucket].
- [x] Set repo1-retention-full to clear warning

[restore]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#restore-properties
[bucket]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#cloud-based-data-source
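
For reference, the `dataSource` from step 5 corresponds to a spec
fragment along these lines; a sketch, with the stanza and repo names
assumed from the backup log above:

```
// Point-in-time restore to just after the full backup completed.
spec: dataSource: pgbackrest: {
	stanza: "db"
	repo: name: "repo2"
	options: [
		"--type=time",
		"--target=\"2024-03-10 21:56:00+00\"",
	]
}
```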
2024-03-10 17:42:54 -07:00
Jeff McCune
0e98ad2ecb (#48) Zitadel Backups
This patch configures backups suitable to support the [Streaming Standby
with an External Repo][0] architecture.

- [x] PGO [Multiple Backup Repositories][1] to k8s pv and s3.
- [x] [Encryption][2] of backups to S3.
- [x] [Remove SUPERUSER][3] role from zitadel-admin pg user to work with pgbouncer.  Resolves zitadel-init job failure.
- [x] Take a [Manual Backup][5]

[0]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery#streaming-standby-with-an-external-repo
[1]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups#set-up-multiple-backup-repositories
[2]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups#encryption
[3]: https://github.com/CrunchyData/postgres-operator/issues/3095#issuecomment-1904712211
[5]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backup-management#taking-a-one-off-backup
2024-03-10 16:38:56 -07:00
Jeff McCune
30bb3f183a (#50) Describe type as strings to match others 2024-03-10 11:29:19 -07:00
Jeff McCune
1369338f3c (#50) Add -n shorthand for --namespace for secrets
It's annoying that `holos get secret -n foo` doesn't work like
`kubectl get secret -n foo` does.
Closes: #50
2024-03-10 10:45:49 -07:00
Jeff McCune
ac03f64724 (#45) Configure ZITADEL to use pgbouncer 2024-03-09 09:44:33 -08:00
Jeff McCune
bea4468972 (#42) Remove cert manager db ca components
Simpler to let postgres manage the certs.  TLS is in verify-full mode
with the pgo-configured certs.
2024-03-08 21:34:26 -08:00
Jeff McCune
224adffa15 (#42) Add holos components for zitadel with postgres
To establish the canonical https://login.ois.run identity issuer on the
core cluster pair.

Custom resources for PGO have been imported with:

    timoni mod vendor crds -f deploy/clusters/core2/components/prod-pgo-crds/prod-pgo-crds.gen.yaml

Note: the zitadel tls connection took considerable effort to get
working.  We intentionally use pgo-issued certs to reduce the toil of
managing certs issued by cert manager.

The default tls configuration of pgo is pretty good, with verify-full
enabled.
2024-03-08 21:29:25 -08:00
Jeff McCune
b4d34ffdbc (#42) Fix incorrect ceph pool for core2 cluster
The core2 cluster cannot provision pvcs because it's using the k8s-dev
pool when it has credentials valid only for the k8s-prod pool.

This patch adds an entry to the platform cluster map to configure the
pool for each cluster, with a default of k8s-dev.
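
CUE defaults keep this map small.  A sketch consistent with the
description; only core2 overrides the default:

```
// Every cluster uses k8s-dev unless it overrides the pool.
#Clusters: [Name=string]: pool: string | *"k8s-dev"
#Clusters: core2: pool: "k8s-prod"
```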
2024-03-08 13:14:27 -08:00
Jeff McCune
a85db9cf5e (#42) Add KustomizeBuild holos component type to install pgo
PGO uses plain yaml and kustomize as the recommended installation
method.  Holos supports upstream by adding a new KustomizeBuild
component kind, which simply copies files into place and lets kustomize
handle the generation of the api objects.

CUE is responsible for very little in this kind of component, basically
allowing overlay resources if needed and deferring everything else to
the holos cli.

The holos cli in turn is responsible for executing kubectl kustomize
build on the input directory to produce the rendered output, then
writing the rendered output into place.
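
A minimal component of this kind reduces to little more than a kind and
a name; compare the test fixture later in this diff.  The metadata name
here is illustrative:

```
apiVersion: "holos.run/v1alpha1"
kind:       "KustomizeBuild"
metadata: name: "prod-pgo-crds"
```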
2024-03-08 11:27:42 -08:00
120 changed files with 33215 additions and 732 deletions


@@ -27,9 +27,10 @@ jobs:
run: sudo apt update && sudo apt -qq -y install curl zip unzip tar bzip2
- name: Set up Helm
uses: azure/setup-helm@v4.1.0
with:
version: 'latest'
uses: azure/setup-helm@v4
- name: Set up Kubectl
uses: azure/setup-kubectl@v3
- name: Test
run: ./scripts/test

api/v1alpha1/component.go Normal file

@@ -0,0 +1,12 @@
package v1alpha1

// HolosComponent defines the common fields for all holos components.
type HolosComponent struct {
	TypeMeta `json:",inline" yaml:",inline"`
	// Metadata represents the holos component name
	Metadata ObjectMeta `json:"metadata,omitempty" yaml:"metadata,omitempty"`
	// APIObjectMap holds the marshalled representation of api objects.
	APIObjectMap APIObjectMap `json:"apiObjectMap,omitempty" yaml:"apiObjectMap,omitempty"`
	// Kustomization holds the marshalled representation of the flux kustomization.
	Kustomization `json:",inline" yaml:",inline"`
}

api/v1alpha1/doc.go Normal file

@@ -0,0 +1,2 @@
// Package v1alpha1 defines the api boundary between CUE and Holos.
package v1alpha1

api/v1alpha1/helm.go Normal file

@@ -0,0 +1,163 @@
package v1alpha1

import (
	"context"
	"fmt"
	"github.com/holos-run/holos"
	"github.com/holos-run/holos/pkg/logger"
	"github.com/holos-run/holos/pkg/util"
	"github.com/holos-run/holos/pkg/wrapper"
	"os"
	"path/filepath"
	"strings"
)

const (
	HelmChartKind = "HelmChart"
	// ChartDir is the directory name created in the holos component directory to cache a chart.
	ChartDir = "vendor"
)

// A HelmChart represents a helm command to provide chart values in order to render kubernetes api objects.
type HelmChart struct {
	HolosComponent `json:",inline" yaml:",inline"`
	// Namespace is the namespace to install into. TODO: Use metadata.namespace instead.
	Namespace     string `json:"namespace"`
	Chart         Chart  `json:"chart"`
	ValuesContent string `json:"valuesContent"`
	EnableHooks   bool   `json:"enableHooks"`
}

type Chart struct {
	Name       string     `json:"name"`
	Version    string     `json:"version"`
	Release    string     `json:"release"`
	Repository Repository `json:"repository"`
}

type Repository struct {
	Name string `json:"name"`
	URL  string `json:"url"`
}

func (hc *HelmChart) Render(ctx context.Context, path holos.PathComponent) (*Result, error) {
	result := Result{
		TypeMeta:      hc.TypeMeta,
		Metadata:      hc.Metadata,
		Kustomization: hc.Kustomization,
	}
	if err := hc.helm(ctx, &result, path); err != nil {
		return nil, err
	}
	result.addObjectMap(ctx, hc.APIObjectMap)
	if err := result.kustomize(ctx); err != nil {
		return nil, wrapper.Wrap(fmt.Errorf("could not kustomize: %w", err))
	}
	return &result, nil
}

// helm provides the values produced by CUE to helm template and returns
// the rendered kubernetes api objects in the result.
func (hc *HelmChart) helm(ctx context.Context, r *Result, path holos.PathComponent) error {
	log := logger.FromContext(ctx).With("chart", hc.Chart.Name)
	if hc.Chart.Name == "" {
		log.WarnContext(ctx, "skipping helm: no chart name specified, use a different component type")
		return nil
	}

	cachedChartPath := filepath.Join(string(path), ChartDir, filepath.Base(hc.Chart.Name))
	if isNotExist(cachedChartPath) {
		// Add repositories
		repo := hc.Chart.Repository
		if repo.URL != "" {
			out, err := util.RunCmd(ctx, "helm", "repo", "add", repo.Name, repo.URL)
			if err != nil {
				log.ErrorContext(ctx, "could not run helm", "stderr", out.Stderr.String(), "stdout", out.Stdout.String())
				return wrapper.Wrap(fmt.Errorf("could not run helm repo add: %w", err))
			}
			// Update repository
			out, err = util.RunCmd(ctx, "helm", "repo", "update", repo.Name)
			if err != nil {
				log.ErrorContext(ctx, "could not run helm", "stderr", out.Stderr.String(), "stdout", out.Stdout.String())
				return wrapper.Wrap(fmt.Errorf("could not run helm repo update: %w", err))
			}
		} else {
			log.DebugContext(ctx, "no chart repository url proceeding assuming oci chart")
		}

		// Cache the chart
		if err := cacheChart(ctx, path, ChartDir, hc.Chart); err != nil {
			return fmt.Errorf("could not cache chart: %w", err)
		}
	}

	// Write values file
	tempDir, err := os.MkdirTemp("", "holos")
	if err != nil {
		return wrapper.Wrap(fmt.Errorf("could not make temp dir: %w", err))
	}
	defer util.Remove(ctx, tempDir)
	valuesPath := filepath.Join(tempDir, "values.yaml")
	if err := os.WriteFile(valuesPath, []byte(hc.ValuesContent), 0644); err != nil {
		return wrapper.Wrap(fmt.Errorf("could not write values: %w", err))
	}
	log.DebugContext(ctx, "helm: wrote values", "path", valuesPath, "bytes", len(hc.ValuesContent))

	// Run charts
	chart := hc.Chart
	args := []string{"template"}
	if !hc.EnableHooks {
		args = append(args, "--no-hooks")
	}
	namespace := hc.Namespace
	args = append(args, "--include-crds", "--values", valuesPath, "--namespace", namespace, "--kubeconfig", "/dev/null", "--version", chart.Version, chart.Release, cachedChartPath)
	helmOut, err := util.RunCmd(ctx, "helm", args...)
	if err != nil {
		stderr := helmOut.Stderr.String()
		lines := strings.Split(stderr, "\n")
		for _, line := range lines {
			if strings.HasPrefix(line, "Error:") {
				err = fmt.Errorf("%s: %w", line, err)
			}
		}
		return wrapper.Wrap(fmt.Errorf("could not run helm template: %w", err))
	}

	r.accumulatedOutput = helmOut.Stdout.String()
	return nil
}

// cacheChart stores a cached copy of Chart in the chart subdirectory of path.
func cacheChart(ctx context.Context, path holos.PathComponent, chartDir string, chart Chart) error {
	log := logger.FromContext(ctx)
	cacheTemp, err := os.MkdirTemp(string(path), chartDir)
	if err != nil {
		return wrapper.Wrap(fmt.Errorf("could not make temp dir: %w", err))
	}
	defer util.Remove(ctx, cacheTemp)

	chartName := chart.Name
	if chart.Repository.Name != "" {
		chartName = fmt.Sprintf("%s/%s", chart.Repository.Name, chart.Name)
	}
	helmOut, err := util.RunCmd(ctx, "helm", "pull", "--destination", cacheTemp, "--untar=true", "--version", chart.Version, chartName)
	if err != nil {
		return wrapper.Wrap(fmt.Errorf("could not run helm pull: %w", err))
	}
	log.Debug("helm pull", "stdout", helmOut.Stdout, "stderr", helmOut.Stderr)

	cachePath := filepath.Join(string(path), chartDir)
	if err := os.Rename(cacheTemp, cachePath); err != nil {
		return wrapper.Wrap(fmt.Errorf("could not rename: %w", err))
	}
	log.InfoContext(ctx, "cached", "chart", chart.Name, "version", chart.Version, "path", cachePath)
	return nil
}

func isNotExist(path string) bool {
	_, err := os.Stat(path)
	return os.IsNotExist(err)
}


@@ -0,0 +1,24 @@
package v1alpha1

import (
	"context"
	"github.com/holos-run/holos"
)

const KubernetesObjectsKind = "KubernetesObjects"

// KubernetesObjects represents CUE output which directly provides Kubernetes api objects to holos.
type KubernetesObjects struct {
	HolosComponent `json:",inline" yaml:",inline"`
}

// Render produces kubernetes api objects from the APIObjectMap.
func (o *KubernetesObjects) Render(ctx context.Context, path holos.PathComponent) (*Result, error) {
	result := Result{
		TypeMeta:      o.TypeMeta,
		Metadata:      o.Metadata,
		Kustomization: o.Kustomization,
	}
	result.addObjectMap(ctx, o.APIObjectMap)
	return &result, nil
}


@@ -0,0 +1,11 @@
package v1alpha1

// Kustomization holds the rendered flux kustomization api object content for git ops.
type Kustomization struct {
	// KsContent is the yaml representation of the flux kustomization for gitops.
	KsContent string `json:"ksContent,omitempty" yaml:"ksContent,omitempty"`
	// KustomizeFiles holds file contents for kustomize, e.g. patch files.
	KustomizeFiles FileContentMap `json:"kustomizeFiles,omitempty" yaml:"kustomizeFiles,omitempty"`
	// ResourcesFile is the file name used for api objects in kustomization.yaml
	ResourcesFile string `json:"resourcesFile,omitempty" yaml:"resourcesFile,omitempty"`
}


@@ -0,0 +1,37 @@
package v1alpha1

import (
	"context"
	"github.com/holos-run/holos"
	"github.com/holos-run/holos/pkg/logger"
	"github.com/holos-run/holos/pkg/util"
	"github.com/holos-run/holos/pkg/wrapper"
)

const KustomizeBuildKind = "KustomizeBuild"

// KustomizeBuild renders plain yaml files in the component directory using
// kubectl kustomize.
type KustomizeBuild struct {
	HolosComponent `json:",inline" yaml:",inline"`
}

// Render produces kubernetes api objects from the APIObjectMap.
func (kb *KustomizeBuild) Render(ctx context.Context, path holos.PathComponent) (*Result, error) {
	log := logger.FromContext(ctx)
	result := Result{
		TypeMeta:      kb.TypeMeta,
		Metadata:      kb.Metadata,
		Kustomization: kb.Kustomization,
	}
	// Run kustomize.
	kOut, err := util.RunCmd(ctx, "kubectl", "kustomize", string(path))
	if err != nil {
		log.ErrorContext(ctx, kOut.Stderr.String())
		return nil, wrapper.Wrap(err)
	}
	// Replace the accumulated output.
	result.accumulatedOutput = kOut.Stdout.String()
	// Add CUE based api objects.
	result.addObjectMap(ctx, kb.APIObjectMap)
	return &result, nil
}


@@ -0,0 +1,8 @@
package v1alpha1

// APIObjectMap is the shape of marshalled api objects returned from cue to the
// holos cli. A map is used to improve the clarity of error messages from cue.
type APIObjectMap map[string]map[string]string

// FileContentMap is a map of file names to file contents.
type FileContentMap map[string]string
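
Concretely the map nests kind, then an arbitrary name, then the
marshalled yaml, so an error from cue points at a path like
`apiObjects.Deployment.myapp` instead of a list index.  An illustrative
value in CUE (the Deployment and its name are made up):

```
apiObjectMap: Deployment: myapp: """
	apiVersion: apps/v1
	kind: Deployment
	metadata:
	  name: myapp
	"""
```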


@@ -0,0 +1,15 @@
package v1alpha1

// ObjectMeta represents metadata of a holos component object. The fields are a
// copy of upstream kubernetes api machinery but are used by holos objects
// distinct from kubernetes api objects.
type ObjectMeta struct {
	// Name uniquely identifies the holos component instance and must be suitable as a file name.
	Name string `json:"name,omitempty" yaml:"name,omitempty"`
	// Namespace confines a holos component to a single namespace via kustomize if set.
	Namespace string `json:"namespace,omitempty" yaml:"namespace,omitempty"`
	// Labels are not used but are copied from api machinery ObjectMeta for completeness.
	Labels map[string]string `json:"labels,omitempty" yaml:"labels,omitempty"`
	// Annotations are not used but are copied from api machinery ObjectMeta for completeness.
	Annotations map[string]string `json:"annotations,omitempty" yaml:"annotations,omitempty"`
}

api/v1alpha1/render.go Normal file

@@ -0,0 +1,21 @@
package v1alpha1

import (
	"context"
	"github.com/holos-run/holos"
)

type Renderer interface {
	GetKind() string
	Render(ctx context.Context, path holos.PathComponent) (*Result, error)
}

// Render produces a Result representing the kubernetes api objects to
// configure. Each of the various holos component types, e.g. Helm, Kustomize,
// et al, should implement the Renderer interface. This process is best
// conceptualized as a data pipeline; for example, a component may render a
// result by first calling helm template, then passing the result through
// kustomize, then mixing in overlay api objects.
func Render(ctx context.Context, r Renderer, path holos.PathComponent) (*Result, error) {
	return r.Render(ctx, path)
}

api/v1alpha1/result.go Normal file

@@ -0,0 +1,141 @@
package v1alpha1

import (
	"context"
	"fmt"
	"github.com/holos-run/holos/pkg/logger"
	"github.com/holos-run/holos/pkg/util"
	"github.com/holos-run/holos/pkg/wrapper"
	"os"
	"path/filepath"
	"slices"
)

// Result is the build result for display or writing. Holos components Render the Result as a data pipeline.
type Result struct {
	TypeMeta      `json:",inline" yaml:",inline"`
	Kustomization `json:",inline" yaml:",inline"`
	Metadata      ObjectMeta `json:"metadata,omitempty"`
	// Skip causes holos to take no action if true.
	Skip bool
	// accumulatedOutput accumulates rendered api objects.
	accumulatedOutput string
}

func (r *Result) Name() string {
	return r.Metadata.Name
}

func (r *Result) Filename(writeTo string, cluster string) string {
	name := r.Metadata.Name
	return filepath.Join(writeTo, "clusters", cluster, "components", name, name+".gen.yaml")
}

func (r *Result) KustomizationFilename(writeTo string, cluster string) string {
	return filepath.Join(writeTo, "clusters", cluster, "holos", "components", r.Metadata.Name+"-kustomization.gen.yaml")
}

// AccumulatedOutput returns the accumulated rendered output.
func (r *Result) AccumulatedOutput() string {
	return r.accumulatedOutput
}

// addObjectMap renders the provided APIObjectMap into the accumulated output.
func (r *Result) addObjectMap(ctx context.Context, objectMap APIObjectMap) {
	log := logger.FromContext(ctx)
	b := []byte(r.AccumulatedOutput())
	kinds := make([]string, 0, len(objectMap))
	// Sort the keys
	for kind := range objectMap {
		kinds = append(kinds, kind)
	}
	slices.Sort(kinds)

	for _, kind := range kinds {
		v := objectMap[kind]
		// Sort the keys
		names := make([]string, 0, len(v))
		for name := range v {
			names = append(names, name)
		}
		slices.Sort(names)

		for _, name := range names {
			yamlString := v[name]
			log.Debug(fmt.Sprintf("%s/%s", kind, name), "kind", kind, "name", name)
			b = util.EnsureNewline(b)
			header := fmt.Sprintf("---\n# Source: CUE apiObjects.%s.%s\n", kind, name)
			b = append(b, []byte(header+yamlString)...)
			b = util.EnsureNewline(b)
		}
	}
	r.accumulatedOutput = string(b)
}

// kustomize replaces the accumulated output with the output of kustomize build.
func (r *Result) kustomize(ctx context.Context) error {
	log := logger.FromContext(ctx)
	if r.ResourcesFile == "" {
		log.DebugContext(ctx, "skipping kustomize: no resourcesFile")
		return nil
	}
	if len(r.KustomizeFiles) < 1 {
		log.DebugContext(ctx, "skipping kustomize: no kustomizeFiles")
		return nil
	}
	tempDir, err := os.MkdirTemp("", "holos.kustomize")
	if err != nil {
		return wrapper.Wrap(err)
	}
	defer util.Remove(ctx, tempDir)

	// Write the main api object resources file for kustomize.
	target := filepath.Join(tempDir, r.ResourcesFile)
	b := []byte(r.AccumulatedOutput())
	b = util.EnsureNewline(b)
	if err := os.WriteFile(target, b, 0644); err != nil {
		return wrapper.Wrap(fmt.Errorf("could not write resources: %w", err))
	}
	log.DebugContext(ctx, "wrote: "+target, "op", "write", "path", target, "bytes", len(b))

	// Write the kustomization tree, kustomization.yaml must be in this map for kustomize to work.
	for file, content := range r.KustomizeFiles {
		target := filepath.Join(tempDir, file)
		if err := os.MkdirAll(filepath.Dir(target), 0755); err != nil {
			return wrapper.Wrap(err)
		}
		b := []byte(content)
		b = util.EnsureNewline(b)
		if err := os.WriteFile(target, b, 0644); err != nil {
			return wrapper.Wrap(fmt.Errorf("could not write: %w", err))
		}
		log.DebugContext(ctx, "wrote: "+target, "op", "write", "path", target, "bytes", len(b))
	}

	// Run kustomize.
	kOut, err := util.RunCmd(ctx, "kubectl", "kustomize", tempDir)
	if err != nil {
		log.ErrorContext(ctx, kOut.Stderr.String())
		return wrapper.Wrap(err)
	}
	// Replace the accumulated output.
	r.accumulatedOutput = kOut.Stdout.String()
	return nil
}

// Save writes the content to the filesystem for git ops.
func (r *Result) Save(ctx context.Context, path string, content string) error {
	log := logger.FromContext(ctx)
	dir := filepath.Dir(path)
	if err := os.MkdirAll(dir, os.FileMode(0775)); err != nil {
		log.WarnContext(ctx, "could not mkdir", "path", dir, "err", err)
		return wrapper.Wrap(err)
	}
	// Write the kube api objects.
	if err := os.WriteFile(path, []byte(content), os.FileMode(0644)); err != nil {
		log.WarnContext(ctx, "could not write", "path", path, "err", err)
		return wrapper.Wrap(err)
	}
	log.DebugContext(ctx, "out: wrote "+path, "action", "write", "path", path, "status", "ok")
	return nil
}

api/v1alpha1/typemeta.go Normal file

@@ -0,0 +1,10 @@
package v1alpha1

type TypeMeta struct {
	Kind       string `json:"kind,omitempty" yaml:"kind,omitempty"`
	APIVersion string `json:"apiVersion,omitempty" yaml:"apiVersion,omitempty"`
}

func (tm *TypeMeta) GetKind() string {
	return tm.Kind
}


@@ -0,0 +1,33 @@
# KustomizeBuild is a supported holos component kind
exec holos render --cluster-name=mycluster . --log-level=debug
# Want generated output
cmp want.yaml deploy/clusters/mycluster/components/kstest/kstest.gen.yaml
-- cue.mod --
package holos
-- component.cue --
package holos
cluster: string @tag(cluster, string)
apiVersion: "holos.run/v1alpha1"
kind: "KustomizeBuild"
metadata: name: "kstest"
-- kustomization.yaml --
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: mynamespace
resources:
- serviceaccount.yaml
-- serviceaccount.yaml --
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test
-- want.yaml --
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test
  namespace: mynamespace


@@ -0,0 +1,18 @@
// Code generated by cue get go. DO NOT EDIT.

//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1

package v1alpha1

// HolosComponent defines the common fields for all holos components.
#HolosComponent: {
	#TypeMeta

	// Metadata represents the holos component name
	metadata?: #ObjectMeta @go(Metadata)

	// APIObjectMap holds the marshalled representation of api objects.
	apiObjectMap?: #APIObjectMap @go(APIObjectMap)
	#Kustomization
}


@@ -0,0 +1,6 @@
// Code generated by cue get go. DO NOT EDIT.

//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1

// Package v1alpha1 defines the api boundary between CUE and Holos.
package v1alpha1


@@ -0,0 +1,33 @@
// Code generated by cue get go. DO NOT EDIT.

//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1

package v1alpha1

#HelmChartKind: "HelmChart"

// ChartDir is the directory name created in the holos component directory to cache a chart.
#ChartDir: "vendor"

// A HelmChart represents a helm command to provide chart values in order to render kubernetes api objects.
#HelmChart: {
	#HolosComponent

	// Namespace is the namespace to install into. TODO: Use metadata.namespace instead.
	namespace:     string @go(Namespace)
	chart:         #Chart @go(Chart)
	valuesContent: string @go(ValuesContent)
	enableHooks:   bool   @go(EnableHooks)
}

#Chart: {
	name:       string      @go(Name)
	version:    string      @go(Version)
	release:    string      @go(Release)
	repository: #Repository @go(Repository)
}

#Repository: {
	name: string @go(Name)
	url:  string @go(URL)
}


@@ -0,0 +1,12 @@
// Code generated by cue get go. DO NOT EDIT.

//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1

package v1alpha1

#KubernetesObjectsKind: "KubernetesObjects"

// KubernetesObjects represents CUE output which directly provides Kubernetes api objects to holos.
#KubernetesObjects: {
	#HolosComponent
}


@@ -0,0 +1,17 @@
// Code generated by cue get go. DO NOT EDIT.

//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1

package v1alpha1

// Kustomization holds the rendered flux kustomization api object content for git ops.
#Kustomization: {
	// KsContent is the yaml representation of the flux kustomization for gitops.
	ksContent?: string @go(KsContent)

	// KustomizeFiles holds file contents for kustomize, e.g. patch files.
	kustomizeFiles?: #FileContentMap @go(KustomizeFiles)

	// ResourcesFile is the file name used for api objects in kustomization.yaml
	resourcesFile?: string @go(ResourcesFile)
}


@@ -0,0 +1,12 @@
// Code generated by cue get go. DO NOT EDIT.

//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1

package v1alpha1

#KustomizeBuildKind: "KustomizeBuild"

// KustomizeBuild renders plain yaml files with kubectl kustomize.
#KustomizeBuild: {
	#HolosComponent
}


@@ -0,0 +1,12 @@
// Code generated by cue get go. DO NOT EDIT.

//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1

package v1alpha1

// APIObjectMap is the shape of marshalled api objects returned from cue to the
// holos cli. A map is used to improve the clarity of error messages from cue.
#APIObjectMap: {[string]: [string]: string}

// FileContentMap is a map of file names to file contents.
#FileContentMap: {[string]: string}


@@ -0,0 +1,22 @@
// Code generated by cue get go. DO NOT EDIT.

//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1

package v1alpha1

// ObjectMeta represents metadata of a holos component object. The fields are a
// copy of upstream kubernetes api machinery but are used by holos objects
// distinct from kubernetes api objects.
#ObjectMeta: {
	// Name uniquely identifies the holos component instance and must be suitable as a file name.
	name?: string @go(Name)

	// Namespace confines a holos component to a single namespace via kustomize if set.
	namespace?: string @go(Namespace)

	// Labels are not used but are copied from api machinery ObjectMeta for completeness.
	labels?: {[string]: string} @go(Labels,map[string]string)

	// Annotations are not used but are copied from api machinery ObjectMeta for completeness.
	annotations?: {[string]: string} @go(Annotations,map[string]string)
}


@@ -0,0 +1,7 @@
// Code generated by cue get go. DO NOT EDIT.

//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1

package v1alpha1

#Renderer: _


@@ -0,0 +1,16 @@
// Code generated by cue get go. DO NOT EDIT.

//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1

package v1alpha1

// Result is the build result for display or writing. Holos components Render the Result as a data pipeline.
#Result: {
	#TypeMeta
	#Kustomization
	metadata?: #ObjectMeta @go(Metadata)

	// Skip causes holos to take no action if true.
	Skip: bool
}


@@ -0,0 +1,10 @@
// Code generated by cue get go. DO NOT EDIT.

//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1

package v1alpha1

#TypeMeta: {
	kind?:       string @go(Kind)
	apiVersion?: string @go(APIVersion)
}


@@ -0,0 +1,975 @@
// Code generated by timoni. DO NOT EDIT.
//timoni:generate timoni vendor crd -f /home/jeff/workspace/holos-run/holos-infra/deploy/clusters/core2/components/prod-pgo-crds/prod-pgo-crds.gen.yaml
package v1beta1
import "strings"
// PGAdmin is the Schema for the pgadmins API
#PGAdmin: {
// APIVersion defines the versioned schema of this representation
// of an object. Servers should convert recognized schemas to the
// latest internal value, and may reject unrecognized values.
// More info:
// https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
apiVersion: "postgres-operator.crunchydata.com/v1beta1"
// Kind is a string value representing the REST resource this
// object represents. Servers may infer this from the endpoint
// the client submits requests to. Cannot be updated. In
// CamelCase. More info:
// https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
kind: "PGAdmin"
metadata!: {
name!: strings.MaxRunes(253) & strings.MinRunes(1) & {
string
}
namespace!: strings.MaxRunes(63) & strings.MinRunes(1) & {
string
}
labels?: {
[string]: string
}
annotations?: {
[string]: string
}
}
// PGAdminSpec defines the desired state of PGAdmin
spec!: #PGAdminSpec
}
// PGAdminSpec defines the desired state of PGAdmin
#PGAdminSpec: {
// Scheduling constraints of the PGAdmin pod. More info:
// https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node
affinity?: {
// Describes node affinity scheduling rules for the pod.
nodeAffinity?: {
// The scheduler will prefer to schedule pods to nodes that
// satisfy the affinity expressions specified by this field, but
// it may choose a node that violates one or more of the
// expressions. The node that is most preferred is the one with
// the greatest sum of weights, i.e. for each node that meets all
// of the scheduling requirements (resource request,
// requiredDuringScheduling affinity expressions, etc.), compute
// a sum by iterating through the elements of this field and
// adding "weight" to the sum if the node matches the
// corresponding matchExpressions; the node(s) with the highest
// sum are the most preferred.
preferredDuringSchedulingIgnoredDuringExecution?: [...{
// A node selector term, associated with the corresponding weight.
preference: {
// A list of node selector requirements by node's labels.
matchExpressions?: [...{
// The label key that the selector applies to.
key: string
// Represents a key's relationship to a set of values. Valid
// operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.
operator: string
// An array of string values. If the operator is In or NotIn, the
// values array must be non-empty. If the operator is Exists or
// DoesNotExist, the values array must be empty. If the operator
// is Gt or Lt, the values array must have a single element,
// which will be interpreted as an integer. This array is
// replaced during a strategic merge patch.
values?: [...string]
}]
// A list of node selector requirements by node's fields.
matchFields?: [...{
// The label key that the selector applies to.
key: string
// Represents a key's relationship to a set of values. Valid
// operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.
operator: string
// An array of string values. If the operator is In or NotIn, the
// values array must be non-empty. If the operator is Exists or
// DoesNotExist, the values array must be empty. If the operator
// is Gt or Lt, the values array must have a single element,
// which will be interpreted as an integer. This array is
// replaced during a strategic merge patch.
values?: [...string]
}]
}
// Weight associated with matching the corresponding
// nodeSelectorTerm, in the range 1-100.
weight: int
}]
requiredDuringSchedulingIgnoredDuringExecution?: {
// Required. A list of node selector terms. The terms are ORed.
nodeSelectorTerms: [...{
// A list of node selector requirements by node's labels.
matchExpressions?: [...{
// The label key that the selector applies to.
key: string
// Represents a key's relationship to a set of values. Valid
// operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.
operator: string
// An array of string values. If the operator is In or NotIn, the
// values array must be non-empty. If the operator is Exists or
// DoesNotExist, the values array must be empty. If the operator
// is Gt or Lt, the values array must have a single element,
// which will be interpreted as an integer. This array is
// replaced during a strategic merge patch.
values?: [...string]
}]
// A list of node selector requirements by node's fields.
matchFields?: [...{
// The label key that the selector applies to.
key: string
// Represents a key's relationship to a set of values. Valid
// operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt.
operator: string
// An array of string values. If the operator is In or NotIn, the
// values array must be non-empty. If the operator is Exists or
// DoesNotExist, the values array must be empty. If the operator
// is Gt or Lt, the values array must have a single element,
// which will be interpreted as an integer. This array is
// replaced during a strategic merge patch.
values?: [...string]
}]
}]
}
}
// Describes pod affinity scheduling rules (e.g. co-locate this
// pod in the same node, zone, etc. as some other pod(s)).
podAffinity?: {
// The scheduler will prefer to schedule pods to nodes that
// satisfy the affinity expressions specified by this field, but
// it may choose a node that violates one or more of the
// expressions. The node that is most preferred is the one with
// the greatest sum of weights, i.e. for each node that meets all
// of the scheduling requirements (resource request,
// requiredDuringScheduling affinity expressions, etc.), compute
// a sum by iterating through the elements of this field and
// adding "weight" to the sum if the node has pods which matches
// the corresponding podAffinityTerm; the node(s) with the
// highest sum are the most preferred.
preferredDuringSchedulingIgnoredDuringExecution?: [...{
// Required. A pod affinity term, associated with the
// corresponding weight.
podAffinityTerm: {
// A label query over a set of resources, in this case pods.
labelSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// A label query over the set of namespaces that the term applies
// to. The term is applied to the union of the namespaces
// selected by this field and the ones listed in the namespaces
// field. null selector and null or empty namespaces list means
// "this pod's namespace". An empty selector ({}) matches all
// namespaces.
namespaceSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// namespaces specifies a static list of namespace names that the
// term applies to. The term is applied to the union of the
// namespaces listed in this field and the ones selected by
// namespaceSelector. null or empty namespaces list and null
// namespaceSelector means "this pod's namespace".
namespaces?: [...string]
// This pod should be co-located (affinity) or not co-located
// (anti-affinity) with the pods matching the labelSelector in
// the specified namespaces, where co-located is defined as
// running on a node whose value of the label with key
// topologyKey matches that of any node on which any of the
// selected pods is running. Empty topologyKey is not allowed.
topologyKey: string
}
// weight associated with matching the corresponding
// podAffinityTerm, in the range 1-100.
weight: int
}]
// If the affinity requirements specified by this field are not
// met at scheduling time, the pod will not be scheduled onto the
// node. If the affinity requirements specified by this field
// cease to be met at some point during pod execution (e.g. due
// to a pod label update), the system may or may not try to
// eventually evict the pod from its node. When there are
// multiple elements, the lists of nodes corresponding to each
// podAffinityTerm are intersected, i.e. all terms must be
// satisfied.
requiredDuringSchedulingIgnoredDuringExecution?: [...{
// A label query over a set of resources, in this case pods.
labelSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// A label query over the set of namespaces that the term applies
// to. The term is applied to the union of the namespaces
// selected by this field and the ones listed in the namespaces
// field. null selector and null or empty namespaces list means
// "this pod's namespace". An empty selector ({}) matches all
// namespaces.
namespaceSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// namespaces specifies a static list of namespace names that the
// term applies to. The term is applied to the union of the
// namespaces listed in this field and the ones selected by
// namespaceSelector. null or empty namespaces list and null
// namespaceSelector means "this pod's namespace".
namespaces?: [...string]
// This pod should be co-located (affinity) or not co-located
// (anti-affinity) with the pods matching the labelSelector in
// the specified namespaces, where co-located is defined as
// running on a node whose value of the label with key
// topologyKey matches that of any node on which any of the
// selected pods is running. Empty topologyKey is not allowed.
topologyKey: string
}]
}
// Describes pod anti-affinity scheduling rules (e.g. avoid
// putting this pod in the same node, zone, etc. as some other
// pod(s)).
podAntiAffinity?: {
// The scheduler will prefer to schedule pods to nodes that
// satisfy the anti-affinity expressions specified by this field,
// but it may choose a node that violates one or more of the
// expressions. The node that is most preferred is the one with
// the greatest sum of weights, i.e. for each node that meets all
// of the scheduling requirements (resource request,
// requiredDuringScheduling anti-affinity expressions, etc.),
// compute a sum by iterating through the elements of this field
// and adding "weight" to the sum if the node has pods which
// matches the corresponding podAffinityTerm; the node(s) with
// the highest sum are the most preferred.
preferredDuringSchedulingIgnoredDuringExecution?: [...{
// Required. A pod affinity term, associated with the
// corresponding weight.
podAffinityTerm: {
// A label query over a set of resources, in this case pods.
labelSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// A label query over the set of namespaces that the term applies
// to. The term is applied to the union of the namespaces
// selected by this field and the ones listed in the namespaces
// field. null selector and null or empty namespaces list means
// "this pod's namespace". An empty selector ({}) matches all
// namespaces.
namespaceSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// namespaces specifies a static list of namespace names that the
// term applies to. The term is applied to the union of the
// namespaces listed in this field and the ones selected by
// namespaceSelector. null or empty namespaces list and null
// namespaceSelector means "this pod's namespace".
namespaces?: [...string]
// This pod should be co-located (affinity) or not co-located
// (anti-affinity) with the pods matching the labelSelector in
// the specified namespaces, where co-located is defined as
// running on a node whose value of the label with key
// topologyKey matches that of any node on which any of the
// selected pods is running. Empty topologyKey is not allowed.
topologyKey: string
}
// weight associated with matching the corresponding
// podAffinityTerm, in the range 1-100.
weight: int
}]
// If the anti-affinity requirements specified by this field are
// not met at scheduling time, the pod will not be scheduled onto
// the node. If the anti-affinity requirements specified by this
// field cease to be met at some point during pod execution (e.g.
// due to a pod label update), the system may or may not try to
// eventually evict the pod from its node. When there are
// multiple elements, the lists of nodes corresponding to each
// podAffinityTerm are intersected, i.e. all terms must be
// satisfied.
requiredDuringSchedulingIgnoredDuringExecution?: [...{
// A label query over a set of resources, in this case pods.
labelSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// A label query over the set of namespaces that the term applies
// to. The term is applied to the union of the namespaces
// selected by this field and the ones listed in the namespaces
// field. null selector and null or empty namespaces list means
// "this pod's namespace". An empty selector ({}) matches all
// namespaces.
namespaceSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// namespaces specifies a static list of namespace names that the
// term applies to. The term is applied to the union of the
// namespaces listed in this field and the ones selected by
// namespaceSelector. null or empty namespaces list and null
// namespaceSelector means "this pod's namespace".
namespaces?: [...string]
// This pod should be co-located (affinity) or not co-located
// (anti-affinity) with the pods matching the labelSelector in
// the specified namespaces, where co-located is defined as
// running on a node whose value of the label with key
// topologyKey matches that of any node on which any of the
// selected pods is running. Empty topologyKey is not allowed.
topologyKey: string
}]
}
}
// Configuration settings for the pgAdmin process. Changes to any
// of these values will be loaded without validation. Be careful,
// as you may put pgAdmin into an unusable state.
config?: {
// Files allows the user to mount projected volumes into the
// pgAdmin container so that files can be referenced by pgAdmin
// as needed.
files?: [...{
// configMap information about the configMap data to project
configMap?: {
// items if unspecified, each key-value pair in the Data field of
// the referenced ConfigMap will be projected into the volume as
// a file whose name is the key and content is the value. If
// specified, the listed keys will be projected into the
// specified paths, and unlisted keys will not be present. If a
// key is specified which is not present in the ConfigMap, the
// volume setup will error unless it is marked optional. Paths
// must be relative and may not contain the '..' path or start
// with '..'.
items?: [...{
// key is the key to project.
key: string
// mode is Optional: mode bits used to set permissions on this
// file. Must be an octal value between 0000 and 0777 or a
// decimal value between 0 and 511. YAML accepts both octal and
// decimal values, JSON requires decimal values for mode bits. If
// not specified, the volume defaultMode will be used. This might
// be in conflict with other options that affect the file mode,
// like fsGroup, and the result can be other mode bits set.
mode?: int
// path is the relative path of the file to map the key to. May
// not be an absolute path. May not contain the path element
// '..'. May not start with the string '..'.
path: string
}]
// Name of the referent. More info:
// https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
name?: string
// optional specifies whether the ConfigMap or its keys must be
// defined
optional?: bool
}
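// downwardAPI information about the downwardAPI data to
// project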
downwardAPI?: {
// Items is a list of DownwardAPIVolume files
items?: [...{
// Required: Selects a field of the pod: only annotations, labels,
// name and namespace are supported.
fieldRef?: {
// Version of the schema the FieldPath is written in terms of,
// defaults to "v1".
apiVersion?: string
// Path of the field to select in the specified API version.
fieldPath: string
}
// Optional: mode bits used to set permissions on this file, must
// be an octal value between 0000 and 0777 or a decimal value
// between 0 and 511. YAML accepts both octal and decimal values,
// JSON requires decimal values for mode bits. If not specified,
// the volume defaultMode will be used. This might be in conflict
// with other options that affect the file mode, like fsGroup,
// and the result can be other mode bits set.
mode?: int
// Required: Path is the relative path name of the file to be
// created. Must not be absolute or contain the '..' path. Must
// be utf-8 encoded. The first item of the relative path must not
// start with '..'
path: string
// Selects a resource of the container: only resources limits and
// requests (limits.cpu, limits.memory, requests.cpu and
// requests.memory) are currently supported.
resourceFieldRef?: {
// Container name: required for volumes, optional for env vars
containerName?: string
// Specifies the output format of the exposed resources, defaults
// to "1"
divisor?: (int | string) & {
=~"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$"
}
// Required: resource to select
resource: string
}
}]
}
// secret information about the secret data to project
secret?: {
// items if unspecified, each key-value pair in the Data field of
// the referenced Secret will be projected into the volume as a
// file whose name is the key and content is the value. If
// specified, the listed keys will be projected into the
// specified paths, and unlisted keys will not be present. If a
// key is specified which is not present in the Secret, the
// volume setup will error unless it is marked optional. Paths
// must be relative and may not contain the '..' path or start
// with '..'.
items?: [...{
// key is the key to project.
key: string
// mode is Optional: mode bits used to set permissions on this
// file. Must be an octal value between 0000 and 0777 or a
// decimal value between 0 and 511. YAML accepts both octal and
// decimal values, JSON requires decimal values for mode bits. If
// not specified, the volume defaultMode will be used. This might
// be in conflict with other options that affect the file mode,
// like fsGroup, and the result can be other mode bits set.
mode?: int
// path is the relative path of the file to map the key to. May
// not be an absolute path. May not contain the path element
// '..'. May not start with the string '..'.
path: string
}]
// Name of the referent. More info:
// https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
name?: string
// optional field specifies whether the Secret or its key must be
// defined
optional?: bool
}
// serviceAccountToken is information about the
// serviceAccountToken data to project
serviceAccountToken?: {
// audience is the intended audience of the token. A recipient of
// a token must identify itself with an identifier specified in
// the audience of the token, and otherwise should reject the
// token. The audience defaults to the identifier of the
// apiserver.
audience?: string
// expirationSeconds is the requested duration of validity of the
// service account token. As the token approaches expiration, the
// kubelet volume plugin will proactively rotate the service
// account token. The kubelet will start trying to rotate the
// token if the token is older than 80 percent of its time to
// live or if the token is older than 24 hours. Defaults to 1 hour
// and must be at least 10 minutes.
expirationSeconds?: int
// path is the path relative to the mount point of the file to
// project the token into.
path: string
}
}]
// A Secret containing the value for the LDAP_BIND_PASSWORD
// setting. More info:
// https://www.pgadmin.org/docs/pgadmin4/latest/ldap.html
ldapBindPassword?: {
// The key of the secret to select from. Must be a valid secret
// key.
key: string
// Name of the referent. More info:
// https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
name?: string
// Specify whether the Secret or its key must be defined
optional?: bool
}
// Settings for the pgAdmin server process. Keys should be
// uppercase and values must be constants. More info:
// https://www.pgadmin.org/docs/pgadmin4/latest/config_py.html
settings?: {
...
}
}
// Defines a PersistentVolumeClaim for pgAdmin data. More info:
// https://kubernetes.io/docs/concepts/storage/persistent-volumes
dataVolumeClaimSpec: {
// accessModes contains the desired access modes the volume should
// have. More info:
// https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes?: [...string]
// dataSource field can be used to specify either: * An existing
// VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)
// * An existing PVC (PersistentVolumeClaim) If the provisioner
// or an external controller can support the specified data
// source, it will create a new volume based on the contents of
// the specified data source. If the AnyVolumeDataSource feature
// gate is enabled, this field will always have the same contents
// as the DataSourceRef field.
dataSource?: {
// APIGroup is the group for the resource being referenced. If
// APIGroup is not specified, the specified Kind must be in the
// core API group. For any other third-party types, APIGroup is
// required.
apiGroup?: string
// Kind is the type of resource being referenced
kind: string
// Name is the name of resource being referenced
name: string
}
// dataSourceRef specifies the object from which to populate the
// volume with data, if a non-empty volume is desired. This may
// be any local object from a non-empty API group (non core
// object) or a PersistentVolumeClaim object. When this field is
// specified, volume binding will only succeed if the type of the
// specified object matches some installed volume populator or
// dynamic provisioner. This field will replace the functionality
// of the DataSource field and as such if both fields are
// non-empty, they must have the same value. For backwards
// compatibility, both fields (DataSource and DataSourceRef) will
// be set to the same value automatically if one of them is empty
// and the other is non-empty. There are two important
// differences between DataSource and DataSourceRef: * While
// DataSource only allows two specific types of objects,
// DataSourceRef allows any non-core object, as well as
// PersistentVolumeClaim objects. * While DataSource ignores
// disallowed values (dropping them), DataSourceRef preserves all
// values, and generates an error if a disallowed value is
// specified. (Beta) Using this field requires the
// AnyVolumeDataSource feature gate to be enabled.
dataSourceRef?: {
// APIGroup is the group for the resource being referenced. If
// APIGroup is not specified, the specified Kind must be in the
// core API group. For any other third-party types, APIGroup is
// required.
apiGroup?: string
// Kind is the type of resource being referenced
kind: string
// Name is the name of resource being referenced
name: string
}
// resources represents the minimum resources the volume should
// have. If RecoverVolumeExpansionFailure feature is enabled
// users are allowed to specify resource requirements that are
// lower than previous value but must still be higher than
// capacity recorded in the status field of the claim. More info:
// https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
resources?: {
// Limits describes the maximum amount of compute resources
// allowed. More info:
// https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits?: {
[string]: (int | string) & =~"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$"
}
// Requests describes the minimum amount of compute resources
// required. If Requests is omitted for a container, it defaults
// to Limits if that is explicitly specified, otherwise to an
// implementation-defined value. More info:
// https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests?: {
[string]: (int | string) & =~"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$"
}
}
// selector is a label query over volumes to consider for binding.
selector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// storageClassName is the name of the StorageClass required by
// the claim. More info:
// https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
storageClassName?: string
// volumeMode defines what type of volume is required by the
// claim. Value of Filesystem is implied when not included in
// claim spec.
volumeMode?: string
// volumeName is the binding reference to the PersistentVolume
// backing this claim.
volumeName?: string
}
// The image name to use for pgAdmin instance.
image?: string
// ImagePullPolicy is used to determine when Kubernetes will
// attempt to pull (download) container images. More info:
// https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
imagePullPolicy?: "Always" | "Never" | "IfNotPresent"
// The image pull secrets used to pull from a private registry.
// Changing this value causes all running PGAdmin pods to
// restart.
// https://k8s.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets?: [...{
// Name of the referent. More info:
// https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
name?: string
}]
// Metadata contains metadata for custom resources
metadata?: {
annotations?: {
[string]: string
}
labels?: {
[string]: string
}
}
// Priority class name for the PGAdmin pod. Changing this value
// causes PGAdmin pod to restart. More info:
// https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/
priorityClassName?: string
// Resource requirements for the PGAdmin container.
resources?: {
// Limits describes the maximum amount of compute resources
// allowed. More info:
// https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits?: {
[string]: (int | string) & =~"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$"
}
// Requests describes the minimum amount of compute resources
// required. If Requests is omitted for a container, it defaults
// to Limits if that is explicitly specified, otherwise to an
// implementation-defined value. More info:
// https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests?: {
[string]: (int | string) & =~"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$"
}
}
// ServerGroups for importing PostgresClusters to pgAdmin. To
// create a pgAdmin with no selectors, leave this field empty. A
// pgAdmin created with no `ServerGroups` will not automatically
// add any servers through discovery. PostgresClusters can still
// be added manually.
serverGroups?: [...{
// The name for the ServerGroup in pgAdmin. Must be unique in the
// pgAdmin's ServerGroups since it becomes the ServerGroup name
// in pgAdmin.
name: string
// PostgresClusterSelector selects clusters to dynamically add to
// pgAdmin by matching labels. An empty selector like `{}` will
// select ALL clusters in the namespace.
postgresClusterSelector: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
}]
// Tolerations of the PGAdmin pod. More info:
// https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration
tolerations?: [...{
// Effect indicates the taint effect to match. Empty means match
// all taint effects. When specified, allowed values are
// NoSchedule, PreferNoSchedule and NoExecute.
effect?: string
// Key is the taint key that the toleration applies to. Empty
// means match all taint keys. If the key is empty, operator must
// be Exists; this combination means to match all values and all
// keys.
key?: string
// Operator represents a key's relationship to the value. Valid
// operators are Exists and Equal. Defaults to Equal. Exists is
// equivalent to wildcard for value, so that a pod can tolerate
// all taints of a particular category.
operator?: string
// TolerationSeconds represents the period of time the toleration
// (which must be of effect NoExecute, otherwise this field is
// ignored) tolerates the taint. By default, it is not set, which
// means tolerate the taint forever (do not evict). Zero and
// negative values will be treated as 0 (evict immediately) by
// the system.
tolerationSeconds?: int
// Value is the taint value the toleration matches to. If the
// operator is Exists, the value should be empty, otherwise just
// a regular string.
value?: string
}]
}
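
For illustration, a minimal resource that satisfies the schema above; the #PGAdmin definition name follows the same conventions as the generated #PGUpgrade below, and the metadata and selector values are assumptions, not part of the generated file:

    pgadmin: #PGAdmin & {
        metadata: name:      "pgadmin"
        metadata: namespace: "prod-iam-zitadel"
        spec: {
            dataVolumeClaimSpec: {
                accessModes: ["ReadWriteOnce"]
                resources: requests: storage: "1Gi"
            }
            serverGroups: [{
                name: "zitadel"
                postgresClusterSelector: matchLabels: "postgres-operator.crunchydata.com/cluster": "zitadel"
            }]
        }
    }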

View File

@@ -0,0 +1,632 @@
// Code generated by timoni. DO NOT EDIT.
//timoni:generate timoni vendor crd -f /home/jeff/workspace/holos-run/holos-infra/deploy/clusters/core2/components/prod-pgo-crds/prod-pgo-crds.gen.yaml
package v1beta1
import "strings"
// PGUpgrade is the Schema for the pgupgrades API
#PGUpgrade: {
// APIVersion defines the versioned schema of this representation
// of an object. Servers should convert recognized schemas to the
// latest internal value, and may reject unrecognized values.
// More info:
// https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
apiVersion: "postgres-operator.crunchydata.com/v1beta1"
// Kind is a string value representing the REST resource this
// object represents. Servers may infer this from the endpoint
// the client submits requests to. Cannot be updated. In
// CamelCase. More info:
// https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
kind: "PGUpgrade"
metadata!: {
name!: strings.MaxRunes(253) & strings.MinRunes(1) & {
string
}
namespace!: strings.MaxRunes(63) & strings.MinRunes(1) & {
string
}
labels?: {
[string]: string
}
annotations?: {
[string]: string
}
}
// PGUpgradeSpec defines the desired state of PGUpgrade
spec!: #PGUpgradeSpec
}
// PGUpgradeSpec defines the desired state of PGUpgrade
#PGUpgradeSpec: {
// Scheduling constraints of the PGUpgrade pod. More info:
// https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node
affinity?: {
// Describes node affinity scheduling rules for the pod.
nodeAffinity?: {
// The scheduler will prefer to schedule pods to nodes that
// satisfy the affinity expressions specified by this field, but
// it may choose a node that violates one or more of the
// expressions. The node that is most preferred is the one with
// the greatest sum of weights, i.e. for each node that meets all
// of the scheduling requirements (resource request,
// requiredDuringScheduling affinity expressions, etc.), compute
// a sum by iterating through the elements of this field and
// adding "weight" to the sum if the node matches the
// corresponding matchExpressions; the node(s) with the highest
// sum are the most preferred.
preferredDuringSchedulingIgnoredDuringExecution?: [...{
// A node selector term, associated with the corresponding weight.
preference: {
// A list of node selector requirements by node's labels.
matchExpressions?: [...{
// The label key that the selector applies to.
key: string
// Represents a key's relationship to a set of values. Valid
// operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
operator: string
// An array of string values. If the operator is In or NotIn, the
// values array must be non-empty. If the operator is Exists or
// DoesNotExist, the values array must be empty. If the operator
// is Gt or Lt, the values array must have a single element,
// which will be interpreted as an integer. This array is
// replaced during a strategic merge patch.
values?: [...string]
}]
// A list of node selector requirements by node's fields.
matchFields?: [...{
// The label key that the selector applies to.
key: string
// Represents a key's relationship to a set of values. Valid
// operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
operator: string
// An array of string values. If the operator is In or NotIn, the
// values array must be non-empty. If the operator is Exists or
// DoesNotExist, the values array must be empty. If the operator
// is Gt or Lt, the values array must have a single element,
// which will be interpreted as an integer. This array is
// replaced during a strategic merge patch.
values?: [...string]
}]
}
// Weight associated with matching the corresponding
// nodeSelectorTerm, in the range 1-100.
weight: int
}]
requiredDuringSchedulingIgnoredDuringExecution?: {
// Required. A list of node selector terms. The terms are ORed.
nodeSelectorTerms: [...{
// A list of node selector requirements by node's labels.
matchExpressions?: [...{
// The label key that the selector applies to.
key: string
// Represents a key's relationship to a set of values. Valid
// operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
operator: string
// An array of string values. If the operator is In or NotIn, the
// values array must be non-empty. If the operator is Exists or
// DoesNotExist, the values array must be empty. If the operator
// is Gt or Lt, the values array must have a single element,
// which will be interpreted as an integer. This array is
// replaced during a strategic merge patch.
values?: [...string]
}]
// A list of node selector requirements by node's fields.
matchFields?: [...{
// The label key that the selector applies to.
key: string
// Represents a key's relationship to a set of values. Valid
// operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
operator: string
// An array of string values. If the operator is In or NotIn, the
// values array must be non-empty. If the operator is Exists or
// DoesNotExist, the values array must be empty. If the operator
// is Gt or Lt, the values array must have a single element,
// which will be interpreted as an integer. This array is
// replaced during a strategic merge patch.
values?: [...string]
}]
}]
}
}
// Describes pod affinity scheduling rules (e.g. co-locate this
// pod in the same node, zone, etc. as some other pod(s)).
podAffinity?: {
// The scheduler will prefer to schedule pods to nodes that
// satisfy the affinity expressions specified by this field, but
// it may choose a node that violates one or more of the
// expressions. The node that is most preferred is the one with
// the greatest sum of weights, i.e. for each node that meets all
// of the scheduling requirements (resource request,
// requiredDuringScheduling affinity expressions, etc.), compute
// a sum by iterating through the elements of this field and
// adding "weight" to the sum if the node has pods which matches
// the corresponding podAffinityTerm; the node(s) with the
// highest sum are the most preferred.
preferredDuringSchedulingIgnoredDuringExecution?: [...{
// Required. A pod affinity term, associated with the
// corresponding weight.
podAffinityTerm: {
// A label query over a set of resources, in this case pods.
labelSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// A label query over the set of namespaces that the term applies
// to. The term is applied to the union of the namespaces
// selected by this field and the ones listed in the namespaces
// field. null selector and null or empty namespaces list means
// "this pod's namespace". An empty selector ({}) matches all
// namespaces.
namespaceSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// namespaces specifies a static list of namespace names that the
// term applies to. The term is applied to the union of the
// namespaces listed in this field and the ones selected by
// namespaceSelector. null or empty namespaces list and null
// namespaceSelector means "this pod's namespace".
namespaces?: [...string]
// This pod should be co-located (affinity) or not co-located
// (anti-affinity) with the pods matching the labelSelector in
// the specified namespaces, where co-located is defined as
// running on a node whose value of the label with key
// topologyKey matches that of any node on which any of the
// selected pods is running. Empty topologyKey is not allowed.
topologyKey: string
}
// weight associated with matching the corresponding
// podAffinityTerm, in the range 1-100.
weight: int
}]
// If the affinity requirements specified by this field are not
// met at scheduling time, the pod will not be scheduled onto the
// node. If the affinity requirements specified by this field
// cease to be met at some point during pod execution (e.g. due
// to a pod label update), the system may or may not try to
// eventually evict the pod from its node. When there are
// multiple elements, the lists of nodes corresponding to each
// podAffinityTerm are intersected, i.e. all terms must be
// satisfied.
requiredDuringSchedulingIgnoredDuringExecution?: [...{
// A label query over a set of resources, in this case pods.
labelSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// A label query over the set of namespaces that the term applies
// to. The term is applied to the union of the namespaces
// selected by this field and the ones listed in the namespaces
// field. null selector and null or empty namespaces list means
// "this pod's namespace". An empty selector ({}) matches all
// namespaces.
namespaceSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// namespaces specifies a static list of namespace names that the
// term applies to. The term is applied to the union of the
// namespaces listed in this field and the ones selected by
// namespaceSelector. null or empty namespaces list and null
// namespaceSelector means "this pod's namespace".
namespaces?: [...string]
// This pod should be co-located (affinity) or not co-located
// (anti-affinity) with the pods matching the labelSelector in
// the specified namespaces, where co-located is defined as
// running on a node whose value of the label with key
// topologyKey matches that of any node on which any of the
// selected pods is running. Empty topologyKey is not allowed.
topologyKey: string
}]
}
// Describes pod anti-affinity scheduling rules (e.g. avoid
// putting this pod in the same node, zone, etc. as some other
// pod(s)).
podAntiAffinity?: {
// The scheduler will prefer to schedule pods to nodes that
// satisfy the anti-affinity expressions specified by this field,
// but it may choose a node that violates one or more of the
// expressions. The node that is most preferred is the one with
// the greatest sum of weights, i.e. for each node that meets all
// of the scheduling requirements (resource request,
// requiredDuringScheduling anti-affinity expressions, etc.),
// compute a sum by iterating through the elements of this field
// and adding "weight" to the sum if the node has pods which
// matches the corresponding podAffinityTerm; the node(s) with
// the highest sum are the most preferred.
preferredDuringSchedulingIgnoredDuringExecution?: [...{
// Required. A pod affinity term, associated with the
// corresponding weight.
podAffinityTerm: {
// A label query over a set of resources, in this case pods.
labelSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// A label query over the set of namespaces that the term applies
// to. The term is applied to the union of the namespaces
// selected by this field and the ones listed in the namespaces
// field. null selector and null or empty namespaces list means
// "this pod's namespace". An empty selector ({}) matches all
// namespaces.
namespaceSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// namespaces specifies a static list of namespace names that the
// term applies to. The term is applied to the union of the
// namespaces listed in this field and the ones selected by
// namespaceSelector. null or empty namespaces list and null
// namespaceSelector means "this pod's namespace".
namespaces?: [...string]
// This pod should be co-located (affinity) or not co-located
// (anti-affinity) with the pods matching the labelSelector in
// the specified namespaces, where co-located is defined as
// running on a node whose value of the label with key
// topologyKey matches that of any node on which any of the
// selected pods is running. Empty topologyKey is not allowed.
topologyKey: string
}
// weight associated with matching the corresponding
// podAffinityTerm, in the range 1-100.
weight: int
}]
// If the anti-affinity requirements specified by this field are
// not met at scheduling time, the pod will not be scheduled onto
// the node. If the anti-affinity requirements specified by this
// field cease to be met at some point during pod execution (e.g.
// due to a pod label update), the system may or may not try to
// eventually evict the pod from its node. When there are
// multiple elements, the lists of nodes corresponding to each
// podAffinityTerm are intersected, i.e. all terms must be
// satisfied.
requiredDuringSchedulingIgnoredDuringExecution?: [...{
// A label query over a set of resources, in this case pods.
labelSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// A label query over the set of namespaces that the term applies
// to. The term is applied to the union of the namespaces
// selected by this field and the ones listed in the namespaces
// field. null selector and null or empty namespaces list means
// "this pod's namespace". An empty selector ({}) matches all
// namespaces.
namespaceSelector?: {
// matchExpressions is a list of label selector requirements. The
// requirements are ANDed.
matchExpressions?: [...{
// key is the label key that the selector applies to.
key: string
// operator represents a key's relationship to a set of values.
// Valid operators are In, NotIn, Exists and DoesNotExist.
operator: string
// values is an array of string values. If the operator is In or
// NotIn, the values array must be non-empty. If the operator is
// Exists or DoesNotExist, the values array must be empty. This
// array is replaced during a strategic merge patch.
values?: [...string]
}]
// matchLabels is a map of {key,value} pairs. A single {key,value}
// in the matchLabels map is equivalent to an element of
// matchExpressions, whose key field is "key", the operator is
// "In", and the values array contains only "value". The
// requirements are ANDed.
matchLabels?: {
[string]: string
}
}
// namespaces specifies a static list of namespace names that the
// term applies to. The term is applied to the union of the
// namespaces listed in this field and the ones selected by
// namespaceSelector. null or empty namespaces list and null
// namespaceSelector means "this pod's namespace".
namespaces?: [...string]
// This pod should be co-located (affinity) or not co-located
// (anti-affinity) with the pods matching the labelSelector in
// the specified namespaces, where co-located is defined as
// running on a node whose value of the label with key
// topologyKey matches that of any node on which any of the
// selected pods is running. Empty topologyKey is not allowed.
topologyKey: string
}]
}
}
// The major version of PostgreSQL before the upgrade.
fromPostgresVersion: uint & >=10 & <=16
// The image name to use for major PostgreSQL upgrades.
image?: string
// ImagePullPolicy is used to determine when Kubernetes will
// attempt to pull (download) container images. More info:
// https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
imagePullPolicy?: "Always" | "Never" | "IfNotPresent"
// The image pull secrets used to pull from a private registry.
// Changing this value causes all running PGUpgrade pods to
// restart.
// https://k8s.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets?: [...{
// Name of the referent. More info:
// https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
name?: string
}]
// Metadata contains metadata for custom resources
metadata?: {
annotations?: {
[string]: string
}
labels?: {
[string]: string
}
}
// The name of the cluster to be updated
postgresClusterName: strings.MinRunes(1)
// Priority class name for the PGUpgrade pod. Changing this value
// causes PGUpgrade pod to restart. More info:
// https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/
priorityClassName?: string
// Resource requirements for the PGUpgrade container.
resources?: {
// Limits describes the maximum amount of compute resources
// allowed. More info:
// https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits?: {
[string]: (int | string) & =~"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$"
}
// Requests describes the minimum amount of compute resources
// required. If Requests is omitted for a container, it defaults
// to Limits if that is explicitly specified, otherwise to an
// implementation-defined value. More info:
// https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests?: {
[string]: (int | string) & =~"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$"
}
}
// The image name to use for PostgreSQL containers after upgrade.
// When omitted, the value comes from an operator environment
// variable.
toPostgresImage?: string
// The major version of PostgreSQL to be upgraded to.
toPostgresVersion: uint & >=10 & <=16
// Tolerations of the PGUpgrade pod. More info:
// https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration
tolerations?: [...{
// Effect indicates the taint effect to match. Empty means match
// all taint effects. When specified, allowed values are
// NoSchedule, PreferNoSchedule and NoExecute.
effect?: string
// Key is the taint key that the toleration applies to. Empty
// means match all taint keys. If the key is empty, operator must
// be Exists; this combination means to match all values and all
// keys.
key?: string
// Operator represents a key's relationship to the value. Valid
// operators are Exists and Equal. Defaults to Equal. Exists is
// equivalent to wildcard for value, so that a pod can tolerate
// all taints of a particular category.
operator?: string
// TolerationSeconds represents the period of time the toleration
// (which must be of effect NoExecute, otherwise this field is
// ignored) tolerates the taint. By default, it is not set, which
// means tolerate the taint forever (do not evict). Zero and
// negative values will be treated as 0 (evict immediately) by
// the system.
tolerationSeconds?: int
// Value is the taint value the toleration matches to. If the
// operator is Exists, the value should be empty, otherwise just
// a regular string.
value?: string
}]
}
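
A minimal sketch of a PGUpgrade resource built from the definitions above; the name, namespace, and version values are illustrative:

    upgrade: #PGUpgrade & {
        metadata: name:      "zitadel-upgrade"
        metadata: namespace: "prod-iam-zitadel"
        spec: {
            postgresClusterName: "zitadel"
            fromPostgresVersion: 15
            toPostgresVersion:   16
        }
    }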

View File

@@ -0,0 +1,46 @@
// Controls optional feature flags for services distributed across multiple holos components.
// For example, enable issuing certificates in the provisioner cluster when an optional service is
// enabled for a workload cluster.
package holos
import "list"
#OptionalService: {
name: string
enabled: true | *false
clusters: [Name=_]: #Platform.clusters[Name]
clusterNames: [for c in clusters {c.name}]
managedNamespaces: [Name=_]: #ManagedNamespace & {
namespace: metadata: name: Name
clusterNames: ["provisioner", for c in clusters {c.name}]
}
// servers represents istio Gateway.spec.servers entries
// Refer to istio/gateway/gateway.cue
servers: [Name=_]: {
hosts: [...string]
port: name: Name
port: number: 443
port: protocol: "HTTPS"
tls: credentialName: string
tls: mode: "SIMPLE"
}
// public tls certs should align with the server hosts.
certs: [Name=_]: #Certificate & {
metadata: name: Name
}
}
#OptionalServices: {
[Name=_]: #OptionalService & {
name: Name
}
}
for svc in #OptionalServices {
for nsName, ns in svc.managedNamespaces {
if svc.enabled && list.Contains(ns.clusterNames, #ClusterName) {
#ManagedNamespaces: "\(nsName)": ns
}
}
}
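
To make the comprehension concrete: for the vault service defined in the next file, evaluated on cluster core1, it registers a managed namespace roughly equivalent to this sketch (exact output also depends on the #ManagedNamespace definition):

    #ManagedNamespaces: "prod-core-vault": #ManagedNamespace & {
        namespace: metadata: name: "prod-core-vault"
        namespace: metadata: labels: "istio-injection": "enabled"
        clusterNames: ["provisioner", "core1", "core2"]
    }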

View File

@@ -0,0 +1,56 @@
package holos
let CoreDomain = "core.\(#Platform.org.domain)"
let TargetNamespace = "prod-core-vault"
#OptionalServices: {
vault: {
enabled: true
clusters: core1: _
clusters: core2: _
managedNamespaces: "prod-core-vault": {
namespace: metadata: labels: "istio-injection": "enabled"
}
certs: "vault-core": #Certificate & {
metadata: name: "vault-core"
metadata: namespace: "istio-ingress"
spec: {
commonName: "vault.\(CoreDomain)"
dnsNames: [commonName]
secretName: metadata.name
issuerRef: kind: "ClusterIssuer"
issuerRef: name: string | *"letsencrypt"
}
}
servers: "https-vault-core": {
hosts: ["\(TargetNamespace)/vault.\(CoreDomain)"]
tls: credentialName: certs."vault-core".spec.secretName
}
for k, v in clusters {
let obj = (Cert & {Name: "vault-core", Cluster: v.name}).APIObject
certs: "\(obj.metadata.name)": obj
servers: "https-\(obj.metadata.name)": {
hosts: [for host in obj.spec.dnsNames {"\(TargetNamespace)/\(host)"}]
tls: credentialName: obj.spec.secretName
}
}
}
}
// Cert provisions a cluster specific certificate.
let Cert = {
Name: string
Cluster: string
APIObject: #Certificate & {
metadata: name: "\(Cluster)-\(Name)"
metadata: namespace: string | *"istio-ingress"
spec: {
commonName: string | *"vault.\(Cluster).\(CoreDomain)"
dnsNames: [commonName]
secretName: metadata.name
issuerRef: kind: "ClusterIssuer"
issuerRef: name: string | *"letsencrypt"
}
}
}
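
For illustration, unifying the Cert template for core1 yields a certificate shaped like this sketch, where <org-domain> stands in for whatever #Platform.org.domain resolves to:

    // (Cert & {Name: "vault-core", Cluster: "core1"}).APIObject
    // metadata: name:      "core1-vault-core"
    // metadata: namespace: "istio-ingress"
    // spec: commonName: "vault.core1.core.<org-domain>"
    // spec: dnsNames: ["vault.core1.core.<org-domain>"]
    // spec: secretName: "core1-vault-core"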

View File

@@ -44,7 +44,8 @@ package holos
_name: string
_cluster: string
_wildcard: true | *false
metadata: name: string | *"\(_cluster)-\(_name)"
// Enforce this value
metadata: name: "\(_cluster)-\(_name)"
metadata: namespace: string | *"istio-ingress"
spec: {
commonName: string | *"\(_name).\(_cluster).\(#Platform.org.domain)"

View File

@@ -4,7 +4,4 @@ package holos
#InputKeys: project: "iam"
// Shared dependencies for all components in this collection.
#DependsOn: _Namespaces
// Common Dependencies
_Namespaces: Namespaces: name: "\(#StageName)-secrets-namespaces"
#DependsOn: namespaces: name: "\(#StageName)-secrets-namespaces"

View File

@@ -0,0 +1,29 @@
package holos
#InputKeys: component: "postgres-certs"
let SecretNames = {
[Name=_]: {name: Name}
"\(_DBName)-primary-tls": _
"\(_DBName)-repl-tls": _
"\(_DBName)-client-tls": _
"\(_DBName)-root-ca": _
}
#Kustomization: spec: targetNamespace: #TargetNamespace
#Kustomization: spec: healthChecks: [
for s in SecretNames {
apiVersion: "external-secrets.io/v1beta1"
kind: "ExternalSecret"
name: s.name
namespace: #TargetNamespace
},
]
#KubernetesObjects & {
apiObjects: {
for s in SecretNames {
ExternalSecret: "\(s.name)": _
}
}
}
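
For illustration, with _DBName set to "zitadel" as elsewhere in this project, the first generated health check entry evaluates to roughly:

    {
        apiVersion: "external-secrets.io/v1beta1"
        kind:       "ExternalSecret"
        name:       "zitadel-primary-tls"
        namespace:  #TargetNamespace
    }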

View File

@@ -0,0 +1,189 @@
package holos
#InputKeys: component: "postgres"
#DependsOn: "postgres-certs": _
let Cluster = #Platform.clusters[#ClusterName]
let S3Secret = "pgo-s3-creds"
let ZitadelUser = _DBName
let ZitadelAdmin = "\(_DBName)-admin"
// This must be an external storage bucket for our architecture.
let BucketRepoName = "repo2"
// Restore options. Set the timestamp to a known good point in time.
// time="2024-03-11T17:08:58Z" level=info msg="crunchy-pgbackrest ends"
// let RestoreOptions = ["--type=time", "--target=\"2024-03-11 17:10:00+00\""]
// Restore the most recent backup.
let RestoreOptions = []
#Kustomization: spec: healthChecks: [
{
apiVersion: "external-secrets.io/v1beta1"
kind: "ExternalSecret"
name: S3Secret
namespace: #TargetNamespace
},
{
apiVersion: "postgres-operator.crunchydata.com/v1beta1"
kind: "PostgresCluster"
name: _DBName
namespace: #TargetNamespace
},
]
#KubernetesObjects & {
apiObjects: {
ExternalSecret: "\(S3Secret)": _
PostgresCluster: db: #PostgresCluster & HighlyAvailable & {
metadata: name: _DBName
metadata: namespace: #TargetNamespace
spec: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.2-0"
postgresVersion: 16
// Custom certs are necessary for streaming standby replication which we use to replicate between two regions.
// Refer to https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#streaming-standby
customTLSSecret: name: "\(_DBName)-primary-tls"
customReplicationTLSSecret: name: "\(_DBName)-repl-tls"
// Refer to https://access.crunchydata.com/documentation/postgres-operator/latest/references/crd/5.5.x/postgrescluster#postgresclusterspecusersindex
users: [
{name: ZitadelUser},
// NOTE: Users with the SUPERUSER role cannot log in through pgbouncer. Use options that allow the zitadel admin to use pgbouncer.
// Refer to: https://github.com/CrunchyData/postgres-operator/issues/3095#issuecomment-1904712211
{name: ZitadelAdmin, options: "CREATEDB CREATEROLE", databases: [_DBName, "postgres"]},
]
users: [...{databases: [_DBName, ...]}]
instances: [{
replicas: 2
dataVolumeClaimSpec: {
accessModes: ["ReadWriteOnce"]
resources: requests: storage: "10Gi"
}
}]
standby: {
repoName: BucketRepoName
if Cluster.primary {
enabled: false
}
if !Cluster.primary {
enabled: true
}
}
// Restore from backup if and only if the cluster is primary
if Cluster.primary {
dataSource: pgbackrest: {
stanza: "db"
configuration: backups.pgbackrest.configuration
// Restore from a known good full backup
options: RestoreOptions
global: {
"\(BucketRepoName)-path": "/pgbackrest/\(#TargetNamespace)/\(metadata.name)/\(BucketRepoName)"
"\(BucketRepoName)-cipher-type": "aes-256-cbc"
}
repo: {
name: BucketRepoName
s3: backups.pgbackrest.repos[1].s3
}
}
}
// Refer to https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups
backups: pgbackrest: {
configuration: [{secret: name: S3Secret}]
// Defines details for manual pgBackRest backup Jobs
manual: {
// Note: the repoName value must match the config keys in the S3Secret.
// This must be an external repository for backup / restore / regional failovers.
repoName: BucketRepoName
options: ["--type=full", ...]
}
// Defines details for performing an in-place restore using pgBackRest
restore: {
// Enables triggering a restore by annotating the postgrescluster with postgres-operator.crunchydata.com/pgbackrest-restore="$(date)"
enabled: true
repoName: BucketRepoName
}
global: {
// Store only one full backup in the PV because it's more expensive than object storage.
"\(repos[0].name)-retention-full": "1"
// Store 14 days of full backups in the bucket.
"\(BucketRepoName)-retention-full": string | *"14"
"\(BucketRepoName)-retention-full-type": "count" | *"time" // time in days
// Refer to https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups#encryption
"\(BucketRepoName)-cipher-type": "aes-256-cbc"
// "The convention we recommend for setting this variable is /pgbackrest/$NAMESPACE/$CLUSTER_NAME/repoN"
// Ref: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups#understanding-backup-configuration-and-basic-operations
"\(BucketRepoName)-path": "/pgbackrest/\(#TargetNamespace)/\(metadata.name)/\(manual.repoName)"
}
repos: [
{
name: "repo1"
volume: volumeClaimSpec: {
accessModes: ["ReadWriteOnce"]
resources: requests: storage: string | *"4Gi"
}
},
{
name: BucketRepoName
// Full backup weekly on Sunday at 1am, differential daily at 1am every day except Sunday.
schedules: full: string | *"0 1 * * 0"
schedules: differential: string | *"0 1 * * 1-6"
s3: {
bucket: string | *"\(#Platform.org.name)-zitadel-backups"
region: string | *#Backups.s3.region
endpoint: string | *"s3.dualstack.\(region).amazonaws.com"
}
},
]
}
}
}
}
}
// Refer to https://github.com/holos-run/postgres-operator-examples/blob/main/kustomize/high-availability/ha-postgres.yaml
let HighlyAvailable = {
apiVersion: "postgres-operator.crunchydata.com/v1beta1"
kind: "PostgresCluster"
metadata: name: string
spec: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.2-0"
postgresVersion: 16
instances: [{
name: "pgha1"
replicas: 2
dataVolumeClaimSpec: {
accessModes: ["ReadWriteOnce"]
resources: requests: storage: string | *"10Gi"
}
affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: [{
weight: 1
podAffinityTerm: {
topologyKey: "kubernetes.io/hostname"
labelSelector: matchLabels: {
"postgres-operator.crunchydata.com/cluster": metadata.name
"postgres-operator.crunchydata.com/instance-set": name
}
}
}]
}]
backups: pgbackrest: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.49-0"
}
proxy: pgBouncer: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-pgbouncer:ubi8-1.21-3"
replicas: 2
affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: [{
weight: 1
podAffinityTerm: {
topologyKey: "kubernetes.io/hostname"
labelSelector: matchLabels: {
"postgres-operator.crunchydata.com/cluster": metadata.name
"postgres-operator.crunchydata.com/role": "pgbouncer"
}
}
}]
}
}
}
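
As the restore block above notes, an in-place restore is triggered by annotating the PostgresCluster. A sketch of the command, assuming the target namespace renders to prod-iam-zitadel:

    // kubectl annotate -n prod-iam-zitadel postgrescluster/zitadel --overwrite \
    //     postgres-operator.crunchydata.com/pgbackrest-restore="$(date)"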

View File

@@ -0,0 +1,10 @@
package holos
#TargetNamespace: #InstancePrefix + "-zitadel"
// _DBName is the database name used across multiple holos components in this project
_DBName: "zitadel"
// The canonical login domain for the entire platform. Zitadel will be active
// on a single cluster at a time, but always accessible from this domain.
#ExternalDomain: "login.\(#Platform.org.domain)"

View File

@@ -125,7 +125,7 @@ package holos
securityContext: {}
// Additional environment variables
env: []
env: [...]
// - name: ZITADEL_DATABASE_POSTGRES_HOST
// valueFrom:
// secretKeyRef:

View File

@@ -0,0 +1,89 @@
package holos
#Values: {
// Database credentials
// Refer to https://access.crunchydata.com/documentation/postgres-operator/5.2.0/architecture/user-management/
// Refer to https://zitadel.com/docs/self-hosting/manage/database#postgres
env: [
// Connection
{
name: "ZITADEL_DATABASE_POSTGRES_HOST"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)"
valueFrom: secretKeyRef: key: "pgbouncer-host"
},
{
name: "ZITADEL_DATABASE_POSTGRES_PORT"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)"
valueFrom: secretKeyRef: key: "pgbouncer-port"
},
{
name: "ZITADEL_DATABASE_POSTGRES_DATABASE"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)"
valueFrom: secretKeyRef: key: "dbname"
},
// The <db>-pguser-<db> secret contains creds for the unprivileged zitadel user
{
name: "ZITADEL_DATABASE_POSTGRES_USER_USERNAME"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)"
valueFrom: secretKeyRef: key: "user"
},
{
name: "ZITADEL_DATABASE_POSTGRES_USER_PASSWORD"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)"
valueFrom: secretKeyRef: key: "password"
},
// The postgres component configures privileged postgres user creds.
{
name: "ZITADEL_DATABASE_POSTGRES_ADMIN_USERNAME"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)-admin"
valueFrom: secretKeyRef: key: "user"
},
{
name: "ZITADEL_DATABASE_POSTGRES_ADMIN_PASSWORD"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)-admin"
valueFrom: secretKeyRef: key: "password"
},
// CA Cert issued by PGO which issued the pgbouncer tls cert
{
name: "ZITADEL_DATABASE_POSTGRES_USER_SSL_ROOTCERT"
value: "/\(_PGBouncer)/ca.crt"
},
{
name: "ZITADEL_DATABASE_POSTGRES_ADMIN_SSL_ROOTCERT"
value: "/\(_PGBouncer)/ca.crt"
},
]
// Refer to https://zitadel.com/docs/self-hosting/manage/database
zitadel: {
// Zitadel master key
masterkeySecretName: "zitadel-masterkey"
// dbSslCaCrtSecret: "pgo-root-cacert"
// All settings: https://zitadel.com/docs/self-hosting/manage/configure#runtime-configuration-file
// Helm interface: https://github.com/zitadel/zitadel-charts/blob/zitadel-7.4.0/charts/zitadel/values.yaml#L20-L21
configmapConfig: {
// NOTE: You can change the ExternalDomain, ExternalPort and ExternalSecure
// configuration options at any time. However, for ZITADEL to be able to
// pick up the changes, you need to rerun ZITADEL's setup phase. Do so with
// kubectl delete job zitadel-setup, then re-apply the new config.
//
// https://zitadel.com/docs/self-hosting/manage/custom-domain
ExternalSecure: true
ExternalDomain: #ExternalDomain
ExternalPort: 443
TLS: Enabled: false
// Database connection credentials are injected via environment variables from the db-pguser-db secret.
Database: postgres: {
MaxOpenConns: 25
MaxIdleConns: 10
MaxConnLifetime: "1h"
MaxConnIdleTime: "5m"
// verify-full verifies the host name matches cert dns names in addition to root ca signature
User: SSL: Mode: "verify-full"
Admin: SSL: Mode: "verify-full"
}
}
}
}

View File

@@ -0,0 +1,164 @@
package holos
import "encoding/yaml"
let Name = "zitadel"
#InputKeys: component: Name
#DependsOn: postgres: _
// Upstream helm chart doesn't specify the namespace field for all resources.
#Kustomization: spec: {
targetNamespace: #TargetNamespace
wait: false
}
if #IsPrimaryCluster == true {
#Kustomization: spec: healthChecks: [
{
apiVersion: "apps/v1"
kind: "Deployment"
name: Name
namespace: #TargetNamespace
},
{
apiVersion: "batch/v1"
kind: "Job"
name: "\(Name)-init"
namespace: #TargetNamespace
},
{
apiVersion: "batch/v1"
kind: "Job"
name: "\(Name)-setup"
namespace: #TargetNamespace
},
]
}
#HelmChart & {
namespace: #TargetNamespace
enableHooks: true
chart: {
name: Name
version: "7.9.0"
repository: {
name: Name
url: "https://charts.zitadel.com"
}
}
values: #Values
apiObjects: {
ExternalSecret: "zitadel-masterkey": _
VirtualService: "\(Name)": {
metadata: name: Name
metadata: namespace: #TargetNamespace
spec: hosts: ["login.\(#Platform.org.domain)"]
spec: gateways: ["istio-ingress/default"]
spec: http: [{route: [{destination: host: Name}]}]
}
}
}
// TODO: Generalize this common pattern of injecting the istio sidecar into a Deployment
let IstioInject = [{op: "add", path: "/spec/template/metadata/labels/sidecar.istio.io~1inject", value: "true"}]
_PGBouncer: "pgbouncer"
let DatabaseCACertPatch = [
{
op: "add"
path: "/spec/template/spec/volumes/-"
value: {
name: _PGBouncer
secret: {
secretName: "\(_DBName)-pgbouncer"
items: [{key: "pgbouncer-frontend.ca-roots", path: "ca.crt"}]
}
}
},
{
op: "add"
path: "/spec/template/spec/containers/0/volumeMounts/-"
value: {
name: _PGBouncer
mountPath: "/" + _PGBouncer
}
},
]
let CAPatch = #Patch & {
target: {
group: "apps" | "batch"
version: "v1"
kind: "Job" | "Deployment"
name: string
}
patch: yaml.Marshal(DatabaseCACertPatch)
}
#KustomizePatches: {
mesh: {
target: {
group: "apps"
version: "v1"
kind: "Deployment"
name: Name
}
patch: yaml.Marshal(IstioInject)
}
deploymentCA: CAPatch & {
target: group: "apps"
target: kind: "Deployment"
target: name: Name
}
initJob: CAPatch & {
target: group: "batch"
target: kind: "Job"
target: name: "\(Name)-init"
}
setupJob: CAPatch & {
target: group: "batch"
target: kind: "Job"
target: name: "\(Name)-setup"
}
testDisable: {
target: {
version: "v1"
kind: "Pod"
name: "\(Name)-test-connection"
}
patch: yaml.Marshal(DisableFluxPatch)
}
if #IsPrimaryCluster == false {
fluxDisable: {
target: {
group: "apps"
version: "v1"
kind: "Deployment"
name: Name
}
patch: yaml.Marshal(DisableFluxPatch)
}
initDisable: {
target: {
group: "batch"
version: "v1"
kind: "Job"
name: "\(Name)-init"
}
patch: yaml.Marshal(DisableFluxPatch)
}
setupDisable: {
target: {
group: "batch"
version: "v1"
kind: "Job"
name: "\(Name)-setup"
}
patch: yaml.Marshal(DisableFluxPatch)
}
}
}
let DisableFluxPatch = [{op: "replace", path: "/metadata/annotations/kustomize.toolkit.fluxcd.io~1reconcile", value: "disabled"}]
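
For reference, yaml.Marshal(DisableFluxPatch) renders the JSON 6902 patch below, which sets the flux reconcile annotation to disabled on the targeted objects:

    // - op: replace
    //   path: /metadata/annotations/kustomize.toolkit.fluxcd.io~1reconcile
    //   value: disabled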

View File

@@ -1,10 +0,0 @@
package holos
#TargetNamespace: #InstancePrefix + "-zitadel"
#DB: {
Host: "crdb-public"
}
// The canonical login domain for the entire platform. Zitadel will be active on a single cluster at a time, but always accessible from this hostname.
#ExternalDomain: "login.\(#Platform.org.domain)"

View File

@@ -1,34 +0,0 @@
package holos
#Values: {
// https://raw.githubusercontent.com/zitadel/zitadel-charts/main/examples/4-cockroach-secure/zitadel-values.yaml
zitadel: {
masterkeySecretName: "zitadel-masterkey"
// https://github.com/zitadel/zitadel-charts/blob/zitadel-7.4.0/charts/zitadel/templates/configmap.yaml#L13
configmapConfig: {
// NOTE: You can change the ExternalDomain, ExternalPort and ExternalSecure
// configuration options at any time. However, for ZITADEL to be able to
// pick up the changes, you need to rerun ZITADEL's setup phase. Do so with
// kubectl delete job zitadel-setup, then re-apply the new config.
//
// https://zitadel.com/docs/self-hosting/manage/custom-domain
ExternalDomain: #ExternalDomain
ExternalPort: 443
ExternalSecure: true
TLS: Enabled: false
Database: Cockroach: {
Host: #DB.Host
User: SSL: Mode: "verify-full"
Admin: SSL: Mode: "verify-full"
}
}
// Managed by crdb component
dbSslCaCrtSecret: "cockroach-ca"
dbSslAdminCrtSecret: "cockroachdb-root"
// Managed by this component
dbSslUserCrtSecret: "cockroachdb-zitadel"
}
}


@@ -1,55 +0,0 @@
package holos
import "encoding/yaml"
let Name = "zitadel"
#InputKeys: component: Name
// Upstream helm chart doesn't specify the namespace field for all resources.
#Kustomization: spec: targetNamespace: #TargetNamespace
#HelmChart & {
namespace: #TargetNamespace
chart: {
name: Name
version: "7.9.0"
repository: {
name: Name
url: "https://charts.zitadel.com"
}
}
values: #Values
apiObjects: {
ExternalSecret: masterkey: #ExternalSecret & {
_name: "zitadel-masterkey"
}
ExternalSecret: zitadel: #ExternalSecret & {
_name: "cockroachdb-zitadel"
}
VirtualService: zitadel: #VirtualService & {
metadata: name: Name
metadata: namespace: #TargetNamespace
spec: hosts: ["login.\(#Platform.org.domain)"]
spec: gateways: ["istio-ingress/default"]
spec: http: [{route: [{destination: host: Name}]}]
}
}
}
// TODO: Generalize this common pattern of injecting the istio sidecar into a Deployment
let Patch = [{op: "add", path: "/spec/template/metadata/labels/sidecar.istio.io~1inject", value: "true"}]
#Kustomize: {
patches: [
{
target: {
group: "apps"
version: "v1"
kind: "Deployment"
name: Name
}
patch: yaml.Marshal(Patch)
},
]
}


@@ -4,6 +4,6 @@ package holos
#InputKeys: project: "github"
#DependsOn: Namespaces: name: "prod-secrets-namespaces"
#TargetNamespace: #InputKeys.component
#ARCSystemNamespace: "arc-system"
#HelmChart: namespace: #TargetNamespace
#HelmChart: chart: version: "0.8.3"


@@ -0,0 +1,40 @@
package holos
#TargetNamespace: "arc-runner"
#InputKeys: component: "arc-runner"
#Kustomization: spec: targetNamespace: #TargetNamespace
let GitHubConfigSecret = "controller-manager"
// Just sync the external secret, don't configure the scale set
// Work around https://github.com/actions/actions-runner-controller/issues/3351
if #IsPrimaryCluster == false {
#KubernetesObjects & {
apiObjects: ExternalSecret: "\(GitHubConfigSecret)": _
}
}
// Put the scale set on the primary cluster.
if #IsPrimaryCluster == true {
#HelmChart & {
values: {
#Values
controllerServiceAccount: name: "gha-rs-controller"
controllerServiceAccount: namespace: "arc-system"
githubConfigSecret: GitHubConfigSecret
githubConfigUrl: "https://github.com/" + #Platform.org.github.orgs.primary.name
}
apiObjects: ExternalSecret: "\(values.githubConfigSecret)": _
chart: {
// Match the gha-base-name in the chart _helpers.tpl to avoid long full names.
// NOTE: Unfortunately the INSTALLATION_NAME is used as the helm release
// name and GitHub removed support for runner labels, so the only way to
// specify which runner a workflow runs on is using this helm release name.
// The quote is "Update the INSTALLATION_NAME value carefully. You will use
// the installation name as the value of runs-on in your workflows." Refer to
// https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/quickstart-for-actions-runner-controller
release: "gha-rs"
name: "oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set"
}
}
}
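Since the release name doubles as the runs-on target (see the INSTALLATION_NAME note above), a workflow selects these runners roughly like this hypothetical snippet:

jobs:
  build:
    runs-on: gha-rs  # must match the helm release name
    steps:
    - uses: actions/checkout@v4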


@@ -1,6 +1,6 @@
package holos
#TargetNamespace: "arc-system"
#TargetNamespace: #ARCSystemNamespace
#InputKeys: component: "arc-system"
#HelmChart & {


@@ -0,0 +1,26 @@
package holos
import "list"
#TargetNamespace: "default"
#InputKeys: {
project: "secrets"
component: "namespaces"
}
#KubernetesObjects & {
apiObjects: {
// #ManagedNamespaces is the set of all namespaces across all clusters in the platform.
for k, ns in #ManagedNamespaces {
if list.Contains(ns.clusterNames, #ClusterName) {
Namespace: "\(k)": #Namespace & ns.namespace
}
}
// #PlatformNamespaces is deprecated in favor of #ManagedNamespaces.
for ns in #PlatformNamespaces {
Namespace: "\(ns.name)": #Namespace & {metadata: ns}
}
}
}


@@ -1,7 +1,8 @@
package holos
// The primary istio Gateway, named default
import "list"
// The primary istio Gateway, named default
let Name = "gateway"
#InputKeys: component: Name
@@ -31,5 +32,19 @@ let LoginCert = #PlatformCerts.login
},
]
}
for k, svc in #OptionalServices {
if svc.enabled && list.Contains(svc.clusterNames, #ClusterName) {
Gateway: "\(svc.name)": #Gateway & {
metadata: name: svc.name
metadata: namespace: #TargetNamespace
spec: selector: istio: "ingressgateway"
spec: servers: [for s in svc.servers {s}]
}
for k, s in svc.servers {
ExternalSecret: "\(s.tls.credentialName)": _
}
}
}
}
}
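For a hypothetical enabled optional service named vault, the loop above renders a Gateway shaped roughly like this (the istio-ingress namespace and host are assumptions):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: vault
  namespace: istio-ingress
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts: ["vault.example.com"]
    port: {name: https, number: 443, protocol: HTTPS}
    tls: {mode: SIMPLE, credentialName: vault-server-cert}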


@@ -18,9 +18,7 @@ let Cert = #PlatformCerts[SecretName]
#KubernetesObjects & {
apiObjects: {
ExternalSecret: httpbin: #ExternalSecret & {
_name: Cert.spec.secretName
}
ExternalSecret: "\(Cert.spec.secretName)": _
Deployment: httpbin: #Deployment & {
metadata: Metadata
spec: selector: matchLabels: MatchLabels


@@ -63,7 +63,7 @@ let RedirectMetaName = {
// https-redirect
_APIObjects: {
Gateway: {
httpsRedirect: #Gateway & {
"\(RedirectMetaName.name)": #Gateway & {
metadata: RedirectMetaName
spec: selector: GatewayLabels
spec: servers: [{
@@ -79,7 +79,7 @@ _APIObjects: {
}
}
VirtualService: {
httpsRedirect: #VirtualService & {
"\(RedirectMetaName.name)": #VirtualService & {
metadata: RedirectMetaName
spec: hosts: ["*"]
spec: gateways: [RedirectMetaName.name]


@@ -0,0 +1,6 @@
package holos
#DependsOn: Namespaces: name: "prod-secrets-namespaces"
#DependsOn: CRDS: name: "\(#InstancePrefix)-crds"
#InputKeys: component: "controller"
{} & #KustomizeBuild


@@ -0,0 +1,21 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: postgres-operator
labels:
- includeTemplates: true
  pairs:
    app.kubernetes.io/name: pgo
    # The version below should match the version on the PostgresCluster CRD
    app.kubernetes.io/version: 5.5.1
    postgres-operator.crunchydata.com/control-plane: postgres-operator
resources:
- ./rbac/cluster
- ./manager
images:
- name: postgres-operator
  newName: registry.developers.crunchydata.com/crunchydata/postgres-operator
  newTag: ubi8-5.5.1-0


@@ -0,0 +1,5 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- manager.yaml


@@ -0,0 +1,56 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgo
  labels:
    postgres-operator.crunchydata.com/control-plane: postgres-operator
spec:
  replicas: 1
  strategy: { type: Recreate }
  selector:
    matchLabels:
      postgres-operator.crunchydata.com/control-plane: postgres-operator
  template:
    metadata:
      labels:
        postgres-operator.crunchydata.com/control-plane: postgres-operator
    spec:
      containers:
      - name: operator
        image: postgres-operator
        env:
        - name: PGO_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CRUNCHY_DEBUG
          value: "true"
        - name: RELATED_IMAGE_POSTGRES_15
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-15.6-0"
        - name: RELATED_IMAGE_POSTGRES_15_GIS_3.3
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-15.6-3.3-0"
        - name: RELATED_IMAGE_POSTGRES_16
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.2-0"
        - name: RELATED_IMAGE_POSTGRES_16_GIS_3.3
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-16.2-3.3-0"
        - name: RELATED_IMAGE_POSTGRES_16_GIS_3.4
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-16.2-3.4-0"
        - name: RELATED_IMAGE_PGADMIN
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-pgadmin4:ubi8-4.30-22"
        - name: RELATED_IMAGE_PGBACKREST
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.49-0"
        - name: RELATED_IMAGE_PGBOUNCER
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-pgbouncer:ubi8-1.21-3"
        - name: RELATED_IMAGE_PGEXPORTER
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres-exporter:ubi8-0.15.0-3"
        - name: RELATED_IMAGE_PGUPGRADE
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-upgrade:ubi8-5.5.1-0"
        - name: RELATED_IMAGE_STANDALONE_PGADMIN
          value: "registry.developers.crunchydata.com/crunchydata/crunchy-pgadmin4:ubi8-7.8-3"
        securityContext:
          allowPrivilegeEscalation: false
          capabilities: { drop: [ALL] }
          readOnlyRootFilesystem: true
          runAsNonRoot: true
      serviceAccountName: pgo


@@ -0,0 +1,7 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- service_account.yaml
- role.yaml
- role_binding.yaml


@@ -0,0 +1,146 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: postgres-operator
rules:
- apiGroups:
  - ''
  resources:
  - configmaps
  - persistentvolumeclaims
  - secrets
  - services
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - ''
  resources:
  - endpoints
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - watch
- apiGroups:
  - ''
  resources:
  - endpoints/restricted
  - pods/exec
  verbs:
  - create
- apiGroups:
  - ''
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - ''
  resources:
  - pods
  verbs:
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - ''
  resources:
  - serviceaccounts
  verbs:
  - create
  - get
  - list
  - patch
  - watch
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - jobs
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - postgres-operator.crunchydata.com
  resources:
  - pgadmins
  - pgupgrades
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - postgres-operator.crunchydata.com
  resources:
  - pgadmins/finalizers
  - pgupgrades/finalizers
  - postgresclusters/finalizers
  verbs:
  - update
- apiGroups:
  - postgres-operator.crunchydata.com
  resources:
  - pgadmins/status
  - pgupgrades/status
  - postgresclusters/status
  verbs:
  - patch
- apiGroups:
  - postgres-operator.crunchydata.com
  resources:
  - postgresclusters
  verbs:
  - get
  - list
  - patch
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - rolebindings
  - roles
  verbs:
  - create
  - get
  - list
  - patch
  - watch


@@ -0,0 +1,14 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: postgres-operator
  labels:
    postgres-operator.crunchydata.com/control-plane: postgres-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: postgres-operator
subjects:
- kind: ServiceAccount
  name: pgo


@@ -0,0 +1,7 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pgo
  labels:
    postgres-operator.crunchydata.com/control-plane: postgres-operator


@@ -0,0 +1,7 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- service_account.yaml
- role.yaml
- role_binding.yaml


@@ -0,0 +1,146 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: postgres-operator
rules:
- apiGroups:
  - ''
  resources:
  - configmaps
  - persistentvolumeclaims
  - secrets
  - services
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - ''
  resources:
  - endpoints
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - watch
- apiGroups:
  - ''
  resources:
  - endpoints/restricted
  - pods/exec
  verbs:
  - create
- apiGroups:
  - ''
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - ''
  resources:
  - pods
  verbs:
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - ''
  resources:
  - serviceaccounts
  verbs:
  - create
  - get
  - list
  - patch
  - watch
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - jobs
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups:
  - postgres-operator.crunchydata.com
  resources:
  - pgadmins
  - pgupgrades
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - postgres-operator.crunchydata.com
  resources:
  - pgadmins/finalizers
  - pgupgrades/finalizers
  - postgresclusters/finalizers
  verbs:
  - update
- apiGroups:
  - postgres-operator.crunchydata.com
  resources:
  - pgadmins/status
  - pgupgrades/status
  - postgresclusters/status
  verbs:
  - patch
- apiGroups:
  - postgres-operator.crunchydata.com
  resources:
  - postgresclusters
  verbs:
  - get
  - list
  - patch
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - rolebindings
  - roles
  verbs:
  - create
  - get
  - list
  - patch
  - watch


@@ -0,0 +1,14 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: postgres-operator
  labels:
    postgres-operator.crunchydata.com/control-plane: postgres-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: postgres-operator
subjects:
- kind: ServiceAccount
  name: pgo


@@ -0,0 +1,7 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pgo
  labels:
    postgres-operator.crunchydata.com/control-plane: postgres-operator


@@ -0,0 +1,7 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- bases/postgres-operator.crunchydata.com_postgresclusters.yaml
- bases/postgres-operator.crunchydata.com_pgupgrades.yaml
- bases/postgres-operator.crunchydata.com_pgadmins.yaml


@@ -0,0 +1,6 @@
package holos
// Refer to https://github.com/CrunchyData/postgres-operator-examples/tree/main/kustomize/install/crd
#InputKeys: component: "crds"
{} & #KustomizeBuild


@@ -0,0 +1,4 @@
package holos
// Crunchy Postgres Operator
#InputKeys: project: "pgo"


@@ -93,7 +93,14 @@ provisioner get serviceaccount -A --selector=holos.run/job.name=\(NAME) --output
# Create the tokens
mkdir tokens
jq -r '.items[].metadata | "provisioner -n \\(.namespace) create token --duration=12h \\(.name) > tokens/\\(.namespace).\\(.name).jwt"' serviceaccounts.json | bash -x
kubectl get namespaces -o name > namespaces.txt
# Iterate over local namespaces
while IFS= read -r NAMESPACE; do
echo "Getting token for local cluster $NAMESPACE" >&2
jq -r '.items[] | select("namespace/"+.metadata.namespace == "'${NAMESPACE}'") | .metadata | "provisioner -n \\(.namespace) create token --duration=12h \\(.name) > tokens/\\(.namespace).\\(.name).jwt"' serviceaccounts.json | bash -x
done < namespaces.txt
# Create the secrets
mksecret tokens/*.jwt
@@ -124,6 +131,11 @@ kubectl apply --server-side=true -f secrets.yaml
resources: ["secrets"]
verbs: ["*"]
},
{
apiGroups: [""]
resources: ["namespaces"]
verbs: ["list"]
},
]
},
// Bind the Role to the ServiceAccount for the Job.


@@ -1,5 +1,7 @@
package holos
import "list"
#DependsOn: _ESOCreds
#TargetNamespace: "default"
@@ -30,5 +32,12 @@ package holos
"\(Kind)": "\(NS)/\(Name)": obj
}
}
for nsName, ns in #ManagedNamespaces {
if list.Contains(ns.clusterNames, #ClusterName) {
let obj = #SecretStore & {_namespace: nsName}
SecretStore: "\(nsName)/\(obj.metadata.name)": obj
}
}
}
}


@@ -48,7 +48,7 @@ package holos
// (required) Ceph pool into which the RBD image shall be created
// eg: pool: replicapool
pool: "k8s-dev"
pool: #Platform.clusters[#ClusterName].pool
// (optional) RBD image features, CSI creates image with image-format 2 CSI
// RBD currently supports `layering`, `journaling`, `exclusive-lock`,

File diff suppressed because it is too large


@@ -0,0 +1,146 @@
package holos
#Values: {
// Vault Helm Chart Holos Values
global: {
enabled: true
// Istio handles this
tlsDisable: true
}
injector: enabled: false
server: {
image: {
// repository: "hashicorp/vault"
repository: "quay.io/holos/hashicorp/vault"
tag: "1.14.10"
// Overrides the default Image Pull Policy
pullPolicy: "IfNotPresent"
}
extraLabels: "sidecar.istio.io/inject": "true"
resources: requests: {
memory: "256Mi"
cpu: "2000m"
}
// limits:
// memory: 1024Mi
// cpu: 2000m
// For HA configuration and because we need to manually init the vault,
// we need to define custom readiness/liveness Probe settings
readinessProbe: {
enabled: true
path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
}
livenessProbe: {
enabled: true
path: "/v1/sys/health?standbyok=true"
initialDelaySeconds: 60
}
// extraEnvironmentVars is a list of extra environment variables to set with
// the stateful set. These could be used to include variables required for
// auto-unseal.
// Vault validates an incomplete chain:
// https://github.com/hashicorp/vault/issues/11318
extraEnvironmentVars: {
GOMAXPROCS: "2"
} // Set to cpu limit, see https://github.com/uber-go/automaxprocs
// extraVolumes is a list of extra volumes to mount. These will be exposed
// to Vault in the path `/vault/userconfig/<name>/`.
extraVolumes: [{
type: "secret"
name: "gcpkms-creds"
}]
// This configures the Vault Statefulset to create a PVC for audit logs.
// See https://www.vaultproject.io/docs/audit/index.html to know more
auditStorage: {
enabled: true
mountPath: "/var/log/vault"
} // For compatibility with the plain Debian VM location.
standalone: {
enabled: false
}
ha: {
enabled: true
replicas: 3
raft: {
enabled: true
setNodeId: true
config: """
ui = true
listener \"tcp\" {
address = \"[::]:8200\"
cluster_address = \"[::]:8201\"
# mTLS is handled by the istio sidecar
tls_disable = \"true\"
# Enable unauthenticated metrics access (necessary for Prometheus Operator)
telemetry {
unauthenticated_metrics_access = true
}
}
telemetry {
prometheus_retention_time = \"30s\"
disable_hostname = true
}
seal \"gcpckms\" {
credentials = \"/vault/userconfig/gcpkms-creds/credentials.json\"
project = \"v6-vault-f15f\"
region = \"us-west1\"
key_ring = \"vault-core\"
crypto_key = \"vault-core-unseal\"
}
# Note: the retry_join leader_api_addr values come from the Stable
# Network ID feature of a Statefulset. See:
# https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id
storage \"raft\" {
path = \"/vault/data\"
retry_join {
leader_api_addr = \"http://vault-0.vault-internal:8200\"
leader_tls_servername = \"vault\"
}
retry_join {
leader_api_addr = \"http://vault-1.vault-internal:8200\"
leader_tls_servername = \"vault\"
}
retry_join {
leader_api_addr = \"http://vault-2.vault-internal:8200\"
leader_tls_servername = \"vault\"
}
autopilot {
cleanup_dead_servers = \"true\"
last_contact_threshold = \"200ms\"
last_contact_failure_threshold = \"10m\"
max_trailing_logs = 250000
min_quorum = 3
server_stabilization_time = \"10s\"
}
}
service_registration \"kubernetes\" {}
"""
}
}
}
// Vault UI (Will be exposed via the service mesh)
ui: {
enabled: true
serviceType: "ClusterIP"
serviceNodePort: null
externalPort: 8200
}
}


@@ -0,0 +1,75 @@
package holos
import "encoding/yaml"
import "list"
let Name = "vault"
#InputKeys: component: Name
#InputKeys: project: "core"
#TargetNamespace: "\(#InstancePrefix)-\(Name)"
let Vault = #OptionalServices[Name]
if Vault.enabled && list.Contains(Vault.clusterNames, #ClusterName) {
#HelmChart & {
namespace: #TargetNamespace
chart: {
name: Name
version: "0.25.0"
repository: {
name: "hashicorp"
url: "https://helm.releases.hashicorp.com"
}
}
values: #Values
apiObjects: {
ExternalSecret: "gcpkms-creds": _
ExternalSecret: "vault-server-cert": _
VirtualService: "\(Name)": {
metadata: name: Name
metadata: namespace: #TargetNamespace
spec: hosts: [for cert in Vault.certs {cert.spec.commonName}]
spec: gateways: ["istio-ingress/\(Name)"]
spec: http: [
{
route: [
{
destination: host: "\(Name)-active"
destination: port: number: 8200
},
]
},
]
}
}
}
#Kustomize: {
patches: [
{
target: {
group: "apps"
version: "v1"
kind: "StatefulSet"
name: Name
}
patch: yaml.Marshal(EnvPatch)
},
]
}
let EnvPatch = [
{
op: "test"
path: "/spec/template/spec/containers/0/env/4/name"
value: "VAULT_ADDR"
},
{
op: "replace"
path: "/spec/template/spec/containers/0/env/4/value"
value: "http://$(VAULT_K8S_POD_NAME):8200"
},
]
}
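The EnvPatch above is a guarded JSON Patch: op: test fails the whole patch if env[4] is no longer VAULT_ADDR, so a chart upgrade that reorders the env list breaks the render loudly instead of silently rewriting the wrong variable. Marshaled, it reads:

- op: test
  path: /spec/template/spec/containers/0/env/4/name
  value: VAULT_ADDR
- op: replace
  path: /spec/template/spec/containers/0/env/4/value
  value: http://$(VAULT_K8S_POD_NAME):8200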


@@ -0,0 +1,3 @@
package holos
#InputKeys: project: "projects"


@@ -1,108 +0,0 @@
package holos
// Manage an Issuer for Zitadel's CockroachDB.
// For the IAM login service, Zitadel connects to CockroachDB using TLS certs for authz.
// Upstream: "The recommended approach is to use cert-manager for certificate management. For details, refer to Deploy cert-manager for mTLS."
// Refer to https://www.cockroachlabs.com/docs/stable/secure-cockroachdb-kubernetes#deploy-cert-manager-for-mtls
#InputKeys: component: "crdb"
#KubernetesObjects & {
apiObjects: {
Issuer: {
// https://github.com/cockroachdb/helm-charts/blob/3dcf96726ebcfe3784afb526ddcf4095a1684aea/README.md?plain=1#L196-L201
crdb: #Issuer & {
_description: "Issues the self signed root ca cert for cockroach db"
metadata: name: #ComponentName
metadata: namespace: #TargetNamespace
spec: selfSigned: {}
}
"crdb-ca-issuer": #Issuer & {
_description: "Issues mtls certs for cockroach db"
metadata: name: "crdb-ca-issuer"
metadata: namespace: #TargetNamespace
spec: ca: secretName: "cockroach-ca"
}
}
Certificate: {
"crdb-ca-cert": #Certificate & {
_description: "Root CA cert for cockroach db"
metadata: name: "crdb-ca-cert"
metadata: namespace: #TargetNamespace
spec: {
commonName: "root"
isCA: true
issuerRef: group: "cert-manager.io"
issuerRef: kind: "Issuer"
issuerRef: name: "crdb"
privateKey: algorithm: "ECDSA"
privateKey: size: 256
secretName: "cockroach-ca"
subject: organizations: ["Cockroach"]
}
}
"crdb-node": #Certificate & {
metadata: name: "crdb-node"
metadata: namespace: #TargetNamespace
spec: {
commonName: "node"
dnsNames: [
"localhost",
"127.0.0.1",
"crdb-public",
"crdb-public.\(#TargetNamespace)",
"crdb-public.\(#TargetNamespace).svc.cluster.local",
"*.crdb",
"*.crdb.\(#TargetNamespace)",
"*.crdb.\(#TargetNamespace).svc.cluster.local",
]
duration: "876h"
issuerRef: group: "cert-manager.io"
issuerRef: kind: "Issuer"
issuerRef: name: "crdb-ca-issuer"
privateKey: algorithm: "RSA"
privateKey: size: 2048
renewBefore: "168h"
secretName: "cockroachdb-node"
subject: organizations: ["Cockroach"]
usages: ["digital signature", "key encipherment", "server auth", "client auth"]
}
}
"crdb-root-client": #Certificate & {
metadata: name: "crdb-root-client"
metadata: namespace: #TargetNamespace
spec: {
commonName: "root"
duration: "672h"
issuerRef: group: "cert-manager.io"
issuerRef: kind: "Issuer"
issuerRef: name: "crdb-ca-issuer"
privateKey: algorithm: "RSA"
privateKey: size: 2048
renewBefore: "48h"
secretName: "cockroachdb-root"
subject: organizations: ["Cockroach"]
usages: ["digital signature", "key encipherment", "client auth"]
}
}
}
Certificate: zitadel: #Certificate & {
metadata: name: "crdb-zitadel-client"
metadata: namespace: #TargetNamespace
spec: {
commonName: "zitadel"
issuerRef: {
group: "cert-manager.io"
kind: "Issuer"
name: "crdb-ca-issuer"
}
privateKey: algorithm: "RSA"
privateKey: size: 2048
renewBefore: "48h0m0s"
secretName: "cockroachdb-zitadel"
subject: organizations: ["Cockroach"]
usages: ["digital signature", "key encipherment", "client auth"]
}
}
}
}


@@ -0,0 +1,101 @@
package holos
// Manage an Issuer for the database.
// Both cockroach and postgres handle tls database connections with cert manager
// PGO: https://github.com/CrunchyData/postgres-operator-examples/tree/main/kustomize/certmanager/certman
// CRDB: https://github.com/cockroachdb/helm-charts/blob/3dcf96726ebcfe3784afb526ddcf4095a1684aea/README.md?plain=1#L196-L201
// Refer to [Using Cert Manager to Deploy TLS for Postgres on Kubernetes](https://www.crunchydata.com/blog/using-cert-manager-to-deploy-tls-for-postgres-on-kubernetes)
#InputKeys: component: "postgres-certs"
let SelfSigned = "\(_DBName)-selfsigned"
let RootCA = "\(_DBName)-root-ca"
let Orgs = ["Database"]
#KubernetesObjects & {
apiObjects: {
// Put everything in the target namespace.
[_]: {
[Name=_]: {
metadata: name: Name
metadata: namespace: #TargetNamespace
}
}
Issuer: {
"\(SelfSigned)": #Issuer & {
_description: "Self signed issuer to issue ca certs"
metadata: name: SelfSigned
spec: selfSigned: {}
}
"\(RootCA)": #Issuer & {
_description: "Root signed intermediate ca to issue mtls database certs"
metadata: name: RootCA
spec: ca: secretName: RootCA
}
}
Certificate: {
"\(RootCA)": #Certificate & {
_description: "Root CA cert for database"
metadata: name: RootCA
spec: {
commonName: RootCA
isCA: true
issuerRef: group: "cert-manager.io"
issuerRef: kind: "Issuer"
issuerRef: name: SelfSigned
privateKey: algorithm: "ECDSA"
privateKey: size: 256
secretName: RootCA
subject: organizations: Orgs
}
}
"\(_DBName)-primary-tls": #DatabaseCert & {
// PGO managed name is "<cluster name>-cluster-cert" e.g. zitadel-cluster-cert
spec: {
commonName: "\(_DBName)-primary"
dnsNames: [
commonName,
"\(commonName).\(#TargetNamespace)",
"\(commonName).\(#TargetNamespace).svc",
"\(commonName).\(#TargetNamespace).svc.cluster.local",
"localhost",
"127.0.0.1",
]
usages: ["digital signature", "key encipherment"]
}
}
"\(_DBName)-repl-tls": #DatabaseCert & {
spec: {
commonName: "_crunchyrepl"
dnsNames: [commonName]
usages: ["digital signature", "key encipherment"]
}
}
"\(_DBName)-client-tls": #DatabaseCert & {
spec: {
commonName: "\(_DBName)-client"
dnsNames: [commonName]
usages: ["digital signature", "key encipherment"]
}
}
}
}
}
#DatabaseCert: #Certificate & {
metadata: name: string
metadata: namespace: #TargetNamespace
spec: {
duration: "2160h" // 90d
renewBefore: "360h" // 15d
issuerRef: group: "cert-manager.io"
issuerRef: kind: "Issuer"
issuerRef: name: RootCA
privateKey: algorithm: "ECDSA"
privateKey: size: 256
secretName: metadata.name
subject: organizations: Orgs
}
}


@@ -0,0 +1,7 @@
# Database Certs
This component issues postgres certificates from the provisioner cluster using certmanager.
The purpose is to define customTLSSecret and customReplicationTLSSecret to provide certs that allow the standby to authenticate to the primary. For this type of standby, you must use custom TLS.
Refer to the PGO [Streaming Standby](https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#streaming-standby) tutorial.
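A sketch of how a PostgresCluster might reference these secrets, assuming _DBName resolves to zitadel (field names per the PGO custom TLS documentation):

apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: zitadel
spec:
  customTLSSecret:
    name: zitadel-primary-tls
  customReplicationTLSSecret:
    name: zitadel-repl-tls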


@@ -2,6 +2,5 @@ package holos
#TargetNamespace: #InstancePrefix + "-zitadel"
#DB: {
Host: "crdb-public"
}
// _DBName is the database name used across multiple holos components in this project
_DBName: "zitadel"


@@ -0,0 +1,13 @@
package holos
let Vault = #OptionalServices.vault
if Vault.enabled {
#KubernetesObjects & {
apiObjects: {
for k, obj in Vault.certs {
"\(obj.kind)": "\(obj.metadata.name)": obj
}
}
}
}

View File

@@ -24,6 +24,14 @@ ksObjects: []
"\(Kind)": "\(ns.name)/\(Name)": obj
}
}
for nsName, ns in #ManagedNamespaces {
for obj in (#PlatformNamespaceObjects & {_ns: ns.namespace.metadata}).objects {
let Kind = obj.kind
let Name = obj.metadata.name
"\(Kind)": "\(nsName)/\(Name)": obj
}
}
}
}

Some files were not shown because too many files have changed in this diff Show More