Compare commits

...

23 Commits

Author SHA1 Message Date
Jeff McCune
6f0928b12c (#71) Add go BuildPlan type as the CUE<->Holos API
This patch establishes the BuildPlan struct as the single API contract
between CUE and Holos.  A BuildPlan spec contains a list of each of the
supported holos component types.

The purpose of this data structure is to support the use case of one CUE
instance generating one build plan that contains 0..N of each type of
holos component.

Multiple components per CUE instance are needed to generate a collection
of roughly four (N~4) flux kustomization resources per project across
roughly six (P~6) projects, all built from one CUE instance.

Tested with:

    holos render --cluster-name=k2 ~/workspace/holos-run/holos/docs/examples/platforms/reference/clusters/foundation/cloud/init/namespaces/...

Common labels are removed because they're too tightly coupled to the
model of one component per CUE instance.
2024-03-21 16:13:36 -07:00
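The shape described above can be sketched with simplified stand-in types; field names follow the commit message, not necessarily the final API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for the real v1alpha1 types.
type HelmChart struct {
	Name string `json:"name"`
}

type KubernetesObjects struct {
	Name string `json:"name"`
}

type BuildPlanComponents struct {
	HelmCharts        []HelmChart         `json:"helmCharts,omitempty"`
	KubernetesObjects []KubernetesObjects `json:"kubernetesObjects,omitempty"`
}

type BuildPlan struct {
	Kind string              `json:"kind"`
	Spec BuildPlanComponents `json:"spec"`
}

// onePlan returns a single build plan holding multiple components, the
// shape one CUE instance would export for several components at once.
// The component names are illustrative.
func onePlan() BuildPlan {
	return BuildPlan{
		Kind: "BuildPlan",
		Spec: BuildPlanComponents{
			HelmCharts:        []HelmChart{{Name: "zitadel"}},
			KubernetesObjects: []KubernetesObjects{{Name: "namespaces"}, {Name: "projects"}},
		},
	}
}

func main() {
	b, _ := json.Marshal(onePlan())
	fmt.Println(string(b))
}
```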
Jeff McCune
c6e9250d60 (#69) Refactor: clean up go types
Separate out the Kustomization and Kustomize types commonly used in
holos components.  Embed HolosComponent into Result.
2024-03-21 08:57:02 -07:00
Jeff McCune
104bda459f (#69) Go Types for CUE/Holos API contract
This patch refactors the go structs used to decode cue output for
processing by the holos cli.  For context, the purpose of the structs
is to inform holos how the data from cue should be modeled and
processed as a rendering pipeline that provides rendered yaml to
configure kubernetes api objects.

The structs share common fields in the form of the HolosComponent
embedded struct.  The three main holos component kinds today are:

 1. KubernetesObjects - CUE outputs a nested map where each value is a
    single rendered api object (resource).
 2. HelmChart - CUE outputs the chart name and values.  Holos calls helm
    template to render the chart.  Additional api objects may be
    overlaid into the rendered output.  Kustomize may also optionally be
    called at the end of the render pipeline.
 3. KustomizeBuild - CUE outputs data to construct a kustomize
    kustomization build.  The holos component contains raw yaml files to
    use as kustomization resources.  CUE optionally defines additional
    patches, common labels, etc.

With the Go structs, cue may directly import the definitions to more
easily keep the CUE definitions in sync with what the holos cli expects
to receive.

The holos component types may be imported into cue using:

    cue get go github.com/holos-run/holos/api/v1alpha1/...
2024-03-20 17:21:10 -07:00
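The shared-fields idea can be sketched with a trimmed-down embedded struct; the field set here is hypothetical, reduced from the real HolosComponent:

```go
package main

import "fmt"

// Trimmed-down stand-in for the embedded HolosComponent struct.
type HolosComponent struct {
	Kind string
	Name string
}

// Each component kind embeds HolosComponent, so all kinds share the
// common fields without repeating them.
type HelmChart struct {
	HolosComponent
	Chart string
}

type KustomizeBuild struct {
	HolosComponent
}

// describe works on the shared fields regardless of the concrete kind.
func describe(c HolosComponent) string {
	return fmt.Sprintf("%s/%s", c.Kind, c.Name)
}

func main() {
	hc := HelmChart{HolosComponent: HolosComponent{Kind: "HelmChart", Name: "zitadel"}, Chart: "zitadel-v2"}
	kb := KustomizeBuild{HolosComponent: HolosComponent{Kind: "KustomizeBuild", Name: "namespaces"}}
	fmt.Println(describe(hc.HolosComponent))
	fmt.Println(describe(kb.HolosComponent))
}
```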
Jeff McCune
bd2effa183 (#61) Improve ks prod-iam-zitadel robustness with flux health checks
Without this patch ks/prod-iam-zitadel often gets blocked waiting for
jobs that will never complete.  In addition, flux should not manage the
zitadel-test-connection Pod which is an unnecessary artifact of the
upstream helm chart.

We'd disable helm hooks, but they're necessary to create the init and
setup jobs.

This patch also changes the default behavior of Kustomizations from
wait: true to wait: false.  Waiting is expensive for the api server and
slows down the reconciliation process considerably.

Component authors should use ks.spec.healthChecks to target specific
important resources to watch and wait for.
2024-03-15 15:56:43 -07:00
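A flux Kustomization using targeted health checks instead of `wait: true` might look like this fragment; the resource names are illustrative, not taken from the repository:

```
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: prod-iam-zitadel
  namespace: flux-system
spec:
  interval: 10m
  prune: true
  wait: false  # the new default described above
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: zitadel
      namespace: prod-iam
```

Only the specific resources listed under healthChecks are watched, which is far cheaper for the api server than waiting on every reconciled object.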
Jeff McCune
562412fbe7 (#57) Run gha-rs scale set only on the primary cluster
This patch fixes the problem of the actions runner scale set listener
pod failing every 3 seconds.  See
https://github.com/actions/actions-runner-controller/issues/3351

The solution is not ideal: if the primary cluster is down, workflows
will not execute.  The primary cluster shouldn't go down, though, so
this is the trade-off.  We get lower log spam and resource usage by
eliminating the failing pods on other clusters, at the cost of lower
availability if the primary cluster is unavailable.

We could let the pods loop so that another cluster would quickly pick
up the role if the primary became unavailable, but it doesn't seem
worth it.
2024-03-15 13:13:25 -07:00
Jeff McCune
fd6fbe5598 (#57) Allow gha-rs scale set to fail on all but one cluster
The effect of this patch is limited to refreshing credentials only for
namespaces that exist in the local cluster.  There is structure in place
in the CUE code to allow for namespaces bound to specific clusters, but
this is used only by the optional Vault component.

This patch was an attempt to work around
https://github.com/actions/actions-runner-controller/issues/3351 by
deploying the runner scale sets into unique namespaces.

This effort was a waste of time; only one listener pod successfully
registered for a given scale set name / group combination.

Because we have only one group named Default we can only have one
listener pod globally for a given scale set name.

Because we want our workflows to execute regardless of the availability
of a single cluster, we're going to let this fail for now.  The pod
retries every 3 seconds.  When a cluster is destroyed, another cluster
will quickly register.

A follow up patch will look to expand this retry behavior.
2024-03-15 12:53:16 -07:00
Jeff McCune
67472e1e1c (#60) Disable flux reconciliation of deployment/zitadel on standby clusters 2024-03-14 21:58:32 -07:00
Jeff McCune
d64c3e8c66 (#58) Zitadel Failover RunBook 2024-03-14 15:25:38 -07:00
Jeff McCune
f344f97374 (#58) Restore last zitadel database backup
When the cluster is provisioned, restore the most recent backup instead
of a fixed point in time.
2024-03-14 11:40:17 -07:00
Jeff McCune
770088b912 (#53) Clean up nested if statements with && 2024-03-13 10:35:20 -07:00
Jeff McCune
cb9b39c3ca (#53) Add Vault as an optional service on the core clusters
This patch migrates the vault component from [holos-infra][1] to a
CUE-based component.  Vault is optional in the reference platform, so
this patch also defines an `#OptionalServices` struct to conditionally
manage a service across multiple clusters in the platform.

The primary use case for optional services is managing a namespace to
provision and provide secrets across clusters.

[1]: https://github.com/holos-run/holos-infra/tree/v0.5.0/components/core/core/vault
2024-03-12 17:18:38 -07:00
Jeff McCune
0f34b20546 (#54) Disable helm hooks when rendering components
Pods are unnecessarily created when deploying helm based holos
components and often fail.  Prevent these test pods by disabling helm
hooks with the `--no-hooks` flag.

Closes: #54
2024-03-12 14:14:20 -07:00
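The flag handling described here can be sketched as a small helper; the function name and arguments are illustrative, mirroring the conditional `--no-hooks` logic rather than the real holos code:

```go
package main

import "fmt"

// templateArgs builds helm template arguments, disabling hooks by
// default and only enabling them on request.
func templateArgs(enableHooks bool, release, chartPath string) []string {
	args := []string{"template"}
	if !enableHooks {
		args = append(args, "--no-hooks")
	}
	return append(args, release, chartPath)
}

func main() {
	fmt.Println(templateArgs(false, "zitadel", "./vendor/zitadel"))
}
```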
Jeff McCune
0d7bbbb659 (#48) Disable pg spec.dataSource for standby cluster
Problem:
The standby cluster on k2 fails to start.  A pgbackrest pod first
restores the database from S3, then the pgha nodes try to replay the WAL
as part of the standby initialization process.  This fails because the
PGDATA directory is not empty.

Solution:
Specify the spec.dataSource field only when the cluster is configured as
a primary cluster.

Result:
Non-primary clusters are standby; they skip the pgbackrest job that
restores from S3 and move straight to patroni replaying the WAL from S3
as part of the pgha pods.

One of the two pgha pods becomes the "standby leader" and restores the
WAL from S3.  The other is a cascading standby and then restores the
same WAL from the standby leader.

After 8 minutes both pods are ready.

```
❯ k get pods
NAME                               READY   STATUS    RESTARTS   AGE
zitadel-pgbouncer-d9f8cffc-j469g   2/2     Running   0          11m
zitadel-pgbouncer-d9f8cffc-xq29g   2/2     Running   0          11m
zitadel-pgha1-27w7-0               4/4     Running   0          11m
zitadel-pgha1-c5qj-0               4/4     Running   0          11m
zitadel-repo-host-0                2/2     Running   0          11m
```
2024-03-11 17:56:47 -07:00
Jeff McCune
3f3e36bbe9 (#48) Split workload into foundation and accounts
Problem:
The k3 and k4 clusters are getting the Zitadel components which are
really only intended for the core cluster pair.

Solution:
Split the workload subtree into two, named foundation and accounts.  The
core cluster pair gets foundation+accounts while the kX clusters get
just the foundation subtree.

Result:
prod-zitadel-iam is no longer managed on k3 and k4
2024-03-11 15:20:35 -07:00
Jeff McCune
9f41478d33 (#48) Restore from Monday morning after Gary and Nate registered
Set the restore point to time="2024-03-11T17:08:58Z" level=info
msg="crunchy-pgbackrest ends" which is just after Gary and Nate
registered and were granted the cluster-admin role.
2024-03-11 10:18:45 -07:00
Jeff McCune
b86fee04fc (#48) v0.55.4 to rebuild k3, k4, k5 2024-03-11 08:48:07 -07:00
Jeff McCune
c78da6949f Merge pull request #51 from holos-run/jeff/48-zitadel-backups
(#48) Custom PGO Certs for Zitadel
2024-03-10 23:08:29 -07:00
Jeff McCune
7b215bb8f1 (#48) Custom PGO Certs for Zitadel
The [Streaming Standby][standby] architecture requires custom tls certs
for two clusters in two regions to connect to each other.

This patch manages the custom certs following the configuration
described in the article [Using Cert Manager to Deploy TLS for Postgres
on Kubernetes][article].

NOTE: One thing not mentioned anywhere in the crunchy documentation is
how custom tls certs work with pgbouncer.  The pgbouncer service uses a
tls certificate issued by the pgo root cert, not by the custom
certificate authority.

For this reason, we use kustomize to patch the zitadel Deployment and
the zitadel-init and zitadel-setup Jobs.  The patch projects the ca
bundle from the `zitadel-pgbouncer` secret into the zitadel pods at
/pgbouncer/ca.crt

[standby]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery#streaming-standby-with-an-external-repo
[article]: https://www.crunchydata.com/blog/using-cert-manager-to-deploy-tls-for-postgres-on-kubernetes
2024-03-10 22:54:06 -07:00
Jeff McCune
78cec76a96 (#48) Restore ZITADEL from point in time full backup
A full backup was taken using:

```
kubectl annotate postgrescluster zitadel postgres-operator.crunchydata.com/pgbackrest-backup="$(date)"
```

And completed with:

```
❯ k logs -f zitadel-backup-5r6v-v5jnm
time="2024-03-10T21:52:15Z" level=info msg="crunchy-pgbackrest starts"
time="2024-03-10T21:52:15Z" level=info msg="debug flag set to false"
time="2024-03-10T21:52:15Z" level=info msg="backrest backup command requested"
time="2024-03-10T21:52:15Z" level=info msg="command to execute is [pgbackrest backup --stanza=db --repo=2 --type=full]"
time="2024-03-10T21:55:18Z" level=info msg="crunchy-pgbackrest ends"
```

This patch verifies the point in time backup is robust in the face of
the following operations:

1. pg cluster zitadel was deleted (whole namespace emptied)
2. pg cluster zitadel was re-created _without_ a `dataSource`
3. pgo initialized a new database and backed up the blank database to
   S3.
4. pg cluster zitadel was deleted again.
5. pg cluster zitadel was re-created with `dataSource` `options: ["--type=time", "--target=\"2024-03-10 21:56:00+00\""]` (Just after the full backup completed)
6. Restore completed successfully.
7. Applied the holos zitadel component.
8. Zitadel came up successfully and user login worked as expected.

- [x] Perform an in place [restore][restore] from [s3][bucket].
- [x] Set repo1-retention-full to clear warning

[restore]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#restore-properties
[bucket]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#cloud-based-data-source
2024-03-10 17:42:54 -07:00
Jeff McCune
0e98ad2ecb (#48) Zitadel Backups
This patch configures backups suitable to support the [Streaming Standby
with an External Repo][0] architecture.

- [x] PGO [Multiple Backup Repositories][1] to k8s pv and s3.
- [x] [Encryption][2] of backups to S3.
- [x] [Remove SUPERUSER][3] role from zitadel-admin pg user to work with pgbouncer.  Resolves zitadel-init job failure.
- [x] Take a [Manual Backup][5]

[0]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery#streaming-standby-with-an-external-repo
[1]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups#set-up-multiple-backup-repositories
[2]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups#encryption
[3]: https://github.com/CrunchyData/postgres-operator/issues/3095#issuecomment-1904712211
[4]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#streaming-standby-with-an-external-repo
[5]: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backup-management#taking-a-one-off-backup
2024-03-10 16:38:56 -07:00
Jeff McCune
30bb3f183a (#50) Describe type as strings to match others 2024-03-10 11:29:19 -07:00
Jeff McCune
1369338f3c (#50) Add -n shorthand for --namespace for secrets
It's annoying that `holos get secret -n foo` doesn't work like
`kubectl get secret -n foo` does.

Closes: #50
2024-03-10 10:45:49 -07:00
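One minimal way to wire such a shorthand with only the Go standard library is to register the same variable under both flag names; this is a sketch of the idea, not the holos implementation (which likely uses a flag framework with native shorthand support):

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// newSecretFlags returns a flag set where -n is a shorthand for
// --namespace: both names are bound to the same string variable.
func newSecretFlags() (*flag.FlagSet, *string) {
	fs := flag.NewFlagSet("get-secret", flag.ContinueOnError)
	namespace := fs.String("namespace", "default", "kubernetes namespace")
	fs.StringVar(namespace, "n", "default", "shorthand for --namespace")
	return fs, namespace
}

func main() {
	fs, ns := newSecretFlags()
	_ = fs.Parse(os.Args[1:])
	fmt.Println("namespace:", *ns)
}
```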
Jeff McCune
ac03f64724 (#45) Configure ZITADEL to use pgbouncer 2024-03-09 09:44:33 -08:00
125 changed files with 3592 additions and 847 deletions

@@ -44,13 +44,18 @@ tidy: ## Tidy go module.
go mod tidy
.PHONY: fmt
fmt: ## Format Go code.
fmt: ## Format code.
cd docs/examples && cue fmt ./...
go fmt ./...
.PHONY: vet
vet: ## Vet Go code.
go vet ./...
.PHONY: gencue
gencue: ## Generate CUE definitions
cd docs/examples && cue get go github.com/holos-run/holos/api/...
.PHONY: generate
generate: ## Generate code.
go generate ./...

api/v1alpha1/buildplan.go Normal file

@@ -0,0 +1,39 @@
package v1alpha1

import (
    "fmt"
    "strings"
)

// BuildPlan is the primary interface between CUE and the Holos cli.
type BuildPlan struct {
    TypeMeta `json:",inline" yaml:",inline"`
    // Metadata represents the holos component name
    Metadata ObjectMeta    `json:"metadata,omitempty" yaml:"metadata,omitempty"`
    Spec     BuildPlanSpec `json:"spec,omitempty" yaml:"spec,omitempty"`
}

type BuildPlanSpec struct {
    Disabled   bool                `json:"disabled,omitempty" yaml:"disabled,omitempty"`
    Components BuildPlanComponents `json:"components,omitempty" yaml:"components,omitempty"`
}

type BuildPlanComponents struct {
    HelmCharts        []HelmChart         `json:"helmCharts,omitempty" yaml:"helmCharts,omitempty"`
    KubernetesObjects []KubernetesObjects `json:"kubernetesObjects,omitempty" yaml:"kubernetesObjects,omitempty"`
    KustomizeBuilds   []KustomizeBuild    `json:"kustomizeBuilds,omitempty" yaml:"kustomizeBuilds,omitempty"`
}

func (bp *BuildPlan) Validate() error {
    errs := make([]string, 0, 10)
    if bp.Kind != BuildPlanKind {
        errs = append(errs, fmt.Sprintf("kind invalid: want: %s have: %s", BuildPlanKind, bp.Kind))
    }
    if bp.APIVersion != APIVersion {
        errs = append(errs, fmt.Sprintf("apiVersion invalid: want: %s have: %s", APIVersion, bp.APIVersion))
    }
    if len(errs) > 0 {
        return fmt.Errorf("invalid BuildPlan: %s", strings.Join(errs, ", "))
    }
    return nil
}

api/v1alpha1/component.go Normal file

@@ -0,0 +1,22 @@
package v1alpha1

// HolosComponent defines the fields common to all holos component kinds including the Render Result.
type HolosComponent struct {
    TypeMeta `json:",inline" yaml:",inline"`
    // Metadata represents the holos component name
    Metadata ObjectMeta `json:"metadata,omitempty" yaml:"metadata,omitempty"`
    // APIObjectMap holds the marshalled representation of api objects. Think of
    // these as resources overlaid at the back of the render pipeline.
    APIObjectMap APIObjectMap `json:"apiObjectMap,omitempty" yaml:"apiObjectMap,omitempty"`
    // Kustomization holds the marshalled representation of the flux kustomization
    // which reconciles resources in git with the api server.
    Kustomization `json:",inline" yaml:",inline"`
    // Kustomize represents a kubectl kustomize build post-processing step.
    Kustomize `json:",inline" yaml:",inline"`
    // Skip causes holos to take no action regarding the component.
    Skip bool
}

func (hc *HolosComponent) NewResult() *Result {
    return &Result{HolosComponent: *hc}
}

@@ -0,0 +1,9 @@
package v1alpha1

const (
    APIVersion    = "holos.run/v1alpha1"
    BuildPlanKind = "BuildPlan"
    HelmChartKind = "HelmChart"
    // ChartDir is the directory name created in the holos component directory to cache a chart.
    ChartDir = "vendor"
)

api/v1alpha1/doc.go Normal file

@@ -0,0 +1,2 @@
// Package v1alpha1 defines the api boundary between CUE and Holos.
package v1alpha1

api/v1alpha1/helm.go Normal file

@@ -0,0 +1,153 @@
package v1alpha1

import (
    "context"
    "fmt"
    "os"
    "path/filepath"
    "strings"

    "github.com/holos-run/holos"
    "github.com/holos-run/holos/pkg/logger"
    "github.com/holos-run/holos/pkg/util"
    "github.com/holos-run/holos/pkg/wrapper"
)

// A HelmChart represents a helm command to provide chart values in order to render kubernetes api objects.
type HelmChart struct {
    HolosComponent `json:",inline" yaml:",inline"`
    // Namespace is the namespace to install into. TODO: Use metadata.namespace instead.
    Namespace     string `json:"namespace"`
    Chart         Chart  `json:"chart"`
    ValuesContent string `json:"valuesContent"`
    EnableHooks   bool   `json:"enableHooks"`
}

type Chart struct {
    Name       string     `json:"name"`
    Version    string     `json:"version"`
    Release    string     `json:"release"`
    Repository Repository `json:"repository"`
}

type Repository struct {
    Name string `json:"name"`
    URL  string `json:"url"`
}

func (hc *HelmChart) Render(ctx context.Context, path holos.PathComponent) (*Result, error) {
    result := Result{HolosComponent: hc.HolosComponent}
    if err := hc.helm(ctx, &result, path); err != nil {
        return nil, err
    }
    result.addObjectMap(ctx, hc.APIObjectMap)
    if err := result.kustomize(ctx); err != nil {
        return nil, wrapper.Wrap(fmt.Errorf("could not kustomize: %w", err))
    }
    return &result, nil
}

// helm provides the values produced by CUE to helm template and returns
// the rendered kubernetes api objects in the result.
func (hc *HelmChart) helm(ctx context.Context, r *Result, path holos.PathComponent) error {
    log := logger.FromContext(ctx).With("chart", hc.Chart.Name)
    if hc.Chart.Name == "" {
        log.WarnContext(ctx, "skipping helm: no chart name specified, use a different component type")
        return nil
    }

    cachedChartPath := filepath.Join(string(path), ChartDir, filepath.Base(hc.Chart.Name))
    if isNotExist(cachedChartPath) {
        // Add the chart repository.
        repo := hc.Chart.Repository
        if repo.URL != "" {
            out, err := util.RunCmd(ctx, "helm", "repo", "add", repo.Name, repo.URL)
            if err != nil {
                log.ErrorContext(ctx, "could not run helm", "stderr", out.Stderr.String(), "stdout", out.Stdout.String())
                return wrapper.Wrap(fmt.Errorf("could not run helm repo add: %w", err))
            }
            // Update the repository.
            out, err = util.RunCmd(ctx, "helm", "repo", "update", repo.Name)
            if err != nil {
                log.ErrorContext(ctx, "could not run helm", "stderr", out.Stderr.String(), "stdout", out.Stdout.String())
                return wrapper.Wrap(fmt.Errorf("could not run helm repo update: %w", err))
            }
        } else {
            log.DebugContext(ctx, "no chart repository url proceeding assuming oci chart")
        }

        // Cache the chart.
        if err := cacheChart(ctx, path, ChartDir, hc.Chart); err != nil {
            return fmt.Errorf("could not cache chart: %w", err)
        }
    }

    // Write the values file.
    tempDir, err := os.MkdirTemp("", "holos")
    if err != nil {
        return wrapper.Wrap(fmt.Errorf("could not make temp dir: %w", err))
    }
    defer util.Remove(ctx, tempDir)

    valuesPath := filepath.Join(tempDir, "values.yaml")
    if err := os.WriteFile(valuesPath, []byte(hc.ValuesContent), 0644); err != nil {
        return wrapper.Wrap(fmt.Errorf("could not write values: %w", err))
    }
    log.DebugContext(ctx, "helm: wrote values", "path", valuesPath, "bytes", len(hc.ValuesContent))

    // Run helm template.
    chart := hc.Chart
    args := []string{"template"}
    if !hc.EnableHooks {
        args = append(args, "--no-hooks")
    }
    namespace := hc.Namespace
    args = append(args, "--include-crds", "--values", valuesPath, "--namespace", namespace, "--kubeconfig", "/dev/null", "--version", chart.Version, chart.Release, cachedChartPath)
    helmOut, err := util.RunCmd(ctx, "helm", args...)
    if err != nil {
        stderr := helmOut.Stderr.String()
        lines := strings.Split(stderr, "\n")
        for _, line := range lines {
            if strings.HasPrefix(line, "Error:") {
                err = fmt.Errorf("%s: %w", line, err)
            }
        }
        return wrapper.Wrap(fmt.Errorf("could not run helm template: %w", err))
    }

    r.accumulatedOutput = helmOut.Stdout.String()
    return nil
}

// cacheChart stores a cached copy of Chart in the chart subdirectory of path.
func cacheChart(ctx context.Context, path holos.PathComponent, chartDir string, chart Chart) error {
    log := logger.FromContext(ctx)
    cacheTemp, err := os.MkdirTemp(string(path), chartDir)
    if err != nil {
        return wrapper.Wrap(fmt.Errorf("could not make temp dir: %w", err))
    }
    defer util.Remove(ctx, cacheTemp)

    chartName := chart.Name
    if chart.Repository.Name != "" {
        chartName = fmt.Sprintf("%s/%s", chart.Repository.Name, chart.Name)
    }
    helmOut, err := util.RunCmd(ctx, "helm", "pull", "--destination", cacheTemp, "--untar=true", "--version", chart.Version, chartName)
    if err != nil {
        return wrapper.Wrap(fmt.Errorf("could not run helm pull: %w", err))
    }
    log.Debug("helm pull", "stdout", helmOut.Stdout, "stderr", helmOut.Stderr)

    cachePath := filepath.Join(string(path), chartDir)
    if err := os.Rename(cacheTemp, cachePath); err != nil {
        return wrapper.Wrap(fmt.Errorf("could not rename: %w", err))
    }
    log.InfoContext(ctx, "cached", "chart", chart.Name, "version", chart.Version, "path", cachePath)
    return nil
}

func isNotExist(path string) bool {
    _, err := os.Stat(path)
    return os.IsNotExist(err)
}

@@ -0,0 +1,24 @@
package v1alpha1

import (
    "context"

    "github.com/holos-run/holos"
)

const KubernetesObjectsKind = "KubernetesObjects"

// KubernetesObjects represents CUE output which directly provides Kubernetes api objects to holos.
type KubernetesObjects struct {
    HolosComponent `json:",inline" yaml:",inline"`
}

// Render produces kubernetes api objects from the APIObjectMap
func (o *KubernetesObjects) Render(ctx context.Context, path holos.PathComponent) (*Result, error) {
    result := Result{
        TypeMeta:      o.TypeMeta,
        Metadata:      o.Metadata,
        Kustomization: o.Kustomization,
    }
    result.addObjectMap(ctx, o.APIObjectMap)
    return &result, nil
}

@@ -0,0 +1,7 @@
package v1alpha1

// Kustomization holds the rendered flux kustomization api object content for git ops.
type Kustomization struct {
    // KsContent is the yaml representation of the flux kustomization for gitops.
    KsContent string `json:"ksContent,omitempty" yaml:"ksContent,omitempty"`
}

api/v1alpha1/kustomize.go Normal file

@@ -0,0 +1,46 @@
package v1alpha1

import (
    "context"

    "github.com/holos-run/holos"
    "github.com/holos-run/holos/pkg/logger"
    "github.com/holos-run/holos/pkg/util"
    "github.com/holos-run/holos/pkg/wrapper"
)

const KustomizeBuildKind = "KustomizeBuild"

// Kustomize represents resources necessary to execute a kustomize build.
// Intended for at least two use cases:
//
//  1. Process raw yaml file resources in a holos component directory.
//  2. Post process a HelmChart to inject istio, add custom labels, etc...
type Kustomize struct {
    // KustomizeFiles holds file contents for kustomize, e.g. patch files.
    KustomizeFiles FileContentMap `json:"kustomizeFiles,omitempty" yaml:"kustomizeFiles,omitempty"`
    // ResourcesFile is the file name used for api objects in kustomization.yaml
    ResourcesFile string `json:"resourcesFile,omitempty" yaml:"resourcesFile,omitempty"`
}

// KustomizeBuild renders plain yaml files in the holos component directory using kubectl kustomize build.
type KustomizeBuild struct {
    HolosComponent `json:",inline" yaml:",inline"`
}

// Render produces a Result by executing kubectl kustomize on the holos
// component path. Useful for processing raw yaml files.
func (kb *KustomizeBuild) Render(ctx context.Context, path holos.PathComponent) (*Result, error) {
    log := logger.FromContext(ctx)
    result := Result{HolosComponent: kb.HolosComponent}
    // Run kustomize.
    kOut, err := util.RunCmd(ctx, "kubectl", "kustomize", string(path))
    if err != nil {
        log.ErrorContext(ctx, kOut.Stderr.String())
        return nil, wrapper.Wrap(err)
    }
    // Replace the accumulated output
    result.accumulatedOutput = kOut.Stdout.String()
    // Add CUE based api objects.
    result.addObjectMap(ctx, kb.APIObjectMap)
    return &result, nil
}

@@ -0,0 +1,8 @@
package v1alpha1

// APIObjectMap is the shape of marshalled api objects returned from cue to the
// holos cli. A map is used to improve the clarity of error messages from cue.
type APIObjectMap map[string]map[string]string

// FileContentMap is a map of file names to file contents.
type FileContentMap map[string]string

@@ -0,0 +1,15 @@
package v1alpha1

// ObjectMeta represents metadata of a holos component object. The fields are a
// copy of upstream kubernetes api machinery, but holos objects are distinct
// from kubernetes api objects.
type ObjectMeta struct {
    // Name uniquely identifies the holos component instance and must be suitable as a file name.
    Name string `json:"name,omitempty" yaml:"name,omitempty"`
    // Namespace confines a holos component to a single namespace via kustomize if set.
    Namespace string `json:"namespace,omitempty" yaml:"namespace,omitempty"`
    // Labels are not used but are copied from api machinery ObjectMeta for completeness.
    Labels map[string]string `json:"labels,omitempty" yaml:"labels,omitempty"`
    // Annotations are not used but are copied from api machinery ObjectMeta for completeness.
    Annotations map[string]string `json:"annotations,omitempty" yaml:"annotations,omitempty"`
}

api/v1alpha1/render.go Normal file

@@ -0,0 +1,21 @@
package v1alpha1

import (
    "context"

    "github.com/holos-run/holos"
)

type Renderer interface {
    GetKind() string
    Render(ctx context.Context, path holos.PathComponent) (*Result, error)
}

// Render produces a Result representing the kubernetes api objects to
// configure. Each of the various holos component types, e.g. Helm, Kustomize,
// et al, should implement the Renderer interface. This process is best
// conceptualized as a data pipeline, for example a component may render a
// result by first calling helm template, then passing the result through
// kustomize, then mixing in overlay api objects.
func Render(ctx context.Context, r Renderer, path holos.PathComponent) (*Result, error) {
    return r.Render(ctx, path)
}
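The data-pipeline idea in the Render comment can be illustrated with a toy sequence of stages; the stage contents are illustrative, not the real holos implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// stage transforms the accumulated yaml stream, mirroring the
// helm template -> kustomize -> overlay pipeline described above.
type stage func(yaml string) string

// renderPipeline threads the output of each stage into the next.
func renderPipeline(stages []stage) string {
	out := ""
	for _, s := range stages {
		out = s(out)
	}
	return out
}

func main() {
	stages := []stage{
		func(string) string { return "kind: Deployment\n" },        // produce initial objects (helm template)
		func(y string) string { return strings.ToLower(y) },        // post-process (kustomize)
		func(y string) string { return y + "---\nkind: Service\n" }, // mix in overlay api objects
	}
	fmt.Print(renderPipeline(stages))
}
```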

api/v1alpha1/result.go Normal file

@@ -0,0 +1,141 @@
package v1alpha1

import (
    "context"
    "fmt"
    "os"
    "path/filepath"
    "slices"

    "github.com/holos-run/holos/pkg/logger"
    "github.com/holos-run/holos/pkg/util"
    "github.com/holos-run/holos/pkg/wrapper"
)

// Result is the build result for display or writing. Holos components Render the Result as a data pipeline.
type Result struct {
    HolosComponent
    TypeMeta      `json:",inline" yaml:",inline"`
    Kustomization `json:",inline" yaml:",inline"`
    Kustomize     `json:",inline" yaml:",inline"`
    Metadata      ObjectMeta `json:"metadata,omitempty"`
    // accumulatedOutput accumulates rendered api objects.
    accumulatedOutput string
}

func (r *Result) Name() string {
    return r.Metadata.Name
}

func (r *Result) Filename(writeTo string, cluster string) string {
    name := r.Metadata.Name
    return filepath.Join(writeTo, "clusters", cluster, "components", name, name+".gen.yaml")
}

func (r *Result) KustomizationFilename(writeTo string, cluster string) string {
    return filepath.Join(writeTo, "clusters", cluster, "holos", "components", r.Metadata.Name+"-kustomization.gen.yaml")
}

// AccumulatedOutput returns the accumulated rendered output.
func (r *Result) AccumulatedOutput() string {
    return r.accumulatedOutput
}

// addObjectMap renders the provided APIObjectMap into the accumulated output.
func (r *Result) addObjectMap(ctx context.Context, objectMap APIObjectMap) {
    log := logger.FromContext(ctx)
    b := []byte(r.AccumulatedOutput())
    kinds := make([]string, 0, len(objectMap))
    // Sort the keys
    for kind := range objectMap {
        kinds = append(kinds, kind)
    }
    slices.Sort(kinds)
    for _, kind := range kinds {
        v := objectMap[kind]
        // Sort the keys
        names := make([]string, 0, len(v))
        for name := range v {
            names = append(names, name)
        }
        slices.Sort(names)
        for _, name := range names {
            yamlString := v[name]
            log.Debug(fmt.Sprintf("%s/%s", kind, name), "kind", kind, "name", name)
            b = util.EnsureNewline(b)
            header := fmt.Sprintf("---\n# Source: CUE apiObjects.%s.%s\n", kind, name)
            b = append(b, []byte(header+yamlString)...)
            b = util.EnsureNewline(b)
        }
    }
    r.accumulatedOutput = string(b)
}

// kustomize replaces the accumulated output with the output of kustomize build
func (r *Result) kustomize(ctx context.Context) error {
    log := logger.FromContext(ctx)
    if r.ResourcesFile == "" {
        log.DebugContext(ctx, "skipping kustomize: no resourcesFile")
        return nil
    }
    if len(r.KustomizeFiles) < 1 {
        log.DebugContext(ctx, "skipping kustomize: no kustomizeFiles")
        return nil
    }
    tempDir, err := os.MkdirTemp("", "holos.kustomize")
    if err != nil {
        return wrapper.Wrap(err)
    }
    defer util.Remove(ctx, tempDir)

    // Write the main api object resources file for kustomize.
    target := filepath.Join(tempDir, r.ResourcesFile)
    b := []byte(r.AccumulatedOutput())
    b = util.EnsureNewline(b)
    if err := os.WriteFile(target, b, 0644); err != nil {
        return wrapper.Wrap(fmt.Errorf("could not write resources: %w", err))
    }
    log.DebugContext(ctx, "wrote: "+target, "op", "write", "path", target, "bytes", len(b))

    // Write the kustomization tree, kustomization.yaml must be in this map for kustomize to work.
    for file, content := range r.KustomizeFiles {
        target := filepath.Join(tempDir, file)
        if err := os.MkdirAll(filepath.Dir(target), 0755); err != nil {
            return wrapper.Wrap(err)
        }
        b := []byte(content)
        b = util.EnsureNewline(b)
        if err := os.WriteFile(target, b, 0644); err != nil {
            return wrapper.Wrap(fmt.Errorf("could not write: %w", err))
        }
        log.DebugContext(ctx, "wrote: "+target, "op", "write", "path", target, "bytes", len(b))
    }

    // Run kustomize.
    kOut, err := util.RunCmd(ctx, "kubectl", "kustomize", tempDir)
    if err != nil {
        log.ErrorContext(ctx, kOut.Stderr.String())
        return wrapper.Wrap(err)
    }
    // Replace the accumulated output
    r.accumulatedOutput = kOut.Stdout.String()
    return nil
}

// Save writes the content to the filesystem for git ops.
func (r *Result) Save(ctx context.Context, path string, content string) error {
    log := logger.FromContext(ctx)
    dir := filepath.Dir(path)
    if err := os.MkdirAll(dir, os.FileMode(0775)); err != nil {
        log.WarnContext(ctx, "could not mkdir", "path", dir, "err", err)
        return wrapper.Wrap(err)
    }
    // Write the kube api objects
    if err := os.WriteFile(path, []byte(content), os.FileMode(0644)); err != nil {
        log.WarnContext(ctx, "could not write", "path", path, "err", err)
        return wrapper.Wrap(err)
    }
    log.DebugContext(ctx, "out: wrote "+path, "action", "write", "path", path, "status", "ok")
    return nil
}

api/v1alpha1/typemeta.go Normal file

@@ -0,0 +1,10 @@
package v1alpha1

type TypeMeta struct {
    Kind       string `json:"kind,omitempty" yaml:"kind,omitempty"`
    APIVersion string `json:"apiVersion,omitempty" yaml:"apiVersion,omitempty"`
}

func (tm *TypeMeta) GetKind() string {
    return tm.Kind
}

@@ -0,0 +1,25 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1
package v1alpha1

// BuildPlan is the primary interface between CUE and the Holos cli.
#BuildPlan: {
    #TypeMeta

    // Metadata represents the holos component name
    metadata?: #ObjectMeta @go(Metadata)
    spec?:     #BuildPlanSpec @go(Spec)
}

#BuildPlanSpec: {
    disabled?:   bool @go(Disabled)
    components?: #BuildPlanComponents @go(Components)
}

#BuildPlanComponents: {
    helmCharts?: [...#HelmChart] @go(HelmCharts,[]HelmChart)
    kubernetesObjects?: [...#KubernetesObjects] @go(KubernetesObjects,[]KubernetesObjects)
    kustomizeBuilds?: [...#KustomizeBuild] @go(KustomizeBuilds,[]KustomizeBuild)
}

@@ -0,0 +1,24 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1
package v1alpha1
// HolosComponent defines the fields common to all holos component kinds including the Render Result.
#HolosComponent: {
#TypeMeta
// Metadata represents the holos component name
metadata?: #ObjectMeta @go(Metadata)
// APIObjectMap holds the marshalled representation of api objects. Think of
// these as resources overlaid at the back of the render pipeline.
apiObjectMap?: #APIObjectMap @go(APIObjectMap)
#Kustomization
#Kustomize
// Skip causes holos to take no action regarding the component.
Skip: bool
}

View File

@@ -0,0 +1,12 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1
package v1alpha1
#APIVersion: "holos.run/v1alpha1"
#BuildPlanKind: "BuildPlan"
#HelmChartKind: "HelmChart"
// ChartDir is the directory name created in the holos component directory to cache a chart.
#ChartDir: "vendor"

View File

@@ -0,0 +1,6 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1
// Package v1alpha1 defines the api boundary between CUE and Holos.
package v1alpha1

View File

@@ -0,0 +1,28 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1
package v1alpha1
// HelmChart represents a helm command used to render kubernetes api objects from a chart and its values.
#HelmChart: {
#HolosComponent
// Namespace is the namespace to install into. TODO: Use metadata.namespace instead.
namespace: string @go(Namespace)
chart: #Chart @go(Chart)
valuesContent: string @go(ValuesContent)
enableHooks: bool @go(EnableHooks)
}
#Chart: {
name: string @go(Name)
version: string @go(Version)
release: string @go(Release)
repository: #Repository @go(Repository)
}
#Repository: {
name: string @go(Name)
url: string @go(URL)
}
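Holos renders a HelmChart by calling `helm template` with these fields. A rough sketch of building the argv (the helper name and exact flag set are illustrative, not the CLI's actual invocation):

```go
package main

import "fmt"

type Repository struct{ Name, URL string }

type Chart struct {
	Name, Version, Release string
	Repository             Repository
}

// helmTemplateArgs builds an argv for `helm template` from a Chart.
// Illustrative only; the real render pipeline may pass different flags.
func helmTemplateArgs(c Chart, namespace, valuesFile string) []string {
	return []string{
		"template", c.Release, c.Name,
		"--version", c.Version,
		"--repo", c.Repository.URL,
		"--namespace", namespace,
		"--values", valuesFile,
	}
}

func main() {
	c := Chart{
		Name:       "zitadel",
		Version:    "7.9.0",
		Release:    "zitadel",
		Repository: Repository{Name: "zitadel", URL: "https://charts.zitadel.com"},
	}
	fmt.Println(helmTemplateArgs(c, "prod-iam-zitadel", "values.yaml"))
}
```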

View File

@@ -0,0 +1,12 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1
package v1alpha1
#KubernetesObjectsKind: "KubernetesObjects"
// KubernetesObjects represents CUE output which directly provides Kubernetes api objects to holos.
#KubernetesObjects: {
#HolosComponent
}

View File

@@ -0,0 +1,11 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1
package v1alpha1
// Kustomization holds the rendered flux kustomization api object content for git ops.
#Kustomization: {
// KsContent is the yaml representation of the flux kustomization for gitops.
ksContent?: string @go(KsContent)
}

View File

@@ -0,0 +1,25 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1
package v1alpha1
#KustomizeBuildKind: "KustomizeBuild"
// Kustomize represents resources necessary to execute a kustomize build.
// Intended for at least two use cases:
//
// 1. Process raw yaml file resources in a holos component directory.
// 2. Post process a HelmChart to inject istio, add custom labels, etc...
#Kustomize: {
// KustomizeFiles holds file contents for kustomize, e.g. patch files.
kustomizeFiles?: #FileContentMap @go(KustomizeFiles)
// ResourcesFile is the file name used for api objects in kustomization.yaml
resourcesFile?: string @go(ResourcesFile)
}
// KustomizeBuild renders plain yaml files in the holos component directory using kubectl kustomize build.
#KustomizeBuild: {
#HolosComponent
}

View File

@@ -0,0 +1,12 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1
package v1alpha1
#KustomizeBuildKind: "KustomizeBuild"
// KustomizeBuild renders plain yaml files in the holos component directory using kubectl kustomize build.
#KustomizeBuild: {
#HolosComponent
}

View File

@@ -0,0 +1,12 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1
package v1alpha1
// APIObjectMap is the shape of marshalled api objects returned from cue to the
// holos cli. A map is used to improve the clarity of error messages from cue.
#APIObjectMap: {[string]: [string]: string}
// FileContentMap is a map of file names to file contents.
#FileContentMap: {[string]: string}
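An APIObjectMap is kind → name → marshalled yaml. A consumer of this shape can flatten it into one multi-document yaml stream; a sketch with deterministic key ordering (this helper is illustrative, not holos code):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// APIObjectMap mirrors the CUE shape above: kind -> name -> yaml document.
type APIObjectMap map[string]map[string]string

// flatten joins every marshalled object into one multi-doc yaml stream,
// sorted by kind then name so output is stable across runs.
func flatten(m APIObjectMap) string {
	var docs []string
	kinds := make([]string, 0, len(m))
	for k := range m {
		kinds = append(kinds, k)
	}
	sort.Strings(kinds)
	for _, kind := range kinds {
		names := make([]string, 0, len(m[kind]))
		for n := range m[kind] {
			names = append(names, n)
		}
		sort.Strings(names)
		for _, name := range names {
			docs = append(docs, m[kind][name])
		}
	}
	return strings.Join(docs, "---\n")
}

func main() {
	m := APIObjectMap{"Namespace": {"default": "kind: Namespace\nmetadata:\n  name: default\n"}}
	fmt.Print(flatten(m))
}
```

The two-level map keeps error messages readable: a failure reports the kind and name keys rather than an index into a flat list.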

View File

@@ -0,0 +1,22 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1
package v1alpha1
// ObjectMeta represents metadata of a holos component object. The fields are a
// copy of upstream kubernetes api machinery, but holos objects remain distinct
// from kubernetes api objects.
#ObjectMeta: {
// Name uniquely identifies the holos component instance and must be suitable as a file name.
name?: string @go(Name)
// Namespace confines a holos component to a single namespace via kustomize if set.
namespace?: string @go(Namespace)
// Labels are not used but are copied from api machinery ObjectMeta for completeness.
labels?: {[string]: string} @go(Labels,map[string]string)
// Annotations are not used but are copied from api machinery ObjectMeta for completeness.
annotations?: {[string]: string} @go(Annotations,map[string]string)
}

View File

@@ -0,0 +1,7 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1
package v1alpha1
#Renderer: _

View File

@@ -0,0 +1,17 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1
package v1alpha1
// Result is the build result for display or writing. Holos components Render the Result as a data pipeline.
#Result: {
HolosComponent: #HolosComponent
#TypeMeta
#Kustomization
#Kustomize
metadata?: #ObjectMeta @go(Metadata)
}

View File

@@ -0,0 +1,10 @@
// Code generated by cue get go. DO NOT EDIT.
//cue:generate cue get go github.com/holos-run/holos/api/v1alpha1
package v1alpha1
#TypeMeta: {
kind?: string @go(Kind)
apiVersion?: string @go(APIVersion)
}

View File

@@ -0,0 +1,8 @@
package v1alpha1
// #BuildPlan is the API contract between CUE and the Holos cli.
// Holos requires CUE to evaluate and provide a valid #BuildPlan.
#BuildPlan: {
kind: #BuildPlanKind
apiVersion: #APIVersion
}

View File

@@ -0,0 +1,3 @@
package v1alpha1
#HolosComponent: metadata: name: string

View File

@@ -0,0 +1,3 @@
package v1alpha1
#HolosComponent: Skip: true | *false

docs/examples/helpers.cue Normal file
View File

@@ -0,0 +1,31 @@
package holos
import "encoding/yaml"
// #APIObjects is the output type for api objects produced by cue.
#APIObjects: {
// apiObjects holds each of the api objects produced by cue.
apiObjects: {
[Kind=_]: {
[Name=_]: {
kind: Kind
...
}
}
ExternalSecret?: [Name=_]: #ExternalSecret & {_name: Name}
VirtualService?: [Name=_]: #VirtualService & {metadata: name: Name}
Issuer?: [Name=_]: #Issuer & {metadata: name: Name}
}
// apiObjectMap holds the marshalled representation of apiObjects
apiObjectMap: {
for kind, v in apiObjects {
"\(kind)": {
for name, obj in v {
"\(name)": yaml.Marshal(obj)
}
}
}
...
}
}

docs/examples/holos.cue Normal file
View File

@@ -0,0 +1,63 @@
package holos
import (
"encoding/yaml"
h "github.com/holos-run/holos/api/v1alpha1"
ksv1 "kustomize.toolkit.fluxcd.io/kustomization/v1"
)
// The overall structure of the data is:
// 1 CUE Instance => 1 BuildPlan => 0..N HolosComponents
// Holos requires CUE to evaluate and provide a valid BuildPlan.
// Constrain each CUE instance to output a BuildPlan.
{} & h.#BuildPlan
// #HolosComponent defines struct fields common to all holos component types.
#HolosComponent: {
h.#HolosComponent
metadata: name: string
#namelen: len(metadata.name) & >=1
let Name = metadata.name
ksContent: yaml.Marshal(#Kustomization & {
metadata: name: Name
})
...
}
// Holos component types.
#HelmChart: #HolosComponent & h.#HelmChart
#KubernetesObjects: #HolosComponent & h.#KubernetesObjects
#KustomizeBuild: #HolosComponent & h.#KustomizeBuild
// #ClusterName is the cluster name for cluster scoped resources.
#ClusterName: #InputKeys.cluster
// Flux Kustomization CRDs
#Kustomization: #NamespaceObject & ksv1.#Kustomization & {
_dependsOn: [Name=_]: name: string & Name
metadata: {
name: string
namespace: string | *"flux-system"
}
spec: ksv1.#KustomizationSpec & {
interval: string | *"30m0s"
path: string | *"deploy/clusters/\(#InputKeys.cluster)/components/\(metadata.name)"
prune: bool | *true
retryInterval: string | *"2m0s"
sourceRef: {
kind: string | *"GitRepository"
name: string | *"flux-system"
}
suspend?: bool
targetNamespace?: string
timeout: string | *"3m0s"
// wait performs health checks for all reconciled resources. If set to true, .spec.healthChecks is ignored.
// Setting this to true for all components generates considerable load on the api server from watches.
// Operations are additionally more complicated when all resources are watched. Consider setting wait true for
// relatively simple components, otherwise target specific resources with spec.healthChecks.
wait: true | *false
dependsOn: [for k, v in _dependsOn {v}, ...]
}
}

View File

@@ -0,0 +1,46 @@
// Controls optional feature flags for services distributed across multiple holos components.
// For example, enable issuing certificates in the provisioner cluster when an optional service is
// enabled for a workload cluster.
package holos
import "list"
#OptionalService: {
name: string
enabled: true | *false
clusters: [Name=_]: #Platform.clusters[Name]
clusterNames: [for c in clusters {c.name}]
managedNamespaces: [Name=_]: #ManagedNamespace & {
namespace: metadata: name: Name
clusterNames: ["provisioner", for c in clusters {c.name}]
}
// servers represents istio Gateway.spec.servers.hosts entries
// Refer to istio/gateway/gateway.cue
servers: [Name=_]: {
hosts: [...string]
port: name: Name
port: number: 443
port: protocol: "HTTPS"
tls: credentialName: string
tls: mode: "SIMPLE"
}
// Public tls certs should align with the server hosts.
certs: [Name=_]: #Certificate & {
metadata: name: Name
}
}
#OptionalServices: {
[Name=_]: #OptionalService & {
name: Name
}
}
for svc in #OptionalServices {
for nsName, ns in svc.managedNamespaces {
if svc.enabled && list.Contains(ns.clusterNames, #ClusterName) {
#ManagedNamespaces: "\(nsName)": ns
}
}
}
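The loop above activates a managed namespace only when the optional service is enabled and the current cluster appears in the namespace's cluster list. The same gate expressed in Go (function and parameter names are illustrative):

```go
package main

import "fmt"

// namespaceEnabled mirrors the CUE gate above: a managed namespace is
// included only when the service is enabled and the current cluster
// appears in the namespace's clusterNames list.
func namespaceEnabled(serviceEnabled bool, clusterNames []string, cluster string) bool {
	if !serviceEnabled {
		return false
	}
	for _, name := range clusterNames {
		if name == cluster {
			return true
		}
	}
	return false
}

func main() {
	// The provisioner cluster is always in the list so certs can be issued there.
	names := []string{"provisioner", "core1", "core2"}
	fmt.Println(namespaceEnabled(true, names, "core1"))  // true
	fmt.Println(namespaceEnabled(false, names, "core1")) // false
}
```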

View File

@@ -0,0 +1,56 @@
package holos
let CoreDomain = "core.\(#Platform.org.domain)"
let TargetNamespace = "prod-core-vault"
#OptionalServices: {
vault: {
enabled: true
clusters: core1: _
clusters: core2: _
managedNamespaces: "prod-core-vault": {
namespace: metadata: labels: "istio-injection": "enabled"
}
certs: "vault-core": #Certificate & {
metadata: name: "vault-core"
metadata: namespace: "istio-ingress"
spec: {
commonName: "vault.\(CoreDomain)"
dnsNames: [commonName]
secretName: metadata.name
issuerRef: kind: "ClusterIssuer"
issuerRef: name: string | *"letsencrypt"
}
}
servers: "https-vault-core": {
hosts: ["\(TargetNamespace)/vault.\(CoreDomain)"]
tls: credentialName: certs."vault-core".spec.secretName
}
for k, v in clusters {
let obj = (Cert & {Name: "vault-core", Cluster: v.name}).APIObject
certs: "\(obj.metadata.name)": obj
servers: "https-\(obj.metadata.name)": {
hosts: [for host in obj.spec.dnsNames {"\(TargetNamespace)/\(host)"}]
tls: credentialName: obj.spec.secretName
}
}
}
}
// Cert provisions a cluster specific certificate.
let Cert = {
Name: string
Cluster: string
APIObject: #Certificate & {
metadata: name: "\(Cluster)-\(Name)"
metadata: namespace: string | *"istio-ingress"
spec: {
commonName: string | *"vault.\(Cluster).\(CoreDomain)"
dnsNames: [commonName]
secretName: metadata.name
issuerRef: kind: "ClusterIssuer"
issuerRef: name: string | *"letsencrypt"
}
}
}

View File

@@ -44,7 +44,8 @@ package holos
_name: string
_cluster: string
_wildcard: true | *false
metadata: name: string | *"\(_cluster)-\(_name)"
// Enforce this value
metadata: name: "\(_cluster)-\(_name)"
metadata: namespace: string | *"istio-ingress"
spec: {
commonName: string | *"\(_name).\(_cluster).\(#Platform.org.domain)"

View File

@@ -0,0 +1,29 @@
package holos
#InputKeys: component: "postgres-certs"
let SecretNames = {
[Name=_]: {name: Name}
"\(_DBName)-primary-tls": _
"\(_DBName)-repl-tls": _
"\(_DBName)-client-tls": _
"\(_DBName)-root-ca": _
}
#Kustomization: spec: targetNamespace: #TargetNamespace
#Kustomization: spec: healthChecks: [
for s in SecretNames {
apiVersion: "external-secrets.io/v1beta1"
kind: "ExternalSecret"
name: s.name
namespace: #TargetNamespace
},
]
#KubernetesObjects & {
apiObjects: {
for s in SecretNames {
ExternalSecret: "\(s.name)": _
}
}
}

View File

@@ -0,0 +1,189 @@
package holos
#InputKeys: component: "postgres"
#DependsOn: "postgres-certs": _
let Cluster = #Platform.clusters[#ClusterName]
let S3Secret = "pgo-s3-creds"
let ZitadelUser = _DBName
let ZitadelAdmin = "\(_DBName)-admin"
// This must be an external storage bucket for our architecture.
let BucketRepoName = "repo2"
// Restore options. Set the timestamp to a known good point in time.
// time="2024-03-11T17:08:58Z" level=info msg="crunchy-pgbackrest ends"
// let RestoreOptions = ["--type=time", "--target=\"2024-03-11 17:10:00+00\""]
// Restore the most recent backup.
let RestoreOptions = []
#Kustomization: spec: healthChecks: [
{
apiVersion: "external-secrets.io/v1beta1"
kind: "ExternalSecret"
name: S3Secret
namespace: #TargetNamespace
},
{
apiVersion: "postgres-operator.crunchydata.com/v1beta1"
kind: "PostgresCluster"
name: _DBName
namespace: #TargetNamespace
},
]
#KubernetesObjects & {
apiObjects: {
ExternalSecret: "\(S3Secret)": _
PostgresCluster: db: #PostgresCluster & HighlyAvailable & {
metadata: name: _DBName
metadata: namespace: #TargetNamespace
spec: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.2-0"
postgresVersion: 16
// Custom certs are necessary for streaming standby replication which we use to replicate between two regions.
// Refer to https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#streaming-standby
customTLSSecret: name: "\(_DBName)-primary-tls"
customReplicationTLSSecret: name: "\(_DBName)-repl-tls"
// Refer to https://access.crunchydata.com/documentation/postgres-operator/latest/references/crd/5.5.x/postgrescluster#postgresclusterspecusersindex
users: [
{name: ZitadelUser},
// NOTE: Users with SUPERUSER role cannot log in through pgbouncer. Use options that allow zitadel admin to use pgbouncer.
// Refer to: https://github.com/CrunchyData/postgres-operator/issues/3095#issuecomment-1904712211
{name: ZitadelAdmin, options: "CREATEDB CREATEROLE", databases: [_DBName, "postgres"]},
]
users: [...{databases: [_DBName, ...]}]
instances: [{
replicas: 2
dataVolumeClaimSpec: {
accessModes: ["ReadWriteOnce"]
resources: requests: storage: "10Gi"
}
}]
standby: {
repoName: BucketRepoName
if Cluster.primary {
enabled: false
}
if !Cluster.primary {
enabled: true
}
}
// Restore from backup if and only if the cluster is primary
if Cluster.primary {
dataSource: pgbackrest: {
stanza: "db"
configuration: backups.pgbackrest.configuration
// Restore from known good full backup taken
options: RestoreOptions
global: {
"\(BucketRepoName)-path": "/pgbackrest/\(#TargetNamespace)/\(metadata.name)/\(BucketRepoName)"
"\(BucketRepoName)-cipher-type": "aes-256-cbc"
}
repo: {
name: BucketRepoName
s3: backups.pgbackrest.repos[1].s3
}
}
}
// Refer to https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups
backups: pgbackrest: {
configuration: [{secret: name: S3Secret}]
// Defines details for manual pgBackRest backup Jobs
manual: {
// Note: the repoName value must match the config keys in the S3Secret.
// This must be an external repository for backup / restore / regional failovers.
repoName: BucketRepoName
options: ["--type=full", ...]
}
// Defines details for performing an in-place restore using pgBackRest
restore: {
// Enables triggering a restore by annotating the postgrescluster with postgres-operator.crunchydata.com/pgbackrest-restore="$(date)"
enabled: true
repoName: BucketRepoName
}
global: {
// Store only one full backup in the PV because it's more expensive than object storage.
"\(repos[0].name)-retention-full": "1"
// Store 14 days of full backups in the bucket.
"\(BucketRepoName)-retention-full": string | *"14"
"\(BucketRepoName)-retention-full-type": "count" | *"time" // time in days
// Refer to https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups#encryption
"\(BucketRepoName)-cipher-type": "aes-256-cbc"
// "The convention we recommend for setting this variable is /pgbackrest/$NAMESPACE/$CLUSTER_NAME/repoN"
// Ref: https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/backups#understanding-backup-configuration-and-basic-operations
"\(BucketRepoName)-path": "/pgbackrest/\(#TargetNamespace)/\(metadata.name)/\(manual.repoName)"
}
repos: [
{
name: "repo1"
volume: volumeClaimSpec: {
accessModes: ["ReadWriteOnce"]
resources: requests: storage: string | *"4Gi"
}
},
{
name: BucketRepoName
// Full backup weekly on Sunday at 1am, differential daily at 1am every day except Sunday.
schedules: full: string | *"0 1 * * 0"
schedules: differential: string | *"0 1 * * 1-6"
s3: {
bucket: string | *"\(#Platform.org.name)-zitadel-backups"
region: string | *#Backups.s3.region
endpoint: string | *"s3.dualstack.\(region).amazonaws.com"
}
},
]
}
}
}
}
}
// Refer to https://github.com/holos-run/postgres-operator-examples/blob/main/kustomize/high-availability/ha-postgres.yaml
let HighlyAvailable = {
apiVersion: "postgres-operator.crunchydata.com/v1beta1"
kind: "PostgresCluster"
metadata: name: string
spec: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.2-0"
postgresVersion: 16
instances: [{
name: "pgha1"
replicas: 2
dataVolumeClaimSpec: {
accessModes: ["ReadWriteOnce"]
resources: requests: storage: string | *"10Gi"
}
affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: [{
weight: 1
podAffinityTerm: {
topologyKey: "kubernetes.io/hostname"
labelSelector: matchLabels: {
"postgres-operator.crunchydata.com/cluster": metadata.name
"postgres-operator.crunchydata.com/instance-set": name
}
}
}]
}]
backups: pgbackrest: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.49-0"
}
proxy: pgBouncer: {
image: "registry.developers.crunchydata.com/crunchydata/crunchy-pgbouncer:ubi8-1.21-3"
replicas: 2
affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: [{
weight: 1
podAffinityTerm: {
topologyKey: "kubernetes.io/hostname"
labelSelector: matchLabels: {
"postgres-operator.crunchydata.com/cluster": metadata.name
"postgres-operator.crunchydata.com/role": "pgbouncer"
}
}
}]
}
}
}
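The repo paths above follow the recommended `/pgbackrest/$NAMESPACE/$CLUSTER_NAME/repoN` convention quoted from the Crunchy docs. A tiny helper makes the convention explicit (names and example values are illustrative):

```go
package main

import "fmt"

// pgbackrestPath builds a bucket path per the Crunchy recommendation:
// /pgbackrest/$NAMESPACE/$CLUSTER_NAME/repoN
// Keeping primary and standby clusters on the same path is what lets the
// standby region restore from the primary's backups.
func pgbackrestPath(namespace, clusterName, repoName string) string {
	return fmt.Sprintf("/pgbackrest/%s/%s/%s", namespace, clusterName, repoName)
}

func main() {
	fmt.Println(pgbackrestPath("prod-iam-zitadel", "zitadel", "repo2"))
}
```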

View File

@@ -9,12 +9,12 @@ package holos
{
name: "ZITADEL_DATABASE_POSTGRES_HOST"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)"
valueFrom: secretKeyRef: key: "host"
valueFrom: secretKeyRef: key: "pgbouncer-host"
},
{
name: "ZITADEL_DATABASE_POSTGRES_PORT"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)"
valueFrom: secretKeyRef: key: "port"
valueFrom: secretKeyRef: key: "pgbouncer-port"
},
{
name: "ZITADEL_DATABASE_POSTGRES_DATABASE"
@@ -35,26 +35,30 @@ package holos
// The postgres component configures privileged postgres user creds.
{
name: "ZITADEL_DATABASE_POSTGRES_ADMIN_USERNAME"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-postgres"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)-admin"
valueFrom: secretKeyRef: key: "user"
},
{
name: "ZITADEL_DATABASE_POSTGRES_ADMIN_PASSWORD"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-postgres"
valueFrom: secretKeyRef: name: "\(_DBName)-pguser-\(_DBName)-admin"
valueFrom: secretKeyRef: key: "password"
},
// CA Cert issued by PGO which issued the pgbouncer tls cert
{
name: "ZITADEL_DATABASE_POSTGRES_USER_SSL_ROOTCERT"
value: "/\(_PGBouncer)/ca.crt"
},
{
name: "ZITADEL_DATABASE_POSTGRES_ADMIN_SSL_ROOTCERT"
value: "/\(_PGBouncer)/ca.crt"
},
]
// Refer to https://zitadel.com/docs/self-hosting/manage/database
zitadel: {
// Zitadel master key
masterkeySecretName: "zitadel-masterkey"
// Note: using externally issued certs from the provisioner cluster in the tls configuration is a challenge.
// We intentionally use pgo managed certs and intend to back up the ca key to the provisioner and restore it for
// cross cluster replication. The problems seemed to arise from specifying the user and admin tls secrets in
// addition to the ca cert secret.
dbSslCaCrtSecret: "\(_DBName)-cluster-cert"
// dbSslCaCrtSecret: "pgo-root-cacert"
// All settings: https://zitadel.com/docs/self-hosting/manage/configure#runtime-configuration-file
// Helm interface: https://github.com/zitadel/zitadel-charts/blob/zitadel-7.4.0/charts/zitadel/values.yaml#L20-L21

View File

@@ -0,0 +1,164 @@
package holos
import "encoding/yaml"
let Name = "zitadel"
#InputKeys: component: Name
#DependsOn: postgres: _
// Upstream helm chart doesn't specify the namespace field for all resources.
#Kustomization: spec: {
targetNamespace: #TargetNamespace
wait: false
}
if #IsPrimaryCluster == true {
#Kustomization: spec: healthChecks: [
{
apiVersion: "apps/v1"
kind: "Deployment"
name: Name
namespace: #TargetNamespace
},
{
apiVersion: "batch/v1"
kind: "Job"
name: "\(Name)-init"
namespace: #TargetNamespace
},
{
apiVersion: "batch/v1"
kind: "Job"
name: "\(Name)-setup"
namespace: #TargetNamespace
},
]
}
#HelmChart & {
namespace: #TargetNamespace
enableHooks: true
chart: {
name: Name
version: "7.9.0"
repository: {
name: Name
url: "https://charts.zitadel.com"
}
}
values: #Values
apiObjects: {
ExternalSecret: "zitadel-masterkey": _
VirtualService: "\(Name)": {
metadata: name: Name
metadata: namespace: #TargetNamespace
spec: hosts: ["login.\(#Platform.org.domain)"]
spec: gateways: ["istio-ingress/default"]
spec: http: [{route: [{destination: host: Name}]}]
}
}
}
// TODO: Generalize this common pattern of injecting the istio sidecar into a Deployment
let IstioInject = [{op: "add", path: "/spec/template/metadata/labels/sidecar.istio.io~1inject", value: "true"}]
_PGBouncer: "pgbouncer"
let DatabaseCACertPatch = [
{
op: "add"
path: "/spec/template/spec/volumes/-"
value: {
name: _PGBouncer
secret: {
secretName: "\(_DBName)-pgbouncer"
items: [{key: "pgbouncer-frontend.ca-roots", path: "ca.crt"}]
}
}
},
{
op: "add"
path: "/spec/template/spec/containers/0/volumeMounts/-"
value: {
name: _PGBouncer
mountPath: "/" + _PGBouncer
}
},
]
let CAPatch = #Patch & {
target: {
group: "apps" | "batch"
version: "v1"
kind: "Job" | "Deployment"
name: string
}
patch: yaml.Marshal(DatabaseCACertPatch)
}
#KustomizePatches: {
mesh: {
target: {
group: "apps"
version: "v1"
kind: "Deployment"
name: Name
}
patch: yaml.Marshal(IstioInject)
}
deploymentCA: CAPatch & {
target: group: "apps"
target: kind: "Deployment"
target: name: Name
}
initJob: CAPatch & {
target: group: "batch"
target: kind: "Job"
target: name: "\(Name)-init"
}
setupJob: CAPatch & {
target: group: "batch"
target: kind: "Job"
target: name: "\(Name)-setup"
}
testDisable: {
target: {
version: "v1"
kind: "Pod"
name: "\(Name)-test-connection"
}
patch: yaml.Marshal(DisableFluxPatch)
}
if #IsPrimaryCluster == false {
fluxDisable: {
target: {
group: "apps"
version: "v1"
kind: "Deployment"
name: Name
}
patch: yaml.Marshal(DisableFluxPatch)
}
initDisable: {
target: {
group: "batch"
version: "v1"
kind: "Job"
name: "\(Name)-init"
}
patch: yaml.Marshal(DisableFluxPatch)
}
setupDisable: {
target: {
group: "batch"
version: "v1"
kind: "Job"
name: "\(Name)-setup"
}
patch: yaml.Marshal(DisableFluxPatch)
}
}
}
let DisableFluxPatch = [{op: "replace", path: "/metadata/annotations/kustomize.toolkit.fluxcd.io~1reconcile", value: "disabled"}]

View File

@@ -4,6 +4,6 @@ package holos
#InputKeys: project: "github"
#DependsOn: Namespaces: name: "prod-secrets-namespaces"
#TargetNamespace: #InputKeys.component
#ARCSystemNamespace: "arc-system"
#HelmChart: namespace: #TargetNamespace
#HelmChart: chart: version: "0.8.3"

View File

@@ -0,0 +1,40 @@
package holos
#TargetNamespace: "arc-runner"
#InputKeys: component: "arc-runner"
#Kustomization: spec: targetNamespace: #TargetNamespace
let GitHubConfigSecret = "controller-manager"
// Just sync the external secret, don't configure the scale set
// Work around https://github.com/actions/actions-runner-controller/issues/3351
if #IsPrimaryCluster == false {
#KubernetesObjects & {
apiObjects: ExternalSecret: "\(GitHubConfigSecret)": _
}
}
// Put the scale set on the primary cluster.
if #IsPrimaryCluster == true {
#HelmChart & {
values: {
#Values
controllerServiceAccount: name: "gha-rs-controller"
controllerServiceAccount: namespace: "arc-system"
githubConfigSecret: GitHubConfigSecret
githubConfigUrl: "https://github.com/" + #Platform.org.github.orgs.primary.name
}
apiObjects: ExternalSecret: "\(values.githubConfigSecret)": _
chart: {
// Match the gha-base-name in the chart _helpers.tpl to avoid long full names.
// NOTE: Unfortunately the INSTALLATION_NAME is used as the helm release
// name and GitHub removed support for runner labels, so the only way to
// specify which runner a workflow runs on is using this helm release name.
// The quote is "Update the INSTALLATION_NAME value carefully. You will use
// the installation name as the value of runs-on in your workflows." Refer to
// https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/quickstart-for-actions-runner-controller
release: "gha-rs"
name: "oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set"
}
}
}

View File

@@ -1,6 +1,6 @@
package holos
#TargetNamespace: "arc-system"
#TargetNamespace: #ARCSystemNamespace
#InputKeys: component: "arc-system"
#HelmChart & {

View File

@@ -0,0 +1,24 @@
package holos
import "list"
spec: components: KubernetesObjects: [
#KubernetesObjects & {
metadata: name: "prod-secrets-namespaces"
apiObjectMap: (#APIObjects & {
apiObjects: {
// #ManagedNamespaces is the set of all namespaces across all clusters in the platform.
for k, ns in #ManagedNamespaces {
if list.Contains(ns.clusterNames, #ClusterName) {
Namespace: "\(k)": #Namespace & ns.namespace
}
}
// #PlatformNamespaces is deprecated in favor of #ManagedNamespaces.
for ns in #PlatformNamespaces {
Namespace: "\(ns.name)": #Namespace & {metadata: ns}
}
}
}).apiObjectMap
},
]

View File

@@ -1,7 +1,8 @@
package holos
// The primary istio Gateway, named default
import "list"
// The primary istio Gateway, named default
let Name = "gateway"
#InputKeys: component: Name
@@ -31,5 +32,19 @@ let LoginCert = #PlatformCerts.login
},
]
}
for k, svc in #OptionalServices {
if svc.enabled && list.Contains(svc.clusterNames, #ClusterName) {
Gateway: "\(svc.name)": #Gateway & {
metadata: name: svc.name
metadata: namespace: #TargetNamespace
spec: selector: istio: "ingressgateway"
spec: servers: [for s in svc.servers {s}]
}
for k, s in svc.servers {
ExternalSecret: "\(s.tls.credentialName)": _
}
}
}
}
}

View File

@@ -18,9 +18,7 @@ let Cert = #PlatformCerts[SecretName]
#KubernetesObjects & {
apiObjects: {
ExternalSecret: httpbin: #ExternalSecret & {
_name: Cert.spec.secretName
}
ExternalSecret: "\(Cert.spec.secretName)": _
Deployment: httpbin: #Deployment & {
metadata: Metadata
spec: selector: matchLabels: MatchLabels

View File

@@ -63,7 +63,7 @@ let RedirectMetaName = {
// https-redirect
_APIObjects: {
Gateway: {
httpsRedirect: #Gateway & {
"\(RedirectMetaName.name)": #Gateway & {
metadata: RedirectMetaName
spec: selector: GatewayLabels
spec: servers: [{
@@ -79,7 +79,7 @@ _APIObjects: {
}
}
VirtualService: {
httpsRedirect: #VirtualService & {
"\(RedirectMetaName.name)": #VirtualService & {
metadata: RedirectMetaName
spec: hosts: ["*"]
spec: gateways: [RedirectMetaName.name]

View File

@@ -93,7 +93,14 @@ provisioner get serviceaccount -A --selector=holos.run/job.name=\(NAME) --output
# Create the tokens
mkdir tokens
jq -r '.items[].metadata | "provisioner -n \\(.namespace) create token --duration=12h \\(.name) > tokens/\\(.namespace).\\(.name).jwt"' serviceaccounts.json | bash -x
kubectl get namespaces -o name > namespaces.txt
# Iterate over local namespaces
while IFS= read -r NAMESPACE; do
echo "Getting token for local cluster $NAMESPACE" >&2
jq -r '.items[] | select("namespace/"+.metadata.namespace == "'${NAMESPACE}'") | .metadata | "provisioner -n \\(.namespace) create token --duration=12h \\(.name) > tokens/\\(.namespace).\\(.name).jwt"' serviceaccounts.json | bash -x
done < namespaces.txt
# Create the secrets
mksecret tokens/*.jwt
@@ -124,6 +131,11 @@ kubectl apply --server-side=true -f secrets.yaml
resources: ["secrets"]
verbs: ["*"]
},
{
apiGroups: [""]
resources: ["namespaces"]
verbs: ["list"]
},
]
},
// Bind the Role to the ServiceAccount for the Job.

View File

@@ -1,5 +1,7 @@
package holos
import "list"
#DependsOn: _ESOCreds
#TargetNamespace: "default"
@@ -30,5 +32,12 @@ package holos
"\(Kind)": "\(NS)/\(Name)": obj
}
}
for nsName, ns in #ManagedNamespaces {
if list.Contains(ns.clusterNames, #ClusterName) {
let obj = #SecretStore & {_namespace: nsName}
SecretStore: "\(nsName)/\(obj.metadata.name)": obj
}
}
}
}

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,146 @@
package holos
#Values: {
// Vault Helm Chart Holos Values
global: {
enabled: true
// Istio handles this
tlsDisable: true
}
injector: enabled: false
server: {
image: {
// repository: "hashicorp/vault"
repository: "quay.io/holos/hashicorp/vault"
tag: "1.14.10"
// Overrides the default Image Pull Policy
pullPolicy: "IfNotPresent"
}
extraLabels: "sidecar.istio.io/inject": "true"
resources: requests: {
memory: "256Mi"
cpu: "2000m"
}
// limits:
// memory: 1024Mi
// cpu: 2000m
// For HA configuration and because we need to manually init the vault,
// we need to define custom readiness/liveness Probe settings
readinessProbe: {
enabled: true
path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
}
livenessProbe: {
enabled: true
path: "/v1/sys/health?standbyok=true"
initialDelaySeconds: 60
}
// extraEnvironmentVars is a list of extra environment variables to set with
// the stateful set. These could be used to include variables required for
// auto-unseal.
// Vault validates an incomplete chain:
// https://github.com/hashicorp/vault/issues/11318
extraEnvironmentVars: {
GOMAXPROCS: "2"
} // Set to cpu limit, see https://github.com/uber-go/automaxprocs
// extraVolumes is a list of extra volumes to mount. These will be exposed
// to Vault in the path `/vault/userconfig/<name>/`.
extraVolumes: [{
type: "secret"
name: "gcpkms-creds"
}]
// This configures the Vault Statefulset to create a PVC for audit logs.
// See https://www.vaultproject.io/docs/audit/index.html to know more
auditStorage: {
enabled: true
mountPath: "/var/log/vault"
} // for compatibility with plain debian vm location.
standalone: {
enabled: false
}
ha: {
enabled: true
replicas: 3
raft: {
enabled: true
setNodeId: true
config: """
ui = true
listener \"tcp\" {
address = \"[::]:8200\"
cluster_address = \"[::]:8201\"
# mTLS is handled by the istio sidecar
tls_disable = \"true\"
# Enable unauthenticated metrics access (necessary for Prometheus Operator)
telemetry {
unauthenticated_metrics_access = true
}
}
telemetry {
prometheus_retention_time = \"30s\"
disable_hostname = true
}
seal \"gcpckms\" {
credentials = \"/vault/userconfig/gcpkms-creds/credentials.json\"
project = \"v6-vault-f15f\"
region = \"us-west1\"
key_ring = \"vault-core\"
crypto_key = \"vault-core-unseal\"
}
# Note; the retry_join leader_api_address values come from the Stable
# Network ID feature of a Statefulset. See:
# https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id
storage \"raft\" {
path = \"/vault/data\"
retry_join {
leader_api_addr = \"http://vault-0.vault-internal:8200\"
leader_tls_servername = \"vault\"
}
retry_join {
leader_api_addr = \"http://vault-1.vault-internal:8200\"
leader_tls_servername = \"vault\"
}
retry_join {
leader_api_addr = \"http://vault-2.vault-internal:8200\"
leader_tls_servername = \"vault\"
}
autopilot {
cleanup_dead_servers = \"true\"
last_contact_threshold = \"200ms\"
last_contact_failure_threshold = \"10m\"
max_trailing_logs = 250000
min_quorum = 3
server_stabilization_time = \"10s\"
}
}
service_registration \"kubernetes\" {}
"""
}
}
}
// Vault UI (will be exposed via the service mesh)
ui: {
enabled: true
serviceType: "ClusterIP"
serviceNodePort: null
externalPort: 8200
}
}
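The `ha.replicas: 3` setting above interacts with Raft's majority rule: a three-node cluster needs two reachable voters for quorum, so it tolerates exactly one node failure, while the autopilot `min_quorum = 3` keeps dead-server cleanup from pruning the cluster below three servers. A quick sanity check of that arithmetic (plain Python, not part of the chart):

```python
def raft_quorum(n: int) -> int:
    """Minimum number of voting nodes required for a Raft majority."""
    return n // 2 + 1

replicas = 3                        # matches ha.replicas in the values above
quorum = raft_quorum(replicas)      # 2 of 3 nodes must be reachable
tolerated_failures = replicas - quorum

print(quorum, tolerated_failures)   # -> 2 1
```

Scaling `replicas` to 5 would raise the quorum to 3 and tolerate two failures, but would also require updating `min_quorum` and the `retry_join` stanzas.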


@@ -0,0 +1,75 @@
package holos
import "encoding/yaml"
import "list"
let Name = "vault"
#InputKeys: component: Name
#InputKeys: project: "core"
#TargetNamespace: "\(#InstancePrefix)-\(Name)"
let Vault = #OptionalServices[Name]
if Vault.enabled && list.Contains(Vault.clusterNames, #ClusterName) {
#HelmChart & {
namespace: #TargetNamespace
chart: {
name: Name
version: "0.25.0"
repository: {
name: "hashicorp"
url: "https://helm.releases.hashicorp.com"
}
}
values: #Values
apiObjects: {
ExternalSecret: "gcpkms-creds": _
ExternalSecret: "vault-server-cert": _
VirtualService: "\(Name)": {
metadata: name: Name
metadata: namespace: #TargetNamespace
spec: hosts: [for cert in Vault.certs {cert.spec.commonName}]
spec: gateways: ["istio-ingress/\(Name)"]
spec: http: [
{
route: [
{
destination: host: "\(Name)-active"
destination: port: number: 8200
},
]
},
]
}
}
}
#Kustomize: {
patches: [
{
target: {
group: "apps"
version: "v1"
kind: "StatefulSet"
name: Name
}
patch: yaml.Marshal(EnvPatch)
},
]
}
let EnvPatch = [
{
op: "test"
path: "/spec/template/spec/containers/0/env/4/name"
value: "VAULT_ADDR"
},
{
op: "replace"
path: "/spec/template/spec/containers/0/env/4/value"
value: "http://$(VAULT_K8S_POD_NAME):8200"
},
]
}
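The `EnvPatch` list above is a JSON Patch (RFC 6902): the `test` op guards the `replace` op, so the patch fails loudly if a future chart version reorders the container env and index `env/4` no longer names `VAULT_ADDR`. A minimal Python sketch of those semantics against a toy StatefulSet (the env entries here are hypothetical stand-ins, not the actual rendered chart):

```python
# Minimal "test then replace" JSON Patch semantics, mirroring EnvPatch above.
# The toy object stands in for the rendered StatefulSet; only the env list matters.
statefulset = {
    "spec": {"template": {"spec": {"containers": [
        {"env": [
            {"name": "HOST_IP", "value": ""},
            {"name": "POD_IP", "value": ""},
            {"name": "VAULT_K8S_POD_NAME", "value": ""},
            {"name": "VAULT_K8S_NAMESPACE", "value": ""},
            {"name": "VAULT_ADDR", "value": "https://127.0.0.1:8200"},
        ]}
    ]}}},
}

env = statefulset["spec"]["template"]["spec"]["containers"][0]["env"]

# op: test -- abort unless env/4/name is VAULT_ADDR, guarding against reordering.
assert env[4]["name"] == "VAULT_ADDR", "env list reordered; refusing to patch"

# op: replace -- point VAULT_ADDR at the pod's own listener over plain http,
# consistent with tls_disable above (TLS is handled by the istio sidecar).
env[4]["value"] = "http://$(VAULT_K8S_POD_NAME):8200"

print(env[4]["value"])  # -> http://$(VAULT_K8S_POD_NAME):8200
```

In the real pipeline kustomize applies the marshaled patch, but the guard-then-mutate behavior is the same.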


@@ -0,0 +1,3 @@
package holos
#InputKeys: project: "projects"


@@ -0,0 +1,10 @@
package holos
// Components under this directory are part of this collection
#InputKeys: project: "iam"
// Shared dependencies for all components in this collection.
#DependsOn: _Namespaces
// Common Dependencies
_Namespaces: Namespaces: name: "\(#StageName)-secrets-namespaces"


@@ -0,0 +1,101 @@
package holos
// Manage an Issuer for the database.
// Both CockroachDB and Postgres support TLS database connections via cert-manager:
// PGO: https://github.com/CrunchyData/postgres-operator-examples/tree/main/kustomize/certmanager/certman
// CRDB: https://github.com/cockroachdb/helm-charts/blob/3dcf96726ebcfe3784afb526ddcf4095a1684aea/README.md?plain=1#L196-L201
// Refer to [Using Cert Manager to Deploy TLS for Postgres on Kubernetes](https://www.crunchydata.com/blog/using-cert-manager-to-deploy-tls-for-postgres-on-kubernetes)
#InputKeys: component: "postgres-certs"
let SelfSigned = "\(_DBName)-selfsigned"
let RootCA = "\(_DBName)-root-ca"
let Orgs = ["Database"]
#KubernetesObjects & {
apiObjects: {
// Put everything in the target namespace.
[_]: {
[Name=_]: {
metadata: name: Name
metadata: namespace: #TargetNamespace
}
}
Issuer: {
"\(SelfSigned)": #Issuer & {
_description: "Self signed issuer to issue ca certs"
metadata: name: SelfSigned
spec: selfSigned: {}
}
"\(RootCA)": #Issuer & {
_description: "Root signed intermediate ca to issue mtls database certs"
metadata: name: RootCA
spec: ca: secretName: RootCA
}
}
Certificate: {
"\(RootCA)": #Certificate & {
_description: "Root CA cert for database"
metadata: name: RootCA
spec: {
commonName: RootCA
isCA: true
issuerRef: group: "cert-manager.io"
issuerRef: kind: "Issuer"
issuerRef: name: SelfSigned
privateKey: algorithm: "ECDSA"
privateKey: size: 256
secretName: RootCA
subject: organizations: Orgs
}
}
"\(_DBName)-primary-tls": #DatabaseCert & {
// PGO managed name is "<cluster name>-cluster-cert" e.g. zitadel-cluster-cert
spec: {
commonName: "\(_DBName)-primary"
dnsNames: [
commonName,
"\(commonName).\(#TargetNamespace)",
"\(commonName).\(#TargetNamespace).svc",
"\(commonName).\(#TargetNamespace).svc.cluster.local",
"localhost",
"127.0.0.1",
]
usages: ["digital signature", "key encipherment"]
}
}
"\(_DBName)-repl-tls": #DatabaseCert & {
spec: {
commonName: "_crunchyrepl"
dnsNames: [commonName]
usages: ["digital signature", "key encipherment"]
}
}
"\(_DBName)-client-tls": #DatabaseCert & {
spec: {
commonName: "\(_DBName)-client"
dnsNames: [commonName]
usages: ["digital signature", "key encipherment"]
}
}
}
}
}
#DatabaseCert: #Certificate & {
metadata: name: string
metadata: namespace: #TargetNamespace
spec: {
duration: "2160h" // 90d
renewBefore: "360h" // 15d
issuerRef: group: "cert-manager.io"
issuerRef: kind: "Issuer"
issuerRef: name: RootCA
privateKey: algorithm: "ECDSA"
privateKey: size: 256
secretName: metadata.name
subject: organizations: Orgs
}
}
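The `#DatabaseCert` duration comments can be sanity-checked: `2160h` is 90 days and `renewBefore: 360h` is 15 days, so cert-manager begins renewing each certificate roughly 75 days after issuance. A quick check of that arithmetic (plain Python, not part of the component):

```python
duration_h = 2160        # #DatabaseCert spec.duration
renew_before_h = 360     # #DatabaseCert spec.renewBefore

duration_days = duration_h / 24                    # 90.0, matching the "90d" comment
renew_before_days = renew_before_h / 24            # 15.0, matching the "15d" comment
renew_at_day = duration_days - renew_before_days   # renewal window opens ~day 75

print(duration_days, renew_before_days, renew_at_day)  # -> 90.0 15.0 75.0
```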


@@ -0,0 +1,7 @@
# Database Certs
This component issues Postgres certificates from the provisioner cluster using cert-manager.
The purpose is to define `customTLSSecret` and `customReplicationTLSSecret`, providing certificates that allow the standby to authenticate to the primary. For this type of standby, custom TLS is required.
Refer to the PGO [Streaming Standby](https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/backups-disaster-recovery/disaster-recovery#streaming-standby) tutorial.


@@ -0,0 +1,6 @@
package holos
#TargetNamespace: #InstancePrefix + "-zitadel"
// _DBName is the database name used across multiple holos components in this project
_DBName: "zitadel"

Some files were not shown because too many files have changed in this diff.