This patch uses cert-manager in the provisioner cluster to provision TLS
certs for https://login.example.com and https://httpbin.k2.example.com.
The certs are not yet synced to the clusters. The next step is to
replace the Certificate resources with ExternalSecret resources, then
remove cert-manager from the workload clusters.
This patch moves certificate management to the provisioner cluster to
centralize all secrets into the highly secured cluster. This change
also simplifies the architecture in a number of ways:
1. Certificate lifetimes are now completely independent of the cluster
lifecycle.
2. Removes the need for bi-directional sync to save cert secrets.
3. Workload clusters no longer need access to DNS.
Multiple holos components rely on kustomize to modify the output of the
upstream helm chart, for example patching a Deployment to inject the
istio sidecar.
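For example, such a patch can be expressed roughly as follows; the file
and Deployment names are illustrative, while `sidecar.istio.io/inject`
is the real istio injection label:

```yaml
# kustomization.yaml — illustrative sketch of patching helm output.
resources:
  - helm-output.yaml       # assumed name of the rendered helm template
patches:
  - target:
      kind: Deployment
      name: example        # hypothetical Deployment name
    patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: example
      spec:
        template:
          metadata:
            labels:
              sidecar.istio.io/inject: "true"
```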
The new holos CUE-based component system did not support running
kustomize after helm template. This patch adds the kustomize execution
when two fields are defined in the helm chart kind of CUE output.
The API spec is pretty loose in this patch but I'm proceeding for
expedience and to inform the final API with more use cases as more
components are migrated to cue.
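A hypothetical sketch of what the helm chart kind of CUE output might
carry; the field names here are illustrative, not the actual holos API:

```cue
// Hypothetical sketch only — real holos field names may differ.
#HelmChart: {
	kind: "HelmChart"
	// helm template output is written to this file for kustomize to consume
	resourcesFile?: string
	// file name to file content, e.g. kustomization.yaml and patches
	kustomizeFiles?: [string]: string
}
```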
CockroachDB uses TLS certs for client authentication. Issue one for
Zitadel.
With this patch Zitadel starts up but is not yet exposed with a
VirtualService.
Refer to https://zitadel.com/docs/self-hosting/manage/configure
The istio default Gateway is the basis for what will become a dynamic
set of server entries specified from cue project data integrated with
extauthz.
For now we simply need to get the identity provider up and running as
the first step toward identity and access management.
This patch migrates the https redirect and the
istio-ingressgateway-loopback Service from
`holos-infra/components/core/istio/ingress/templates/deployment`.
This patch adds the standard istiod controller, which depends on
istio-base.
The holos reference platform heavily customizes the meshconfig, so the
upstream istio ConfigMap is disabled in the helm chart values. The mesh
config is generated from cue data defined in the controller holos
component.
Note: This patch adds a static configuration for the istio meshconfig in
the meshconfig.cue file. The extauthz providers are a core piece of
functionality in the holos reference platform and a key motivation of
moving to CUE from Helm is the need to dynamically generate the
meshconfig from a platform scoped set of projects and services across
multiple clusters.
For expedience this dynamic generation is not part of this patch; it is
expected to replace the static meshconfig once the cluster is more fully
configured with the new CUE-based holos command line interface.
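For illustration, a static extauthz provider entry in the mesh config
looks roughly like this; the provider name and service address are
assumptions, while `extensionProviders` and `envoyExtAuthzGrpc` are real
istio MeshConfig fields:

```yaml
meshConfig:
  extensionProviders:
    - name: ext-authz                                     # assumed name
      envoyExtAuthzGrpc:
        service: ext-authz.istio-system.svc.cluster.local # assumed address
        port: 9191
```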
Using a list to merge dependencies through the tree from root to leaf is
challenging. This patch uses a #DependsOn struct instead, then builds
the list of dependencies for Flux from the struct field values.
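A minimal sketch of the approach, with assumed field names:

```cue
// A struct merges cleanly as CUE unifies files from root to leaf;
// a list would have to be appended to positionally.
#DependsOn: [Name=string]: name: Name

dependsOn: #DependsOn
dependsOn: namespaces:     _
dependsOn: "cert-manager": _

// The flux Kustomization wants a list; build it from the struct fields.
spec: dependsOn: [for _, v in dependsOn {name: v.name}]
```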
This enables the dns01 Let's Encrypt ACME solver, which is heavily used
in the reference platform.
Secret migrated from Vault using:
```bash
vault kv get -format=json -field data kv/k8s/ns/cert-manager/cloudflare-api-token-secret \
| holos create secret --namespace cert-manager cloudflare-api-token-secret --data-stdin --append-hash=false
```
It makes sense to manage the SecretStore along with the Namespace in the
platform namespaces holos component. Otherwise, the first component
that needs an ExternalSecret also needs to manage a SecretStore, which
creates an artificial dependency for subsequent components that also
need a SecretStore in the same namespace.
Best to just have all components depend on the namespaces component.
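For reference, a SecretStore managed alongside its Namespace might look
roughly like this; the store name and server URL are assumptions, while
the `provider.kubernetes` fields follow the external-secrets API:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: default              # assumed name
  namespace: cert-manager
spec:
  provider:
    kubernetes:
      remoteNamespace: cert-manager
      server:
        url: https://provisioner.example.com   # assumed address
      auth:
        token:
          bearerToken:
            name: eso-reader  # Secret holding the refreshed id token
            key: token
```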
This patch partially adds the Let's Encrypt issuers. The platform data
expands to take a contact email and a cloudflare login email.
The external secret needs to be added next.
Straightforward helm install with no customization.
This patch also adds a "Skip" output kind which allows intermediate cue
files in the tree to signal holos to skip over the instance. This
enables constraints to be added at intermediate layers without build
errors.
Add the recommended labels mapping to holos stage, project, and
component names. Project will eventually be renamed to "collection" or
something.
Example:
```
app.kubernetes.io/part-of: prod
app.kubernetes.io/name: secrets
app.kubernetes.io/component: validate
app.kubernetes.io/instance: prod-secrets-validate
```
Also sort the api objects produced from cue so the output of the `holos
render` command is stable for git commits.
This patch changes the interface between CUE and Holos to remove the
content field and replace it with an api object map. The map is a
`map[string]map[string]string` with the rendered yaml as the value of a
kind/name nesting.
This structure enables better error messages: cue disjunction errors
indicate the type and the name of the resource instead of just the list
index number.
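The map-plus-sort approach can be sketched in Go as follows; the map
type matches the commit, while the helper name and sample values are
illustrative:

```go
// Sketch of deterministic rendering from the kind/name api object map.
// Sorting kinds and names keeps `holos render` output stable for git.
package main

import (
	"fmt"
	"sort"
)

// sortedKeys returns the keys of m in sorted order.
func sortedKeys[V any](m map[string]V) []string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}

func main() {
	// map[kind]map[name]renderedYAML, per the commit message
	apiObjectMap := map[string]map[string]string{
		"Service":    {"zitadel": "..."},
		"Deployment": {"zitadel": "...", "cockroachdb": "..."},
	}
	for _, kind := range sortedKeys(apiObjectMap) {
		for _, name := range sortedKeys(apiObjectMap[kind]) {
			fmt.Printf("---\n# %s/%s\n%s\n", kind, name, apiObjectMap[kind][name])
		}
	}
}
```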
Without this patch the secret data was nested under a key with the same
name as the secret name. This caused the ceph controller to not find
the values.
This patch changes the golden path for #ExternalSecret to copy all data
keys 1:1 from the external to the target in the cluster.
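With the change, the golden path renders roughly the following shape;
the resource names are illustrative, while `dataFrom.extract` is the
real external-secrets mechanism for copying every key 1:1:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: csi-rbd-secret       # illustrative name
  namespace: ceph-system
spec:
  secretStoreRef:
    kind: SecretStore
    name: default
  target:
    name: csi-rbd-secret     # keys land at the top level, not nested
  dataFrom:
    - extract:
        key: csi-rbd-secret  # remote secret in the provisioner cluster
```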
Without this patch all clusters would use the same ceph secret from the
provisioner cluster. This is a problem because ceph credentials are
unique per cluster.
This patch renames the ceph secret to have a cluster name prefix.
The secret is created with:
```bash
vault kv get -format=json -field data kv/k2/kube-namespace/ceph-csi-rbd/csi-rbd-secret \
| holos create secret --namespace ceph-system k2-ceph-csi-rbd --cluster-name=k2 --data-stdin --append-hash=false
```
This patch adds the `pod-security.kubernetes.io/enforce: privileged`
label to the ceph-system namespace.
The Namespace resources are managed all over the map; it would be a good
idea to consolidate the PlatformNamespaces data into one well known
place for the entire platform. Eschewing for now.
This patch adds the ceph-csi-rbd helm chart component to the metal
cluster type. The purpose is to enable PersistentVolumeClaims on metal
clusters.
Cloud clusters like GKE and EKS are expected to skip rendering the metal
type.
Helm values are handled with CUE. The ceph secret is managed as an
ExternalSecret resource, appended to the rendered output by CUE and the
holos CLI.
Use:
```
❯ holos render --cluster-name=k2 ~/workspace/holos-run/holos/docs/examples/platforms/reference/clusters/metal/...
2:45PM INF render.go:40 rendered prod-metal-ceph version=0.47.0 status=ok action=rendered name=prod-metal-ceph
```
This patch validates secrets are synced from the provisioner cluster to
a workload cluster. This verifies the eso-creds-refresher Job, the
external secrets operator, and related components.
Refer to 0ae58858f5 for the corresponding commit on the k2 cluster.
This patch prints out the cue file and line numbers when a cue error
contains multiple go errors to unwrap.
For example:
```
❯ holos render --cluster-name=k2 ~/workspace/holos-run/holos/docs/examples/platforms/reference/clusters/workload/...
3:31PM ERR could not execute version=0.46.0 err="could not decode: content: error in call to encoding/yaml.MarshalStream: incomplete value string (and 1 more errors)" loc=builder.go:212
content: error in call to encoding/yaml.MarshalStream: incomplete value string:
/home/jeff/workspace/holos-run/holos/docs/examples/schema.cue:199:11
/home/jeff/workspace/holos-run/holos/docs/examples/cue.mod/gen/external-secrets.io/externalsecret/v1beta1/types_gen.cue:83:14
```
This patch adds the `eso-creds-refresher` CronJob which executes every 8
hours in the holos-system namespace of each workload cluster. The job
creates Secrets with a `token` field representing the id token
credential for a SecretStore to use when synchronizing secrets to and
from the provisioner cluster.
Service accounts in the provisioner cluster are selected with
`--selector=holos.run/job.name=eso-creds-refresher`.
Each selected service account has a token issued with a 12 hour TTL,
stored in a Secret matching the service account name in the same
namespace in the workload cluster.
The job takes about 25 seconds to run once the image is cached on the
node.
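The per-account loop the Job performs can be sketched roughly as
follows; the script body, contexts, and jq step are assumptions based on
the commit text, while `kubectl create token --duration` is the real
command for issuing a bounded-lifetime service account token:

```bash
# Hedged sketch of the refresh loop; not the actual Job script.
kubectl --context=provisioner get serviceaccount -A \
  --selector=holos.run/job.name=eso-creds-refresher --output=json \
  | jq -r '.items[] | "\(.metadata.namespace) \(.metadata.name)"' \
  | while read -r namespace name; do
      # issue a token with a 12 hour expiration in the provisioner cluster
      token="$(kubectl --context=provisioner create token "$name" \
        --namespace "$namespace" --duration=12h)"
      # store it under the same name and namespace in the workload cluster
      kubectl --context=workload create secret generic "$name" \
        --namespace "$namespace" --from-literal=token="$token" \
        --dry-run=client -o yaml | kubectl --context=workload apply -f -
    done
```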
Without this patch the Job on a workload cluster fails with:
```
+ kubectl get serviceaccount -A --selector=holos.run/job.name=eso-creds-refresher --output=json
Error from server (Forbidden): serviceaccounts is forbidden: User
"eso-creds-refresher@holos-run.iam.gserviceaccount.com" cannot list
resource "serviceaccounts" in API group "" at the cluster scope:
requires one of ["container.serviceAccounts.list"] permission(s).
```
This label is intended for the Job to select which service accounts to
issue tokens for. For example:
```bash
kubectl get serviceaccount -A --selector=holos.run/job.name=eso-creds-refresher --output=json
```
Without this patch it is difficult to navigate the structure of the api
object configuration because the objects are positional elements in a
list.
This patch extracts the configuration of the eso-reader and eso-writer
ServiceAccount, Role, and RoleBinding structs into a definition that
behaves like a function. The individual objects are fields of the
struct instead of positional elements in a list.
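The shape is roughly the following; the definition and field names are
illustrative:

```cue
// Illustrative sketch of a definition used like a function.
#CredsRBAC: {
	Name: string

	serviceAccount: {
		apiVersion: "v1"
		kind:       "ServiceAccount"
		metadata: name: Name
	}
	role: {
		apiVersion: "rbac.authorization.k8s.io/v1"
		kind:       "Role"
		metadata: name: Name
	}
	roleBinding: {
		apiVersion: "rbac.authorization.k8s.io/v1"
		kind:       "RoleBinding"
		metadata: name: Name
	}
}

// Named fields are easier to navigate than list indices.
esoReader: #CredsRBAC & {Name: "eso-reader"}
esoWriter: #CredsRBAC & {Name: "eso-writer"}
```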
This patch adds a ConfigMap and Pod to the eso-creds-refresher
component. The Pod executes the gcloud container, impersonates the
eso-creds-refresher iam service account using workload identity, then
authenticates to the remote provisioner cluster.
This is the foundation for a script to automatically create Secret API
objects in a workload cluster, each holding a kubernetes service account
token that ESO SecretStore resources can use to fetch secrets from the
provisioner cluster.
Once we have that script in place we can turn this Pod into a Job and
replace Vault.
The provisioner cluster is a worker-less autopilot cluster that provides
secrets to other clusters in the platform. The `eso-creds-refresher`
Job in the holos-system namespace of each other cluster refreshes
service account tokens for SecretStores.
This patch adds the IAM structure for the Job implemented by Namespace,
ServiceAccount, Role, and RoleBinding api objects.
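The per-namespace objects look roughly like this; the namespace is an
assumed example and the rule contents are assumptions, while the names
and the selector label follow the commits:

```yaml
# Hedged sketch of the IAM structure for one namespace.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eso-reader
  namespace: cert-manager           # assumed namespace
  labels:
    holos.run/job.name: eso-creds-refresher
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: eso-reader
  namespace: cert-manager
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]  # assumed rule contents
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: eso-reader
  namespace: cert-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: eso-reader
subjects:
  - kind: ServiceAccount
    name: eso-reader
    namespace: cert-manager
```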