Compare commits


16 Commits

Author SHA1 Message Date
Jeff McCune
990c82432c (#40) Fix go releaser with standard arc runners
Standard arc runner image is missing gpg and git.
2024-03-07 22:59:15 -08:00
Jeff McCune
e3673b594c Merge pull request #41 from holos-run/jeff/40-actions-runners
(#40) Actions Runner Controller (Runner Scale Sets)
2024-03-07 22:43:16 -08:00
Jeff McCune
f8cf278a24 (#40) bump to v0.54.0 2024-03-07 22:37:51 -08:00
Jeff McCune
b0bc596a49 (#40) Update workflow to run on arc runner set
Matches the value of the github/arc/runner component helm release, which
is the installation name.
2024-03-07 22:37:51 -08:00
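The relationship described above can be sketched with a minimal workflow fragment (job and step names are illustrative): the `runs-on` value must equal the helm release name, because the release name is the installation name of the runner scale set.

```yaml
# Hypothetical workflow fragment. "gha-rs" is the helm release
# (installation) name of the runner scale set component.
jobs:
  build:
    runs-on: gha-rs
    steps:
      - uses: actions/checkout@v4
```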
Jeff McCune
4501ceec05 (#40) Use baseline security context for GitHub arc
Without this patch the arc controller fails to create a listener.  The
template for the listener doesn't appear to be configurable from the
chart.

We could patch the listener pod template with kustomize; do this as a
follow-up feature.

With this patch we get the expected two pods in the runner system
namespace:

```
❯ k get pods
NAME                                 READY   STATUS    RESTARTS   AGE
gha-rs-7db9c9f7-listener             1/1     Running   0          43s
gha-rs-controller-56bb9c77d9-6tjch   1/1     Running   0          8s
```
2024-03-07 22:37:50 -08:00
Jeff McCune
4183fdfd42 (#40) Note the helm release name is the installation name
Which is the value of the `runs-on` field in workflows.
2024-03-07 22:37:50 -08:00
Jeff McCune
2595793019 (#40) Do not force the namespace with kustomize
To avoid confining the custom resource definitions to a namespace.
2024-03-07 22:37:50 -08:00
Jeff McCune
aa3d1914b1 (#40) Manage the actions runner scale sets 2024-03-07 22:37:49 -08:00
Jeff McCune
679ddbb6bf (#40) Use Restricted pod security for arc runners
Might as well put the restriction in place before deploying the runners
to see what breaks.
2024-03-07 22:37:49 -08:00
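For reference, "Restricted" enforcement is typically applied at the namespace level with the upstream Pod Security Standards labels; whether holos wires it up exactly this way is an assumption, this is only a sketch:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: arc-runner
  labels:
    pod-security.kubernetes.io/enforce: restricted
```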
Jeff McCune
b1d7d07a04 (#40) Add field for helm chart release name
The resource names for the arc controller are too long:

❯ k get pods -n arc-systems
NAME                                                              READY   STATUS    RESTARTS   AGE
gha-runner-scale-set-controller-gha-rs-controller-6bdf45bd6jx5n   1/1     Running   0          59m

Solve the problem by allowing components to set the release name to
`gha-rs-controller` which requires an additional field from the cue code
to differentiate from the chart name.
2024-03-07 20:40:31 -08:00
Jeff McCune
5f58263232 (#40) Create arc namespaces
Named after the upstream install guide, though arc-systems makes me
twitch for arc-system.
2024-03-07 20:37:35 -08:00
Jeff McCune
b6bdd072f7 (#40) Include crds when running helm template
Might need to make this a configurable option, but for now just always
do it.
2024-03-07 20:37:35 -08:00
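Including crds corresponds to passing helm's `--include-crds` flag at render time; roughly (release name, chart path, and values path are illustrative):

```shell
# Render manifests including the chart's crds/ directory.
helm template my-release ./cached/chart --include-crds --values values.yaml
```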
Jeff McCune
509f2141ac (#40) Actions Runner Controller
This patch adds support for helm oci images which are used by the
gha-runner-scale-set-controller.

For example, arc is installed normally with:

```
NAMESPACE="arc-systems"
helm install arc \
    --namespace "${NAMESPACE}" \
    --create-namespace \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
```

This patch caches the oci image in the same way as the repository based
method.

Refer to: https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/quickstart-for-actions-runner-controller
2024-03-07 20:37:35 -08:00
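Caching the oci chart can be sketched with `helm pull`, the same mechanism used for repository-based charts (the destination path is illustrative):

```shell
# Pull and untar the oci chart into a local cache directory.
helm pull oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller \
    --version 0.8.3 --destination ./cache --untar
```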
Jeff McCune
4c2bc34d58 (#32) SecretStore Component
Separate the SecretStore resources from the namespaces component because
it creates a deadlock.  The secretstore crds don't get applied until the
eso component is managed.

The namespaces component should have nothing but core api objects, no
custom resources.
2024-03-07 16:01:22 -08:00
Jeff McCune
d831070f53 Trim trailing newlines from files when creating secrets
Without this patch, the pattern of echoing data (without -n) or editing
files in a directory to represent the keys of a secret results in a
trailing newline in the kubernetes Secret.

This patch trims off the trailing newline by default, with the option to
preserve it with the --trim-trailing-newlines=false flag.
2024-03-06 11:21:32 -08:00
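The trimming behavior can be sketched in a few lines of Go (the function name is illustrative; the real change lives in the secret create command shown in the diff below):

```go
package main

import (
	"bytes"
	"fmt"
)

// trimTrailingNewlines mimics the new default: strip trailing CR/LF
// bytes from a secret value read from a file or from echo output.
func trimTrailingNewlines(data []byte) []byte {
	return bytes.TrimRight(data, "\r\n")
}

func main() {
	// "echo secret > password" writes "secret\n"; the Secret should hold "secret".
	fmt.Printf("%q\n", trimTrailingNewlines([]byte("secret\n")))
}
```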
Jeff McCune
340715f76c (#36) Provide certs to Cockroach DB and Zitadel with ExternalSecrets
This patch switches CockroachDB to use certs provided by ExternalSecrets
instead of managing Certificate resources in-cluster from the upstream
helm chart.

This paves the way for multi-cluster replication by moving certificates
outside of the lifecycle of the workload cluster cockroach db operates
within.

Closes: #36
2024-03-06 10:38:47 -08:00
41 changed files with 578 additions and 104 deletions

View File

@@ -15,7 +15,7 @@ permissions:
jobs:
golangci:
name: lint
runs-on: [self-hosted, k8s]
runs-on: gha-rs
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5

View File

@@ -2,17 +2,20 @@ name: Release
on:
push:
# Run only against tags
tags:
- '*'
branches:
- release
permissions:
contents: write
jobs:
goreleaser:
runs-on: [self-hosted, k8s]
runs-on: gha-rs
steps:
- name: Provide GPG and Git
run: sudo apt update && sudo apt -qq -y install gnupg git
- name: Checkout
uses: actions/checkout@v4
with:

View File

@@ -13,7 +13,7 @@ permissions:
jobs:
test:
runs-on: [self-hosted, k8s]
runs-on: gha-rs
steps:
- name: Checkout code
uses: actions/checkout@v4
@@ -23,6 +23,9 @@ jobs:
with:
go-version: stable
- name: Provide unzip for Helm
run: sudo apt update && sudo apt -qq -y install curl zip unzip tar bzip2
- name: Set up Helm
uses: azure/setup-helm@v4.1.0
with:

View File

@@ -16,6 +16,7 @@ namespace: "zitadel"
chart: {
name: "zitadel"
version: "7.9.0"
release: name
repository: {
name: "zitadel"
url: "https://charts.zitadel.com"

View File

@@ -0,0 +1,20 @@
package holos
#InputKeys: component: "crdb"
#HelmChart & {
namespace: #TargetNamespace
chart: {
name: "cockroachdb"
version: "11.2.3"
repository: {
name: "cockroachdb"
url: "https://charts.cockroachdb.com/"
}
}
values: #Values
apiObjects: {
ExternalSecret: node: #ExternalSecret & {_name: "cockroachdb-node"}
ExternalSecret: root: #ExternalSecret & {_name: "cockroachdb-root"}
}
}

View File

@@ -478,7 +478,7 @@ package holos
copyCerts: image: "busybox"
certs: {
// Bring your own certs scenario. If provided, tls.init section will be ignored.
provided: false
provided: true | *false
// Secret name for the client root cert.
clientRootSecret: "cockroachdb-root"
// Secret name for node cert.
@@ -487,7 +487,7 @@ package holos
caSecret: "cockroach-ca"
// Enable if the secret is a dedicated TLS.
// TLS secrets are created by cert-manager, for example.
tlsSecret: false
tlsSecret: true | *false
// Enable if you want cockroach db to create its own certificates
selfSigner: {
// If set, the cockroach db will generate its own certificates

View File

@@ -10,11 +10,9 @@ package holos
certs: {
// https://github.com/cockroachdb/helm-charts/blob/3dcf96726ebcfe3784afb526ddcf4095a1684aea/README.md?plain=1#L204-L215
selfSigner: enabled: false
certManager: true
certManagerIssuer: {
kind: "Issuer"
name: #ComponentName
}
certManager: false
provided: true
tlsSecret: true
}
}

View File

@@ -24,23 +24,8 @@ let Name = "zitadel"
ExternalSecret: masterkey: #ExternalSecret & {
_name: "zitadel-masterkey"
}
Certificate: zitadel: #Certificate & {
metadata: name: "crdb-zitadel-client"
metadata: namespace: #TargetNamespace
spec: {
commonName: "zitadel"
issuerRef: {
group: "cert-manager.io"
kind: "Issuer"
name: "crdb-ca-issuer"
}
privateKey: algorithm: "RSA"
privateKey: size: 2048
renewBefore: "48h0m0s"
secretName: "cockroachdb-zitadel"
subject: organizations: ["Cockroach"]
usages: ["digital signature", "key encipherment", "client auth"]
}
ExternalSecret: zitadel: #ExternalSecret & {
_name: "cockroachdb-zitadel"
}
VirtualService: zitadel: #VirtualService & {
metadata: name: Name

View File

@@ -85,7 +85,24 @@ package holos
usages: ["digital signature", "key encipherment", "client auth"]
}
}
}
Certificate: zitadel: #Certificate & {
metadata: name: "crdb-zitadel-client"
metadata: namespace: #TargetNamespace
spec: {
commonName: "zitadel"
issuerRef: {
group: "cert-manager.io"
kind: "Issuer"
name: "crdb-ca-issuer"
}
privateKey: algorithm: "RSA"
privateKey: size: 2048
renewBefore: "48h0m0s"
secretName: "cockroachdb-zitadel"
subject: organizations: ["Cockroach"]
usages: ["digital signature", "key encipherment", "client auth"]
}
}
}
}

View File

@@ -0,0 +1,9 @@
package holos
// GitHub Actions Runner Controller
#InputKeys: project: "github"
#DependsOn: Namespaces: name: "prod-secrets-namespaces"
#TargetNamespace: #InputKeys.component
#HelmChart: namespace: #TargetNamespace
#HelmChart: chart: version: "0.8.3"

View File

@@ -0,0 +1,30 @@
package holos
#InputKeys: component: "arc-runner"
#Kustomization: spec: targetNamespace: #TargetNamespace
#HelmChart & {
values: {
#Values
controllerServiceAccount: name: "gha-rs-controller"
controllerServiceAccount: namespace: "arc-system"
githubConfigSecret: "controller-manager"
githubConfigUrl: "https://github.com/" + #Platform.org.github.orgs.primary.name
}
apiObjects: {
ExternalSecret: controller: #ExternalSecret & {
_name: values.githubConfigSecret
}
}
chart: {
// Match the gha-base-name in the chart _helpers.tpl to avoid long full names.
// NOTE: Unfortunately the INSTALLATION_NAME is used as the helm release
// name and GitHub removed support for runner labels, so the only way to
// specify which runner a workflow runs on is using this helm release name.
// The quote is "Update the INSTALLATION_NAME value carefully. You will use
// the installation name as the value of runs-on in your workflows." Refer to
// https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/quickstart-for-actions-runner-controller
release: "gha-rs"
name: "oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set"
}
}

View File

@@ -0,0 +1,192 @@
package holos
#Values: {
//# githubConfigUrl is the GitHub url for where you want to configure runners
//# ex: https://github.com/myorg/myrepo or https://github.com/myorg
githubConfigUrl: string | *""
//# githubConfigSecret is the k8s secret to use when authenticating with the GitHub API.
//# You can choose to use GitHub App or a PAT token
githubConfigSecret: string | {
//## GitHub Apps Configuration
//# NOTE: IDs MUST be strings, use quotes
//github_app_id: ""
//github_app_installation_id: ""
//github_app_private_key: |
//## GitHub PAT Configuration
github_token: ""
}
//# If you have a pre-defined Kubernetes secret in the same namespace the gha-runner-scale-set is going to deploy to,
//# you can also reference it via `githubConfigSecret: pre-defined-secret`.
//# You need to make sure your predefined secret has all the required secret data set properly.
//# For a pre-defined secret using GitHub PAT, the secret needs to be created like this:
//# > kubectl create secret generic pre-defined-secret --namespace=my_namespace --from-literal=github_token='ghp_your_pat'
//# For a pre-defined secret using GitHub App, the secret needs to be created like this:
//# > kubectl create secret generic pre-defined-secret --namespace=my_namespace --from-literal=github_app_id=123456 --from-literal=github_app_installation_id=654321 --from-literal=github_app_private_key='-----BEGIN CERTIFICATE-----*******'
// githubConfigSecret: pre-defined-secret
//# proxy can be used to define proxy settings that will be used by the
//# controller, the listener and the runner of this scale set.
//
// proxy:
// http:
// url: http://proxy.com:1234
// credentialSecretRef: proxy-auth # a secret with `username` and `password` keys
// https:
// url: http://proxy.com:1234
// credentialSecretRef: proxy-auth # a secret with `username` and `password` keys
// noProxy:
// - example.com
// - example.org
//# maxRunners is the max number of runners the autoscaling runner set will scale up to.
// maxRunners: 5
//# minRunners is the min number of idle runners. The target number of runners created will be
//# calculated as a sum of minRunners and the number of jobs assigned to the scale set.
// minRunners: 0
// runnerGroup: "default"
//# name of the runner scale set to create. Defaults to the helm release name
// runnerScaleSetName: ""
//# A self-signed CA certificate for communication with the GitHub server can be
//# provided using a config map key selector. If `runnerMountPath` is set, for
//# each runner pod ARC will:
//# - create a `github-server-tls-cert` volume containing the certificate
//# specified in `certificateFrom`
//# - mount that volume on path `runnerMountPath`/{certificate name}
//# - set NODE_EXTRA_CA_CERTS environment variable to that same path
//# - set RUNNER_UPDATE_CA_CERTS environment variable to "1" (as of version
//# 2.303.0 this will instruct the runner to reload certificates on the host)
//#
//# If any of the above had already been set by the user in the runner pod
//# template, ARC will observe those and not overwrite them.
//# Example configuration:
//
// githubServerTLS:
// certificateFrom:
// configMapKeyRef:
// name: config-map-name
// key: ca.crt
// runnerMountPath: /usr/local/share/ca-certificates/
//# Container mode is an object that provides out-of-box configuration
//# for dind and kubernetes mode. Template will be modified as documented under the
//# template object.
//#
//# If any customization is required for dind or kubernetes mode, containerMode should remain
//# empty, and configuration should be applied to the template.
// containerMode:
// type: "dind" ## type can be set to dind or kubernetes
// ## the following is required when containerMode.type=kubernetes
// kubernetesModeWorkVolumeClaim:
// accessModes: ["ReadWriteOnce"]
// # For local testing, use https://github.com/openebs/dynamic-localpv-provisioner/blob/develop/docs/quickstart.md to provide dynamic provision volume with storageClassName: openebs-hostpath
// storageClassName: "dynamic-blob-storage"
// resources:
// requests:
// storage: 1Gi
// kubernetesModeServiceAccount:
// annotations:
//# template is the PodSpec for each listener Pod
//# For reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec
// listenerTemplate:
// spec:
// containers:
// # Use this section to append additional configuration to the listener container.
// # If you change the name of the container, the configuration will not be applied to the listener,
// # and it will be treated as a side-car container.
// - name: listener
// securityContext:
// runAsUser: 1000
// # Use this section to add the configuration of a side-car container.
// # Comment it out or remove it if you don't need it.
// # Spec for this container will be applied as is without any modifications.
// - name: side-car
// image: example-sidecar
//# template is the PodSpec for each runner Pod
//# For reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec
template: {
//# template.spec will be modified if you change the container mode
//# with containerMode.type=dind, we will populate the template.spec with following pod spec
//# template:
//# spec:
//# initContainers:
//# - name: init-dind-externals
//# image: ghcr.io/actions/actions-runner:latest
//# command: ["cp", "-r", "-v", "/home/runner/externals/.", "/home/runner/tmpDir/"]
//# volumeMounts:
//# - name: dind-externals
//# mountPath: /home/runner/tmpDir
//# containers:
//# - name: runner
//# image: ghcr.io/actions/actions-runner:latest
//# command: ["/home/runner/run.sh"]
//# env:
//# - name: DOCKER_HOST
//# value: unix:///run/docker/docker.sock
//# volumeMounts:
//# - name: work
//# mountPath: /home/runner/_work
//# - name: dind-sock
//# mountPath: /run/docker
//# readOnly: true
//# - name: dind
//# image: docker:dind
//# args:
//# - dockerd
//# - --host=unix:///run/docker/docker.sock
//# - --group=$(DOCKER_GROUP_GID)
//# env:
//# - name: DOCKER_GROUP_GID
//# value: "123"
//# securityContext:
//# privileged: true
//# volumeMounts:
//# - name: work
//# mountPath: /home/runner/_work
//# - name: dind-sock
//# mountPath: /run/docker
//# - name: dind-externals
//# mountPath: /home/runner/externals
//# volumes:
//# - name: work
//# emptyDir: {}
//# - name: dind-sock
//# emptyDir: {}
//# - name: dind-externals
//# emptyDir: {}
//#####################################################################################################
//# with containerMode.type=kubernetes, we will populate the template.spec with following pod spec
//# template:
//# spec:
//# containers:
//# - name: runner
//# image: ghcr.io/actions/actions-runner:latest
//# command: ["/home/runner/run.sh"]
//# env:
//# - name: ACTIONS_RUNNER_CONTAINER_HOOKS
//# value: /home/runner/k8s/index.js
//# - name: ACTIONS_RUNNER_POD_NAME
//# valueFrom:
//# fieldRef:
//# fieldPath: metadata.name
//# - name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
//# value: "true"
//# volumeMounts:
//# - name: work
//# mountPath: /home/runner/_work
//# volumes:
//# - name: work
//# ephemeral:
//# volumeClaimTemplate:
//# spec:
//# accessModes: [ "ReadWriteOnce" ]
//# storageClassName: "local-path"
//# resources:
//# requests:
//# storage: 1Gi
spec: {
containers: [{
name: "runner"
image: "ghcr.io/actions/actions-runner:latest"
command: ["/home/runner/run.sh"]
}]
}
}
}

View File

@@ -0,0 +1,15 @@
package holos
#TargetNamespace: "arc-system"
#InputKeys: component: "arc-system"
#HelmChart & {
values: #Values & #DefaultSecurityContext
namespace: #TargetNamespace
chart: {
// Match the gha-base-name in the chart _helpers.tpl to avoid long full names.
release: "gha-rs-controller"
name: "oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller"
version: "0.8.3"
}
}

View File

@@ -0,0 +1,127 @@
package holos
#Values: {
// Default values for gha-runner-scale-set-controller.
// This is a YAML-formatted file.
// Declare variables to be passed into your templates.
labels: {}
// leaderElection will be enabled when replicaCount>1,
// so only one replica will be in charge of reconciliation at a given time.
// leaderElectionId will be set to {{ define gha-runner-scale-set-controller.fullname }}.
replicaCount: 1
image: {
repository: "ghcr.io/actions/gha-runner-scale-set-controller"
pullPolicy: "IfNotPresent"
// Overrides the image tag whose default is the chart appVersion.
tag: ""
}
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
env: null
//# Define environment variables for the controller pod
// - name: "ENV_VAR_NAME_1"
// value: "ENV_VAR_VALUE_1"
// - name: "ENV_VAR_NAME_2"
// valueFrom:
// secretKeyRef:
// key: ENV_VAR_NAME_2
// name: secret-name
// optional: true
serviceAccount: {
// Specifies whether a service account should be created for running the controller pod
create: true
// Annotations to add to the service account
annotations: {}
// The name of the service account to use.
// If not set and create is true, a name is generated using the fullname template
// You can not use the default service account for this.
name: ""
}
podAnnotations: {}
podLabels: {}
podSecurityContext: {}
// fsGroup: 2000
securityContext: {...}
// capabilities:
// drop:
// - ALL
// readOnlyRootFilesystem: true
// runAsNonRoot: true
// runAsUser: 1000
resources: {}
//# We usually recommend not to specify default resources and to leave this as a conscious
//# choice for the user. This also increases chances charts run on environments with little
//# resources, such as Minikube. If you do want to specify resources, uncomment the following
//# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
// limits:
// cpu: 100m
// memory: 128Mi
// requests:
// cpu: 100m
// memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
// Mount volumes in the container.
volumes: []
volumeMounts: []
// Leverage a PriorityClass to ensure your pods survive resource shortages
// ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
// PriorityClass: system-cluster-critical
priorityClassName: ""
//# If `metrics:` object is not provided, or commented out, the following flags
//# will be applied to the controller-manager and listener pods with empty values:
//# `--metrics-addr`, `--listener-metrics-addr`, `--listener-metrics-endpoint`.
//# This will disable metrics.
//#
//# To enable metrics, uncomment the following lines.
// metrics:
// controllerManagerAddr: ":8080"
// listenerAddr: ":8080"
// listenerEndpoint: "/metrics"
flags: {
//# Log level can be set here with one of the following values: "debug", "info", "warn", "error".
//# Defaults to "debug".
logLevel: "debug"
//# Log format can be set with one of the following values: "text", "json"
//# Defaults to "text"
logFormat: "text"
//# Restricts the controller to only watch resources in the desired namespace.
//# Defaults to watch all namespaces when unset.
// watchSingleNamespace: ""
//# Defines how the controller should handle upgrades while having running jobs.
//#
//# The strategies available are:
//# - "immediate": (default) The controller will immediately apply the change causing the
//# recreation of the listener and ephemeral runner set. This can lead to an
//# overprovisioning of runners, if there are pending / running jobs. This should not
//# be a problem at a small scale, but it could lead to a significant increase of
//# resources if you have a lot of jobs running concurrently.
//#
//# - "eventual": The controller will remove the listener and ephemeral runner set
//# immediately, but will not recreate them (to apply changes) until all
//# pending / running jobs have completed.
//# This can lead to a longer time to apply the change but it will ensure
//# that you don't have any overprovisioning of runners.
updateStrategy: "immediate"
}
}

View File

@@ -1,26 +0,0 @@
package holos
#InputKeys: component: "crdb"
#HelmChart & {
namespace: #TargetNamespace
chart: {
name: "cockroachdb"
version: "11.2.3"
repository: {
name: "cockroachdb"
url: "https://charts.cockroachdb.com/"
}
}
values: #Values
apiObjects: {
Issuer: {
// https://github.com/cockroachdb/helm-charts/blob/3dcf96726ebcfe3784afb526ddcf4095a1684aea/README.md?plain=1#L196-L201
cockroachdb: #Issuer & {
metadata: name: #ComponentName
metadata: namespace: #TargetNamespace
spec: selfSigned: {}
}
}
}
}

View File

@@ -526,7 +526,7 @@ package holos
base: {
// For istioctl usage to disable istio config crds in base
enableIstioConfigCRDs: true
enableIstioConfigCRDs: *true | false
// If enabled, gateway-api types will be validated using the standard upstream validation logic.
// This is an alternative to deploying the standalone validation server the project provides.

View File

@@ -16,8 +16,8 @@ package holos
remotePilotAddress: ""
}
base: {
// Include the CRDs in the helm template output
enableCRDTemplates: true
// holos includes crd templates with the --include-crds helm flag.
enableCRDTemplates: false
// Validation webhook configuration url
// For example: https://$remotePilotAddress:15017/validate
validationURL: ""

View File

@@ -2,6 +2,15 @@ package holos
import "encoding/json"
#DependsOn: _ESO
#InputKeys: {
project: "secrets"
component: "eso-creds-refresher"
}
#TargetNamespace: #CredsRefresher.namespace
// output kubernetes api objects for holos
#KubernetesObjects & {
apiObjects: {
@@ -13,15 +22,6 @@ import "encoding/json"
}
}
#InputKeys: {
project: "secrets"
component: "eso-creds-refresher"
}
#TargetNamespace: #CredsRefresher.namespace
#DependsOn: Namespaces: name: #InstancePrefix + "-namespaces"
let NAME = #CredsRefresher.name
let AUD = "//iam.googleapis.com/projects/\(#InputKeys.gcpProjectNumber)/locations/global/workloadIdentityPools/holos/providers/k8s-\(#InputKeys.cluster)"
let MOUNT = "/var/run/service-account"

View File

@@ -0,0 +1,12 @@
package holos
// Components under this directory are part of this collection
#InputKeys: project: "secrets"
// Shared dependencies for all components in this collection.
#DependsOn: _Namespaces
// Common Dependencies
_Namespaces: Namespaces: name: "\(#StageName)-secrets-namespaces"
_ESO: ESO: name: "\(#InstancePrefix)-eso"
_ESOCreds: ESOCreds: name: "\(#InstancePrefix)-eso-creds-refresher"

View File

@@ -0,0 +1,34 @@
package holos
#DependsOn: _ESOCreds
#TargetNamespace: "default"
#InputKeys: {
project: "secrets"
component: "stores"
}
// #PlatformNamespaceObjects defines the api objects necessary for eso SecretStores in external clusters to access secrets in a given namespace in the provisioner cluster.
#PlatformNamespaceObjects: {
_ns: #PlatformNamespace
objects: [
#SecretStore & {
_namespace: _ns.name
},
]
}
#KubernetesObjects & {
apiObjects: {
for ns in #PlatformNamespaces {
for obj in (#PlatformNamespaceObjects & {_ns: ns}).objects {
let Kind = obj.kind
let NS = ns.name
let Name = obj.metadata.name
"\(Kind)": "\(NS)/\(Name)": obj
}
}
}
}

View File

@@ -9,7 +9,7 @@ package holos
component: "validate"
}
#DependsOn: Namespaces: name: #InstancePrefix + "-eso"
#DependsOn: _ESO
#KubernetesObjects & {
apiObjects: {

View File

@@ -12,6 +12,7 @@ let Privileged = {
// #PlatformNamespaces is the union of all namespaces across all cluster types. Namespaces are created in all clusters regardless of
// whether they're used within the cluster. This is important for security and consistency with IAM, RBAC, and Secrets sync between clusters.
// Holos adopts the namespace sameness position of SIG Multicluster, refer to https://github.com/kubernetes/community/blob/dd4c8b704ef1c9c3bfd928c6fa9234276d61ad18/sig-multicluster/namespace-sameness-position-statement.md
#PlatformNamespaces: [
{name: "external-secrets"},
{name: "holos-system"},
@@ -22,4 +23,6 @@ let Privileged = {
{name: "cert-manager"},
{name: "argocd"},
{name: "prod-iam-zitadel"},
{name: "arc-system"},
{name: "arc-runner"},
]

View File

@@ -312,9 +312,10 @@ _apiVersion: "holos.run/v1alpha1"
#Chart: {
name: string
version: string
release: string | *name
repository: {
name: string
url: string
name?: string
url?: string
}
}
@@ -389,6 +390,18 @@ _apiVersion: "holos.run/v1alpha1"
...
}
// #DefaultSecurityContext is the holos default security context to comply with the restricted namespace policy.
// Refer to https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted
#DefaultSecurityContext: {
securityContext: {
allowPrivilegeEscalation: false
runAsNonRoot: true
capabilities: drop: ["ALL"]
seccompProfile: type: "RuntimeDefault"
}
...
}
// By default, render kind: Skipped so holos knows to skip over intermediate cue files.
// This enables the use of holos render ./foo/bar/baz/... when bar contains intermediary constraints which are not complete components.
// Holos skips over these intermediary cue instances.

View File

@@ -133,9 +133,12 @@ This section configured:
1. Provisioner Cluster to provide secrets to workload clusters.
2. IAM service account `eso-creds-refresher` to identify the credential refresher job.
3. Workload identity pool to authenticate the `eso-creds-refresher` Kubernetes service account in an external cluster.
4. IAM policy to allow `eso-creds-refresher` to authenticate to the Provisioner Cluster.
5. RoleBinding to allow `eso-creds-refresher` to create kubernetes service account tokens representing the credentials for use by SecretStore resources in workload clusters.
3. Workload identity pool to authenticate the `system:serviceaccount:holos-system:eso-creds-refresher` Kubernetes service account in all clusters that share the workload identity pool.
4. IAM policy to allow the `eso-creds-refresher` IAM service account to authenticate to the Provisioner Cluster.
5. RoleBinding to allow the `eso-creds-refresher` IAM service account to create kubernetes service account tokens representing the credentials for use by SecretStore resources in workload clusters.
> [!NOTE]
> Any cluster in the workload identity pool can impersonate the eso-creds-refresher IAM service account.
## Cluster Setup
@@ -150,6 +153,12 @@ HOLOS_CLUSTER_NAME=west1
ISSUER_URL="https://example.com/clusters/${HOLOS_CLUSTER_NAME}"
```
Alternatively:
```shell
ISSUER_URL="$(kubectl get --raw='/.well-known/openid-configuration' | jq -r .issuer)"
```
```shell
gcloud iam workload-identity-pools providers create-oidc \
k8s-$HOLOS_CLUSTER_NAME \

View File

@@ -1,6 +1,7 @@
package secret
import (
"bytes"
"fmt"
"github.com/holos-run/holos/pkg/cli/command"
"github.com/holos-run/holos/pkg/holos"
@@ -29,6 +30,7 @@ func NewCreateCmd(hc *holos.Config) *cobra.Command {
cfg.dryRun = flagSet.Bool("dry-run", false, "dry run")
cfg.appendHash = flagSet.Bool("append-hash", true, "append hash to kubernetes secret name")
cfg.dataStdin = flagSet.Bool("data-stdin", false, "read data field as json from stdin if true")
cfg.trimTrailingNewlines = flagSet.Bool("trim-trailing-newlines", true, "trim trailing newlines if true")
cmd.Flags().SortFlags = false
cmd.Flags().AddGoFlagSet(flagSet)
@@ -80,7 +82,7 @@ func makeCreateRunFunc(hc *holos.Config, cfg *config) command.RunFunc {
}
for _, file := range cfg.files {
if err := filepath.WalkDir(file, makeWalkFunc(secret.Data, file)); err != nil {
if err := filepath.WalkDir(file, makeWalkFunc(secret.Data, file, *cfg.trimTrailingNewlines)); err != nil {
return wrapper.Wrap(err)
}
}
@@ -125,7 +127,7 @@ func makeCreateRunFunc(hc *holos.Config, cfg *config) command.RunFunc {
}
}
func makeWalkFunc(data secretData, root string) fs.WalkDirFunc {
func makeWalkFunc(data secretData, root string, trimNewlines bool) fs.WalkDirFunc {
return func(path string, d os.DirEntry, err error) error {
if err != nil {
return err
@@ -143,6 +145,9 @@ func makeWalkFunc(data secretData, root string) fs.WalkDirFunc {
if data[key], err = os.ReadFile(path); err != nil {
return wrapper.Wrap(err)
}
if trimNewlines {
data[key] = bytes.TrimRight(data[key], "\r\n")
}
}
return nil

View File

@@ -12,15 +12,16 @@ const ClusterLabel = "holos.run/cluster.name"
type secretData map[string][]byte
type config struct {
files holos.StringSlice
printFile *string
extract *bool
dryRun *bool
appendHash *bool
dataStdin *bool
cluster *string
namespace *string
extractTo *string
files holos.StringSlice
printFile *string
extract *bool
dryRun *bool
appendHash *bool
dataStdin *bool
trimTrailingNewlines *bool
cluster *string
namespace *string
extractTo *string
}
func newConfig() (*config, *flag.FlagSet) {

View File

@@ -1,5 +1,5 @@
# Create the secret
holos create secret directory --from-file=$WORK/fixture --dry-run
holos create secret directory --trim-trailing-newlines=false --from-file=$WORK/fixture --dry-run
# Want no warnings.
! stderr 'WRN'

View File

@@ -1,5 +1,5 @@
# Create the secret
holos create secret directory --from-file=$WORK/want
holos create secret directory --trim-trailing-newlines=false --from-file=$WORK/want
stderr 'created: directory-..........'
stderr 'secret=directory-..........'
stderr 'name=directory'

View File

@@ -0,0 +1,17 @@
# Create a secret from files with trailing newlines
holos create secret smtp --from-file=$WORK/smtp
# Get the secret back expecting no trailing newlines
mkdir have
holos get secret smtp
stdout '"username": "holos.run@gmail.com"'
stdout '"password": "secret"'
-- smtp/username --
holos.run@gmail.com
-- smtp/password --
secret
-- smtp/host --
smtp.gmail.com
-- smtp/port --
587

View File

@@ -1,5 +1,5 @@
# Create the secret
holos create secret directory --from-file=$WORK/want
holos create secret directory --trim-trailing-newlines=false --from-file=$WORK/want
# Get the secret back
mkdir have

View File

@@ -106,6 +106,7 @@ type Repository struct {
type Chart struct {
Name string `json:"name"`
Version string `json:"version"`
Release string `json:"release"`
Repository Repository `json:"repository"`
}
@@ -387,21 +388,26 @@ func runHelm(ctx context.Context, hc *HelmChart, r *Result, path holos.PathCompo
return nil
}
cachedChartPath := filepath.Join(string(path), ChartDir, hc.Chart.Name)
cachedChartPath := filepath.Join(string(path), ChartDir, filepath.Base(hc.Chart.Name))
if isNotExist(cachedChartPath) {
// Add repositories
repo := hc.Chart.Repository
out, err := util.RunCmd(ctx, "helm", "repo", "add", repo.Name, repo.URL)
if err != nil {
log.ErrorContext(ctx, "could not run helm", "stderr", out.Stderr.String(), "stdout", out.Stdout.String())
return wrapper.Wrap(fmt.Errorf("could not run helm repo add: %w", err))
}
// Update repository
out, err = util.RunCmd(ctx, "helm", "repo", "update", repo.Name)
if err != nil {
log.ErrorContext(ctx, "could not run helm", "stderr", out.Stderr.String(), "stdout", out.Stdout.String())
return wrapper.Wrap(fmt.Errorf("could not run helm repo update: %w", err))
if repo.URL != "" {
out, err := util.RunCmd(ctx, "helm", "repo", "add", repo.Name, repo.URL)
if err != nil {
log.ErrorContext(ctx, "could not run helm", "stderr", out.Stderr.String(), "stdout", out.Stdout.String())
return wrapper.Wrap(fmt.Errorf("could not run helm repo add: %w", err))
}
// Update repository
out, err = util.RunCmd(ctx, "helm", "repo", "update", repo.Name)
if err != nil {
log.ErrorContext(ctx, "could not run helm", "stderr", out.Stderr.String(), "stdout", out.Stdout.String())
return wrapper.Wrap(fmt.Errorf("could not run helm repo update: %w", err))
}
} else {
log.DebugContext(ctx, "no chart repository url; assuming oci chart")
}
// Cache the chart
if err := cacheChart(ctx, path, ChartDir, hc.Chart); err != nil {
return fmt.Errorf("could not cache chart: %w", err)
@@ -423,7 +429,7 @@ func runHelm(ctx context.Context, hc *HelmChart, r *Result, path holos.PathCompo
// Run charts
chart := hc.Chart
helmOut, err := util.RunCmd(ctx, "helm", "template", "--values", valuesPath, "--namespace", hc.Namespace, "--kubeconfig", "/dev/null", "--version", chart.Version, chart.Name, cachedChartPath)
helmOut, err := util.RunCmd(ctx, "helm", "template", "--include-crds", "--values", valuesPath, "--namespace", hc.Namespace, "--kubeconfig", "/dev/null", "--version", chart.Version, chart.Release, cachedChartPath)
if err != nil {
stderr := helmOut.Stderr.String()
lines := strings.Split(stderr, "\n")
@@ -465,7 +471,10 @@ func cacheChart(ctx context.Context, path holos.PathComponent, chartDir string,
}
defer remove(ctx, cacheTemp)
chartName := fmt.Sprintf("%s/%s", chart.Repository.Name, chart.Name)
chartName := chart.Name
if chart.Repository.Name != "" {
chartName = fmt.Sprintf("%s/%s", chart.Repository.Name, chart.Name)
}
helmOut, err := util.RunCmd(ctx, "helm", "pull", "--destination", cacheTemp, "--untar=true", "--version", chart.Version, chartName)
if err != nil {
return wrapper.Wrap(fmt.Errorf("could not run helm pull: %w", err))

View File

@@ -1 +1 @@
53
54