Previously the landing page focused on Holos as a reference platform.
We're refocusing the release on the holos package management tool. This
patch updates the landing page and adds placeholders for a new quick
start guide which will focus on wrapping a helm chart and a concepts
page which will provide a high level overview of how holos is unique
from other tools.
This patch increases reliability when trying holos locally. The idea
is that generate through render platform should work without a network
connection once the executable has been downloaded, for example to
give a quick demo while offline.
Without this patch the argo install manifest may fail because the
resources are fetched from github.
This patch embeds the same resources to increase speed and reliability.
Without this patch the argo crds component takes a few seconds to render
and may fail because the resources are fetched from github.
This patch embeds the same resources to increase speed and reliability.
Without this patch the gateway api component takes a few seconds to
render and may fail because the resources are fetched from github.
This patch embeds the same resources to increase speed and reliability.
Result:
rendered components/gateway-api for cluster workload in 257.206208ms
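A minimal sketch of the embedding mechanism these patches rely on, using Go's `embed` package. To stay self-contained it embeds its own `*.go` sources; in holos the embedded files would be the upstream install manifests (e.g. the Gateway API standard install), which are assumptions here.

```go
package main

import (
	"embed"
	"fmt"
)

// In holos the embedded files would be the upstream install manifests
// (for example standard-install.yaml). To keep this sketch
// self-contained it embeds the package's own *.go files instead.
//
//go:embed *.go
var manifests embed.FS

func main() {
	entries, err := manifests.ReadDir(".")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Println("embedded:", e.Name())
	}
}
```

Because the files are compiled into the binary, rendering needs no network access and no filesystem layout beyond the executable itself.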
Building the cluster today I got hung up on an `ERR_CONNECTION_CLOSED`
error from Chrome when trying to access httpbin.
The problem was that I forgot to run the local-ca script, thinking I
already had a local CA. The script also copies the private key to the
cluster, so it must be run every time the cluster is created.
This patch clarifies the sequence. When resetting, everything following
the Create the Cluster step needs to be executed.
Previously the image was built on merge to main, but not deployed
anywhere. This patch adds steps to the publish workflow to deploy the
image that was published using gitops and argocd.
On a release, `make tools` is run, which pulls in the latest connect
tools for Angular. This is a problem because it makes the git tree dirty.
The packages should be in the package.json file and the lock file so
these additional steps should not be necessary.
Remove them.
The desired result is that `make tools` is idempotent and installs the
correct pinned versions necessary to build and release the container image.
This makes the following changes to the getting started guide after
running through both the signed-in and signed-out paths:
* Added helm and git as requirements
* Made it easier to modify the requirements by using all "1." list items
* Wait for the httpbin pod to be ready before continuing
* Make all the signed-out steps work
* Fixed sub-section header values so they show up in the TOC
* Fix minor typos and grammar issues
* Fix minor spacing and formatting inconsistencies
* Mark the ArgoCD guide as "coming soon"
Also fixed the docs for running the website locally to be able to
preview all these changes while working on them.
Noticed a few remaining rough edges when I read through it on my phone
last night. This patch hopefully gets the try holos doc into a place
we're happy with.
Rather than long tutorials, the goal is to refine Try Holos Locally
down to a minimal number of steps and then branch out to deeper use
cases like ArgoCD, Backstage, etc.
This patch moves the ArgoCD related sections to a separate "dive deeper"
guide to trim down the length of the try holos guide.
When someone is trying holos locally but has not signed up, ArgoCD needs
to be configured to allow anonymous access. This patch enables
anonymous access and grants the anonymous user the admin role.
With this patch the Try Holos Locally guide can be completed without
signing up or signing in.
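For reference, anonymous access in ArgoCD is typically enabled through two ConfigMap keys; this is a hedged sketch of the stock `argocd-cm` and `argocd-rbac-cm` settings, not the exact holos CUE configuration:

```yaml
# argocd-cm: enable the anonymous user
data:
  users.anonymous.enabled: "true"
---
# argocd-rbac-cm: grant the admin role by default so anonymous
# visitors can complete the guide without signing in
data:
  policy.default: role:admin
```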
Nate gave feedback that the Try Holos Locally guide doesn't work with Orb.
This patch makes the input form accept *.local domains so we can use the
default Orb managed domain of *.k8s.orb.local
I haven't tested this, but we at least need to allow the domain to
test it.
[1]: https://docs.orbstack.dev/kubernetes/#loadbalancer-ingress
Previously the top level logger used a json handler while the rest of
the code used the default console handler. This patch unifies them to
be consistent.
Remove side comments about the reference platform. Move the in-line
exploration of ArgoCD and CUE to the end once the reader has completed
their goal. Other minor edits.
Previously CUE panicked when holos tried to unify values originating from
two different CUE runtimes. This patch fixes the problem by
initializing cue.Value structs from the same CUE context.
Log messages are also improved after making one complete pass through
the Try Holos Locally guide.
Now that we have multi-platform images, we need a way to easily deploy
them. This involves changing the image tag. kustomize edit is often
used to bump image tags, but we can do better by providing it directly
in the unified CUE configuration.
This patch modifies the builder to unify user data *.json files
recursively under userdata/ into the #UserData definition of the holos
entrypoint.
This is to support automation that writes simple json files to version
control, executes holos render platform, then commits and pushes the
results for git ops to take over deployment.
The make deploy target is the reason this change exists, to demonstrate
how to automatically deploy a new container image.
This patch addresses Nate's feedback that it's difficult to know what
platform is being operated on.
Previously it wasn't clear where the platform id used for push and pull
comes from. The source of truth is the platform.metadata.json file
created when the platform is first generated using `holos generate
platform k3d`.
This patch removes the platformId field from the platform.config.json
file, renames the platform.config.json file to platform.model.json and
renames the internal symbols to match the domain language of "Platform
Model" instead of the less clear "config".
This patch also changes the API between holos and CUE to use the
protojson format imported from the proto file instead of JSON derived
from the Go code that was generated from the proto file. The purpose
is to ensure protojson encoding is used end to end.
Default log handler:
The patch also changes the default log output to print only the message
to stderr. This addresses similar feedback from both Gary and Nate that
the output is skipped over because it feels like internal debug logs.
We still want 100% of output to go through the logger so we can ensure
each line can be made into valid json. Info messages however are meant
for the user and all other attributes can be stripped off by default.
If additional source location is necessary, enable the text or json
output format.
Protobuf JSON:
This patch modifies the API contract between holos and CUE to ensure
data is exchanged exclusively using protojson. This is necessary
because protobuf has a canonical json format which is not compatible
with the go json package struct tags. When Holos handles a protobuf
message, it must marshal and unmarshal it using the protojson package.
Similarly, when importing protobuf messages into CUE, we must use `cue
import` instead of `cue go get` so that the canonical format is used
instead of the invalid go json struct tags.
Finally, when a Go struct like v1alpha1.Form is used to represent data
defined in cue which contains a nested protobuf message, Holos should
use a cue.Value to lookup the nested path, marshal it into json bytes,
then unmarshal it again using protojson.
Previously there was no way to delete a platform. This patch adds a
basic delete subcommand which deletes platforms by their id using the
rpc api.
❯ holos get platform
NAME DESCRIPTION AGE ID
k3d Holos Local k3d 20h 0190c78a-4027-7a7e-82d0-0b9f400f4bc9
k3d2 Holos Local k3d 20h 0190c7b3-382b-7212-81d6-ffcfc4a3fe7e
k3dasdf Holos Local k3d 20h 0190c7b3-728a-7212-b56d-2d2edf389003
k3d9 Holos Local k3d 20h 0190c7b8-4c4e-7cea-9d3d-a6b9434ae438
k3d-8581 Holos Local k3d 20h 0190c7ba-1de9-7cea-bff8-f15b51a56bdd
k3d-13974 Holos Local k3d 20h 0190c7ba-5833-7cea-b863-8e5ffb926810
k3d-20760 Holos Local k3d 19h 0190c7ba-7a12-7cea-a350-d55b4817d8bc
❯ holos delete platform 0190c7ba-1de9-7cea-bff8-f15b51a56bdd 0190c7ba-5833-7cea-b863-8e5ffb926810 0190c7ba-7a12-7cea-a350-d55b4817d8bc
deleted platform k3d-8581
deleted platform k3d-13974
deleted platform k3d-20760
Previously there was no way to get/list platforms. This patch adds a
basic get subcommand with list as an alias to get the platforms
currently defined in the organization.
❯ holos get platform
NAME DESCRIPTION AGE ID
k3d Holos Local k3d 18h 0190c78a-4027-7a7e-82d0-0b9f400f4bc9
k3d2 Holos Local k3d 17h 0190c7b3-382b-7212-81d6-ffcfc4a3fe7e
k3dasdf Holos Local k3d 17h 0190c7b3-728a-7212-b56d-2d2edf389003
k3d9 Holos Local k3d 17h 0190c7b8-4c4e-7cea-9d3d-a6b9434ae438
k3d-8581 Holos Local k3d 17h 0190c7ba-1de9-7cea-bff8-f15b51a56bdd
k3d-13974 Holos Local k3d 17h 0190c7ba-5833-7cea-b863-8e5ffb926810
k3d-20760 Holos Local k3d 17h 0190c7ba-7a12-7cea-a350-d55b4817d8bc
k3d-13916 Holos Local k3d 17h 0190c7ba-8313-7cea-be37-41491c95ae79
k3d-26154 Holos Local k3d 17h 0190c7ba-a117-7cea-8229-ce27da84135e
❯ holos get platform foo
7:16AM ERR could not execute version=0.89.1 code=unknown err="not found"
❯ holos get platform foo k3d
NAME DESCRIPTION AGE ID
k3d Holos Local k3d 18h 0190c78a-4027-7a7e-82d0-0b9f400f4bc9
Previously the CreatePlatform rpc wrote over all fields when the
platform already exists. This is surprising and effectively duplicates
the UpdatePlatform rpc.
This patch changes the behavior to do nothing except set the
already_exists flag in the response message.
Users who need to know whether the call actually created a new
resource should check the already_exists flag via the API. The CLI has
no affordance for this other than parsing the log messages.
Previously holos.platform.v1alpha1.PlatformService.CreatePlatform
returned an error for a request to create a platform with the same
name as an existing platform.
holos create platform --name k3d --display-name "Try Holos Locally"
8:00AM ERR could not execute version=0.87.2 code=failed_precondition
err="failed_precondition: platform.go:55: ent: constraint failed:
ERROR: duplicate key value violates unique constraint
\"platform_org_id_name\" (SQLSTATE 23505)" loc=client.go:138
This patch makes the CreatePlatform rpc idempotent using the upsert API.
The already_exists bool field is added to CreatePlatformResponse
response to indicate to the client if the platform already exists or
not.
Result:
holos create platform --display-name "Holos Local" --name k3d10
11:53AM INF create.go:56 created platform k3d10 version=0.87.2
name=k3d10 id=0190c731-1808-7e7d-9ccb-3d17434d0055
org=0190c6d6-4974-7733-9f7b-5d759a3e60e7 exists=false
holos create platform --display-name "Holos Local" --name k3d10
11:53AM INF create.go:56 updated platform k3d10 version=0.87.2
name=k3d10 id=0190c731-1808-7e7d-9ccb-3d17434d0055
org=0190c6d6-4974-7733-9f7b-5d759a3e60e7 exists=true
Previously I developed holos server in the dev-holos namespace of a
remote cluster. This patch updates the Tilt configs to develop locally
against k3d quickly and easily.
The database is a CNPG database which replaces PGO. This is simpler
and lighter weight: one container in one pod. CNPG has no repo host
like PGO does.
When starting holos server from the production Deployment, pgbouncer
blocks the automatic migration on startup.
```json
{
"time": "2024-07-16T16:35:52.54507682-07:00",
"level": "ERROR",
"msg": "could not execute",
"version": "0.87.2",
"code": "unknown",
"err": "sql/schema: create \"users\" table: ERROR: permission denied for schema public (SQLSTATE 42501)",
"loc": "cli.go:82"
}
```
This patch separates automatic migration into a `holos server init`
subcommand intended for use in a Job.
Closes: #204
Previously, the Tiltfile was hard-wired to Jeff's development
environment on the k2 cluster on-prem. This doesn't work for other
contributors.
This patch fixes the problem by re-using the [Try Holos Locally][1]
documentation to create a local development environment. This has a
number of benefits: the evaluation documentation will be kept up to
date because it doubles as our development environment; developing
locally is preferable to developing in a remote cluster; hostnames and
URLs can be constant, e.g. https://app.holos.localhost/ for local dev
and https://app.holos.run/ for production; and we don't need to push
to a remote container registry because k3d has a local registry built
in that works with Tilt.
The only difference presently between evaluation and development when
following the local/k3d doc is the addition of a local registry.
With this patch holos starts up and is accessible at
https://app.holos.localhost/
[1]: https://holos.run/docs/tutorial/local/k3d/
This applies various grammar, formatting, and flow improvements to the
local k3d tutorial steps based on running through it from start to
finish.
This also removes the Go code responsible for embedding the website into
`holos`, which isn't needed since the site is hosted on Cloudflare
Pages.
Made it in preview using a background png from https://social.cards/ and
converting our logo.
mogrify -background none -resize 1200x -format png logo.svg
This patch fixes up the link colors and mermaid diagrams to look better
in both light and dark mode. This may not be the final result but it
moves in the right direction.
Links are now blue with a visible line on hover.
Previously the guide did not cover reconciling holos platform components
with GitOps. This patch adds instructions on how to apply the
application resources, review the diff, sync manually, and finally
enable automatic sync using CUE's struct merge feature.
Previously there was no web app except httpbin in the k3d platform. This
commit adds ArgoCD with an httproute and authorization policy at the
mesh layer. The application layer authenticates against a separate
oidc client id in the same issuer the mesh uses to demonstrate zero
trust and compatibility between the application and platform layers.
With this patch the user can authenticate and log in, but applications
are not configured. The user has no roles in ArgoCD either; RBAC needs
to be configured properly for the getting started guide.
This patch adds the authproxy and authpolicy holos components to the k3d
platform for local evaluation. This combination implements a basic Zero
Trust security model. The httpbin backend service is protected with
authentication and authorization at the platform level without any
changes to the backend service.
The client id and project are static because they're defined centrally
in https://login.holos.run to avoid needing to set up a full identity
provider locally in k3d.
With this patch authentication and authorization work from both the web
browser and from the command line with curl using the token provided by
the holos cli.
Previously the local k3d tutorial didn't expose any services to verify
that the local certificate and local DNS changes work as expected.
This patch adds instructions and modifies the k3d platform to work with
a local mkcert certificate. A ClusterIssuer is configured to issue
Certificate resources using the CA private key created by mkcert.
With this patch, following the instructions results in a working and
trusted httpbin resource at https://httpbin.holos.localhost. This
works both in Chrome and with curl on the command line.
This patch adds a script to install a local CA and configure cert
manager to issue certs similar to how it issues certs using LetsEncrypt
in a real cluster.
Previously there was no way to evaluate Holos on localhost. This was a
problem because setting up full blown GKE and EKS clusters to evaluate
the reference platform is a high barrier to entry.
This patch adds a minimal, but useful, k3d platform which deploys to a
single local k3d cluster. The purpose is to provide a shorter on ramp
to see the value of ArgoCD integrated with Istio to provide a zero trust
auth proxy.
The intentional trade off is to provide a less-holistic k3d platform
with a faster on-ramp to learn about the value the more-holistic holos
platform.
With this patch the documentation is correct and the platform renders
fully. The user doesn't need to provide any Platform Model values; the
defaults suffice.
For the ArgoCD client ID, we'll use https://login.holos.run as the
issuer instead of building a new OIDC issuer inside of k3d, which would
create significant friction.
This patch adds a diagram that gives an overview of the holos rendering
pipeline. This is an important concept to understand when working with
holos components.
Note this probably should not go in the Overview, which is intended only
to give a sense of what getting started looks like. Move it to the
render page when we add it.
Previously there were no diagrams in the documentation. This patch wires
up mermaid for use in code blocks in the markdown files. A minimal
diagram is added to verify mermaid works but it's not the final diagram.
Previously the Docusaurus features examples were still in place on the
home page. This patch replaces the homepage features with Holos
specific features and illustrations from undraw.
Refer to https://undraw.co/search
Generating the docusaurus site is not idempotent like generating the
Angular web app. This is a problem for building and releasing the
executable because it creates a dirty git state.
Embedding the doc website into the executable is no longer necessary
since we're deploying the site with Cloudflare pages. Remove it from
the compiled executable as a result.
Cloudflare fails to build the website with:
```
07:44:47.179 sh: 1: docusaurus: not found
07:44:47.192 Failed: Error while executing user command. Exited with error code: 127
```
Resolve it by executing npm install from the build-website script and
note the script is intended for use in a cloudflare context.
The API docs are not published yet because the module is private. Our
own docs site does not have any API reference docs.
This patch adds auto-generated markdown docs for the core v1alpha2 types
by generating them directly from the go source code.
Some light editing of the output of `gomarkdoc` is necessary to get the
heading anchor tags to align correctly for Docusaurus.
The github workflows fail because yarn is not available. The Angular
frontend app uses npm so we should also use npm for the website to
minimize dependencies.
Previously `go install` fails to install holos.
```
❯ go install github.com/holos-run/holos/cmd/holos@latest
../../go/pkg/mod/github.com/holos-run/holos@v0.86.0/internal/frontend/frontend.go:25:12: pattern holos/dist/holos/ui/index.html: no matching files found
../../go/pkg/mod/github.com/holos-run/holos@v0.86.0/doc/website/website.go:14:12: pattern all:build: no matching files found
```
This is because we do not commit required files. This patch fixes the
problem by following Rob Pike's guidance to commit generated files.
This patch also replaces the previous use of Makefile tasks to generate
code with //go:generate directives.
This means the process of keeping the source code clean is
straightforward:
```
git clone
make tools
make generate
make build
```
Refer to https://go.dev/blog/generate
> Also, if the containing package is intended for import by go get, once
> the file is generated (and tested!) it must be checked into the source
> code repository to be available to clients. - Rob Pike
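A hedged sketch of the directive style; the `npm` command and `advice` helper are assumptions for illustration, not the exact holos directives.

```go
package main

import "fmt"

// A //go:generate directive replaces the old Makefile codegen task.
// The npm command below is an assumption about the Angular frontend
// build, not the exact holos directive.
//
//go:generate npm --prefix ./frontend run build

// advice summarizes the workflow: run go generate, then commit the
// output so `go install` works for clients.
func advice() string {
	return "go generate ./... then commit the generated files"
}

func main() {
	fmt.Println(advice())
}
```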
Previously docs are not published. This patch adds Docusaurus into the
doc/website directory which is also a Go package to embed the static
site into the executable.
Serve the site using http.Server with a h2c handler with the command:
holos website --log-format=json --log-drop=source
The website subcommand is intended to be run from a container as a
Deployment. For expedience, the website subcommand doesn't use the
signals package like the server subcommand does. Consider using it for
graceful Deployment restarts.
Refer to https://github.com/ent/ent/tree/master/doc/website
Previously a couple of methods were defined on the Result struct.
This patch moves the methods to an internal wrapper struct to remove
them from the API documentation.
With this patch the API between holos and CUE is entirely a data API.
Previously, the holos component Result for each ArgoCD Application
resource managed as part of a BuildPlan resulted in an empty file
being written for the empty list of k8s API objects.
This patch fixes the problem by skipping the write of the accumulated
API object output when the Result metadata.name starts with `gitops/`.
This is kind of a hack, but it works well enough for now.
Previously components appeared to be duplicated; it was not clear to
the user that one build plan results in two components: one for the
k8s yaml and one for the gitops ArgoCD Application resource.
```
❯ holos render component --cluster-name aws1 components/login/zitadel-server
9:27AM INF result.go:195 wrote deploy file version=0.84.1 path=deploy/clusters/aws1/gitops/zitadel-server.application.gen.yaml bytes=338
9:27AM INF render.go:92 rendered zitadel-server version=0.84.1 cluster=aws1 name=zitadel-server status=ok action=rendered
9:27AM INF render.go:92 rendered zitadel-server version=0.84.1 cluster=aws1 name=zitadel-server status=ok action=rendered
```
This patch prefixes the ArgoCD Application resource, which is
implemented as a separate HolosComponent in the same BuildPlan. The
result is more clear about what is going on:
```
❯ holos render component --cluster-name aws1 components/login/zitadel-server
9:39AM INF result.go:195 wrote deploy file version=0.84.1 path=deploy/clusters/aws1/gitops/zitadel-server.application.gen.yaml bytes=338
9:39AM INF render.go:92 rendered gitops/zitadel-server version=0.84.1 cluster=aws1 name=gitops/zitadel-server status=ok action=rendered
9:39AM INF render.go:92 rendered zitadel-server version=0.84.1 cluster=aws1 name=zitadel-server status=ok action=rendered
```
The pod identity webhook component fails to render with v1alpha2. This
patch fixes the problem by providing concrete values for enableHooks and
the namespace of the helm chart holos component.
The namespace is mainly necessary to render the ArgoCD Application
resource alongside the helm chart output.
With this patch the eso-creds-manager component renders correctly. This
is a `#Kubernetes` type build plan which uses the
spec.components.resources map to manage resources.
The only issue was needing to provide the namespace to the nested holos
component inside the BuildPlan.
The ArgoCD Application resource moves to the DeployFiles field of a
separate holos component in the same build plan at
spec.components.resources.argocd. For this reason a separate Result
object is no longer necessary inside of the Holos cli for the purpose of
managing Flux or ArgoCD gitops. The CUE code can simply inline whatever
gitops resources it wants and the holos cli will write the files
relative to the cluster specific deploy directory.
Result:
```
❯ holos render component --cluster-name management components/eso-creds-manager
2:55PM INF result.go:195 wrote deploy file version=0.84.1 path=deploy/clusters/management/gitops/eso-creds-manager.application.gen.yaml bytes=350
2:55PM INF render.go:92 rendered eso-creds-manager version=0.84.1 cluster=management name=eso-creds-manager status=ok action=rendered
```
Previously holos render platform failed for the holos platform. The issue was
caused by the deployFiles field moving from the BuildPlan down to
HolosComponent.
This patch fixes the problem by placing the ArgoCD Application resource into a
separate Resources entry of the BuildPlan. The sole purpose of this additional
entry in the Resources map is to produce the Application resource
alongside any other components which are part of the build plan.
Previously methods were defined on the API objects in the v1alpha1 API.
The API should be data structures only. This patch refactors the
methods responsible for orchestrating the build plan to pull them into
the internal render package.
The result is the API is cleaner and has no methods. The render package
has corresponding data structures which simply wrap around the API
structure and implement the methods to render and return the result to
the CLI.
This commit compiles, but it has not been tested at all. It's almost
surely broken completely.
Previously in v1alpha1, all Holos structs are located in the same
package. This makes it difficult to focus on only the structs necessary
to transfer configuration data from CUE to the `holos` cli.
This patch splits the structs into `meta` and `core` where the core
package holds the structs end users should refer to and focus on. Only
the Platform resource is in core now, but other BuildPlan types will be
added shortly.
Previously Backstage was not configured to integrate with GitHub. The
integration is necessary for Backstage to automatically discover
resources in a GitHub organization and import them into the Catalog.
This patch adds a new platform model form field and section for the
primary GitHub organization name of the platform. Additional GitHub
organizations can be added in the future, Backstage supports them.
The result is Backstage automatically scans public and private
repositories and adds the information in `catalog-info.yaml` to the UI.
Previously the gateway ArgoCD Application resource was out of sync
because the `default-istio` `ServiceAccount` was not in the git
repository source. Argo would prune the service account on sync, which
is a problem.
This patch manages the service account so the Application can be synced
properly.
Previously the holos render platform command failed when giving a demo
after the generate platform step.
This patch updates the internal generated holos platform to the latest
version.
Running through the demo is successful now.
```
holos logout
holos login
holos register user
holos generate platform holos
holos pull platform config .
holos render platform ./platform
```
I'm not sure if we should check in the loop, in the go routine, or in
both places. Double check in both cases just to be sure we're not doing
extra unnecessary work.
Previously a channel was used to limit concurrency. This is more
difficult to read and comprehend than the inbuilt errorgroup.SetLimit
functionality.
This patch uses `errgroup.`[Group.SetLimit()][1] to limit concurrency,
avoid leaking go routines, and avoid unnecessary work.
[1]: https://pkg.go.dev/golang.org/x/sync/errgroup#Group.SetLimit
This adds concurrency to the 'holos render platform' command so platform
components are rendered in less time than before.
Default concurrency is set to `min(runtime.NumCPU(), 8)`, which is the
lesser of 8 or the number of CPU cores. In testing I found that past 8
there are diminishing or negative returns due to the memory usage of
rendering each component.
In practice, this reduced rendering of the saas platform components from
~90s to ~28s on my 12-core macbook pro.
This also changes the key name of the Helm Chart's version in log lines
from `version` to `chart_version` since `version` already exists and
shows the Holos CLI version.
Previously, when a user registered and logged into the holos app server,
they were able to reach admin interfaces like
https://argocd.admin.example.com
This patch adds AuthorizationPolicy resources governing the whole
cluster. Users with the prod-cluster-{admin,edit,view} roles may access
admin services like argocd.
Users without these roles are blocked with RBAC: access denied.
In ZITADEL, the Holos Platform project is granted to the CIAM
organization without granting the prod-cluster-* roles, so there's no
possible way a CIAM user account can have these roles.
Previously there wasn't a good way to populate the platform model in the
database after building a new instance of holos server.
With this patch, the process to reset clean is:
```
export HOLOS_SERVER=https://dev.app.holos.run:443
grpcurl -H "x-oidc-id-token: $(holos token)" ${HOLOS_SERVER##*/} holos.user.v1alpha1.SystemService.DropTables
grpcurl -H "x-oidc-id-token: $(holos token)" ${HOLOS_SERVER##*/} holos.system.v1alpha1.SystemService.SeedDatabase
```
Then populate the form and model:
```
holos push platform form .
holos push platform model .
```
The `platform.config.json` file stored in version control is pushed to
the holos server and stored in the database. This makes it nice and
easy to reset entirely, or move to another service url.
Previously the default oidc issuer was one of the kubernetes clusters
running in my basement. This patch changes the issuer to the production
ready issuer running in EKS.
Previously the holos server Service was not exposed.
This patch exposes the holos service with an HTTPRoute behind the auth
proxy. Holos successfully authenticates the user with the
x-oidc-id-token header set by the default Gateway.
---
Add dev-holos-infra and dev-holos-app
Previously the PostgresCluster and the holos server Deployment were not
managed on the aws2 cluster.
This patch is a start, but the Deployment does not yet start. We need
to pass an option for the oidc issuer.
---
Add namespaces and cert for prod-holos, dev-holos, jeff-holos
Previously we didn't have a place to deploy holos server. This patch
adds a namespace, creates a Gateway listener, and binds the tls certs
for app.example.com and *.app.example.com to the listeners.
In addition, cluster specific endpoints of *.app.aws2.example.com,
*.app.aws1.example.com, etc. are created to provide dev environment
urls. For example jeff.app.aws2.example.com is my personal dev hostname.
Previously holos render platform ./platform did not render any GitOps
resources for Flux or ArgoCD.
This patch uses the new DeployFiles field in holos v0.83.0 to write an
Application resource for every component BuildPlan listed in the
platform.
Previously, each BuildPlan had no clear way to produce an ArgoCD
Application resource. This patch provides a general solution where each
BuildPlan can provide arbitrary files as a map[string]string where the
key is the file path relative to the gitops repository `deploy/` folder.
Previously ArgoCD had no ssh credentials to connect to GitHub. This
patch adds an ssh ed25519 key as a secret in the management cluster.
The secret is synced to the workload clusters using an ExternalSecret
with the proper label for ArgoCD to find and load it for use with any
application that references the Git URL.
Previously a logged in user could not modify anything in ArgoCD. With
this patch users who have been granted the prod-cluster-admin role in
ZITADEL are granted the admin role in ArgoCD.
Previously ArgoCD was present in the platform configuration, but not
functional. This patch brings ArgoCD fully up, integrated with the
service mesh, auth proxy, and SSO at
https://argocd.admin.clustername.example.com/
The upstream [helm chart][1] is used instead of the kustomize install
method. We had existing prior art integrating the v6 helm chart with
the holos platform identity provider, so we continue with the helm
chart.
CRDs are still managed with the kustomize version. The CRDs need to be
kept in sync. It's possible to generate the kustomization.yaml file
from the same version value as is used by the helm chart, but we don't
for the time being.
[1]: https://github.com/argoproj/argo-helm/tree/argo-cd-7.1.1/charts/argo-cd
Previously, no RequestAuthentication or AuthorizationPolicy resources
govern the default Gateway. This patch adds the resources and
configures the service mesh with the authproxy as an ExtAuthZ provider
for CUSTOM AuthorizationPolicy rules.
This patch also fixes a bug in the zitadel-server component where
resources from the upstream helm chart did not specify a namespace.
Kustomize is used as a post processor to force all resources into the
zitadel namespace.
Add multiple HTTPRoutes to validate http2 connection reuse
This patch adds multiple HTTPRoute resources which match
*.admin.example.com. The purpose is to validate that http2 connections
are reused properly with Chrome.
With this patch, no `404 route not found` errors are encountered when
navigating between the various httpbin{1,2,3,4} urls.
Add note backupRestore will trigger a restore
The process of configuring ZITADEL to provision from a datasource will
cause an in-place restore from S3. This isn't a major issue, but users
should be aware data added since the most recent backup will be lost.
Previously, HTTPRoute resources were in the same namespace as the
backend service, httpbin in this case. This doesn't follow the default
behavior of a Gateway listener only allowing attachment from HTTPRoute
resources in the same namespace as the Gateway.
This also complicates intercepting the authproxy path prefix and sending
it to the authproxy. We'd need to add a ReferenceGrant in the authproxy
namespace, which seems backwards and dangerous because it would grant
the application developer the ability to route requests to all Services
in the istio-gateways namespace.
This patch enables Cluster Operators to manage the HTTPRoute resources
and direct the auth proxy path prefix of `/holos/authproxy` to the auth
proxy Service in the same namespace.
ReferenceGrant resources are used to enable the HTTPRoute backend
references.
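A ReferenceGrant sketch, permitting HTTPRoutes in istio-gateways to reference Services in the backend namespace (`httpbin` here is illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: httproute-backends
  namespace: httpbin  # namespace of the backend Service
spec:
  from:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      namespace: istio-gateways
  to:
    - group: ""  # core API group for Service
      kind: Service
```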
When an application developer needs to manage their own HTTPRoute, as is
the case for ZITADEL, a label selector may be used and will override
less specific HTTPRoute hostnames in the istio-gateways namespace.
With redis, the auth proxy authenticates correctly against ZITADEL
running in the same cluster. Validated by visiting
https://httpbin.admin.clustername.example.com/holos/authproxy
Visiting
https://httpbin.admin.clustername.example.com/holos/authproxy/auth
returns the id token in the response header, visible in the Chrome
network inspector. The ID token works as expected from multiple orgs
with project grants in ZITADEL from the Holos org to the OIS org.
This patch doesn't fully implement the auth proxy feature.
AuthorizationPolicy and RequestAuthentication resources need to be
added.
Before we do so, we need to move the HTTPRoute resources into the
gateway namespace so all of the security policies are in one place and
to simplify the process of routing requests to two backends, the
authproxy and the backend server.
Problem:
Istio 1.22 with Gateway API and HTTPRoute is mis-routing HTTP2 requests
when the tls certificate has two dns names, for example
login.example.com and *.login.example.com.
When the user visits login.example.com and then tries to visit
other.login.example.com with Chrome, the connection is re-used and istio
returns a 404 route not found error even though there is a valid and
accepted HTTPRoute for *.login.example.com.
This patch attempts to fix the problem by ensuring certificate dns names
map exactly to Gateway listeners. When a wildcard cert is used, the
corresponding Gateway listener host field exactly matches the wildcard
cert dns name so Istio and envoy should not get confused.
This patch adds the ZITADEL server component, which deploys zitadel from
a helm chart. Kustomize is used heavily to patch the output of helm to
make the configuration fit nicely with the holos platform.
With this patch the two Jobs that initialize the database and setup
ZITADEL run successfully. The ZITADEL deployment starts successfully.
ZITADEL is accessible at https://login.example.com/ with the default
admin username of `zitadel-admin@zitadel.login.example.com` and password
`Password1!`.
Use grant.holos.run/subdomain.admin: "true" for HTTPRoute
This patch clarifies the label that grants httproute attachment for a
subdomain Gateway listener to a namespace.
Fix istio-base holos component name
Was named `base` which is the chart name, not the holos component name.
This patch adds the postgres clusters and a few console form controls to
configure how backups are taken and if the postgres cluster is
initialized from an existing backup or not.
The pgo-s3-creds file is manually created at this time. It looks like:
```txt
❯ holos get secret -n zitadel pgo-s3-creds --print-key s3.conf
[global]
repo2-cipher-pass=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
repo2-s3-key=KKKKKKKKKKKKKKKKKKKK
repo2-s3-key-secret=/SSSSSSS/SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
repo3-cipher-pass=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
repo3-s3-key=KKKKKKKKKKKKKKKKKKKK
repo3-s3-key-secret=/SSSSSSS/SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
```
The s3 key and secret are credentials to read / write to the bucket.
The cipher pass is a random string for client side encryption. Generate
it with `tr -dc A-Za-z0-9 </dev/urandom | head -c 64`
This patch is foundational work for the ZITADEL login service.
This patch adds a tls certificate with names *.login.example.com and
login.example.com, a pair of listeners attached to the certificate in
the `default` Gateway, and the ExternalSecret to sync the secret from
the management cluster.
The zitadel namespace is managed and has the label
holos.run/login.grant: "true" to grant HTTPRoute attachment from the
zitadel namespace to the default Gateway in the istio-gateways
namespace.
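The grant works through the listener's allowedRoutes selector. A sketch of the relevant listener fragment (listener name and port are assumptions):

```yaml
# Fragment of the default Gateway spec in the istio-gateways namespace.
listeners:
  - name: login-https
    hostname: "*.login.example.com"
    port: 443
    protocol: HTTPS
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            # Namespaces carrying this label may attach HTTPRoutes.
            holos.run/login.grant: "true"
```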
With this change, https://httpbin.admin.aws1.example.com works as
expected.
PROXY protocol is configured on the AWS load balancer and the istio
gateway. The istio gateway logs have the correct client source ip
address and x-forwarded-for headers.
Namespaces must have the holos.run/admin.grant: "true" label in order to
attach an HTTPRoute to the admin section of the default Gateway.
The TLS certificate is working as expected and hopefully does not suffer
from the NR (no route) issue encountered with the Istio Gateway API.
This patch gets the istio-ingressgateway up and running in AWS with
minimal configuration. No authentication or authorization policies have
been migrated from previous iterations of the platform. These will be
handled in subsequent iterations.
Connectivity to a backend service like httpbin has not yet been tested.
This will happen in a follow up as well using /httpbin path prefixes on
existing services like argocd to conserve certificate resources.
This is the standard way to issue public facing certificates. Be aware
of the 50 cert limit per week from LetsEncrypt. We map names to certs
1:1 to avoid http2 connection reuse issues with istio.
Manage certificates on a project basis similar to how namespaces
associated with each project are managed.
Manage the Certificate resources on the management cluster in the
istio-ingress namespace so the tls certs can be synced to the workload
clusters.
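With cert-manager, the 1:1 name-to-cert mapping described above might look like the following Certificate sketch; the issuer name is an assumption:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: httpbin.admin.aws1.example.com
  namespace: istio-ingress  # management cluster, synced to workload clusters
spec:
  secretName: httpbin.admin.aws1.example.com
  dnsNames:
    # Exactly one dns name per certificate to avoid http2 connection
    # reuse issues with istio.
    - httpbin.admin.aws1.example.com
  issuerRef:
    name: letsencrypt  # assumed ClusterIssuer name
    kind: ClusterIssuer
```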
The secretstores component is critical and provides the mechanism to
securely fetch Secret resources from the Management Cluster.
The holos server and configuration code stored in version control
contains only ExternalSecret references, no actual secrets.
This component adds a `default` `SecretStore` to each management
namespace which uses the `eso-reader` service account token to
authenticate to the management cluster. This service account is limited
to reading secrets within the namespace it resides in.
For example:
```yaml
---
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: default
  namespace: external-secrets
spec:
  provider:
    kubernetes:
      auth:
        token:
          bearerToken:
            key: token
            name: eso-reader
      remoteNamespace: external-secrets
      server:
        caBundle: Long Base64 encoded string
        url: https://34.121.54.174
```
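The eso-reader restriction is plain namespace-scoped RBAC. A sketch of what the Role and binding might look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: eso-reader
  namespace: external-secrets
rules:
  # Read-only access to Secrets in this namespace only.
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: eso-reader
  namespace: external-secrets
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: eso-reader
subjects:
  - kind: ServiceAccount
    name: eso-reader
    namespace: external-secrets
```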
This patch adds the `eso-creds-manager` component which needs to be
applied to the management cluster prior to the `eso-creds-refresher`
component being applied to workload clusters.
The manager component configures rbac to allow the creds-refresher job
to complete.
This patch also adjusts the behavior to only create secrets for the
eso-reader account by default.
Namespaces with the label `holos.run/eso.writer=true` will also have an
eso-writer secret provisioned in their namespace, allowing secrets to be
written back to the management cluster. This is intended for the
PushSecret resource.
Use v0.81.2 to build out the holos platform. Once we have the
components structured fairly well we can circle back around and copy the
components to schematics. There's a bit of friction regenerating the
platform from schematic each time.
Using CUE definitions like #Platform to hold data is confusing. Clarify
the use of fields: definitions like #Platform define the shape (schema)
of the data, while private fields like _Platform represent and hold the
data.
The first thing most platforms need to do is come up with a strategy for
managing namespaces across multiple clusters.
This patch defines #Namespaces in the holos platform and adds a
namespaces component which loops over all values in the #Namespaces
struct and manages a kubernetes Namespace object.
The platform resource itself loops over all clusters in the platform to
manage all namespaces across all clusters.
From a blank slate:
```
❯ holos generate platform holos
4:26PM INF platform.go:79 wrote platform.metadata.json version=0.82.0 platform_id=018fa1cf-a609-7463-aa6e-fa53bfded1dc path=/home/jeff/workspace/holos-run/holos-infra/saas/platform.metadata.json
4:26PM INF platform.go:91 generated platform holos version=0.82.0 platform_id=018fa1cf-a609-7463-aa6e-fa53bfded1dc path=/home/jeff/workspace/holos-run/holos-infra/saas
❯ holos pull platform config .
4:26PM INF pull.go:64 pulled platform model version=0.82.0 server=https://jeff.app.dev.k2.holos.run:443 platform_id=018fa1cf-a609-7463-aa6e-fa53bfded1dc
4:26PM INF pull.go:75 saved platform config version=0.82.0 server=https://jeff.app.dev.k2.holos.run:443 platform_id=018fa1cf-a609-7463-aa6e-fa53bfded1dc path=platform.config.json
❯ (cd components && holos generate component cue namespaces)
4:26PM INF component.go:147 generated component version=0.82.0 name=namespaces path=/home/jeff/workspace/holos-run/holos-infra/saas/components/namespaces
❯ holos render platform ./platform/
4:26PM INF platform.go:29 ok render component version=0.82.0 path=components/namespaces cluster=management num=1 total=2 duration=464.055541ms
4:26PM INF platform.go:29 ok render component version=0.82.0 path=components/namespaces cluster=aws1 num=2 total=2 duration=467.978499ms
```
The result:
```sh
cat deploy/clusters/management/components/namespaces/namespaces.gen.yaml
```
```yaml
---
metadata:
  name: holos
  labels:
    kubernetes.io/metadata.name: holos
kind: Namespace
apiVersion: v1
```
Without this patch the
holos.platform.v1alpha1.PlatformService.CreatePlatform doesn't work as
expected. The Platform message is used which incorrectly requires a
client supplied id which is ignored by the server.
This patch allows the creation of a new platform by reusing the update
operation as a mutation that applies to both create and update. Only
modifiable fields are part of the PlatformMutation message.
This patch adds two more example helm chart based components. podinfo
installs as a normal https repository based helm chart. podinfo-oci
uses an OCI image to manage the helm chart.
The way holos handles OCI images is subtle, so it's good to include an
example right out of the chute. GitHub Actions uses OCI images, for
example.
This patch adds a command to generate CUE based holos components from
examples embedded in the executable. The examples are passed through
the go template rendering engine with values pulled from flags.
Each directory in the embedded filesystem becomes a unique command for
nice tab completion. The `--name` flag defaults to "example" and is the
resulting component name.
A follow up patch with more flags will set the stage for a Helm
component schematic.
```
holos generate component cue minimal
```
```txt
3:07PM INF component.go:91 generated component version=0.80.2 name=example path=/home/jeff/holos/dev/bare/components/example
```
Split holos render into component and platform.
This patch splits the previous `holos render` command into subcommands.
`holos render component ./path/to/component/` behaves as the previous
`holos render` command and renders an individual component.
The new `holos render platform ./path/to/platform/` subcommand makes
space to render the entire platform using the platform model pulled from
the PlatformService.
Starting with an empty directory:
```sh
holos register user
holos generate platform bare
holos pull platform config .
holos render platform ./platform/
```
```txt
10:01AM INF platform.go:29 ok render component version=0.80.2 path=components/configmap cluster=k1 num=1 total=1 duration=448.133038ms
```
The bare platform has a single component which refers to the platform
model pulled from the PlatformService:
```sh
cat deploy/clusters/mycluster/components/platform-configmap/platform-configmap.gen.yaml
```
```yaml
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: platform
  namespace: default
data:
  platform: |
    spec:
      model:
        cloud:
          providers:
            - cloudflare
          cloudflare:
            email: platform@openinfrastructure.co
        org:
          displayName: Open Infrastructure Services
          name: ois
```
This patch adds a subcommand to pull the data necessary to construct a
PlatformConfig DTO. The PlatformConfig message contains all of the
fields and values necessary to build a platform and the platform
components. This is an alternative to holos passing multiple tags to
CUE. The PlatformConfig is marshalled and passed once.
The platform config is also stored in the local filesystem in the root
directory of the platform. This enables repeated local building and
rendering without making an rpc call.
The build / render pipeline is expected to cache the PlatformConfig once
at the start of the pipeline using the pull subcommand.
The `holos render platform` command is unimplemented. This patch
partially implements platform rendering by fetching the platform model
from the PlatformService and providing it to CUE using a tag.
CUE returns a `kind: Platform` resource to `holos` which will eventually
process a BuildPlan for each platform component listed in the Platform
spec.
For now, however, it's sufficient to have the current platform model
available to CUE.
Problem:
Rendering the whole platform doesn't need a cluster name.
Solution:
Make the flag optional, do not set the cue tag if it's empty.
Result:
Holos renders the platform resource and proceeds to the point where we
need to implement the iteration over platform components, passing the
platform model to each one and rendering the component.
We need to output a kind: Platform resource from cue so holos can
iterate over each build plan. The platform resource itself should also
contain a copy of the platform model obtained from the PlatformService
so holos can easily pass the model to each BuildPlan it needs to execute
to render the full platform.
This patch lays the groundwork for the Platform resource. A future
patch will have the holos cli obtain the platform model and inject it as
a JSON encoded string to CUE. CUE will return the Platform resource
which is a list of references to build plans. Holos will then iterate
over each build plan, pass the model back in, and execute the build
plan.
To illustrate where we're headed, the `cue export` step will move into
`holos` with a future patch.
```
❯ holos register user
3:34PM INF register.go:77 user version=0.80.0 email=jeff@ois.run server=https://app.dev.k2.holos.run:443 user_id=018f8839-3d74-7e39-afe9-181ad2fc8abe org_id=018f8839-3d74-7e3a-918c-b36494da0115
❯ holos generate platform bare
3:34PM INF generate.go:79 wrote platform.metadata.json version=0.80.0 platform_id=018f8839-3d74-7e3b-8cb8-77a2c124d173 path=/home/jeff/holos/dev/bare/platform.metadata.json
3:34PM INF generate.go:91 generated platform bare version=0.80.0 platform_id=018f8839-3d74-7e3b-8cb8-77a2c124d173 path=/home/jeff/holos/dev/bare
❯ holos push platform form .
3:34PM INF push.go:70 pushed: https://app.dev.k2.holos.run:443/ui/platform/018f8839-3d74-7e3b-8cb8-77a2c124d173 version=0.80.0
❯ cue export ./platform/
{
  "metadata": {
    "name": "bare",
    "labels": {},
    "annotations": {}
  },
  "spec": {
    "model": {}
  },
  "kind": "Platform",
  "apiVersion": "holos.run/v1alpha1"
}
```
When the holos server URL switches, we also need to update the client
context to get the correct org id.
Also improve quality of life by printing the url to the form when the
platform form is pushed to the server.
```txt
❯ holos push platform form .
11:41AM INF push.go:71 updated platform form version=0.79.0 server=https://app.dev.k2.holos.run:443 platform_id=018f87d1-7ca2-7e37-97ed-a06bcee9b442
11:41AM INF push.go:72 https://app.dev.k2.holos.run:443/ui/platform/018f87d1-7ca2-7e37-97ed-a06bcee9b442 version=0.79.0
```
This sub-command renders the web app form from CUE code and updates the
form using the `holos.platform.v1alpha1.PlatformService/UpdatePlatform`
rpc method.
Example use case, starting fresh:
```
rm -rf ~/holos
mkdir ~/holos
cd ~/holos
```
Step 1: Login
```sh
holos login
```
```txt
9:53AM INF login.go:40 logged in as jeff@ois.run version=0.79.0 name="Jeff McCune" exp="2024-05-17 21:16:07 -0700 PDT" email=jeff@ois.run
```
Step 2: Register to create server side resources.
```sh
holos register user
```
```
9:52AM INF register.go:68 user version=0.79.0 email=jeff@ois.run user_id=018f826d-85a8-751d-81ee-64d0f2775b3f org_id=018f826d-85a8-751e-98dd-a6cddd9dd8f0
```
Step 3: Generate the bare platform in the local filesystem.
```sh
holos generate platform bare
```
```txt
9:52AM INF generate.go:79 wrote platform.metadata.json version=0.79.0 platform_id=018f826d-85a8-751f-96d0-0d2bf70df909 path=/home/jeff/holos/platform.metadata.json
9:52AM INF generate.go:91 generated platform bare version=0.79.0 platform_id=018f826d-85a8-751f-96d0-0d2bf70df909 path=/home/jeff/holos
```
Step 4: Push the platform form to the `holos server` web app.
```sh
holos push platform form .
```
```txt
9:52AM INF client.go:67 updated platform version=0.79.0 platform_id=018f826d-85a8-751f-96d0-0d2bf70df909 duration=73.62995ms
```
At this point the platform form is published and functions as expected
when visiting the platform web interface.
Makes it easier to work with grpcurl:
```sh
grpcurl -H "x-oidc-id-token: $(holos token)" -d '{"org_id":"'$(holos orgid)'"}' ${HOLOS_SERVER##*/} holos.platform.v1alpha1.PlatformService.ListPlatforms
```
When the user generates a platform, we need to know the platform ID it's
linked to in the holos server. If there is no platform with the same
name, the `holos generate platform` command should error out.
This is necessary because the first thing we want to show is pushing an
updated form to `holos server`. To update the web ui the CLI needs to
know the platform ID to update.
This patch modifies the generate command to obtain a list of platforms
for the org and verify the generated name matches one of the platforms
that already exists.
A future patch could have the `generate platform` command call the
`holos.platform.v1alpha1.PlatformService.CreatePlatform` method if the
platform isn't found.
Results:
```sh
holos generate platform bare
```
```txt
4:15PM INF generate.go:77 wrote platform.metadata.json version=0.77.1 platform_id=018f826d-85a8-751f-96d0-0d2bf70df909 path=/home/jeff/holos/platform.metadata.json
4:15PM INF generate.go:89 generated platform bare version=0.77.1 platform_id=018f826d-85a8-751f-96d0-0d2bf70df909 path=/home/jeff/holos
```
```sh
cat platform.metadata.json
```
```json
{
"id": "018f826d-85a8-751f-96d0-0d2bf70df909",
"name": "bare",
"display_name": "Bare Platform"
}
```
This patch logs the service and rpc method of every request at Info
level. The error code and message is also logged. This gives a good
indication of what rpc methods are being called and by whom.
This patch adds a `holos register user` command. Given an authenticated
id token and no other record of the user in the database, the cli tool
uses the API to ensure:
1. The user is registered in `holos server`.
2. The user is linked to one Holos Organization.
3. The Holos Organization has the `bare` platform.
4. The Holos Organization has the `reference` platform.
5. `~/.holos/client-context.json` contains the user id and an org id.
The `holos.ClientContext` struct is intended as a light weight way to
save and load the current organization id to the file system for further
API calls.
The assumption is most users will have only a single org. We can add
a more complicated config context system like kubectl uses if and when
we need it.
This patch adds a generate subcommand that copies a platform embedded
into the executable to the local filesystem. The purpose is to
accelerate initial setup with canned example platforms.
Two platforms are intended to start: one bare and one reference
platform. The number of platforms embedded into holos should be kept
small (2-3) to limit our support burden.
This patch adds the GetVersion rpc method to
holos.system.v1alpha1.SystemService and wires the version information up
to the Web UI.
This is a good example to crib from later regarding fetching and
refreshing data from the web ui using grpc and field masks.
This patch refactors the API following the [API Best Practices][api]
documentation. The UpdatePlatform method is modeled after a mutating
operation described [by Netflix][nflx] instead of using a REST resource
representation. This makes it much easier to iterate over the fields
that need to be updated as the PlatformUpdateOperation is a flat data
structure while a Platform resource may have nested fields. Nested
fields are more complicated and less clear to handle with a FieldMask.
This patch also adds a snackbar message on save. Previously, the save
button didn't give any indication of success or failure. This patch
fixes the problem by adding a snackbar message that pops up at the
bottom of the screen.
When the snackbar message is dismissed or times out the save button is
re-enabled.
[api]: https://protobuf.dev/programming-guides/api/
[nflx]: https://netflixtechblog.com/practical-api-design-at-netflix-part-2-protobuf-fieldmask-for-mutation-operations-2e75e1d230e4
Examples:
FieldMask for ListPlatforms
```
grpcurl -H "x-oidc-id-token: $(holos token)" -d @ ${HOLOS_SERVER##*/} holos.platform.v1alpha1.PlatformService.ListPlatforms <<EOF
{
  "org_id": "018f36fb-e3f7-7f7f-a1c5-c85fb735d215",
  "field_mask": { "paths": ["id","name"] }
}
EOF
```
```json
{
  "platforms": [
    {
      "id": "018f36fb-e3ff-7f7f-a5d1-7ca2bf499e94",
      "name": "bare"
    },
    {
      "id": "018f6b06-9e57-7223-91a9-784e145d998c",
      "name": "gary"
    },
    {
      "id": "018f6b06-9e53-7223-8ae1-1ad53d46b158",
      "name": "jeff"
    },
    {
      "id": "018f6b06-9e5b-7223-8b8b-ea62618e8200",
      "name": "nate"
    }
  ]
}
```
Closes: #171
This patch refactors the API to be resource-oriented around one service
per resource type. PlatformService, OrganizationService, UserService,
etc...
Validation is improved to use CEL rules provided by [protovalidate][1].
Place holders for FieldMask and other best practices are added, but are
unimplemented as per [API Best Practices][2].
The intent is to set us up well for copying and pasting solid existing
examples as we add features.
With this patch the server and web app client are both updated to use
the refactored API, however the following are not working:
1. Update the model.
2. Field Masks.
[1]: https://buf.build/bufbuild/protovalidate
[2]: https://protobuf.dev/programming-guides/api/
This command is just a prototype of how to fetch the platform model so
we can make it available to CUE.
The idea is we take the data from the holos server and write it into a
CUE `_Platform` struct. This will probably involve converting the data
to CUE format and nesting it under the platform struct spec field.
This patch restructures the bare platform in preparation for a
`Platform` kind of output from CUE in addition to the existing
`BuildPlan` kind.
This patch establishes a pattern where our own CUE defined code goes
into three CUE module paths:
1. `internal/platforms/cue.mod/gen/github.com/holos-run/holos/api/v1alpha1`
2. `internal/platforms/cue.mod/pkg/github.com/holos-run/holos/api/v1alpha1`
3. `internal/platforms/cue.mod/usr/github.com/holos-run/holos/api/v1alpha1`
The first path is automatically generated from Go structs. The second
path is where we override and provide additional cue level integration.
The third path is reserved for the end user to further refine and
constrain our definitions.
This form goes a good way toward capturing what we need to configure the
entire reference platform. Elements and sections are responsive to
which cloud providers are selected, which achieves my goal of modeling a
reasonably advanced form using only JSON data produced by CUE.
To write the form via the API:
```sh
cue export ./forms/platform/ --out json \
  | jq '{platform_id: "'${platformId}'", fields: .spec.fields}' \
  | grpcurl -H "x-oidc-id-token: $(holos token)" -d @ ${host}:443 \
      holos.platform.v1alpha1.PlatformService.PutForm
```
The way we were organizing fields into section broke Formly validation.
This patch fixes the problem by using the recommended approach of
[Nested Forms][1].
This patch also refactors the PlatformService API to clean it up.
GetForm / PutForm are separated from the Platform methods. Similarly
GetModel / PutModel are separated out and are specific to get and put
the model data.
NOTE: I'm not sure we should have separated out the platform service
into its own protobuf package. Seems maybe unnecessary.
```txt
❯ grpcurl -H "x-oidc-id-token: $(holos token)" -d '{"platform_id":"018f36fb-e3ff-7f7f-a5d1-7ca2bf499e94"}' jeff.app.dev.k2.holos.run:443 holos.platform.v1alpha1.PlatformService.GetModel
{
  "model": {
    "org": {
      "contactEmail": "platform@openinfrastructure.co",
      "displayName": "Open Infrastructure Services LLC",
      "domain": "ois.run",
      "name": "ois"
    },
    "privacy": {
      "country": "earth",
      "regions": [
        "us-east-2",
        "us-west-2"
      ]
    },
    "terms": {
      "didAgree": true
    }
  }
}
```
[1]: https://formly.dev/docs/examples/other/nested-formly-forms
This patch wires up a Select and a Multi Select box. This patch also
establishes a decision as it relates to Formly TypeScript / gRPC Proto3
/ CUE definitions of the form data structure. The decision is to use
gRPC as a transport for any JSON to avoid friction trying to fit Formly
types into Proto3 messages.
Note when using google.protobuf.Value messages with bufbuild/connect-es,
we need to round trip them one last time through JSON to get the
original JSON on the other side. This is because connect-es preserves
the type discriminators in the case and value fields of the message.
Refer to: [Accessing oneof
groups](https://github.com/bufbuild/protobuf-es/blob/main/docs/runtime_api.md#accessing-oneof-groups)
NOTE: On the wire, carry any JSON as field configs for expedience. I
attempted to reflect FormlyFieldConfig in protobuf, but it was too time
consuming. The loosely defined Formly JSON data API creates significant
friction when joined with a well defined protobuf API. Therefore, we do
not specify anything about the Forms API; we convey any valid JSON and
leave it up to CUE and Formly on the sending and receiving sides of the
API.
We use CUE to define our own holos form elements as a subset of the loose
Formly definitions. We further hope Formly will move toward a better JSON
data API, but it's unlikely. Consider replacing Formly entirely and
building on top of the strongly typed Angular Dynamic Forms API.
Refer to: https://github.com/ngx-formly/ngx-formly/blob/v6.3.0/src/core/src/lib/models/fieldconfig.ts#L15
Consider: https://angular.io/guide/dynamic-form
Usage:
Generate the form from CUE:

```sh
cue export ./forms/platform/ --out json | jq -cM | pbcopy
```

Store the form JSON in the config_values column of the platforms table.
View the form, and submit some data. Then get the data back out for use rendering the platform:

```sh
grpcurl -H "x-oidc-id-token: $(holos token)" -d '{"platform_id":"'${platformId}'"}' $holos holos.v1alpha1.PlatformService.GetConfig
```
```json
{
  "platform": {
    "spec": {
      "config": {
        "user": {
          "sections": {
            "org": {
              "fields": {
                "contactEmail": "jeff@openinfrastructure.co",
                "displayName": "Open Infrastructure Services LLC",
                "domain": "ois.run",
                "name": "ois"
              }
            },
            "privacy": {
              "fields": {
                "country": "earth",
                "regions": [
                  "us-east-2",
                  "us-west-2"
                ]
              }
            },
            "terms": {
              "fields": {
                "didAgree": true
              }
            }
          }
        }
      }
    }
  }
}
```
Problem:
The GetConfig response value isn't directly usable with CUE without some
gymnastics.
Solution:
Refactor the protobuf definition and response output to make the user
defined and supplied config values provided by the API directly usable
in the CUE code that defines the platform.
Result:
The top level platform config is directly usable in the
`internal/platforms/bare` directory:
```sh
grpcurl -H "x-oidc-id-token: $(holos token)" -d '{"platform_id":"'${platformID}'"}' $host \
  holos.v1alpha1.PlatformService.GetConfig \
  > platform.holos.json
```

Vet the user supplied data:

```sh
cue vet ./ -d '#PlatformConfig' platform.holos.json
```

Build the holos component. The ConfigMap consumes the user supplied
data:

```sh
cue export --out yaml -t cluster=k2 ./components/configmap platform.holos.json \
  | yq .spec.components
```
Note the data provided by the input form is embedded into the
ConfigMap managed by Holos:
```yaml
KubernetesObjectsList:
  - metadata:
      name: platform-configmap
    apiObjectMap:
      ConfigMap:
        platform: |
          metadata:
            name: platform
            namespace: default
            labels:
              app.holos.run/managed: "true"
          data:
            platform: |
              kind: Platform
              spec:
                config:
                  user:
                    sections:
                      org:
                        fields:
                          contactEmail: jeff@openinfrastructure.co
                          displayName: Open Infrastructure Services LLC
                          domain: ois.run
                          name: ois
              apiVersion: app.holos.run/v1alpha1
              metadata:
                name: bare
                labels: {}
                annotations: {}
              holos:
                flags:
                  cluster: k2
          kind: ConfigMap
          apiVersion: v1
    Skip: false
```
Problem:
The use of google.protobuf.Any was making it awkward to work with the
data provided by the user. The structure of the form data is defined by
the platform engineer, so the intent of Any was to wrap the data in a
way we can pass over the network and persist in the database.
The escaped JSON encoding was problematic and error prone to decode on
the other end.
Solution:
Define the Platform values as a two level map with string keys, but with
protobuf message fields "sections" and "fields" respectively. Use
google.protobuf.Value from the struct package to encode the actual
value.
Result:
In TypeScript, google.protobuf.Value encodes and decodes easily to a
JSON value. On the go side, connect correctly handles the value as
well.
No more ugly error prone escaping:
```
❯ grpcurl -H "x-oidc-id-token: $(holos token)" -d '{"platform_id":"'${platformId}'"}' $host holos.v1alpha1.PlatformService.GetConfig
{
  "sections": {
    "org": {
      "fields": {
        "contactEmail": "jeff@openinfrastructure.co",
        "displayName": "Open Infrastructure Services LLC",
        "domain": "ois.run",
        "name": "ois"
      }
    }
  }
}
```
This return value is intended to be directly usable in the CUE code, so
we may further nest the values into a platform.spec key.
This patch changes the backend to store the platform config form
definition and the config values supplied by the form as JSON in the
database.
The gRPC API does not change with this patch, but may need to depending
on how this works and how easy it is to evolve the data model and add
features.
This patch is a work in progress wiring up the form to put the values to
the holos server using grpc.
In an effort to simplify the platform configuration, the structure is a
two level map with the top level being configuration sections and the
second level being the fields associated with the config section.
To support multiple kinds of values and field controls, the values are
serialized to JSON for rpc over the network and for storage in the
database. When the values are used, either by the UI or by the `holos
render` command, they're unmarshalled and in-lined into the Platform
Config data structure.
Pick back up ensuring the Platform rpc handler correctly encodes and
decodes the structure to the database.
Consider changing the config_form and config_values fields to JSON field
types in the database. It will likely make working with this a lot
easier.
With this patch we're ready to wire up the holos render command to fetch
the platform configuration and create the end to end demo.
Here's essentially what the render command will fetch and lay down as a
json file for CUE:
```
❯ grpcurl -H "x-oidc-id-token: $(holos token)" -d '{"platform_id":"018f2c4e-ecde-7bcb-8b89-27a99e6cc7a1"}' jeff.app.dev.k2.holos.run:443 holos.v1alpha1.PlatformService.GetPlatform | jq .platform.config.values
{
  "sections": {
    "org": {
      "values": {
        "contactEmail": "\"platform@openinfrastructure.co\"",
        "displayName": "\"Open Infrastructure Services LLC\"",
        "domain": "\"ois.run\"",
        "name": "\"ois\""
      }
    }
  }
}
```
This patch adds a /platform/:id route path to a PlatformDetail
component. The platform detail component calls the GetPlatform method
given the platform ID and renders the platform config form on the detail
tab.
The submit button is not yet wired up.
The API for adding platforms changes to allow raw JSON bytes in the
RawConfig field. The raw bytes are not presented on the read path,
though; calling GetPlatforms provides the platform and the config form
inline in the response.
Use the `raw_config` field instead of `config` when creating the form
data.
```
❯ grpcurl -H "x-oidc-id-token: $(holos token)" -d @ jeff.app.dev.k2.holos.run:443 holos.v1alpha1.PlatformService.AddPlatform <<EOF
{
  "platform": {
    "org_id": "018f27cd-e5ac-7f98-bfe1-2dbab208a48c",
    "name": "bare2",
    "raw_config": {
      "form": "$(cue export ./forms/platform/ --out json | jq -cM | base64 -w0)"
    }
  }
}
EOF
```
This patch adds 4 fields to the Platform table:
1. Config Form represents the JSON FormlyFieldConfig for the UI.
2. Config CUE represents the CUE file containing a definition that the
Config Values must unify with.
3. Config Definition is the CUE definition variable name used to unify
the values with the cue code. Should be #PlatformSpec in most
cases.
4. Config Values represents the JSON values provided by the UI.
The use case is the platform engineer defines the #PlatformSpec in cue,
and provides the form field config. The platform engineer then provides
1-3 above when adding or updating a Platform.
The UI then presents the form to the end user and provides values for 4
when the user submits the form.
This patch also refactors the AddPlatform method to accept a Platform
message. To do so we make the id field optional since it is server
assigned.
The patch also adds a database constraint to ensure platform names are
unique within the scope of an organization.
Results:
Note how the CUE representation of the Platform Form is exported to
JSON, then converted to a base64 encoded string, which is the protobuf
JSON representation of a `bytes` value.
```
grpcurl -H "x-oidc-id-token: $(holos token)" -d @ jeff.app.dev.k2.holos.run:443 holos.v1alpha1.PlatformService.AddPlatform <<EOF
{
  "platform": {
    "id": "0d3dc0c0-bbc8-41f8-8c6e-75f0476509d6",
    "org_id": "018f27cd-e5ac-7f98-bfe1-2dbab208a48c",
    "name": "bare",
    "config": {
      "form": "$(cd internal/platforms/bare && cue export ./forms/platform/ --out json | jq -cM | base64 -w0)"
    }
  }
}
EOF
```
Note the requested platform ID is ignored.
```
{
  "platforms": [
    {
      "id": "018f2af9-f7ba-772a-9db6-f985ece8fed1",
      "timestamps": {
        "createdAt": "2024-04-29T17:49:36.058379Z",
        "updatedAt": "2024-04-29T17:49:36.058379Z"
      },
      "name": "bare",
      "creator": {
        "id": "018f27cd-e591-7f98-a9d2-416167282d37"
      },
      "config": {
"form": "eyJhcGlWZXJzaW9uIjoiZm9ybXMuaG9sb3MucnVuL3YxYWxwaGExIiwia2luZCI6IlBsYXRmb3JtRm9ybSIsIm1ldGFkYXRhIjp7Im5hbWUiOiJiYXJlIn0sInNwZWMiOnsic2VjdGlvbnMiOlt7Im5hbWUiOiJvcmciLCJkaXNwbGF5TmFtZSI6Ik9yZ2FuaXphdGlvbiIsImRlc2NyaXB0aW9uIjoiT3JnYW5pemF0aW9uIGNvbmZpZyB2YWx1ZXMgYXJlIHVzZWQgdG8gZGVyaXZlIG1vcmUgc3BlY2lmaWMgY29uZmlndXJhdGlvbiB2YWx1ZXMgdGhyb3VnaG91dCB0aGUgcGxhdGZvcm0uIiwiZmllbGRDb25maWdzIjpbeyJrZXkiOiJuYW1lIiwidHlwZSI6ImlucHV0IiwicHJvcHMiOnsibGFiZWwiOiJOYW1lIiwicGxhY2Vob2xkZXIiOiJleGFtcGxlIiwiZGVzY3JpcHRpb24iOiJETlMgbGFiZWwsIGUuZy4gJ2V4YW1wbGUnIiwicmVxdWlyZWQiOnRydWV9fSx7ImtleSI6ImRvbWFpbiIsInR5cGUiOiJpbnB1dCIsInByb3BzIjp7ImxhYmVsIjoiRG9tYWluIiwicGxhY2Vob2xkZXIiOiJleGFtcGxlLmNvbSIsImRlc2NyaXB0aW9uIjoiRE5TIGRvbWFpbiwgZS5nLiAnZXhhbXBsZS5jb20nIiwicmVxdWlyZWQiOnRydWV9fSx7ImtleSI6ImRpc3BsYXlOYW1lIiwidHlwZSI6ImlucHV0IiwicHJvcHMiOnsibGFiZWwiOiJEaXNwbGF5IE5hbWUiLCJwbGFjZWhvbGRlciI6IkV4YW1wbGUgT3JnYW5pemF0aW9uIiwiZGVzY3JpcHRpb24iOiJEaXNwbGF5IG5hbWUsIGUuZy4gJ0V4YW1wbGUgT3JnYW5pemF0aW9uJyIsInJlcXVpcmVkIjp0cnVlfX0seyJrZXkiOiJjb250YWN0RW1haWwiLCJ0eXBlIjoiaW5wdXQiLCJwcm9wcyI6eyJsYWJlbCI6IkNvbnRhY3QgRW1haWwiLCJwbGFjZWhvbGRlciI6InBsYXRmb3JtLXRlYW1AZXhhbXBsZS5jb20iLCJkZXNjcmlwdGlvbiI6IlRlY2huaWNhbCBjb250YWN0IGVtYWlsIGFkZHJlc3MiLCJyZXF1aXJlZCI6dHJ1ZX19XX1dfX0K"
      }
    }
  ]
}
```
This patch adds a basic AddPlatform method that adds a platform with a
name and a display name.
Next steps are to add fields for the Platform Config Form definition and
the Platform Config values submitted from the form.
Next step: AddPlatform
Also consider extracting the queries that resolve the requested org_id
into a helper function. This will likely move to an interceptor
eventually, because every request is org scoped and needs authorization
checks against the org.
```
grpcurl -H "x-oidc-id-token: $(holos token)" -d '{"org_id":"018f27cd-e5ac-7f98-bfe1-2dbab208a48c"}' jeff.app.dev.k2.holos.run:443 holos.v1alpha1.PlatformService.GetPlatforms
```
Problem:
Platform engineers need the ability to define custom input fields for
their own platform level configuration values. The holos web UI needs
to present the platform config values in a clean way. The values
entered on the form need to make their way into the top level
Platform.spec field for use across all components and clusters in the
platform.
Solution:
Define a Platform Form in a forms cue package. The output of this
definition is intended to be sent to the holos server to provide to the
web UI.
Result:
Platform engineers can define their platform config input values in
their infrastructure repository. For example, the bare platform form
inputs are defined at `platforms/bare/forms/platform/platform-form.cue`.
This cue file produces [FormlyFieldConfig][1] output.
```console
cue export ./forms/platform/ --out yaml
```
```yaml
apiVersion: forms.holos.run/v1alpha1
kind: PlatformForm
metadata:
  name: bare
spec:
  sections:
    - name: org
      displayName: Organization
      description: Organization config values are used to derive more specific configuration values throughout the platform.
      fieldConfigs:
        - key: name
          type: input
          props:
            label: Name
            placeholder: example
            description: DNS label, e.g. 'example'
            required: true
        - key: domain
          type: input
          props:
            label: Domain
            placeholder: example.com
            description: DNS domain, e.g. 'example.com'
            required: true
        - key: displayName
          type: input
          props:
            label: Display Name
            placeholder: Example Organization
            description: Display name, e.g. 'Example Organization'
            required: true
        - key: contactEmail
          type: input
          props:
            label: Contact Email
            placeholder: platform-team@example.com
            description: Technical contact email address
            required: true
```
Next Steps:
Add a holos subcommand to produce the output and store it in the
backend. Wire the front end to fetch the form config from the backend.
[1]: https://formly.dev/docs/api/core#formlyfieldconfig
This patch adds a bare platform that does nothing but render a configmap
containing the platform config structure itself.
The definition of the platform structure is firming up. The platform
designer, which may be a holos customer, is responsible for defining the
structure of the `platform.spec` output field.
We holos developers have a reserved namespace to add configuration
fields and data in the `platform.holos` output file.
Beyond these two fields, the platform config structure has TypeMeta and
ObjectMeta fields similar to a kubernetes api object to support
versioning the platform config data, naming the platform, annotating the
platform, and labeling the platform.
The path forward from here is to:
1. Eventually move the stable definitions into a CUE module that gets
imported into the user's package.
2. As a platform designer, add the organization field to the
#PlatformSpec definition as a CUE definition.
3. As a platform designer, add the organization field Form data
structure as a JSON file.
4. Add an API to upload the #PlatformSpec cue file and the
#PlatformSpec form json file to the saas backend.
5. Wire up Angular to pull the form json from the API and render the
form.
6. Wire up Angular to write the form data to a gRPC service method.
7. Wire up the `holos cli` to read the form data from a gRPC service
method.
8. Tie it all together where the holos cli renders the configmap.
This patch adds an organization "selector" that's really just a
placeholder. For now, the active organization is the last element in
the list returned by the GetCallerOrganizations method.
The purpose is to make sure we have the structure in place for more than
one organization without needing to implement full support for the
feature at this early stage.
The Angular frontend is expected to call the activeOrg() method of the
OrganizationService. In the future this could store the state of which
organization the user has selected. The purpose is to return an org id
to send as a request parameter for other requests.
Note this patch also implements refresh behavior. The list of orgs is
fetched once on application load. If there is no user, or the user has
zero orgs, the user is created and an organization is added with them as
an owner. This is accomplished using observable pipes.
The pipe is tied to a refresh behavior. Clicking the org button
triggers the refresh behavior, which executes the pipe again and
notifies all subscribers.
This works quite well and should be idiomatic angular / rxjs. Clicking
the button automatically updates the UI after making the necessary API
calls.
This patch adds the OrganizationService to the Angular front end and
displays a simple list of the organizations the user is a member of in
the profile card.
There isn't a service yet to return the currently selected
organization, but that could be a simple method that returns the most
recent entry in the list until we put something more complicated in
place, like local storage of the user's selection.
It may make sense to put a database constraint on the number of
organizations until we implement the feature later. It's too early to
do so now; I just want to make sure it's possible to add later.
Problem:
When loading the page the GetCallerClaims rpc method is called multiple
times unnecessarily.
Solution:
Use [shareReplay][1] to replay the last observable event for all
subscribers, including subscribers coming late to the party.
Result:
Network inspector in chrome indicates GetCallerClaims is called once and
only once.
[1]: https://rxjs.dev/api/operators/shareReplay
This patch adds a ProfileButton component which makes a ConnectRPC gRPC
call to the `holos.v1alpha1.UserService.GetCallerClaims` method and
renders the profile button based on the claims.
Note, in the network inspector there are two API calls to
`holos.v1alpha1.UserService.GetCallerClaims` which is unfortunate. A
follow up patch might be good to fix this.
Problem:
It's slow to build the angular app, compile it into the go executable,
copy it to the pod, then restart the server.
Solution:
Configure the mesh to route /ui to `ng serve` running on my local
host.
Result:
Navigating to https://jeff.app.dev.k2.holos.run/ui gets responses from
the ng development server.
Use:
ng serve --host 0.0.0.0
// Label is an arbitrary unique identifier internal to holos itself. The holos
// cli is expected to never write a Label value to rendered output files,
// therefore use a [Label] when the identifier must be unique and internal.
// Defined as a type for clarity and type checking.
//
// A Label is useful to convert a CUE struct to a list, for example producing a
// list of [APIObject] resources from an [APIObjectMap]. A CUE struct using
// Label keys is guaranteed to not lose data when rendering output because a
// Label is expected to never be written to the final output.
type Label string

// Kind is a kubernetes api object kind. Defined as a type for clarity and type
// checking.
type Kind string

// APIObject represents the most basic generic form of a single kubernetes api
// object. Represented as a JSON object internally for compatibility between
// tools, for example loading from CUE.
type APIObject structpb.Struct

// APIObjectMap represents the marshalled yaml representation of kubernetes api
// objects. Do not produce an APIObjectMap directly, instead use [APIObjects]
// to produce the marshalled yaml representation from CUE data, then provide
// the result to [HolosComponent].
type APIObjectMap map[Kind]map[Label]string

// APIObjects represents Kubernetes API objects defined directly from CUE code.
// Useful to mix in resources to any kind of [HolosComponent], for example
// adding an ExternalSecret resource to a [HelmChart].
//
// [Kind] must be the resource kind, e.g. Deployment or Service.
//
// [Label] is an arbitrary internal identifier to uniquely identify the
// resource within the context of a `holos` command. Holos will never write
// the intermediate label to rendered output.
//
// Refer to [HolosComponent] which accepts an [APIObjectMap] field provided by
// [APIObjects].
Package v1alpha2 contains the core API contract between the holos cli and CUE configuration code. Platform designers, operators, and software developers use this API to write configuration in CUE which \`holos\` loads. The overall shape of the API defines imperative actions \`holos\` should carry out to render the complete yaml that represents a Platform.
[Platform](<#Platform>) defines the complete configuration of a platform. With the holos reference platform this takes the shape of one management cluster and at least two workload clusters. Each cluster has multiple [HolosComponent](<#HolosComponent>) resources applied to it.
Each holos component path, e.g. \`components/namespaces\` produces exactly one [BuildPlan](<#BuildPlan>) which in turn contains a set of [HolosComponent](<#HolosComponent>) kinds.
The primary kinds of [HolosComponent](<#HolosComponent>) are:
1. [HelmChart](<#HelmChart>) to render config from a helm chart.
2. [KustomizeBuild](<#KustomizeBuild>) to render config from [Kustomize](<#Kustomize>).
3. [KubernetesObjects](<#KubernetesObjects>) to render [APIObjects](<#APIObjects>) defined directly in CUE configuration.
Note that Holos operates as a data pipeline, so the output of a [HelmChart](<#HelmChart>) may be provided to [Kustomize](<#Kustomize>) for post\-processing.
```go
const (
	// ChartDir is the directory name created in the holos component directory to cache a chart.
	ChartDir = "vendor"
	// ResourcesFile is the file name used to store component output when post-processing with kustomize.
	ResourcesFile = "resources.yaml"
)
```
<a name="KubernetesObjectsKind"></a>
```go
const KubernetesObjectsKind = "KubernetesObjects"
```
<a name="APIObject"></a>
## type APIObject {#APIObject}
APIObject represents the most basic generic form of a single kubernetes api object. Represented as a JSON object internally for compatibility between tools, for example loading from CUE.
```go
type APIObject structpb.Struct
```
<a name="APIObjectMap"></a>
## type APIObjectMap {#APIObjectMap}
APIObjectMap represents the marshalled yaml representation of kubernetes api objects. Do not produce an APIObjectMap directly, instead use [APIObjects](<#APIObjects>) to produce the marshalled yaml representation from CUE data, then provide the result to [HolosComponent](<#HolosComponent>).
```go
type APIObjectMap map[Kind]map[Label]string
```
<a name="APIObjects"></a>
## type APIObjects {#APIObjects}
APIObjects represents Kubernetes API objects defined directly from CUE code. Useful to mix in resources to any kind of [HolosComponent](<#HolosComponent>), for example adding an ExternalSecret resource to a [HelmChart](<#HelmChart>).
[Kind](<#Kind>) must be the resource kind, e.g. Deployment or Service.
[Label](<#Label>) is an arbitrary internal identifier to uniquely identify the resource within the context of a \`holos\` command. Holos will never write the intermediate label to rendered output.
Refer to [HolosComponent](<#HolosComponent>) which accepts an [APIObjectMap](<#APIObjectMap>) field provided by [APIObjects](<#APIObjects>).
BuildPlan represents a build plan for the holos cli to execute. The purpose of a BuildPlan is to define one or more [HolosComponent](<#HolosComponent>) kinds. For example a [HelmChart](<#HelmChart>), [KustomizeBuild](<#KustomizeBuild>), or [KubernetesObjects](<#KubernetesObjects>).
A BuildPlan usually has an additional empty [KubernetesObjects](<#KubernetesObjects>) for the purpose of using the [HolosComponent](<#HolosComponent>) DeployFiles field to deploy an ArgoCD or Flux gitops resource for the holos component.
```go
type Chart struct {
	// Release represents the chart release when executing helm template.
	Release string `json:"release"`
	// Repository represents the repository to fetch the chart from.
	Repository Repository `json:"repository,omitempty"`
}
```
<a name="FileContent"></a>
## type FileContent {#FileContent}
FileContent represents file contents.
```go
type FileContent string
```
<a name="FileContentMap"></a>
## type FileContentMap {#FileContentMap}
FileContentMap represents a mapping of file paths to file contents. Paths are relative to the \`holos\` output "deploy" directory, and may contain sub\-directories.
```go
type FileContentMap map[FilePath]FileContent
```
<a name="FilePath"></a>
## type FilePath {#FilePath}
FilePath represents a file path.
```go
type FilePath string
```
<a name="HelmChart"></a>
## type HelmChart {#HelmChart}
HelmChart represents a holos component which wraps around an upstream helm chart. Holos orchestrates helm by providing values obtained from CUE, renders the output using \`helm template\`, then post\-processes the helm output yaml using the general functionality provided by [HolosComponent](<#HolosComponent>), for example [Kustomize](<#Kustomize>) post\-rendering and mixing in additional kubernetes api objects.
```go
type HelmChart struct {
	HolosComponent `json:",inline"`
	Kind string `json:"kind" cue:"\"HelmChart\""`
	// Chart represents a helm chart to manage.
	Chart Chart `json:"chart"`
	// ValuesContent represents the values.yaml file holos passes to the `helm
	// template` command.
	ValuesContent string `json:"valuesContent"`
	// EnableHooks enables helm hooks when executing the `helm template` command.
	EnableHooks bool `json:"enableHooks"`
	// Kustomize represents a kubectl kustomize build post-processing step.
	Kustomize `json:"kustomize,omitempty"`
	// Skip causes holos to take no action regarding this component.
	Skip bool `json:"skip" cue:"bool | *false"`
}
```
<a name="Kind"></a>
## type Kind {#Kind}
Kind is a kubernetes api object kind. Defined as a type for clarity and type checking.
```go
type Kind string
```
<a name="KubernetesObjects"></a>
## type KubernetesObjects {#KubernetesObjects}
KubernetesObjects represents a [HolosComponent](<#HolosComponent>) composed of Kubernetes API objects provided directly from CUE using [APIObjects](<#APIObjects>).
Kustomize represents resources necessary to execute a kustomize build. Intended for at least two use cases:
1. Process a [KustomizeBuild](<#KustomizeBuild>) [HolosComponent](<#HolosComponent>) which represents raw yaml file resources in a holos component directory.
2. Post process a [HelmChart](<#HelmChart>) [HolosComponent](<#HolosComponent>) to inject istio, patch jobs, add custom labels, etc...
```go
type Kustomize struct {
	// KustomizeFiles holds file contents for kustomize, e.g. patch files.
	KustomizeFiles FileContentMap `json:"kustomizeFiles"`
}
```
KustomizeBuild represents a [HolosComponent](<#HolosComponent>) that renders plain yaml files in the holos component directory using \`kubectl kustomize build\`.
```go
type KustomizeBuild struct {
	HolosComponent `json:",inline"`
	Kind string `json:"kind" cue:"\"KustomizeBuild\""`
}
```
<a name="Label"></a>
## type Label {#Label}
Label is an arbitrary unique identifier internal to holos itself. The holos cli is expected to never write a Label value to rendered output files, therefore use a [Label](<#Label>) when the identifier must be unique and internal. Defined as a type for clarity and type checking.
A Label is useful to convert a CUE struct to a list, for example producing a list of [APIObject](<#APIObject>) resources from an [APIObjectMap](<#APIObjectMap>). A CUE struct using Label keys is guaranteed to not lose data when rendering output because a Label is expected to never be written to the final output.
```go
type Label string
```
<a name="Metadata"></a>
## type Metadata {#Metadata}
Metadata represents data about the holos component such as the Name.
```go
type Metadata struct {
	// Name represents the name of the holos component.
	Name string `json:"name"`
	// Namespace is the primary namespace of the holos component. A holos
	// component may manage resources in multiple namespaces, in this case
	// consider setting the component namespace to default.
	//
	// This field is optional because not all resources require a namespace,
	// particularly CRDs and DeployFiles functionality.
	// +optional
	Namespace string `json:"namespace,omitempty"`
}
```
<a name="Platform"></a>
## type Platform {#Platform}
Platform represents a platform to manage. A Platform resource informs holos which components to build. The platform resource also acts as a container for the platform model form values provided by the PlatformService. The primary use case is to collect the cluster names, cluster types, platform model, and holos components to build into one resource.
```go
type Platform struct {
	// Kind is a string value representing the resource this object represents.
	Kind string `json:"kind" cue:"\"Platform\""`
	// APIVersion represents the versioned schema of this representation of an object.
	APIVersion string `json:"apiVersion"`
	// Metadata represents data about the object such as the Name.
	Metadata PlatformMetadata `json:"metadata"`
	// Spec represents the specification.
	Spec PlatformSpec `json:"spec"`
}
```
<a name="PlatformMetadata"></a>
## type PlatformMetadata {#PlatformMetadata}
```go
type PlatformMetadata struct {
	// Name represents the Platform name.
	Name string `json:"name"`
}
```
<a name="PlatformSpec"></a>
## type PlatformSpec {#PlatformSpec}
PlatformSpec represents the specification of a Platform. Think of a platform specification as a list of platform components to apply to a list of kubernetes clusters combined with the user\-specified Platform Model.
```go
type PlatformSpec struct {
	// Model represents the platform model holos gets from the
	// PlatformService.GetPlatform rpc method and provides to CUE using a tag.
	Model structpb.Struct `json:"model"`
	// Components represents a list of holos components to manage.
	// ...
}
```
Package v1alpha3 contains the core API contract between the holos cli and CUE configuration code. Platform designers, operators, and software developers use this API to write configuration in CUE which \`holos\` loads. The overall shape of the API defines imperative actions \`holos\` should carry out to render the complete yaml that represents a Platform.
[Platform](<#Platform>) defines the complete configuration of a platform. With the holos reference platform this takes the shape of one management cluster and at least two workload clusters. Each cluster has multiple [Component](<#Component>) resources applied to it.
Each holos component path, e.g. \`components/namespaces\` produces exactly one [BuildPlan](<#BuildPlan>) which in turn contains a set of [Component](<#Component>) kinds.
The primary kinds of [Component](<#Component>) are:
1. [HelmChart](<#HelmChart>) to render config from a helm chart.
2. [KustomizeBuild](<#KustomizeBuild>) to render config from [Kustomize](<#Kustomize>).
3. [KubernetesObjects](<#KubernetesObjects>) to render [APIObjects](<#APIObjects>) defined directly in CUE configuration.
Note that Holos operates as a data pipeline, so the output of a [HelmChart](<#HelmChart>) may be provided to [Kustomize](<#Kustomize>) for post\-processing.
```go
const (
	// ChartDir is the directory name created in the holos component directory to cache a chart.
	ChartDir = "vendor"
	// ResourcesFile is the file name used to store component output when post-processing with kustomize.
	ResourcesFile = "resources.yaml"
)
```
<a name="KubernetesObjectsKind"></a>
```go
const KubernetesObjectsKind = "KubernetesObjects"
```
<a name="APIObject"></a>
## type APIObject {#APIObject}
APIObject represents the most basic generic form of a single kubernetes api object. Represented as a JSON object internally for compatibility between tools, for example loading from CUE.
```go
type APIObject structpb.Struct
```
<a name="APIObjectMap"></a>
## type APIObjectMap {#APIObjectMap}
APIObjectMap represents the marshalled yaml representation of kubernetes api objects. Do not produce an APIObjectMap directly, instead use [APIObjects](<#APIObjects>) to produce the marshalled yaml representation from CUE data, then provide the result to [Component](<#Component>).
```go
type APIObjectMap map[Kind]map[InternalLabel]string
```
<a name="APIObjects"></a>
## type APIObjects {#APIObjects}
APIObjects represents Kubernetes API objects defined directly from CUE code. Useful to mix in resources to any kind of [Component](<#Component>), for example adding an ExternalSecret resource to a [HelmChart](<#HelmChart>).
[Kind](<#Kind>) must be the resource kind, e.g. Deployment or Service.
[InternalLabel](<#InternalLabel>) is an arbitrary internal identifier to uniquely identify the resource within the context of a \`holos\` command. Holos will never write the intermediate label to rendered output.
Refer to [Component](<#Component>) which accepts an [APIObjectMap](<#APIObjectMap>) field provided by [APIObjects](<#APIObjects>).
BuildPlan represents a build plan for the holos cli to execute. The purpose of a BuildPlan is to define one or more [Component](<#Component>) kinds. For example a [HelmChart](<#HelmChart>), [KustomizeBuild](<#KustomizeBuild>), or [KubernetesObjects](<#KubernetesObjects>).
A BuildPlan usually has an additional empty [KubernetesObjects](<#KubernetesObjects>) for the purpose of using the [Component](<#Component>) DeployFiles field to deploy an ArgoCD or Flux gitops resource for the holos component.
```go
type Component struct {
	// ...
	// Kustomize represents a kubectl kustomize build post-processing step.
	Kustomize `json:"kustomize,omitempty"`
	// Skip causes holos to take no action regarding this component.
	Skip bool `json:"skip" cue:"bool | *false"`
}
```
<a name="FileContent"></a>
## type FileContent {#FileContent}
FileContent represents file contents.
```go
type FileContent string
```
<a name="FileContentMap"></a>
## type FileContentMap {#FileContentMap}
FileContentMap represents a mapping of file paths to file contents.
```go
type FileContentMap map[FilePath]FileContent
```
<a name="FilePath"></a>
## type FilePath {#FilePath}
FilePath represents a file path.
```go
type FilePath string
```
<a name="HelmChart"></a>
## type HelmChart {#HelmChart}
HelmChart represents a holos component which wraps around an upstream helm chart. Holos orchestrates helm by providing values obtained from CUE, renders the output using \`helm template\`, then post\-processes the helm output yaml using the general functionality provided by [Component](<#Component>), for example [Kustomize](<#Kustomize>) post\-rendering and mixing in additional kubernetes api objects.
```go
type HelmChart struct {
	Component `json:",inline"`
	Kind string `json:"kind" cue:"\"HelmChart\""`
	// Chart represents a helm chart to manage.
	Chart Chart `json:"chart"`
	// ValuesContent represents the values.yaml file holos passes to the `helm
	// template` command.
	ValuesContent string `json:"valuesContent"`
	// EnableHooks enables helm hooks when executing the `helm template` command.
	EnableHooks bool `json:"enableHooks"`
}
```
<a name="InternalLabel"></a>
## type InternalLabel {#InternalLabel}
InternalLabel is an arbitrary unique identifier internal to holos itself. The holos cli is expected to never write an InternalLabel value to rendered output files, therefore use an [InternalLabel](<#InternalLabel>) when the identifier must be unique and internal. Defined as a type for clarity and type checking.
An InternalLabel is useful to convert a CUE struct to a list, for example producing a list of [APIObject](<#APIObject>) resources from an [APIObjectMap](<#APIObjectMap>). A CUE struct using InternalLabel keys is guaranteed to not lose data when rendering output because an InternalLabel is expected to never be written to the final output.
```go
type InternalLabel string
```
<a name="Kind"></a>
## type Kind {#Kind}
Kind is a kubernetes api object kind. Defined as a type for clarity and type checking.
```go
type Kind string
```
<a name="KubernetesObjects"></a>
## type KubernetesObjects {#KubernetesObjects}
KubernetesObjects represents a [Component](<#Component>) composed of Kubernetes API objects provided directly from CUE using [APIObjects](<#APIObjects>).
KustomizeBuild represents a [Component](<#Component>) that renders plain yaml files in the holos component directory using \`kubectl kustomize build\`.
```go
type KustomizeBuild struct {
	Component `json:",inline"`
	Kind string `json:"kind" cue:"\"KustomizeBuild\""`
}
```
<a name="Metadata"></a>
## type Metadata {#Metadata}
Metadata represents data about the object such as the Name.
```go
type Metadata struct {
	// Name represents the name of the holos component.
	Name string `json:"name"`
	// Namespace is the primary namespace of the holos component. A holos
	// component may manage resources in multiple namespaces, in this case
	// consider setting the component namespace to default.
	//
	// This field is optional because not all resources require a namespace,
	// particularly CRDs and DeployFiles functionality.
	// +optional
	Namespace string `json:"namespace,omitempty"`
}
```
<a name="Platform"></a>
## type Platform {#Platform}
Platform represents a platform to manage. A Platform resource informs holos which components to build. The platform resource also acts as a container for the platform model form values provided by the PlatformService. The primary use case is to collect the cluster names, cluster types, platform model, and holos components to build into one resource.
```go
type Platform struct {
	// Kind is a string value representing the resource this object represents.
	Kind string `json:"kind" cue:"\"Platform\""`
	// APIVersion represents the versioned schema of this representation of an object.
	APIVersion string `json:"apiVersion"`
	// Metadata represents data about the object such as the Name.
	Metadata PlatformMetadata `json:"metadata"`
	// Spec represents the specification.
	Spec PlatformSpec `json:"spec"`
}
```
<a name="PlatformMetadata"></a>
## type PlatformMetadata {#PlatformMetadata}
```go
type PlatformMetadata struct {
	// Name represents the Platform name.
	Name string `json:"name"`
}
```
<a name="PlatformSpec"></a>
## type PlatformSpec {#PlatformSpec}
PlatformSpec represents the specification of a Platform. Think of a platform specification as a list of platform components to apply to a list of kubernetes clusters combined with the user\-specified Platform Model.
```go
type PlatformSpec struct {
	// Model represents the platform model holos gets from the
	// PlatformService.GetPlatform rpc method and provides to CUE using a tag.
	Model structpb.Struct `json:"model"`
	// Components represents a list of holos components to manage.
	// ...
}
```
Holos is a tool intended to lighten the burden of managing Kubernetes resources. In 2020 we set out to develop a holistic platform composed from open source cloud native components. We quickly became frustrated with how each of the major components packaged and distributed their software in a different way. Many projects choose to distribute their software with Helm charts, while others provide plain yaml files and Kustomize bases. The popular Kube Prometheus Stack project provides Jsonnet to render and update Kubernetes yaml manifests.
:::note
Holos is designed to complement and improve, not replace, existing tools in the cloud native ecosystem.
:::
## Helm
### Chart Users
Describe how things are different when using an upstream helm chart.
### Chart Authors
Describe how things are different when writing a new helm chart.
## Kustomize
TODO
## ArgoCD
TODO
## Flux
TODO
## Timoni
| Aspect | Timoni | Holos | Comment |
| -- | -- | -- | -- |
| Language | CUE | CUE | Like Holos, Timoni is also built on CUE. |
| Artifact | OCI Image | Plain YAML Files | The Holos Authors find plain files easier to work with and reason about than OCI images. |
| Outputs to | OCI Image Repository | Local Git repository | Holos is designed for use with existing GitOps tools. |
| Concept | Module | Component | A Timoni Module is analogous to a Holos Component. |
| Concept | Bundle | Platform | A Timoni Bundle is somewhat similar to, but smaller in scope than, a Holos Platform. |
:::important
The Holos Authors are deeply grateful to Stefan and Timoni for the capability of
importing Kubernetes custom resource definitions into CUE. Without this
functionality, much of the Kubernetes ecosystem would be more difficult to
manage in CUE and therefore in Holos.
:::
## KubeVela
1. KubeVela is also built on CUE.
2. KubeVela is intended to create an Application abstraction.
3. Holos prioritizes composition over abstraction.
4. An abstraction of an Application acts as a filter that removes all but the lowest common denominator functionality. The Holos Authors have found this filtering effect to create excessive friction for software developers.
5. Holos focuses instead on composition to empower developers and platform engineers to leverage the unique features and functionality of their software and platform.
This page is intended as a high level conceptual overview of the key concepts in
Holos. Refer to the [Core API](/docs/api/core/) for low level reference
documentation.
Holos is a tool built for platform engineers. The Holos authors share three
core values which guide our design decisions for the tool.
1. Safety
2. Ease of use
3. Consistency
Each of the following concepts is intended to support and strengthen one or
more of these core values. In this way we hope to lighten the burden carried by
platform engineers.
## Concepts
- [Component](<#component>) - The primary building block in Holos, wraps a Helm chart, Kustomize base, or plain resources defined in CUE.
- [Platform](<#platform>) - A collection of Components integrated into a software development platform.
- [Model](<#model>) - Structured data included in the Platform specification, available to all Components. For example, your organization's domain name.
- [Rendering](<#rendering>) - Holos is a tool that makes the process of rendering Kubernetes manifests safer, easier, and more consistent.
```mermaid
graph BT
Platform[<a href="#platform">Platform</a>]
Component[<a href="#component">Components</a>]
Helm[<a href="#component">Helm</a>]
Kustomize[<a href="#component">Kustomize</a>]
CUE[<a href="#component">CUE</a>]
Component --> Platform
Helm --> Component
Kustomize --> Component
CUE --> Component
```
<!--
```mermaid
---
title: Figure 1 - Holos Concepts
---
mindmap
root((Holos))
Platform
Components
HelmChart
KustomizeBuild
KubernetesObjects
Model
name: Example Org
domain: example.com
Renders
YAML Files
Kubernetes Manifests
ArgoCD Application
FluxCD Kustomization
```
-->
## Component
A Component is the primary building block when managing software with Holos. A
software project you wish to integrate into your platform, for example ArgoCD,
is managed using one or more components.
The primary Component kinds are:
1. **HelmChart** to render config provided by Helm.
2. **KustomizeBuild** to render config provided by Kustomize.
3. **KubernetesObjects** to render config provided by CUE.
Components are intended to integrate unmodified upstream software releases into
your Platform. In this way, the focus of a Component is more about the unique
differentiating aspects of your platform than the upstream software contained in
the Component.
#### Example HelmChart Component
The ArgoCD Component is a good example of a HelmChart component because it takes
advantage of most of the features that empower you to focus on the unique
differentiators of your platform.
Take note of the following key points in this example ArgoCD Component:
1. The Component wraps the ArgoCD Helm Chart in a way that's easy to upgrade and maintain over time.
2. Newer Gateway API resources are mixed in, replacing the older Ingress resource included in the chart.
3. Helm output is passed through Kustomize to configure secure mutual TLS encryption.
4. Helm values are easier and safer to manipulate with CUE instead of text markup.
5. Kustomize is easier and safer to manipulate with CUE instead of text markup.
6. Platform data Model values are easily accessible, for example the OIDC issuer and the organization's domain name.
The Component wraps the unmodified upstream ArgoCD helm chart, providing
easier upgrades as new versions are released.
Note how the Component facilitates composition by allowing us to mix-in new
functionality from the ecosystem without modifying the upstream chart. The
Platform this Component integrates with uses the new Gateway API, but the
upstream helm chart does not yet support Gateway API. See how the Resources
field is used to mix-in a ReferenceGrant from the Gateway API without modifying
the upstream helm chart.
The Platform uses Istio to implement service to service encryption with mutual
TLS. The Component passes the Helm output to Kustomize to integrate with Istio.
Kustomize is used to patch the argocd-server Deployment resource to inject the
Istio sidecar for mutual TLS.
Helm values are safer and easier to work with in CUE. Note how you can modify
helm values using well defined data instead of manipulating text yaml files.
Similarly, the yaml files used for Kustomize are produced by CUE, which is again
safer and easier because the Kustomize spec has been imported into CUE and is
validated.
Finally, the domain name used by this Platform is easily accessible from the
PlatformSpec which is defined at the root level and made available to all
components integrated into the platform. Similarly, data values shared by all
of the Components that make up ArgoCD are defined in a structure accessible by
each of these components.
```cue
package holos

import (
	"encoding/yaml"
	"strings"
)

// Produce a helm chart build plan.
(#Helm & Chart).Output

let Chart = {
	Name:      "argo-cd"
	Namespace: "argocd"
	Version:   "7.1.1"
	Chart: chart: release: "argocd"
	// The upstream chart uses a Job to create the argocd-redis Secret. Enable
	// hooks to enable the Job.
	Chart: enableHooks: true
	Repo: name: "argocd"
	Repo: url:  "https://argoproj.github.io/argo-helm"
	// Ensure all of our mix-in resources go into the same namespace as the Chart.
	Resources: [_]: [_]: metadata: namespace: Namespace
	// Grant the Gateway namespace the ability to refer to the backend service
}
```
Holos leverages a simple web app to collect and store platform attributes with a web form. Register an account with the web app to create and retrieve the platform model.
```
holos register user
```
:::tip
Holos allows you to customize all of the sections and fields of your platform model.
:::
## Generate your Platform
Generate your platform configuration from the holos reference platform embedded in the `holos` executable. Platform configuration is stored in a git repository.
```bash
mkdir holos-infra
cd holos-infra
holos generate platform holos
```
The generate command writes many files organized by platform component into the current directory.
TODO: Put a table here describing key elements?
:::tip
Take a peek at `holos generate platform --help` to see other platforms embedded in the holos executable.
:::
## Push the Platform Form
```
holos push platform form .
```
## Fill in the form
TODO
## Pull the Platform Model
Once the platform model is saved, pull it into the holos-infra repository:
```
holos pull platform model .
```
## Render the Platform
With the platform model and the platform spec, you're ready to render the complete platform configuration:
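For example, using the render command shown in the same form elsewhere in these docs (assuming the platform configuration lives in the `./platform` directory created above):

```
holos render platform ./platform
```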
⚡️ Holos will help you build your **software development platform in no time.**
💸 Building a software development platform is **time consuming and expensive**. Spend more time building features for your customers and less time managing your development platform.
💥 Already have a platform? Add new features and services to your platform easily with Holos.
🧐 Holos is a platform builder. It builds a holistic software development platform composed of best-of-breed cloud native open source projects. Holos is also a tool to make it easier to manage cloud infrastructure by providing a typed alternative to yaml templates.
## Features
Holos was built to solve two main problems:
1. Building a platform usually takes 3 engineers 6-9 months of effort. Holos provides a reference platform that enables you to deploy and customize your platform in a fraction of the time.
2. Configuration changes often cause outages. Existing tools like Helm make it difficult to understand the impact a configuration change will have. Holos provides a unique, unified configuration model powered by CUE that makes it safer and easier to roll out configuration changes.
A core principle of Holos is that organizations gain value from owning the platform they build on. Avoid vendor lock-in, future price hikes, and expensive licensing changes by building on a solid foundation of open source, Cloud Native Computing Foundation-backed projects.
The following features are built into the Holos reference platform.
:::tip
Don't see your preferred technology in the stack? Holos is designed to enable you to swap out components of the platform tech stack.
:::
- **Continuous Delivery**
- Holos builds a GitOps workflow for each application running in the platform.
- Developers push changes which are automatically deployed.
- Powered by [ArgoCD](https://argo-cd.readthedocs.io/en/stable/)
- **Identity and Access Management** (IAM)
- Holos builds a standard OIDC identity provider for you.
  - Integrates with your existing IAM and SSO system, or works independently.
- Powerful customer identity and access management features.
- Role based access control.
- Powered by [ZITADEL](https://zitadel.com/)
- **Zero Trust**
- Authenticate and Authorize users at the platform layer instead of or in addition to the application layer.
- Integrated with observability to measure and alert about problems before customers complain.
- Powered by [Istio](https://istio.io/)
- **Observability**
- Holos collects performance and availability metrics automatically, without requiring application changes.
- Optional, deeper integration into the application layer.
- Distributed Tracing
- Logging
- Powered by Prometheus, Grafana, Loki, and OpenTelemetry.
- **Data Platform**
- Integrated management of PostgreSQL
- Automatic backups
- Automatic restore from backup
- Quickly fail over across multiple regions
- **Multi-Region**
- Holos is designed to operate in multiple regions and multiple clouds.
- Keep customer data in the region that makes the most sense for your business.
- Easily cut over from one region to another for redundancy and business continuity.
## Development Status
Holos is being actively developed by [Open Infrastructure Services](https://openinfrastructure.co). Releases can be found [here](https://github.com/holos-run/holos/releases).
## Adoption
Organizations who have officially adopted Holos can be found [here](https://github.com/holos-run/holos/blob/main/ADOPTERS.md).
Holos follows the [Namespace Sameness - Sig Multicluster Position][1]. A
namespace is the same on all clusters within the scope of a platform.
Namespaces are also security boundaries for role based access control. As such,
permission to read a secret in a namespace means the secret is readable on all
clusters in the platform.
When adding a component to a platform, create a namespace using the following
process. This ensures a namespace scoped `SecretStore` is created to sync
`ExternalSecret` resources from the management cluster.
1. Add a new project to the `_Projects` struct in `platform.cue`.
2. Add the namespace to the `spec.namespaces` field of the project.
3. Render the platform
4. Apply the `namespaces` component to the management cluster
5. Apply the `eso-creds-manager` component to the management cluster to create the `eso-reader` Kubernetes service account for the namespace `SecretStore`
6. Get a timestamp: `STAMP="$(date +%s)"`
7. Run the job to populate ecr creds: `kubectl create job -n holos-system --from=cronjob/ecr-creds-manager ecr-creds-manager-$STAMP`
8. Wait for the job to complete: `kubectl -n holos-system logs -l job-name=ecr-creds-manager-$STAMP -f`
9. Apply the `namespaces` component to the workload clusters
10. On the workload cluster, run the job to fetch the eso-reader creds: `kubectl create job -n holos-system --from=cronjob/eso-creds-refresher eso-creds-refresher-${STAMP}`
11. Wait for the job to complete: `kubectl -n holos-system logs -l job-name=eso-creds-refresher-${STAMP}`
12. Apply the secretstores component to the workload cluster.
13. Apply any other cluster specific components which were modified by the `holos render platform ./platform` command.
Your namespace is created and you have the ability to create secrets in the management cluster and pull them using ExternalSecret resources.