Compare commits

27 Commits

Author SHA1 Message Date
Jeff McCune
27e5846a4c Changes we discussed at Zipper on 7/17 2024-07-18 07:57:47 -07:00
Jeff McCune
3845174738 server: add holos server init subcommand for migration (#204)
When starting holos server from the production Deployment, pgbouncer
blocks the automatic migration on startup.

```json
{
  "time": "2024-07-16T16:35:52.54507682-07:00",
  "level": "ERROR",
  "msg": "could not execute",
  "version": "0.87.2",
  "code": "unknown",
  "err": "sql/schema: create \"users\" table: ERROR: permission denied for schema public (SQLSTATE 42501)",
  "loc": "cli.go:82"
}
```

This patch separates automatic migration into a `holos server init`
subcommand intended for use in a Job.

Closes: #204
2024-07-16 17:55:40 -07:00
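A minimal sketch of how the new subcommand might run as a Job, assuming the image name, env var, and database secret wiring shown here (none of these specifics are confirmed by the commit):

```yaml
# Hypothetical Job running schema migration before the server Deployment starts.
apiVersion: batch/v1
kind: Job
metadata:
  name: holos-server-init
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: init
          image: holos-server:latest  # assumed image name
          command: ["holos", "server", "init"]
          env:
            - name: DATABASE_URL  # assumed env var; connects directly, bypassing pgbouncer
              valueFrom:
                secretKeyRef:
                  name: holos-db  # assumed secret name
                  key: uri
```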
Jeff McCune
f0bc21a606 tilt: local development using k3d (#200)
Previously, the Tiltfile was hard-wired to Jeff's development
environment on the k2 cluster on-prem.  This doesn't work for other
contributors.

This patch fixes the problem by re-using the [Try Holos Locally][1]
documentation to create a local development environment.  This has a
number of benefits.  The evaluation documentation will be kept up to
date because it doubles as our development environment.  Developing
locally is preferable to developing in a remote cluster.  Hostnames and
URLs can be constant, e.g. https://app.holos.localhost/ for local dev
and https://app.holos.run/ for production.  We don't need to push to a
remote container registry, k3d has a local registry built in that works
with Tilt.

The only difference presently between evaluation and development when
following the local/k3d doc is the addition of a local registry.

With this patch holos starts up and is accessible at
https://app.holos.localhost/

[1]: https://holos.run/docs/tutorial/local/k3d/
2024-07-15 17:08:33 -07:00
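The built-in registry mentioned above can be declared in a k3d config file; a sketch assuming the cluster name and the registry address that appears in the Tiltfile (the exact config the tutorial uses is not shown in this commit):

```yaml
# k3d cluster config; apply with `k3d cluster create --config k3d.yaml`
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: workload  # assumed cluster name
registries:
  create:
    name: k3d-registry.holos.localhost  # matches the registry URI in the Tiltfile
    hostPort: "5100"
```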
Jeff McCune
6d0e48fccb github/workflows: disable test workflow
until we allocate time to fix it
2024-07-15 12:20:13 -07:00
Nate McCurdy
f5035ce699 docs/website: Touch up the k3d tutorial
This applies various grammar, formatting, and flow improvements to the
local k3d tutorial steps based on running through it from start to
finish.

This also removes the Go code responsible for embedding the website into
`holos`, which isn't needed since the site is hosted on Cloudflare
Pages.
2024-07-15 11:37:23 -07:00
Jeff McCune
3c694d2a92 doc/website: final first pass at local k3d (#199)
Link it off the nav, footer, and sidebar.  Follow up with another task
to reorganize and slim it down.

Closes: #199
2024-07-14 19:45:56 -07:00
Jeff McCune
b8592b0b72 doc/website: add holos social card (#199)
Made it in preview using a background PNG from https://social.cards/ and
by converting our logo:

    mogrify -background none -resize 1200x -format png logo.svg
2024-07-14 14:38:17 -07:00
Jeff McCune
cf2289ef19 doc/website: make try holos next after intro (#199)
Previously the intro page linked next to the glossary.  This patch makes
the try holos locally page immediately follow the introduction page.
2024-07-14 14:06:55 -07:00
Jeff McCune
5e5b9c97d4 doc/website: fix link and mermaid colors (#199)
This patch fixes up the link colors and mermaid diagrams to look better
in both light and dark mode.  This may not be the final result but it
moves in the right direction.

Links are now blue with a visible line on hover.
2024-07-14 13:34:02 -07:00
Jeff McCune
a19e0ff3f3 doc/website: fix spelling errors (#199)
This patch adds cspell over doc/md to the make lint task and fixes
existing spelling errors in the documentation.
2024-07-14 12:48:31 -07:00
Jeff McCune
ac632cb407 doc/website: sync ArgoCD Applications automatically (#199)
Previously the guide did not cover reconciling holos platform components
with GitOps.  This patch adds instructions on how to apply the
application resources, review the diff, sync manually, and finally
enable automatic sync using CUE's struct merge feature.
2024-07-14 10:02:22 -07:00
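A sketch of the kind of CUE struct merge described above, assuming an `#Application` definition holding the Argo CD Application spec (the actual field names in the tutorial may differ):

```cue
// Base Application spec defined elsewhere in the platform config.
#Application: spec: syncPolicy: {}

// Unifying another struct into the same field enables automatic sync;
// CUE merges the two structs rather than replacing the field.
#Application: spec: syncPolicy: automated: {
	prune:    true
	selfHeal: true
}
```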
Jeff McCune
154bbabf01 doc/website: add argocd to k3d platform (#199)
Previously there was no web app except httpbin in the k3d platform.  This
commit adds ArgoCD with an httproute and authorization policy at the
mesh layer.  The application layer authenticates against a separate
oidc client id in the same issuer the mesh uses to demonstrate zero
trust and compatibility between the application and platform layers.

With this patch the user can authenticate and log in, but applications
are not configured.  The user has no roles in ArgoCD either; RBAC needs
to be configured properly for the getting started guide.
2024-07-14 06:56:15 -07:00
Jeff McCune
95e45d59cb doc/website: clarify why we use httpbin (#199)
Useful to inspect request headers from the perspective of the backend.
2024-07-13 19:50:26 -07:00
Jeff McCune
a45abedd32 doc/website: touch up process after a run through (#199)
Clean up, touch up.
2024-07-13 19:36:08 -07:00
Jeff McCune
a644b1181b doc/website: move rendering section to k3d (#199)
Previously the intro was spread out.  This patch focuses the tutorial
solely onto the k3d process.
2024-07-13 14:24:44 -07:00
Jeff McCune
861b552b0b doc/website: add k3d authproxy and authpolicy (#199)
This patch adds the authproxy and authpolicy holos components to the k3d
platform for local evaluation.  This combination implements a basic Zero
Trust security model.  The httpbin backend service is protected with
authentication and authorization at the platform level without any
changes to the backend service.

The client id and project are static because they're defined centrally
in https://login.holos.run to avoid needing to set up a full identity
provider locally in k3d.

With this patch authentication and authorization work from both the web
browser and from the command line with curl using the token provided by
the holos cli.
2024-07-13 14:09:41 -07:00
Jeff McCune
5d0212e832 doc/website: local k3d with httpbin working (#199)
Previously the local k3d tutorial didn't expose any services to verify
the local certificate and the local dns changes work as expected.

This patch adds instructions and modifies the k3d platform to work with
a local mkcert certificate.  A ClusterIssuer is configured to issue
Certificate resources using the ca private key created by mkcert.

With this patch, following the instructions results in a working and
trusted httpbin resource at https://httpbin.holos.localhost.  This works
both in Chrome and curl on the command line.
2024-07-13 07:35:44 -07:00
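A sketch of the cert-manager configuration implied above, assuming the mkcert CA key pair is stored in a Secret named `local-ca` (the actual resource names are not shown in this commit):

```yaml
# ClusterIssuer signing Certificate resources with the locally trusted mkcert CA.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: local-ca  # assumed name
spec:
  ca:
    secretName: local-ca  # Secret holding tls.crt/tls.key derived from the mkcert CA
```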
Jeff McCune
9f434928d6 doc/website: add istio gateway and local ca (#199)
This patch adds a script to install a local CA and configure cert
manager to issue certs similar to how it issues certs using LetsEncrypt
in a real cluster.
2024-07-12 10:19:30 -07:00
Jeff McCune
5b1fa4b046 doc/website: add helm chart cue example (#199)
This patch adds an example of how Holos uses unmodified upstream helm
charts to integrate software projects into a platform.
2024-07-11 21:27:29 -07:00
Jeff McCune
ae4614c35b internal/generate: add k3d platform and tutorial (#199)
Previously there was no way to evaluate Holos locally.  This is a
problem because it's a high barrier to entry to set up a full-blown GKE
and EKS cluster to evaluate the reference platform.

This patch adds a minimal, but useful, k3d platform which deploys to a
single local k3d cluster.  The purpose is to provide a shorter on ramp
to see the value of ArgoCD integrated with Istio to provide a zero trust
auth proxy.

The intentional trade-off is to provide a less-holistic k3d platform
with a faster on-ramp to learn about the value of the more-holistic
holos platform.

With this patch the documentation is correct and the platform renders
fully.  The user doesn't need to provide any Platform Model values; the
defaults suffice.

For the ArgoCD client ID, we'll use https://login.holos.run as the
issuer instead of building a new OIDC issuer inside of k3d, which would
create significant friction.
2024-07-11 21:07:05 -07:00
Jeff McCune
e99a00f0a1 doc/website: fix API reference docs links in header and footer
Previously the API nav link went to the CLI docs, which was weird.  It
should go to the current API reference docs.
2024-07-11 11:36:30 -07:00
Jeff McCune
e89dcb9783 doc/website: tagline: The Platform Operating System
Gary and I chatted about this yesterday.  It's the best tagline we've
come up with so far, driving at the analogy with a Debian distribution.
2024-07-11 10:44:13 -07:00
Jeff McCune
05806cb439 doc/website: add rendering pipeline diagram
This patch adds a diagram that gives an overview of the holos rendering
pipeline.  This is an important concept to understand when working with
holos components.

Note this probably should not go in the Overview, which is intended only
to give a sense of what getting started looks like.  Move it to the
render page when we add it.
2024-07-07 14:16:46 -07:00
Jeff McCune
bfb8458bcb doc/website: draft architecture
This patch fills out some of the architecture page.  Not totally happy
with it yet but it's a start.
2024-07-07 13:28:15 -07:00
Jeff McCune
55d4033116 doc/website: add mermaid architecture diagram
Previously there were no diagrams in the documentation.  This patch wires
up mermaid for use in code blocks in the markdown files.  A minimal
diagram is added to verify mermaid works but it's not the final diagram.
2024-07-07 08:54:22 -07:00
Jeff McCune
276dc95029 doc/website: tweak observability feature
Madison says it sounds weird.
2024-07-06 16:57:16 -07:00
Jeff McCune
c473321817 doc/website: add holos features on landing page
Previously the Docusaurus example features were still in place on the
home page.  This patch replaces the homepage features with Holos
specific features and illustrations from undraw.

Refer to https://undraw.co/search
2024-07-06 16:11:07 -07:00
144 changed files with 24755 additions and 1327 deletions

.cspell.json Normal file

@@ -0,0 +1,52 @@
{
"version": "0.2",
"language": "en",
"enableFiletypes": [
"mdx"
],
"words": [
"applicationset",
"argoproj",
"authpolicy",
"authproxy",
"authroutes",
"buildplan",
"cainjector",
"clusterissuer",
"cookiesecret",
"coredns",
"crds",
"crossplane",
"dnsmasq",
"dscacheutil",
"flushcache",
"gitops",
"holos",
"httpbin",
"Infima",
"istiod",
"jetstack",
"killall",
"kubeadm",
"kubeconfig",
"kustomize",
"libnss",
"loadbalancer",
"mxcl",
"myhostname",
"nameserver",
"Parentspanid",
"putenv",
"quickstart",
"retryable",
"spanid",
"spiffe",
"startupapicheck",
"Tiltfile",
"Traceid",
"traefik",
"uibutton",
"urandom",
"zitadel"
]
}

.gitignore vendored

@@ -7,3 +7,8 @@ coverage.out
/deploy/
.vscode/
tmp/
# In case we run through the tutorial in this directory.
/holos-k3d/
/holos-infra/
node_modules/


@@ -72,6 +72,11 @@ build: ## Build holos executable.
@echo "GOPATH=${GOPATH}"
go build -trimpath -o bin/$(BIN_NAME) -ldflags $(LD_FLAGS) $(REPO_PATH)/cmd/$(BIN_NAME)
linux: ## Build holos executable for tilt.
@echo "building ${BIN_NAME}.linux ${VERSION}"
@echo "GOPATH=${GOPATH}"
GOOS=linux go build -trimpath -o bin/$(BIN_NAME).linux -ldflags $(LD_FLAGS) $(REPO_PATH)/cmd/$(BIN_NAME)
.PHONY: install
install: build ## Install holos to GOPATH/bin
install bin/$(BIN_NAME) $(shell go env GOPATH)/bin/$(BIN_NAME)
@@ -89,6 +94,7 @@ lint: ## Run linters.
buf lint
cd internal/frontend/holos && ng lint
golangci-lint run
./hack/cspell
.PHONY: coverage
coverage: test ## Test coverage profile.

Tiltfile

@@ -1,5 +1,5 @@
# -*- mode: Python -*-
# This Tiltfile manages a Go project with live leload in Kubernetes
# This Tiltfile manages a Go project with live reload in Kubernetes
listen_port = 3000
metrics_port = 9090
@@ -8,56 +8,21 @@ metrics_port = 9090
if os.getenv('TILT_WRAPPER') != '1':
fail("could not run, ./hack/tilt/bin/tilt was not used to start tilt")
# AWS Account to work in
aws_account = '271053619184'
aws_region = 'us-east-2'
# Resource ids
holos_backend = 'Holos Backend'
pg_admin = 'pgAdmin'
pg_cluster = 'PostgresCluster'
pg_svc = 'Database Pod'
compile_id = 'Go Build'
auth_id = 'Auth Policy'
lint_id = 'Run Linters'
tests_id = 'Run Tests'
# PostgresCluster resource name in k8s
pg_cluster_name = 'holos'
# Database name inside the PostgresCluster
pg_database_name = 'holos'
# PGAdmin name
pg_admin_name = 'pgadmin'
# Default Registry.
# See: https://github.com/tilt-dev/tilt.build/blob/master/docs/choosing_clusters.md#manual-configuration
# Note, Tilt will append the image name to the registry uri path
default_registry('{account}.dkr.ecr.{region}.amazonaws.com/holos-run/holos-server'.format(account=aws_account, region=aws_region))
# default_registry('{account}.dkr.ecr.{region}.amazonaws.com/holos-run/holos'.format(account=aws_account, region=aws_region))
# Set a name prefix specific to the user. Multiple developers share the tilt-holos namespace.
developer = os.getenv('USER')
holos_server = 'holos'
# See ./hack/tilt/bin/tilt
namespace = os.getenv('NAMESPACE')
# We always develop against the k1 cluster.
# We always develop against the k3d-workload cluster
os.putenv('KUBECONFIG', os.path.abspath('./hack/tilt/kubeconfig'))
# The context defined in ./hack/tilt/kubeconfig
allow_k8s_contexts('sso@k1')
allow_k8s_contexts('sso@k2')
allow_k8s_contexts('sso@k3')
allow_k8s_contexts('sso@k4')
allow_k8s_contexts('sso@k5')
# PG db connection for localhost -> k8s port-forward
os.putenv('PGHOST', 'localhost')
os.putenv('PGPORT', '15432')
# We always develop in the dev aws account.
os.putenv('AWS_CONFIG_FILE', os.path.abspath('./hack/tilt/aws.config'))
os.putenv('AWS_ACCOUNT', aws_account)
os.putenv('AWS_DEFAULT_REGION', aws_region)
os.putenv('AWS_PROFILE', 'dev-holos')
os.putenv('AWS_SDK_LOAD_CONFIG', '1')
# Authenticate to AWS ECR when tilt up is run by the developer
local_resource('AWS Credentials', './hack/tilt/aws-login.sh', auto_init=True)
# Extensions are open-source, pre-packaged functions that extend Tilt
#
@@ -81,8 +46,8 @@ developer_paths = [
'./service/holos',
]
# Builds the holos-server executable
local_resource(compile_id, 'make build', deps=developer_paths)
# Builds the holos executable GOOS=linux
local_resource(compile_id, 'make linux', deps=developer_paths)
# Build Docker image
# Tilt will automatically associate image builds with the resource(s)
@@ -91,84 +56,31 @@ local_resource(compile_id, 'make build', deps=developer_paths)
# More info: https://docs.tilt.dev/api.html#api.docker_build
#
docker_build_with_restart(
'holos',
'k3d-registry.holos.localhost:5100/holos',
context='.',
entrypoint=[
'/app/bin/holos',
'/app/bin/holos.linux',
'server',
'--listen-port={}'.format(listen_port),
'--oidc-issuer=https://login.ois.run',
'--oidc-audience=262096764402729854@holos_platform',
'--log-level=debug',
'--metrics-port={}'.format(metrics_port),
'--oidc-issuer=https://login.holos.run',
'--oidc-audience=275571128859132936',
],
dockerfile='./hack/tilt/Dockerfile',
dockerfile='./Dockerfile',
only=['./bin'],
# (Recommended) Updating a running container in-place
# https://docs.tilt.dev/live_update_reference.html
live_update=[
# Sync files from host to container
sync('./bin', '/app/bin'),
# Wait for aws-login https://github.com/tilt-dev/tilt/issues/3048
sync('./tilt/aws-login.last', '/dev/null'),
# Execute commands in the container when paths change
# run('/app/hack/codegen.sh', trigger=['./app/api'])
sync('./bin/', '/app/bin/'),
],
)
# Run local commands
# Local commands can be helpful for one-time tasks like installing
# project prerequisites. They can also manage long-lived processes
# for non-containerized services or dependencies.
#
# More info: https://docs.tilt.dev/local_resource.html
#
# local_resource('install-helm',
# cmd='which helm > /dev/null || brew install helm',
# # `cmd_bat`, when present, is used instead of `cmd` on Windows.
# cmd_bat=[
# 'powershell.exe',
# '-Noninteractive',
# '-Command',
# '& {if (!(Get-Command helm -ErrorAction SilentlyContinue)) {scoop install helm}}'
# ]
# )
# Teach tilt about our custom resources (Note, this may be intended for workloads)
# k8s_kind('authorizationpolicy')
# k8s_kind('requestauthentication')
# k8s_kind('virtualservice')
k8s_kind('pgadmin')
# Troubleshooting
def resource_name(id):
print('resource: {}'.format(id))
return id.name
workload_to_resource_function(resource_name)
# Apply Kubernetes manifests
# Tilt will build & push any necessary images, re-deploying your
# resources as they change.
#
# More info: https://docs.tilt.dev/api.html#api.k8s_yaml
#
def holos_yaml():
"""Return a k8s Deployment personalized for the developer."""
k8s_yaml_template = str(read_file('./hack/tilt/k8s.yaml'))
return k8s_yaml_template.format(
name=holos_server,
developer=developer,
namespace=namespace,
listen_port=listen_port,
metrics_port=metrics_port,
tz=os.getenv('TZ'),
)
# Customize a Kubernetes resource
# By default, Kubernetes resource names are automatically assigned
# based on objects in the YAML manifests, e.g. Deployment name.
@@ -179,133 +91,18 @@ def holos_yaml():
#
# More info: https://docs.tilt.dev/api.html#api.k8s_resource
#
k8s_yaml(blob(holos_yaml()))
k8s_yaml(blob(str(read_file('./hack/tilt/k8s/dev-holos-app/deployment.yaml'))))
# Backend server process
k8s_resource(
workload=holos_server,
new_name=holos_backend,
objects=[
'{}:serviceaccount'.format(holos_server),
'{}:servicemonitor'.format(holos_server),
],
objects=[],
resource_deps=[compile_id],
links=[
link('https://{}.app.dev.k2.holos.run/ui/'.format(developer), "Holos Web UI")
],
)
# AuthorizationPolicy - Beyond Corp functionality
k8s_resource(
new_name=auth_id,
objects=[
'{}:virtualservice'.format(holos_server),
link('https://app.holos.localhost/ui/'.format(developer), "Holos Web UI")
],
)
# Database
# Note: Tilt confuses the backup pods with the database server pods, so this code is careful to tease the pods
# apart so logs are streamed correctly.
# See: https://github.com/tilt-dev/tilt.specs/blob/master/resource_assembly.md
# pgAdmin Web UI
k8s_resource(
workload=pg_admin_name,
new_name=pg_admin,
port_forwards=[
port_forward(15050, 5050, pg_admin),
],
)
# Disabled because these don't group resources nicely
# k8s_kind('postgrescluster')
# Postgres database in-cluster
k8s_resource(
new_name=pg_cluster,
objects=['holos:postgrescluster'],
)
# Needed to select the database by label
# https://docs.tilt.dev/api.html#api.k8s_custom_deploy
k8s_custom_deploy(
pg_svc,
apply_cmd=['./hack/tilt/k8s-get-db-sts', pg_cluster_name],
delete_cmd=['echo', 'Skipping delete. Object managed by custom resource.'],
deps=[],
)
k8s_resource(
pg_svc,
port_forwards=[
port_forward(15432, 5432, 'psql'),
],
resource_deps=[pg_cluster]
)
# Run tests
local_resource(
tests_id,
'make test',
allow_parallel=True,
auto_init=False,
deps=developer_paths,
)
# Run linter
local_resource(
lint_id,
'make lint',
allow_parallel=True,
auto_init=False,
deps=developer_paths,
)
# UI Buttons for helpful things.
# Icons: https://fonts.google.com/icons
os.putenv("GH_FORCE_TTY", "80%")
cmd_button(
'{}:go-test-failfast'.format(tests_id),
argv=['./hack/tilt/go-test-failfast'],
resource=tests_id,
icon_name='quiz',
text='Fail Fast',
)
cmd_button(
'{}:issues'.format(holos_server),
argv=['./hack/tilt/gh-issues'],
resource=holos_backend,
icon_name='folder_data',
text='Issues',
)
cmd_button(
'{}:gh-issue-view'.format(holos_server),
argv=['./hack/tilt/gh-issue-view'],
resource=holos_backend,
icon_name='task',
text='View Issue',
)
cmd_button(
'{}:get-pgdb-creds'.format(holos_server),
argv=['./hack/tilt/get-pgdb-creds', pg_cluster_name, pg_database_name],
resource=pg_svc,
icon_name='lock_open_right',
text='DB Creds',
)
cmd_button(
'{}:get-pgdb-creds'.format(pg_admin_name),
argv=['./hack/tilt/get-pgdb-creds', pg_cluster_name, pg_database_name],
resource=pg_admin,
icon_name='lock_open_right',
text='DB Creds',
)
cmd_button(
'{}:get-pgadmin-creds'.format(pg_admin_name),
argv=['./hack/tilt/get-pgadmin-creds', pg_admin_name],
resource=pg_admin,
icon_name='lock_open_right',
text='pgAdmin Login',
)
print("✨ Tiltfile evaluated")


@@ -1,403 +0,0 @@
<!-- Code generated by gomarkdoc. DO NOT EDIT -->
# v1alpha2
```go
import "github.com/holos-run/holos/api/core/v1alpha2"
```
Package v1alpha2 contains the core API contract between the holos cli and CUE configuration code. Platform designers, operators, and software developers use this API to write configuration in CUE which \`holos\` loads. The overall shape of the API defines imperative actions \`holos\` should carry out to render the complete yaml that represents a Platform.
[Platform](<#Platform>) defines the complete configuration of a platform. With the holos reference platform this takes the shape of one management cluster and at least two workload clusters. Each cluster has multiple [HolosComponent](<#HolosComponent>) resources applied to it.
Each holos component path, e.g. \`components/namespaces\` produces exactly one [BuildPlan](<#BuildPlan>) which in turn contains a set of [HolosComponent](<#HolosComponent>) kinds.
The primary kinds of [HolosComponent](<#HolosComponent>) are:
1. [HelmChart](<#HelmChart>) to render config from a helm chart.
2. [KustomizeBuild](<#KustomizeBuild>) to render config from [Kustomize](<#Kustomize>)
3. [KubernetesObjects](<#KubernetesObjects>) to render [APIObjects](<#APIObjects>) defined directly in CUE configuration.
Note that Holos operates as a data pipeline, so the output of a [HelmChart](<#HelmChart>) may be provided to [Kustomize](<#Kustomize>) for post\-processing.
## Index
- [Constants](<#constants>)
- [type APIObject](<#APIObject>)
- [type APIObjectMap](<#APIObjectMap>)
- [type APIObjects](<#APIObjects>)
- [type BuildPlan](<#BuildPlan>)
- [type BuildPlanComponents](<#BuildPlanComponents>)
- [type BuildPlanSpec](<#BuildPlanSpec>)
- [type Chart](<#Chart>)
- [type FileContent](<#FileContent>)
- [type FileContentMap](<#FileContentMap>)
- [type FilePath](<#FilePath>)
- [type HelmChart](<#HelmChart>)
- [type HolosComponent](<#HolosComponent>)
- [type Kind](<#Kind>)
- [type KubernetesObjects](<#KubernetesObjects>)
- [type Kustomize](<#Kustomize>)
- [type KustomizeBuild](<#KustomizeBuild>)
- [type Label](<#Label>)
- [type Metadata](<#Metadata>)
- [type Platform](<#Platform>)
- [type PlatformMetadata](<#PlatformMetadata>)
- [type PlatformSpec](<#PlatformSpec>)
- [type PlatformSpecComponent](<#PlatformSpecComponent>)
- [type Repository](<#Repository>)
## Constants
<a name="APIVersion"></a>
```go
const (
APIVersion = "v1alpha2"
BuildPlanKind = "BuildPlan"
HelmChartKind = "HelmChart"
// ChartDir is the directory name created in the holos component directory to cache a chart.
ChartDir = "vendor"
// ResourcesFile is the file name used to store component output when post-processing with kustomize.
ResourcesFile = "resources.yaml"
)
```
<a name="KubernetesObjectsKind"></a>
```go
const KubernetesObjectsKind = "KubernetesObjects"
```
<a name="APIObject"></a>
## APIObject
APIObject represents the most basic generic form of a single kubernetes api object. Represented as a JSON object internally for compatibility between tools, for example loading from CUE.
```go
type APIObject structpb.Struct
```
<a name="APIObjectMap"></a>
## APIObjectMap
APIObjectMap represents the marshalled yaml representation of kubernetes api objects. Do not produce an APIObjectMap directly, instead use [APIObjects](<#APIObjects>) to produce the marshalled yaml representation from CUE data, then provide the result to [HolosComponent](<#HolosComponent>).
```go
type APIObjectMap map[Kind]map[Label]string
```
<a name="APIObjects"></a>
## APIObjects
APIObjects represents Kubernetes API objects defined directly from CUE code. Useful to mix in resources to any kind of [HolosComponent](<#HolosComponent>), for example adding an ExternalSecret resource to a [HelmChart](<#HelmChart>).
[Kind](<#Kind>) must be the resource kind, e.g. Deployment or Service.
[Label](<#Label>) is an arbitrary internal identifier to uniquely identify the resource within the context of a \`holos\` command. Holos will never write the intermediate label to rendered output.
Refer to [HolosComponent](<#HolosComponent>) which accepts an [APIObjectMap](<#APIObjectMap>) field provided by [APIObjects](<#APIObjects>).
```go
type APIObjects struct {
APIObjects map[Kind]map[Label]APIObject `json:"apiObjects"`
APIObjectMap APIObjectMap `json:"apiObjectMap"`
}
```
<a name="BuildPlan"></a>
## BuildPlan
BuildPlan represents a build plan for the holos cli to execute. The purpose of a BuildPlan is to define one or more [HolosComponent](<#HolosComponent>) kinds. For example a [HelmChart](<#HelmChart>), [KustomizeBuild](<#KustomizeBuild>), or [KubernetesObjects](<#KubernetesObjects>).
A BuildPlan usually has an additional empty [KubernetesObjects](<#KubernetesObjects>) for the purpose of using the [HolosComponent](<#HolosComponent>) DeployFiles field to deploy an ArgoCD or Flux gitops resource for the holos component.
```go
type BuildPlan struct {
Kind string `json:"kind" cue:"\"BuildPlan\""`
APIVersion string `json:"apiVersion" cue:"string | *\"v1alpha2\""`
Spec BuildPlanSpec `json:"spec"`
}
```
<a name="BuildPlanComponents"></a>
## BuildPlanComponents
```go
type BuildPlanComponents struct {
Resources map[Label]KubernetesObjects `json:"resources,omitempty"`
KubernetesObjectsList []KubernetesObjects `json:"kubernetesObjectsList,omitempty"`
HelmChartList []HelmChart `json:"helmChartList,omitempty"`
KustomizeBuildList []KustomizeBuild `json:"kustomizeBuildList,omitempty"`
}
```
<a name="BuildPlanSpec"></a>
## BuildPlanSpec
BuildPlanSpec represents the specification of the build plan.
```go
type BuildPlanSpec struct {
// Disabled causes the holos cli to take no action over the [BuildPlan].
Disabled bool `json:"disabled,omitempty"`
// Components represents multiple [HolosComponent] kinds to manage.
Components BuildPlanComponents `json:"components,omitempty"`
}
```
<a name="Chart"></a>
## Chart
Chart represents a helm chart.
```go
type Chart struct {
// Name represents the chart name.
Name string `json:"name"`
// Version represents the chart version.
Version string `json:"version"`
// Release represents the chart release when executing helm template.
Release string `json:"release"`
// Repository represents the repository to fetch the chart from.
Repository Repository `json:"repository,omitempty"`
}
```
<a name="FileContent"></a>
## FileContent
FileContent represents file contents.
```go
type FileContent string
```
<a name="FileContentMap"></a>
## FileContentMap
FileContentMap represents a mapping of file paths to file contents. Paths are relative to the \`holos\` output "deploy" directory, and may contain sub\-directories.
```go
type FileContentMap map[FilePath]FileContent
```
<a name="FilePath"></a>
## FilePath
FilePath represents a file path.
```go
type FilePath string
```
<a name="HelmChart"></a>
## HelmChart
HelmChart represents a holos component which wraps around an upstream helm chart. Holos orchestrates helm by providing values obtained from CUE, renders the output using \`helm template\`, then post\-processes the helm output yaml using the general functionality provided by [HolosComponent](<#HolosComponent>), for example [Kustomize](<#Kustomize>) post\-rendering and mixing in additional kubernetes api objects.
```go
type HelmChart struct {
HolosComponent `json:",inline"`
Kind string `json:"kind" cue:"\"HelmChart\""`
// Chart represents a helm chart to manage.
Chart Chart `json:"chart"`
// ValuesContent represents the values.yaml file holos passes to the `helm
// template` command.
ValuesContent string `json:"valuesContent"`
// EnableHooks enables helm hooks when executing the `helm template` command.
EnableHooks bool `json:"enableHooks" cue:"bool | *false"`
}
```
<a name="HolosComponent"></a>
## HolosComponent
HolosComponent defines the fields common to all holos component kinds. Every holos component kind should embed HolosComponent.
```go
type HolosComponent struct {
// Kind is a string value representing the resource this object represents.
Kind string `json:"kind"`
// APIVersion represents the versioned schema of this representation of an object.
APIVersion string `json:"apiVersion" cue:"string | *\"v1alpha2\""`
// Metadata represents data about the holos component such as the Name.
Metadata Metadata `json:"metadata"`
// APIObjectMap holds the marshalled representation of api objects. Useful to
// mix in resources to each HolosComponent type, for example adding an
// ExternalSecret to a HelmChart HolosComponent. Refer to [APIObjects].
APIObjectMap APIObjectMap `json:"apiObjectMap,omitempty"`
// DeployFiles represents file paths relative to the cluster deploy directory
// with the value representing the file content. Intended for defining the
// ArgoCD Application resource or Flux Kustomization resource from within CUE,
// but may be used to render any file related to the build plan from CUE.
DeployFiles FileContentMap `json:"deployFiles,omitempty"`
// Kustomize represents a kubectl kustomize build post-processing step.
Kustomize `json:"kustomize,omitempty"`
// Skip causes holos to take no action regarding this component.
Skip bool `json:"skip" cue:"bool | *false"`
}
```
<a name="Kind"></a>
## Kind
Kind is a kubernetes api object kind. Defined as a type for clarity and type checking.
```go
type Kind string
```
<a name="KubernetesObjects"></a>
## KubernetesObjects
KubernetesObjects represents a [HolosComponent](<#HolosComponent>) composed of Kubernetes API objects provided directly from CUE using [APIObjects](<#APIObjects>).
```go
type KubernetesObjects struct {
HolosComponent `json:",inline"`
Kind string `json:"kind" cue:"\"KubernetesObjects\""`
}
```
<a name="Kustomize"></a>
## Kustomize
Kustomize represents resources necessary to execute a kustomize build. Intended for at least two use cases:
1. Process a [KustomizeBuild](<#KustomizeBuild>) [HolosComponent](<#HolosComponent>) which represents raw yaml file resources in a holos component directory.
2. Post process a [HelmChart](<#HelmChart>) [HolosComponent](<#HolosComponent>) to inject istio, patch jobs, add custom labels, etc...
```go
type Kustomize struct {
// KustomizeFiles holds file contents for kustomize, e.g. patch files.
KustomizeFiles FileContentMap `json:"kustomizeFiles,omitempty"`
// ResourcesFile is the file name used for api objects in kustomization.yaml
ResourcesFile string `json:"resourcesFile,omitempty"`
}
```
<a name="KustomizeBuild"></a>
## KustomizeBuild
KustomizeBuild represents a [HolosComponent](<#HolosComponent>) that renders plain yaml files in the holos component directory using \`kubectl kustomize build\`.
```go
type KustomizeBuild struct {
HolosComponent `json:",inline"`
Kind string `json:"kind" cue:"\"KustomizeBuild\""`
}
```
<a name="Label"></a>
## Label
Label is an arbitrary unique identifier internal to holos itself. The holos cli is expected to never write a Label value to rendered output files; therefore, when using a [Label](<#Label>) the identifier must be unique and internal. Defined as a type for clarity and type checking.
A Label is useful to convert a CUE struct to a list, for example producing a list of [APIObject](<#APIObject>) resources from an [APIObjectMap](<#APIObjectMap>). A CUE struct using Label keys is guaranteed to not lose data when rendering output because a Label is expected to never be written to the final output.
```go
type Label string
```
<a name="Metadata"></a>
## Metadata
Metadata represents data about the holos component such as the Name.
```go
type Metadata struct {
// Name represents the name of the holos component.
Name string `json:"name"`
// Namespace is the primary namespace of the holos component. A holos
// component may manage resources in multiple namespaces, in this case
// consider setting the component namespace to default.
//
// This field is optional because not all resources require a namespace,
// particularly CRD's and DeployFiles functionality.
// +optional
Namespace string `json:"namespace,omitempty"`
}
```
<a name="Platform"></a>
## Platform
Platform represents a platform to manage. A Platform resource informs holos which components to build. The platform resource also acts as a container for the platform model form values provided by the PlatformService. The primary use case is to collect the cluster names, cluster types, platform model, and holos components to build into one resource.
```go
type Platform struct {
// Kind is a string value representing the resource this object represents.
Kind string `json:"kind" cue:"\"Platform\""`
// APIVersion represents the versioned schema of this representation of an object.
APIVersion string `json:"apiVersion" cue:"string | *\"v1alpha2\""`
// Metadata represents data about the object such as the Name.
Metadata PlatformMetadata `json:"metadata"`
// Spec represents the specification.
Spec PlatformSpec `json:"spec"`
}
```
<a name="PlatformMetadata"></a>
## PlatformMetadata
```go
type PlatformMetadata struct {
// Name represents the Platform name.
Name string `json:"name"`
}
```
<a name="PlatformSpec"></a>
## PlatformSpec
PlatformSpec represents the specification of a Platform. Think of a platform specification as a list of platform components to apply to a list of kubernetes clusters combined with the user-specified Platform Model.
```go
type PlatformSpec struct {
// Model represents the platform model holos gets from the
// PlatformService.GetPlatform rpc method and provides to CUE using a tag.
Model structpb.Struct `json:"model"`
// Components represents a list of holos components to manage.
Components []PlatformSpecComponent `json:"components"`
}
```
<a name="PlatformSpecComponent"></a>
## PlatformSpecComponent
PlatformSpecComponent represents a holos component to build or render.
```go
type PlatformSpecComponent struct {
// Path is the path of the component relative to the platform root.
Path string `json:"path"`
// Cluster is the cluster name to provide when rendering the component.
Cluster string `json:"cluster"`
}
```
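To make the shape concrete, here is a hypothetical PlatformSpec serialized to JSON (the field names follow the struct tags above; the paths and cluster name are made up for illustration), iterated with `jq` the way `holos` iterates components during a render:

```shell
# Hypothetical PlatformSpec fragment; "model" and "components" match the
# json struct tags above, the paths and cluster name are illustrative.
cat > /tmp/platformspec.json <<'EOF'
{
  "model": {},
  "components": [
    {"path": "components/namespaces", "cluster": "workload"},
    {"path": "components/istio-base", "cluster": "workload"}
  ]
}
EOF
# Print each component's cluster and path, one pair per line.
jq -r '.components[] | "\(.cluster) \(.path)"' /tmp/platformspec.json
```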
<a name="Repository"></a>
## Repository
Repository represents a helm chart repository.
```go
type Repository struct {
Name string `json:"name"`
URL string `json:"url"`
}
```
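A Repository carries the same name/URL pair that `helm repo add` accepts. A minimal sketch, using the jetstack repository values that appear in the cert-manager example elsewhere in this diff (the command is echoed rather than executed so the sketch has no network side effects):

```shell
# Fields of a Repository value (name and URL from the cert-manager example).
name="jetstack"
url="https://charts.jetstack.io"
# The equivalent helm invocation; echoed to keep the sketch side-effect free.
echo helm repo add "$name" "$url"
```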
Generated by [gomarkdoc](<https://github.com/princjef/gomarkdoc>)

17
doc/md/glossary.md Normal file
View File

@@ -0,0 +1,17 @@
# Glossary
This page describes the terms used within the context of Holos.
## Management Cluster
## Workload Cluster
## Platform Form
## Platform Model
## Secret Store
## Service Mesh
## Zero Trust

View File

@@ -0,0 +1,28 @@
# Local Development
This document captures notes on locally developing Holos.
Follow the steps in [Try Holos Locally](/docs/tutorial/local/k3d), but take
care to select `Develop` tabs when creating the k3d cluster so you have a local
registry to push to.
## Apply Resources
Work will be done in the `dev-holos` namespace.
Apply the infrastructure, which should persist when tilt is started / stopped.
```bash
kubectl apply --server-side=true -f ./hack/tilt/k8s/dev-holos-infra
```
This creates the PostgresCluster, service account, etc...
## Start tilt
Tilt will build the go executable, build the container, then push it to the
local registry associated with k3d.
```bash
./hack/tilt/bin/tilt up
```

View File

@@ -0,0 +1,81 @@
# Architecture
This page describes the architecture of the Holos reference platform.
## Overview
The reference platform manages three kubernetes clusters by default. One management cluster and two workload clusters.
```mermaid
graph TB
subgraph "Management"
secrets(Secrets)
c1(Controllers)
end
subgraph "Primary"
s1p(Service 1)
s2p(Service 2)
end
subgraph "Standby"
s1s(Service 1)
s2s(Service 2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class c1,s1p,s2p,s1s,s2s,secrets k8s;
class Management,Primary,Standby cluster;
```
The services in each cluster type are:
:::tip
The management cluster is designed to operate reliably on spot instances. A highly available management cluster typically costs less than a cup of coffee per month to operate.
:::
1. Management Cluster
- **SecretStore** to provide namespace scoped secrets to workload clusters.
- **CertManager** to provision TLS certificates and make them available to workload clusters.
- **ClusterAPI** to provision and manage workload clusters via GitOps. For example, EKS or GKE clusters.
- **Crossplane** to provision and manage cloud resources via GitOps. For example, buckets, managed databases, any other cloud resource.
- **CronJobs** to refresh short lived credentials. For example image pull credentials.
- **ArgoCD** to manage resources within the management cluster via GitOps.
2. Primary Workload Cluster
- **ArgoCD** to continuously deploy your applications and services via GitOps.
- **External Secrets Operator** to synchronize namespace scoped secrets.
- **Istio** to provide a Gateway to expose services.
- **ZITADEL** to provide SSO login for all other services (e.g. ArgoCD, Grafana, Backstage, etc...)
- **PostgreSQL** for in-cluster databases.
- **Backstage** to provide your developer portal into the whole platform.
- **Observability** implemented by Prometheus, Grafana, and Loki to provide monitoring and logging.
- **AuthorizationPolicy** to provide role based access control to all services in the cluster.
3. Standby Workload Cluster
- Identical configuration to the primary cluster.
- May be scaled down to zero to reduce expenses.
- Intended to take the primary cluster role quickly, within minutes, for disaster recovery or regular maintenance purposes.
## Security
### Namespaces
Namespaces are security boundaries in the reference platform. A given namespace is treated as the same security context across multiple clusters following the [SIG Multi-cluster Position](https://github.com/kubernetes/community/blob/dd4c8b704ef1c9c3bfd928c6fa9234276d61ad18/sig-multicluster/namespace-sameness-position-statement.md).
The namespace sameness principle makes role based access control straightforward to manage and comprehend. For example, granting a developer the ability to create secrets in namespace `example` means the developer has the ability to do so in the secret store in the management cluster and also synchronize the secret to the services they own in the workload clusters.
## Data Platform
Holos is designed to work with two distinct types of databases by default:
1. In-cluster PostgreSQL databases for lower cost and rapid development and testing.
2. Out-of-cluster SQL databases for production services, e.g. RDS, CloudSQL, Aurora, Redshift, etc...
:::tip
To simplify maintenance the holos reference platform provisions databases from the most recent backup by default.
:::
In-cluster databases in the holos reference platform automatically save backups to an S3 or GCS bucket. For regular maintenance and disaster recovery, the standby cluster automatically restores databases from the most recent backup in the bucket. This capability makes maintenance much simpler: most maintenance tasks are carried out on the standby cluster, which is then promoted to primary. Software upgrades in particular are intended to be carried out against the standby, verified, then promoted to primary. Once live traffic shifts to the upgraded services in the new primary, the previous cluster can be spun down to save cost or upgraded safely in place.

Binary file not shown.

After

Width:  |  Height:  |  Size: 934 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 703 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.1 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 1014 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.1 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 1014 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 854 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.1 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 116 KiB

View File

@@ -0,0 +1,847 @@
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
# Try Holos Locally
Learn how to configure and deploy the Holos reference platform to your local
host with k3d.
---
This guide assumes commands are run from your local host. Capitalized terms
have specific definitions described in the [Glossary](/docs/glossary).
## Requirements
You'll need the following tools installed on your local host to complete this guide.
1. [k3d](https://k3d.io/#installation) - to provide an api server.
2. [Docker](https://docs.docker.com/get-docker/) - to use k3d.
3. [holos](/docs/tutorial/install) - to build the platform.
4. [kubectl](https://kubernetes.io/docs/tasks/tools/) - to interact with the Kubernetes cluster.
5. [helm](https://helm.sh/docs/intro/install/) - to render Holos components that integrate vendor provided Helm charts.
6. [mkcert](https://github.com/FiloSottile/mkcert?tab=readme-ov-file#installation) - for local trusted certificates.
7. [jq](https://jqlang.github.io/jq/download/) - to manipulate json output.
## Outcome
At the end of this guide you'll have built a development platform that provides
Zero Trust security by holistically integrating off-the-shelf components.
1. ArgoCD to review and apply platform configuration changes.
2. Istio service mesh with mTLS encryption.
3. ZITADEL to provide single sign-on identity tokens with multi factor authentication.
The platform running on your local host will configure Istio to authenticate and
authorize requests using an oidc id token issued by ZITADEL _before_ the request
ever reaches ArgoCD.
:::tip
With Holos, developers don't need to write authentication or authorization logic
for many use cases.
:::
Single sign-on and role based access control are provided by the platform itself
for all services running in the platform using standardized policies.
The `k3d` platform is derived from the larger holos reference platform to
provide a smooth on-ramp to evaluate the value Holos offers.
1. Holos wraps unmodified Helm charts provided by software vendors.
2. Holos eliminates the need to template yaml.
3. Holos is composable, scaling down to local host and up to multi-cloud and multi-cluster.
4. The Zero Trust security model implemented by the reference platform.
5. Configuration unification with CUE.
## Register with Holos
Register an account with the Holos web service. This registration is required
to save platform configuration values via a simple web form and to explore how
Holos implements Zero Trust.
```bash
holos register user
```
## Create the Platform
Create the platform, which stores the Platform Form and its values in the Holos
web service. The Platform Form represents the Platform Model.
```bash
holos create platform --name k3d --display-name "Try Holos Locally"
```
## Generate the Platform
Holos builds the platform by rendering each component of the platform into
fully rendered Kubernetes configuration resources. Generate the source code for
the platform in a blank local directory. This guide names the directory
`holos-k3d` because it represents the Holos managed platform infrastructure.
Create a new Git repository to store the platform code:
```bash
mkdir holos-k3d
cd holos-k3d
git init .
```
Generate the platform code in the current directory:
```bash
holos generate platform k3d
```
Commit the generated platform config to the repository:
```bash
git add .
git commit -m "holos generate platform k3d - $(holos --version)"
```
## Push the Platform Form
TODO: Describe what the Platform Form is. Why is it needed? To get a value
that varies which ArgoCD needs to make you an admin user.
GREAT IDEA: Add a --open to open the URL in the default browser when pushing so it's an obvious call to action.
Gary wasn't sure what to do with the SUB value... More call to action.
Push the Platform Form to the web service to provide top-level configuration
values from which the platform components derive their final configuration.
```bash
holos push platform form .
```
:::important
Visit the printed URL to view the Platform Form.
:::
![Platform Form](./platform-form.png)
:::tip
You have complete control over the form fields and validation rules.
:::
## Submit the Platform Model
Fill out the form and submit the Platform Model.
For the Role Based Access Control section, provide the value of the `sub`
subject claim of your identity to ensure only you have administrative access to
ArgoCD.
```bash
holos login --print-claims | jq -r .sub
```
For the ArgoCD Git repository URL, enter the url of a public repository where
you will push your local `holos-k3d` repository.
TODO: Make it clear the repo needs to be created.
```bash
git remote add origin https://github.com/example/holos-k3d
git push origin HEAD:main
```
## Pull the Platform Model
The Platform Model is the JSON representation of the Platform Form values.
Holos provides the Platform Model to CUE to render the platform configuration to
plain YAML. Configuration that varies is derived from the Platform Model using
CUE.
Pull the Platform Model to your local host to render the platform.
```bash
holos pull platform model .
```
The `platform.config.json` file is intended to be committed to version control.
```bash
git add platform.config.json
git commit -m "Add platform model"
```
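Once pulled, the Platform Model is plain JSON you can inspect with `jq`. The sketch below writes a minimal stand-in file to `/tmp` so the commands are self-contained; your real `platform.config.json` will contain the values you submitted in the form:

```shell
# Minimal stand-in for a pulled Platform Model; the argocd keys mirror the
# fields used by the #Argo definition later in this guide.
cat > /tmp/model.json <<'EOF'
{"argocd": {"repoURL": "https://github.com/example/holos-k3d", "targetRevision": "HEAD"}}
EOF
# List the top-level keys, then extract one value.
jq -r 'keys[]' /tmp/model.json
jq -r '.argocd.repoURL' /tmp/model.json
```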
TODO: Do not warn people about storing secrets until the advanced doc about customizing the platform form.
TODO: Pull out commentary about the reference platform. Only mention the reference platform at the beginning and end.
NATE: Secret Sauce is going from minimal input into a fully rendered platform. Not Zero Trust, not external secrets. Mention the value of deriving the entire unified platform from a simple customizable input form at the beginning and the end.
## Render the Platform
Rendering the platform iterates over each platform component and renders the
component into the final Kubernetes resources that will be sent to the API Server.
```bash
holos render platform ./platform
```
This command writes fully rendered Kubernetes resource yaml to the `deploy/` directory.
:::warning
Do not edit the files in the `deploy/` directory as they will be overwritten.
:::
Commit the rendered platform configuration for `git diff` later.
```bash
git add deploy
git commit -m "holos render platform ./platform"
```
### Rendering
Holos uses the Kubernetes resource model to manage configuration. The `holos`
command line interface (cli) is the primary method you'll use to manage your
platform. Holos uses CUE to provide a unified configuration model of the
platform which is built from components packaged with Helm, Kustomize, CUE, or
any tool that can produce Kubernetes resources as output. This process can be
thought of as a yaml **rendering pipeline**.
Each component in a platform defines a rendering pipeline, shown in Figure 2,
to produce Kubernetes API resources.
```mermaid
---
title: Figure 2 - Render Pipeline
---
graph LR
PS[<a href="/docs/api/core/v1alpha2#PlatformSpec">PlatformSpec</a>]
BP[<a href="/docs/api/core/v1alpha2#BuildPlan">BuildPlan</a>]
HC[<a href="/docs/api/core/v1alpha2#HolosComponent">HolosComponent</a>]
H[<a href="/docs/api/core/v1alpha2#HelmChart">HelmChart</a>]
K[<a href="/docs/api/core/v1alpha2#KustomizeBuild">KustomizeBuild</a>]
O[<a href="/docs/api/core/v1alpha2#KubernetesObjects">KubernetesObjects</a>]
P[<a href="/docs/api/core/v1alpha2#Kustomize">Kustomize</a>]
Y[Kubernetes <br>Resources]
G[GitOps <br>Resource]
C[Kube API Server]
PS --> BP --> HC
HC --> H --> P
HC --> K --> P
HC --> O --> P
P --> Y --> C
P --> G --> C
```
The `holos` cli can be thought of as executing a data pipeline. The Platform
Model is the top level input to the pipeline and specifies the ways your
platform varies from other organizations. The `holos` cli takes the Platform
Model as input and executes a series of steps to produce the platform
configuration. The platform configuration output of `holos` is fully rendered
Kubernetes API resources, suitable for applying to a cluster with `kubectl
apply -f` or with GitOps tools such as ArgoCD or Flux.
## Review the Platform Config
:::tip
This section is optional, included to provide insight into how Holos uses CUE
and Helm to unify and render the platform configuration.
:::
Take a moment to review the platform config `holos` rendered.
### ArgoCD Application
Note the Git URL you entered into the Platform Form is used to derive the ArgoCD
`Application` resource from the Platform Model.
```yaml
# deploy/clusters/workload/gitops/namespaces.application.gen.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: namespaces
namespace: argocd
spec:
destination:
server: https://kubernetes.default.svc
project: default
source:
# highlight-next-line
path: /deploy/clusters/workload/components/namespaces
# highlight-next-line
repoURL: https://github.com/holos-run/holos-k3d
# highlight-next-line
targetRevision: HEAD
```
One ArgoCD `Application` resource is produced for each Holos component by
default. Note the `cert-manager` component renders the output using Helm.
Holos unifies the Application resource using CUE. The CUE definition which
produces the rendered output is defined in `buildplan.cue` around line 222.
:::tip
Note that CUE does not use error-prone text templates; the language is well
specified and typed, which reduces errors when unifying the configuration with
the Platform Model in the following `#Argo` definition.
:::
```cue
// buildplan.cue
// #Argo represents an argocd Application resource for each component, written
// using the #HolosComponent.deployFiles field.
#Argo: {
ComponentName: string
Application: app.#Application & {
metadata: name: ComponentName
metadata: namespace: "argocd"
spec: {
destination: server: "https://kubernetes.default.svc"
project: "default"
source: {
// highlight-next-line
path: "\(_Platform.Model.argocd.deployRoot)/deploy/clusters/\(_ClusterName)/components/\(ComponentName)"
// highlight-next-line
repoURL: _Platform.Model.argocd.repoURL
// highlight-next-line
targetRevision: _Platform.Model.argocd.targetRevision
}
}
}
// deployFiles represents the output files to write along side the component.
deployFiles: "clusters/\(_ClusterName)/gitops/\(ComponentName).application.gen.yaml": yaml.Marshal(Application)
}
```
### Helm Chart
Holos uses CUE to safely integrate the unmodified upstream `cert-manager` Helm
chart.
:::tip
Holos fully supports your existing Helm charts. Consider leveraging `holos` as
a safer alternative to umbrella charts.
:::
```cue
// components/cert-manager/cert-manager.cue
package holos
// Produce a helm chart build plan.
(#Helm & Chart).Output
let Chart = {
Name: "cert-manager"
Version: "1.14.5"
Namespace: "cert-manager"
Repo: name: "jetstack"
Repo: url: "https://charts.jetstack.io"
// highlight-next-line
Values: {
installCRDs: true
startupapicheck: enabled: false
// Must not use kube-system on gke autopilot. GKE Warden blocks access.
// highlight-next-line
global: leaderElection: namespace: Namespace
// https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-resource-requests#min-max-requests
resources: requests: {
cpu: "250m"
memory: "512Mi"
"ephemeral-storage": "100Mi"
}
// highlight-next-line
webhook: resources: Values.resources
// highlight-next-line
cainjector: resources: Values.resources
// highlight-next-line
startupapicheck: resources: Values.resources
// https://cloud.google.com/kubernetes-engine/docs/how-to/autopilot-spot-pods
nodeSelector: {
"kubernetes.io/os": "linux"
if _ClusterName == "management" {
"cloud.google.com/gke-spot": "true"
}
}
webhook: nodeSelector: Values.nodeSelector
cainjector: nodeSelector: Values.nodeSelector
startupapicheck: nodeSelector: Values.nodeSelector
}
}
```
## Create the Workload Cluster
The Workload Cluster is where your applications and services will be deployed.
In production this is usually an EKS, GKE, or AKS cluster.
:::tip
Holos supports any compliant Kubernetes cluster and was developed and tested on
GKE, EKS, Talos, and Kubeadm clusters.
:::
<Tabs>
<TabItem value="evaluate" label="Evaluate" default>
Use this command when evaluating Holos.
```bash
k3d cluster create workload \
--port "443:443@loadbalancer" \
--k3s-arg "--disable=traefik@server:0"
```
</TabItem>
<TabItem value="develop" label="Develop" default>
Use this command when developing Holos.
```bash
k3d registry create registry.holos.localhost --port 5100
```
```bash
k3d cluster create workload \
--registry-use k3d-registry.holos.localhost:5100 \
--port "443:443@loadbalancer" \
--k3s-arg "--disable=traefik@server:0"
```
</TabItem>
</Tabs>
Traefik is disabled because Istio provides the same functionality.
## Local CA
Create and apply the `local-ca` Secret containing the CA private key. This
Secret is necessary to issue certificates trusted by your browser when using the
local k3d platform.
```bash
bash ./scripts/local-ca
```
:::note
Admin access is necessary for `mkcert` to install the newly generated CA cert
into your local host's trust store.
:::
## DNS Setup
Configure your localhost to resolve `*.holos.localhost` to your loopback
interface. This is necessary for your browser requests to reach the k3d
workload cluster.
<Tabs>
<TabItem value="macos" label="macOS" default>
```bash
brew install dnsmasq
```
```bash
cat <<EOF >"$(brew --prefix)/etc/dnsmasq.d/holos.localhost.conf"
# Refer to https://holos.run/docs/tutorial/local/k3d/
address=/holos.localhost/127.0.0.1
EOF
```
```bash
if [[ -r /Library/LaunchDaemons/homebrew.mxcl.dnsmasq.plist ]]; then
echo "dnsmasq already configured"
else
sudo cp "$(brew list dnsmasq | grep 'dnsmasq.plist$')" \
/Library/LaunchDaemons/homebrew.mxcl.dnsmasq.plist
sudo launchctl unload /Library/LaunchDaemons/homebrew.mxcl.dnsmasq.plist
sudo launchctl load /Library/LaunchDaemons/homebrew.mxcl.dnsmasq.plist
dscacheutil -flushcache
echo "dnsmasq configured"
fi
```
```bash
sudo mkdir -p /etc/resolver
sudo tee /etc/resolver/holos.localhost <<EOF
domain holos.localhost
nameserver 127.0.0.1
EOF
sudo killall -HUP mDNSResponder
```
</TabItem>
<TabItem value="linux" label="Linux">
[NSS-myhostname](http://man7.org/linux/man-pages/man8/nss-myhostname.8.html)
ships with many Linux distributions and should resolve `*.localhost` names to
`127.0.0.1` automatically.
Otherwise it is installable with:
```bash
sudo apt install libnss-myhostname
```
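You can check resolution with `getent`. The first lookup should succeed on any host; the second (one of the hostnames this guide uses) succeeds only once `*.localhost` resolution is in place:

```shell
# Plain localhost should resolve everywhere via /etc/hosts or NSS.
getent hosts localhost
# Resolves to 127.0.0.1 once nss-myhostname (or dnsmasq) handles *.localhost;
# prints a fallback message otherwise.
getent hosts app.holos.localhost || echo "app.holos.localhost not resolved yet"
```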
</TabItem>
<TabItem value="windows" label="Windows">
Ensure the loopback interface has at least the following names in `C:\windows\system32\drivers\etc\hosts`
```
127.0.0.1 httpbin.holos.localhost argocd.holos.localhost app.holos.localhost
```
</TabItem>
</Tabs>
## Apply the Platform Components
Use `kubectl` to apply each platform component. In production, it's common to
fully automate this process with ArgoCD, but we use `kubectl` in development
and exploration contexts to the same effect.
### Namespaces
```bash
kubectl apply --server-side=true -f ./deploy/clusters/workload/components/namespaces
```
### Custom Resource Definitions
Services are exposed with standard `HTTPRoute` resources from the Gateway API.
```bash
kubectl apply --server-side=true -f ./deploy/clusters/workload/components/gateway-api
kubectl apply --server-side=true -f ./deploy/clusters/workload/components/istio-base
kubectl apply --server-side=true -f ./deploy/clusters/workload/components/argo-crds
```
### Cert Manager
Apply the ClusterIssuer which issues Certificate resources using the local ca.
```bash
kubectl apply --server-side=true -f ./deploy/clusters/workload/components/cert-manager
kubectl apply --server-side=true -f deploy/clusters/workload/components/local-ca
kubectl apply --server-side=true -f deploy/clusters/workload/components/certificates
```
### Istio
```bash
kubectl apply --server-side=true -f ./deploy/clusters/workload/components/istio-cni
kubectl apply --server-side=true -f ./deploy/clusters/workload/components/istiod
kubectl apply --server-side=true -f ./deploy/clusters/workload/components/gateway
```
Verify the Gateway is programmed and the listeners have been accepted:
```bash
kubectl get -n istio-gateways gateway default -o json \
| jq -r '.status.conditions[].message'
```
```txt
Resource accepted
Resource programmed, assigned to service(s) default-istio.istio-gateways.svc.cluster.local:443
```
### httpbin
httpbin is a simple backend service useful for end-to-end testing.
```bash
kubectl apply --server-side=true -f deploy/clusters/workload/components/httpbin-backend
kubectl apply --server-side=true -f deploy/clusters/workload/components/httpbin-routes
```
:::important
Browse to [https://httpbin.holos.localhost/](https://httpbin.holos.localhost/)
to verify end to end connectivity.
:::
### Cookie Secret
Generate a random cookie encryption Secret and apply.
```bash
LC_ALL=C tr -dc A-Za-z0-9 </dev/urandom \
| head -c 32 \
| kubectl create secret generic "authproxy" \
--from-file=cookiesecret=/dev/stdin \
--dry-run=client -o yaml \
| kubectl apply -n istio-gateways -f-
```
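The pipeline above never writes the secret to disk. To sanity-check the generator stage on its own, the same `tr`/`head` combination always yields exactly 32 alphanumeric characters:

```shell
# Same generator stage as above, captured to a variable instead of piped
# into kubectl.
secret="$(LC_ALL=C tr -dc A-Za-z0-9 </dev/urandom | head -c 32)"
# Print the length; expect 32.
echo "${#secret}"
```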
:::tip
The Holos reference platform uses an ExternalSecret to automatically sync this
Secret from your SecretStore.
:::
### Auth Proxy
The auth proxy is responsible for authenticating web browser requests. The auth
proxy provides a standard oidc id token to all services integrated with the
mesh.
```bash
kubectl apply --server-side=true -f deploy/clusters/workload/components/authproxy
kubectl apply --server-side=true -f deploy/clusters/workload/components/authroutes
```
:::important
Verify authentication is working by visiting
[https://httpbin.holos.localhost/holos/authproxy](https://httpbin.holos.localhost/holos/authproxy).
Expect a simple `Authenticated` response.
:::
:::note
Istio will respond with `no healthy upstream` until the pod becomes ready.
:::
Once authenticated, visit
[https://httpbin.holos.localhost/holos/authproxy/userinfo](https://httpbin.holos.localhost/holos/authproxy/userinfo)
which returns a subset of claims from your id token:
```json
{
"user": "275552236589843464",
"email": "demo@holos.run",
"preferredUsername": "demo"
}
```
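The same response can be filtered on the command line. A small sketch using the example payload above, saved to a temporary file:

```shell
# The example userinfo payload from above, written to a scratch file.
cat > /tmp/userinfo.json <<'EOF'
{"user": "275552236589843464", "email": "demo@holos.run", "preferredUsername": "demo"}
EOF
# Extract a single claim from the response.
jq -r .email /tmp/userinfo.json
```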
### Auth Policy
Configure authorization policies using the claims provided in the authenticated
id token.
```bash
kubectl apply --server-side=true -f deploy/clusters/workload/components/authpolicy
```
:::important
Requests to `https://httpbin.holos.localhost` are protected by
AuthorizationPolicy platform resources after applying this component.
:::
### Zero Trust
A basic Zero Trust security model is now in place. Verify authentication is
working by browsing to
[https://httpbin.holos.localhost/dump/request](https://httpbin.holos.localhost/dump/request).
:::note
Istio may take a few seconds to program the Gateway with the
AuthorizationPolicy resources.
:::
:::tip
Note the `x-oidc-id-token` header is not sent by your browser but is received
by the backend service. This design reduces the risk of exposing id tokens.
Requests over the internet are also smaller and more reliable because large id
tokens with many claims are confined to the cluster.
:::
Verify unauthenticated requests are blocked:
```bash
curl https://httpbin.holos.localhost/dump/request
```
Expect a response that redirects to the identity provider.
Verify authenticated requests are allowed:
```bash
curl -H x-oidc-id-token:$(holos token) https://httpbin.holos.localhost/dump/request
```
Expect a response from the backend httpbin service with the id token header the
platform authenticated and authorized.
:::tip
Note how the platform secures both web browser and command line api access to
the backend httpbin service. httpbin itself has no authentication or
authorization functionality.
:::
### ArgoCD
ArgoCD automatically applies resources defined in Git similar to how this guide
uses `kubectl apply`.
Apply controller deployments and supporting resources.
```bash
kubectl apply --server-side=true -f ./deploy/clusters/workload/components/argo-cd
kubectl apply --server-side=true -f ./deploy/clusters/workload/components/argo-authpolicy
kubectl apply --server-side=true -f ./deploy/clusters/workload/components/argo-routes
```
Verify all Pods are running and all containers are ready.
```bash
kubectl get pods -n argocd
```
```txt
NAME READY STATUS RESTARTS AGE
argocd-application-controller-0 1/1 Running 0 10s
argocd-applicationset-controller-578db65fcd-lnn76 1/1 Running 0 10s
argocd-notifications-controller-67c856dbb7-12stk 1/1 Running 0 10s
argocd-redis-698f57d9b9-v4kqs 1/1 Running 0 10s
argocd-redis-secret-init-z5zg8 0/1 Completed 0 10s
argocd-repo-server-69f78dfb8-f6pb7 1/1 Running 0 10s
argocd-server-58f7f4466d-db5fv 2/2 Running 0 10s
```
Browse to [https://argocd.holos.localhost/](https://argocd.holos.localhost/) and
verify you get the ArgoCD login page.
![ArgoCD Login Page](./argocd-login.png)
:::note
Both the platform layer and the ArgoCD application layer perform authentication
and authorization using the same identity provider. Note how the Zero Trust
model provides an additional layer of security without friction.
:::
Login using the SSO button and verify you get to the Applications page.
![ArgoCD Applications](./argocd-apps.png)
### ArgoCD Applications
Apply the Application resources for all of the Holos components that compose the
platform. The Application resources provide drift detection and optional
automatic reconciliation of platform components.
```bash
kubectl apply --server-side=true -f deploy/clusters/workload/gitops
```
Browse to or refresh [https://argocd.holos.localhost/applications](https://argocd.holos.localhost/applications).
![ArgoCD Holos Components](./argocd-apps-2.png)
:::important
If you do not see any applications after refreshing the page ensure the `sub`
value in the Platform Model (`platform.config.json`) is correct and matches
`holos login --print-claims`.
:::
### Sync Applications
Navigate to the [namespaces Application](https://argocd.holos.localhost/applications/argocd/namespaces).
![ArgoCD Out of Sync](./argocd-out-of-sync.png)
Review the differences between the live platform and the git configuration.
![ArgoCD Diff](./argocd-diff.png)
Sync the application to reconcile the differences.
![ArgoCD Sync](./argocd-sync.png)
The Holos components should report Sync OK.
![ArgoCD Sync OK](./argocd-sync-ok.png)
:::tip
Automatic reconciliation is turned off by default.
:::
Optionally enable automatic reconciliation by adding `spec.syncPolicy.automated:
{}` to the `#Argo` definition.
Add the following to `buildplan.site.cue` to avoid `holos generate platform k3d`
writing over the customization.
:::tip
CUE merges definitions located in multiple files. This feature is used to
customize the platform.
:::
```bash
cat <<EOF > buildplan.site.cue
package holos
// Enable automated sync of platform components.
#Argo: Application: spec: syncPolicy: automated: {}
EOF
```
Re-render the platform.
```bash
holos render platform ./platform
```
Add and commit the changes.
```bash
git add .
git commit -m 'enable argocd automatic sync'
git push origin HEAD
```
Apply the new changes.
```bash
kubectl apply --server-side=true -f deploy/clusters/workload/gitops
```
Automatic reconciliation is enabled for all platform components.
![ArgoCD Automatic Sync OK](./argocd-auto-sync-ok.png)
## Summary
In this guide you:
1. Configured the Service Mesh with mTLS.
2. Configured authentication and authorization.
3. Protected a backend service without backend code changes.
4. Deployed ArgoCD to manage the platform via GitOps.

Binary file not shown.

After

Width:  |  Height:  |  Size: 558 KiB

View File

@@ -0,0 +1,17 @@
# Overview
<!-- https://kubernetes.io/docs/contribute/style/diagram-guide/ -->
This tutorial covers the following process of getting started with Holos.
```mermaid
graph LR
A[1. Install <br>holos] -->
B[2. Register <br>account] -->
C[3. Generate <br>platform] -->
D[4. Render <br>platform] -->
E[5. Apply <br>config]
classDef box fill:#fff,stroke:#000,stroke-width:1px,color:#000;
class A,B,C,D,E box
```

View File

@@ -1,16 +1,4 @@
facebook:
label: Facebook
permalink: /facebook
description: Facebook tag description
hello:
label: Hello
permalink: /hello
description: Hello tag description
docusaurus:
label: Docusaurus
permalink: /docusaurus
description: Docusaurus tag description
hola:
label: Hola
permalink: /hola
description: Hola tag description
holos:
label: Holos
permalink: /holos
description: Holos Platform

View File

@@ -4,7 +4,7 @@ import type * as Preset from '@docusaurus/preset-classic';
const config: Config = {
title: 'Holos',
tagline: 'The Cloud Native Platform Distribution',
tagline: 'The Platform Operating System',
favicon: 'img/favicon.ico',
// Set the production url of your site here
@@ -12,6 +12,7 @@ const config: Config = {
// Set the /<baseUrl>/ pathname under which your site is served
// For GitHub pages deployment, it is often '/<projectName>/'
baseUrl: '/',
trailingSlash: true,
// GitHub pages deployment config.
// If you aren't using GitHub pages, you don't need these.
@@ -29,6 +30,12 @@ const config: Config = {
locales: ['en'],
},
// https://docusaurus.io/docs/markdown-features/diagrams
markdown: {
mermaid: true
},
themes: ['@docusaurus/theme-mermaid'],
presets: [
[
'classic',
@@ -60,7 +67,12 @@ const config: Config = {
themeConfig: {
// Replace with your project's social card
image: 'img/docusaurus-social-card.jpg',
image: 'img/holos-social-card.png',
docs: {
sidebar: {
autoCollapseCategories: false,
}
},
navbar: {
title: '',
logo: {
@@ -68,6 +80,12 @@ const config: Config = {
srcDark: 'img/logo-dark.svg',
},
items: [
{
type: 'doc',
docId: 'tutorial/local/k3d',
position: 'left',
label: 'Try Holos',
},
{
type: 'doc',
docId: 'intro',
@@ -101,9 +119,17 @@ const config: Config = {
title: 'Docs',
items: [
{
label: 'Tutorial',
label: 'Try Holos Locally',
to: '/docs/tutorial/local/k3d',
},
{
label: 'Documentation',
to: '/docs/intro',
},
{
label: 'API Reference',
to: '/docs/api/core/v1alpha2',
},
],
},
{
@@ -154,6 +180,10 @@ const config: Config = {
},
],
},
mermaid: {
// Refer to https://mermaid.js.org/config/theming.html
theme: { light: 'neutral', dark: 'dark' },
},
} satisfies Preset.ThemeConfig,
};

File diff suppressed because it is too large

View File

@@ -17,6 +17,7 @@
"dependencies": {
"@docusaurus/core": "3.4.0",
"@docusaurus/preset-classic": "3.4.0",
"@docusaurus/theme-mermaid": "^3.4.0",
"@mdx-js/react": "^3.0.0",
"clsx": "^2.0.0",
"prism-react-renderer": "^2.3.0",
@@ -28,6 +29,7 @@
"@docusaurus/tsconfig": "^3.4.0",
"@docusaurus/types": "^3.4.0",
"@wcj/html-to-markdown-cli": "^2.1.1",
"cspell": "^8.10.4",
"html-to-markdown": "^1.0.0",
"typescript": "~5.2.2"
},

View File

@@ -16,15 +16,24 @@ const sidebars: SidebarsConfig = {
{
type: 'category',
label: 'Tutorial',
collapsed: false,
items: [
'tutorial/start',
'tutorial/register',
'tutorial/local/k3d',
],
},
{
type: 'category',
label: 'Reference Platform',
collapsed: false,
items: [
'reference-platform/architecture',
],
},
'glossary',
],
api: [
'api/core/v1alpha2',
'cli',
'api/core/v1alpha2'
],
};

View File

@@ -10,38 +10,40 @@ type FeatureItem = {
const FeatureList: FeatureItem[] = [
{
title: 'Easy to Use',
Svg: require('@site/static/img/undraw_docusaurus_mountain.svg').default,
title: 'Zero Trust Security',
Svg: require('@site/static/img/base00/undraw_security_on_re_e491.svg').default,
description: (
<>
Docusaurus was designed from the ground up to be easily installed and
used to get your website up and running quickly.
Spend more time on your business features and less time rebuilding
authentication and authorization. Holos provides zero trust security
with no code needed to protect your services.
</>
),
},
{
title: 'Focus on What Matters',
Svg: require('@site/static/img/undraw_docusaurus_tree.svg').default,
title: 'Multi-Cloud',
Svg: require('@site/static/img/base00/undraw_cloud_hosting_7xb1.svg').default,
description: (
<>
Docusaurus lets you focus on your docs, and we&apos;ll do the chores. Go
ahead and move your docs into the <code>docs</code> directory.
Avoid vendor lock-in, downtime, and price hikes. Holos is designed to
easily deploy workloads into multiple clouds and multiple regions.
</>
),
},
{
title: 'Powered by React',
Svg: require('@site/static/img/undraw_docusaurus_react.svg').default,
title: 'Developer Portal',
Svg: require('@site/static/img/base00/undraw_data_trends_re_2cdy.svg').default,
description: (
<>
Extend or customize your website layout by reusing React. Docusaurus can
be extended while reusing the same header and footer.
Ship high quality code quickly, provide a great developer experience,
and maintain control over your infrastructure with the integrated
Backstage developer portal.
</>
),
},
];
function Feature({title, Svg, description}: FeatureItem) {
function Feature({ title, Svg, description }: FeatureItem) {
return (
<div className={clsx('col col--4')}>
<div className="text--center">

View File

@@ -6,29 +6,34 @@
/* You can override the default Infima variables here. */
:root {
--ifm-link-color: #268bd2;
--docusaurus-highlighted-code-line-bg: #eee8d5;
/* Solarized Base03 */
--ifm-color-primary: #002b36;
/* Solarized Base3 */
--ifm-color-primary-light-background: #fdf6e3;
/* Solarized Base02 */
--ifm-color-primary-dark: #073642;
/* Solarized Base00 */
--ifm-color-primary-dark: #657b83;
/* Solarized Base01 */
--ifm-color-primary-darker: #586e75;
/* Solarized Base00 */
--ifm-color-primary-darkest: #657b83;
/* Solarized Base2 */
--ifm-color-primary-light: #eee8d5;
/* Solarized Base02 */
--ifm-color-primary-darkest: #073642;
/* Solarized Base0 */
--ifm-color-primary-light: #839496;
/* Solarized Base1 */
--ifm-color-primary-lighter: #93a1a1;
/* Solarized Base0 */
--ifm-color-primary-lightest: #839496;
/* Solarized Base2 */
--ifm-color-primary-lightest: #eee8d5;
--ifm-code-font-size: 95%;
--docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.1);
}
/* For readability concerns, you should choose a lighter palette in dark mode. */
[data-theme='dark'] {
--ifm-link-color: #268bd2;
--docusaurus-highlighted-code-line-bg: #073642;
/* Solarized Base3 */
--ifm-color-primary: #fdf6e3;
/* Solarized Base03 */
@@ -47,5 +52,4 @@
/* Solarized Base0 */
--ifm-color-primary-lightest: #839496;
--ifm-code-font-size: 95%;
--docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.3);
}

View File

@@ -16,6 +16,11 @@ function HomepageHeader() {
{siteConfig.title}
</Heading>
<p className="hero__subtitle">{siteConfig.tagline}</p>
<p className="projectDesc">
Holos is a holistic software development platform built from the most
popular open source projects.<br /> Build your developer platform in
no time.
</p>
<div className={styles.buttons}>
<Link
className="button button--secondary button--lg"

File diff suppressed because one or more lines are too long


View File

@@ -1,29 +0,0 @@
// Package website embeds the docs website for the server subcommand. Docs are
// served at /docs similar to how the ui is served at /ui.
package website
// DISABLED go:generate rm -rf build
// DISABLED go:generate mkdir build
// DISABLED go:generate npm run build
// DISABLED go:generate touch $GOFILE
import (
"embed"
"io/fs"
)
// Output must be the relative path to where the build tool places the static
// site index.html file.
const OutputPath = "build"
//go:embed all:build
var Dist embed.FS
// Root returns the static site root directory.
func Root() fs.FS {
sub, err := fs.Sub(Dist, OutputPath)
if err != nil {
panic(err)
}
return sub
}

hack/cspell Executable file
View File

@@ -0,0 +1,7 @@
#! /bin/bash
#
set -euo pipefail
TOPLEVEL="$(cd $(dirname "$0") && git rev-parse --show-toplevel)"
cd "${TOPLEVEL}" && npx cspell ./doc/md/**/*.{md,mdx,markdown}

View File

@@ -1,8 +0,0 @@
FROM 271053619184.dkr.ecr.us-east-2.amazonaws.com/holos-run/container-images/debian:bullseye AS final
USER root
WORKDIR /app
ADD bin bin
RUN chown -R app: /app
# Kubernetes requires the user to be numeric
USER 8192
ENTRYPOINT bin/holos server

View File

@@ -1,21 +0,0 @@
#! /bin/bash
set -euo pipefail
PARENT="$(cd $(dirname "$0") && pwd)"
# If necessary
if [[ -s "${PARENT}/aws-login.last" ]]; then
last="$(<"${PARENT}/aws-login.last")"
now="$(date +%s)"
if [[ $(( now - last )) -lt 28800 ]]; then
echo "creds are still valid" >&2
exit 0
fi
fi
aws sso logout
aws sso login
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin "${AWS_ACCOUNT}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com"
# Touch a file so tilt docker_build can watch it as a dep
date +%s > "${PARENT}/aws-login.last"

View File

@@ -1,7 +0,0 @@
[profile dev-holos]
sso_account_id = 271053619184
sso_role_name = AdministratorAccess
sso_start_url = https://openinfrastructure.awsapps.com/start
sso_region = us-east-2
region = us-east-2
output = json

View File

@@ -3,7 +3,9 @@
set -euo pipefail
TOPLEVEL="$(cd $(dirname "$0")/.. && pwd)"
export NAMESPACE="${USER}-holos"
echo "Local development assumes a k3d-workload local cluster exists." >&2
echo "Refer to https://holos.run/docs/tutorial/local/k3d" >&2
kubectl config view --minify --context=k3d-workload --flatten > "${TOPLEVEL}/kubeconfig"
export KUBECONFIG="${TOPLEVEL}/kubeconfig"
envsubst < "${KUBECONFIG}.template" > "${KUBECONFIG}"
export TILT_WRAPPER=1
exec tilt "$@"

View File

@@ -1,153 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: container-registry
app.kubernetes.io/instance: holos-system-ecr
app.kubernetes.io/name: holos-system-ecr
app.kubernetes.io/part-of: holos
name: holos-system-ecr
namespace: holos-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: container-registry
app.kubernetes.io/instance: holos-system-ecr
app.kubernetes.io/name: holos-system-ecr
app.kubernetes.io/part-of: holos
name: holos-system-ecr
rules:
- apiGroups:
- ""
resources:
- secrets
- namespaces
verbs:
- list
- apiGroups:
- ""
resourceNames:
- holos-system-ecr-image-pull-creds
resources:
- secrets
verbs:
- '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: container-registry
app.kubernetes.io/instance: holos-system-ecr
app.kubernetes.io/name: holos-system-ecr
app.kubernetes.io/part-of: holos
name: holos-system-ecr
namespace: holos-system
roleRef:
kind: ClusterRole
name: holos-system-ecr
subjects:
- kind: ServiceAccount
name: holos-system-ecr
namespace: holos-system
---
apiVersion: v1
data:
refresh.sh: |-
#! /bin/bash
tmpdir="$(mktemp -d)"
finish() {
rm -rf "${tmpdir}"
}
trap finish EXIT
set -euo pipefail
aws sts assume-role-with-web-identity \
--role-arn ${AWS_ROLE_ARN} \
--role-session-name CronJob \
--web-identity-token file:///run/secrets/irsa/serviceaccount/token \
> "${tmpdir}/creds.json"
export AWS_ACCESS_KEY_ID=$(jq -r .Credentials.AccessKeyId "${tmpdir}/creds.json")
export AWS_SECRET_ACCESS_KEY=$(jq -r .Credentials.SecretAccessKey "${tmpdir}/creds.json")
export AWS_SESSION_TOKEN=$(jq -r .Credentials.SessionToken "${tmpdir}/creds.json")
set -x
aws ecr get-login-password --region ${AWS_REGION} \
| docker login --username AWS --password-stdin ${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com
kubectl create secret docker-registry 'holos-system-ecr-image-pull-creds' \
--from-file=.dockerconfigjson=${HOME}/.docker/config.json \
--dry-run=client -o yaml \
> "${tmpdir}/secret.yaml"
# Get namespaces one per line
kubectl -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' get namespaces > ${tmpdir}/namespaces.txt
# Copy the secret to all namespaces
for ns in $(grep -vE '^gke-|^kube-|^gmp-' ${tmpdir}/namespaces.txt); do
echo "---" >> "${tmpdir}/secretlist.yaml"
kubectl --dry-run=client -o yaml -n $ns apply -f "${tmpdir}/secret.yaml" >> "${tmpdir}/secretlist.yaml"
done
kubectl apply --server-side=true -f "${tmpdir}/secretlist.yaml"
kind: ConfigMap
metadata:
labels:
app.kubernetes.io/component: image-pull-secret
app.kubernetes.io/instance: holos-system-ecr
app.kubernetes.io/name: refresher
app.kubernetes.io/part-of: holos
name: holos-system-ecr
namespace: holos-system
---
apiVersion: batch/v1
kind: CronJob
metadata:
labels:
app.kubernetes.io/component: container-registry
app.kubernetes.io/instance: holos-system-ecr
app.kubernetes.io/name: holos-system-ecr
app.kubernetes.io/part-of: holos
name: holos-system-ecr
namespace: holos-system
spec:
schedule: 0 */4 * * *
jobTemplate:
spec:
template:
spec:
containers:
- command:
- bash
- /app/scripts/refresh.sh
env:
- name: AWS_ACCOUNT
value: "271053619184"
- name: AWS_REGION
value: us-east-2
- name: AWS_ROLE_ARN
value: arn:aws:iam::271053619184:role/ImagePull
image: quay.io/holos/toolkit:latest
imagePullPolicy: Always
name: toolkit
resources:
limits:
cpu: 50m
memory: 64Mi
requests:
cpu: 50m
memory: 64Mi
volumeMounts:
- mountPath: /app/scripts
name: scripts
- mountPath: /run/secrets/irsa/serviceaccount
name: irsa
restartPolicy: OnFailure
serviceAccountName: holos-system-ecr
volumes:
- configMap:
name: holos-system-ecr
name: scripts
- name: irsa
projected:
sources:
- serviceAccountToken:
path: "token"
audience: "irsa"
expirationSeconds: 3600

View File

@@ -1,5 +0,0 @@
#! /bin/bash
#
set -euo pipefail
cp "${KUBECONFIG}.template" "${KUBECONFIG}"
kubectl config set-context --current --namespace "${NAMESPACE}"

View File

@@ -1,226 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: '{name}'
namespace: '{namespace}'
labels:
app: '{name}'
holos.run/developer: '{developer}'
spec:
selector:
matchLabels:
app: '{name}'
template:
metadata:
labels:
app: '{name}'
holos.run/developer: '{developer}'
sidecar.istio.io/inject: 'true'
spec:
serviceAccountName: holos
containers:
- name: holos
image: holos # Tilt appends a tilt-* tag for the built docker image
# args are configured in the Tiltfile
env:
- name: GOMAXPROCS
value: '1'
- name: TZ
value: '{tz}'
- name: SHUTDOWN_DELAY
value: '0'
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: holos-pguser-holos
key: uri
ports:
- name: http
containerPort: {listen_port}
protocol: TCP
resources:
requests:
cpu: 250m
memory: 100Mi
limits:
cpu: 1000m
memory: 200Mi
---
apiVersion: v1
kind: Service
metadata:
name: '{name}'
namespace: '{namespace}'
labels:
app: '{name}'
holos.run/developer: '{developer}'
spec:
type: ClusterIP
selector:
app: '{name}'
ports:
- name: http
port: {listen_port}
appProtocol: http2
protocol: TCP
targetPort: {listen_port}
- name: metrics
port: {metrics_port}
appProtocol: http
protocol: TCP
targetPort: {metrics_port}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: '{name}'
namespace: '{namespace}'
labels:
app: '{name}'
holos.run/developer: '{developer}'
spec:
endpoints:
- port: metrics
path: /metrics
interval: 15s
selector:
matchLabels:
app: '{name}'
holos.run/developer: '{developer}'
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: '{name}'
namespace: '{namespace}'
labels:
app: '{name}'
holos.run/developer: '{developer}'
spec:
gateways:
- istio-ingress/default
hosts:
- '{developer}.app.dev.k2.holos.run'
http:
- name: "coffee-ui"
match:
- uri:
prefix: "/ui"
route:
- destination:
host: coffee
port:
number: 4200
- name: "holos-api"
route:
- destination:
host: '{name}'
port:
number: {listen_port}
---
apiVersion: v1
kind: Service
metadata:
name: coffee
spec:
ports:
- protocol: TCP
port: 4200
---
apiVersion: v1
kind: Endpoints
metadata:
name: coffee
subsets:
- addresses:
- ip: 192.168.2.21
ports:
- port: 4200
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: holos
namespace: '{namespace}'
labels:
app: '{name}'
holos.run/developer: '{developer}'
imagePullSecrets:
- name: kube-system-ecr-image-pull-creds
---
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PGAdmin
metadata:
name: 'pgadmin'
namespace: '{namespace}'
labels:
holos.run/developer: '{developer}'
spec:
serverGroups:
- name: holos
postgresClusterSelector:
matchLabels:
holos.run/developer: '{developer}'
dataVolumeClaimSpec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: 1Gi
---
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
name: 'holos'
namespace: '{namespace}'
labels:
holos.run/developer: '{developer}'
spec:
image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.1-0
postgresVersion: 16
users:
- name: holos
databases:
- holos
options: 'SUPERUSER'
- name: '{developer}'
databases:
- holos
- '{developer}'
options: 'SUPERUSER'
# https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/user-management
instances:
- name: db
dataVolumeClaimSpec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: 1Gi
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: kubernetes.io/hostname
labelSelector:
matchLabels:
postgres-operator.crunchydata.com/cluster: '{name}'
backups:
pgbackrest:
image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.47-2
# https://github.com/CrunchyData/postgres-operator/issues/2531#issuecomment-1713676019
global:
archive-async: "y"
archive-push-queue-max: "100MiB"
spool-path: "/pgdata/backups"
repos:
- name: repo1
volume:
volumeClaimSpec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: 1Gi

View File

@@ -0,0 +1,74 @@
---
# Source: CUE apiObjects.Deployment.holos
metadata:
name: holos
namespace: dev-holos
labels:
app.holos.run/environment: dev
app.holos.run/name: holos
app.holos.run/component: app
app.kubernetes.io/component: server
render.holos.run/component: dev-holos-app
spec:
selector:
matchLabels:
app.kubernetes.io/component: server
template:
metadata:
labels:
app.holos.run/environment: dev
app.holos.run/name: holos
app.holos.run/component: app
app.kubernetes.io/component: server
sidecar.istio.io/inject: "true"
render.holos.run/component: dev-holos-app
spec:
serviceAccountName: holos
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: holos
image: k3d-registry.holos.localhost:5100/holos:latest
imagePullPolicy: IfNotPresent
command:
- /app/bin/holos
- server
- --log-format=json
- --oidc-issuer=https://login.holos.run
- --oidc-audience=275571128859132936
env:
- name: TZ
value: '{tz}'
- name: GOMAXPROCS
value: '1'
- name: SHUTDOWN_DELAY
value: '0'
- name: DATABASE_URL
valueFrom:
secretKeyRef:
key: uri
name: holos-pguser-holos
ports:
- containerPort: 3000
name: http
protocol: TCP
securityContext:
capabilities:
drop:
- ALL
runAsNonRoot: true
allowPrivilegeEscalation: false
resources:
limits:
cpu: "0.5"
memory: 512Mi
requests:
cpu: "0.5"
memory: 512Mi
strategy:
rollingUpdate:
maxUnavailable: 0
maxSurge: 1
kind: Deployment
apiVersion: apps/v1

View File

@@ -0,0 +1,27 @@
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
labels:
app.kubernetes.io/name: authpolicy-allow-app
app.kubernetes.io/part-of: default-gateway
name: authpolicy-allow-app
namespace: istio-gateways
spec:
action: ALLOW
rules:
- to:
- operation:
hosts:
- app.holos.localhost
- app.holos.localhost:*
when:
- key: request.auth.principal
values:
- https://login.holos.run/*
- key: request.auth.audiences
values:
- 270319630705329162@holos_platform
- "275571128859132936"
selector:
matchLabels:
istio.io/gateway-name: default

View File

@@ -0,0 +1,56 @@
---
# Source: CUE apiObjects.PostgresCluster.holos
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
name: holos
namespace: dev-holos
labels:
app.holos.run/environment: dev
app.holos.run/name: holos
app.holos.run/component: infra
render.holos.run/component: dev-holos-infra
annotations: {}
spec:
image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.1-0
instances:
- affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
postgres-operator.crunchydata.com/cluster: holos
topologyKey: topology.kubernetes.io/zone
weight: 1
dataVolumeClaimSpec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
name: db
replicas: 1
port: 5432
postgresVersion: 16
users:
- databases:
- holos
name: holos
options: SUPERUSER
backups:
pgbackrest:
global:
archive-async: "y"
archive-push-queue-max: 100MiB
spool-path: /pgdata/backups
image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.47-2
repos:
- name: repo1
volume:
volumeClaimSpec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi

View File

@@ -0,0 +1,28 @@
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
labels:
app: holos
name: holos
namespace: istio-gateways
spec:
hostnames:
- app.holos.localhost
parentRefs:
- group: gateway.networking.k8s.io
kind: Gateway
name: default
namespace: istio-gateways
rules:
- backendRefs:
- group: ""
kind: Service
name: holos
namespace: dev-holos
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /

View File

@@ -0,0 +1,49 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: holos
namespace: dev-holos
labels:
app.holos.run/environment: dev
app.holos.run/name: holos
---
# Source: CUE apiObjects.Service.holos
apiVersion: v1
metadata:
name: holos
namespace: dev-holos
labels:
app.holos.run/environment: dev
app.holos.run/name: holos
annotations: {}
spec:
type: ClusterIP
selector:
app.kubernetes.io/component: server
ports:
- appProtocol: http2
name: http
port: 3000
protocol: TCP
targetPort: 3000
- appProtocol: http
name: metrics
port: 9090
protocol: TCP
targetPort: 9090
kind: Service
---
# Source: CUE apiObjects.ReferenceGrant.istio-gateways
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
name: istio-gateways
namespace: dev-holos
spec:
from:
- group: gateway.networking.k8s.io
kind: HTTPRoute
namespace: istio-gateways
to:
- group: ""
kind: Service

View File

@@ -1,45 +0,0 @@
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJpVENDQVRDZ0F3SUJBZ0lSQU9TenlHd2VMK3N4NjVvckVCTXV1c293Q2dZSUtvWkl6ajBFQXdJd0ZURVQKTUJFR0ExVUVDaE1LYTNWaVpYSnVaWFJsY3pBZUZ3MHlOREF5TVRNd05UQTRNRFJhRncwek5EQXlNVEF3TlRBNApNRFJhTUJVeEV6QVJCZ05WQkFvVENtdDFZbVZ5Ym1WMFpYTXdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CCkJ3TkNBQVREWUluR09EN2ZpbFVIeXNpZG1ac2Vtd2liTk9hT1A5ZzVJT1VsTkllUHZ1Y01ZV01aNWNkZXpVQmIKMGh4Zm1WYXR0QWxpcnorMlFpVld5by9WZFNsOG8yRXdYekFPQmdOVkhROEJBZjhFQkFNQ0FvUXdIUVlEVlIwbApCQll3RkFZSUt3WUJCUVVIQXdFR0NDc0dBUVVGQndNQ01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0hRWURWUjBPCkJCWUVGTGVtcEhSM25lVXYvSUc1WWpwempDbWUydmIyTUFvR0NDcUdTTTQ5QkFNQ0EwY0FNRVFDSUNZajRsNUgKL043OG5UcnJxQzMxWjlsY0lpODEwcno5N3JIdUJnWFZZUkxBQWlBNHVEc0YyNEI5aGV3WklUbWEwaHpCMjNOdQpwZnprTWV5VzZHV2U2RWh4NGc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://k2.core.ois.run:6443
name: k2
contexts:
- context:
cluster: k2
namespace: default
user: admin@k2
name: admin@k2
- context:
cluster: k2
namespace: ${NAMESPACE}
user: oidc
name: sso@k2
current-context: sso@k2
kind: Config
preferences: {}
users:
- name: admin@k2
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJoRENDQVNxZ0F3SUJBZ0lRVXZKTlEvV0Ewalg5RXF6ZElIMFA4ekFLQmdncWhrak9QUVFEQWpBVk1STXcKRVFZRFZRUUtFd3ByZFdKbGNtNWxkR1Z6TUI0WERUSTBNRE14TVRJek1UY3hPVm9YRFRJMU1ETXhNVEl6TVRjeQpPVm93S1RFWE1CVUdBMVVFQ2hNT2MzbHpkR1Z0T20xaGMzUmxjbk14RGpBTUJnTlZCQU1UQldGa2JXbHVNRmt3CkV3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFNjZrMStQb1l5OHlPWTZkRFR5MHJYRTUvRlZJVU0rbkcKNEVzSXZxOHBuZ2lVRWRkeTdYM3hvZ2E5d2NSZy8xeVZ4Q2FNbzBUVEZveXkxaVZMMWxGWDNLTklNRVl3RGdZRApWUjBQQVFIL0JBUURBZ1dnTUJNR0ExVWRKUVFNTUFvR0NDc0dBUVVGQndNQ01COEdBMVVkSXdRWU1CYUFGTGVtCnBIUjNuZVV2L0lHNVlqcHpqQ21lMnZiMk1Bb0dDQ3FHU000OUJBTUNBMGdBTUVVQ0lDaDVGTWlXV3hxVHYyc0wKQVdvQ2lxaWJ0OUNUMnpsNzRlSTllMEZPTzRKTkFpRUF5T0wwR3RxVnlTSHUzbUsvVDBxZFhYQ3dmdHdWQVE4cgo2ejJWaVZrMzg2dz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSURtdTh0UGVrRmhlNzRXWm5idXlwOFZ1VUIxTVYwcTN4QklOclVVbjBaRjVvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFNjZrMStQb1l5OHlPWTZkRFR5MHJYRTUvRlZJVU0rbkc0RXNJdnE4cG5naVVFZGR5N1gzeApvZ2E5d2NSZy8xeVZ4Q2FNbzBUVEZveXkxaVZMMWxGWDNBPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
- name: oidc
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
args:
- oidc-login
- get-token
- --oidc-issuer-url=https://login.ois.run
- --oidc-client-id=261774567918339420@holos_platform
- --oidc-extra-scope=openid
- --oidc-extra-scope=email
- --oidc-extra-scope=profile
- --oidc-extra-scope=groups
- --oidc-extra-scope=offline_access
- --oidc-extra-scope=urn:zitadel:iam:org:domain:primary:openinfrastructure.co
- --oidc-use-pkce
command: kubectl
env: null
interactiveMode: IfAvailable
provideClusterInfo: false

View File

@@ -1,33 +0,0 @@
#! /bin/bash
#
tmpdir="$(mktemp -d)"
finish() {
rm -rf "$tmpdir"
}
trap finish EXIT
set -euo pipefail
umask 077
kubectl -n "dev-${USER}" get secret "${USER}.holos-server-db.credentials.postgresql.acid.zalan.do" -o json > "${tmpdir}/creds.json"
if [[ -f ~/.pgpass ]]; then
(grep -v "^localhost:14126:holos:${USER}:" ~/.pgpass || true) > "${tmpdir}/pgpass"
fi
PGUSER="$(jq -r '.data | map_values(@base64d) | .username' ${tmpdir}/creds.json)"
PGPASSWORD="$(jq -r '.data | map_values(@base64d) | .password' ${tmpdir}/creds.json)"
echo "${PGHOST}:${PGPORT}:${PGDATABASE}:${PGUSER}:${PGPASSWORD}" >> "${tmpdir}/pgpass"
cp "${tmpdir}/pgpass" ~/.pgpass
echo "updated: ${HOME}/.pgpass" >&2
cat <<EOF >&2
## Connect from a localhost shell through the port forward to the cluster
export PGHOST=${PGHOST}
export PGPORT=${PGPORT}
export PGDATABASE=${PGDATABASE}
export PGUSER=${PGUSER}
psql -c '\conninfo'
EOF
psql --host=${PGHOST} --port=${PGPORT} ${PGDATABASE} -c '\conninfo'

View File

@@ -12,5 +12,5 @@
</style><link rel="stylesheet" href="styles-IHLR3ZBD.css" media="print" onload="this.media='all'"><noscript><link rel="stylesheet" href="styles-IHLR3ZBD.css"></noscript><link rel="modulepreload" href="chunk-EYHLAWIE.js"></head>
<body class="mat-typography">
<app-root></app-root>
<script src="polyfills-A7MJM4D4.js" type="module"></script><script src="main-E473TGC2.js" type="module"></script></body>
<script src="polyfills-A7MJM4D4.js" type="module"></script><script src="main-PZVU2IPA.js" type="module"></script></body>
</html>

View File

@@ -7,7 +7,6 @@
<span>Menu</span>
</mat-toolbar>
<mat-nav-list>
<a mat-list-item routerLink="/home" routerLinkActive="active-link">Home</a>
<a mat-list-item routerLink="/platforms" routerLinkActive="active-link">Platforms</a>
</mat-nav-list>
</mat-sidenav>

View File

@@ -59,6 +59,7 @@ func GeneratePlatform(ctx context.Context, rpc *client.Client, orgID string, nam
rpcPlatform = p
break
}
log.DebugContext(ctx, "checking platform", "want", name, "have", p.GetName())
}
if rpcPlatform == nil {
return errors.Wrap(errors.New("cannot generate: platform not found in the holos server"))

View File

@@ -195,3 +195,15 @@ _Selector: #Selector
status: component: string
}
// Customize the istio sidecar proxy. The default resource limit is 2 cpu and
// 1Gi ram, which quickly exhausts an EKS cluster node pool.
#IstioProxy: corev1.#Container & {
name: "istio-proxy"
image: "auto"
resources: limits: {
cpu: "100m"
memory: "128Mi"
}
resources: requests: resources.limits
}

View File

@@ -0,0 +1,15 @@
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
db.sqlite3
__pycache__
.venv
.env
.terraform
.terraform.lock.hcl
tfplan
vendor/

View File

@@ -0,0 +1,271 @@
package holos
import (
"encoding/yaml"
core "github.com/holos-run/holos/api/core/v1alpha2"
kc "sigs.k8s.io/kustomize/api/types"
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
appsv1 "k8s.io/api/apps/v1"
rbacv1 "k8s.io/api/rbac/v1"
gwv1 "gateway.networking.k8s.io/gateway/v1"
hrv1 "gateway.networking.k8s.io/httproute/v1"
rgv1 "gateway.networking.k8s.io/referencegrant/v1beta1"
ra "security.istio.io/requestauthentication/v1"
ap "security.istio.io/authorizationpolicy/v1"
is "cert-manager.io/issuer/v1"
ci "cert-manager.io/clusterissuer/v1"
certv1 "cert-manager.io/certificate/v1"
ss "external-secrets.io/secretstore/v1beta1"
es "external-secrets.io/externalsecret/v1beta1"
pc "postgres-operator.crunchydata.com/postgrescluster/v1beta1"
app "argoproj.io/application/v1alpha1"
cpv1 "pkg.crossplane.io/provider/v1"
cpdrcv1beta1 "pkg.crossplane.io/deploymentruntimeconfig/v1beta1"
cpfuncv1beta1 "pkg.crossplane.io/function/v1beta1"
cpawspcv1beta1 "aws.upbound.io/providerconfig/v1beta1"
)
// #Resources represents kubernetes api objects output along side a build plan.
// These resources are defined directly within CUE.
#Resources: {
[Kind=string]: [NAME=string]: {
kind: Kind
metadata: name: string | *NAME
}
Namespace: [string]: corev1.#Namespace
ServiceAccount: [string]: corev1.#ServiceAccount
ConfigMap: [string]: corev1.#ConfigMap
Service: [string]: corev1.#Service
Deployment: [string]: appsv1.#Deployment
Job: [string]: batchv1.#Job
CronJob: [string]: batchv1.#CronJob
ClusterRole: [string]: rbacv1.#ClusterRole
ClusterRoleBinding: [string]: rbacv1.#ClusterRoleBinding
Role: [string]: rbacv1.#Role
RoleBinding: [string]: rbacv1.#RoleBinding
Issuer: [string]: is.#Issuer
ClusterIssuer: [string]: ci.#ClusterIssuer
Certificate: [string]: certv1.#Certificate
SecretStore: [string]: ss.#SecretStore
ExternalSecret: [string]: es.#ExternalSecret
HTTPRoute: [string]: hrv1.#HTTPRoute
ReferenceGrant: [string]: rgv1.#ReferenceGrant
PostgresCluster: [string]: pc.#PostgresCluster
RequestAuthentication: [string]: ra.#RequestAuthentication
AuthorizationPolicy: [string]: ap.#AuthorizationPolicy
Gateway: [string]: gwv1.#Gateway & {
spec: gatewayClassName: string | *"istio"
}
// Crossplane resources
DeploymentRuntimeConfig: [string]: cpdrcv1beta1.#DeploymentRuntimeConfig
Provider: [string]: cpv1.#Provider
Function: [string]: cpfuncv1beta1.#Function
ProviderConfig: [string]: cpawspcv1beta1.#ProviderConfig
}
#ReferenceGrant: rgv1.#ReferenceGrant & {
spec: from: [{
group: "gateway.networking.k8s.io"
kind: "HTTPRoute"
namespace: #IstioGatewaysNamespace
}]
spec: to: [{
group: ""
kind: "Service"
}]
}
// #Helm represents a holos build plan composed of one helm chart.
#Helm: {
// Name represents the holos component name
Name: string
Version: string
Namespace: string
Resources: #Resources
Repo: {
name: string | *""
url: string | *""
}
Values: {...}
Chart: core.#HelmChart & {
metadata: name: string | *Name
metadata: namespace: string | *Namespace
chart: name: string | *Name
chart: release: chart.name
chart: version: string | *Version
chart: repository: Repo
// Render the values to yaml for holos to provide to helm.
valuesContent: yaml.Marshal(Values)
// Kustomize post-processor
if EnableKustomizePostProcessor == true {
// resourcesFile represents the file helm output is written to and
// kustomize reads from. Typically "resources.yaml" but referenced as a
// constant to ensure the holos cli uses the same file.
kustomize: resourcesFile: core.#ResourcesFile
// kustomizeFiles represents the files in a kustomize directory tree.
kustomize: kustomizeFiles: core.#FileContentMap
for FileName, Object in KustomizeFiles {
kustomize: kustomizeFiles: "\(FileName)": yaml.Marshal(Object)
}
}
apiObjectMap: (#APIObjects & {apiObjects: Resources}).apiObjectMap
}
// EnableKustomizePostProcessor processes helm output with kustomize if true.
EnableKustomizePostProcessor: true | *false
// KustomizeFiles represents additional files to include in a Kustomization
// resources list. Useful to patch helm output. The implementation is a
// struct with filename keys and structs as values. Holos encodes the struct
// value to yaml then writes the result to the filename key. Component
// authors may then reference the filename in the kustomization.yaml resources
// or patches lists.
// Requires EnableKustomizePostProcessor: true.
KustomizeFiles: {
// Embed KustomizeResources
KustomizeResources
// The kustomization.yaml file must be included for kustomize to work.
"kustomization.yaml": kc.#Kustomization & {
apiVersion: "kustomize.config.k8s.io/v1beta1"
kind: "Kustomization"
resources: [core.#ResourcesFile, for FileName, _ in KustomizeResources {FileName}]
patches: [for x in KustomizePatches {x}]
}
}
// KustomizePatches represents patches to apply to the helm output. Requires
// EnableKustomizePostProcessor: true.
KustomizePatches: [ArbitraryLabel=string]: kc.#Patch
// KustomizeResources represents additional resources files to include in the
// kustomize resources list.
KustomizeResources: [FileName=string]: {...}
// Output represents the build plan provided to the holos cli.
Output: #BuildPlan & {
_Name: Name
_Namespace: Namespace
spec: components: helmChartList: [Chart]
}
}
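The KustomizeFiles mechanism above takes a struct of filename keys, encodes each value, and writes the result to the named file. A rough Python sketch of that encode-and-key step, with JSON standing in for YAML so it needs only the standard library (the filenames and contents are hypothetical):

```python
import json

# Hypothetical file map mirroring the KustomizeFiles struct above:
# filename keys with structured values. JSON stands in for YAML here;
# Holos itself emits YAML.
kustomize_files = {
    "kustomization.yaml": {
        "apiVersion": "kustomize.config.k8s.io/v1beta1",
        "kind": "Kustomization",
        "resources": ["resources.yaml", "patch.yaml"],
    },
    "patch.yaml": {"kind": "Deployment", "metadata": {"name": "argocd-server"}},
}

# Encode each struct value; each encoded string would then be written to
# the file named by its key in the component's kustomize directory.
encoded = {name: json.dumps(obj, indent=2) for name, obj in kustomize_files.items()}
```

Component authors can then reference `patch.yaml` from the `kustomization.yaml` resources or patches lists, as the comment above describes.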
// #Kustomize represents a holos build plan composed of one kustomize build.
#Kustomize: {
// Name represents the holos component name
Name: string
Kustomization: core.#KustomizeBuild & {
metadata: name: string | *Name
}
// Output represents the build plan provided to the holos cli.
Output: #BuildPlan & {
_Name: Name
spec: components: kustomizeBuildList: [Kustomization]
}
}
// #Kubernetes represents a holos build plan composed of inline kubernetes api
// objects.
#Kubernetes: {
// Name represents the holos component name
Name: string
Namespace: string
Resources: #Resources
// Output represents the build plan provided to the holos cli.
Output: #BuildPlan & {
_Name: Name
_Namespace: Namespace
// resources is a map, unlike other build plans which use a list.
spec: components: resources: "\(Name)": {
metadata: name: Name
metadata: namespace: Namespace
apiObjectMap: (#APIObjects & {apiObjects: Resources}).apiObjectMap
}
}
}
#BuildPlan: core.#BuildPlan & {
_Name: string
_Namespace?: string
let NAME = "gitops/\(_Name)"
// Render the ArgoCD Application for GitOps.
spec: components: resources: (NAME): {
metadata: name: NAME
if _Namespace != _|_ {
metadata: namespace: _Namespace
}
deployFiles: (#Argo & {ComponentName: _Name}).deployFiles
}
}
// #Argo represents an argocd Application resource for each component, written
// using the #HolosComponent.deployFiles field.
#Argo: {
ComponentName: string
Application: app.#Application & {
metadata: name: ComponentName
metadata: namespace: "argocd"
spec: {
destination: server: "https://kubernetes.default.svc"
project: "default"
source: {
path: "\(_Platform.Model.argocd.deployRoot)/deploy/clusters/\(_ClusterName)/components/\(ComponentName)"
repoURL: _Platform.Model.argocd.repoURL
targetRevision: _Platform.Model.argocd.targetRevision
}
}
}
// deployFiles represents the output files to write alongside the component.
deployFiles: "clusters/\(_ClusterName)/gitops/\(ComponentName).application.gen.yaml": yaml.Marshal(Application)
}
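The #Argo definition above derives both the Application resource and its deployFiles path from the component name, the cluster name, and the platform model. A minimal Python sketch of that derivation, assuming placeholder model values (`deployRoot`, `repoURL`, `targetRevision` here are hypothetical stand-ins for `_Platform.Model.argocd`):

```python
# Hypothetical platform model values; the real ones come from
# _Platform.Model.argocd in CUE.
argocd_model = {
    "deployRoot": ".",
    "repoURL": "https://example.com/holos-infra.git",
    "targetRevision": "main",
}

def argo_application(component, cluster_name="k3d"):
    """Sketch of the #Argo Application and its deployFiles key."""
    app = {
        "metadata": {"name": component, "namespace": "argocd"},
        "spec": {
            "destination": {"server": "https://kubernetes.default.svc"},
            "project": "default",
            "source": {
                "path": f"{argocd_model['deployRoot']}/deploy/clusters/"
                        f"{cluster_name}/components/{component}",
                "repoURL": argocd_model["repoURL"],
                "targetRevision": argocd_model["targetRevision"],
            },
        },
    }
    # The deployFiles key: where the generated Application lands on disk.
    path = f"clusters/{cluster_name}/gitops/{component}.application.gen.yaml"
    return path, app

path, app = argo_application("argo-cd")
```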
// #APIObjects defines the output format for kubernetes api objects. The holos
// cli expects the yaml representation of each api object in the apiObjectMap
// field.
#APIObjects: core.#APIObjects & {
// apiObjects represents the un-marshalled form of each kubernetes api object
// managed by a holos component.
apiObjects: {
[Kind=string]: {
[string]: {
kind: Kind
...
}
}
ConfigMap: [string]: corev1.#ConfigMap & {apiVersion: "v1"}
}
// apiObjectMap holds the marshalled representation of apiObjects
apiObjectMap: {
for kind, v in apiObjects {
"\(kind)": {
for name, obj in v {
"\(name)": yaml.Marshal(obj)
}
}
}
}
}
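The apiObjectMap comprehension above preserves the kind/name nesting while marshalling each object to a string. The same transformation in Python, with JSON standing in for YAML to keep the sketch standard-library only (the `api_objects` input is hypothetical):

```python
import json

# Hypothetical un-marshalled input: kind -> name -> kubernetes api object,
# mirroring the apiObjects struct above. JSON stands in for YAML here.
api_objects = {
    "ConfigMap": {
        "platform": {
            "kind": "ConfigMap",
            "apiVersion": "v1",
            "metadata": {"name": "platform", "namespace": "default"},
        },
    },
}

# Build the apiObjectMap: same kind/name nesting, each object marshalled
# to its string representation for the holos cli.
api_object_map = {
    kind: {name: json.dumps(obj) for name, obj in objs.items()}
    for kind, objs in api_objects.items()
}
```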


@@ -0,0 +1,4 @@
package holos
// _ClusterName is the --cluster-name flag value provided by the holos cli.
_ClusterName: string @tag(cluster, type=string)


@@ -0,0 +1,21 @@
package holos
_ArgoCD: {
metadata: name: "argocd"
metadata: namespace: "argocd"
hostname: "argocd.\(_Platform.Model.org.domain)"
// issuerHost is the hostname portion of issuerURL
issuerHost: _AuthProxy.issuerHost
// issuerURL is the oidc id provider issuer, zitadel for this platform.
issuerURL: "https://" + issuerHost
// clientID is the client id of the authproxy in the id provider (zitadel).
clientID: _Platform.Model.argocd.clientID
// scopesList represents a list of scopes
// Omit urn:zitadel:iam:org:domain:primary:example.com scope because members
// of the Holos and the Open Infrastructure Services orgs may access ArgoCD.
scopesList: ["openid", "profile", "email", "groups"]
}


@@ -0,0 +1,59 @@
package holos
// Produce a kubernetes objects build plan.
(#Kubernetes & Objects).Output
let Objects = {
Name: "argo-authpolicy"
Namespace: _AuthProxy.metadata.namespace
let Selector = {matchLabels: "istio.io/gateway-name": "default"}
Resources: [_]: [NAME=string]: {
metadata: _IAP.metadata
metadata: name: NAME
metadata: namespace: Namespace
}
// Auth policy resources represent the RequestAuthentication and
// AuthorizationPolicy resources in the istio-gateways namespace governing the
// default Gateway.
Resources: {
AuthorizationPolicy: "\(Name)-allow-argocd": {
_description: "Allow argocd access"
spec: {
action: "ALLOW"
selector: Selector
rules: [
{
to: [{
// Refer to https://istio.io/latest/docs/ops/best-practices/security/#writing-host-match-policies
operation: hosts: [
"argocd.\(_Platform.Model.org.domain)",
"argocd.\(_Platform.Model.org.domain):*",
]
}]
when: [
// Must be issued by the platform identity provider.
{
key: "request.auth.principal"
values: [_AuthProxy.issuerURL + "/*"]
},
// Must be intended for an app within the Holos Platform ZITADEL project.
{
key: "request.auth.audiences"
values: [_AuthProxy.projectID]
},
// Must be presented by the istio ExtAuthz auth proxy.
{
key: "request.auth.presenter"
values: [_AuthProxy.clientID]
},
]
},
]
}
}
}
}


@@ -0,0 +1,85 @@
package holos
import (
"encoding/yaml"
"strings"
)
// Produce a helm chart build plan.
(#Helm & Chart).Output
let Chart = {
Name: "argo-cd"
Namespace: _ArgoCD.metadata.namespace
Version: "7.1.1"
Chart: chart: release: _ArgoCD.metadata.name
// Upstream uses a Kubernetes Job to create the argocd-redis Secret. Enable
// hooks so the Job runs.
Chart: enableHooks: true
Repo: name: "argocd"
Repo: url: "https://argoproj.github.io/argo-helm"
Resources: [_]: [_]: metadata: namespace: Namespace
// Grant the Gateway namespace the ability to refer to the backend service
// from HTTPRoute resources.
Resources: ReferenceGrant: (#IstioGatewaysNamespace): #ReferenceGrant
EnableKustomizePostProcessor: true
// Force all resources into the component namespace. Some resources in the
// helm chart may not specify a namespace, so they could be mis-applied
// depending on the kubectl (client-go) context.
KustomizeFiles: "kustomization.yaml": namespace: Namespace
// Patch the backend with the service mesh sidecar.
KustomizePatches: {
mesh: {
target: {
group: "apps"
version: "v1"
kind: "Deployment"
name: "argocd-server"
}
patch: yaml.Marshal(IstioInject)
}
}
Values: #Values & {
kubeVersionOverride: "1.29.0"
// handled in the argo-crds component
crds: install: false
global: domain: _ArgoCD.hostname
dex: enabled: false
// for integration with istio
configs: params: "server.insecure": true
configs: cm: {
"admin.enabled": false
"oidc.config": yaml.Marshal(OIDCConfig)
}
// Refer to https://argo-cd.readthedocs.io/en/stable/operator-manual/rbac/
let Policy = [
"g, argocd-view, role:readonly",
"g, prod-cluster-view, role:readonly",
"g, prod-cluster-edit, role:readonly",
"g, prod-cluster-admin, role:admin",
"g, \(_Platform.Model.rbac.sub), role:admin",
]
configs: rbac: "policy.csv": strings.Join(Policy, "\n")
}
}
let IstioInject = [{op: "add", path: "/spec/template/metadata/labels/sidecar.istio.io~1inject", value: "true"}]
let OIDCConfig = {
name: "Holos Platform"
issuer: _ArgoCD.issuerURL
clientID: _ArgoCD.clientID
requestedScopes: _ArgoCD.scopesList
// Set redirect uri to https://argocd.example.com/pkce/verify
enablePKCEAuthentication: true
// groups is essential for rbac
requestedIDTokenClaims: groups: essential: true
}
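The mesh patch above is a single JSON-patch `add` operation; the `~1` in its path is the JSON Pointer escape for a literal `/` in the `sidecar.istio.io/inject` label key. A minimal sketch of how that one op lands on the Deployment (hypothetical Deployment fragment, not a full RFC 6902 implementation):

```python
# Hypothetical Deployment fragment and the IstioInject patch from above
# expressed as plain data.
deployment = {"spec": {"template": {"metadata": {"labels": {"app": "argocd-server"}}}}}
patch = [{"op": "add",
          "path": "/spec/template/metadata/labels/sidecar.istio.io~1inject",
          "value": "true"}]

def apply_add(doc, op):
    """Minimal 'add' handler for this one patch (not a full RFC 6902 impl)."""
    # Split the pointer first, then unescape ~1 -> "/" and ~0 -> "~".
    parts = [p.replace("~1", "/").replace("~0", "~")
             for p in op["path"].lstrip("/").split("/")]
    target = doc
    for key in parts[:-1]:
        target = target[key]
    target[parts[-1]] = op["value"]
    return doc

for op in patch:
    deployment = apply_add(deployment, op)
```

Splitting before unescaping matters: unescaping first would turn `~1` into a `/` and corrupt the pointer segments.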


@@ -0,0 +1,7 @@
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: "argocd"
resources:
- "https://raw.githubusercontent.com/argoproj/argo-cd/v2.11.2/manifests/install.yaml"

File diff suppressed because it is too large


@@ -0,0 +1,4 @@
package holos
// Produce a kubectl kustomize build plan.
(#Kustomize & {Name: "argo-crds"}).Output


@@ -0,0 +1,6 @@
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- "https://github.com/argoproj/argo-cd//manifests/crds/?ref=v2.11.2"


@@ -0,0 +1,21 @@
package holos
// Produce a kubernetes objects build plan.
(#Kubernetes & Objects).Output
let Objects = {
Name: "argo-creds"
Namespace: _ArgoCD.metadata.namespace
Resources: [_]: [_]: metadata: namespace: Namespace
Resources: {
// ssh-keygen -t ed25519 -f sshPrivateKey -m pem -C argocd -N ''
// echo git@github.com:myorg/holos-infra.git > url
// holos create secret -n argocd --append-hash=false creds-holos-infra --from-file .
ExternalSecret: "creds-holos-infra": #ExternalSecret & {
// Labels and annotations are copied over
metadata: labels: "argocd.argoproj.io/secret-type": "repo-creds"
}
}
}


@@ -0,0 +1,29 @@
package holos
// Produce a kubernetes objects build plan.
(#Kubernetes & Objects).Output
let Objects = {
Name: "argo-routes"
Namespace: #IstioGatewaysNamespace
Resources: [_]: [_]: metadata: namespace: Namespace
Resources: HTTPRoute: argocd: {
spec: hostnames: [_ArgoCD.hostname]
spec: parentRefs: [{
name: "default"
namespace: #IstioGatewaysNamespace
}]
spec: rules: [
{
matches: [{path: {type: "PathPrefix", value: "/"}}]
backendRefs: [{
name: "argocd-server"
port: 80
namespace: _ArgoCD.metadata.namespace
}]
},
]
}
}


@@ -0,0 +1,20 @@
package holos
import ci "cert-manager.io/clusterissuer/v1"
// Produce a kubernetes objects build plan.
(#Kubernetes & Objects).Output
let Objects = {
Name: "local-ca"
Namespace: "cert-manager"
Resources: {
ClusterIssuer: {
"local-ca": ci.#ClusterIssuer & {
metadata: name: Name
spec: ca: secretName: Name
}
}
}
}


@@ -0,0 +1,41 @@
package holos
// Produce a helm chart build plan.
(#Helm & Chart).Output
let Chart = {
Name: "cert-manager"
Version: "1.14.5"
Namespace: "cert-manager"
Repo: name: "jetstack"
Repo: url: "https://charts.jetstack.io"
Values: {
installCRDs: true
startupapicheck: enabled: false
// Must not use kube-system on gke autopilot. GKE Warden blocks access.
global: leaderElection: namespace: Namespace
// https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-resource-requests#min-max-requests
resources: requests: {
cpu: "250m"
memory: "512Mi"
"ephemeral-storage": "100Mi"
}
webhook: resources: Values.resources
cainjector: resources: Values.resources
startupapicheck: resources: Values.resources
// https://cloud.google.com/kubernetes-engine/docs/how-to/autopilot-spot-pods
nodeSelector: {
"kubernetes.io/os": "linux"
if _ClusterName == "management" {
"cloud.google.com/gke-spot": "true"
}
}
webhook: nodeSelector: Values.nodeSelector
cainjector: nodeSelector: Values.nodeSelector
startupapicheck: nodeSelector: Values.nodeSelector
}
}


@@ -0,0 +1,30 @@
package holos
import certv1 "cert-manager.io/certificate/v1"
let Objects = {
Name: "certificates"
Namespace: "istio-gateways"
Resources: Certificate: [NAME=string]: certv1.#Certificate & {
metadata: name: NAME
metadata: namespace: Namespace
spec: {
commonName: NAME
secretName: NAME
dnsNames: [NAME]
issuerRef: {
kind: "ClusterIssuer"
name: "local-ca"
}
}
}
Resources: Certificate: "httpbin.\(_Platform.Model.org.domain)": _
Resources: Certificate: "argocd.\(_Platform.Model.org.domain)": _
Resources: Certificate: "app.\(_Platform.Model.org.domain)": _
Resources: Certificate: "backstage.\(_Platform.Model.org.domain)": _
}
// Produce a kubernetes objects build plan.
(#Kubernetes & Objects).Output


@@ -0,0 +1,30 @@
package holos
import "encoding/yaml"
import v1 "github.com/holos-run/holos/api/v1alpha1"
// Provide a BuildPlan to the holos cli to render k8s api objects.
v1.#BuildPlan & {
spec: components: resources: platformConfigmap: {
metadata: name: "platform-configmap"
apiObjectMap: OBJECTS.apiObjectMap
}
}
// OBJECTS represents the kubernetes api objects to manage.
let OBJECTS = v1.#APIObjects & {
apiObjects: ConfigMap: platform: {
metadata: {
name: "platform"
namespace: "default"
}
// Output the platform model, which is derived from the form the platform
// engineer defines in the web app and the values the end user provides.
data: platform: yaml.Marshal(PLATFORM)
}
}
let PLATFORM = {
spec: model: _Platform.spec.model
}


@@ -0,0 +1,4 @@
package holos
// Produce a kubectl kustomize build plan.
(#Kustomize & {Name: "gateway-api"}).Output


@@ -0,0 +1,6 @@
---
# Refer to https://istio.io/latest/docs/tasks/traffic-management/ingress/gateway-api/
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.1.0"


@@ -0,0 +1,11 @@
package holos
_IAP: {
metadata: {
name: string
namespace: _AuthProxy.metadata.namespace
labels: "app.kubernetes.io/name": name
labels: "app.kubernetes.io/part-of": "default-gateway"
...
}
}


@@ -0,0 +1,17 @@
package holos
// Produce a helm chart build plan.
(#Helm & Chart).Output
let Chart = {
Name: "istio-base"
Version: #IstioVersion
Namespace: "istio-system"
Chart: chart: name: "base"
Repo: name: "istio"
Repo: url: "https://istio-release.storage.googleapis.com/charts"
Values: #IstioValues
}


@@ -0,0 +1,17 @@
package holos
// Produce a helm chart build plan.
(#Helm & Chart).Output
let Chart = {
Name: "istio-cni"
Version: #IstioVersion
Namespace: "istio-system"
Chart: chart: name: "cni"
Repo: name: "istio"
Repo: url: "https://istio-release.storage.googleapis.com/charts"
Values: #CNIValues
}


@@ -0,0 +1,70 @@
package holos
// Produce a kubernetes objects build plan.
(#Kubernetes & Objects).Output
let Objects = {
Name: "gateway"
Namespace: #IstioGatewaysNamespace
Resources: {
// Manage a service account to prevent ArgoCD from pruning it.
ServiceAccount: "default-istio": {
metadata: namespace: Namespace
metadata: labels: {
"gateway.istio.io/managed": "istio.io-gateway-controller"
"gateway.networking.k8s.io/gateway-name": "default"
"istio.io/gateway-name": "default"
}
}
// The default gateway with all listeners attached to tls certs.
Gateway: default: {
metadata: namespace: Namespace
spec: {
// Work with a struct of listeners instead of a list.
_listeners: (#WildcardListener & {Name: "httpbin", Cluster: false}).Output
_listeners: (#WildcardListener & {Name: "argocd", Cluster: false}).Output
_listeners: (#WildcardListener & {Name: "backstage", Cluster: false}).Output
_listeners: (#WildcardListener & {Name: "app", Cluster: false}).Output
listeners: [for x in _listeners {x}]
}
}
}
}
#WildcardListener: {
Name: string
Cluster: false | *true
Selector: matchLabels: {[string]: string}
_Hostname: string
_Prefix: string
if Cluster == true {
_Hostname: "\(Name).\(_ClusterName).\(_Platform.Model.org.domain)"
_Prefix: "region-\(Name)"
}
if Cluster == false {
_Hostname: "\(Name).\(_Platform.Model.org.domain)"
_Prefix: "global-\(Name)"
}
Output: [NAME=string]: {name: NAME}
Output: {
"\(_Prefix)-apex": {
hostname: _Hostname
port: 443
protocol: "HTTPS"
tls: {
certificateRefs: [{
kind: "Secret"
name: _Hostname
}]
}
allowedRoutes: namespaces: from: "Selector"
allowedRoutes: namespaces: selector: Selector
}
}
}
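The #WildcardListener definition above branches on the Cluster flag to pick the hostname and key prefix, then emits a single keyed listener struct. A rough Python port of that logic, assuming `example.com` as the org domain (the function and its defaults are hypothetical):

```python
def wildcard_listener(name, org_domain, cluster=True, cluster_name="management"):
    """Sketch of the #WildcardListener logic above (hypothetical port).

    Cluster-scoped listeners get a per-cluster hostname and a "region-"
    prefix; platform-wide listeners get an apex hostname and "global-".
    """
    if cluster:
        hostname = f"{name}.{cluster_name}.{org_domain}"
        prefix = f"region-{name}"
    else:
        hostname = f"{name}.{org_domain}"
        prefix = f"global-{name}"
    key = f"{prefix}-apex"
    # The listener's tls cert ref matches the hostname, as in the CUE above.
    return {key: {
        "name": key,
        "hostname": hostname,
        "port": 443,
        "protocol": "HTTPS",
        "tls": {"certificateRefs": [{"kind": "Secret", "name": hostname}]},
    }}

# Mirror the Gateway above: merge one listener per app into a struct,
# then the Gateway's listeners list is the struct's values.
listeners = {}
for app in ("httpbin", "argocd", "backstage", "app"):
    listeners.update(wildcard_listener(app, "example.com", cluster=False))
```

Keying listeners by name in a struct before flattening to a list lets later definitions unify with or override a specific listener, which a bare list would not allow.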
_ProxyProtocol: gatewayTopology: proxyProtocol: {}

Some files were not shown because too many files have changed in this diff