Compare commits

49 Commits

Author SHA1 Message Date
Jeff McCune
44fea098de (#101) Manage an ExternalSecret for every Server in the default Gateway
This patch loops over every Gateway.spec.servers entry in the default
gateway and manages an ExternalSecret to sync the credential from the
provisioner cluster.
2024-04-18 09:53:39 -07:00
Jeff McCune
52286efa25 (#101) Fix duplicate certs in holos components
Problem:
A Holos Component is created for each project stage, but all hosts for
all stages in the project are added.  This creates duplicates.

Solution:
Sort project hosts by their stage and map the holos component for a
stage to the hosts for that stage.

Result:
Duplicates are eliminated, the prod certs are not in the dev holos
component and vice-versa.
2024-04-18 09:17:49 -07:00
Jeff McCune
a1b2179442 (#101) Remove holos-saas-certs holos component
No longer needed now that project host certs are using wildcards and
organized nicely.
2024-04-18 06:32:06 -07:00
Jeff McCune
cffc430738 (#101) Provision wildcard certs for all Gateway servers
This patch provisions wildcard certs in the provisioning cluster.  The
CN matches the project stage host's global hostname, without any cluster
qualifiers.

The use of a wildcard in place of the environment name dns segment at
the leftmost position of the fully qualified dns name enables additional
environments to be configured without reissuing certificates.

This is to avoid the 100-name-per-cert limit in LetsEncrypt.
2024-04-18 06:26:29 -07:00
Jeff McCune
d76454272b (#101) Simplify the GatewayServers struct
Mapping each project host fqdn to the stage is unnecessary.  The list of
gateway servers is constructed from each FQDN in the project.

This patch removes the unnecessary struct mappings.
2024-04-18 05:32:19 -07:00
Jeff McCune
9d1e77c00f (#101) Define #ProjectHosts to manage project hosts
Problem:
It's difficult to map and reduce the collection of project hosts when
configuring related Certificate, Gateway.spec.servers, VirtualService,
and auth proxy cookie domain settings.

Solution:
Define #ProjectHosts, which takes a project and provides Hosts, a
struct with fqdn keys and #CertInfo values.  The #CertInfo definition
is intended to provide everything needed to reduce the Hosts property to
structs useful for the problematic resources mentioned previously.

Result:
Gateway.spec.servers are mapped using #ProjectHosts

Next step is to map the Certificate resources on the provisioner
cluster.
2024-04-17 21:59:04 -07:00
Jeff McCune
2050abdc6c (#101) Add wildcard support to project certs
Problem:
Adding environments to a project causes certs to be re-issued.

Solution:
Enable wildcard certs for per-environment namespaces like jeff, gary,
nate, etc...

Result:
Environments can be added to a project stage without needing the cert to
be re-issued.
2024-04-17 12:32:44 -07:00
Jeff McCune
3ea013c503 (#101) Consolidate certificates by project stage
This patch avoids LetsEncrypt rate limits by consolidating multiple dns
names into one certificate.

For each project host, create a certificate for each stage in the
project.  The certificate contains the dns names for all clusters and
environments associated with that stage and host.

This can become quite a long list; the limit is 100 dnsNames.

For the Holos project which has 7 clusters and 4 dev environments, the
number of dns names is 32 (4 envs + 4 envs * 7 clusters = 32 dns names).

Still, a much needed improvement because we're limited to 50 certs per
week.

It may be worth considering wildcards for the per-developer
environments, which are the ones we'll likely spin up the most
frequently.
2024-04-17 11:58:46 -07:00
Jeff McCune
309db96138 (#133) Choria Broker for Holos Controller provisioning
This patch is a partial step toward getting the choria broker up
and running in my own namespace.  The choria broker is necessary for
provisioning machine room agents such as the holos controller.
2024-04-17 08:48:31 -07:00
Jeff McCune
283b4be71c (#132) Use forked version of machine-room
Until https://github.com/choria-io/machine-room/pull/12 gets merged
2024-04-16 19:46:36 -07:00
Jeff McCune
ab9bca0750 (#132) Controller Subcommand
This patch adds an initial holos controller subcommand.  The machine
room agent starts, but doesn't yet provision because we haven't deployed
the provisioning infrastructure yet.
2024-04-16 15:40:25 -07:00
Jeff McCune
ac2be67c3c (#130) NATS deployment with operator jwt
Configure NATS in a 3 Node deployment with resolver authentication using
an Operator JWT.

The operator secret nkeys are stored in the provisioner cluster.  Get
them with:

    holos get secret -n jeff-holos nats-nsc --print-key nsc.tgz | tar -tvzf-
2024-04-15 17:02:18 -07:00
Jeff McCune
6ffafb8cca (#127) Setup Routing using Dashboard Schematic
This patch sets up basic routing and a 404 not found page.  The Home and
Clusters pages are generated from the [dashboard schematic][1].

    ng generate @angular/material:dashboard home
    ng generate @angular/material:dashboard cluster-list
    ng g c error-not-found

[1]: https://material.angular.io/guide/schematics#dashboard-schematic
2024-04-15 13:48:00 -07:00
Jeff McCune
590e6b556c (#127) Generate Angular Material navigation
Instead of trying to hand-craft a navigation sidebar and toolbar from
Youtube videos, use the [navigation schematic][1] to quickly get a "good
enough" UI.

    ng generate @angular/material:navigation nav

[1]: https://material.angular.io/guide/schematics#navigation-schematic
2024-04-15 10:43:24 -07:00
Jeff McCune
5dc5c6fbdf (#127) ng add @angular/material
And start working on the sidenav and toolbar.
2024-04-14 07:03:45 -07:00
Jeff McCune
cd8c9f2c32 (#127) ConnectRPC generated code 2024-04-13 11:03:19 -07:00
Jeff McCune
3490941d4c (#127) Frontend deps from make tools
Needed to generate the connectrpc bindings and build the holos
executable.
2024-04-12 20:09:41 -07:00
Jeff McCune
3f201df0c2 (#126) Configure Angular to align with frontend.go
Angular must build output into a path compatible with the Go
http.FileServer.  We cannot easily graft an fs.FS onto a sub-path, so we
need the `./ui/` path in the output.  This requires special
configuration from the Angular default application builder behavior.
2024-04-12 20:08:37 -07:00
Jeff McCune
4c22d515bd (#127) ng new holos
ng new holos --routing --skip-git --standalone
SCSS
No SSR
2024-04-12 20:07:17 -07:00
Jeff McCune
ec0ef1c4b3 (#127) Angular - Restart again
Restart again this time with SCSS instead of CSS.
2024-04-12 20:03:45 -07:00
Jeff McCune
1e51e2d49a (#127) Angular Navigation schematic
Following [Navigation schematic][1].

    ng generate @angular/material:navigation navigation

[1]: https://material.angular.io/guide/schematics#navigation-schematic
2024-04-12 19:45:26 -07:00
Jeff McCune
5186499b90 Revert "(#127) Angular - ng add ng-matero"
This reverts commit fc275e4164.

Yuck, don't like it.
2024-04-12 17:21:26 -07:00
Jeff McCune
fc275e4164 (#127) Angular - ng add ng-matero
Trying [ng-matero][1].  Seems to exceed the max prod budget of 1mb, but
worth trying anyway.

[1]: https://github.com/ng-matero/ng-matero
2024-04-12 17:18:33 -07:00
Jeff McCune
9fa466f7cf (#126) Build the front end app when building holos
Always build the front end app bundle when rebuilding the holos cli so
we're sure things are up to date.
2024-04-12 17:04:41 -07:00
Jeff McCune
efd6f256a5 (#126) Connect generated bindings for the frontend 2024-04-12 16:57:30 -07:00
Jeff McCune
f7f9d6b5f0 (#126) Angular Material - ng add @angular/material 2024-04-12 16:57:15 -07:00
Jeff McCune
0526062ab2 (#126) Configure Angular to align with frontend.go
Angular must build output into a path compatible with the Go
http.FileServer.  We cannot easily graft an fs.FS onto a sub-path, so we
need the `./ui/` path in the output.  This requires special
configuration from the Angular default application builder behavior.
2024-04-12 16:57:15 -07:00
Jeff McCune
a1ededa722 (#126) http.FileServer serves /ui instead of /app
This fixes Angular not being served up correctly.

Note, special configuration in Angular is necessary to get the build
output into the ui/ directory.  Refer to: [Output path configuration][1]
and [browser directory created in outputPath][2].

[1]: https://angular.io/guide/workspace-config#output-path-configuration
[2]: https://github.com/angular/angular-cli/issues/26304
2024-04-12 16:51:45 -07:00
Jeff McCune
9b09a02912 (#115) Angular new project with defaults
Set up Angular with the defaults: CSS, no SSR / Static Site Generation.

    npm install -g @angular/cli
    ng new holos

```
? Which stylesheet format would you like to use? CSS             [ https://developer.mozilla.org/docs/Web/CSS                     ]
? Do you want to enable Server-Side Rendering (SSR) and Static Site Generation (SSG/Prerendering)? No
```

```
CREATE holos/README.md (1059 bytes)
CREATE holos/.editorconfig (274 bytes)
CREATE holos/.gitignore (548 bytes)
CREATE holos/angular.json (2587 bytes)
CREATE holos/package.json (1036 bytes)
CREATE holos/tsconfig.json (857 bytes)
CREATE holos/tsconfig.app.json (263 bytes)
CREATE holos/tsconfig.spec.json (273 bytes)
CREATE holos/.vscode/extensions.json (130 bytes)
CREATE holos/.vscode/launch.json (470 bytes)
CREATE holos/.vscode/tasks.json (938 bytes)
CREATE holos/src/main.ts (250 bytes)
CREATE holos/src/favicon.ico (15086 bytes)
CREATE holos/src/index.html (291 bytes)
CREATE holos/src/styles.css (80 bytes)
CREATE holos/src/app/app.component.css (0 bytes)
CREATE holos/src/app/app.component.html (19903 bytes)
CREATE holos/src/app/app.component.spec.ts (913 bytes)
CREATE holos/src/app/app.component.ts (301 bytes)
CREATE holos/src/app/app.config.ts (227 bytes)
CREATE holos/src/app/app.routes.ts (77 bytes)
CREATE holos/src/assets/.gitkeep (0 bytes)
✔ Packages installed successfully.
```
2024-04-12 15:07:38 -07:00
Jeff McCune
657a5e82a5 (#115) Remove Angular SSR
We don't want Angular server-side rendering; we want plain old
client-side Angular.
2024-04-12 14:57:39 -07:00
Jeff McCune
1eece02254 (#126) Angular Material UI
ng add @angular/material

```
❯ ng add @angular/material
Skipping installation: Package already installed
? Choose a prebuilt theme name, or "custom" for a custom theme: Indigo/Pink        [ Preview: https://material.angular.io?theme=indigo-pink ]
? Set up global Angular Material typography styles? Yes
? Include the Angular animations module? Include and enable animations Yes
```
2024-04-12 14:16:45 -07:00
Jeff McCune
c866b47dcb (#126) Check for errors decoding claims
Return an empty claims struct when there's an error.
2024-04-12 14:16:44 -07:00
Jeff McCune
ff52ec750b (#126) Try to fix golangci-lint
It's doing way too much; we might want to consider something else.

Getting these errors:

```
/usr/bin/tar: ../../../go/pkg/mod/github.com/bufbuild/buf@v1.30.1/.dockerignore: Cannot open: File exists
/usr/bin/tar: ../../../go/pkg/mod/github.com/bufbuild/buf@v1.30.1/.envrc: Cannot open: File exists
/usr/bin/tar: ../../../go/pkg/mod/github.com/bufbuild/buf@v1.30.1/.gitattributes: Cannot open: File exists
/usr/bin/tar: ../../../go/pkg/mod/github.com/bufbuild/buf@v1.30.1/.github/CODEOWNERS: Cannot open: File exists
/usr/bin/tar: ../../../go/pkg/mod/github.com/bufbuild/buf@v1.30.1/.github/buf-logo.svg: Cannot open: File exists
/usr/bin/tar: ../../../go/pkg/mod/github.com/bufbuild/buf@v1.30.1/.github/dependabot.yml: Cannot open: File exists
/usr/bin/tar: ../../../go/pkg/mod/github.com/bufbuild/buf@v1.30.1/.github/workflows/add-to-project.yaml: Cannot open: File exists
/usr/bin/tar: ../../../go/pkg/mod/github.com/bufbuild/buf@v1.30.1/.github/workflows/back-to-development.yaml: Cannot open: File exists
/usr/bin/tar: ../../../go/pkg/mod/github.com/bufbuild/buf@v1.30.1/.github/workflows/buf-binary-size.yaml: Cannot open: File exists
/usr/bin/tar: ../../../go/pkg/mod/github.com/bufbuild/buf@v1.30.1/.github/workflows/buf-shadow-sync.yaml: Cannot open: File exists
/usr/bin/tar: ../../../go/pkg/mod/github.com/bufbuild/buf@v1.30.1/.github/workflows/buf.yaml: Cannot open: File exists
```
2024-04-12 14:01:16 -07:00
Jeff McCune
4184619afc (#126) Refactor pkg to internal
The pkg folder is not needed.  Move everything to internal for now.
2024-04-12 13:56:16 -07:00
Jeff McCune
954dbd1ec8 (#126) Refactor id token acquisition to token package
And add a logout command that deletes the token cache.

The token package is intended for subcommands that need to make API
calls to the holos api server; getting a token should be a simple matter
of calling the token.Get() method, which takes minimal dependencies.
2024-04-12 13:15:03 -07:00
Jeff McCune
30b70e76aa (#126) Add login command
This copies the login command from the previous holos cli.  Wire
dependency injection and the rest of the unnecessary machinery from
kubelogin are removed, streamlined down to a single function that takes
a few OIDC-related parameters.

This will need to be extracted out into an infrastructure service so
multiple other command line tools can easily re-use it and get the ID
token into the x-oidc-id-token header.
2024-04-12 12:13:33 -07:00
Jeff McCune
ec6d112711 (#126) Remove hydra and kratos databases
No longer needed for dev.
2024-04-12 10:24:26 -07:00
Jeff McCune
e796c6a763 (#126) Default to DATABASE_URL env var 2024-04-12 10:20:13 -07:00
Jeff McCune
be32201294 (#126) Basic User and Organization Ent models
Get rid of the previous UserIdentity model, this is no longer part of
the core domain and instead handled within the context of ZITADEL.
2024-04-12 09:59:40 -07:00
Jeff McCune
5ebc54b5b7 (#124) Go Tools 2024-04-12 09:14:13 -07:00
Jeff McCune
2954a57872 (#120) Fix NATS target namespace
The upstream NATS charts don't specify a namespace for each resource.
This works with helm upgrade, but not with helm template, which holos
uses to render the YAML.

The missing namespace causes flux to fail.

This patch uses the flux kustomization to add the target namespace to
all resources.
2024-04-10 21:54:58 -07:00
Jeff McCune
df705bd79f (#121) Fix Multiple Charts cause holos render to fail
When rendering a holos component which contains more than one helm chart, rendering fails.  It should succeed.

```
holos render --cluster-name=k2 /home/jeff/workspace/holos-run/holos/docs/examples/platforms/reference/clusters/holos/... --log-level debug
```

```
9:03PM ERR could not execute version=0.64.2 err="could not rename: rename /home/jeff/workspace/holos-run/holos/docs/examples/platforms/reference/clusters/holos/nats/envs/vendor553679311 /home/jeff/workspace/holos-run/holos/docs/examples/platforms/reference/clusters/holos/nats/envs/vendor: file exists" loc=helm.go:145
```

This patch fixes the problem by moving each child item of the temporary
directory charts are installed into, which avoids renaming the parent
when the parent target already exists.
2024-04-10 21:27:39 -07:00
Jeff McCune
4e8ce3585d (#115) Minor clean up of cue code 2024-04-10 21:21:16 -07:00
Jeff McCune
ab5f17c3d2 (#115) Fix goreleaser
Import modules to take the direct dependency and prevent go mod tidy
from modifying go.mod and go.sum which causes goreleaser to fail.
2024-04-10 19:09:30 -07:00
Jeff McCune
a8918c74d4 (#115) Angular spike - fix make frontend
And install frontend deps.
2024-04-09 21:03:26 -07:00
Jeff McCune
ae5738d82d (#115) Angular with SSR
Executed:

    ng new
    ng add @angular/ssr

Name: holos
Style: CSS
SSR and SSG?: No

SSR added using ng add, following https://angular.io/guide/prerendering
2024-04-09 20:52:42 -07:00
Jeff McCune
bb99aedffa (#115) Remove frontend
Clean up for ng new in angular spike.
2024-04-09 20:35:43 -07:00
Jeff McCune
d6ee1864c8 (#116) Tilt for development
Add Tilt back from holos server

Note with this patch the ec-creds.yaml file needs to be applied to the
provisioner and an external secret used to sync the image pull creds.

With this patch the dev instance is accessible behind the auth proxy.
pgAdmin also works from the Tilt UI.

https://jeff.holos.dev.k2.ois.run/app/start
2024-04-09 20:26:37 -07:00
Jeff McCune
8a4be66277 (#113) Fix goreleaser try 4
Please check in your pipeline what can be changing the following files:
  M go.sum
2024-04-09 16:48:21 -07:00
260 changed files with 23792 additions and 10930 deletions

@@ -30,14 +30,15 @@ jobs:
with:
go-version: stable
- name: Install tools
run: sudo apt update && sudo apt -qq -y install curl zip unzip tar bzip2 make
- name: Install Packages
run: sudo apt update && sudo apt -qq -y install git curl zip unzip tar bzip2 make
- name: Install Deps
- name: Install Tools
run: |
make go-deps
set -x
make tools
make buf
go generate ./...
make frontend-deps
make frontend
go mod tidy
@@ -45,3 +46,4 @@ jobs:
uses: golangci/golangci-lint-action@v4
with:
version: latest
skip-pkg-cache: true

@@ -35,12 +35,13 @@ jobs:
go-version: stable
# Necessary to run these outside of goreleaser, otherwise
# /home/runner/_work/holos/holos/internal/server/frontend/node_modules/.bin/protoc-gen-connect-query is not in PATH
- name: Install Deps
# /home/runner/_work/holos/holos/internal/frontend/node_modules/.bin/protoc-gen-connect-query is not in PATH
- name: Install Tools
run: |
make go-deps
set -x
make tools
make buf
go generate ./...
make frontend-deps
make frontend
go mod tidy

@@ -28,8 +28,8 @@ jobs:
with:
go-version: stable
- name: Install tools
run: sudo apt update && sudo apt -qq -y install curl zip unzip tar bzip2 make
- name: Install Packages
run: sudo apt update && sudo apt -qq -y install git curl zip unzip tar bzip2 make
- name: Set up Helm
uses: azure/setup-helm@v4
@@ -37,11 +37,12 @@ jobs:
- name: Set up Kubectl
uses: azure/setup-kubectl@v3
- name: Install Deps
- name: Install Tools
run: |
make go-deps
set -x
make tools
make buf
go generate ./...
make frontend-deps
make frontend
go mod tidy

.gitignore vendored

@@ -1,4 +1,4 @@
bin/
/bin/
vendor/
.idea/
coverage.out

@@ -10,8 +10,8 @@ version: 1
before:
hooks:
- go mod tidy
- go generate ./...
- go mod tidy
builds:
- main: ./cmd/holos

@@ -4,7 +4,7 @@ PROJ=holos
ORG_PATH=github.com/holos-run
REPO_PATH=$(ORG_PATH)/$(PROJ)
VERSION := $(shell cat pkg/version/embedded/major pkg/version/embedded/minor pkg/version/embedded/patch | xargs printf "%s.%s.%s")
VERSION := $(shell cat version/embedded/major version/embedded/minor version/embedded/patch | xargs printf "%s.%s.%s")
BIN_NAME := holos
DOCKER_REPO=quay.io/openinfrastructure/holos
@@ -13,13 +13,13 @@ IMAGE_NAME=$(DOCKER_REPO)
$( shell mkdir -p bin)
# For buf plugin protoc-gen-connect-es
export PATH := $(PWD)/internal/server/frontend/node_modules/.bin:$(PATH)
export PATH := $(PWD)/internal/frontend/holos/node_modules/.bin:$(PATH)
GIT_COMMIT=$(shell git rev-parse HEAD)
GIT_TREE_STATE=$(shell test -n "`git status --porcelain`" && echo "dirty" || echo "clean")
BUILD_DATE=$(shell date -Iseconds)
LD_FLAGS="-w -X ${ORG_PATH}/${PROJ}/pkg/version.GitCommit=${GIT_COMMIT} -X ${ORG_PATH}/${PROJ}/pkg/version.GitTreeState=${GIT_TREE_STATE} -X ${ORG_PATH}/${PROJ}/pkg/version.BuildDate=${BUILD_DATE}"
LD_FLAGS="-w -X ${ORG_PATH}/${PROJ}/version.GitCommit=${GIT_COMMIT} -X ${ORG_PATH}/${PROJ}/version.GitTreeState=${GIT_TREE_STATE} -X ${ORG_PATH}/${PROJ}/version.BuildDate=${BUILD_DATE}"
.PHONY: default
default: test
@@ -68,7 +68,7 @@ generate: ## Generate code.
go generate ./...
.PHONY: build
build: generate ## Build holos executable.
build: generate frontend ## Build holos executable.
@echo "building ${BIN_NAME} ${VERSION}"
@echo "GOPATH=${GOPATH}"
go build -trimpath -o bin/$(BIN_NAME) -ldflags $(LD_FLAGS) $(REPO_PATH)/cmd/$(BIN_NAME)
@@ -102,34 +102,34 @@ buf: ## buf generate
cd service && buf mod update
buf generate
.PHONY: tools
tools: go-deps frontend-deps ## install tool dependencies
.PHONY: go-deps
go-deps: ## install go executables
go install github.com/bufbuild/buf/cmd/buf@v1
go install github.com/fullstorydev/grpcurl/cmd/grpcurl@v1
go install google.golang.org/protobuf/cmd/protoc-gen-go@v1
go install connectrpc.com/connect/cmd/protoc-gen-connect-go@v1
go-deps: ## tool versions pinned in tools.go
go install github.com/bufbuild/buf/cmd/buf
go install github.com/fullstorydev/grpcurl/cmd/grpcurl
go install google.golang.org/protobuf/cmd/protoc-gen-go
go install connectrpc.com/connect/cmd/protoc-gen-connect-go
go install honnef.co/go/tools/cmd/staticcheck@latest
# curl https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | bash
.PHONY: frontend-deps
frontend-deps: ## Setup npm and vite
cd internal/server/frontend && npm install
cd internal/server/frontend && npm install --save-dev @bufbuild/buf @connectrpc/protoc-gen-connect-es
cd internal/server/frontend && npm install @connectrpc/connect @connectrpc/connect-web @bufbuild/protobuf
cd internal/frontend/holos && npm install
cd internal/frontend/holos && npm install --save-dev @bufbuild/buf @connectrpc/protoc-gen-connect-es
cd internal/frontend/holos && npm install @connectrpc/connect @connectrpc/connect-web @bufbuild/protobuf
# https://github.com/connectrpc/connect-query-es/blob/1350b6f07b6aead81793917954bdb1cc3ce09df9/packages/protoc-gen-connect-query/README.md?plain=1#L23
cd internal/server/frontend && npm install --save-dev @connectrpc/protoc-gen-connect-query @bufbuild/protoc-gen-es
cd internal/server/frontend && npm install @connectrpc/connect-query @bufbuild/protobuf
# https://github.com/aleclarson/vite-tsconfig-paths
cd internal/server/frontend && npm install --save-dev vite-tsconfig-paths
cd internal/frontend/holos && npm install --save-dev @connectrpc/protoc-gen-connect-query @bufbuild/protoc-gen-es
cd internal/frontend/holos && npm install @connectrpc/connect-query @bufbuild/protobuf
.PHONY: frontend
frontend: buf
mkdir -p internal/server/frontend/dist
cd internal/server/frontend/dist && rm -rf app
cd internal/server/frontend && ./node_modules/.bin/vite build
# Necessary to force go build cache miss
touch internal/server/frontend/frontend.go
cd internal/frontend/holos && rm -rf dist
mkdir -p internal/frontend/holos/dist
cd internal/frontend/holos && ng build
touch internal/frontend/frontend.go
.PHONY: help
help: ## Display this help menu.

Tiltfile Normal file

@@ -0,0 +1,315 @@
# -*- mode: Python -*-
# This Tiltfile manages a Go project with live reload in Kubernetes
listen_port = 3000
metrics_port = 9090
# Use our wrapper to set the kube namespace
if os.getenv('TILT_WRAPPER') != '1':
fail("could not run, ./hack/tilt/bin/tilt was not used to start tilt")
# AWS Account to work in
aws_account = '271053619184'
aws_region = 'us-east-2'
# Resource ids
holos_backend = 'Holos Backend'
pg_admin = 'pgAdmin'
pg_cluster = 'PostgresCluster'
pg_svc = 'Database Pod'
compile_id = 'Go Build'
auth_id = 'Auth Policy'
lint_id = 'Run Linters'
tests_id = 'Run Tests'
# PostgresCluster resource name in k8s
pg_cluster_name = 'holos'
# Database name inside the PostgresCluster
pg_database_name = 'holos'
# PGAdmin name
pg_admin_name = 'pgadmin'
# Default Registry.
# See: https://github.com/tilt-dev/tilt.build/blob/master/docs/choosing_clusters.md#manual-configuration
# Note, Tilt will append the image name to the registry uri path
default_registry('{account}.dkr.ecr.{region}.amazonaws.com/holos-run/holos-server'.format(account=aws_account, region=aws_region))
# Set a name prefix specific to the user. Multiple developers share the tilt-holos namespace.
developer = os.getenv('USER')
holos_server = 'holos'
# See ./hack/tilt/bin/tilt
namespace = os.getenv('NAMESPACE')
# We always develop against the k1 cluster.
os.putenv('KUBECONFIG', os.path.abspath('./hack/tilt/kubeconfig'))
# The context defined in ./hack/tilt/kubeconfig
allow_k8s_contexts('sso@k1')
allow_k8s_contexts('sso@k2')
allow_k8s_contexts('sso@k3')
allow_k8s_contexts('sso@k4')
allow_k8s_contexts('sso@k5')
# PG db connection for localhost -> k8s port-forward
os.putenv('PGHOST', 'localhost')
os.putenv('PGPORT', '15432')
# We always develop in the dev aws account.
os.putenv('AWS_CONFIG_FILE', os.path.abspath('./hack/tilt/aws.config'))
os.putenv('AWS_ACCOUNT', aws_account)
os.putenv('AWS_DEFAULT_REGION', aws_region)
os.putenv('AWS_PROFILE', 'dev-holos')
os.putenv('AWS_SDK_LOAD_CONFIG', '1')
# Authenticate to AWS ECR when tilt up is run by the developer
local_resource('AWS Credentials', './hack/tilt/aws-login.sh', auto_init=True)
# Extensions are open-source, pre-packaged functions that extend Tilt
#
# More info: https://github.com/tilt-dev/tilt-extensions
# More info: https://docs.tilt.dev/extensions.html
load('ext://restart_process', 'docker_build_with_restart')
load('ext://k8s_attach', 'k8s_attach')
load('ext://git_resource', 'git_checkout')
load('ext://uibutton', 'cmd_button')
# Paths edited by the developer Tilt watches to trigger compilation.
# Generated files should be excluded to avoid an infinite build loop.
developer_paths = [
'./cmd',
'./internal/server',
'./internal/ent/schema',
'./frontend/package-lock.json',
'./frontend/src',
'./go.mod',
'./pkg',
'./service/holos',
]
# Builds the holos-server executable
local_resource(compile_id, 'make build', deps=developer_paths)
# Build Docker image
# Tilt will automatically associate image builds with the resource(s)
# that reference them (e.g. via Kubernetes or Docker Compose YAML).
#
# More info: https://docs.tilt.dev/api.html#api.docker_build
#
docker_build_with_restart(
'holos',
context='.',
entrypoint=[
'/app/bin/holos',
'server',
'--listen-port={}'.format(listen_port),
'--oidc-issuer=https://login.ois.run',
'--oidc-audience=262096764402729854@holos_platform',
'--metrics-port={}'.format(metrics_port),
],
dockerfile='./hack/tilt/Dockerfile',
only=['./bin'],
# (Recommended) Updating a running container in-place
# https://docs.tilt.dev/live_update_reference.html
live_update=[
# Sync files from host to container
sync('./bin', '/app/bin'),
# Wait for aws-login https://github.com/tilt-dev/tilt/issues/3048
sync('./tilt/aws-login.last', '/dev/null'),
# Execute commands in the container when paths change
# run('/app/hack/codegen.sh', trigger=['./app/api'])
],
)
# Run local commands
# Local commands can be helpful for one-time tasks like installing
# project prerequisites. They can also manage long-lived processes
# for non-containerized services or dependencies.
#
# More info: https://docs.tilt.dev/local_resource.html
#
# local_resource('install-helm',
# cmd='which helm > /dev/null || brew install helm',
# # `cmd_bat`, when present, is used instead of `cmd` on Windows.
# cmd_bat=[
# 'powershell.exe',
# '-Noninteractive',
# '-Command',
# '& {if (!(Get-Command helm -ErrorAction SilentlyContinue)) {scoop install helm}}'
# ]
# )
# Teach tilt about our custom resources (Note, this may be intended for workloads)
# k8s_kind('authorizationpolicy')
# k8s_kind('requestauthentication')
# k8s_kind('virtualservice')
k8s_kind('pgadmin')
# Troubleshooting
def resource_name(id):
print('resource: {}'.format(id))
return id.name
workload_to_resource_function(resource_name)
# Apply Kubernetes manifests
# Tilt will build & push any necessary images, re-deploying your
# resources as they change.
#
# More info: https://docs.tilt.dev/api.html#api.k8s_yaml
#
def holos_yaml():
"""Return a k8s Deployment personalized for the developer."""
k8s_yaml_template = str(read_file('./hack/tilt/k8s.yaml'))
return k8s_yaml_template.format(
name=holos_server,
developer=developer,
namespace=namespace,
listen_port=listen_port,
metrics_port=metrics_port,
tz=os.getenv('TZ'),
)
# Customize a Kubernetes resource
# By default, Kubernetes resource names are automatically assigned
# based on objects in the YAML manifests, e.g. Deployment name.
#
# Tilt strives for sane defaults, so calling k8s_resource is
# optional, and you only need to pass the arguments you want to
# override.
#
# More info: https://docs.tilt.dev/api.html#api.k8s_resource
#
k8s_yaml(blob(holos_yaml()))
# Backend server process
k8s_resource(
workload=holos_server,
new_name=holos_backend,
objects=[
'{}:serviceaccount'.format(holos_server),
'{}:servicemonitor'.format(holos_server),
],
resource_deps=[compile_id],
links=[
link('https://{}.holos.dev.k2.ois.run/app/'.format(developer), "Holos Web UI")
],
)
# AuthorizationPolicy - Beyond Corp functionality
k8s_resource(
new_name=auth_id,
objects=[
'{}:virtualservice'.format(holos_server),
'{}-allow-groups:authorizationpolicy'.format(holos_server),
'{}-allow-nothing:authorizationpolicy'.format(holos_server),
'{}-allow-well-known-paths:authorizationpolicy'.format(holos_server),
'{}-auth:authorizationpolicy'.format(holos_server),
'{}:requestauthentication'.format(holos_server),
],
)
# Database
# Note: Tilt confuses the backup pods with the database server pods, so this code is careful to tease the pods
# apart so logs are streamed correctly.
# See: https://github.com/tilt-dev/tilt.specs/blob/master/resource_assembly.md
# pgAdmin Web UI
k8s_resource(
workload=pg_admin_name,
new_name=pg_admin,
port_forwards=[
port_forward(15050, 5050, pg_admin),
],
)
# Disabled because these don't group resources nicely
# k8s_kind('postgrescluster')
# Postgres database in-cluster
k8s_resource(
new_name=pg_cluster,
objects=['holos:postgrescluster'],
)
# Needed to select the database by label
# https://docs.tilt.dev/api.html#api.k8s_custom_deploy
k8s_custom_deploy(
pg_svc,
apply_cmd=['./hack/tilt/k8s-get-db-sts', pg_cluster_name],
delete_cmd=['echo', 'Skipping delete. Object managed by custom resource.'],
deps=[],
)
k8s_resource(
pg_svc,
port_forwards=[
port_forward(15432, 5432, 'psql'),
],
resource_deps=[pg_cluster]
)
# Run tests
local_resource(
tests_id,
'make test',
allow_parallel=True,
auto_init=False,
deps=developer_paths,
)
# Run linter
local_resource(
lint_id,
'make lint',
allow_parallel=True,
auto_init=False,
deps=developer_paths,
)
# UI Buttons for helpful things.
# Icons: https://fonts.google.com/icons
os.putenv("GH_FORCE_TTY", "80%")
cmd_button(
'{}:go-test-failfast'.format(tests_id),
argv=['./hack/tilt/go-test-failfast'],
resource=tests_id,
icon_name='quiz',
text='Fail Fast',
)
cmd_button(
'{}:issues'.format(holos_server),
argv=['./hack/tilt/gh-issues'],
resource=holos_backend,
icon_name='folder_data',
text='Issues',
)
cmd_button(
'{}:gh-issue-view'.format(holos_server),
argv=['./hack/tilt/gh-issue-view'],
resource=holos_backend,
icon_name='task',
text='View Issue',
)
cmd_button(
'{}:get-pgdb-creds'.format(holos_server),
argv=['./hack/tilt/get-pgdb-creds', pg_cluster_name, pg_database_name],
resource=pg_svc,
icon_name='lock_open_right',
text='DB Creds',
)
cmd_button(
'{}:get-pgdb-creds'.format(pg_admin_name),
argv=['./hack/tilt/get-pgdb-creds', pg_cluster_name, pg_database_name],
resource=pg_admin,
icon_name='lock_open_right',
text='DB Creds',
)
cmd_button(
'{}:get-pgadmin-creds'.format(pg_admin_name),
argv=['./hack/tilt/get-pgadmin-creds', pg_admin_name],
resource=pg_admin,
icon_name='lock_open_right',
text='pgAdmin Login',
)
print("✨ Tiltfile evaluated")

@@ -8,9 +8,9 @@ import (
"strings"
"github.com/holos-run/holos"
"github.com/holos-run/holos/pkg/errors"
"github.com/holos-run/holos/pkg/logger"
"github.com/holos-run/holos/pkg/util"
"github.com/holos-run/holos/internal/errors"
"github.com/holos-run/holos/internal/logger"
"github.com/holos-run/holos/internal/util"
)
// A HelmChart represents a helm command to provide chart values in order to render kubernetes api objects.
@@ -141,9 +141,25 @@ func cacheChart(ctx context.Context, path holos.InstancePath, chartDir string, c
log.Debug("helm pull", "stdout", helmOut.Stdout, "stderr", helmOut.Stderr)
cachePath := filepath.Join(string(path), chartDir)
if err := os.Rename(cacheTemp, cachePath); err != nil {
return errors.Wrap(fmt.Errorf("could not rename: %w", err))
if err := os.MkdirAll(cachePath, 0777); err != nil {
return errors.Wrap(fmt.Errorf("could not mkdir: %w", err))
}
items, err := os.ReadDir(cacheTemp)
if err != nil {
return errors.Wrap(fmt.Errorf("could not read directory: %w", err))
}
for _, item := range items {
src := filepath.Join(cacheTemp, item.Name())
dst := filepath.Join(cachePath, item.Name())
log.DebugContext(ctx, "rename", "src", src, "dst", dst)
if err := os.Rename(src, dst); err != nil {
return errors.Wrap(fmt.Errorf("could not rename: %w", err))
}
}
log.InfoContext(ctx, "cached", "chart", chart.Name, "version", chart.Version, "path", cachePath)
return nil

View File

@@ -4,9 +4,9 @@ import (
"context"
"github.com/holos-run/holos"
"github.com/holos-run/holos/pkg/errors"
"github.com/holos-run/holos/pkg/logger"
"github.com/holos-run/holos/pkg/util"
"github.com/holos-run/holos/internal/errors"
"github.com/holos-run/holos/internal/logger"
"github.com/holos-run/holos/internal/util"
)
const KustomizeBuildKind = "KustomizeBuild"

View File

@@ -7,9 +7,9 @@ import (
"path/filepath"
"slices"
"github.com/holos-run/holos/pkg/errors"
"github.com/holos-run/holos/pkg/logger"
"github.com/holos-run/holos/pkg/util"
"github.com/holos-run/holos/internal/errors"
"github.com/holos-run/holos/internal/logger"
"github.com/holos-run/holos/internal/util"
)
// Result is the build result for display or writing. Holos components Render the Result as a data pipeline.

View File

@@ -11,14 +11,14 @@ plugins:
out: service/gen
opt: paths=source_relative
- plugin: es
out: internal/server/frontend/gen
out: internal/frontend/holos/gen
opt:
- target=ts
- plugin: connect-es
out: internal/server/frontend/gen
out: internal/frontend/holos/gen
opt:
- target=ts
- plugin: connect-query
out: internal/server/frontend/gen
out: internal/frontend/holos/gen
opt:
- target=ts

View File

@@ -1,8 +1,9 @@
package main
import (
"github.com/holos-run/holos/pkg/cli"
"os"
"github.com/holos-run/holos/internal/cli"
)
func main() {

View File

@@ -1,10 +1,11 @@
package main
import (
"github.com/holos-run/holos/pkg/cli"
"github.com/rogpeppe/go-internal/testscript"
"os"
"testing"
"github.com/holos-run/holos/internal/cli"
"github.com/rogpeppe/go-internal/testscript"
)
func TestMain(m *testing.M) {

View File

@@ -4,3 +4,8 @@ package v1
apiVersion: "apps/v1"
kind: "Deployment"
}
#StatefulSet: {
apiVersion: "apps/v1"
kind: "StatefulSet"
}

View File

@@ -20,6 +20,7 @@ import "encoding/yaml"
ConfigMap?: [Name=_]: #ConfigMap & {metadata: name: Name}
Deployment?: [_]: #Deployment
StatefulSet?: [_]: #StatefulSet
RequestAuthentication?: [_]: #RequestAuthentication
AuthorizationPolicy?: [_]: #AuthorizationPolicy
}

View File

@@ -0,0 +1,26 @@
package holos
// NOTE: Beyond the base reference platform, services should typically be added to #OptionalServices instead of directly to a managed namespace.
// ManagedNamespace is a namespace to manage across all clusters in the holos platform.
#ManagedNamespace: {
namespace: {
metadata: {
name: string
labels: [string]: string
}
}
// clusterNames represents the set of clusters the namespace is managed on. Usually all clusters.
clusterNames: [...string]
for cluster in clusterNames {
clusters: (cluster): name: cluster
}
}
// #ManagedNamespaces is the union of all namespaces across all cluster types and optional services.
// Holos adopts the namespace sameness position of SIG Multicluster, refer to https://github.com/kubernetes/community/blob/dd4c8b704ef1c9c3bfd928c6fa9234276d61ad18/sig-multicluster/namespace-sameness-position-statement.md
#ManagedNamespaces: {
[Name=_]: #ManagedNamespace & {
namespace: metadata: name: Name
}
}

View File

@@ -1,6 +1,8 @@
// Controls optional feature flags for services distributed across multiple holos components.
// For example, enable issuing certificates in the provisioner cluster when an optional service is
// enabled for a workload cluster.
// enabled for a workload cluster. Another example is NATS, which isn't necessary on all clusters,
// but is necessary on clusters that run a project, such as holos, that depends on NATS.
package holos
import "list"

View File

@@ -0,0 +1,48 @@
package holos
let Namespace = "jeff-holos"
let Broker = "broker"
spec: components: KubernetesObjectsList: [
#KubernetesObjects & {
_dependsOn: "prod-platform-issuer": _
metadata: name: "\(Namespace)-broker"
apiObjectMap: OBJECTS.apiObjectMap
},
]
let SelectorLabels = {
"app.kubernetes.io/instance": Broker
"app.kubernetes.io/name": Broker
}
let OBJECTS = #APIObjects & {
apiObjects: {
Certificate: "\(Broker)-tls": #Certificate & {
metadata: {
name: "\(Broker)-tls"
namespace: Namespace
labels: SelectorLabels
}
spec: {
commonName: "\(Broker).\(Namespace).svc.cluster.local"
dnsNames: [
Broker,
"\(Broker).\(Namespace).svc",
"\(Broker).\(Namespace).svc.cluster.local",
"provision-\(Broker)",
"provision-\(Broker).\(Namespace).svc",
"provision-\(Broker).\(Namespace).svc.cluster.local",
"*.\(Broker)",
"*.\(Broker).\(Namespace).svc",
"*.\(Broker).\(Namespace).svc.cluster.local",
]
issuerRef: kind: "ClusterIssuer"
issuerRef: name: "platform-issuer"
secretName: metadata.name
usages: ["signing", "key encipherment", "server auth", "client auth"]
}
}
}
}

View File

@@ -0,0 +1,168 @@
package holos
let Namespace = "jeff-holos"
let Broker = "broker"
spec: components: KubernetesObjectsList: [
#KubernetesObjects & {
_dependsOn: "prod-secrets-stores": _
metadata: name: "\(Namespace)-broker"
apiObjectMap: OBJECTS.apiObjectMap
},
]
let SelectorLabels = {
"app.kubernetes.io/instance": Broker
"app.kubernetes.io/name": Broker
}
let Metadata = {
name: Broker
namespace: Namespace
labels: SelectorLabels
}
let OBJECTS = #APIObjects & {
apiObjects: {
ExternalSecret: "\(Broker)-tls": #ExternalSecret & {
metadata: name: "\(Broker)-tls"
metadata: namespace: Namespace
}
ExternalSecret: "choria-\(Broker)": #ExternalSecret & {
metadata: name: "choria-\(Broker)"
metadata: namespace: Namespace
}
StatefulSet: "\(Broker)": {
metadata: Metadata
spec: {
selector: matchLabels: SelectorLabels
serviceName: Broker
template: metadata: labels: SelectorLabels
template: spec: {
containers: [
{
name: Broker
command: ["choria", "broker", "run", "--config", "/etc/choria/broker.conf"]
image: "registry.choria.io/choria/choria:0.28.0"
imagePullPolicy: "IfNotPresent"
ports: [
{
containerPort: 4222
name: "tcp-nats"
protocol: "TCP"
},
{
containerPort: 4333
name: "https-wss"
protocol: "TCP"
},
{
containerPort: 5222
name: "tcp-cluster"
protocol: "TCP"
},
{
containerPort: 8222
name: "http-stats"
protocol: "TCP"
},
]
livenessProbe: httpGet: {
path: "/healthz"
port: "http-stats"
}
readinessProbe: livenessProbe
resources: {}
securityContext: {}
volumeMounts: [
{
mountPath: "/etc/choria"
name: Broker
},
{
mountPath: "/etc/choria-tls"
name: "\(Broker)-tls"
},
]
},
]
securityContext: {}
serviceAccountName: Broker
volumes: [
{
name: Broker
secret: secretName: "choria-\(Broker)"
},
{
name: "\(Broker)-tls"
secret: secretName: "\(Broker)-tls"
},
]
}
}
}
ServiceAccount: "\(Broker)": #ServiceAccount & {
metadata: Metadata
}
Service: "\(Broker)": #Service & {
metadata: Metadata
spec: {
type: "ClusterIP"
clusterIP: "None"
selector: SelectorLabels
ports: [
{
name: "tcp-nats"
appProtocol: "tcp"
port: 4222
protocol: "TCP"
targetPort: "tcp-nats"
},
{
name: "tcp-cluster"
appProtocol: "tcp"
port: 5222
protocol: "TCP"
targetPort: "tcp-cluster"
},
{
name: "https-wss"
appProtocol: "https"
port: 443
protocol: "TCP"
targetPort: "https-wss"
},
]
}
}
DestinationRule: "\(Broker)-wss": #DestinationRule & {
metadata: Metadata
spec: host: "\(Broker).\(Namespace).svc.cluster.local"
spec: trafficPolicy: tls: {
credentialName: "istio-ingress-mtls-cert"
mode: "MUTUAL"
}
}
VirtualService: "\(Broker)-wss": #VirtualService & {
metadata: name: "\(Broker)-wss"
metadata: namespace: Namespace
spec: {
gateways: ["istio-ingress/default"]
hosts: ["jeff.provision.dev.\(#ClusterName).holos.run"]
http: [
{
route: [
{
destination: {
host: "\(Broker).\(Namespace).svc.cluster.local"
port: "number": 443
}
},
]
},
]
}
}
}
}

View File

@@ -0,0 +1,220 @@
# build output from https://github.com/holos-run/holos-infra/blob/main/experiments/components/holos-saas/broker/build
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/instance: broker
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: broker
app.kubernetes.io/version: 0.1.0
helm.sh/chart: broker-0.1.0
name: broker
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/instance: broker
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: broker
app.kubernetes.io/version: 0.1.0
helm.sh/chart: broker-0.1.0
name: broker
spec:
clusterIP: None
ports:
- appProtocol: tcp
name: tcp-nats
port: 4222
protocol: TCP
targetPort: tcp-nats
- appProtocol: tcp
name: tcp-cluster
port: 5222
protocol: TCP
targetPort: tcp-cluster
- appProtocol: https
name: https-wss
port: 443
protocol: TCP
targetPort: https-wss
selector:
app.kubernetes.io/instance: broker
app.kubernetes.io/name: broker
type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/instance: broker
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: broker
app.kubernetes.io/version: 0.1.0
helm.sh/chart: broker-0.1.0
name: broker-lb
spec:
externalTrafficPolicy: Local
loadBalancerIP: 1.2.3.4
ports:
- appProtocol: tcp
name: tcp-nats
port: 4222
protocol: TCP
targetPort: tcp-nats
- appProtocol: https
name: https-wss
port: 443
protocol: TCP
targetPort: https-wss
selector:
app.kubernetes.io/instance: broker
app.kubernetes.io/name: broker
type: LoadBalancer
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app.kubernetes.io/instance: broker
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: broker
app.kubernetes.io/version: 0.1.0
helm.sh/chart: broker-0.1.0
name: broker
spec:
replicas: 3
selector:
matchLabels:
app.kubernetes.io/instance: broker
app.kubernetes.io/name: broker
serviceName: broker
template:
metadata:
labels:
app.kubernetes.io/instance: broker
app.kubernetes.io/name: broker
spec:
containers:
- command:
- choria
- broker
- run
- --config
- /etc/choria/broker.conf
image: registry.choria.io/choria/choria:latest
imagePullPolicy: Always
livenessProbe:
httpGet:
path: /healthz
port: http-stats
name: broker
ports:
- containerPort: 4222
name: tcp-nats
protocol: TCP
- containerPort: 4333
name: https-wss
protocol: TCP
- containerPort: 5222
name: tcp-cluster
protocol: TCP
- containerPort: 8222
name: http-stats
protocol: TCP
readinessProbe:
httpGet:
path: /healthz
port: http-stats
resources: {}
securityContext: {}
volumeMounts:
- mountPath: /etc/choria
name: broker
- mountPath: /etc/choria-tls
name: broker-tls
securityContext: {}
serviceAccountName: broker
volumes:
- name: broker
secret:
secretName: broker
- name: broker-tls
secret:
secretName: broker-tls
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: broker-tls
namespace: holos-dev
spec:
commonName: broker.holos-dev.svc.cluster.local
dnsNames:
- broker
- broker.holos-dev.svc
- broker.holos-dev.svc.cluster.local
- provision-broker
- provision-broker.holos-dev.svc
- provision-broker.holos-dev.svc.cluster.local
- '*.broker'
- '*.broker.holos-dev.svc'
- '*.broker.holos-dev.svc.cluster.local'
issuerRef:
kind: ClusterIssuer
name: cluster-issuer
secretName: broker-tls
usages:
- signing
- key encipherment
- server auth
- client auth
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: broker
spec:
dataFrom:
- extract:
key: kv//kube-namespace/holos-dev/broker
refreshInterval: 1h
secretStoreRef:
kind: SecretStore
name: core-vault
target:
creationPolicy: Owner
name: broker
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: broker-wss
namespace: holos-dev
spec:
host: broker.holos-dev.svc.cluster.local
trafficPolicy:
tls:
credentialName: istio-ingress-mtls-cert
mode: MUTUAL
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: broker-wss
namespace: holos-dev
spec:
gateways:
- istio-ingress/wildcard-pub-gw
hosts:
- provision.pub.k2.holos.run
http:
- route:
- destination:
host: broker.holos-dev.svc.cluster.local
port:
number: 443
tls:
mode: SIMPLE

View File

@@ -0,0 +1,8 @@
# Machine Room Provisioner
This sub-tree contains Holos Components to manage a [Choria Provisioner][1]
system for the use case of provisioning `holos controller` instances. These
instances are implementations of Machine Room, which are in turn implementations
of Choria Server; this is why we use Choria Provisioner.
[1]: https://choria-io.github.io/provisioner/

File diff suppressed because it is too large

View File

@@ -0,0 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# curl -LO https://github.com/nats-io/nack/releases/latest/download/crds.yml
resources:
- crds.yml

View File

@@ -0,0 +1,8 @@
package holos
// NATS JetStream Controller (NACK)
spec: components: KustomizeBuildList: [
#KustomizeBuild & {
metadata: name: "prod-nack-crds"
},
]

View File

@@ -0,0 +1,62 @@
package holos
// for Project in _Projects {
// spec: components: resources: (#ProjectTemplate & {project: Project}).workload.resources
// }
let Namespace = "jeff-holos"
#Kustomization: spec: targetNamespace: Namespace
spec: components: HelmChartList: [
#HelmChart & {
metadata: name: "jeff-holos-nats"
namespace: Namespace
_dependsOn: "prod-secrets-stores": _
chart: {
name: "nats"
version: "1.1.10"
repository: NatsRepository
}
_values: #NatsValues & {
config: {
// https://github.com/nats-io/k8s/tree/main/helm/charts/nats#operator-mode-with-nats-resolver
resolver: enabled: true
resolver: merge: {
type: "full"
interval: "2m"
timeout: "1.9s"
}
merge: {
operator: "eyJ0eXAiOiJKV1QiLCJhbGciOiJlZDI1NTE5LW5rZXkifQ.eyJqdGkiOiJUSElBTDM2NUtOS0lVVVJDMzNLNFJGQkJVRlFBSTRLS0NQTDJGVDZYVjdNQVhWU1dFNElRIiwiaWF0IjoxNzEzMjIxMzE1LCJpc3MiOiJPREtQM0RZTzc3T1NBRU5IU0FFR0s3WUNFTFBYT1FFWUI3RVFSTVBLWlBNQUxINE5BRUVLSjZDRyIsIm5hbWUiOiJIb2xvcyIsInN1YiI6Ik9ES1AzRFlPNzdPU0FFTkhTQUVHSzdZQ0VMUFhPUUVZQjdFUVJNUEtaUE1BTEg0TkFFRUtKNkNHIiwibmF0cyI6eyJ0eXBlIjoib3BlcmF0b3IiLCJ2ZXJzaW9uIjoyfX0.dQURTb-zIQMc-OYd9328oY887AEnvog6gOXY1-VCsDG3L89nq5x_ks4ME7dJ4Pn-Pvm2eyBi1Jx6ubgkthHgCQ"
system_account: "ADIQCYK4K3OKTPODGCLI4PDQ6XBO52MISBPTAIDESEJMLZCMNULDKCCY"
resolver_preload: {
// NOTE: Make sure you do not include a trailing , in the SYS_ACCOUNT_JWT
"ADIQCYK4K3OKTPODGCLI4PDQ6XBO52MISBPTAIDESEJMLZCMNULDKCCY": "eyJ0eXAiOiJKV1QiLCJhbGciOiJlZDI1NTE5LW5rZXkifQ.eyJqdGkiOiI2SEVMNlhKSUdWUElMNFBURVI1MkUzTkFITjZLWkVUUUdFTlFVS0JWRzNUWlNLRzVLT09RIiwiaWF0IjoxNzEzMjIxMzE1LCJpc3MiOiJPREtQM0RZTzc3T1NBRU5IU0FFR0s3WUNFTFBYT1FFWUI3RVFSTVBLWlBNQUxINE5BRUVLSjZDRyIsIm5hbWUiOiJTWVMiLCJzdWIiOiJBRElRQ1lLNEszT0tUUE9ER0NMSTRQRFE2WEJPNTJNSVNCUFRBSURFU0VKTUxaQ01OVUxES0NDWSIsIm5hdHMiOnsibGltaXRzIjp7InN1YnMiOi0xLCJkYXRhIjotMSwicGF5bG9hZCI6LTEsImltcG9ydHMiOi0xLCJleHBvcnRzIjotMSwid2lsZGNhcmRzIjp0cnVlLCJjb25uIjotMSwibGVhZiI6LTF9LCJkZWZhdWx0X3Blcm1pc3Npb25zIjp7InB1YiI6e30sInN1YiI6e319LCJhdXRob3JpemF0aW9uIjp7fSwidHlwZSI6ImFjY291bnQiLCJ2ZXJzaW9uIjoyfX0.TiGIk8XON394D9SBEowGHY_nTeOyHiM-ihyw6HZs8AngOnYPFXH9OVjsaAf8Poa2k_V84VtH7yVNgNdjBgduDA"
}
}
cluster: enabled: true
jetstream: enabled: true
websocket: enabled: true
monitor: enabled: true
}
promExporter: enabled: true
promExporter: podMonitor: enabled: true
}
},
#HelmChart & {
metadata: name: "jeff-holos-nack"
namespace: Namespace
_dependsOn: "jeff-holos-nats": _
chart: {
name: "nack"
version: "0.25.2"
repository: NatsRepository
}
},
]
let NatsRepository = {
name: "nats"
url: "https://nats-io.github.io/k8s/helm/charts/"
}

View File

@@ -0,0 +1,722 @@
package holos
#NatsValues: {
//###############################################################################
// Global options
//###############################################################################
global: {
image: {
// global image pull policy to use for all container images in the chart
// can be overridden by individual image pullPolicy
pullPolicy: null
// global list of secret names to use as image pull secrets for all pod specs in the chart
// secrets must exist in the same namespace
// https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
pullSecretNames: []
// global registry to use for all container images in the chart
// can be overridden by individual image registry
registry: null
}
// global labels will be applied to all resources deployed by the chart
labels: {}
}
//###############################################################################
// Common options
//###############################################################################
// override name of the chart
nameOverride: null
// override full name of the chart+release
fullnameOverride: null
// override the namespace that resources are installed into
namespaceOverride: null
// reference a common CA Certificate or Bundle in all nats config `tls` blocks and nats-box contexts
// note: `tls.verify` still must be set in the appropriate nats config `tls` blocks to require mTLS
tlsCA: {
enabled: false
// set configMapName in order to mount an existing configMap to dir
configMapName: null
// set secretName in order to mount an existing secretName to dir
secretName: null
// directory to mount the configMap or secret to
dir: "/etc/nats-ca-cert"
// key in the configMap or secret that contains the CA Certificate or Bundle
key: "ca.crt"
}
//###############################################################################
// NATS Stateful Set and associated resources
//###############################################################################
//###########################################################
// NATS config
//###########################################################
config: {
cluster: {
enabled: true | *false
port: 6222
// must be 2 or higher when jetstream is enabled
replicas: 3
// apply to generated route URLs that connect to other pods in the StatefulSet
routeURLs: {
// if both user and password are set, they will be added to route URLs
// and the cluster authorization block
user: null
password: null
// set to true to use FQDN in route URLs
useFQDN: false
k8sClusterDomain: "cluster.local"
}
tls: {
enabled: true | *false
// set secretName in order to mount an existing secret to dir
secretName: null
dir: "/etc/nats-certs/cluster"
cert: "tls.crt"
key: "tls.key"
// merge or patch the tls config
// https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls
merge: {}
patch: []
}
// merge or patch the cluster config
// https://docs.nats.io/running-a-nats-service/configuration/clustering/cluster_config
merge: {}
patch: []
}
jetstream: {
enabled: true | *false
fileStore: {
enabled: true
dir: "/data"
//###########################################################
// stateful set -> volume claim templates -> jetstream pvc
//###########################################################
pvc: {
enabled: true
size: "10Gi"
storageClassName: null
// merge or patch the jetstream pvc
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#persistentvolumeclaim-v1-core
merge: {}
patch: []
// defaults to "{{ include "nats.fullname" $ }}-js"
name: null
}
// defaults to the PVC size
maxSize: null
}
memoryStore: {
enabled: false
// ensure that container has a sufficient memory limit greater than maxSize
maxSize: "1Gi"
}
// merge or patch the jetstream config
// https://docs.nats.io/running-a-nats-service/configuration#jetstream
merge: {}
patch: []
}
nats: {
port: 4222
tls: {
enabled: false
// set secretName in order to mount an existing secret to dir
secretName: null
dir: "/etc/nats-certs/nats"
cert: "tls.crt"
key: "tls.key"
// merge or patch the tls config
// https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls
merge: {}
patch: []
}
}
leafnodes: {
enabled: false
port: 7422
tls: {
enabled: false
// set secretName in order to mount an existing secret to dir
secretName: null
dir: "/etc/nats-certs/leafnodes"
cert: "tls.crt"
key: "tls.key"
// merge or patch the tls config
// https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls
merge: {}
patch: []
}
// merge or patch the leafnodes config
// https://docs.nats.io/running-a-nats-service/configuration/leafnodes/leafnode_conf
merge: {}
patch: []
}
websocket: {
enabled: true | *false
port: 8080
tls: {
enabled: false
// set secretName in order to mount an existing secret to dir
secretName: null
dir: "/etc/nats-certs/websocket"
cert: "tls.crt"
key: "tls.key"
// merge or patch the tls config
// https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls
merge: {}
patch: []
}
//###########################################################
// ingress
//###########################################################
// service must be enabled also
ingress: {
enabled: false
// must contain at least 1 host otherwise ingress will not be created
hosts: []
path: "/"
pathType: "Exact"
// sets to the ingress class name
className: null
// set to an existing secret name to enable TLS on the ingress; applies to all hosts
tlsSecretName: null
// merge or patch the ingress
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#ingress-v1-networking-k8s-io
merge: {}
patch: []
// defaults to "{{ include "nats.fullname" $ }}-ws"
name: null
}
// merge or patch the websocket config
// https://docs.nats.io/running-a-nats-service/configuration/websocket/websocket_conf
merge: {}
patch: []
}
mqtt: {
enabled: false
port: 1883
tls: {
enabled: false
// set secretName in order to mount an existing secret to dir
secretName: null
dir: "/etc/nats-certs/mqtt"
cert: "tls.crt"
key: "tls.key"
// merge or patch the tls config
// https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls
merge: {}
patch: []
}
// merge or patch the mqtt config
// https://docs.nats.io/running-a-nats-service/configuration/mqtt/mqtt_config
merge: {}
patch: []
}
gateway: {
enabled: false
port: 7222
tls: {
enabled: false
// set secretName in order to mount an existing secret to dir
secretName: null
dir: "/etc/nats-certs/gateway"
cert: "tls.crt"
key: "tls.key"
// merge or patch the tls config
// https://docs.nats.io/running-a-nats-service/configuration/securing_nats/tls
merge: {}
patch: []
}
// merge or patch the gateway config
// https://docs.nats.io/running-a-nats-service/configuration/gateways/gateway#gateway-configuration-block
merge: {}
patch: []
}
monitor: {
enabled: true
port: 8222
tls: {
// config.nats.tls must be enabled also
// when enabled, monitoring port will use HTTPS with the options from config.nats.tls
enabled: false
}
}
profiling: {
enabled: false
port: 65432
}
resolver: {
enabled: true | *false
dir: "/data/resolver"
//###########################################################
// stateful set -> volume claim templates -> resolver pvc
//###########################################################
pvc: {
enabled: true
size: "1Gi"
storageClassName: null
// merge or patch the pvc
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#persistentvolumeclaim-v1-core
merge: {}
patch: []
// defaults to "{{ include "nats.fullname" $ }}-resolver"
name: null
}
// merge or patch the resolver
// https://docs.nats.io/running-a-nats-service/configuration/securing_nats/auth_intro/jwt/resolver
merge: {
type?: string
interval?: string
timeout?: string
}
patch: []
}
// adds a prefix to the server name, which defaults to the pod name
// helpful for ensuring server name is unique in a super cluster
serverNamePrefix: ""
// merge or patch the nats config
// https://docs.nats.io/running-a-nats-service/configuration
// following special rules apply
// 1. strings that start with << and end with >> will be unquoted
// use this for variables and numbers with units
// 2. keys ending in $include will be switched to include directives
// keys are sorted alphabetically, use prefix before $includes to control includes ordering
// paths should be relative to /etc/nats-config/nats.conf
// example:
//
// merge:
// $include: ./my-config.conf
// zzz$include: ./my-config-last.conf
// server_name: nats
// authorization:
// token: << $TOKEN >>
// jetstream:
// max_memory_store: << 1GB >>
//
// will yield the config:
// {
// include ./my-config.conf;
// "authorization": {
// "token": $TOKEN
// },
// "jetstream": {
// "max_memory_store": 1GB
// },
// "server_name": "nats",
// include ./my-config-last.conf;
// }
merge: {
operator?: string
system_account?: string
resolver_preload?: [string]: string
}
patch: []
}
//###########################################################
// stateful set -> pod template -> nats container
//###########################################################
container: {
image: {
repository: "nats"
tag: "2.10.12-alpine"
pullPolicy: null
registry: null
}
// container port options
// must be enabled in the config section also
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#containerport-v1-core
ports: {
nats: {}
leafnodes: {}
websocket: {}
mqtt: {}
cluster: {}
gateway: {}
monitor: {}
profiling: {}
}
// map with key as env var name, value can be string or map
// example:
//
// env:
// GOMEMLIMIT: 7GiB
// TOKEN:
// valueFrom:
// secretKeyRef:
// name: nats-auth
// key: token
env: {}
// merge or patch the container
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core
merge: {}
patch: []
}
//###########################################################
// stateful set -> pod template -> reloader container
//###########################################################
reloader: {
enabled: true
image: {
repository: "natsio/nats-server-config-reloader"
tag: "0.14.1"
pullPolicy: null
registry: null
}
// env var map, see nats.env for an example
env: {}
// all nats container volume mounts with the following prefixes
// will be mounted into the reloader container
natsVolumeMountPrefixes: ["/etc/"]
// merge or patch the container
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core
merge: {}
patch: []
}
//###########################################################
// stateful set -> pod template -> prom-exporter container
//###########################################################
// config.monitor must be enabled
promExporter: {
enabled: true | *false
image: {
repository: "natsio/prometheus-nats-exporter"
tag: "0.14.0"
pullPolicy: null
registry: null
}
port: 7777
// env var map, see nats.env for an example
env: {}
// merge or patch the container
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core
merge: {}
patch: []
//###########################################################
// prometheus pod monitor
//###########################################################
podMonitor: {
enabled: true | *false
// merge or patch the pod monitor
// https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.PodMonitor
merge: {}
patch: []
// defaults to "{{ include "nats.fullname" $ }}"
name: null
}
}
//###########################################################
// service
//###########################################################
service: {
enabled: true
// service port options
// additional boolean field enable to control whether port is exposed in the service
// must be enabled in the config section also
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#serviceport-v1-core
ports: {
nats: enabled: true
leafnodes: enabled: true
websocket: enabled: true
mqtt: enabled: true
cluster: enabled: false
gateway: enabled: false
monitor: enabled: false
profiling: enabled: false
}
// merge or patch the service
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#service-v1-core
merge: {}
patch: []
// defaults to "{{ include "nats.fullname" $ }}"
name: null
}
//###########################################################
// other nats extension points
//###########################################################
// stateful set
statefulSet: {
// merge or patch the stateful set
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#statefulset-v1-apps
merge: {}
patch: []
// defaults to "{{ include "nats.fullname" $ }}"
name: null
}
// stateful set -> pod template
podTemplate: {
// adds a hash of the ConfigMap as a pod annotation
// this will cause the StatefulSet to roll when the ConfigMap is updated
configChecksumAnnotation: true
// map of topologyKey: topologySpreadConstraint
// labelSelector will be added to match StatefulSet pods
//
// topologySpreadConstraints:
// kubernetes.io/hostname:
// maxSkew: 1
//
topologySpreadConstraints: {}
// merge or patch the pod template
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pod-v1-core
merge: {}
patch: []
}
// headless service
headlessService: {
// merge or patch the headless service
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#service-v1-core
merge: {}
patch: []
// defaults to "{{ include "nats.fullname" $ }}-headless"
name: null
}
// config map
configMap: {
// merge or patch the config map
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#configmap-v1-core
merge: {}
patch: []
// defaults to "{{ include "nats.fullname" $ }}-config"
name: null
}
// pod disruption budget
podDisruptionBudget: {
enabled: true
// merge or patch the pod disruption budget
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#poddisruptionbudget-v1-policy
merge: {}
patch: []
// defaults to "{{ include "nats.fullname" $ }}"
name: null
}
// service account
serviceAccount: {
enabled: false
// merge or patch the service account
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#serviceaccount-v1-core
merge: {}
patch: []
// defaults to "{{ include "nats.fullname" $ }}"
name: null
}
//###########################################################
// natsBox
//
// NATS Box Deployment and associated resources
//###########################################################
natsBox: {
enabled: true
//###########################################################
// NATS contexts
//###########################################################
contexts: {
default: {
creds: {
// set contents in order to create a secret with the creds file contents
contents: null
// set secretName in order to mount an existing secret to dir
secretName: null
// defaults to /etc/nats-creds/<context-name>
dir: null
key: "nats.creds"
}
nkey: {
// set contents in order to create a secret with the nkey file contents
contents: null
// set secretName in order to mount an existing secret to dir
secretName: null
// defaults to /etc/nats-nkeys/<context-name>
dir: null
key: "nats.nk"
}
// used to connect with client certificates
tls: {
// set secretName in order to mount an existing secret to dir
secretName: null
// defaults to /etc/nats-certs/<context-name>
dir: null
cert: "tls.crt"
key: "tls.key"
}
// merge or patch the context
// https://docs.nats.io/using-nats/nats-tools/nats_cli#nats-contexts
merge: {}
patch: []
}
}
// name of context to select by default
defaultContextName: "default"
//###########################################################
// deployment -> pod template -> nats-box container
//###########################################################
container: {
image: {
repository: "natsio/nats-box"
tag: "0.14.2"
pullPolicy: null
registry: null
}
// env var map, see nats.env for an example
env: {}
// merge or patch the container
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#container-v1-core
merge: {}
patch: []
}
//###########################################################
// other nats-box extension points
//###########################################################
// deployment
deployment: {
// merge or patch the deployment
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#deployment-v1-apps
merge: {}
patch: []
// defaults to "{{ include "nats.fullname" $ }}-box"
name: null
}
// deployment -> pod template
podTemplate: {
// merge or patch the pod template
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pod-v1-core
merge: {}
patch: []
}
// contexts secret
contextsSecret: {
// merge or patch the context secret
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secret-v1-core
merge: {}
patch: []
// defaults to "{{ include "nats.fullname" $ }}-box-contexts"
name: null
}
// contents secret
contentsSecret: {
// merge or patch the contents secret
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secret-v1-core
merge: {}
patch: []
// defaults to "{{ include "nats.fullname" $ }}-box-contents"
name: null
}
// service account
serviceAccount: {
enabled: false
// merge or patch the service account
// https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#serviceaccount-v1-core
merge: {}
patch: []
// defaults to "{{ include "nats.fullname" $ }}-box"
name: null
}
}
//###############################################################################
// Extra user-defined resources
//###############################################################################
//
// add arbitrary user-generated resources
// example:
//
// config:
// websocket:
// enabled: true
// extraResources:
// - apiVersion: networking.istio.io/v1beta1
// kind: VirtualService
// metadata:
// name:
// $tplYaml: >
// {{ include "nats.fullname" $ | quote }}
// labels:
// $tplYaml: |
// {{ include "nats.labels" $ }}
// spec:
// hosts:
// - demo.nats.io
// gateways:
// - my-gateway
// http:
// - name: default
// match:
// - name: root
// uri:
// exact: /
// route:
// - destination:
// host:
// $tplYaml: >
// {{ .Values.service.name | quote }}
// port:
// number:
// $tplYaml: >
// {{ .Values.config.websocket.port }}
//
extraResources: []
}

View File

@@ -0,0 +1,81 @@
#!/bin/bash
#
# This script initializes authorization for a nats cluster. The process is:
#
# Locally:
# 1. Generate the nats operator jwt.
# 2. Generate a SYS account jwt issued by the operator.
# 3. Store both into vault
#
# When nats is deployed, an ExternalSecret populates auth.conf, which is
# included into nats.conf. This approach allows helm values to be used for
# most things except for secrets.
#
# Clean up by removing the nsc directory.
set -euo pipefail
tmpdir="$(mktemp -d)"
finish() {
[[ -d "$tmpdir" ]] && rm -rf "$tmpdir"
}
trap finish EXIT
PARENT="$(cd "$(dirname "$0")" && pwd)"
: "${OPERATOR_NAME:="Holos"}"
: "${OIX_NAMESPACE:=$(kubectl config view --minify --flatten -ojsonpath='{.contexts[0].context.namespace}')}"
nsc="${HOME}/.bin/nsc"
ROOT="${PARENT}/${OIX_NAMESPACE}/nsc"
export NKEYS_PATH="${ROOT}/nkeys"
export NSC_HOME="${ROOT}/accounts"
mkdir -p "$NKEYS_PATH"
mkdir -p "$NSC_HOME"
# Install nsc if not already installed
if ! [[ -x $nsc ]]; then
platform="$(kubectl version --output=json | jq .clientVersion.platform -r)"
platform="${platform//\//-}"
curl -fSLo "${tmpdir}/nsc.zip" "https://github.com/nats-io/nsc/releases/download/v2.8.6/nsc-${platform}.zip"
(cd "${tmpdir}" && unzip nsc.zip)
sudo install -o 0 -g 0 -m 0755 "${tmpdir}/nsc" "$nsc"
fi
echo "export NKEYS_PATH='${NKEYS_PATH}'" > "${ROOT}/nsc.env"
echo "export NSC_HOME='${NSC_HOME}'" >> "${ROOT}/nsc.env"
# use kubectl port-forward nats-headless 4222
echo "export NATS_URL='nats://localhost:4222'" >> "${ROOT}/nsc.env"
echo "export NATS_CREDS='${ROOT}/nkeys/creds/${OPERATOR_NAME}/SYS/sys.creds'" >> "${ROOT}/nsc.env"
echo "export NATS_CA='${ROOT}/ca.crt'" >> "${ROOT}/nsc.env"
echo "export NATS_CERT='${ROOT}/tls.crt'" >> "${ROOT}/nsc.env"
echo "export NATS_KEY='${ROOT}/tls.key'" >> "${ROOT}/nsc.env"
$nsc --data-dir="${ROOT}/stores" list operators
# Create operator
$nsc add operator --name "${OPERATOR_NAME}"
# Create system account
$nsc add account --name SYS
$nsc add user --name sys
# Create account for STAN purposes.
$nsc add account --name STAN
$nsc add user --name stan
# Generate an auth config compatible with the StatefulSet mounting the
# nats-jwt-pvc PersistentVolumeClaim at path /data/accounts
$nsc generate config --sys-account SYS --nats-resolver \
| sed "s,dir.*jwt',dir: '/data/accounts'," \
> "${ROOT}/auth.conf"
# Store the auth config in vault.
# vault kv put kv/${OIX_CLUSTER_NAME}/kube-namespace/holos-dev/nats-auth-config "auth.conf=@${ROOT}/auth.conf"
# Store the SYS creds in vault for use by the nack controller.
# vault kv put kv/${OIX_CLUSTER_NAME}/kube-namespace/holos-dev/nats-sys-creds "sys.creds=@${OIX_CLUSTER_NAME}/nsc/nkeys/creds/${OPERATOR_NAME}/SYS/sys.creds"
echo "After deploying the nats component, use the get-cert command to fetch the client cert." >&2
echo "Use kubectl port-forward svc/nats-headless 4222" >&2
echo "Run 'source ${ROOT}/nsc.env' to configure the nsc and nats CLI environment." >&2
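
# The sed substitution in this script rewrites the resolver directory emitted
# by `nsc generate config` so it matches the StatefulSet's PVC mount path. A
# minimal sketch of the transformation; the sample input line mimics nsc's
# typical output, though the exact quoting and indentation nsc emits may vary:
#
#   echo "    dir: './jwt'" \
#     | sed "s,dir.*jwt',dir: '/data/accounts',"
#
# The resolver line becomes:    dir: '/data/accounts'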

View File

@@ -0,0 +1,5 @@
# Holos
This subtree contains holos components for holos itself. We strive for minimal dependencies, so it is likely to contain only NATS and/or Postgres resources.
Components depend on the holos project and may iterate over the environments defined in the project stages.

View File

@@ -21,12 +21,12 @@ spec: components: KubernetesObjectsList: [
// GatewayServers represents all hosts for all VirtualServices in the cluster attached to Gateway/default
// NOTE: This is a critical structure because the default Gateway should be used in most cases.
let GatewayServers = {
// Critical Feature: Map all Project hosts to the default Gateway.
for Project in _Projects {
for server in (#ProjectTemplate & {project: Project}).ClusterGatewayServers {
(server.port.name): server
}
(#ProjectTemplate & {project: Project}).ClusterDefaultGatewayServers
}
// TODO: Refactor to use FQDN as key
for k, svc in #OptionalServices {
if svc.enabled && list.Contains(svc.clusterNames, #ClusterName) {
for server in svc.servers {
@@ -35,6 +35,7 @@ let GatewayServers = {
}
}
// TODO: Remove? Why aren't these part of the platform project?
if #PlatformServers[#ClusterName] != _|_ {
for server in #PlatformServers[#ClusterName] {
(server.port.name): server
@@ -52,6 +53,11 @@ let OBJECTS = #APIObjects & {
spec: servers: [for x in GatewayServers {x}]
}
// Manage an ExternalSecret for each server defined in the default Gateway to sync the cert.
for Server in Gateway.default.spec.servers {
ExternalSecret: "\(Server.tls.credentialName)": metadata: namespace: "istio-ingress"
}
for k, svc in #OptionalServices {
if svc.enabled && list.Contains(svc.clusterNames, #ClusterName) {
for k, s in svc.servers {

View File

@@ -8,7 +8,7 @@ let ComponentName = "\(#InstancePrefix)-ingress"
spec: components: HelmChartList: [
#HelmChart & {
_dependsOn: "prod-secrets-namespaces": _
_dependsOn: "prod-secrets-stores": _
_dependsOn: "\(#InstancePrefix)-istio-base": _
_dependsOn: "\(#InstancePrefix)-istiod": _
@@ -76,6 +76,10 @@ let RedirectMetaName = {
let OBJECTS = #APIObjects & {
apiObjects: {
ExternalSecret: "istio-ingress-mtls-cert": #ExternalSecret & {
metadata: name: "istio-ingress-mtls-cert"
metadata: namespace: #TargetNamespace
}
Gateway: {
"\(RedirectMetaName.name)": #Gateway & {
metadata: RedirectMetaName

View File

@@ -4,7 +4,7 @@ package holos
let Namespace = "prod-platform"
// FYI: kube-prometheus-stack is a large umbrella chart what brings in other large charts like
// FYI: kube-prometheus-stack is a large umbrella chart that brings in other large charts like
// [grafana](https://github.com/grafana/helm-charts/tree/main/charts/grafana).
// This may affect maintainability. Consider breaking the integration down into
// constituent charts represented as holos component instances.
@@ -77,7 +77,7 @@ spec: components: HelmChartList: [
token_url: OIDC.token_endpoint
api_url: OIDC.userinfo_endpoint
use_pkce: true
name_attribute_path: name
name_attribute_path: "name"
// TODO: Lift the admin, editor, and viewer group names up to the platform config struct.
role_attribute_path: "contains(groups[*], 'prod-cluster-admin') && 'Admin' || contains(groups[*], 'prod-cluster-editor') && 'Editor' || 'Viewer'"
}

View File

@@ -21,8 +21,15 @@ _Projects: #Projects & {
holos: {
resourceId: ZitadelProjectID
clusters: k1: _
clusters: k2: _
domain: "holos.run"
clusters: core1: _
clusters: core2: _
clusters: k1: _
clusters: k2: _
clusters: k3: _
clusters: k4: _
clusters: k5: _
environments: {
prod: stage: "prod"
dev: stage: "dev"
@@ -30,6 +37,13 @@ _Projects: #Projects & {
gary: stage: dev.stage
nate: stage: dev.stage
}
// app is the holos web app and grpc api.
hosts: app: _
// provision is the choria broker provisioning system.
hosts: provision: _
// nats is the nats service that holos controller machine room agents connect to after provisioning.
hosts: nats: _
}
iam: {

View File

@@ -0,0 +1,40 @@
package holos
// Certificate used by the ingress to connect to services that use a platform
// issued certificate but are not using istio sidecar injection.
// Examples are keycloak, vault, nats, choria, etc...
let Namespace = "istio-ingress"
let CertName = "istio-ingress-mtls-cert"
spec: components: KubernetesObjectsList: [
#KubernetesObjects & {
_dependsOn: "prod-platform-issuer": _
metadata: name: CertName
apiObjectMap: OBJECTS.apiObjectMap
},
]
let OBJECTS = #APIObjects & {
apiObjects: {
Certificate: "\(CertName)": #Certificate & {
metadata: {
name: CertName
namespace: Namespace
}
spec: {
secretName: metadata.name
issuerRef: kind: "ClusterIssuer"
issuerRef: name: "platform-issuer"
commonName: "istio-ingress"
dnsNames: [
"istio-ingress",
"istio-ingress.\(Namespace)",
"istio-ingress.\(Namespace).svc",
"istio-ingress.\(Namespace).svc.cluster.local",
]
}
}
}
}

View File

@@ -0,0 +1,52 @@
package holos
// Refer to https://cert-manager.io/docs/configuration/selfsigned/#bootstrapping-ca-issuers
let Namespace = "cert-manager"
spec: components: KubernetesObjectsList: [
#KubernetesObjects & {
metadata: name: "prod-platform-issuer"
_dependsOn: "prod-mesh-certmanager": _
apiObjectMap: OBJECTS.apiObjectMap
},
]
let SelfSigned = "platform-selfsigned"
let PlatformIssuer = "platform-issuer"
let OBJECTS = #APIObjects & {
apiObjects: {
ClusterIssuer: {
"\(SelfSigned)": #ClusterIssuer & {
metadata: name: SelfSigned
spec: selfSigned: {}
}
}
Certificate: {
"\(PlatformIssuer)": #Certificate & {
metadata: name: PlatformIssuer
metadata: namespace: Namespace
spec: {
isCA: true
commonName: PlatformIssuer
secretName: PlatformIssuer
privateKey: algorithm: "ECDSA"
privateKey: size: 256
issuerRef: {
name: SelfSigned
kind: "ClusterIssuer"
group: "cert-manager.io"
}
}
}
}
ClusterIssuer: {
"\(PlatformIssuer)": #ClusterIssuer & {
metadata: name: PlatformIssuer
spec: ca: secretName: PlatformIssuer
}
}
}
}

View File

@@ -0,0 +1,5 @@
# Platform Issuer
The platform issuer is a self-signed root certificate authority that acts as a private PKI for the platform. It issues certificates for internal use within the platform in a way that supports multi-cluster communication.
Refer to [Bootstrapping CA Issuers](https://cert-manager.io/docs/configuration/selfsigned/#bootstrapping-ca-issuers)

View File

@@ -1,5 +1,11 @@
package holos
for Project in _Projects {
spec: components: resources: (#ProjectTemplate & {project: Project}).provisioner.resources
// Debugging variable to enable inspecting the project host data:
// cue eval --out json -t cluster=provisioner ./platforms/reference/clusters/provisioner/projects/... -e _ProjectHosts.holos > hosts.json
let ProjectData = (#ProjectTemplate & {project: Project})
_ProjectHosts: "\(Project.name)": ProjectData.ProjectHosts
spec: components: resources: ProjectData.provisioner.resources
}

View File

@@ -0,0 +1,22 @@
package holos
// Platform level definition of a project.
#Project: {
name: string
// All projects have at least a prod environment and stage.
stages: prod: stageSegments: []
environments: prod: stage: "prod"
environments: prod: envSegments: []
stages: dev: _
environments: dev: stage: "dev"
environments: dev: envSegments: []
// Ensure at least the project name is a short hostname. Additional hosts may be added.
hosts: (name): _
// environments share the stage segments of their stage.
environments: [_]: {
stage: string
stageSegments: stages[stage].stageSegments
}
}
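
A sketch of how `#Project` unifies with a concrete instance; the project name and extra environment below are hypothetical, not from the source:

```cue
package holos

// Hypothetical project instance; all names here are illustrative.
example: #Project & {
	name: "example" // hosts: example: _ is implied by the definition
	// prod and dev stages/environments come from the defaults in #Project.
	// An extra environment sharing the dev stage inherits its stageSegments:
	environments: preview: stage: "dev"
	environments: preview: envSegments: ["preview"]
}
```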

View File

@@ -1 +0,0 @@
package holos

View File

@@ -0,0 +1,117 @@
package holos
import "strings"
// #ProjectHosts represents all of the hosts associated with the project,
// organized for use in Certificates, Gateways, and VirtualServices.
#ProjectHosts: {
project: #Project
// Hosts maps each fqdn key to its host info so the collection can be reduced
// into structs organized by stage, canonical name, etc...  The flat structure
// and long list of properties are intended to make it straightforward to
// derive other structs for Gateways, VirtualServices, Certificates, AuthProxy
// cookie domains, etc...
Hosts: {
for Env in project.environments {
for Host in project.hosts {
// Global hostname, e.g. app.holos.run
let CertInfo = (#MakeCertInfo & {
host: Host
env: Env
domain: project.domain
}).CertInfo
"\(CertInfo.fqdn)": CertInfo
// Cluster hostname, e.g. app.east1.holos.run, app.west1.holos.run
for Cluster in project.clusters {
let CertInfo = (#MakeCertInfo & {
host: Host
env: Env
domain: project.domain
cluster: Cluster.name
}).CertInfo
"\(CertInfo.fqdn)": CertInfo
}
}
}
}
}
// #MakeCertInfo provides dns info for a certificate
// Refer to: https://github.com/holos-run/holos/issues/66#issuecomment-2027562626
#MakeCertInfo: {
host: #Host
env: #Environment
domain: string
cluster: string
let Stage = #StageInfo & {name: env.stage, project: env.project}
let Env = env
// DNS segments from left to right.
let EnvSegments = env.envSegments
WildcardSegments: [...string]
if len(env.envSegments) > 0 {
WildcardSegments: ["*"]
}
let HostSegments = [host.name]
let StageSegments = env.stageSegments
ClusterSegments: [...string]
if cluster != _|_ {
ClusterSegments: [cluster]
}
let DomainSegments = [domain]
// Assemble the segments
let FQDN = EnvSegments + HostSegments + StageSegments + ClusterSegments + DomainSegments
let WILDCARD = WildcardSegments + HostSegments + StageSegments + ClusterSegments + DomainSegments
let CANONICAL = HostSegments + StageSegments + DomainSegments
CertInfo: #CertInfo & {
fqdn: strings.Join(FQDN, ".")
wildcard: strings.Join(WILDCARD, ".")
canonical: strings.Join(CANONICAL, ".")
project: name: Env.project
stage: #StageOrEnvRef & {
name: Stage.name
slug: Stage.slug
namespace: Stage.namespace
}
env: #StageOrEnvRef & {
name: Env.name
slug: Env.slug
namespace: Env.namespace
}
}
}
// #CertInfo defines the attributes associated with a fully qualified domain name
#CertInfo: {
// fqdn is the fully qualified domain name, never a wildcard.
fqdn: string
// canonical is the canonical name this name may be an alternate name for.
canonical: string
// wildcard may replace the leftmost segment of fqdn with a wildcard to consolidate cert dnsNames. If not a wildcard, it must equal fqdn.
wildcard: string
// Project, stage and env attributes for mapping and collecting.
project: name: string
stage: #StageOrEnvRef
env: #StageOrEnvRef
}
#StageOrEnvRef: {
name: string
slug: string
namespace: string
}
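
As a worked sketch of the segment assembly above, a hypothetical host `app` in a project with domain `holos.run`, a `dev` stage, and a `jeff` environment might evaluate as follows (the env and cluster names and their segments are illustrative assumptions):

```cue
// Hypothetical evaluation sketch; all values are illustrative.
//
// env jeff (envSegments: ["jeff"], stageSegments: ["dev"]), no cluster:
//   fqdn:      "jeff.app.dev.holos.run"
//   wildcard:  "*.app.dev.holos.run"   // "*" replaces the env segment
//   canonical: "app.dev.holos.run"
//
// env prod (envSegments: [], stageSegments: []), cluster k1:
//   fqdn:      "app.k1.holos.run"
//   wildcard:  "app.k1.holos.run"      // no env segment, so wildcard == fqdn
//   canonical: "app.holos.run"
```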

View File

@@ -0,0 +1,45 @@
package holos
#ProjectTemplate: {
project: _
GatewayServers: _
// Sort GatewayServers by the tls credentialName to issue wildcards
let GatewayCerts = {
for FQDN, Server in GatewayServers {
let CertInfo = Server._CertInfo
// Sort into stage for the holos components, e.g. prod-iam-certs, dev-iam-certs
"\(CertInfo.stage.slug)": {
"\(Server.tls.credentialName)": #Certificate & {
// Store the dnsNames in a struct so they can be collected into a list
_dnsNames: "\(CertInfo.wildcard)": CertInfo.wildcard
metadata: name: CertInfo.canonical & Server.tls.credentialName
metadata: namespace: "istio-ingress"
spec: {
commonName: CertInfo.canonical
secretName: CertInfo.canonical & Server.tls.credentialName
dnsNames: [for x in _dnsNames {x}]
issuerRef: {
kind: "ClusterIssuer"
name: "letsencrypt-staging"
}
}
}
}
}
}
// Resources to be managed on the provisioner cluster.
provisioner: resources: {
for stage in project.stages {
"\(stage.slug)-certs": #KubernetesObjects & {
apiObjectMap: (#APIObjects & {
apiObjects: Certificate: GatewayCerts[stage.slug]
}).apiObjectMap
}
}
}
}
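
The GatewayCerts mapping above collapses per-environment hostnames into one wildcard certificate per canonical name. A sketch of a resulting object for a hypothetical dev-stage host (hostnames are illustrative, not from the source):

```cue
// Hypothetical rendered Certificate; all names are illustrative.
Certificate: "app.dev.holos.run": {
	metadata: name:      "app.dev.holos.run" // canonical name == credentialName
	metadata: namespace: "istio-ingress"
	spec: {
		commonName: "app.dev.holos.run"
		secretName: "app.dev.holos.run"
		// One wildcard entry covers jeff.app.dev..., gary.app.dev..., etc.,
		// so new environments need no reissue and the cert stays under
		// LetsEncrypt's 100-name limit.
		dnsNames: ["app.dev.holos.run", "*.app.dev.holos.run"]
		issuerRef: {kind: "ClusterIssuer", name: "letsencrypt-staging"}
	}
}
```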

View File

@@ -1,87 +1,69 @@
package holos
import "encoding/yaml"
import (
h "github.com/holos-run/holos/api/v1alpha1"
"encoding/yaml"
)
// Platform level definition of a project.
#Project: {
name: string
// let SourceLoc = "project-template.cue"
// All projects have at least a prod environment and stage.
stages: prod: stageSegments: []
environments: prod: stage: "prod"
environments: prod: envSegments: []
stages: dev: _
environments: dev: stage: "dev"
environments: dev: envSegments: []
// Ensure at least the project name is a short hostname. Additional hosts may be added.
hosts: (name): _
#ProjectTemplate: {
project: #Project
// environments share the stage segments of their stage.
environments: [_]: {
stage: string
stageSegments: stages[stage].stageSegments
// workload cluster resources
workload: resources: [Name=_]: h.#KubernetesObjects & {
metadata: name: Name
}
// provisioner cluster resources
provisioner: resources: [Name=_]: h.#KubernetesObjects & {
metadata: name: Name
}
}
// Reference Platform Project Template
#ProjectTemplate: {
project: #Project
let Project = project
// GatewayServers maps Gateway spec.servers #GatewayServer values indexed by stage then name.
let GatewayServers = {
// Initialize all stages, even if they have no environments.
for stage in project.stages {
(stage.name): {}
}
ProjectHosts: (#ProjectHosts & {project: Project}).Hosts
// For each stage, construct entries for the Gateway spec.servers.hosts field.
for env in project.environments {
(env.stage): {
let Env = env
let Stage = project.stages[env.stage]
for host in (#EnvHosts & {project: Project, env: Env}).hosts {
(host.name): #GatewayServer & {
hosts: [
"\(env.namespace)/\(host.name)",
// Allow the authproxy VirtualService to match the project.authProxyPrefix path.
"\(Stage.namespace)/\(host.name)",
]
port: host.port
tls: credentialName: host.name
tls: mode: "SIMPLE"
}
// GatewayServers maps Gateway spec.servers #GatewayServer values indexed by stage then name.
GatewayServers: {
for FQDN, Host in ProjectHosts {
"\(FQDN)": #GatewayServer & {
_CertInfo: Host
hosts: [
"\(Host.env.namespace)/\(FQDN)",
// Allow the authproxy VirtualService to match the project.authProxyPrefix path.
"\(Host.stage.namespace)/\(FQDN)",
]
port: {
name: "https"
number: 443
protocol: "HTTPS"
}
tls: credentialName: Host.canonical
tls: mode: "SIMPLE"
}
}
}
// ClusterGatewayServers provides a struct of Gateway servers for the current cluster.
// ClusterDefaultGatewayServers provides a struct of Gateway servers for the current cluster.
// This is intended for Gateway/default to add all servers to the default gateway.
ClusterGatewayServers: {
ClusterDefaultGatewayServers: {
if project.clusters[#ClusterName] != _|_ {
for Stage in project.stages {
for server in GatewayServers[Stage.name] {
(server.port.name): server
}
}
GatewayServers
}
}
workload: resources: {
// Provide resources only if the project is managed on --cluster-name
// Provide resources only if the project is managed on the cluster specified
// by --cluster-name
if project.clusters[#ClusterName] != _|_ {
for stage in project.stages {
let Stage = stage
// Istio Gateway
"\(stage.slug)-gateway": #KubernetesObjects & {
apiObjectMap: (#APIObjects & {
for host in GatewayServers[stage.name] {
apiObjects: ExternalSecret: (host.tls.credentialName): metadata: namespace: "istio-ingress"
}
}).apiObjectMap
}
// Manage auth-proxy in each stage
if project.features.authproxy.enabled {
"\(stage.slug)-authproxy": #KubernetesObjects & {
@@ -114,90 +96,6 @@ import "encoding/yaml"
}
}
}
provisioner: resources: {
for stage in project.stages {
"\(stage.slug)-certs": #KubernetesObjects & {
apiObjectMap: (#APIObjects & {
for host in GatewayServers[stage.name] {
let CN = host.tls.credentialName
apiObjects: Certificate: (CN): #Certificate & {
metadata: name: CN
metadata: namespace: "istio-ingress"
spec: {
commonName: CN
dnsNames: [CN]
secretName: CN
issuerRef: {
kind: "ClusterIssuer"
name: "letsencrypt"
}
}
}
}
}).apiObjectMap
}
}
}
}
let HTTPBIN = {
name: string | *"httpbin"
project: #Project
env: #Environment
let Name = name
let Stage = project.stages[env.stage]
let Metadata = {
name: Name
namespace: env.namespace
labels: app: name
}
let Labels = {
"app.kubernetes.io/name": Name
"app.kubernetes.io/instance": env.slug
"app.kubernetes.io/part-of": env.project
"security.holos.run/authproxy": Stage.extAuthzProviderName
}
apiObjects: {
Deployment: (Name): #Deployment & {
metadata: Metadata
spec: selector: matchLabels: Metadata.labels
spec: template: {
metadata: labels: Metadata.labels & #IstioSidecar & Labels
spec: securityContext: seccompProfile: type: "RuntimeDefault"
spec: containers: [{
name: Name
image: "quay.io/holos/mccutchen/go-httpbin"
ports: [{containerPort: 8080}]
securityContext: {
seccompProfile: type: "RuntimeDefault"
allowPrivilegeEscalation: false
runAsNonRoot: true
runAsUser: 8192
runAsGroup: 8192
capabilities: drop: ["ALL"]
}}]
}
}
Service: (Name): #Service & {
metadata: Metadata
spec: selector: Metadata.labels
spec: ports: [
{port: 80, targetPort: 8080, protocol: "TCP", name: "http"},
]
}
VirtualService: (Name): #VirtualService & {
metadata: Metadata
let Project = project
let Env = env
spec: hosts: [for host in (#EnvHosts & {project: Project, env: Env}).hosts {host.name}]
spec: gateways: ["istio-ingress/default"]
spec: http: [{route: [{destination: host: Name}]}]
}
}
}
// AUTHPROXY configures one oauth2-proxy deployment for each host in each stage of a project. Multiple deployments per stage are used to narrow down the cookie domain.
@@ -462,6 +360,65 @@ let AUTHPROXY = {
}
}
let HTTPBIN = {
name: string | *"httpbin"
project: #Project
env: #Environment
let Name = name
let Stage = project.stages[env.stage]
let Metadata = {
name: Name
namespace: env.namespace
labels: app: name
}
let Labels = {
"app.kubernetes.io/name": Name
"app.kubernetes.io/instance": env.slug
"app.kubernetes.io/part-of": env.project
"security.holos.run/authproxy": Stage.extAuthzProviderName
}
apiObjects: {
Deployment: (Name): #Deployment & {
metadata: Metadata
spec: selector: matchLabels: Metadata.labels
spec: template: {
metadata: labels: Metadata.labels & #IstioSidecar & Labels
spec: securityContext: seccompProfile: type: "RuntimeDefault"
spec: containers: [{
name: Name
image: "quay.io/holos/mccutchen/go-httpbin"
ports: [{containerPort: 8080}]
securityContext: {
seccompProfile: type: "RuntimeDefault"
allowPrivilegeEscalation: false
runAsNonRoot: true
runAsUser: 8192
runAsGroup: 8192
capabilities: drop: ["ALL"]
}}]
}
}
Service: (Name): #Service & {
metadata: Metadata
spec: selector: Metadata.labels
spec: ports: [
{port: 80, targetPort: 8080, protocol: "TCP", name: "http"},
]
}
VirtualService: (Name): #VirtualService & {
metadata: Metadata
let Project = project
let Env = env
spec: hosts: [for host in (#EnvHosts & {project: Project, env: Env}).hosts {host.name}]
spec: gateways: ["istio-ingress/default"]
spec: http: [{route: [{destination: host: Name}]}]
}
}
}
// AUTHPOLICY configures the baseline AuthorizationPolicy and RequestAuthentication policy for each stage of each project.
let AUTHPOLICY = {
project: #Project

View File

@@ -1,12 +1,12 @@
package holos
import h "github.com/holos-run/holos/api/v1alpha1"
import "strings"
// #Projects is a map of all the projects in the platform.
#Projects: [Name=_]: #Project & {name: Name}
_Projects: #Projects
// The platform project is required and where platform services reside. ArgoCD, Grafana, Prometheus, etc...
#Projects: platform: _
@@ -59,7 +59,7 @@ import "strings"
}
}
// features is YAGNI maybe?
// These are useful to enable / disable.
features: [Name=string]: #Feature & {name: Name}
features: authproxy: _
features: httpbin: _
@@ -91,15 +91,25 @@ import "strings"
name: string
cluster?: string
clusterSegments: [...string]
wildcard: true | *false
if cluster != _|_ {
clusterSegments: [cluster]
}
let SEGMENTS = envSegments + [name] + stageSegments + clusterSegments + [#Platform.org.domain]
_EnvSegments: [...string]
if wildcard {
if len(envSegments) > 0 {
_EnvSegments: ["*"]
}
}
if !wildcard {
_EnvSegments: envSegments
}
let SEGMENTS = _EnvSegments + [name] + stageSegments + clusterSegments + [_Projects[project].domain]
let NAMESEGMENTS = ["https"] + SEGMENTS
host: {
name: strings.Join(SEGMENTS, ".")
port: {
name: strings.Replace(strings.Join(NAMESEGMENTS, "-"), ".", "-", -1)
name: strings.Replace(strings.Replace(strings.Join(NAMESEGMENTS, "-"), ".", "-", -1), "*", "wildcard", -1)
number: 443
protocol: "HTTPS"
}
@@ -107,17 +117,26 @@ import "strings"
}
}
#Stage: {
#StageInfo: {
name: string
project: string
slug: "\(name)-\(project)"
// namespace is the system namespace for the project stage
namespace: "\(name)-\(project)-system"
}
#Stage: {
#StageInfo
name: string
project: string
namespace: string
slug: string
// Manage a system namespace for each stage
namespaces: [Name=_]: name: Name
namespaces: (namespace): _
namespaces: "\(namespace)": _
// stageSegments are the stage portion of the dns segments
stageSegments: [...string] | *[name]
stageSegments: [] | *[name]
// authProxyClientID is the ClientID registered with the oidc issuer.
authProxyClientID: string
// extAuthzProviderName is the provider name in the mesh config
@@ -130,20 +149,6 @@ import "strings"
enabled: true | *false
}
#ProjectTemplate: {
project: #Project
// workload cluster resources
workload: resources: [Name=_]: h.#KubernetesObjects & {
metadata: name: Name
}
// provisioner cluster resources
provisioner: resources: [Name=_]: h.#KubernetesObjects & {
metadata: name: Name
}
}
// #EnvHosts provides hostnames given a project and environment.
// Refer to https://github.com/holos-run/holos/issues/66#issuecomment-2027562626
#EnvHosts: {
@@ -166,7 +171,7 @@ import "strings"
}
// #StageDomains provides hostnames given a project and stage. Primarily for the
// auth proxy.
// auth proxy cookie domains.
// Refer to https://github.com/holos-run/holos/issues/66#issuecomment-2027562626
#StageDomains: {
// names are the leading prefix names to create hostnames for.

View File

@@ -15,6 +15,7 @@ import (
crt "cert-manager.io/certificate/v1"
gw "networking.istio.io/gateway/v1beta1"
vs "networking.istio.io/virtualservice/v1beta1"
dr "networking.istio.io/destinationrule/v1beta1"
ra "security.istio.io/requestauthentication/v1"
ap "security.istio.io/authorizationpolicy/v1"
pg "postgres-operator.crunchydata.com/postgrescluster/v1beta1"
@@ -77,7 +78,9 @@ _apiVersion: "holos.run/v1alpha1"
#Job: #NamespaceObject & batchv1.#Job
#CronJob: #NamespaceObject & batchv1.#CronJob
#Deployment: #NamespaceObject & appsv1.#Deployment
#StatefulSet: #NamespaceObject & appsv1.#StatefulSet
#VirtualService: #NamespaceObject & vs.#VirtualService
#DestinationRule: #NamespaceObject & dr.#DestinationRule
#RequestAuthentication: #NamespaceObject & ra.#RequestAuthentication
#AuthorizationPolicy: #NamespaceObject & ap.#AuthorizationPolicy
#Certificate: #NamespaceObject & crt.#Certificate
@@ -182,7 +185,7 @@ _apiVersion: "holos.run/v1alpha1"
pool?: string
// region is the geographic region of the cluster.
region?: string
// primary is true if name matches the primaryCluster name
// primary is true if the cluster is the primary cluster among a group of related clusters.
primary: bool
}
@@ -219,6 +222,7 @@ _apiVersion: "holos.run/v1alpha1"
primary: false
}
}
// TODO: Remove stages, they're in the subdomain of projects.
stages: [ID=_]: {
name: string & ID
environments: [...{name: string}]
@@ -226,9 +230,11 @@ _apiVersion: "holos.run/v1alpha1"
projects: [ID=_]: {
name: string & ID
}
// TODO: Remove services, they're in the subdomain of projects.
services: [ID=_]: {
name: string & ID
}
// authproxy configures the auth proxy attached to the default ingress gateway in the istio-ingress namespace.
authproxy: #AuthProxySpec & {
namespace: "istio-ingress"
@@ -277,29 +283,6 @@ _apiVersion: "holos.run/v1alpha1"
idTokenHeader: string | *"x-oidc-id-token"
}
// ManagedNamespace is a namespace to manage across all clusters in the holos platform.
#ManagedNamespace: {
namespace: {
metadata: {
name: string
labels: [string]: string
}
}
// clusterNames represents the set of clusters the namespace is managed on. Usually all clusters.
clusterNames: [...string]
for cluster in clusterNames {
clusters: (cluster): name: cluster
}
}
// #ManagedNamespaces is the union of all namespaces across all cluster types and optional services.
// Holos adopts the namespace sameness position of SIG Multicluster, refer to https://github.com/kubernetes/community/blob/dd4c8b704ef1c9c3bfd928c6fa9234276d61ad18/sig-multicluster/namespace-sameness-position-statement.md
#ManagedNamespaces: {
[Name=_]: #ManagedNamespace & {
namespace: metadata: name: Name
}
}
// #Backups defines backup configuration.
// TODO: Consider the best place for this, possibly as part of the site platform config. This represents the primary location for backups.
#Backups: {

go.mod
View File

@@ -8,22 +8,28 @@ require (
connectrpc.com/validate v0.1.0
cuelang.org/go v0.8.0
entgo.io/ent v0.13.1
github.com/bufbuild/buf v1.30.1
github.com/choria-io/machine-room v0.0.0-20231204170637-dbd92497cddc
github.com/coreos/go-oidc/v3 v3.10.0
github.com/fullstorydev/grpcurl v1.9.1
github.com/go-jose/go-jose/v3 v3.0.3
github.com/gofrs/uuid v4.4.0+incompatible
github.com/google/uuid v1.5.0
github.com/int128/kubelogin v1.28.0
github.com/jackc/pgx/v5 v5.5.5
github.com/lmittmann/tint v1.0.4
github.com/mattn/go-isatty v0.0.20
github.com/mattn/go-runewidth v0.0.15
github.com/olekukonko/tablewriter v0.0.5
github.com/prometheus/client_golang v1.19.0
github.com/rogpeppe/go-internal v1.12.0
github.com/sethvargo/go-retry v0.2.4
github.com/spf13/cobra v1.8.0
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.8.4
github.com/stretchr/testify v1.9.0
golang.org/x/net v0.22.0
golang.org/x/tools v0.19.0
google.golang.org/protobuf v1.33.0
google.golang.org/protobuf v1.33.1-0.20240408130810-98873a205002
honnef.co/go/tools v0.4.7
k8s.io/api v0.29.2
k8s.io/apimachinery v0.29.2
k8s.io/client-go v0.29.2
@@ -34,62 +40,202 @@ require (
require (
ariga.io/atlas v0.19.1-0.20240203083654-5948b60a8e43 // indirect
cloud.google.com/go/compute v1.23.3 // indirect
cloud.google.com/go/compute/metadata v0.2.3 // indirect
connectrpc.com/otelconnect v0.7.0 // indirect
cuelabs.dev/go/oci/ociregistry v0.0.0-20240314152124-224736b49f2e // indirect
github.com/AlecAivazis/survey/v2 v2.3.7 // indirect
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 // indirect
github.com/BurntSushi/toml v1.3.2 // indirect
github.com/Freman/eventloghook v0.0.0-20191003051739-e4d803b6b48b // indirect
github.com/Masterminds/goutils v1.1.1 // indirect
github.com/Masterminds/semver v1.5.0 // indirect
github.com/Masterminds/semver/v3 v3.2.1 // indirect
github.com/Masterminds/sprig/v3 v3.2.3 // indirect
github.com/Microsoft/go-winio v0.6.1 // indirect
github.com/OneOfOne/xxhash v1.2.8 // indirect
github.com/achanda/go-sysctl v0.0.0-20160222034550-6be7678c45d2 // indirect
github.com/agext/levenshtein v1.2.1 // indirect
github.com/antlr/antlr4/runtime/Go/antlr/v4 v4.0.0-20230512164433-5d1fd1a340c9 // indirect
github.com/agnivade/levenshtein v1.1.1 // indirect
github.com/antlr4-go/antlr/v4 v4.13.0 // indirect
github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bufbuild/protovalidate-go v0.3.0 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/bufbuild/protocompile v0.10.0 // indirect
github.com/bufbuild/protovalidate-go v0.6.0 // indirect
github.com/bufbuild/protoyaml-go v0.1.8 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/census-instrumentation/opencensus-proto v0.4.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/cheekybits/genny v1.0.0 // indirect
github.com/choria-io/fisk v0.6.1 // indirect
github.com/choria-io/go-choria v0.27.1-0.20231204170245-efda165efc54 // indirect
github.com/choria-io/go-updater v0.1.0 // indirect
github.com/choria-io/stream-replicator v0.8.3-0.20230503130504-86152f798aec // indirect
github.com/choria-io/tokens v0.0.3 // indirect
github.com/cloudevents/sdk-go/v2 v2.14.0 // indirect
github.com/cncf/udpa/go v0.0.0-20220112060539-c52dc94e7fbe // indirect
github.com/cncf/xds/go v0.0.0-20231128003011-0fa0005c9caa // indirect
github.com/cockroachdb/apd/v3 v3.2.1 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.15.1 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.4 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/docker/cli v26.0.0+incompatible // indirect
github.com/docker/distribution v2.8.3+incompatible // indirect
github.com/docker/docker v26.0.0+incompatible // indirect
github.com/docker/docker-credential-helpers v0.8.1 // indirect
github.com/docker/go-connections v0.5.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/emicklei/proto v1.10.0 // indirect
github.com/envoyproxy/go-control-plane v0.12.0 // indirect
github.com/envoyproxy/protoc-gen-validate v1.0.4 // indirect
github.com/evanphx/json-patch v5.7.0+incompatible // indirect
github.com/expr-lang/expr v1.15.6 // indirect
github.com/fatih/color v1.16.0 // indirect
github.com/felixge/fgprof v0.9.4 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/ghodss/yaml v1.0.0 // indirect
github.com/go-chi/chi/v5 v5.0.12 // indirect
github.com/go-ini/ini v1.67.0 // indirect
github.com/go-jose/go-jose/v4 v4.0.1 // indirect
github.com/go-logr/logr v1.3.0 // indirect
github.com/go-logr/logr v1.4.1 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/go-openapi/inflect v0.19.0 // indirect
github.com/go-openapi/jsonpointer v0.20.0 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.22.4 // indirect
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 // indirect
github.com/gobwas/glob v0.2.3 // indirect
github.com/gofrs/uuid/v5 v5.0.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/cel-go v0.17.4 // indirect
github.com/golang-jwt/jwt/v4 v4.5.0 // indirect
github.com/golang/mock v1.6.0 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/cel-go v0.20.1 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/go-containerregistry v0.19.1 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/pprof v0.0.0-20240327155427-868f304927ed // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/google/wire v0.5.0 // indirect
github.com/gorilla/mux v1.8.1 // indirect
github.com/goss-org/GOnetstat v0.0.0-20230101144325-22be0bd9e64d // indirect
github.com/goss-org/go-ps v0.0.0-20230609005227-7b318e6a56e5 // indirect
github.com/goss-org/goss v0.4.6 // indirect
github.com/gosuri/uilive v0.0.4 // indirect
github.com/gosuri/uiprogress v0.0.1 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.19.0 // indirect
github.com/guptarohit/asciigraph v0.5.6 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/hashicorp/hcl/v2 v2.13.0 // indirect
github.com/hashicorp/logutils v1.0.0 // indirect
github.com/huandu/xstrings v1.4.0 // indirect
github.com/imdario/mergo v0.3.16 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/int128/listener v1.1.0 // indirect
github.com/int128/oauth2cli v1.14.0 // indirect
github.com/int128/oauth2dev v1.0.0 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a // indirect
github.com/jackc/puddle/v2 v2.2.1 // indirect
github.com/jdx/go-netrc v1.0.0 // indirect
github.com/jhump/protoreflect v1.16.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 // indirect
github.com/klauspost/compress v1.17.7 // indirect
github.com/klauspost/pgzip v1.2.6 // indirect
github.com/lib/pq v1.10.9 // indirect
github.com/looplab/fsm v1.0.1 // indirect
github.com/lufia/plan9stats v0.0.0-20231016141302-07b5767bb0ed // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d // indirect
github.com/miekg/dns v1.1.58 // indirect
github.com/miekg/pkcs11 v1.1.1 // indirect
github.com/minio/highwayhash v1.0.2 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/moby/docker-image-spec v1.3.1 // indirect
github.com/moby/sys/mountinfo v0.7.1 // indirect
github.com/moby/term v0.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/nats-io/jsm.go v0.1.1-0.20231204140718-1ad3bcd9702c // indirect
github.com/nats-io/jwt/v2 v2.5.3 // indirect
github.com/nats-io/nats-server/v2 v2.10.6 // indirect
github.com/nats-io/nats.go v1.31.0 // indirect
github.com/nats-io/nkeys v0.4.6 // indirect
github.com/nats-io/nuid v1.0.1 // indirect
github.com/ncruces/go-strftime v0.1.9 // indirect
github.com/oleiade/reflections v1.0.1 // indirect
github.com/onsi/ginkgo/v2 v2.15.0 // indirect
github.com/onsi/gomega v1.31.1 // indirect
github.com/open-policy-agent/opa v0.59.0 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.0 // indirect
github.com/patrickmn/go-cache v2.1.0+incompatible // indirect
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pkg/profile v1.7.0 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_model v0.5.0 // indirect
github.com/prometheus/common v0.48.0 // indirect
github.com/power-devops/perfstat v0.0.0-20221212215047-62379fc7944b // indirect
github.com/prometheus/client_model v0.6.0 // indirect
github.com/prometheus/common v0.50.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
github.com/protocolbuffers/txtpbfmt v0.0.0-20230328191034-3462fbc510c0 // indirect
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/rivo/uniseg v0.4.4 // indirect
github.com/robfig/cron v1.2.0 // indirect
github.com/rs/cors v1.10.1 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/samber/lo v1.39.0 // indirect
github.com/santhosh-tekuri/jsonschema/v5 v5.3.1 // indirect
github.com/segmentio/ksuid v1.0.4 // indirect
github.com/shirou/gopsutil/v3 v3.23.11 // indirect
github.com/shoenig/go-m1cpu v0.1.6 // indirect
github.com/shopspring/decimal v1.3.1 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
github.com/spf13/cast v1.6.0 // indirect
github.com/stoewer/go-strcase v1.3.0 // indirect
github.com/stretchr/objx v0.5.2 // indirect
github.com/tchap/go-patricia/v2 v2.3.1 // indirect
github.com/tidwall/gjson v1.17.1 // indirect
github.com/tidwall/match v1.1.1 // indirect
github.com/tidwall/pretty v1.2.1 // indirect
github.com/tklauser/go-sysconf v0.3.13 // indirect
github.com/tklauser/numcpus v0.7.0 // indirect
github.com/vbatts/tar-split v0.11.5 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
github.com/xlab/tablewriter v0.0.0-20160610135559-80b567a11ad5 // indirect
github.com/yashtewari/glob-intersection v0.2.0 // indirect
github.com/yusufpapurcu/wmi v1.2.3 // indirect
github.com/zclconf/go-cty v1.8.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 // indirect
go.opentelemetry.io/otel v1.25.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.25.0 // indirect
go.opentelemetry.io/otel/metric v1.25.0 // indirect
go.opentelemetry.io/otel/sdk v1.25.0 // indirect
go.opentelemetry.io/otel/trace v1.25.0 // indirect
go.uber.org/atomic v1.11.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
golang.org/x/crypto v0.21.0 // indirect
golang.org/x/exp v0.0.0-20231108232855-2478ac86f678 // indirect
golang.org/x/exp v0.0.0-20240325151524-a685a6edb6d8 // indirect
golang.org/x/exp/typeparams v0.0.0-20221208152030-732eee02a75a // indirect
golang.org/x/mod v0.16.0 // indirect
golang.org/x/oauth2 v0.18.0 // indirect
golang.org/x/sync v0.6.0 // indirect
@@ -98,8 +244,9 @@ require (
golang.org/x/text v0.14.0 // indirect
golang.org/x/time v0.5.0 // indirect
google.golang.org/appengine v1.6.8 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20230530153820-e85fd2cbaebc // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20230530153820-e85fd2cbaebc // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240325203815-454cdb8f5daa // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240325203815-454cdb8f5daa // indirect
google.golang.org/grpc v1.62.1 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
@@ -115,3 +262,5 @@ require (
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
)
replace github.com/choria-io/machine-room v0.0.0 => github.com/jeffmccune/machine-room v0.0.990

796
go.sum

File diff suppressed because it is too large

22
hack/choria/gen-machine-signer Executable file

@@ -0,0 +1,22 @@
#! /bin/bash
#
# build github.com/choria-io/go-choria with go build -trimpath -o choria -ldflags "-w" ./
# Refer to https://github.com/ripienaar/choria-compose/blob/main/setup.sh#L41
# Refer to https://github.com/holos-run/holos-infra/blob/v0.60.4/experiments/components/holos-saas/initialize/setup
# choria jwt keys machine-signer.seed machine-signer.public
set -euo pipefail
PARENT="$(cd "$(dirname "$0")" && pwd)"
tmpdir="$(mktemp -d)"
finish() {
[[ -d "$tmpdir" ]] && rm -rf "$tmpdir"
}
trap finish EXIT
cd "$tmpdir"
mkdir machine-signer
cd machine-signer
choria jwt keys machine-signer.seed machine-signer.public
holos create secret machine-signer --from-file .

5
hack/choria/initialize/.gitignore vendored Normal file

@@ -0,0 +1,5 @@
/issuer/
/provisioner/
/broker/
/customers/
/agents/


@@ -0,0 +1,8 @@
Initialize machine room provisioning credentials
Setup Notes:
The holos server flag `--provisioner-seed` must match the issuer.seed value.
To get the correct value to configure for holos server:
holos get secret choria-issuer --print-key=issuer.seed --namespace "$NAMESPACE"

61
hack/choria/initialize/setup Executable file

@@ -0,0 +1,61 @@
#! /bin/bash
#
export BROKER_PASSWORD="$(LC_ALL=C tr -dc "[:alpha:]" </dev/random | tr '[:upper:]' '[:lower:]' | head -c 32)"
export PROVISIONER_TOKEN="$(LC_ALL=C tr -dc "[:alpha:]" </dev/random | tr '[:upper:]' '[:lower:]' | head -c 32)"
set -xeuo pipefail
PARENT="$(cd "$(dirname "$0")" && pwd)"
TOPLEVEL="$(cd "${PARENT}" && git rev-parse --show-toplevel)"
: "${NAMESPACE:=jeff-holos}"
export NAMESPACE
tmpdir="$(mktemp -d)"
finish() {
[[ -d "$tmpdir" ]] && rm -rf "$tmpdir"
}
trap finish EXIT
cd "$tmpdir"
# Generate Secrets
# Create organization issuer
mkdir issuer
choria jwt keys "./issuer/issuer.seed" "./issuer/issuer.public"
ISSUER="$(<issuer/issuer.public)"
export ISSUER
# Provisioner token used for ???
mkdir provisioner
echo -n "${PROVISIONER_TOKEN}" > ./provisioner/token
# Provisioner signer
choria jwt keys ./provisioner/signer.seed ./provisioner/signer.public
choria jwt client ./provisioner/signer.jwt provisioner_signer ./issuer/issuer.seed \
--public-key "$(<provisioner/signer.public)" --server-provisioner --validity $((999*365))d --issuer
# Provisioner Secret
mkdir -p provisioner/secret
gomplate --input-dir "${PARENT}/templates/provisioner" --output-dir ./provisioner/secret/
cp ./provisioner/signer.seed ./provisioner/secret/signer.seed
cp ./provisioner/signer.jwt ./provisioner/secret/signer.jwt
# Provisioner Broker
mkdir broker
choria jwt keys ./broker/broker.seed ./broker/broker.public
choria jwt server ./broker/broker.jwt broker.holos.local "$(<broker/broker.public)" ./issuer/issuer.seed \
--org choria \
--collectives choria \
--subjects 'choria.node_metadata.>'
gomplate --input-dir "${PARENT}/templates/broker/" --output-dir ./broker/
echo -n "${BROKER_PASSWORD}" > ./broker/password
mkdir agents
choria jwt keys ./agents/signer.seed ./agents/signer.public
# Now save the secrets
holos create secret --append-hash=false --namespace $NAMESPACE choria-issuer --from-file=issuer
holos create secret --append-hash=false --namespace $NAMESPACE choria-broker --from-file=broker
holos create secret --append-hash=false --namespace $NAMESPACE choria-provisioner --from-file=provisioner
holos create secret --append-hash=false --namespace $NAMESPACE choria-agents --from-file=agents
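The exported credentials at the top of the setup script deserve a note: they are generated *before* `set -o pipefail` because `head -c 32` closes the pipe early, making `tr` exit 141 (SIGPIPE), which pipefail would turn into a spurious failure. A minimal sketch of the same generation, restructured to read a fixed chunk first so it is safe even under pipefail (`/dev/urandom` is used here to avoid blocking on low entropy; variable names are illustrative):

```shell
# Read a fixed chunk first so no process is killed by SIGPIPE, then keep
# the first 32 random ASCII letters.
raw="$(head -c 4096 /dev/urandom | LC_ALL=C tr -dc '[:alpha:]')"
BROKER_PASSWORD="$(printf '%s' "$raw" | cut -c1-32)"
echo "${#BROKER_PASSWORD}"
```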


@@ -0,0 +1,21 @@
loglevel = info
plugin.choria.stats_address = 0.0.0.0
plugin.choria.stats_port = 8222
plugin.choria.broker_network = true
plugin.choria.network.client_port = 4222
plugin.choria.network.peer_port = 5222
plugin.choria.network.system.user = system
plugin.choria.network.system.password = system
plugin.choria.network.peers = nats://broker-0.broker:5222,nats://broker-1.broker:5222,nats://broker-2.broker:5222
plugin.choria.use_srv = false
plugin.choria.network.websocket_port = 4333
plugin.security.provider = choria
plugin.security.choria.certificate = /etc/choria-tls/tls.crt
plugin.security.choria.key = /etc/choria-tls/tls.key
plugin.security.choria.token_file = /etc/choria/broker.jwt
plugin.security.choria.seed_file = /etc/choria/broker.seed
plugin.choria.network.provisioning.client_password = {{ .Env.BROKER_PASSWORD }}
plugin.security.issuer.names = choria
plugin.security.issuer.choria.public = {{ .Env.ISSUER }}


@@ -0,0 +1 @@
{{ .Env.ISSUER -}}


@@ -0,0 +1,7 @@
plugin.security.provider = choria
plugin.security.choria.token_file = /etc/provisioner/signer.jwt
plugin.security.choria.seed_file = /etc/provisioner/signer.seed
identity = provisioner_signer
plugin.choria.middleware_hosts = broker-0.broker:4222,broker-1.broker:4222,broker-2.broker:4222


@@ -0,0 +1,16 @@
workers: 4
interval: 1m
logfile: /dev/stdout
loglevel: info
helper: /app/.venv/bin/helper
token: "{{ .Env.PROVISIONER_TOKEN }}"
choria_insecure: false
site: holos
broker_provisioning_password: "{{ .Env.BROKER_PASSWORD }}"
jwt_verify_cert: "{{ .Env.ISSUER }}"
jwt_signing_key: /etc/provisioner/signer.seed
jwt_signing_token: /etc/provisioner/signer.jwt
features:
jwt: true
ed25519: true

2
hack/tilt/.gitignore vendored Normal file

@@ -0,0 +1,2 @@
aws-login.last
kubeconfig

7
hack/tilt/Dockerfile Normal file

@@ -0,0 +1,7 @@
FROM 271053619184.dkr.ecr.us-east-2.amazonaws.com/holos-run/container-images/debian:bullseye AS final
USER root
WORKDIR /app
ADD bin bin
RUN chown -R app: /app
USER app
ENTRYPOINT bin/holos server

21
hack/tilt/aws-login.sh Executable file

@@ -0,0 +1,21 @@
#! /bin/bash
set -euo pipefail
PARENT="$(cd "$(dirname "$0")" && pwd)"
# Skip the login when the cached SSO credentials are less than 8 hours old
if [[ -s "${PARENT}/aws-login.last" ]]; then
last="$(<"${PARENT}/aws-login.last")"
now="$(date +%s)"
if [[ $(( now - last )) -lt 28800 ]]; then
echo "creds are still valid" >&2
exit 0
fi
fi
aws sso logout
aws sso login
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin "${AWS_ACCOUNT}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com"
# Touch a file so tilt docker_build can watch it as a dep
date +%s > "${PARENT}/aws-login.last"
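The reuse window in the script above amounts to 8 hours (28800 seconds). A sketch of the freshness check factored into a function (the function name and sample timestamps are invented for illustration):

```shell
# $1 = epoch seconds recorded at the last login, $2 = current epoch seconds.
is_fresh() {
  [ $(( $2 - $1 )) -lt 28800 ]
}
if is_fresh 1000 5000; then echo fresh; fi   # 4000 s old: reuse the session
if ! is_fresh 0 40000; then echo stale; fi   # over 8 h old: log in again
```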

7
hack/tilt/aws.config Normal file

@@ -0,0 +1,7 @@
[profile dev-holos]
sso_account_id = 271053619184
sso_role_name = AdministratorAccess
sso_start_url = https://openinfrastructure.awsapps.com/start
sso_region = us-east-2
region = us-east-2
output = json

9
hack/tilt/bin/tilt Executable file

@@ -0,0 +1,9 @@
#! /bin/bash
# Override kubeconfig so we can create it with local()
set -euo pipefail
TOPLEVEL="$(cd "$(dirname "$0")/.." && pwd)"
export NAMESPACE="${USER}-holos"
export KUBECONFIG="${TOPLEVEL}/kubeconfig"
envsubst < "${KUBECONFIG}.template" > "${KUBECONFIG}"
export TILT_WRAPPER=1
exec tilt "$@"

153
hack/tilt/ecr-creds.yaml Normal file

@@ -0,0 +1,153 @@
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: container-registry
app.kubernetes.io/instance: holos-system-ecr
app.kubernetes.io/name: holos-system-ecr
app.kubernetes.io/part-of: holos
name: holos-system-ecr
namespace: holos-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: container-registry
app.kubernetes.io/instance: holos-system-ecr
app.kubernetes.io/name: holos-system-ecr
app.kubernetes.io/part-of: holos
name: holos-system-ecr
rules:
- apiGroups:
- ""
resources:
- secrets
- namespaces
verbs:
- list
- apiGroups:
- ""
resourceNames:
- holos-system-ecr-image-pull-creds
resources:
- secrets
verbs:
- '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: container-registry
app.kubernetes.io/instance: holos-system-ecr
app.kubernetes.io/name: holos-system-ecr
app.kubernetes.io/part-of: holos
name: holos-system-ecr
namespace: holos-system
roleRef:
kind: ClusterRole
name: holos-system-ecr
subjects:
- kind: ServiceAccount
name: holos-system-ecr
namespace: holos-system
---
apiVersion: v1
data:
refresh.sh: |-
#! /bin/bash
tmpdir="$(mktemp -d)"
finish() {
rm -rf "${tmpdir}"
}
trap finish EXIT
set -euo pipefail
aws sts assume-role-with-web-identity \
--role-arn ${AWS_ROLE_ARN} \
--role-session-name CronJob \
--web-identity-token file:///run/secrets/irsa/serviceaccount/token \
> "${tmpdir}/creds.json"
export AWS_ACCESS_KEY_ID=$(jq -r .Credentials.AccessKeyId "${tmpdir}/creds.json")
export AWS_SECRET_ACCESS_KEY=$(jq -r .Credentials.SecretAccessKey "${tmpdir}/creds.json")
export AWS_SESSION_TOKEN=$(jq -r .Credentials.SessionToken "${tmpdir}/creds.json")
set -x
aws ecr get-login-password --region ${AWS_REGION} \
| docker login --username AWS --password-stdin ${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com
kubectl create secret docker-registry 'holos-system-ecr-image-pull-creds' \
--from-file=.dockerconfigjson=${HOME}/.docker/config.json \
--dry-run=client -o yaml \
> "${tmpdir}/secret.yaml"
# Get namespaces one per line
kubectl -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' get namespaces > ${tmpdir}/namespaces.txt
# Copy the secret to all namespaces
for ns in $(grep -vE '^gke-|^kube-|^gmp-' ${tmpdir}/namespaces.txt); do
echo "---" >> "${tmpdir}/secretlist.yaml"
kubectl --dry-run=client -o yaml -n $ns apply -f "${tmpdir}/secret.yaml" >> "${tmpdir}/secretlist.yaml"
done
kubectl apply --server-side=true -f "${tmpdir}/secretlist.yaml"
kind: ConfigMap
metadata:
labels:
app.kubernetes.io/component: image-pull-secret
app.kubernetes.io/instance: holos-system-ecr
app.kubernetes.io/name: refresher
app.kubernetes.io/part-of: holos
name: holos-system-ecr
namespace: holos-system
---
apiVersion: batch/v1
kind: CronJob
metadata:
labels:
app.kubernetes.io/component: container-registry
app.kubernetes.io/instance: holos-system-ecr
app.kubernetes.io/name: holos-system-ecr
app.kubernetes.io/part-of: holos
name: holos-system-ecr
namespace: holos-system
spec:
schedule: 0 */4 * * *
jobTemplate:
spec:
template:
spec:
containers:
- command:
- bash
- /app/scripts/refresh.sh
env:
- name: AWS_ACCOUNT
value: "271053619184"
- name: AWS_REGION
value: us-east-2
- name: AWS_ROLE_ARN
value: arn:aws:iam::271053619184:role/ImagePull
image: quay.io/holos/toolkit:latest
imagePullPolicy: Always
name: toolkit
resources:
limits:
cpu: 50m
memory: 64Mi
requests:
cpu: 50m
memory: 64Mi
volumeMounts:
- mountPath: /app/scripts
name: scripts
- mountPath: /run/secrets/irsa/serviceaccount
name: irsa
restartPolicy: OnFailure
serviceAccountName: holos-system-ecr
volumes:
- configMap:
name: holos-system-ecr
name: scripts
- name: irsa
projected:
sources:
- serviceAccountToken:
path: "token"
audience: "irsa"
expirationSeconds: 3600

37
hack/tilt/get-pgadmin-creds Executable file

@@ -0,0 +1,37 @@
#! /bin/bash
#
tmpdir="$(mktemp -d)"
finish() {
code=$?
if [[ $code -gt 10 ]]; then
jq . "${tmpdir}/creds.json"
echo "could not update pg password: jq got null on line $code" >&2
fi
rm -rf "$tmpdir"
exit $code
}
trap finish EXIT
set -euo pipefail
umask 077
if [[ $(uname) != Darwin ]]; then
pbcopy() {
xsel --input --clipboard
xsel --output --clipboard | xsel --input --primary
}
fi
sel="postgres-operator.crunchydata.com/pgadmin=${1}"
# secret="$(kubectl -n "${NAMESPACE}" get secret --selector=$sel '--output=jsonpath={.items..metadata.name}')"
kubectl get secret "--selector=$sel" -o=json | jq '.items[].data | map_values(@base64d)' > "${tmpdir}/creds.json"
echo -n "username: "
jq --exit-status -r ".username" "${tmpdir}/creds.json"
password="$(jq --exit-status -r ".password" "${tmpdir}/creds.json")"
# n.b. don't send the trailing newline.
echo -n "$password" | pbcopy
echo "password: copied to clipboard."

53
hack/tilt/get-pgdb-creds Executable file

@@ -0,0 +1,53 @@
#! /bin/bash
#
tmpdir="$(mktemp -d)"
finish() {
code=$?
if [[ $code -gt 10 ]]; then
jq . "${tmpdir}/creds.json"
echo "could not update pg password: jq got null on line $code" >&2
fi
rm -rf "$tmpdir"
exit $code
}
trap finish EXIT
set -euo pipefail
umask 077
if [[ $(uname) != Darwin ]]; then
pbcopy() {
xsel --input --clipboard
xsel --output --clipboard | xsel --input --primary
}
fi
kubectl get secret "${1}-pguser-${2}" -o json > "${tmpdir}/creds.json"
export PGDATABASE="$(jq --exit-status -r '.data | map_values(@base64d) | .dbname' ${tmpdir}/creds.json || exit $LINENO)"
export PGUSER="$(jq --exit-status -r '.data | map_values(@base64d) | .user' ${tmpdir}/creds.json || exit $LINENO)"
export PGPASSWORD="$(jq --exit-status -r '.data | map_values(@base64d) | .password' ${tmpdir}/creds.json || exit $LINENO)"
prefix="${PGHOST}:${PGPORT}:${PGDATABASE}:${PGUSER}"
if [[ -f ~/.pgpass ]]; then
(grep -v "^${prefix}:" ~/.pgpass || true) > "${tmpdir}/pgpass"
fi
echo "${prefix}:${PGPASSWORD}" >> "${tmpdir}/pgpass"
cp "${tmpdir}/pgpass" ~/.pgpass
echo "updated: ${HOME}/.pgpass" >&2
cat <<EOF >&2
## Connect from a localhost shell through the port forward to the cluster
export PGHOST=${PGHOST}
export PGPORT=${PGPORT}
export PGDATABASE=${PGDATABASE}
export PGUSER=${PGUSER}
psql -c '\conninfo'
EOF
psql --host=${PGHOST} --port=${PGPORT} ${PGDATABASE} -c '\conninfo'
# n.b. do not send a trailing newline to xsel
echo -n "$PGPASSWORD" | pbcopy
echo "password: copied to clipboard."
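The `~/.pgpass` update above follows a drop-then-append pattern: any stale line matching the same `host:port:database:user` prefix is filtered out before the fresh entry is written, so repeated runs never accumulate duplicates. A self-contained sketch of just that step (the file contents and passwords here are made up):

```shell
# Seed a scratch pgpass with a stale entry and an unrelated one.
pgpass="$(mktemp)"
printf '%s\n' 'localhost:5432:holos:holos:oldpw' 'db2:5432:app:app:pw2' > "$pgpass"
prefix="localhost:5432:holos:holos"
# Drop the stale line for this prefix (|| true: grep may match nothing),
# then append the fresh credential.
updated="$( { grep -v "^${prefix}:" "$pgpass" || true; echo "${prefix}:newpw"; } )"
printf '%s\n' "$updated"
rm -f "$pgpass"
```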

9
hack/tilt/gh-issue-view Executable file

@@ -0,0 +1,9 @@
#! /bin/bash
#
set -euo pipefail
issue="$(git rev-parse --abbrev-ref HEAD | tr -d -c 0-9)"
if [[ -z $issue ]]; then
echo "could not extract issue number from branch name" >&2
exit 1
fi
exec gh issue view --comments $issue
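The issue number above is recovered by deleting every non-digit from the branch name, which works for the `NNN-description` branch convention. A sketch with an invented branch name (note that any other digits in the name would also be kept):

```shell
branch="101-external-secret"   # hypothetical branch following the convention
issue="$(printf '%s' "$branch" | tr -d -c 0-9)"
echo "$issue"
```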

4
hack/tilt/gh-issues Executable file

@@ -0,0 +1,4 @@
#! /bin/bash
set -euo pipefail
export GH_FORCE_TTY='120%'
exec gh issue list

8
hack/tilt/go-test-failfast Executable file

@@ -0,0 +1,8 @@
#! /bin/bash
#
set -euo pipefail
for s in $(go list ./...); do
if ! go test -failfast -v -p 1 $s; then
break
fi
done

20
hack/tilt/k8s-get-db-sts Executable file

@@ -0,0 +1,20 @@
#! /bin/bash
#
# Output the stateful set yaml of the database using selectors
set -euo pipefail
sel="postgres-operator.crunchydata.com/cluster=${1},postgres-operator.crunchydata.com/instance-set=db"
x=30
while [[ $x -gt 0 ]]; do
for pod in $(kubectl get statefulsets --selector=$sel '--output=jsonpath={.items..metadata.name}'); do
echo "---"
kubectl get -o yaml statefulsets/$pod
x=0
done
if [[ $x -gt 0 ]]; then
((x--))
sleep 1
fi
done
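The `x=0` assignment inside the inner loop above is the trick that terminates both loops: as soon as the selector query returns a result, the counter is zeroed so the outer `while` exits without sleeping through the remaining retries. A runnable sketch of the pattern with a stub standing in for the `kubectl` query:

```shell
# Poll up to 30 times, one second apart, until the query yields rows.
query() { echo "alpha"; }   # stand-in for the kubectl selector query
x=30
found=""
while [ "$x" -gt 0 ]; do
  for item in $(query); do
    found="$item"
    x=0                     # a result arrived: stop retrying
  done
  if [ "$x" -gt 0 ]; then
    x=$(( x - 1 ))
    sleep 1
  fi
done
echo "$found"
```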

5
hack/tilt/k8s-namespace Executable file

@@ -0,0 +1,5 @@
#! /bin/bash
#
set -euo pipefail
cp "${KUBECONFIG}.template" "${KUBECONFIG}"
kubectl config set-context --current --namespace "${NAMESPACE}"

308
hack/tilt/k8s.yaml Normal file

@@ -0,0 +1,308 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: '{name}'
namespace: '{namespace}'
labels:
app: '{name}'
holos.run/developer: '{developer}'
spec:
selector:
matchLabels:
app: '{name}'
template:
metadata:
labels:
app: '{name}'
holos.run/developer: '{developer}'
sidecar.istio.io/inject: 'true'
spec:
serviceAccountName: holos
containers:
- name: holos
image: holos # Tilt appends a tilt-* tag for the built docker image
# args are configured in the Tiltfile
env:
- name: GOMAXPROCS
value: '1'
- name: TZ
value: '{tz}'
- name: SHUTDOWN_DELAY
value: '0'
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: holos-pguser-holos
key: uri
ports:
- name: http
containerPort: {listen_port}
protocol: TCP
resources:
requests:
cpu: 250m
memory: 100Mi
limits:
cpu: 1000m
memory: 200Mi
---
apiVersion: v1
kind: Service
metadata:
name: '{name}'
namespace: '{namespace}'
labels:
app: '{name}'
holos.run/developer: '{developer}'
spec:
type: ClusterIP
selector:
app: '{name}'
ports:
- name: http
port: {listen_port}
appProtocol: http2
protocol: TCP
targetPort: {listen_port}
- name: metrics
port: {metrics_port}
appProtocol: http
protocol: TCP
targetPort: {metrics_port}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: '{name}'
namespace: '{namespace}'
labels:
app: '{name}'
holos.run/developer: '{developer}'
spec:
endpoints:
- port: metrics
path: /metrics
interval: 15s
selector:
matchLabels:
app: '{name}'
holos.run/developer: '{developer}'
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: '{name}'
namespace: '{namespace}'
labels:
app: '{name}'
holos.run/developer: '{developer}'
spec:
gateways:
- istio-ingress/default
hosts:
- '{developer}.holos.dev.k2.ois.run'
http:
- route:
- destination:
host: '{name}'
port:
number: {listen_port}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: holos
namespace: '{namespace}'
labels:
app: '{name}'
holos.run/developer: '{developer}'
imagePullSecrets:
- name: kube-system-ecr-image-pull-creds
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
labels:
app: '{name}'
holos.run/developer: '{developer}'
name: '{name}-allow-groups'
namespace: '{namespace}'
spec:
action: ALLOW
rules:
- when:
- key: request.auth.claims[groups]
values:
- holos-developer
- holos-developer@openinfrastructure.co
selector:
matchLabels:
holos.run/authz: dev-holos-sso
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: '{name}-allow-nothing'
namespace: '{namespace}'
labels:
app: '{name}'
holos.run/developer: '{developer}'
spec:
action: ALLOW
selector:
matchLabels:
holos.run/authz: dev-holos-sso
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: '{name}-allow-well-known-paths'
namespace: '{namespace}'
labels:
app: '{name}'
holos.run/developer: '{developer}'
spec:
action: ALLOW
rules:
- to:
- operation:
paths:
- /healthz
- /metrics
- /callbacks/github
selector:
matchLabels:
holos.run/authz: dev-holos-sso
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: '{name}-auth'
namespace: '{namespace}'
labels:
app: '{name}'
holos.run/developer: '{developer}'
spec:
action: CUSTOM
provider:
name: dev-holos-sso
rules:
- to:
- operation:
notPaths:
- /healthz
- /metrics
- /callbacks/github
when:
- key: request.headers[Authorization]
notValues:
- Bearer *
selector:
matchLabels:
holos.run/authz: dev-holos-sso
---
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
name: '{name}'
namespace: '{namespace}'
labels:
app: '{name}'
holos.run/developer: '{developer}'
spec:
jwtRules:
- audiences:
- https://sso.dev.holos.run
forwardOriginalToken: true
fromHeaders:
- name: x-auth-request-access-token
issuer: https://idex.core.ois.run
jwksUri: https://idex.core.ois.run/keys
- audiences:
- holos-cli
forwardOriginalToken: true
fromHeaders:
- name: authorization
prefix: 'Bearer '
issuer: https://idex.core.ois.run
jwksUri: https://idex.core.ois.run/keys
selector:
matchLabels:
holos.run/authz: dev-holos-sso
---
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PGAdmin
metadata:
name: 'pgadmin'
namespace: '{namespace}'
labels:
holos.run/developer: '{developer}'
spec:
serverGroups:
- name: holos
postgresClusterSelector:
matchLabels:
holos.run/developer: '{developer}'
dataVolumeClaimSpec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: 1Gi
---
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
name: 'holos'
namespace: '{namespace}'
labels:
holos.run/developer: '{developer}'
spec:
image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.1-0
postgresVersion: 16
users:
- name: holos
databases:
- holos
options: 'SUPERUSER'
- name: '{developer}'
databases:
- holos
- '{developer}'
options: 'SUPERUSER'
# https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/user-management
instances:
- name: db
dataVolumeClaimSpec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: 1Gi
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: kubernetes.io/hostname
labelSelector:
matchLabels:
postgres-operator.crunchydata.com/cluster: '{name}'
backups:
pgbackrest:
image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.47-2
# https://github.com/CrunchyData/postgres-operator/issues/2531#issuecomment-1713676019
global:
archive-async: "y"
archive-push-queue-max: "100MiB"
spool-path: "/pgdata/backups"
repos:
- name: repo1
volume:
volumeClaimSpec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: 1Gi


@@ -0,0 +1,45 @@
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJpVENDQVRDZ0F3SUJBZ0lSQU9TenlHd2VMK3N4NjVvckVCTXV1c293Q2dZSUtvWkl6ajBFQXdJd0ZURVQKTUJFR0ExVUVDaE1LYTNWaVpYSnVaWFJsY3pBZUZ3MHlOREF5TVRNd05UQTRNRFJhRncwek5EQXlNVEF3TlRBNApNRFJhTUJVeEV6QVJCZ05WQkFvVENtdDFZbVZ5Ym1WMFpYTXdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CCkJ3TkNBQVREWUluR09EN2ZpbFVIeXNpZG1ac2Vtd2liTk9hT1A5ZzVJT1VsTkllUHZ1Y01ZV01aNWNkZXpVQmIKMGh4Zm1WYXR0QWxpcnorMlFpVld5by9WZFNsOG8yRXdYekFPQmdOVkhROEJBZjhFQkFNQ0FvUXdIUVlEVlIwbApCQll3RkFZSUt3WUJCUVVIQXdFR0NDc0dBUVVGQndNQ01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0hRWURWUjBPCkJCWUVGTGVtcEhSM25lVXYvSUc1WWpwempDbWUydmIyTUFvR0NDcUdTTTQ5QkFNQ0EwY0FNRVFDSUNZajRsNUgKL043OG5UcnJxQzMxWjlsY0lpODEwcno5N3JIdUJnWFZZUkxBQWlBNHVEc0YyNEI5aGV3WklUbWEwaHpCMjNOdQpwZnprTWV5VzZHV2U2RWh4NGc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://k2.core.ois.run:6443
  name: k2
contexts:
- context:
    cluster: k2
    namespace: default
    user: admin@k2
  name: admin@k2
- context:
    cluster: k2
    namespace: ${NAMESPACE}
    user: oidc
  name: sso@k2
current-context: sso@k2
kind: Config
preferences: {}
users:
- name: admin@k2
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJoRENDQVNxZ0F3SUJBZ0lRVXZKTlEvV0Ewalg5RXF6ZElIMFA4ekFLQmdncWhrak9QUVFEQWpBVk1STXcKRVFZRFZRUUtFd3ByZFdKbGNtNWxkR1Z6TUI0WERUSTBNRE14TVRJek1UY3hPVm9YRFRJMU1ETXhNVEl6TVRjeQpPVm93S1RFWE1CVUdBMVVFQ2hNT2MzbHpkR1Z0T20xaGMzUmxjbk14RGpBTUJnTlZCQU1UQldGa2JXbHVNRmt3CkV3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFNjZrMStQb1l5OHlPWTZkRFR5MHJYRTUvRlZJVU0rbkcKNEVzSXZxOHBuZ2lVRWRkeTdYM3hvZ2E5d2NSZy8xeVZ4Q2FNbzBUVEZveXkxaVZMMWxGWDNLTklNRVl3RGdZRApWUjBQQVFIL0JBUURBZ1dnTUJNR0ExVWRKUVFNTUFvR0NDc0dBUVVGQndNQ01COEdBMVVkSXdRWU1CYUFGTGVtCnBIUjNuZVV2L0lHNVlqcHpqQ21lMnZiMk1Bb0dDQ3FHU000OUJBTUNBMGdBTUVVQ0lDaDVGTWlXV3hxVHYyc0wKQVdvQ2lxaWJ0OUNUMnpsNzRlSTllMEZPTzRKTkFpRUF5T0wwR3RxVnlTSHUzbUsvVDBxZFhYQ3dmdHdWQVE4cgo2ejJWaVZrMzg2dz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSURtdTh0UGVrRmhlNzRXWm5idXlwOFZ1VUIxTVYwcTN4QklOclVVbjBaRjVvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFNjZrMStQb1l5OHlPWTZkRFR5MHJYRTUvRlZJVU0rbkc0RXNJdnE4cG5naVVFZGR5N1gzeApvZ2E5d2NSZy8xeVZ4Q2FNbzBUVEZveXkxaVZMMWxGWDNBPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://login.ois.run
      - --oidc-client-id=261774567918339420@holos_platform
      - --oidc-extra-scope=openid
      - --oidc-extra-scope=email
      - --oidc-extra-scope=profile
      - --oidc-extra-scope=groups
      - --oidc-extra-scope=offline_access
      - --oidc-extra-scope=urn:zitadel:iam:org:domain:primary:openinfrastructure.co
      - --oidc-use-pkce
      command: kubectl
      env: null
      interactiveMode: IfAvailable
      provideClusterInfo: false

hack/tilt/pgpass-zalando Executable file
View File

@@ -0,0 +1,33 @@
#! /bin/bash
#
# Sync the Zalando postgres credentials secret into ~/.pgpass for
# connections through the localhost port forward.
tmpdir="$(mktemp -d)"
finish() {
  rm -rf "$tmpdir"
}
trap finish EXIT
set -euo pipefail
umask 077

# Connection details for the localhost port forward; assumed to match the
# pgpass entry this script manages (see the grep pattern below).
PGHOST=localhost
PGPORT=14126
PGDATABASE=holos

kubectl -n "dev-${USER}" get secret "${USER}.holos-server-db.credentials.postgresql.acid.zalan.do" -o json > "${tmpdir}/creds.json"

if [[ -f ~/.pgpass ]]; then
  (grep -v "^localhost:14126:holos:${USER}:" ~/.pgpass || true) > "${tmpdir}/pgpass"
fi

PGUSER="$(jq -r '.data | map_values(@base64d) | .username' "${tmpdir}/creds.json")"
PGPASSWORD="$(jq -r '.data | map_values(@base64d) | .password' "${tmpdir}/creds.json")"

echo "${PGHOST}:${PGPORT}:${PGDATABASE}:${PGUSER}:${PGPASSWORD}" >> "${tmpdir}/pgpass"
cp "${tmpdir}/pgpass" ~/.pgpass
echo "updated: ${HOME}/.pgpass" >&2

cat <<EOF >&2
## Connect from a localhost shell through the port forward to the cluster
export PGHOST=${PGHOST}
export PGPORT=${PGPORT}
export PGDATABASE=${PGDATABASE}
export PGUSER=${PGUSER}
psql -c '\conninfo'
EOF

psql --host="${PGHOST}" --port="${PGPORT}" "${PGDATABASE}" -c '\conninfo'

View File

@@ -4,10 +4,10 @@ import (
 	"fmt"
 	"strings"

-	"github.com/holos-run/holos/pkg/cli/command"
-	"github.com/holos-run/holos/pkg/errors"
-	"github.com/holos-run/holos/pkg/holos"
-	"github.com/holos-run/holos/pkg/internal/builder"
+	"github.com/holos-run/holos/internal/cli/command"
+	"github.com/holos-run/holos/internal/errors"
+	"github.com/holos-run/holos/internal/holos"
+	"github.com/holos-run/holos/internal/internal/builder"
 	"github.com/spf13/cobra"
 )

View File

@@ -3,8 +3,8 @@ package command
 import (
 	"fmt"

-	"github.com/holos-run/holos/pkg/errors"
-	"github.com/holos-run/holos/pkg/version"
+	"github.com/holos-run/holos/internal/errors"
+	"github.com/holos-run/holos/version"
 	"github.com/spf13/cobra"
 )
@@ -26,5 +26,6 @@ func New(name string) *cobra.Command {
 		SilenceUsage:  true,
 		SilenceErrors: true,
 	}
+	cmd.Flags().SortFlags = false
 	return cmd
 }

View File

@@ -0,0 +1,50 @@
// Package controller integrates Choria Machine Room into Holos for cluster management.
package controller

import (
	"context"
	"fmt"

	mr "github.com/choria-io/machine-room"
	"github.com/holos-run/holos/internal/cli/command"
	"github.com/holos-run/holos/internal/errors"
	"github.com/holos-run/holos/internal/holos"
	"github.com/holos-run/holos/version"
	"github.com/spf13/cobra"
)

var (
	// SigningKey is the public key from `choria jwt keys machine-signer.seed machine-signer.public`; refer to gen-machine-signer.
	SigningKey = "2a136e3875f4375968ae8e8d400ba24864d3ed7c4109675f357d32cc3ca1d5a7"
)

func New(cfg *holos.Config) *cobra.Command {
	cmd := command.New("controller")
	cmd.Args = cobra.ArbitraryArgs
	cmd.DisableFlagParsing = true
	cmd.RunE = func(c *cobra.Command, args []string) error {
		if SigningKey == "" {
			return errors.Wrap(fmt.Errorf("could not run: controller.SigningKey not set from build variables"))
		}
		ctx := c.Context()
		if ctx == nil {
			ctx = context.Background()
		}
		app, err := mr.New(mr.Options{
			Name:              "controller",
			Contact:           "jeff@openinfrastructure.co",
			Version:           version.Version,
			Help:              "Holos Controller",
			MachineSigningKey: SigningKey,
			Args:              args,
		})
		if err != nil {
			return errors.Wrap(fmt.Errorf("could not make machine room app: %w", err))
		}
		return app.Run(ctx)
	}
	return cmd
}

View File

@@ -1,9 +1,9 @@
 package create

 import (
-	"github.com/holos-run/holos/pkg/cli/command"
-	"github.com/holos-run/holos/pkg/cli/secret"
-	"github.com/holos-run/holos/pkg/holos"
+	"github.com/holos-run/holos/internal/cli/command"
+	"github.com/holos-run/holos/internal/cli/secret"
+	"github.com/holos-run/holos/internal/holos"
 	"github.com/spf13/cobra"
 )

View File

@@ -1,9 +1,9 @@
 package get

 import (
-	"github.com/holos-run/holos/pkg/cli/command"
-	"github.com/holos-run/holos/pkg/cli/secret"
-	"github.com/holos-run/holos/pkg/holos"
+	"github.com/holos-run/holos/internal/cli/command"
+	"github.com/holos-run/holos/internal/cli/secret"
+	"github.com/holos-run/holos/internal/holos"
 	"github.com/spf13/cobra"
 )

View File

@@ -5,12 +5,12 @@ import (
 	"fmt"
 	"sort"

-	"github.com/holos-run/holos/pkg/cli/command"
-	"github.com/holos-run/holos/pkg/cli/secret"
-	"github.com/holos-run/holos/pkg/errors"
-	"github.com/holos-run/holos/pkg/holos"
-	"github.com/holos-run/holos/pkg/logger"
-	"github.com/holos-run/holos/pkg/util"
+	"github.com/holos-run/holos/internal/cli/command"
+	"github.com/holos-run/holos/internal/cli/secret"
+	"github.com/holos-run/holos/internal/errors"
+	"github.com/holos-run/holos/internal/holos"
+	"github.com/holos-run/holos/internal/logger"
+	"github.com/holos-run/holos/internal/util"
 	"github.com/spf13/cobra"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 )

View File

@@ -1,9 +1,9 @@
 package kv

 import (
-	"github.com/holos-run/holos/pkg/cli/command"
-	"github.com/holos-run/holos/pkg/errors"
-	"github.com/holos-run/holos/pkg/holos"
+	"github.com/holos-run/holos/internal/cli/command"
+	"github.com/holos-run/holos/internal/errors"
+	"github.com/holos-run/holos/internal/holos"
 	"github.com/spf13/cobra"
 	"k8s.io/client-go/kubernetes"
 	"k8s.io/client-go/tools/clientcmd"

View File

@@ -1,10 +1,10 @@
 package kv

 import (
-	"github.com/holos-run/holos/pkg/cli/command"
-	"github.com/holos-run/holos/pkg/cli/secret"
-	"github.com/holos-run/holos/pkg/errors"
-	"github.com/holos-run/holos/pkg/holos"
+	"github.com/holos-run/holos/internal/cli/command"
+	"github.com/holos-run/holos/internal/cli/secret"
+	"github.com/holos-run/holos/internal/errors"
+	"github.com/holos-run/holos/internal/holos"
 	"github.com/spf13/cobra"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 )

View File

@@ -11,11 +11,11 @@ import (
 	"path/filepath"
 	"strings"

-	"github.com/holos-run/holos/pkg/cli/command"
-	"github.com/holos-run/holos/pkg/cli/secret"
-	"github.com/holos-run/holos/pkg/errors"
-	"github.com/holos-run/holos/pkg/holos"
-	"github.com/holos-run/holos/pkg/logger"
+	"github.com/holos-run/holos/internal/cli/command"
+	"github.com/holos-run/holos/internal/cli/secret"
+	"github.com/holos-run/holos/internal/errors"
+	"github.com/holos-run/holos/internal/holos"
+	"github.com/holos-run/holos/internal/logger"
 	"github.com/spf13/cobra"
 	"golang.org/x/tools/txtar"
 	v1 "k8s.io/api/core/v1"

View File

@@ -0,0 +1,47 @@
package login

import (
	"context"
	"flag"
	"fmt"
	"log/slog"

	"github.com/holos-run/holos/internal/cli/command"
	"github.com/holos-run/holos/internal/holos"
	"github.com/holos-run/holos/internal/token"
	"github.com/spf13/cobra"
)

// New returns a new login command.
func New(cfg *holos.Config) *cobra.Command {
	cmd := command.New("login")
	var printClaims bool
	config := token.NewConfig()
	cmd.Flags().AddGoFlagSet(config.FlagSet())
	fs := &flag.FlagSet{}
	fs.BoolVar(&printClaims, "print-claims", false, "print id token claims")
	cmd.Flags().AddGoFlagSet(fs)
	cmd.RunE = func(c *cobra.Command, args []string) error {
		ctx := c.Context()
		if ctx == nil {
			ctx = context.Background()
		}
		token, err := token.Get(ctx, cfg.Logger(), config)
		if err != nil {
			slog.Error("could not get token", "err", err)
			return fmt.Errorf("could not get token: %w", err)
		}
		claims := token.Claims()
		slog.Info("logged in as "+claims.Email, "name", claims.Name, "exp", token.Expiry, "email", claims.Email)
		if printClaims {
			fmt.Fprintln(cmd.OutOrStdout(), token.Pretty)
		}
		return nil
	}
	return cmd
}

View File

@@ -0,0 +1,24 @@
package logout

import (
	"fmt"
	"os"

	"github.com/holos-run/holos/internal/cli/command"
	"github.com/holos-run/holos/internal/errors"
	"github.com/holos-run/holos/internal/holos"
	"github.com/holos-run/holos/internal/token"
	"github.com/spf13/cobra"
)

func New(cfg *holos.Config) *cobra.Command {
	cmd := command.New("logout")
	cmd.RunE = func(c *cobra.Command, args []string) error {
		if err := os.RemoveAll(token.CacheDir); err != nil {
			return errors.Wrap(fmt.Errorf("could not logout: %w", err))
		}
		cfg.Logger().Info("logged out: removed " + token.CacheDir)
		return nil
	}
	return cmd
}

View File

@@ -6,8 +6,8 @@ import (
 	"log/slog"

 	cue "cuelang.org/go/cue/errors"
-	"github.com/holos-run/holos/pkg/errors"
-	"github.com/holos-run/holos/pkg/holos"
+	"github.com/holos-run/holos/internal/errors"
+	"github.com/holos-run/holos/internal/holos"
 )
// MakeMain makes a main function for the cli or tests.

View File

@@ -5,9 +5,9 @@ import (
 	"fmt"
 	"strings"

-	"github.com/holos-run/holos/pkg/errors"
-	"github.com/holos-run/holos/pkg/logger"
-	"github.com/holos-run/holos/pkg/util"
+	"github.com/holos-run/holos/internal/errors"
+	"github.com/holos-run/holos/internal/logger"
+	"github.com/holos-run/holos/internal/util"
 )

 type ghAuthStatusResponse string

View File

@@ -5,9 +5,9 @@ import (
 	"github.com/spf13/cobra"

-	"github.com/holos-run/holos/pkg/cli/command"
-	"github.com/holos-run/holos/pkg/holos"
-	"github.com/holos-run/holos/pkg/logger"
+	"github.com/holos-run/holos/internal/cli/command"
+	"github.com/holos-run/holos/internal/holos"
+	"github.com/holos-run/holos/internal/logger"
 )

 // Config holds configuration parameters for preflight checks.

View File

@@ -3,11 +3,11 @@ package render
 import (
 	"fmt"

-	"github.com/holos-run/holos/pkg/cli/command"
-	"github.com/holos-run/holos/pkg/errors"
-	"github.com/holos-run/holos/pkg/holos"
-	"github.com/holos-run/holos/pkg/internal/builder"
-	"github.com/holos-run/holos/pkg/logger"
+	"github.com/holos-run/holos/internal/cli/command"
+	"github.com/holos-run/holos/internal/errors"
+	"github.com/holos-run/holos/internal/holos"
+	"github.com/holos-run/holos/internal/internal/builder"
+	"github.com/holos-run/holos/internal/logger"
 	"github.com/spf13/cobra"
 )
@@ -18,7 +18,7 @@ func makeRenderRunFunc(cfg *holos.Config) command.RunFunc {
 		}
 		ctx := cmd.Context()
-		log := logger.FromContext(ctx)
+		log := logger.FromContext(ctx).With("cluster", cfg.ClusterName())
 		build := builder.New(builder.Entrypoints(args), builder.Cluster(cfg.ClusterName()))
 		results, err := build.Run(cmd.Context())
 		if err != nil {

View File

@@ -7,16 +7,19 @@ import (
 	"github.com/holos-run/holos/internal/server"
-	"github.com/holos-run/holos/pkg/cli/build"
-	"github.com/holos-run/holos/pkg/cli/create"
-	"github.com/holos-run/holos/pkg/cli/get"
-	"github.com/holos-run/holos/pkg/cli/kv"
-	"github.com/holos-run/holos/pkg/cli/preflight"
-	"github.com/holos-run/holos/pkg/cli/render"
-	"github.com/holos-run/holos/pkg/cli/txtar"
-	"github.com/holos-run/holos/pkg/holos"
-	"github.com/holos-run/holos/pkg/logger"
-	"github.com/holos-run/holos/pkg/version"
+	"github.com/holos-run/holos/internal/cli/build"
+	"github.com/holos-run/holos/internal/cli/controller"
+	"github.com/holos-run/holos/internal/cli/create"
+	"github.com/holos-run/holos/internal/cli/get"
+	"github.com/holos-run/holos/internal/cli/kv"
+	"github.com/holos-run/holos/internal/cli/login"
+	"github.com/holos-run/holos/internal/cli/logout"
+	"github.com/holos-run/holos/internal/cli/preflight"
+	"github.com/holos-run/holos/internal/cli/render"
+	"github.com/holos-run/holos/internal/cli/txtar"
+	"github.com/holos-run/holos/internal/holos"
+	"github.com/holos-run/holos/internal/logger"
+	"github.com/holos-run/holos/version"
 )

 // New returns a new root *cobra.Command for command line execution.
@@ -56,6 +59,8 @@ func New(cfg *holos.Config) *cobra.Command {
 	rootCmd.AddCommand(get.New(cfg))
 	rootCmd.AddCommand(create.New(cfg))
 	rootCmd.AddCommand(preflight.New(cfg))
+	rootCmd.AddCommand(login.New(cfg))
+	rootCmd.AddCommand(logout.New(cfg))

 	// Maybe not needed?
 	rootCmd.AddCommand(txtar.New(cfg))
@@ -66,5 +71,8 @@ func New(cfg *holos.Config) *cobra.Command {
 	// Server
 	rootCmd.AddCommand(server.New(cfg))

+	// Controller
+	rootCmd.AddCommand(controller.New(cfg))
+
 	return rootCmd
 }

View File

@@ -2,12 +2,13 @@ package cli

 import (
 	"bytes"
-	"github.com/holos-run/holos/pkg/holos"
-	"github.com/holos-run/holos/pkg/logger"
-	"github.com/holos-run/holos/pkg/version"
-	"github.com/spf13/cobra"
 	"strings"
 	"testing"
+
+	"github.com/holos-run/holos/internal/holos"
+	"github.com/holos-run/holos/internal/logger"
+	"github.com/holos-run/holos/version"
+	"github.com/spf13/cobra"
 )

 func newCommand() (*cobra.Command, *bytes.Buffer) {

View File

@@ -9,10 +9,10 @@ import (
 	"path/filepath"
 	"strings"

-	"github.com/holos-run/holos/pkg/cli/command"
-	"github.com/holos-run/holos/pkg/errors"
-	"github.com/holos-run/holos/pkg/holos"
-	"github.com/holos-run/holos/pkg/logger"
+	"github.com/holos-run/holos/internal/cli/command"
+	"github.com/holos-run/holos/internal/errors"
+	"github.com/holos-run/holos/internal/holos"
+	"github.com/holos-run/holos/internal/logger"
 	"github.com/spf13/cobra"
 	v1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

View File

@@ -8,11 +8,11 @@ import (
 	"path/filepath"
 	"sort"

-	"github.com/holos-run/holos/pkg/cli/command"
-	"github.com/holos-run/holos/pkg/errors"
-	"github.com/holos-run/holos/pkg/holos"
-	"github.com/holos-run/holos/pkg/logger"
-	"github.com/holos-run/holos/pkg/util"
+	"github.com/holos-run/holos/internal/cli/command"
+	"github.com/holos-run/holos/internal/errors"
+	"github.com/holos-run/holos/internal/holos"
+	"github.com/holos-run/holos/internal/logger"
+	"github.com/holos-run/holos/internal/util"
 	"github.com/spf13/cobra"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 )

View File

@@ -1,7 +1,7 @@
 package secret

 import (
-	"github.com/holos-run/holos/pkg/holos"
+	"github.com/holos-run/holos/internal/holos"
 	"github.com/spf13/pflag"
 )

View File

@@ -1,15 +1,16 @@
 package secret_test

 import (
-	"github.com/holos-run/holos/pkg/cli"
-	"github.com/holos-run/holos/pkg/holos"
+	"testing"
+	"time"
+
+	"github.com/holos-run/holos/internal/cli"
+	"github.com/holos-run/holos/internal/holos"
 	"github.com/rogpeppe/go-internal/testscript"
 	v1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/client-go/kubernetes"
 	"k8s.io/client-go/kubernetes/fake"
-	"testing"
-	"time"
 )

 const clientsetKey = "clientset"

Some files were not shown because too many files have changed in this diff.