Compare commits

...

23 Commits

Author SHA1 Message Date
Jeff McCune
0d0dae8742 (#89) Disable project auth proxies by default
Focus on the ingress gateway auth proxy for now and see how far it gets
us.
2024-04-01 21:48:08 -07:00
Jeff McCune
61b4b5bd17 (#89) Refactor auth proxy callbacks
The ingress gateway auth proxy callback conflicts with the project stage
auth proxy callback for the same backend Host: header value.

This patch disambiguates them by the namespace the auth proxy resides
in.
2024-04-01 21:37:52 -07:00
Jeff McCune
0060740b76 (#82) ingress gateway AuthorizationPolicy
This patch adds a `RequestAuthentication` and `AuthorizationPolicy` rule
to protect all requests flowing through the default ingress gateway.

Consider a browser request for httpbin.k2.example.com representing any
arbitrary host with a valid destination inside the service mesh.  The
default ingress gateway checks whether an x-oidc-id-token header is already
present and, if so, validates that the token was issued by ZITADEL and that
the aud claim contains the ZITADEL project number.

If the header is not present, the request is forwarded to oauth2-proxy
in the istio-ingress namespace.  This auth proxy is configured to start
the OIDC auth flow with a redirect back to the /holos/oidc/callback path of
the Host: value originally provided in the browser request.
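
A minimal sketch of the validation half of this flow, assuming a
hypothetical issuer of `https://login.example.com` and a placeholder
project number; the header and selector values come from the CUE in this
compare:

```yaml
# Sketch only: the issuer URL and project number are placeholders.
apiVersion: security.istio.io/v1
kind: RequestAuthentication
metadata:
  name: authproxy
  namespace: istio-ingress
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  jwtRules:
    - issuer: https://login.example.com
      audiences: ["123456789012345678"]   # ZITADEL project number
      fromHeaders:
        - name: x-oidc-id-token
      forwardOriginalToken: true
```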

Closes: #82
2024-04-01 20:37:34 -07:00
Jeff McCune
bf8a4af579 (#82) ingressgateway ExtAuthzHttp provider
This patch adds an ingress gateway extauthz provider.  Because ZITADEL
returns all applications associated with a ZITADEL project in the aud
claim, it makes sense to have one ingress auth proxy at the initial
ingress gateway so that the ID token ends up in the request header where
backend namespaces can match it using `RequestAuthentication` and
`AuthorizationPolicy`.

This change likely makes the additional per-stage auth proxies
unnecessary and over-engineered.  Backend namespaces will have access to
the ID token.
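
A hedged sketch of the resulting meshconfig extension provider, using the
`ingressauth` provider name and `authproxy` Service from the CUE in this
change:

```yaml
# Ingress gateway ExtAuthzHttp provider in the istio meshconfig (sketch).
extensionProviders:
  - name: ingressauth
    envoyExtAuthzHttp:
      service: authproxy.istio-ingress.svc.cluster.local
      port: 4180
      includeRequestHeadersInCheck: ["authorization", "cookie", "x-forwarded-for"]
      headersToUpstreamOnAllow: ["authorization", "path", "x-oidc-id-token"]
      headersToDownstreamOnDeny: ["content-type", "set-cookie"]
```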
2024-04-01 16:53:11 -07:00
Jeff McCune
dc057fe39d (#89) Add platform project hosts for argocd, grafana, and prometheus
Certificates are issued by the provisioner and synced to the workload
clusters.
2024-04-01 13:09:46 -07:00
Jeff McCune
9877ab131a (#89) Platform Project
This patch manages a platform project to host platform-level services
such as ArgoCD, Kube Prometheus Stack, Kiali, etc.
2024-04-01 11:46:02 -07:00
Jeff McCune
13aba64cb7 (#66) Move CUSTOM AuthorizationPolicy to env namespace
It doesn't make sense to link the stage ext authz provider to the
ingress gateway because there can be only one provider per workload.

Link it instead to the backend environment and use the
`security.holos.run/authproxy` label to match the workload.
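
A minimal sketch of the relocated policy, assuming the hypothetical
`dev-holos` environment namespace and the `dev-holos-authproxy` provider
name:

```yaml
# Sketch only: namespace, provider, and label value are illustrative.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: authproxy-custom
  namespace: dev-holos
spec:
  action: CUSTOM
  provider:
    name: dev-holos-authproxy
  selector:
    matchLabels:
      security.holos.run/authproxy: dev-holos-authproxy
  rules:
    - {}   # apply to all requests reaching the selected workloads
```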
2024-03-31 18:56:14 -07:00
Jeff McCune
fe9bc2dbfc (#81) Istio 1.21.0 2024-03-31 12:51:56 -07:00
Jeff McCune
c53b682852 (#66) Use x-oidc-id-token instead of authorization header
Problem:
Backend services and web apps expect to place their own credentials into
the Authorization header.  oauth2-proxy overwrites the Authorization
header, creating a conflict.

Solution:
Use the alpha configuration to place the id token into the
x-oidc-id-token header and configure the service mesh to authenticate
requests that have this header in place.

Note: Unlike Keycloak and Dex, ZITADEL does not use a JWT for the access
token.  The access token is not compatible with a RequestAuthentication
jwt rule, so we must use the ID token.
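
A sketch of the oauth2-proxy alpha configuration this relies on, mirroring
the ConfigMap added later in this compare:

```yaml
# oauth2-proxy alpha config: copy the id_token claim into a non-conflicting header.
injectResponseHeaders:
  - name: x-oidc-id-token
    values:
      - claim: id_token
```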
2024-03-31 11:41:23 -07:00
Jeff McCune
3aca6a9e4c (#66) configure auth proxies to set Authorization: Bearer header
Without this patch, the Istio RequestAuthentication resources fail to
match because the ZITADEL access token returned by oauth2-proxy in the
x-auth-request-access-token header is not a proper JWT.

The error is:

```
Jwt is not in the form of Header.Payload.Signature with two dots and 3 sections
```

This patch works around the problem by configuring oauth2-proxy to set
the ID token, which is guaranteed to be a proper JWT, in the
authorization response headers.

Unfortunately, oauth2-proxy will only place the ID token in the
Authorization response header, which overwrites any header set by a
client application.  This is likely to cause problems with single page
apps.

We'll probably need to work around this issue by using the alpha
configuration to set the ID token in some out-of-the-way header.  We've
done this before; it will just take some work to set up the ConfigMap and
translate the config again.
2024-03-30 16:15:27 -07:00
Jeff McCune
40fdfc0317 (#66) Fix auth proxy provider name, stage is always first
Use dev-holos-authproxy, not authproxy-dev-holos.
2024-03-30 14:05:50 -07:00
Jeff McCune
25d9415b0a (#66) Fix redis not able to write to /data
Without this patch, Redis cannot write to the /data directory, which
causes oauth2-proxy to fail with a 500 server error.
2024-03-30 13:40:34 -07:00
Jeff McCune
43c8702398 (#66) Configure an ExtAuthzProxy provider for each project stage
This patch configures an Istio envoyExtAuthzHttp provider for each stage
in each project.  An example provider for the dev stage of the holos
project is `authproxy-dev-holos`.
2024-03-30 11:28:23 -07:00
Jeff McCune
ce94776dbb (#66) Add ZITADEL project and client ids for iam project
core1 and core2 don't render without these resource identifiers in
place.
2024-03-30 09:18:54 -07:00
Jeff McCune
78ab6cd848 (#66) Match /holos/oidc for all hosts in the project stage
This has the same effect as matching each host individually and makes the
VirtualService much more manageable, particularly when calling
`kubectl get vs -A`.
2024-03-29 22:50:17 -07:00
Jeff McCune
0a7001f868 (#66) Configure the primary domain for zitadel
This bypasses the account selection screen and automatically redirects
back to the application without user interaction.
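
The oauth2-proxy side of this is the primary-domain scope in the OIDC
provider config; a sketch assuming the org domain `example.com`:

```yaml
# Sketch: the urn:zitadel:iam:org:domain:primary scope pins logins to the org's primary domain.
providers:
  - provider: oidc
    scope: "openid profile email groups offline_access urn:zitadel:iam:org:domain:primary:example.com"
```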
2024-03-29 22:44:52 -07:00
Jeff McCune
2db7be671b (#66) Route prefix /holos/oidc to authproxy
This patch configures the service mesh to route all requests with a uri
path prefix of `/holos/oidc` to the auth proxy associated with the
project stage.

Consider a request to https://jeff.holos.dev.k2.ois.run/holos/oidc/sign_in

This request is usually routed to the backend app, but
VirtualService/authproxy in the dev-holos-system namespace matches the
request and routes it to the auth proxy instead.

The auth proxy matches the request Host: header against the
--whitelist-domain and --cookie-domain settings, which match the suffix
`.holos.dev.k2.ois.run`.  The auth proxy then redirects to the OIDC issuer
with a callback URL built from the request Host, in this case
`https://jeff.holos.dev.k2.ois.run/holos/oidc/callback`.

ZITADEL matches the callback against those registered for the app's
client ID.  An authorization code is then sent back to the auth proxy.

The auth proxy sets a cookie named `__Secure-authproxy-dev-holos` with a
domain of `.holos.dev.k2.ois.run` from the suffix match of the
`--cookie-domain` flag.

Because this all works using paths, the `auth` prefix domains have been
removed.  They're unnecessary; oauth2-proxy is available for any host
routed to the project stage at path prefix `/holos/oidc`.

Refer to https://oauth2-proxy.github.io/oauth2-proxy/features/endpoints/
for useful debugging endpoints, replacing `/oauth2` with `/holos/oidc`.
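
A sketch of the resulting VirtualService for the dev stage of the holos
project (names follow the conventions used elsewhere in this compare):

```yaml
# Sketch: route any host's /holos/oidc prefix to the stage auth proxy.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: authproxy
  namespace: dev-holos-system
spec:
  hosts: ["*"]
  gateways: ["istio-ingress/dev-holos"]
  http:
    - match:
        - uri:
            prefix: /holos/oidc
      route:
        - destination:
            host: authproxy
            port:
              number: 4180
```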
2024-03-29 21:56:46 -07:00
Jeff McCune
b51870f7bf (#66) Deploy oauth2-proxy and redis to stage namespaces
This patch deploys oauth2-proxy and redis to the system namespace of
each stage in each project.  The plan is to redirect unauthenticated
requests to the request host at the /holos/oidc/callback endpoint.

This patch removes the --redirect-uri flag, which makes the auth domain
prefix moot, so a future patch should remove those if they really are
unnecessary.

The reason to remove the --redirect-uri flag is to make sure we set the
cookie to a domain suffix of the request Host: header.
2024-03-29 20:56:26 -07:00
Jeff McCune
0227dfa7e5 (#66) Add Gateway entries for oauth2-proxy
This patch adds entries to the project stage Gateway for oauth2-proxy.
Three entries for each stage are added, one for the global endpoint plus
one for each cluster.
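
A sketch of the three servers added for the dev stage of the holos
project, using the auth proxy hosts from the table in the next commit
below and hypothetical port names:

```yaml
# Sketch: one HTTPS server for the global endpoint plus one per cluster.
servers:
  - hosts: ["auth.holos.dev.ois.run"]
    port: {name: https-auth-holos-dev, number: 443, protocol: HTTPS}
    tls: {mode: SIMPLE, credentialName: auth.holos.dev.ois.run}
  - hosts: ["auth.holos.dev.k1.ois.run"]
    port: {name: https-auth-holos-dev-k1, number: 443, protocol: HTTPS}
    tls: {mode: SIMPLE, credentialName: auth.holos.dev.k1.ois.run}
  - hosts: ["auth.holos.dev.k2.ois.run"]
    port: {name: https-auth-holos-dev-k2, number: 443, protocol: HTTPS}
    tls: {mode: SIMPLE, credentialName: auth.holos.dev.k2.ois.run}
```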
2024-03-29 15:30:02 -07:00
Jeff McCune
05b59d9af0 (#66) Refactor project hosts for auth proxy cookies
Without this patch the auth proxy cookie domain is difficult to manage.
This patch refactors the hosts managed for each environment in a project
to better align with security domains and auth proxy session cookies.

The convention is: `<env?>.<host>.<stage?>.<cluster?>.<domain>` where
`host` can be 0..N entries with a default value of `[projectName]`.

`env` may be omitted for prod or for the dev env of the dev stage.  `stage`
may be omitted for prod.  `cluster` may be omitted for the global endpoint.

For a project named `holos`:

| Project | Stage | Env  | Cluster | Host                      |
| ------- | ----- | ---  | ------- | ------                    |
| holos   | dev   | jeff | k2      | jeff.holos.dev.k2.ois.run |
| holos   | dev   | jeff | global  | jeff.holos.dev.ois.run    |
| holos   | dev   | -    | k2      | holos.dev.k2.ois.run      |
| holos   | dev   | -    | global  | holos.dev.ois.run         |
| holos   | prod  | -    | k2      | holos.k2.ois.run          |
| holos   | prod  | -    | global  | holos.ois.run             |

Auth proxy:

| Project | Stage | Auth Proxy Host           | Auth Cookie Domain   |
| ------- | ----- | ------                    | ------------------   |
| holos   | dev   | auth.holos.dev.ois.run    | holos.dev.ois.run    |
| holos   | dev   | auth.holos.dev.k1.ois.run | holos.dev.k1.ois.run |
| holos   | dev   | auth.holos.dev.k2.ois.run | holos.dev.k2.ois.run |
| holos   | prod  | auth.holos.ois.run        | holos.ois.run        |
| holos   | prod  | auth.holos.k1.ois.run     | holos.k1.ois.run     |
| holos   | prod  | auth.holos.k2.ois.run     | holos.k2.ois.run     |
2024-03-29 15:30:01 -07:00
Jeff McCune
04f9f3b3a8 Merge pull request #79 from holos-run/nate/makefile_version
Show the holos version in 'make install|build'
2024-03-29 15:04:48 -07:00
Nate McCurdy
b58be8b38c Show the holos version in 'make install|build'
Prior to this, when running the 'install' or 'build' Makefile target,
the version of holos being built was not shown even though the 'build'
target attempted to show the version.

```
.PHONY: build
build: generate ## Build holos executable.
	@echo "building ${BIN_NAME} ${VERSION}"
```

For example:
```
> make install
go generate ./...
building holos
...
```

The holos version is stored in pkg/version/embedded/{major,minor,patch},
not in the `Version` const, so the fix is to change the value of `VERSION`
so that it comes from those embedded files.

Now the version of holos is shown:

```
> make install
go generate ./...
building holos 0.61.1
...
```

This also adds a new Makefile target called `show-version` which shows
the full version string (i.e. the value of `$VERSION`).
2024-03-29 15:01:33 -07:00
Jeff McCune
10493d754a (#66) Add httpbin to each project environment
The goal of this patch is to verify each project environment is wired up
to the ingress Gateway for the project stage.

This is a necessary step to eventually configure the VirtualService and
AuthorizationPolicy to only match on the `/dump/request` path of each
endpoint for troubleshooting.
2024-03-28 21:51:34 -07:00
19 changed files with 1121 additions and 239 deletions

View File

@@ -4,7 +4,7 @@ PROJ=holos
ORG_PATH=github.com/holos-run
REPO_PATH=$(ORG_PATH)/$(PROJ)
VERSION := $(shell grep "const Version " pkg/version/version.go | sed -E 's/.*"(.+)"$$/\1/')
VERSION := $(shell cat pkg/version/embedded/major pkg/version/embedded/minor pkg/version/embedded/patch | xargs printf "%s.%s.%s")
BIN_NAME := holos
DOCKER_REPO=quay.io/openinfrastructure/holos
@@ -39,6 +39,10 @@ bumpmajor: ## Bump the major version.
scripts/bump minor 0
scripts/bump patch 0
.PHONY: show-version
show-version: ## Print the full version.
@echo $(VERSION)
.PHONY: tidy
tidy: ## Tidy go module.
go mod tidy

View File

@@ -306,19 +306,10 @@ import "strings"
// "value"` for prefix-based match - `regex: "value"` for RE2
// style regex-based match
// (https://github.com/google/re2/wiki/Syntax).
uri?: ({} | {
exact: _
} | {
prefix: _
} | {
regex: _
}) & {
uri?: {
exact?: string
prefix?: string
// RE2 style regex-based match
// (https://github.com/google/re2/wiki/Syntax).
regex?: string
regex?: string
}
// withoutHeader has the same syntax with the header, but has

View File

@@ -17,6 +17,11 @@ import "encoding/yaml"
VirtualService?: [Name=_]: #VirtualService & {metadata: name: Name}
Issuer?: [Name=_]: #Issuer & {metadata: name: Name}
Gateway?: [Name=_]: #Gateway & {metadata: name: Name}
ConfigMap?: [Name=_]: #ConfigMap & {metadata: name: Name}
Deployment?: [_]: #Deployment
RequestAuthentication?: [_]: #RequestAuthentication
AuthorizationPolicy?: [_]: #AuthorizationPolicy
}
// apiObjectMap holds the marshalled representation of apiObjects

View File

@@ -0,0 +1,54 @@
package holos
// #MeshConfig provides the istio meshconfig in the config key given projects.
#MeshConfig: {
projects: #Projects
// clusterName is the value of the --cluster-name flag, the cluster currently being managed / rendered.
clusterName: string | *#ClusterName
// for extAuthzHttp extension providers
extensionProviderMap: [Name=_]: #ExtAuthzProxy & {name: Name}
// for other extension providers like zipkin
extensionProviderExtraMap: [Name=_]: {name: Name}
config: {
accessLogEncoding: string | *"JSON"
accessLogFile: string | *"/dev/stdout"
defaultConfig: {
discoveryAddress: string | *"istiod.istio-system.svc:15012"
tracing: zipkin: address: string | *"zipkin.istio-system:9411"
}
defaultProviders: metrics: [...string] | *["prometheus"]
enablePrometheusMerge: false | *true
rootNamespace: string | *"istio-system"
trustDomain: string | *"cluster.local"
extensionProviders: [
for x in extensionProviderMap {x},
for y in extensionProviderExtraMap {y},
]
}
}
// #ExtAuthzProxy defines the provider configuration for an istio external authorization auth proxy.
#ExtAuthzProxy: {
name: string
envoyExtAuthzHttp: {
headersToDownstreamOnDeny: [
"content-type",
"set-cookie",
]
headersToUpstreamOnAllow: [
"authorization",
"path",
"x-oidc-id-token",
]
includeAdditionalHeadersInCheck: "X-Auth-Request-Redirect": "%REQ(x-forwarded-proto)%://%REQ(:authority)%%REQ(:path)%%REQ(:query)%"
includeRequestHeadersInCheck: [
"authorization",
"cookie",
"x-forwarded-for",
]
port: 4180
service: string
}
}

View File

@@ -1,7 +1,5 @@
package holos
import "list"
spec: components: KubernetesObjectsList: [
#KubernetesObjects & {
metadata: name: "prod-secrets-namespaces"
@@ -9,7 +7,7 @@ spec: components: KubernetesObjectsList: [
apiObjects: {
// #ManagedNamespaces is the set of all namespaces across all clusters in the platform.
for k, ns in #ManagedNamespaces {
if list.Contains(ns.clusterNames, #ClusterName) {
if ns.clusters[#ClusterName] != _|_ {
Namespace: "\(k)": #Namespace & ns.namespace
}
}

View File

@@ -8,7 +8,7 @@ spec: components: HelmChartList: [
namespace: "istio-system"
chart: {
name: "base"
version: "1.20.3"
version: #IstioVersion
repository: {
name: "istio"
url: "https://istio-release.storage.googleapis.com/charts"

View File

@@ -50,8 +50,8 @@ let OBJECTS = #APIObjects & {
seccompProfile: type: "RuntimeDefault"
allowPrivilegeEscalation: false
runAsNonRoot: true
runAsUser: 1337
runAsGroup: 1337
runAsUser: 8192
runAsGroup: 8192
capabilities: drop: ["ALL"]
}}]
}

View File

@@ -52,6 +52,10 @@ spec: components: HelmChartList: [
}
}
apiObjectMap: OBJECTS.apiObjectMap
// Auth Proxy
apiObjectMap: _IngressAuthProxy.Deployment.apiObjectMap
// Auth Policy
apiObjectMap: _IngressAuthProxy.Policy.apiObjectMap
},
]

View File

@@ -2,7 +2,7 @@ package holos
#HelmChart: {
chart: {
version: "1.20.3"
version: #IstioVersion
repository: {
name: "istio"
url: "https://istio-release.storage.googleapis.com/charts"

View File

@@ -1,74 +1,9 @@
package holos
// Ingress Gateway default auth proxy
let Provider = _IngressAuthProxy.AuthProxySpec.provider
let Service = _IngressAuthProxy.service
#MeshConfig: extensionProviderMap: (Provider): envoyExtAuthzHttp: service: Service
// Istio meshconfig
// TODO: Generate per-project extauthz providers.
_MeshConfig: {
accessLogEncoding: "JSON"
accessLogFile: "/dev/stdout"
defaultConfig: {
discoveryAddress: "istiod.istio-system.svc:15012"
tracing: zipkin: address: "zipkin.istio-system:9411"
}
defaultProviders: metrics: ["prometheus"]
enablePrometheusMerge: true
// For PROXY PROTOCOL at the ingress gateway.
gatewayTopology: {
numTrustedProxies: 2
}
rootNamespace: "istio-system"
trustDomain: "cluster.local"
extensionProviders: [{
name: "cluster-trace"
zipkin: {
maxTagLength: 56
port: 9411
service: "zipkin.istio-system.svc"
}
}, {
name: "cluster-gatekeeper"
envoyExtAuthzHttp: {
headersToDownstreamOnDeny: [
"content-type",
"set-cookie",
]
headersToUpstreamOnAllow: [
"authorization",
"path",
"x-auth-request-user",
"x-auth-request-email",
"x-auth-request-access-token",
]
includeAdditionalHeadersInCheck: "X-Auth-Request-Redirect": "%REQ(x-forwarded-proto)%://%REQ(:authority)%%REQ(:path)%%REQ(:query)%"
includeRequestHeadersInCheck: [
"authorization",
"cookie",
"x-forwarded-for",
]
port: 4180
service: "oauth2-proxy.istio-ingress.svc.cluster.local"
}
}, {
name: "core-authorizer"
envoyExtAuthzHttp: {
headersToDownstreamOnDeny: [
"content-type",
"set-cookie",
]
headersToUpstreamOnAllow: [
"authorization",
"path",
"x-auth-request-user",
"x-auth-request-email",
"x-auth-request-access-token",
]
includeAdditionalHeadersInCheck: "X-Auth-Request-Redirect": "%REQ(x-forwarded-proto)%://%REQ(:authority)%%REQ(:path)%%REQ(:query)%"
includeRequestHeadersInCheck: [
"authorization",
"cookie",
"x-forwarded-for",
]
port: 4180
service: "oauth2-proxy.prod-core-system.svc.cluster.local"
}
}]
}
_MeshConfig: (#MeshConfig & {projects: _Projects}).config

View File

@@ -126,7 +126,7 @@ package holos
hub: "docker.io/istio"
// Default tag for Istio images.
tag: "1.20.3"
tag: #IstioVersion
// Variant of the image to use.
// Currently supported are: [debug, distroless]

View File

@@ -1,3 +1,281 @@
package holos
import "encoding/yaml"
#InstancePrefix: "prod-mesh"
#IstioVersion: "1.21.0"
// The ingress gateway auth proxy is used by multiple cue instances.
// AUTHPROXY configures one oauth2-proxy deployment for each host in each stage of a project. Multiple deployments per stage are used to narrow down the cookie domain.
_IngressAuthProxy: {
Name: "authproxy"
Namespace: "istio-ingress"
service: "\(Name).\(Namespace).svc.cluster.local"
AuthProxySpec: #AuthProxySpec & #Platform.authproxy
Domains: [DOMAIN=string]: {name: DOMAIN}
Domains: (#Platform.org.domain): _
Domains: "\(#ClusterName).\(#Platform.org.domain)": _
let Metadata = {
name: string
namespace: Namespace
labels: "app.kubernetes.io/name": name
labels: "app.kubernetes.io/part-of": "istio-ingressgateway"
...
}
let ProxyMetadata = Metadata & {name: Name}
let RedisMetadata = Metadata & {name: Name + "-redis"}
// Deployment represents the oauth2-proxy deployment
Deployment: #APIObjects & {
apiObjects: {
// oauth2-proxy
ExternalSecret: (Name): metadata: ProxyMetadata
// Place the ID token in a header that does not conflict with the Authorization header.
// Refer to: https://github.com/oauth2-proxy/oauth2-proxy/issues/1877#issuecomment-1364033723
ConfigMap: (Name): {
metadata: ProxyMetadata
data: "config.yaml": yaml.Marshal(AuthProxyConfig)
let AuthProxyConfig = {
injectResponseHeaders: [{
name: "x-oidc-id-token"
values: [{claim: "id_token"}]
}]
providers: [{
id: "Holos Platform"
name: "Holos Platform"
provider: "oidc"
scope: "openid profile email groups offline_access urn:zitadel:iam:org:domain:primary:\(AuthProxySpec.orgDomain)"
clientID: AuthProxySpec.clientID
clientSecretFile: "/dev/null"
code_challenge_method: "S256"
loginURLParameters: [{
default: ["force"]
name: "approval_prompt"
}]
oidcConfig: {
issuerURL: AuthProxySpec.issuer
audienceClaims: ["aud"]
emailClaim: "email"
groupsClaim: "groups"
userIDClaim: "sub"
}
}]
server: BindAddress: ":4180"
upstreamConfig: upstreams: [{
id: "static://200"
path: "/"
static: true
staticCode: 200
}]
}
}
Deployment: (Name): #Deployment & {
metadata: ProxyMetadata
spec: {
replicas: 1
selector: matchLabels: ProxyMetadata.labels
template: {
metadata: labels: ProxyMetadata.labels
metadata: labels: #IstioSidecar
spec: {
securityContext: seccompProfile: type: "RuntimeDefault"
containers: [{
image: "quay.io/oauth2-proxy/oauth2-proxy:v7.6.0"
imagePullPolicy: "IfNotPresent"
name: "oauth2-proxy"
volumeMounts: [{
name: "config"
mountPath: "/config"
readOnly: true
}]
args: [
// callback url is proxy prefix + /callback
"--proxy-prefix=" + AuthProxySpec.proxyPrefix,
"--email-domain=*",
"--session-store-type=redis",
"--redis-connection-url=redis://\(RedisMetadata.name):6379",
"--cookie-refresh=12h",
"--cookie-expire=2160h",
"--cookie-secure=true",
"--cookie-name=__Secure-\(#ClusterName)-ingress-\(Name)",
"--cookie-samesite=lax",
for domain in Domains {"--cookie-domain=.\(domain.name)"},
for domain in Domains {"--cookie-domain=\(domain.name)"},
for domain in Domains {"--whitelist-domain=.\(domain.name)"},
for domain in Domains {"--whitelist-domain=\(domain.name)"},
"--cookie-csrf-per-request=true",
"--cookie-csrf-expire=120s",
// will skip authentication for OPTIONS requests
"--skip-auth-preflight=true",
"--real-client-ip-header=X-Forwarded-For",
"--skip-provider-button=true",
"--auth-logging",
"--alpha-config=/config/config.yaml",
]
env: [{
name: "OAUTH2_PROXY_COOKIE_SECRET"
// echo '{"cookiesecret":"'$(LC_ALL=C tr -dc "[:alpha:]" </dev/random | tr '[:upper:]' '[:lower:]' | head -c 32)'"}' | holos create secret -n istio-ingress --append-hash=false --data-stdin authproxy
valueFrom: secretKeyRef: {
key: "cookiesecret"
name: Name
}
}]
ports: [{
containerPort: 4180
protocol: "TCP"
}]
securityContext: {
seccompProfile: type: "RuntimeDefault"
allowPrivilegeEscalation: false
runAsNonRoot: true
runAsUser: 8192
runAsGroup: 8192
capabilities: drop: ["ALL"]
}
}]
volumes: [{name: "config", configMap: name: Name}]
}
}
}
}
Service: (Name): #Service & {
metadata: ProxyMetadata
spec: selector: ProxyMetadata.labels
spec: ports: [
{port: 4180, targetPort: 4180, protocol: "TCP", name: "http"},
]
}
VirtualService: (Name): #VirtualService & {
metadata: ProxyMetadata
spec: hosts: ["*"]
spec: gateways: ["istio-ingress/default"]
spec: http: [{
match: [{uri: prefix: AuthProxySpec.proxyPrefix}]
route: [{
destination: host: Name
destination: port: number: 4180
}]
}]
}
// redis
ConfigMap: (RedisMetadata.name): {
metadata: RedisMetadata
data: "redis.conf": """
maxmemory 128mb
maxmemory-policy allkeys-lru
"""
}
Deployment: (RedisMetadata.name): {
metadata: RedisMetadata
spec: {
selector: matchLabels: RedisMetadata.labels
template: {
metadata: labels: RedisMetadata.labels
metadata: labels: #IstioSidecar
spec: securityContext: seccompProfile: type: "RuntimeDefault"
spec: {
containers: [{
command: [
"redis-server",
"/redis-master/redis.conf",
]
env: [{
name: "MASTER"
value: "true"
}]
image: "quay.io/holos/redis:7.2.4"
livenessProbe: {
initialDelaySeconds: 15
tcpSocket: port: "redis"
}
name: "redis"
ports: [{
containerPort: 6379
name: "redis"
}]
readinessProbe: {
exec: command: [
"redis-cli",
"ping",
]
initialDelaySeconds: 5
}
resources: limits: cpu: "0.5"
securityContext: {
seccompProfile: type: "RuntimeDefault"
allowPrivilegeEscalation: false
capabilities: drop: ["ALL"]
runAsNonRoot: true
runAsUser: 999
runAsGroup: 999
}
volumeMounts: [{
mountPath: "/redis-master-data"
name: "data"
}, {
mountPath: "/redis-master"
name: "config"
}]
}]
volumes: [{
emptyDir: {}
name: "data"
}, {
configMap: name: RedisMetadata.name
name: "config"
}]
}
}
}
}
Service: (RedisMetadata.name): #Service & {
metadata: RedisMetadata
spec: selector: RedisMetadata.labels
spec: type: "ClusterIP"
spec: ports: [{
name: "redis"
port: 6379
protocol: "TCP"
targetPort: 6379
}]
}
}
}
// Policy represents the AuthorizationPolicy and RequestAuthentication policy
Policy: #APIObjects & {
apiObjects: {
RequestAuthentication: (Name): #RequestAuthentication & {
metadata: Metadata & {name: Name}
spec: jwtRules: [{
audiences: ["\(AuthProxySpec.projectID)"]
forwardOriginalToken: true
fromHeaders: [{name: AuthProxySpec.idTokenHeader}]
issuer: AuthProxySpec.issuer
}]
spec: selector: matchLabels: istio: "ingressgateway"
}
AuthorizationPolicy: "\(Name)-custom": {
metadata: Metadata & {name: "\(Name)-custom"}
spec: {
action: "CUSTOM"
provider: name: AuthProxySpec.provider
// bypass the external authorizer when the id token is already in the request.
// the RequestAuthentication rule will verify the token.
rules: [{when: [
{key: "request.headers[\(AuthProxySpec.idTokenHeader)]", notValues: ["*"]},
// TODO: Define a way for hosts to be excluded.
{key: "request.headers[host]", notValues: [AuthProxySpec.issuerHost]},
]}]
selector: matchLabels: istio: "ingressgateway"
}
}
}
}
}

View File

@@ -172,20 +172,14 @@ package holos
enabled: true
// Indicates whether to enable WebAssembly runtime for stats filter.
wasmEnabled: false
// overrides stats EnvoyFilter configuration.
configOverride: {
gateway: {}
inboundSidecar: {}
outboundSidecar: {}
}
}
// stackdriver filter settings.
stackdriver: {
enabled: false
logging: false
monitoring: false
topology: false // deprecated. setting this to true will have no effect, as this option is no longer supported.
disableOutbound: false
enabled: false
logging: false
monitoring: false
topology: false // deprecated. setting this to true will have no effect, as this option is no longer supported.
// configOverride parts give you the ability to override the low level configuration params passed to envoy filter.
configOverride: {}
@@ -248,7 +242,7 @@ package holos
// Dev builds from prow are on gcr.io
hub: string | *"docker.io/istio"
// Default tag for Istio images.
tag: string | *"1.20.3"
tag: #IstioVersion
// Variant of the image to use.
// Currently supported are: [debug, distroless]
variant: string | *""

View File

@@ -1,11 +1,27 @@
package holos
#Project: authProxyOrgDomain: "openinfrastructure.co"
_Projects: #Projects & {
// The platform project is required and where platform services reside. ArgoCD, Grafana, Prometheus, etc...
platform: {
resourceId: 257713952794870157
clusters: k1: _
clusters: k2: _
stages: dev: authProxyClientID: "260887327029658738@holos_platform"
stages: prod: authProxyClientID: "260887404288738416@holos_platform"
// Services hosted in the platform project
hosts: argocd: _
hosts: grafana: _
hosts: prometheus: _
}
holos: {
clusters: {
k1: _
k2: _
}
resourceId: 260446255245690199
clusters: k1: _
clusters: k2: _
stages: dev: authProxyClientID: "260505543108527218@holos"
stages: prod: authProxyClientID: "260506079325128023@holos"
environments: {
prod: stage: "prod"
dev: stage: "dev"
@@ -16,16 +32,21 @@ _Projects: #Projects & {
}
iam: {
resourceId: 260582480954787159
clusters: {
core1: _
core2: _
}
stages: dev: authProxyClientID: "260582521186616432@iam"
stages: prod: authProxyClientID: "260582633862399090@iam"
}
}
// Manage namespaces for platform project environments.
for project in _Projects {
for ns in project.managedNamespaces {
#ManagedNamespaces: (ns.namespace.metadata.name): ns
if ns.clusters[#ClusterName] != _|_ {
#ManagedNamespaces: (ns.namespace.metadata.name): ns
}
}
}

View File

@@ -0,0 +1,34 @@
package holos
#MeshConfig: {
projects: _
clusterName: _
extensionProviderExtraMap: {
"cluster-trace": {
zipkin: {
maxTagLength: 56
port: 9411
service: "zipkin.istio-system.svc"
}
}
}
config: {
// For PROXY PROTOCOL at the ingress gateway.
gatewayTopology: {
numTrustedProxies: 2
}
}
// Configure an ExtAuthzHttp provider for each stage's authproxy
for Project in projects {
if Project.clusters[clusterName] != _|_ {
for Stage in Project.stages {
extensionProviderMap: (Stage.extAuthzProviderName): #ExtAuthzProxy & {
envoyExtAuthzHttp: service: "authproxy.\(Stage.namespace).svc.cluster.local"
}
}
}
}
}

View File

@@ -1,79 +1,108 @@
package holos
import "strings"
import "encoding/yaml"
// Platform level definition of a project.
#Project: {
name: string
// All projects have at least a prod environment and stage.
stages: prod: stageSegments: []
environments: prod: stage: "prod"
environments: prod: dnsSegments: []
stages: prod: _
stages: dev: _
// Short hostnames to construct fqdns.
environments: prod: envSegments: []
stages: dev: _
environments: dev: stage: "dev"
environments: dev: envSegments: []
// Ensure at least the project name is a short hostname. Additional may be added.
hosts: (name): _
// environments share the stage segments of their stage.
environments: [_]: {
stage: string
stageSegments: stages[stage].stageSegments
}
}
#ProjectTemplate: {
project: #Project
let Project = project
// ExtAuthzHosts maps host names to the backend environment namespace for ExtAuthz.
let ExtAuthzHosts = {
// GatewayServers maps Gateway spec.servers #GatewayServer values indexed by stage then name.
let GatewayServers = {
// Initialize all stages, even if they have no environments.
for stage in project.stages {
(stage.name): {}
}
// For each stage, construct entries for the Gateway spec.servers.hosts field.
for env in project.environments {
(env.stage): {
for host in project.hosts {
let NAME = "https-\(project.name)-\(env.name)-\(host.name)"
let SEGMENTS = [host.name] + env.dnsSegments + [#Platform.org.domain]
let HOST = strings.Join(SEGMENTS, ".")
(NAME): #GatewayServer & {
hosts: ["\(env.namespace)/\(HOST)"]
// name must be unique across all servers in all gateways
port: name: NAME
port: number: 443
port: protocol: "HTTPS"
// TODO: Manage a certificate with each host in the dns alt names.
tls: credentialName: HOST
let Env = env
let Stage = project.stages[env.stage]
for host in (#EnvHosts & {project: Project, env: Env}).hosts {
(host.name): #GatewayServer & {
hosts: [
"\(env.namespace)/\(host.name)",
// Allow the authproxy VirtualService to match the project.authProxyPrefix path.
"\(Stage.namespace)/\(host.name)",
]
port: host.port
tls: credentialName: host.name
tls: mode: "SIMPLE"
}
for cluster in project.clusters {
let NAME = "https-\(cluster.name)-\(project.name)-\(env.name)-\(host.name)"
let SEGMENTS = [host.name] + env.dnsSegments + [cluster.name, #Platform.org.domain]
let HOST = strings.Join(SEGMENTS, ".")
(NAME): #GatewayServer & {
hosts: ["\(env.namespace)/\(HOST)"]
// name must be unique across all servers in all gateways
port: name: NAME
port: number: 443
port: protocol: "HTTPS"
// TODO: Manage a certificate with each host in the dns alt names.
tls: credentialName: HOST
tls: mode: "SIMPLE"
}
}
}
}
}
}
workload: resources: {
for stage in project.stages {
// Istio Gateway
"\(stage.slug)-gateway": #KubernetesObjects & {
apiObjectMap: (#APIObjects & {
apiObjects: Gateway: (stage.slug): #Gateway & {
spec: servers: [for host in ExtAuthzHosts[stage.name] {host}]
// Provide resources only if the project is managed on --cluster-name
if project.clusters[#ClusterName] != _|_ {
for stage in project.stages {
let Stage = stage
// Istio Gateway
"\(stage.slug)-gateway": #KubernetesObjects & {
apiObjectMap: (#APIObjects & {
apiObjects: Gateway: (stage.slug): #Gateway & {
spec: servers: [for server in GatewayServers[stage.name] {server}]
}
for host in GatewayServers[stage.name] {
apiObjects: ExternalSecret: (host.tls.credentialName): metadata: namespace: "istio-ingress"
}
}).apiObjectMap
}
// Manage auth-proxy in each stage
if project.features.authproxy.enabled {
"\(stage.slug)-authproxy": #KubernetesObjects & {
apiObjectMap: (#APIObjects & {
apiObjects: (AUTHPROXY & {stage: Stage, project: Project, servers: GatewayServers[stage.name]}).apiObjects
}).apiObjectMap
}
for host in ExtAuthzHosts[stage.name] {
apiObjects: ExternalSecret: (host.tls.credentialName): metadata: namespace: "istio-ingress"
for Env in project.environments if Env.stage == stage.name {
"\(Env.slug)-authpolicy": #KubernetesObjects & {
// Manage auth policy in each env
apiObjectMap: (#APIObjects & {
apiObjects: (AUTHPOLICY & {env: Env, project: Project, servers: GatewayServers[stage.name]}).apiObjects
}).apiObjectMap
}
}
}).apiObjectMap
}
// Manage httpbin in each environment
if project.features.httpbin.enabled {
for Env in project.environments if Env.stage == stage.name {
"\(Env.slug)-httpbin": #KubernetesObjects & {
let Project = project
apiObjectMap: (#APIObjects & {
apiObjects: (HTTPBIN & {env: Env, project: Project}).apiObjects
}).apiObjectMap
}
}
}
}
}
}
@@ -82,7 +111,7 @@ import "strings"
for stage in project.stages {
"\(stage.slug)-certs": #KubernetesObjects & {
apiObjectMap: (#APIObjects & {
for host in ExtAuthzHosts[stage.name] {
for host in GatewayServers[stage.name] {
let CN = host.tls.credentialName
apiObjects: Certificate: (CN): #Certificate & {
metadata: name: CN
@@ -98,83 +127,405 @@ import "strings"
}
}
}
}).apiObjectMap
}
}
}
}
// #GatewayServer defines the value of the istio Gateway.spec.servers field.
#GatewayServer: {
// The ip or the Unix domain socket to which the listener should
// be bound to.
bind?: string
defaultEndpoint?: string
let HTTPBIN = {
name: string | *"httpbin"
project: #Project
env: #Environment
let Name = name
let Stage = project.stages[env.stage]
// One or more hosts exposed by this gateway.
hosts: [...string]
// An optional name of the server, when set must be unique across
// all servers.
name?: string
// The Port on which the proxy should listen for incoming
// connections.
port: {
// Label assigned to the port.
name: string
// A valid non-negative integer port number.
number: int
// The protocol exposed on the port.
protocol: string
targetPort?: int
let Metadata = {
name: Name
namespace: env.namespace
labels: app: name
}
let Labels = {
"app.kubernetes.io/name": Name
"app.kubernetes.io/instance": env.slug
"app.kubernetes.io/part-of": env.project
"security.holos.run/authproxy": Stage.extAuthzProviderName
}
// Set of TLS related options that govern the server's behavior.
tls?: {
// REQUIRED if mode is `MUTUAL` or `OPTIONAL_MUTUAL`.
caCertificates?: string
apiObjects: {
Deployment: (Name): #Deployment & {
metadata: Metadata
// Optional: If specified, only support the specified cipher list.
cipherSuites?: [...string]
// For gateways running on Kubernetes, the name of the secret that
// holds the TLS certs including the CA certificates.
credentialName?: string
// If set to true, the load balancer will send a 301 redirect for
// all http connections, asking the clients to use HTTPS.
httpsRedirect?: bool
// Optional: Maximum TLS protocol version.
maxProtocolVersion?: "TLS_AUTO" | "TLSV1_0" | "TLSV1_1" | "TLSV1_2" | "TLSV1_3"
// Optional: Minimum TLS protocol version.
minProtocolVersion?: "TLS_AUTO" | "TLSV1_0" | "TLSV1_1" | "TLSV1_2" | "TLSV1_3"
// Optional: Indicates whether connections to this port should be
// secured using TLS.
mode?: "PASSTHROUGH" | "SIMPLE" | "MUTUAL" | "AUTO_PASSTHROUGH" | "ISTIO_MUTUAL" | "OPTIONAL_MUTUAL"
// REQUIRED if mode is `SIMPLE` or `MUTUAL`.
privateKey?: string
// REQUIRED if mode is `SIMPLE` or `MUTUAL`.
serverCertificate?: string
// A list of alternate names to verify the subject identity in the
// certificate presented by the client.
subjectAltNames?: [...string]
// An optional list of hex-encoded SHA-256 hashes of the
// authorized client certificates.
verifyCertificateHash?: [...string]
// An optional list of base64-encoded SHA-256 hashes of the SPKIs
// of authorized client certificates.
verifyCertificateSpki?: [...string]
spec: selector: matchLabels: Metadata.labels
spec: template: {
metadata: labels: Metadata.labels & #IstioSidecar & Labels
spec: securityContext: seccompProfile: type: "RuntimeDefault"
spec: containers: [{
name: Name
image: "quay.io/holos/mccutchen/go-httpbin"
ports: [{containerPort: 8080}]
securityContext: {
seccompProfile: type: "RuntimeDefault"
allowPrivilegeEscalation: false
runAsNonRoot: true
runAsUser: 8192
runAsGroup: 8192
capabilities: drop: ["ALL"]
}}]
}
}
Service: (Name): #Service & {
metadata: Metadata
spec: selector: Metadata.labels
spec: ports: [
{port: 80, targetPort: 8080, protocol: "TCP", name: "http"},
]
}
VirtualService: (Name): #VirtualService & {
metadata: Metadata
let Project = project
let Env = env
spec: hosts: [for host in (#EnvHosts & {project: Project, env: Env}).hosts {host.name}]
spec: gateways: ["istio-ingress/\(env.stageSlug)"]
spec: http: [{route: [{destination: host: Name}]}]
}
}
}
// AUTHPROXY configures one oauth2-proxy deployment for each host in each stage of a project. Multiple deployments per stage are used to narrow down the cookie domain.
let AUTHPROXY = {
name: string | *"authproxy"
project: #Project
stage: #Stage
servers: {}
let Name = name
let Project = project
let Stage = stage
let AuthProxySpec = #AuthProxySpec & {
namespace: stage.namespace
projectID: project.resourceId
clientID: stage.authProxyClientID
orgDomain: project.authProxyOrgDomain
provider: stage.extAuthzProviderName
}
let Metadata = {
name: Name
namespace: stage.namespace
labels: {
"app.kubernetes.io/name": name
"app.kubernetes.io/instance": stage.name
"app.kubernetes.io/part-of": stage.project
}
}
let RedisMetadata = {
name: Name + "-redis"
namespace: stage.namespace
labels: {
"app.kubernetes.io/name": name
"app.kubernetes.io/instance": stage.name
"app.kubernetes.io/part-of": stage.project
}
}
apiObjects: {
// oauth2-proxy
ExternalSecret: (Name): metadata: Metadata
// Place the ID token in a header that does not conflict with the Authorization header.
// Refer to: https://github.com/oauth2-proxy/oauth2-proxy/issues/1877#issuecomment-1364033723
ConfigMap: (Name): {
metadata: Metadata
data: "config.yaml": yaml.Marshal(AuthProxyConfig)
let AuthProxyConfig = {
injectResponseHeaders: [{
name: AuthProxySpec.idTokenHeader
values: [{claim: "id_token"}]
}]
providers: [{
id: "Holos Platform"
name: "Holos Platform"
provider: "oidc"
scope: "openid profile email groups offline_access urn:zitadel:iam:org:domain:primary:\(AuthProxySpec.orgDomain)"
clientID: AuthProxySpec.clientID
clientSecretFile: "/dev/null"
code_challenge_method: "S256"
loginURLParameters: [{
default: ["force"]
name: "approval_prompt"
}]
oidcConfig: {
issuerURL: AuthProxySpec.issuer
audienceClaims: ["aud"]
emailClaim: "email"
groupsClaim: "groups"
userIDClaim: "sub"
}
}]
server: BindAddress: ":4180"
upstreamConfig: upstreams: [{
id: "static://200"
path: "/"
static: true
staticCode: 200
}]
}
}
Deployment: (Name): #Deployment & {
metadata: Metadata
// project.dev.example.com, project.dev.k1.example.com, project.dev.k2.example.com
let StageDomains = {
for host in (#StageDomains & {project: Project, stage: Stage}).hosts {
(host.name): host
}
}
spec: {
replicas: 1
selector: matchLabels: Metadata.labels
template: {
metadata: labels: Metadata.labels
metadata: labels: #IstioSidecar
spec: {
securityContext: seccompProfile: type: "RuntimeDefault"
containers: [{
image: "quay.io/oauth2-proxy/oauth2-proxy:v7.6.0"
imagePullPolicy: "IfNotPresent"
name: "oauth2-proxy"
volumeMounts: [{
name: "config"
mountPath: "/config"
readOnly: true
}]
args: [
// callback url is proxy prefix + /callback
"--proxy-prefix=" + AuthProxySpec.proxyPrefix,
"--email-domain=*",
"--session-store-type=redis",
"--redis-connection-url=redis://\(RedisMetadata.name):6379",
"--cookie-refresh=12h",
"--cookie-expire=2160h",
"--cookie-secure=true",
"--cookie-name=__Secure-\(stage.slug)-\(Name)",
"--cookie-samesite=lax",
for domain in StageDomains {"--cookie-domain=.\(domain.name)"},
for domain in StageDomains {"--cookie-domain=\(domain.name)"},
for domain in StageDomains {"--whitelist-domain=.\(domain.name)"},
for domain in StageDomains {"--whitelist-domain=\(domain.name)"},
"--cookie-csrf-per-request=true",
"--cookie-csrf-expire=120s",
// will skip authentication for OPTIONS requests
"--skip-auth-preflight=true",
"--real-client-ip-header=X-Forwarded-For",
"--skip-provider-button=true",
"--auth-logging",
"--alpha-config=/config/config.yaml",
]
env: [{
name: "OAUTH2_PROXY_COOKIE_SECRET"
// echo '{"cookiesecret":"'$(LC_ALL=C tr -dc "[:alpha:]" </dev/random | tr '[:upper:]' '[:lower:]' | head -c 32)'"}' | holos create secret -n dev-holos-system --append-hash=false --data-stdin authproxy
valueFrom: secretKeyRef: {
key: "cookiesecret"
name: Name
}
}]
ports: [{
containerPort: 4180
protocol: "TCP"
}]
securityContext: {
seccompProfile: type: "RuntimeDefault"
allowPrivilegeEscalation: false
runAsNonRoot: true
runAsUser: 8192
runAsGroup: 8192
capabilities: drop: ["ALL"]
}
}]
volumes: [{name: "config", configMap: name: Name}]
}
}
}
}
Service: (Name): #Service & {
metadata: Metadata
spec: selector: Metadata.labels
spec: ports: [
{port: 4180, targetPort: 4180, protocol: "TCP", name: "http"},
]
}
VirtualService: (Name): #VirtualService & {
metadata: Metadata
spec: hosts: ["*"]
spec: gateways: ["istio-ingress/\(stage.slug)"]
spec: http: [{
match: [{uri: prefix: AuthProxySpec.proxyPrefix}]
route: [{
destination: host: Name
destination: port: number: 4180
}]
}]
}
// redis
ConfigMap: (RedisMetadata.name): {
metadata: RedisMetadata
data: "redis.conf": """
maxmemory 128mb
maxmemory-policy allkeys-lru
"""
}
Deployment: (RedisMetadata.name): {
metadata: RedisMetadata
spec: {
selector: matchLabels: RedisMetadata.labels
template: {
metadata: labels: RedisMetadata.labels
metadata: labels: #IstioSidecar
spec: securityContext: seccompProfile: type: "RuntimeDefault"
spec: {
containers: [{
command: [
"redis-server",
"/redis-master/redis.conf",
]
env: [{
name: "MASTER"
value: "true"
}]
image: "quay.io/holos/redis:7.2.4"
livenessProbe: {
initialDelaySeconds: 15
tcpSocket: port: "redis"
}
name: "redis"
ports: [{
containerPort: 6379
name: "redis"
}]
readinessProbe: {
exec: command: [
"redis-cli",
"ping",
]
initialDelaySeconds: 5
}
resources: limits: cpu: "0.5"
securityContext: {
seccompProfile: type: "RuntimeDefault"
allowPrivilegeEscalation: false
capabilities: drop: ["ALL"]
runAsNonRoot: true
runAsUser: 999
runAsGroup: 999
}
volumeMounts: [{
mountPath: "/redis-master-data"
name: "data"
}, {
mountPath: "/redis-master"
name: "config"
}]
}]
volumes: [{
emptyDir: {}
name: "data"
}, {
configMap: name: RedisMetadata.name
name: "config"
}]
}
}
}
}
Service: (RedisMetadata.name): #Service & {
metadata: RedisMetadata
spec: selector: RedisMetadata.labels
spec: type: "ClusterIP"
spec: ports: [{
name: "redis"
port: 6379
protocol: "TCP"
targetPort: 6379
}]
}
}
}
// AUTHPOLICY configures the baseline AuthorizationPolicy and RequestAuthentication policy for each stage of each project.
let AUTHPOLICY = {
project: #Project
env: #Environment
let Name = "\(stage.slug)-authproxy"
let Project = project
let stage = project.stages[env.stage]
let Env = env
let AuthProxySpec = #AuthProxySpec & {
namespace: stage.namespace
projectID: project.resourceId
clientID: stage.authProxyClientID
orgDomain: project.authProxyOrgDomain
provider: stage.extAuthzProviderName
}
let Metadata = {
name: string
namespace: env.namespace
labels: {
"app.kubernetes.io/name": name
"app.kubernetes.io/instance": stage.name
"app.kubernetes.io/part-of": stage.project
}
}
// Collect all the hosts associated with the stage
let Hosts = {
for HOST in (#EnvHosts & {project: Project, env: Env}).hosts {
(HOST.name): HOST
}
}
// HostList is a list of hosts for AuthorizationPolicy rules
let HostList = [
for host in Hosts {host.name},
for host in Hosts {host.name + ":*"},
]
let MatchLabels = {"security.holos.run/authproxy": AuthProxySpec.provider}
apiObjects: {
RequestAuthentication: (Name): #RequestAuthentication & {
metadata: Metadata & {name: Name}
spec: jwtRules: [{
audiences: [AuthProxySpec.clientID]
forwardOriginalToken: true
fromHeaders: [{name: AuthProxySpec.idTokenHeader}]
issuer: AuthProxySpec.issuer
}]
spec: selector: matchLabels: MatchLabels
}
AuthorizationPolicy: "\(Name)-custom": {
metadata: Metadata & {name: "\(Name)-custom"}
spec: {
action: "CUSTOM"
// send the request to the auth proxy
provider: name: AuthProxySpec.provider
rules: [{
to: [{operation: hosts: HostList}]
when: [
{
key: "request.headers[\(AuthProxySpec.idTokenHeader)]"
notValues: ["*"]
},
{
key: "request.headers[host]"
notValues: [AuthProxySpec.issuerHost]
},
]}]
selector: matchLabels: MatchLabels
}
}
}
}

View File

@@ -2,11 +2,18 @@ package holos
import h "github.com/holos-run/holos/api/v1alpha1"
import "strings"
// #Projects is a map of all the projects in the platform.
#Projects: [Name=_]: #Project & {name: Name}
// The platform project is required and where platform services reside. ArgoCD, Grafana, Prometheus, etc...
#Projects: platform: _
#Project: {
name: string
// resourceId is the zitadel project Resource ID
resourceId: number
let ProjectName = name
description: string
environments: [Name=string]: #Environment & {
@@ -17,11 +24,19 @@ import h "github.com/holos-run/holos/api/v1alpha1"
name: Name
project: ProjectName
}
domain: string | *#Platform.org.domain
// authProxyOrgDomain is the primary org domain for zitadel.
authProxyOrgDomain: string | *#Platform.org.domain
// authProxyIssuer is the issuer url
authProxyIssuer: string | *"https://login.\(#Platform.org.domain)"
// hosts are short hostnames to configure for the project.
// Each value is routed to every environment in the project as a dns prefix.
hosts: [Name=string]: #Host & {name: Name}
// clusters are the cluster names the project is configured on.
clusters: [Name=string]: #Cluster & {name: Name}
clusterNames: [for c in clusters {c.name}]
// managedNamespaces ensures project namespaces have SecretStores that can sync ExternalSecrets from the provisioner cluster.
managedNamespaces: {
@@ -46,6 +61,8 @@ import h "github.com/holos-run/holos/api/v1alpha1"
// features is YAGNI maybe?
features: [Name=string]: #Feature & {name: Name}
features: authproxy: _
features: httpbin: _
}
// #Cluster defines a cluster
@@ -61,22 +78,56 @@ import h "github.com/holos-run/holos/api/v1alpha1"
stage: string | "dev" | "prod"
slug: "\(name)-\(project)"
namespace: "\(name)-\(project)"
dnsSegments: [...string] | *[name]
stageSlug: "\(stage)-\(project)"
// envSegments are the env portion of the dns segments
envSegments: [...string] | *[name]
// stageSegments are the stage portion of the dns segments
stageSegments: [...string] | *[stage]
// #host provides a hostname
// Refer to: https://github.com/holos-run/holos/issues/66#issuecomment-2027562626
#host: {
name: string
cluster?: string
clusterSegments: [...string]
if cluster != _|_ {
clusterSegments: [cluster]
}
let SEGMENTS = envSegments + [name] + stageSegments + clusterSegments + [#Platform.org.domain]
let NAMESEGMENTS = ["https"] + SEGMENTS
host: {
name: strings.Join(SEGMENTS, ".")
port: {
name: strings.Replace(strings.Join(NAMESEGMENTS, "-"), ".", "-", -1)
number: 443
protocol: "HTTPS"
}
}
}
}
#Stage: {
name: string
project: string
slug: "\(name)-\(project)"
// namespace is the system namespace for the project stage
namespace: "\(name)-\(project)-system"
// Manage a system namespace for each stage
namespaces: [Name=_]: name: Name
namespaces: "\(name)-\(project)-system": _
namespaces: (namespace): _
// stageSegments are the stage portion of the dns segments
stageSegments: [...string] | *[name]
// authProxyClientID is the ClientID registered with the oidc issuer.
authProxyClientID: string
// extAuthzProviderName is the provider name in the mesh config
extAuthzProviderName: "\(slug)-authproxy"
}
#Feature: {
name: string
description: string
enabled: *true | false
enabled: true | *false
}
#ProjectTemplate: {
@@ -92,3 +143,58 @@ import h "github.com/holos-run/holos/api/v1alpha1"
metadata: name: Name
}
}
// #EnvHosts provides hostnames given a project and environment.
// Refer to https://github.com/holos-run/holos/issues/66#issuecomment-2027562626
#EnvHosts: {
project: #Project & {name: env.project}
env: #Environment
hosts: {
for host in project.hosts {
// globally scoped hostname
let HOST = (env.#host & {name: host.name}).host
(HOST.name): HOST
// cluster scoped hostname
for Cluster in project.clusters {
let HOST = (env.#host & {name: host.name, cluster: Cluster.name}).host
(HOST.name): HOST
}
}
}
}
// #StageDomains provides hostnames given a project and stage. Primarily for the
// auth proxy.
// Refer to https://github.com/holos-run/holos/issues/66#issuecomment-2027562626
#StageDomains: {
// names are the leading prefix names to create hostnames for.
// this is a two level list to support strings.Join()
prefixes: [...[...string]] | *[[]]
stage: #Stage
project: #Project & {
name: stage.project
}
// blank segment for the global domain plus each cluster in the project.
let ClusterSegments = [[], for cluster in project.clusters {[cluster.name]}]
hosts: {
for prefix in prefixes {
for ClusterSegment in ClusterSegments {
let SEGMENTS = prefix + [project.name] + stage.stageSegments + ClusterSegment + [project.domain]
let NAMESEGMENTS = ["https"] + SEGMENTS
let HOSTNAME = strings.Join(SEGMENTS, ".")
(HOSTNAME): {
name: HOSTNAME
port: {
name: strings.Replace(strings.Join(NAMESEGMENTS, "-"), ".", "-", -1)
number: 443
protocol: "HTTPS"
}
}
}
}
}
}

View File

@@ -13,6 +13,8 @@ import (
crt "cert-manager.io/certificate/v1"
gw "networking.istio.io/gateway/v1beta1"
vs "networking.istio.io/virtualservice/v1beta1"
ra "security.istio.io/requestauthentication/v1"
ap "security.istio.io/authorizationpolicy/v1"
pg "postgres-operator.crunchydata.com/postgrescluster/v1beta1"
)
@@ -63,19 +65,21 @@ _apiVersion: "holos.run/v1alpha1"
#ClusterRoleBinding: #ClusterObject & rbacv1.#ClusterRoleBinding
#ClusterIssuer: #ClusterObject & ci.#ClusterIssuer & {...}
#Issuer: #NamespaceObject & is.#Issuer
#Role: #NamespaceObject & rbacv1.#Role
#RoleBinding: #NamespaceObject & rbacv1.#RoleBinding
#ConfigMap: #NamespaceObject & corev1.#ConfigMap
#ServiceAccount: #NamespaceObject & corev1.#ServiceAccount
#Pod: #NamespaceObject & corev1.#Pod
#Service: #NamespaceObject & corev1.#Service
#Job: #NamespaceObject & batchv1.#Job
#CronJob: #NamespaceObject & batchv1.#CronJob
#Deployment: #NamespaceObject & appsv1.#Deployment
#VirtualService: #NamespaceObject & vs.#VirtualService
#Certificate: #NamespaceObject & crt.#Certificate
#PostgresCluster: #NamespaceObject & pg.#PostgresCluster
#Issuer: #NamespaceObject & is.#Issuer
#Role: #NamespaceObject & rbacv1.#Role
#RoleBinding: #NamespaceObject & rbacv1.#RoleBinding
#ConfigMap: #NamespaceObject & corev1.#ConfigMap
#ServiceAccount: #NamespaceObject & corev1.#ServiceAccount
#Pod: #NamespaceObject & corev1.#Pod
#Service: #NamespaceObject & corev1.#Service
#Job: #NamespaceObject & batchv1.#Job
#CronJob: #NamespaceObject & batchv1.#CronJob
#Deployment: #NamespaceObject & appsv1.#Deployment
#VirtualService: #NamespaceObject & vs.#VirtualService
#RequestAuthentication: #NamespaceObject & ra.#RequestAuthentication
#AuthorizationPolicy: #NamespaceObject & ap.#AuthorizationPolicy
#Certificate: #NamespaceObject & crt.#Certificate
#PostgresCluster: #NamespaceObject & pg.#PostgresCluster
#Gateway: #NamespaceObject & gw.#Gateway & {
metadata: namespace: string | *"istio-ingress"
@@ -108,7 +112,7 @@ _apiVersion: "holos.run/v1alpha1"
_name: string
metadata: {
name: _name
namespace: #TargetNamespace
namespace: string | *#TargetNamespace
}
spec: {
refreshInterval: string | *"1h"
@@ -221,6 +225,32 @@ _apiVersion: "holos.run/v1alpha1"
services: [ID=_]: {
name: string & ID
}
// authproxy configures the auth proxy attached to the default ingress gateway in the istio-ingress namespace.
authproxy: #AuthProxySpec & {
namespace: "istio-ingress"
provider: "ingressauth"
}
}
#AuthProxySpec: {
// projectID is the zitadel project resource id.
projectID: number
// clientID is the zitadel application client id.
clientID: string
// namespace is the namespace
namespace: string
// provider is the istio extension provider name in the mesh config.
provider: string
// orgDomain is the zitadel organization domain for logins.
orgDomain: string | *#Platform.org.domain
// issuerHost is the Host: header value of the oidc issuer
issuerHost: string | *"login.\(#Platform.org.domain)"
// issuer is the oidc identity provider issuer url
issuer: string | *"https://\(issuerHost)"
// path is the oauth2-proxy --proxy-prefix value. The default callback url is the Host: value with a path of /holos/oidc/callback
proxyPrefix: string | *"/holos/authproxy/\(namespace)"
// idTokenHeader represents the header where the id token is placed
idTokenHeader: string | *"x-oidc-id-token"
}
// ManagedNamespace is a namespace to manage across all clusters in the holos platform.
@@ -233,6 +263,9 @@ _apiVersion: "holos.run/v1alpha1"
}
// clusterNames represents the set of clusters the namespace is managed on. Usually all clusters.
clusterNames: [...string]
for cluster in clusterNames {
clusters: (cluster): name: cluster
}
}
// #ManagedNamepsaces is the union of all namespaces across all cluster types and optional services.
@@ -299,3 +332,77 @@ _apiVersion: "holos.run/v1alpha1"
// #IsPrimaryCluster is true if the cluster being rendered is the primary cluster
// Used by the iam project to determine where https://login.example.com is active.
#IsPrimaryCluster: bool & #ClusterName == #Platform.primaryCluster.name
// #GatewayServer defines the value of the istio Gateway.spec.servers field.
#GatewayServer: {
// The ip or the Unix domain socket to which the listener should
// be bound to.
bind?: string
defaultEndpoint?: string
// One or more hosts exposed by this gateway.
hosts: [...string]
// An optional name of the server, when set must be unique across
// all servers.
name?: string
// The Port on which the proxy should listen for incoming
// connections.
port: {
// Label assigned to the port.
name: string
// A valid non-negative integer port number.
number: int
// The protocol exposed on the port.
protocol: string
targetPort?: int
}
// Set of TLS related options that govern the server's behavior.
tls?: {
// REQUIRED if mode is `MUTUAL` or `OPTIONAL_MUTUAL`.
caCertificates?: string
// Optional: If specified, only support the specified cipher list.
cipherSuites?: [...string]
// For gateways running on Kubernetes, the name of the secret that
// holds the TLS certs including the CA certificates.
credentialName?: string
// If set to true, the load balancer will send a 301 redirect for
// all http connections, asking the clients to use HTTPS.
httpsRedirect?: bool
// Optional: Maximum TLS protocol version.
maxProtocolVersion?: "TLS_AUTO" | "TLSV1_0" | "TLSV1_1" | "TLSV1_2" | "TLSV1_3"
// Optional: Minimum TLS protocol version.
minProtocolVersion?: "TLS_AUTO" | "TLSV1_0" | "TLSV1_1" | "TLSV1_2" | "TLSV1_3"
// Optional: Indicates whether connections to this port should be
// secured using TLS.
mode?: "PASSTHROUGH" | "SIMPLE" | "MUTUAL" | "AUTO_PASSTHROUGH" | "ISTIO_MUTUAL" | "OPTIONAL_MUTUAL"
// REQUIRED if mode is `SIMPLE` or `MUTUAL`.
privateKey?: string
// REQUIRED if mode is `SIMPLE` or `MUTUAL`.
serverCertificate?: string
// A list of alternate names to verify the subject identity in the
// certificate presented by the client.
subjectAltNames?: [...string]
// An optional list of hex-encoded SHA-256 hashes of the
// authorized client certificates.
verifyCertificateHash?: [...string]
// An optional list of base64-encoded SHA-256 hashes of the SPKIs
// of authorized client certificates.
verifyCertificateSpki?: [...string]
}
}

View File

@@ -1 +1 @@
1
7