Without this patch ArgoCD treats the Application as constantly out of
sync. This is also a good example of how to patch an arbitrary
component, though for now it patches the core BuildPlan itself. If this
is widely used, it would be nice to add this behavior to the schema API
(aka the author API).
Without this patch browsing https://bank.holos.localhost frequently gets
connection reset errors. These errors are caused by the frontend
deployment redirecting the browser to http, which is not enabled on the
Gateway we use in the guides.
This patch sets the scheme to https, which corrects the problem.
See https://github.com/GoogleCloudPlatform/bank-of-anthos/issues/478
With this patch the frontend, accounts-db, and userservice all start and
become ready.
The user can log in, but after redirecting to the home page the site
can't be reached.
Rather than commit the jwt private key to version control like upstream
does, we use a SecretStore and ExternalSecret to sync the secret
generated by the security team in the bank-security namespace.
With this patch the SecretStore validates and the ExternalSecret
automatically syncs the secret from the bank-security namespace to the
bank-frontend namespace.
```
❯ k get ss
NAME            AGE   STATUS   CAPABILITIES   READY
bank-security   1s    Valid    ReadWrite      True
❯ k get es
NAME      STORE           REFRESH INTERVAL   STATUS         READY
jwt-key   bank-security   5s                 SecretSynced   True
```
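For reference, the synced resource could be expressed in CUE roughly as
follows. This is a sketch assuming the upstream external-secrets.io/v1beta1
API, not the exact resource managed by this patch; the target and extract
key names are illustrative.

```cue
// Sketch of the ExternalSecret in the bank-frontend namespace.
ExternalSecret: "jwt-key": {
	apiVersion: "external-secrets.io/v1beta1"
	kind:       "ExternalSecret"
	metadata: {
		name:      "jwt-key"
		namespace: "bank-frontend"
	}
	spec: {
		refreshInterval: "5s"
		secretStoreRef: {
			kind: "SecretStore"
			name: "bank-security"
		}
		target: name: "jwt-key"
		// Pull the key material generated in the bank-security namespace.
		dataFrom: [{extract: key: "jwt-key"}]
	}
}
```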
The pod starts successfully.
```
❯ k get pods
NAME                        READY   STATUS    RESTARTS   AGE
frontend-646d797d6b-7jhrx   1/1     Running   0          2m39s
❯ k logs frontend-646d797d6b-7jhrx
{"timestamp": "2024-09-16 21:44:47", "message": "info | Starting gunicorn 22.0.0", "severity": "INFO"}
{"timestamp": "2024-09-16 21:44:47", "message": "info | Listening at: http://0.0.0.0:8080 (7)", "severity": "INFO"}
{"timestamp": "2024-09-16 21:44:47", "message": "info | Using worker: gthread", "severity": "INFO"}
{"timestamp": "2024-09-16 21:44:47", "message": "info | Booting worker with pid: 8", "severity": "INFO"}
{"timestamp": "2024-09-16 21:44:57", "message": "create_app | Unable to retrieve cluster name from metadata server metadata.google.internal.", "severity": "WARNING"}
{"timestamp": "2024-09-16 21:44:57", "message": "create_app | Unable to retrieve zone from metadata server metadata.google.internal.", "severity": "WARNING"}
{"timestamp": "2024-09-16 21:44:57", "message": "create_app | Starting frontend service.", "severity": "INFO"}
{"timestamp": "2024-09-16 21:44:57", "message": "create_app | 🚫 Tracing disabled.", "severity": "INFO"}
{"timestamp": "2024-09-16 21:44:57", "message": "create_app | Platform is set to 'local'", "severity": "INFO"}
```
Expose the frontend Service in the bank-frontend namespace via an
HTTPRoute at https://bank.holos.localhost
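Such an HTTPRoute might look roughly like the following CUE sketch. The
gateway.networking.k8s.io/v1 schema is the upstream Gateway API; the
Gateway name, its namespace, and the Service port are assumptions.

```cue
HTTPRoute: bank: {
	apiVersion: "gateway.networking.k8s.io/v1"
	kind:       "HTTPRoute"
	metadata: {
		name:      "bank"
		namespace: "istio-ingress" // assumed Gateway namespace
	}
	spec: {
		hostnames: ["bank.holos.localhost"]
		parentRefs: [{
			name:      "default" // assumed Gateway name
			namespace: "istio-ingress"
		}]
		rules: [{
			backendRefs: [{
				name:      "frontend"
				namespace: "bank-frontend"
				port:      80 // assumed Service port
			}]
		}]
	}
}
```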
Organize into frontend, backend, security projects to align with three
teams who would each own this work.
remove secret from version control
Google added the secret to version control but we can generate the
secret in-cluster. Holos makes it easier to manage the ExternalSecret
or RoleBinding necessary to get it in the right place.
We need a way to demonstrate the value Holos offers in a platform team
managing projects for other teams. This patch addresses the need by
establishing the bank-of-holos schematic, which is a port of the Bank of
Anthos project to Holos.
This patch adds only the frontend to get the process started. As of
this patch the frontend pod starts and becomes ready but is not exposed
via HTTPRoute.
Refer to https://github.com/GoogleCloudPlatform/bank-of-anthos/
Previously, all generated ArgoCD Application resources went into the
default project following the Quickstart guide. The configuration code
is being organized into the concept of projects in the filesystem, so we
want the GitOps configuration to also reflect this concept of projects.
This patch extends the ArgoConfig user facing schema to accept a project
string. The app-projects component automatically manages AppProject
resources in the argocd namespace for each of the defined projects.
This allows CUE configuration in a project directory to specify the
project name so that all Applications are automatically assigned to the
correct project.
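As a sketch of the idea: argoproj.io/v1alpha1 AppProject is the real
ArgoCD API, but the ArgoConfig and _Projects field names below are
assumptions, not the exact schema this patch defines.

```cue
// A project directory sets the project name once.
ArgoConfig: project: "bank-frontend"

// The app-projects component derives one AppProject per defined project.
for Name in _Projects {
	Resources: AppProject: (Name): {
		apiVersion: "argoproj.io/v1alpha1"
		kind:       "AppProject"
		metadata: {
			name:      Name
			namespace: "argocd"
		}
		spec: {
			sourceRepos: ["*"]
			destinations: [{server: "*", namespace: "*"}]
		}
	}
}
```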
Define a place for components to register HTTPRoute resources the
platform team needs to manage in the Gateway namespace.
The files are organized to delegate to the platform team.
This patch also fixes the naming of the argocd component so that the
Service is argocd-server instead of argo-cd-argocd-server.
Previously, the #Resources struct listing valid resources to use with
APIObjects in each of the component types was closed. This made it
very difficult for users to mix in new resources and use the Kubernetes
component kind.
This patch moves the definition of the valid resources to package holos
from the schema API. The schema still enforces some light constraints,
but doesn't keep the struct closed.
A new convention is introduced in the form of configuring all components
using _ComponentConfig defined at the root, then unifying this struct
with all of the component kinds. See schema.gen.cue for how this works.
This approach enables mixing in ArgoCD applications to all component
kinds, not just Helm as was done previously. Similarly, the
user-constrained #Resources definition unifies with all component kinds.
It's OK to leave the yaml.Marshal in the schema API. The user
shouldn't ever have to deal with #APIObjects, instead they should pass
Resources through the schema API which will use APIObjects to create
apiObjectMap for each component type and the BuildPlan.
This is still more awkward than I want, but it's a good step in the
right direction.
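A sketch of the convention described above; the exact field names in
package holos are assumptions, but the shape illustrates the open struct
and the unification of shared configuration into every component kind.

```cue
// Valid resources, lightly constrained but open so users can mix in
// their own resource kinds.
#Resources: {
	[Kind=string]: [Name=string]: {
		kind: Kind
		metadata: name: Name
		...
	}
	...
}

// Shared configuration unified into every component kind.
_ComponentConfig: {
	Resources: #Resources
	// ArgoCD Application settings and similar mix-ins go here.
}

#Helm:       _ComponentConfig
#Kubernetes: _ComponentConfig
#Kustomize:  _ComponentConfig
```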
Without this patch the istio-gateway component isn't functional: the
HTTPRoute created for httpbin isn't programmed correctly. There is no
Gateway resource, just a deployment created by the istio helm chart.
This patch replaces the helm chart with a Gateway resource as was done
previously in the k3d platform schematic.
This patch also simplifies the certificate management to issue a single
cert valid for the platform domain and a wildcard. We intentionally
avoid building a dynamic Gateway.spec.listeners structure to keep the
expose a service guide relatively simple and focused on getting started
with Holos.
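The single certificate can be sketched with the cert-manager.io/v1 API;
the resource name, secret name, and issuer name below are assumptions.

```cue
Certificate: gateway: {
	apiVersion: "cert-manager.io/v1"
	kind:       "Certificate"
	metadata: {
		name:      "gateway-cert" // assumed name
		namespace: "istio-ingress"
	}
	spec: {
		secretName: "gateway-cert"
		// One cert for the platform domain and a wildcard.
		dnsNames: ["holos.localhost", "*.holos.localhost"]
		issuerRef: {
			kind: "ClusterIssuer"
			name: "local-ca" // assumed issuer name
		}
	}
}
```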
This patch adds the httpbin routes component. It's missing the
Certificate component; the next step is to wire up automatic certificate
management in the gateway configuration, which is a prime use case for
holos. Similar to how we register components and namespaces, we'll
register certificates.
This patch also adds the #Platform.Domain field to the user facing
schema API. We previously stored the domain in the Model but it makes
sense to lift it up to the Platform and have a sensible default value
for it.
Another example of #237 needing to be addressed soon.
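The lifted field with a sensible default can be sketched in CUE; the
default value is an assumption, chosen to match the holos.localhost
hostnames used in the guides.

```cue
#Platform: {
	// Domain is the platform's base domain, with a sensible default.
	Domain: string | *"holos.localhost"
	...
}
```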
This patch manages the httpbin Deployment, Service, and ReferenceGrant.
The remaining final step is to expose the service with an HTTPRoute and
Certificate.
We again needed to add a field to the schema APIObjects to get this to
work. We need to fix #237 soon. We'll need to do it again for the
HTTPRoute and Certificate resources.
The progression of namespaces, cert-manager, then gateway api and istio
makes much more sense than the previous progression of gateway api,
namespaces, istio.
cert-manager builds nicely on top of namespaces, and the Gateway API
component is only the CRDs necessary for Istio.
This patch also adds the local-ca component which surfaces issue #237
The Kubernetes APIObjects are unnecessarily constrained to resources we
define in the schema. We need to move the marshal code into package
holos so the user can add their own resource kinds.
This patch adds Istio to the Expose a Service documentation and
introduces new concepts: the Kubernetes build plan schema, the
namespaces component, and an example of how to safely re-use Helm values
from the root to multiple leaf components.
fix: istio cni not ready on k3d
---
The istio-k3d component embedded into holos fixes the cni pod not
becoming ready with our k3d local cluster guide. The pod log error this
fixes is:
```
configuration requires updates, (re)writing CNI config file at "": no networks found in /host/etc/cni/net.d
Istio CNI is configured as chained plugin, but cannot find existing CNI network config: no networks found in /host/etc/cni/net.d
Waiting for CNI network config file to be written in /host/etc/cni/net.d...
```
[Platform k3d]: https://istio.io/latest/docs/ambient/install/platform-prerequisites/#k3d
docs: clarify how to reset the local cluster
---
This is something we do all the time while developing and documenting,
so make it easy and fast to reset the cluster to a known good state.
This patch adds the schema api for the Kubernetes build plan, which
produces plain API resources directly from CUE. It's needed for the
namespaces component which is foundational to many of our guides.
The first guide that needs this is the Expose a Service guide; we need
to register the namespaces from the istio component.
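Following the `(#Helm & Chart).Output` pattern used elsewhere in this
series, a Kubernetes-kind component might reduce to something like this
sketch; the #Kubernetes and Resources field names are assumptions.

```cue
// A component that emits plain API resources directly from CUE.
(#Kubernetes & {
	Name: "namespaces"
	Resources: Namespace: holos: {
		apiVersion: "v1"
		kind:       "Namespace"
		metadata: name: "holos"
	}
}).Output
```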
The Expose a Service doc is meant to be the second step after the
Quickstart doc. This commit adds the section describing how to install
the Gateway API.
The Kustomize build plan is introduced at this point in a similar way
the Helm build plan was introduced in the quickstart.
We need an easy way to help people add a workload cluster to their
workload fleet when working through the guides. Generated platforms
should not define any clusters so they can be reused with multiple
guides.
This patch adds a simple component schematic that drops a root cue file
to define a workload cluster named workload.
The result is the following sequence renders the Gateway API when run
from an empty directory.
```
holos generate platform guide
holos generate component workload-cluster
holos generate component gateway-api
holos render platform ./platform
```
Without this patch nothing is rendered because there are no workload
clusters in the base guide platform.
Previously, helm and cue components were split into two different
subcommands off the holos generate component command. This was
unnecessary; the code was almost perfectly duplicated between the two.
This patch combines them to focus on the concept of a Component. It
doesn't matter what kind it is now that it's expected to be run from the
root of the platform repository and drop configuration at the root and
the leaf of the tree.
Previously, the quickstart step of generating the pod info component and
generating the platform as a whole left the task of integrating the
Component into the Platform as an exercise for the reader. This is a
problem because it creates unnecessary friction.
This patch addresses the problem by lifting up the Platform concept
into the user-facing Schema API. The generated platform includes a top
level #Platform definition which exposes the core Platform specification
on the Output field.
The Platform CUE instance then reduces to a simple `#Platform.Output`
which provides the Platform spec to holos for rendering each component
for each cluster.
The CUE code for the schema.#Platform iterates over each
Component to derive the list of components to manage for the Platform.
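The iteration can be sketched as follows; the field names are
assumptions, but the shape shows how the Output field exposes the core
Platform specification.

```cue
#Platform: {
	// Components registered by CUE files throughout the tree.
	Components: [Name=string]: {path: string, ...}

	// Output is the core Platform spec holos renders.
	Output: spec: components: [for c in Components {c}]
}
```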
The CUE code for the generated quickstart platform links the definition
of StandardFleets, a Workload fleet and a Management cluster fleet, to
the Platform convenience wrapper.
Finally, the generated podinfo component drops a CUE file at the
repository root to automatically add the component to every workload
cluster.
The result is the only task left for the end user is to define at least
one workload cluster. Once defined, the component is automatically
managed because it is managed on all workload clusters.
This approach further opens the door to allow generated components to
define their namespaces and generated secrets on the management cluster
separate from their workloads on the workload clusters.
This patch includes a behavior change, from now on all generated
components should assume they are writing to the root of the user's Git
repository so that they can generate files through the whole tree.
In the future, we should template output paths for generated components.
A simple approach might be to embed a file with a .target suffix, with
the contents being a simple Go template of the file path to write to.
The holos generate subcommand can then check if any given embedded file
foo has a foo.target companion, then write the target to the rendered
template value.
Users need to customize the default behavior of the core components,
like the Helm schema wrapper to mix-in an ArgoCD Application resource to
each component. This patch wires up #Helm in the holos package to
schema.#Helm from the v1alpha3 api.
The result is illustrated in the Quickstart documentation, it is now
simple for users to modify the definition of a Helm component such that
Application resources are mixed in to every component in the platform.
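For example, a repository can now redefine #Helm once so every Helm
component carries an ArgoCD Application. This is a sketch: the
argoproj.io/v1alpha1 Application kind is the real ArgoCD API, but the
Name and Resources field names are assumptions.

```cue
#Helm: schema.#Helm & {
	Name: string // the component name, assumed to be set per component
	Resources: Application: (Name): {
		apiVersion: "argoproj.io/v1alpha1"
		kind:       "Application"
		metadata: {
			name:      Name
			namespace: "argocd"
		}
		// spec.source and spec.destination depend on the platform repo.
	}
}
```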
Previously the end user needed to write, or at least copy and paste, a
large amount of boilerplate code to achieve the goal of declaring a
helm chart component. There is a gap between the CUE code:
(#Helm & Chart).Output
And the full BuildPlan produced for the Holos cli to execute the
rendering process. The boilerplate code in schema.cue at the root of
the platform infrastructure repository was largely responsible for
defining how a BuildPlan with one HelmChart component is derived from
this #Helm definition.
This patch moves the definitions into a new, documented API named
`schema`. End users are expected to define their own #Helm definition
using the schema.#Helm, like so in the root level schema.cue:
#Helm: schema.#Helm
Using CUE definitions like #Platform to hold data is confusing. Clarify
the use of fields: definitions like #Platform define the shape (schema)
of the data, while private fields like _Platform represent and hold the
data.
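The convention, roughly:

```cue
// Definitions define the shape (schema) of the data.
#Platform: {
	Name: string
}

// Private fields represent and hold the data.
_Platform: #Platform & {
	Name: "example"
}
```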
The first thing most platforms need to do is come up with a strategy for
managing namespaces across multiple clusters.
This patch defines #Namespaces in the holos platform and adds a
namespaces component which loops over all values in the #Namespaces
struct and manages a kubernetes Namespace object.
The platform resource itself loops over all clusters in the platform to
manage all namespaces across all clusters.
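A sketch of the pattern; the field names are assumptions, but the idea
is a struct that components register namespaces into, with one component
comprehending over it to manage the Namespace objects.

```cue
// Components and platform code register namespaces here.
#Namespaces: [Name=string]: {
	metadata: name: Name
}

// Registration, from anywhere in the configuration.
#Namespaces: holos: _

// The namespaces component manages one Namespace object per entry.
Resources: Namespace: {
	for Name, NS in #Namespaces {
		(Name): {
			apiVersion: "v1"
			kind:       "Namespace"
			metadata: NS.metadata
		}
	}
}
```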
From a blank slate:
```
❯ holos generate platform holos
4:26PM INF platform.go:79 wrote platform.metadata.json version=0.82.0 platform_id=018fa1cf-a609-7463-aa6e-fa53bfded1dc path=/home/jeff/workspace/holos-run/holos-infra/saas/platform.metadata.json
4:26PM INF platform.go:91 generated platform holos version=0.82.0 platform_id=018fa1cf-a609-7463-aa6e-fa53bfded1dc path=/home/jeff/workspace/holos-run/holos-infra/saas
❯ holos pull platform config .
4:26PM INF pull.go:64 pulled platform model version=0.82.0 server=https://jeff.app.dev.k2.holos.run:443 platform_id=018fa1cf-a609-7463-aa6e-fa53bfded1dc
4:26PM INF pull.go:75 saved platform config version=0.82.0 server=https://jeff.app.dev.k2.holos.run:443 platform_id=018fa1cf-a609-7463-aa6e-fa53bfded1dc path=platform.config.json
❯ (cd components && holos generate component cue namespaces)
4:26PM INF component.go:147 generated component version=0.82.0 name=namespaces path=/home/jeff/workspace/holos-run/holos-infra/saas/components/namespaces
❯ holos render platform ./platform/
4:26PM INF platform.go:29 ok render component version=0.82.0 path=components/namespaces cluster=management num=1 total=2 duration=464.055541ms
4:26PM INF platform.go:29 ok render component version=0.82.0 path=components/namespaces cluster=aws1 num=2 total=2 duration=467.978499ms
```
The result:
```sh
cat deploy/clusters/management/components/namespaces/namespaces.gen.yaml
```
```yaml
---
metadata:
  name: holos
  labels:
    kubernetes.io/metadata.name: holos
kind: Namespace
apiVersion: v1
```
This patch adds two more example helm chart based components. podinfo
installs as a normal HTTPS repository based helm chart. podinfo-oci
uses an OCI image to manage the helm chart.
The way holos handles OCI images is subtle, so it's good to include an
example right out of the chute. GitHub Actions uses OCI images, for
example.