mpvl suggests @embed is a better solution than our implementation of
core.Component.Instances for the use case of unifying YAML data updated
by Kargo Stage resources.
See the issue for a link to the discussion.
I'd like to add the kargo-demo repository to Unity to test evalv3, but
can't get a handle on the main function to wire up to testscript.
This patch fixes the problem by moving the MakeMain function to a public
package so the kargo-demo Go module can import and call it using the go
mod tools technique.
Previously holos render platform was not passing the --extract-yaml flag
when calling holos render component, causing data file instances defined
in the Platform spec to be discarded.
This patch passes the value along using the flag.
Previously holos unconditionally executed helm repo add which failed for
private repositories requiring basic authentication.
This patch addresses the problem by using the Helm SDK to pull and cache
charts without adding them as repositories. New fields for the
core.Helm type allow basic auth credentials to be read from environment
variables.
Multiple repositories are supported by using different env vars for
different repositories.
Without this patch we do not support installing Kargo from an OCI helm
chart. We want to support:
```
Component: #Helm & {
	Name:      "kargo"
	Namespace: Kargo.Namespace
	Chart: {
		name:    "oci://ghcr.io/akuity/kargo-charts/kargo"
		version: "1.0.3"
		release: Name
	}
	EnableHooks: true
	Values:      Kargo.Values
}
```
This patch fixes the problem by using the base name for filesystem cache
operations.
Previously the BuildPlan pipeline didn't execute generators and
transformers concurrently. All steps were sequentially executed. Holos
was primarily concurrent by executing multiple BuildPlans at once.
This patch changes the Build implementation for each BuildPlan to
execute a goroutine pipeline. One producer fans out to a group of
routines, each executing the pipeline for one artifact in the build plan.
The pipeline has 3 stages:
1. Fan out to build each Generator concurrently.
2. Fan in to build each Transformer sequentially.
3. Fan out again to run each Validator concurrently.
When the artifact pipelines return, the producer closes the tasks
channel causing the worker tasks to return.
Note the overall runtime for 8 BuildPlans is roughly the same as before,
about 160ms with --concurrency=8 on my M3 Max. I expect this to perform
better than previously when multiple artifacts are rendered for each
BuildPlan.
Without this patch trying to use a Kustomize patch with the optional
name field omitted results in the following error:
```
could not run: holos.spec.artifacts.0.transformers.0.kustomize.kustomization.patches.0.target.name: cannot convert non-concrete value string at builder/v1alpha5/builder.go:218
holos.spec.artifacts.0.transformers.0.kustomize.kustomization.patches.0.target.name: cannot convert non-concrete value string:
    $WORK/cue.mod/gen/sigs.k8s.io/kustomize/api/types/var_go_gen.cue:33:2
```
This patch fixes the problem by providing a default value for the name
field matching the Go zero value for a string.
Previously the holos render platform and component subcommands had flags
for OIDC authentication and client access to the gRPC service. These
flags aren't currently used; they're remnants from the JSON-powered form
prototype.
This patch gates the flags behind a feature flag which is disabled by
default.
Result:
```
holos render platform --help
render an entire platform

Usage:
  holos render platform DIRECTORY [flags]

Examples:
  holos render platform ./platform

Flags:
      --concurrency int   number of components to render concurrently (default 8)
  -v, --version           version for platform

Global Flags:
      --log-drop strings    log attributes to drop (example "user-agent,version")
      --log-format string   log format (text|json|console) (default "console")
      --log-level string    log level (debug|info|warn|error) (default "info")
```
---
```
HOLOS_FEATURE_CLIENT=1 holos render platform --help
render an entire platform

Usage:
  holos render platform DIRECTORY [flags]

Examples:
  holos render platform ./platform

Flags:
      --concurrency int             number of components to render concurrently (default 8)
      --oidc-client-id string       oidc client id. (default "270319630705329162@holos_platform")
      --oidc-extra-scopes strings   optional oidc scopes
      --oidc-force-refresh          force refresh
      --oidc-issuer string          oidc token issuer url. (default "https://login.holos.run")
      --oidc-scopes strings         required oidc scopes (default openid,email,profile,groups,offline_access)
      --server string               server to connect to (default "https://app.holos.run:443")
  -v, --version                     version for platform

Global Flags:
      --log-drop strings    log attributes to drop (example "user-agent,version")
      --log-format string   log format (text|json|console) (default "console")
      --log-level string    log level (debug|info|warn|error) (default "info")
```
This patch strips down the v1alpha4 core and author schemas to only what
is absolutely necessary for all holos users. Aspects of platform
configuration applicable to some, even most, but not all users will be
moved into documentation topics organized as a recipe book.
The functionality removed from the v1alpha4 author schemas in v1alpha5
will move into self-contained examples documented as topics on the docs
site.
The overall purpose is to have a focused, composable, maintainable
author schema that helps people get started and that we can ideally
support for years without making breaking changes.
With this patch the v1alpha5 helm guide test passes. We're not going to
keep this guide, but it demonstrates we're back to parity with v1alpha4.
The guide itself is too long for documentation and should be shortened
for clarity.
Result:
```
❯ holos render platform ./platform
rendered bank-accounts-db for cluster workload in 160.7245ms
rendered bank-ledger-db for cluster workload in 162.465625ms
rendered bank-userservice for cluster workload in 166.150417ms
rendered bank-ledger-writer for cluster workload in 168.075459ms
rendered bank-balance-reader for cluster workload in 172.492292ms
rendered bank-backend-config for cluster workload in 198.117916ms
rendered bank-secrets for cluster workload in 223.200042ms
rendered gateway for cluster workload in 124.841917ms
rendered httproutes for cluster workload in 131.86625ms
rendered bank-contacts for cluster workload in 154.463792ms
rendered bank-transaction-history for cluster workload in 159.968208ms
rendered bank-frontend for cluster workload in 325.24425ms
rendered app-projects for cluster workload in 110.577916ms
rendered ztunnel for cluster workload in 137.502792ms
rendered cni for cluster workload in 209.993375ms
rendered cert-manager for cluster workload in 172.933834ms
rendered external-secrets for cluster workload in 135.759792ms
rendered local-ca for cluster workload in 98.026708ms
rendered istiod for cluster workload in 403.050833ms
rendered argocd for cluster workload in 294.663167ms
rendered gateway-api for cluster workload in 228.47875ms
rendered namespaces for cluster workload in 113.586916ms
rendered base for cluster workload in 533.76675ms
rendered external-secrets-crds for cluster workload in 529.053375ms
rendered crds for cluster workload in 931.180458ms
rendered platform in 1.248310167s
```
Previously:
```
❯ holos render platform ./platform
rendered projects/bank-of-holos/backend/components/bank-ledger-db for cluster workload in 158.534875ms
rendered projects/bank-of-holos/backend/components/bank-accounts-db for cluster workload in 159.836166ms
rendered projects/bank-of-holos/backend/components/bank-userservice for cluster workload in 160.360667ms
rendered projects/bank-of-holos/backend/components/bank-balance-reader for cluster workload in 169.478584ms
rendered projects/bank-of-holos/backend/components/bank-ledger-writer for cluster workload in 169.437833ms
rendered projects/bank-of-holos/backend/components/bank-backend-config for cluster workload in 182.089333ms
rendered projects/bank-of-holos/security/components/bank-secrets for cluster workload in 196.502792ms
rendered projects/platform/components/istio/gateway for cluster workload in 122.273083ms
rendered projects/bank-of-holos/frontend/components/bank-frontend for cluster workload in 307.573584ms
rendered projects/platform/components/httproutes for cluster workload in 149.631583ms
rendered projects/bank-of-holos/backend/components/bank-contacts for cluster workload in 153.529708ms
rendered projects/bank-of-holos/backend/components/bank-transaction-history for cluster workload in 165.375667ms
rendered projects/platform/components/app-projects for cluster workload in 107.253958ms
rendered projects/platform/components/istio/ztunnel for cluster workload in 137.22275ms
rendered projects/platform/components/istio/cni for cluster workload in 233.980958ms
rendered projects/platform/components/cert-manager for cluster workload in 171.966958ms
rendered projects/platform/components/external-secrets for cluster workload in 134.207792ms
rendered projects/platform/components/istio/istiod for cluster workload in 403.19ms
rendered projects/platform/components/local-ca for cluster workload in 97.544708ms
rendered projects/platform/components/argocd/argocd for cluster workload in 289.577208ms
rendered projects/platform/components/gateway-api for cluster workload in 218.290458ms
rendered projects/platform/components/namespaces for cluster workload in 109.534125ms
rendered projects/platform/components/istio/base for cluster workload in 526.32525ms
rendered projects/platform/components/external-secrets-crds for cluster workload in 523.7495ms
rendered projects/platform/components/argocd/crds for cluster workload in 1.002546375s
rendered platform in 1.312824333s
```
This patch adds Istio to the Expose a Service documentation and
introduces new concepts: the Kubernetes build plan schema, the
namespaces component, and an example of how to safely re-use Helm values
from the root in multiple leaf components.
fix: istio cni not ready on k3d
---
The istio-k3d component embedded into holos fixes the cni pod not
becoming ready with our k3d local cluster guide. The pod log error this
fixes is:
```
configuration requires updates, (re)writing CNI config file at "": no networks found in /host/etc/cni/net.d
Istio CNI is configured as chained plugin, but cannot find existing CNI network config: no networks found in /host/etc/cni/net.d
Waiting for CNI network config file to be written in /host/etc/cni/net.d...
```
[Platform k3d]: https://istio.io/latest/docs/ambient/install/platform-prerequisites/#k3d
docs: clarify how to reset the local cluster
---
This is something we do all the time while developing and documenting,
so make it easy and fast to reset the cluster to a known good state.
Previously helm and cue components were split into two different
subcommands under the holos generate component command. This is
unnecessary; I'm not sure why it was there in the first place, and the
code was almost perfectly duplicated.
This patch combines them to focus on the concept of a Component. It
doesn't matter what kind it is now that it's expected to be run from the
root of the platform repository and drop configuration at the root and
the leaf of the tree.
Make sure go install works from the quickstart documentation by doing a
release. Otherwise, v0.93.1 is installed which doesn't include the
platform schema.
Without this patch the `holos generate platform` command automatically
makes an RPC call to holos server. This creates friction in the
quickstart guide because users shouldn't need to register and have an
organization and platform already created in the server just to
generate a simple platform that exercises a simple helm chart component.
A future patch should implement the behavior of linking a server side
platform to a local git repository by making the API call to get the
platform ID then updating the platform.metadata.json file.
Previously CUE panicked when holos tried to unify values originating from
two different CUE runtimes. This patch fixes the problem by
initializing cue.Value structs from the same cue context.
Log messages are also improved after making one complete pass through
the Try Holos Locally guide.
This patch addresses Nate's feedback that it's difficult to know what
platform is being operated on.
Previously it wasn't clear where the platform id used for push and pull
comes from. The source of truth is the platform.metadata.json file
created when the platform is first generated using `holos generate
platform k3d`.
This patch removes the platformId field from the platform.config.json
file, renames the platform.config.json file to platform.model.json, and
renames the internal symbols to match the domain language of "Platform
Model" instead of the less clear "config".
This patch also changes the API between holos and CUE to use proto JSON
imported directly from the proto file, instead of JSON derived from the
Go code generated from the proto file. The purpose is to ensure protojson
encoding is used end to end.
Default log handler:
The patch also changes the default log output to print only the message
to stderr. This addresses similar feedback from both Gary and Nate that
the output is skipped over because it feels like internal debug logs.
We still want 100% of output to go through the logger so we can ensure
each line can be made into valid json. Info messages however are meant
for the user and all other attributes can be stripped off by default.
If additional source location is necessary, enable the text or json
output format.
Protobuf JSON:
This patch modifies the API contract between holos and CUE to ensure
data is exchanged exclusively using protojson. This is necessary
because protobuf has a canonical json format which is not compatible
with the go json package struct tags. When Holos handles a protobuf
message, it must marshal and unmarshal it using the protojson package.
Similarly, when importing protobuf messages into CUE, we must use `cue
import` instead of `cue get go` so that the canonical format is used
instead of the invalid Go json struct tags.
Finally, when a Go struct like v1alpha1.Form is used to represent data
defined in CUE which contains a nested protobuf message, Holos should
use a cue.Value to look up the nested path, marshal it into JSON bytes,
then unmarshal it again using protojson.
The github workflows fail because yarn is not available. The Angular
frontend app uses npm so we should also use npm for the website to
minimize dependencies.
Previously `go install` failed to install holos:
```
❯ go install github.com/holos-run/holos/cmd/holos@latest
../../go/pkg/mod/github.com/holos-run/holos@v0.86.0/internal/frontend/frontend.go:25:12: pattern holos/dist/holos/ui/index.html: no matching files found
../../go/pkg/mod/github.com/holos-run/holos@v0.86.0/doc/website/website.go:14:12: pattern all:build: no matching files found
```
This is because we do not commit required files. This patch fixes the
problem by following Rob Pike's guidance to commit generated files.
This patch also replaces the previous use of Makefile tasks to generate
code with //go:generate directives.
This means the process of keeping the source code clean is
straightforward:
```
git clone
make tools
make generate
make build
```
Refer to https://go.dev/blog/generate
> Also, if the containing package is intended for import by go get, once
> the file is generated (and tested!) it must be checked into the source
> code repository to be available to clients. - Rob Pike
Previously docs were not published. This patch adds Docusaurus to the
doc/website directory, which is also a Go package that embeds the static
site into the executable.
Serve the site using http.Server with an h2c handler via the command:
holos website --log-format=json --log-drop=source
The website subcommand is intended to be run from a container as a
Deployment. For expedience, the website subcommand doesn't use the
signals package like the server subcommand does. Consider using it for
graceful Deployment restarts.
Refer to https://github.com/ent/ent/tree/master/doc/website
Previously a couple of methods were defined on the Result struct.
This patch moves the methods to an internal wrapper struct to remove
them from the API documentation.
With this patch the API between holos and CUE is entirely a data API.
Previously, each BuildPlan had no clear way to produce an ArgoCD
Application resource. This patch provides a general solution where each
BuildPlan can provide arbitrary files as a map[string]string, where the
key is the file path relative to the gitops repository `deploy/` folder.