Mirror of https://github.com/outbackdingo/cozystack.git (synced 2026-01-29 18:19:00 +00:00)

Compare commits: bats...platform-d (1 commit)

Commit: 3d0caaab19
@@ -18,7 +18,6 @@ repos:
        (cd "$dir" && make generate)
      fi
    done
    git diff --color=always | cat
  '
  language: script
  files: ^.*$

Makefile
@@ -20,7 +20,6 @@ build: build-deps
	make -C packages/system/kubeovn image
	make -C packages/system/kubeovn-webhook image
	make -C packages/system/dashboard image
	make -C packages/system/metallb image
	make -C packages/system/kamaji image
	make -C packages/system/bucket image
	make -C packages/core/testing image
@@ -1,90 +0,0 @@
This is the second release candidate for the upcoming Cozystack v0.31.0 release.
The release notes show changes accumulated since the release of Cozystack v0.30.0.

Cozystack 0.31.0 further advances GPU support, monitoring, and all-around convenience features.

## New Features and Changes

* [kubernetes] Introduce GPU support for tenant Kubernetes clusters. (@kvaps in https://github.com/cozystack/cozystack/pull/834)
* Add VerticalPodAutoscaler to a few more components:
    * [kubernetes] Kubernetes clusters in user tenants. (@klinch0 in https://github.com/cozystack/cozystack/pull/806)
    * [platform] Cozystack dashboard. (@klinch0 in https://github.com/cozystack/cozystack/pull/828)
    * [platform] Cozystack etcd-operator. (@klinch0 in https://github.com/cozystack/cozystack/pull/850)
* Introduce support for cross-architecture builds and Cozystack on ARM:
    * [build] Refactor Makefiles, introducing build variables. (@nbykov0 in https://github.com/cozystack/cozystack/pull/907)
    * [build] Add support for multi-architecture and cross-platform image builds. (@nbykov0 in https://github.com/cozystack/cozystack/pull/932)
* [platform] Introduce a new controller to synchronize tenant HelmReleases and propagate configuration changes. (@klinch0 in https://github.com/cozystack/cozystack/pull/870)
* [platform] Introduce the options `expose-services`, `expose-ingress`, and `expose-external-ips` for the ingress service; see the example after this list. (@kvaps in https://github.com/cozystack/cozystack/pull/929)
* [kubevirt] Enable exporting VMs. (@kvaps in https://github.com/cozystack/cozystack/pull/808)
* [kubevirt] Make KubeVirt's CPU allocation ratio configurable. (@lllamnyp in https://github.com/cozystack/cozystack/pull/905)
* [cozystack-controller] Record the IP address pool and storage class in Workload objects. (@lllamnyp in https://github.com/cozystack/cozystack/pull/831)
* [cilium] Enable the Cilium Gateway API. (@zdenekjanda in https://github.com/cozystack/cozystack/pull/924)
* [cilium] Allow user-added parameters in a tenant cluster's Cilium configuration. (@lllamnyp in https://github.com/cozystack/cozystack/pull/917)
* Update the Cozystack release policy to include long-lived release branches and start with release candidates. Update CI workflows and docs accordingly.
    * Use release branches `release-X.Y` for gathering and releasing fixes after the initial `vX.Y.0` release. (@kvaps in https://github.com/cozystack/cozystack/pull/816)
    * Automatically create release branches after the initial `vX.Y.0` release is published. (@kvaps in https://github.com/cozystack/cozystack/pull/886)
    * Introduce Release Candidate versions. Automate patch backporting by applying patches from pull requests labeled `[backport]` to the current release branch. (@kvaps in https://github.com/cozystack/cozystack/pull/841 and https://github.com/cozystack/cozystack/pull/901, @nickvolynkin in https://github.com/cozystack/cozystack/pull/890)
    * Commit changes in release pipelines under `github-actions <github-actions@github.com>`. (@kvaps in https://github.com/cozystack/cozystack/pull/823)
    * Describe the Cozystack release workflow. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/817 and https://github.com/cozystack/cozystack/pull/897)

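The `expose-services` option is exercised later in this same changeset: the e2e test patches the `cozystack` ConfigMap in `cozy-system` to publish selected services through the root tenant's ingress. A minimal sketch of that usage (the service list is copied from the e2e test below and is illustrative; `expose-ingress` and `expose-external-ips` would presumably be set through the same ConfigMap, see PR #929 for the accepted values):

```bash
# Hedged example, mirroring the patch used in hack/e2e.bats later in this diff:
# expose selected Cozystack services via the cozystack ConfigMap.
kubectl patch configmap/cozystack -n cozy-system --type merge \
  -p '{"data":{"expose-services":"api,dashboard,cdi-uploadproxy,vm-exportproxy,keycloak"}}'
```
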
## Fixes

* [virtual-machine] Add GPU names to the virtual machine specifications. (@kvaps in https://github.com/cozystack/cozystack/pull/862)
* [virtual-machine] Count Workload resources for pods by requests, not limits. Other improvements to VM resource tracking. (@lllamnyp in https://github.com/cozystack/cozystack/pull/904)
* [platform] Fix installing HelmReleases on initial setup. (@kvaps in https://github.com/cozystack/cozystack/pull/833)
* [platform] Migration scripts now update the Kubernetes ConfigMap with the current stack version for improved version tracking. (@klinch0 in https://github.com/cozystack/cozystack/pull/840)
* [platform] Reduce requested CPU and RAM for the `kamaji` provider. (@klinch0 in https://github.com/cozystack/cozystack/pull/825)
* [platform] Improve the reconciliation loop logic for Cozystack system HelmReleases. (@klinch0 in https://github.com/cozystack/cozystack/pull/809 and https://github.com/cozystack/cozystack/pull/810, @kvaps in https://github.com/cozystack/cozystack/pull/811)
* [platform] Remove extra dependencies for the Piraeus operator. (@klinch0 in https://github.com/cozystack/cozystack/pull/856)
* [platform] Refactor dashboard values. (@kvaps in https://github.com/cozystack/cozystack/pull/928, patched by @lllamnyp in https://github.com/cozystack/cozystack/pull/952)
* [platform] Disable the FluxCD artifact by default. (@klinch0 in https://github.com/cozystack/cozystack/pull/964)
* [kubernetes] Update garbage collection of HelmReleases in tenant Kubernetes clusters. (@kvaps in https://github.com/cozystack/cozystack/pull/835)
* [kubernetes] Fix merging `valuesOverride` for tenant clusters. (@kvaps in https://github.com/cozystack/cozystack/pull/879)
* [kubernetes] Fix the `ubuntu-container-disk` tag. (@kvaps in https://github.com/cozystack/cozystack/pull/887)
* [kubernetes] Refactor Helm manifests for tenant Kubernetes clusters. (@kvaps in https://github.com/cozystack/cozystack/pull/866)
* [tenant] Fix an issue with accessing external IPs of a cluster from the cluster itself. (@kvaps in https://github.com/cozystack/cozystack/pull/854)
* [cluster-api] Remove a workaround for Kamaji that is no longer necessary. (@kvaps in https://github.com/cozystack/cozystack/pull/867, patched in https://github.com/cozystack/cozystack/pull/956)
* [monitoring] Remove the legacy label "POD" from the exclude filter in metrics. (@xy2 in https://github.com/cozystack/cozystack/pull/826)
* [monitoring] Refactor the management etcd monitoring config. Introduce a migration script for updating monitoring resources (the `kube-rbac-proxy` daemonset). (@lllamnyp in https://github.com/cozystack/cozystack/pull/799 and https://github.com/cozystack/cozystack/pull/830)
* [monitoring] Fix VerticalPodAutoscaler resource allocation for VMagent. (@klinch0 in https://github.com/cozystack/cozystack/pull/820)
* [postgres] Remove the duplicated `template` entry from the backup manifest. (@etoshutka in https://github.com/cozystack/cozystack/pull/872)
* [kube-ovn] Fix the versions mapping in the Makefile. (@kvaps in https://github.com/cozystack/cozystack/pull/883)
* [dx] Automatically detect the version for migrations in `installer.sh`. (@kvaps in https://github.com/cozystack/cozystack/pull/837)
* [e2e] Increase timeout durations for `capi` and `keycloak` to improve reliability during environment setup. (@kvaps in https://github.com/cozystack/cozystack/pull/858)
* [e2e] Fix the `device_ownership_from_security_context` CRI setting. (@dtrdnk in https://github.com/cozystack/cozystack/pull/896)
* [e2e] Return `genisoimage` to the e2e-test Dockerfile. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/962)
* [ci] Improve the check for `versions_map` running on pull requests. (@kvaps and @klinch0 in https://github.com/cozystack/cozystack/pull/836, https://github.com/cozystack/cozystack/pull/842, and https://github.com/cozystack/cozystack/pull/845)
* [ci] If the release step was skipped on a tag, skip tests as well. (@kvaps in https://github.com/cozystack/cozystack/pull/822)
* [ci] Allow CI to cancel the previous job if a new one is scheduled. (@kvaps in https://github.com/cozystack/cozystack/pull/873)
* [ci] Use the correct version name when uploading build assets to the release page. (@kvaps in https://github.com/cozystack/cozystack/pull/876)
* [ci] Stop using the `ok-to-test` label to trigger CI in pull requests. (@kvaps in https://github.com/cozystack/cozystack/pull/875)
* [ci] Do not run tests in the release-building pipeline. (@kvaps in https://github.com/cozystack/cozystack/pull/882)
* [ci] Fix release branch creation. (@kvaps in https://github.com/cozystack/cozystack/pull/884)
* [ci, dx] Reduce noise in the test logs by suppressing the `wget` progress bar. (@lllamnyp in https://github.com/cozystack/cozystack/pull/865)
* [ci] Revert "automatically trigger tests in releasing PR". (@kvaps in https://github.com/cozystack/cozystack/pull/900)

## Dependencies

* MetalLB is now included directly as a patched image based on version 0.14.9. (@lllamnyp in https://github.com/cozystack/cozystack/pull/945)
* Update Kubernetes to v1.32.4. (@kvaps in https://github.com/cozystack/cozystack/pull/949)
* Update Talos Linux to v1.10.1. (@kvaps in https://github.com/cozystack/cozystack/pull/931)
* Update Cilium to v1.17.3. (@kvaps in https://github.com/cozystack/cozystack/pull/848)
* Update LINSTOR to v1.31.0. (@kvaps in https://github.com/cozystack/cozystack/pull/846)
* Update Kube-OVN to v1.13.11. (@kvaps in https://github.com/cozystack/cozystack/pull/847, @lllamnyp in https://github.com/cozystack/cozystack/pull/922)
* Update tenant Kubernetes to v1.32. (@kvaps in https://github.com/cozystack/cozystack/pull/871)
* Update flux-operator to 0.20.0. (@kingdonb in https://github.com/cozystack/cozystack/pull/880 and https://github.com/cozystack/cozystack/pull/934)
* Update multiple Cluster API components. (@kvaps in https://github.com/cozystack/cozystack/pull/867 and https://github.com/cozystack/cozystack/pull/947)
* Update KamajiControlPlane to edge-25.4.1. (@kvaps in https://github.com/cozystack/cozystack/pull/953)

## Maintenance

* Add @klinch0 to CODEOWNERS. (@kvaps in https://github.com/cozystack/cozystack/pull/838)

## New Contributors

* @etoshutka made their first contribution in https://github.com/cozystack/cozystack/pull/872
* @dtrdnk made their first contribution in https://github.com/cozystack/cozystack/pull/896
* @zdenekjanda made their first contribution in https://github.com/cozystack/cozystack/pull/924
* @gwynbleidd2106 made their first contribution in https://github.com/cozystack/cozystack/pull/962

**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.30.0...v0.31.0-rc.2
hack/e2e.bats (395 lines removed)
@@ -1,395 +0,0 @@
#!/usr/bin/env bats
# -----------------------------------------------------------------------------
# Cozystack end-to-end provisioning test (Bats)
# -----------------------------------------------------------------------------

export TALOSCONFIG=$PWD/talosconfig
export KUBECONFIG=$PWD/kubeconfig

# Runs before each @test
setup() {
  [ ! -f "${BATS_RUN_TMPDIR}/.skip" ] || skip "skip remaining tests"
}

# Runs after each @test
teardown() {
  [ -n "$BATS_TEST_COMPLETED" ] || touch "${BATS_RUN_TMPDIR}/.skip"
}

@test "Environment variable COZYSTACK_INSTALLER_YAML is defined" {
  if [ -z "${COZYSTACK_INSTALLER_YAML:-}" ]; then
    echo 'COZYSTACK_INSTALLER_YAML environment variable is not set!' >&2
    echo >&2
    echo 'Please export it with the following command:' >&2
    echo '  export COZYSTACK_INSTALLER_YAML=$(helm template -n cozy-system installer packages/core/installer)' >&2
    false  # fail the test instead of only printing the hint
  fi
}

@test "IPv4 forwarding is enabled" {
  if [ "$(cat /proc/sys/net/ipv4/ip_forward)" != 1 ]; then
    echo "IPv4 forwarding is disabled!" >&2
    echo >&2
    echo "Enable it with:" >&2
    echo "  echo 1 > /proc/sys/net/ipv4/ip_forward" >&2
    false  # fail the test instead of only printing the hint
  fi
}

@test "Clean previous VMs" {
  kill $(cat srv1/qemu.pid srv2/qemu.pid srv3/qemu.pid 2>/dev/null) 2>/dev/null || true
  rm -rf srv1 srv2 srv3
}

@test "Prepare networking and masquerading" {
  ip link del cozy-br0 2>/dev/null || true
  ip link add cozy-br0 type bridge
  ip link set cozy-br0 up
  ip address add 192.168.123.1/24 dev cozy-br0

  # Masquerading rule: idempotent (delete first, then add)
  iptables -t nat -D POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE 2>/dev/null || true
  iptables -t nat -A POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE
}

@test "Prepare cloud-init drive for VMs" {
  mkdir -p srv1 srv2 srv3

  # Generate cloud-init ISOs
  for i in 1 2 3; do
    echo "hostname: srv${i}" > "srv${i}/meta-data"

    cat > "srv${i}/user-data" <<'EOF'
#cloud-config
EOF

    cat > "srv${i}/network-config" <<EOF
version: 2
ethernets:
  eth0:
    dhcp4: false
    addresses:
      - "192.168.123.1${i}/26"
    gateway4: "192.168.123.1"
    nameservers:
      search: [cluster.local]
      addresses: [8.8.8.8]
EOF

    ( cd "srv${i}" && genisoimage \
        -output seed.img \
        -volid cidata -rational-rock -joliet \
        user-data meta-data network-config )
  done
}

@test "Download Talos NoCloud image" {
  if [ ! -f nocloud-amd64.raw ]; then
    wget https://github.com/cozystack/cozystack/releases/latest/download/nocloud-amd64.raw.xz \
      -O nocloud-amd64.raw.xz --show-progress --output-file /dev/stdout --progress=dot:giga 2>/dev/null
    rm -f nocloud-amd64.raw
    xz --decompress nocloud-amd64.raw.xz
  fi
}

@test "Prepare VM disks" {
  for i in 1 2 3; do
    cp nocloud-amd64.raw srv${i}/system.img
    qemu-img resize srv${i}/system.img 20G
    qemu-img create srv${i}/data.img 100G
  done
}

@test "Create tap devices" {
  for i in 1 2 3; do
    ip link del cozy-srv${i} 2>/dev/null || true
    ip tuntap add dev cozy-srv${i} mode tap
    ip link set cozy-srv${i} up
    ip link set cozy-srv${i} master cozy-br0
  done
}

@test "Boot QEMU VMs" {
  for i in 1 2 3; do
    qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -smp 8 -m 16384 \
      -device virtio-net,netdev=net0,mac=52:54:00:12:34:5${i} \
      -netdev tap,id=net0,ifname=cozy-srv${i},script=no,downscript=no \
      -drive file=srv${i}/system.img,if=virtio,format=raw \
      -drive file=srv${i}/seed.img,if=virtio,format=raw \
      -drive file=srv${i}/data.img,if=virtio,format=raw \
      -display none -daemonize -pidfile srv${i}/qemu.pid
  done

  # Give qemu a few seconds to start up networking
  sleep 5
}

@test "Wait until Talos API port 50000 is reachable on all machines" {
  timeout 60 bash -c 'until nc -nz 192.168.123.11 50000 && nc -nz 192.168.123.12 50000 && nc -nz 192.168.123.13 50000; do sleep 1; done'
}

@test "Generate Talos cluster configuration" {
  # Cluster-wide patches
  cat > patch.yaml <<'EOF'
machine:
  kubelet:
    nodeIP:
      validSubnets:
        - 192.168.123.0/24
    extraConfig:
      maxPods: 512
  kernel:
    modules:
      - name: openvswitch
      - name: drbd
        parameters:
          - usermode_helper=disabled
      - name: zfs
      - name: spl
  registries:
    mirrors:
      docker.io:
        endpoints:
          - https://mirror.gcr.io
  files:
    - content: |
        [plugins]
          [plugins."io.containerd.cri.v1.runtime"]
            device_ownership_from_security_context = true
      path: /etc/cri/conf.d/20-customization.part
      op: create

cluster:
  apiServer:
    extraArgs:
      oidc-issuer-url: "https://keycloak.example.org/realms/cozy"
      oidc-client-id: "kubernetes"
      oidc-username-claim: "preferred_username"
      oidc-groups-claim: "groups"
  network:
    cni:
      name: none
    dnsDomain: cozy.local
    podSubnets:
      - 10.244.0.0/16
    serviceSubnets:
      - 10.96.0.0/16
EOF

  # Control-plane-only patches
  cat > patch-controlplane.yaml <<'EOF'
machine:
  nodeLabels:
    node.kubernetes.io/exclude-from-external-load-balancers:
      $patch: delete
  network:
    interfaces:
      - interface: eth0
        vip:
          ip: 192.168.123.10
cluster:
  allowSchedulingOnControlPlanes: true
  controllerManager:
    extraArgs:
      bind-address: 0.0.0.0
  scheduler:
    extraArgs:
      bind-address: 0.0.0.0
  apiServer:
    certSANs:
      - 127.0.0.1
  proxy:
    disabled: true
  discovery:
    enabled: false
  etcd:
    advertisedSubnets:
      - 192.168.123.0/24
EOF

  # Generate secrets once
  if [ ! -f secrets.yaml ]; then
    talosctl gen secrets
  fi

  rm -f controlplane.yaml worker.yaml talosconfig kubeconfig
  talosctl gen config --with-secrets secrets.yaml cozystack https://192.168.123.10:6443 \
    --config-patch=@patch.yaml --config-patch-control-plane @patch-controlplane.yaml
}

@test "Apply Talos configuration to the nodes" {
  # Apply the configuration to all three nodes
  for node in 11 12 13; do
    talosctl apply -f controlplane.yaml -n 192.168.123.${node} -e 192.168.123.${node} -i
  done

  # Wait for Talos services to come up again
  timeout 60 bash -c 'until nc -nz 192.168.123.11 50000 && nc -nz 192.168.123.12 50000 && nc -nz 192.168.123.13 50000; do sleep 1; done'
}

@test "Bootstrap Talos cluster" {
  # Bootstrap etcd on the first node
  timeout 10 bash -c 'until talosctl bootstrap -n 192.168.123.11 -e 192.168.123.11; do sleep 1; done'

  # Wait until etcd is healthy
  timeout 180 bash -c 'until talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 >/dev/null 2>&1; do sleep 1; done'
  timeout 60 bash -c 'while talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 2>&1 | grep -q "rpc error"; do sleep 1; done'

  # Retrieve kubeconfig
  rm -f kubeconfig
  talosctl kubeconfig kubeconfig -e 192.168.123.10 -n 192.168.123.10

  # Wait until all three nodes register in Kubernetes
  timeout 60 bash -c 'until [ $(kubectl get node --no-headers | wc -l) -eq 3 ]; do sleep 1; done'
}

@test "Install Cozystack" {
  # Create namespace & configmap required by installer
  kubectl create namespace cozy-system --dry-run=client -o yaml | kubectl apply -f -
  kubectl create configmap cozystack -n cozy-system \
    --from-literal=bundle-name=paas-full \
    --from-literal=ipv4-pod-cidr=10.244.0.0/16 \
    --from-literal=ipv4-pod-gateway=10.244.0.1 \
    --from-literal=ipv4-svc-cidr=10.96.0.0/16 \
    --from-literal=ipv4-join-cidr=100.64.0.0/16 \
    --from-literal=root-host=example.org \
    --from-literal=api-server-endpoint=https://192.168.123.10:6443 \
    --dry-run=client -o yaml | kubectl apply -f -

  # Apply installer manifests from env variable
  echo "$COZYSTACK_INSTALLER_YAML" | kubectl apply -f -

  # Wait for the installer deployment to become available
  kubectl wait deployment/cozystack -n cozy-system --timeout=1m --for=condition=Available

  # Wait until HelmReleases appear & reconcile them
  timeout 60 bash -c 'until kubectl get hr -A | grep -q cozys; do sleep 1; done'
  sleep 5
  kubectl get hr -A | awk 'NR>1 {print "kubectl wait --timeout=15m --for=condition=ready -n "$1" hr/"$2" &"} END {print "wait"}' | bash -x

  # Fail the test if any HelmRelease is not Ready
  if kubectl get hr -A | grep -v " True " | grep -v NAME; then
    kubectl get hr -A
    fail "Some HelmReleases failed to reconcile"
  fi
}

@test "Wait for Cluster-API provider deployments" {
  # Wait for Cluster-API provider deployments
  timeout 60 bash -c 'until kubectl get deploy -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager >/dev/null 2>&1; do sleep 1; done'
  kubectl wait deployment/capi-controller-manager deployment/capi-kamaji-controller-manager deployment/capi-kubeadm-bootstrap-controller-manager deployment/capi-operator-cluster-api-operator deployment/capk-controller-manager -n cozy-cluster-api --timeout=1m --for=condition=available
}

@test "Wait for LINSTOR and configure storage" {
  # Linstor controller and nodes
  kubectl wait deployment/linstor-controller -n cozy-linstor --timeout=5m --for=condition=available
  timeout 60 bash -c 'until [ $(kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor node list | grep -c Online) -eq 3 ]; do sleep 1; done'

  for node in srv1 srv2 srv3; do
    kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor ps cdp zfs ${node} /dev/vdc --pool-name data --storage-pool data
  done

  # Storage classes
  kubectl apply -f - <<'EOF'
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: linstor.csi.linbit.com
parameters:
  linstor.csi.linbit.com/storagePool: "data"
  linstor.csi.linbit.com/layerList: "storage"
  linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: replicated
provisioner: linstor.csi.linbit.com
parameters:
  linstor.csi.linbit.com/storagePool: "data"
  linstor.csi.linbit.com/autoPlace: "3"
  linstor.csi.linbit.com/layerList: "drbd storage"
  linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
  property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
  property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
  property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
  property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
}

@test "Wait for MetalLB and configure address pool" {
  # MetalLB address pool
  kubectl apply -f - <<'EOF'
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: cozystack
  namespace: cozy-metallb
spec:
  ipAddressPools: [cozystack]
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: cozystack
  namespace: cozy-metallb
spec:
  addresses: [192.168.123.200-192.168.123.250]
  autoAssign: true
  avoidBuggyIPs: false
EOF
}

@test "Check Cozystack API service" {
  kubectl wait --for=condition=Available apiservices/v1alpha1.apps.cozystack.io --timeout=2m
}

@test "Configure Tenant and wait for applications" {
  # Patch root tenant and wait for its releases
  kubectl patch tenants/root -n tenant-root --type merge -p '{"spec":{"host":"example.org","ingress":true,"monitoring":true,"etcd":true,"isolated":true}}'

  timeout 60 bash -c 'until kubectl get hr -n tenant-root etcd ingress monitoring tenant-root >/dev/null 2>&1; do sleep 1; done'
  kubectl wait hr/etcd hr/ingress hr/tenant-root -n tenant-root --timeout=2m --for=condition=ready

  if ! kubectl wait hr/monitoring -n tenant-root --timeout=2m --for=condition=ready; then
    flux reconcile hr monitoring -n tenant-root --force
    kubectl wait hr/monitoring -n tenant-root --timeout=2m --for=condition=ready
  fi

  # Expose Cozystack services through ingress
  kubectl patch configmap/cozystack -n cozy-system --type merge -p '{"data":{"expose-services":"api,dashboard,cdi-uploadproxy,vm-exportproxy,keycloak"}}'

  # NGINX ingress controller
  timeout 60 bash -c 'until kubectl get deploy root-ingress-controller -n tenant-root >/dev/null 2>&1; do sleep 1; done'
  kubectl wait deploy/root-ingress-controller -n tenant-root --timeout=5m --for=condition=available

  # etcd statefulset
  kubectl wait sts/etcd -n tenant-root --for=jsonpath='{.status.readyReplicas}'=3 --timeout=5m

  # VictoriaMetrics components
  kubectl wait vmalert/vmalert-shortterm vmalertmanager/alertmanager -n tenant-root --for=jsonpath='{.status.updateStatus}'=operational --timeout=5m
  kubectl wait vlogs/generic -n tenant-root --for=jsonpath='{.status.updateStatus}'=operational --timeout=5m
  kubectl wait vmcluster/shortterm vmcluster/longterm -n tenant-root --for=jsonpath='{.status.clusterStatus}'=operational --timeout=5m

  # Grafana
  kubectl wait clusters.postgresql.cnpg.io/grafana-db -n tenant-root --for=condition=ready --timeout=5m
  kubectl wait deploy/grafana-deployment -n tenant-root --for=condition=available --timeout=5m

  # Verify Grafana via ingress
  ingress_ip=$(kubectl get svc root-ingress-controller -n tenant-root -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl -sS -k "https://${ingress_ip}" -H 'Host: grafana.example.org' | grep -q Found
}

@test "Keycloak OIDC stack is healthy" {
  kubectl patch configmap/cozystack -n cozy-system --type merge -p '{"data":{"oidc-enabled":"true"}}'

  timeout 120 bash -c 'until kubectl get hr -n cozy-keycloak keycloak keycloak-configure keycloak-operator >/dev/null 2>&1; do sleep 1; done'
  kubectl wait hr/keycloak hr/keycloak-configure hr/keycloak-operator -n cozy-keycloak --timeout=10m --for=condition=ready
}
hack/e2e.sh (new executable file, 370 lines)
@@ -0,0 +1,370 @@
#!/bin/bash
if [ "$COZYSTACK_INSTALLER_YAML" = "" ]; then
  echo 'COZYSTACK_INSTALLER_YAML variable is not set!' >&2
  echo 'Please set it with the following command:' >&2
  echo >&2
  echo 'export COZYSTACK_INSTALLER_YAML=$(helm template -n cozy-system installer packages/core/installer)' >&2
  echo >&2
  exit 1
fi

if [ "$(cat /proc/sys/net/ipv4/ip_forward)" != 1 ]; then
  echo "IPv4 forwarding is not enabled!" >&2
  echo 'Please enable forwarding with the following command:' >&2
  echo >&2
  echo 'echo 1 > /proc/sys/net/ipv4/ip_forward' >&2
  echo >&2
  exit 1
fi

set -x
set -e

kill `cat srv1/qemu.pid srv2/qemu.pid srv3/qemu.pid` || true

ip link del cozy-br0 || true
ip link add cozy-br0 type bridge
ip link set cozy-br0 up
ip addr add 192.168.123.1/24 dev cozy-br0

# Enable masquerading
iptables -t nat -D POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE 2>/dev/null || true
iptables -t nat -A POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE

rm -rf srv1 srv2 srv3
mkdir -p srv1 srv2 srv3

# Prepare cloud-init
for i in 1 2 3; do
  echo "hostname: srv$i" > "srv$i/meta-data"
  echo '#cloud-config' > "srv$i/user-data"
  cat > "srv$i/network-config" <<EOT
version: 2
ethernets:
  eth0:
    dhcp4: false
    addresses:
      - "192.168.123.1$i/26"
    gateway4: "192.168.123.1"
    nameservers:
      search: [cluster.local]
      addresses: [8.8.8.8]
EOT

  ( cd srv$i && genisoimage \
      -output seed.img \
      -volid cidata -rational-rock -joliet \
      user-data meta-data network-config
  )
done

# Prepare system drive
if [ ! -f nocloud-amd64.raw ]; then
  wget https://github.com/cozystack/cozystack/releases/latest/download/nocloud-amd64.raw.xz \
    -O nocloud-amd64.raw.xz --show-progress --output-file /dev/stdout --progress=dot:giga 2>/dev/null
  rm -f nocloud-amd64.raw
  xz --decompress nocloud-amd64.raw.xz
fi
for i in 1 2 3; do
  cp nocloud-amd64.raw srv$i/system.img
  qemu-img resize srv$i/system.img 20G
done

# Prepare data drives
for i in 1 2 3; do
  qemu-img create srv$i/data.img 100G
done

# Prepare networking
for i in 1 2 3; do
  ip link del cozy-srv$i || true
  ip tuntap add dev cozy-srv$i mode tap
  ip link set cozy-srv$i up
  ip link set cozy-srv$i master cozy-br0
done

# Start VMs
for i in 1 2 3; do
  qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -smp 8 -m 16384 \
    -device virtio-net,netdev=net0,mac=52:54:00:12:34:5$i \
    -netdev tap,id=net0,ifname=cozy-srv$i,script=no,downscript=no \
    -drive file=srv$i/system.img,if=virtio,format=raw \
    -drive file=srv$i/seed.img,if=virtio,format=raw \
    -drive file=srv$i/data.img,if=virtio,format=raw \
    -display none -daemonize -pidfile srv$i/qemu.pid
done

sleep 5

# Wait for the VMs to start up
timeout 60 sh -c 'until nc -nzv 192.168.123.11 50000 && nc -nzv 192.168.123.12 50000 && nc -nzv 192.168.123.13 50000; do sleep 1; done'

cat > patch.yaml <<\EOT
machine:
  kubelet:
    nodeIP:
      validSubnets:
        - 192.168.123.0/24
    extraConfig:
      maxPods: 512
  kernel:
    modules:
      - name: openvswitch
      - name: drbd
        parameters:
          - usermode_helper=disabled
      - name: zfs
      - name: spl
  registries:
    mirrors:
      docker.io:
        endpoints:
          - https://mirror.gcr.io
  files:
    - content: |
        [plugins]
          [plugins."io.containerd.cri.v1.runtime"]
            device_ownership_from_security_context = true
      path: /etc/cri/conf.d/20-customization.part
      op: create

cluster:
  apiServer:
    extraArgs:
      oidc-issuer-url: "https://keycloak.example.org/realms/cozy"
      oidc-client-id: "kubernetes"
      oidc-username-claim: "preferred_username"
      oidc-groups-claim: "groups"
  network:
    cni:
      name: none
    dnsDomain: cozy.local
    podSubnets:
      - 10.244.0.0/16
    serviceSubnets:
      - 10.96.0.0/16
EOT

cat > patch-controlplane.yaml <<\EOT
machine:
  nodeLabels:
    node.kubernetes.io/exclude-from-external-load-balancers:
      $patch: delete
  network:
    interfaces:
      - interface: eth0
        vip:
          ip: 192.168.123.10
cluster:
  allowSchedulingOnControlPlanes: true
  controllerManager:
    extraArgs:
      bind-address: 0.0.0.0
  scheduler:
    extraArgs:
      bind-address: 0.0.0.0
  apiServer:
    certSANs:
      - 127.0.0.1
  proxy:
    disabled: true
  discovery:
    enabled: false
  etcd:
    advertisedSubnets:
      - 192.168.123.0/24
EOT

# Generate configuration
if [ ! -f secrets.yaml ]; then
  talosctl gen secrets
fi

rm -f controlplane.yaml worker.yaml talosconfig kubeconfig
talosctl gen config --with-secrets secrets.yaml cozystack https://192.168.123.10:6443 --config-patch=@patch.yaml --config-patch-control-plane @patch-controlplane.yaml
export TALOSCONFIG=$PWD/talosconfig

# Apply configuration
talosctl apply -f controlplane.yaml -n 192.168.123.11 -e 192.168.123.11 -i
talosctl apply -f controlplane.yaml -n 192.168.123.12 -e 192.168.123.12 -i
talosctl apply -f controlplane.yaml -n 192.168.123.13 -e 192.168.123.13 -i

# Wait for the VMs to be configured
timeout 60 sh -c 'until nc -nzv 192.168.123.11 50000 && nc -nzv 192.168.123.12 50000 && nc -nzv 192.168.123.13 50000; do sleep 1; done'

# Bootstrap
timeout 10 sh -c 'until talosctl bootstrap -n 192.168.123.11 -e 192.168.123.11; do sleep 1; done'

# Wait for etcd
timeout 180 sh -c 'until timeout -s 9 2 talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 2>&1; do sleep 1; done'
timeout 60 sh -c 'while talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 2>&1 | grep "rpc error"; do sleep 1; done'

rm -f kubeconfig
talosctl kubeconfig kubeconfig -e 192.168.123.10 -n 192.168.123.10
export KUBECONFIG=$PWD/kubeconfig

# Wait for Kubernetes nodes to appear
timeout 60 sh -c 'until [ $(kubectl get node -o name | wc -l) = 3 ]; do sleep 1; done'
kubectl create ns cozy-system --dry-run=client -o yaml | kubectl apply -f -
kubectl create -f - <<\EOT
apiVersion: v1
kind: ConfigMap
metadata:
  name: cozystack
  namespace: cozy-system
data:
  bundle-name: "paas-full"
  ipv4-pod-cidr: "10.244.0.0/16"
  ipv4-pod-gateway: "10.244.0.1"
  ipv4-svc-cidr: "10.96.0.0/16"
  ipv4-join-cidr: "100.64.0.0/16"
  root-host: example.org
  api-server-endpoint: https://192.168.123.10:6443
EOT

# Install Cozystack
echo "$COZYSTACK_INSTALLER_YAML" | kubectl apply -f -

# Wait for the cozystack deployment to start
kubectl wait deploy --timeout=1m --for=condition=available -n cozy-system cozystack

# Wait for HelmReleases to appear
timeout 60 sh -c 'until kubectl get hr -A | grep cozy; do sleep 1; done'

sleep 5

# Wait for all HelmReleases to be installed
kubectl get hr -A | awk 'NR>1 {print "kubectl wait --timeout=15m --for=condition=ready -n " $1 " hr/" $2 " &"} END{print "wait"}' | sh -x

failed_hrs=$(kubectl get hr -A | grep -v True)
if [ -n "$(echo "$failed_hrs" | grep -v NAME)" ]; then
  printf 'Failed HelmReleases:\n%s\n' "$failed_hrs" >&2
  exit 1
fi

# Wait for Cluster-API providers
timeout 60 sh -c 'until kubectl get deploy -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager; do sleep 1; done'
kubectl wait deploy --timeout=1m --for=condition=available -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager

# Wait for the LINSTOR controller
kubectl wait deploy --timeout=5m --for=condition=available -n cozy-linstor linstor-controller

# Wait for all LINSTOR nodes to become Online
timeout 60 sh -c 'until [ $(kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor node list | grep -c Online) = 3 ]; do sleep 1; done'

kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor ps cdp zfs srv1 /dev/vdc --pool-name data --storage-pool data
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor ps cdp zfs srv2 /dev/vdc --pool-name data --storage-pool data
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor ps cdp zfs srv3 /dev/vdc --pool-name data --storage-pool data

kubectl create -f- <<EOT
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: linstor.csi.linbit.com
parameters:
  linstor.csi.linbit.com/storagePool: "data"
  linstor.csi.linbit.com/layerList: "storage"
  linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: replicated
provisioner: linstor.csi.linbit.com
parameters:
  linstor.csi.linbit.com/storagePool: "data"
  linstor.csi.linbit.com/autoPlace: "3"
  linstor.csi.linbit.com/layerList: "drbd storage"
  linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
  property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
  property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
  property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
  property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOT
kubectl create -f- <<EOT
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: cozystack
  namespace: cozy-metallb
spec:
  ipAddressPools:
    - cozystack
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: cozystack
  namespace: cozy-metallb
spec:
  addresses:
    - 192.168.123.200-192.168.123.250
  autoAssign: true
  avoidBuggyIPs: false
EOT

# Wait for cozystack-api
kubectl wait --for=condition=Available apiservices v1alpha1.apps.cozystack.io --timeout=2m

kubectl patch -n tenant-root tenants.apps.cozystack.io root --type=merge -p '{"spec":{
  "host": "example.org",
  "ingress": true,
  "monitoring": true,
  "etcd": true,
  "isolated": true
}}'

# Wait for HelmReleases to be created
timeout 60 sh -c 'until kubectl get hr -n tenant-root etcd ingress monitoring tenant-root; do sleep 1; done'

# Wait for HelmReleases to be installed
kubectl wait --timeout=2m --for=condition=ready -n tenant-root hr etcd ingress tenant-root

if ! kubectl wait --timeout=2m --for=condition=ready -n tenant-root hr monitoring; then
  flux reconcile hr monitoring -n tenant-root --force
  kubectl wait --timeout=2m --for=condition=ready -n tenant-root hr monitoring
fi

kubectl patch -n tenant-root ingresses.apps.cozystack.io ingress --type=merge -p '{"spec":{
  "dashboard": true
}}'

# Wait for nginx-ingress-controller
timeout 60 sh -c 'until kubectl get deploy -n tenant-root root-ingress-controller; do sleep 1; done'
kubectl wait --timeout=5m --for=condition=available -n tenant-root deploy root-ingress-controller

# Wait for etcd
kubectl wait --timeout=5m --for=jsonpath=.status.readyReplicas=3 -n tenant-root sts etcd

# Wait for VictoriaMetrics
kubectl wait --timeout=5m --for=jsonpath=.status.updateStatus=operational -n tenant-root vmalert/vmalert-shortterm vmalertmanager/alertmanager
kubectl wait --timeout=5m --for=jsonpath=.status.updateStatus=operational -n tenant-root vlogs/generic
kubectl wait --timeout=5m --for=jsonpath=.status.clusterStatus=operational -n tenant-root vmcluster/shortterm vmcluster/longterm

# Wait for Grafana
kubectl wait --timeout=5m --for=condition=ready -n tenant-root clusters.postgresql.cnpg.io grafana-db
kubectl wait --timeout=5m --for=condition=available -n tenant-root deploy grafana-deployment

# Get the IP of nginx-ingress
ip=$(kubectl get svc -n tenant-root root-ingress-controller -o jsonpath='{.status.loadBalancer.ingress..ip}')

# Check Grafana
curl -sS -k "https://$ip" -H 'Host: grafana.example.org' | grep Found

# Test OIDC
kubectl patch -n cozy-system cm/cozystack --type=merge -p '{"data":{
  "oidc-enabled": "true"
}}'

timeout 120 sh -c 'until kubectl get hr -n cozy-keycloak keycloak keycloak-configure keycloak-operator; do sleep 1; done'
kubectl wait --timeout=10m --for=condition=ready -n cozy-keycloak hr keycloak keycloak-configure keycloak-operator
@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.8.0
version: 0.7.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

@@ -7,10 +7,8 @@ generate:
	readme-generator -v values.yaml -s values.schema.json -r README.md

image:
	docker buildx build images/clickhouse-backup \
	docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/clickhouse-backup \
		--provenance false \
		--builder=$(BUILDER) \
		--platform=$(PLATFORM) \
		--tag $(REGISTRY)/clickhouse-backup:$(call settag,$(CLICKHOUSE_BACKUP_TAG)) \
		--cache-from type=registry,ref=$(REGISTRY)/clickhouse-backup:latest \
		--cache-to type=inline \

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/clickhouse-backup:0.8.0@sha256:3faf7a4cebf390b9053763107482de175aa0fdb88c1e77424fd81100b1c3a205
ghcr.io/cozystack/cozystack/clickhouse-backup:0.7.0@sha256:3faf7a4cebf390b9053763107482de175aa0fdb88c1e77424fd81100b1c3a205
@@ -11,34 +11,35 @@ These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
  "nano" (dict
    "requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
    "limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
  )
  "micro" (dict
    "requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
    "limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
  )
  "small" (dict
    "requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
    "limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
  )
  "medium" (dict
    "requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
    "requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
  )
  "large" (dict
    "requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
    "requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
  )
  "xlarge" (dict
    "requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
    "requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
  )
  "2xlarge" (dict
    "requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
    "requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
  )
}}
{{- if hasKey $presets .type -}}
@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.0
version: 0.5.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/postgres-backup:0.11.0@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f
ghcr.io/cozystack/cozystack/postgres-backup:0.10.1@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f

@@ -11,34 +11,35 @@ These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
  "nano" (dict
    "requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
    "limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
  )
  "micro" (dict
    "requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
    "limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
  )
  "small" (dict
    "requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
    "limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
  )
  "medium" (dict
    "requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
    "requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
  )
  "large" (dict
    "requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
    "requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
  )
  "xlarge" (dict
    "requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
    "requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
  )
  "2xlarge" (dict
    "requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
    "requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
  )
}}
{{- if hasKey $presets .type -}}
@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.5.0
version: 0.4.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

@@ -6,10 +6,8 @@ include ../../../scripts/package.mk
image: image-nginx

image-nginx:
	docker buildx build images/nginx-cache \
	docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/nginx-cache \
		--provenance false \
		--builder=$(BUILDER) \
		--platform=$(PLATFORM) \
		--tag $(REGISTRY)/nginx-cache:$(call settag,$(NGINX_CACHE_TAG)) \
		--cache-from type=registry,ref=$(REGISTRY)/nginx-cache:latest \
		--cache-to type=inline \

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/nginx-cache:0.5.0@sha256:785bd69cb593dc1509875d1e3128dac1a013b099fbb02f39330298d798706a0e
ghcr.io/cozystack/cozystack/nginx-cache:0.4.0@sha256:4e1f5153d2673a399b315252238f4dc3eb5d6c59295aef594691710cc5b72eb4

@@ -1,4 +1,4 @@
FROM ubuntu:22.04 AS stage
FROM ubuntu:22.04 as stage

ARG NGINX_VERSION=1.25.3
ARG IP2LOCATION_C_VERSION=8.6.1

@@ -9,15 +9,11 @@ ARG FIFTYONEDEGREES_NGINX_VERSION=3.2.21.1
ARG NGINX_CACHE_PURGE_VERSION=2.5.3
ARG NGINX_VTS_VERSION=0.2.2

ARG TARGETOS
ARG TARGETARCH

# Install required packages for development
RUN apt update -q \
    && apt install -yq --no-install-recommends \
    ca-certificates \
RUN apt-get update -q \
    && apt-get install -yq \
    unzip \
    automake \
    autoconf \
    build-essential \
    libtool \
    libpcre3 \

@@ -72,7 +68,7 @@ RUN checkinstall \
    --default \
    --pkgname=ip2location-c \
    --pkgversion=${IP2LOCATION_C_VERSION} \
    --pkgarch=${TARGETARCH} \
    --pkgarch=amd64 \
    --pkggroup=lib \
    --pkgsource="https://github.com/chrislim2888/IP2Location-C-Library" \
    --maintainer="Eduard Generalov <eduard@generalov.net>" \

@@ -101,7 +97,7 @@ RUN checkinstall \
    --default \
    --pkgname=ip2proxy-c \
    --pkgversion=${IP2PROXY_C_VERSION} \
    --pkgarch=${TARGETARCH} \
    --pkgarch=amd64 \
    --pkggroup=lib \
    --pkgsource="https://github.com/ip2location/ip2proxy-c" \
    --maintainer="Eduard Generalov <eduard@generalov.net>" \

@@ -148,7 +144,7 @@ RUN checkinstall \
    --default \
    --pkgname=nginx \
    --pkgversion=$VERS \
    --pkgarch=${TARGETARCH} \
    --pkgarch=amd64 \
    --pkggroup=web \
    --provides=nginx \
    --requires=ip2location-c,ip2proxy-c,libssl3,libc-bin,libc6,libzstd1,libpcre++0v5,libpcre16-3,libpcre2-8-0,libpcre3,libpcre32-3,libpcrecpp0v5,libmaxminddb0 \

@@ -169,9 +165,10 @@ COPY nginx-reloader.sh /usr/bin/nginx-reloader.sh
RUN set -x \
    && groupadd --system --gid 101 nginx \
    && useradd --system --gid nginx --no-create-home --home /nonexistent --comment "nginx user" --shell /bin/false --uid 101 nginx \
    && apt update -q \
    && apt install -yq --no-install-recommends --no-install-suggests gnupg1 ca-certificates inotify-tools \
    && apt install -y /packages/*.deb \
    && apt update \
    && apt-get install --no-install-recommends --no-install-suggests -y gnupg1 ca-certificates inotify-tools \
    && apt -y install /packages/*.deb \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* \
    && mkdir -p /var/lib/nginx /var/log/nginx \
    && ln -sf /dev/stdout /var/log/nginx/access.log \
@@ -11,34 +11,35 @@ These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
  "nano" (dict
    "requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
    "limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
  )
  "micro" (dict
    "requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
    "limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
  )
  "small" (dict
    "requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
    "limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
  )
  "medium" (dict
    "requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
    "requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
  )
  "large" (dict
    "requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
    "requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
  )
  "xlarge" (dict
    "requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
    "requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
  )
  "2xlarge" (dict
    "requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
    "requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
  )
}}
{{- if hasKey $presets .type -}}
@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.0
version: 0.5.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

@@ -11,34 +11,35 @@ These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
  "nano" (dict
    "requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
    "limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
  )
  "micro" (dict
    "requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
    "limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
  )
  "small" (dict
    "requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
    "limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
  )
  "medium" (dict
    "requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
    "requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
  )
  "large" (dict
    "requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
    "requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
  )
  "xlarge" (dict
    "requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
    "requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
  )
  "2xlarge" (dict
    "requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
    "limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
    "requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
    "limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
  )
}}
{{- if hasKey $presets .type -}}
@@ -22,4 +22,4 @@ version: 0.20.0
|
||||
# incremented each time you make changes to the application. Versions are not expected to
|
||||
# follow Semantic Versioning. They should reflect the version the application is using.
|
||||
# It is recommended to use it with quotes.
|
||||
appVersion: 1.32.4
|
||||
appVersion: "1.30.1"
|
||||
|
||||
@@ -14,10 +14,8 @@ generate:
|
||||
image: image-ubuntu-container-disk image-kubevirt-cloud-provider image-kubevirt-csi-driver image-cluster-autoscaler
|
||||
|
||||
image-ubuntu-container-disk:
|
||||
docker buildx build images/ubuntu-container-disk \
|
||||
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/ubuntu-container-disk \
|
||||
--provenance false \
|
||||
--builder=$(BUILDER) \
|
||||
--platform=$(PLATFORM) \
|
||||
--build-arg KUBERNETES_VERSION=${KUBERNETES_VERSION} \
|
||||
--tag $(REGISTRY)/ubuntu-container-disk:$(call settag,$(KUBERNETES_VERSION)) \
|
||||
--tag $(REGISTRY)/ubuntu-container-disk:$(call settag,$(KUBERNETES_VERSION)-$(TAG)) \
|
||||
@@ -32,10 +30,8 @@ image-ubuntu-container-disk:
|
||||
rm -f images/ubuntu-container-disk.json
|
||||
|
||||
image-kubevirt-cloud-provider:
|
||||
docker buildx build images/kubevirt-cloud-provider \
|
||||
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/kubevirt-cloud-provider \
|
||||
--provenance false \
|
||||
--builder=$(BUILDER) \
|
||||
--platform=$(PLATFORM) \
|
||||
--tag $(REGISTRY)/kubevirt-cloud-provider:$(call settag,$(KUBERNETES_PKG_TAG)) \
|
||||
--tag $(REGISTRY)/kubevirt-cloud-provider:$(call settag,$(KUBERNETES_PKG_TAG)-$(TAG)) \
|
||||
--cache-from type=registry,ref=$(REGISTRY)/kubevirt-cloud-provider:latest \
|
||||
@@ -49,10 +45,8 @@ image-kubevirt-cloud-provider:
|
||||
rm -f images/kubevirt-cloud-provider.json
|
||||
|
||||
image-kubevirt-csi-driver:
|
||||
docker buildx build images/kubevirt-csi-driver \
|
||||
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/kubevirt-csi-driver \
|
||||
--provenance false \
|
||||
--builder=$(BUILDER) \
|
||||
--platform=$(PLATFORM) \
|
||||
--tag $(REGISTRY)/kubevirt-csi-driver:$(call settag,$(KUBERNETES_PKG_TAG)) \
|
||||
--tag $(REGISTRY)/kubevirt-csi-driver:$(call settag,$(KUBERNETES_PKG_TAG)-$(TAG)) \
|
||||
--cache-from type=registry,ref=$(REGISTRY)/kubevirt-csi-driver:latest \
|
||||
@@ -67,10 +61,8 @@ image-kubevirt-csi-driver:
|
||||
|
||||
|
||||
image-cluster-autoscaler:
|
||||
docker buildx build images/cluster-autoscaler \
|
||||
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/cluster-autoscaler \
|
||||
--provenance false \
|
||||
--builder=$(BUILDER) \
|
||||
--platform=$(PLATFORM) \
|
||||
--tag $(REGISTRY)/cluster-autoscaler:$(call settag,$(KUBERNETES_PKG_TAG)) \
|
||||
--tag $(REGISTRY)/cluster-autoscaler:$(call settag,$(KUBERNETES_PKG_TAG)-$(TAG)) \
|
||||
--cache-from type=registry,ref=$(REGISTRY)/cluster-autoscaler:latest \
|
||||
|
||||
@@ -45,7 +45,6 @@ kubectl get secret -n <namespace> kubernetes-<clusterName>-admin-kubeconfig -o g
|
||||
| `addons.certManager.enabled` | Enables the cert-manager | `false` |
|
||||
| `addons.certManager.valuesOverride` | Custom values to override | `{}` |
|
||||
| `addons.cilium.valuesOverride` | Custom values to override | `{}` |
|
||||
| `addons.gatewayAPI.enabled` | Enables the Gateway API | `false` |
|
||||
| `addons.ingressNginx.enabled` | Enable Ingress-NGINX controller (expect nodes with 'ingress-nginx' role) | `false` |
|
||||
| `addons.ingressNginx.valuesOverride` | Custom values to override | `{}` |
|
||||
| `addons.ingressNginx.hosts` | List of domain names that should be passed through to the cluster by upper cluster | `[]` |
|
||||
|
||||
@@ -1 +1 @@
|
||||
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.20.0@sha256:8dbbe95fe8b933a1d1a3c638120f386fec0c4950092d3be5ddd592375bb8a760
|
||||
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.19.0@sha256:85371c6aabf5a7fea2214556deac930c600e362f92673464fe2443784e2869c3
|
||||
|
||||
@@ -1,14 +1,7 @@
|
||||
# Source: https://raw.githubusercontent.com/kubernetes/autoscaler/refs/heads/master/cluster-autoscaler/Dockerfile.amd64
|
||||
ARG builder_image=docker.io/library/golang:1.23.4
|
||||
ARG BASEIMAGE=gcr.io/distroless/static:nonroot-${TARGETARCH}
|
||||
|
||||
ARG BASEIMAGE=gcr.io/distroless/static:nonroot-amd64
|
||||
FROM ${builder_image} AS builder
|
||||
|
||||
ARG TARGETOS
|
||||
ARG TARGETARCH
|
||||
ENV GOOS=$TARGETOS
|
||||
ENV GOARCH=$TARGETARCH
|
||||
|
||||
RUN git clone https://github.com/kubernetes/autoscaler /src/autoscaler \
|
||||
&& cd /src/autoscaler/cluster-autoscaler \
|
||||
&& git checkout cluster-autoscaler-1.32.0
|
||||
@@ -21,8 +14,6 @@ RUN make build
|
||||
FROM $BASEIMAGE
|
||||
LABEL maintainer="Marcin Wielgus <mwielgus@google.com>"
|
||||
|
||||
ARG TARGETARCH
|
||||
|
||||
COPY --from=builder /src/autoscaler/cluster-autoscaler/cluster-autoscaler-${TARGETARCH} /cluster-autoscaler
|
||||
COPY --from=builder /src/autoscaler/cluster-autoscaler/cluster-autoscaler-amd64 /cluster-autoscaler
|
||||
WORKDIR /
|
||||
CMD ["/cluster-autoscaler"]
|
||||
|
||||
@@ -1 +1 @@
|
||||
ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.20.0@sha256:41fcdbd2f667f68bf554dd184ce362e65b88f350dc7b938c86079b719f5e5099
|
||||
ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.19.0@sha256:795d8e1ef4b2b0df2aa1e09d96cd13476ebb545b4bf4b5779b7547a70ef64cf9
|
||||
|
||||
@@ -1,10 +1,5 @@
|
||||
# Source: https://github.com/kubevirt/cloud-provider-kubevirt/blob/main/build/images/kubevirt-cloud-controller-manager/Dockerfile
|
||||
FROM golang:1.20.6 AS builder
|
||||
|
||||
ARG TARGETOS
|
||||
ARG TARGETARCH
|
||||
ENV GOOS=$TARGETOS
|
||||
ENV GOARCH=$TARGETARCH
|
||||
FROM --platform=linux/amd64 golang:1.20.6 AS builder
|
||||
|
||||
RUN git clone https://github.com/kubevirt/cloud-provider-kubevirt /go/src/kubevirt.io/cloud-provider-kubevirt \
|
||||
&& cd /go/src/kubevirt.io/cloud-provider-kubevirt \
|
||||
@@ -19,7 +14,7 @@ RUN go get 'k8s.io/endpointslice/util@v0.28' 'k8s.io/apiserver@v0.28'
|
||||
RUN go mod tidy
|
||||
RUN go mod vendor
|
||||
|
||||
RUN CGO_ENABLED=0 go build -mod=vendor -ldflags="-s -w" -o bin/kubevirt-cloud-controller-manager ./cmd/kubevirt-cloud-controller-manager
|
||||
RUN CGO_ENABLED=0 GOOS=linux go build -mod=vendor -ldflags="-s -w" -o bin/kubevirt-cloud-controller-manager ./cmd/kubevirt-cloud-controller-manager
|
||||
|
||||
FROM registry.access.redhat.com/ubi9/ubi-micro
|
||||
COPY --from=builder /go/src/kubevirt.io/cloud-provider-kubevirt/bin/kubevirt-cloud-controller-manager /bin/kubevirt-cloud-controller-manager
|
||||
|
||||
@@ -1 +1 @@
|
||||
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.20.0@sha256:61580fea56b745580989d85e3ef2563e9bb1accc9c4185f8e636bacd02551319
|
||||
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.19.0@sha256:5717919c75e609902c6d67138311a2a8fd07be822e2173f3802b67cf5f3486e9
|
||||
|
||||
@@ -5,11 +5,6 @@ RUN git clone https://github.com/kubevirt/csi-driver /src/kubevirt-csi-driver \
|
||||
&& cd /src/kubevirt-csi-driver \
|
||||
&& git checkout 35836e0c8b68d9916d29a838ea60cdd3fc6199cf
|
||||
|
||||
ARG TARGETOS
|
||||
ARG TARGETARCH
|
||||
ENV GOOS=$TARGETOS
|
||||
ENV GOARCH=$TARGETARCH
|
||||
|
||||
WORKDIR /src/kubevirt-csi-driver
|
||||
RUN make build
|
||||
|
||||
|
||||
@@ -1 +1 @@
|
||||
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.32@sha256:186af6f71891bfc6d6948454802c08922baa508c30e7f79e330b7d26ffceff03
|
||||
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.32@sha256:4a4f8bee150e04d1efcd5ff1ea83e12f495a98851cc5fd47ef41ac7aebce9b74
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
# TODO: Here we use ubuntu:22.04, as guestfish has some network issues running in ubuntu:24.04
|
||||
FROM ubuntu:22.04 AS guestfish
|
||||
FROM ubuntu:22.04 as guestfish
|
||||
|
||||
ARG DEBIAN_FRONTEND=noninteractive
|
||||
RUN apt-get update \
|
||||
@@ -8,17 +8,15 @@ RUN apt-get update \
|
||||
linux-image-generic \
|
||||
wget \
|
||||
make \
|
||||
bash-completion
|
||||
bash-completion \
|
||||
&& apt-get clean
|
||||
|
||||
WORKDIR /build
|
||||
|
||||
FROM guestfish AS builder
|
||||
|
||||
ARG TARGETOS
|
||||
ARG TARGETARCH
|
||||
FROM guestfish as builder
|
||||
|
||||
# noble is a code name for the Ubuntu 24.04 LTS release
|
||||
RUN wget -O image.img https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-${TARGETARCH}.img --show-progress --output-file /dev/stdout --progress=dot:giga 2>/dev/null
|
||||
RUN wget -O image.img https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img --show-progress --output-file /dev/stdout --progress=dot:giga 2>/dev/null
|
||||
|
||||
ARG KUBERNETES_VERSION
|
||||
|
||||
@@ -31,21 +29,19 @@ RUN qemu-img resize image.img 5G \
|
||||
&& guestfish --remote command "resize2fs /dev/sda1" \
|
||||
# docker repo
|
||||
&& guestfish --remote sh "curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg" \
|
||||
&& guestfish --remote sh 'echo "deb [signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list' \
|
||||
&& guestfish --remote sh 'echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list' \
|
||||
# kubernetes repo
|
||||
&& guestfish --remote sh "curl -fsSL https://pkgs.k8s.io/core:/stable:/${KUBERNETES_VERSION}/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg" \
|
||||
&& guestfish --remote sh "echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/${KUBERNETES_VERSION}/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list" \
|
||||
&& guestfish --remote command "apt-get check -q" \
|
||||
# install containerd
|
||||
&& guestfish --remote command "apt-get update -q" \
|
||||
&& guestfish --remote command "apt-get install -yq containerd.io" \
|
||||
&& guestfish --remote command "apt-get update -y" \
|
||||
&& guestfish --remote command "apt-get install -y containerd.io" \
|
||||
# configure containerd
|
||||
&& guestfish --remote command "mkdir -p /etc/containerd" \
|
||||
&& guestfish --remote sh "containerd config default | tee /etc/containerd/config.toml" \
|
||||
&& guestfish --remote command "sed -i '/SystemdCgroup/ s/=.*/= true/' /etc/containerd/config.toml" \
|
||||
&& guestfish --remote command "containerd config dump >/dev/null" \
|
||||
# install kubernetes
|
||||
&& guestfish --remote command "apt-get install -yq kubelet kubeadm" \
|
||||
&& guestfish --remote command "apt-get install -y kubelet kubeadm" \
|
||||
# clean apt cache
|
||||
&& guestfish --remote sh 'apt-get clean && rm -rf /var/lib/apt/lists/*' \
|
||||
# write system configuration
|
||||
|
||||
@@ -11,34 +11,35 @@ These presets are for basic testing and not meant to be used in production
|
||||
{{ include "resources.preset" (dict "type" "nano") -}}
|
||||
*/}}
|
||||
{{- define "resources.preset" -}}
|
||||
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
|
||||
{{- $presets := dict
|
||||
"nano" (dict
|
||||
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"micro" (dict
|
||||
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"small" (dict
|
||||
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"medium" (dict
|
||||
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"large" (dict
|
||||
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"xlarge" (dict
|
||||
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"2xlarge" (dict
|
||||
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
}}
|
||||
{{- if hasKey $presets .type -}}
|
||||
|
||||
@@ -150,14 +150,14 @@ spec:
|
||||
ingress:
|
||||
extraAnnotations:
|
||||
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
|
||||
hostname: {{ .Values.host | default (printf "%s.%s" .Release.Name $host) }}:443
|
||||
hostname: {{ .Values.host | default (printf "%s.%s" .Release.Name $host) }}
|
||||
className: "{{ $ingress }}"
|
||||
deployment:
|
||||
podAdditionalMetadata:
|
||||
labels:
|
||||
policy.cozystack.io/allow-to-etcd: "true"
|
||||
replicas: 2
|
||||
version: {{ $.Chart.AppVersion }}
|
||||
version: 1.30.1
|
||||
---
|
||||
apiVersion: cozystack.io/v1alpha1
|
||||
kind: WorkloadMonitor
|
||||
@@ -283,7 +283,7 @@ spec:
|
||||
kind: KubevirtMachineTemplate
|
||||
name: {{ $.Release.Name }}-{{ $groupName }}-{{ $kubevirtmachinetemplateHash }}
|
||||
namespace: {{ $.Release.Namespace }}
|
||||
version: v{{ $.Chart.AppVersion }}
|
||||
version: v1.32.3
|
||||
---
|
||||
apiVersion: cluster.x-k8s.io/v1beta1
|
||||
kind: MachineHealthCheck
|
||||
|
||||
@@ -5,12 +5,6 @@ cilium:
|
||||
routingMode: tunnel
|
||||
enableIPv4Masquerade: true
|
||||
ipv4NativeRoutingCIDR: ""
|
||||
{{- if $.Values.addons.gatewayAPI.enabled }}
|
||||
gatewayAPI:
|
||||
enabled: true
|
||||
envoy:
|
||||
enabled: true
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
apiVersion: helm.toolkit.fluxcd.io/v2
|
||||
@@ -52,7 +46,3 @@ spec:
|
||||
- name: {{ .Release.Name }}
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- end }}
|
||||
{{- if $.Values.addons.gatewayAPI.enabled }}
|
||||
- name: {{ .Release.Name }}-gateway-api-crds
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- end }}
|
||||
|
||||
@@ -30,7 +30,6 @@ spec:
|
||||
patch
|
||||
helmrelease
|
||||
{{ .Release.Name }}-cilium
|
||||
{{ .Release.Name }}-gateway-api-crds
|
||||
{{ .Release.Name }}-csi
|
||||
{{ .Release.Name }}-cert-manager
|
||||
{{ .Release.Name }}-cert-manager-crds
|
||||
|
||||
@@ -1,38 +0,0 @@
|
||||
{{- if $.Values.addons.gatewayAPI.enabled }}
|
||||
apiVersion: helm.toolkit.fluxcd.io/v2
|
||||
kind: HelmRelease
|
||||
metadata:
|
||||
name: {{ .Release.Name }}-gateway-api-crds
|
||||
labels:
|
||||
cozystack.io/repository: system
|
||||
cozystack.io/target-cluster-name: {{ .Release.Name }}
|
||||
spec:
|
||||
interval: 5m
|
||||
releaseName: gateway-api-crds
|
||||
chart:
|
||||
spec:
|
||||
chart: cozy-gateway-api-crds
|
||||
reconcileStrategy: Revision
|
||||
sourceRef:
|
||||
kind: HelmRepository
|
||||
name: cozystack-system
|
||||
namespace: cozy-system
|
||||
kubeConfig:
|
||||
secretRef:
|
||||
name: {{ .Release.Name }}-admin-kubeconfig
|
||||
key: super-admin.svc
|
||||
targetNamespace: kube-system
|
||||
storageNamespace: kube-system
|
||||
install:
|
||||
createNamespace: false
|
||||
remediation:
|
||||
retries: -1
|
||||
upgrade:
|
||||
remediation:
|
||||
retries: -1
|
||||
dependsOn:
|
||||
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
|
||||
- name: {{ .Release.Name }}
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
@@ -155,16 +155,6 @@
|
||||
}
|
||||
}
|
||||
},
|
||||
"gatewayAPI": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"enabled": {
|
||||
"type": "boolean",
|
||||
"description": "Enables the Gateway API",
|
||||
"default": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"ingressNginx": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
|
||||
@@ -48,12 +48,6 @@ addons:
|
||||
## @param addons.cilium.valuesOverride Custom values to override
|
||||
valuesOverride: {}
|
||||
|
||||
## Gateway API
|
||||
##
|
||||
gatewayAPI:
|
||||
## @param addons.gatewayAPI.enabled Enables the Gateway API
|
||||
enabled: false
|
||||
|
||||
## Ingress-NGINX Controller
|
||||
##
|
||||
ingressNginx:
|
||||
|
||||
@@ -16,7 +16,7 @@ type: application
|
||||
# This is the chart version. This version number should be incremented each time you make changes
|
||||
# to the chart and its templates, including the app version.
|
||||
# Versions are expected to follow Semantic Versioning (https://semver.org/)
|
||||
version: 0.7.0
|
||||
version: 0.6.0
|
||||
|
||||
# This is the version number of the application being deployed. This version number should be
|
||||
# incremented each time you make changes to the application. Versions are not expected to
|
||||
|
||||
@@ -7,10 +7,8 @@ generate:
|
||||
readme-generator -v values.yaml -s values.schema.json -r README.md
|
||||
|
||||
image:
|
||||
docker buildx build images/mariadb-backup \
|
||||
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/mariadb-backup \
|
||||
--provenance false \
|
||||
--builder=$(BUILDER) \
|
||||
--platform=$(PLATFORM) \
|
||||
--tag $(REGISTRY)/mariadb-backup:$(call settag,$(MARIADB_BACKUP_TAG)) \
|
||||
--cache-from type=registry,ref=$(REGISTRY)/mariadb-backup:latest \
|
||||
--cache-to type=inline \
|
||||
|
||||
@@ -1 +1 @@
|
||||
ghcr.io/cozystack/cozystack/mariadb-backup:0.7.0@sha256:cfd1c37d8ad24e10681d82d6e6ce8a641b4602c1b0ffa8516ae15b4958bb12d4
|
||||
ghcr.io/cozystack/cozystack/mariadb-backup:0.6.0@sha256:cfd1c37d8ad24e10681d82d6e6ce8a641b4602c1b0ffa8516ae15b4958bb12d4
|
||||
|
||||
@@ -11,34 +11,35 @@ These presets are for basic testing and not meant to be used in production
|
||||
{{ include "resources.preset" (dict "type" "nano") -}}
|
||||
*/}}
|
||||
{{- define "resources.preset" -}}
|
||||
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
|
||||
{{- $presets := dict
|
||||
"nano" (dict
|
||||
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"micro" (dict
|
||||
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"small" (dict
|
||||
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"medium" (dict
|
||||
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"large" (dict
|
||||
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"xlarge" (dict
|
||||
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"2xlarge" (dict
|
||||
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
}}
|
||||
{{- if hasKey $presets .type -}}
|
||||
|
||||
@@ -16,7 +16,7 @@ type: application
|
||||
# This is the chart version. This version number should be incremented each time you make changes
|
||||
# to the chart and its templates, including the app version.
|
||||
# Versions are expected to follow Semantic Versioning (https://semver.org/)
|
||||
version: 0.6.0
|
||||
version: 0.5.0
|
||||
|
||||
# This is the version number of the application being deployed. This version number should be
|
||||
# incremented each time you make changes to the application. Versions are not expected to
|
||||
|
||||
@@ -11,34 +11,35 @@ These presets are for basic testing and not meant to be used in production
|
||||
{{ include "resources.preset" (dict "type" "nano") -}}
|
||||
*/}}
|
||||
{{- define "resources.preset" -}}
|
||||
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
|
||||
{{- $presets := dict
|
||||
"nano" (dict
|
||||
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"micro" (dict
|
||||
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"small" (dict
|
||||
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"medium" (dict
|
||||
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"large" (dict
|
||||
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"xlarge" (dict
|
||||
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"2xlarge" (dict
|
||||
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
}}
|
||||
{{- if hasKey $presets .type -}}
|
||||
|
||||
@@ -16,7 +16,7 @@ type: application
|
||||
# This is the chart version. This version number should be incremented each time you make changes
|
||||
# to the chart and its templates, including the app version.
|
||||
# Versions are expected to follow Semantic Versioning (https://semver.org/)
|
||||
version: 0.11.0
|
||||
version: 0.10.1
|
||||
|
||||
# This is the version number of the application being deployed. This version number should be
|
||||
# incremented each time you make changes to the application. Versions are not expected to
|
||||
|
||||
@@ -7,10 +7,8 @@ generate:
|
||||
readme-generator -v values.yaml -s values.schema.json -r README.md
|
||||
|
||||
image:
|
||||
docker buildx build images/postgres-backup \
|
||||
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/postgres-backup \
|
||||
--provenance false \
|
||||
--builder=$(BUILDER) \
|
||||
--platform=$(PLATFORM) \
|
||||
--tag $(REGISTRY)/postgres-backup:$(call settag,$(POSTGRES_BACKUP_TAG)) \
|
||||
--cache-from type=registry,ref=$(REGISTRY)/postgres-backup:latest \
|
||||
--cache-to type=inline \
|
||||
|
||||
@@ -1 +1 @@
|
||||
ghcr.io/cozystack/cozystack/postgres-backup:0.11.0@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f
|
||||
ghcr.io/cozystack/cozystack/postgres-backup:0.10.1@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f
|
||||
|
||||
@@ -11,34 +11,35 @@ These presets are for basic testing and not meant to be used in production
|
||||
{{ include "resources.preset" (dict "type" "nano") -}}
|
||||
*/}}
|
||||
{{- define "resources.preset" -}}
|
||||
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
|
||||
{{- $presets := dict
|
||||
"nano" (dict
|
||||
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"micro" (dict
|
||||
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"small" (dict
|
||||
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"medium" (dict
|
||||
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"large" (dict
|
||||
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"xlarge" (dict
|
||||
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"2xlarge" (dict
|
||||
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
}}
|
||||
{{- if hasKey $presets .type -}}
|
||||
|
||||
@@ -16,7 +16,7 @@ type: application
|
||||
# This is the chart version. This version number should be incremented each time you make changes
|
||||
# to the chart and its templates, including the app version.
|
||||
# Versions are expected to follow Semantic Versioning (https://semver.org/)
|
||||
version: 0.6.0
|
||||
version: 0.5.0
|
||||
|
||||
# This is the version number of the application being deployed. This version number should be
|
||||
# incremented each time you make changes to the application. Versions are not expected to
|
||||
|
||||
@@ -11,34 +11,35 @@ These presets are for basic testing and not meant to be used in production
|
||||
{{ include "resources.preset" (dict "type" "nano") -}}
|
||||
*/}}
|
||||
{{- define "resources.preset" -}}
|
||||
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
|
||||
{{- $presets := dict
|
||||
"nano" (dict
|
||||
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"micro" (dict
|
||||
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"small" (dict
|
||||
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"medium" (dict
|
||||
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"large" (dict
|
||||
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"xlarge" (dict
|
||||
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"2xlarge" (dict
|
||||
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
}}
|
||||
{{- if hasKey $presets .type -}}
|
||||
|
||||
@@ -16,7 +16,7 @@ type: application
|
||||
# This is the chart version. This version number should be incremented each time you make changes
|
||||
# to the chart and its templates, including the app version.
|
||||
# Versions are expected to follow Semantic Versioning (https://semver.org/)
|
||||
version: 0.7.0
|
||||
version: 0.6.0
|
||||
|
||||
# This is the version number of the application being deployed. This version number should be
|
||||
# incremented each time you make changes to the application. Versions are not expected to
|
||||
|
||||
@@ -11,34 +11,35 @@ These presets are for basic testing and not meant to be used in production
|
||||
{{ include "resources.preset" (dict "type" "nano") -}}
|
||||
*/}}
|
||||
{{- define "resources.preset" -}}
|
||||
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
|
||||
{{- $presets := dict
|
||||
"nano" (dict
|
||||
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"micro" (dict
|
||||
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"small" (dict
|
||||
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"medium" (dict
|
||||
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"large" (dict
|
||||
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"xlarge" (dict
|
||||
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"2xlarge" (dict
|
||||
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
}}
|
||||
{{- if hasKey $presets .type -}}
|
||||
|
||||
@@ -16,7 +16,7 @@ type: application
|
||||
# This is the chart version. This version number should be incremented each time you make changes
|
||||
# to the chart and its templates, including the app version.
|
||||
# Versions are expected to follow Semantic Versioning (https://semver.org/)
|
||||
version: 0.4.0
|
||||
version: 0.3.0
|
||||
|
||||
# This is the version number of the application being deployed. This version number should be
|
||||
# incremented each time you make changes to the application. Versions are not expected to
|
||||
|
||||
@@ -11,34 +11,35 @@ These presets are for basic testing and not meant to be used in production
|
||||
{{ include "resources.preset" (dict "type" "nano") -}}
|
||||
*/}}
|
||||
{{- define "resources.preset" -}}
|
||||
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
|
||||
{{- $presets := dict
|
||||
"nano" (dict
|
||||
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"micro" (dict
|
||||
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"small" (dict
|
||||
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"medium" (dict
|
||||
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"large" (dict
|
||||
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"xlarge" (dict
|
||||
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"2xlarge" (dict
|
||||
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
}}
|
||||
{{- if hasKey $presets .type -}}
|
||||
|
||||
@@ -8,8 +8,7 @@ clickhouse 0.5.0 0f312d5c
|
||||
clickhouse 0.6.0 1ec10165
|
||||
clickhouse 0.6.1 c62a83a7
|
||||
clickhouse 0.6.2 8267072d
|
||||
clickhouse 0.7.0 93bdf411
|
||||
clickhouse 0.8.0 HEAD
|
||||
clickhouse 0.7.0 HEAD
|
||||
ferretdb 0.1.0 e9716091
|
||||
ferretdb 0.1.1 91b0499a
|
||||
ferretdb 0.2.0 6c5cf5bf
|
||||
@@ -17,14 +16,12 @@ ferretdb 0.3.0 b8e33d19
|
||||
ferretdb 0.4.0 b40e1b09
|
||||
ferretdb 0.4.1 1ec10165
|
||||
ferretdb 0.4.2 8267072d
|
||||
ferretdb 0.5.0 93bdf411
|
||||
ferretdb 0.6.0 HEAD
|
||||
ferretdb 0.5.0 HEAD
|
||||
http-cache 0.1.0 263e47be
|
||||
http-cache 0.2.0 53f2365e
|
||||
http-cache 0.3.0 6c5cf5bf
|
||||
http-cache 0.3.1 0f312d5c
|
||||
http-cache 0.4.0 93bdf411
|
||||
http-cache 0.5.0 HEAD
|
||||
http-cache 0.4.0 HEAD
|
||||
kafka 0.1.0 f7eaab0a
|
||||
kafka 0.2.0 c0685f43
|
||||
kafka 0.2.1 dfbc210b
|
||||
@@ -35,8 +32,7 @@ kafka 0.3.1 c62a83a7
|
||||
kafka 0.3.2 93c46161
|
||||
kafka 0.3.3 8267072d
|
||||
kafka 0.4.0 85ec09b8
|
||||
kafka 0.5.0 93bdf411
|
||||
kafka 0.6.0 HEAD
|
||||
kafka 0.5.0 HEAD
|
||||
kubernetes 0.1.0 263e47be
|
||||
kubernetes 0.2.0 53f2365e
|
||||
kubernetes 0.3.0 007d414f
|
||||
@@ -73,16 +69,14 @@ mysql 0.5.0 b40e1b09
|
||||
mysql 0.5.1 0f312d5c
|
||||
mysql 0.5.2 1ec10165
|
||||
mysql 0.5.3 8267072d
|
||||
mysql 0.6.0 93bdf411
|
||||
mysql 0.7.0 HEAD
|
||||
mysql 0.6.0 HEAD
|
||||
nats 0.1.0 e9716091
|
||||
nats 0.2.0 6c5cf5bf
|
||||
nats 0.3.0 78366f19
|
||||
nats 0.3.1 c62a83a7
|
||||
nats 0.4.0 898374b5
|
||||
nats 0.4.1 8267072d
|
||||
nats 0.5.0 93bdf411
|
||||
nats 0.6.0 HEAD
|
||||
nats 0.5.0 HEAD
|
||||
postgres 0.1.0 263e47be
|
||||
postgres 0.2.0 53f2365e
|
||||
postgres 0.2.1 d7cfa53c
|
||||
@@ -97,8 +91,7 @@ postgres 0.7.1 1ec10165
|
||||
postgres 0.8.0 4e68e65c
|
||||
postgres 0.9.0 8267072d
|
||||
postgres 0.10.0 721c12a7
|
||||
postgres 0.10.1 93bdf411
|
||||
postgres 0.11.0 HEAD
|
||||
postgres 0.10.1 HEAD
|
||||
rabbitmq 0.1.0 263e47be
|
||||
rabbitmq 0.2.0 53f2365e
|
||||
rabbitmq 0.3.0 6c5cf5bf
|
||||
@@ -107,20 +100,17 @@ rabbitmq 0.4.1 1128d0cb
|
||||
rabbitmq 0.4.2 4b90bf5a
|
||||
rabbitmq 0.4.3 1ec10165
|
||||
rabbitmq 0.4.4 8267072d
|
||||
rabbitmq 0.5.0 93bdf411
|
||||
rabbitmq 0.6.0 HEAD
|
||||
rabbitmq 0.5.0 HEAD
|
||||
redis 0.1.1 263e47be
|
||||
redis 0.2.0 53f2365e
|
||||
redis 0.3.0 6c5cf5bf
|
||||
redis 0.3.1 c62a83a7
|
||||
redis 0.4.0 84f3ccc0
|
||||
redis 0.5.0 4e68e65c
|
||||
redis 0.6.0 93bdf411
|
||||
redis 0.7.0 HEAD
|
||||
redis 0.6.0 HEAD
|
||||
tcp-balancer 0.1.0 263e47be
|
||||
tcp-balancer 0.2.0 53f2365e
|
||||
tcp-balancer 0.3.0 93bdf411
|
||||
tcp-balancer 0.4.0 HEAD
|
||||
tcp-balancer 0.3.0 HEAD
|
||||
tenant 0.1.4 afc997ef
|
||||
tenant 0.1.5 e3ab858a
|
||||
tenant 1.0.0 263e47be
|
||||
@@ -173,5 +163,4 @@ vpn 0.1.0 263e47be
|
||||
vpn 0.2.0 53f2365e
|
||||
vpn 0.3.0 6c5cf5bf
|
||||
vpn 0.3.1 1ec10165
|
||||
vpn 0.4.0 93bdf411
|
||||
vpn 0.5.0 HEAD
|
||||
vpn 0.4.0 HEAD
|
||||
|
||||
@@ -16,7 +16,7 @@ type: application
|
||||
# This is the chart version. This version number should be incremented each time you make changes
|
||||
# to the chart and its templates, including the app version.
|
||||
# Versions are expected to follow Semantic Versioning (https://semver.org/)
|
||||
version: 0.5.0
|
||||
version: 0.4.0
|
||||
|
||||
# This is the version number of the application being deployed. This version number should be
|
||||
# incremented each time you make changes to the application. Versions are not expected to
|
||||
|
||||
@@ -11,34 +11,35 @@ These presets are for basic testing and not meant to be used in production
|
||||
{{ include "resources.preset" (dict "type" "nano") -}}
|
||||
*/}}
|
||||
{{- define "resources.preset" -}}
|
||||
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
|
||||
{{- $presets := dict
|
||||
"nano" (dict
|
||||
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"micro" (dict
|
||||
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"small" (dict
|
||||
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
|
||||
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"medium" (dict
|
||||
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"large" (dict
|
||||
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"xlarge" (dict
|
||||
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
"2xlarge" (dict
|
||||
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
|
||||
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
|
||||
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
|
||||
)
|
||||
}}
|
||||
{{- if hasKey $presets .type -}}
|
||||
|
||||
@@ -3,24 +3,24 @@
|
||||
arch: amd64
|
||||
platform: metal
|
||||
secureboot: false
|
||||
version: v1.10.1
|
||||
version: v1.9.5
|
||||
input:
|
||||
kernel:
|
||||
path: /usr/install/amd64/vmlinuz
|
||||
initramfs:
|
||||
path: /usr/install/amd64/initramfs.xz
|
||||
baseInstaller:
|
||||
imageRef: ghcr.io/siderolabs/installer:v1.10.1
|
||||
imageRef: ghcr.io/siderolabs/installer:v1.9.5
|
||||
systemExtensions:
|
||||
- imageRef: ghcr.io/siderolabs/amd-ucode:20250410
|
||||
- imageRef: ghcr.io/siderolabs/amd-ucode:20250311
|
||||
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20241110
|
||||
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250410
|
||||
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250311
|
||||
- imageRef: ghcr.io/siderolabs/i915-ucode:20241110
|
||||
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250410
|
||||
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250311
|
||||
- imageRef: ghcr.io/siderolabs/intel-ucode:20250211
|
||||
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250410
|
||||
- imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.1
|
||||
- imageRef: ghcr.io/siderolabs/zfs:2.3.1-v1.10.1
|
||||
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250311
|
||||
- imageRef: ghcr.io/siderolabs/drbd:9.2.12-v1.9.5
|
||||
- imageRef: ghcr.io/siderolabs/zfs:2.2.7-v1.9.5
|
||||
output:
|
||||
kind: initramfs
|
||||
imageOptions: {}
|
||||
|
||||
@@ -3,24 +3,24 @@
|
||||
arch: amd64
|
||||
platform: metal
|
||||
secureboot: false
|
||||
version: v1.10.1
|
||||
version: v1.9.5
|
||||
input:
|
||||
kernel:
|
||||
path: /usr/install/amd64/vmlinuz
|
||||
initramfs:
|
||||
path: /usr/install/amd64/initramfs.xz
|
||||
baseInstaller:
|
||||
imageRef: ghcr.io/siderolabs/installer:v1.10.1
|
||||
imageRef: ghcr.io/siderolabs/installer:v1.9.5
|
||||
systemExtensions:
|
||||
- imageRef: ghcr.io/siderolabs/amd-ucode:20250410
|
||||
- imageRef: ghcr.io/siderolabs/amd-ucode:20250311
|
||||
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20241110
|
||||
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250410
|
||||
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250311
|
||||
- imageRef: ghcr.io/siderolabs/i915-ucode:20241110
|
||||
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250410
|
||||
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250311
|
||||
- imageRef: ghcr.io/siderolabs/intel-ucode:20250211
|
||||
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250410
|
||||
- imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.1
|
||||
- imageRef: ghcr.io/siderolabs/zfs:2.3.1-v1.10.1
|
||||
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250311
|
||||
- imageRef: ghcr.io/siderolabs/drbd:9.2.12-v1.9.5
|
||||
- imageRef: ghcr.io/siderolabs/zfs:2.2.7-v1.9.5
|
||||
output:
|
||||
kind: installer
|
||||
imageOptions: {}
|
||||
|
||||
@@ -3,24 +3,24 @@
|
||||
arch: amd64
|
||||
platform: metal
|
||||
secureboot: false
|
||||
version: v1.10.1
|
||||
version: v1.9.5
|
||||
input:
|
||||
kernel:
|
||||
path: /usr/install/amd64/vmlinuz
|
||||
initramfs:
|
||||
path: /usr/install/amd64/initramfs.xz
|
||||
baseInstaller:
|
||||
imageRef: ghcr.io/siderolabs/installer:v1.10.1
|
||||
imageRef: ghcr.io/siderolabs/installer:v1.9.5
|
||||
systemExtensions:
|
||||
- imageRef: ghcr.io/siderolabs/amd-ucode:20250410
|
||||
- imageRef: ghcr.io/siderolabs/amd-ucode:20250311
|
||||
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20241110
|
||||
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250410
|
||||
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250311
|
||||
- imageRef: ghcr.io/siderolabs/i915-ucode:20241110
|
||||
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250410
|
||||
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250311
|
||||
- imageRef: ghcr.io/siderolabs/intel-ucode:20250211
|
||||
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250410
|
||||
- imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.1
|
||||
- imageRef: ghcr.io/siderolabs/zfs:2.3.1-v1.10.1
|
||||
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250311
|
||||
- imageRef: ghcr.io/siderolabs/drbd:9.2.12-v1.9.5
|
||||
- imageRef: ghcr.io/siderolabs/zfs:2.2.7-v1.9.5
|
||||
output:
|
||||
kind: iso
|
||||
imageOptions: {}
|
||||
|
||||
@@ -3,24 +3,24 @@
|
||||
arch: amd64
|
||||
platform: metal
|
||||
secureboot: false
|
||||
version: v1.10.1
|
||||
version: v1.9.5
|
||||
input:
|
||||
kernel:
|
||||
path: /usr/install/amd64/vmlinuz
|
||||
initramfs:
|
||||
path: /usr/install/amd64/initramfs.xz
|
||||
baseInstaller:
|
||||
imageRef: ghcr.io/siderolabs/installer:v1.10.1
|
||||
imageRef: ghcr.io/siderolabs/installer:v1.9.5
|
||||
systemExtensions:
|
||||
- imageRef: ghcr.io/siderolabs/amd-ucode:20250410
|
||||
- imageRef: ghcr.io/siderolabs/amd-ucode:20250311
|
||||
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20241110
|
||||
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250410
|
||||
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250311
|
||||
- imageRef: ghcr.io/siderolabs/i915-ucode:20241110
|
||||
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250410
|
||||
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250311
|
||||
- imageRef: ghcr.io/siderolabs/intel-ucode:20250211
|
||||
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250410
|
||||
- imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.1
|
||||
- imageRef: ghcr.io/siderolabs/zfs:2.3.1-v1.10.1
|
||||
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250311
|
||||
- imageRef: ghcr.io/siderolabs/drbd:9.2.12-v1.9.5
|
||||
- imageRef: ghcr.io/siderolabs/zfs:2.2.7-v1.9.5
|
||||
output:
|
||||
kind: kernel
|
||||
imageOptions: {}
|
||||
|
||||
@@ -3,24 +3,24 @@
|
||||
arch: amd64
|
||||
platform: metal
|
||||
secureboot: false
|
||||
version: v1.10.1
|
||||
version: v1.9.5
|
||||
input:
|
||||
kernel:
|
||||
path: /usr/install/amd64/vmlinuz
|
||||
initramfs:
|
||||
path: /usr/install/amd64/initramfs.xz
|
||||
baseInstaller:
|
||||
imageRef: ghcr.io/siderolabs/installer:v1.10.1
|
||||
imageRef: ghcr.io/siderolabs/installer:v1.9.5
|
||||
systemExtensions:
|
||||
- imageRef: ghcr.io/siderolabs/amd-ucode:20250410
|
||||
- imageRef: ghcr.io/siderolabs/amd-ucode:20250311
|
||||
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20241110
|
||||
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250410
|
||||
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250311
|
||||
- imageRef: ghcr.io/siderolabs/i915-ucode:20241110
|
||||
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250410
|
||||
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250311
|
||||
- imageRef: ghcr.io/siderolabs/intel-ucode:20250211
|
||||
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250410
|
||||
- imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.1
|
||||
- imageRef: ghcr.io/siderolabs/zfs:2.3.1-v1.10.1
|
||||
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250311
|
||||
- imageRef: ghcr.io/siderolabs/drbd:9.2.12-v1.9.5
|
||||
- imageRef: ghcr.io/siderolabs/zfs:2.2.7-v1.9.5
|
||||
output:
|
||||
kind: image
|
||||
imageOptions: { diskSize: 1306525696, diskFormat: raw }
|
||||
|
||||
@@ -3,24 +3,24 @@
|
||||
arch: amd64
|
||||
platform: nocloud
|
||||
secureboot: false
|
||||
version: v1.10.1
|
||||
version: v1.9.5
|
||||
input:
|
||||
kernel:
|
||||
path: /usr/install/amd64/vmlinuz
|
||||
initramfs:
|
||||
path: /usr/install/amd64/initramfs.xz
|
||||
baseInstaller:
|
||||
imageRef: ghcr.io/siderolabs/installer:v1.10.1
|
||||
imageRef: ghcr.io/siderolabs/installer:v1.9.5
|
||||
systemExtensions:
|
||||
- imageRef: ghcr.io/siderolabs/amd-ucode:20250410
|
||||
- imageRef: ghcr.io/siderolabs/amd-ucode:20250311
|
||||
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20241110
|
||||
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250410
|
||||
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250311
|
||||
- imageRef: ghcr.io/siderolabs/i915-ucode:20241110
|
||||
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250410
|
||||
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250311
|
||||
- imageRef: ghcr.io/siderolabs/intel-ucode:20250211
|
||||
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250410
|
||||
- imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.1
|
||||
- imageRef: ghcr.io/siderolabs/zfs:2.3.1-v1.10.1
|
||||
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250311
|
||||
- imageRef: ghcr.io/siderolabs/drbd:9.2.12-v1.9.5
|
||||
- imageRef: ghcr.io/siderolabs/zfs:2.2.7-v1.9.5
|
||||
output:
|
||||
kind: image
|
||||
imageOptions: { diskSize: 1306525696, diskFormat: raw }
|
||||
|
||||
@@ -1,2 +1,2 @@
|
||||
cozystack:
|
||||
image: ghcr.io/cozystack/cozystack/installer:v0.31.0-rc.2@sha256:05d812a6ac1df86c614b528e8d171af05c0080545ceee383d6cb043f415a4372
|
||||
image: ghcr.io/cozystack/cozystack/installer:v0.31.0-rc.1@sha256:ab0e8fd97632ba784a42a3d0714806ea327440f82ffa5c4896a87c5fb7c1ec6e
|
||||
|
||||
@@ -260,16 +260,68 @@ releases:
|
||||
releaseName: dashboard
|
||||
chart: cozy-dashboard
|
||||
namespace: cozy-dashboard
|
||||
dependsOn: [cilium,kubeovn,keycloak-configure]
|
||||
values:
|
||||
{{- $dashboardKCconfig := lookup "v1" "ConfigMap" "cozy-dashboard" "kubeapps-auth-config" }}
|
||||
{{- $dashboardKCValues := dig "data" "values.yaml" "" $dashboardKCconfig | fromYaml }}
|
||||
{{- toYaml (deepCopy $dashboardKCValues | mergeOverwrite (fromYaml (include "cozystack.defaultDashboardValues" .))) | nindent 4 }}
|
||||
dependsOn:
|
||||
- cilium
|
||||
- kubeovn
|
||||
{{- if eq $oidcEnabled "true" }}
|
||||
- keycloak-configure
|
||||
kubeapps:
|
||||
{{- if .Capabilities.APIVersions.Has "source.toolkit.fluxcd.io/v1" }}
|
||||
{{- with (lookup "source.toolkit.fluxcd.io/v1" "HelmRepository" "cozy-public" "").items }}
|
||||
redis:
|
||||
master:
|
||||
podAnnotations:
|
||||
{{- range $index, $repo := . }}
|
||||
{{- with (($repo.status).artifact).revision }}
|
||||
repository.cozystack.io/{{ $repo.metadata.name }}: {{ quote . }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
frontend:
|
||||
resourcesPreset: "none"
|
||||
dashboard:
|
||||
resourcesPreset: "none"
|
||||
{{- $cozystackBranding:= lookup "v1" "ConfigMap" "cozy-system" "cozystack-branding" }}
|
||||
{{- $branding := dig "data" "branding" "" $cozystackBranding }}
|
||||
{{- if $branding }}
|
||||
customLocale:
|
||||
"Kubeapps": {{ $branding }}
|
||||
{{- end }}
|
||||
customStyle: |
|
||||
{{- $logoImage := dig "data" "logo" "" $cozystackBranding }}
|
||||
{{- if $logoImage }}
|
||||
.kubeapps-logo {
|
||||
background-image: {{ $logoImage }}
|
||||
}
|
||||
{{- end }}
|
||||
#serviceaccount-selector {
|
||||
display: none;
|
||||
}
|
||||
.login-moreinfo {
|
||||
display: none;
|
||||
}
|
||||
a[href="#/docs"] {
|
||||
display: none;
|
||||
}
|
||||
.login-group .clr-form-control .clr-control-label {
|
||||
display: none;
|
||||
}
|
||||
.appview-separator div.appview-first-row div.center {
|
||||
display: none;
|
||||
}
|
||||
.appview-separator div.appview-first-row section[aria-labelledby="app-secrets"] {
|
||||
display: none;
|
||||
}
|
||||
.appview-first-row section[aria-labelledby="access-urls-title"] {
|
||||
width: 100%;
|
||||
}
|
||||
{{- $dashboardKCconfig := lookup "v1" "ConfigMap" "cozy-dashboard" "kubeapps-auth-config" }}
|
||||
{{- $dashboardKCValues := dig "data" "values.yaml" "" $dashboardKCconfig }}
|
||||
{{- if $dashboardKCValues }}
|
||||
valuesFrom:
|
||||
- kind: ConfigMap
|
||||
name: kubeapps-auth-config
|
||||
valuesKey: values.yaml
|
||||
{{- end }}
|
||||
dependsOn: [keycloak-configure]
|
||||
|
||||
- name: kamaji
|
||||
releaseName: kamaji
|
||||
|
||||
@@ -155,9 +155,66 @@ releases:
chart: cozy-dashboard
namespace: cozy-dashboard
values:
{{- $dashboardKCconfig := lookup "v1" "ConfigMap" "cozy-dashboard" "kubeapps-auth-config" }}
{{- $dashboardKCValues := dig "data" "values.yaml" (dict) $dashboardKCconfig }}
{{- toYaml (deepCopy $dashboardKCValues | mergeOverwrite (fromYaml (include "cozystack.defaultDashboardValues" .))) | nindent 4 }}
kubeapps:
{{- if .Capabilities.APIVersions.Has "source.toolkit.fluxcd.io/v1" }}
{{- with (lookup "source.toolkit.fluxcd.io/v1" "HelmRepository" "cozy-public" "").items }}
redis:
master:
podAnnotations:
{{- range $index, $repo := . }}
{{- with (($repo.status).artifact).revision }}
repository.cozystack.io/{{ $repo.metadata.name }}: {{ quote . }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
frontend:
resourcesPreset: "none"
dashboard:
resourcesPreset: "none"
{{- $cozystackBranding:= lookup "v1" "ConfigMap" "cozy-system" "cozystack-branding" }}
{{- $branding := dig "data" "branding" "" $cozystackBranding }}
{{- if $branding }}
customLocale:
"Kubeapps": {{ $branding }}
{{- end }}
customStyle: |
{{- $logoImage := dig "data" "logo" "" $cozystackBranding }}
{{- if $logoImage }}
.kubeapps-logo {
background-image: {{ $logoImage }}
}
{{- end }}
#serviceaccount-selector {
display: none;
}
.login-moreinfo {
display: none;
}
a[href="#/docs"] {
display: none;
}
.login-group .clr-form-control .clr-control-label {
display: none;
}
.appview-separator div.appview-first-row div.center {
display: none;
}
.appview-separator div.appview-first-row section[aria-labelledby="app-secrets"] {
display: none;
}
.appview-first-row section[aria-labelledby="access-urls-title"] {
width: 100%;
}
{{- $dashboardKCconfig := lookup "v1" "ConfigMap" "cozy-dashboard" "kubeapps-auth-config" }}
{{- $dashboardKCValues := dig "data" "values.yaml" "" $dashboardKCconfig }}
{{- if $dashboardKCValues }}
valuesFrom:
- kind: ConfigMap
name: kubeapps-auth-config
valuesKey: values.yaml
{{- end }}

{{- if eq $oidcEnabled "true" }}
dependsOn: [keycloak-configure]
{{- else }}

@@ -16,57 +16,3 @@ Get IP-addresses of master nodes
{{- end -}}
{{ join "," $ips }}
{{- end -}}

{{- define "cozystack.defaultDashboardValues" -}}
kubeapps:
{{- if .Capabilities.APIVersions.Has "source.toolkit.fluxcd.io/v1" }}
{{- with (lookup "source.toolkit.fluxcd.io/v1" "HelmRepository" "cozy-public" "").items }}
redis:
master:
podAnnotations:
{{- range $index, $repo := . }}
{{- with (($repo.status).artifact).revision }}
repository.cozystack.io/{{ $repo.metadata.name }}: {{ quote . }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
frontend:
resourcesPreset: "none"
dashboard:
resourcesPreset: "none"
{{- $cozystackBranding:= lookup "v1" "ConfigMap" "cozy-system" "cozystack-branding" }}
{{- $branding := dig "data" "branding" "" $cozystackBranding }}
{{- if $branding }}
customLocale:
"Kubeapps": {{ $branding }}
{{- end }}
customStyle: |
{{- $logoImage := dig "data" "logo" "" $cozystackBranding }}
{{- if $logoImage }}
.kubeapps-logo {
background-image: {{ $logoImage }}
}
{{- end }}
#serviceaccount-selector {
display: none;
}
.login-moreinfo {
display: none;
}
a[href="#/docs"] {
display: none;
}
.login-group .clr-form-control .clr-control-label {
display: none;
}
.appview-separator div.appview-first-row div.center {
display: none;
}
.appview-separator div.appview-first-row section[aria-labelledby="app-secrets"] {
display: none;
}
.appview-first-row section[aria-labelledby="access-urls-title"] {
width: 100%;
}
{{- end }}

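For context, the `cozystack.defaultDashboardValues` helper above reads an optional branding ConfigMap. A minimal sketch of such a ConfigMap, assuming only the names taken from the lookups (`cozy-system/cozystack-branding`, keys `branding` and `logo`); the values are placeholders:

```yaml
# Hypothetical example; only the ConfigMap name and keys come from the template above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cozystack-branding
  namespace: cozy-system
data:
  branding: "Acme Cloud"                        # substituted for the "Kubeapps" locale string
  logo: 'url("https://example.com/logo.svg")'   # inserted as background-image of .kubeapps-logo
```
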
@@ -7,6 +7,9 @@

{{/* collect dependency namespaces from releases */}}
{{- range $x := $bundle.releases }}
{{- if or (has $x.name $disabledComponents) (and ($x.optional) (not (has $x.name $enabledComponents))) }}
{{- continue }}
{{- end }}
{{- $_ := set $dependencyNamespaces $x.name $x.namespace }}
{{- end }}

@@ -72,10 +75,21 @@ spec:
{{- toYaml . | nindent 4}}
{{- end }}

{{- if $x.valuesFrom }}
valuesFrom:
{{- range $source := $x.valuesFrom }}
- kind: {{ $source.kind }}
name: {{ $source.name }}
{{- if $source.valuesKey }}
valuesKey: {{ $source.valuesKey }}
{{- end }}
{{- end }}
{{- end }}

{{- with $x.dependsOn }}
dependsOn:
{{- range $dep := . }}
{{- if not (has $dep $disabledComponents) }}
{{- if hasKey $dependencyNamespaces $dep }}
- name: {{ $dep }}
namespace: {{ index $dependencyNamespaces $dep }}
{{- end }}

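A rough sketch of what the `valuesFrom` and `dependsOn` logic above is expected to render for a single release entry; the release name, ConfigMap, and namespaces are illustrative, not taken from the diff:

```yaml
# Hypothetical bundle entry (input):
#   - name: dashboard
#     namespace: cozy-dashboard
#     dependsOn: [cilium]
#     valuesFrom:
#     - kind: ConfigMap
#       name: kubeapps-auth-config
#       valuesKey: values.yaml
# Rendered HelmRelease fragment (output):
spec:
  valuesFrom:
    - kind: ConfigMap
      name: kubeapps-auth-config
      valuesKey: values.yaml
  dependsOn:
    - name: cilium
      namespace: cozy-cilium   # assumed namespace, resolved via the $dependencyNamespaces map
```
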
@@ -17,8 +17,6 @@ image: image-e2e-sandbox
image-e2e-sandbox:
docker buildx build -f images/e2e-sandbox/Dockerfile ../../.. \
--provenance false \
--builder=$(BUILDER) \
--platform=$(PLATFORM) \
--tag $(REGISTRY)/e2e-sandbox:$(call settag,$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/e2e-sandbox:latest \
--cache-to type=inline \
@@ -31,7 +29,7 @@ image-e2e-sandbox:
rm -f images/e2e-sandbox.json

test: ## Run the end-to-end tests in existing sandbox.
docker exec "${SANDBOX_NAME}" sh -c 'cd /workspace && export COZYSTACK_INSTALLER_YAML=$$(helm template -n cozy-system installer ./packages/core/installer) && hack/e2e.bats'
docker exec "${SANDBOX_NAME}" sh -c 'cd /workspace && export COZYSTACK_INSTALLER_YAML=$$(helm template -n cozy-system installer ./packages/core/installer) && hack/e2e.sh'

delete: ## Remove sandbox from existing Kubernetes cluster.
docker rm -f "${SANDBOX_NAME}" || true

@@ -4,16 +4,14 @@ ARG KUBECTL_VERSION=1.32.0
ARG TALOSCTL_VERSION=1.9.5
ARG HELM_VERSION=3.16.4

ARG TARGETOS
ARG TARGETARCH

RUN apt update -q
RUN apt install -yq --no-install-recommends genisoimage ca-certificates qemu-kvm qemu-utils iproute2 iptables wget xz-utils netcat curl jq make git bats
RUN curl -sSL "https://github.com/siderolabs/talos/releases/download/v${TALOSCTL_VERSION}/talosctl-${TARGETOS}-${TARGETARCH}" -o /usr/local/bin/talosctl \
&& chmod +x /usr/local/bin/talosctl
RUN curl -sSL "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/${TARGETOS}/${TARGETARCH}/kubectl" -o /usr/local/bin/kubectl \
&& chmod +x /usr/local/bin/kubectl
RUN curl -sSL "https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3" | bash -s - --version "v${HELM_VERSION}"
RUN curl -sSL "https://github.com/mikefarah/yq/releases/download/v4.44.3/yq_${TARGETOS}_${TARGETARCH}" -o /usr/local/bin/yq \
&& chmod +x /usr/local/bin/yq
RUN curl -sSL "https://fluxcd.io/install.sh" | bash
RUN apt-get update
RUN apt-get -y install genisoimage qemu-kvm qemu-utils iproute2 iptables wget xz-utils netcat curl jq make git
RUN curl -LO "https://github.com/siderolabs/talos/releases/download/v${TALOSCTL_VERSION}/talosctl-linux-amd64" \
&& chmod +x talosctl-linux-amd64 \
&& mv talosctl-linux-amd64 /usr/local/bin/talosctl
RUN curl -LO "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/linux/amd64/kubectl" \
&& chmod +x kubectl \
&& mv kubectl /usr/local/bin/kubectl
RUN curl -sSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash -s - --version "v${HELM_VERSION}"
RUN wget https://github.com/mikefarah/yq/releases/download/v4.44.3/yq_linux_amd64 -O /usr/local/bin/yq && chmod +x /usr/local/bin/yq
RUN curl -s https://fluxcd.io/install.sh | bash

@@ -1,2 +1,2 @@
e2e:
image: ghcr.io/cozystack/cozystack/e2e-sandbox:v0.31.0-rc.2@sha256:2d35b2cce2f093c5c3145a08d9981e2bf28cf45a6804117d50bd6345a15ecd1a
image: ghcr.io/cozystack/cozystack/e2e-sandbox:v0.31.0-rc.1@sha256:a20a6834527ccfc8daf7413a15234f3f7dbbd7774810c8e1966736d487ef7d0c

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/matchbox:v0.31.0-rc.2@sha256:78e5a28badd3c804e55e5c1376f12dad77e04b9f6f80637596939ba348f7104b
ghcr.io/cozystack/cozystack/matchbox:v0.31.0-rc.1@sha256:de69166fd6efec988cad7ad5be41bbb57c8134508c531d7496fc7f15772e4993

@@ -3,4 +3,4 @@ name: ingress
description: NGINX Ingress Controller
icon: /logos/ingress-nginx.svg
type: application
version: 1.6.0
version: 1.5.1

@@ -4,10 +4,13 @@

### Common parameters

| Name | Description | Value |
| ---------------- | ----------------------------------------------------------------- | ------- |
| `replicas` | Number of ingress-nginx replicas | `2` |
| `externalIPs` | List of externalIPs for service. | `[]` |
| `whitelist` | List of client networks | `[]` |
| `clouflareProxy` | Restoring original visitor IPs when Cloudflare proxied is enabled | `false` |
| Name | Description | Value |
| ----------------- | ----------------------------------------------------------------- | ------- |
| `replicas` | Number of ingress-nginx replicas | `2` |
| `externalIPs` | List of externalIPs for service. | `[]` |
| `whitelist` | List of client networks | `[]` |
| `clouflareProxy` | Restoring original visitor IPs when Cloudflare proxied is enabled | `false` |
| `dashboard` | Should ingress serve Cozystack service dashboard | `false` |
| `cdiUploadProxy` | Should ingress serve CDI upload proxy | `false` |
| `virtExportProxy` | Should ingress serve KubeVirt export proxy | `false` |

packages/extra/ingress/templates/cdi-uploadproxy.yaml (new file, 37 lines)
@@ -0,0 +1,37 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}

{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}

{{- if .Values.cdiUploadProxy }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
cert-manager.io/cluster-issuer: letsencrypt-prod
{{- if eq $issuerType "cloudflare" }}
{{- else }}
acme.cert-manager.io/http01-ingress-class: {{ .Release.Namespace }}
{{- end }}
name: cdi-uploadproxy-{{ .Release.Namespace }}
namespace: cozy-kubevirt-cdi
spec:
ingressClassName: {{ .Release.Namespace }}
rules:
- host: cdi-uploadproxy.{{ $host }}
http:
paths:
- backend:
service:
name: cdi-uploadproxy
port:
number: 443
path: /
pathType: Prefix
tls:
- hosts:
- cdi-uploadproxy.{{ $host }}
secretName: cdi-uploadproxy-{{ .Release.Namespace }}-tls
{{- end }}

@@ -1,10 +1,19 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}
{{- $host := index $cozyConfig.data "root-host" }}
{{- $exposeServices := splitList "," ((index $cozyConfig.data "expose-services") | default "") }}
{{- $exposeIngress := index $cozyConfig.data "expose-ingress" | default "tenant-root" }}

{{- if and (has "dashboard" $exposeServices) }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}

{{- $tenantRoot := dict }}
{{- if .Capabilities.APIVersions.Has "helm.toolkit.fluxcd.io/v2" }}
{{- $tenantRoot = lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" "tenant-root" "tenant-root" }}
{{- end }}
{{- if and $tenantRoot $tenantRoot.spec $tenantRoot.spec.values $tenantRoot.spec.values.host }}
{{- $host = $tenantRoot.spec.values.host }}
{{- else }}
{{- end }}

{{- if .Values.dashboard }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
@@ -12,16 +21,16 @@ metadata:
cert-manager.io/cluster-issuer: letsencrypt-prod
{{- if eq $issuerType "cloudflare" }}
{{- else }}
acme.cert-manager.io/http01-ingress-class: {{ $exposeIngress }}
{{- end }}
acme.cert-manager.io/http01-ingress-class: {{ .Release.Namespace }}
nginx.ingress.kubernetes.io/proxy-body-size: 100m
nginx.ingress.kubernetes.io/proxy-buffer-size: 100m
nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
nginx.ingress.kubernetes.io/client-max-body-size: 100m
name: dashboard
{{- end }}
name: dashboard-{{ .Release.Namespace }}
namespace: cozy-dashboard
spec:
ingressClassName: {{ $exposeIngress }}
ingressClassName: {{ .Release.Namespace }}
rules:
- host: dashboard.{{ $host }}
http:
@@ -36,5 +45,5 @@ spec:
tls:
- hosts:
- dashboard.{{ $host }}
secretName: dashboard-tls
secretName: dashboard-{{ .Release.Namespace }}-tls
{{- end }}

@@ -1,6 +1,3 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $exposeIngress := index $cozyConfig.data "expose-ingress" | default "tenant-root" }}
{{- $exposeExternalIPs := (index $cozyConfig.data "expose-external-ips") | default "" }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
@@ -34,9 +31,9 @@ spec:
enabled: false
{{- end }}
service:
{{- if and (eq $exposeIngress .Release.Namespace) $exposeExternalIPs }}
{{- if .Values.externalIPs }}
externalIPs:
{{- toYaml (splitList "," $exposeExternalIPs) | nindent 12 }}
{{- toYaml .Values.externalIPs | nindent 12 }}
type: ClusterIP
externalTrafficPolicy: Cluster
{{- else }}

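The two templates above read their exposure settings from the `cozystack` ConfigMap in `cozy-system`. A minimal sketch of that ConfigMap with the keys referenced here (`expose-services`, `expose-ingress`, `expose-external-ips`); the host and IPs are placeholders:

```yaml
# Hypothetical example; key names match the lookups above, values are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cozystack
  namespace: cozy-system
data:
  root-host: example.org
  expose-services: "dashboard,api"              # parsed into $exposeServices
  expose-ingress: "tenant-root"                 # ingress class used for exposed services
  expose-external-ips: "192.0.2.10,192.0.2.11"  # split into service.externalIPs
```
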
@@ -25,6 +25,21 @@
"type": "boolean",
"description": "Restoring original visitor IPs when Cloudflare proxied is enabled",
"default": false
},
"dashboard": {
"type": "boolean",
"description": "Should ingress serve Cozystack service dashboard",
"default": false
},
"cdiUploadProxy": {
"type": "boolean",
"description": "Should ingress serve CDI upload proxy",
"default": false
},
"virtExportProxy": {
"type": "boolean",
"description": "Should ingress serve KubeVirt export proxy",
"default": false
}
}
}

@@ -4,6 +4,17 @@
##
replicas: 2

## @param externalIPs [array] List of externalIPs for service.
## Optional. If not specified will use LoadBalancer service by default.
##
## e.g:
## externalIPs:
## - "11.22.33.44"
## - "11.22.33.45"
## - "11.22.33.46"
##
externalIPs: []

## @param whitelist List of client networks
## Example:
## whitelist:
@@ -13,3 +24,12 @@ whitelist: []

## @param clouflareProxy Restoring original visitor IPs when Cloudflare proxied is enabled
clouflareProxy: false

## @param dashboard Should ingress serve Cozystack service dashboard
dashboard: false

## @param cdiUploadProxy Should ingress serve CDI upload proxy
cdiUploadProxy: false

## @param virtExportProxy Should ingress serve KubeVirt export proxy
virtExportProxy: false

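A short usage sketch for the new toggles declared above, assuming the usual way of supplying values to the ingress application; all three default to `false`:

```yaml
# Illustrative values for packages/extra/ingress; only keys declared above are used.
replicas: 2
externalIPs: []
whitelist: []
clouflareProxy: false
dashboard: true         # expose the Cozystack dashboard through this ingress
cdiUploadProxy: true    # expose the CDI upload proxy
virtExportProxy: true   # expose the KubeVirt export proxy
```
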
packages/extra/ingress/vm-exportproxy.yaml (new file, 37 lines)
@@ -0,0 +1,37 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}

{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}

{{- if .Values.virtExportProxy }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
cert-manager.io/cluster-issuer: letsencrypt-prod
{{- if eq $issuerType "cloudflare" }}
{{- else }}
acme.cert-manager.io/http01-ingress-class: {{ .Release.Namespace }}
{{- end }}
name: virt-exportproxy-{{ .Release.Namespace }}
namespace: cozy-kubevirt
spec:
ingressClassName: {{ .Release.Namespace }}
rules:
- host: virt-exportproxy.{{ $host }}
http:
paths:
- backend:
service:
name: virt-exportproxy
port:
number: 443
path: /
pathType: ImplementationSpecific
tls:
- hosts:
- virt-exportproxy.{{ $host }}
secretName: virt-exportproxy-{{ .Release.Namespace }}-tls
{{- end }}

@@ -13,10 +13,8 @@ generate:
rm -f values.schema.json.tmp

image:
docker buildx build images/grafana \
docker buildx build --platform linux/amd64 images/grafana \
--provenance false \
--builder=$(BUILDER) \
--platform=$(PLATFORM) \
--tag $(REGISTRY)/grafana:$(call settag,$(GRAFANA_TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/grafana:latest \
--cache-to type=inline \

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/grafana:1.9.2@sha256:c63978e1ed0304e8518b31ddee56c4e8115541b997d8efbe1c0a74da57140399
ghcr.io/cozystack/cozystack/grafana:1.9.2@sha256:66c4547efd18b4d7475ff73b2c4e2f39e9b4471d55e85237e2fe3e87af05c302

@@ -19,7 +19,7 @@ ingress 1.2.0 28fca4ef
ingress 1.3.0 fde4bcfa
ingress 1.4.0 fd240701
ingress 1.5.0 93bdf411
ingress 1.6.0 HEAD
ingress 1.5.1 HEAD
monitoring 1.0.0 d7cfa53c
monitoring 1.1.0 25221fdc
monitoring 1.2.0 f81be075

@@ -6,15 +6,14 @@ include ../../../scripts/common-envs.mk
include ../../../scripts/package.mk

update:
@echo Nothing to update
rm -rf charts
helm pull oci://ghcr.io/aenix-io/charts/etcd-operator --untar --untardir charts

image: image-s3manager

image-s3manager:
docker buildx build images/s3manager \
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/s3manager \
--provenance false \
--builder=$(BUILDER) \
--platform=$(PLATFORM) \
--tag $(REGISTRY)/s3manager:$(call settag,$(S3MANAGER_TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/s3manager:latest \
--cache-to type=inline \

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:a219040fed290492f047818e5c5a864a30112ff418ad4b12b24de9709302427a
ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:67e4a5da0ab43d93e8b75094d5a2db8159cb927a47b94f945f80d0ffb93d3301

@@ -1,15 +1,11 @@
# Source: https://github.com/cloudlena/s3manager/blob/main/Dockerfile

FROM docker.io/library/golang:1 AS builder

ARG TARGETOS
ARG TARGETARCH

WORKDIR /usr/src/app
RUN wget -O- https://github.com/cloudlena/s3manager/archive/9a7c8e446b422f8973b8c461990f39fdafee9c27.tar.gz | tar -xzf- --strip 1
ADD cozystack.patch /
RUN git apply /cozystack.patch
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH CGO_ENABLED=0 go build -ldflags="-s -w" -a -installsuffix cgo -o bin/s3manager
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -a -installsuffix cgo -o bin/s3manager

FROM docker.io/library/alpine:latest
WORKDIR /usr/src/app

@@ -1 +1 @@
bucketName: "cozystack"
bucketName: ""

@@ -1,11 +1,11 @@
export NAME=capi-operator
export NAMESPACE=cozy-cluster-api
export REPO_NAME=capi-operator
export REPO_URL=https://kubernetes-sigs.github.io/cluster-api-operator
export CHART_NAME=cluster-api-operator
export CHART_VERSION=^0.19

include ../../../scripts/package.mk

update: clean capi-operator-update
rm -rf charts/cluster-api-operator/charts/
update:
rm -rf charts
helm repo add capi-operator https://kubernetes-sigs.github.io/cluster-api-operator
helm repo update capi-operator
helm pull capi-operator/cluster-api-operator --untar --untardir charts
rm -rf charts/cluster-api-operator/charts

@@ -5,7 +5,7 @@ metadata:
name: cluster-api
spec:
# https://github.com/kubernetes-sigs/cluster-api
version: v1.10.1
version: v1.10.0
---
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: ControlPlaneProvider
@@ -13,7 +13,7 @@ metadata:
name: kamaji
spec:
# https://github.com/clastix/cluster-api-control-plane-provider-kamaji
version: v0.15.1
version: v0.14.2
deployment:
containers:
- name: manager
@@ -31,7 +31,7 @@ metadata:
name: kubeadm
spec:
# https://github.com/kubernetes-sigs/cluster-api
version: v1.10.1
version: v1.10.0
---
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: InfrastructureProvider

@@ -18,8 +18,6 @@ update:
image:
docker buildx build images/cilium \
--provenance false \
--builder=$(BUILDER) \
--platform=$(PLATFORM) \
--tag $(REGISTRY)/cilium:$(call settag,$(CILIUM_TAG)) \
--tag $(REGISTRY)/cilium:$(call settag,$(CILIUM_TAG)-$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/cilium:latest \

@@ -9,8 +9,6 @@ image: image-cozystack-api
image-cozystack-api:
docker buildx build -f images/cozystack-api/Dockerfile ../../.. \
--provenance false \
--builder=$(BUILDER) \
--platform=$(PLATFORM) \
--tag $(REGISTRY)/cozystack-api:$(call settag,$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/cozystack-api:latest \
--cache-to type=inline \

@@ -1,19 +1,16 @@
FROM golang:1.23-alpine AS builder

ARG TARGETOS
ARG TARGETARCH
FROM golang:1.23-alpine as builder

WORKDIR /workspace

COPY go.mod go.sum ./
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go mod download
RUN go mod download

COPY api api/
COPY pkg pkg/
COPY cmd cmd/
COPY internal internal/

RUN GOOS=$TARGETOS GOARCH=$TARGETARCH CGO_ENABLED=0 go build -ldflags="-extldflags=-static" -o /cozystack-api cmd/cozystack-api/main.go
RUN CGO_ENABLED=0 go build -ldflags="-extldflags=-static" -o /cozystack-api cmd/cozystack-api/main.go

FROM scratch

@@ -1,28 +0,0 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $host := index $cozyConfig.data "root-host" }}
{{- $exposeServices := splitList "," ((index $cozyConfig.data "expose-services") | default "") }}
{{- $exposeIngress := index $cozyConfig.data "expose-ingress" | default "tenant-root" }}

{{- if and (has "api" $exposeServices) }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
name: kubernetes
namespace: default
spec:
ingressClassName: {{ $exposeIngress }}
rules:
- host: api.{{ $host }}
http:
paths:
- backend:
service:
name: kubernetes
port:
number: 443
path: /
pathType: Prefix
{{- end }}

@@ -1,2 +1,2 @@
cozystackAPI:
image: ghcr.io/cozystack/cozystack/cozystack-api:v0.31.0-rc.2@sha256:d3794a5ebd49ee28ef7108213d3bb5053f5247ef62855f4731c7cafb8059a635
image: ghcr.io/cozystack/cozystack/cozystack-api:v0.31.0-rc.1@sha256:1dd9f3ec9d5630d5b49ffe9380d6a0131bf04e7e9bddcc3fd6f59089c6563b1c

@@ -9,8 +9,6 @@ image: image-cozystack-controller update-version
image-cozystack-controller:
docker buildx build -f images/cozystack-controller/Dockerfile ../../.. \
--provenance false \
--builder=$(BUILDER) \
--platform=$(PLATFORM) \
--tag $(REGISTRY)/cozystack-controller:$(call settag,$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/cozystack-controller:latest \
--cache-to type=inline \

@@ -1,19 +1,16 @@
FROM golang:1.23-alpine AS builder

ARG TARGETOS
ARG TARGETARCH
FROM golang:1.23-alpine as builder

WORKDIR /workspace

COPY go.mod go.sum ./
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go mod download
RUN go mod download

COPY api api/
COPY pkg pkg/
COPY cmd cmd/
COPY internal internal/

RUN GOOS=$TARGETOS GOARCH=$TARGETARCH CGO_ENABLED=0 go build -ldflags="-extldflags=-static" -o /cozystack-controller cmd/cozystack-controller/main.go
RUN CGO_ENABLED=0 go build -ldflags="-extldflags=-static" -o /cozystack-controller cmd/cozystack-controller/main.go

FROM scratch

@@ -1,5 +1,5 @@
cozystackController:
image: ghcr.io/cozystack/cozystack/cozystack-controller:v0.31.0-rc.2@sha256:fb7cfdf62a128103954f0cb711b2d21650c0d2f7ff639d41a56f68d454a1e4ea
image: ghcr.io/cozystack/cozystack/cozystack-controller:v0.31.0-rc.1@sha256:96492f384c07619c091764c759adde6ef91054b1223f03f7ddd62a56c40b06ac
debug: false
disableTelemetry: false
cozystackVersion: "v0.31.0-rc.2"
cozystackVersion: "v0.31.0-rc.1"

@@ -17,8 +17,7 @@ update-chart:
patch --no-backup-if-mismatch charts/kubeapps/templates/frontend/configmap.yaml < patches/logos.patch

update-dockerfiles:
@echo Update dockerfiles manually
#tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/vmware-tanzu/kubeapps | awk -F'[/^]' 'END{print $$3}') && \
tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/vmware-tanzu/kubeapps | awk -F'[/^]' 'END{print $$3}') && \
wget https://github.com/vmware-tanzu/kubeapps/raw/$${tag}/cmd/kubeapps-apis/Dockerfile -O images/kubeapps-apis/Dockerfile && \
patch --no-backup-if-mismatch images/kubeapps-apis/Dockerfile < images/kubeapps-apis/dockerfile.diff && \
node_image=$$(wget -O- https://github.com/vmware-tanzu/kubeapps/raw/main/dashboard/Dockerfile | awk '/FROM bitnami\/node/ {print $$2}') && \
@@ -29,8 +28,6 @@ update-dockerfiles:
image-dashboard: update-version
docker buildx build images/dashboard \
--provenance false \
--builder=$(BUILDER) \
--platform=$(PLATFORM) \
--tag $(REGISTRY)/dashboard:$(call settag,$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/dashboard:latest \
--cache-to type=inline \
@@ -51,8 +48,6 @@ image-dashboard: update-version
image-kubeapps-apis: update-version
docker buildx build images/kubeapps-apis \
--provenance false \
--builder=$(BUILDER) \
--platform=$(PLATFORM) \
--tag $(REGISTRY)/kubeapps-apis:$(call settag,$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/kubeapps-apis:latest \
--cache-to type=inline \

@@ -76,7 +76,7 @@ data:
"kubeappsNamespace": {{ .Release.Namespace | quote }},
"helmGlobalNamespace": {{ include "kubeapps.helmGlobalPackagingNamespace" . | quote }},
"carvelGlobalNamespace": {{ .Values.kubeappsapis.pluginConfig.kappController.packages.v1alpha1.globalPackagingNamespace | quote }},
"appVersion": "v0.31.0-rc.2",
"appVersion": "v0.31.0-rc.1",
"authProxyEnabled": {{ .Values.authProxy.enabled }},
"oauthLoginURI": {{ .Values.authProxy.oauthLoginURI | quote }},
"oauthLogoutURI": {{ .Values.authProxy.oauthLogoutURI | quote }},

@@ -3,7 +3,7 @@

# syntax = docker/dockerfile:1

FROM alpine AS source
FROM alpine as source
ARG COMMIT_REF=dd02680d796c962b8dcc4e5ea70960a846c1acdc
RUN apk add --no-cache patch
WORKDIR /source
@@ -12,9 +12,8 @@ RUN wget -O- https://github.com/cozystack/kubeapps/archive/${COMMIT_REF}.tar.gz
FROM bitnami/golang:1.23.4 AS builder
WORKDIR /go/src/github.com/vmware-tanzu/kubeapps
COPY --from=source /source/go.mod /source/go.sum ./
ARG TARGETOS
ARG TARGETARCH
ARG VERSION="devel"
ARG TARGETARCH

# If true, run golangci-lint to detect issues
ARG lint
@@ -30,12 +29,10 @@ ARG GRPC_HEALTH_PROBE_VERSION="0.4.34"

# Install lint tools
RUN if [ ! -z ${lint:-} ]; then \
GOOS=$TARGETOS GOARCH=$TARGETARCH go install github.com/golangci/golangci-lint/cmd/golangci-lint@v$GOLANGCILINT_VERSION; \
go install github.com/golangci/golangci-lint/cmd/golangci-lint@v$GOLANGCILINT_VERSION; \
fi

RUN if [ $TARGETARCH = 'amd64' ]; then BUF_ARCH='x86_64'; elif [ $TARGETARCH = 'arm64' ]; then BUF_ARCH='aarch64'; fi && \
if [ $TARGETOS = 'linux' ]; then BUF_PLATFORM='Linux'; fi && \
curl -sSL "https://github.com/bufbuild/buf/releases/download/v${BUF_VERSION}/buf-${BUF_PLATFORM}-${BUF_ARCH}" -o "/tmp/buf" && chmod +x "/tmp/buf"
RUN curl -sSL "https://github.com/bufbuild/buf/releases/download/v$BUF_VERSION/buf-Linux-x86_64" -o "/tmp/buf" && chmod +x "/tmp/buf"

# TODO: Remove and instead use built-in gRPC container probes once we're supporting >= 1.24 only. https://kubernetes.io/blog/2022/05/13/grpc-probes-now-in-beta/
RUN curl -sSL "https://github.com/grpc-ecosystem/grpc-health-probe/releases/download/v${GRPC_HEALTH_PROBE_VERSION}/grpc_health_probe-linux-${TARGETARCH}" -o "/bin/grpc_health_probe" && chmod +x "/bin/grpc_health_probe"
@@ -44,7 +41,7 @@ RUN curl -sSL "https://github.com/grpc-ecosystem/grpc-health-probe/releases/down
# https://github.com/golang/go/issues/27719#issuecomment-514747274
RUN --mount=type=cache,target=/go/pkg/mod \
--mount=type=cache,target=/root/.cache/go-build \
GOOS=$TARGETOS GOARCH=$TARGETARCH GOPROXY="https://proxy.golang.org,direct" go mod download
GOPROXY="https://proxy.golang.org,direct" go mod download

# We don't copy the pkg and cmd directories until here so the above layers can
# be reused.
@@ -63,7 +60,7 @@ RUN /tmp/buf lint ./cmd/kubeapps-apis
# Build the main grpc server
RUN --mount=type=cache,target=/go/pkg/mod \
--mount=type=cache,target=/root/.cache/go-build \
GOOS=$TARGETOS GOARCH=$TARGETARCH GOPROXY="https://proxy.golang.org,direct" \
GOPROXY="https://proxy.golang.org,direct" \
go build \
-ldflags "-X github.com/vmware-tanzu/kubeapps/cmd/kubeapps-apis/cmd.version=$VERSION" \
./cmd/kubeapps-apis
@@ -71,7 +68,7 @@ RUN --mount=type=cache,target=/go/pkg/mod \
## Build 'fluxv2' plugin, version 'v1alpha1'
RUN --mount=type=cache,target=/go/pkg/mod \
--mount=type=cache,target=/root/.cache/go-build \
GOOS=$TARGETOS GOARCH=$TARGETARCH GOPROXY="https://proxy.golang.org,direct" \
GOPROXY="https://proxy.golang.org,direct" \
go build \
-ldflags "-X github.com/vmware-tanzu/kubeapps/cmd/kubeapps-apis/cmd.version=$VERSION" \
-o /fluxv2-packages-v1alpha1-plugin.so -buildmode=plugin \
@@ -80,7 +77,7 @@ RUN --mount=type=cache,target=/go/pkg/mod \
## Build 'helm' plugin, version 'v1alpha1'
RUN --mount=type=cache,target=/go/pkg/mod \
--mount=type=cache,target=/root/.cache/go-build \
GOOS=$TARGETOS GOARCH=$TARGETARCH GOPROXY="https://proxy.golang.org,direct" \
GOPROXY="https://proxy.golang.org,direct" \
go build \
-ldflags "-X github.com/vmware-tanzu/kubeapps/cmd/kubeapps-apis/cmd.version=$VERSION" \
-o /helm-packages-v1alpha1-plugin.so -buildmode=plugin \
@@ -89,7 +86,7 @@ RUN --mount=type=cache,target=/go/pkg/mod \
## Build 'resources' plugin, version 'v1alpha1'
RUN --mount=type=cache,target=/go/pkg/mod \
--mount=type=cache,target=/root/.cache/go-build \
GOOS=$TARGETOS GOARCH=$TARGETARCH GOPROXY="https://proxy.golang.org,direct" \
GOPROXY="https://proxy.golang.org,direct" \
go build \
-ldflags "-X github.com/vmware-tanzu/kubeapps/cmd/kubeapps-apis/cmd.version=$VERSION" \
-o /resources-v1alpha1-plugin.so -buildmode=plugin \

@@ -19,24 +19,15 @@ kubeapps:
image:
registry: ghcr.io/cozystack/cozystack
repository: dashboard
tag: v0.31.0-rc.2
tag: v0.31.0-rc.1
digest: "sha256:a83fe4654f547469cfa469a02bda1273c54bca103a41eb007fdb2e18a7a91e93"
redis:
master:
resourcesPreset: "none"
resources:
requests:
cpu: 200m
memory: 256Mi
limits:
memory: 256Mi
kubeappsapis:
resourcesPreset: "none"
image:
registry: ghcr.io/cozystack/cozystack
repository: kubeapps-apis
tag: v0.31.0-rc.2
digest: "sha256:db4f33e9ca6969459c9baf0131c2c342cb6c366df16df7361e7cbdeb4a854cea"
tag: v0.31.0-rc.1
digest: "sha256:ca65949e84c9a92436f47525ae92861984406644779cbb2ecdb8e2a1a133fabf"
pluginConfig:
flux:
packages:

Some files were not shown because too many files have changed in this diff.