Prepare release v0.0.1

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
This commit is contained in:
Andrei Kvapil
2023-12-10 20:09:43 +01:00
commit f642698921
1181 changed files with 381555 additions and 0 deletions

1
.dockerignore Normal file
View File

@@ -0,0 +1 @@
_out

1
.gitignore vendored Normal file
View File

@@ -0,0 +1 @@
_out

201
LICENSE Normal file
View File

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

15
Makefile Normal file
View File

@@ -0,0 +1,15 @@
.PHONY: manifests repos assets
manifests:
(cd packages/core/installer/; helm template -n cozy-installer installer .) > manifests/cozystack-installer.yaml
repos:
rm -rf _out
make -C packages/apps check-version-map
make -C packages/extra check-version-map
make -C packages/system repo
make -C packages/apps repo
make -C packages/extra repo
assets:
make -C packages/core/talos/ assets

560
README.md Normal file
View File

@@ -0,0 +1,560 @@
![Cozystack](img/cozystack-logo.svg)
[![Open Source](https://img.shields.io/badge/Open-Source-brightgreen)](https://opensource.org/)
[![Apache-2.0 License](https://img.shields.io/github/license/aenix-io/cozystack)](https://opensource.org/licenses/)
[![Support](https://img.shields.io/badge/$-support-12a0df.svg?style=flat)](https://aenix.io/contact-us/#meet)
[![Active](http://img.shields.io/badge/Status-Active-green.svg)](https://aenix.io/cozystack/)
[![GitHub Release](https://img.shields.io/github/release/aenix-io/cozystack.svg?style=flat)](https://github.com/aenix-io/cozystack)
[![GitHub Commit](https://img.shields.io/github/commit-activity/y/aenix-io/cozystack)](https://github.com/aenix-io/cozystack)
# Cozystack
**Cozystack** is an open-source **PaaS platform** for cloud providers.
With Cozystack, you can transform a bunch of servers into an intelligent system with a simple REST API for spawning Kubernetes clusters, Database-as-a-Service, virtual machines, load balancers, HTTP caching services, and more.
You can use Cozystack to build your own cloud or to provide cost-effective development environments.
## Use-Cases
### As a backend for a public cloud
Cozystack positions itself as a framework for building public clouds; the key word here is framework. It is important to understand that Cozystack is made for cloud providers, not for end users.
Although it has a graphical interface, the current security model does not allow public user access to your management cluster.
Instead, end users get access to their own Kubernetes clusters, from which they can order LoadBalancers and additional services, but they have no access to, and know nothing about, your management cluster powered by Cozystack.
Thus, to integrate with your billing system, it is enough to teach it to talk to the management Kubernetes cluster and apply a YAML manifest describing the service you're interested in. Cozystack will do the rest of the work for you.
![](https://aenix.io/wp-content/uploads/2024/02/Wireframe-1.png)
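To illustrate the idea, here is a minimal sketch of such an integration. Since the platform reconciles services with Flux (see the HelmRelease listing later in this README), a billing system could simply apply a manifest like the one below; the tenant namespace, chart name, repository, and values are placeholders, not a guaranteed API:
```bash
# Hypothetical example: order a managed database for a tenant with a single manifest.
# All names below (tenant namespace, chart, repository, values) are placeholders.
kubectl apply -f - <<EOF
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: customer1-db
  namespace: tenant-customer1      # tenant namespace managed by the platform
spec:
  interval: 5m
  chart:
    spec:
      chart: postgres              # one of the platform's application charts
      sourceRef:
        kind: HelmRepository
        name: cozystack-apps       # assumed repository name
  values:
    size: 10Gi
EOF
```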
### As a private cloud for Infrastructure-as-Code
One of the use cases is a self-service portal for users within your company, where they can order the service they're interested in, for example a managed database.
You can implement GitOps best practices, where users launch their own Kubernetes clusters and databases for their needs with a simple commit of configuration into your infrastructure Git repository.
Thanks to the standardized approach to deploying applications, you can extend the platform's capabilities using the functionality of standard Helm charts.
### As a Kubernetes distribution for Bare Metal
We created Cozystack primarily for our own needs, drawing on vast experience in building reliable systems on bare-metal infrastructure. This experience led to a separate boxed product aimed at standardization and at providing a ready-to-use tool for managing your infrastructure.
Cozystack already solves a huge scope of infrastructure tasks right out of the box: provisioning bare-metal servers, a ready-made monitoring system, fast and reliable storage, a network fabric that can interconnect with your existing infrastructure, the ability to run virtual machines, databases, and much more.
All this makes Cozystack a convenient platform for delivering and running your applications on bare metal.
## Screenshot
![](https://aenix.io/wp-content/uploads/2023/12/cozystack1-1.png)
## Core values
### Standardization and unification
All components of the platform are based on open source tools and technologies which are widely known in the industry.
### Collaborate, not compete
If a feature being developed for the platform could be useful to an upstream project, it should be contributed to that upstream project rather than implemented within the platform.
### API-first
Cozystack is based on Kubernetes and involves close interaction with its API. We don't aim to hide all the elements behind a pretty UI or any sort of customization; instead, we provide a standard interface and teach users how to work with basic primitives. The web interface is used solely for deploying applications and quickly diving into the basic concepts of the platform.
## Quick Start
### Prepare infrastructure
![](https://aenix.io/wp-content/uploads/2024/02/Wireframe-2.png)
You need 3 physical servers or VMs with nested virtualization enabled:
```
CPU: 4 cores
CPU model: host
RAM: 8-16 GB
HDD1: 32 GB
HDD2: 100GB (raw)
```
You also need one management VM or physical server connected to the same network,
with any Linux system installed on it (e.g. Ubuntu is enough).
**Note:** The VMs should support the `x86-64-v2` microarchitecture level; most likely you can achieve this by setting the CPU model to `host`.
#### Install dependencies:
- `docker`
- `talosctl`
- `dialog`
- `nmap`
- `make`
- `yq`
- `kubectl`
- `helm`
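One possible way to install the dependencies listed above on an Ubuntu management host (a sketch assuming an amd64 machine; adjust for your distribution and architecture):
```bash
# Packages available from the distribution repositories
sudo apt-get update && sudo apt-get install -y docker.io dialog nmap make curl

# talosctl (official installer script from the Talos project)
curl -sL https://talos.dev/install | sh

# yq (static binary from the upstream releases)
sudo curl -sL https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -o /usr/local/bin/yq
sudo chmod +x /usr/local/bin/yq

# kubectl (latest stable release)
curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl

# helm (official installer script)
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```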
### Netboot server
Start matchbox with prebuilt Talos image for Cozystack:
```bash
sudo docker run --name=matchbox -d --net=host ghcr.io/aenix-io/cozystack/matchbox:v0.0.1 \
-address=:8080 \
-log-level=debug
```
Start DHCP-Server:
```bash
sudo docker run --name=dnsmasq -d --cap-add=NET_ADMIN --net=host quay.io/poseidon/dnsmasq \
-d -q -p0 \
--dhcp-range=192.168.100.3,192.168.100.254 \
--dhcp-option=option:router,192.168.100.1 \
--enable-tftp \
--tftp-root=/var/lib/tftpboot \
--dhcp-match=set:bios,option:client-arch,0 \
--dhcp-boot=tag:bios,undionly.kpxe \
--dhcp-match=set:efi32,option:client-arch,6 \
--dhcp-boot=tag:efi32,ipxe.efi \
--dhcp-match=set:efibc,option:client-arch,7 \
--dhcp-boot=tag:efibc,ipxe.efi \
--dhcp-match=set:efi64,option:client-arch,9 \
--dhcp-boot=tag:efi64,ipxe.efi \
--dhcp-userclass=set:ipxe,iPXE \
--dhcp-boot=tag:ipxe,http://192.168.100.254:8080/boot.ipxe \
--log-queries \
--log-dhcp
```
Where:
- `192.168.100.3,192.168.100.254` is the range to allocate IPs from
- `192.168.100.1` is your gateway
- `192.168.100.254` is the address of your management server
Check the status of the containers:
```
docker ps
```
example output:
```console
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
22044f26f74d quay.io/poseidon/dnsmasq "/usr/sbin/dnsmasq -…" 6 seconds ago Up 5 seconds dnsmasq
231ad81ff9e0 ghcr.io/aenix-io/cozystack/matchbox:v0.0.1 "/matchbox -address=…" 58 seconds ago Up 57 seconds matchbox
```
### Bootstrap cluster
Write configuration for Cozystack:
```yaml
cat > patch.yaml <<\EOT
machine:
kubelet:
nodeIP:
validSubnets:
- 192.168.100.0/24
kernel:
modules:
- name: openvswitch
- name: drbd
parameters:
- usermode_helper=disabled
- name: zfs
install:
image: ghcr.io/aenix-io/cozystack/talos:v1.6.4
files:
- content: |
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
device_ownership_from_security_context = true
path: /etc/cri/conf.d/20-customization.part
op: create
cluster:
network:
cni:
name: none
podSubnets:
- 10.244.0.0/16
serviceSubnets:
- 10.96.0.0/16
EOT
cat > patch-controlplane.yaml <<\EOT
cluster:
allowSchedulingOnControlPlanes: true
controllerManager:
extraArgs:
bind-address: 0.0.0.0
scheduler:
extraArgs:
bind-address: 0.0.0.0
apiServer:
certSANs:
- 127.0.0.1
proxy:
disabled: true
discovery:
enabled: false
etcd:
advertisedSubnets:
- 192.168.100.0/24
EOT
```
Run [talos-bootstrap](https://github.com/aenix-io/talos-bootstrap/) to deploy cluster:
```bash
talos-bootstrap install
```
Save admin kubeconfig to access your Kubernetes cluster:
```bash
cp -i kubeconfig ~/.kube/config
```
Check connection:
```bash
kubectl get ns
```
example output:
```console
NAME STATUS AGE
default Active 7m56s
kube-node-lease Active 7m56s
kube-public Active 7m56s
kube-system Active 7m56s
```
### Install Cozystack
Write the config for Cozystack:
**Note:** please make sure you use the same settings as specified in the `patch.yaml` and `patch-controlplane.yaml` files.
```yaml
cat > cozystack-config.yaml <<\EOT
apiVersion: v1
kind: ConfigMap
metadata:
name: cozystack
namespace: cozy-system
data:
cluster-name: "cozystack"
ipv4-pod-cidr: "10.244.0.0/16"
ipv4-pod-gateway: "10.244.0.1"
ipv4-svc-cidr: "10.96.0.0/16"
ipv4-join-cidr: "100.64.0.0/16"
EOT
```
Create the namespace and install the Cozystack system components:
```bash
kubectl create ns cozy-system
kubectl apply -f cozystack-config.yaml
kubectl apply -f manifests/cozystack-installer.yaml
```
(optional) You can follow the installer logs:
```bash
kubectl logs -n cozy-system deploy/cozystack -f
```
Wait for a while, then check the status of the installation:
```bash
kubectl get hr -A
```
Wait until all releases reach the `Ready` state:
```console
NAMESPACE NAME AGE READY STATUS
cozy-cert-manager cert-manager 4m1s True Release reconciliation succeeded
cozy-cert-manager cert-manager-issuers 4m1s True Release reconciliation succeeded
cozy-cilium cilium 4m1s True Release reconciliation succeeded
cozy-cluster-api capi-operator 4m1s True Release reconciliation succeeded
cozy-cluster-api capi-providers 4m1s True Release reconciliation succeeded
cozy-dashboard dashboard 4m1s True Release reconciliation succeeded
cozy-fluxcd cozy-fluxcd 4m1s True Release reconciliation succeeded
cozy-grafana-operator grafana-operator 4m1s True Release reconciliation succeeded
cozy-kamaji kamaji 4m1s True Release reconciliation succeeded
cozy-kubeovn kubeovn 4m1s True Release reconciliation succeeded
cozy-kubevirt-cdi kubevirt-cdi 4m1s True Release reconciliation succeeded
cozy-kubevirt-cdi kubevirt-cdi-operator 4m1s True Release reconciliation succeeded
cozy-kubevirt kubevirt 4m1s True Release reconciliation succeeded
cozy-kubevirt kubevirt-operator 4m1s True Release reconciliation succeeded
cozy-linstor linstor 4m1s True Release reconciliation succeeded
cozy-linstor piraeus-operator 4m1s True Release reconciliation succeeded
cozy-mariadb-operator mariadb-operator 4m1s True Release reconciliation succeeded
cozy-metallb metallb 4m1s True Release reconciliation succeeded
cozy-monitoring monitoring 4m1s True Release reconciliation succeeded
cozy-postgres-operator postgres-operator 4m1s True Release reconciliation succeeded
cozy-rabbitmq-operator rabbitmq-operator 4m1s True Release reconciliation succeeded
cozy-redis-operator redis-operator 4m1s True Release reconciliation succeeded
cozy-telepresence telepresence 4m1s True Release reconciliation succeeded
cozy-victoria-metrics-operator victoria-metrics-operator 4m1s True Release reconciliation succeeded
tenant-root tenant-root 4m1s True Release reconciliation succeeded
```
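Instead of re-running `kubectl get hr -A` manually, you can also block until every release reports readiness. This is a small convenience sketch (it assumes the Flux `HelmRelease` CRD exposes a `Ready` condition, which recent Flux versions do):
```bash
# Wait up to 15 minutes for all HelmReleases in all namespaces to report Ready
kubectl wait hr --all -A --for=condition=Ready --timeout=15m
```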
#### Configure Storage
Set up an alias to access LINSTOR:
```bash
alias linstor='kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor'
```
List your nodes:
```bash
linstor node list
```
example output:
```console
+-------------------------------------------------------+
| Node | NodeType | Addresses | State |
|=======================================================|
| srv1 | SATELLITE | 192.168.100.11:3367 (SSL) | Online |
| srv2 | SATELLITE | 192.168.100.12:3367 (SSL) | Online |
| srv3 | SATELLITE | 192.168.100.13:3367 (SSL) | Online |
+-------------------------------------------------------+
```
List empty devices:
```bash
linstor physical-storage list
```
example output:
```console
+--------------------------------------------+
| Size | Rotational | Nodes |
|============================================|
| 107374182400 | True | srv3[/dev/sdb] |
| | | srv1[/dev/sdb] |
| | | srv2[/dev/sdb] |
+--------------------------------------------+
```
Create storage pools:
```bash
linstor ps cdp lvm srv1 /dev/sdb --pool-name data --storage-pool data
linstor ps cdp lvm srv2 /dev/sdb --pool-name data --storage-pool data
linstor ps cdp lvm srv3 /dev/sdb --pool-name data --storage-pool data
```
List storage pools:
```bash
linstor sp l
```
example output:
```console
+-------------------------------------------------------------------------------------------------------------------------------------+
| StoragePool | Node | Driver | PoolName | FreeCapacity | TotalCapacity | CanSnapshots | State | SharedName |
|=====================================================================================================================================|
| DfltDisklessStorPool | srv1 | DISKLESS | | | | False | Ok | srv1;DfltDisklessStorPool |
| DfltDisklessStorPool | srv2 | DISKLESS | | | | False | Ok | srv2;DfltDisklessStorPool |
| DfltDisklessStorPool | srv3 | DISKLESS | | | | False | Ok | srv3;DfltDisklessStorPool |
| data | srv1 | LVM | data | 100.00 GiB | 100.00 GiB | False | Ok | srv1;data |
| data | srv2 | LVM | data | 100.00 GiB | 100.00 GiB | False | Ok | srv2;data |
| data | srv3 | LVM | data | 100.00 GiB | 100.00 GiB | False | Ok | srv3;data |
+-------------------------------------------------------------------------------------------------------------------------------------+
```
Create default storage classes:
```yaml
kubectl create -f- <<EOT
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/layerList: "storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: replicated
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/autoPlace: "3"
linstor.csi.linbit.com/layerList: "drbd storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOT
```
List the storage classes:
```bash
kubectl get storageclasses
```
example output:
```console
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local (default) linstor.csi.linbit.com Delete WaitForFirstConsumer true 11m
replicated linstor.csi.linbit.com Delete WaitForFirstConsumer true 11m
```
#### Configure network interconnection
To access your services, select a range of unused IPs, e.g. `192.168.100.200-192.168.100.250`.
**Note:** These IPs should be from the same network as the nodes, or all necessary routes to them should be configured.
Configure MetalLB to use and announce this range:
```yaml
kubectl create -f- <<EOT
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: cozystack
namespace: cozy-metallb
spec:
ipAddressPools:
- cozy-public
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: cozystack
namespace: cozy-metallb
spec:
addresses:
- 192.168.100.200-192.168.100.250
autoAssign: true
avoidBuggyIPs: false
EOT
```
#### Set up basic applications
Get the token from `tenant-root`:
```bash
kubectl get secret -n tenant-root tenant-root -o go-template='{{ printf "%s\n" (index .data "token" | base64decode) }}'
```
Enable port forwarding to cozy-dashboard:
```bash
kubectl port-forward -n cozy-dashboard svc/dashboard 8080:80
```
Open: http://localhost:8080/
- Select `tenant-root`
- Click the `Upgrade` button
- Enter the domain you wish to use as the parent domain for all deployed applications into `host`
**Note:**
- If you have no domain yet, you can use `192.168.100.200.nip.io`, where `192.168.100.200` is the first IP address in your address range.
- Alternatively, you can leave the default value; however, you'll need to modify your `/etc/hosts` every time you want to access a specific application.
- Set `etcd`, `monitoring`, and `ingress` to the enabled position
- Click Deploy
Check that persistent volumes have been provisioned:
```bash
kubectl get pvc -n tenant-root
```
example output:
```console
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
data-etcd-0 Bound pvc-4cbd29cc-a29f-453d-b412-451647cd04bf 10Gi RWO local <unset> 2m10s
data-etcd-1 Bound pvc-1579f95a-a69d-4a26-bcc2-b15ccdbede0d 10Gi RWO local <unset> 115s
data-etcd-2 Bound pvc-907009e5-88bf-4d18-91e7-b56b0dbfb97e 10Gi RWO local <unset> 91s
grafana-db-1 Bound pvc-7b3f4e23-228a-46fd-b820-d033ef4679af 10Gi RWO local <unset> 2m41s
grafana-db-2 Bound pvc-ac9b72a4-f40e-47e8-ad24-f50d843b55e4 10Gi RWO local <unset> 113s
vmselect-cachedir-vmselect-longterm-0 Bound pvc-622fa398-2104-459f-8744-565eee0a13f1 2Gi RWO local <unset> 2m21s
vmselect-cachedir-vmselect-longterm-1 Bound pvc-fc9349f5-02b2-4e25-8bef-6cbc5cc6d690 2Gi RWO local <unset> 2m21s
vmselect-cachedir-vmselect-shortterm-0 Bound pvc-7acc7ff6-6b9b-4676-bd1f-6867ea7165e2 2Gi RWO local <unset> 2m41s
vmselect-cachedir-vmselect-shortterm-1 Bound pvc-e514f12b-f1f6-40ff-9838-a6bda3580eb7 2Gi RWO local <unset> 2m40s
vmstorage-db-vmstorage-longterm-0 Bound pvc-e8ac7fc3-df0d-4692-aebf-9f66f72f9fef 10Gi RWO local <unset> 2m21s
vmstorage-db-vmstorage-longterm-1 Bound pvc-68b5ceaf-3ed1-4e5a-9568-6b95911c7c3a 10Gi RWO local <unset> 2m21s
vmstorage-db-vmstorage-shortterm-0 Bound pvc-cee3a2a4-5680-4880-bc2a-85c14dba9380 10Gi RWO local <unset> 2m41s
vmstorage-db-vmstorage-shortterm-1 Bound pvc-d55c235d-cada-4c4a-8299-e5fc3f161789 10Gi RWO local <unset> 2m41s
```
Check that all pods are running:
```bash
kubectl get pod -n tenant-root
```
example output:
```console
NAME READY STATUS RESTARTS AGE
etcd-0 1/1 Running 0 2m1s
etcd-1 1/1 Running 0 106s
etcd-2 1/1 Running 0 82s
grafana-db-1 1/1 Running 0 119s
grafana-db-2 1/1 Running 0 13s
grafana-deployment-74b5656d6-5dcvn 1/1 Running 0 90s
grafana-deployment-74b5656d6-q5589 1/1 Running 1 (105s ago) 111s
root-ingress-controller-6ccf55bc6d-pg79l 2/2 Running 0 2m27s
root-ingress-controller-6ccf55bc6d-xbs6x 2/2 Running 0 2m29s
root-ingress-defaultbackend-686bcbbd6c-5zbvp 1/1 Running 0 2m29s
vmalert-vmalert-644986d5c-7hvwk 2/2 Running 0 2m30s
vmalertmanager-alertmanager-0 2/2 Running 0 2m32s
vmalertmanager-alertmanager-1 2/2 Running 0 2m31s
vminsert-longterm-75789465f-hc6cz 1/1 Running 0 2m10s
vminsert-longterm-75789465f-m2v4t 1/1 Running 0 2m12s
vminsert-shortterm-78456f8fd9-wlwww 1/1 Running 0 2m29s
vminsert-shortterm-78456f8fd9-xg7cw 1/1 Running 0 2m28s
vmselect-longterm-0 1/1 Running 0 2m12s
vmselect-longterm-1 1/1 Running 0 2m12s
vmselect-shortterm-0 1/1 Running 0 2m31s
vmselect-shortterm-1 1/1 Running 0 2m30s
vmstorage-longterm-0 1/1 Running 0 2m12s
vmstorage-longterm-1 1/1 Running 0 2m12s
vmstorage-shortterm-0 1/1 Running 0 2m32s
vmstorage-shortterm-1 1/1 Running 0 2m31s
```
Now you can get the public IP of the ingress controller:
```
kubectl get svc -n tenant-root root-ingress-controller
```
example output:
```console
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
root-ingress-controller LoadBalancer 10.96.16.141 192.168.100.200 80:31632/TCP,443:30113/TCP 3m33s
```
Use `grafana.example.org` (served on 192.168.100.200) to access system monitoring, where `example.org` is the domain you specified for `tenant-root`:
- login: `admin`
- password:
```bash
kubectl get secret -n tenant-root grafana-admin-password -o go-template='{{ printf "%s\n" (index .data "password" | base64decode) }}'
```
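If you haven't delegated a real DNS zone to the platform yet, a quick workaround for reaching these hostnames from your workstation is to map them to the ingress controller's external IP in `/etc/hosts` (a sketch; substitute your own domain and IP):
```bash
# Point the tenant-root hostnames at the ingress controller's external IP
echo "192.168.100.200 grafana.example.org" | sudo tee -a /etc/hosts
```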

1
dashboards/.gitignore vendored Normal file
View File

@@ -0,0 +1 @@
*.tmp

3038
dashboards/cache/nginx-vts-stats.json vendored Normal file

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,611 @@
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"target": {
"limit": 100,
"matchAny": false,
"tags": [],
"type": "dashboard"
},
"type": "dashboard"
}
]
},
"description": "Track whether the cluster can be upgraded to the newer Kubernetes versions",
"editable": false,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": 30,
"iteration": 1656060742701,
"links": [],
"liveNow": false,
"panels": [
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"fixedColor": "blue",
"mode": "fixed"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 5,
"x": 0,
"y": 0
},
"id": 9,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"text": {},
"textMode": "name"
},
"pluginVersion": "8.5.2",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"editorMode": "code",
"expr": "topk(1, sum by (git_version) (kubernetes_build_info{job=\"kube-apiserver\"}))",
"legendFormat": "{{ git_version }}",
"range": true,
"refId": "A"
}
],
"title": "Current K8s version",
"transformations": [
{
"id": "reduce",
"options": {
"labelsToFields": false,
"reducers": []
}
}
],
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [
{
"options": {
"from": 1,
"result": {
"index": 0,
"text": "Cannot be upgraded"
},
"to": 1e+18
},
"type": "range"
},
{
"options": {
"match": "null+nan",
"result": {
"index": 1,
"text": "Can be upgraded"
}
},
"type": "special"
}
],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 7,
"x": 5,
"y": 0
},
"id": 4,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "center",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "8.5.2",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"editorMode": "code",
"exemplar": false,
"expr": "sum(\n ceil(\n sum by(removed_release, resource, group, version) (\n sum by(removed_release, resource, group, version) \n (apiserver_requested_deprecated_apis{removed_release=\"$k8s\"}) \n *\n on(group,version,resource,subresource)\n group_right() (increase(apiserver_request_total[1h]))\n )\n )\n) or vector(0)\n+ \nsum(\n sum by (api_version, kind, helm_release_name, helm_release_namespace)\n (resource_versions_compatibility{k8s_version=~\"$k8s\"})\n) or vector(0)\n> 0",
"instant": true,
"range": false,
"refId": "A"
}
],
"title": "Upgrade to desired version status",
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"gridPos": {
"h": 5,
"w": 12,
"x": 12,
"y": 0
},
"id": 7,
"options": {
"content": "<br>\n\n#### Follow instructions to migrate from using **deprecated APIs**\n\nhttps://kubernetes.io/docs/reference/using-api/deprecation-guide/",
"mode": "markdown"
},
"pluginVersion": "8.5.2",
"type": "text"
},
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"custom": {
"align": "auto",
"displayMode": "color-background",
"filterable": false,
"inspect": false,
"minWidth": 100
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "super-light-orange",
"value": null
},
{
"color": "light-orange",
"value": 50
},
{
"color": "orange",
"value": 200
},
{
"color": "semi-dark-orange",
"value": 500
},
{
"color": "dark-orange",
"value": 1000
}
]
}
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "Group"
},
"properties": [
{
"id": "custom.displayMode",
"value": "auto"
}
]
},
{
"matcher": {
"id": "byName",
"options": "Version"
},
"properties": [
{
"id": "custom.displayMode",
"value": "auto"
}
]
},
{
"matcher": {
"id": "byName",
"options": "Resource"
},
"properties": [
{
"id": "custom.displayMode",
"value": "auto"
}
]
}
]
},
"gridPos": {
"h": 13,
"w": 12,
"x": 0,
"y": 5
},
"id": 2,
"options": {
"footer": {
"enablePagination": false,
"fields": "",
"reducer": [
"sum"
],
"show": false
},
"showHeader": true,
"sortBy": []
},
"pluginVersion": "8.5.2",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"editorMode": "code",
"exemplar": false,
"expr": "ceil(\n sum by(removed_release, resource, group, version)(\n sum by(removed_release, resource, group, version) \n (apiserver_requested_deprecated_apis{removed_release=~\"$k8s\"}) \n * \n on(group,version,resource,subresource)\n group_right() (increase(apiserver_request_total[3h]))\n )\n) > 0",
"format": "table",
"instant": true,
"range": false,
"refId": "A"
}
],
"title": "Requests to kube-apiserver (last 3 hours)",
"transformations": [
{
"id": "filterFieldsByName",
"options": {
"include": {
"names": [
"group",
"removed_release",
"resource",
"version",
"Value"
]
}
}
},
{
"id": "organize",
"options": {
"excludeByName": {},
"indexByName": {
"Value": 4,
"group": 0,
"removed_release": 3,
"resource": 2,
"version": 1
},
"renameByName": {
"Value": "",
"group": "Group",
"removed_release": "Removed Release",
"resource": "Resource",
"version": "Version"
}
}
},
{
"id": "sortBy",
"options": {
"fields": {},
"sort": [
{
"desc": true,
"field": "Value"
}
]
}
}
],
"type": "table"
},
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"custom": {
"align": "auto",
"displayMode": "color-background",
"filterable": false,
"inspect": false,
"minWidth": 100
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "super-light-orange",
"value": null
},
{
"color": "#EAB839",
"value": 5
},
{
"color": "orange",
"value": 15
},
{
"color": "dark-orange",
"value": 50
}
]
}
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "API version"
},
"properties": [
{
"id": "custom.displayMode",
"value": "auto"
}
]
},
{
"matcher": {
"id": "byName",
"options": "Helm release"
},
"properties": [
{
"id": "custom.displayMode",
"value": "auto"
}
]
},
{
"matcher": {
"id": "byName",
"options": "Helm release namespace"
},
"properties": [
{
"id": "custom.displayMode",
"value": "auto"
}
]
},
{
"matcher": {
"id": "byName",
"options": "Kind"
},
"properties": [
{
"id": "custom.displayMode",
"value": "auto"
}
]
}
]
},
"gridPos": {
"h": 13,
"w": 12,
"x": 12,
"y": 5
},
"id": 5,
"options": {
"footer": {
"enablePagination": false,
"fields": "",
"reducer": [
"sum"
],
"show": false
},
"showHeader": true,
"sortBy": []
},
"pluginVersion": "8.5.2",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"editorMode": "code",
"exemplar": false,
"expr": "sum by (api_version, kind, helm_release_name, helm_release_namespace) (resource_versions_compatibility{k8s_version=~\"$k8s\"})",
"format": "table",
"instant": true,
"legendFormat": "__auto",
"range": false,
"refId": "A"
}
],
"title": "Helm releases",
"transformations": [
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true
},
"indexByName": {},
"renameByName": {
"Time": "",
"Value": "Quantity",
"api_version": "API version",
"helm_release_name": "Helm release",
"helm_release_namespace": "Helm release namespace",
"kind": "Kind"
}
}
},
{
"id": "sortBy",
"options": {
"fields": {},
"sort": [
{
"desc": true,
"field": "Quantity"
}
]
}
}
],
"type": "table"
}
],
"schemaVersion": 36,
"style": "dark",
"tags": [],
"templating": {
"list": [
{
"current": {
"selected": false,
"text": "default",
"value": "default"
},
"description": null,
"error": null,
"hide": 0,
"includeAll": false,
"label": "datasource",
"multi": false,
"name": "ds_prometheus",
"options": [],
"query": "prometheus",
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"type": "datasource"
},
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"definition": "label_values(apiserver_requested_deprecated_apis, removed_release)",
"hide": 0,
"includeAll": false,
"label": "Desired K8s version",
"multi": false,
"name": "k8s",
"options": [],
"query": {
"query": "label_values(apiserver_requested_deprecated_apis, removed_release)",
"refId": "StandardVariableQuery"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"type": "query"
}
]
},
"time": {
"from": "now-3h",
"to": "now"
},
"timepicker": {
"hidden": true,
"refresh_intervals": [
"30s"
]
},
"timezone": "",
"title": "Deprecated APIs",
"uid": "B0d1Wt3nk",
"version": 2,
"weekStart": ""
}

4436
dashboards/db/maria-db.json Normal file

File diff suppressed because it is too large Load Diff

1659
dashboards/db/redis.json Normal file

File diff suppressed because it is too large Load Diff

13341
dashboards/main/node.json Normal file

File diff suppressed because it is too large Load Diff

1942
dashboards/main/nodes.json Normal file

File diff suppressed because it is too large Load Diff

1271
dashboards/main/ntp.json Normal file

File diff suppressed because it is too large Load Diff

4473
dashboards/main/pod.json Normal file

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,492 @@
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"target": {
"limit": 100,
"matchAny": false,
"tags": [],
"type": "dashboard"
},
"type": "dashboard"
}
]
},
"editable": false,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": 51,
"links": [],
"liveNow": false,
"panels": [
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "continuous-GrYlRd"
},
"decimals": 2,
"mappings": [],
"max": 1,
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 0.8
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"gridPos": {
"h": 26,
"w": 12,
"x": 0,
"y": 0
},
"id": 2,
"options": {
"displayMode": "gradient",
"minVizHeight": 10,
"minVizWidth": 0,
"orientation": "horizontal",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showUnfilled": true,
"valueMode": "color"
},
"pluginVersion": "10.1.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"editorMode": "code",
"expr": "sort_desc(kubelet_volume_stats_used_bytes{namespace=~\"$namespace\"} / kubelet_volume_stats_capacity_bytes{namespace=~\"$namespace\"})",
"legendFormat": "{{persistentvolumeclaim}}",
"range": true,
"refId": "A"
}
],
"title": "Persistent Volume Usage",
"type": "bargauge"
},
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "continuous-GrYlRd"
},
"decimals": 2,
"mappings": [],
"max": 1,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"gridPos": {
"h": 26,
"w": 12,
"x": 12,
"y": 0
},
"id": 4,
"options": {
"displayMode": "basic",
"minVizHeight": 10,
"minVizWidth": 0,
"orientation": "horizontal",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showUnfilled": true,
"valueMode": "color"
},
"pluginVersion": "10.1.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"editorMode": "code",
"expr": "sort_desc(kubelet_volume_stats_inodes_used{namespace=~\"$namespace\"} / kubelet_volume_stats_inodes{namespace=~\"$namespace\"})",
"legendFormat": "{{persistentvolumeclaim}}",
"range": true,
"refId": "A"
}
],
"title": "Persistent Volume Inodes Usage",
"type": "bargauge"
},
{
"collapsed": true,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 26
},
"id": 14,
"panels": [
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"decimals": 0,
"mappings": [],
"max": 1,
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "short"
},
"overrides": []
},
"gridPos": {
"h": 9,
"w": 24,
"x": 0,
"y": 19
},
"id": 16,
"options": {
"legend": {
"calcs": [],
"displayMode": "table",
"placement": "right",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"editorMode": "code",
"expr": "kube_persistentvolume_status_phase{phase=\"Bound\"} == 1",
"legendFormat": "{{phase}} - {{persistentvolume}}",
"range": true,
"refId": "A"
}
],
"title": "Persistent Volume Status by Phase",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"decimals": 0,
"mappings": [],
"max": 1,
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "short"
},
"overrides": [
{
"__systemRef": "hideSeriesFrom",
"matcher": {
"id": "byNames",
"options": {
"mode": "exclude",
"names": [
"Bound - vmselect-cachedir-vmselect-vmcluster-longterm-0"
],
"prefix": "All except:",
"readOnly": true
}
},
"properties": [
{
"id": "custom.hideFrom",
"value": {
"legend": false,
"tooltip": false,
"viz": true
}
}
]
}
]
},
"gridPos": {
"h": 9,
"w": 24,
"x": 0,
"y": 28
},
"id": 12,
"options": {
"legend": {
"calcs": [],
"displayMode": "table",
"placement": "right",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"editorMode": "code",
"expr": "kube_persistentvolumeclaim_status_phase{phase=\"Pending\"} == 1",
"legendFormat": "{{phase}} - {{persistentvolumeclaim}} ",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"editorMode": "code",
"expr": "kube_persistentvolumeclaim_status_phase{phase=\"Lost\"} == 1",
"hide": false,
"legendFormat": "{{phase}} - {{persistentvolumeclaim}}",
"range": true,
"refId": "B"
},
{
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"editorMode": "code",
"expr": "kube_persistentvolumeclaim_status_phase{phase=\"Bound\"} == 1",
"hide": false,
"legendFormat": "{{phase}} - {{persistentvolumeclaim}}",
"range": true,
"refId": "C"
}
],
"title": "Persistent Volume Claim Status by Phase",
"type": "timeseries"
}
],
"title": "kube-state",
"type": "row"
}
],
"refresh": "30s",
"schemaVersion": 38,
"style": "dark",
"tags": [],
"templating": {
"list": [
{
"current": {
"selected": false,
"text": "vm",
"value": "7de66667-64e8-4685-b5b4-2c3d573910bc"
},
"hide": 0,
"includeAll": false,
"label": "Prometheus",
"multi": false,
"name": "ds_prometheus",
"options": [],
"query": "prometheus",
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"type": "datasource"
},
{
"current": {
"selected": true,
"text": [
"All"
],
"value": [
"$__all"
]
},
"datasource": {
"type": "prometheus",
"uid": "$ds_prometheus"
},
"definition": "label_values(kubelet_volume_stats_capacity_bytes, namespace)",
"hide": 0,
"includeAll": true,
"multi": true,
"name": "namespace",
"options": [],
"query": {
"query": "label_values(kubelet_volume_stats_capacity_bytes, namespace)",
"refId": "StandardVariableQuery"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 1,
"type": "query"
}
]
},
"time": {
"from": "now-3h",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "Volumes",
"uid": "AxpMCRd4z",
"version": 260,
"weekStart": ""
}

112
hack/download-dashboards.sh Executable file
View File

@@ -0,0 +1,112 @@
#https://github.com/deckhouse/deckhouse/blob/main/modules/340-monitoring-kubernetes-control-plane/monitoring/grafana-dashboards/kubernetes-cluster/control-plane-status.json
base=https://github.com/deckhouse/deckhouse/raw/main/
dir="grafana-dashboards"
mkdir -p "$dir"
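# Prepend a ds_prometheus datasource variable to the dashboard's templating list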
add_ds_prometheus(){
jq '.templating.list |= [{"current":{"selected":false,"text":"default","value":"default"},"description":null,"error":null,"hide":0,"includeAll":false,"label":"datasource","multi":false,"name":"ds_prometheus","options":[],"query":"prometheus","refresh":1,"regex":"","skipUrlSync":false,"type":"datasource"}] + .'
}
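# Indent stdin by the given number of spaces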
indent() {
sed "s/^/$(head -c "$1" < /dev/zero | tr '\0' ' ')/"
}
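# Replace Deckhouse-specific interval variables and the hardcoded datasource UID with portable ones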
fix_d8() {
sed \
-e 's|$__interval_sx3|$__rate_interval|g' \
-e 's|$__interval_sx4|$__rate_interval|g' \
-e 's|P0D6E4079E36703EB|$ds_prometheus|g'
}
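# Move the nested "Overview" panel out of the "PVC Detailed" row and swap their vertical positions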
swap_pvc_overview() {
jq '(.panels[] | select(.title=="PVC Detailed") | .panels[] | select(.title=="Overview")) as $a | del(.panels[] | select(.title=="PVC Detailed").panels[] | select(.title=="Overview")) | ( (.panels[] | select(.title=="PVC Detailed"))) as $b | del( .panels[] | select(.title=="PVC Detailed")) | (.panels[.panels|length]=($a|.gridPos.y=$b.gridPos.y)) | (.panels[.panels|length]=($b|.gridPos.y=$a.gridPos.y))'
}
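# Drop the "How to find who sends requests to deprecated APIs" panel and shrink the remaining text panel to the right half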
deprectaed_remove_faq() {
jq 'del(.panels[] | select(.title == "How to find who sends requests to deprecated APIs"))| (.panels[]|select(.type == "text")).gridPos.x = 12 | (.panels[]|select(.type == "text")).gridPos.w = 12'
}
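# Fetch Deckhouse dashboards, sort them into per-topic directories, and apply the filters above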
while read url others; do
name="$(basename "$url" .json | tr _ -)"
case $url in
modules/042-kube-dns/*)
ddir=control-plane
;;
modules/402-ingress-nginx/*)
ddir=ingress
;;
modules/340-monitoring-kubernetes/*)
ddir=main
;;
modules/340-monitoring-kubernetes-control-plane/*)
ddir=control-plane
;;
esac
file="$dir/$ddir/$name.json"
echo "$file"
mkdir -p "$dir/$ddir"
curl -sSL "$base$url" -o "$file.tmp"
case "$name" in
"deprecated-resources")
filters="add_ds_prometheus | deprectaed_remove_faq | fix_d8"
;;
*)
filters="fix_d8"
;;
esac
cat "$file.tmp" | eval "$filters" > "$file"
done <<\EOT
modules/042-kube-dns/monitoring/grafana-dashboards/kubernetes-cluster/dns/dns-coredns.json
modules/402-ingress-nginx/monitoring/grafana-dashboards/kubernetes-cluster/controllers.json
modules/402-ingress-nginx/monitoring/grafana-dashboards/kubernetes-cluster/controller-detail.json
modules/402-ingress-nginx/monitoring/grafana-dashboards/ingress-nginx/namespace/namespace_detail.json
modules/402-ingress-nginx/monitoring/grafana-dashboards/ingress-nginx/namespace/namespaces.json
modules/402-ingress-nginx/monitoring/grafana-dashboards/ingress-nginx/vhost/vhost_detail.json
modules/402-ingress-nginx/monitoring/grafana-dashboards/ingress-nginx/vhost/vhosts.json
modules/340-monitoring-kubernetes-control-plane/monitoring/grafana-dashboards/kubernetes-cluster/control-plane-status.json
modules/340-monitoring-kubernetes-control-plane/monitoring/grafana-dashboards/kubernetes-cluster/kube-etcd3.json #TODO
modules/340-monitoring-kubernetes-control-plane/monitoring/grafana-dashboards/kubernetes-cluster/deprecated-resources.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//kubernetes-cluster/nodes/ntp.json #TODO
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//kubernetes-cluster/nodes/nodes.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//kubernetes-cluster/nodes/node.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//main/controller.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//main/pod.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//main/namespace/namespaces.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//main/namespace/namespace.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//main/capacity-planning/capacity-planning.json
EOT
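# Fetch upstream VictoriaMetrics and dotdc dashboards as-is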
while read url others; do
name="$(basename "$url" .json | tr _ -)"
case $url in
*VictoriaMetrics*)
ddir=victoria-metrics
;;
*/dotdc/*)
ddir=dotdc
;;
esac
file="$dir/$ddir/$name.json"
mkdir -p "$dir/$ddir"
echo "$file"
curl -sSL "$url" -o "$file"
done <<\EOT
https://raw.githubusercontent.com/VictoriaMetrics/VictoriaMetrics/master/dashboards/victoriametrics.json
https://raw.githubusercontent.com/VictoriaMetrics/VictoriaMetrics/master/dashboards/vmagent.json
https://raw.githubusercontent.com/VictoriaMetrics/VictoriaMetrics/master/dashboards/victoriametrics-cluster.json
https://raw.githubusercontent.com/VictoriaMetrics/VictoriaMetrics/master/dashboards/vmalert.json
https://raw.githubusercontent.com/VictoriaMetrics/VictoriaMetrics/master/dashboards/operator.json
https://raw.githubusercontent.com/VictoriaMetrics/VictoriaMetrics/master/dashboards/backupmanager.json
https://raw.githubusercontent.com/dotdc/grafana-dashboards-kubernetes/master/dashboards/k8s-system-coredns.json
https://raw.githubusercontent.com/dotdc/grafana-dashboards-kubernetes/master/dashboards/k8s-views-global.json
https://raw.githubusercontent.com/dotdc/grafana-dashboards-kubernetes/master/dashboards/k8s-views-namespaces.json
https://raw.githubusercontent.com/dotdc/grafana-dashboards-kubernetes/master/dashboards/k8s-views-pods.json
EOT

31
hack/gen_versions_map.sh Executable file
View File

@@ -0,0 +1,31 @@
#!/bin/sh
set -e
file=versions_map
charts=$(find . -mindepth 2 -maxdepth 2 -name Chart.yaml | awk 'sub("/Chart.yaml", "")')
# <chart> <version> <commit>
new_map=$(
for chart in $charts; do
awk '/^name:/ {chart=$2} /^version:/ {version=$2} END{printf "%s %s %s\n", chart, version, "HEAD"}' $chart/Chart.yaml
done
)
if [ ! -f "$file" ] || [ ! -s "$file" ]; then
echo "$new_map" > "$file"
exit 0
fi
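# Entries from the existing versions_map whose chart+version pair is no longer produced at HEAD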
miss_map=$(echo "$new_map" | awk 'NR==FNR { new_map[$1 " " $2] = $3; next } { if (!($1 " " $2 in new_map)) print $1, $2, $3}' - $file)
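# Pin historical entries still marked HEAD to the last commit before the version was bumped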
resolved_miss_map=$(
echo "$miss_map" | while read chart version commit; do
if [ "$commit" = HEAD ]; then
line=$(git show HEAD:"./$chart/Chart.yaml" | awk '/^version:/ {print NR; exit}')
change_commit=$(git --no-pager blame -L"$line",+1 HEAD -- "$chart/Chart.yaml" | awk '{print $1}')
commit=$(git describe --always "$change_commit~1")
fi
echo "$chart $version $commit"
done
)
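# Merge current and historical entries, sort by chart and version, drop blank lines, and write the map back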
printf "%s\n" "$new_map" "$resolved_miss_map" | sort -k1,1 -k2,2 -V | awk '$1' > "$file"

51
img/cozystack-logo.svg Normal file
View File

@@ -0,0 +1,51 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
width="521"
height="185"
fill="none"
version="1.1"
id="svg830"
sodipodi:docname="cozyl.svg"
inkscape:version="1.1.1 (c3084ef, 2021-09-22)"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<defs
id="defs834" />
<sodipodi:namedview
id="namedview832"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
showgrid="false"
inkscape:zoom="2.193858"
inkscape:cx="266.42563"
inkscape:cy="81.363517"
inkscape:window-width="1312"
inkscape:window-height="969"
inkscape:window-x="0"
inkscape:window-y="25"
inkscape:window-maximized="0"
inkscape:current-layer="svg830" />
<path
d="M472.993 120.863V57.41h7.38V86.21h.18l30.782-28.802h9.451L485.503 90.26l.09-3.96 37.173 34.562h-9.72l-32.493-29.882h-.18v29.882zM439.884 121.673c-6.24 0-11.58-1.32-16.02-3.96-4.442-2.64-7.862-6.39-10.262-11.25-2.34-4.861-3.51-10.652-3.51-17.372 0-6.72 1.17-12.48 3.51-17.281 2.4-4.86 5.82-8.61 10.261-11.251 4.44-2.64 9.78-3.96 16.021-3.96 4.38 0 8.43.69 12.151 2.07 3.72 1.38 6.84 3.39 9.36 6.03l-2.88 6.03c-2.76-2.58-5.64-4.44-8.64-5.58-2.94-1.2-6.21-1.8-9.81-1.8-7.142 0-12.602 2.25-16.382 6.75-3.78 4.5-5.67 10.831-5.67 18.992 0 8.16 1.89 14.52 5.67 19.081 3.78 4.5 9.24 6.75 16.381 6.75 3.6 0 6.87-.57 9.81-1.71 3.001-1.2 5.881-3.09 8.641-5.67l2.88 6.03c-2.52 2.58-5.64 4.591-9.36 6.031-3.72 1.38-7.77 2.07-12.15 2.07zM341.795 120.863l27.992-63.454h6.3l27.992 63.454h-7.65l-7.83-18.091 3.6 1.89h-38.703l3.69-1.89-7.74 18.091zm31.052-54.814-14.49 34.113-2.16-1.71h33.301l-1.98 1.71-14.49-34.113zM316.2 120.863V63.8h-23.043v-6.39h53.554v6.39H323.67v57.064z"
fill="#fff"
id="path824"
style="fill:#000000" />
<path
fill-rule="evenodd"
clip-rule="evenodd"
d="M236.645 57.33h50.827v6.353h-50.827zm0 57.18h50.827v6.353h-50.827zm50.827-28.59h-50.827v6.353h50.827z"
fill="#fff"
id="path826"
style="fill:#000000" />
<path
d="M199.863 120.863V88.011l1.62 5.13-25.922-35.732h8.641l20.431 28.262h-1.89l20.432-28.262h8.37L205.713 93.14l1.531-5.13v32.852zM126.991 120.863v-5.49l38.523-54.454v2.88h-38.523v-6.39h45.453v5.49l-38.522 54.364v-2.79h39.783v6.39zM89.464 121.673c-4.38 0-8.371-.75-11.971-2.25-3.6-1.56-6.66-3.75-9.18-6.57-2.521-2.82-4.471-6.24-5.851-10.261-1.32-4.02-1.98-8.52-1.98-13.501 0-5.04.66-9.54 1.98-13.501 1.38-4.02 3.33-7.41 5.85-10.17 2.52-2.82 5.55-4.981 9.09-6.481 3.601-1.56 7.621-2.34 12.062-2.34 4.5 0 8.52.75 12.06 2.25 3.6 1.5 6.66 3.66 9.181 6.48 2.58 2.82 4.53 6.24 5.85 10.261 1.38 3.96 2.07 8.43 2.07 13.41 0 5.041-.69 9.572-2.07 13.592-1.38 4.02-3.33 7.44-5.85 10.26-2.52 2.82-5.58 5.01-9.18 6.571-3.54 1.5-7.561 2.25-12.061 2.25zm0-6.57c4.56 0 8.4-1.02 11.52-3.06 3.18-2.04 5.61-5.01 7.29-8.911 1.681-3.9 2.521-8.58 2.521-14.041 0-5.52-.84-10.2-2.52-14.041-1.68-3.84-4.11-6.78-7.29-8.82-3.12-2.04-6.961-3.06-11.521-3.06-4.44 0-8.251 1.02-11.431 3.06-3.12 2.04-5.52 5.01-7.2 8.91-1.68 3.84-2.52 8.49-2.52 13.95 0 5.461.84 10.142 2.52 14.042 1.68 3.84 4.08 6.81 7.2 8.91 3.18 2.04 6.99 3.06 11.43 3.06zM31.559 121.673c-6.24 0-11.581-1.32-16.022-3.96-4.44-2.64-7.86-6.39-10.26-11.25-2.34-4.861-3.51-10.652-3.51-17.372 0-6.72 1.17-12.48 3.51-17.281 2.4-4.86 5.82-8.61 10.26-11.251 4.44-2.64 9.781-3.96 16.022-3.96 4.38 0 8.43.69 12.15 2.07 3.72 1.38 6.84 3.39 9.361 6.03l-2.88 6.03c-2.76-2.58-5.64-4.44-8.64-5.58-2.941-1.2-6.211-1.8-9.811-1.8-7.14 0-12.601 2.25-16.382 6.75-3.78 4.5-5.67 10.831-5.67 18.992 0 8.16 1.89 14.52 5.67 19.081 3.78 4.5 9.241 6.75 16.382 6.75 3.6 0 6.87-.57 9.81-1.71 3-1.2 5.88-3.09 8.64-5.67l2.881 6.03c-2.52 2.58-5.64 4.591-9.36 6.031-3.72 1.38-7.771 2.07-12.151 2.07z"
fill="#fff"
id="path828"
style="fill:#000000" />
</svg>


View File

@@ -0,0 +1,94 @@
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: v1
kind: Namespace
metadata:
name: cozy-system
labels:
pod-security.kubernetes.io/enforce: privileged
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: cozystack
namespace: cozy-system
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cozystack
subjects:
- kind: ServiceAccount
name: cozystack
namespace: cozy-system
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: v1
kind: Service
metadata:
name: cozystack
namespace: cozy-system
spec:
ports:
- name: http
port: 80
targetPort: 8123
selector:
app: cozystack
type: ClusterIP
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: cozystack
namespace: cozy-system
spec:
replicas: 1
selector:
matchLabels:
app: cozystack
strategy:
type: Recreate
template:
metadata:
labels:
app: cozystack
spec:
hostNetwork: true
serviceAccountName: cozystack
containers:
- name: cozystack
image: "ghcr.io/aenix-io/cozystack/installer:v0.0.1@sha256:d9edaf7fd53910c979575caf633875af2d16cc8611f79553cee8c18aade067b3"
env:
- name: KUBERNETES_SERVICE_HOST
value: localhost
- name: KUBERNETES_SERVICE_PORT
value: "7445"
- name: darkhttpd
image: "ghcr.io/aenix-io/cozystack/installer:v0.0.1@sha256:d9edaf7fd53910c979575caf633875af2d16cc8611f79553cee8c18aade067b3"
command:
- /usr/bin/darkhttpd
- /cozystack/assets
- --port
- "8123"
ports:
- name: http
containerPort: 8123
tolerations:
- key: "node.kubernetes.io/not-ready"
operator: "Exists"
effect: "NoSchedule"

20
packages/apps/Makefile Normal file
View File

@@ -0,0 +1,20 @@
OUT=../../_out/repos/apps
TMP=../../_out/repos/apps/historical
repo:
rm -rf "$(OUT)"
mkdir -p "$(OUT)"
awk '$$3 != "HEAD" {print "mkdir -p $(TMP)/" $$1 "-" $$2}' versions_map | sh -ex
awk '$$3 != "HEAD" {print "git archive " $$3 " " $$1 " | tar -xf- --strip-components=1 -C $(TMP)/" $$1 "-" $$2 }' versions_map | sh -ex
helm package -d "$(OUT)" $$(find . $(TMP) -mindepth 2 -maxdepth 2 -name Chart.yaml | awk 'sub("/Chart.yaml", "")' | sort -V)
cd "$(OUT)" && helm repo index .
rm -rf "$(TMP)"
fix-chartnames:
find . -name Chart.yaml -maxdepth 2 | awk -F/ '{print $$2}' | while read i; do sed -i "s/^name: .*/name: $$i/" "$$i/Chart.yaml"; done
gen-versions-map: fix-chartnames
../../hack/gen_versions_map.sh
check-version-map: gen-versions-map
git diff --exit-code -- versions_map

View File

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -0,0 +1,25 @@
apiVersion: v2
name: http-cache
description: Layer7 load balancer and caching service
icon: https://www.svgrepo.com/show/373924/nginx.svg
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"

View File

@@ -0,0 +1,35 @@
PUSH := 1
LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack
TAG := v0.0.1
image: image-nginx
image-nginx:
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/nginx-cache \
--provenance false \
--tag $(REGISTRY)/nginx-cache:$(TAG) \
--cache-from type=registry,ref=$(REGISTRY)/nginx-cache:$(TAG) \
--cache-to type=inline \
--metadata-file images/nginx-cache.json \
--push=$(PUSH) \
--load=$(LOAD)
echo "$(REGISTRY)/nginx-cache:$(TAG)" > images/nginx-cache.tag
update:
tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/chrislim2888/IP2Location-C-Library | awk -F'[/^]' 'END{print $$3}') && \
sed -i "/^ARG IP2LOCATION_C_VERSION=/ s/=.*/=$$tag/" images/nginx/Dockerfile
tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/ip2location/ip2proxy-c | awk -F'[/^]' 'END{print $$3}') && \
sed -i "/^ARG IP2PROXY_C_VERSION=/ s/=.*/=$$tag/" images/nginx/Dockerfile
tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/ip2location/ip2location-nginx | awk -F'[/^]' 'END{print $$3}') && \
sed -i "/^ARG IP2LOCATION_NGINX_VERSION=/ s/=.*/=$$tag/" images/nginx/Dockerfile
tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/ip2location/ip2proxy-nginx | awk -F'[/^]' 'END{print $$3}') && \
sed -i "/^ARG IP2PROXY_NGINX_VERSION=/ s/=.*/=$$tag/" images/nginx/Dockerfile
tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/nginx/nginx | awk -F'[/^]' 'END{print $$3}' | awk -F- '{print $$2}') && \
sed -i "/^ARG NGINX_VERSION=/ s/=.*/=$$tag/" images/nginx/Dockerfile
tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/nginx-modules/ngx_cache_purge | awk -F'[/^]' 'END{print $$3}') && \
sed -i "/^ARG NGINX_CACHE_PURGE_VERSION=/ s/=.*/=$$tag/" images/nginx/Dockerfile
tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/vozlt/nginx-module-vts | awk -F'[/^]' 'END{print $$3}' | sed 's/^v//') && \
sed -i "/^ARG NGINX_VTS_VERSION=/ s/=.*/=$$tag/" images/nginx/Dockerfile
tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/51Degrees/Device-Detection | awk -F'[/^]' 'END{print $$3}' | sed 's/^v//') && \
sed -i "/^ARG FIFTYONEDEGREES_NGINX_VERSION=/ s/=.*/=$$tag/" images/nginx/Dockerfile

View File

@@ -0,0 +1,57 @@
# Managed Nginx Caching Service
The Nginx Caching Service is designed to optimize web traffic and enhance web application performance. This service combines custom-built Nginx instances with HAProxy for efficient caching and load balancing.
## Deployment Information
The Nginx instances include the following modules and features:
- VTS module for statistics
- Integration with ip2location
- Integration with ip2proxy
- Support for 51Degrees
- Cache purge functionality
HAProxy plays a vital role in this setup by directing incoming traffic to specific Nginx instances based on a consistent hash calculated from the URL. Each Nginx instance includes a Persistent Volume Claim (PVC) for storing cached content, ensuring fast and reliable access to frequently used resources.
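For reference, cached objects can be invalidated through the purge endpoint and per-URL statistics are exposed by the VTS module. A minimal sketch (service names follow the chart templates; the host and path are placeholders, adjust them to your release and cluster):
```
# invalidate a cached object via ngx_cache_purge (response is JSON)
curl -X PURGE -H 'Host: example.org' http://<release>-haproxy/assets/app.js
# scrape Prometheus-format statistics from one nginx instance
curl http://<release>-nginx-cache-0:10253/metrics
```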
## Deployment Details
The deployment architecture is illustrated in the diagram below:
```
┌─────────┐
│ metallb │ arp announce
└────┬────┘
┌───────▼───────────────────────────┐
│ kubernetes service │ node
│ (externalTrafficPolicy: Local) │ level
└──────────┬────────────────────────┘
┌────▼────┐ ┌─────────┐
│ haproxy │ │ haproxy │ loadbalancer
│ (active)│ │ (backup)│ layer
└────┬────┘ └─────────┘
│ balance uri whole
│ hash-type consistent
┌──────┴──────┬──────────────┐
┌───▼───┐ ┌───▼───┐ ┌───▼───┐ caching
│ nginx │ │ nginx │ │ nginx │ layer
└───┬───┘ └───┬───┘ └───┬───┘
│ │ │
┌────┴───────┬─────┴────┬─────────┴──┐
│ │ │ │
┌───▼────┐ ┌────▼───┐ ┌───▼────┐ ┌────▼───┐
│ origin │ │ origin │ │ origin │ │ origin │
└────────┘ └────────┘ └────────┘ └────────┘
```
## Known issues
The VTS module reports an incorrect upstream response time
- https://github.com/vozlt/nginx-module-vts/issues/198

View File

@@ -0,0 +1,14 @@
{
"containerimage.config.digest": "sha256:d68641167af14b246e0332c14a7a9d9f6c0a4f813881db2de5fc53816bd35786",
"containerimage.descriptor": {
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"digest": "sha256:241da53aba9b121d5d1829744a9ba31036cd5e5ffd6cf584da8113ddd79764f2",
"size": 1093,
"platform": {
"architecture": "amd64",
"os": "linux"
}
},
"containerimage.digest": "sha256:241da53aba9b121d5d1829744a9ba31036cd5e5ffd6cf584da8113ddd79764f2",
"image.name": "ghcr.io/aenix-io/cozystack/nginx-cache:v0.0.1"
}

View File

@@ -0,0 +1 @@
ghcr.io/aenix-io/cozystack/nginx-cache:v0.0.1

View File

@@ -0,0 +1,178 @@
FROM ubuntu:22.04 as stage
ARG NGINX_VERSION=1.25.3
ARG IP2LOCATION_C_VERSION=8.6.1
ARG IP2PROXY_C_VERSION=4.1.2
ARG IP2LOCATION_NGINX_VERSION=8.6.0
ARG IP2PROXY_NGINX_VERSION=8.1.1
ARG FIFTYONEDEGREES_NGINX_VERSION=3.2.21.1
ARG NGINX_CACHE_PURGE_VERSION=2.5.3
ARG NGINX_VTS_VERSION=0.2.2
# Install required packages for development
RUN apt-get update -q \
&& apt-get install -yq \
unzip \
autoconf \
build-essential \
libtool \
libpcre3 \
libpcre3-dev \
libssl-dev \
libgd-dev \
zlib1g-dev \
gcc \
make \
git \
wget \
curl \
checkinstall
# Download sources
RUN mkdir ip2location \
&& curl -sS -L https://github.com/chrislim2888/IP2Location-C-Library/archive/refs/tags/${IP2LOCATION_C_VERSION}.tar.gz \
| tar -C ip2location -xzvf- --strip=1
RUN mkdir ip2proxy \
&& curl -sS -L https://github.com/ip2location/ip2proxy-c/archive/refs/tags/${IP2PROXY_C_VERSION}.tar.gz \
| tar -C ip2proxy -xzvf- --strip=1
RUN mkdir ip2mod-location \
&& curl -sS -L https://github.com/ip2location/ip2location-nginx/archive/refs/tags/${IP2LOCATION_NGINX_VERSION}.tar.gz \
| tar -C ip2mod-location -xzvf- --strip=1
RUN mkdir ip2mod-proxy \
&& curl -sS -L https://github.com/ip2location/ip2proxy-nginx/archive/refs/tags/${IP2PROXY_NGINX_VERSION}.tar.gz \
| tar -C ip2mod-proxy -xzvf- --strip=1
RUN mkdir cache-purge-module \
&& curl -sS -L https://github.com/nginx-modules/ngx_cache_purge/archive/refs/tags/${NGINX_CACHE_PURGE_VERSION}.tar.gz \
| tar -C cache-purge-module -xzvf- --strip=1
RUN mkdir nginx-module-vts \
&& curl -sS -L https://github.com/vozlt/nginx-module-vts/archive/refs/tags/v${NGINX_VTS_VERSION}.tar.gz \
| tar -C nginx-module-vts -xzvf- --strip=1
RUN mkdir nginx \
&& curl -sS -L https://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz \
| tar -C nginx -xzvf- --strip=1
# Compile C Library for IP2Location module
WORKDIR /ip2location
RUN autoreconf -i -v --force
RUN ./configure
RUN make
RUN checkinstall \
-D \
--install=no \
--default \
--pkgname=ip2location-c \
--pkgversion=${IP2LOCATION_C_VERSION} \
--pkgarch=amd64 \
--pkggroup=lib \
--pkgsource="https://github.com/chrislim2888/IP2Location-C-Library" \
--maintainer="Eduard Generalov <eduard@generalov.net>" \
--requires=librtmp1 \
--autodoinst=no \
--deldoc=yes \
--deldesc=yes \
--delspec=yes \
--backup=no \
make install
WORKDIR /ip2location/data
RUN perl ip-country.pl
WORKDIR /ip2location/test
RUN ./test-IP2Location
# Compile C Library for IP2Proxy module
WORKDIR /ip2proxy
RUN autoreconf -i -v --force
RUN ./configure
RUN make
RUN checkinstall \
-D \
--install=no \
--default \
--pkgname=ip2proxy-c \
--pkgversion=${IP2PROXY_C_VERSION} \
--pkgarch=amd64 \
--pkggroup=lib \
--pkgsource="https://github.com/ip2location/ip2proxy-c" \
--maintainer="Eduard Generalov <eduard@generalov.net>" \
--requires=librtmp1 \
--autodoinst=no \
--deldoc=yes \
--deldesc=yes \
--delspec=yes \
--backup=no \
make install
# Compile Nginx
WORKDIR /nginx
RUN ./configure \
--with-compat \
--prefix=/usr/share/nginx \
--sbin-path=/usr/bin/nginx \
--with-http_ssl_module \
--conf-path=/etc/nginx/nginx.conf \
--http-log-path=/var/log/nginx/access.log \
--error-log-path=/var/log/nginx/error.log \
--lock-path=/var/lock/nginx.lock \
--pid-path=/run/nginx.pid \
--modules-path=/usr/lib/nginx/modules \
--http-client-body-temp-path=/var/lib/nginx/body \
--http-fastcgi-temp-path=/var/lib/nginx/fastcgi \
--http-proxy-temp-path=/var/lib/nginx/proxy \
--http-scgi-temp-path=/var/lib/nginx/scgi \
--http-uwsgi-temp-path=/var/lib/nginx/uwsgi \
--with-http_realip_module \
--with-http_stub_status_module \
--with-stream \
--add-module=../nginx-module-vts \
--add-module=../cache-purge-module \
--add-dynamic-module=../ip2mod-proxy \
--add-dynamic-module=../ip2mod-location \
--with-compat
RUN make modules
RUN make -j 8
RUN checkinstall \
-D \
--install=no \
--default \
--pkgname=nginx \
--pkgversion=${NGINX_VERSION} \
--pkgarch=amd64 \
--pkggroup=web \
--provides=nginx \
--requires=ip2location-c,ip2proxy-c,libssl3,libc-bin,libc6,libzstd1,libpcre++0v5,libpcre16-3,libpcre2-8-0,libpcre3,libpcre32-3,libpcrecpp0v5,libmaxminddb0 \
--autodoinst=no \
--deldoc=yes \
--deldesc=yes \
--delspec=yes \
--backup=no \
make install
RUN mkdir /packages \
&& mv /*/*.deb /packages/
FROM ubuntu:22.04
COPY --from=stage /packages /packages
COPY nginx-reloader.sh /usr/bin/nginx-reloader.sh
RUN set -x \
&& groupadd --system --gid 101 nginx \
&& useradd --system --gid nginx --no-create-home --home /nonexistent --comment "nginx user" --shell /bin/false --uid 101 nginx \
&& apt update \
&& apt-get install --no-install-recommends --no-install-suggests -y gnupg1 ca-certificates inotify-tools \
&& apt -y install /packages/*.deb \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p /var/lib/nginx /var/log/nginx \
&& ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log \
&& chown -R nginx: /var/lib/nginx /var/log/nginx
ENTRYPOINT ["/usr/bin/nginx", "-g", "daemon off;"]

View File

@@ -0,0 +1,13 @@
#!/bin/sh
set -e
cleanup() {
echo "Received termination signal. Exiting..."
exit 0
}
trap cleanup INT
while true; do
inotifywait -s -e close_write,attrib --include 'reload' /data >/dev/null
nginx -s reload
done

View File

@@ -0,0 +1,106 @@
{{- define "backendoptions" }}
{{- if eq . "http" }}
mode http
option forwardfor
balance uri whole
hash-type consistent
retry-on conn-failure 503
retries 2
option redispatch 1
default-server observe layer7 error-limit 10 on-error mark-down check
{{- else if eq . "tcp" }}
mode tcp
balance roundrobin
default-server observe layer4 error-limit 10 on-error mark-down check
{{- else if eq . "tcp-with-proxy" }}
mode tcp
balance roundrobin
default-server observe layer4 error-limit 10 on-error mark-down check send-proxy-v2
{{- else }}
{{- fail (printf "mode %s is not supported" .) }}
{{- end }}
{{- end }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-haproxy
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
data:
haproxy.cfg: |
defaults
mode tcp
option dontlognull
timeout http-request 10s
timeout queue 20s
timeout connect 5s
timeout client 5m
timeout server 5m
timeout tunnel 5m
timeout http-keep-alive 10s
timeout check 10s
frontend http
bind :::8080 v4v6
mode http
{{- if $.Values.whitelistHTTP }}
{{- with $.Values.whitelist }}
acl whitelist src{{ range . }} {{ . }}{{ end }}
{{- end }}
acl all src 0.0.0.0
tcp-request content accept if whitelist
tcp-request content reject
{{- end }}
tcp-request content set-dst-port int(80)
# match real IP from cloudflare
acl from_cf src -f /usr/local/etc/haproxy/CF_ips.lst
acl cf_ip_hdr req.hdr(CF-Connecting-IP) -m found
http-request set-header X-Forwarded-For %[req.hdr(CF-Connecting-IP)] if from_cf cf_ip_hdr
# overwrite real IP header from anywhere else
http-request set-header X-Forwarded-For %[src] if !from_cf
default_backend http
backend http
mode http
balance uri whole
hash-type consistent
retry-on conn-failure 503
retries 2
option redispatch 1
default-server observe layer7 error-limit 10 on-error mark-down
{{- range $i, $e := until (int $.Values.replicas) }}
server cache{{ $i }} {{ $.Release.Name }}-nginx-cache-{{ $i }}:80 check
{{- end }}
{{- range $i, $e := $.Values.endpoints }}
server origin{{ $i }} {{ $e }} backup
{{- end }}
# https://developers.cloudflare.com/support/troubleshooting/restoring-visitor-ips/restoring-original-visitor-ips/
CF_ips.lst: |
173.245.48.0/20
103.21.244.0/22
103.22.200.0/22
103.31.4.0/22
141.101.64.0/18
108.162.192.0/18
190.93.240.0/20
188.114.96.0/20
197.234.240.0/22
198.41.128.0/17
162.158.0.0/15
104.16.0.0/13
104.24.0.0/14
172.64.0.0/13
131.0.72.0/22
2400:cb00::/32
2606:4700::/32
2803:f800::/32
2405:b500::/32
2405:8100::/32
2a06:98c0::/29
2c0f:f248::/32

View File

@@ -0,0 +1,45 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}-haproxy
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: 2
selector:
matchLabels:
app: {{ .Release.Name }}-haproxy
template:
metadata:
labels:
app: {{ .Release.Name }}-haproxy
annotations:
checksum/config: {{ include (print $.Template.BasePath "/haproxy/configmap.yaml") . | sha256sum }}
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- {{ .Release.Name }}-haproxy
topologyKey: kubernetes.io/hostname
containers:
- image: haproxy:latest
name: haproxy
ports:
- containerPort: 8080
name: http
volumeMounts:
- mountPath: /usr/local/etc/haproxy
name: config
volumes:
- configMap:
name: {{ .Release.Name }}-haproxy
name: config

View File

@@ -0,0 +1,21 @@
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}-haproxy
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
type: {{ ternary "LoadBalancer" "ClusterIP" .Values.external }}
{{- if .Values.external }}
externalTrafficPolicy: Local
allocateLoadBalancerNodePorts: false
{{- end }}
selector:
app: {{ .Release.Name }}-haproxy
ports:
- name: http
protocol: TCP
port: 80
targetPort: http

View File

@@ -0,0 +1,162 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ $.Release.Name }}-nginx-cache
labels:
app.kubernetes.io/instance: {{ $.Release.Name }}
app.kubernetes.io/managed-by: {{ $.Release.Service }}
data:
nginx.conf: |
user nginx;
worker_processes 2;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
#load_module /usr/lib/nginx/modules/ngx_http_ip2location_module.so;
#load_module /usr/lib/nginx/modules/ngx_http_ip2proxy_module.so;
events {
use epoll;
multi_accept on;
worker_connections 10240;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
vhost_traffic_status_zone;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
keepalive_timeout 65;
gzip on;
proxy_cache_path /data/cache levels=1:2 keys_zone=mycache:400m max_size=100g
inactive=30d use_temp_path=off;
#ip2location_database /data/dbs/ip2location.bin;
#ip2location_proxy_recursive on;
#ip2location_proxy 10.0.0.0/8;
#ip2proxy_database /data/dbs/ip2proxy.bin;
#ip2proxy_proxy_recursive on;
#ip2proxy_proxy 10.0.0.0/8;
server {
listen *:10253;
server_name _;
vhost_traffic_status_bypass_limit on;
vhost_traffic_status_bypass_stats on;
location /health {
access_log off;
add_header 'Content-Type' 'text/plain';
return 200 "healthy\n";
}
location /metrics {
vhost_traffic_status_display;
vhost_traffic_status_display_format prometheus;
}
}
upstream origin_servers {
{{- range $num, $ep := $.Values.endpoints }}
server {{ $ep }};
{{- end }}
}
# URL shortener used as a VTS filter key:
# Example: / --> /
# Example: /a --> /a
# Example: /a/ --> /a
# Example: /a/b --> /a/*
map $uri $shorten_url {
~^/$ /;
~^/([^/]+)$ /$1;
~^/([^/]+)/$ /$1;
~^/([^/]+)/.*$ /$1/*;
}
server {
listen *:80;
server_name _;
vhost_traffic_status_filter_by_host on;
#vhost_traffic_status_filter_by_set_key $host country::$ip2location_country_short;
vhost_traffic_status_filter_by_set_key $shorten_url url::$host;
proxy_cache mycache;
proxy_cache_revalidate on;
proxy_cache_lock on;
proxy_cache_key $scheme$http_host$request_uri;
proxy_cache_purge PURGE from all;
cache_purge_response_type json;
proxy_cache_valid 200 1h;
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
proxy_cache_background_update on;
proxy_connect_timeout 400ms;
proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
location / {
proxy_set_header Host $http_host;
## debug
add_header X-Cache-Status $upstream_cache_status;
#add_header X-Cache-Node $hostname;
#add_header X-Cache-Key $scheme$http_host$request_uri;
proxy_set_header X-Real-IP $remote_addr;
real_ip_header $real_ip_header;
real_ip_recursive on;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#proxy_set_header X-Anonymous-Type $ip2proxy_proxy_type;
#proxy_set_header X-Country $ip2location_country_short;
#proxy_set_header X-Country-Code $ip2location_country_short;
#proxy_set_header X-Country-Name $ip2location_country_long;
#proxy_set_header X-GeoIP-Region $ip2location_region;
#proxy_set_header X-GeoIP-City $ip2location_city;
#proxy_set_header X-Geoip-Country $ip2location_country_short;
#proxy_set_header X-Geoip-Latitude $ip2location_latitude;
#proxy_set_header X-Geoip-Longitude $ip2location_longitude;
#proxy_set_header X-GeoIP-ISP $ip2location_isp;
##proxy_set_header X-GeoIP-Postal-Code $ip2location_zipcode;
##proxy_set_header X-Geoip-Timezone $ip2location_timezone;
##proxy_set_header X-Geoip-Asn $ip2location_asn;
proxy_hide_header Pragma;
proxy_hide_header Expires;
# to backends
proxy_pass http://origin_servers;
proxy_buffering on;
}
}
}

View File

@@ -0,0 +1,140 @@
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: {{ $.Release.Name }}-nginx-cache
labels:
app.kubernetes.io/instance: {{ $.Release.Name }}
app.kubernetes.io/managed-by: {{ $.Release.Service }}
spec:
maxUnavailable: 1
selector:
matchLabels:
app: {{ $.Release.Name }}-nginx-cache
{{- range $i := until 3 }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ $.Release.Name }}-nginx-cache-{{ $i }}
labels:
app.kubernetes.io/instance: {{ $.Release.Name }}
app.kubernetes.io/managed-by: {{ $.Release.Service }}
spec:
selector:
matchLabels:
app: {{ $.Release.Name }}-nginx-cache
instance: "{{ $i }}"
template:
metadata:
labels:
app: {{ $.Release.Name }}-nginx-cache
instance: "{{ $i }}"
annotations:
checksum/config: {{ include (print $.Template.BasePath "/nginx/configmap.yaml") $ | sha256sum }}
spec:
imagePullSecrets:
- name: {{ $.Release.Name }}-regsecret
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- {{ $.Release.Name }}-nginx-cache
- key: instance
operator: NotIn
values:
- "{{ $i }}"
topologyKey: kubernetes.io/hostname
shareProcessNamespace: true
containers:
- name: nginx
image: "{{ $.Files.Get "images/nginx-cache.tag" | trim }}@{{ index ($.Files.Get "images/nginx-cache.json" | fromJson) "containerimage.digest" }}"
readinessProbe:
httpGet:
path: /health
port: metrics
initialDelaySeconds: 5
periodSeconds: 5
livenessProbe:
httpGet:
path: /health
port: metrics
failureThreshold: 1
periodSeconds: 10
ports:
- containerPort: 80
name: http
- containerPort: 8087
name: cache
- containerPort: 10253
name: metrics
volumeMounts:
- mountPath: /etc/nginx/nginx.conf
name: config
subPath: nginx.conf
- mountPath: /data
name: data
- mountPath: /run
name: run
- name: reloader
image: "{{ $.Files.Get "images/nginx-cache.tag" | trim }}@{{ index ($.Files.Get "images/nginx-cache.json" | fromJson) "containerimage.digest" }}"
command: ["/usr/bin/nginx-reloader.sh"]
#command: ["sleep", "infinity"]
volumeMounts:
- mountPath: /etc/nginx/nginx.conf
name: config
subPath: nginx.conf
- mountPath: /data
name: data
- mountPath: /run
name: run
volumes:
- name: config
configMap:
name: {{ $.Release.Name }}-nginx-cache
- name: data
persistentVolumeClaim:
claimName: {{ $.Release.Name }}-nginx-cache-{{ $i }}
- name: run
emptyDir: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ $.Release.Name }}-nginx-cache-{{ $i }}
labels:
app.kubernetes.io/instance: {{ $.Release.Name }}
app.kubernetes.io/managed-by: {{ $.Release.Service }}
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: "{{ $.Values.size }}"
---
apiVersion: v1
kind: Service
metadata:
name: {{ $.Release.Name }}-nginx-cache-{{ $i }}
labels:
app: {{ $.Release.Name }}-nginx-cache
app.kubernetes.io/instance: {{ $.Release.Name }}
app.kubernetes.io/managed-by: {{ $.Release.Service }}
spec:
type: ClusterIP
selector:
app: {{ $.Release.Name }}-nginx-cache
instance: "{{ $i }}"
ports:
- name: http
protocol: TCP
port: 80
targetPort: http
- name: metrics
protocol: TCP
port: 10253
targetPort: metrics
{{- end }}

View File

@@ -0,0 +1,26 @@
---
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMServiceScrape
metadata:
name: nginx-cache
spec:
jobLabel: jobLabel
namespaceSelector:
matchNames:
- infra-nginx-cache
endpoints:
- path: /metrics
port: metrics
honorLabels: true
relabelConfigs:
- replacement: nginx-cache
targetLabel: job
- sourceLabels: [__meta_kubernetes_service_name]
targetLabel: instance
- sourceLabels: [__meta_kubernetes_pod_node_name]
targetLabel: node
- targetLabel: tier
replacement: cluster
selector:
matchLabels:
app: {{ $.Release.Name }}-nginx-cache

View File

@@ -0,0 +1,9 @@
external: false
size: 10Gi
endpoints:
- 10.100.3.1:80
- 10.100.3.11:80
- 10.100.3.2:80
- 10.100.3.12:80
- 10.100.3.3:80
- 10.100.3.13:80

View File

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -0,0 +1,25 @@
apiVersion: v2
name: kubernetes
description: Managed Kubernetes service
icon: https://upload.wikimedia.org/wikipedia/commons/thumb/3/39/Kubernetes_logo_without_workmark.svg/723px-Kubernetes_logo_without_workmark.svg.png
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"

View File

@@ -0,0 +1,18 @@
PUSH := 1
LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack
TAG := v0.0.1
UBUNTU_CONTAINER_DISK_TAG = v1.29.1
image: image-ubuntu-container-disk
image-ubuntu-container-disk:
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/ubuntu-container-disk \
--provenance false \
--tag $(REGISTRY)/ubuntu-container-disk:$(TAG)-$(UBUNTU_CONTAINER_DISK_TAG) \
--cache-from type=registry,ref=$(REGISTRY)/ubuntu-container-disk:$(UBUNTU_CONTAINER_DISK_TAG) \
--cache-to type=inline \
--metadata-file images/ubuntu-container-disk.json \
--push=$(PUSH) \
--load=$(LOAD)
echo "$(REGISTRY)/ubuntu-container-disk:$(UBUNTU_CONTAINER_DISK_TAG)" > images/ubuntu-container-disk.tag

View File

@@ -0,0 +1,28 @@
# Managed Kubernetes Service
## Overview
The Managed Kubernetes Service offers a streamlined solution for efficiently managing server workloads. Kubernetes has emerged as the industry standard, providing a unified and accessible API, primarily utilizing YAML for configuration. This means that teams can easily understand and work with Kubernetes, streamlining infrastructure management.
Kubernetes leverages robust software design patterns, enabling continuous recovery in any scenario through the reconciliation loop. It also scales seamlessly across a multitude of servers, avoiding the challenges posed by the complex and outdated APIs found in traditional virtualization platforms. This managed service eliminates the need to develop custom solutions or modify source code, saving valuable time and effort.
## Deployment Details
The managed Kubernetes service deploys a standard Kubernetes cluster utilizing the Cluster API, Kamaji as the control-plane provider, and the KubeVirt infrastructure provider. This ensures a consistent and reliable setup for workloads.
Within this cluster, users can take advantage of LoadBalancer services and easily provision physical volumes as needed. The control-plane operates within containers, while the worker nodes are deployed as virtual machines, all seamlessly managed by the application.
- Docs: https://github.com/clastix/kamaji
- Docs: https://cluster-api.sigs.k8s.io/
- GitHub: https://github.com/clastix/kamaji
- GitHub: https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt
- GitHub: https://github.com/kubevirt/csi-driver
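For a quick status check from the management cluster, the Cluster API objects created by this chart can be listed directly (a sketch; it assumes kubectl access to the tenant namespace and the CRDs referenced above, with `<namespace>` as a placeholder):
```
kubectl -n <namespace> get cluster,kamajicontrolplane,machinedeployment,machines
kubectl -n <namespace> get virtualmachineinstances
```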
## How-Tos
How to access the deployed cluster:
```
kubectl get secret -n <namespace> kubernetes-<clusterName>-admin-kubeconfig -o go-template='{{ printf "%s\n" (index .data "super-admin.conf" | base64decode) }}' > test
```
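Once saved, the kubeconfig file (here `test`, as produced by the command above) can be used directly against the tenant cluster, for example:
```
export KUBECONFIG=$PWD/test
kubectl get nodes -o wide
kubectl get pods -A
```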

View File

@@ -0,0 +1,4 @@
{
"containerimage.config.digest": "sha256:e982cfa2320d3139ed311ae44bcc5ea18db7e4e76d2746e0af04c516288ff0f1",
"containerimage.digest": "sha256:34f6aba5b5a2afbb46bbb891ef4ddc0855c2ffe4f9e5a99e8e553286ddd2c070"
}

View File

@@ -0,0 +1 @@
ghcr.io/aenix-io/cozystack/ubuntu-container-disk:v1.29.1

View File

@@ -0,0 +1,51 @@
FROM ubuntu:22.04 as guestfish
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
&& apt-get -y install \
libguestfs-tools \
linux-image-generic \
make \
bash-completion \
&& apt-get clean
WORKDIR /build
FROM guestfish as builder
RUN wget -O image.img https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
RUN qemu-img resize image.img 5G \
&& eval "$(guestfish --listen --network)" \
&& guestfish --remote add-drive image.img \
&& guestfish --remote run \
&& guestfish --remote mount /dev/sda1 / \
&& guestfish --remote command "growpart /dev/sda 1 --verbose" \
&& guestfish --remote command "resize2fs /dev/sda1" \
# docker repo
&& guestfish --remote sh "curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg" \
&& guestfish --remote sh 'echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list' \
# kubernetes repo
&& guestfish --remote sh "curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg" \
&& guestfish --remote sh "echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list" \
# install containerd
&& guestfish --remote command "apt-get update -y" \
&& guestfish --remote command "apt-get install -y containerd.io" \
# configure containerd
&& guestfish --remote command "mkdir -p /etc/containerd" \
&& guestfish --remote sh "containerd config default | tee /etc/containerd/config.toml" \
&& guestfish --remote command "sed -i '/SystemdCgroup/ s/=.*/= true/' /etc/containerd/config.toml" \
# install kubernetes
&& guestfish --remote command "apt-get install -y kubelet kubeadm" \
# clean apt cache
&& guestfish --remote sh 'apt-get clean && rm -rf /var/lib/apt/lists/*' \
# write system configuration
&& guestfish --remote sh 'printf "%s\n" net.bridge.bridge-nf-call-iptables=1 net.bridge.bridge-nf-call-ip6tables=1 net.ipv4.ip_forward=1 net.ipv6.conf.all.forwarding=1 net.ipv6.conf.all.disable_ipv6=0 net.ipv4.tcp_congestion_control=bbr vm.overcommit_memory=1 kernel.panic=10 kernel.panic_on_oops=1 fs.inotify.max_user_instances=8192 fs.inotify.max_user_watches=524288 | tee > /etc/sysctl.d/kubernetes.conf' \
&& guestfish --remote sh 'printf "%s\n" overlay br_netfilter | tee /etc/modules-load.d/kubernetes.conf' \
&& guestfish --remote sh "rm -f /etc/resolv.conf && ln -s ../run/systemd/resolve/stub-resolv.conf /etc/resolv.conf" \
# umount all and exit
&& guestfish --remote umount-all \
&& guestfish --remote exit
FROM scratch
COPY --from=builder /build/image.img /disk/image.qcow2

View File

@@ -0,0 +1,3 @@
To get kubeconfig for this cluster run:
kubectl get secret -n {{ .Release.Namespace }} {{ .Release.Name }}-admin-kubeconfig -o go-template='{{`{{ printf "%s\n" (index .data "super-admin.conf" | base64decode) }}`}}'

View File

@@ -0,0 +1,51 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "kubernetes.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "kubernetes.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "kubernetes.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "kubernetes.labels" -}}
helm.sh/chart: {{ include "kubernetes.chart" . }}
{{ include "kubernetes.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "kubernetes.selectorLabels" -}}
app.kubernetes.io/name: {{ include "kubernetes.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

View File

@@ -0,0 +1,10 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-cloud-config
data:
cloud-config: |
loadBalancer:
creationPollInterval: 5
creationPollTimeout: 60
namespace: {{ .Release.Namespace }}

View File

@@ -0,0 +1,86 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}-cluster-autoscaler
labels:
app: {{ .Release.Name }}-cluster-autoscaler
spec:
selector:
matchLabels:
app: {{ .Release.Name }}-cluster-autoscaler
replicas: 1
template:
metadata:
labels:
app: {{ .Release.Name }}-cluster-autoscaler
spec:
containers:
- image: ghcr.io/kvaps/test:cluster-autoscaller
name: cluster-autoscaler
command:
- /cluster-autoscaler
args:
- --cloud-provider=clusterapi
- --kubeconfig=/etc/kubernetes/kubeconfig/super-admin.svc
- --clusterapi-cloud-config-authoritative
- --node-group-auto-discovery=clusterapi:namespace={{ .Release.Namespace }},clusterName={{ .Release.Name }}
volumeMounts:
- mountPath: /etc/kubernetes/kubeconfig
name: kubeconfig
readOnly: true
volumes:
- configMap:
name: {{ .Release.Name }}-cloud-config
name: cloud-config
- secret:
secretName: {{ .Release.Name }}-admin-kubeconfig
name: kubeconfig
serviceAccountName: {{ .Release.Name }}-cluster-autoscaler
terminationGracePeriodSeconds: 10
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-cluster-autoscaler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ .Release.Name }}-cluster-autoscaler
subjects:
- kind: ServiceAccount
name: {{ .Release.Name }}-cluster-autoscaler
namespace: {{ .Release.Namespace }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Release.Name }}-cluster-autoscaler
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-cluster-autoscaler
rules:
- apiGroups:
- cluster.x-k8s.io
resources:
- machinedeployments
- machinedeployments/scale
- machines
- machinesets
- machinepools
verbs:
- get
- list
- update
- watch
- apiGroups:
- infrastructure.cluster.x-k8s.io
resources:
- kubevirtmachinetemplates
verbs:
- get
- list
- update
- watch

View File

@@ -0,0 +1,150 @@
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $etcd := index $myNS.metadata.annotations "namespace.cozystack.io/etcd" }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
spec:
clusterNetwork:
pods:
cidrBlocks:
- 10.243.0.0/16
services:
cidrBlocks:
- 10.95.0.0/16
controlPlaneRef:
namespace: {{ .Release.Namespace }}
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlane
name: {{ .Release.Name }}
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtCluster
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlane
metadata:
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
labels:
cluster.x-k8s.io/role: control-plane
annotations:
kamaji.clastix.io/kubeconfig-secret-key: "super-admin.svc"
spec:
dataStoreName: "{{ $etcd }}"
addons:
coreDNS: {}
konnectivity: {}
kubelet:
cgroupfs: systemd
preferredAddressTypes:
- InternalIP
- ExternalIP
network:
serviceType: ClusterIP
ingress:
extraAnnotations:
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
hostname: {{ .Values.host | default (printf "%s.%s" .Release.Name $host) }}:443
className: "{{ $ingress }}"
deployment:
replicas: 2
version: 1.29.0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtCluster
metadata:
annotations:
cluster.x-k8s.io/managed-by: kamaji
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
name: {{ .Release.Name }}-md-0
namespace: {{ .Release.Namespace }}
spec:
template:
spec:
joinConfiguration:
nodeRegistration:
kubeletExtraArgs: {}
discovery:
bootstrapToken:
apiServerEndpoint: {{ .Release.Name }}.{{ .Release.Namespace }}.svc:6443
initConfiguration:
skipPhases:
- addon/kube-proxy
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtMachineTemplate
metadata:
name: {{ .Release.Name }}-md-0
namespace: {{ .Release.Namespace }}
spec:
template:
spec:
virtualMachineBootstrapCheck:
checkStrategy: ssh
virtualMachineTemplate:
metadata:
namespace: {{ .Release.Namespace }}
spec:
runStrategy: Always
template:
spec:
domain:
cpu:
threads: 1
cores: 2
sockets: 1
devices:
disks:
- disk:
bus: virtio
name: containervolume
networkInterfaceMultiqueue: true
memory:
guest: 1024Mi
evictionStrategy: External
volumes:
- containerDisk:
image: "{{ $.Files.Get "images/ubuntu-container-disk.tag" | trim }}@{{ index ($.Files.Get "images/ubuntu-container-disk.json" | fromJson) "containerimage.digest" }}"
name: containervolume
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
name: {{ .Release.Name }}-md-0
namespace: {{ .Release.Namespace }}
annotations:
cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "2"
cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "0"
capacity.cluster-autoscaler.kubernetes.io/memory: "1024Mi"
capacity.cluster-autoscaler.kubernetes.io/cpu: "2"
spec:
clusterName: {{ .Release.Name }}
selector:
matchLabels: null
template:
spec:
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
name: {{ .Release.Name }}-md-0
namespace: {{ .Release.Namespace }}
clusterName: {{ .Release.Name }}
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtMachineTemplate
name: {{ .Release.Name }}-md-0
namespace: {{ .Release.Namespace }}
version: v1.23.10

View File

@@ -0,0 +1,126 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ .Release.Name }}-kcsi-controller
labels:
app: {{ .Release.Name }}-kcsi-driver
spec:
replicas: 1
selector:
matchLabels:
app: {{ .Release.Name }}-kcsi-driver
template:
metadata:
labels:
app: {{ .Release.Name }}-kcsi-driver
spec:
serviceAccountName: {{ .Release.Name }}-kcsi
priorityClassName: system-cluster-critical
nodeSelector:
node-role.kubernetes.io/control-plane: ""
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
operator: Exists
effect: "NoSchedule"
containers:
- name: csi-driver
imagePullPolicy: Always
image: ghcr.io/kvaps/test:kubevirt-csi-driver
args:
- "--endpoint=$(CSI_ENDPOINT)"
- "--infra-cluster-namespace=$(INFRACLUSTER_NAMESPACE)"
- "--infra-cluster-labels=$(INFRACLUSTER_LABELS)"
- "--v=5"
ports:
- name: healthz
containerPort: 10301
protocol: TCP
env:
- name: CSI_ENDPOINT
value: unix:///var/lib/csi/sockets/pluginproxy/csi.sock
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: INFRACLUSTER_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: INFRACLUSTER_LABELS
value: "csi-driver/cluster=test"
- name: INFRA_STORAGE_CLASS_ENFORCEMENT
valueFrom:
configMapKeyRef:
name: driver-config
key: infraStorageClassEnforcement
optional: true
volumeMounts:
- name: socket-dir
mountPath: /var/lib/csi/sockets/pluginproxy/
- name: kubeconfig
mountPath: /etc/kubernetes/kubeconfig
readOnly: true
resources:
requests:
memory: 50Mi
cpu: 10m
- name: csi-provisioner
image: quay.io/openshift/origin-csi-external-provisioner:latest
args:
- "--csi-address=$(ADDRESS)"
- "--default-fstype=ext4"
- "--kubeconfig=/etc/kubernetes/kubeconfig/super-admin.svc"
- "--v=5"
- "--timeout=3m"
- "--retry-interval-max=1m"
env:
- name: ADDRESS
value: /var/lib/csi/sockets/pluginproxy/csi.sock
volumeMounts:
- name: socket-dir
mountPath: /var/lib/csi/sockets/pluginproxy/
- name: kubeconfig
mountPath: /etc/kubernetes/kubeconfig
readOnly: true
- name: csi-attacher
image: quay.io/openshift/origin-csi-external-attacher:latest
args:
- "--csi-address=$(ADDRESS)"
- "--kubeconfig=/etc/kubernetes/kubeconfig/super-admin.svc"
- "--v=5"
- "--timeout=3m"
- "--retry-interval-max=1m"
env:
- name: ADDRESS
value: /var/lib/csi/sockets/pluginproxy/csi.sock
volumeMounts:
- name: socket-dir
mountPath: /var/lib/csi/sockets/pluginproxy/
- name: kubeconfig
mountPath: /etc/kubernetes/kubeconfig
readOnly: true
resources:
requests:
memory: 50Mi
cpu: 10m
- name: csi-liveness-probe
image: quay.io/openshift/origin-csi-livenessprobe:latest
args:
- "--csi-address=/csi/csi.sock"
- "--probe-timeout=3s"
- "--health-port=10301"
volumeMounts:
- name: socket-dir
mountPath: /csi
resources:
requests:
memory: 50Mi
cpu: 10m
volumes:
- name: socket-dir
emptyDir: {}
- secret:
secretName: {{ .Release.Name }}-admin-kubeconfig
name: kubeconfig

View File

@@ -0,0 +1,32 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Release.Name }}-kcsi
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ .Release.Name }}-kcsi
rules:
- apiGroups: ["cdi.kubevirt.io"]
resources: ["datavolumes"]
verbs: ["get", "create", "delete"]
- apiGroups: ["kubevirt.io"]
resources: ["virtualmachineinstances"]
verbs: ["list", "get"]
- apiGroups: ["subresources.kubevirt.io"]
resources: ["virtualmachineinstances/addvolume", "virtualmachineinstances/removevolume"]
verbs: ["update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ .Release.Name }}-kcsi
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ .Release.Name }}-kcsi
subjects:
- kind: ServiceAccount
name: {{ .Release.Name }}-kcsi

View File

@@ -0,0 +1,46 @@
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-cilium
labels:
cozystack.io/repository: system
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 1m
releaseName: cilium
chart:
spec:
chart: cozy-cilium
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
targetNamespace: cozy-cilium
storageNamespace: cozy-cilium
install:
createNamespace: true
values:
cilium:
tunnel: disabled
autoDirectNodeRoutes: true
cgroup:
autoMount:
enabled: true
hostRoot: /run/cilium/cgroupv2
k8sServiceHost: {{ .Release.Name }}.{{ .Release.Namespace }}.svc
k8sServicePort: 6443
cni:
chainingMode: ~
customConf: false
configMap: ""
routingMode: native
enableIPv4Masquerade: true
ipv4NativeRoutingCIDR: "10.244.0.0/16"
dependsOn:
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}

View File

@@ -0,0 +1,28 @@
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-csi
labels:
cozystack.io/repository: system
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 1m
releaseName: csi
chart:
spec:
chart: cozy-kubevirt-csi-node
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
targetNamespace: cozy-csi
storageNamespace: cozy-csi
install:
createNamespace: true
dependsOn:
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}

View File

@@ -0,0 +1,73 @@
---
apiVersion: batch/v1
kind: Job
metadata:
annotations:
"helm.sh/hook": pre-delete
"helm.sh/hook-weight": "10"
"helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation,hook-failed
name: {{ .Release.Name }}-flux-teardown
spec:
template:
spec:
serviceAccountName: {{ .Release.Name }}-flux-teardown
restartPolicy: Never
containers:
- name: kubectl
image: docker.io/clastix/kubectl:v1.29.1
command:
- kubectl
- --namespace={{ .Release.Namespace }}
- patch
- helmrelease
- {{ .Release.Name }}-cilium
- {{ .Release.Name }}-csi
- -p
- '{"spec": {"suspend": true}}'
- --type=merge
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Release.Name }}-flux-teardown
annotations:
helm.sh/hook: pre-delete
helm.sh/hook-delete-policy: before-hook-creation,hook-failed
helm.sh/hook-weight: "0"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
annotations:
"helm.sh/hook": pre-install,post-install,pre-delete
"helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation,hook-failed
"helm.sh/hook-weight": "5"
name: {{ .Release.Name }}-flux-teardown
rules:
- apiGroups:
- "helm.toolkit.fluxcd.io"
resources:
- helmreleases
verbs:
- get
- patch
resourceNames:
- {{ .Release.Name }}-cilium
- {{ .Release.Name }}-csi
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
annotations:
helm.sh/hook: pre-delete
helm.sh/hook-delete-policy: hook-succeeded,before-hook-creation,hook-failed
helm.sh/hook-weight: "5"
name: {{ .Release.Name }}-flux-teardown
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ .Release.Name }}-flux-teardown
subjects:
- kind: ServiceAccount
name: {{ .Release.Name }}-flux-teardown
namespace: {{ .Release.Namespace }}

View File

@@ -0,0 +1,11 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ .Release.Namespace }}-{{ .Release.Name }}-kccm
rules:
- apiGroups:
- ""
resources:
- nodes
verbs:
- get

View File

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ .Release.Namespace }}-{{ .Release.Name }}-kccm
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ .Release.Namespace }}-{{ .Release.Name }}-kccm
subjects:
- kind: ServiceAccount
name: {{ .Release.Name }}-kccm
namespace: {{ .Release.Namespace }}

View File

@@ -0,0 +1,42 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ .Release.Name }}-kccm
rules:
- apiGroups:
- kubevirt.io
resources:
- virtualmachines
verbs:
- get
- watch
- list
- apiGroups:
- kubevirt.io
resources:
- virtualmachineinstances
verbs:
- get
- watch
- list
- update
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- "*"
- apiGroups:
- ""
resources:
- nodes
verbs:
- get

View File

@@ -0,0 +1,27 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ .Release.Namespace }}-{{ .Release.Name }}-kccm
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: {{ .Release.Name }}-kccm
namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ .Release.Name }}-kccm
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ .Release.Name }}-kccm
subjects:
- kind: ServiceAccount
name: {{ .Release.Name }}-kccm
namespace: {{ .Release.Namespace }}

View File

@@ -0,0 +1,49 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}-kccm
labels:
k8s-app: {{ .Release.Name }}-kccm
spec:
replicas: 1
selector:
matchLabels:
k8s-app: {{ .Release.Name }}-kccm
template:
metadata:
labels:
k8s-app: {{ .Release.Name }}-kccm
spec:
containers:
- name: kubevirt-cloud-controller-manager
args:
- --cloud-provider=kubevirt
- --cloud-config=/etc/cloud/cloud-config
- --kubeconfig=/etc/kubernetes/kubeconfig/super-admin.svc
- --cluster-name={{ .Release.Name }}
command:
- /bin/kubevirt-cloud-controller-manager
image: ghcr.io/kvaps/test:kubevirt-cloud-provider
imagePullPolicy: Always
#securityContext:
# privileged: true
resources:
requests:
cpu: 100m
volumeMounts:
- mountPath: /etc/kubernetes/kubeconfig
name: kubeconfig
readOnly: true
- mountPath: /etc/cloud
name: cloud-config
readOnly: true
volumes:
- configMap:
name: {{ .Release.Name }}-cloud-config
name: cloud-config
- secret:
secretName: {{ .Release.Name }}-admin-kubeconfig
name: kubeconfig
tolerations:
- operator: Exists
serviceAccountName: {{ .Release.Name }}-kccm

View File

@@ -0,0 +1,4 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Release.Name }}-kccm

View File

@@ -0,0 +1,11 @@
{
"$schema": "http://json-schema.org/schema#",
"type": "object",
"properties": {
"host": {
"type": "string",
"title": "Domain name for this kubernetes cluster",
"description": "This host will be used for all apps deployed in this tenant"
}
}
}

View File

@@ -0,0 +1 @@
host: ""

View File

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -0,0 +1,25 @@
apiVersion: v2
name: mysql
description: Managed MariaDB service
icon: https://static-00.iconduck.com/assets.00/mariadb-icon-512x340-txozryr2.png
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"

View File

@@ -0,0 +1,64 @@
## Managed MariaDB Service
The Managed MariaDB Service offers a powerful and widely used relational database solution. This service allows you to create and manage a replicated MariaDB cluster seamlessly.
## Deployment Details
This managed service is controlled by mariadb-operator, ensuring efficient management and seamless operation.
- Docs: https://mariadb.com/kb/en/documentation/
- GitHub: https://github.com/mariadb-operator/mariadb-operator
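To see what the operator has provisioned for a release, the MariaDB custom resources can be inspected directly (a sketch; `<namespace>` and `<instance>` are placeholders for the tenant namespace and release name):
```
kubectl -n <namespace> get mariadb
kubectl -n <namespace> describe mariadb <instance>
```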
## HowTos
### How to switch master/slave replica
```
kubectl edit mariadb <instance>
```
update:
```
spec:
replication:
primary:
podIndex: 1
```
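The same switch can be applied non-interactively with a merge patch (a sketch; `<instance>` is the MariaDB resource name and the pod index is an example value):
```
kubectl patch mariadb <instance> --type=merge -p '{"spec":{"replication":{"primary":{"podIndex":1}}}}'
```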
check status:
```
NAME READY STATUS PRIMARY POD AGE
<instance> True Running app-db1-1 41d
```
### How to restore backup:
find snapshot:
```
restic -r s3:s3.example.org/mariadb-backups/database_name snapshots
```
restore:
```
restic -r s3:s3.example.org/mariadb-backups/database_name restore latest --target /tmp/
```
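A dump can also be streamed straight back into the database without an intermediate file (a sketch; the repository URL matches the examples above, while host, credentials and database name are placeholders):
```
restic -r s3:s3.example.org/mariadb-backups/database_name dump latest dump.sql | mysql -h <master> -P 3306 -u<user> -p<password>
```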
more details:
- https://itnext.io/restic-effective-backup-from-stdin-4bc1e8f083c1
### Known issues
- **Replication cannot finish and fails with various errors**
- **Replication cannot finish if the binlog has been purged**
Until mariadb-backup is used by mariadb-operator to bootstrap a node (this feature is not implemented yet), follow these manual steps to fix it:
https://github.com/mariadb-operator/mariadb-operator/issues/141#issuecomment-1804760231
- **Corrupted indices**
Sometimes indices become corrupted on the master replica; you can recover them from a slave:
```
mysqldump -h <slave> -P 3306 -u<user> -p<password> --column-statistics=0 <database> <table> > ~/tmp/fix-table.sql
mysql -h <master> -P 3306 -u<user> -p<password> <database> < ~/tmp/fix-table.sql
```

View File

View File

@@ -0,0 +1,94 @@
{{- if .Values.backup.enabled }}
{{ $image := .Files.Get "images/backup.json" | fromJson }}
apiVersion: batch/v1
kind: CronJob
metadata:
name: {{ .Release.Name }}-backup
spec:
schedule: "{{ .Values.backup.schedule }}"
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 3
jobTemplate:
spec:
backoffLimit: 2
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/backup-script.yaml") . | sha256sum }}
checksum/secret: {{ include (print $.Template.BasePath "/backup-secret.yaml") . | sha256sum }}
spec:
imagePullSecrets:
- name: {{ .Release.Name }}-regsecret
restartPolicy: Never
containers:
- name: mysqldump
image: "{{ index $image "image.name" }}@{{ index $image "containerimage.digest" }}"
command:
- /bin/sh
- /scripts/backup.sh
env:
- name: REPO_PREFIX
value: {{ required "s3Bucket is not specified!" .Values.backup.s3Bucket | quote }}
- name: CLEANUP_STRATEGY
value: {{ required "cleanupPolicy is not specified!" .Values.backup.cleanupStrategy | quote }}
- name: MYSQL_USER
value: root
- name: MYSQL_PWD
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}
key: root-password
- name: MYSQL_HOST
value: {{ .Release.Name }}-secondary
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-backup
key: s3AccessKey
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-backup
key: s3SecretKey
- name: AWS_DEFAULT_REGION
value: {{ .Values.backup.s3Region }}
- name: RESTIC_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-backup
key: resticPassword
volumeMounts:
- mountPath: /scripts
name: scripts
- mountPath: /tmp
name: tmp
- mountPath: /.cache
name: cache
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
volumes:
- name: scripts
secret:
secretName: {{ .Release.Name }}-backup-script
- name: tmp
emptyDir: {}
- name: cache
emptyDir: {}
securityContext:
runAsNonRoot: true
runAsUser: 9000
runAsGroup: 9000
seccompProfile:
type: RuntimeDefault
{{- end }}

View File

@@ -0,0 +1,50 @@
{{- if .Values.backup.enabled }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-backup-script
stringData:
backup.sh: |
#!/bin/sh
set -e
set -o pipefail
JOB_ID="job-$(uuidgen|cut -f1 -d-)"
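    # Each run gets a unique JOB_ID tag; snapshots are re-tagged "completed" only after
    # the dump and upload succeed, and the cleanup phase keeps completed snapshots only.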
DB_LIST=$(mysql -u "$MYSQL_USER" -h "$MYSQL_HOST" -sNe 'SHOW DATABASES;' | grep -v '^\(#.*\|mysql\|sys\|information_schema\|performance_schema\)$')
echo DB_LIST=$(echo "$DB_LIST" | shuf) # shuffle list
echo "Job ID: $JOB_ID"
echo "Target repo: $REPO_PREFIX"
echo "Cleanup strategy: $CLEANUP_STRATEGY"
echo "Start backup for:"
echo "$DB_LIST"
echo
echo "Backup started at `date +%Y-%m-%d\ %H:%M:%S`"
for db in $DB_LIST; do
(
set -x
restic -r "s3:${REPO_PREFIX}/$db" cat config >/dev/null 2>&1 || \
restic -r "s3:${REPO_PREFIX}/$db" init --repository-version 2
restic -r "s3:${REPO_PREFIX}/$db" unlock --remove-all >/dev/null 2>&1 || true # no locks, k8s takes care of it
mysqldump -u "$MYSQL_USER" -h "$MYSQL_HOST" --single-transaction --databases $db | \
restic -r "s3:${REPO_PREFIX}/$db" backup --tag "$JOB_ID" --stdin --stdin-filename dump.sql
restic -r "s3:${REPO_PREFIX}/$db" tag --tag "$JOB_ID" --set "completed"
)
done
echo "Backup finished at `date +%Y-%m-%d\ %H:%M:%S`"
echo
echo "Run cleanup:"
echo
echo "Cleanup started at `date +%Y-%m-%d\ %H:%M:%S`"
for db in $DB_LIST; do
(
set -x
restic forget -r "s3:${REPO_PREFIX}/$db" --group-by=tags --keep-tag "completed" # keep completed snapshots only
restic forget -r "s3:${REPO_PREFIX}/$db" --group-by=tags $CLEANUP_STRATEGY
restic prune -r "s3:${REPO_PREFIX}/$db"
)
done
echo "Cleanup finished at `date +%Y-%m-%d\ %H:%M:%S`"
{{- end }}

View File

@@ -0,0 +1,11 @@
{{- if .Values.backup.enabled }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-backup
stringData:
s3AccessKey: {{ required "s3AccessKey is not specified!" .Values.backup.s3AccessKey }}
s3SecretKey: {{ required "s3SecretKey is not specified!" .Values.backup.s3SecretKey }}
resticPassword: {{ required "resticPassword is not specified!" .Values.backup.resticPassword }}
{{- end }}

View File

@@ -0,0 +1,35 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-my-cnf
data:
config: |
[mysqld]
sql-mode=NO_ENGINE_SUBSTITUTION
max_connections=4096
default_authentication_plugin=mysql_native_password
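    # NOTE: the InnoDB and thread-pool settings below are sized for a large dedicated
    # node (tens of GiB of RAM, dozens of cores); adjust them to match your hardware.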
#innodb_buffer_pool_dump_at_shutdown=1
innodb_buffer_pool_instances=48
innodb_buffer_pool_size=60G
innodb_fast_shutdown=0
innodb_flush_method=O_DIRECT_NO_FSYNC
innodb_flush_log_at_trx_commit=2
innodb_io_capacity=10000
innodb_io_capacity_max=50000
#innodb_log_buffer_size=128M
innodb_log_file_size=4096M
#innodb_log_files_in_group=6
innodb_thread_concurrency=24
join_buffer_size=2M
key_buffer_size=1024M
read_rnd_buffer_size=16M
#sync_binlog=0
table_open_cache=40714
table_definition_cache=4000
thread_pool_size=24
tmp_table_size=512M
master_info_repository=TABLE
relay_log_info_repository=TABLE
innodb_read_io_threads=12
innodb_write_io_threads=12

View File

@@ -0,0 +1,14 @@
{{- range $name := .Values.databases }}
{{ $dnsName := replace "_" "-" $name }}
---
apiVersion: mariadb.mmontes.io/v1alpha1
kind: Database
metadata:
name: {{ $.Release.Name }}-{{ $dnsName }}
spec:
name: {{ $name }}
mariaDbRef:
name: {{ $.Release.Name }}
characterSet: utf8
collate: utf8_general_ci
{{- end }}

View File

@@ -0,0 +1,71 @@
---
apiVersion: mariadb.mmontes.io/v1alpha1
kind: MariaDB
metadata:
name: {{ .Release.Name }}
spec:
rootPasswordSecretKeyRef:
name: {{ .Release.Name }}
key: root-password
image: "mariadb:11.0.2"
port: 3306
replicas: 2
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- mariadb
- key: app.kubernetes.io/instance
operator: In
values:
- {{ .Release.Name }}
topologyKey: "kubernetes.io/hostname"
replication:
enabled: true
#primary:
# podIndex: 0
# automaticFailover: true
metrics:
exporter:
image: prom/mysqld-exporter:v0.14.0
resources:
requests:
cpu: 50m
memory: 64Mi
limits:
cpu: 300m
memory: 512Mi
port: 9104
serviceMonitor:
interval: 10s
scrapeTimeout: 10s
myCnfConfigMapKeyRef:
name: {{ .Release.Name }}-my-cnf
key: config
volumeClaimTemplate:
resources:
requests:
storage: {{ .Values.size }}
accessModes:
- ReadWriteOnce
{{- if .Values.external }}
primaryService:
type: LoadBalancer
{{- end }}
#secondaryService:
# type: LoadBalancer

View File

@@ -0,0 +1,10 @@
{{- if .Values.registrySecret }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-regsecret
type: kubernetes.io/dockerconfigjson
stringData:
.dockerconfigjson: |
{{- toJson .Values.registrySecret | nindent 4 }}
{{- end }}

View File

@@ -0,0 +1,9 @@
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}
stringData:
{{- range $name, $u := .Values.users }}
{{ $name }}-password: {{ $u.password }}
{{- end }}

View File

@@ -0,0 +1,31 @@
{{- range $name, $u := .Values.users }}
{{ if not (eq $name "root") }}
{{ $dnsName := replace "_" "-" $name }}
---
apiVersion: mariadb.mmontes.io/v1alpha1
kind: User
metadata:
name: {{ $.Release.Name }}-{{ $dnsName }}
spec:
name: {{ $name }}
mariaDbRef:
name: {{ $.Release.Name }}
passwordSecretKeyRef:
name: {{ $.Release.Name }}
key: {{ $name }}-password
maxUserConnections: {{ $u.maxUserConnections }}
---
apiVersion: mariadb.mmontes.io/v1alpha1
kind: Grant
metadata:
name: {{ $.Release.Name }}-{{ $dnsName }}
spec:
mariaDbRef:
name: {{ $.Release.Name }}
privileges: {{ $u.privileges | toJson }}
database: "*"
table: "*"
username: {{ $name }}
grantOption: true
{{- end }}
{{- end }}

View File

@@ -0,0 +1,30 @@
external: false
size: 10Gi
users:
root:
password: strongpassword
user1:
privileges: ['ALL']
maxUserConnections: 1000
password: hackme
user2:
privileges: ['SELECT']
maxUserConnections: 1000
password: hackme
databases:
- wordpress1
- wordpress2
- wordpress3
- wordpress4
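# Backup settings are consumed by the restic-based backup CronJob; the bucket,
# keys, and passwords below are placeholder examples and must be overridden.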
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/mariadb-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0

View File

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -0,0 +1,25 @@
apiVersion: v2
name: postgres
description: Managed PostgreSQL service
icon: https://cdn-icons-png.flaticon.com/512/5968/5968342.png
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"

Some files were not shown because too many files have changed in this diff.