fix(docs): update guides to latest changes

This commit is contained in:
bsctl
2022-08-09 20:07:26 +02:00
committed by Dario Tranchitella
parent e4227e1c81
commit f7483dcb01
2 changed files with 112 additions and 282 deletions

View File

@@ -8,23 +8,27 @@ This guide will lead you through the process of creating a working Kamaji setup
* [Prepare the bootstrap workspace](#prepare-the-bootstrap-workspace)
* [Access Admin cluster](#access-admin-cluster)
* [Setup multi-tenant etcd](#setup-multi-tenant-etcd)
* [Install Kamaji controller](#install-kamaji-controller)
* [Create Tenant Cluster](#create-tenant-cluster)
* [Cleanup](#cleanup)
## Prepare the bootstrap workspace
First, prepare the workspace directory:
This guide is meant to be run from a remote or local bootstrap machine. First, clone the repo and prepare the workspace directory:
```
```bash
git clone https://github.com/clastix/kamaji
cd kamaji/deploy
```
1. Follow the instructions in [Prepare the bootstrap workspace](./kamaji-deployment-guide.md#prepare-the-bootstrap-workspace).
2. Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli).
3. Make sure you have a valid Azure subscription
4. Login to Azure:
We assume you have installed on your workstation:
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [helm](https://helm.sh/docs/intro/install/)
- [jq](https://stedolan.github.io/jq/)
- [openssl](https://www.openssl.org/)
- [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli)
Make sure you have a valid Azure subscription, and log in to Azure:
```bash
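# Log in first if you have no active session (interactive browser flow):
az login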
az account set --subscription "MySubscription"
@@ -39,11 +43,7 @@ Throughout the following instructions, shell variables are used to indicate valu
```bash
source kamaji-azure.env
```
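The environment file drives the rest of the guide; a minimal sketch with placeholder values, using the variables referenced later in this guide (the repository ships the authoritative version):
```bash
# kamaji-azure.env - illustrative placeholder values only
export KAMAJI_RG=kamaji-rg            # Azure resource group created below
export KAMAJI_REGION=westeurope       # Azure region hosting the AKS cluster
export TENANT_NAMESPACE=tenants       # namespace hosting the Tenant Control Plane
export TENANT_NAME=tenant-00          # name of the Tenant Control Plane
export TENANT_DOMAIN=westeurope.cloudapp.azure.com  # public DNS domain for tenants
```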
> We use the Azure CLI to set up the Kamaji Admin cluster on AKS.
```bash
az group create \
--name $KAMAJI_RG \
--location $KAMAJI_REGION
@@ -73,15 +73,38 @@ And check you can access:
kubectl cluster-info
```
## Setup multi-tenant etcd
Follow the instructions [here](./kamaji-deployment-guide.md#setup-multi-tenant-etcd).
## Install Kamaji controller
Follow the instructions [here](./kamaji-deployment-guide.md#install-kamaji-controller).
There are multiple ways to deploy the Kamaji controller:
- Use the single YAML file installer
- Use Kustomize with Makefile
- Use the Kamaji Helm Chart
The Kamaji controller needs to access a multi-tenant `etcd` in order to provision the access for the tenant `kube-apiserver`. The multi-tenant `etcd` cluster is deployed as a three-replica StatefulSet in the admin cluster. Data persistence for the multi-tenant `etcd` cluster is required. The Helm [Chart](../helm/kamaji/) provides the installation of an internal `etcd`; however, an externally managed `etcd` is highly recommended. If you'd like to use an external one, you can specify the overrides and set the value `etcd.deploy=false`.
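For example, a hedged sketch of an install against an externally managed datastore: `etcd.deploy=false` comes from the chart as stated above, while the endpoint override key is an assumption to verify against the chart's `values.yaml`:
```bash
# Sketch only: disable the bundled etcd and point Kamaji at an external one.
# The etcd.endpoints key below is hypothetical; check `helm show values ../helm/kamaji`.
helm install --create-namespace --namespace kamaji-system kamaji ../helm/kamaji \
  --set etcd.deploy=false \
  --set etcd.endpoints="https://my-etcd-0:2379\,https://my-etcd-1:2379\,https://my-etcd-2:2379"
```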
### Install with Helm Chart
Install with `helm` into a dedicated namespace of the Admin cluster:
```bash
helm install --create-namespace --namespace kamaji-system kamaji ../helm/kamaji
```
The Kamaji controller and the multi-tenant `etcd` are now running:
```bash
kubectl -n kamaji-system get pods
NAME READY STATUS RESTARTS AGE
etcd-0 1/1 Running 0 120m
etcd-1 1/1 Running 0 120m
etcd-2 1/1 Running 0 119m
kamaji-857fcdf599-4fb2p 2/2 Running 0 120m
```
You just turned your AKS cluster into a Kamaji cluster to run multiple Tenant Control Planes.
## Create Tenant Cluster
### Create a tenant control plane
### Tenant Control Plane
With Kamaji on AKS, the tenant control plane is accessible:
- from tenant worker nodes through an internal loadbalancer
@@ -170,20 +193,16 @@ spec:
kamaji.clastix.io/soot: ${TENANT_NAME}
type: LoadBalancer
EOF
kubectl create namespace ${TENANT_NAMESPACE}
kubectl apply -f ${TENANT_NAMESPACE}-${TENANT_NAME}-tcp.yaml
```
Make sure:
- the `tcp.spec.controlPlane.service.serviceType=LoadBalancer` and the following annotation: `service.beta.kubernetes.io/azure-load-balancer-internal=true` is set. This tells AKS to expose the service within an Azure internal loadbalancer.
- the following annotation: `service.beta.kubernetes.io/azure-load-balancer-internal=true` is set on the `tcp` service. It tells Azure to expose the service through an internal loadbalancer.
- the public loadbalancer service has the following annotation: `service.beta.kubernetes.io/azure-dns-label-name=${TENANT_NAME}` to expose the Tenant Control Plane with domain name: `${TENANT_NAME}.${TENANT_DOMAIN}`.
Create the Tenant Control Plane:
```bash
kubectl create namespace ${TENANT_NAMESPACE}
kubectl apply -f ${TENANT_NAMESPACE}-${TENANT_NAME}-tcp.yaml
```
- the following annotation: `service.beta.kubernetes.io/azure-dns-label-name=${TENANT_NAME}` is set on the public loadbalancer service. It tells Azure to expose the Tenant Control Plane with the domain name `${TENANT_NAME}.${TENANT_DOMAIN}` (see the check below).
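To verify both annotations after applying the manifests, a quick check could be (namespace and label names as used above):
```bash
# Print each tenant service with its annotations for a quick review
kubectl -n ${TENANT_NAMESPACE} get svc \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations}{"\n"}{end}'
```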
### Working with Tenant Control Plane

View File

@@ -1,74 +1,33 @@
# Setup Kamaji
This guide will lead you through the process of creating a working Kamaji setup on a generic Kubernetes cluster It requires:
This guide will lead you through the process of creating a working Kamaji setup on a generic Kubernetes cluster. It requires:
- one bootstrap local workstation
- a Kubernetes cluster, to run the Admin and Tenant Control Planes
- a Kubernetes cluster 1.22+, to run the Admin and Tenant Control Planes
- an additional `etcd` cluster made of 3 replicas to host the datastore for the Tenants' clusters
- an arbitrary number of machines to host Tenants' workloads
> In this guide, we assume all machines are running `Ubuntu 20.04`.
> In this guide, we assume the machines are running `Ubuntu 20.04`.
* [Prepare the bootstrap workspace](#prepare-the-bootstrap-workspace)
* [Access Admin cluster](#access-admin-cluster)
* [Setup multi-tenant etcd](#setup-multi-tenant-etcd)
* [Install Kamaji controller](#install-kamaji-controller)
* [Create Tenant Cluster](#create-tenant-cluster)
* [Cleanup](#cleanup)
## Prepare the bootstrap workspace
First, prepare the workspace directory:
This guide is meant to be run from a remote or local bootstrap machine. First, clone the repo and prepare the workspace directory:
```
```bash
git clone https://github.com/clastix/kamaji
cd kamaji/deploy
```
### Install required tools
On the bootstrap machine, install all the required tools to work with a Kamaji setup.
#### cfssl and cfssljson
The `cfssl` and `cfssljson` command line utilities will be used in addition to `kubeadm` to provision the PKI Infrastructure and generate TLS certificates.
```bash
wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssl \
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssljson
chmod +x cfssl cfssljson
sudo mv cfssl cfssljson /usr/local/bin/
```
#### Kubernetes tools
Install `kubeadm` and `kubectl`
```bash
sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl && \
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg && \
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
sudo apt update && sudo apt install -y kubeadm kubectl --allow-change-held-packages && \
sudo apt-mark hold kubeadm kubectl
```
#### etcdctl
For administration of the `etcd` cluster, download and install the `etcdctl` CLI utility on the bootstrap machine:
```bash
ETCD_VER=v3.5.1
ETCD_URL=https://storage.googleapis.com/etcd
curl -L ${ETCD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o etcd-${ETCD_VER}-linux-amd64.tar.gz
tar xzvf etcd-${ETCD_VER}-linux-amd64.tar.gz etcd-${ETCD_VER}-linux-amd64/etcdctl
sudo cp etcd-${ETCD_VER}-linux-amd64/etcdctl /usr/bin/etcdctl
rm -rf etcd-${ETCD_VER}-linux-amd64*
```
Verify the installed `etcdctl` version:
```bash
etcdctl version
etcdctl version: 3.5.1
API version: 3.5
```
We assume you have installed on your workstation:
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [helm](https://helm.sh/docs/intro/install/)
- [jq](https://stedolan.github.io/jq/)
- [openssl](https://www.openssl.org/)
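A quick sanity check that the tools above are available:
```bash
# Verify the required CLI tools are installed on the workstation
kubectl version --client
helm version --short
jq --version
openssl version
```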
## Access Admin cluster
In Kamaji, an Admin Cluster is a regular Kubernetes cluster which hosts zero to many Tenant Cluster Control Planes. The admin cluster acts as a management cluster for all the Tenant clusters and implements Monitoring, Logging, and Governance of the entire Kamaji setup, including all Tenant clusters.
@@ -88,221 +47,40 @@ Throughout the following instructions, shell variables are used to indicate valu
source kamaji.env
```
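The file sets the shell variables used throughout this guide; a minimal sketch with placeholder values (variable names taken from the commands below, the repository ships the authoritative version):
```bash
# kamaji.env - illustrative placeholder values only
export ETCD_NAMESPACE=etcd-system   # namespace hosting the multi-tenant etcd
export TENANT_NAMESPACE=tenants     # namespace hosting the Tenant Control Plane
export TENANT_NAME=tenant-00        # name of the Tenant Control Plane
export TENANT_PORT=6443             # port the Tenant Control Plane is exposed on
```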
## Setup multi-tenant etcd
### Create certificates
From the bootstrap machine, use the `kubeadm` init phase to create the `etcd` CA certificates:
```bash
sudo kubeadm init phase certs etcd-ca
mkdir kamaji
sudo cp -r /etc/kubernetes/pki/etcd kamaji
sudo chown -R ${USER}. kamaji/etcd
```
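The `cfssl` commands below reference a `cfssl-cert-config.json` with `peer-authentication` and `client-authentication` profiles. The repository ships this file; purely for reference, a minimal sketch of such a signing config could look like:
```bash
# Illustrative only: the authoritative config lives in the repository
cat << EOF | tee cfssl-cert-config.json
{
  "signing": {
    "default": { "expiry": "8760h" },
    "profiles": {
      "peer-authentication": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      },
      "client-authentication": {
        "usages": ["signing", "key encipherment", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF
```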
Generate the `etcd` certificates for peers:
```bash
cat << EOF | tee kamaji/etcd/peer-csr.json
{
"CN": "etcd",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts": [
"127.0.0.1",
"etcd-0",
"etcd-0.etcd",
"etcd-0.etcd.${ETCD_NAMESPACE}.svc",
"etcd-0.etcd.${ETCD_NAMESPACE}.svc.cluster.local",
"etcd-1",
"etcd-1.etcd",
"etcd-1.etcd.${ETCD_NAMESPACE}.svc",
"etcd-1.etcd.${ETCD_NAMESPACE}.svc.cluster.local",
"etcd-2",
"etcd-2.etcd",
"etcd-2.etcd.${ETCD_NAMESPACE}.svc",
"etcd-2.etcd.${ETCD_NAMESPACE}.cluster.local"
]
}
EOF
cfssl gencert -ca=kamaji/etcd/ca.crt -ca-key=kamaji/etcd/ca.key \
-config=cfssl-cert-config.json \
-profile=peer-authentication kamaji/etcd/peer-csr.json | cfssljson -bare kamaji/etcd/peer
```
Generate the `etcd` certificates for server:
```bash
cat << EOF | tee kamaji/etcd/server-csr.json
{
"CN": "etcd",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts": [
"127.0.0.1",
"etcd-server",
"etcd-server.${ETCD_NAMESPACE}.svc",
"etcd-server.${ETCD_NAMESPACE}.svc.cluster.local",
"etcd-0.etcd.${ETCD_NAMESPACE}.svc.cluster.local",
"etcd-1.etcd.${ETCD_NAMESPACE}.svc.cluster.local",
"etcd-2.etcd.${ETCD_NAMESPACE}.svc.cluster.local"
]
}
EOF
cfssl gencert -ca=kamaji/etcd/ca.crt -ca-key=kamaji/etcd/ca.key \
-config=cfssl-cert-config.json \
-profile=peer-authentication kamaji/etcd/server-csr.json | cfssljson -bare kamaji/etcd/server
```
Generate certificates for the `root` user of `etcd`:
```bash
cat << EOF | tee kamaji/etcd/root-csr.json
{
"CN": "root",
"key": {
"algo": "rsa",
"size": 2048
}
}
EOF
cfssl gencert -ca=kamaji/etcd/ca.crt -ca-key=kamaji/etcd/ca.key \
-config=cfssl-cert-config.json \
-profile=client-authentication kamaji/etcd/root-csr.json | cfssljson -bare kamaji/etcd/root
```
Store the certificates of `etcd` into secrets:
```bash
kubectl create namespace ${ETCD_NAMESPACE}
kubectl -n ${ETCD_NAMESPACE} create secret generic etcd-certs \
--from-file=kamaji/etcd/ca.crt \
--from-file=kamaji/etcd/ca.key \
--from-file=kamaji/etcd/peer-key.pem --from-file=kamaji/etcd/peer.pem \
--from-file=kamaji/etcd/server-key.pem --from-file=kamaji/etcd/server.pem
kubectl -n ${ETCD_NAMESPACE} create secret tls root-client-certs \
--key=kamaji/etcd/root-key.pem \
--cert=kamaji/etcd/root.pem
```
### Create the etcd cluster
You can install the tenants' `etcd` as a StatefulSet in the Kamaji admin cluster. To achieve data persistence, make sure a default Storage Class is defined. Refer to the [documentation](https://etcd.io/docs/v3.5/op-guide/) for requirements and best practices to run `etcd` in production.
You should use topology spread constraints to control how `etcd` replicas are spread across the cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This helps to achieve high availability as well as efficient resource utilization. You can set cluster-level constraints as a default, or configure topology spread constraints by assigning the label `topology.kubernetes.io/zone` to the Kamaji admin cluster nodes hosting the tenants' `etcd`.
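Purely as an illustration, a strategic-merge patch adding such a constraint to the `etcd` StatefulSet might look like the following (the `app: etcd` selector is an assumption; check the labels in `etcd/etcd-cluster.yaml`):
```bash
# Illustrative only: spread the three etcd replicas across zones
cat > etcd-spread-patch.yaml <<EOF
spec:
  template:
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: etcd   # assumption: adjust to the actual pod labels
EOF
kubectl -n ${ETCD_NAMESPACE} patch statefulset etcd --patch-file etcd-spread-patch.yaml
```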
Install the tenants' `etcd` server:
```bash
kubectl -n ${ETCD_NAMESPACE} apply -f etcd/etcd-cluster.yaml
```
Install an `etcd` client to interact with the `etcd` server:
```bash
kubectl -n ${ETCD_NAMESPACE} apply -f etcd/etcd-client.yaml
```
Wait until the `etcd` instances discover each other and the cluster is formed:
```bash
kubectl -n ${ETCD_NAMESPACE} exec etcd-root-client -- /bin/bash -c "etcdctl member list"
```
### Enable multi-tenancy
The `root` user, which has full access to `etcd`, must be created before enabling authentication. It must be granted the `root` role and is allowed to change anything inside `etcd`.
```bash
kubectl -n ${ETCD_NAMESPACE} exec etcd-root-client -- etcdctl user add --no-password=true root
kubectl -n ${ETCD_NAMESPACE} exec etcd-root-client -- etcdctl role add root
kubectl -n ${ETCD_NAMESPACE} exec etcd-root-client -- etcdctl user grant-role root root
kubectl -n ${ETCD_NAMESPACE} exec etcd-root-client -- etcdctl auth enable
```
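To confirm that authentication is now enforced, the root client can query the auth status (assuming the `etcd-root-client` pod is configured with the root certificates created earlier):
```bash
# Expected to report: Authentication Status: true
kubectl -n ${ETCD_NAMESPACE} exec etcd-root-client -- etcdctl auth status
```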
## Install Kamaji controller
Currently, the behaviour of the Kamaji controller for Tenant Control Planes is controlled by (in this order):
- CLI flags
- Environment variables
- Configuration file `kamaji.yaml` built into the image
By default, Kamaji searches for the configuration file and uses the parameters found inside it. Any environment variables that are passed override the configuration file parameters; if a CLI flag is also passed, it overrides both the environment variables and the configuration file.
There are multiple ways to deploy the Kamaji controller:
- Use the single YAML file installer
- Use Kustomize with Makefile
- Use the Kamaji Helm Chart
The Kamaji controller needs to access the multi-tenant `etcd` in order to provision the access for tenant `kube-apiserver`.
The Kamaji controller needs to access a multi-tenant `etcd` in order to provision the access for the tenant `kube-apiserver`. The multi-tenant `etcd` cluster is deployed as a three-replica StatefulSet in the admin cluster. Data persistence for the multi-tenant `etcd` cluster is required. The Helm [Chart](../helm/kamaji/) provides the installation of an internal `etcd`; however, an externally managed `etcd` is highly recommended. If you'd like to use an external one, you can specify the overrides and set the value `etcd.deploy=false`.
Create the secrets containing the `etcd` certificates
### Install with Helm Chart
Install with `helm` into a dedicated namespace of the Admin cluster:
```bash
kubectl create namespace kamaji-system
kubectl -n kamaji-system create secret generic etcd-certs \
--from-file=kamaji/etcd/ca.crt \
--from-file=kamaji/etcd/ca.key
kubectl -n kamaji-system create secret tls root-client-certs \
--cert=kamaji/etcd/root.pem \
--key=kamaji/etcd/root-key.pem
helm install --create-namespace --namespace kamaji-system kamaji ../helm/kamaji
```
### Install with a single manifest
Install with the single YAML file installer:
The Kamaji controller and the multi-tenant `etcd` are now running:
```bash
kubectl -n kamaji-system apply -f ../config/install.yaml
kubectl -n kamaji-system get pods
NAME READY STATUS RESTARTS AGE
etcd-0 1/1 Running 0 120m
etcd-1 1/1 Running 0 120m
etcd-2 1/1 Running 0 119m
kamaji-857fcdf599-4fb2p 2/2 Running 0 120m
```
Make sure to patch the `etcd` endpoints of the Kamaji controller according to your environment:
```bash
cat > patch-deploy.yaml <<EOF
spec:
template:
spec:
containers:
- name: manager
args:
- --health-probe-bind-address=:8081
- --metrics-bind-address=127.0.0.1:8080
- --leader-elect
- --etcd-endpoints=etcd-0.etcd.${ETCD_NAMESPACE}.svc.cluster.local:2379,etcd-1.etcd.${ETCD_NAMESPACE}.svc.cluster.local:2379,etcd-2.etcd.${ETCD_NAMESPACE}.svc.cluster.local:2379
EOF
kubectl -n kamaji-system patch \
deployment kamaji-controller-manager \
--patch-file patch-deploy.yaml
```
The Kamaji Tenant Control Plane controller is now running on the Admin Cluster:
```bash
kubectl -n kamaji-system get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
operator-controller-manager 1/1 1 1 14h
```
You turned your cluster into a Kamaji cluster to run multiple Tenant Control Planes.
You just turned your Kubernetes cluster into a Kamaji cluster to run multiple Tenant Control Planes.
## Create Tenant Cluster
### Create a tenant control plane
### Tenant Control Plane
Create a tenant control plane of example
An example Tenant Control Plane looks like:
```yaml
cat > ${TENANT_NAMESPACE}-${TENANT_NAME}-tcp.yaml <<EOF
@@ -367,16 +145,31 @@ spec:
cpu: 100m
memory: 128Mi
EOF
```
Kamaji implements [konnectivity](https://kubernetes.io/docs/concepts/architecture/control-plane-node-communication/) as a sidecar container of the tenant control plane pod, exposed through the same service on port `8132`. It is required when the worker nodes are not directly reachable from the tenant control plane, and it is enabled by default.
```bash
kubectl create namespace ${TENANT_NAMESPACE}
kubectl apply -f ${TENANT_NAMESPACE}-${TENANT_NAME}-tcp.yaml
```
A tenant control plane control plane is now running and it is exposed through a service like this:
After a few minutes, check the created resources in the tenant's namespace; when ready, the output will look similar to the following:
```bash
kubectl -n tenants get tcp,deploy,pods,svc
NAME VERSION STATUS CONTROL-PLANE-ENDPOINT KUBECONFIG AGE
tenantcontrolplane.kamaji.clastix.io/tenant-00 v1.23.1 Ready 192.168.32.240:6443 tenant-00-admin-kubeconfig 2m20s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/tenant-00 3/3 3 3 118s
NAME READY STATUS RESTARTS AGE
pod/tenant-00-58847c8cdd-7hc4n 4/4 Running 0 82s
pod/tenant-00-58847c8cdd-ft5xt 4/4 Running 0 82s
pod/tenant-00-58847c8cdd-shc7t 4/4 Running 0 82s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/tenant-00 LoadBalancer 10.32.132.241 192.168.32.240 6443:32152/TCP,8132:32713/TCP 2m20s
```
The regular Tenant Control Plane containers `kube-apiserver`, `kube-controller-manager`, and `kube-scheduler` run unchanged in the `tcp` pods instead of on dedicated machines, and they are exposed through a service on port `6443` of the worker nodes in the Admin cluster.
```yaml
apiVersion: v1
@@ -402,12 +195,21 @@ spec:
type: LoadBalancer
```
The `LoadBalancer` service type is used to expose the Tenant Control Plane. However, `NodePort` and `ClusterIP` with an Ingress Controller are still viable options, depending on the case. High Availability and rolling updates of the Tenant Control Plane are provided by the `tcp` Deployment and all the resources reconciled by the Kamaji controller.
### Konnectivity
In addition to the standard control plane containers, Kamaji creates an instance of [konnectivity-server](https://kubernetes.io/docs/concepts/architecture/control-plane-node-communication/) running as sidecar container in the `tcp` pod and exposed on port `8132` of the `tcp` service.
This is required when the tenant worker nodes are not reachable from the `tcp` pods. The Konnectivity service consists of two parts: the Konnectivity server in the tenant control plane network and the Konnectivity agents in the tenant worker nodes network. The Konnectivity agents initiate connections to the Konnectivity server and maintain the network connections. After enabling the Konnectivity service, all control-plane-to-node traffic goes through these connections.
> In Kamaji, Konnectivity is enabled by default and can be disabled when not required.
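To see the sidecar next to the regular control plane containers, one can list the containers of the `tcp` pods; a sketch, reusing the `kamaji.clastix.io/soot` label from the service selector above:
```bash
# List the containers of each tenant control plane pod;
# konnectivity-server should show up alongside kube-apiserver and friends
kubectl -n ${TENANT_NAMESPACE} get pods -l kamaji.clastix.io/soot=${TENANT_NAME} \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'
```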
### Working with Tenant Control Plane
Collect the IP address of the loadbalancer service where the Tenant control Plane is exposed:
Collect the external IP address of the `tcp` service:
```bash
TENANT_ADDR=$(kubectl -n ${TENANT_NAMESPACE} get svc ${TENANT_NAME} -o json | jq -r ."status.loadBalancer.ingress[].ip")
TENANT_ADDR=$(kubectl -n ${TENANT_NAMESPACE} get svc ${TENANT_NAME} -o json | jq -r '.spec.loadBalancerIP')
```
and check it out:
@@ -417,7 +219,7 @@ curl -k https://${TENANT_ADDR}:${TENANT_PORT}/healthz
curl -k https://${TENANT_ADDR}:${TENANT_PORT}/version
```
Let's retrieve the `kubeconfig` in order to work with the tenant control plane.
The `kubeconfig` required to access the Tenant Control Plane is stored in a secret:
```bash
kubectl get secrets -n ${TENANT_NAMESPACE} ${TENANT_NAME}-admin-kubeconfig -o json \
@@ -428,6 +230,15 @@ kubectl get secrets -n ${TENANT_NAMESPACE} ${TENANT_NAME}-admin-kubeconfig -o js
and let's check it out:
```bash
kubectl --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig cluster-info
Kubernetes control plane is running at https://192.168.32.240:6443
CoreDNS is running at https://192.168.32.240:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
```
Check out how the Tenant Control Plane advertises itself to workloads:
```bash
kubectl --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig get svc
@@ -435,8 +246,6 @@ NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.32.0.1 <none> 443/TCP 6m
```
Check out how the Tenant Control Plane advertises itself to workloads:
```bash
kubectl --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig get ep
@@ -444,6 +253,8 @@ NAME ENDPOINTS AGE
kubernetes 192.168.32.240:6443 18m
```
And make sure it is `${TENANT_ADDR}:${TENANT_PORT}`.
### Preparing Worker Nodes to join
Currently, Kamaji does not provide any helper for the creation of tenant worker nodes. You should get a set of machines from your infrastructure provider, turn them into worker nodes, and then join them to the tenant control plane with `kubeadm`, as sketched below. In the future, we'll provide integration with Cluster API and other IaC tools.
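A minimal sketch of the join flow, assuming the admin kubeconfig retrieved earlier and worker machines already provisioned with a container runtime, `kubelet`, and `kubeadm`:
```bash
# On the bootstrap machine: generate a join command against the tenant control plane
kubeadm token create --print-join-command \
  --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig

# Then run the printed command on each worker machine, e.g.:
# sudo kubeadm join ${TENANT_ADDR}:${TENANT_PORT} --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```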
@@ -640,7 +451,7 @@ ETag: "60aced88-264"
Accept-Ranges: bytes
```
## Cleanup Tenant cluster
## Cleanup
Remove the worker nodes joined to the tenant control plane:
```bash