feat(docs): document cluster-api usage

Author: bsctl
Date: 2025-02-25 18:37:38 +01:00
Committed by: Adriano Pezzuto
Parent: a8f8582ea6
Commit: 620647b2da

9 changed files with 658 additions and 10 deletions


@@ -0,0 +1,127 @@
# Cluster Autoscaler
The [Cluster Autoscaler](https://github.com/kubernetes/autoscaler) is a tool that automatically adjusts the size of a Kubernetes cluster so that all pods have a place to run and there are no unneeded nodes.
When pods are unschedulable because there are not enough resources, Cluster Autoscaler scales up the cluster. When nodes are underutilized, Cluster Autoscaler scales down the cluster.
Cluster API supports the Cluster Autoscaler. See the [Cluster Autoscaler on Cluster API](https://cluster-api.sigs.k8s.io/tasks/automated-machine-management/autoscaling) for more information.
## Getting started with the Cluster Autoscaler on Kamaji
Kamaji supports the Cluster Autoscaler through Cluster API. There are several ways to run the Cluster Autoscaler with Cluster API. In this guide, we leverage the unique features of Kamaji to run the Cluster Autoscaler as part of the Hosted Control Plane.
In other words, the Cluster Autoscaler runs as a pod in the Kamaji Management Cluster, side by side with the Tenant Control Plane pods, and connects directly to the API server of the workload cluster, hiding sensitive data and information from the tenant. This is achieved by mounting the kubeconfig of the tenant cluster into the Cluster Autoscaler pod.
### Create the workload cluster
Create a workload cluster using the Kamaji Control Plane Provider and the Infrastructure Provider of choice. The following example creates a workload cluster using the vSphere Infrastructure Provider:
The template file [`capi-kamaji-vsphere-autoscaler-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/capi-kamaji-vsphere-autoscaler-template.yaml) provides a full example of a cluster with the autoscaler enabled. You can generate the cluster manifest using `clusterctl`.
First, list all the variables used in the template file:
```bash
cat capi-kamaji-vsphere-autoscaler-template.yaml | clusterctl generate yaml --list-variables
```
Fill them with the desired values and generate the manifest:
```bash
clusterctl generate yaml \
--from capi-kamaji-vsphere-autoscaler-template.yaml \
> capi-kamaji-vsphere-cluster.yaml
```
Apply the generated manifest to create the workload cluster:
```bash
kubectl apply -f capi-kamaji-vsphere-cluster.yaml
```
### Install the Cluster Autoscaler
Install the Cluster Autoscaler via Helm in the Management Cluster, in the same namespace where the workload cluster is deployed.
!!! info "Options for installing the Cluster Autoscaler"
    The Cluster Autoscaler works against a single cluster: every cluster must have its own Cluster Autoscaler instance. This can be addressed by leveraging Project Sveltos automations to deploy a Cluster Autoscaler instance for each Kamaji Cluster API instance.
```bash
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm repo update
helm upgrade --install ${CLUSTER_NAME}-autoscaler autoscaler/cluster-autoscaler \
--set cloudProvider=clusterapi \
--set autoDiscovery.namespace=default \
--set "autoDiscovery.labels[0].autoscaling=enabled" \
--set clusterAPIKubeconfigSecret=${CLUSTER_NAME}-kubeconfig \
--set clusterAPIMode=kubeconfig-incluster
```
The `autoDiscovery.labels` values are used to dynamically select the clusters to autoscale.
These labels must be set on the workload cluster, in the `Cluster` and `MachineDeployment` resources.
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: sample
    # Cluster Autoscaler labels
    autoscaling: enabled
  name: sample
  # other fields omitted for brevity
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  annotations:
    # Cluster Autoscaler annotations
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "0"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "6"
  labels:
    cluster.x-k8s.io/cluster-name: sample
    # Cluster Autoscaler labels
    autoscaling: enabled
  name: sample-md-0
  # other fields omitted for brevity
---
# other Cluster API resources omitted for brevity
```
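Once the chart is installed, you can confirm that the autoscaler pod is up in the Management Cluster before moving on. The exact deployment name depends on the Helm release name, so the commands below are a sketch:
```bash
# List the autoscaler pods in the namespace of the workload cluster
kubectl get pods -n default | grep autoscaler

# Tail the autoscaler logs to confirm it discovered the workload cluster
# (replace <autoscaler-deployment> with the deployment created by your release)
kubectl logs -n default deploy/<autoscaler-deployment> -f
```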
### Verify the Cluster Autoscaler
To verify the Cluster Autoscaler is working as expected, deploy a workload with CPU requirements in the Tenant Cluster to simulate resource pressure:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-node
  name: hello-node
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - image: quay.io/google-containers/pause-amd64:3.0
        imagePullPolicy: IfNotPresent
        name: pause-amd64
        resources:
          limits:
            cpu: 500m
```
Apply the workload to the Tenant Cluster and simulate a load spike by increasing the replicas. The Cluster Autoscaler should scale up the cluster to accommodate the workload. Cooldown times must be configured properly on a per-cluster basis.
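For example, assuming the `hello-node` deployment above and a workload cluster named `sample`, a load spike can be simulated as follows (the kubeconfig path is illustrative):
```bash
# In the tenant cluster: scale the workload beyond the current node capacity
kubectl --kubeconfig ~/.kube/sample.kubeconfig scale deployment hello-node --replicas=20

# In the Kamaji Management Cluster: watch the autoscaler grow the node group
kubectl get machinedeployments,machines -n default -w
```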
!!! warning "Possible Resource Wasting"
    With the Cluster Autoscaler, new machines are created automatically in a very short time, which can lead to over-provisioning and potentially wasted resources. Read the official Cluster Autoscaler documentation to set correct values according to your infrastructure and provisioning times.


@@ -0,0 +1,104 @@
# Cluster Class
Kamaji supports **ClusterClass**, a simple way to create many clusters of a similar shape. This is useful for creating many clusters with the same configuration, such as a development cluster, a staging cluster, and a production cluster.
!!! warning "Caution!"
    ClusterClass is an experimental feature of Cluster API. As with any experimental feature, it should be used with caution as it may be unreliable. Experimental features are not subject to any compatibility or deprecation policy and are not yet recommended for production use.
You can read more about ClusterClass in the [Cluster API documentation](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-class/).
## Enabling ClusterClass
To enable ClusterClass, set the `CLUSTER_TOPOLOGY` environment variable before running `clusterctl init`. This enables the Cluster API feature gate for ClusterClass.
```bash
export CLUSTER_TOPOLOGY=true
clusterctl init --infrastructure vsphere --control-plane kamaji
```
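As a quick sanity check (not part of the official procedure), you can verify that the ClusterClass API has been installed:
```bash
kubectl get crd clusterclasses.cluster.x-k8s.io
```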
## Creating a ClusterClass
To create a ClusterClass, you need to create a `ClusterClass` custom resource. Here is an example of a `ClusterClass` that creates clusters running the control plane on the Kamaji Management Cluster and the worker nodes on vSphere:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
  name: kamaji-clusterclass
spec:
  controlPlane:
    ref:
      apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
      kind: KamajiControlPlaneTemplate
      name: kamaji-clusterclass-kamaji-control-plane-template
  infrastructure:
    ref:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: VSphereClusterTemplate
      name: kamaji-clusterclass-vsphere-cluster-template
  workers:
    machineDeployments:
    - class: kamaji-clusterclass
      template:
        bootstrap:
          ref:
            apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
            kind: KubeadmConfigTemplate
            name: kamaji-clusterclass-kubeadm-config-template
        infrastructure:
          ref:
            apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
            kind: VSphereMachineTemplate
            name: kamaji-clusterclass-vsphere-machine-template
# other resources omitted for brevity ...
```
The template file [`capi-kamaji-vsphere-class-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/capi-kamaji-vsphere-class-template.yaml) provides a full example of a ClusterClass for vSphere. You can generate a ClusterClass manifest using `clusterctl`.
First, list all the variables used in the template file:
```bash
cat capi-kamaji-vsphere-class-template.yaml | clusterctl generate yaml --list-variables
```
Fill them with the desired values and generate the manifest:
```bash
clusterctl generate yaml \
--from capi-kamaji-vsphere-class-template.yaml \
> capi-kamaji-vsphere-class.yaml
```
Apply the generated manifest to create the ClusterClass:
```bash
kubectl apply -f capi-kamaji-vsphere-class.yaml
```
## Creating a Cluster from a ClusterClass
Once a ClusterClass is created, you can create a Cluster using the ClusterClass. Here is an example of a Cluster that uses the `kamaji-clusterclass`:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: sample
spec:
  topology:
    class: kamaji-clusterclass
    classNamespace: capi-clusterclass
    version: v1.31.0
    controlPlane:
      replicas: 2
    workers:
      machineDeployments:
      - class: kamaji-clusterclass
        name: md-sample
        replicas: 3
# other resources omitted for brevity ...
```
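With a topology-managed cluster, day-2 changes are applied to the `Cluster` resource itself rather than to the underlying objects. As a sketch, scaling the workers of the example above could look like this (note that a merge patch replaces the whole `machineDeployments` list):
```bash
kubectl patch cluster sample --type merge \
  -p '{"spec":{"topology":{"workers":{"machineDeployments":[{"class":"kamaji-clusterclass","name":"md-sample","replicas":5}]}}}}'
```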
Always refer to the [Cluster API documentation](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-class/) for the most up-to-date information on ClusterClass.


@@ -0,0 +1,98 @@
# Kamaji Control Plane Provider
Kamaji can act as a Cluster API Control Plane provider using the `KamajiControlPlane` custom resource, which defines the control plane of a Tenant Cluster.
Here is an example of a `KamajiControlPlane`:
```yaml
kind: KamajiControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
metadata:
  name: '${CLUSTER_NAME}'
  namespace: '${CLUSTER_NAMESPACE}'
spec:
  apiServer:
    extraArgs:
    - --cloud-provider=external
  controllerManager:
    extraArgs:
    - --cloud-provider=external
  dataStoreName: default
  addons:
    coreDNS: {}
    kubeProxy: {}
    konnectivity: {}
  kubelet:
    cgroupfs: systemd
    preferredAddressTypes:
    - InternalIP
  network:
    serviceType: LoadBalancer
  version: ${KUBERNETES_VERSION}
```
You can then reference it from a standard `Cluster` custom resource as the control plane provider:
```yaml
kind: Cluster
apiVersion: cluster.x-k8s.io/v1beta1
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: '${CLUSTER_NAME}'
  name: '${CLUSTER_NAME}'
  namespace: '${CLUSTER_NAMESPACE}'
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
    kind: KamajiControlPlane
    name: '${CLUSTER_NAME}'
  clusterNetwork:
    pods:
      cidrBlocks:
      - '${PODS_CIDR}'
    services:
      cidrBlocks:
      - '${SERVICES_CIDR}'
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: ... # your infrastructure kind may vary
    name: '${CLUSTER_NAME}'
```
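Once applied and reconciled, the Kamaji provider materializes the control plane as a `TenantControlPlane` in the target namespace, which you can inspect from the Management Cluster:
```bash
# The Cluster API view of the control plane
kubectl get kamajicontrolplanes -n ${CLUSTER_NAMESPACE}

# The underlying Kamaji Tenant Control Plane
kubectl get tcp -n ${CLUSTER_NAMESPACE}
```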
!!! info "Full Reference"
    For a full reference of the `KamajiControlPlane` custom resource, please see the [Reference APIs](reference/api.md).
## Getting started with the Kamaji Control Plane Provider
Cluster API Provider Kamaji is compliant with the `clusterctl` contract, which means you can use it with the `clusterctl` CLI to create and manage your Kamaji-based clusters.
!!! info "Options for installing Cluster API"
    There are two ways to get started with Cluster API:
    * using `clusterctl` to install the Cluster API components;
    * using the Cluster API Operator. Please refer to the [Cluster API Operator](https://cluster-api-operator.sigs.k8s.io/) guide for this option.
### Prerequisites
* [`clusterctl`](https://cluster-api.sigs.k8s.io/user/quick-start#install-clusterctl) installed on your workstation to handle the lifecycle of your clusters.
* [`kubectl`](https://kubernetes.io/docs/tasks/tools/) installed on your workstation to interact with your clusters.
* [Kamaji](../getting-started/getting-started.md) installed in your Management Cluster.
### Initialize the Management Cluster
Use `clusterctl` to initialize the Management Cluster. When executed for the first time, `clusterctl init` fetches and installs the Cluster API components in the Management Cluster.
```bash
clusterctl init --control-plane kamaji
```
As a result, the following Cluster API components will be installed:
* Cluster API Provider in `capi-system` namespace
* Bootstrap Provider in `capi-kubeadm-bootstrap-system` namespace
* Kamaji Control Plane Provider in `kamaji-system` namespace
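You can verify that all controllers are up and running in their respective namespaces:
```bash
kubectl get deployments -n capi-system
kubectl get deployments -n capi-kubeadm-bootstrap-system
kubectl get deployments -n kamaji-system
```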
In the next step, we will create a fully functional Kubernetes cluster using the Kamaji Control Plane Provider and the Infrastructure Provider of your choice.
For a complete list of supported infrastructure providers, please refer to the [other providers](other-providers.md) page.


@@ -0,0 +1,11 @@
# Cluster APIs Support
The [Cluster API](https://github.com/kubernetes-sigs/cluster-api) brings declarative, Kubernetes-style APIs to the creation, configuration, and management of Kubernetes clusters. If you're not familiar with the Cluster API project, you can learn more from the [official documentation](https://cluster-api.sigs.k8s.io/).
Users can utilize Kamaji in two distinct ways:
* **Standalone:** Kamaji can be used as a standalone Kubernetes Operator installed in the Management Cluster to manage multiple Tenant Control Planes. Worker nodes of Tenant Clusters can join any infrastructure, whether cloud, data center, or edge, using various automation tools such as _Ansible_, _Terraform_, or even manually with any script calling `kubeadm`. See [yaki](https://goyaki.clastix.io/) as an example.
* **Cluster API Provider:** Kamaji can be used as a [Cluster API Control Plane Provider](https://cluster-api.sigs.k8s.io/reference/providers#control-plane) to manage multiple Tenant Control Planes across various infrastructures. Kamaji offers seamless integration with the most popular [Cluster API Infrastructure Providers](https://cluster-api.sigs.k8s.io/reference/providers#infrastructure).
Check the currently supported infrastructure providers and the roadmap on the related [repository](https://github.com/clastix/cluster-api-control-plane-provider-kamaji).


@@ -0,0 +1,21 @@
# Other Infra Providers
Kamaji offers seamless integration with the most popular [Cluster API Infrastructure Providers](https://cluster-api.sigs.k8s.io/reference/providers#infrastructure):
- AWS
- Azure
- Google Cloud
- Equinix/Packet
- Hetzner
- KubeVirt
- Metal³
- Nutanix
- OpenStack
- Tinkerbell
- vSphere
- IONOS Cloud
- Proxmox by IONOS Cloud
For the most up-to-date information and technical considerations, please always check the related [repository](https://github.com/clastix/cluster-api-control-plane-provider-kamaji).


@@ -0,0 +1,287 @@
# vSphere Infra Provider
Use the [vSphere Infrastructure Provider](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere) to create a fully functional Kubernetes cluster on **vSphere** using the [Kamaji Control Plane Provider](https://github.com/clastix/cluster-api-control-plane-provider-kamaji).
!!! info "Control Plane and Infrastructure Decoupling"
    Kamaji decouples the Control Plane from the infrastructure, so the Kamaji Management Cluster hosting the Tenant Control Plane does not need to be on the same vSphere as the worker machines. As long as network reachability is satisfied, you can have your Kamaji Management Cluster on a different vSphere or even on a different cloud provider.
## vSphere Requirements
You need to access a **vSphere** environment with the following requirements:
- The vSphere environment should be configured with a DHCP service in the primary VM network for your tenant clusters. Alternatively, you can use an [IPAM Provider](https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster).
- Configure one Resource Pool across the hosts onto which the tenant clusters will be provisioned. Every host in the Resource Pool needs access to shared storage.
- A Template VM based on the published [OVA images](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere). For production-like environments, it is highly recommended to build and use custom OVA images. Take a look at the [image-builder](https://github.com/kubernetes-sigs/image-builder) project.
- To use the vSphere Container Storage Interface (CSI), your vSphere cluster needs support for Cloud Native Storage (CNS). CNS relies on a shared datastore. Ensure that your vSphere environment is properly configured to support CNS.
## Install the vSphere Infrastructure Provider
In order to use the vSphere Cluster API provider, you must be able to connect and authenticate to a **vCenter**. Ensure you have credentials for your vCenter server:
```bash
export VSPHERE_USERNAME="admin@vsphere.local"
export VSPHERE_PASSWORD="*******"
```
Install the vSphere Infrastructure Provider:
```bash
clusterctl init --infrastructure vsphere
```
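By default, the provider controllers land in the `capv-system` namespace; a quick check that the installation succeeded:
```bash
kubectl get pods -n capv-system
```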
## Install the IPAM Provider
If you intend to use IPAM to assign addresses to the nodes, you can use the in-cluster [IPAM provider](https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster) instead of relying on a DHCP service. To do so, initialize the Management Cluster with the `--ipam in-cluster` flag:
```bash
clusterctl init --ipam in-cluster
```
## Create a Tenant Cluster
Once all the controllers are up and running in the Management Cluster, you can generate and apply the cluster manifest of the tenant cluster you want to provision.
### Generate the Cluster Manifest using the template
Using `clusterctl`, you can generate a tenant cluster manifest for your vSphere environment. Set the environment variables to match your vSphere configuration.
For example:
```bash
# vSphere Configuration
export VSPHERE_SERVER="vcenter.vsphere.local"
export VSPHERE_DATACENTER="SDDC-Datacenter"
export VSPHERE_DATASTORE="DefaultDatastore"
export VSPHERE_NETWORK="VM Network"
export VSPHERE_RESOURCE_POOL="*/Resources"
export VSPHERE_FOLDER="kamaji-capi-pool"
export VSPHERE_TLS_THUMBPRINT="..."
export VSPHERE_STORAGE_POLICY="vSAN Storage Policy"
```
If you intend to use IPAM, set the environment variables to match your IPAM configuration.
For example:
```bash
# IPAM Configuration
export NODE_IPAM_POOL_RANGE="10.9.62.100-10.9.62.200"
export NODE_IPAM_POOL_PREFIX="24"
export NODE_IPAM_POOL_GATEWAY="10.9.62.1"
```
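For reference, the template is expected to render these variables into an `InClusterIPPool` along the lines of the following sketch (based on the in-cluster IPAM provider's `v1alpha2` API; the pool name is illustrative, so check the generated manifest for the authoritative version):
```bash
# Illustrative only: how the IPAM variables typically map to an InClusterIPPool
cat <<EOF
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: sample-node-pool
spec:
  addresses:
  - ${NODE_IPAM_POOL_RANGE}
  prefix: ${NODE_IPAM_POOL_PREFIX}
  gateway: ${NODE_IPAM_POOL_GATEWAY}
EOF
```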
Set the environment variables to match your cluster configuration.
For example:
```bash
# Cluster Configuration
export CLUSTER_NAME="sample"
export CLUSTER_NAMESPACE="default"
export POD_CIDR="10.36.0.0/16"
export SVC_CIDR="10.96.0.0/16"
export CONTROL_PLANE_REPLICAS=2
export NAMESERVER="8.8.8.8"
export KUBERNETES_VERSION="v1.31.0"
export CPI_IMAGE_VERSION="v1.31.0"
```
Set the environment variables to match your machine configuration.
For example:
```bash
# Machine Configuration
export MACHINE_TEMPLATE="ubuntu-2404-kube-v1.31.0"
export MACHINE_DEPLOY_REPLICAS=2
export NODE_DISK_SIZE=25
export NODE_MEMORY_SIZE=8192
export NODE_CPU_COUNT=2
export SSH_USER="clastix"
export SSH_AUTHORIZED_KEY="ssh-rsa AAAAB3N..."
```
The following command will generate a cluster manifest based on the [`capi-kamaji-vsphere-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/capi-kamaji-vsphere-template.yaml) template file:
```bash
clusterctl generate cluster $CLUSTER_NAME \
--from capi-kamaji-vsphere-template.yaml \
> capi-kamaji-vsphere-cluster.yaml
```
If you want to use DHCP instead of IPAM, use the [`capi-kamaji-vsphere-dhcp-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/capi-kamaji-vsphere-dhcp-template.yaml) template file:
```bash
clusterctl generate cluster $CLUSTER_NAME \
--from capi-kamaji-vsphere-dhcp-template.yaml \
> capi-kamaji-vsphere-cluster.yaml
```
### Additional cloud-init configuration
Cluster API machine bootstrapping relies on `cloud-init`. You can add extra `cloud-init` configuration to further customize the worker nodes by including an additional `cloud-init` file in the `KubeadmConfigTemplate`:
```yaml
kind: KubeadmConfigTemplate
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
metadata:
  name: ${CLUSTER_NAME}-md-0
spec:
  template:
    spec:
      files:
      - path: "/etc/cloud/cloud.cfg.d/99-custom.cfg"
        content: "${CLOUD_INIT_CONFIG:-}"
        owner: "root:root"
        permissions: "0644"
```
You can then set the `CLOUD_INIT_CONFIG` environment variable to include the additional configuration:
```bash
export CLOUD_INIT_CONFIG="#cloud-config
package_update: true
packages:
- net-tools"
```
then generate the cluster manifest as usual, so that `clusterctl` substitutes the variable:
```bash
clusterctl generate cluster $CLUSTER_NAME \
--from capi-kamaji-vsphere-template.yaml \
> capi-kamaji-vsphere-cluster.yaml
```
### Apply the Cluster Manifest
Apply the generated cluster manifest to create the tenant cluster:
```bash
kubectl apply -f capi-kamaji-vsphere-cluster.yaml
```
You can check the status of the cluster deployment with `clusterctl`:
```bash
clusterctl describe cluster $CLUSTER_NAME
```
You can check the status of the tenant cluster with `kubectl`:
```bash
kubectl get clusters -n default
```
and the related Tenant Control Plane created in the Kamaji Management Cluster:
```bash
kubectl get tcp -n default
```
## Access the Tenant Cluster
To access the tenant cluster, you can extract the `kubeconfig` file from the Kamaji Management Cluster:
```bash
clusterctl get kubeconfig $CLUSTER_NAME \
> ~/.kube/$CLUSTER_NAME.kubeconfig
```
and use it to access the tenant cluster:
```bash
export KUBECONFIG=~/.kube/$CLUSTER_NAME.kubeconfig
kubectl cluster-info
```
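Worker machines can take a few minutes to boot and join; you can watch them register from the tenant cluster:
```bash
kubectl get nodes -o wide -w
```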
## Cloud Controller Manager
The template file `capi-kamaji-vsphere-template.yaml` includes the external [Cloud Controller Manager (CCM)](https://github.com/kubernetes/cloud-provider-vsphere) configuration for vSphere. The CCM is a Kubernetes controller that manages the cloud provider's resources.
Usually, the CCM is deployed on control plane nodes, but Kamaji has no control plane nodes, so the CCM is deployed on the worker nodes as a DaemonSet.
As an alternative, you can deploy the CCM as part of the Hosted Control Plane on the Management Cluster. To do so, the template file [`capi-kamaji-vsphere-template-ccm.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/capi-kamaji-vsphere-template-ccm.yaml) includes the configuration for running the CCM as part of the Kamaji Control Plane. This approach provides security benefits by isolating vSphere credentials from tenant users while maintaining full Cluster API integration.
The following command will generate a cluster manifest with the CCM installed on the Management Cluster:
```bash
clusterctl generate cluster $CLUSTER_NAME \
--from capi-kamaji-vsphere-template-ccm.yaml \
> capi-kamaji-vsphere-cluster.yaml
```
Apply the generated cluster manifest to create the tenant cluster:
```bash
kubectl apply -f capi-kamaji-vsphere-cluster.yaml
```
## vSphere CSI Driver
The template file `capi-kamaji-vsphere-template-csi.yaml` includes the [vSphere CSI Driver](https://github.com/kubernetes-sigs/vsphere-csi-driver) configuration for vSphere. The vSphere CSI Driver is a Container Storage Interface (CSI) driver that provides a way to use vSphere storage with Kubernetes.
This template file introduces a *"split configuration"* for the vSphere CSI Driver: the CSI node driver is deployed on the worker nodes as a DaemonSet, while the CSI Controller Manager is deployed on the Management Cluster as part of the Hosted Control Plane. This way, no vSphere credentials are required on the tenant cluster.
This split architecture enables:
* Tenant isolation from vSphere credentials
* Simplified networking requirements
* Centralized controller management
The template file also includes a default storage class for the vSphere CSI Driver.
Set the environment variables to match your storage configuration.
For example:
```bash
# Storage Configuration
export CSI_INSECURE="false"
export CSI_LOG_LEVEL="PRODUCTION" # or "DEVELOPMENT"
export CSI_STORAGE_CLASS_NAME="vsphere-csi"
```
The following command will generate a cluster manifest with split configuration for the vSphere CSI Driver:
```bash
clusterctl generate cluster $CLUSTER_NAME \
--from capi-kamaji-vsphere-template-csi.yaml \
> capi-kamaji-vsphere-cluster.yaml
```
Apply the generated cluster manifest to create the tenant cluster:
```bash
kubectl apply -f capi-kamaji-vsphere-cluster.yaml
```
## Delete the Tenant Cluster
For cluster deletion, use the following command:
```bash
kubectl delete cluster sample
```
Always use `kubectl delete cluster $CLUSTER_NAME` to delete the tenant cluster. Using `kubectl delete -f capi-kamaji-vsphere-cluster.yaml` may lead to orphaned resources in some scenarios, as this method doesn't always respect ownership references between resources that were created after the initial deployment.
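Deletion is asynchronous: Cluster API deprovisions the machines before removing the cluster object. You can follow the teardown from the Management Cluster:
```bash
kubectl get machines -n default -w
```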
## Install the Tenant Cluster as Helm Release
Another option to create a Tenant Cluster is to use the Helm Chart [cluster-api-kamaji-vsphere](https://github.com/clastix/cluster-api-kamaji-vsphere).
!!! warning "Advanced Usage"
    This Helm Chart provides several additional configuration options to customize the Tenant Cluster. Please refer to its documentation for more information. Make sure you are comfortable with Cluster API concepts and Kamaji before attempting to use it.
Create a Tenant Cluster as Helm Release:
```bash
helm repo add clastix https://clastix.github.io/cluster-api-kamaji-vsphere
helm repo update
helm install sample clastix/cluster-api-kamaji-vsphere \
--set cluster.name=sample \
--namespace default \
--values my-values.yaml
```
where `my-values.yaml` is a file containing the configuration values for the Tenant Cluster.


@@ -274,7 +274,7 @@ The Tenant Control Plane is made of pods running in the Kamaji Management Cluste
!!! warning "Opening Ports"
To make sure worker nodes can join the Tenant Control Plane, you must allow incoming connections to: `${TENANT_ADDR}:${TENANT_PORT}` and `${TENANT_ADDR}:${TENANT_PROXY_PORT}`
-Kamaji does not provide any helper for creation of tenant worker nodes, instead it leverages the [Cluster Management API](https://github.com/kubernetes-sigs/cluster-api). This allows you to create the Tenant Clusters, including worker nodes, in a completely declarative way. Refer to the [Cluster API guide](guides/cluster-api.md) to learn more about supported providers.
+Kamaji does not provide any helper for creation of tenant worker nodes; instead, it leverages the [Cluster API](https://github.com/kubernetes-sigs/cluster-api). This allows you to create the Tenant Clusters, including worker nodes, in a completely declarative way. Refer to the [Cluster API guide](guides/cluster-api/index.md) to learn more about Cluster API support in Kamaji.
An alternative approach for joining nodes is to use the `kubeadm` command on each node. Follow the related [documentation](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) in order to:
@@ -308,7 +308,7 @@ done
```
!!! tip "yaki"
-This manual process can be further automated to handle the node prerequisites and joining. See [yaki](https://github.com/clastix/yaki) script, which you could modify for your preferred operating system and version. The provided script is just a facility: it assumes all worker nodes are running `Ubuntu 22.04`. Make sure to adapt the script if you're using a different distribution.
+This manual process can be further automated to handle the node prerequisites and joining. See the [yaki](https://goyaki.clastix.io/) script, which you could modify for your preferred operating system and version. The provided script is just a facility: it assumes all worker nodes are running `Ubuntu`. Make sure to adapt the script if you're using a different OS distribution.
Checking the nodes:


@@ -1,6 +0,0 @@
-# Cluster APIs Support
-The [Cluster API](https://github.com/kubernetes-sigs/cluster-api) brings declarative, Kubernetes-style APIs to creation of Kubernetes clusters, including configuration and management.
-Kamaji offers seamless integration with the most popular Cluster API Infrastructure Providers. Check the currently supported providers and the roadmap on the related [reposistory](https://github.com/clastix/cluster-api-control-plane-provider-kamaji).


@@ -9,7 +9,7 @@ site_author: bsctl
site_description: >-
  Kamaji deploys and operates Kubernetes Control Plane at scale with a fraction of the operational burden.
-copyright: Copyright © 2020 - 2023 Clastix Labs
+copyright: Copyright © 2020 - 2025 Clastix Labs
theme:
  name: material
@@ -60,6 +60,13 @@ nav:
    - getting-started/getting-started.md
    - getting-started/kind.md
  - 'Concepts': concepts.md
+  - 'Cluster API':
+    - cluster-api/index.md
+    - cluster-api/control-plane-provider.md
+    - cluster-api/vsphere-infra-provider.md
+    - cluster-api/cluster-class.md
+    - cluster-api/cluster-autoscaler.md
+    - cluster-api/other-providers.md
  - 'Guides':
    - guides/index.md
    - guides/kamaji-azure-deployment.md
@@ -70,7 +77,6 @@ nav:
    - guides/datastore-migration.md
    - guides/backup-and-restore.md
    - guides/certs-lifecycle.md
-    - guides/cluster-api.md
    - guides/console.md
  - 'Use Cases': use-cases.md
  - 'Reference':