feat(docs): refactoring (#784)

* feat(docs): add landing page

* feat(docs): refactoring
This commit is contained in:
Adriano Pezzuto
2025-04-16 11:13:35 +02:00
committed by GitHub
parent 223aa6d4c9
commit 69141e5765
42 changed files with 828 additions and 323 deletions

View File

@@ -18,7 +18,7 @@ export KAMAJI_NAMESPACE=kamaji-system
export TENANT_NAMESPACE=tenant-00
export TENANT_NAME=tenant-00
export TENANT_DOMAIN=internal.kamaji.aws.com
export TENANT_VERSION=v1.30.2
export TENANT_VERSION=v1.31.0
export TENANT_PORT=6443 # port used to expose the tenant api server
export TENANT_PROXY_PORT=8132 # port used to expose the konnectivity server
export TENANT_POD_CIDR=10.36.0.0/16

View File

@@ -15,7 +15,7 @@ export KAMAJI_NAMESPACE=kamaji-system
export TENANT_NAMESPACE=default
export TENANT_NAME=tenant-00
export TENANT_DOMAIN=$KAMAJI_REGION.cloudapp.azure.com
export TENANT_VERSION=v1.26.0
export TENANT_VERSION=v1.31.0
export TENANT_PORT=6443 # port used to expose the tenant api server
export TENANT_PROXY_PORT=8132 # port used to expose the konnectivity server
export TENANT_POD_CIDR=10.36.0.0/16

View File

@@ -5,7 +5,7 @@ export KAMAJI_NAMESPACE=kamaji-system
export TENANT_NAMESPACE=default
export TENANT_NAME=tenant-00
export TENANT_DOMAIN=clastix.labs
export TENANT_VERSION=v1.26.0
export TENANT_VERSION=v1.31.0
export TENANT_PORT=6443 # port used to expose the tenant api server
export TENANT_PROXY_PORT=8132 # port used to expose the konnectivity server
export TENANT_POD_CIDR=10.36.0.0/16

View File

@@ -16,7 +16,7 @@ In other words, the Cluster Autoscaler is running as a pod in the Kamaji Managem
Create a workload cluster using the Kamaji Control Plane Provider and the Infrastructure Provider of choice. The following example creates a workload cluster using the vSphere Infrastructure Provider:
The template file [`capi-kamaji-vsphere-autoscaler-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/capi-kamaji-vsphere-autoscaler-template.yaml) provides a full example of a cluster with autoscaler enabled. You can generate the cluster manifest using `clusterctl`.
The template file [`capi-kamaji-vsphere-autoscaler-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/vsphere/capi-kamaji-vsphere-autoscaler-template.yaml) provides a full example of a cluster with autoscaler enabled. You can generate the cluster manifest using `clusterctl`.
Before generating the manifest, list all the variables required by the template file:
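A minimal sketch with `clusterctl` (the cluster name is a placeholder; the template URL is the one referenced above):

```bash
# List the variables expected by the autoscaler template before generating the manifest
clusterctl generate cluster sample-cluster \
  --from https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/vsphere/capi-kamaji-vsphere-autoscaler-template.yaml \
  --list-variables
```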

View File

@@ -2,7 +2,7 @@
Kamaji supports **ClusterClass**, a simple way to create many clusters of a similar shape. This is useful for creating many clusters with the same configuration, such as a development cluster, a staging cluster, and a production cluster.
!!! warning "Caution!"
!!! warning "Experimental Feature"
ClusterClass is an experimental feature of Cluster API. As with any experimental feature, it should be used with caution as it may be unreliable. Experimental features are not subject to any compatibility or deprecation policy and are not yet recommended for production use.
You can read more about ClusterClass in the [Cluster API documentation](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-class/).
@@ -54,7 +54,7 @@ spec:
# other resources omitted for brevity ...
```
The template file [`capi-kamaji-vsphere-class-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/capi-kamaji-vsphere-class-template.yaml) provides a full example of a ClusterClass for vSphere. You can generate a ClusterClass manifest using `clusterctl`.
The template file [`capi-kamaji-vsphere-class-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/vsphere/capi-kamaji-vsphere-class-template.yaml) provides a full example of a ClusterClass for vSphere. You can generate a ClusterClass manifest using `clusterctl`.
Before generating the manifest, list all the variables required by the template file:

View File

@@ -76,7 +76,7 @@ Cluster API Provider Kamaji is compliant with the `clusterctl` contract, which m
* [`clusterctl`](https://cluster-api.sigs.k8s.io/user/quick-start#install-clusterctl) installed in your workstation to handle the lifecycle of your clusters.
* [`kubectl`](https://kubernetes.io/docs/tasks/tools/) installed in your workstation to interact with your clusters.
* [Kamaji](../getting-started/getting-started.md) installed in your Management Cluster.
* [Kamaji](../getting-started/index.md) installed in your Management Cluster.
### Initialize the Management Cluster

View File

@@ -104,7 +104,7 @@ export SSH_USER="clastix"
export SSH_AUTHORIZED_KEY="ssh-rsa AAAAB3N..."
```
The following command will generate a cluster manifest based on the [`capi-kamaji-vsphere-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/capi-kamaji-vsphere-template.yaml) template file:
The following command will generate a cluster manifest based on the [`capi-kamaji-vsphere-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/vsphere/capi-kamaji-vsphere-template.yaml) template file:
```bash
clusterctl generate cluster $CLUSTER_NAME \
@@ -112,7 +112,7 @@ clusterctl generate cluster $CLUSTER_NAME \
> capi-kamaji-vsphere-cluster.yaml
```
If you want to use DHCP instead of IPAM, use the [`capi-kamaji-vsphere-dhcp-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/capi-kamaji-vsphere-dhcp-template.yaml) template file:
If you want to use DHCP instead of IPAM, use the [`capi-kamaji-vsphere-dhcp-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/vsphere/capi-kamaji-vsphere-dhcp-template.yaml) template file:
```bash
clusterctl generate cluster $CLUSTER_NAME \
@@ -197,11 +197,11 @@ kubectl cluster-info
## Cloud Controller Manager
The template file `capi-kamaji-vsphere-template.yaml` includes the external [Cloud Controller Manager (CCM)](https://github.com/kubernetes/cloud-provider-vsphere) configuration for vSphere. The CCM is a Kubernetes controller that manages the cloud provider's resources.
The template file [`capi-kamaji-vsphere-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/vsphere/capi-kamaji-vsphere-template.yaml) includes the external [Cloud Controller Manager (CCM)](https://github.com/kubernetes/cloud-provider-vsphere) configuration for vSphere. The CCM is a Kubernetes controller that manages the cloud provider's resources.
Usually, the CCM is deployed on control plane nodes, but in Kamaji there are no Control Plane nodes, so the CCM is deployed on the worker nodes as a DaemonSet.
As alternative, you can deploy the CCM as part of the Hosted Control Plane on the Management Cluster. To do so, the template file [`capi-kamaji-vsphere-template-ccm.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/capi-kamaji-vsphere-template-ccm.yaml) includes the configuration for the CCM as part of the Kamaji Control Plane. This approach provides security benefits by isolating vSphere credentials from tenant users while maintaining full Cluster API integration.
As an alternative, you can deploy the CCM as part of the Hosted Control Plane on the Management Cluster. To do so, the template file [`capi-kamaji-vsphere-template-ccm.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/vsphere/capi-kamaji-vsphere-template-ccm.yaml) includes the configuration for the CCM as part of the Kamaji Control Plane. This approach provides security benefits by isolating vSphere credentials from tenant users while maintaining full Cluster API integration.
The following command will generate a cluster manifest with the CCM installed on the Management Cluster:
@@ -219,7 +219,7 @@ kubectl apply -f capi-kamaji-vsphere-cluster.yaml
## vSphere CSI Driver
The template file `capi-kamaji-vsphere-template-csi.yaml` includes the [vSphere CSI Driver](https://github.com/kubernetes-sigs/vsphere-csi-driver) configuration for vSphere. The vSphere CSI Driver is a Container Storage Interface (CSI) driver that provides a way to use vSphere storage with Kubernetes.
The template file [`capi-kamaji-vsphere-template-csi.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/vsphere/capi-kamaji-vsphere-template-csi.yaml) includes the [vSphere CSI Driver](https://github.com/kubernetes-sigs/vsphere-csi-driver) configuration for vSphere. The vSphere CSI Driver is a Container Storage Interface (CSI) driver that provides a way to use vSphere storage with Kubernetes.
This template file introduces a *"split configuration"* for the vSphere CSI Driver, with the CSI driver deployed on the worker nodes as daemonset and the CSI Controller Manager deployed on the Management Cluster as part of the Hosted Control Plane. In this way, no vSphere credentials are required on the tenant cluster.

View File

@@ -1,54 +0,0 @@
# Concepts
**Kamaji** is a **Kubernetes Control Plane Manager**. It operates Kubernetes at scale with a fraction of the operational burden. Kamaji turns any Kubernetes cluster into a _“Management Cluster”_ to orchestrate other Kubernetes clusters called _“Tenant Clusters”_.
These are requirements of the design behind Kamaji:
- Communication between the _“Management Cluster”_ and a _“Tenant Cluster”_ is unidirectional. The _“Management Cluster”_ manages a _“Tenant Cluster”_, but a _“Tenant Cluster”_ has no awareness of the _“Management Cluster”_.
- Communication between different _“Tenant Clusters”_ is not allowed.
- The worker nodes of a tenant should not run anything beyond that tenant's workloads.
Goals and scope may vary as the project evolves.
## Tenant Control Plane
Kamaji is special because the Control Planes of the _“Tenant Clusters”_ are regular pods running in a namespace of the _“Management Cluster”_ instead of dedicated machines. This solution makes running Control Planes at scale cheaper and easier to deploy and operate. The Tenant Control Plane components are packaged in the same way they run on bare metal or virtual nodes. We leverage the `kubeadm` code to set up the control plane components as if they were running on their own server. The unchanged images of upstream `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager` are used.
High Availability and rolling updates of the Tenant Control Plane pods are provided by a regular Deployment. Autoscaling based on metrics is available. A Service is used to expose the Tenant Control Plane outside of the _“Management Cluster”_. The `LoadBalancer` service type is used by default; `NodePort` and `ClusterIP` are other viable options, depending on the case.
Kamaji offers a [Custom Resource Definition](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) to provide a declarative approach of managing a Tenant Control Plane. This *CRD* is called `TenantControlPlane`, or `tcp` in short.
All the _“Tenant Clusters”_ built with Kamaji are fully compliant CNCF Kubernetes clusters and are compatible with the standard Kubernetes toolchains everybody knows and loves. See [CNCF compliance](reference/conformance.md).
## Tenant worker nodes
And what about the tenant worker nodes?
They are just _"worker nodes"_, i.e. regular virtual or bare metal machines, connecting to the APIs server of the Tenant Control Plane.
Kamaji's goal is to manage the lifecycle of hundreds of these _“Tenant Clusters”_, not only one, so how to add another Tenant Cluster to Kamaji?
As you might expect, you just deploy a new Tenant Control Plane in one of the _“Management Cluster”_ namespaces, and then join the tenant worker nodes to it.
A [Cluster API ControlPlane provider](https://github.com/clastix/cluster-api-control-plane-provider-kamaji) has been released, offering a Cluster API-native declarative lifecycle by automating the join of the worker nodes.
## Datastores
Putting the Tenant Control Plane in a pod is the easiest part. We also have to make sure each Tenant Cluster saves its state to be able to store and retrieve data. As we can deploy a Kubernetes cluster with an external `etcd` cluster, we explored this option for the Tenant Control Planes. On the Management Cluster, you can deploy one or more multi-tenant `etcd` clusters to save the state of multiple Tenant Clusters. Kamaji offers a Custom Resource Definition called `DataStore` to provide a declarative approach to managing multiple datastores. By sharing the datastore between multiple tenants, resiliency is still guaranteed and the pods' count remains under control, so it meets the main goals of resiliency and cost optimization. The trade-off here is that you have to operate external datastores, in addition to the `etcd` of the _“Management Cluster”_, and manage access to be sure that each _“Tenant Cluster”_ uses only its own data.
### Other storage drivers
Kamaji offers the option of using a more capable datastore than `etcd` to save the state of multiple tenants' clusters. Thanks to the native [kine](https://github.com/k3s-io/kine) integration, you can run _MySQL_ or _PostgreSQL_ compatible databases as datastore for _“Tenant Clusters”_.
### Pooling
By default, Kamaji expects to persist all the _“Tenant Clusters”_ data in a single datastore that can be backed by different drivers. However, you can pick a different datastore for a specific set of _“Tenant Clusters”_ that have different resources assigned or a different tiering. Pooling of multiple datastores is an option you can leverage for a very large set of _“Tenant Clusters”_, so you can distribute the load properly. As a future improvement, we have a _datastore scheduler_ feature on the roadmap so that Kamaji itself can automatically assign a _“Tenant Cluster”_ to the best datastore in the pool.
### Migration
In order to simplify Day 2 operations and reduce the operational burden, Kamaji provides the capability to live-migrate data from one datastore to another of the same driver, without manual and error-prone backup and restore operations.
> Currently, live data migration is only available between datastores having the same driver.
## Konnectivity
In addition to the standard control plane containers, Kamaji creates an instance of [konnectivity-server](https://kubernetes.io/docs/concepts/architecture/control-plane-node-communication/) running as sidecar container in the `tcp` pod and exposed on port `8132` of the `tcp` service.
This is required when the tenant worker nodes are not reachable from the `tcp` pods. The Konnectivity service consists of two parts: the Konnectivity server in the tenant control plane pod and the Konnectivity agents running on the tenant worker nodes.
After worker nodes joined the tenant control plane, the Konnectivity agents initiate connections to the Konnectivity server and maintain the network connections. After enabling the Konnectivity service, all control plane to worker nodes traffic goes through these connections.
> In Kamaji, Konnectivity is enabled by default and can be disabled when not required.

View File

@@ -0,0 +1,36 @@
# Datastore
A critical part of any Kubernetes control plane is its datastore, the system that persists the cluster's state, configuration, and operational data. In Kamaji, this requirement is addressed with flexibility and scalability in mind, allowing you to choose the best storage backend for your needs and to manage many clusters efficiently.
Kamaji's architecture decouples the control plane from its underlying datastore. Instead of each Tenant Cluster running its own dedicated datastore instance, Kamaji enables you to share datastores across multiple Tenant Clusters, or assign a dedicated datastore to each Tenant Cluster where needed. This approach optimizes resource usage, simplifies operations, and supports a variety of backend technologies.
## Supported Datastore Backends
Kamaji supports several options for persisting Tenant Cluster state:
- **etcd:**
The default and most widely used Kubernetes datastore. You can deploy one or more etcd clusters in the Management Cluster and assign them to Tenant Control Planes as needed.
- **SQL Databases:**
For environments where etcd is not ideal, Kamaji integrates with [kine](https://github.com/k3s-io/kine), allowing you to use MySQL or PostgreSQL-compatible databases as the backend for Tenant Clusters.
!!! info "NATS"
The support of [NATS](https://nats.io/) is still experimental, mostly because multi-tenancy is not (yet) supported in NATS.
## Declarative Management
Datastores are managed declaratively using the `DataStore` Custom Resource Definition (CRD). This makes it easy to define, configure, and assign datastores to Tenant Control Planes, and fits naturally into GitOps and Infrastructure as Code workflows.
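A minimal sketch of such a resource (the etcd endpoint is hypothetical, and the TLS and authentication settings, which reference Secrets, are omitted for brevity; check the `DataStore` CRD reference for the exact schema):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
  name: dedicated-etcd
spec:
  driver: etcd
  endpoints:
    - etcd-0.etcd.kamaji-system.svc.cluster.local:2379
  # tlsConfig / basicAuth (Secret references) omitted for brevity
EOF
```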
## Pooling and Scalability
By default, Kamaji can persist all Tenant Clusters' data in a single datastore, but you can also create pools of datastores and assign clusters based on resource requirements, performance needs, or organizational policies. This pooling capability is especially useful for large-scale environments, where distributing the load across multiple datastores ensures resilience and scalability.
Kamaji's roadmap includes a datastore scheduler, which will automatically assign new Tenant Clusters to the most appropriate datastore in the pool, further reducing operational overhead.
## Live Migration
Operational needs change over time, and Kamaji makes it easy to adapt. You can live-migrate a Tenant Cluster's data from one datastore to another, as long as they use the same backend driver, without manual backup and restore steps. This feature simplifies Day 2 operations and helps you optimize your infrastructure as your requirements evolve.
!!! info "Datastore Migration"
Currently, live data migration is only available between datastores having the same driver.
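A sketch of how a migration is typically triggered, assuming the target datastore is referenced through the `dataStore` field of the `TenantControlPlane` (names are placeholders):

```bash
# Point the Tenant Control Plane at another DataStore of the same driver;
# Kamaji reconciles the change and migrates the data.
kubectl patch tcp tenant-00 -n tenant-00 --type=merge \
  -p '{"spec":{"dataStore":"dedicated-etcd"}}'
```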

View File

@@ -0,0 +1,57 @@
# High Level Overview
Kamaji is an open source Kubernetes Operator that transforms any Kubernetes cluster into a **Management Cluster** capable of orchestrating and managing multiple independent **Tenant Clusters**. This architecture is designed to simplify large-scale Kubernetes operations, reduce infrastructure costs, and provide strong isolation between tenants.
![Kamaji Architecture](../images/architecture.png)
## Architecture Overview
- **Management Cluster:**
The central cluster where Kamaji is installed. It hosts the control planes for all Tenant Clusters as regular Kubernetes pods, leveraging the Management Cluster's reliability, scalability, and operational features.
- **Tenant Clusters:**
These are user-facing Kubernetes clusters, each with its own dedicated control plane running as pods in the Management Cluster. Tenant Clusters are fully isolated from each other and unaware of the Management Cluster's existence.
- **Tenant Worker Nodes:**
Regular virtual or bare metal machines that join a Tenant Cluster by connecting to its control plane. These nodes run only tenant workloads, ensuring strong security and resource isolation.
## Design Principles
- **Unidirectional Management:**
The Management Cluster manages all Tenant Clusters. Communication is strictly one-way: Tenant Clusters do not have access to or awareness of the Management Cluster.
- **Strong Isolation:**
There is no communication between different Tenant Clusters. Each cluster is fully isolated at the control plane and data store level.
- **Declarative Operations:**
Kamaji leverages Kubernetes Custom Resource Definitions (CRDs) to provide a fully declarative approach to managing control planes, datastores, and other resources.
- **CNCF Compliance:**
Kamaji uses upstream, unmodified Kubernetes components and kubeadm for control plane setup, ensuring that all Tenant Clusters are [CNCF Certified Kubernetes](https://www.cncf.io/certification/software-conformance/) and compatible with standard Kubernetes tooling.
## Extensibility and Integrations
Kamaji is designed to integrate seamlessly with the broader cloud-native and enterprise ecosystem, enabling organizations to leverage their existing tools and infrastructure:
- **Infrastructure as Code:**
Kamaji works well with tools like [Terraform](https://www.terraform.io/) and [Ansible](https://www.ansible.com/) for automated cluster provisioning and management.
- **GitOps:**
Kamaji supports GitOps workflows, enabling you to manage cluster and tenant lifecycle declaratively through version-controlled repositories using tools like [Flux](https://fluxcd.io/) or [Argo CD](https://argo-cd.readthedocs.io/). This ensures consistency, auditability, and repeatability in your operations.
- **Cluster API Integration:**
Kamaji can be used as a [Cluster API Control Plane Provider](https://github.com/clastix/cluster-api-control-plane-provider-kamaji), enabling automated, declarative lifecycle management of clusters and worker nodes across any infrastructure.
- **Enterprise Addons:**
Additional features, such as Ingress management for Tenant Control Planes, are available as enterprise-grade addons.
## Learn More
Explore the following concepts to understand how Kamaji works under the hood:
- [Tenant Control Plane](tenant-control-plane.md)
- [Datastore](datastore.md)
- [Tenant Worker Nodes](tenant-worker-nodes.md)
- [Konnectivity](konnectivity.md)
Kamaji's architecture is designed for flexibility, scalability, and operational simplicity, making it an ideal solution for organizations managing multiple Kubernetes clusters at scale.

View File

@@ -0,0 +1,37 @@
# Konnectivity
In traditional Kubernetes deployments, the control plane components need to communicate directly with worker nodes for various operations like executing commands in pods, retrieving logs, or managing port forwards. However, in many real-world environments, especially those spanning multiple networks or cloud providers, direct communication isn't always possible or desirable. This is where Konnectivity comes in.
## Understanding Konnectivity in Kamaji
Kamaji integrates [Konnectivity](https://kubernetes.io/docs/concepts/architecture/control-plane-node-communication/) as a core component of its architecture. Each Tenant Control Plane pod includes a konnectivity-server running as a sidecar container, which establishes and maintains secure tunnels with agents running on the worker nodes. This design ensures reliable communication even in complex network environments.
The Konnectivity service consists of two main components:
1. **Konnectivity Server:**
Runs alongside the control plane components in each Tenant Control Plane pod and is exposed on port 8132. It manages connections from worker nodes and routes traffic appropriately.
2. **Konnectivity Agent:**
Runs on each worker node and initiates outbound connections to its control plane's Konnectivity server. These connections are maintained to create a reliable tunnel for all control plane to worker node communication.
## How It Works
When a worker node joins a Tenant Cluster, the Konnectivity agents automatically establish connections to their designated Konnectivity server. These connections are maintained continuously, ensuring reliable communication paths between the control plane and worker nodes.
All traffic from the control plane to worker nodes flows through these established tunnels, enabling operations such as:
- Executing commands in pods
- Retrieving container logs
- Managing port forwards
- Collecting metrics and health information
- Running exec sessions for debugging
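For example, these everyday `kubectl` commands against a Tenant Cluster rely on the tunnel once the agents are connected (pod and service names are placeholders):

```bash
# All of these reach the worker nodes through the Konnectivity tunnel
kubectl logs -n kube-system konnectivity-agent-xxxxx
kubectl exec -it my-pod -- sh
kubectl port-forward svc/my-service 8080:80
```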
## Configuration and Management
Konnectivity is enabled by default in Kamaji, as it's considered a best practice for modern Kubernetes deployments. However, it can be disabled if your environment has different requirements or if you need to use alternative networking solutions.
The service is automatically configured when worker nodes join a cluster, without requiring any operational overhead. The connection details are managed as part of the standard node bootstrap process, making it transparent to cluster operators and users.
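As a sketch, disabling Konnectivity on an existing Tenant Control Plane typically means removing the `konnectivity` key from the addons section of its spec (field path assumed from the `TenantControlPlane` CRD; name and namespace are placeholders):

```bash
# Remove the Konnectivity addon from a Tenant Control Plane
kubectl patch tcp tenant-00 -n tenant-00 --type=json \
  -p '[{"op": "remove", "path": "/spec/addons/konnectivity"}]'
```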
---
By integrating Konnectivity as a core feature, Kamaji ensures that your Tenant Clusters can operate reliably and securely across any network topology, making it easier to build and manage distributed Kubernetes environments at scale.

View File

@@ -0,0 +1,29 @@
# Tenant Control Plane
Kamaji introduces a new way to manage Kubernetes control planes at scale. Instead of dedicating separate machines to each cluster's control plane, Kamaji runs every Tenant Cluster's control plane as a set of pods inside the Management Cluster. This design unlocks significant efficiencies: you can operate hundreds or thousands of isolated Kubernetes clusters on shared infrastructure, all while maintaining strong separation and reliability.
At the heart of this approach is Kamaji's commitment to upstream compatibility. The control plane components—`kube-apiserver`, `kube-scheduler`, and `kube-controller-manager`—are the same as those used in any CNCF-compliant Kubernetes cluster. Kamaji uses `kubeadm` for setup and lifecycle management, so you get the benefits of a standard, certified Kubernetes experience.
## How It Works
When you want to create a new Tenant Cluster, you simply define a `TenantControlPlane` resource in the Management Cluster. Kamaji's controllers take over from there, deploying the necessary control plane pods, configuring networking, and connecting to the appropriate datastore. The control plane is exposed via a Kubernetes Service—by default as a `LoadBalancer`, but you can also use `NodePort` or `ClusterIP` depending on your needs.
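A minimal sketch of such a resource, loosely based on the sample manifest shipped with Kamaji (values are illustrative and the exact fields may vary across versions):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: tenant-00
  namespace: default
spec:
  controlPlane:
    deployment:
      replicas: 2
    service:
      serviceType: LoadBalancer   # NodePort and ClusterIP are also supported
  kubernetes:
    version: v1.31.0
    kubelet:
      cgroupfs: systemd
  networkProfile:
    port: 6443
  addons:
    coreDNS: {}
    kubeProxy: {}
    konnectivity: {}
EOF
```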
Worker nodes, whether virtual machines or bare metal, join the Tenant Cluster by connecting to its control plane endpoint. This process is compatible with standard Kubernetes tools and can be automated using Cluster API or other infrastructure automation solutions.
## Highlights
- **Efficiency and Scale:**
By running control planes as pods, Kamaji reduces the infrastructure and operational overhead of managing many clusters.
- **High Availability and Automation:**
Control plane pods are managed by Kubernetes Deployments, enabling rolling updates, self-healing, and autoscaling. Kamaji automates the entire lifecycle, from creation to deletion.
- **Declarative and GitOps:**
The `TenantControlPlane` custom resource allows you to manage clusters declaratively, fitting perfectly with GitOps and Infrastructure as Code workflows.
- **Seamless Integration:**
Kamaji works with Cluster API, supports a variety of datastores, and is compatible with the full Kubernetes ecosystem.
Kamaji's Tenant Control Plane model is designed for organizations that need to deliver robust, production-grade Kubernetes clusters at scale—whether for internal platform engineering, managed services, or multi-tenant environments.

View File

@@ -0,0 +1,53 @@
# Tenant Worker Nodes
While Kamaji innovates in how control planes are managed, Tenant Worker Nodes remain true to their Kubernetes roots: they are regular virtual machines or bare metal servers that run your workloads. What makes them special in Kamaji's architecture is how they integrate with the containerized control planes and how they can be managed at scale across diverse infrastructure environments.
## Understanding Worker Nodes in Kamaji
In a Kamaji managed cluster, worker nodes connect to their Tenant Control Plane just as they would in a traditional Kubernetes setup. The key difference is that the control plane they're connecting to runs as pods within the Management Cluster, rather than on dedicated machines. This architectural choice maintains compatibility with existing tools and workflows while enabling more efficient resource utilization.
Each worker node belongs to exactly one Tenant Cluster and runs only that tenant's workloads. This clear separation ensures strong isolation between different tenants' applications and data, making Kamaji suitable for multi-tenant environments.
## Infrastructure Flexibility
Your worker nodes can run:
- On bare metal servers in a data center
- As virtual machines in private clouds
- On public cloud instances
- At edge locations
- In hybrid or multi-cloud configurations
This flexibility allows you to place workloads where they make the most sense for your use case, whether that's close to users, near data sources, or in specific regulatory environments.
## Lifecycle Management Options
Kamaji supports multiple approaches to managing worker node lifecycles:
### Manual Management
For simple setups or specific requirements, you can join worker nodes to their Tenant Clusters using standard `kubeadm` commands. This process is familiar to Kubernetes administrators and works just as it would with traditionally deployed clusters.
!!! tip "yaki"
See [yaki](https://goyaki.clastix.io/) script, which you could modify for your preferred operating system and version. The provided script is just a facility: it assumes all worker nodes are running `Ubuntu`. Make sure to adapt the script if you're using a different OS distribution.
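A sketch of the manual flow (the endpoint, token, and CA hash are placeholders; in practice you generate the full join command with `kubeadm token create --print-join-command` against the Tenant Control Plane kubeconfig):

```bash
# On the machine that will become a tenant worker node
sudo kubeadm join tenant-00.example.com:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```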
### Automation Tools
You can use standard infrastructure automation tools to manage worker nodes:
- Terraform for infrastructure provisioning
- Ansible for configuration management
### Cluster API Integration
For more sophisticated automation, Kamaji provides a [Cluster API Control Plane Provider](https://github.com/clastix/cluster-api-control-plane-provider-kamaji).
This integration enables:
- Declarative management of both tenant control planes and tenant worker nodes
- Automated scaling and updates
- Integration with infrastructure providers for major cloud platforms
- Consistent management across different environments
---
Kamaji's approach to worker nodes combines the familiarity of traditional Kubernetes with the flexibility to run anywhere and the ability to manage at scale. Whether you're building a private cloud platform, offering Kubernetes as a service, or managing edge computing infrastructure, Kamaji provides the tools and patterns you need.

View File

@@ -31,7 +31,8 @@ Following is the list of supported Ingress Controllers:
- [HAProxy Technologies Kubernetes Ingress](https://github.com/haproxytech/kubernetes-ingress)
> Active subscribers can request additional Ingress Controller flavours
!!! info "Other Ingress Controllers"
Active subscribers can request support for additional Ingress Controller flavours.
## How to enable the Addon
@@ -89,9 +90,7 @@ spec:
```
The pattern for the generated hosts is the following:
`${tcp.namespace}-${tcp.name}.{k8s|konnectivity}.${ADDON_ANNOTATION_VALUE}`
> Please, notice the `konnectivity` rule will be created only if the `konnectivity` addon has been enabled.
`${tcp.namespace}-${tcp.name}.{k8s|konnectivity}.${ADDON_ANNOTATION_VALUE}`. Please note that the `konnectivity` rule will be created only if the `konnectivity` addon has been enabled.
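For example (hypothetical values), a Tenant Control Plane named `tenant-00` in the `tenant-00` namespace, with the addon annotation set to `kamaji.example.com`, would be exposed as:

```
tenant-00-tenant-00.k8s.kamaji.example.com            # Kubernetes API server
tenant-00-tenant-00.konnectivity.kamaji.example.com   # konnectivity server, only if the addon is enabled
```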
## Infrastructure requirements
@@ -142,7 +141,8 @@ spec:
The `ingressClassName` value must match a non-handled `IngressClass` object; the addon will take care of generating the correct object.
> Nota Bene: the `hostname` must absolutely point to the 443 port
!!! warning "Use the right port"
The `hostname` field must point to port 443!
### Kubernetes components extra Arguments

View File

@@ -1,6 +1,16 @@
# Getting started
This section contains the information on how to get started with Kamaji
This section describes how to get started with Kamaji in different environments:
!!! success "Slow Start"
The material provided in this section is intended to be a slow start to Kamaji.
It is intended to be an in-depth learning experience, helping you get started with Kamaji while understanding the components involved and the core concepts behind it. We do not provide any "one-click" deployment here.
- [Getting started with Kamaji on Kind](./kamaji-kind.md)
- [Getting started with Kamaji on generic infra](./kamaji-generic.md)
- [Getting started with Kamaji on EKS](./kamaji-aws.md)
- [Getting started with Kamaji on AKS](./kamaji-azure.md)
- [Getting started with Kamaji](getting-started.md): install the required components and Kamaji on any Kubernetes cluster
- [Kamaji: Getting started on Kind](kind.md): useful for development environments, create a Kamaji environment on `kind`

View File

@@ -1,12 +1,12 @@
# Setup Kamaji on AWS
# Kamaji on AWS
This guide will lead you through the process of creating a working Kamaji setup on AWS.
The guide requires:
- a bootstrap machine
- a Kubernetes cluster (EKS) to run the Admin and Tenant Control Planes
- an arbitrary number of machines to host `Tenant`s' workloads
- a Kubernetes cluster (EKS) to run the Management and Tenant Control Planes
- an arbitrary number of machines to host Tenant workloads.
## Summary
@@ -36,13 +36,11 @@ We assume you have installed on the bootstrap machine:
Make sure you have a valid AWS Account, and login to AWS:
> The easiest way to get started with AWS is to create [access keys](https://docs.aws.amazon.com/cli/v1/userguide/cli-authentication-user.html#cli-authentication-user-configure.title) associated with your account
```bash
aws configure
```
## Create Management cluster
## Access Management cluster
In Kamaji, a Management Cluster is a regular Kubernetes cluster which hosts zero to many Tenant Cluster Control Planes. The Management Cluster acts as a cockpit for all the Tenant clusters and implements monitoring, logging, and governance of all the Kamaji setups, including all Tenant Clusters. For this guide, we're going to use an instance of AWS Kubernetes Service (EKS) as a Management Cluster.
@@ -66,7 +64,9 @@ In order to create quickly an EKS cluster, we will use `eksctl` provided by AWS.
For our use case, we will create an EKS cluster with the following configuration:
```bash
cat >eks-cluster.yaml <<EOF
source kamaji-aws.env
cat > eks-cluster.yaml <<EOF
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
@@ -97,7 +97,6 @@ addons:
EOF
eksctl create cluster -f eks-cluster.yaml
```
Please note:
@@ -114,22 +113,20 @@ aws eks update-kubeconfig --region ${KAMAJI_REGION} --name ${KAMAJI_CLUSTER}
kubectl cluster-info
# make ebs as a default storage class
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
### (optional) Add route 53 domain
### Add route 53 domain
In order to easily access tenant clusters, it is recommended to create a Route 53 domain or use an existing one.
```bash
# for within VPC
aws route53 create-hosted-zone --name "$TENANT_DOMAIN" --caller-reference $(date +%s) --vpc "VPCRegion=$KAMAJI_REGION,VPCId=$KAMAJI_VPC_ID"
```
## Install Kamaji
Follow the [Getting Started](../getting-started.md) to install Cert Manager and the Kamaji Controller.
Follow the [Getting Started](kamaji-generic.md) to install Cert Manager and the Kamaji Controller.
### Install Cert Manager
@@ -146,12 +143,11 @@ helm install \
--set installCRDs=true
```
### (optional) Install ExternalDNS
### Install ExternalDNS (optional)
ExternalDNS allows updating your DNS records dynamically from an annotation that you add in the service within EKS. Run the following commands to install the ExternalDNS Helm chart:
```bash
helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm repo update
helm install external-dns external-dns/external-dns \
@@ -160,7 +156,7 @@ helm install external-dns external-dns/external-dns \
--version 1.15.1
```
## Install Kamaji Controller
### Install Kamaji Controller
Installing Kamaji via Helm charts is the preferred way. Run the following commands to install a stable release of Kamaji:
@@ -170,7 +166,7 @@ helm repo update
helm install kamaji clastix/kamaji -n kamaji-system --create-namespace
```
## Create a Tenant Cluster
## Create Tenant Cluster
Now that our management cluster is up and running, we can create a Tenant Cluster. A Tenant Cluster is a Kubernetes cluster that is managed by Kamaji.
@@ -183,11 +179,8 @@ Before creating a Tenant Control Plane, you need to define some variables:
```bash
export KAMAJI_VPC_ID=$(aws ec2 describe-vpcs --filters "Name=tag:Name,Values=$KAMAJI_VPC_NAME" --query "Vpcs[0].VpcId" --output text)
export KAMAJI_PUBLIC_SUBNET_ID=$(aws ec2 describe-subnets --filters "Name=vpc-id,Values=$KAMAJI_VPC_ID" --filters "Name=tag:Name,Values=$KAMAJI_PUBLIC_SUBNET_NAME" --query "Subnets[0].SubnetId" --output text)
export TENANT_EIP_ID=$(aws ec2 allocate-address --query 'AllocationId' --output text)
export TENANT_PUBLIC_IP=$(aws ec2 describe-addresses --allocation-ids $TENANT_EIP_ID --query 'Addresses[0].PublicIp' --output text)
```
In the next step, we will create a Tenant Control Plane with the following configuration:
@@ -287,20 +280,22 @@ it is important to provide a static public IP address for the API server in orde
- The following annotation: `external-dns.alpha.kubernetes.io/hostname` is set to create the DNS record. It tells AWS to expose the Tenant Control Plane with a public domain name: `${TENANT_NAME}.${TENANT_DOMAIN}`.
> Since AWS load Balancer does not support setting LoadBalancerIP, you will get the following warning on the service created for the control plane tenant `Error syncing load balancer: failed to ensure load balancer: LoadBalancerIP cannot be specified for AWS ELB`. you can ignore it for now.
Since the AWS Load Balancer does not support setting `LoadBalancerIP`, you will get the following warning on the service created for the Tenant Control Plane: `Error syncing load balancer: failed to ensure load balancer: LoadBalancerIP cannot be specified for AWS ELB`. You can ignore it for now.
### Working with Tenant Control Plane
Check the access to the Tenant Control Plane:
> If the domain you used is a private route53 domain make sure to map the public IP of the LB to `${TENANT_NAME}.${TENANT_DOMAIN}` in your `/etc/hosts`. otherwise, `kubectl` will fail to check SSL certificates
```bash
curl -k https://${TENANT_PUBLIC_IP}:${TENANT_PORT}/version
curl -k https://${TENANT_NAME}.${TENANT_DOMAIN}:${TENANT_PORT}/healthz
curl -k https://${TENANT_NAME}.${TENANT_DOMAIN}:${TENANT_PORT}/version
```
!!! warning "Using Private Domains"
If the domain you used is a private __Route 53__ domain, make sure to map the public IP of the LoadBalancer to `${TENANT_NAME}.${TENANT_DOMAIN}` in your `/etc/hosts`. Otherwise, `kubectl` will fail to verify SSL certificates.
Let's retrieve the `kubeconfig` in order to work with it:
```bash
@@ -332,7 +327,7 @@ NAME ENDPOINTS AGE
kubernetes 13.37.33.12:6443 3m22s
```
## Join worker nodes
### Join worker nodes
The Tenant Control Plane is made of pods running in the Kamaji Management Cluster. At this point, the Tenant Cluster has no worker nodes. So, the next step is to join some worker nodes to the Tenant Control Plane.
@@ -349,34 +344,27 @@ TENANT_ADDR=$(kubectl -n ${TENANT_NAMESPACE} get svc ${TENANT_NAME} -o json | jq
JOIN_CMD=$(echo "sudo kubeadm join ${TENANT_ADDR}:6443 ")$(kubeadm --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig token create --ttl 0 --print-join-command |cut -d" " -f4-)
```
> Setting `--ttl=0` on the `kubeadm token create` will guarantee that the token will never expire and can be used every time.
>
> It's not intended for production-grade setups.
!!! tip "Token expiration"
Setting `--ttl=0` on the `kubeadm token create` will guarantee that the token will never expire and can be used every time. It's not intended for production-grade setups.
### Create tenant worker nodes
In this section, we will use AMIs provided by CAPA (Cluster API Provider AWS) to create the worker nodes. Those AMIs are built using [image builder](https://github.com/kubernetes-sigs/image-builder/tree/main) and contain all the necessary components to join the cluster.
```bash
export KAMAJI_PRIVATE_SUBNET_ID=$(aws ec2 describe-subnets --filters "Name=vpc-id,Values=$KAMAJI_VPC_ID" --filters "Name=tag:Name,Values=$KAMAJI_PRIVATE_SUBNET_NAME" --query "Subnets[0].SubnetId" --output text)
export WORKER_AMI=$(clusterawsadm ami list --kubernetes-version=$TENANT_VERSION --os=ubuntu-24.04 --region=$KAMAJI_REGION -o json | jq -r .items[0].spec.imageID)
cat <<EOF >> worker-user-data.sh
#!/bin/bash
$JOIN_CMD
EOF
aws ec2 run-instances --image-id $WORKER_AMI --instance-type "t2.medium" --user-data $(cat worker-user-data.sh | base64 -w0) --network-interfaces '{"SubnetId":'"'${KAMAJI_PRIVATE_SUBNET_ID}'"',"AssociatePublicIpAddress":false,"DeviceIndex":0,"Groups":["<REPLACE_WITH_SG>"]}' --count "1"
```
> We have used user data to run the `kubeadm join` command on the instance boot. This will make sure that the worker node will join the cluster automatically.
We use user data to run the `kubeadm join` command at instance boot, so that the worker node joins the cluster automatically.
> Make sure to replace `<REPLACE_WITH_SG>` with the security group id that allows the worker nodes to communicate with the public IP of the tenant control plane
Make sure to replace `<REPLACE_WITH_SG>` with the security group ID that allows the worker nodes to communicate with the public IP of the Tenant Control Plane.
Checking the nodes in the Tenant Cluster:
@@ -422,5 +410,6 @@ To get rid of the whole Kamaji infrastructure, remove the EKS cluster:
```bash
eksctl delete cluster -f eks-cluster.yaml
```
That's all folks!

View File

@@ -1,14 +1,11 @@
# Setup Kamaji on Azure
# Kamaji on Azure
This guide will lead you through the process of creating a working Kamaji setup on MS Azure.
!!! warning ""
The material here is relatively dense. We strongly encourage you to dedicate time to walk through these instructions, with a mind to learning. We do NOT provide any "one-click" deployment here. However, once you've understood the components involved it is encouraged that you build suitable, auditable GitOps deployment processes around your final infrastructure.
The guide requires:
- a bootstrap machine
- a Kubernetes cluster to run the Admin and Tenant Control Planes
- an arbitrary number of machines to host `Tenant`s' workloads
- a Kubernetes cluster (AKS) to run the Management and Tenant Control Planes
- an arbitrary number of machines to host Tenant workloads.
## Summary
@@ -98,7 +95,7 @@ kubectl cluster-info
## Install Kamaji
Follow the [Getting Started](../getting-started.md) to install Cert Manager and the Kamaji Controller.
Follow the [Getting Started](kamaji-generic.md) to install Cert Manager and the Kamaji Controller.
## Create Tenant Cluster

View File

@@ -1,14 +1,11 @@
# Getting started with Kamaji
# Kamaji on generic infra
This guide will lead you through the process of creating a working Kamaji setup on a generic infrastructure.
!!! info "Slow Start"
The material here is relatively dense. We strongly encourage you to dedicate time to walk through these instructions, with a mind to learning how Kamaji works. We do NOT provide any "one-click" deployment here. However, once you've understood the components involved it is encouraged that you build suitable, auditable GitOps deployment processes around your final infrastructure.
The guide requires:
- a bootstrap machine
- a Kubernetes cluster to run the Admin and Tenant Control Planes
- an arbitrary number of machines to host `Tenant`s' workloads
- a Kubernetes cluster to run the Management and Tenant Control Planes
- an arbitrary number of machines to host Tenant workloads.
## Summary
@@ -75,9 +72,9 @@ helm install \
Installing Kamaji via Helm charts is the preferred way to deploy the Kamaji controller. The Helm chart is available in the `charts` directory of the Kamaji repository.
!!! info "Stable Releases"
As of July 2024 [Clastix Labs](https://github.com/clastix) does no longer publish stable release artifacts. Stable releases are offered on a subscription basis by [CLASTIX](https://clastix.io), the main Kamaji project contributor.
As of July 2024, [Clastix Labs](https://github.com/clastix) no longer publishes stable release artifacts. Stable releases are offered on a subscription basis by [CLASTIX](https://clastix.io), the main Kamaji project contributor.
Run the following commands to install latest edge release of Kamaji:
Run the following commands to install the latest edge release of Kamaji:
```bash
git clone https://github.com/clastix/kamaji
@@ -190,7 +187,7 @@ service/tenant-00 LoadBalancer 10.32.132.241 192.168.32.240 6443:32152/T
The regular Tenant Control Plane containers: `kube-apiserver`, `kube-controller-manager`, `kube-scheduler` are running unchanged in the `tcp` pods instead of dedicated machines and they are exposed through a service on the port `6443` of worker nodes in the Management Cluster.
The `LoadBalancer` service type is used to expose the Tenant Control Plane on the assigned `loadBalancerIP` acting as `ControlPlaneEndpoint` for the worker nodes and other clients as, for example, `kubectl`. Service types `NodePort` and `ClusterIP` are still viable options to expose the Tenant Control Plane, depending on the case. High Availability and rolling updates of the Tenant Control Planes are provided by the `tcp` Deployment and all the resources reconcilied by the Kamaji controller.
The `LoadBalancer` service type is used to expose the Tenant Control Plane on the assigned `loadBalancerIP`, acting as `ControlPlaneEndpoint` for the worker nodes and other clients such as `kubectl`. Service types `NodePort` and `ClusterIP` are still viable options to expose the Tenant Control Plane, depending on the case. High Availability and rolling updates of the Tenant Control Planes are provided by the `tcp` Deployment and all the resources reconciled by the Kamaji controller.
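As a sketch, the exposure mode can be changed on the `TenantControlPlane` itself (field path assumed from the CRD; name and namespace are placeholders):

```bash
# Switch the Tenant Control Plane exposure from LoadBalancer to NodePort
kubectl patch tcp tenant-00 -n tenant-00 --type=merge \
  -p '{"spec":{"controlPlane":{"service":{"serviceType":"NodePort"}}}}'
```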
### Assign a Specific Address to the Tenant Control Plane
@@ -283,7 +280,7 @@ The Tenant Control Plane is made of pods running in the Kamaji Management Cluste
!!! warning "Opening Ports"
To make sure worker nodes can join the Tenant Control Plane, you must allow incoming connections to: `${TENANT_ADDR}:${TENANT_PORT}` and `${TENANT_ADDR}:${TENANT_PROXY_PORT}`
Kamaji does not provide any helper for creation of tenant worker nodes, instead it leverages the [Cluster API](https://github.com/kubernetes-sigs/cluster-api). This allows you to create the Tenant Clusters, including worker nodes, in a completely declarative way. Refer to the [Cluster API guide](guides/cluster-api/index.md) to learn more about Cluster API support in Kamaji.
Kamaji does not provide any helper for the creation of tenant worker nodes; instead, it leverages the [Cluster API](https://github.com/kubernetes-sigs/cluster-api). This allows you to create the Tenant Clusters, including worker nodes, in a completely declarative way. Refer to the section [Cluster API](../cluster-api/index.md) to learn more about Cluster API support in Kamaji.
An alternative approach for joining nodes is to use the `kubeadm` command on each node. Follow the related [documentation](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) in order to:

View File

@@ -1,19 +1,19 @@
# Kamaji: Getting started on Kind
This guide will lead you through the process of creating a setup of a working Kamaji setup using Kind clusters.
# Kamaji on Kind
This guide will lead you through the process of creating a working Kamaji setup using a Kind cluster. The guide requires the following installed on your workstation: `docker`, `kind`, `helm`, and `kubectl`.
The guide requires the following installed:
- Docker
- Kind
- Helm
!!! warning "Development Only"
Run Kamaji on kind only for development or learning purposes.
Kamaji is designed to be run on production-grade Kubernetes clusters, such as those provided by cloud providers or on-premises solutions. Kind is not a production-grade Kubernetes cluster and is not recommended for production environments.
## Summary
* [Creating Kind Cluster](#creating-kind-cluster)
* [Installing Dependencies: Cert-Manager](#installing-dependencies-cert-manager)
* [Installing Cert-Manager](#installing-cert-manager)
* [Installing MetalLb](#installing-metallb)
* [Creating IP Address Pool](#creating-ip-address-pool)
* [Installing Kamaji](#installing-kamaji)
* [Creating Tenant Control Plane](#creating-tenant-control-plane)
## Creating Kind Cluster
@@ -23,37 +23,37 @@ Create a kind cluster.
kind create cluster --name kamaji
```
This will take a short while for the kind cluster to created.
This will take a short while for the kind cluster to be created.
## Installing Dependencies: Cert-Manager
## Installing Cert-Manager
Kamaji depends on Cert Manager: it uses dynamic admission control (validating and mutating webhook configurations) secured by TLS, and these certificates are managed by `cert-manager`. Hence, Cert Manager needs to be installed first.
Add the Bitnami Repo to the Helm Manager.
Add the Bitnami Repo to the Helm Manager.
```
helm repo add bitnami https://charts.bitnami.com/bitnami
```
Install Cert Manager to the cluster using the bitnami charts using Helm --
Install Cert Manager using Helm
```
helm upgrade --install cert-manager bitnami/cert-manager --namespace certmanager-system --create-namespace --set "installCRDs=true"
helm upgrade --install cert-manager bitnami/cert-manager \
--namespace certmanager-system \
--create-namespace \
--set "installCRDs=true"
```
This will install cert-manager to the cluster. You can watch the progress of the installation on the cluster using the command -
This will install cert-manager to the cluster. You can watch the progress of the installation on the cluster using the command
```
kubectl get pods -Aw
```
!!! Info ""
Another pre-requisite is to have a __storage provider__.
Kind by default provides `local-path-provisioner`, but one can have any other CSI Drivers. Since there are ETCD and Control-Planes running, having persistent volumes is essential for the cluster.
## Installing MetalLb
MetalLB is used in order to dynamically assign IP addresses to the components, and also define custom IP Address Pools.
MetalLB is used to dynamically assign IP addresses to the components, and also to define custom IP Address Pools. Install MetalLB using `kubectl` to apply the manifest:
Install MetalLb using the `kubectl` manifest apply command --
```
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
```
@@ -63,11 +63,13 @@ This will install MetalLb onto the cluster with all the necessary resources.
## Creating IP Address Pool
Extract the Gateway IP of the network Kind is running on.
```
GW_IP=$(docker network inspect -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}' kind)
```
Modify the IP Address, and create the resource to be added to the cluster to create the IP Address Pool.
Modify the IP address and apply the resource to the cluster to create the IP Address Pool:
```
NET_IP=$(echo ${GW_IP} | sed -E 's|^([0-9]+\.[0-9]+)\..*$|\1|g')
cat << EOF | sed -E "s|172.19|${NET_IP}|g" | kubectl apply -f -
@@ -90,51 +92,68 @@ EOF
## Installing Kamaji
- Add the Clastix Repo in the Helm Repo lists.
- Clone the Kamaji repository
```
helm repo add clastix https://clastix.github.io/charts
helm repo update
git clone https://github.com/clastix/kamaji
cd kamaji
```
- Install Kamaji
- Install Kamaji with Helm
```
helm upgrade --install kamaji clastix/kamaji --namespace kamaji-system --create-namespace --set 'resources=null'
helm upgrade --install kamaji charts/kamaji \
--namespace kamaji-system \
--create-namespace \
--set image.tag=latest \
--set 'resources=null'
```
- Watch the progress of the deployments --
- Watch the progress of the deployments
```
kubectl get pods -Aw
```
- Verify by first checking Kamaji CRDs.
- Verify by first checking Kamaji CRDs
```
kubectl get crds | grep -i kamaji
```
- Install a Tenant Control Plane using the command --
!!! Info "CSI Drivers"
Kamaji requires a __storage provider__ installed on the management cluster. Kind provides `local-path-provisioner` by default, but any other CSI driver can be used.
## Creating Tenant Control Plane
- Create a Tenant Control Plane using the command
```
kubectl apply -f https://raw.githubusercontent.com/clastix/kamaji/master/config/samples/kamaji_v1alpha1_tenantcontrolplane.yaml
```
- Watch the progress of the Tenant Control Plane by ---
- Watch the progress of the Tenant Control Plane by
```
kubectl get tcp -w
```
- You can attempt to get the details of the control plane by downloading the kubeconfig file ---
- You can attempt to get the details of the control plane by downloading the `kubeconfig` file
```
# Set the SECRET as KUBECONFIG column listed in the tcp output.
SECRET=""
kubectl get secret $SECRET -o jsonpath='{.data.admin\.conf}'|base64 -d > /tmp/kamaji.conf
```
- Export the KUBECONFIG
- Export the `kubeconfig` file to the environment variable `KUBECONFIG`
```
export KUBECONFIG=/tmp/kamaji.conf
```
- Notice that the `kubectl` version changes, and there is no nodes now.
- Notice that the `kubectl` version changes, and there are no nodes now.
```
kubectl version
kubectl get nodes

View File

@@ -1,15 +1,10 @@
# Use Alternative Datastores
# Alternative Datastores
Kamaji offers the possibility of having a different storage system than `etcd` thanks to [kine](https://github.com/k3s-io/kine) integration.
## Installing Drivers
The following `make` recipes help you to setup alternative `Datastore` resources.
> The default settings are not production grade:
> the following scripts are just used to test the Kamaji usage of different drivers.
On the Management Cluster, you can use the following commands:
The following `make` recipes help you set up alternative `Datastore` resources. On the Management Cluster, you can use the following commands:
- **MySQL**: `$ make -C deploy/kine/mysql mariadb`
@@ -17,6 +12,9 @@ On the Management Cluster, you can use the following commands:
- **NATS**: `$ make -C deploy/kine/nats nats`
!!! warning "Not for production"
The default settings are not production grade: the following scripts are just used to test the Kamaji usage of different drivers.
## Defining a default Datastore upon Kamaji installation
Use Helm to install the Kamaji Operator and make sure it uses a datastore with the proper driver `datastore.driver=<MySQL|PostgreSQL|NATS>`.
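For example, a sketch of the installation with a PostgreSQL-backed default datastore (the `datastore.driver` value comes from the Helm chart; the rest mirrors the standard installation commands):

```bash
helm repo add clastix https://clastix.github.io/charts
helm repo update
helm install kamaji clastix/kamaji \
  --namespace kamaji-system \
  --create-namespace \
  --set datastore.driver=PostgreSQL
```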
@@ -62,6 +60,4 @@ When the said key is omitted, Kamaji will use the default datastore configured w
The NATS support is still experimental, mostly because multi-tenancy is **NOT** supported.
> A `NATS` based DataStore can host one and only one Tenant Control Plane.
> When a `TenantControlPlane` is referring to a NATS `DataStore` already used by another instance,
> reconciliation will fail and blocked.
A `NATS` based DataStore can host one and only one Tenant Control Plane. When a `TenantControlPlane` refers to a NATS `DataStore` already used by another instance, reconciliation will fail and be blocked.

View File

@@ -39,10 +39,10 @@ velero backup describe tenant-00
## Restore step
>_WARNING_: this procedure will restore just the TCP resource.
In the event that the related datastore has been lost, you MUST restore it BEFORE continue; to do this, refer to the backup and restore strategy of the datastore of your choice.
---
!!! warning "Restoring Datastore"
This procedure will restore just the TCP resource.
In the event that the related datastore has been lost, you MUST restore it BEFORE continuing; to do this, refer to the backup and restore strategy of the datastore of your choice.
To restore just the desired TCP, simply execute:

View File

@@ -99,11 +99,10 @@ By default, the rotation will occur the day before their expiration.
This rotation deadline can be dynamically configured using the Kamaji CLI flag `--certificate-expiration-deadline` using the Go _Duration_ syntax:
e.g.: set the value `7d` to trigger the renewal a week before the effective expiration date.
> Nota Bene:
>
> Kamaji is responsible for creating the `etcd` client certificate, and the generation of a new one will occur.
> For other Datastore drivers, such as MySQL, PostgreSQL, or NATS, the referenced Secret will always be deleted by the Controller to trigger the rotation:
> the PKI management, since it's offloaded externally, must provide the renewed certificates.
!!! info "Other Datastore Drivers"
Kamaji is responsible for creating the `etcd` client certificate, so a new one will be generated automatically.
For other Datastore drivers, such as MySQL, PostgreSQL, or NATS, the referenced Secret will always be deleted by the Controller to trigger the rotation: since PKI management is offloaded externally, it must provide the renewed certificates.
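As a sketch, the flag can be appended to the controller arguments; the Deployment name and namespace below assume the standard Helm installation and may differ in your setup:

```bash
# Trigger certificate renewal one week before expiration
kubectl -n kamaji-system patch deployment kamaji --type=json \
  -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--certificate-expiration-deadline=7d"}]'
```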
## Certificate Authority rotation

View File

@@ -1,10 +1,17 @@
# Kamaji Console
This guide will introduce you to the basics of the Kamaji Console, a web UI that helps you view and control your Kamaji setup.
When you log in to the console, you are brought to the Tenant Control Planes view, which allows you to quickly understand the state of your Kamaji setup at a glance. It shows summary information about all the Tenant Control Plane objects, including: name, namespace, status, endpoint, version, and datastore.
![Kamaji Console](../images/kamaji-console.png)
## Install with Helm
The Kamaji Console is a web interface running on the Kamaji Management Cluster that you can install with Helm. Check the Helm Chart [documentation](https://github.com/clastix/kamaji-console) for all the available settings.
The Kamaji Console requires a Secret in the Kamaji Management Cluster that contains the configuration and credentials to access the console from the browser. You can have the Helm Chart generate it for you, or create it yourself and provide the name of the Secret during installation. Before to install the Kamaji Console, access your workstation, replace the placeholders with actual values, and execute the following command:
The Kamaji Console requires a Secret in the Kamaji Management Cluster that contains the configuration and credentials to access the console from the browser. You can have the Helm Chart generate it for you, or create it yourself and provide the name of the Secret during installation.
Before installing the Kamaji Console, access your workstation, replace the placeholders with actual values, and execute the following command:
```bash
# The secret is required, otherwise the installation will fail
@@ -32,11 +39,6 @@ Install the Chart with the release name `console` in the `kamaji-system` namespa
helm repo add clastix https://clastix.github.io/charts
helm repo update
helm -n kamaji-system install console clastix/kamaji-console
```
Show the status:
```
helm status console -n kamaji-system
```
@@ -54,39 +56,13 @@ and point the browser to `http://127.0.0.1:8080/ui` to access the console. Login
!!! note "Expose with Ingress"
The Kamaji Console can be exposed with an ingress. Refer to the Helm Chart documentation on how to configure it properly.
## Explore the Kamaji Console
The Kamaji Console provides a high level view of all Tenant Control Planes configured in your Kamaji setup. When you login to the console you are brought to the Tenant Control Planes view, which allows you to quickly understand the state of your Kamaji setup at a glance. It shows summary information about all the Tenant Control Plane objects, including: name, namespace, status, endpoint, version, and datastore.
![Console Tenant Control Plane List](../images/console-tcp-list.png)
From this view, you can also create a new Tenant Control Plane from a basic placeholder in yaml format:
![Console Tenant Control Plane Create](../images/console-tcp-create.png)
### Working with Tenant Control Plane
From the main view, clicking on a Tenant Control Plane row will bring you to the detailed view. This view shows you all the details about the selected Tenant Control Plane, including all child components: pods, deployment, service, config maps, and secrets. From this view, you can also view, copy, and download the `kubeconfig` to access the Tenant Control Plane as tenant admin.
![Console Tenant Control Plane View](../images/console-tcp-view.png)
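For instance, once downloaded (the file name below is just an example), the kubeconfig can be used to reach the Tenant Control Plane as tenant admin:
```bash
# Use the kubeconfig downloaded from the console (illustrative file name)
kubectl --kubeconfig=tenant-00-admin.kubeconfig cluster-info
kubectl --kubeconfig=tenant-00-admin.kubeconfig get nodes
```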
### Working with Datastore
From the menu bar on the left, clicking on the Datastores item, you can access the list of provisioned Datastores.
It shows summary information about each datastore, including its name and driver, i.e. `etcd`, `MySQL`, `PostgreSQL`, or `NATS`.
![Console Datastore List](../images/console-ds-list.png)
From this view, you can also create, delete, edit, and inspect each datastore.
### Additional Operations
The Kamaji Console offers additional capabilities as part of the commercial edition Clastix Operating Platform:
## Additional Operations
The Kamaji Console offers additional capabilities unlocked by Clastix Enterprise Platform:
- Infrastructure Drivers Management
- Applications Delivery via GitOps Operators
- Applications Delivery
- Centralized Authentication and Access Control
- Auditing and Logging
- Monitoring
- Backup & Restore
!!! note "Ready for more?"
To purchase entitlement to Clastix Operating Platform, please contact hello@clastix.io.

View File

@@ -1,4 +1,4 @@
# General
# Contribute
Thank you for your interest in contributing to Kamaji. Whether it's a bug report, new feature, correction, or additional documentation, we greatly value feedback and contributions from our community.

View File

@@ -169,7 +169,8 @@ admission webhook "catchall.migrate.kamaji.clastix.io" denied the request
After a while, depending on the amount of data to migrate, the Tenant Control Plane is put back in full operating mode by the Kamaji controller.
> Please, note the datastore migration leaves the data on the default datastore, so you have to remove it manually.
!!! info "Leftover"
Please note that the datastore migration leaves the data on the default datastore, so you have to remove it manually.
## Post migration
After migrating data to the new datastore, complete the migration procedure by restarting the `kubelet.service` on all the tenant worker nodes.
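For example, on systemd-based worker nodes this typically means:
```bash
# Run on every tenant worker node (assumes a systemd-managed kubelet)
sudo systemctl restart kubelet.service
sudo systemctl status kubelet.service --no-pager
```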

View File

@@ -1,4 +1,4 @@
# Manage Tenant Control Planes with GitOps
# GitOps
This guide describes a declarative way to deploy Kubernetes add-ons across multiple Tenant Clusters, the GitOps way. An admin may need to apply a specific workload into Tenant Clusters and ensure it is constantly reconciled, no matter what the tenants do in their clusters. Examples include installing monitoring agents, enforcing specific policies, installing infrastructure operators like Cert Manager, and so on.
@@ -27,7 +27,8 @@ NAME VERSION STATUS CONTROL-PLANE-ENDPOINT KUBECONFIG
tenant1 v1.25.1 Ready 172.18.0.2:31443 tenant1-admin-kubeconfig 108s
```
> As the *admin* user has *cluster-admin* `ClusterRole` it will have the necessary privileges to operate on Custom Resources too.
!!! info "Admin Permissions"
As the *admin* user has the *cluster-admin* `ClusterRole`, it will have the necessary privileges to operate on Custom Resources too.
Given that Flux is installed in the *Management Cluster* - installation guide [here](https://fluxcd.io/flux/installation/) - resources can be ensured for specific Tenant Clusters by filling the `spec.kubeConfig` field of the Flux reconciliation resource, as shown in the sketch below.
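As a sketch, a Flux `Kustomization` targeting the `tenant1` cluster through its admin kubeconfig Secret might look like the following; the GitRepository name, path, and Secret data key are assumptions, and Flux requires the referenced Secret to live in the same namespace as the `Kustomization`:
```bash
# Sketch: reconcile a Git path into the tenant1 cluster via its kubeconfig Secret.
kubectl apply -f - <<EOF
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: tenant1-addons
  namespace: flux-system
spec:
  interval: 10m
  path: ./addons
  prune: true
  sourceRef:
    kind: GitRepository
    name: addons-repo            # illustrative GitRepository name
  kubeConfig:
    secretRef:
      name: tenant1-admin-kubeconfig
      key: admin.conf            # assumption: key holding the kubeconfig data
EOF
```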

Binary file not shown.

Before

Width:  |  Height:  |  Size: 150 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 207 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 249 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 304 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 177 KiB

View File

@@ -1,48 +1,7 @@
# Kamaji
**Kamaji** is the **Kubernetes Control Plane Manager**. It operates Kubernetes at scale with a fraction of the operational burden.
## How it works
Kamaji turns any Kubernetes cluster into a _“Management Cluster”_ to orchestrate other Kubernetes clusters called _“Tenant Clusters”_. Kamaji is special because the Control Plane components are running inside pods instead of dedicated machines. This solution makes running multiple Control Planes cheaper and easier to deploy and operate.
<img src="images/architecture.png" width="600">
View [Concepts](concepts.md) for a deeper understanding of principles behind Kamaji's design.
!!! info "CNCF Compliance"
All the Tenant Clusters built with Kamaji are fully compliant [CNCF Certified Kubernetes](https://www.cncf.io/certification/software-conformance/) and are compatible with the standard toolchains everybody knows and loves.
## Getting started
Please refer to the [Getting Started guide](getting-started/index.md) to deploy a minimal setup of Kamaji.
## FAQs
Q. What does Kamaji mean?
A. Kamaji is named as the character _Kamajī_ (釜爺, lit. "Boiler Geezer") from the Japanese movie [_Spirited Away_](https://en.wikipedia.org/wiki/Spirited_Away). Kamajī is an elderly man with six, long arms who operates the boiler room of the Bathhouse. The silent professional, whom no one sees, but who gets the hot, fragrant water to all the guests, like our Kamaji provides Kubernetes as a service!
Q. Is Kamaji another Kubernetes distribution yet?
A. No, Kamaji is a Kubernetes Operator you can install on top of any Kubernetes cluster to provide hundreds or thousands of managed Kubernetes clusters as a service. The tenant clusters made with Kamaji are conformant CNCF Kubernetes clusters as we leverage [`kubeadm`](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/).
Q. How is Kamaji different from typical multi-cluster management solutions?
A. Most of the existing multi-cluster management solutions provision specific infrastructure for the control plane, in most cases dedicated machines. Kamaji is special because the control plane of the downstream clusters are regular pods running in the management cluster. This solution makes running control plane at scale cheaper and easier to deploy and operate.
Q. Is it safe to run Kubernetes control plane components in a pod instead of dedicated virtual machines?
A. Yes, the tenant control plane components are packaged in the same way they are running in bare metal or virtual nodes. We leverage the `kubeadm` code to set up the control plane components as they were running on their own server. The unchanged images of upstream `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager` are used, no forks!.
Q. How is Kamaji different from managed Kubernetes services offered by Public Clouds?
A. Kamaji gives you full control over all your Kubernetes infrastructures, offering unparalleled consistency across disparate environments: cloud, data-center, and edge while simplifying and centralizing operations, maintenance, and management tasks. Unlike other Managed Kubernetes services, Kamaji allows you to connect worker nodes from any infrastructure, providing you greater freedom, flexibility, and consistency than public Managed Kubernetes services.
Q. How Kamaji differs from Cluster API?
A. Kamaji and Cluster API complement each other. Kamaji's core idea is having a more efficient control plane management. Cluster API provides a declarative approach to clusters bootstrap and lifecycle management across different environments, cloud providers, and on-premises infrastructures. Thus combined together you get the best of class: Kamaji by simplifying the Control Plane management, Cluster API to abstract from the infrastructure. See supported [CAPI providers](guides/cluster-api.md) by Kamaji.
Q. You already provide a Kubernetes multi-tenancy solution with [Capsule](https://capsule.clastix.io). Why does Kamaji matter?
A. A multi-tenancy solution, like Capsule shares the Kubernetes control plane among all tenants keeping tenant namespaces isolated by policies. While the solution is the right choice by balancing between features and ease of usage, there are cases where a tenant user requires access to the control plane, for example, when a tenant requires to manage CRDs on his own. With Kamaji, you can provide full cluster admin permissions to the tenant.
---
template: home.html
title: Home
hide:
- navigation
- toc
---

View File

@@ -1,24 +1,19 @@
# CNCF Conformance
For organizations using Kubernetes, conformance enables interoperability, consistency, and confirmability between Kubernetes installations. The Cloud Computing Native Foundation - CNCF - provides the [Certified Kubernetes Conformance Program](https://www.cncf.io/certification/software-conformance/).
For organizations using Kubernetes, conformance enables interoperability, consistency, and confirmability between Kubernetes installations.
The standard set of conformance tests is currently those defined by the `[Conformance]` tag in the
[kubernetes e2e](https://github.com/kubernetes/kubernetes/tree/master/test/e2e) suite.
All the _“Tenant Clusters”_ built with Kamaji are CNCF conformant:
- [v1.23](https://github.com/cncf/k8s-conformance/pull/2194)
- [v1.24](https://github.com/cncf/k8s-conformance/pull/2193)
- [v1.25](https://github.com/cncf/k8s-conformance/pull/2188)
- [v1.26](https://github.com/cncf/k8s-conformance/pull/2787)
- [v1.27](https://github.com/cncf/k8s-conformance/pull/2786)
- [v1.28](https://github.com/cncf/k8s-conformance/pull/2785)
- [v1.29](https://github.com/cncf/k8s-conformance/pull/3273)
- [v1.30](https://github.com/cncf/k8s-conformance/pull/3274)
The Cloud Native Computing Foundation (_CNCF_) provides the [Certified Kubernetes Conformance Program](https://www.cncf.io/certification/software-conformance/).
<p align="left" style="padding: 6px 6px">
<img src="https://raw.githubusercontent.com/cncf/artwork/master/projects/kubernetes/certified-kubernetes/versionless/color/certified-kubernetes-color.png" width="100" />
</p>
All the _“Tenant Clusters”_ built with Kamaji are CNCF conformant.
!!! note "Conformance Test Suite"
The standard set of conformance tests is currently the one defined by the `[Conformance]` tag in the [kubernetes e2e](https://github.com/kubernetes/kubernetes/tree/master/test/e2e) suite.
## Running the conformance tests
The standard tool for running CNCF conformance tests is [Sonobuoy](https://github.com/vmware-tanzu/sonobuoy). Sonobuoy is
@@ -38,7 +33,7 @@ Deploy a Sonobuoy pod to your Tenant Cluster with:
sonobuoy run --mode=certified-conformance
```
> You can run the command synchronously by adding the flag `--wait` but be aware that running the conformance tests can take an hour or more.
You can run the command synchronously by adding the flag `--wait`, but be aware that running the conformance tests can take an hour or more.
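For example, a fully synchronous run followed by results retrieval might look like this (standard Sonobuoy commands):
```bash
# Run the conformance suite synchronously, then download and summarize the results
sonobuoy run --mode=certified-conformance --wait
results=$(sonobuoy retrieve)   # downloads the results tarball and prints its name
sonobuoy results "$results"    # prints a pass/fail summary
```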
View actively running pods:

View File

@@ -19,10 +19,6 @@ Using Edge Release artifacts and reporting bugs helps us ensure a rapid pace of
### Stable Releases
Stable Release artifacts of Kamaji follow semantic versioning, whereby changes in major version denote large feature additions and possible breaking changes and changes in minor versions denote safe upgrades without breaking changes. As of July 2024 [Clastix Labs](https://github.com/clastix) organization does no longer provide stable release artifacts. Latest stable release available is:
Stable Release artifacts of Kamaji follow semantic versioning, whereby changes in major version denote large feature additions and possible breaking changes, and changes in minor versions denote safe upgrades without breaking changes. As of July 2024, [Clastix Labs](https://github.com/clastix) no longer provides stable release artifacts.
| Kamaji | Management Cluster | Tenant Cluster |
|--------|--------------------|----------------------|
| v1.0.0 | v1.22+ | [v1.21.0 .. v1.30.2] |
Stable Release artifacts are offered now on a subscription basis by [CLASTIX](https://clastix.io), the main Kamaji project contributor. Learn more about [available subscription plans](https://clastix.io/support/) provided by CLASTIX.
Stable Release artifacts are now offered on a subscription basis by [CLASTIX](https://clastix.io), the main Kamaji project contributor. Learn more about the available [Subscription Plans](https://clastix.io/support/).

View File

@@ -1,11 +0,0 @@
# Use Cases
Kamaji project has been initially started as a solution for actual and common problems such as minimizing the Total Cost of Ownership while running Kubernetes at large scale. However, it can open a wider range of use cases.
Here are a few:
- **Managed Kubernetes:** enable companies to provide Cloud Native Infrastructure with ease by introducing a strong separation of concerns between management and workloads. Centralize clusters management, monitoring, and observability by leaving developers to focus on applications, increase productivity and reduce operational costs.
- **Kubernetes as a Service:** provide Kubernetes clusters in a self-service fashion by running management and workloads on different infrastructures with the option of Bring Your Own Device, BYOD.
- **Control Plane as a Service:** provide multiple Kubernetes control planes running on top of a single Kubernetes cluster. Tenants who use namespaces based isolation often still need access to cluster wide resources like Cluster Roles, Admission Webhooks, or Custom Resource Definitions.
- **Edge Computing:** distribute Kubernetes workloads across edge computing locations without having to manage multiple clusters across various providers. Centralize management of hundreds of control planes while leaving workloads to run isolated on their own dedicated infrastructure.
- **Cluster Simulation:** check new Kubernetes API or experimental flag or a new tool without impacting production operations. Kamaji will let you simulate such things in a safe and controlled environment.
- **Workloads Testing:** check the behaviour of your workloads on different and multiple versions of Kubernetes with ease by deploying multiple Control Planes in a single cluster.

View File

@@ -7,7 +7,7 @@ docs_dir: content
site_dir: site
site_author: bsctl
site_description: >-
Kamaji deploys and operates Kubernetes Control Plane at scale with a fraction of the operational burden.
Kamaji is the Control Plane Manager for Kubernetes
copyright: Copyright &copy; 2020 - 2025 Clastix Labs
@@ -23,28 +23,21 @@ theme:
- content.code.copy
include_sidebar: true
palette:
# Palette toggle for automatic mode
- media: "(prefers-color-scheme)"
- scheme: default
primary: white
toggle:
icon: material/brightness-auto
name: Switch to light mode
# Palette toggle for light mode
- media: "(prefers-color-scheme: light)"
scheme: default
primary: white
media: "(prefers-color-scheme: light)"
toggle:
icon: material/lightbulb
name: Switch to dark mode
# Palette toggle for dark mode
- media: "(prefers-color-scheme: dark)"
scheme: slate
- scheme: slate
primary: white
media: "(prefers-color-scheme: dark)"
toggle:
icon: material/lightbulb-outline
name: Switch to system preference
name: Switch to light mode
favicon: images/favicon.png
logo: images/logo.png
custom_dir: overrides
markdown_extensions:
- admonition
@@ -57,9 +50,16 @@ nav:
- 'Kamaji': index.md
- 'Getting started':
- getting-started/index.md
- getting-started/getting-started.md
- getting-started/kind.md
- 'Concepts': concepts.md
- getting-started/kamaji-kind.md
- getting-started/kamaji-generic.md
- getting-started/kamaji-aws.md
- getting-started/kamaji-azure.md
- 'Concepts':
- concepts/index.md
- concepts/tenant-control-plane.md
- concepts/datastore.md
- concepts/tenant-worker-nodes.md
- concepts/konnectivity.md
- 'Cluster API':
- cluster-api/index.md
- cluster-api/control-plane-provider.md
@@ -69,17 +69,15 @@ nav:
- cluster-api/other-providers.md
- 'Guides':
- guides/index.md
- guides/kamaji-azure-deployment.md
- guides/kamaji-aws-deployment.md
- guides/alternative-datastore.md
- guides/kamaji-gitops-flux.md
- guides/gitops.md
- guides/upgrade.md
- guides/datastore-migration.md
- guides/backup-and-restore.md
- guides/monitoring.md
- guides/certs-lifecycle.md
- guides/console.md
- 'Use Cases': use-cases.md
- guides/certs-lifecycle.md
- guides/contribute.md
- 'Reference':
- reference/index.md
- reference/benchmark.md
@@ -88,7 +86,7 @@ nav:
- reference/versioning.md
- reference/api.md
- 'Telemetry': telemetry.md
- 'Enterprise Addons':
- 'Addons':
- enterprise-addons/index.md
- enterprise-addons/ingress.md
- 'Contribute': contribute.md

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 5.1 KiB

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 5.0 KiB

View File

@@ -0,0 +1,226 @@
/* Root variables */
:root {
--spacing-xs: 0.5rem;
--spacing-sm: 0.8rem;
--spacing-md: 1.2rem;
--spacing-lg: 2rem;
--border-radius: 4px;
}
/* Common section styles */
.tx-section {
padding: var(--spacing-lg) 0;
}
.tx-section--alternate {
background: var(--md-default-bg-color--lightest);
}
/* Grid layouts */
.tx-grid {
display: grid;
gap: var(--spacing-lg);
}
.tx-grid--3x2 {
grid-template-columns: repeat(3, 1fr);
}
.tx-grid--3x1 {
grid-template-columns: repeat(3, 1fr);
}
/* Card styles */
.tx-card {
padding: var(--spacing-md);
background: var(--md-default-bg-color);
border-radius: var(--border-radius);
box-shadow: var(--md-shadow-z1);
transition: transform 0.125s ease, box-shadow 0.125s ease;
}
.tx-card:hover {
transform: translateY(-2px);
box-shadow: var(--md-shadow-z2);
}
/* FAQ styles */
.tx-faq {
max-width: var(--md-typeset-width);
margin: 0 auto;
padding: 0 var(--spacing-md);
}
.tx-faq_item {
margin-bottom: var(--spacing-lg);
}
.tx-faq_question {
font-size: 1rem;
margin-bottom: 1rem;
color: var(--md-default-fg-color);
}
.tx-faq_answer {
font-size: 0.8rem;
}
.tx-hero {
padding: var(--spacing-lg) 0;
}
.tx-hero .md-grid {
display: flex;
align-items: center;
gap: var(--spacing-lg);
}
.tx-hero_content {
flex: 1;
margin: 0;
}
.tx-hero_image {
flex: 0.618;
text-align: center;
}
.tx-hero_image img {
max-width: 100%;
height: auto;
}
.tx-hero_buttons {
margin-top: var(--spacing-lg);
}
/* Dark Mode Styles */
[data-md-color-scheme="slate"] {
/* Hero section */
.tx-hero {
background: var(--md-default-bg-color);
color: var(--md-default-fg-color);
}
/* Primary button */
.tx-hero .md-button.md-button--primary {
border: none;
color: #ffffff;
background-color: #D81D56;
}
/* Secondary button */
.tx-hero .md-button:not(.md-button--primary) {
border: none;
color: #24456F;
background-color: #DBDCDB;
}
/* Hover for both buttons */
.tx-hero .md-button:hover {
color: #24456F;
background-color: #ffffff;
}
/* Hero images */
.tx-hero_image--light {
display: block;
}
.tx-hero_image--dark {
display: none;
}
}
/* Light Mode Styles */
[data-md-color-scheme="default"] {
/* Hero section */
.tx-hero {
background: var(--md-default-bg-color--dark);
color: var(--md-primary-bg-color);
}
/* Primary button */
.tx-hero .md-button.md-button--primary {
border: none;
color: #ffffff;
background-color: #D81D56;
}
/* Secondary button */
.tx-hero .md-button:not(.md-button--primary) {
border: none;
color: #24456F;
background-color: #DBDCDB;
}
/* Hover for both buttons */
.tx-hero .md-button:hover {
background-color: #24456F;
color: #ffffff;
}
/* Hero images */
.tx-hero_image--light {
display: none;
}
.tx-hero_image--dark {
display: block;
}
}
/* Responsive design */
@media screen and (max-width: 960px) {
.tx-hero .md-grid {
flex-direction: column;
text-align: center;
}
.tx-hero_content {
margin-bottom: var(--spacing-lg);
}
.tx-grid--3x2,
.tx-grid--3x1 {
grid-template-columns: repeat(2, 1fr);
}
}
@media screen and (max-width: 600px) {
.tx-grid--3x2,
.tx-grid--3x1 {
grid-template-columns: 1fr;
}
.tx-hero_buttons {
display: flex;
flex-direction: column;
gap: var(--spacing-sm);
}
.tx-hero_buttons .md-button {
width: 100%;
margin: 0;
}
}
/* Icon styles */
.tx-card_icon {
margin-bottom: 1rem;
}
.tx-card_icon svg {
width: 2rem;
height: 2rem;
}
/* Light Mode Icon Color */
[data-md-color-scheme="default"] .tx-card_icon svg {
fill: #24456F;
}
/* Dark Mode Icon Color */
[data-md-color-scheme="slate"] .tx-card_icon svg {
fill: #ffffff;
}

197
docs/overrides/home.html Normal file
View File

@@ -0,0 +1,197 @@
{% extends "main.html" %}
{% block extrahead %}
<link rel="stylesheet" href="{{ 'assets/stylesheets/home.css' | url }}">
{% endblock %}
{% block content %}
<!-- Hero Section -->
<section class="tx-hero">
<div class="md-grid md-typeset">
<div class="tx-hero_content">
<h1>The Control Plane Manager for Kubernetes</h1>
<p>Kamaji runs the Control Plane as pods within a Management Cluster, rather than on dedicated machines. This approach simplifies operations and enables the management of multiple Kubernetes clusters with a fraction of the operational burden.</p>
<div class="tx-hero_buttons">
<a href="{{ 'getting-started/' | url }}" class="md-button md-button--primary">Get Started</a>
<a href="{{ 'concepts/' | url }}" class="md-button">Concepts</a>
</div>
</div>
<div class="tx-hero_image">
<img class="tx-hero_image--light" src="assets/images/hero_logo_light.svg" alt="Kamaji Light Theme" draggable="false">
<img class="tx-hero_image--dark" src="assets/images/hero_logo_dark.svg" alt="Kamaji Dark Theme" draggable="false">
</div>
</div>
</section>
<!-- Highlights Section -->
<section class="tx-section tx-section--alternate">
<div class="md-grid md-typeset">
<h2 class="tx-section-title">Highlights</h2>
<div class="tx-grid tx-grid--3x2">
<div class="tx-card">
<div class="tx-card_icon">
{% include ".icons/fontawesome/solid/layer-group.svg" %}
</div>
<h3>Multi-Tenancy</h3>
<p>Deploy multiple Kubernetes control planes as pods within a single management cluster. Each control plane operates independently, ensuring complete isolation between tenants.</p>
</div>
<div class="tx-card">
<div class="tx-card_icon">
{% include ".icons/fontawesome/solid/cube.svg" %}
</div>
<h3>Upstream Kubernetes</h3>
<p>Uses unmodified upstream Kubernetes components and leverages kubeadm, the default tool for cluster bootstrapping and management.</p>
</div>
<div class="tx-card">
<div class="tx-card_icon">
{% include ".icons/fontawesome/solid/server.svg" %}
</div>
<h3>Infrastructure Agnostic</h3>
<p>Connect worker nodes from any infrastructure provider. Supports bare metal, virtual machines, and cloud instances, allowing hybrid and multi-cloud deployments.</p>
</div>
<div class="tx-card">
<div class="tx-card_icon">
{% include ".icons/fontawesome/solid/chart-line.svg" %}
</div>
<h3>Resource Optimization</h3>
<p>Control planes run as pods, sharing the management cluster's resources efficiently. Scale control planes independently based on actual usage patterns and requirements.</p>
</div>
<div class="tx-card">
<div class="tx-card_icon">
{% include ".icons/fontawesome/solid/puzzle-piece.svg" %}
</div>
<h3>Cluster API Integration</h3>
<p>Seamlessly integrates with Cluster API providers for automated infrastructure provisioning and lifecycle management across different environments.</p>
</div>
<div class="tx-card">
<div class="tx-card_icon">
{% include ".icons/fontawesome/solid/shield.svg" %}
</div>
<h3>High Availability</h3>
<p>Support for multi-node control plane deployments with distributed etcd clusters. Includes automated failover and recovery mechanisms for production workloads.</p>
</div>
</div>
</div>
</section>
<!-- Use Cases Section -->
<section class="tx-section tx-section--alternate">
<div class="md-grid md-typeset">
<h2 class="tx-section-title">Use Cases</h2>
<div class="tx-grid tx-grid--3x2">
<div class="tx-card">
<div class="tx-card_icon">
{% include ".icons/fontawesome/solid/building.svg" %}
</div>
<h3>Private Cloud</h3>
<p>Optimize your data center resources by running multiple Kubernetes control planes. Perfect for organizations that need complete control over their infrastructure while maintaining strict isolation between different business units.</p>
</div>
<div class="tx-card">
<div class="tx-card_icon">
{% include ".icons/fontawesome/solid/cloud.svg" %}
</div>
<h3>Public Cloud</h3>
<p>Build independent public cloud offerings with Kubernetes as a Service capabilities. Provide the same user experience of major cloud providers while maintaining full control over the infrastructure and operational costs.</p>
</div>
<div class="tx-card">
<div class="tx-card_icon">
{% include ".icons/fontawesome/solid/microchip.svg" %}
</div>
<h3>Bare Metal</h3>
<p>Maximize hardware utilization by running multiple control planes on your physical infrastructure. Ideal for environments where direct hardware access, network performance, and data locality are critical.</p>
</div>
<div class="tx-card">
<div class="tx-card_icon">
{% include ".icons/fontawesome/solid/wave-square.svg" %}
</div>
<h3>Edge Computing</h3>
<p>Run lightweight Kubernetes clusters at the edge while managing their control planes centrally. Reduce the hardware footprint at edge locations by keeping control planes in your central management cluster.</p>
</div>
<div class="tx-card">
<div class="tx-card_icon">
{% include ".icons/fontawesome/solid/gears.svg" %}
</div>
<h3>Platform Engineering</h3>
<p>Build internal Kubernetes platforms with standardized cluster provisioning and management. Enable self-service capabilities while maintaining centralized control and governance over all clusters.</p>
</div>
<div class="tx-card">
<div class="tx-card_icon">
{% include ".icons/fontawesome/solid/cloud-arrow-up.svg" %}
</div>
<h3>BYO Cloud</h3>
<p>Create your own managed Kubernetes service using standard upstream components. Provide dedicated clusters to your users while maintaining operational efficiency through centralized control plane management.</p>
</div>
</div>
</div>
</section>
<!-- FAQ Section -->
<section class="tx-section tx-section--alternate">
<div class="md-grid md-typeset">
<h2 class="tx-section-title">Frequently Asked Questions</h2>
<div class="tx-faq">
<div class="tx-faq_item">
<div class="tx-faq_question">Q. What does Kamaji mean?</div>
<div class="tx-faq_answer">
<p>A. Kamaji is named after <em>Kamajī ( かまじ )</em> from the Japanese movie <a href="https://en.wikipedia.org/wiki/Spirited_Away">Spirited Away</a>. Kamajī is the boiler room operator who efficiently manages the bathhouse's water system - just like how our Kamaji manages Kubernetes clusters!</p>
</div>
</div>
<div class="tx-faq_item">
<div class="tx-faq_question">Q. Is Kamaji another Kubernetes distribution?</div>
<div class="tx-faq_answer">
<p>A. No, Kamaji is a Kubernetes Operator that provides managed Kubernetes clusters as a service, leveraging kubeadm for conformant CNCF Kubernetes clusters.</p>
</div>
</div>
<div class="tx-faq_item">
<div class="tx-faq_question">Q. How is it different from typical solutions?</div>
<div class="tx-faq_answer">
<p>A. Kamaji runs the Control Plane as regular pods in the Management Cluster, offering it as a service and making it more cost-effective and easier to operate at scale.</p>
</div>
</div>
<div class="tx-faq_item">
<div class="tx-faq_question">Q. How does it compare to Public Cloud services?</div>
<div class="tx-faq_answer">
<p>A. Kamaji gives you full control over your Kubernetes infrastructures, offering consistency across cloud, data-center, and edge while simplifying centralized operations.</p>
</div>
</div>
<div class="tx-faq_item">
<div class="tx-faq_question">Q. How does it differ from Cluster API?</div>
<div class="tx-faq_answer">
<p>A. They complement each other: Kamaji simplifies Control Plane management, while Cluster API handles infrastructure abstraction and lifecycle management.</p>
</div>
</div>
<div class="tx-faq_item">
<div class="tx-faq_question">Q. Why Kamaji when Capsule exists?</div>
<div class="tx-faq_answer">
<p>A. While <a href="https://projectcapsule.dev">Capsule</a> provides a single control plane with isolated namespaces, Kamaji provides dedicated control planes when tenants need full cluster admin permissions.</p>
</div>
</div>
<div class="tx-faq_item">
<div class="tx-faq_question">Q. Do you provide support?</div>
<div class="tx-faq_answer">
<p>A. Yes, <a href="https://clastix.io">Clastix</a> offers subscription-based, enterprise-grade support plans for Kamaji. Please contact us to discuss your support needs.</p>
</div>
</div>
</div>
</div>
</section>
{% endblock %}

View File

@@ -1,2 +1,2 @@
mkdocs>=1.3.0
mkdocs-material>=8.2.8
mkdocs
mkdocs-material