Mirror of https://github.com/outbackdingo/kamaji.git (synced 2026-01-27 02:19:22 +00:00)

commit 5183ae09e9
parent 432c50b081
committed by Dario Tranchitella

    feat(docs): add documentation
@@ -43,7 +43,7 @@ A dedicated `etcd` cluster for each tenant cluster doesn’t scale well for a ma
 
 With this solution, the resiliency is guaranteed by the usual `etcd` mechanism, and the pods' count remains under control, so it solves the main goal of resiliency and costs optimization. The trade-off here is that we have to operate an external `etcd` cluster and manage the access to be sure that each tenant cluster uses only its data. Also, there are limits in size in `etcd`, defaulted to 2GB and configurable to a maximum of 8GB. We’re solving this issue by pooling multiple `etcd` and sharding the tenant control planes.
 
 ## Use cases
 
-Kamaji project has been initially started as a solution for actual and common problems such as minimizing the Total Cost of Ownership - TCO while running Kubernetes at scale. However, it can open a wider range of use cases. Here are a few:
+Kamaji project has been initially started as a solution for actual and common problems such as minimizing the Total Cost of Ownership while running Kubernetes at scale. However, it can open a wider range of use cases. Here are a few:
 
 ### Managed Kubernetes
 
 Enabling companies to provide Cloud Native Infrastructure with ease by introducing a strong separation of concerns between management and workloads. Centralize clusters management, monitoring, and observability by leaving developers to focus on the applications, increase productivity and reduce operational costs.
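As a hedged aside on the storage limit mentioned above: the 2GB figure is etcd's default backend quota, normally raised with the standard `--quota-backend-bytes` flag; the value below is only illustrative of the roughly 8GB practical ceiling.

```bash
# Illustrative only: start etcd with the backend quota raised towards the ~8 GiB ceiling.
etcd --quota-backend-bytes=8589934592 --data-dir=/var/run/etcd
```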
@@ -80,7 +80,7 @@ Tenant clusters are fully CNCF compliant built with upstream Kubernetes binaries
 ## Roadmap
 
 - [ ] Benchmarking and stress-test
-- [ ] Support for dynamic address allocation on native Load Balancer
+- [x] Support for dynamic address allocation on native Load Balancer
 - [x] Zero Downtime Tenant Control Plane upgrade
 - [ ] `konnectivity` integration
 - [ ] Provisioning of Tenant Control Plane through Cluster APIs
@@ -119,4 +119,4 @@ A. Lighter Multi-Tenancy solutions, like Capsule shares the Kubernetes control p
 
 Q. So I need a costly cloud infrastructure to try Kamaji?
 
-A. No, it is possible to try Kamaji on your laptop with [KinD](./deploy/kind/README.md). We're porting it also on Canonical MicroK8s as an add-on, _tbd_.
+A. No, it is possible to try Kamaji on your laptop with [KinD](./deploy/kind/README.md).
deploy/calico-cni/calico-azure.yaml (new file, 680 lines)
@@ -0,0 +1,680 @@
|
||||
---
|
||||
# Source: calico/templates/calico-config.yaml
|
||||
# This ConfigMap is used to configure a self-hosted Calico installation.
|
||||
kind: ConfigMap
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: calico-config
|
||||
namespace: kube-system
|
||||
data:
|
||||
# Typha is disabled.
|
||||
typha_service_name: "none"
|
||||
# Configure the backend to use.
|
||||
calico_backend: "vxlan"
|
||||
|
||||
# Configure the MTU to use for workload interfaces and tunnels.
|
||||
# By default, MTU is auto-detected, and explicitly setting this field should not be required.
|
||||
# You can override auto-detection by providing a non-zero value.
|
||||
veth_mtu: "0"
|
||||
|
||||
# The CNI network configuration to install on each node. The special
|
||||
# values in this config will be automatically populated.
|
||||
cni_network_config: |-
|
||||
{
|
||||
"name": "k8s-pod-network",
|
||||
"cniVersion": "0.3.1",
|
||||
"plugins": [
|
||||
{
|
||||
"type": "calico",
|
||||
"log_level": "info",
|
||||
"log_file_path": "/var/log/calico/cni/cni.log",
|
||||
"datastore_type": "kubernetes",
|
||||
"nodename": "__KUBERNETES_NODE_NAME__",
|
||||
"mtu": __CNI_MTU__,
|
||||
"ipam": {
|
||||
"type": "calico-ipam"
|
||||
},
|
||||
"policy": {
|
||||
"type": "k8s"
|
||||
},
|
||||
"kubernetes": {
|
||||
"kubeconfig": "__KUBECONFIG_FILEPATH__"
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "portmap",
|
||||
"snat": true,
|
||||
"capabilities": {"portMappings": true}
|
||||
},
|
||||
{
|
||||
"type": "bandwidth",
|
||||
"capabilities": {"bandwidth": true}
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
name: calico-node
|
||||
rules:
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- pods
|
||||
- nodes
|
||||
- namespaces
|
||||
verbs:
|
||||
- get
|
||||
- apiGroups:
|
||||
- discovery.k8s.io
|
||||
resources:
|
||||
- endpointslices
|
||||
verbs:
|
||||
- watch
|
||||
- list
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- endpoints
|
||||
- services
|
||||
verbs:
|
||||
- watch
|
||||
- list
|
||||
- get
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- configmaps
|
||||
verbs:
|
||||
- get
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- nodes/status
|
||||
verbs:
|
||||
- patch
|
||||
- update
|
||||
- apiGroups:
|
||||
- networking.k8s.io
|
||||
resources:
|
||||
- networkpolicies
|
||||
verbs:
|
||||
- watch
|
||||
- list
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- pods
|
||||
- namespaces
|
||||
- serviceaccounts
|
||||
verbs:
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- pods/status
|
||||
verbs:
|
||||
- patch
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- globalfelixconfigs
|
||||
- felixconfigurations
|
||||
- bgppeers
|
||||
- globalbgpconfigs
|
||||
- bgpconfigurations
|
||||
- ippools
|
||||
- ipamblocks
|
||||
- globalnetworkpolicies
|
||||
- globalnetworksets
|
||||
- networkpolicies
|
||||
- networksets
|
||||
- clusterinformations
|
||||
- hostendpoints
|
||||
- blockaffinities
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- ippools
|
||||
- felixconfigurations
|
||||
- clusterinformations
|
||||
verbs:
|
||||
- create
|
||||
- update
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- nodes
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- bgpconfigurations
|
||||
- bgppeers
|
||||
verbs:
|
||||
- create
|
||||
- update
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- blockaffinities
|
||||
- ipamblocks
|
||||
- ipamhandles
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- create
|
||||
- update
|
||||
- delete
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- ipamconfigs
|
||||
verbs:
|
||||
- get
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- blockaffinities
|
||||
verbs:
|
||||
- watch
|
||||
- apiGroups:
|
||||
- apps
|
||||
resources:
|
||||
- daemonsets
|
||||
verbs:
|
||||
- get
|
||||
|
||||
---
|
||||
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
name: calico-kube-controllers
|
||||
rules:
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- nodes
|
||||
verbs:
|
||||
- watch
|
||||
- list
|
||||
- get
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- pods
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- ippools
|
||||
verbs:
|
||||
- list
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- blockaffinities
|
||||
- ipamblocks
|
||||
- ipamhandles
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- create
|
||||
- update
|
||||
- delete
|
||||
- watch
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- hostendpoints
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- create
|
||||
- update
|
||||
- delete
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- clusterinformations
|
||||
verbs:
|
||||
- get
|
||||
- create
|
||||
- update
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- kubecontrollersconfigurations
|
||||
verbs:
|
||||
- get
|
||||
- create
|
||||
- update
|
||||
- watch
|
||||
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: calico-node
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: calico-node
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: calico-node
|
||||
namespace: kube-system
|
||||
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: calico-kube-controllers
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: calico-kube-controllers
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: calico-kube-controllers
|
||||
namespace: kube-system
|
||||
---
|
||||
# Source: calico/templates/calico-node.yaml
|
||||
# This manifest installs the calico-node container, as well
|
||||
# as the CNI plugins and network config on
|
||||
# each master and worker node in a Kubernetes cluster.
|
||||
kind: DaemonSet
|
||||
apiVersion: apps/v1
|
||||
metadata:
|
||||
name: calico-node
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: calico-node
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
k8s-app: calico-node
|
||||
updateStrategy:
|
||||
type: RollingUpdate
|
||||
rollingUpdate:
|
||||
maxUnavailable: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: calico-node
|
||||
spec:
|
||||
nodeSelector:
|
||||
kubernetes.io/os: linux
|
||||
hostNetwork: true
|
||||
tolerations:
|
||||
# Make sure calico-node gets scheduled on all nodes.
|
||||
- effect: NoSchedule
|
||||
operator: Exists
|
||||
# Mark the pod as a critical add-on for rescheduling.
|
||||
- key: CriticalAddonsOnly
|
||||
operator: Exists
|
||||
- effect: NoExecute
|
||||
operator: Exists
|
||||
serviceAccountName: calico-node
|
||||
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
|
||||
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
|
||||
terminationGracePeriodSeconds: 0
|
||||
priorityClassName: system-node-critical
|
||||
initContainers:
|
||||
# This container performs upgrade from host-local IPAM to calico-ipam.
|
||||
# It can be deleted if this is a fresh installation, or if you have already
|
||||
# upgraded to use calico-ipam.
|
||||
- name: upgrade-ipam
|
||||
image: docker.io/calico/cni:v3.20.0
|
||||
command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
|
||||
envFrom:
|
||||
- configMapRef:
|
||||
# Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
|
||||
name: kubernetes-services-endpoint
|
||||
optional: true
|
||||
env:
|
||||
- name: KUBERNETES_NODE_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: spec.nodeName
|
||||
- name: CALICO_NETWORKING_BACKEND
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: calico_backend
|
||||
volumeMounts:
|
||||
- mountPath: /var/lib/cni/networks
|
||||
name: host-local-net-dir
|
||||
- mountPath: /host/opt/cni/bin
|
||||
name: cni-bin-dir
|
||||
securityContext:
|
||||
privileged: true
|
||||
# This container installs the CNI binaries
|
||||
# and CNI network config file on each node.
|
||||
- name: install-cni
|
||||
image: docker.io/calico/cni:v3.20.0
|
||||
command: ["/opt/cni/bin/install"]
|
||||
envFrom:
|
||||
- configMapRef:
|
||||
# Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
|
||||
name: kubernetes-services-endpoint
|
||||
optional: true
|
||||
env:
|
||||
# Name of the CNI config file to create.
|
||||
- name: CNI_CONF_NAME
|
||||
value: "10-calico.conflist"
|
||||
# The CNI network config to install on each node.
|
||||
- name: CNI_NETWORK_CONFIG
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: cni_network_config
|
||||
# Set the hostname based on the k8s node name.
|
||||
- name: KUBERNETES_NODE_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: spec.nodeName
|
||||
# CNI MTU Config variable
|
||||
- name: CNI_MTU
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: veth_mtu
|
||||
# Prevents the container from sleeping forever.
|
||||
- name: SLEEP
|
||||
value: "false"
|
||||
volumeMounts:
|
||||
- mountPath: /host/opt/cni/bin
|
||||
name: cni-bin-dir
|
||||
- mountPath: /host/etc/cni/net.d
|
||||
name: cni-net-dir
|
||||
securityContext:
|
||||
privileged: true
|
||||
# Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
|
||||
# to communicate with Felix over the Policy Sync API.
|
||||
- name: flexvol-driver
|
||||
image: docker.io/calico/pod2daemon-flexvol:v3.20.0
|
||||
volumeMounts:
|
||||
- name: flexvol-driver-host
|
||||
mountPath: /host/driver
|
||||
securityContext:
|
||||
privileged: true
|
||||
containers:
|
||||
# Runs calico-node container on each Kubernetes node. This
|
||||
# container programs network policy and routes on each
|
||||
# host.
|
||||
- name: calico-node
|
||||
image: docker.io/calico/node:v3.20.0
|
||||
envFrom:
|
||||
- configMapRef:
|
||||
# Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
|
||||
name: kubernetes-services-endpoint
|
||||
optional: true
|
||||
env:
|
||||
# Use Kubernetes API as the backing datastore.
|
||||
- name: DATASTORE_TYPE
|
||||
value: "kubernetes"
|
||||
# Wait for the datastore.
|
||||
- name: WAIT_FOR_DATASTORE
|
||||
value: "true"
|
||||
# Set based on the k8s node name.
|
||||
- name: NODENAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: spec.nodeName
|
||||
# Choose the backend to use.
|
||||
- name: CALICO_NETWORKING_BACKEND
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: calico_backend
|
||||
# Cluster type to identify the deployment type
|
||||
- name: CLUSTER_TYPE
|
||||
value: "k8s"
|
||||
# Auto-detect the BGP IP address.
|
||||
- name: IP
|
||||
value: "autodetect"
|
||||
            # IPIP is disabled in this manifest; VXLAN (enabled below) provides the overlay instead.
|
||||
- name: CALICO_IPV4POOL_IPIP
|
||||
value: "Never"
|
||||
# Enable or Disable VXLAN on the default IP pool.
|
||||
- name: CALICO_IPV4POOL_VXLAN
|
||||
value: "Always"
|
||||
# Set MTU for tunnel device used if ipip is enabled
|
||||
- name: FELIX_IPINIPMTU
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: veth_mtu
|
||||
# Set MTU for the VXLAN tunnel device.
|
||||
- name: FELIX_VXLANMTU
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: veth_mtu
|
||||
# Set MTU for the Wireguard tunnel device.
|
||||
- name: FELIX_WIREGUARDMTU
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: veth_mtu
|
||||
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
|
||||
# chosen from this range. Changing this value after installation will have
|
||||
# no effect. This should fall within `--cluster-cidr`.
|
||||
- name: CALICO_IPV4POOL_CIDR
|
||||
value: "10.36.0.0/16"
|
||||
# Disable file logging so `kubectl logs` works.
|
||||
- name: CALICO_DISABLE_FILE_LOGGING
|
||||
value: "true"
|
||||
# Set Felix endpoint to host default action to ACCEPT.
|
||||
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
|
||||
value: "ACCEPT"
|
||||
# Disable IPv6 on Kubernetes.
|
||||
- name: FELIX_IPV6SUPPORT
|
||||
value: "false"
|
||||
- name: FELIX_HEALTHENABLED
|
||||
value: "true"
|
||||
securityContext:
|
||||
privileged: true
|
||||
resources:
|
||||
requests:
|
||||
cpu: 250m
|
||||
livenessProbe:
|
||||
exec:
|
||||
command:
|
||||
- /bin/calico-node
|
||||
- -felix-live
|
||||
#- -bird-live
|
||||
periodSeconds: 10
|
||||
initialDelaySeconds: 10
|
||||
failureThreshold: 6
|
||||
timeoutSeconds: 10
|
||||
readinessProbe:
|
||||
exec:
|
||||
command:
|
||||
- /bin/calico-node
|
||||
- -felix-ready
|
||||
#- -bird-ready
|
||||
periodSeconds: 10
|
||||
timeoutSeconds: 10
|
||||
volumeMounts:
|
||||
# For maintaining CNI plugin API credentials.
|
||||
- mountPath: /host/etc/cni/net.d
|
||||
name: cni-net-dir
|
||||
readOnly: false
|
||||
- mountPath: /lib/modules
|
||||
name: lib-modules
|
||||
readOnly: true
|
||||
- mountPath: /run/xtables.lock
|
||||
name: xtables-lock
|
||||
readOnly: false
|
||||
- mountPath: /var/run/calico
|
||||
name: var-run-calico
|
||||
readOnly: false
|
||||
- mountPath: /var/lib/calico
|
||||
name: var-lib-calico
|
||||
readOnly: false
|
||||
- name: policysync
|
||||
mountPath: /var/run/nodeagent
|
||||
# For eBPF mode, we need to be able to mount the BPF filesystem at /sys/fs/bpf so we mount in the
|
||||
# parent directory.
|
||||
- name: sysfs
|
||||
mountPath: /sys/fs/
|
||||
# Bidirectional means that, if we mount the BPF filesystem at /sys/fs/bpf it will propagate to the host.
|
||||
# If the host is known to mount that filesystem already then Bidirectional can be omitted.
|
||||
mountPropagation: Bidirectional
|
||||
- name: cni-log-dir
|
||||
mountPath: /var/log/calico/cni
|
||||
readOnly: true
|
||||
volumes:
|
||||
# Used by calico-node.
|
||||
- name: lib-modules
|
||||
hostPath:
|
||||
path: /lib/modules
|
||||
- name: var-run-calico
|
||||
hostPath:
|
||||
path: /var/run/calico
|
||||
- name: var-lib-calico
|
||||
hostPath:
|
||||
path: /var/lib/calico
|
||||
- name: xtables-lock
|
||||
hostPath:
|
||||
path: /run/xtables.lock
|
||||
type: FileOrCreate
|
||||
- name: sysfs
|
||||
hostPath:
|
||||
path: /sys/fs/
|
||||
type: DirectoryOrCreate
|
||||
# Used to install CNI.
|
||||
- name: cni-bin-dir
|
||||
hostPath:
|
||||
path: /opt/cni/bin
|
||||
- name: cni-net-dir
|
||||
hostPath:
|
||||
path: /etc/cni/net.d
|
||||
# Used to access CNI logs.
|
||||
- name: cni-log-dir
|
||||
hostPath:
|
||||
path: /var/log/calico/cni
|
||||
# Mount in the directory for host-local IPAM allocations. This is
|
||||
# used when upgrading from host-local to calico-ipam, and can be removed
|
||||
# if not using the upgrade-ipam init container.
|
||||
- name: host-local-net-dir
|
||||
hostPath:
|
||||
path: /var/lib/cni/networks
|
||||
# Used to create per-pod Unix Domain Sockets
|
||||
- name: policysync
|
||||
hostPath:
|
||||
type: DirectoryOrCreate
|
||||
path: /var/run/nodeagent
|
||||
# Used to install Flex Volume Driver
|
||||
- name: flexvol-driver-host
|
||||
hostPath:
|
||||
type: DirectoryOrCreate
|
||||
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
|
||||
---
|
||||
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: calico-node
|
||||
namespace: kube-system
|
||||
|
||||
---
|
||||
# Source: calico/templates/calico-kube-controllers.yaml
|
||||
# See https://github.com/projectcalico/kube-controllers
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: calico-kube-controllers
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: calico-kube-controllers
|
||||
spec:
|
||||
# The controllers can only have a single active instance.
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
k8s-app: calico-kube-controllers
|
||||
strategy:
|
||||
type: Recreate
|
||||
template:
|
||||
metadata:
|
||||
name: calico-kube-controllers
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: calico-kube-controllers
|
||||
spec:
|
||||
tolerations:
|
||||
# Mark the pod as a critical add-on for rescheduling.
|
||||
- key: CriticalAddonsOnly
|
||||
operator: Exists
|
||||
- key: node-role.kubernetes.io/master
|
||||
effect: NoSchedule
|
||||
serviceAccountName: calico-kube-controllers
|
||||
priorityClassName: system-cluster-critical
|
||||
containers:
|
||||
- name: calico-kube-controllers
|
||||
image: docker.io/calico/kube-controllers:v3.20.0
|
||||
          resources: {}
|
||||
env:
|
||||
# Choose which controllers to run.
|
||||
- name: ENABLED_CONTROLLERS
|
||||
value: node
|
||||
- name: DATASTORE_TYPE
|
||||
value: kubernetes
|
||||
livenessProbe:
|
||||
exec:
|
||||
command:
|
||||
- /usr/bin/check-status
|
||||
- -l
|
||||
periodSeconds: 10
|
||||
initialDelaySeconds: 10
|
||||
failureThreshold: 6
|
||||
timeoutSeconds: 10
|
||||
readinessProbe:
|
||||
exec:
|
||||
command:
|
||||
- /usr/bin/check-status
|
||||
- -r
|
||||
periodSeconds: 10
|
||||
|
||||
---
|
||||
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: calico-kube-controllers
|
||||
namespace: kube-system
|
||||
|
||||
---
|
||||
|
||||
# This manifest creates a Pod Disruption Budget for the controller, so that the Kubernetes Cluster Autoscaler can evict it if needed.
|
||||
|
||||
apiVersion: policy/v1beta1
|
||||
kind: PodDisruptionBudget
|
||||
metadata:
|
||||
name: calico-kube-controllers
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: calico-kube-controllers
|
||||
spec:
|
||||
maxUnavailable: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
k8s-app: calico-kube-controllers
|
||||
|
||||
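A note on usage: the manifest above is meant for the tenant clusters (its default pool matches the tenant pod CIDR used elsewhere in this commit), and it expects the Calico CRDs added below to exist first. A minimal sketch, assuming the tenant kubeconfig has been exported to `/tmp/kubeconfig` as shown later in these docs:

```bash
# Install the CRDs, then the VXLAN-backed Calico manifest, and wait for the DaemonSet rollout.
kubectl --kubeconfig /tmp/kubeconfig apply -f deploy/calico-cni/calico-crd.yaml
kubectl --kubeconfig /tmp/kubeconfig apply -f deploy/calico-cni/calico-azure.yaml
kubectl --kubeconfig /tmp/kubeconfig -n kube-system rollout status daemonset/calico-node
```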
deploy/calico-cni/calico-crd.yaml (new file, 3385 lines; diff suppressed because it is too large)
deploy/calico-cni/calico.yaml (new file, 687 lines)
@@ -0,0 +1,687 @@
|
||||
---
|
||||
# Source: calico/templates/calico-config.yaml
|
||||
# This ConfigMap is used to configure a self-hosted Calico installation.
|
||||
kind: ConfigMap
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: calico-config
|
||||
namespace: kube-system
|
||||
data:
|
||||
# Typha is disabled.
|
||||
typha_service_name: "none"
|
||||
# Configure the backend to use.
|
||||
calico_backend: "bird"
|
||||
|
||||
# Configure the MTU to use for workload interfaces and tunnels.
|
||||
# By default, MTU is auto-detected, and explicitly setting this field should not be required.
|
||||
# You can override auto-detection by providing a non-zero value.
|
||||
veth_mtu: "0"
|
||||
|
||||
# The CNI network configuration to install on each node. The special
|
||||
# values in this config will be automatically populated.
|
||||
cni_network_config: |-
|
||||
{
|
||||
"name": "k8s-pod-network",
|
||||
"cniVersion": "0.3.1",
|
||||
"plugins": [
|
||||
{
|
||||
"type": "calico",
|
||||
"log_level": "info",
|
||||
"log_file_path": "/var/log/calico/cni/cni.log",
|
||||
"datastore_type": "kubernetes",
|
||||
"nodename": "__KUBERNETES_NODE_NAME__",
|
||||
"mtu": __CNI_MTU__,
|
||||
"ipam": {
|
||||
"type": "calico-ipam"
|
||||
},
|
||||
"policy": {
|
||||
"type": "k8s"
|
||||
},
|
||||
"kubernetes": {
|
||||
"kubeconfig": "__KUBECONFIG_FILEPATH__"
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "portmap",
|
||||
"snat": true,
|
||||
"capabilities": {"portMappings": true}
|
||||
},
|
||||
{
|
||||
"type": "bandwidth",
|
||||
"capabilities": {"bandwidth": true}
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
name: calico-node
|
||||
rules:
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- pods
|
||||
- nodes
|
||||
- namespaces
|
||||
verbs:
|
||||
- get
|
||||
- apiGroups:
|
||||
- discovery.k8s.io
|
||||
resources:
|
||||
- endpointslices
|
||||
verbs:
|
||||
- watch
|
||||
- list
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- endpoints
|
||||
- services
|
||||
verbs:
|
||||
- watch
|
||||
- list
|
||||
- get
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- configmaps
|
||||
verbs:
|
||||
- get
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- nodes/status
|
||||
verbs:
|
||||
- patch
|
||||
- update
|
||||
- apiGroups:
|
||||
- networking.k8s.io
|
||||
resources:
|
||||
- networkpolicies
|
||||
verbs:
|
||||
- watch
|
||||
- list
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- pods
|
||||
- namespaces
|
||||
- serviceaccounts
|
||||
verbs:
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- pods/status
|
||||
verbs:
|
||||
- patch
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- globalfelixconfigs
|
||||
- felixconfigurations
|
||||
- bgppeers
|
||||
- globalbgpconfigs
|
||||
- bgpconfigurations
|
||||
- ippools
|
||||
- ipamblocks
|
||||
- globalnetworkpolicies
|
||||
- globalnetworksets
|
||||
- networkpolicies
|
||||
- networksets
|
||||
- clusterinformations
|
||||
- hostendpoints
|
||||
- blockaffinities
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- ippools
|
||||
- felixconfigurations
|
||||
- clusterinformations
|
||||
verbs:
|
||||
- create
|
||||
- update
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- nodes
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- bgpconfigurations
|
||||
- bgppeers
|
||||
verbs:
|
||||
- create
|
||||
- update
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- blockaffinities
|
||||
- ipamblocks
|
||||
- ipamhandles
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- create
|
||||
- update
|
||||
- delete
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- ipamconfigs
|
||||
verbs:
|
||||
- get
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- blockaffinities
|
||||
verbs:
|
||||
- watch
|
||||
- apiGroups:
|
||||
- apps
|
||||
resources:
|
||||
- daemonsets
|
||||
verbs:
|
||||
- get
|
||||
|
||||
---
|
||||
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
name: calico-kube-controllers
|
||||
rules:
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- nodes
|
||||
verbs:
|
||||
- watch
|
||||
- list
|
||||
- get
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- pods
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- ippools
|
||||
verbs:
|
||||
- list
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- blockaffinities
|
||||
- ipamblocks
|
||||
- ipamhandles
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- create
|
||||
- update
|
||||
- delete
|
||||
- watch
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- hostendpoints
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- create
|
||||
- update
|
||||
- delete
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- clusterinformations
|
||||
verbs:
|
||||
- get
|
||||
- create
|
||||
- update
|
||||
- apiGroups:
|
||||
- crd.projectcalico.org
|
||||
resources:
|
||||
- kubecontrollersconfigurations
|
||||
verbs:
|
||||
- get
|
||||
- create
|
||||
- update
|
||||
- watch
|
||||
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: calico-node
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: calico-node
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: calico-node
|
||||
namespace: kube-system
|
||||
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: calico-kube-controllers
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: calico-kube-controllers
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: calico-kube-controllers
|
||||
namespace: kube-system
|
||||
---
|
||||
# Source: calico/templates/calico-node.yaml
|
||||
# This manifest installs the calico-node container, as well
|
||||
# as the CNI plugins and network config on
|
||||
# each master and worker node in a Kubernetes cluster.
|
||||
kind: DaemonSet
|
||||
apiVersion: apps/v1
|
||||
metadata:
|
||||
name: calico-node
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: calico-node
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
k8s-app: calico-node
|
||||
updateStrategy:
|
||||
type: RollingUpdate
|
||||
rollingUpdate:
|
||||
maxUnavailable: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: calico-node
|
||||
spec:
|
||||
nodeSelector:
|
||||
kubernetes.io/os: linux
|
||||
hostNetwork: true
|
||||
tolerations:
|
||||
# Make sure calico-node gets scheduled on all nodes.
|
||||
- effect: NoSchedule
|
||||
operator: Exists
|
||||
# Mark the pod as a critical add-on for rescheduling.
|
||||
- key: CriticalAddonsOnly
|
||||
operator: Exists
|
||||
- effect: NoExecute
|
||||
operator: Exists
|
||||
serviceAccountName: calico-node
|
||||
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
|
||||
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
|
||||
terminationGracePeriodSeconds: 0
|
||||
priorityClassName: system-node-critical
|
||||
initContainers:
|
||||
# This container performs upgrade from host-local IPAM to calico-ipam.
|
||||
# It can be deleted if this is a fresh installation, or if you have already
|
||||
# upgraded to use calico-ipam.
|
||||
- name: upgrade-ipam
|
||||
image: docker.io/calico/cni:v3.20.0
|
||||
command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
|
||||
envFrom:
|
||||
- configMapRef:
|
||||
# Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
|
||||
name: kubernetes-services-endpoint
|
||||
optional: true
|
||||
env:
|
||||
- name: KUBERNETES_NODE_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: spec.nodeName
|
||||
- name: CALICO_NETWORKING_BACKEND
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: calico_backend
|
||||
volumeMounts:
|
||||
- mountPath: /var/lib/cni/networks
|
||||
name: host-local-net-dir
|
||||
- mountPath: /host/opt/cni/bin
|
||||
name: cni-bin-dir
|
||||
securityContext:
|
||||
privileged: true
|
||||
# This container installs the CNI binaries
|
||||
# and CNI network config file on each node.
|
||||
- name: install-cni
|
||||
image: docker.io/calico/cni:v3.20.0
|
||||
command: ["/opt/cni/bin/install"]
|
||||
envFrom:
|
||||
- configMapRef:
|
||||
# Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
|
||||
name: kubernetes-services-endpoint
|
||||
optional: true
|
||||
env:
|
||||
# Name of the CNI config file to create.
|
||||
- name: CNI_CONF_NAME
|
||||
value: "10-calico.conflist"
|
||||
# The CNI network config to install on each node.
|
||||
- name: CNI_NETWORK_CONFIG
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: cni_network_config
|
||||
# Set the hostname based on the k8s node name.
|
||||
- name: KUBERNETES_NODE_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: spec.nodeName
|
||||
# CNI MTU Config variable
|
||||
- name: CNI_MTU
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: veth_mtu
|
||||
# Prevents the container from sleeping forever.
|
||||
- name: SLEEP
|
||||
value: "false"
|
||||
volumeMounts:
|
||||
- mountPath: /host/opt/cni/bin
|
||||
name: cni-bin-dir
|
||||
- mountPath: /host/etc/cni/net.d
|
||||
name: cni-net-dir
|
||||
securityContext:
|
||||
privileged: true
|
||||
# Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
|
||||
# to communicate with Felix over the Policy Sync API.
|
||||
- name: flexvol-driver
|
||||
image: docker.io/calico/pod2daemon-flexvol:v3.20.0
|
||||
volumeMounts:
|
||||
- name: flexvol-driver-host
|
||||
mountPath: /host/driver
|
||||
securityContext:
|
||||
privileged: true
|
||||
containers:
|
||||
# Runs calico-node container on each Kubernetes node. This
|
||||
# container programs network policy and routes on each
|
||||
# host.
|
||||
- name: calico-node
|
||||
image: docker.io/calico/node:v3.20.0
|
||||
envFrom:
|
||||
- configMapRef:
|
||||
# Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
|
||||
name: kubernetes-services-endpoint
|
||||
optional: true
|
||||
env:
|
||||
# Use Kubernetes API as the backing datastore.
|
||||
- name: DATASTORE_TYPE
|
||||
value: "kubernetes"
|
||||
# Wait for the datastore.
|
||||
- name: WAIT_FOR_DATASTORE
|
||||
value: "true"
|
||||
# Set based on the k8s node name.
|
||||
- name: NODENAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: spec.nodeName
|
||||
# Choose the backend to use.
|
||||
- name: CALICO_NETWORKING_BACKEND
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: calico_backend
|
||||
# Cluster type to identify the deployment type
|
||||
- name: CLUSTER_TYPE
|
||||
value: "k8s"
|
||||
# Auto-detect the BGP IP address.
|
||||
- name: IP
|
||||
value: "autodetect"
|
||||
- name: IP_AUTODETECTION_METHOD
|
||||
value: "can-reach=192.168.32.0"
|
||||
            # IPIP is disabled in this manifest; with the "bird" backend and VXLAN also disabled, routing relies on plain BGP with no overlay.
|
||||
- name: CALICO_IPV4POOL_IPIP
|
||||
value: "Never"
|
||||
# Enable or Disable VXLAN on the default IP pool.
|
||||
- name: CALICO_IPV4POOL_VXLAN
|
||||
value: "Never"
|
||||
# Set MTU for tunnel device used if ipip is enabled
|
||||
- name: FELIX_IPINIPMTU
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: veth_mtu
|
||||
# Set MTU for the VXLAN tunnel device.
|
||||
- name: FELIX_VXLANMTU
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: veth_mtu
|
||||
# Set MTU for the Wireguard tunnel device.
|
||||
- name: FELIX_WIREGUARDMTU
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: calico-config
|
||||
key: veth_mtu
|
||||
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
|
||||
# chosen from this range. Changing this value after installation will have
|
||||
# no effect. This should fall within `--cluster-cidr`.
|
||||
# - name: CALICO_IPV4POOL_CIDR
|
||||
# value: "192.168.0.0/16"
|
||||
# Disable file logging so `kubectl logs` works.
|
||||
- name: CALICO_DISABLE_FILE_LOGGING
|
||||
value: "true"
|
||||
# Set Felix endpoint to host default action to ACCEPT.
|
||||
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
|
||||
value: "ACCEPT"
|
||||
# Disable IPv6 on Kubernetes.
|
||||
- name: FELIX_IPV6SUPPORT
|
||||
value: "false"
|
||||
- name: FELIX_HEALTHENABLED
|
||||
value: "true"
|
||||
securityContext:
|
||||
privileged: true
|
||||
resources:
|
||||
requests:
|
||||
cpu: 250m
|
||||
livenessProbe:
|
||||
exec:
|
||||
command:
|
||||
- /bin/calico-node
|
||||
- -felix-live
|
||||
- -bird-live
|
||||
periodSeconds: 10
|
||||
initialDelaySeconds: 10
|
||||
failureThreshold: 6
|
||||
timeoutSeconds: 10
|
||||
readinessProbe:
|
||||
exec:
|
||||
command:
|
||||
- /bin/calico-node
|
||||
- -felix-ready
|
||||
- -bird-ready
|
||||
periodSeconds: 10
|
||||
timeoutSeconds: 10
|
||||
volumeMounts:
|
||||
# For maintaining CNI plugin API credentials.
|
||||
- mountPath: /host/etc/cni/net.d
|
||||
name: cni-net-dir
|
||||
readOnly: false
|
||||
- mountPath: /lib/modules
|
||||
name: lib-modules
|
||||
readOnly: true
|
||||
- mountPath: /run/xtables.lock
|
||||
name: xtables-lock
|
||||
readOnly: false
|
||||
- mountPath: /var/run/calico
|
||||
name: var-run-calico
|
||||
readOnly: false
|
||||
- mountPath: /var/lib/calico
|
||||
name: var-lib-calico
|
||||
readOnly: false
|
||||
- name: policysync
|
||||
mountPath: /var/run/nodeagent
|
||||
# For eBPF mode, we need to be able to mount the BPF filesystem at /sys/fs/bpf so we mount in the
|
||||
# parent directory.
|
||||
- name: sysfs
|
||||
mountPath: /sys/fs/
|
||||
# Bidirectional means that, if we mount the BPF filesystem at /sys/fs/bpf it will propagate to the host.
|
||||
# If the host is known to mount that filesystem already then Bidirectional can be omitted.
|
||||
mountPropagation: Bidirectional
|
||||
- name: cni-log-dir
|
||||
mountPath: /var/log/calico/cni
|
||||
readOnly: true
|
||||
volumes:
|
||||
# Used by calico-node.
|
||||
- name: lib-modules
|
||||
hostPath:
|
||||
path: /lib/modules
|
||||
- name: var-run-calico
|
||||
hostPath:
|
||||
path: /var/run/calico
|
||||
- name: var-lib-calico
|
||||
hostPath:
|
||||
path: /var/lib/calico
|
||||
- name: xtables-lock
|
||||
hostPath:
|
||||
path: /run/xtables.lock
|
||||
type: FileOrCreate
|
||||
- name: sysfs
|
||||
hostPath:
|
||||
path: /sys/fs/
|
||||
type: DirectoryOrCreate
|
||||
# Used to install CNI.
|
||||
- name: cni-bin-dir
|
||||
hostPath:
|
||||
path: /opt/cni/bin
|
||||
- name: cni-net-dir
|
||||
hostPath:
|
||||
path: /etc/cni/net.d
|
||||
# Used to access CNI logs.
|
||||
- name: cni-log-dir
|
||||
hostPath:
|
||||
path: /var/log/calico/cni
|
||||
# Mount in the directory for host-local IPAM allocations. This is
|
||||
# used when upgrading from host-local to calico-ipam, and can be removed
|
||||
# if not using the upgrade-ipam init container.
|
||||
- name: host-local-net-dir
|
||||
hostPath:
|
||||
path: /var/lib/cni/networks
|
||||
# Used to create per-pod Unix Domain Sockets
|
||||
- name: policysync
|
||||
hostPath:
|
||||
type: DirectoryOrCreate
|
||||
path: /var/run/nodeagent
|
||||
# Used to install Flex Volume Driver
|
||||
- name: flexvol-driver-host
|
||||
hostPath:
|
||||
type: DirectoryOrCreate
|
||||
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
|
||||
---
|
||||
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: calico-node
|
||||
namespace: kube-system
|
||||
|
||||
---
|
||||
# Source: calico/templates/calico-kube-controllers.yaml
|
||||
# See https://github.com/projectcalico/kube-controllers
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: calico-kube-controllers
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: calico-kube-controllers
|
||||
spec:
|
||||
# The controllers can only have a single active instance.
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
k8s-app: calico-kube-controllers
|
||||
strategy:
|
||||
type: Recreate
|
||||
template:
|
||||
metadata:
|
||||
name: calico-kube-controllers
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: calico-kube-controllers
|
||||
spec:
|
||||
nodeSelector:
|
||||
kubernetes.io/os: linux
|
||||
tolerations:
|
||||
# Mark the pod as a critical add-on for rescheduling.
|
||||
- key: CriticalAddonsOnly
|
||||
operator: Exists
|
||||
- key: node-role.kubernetes.io/master
|
||||
effect: NoSchedule
|
||||
serviceAccountName: calico-kube-controllers
|
||||
priorityClassName: system-cluster-critical
|
||||
containers:
|
||||
- name: calico-kube-controllers
|
||||
image: docker.io/calico/kube-controllers:v3.20.0
|
||||
env:
|
||||
# Choose which controllers to run.
|
||||
- name: ENABLED_CONTROLLERS
|
||||
value: node
|
||||
- name: DATASTORE_TYPE
|
||||
value: kubernetes
|
||||
          resources: {}
|
||||
livenessProbe:
|
||||
exec:
|
||||
command:
|
||||
- /usr/bin/check-status
|
||||
- -l
|
||||
periodSeconds: 10
|
||||
initialDelaySeconds: 10
|
||||
failureThreshold: 6
|
||||
timeoutSeconds: 10
|
||||
readinessProbe:
|
||||
exec:
|
||||
command:
|
||||
- /usr/bin/check-status
|
||||
- -r
|
||||
periodSeconds: 10
|
||||
|
||||
---
|
||||
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: calico-kube-controllers
|
||||
namespace: kube-system
|
||||
|
||||
---
|
||||
|
||||
# This manifest creates a Pod Disruption Budget for the controller, so that the Kubernetes Cluster Autoscaler can evict it if needed.
|
||||
|
||||
apiVersion: policy/v1beta1
|
||||
kind: PodDisruptionBudget
|
||||
metadata:
|
||||
name: calico-kube-controllers
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: calico-kube-controllers
|
||||
spec:
|
||||
maxUnavailable: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
k8s-app: calico-kube-controllers
|
||||
|
||||
---
|
||||
|
||||
|
||||
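Whichever Calico variant is applied, a quick health check on the tenant cluster only needs standard kubectl (a sketch, reusing the labels and names defined in the manifests above and the kubeconfig path used later in these docs):

```bash
# Confirm every node runs a calico-node pod and the controller Deployment is available.
kubectl --kubeconfig /tmp/kubeconfig -n kube-system get pods -l k8s-app=calico-node -o wide
kubectl --kubeconfig /tmp/kubeconfig -n kube-system rollout status deployment/calico-kube-controllers
```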
deploy/cfssl-cert-config.json (new file, 21 lines)
@@ -0,0 +1,21 @@
|
||||
{
|
||||
"signing": {
|
||||
"default": {
|
||||
"expiry": "8760h"
|
||||
},
|
||||
"profiles": {
|
||||
"server-authentication": {
|
||||
"usages": ["signing", "key encipherment", "server auth"],
|
||||
"expiry": "8760h"
|
||||
},
|
||||
"client-authentication": {
|
||||
"usages": ["signing", "key encipherment", "client auth"],
|
||||
"expiry": "8760h"
|
||||
},
|
||||
"peer-authentication": {
|
||||
"usages": ["signing", "key encipherment", "server auth", "client auth"],
|
||||
"expiry": "8760h"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
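This signing configuration is consumed by `cfssl gencert` together with a CSR file, selecting one of the profiles defined above. A minimal sketch follows; the CA and CSR file names here are placeholders, and the etcd Makefile below shows the actual invocations used by this repository.

```bash
# Issue a serving certificate using the "server-authentication" profile defined above.
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
  -config=deploy/cfssl-cert-config.json \
  -profile=server-authentication server-csr.json | cfssljson -bare server
```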
deploy/etcd/Makefile (new file, 61 lines)
@@ -0,0 +1,61 @@
|
||||
etcd_path := $(patsubst %/,%,$(dir $(abspath $(lastword $(MAKEFILE_LIST)))))
|
||||
|
||||
.PHONY: etcd-cluster etcd-certificates etcd-cluster-install etcd-enable-multitenancy
|
||||
|
||||
etcd-cluster: etcd-certificates etcd-cluster-install etcd-cluster-healthcheck etcd-enable-multitenancy
|
||||
|
||||
etcd-certificates:
|
||||
rm -rf $(etcd_path)/certs && mkdir $(etcd_path)/certs
|
||||
cfssl gencert -initca $(etcd_path)/ca-csr.json | cfssljson -bare $(etcd_path)/certs/ca
|
||||
mv $(etcd_path)/certs/ca.pem $(etcd_path)/certs/ca.crt
|
||||
mv $(etcd_path)/certs/ca-key.pem $(etcd_path)/certs/ca.key
|
||||
cfssl gencert -ca=$(etcd_path)/certs/ca.crt -ca-key=$(etcd_path)/certs/ca.key \
|
||||
-config=$(etcd_path)/config.json \
|
||||
-profile=peer-authentication $(etcd_path)/peer-csr.json | cfssljson -bare $(etcd_path)/certs/peer
|
||||
cfssl gencert -ca=$(etcd_path)/certs/ca.crt -ca-key=$(etcd_path)/certs/ca.key \
|
||||
-config=$(etcd_path)/config.json \
|
||||
-profile=peer-authentication $(etcd_path)/server-csr.json | cfssljson -bare $(etcd_path)/certs/server
|
||||
cfssl gencert -ca=$(etcd_path)/certs/ca.crt -ca-key=$(etcd_path)/certs/ca.key \
|
||||
-config=$(etcd_path)/config.json \
|
||||
-profile=client-authentication $(etcd_path)/root-client-csr.json | cfssljson -bare $(etcd_path)/certs/root-client
|
||||
|
||||
etcd-cluster-install:
|
||||
@kubectl create namespace kamaji-system --dry-run=client -o yaml | kubectl apply -f -
|
||||
@kubectl -n kamaji-system apply -f $(etcd_path)/etcd-cluster.yaml
|
||||
@kubectl -n kamaji-system create secret generic etcd-certs \
|
||||
--from-file=$(etcd_path)/certs/ca.crt \
|
||||
--from-file=$(etcd_path)/certs/ca.key \
|
||||
--from-file=$(etcd_path)/certs/peer-key.pem --from-file=$(etcd_path)/certs/peer.pem \
|
||||
--from-file=$(etcd_path)/certs/server-key.pem --from-file=$(etcd_path)/certs/server.pem
|
||||
@kubectl -n kamaji-system create secret tls root-client-certs \
|
||||
--key=$(etcd_path)/certs/root-client-key.pem \
|
||||
--cert=$(etcd_path)/certs/root-client.pem
|
||||
|
||||
etcd-cluster-healthcheck:
|
||||
@sleep 20
|
||||
@echo "Wait the etcd instances discover each other and the cluster is formed"
|
||||
@kubectl wait pod --for=condition=ready -n kamaji-system -l app=etcd --timeout=120s
|
||||
@kubectl -n kamaji-system apply -f $(etcd_path)/etcd-client.yaml
|
||||
@sleep 20
|
||||
@echo -n "Checking endpoint's health..."
|
||||
@kubectl -n kamaji-system exec etcd-root-client -- /bin/bash -c \
|
||||
"etcdctl endpoint health 1>/dev/null 2>/dev/null; until [ \$$? -eq 0 ]; do sleep 10; printf "."; etcdctl endpoint health 1>/dev/null 2>/dev/null; done;"
|
||||
@echo -n "etcd cluster's health:\n"
|
||||
@kubectl -n kamaji-system exec etcd-root-client -- /bin/bash -c \
|
||||
"etcdctl endpoint health"
|
||||
@echo -n "Waiting for all members..."
|
||||
@kubectl -n kamaji-system exec etcd-root-client -- /bin/bash -c \
|
||||
"until [ \$$(etcdctl member list 2>/dev/null | wc -l) -eq 3 ]; do sleep 10; printf '.'; done;"
|
||||
@echo -n "etcd's members:\n"
|
||||
@kubectl -n kamaji-system exec etcd-root-client -- /bin/bash -c \
|
||||
"etcdctl member list -w table"
|
||||
|
||||
|
||||
etcd-enable-multitenancy:
|
||||
kubectl -n kamaji-system exec etcd-root-client -- etcdctl user add --no-password=true root
|
||||
kubectl -n kamaji-system exec etcd-root-client -- etcdctl role add root
|
||||
kubectl -n kamaji-system exec etcd-root-client -- etcdctl user grant-role root root
|
||||
kubectl -n kamaji-system exec etcd-root-client -- etcdctl auth enable
|
||||
|
||||
etcd-certificates/cleanup:
|
||||
@rm -rf $(etcd_path)/certs
|
||||
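The `etcd-enable-multitenancy` target above only creates the `root` user and turns authentication on; confining an individual tenant to its own key prefix is not part of this Makefile. The following is therefore only an illustration with standard `etcdctl` commands, using a made-up tenant name, and it assumes the root client pod can still authenticate through its client certificate once auth is enabled.

```bash
# Illustration only: give a hypothetical tenant a user and a role scoped to its own key prefix.
kubectl -n kamaji-system exec etcd-root-client -- etcdctl user add --no-password=true tenant-00
kubectl -n kamaji-system exec etcd-root-client -- etcdctl role add tenant-00
kubectl -n kamaji-system exec etcd-root-client -- etcdctl role grant-permission --prefix=true tenant-00 readwrite /tenant-00/
kubectl -n kamaji-system exec etcd-root-client -- etcdctl user grant-role tenant-00 tenant-00
```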
deploy/etcd/ca-csr.json (new file, 14 lines)
@@ -0,0 +1,14 @@
|
||||
{
|
||||
"CN": "Clastix CA",
|
||||
"key": {
|
||||
"algo": "rsa",
|
||||
"size": 2048
|
||||
},
|
||||
"names": [
|
||||
{
|
||||
"C": "IT",
|
||||
"ST": "Italy",
|
||||
"L": "Milan"
|
||||
}
|
||||
]
|
||||
}
|
||||
deploy/etcd/config.json (new file, 21 lines)
@@ -0,0 +1,21 @@
|
||||
{
|
||||
"signing": {
|
||||
"default": {
|
||||
"expiry": "8760h"
|
||||
},
|
||||
"profiles": {
|
||||
"server-authentication": {
|
||||
"usages": ["signing", "key encipherment", "server auth"],
|
||||
"expiry": "8760h"
|
||||
},
|
||||
"client-authentication": {
|
||||
"usages": ["signing", "key encipherment", "client auth"],
|
||||
"expiry": "8760h"
|
||||
},
|
||||
"peer-authentication": {
|
||||
"usages": ["signing", "key encipherment", "server auth", "client auth"],
|
||||
"expiry": "8760h"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
deploy/etcd/etcd-client.yaml (new file, 41 lines)
@@ -0,0 +1,41 @@
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
labels:
|
||||
app: etcd
|
||||
name: etcd-root-client
|
||||
namespace:
|
||||
spec:
|
||||
containers:
|
||||
- command:
|
||||
- sleep
|
||||
- infinity
|
||||
env:
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
- name: ETCDCTL_ENDPOINTS
|
||||
value: https://etcd-server.$(POD_NAMESPACE).svc.cluster.local:2379
|
||||
- name: ETCDCTL_CACERT
|
||||
value: /opt/certs/ca/ca.crt
|
||||
- name: ETCDCTL_CERT
|
||||
value: /opt/certs/root-certs/tls.crt
|
||||
- name: ETCDCTL_KEY
|
||||
value: /opt/certs/root-certs/tls.key
|
||||
image: quay.io/coreos/etcd:v3.5.1
|
||||
imagePullPolicy: IfNotPresent
|
||||
name: etcd-client
|
||||
resources: {}
|
||||
volumeMounts:
|
||||
- name: root-certs
|
||||
mountPath: /opt/certs/root-certs
|
||||
- name: ca
|
||||
mountPath: /opt/certs/ca
|
||||
volumes:
|
||||
- name: root-certs
|
||||
secret:
|
||||
secretName: root-client-certs
|
||||
- name: ca
|
||||
secret:
|
||||
secretName: etcd-certs
|
||||
deploy/etcd/etcd-cluster.yaml (new file, 113 lines)
@@ -0,0 +1,113 @@
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: etcd
|
||||
namespace:
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: etcd-server
|
||||
namespace:
|
||||
spec:
|
||||
type: ClusterIP
|
||||
ports:
|
||||
- name: client
|
||||
port: 2379
|
||||
protocol: TCP
|
||||
targetPort: 2379
|
||||
selector:
|
||||
app: etcd
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: etcd
|
||||
namespace:
|
||||
spec:
|
||||
clusterIP: None
|
||||
ports:
|
||||
- port: 2379
|
||||
name: client
|
||||
- port: 2380
|
||||
name: peer
|
||||
selector:
|
||||
app: etcd
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
name: etcd
|
||||
labels:
|
||||
app: etcd
|
||||
namespace:
|
||||
spec:
|
||||
serviceName: etcd
|
||||
selector:
|
||||
matchLabels:
|
||||
app: etcd
|
||||
replicas: 3
|
||||
template:
|
||||
metadata:
|
||||
name: etcd
|
||||
labels:
|
||||
app: etcd
|
||||
spec:
|
||||
serviceAccountName: etcd
|
||||
volumes:
|
||||
- name: certs
|
||||
secret:
|
||||
secretName: etcd-certs
|
||||
containers:
|
||||
- name: etcd
|
||||
image: quay.io/coreos/etcd:v3.5.1
|
||||
ports:
|
||||
- containerPort: 2379
|
||||
name: client
|
||||
- containerPort: 2380
|
||||
name: peer
|
||||
volumeMounts:
|
||||
- name: data
|
||||
mountPath: /var/run/etcd
|
||||
- name: certs
|
||||
mountPath: /etc/etcd/pki
|
||||
command:
|
||||
- etcd
|
||||
- --data-dir=/var/run/etcd
|
||||
- --name=$(POD_NAME)
|
||||
- --initial-cluster-state=new
|
||||
- --initial-cluster=etcd-0=https://etcd-0.etcd.$(POD_NAMESPACE).svc.cluster.local:2380,etcd-1=https://etcd-1.etcd.$(POD_NAMESPACE).svc.cluster.local:2380,etcd-2=https://etcd-2.etcd.$(POD_NAMESPACE).svc.cluster.local:2380
|
||||
- --initial-advertise-peer-urls=https://$(POD_NAME).etcd.$(POD_NAMESPACE).svc.cluster.local:2380
|
||||
- --initial-cluster-token=kamaji
|
||||
- --listen-client-urls=https://0.0.0.0:2379
|
||||
- --advertise-client-urls=https://etcd-0.etcd.$(POD_NAMESPACE).svc.cluster.local:2379,https://etcd-1.etcd.$(POD_NAMESPACE).svc.cluster.local:2379,https://etcd-2.etcd.$(POD_NAMESPACE).svc.cluster.local:2379,https://etcd-server.$(POD_NAMESPACE).svc.cluster.local:2379
|
||||
- --client-cert-auth=true
|
||||
- --trusted-ca-file=/etc/etcd/pki/ca.crt
|
||||
- --cert-file=/etc/etcd/pki/server.pem
|
||||
- --key-file=/etc/etcd/pki/server-key.pem
|
||||
- --listen-peer-urls=https://0.0.0.0:2380
|
||||
- --peer-client-cert-auth=true
|
||||
- --peer-trusted-ca-file=/etc/etcd/pki/ca.crt
|
||||
- --peer-cert-file=/etc/etcd/pki/peer.pem
|
||||
- --peer-key-file=/etc/etcd/pki/peer-key.pem
|
||||
- --auto-compaction-mode=periodic
|
||||
- --auto-compaction-retention=5m
|
||||
- --v=8
|
||||
env:
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
volumeClaimTemplates:
|
||||
- metadata:
|
||||
name: data
|
||||
spec:
|
||||
accessModes: ["ReadWriteOnce"]
|
||||
resources:
|
||||
requests:
|
||||
storage: 2Gi
|
||||
|
||||
deploy/etcd/peer-csr.json (new file, 22 lines)
@@ -0,0 +1,22 @@
|
||||
{
|
||||
"CN": "etcd",
|
||||
"key": {
|
||||
"algo": "rsa",
|
||||
"size": 2048
|
||||
},
|
||||
"hosts": [
|
||||
"127.0.0.1",
|
||||
"etcd-0",
|
||||
"etcd-0.etcd",
|
||||
"etcd-0.etcd.kamaji-system.svc",
|
||||
"etcd-0.etcd.kamaji-system.svc.cluster.local",
|
||||
"etcd-1",
|
||||
"etcd-1.etcd",
|
||||
"etcd-1.etcd.kamaji-system.svc",
|
||||
"etcd-1.etcd.kamaji-system.svc.cluster.local",
|
||||
"etcd-2",
|
||||
"etcd-2.etcd",
|
||||
"etcd-2.etcd.kamaji-system.svc",
|
||||
"etcd-2.etcd.kamaji-system.svc.cluster.local"
|
||||
]
|
||||
}
|
||||
deploy/etcd/root-client-csr.json (new file, 12 lines)
@@ -0,0 +1,12 @@
|
||||
{
|
||||
"CN": "root",
|
||||
"key": {
|
||||
"algo": "rsa",
|
||||
"size": 2048
|
||||
},
|
||||
"names": [
|
||||
{
|
||||
"O": "system:masters"
|
||||
}
|
||||
]
|
||||
}
|
||||
deploy/etcd/server-csr.json (new file, 16 lines)
@@ -0,0 +1,16 @@
|
||||
{
|
||||
"CN": "etcd",
|
||||
"key": {
|
||||
"algo": "rsa",
|
||||
"size": 2048
|
||||
},
|
||||
"hosts": [
|
||||
"127.0.0.1",
|
||||
"etcd-server",
|
||||
"etcd-server.kamaji-system.svc",
|
||||
"etcd-server.kamaji-system.svc.cluster.local",
|
||||
"etcd-0.etcd.kamaji-system.svc.cluster.local",
|
||||
"etcd-1.etcd.kamaji-system.svc.cluster.local",
|
||||
"etcd-2.etcd.kamaji-system.svc.cluster.local"
|
||||
]
|
||||
}
|
||||
deploy/kamaji-azure.env (new file, 5 lines)
@@ -0,0 +1,5 @@
|
||||
export KAMAJI_REGION=westeurope
|
||||
export KAMAJI_RG=Kamaji
|
||||
# https://docs.microsoft.com/en-us/azure/aks/faq#why-are-two-resource-groups-created-with-aks
|
||||
export KAMAJI_CLUSTER=kamaji
|
||||
export KAMAJI_NODE_RG=MC_${KAMAJI_RG}_${KAMAJI_CLUSTER}_${KAMAJI_REGION}
|
||||
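These variables describe the AKS-based admin cluster. A hedged sketch of the kind of commands they are meant to feed; any `az` flags beyond the variables themselves (node sizing, networking) are omitted and would need to be chosen for a real setup.

```bash
# Create the resource group and a minimal AKS admin cluster from the variables above.
source deploy/kamaji-azure.env
az group create --name "$KAMAJI_RG" --location "$KAMAJI_REGION"
az aks create --resource-group "$KAMAJI_RG" --name "$KAMAJI_CLUSTER" --generate-ssh-keys
az aks get-credentials --resource-group "$KAMAJI_RG" --name "$KAMAJI_CLUSTER"
```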
deploy/kamaji-external-etcd.env (new file, 5 lines)
@@ -0,0 +1,5 @@
|
||||
# etcd machine addresses
|
||||
export ETCD0=192.168.32.10
|
||||
export ETCD1=192.168.32.11
|
||||
export ETCD2=192.168.32.12
|
||||
export ETCDHOSTS=($ETCD0 $ETCD1 $ETCD2)
|
||||
deploy/kamaji-internal-etcd.env (new file, 5 lines)
@@ -0,0 +1,5 @@
|
||||
# etcd endpoints
|
||||
export ETCD_NAMESPACE=etcd-system
|
||||
export ETCD0=etcd-0.etcd.${ETCD_NAMESPACE}.svc.cluster.local
|
||||
export ETCD1=etcd-1.etcd.${ETCD_NAMESPACE}.svc.cluster.local
|
||||
export ETCD2=etcd-2.etcd.${ETCD_NAMESPACE}.svc.cluster.local
|
||||
deploy/kamaji-tenant-azure.env (new file, 22 lines)
@@ -0,0 +1,22 @@
|
||||
export KAMAJI_REGION=westeurope
|
||||
|
||||
# tenant cluster parameters
|
||||
export TENANT_NAMESPACE=tenants
|
||||
export TENANT_NAME=tenant-00
|
||||
export TENANT_DOMAIN=$KAMAJI_REGION.cloudapp.azure.com
|
||||
export TENANT_VERSION=v1.23.4
|
||||
export TENANT_ADDR=10.240.0.100 # IP used to expose the tenant control plane
|
||||
export TENANT_PORT=6443 # PORT used to expose the tenant control plane
|
||||
export TENANT_POD_CIDR=10.36.0.0/16
|
||||
export TENANT_SVC_CIDR=10.96.0.0/16
|
||||
export TENANT_DNS_SERVICE=10.96.0.10
|
||||
|
||||
export TENANT_VM_SIZE=Standard_D2ds_v4
|
||||
export TENANT_VM_IMAGE=UbuntuLTS
|
||||
export TENANT_RG=$TENANT_NAME
|
||||
export TENANT_NSG=$TENANT_NAME-nsg
|
||||
export TENANT_VNET_NAME=$TENANT_NAME
|
||||
export TENANT_VNET_ADDRESS=192.168.0.0/16
|
||||
export TENANT_SUBNET_NAME=$TENANT_NAME-subnet
|
||||
export TENANT_SUBNET_ADDRESS=192.168.10.0/24
|
||||
export TENANT_VMSS=$TENANT_NAME-vmss
|
||||
deploy/kamaji-tenant.env (new file, 17 lines)
@@ -0,0 +1,17 @@
|
||||
|
||||
# tenant cluster parameters
|
||||
export TENANT_NAMESPACE=tenants
|
||||
export TENANT_NAME=tenant-00
|
||||
export TENANT_DOMAIN=clastix.labs
|
||||
export TENANT_VERSION=v1.23.1
|
||||
export TENANT_ADDR=192.168.32.150 # IP used to expose the tenant control plane
|
||||
export TENANT_PORT=6443 # PORT used to expose the tenant control plane
|
||||
export TENANT_POD_CIDR=10.36.0.0/16
|
||||
export TENANT_SVC_CIDR=10.96.0.0/16
|
||||
export TENANT_DNS_SERVICE=10.96.0.10
|
||||
|
||||
# tenant node addresses
|
||||
export WORKER0=192.168.32.90
|
||||
export WORKER1=192.168.32.91
|
||||
export WORKER2=192.168.32.92
|
||||
export WORKER3=192.168.32.93
|
||||
deploy/kind/Makefile (new file, 36 lines)
@@ -0,0 +1,36 @@
|
||||
kind_path := $(patsubst %/,%,$(dir $(abspath $(lastword $(MAKEFILE_LIST)))))
|
||||
|
||||
include ../etcd/Makefile
|
||||
|
||||
.PHONY: kind ingress-nginx kamaji-kind-worker-build
|
||||
|
||||
.DEFAULT_GOAL := kamaji
|
||||
|
||||
prometheus-stack:
|
||||
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
|
||||
helm repo update
|
||||
helm install prometheus-stack --create-namespace -n monitoring prometheus-community/kube-prometheus-stack
|
||||
|
||||
kamaji: kind ingress-nginx etcd-cluster
|
||||
|
||||
destroy: kind/destroy etcd-certificates/cleanup
|
||||
|
||||
kind:
|
||||
@kind create cluster --config $(kind_path)/kind-kamaji.yaml
|
||||
|
||||
kind/destroy:
|
||||
@kind delete cluster --name kamaji
|
||||
|
||||
ingress-nginx: ingress-nginx-install
|
||||
|
||||
ingress-nginx-install:
|
||||
kubectl apply -f $(kind_path)/nginx-deploy.yaml
|
||||
|
||||
kamaji-kind-worker-build:
|
||||
docker build -f $(kind_path)/kamaji-kind-worker.dockerfile -t quay.io/clastix/kamaji-kind-worker:$${WORKER_VERSION:-latest} .
|
||||
|
||||
kamaji-kind-worker-push: kamaji-kind-worker-build
|
||||
docker push quay.io/clastix/kamaji-kind-worker:$${WORKER_VERSION:-latest}
|
||||
|
||||
kamaji-kind-worker-join:
|
||||
$(kind_path)/join-node.bash
|
||||
@@ -1 +1,138 @@
# Setup a minimal Kamaji for development

This document explains how to deploy a minimal Kamaji setup on [KinD](https://kind.sigs.k8s.io/) for development purposes. Please refer to the [Kamaji documentation](../../README.md) for the terms used throughout this guide, for example `admin cluster` and `tenant control plane`.

## Tools

We assume you have installed on your workstation:

- [Docker](https://docs.docker.com/engine/install/)
- [KinD](https://kind.sigs.k8s.io/)
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)
- [jq](https://stedolan.github.io/jq/)
- [openssl](https://www.openssl.org/)
- [cfssl](https://github.com/cloudflare/cfssl)
- [cfssljson](https://github.com/cloudflare/cfssl)

## Setup Kamaji on KinD

The Kamaji instance consists of a single node hosting:

- the admin control plane
- the admin worker
- the multi-tenant etcd cluster

The multi-tenant etcd cluster is deployed as a StatefulSet on the Kamaji node.

Run `make kamaji` to set up Kamaji on KinD:

```bash
cd ./deploy/kind
make kamaji
```

At this point you have KinD up and running with the etcd cluster in multi-tenant mode.
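Before moving on, you may want to verify that the admin cluster and the multi-tenant etcd are up. A minimal check, assuming the etcd StatefulSet lands in the `kamaji-system` namespace as in the bundled `etcd-cluster.yaml`:

```bash
# The KinD cluster created above is named "kamaji"
kind get clusters

# The etcd pods should be Running (namespace is an assumption based on etcd-cluster.yaml)
kubectl -n kamaji-system get pods -l app=etcd
```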
### Install Kamaji

```bash
$ kubectl apply -f ../../config/install.yaml
```

### Deploy Tenant Control Plane

Now it is time to deploy your first tenant control plane.

```bash
$ kubectl apply -f - <<EOF
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: tenant1
spec:
  controlPlane:
    deployment:
      replicas: 2
      additionalMetadata:
        annotations:
          environment.clastix.io: tenant1
          tier.clastix.io: "0"
        labels:
          tenant.clastix.io: tenant1
          kind.clastix.io: deployment
    service:
      additionalMetadata:
        annotations:
          environment.clastix.io: tenant1
          tier.clastix.io: "0"
        labels:
          tenant.clastix.io: tenant1
          kind.clastix.io: service
      serviceType: NodePort
    ingress:
      enabled: false
  kubernetes:
    version: "v1.23.4"
    kubelet:
      cgroupfs: cgroupfs
    admissionControllers:
      - LimitRanger
      - ResourceQuota
  networkProfile:
    address: "172.18.0.2"
    port: 31443
    domain: "clastix.labs"
    serviceCidr: "10.96.0.0/16"
    podCidr: "10.244.0.0/16"
    dnsServiceIPs:
    - "10.96.0.10"
EOF
```

> Check the `networkProfile` fields according to your installation.
> To let Kamaji work in KinD, you must indicate that the service is of type [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport).
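The `address` in `networkProfile` must be reachable from the worker nodes; on KinD it is typically the IP of the KinD node on the Docker `kind` network, since workers reach the tenant control plane through its NodePort. A possible way to look it up (the container name `kamaji-control-plane` is an assumption based on the default KinD naming for a cluster called `kamaji`):

```bash
# INTERNAL-IP of the KinD node
kubectl get nodes -o wide

# or query Docker directly
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kamaji-control-plane
```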
### Get Kubeconfig

Let's retrieve the kubeconfig and store it in `/tmp/kubeconfig`:

```bash
$ kubectl get secrets tenant1-admin-kubeconfig -o json \
  | jq -r '.data["admin.conf"]' \
  | base64 -d > /tmp/kubeconfig
```

Export it to simplify the next steps:

```bash
$ export KUBECONFIG=/tmp/kubeconfig
```
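A quick check that the tenant API server answers with the retrieved credentials:

```bash
# Both commands talk to the tenant control plane just deployed
kubectl cluster-info
kubectl get namespaces
```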
### Install CNI

We highly recommend installing [kindnet](https://github.com/aojea/kindnet) as the CNI for your Kamaji tenant control plane.

```bash
$ kubectl create -f https://raw.githubusercontent.com/aojea/kindnet/master/install-kindnet.yaml
```
### Join worker nodes

```bash
$ make kamaji-kind-worker-join
```

> To add more worker nodes, run the command above again.

Check out the node:

```bash
$ kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
d2d4b468c9de   Ready    <none>   44s   v1.23.4
```

> For more complex scenarios (exposing ports, using a different version, and so on), run `join-node.bash` directly.

The tenant control plane provisioning on a minimal KinD-based Kamaji setup is now complete. You can use it to develop, test, and run your own experiments with Kamaji.

33
deploy/kind/cni-kindnet-config.json
Normal file
@@ -0,0 +1,33 @@
{
  "cniVersion": "0.3.1",
  "name": "kindnet",
  "plugins": [
    {
      "type": "ptp",
      "ipMasq": false,
      "ipam": {
        "type": "host-local",
        "dataDir": "/run/cni-ipam-state",
        "routes": [
          {
            "dst": "0.0.0.0/0"
          }
        ],
        "ranges": [
          [
            {
              "subnet": "10.244.0.0/24"
            }
          ]
        ]
      },
      "mtu": 1500
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
177
deploy/kind/etcd-cluster.yaml
Normal file
177
deploy/kind/etcd-cluster.yaml
Normal file
@@ -0,0 +1,177 @@
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
creationTimestamp: "2021-12-08T17:49:38Z"
|
||||
name: etcd
|
||||
namespace: kamaji-system
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: etcd-server
|
||||
namespace: kamaji-system
|
||||
spec:
|
||||
type: ClusterIP
|
||||
ports:
|
||||
- name: client
|
||||
port: 2379
|
||||
protocol: TCP
|
||||
targetPort: 2379
|
||||
selector:
|
||||
app: etcd
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: etcd
|
||||
namespace: kamaji-system
|
||||
spec:
|
||||
clusterIP: None
|
||||
ports:
|
||||
- port: 2379
|
||||
name: client
|
||||
- port: 2380
|
||||
name: peer
|
||||
selector:
|
||||
app: etcd
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
name: etcd
|
||||
labels:
|
||||
app: etcd
|
||||
namespace: kamaji-system
|
||||
spec:
|
||||
serviceName: etcd
|
||||
selector:
|
||||
matchLabels:
|
||||
app: etcd
|
||||
replicas: 3
|
||||
template:
|
||||
metadata:
|
||||
name: etcd
|
||||
labels:
|
||||
app: etcd
|
||||
spec:
|
||||
serviceAccountName: etcd
|
||||
volumes:
|
||||
- name: certs
|
||||
secret:
|
||||
secretName: etcd-certs
|
||||
containers:
|
||||
- name: etcd
|
||||
image: quay.io/coreos/etcd:v3.5.1
|
||||
ports:
|
||||
- containerPort: 2379
|
||||
name: client
|
||||
- containerPort: 2380
|
||||
name: peer
|
||||
volumeMounts:
|
||||
- name: data
|
||||
mountPath: /var/run/etcd
|
||||
- name: certs
|
||||
mountPath: /etc/etcd/pki
|
||||
command:
|
||||
- etcd
|
||||
- --data-dir=/var/run/etcd
|
||||
- --name=$(POD_NAME)
|
||||
- --initial-cluster-state=new
|
||||
- --initial-cluster=etcd-0=https://etcd-0.etcd.$(POD_NAMESPACE).svc.cluster.local:2380,etcd-1=https://etcd-1.etcd.$(POD_NAMESPACE).svc.cluster.local:2380,etcd-2=https://etcd-2.etcd.$(POD_NAMESPACE).svc.cluster.local:2380
|
||||
- --initial-advertise-peer-urls=https://$(POD_NAME).etcd.$(POD_NAMESPACE).svc.cluster.local:2380
|
||||
- --initial-cluster-token=kamaji
|
||||
- --listen-client-urls=https://0.0.0.0:2379
|
||||
- --advertise-client-urls=https://etcd-0.etcd.$(POD_NAMESPACE).svc.cluster.local:2379,https://etcd-1.etcd.$(POD_NAMESPACE).svc.cluster.local:2379,https://etcd-2.etcd.$(POD_NAMESPACE).svc.cluster.local:2379,https://etcd-server.$(POD_NAMESPACE).svc.cluster.local:2379
|
||||
- --client-cert-auth=true
|
||||
- --trusted-ca-file=/etc/etcd/pki/ca.crt
|
||||
- --cert-file=/etc/etcd/pki/server.pem
|
||||
- --key-file=/etc/etcd/pki/server-key.pem
|
||||
- --listen-peer-urls=https://0.0.0.0:2380
|
||||
- --peer-client-cert-auth=true
|
||||
- --peer-trusted-ca-file=/etc/etcd/pki/ca.crt
|
||||
- --peer-cert-file=/etc/etcd/pki/peer.pem
|
||||
- --peer-key-file=/etc/etcd/pki/peer-key.pem
|
||||
- --auto-compaction-mode=periodic
|
||||
- --auto-compaction-retention=5m
|
||||
- --v=8
|
||||
env:
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
livenessProbe:
|
||||
failureThreshold: 8
|
||||
httpGet:
|
||||
host: 127.0.0.1
|
||||
path: /health
|
||||
port: 2381
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 10
|
||||
periodSeconds: 10
|
||||
timeoutSeconds: 15
|
||||
startupProbe:
|
||||
failureThreshold: 24
|
||||
httpGet:
|
||||
host: 127.0.0.1
|
||||
path: /health
|
||||
port: 2381
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 10
|
||||
periodSeconds: 10
|
||||
timeoutSeconds: 15
|
||||
volumeClaimTemplates:
|
||||
- metadata:
|
||||
name: data
|
||||
spec:
|
||||
accessModes: ["ReadWriteOnce"]
|
||||
resources:
|
||||
requests:
|
||||
storage: 8Gi
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
labels:
|
||||
app: etcd
|
||||
name: etcd-root-client
|
||||
namespace: kamaji-system
|
||||
spec:
|
||||
serviceAccountName: etcd
|
||||
containers:
|
||||
- command:
|
||||
- sleep
|
||||
- infinity
|
||||
env:
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
- name: ETCDCTL_ENDPOINTS
|
||||
value: https://etcd-server.$(POD_NAMESPACE).svc.cluster.local:2379
|
||||
- name: ETCDCTL_CACERT
|
||||
value: /opt/certs/ca/ca.crt
|
||||
- name: ETCDCTL_CERT
|
||||
value: /opt/certs/root-client-certs/tls.crt
|
||||
- name: ETCDCTL_KEY
|
||||
value: /opt/certs/root-client-certs/tls.key
|
||||
image: quay.io/coreos/etcd:v3.5.1
|
||||
imagePullPolicy: IfNotPresent
|
||||
name: etcd-client
|
||||
resources: {}
|
||||
volumeMounts:
|
||||
- name: root-client-certs
|
||||
mountPath: /opt/certs/root-client-certs
|
||||
- name: ca
|
||||
mountPath: /opt/certs/ca
|
||||
volumes:
|
||||
- name: root-client-certs
|
||||
secret:
|
||||
secretName: root-client-certs
|
||||
- name: ca
|
||||
secret:
|
||||
secretName: etcd-certs
|
||||
|
||||
32
deploy/kind/join-node.bash
Executable file
@@ -0,0 +1,32 @@
#!/bin/bash

set -e

# Constants
export DOCKER_IMAGE_NAME="quay.io/clastix/kamaji-kind-worker"
export DOCKER_NETWORK="kind"

# Variables
export KUBERNETES_VERSION=${1:-latest}

if [ -z $2 ]
then
    MAPPING_PORT=""
else
    MAPPING_PORT="-p ${2}:80"
fi

clear
echo "Welcome to join a new node to the Kind network"

echo -ne "\nChecking right kubeconfig\n"
kubectl cluster-info
echo "Are you pointing to the right tenant control plane? (Type return to continue)"
read

JOIN_CMD="$(kubeadm --kubeconfig=/tmp/kubeconfig token create --print-join-command) --ignore-preflight-errors=SystemVerification"
echo "Deploying new node..."
NODE=$(docker run -d --privileged -v /lib/modules:/lib/modules:ro -v /var --net $DOCKER_NETWORK $MAPPING_PORT $DOCKER_IMAGE_NAME:$KUBERNETES_VERSION)
sleep 10
echo "Joining new node..."
docker exec -e JOIN_CMD="$JOIN_CMD" $NODE /bin/bash -c "$JOIN_CMD"
4
deploy/kind/kamaji-kind-worker.dockerfile
Normal file
@@ -0,0 +1,4 @@
ARG KUBERNETES_VERSION=v1.23.4
FROM kindest/node:$KUBERNETES_VERSION

COPY ./cni-kindnet-config.json /etc/cni/net.d/10-kindnet.conflist
30
deploy/kind/kind-kamaji.yaml
Normal file
@@ -0,0 +1,30 @@
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: kamaji
nodes:
- role: control-plane
  image: kindest/node:v1.23.4
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  ## expose port 80 of the node to port 80 on the host
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  ## expose port 443 of the node to port 443 on the host
  - containerPort: 443
    hostPort: 443
    protocol: TCP
  ## expose port 31443 of the node to port 31443 on the host
  - containerPort: 31443
    hostPort: 31443
    protocol: TCP
  ## expose port 6443 of the node to port 8443 on the host
  - containerPort: 6443
    hostPort: 8443
    protocol: TCP
694
deploy/kind/nginx-deploy.yaml
Normal file
694
deploy/kind/nginx-deploy.yaml
Normal file
@@ -0,0 +1,694 @@
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: ingress-nginx
|
||||
labels:
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
|
||||
---
|
||||
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/component: controller
|
||||
name: ingress-nginx
|
||||
namespace: ingress-nginx
|
||||
automountServiceAccountToken: true
|
||||
---
|
||||
# Source: ingress-nginx/templates/controller-configmap.yaml
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/component: controller
|
||||
name: ingress-nginx-controller
|
||||
namespace: ingress-nginx
|
||||
data:
|
||||
allow-snippet-annotations: 'true'
|
||||
---
|
||||
# Source: ingress-nginx/templates/clusterrole.yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
name: ingress-nginx
|
||||
rules:
|
||||
- apiGroups:
|
||||
- ''
|
||||
resources:
|
||||
- configmaps
|
||||
- endpoints
|
||||
- nodes
|
||||
- pods
|
||||
- secrets
|
||||
- namespaces
|
||||
verbs:
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- ''
|
||||
resources:
|
||||
- nodes
|
||||
verbs:
|
||||
- get
|
||||
- apiGroups:
|
||||
- ''
|
||||
resources:
|
||||
- services
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- networking.k8s.io
|
||||
resources:
|
||||
- ingresses
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- ''
|
||||
resources:
|
||||
- events
|
||||
verbs:
|
||||
- create
|
||||
- patch
|
||||
- apiGroups:
|
||||
- networking.k8s.io
|
||||
resources:
|
||||
- ingresses/status
|
||||
verbs:
|
||||
- update
|
||||
- apiGroups:
|
||||
- networking.k8s.io
|
||||
resources:
|
||||
- ingressclasses
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
---
|
||||
# Source: ingress-nginx/templates/clusterrolebinding.yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
name: ingress-nginx
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: ingress-nginx
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: ingress-nginx
|
||||
namespace: ingress-nginx
|
||||
---
|
||||
# Source: ingress-nginx/templates/controller-role.yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: Role
|
||||
metadata:
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/component: controller
|
||||
name: ingress-nginx
|
||||
namespace: ingress-nginx
|
||||
rules:
|
||||
- apiGroups:
|
||||
- ''
|
||||
resources:
|
||||
- namespaces
|
||||
verbs:
|
||||
- get
|
||||
- apiGroups:
|
||||
- ''
|
||||
resources:
|
||||
- configmaps
|
||||
- pods
|
||||
- secrets
|
||||
- endpoints
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- ''
|
||||
resources:
|
||||
- services
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- networking.k8s.io
|
||||
resources:
|
||||
- ingresses
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- networking.k8s.io
|
||||
resources:
|
||||
- ingresses/status
|
||||
verbs:
|
||||
- update
|
||||
- apiGroups:
|
||||
- networking.k8s.io
|
||||
resources:
|
||||
- ingressclasses
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- ''
|
||||
resources:
|
||||
- configmaps
|
||||
resourceNames:
|
||||
- ingress-controller-leader
|
||||
verbs:
|
||||
- get
|
||||
- update
|
||||
- apiGroups:
|
||||
- ''
|
||||
resources:
|
||||
- configmaps
|
||||
verbs:
|
||||
- create
|
||||
- apiGroups:
|
||||
- ''
|
||||
resources:
|
||||
- events
|
||||
verbs:
|
||||
- create
|
||||
- patch
|
||||
---
|
||||
# Source: ingress-nginx/templates/controller-rolebinding.yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: RoleBinding
|
||||
metadata:
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/component: controller
|
||||
name: ingress-nginx
|
||||
namespace: ingress-nginx
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: Role
|
||||
name: ingress-nginx
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: ingress-nginx
|
||||
namespace: ingress-nginx
|
||||
---
|
||||
# Source: ingress-nginx/templates/controller-service-webhook.yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/component: controller
|
||||
name: ingress-nginx-controller-admission
|
||||
namespace: ingress-nginx
|
||||
spec:
|
||||
type: ClusterIP
|
||||
ports:
|
||||
- name: https-webhook
|
||||
port: 443
|
||||
targetPort: webhook
|
||||
appProtocol: https
|
||||
selector:
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/component: controller
|
||||
---
|
||||
# Source: ingress-nginx/templates/controller-service.yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/component: controller
|
||||
name: ingress-nginx-controller
|
||||
namespace: ingress-nginx
|
||||
spec:
|
||||
type: NodePort
|
||||
ipFamilyPolicy: SingleStack
|
||||
ipFamilies:
|
||||
- IPv4
|
||||
ports:
|
||||
- name: http
|
||||
port: 80
|
||||
protocol: TCP
|
||||
targetPort: http
|
||||
appProtocol: http
|
||||
- name: https
|
||||
port: 443
|
||||
protocol: TCP
|
||||
targetPort: https
|
||||
appProtocol: https
|
||||
selector:
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/component: controller
|
||||
---
|
||||
# Source: ingress-nginx/templates/controller-deployment.yaml
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/component: controller
|
||||
name: ingress-nginx-controller
|
||||
namespace: ingress-nginx
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/component: controller
|
||||
revisionHistoryLimit: 10
|
||||
strategy:
|
||||
rollingUpdate:
|
||||
maxUnavailable: 1
|
||||
type: RollingUpdate
|
||||
minReadySeconds: 0
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/component: controller
|
||||
spec:
|
||||
dnsPolicy: ClusterFirst
|
||||
containers:
|
||||
- name: controller
|
||||
image: k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a
|
||||
imagePullPolicy: IfNotPresent
|
||||
lifecycle:
|
||||
preStop:
|
||||
exec:
|
||||
command:
|
||||
- /wait-shutdown
|
||||
args:
|
||||
- /nginx-ingress-controller
|
||||
- --election-id=ingress-controller-leader
|
||||
- --controller-class=k8s.io/ingress-nginx
|
||||
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
|
||||
- --validating-webhook=:8443
|
||||
- --validating-webhook-certificate=/usr/local/certificates/cert
|
||||
- --validating-webhook-key=/usr/local/certificates/key
|
||||
- --watch-ingress-without-class=true
|
||||
- --publish-status-address=localhost
|
||||
- --enable-ssl-passthrough=true
|
||||
securityContext:
|
||||
capabilities:
|
||||
drop:
|
||||
- ALL
|
||||
add:
|
||||
- NET_BIND_SERVICE
|
||||
runAsUser: 101
|
||||
allowPrivilegeEscalation: true
|
||||
env:
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
- name: LD_PRELOAD
|
||||
value: /usr/local/lib/libmimalloc.so
|
||||
livenessProbe:
|
||||
failureThreshold: 5
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 10
|
||||
periodSeconds: 10
|
||||
successThreshold: 1
|
||||
timeoutSeconds: 1
|
||||
readinessProbe:
|
||||
failureThreshold: 3
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 10
|
||||
periodSeconds: 10
|
||||
successThreshold: 1
|
||||
timeoutSeconds: 1
|
||||
ports:
|
||||
- name: http
|
||||
containerPort: 80
|
||||
protocol: TCP
|
||||
hostPort: 80
|
||||
- name: https
|
||||
containerPort: 443
|
||||
protocol: TCP
|
||||
hostPort: 443
|
||||
- name: webhook
|
||||
containerPort: 8443
|
||||
protocol: TCP
|
||||
volumeMounts:
|
||||
- name: webhook-cert
|
||||
mountPath: /usr/local/certificates/
|
||||
readOnly: true
|
||||
resources:
|
||||
requests:
|
||||
cpu: 100m
|
||||
memory: 90Mi
|
||||
nodeSelector:
|
||||
ingress-ready: 'true'
|
||||
kubernetes.io/os: linux
|
||||
tolerations:
|
||||
- effect: NoSchedule
|
||||
key: node-role.kubernetes.io/master
|
||||
operator: Equal
|
||||
serviceAccountName: ingress-nginx
|
||||
terminationGracePeriodSeconds: 0
|
||||
volumes:
|
||||
- name: webhook-cert
|
||||
secret:
|
||||
secretName: ingress-nginx-admission
|
||||
---
|
||||
# Source: ingress-nginx/templates/controller-ingressclass.yaml
|
||||
# We don't support namespaced ingressClass yet
|
||||
# So a ClusterRole and a ClusterRoleBinding is required
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: IngressClass
|
||||
metadata:
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/component: controller
|
||||
name: nginx
|
||||
namespace: ingress-nginx
|
||||
spec:
|
||||
controller: k8s.io/ingress-nginx
|
||||
---
|
||||
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
|
||||
# before changing this value, check the required kubernetes version
|
||||
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
|
||||
apiVersion: admissionregistration.k8s.io/v1
|
||||
kind: ValidatingWebhookConfiguration
|
||||
metadata:
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/component: admission-webhook
|
||||
name: ingress-nginx-admission
|
||||
webhooks:
|
||||
- name: validate.nginx.ingress.kubernetes.io
|
||||
matchPolicy: Equivalent
|
||||
rules:
|
||||
- apiGroups:
|
||||
- networking.k8s.io
|
||||
apiVersions:
|
||||
- v1
|
||||
operations:
|
||||
- CREATE
|
||||
- UPDATE
|
||||
resources:
|
||||
- ingresses
|
||||
failurePolicy: Fail
|
||||
sideEffects: None
|
||||
admissionReviewVersions:
|
||||
- v1
|
||||
clientConfig:
|
||||
service:
|
||||
namespace: ingress-nginx
|
||||
name: ingress-nginx-controller-admission
|
||||
path: /networking/v1/ingresses
|
||||
---
|
||||
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: ingress-nginx-admission
|
||||
namespace: ingress-nginx
|
||||
annotations:
|
||||
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
|
||||
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/component: admission-webhook
|
||||
---
|
||||
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
name: ingress-nginx-admission
|
||||
annotations:
|
||||
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
|
||||
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/component: admission-webhook
|
||||
rules:
|
||||
- apiGroups:
|
||||
- admissionregistration.k8s.io
|
||||
resources:
|
||||
- validatingwebhookconfigurations
|
||||
verbs:
|
||||
- get
|
||||
- update
|
||||
---
|
||||
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: ingress-nginx-admission
|
||||
annotations:
|
||||
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
|
||||
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/component: admission-webhook
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: ingress-nginx-admission
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: ingress-nginx-admission
|
||||
namespace: ingress-nginx
|
||||
---
|
||||
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: Role
|
||||
metadata:
|
||||
name: ingress-nginx-admission
|
||||
namespace: ingress-nginx
|
||||
annotations:
|
||||
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
|
||||
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/component: admission-webhook
|
||||
rules:
|
||||
- apiGroups:
|
||||
- ''
|
||||
resources:
|
||||
- secrets
|
||||
verbs:
|
||||
- get
|
||||
- create
|
||||
---
|
||||
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: RoleBinding
|
||||
metadata:
|
||||
name: ingress-nginx-admission
|
||||
namespace: ingress-nginx
|
||||
annotations:
|
||||
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
|
||||
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/component: admission-webhook
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: Role
|
||||
name: ingress-nginx-admission
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: ingress-nginx-admission
|
||||
namespace: ingress-nginx
|
||||
---
|
||||
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
|
||||
apiVersion: batch/v1
|
||||
kind: Job
|
||||
metadata:
|
||||
name: ingress-nginx-admission-create
|
||||
namespace: ingress-nginx
|
||||
annotations:
|
||||
helm.sh/hook: pre-install,pre-upgrade
|
||||
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/component: admission-webhook
|
||||
spec:
|
||||
template:
|
||||
metadata:
|
||||
name: ingress-nginx-admission-create
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/component: admission-webhook
|
||||
spec:
|
||||
containers:
|
||||
- name: create
|
||||
image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
|
||||
imagePullPolicy: IfNotPresent
|
||||
args:
|
||||
- create
|
||||
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
|
||||
- --namespace=$(POD_NAMESPACE)
|
||||
- --secret-name=ingress-nginx-admission
|
||||
env:
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
securityContext:
|
||||
allowPrivilegeEscalation: false
|
||||
restartPolicy: OnFailure
|
||||
serviceAccountName: ingress-nginx-admission
|
||||
nodeSelector:
|
||||
kubernetes.io/os: linux
|
||||
securityContext:
|
||||
runAsNonRoot: true
|
||||
runAsUser: 2000
|
||||
---
|
||||
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
|
||||
apiVersion: batch/v1
|
||||
kind: Job
|
||||
metadata:
|
||||
name: ingress-nginx-admission-patch
|
||||
namespace: ingress-nginx
|
||||
annotations:
|
||||
helm.sh/hook: post-install,post-upgrade
|
||||
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/component: admission-webhook
|
||||
spec:
|
||||
template:
|
||||
metadata:
|
||||
name: ingress-nginx-admission-patch
|
||||
labels:
|
||||
helm.sh/chart: ingress-nginx-4.0.10
|
||||
app.kubernetes.io/name: ingress-nginx
|
||||
app.kubernetes.io/instance: ingress-nginx
|
||||
app.kubernetes.io/version: 1.1.0
|
||||
app.kubernetes.io/managed-by: Helm
|
||||
app.kubernetes.io/component: admission-webhook
|
||||
spec:
|
||||
containers:
|
||||
- name: patch
|
||||
image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
|
||||
imagePullPolicy: IfNotPresent
|
||||
args:
|
||||
- patch
|
||||
- --webhook-name=ingress-nginx-admission
|
||||
- --namespace=$(POD_NAMESPACE)
|
||||
- --patch-mutating=false
|
||||
- --secret-name=ingress-nginx-admission
|
||||
- --patch-failure-policy=Fail
|
||||
env:
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
securityContext:
|
||||
allowPrivilegeEscalation: false
|
||||
restartPolicy: OnFailure
|
||||
serviceAccountName: ingress-nginx-admission
|
||||
nodeSelector:
|
||||
kubernetes.io/os: linux
|
||||
securityContext:
|
||||
runAsNonRoot: true
|
||||
runAsUser: 2000
|
||||
45
deploy/nodes-prerequisites.sh
Normal file
@@ -0,0 +1,45 @@
#!/usr/bin/env bash

KUBERNETES_VERSION=$1; shift
HOSTS=("$@")

# Install `containerd` as container runtime.
cat << EOF | tee containerd.conf
overlay
br_netfilter
EOF

cat << EOF | tee 99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

for i in "${!HOSTS[@]}"; do
  HOST=${HOSTS[$i]}
  ssh ${USER}@${HOST} -t 'sudo apt update && sudo apt install -y containerd'
  ssh ${USER}@${HOST} -t 'sudo systemctl start containerd && sudo systemctl enable containerd'
  scp containerd.conf ${USER}@${HOST}:
  ssh ${USER}@${HOST} -t 'sudo chown -R root:root containerd.conf && sudo mv containerd.conf /etc/modules-load.d/containerd.conf'
  ssh ${USER}@${HOST} -t 'sudo modprobe overlay && sudo modprobe br_netfilter'
  scp 99-kubernetes-cri.conf ${USER}@${HOST}:
  ssh ${USER}@${HOST} -t 'sudo chown -R root:root 99-kubernetes-cri.conf && sudo mv 99-kubernetes-cri.conf /etc/sysctl.d/99-kubernetes-cri.conf'
  ssh ${USER}@${HOST} -t 'sudo sysctl --system'
done

rm -f containerd.conf 99-kubernetes-cri.conf

# Install `kubectl`, `kubelet`, and `kubeadm` in the desired version.

INSTALL_KUBERNETES="sudo apt install -y kubelet=${KUBERNETES_VERSION}-00 kubeadm=${KUBERNETES_VERSION}-00 kubectl=${KUBERNETES_VERSION}-00 --allow-downgrades --allow-change-held-packages"

for i in "${!HOSTS[@]}"; do
  HOST=${HOSTS[$i]}
  ssh ${USER}@${HOST} -t 'sudo apt update'
  ssh ${USER}@${HOST} -t 'sudo apt install -y apt-transport-https ca-certificates curl'
  ssh ${USER}@${HOST} -t 'sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg'
  ssh ${USER}@${HOST} -t 'echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list'
  ssh ${USER}@${HOST} -t 'sudo apt update'
  ssh ${USER}@${HOST} -t ${INSTALL_KUBERNETES}
  ssh ${USER}@${HOST} -t 'sudo apt-mark hold kubelet kubeadm kubectl'
done
30
deploy/tenant-cloudinit.yaml
Normal file
@@ -0,0 +1,30 @@
#cloud-config
package_upgrade: true
packages:
  - containerd
  - apt-transport-https
  - ca-certificates
  - curl
write_files:
  - owner: root:root
    path: /etc/modules-load.d/containerd.conf
    content: |
      overlay
      br_netfilter
  - owner: root:root
    path: /etc/sysctl.d/99-kubernetes-cri.conf
    content: |
      net.bridge.bridge-nf-call-iptables = 1
      net.ipv4.ip_forward = 1
      net.bridge.bridge-nf-call-ip6tables = 1
runcmd:
  - sudo modprobe overlay
  - sudo modprobe br_netfilter
  - sudo sysctl --system
  - sudo systemctl start containerd
  - sudo systemctl enable containerd
  - sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
  - echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
  - sudo apt update
  - sudo apt install -y kubelet kubeadm kubectl containerd
  - sudo apt-mark hold kubelet kubeadm kubectl
@@ -1 +0,0 @@
# Deploy Kamaji on Azure
@@ -1 +0,0 @@
# Deploy a tenant cluster
@@ -1 +1,604 @@
# Setup a Kamaji environment
This getting started guide leads you through the process of creating a basic working Kamaji setup.

Kamaji requires:

- (optional) a bootstrap node;
- a multi-tenant `etcd` cluster made of 3 nodes hosting the datastore for the `Tenant`s' clusters
- a Kubernetes cluster, running the admin and Tenant Control Planes
- an arbitrary number of machines hosting `Tenant`s' workloads

> In this guide, we assume all machines are running `Ubuntu 20.04`.

* [Prepare the bootstrap workspace](#prepare-the-bootstrap-workspace)
* [Access Admin cluster](#access-admin-cluster)
* [Setup external multi-tenant etcd](#setup-external-multi-tenant-etcd)
* [Setup internal multi-tenant etcd](#setup-internal-multi-tenant-etcd)
* [Install Kamaji controller](#install-kamaji-controller)
* [Setup Tenant cluster](#setup-tenant-cluster)

## Prepare the bootstrap workspace
This getting started guide is meant to be run from a remote or local bootstrap machine.
First, prepare the workspace directory:

```
git clone https://github.com/clastix/kamaji
cd kamaji/deploy
```

Throughout the instructions, shell variables are used to indicate values that you should adjust to your own environment, as sketched below.
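The variables used in this guide (for example `ETCD0`, `ETCD1`, `ETCD2`, and `USER`) come from the provided environment files. An illustrative sketch of the kind of values they hold; the addresses below are assumptions taken from the sample output later in this guide and must be replaced with your own:

```bash
# Illustrative values only: adjust to your environment.
USER=clastix                        # SSH user with sudo rights on all hosts
ETCD0=192.168.32.10                 # first etcd node
ETCD1=192.168.32.11                 # second etcd node
ETCD2=192.168.32.12                 # third etcd node
ETCDHOSTS=(${ETCD0} ${ETCD1} ${ETCD2})
```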
### Install required tools
On the bootstrap machine, install all the tools required to work with a Kamaji setup.

#### cfssl and cfssljson
The `cfssl` and `cfssljson` command line utilities will be used, in addition to `kubeadm`, to provision the PKI infrastructure and generate TLS certificates.

```
wget -q --show-progress --https-only --timestamping \
  https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssl \
  https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssljson

chmod +x cfssl cfssljson
sudo mv cfssl cfssljson /usr/local/bin/
```

#### Kubernetes tools
Install `kubeadm` and `kubectl`:

```bash
sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl && \
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg && \
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
sudo apt update && sudo apt install -y kubeadm kubectl --allow-change-held-packages && \
sudo apt-mark hold kubeadm kubectl
```

#### etcdctl
For administration of the `etcd` cluster, download and install the `etcdctl` CLI utility on the bootstrap machine:

```bash
ETCD_VER=v3.5.1
ETCD_URL=https://storage.googleapis.com/etcd
curl -L ${ETCD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o etcd-${ETCD_VER}-linux-amd64.tar.gz
tar xzvf etcd-${ETCD_VER}-linux-amd64.tar.gz etcd-${ETCD_VER}-linux-amd64/etcdctl
sudo cp etcd-${ETCD_VER}-linux-amd64/etcdctl /usr/bin/etcdctl
rm -rf etcd-${ETCD_VER}-linux-amd64*
```

Verify that `etcdctl` is installed:

```bash
etcdctl version
etcdctl version: 3.5.1
API version: 3.5
```

## Access Admin cluster
In Kamaji, an Admin Cluster is a regular Kubernetes cluster which hosts zero to many Tenant Cluster Control Planes running as pods. The admin cluster acts as the management cluster for all the Tenant clusters and implements monitoring, logging, and governance of the whole Kamaji setup, including all Tenant clusters.

Any regular and conformant Kubernetes v1.22+ cluster can be turned into a Kamaji setup. We have currently tested:

- [Kubernetes installed with `kubeadm`](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/).
- [Azure AKS managed service](./kamaji-on-azure.md).
- [KinD for local development](./kind/README.md).

The admin cluster should provide:

- a CNI module installed, e.g. Calico
- support for the LoadBalancer service type, e.g. MetalLB
- an ingress controller
- a CSI module installed, with a StorageClass for the multi-tenant `etcd`
- a monitoring stack, e.g. Prometheus and Grafana

Make sure you have a `kubeconfig` file with admin permissions on the cluster you want to turn into a Kamaji Admin Cluster. A few quick checks are sketched below.
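These checks are only a sketch; class and resource names depend on your own cluster:

```bash
kubectl get nodes -o wide     # admin access to the cluster works
kubectl get storageclass      # a CSI-backed StorageClass is available for etcd
kubectl get ingressclass      # an ingress controller is installed
```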
## Setup external multi-tenant etcd
In this section, we're going to set up a multi-tenant `etcd` cluster on dedicated nodes. Alternatively, if you want to use an internal `etcd` cluster running as a Kubernetes StatefulSet, jump [here](#setup-internal-multi-tenant-etcd).

### Ensure host access
From the bootstrap machine, load the environment for the external `etcd` setup:

```bash
source kamaji-external-etcd.env
```

The installer requires a user that has access to all hosts. In order to run the installer as a non-root user, first configure passwordless sudo rights on each host, for example as sketched below.
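How you grant passwordless sudo depends on how the machines were provisioned; a minimal sketch for a generic Ubuntu host, assuming your login user is the one stored in `${USER}`:

```bash
# Run on each host: allow ${USER} to sudo without a password prompt
echo "${USER} ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/${USER}
sudo chmod 0440 /etc/sudoers.d/${USER}
```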
Generate an SSH key on the host you run the installer on:

```bash
ssh-keygen -t rsa
```

> Do not use a password.

Distribute the key to the other cluster hosts.

Depending on your environment, use a bash loop:

```bash
HOSTS=(${ETCD0} ${ETCD1} ${ETCD2})
for i in "${!HOSTS[@]}"; do
  HOST=${HOSTS[$i]}
  ssh-copy-id -i ~/.ssh/id_rsa.pub $HOST;
done
```

> Alternatively, inject the generated public key into the machines' metadata.

Confirm that you can access each host from the bootstrap machine:

```bash
HOSTS=(${ETCD0} ${ETCD1} ${ETCD2})
for i in "${!HOSTS[@]}"; do
  HOST=${HOSTS[$i]}
  ssh ${USER}@${HOST} -t 'hostname';
done
```
### Configure disk layout
As per the `etcd` [requirements](https://etcd.io/docs/v3.5/op-guide/hardware/#disks), back `etcd`'s storage with an SSD. An SSD usually provides lower write latencies, with less variance than a spinning disk, thus improving the stability and reliability of `etcd`.

For each `etcd` machine, we assume an additional `sdb` disk of 10GB:

```
clastix@kamaji-etcd-00:~$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda       8:0    0   16G  0 disk
├─sda1    8:1    0 15.9G  0 part /
├─sda14   8:14   0    4M  0 part
└─sda15   8:15   0  106M  0 part /boot/efi
sdb       8:16   0   10G  0 disk
sr0      11:0    1    4M  0 rom
```

Create a partition, format it, and mount the `etcd` disk by running the script below from the bootstrap machine:

> If you already used the `etcd` disks, please make sure to wipe the partitions with `sudo wipefs --all --force /dev/sdb` before attempting to recreate them.

```bash
for i in "${!ETCDHOSTS[@]}"; do
  HOST=${ETCDHOSTS[$i]}
  ssh ${USER}@${HOST} -t 'echo type=83 | sudo sfdisk -f -q /dev/sdb'
  ssh ${USER}@${HOST} -t 'sudo mkfs -F -q -t ext4 /dev/sdb1'
  ssh ${USER}@${HOST} -t 'sudo mkdir -p /var/lib/etcd'
  ssh ${USER}@${HOST} -t 'sudo e2label /dev/sdb1 ETCD'
  ssh ${USER}@${HOST} -t 'echo LABEL=ETCD /var/lib/etcd ext4 defaults 0 1 | sudo tee -a /etc/fstab'
  ssh ${USER}@${HOST} -t 'sudo mount -a'
  ssh ${USER}@${HOST} -t 'sudo lsblk -f'
done
```
### Install prerequisites
Use the bash script `nodes-prerequisites.sh` to install all the dependencies on all the cluster nodes:

- Install `containerd` as container runtime
- Install `crictl`, the command line for working with `containerd`
- Install `kubectl`, `kubelet`, and `kubeadm` in the desired version, e.g. `v1.24.0`

Run the installation script:

```bash
VERSION=v1.24.0
./nodes-prerequisites.sh ${VERSION:1} ${HOSTS[@]}
```
### Configure kubelet

On each `etcd` node, configure the `kubelet` service to start the `etcd` static pods using `containerd` as container runtime, by running the script below from the bootstrap machine:

```bash
cat << EOF > 20-etcd-service-manager.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock
Restart=always
EOF
```

```
for i in "${!ETCDHOSTS[@]}"; do
  HOST=${ETCDHOSTS[$i]}
  scp 20-etcd-service-manager.conf ${USER}@${HOST}:
  ssh ${USER}@${HOST} -t 'sudo chown -R root:root 20-etcd-service-manager.conf && sudo mv 20-etcd-service-manager.conf /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf'
  ssh ${USER}@${HOST} -t 'sudo systemctl daemon-reload'
  ssh ${USER}@${HOST} -t 'sudo systemctl start kubelet'
  ssh ${USER}@${HOST} -t 'sudo systemctl enable kubelet'
done

rm -f 20-etcd-service-manager.conf
```
### Create configuration
Create temp directories to store the files that will end up on the `etcd` hosts:

```bash
mkdir -p /tmp/${ETCD0}/ /tmp/${ETCD1}/ /tmp/${ETCD2}/
NAMES=("etcd00" "etcd01" "etcd02")

for i in "${!ETCDHOSTS[@]}"; do
  HOST=${ETCDHOSTS[$i]}
  NAME=${NAMES[$i]}

  cat <<EOF | sudo tee /tmp/${HOST}/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta2"
kind: ClusterConfiguration
etcd:
  local:
    serverCertSANs:
    - "${HOST}"
    peerCertSANs:
    - "${HOST}"
    extraArgs:
      initial-cluster: ${NAMES[0]}=https://${ETCDHOSTS[0]}:2380,${NAMES[1]}=https://${ETCDHOSTS[1]}:2380,${NAMES[2]}=https://${ETCDHOSTS[2]}:2380
      initial-cluster-state: new
      name: ${NAME}
      listen-peer-urls: https://${HOST}:2380
      listen-client-urls: https://${HOST}:2379
      advertise-client-urls: https://${HOST}:2379
      initial-advertise-peer-urls: https://${HOST}:2380
      auto-compaction-mode: periodic
      auto-compaction-retention: 5m
      quota-backend-bytes: '8589934592'
EOF
done
```
> Note:
>
> ##### Etcd compaction
>
> Enabling `etcd` authentication prevents the tenant API servers (clients of `etcd`) from issuing compaction requests, so we configure `etcd` to compact the keyspace automatically with the `--auto-compaction-*` options, using a period of hours or minutes. With `--auto-compaction-mode=periodic`, `--auto-compaction-retention=5m`, and about 1000 writes per minute, `etcd` compacts 5000 revisions every 5 minutes.
>
> ##### Etcd storage quota
>
> Currently, `etcd` is limited in storage size, with a default of `2GB` configurable with the `--quota-backend-bytes` flag up to `8GB`. In Kamaji, we use a single `etcd` to store multiple tenant clusters, so we need to increase this size. Please note that `etcd` warns at startup if the configured value exceeds `8GB`.
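Once the cluster is up, you can keep an eye on how close each member is to the configured quota; a minimal check, assuming the `ETCDCTL_*` variables are exported as shown later in this section:

```bash
# The DB SIZE column reports the on-disk keyspace size of each member
etcdctl endpoint status --cluster -w table
```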
### Generate certificates
On the bootstrap machine, using the `kubeadm` init phase, create and distribute the `etcd` CA certificates:

```bash
sudo kubeadm init phase certs etcd-ca
mkdir kamaji
sudo cp -r /etc/kubernetes/pki/etcd kamaji
sudo chown -R ${USER}. kamaji/etcd
```

For each `etcd` host:

```bash
for i in "${!ETCDHOSTS[@]}"; do
  HOST=${ETCDHOSTS[$i]}
  sudo kubeadm init phase certs etcd-server --config=/tmp/${HOST}/kubeadmcfg.yaml
  sudo kubeadm init phase certs etcd-peer --config=/tmp/${HOST}/kubeadmcfg.yaml
  sudo kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST}/kubeadmcfg.yaml
  sudo cp -R /etc/kubernetes/pki /tmp/${HOST}/
  sudo find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
done
```
### Startup the cluster
Upload the certificates to each `etcd` node and restart the `kubelet`:

```bash
for i in "${!ETCDHOSTS[@]}"; do
  HOST=${ETCDHOSTS[$i]}
  sudo chown -R ${USER}. /tmp/${HOST}
  scp -r /tmp/${HOST}/* ${USER}@${HOST}:
  ssh ${USER}@${HOST} -t 'sudo chown -R root:root pki'
  ssh ${USER}@${HOST} -t 'sudo mv pki /etc/kubernetes/'
  ssh ${USER}@${HOST} -t 'sudo kubeadm init phase etcd local --config=kubeadmcfg.yaml'
  ssh ${USER}@${HOST} -t 'sudo systemctl daemon-reload'
  ssh ${USER}@${HOST} -t 'sudo systemctl restart kubelet'
done
```

This starts the static `etcd` pod on each node, and then the cluster forms.

Generate certificates for the `root` user:

```bash
cat > root-csr.json <<EOF
{
  "CN": "root",
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
EOF
```

```bash
cfssl gencert \
  -ca=kamaji/etcd/ca.crt \
  -ca-key=kamaji/etcd/ca.key \
  -config=cfssl-cert-config.json \
  -profile=client-authentication \
  root-csr.json | cfssljson -bare root
```

```bash
cp root.pem kamaji/etcd/root.crt
cp root-key.pem kamaji/etcd/root.key
rm root*
```

The result should be:

```bash
$ tree kamaji
kamaji
└── etcd
    ├── ca.crt
    ├── ca.key
    ├── root.crt
    └── root.key
```
Use the `root` user to check that the just-formed `etcd` cluster is healthy:

```bash
export ETCDCTL_CACERT=kamaji/etcd/ca.crt
export ETCDCTL_CERT=kamaji/etcd/root.crt
export ETCDCTL_KEY=kamaji/etcd/root.key
export ETCDCTL_ENDPOINTS=https://${ETCD0}:2379

etcdctl member list -w table
```

The result should be something like this:

```
+------------------+---------+--------+----------------------------+----------------------------+------------+
|        ID        | STATUS  |  NAME  |         PEER ADDRS         |        CLIENT ADDRS        | IS LEARNER |
+------------------+---------+--------+----------------------------+----------------------------+------------+
| 72657d6307364226 | started | etcd01 | https://192.168.32.11:2380 | https://192.168.32.11:2379 |      false |
| 91eb892c5ee87610 | started | etcd00 | https://192.168.32.10:2380 | https://192.168.32.10:2379 |      false |
| e9971c576949c34e | started | etcd02 | https://192.168.32.12:2380 | https://192.168.32.12:2379 |      false |
+------------------+---------+--------+----------------------------+----------------------------+------------+
```
### Enable multi-tenancy
The `root` user, which has full access to `etcd`, must be created before activating authentication. The `root` user must have the `root` role and is allowed to change anything inside `etcd`.

```bash
etcdctl user add --no-password=true root
etcdctl role add root
etcdctl user grant-role root root
etcdctl auth enable
```
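A quick sanity check that authentication is really enforced, using the `root` client certificates already exported above (the `auth status` subcommand is available from etcd v3.5):

```bash
# Expected output includes "Authentication Status: true"
etcdctl auth status
```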
### Cleanup
If you want to get rid of the `etcd` cluster, log in to each node and clean it:

```bash
HOSTS=(${ETCD0} ${ETCD1} ${ETCD2})
for i in "${!HOSTS[@]}"; do
  HOST=${HOSTS[$i]}
  ssh ${USER}@${HOST} -t 'sudo kubeadm reset -f';
  ssh ${USER}@${HOST} -t 'sudo systemctl reboot';
done
```
## Setup internal multi-tenant etcd
If you opted for an internal `etcd` cluster running in the Kamaji admin cluster, follow the steps below.

From the bootstrap machine, load the environment for the internal `etcd` setup:

```bash
source kamaji-internal-etcd.env
```

### Generate certificates
On the bootstrap machine, using the `kubeadm` init phase, create the `etcd` CA certificates:

```bash
sudo kubeadm init phase certs etcd-ca
mkdir kamaji
sudo cp -r /etc/kubernetes/pki/etcd kamaji
sudo chown -R ${USER}. kamaji/etcd
```
Generate the `etcd` certificates for peers:

```
cat << EOF | tee kamaji/etcd/peer-csr.json
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "etcd-0",
    "etcd-0.etcd",
    "etcd-0.etcd.${ETCD_NAMESPACE}.svc",
    "etcd-0.etcd.${ETCD_NAMESPACE}.svc.cluster.local",
    "etcd-1",
    "etcd-1.etcd",
    "etcd-1.etcd.${ETCD_NAMESPACE}.svc",
    "etcd-1.etcd.${ETCD_NAMESPACE}.svc.cluster.local",
    "etcd-2",
    "etcd-2.etcd",
    "etcd-2.etcd.${ETCD_NAMESPACE}.svc",
    "etcd-2.etcd.${ETCD_NAMESPACE}.svc.cluster.local"
  ]
}
EOF

cfssl gencert -ca=kamaji/etcd/ca.crt -ca-key=kamaji/etcd/ca.key \
  -config=cfssl-cert-config.json \
  -profile=peer-authentication kamaji/etcd/peer-csr.json | cfssljson -bare kamaji/etcd/peer
```

Generate the `etcd` certificates for the server:

```
cat << EOF | tee kamaji/etcd/server-csr.json
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "etcd-server",
    "etcd-server.${ETCD_NAMESPACE}.svc",
    "etcd-server.${ETCD_NAMESPACE}.svc.cluster.local",
    "etcd-0.etcd.${ETCD_NAMESPACE}.svc.cluster.local",
    "etcd-1.etcd.${ETCD_NAMESPACE}.svc.cluster.local",
    "etcd-2.etcd.${ETCD_NAMESPACE}.svc.cluster.local"
  ]
}
EOF

cfssl gencert -ca=kamaji/etcd/ca.crt -ca-key=kamaji/etcd/ca.key \
  -config=cfssl-cert-config.json \
  -profile=peer-authentication kamaji/etcd/server-csr.json | cfssljson -bare kamaji/etcd/server
```

Generate certificates for the `root` user of `etcd`:

```
cat << EOF | tee kamaji/etcd/root-csr.json
{
  "CN": "root",
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
EOF

cfssl gencert -ca=kamaji/etcd/ca.crt -ca-key=kamaji/etcd/ca.key \
  -config=cfssl-cert-config.json \
  -profile=client-authentication kamaji/etcd/root-csr.json | cfssljson -bare kamaji/etcd/root
```
Install the `etcd` in the Kamaji admin cluster
|
||||
|
||||
```bash
|
||||
kubectl create namespace ${ETCD_NAMESPACE}
|
||||
|
||||
kubectl -n ${ETCD_NAMESPACE} create secret generic etcd-certs \
|
||||
--from-file=kamaji/etcd/ca.crt \
|
||||
--from-file=kamaji/etcd/ca.key \
|
||||
--from-file=kamaji/etcd/peer-key.pem --from-file=kamaji/etcd/peer.pem \
|
||||
--from-file=kamaji/etcd/server-key.pem --from-file=kamaji/etcd/server.pem
|
||||
|
||||
kubectl -n ${ETCD_NAMESPACE} apply -f etcd/etcd-cluster.yaml
|
||||
```
|
||||
|
||||
Install an `etcd` client to interact with the `etcd` server
|
||||
|
||||
```bash
|
||||
kubectl -n ${ETCD_NAMESPACE} create secret tls root-certs \
|
||||
--key=kamaji/etcd/root-key.pem \
|
||||
--cert=kamaji/etcd/root.pem
|
||||
|
||||
kubectl -n ${ETCD_NAMESPACE} apply -f etcd/etcd-client.yaml
|
||||
```
|
||||
|
||||
Wait the etcd instances discover each other and the cluster is formed:
|
||||
|
||||
```bash
|
||||
kubectl -n ${ETCD_NAMESPACE} wait pod --for=condition=ready -l app=etcd --timeout=120s
|
||||
echo -n "\nChecking endpoint's health..."
|
||||
|
||||
kubectl -n ${ETCD_NAMESPACE} exec etcd-root-client -- /bin/bash -c "etcdctl endpoint health 1>/dev/null 2>/dev/null; until [ \$$? -eq 0 ]; do sleep 10; printf "."; etcdctl endpoint health 1>/dev/null 2>/dev/null; done;"
|
||||
echo -n "\netcd cluster's health:\n"
|
||||
|
||||
kubectl -n ${ETCD_NAMESPACE} exec etcd-root-client -- /bin/bash -c "etcdctl endpoint health"
|
||||
echo -n "\nWaiting for all members..."
|
||||
|
||||
kubectl -n ${ETCD_NAMESPACE} exec etcd-root-client -- /bin/bash -c "until [ \$$(etcdctl member list 2>/dev/null | wc -l) -eq 3 ]; do sleep 10; printf '.'; done;"
|
||||
@echo -n "\netcd's members:\n"
|
||||
|
||||
kubectl -n ${ETCD_NAMESPACE} exec etcd-root-client -- /bin/bash -c "etcdctl member list -w table"
|
||||
```
|
||||
|
||||
### Enable multi-tenancy
The `root` user has full access to `etcd` and must be created before authentication is enabled. It must be granted the `root` role, which allows it to change anything inside `etcd`.

```bash
kubectl -n ${ETCD_NAMESPACE} exec etcd-root-client -- etcdctl user add --no-password=true root
kubectl -n ${ETCD_NAMESPACE} exec etcd-root-client -- etcdctl role add root
kubectl -n ${ETCD_NAMESPACE} exec etcd-root-client -- etcdctl user grant-role root root
kubectl -n ${ETCD_NAMESPACE} exec etcd-root-client -- etcdctl auth enable
```

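Kamaji provisions a dedicated `etcd` user and role for each Tenant Control Plane it manages. For reference only, the equivalent manual steps would look roughly like the following sketch (the tenant name `tenant-00` and its key prefix are illustrative):

```bash
TENANT=tenant-00

# Create a user and a role for the tenant, then restrict the role to the
# tenant's own key prefix so it cannot touch other tenants' data.
kubectl -n ${ETCD_NAMESPACE} exec etcd-root-client -- etcdctl user add --no-password=true ${TENANT}
kubectl -n ${ETCD_NAMESPACE} exec etcd-root-client -- etcdctl role add ${TENANT}
kubectl -n ${ETCD_NAMESPACE} exec etcd-root-client -- etcdctl role grant-permission ${TENANT} readwrite /${TENANT}/ --prefix=true
kubectl -n ${ETCD_NAMESPACE} exec etcd-root-client -- etcdctl user grant-role ${TENANT} ${TENANT}
```
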
## Install Kamaji controller
Currently, the behaviour of the Kamaji controller for Tenant Control Planes is configured by the following, in order of precedence:

- CLI flags
- Environment variables
- Configuration file `kamaji.yaml` built into the image

By default, Kamaji searches for the configuration file and uses the parameters found in it. Environment variables, when set, override the configuration file; CLI flags, when passed, override both environment variables and the configuration file.

There are multiple ways to deploy the Kamaji controller:

- Use the single YAML file installer
- Use Kustomize with Makefile
- Use the Kamaji Helm Chart

The Kamaji controller needs to access the multi-tenant `etcd` in order to provision the access for each tenant `kube-apiserver`.

Create the secrets containing the `etcd` certificates:

```bash
kubectl create namespace kamaji-system
kubectl -n kamaji-system create secret generic etcd-certs \
--from-file=kamaji/etcd/ca.crt \
--from-file=kamaji/etcd/ca.key

kubectl -n kamaji-system create secret tls root-client-certs \
--cert=kamaji/etcd/root.pem \
--key=kamaji/etcd/root-key.pem
```

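Before deploying the controller, it is worth double-checking that both secrets exist:

```bash
kubectl -n kamaji-system get secret etcd-certs root-client-certs
```
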
### Install with a single manifest
Install with the single YAML file installer:

```bash
kubectl -n kamaji-system apply -f ../config/install.yaml
```

Make sure to patch the `etcd` endpoints of the Kamaji controller according to your environment.

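The `ETCD0`, `ETCD1`, and `ETCD2` variables used in the patch below must resolve to your `etcd` members. With the internal multi-tenant `etcd` deployed earlier, a plausible assignment is the following sketch (it assumes the `etcd` pod and headless Service names used in this guide):

```bash
export ETCD0=etcd-0.etcd.${ETCD_NAMESPACE}.svc.cluster.local
export ETCD1=etcd-1.etcd.${ETCD_NAMESPACE}.svc.cluster.local
export ETCD2=etcd-2.etcd.${ETCD_NAMESPACE}.svc.cluster.local
```

Then apply the patch:
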
```bash
cat > patch-deploy.yaml <<EOF
spec:
  template:
    spec:
      containers:
      - name: manager
        args:
        - --health-probe-bind-address=:8081
        - --metrics-bind-address=127.0.0.1:8080
        - --leader-elect
        - --etcd-endpoints=${ETCD0}:2379,${ETCD1}:2379,${ETCD2}:2379
EOF

kubectl -n kamaji-system patch \
deployment kamaji-controller-manager \
--patch-file patch-deploy.yaml
```

The Kamaji Tenant Control Plane controller is now running on the Admin Cluster:

```bash
kubectl -n kamaji-system get deploy

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
kamaji-controller-manager   1/1     1            1           14h
```

## Setup Tenant Cluster
You now have an Admin Cluster ready to run multiple Tenant Control Planes deployed by the Kamaji controller. Please refer to the [Kamaji Tenant Deployment guide](./kamaji-tenant-deployment-guide.md).

403
docs/kamaji-azure-deployment-guide.md
Normal file
@@ -0,0 +1,403 @@

# Setup Kamaji on Azure

In this section, we're going to set up Kamaji on MS Azure with:

- one local bootstrap workstation
- a regular AKS cluster as Kamaji Admin Cluster
- a multi-tenant `etcd` internal cluster running on AKS
- an arbitrary number of Azure virtual machines hosting the `Tenant`s' workloads

## Bootstrap machine
This getting started guide is supposed to be run from a remote or local bootstrap machine.
First, prepare the workspace directory:

```bash
git clone https://github.com/clastix/kamaji
cd kamaji/deploy
```

1. Follow the instructions in [Prepare the bootstrap workspace](./getting-started-with-kamaji.md#prepare-the-bootstrap-workspace).
2. Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli).
3. Make sure you have a valid Azure subscription.
4. Login to Azure and select the subscription:

```bash
az login
az account set --subscription "MySubscription"
```

> Currently, the Kamaji setup, including Admin and Tenant clusters, needs to be deployed within the same Azure region. Cross-region deployments are not (yet) validated.

## Setup Admin cluster on AKS
Throughout the instructions, shell variables are used to indicate values that you should adjust to your own Azure environment:

```bash
source kamaji-azure.env
```

> We use the Azure CLI to set up the Kamaji Admin cluster on AKS.

```bash
az group create \
--name $KAMAJI_RG \
--location $KAMAJI_REGION

az aks create \
--resource-group $KAMAJI_RG \
--name $KAMAJI_CLUSTER \
--location $KAMAJI_REGION \
--zones 1 2 3 \
--node-count 3 \
--nodepool-name $KAMAJI_CLUSTER \
--ssh-key-value @~/.ssh/id_rsa.pub \
--no-wait
```

Once the cluster formation succeeds, get the credentials to access the cluster as admin:

```bash
az aks get-credentials \
--resource-group $KAMAJI_RG \
--name $KAMAJI_CLUSTER
```

And check that you can access it:

```bash
kubectl cluster-info
```

## Setup internal multi-tenant etcd
Follow the instructions [here](./getting-started-with-kamaji.md#setup-internal-multi-tenant-etcd).

## Install Kamaji controller
Follow the instructions [here](./getting-started-with-kamaji.md#install-kamaji-controller).

## Create Tenant Clusters
To create a Tenant Cluster in Kamaji on AKS, we have to work on both the Kamaji and Azure infrastructure sides.

```bash
source kamaji-tenant-azure.env
```

### On Kamaji side
With Kamaji on AKS, the tenant control plane is accessible:

- from the tenant worker nodes, through an internal load balancer at `https://${TENANT_ADDR}:${TENANT_PORT}`
- from the tenant admin user, through an external load balancer at `https://${TENANT_NAME}.${KAMAJI_REGION}.cloudapp.azure.com:443`

#### Allocate an internal IP address for the Tenant Control Plane
Currently, Kamaji has a known limitation: the address `${TENANT_ADDR}:${TENANT_PORT}` must be known in advance, before the Tenant Control Plane is created. Given this limitation, let's reserve an IP address and a port in the same virtual subnet used by the Kamaji admin cluster:

```bash
export TENANT_ADDR=10.240.0.100
export TENANT_PORT=6443
export TENANT_DOMAIN=$KAMAJI_REGION.cloudapp.azure.com
```

> Make sure the `TENANT_ADDR` value does not overlap with IP addresses already allocated in the AKS virtual network. In the future, Kamaji will implement dynamic IP allocation.

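One way to check which addresses are free is to inspect the subnets of the AKS virtual network. A possible check, assuming `KAMAJI_NODE_RG` is the AKS node resource group (the same variable used later in this guide), is:

```bash
# List the AKS subnets and their address prefixes, then pick an unused address
az network vnet list -g $KAMAJI_NODE_RG \
  --query "[].subnets[].{name:name, prefix:addressPrefix}" \
  --output table
```
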
#### Create the Tenant Control Plane

Create the manifest for the Tenant Control Plane:

```yaml
cat > ${TENANT_NAMESPACE}-${TENANT_NAME}-tcp.yaml <<EOF
---
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: ${TENANT_NAME}
  namespace: ${TENANT_NAMESPACE}
spec:
  controlPlane:
    deployment:
      replicas: 2
      additionalMetadata:
        annotations:
          environment.clastix.io: ${TENANT_NAME}
        labels:
          tenant.clastix.io: ${TENANT_NAME}
          kind.clastix.io: deployment
    service:
      additionalMetadata:
        annotations:
          environment.clastix.io: ${TENANT_NAME}
          service.beta.kubernetes.io/azure-load-balancer-internal: "true"
        labels:
          tenant.clastix.io: ${TENANT_NAME}
          kind.clastix.io: service
      serviceType: LoadBalancer
    ingress:
      enabled: false
  kubernetes:
    version: ${TENANT_VERSION}
    kubelet:
      cgroupfs: systemd
    admissionControllers:
    - ResourceQuota
    - LimitRanger
  networkProfile:
    address: ${TENANT_ADDR}
    port: ${TENANT_PORT}
    domain: ${TENANT_DOMAIN}
    serviceCidr: ${TENANT_SVC_CIDR}
    podCidr: ${TENANT_POD_CIDR}
    dnsServiceIPs:
    - ${TENANT_DNS_SERVICE}
---
apiVersion: v1
kind: Service
metadata:
  name: ${TENANT_NAME}-public
  namespace: ${TENANT_NAMESPACE}
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: ${TENANT_NAME}
spec:
  ports:
  - port: 443
    protocol: TCP
    targetPort: ${TENANT_PORT}
  selector:
    kamaji.clastix.io/soot: ${TENANT_NAME}
  type: LoadBalancer
EOF
```

Make sure:

- `tcp.spec.controlPlane.service.serviceType=LoadBalancer` is set, along with the annotation `service.beta.kubernetes.io/azure-load-balancer-internal=true`. This tells AKS to expose the service through an Azure internal load balancer.

- the public load balancer service has the annotation `service.beta.kubernetes.io/azure-dns-label-name=${TENANT_NAME}`, to expose the Tenant Control Plane with the domain name `${TENANT_NAME}.${KAMAJI_REGION}.cloudapp.azure.com`.

Create the Tenant Control Plane:

```bash
kubectl create namespace ${TENANT_NAMESPACE}
kubectl apply -f ${TENANT_NAMESPACE}-${TENANT_NAME}-tcp.yaml
```

And check it out:

```bash
$ kubectl get tcp
NAME        VERSION   CONTROL-PLANE-ENDPOINT   KUBECONFIG                   PRIVATE   AGE
tenant-00   v1.23.4   10.240.0.100:6443        tenant-00-admin-kubeconfig   true      46m

$ kubectl get svc
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)          AGE
tenant-00          LoadBalancer   10.0.223.161   10.240.0.100     6443:31902/TCP   46m
tenant-00-public   LoadBalancer   10.0.205.97    20.101.215.149   443:30697/TCP    19h

$ kubectl get deploy
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
tenant-00   2/2     2            2           47m
```

#### Working with Tenant Control Plane
Check the access to the Tenant Control Plane:

```bash
curl -k https://${TENANT_NAME}.${KAMAJI_REGION}.cloudapp.azure.com/healthz
```

Let's retrieve the `kubeconfig` in order to work with it:

```bash
kubectl get secrets -n ${TENANT_NAMESPACE} ${TENANT_NAME}-admin-kubeconfig -o json \
| jq -r '.data["admin.conf"]' \
| base64 -d \
> ${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig

kubectl --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig config \
set-cluster ${TENANT_NAME} \
--server https://${TENANT_NAME}.${KAMAJI_REGION}.cloudapp.azure.com
```

and let's check it out:

```bash
kubectl --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig get svc

NAMESPACE   NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     kubernetes   ClusterIP   10.32.0.1    <none>        443/TCP   6m
```

Check out how the Tenant Control Plane advertises itself to workloads:

```bash
kubectl --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig get ep

NAME         ENDPOINTS           AGE
kubernetes   10.240.0.100:6443   57m
```

Make sure it's `${TENANT_ADDR}:${TENANT_PORT}`.

### Prepare the Infrastructure for the Tenant virtual machines
Kamaji provides Control Plane as a Service, so tenant users can join their own virtual machines as worker nodes. Each tenant can place its virtual machines in a dedicated Azure virtual network.

Prepare the Tenant infrastructure:

```bash
az group create \
--name $TENANT_RG \
--location $KAMAJI_REGION

az network nsg create \
--resource-group $TENANT_RG \
--name $TENANT_NSG

az network nsg rule create \
--resource-group $TENANT_RG \
--nsg-name $TENANT_NSG \
--name $TENANT_NSG-ssh \
--protocol tcp \
--priority 1000 \
--destination-port-range 22 \
--access allow

az network vnet create \
--resource-group $TENANT_RG \
--name $TENANT_VNET_NAME \
--address-prefix $TENANT_VNET_ADDRESS \
--subnet-name $TENANT_SUBNET_NAME \
--subnet-prefix $TENANT_SUBNET_ADDRESS

az network vnet subnet create \
--resource-group $TENANT_RG \
--vnet-name $TENANT_VNET_NAME \
--name $TENANT_SUBNET_NAME \
--address-prefixes $TENANT_SUBNET_ADDRESS \
--network-security-group $TENANT_NSG
```

The connection between the Tenant virtual network and the Kamaji AKS virtual network leverages [Azure Network Peering](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview).

Enable the network peering between the Tenant Virtual Network and the Kamaji AKS Virtual Network:

```bash
KAMAJI_VNET_NAME=`az network vnet list -g $KAMAJI_NODE_RG --query [].name --out tsv`
KAMAJI_VNET_ID=`az network vnet list -g $KAMAJI_NODE_RG --query [].id --out tsv`
TENANT_VNET_ID=`az network vnet list -g $TENANT_RG --query [].id --out tsv`

az network vnet peering create \
--resource-group $TENANT_RG \
--name $TENANT_NAME-$KAMAJI_CLUSTER \
--vnet-name $TENANT_VNET_NAME \
--remote-vnet $KAMAJI_VNET_ID \
--allow-vnet-access

az network vnet peering create \
--resource-group $KAMAJI_NODE_RG \
--name $KAMAJI_CLUSTER-$TENANT_NAME \
--vnet-name $KAMAJI_VNET_NAME \
--remote-vnet $TENANT_VNET_ID \
--allow-vnet-access
```

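You can verify that the peering reached the `Connected` state before going further, for example:

```bash
az network vnet peering show \
  --resource-group $TENANT_RG \
  --name $TENANT_NAME-$KAMAJI_CLUSTER \
  --vnet-name $TENANT_VNET_NAME \
  --query peeringState
```
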
[Azure Network Security Groups](https://docs.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview) can be used to control the traffic between the Tenant virtual network and the Kamaji AKS virtual network for stronger isolation. See the required [ports and protocols](https://kubernetes.io/docs/reference/ports-and-protocols/) between the Kubernetes control plane and the worker nodes.

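For example, a rule allowing the Tenant Control Plane to reach the kubelet port on the tenant worker nodes could be added along the lines of the following sketch (adjust source prefixes and priority to your environment):

```bash
az network nsg rule create \
  --resource-group $TENANT_RG \
  --nsg-name $TENANT_NSG \
  --name $TENANT_NSG-kubelet \
  --protocol tcp \
  --priority 1010 \
  --destination-port-range 10250 \
  --access allow
```
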
### Create the tenant virtual machines
Create an Azure Virtual Machine Scale Set to host the tenant worker virtual machines:

```bash
az vmss create \
--name $TENANT_VMSS \
--resource-group $TENANT_RG \
--image $TENANT_VM_IMAGE \
--public-ip-per-vm \
--vnet-name $TENANT_VNET_NAME \
--subnet $TENANT_SUBNET_NAME \
--ssh-key-value @~/.ssh/id_rsa.pub \
--computer-name-prefix $TENANT_NAME- \
--nsg $TENANT_NSG \
--custom-data ./tenant-cloudinit.yaml \
--instance-count 0

az vmss update \
--resource-group $TENANT_RG \
--name $TENANT_VMSS \
--set virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].enableIPForwarding=true

az vmss scale \
--resource-group $TENANT_RG \
--name $TENANT_VMSS \
--new-capacity 3
```

### Join the tenant virtual machines to the tenant control plane
The current approach for joining nodes is the `kubeadm` one; therefore, we will create a bootstrap token to perform the action:

```bash
JOIN_CMD=$(echo "sudo kubeadm join ${TENANT_ADDR}:${TENANT_PORT} ")$(kubeadm --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig token create --print-join-command |cut -d" " -f4-)
```

A bash loop will be used to join all the available nodes.

```bash
HOSTS=($(az vmss list-instance-public-ips \
--resource-group $TENANT_RG \
--name $TENANT_VMSS \
--query "[].ipAddress" \
--output tsv))

for i in ${!HOSTS[@]}; do
  HOST=${HOSTS[$i]}
  echo $HOST
  ssh ${USER}@${HOST} -t ${JOIN_CMD};
done
```

Checking the nodes:

```bash
kubectl get nodes --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig

NAME                      STATUS     ROLES    AGE   VERSION
kamaji-tenant-worker-00   NotReady   <none>   1m    v1.23.4
kamaji-tenant-worker-01   NotReady   <none>   1m    v1.23.4
kamaji-tenant-worker-02   NotReady   <none>   1m    v1.23.4
kamaji-tenant-worker-03   NotReady   <none>   1m    v1.23.4
```

The cluster needs a [CNI](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) plugin to get the nodes ready. In our case, we are going to install [calico](https://projectcalico.docs.tigera.io/about/about-calico).

```bash
kubectl apply -f calico-cni/calico-crd.yaml --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig
kubectl apply -f calico-cni/calico-azure.yaml --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig
```

And after a while, the `kube-system` pods will be running:

```bash
kubectl get po -n kube-system --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig

NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-8594699699-dlhbj   1/1     Running   0          3m
calico-node-kxf6n                          1/1     Running   0          3m
calico-node-qtdlw                          1/1     Running   0          3m
coredns-64897985d-2v5lc                    1/1     Running   0          5m
coredns-64897985d-nq276                    1/1     Running   0          5m
kube-proxy-cwdww                           1/1     Running   0          3m
kube-proxy-m48v4                           1/1     Running   0          3m
```

And the nodes will become ready:

```bash
kubectl get nodes --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig

NAME                      STATUS   ROLES    AGE   VERSION
kamaji-tenant-worker-01   Ready    <none>   10m   v1.23.4
kamaji-tenant-worker-02   Ready    <none>   10m   v1.23.4
```

## Cleanup
To get rid of the Tenant infrastructure, remove the tenant resource group: `az group delete --name $TENANT_RG --yes --no-wait`.

To get rid of the Kamaji infrastructure, remove the Kamaji resource group: `az group delete --name $KAMAJI_RG --yes --no-wait`.

361
docs/kamaji-tenant-deployment-guide.md
Normal file
@@ -0,0 +1,361 @@

# Kamaji Tenant Deployment Guide

This guide describes the steps needed to create a Kubernetes tenant cluster, which consists of a virtual Kubernetes control plane, deployed by Kamaji, and a pool of worker nodes joined to it to run workloads.

## Requirements

* [Kubernetes](https://kubernetes.io) Admin Cluster having [Kamaji](./getting-started-with-kamaji.md) installed.
* [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl)
* [jq](https://stedolan.github.io/jq/)

## Tenant Control Plane

Kamaji offers a [CRD](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) to provide a declarative approach to managing tenant control planes. This *CRD* is called `TenantControlPlane`, or `tcp` for short.

Use the command `kubectl explain tcp.spec` to understand the fields and their usage.

### Variable Definitions
Throughout the instructions, shell variables are used to indicate values that you should adjust to your own environment:

```bash
source kamaji-tenant.env
```

### Creation

Create an example tenant control plane:

```yaml
cat > ${TENANT_NAMESPACE}-${TENANT_NAME}-tcp.yaml <<EOF
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: ${TENANT_NAME}
  namespace: ${TENANT_NAMESPACE}
spec:
  controlPlane:
    deployment:
      replicas: 2
      additionalMetadata:
        annotations:
          environment.clastix.io: ${TENANT_NAME}
        labels:
          tenant.clastix.io: ${TENANT_NAME}
          kind.clastix.io: deployment
    service:
      additionalMetadata:
        annotations:
          environment.clastix.io: ${TENANT_NAME}
        labels:
          tenant.clastix.io: ${TENANT_NAME}
          kind.clastix.io: service
      serviceType: LoadBalancer
    ingress:
      enabled: false
  kubernetes:
    version: ${TENANT_VERSION}
    kubelet:
      cgroupfs: systemd
    admissionControllers:
    - ResourceQuota
    - LimitRanger
  networkProfile:
    address: ${TENANT_ADDR}
    port: ${TENANT_PORT}
    domain: ${TENANT_DOMAIN}
    serviceCidr: ${TENANT_SVC_CIDR}
    podCidr: ${TENANT_POD_CIDR}
    dnsServiceIPs:
    - ${TENANT_DNS_SERVICE}
EOF
```

```bash
kubectl create namespace ${TENANT_NAMESPACE}
kubectl apply -f ${TENANT_NAMESPACE}-${TENANT_NAME}-tcp.yaml
```

A tenant control plane is now running as a deployment and is exposed through a service.

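You can verify this from the admin cluster, for example:

```bash
kubectl -n ${TENANT_NAMESPACE} get tcp,deployments,services
```
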
Check that the tenant control plane is reachable and in a healthy state:

```bash
curl -k https://${TENANT_ADDR}:${TENANT_PORT}/healthz
curl -k https://${TENANT_ADDR}:${TENANT_PORT}/version
```

The tenant control plane components, i.e. `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager`, run as containers in the same pods. The `kube-scheduler` and `kube-controller-manager` connect to the `kube-apiserver` through localhost: `https://127.0.0.1:${TENANT_PORT}`.

Let's retrieve the `kubeconfig` files in order to check:

```bash
kubectl get secrets -n ${TENANT_NAMESPACE} ${TENANT_NAME}-scheduler-kubeconfig -o json \
| jq -r '.data["scheduler.conf"]' \
| base64 -d \
> ${TENANT_NAMESPACE}-${TENANT_NAME}-scheduler.kubeconfig
```

```bash
kubectl get secrets -n ${TENANT_NAMESPACE} ${TENANT_NAME}-controller-manager-kubeconfig -o json \
| jq -r '.data["controller-manager.conf"]' \
| base64 -d \
> ${TENANT_NAMESPACE}-${TENANT_NAME}-controller-manager.kubeconfig
```

## Working with Tenant Control Plane

The new Tenant cluster is now available, but it is not useful until worker nodes are joined to it.

### Getting Tenant Control Plane Kubeconfig

Let's retrieve the `kubeconfig` in order to work with the tenant control plane.

```bash
kubectl get secrets -n ${TENANT_NAMESPACE} ${TENANT_NAME}-admin-kubeconfig -o json \
| jq -r '.data["admin.conf"]' \
| base64 -d \
> ${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig
```

and let's check it out:

```bash
kubectl --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig get svc

NAMESPACE   NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     kubernetes   ClusterIP   10.32.0.1    <none>        443/TCP   6m
```

Check out how the Tenant Control Plane advertises itself to workloads:

```bash
kubectl --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig get ep

NAME         ENDPOINTS             AGE
kubernetes   192.168.32.150:6443   18m
```

Make sure it's `${TENANT_ADDR}:${TENANT_PORT}`.

### Preparing Worker Nodes to join

Currently, Kamaji does not provide any helper for the creation of tenant worker nodes. You should get a set of machines from your infrastructure provider, turn them into worker nodes, and then join them to the tenant control plane with `kubeadm`. In the future, we'll provide integration with Cluster API and other IaC tools.

Use the bash script `nodes-prerequisites.sh` to install the dependencies on all the worker nodes:

- Install `containerd` as container runtime
- Install `crictl`, the command line for working with `containerd`
- Install `kubectl`, `kubelet`, and `kubeadm` in the desired version

> Warning: we assume the worker nodes are machines running `Ubuntu 20.04`.

Run the installation script:

```bash
HOSTS=(${WORKER0} ${WORKER1} ${WORKER2} ${WORKER3})
./nodes-prerequisites.sh ${TENANT_VERSION:1} ${HOSTS[@]}
```

### Join Command

The current approach for joining nodes is the `kubeadm` one; therefore, we will create a bootstrap token to perform the action. In order to facilitate the step, we will store the entire join command in a variable.

```bash
JOIN_CMD=$(echo "sudo ")$(kubeadm --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig token create --print-join-command)
```

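You can inspect the resulting command before running it on the nodes; it should look similar to a standard `kubeadm join` invocation (the token and CA certificate hash will differ):

```bash
echo ${JOIN_CMD}
# sudo kubeadm join 192.168.32.150:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```
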
### Adding Worker Nodes

A bash loop will be used to join all the available nodes.

```bash
HOSTS=(${WORKER0} ${WORKER1} ${WORKER2} ${WORKER3})
for i in "${!HOSTS[@]}"; do
  HOST=${HOSTS[$i]}
  ssh ${USER}@${HOST} -t ${JOIN_CMD};
done
```

Checking the nodes:

```bash
kubectl --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig get nodes

NAME                      STATUS     ROLES    AGE   VERSION
kamaji-tenant-worker-00   NotReady   <none>   1m    v1.23.4
kamaji-tenant-worker-01   NotReady   <none>   1m    v1.23.4
kamaji-tenant-worker-02   NotReady   <none>   1m    v1.23.4
kamaji-tenant-worker-03   NotReady   <none>   1m    v1.23.4
```

The cluster needs a [CNI](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) plugin to get the nodes ready. In our case, we are going to install [calico](https://projectcalico.docs.tigera.io/about/about-calico).

```bash
kubectl apply -f calico-cni/calico-crd.yaml --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig
kubectl apply -f calico-cni/calico.yaml --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig
```

And after a while, the `kube-system` pods will be running:

```bash
kubectl --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig get pods -n kube-system

NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-8594699699-dlhbj   1/1     Running   0          3m
calico-node-kxf6n                          1/1     Running   0          3m
calico-node-qtdlw                          1/1     Running   0          3m
coredns-64897985d-2v5lc                    1/1     Running   0          5m
coredns-64897985d-nq276                    1/1     Running   0          5m
kube-proxy-cwdww                           1/1     Running   0          3m
kube-proxy-m48v4                           1/1     Running   0          3m
```

And the nodes will become ready:

```bash
kubectl get nodes --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig

NAME                      STATUS   ROLES    AGE   VERSION
kamaji-tenant-worker-01   Ready    <none>   10m   v1.23.4
kamaji-tenant-worker-02   Ready    <none>   10m   v1.23.4
```

## Smoke test

The tenant cluster is now ready to accept workloads.

Export its `kubeconfig` file:

```bash
export KUBECONFIG=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig
```

#### Deployment
Deploy an `nginx` application on the tenant cluster:

```bash
kubectl create deployment nginx --image=nginx
```

and check that the `nginx` pod gets scheduled:

```bash
kubectl get pods -o wide

NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE
nginx-6799fc88d8-4sgcb   1/1     Running   0          33s   172.12.121.1   worker02
```

#### Port Forwarding
Verify the ability to access applications remotely using port forwarding.

Retrieve the full name of the `nginx` pod:

```bash
POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
```

Forward port 8080 on your local machine to port 80 of the `nginx` pod:

```bash
kubectl port-forward $POD_NAME 8080:80

Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
```

In a new terminal make an HTTP request using the forwarding address:

```bash
curl --head http://127.0.0.1:8080

HTTP/1.1 200 OK
Server: nginx/1.21.0
Date: Sat, 19 Jun 2021 08:19:01 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 25 May 2021 12:28:56 GMT
Connection: keep-alive
ETag: "60aced88-264"
Accept-Ranges: bytes
```

Switch back to the previous terminal and stop the port forwarding to the `nginx` pod.

#### Logs
Verify the ability to retrieve container logs.

Print the `nginx` pod logs:

```bash
kubectl logs $POD_NAME
...
127.0.0.1 - - [19/Jun/2021:08:19:01 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.68.0" "-"
```

#### Kubelet tunnel
Verify the ability to execute commands in a container.

Print the `nginx` version by executing the `nginx -v` command in the `nginx` container:

```bash
kubectl exec -ti $POD_NAME -- nginx -v

nginx version: nginx/1.21.0
```

#### Services
Verify the ability to expose applications using a service.

Expose the `nginx` deployment using a `NodePort` service:

```bash
kubectl expose deployment nginx --port 80 --type NodePort
```

Retrieve the node port assigned to the `nginx` service:

```bash
NODE_PORT=$(kubectl get svc nginx \
--output=jsonpath='{range .spec.ports[0]}{.nodePort}')
```

Retrieve the IP address of a worker instance and make an HTTP request:

```bash
curl -I http://${WORKER0}:${NODE_PORT}

HTTP/1.1 200 OK
Server: nginx/1.21.0
Date: Sat, 19 Jun 2021 09:29:01 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 25 May 2021 12:28:56 GMT
Connection: keep-alive
ETag: "60aced88-264"
Accept-Ranges: bytes
```

## Cleanup Tenant cluster
Remove the worker nodes joined to the tenant control plane:

```bash
kubectl delete nodes --all --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig
```

For each worker node, log in and clean it up:

```bash
HOSTS=(${WORKER0} ${WORKER1} ${WORKER2} ${WORKER3})
for i in "${!HOSTS[@]}"; do
  HOST=${HOSTS[$i]}
  ssh ${USER}@${HOST} -t 'sudo kubeadm reset -f';
  ssh ${USER}@${HOST} -t 'sudo rm -rf /etc/cni/net.d';
  ssh ${USER}@${HOST} -t 'sudo systemctl reboot';
done
```

Delete the tenant control plane from Kamaji:

```bash
kubectl delete -f ${TENANT_NAMESPACE}-${TENANT_NAME}-tcp.yaml
```

83
docs/reference.md
Normal file
@@ -0,0 +1,83 @@

## Configuration

Currently, **kamaji** supports (in this order of precedence):

* CLI flags
* Environment variables
* Configuration files

By default, **kamaji** searches for the configuration file (`kamaji.yaml`) and uses the parameters found in it. Environment variables, when set, override the configuration file parameters; CLI flags, when passed, override both environment variables and the configuration file.

In short:

`cli-flags` > `env-vars` > `config-files`

Available flags are the following:

```
--config-file string                 Configuration file alternative. (default "./kamaji.yaml")
--etcd-ca-secret-name                Name of the secret which contains CA's certificate and private key. (default: "etcd-certs")
--etcd-ca-secret-namespace           Namespace of the secret which contains CA's certificate and private key. (default: "kamaji")
--etcd-client-secret-name            Name of the secret which contains ETCD client certificates. (default: "root-client-certs")
--etcd-client-secret-namespace       Name of the namespace where the secret which contains ETCD client certificates is. (default: "kamaji")
--etcd-compaction-interval           ETCD Compaction interval (i.e. "5m0s"). (default: "0" (disabled))
--etcd-endpoints                     Comma-separated list with ETCD endpoints (i.e. etcd-0.etcd.kamaji.svc.cluster.local,etcd-1.etcd.kamaji.svc.cluster.local,etcd-2.etcd.kamaji.svc.cluster.local)
--health-probe-bind-address string   The address the probe endpoint binds to. (default ":8081")
--kubeconfig string                  Paths to a kubeconfig. Only required if out-of-cluster.
--leader-elect                       Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.
--metrics-bind-address string        The address the metric endpoint binds to. (default ":8080")
--tmp-directory                      Directory which will be used to work with temporary files. (default "/tmp/kamaji")
--zap-devel                          Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error) (default true)
--zap-encoder encoder                Zap log encoding (one of 'json' or 'console')
--zap-log-level level                Zap Level to configure the verbosity of logging. Can be one of 'debug', 'info', 'error', or any integer value > 0 which corresponds to custom debug levels of increasing verbosity
--zap-stacktrace-level level         Zap Level at and above which stacktraces are captured (one of 'info', 'error', 'panic').
```

Available environment variables are:

| Environment variable | Description |
| ---------------------------------- | ------------------------------------------------------------ |
| `KAMAJI_ETCD_CA_SECRET_NAME` | Name of the secret which contains CA's certificate and private key. (default: "etcd-certs") |
| `KAMAJI_ETCD_CA_SECRET_NAMESPACE` | Namespace of the secret which contains CA's certificate and private key. (default: "kamaji") |
| `KAMAJI_ETCD_CLIENT_SECRET_NAME` | Name of the secret which contains ETCD client certificates. (default: "root-client-certs") |
| `KAMAJI_ETCD_CLIENT_SECRET_NAMESPACE` | Name of the namespace where the secret which contains ETCD client certificates is. (default: "kamaji") |
| `KAMAJI_ETCD_COMPACTION_INTERVAL` | ETCD Compaction interval (i.e. "5m0s"). (default: "0" (disabled)) |
| `KAMAJI_ETCD_ENDPOINTS` | Comma-separated list with ETCD endpoints (i.e. etcd-server-1:2379,etcd-server-2:2379). (default: "etcd-server:2379") |
| `KAMAJI_ETCD_SERVERS` | Comma-separated list with ETCD servers (i.e. etcd-0.etcd.kamaji.svc.cluster.local,etcd-1.etcd.kamaji.svc.cluster.local,etcd-2.etcd.kamaji.svc.cluster.local) |
| `KAMAJI_METRICS_BIND_ADDRESS` | The address the metric endpoint binds to. (default ":8080") |
| `KAMAJI_HEALTH_PROBE_BIND_ADDRESS` | The address the probe endpoint binds to. (default ":8081") |
| `KAMAJI_LEADER_ELECTION` | Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager. |
| `KAMAJI_TMP_DIRECTORY` | Directory which will be used to work with temporary files. (default "/tmp/kamaji") |

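For example, the `etcd` endpoints could be provided to the controller through an environment variable instead of a CLI flag. A hypothetical patch (assuming the controller container is named `manager` and the Deployment `kamaji-controller-manager`, as in the getting started guide) might look like the following sketch; note that a CLI flag set in the container args would still take precedence:

```bash
cat > env-patch.yaml <<EOF
spec:
  template:
    spec:
      containers:
      - name: manager
        env:
        - name: KAMAJI_ETCD_ENDPOINTS
          value: "etcd-0.etcd.kamaji.svc.cluster.local:2379,etcd-1.etcd.kamaji.svc.cluster.local:2379,etcd-2.etcd.kamaji.svc.cluster.local:2379"
EOF

kubectl -n kamaji-system patch deployment kamaji-controller-manager --patch-file env-patch.yaml
```
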
## Build and deploy
Clone the repo on your workstation.

```bash
## Install dependencies
$ go mod tidy

## Generate code
$ make generate

## Generate Manifests
$ make manifests

## Install Manifests
$ make install

## Build Docker Image
$ IMG=<image name and tag> make docker-build

## Push Docker Image
$ IMG=<image name and tag> make docker-push

## Deploy Kamaji
$ IMG=<image name and tag> make deploy

## YAML Installation File
$ make yaml-installation-file
```

The `yaml-installation-file` target generates a YAML installation file at `config/installation.yaml`. Customize it as needed.
