|
|
|
|
@@ -6,14 +6,14 @@ As described in the `README.md` of this repository. Kerberos Factory runs on kub
|
|
|
|
|
|
|
|
|
|
Just like `docker`, you bring your own Kubernetes cluster wherever you want it: `edge` or `cloud`, private or public. Depending on where and how you host it (e.g. a managed Kubernetes cluster vs self-hosted), you'll have more or fewer responsibilities and/or control. Where and how is totally up to you and your company's preferences.
|
|
|
|
|
|
|
|
|
|
This installation guide will vary slightly depending on whether you are self-hosting or leveraging a managed Kubernetes service from a cloud provider. Within a self-hosted installation you'll be required to install specific Kubernetes resources yourself, such as persistent volumes, storage and a load balancer.
|
|
|
|
|
This installation guide will vary slightly depending on whether you are self-hosting or leveraging a managed Kubernetes service from a cloud provider. Within a self-hosted installation you'll be required to install specific Kubernetes resources yourself, such as persistent volumes, storage and a load balancer.
|
|
|
|
|
|
|
|
|
|

|
|
|
|
|
|
|
|
|
|
### A. Self-hosted Kubernetes
|
|
|
|
|
|
|
|
|
|
1. [Prerequisites](#prerequisites-1)
|
|
|
|
|
2. [Docker](#docker)
|
|
|
|
|
2. [Container Engine](#container-engine)
|
|
|
|
|
3. [Kubernetes](#kubernetes)
|
|
|
|
|
4. [Untaint all nodes](#untaint-all-nodes)
|
|
|
|
|
5. [Calico](#calico)
|
|
|
|
|
@@ -48,43 +48,47 @@ The good things is that installation of a self-hosted Kubernetes cluster, contai
|
|
|
|
|
|
|
|
|
|
### Prerequisites
|
|
|
|
|
|
|
|
|
|
We'll assume you have a blank Ubuntu 20.04 LTS machine (or multiple machines/nodes) in your possession. We'll start by updating the Ubuntu operating system.
|
|
|
|
|
We'll assume you have a blank Ubuntu 20.04 / 22.04 LTS machine (or multiple machines/nodes) in your possession. We'll start by updating the Ubuntu operating system.
|
|
|
|
|
|
|
|
|
|
apt-get update -y && apt-get upgrade -y
|
|
|
|
|
|
|
|
|
|
### Docker
|
|
|
|
|
### Container Engine
|
|
|
|
|
|
|
|
|
|
Let's install our container runtime `docker` so we can run our containers.
|
|
|
|
|
export OS_VERSION_ID=xUbuntu_$(cat /etc/os-release | grep VERSION_ID | awk -F"=" '{print $2}' | tr -d '"')
|
|
|
|
|
export CRIO_VERSION=1.25
|
|
|
|
|
|
|
|
|
|
apt install docker.io -y
|
|
|
|
|
Add repositories
|
|
|
|
|
|
|
|
|
|
Once installed, modify the `cgroup driver` so Kubernetes uses it correctly. By default the kubelet's cgroup driver is set to `systemd`, while Docker uses `cgroupfs`, so we switch Docker to `systemd` as well.
|
|
|
|
|
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS_VERSION_ID/ /"|sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
|
|
|
|
|
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS_VERSION_ID/ /"|sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION.list
|
|
|
|
|
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION/$OS_VERSION_ID/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
|
|
|
|
|
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS_VERSION_ID/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
|
|
|
|
|
|
|
|
|
|
sudo mkdir /etc/docker
|
|
|
|
|
cat <<EOF | sudo tee /etc/docker/daemon.json
|
|
|
|
|
{
|
|
|
|
|
"exec-opts": ["native.cgroupdriver=systemd"],
|
|
|
|
|
"log-driver": "json-file",
|
|
|
|
|
"log-opts": {
|
|
|
|
|
"max-size": "100m"
|
|
|
|
|
},
|
|
|
|
|
"storage-driver": "overlay2"
|
|
|
|
|
}
|
|
|
|
|
EOF
|
|
|
|
|
Update the package index and install `cri-o`:
|
|
|
|
|
|
|
|
|
|
sudo systemctl enable docker
|
|
|
|
|
sudo systemctl daemon-reload
|
|
|
|
|
sudo systemctl restart docker
|
|
|
|
|
apt-get update
|
|
|
|
|
apt-get install cri-o cri-o-runc cri-tools -y
|
|
|
|
|
|
|
|
|
|
Enable and start crio:
|
|
|
|
|
|
|
|
|
|
systemctl daemon-reload
|
|
|
|
|
systemctl enable crio --now
|
|
|
|
|
|
|
|
|
|
### Kubernetes
|
|
|
|
|
|
|
|
|
|
After Docker is installed, go ahead and install the different Kubernetes services and tools.
|
|
|
|
|
After the container engine is installed, go ahead and install the different Kubernetes services and tools.
|
|
|
|
|
|
|
|
|
|
apt update -y
|
|
|
|
|
apt install apt-transport-https curl -y
|
|
|
|
|
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add
|
|
|
|
|
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
|
|
|
|
|
apt update -y && apt install kubeadm kubelet kubectl kubernetes-cni -y
|
|
|
|
|
apt-get install -y apt-transport-https ca-certificates curl
|
|
|
|
|
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
|
|
|
|
|
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
|
|
|
|
|
|
|
|
|
|
apt-get update
|
|
|
|
|
apt-get install -y kubelet kubeadm kubectl
|
|
|
|
|
|
|
|
|
|
Hold these packages to prevent unintentional updates:
|
|
|
|
|
|
|
|
|
|
apt-mark hold kubelet kubeadm kubectl
|
|
|
|
|
|
|
|
|
|
Make sure you disable swap; this is required by Kubernetes.
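The two commands below are a minimal sketch of how to do this on Ubuntu (run as root): turn swap off for the running system, then comment out swap entries in `/etc/fstab` so it stays off after a reboot.

```shell
# Turn swap off immediately; the kubelet will not start while swap is enabled.
swapoff -a

# Comment out any uncommented swap entries in /etc/fstab,
# so swap remains disabled after a reboot.
sed -i '/\bswap\b/ s/^[^#]/#&/' /etc/fstab
```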
|
|
|
|
|
|
|
|
|
|
@@ -132,13 +136,13 @@ Now we have a Kubernetes cluster, we need to make sure we add make it available
|
|
|
|
|
|
|
|
|
|
By default, and in this example, we only have one node: our master node. In a production scenario we would have additional worker nodes. By default master nodes are marked as `tainted`, which means they cannot run workloads. To allow master nodes to run workloads, we need to untaint them. If we didn't do this, our pods would never be scheduled, as we do not have worker nodes at this moment.
|
|
|
|
|
|
|
|
|
|
kubectl taint nodes --all node-role.kubernetes.io/master-
|
|
|
|
|
kubectl taint nodes <your_node_name> node-role.kubernetes.io/control-plane-
|
|
|
|
|
|
|
|
|
|
### Calico
|
|
|
|
|
|
|
|
|
|
Calico is an open source networking and network security solution for containers, virtual machines, and native host-based workloads (https://www.projectcalico.org/). We will use it as the network layer in our Kubernetes cluster. You could use others like Flannel as well, but we prefer Calico.
|
|
|
|
|
|
|
|
|
|
curl https://docs.projectcalico.org/manifests/calico.yaml -O
|
|
|
|
|
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml -O
|
|
|
|
|
kubectl apply -f calico.yaml
|
|
|
|
|
|
|
|
|
|
### Introduction
|
|
|
|
|
@@ -164,14 +168,14 @@ We'll start by cloning the configurations from our [Github repo](https://github.
|
|
|
|
|
|
|
|
|
|
Make sure to change directory to the `kubernetes` folder.
|
|
|
|
|
|
|
|
|
|
cd kubernetes
|
|
|
|
|
cd factory/kubernetes
|
|
|
|
|
|
|
|
|
|
### MetalLB
|
|
|
|
|
|
|
|
|
|
In a self-hosted scenario, we do not have fancy load balancers and public IPs from which we can "automatically" benefit. To overcome this, solutions such as MetalLB - a bare-metal load balancer - have been developed (https://metallb.universe.tf/installation/). MetalLB will dedicate an internal IP address, or IP range, which will be assigned to one or more load balancers. Using this dedicated IP address, you can reach your services or ingress.
|
|
|
|
|
|
|
|
|
|
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
|
|
|
|
|
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
|
|
|
|
|
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.1/manifests/namespace.yaml
|
|
|
|
|
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.1/manifests/metallb.yaml
|
|
|
|
|
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
|
|
|
|
|
|
|
|
|
|
After installing the different MetalLB components, we need to modify a `configmap.yaml` file, which you can find at `./metallb/configmap.yaml`. This file describes how MetalLB can obtain and use internal IPs as LoadBalancers.
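As a sketch of what such a `configmap.yaml` can look like for MetalLB v0.10 in layer 2 mode (the address range below is an example; pick a free range in your own network):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # Example range; these must be unused addresses in your LAN.
      - 192.168.1.240-192.168.1.250
```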
|
|
|
|
|
@@ -285,36 +289,6 @@ Use one of the preferred OS package managers to install the Helm client:
|
|
|
|
|
|
|
|
|
|
gofish install helm
|
|
|
|
|
|
|
|
|
|
### Traefik
|
|
|
|
|
|
|
|
|
|
[**Traefik**](https://containo.us/traefik/) is a reverse proxy and load balancer which allows you to expose your deployments more easily. Kerberos uses Traefik to expose its APIs more easily.
|
|
|
|
|
|
|
|
|
|
Add the Helm repository and install traefik.
|
|
|
|
|
|
|
|
|
|
kubectl create namespace traefik
|
|
|
|
|
helm repo add traefik https://helm.traefik.io/traefik
|
|
|
|
|
helm install traefik traefik/traefik -n traefik
|
|
|
|
|
|
|
|
|
|
After installation, you should have an IP attached to the Traefik service. Look for it by executing the `get service` command; you will see the IP address in the `EXTERNAL-IP` attribute.
|
|
|
|
|
|
|
|
|
|
kubectl get svc
|
|
|
|
|
|
|
|
|
|
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
|
|
|
|
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 36h
|
|
|
|
|
--> traefik LoadBalancer 10.0.27.93 40.114.168.96 443:31623/TCP,80:31804/TCP 35h
|
|
|
|
|
traefik-dashboard NodePort 10.0.252.6 <none> 80:31146/TCP 35h
|
|
|
|
|
|
|
|
|
|
Go to your DNS provider and link the domain you've configured in the first step, `traefik.domain.com`, to the IP address in the `EXTERNAL-IP` attribute. When browsing to `traefik.domain.com`, you should see the Traefik dashboard showing up.
|
|
|
|
|
|
|
|
|
|
### Ingress-Nginx (alternative for Traefik)
|
|
|
|
|
|
|
|
|
|
If you don't like `Traefik` but you prefer `Ingress Nginx`, that works as well.
|
|
|
|
|
|
|
|
|
|
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
|
|
|
|
|
helm repo update
|
|
|
|
|
kubectl create namespace ingress-nginx
|
|
|
|
|
helm install ingress-nginx -n ingress-nginx ingress-nginx/ingress-nginx
|
|
|
|
|
|
|
|
|
|
### MongoDB
|
|
|
|
|
|
|
|
|
|
When using Kerberos Factory, it will persist the configurations of your Kerberos Agents in a MongoDB database. As before, we are using `helm` to install MongoDB in our Kubernetes cluster.
|
|
|
|
|
@@ -350,7 +324,28 @@ Create the config map.
|
|
|
|
|
|
|
|
|
|
### Deployment
|
|
|
|
|
|
|
|
|
|
Before installing Kerberos Factory, open the `./kerberos-factory/deployment.yaml` configuration file. At the bottom of the file you will find two endpoints, similar to the `Ingress` file below. Update the hostname to your own preferred domain, and add these to your DNS server or `/etc/hosts` file (pointing to the same IP as the Traefik/Ingress-nginx EXTERNAL-IP).
|
|
|
|
|
To install the Kerberos Factory web app inside your cluster, simply execute the `kubectl` command below. This will create the deployment with the necessary configurations, and expose it on an internal/external IP address, thanks to our `LoadBalancer` (MetalLB or cloud provider).
|
|
|
|
|
|
|
|
|
|
kubectl apply -f ./kerberos-factory/deployment.yaml -n kerberos-factory
|
|
|
|
|
|
|
|
|
|
Kerberos Factory will create Kerberos Agents on our behalf, and therefore create Kubernetes deployment resources. For this we'll need to add a `ClusterRole` and `ClusterRoleBinding`, so the Kerberos Factory web app can create deployments through the Kubernetes Golang SDK.
|
|
|
|
|
|
|
|
|
|
kubectl apply -f ./kerberos-factory/clusterrole.yaml -n kerberos-factory
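For reference, a minimal sketch of what such RBAC objects look like (the names, verbs and service account here are illustrative assumptions; the repository's `clusterrole.yaml` is the authoritative version):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kerberos-factory   # illustrative name
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kerberos-factory
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kerberos-factory
subjects:
- kind: ServiceAccount
  name: default            # assumes the app runs as the default service account
  namespace: kerberos-factory
```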
|
|
|
|
|
|
|
|
|
|
Verify that the Kerberos Factory got assigned an internal IP address.
|
|
|
|
|
|
|
|
|
|
kubectl get svc -n kerberos-factory
|
|
|
|
|
|
|
|
|
|
You should see the service `factory-lb` being created, together with an IP address assigned from the MetalLB pool or cloud provider.
|
|
|
|
|
|
|
|
|
|
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
|
|
|
|
--> factory-lb LoadBalancer 10.107.209.226 192.168.1.81 80:30636/TCP 24m
|
|
|
|
|
|
|
|
|
|
### (Optional) Ingress
|
|
|
|
|
|
|
|
|
|
By default the Kerberos Factory deployment will create a `LoadBalancer` service. This means that Kerberos Factory will be granted an internal/external IP address from which you can navigate to and consume the Kerberos Factory UI. However, if you wish to use the `Ingress` functionality by assigning a readable DNS name, you'll need to modify a few things.
|
|
|
|
|
|
|
|
|
|
First make sure to install either Traefik or Ingress-nginx, following the sections below. Once you have chosen an `Ingress`, open the `./kerberos-factory/ingress.yaml` configuration file. At the bottom of the file you will find an endpoint, similar to the `Ingress` file below. Update the hostname to your own preferred domain, and add it to your DNS server or `/etc/hosts` file (pointing to the same IP address as the Traefik/Ingress-nginx IP address).
|
|
|
|
|
|
|
|
|
|
spec:
|
|
|
|
|
rules:
|
|
|
|
|
@@ -359,7 +354,7 @@ Before installing Kerberos Factory, open the `./kerberos-factory/deployment.yaml
|
|
|
|
|
paths:
|
|
|
|
|
- path: /
|
|
|
|
|
backend:
|
|
|
|
|
serviceName: kerberos-factory
|
|
|
|
|
serviceName: factory
|
|
|
|
|
servicePort: 80
|
|
|
|
|
|
|
|
|
|
If you are using Ingress Nginx, do not forget to comment out `Traefik` and uncomment `Ingress Nginx`.
|
|
|
|
|
@@ -375,13 +370,39 @@ If you are using Ingress Nginx, do not forgot to comment `Traefik` and uncomment
|
|
|
|
|
nginx.ingress.kubernetes.io/ssl-redirect: "true"
|
|
|
|
|
cert-manager.io/cluster-issuer: "letsencrypt-prod"
|
|
|
|
|
|
|
|
|
|
Once you have corrected the DNS names (or internal /etc/hosts file), install the Kerberos Factory web app inside your cluster.
|
|
|
|
|
Once done, apply the `Ingress` file.
|
|
|
|
|
|
|
|
|
|
kubectl apply -f ./kerberos-factory/deployment.yaml -n kerberos-factory
|
|
|
|
|
kubectl apply -f ./kerberos-factory/ingress.yaml -n kerberos-factory
|
|
|
|
|
|
|
|
|
|
Kerberos Factory will create Kerberos Agents on our behalf, and therefore create Kubernetes deployment resources. For this we'll need to add a `ClusterRole` and `ClusterRoleBinding`, so the Kerberos Factory web app can create deployments through the Kubernetes Golang SDK.
|
|
|
|
|
#### (Option 1) Traefik
|
|
|
|
|
|
|
|
|
|
kubectl create -f ./kerberos-factory/clusterrole.yaml -n kerberos-factory
|
|
|
|
|
[**Traefik**](https://containo.us/traefik/) is a reverse proxy and load balancer which allows you to expose your deployments more easily. Kerberos uses Traefik to expose its APIs more easily.
|
|
|
|
|
|
|
|
|
|
Add the Helm repository and install traefik.
|
|
|
|
|
|
|
|
|
|
kubectl create namespace traefik
|
|
|
|
|
helm repo add traefik https://helm.traefik.io/traefik
|
|
|
|
|
helm install traefik traefik/traefik -n traefik
|
|
|
|
|
|
|
|
|
|
After installation, you should have an IP attached to the Traefik service. Look for it by executing the `get service` command; you will see the IP address in the `EXTERNAL-IP` attribute.
|
|
|
|
|
|
|
|
|
|
kubectl get svc
|
|
|
|
|
|
|
|
|
|
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
|
|
|
|
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 36h
|
|
|
|
|
--> traefik LoadBalancer 10.0.27.93 40.114.168.96 443:31623/TCP,80:31804/TCP 35h
|
|
|
|
|
traefik-dashboard NodePort 10.0.252.6 <none> 80:31146/TCP 35h
|
|
|
|
|
|
|
|
|
|
Go to your DNS provider and link the domain you've configured in the first step, `traefik.domain.com`, to the IP address in the `EXTERNAL-IP` attribute. When browsing to `traefik.domain.com`, you should see the Traefik dashboard showing up.
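If you cannot (or don't want to) change public DNS records, a local `/etc/hosts` entry is enough for testing; the IP below is the example `EXTERNAL-IP` from the output above, and the domain is your own:

```
40.114.168.96 traefik.domain.com
```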
|
|
|
|
|
|
|
|
|
|
#### (Option 2) Ingress-Nginx (alternative for Traefik)
|
|
|
|
|
|
|
|
|
|
If you don't like `Traefik` but you prefer `Ingress Nginx`, that works as well.
|
|
|
|
|
|
|
|
|
|
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
|
|
|
|
|
helm repo update
|
|
|
|
|
kubectl create namespace ingress-nginx
|
|
|
|
|
helm install ingress-nginx -n ingress-nginx ingress-nginx/ingress-nginx
|
|
|
|
|
|
|
|
|
|
### Test out configuration
|
|
|
|
|
|
|
|
|
|
@@ -407,6 +428,6 @@ It should look like this.
|
|
|
|
|
|
|
|
|
|
### Access the system
|
|
|
|
|
|
|
|
|
|
Once your cluster and DNS or `/etc/hosts` file are configured correctly, you should be able to access the Kerberos Factory application. By navigating to the domain `factory.domain.com` in your browser, you will see the Kerberos Factory login page showing up.
|
|
|
|
|
Once everything is configured correctly, you should be able to access the Kerberos Factory application. By navigating with your browser to the internal/external IP address (`LoadBalancer`) or domain (`Ingress`), you will see the Kerberos Factory login page showing up.
|
|
|
|
|
|
|
|
|
|

|
|
|
|
|
|