36 Commits

Author SHA1 Message Date
Cédric Verstraeten
39304220bd Merge pull request #17 from emresaracoglu/patch-1
Update README.md
2025-12-31 16:01:00 +01:00
Cédric Verstraeten
94ed02c479 Merge pull request #21 from kevindai007/patch-1
Update mongodb.config.yaml
2025-11-21 08:57:11 +01:00
Wenkai Dai
3e1e9a1921 Update mongodb.config.yaml
fix mongodb config error
2025-11-21 11:33:23 +04:00
Cédric Verstraeten
0f40791490 Update mongodb.config.yaml 2025-09-01 09:28:58 +02:00
Cédric Verstraeten
d5e1ec2038 Update mongodb.config.yaml with comments and new database name
Added comments and updated database name in MongoDB config.
2025-08-25 13:10:17 +02:00
Cédric Verstraeten
e54422f5fe Update factory image version to v1.0.9 2025-08-25 13:06:01 +02:00
Cédric Verstraeten
c09dce8441 Update factory image version to v1.0.8 in deployment.yaml 2025-05-20 12:26:41 +02:00
Cédric Verstraeten
96c37e6f7f Update deployment.yaml 2024-10-19 08:25:33 +02:00
Emre Saraçoğlu
443fcd7c75 Update README.md
Typo fixed
2024-09-07 20:26:50 +03:00
Cédric Verstraeten
da26325068 Update deployment.yaml 2024-01-03 10:34:09 +01:00
Cédric Verstraeten
884687c31d Update Kerberos Agent version 2024-01-02 15:58:00 +01:00
Cedric Verstraeten
6678b1cc8a add white-labeling option Kerberos Factory 2023-10-25 17:12:29 +02:00
Cédric Verstraeten
73cc0a3701 Upgrade Kerberos Agent 2023-09-30 19:57:15 +02:00
Cédric Verstraeten
e2cfcadc22 Update README.md 2023-08-03 13:24:19 +02:00
Cédric Verstraeten
37c000cfdd Update README.md 2023-08-03 13:20:24 +02:00
Cédric Verstraeten
efa8e48502 Update README.md 2023-08-03 13:19:23 +02:00
Cédric Verstraeten
9bba4c3a78 Update deployment.yaml 2023-08-03 13:11:13 +02:00
Cédric Verstraeten
cfb4382b57 Update README.md 2023-07-12 16:04:14 +02:00
Cédric Verstraeten
47daec83be move to k8s 1.25 2023-07-12 15:49:29 +02:00
Cédric Verstraeten
e1139843bc align passwords: yourmongodbpassword 2023-07-12 10:24:01 +02:00
Cédric Verstraeten
e84bb2e388 Update deployment.yaml 2023-07-10 20:53:55 +02:00
Cédric Verstraeten
0709cf0b8a Update node taint command 2023-07-10 19:15:09 +02:00
Cédric Verstraeten
f4e62acc44 Update calico manifest 2023-07-10 17:52:02 +02:00
Cedric Verstraeten
1633b742f4 update agent base version and factory new release 2023-06-24 13:02:36 +02:00
Cédric Verstraeten
ab31266619 Update deployment.yaml 2023-05-17 17:46:11 +02:00
Cédric Verstraeten
b220ae6641 Update mongodb.config.yaml 2023-05-17 16:06:19 +02:00
Cédric Verstraeten
bab630f890 Update mongodb.config.yaml 2023-05-17 16:01:17 +02:00
Cedric Verstraeten
a5af0ceefe fix ingress name + add resource limits 2023-05-03 18:49:33 +02:00
Cédric Verstraeten
1d0ef1e914 Update deployment.yaml 2023-05-03 11:58:38 +02:00
Cedric Verstraeten
adadcbb817 align loadbalancer with external one (missed that) 2023-04-28 12:16:07 +02:00
Cedric Verstraeten
ad4ec6cd65 typo 2023-04-28 12:11:40 +02:00
Cedric Verstraeten
fe4e16c409 add service check 2023-04-28 12:10:47 +02:00
Cedric Verstraeten
bb89eb43c3 make clear this is an option 2023-04-28 12:07:50 +02:00
Cedric Verstraeten
b738f21a3c Merge branch 'master' of https://github.com/kerberos-io/factory 2023-04-28 12:05:28 +02:00
Cedric Verstraeten
b04bf1ab53 make LoadBalancer default, because we can now (single binary) 2023-04-28 12:05:04 +02:00
Cédric Verstraeten
1b7d496b1c upgrade mettalb to v0.10.1, thank you Adam! 2023-04-28 11:37:50 +02:00
7 changed files with 173 additions and 101 deletions

View File

@@ -6,14 +6,14 @@ As described in the `README.md` of this repository. Kerberos Factory runs on kub
Just like `docker`, you bring your Kubernetes cluster where you want `edge` or `cloud`; private or public. Depending where you will host and how (e.g. managed Kubernetes cluster vs self-hosted) you'll have less/more responsibilities and/or control. Where and how is totally up to you, and your company preferences.
This installation guide will slightly differ depending on whether you are self-hosting or leveraging a managed Kubernetes service from a cloud provider. Within a self-hosted installation you'll be required to install specific Kubernetes resources yourself, such as persistent volumes, storage and a load balancer.
![Kerberos Factory deployments](assets/kerberosfactory-deployments.svg)
### A. Self-hosted Kubernetes
1. [Prerequisites](#prerequisites-1)
2. [Docker](#docker)
2. [Container Engine](#container-engine)
3. [Kubernetes](#kubernetes)
4. [Untaint all nodes](#untaint-all-nodes)
5. [Calico](#calico)
@@ -48,43 +48,47 @@ The good things is that installation of a self-hosted Kubernetes cluster, contai
### Prerequisites
We'll assume you have a blank Ubuntu 20.04 LTS machine (or multiple machines/nodes) in your possession. We'll start with updating the Ubuntu operating system.
We'll assume you have a blank Ubuntu 20.04 / 22.04 LTS machine (or multiple machines/nodes) in your possession. We'll start with updating the Ubuntu operating system.
apt-get update -y && apt-get upgrade -y
### Docker
### Container Engine
Let's install our container runtime `docker` so we can run our containers.
export OS_VERSION_ID=xUbuntu_$(cat /etc/os-release | grep VERSION_ID | awk -F"=" '{print $2}' | tr -d '"')
export CRIO_VERSION=1.25
apt install docker.io -y
Add repositories
Once installed, modify the `cgroup driver` so Kubernetes will use it correctly. By default Docker uses the `cgroupfs` cgroup driver, while Kubernetes expects `systemd`.
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS_VERSION_ID/ /"|sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS_VERSION_ID/ /"|sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION.list
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION/$OS_VERSION_ID/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS_VERSION_ID/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
Update package index and install crio:
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
apt-get update
apt-get install cri-o cri-o-runc cri-tools -y
Enable and start crio:
systemctl daemon-reload
systemctl enable crio --now
### Kubernetes
After Docker is installed, go ahead and install the different Kubernetes services and tools.
After the container engine is installed, go ahead and install the different Kubernetes services and tools.
apt update -y
apt install apt-transport-https curl -y
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
apt update -y && apt install kubeadm kubelet kubectl kubernetes-cni -y
apt-get install -y apt-transport-https ca-certificates curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl
Hold these packages to prevent unintentional updates:
apt-mark hold kubelet kubeadm kubectl
Make sure you disable swap; this is required by Kubernetes.
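The swap step can be sketched as follows. The `disable_swap` helper is our own illustration (not something shipped by Kubernetes or this repository); on a real node you would run this as root against `/etc/fstab`:

```shell
# Illustrative sketch: turn swap off now, and comment out swap entries
# in fstab so it stays off after a reboot. disable_swap is a
# hypothetical helper name, introduced only for this example.
disable_swap() {
  # prefix any uncommented line containing a swap mount with '#'
  sed -i 's|^\([^#].*[[:space:]]swap[[:space:]]\)|#\1|' "$1"
}

swapoff -a || true      # disable all active swap devices (requires root)
disable_swap /etc/fstab # persist the change across reboots
```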
@@ -132,13 +136,13 @@ Now we have a Kubernetes cluster, we need to make sure we add make it available
By default, and in this example, we only have one node: our master node. In a production scenario we would have additional worker nodes. By default master nodes are marked as `tainted`, which means they cannot run workloads. To allow master nodes to run workloads, we need to untaint them. If we didn't do this, our pods would never be scheduled, as we have no worker nodes at this moment.
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes <your_node_name> node-role.kubernetes.io/control-plane-
### Calico
Calico is an open source networking and network security solution for containers, virtual machines, and native host-based workloads (https://www.projectcalico.org/). We will use it as the network layer in our Kubernetes cluster. You could use others like Flannel as well, but we prefer Calico.
curl https://docs.projectcalico.org/manifests/calico.yaml -O
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml -O
kubectl apply -f calico.yaml
### Introduction
@@ -164,14 +168,14 @@ We'll start by cloning the configurations from our [Github repo](https://github.
Make sure to change directory to the `kubernetes` folder.
cd kubernetes
cd factory/kubernetes
### MetalLB
In a self-hosted scenario, we do not have fancy load balancers and public IPs from which we can "automatically" benefit. To overcome this, solutions such as MetalLB - a bare-metal load balancer - have been developed (https://metallb.universe.tf/installation/). MetalLB will dedicate an internal IP address, or IP range, which will be assigned to one or more load balancers. Using this dedicated IP address, you can reach your services or ingress.
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.1/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
After installing the different MetalLB components, we need to modify a `configmap.yaml` file, which you can find at `./metallb/configmap.yaml`. This file describes how MetalLB can obtain and use internal IPs as load balancers.
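For MetalLB v0.10.x the address pool is still configured through a `ConfigMap`; a minimal sketch of what `./metallb/configmap.yaml` can look like is shown below. The address range is only an example — pick free IPs from your own LAN:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # example range on a home/office LAN — replace with
      # free IPs from your own network
      - 192.168.1.80-192.168.1.90
```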
@@ -285,36 +289,6 @@ Use one of the preferred OS package managers to install the Helm client:
gofish install helm
### Traefik
[**Traefik**](https://containo.us/traefik/) is a reverse proxy and load balancer which allows you to expose your deployments more easily. Kerberos uses Traefik to expose its APIs.
Add the Helm repository and install traefik.
kubectl create namespace traefik
helm repo add traefik https://helm.traefik.io/traefik
helm install traefik traefik/traefik -n traefik
After installation, you should have an IP attached to the Traefik service; look for it by executing the `get service` command. You will see the IP address in the `EXTERNAL-IP` attribute.
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 36h
--> traefik LoadBalancer 10.0.27.93 40.114.168.96 443:31623/TCP,80:31804/TCP 35h
traefik-dashboard NodePort 10.0.252.6 <none> 80:31146/TCP 35h
Go to your DNS provider and link the domain you've configured in the first step, `traefik.domain.com`, to the IP address in the `EXTERNAL-IP` attribute. When browsing to `traefik.domain.com`, you should see the Traefik dashboard.
### Ingress-Nginx (alternative for Traefik)
If you don't like `Traefik` but you prefer `Ingress Nginx`, that works as well.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
kubectl create namespace ingress-nginx
helm install ingress-nginx -n ingress-nginx ingress-nginx/ingress-nginx
### MongoDB
Kerberos Factory persists the configurations of your Kerberos Agents in a MongoDB database. As before, we use `helm` to install MongoDB in our Kubernetes cluster.
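The `helm` invocation will look roughly like the sketch below; the chart name (Bitnami), namespace and values-file path are assumptions on our side, so verify them against this repository's README before running:

```shell
# Illustrative only: chart name, namespace and values-file path are
# assumptions — check the repository README for the exact command.
MONGODB_INSTALL="helm install mongodb bitnami/mongodb -n mongodb --create-namespace --values mongodb.config.yaml"
echo "helm repo add bitnami https://charts.bitnami.com/bitnami"
echo "$MONGODB_INSTALL"
```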
@@ -350,7 +324,28 @@ Create the config map.
### Deployment
Before installing Kerberos Factory, open the `./kerberos-factory/deployment.yaml` configuration file. At the bottom of the file you will find two endpoints, similar to the Ingress file below. Update the hostname to your own preferred domain, and add these to your DNS server or `/etc/hosts` file (pointing to the same IP as the Traefik/Ingress-nginx EXTERNAL-IP).
To install the Kerberos Factory web app inside your cluster, simply execute the `kubectl` command below. This will create the deployment with the necessary configurations, and expose it on an internal/external IP address, thanks to our MetalLB `LoadBalancer` or cloud provider.
kubectl apply -f ./kerberos-factory/deployment.yaml -n kerberos-factory
Kerberos Factory will create Kerberos Agents on our behalf, and thus create Kubernetes deployment resources. Therefore we'll need to apply some `ClusterRole` and `ClusterRoleBinding` resources, so the Kerberos Factory web app can create deployments through the Kubernetes Golang SDK.
kubectl apply -f ./kerberos-factory/clusterrole.yaml -n kerberos-factory
Verify that the Kerberos Factory got assigned an internal IP address.
kubectl get svc -n kerberos-factory
You should see the service `factory-lb` being created, together with an IP address assigned from the MetalLB pool or cloud provider.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
--> factory-lb LoadBalancer 10.107.209.226 192.168.1.81 80:30636/TCP 24m
### (Optional) Ingress
By default the Kerberos Factory deployment will create and use a `LoadBalancer`. This means Kerberos Factory will be granted an internal/external IP from which you can navigate to and consume the Kerberos Factory UI. However, if you wish to use the `Ingress` functionality by assigning a readable DNS name, you'll need to modify a few things.
First make sure to install either Traefik or Ingress-nginx, following the sections below. Once you have chosen an `Ingress`, open the `./kerberos-factory/ingress.yaml` configuration file. At the bottom of the file you will find an endpoint, similar to the `Ingress` file below. Update the hostname to your own preferred domain, and add it to your DNS server or `/etc/hosts` file (pointing to the same IP address as the Traefik/Ingress-nginx IP address).
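When you have no DNS server at hand, an `/etc/hosts` entry is enough for testing. The IP and domain below are placeholders for your own ingress `EXTERNAL-IP` and hostname:

```shell
# Placeholder values — substitute the EXTERNAL-IP of your
# Traefik/Ingress-nginx service and the hostname you configured
# in ./kerberos-factory/ingress.yaml.
INGRESS_IP="192.168.1.81"
FACTORY_DOMAIN="factory.domain.com"
HOSTS_LINE="$INGRESS_IP $FACTORY_DOMAIN"
echo "$HOSTS_LINE"
# as root, append it to /etc/hosts (or create a DNS A record instead):
#   echo "$HOSTS_LINE" >> /etc/hosts
```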
spec:
rules:
@@ -359,7 +354,7 @@ Before installing Kerberos Factory, open the `./kerberos-factory/deployment.yaml
paths:
- path: /
backend:
serviceName: kerberos-factory
serviceName: factory
servicePort: 80
If you are using Ingress Nginx, do not forget to comment out `Traefik` and uncomment `Ingress Nginx`.
@@ -375,13 +370,39 @@ If you are using Ingress Nginx, do not forgot to comment `Traefik` and uncomment
nginx.ingress.kubernetes.io/ssl-redirect: "true"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
Once you have corrected the DNS names (or internal /etc/hosts file), install the Kerberos Factory web app inside your cluster.
Once done, apply the `Ingress` file.
kubectl apply -f ./kerberos-factory/deployment.yaml -n kerberos-factory
kubectl apply -f ./kerberos-factory/ingress.yaml -n kerberos-factory
Kerberos Factory will create Kerberos Agents on our behalf, and thus create Kubernetes deployment resources. Therefore we'll need to apply some `ClusterRole` and `ClusterRoleBinding` resources, so the Kerberos Factory web app can create deployments through the Kubernetes Golang SDK.
#### (Option 1) Traefik
kubectl create -f ./kerberos-factory/clusterrole.yaml -n kerberos-factory
[**Traefik**](https://containo.us/traefik/) is a reverse proxy and load balancer which allows you to expose your deployments more easily. Kerberos uses Traefik to expose its APIs.
Add the Helm repository and install traefik.
kubectl create namespace traefik
helm repo add traefik https://helm.traefik.io/traefik
helm install traefik traefik/traefik -n traefik
After installation, you should have an IP attached to the Traefik service; look for it by executing the `get service` command. You will see the IP address in the `EXTERNAL-IP` attribute.
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 36h
--> traefik LoadBalancer 10.0.27.93 40.114.168.96 443:31623/TCP,80:31804/TCP 35h
traefik-dashboard NodePort 10.0.252.6 <none> 80:31146/TCP 35h
Go to your DNS provider and link the domain you've configured in the first step, `traefik.domain.com`, to the IP address in the `EXTERNAL-IP` attribute. When browsing to `traefik.domain.com`, you should see the Traefik dashboard.
#### (Option 2) Ingress-Nginx (alternative for Traefik)
If you don't like `Traefik` but you prefer `Ingress Nginx`, that works as well.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
kubectl create namespace ingress-nginx
helm install ingress-nginx -n ingress-nginx ingress-nginx/ingress-nginx
### Test out the configuration
@@ -407,6 +428,6 @@ It should look like this.
### Access the system
Once your cluster and your DNS or `/etc/hosts` file are configured correctly, you should be able to access the Kerberos Factory application. By navigating to the domain `factory.domain.com` in your browser you will see the Kerberos Factory login page.
Once everything is configured correctly, you should be able to access the Kerberos Factory application. By navigating with your browser to the internal/external IP address (`LoadBalancer`) or domain (`Ingress`), you will see the Kerberos Factory login page.
![Once successfully installed Kerberos Factory, it will show you the login page.](../assets/factory-login.gif)

View File

@@ -0,0 +1,6 @@
(function(window) {
window["env"] = window["env"] || {};
// Environment variables
window["env"]["environment"] = "whitelabel";
window["env"]["pageTitle"] = "Your application";
})(this);
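The commented `custom-layout` volume in `deployment.yaml` hints at how a file like this reaches the container. The deployment itself references a PersistentVolumeClaim; as a purely hypothetical alternative sketch (our assumption, not the repository's method), the same file could be shipped as a `ConfigMap` mounted at the commented `volumeMounts` path:

```yaml
# Hypothetical sketch: serve the white-label env.js from a ConfigMap and
# mount it where the commented custom-layout volumeMount in
# deployment.yaml points (/home/factory/www/assets/custom).
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-layout
  namespace: kerberos-factory
data:
  env.js: |
    (function(window) {
      window["env"] = window["env"] || {};
      window["env"]["environment"] = "whitelabel";
      window["env"]["pageTitle"] = "Your application";
    })(this);
```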

View File

@@ -0,0 +1,5 @@
@charset "utf-8";
.body {
background: white;
}

View File

@@ -20,26 +20,39 @@ spec:
spec:
containers:
- name: factory
image: "kerberos/factory:1.0.851776620"
image: "uugai/factory:v1.0.9" # or you can use "uugai/factory:latest"
#imagePullPolicy: Always
resources:
requests:
memory: 256Mi
cpu: 500m
limits:
memory: 256Mi
cpu: 500m
ports:
- containerPort: 80
envFrom:
- configMapRef:
name: mongodb
# Injecting the ca-certificates inside the container.
#volumeMounts:
#- name: rootcerts
# mountPath: /etc/ssl/certs/ca-certificates.crt
# subPath: ca-certificates.crt
#- name: custom-layout
# mountPath: /home/factory/www/assets/custom
env:
- name: GIN_MODE
value: release
- name: KERBEROS_LOGIN_USERNAME
value: "root"
- name: KERBEROS_LOGIN_PASSWORD
value: "kerberos"
- name: KERBEROS_AGENT_IMAGE
value: "kerberos/agent:9d70778"
value: "kerberos/agent:latest"
- name: KERBEROS_AGENT_MEMORY_LIMIT
value: "256Mi"
@@ -56,18 +69,23 @@ spec:
# the Kerberos Agent in following directory: /etc/ssl/certs/
#- name: CERTIFICATES_CONFIGMAP
# value: "rootcerts"
#volumes:
#- name: rootcerts
# configMap:
# name: rootcerts
#- name: custom-layout
# persistentVolumeClaim:
# claimName: custom-layout-claim
---
apiVersion: v1
kind: Service
metadata:
name: factory
name: factory-lb
labels:
app: factory
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 80
@@ -75,30 +93,3 @@ spec:
protocol: TCP
selector:
app: factory
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: factory
annotations:
kubernetes.io/ingress.class: traefik
#kubernetes.io/ingress.class: nginx
#kubernetes.io/tls-acme: "true"
#nginx.ingress.kubernetes.io/ssl-redirect: "true"
#cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
#tls:
#- hosts:
#- factory.domain.com
#secretName: factory-tls
rules:
- host: factory.domain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: factory
port:
number: 80

View File

@@ -0,0 +1,42 @@
---
apiVersion: v1
kind: Service
metadata:
name: factory
labels:
app: factory
spec:
ports:
- port: 80
targetPort: 80
name: frontend
protocol: TCP
selector:
app: factory
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: factory
annotations:
kubernetes.io/ingress.class: traefik
#kubernetes.io/ingress.class: nginx
#kubernetes.io/tls-acme: "true"
#nginx.ingress.kubernetes.io/ssl-redirect: "true"
#cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
#tls:
#- hosts:
#- factory.domain.com
#secretName: factory-tls
rules:
- host: factory.domain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: factory
port:
number: 80

View File

@@ -3,8 +3,15 @@ kind: ConfigMap
metadata:
name: mongodb
data:
# This is the mongodb database where configurations will be stored, you might use a different name if you want.
MONGODB_DATABASE_FACTORY: "KerberosFactory"
# MongoDB URI (for example for a SaaS service like MongoDB Atlas)
# If uri is set, the below properties are not used (host, adminDatabase, username, password)
#MONGODB_URI: "mongodb+srv://xx:xx@kerberos-hub.xxx.mongodb.net/?retryWrites=true&w=majority&appName=xxx"
# If you do not wish to use the URI, you can specify the individual values.
MONGODB_HOST: "mongodb.mongodb"
MONGODB_DATABASE_CREDENTIALS: "admin"
MONGODB_USERNAME: "root"
MONGODB_PASSWORD: "yourmongodbpassword"
MONGODB_DATABASE_STORAGE: "KerberosStorage"

View File

@@ -145,7 +145,7 @@ auth:
## @param auth.rootPassword MongoDB(&reg;) root password
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#setting-the-root-password-on-first-run
##
rootPassword: "yourpassword"
rootPassword: "yourmongodbpassword"
## MongoDB(&reg;) custom users and databases
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#creating-users-and-databases-on-first-run
## @param auth.usernames List of custom users to be created during the initialization
@@ -2024,4 +2024,4 @@ metrics:
## annotations:
## summary: High request latency
##
rules: []
rules: []