mirror of
https://github.com/lingble/talos.git
synced 2026-03-20 04:03:37 +00:00
[{"categories":null,"contents":" The osd service enforces a high level of security by utilizing mutual TLS for authentication and authorization. In this section we will configure mutual TLS by generating the certificates for the servers (osd) and clients (osctl).\nCluster Owners We recommend that the configuration of osd be performed by a cluster owner. A cluster owner should be a person of authority within an organization. Perhaps a director, manager, or senior member of a team. They are responsible for storing the root CA, and distributing the PKI for authorized cluster administrators.\nCluster Administrators The authorization to use osctl should be granted to a person fit for cluster administration. As a cluster administrator, the user gains access to the out-of-band management tools offered by Dianemo.\nConfiguring osd To configure osd, we will need:\n static IP addresses for each node that will participate as a master a root CA and identity certificates for each node participating as a master signed by the root CA The following steps should be performed by a cluster owner.\nGenerating the Root CA The root CA can be generated by running:\nosctl gen ca --hours \u0026lt;hours\u0026gt; --organization \u0026lt;organization\u0026gt; The cluster owner should store the generated private key (\u0026lt;organization\u0026gt;.key) in a safe place, that only other cluster owners have access to. The public certificate (\u0026lt;organization\u0026gt;.crt) should be made available to cluster administrators because, as we will see shortly, it is required to configure osctl.\nNote: The --rsa flag should not be specified for the generation of the osd CA.\n Generating the Identity Certificates Now that we have our root CA, we must create certificates that identify the node. 
As the cluster owner, run:\nosctl gen key --name \u0026lt;node-name\u0026gt; osctl gen csr --ip \u0026lt;node-ip\u0026gt; --key \u0026lt;node-name\u0026gt;.key osctl gen crt --hours \u0026lt;hours\u0026gt; --ca \u0026lt;organization\u0026gt; --csr \u0026lt;node-name\u0026gt;.csr --name \u0026lt;node-name\u0026gt; Repeat this process for each node that will participate as a master.\nConfiguring osctl To configure osctl, we will need:\n the root CA we generated above and a certificate, signed by the root CA, specific to the user Setting up osctl is a joint process between a cluster owner and a user requesting to become a cluster administrator.\nGenerating the User Certificate The user requesting cluster administration access runs the following:\nosctl gen key --name \u0026lt;user\u0026gt; osctl gen csr --ip 127.0.0.1 --key \u0026lt;user\u0026gt;.key Now, the cluster owner must generate a certificate from the above CSR: the requesting user submits the CSR to the cluster owner, and the cluster owner runs the following:\nosctl gen crt --hours \u0026lt;hours\u0026gt; --ca \u0026lt;organization\u0026gt; --csr \u0026lt;user\u0026gt;.csr --name \u0026lt;user\u0026gt; The generated certificate is then sent to the requesting user over a secure channel.\nThe Configuration File With all the above steps done, the new cluster administrator can now create the configuration file for osctl.\ncat \u0026lt;organization\u0026gt;.crt | base64 cat \u0026lt;user\u0026gt;.crt | base64 cat \u0026lt;user\u0026gt;.key | base64 Now, create ~/.dianemo/config with the following contents:\ncontext: \u0026lt;context\u0026gt; contexts: \u0026lt;context\u0026gt;: target: \u0026lt;node-ip\u0026gt; ca: \u0026lt;base 64 encoded root public certificate\u0026gt; crt: \u0026lt;base 64 encoded user public certificate\u0026gt; key: \u0026lt;base 64 encoded user private key\u0026gt; 
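The encode-and-assemble steps above can be scripted in one pass. A minimal sketch, assuming a POSIX shell with the coreutils base64 tool on the PATH; the file names (org.crt, admin.crt, admin.key), the context name mycluster, and the target IP are illustrative stand-ins for \u0026lt;organization\u0026gt;.crt, \u0026lt;user\u0026gt;.crt, \u0026lt;user\u0026gt;.key, and your real values:

```shell
# Illustrative stand-ins for the PKI files produced above; in practice
# these are the real <organization>.crt, <user>.crt, and <user>.key.
printf 'fake-ca'  > org.crt
printf 'fake-crt' > admin.crt
printf 'fake-key' > admin.key

# base64-encode each file (stripping newlines for portability between
# GNU and BSD base64) and assemble ~/.dianemo/config.
CA=$(base64 < org.crt | tr -d '\n')
CRT=$(base64 < admin.crt | tr -d '\n')
KEY=$(base64 < admin.key | tr -d '\n')
mkdir -p ~/.dianemo
cat > ~/.dianemo/config <<EOF
context: mycluster
contexts:
  mycluster:
    target: 10.0.0.10
    ca: ${CA}
    crt: ${CRT}
    key: ${KEY}
EOF
```

The same file can of course be written by hand; the script only automates the base64 encoding and the indentation of the config.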
","permalink":"https://dianemo.autonomy.io/configuration/osd/","tags":null,"title":"osd"},{"categories":null,"contents":"First, create the AMI:\ndocker run \\ --rm \\ --volume $HOME/.aws/credentials:/root/.aws/credentials \\ --env AWS_DEFAULT_PROFILE=${PROFILE} \\ --env AWS_DEFAULT_REGION=${REGION} \\ autonomy/dianemo:latest ami -var regions=${COMMA_SEPARATED_LIST_OF_REGIONS} Once the AMI is created, you can now start an EC2 instance using the AMI ID. Provide the proper configuration as the instance\u0026rsquo;s user data.\n An official Terraform module is currently being developed, stay tuned!\n ","permalink":"https://dianemo.autonomy.io/examples/aws/","tags":null,"title":"AWS"},{"categories":null,"contents":"The kernel included with Dianemo is configured according to the recommendations outlined in the Kernel Self Protection Project (KSSP).\n","permalink":"https://dianemo.autonomy.io/components/kernel/","tags":null,"title":"kernel"},{"categories":null,"contents":" Creating a Master Node On the KVM host, install a master node to an available block device:\ndocker run \\ --rm \\ --privileged \\ --volume /dev:/dev \\ autonomy/dianemo:latest image -b /dev/sdb -f -p bare-metal -u http://${IP}:8080/master.yaml Note: http://${IP}:8080/master.yaml should be reachable by the VM and contain a valid master configuration file.\n Now, create the VM:\nvirt-install \\ -n master \\ --description \u0026quot;Kubernetes master node.\u0026quot; \\ --os-type=Linux \\ --os-variant=generic \\ --virt-type=kvm \\ --cpu=host \\ --vcpus=2 \\ --ram=4096 \\ --disk path=/dev/sdb \\ --network bridge=br0,model=e1000,mac=52:54:00:A8:4C:E1 \\ --graphics none \\ --boot hd \\ --rng /dev/random Creating a Worker Node On the KVM host, install a worker node to an available block device:\ndocker run \\ --rm \\ --privileged \\ --volume /dev:/dev \\ autonomy/dianemo:latest image -b /dev/sdc -f -p bare-metal -u http://${IP}:8080/worker.yaml Note: http://${IP}:8080/worker.yaml should be reachable by the VM 
and contain a valid worker configuration file.\n Now, create the VM:\nvirt-install \\ -n worker \\ --description \u0026quot;Kubernetes worker node.\u0026quot; \\ --os-type=Linux \\ --os-variant=generic \\ --virt-type=kvm \\ --cpu=host \\ --vcpus=2 \\ --ram=4096 \\ --disk path=/dev/sdc \\ --network bridge=br0,model=e1000,mac=52:54:00:B9:5D:F2 \\ --graphics none \\ --boot hd \\ --rng /dev/random ","permalink":"https://dianemo.autonomy.io/examples/kvm/","tags":null,"title":"KVM"},{"categories":null,"contents":" Configuring master nodes in a Dianemo Kubernetes cluster is a two-part process:\n configuring the Dianemo-specific options and configuring the Kubernetes-specific options To get started, create a YAML file we will use in the following steps:\ntouch \u0026lt;node-name\u0026gt;.yaml Configuring Dianemo Injecting the Dianemo PKI Using osctl and the output from the osd configuration documentation, inject the generated PKI into the configuration file:\nosctl inject os --crt \u0026lt;organization\u0026gt;.crt --key \u0026lt;organization\u0026gt;.key \u0026lt;node-name\u0026gt;.yaml osctl inject identity --crt \u0026lt;node-name\u0026gt;.crt --key \u0026lt;node-name\u0026gt;.key \u0026lt;node-name\u0026gt;.yaml You should see the following fields populated:\nsecurity: os: ca: crt: \u0026lt;base 64 encoded root public certificate\u0026gt; key: \u0026lt;base 64 encoded root private key\u0026gt; identity: crt: \u0026lt;base 64 encoded identity public certificate\u0026gt; key: \u0026lt;base 64 encoded identity private key\u0026gt; ... Configuring trustd Each master node participates as a Root of Trust in the cluster. The responsibilities of trustd include:\n certificate as a service and Kubernetes PKI distribution amongst master nodes The authentication between trustd and a client is, for now, a simple username and password combination. Having these credentials gives a client the power to request a certificate that identifies itself. 
In the \u0026lt;node-name\u0026gt;.yaml, add the following:\nsecurity: ... services: ... trustd: username: \u0026lt;username\u0026gt; password: \u0026lt;password\u0026gt; ... Configuring Kubernetes Generating the Root CA To create the root CA for the Kubernetes cluster, run:\nosctl gen ca --rsa --hours \u0026lt;hours\u0026gt; --organization \u0026lt;kubernetes-organization\u0026gt; Note: The --rsa flag is required for the generation of the Kubernetes CA.\n Injecting the Kubernetes PKI Using osctl, inject the generated PKI into the configuration file:\nosctl inject kubernetes --crt \u0026lt;kubernetes-organization\u0026gt;.crt --key \u0026lt;kubernetes-organization\u0026gt;.key \u0026lt;node-name\u0026gt;.yaml You should see the following fields populated:\nsecurity: ... kubernetes: ca: crt: \u0026lt;base 64 encoded root public certificate\u0026gt; key: \u0026lt;base 64 encoded root private key\u0026gt; ... Configuring Kubeadm The configuration of the kubeadm service is done in two parts:\n supplying the Dianemo-specific options supplying the kubeadm InitConfiguration Dianemo-Specific Options services: ... kubeadm: init: type: initial etcdMemberName: \u0026lt;member-name\u0026gt; ... Kubeadm-Specific Options services: ... kubeadm: ... configuration: | apiVersion: kubeadm.k8s.io/v1alpha3 kind: InitConfiguration ... ... See the official documentation for the options available in InitConfiguration.\n ","permalink":"https://dianemo.autonomy.io/configuration/masters/","tags":null,"title":"Masters"},{"categories":null,"contents":"A common theme throughout the design of Dianemo is minimalism. We believe strongly in the UNIX philosophy that each program should do one job well. The init included in Dianemo is one example of this.\nWe wanted to create a focused init that had one job: run Kubernetes. 
There simply is no mechanism in place to do anything else.\nTo accomplish this, we must address real-world operational needs like:\n Orchestration around creating a highly available control plane Log retrieval Restarting system services Rebooting a node and more In the following sections we will take a closer look at how these needs are addressed, and how services managed by init are designed to enhance the Kubernetes experience.\n","permalink":"https://dianemo.autonomy.io/components/init/","tags":null,"title":"init"},{"categories":null,"contents":" Creating a Master Node On Dom0, install Dianemo to an available block device:\ndocker run \\ --rm \\ --privileged \\ --volume /dev:/dev \\ autonomy/dianemo:latest image -b /dev/sdb Save the following as /etc/xen/master.cfg:\nname = \u0026quot;master\u0026quot; builder='hvm' bootloader = \u0026quot;/bin/pygrub\u0026quot; firmware_override = \u0026quot;/usr/lib64/xen/boot/hvmloader\u0026quot; vcpus=2 memory = 4096 serial = \u0026quot;pty\u0026quot; kernel = \u0026quot;/var/lib/xen/dianemo/vmlinuz\u0026quot; ramdisk = \u0026quot;/var/lib/xen/dianemo/initramfs.xz\u0026quot; disk = [ 'phy:/dev/sdb,xvda,w', ] vif = [ 'mac=52:54:00:A8:4C:E1,bridge=xenbr0,model=e1000', ] extra = \u0026quot;ip=dhcp consoleblank=0 console=hvc0 console=tty0 console=ttyS0,9600 dianemo.autonomy.io/platform=bare-metal dianemo.autonomy.io/userdata=http://${IP}:8080/master.yaml\u0026quot; Note: http://${IP}:8080/master.yaml should be reachable by the VM and contain a valid master configuration file.\n Now, create the VM:\nxl create /etc/xen/master.cfg Creating a Worker Node On Dom0, install Dianemo to an available block device:\ndocker run \\ --rm \\ --privileged \\ --volume /dev:/dev \\ autonomy/dianemo:latest image -b /dev/sdc Save the following as /etc/xen/worker.cfg:\nname = \u0026quot;worker\u0026quot; builder='hvm' bootloader = \u0026quot;/bin/pygrub\u0026quot; firmware_override = \u0026quot;/usr/lib64/xen/boot/hvmloader\u0026quot; vcpus=2 memory = 
4096 serial = \u0026quot;pty\u0026quot; kernel = \u0026quot;/var/lib/xen/dianemo/vmlinuz\u0026quot; ramdisk = \u0026quot;/var/lib/xen/dianemo/initramfs.xz\u0026quot; disk = [ 'phy:/dev/sdc,xvda,w', ] vif = [ 'mac=52:54:00:B9:5D:F2,bridge=xenbr0,model=e1000', ] extra = \u0026quot;ip=dhcp consoleblank=0 console=hvc0 console=tty0 console=ttyS0,9600 dianemo.autonomy.io/platform=bare-metal dianemo.autonomy.io/userdata=http://${IP}:8080/worker.yaml\u0026quot; Note: http://${IP}:8080/worker.yaml should be reachable by the VM and contain a valid worker configuration file.\n Now, create the VM:\nxl create /etc/xen/worker.cfg ","permalink":"https://dianemo.autonomy.io/examples/xen/","tags":null,"title":"Xen"},{"categories":null,"contents":"Configuring the worker nodes is much simpler than configuring the master nodes. Using the trustd API, worker nodes submit a CSR and, if authenticated, receive a valid osd certificate. Similarly, using a kubeadm token, the node joins an existing cluster.\nWe need to specify:\n the osd public certificate trustd credentials and endpoints and a kubeadm JoinConfiguration version: \u0026quot;\u0026quot; security: os: ca: crt: \u0026lt;base 64 encoded root public certificate\u0026gt; services: kubeadm: configuration: | apiVersion: kubeadm.k8s.io/v1alpha3 kind: JoinConfiguration ... trustd: username: \u0026lt;username\u0026gt; password: \u0026lt;password\u0026gt; endpoints: - \u0026lt;master-1\u0026gt; ... - \u0026lt;master-n\u0026gt; See the official documentation for the options available in JoinConfiguration.\n ","permalink":"https://dianemo.autonomy.io/configuration/workers/","tags":null,"title":"Workers"},{"categories":null,"contents":"At the heart of Dianemo is kubeadm, allowing it to harness the power of the official upstream bootstrap tool. 
By integrating with kubeadm natively, Dianemo stands to gain a strong community of users and developers already familiar with kubeadm.\n","permalink":"https://dianemo.autonomy.io/components/kubeadm/","tags":null,"title":"kubeadm"},{"categories":null,"contents":"Security is one of the highest priorities within Autonomy. Operating a Kubernetes cluster requires a certain level of trust. For example, orchestrating the bootstrap of a highly available control plane requires the distribution of sensitive PKI data.\nTo that end, we created trustd. Based on the concept of a Root of Trust, trustd is a simple daemon responsible for establishing trust within the system. Once trust is established, various methods become available to the trustee. It can, for example, accept a write request from another node to place a file on disk.\nWe imagine that the number of available methods will grow as Dianemo gets tested in the real world.\n","permalink":"https://dianemo.autonomy.io/components/trustd/","tags":null,"title":"trustd"},{"categories":null,"contents":"High availability is crucial for production-quality Kubernetes clusters. The proxyd component is a simple yet powerful reverse proxy that adapts to where Dianemo is deployed and provides load balancing across all API servers.\n","permalink":"https://dianemo.autonomy.io/components/proxyd/","tags":null,"title":"proxyd"},{"categories":null,"contents":"Dianemo is unique in that it has no concept of host-level access. There are no shells installed. No SSH daemon. Only what is required to run Kubernetes. Furthermore, there is no way to run any custom processes on the host level.\nTo make this work, we needed an out-of-band tool for managing the nodes. In an ideal world, the system would be self-healing and we would never have to touch it. But, in the real world, this does not happen. We still need a way to handle operational scenarios that may arise.\nThe osd daemon provides a way to do just that. 
Based on the Principle of Least Privilege, osd provides operational value for cluster administrators by exposing an API for node management.\n","permalink":"https://dianemo.autonomy.io/components/osd/","tags":null,"title":"osd"},{"categories":null,"contents":"The osctl CLI is the client to the osd service running on every node. With it, you can do things like:\n retrieve container logs restart a service reset a node reboot a node retrieve kernel logs generate PKI resources inject data into node configuration files ","permalink":"https://dianemo.autonomy.io/components/osctl/","tags":null,"title":"osctl"},{"categories":null,"contents":"Dianemo comes with a reserved block device with three partitions:\n an EFI System Partition (ESP) a ROOT partition mounted as read-only that contains the minimal set of binaries to operate system services and a DATA partition that is mounted as read/write at /var/run These partitions are reserved and cannot be modified. The one exception to this is that the DATA partition will be resized automatically in the init process to the maximum size possible. Managing any other block device can be done via the blockd service.\n","permalink":"https://dianemo.autonomy.io/components/blockd/","tags":null,"title":"blockd"},{"categories":null,"contents":"Dianemo is a modern Linux distribution designed to be secure, immutable, and minimal.\n","permalink":"https://dianemo.autonomy.io/dianemo/","tags":null,"title":"Dianemo"}]