fix bootstrap instruction

Serge Logvinov
2024-11-02 14:59:19 +02:00
parent bf1ee7fa02
commit 17dac27379
15 changed files with 65 additions and 82 deletions


@@ -46,7 +46,7 @@ Having a single Kubernetes control plane that spans multiple cloud providers can
| [Hetzner](hetzner) | 1.7.6 | CCM,CSI,Autoscaler | many regions, one network zone | ✗ | ✓ | ✓ |
| [Openstack](openstack) | 1.3.4 | CCM,CSI | many regions, many zones | ✓ | ✓ | ✓ |
| [Oracle](oracle) | 1.3.4 | CCM,CSI,Autoscaler | one region, many zones | ✓ | ✓ | |
| [Proxmox](proxmox) | 1.7.6 | CCM,CSI | one region, mny zones | ✓ | ✓ | ✓ |
| [Proxmox](proxmox) | 1.8.2 | CCM,CSI | one region, many zones | ✓ | ✓ | ✓ |
| [Scaleway](scaleway) | 1.7.6 | CCM,CSI | one region | ✓ | ✓ | ✓ |
## Known issues

linode/README.md Normal file

@@ -0,0 +1,3 @@
# Linode
Status: **abandoned**


@@ -18,7 +18,7 @@ init: ## Initialize terraform
create-age: ## Create age key
age-keygen -o age.key.txt
create-config: ## Genereate talos configs
create-config: ## Generate talos configs
terraform apply -auto-approve -target=local_file.worker_patch
talosctl gen config --output-dir _cfgs --with-docs=false --with-examples=false ${CLUSTERNAME} https://${ENDPOINT}:6443
talosctl --talosconfig _cfgs/talosconfig config endpoint ${ENDPOINT}
@@ -43,6 +43,9 @@ create-templates:
@sops --encrypt --input-type=yaml --output-type=yaml _cfgs/controlplane.yaml > _cfgs/controlplane.sops.yaml
@git add -f _cfgs/talosconfig.sops.yaml _cfgs/ca.crt terraform.tfvars.sops.json
create-cluster: ## Create cluster
terraform apply
bootstrap: ## Bootstrap controlplane
talosctl --talosconfig _cfgs/talosconfig config endpoint ${ENDPOINT}
talosctl --talosconfig _cfgs/talosconfig --nodes ${CPFIRST} bootstrap
@@ -61,18 +64,15 @@ nodes: ## Show kubernetes nodes
@kubectl get nodes -owide --sort-by '{.metadata.name}' --label-columns topology.kubernetes.io/region,topology.kubernetes.io/zone,node.kubernetes.io/instance-type
system:
helm --kubeconfig=kubeconfig upgrade -i --namespace=kube-system --version=1.15.6 -f deployments/cilium.yaml \
helm --kubeconfig=kubeconfig upgrade -i --namespace=kube-system --version=1.16.3 -f deployments/cilium.yaml \
cilium cilium/cilium
kubectl --kubeconfig=kubeconfig -n kube-system delete svc cilium-agent
kubectl --kubeconfig=kubeconfig apply -f ../_deployments/vars/coredns-local.yaml
helm --kubeconfig=kubeconfig upgrade -i --namespace=kube-system -f ../_deployments/vars/metrics-server.yaml \
metrics-server metrics-server/metrics-server
helm --kubeconfig=kubeconfig upgrade -i --namespace=kube-system -f deployments/talos-ccm.yaml \
--set useDaemonSet=true \
talos-cloud-controller-manager \
oci://ghcr.io/siderolabs/charts/talos-cloud-controller-manager
@@ -83,5 +83,5 @@ system:
# File vars/secrets.proxmox.yaml should be created manually
#
kubectl --kubeconfig=kubeconfig apply -f vars/proxmox-ns.yaml
helm --kubeconfig=kubeconfig secrets upgrade -i --namespace=csi-proxmox -f vars/proxmox-csi.yaml -f vars/secrets.proxmox.yaml \
proxmox-csi-plugin oci://ghcr.io/sergelogvinov/charts/proxmox-csi-plugin
# helm --kubeconfig=kubeconfig secrets upgrade -i --namespace=csi-proxmox -f vars/proxmox-csi.yaml -f vars/secrets.proxmox.yaml \
# proxmox-csi-plugin oci://ghcr.io/sergelogvinov/charts/proxmox-csi-plugin


@@ -7,13 +7,13 @@ Local utilities
* terraform
* talosctl
* kubectl
* sops
* yq
## Kubernetes addons
* [cilium](https://github.com/cilium/cilium) 1.12.4
* [metrics-server](https://github.com/kubernetes-sigs/metrics-server) 0.5.0
* [rancher.io/local-path](https://github.com/rancher/local-path-provisioner) 0.0.19
* [cilium](https://github.com/cilium/cilium) 1.16.3
* [metrics-server](https://github.com/kubernetes-sigs/metrics-server) 0.7.2
* [Talos CCM](https://github.com/siderolabs/talos-cloud-controller-manager) edge, controller: `cloud-node`.
Talos CCM labels the nodes and approves node server certificate signing requests.
* [Proxmox CCM](https://github.com/sergelogvinov/proxmox-cloud-controller-manager) edge, controller: `cloud-node-lifecycle`.
@@ -35,11 +35,11 @@ All deployments use nodeSelector, controllers runs on control-plane, all other o
First we need to upload the talos OS image to the Proxmox host machine.
If you do not have shared storage, you need to upload image to each machine.
Folow this link [README](images/README.md) to make it.
Follow the instructions in [README](images/README.md) to prepare it.
## Init
Create Proxmox role and account.
Create Proxmox role and accounts.
These credentials will be used by the Proxmox CCM and CSI.
```shell
@@ -48,21 +48,13 @@ terraform init -upgrade
terraform apply
```
Terraform is not capable of creating account tokens, so you should create them through the web portal, or with this command:
```shell
# On the proxmox server.
pveum user token add kubernetes@pve ccm -privsep 0
```
## Bootstrap cluster
Terraform will create the Talos machine config and upload it to the Proxmox server, but only for worker nodes.
It also creates a metadata file containing information such as the region, zone, and providerID.
The Talos CCM uses this metadata to label the nodes, and the Proxmox CCM/CSI require it as well.
Contol-plane machine config uploads by command `talosctl apply-config`, because I do not want to store all kubernetes secrets in proxmox server.
The control-plane machine config is uploaded with the `talosctl apply-config` command, because I do not want to store all Kubernetes secrets on the Proxmox server.
Terraform shows you the command to run.
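As a sketch, the printed command has the following shape. The node IP below is a placeholder, not a value from this repo; always use the exact command from the terraform output:

```shell
# Hypothetical apply-config invocation; values are placeholders.
NODE="172.16.0.11"                # control-plane IP printed by terraform
CONFIG="_cfgs/controlplane.yaml"  # generated by `make create-config`
# Print the command instead of running it, so it can be reviewed first.
echo "talosctl apply-config --insecure --nodes ${NODE} --file ${CONFIG}"
```

The `--insecure` flag is needed because the node is still in maintenance mode and has no client certificate yet.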
VM config looks like:
@@ -102,11 +94,7 @@ machine:
First we need to define our cluster:
```hcl
proxmox_domain = "example.com"
proxmox_host = "node1.example.com"
proxmox_nodename = "node1"
proxmox_storage = "data"
proxmox_image = "talos"
vpc_main_cidr = "172.16.0.0/24"
@@ -160,13 +148,18 @@ make init create-config create-templates
Launch the control-plane node:
```shell
make create-controlplane
make create-cluster
# wait ~2 minutes
make create-controlplane-bootstrap
make bootstrap
```
Retrieve the `kubeconfig` file:
```shell
make create-kubeconfig
make kubeconfig
```
```shell
kubectl get nodes -o wide
kubectl get pods -o wide -A
```


@@ -24,8 +24,10 @@ resource "proxmox_virtual_environment_download_file" "talos" {
file_name = "talos.raw.xz.img"
overwrite = false
# Hash: 376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba customization: {}
# Hash: 14e9b0100f05654bedf19b92313cdc224cbff52879193d24f3741f1da4a3cbb1 customization: siderolabs/binfmt-misc
decompression_algorithm = "zst"
url = "https://github.com/siderolabs/talos/releases/download/v${var.release}/nocloud-amd64.raw.xz"
url = "https://factory.talos.dev/image/376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba/v${var.release}/nocloud-amd64.raw.xz"
}
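The new URL points at the Talos Image Factory rather than the plain GitHub release: it is just the schematic ID (the first hash in the comment above, whose `customization: {}` means a stock image) plus the release tag. A small sketch of how the pieces compose, with `v1.8.2` standing in for `v${var.release}` per the default set elsewhere in this commit:

```shell
# Compose the Talos Image Factory URL from its two inputs.
SCHEMATIC="376567988ad370138ad8b2698212367b8edcb69b5fd68c80be1f2ec7d603b4ba"
RELEASE="v1.8.2"   # "v${var.release}" in the Terraform resource
URL="https://factory.talos.dev/image/${SCHEMATIC}/${RELEASE}/nocloud-amd64.raw.xz"
echo "${URL}"
```

Bumping `var.release` is then the only change needed on upgrade, as long as the schematic stays the same.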
resource "proxmox_virtual_environment_vm" "template" {


@@ -16,11 +16,14 @@ operator:
effect: NoSchedule
identityAllocationMode: crd
kubeProxyReplacement: strict
kubeProxyReplacement: true
enableK8sEndpointSlice: true
localRedirectPolicy: true
l7Proxy: false
tunnel: "vxlan"
# endpointRoutes:
# enabled: true
# routingMode: "native"
autoDirectNodeRoutes: false
devices: [eth+]
@@ -56,6 +59,10 @@ hostFirewall:
enabled: true
ingressController:
enabled: false
envoy:
enabled: false
prometheus:
enabled: false
securityContext:
privileged: true


@@ -24,9 +24,12 @@ extraArgs:
- --node-cidr-mask-size-ipv4=24
- --node-cidr-mask-size-ipv6=80
# tolerations:
# - effect: NoSchedule
# operator: Exists
daemonSet:
enabled: true
tolerations:
- effect: NoSchedule
operator: Exists
transformations:
- name: web


@@ -2,7 +2,7 @@ terraform {
required_providers {
proxmox = {
source = "bpg/proxmox"
version = "0.60.0"
version = "0.66.3"
}
}
required_version = ">= 1.0"


@@ -154,7 +154,7 @@ resource "proxmox_virtual_environment_vm" "controlplane" {
}
resource "proxmox_virtual_environment_firewall_options" "controlplane" {
for_each = local.controlplanes
for_each = lookup(var.security_groups, "controlplane", "") == "" ? {} : local.controlplanes
node_name = each.value.zone
vm_id = each.value.id
enabled = true
@@ -164,16 +164,16 @@ resource "proxmox_virtual_environment_firewall_options" "controlplane" {
log_level_in = "nolog"
log_level_out = "nolog"
macfilter = false
ndp = false
ndp = true
input_policy = "DROP"
output_policy = "ACCEPT"
radv = true
radv = false
depends_on = [proxmox_virtual_environment_vm.controlplane]
}
resource "proxmox_virtual_environment_firewall_rules" "controlplane" {
for_each = local.controlplanes
for_each = lookup(var.security_groups, "controlplane", "") == "" ? {} : local.controlplanes
node_name = each.value.zone
vm_id = each.value.id


@@ -198,7 +198,7 @@ resource "proxmox_virtual_environment_vm" "db" {
}
resource "proxmox_virtual_environment_firewall_options" "db" {
for_each = local.dbs
for_each = lookup(var.security_groups, "db", "") == "" ? {} : local.dbs
node_name = each.value.zone
vm_id = each.value.id
enabled = true
@@ -217,13 +217,13 @@ resource "proxmox_virtual_environment_firewall_options" "db" {
}
resource "proxmox_virtual_environment_firewall_rules" "db" {
for_each = { for k, v in local.dbs : k => v if lookup(try(var.instances[v.zone], {}), "db_sg", "") != "" }
for_each = lookup(var.security_groups, "db", "") == "" ? {} : local.dbs
node_name = each.value.zone
vm_id = each.value.id
rule {
enabled = true
security_group = lookup(var.instances[each.value.zone], "db_sg")
security_group = var.security_groups["db"]
}
depends_on = [proxmox_virtual_environment_vm.db, proxmox_virtual_environment_firewall_options.db]


@@ -215,7 +215,7 @@ resource "proxmox_virtual_environment_vm" "web" {
}
resource "proxmox_virtual_environment_firewall_options" "web" {
for_each = local.webs
for_each = lookup(var.security_groups, "web", "") == "" ? {} : local.webs
node_name = each.value.zone
vm_id = each.value.id
enabled = true
@@ -234,13 +234,13 @@ resource "proxmox_virtual_environment_firewall_options" "web" {
}
resource "proxmox_virtual_environment_firewall_rules" "web" {
for_each = { for k, v in local.webs : k => v if lookup(try(var.instances[v.zone], {}), "web_sg", "") != "" }
for_each = lookup(var.security_groups, "web", "") == "" ? {} : local.webs
node_name = each.value.zone
vm_id = each.value.id
rule {
enabled = true
security_group = lookup(var.instances[each.value.zone], "web_sg")
security_group = var.security_groups["web"]
}
depends_on = [proxmox_virtual_environment_vm.web, proxmox_virtual_environment_firewall_options.web]


@@ -194,7 +194,7 @@ resource "proxmox_virtual_environment_vm" "worker" {
}
resource "proxmox_virtual_environment_firewall_options" "worker" {
for_each = local.workers
for_each = lookup(var.security_groups, "worker", "") == "" ? {} : local.workers
node_name = each.value.node_name
vm_id = each.value.id
enabled = true
@@ -213,13 +213,13 @@ resource "proxmox_virtual_environment_firewall_options" "worker" {
}
resource "proxmox_virtual_environment_firewall_rules" "worker" {
for_each = { for k, v in local.workers : k => v if lookup(try(var.instances[v.zone], {}), "worker_sg", "") != "" }
for_each = lookup(var.security_groups, "worker", "") == "" ? {} : local.workers
node_name = each.value.node_name
vm_id = each.value.id
rule {
enabled = true
security_group = lookup(var.instances[each.value.zone], "worker_sg")
security_group = var.security_groups["worker"]
}
depends_on = [proxmox_virtual_environment_vm.worker, proxmox_virtual_environment_firewall_options.worker]


@@ -26,10 +26,3 @@ iptables_apply_changes: false
iptables_configuration_template: iptables_proxmox.j2
iptables6_configuration_template: iptables6_proxmox.j2
iptables_nat_enabled: true
iptables_input_policy: "ACCEPT"
iptables_forward_policy: "ACCEPT"
iptables_output_policy: "ACCEPT"
iptables6_input_policy: "ACCEPT"
iptables6_forward_policy: "ACCEPT"
iptables6_output_policy: "ACCEPT"


@@ -1,22 +1,10 @@
variable "proxmox_host" {
description = "Proxmox host"
description = "Proxmox API host"
type = string
default = "192.168.1.1"
}
variable "proxmox_domain" {
description = "Proxmox domain name"
type = string
default = "proxmox.local"
}
variable "proxmox_image" {
description = "Proxmox source image name"
type = string
default = "talos"
}
variable "region" {
description = "Proxmox Cluster Name"
type = string
@@ -38,7 +26,7 @@ variable "vpc_main_cidr" {
variable "release" {
type = string
description = "The version of the Talos image"
default = "1.7.6"
default = "1.8.2"
}
data "sops_file" "tfvars" {
@@ -97,7 +85,7 @@ variable "instances" {
type = map(any)
default = {
"all" = {
version = "v1.31.0"
version = "v1.31.2"
},
"hvm-1" = {
enabled = false,
@@ -107,20 +95,17 @@ variable "instances" {
web_mem = 27648,
web_template = "worker-sriov.yaml.tpl"
web_labels = ""
web_sg = "kubernetes"
worker_id = 11030,
worker_count = 0,
worker_cpu = 8,
worker_mem = 28672,
worker_template = "worker-sriov.yaml.tpl"
worker_sg = "kubernetes"
db_id = 11030
db_count = 0,
db_cpu = 8,
db_mem = 28672,
db_template = "worker-sriov.yaml.tpl"
db_labels = ""
db_sg = "kubernetes"
},
"hvm-2" = {
enabled = false,
@@ -130,20 +115,17 @@ variable "instances" {
web_mem = 27648,
web_template = "worker-sriov.yaml.tpl"
web_labels = ""
web_sg = "kubernetes"
worker_id = 12030,
worker_count = 0,
worker_cpu = 8,
worker_mem = 28672,
worker_template = "worker-sriov.yaml.tpl"
worker_sg = "kubernetes"
db_id = 12040
db_count = 0,
db_cpu = 8,
db_mem = 28672,
db_template = "worker-sriov.yaml.tpl"
db_labels = ""
db_sg = "kubernetes"
},
}
}
@@ -152,9 +134,9 @@ variable "security_groups" {
description = "Map of security groups"
type = map(any)
default = {
"controlplane" = "kubernetes"
"web" = "kubernetes"
"worker" = "kubernetes"
"db" = "kubernetes"
# "controlplane" = "kubernetes"
# "web" = "kubernetes"
# "worker" = "kubernetes"
# "db" = "kubernetes"
}
}


@@ -2,7 +2,7 @@ terraform {
required_providers {
proxmox = {
source = "bpg/proxmox"
version = "0.60.0"
version = "0.66.3"
}
sops = {
source = "carlpett/sops"