53 Commits

Author SHA1 Message Date
Dalton Hubble
581f24d11a Update README to correspond to bootkube v0.12.0 2018-04-12 20:09:05 -07:00
Dalton Hubble
15b380a471 Remove deprecated bootstrap apiserver flags
* Remove flags deprecated in Kubernetes v1.10.x
* https://github.com/poseidon/terraform-render-bootkube/pull/50
2018-04-12 19:50:25 -07:00
Dalton Hubble
33e00a6dc5 Use k8s.gcr.io instead of gcr.io/google_containers
* Kubernetes recommends using the alias to fetch images
from the nearest GCR regional mirror, to abstract the
use of GCR, and to drop names containing "google"
* https://groups.google.com/forum/#!msg/kubernetes-dev/ytjk_rNrTa0/3EFUHvovCAAJ
2018-04-08 11:41:48 -07:00
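For reference, a sketch of the renamed entries in the module's `container_images` map after this change; values match the defaults in `variables.tf` below, with the old `gcr.io/google_containers` names noted for comparison.
```hcl
container_images = {
  hyperkube       = "k8s.gcr.io/hyperkube:v1.10.0"             # was gcr.io/google_containers/hyperkube
  kubedns         = "k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.9" # was gcr.io/google_containers/k8s-dns-kube-dns-amd64
  kubedns_dnsmasq = "k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.9"
  kubedns_sidecar = "k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.9"
}
```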
qbast
109ddd2dc1 Add flexvolume plugin mount to controller-manager
* Mount /var/lib/kubelet/volumeplugins by default
2018-04-08 11:37:21 -07:00
Dalton Hubble
b408d80c59 Update kube-dns from v1.14.8 to v1.14.9
* https://github.com/kubernetes/kubernetes/pull/61908
2018-04-04 20:49:59 -07:00
Dalton Hubble
61fb176647 Add optional trusted certs directory variable 2018-04-04 00:35:00 -07:00
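A minimal sketch of a module call overriding the new variable; the `ref` placeholder and the non-default path are illustrative assumptions, while the other values follow this repo's example tfvars.
```hcl
module "bootkube" {
  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=<sha>"

  api_servers  = ["node1.example.com"]
  etcd_servers = ["node1.example.com"]
  asset_dir    = "/home/core/mycluster"
  networking   = "flannel"

  # Host directory holding the distro's trusted CA certificates
  # (defaults to /usr/share/ca-certificates)
  trusted_certs_dir = "/etc/ssl/certs"
}
```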
Dalton Hubble
5f3546b66f Remove deprecated apiserver flags 2018-03-26 20:52:56 -07:00
Dalton Hubble
e01ff60e42 Update hyperkube from v1.9.6 to v1.10.0
* Update pod checkpointer from CRI v1alpha1 to v1alpha2
* https://github.com/kubernetes-incubator/bootkube/pull/940
* https://github.com/kubernetes-incubator/bootkube/pull/938
2018-03-26 19:45:14 -07:00
Dalton Hubble
88b361207d Update hyperkube from v1.9.5 to v1.9.6 2018-03-21 20:27:11 -07:00
Dalton Hubble
747603e90d Update Calico from v3.0.3 to v3.0.4
* Update cni-plugin from v2.0.0 to v2.0.1
* https://github.com/projectcalico/calico/releases/tag/v3.0.4
* https://github.com/projectcalico/cni-plugin/releases/tag/v2.0.1
2018-03-21 20:25:04 -07:00
Andy Cobaugh
366f751283 Change user-kubeconfig output to rendered content 2018-03-21 20:21:04 -07:00
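Since the output now returns rendered content rather than a filename, callers write the file themselves; a sketch using Terraform's `local_file` resource (the module reference name and destination path are assumptions).
```hcl
resource "local_file" "user-kubeconfig" {
  content  = "${module.bootkube.user-kubeconfig}"
  filename = "/home/core/mycluster/auth/kubeconfig"
}
```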
Dalton Hubble
457b596fa0 Update hyperkube from v1.9.4 to v1.9.5 2018-03-18 17:10:15 -07:00
Dalton Hubble
36bf88af70 Add /var/lib/calico volume mount for Calico
* 73705b2cb3
2018-03-18 16:35:45 -07:00
Dalton Hubble
c5fc93d95f Update hyperkube from v1.9.3 to v1.9.4 2018-03-10 23:00:59 -08:00
Dalton Hubble
c92f3589db Update Calico from v3.0.2 to v3.0.3
* https://github.com/projectcalico/calico/releases/tag/v3.0.3
2018-02-24 19:10:49 -08:00
Dalton Hubble
13a20039f5 Update README to correspond to bootkube v0.11.0 2018-02-22 21:48:30 -08:00
Dalton Hubble
070d184644 Update pod-checkpointer image version
* No notable changes except a grace period flag we don't use
* https://github.com/kubernetes-incubator/bootkube/pull/826
2018-02-15 08:03:16 -08:00
Dalton Hubble
cd6f6fa20d Remove PersistentVolumeLabel admission controller flag
* PersistentVolumeLabel admission controller is deprecated in 1.9
2018-02-11 11:25:02 -08:00
Dalton Hubble
8159561165 Switch Deployments and DaemonSets to apps/v1 2018-02-11 11:22:52 -08:00
Dalton Hubble
203b90169e Add Calico GlobalNetworkSet CRD 2018-02-10 13:04:13 -08:00
Dalton Hubble
72ab2b6aa8 Update Calico from v3.0.1 to v3.0.2
* https://github.com/projectcalico/calico/releases/tag/v3.0.2
2018-02-10 12:58:07 -08:00
Dalton Hubble
5d8a9e8986 Remove deprecated apiserver --etcd-quorum-read flag 2018-02-09 17:53:55 -08:00
Dalton Hubble
27857322df Update hyperkube from v1.9.2 to v1.9.3 2018-02-09 16:44:54 -08:00
Dalton Hubble
27d5f62f6c Change DaemonSets to tolerate NoSchedule and NoExecute taints
* Change kube-proxy, flannel, and calico to tolerate any NoSchedule
or NoExecute taint, not just allow running on masters
* https://github.com/kubernetes-incubator/bootkube/pull/704
2018-02-03 05:58:23 +01:00
Dalton Hubble
20adb15d32 Add flannel service account and RBAC cluster role
* Define a limited ClusterRole and service account for flannel
* https://github.com/kubernetes-incubator/bootkube/pull/869
2018-02-03 05:46:31 +01:00
Dalton Hubble
8d40d6c64d Update flannel from v0.9.0 to v0.10.0
* https://github.com/coreos/flannel/releases/tag/v0.10.0
2018-01-28 22:19:42 -08:00
Dalton Hubble
f4ccbeee10 Migrate from Calico v2.6.6 to v3.0.1
* https://github.com/projectcalico/calico/releases/tag/v3.0.1
2018-01-19 23:04:57 -08:00
Dalton Hubble
b339254ed5 Update README to correspond to bootkube v0.10.0 2018-01-19 23:03:03 -08:00
Dalton Hubble
9ccedf7b1e Update Calico from v2.6.5 to v2.6.6
* https://github.com/projectcalico/calico/releases/tag/v2.6.6
2018-01-19 22:18:58 -08:00
Dalton Hubble
9795894004 Update hyperkube from v1.9.1 to v1.9.2 2018-01-19 08:19:28 -08:00
Dalton Hubble
bf07c3edad Update kube-dns from v1.14.7 to v1.14.8
* https://github.com/kubernetes/kubernetes/pull/57918
2018-01-12 09:57:01 -08:00
Dalton Hubble
41a16db127 Add separate service account for kube-dns 2018-01-12 09:15:36 -08:00
Dalton Hubble
b83e321b35 Enable portmap plugin to fix hostPort with Calico
* Ask the Calico sidecar to add a CNI conflist to each node
(for the calico and portmap plugins); switch from CNI conf to conflist
* https://github.com/projectcalico/cni-plugin/blob/v1.11.2/k8s-install/scripts/install-cni.sh
* Related https://github.com/kubernetes-incubator/bootkube/pull/711
2018-01-06 13:33:17 -08:00
Dalton Hubble
28333ec9da Update Calico from v2.6.4 to v2.6.5 2018-01-06 13:17:46 -08:00
Dalton Hubble
891e88a70b Update apiserver --admission-control for v1.9.x
* https://kubernetes.io/docs/admin/admission-controllers
2018-01-06 13:16:27 -08:00
Dalton Hubble
5326239074 Update hyperkube from v1.9.0 to v1.9.1 2018-01-06 11:25:26 -08:00
Dalton Hubble
abe1f6dbf3 Update kube-dns from v1.14.6 to v1.14.7
* https://github.com/kubernetes/kubernetes/pull/54443
2018-01-06 11:24:55 -08:00
Dalton Hubble
4260d9ae87 Update kube-dns version and probe for SRV records
* https://github.com/kubernetes/kubernetes/pull/51378
2018-01-06 11:24:55 -08:00
Dalton Hubble
84c86ed81a Update hyperkube from v1.8.6 to v1.9.0 2018-01-06 11:24:55 -08:00
Dalton Hubble
a97f2ea8de Use an isolated service account for controller-manager
* https://github.com/kubernetes-incubator/bootkube/pull/795
2018-01-06 11:19:11 -08:00
Dalton Hubble
5072569bb7 Update calico/cni sidecar from v1.11.1 to v1.11.2 2017-12-21 11:16:55 -08:00
Dalton Hubble
7a52b30713 Update hyperkube image from v1.8.5 to v1.8.6 2017-12-21 10:26:06 -08:00
Dalton Hubble
73fcee2471 Switch kubeconfig-in-cluster from Secret to ConfigMap
* kubeconfig-in-cluster doesn't contain secrets, just references
to locations
2017-12-21 09:15:15 -08:00
Dalton Hubble
b25d802e3e Update Calico from v2.6.3 to v2.6.4
* https://github.com/projectcalico/calico/releases/tag/v2.6.4
2017-12-21 08:57:02 -08:00
Dalton Hubble
df22b04db7 Update README to correspond to bootkube v0.9.1 2017-12-15 01:40:25 -08:00
Dalton Hubble
6dc7630020 Fix Terraform formatting with fmt 2017-12-13 00:58:26 -08:00
Dalton Hubble
3ec47194ce Rename cluster_dns_fqdn variable to cluster_domain_suffix 2017-12-13 00:11:16 -08:00
Barak Michener
03ca146ef3 Add option for Cluster DNS having a FQDN other than cluster.local 2017-12-12 10:17:53 -08:00
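A sketch of a `terraform.tfvars` using the new variable, following the example tfvars in this repo; the suffix value is illustrative.
```hcl
api_servers           = ["node1.example.com"]
etcd_servers          = ["node1.example.com"]
asset_dir             = "/home/core/mycluster"
networking            = "flannel"
cluster_domain_suffix = "k8s.example.net" # default: cluster.local
```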
Dalton Hubble
5763b447de Remove self-hosted etcd TLS cert SANs
* Remove the self-hosted etcd service IP, now defunct
2017-12-12 00:30:04 -08:00
Dalton Hubble
36243ff89b Update pod-checkpointer and drop ClusterRole to Role
* pod-checkpointer no longer needs to watch pods in all namespaces;
it should only have permission to watch kube-system
* https://github.com/kubernetes-incubator/bootkube/pull/784
2017-12-12 00:10:55 -08:00
Dalton Hubble
810ddfad9f Add controller-manager flag for service_cidr
* controller-manager can handle overlapping pod and service CIDRs
to avoid address collisions, if it's informed of both ranges
* Still favor non-overlapping pod and service ranges, of course
* https://github.com/kubernetes-incubator/bootkube/pull/797
2017-12-12 00:00:26 -08:00
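With the flag added, the controller-manager learns both ranges; an illustrative pair of non-overlapping CIDRs as tfvars (the pod range is an assumption, the service range matches the module default).
```hcl
pod_cidr     = "10.2.0.0/16" # pods
service_cidr = "10.3.0.0/24" # services; 1st IP -> apiserver, 10th -> kube-dns
```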
Dalton Hubble
ec48758c5e Remove experimental self-hosted etcd options 2017-12-11 21:51:07 -08:00
Dalton Hubble
533e82f833 Update hyperkube from v1.8.4 to v1.8.5 2017-12-08 08:46:22 -08:00
49 changed files with 287 additions and 469 deletions

View File

@@ -34,15 +34,13 @@ Find bootkube assets rendered to the `asset_dir` path. That's it.
### Comparison
Render bootkube assets directly with bootkube v0.9.0.
#### On-host etcd (recommended)
Render bootkube assets directly with bootkube v0.12.0.
```sh
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --etcd-servers=https://node1.example.com:2379
```
Compare assets. The only diffs you should see are TLS credentials.
Compare assets. Rendered assets may differ slightly from bootkube assets to reflect decisions made by the [Typhoon](https://github.com/poseidon/typhoon) distribution.
```sh
pushd /home/core/mycluster
@@ -50,21 +48,3 @@ mv manifests-networking/* manifests
popd
diff -rw assets /home/core/mycluster
```
#### Self-hosted etcd (deprecated)
```sh
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --experimental-self-hosted-etcd
```
Compare assets. Note that experimental assets must be generated to a separate directory for Terraform applies to sync. Move the experimental `bootstrap-manifests` and `manifests` files during deployment.
```sh
pushd /home/core/mycluster
mv experimental/bootstrap-manifests/* bootstrap-manifests
mv experimental/manifests/* manifests
mv manifests-networking/* manifests
popd
diff -rw assets /home/core/mycluster
```

View File

@@ -5,11 +5,13 @@ resource "template_dir" "bootstrap-manifests" {
vars {
hyperkube_image = "${var.container_images["hyperkube"]}"
etcd_servers = "${var.experimental_self_hosted_etcd ? format("https://%s:2379,https://127.0.0.1:12379", cidrhost(var.service_cidr, 15)) : join(",", formatlist("https://%s:2379", var.etcd_servers))}"
etcd_servers = "${join(",", formatlist("https://%s:2379", var.etcd_servers))}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
trusted_certs_dir = "${var.trusted_certs_dir}"
}
}
@@ -25,15 +27,17 @@ resource "template_dir" "manifests" {
kubedns_dnsmasq_image = "${var.container_images["kubedns_dnsmasq"]}"
kubedns_sidecar_image = "${var.container_images["kubedns_sidecar"]}"
etcd_servers = "${var.experimental_self_hosted_etcd ? format("https://%s:2379", cidrhost(var.service_cidr, 15)) : join(",", formatlist("https://%s:2379", var.etcd_servers))}"
etcd_servers = "${join(",", formatlist("https://%s:2379", var.etcd_servers))}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
kube_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
kube_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
trusted_certs_dir = "${var.trusted_certs_dir}"
ca_cert = "${base64encode(var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_certificate)}"
server = "${format("https://%s:443", element(var.api_servers, 0))}"
server = "${format("https://%s:443", element(var.api_servers, 0))}"
apiserver_key = "${base64encode(tls_private_key.apiserver.private_key_pem)}"
apiserver_cert = "${base64encode(tls_locally_signed_cert.apiserver.cert_pem)}"
serviceaccount_pub = "${base64encode(tls_private_key.service-account.public_key_pem)}"

View File

@@ -1,4 +1,4 @@
# Assets generated only when experimental self-hosted etcd is enabled
# Assets generated only when certain options are chosen
resource "template_dir" "flannel-manifests" {
count = "${var.networking == "flannel" ? 1 : 0}"
@@ -26,49 +26,3 @@ resource "template_dir" "calico-manifests" {
pod_cidr = "${var.pod_cidr}"
}
}
# bootstrap-etcd.yaml pod bootstrap-manifest
resource "template_dir" "experimental-bootstrap-manifests" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
source_dir = "${path.module}/resources/experimental/bootstrap-manifests"
destination_dir = "${var.asset_dir}/experimental/bootstrap-manifests"
vars {
etcd_image = "${var.container_images["etcd"]}"
bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 20)}"
}
}
# etcd subfolder - bootstrap-etcd-service.json and migrate-etcd-cluster.json TPR
resource "template_dir" "etcd-subfolder" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
source_dir = "${path.module}/resources/etcd"
destination_dir = "${var.asset_dir}/etcd"
vars {
bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 20)}"
}
}
# etcd-operator deployment and etcd-service manifests
# etcd client, server, and peer tls secrets
resource "template_dir" "experimental-manifests" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
source_dir = "${path.module}/resources/experimental/manifests"
destination_dir = "${var.asset_dir}/experimental/manifests"
vars {
etcd_operator_image = "${var.container_images["etcd_operator"]}"
etcd_checkpointer_image = "${var.container_images["etcd_checkpointer"]}"
etcd_service_ip = "${cidrhost(var.service_cidr, 15)}"
# Self-hosted etcd TLS certs / keys
etcd_ca_cert = "${base64encode(tls_self_signed_cert.etcd-ca.cert_pem)}"
etcd_client_cert = "${base64encode(tls_locally_signed_cert.client.cert_pem)}"
etcd_client_key = "${base64encode(tls_private_key.client.private_key_pem)}"
etcd_server_cert = "${base64encode(tls_locally_signed_cert.server.cert_pem)}"
etcd_server_key = "${base64encode(tls_private_key.server.private_key_pem)}"
etcd_peer_cert = "${base64encode(tls_locally_signed_cert.peer.cert_pem)}"
etcd_peer_key = "${base64encode(tls_private_key.peer.private_key_pem)}"
}
}

View File

@@ -10,16 +10,12 @@ output "kube_dns_service_ip" {
value = "${cidrhost(var.service_cidr, 10)}"
}
output "etcd_service_ip" {
value = "${cidrhost(var.service_cidr, 15)}"
}
output "kubeconfig" {
value = "${data.template_file.kubeconfig.rendered}"
}
output "user-kubeconfig" {
value = "${local_file.user-kubeconfig.filename}"
value = "${data.template_file.user-kubeconfig.rendered}"
}
# etcd TLS assets

View File

@@ -10,18 +10,16 @@ spec:
command:
- /hyperkube
- apiserver
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --authorization-mode=RBAC
- --bind-address=0.0.0.0
- --client-ca-file=/etc/kubernetes/secrets/ca.crt
- --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
- --etcd-cafile=/etc/kubernetes/secrets/etcd-client-ca.crt
- --etcd-certfile=/etc/kubernetes/secrets/etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/secrets/etcd-client.key
- --etcd-quorum-read=true
- --etcd-servers=${etcd_servers}
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
- --secure-port=443
@@ -29,7 +27,6 @@ spec:
- --service-cluster-ip-range=${service_cidr}
- --cloud-provider=${cloud_provider}
- --storage-backend=etcd3
- --tls-ca-file=/etc/kubernetes/secrets/ca.crt
- --tls-cert-file=/etc/kubernetes/secrets/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/secrets/apiserver.key
env:
@@ -54,7 +51,7 @@ spec:
path: /etc/kubernetes/bootstrap-secrets
- name: ssl-certs-host
hostPath:
path: /usr/share/ca-certificates
path: ${trusted_certs_dir}
- name: var-lock
hostPath:
path: /var/lock

View File

@@ -12,6 +12,7 @@ spec:
- controller-manager
- --allocate-node-cidrs=true
- --cluster-cidr=${pod_cidr}
- --service-cluster-ip-range=${service_cidr}
- --cloud-provider=${cloud_provider}
- --configure-cloud-routes=false
- --kubeconfig=/etc/kubernetes/kubeconfig
@@ -32,4 +33,4 @@ spec:
path: /etc/kubernetes
- name: ssl-host
hostPath:
path: /usr/share/ca-certificates
path: ${trusted_certs_dir}

View File

@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico BGP Configuration
kind: CustomResourceDefinition
metadata:
name: bgpconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPConfiguration
plural: bgpconfigurations
singular: bgpconfiguration

View File

@@ -2,7 +2,6 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: calico-node
namespace: kube-system
rules:
- apiGroups: [""]
resources:
@@ -23,6 +22,17 @@ rules:
- get
- list
- watch
- patch
- apiGroups: [""]
resources:
- services
verbs:
- get
- apiGroups: [""]
resources:
- endpoints
verbs:
- get
- apiGroups: [""]
resources:
- nodes
@@ -41,10 +51,15 @@ rules:
- apiGroups: ["crd.projectcalico.org"]
resources:
- globalfelixconfigs
- felixconfigurations
- bgppeers
- globalbgpconfigs
- bgpconfigurations
- ippools
- globalnetworkpolicies
- globalnetworksets
- networkpolicies
- clusterinformations
verbs:
- create
- get

View File

@@ -4,26 +4,37 @@ metadata:
name: calico-config
namespace: kube-system
data:
# Disable Typha for now.
typha_service_name: "none"
# The CNI network configuration to install on each node.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.0",
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": ${network_mtu},
"ipam": {
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": ${network_mtu},
"ipam": {
"type": "host-local",
"subnet": "usePodCidr"
},
"policy": {
},
"policy": {
"type": "k8s",
"k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
},
"kubernetes": {
},
"kubernetes": {
"k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
}
]
}

View File

@@ -1,13 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Global Felix Configuration
kind: CustomResourceDefinition
metadata:
name: globalfelixconfigs.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalFelixConfig
plural: globalfelixconfigs
singular: globalfelixconfig

View File

@@ -1,13 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Global BGP Configuration
kind: CustomResourceDefinition
metadata:
name: globalbgpconfigs.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalBGPConfig
plural: globalbgpconfigs
singular: globalbgpconfig

View File

@@ -1,4 +1,4 @@
apiVersion: apps/v1beta2
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: calico-node
@@ -9,6 +9,10 @@ spec:
selector:
matchLabels:
k8s-app: calico-node
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
@@ -17,9 +21,10 @@ spec:
hostNetwork: true
serviceAccountName: calico-node
tolerations:
# Allow the pod to run on master nodes
- key: node-role.kubernetes.io/master
effect: NoSchedule
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
containers:
- name: calico-node
image: ${calico_image}
@@ -57,15 +62,20 @@ spec:
# Enable IP-in-IP within Felix.
- name: FELIX_IPINIPENABLED
value: "true"
# Typha support: controlled by the ConfigMap.
- name: FELIX_TYPHAK8SSERVICENAME
valueFrom:
configMapKeyRef:
name: calico-config
key: typha_service_name
# Set node name based on k8s nodeName.
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Auto-detect the BGP IP address.
- name: IP
valueFrom:
fieldRef:
fieldPath: status.podIP
value: "autodetect"
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
@@ -92,23 +102,30 @@ spec:
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
- mountPath: /var/lib/calico
name: var-lib-calico
readOnly: false
# Install Calico CNI binaries and CNI network config file on nodes
- name: install-cni
image: ${calico_cni_image}
command: ["/install-cni.sh"]
env:
# Name of the CNI config file to create on each node.
- name: CNI_CONF_NAME
value: "10-calico.conflist"
# Contents of the CNI config to create on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
- name: CNI_NET_DIR
value: "/etc/kubernetes/cni/net.d"
# Set node name based on k8s nodeName
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CNI_NET_DIR
value: "/etc/kubernetes/cni/net.d"
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
@@ -116,19 +133,20 @@ spec:
name: cni-net-dir
terminationGracePeriodSeconds: 0
volumes:
# Used by calico/node
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: var-lib-calico
hostPath:
path: /var/lib/calico
# Used by install-cni
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/kubernetes/cni/net.d
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate

View File

@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Cluster Information
kind: CustomResourceDefinition
metadata:
name: clusterinformations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: ClusterInformation
plural: clusterinformations
singular: clusterinformation

View File

@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Felix Configuration
kind: CustomResourceDefinition
metadata:
name: felixconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: FelixConfiguration
plural: felixconfigurations
singular: felixconfiguration

View File

@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Global Network Sets
kind: CustomResourceDefinition
metadata:
name: globalnetworksets.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkSet
plural: globalnetworksets
singular: globalnetworkset

View File

@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Network Policies
kind: CustomResourceDefinition
metadata:
name: networkpolicies.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkPolicy
plural: networkpolicies
singular: networkpolicy

View File

@@ -1,26 +0,0 @@
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"name": "bootstrap-etcd-service",
"namespace": "kube-system"
},
"spec": {
"selector": {
"k8s-app": "boot-etcd"
},
"clusterIP": "${bootstrap_etcd_service_ip}",
"ports": [
{
"name": "client",
"port": 12379,
"protocol": "TCP"
},
{
"name": "peers",
"port": 12380,
"protocol": "TCP"
}
]
}
}

View File

@@ -1,36 +0,0 @@
{
"apiVersion": "etcd.database.coreos.com/v1beta2",
"kind": "EtcdCluster",
"metadata": {
"name": "kube-etcd",
"namespace": "kube-system"
},
"spec": {
"size": 1,
"version": "v3.1.8",
"pod": {
"nodeSelector": {
"node-role.kubernetes.io/master": ""
},
"tolerations": [
{
"key": "node-role.kubernetes.io/master",
"operator": "Exists",
"effect": "NoSchedule"
}
]
},
"selfHosted": {
"bootMemberClientEndpoint": "https://${bootstrap_etcd_service_ip}:12379"
},
"TLS": {
"static": {
"member": {
"peerSecret": "etcd-peer-tls",
"serverSecret": "etcd-server-tls"
},
"operatorSecret": "etcd-client-tls"
}
}
}
}

View File

@@ -1,41 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: bootstrap-etcd
namespace: kube-system
labels:
k8s-app: boot-etcd
spec:
containers:
- name: etcd
image: ${etcd_image}
command:
- /usr/local/bin/etcd
- --name=boot-etcd
- --listen-client-urls=https://0.0.0.0:12379
- --listen-peer-urls=https://0.0.0.0:12380
- --advertise-client-urls=https://${bootstrap_etcd_service_ip}:12379
- --initial-advertise-peer-urls=https://${bootstrap_etcd_service_ip}:12380
- --initial-cluster=boot-etcd=https://${bootstrap_etcd_service_ip}:12380
- --initial-cluster-token=bootkube
- --initial-cluster-state=new
- --data-dir=/var/etcd/data
- --peer-client-cert-auth=true
- --peer-trusted-ca-file=/etc/kubernetes/secrets/etcd/peer-ca.crt
- --peer-cert-file=/etc/kubernetes/secrets/etcd/peer.crt
- --peer-key-file=/etc/kubernetes/secrets/etcd/peer.key
- --client-cert-auth=true
- --trusted-ca-file=/etc/kubernetes/secrets/etcd/server-ca.crt
- --cert-file=/etc/kubernetes/secrets/etcd/server.crt
- --key-file=/etc/kubernetes/secrets/etcd/server.key
volumeMounts:
- mountPath: /etc/kubernetes/secrets
name: secrets
readOnly: true
volumes:
- name: secrets
hostPath:
path: /etc/kubernetes/bootstrap-secrets
hostNetwork: true
restartPolicy: Never
dnsPolicy: ClusterFirstWithHostNet

View File

@@ -1,10 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: etcd-client-tls
namespace: kube-system
type: Opaque
data:
etcd-client-ca.crt: ${etcd_ca_cert}
etcd-client.crt: ${etcd_client_cert}
etcd-client.key: ${etcd_client_key}

View File

@@ -1,46 +0,0 @@
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: etcd-operator
namespace: kube-system
labels:
k8s-app: etcd-operator
spec:
replicas: 1
selector:
matchLabels:
k8s-app: etcd-operator
template:
metadata:
labels:
k8s-app: etcd-operator
spec:
containers:
- name: etcd-operator
image: ${etcd_operator_image}
command:
- /usr/local/bin/etcd-operator
- --analytics=false
env:
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
nodeSelector:
node-role.kubernetes.io/master: ""
securityContext:
runAsNonRoot: true
runAsUser: 65534
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1

View File

@@ -1,10 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: etcd-peer-tls
namespace: kube-system
type: Opaque
data:
peer-ca.crt: ${etcd_ca_cert}
peer.crt: ${etcd_peer_cert}
peer.key: ${etcd_peer_key}

View File

@@ -1,10 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: etcd-server-tls
namespace: kube-system
type: Opaque
data:
server-ca.crt: ${etcd_ca_cert}
server.crt: ${etcd_server_cert}
server.key: ${etcd_server_key}

View File

@@ -1,18 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: etcd-service
namespace: kube-system
# This alpha annotation will retain the endpoints even if the etcd pod isn't ready.
# This feature is always enabled in the endpoint controller in k8s even though it is alpha.
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
selector:
app: etcd
etcd_cluster: kube-etcd
clusterIP: ${etcd_service_ip}
ports:
- name: client
port: 2379
protocol: TCP

View File

@@ -1,62 +0,0 @@
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
name: kube-etcd-network-checkpointer
namespace: kube-system
labels:
tier: control-plane
k8s-app: kube-etcd-network-checkpointer
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: kube-etcd-network-checkpointer
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-etcd-network-checkpointer
annotations:
checkpointer.alpha.coreos.com/checkpoint: "true"
spec:
containers:
- image: ${etcd_checkpointer_image}
name: kube-etcd-network-checkpointer
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/kubernetes/selfhosted-etcd
name: checkpoint-dir
readOnly: false
- mountPath: /var/etcd
name: etcd-dir
readOnly: false
- mountPath: /var/lock
name: var-lock
readOnly: false
command:
- /usr/bin/flock
- /var/lock/kenc.lock
- -c
- "kenc -r -m iptables && kenc -m iptables"
hostNetwork: true
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- name: checkpoint-dir
hostPath:
path: /etc/kubernetes/checkpoint-iptables
- name: etcd-dir
hostPath:
path: /var/etcd
- name: var-lock
hostPath:
path: /var/lock
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate

View File

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system

View File

@@ -0,0 +1,24 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: flannel
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch

View File

@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system

View File

@@ -1,4 +1,4 @@
apiVersion: apps/v1beta2
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel
@@ -17,6 +17,7 @@ spec:
tier: node
k8s-app: flannel
spec:
serviceAccountName: flannel
containers:
- name: kube-flannel
image: ${flannel_image}
@@ -59,9 +60,10 @@ spec:
mountPath: /host/opt/cni/bin/
hostNetwork: true
tolerations:
- key: node-role.kubernetes.io/master
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
effect: NoSchedule
volumes:
- name: run
hostPath:

View File

@@ -1,4 +1,4 @@
apiVersion: apps/v1beta2
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-apiserver
@@ -25,7 +25,6 @@ spec:
command:
- /hyperkube
- apiserver
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --anonymous-auth=false
@@ -33,19 +32,17 @@ spec:
- --bind-address=0.0.0.0
- --client-ca-file=/etc/kubernetes/secrets/ca.crt
- --cloud-provider=${cloud_provider}
- --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
- --etcd-cafile=/etc/kubernetes/secrets/etcd-client-ca.crt
- --etcd-certfile=/etc/kubernetes/secrets/etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/secrets/etcd-client.key
- --etcd-quorum-read=true
- --etcd-servers=${etcd_servers}
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
- --secure-port=443
- --service-account-key-file=/etc/kubernetes/secrets/service-account.pub
- --service-cluster-ip-range=${service_cidr}
- --storage-backend=etcd3
- --tls-ca-file=/etc/kubernetes/secrets/ca.crt
- --tls-cert-file=/etc/kubernetes/secrets/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/secrets/apiserver.key
env:
@@ -73,7 +70,7 @@ spec:
volumes:
- name: ssl-certs-host
hostPath:
path: /usr/share/ca-certificates
path: ${trusted_certs_dir}
- name: secrets
secret:
secretName: kube-apiserver

View File

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: controller-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-controller-manager
subjects:
- kind: ServiceAccount
name: kube-controller-manager
namespace: kube-system

View File

@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: kube-controller-manager

View File

@@ -1,4 +1,4 @@
apiVersion: apps/v1beta2
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-controller-manager
@@ -40,11 +40,14 @@ spec:
command:
- ./hyperkube
- controller-manager
- --use-service-account-credentials
- --allocate-node-cidrs=true
- --cloud-provider=${cloud_provider}
- --cluster-cidr=${pod_cidr}
- --service-cluster-ip-range=${service_cidr}
- --configure-cloud-routes=false
- --leader-elect=true
- --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins
- --root-ca-file=/etc/kubernetes/secrets/ca.crt
- --service-account-private-key-file=/etc/kubernetes/secrets/service-account.key
livenessProbe:
@@ -57,6 +60,9 @@ spec:
- name: secrets
mountPath: /etc/kubernetes/secrets
readOnly: true
- name: volumeplugins
mountPath: /var/lib/kubelet/volumeplugins
readOnly: true
- name: ssl-host
mountPath: /etc/ssl/certs
readOnly: true
@@ -65,6 +71,7 @@ spec:
securityContext:
runAsNonRoot: true
runAsUser: 65534
serviceAccountName: kube-controller-manager
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
@@ -75,5 +82,8 @@ spec:
secretName: kube-controller-manager
- name: ssl-host
hostPath:
path: /usr/share/ca-certificates
path: ${trusted_certs_dir}
- name: volumeplugins
hostPath:
path: /var/lib/kubelet/volumeplugins
dnsPolicy: Default # Don't use cluster DNS.

View File

@@ -1,4 +1,4 @@
apiVersion: apps/v1beta2
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-dns
@@ -67,7 +67,7 @@ spec:
initialDelaySeconds: 3
timeoutSeconds: 5
args:
- --domain=cluster.local.
- --domain=${cluster_domain_suffix}.
- --dns-port=10053
- --config-dir=/kube-dns-config
- --v=2
@@ -108,7 +108,7 @@ spec:
- --cache-size=1000
- --no-negcache
- --log-facility=-
- --server=/cluster.local/127.0.0.1#10053
- --server=/${cluster_domain_suffix}/127.0.0.1#10053
- --server=/in-addr.arpa/127.0.0.1#10053
- --server=/ip6.arpa/127.0.0.1#10053
ports:
@@ -140,8 +140,8 @@ spec:
args:
- --v=2
- --logtostderr
- --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
- --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
- --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.${cluster_domain_suffix},5,SRV
- --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.${cluster_domain_suffix},5,SRV
ports:
- containerPort: 10054
name: metrics
@@ -151,3 +151,4 @@ spec:
memory: 20Mi
cpu: 10m
dnsPolicy: Default # Don't use cluster DNS.
serviceAccountName: kube-dns

View File

@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-dns
namespace: kube-system

View File

@@ -1,4 +1,4 @@
apiVersion: apps/v1beta2
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-proxy
@@ -47,19 +47,20 @@ spec:
hostNetwork: true
serviceAccountName: kube-proxy
tolerations:
- key: node-role.kubernetes.io/master
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
effect: NoSchedule
volumes:
- name: lib-modules
hostPath:
path: /lib/modules
- name: ssl-certs-host
hostPath:
path: /usr/share/ca-certificates
path: ${trusted_certs_dir}
- name: kubeconfig
secret:
secretName: kubeconfig-in-cluster
configMap:
name: kubeconfig-in-cluster
updateStrategy:
rollingUpdate:
maxUnavailable: 1

View File

@@ -1,4 +1,4 @@
apiVersion: apps/v1beta2
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-scheduler

View File

@@ -1,16 +1,16 @@
apiVersion: v1
kind: Secret
kind: ConfigMap
metadata:
name: kubeconfig-in-cluster
namespace: kube-system
stringData:
data:
kubeconfig: |
apiVersion: v1
clusters:
- name: local
cluster:
server: ${server}
certificate-authority-data: ${ca_cert}
certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
users:
- name: service-account
user:

View File

@@ -1,10 +1,11 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
kind: RoleBinding
metadata:
name: pod-checkpointer
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
kind: Role
name: pod-checkpointer
subjects:
- kind: ServiceAccount

View File

@@ -1,7 +1,8 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
kind: Role
metadata:
name: pod-checkpointer
namespace: kube-system
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods"]

View File

@@ -1,4 +1,4 @@
apiVersion: apps/v1beta2
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: pod-checkpointer
@@ -58,8 +58,8 @@ spec:
effect: NoSchedule
volumes:
- name: kubeconfig
secret:
secretName: kubeconfig-in-cluster
configMap:
name: kubeconfig-in-cluster
- name: etc-kubernetes
hostPath:
path: /etc/kubernetes

View File

@@ -3,4 +3,3 @@ api_servers = ["node1.example.com"]
etcd_servers = ["node1.example.com"]
asset_dir = "/home/core/mycluster"
networking = "flannel"
experimental_self_hosted_etcd = false

View File

@@ -96,16 +96,12 @@ resource "tls_cert_request" "client" {
ip_addresses = [
"127.0.0.1",
"${cidrhost(var.service_cidr, 15)}",
"${cidrhost(var.service_cidr, 20)}",
]
dns_names = ["${concat(
var.etcd_servers,
list(
"localhost",
"*.kube-etcd.kube-system.svc.cluster.local",
"kube-etcd-client.kube-system.svc.cluster.local",
))}"]
}
@@ -142,16 +138,12 @@ resource "tls_cert_request" "server" {
ip_addresses = [
"127.0.0.1",
"${cidrhost(var.service_cidr, 15)}",
"${cidrhost(var.service_cidr, 20)}",
]
dns_names = ["${concat(
var.etcd_servers,
list(
"localhost",
"*.kube-etcd.kube-system.svc.cluster.local",
"kube-etcd-client.kube-system.svc.cluster.local",
))}"]
}
@@ -186,16 +178,7 @@ resource "tls_cert_request" "peer" {
organization = "etcd"
}
ip_addresses = [
"${cidrhost(var.service_cidr, 20)}",
]
dns_names = ["${concat(
var.etcd_servers,
list(
"*.kube-etcd.kube-system.svc.cluster.local",
"kube-etcd-client.kube-system.svc.cluster.local",
))}"]
dns_names = ["${var.etcd_servers}"]
}
resource "tls_locally_signed_cert" "peer" {

View File

@@ -70,7 +70,7 @@ resource "tls_cert_request" "apiserver" {
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster.local",
"kubernetes.default.svc.${var.cluster_domain_suffix}",
]
ip_addresses = [

View File

@@ -9,15 +9,10 @@ variable "api_servers" {
}
variable "etcd_servers" {
description = "List of URLs used to reach etcd servers. Ignored if experimental self-hosted etcd is enabled."
description = "List of URLs used to reach etcd servers."
type = "list"
}
variable "experimental_self_hosted_etcd" {
description = "(Experimental) Create self-hosted etcd assets"
default = false
}
variable "asset_dir" {
description = "Path to a directory where generated assets should be placed (contains secrets)"
type = "string"
@@ -50,33 +45,42 @@ variable "pod_cidr" {
variable "service_cidr" {
description = <<EOD
CIDR IP range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns, the 15th IP will be reserved for self-hosted etcd, and the 20th IP will be reserved for bootstrap self-hosted etcd.
The 1st IP will be reserved for kube_apiserver and the 10th IP will be reserved for kube-dns.
EOD
type = "string"
default = "10.3.0.0/24"
}
variable "cluster_domain_suffix" {
description = "Queries for domains with the suffix will be answered by kube-dns"
type = "string"
default = "cluster.local"
}
variable "container_images" {
description = "Container images to use"
type = "map"
default = {
calico = "quay.io/calico/node:v2.6.3"
calico_cni = "quay.io/calico/cni:v1.11.1"
etcd = "quay.io/coreos/etcd:v3.1.8"
etcd_operator = "quay.io/coreos/etcd-operator:v0.5.0"
etcd_checkpointer = "quay.io/coreos/kenc:0.0.2"
flannel = "quay.io/coreos/flannel:v0.9.1-amd64"
flannel_cni = "quay.io/coreos/flannel-cni:v0.3.0"
hyperkube = "gcr.io/google_containers/hyperkube:v1.8.4"
kubedns = "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5"
kubedns_dnsmasq = "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5"
kubedns_sidecar = "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5"
pod_checkpointer = "quay.io/coreos/pod-checkpointer:e22cc0e3714378de92f45326474874eb602ca0ac"
calico = "quay.io/calico/node:v3.0.4"
calico_cni = "quay.io/calico/cni:v2.0.1"
flannel = "quay.io/coreos/flannel:v0.10.0-amd64"
flannel_cni = "quay.io/coreos/flannel-cni:v0.3.0"
hyperkube = "k8s.gcr.io/hyperkube:v1.10.0"
kubedns = "k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.9"
kubedns_dnsmasq = "k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.9"
kubedns_sidecar = "k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.9"
pod_checkpointer = "quay.io/coreos/pod-checkpointer:9dc83e1ab3bc36ca25c9f7c18ddef1b91d4a0558"
}
}
variable "trusted_certs_dir" {
description = "Path to the directory on cluster nodes where trust TLS certs are kept"
type = "string"
default = "/usr/share/ca-certificates"
}
variable "ca_certificate" {
description = "Existing PEM-encoded CA certificate (generated if blank)"
type = "string"