mirror of
https://github.com/outbackdingo/terraform-render-bootstrap.git
synced 2026-01-27 18:20:40 +00:00
Compare commits (83 commits)
README.md
@@ -1,48 +1,46 @@
-# bootkube-terraform
+# terraform-render-bootkube

-`bootkube-terraform` is Terraform module that renders [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrapping assets. It functions as a low-level component of the [Typhoon](https://github.com/poseidon/typhoon) Kubernetes distribution.
+`terraform-render-bootkube` is a Terraform module that renders [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube) assets for bootstrapping a Kubernetes cluster.

 The module provides many of the same variable names, defaults, features, and outputs as running `bootkube render` directly.

+## Audience
+
+`terraform-render-bootkube` is a low-level component of the [Typhoon](https://github.com/poseidon/typhoon) Kubernetes distribution. Use Typhoon modules to create and manage Kubernetes clusters across supported platforms. Use the bootkube module if you'd like to customize a Kubernetes control plane or build your own distribution.
+
 ## Usage

-Use [Typhoon](https://github.com/poseidon/typhoon) to create and manage Kubernetes clusters in different environments. Use `bootkube-terraform` if you require low-level customizations to the control plane or wish to build your own distribution.
-
-Add the `bootkube-terraform` module alongside existing Terraform configs. Provide the variables listed in `variables.tf` or check `terraform.tfvars.example` for examples.
+Use the module to declare bootkube assets. Check [variables.tf](variables.tf) for options and [terraform.tfvars.example](terraform.tfvars.example) for examples.

 ```hcl
 module "bootkube" {
-  source = "git://https://github.com/dghubble/bootkube-terraform.git?ref=SHA"
+  source = "git://https://github.com/poseidon/terraform-render-bootkube.git?ref=SHA"

   cluster_name = "example"
   api_servers = ["node1.example.com"]
   etcd_servers = ["node1.example.com"]
   asset_dir = "/home/core/clusters/mycluster"
-  experimental_self_hosted_etcd = false
 }
 ```

 Alternately, use a local checkout of this repo and copy `terraform.tfvars.example` to `terraform.tfvars` to generate assets without an existing terraform config repo.

-Generate the bootkube assets.
+Generate the assets.

 ```sh
-terraform get
+terraform init
+terraform get --update
 terraform plan
 terraform apply
 ```

 Find bootkube assets rendered to the `asset_dir` path. That's it.

 ### Comparison

-Render bootkube assets directly with bootkube v0.6.2.
-
-#### On-host etcd
-
+Render bootkube assets directly with bootkube v0.12.0.

 ```sh
 bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --etcd-servers=https://node1.example.com:2379
 ```

-Compare assets. The only diffs you should see are TLS credentials.
+Compare assets. Rendered assets may differ slightly from bootkube assets to reflect decisions made by the [Typhoon](https://github.com/poseidon/typhoon) distribution.

 ```sh
 pushd /home/core/mycluster
@@ -50,21 +48,3 @@ mv manifests-networking/* manifests
 popd
 diff -rw assets /home/core/mycluster
 ```
-
-#### Self-hosted etcd
-
-```sh
-bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --experimental-self-hosted-etcd
-```
-
-Compare assets. Note that experimental must be generated to a separate directory for terraform applies to sync. Move the experimental `bootstrap-manifests` and `manifests` files during deployment.
-
-```sh
-pushd /home/core/mycluster
-mv experimental/bootstrap-manifests/* boostrap-manifests
-mv experimental/manifests/* manifests
-mv manifests-networking/* manifests
-popd
-diff -rw assets /home/core/mycluster
-```
assets.tf
@@ -5,11 +5,13 @@ resource "template_dir" "bootstrap-manifests" {

   vars {
     hyperkube_image = "${var.container_images["hyperkube"]}"
-    etcd_servers = "${var.experimental_self_hosted_etcd ? format("https://%s:2379,https://127.0.0.1:12379", cidrhost(var.service_cidr, 15)) : join(",", formatlist("https://%s:2379", var.etcd_servers))}"
+    etcd_servers = "${join(",", formatlist("https://%s:2379", var.etcd_servers))}"

     cloud_provider = "${var.cloud_provider}"
     pod_cidr = "${var.pod_cidr}"
     service_cidr = "${var.service_cidr}"
+    trusted_certs_dir = "${var.trusted_certs_dir}"
   }
 }

@@ -19,15 +21,23 @@ resource "template_dir" "manifests" {
   destination_dir = "${var.asset_dir}/manifests"

   vars {
-    hyperkube_image = "${var.container_images["hyperkube"]}"
-    etcd_servers = "${var.experimental_self_hosted_etcd ? format("https://%s:2379", cidrhost(var.service_cidr, 15)) : join(",", formatlist("https://%s:2379", var.etcd_servers))}"
-
-    cloud_provider = "${var.cloud_provider}"
-    pod_cidr = "${var.pod_cidr}"
-    service_cidr = "${var.service_cidr}"
-    kube_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
+    hyperkube_image = "${var.container_images["hyperkube"]}"
+    pod_checkpointer_image = "${var.container_images["pod_checkpointer"]}"
+    kubedns_image = "${var.container_images["kubedns"]}"
+    kubedns_dnsmasq_image = "${var.container_images["kubedns_dnsmasq"]}"
+    kubedns_sidecar_image = "${var.container_images["kubedns_sidecar"]}"
+    etcd_servers = "${join(",", formatlist("https://%s:2379", var.etcd_servers))}"
+
+    cloud_provider = "${var.cloud_provider}"
+    pod_cidr = "${var.pod_cidr}"
+    service_cidr = "${var.service_cidr}"
+    cluster_domain_suffix = "${var.cluster_domain_suffix}"
+    kube_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
     trusted_certs_dir = "${var.trusted_certs_dir}"

     ca_cert = "${base64encode(var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_certificate)}"
     server = "${format("https://%s:443", element(var.api_servers, 0))}"
     apiserver_key = "${base64encode(tls_private_key.apiserver.private_key_pem)}"
     apiserver_cert = "${base64encode(tls_locally_signed_cert.apiserver.cert_pem)}"
     serviceaccount_pub = "${base64encode(tls_private_key.service-account.public_key_pem)}"
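The `etcd_servers` and `kube_dns_service_ip` interpolations above are plain string formatting and CIDR arithmetic. A minimal Python sketch of what Terraform's `formatlist`/`join` and `cidrhost` produce (the hostnames and CIDR below are example inputs, not values from this repo):

```python
import ipaddress

def cidrhost(cidr: str, index: int) -> str:
    # Mimic Terraform's cidrhost(): the index-th address within a CIDR block.
    return str(ipaddress.ip_network(cidr)[index])

def etcd_client_urls(etcd_servers: list[str]) -> str:
    # Mimic join(",", formatlist("https://%s:2379", var.etcd_servers)).
    return ",".join(f"https://{s}:2379" for s in etcd_servers)

print(etcd_client_urls(["node1.example.com", "node2.example.com"]))
# https://node1.example.com:2379,https://node2.example.com:2379
print(cidrhost("10.3.0.0/16", 10))  # 10.3.0.10
```

So with a `service_cidr` of `10.3.0.0/16`, kube-dns is pinned to the 10th address of the service range.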
@@ -1,4 +1,4 @@
-# Assets generated only when experimental self-hosted etcd is enabled
+# Assets generated only when certain options are chosen

 resource "template_dir" "flannel-manifests" {
   count = "${var.networking == "flannel" ? 1 : 0}"
@@ -6,6 +6,9 @@ resource "template_dir" "flannel-manifests" {
   destination_dir = "${var.asset_dir}/manifests-networking"

   vars {
     flannel_image = "${var.container_images["flannel"]}"
+    flannel_cni_image = "${var.container_images["flannel_cni"]}"

     pod_cidr = "${var.pod_cidr}"
   }
 }
@@ -16,51 +19,10 @@ resource "template_dir" "calico-manifests" {
   destination_dir = "${var.asset_dir}/manifests-networking"

   vars {
     calico_image = "${var.container_images["calico"]}"
+    calico_cni_image = "${var.container_images["calico_cni"]}"

     network_mtu = "${var.network_mtu}"
     pod_cidr = "${var.pod_cidr}"
   }
 }

-# bootstrap-etcd.yaml pod bootstrap-manifest
-resource "template_dir" "experimental-bootstrap-manifests" {
-  count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
-  source_dir = "${path.module}/resources/experimental/bootstrap-manifests"
-  destination_dir = "${var.asset_dir}/experimental/bootstrap-manifests"
-
-  vars {
-    etcd_image = "${var.container_images["etcd"]}"
-    bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 20)}"
-  }
-}
-
-# etcd subfolder - bootstrap-etcd-service.json and migrate-etcd-cluster.json TPR
-resource "template_dir" "etcd-subfolder" {
-  count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
-  source_dir = "${path.module}/resources/etcd"
-  destination_dir = "${var.asset_dir}/etcd"
-
-  vars {
-    bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 20)}"
-  }
-}
-
-# etcd-operator deployment and etcd-service manifests
-# etcd client, server, and peer tls secrets
-resource "template_dir" "experimental-manifests" {
-  count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
-  source_dir = "${path.module}/resources/experimental/manifests"
-  destination_dir = "${var.asset_dir}/experimental/manifests"
-
-  vars {
-    etcd_service_ip = "${cidrhost(var.service_cidr, 15)}"
-
-    # Self-hosted etcd TLS certs / keys
-    etcd_ca_cert = "${base64encode(tls_self_signed_cert.etcd-ca.cert_pem)}"
-    etcd_client_cert = "${base64encode(tls_locally_signed_cert.client.cert_pem)}"
-    etcd_client_key = "${base64encode(tls_private_key.client.private_key_pem)}"
-    etcd_server_cert = "${base64encode(tls_locally_signed_cert.server.cert_pem)}"
-    etcd_server_key = "${base64encode(tls_private_key.server.private_key_pem)}"
-    etcd_peer_cert = "${base64encode(tls_locally_signed_cert.peer.cert_pem)}"
-    etcd_peer_key = "${base64encode(tls_private_key.peer.private_key_pem)}"
-  }
-}
@@ -10,16 +10,12 @@ output "kube_dns_service_ip" {
   value = "${cidrhost(var.service_cidr, 10)}"
 }

-output "etcd_service_ip" {
-  value = "${cidrhost(var.service_cidr, 15)}"
-}
-
 output "kubeconfig" {
   value = "${data.template_file.kubeconfig.rendered}"
 }

 output "user-kubeconfig" {
-  value = "${local_file.user-kubeconfig.filename}"
+  value = "${data.template_file.user-kubeconfig.rendered}"
 }
-
-# etcd TLS assets
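The kubeconfig templates that feed these outputs embed certificates via Terraform's `base64encode`, as seen in the `ca_cert`/`apiserver_cert` vars earlier. The equivalent in Python (the PEM below is a placeholder; the real input comes from the module's `tls_*` resources):

```python
import base64

# Placeholder PEM standing in for e.g. tls_self_signed_cert.kube-ca.cert_pem.
ca_pem = "-----BEGIN CERTIFICATE-----\nMIIB\n-----END CERTIFICATE-----\n"

# Terraform's base64encode() is standard Base64 over the UTF-8 bytes,
# which is what kubeconfig's certificate-authority-data field expects.
ca_cert = base64.b64encode(ca_pem.encode()).decode()

# Decoding recovers the original PEM exactly.
assert base64.b64decode(ca_cert).decode() == ca_pem
```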
@@ -8,22 +8,18 @@ spec:
   - name: kube-apiserver
     image: ${hyperkube_image}
     command:
-    - /usr/bin/flock
-    - /var/lock/api-server.lock
     - /hyperkube
     - apiserver
-    - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
     - --advertise-address=$(POD_IP)
     - --allow-privileged=true
     - --authorization-mode=RBAC
     - --bind-address=0.0.0.0
     - --client-ca-file=/etc/kubernetes/secrets/ca.crt
+    - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
     - --etcd-cafile=/etc/kubernetes/secrets/etcd-client-ca.crt
     - --etcd-certfile=/etc/kubernetes/secrets/etcd-client.crt
     - --etcd-keyfile=/etc/kubernetes/secrets/etcd-client.key
-    - --etcd-quorum-read=true
     - --etcd-servers=${etcd_servers}
     - --insecure-port=0
     - --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
     - --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
     - --secure-port=443
@@ -31,7 +27,6 @@ spec:
     - --service-cluster-ip-range=${service_cidr}
     - --cloud-provider=${cloud_provider}
-    - --storage-backend=etcd3
     - --tls-ca-file=/etc/kubernetes/secrets/ca.crt
     - --tls-cert-file=/etc/kubernetes/secrets/apiserver.crt
     - --tls-private-key-file=/etc/kubernetes/secrets/apiserver.key
     env:
@@ -56,7 +51,7 @@ spec:
         path: /etc/kubernetes/bootstrap-secrets
     - name: ssl-certs-host
       hostPath:
-        path: /usr/share/ca-certificates
+        path: ${trusted_certs_dir}
     - name: var-lock
       hostPath:
         path: /var/lock
@@ -12,6 +12,7 @@ spec:
     - controller-manager
     - --allocate-node-cidrs=true
     - --cluster-cidr=${pod_cidr}
+    - --service-cluster-ip-range=${service_cidr}
     - --cloud-provider=${cloud_provider}
     - --configure-cloud-routes=false
     - --kubeconfig=/etc/kubernetes/kubeconfig
@@ -32,4 +33,4 @@ spec:
         path: /etc/kubernetes
     - name: ssl-host
       hostPath:
-        path: /usr/share/ca-certificates
+        path: ${trusted_certs_dir}
resources/calico/bgpconfigurations-crd.yaml (new file)
@@ -0,0 +1,13 @@
+apiVersion: apiextensions.k8s.io/v1beta1
+description: Calico BGP Configuration
+kind: CustomResourceDefinition
+metadata:
+  name: bgpconfigurations.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: BGPConfiguration
+    plural: bgpconfigurations
+    singular: bgpconfiguration
@@ -1,4 +1,4 @@
-apiVersion: rbac.authorization.k8s.io/v1beta1
+apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: calico-node
@@ -1,8 +1,7 @@
-apiVersion: rbac.authorization.k8s.io/v1beta1
+apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
   name: calico-node
-  namespace: kube-system
 rules:
 - apiGroups: [""]
   resources:
@@ -23,6 +22,17 @@ rules:
   - get
   - list
   - watch
+  - patch
+- apiGroups: [""]
+  resources:
+  - services
+  verbs:
+  - get
+- apiGroups: [""]
+  resources:
+  - endpoints
+  verbs:
+  - get
 - apiGroups: [""]
   resources:
   - nodes
@@ -41,10 +51,15 @@ rules:
 - apiGroups: ["crd.projectcalico.org"]
   resources:
-  - globalfelixconfigs
+  - felixconfigurations
   - bgppeers
-  - globalbgpconfigs
+  - bgpconfigurations
   - ippools
   - globalnetworkpolicies
+  - globalnetworksets
+  - networkpolicies
+  - clusterinformations
   verbs:
   - create
   - get
@@ -4,26 +4,37 @@ metadata:
   name: calico-config
   namespace: kube-system
 data:
+  # Disable Typha for now.
+  typha_service_name: "none"
   # The CNI network configuration to install on each node.
   cni_network_config: |-
-    {
-      "name": "k8s-pod-network",
-      "cniVersion": "0.3.0",
-      "type": "calico",
-      "log_level": "debug",
-      "datastore_type": "kubernetes",
-      "nodename": "__KUBERNETES_NODE_NAME__",
-      "mtu": ${network_mtu},
-      "ipam": {
-        "type": "host-local",
-        "subnet": "usePodCidr"
-      },
-      "policy": {
-        "type": "k8s",
-        "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
-      },
-      "kubernetes": {
-        "kubeconfig": "__KUBECONFIG_FILEPATH__"
-      }
-    }
+    {
+      "name": "k8s-pod-network",
+      "cniVersion": "0.3.1",
+      "plugins": [
+        {
+          "type": "calico",
+          "log_level": "info",
+          "datastore_type": "kubernetes",
+          "nodename": "__KUBERNETES_NODE_NAME__",
+          "mtu": ${network_mtu},
+          "ipam": {
+            "type": "host-local",
+            "subnet": "usePodCidr"
+          },
+          "policy": {
+            "type": "k8s",
+            "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
+          },
+          "kubernetes": {
+            "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
+            "kubeconfig": "__KUBECONFIG_FILEPATH__"
+          }
+        },
+        {
+          "type": "portmap",
+          "snat": true,
+          "capabilities": {"portMappings": true}
+        }
+      ]
+    }
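The hunk above moves from a single CNI plugin config (`cniVersion` 0.3.0) to a chained `.conflist` (`cniVersion` 0.3.1) whose `plugins` array runs calico first and `portmap` second. A small Python sketch of the chain's shape (illustrative values only, not the full rendered config):

```python
# Illustrative chained CNI config mirroring the structure rendered above.
conflist = {
    "name": "k8s-pod-network",
    "cniVersion": "0.3.1",
    "plugins": [
        {"type": "calico", "ipam": {"type": "host-local", "subnet": "usePodCidr"}},
        {"type": "portmap", "snat": True, "capabilities": {"portMappings": True}},
    ],
}

# CNI runtimes execute plugins in array order: calico wires the pod network,
# then portmap adds hostPort mappings on top of that interface.
assert [p["type"] for p in conflist["plugins"]] == ["calico", "portmap"]
```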
@@ -1,13 +0,0 @@
-apiVersion: apiextensions.k8s.io/v1beta1
-description: Calico Global Felix Configuration
-kind: CustomResourceDefinition
-metadata:
-  name: globalfelixconfigs.crd.projectcalico.org
-spec:
-  scope: Cluster
-  group: crd.projectcalico.org
-  version: v1
-  names:
-    kind: GlobalFelixConfig
-    plural: globalfelixconfigs
-    singular: globalfelixconfig
@@ -1,13 +0,0 @@
-apiVersion: apiextensions.k8s.io/v1beta1
-description: Calico Global BGP Configuration
-kind: CustomResourceDefinition
-metadata:
-  name: globalbgpconfigs.crd.projectcalico.org
-spec:
-  scope: Cluster
-  group: crd.projectcalico.org
-  version: v1
-  names:
-    kind: GlobalBGPConfig
-    plural: globalbgpconfigs
-    singular: globalbgpconfig
@@ -1,4 +1,4 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: DaemonSet
 metadata:
   name: calico-node
@@ -9,25 +9,25 @@ spec:
   selector:
     matchLabels:
       k8s-app: calico-node
-  updateStrategy:
-    type: RollingUpdate
-    rollingUpdate:
-      maxUnavailable: 1
   template:
     metadata:
       labels:
         k8s-app: calico-node
       annotations:
         scheduler.alpha.kubernetes.io/critical-pod: ''
     spec:
       hostNetwork: true
       serviceAccountName: calico-node
       tolerations:
-      # Allow the pod to run on master nodes
-      - key: node-role.kubernetes.io/master
-        effect: NoSchedule
-      # Mark the pod as a critical add-on for rescheduling
-      - key: "CriticalAddonsOnly"
-        operator: "Exists"
+      - effect: NoSchedule
+        operator: Exists
+      - effect: NoExecute
+        operator: Exists
       containers:
       - name: calico-node
-        image: quay.io/calico/node:v2.5.1
+        image: ${calico_image}
         env:
         # Use Kubernetes API as the backing datastore.
         - name: DATASTORE_TYPE
@@ -58,19 +58,24 @@ spec:
           value: "${pod_cidr}"
         # Enable IPIP
         - name: CALICO_IPV4POOL_IPIP
-          value: "always"
+          value: "Always"
         # Enable IP-in-IP within Felix.
         - name: FELIX_IPINIPENABLED
           value: "true"
+        # Typha support: controlled by the ConfigMap.
+        - name: FELIX_TYPHAK8SSERVICENAME
+          valueFrom:
+            configMapKeyRef:
+              name: calico-config
+              key: typha_service_name
+        # Set node name based on k8s nodeName.
+        - name: NODENAME
+          valueFrom:
+            fieldRef:
+              fieldPath: spec.nodeName
         # Auto-detect the BGP IP address.
         - name: IP
-          valueFrom:
-            fieldRef:
-              fieldPath: status.podIP
+          value: "autodetect"
         - name: FELIX_HEALTHENABLED
           value: "true"
         securityContext:
@@ -97,42 +102,51 @@ spec:
         - mountPath: /var/run/calico
           name: var-run-calico
           readOnly: false
+        - mountPath: /var/lib/calico
+          name: var-lib-calico
+          readOnly: false
       # Install Calico CNI binaries and CNI network config file on nodes
       - name: install-cni
-        image: quay.io/calico/cni:v1.10.0
+        image: ${calico_cni_image}
         command: ["/install-cni.sh"]
         env:
+        # Name of the CNI config file to create on each node.
+        - name: CNI_CONF_NAME
+          value: "10-calico.conflist"
         # Contents of the CNI config to create on each node.
         - name: CNI_NETWORK_CONFIG
           valueFrom:
            configMapKeyRef:
              name: calico-config
              key: cni_network_config
-        - name: CNI_NET_DIR
-          value: "/etc/kubernetes/cni/net.d"
+        # Set node name based on k8s nodeName
+        - name: KUBERNETES_NODE_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: spec.nodeName
+        - name: CNI_NET_DIR
+          value: "/etc/kubernetes/cni/net.d"
         volumeMounts:
         - mountPath: /host/opt/cni/bin
           name: cni-bin-dir
         - mountPath: /host/etc/cni/net.d
           name: cni-net-dir
+      terminationGracePeriodSeconds: 0
       volumes:
       # Used by calico/node
       - name: lib-modules
         hostPath:
           path: /lib/modules
       - name: var-run-calico
         hostPath:
           path: /var/run/calico
+      - name: var-lib-calico
+        hostPath:
+          path: /var/lib/calico
       # Used by install-cni
       - name: cni-bin-dir
         hostPath:
           path: /opt/cni/bin
       - name: cni-net-dir
         hostPath:
           path: /etc/kubernetes/cni/net.d
+  updateStrategy:
+    rollingUpdate:
+      maxUnavailable: 1
+    type: RollingUpdate
resources/calico/clusterinformations-crd.yaml (new file)
@@ -0,0 +1,13 @@
+apiVersion: apiextensions.k8s.io/v1beta1
+description: Calico Cluster Information
+kind: CustomResourceDefinition
+metadata:
+  name: clusterinformations.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: ClusterInformation
+    plural: clusterinformations
+    singular: clusterinformation
resources/calico/felixconfigurations-crd.yaml (new file)
@@ -0,0 +1,13 @@
+apiVersion: apiextensions.k8s.io/v1beta1
+description: Calico Felix Configuration
+kind: CustomResourceDefinition
+metadata:
+  name: felixconfigurations.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: FelixConfiguration
+    plural: felixconfigurations
+    singular: felixconfiguration
resources/calico/globalnetworksets-crd.yaml (new file)
@@ -0,0 +1,13 @@
+apiVersion: apiextensions.k8s.io/v1beta1
+description: Calico Global Network Sets
+kind: CustomResourceDefinition
+metadata:
+  name: globalnetworksets.crd.projectcalico.org
+spec:
+  scope: Cluster
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: GlobalNetworkSet
+    plural: globalnetworksets
+    singular: globalnetworkset
resources/calico/networkpolicies-crd.yaml (new file)
@@ -0,0 +1,13 @@
+apiVersion: apiextensions.k8s.io/v1beta1
+description: Calico Network Policies
+kind: CustomResourceDefinition
+metadata:
+  name: networkpolicies.crd.projectcalico.org
+spec:
+  scope: Namespaced
+  group: crd.projectcalico.org
+  version: v1
+  names:
+    kind: NetworkPolicy
+    plural: networkpolicies
+    singular: networkpolicy
@@ -1,26 +0,0 @@
-{
-  "apiVersion": "v1",
-  "kind": "Service",
-  "metadata": {
-    "name": "bootstrap-etcd-service",
-    "namespace": "kube-system"
-  },
-  "spec": {
-    "selector": {
-      "k8s-app": "boot-etcd"
-    },
-    "clusterIP": "${bootstrap_etcd_service_ip}",
-    "ports": [
-      {
-        "name": "client",
-        "port": 12379,
-        "protocol": "TCP"
-      },
-      {
-        "name": "peers",
-        "port": 12380,
-        "protocol": "TCP"
-      }
-    ]
-  }
-}
@@ -1,36 +0,0 @@
-{
-  "apiVersion": "etcd.database.coreos.com/v1beta2",
-  "kind": "EtcdCluster",
-  "metadata": {
-    "name": "kube-etcd",
-    "namespace": "kube-system"
-  },
-  "spec": {
-    "size": 1,
-    "version": "v3.1.8",
-    "pod": {
-      "nodeSelector": {
-        "node-role.kubernetes.io/master": ""
-      },
-      "tolerations": [
-        {
-          "key": "node-role.kubernetes.io/master",
-          "operator": "Exists",
-          "effect": "NoSchedule"
-        }
-      ]
-    },
-    "selfHosted": {
-      "bootMemberClientEndpoint": "https://${bootstrap_etcd_service_ip}:12379"
-    },
-    "TLS": {
-      "static": {
-        "member": {
-          "peerSecret": "etcd-peer-tls",
-          "serverSecret": "etcd-server-tls"
-        },
-        "operatorSecret": "etcd-client-tls"
-      }
-    }
-  }
-}
@@ -1,41 +0,0 @@
-apiVersion: v1
-kind: Pod
-metadata:
-  name: bootstrap-etcd
-  namespace: kube-system
-  labels:
-    k8s-app: boot-etcd
-spec:
-  containers:
-  - name: etcd
-    image: ${etcd_image}
-    command:
-    - /usr/local/bin/etcd
-    - --name=boot-etcd
-    - --listen-client-urls=https://0.0.0.0:12379
-    - --listen-peer-urls=https://0.0.0.0:12380
-    - --advertise-client-urls=https://${bootstrap_etcd_service_ip}:12379
-    - --initial-advertise-peer-urls=https://${bootstrap_etcd_service_ip}:12380
-    - --initial-cluster=boot-etcd=https://${bootstrap_etcd_service_ip}:12380
-    - --initial-cluster-token=bootkube
-    - --initial-cluster-state=new
-    - --data-dir=/var/etcd/data
-    - --peer-client-cert-auth=true
-    - --peer-trusted-ca-file=/etc/kubernetes/secrets/etcd/peer-ca.crt
-    - --peer-cert-file=/etc/kubernetes/secrets/etcd/peer.crt
-    - --peer-key-file=/etc/kubernetes/secrets/etcd/peer.key
-    - --client-cert-auth=true
-    - --trusted-ca-file=/etc/kubernetes/secrets/etcd/server-ca.crt
-    - --cert-file=/etc/kubernetes/secrets/etcd/server.crt
-    - --key-file=/etc/kubernetes/secrets/etcd/server.key
-    volumeMounts:
-    - mountPath: /etc/kubernetes/secrets
-      name: secrets
-      readOnly: true
-  volumes:
-  - name: secrets
-    hostPath:
-      path: /etc/kubernetes/bootstrap-secrets
-  hostNetwork: true
-  restartPolicy: Never
-  dnsPolicy: ClusterFirstWithHostNet
@@ -1,10 +0,0 @@
-apiVersion: v1
-kind: Secret
-metadata:
-  name: etcd-client-tls
-  namespace: kube-system
-type: Opaque
-data:
-  etcd-client-ca.crt: ${etcd_ca_cert}
-  etcd-client.crt: ${etcd_client_cert}
-  etcd-client.key: ${etcd_client_key}
@@ -1,43 +0,0 @@
-apiVersion: extensions/v1beta1
-kind: Deployment
-metadata:
-  name: etcd-operator
-  namespace: kube-system
-  labels:
-    k8s-app: etcd-operator
-spec:
-  strategy:
-    type: RollingUpdate
-    rollingUpdate:
-      maxUnavailable: 1
-      maxSurge: 1
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        k8s-app: etcd-operator
-    spec:
-      containers:
-      - name: etcd-operator
-        image: quay.io/coreos/etcd-operator:v0.5.0
-        command:
-        - /usr/local/bin/etcd-operator
-        - --analytics=false
-        env:
-        - name: MY_POD_NAMESPACE
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.namespace
-        - name: MY_POD_NAME
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.name
-      nodeSelector:
-        node-role.kubernetes.io/master: ""
-      securityContext:
-        runAsNonRoot: true
-        runAsUser: 65534
-      tolerations:
-      - key: node-role.kubernetes.io/master
-        operator: Exists
-        effect: NoSchedule
@@ -1,10 +0,0 @@
-apiVersion: v1
-kind: Secret
-metadata:
-  name: etcd-peer-tls
-  namespace: kube-system
-type: Opaque
-data:
-  peer-ca.crt: ${etcd_ca_cert}
-  peer.crt: ${etcd_peer_cert}
-  peer.key: ${etcd_peer_key}
@@ -1,10 +0,0 @@
-apiVersion: v1
-kind: Secret
-metadata:
-  name: etcd-server-tls
-  namespace: kube-system
-type: Opaque
-data:
-  server-ca.crt: ${etcd_ca_cert}
-  server.crt: ${etcd_server_cert}
-  server.key: ${etcd_server_key}
@@ -1,18 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: etcd-service
-  namespace: kube-system
-  # This alpha annotation will retain the endpoints even if the etcd pod isn't ready.
-  # This feature is always enabled in endpoint controller in k8s even it is alpha.
-  annotations:
-    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
-spec:
-  selector:
-    app: etcd
-    etcd_cluster: kube-etcd
-  clusterIP: ${etcd_service_ip}
-  ports:
-  - name: client
-    port: 2379
-    protocol: TCP
resources/flannel/flannel-cfg.yaml (new file)
@@ -0,0 +1,36 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: kube-flannel-cfg
+  namespace: kube-system
+  labels:
+    tier: node
+    k8s-app: flannel
+data:
+  cni-conf.json: |
+    {
+      "name": "cbr0",
+      "cniVersion": "0.3.1",
+      "plugins": [
+        {
+          "type": "flannel",
+          "delegate": {
+            "hairpinMode": true,
+            "isDefaultGateway": true
+          }
+        },
+        {
+          "type": "portmap",
+          "capabilities": {
+            "portMappings": true
+          }
+        }
+      ]
+    }
+  net-conf.json: |
+    {
+      "Network": "${pod_cidr}",
+      "Backend": {
+        "Type": "vxlan"
+      }
+    }
resources/flannel/flannel-cluster-role-binding.yaml (new file)
@@ -0,0 +1,12 @@
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: flannel
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: flannel
+subjects:
+- kind: ServiceAccount
+  name: flannel
+  namespace: kube-system
resources/flannel/flannel-cluster-role.yaml (new file)
@@ -0,0 +1,24 @@
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: flannel
+rules:
+- apiGroups:
+  - ""
+  resources:
+  - pods
+  verbs:
+  - get
+- apiGroups:
+  - ""
+  resources:
+  - nodes
+  verbs:
+  - list
+  - watch
+- apiGroups:
+  - ""
+  resources:
+  - nodes/status
+  verbs:
+  - patch
resources/flannel/flannel-sa.yaml (new file)
@@ -0,0 +1,5 @@
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: flannel
+  namespace: kube-system
@@ -1,4 +1,4 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: DaemonSet
 metadata:
   name: kube-flannel
@@ -7,15 +7,20 @@ metadata:
     tier: node
     k8s-app: flannel
 spec:
+  selector:
+    matchLabels:
+      tier: node
+      k8s-app: flannel
   template:
     metadata:
       labels:
         tier: node
         k8s-app: flannel
     spec:
+      serviceAccountName: flannel
      containers:
      - name: kube-flannel
-        image: quay.io/coreos/flannel:v0.8.0-amd64
+        image: ${flannel_image}
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=$(POD_IP)"]
        securityContext:
          privileged: true
@@ -40,7 +45,7 @@ spec:
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
-        image: quay.io/coreos/flannel-cni:v0.2.0
+        image: ${flannel_cni_image}
        command: ["/install-cni.sh"]
        env:
        - name: CNI_NETWORK_CONFIG
@@ -55,9 +60,10 @@ spec:
          mountPath: /host/opt/cni/bin/
      hostNetwork: true
      tolerations:
-      - key: node-role.kubernetes.io/master
+      - effect: NoSchedule
        operator: Exists
-        effect: NoSchedule
+      - effect: NoExecute
+        operator: Exists
      volumes:
      - name: run
        hostPath:
@@ -1,25 +0,0 @@
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: kube-flannel-cfg
-  namespace: kube-system
-  labels:
-    tier: node
-    k8s-app: flannel
-data:
-  cni-conf.json: |
-    {
-      "name": "cbr0",
-      "type": "flannel",
-      "delegate": {
-        "isDefaultGateway": true,
-        "hairpinMode": true
-      }
-    }
-  net-conf.json: |
-    {
-      "Network": "${pod_cidr}",
-      "Backend": {
-        "Type": "vxlan"
-      }
-    }
@@ -1,4 +1,4 @@
-apiVersion: "extensions/v1beta1"
+apiVersion: apps/v1
 kind: DaemonSet
 metadata:
   name: kube-apiserver
@@ -7,6 +7,10 @@ metadata:
     tier: control-plane
     k8s-app: kube-apiserver
 spec:
+  selector:
+    matchLabels:
+      tier: control-plane
+      k8s-app: kube-apiserver
   template:
     metadata:
       labels:
@@ -14,17 +18,13 @@ spec:
         k8s-app: kube-apiserver
       annotations:
         checkpointer.alpha.coreos.com/checkpoint: "true"
-        scheduler.alpha.kubernetes.io/critical-pod: ''
     spec:
       containers:
       - name: kube-apiserver
         image: ${hyperkube_image}
         command:
-        - /usr/bin/flock
-        - /var/lock/api-server.lock
        - /hyperkube
        - apiserver
-        - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
        - --advertise-address=$(POD_IP)
        - --allow-privileged=true
        - --anonymous-auth=false
@@ -32,19 +32,17 @@ spec:
        - --bind-address=0.0.0.0
        - --client-ca-file=/etc/kubernetes/secrets/ca.crt
        - --cloud-provider=${cloud_provider}
+        - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
        - --etcd-cafile=/etc/kubernetes/secrets/etcd-client-ca.crt
        - --etcd-certfile=/etc/kubernetes/secrets/etcd-client.crt
        - --etcd-keyfile=/etc/kubernetes/secrets/etcd-client.key
-        - --etcd-quorum-read=true
        - --etcd-servers=${etcd_servers}
        - --insecure-port=0
        - --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
        - --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
        - --secure-port=443
        - --service-account-key-file=/etc/kubernetes/secrets/service-account.pub
        - --service-cluster-ip-range=${service_cidr}
-        - --storage-backend=etcd3
-        - --tls-ca-file=/etc/kubernetes/secrets/ca.crt
        - --tls-cert-file=/etc/kubernetes/secrets/apiserver.crt
        - --tls-private-key-file=/etc/kubernetes/secrets/apiserver.key
        env:
@@ -66,15 +64,13 @@ spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
-      - key: CriticalAddonsOnly
-        operator: Exists
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      volumes:
      - name: ssl-certs-host
        hostPath:
-          path: /usr/share/ca-certificates
+          path: ${trusted_certs_dir}
      - name: secrets
        secret:
          secretName: kube-apiserver
@@ -0,0 +1,12 @@
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: controller-manager
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: system:kube-controller-manager
+subjects:
+- kind: ServiceAccount
+  name: kube-controller-manager
+  namespace: kube-system
5 resources/manifests/kube-controller-manager-sa.yaml Normal file
@@ -0,0 +1,5 @@
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  namespace: kube-system
+  name: kube-controller-manager
@@ -1,4 +1,4 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: kube-controller-manager
@@ -8,13 +8,15 @@ metadata:
     k8s-app: kube-controller-manager
 spec:
   replicas: 2
+  selector:
+    matchLabels:
+      tier: control-plane
+      k8s-app: kube-controller-manager
   template:
     metadata:
       labels:
         tier: control-plane
         k8s-app: kube-controller-manager
-      annotations:
-        scheduler.alpha.kubernetes.io/critical-pod: ''
     spec:
       affinity:
         podAntiAffinity:
@@ -38,11 +40,14 @@ spec:
         command:
         - ./hyperkube
        - controller-manager
+        - --use-service-account-credentials
        - --allocate-node-cidrs=true
        - --cloud-provider=${cloud_provider}
        - --cluster-cidr=${pod_cidr}
+        - --service-cluster-ip-range=${service_cidr}
        - --configure-cloud-routes=false
        - --leader-elect=true
+        - --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins
        - --root-ca-file=/etc/kubernetes/secrets/ca.crt
        - --service-account-private-key-file=/etc/kubernetes/secrets/service-account.key
        livenessProbe:
@@ -55,6 +60,9 @@ spec:
        - name: secrets
          mountPath: /etc/kubernetes/secrets
          readOnly: true
+        - name: volumeplugins
+          mountPath: /var/lib/kubelet/volumeplugins
+          readOnly: true
        - name: ssl-host
          mountPath: /etc/ssl/certs
          readOnly: true
@@ -63,9 +71,8 @@ spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
+      serviceAccountName: kube-controller-manager
      tolerations:
-      - key: CriticalAddonsOnly
-        operator: Exists
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
@@ -75,5 +82,8 @@ spec:
          secretName: kube-controller-manager
      - name: ssl-host
        hostPath:
-          path: /usr/share/ca-certificates
+          path: ${trusted_certs_dir}
+      - name: volumeplugins
+        hostPath:
+          path: /var/lib/kubelet/volumeplugins
      dnsPolicy: Default # Don't use cluster DNS.
@@ -1,4 +1,4 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: kube-dns
@@ -23,14 +23,10 @@ spec:
     metadata:
       labels:
         k8s-app: kube-dns
-      annotations:
-        scheduler.alpha.kubernetes.io/critical-pod: ''
     spec:
-      nodeSelector:
-        node-role.kubernetes.io/master: ""
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
@@ -41,7 +37,7 @@ spec:
          optional: true
      containers:
      - name: kubedns
-        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
+        image: ${kubedns_image}
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
@@ -71,7 +67,7 @@ spec:
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
-        - --domain=cluster.local.
+        - --domain=${cluster_domain_suffix}.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
@@ -92,7 +88,7 @@ spec:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
-        image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
+        image: ${kubedns_dnsmasq_image}
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
@@ -110,8 +106,9 @@ spec:
        - --
        - -k
        - --cache-size=1000
+        - --no-negcache
        - --log-facility=-
-        - --server=/cluster.local/127.0.0.1#10053
+        - --server=/${cluster_domain_suffix}/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
@@ -130,7 +127,7 @@ spec:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
-        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
+        image: ${kubedns_sidecar_image}
        livenessProbe:
          httpGet:
            path: /metrics
@@ -143,8 +140,8 @@ spec:
        args:
        - --v=2
        - --logtostderr
-        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
-        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
+        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.${cluster_domain_suffix},5,SRV
+        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.${cluster_domain_suffix},5,SRV
        ports:
        - containerPort: 10054
          name: metrics
@@ -154,3 +151,4 @@ spec:
          memory: 20Mi
          cpu: 10m
      dnsPolicy: Default # Don't use cluster DNS.
+      serviceAccountName: kube-dns
5 resources/manifests/kube-dns-sa.yaml Normal file
@@ -0,0 +1,5 @@
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: kube-dns
+  namespace: kube-system
@@ -1,58 +0,0 @@
-apiVersion: "extensions/v1beta1"
-kind: DaemonSet
-metadata:
-  name: kube-etcd-network-checkpointer
-  namespace: kube-system
-  labels:
-    tier: control-plane
-    k8s-app: kube-etcd-network-checkpointer
-spec:
-  template:
-    metadata:
-      labels:
-        tier: control-plane
-        k8s-app: kube-etcd-network-checkpointer
-      annotations:
-        checkpointer.alpha.coreos.com/checkpoint: "true"
-    spec:
-      containers:
-      - image: quay.io/coreos/kenc:0.0.2
-        name: kube-etcd-network-checkpointer
-        securityContext:
-          privileged: true
-        volumeMounts:
-        - mountPath: /etc/kubernetes/selfhosted-etcd
-          name: checkpoint-dir
-          readOnly: false
-        - mountPath: /var/etcd
-          name: etcd-dir
-          readOnly: false
-        - mountPath: /var/lock
-          name: var-lock
-          readOnly: false
-        command:
-        - /usr/bin/flock
-        - /var/lock/kenc.lock
-        - -c
-        - "kenc -r -m iptables && kenc -m iptables"
-      hostNetwork: true
-      nodeSelector:
-        node-role.kubernetes.io/master: ""
-      tolerations:
-      - key: node-role.kubernetes.io/master
-        operator: Exists
-        effect: NoSchedule
-      volumes:
-      - name: checkpoint-dir
-        hostPath:
-          path: /etc/kubernetes/checkpoint-iptables
-      - name: etcd-dir
-        hostPath:
-          path: /var/etcd
-      - name: var-lock
-        hostPath:
-          path: /var/lock
-  updateStrategy:
-    rollingUpdate:
-      maxUnavailable: 1
-    type: RollingUpdate
12 resources/manifests/kube-proxy-role-binding.yaml Normal file
@@ -0,0 +1,12 @@
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: kube-proxy
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: system:node-proxier # Automatically created system role.
+subjects:
+- kind: ServiceAccount
+  name: kube-proxy
+  namespace: kube-system
5 resources/manifests/kube-proxy-sa.yaml Normal file
@@ -0,0 +1,5 @@
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  namespace: kube-system
+  name: kube-proxy
@@ -1,4 +1,4 @@
-apiVersion: "extensions/v1beta1"
+apiVersion: apps/v1
 kind: DaemonSet
 metadata:
   name: kube-proxy
@@ -7,13 +7,15 @@ metadata:
     tier: node
     k8s-app: kube-proxy
 spec:
+  selector:
+    matchLabels:
+      tier: node
+      k8s-app: kube-proxy
   template:
     metadata:
       labels:
         tier: node
         k8s-app: kube-proxy
-      annotations:
-        scheduler.alpha.kubernetes.io/critical-pod: ''
     spec:
       containers:
       - name: kube-proxy
@@ -33,27 +35,33 @@ spec:
        securityContext:
          privileged: true
        volumeMounts:
+        - mountPath: /lib/modules
+          name: lib-modules
+          readOnly: true
        - mountPath: /etc/ssl/certs
          name: ssl-certs-host
          readOnly: true
-        - name: etc-kubernetes
+        - name: kubeconfig
          mountPath: /etc/kubernetes
          readOnly: true
      hostNetwork: true
+      serviceAccountName: kube-proxy
      tolerations:
-      - key: CriticalAddonsOnly
+      - effect: NoSchedule
        operator: Exists
-      - key: node-role.kubernetes.io/master
-        operator: Exists
-        effect: NoSchedule
+      - effect: NoExecute
+        operator: Exists
      volumes:
-      - hostPath:
-          path: /usr/share/ca-certificates
-        name: ssl-certs-host
-      - name: etc-kubernetes
+      - name: lib-modules
        hostPath:
-          path: /etc/kubernetes
+          path: /lib/modules
+      - name: ssl-certs-host
+        hostPath:
+          path: ${trusted_certs_dir}
+      - name: kubeconfig
+        configMap:
+          name: kubeconfig-in-cluster
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
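The broadened tolerations above (`effect: NoSchedule` / `effect: NoExecute` with `operator: Exists` and no key) let the DaemonSet run on every node, including tainted masters, because an `Exists` toleration with an empty key matches taints of any key. A minimal sketch of that matching rule (a simplified model, not the scheduler's actual code):

```python
# Sketch of Kubernetes toleration-to-taint matching (simplified model).
def tolerates(toleration: dict, taint: dict) -> bool:
    """Return True if `toleration` tolerates `taint`."""
    # A toleration with an effect only matches taints of the same effect.
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("operator") == "Exists":
        # Empty key + Exists matches every key.
        return not toleration.get("key") or toleration["key"] == taint["key"]
    return (toleration.get("key") == taint["key"]
            and toleration.get("value") == taint.get("value"))

new_tolerations = [
    {"effect": "NoSchedule", "operator": "Exists"},
    {"effect": "NoExecute", "operator": "Exists"},
]
master_taint = {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
assert any(tolerates(t, master_taint) for t in new_tolerations)
```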
@@ -1,4 +1,4 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: kube-scheduler
@@ -8,13 +8,15 @@ metadata:
     k8s-app: kube-scheduler
 spec:
   replicas: 2
+  selector:
+    matchLabels:
+      tier: control-plane
+      k8s-app: kube-scheduler
   template:
     metadata:
       labels:
         tier: control-plane
         k8s-app: kube-scheduler
-      annotations:
-        scheduler.alpha.kubernetes.io/critical-pod: ''
     spec:
       affinity:
         podAntiAffinity:
@@ -51,8 +53,6 @@ spec:
        runAsNonRoot: true
        runAsUser: 65534
      tolerations:
-      - key: CriticalAddonsOnly
-        operator: Exists
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
@@ -1,4 +1,4 @@
-apiVersion: rbac.authorization.k8s.io/v1alpha1
+apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: system:default-sa
22 resources/manifests/kubeconfig-in-cluster.yaml Normal file
@@ -0,0 +1,22 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: kubeconfig-in-cluster
+  namespace: kube-system
+data:
+  kubeconfig: |
+    apiVersion: v1
+    clusters:
+    - name: local
+      cluster:
+        server: ${server}
+        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+    users:
+    - name: service-account
+      user:
+        # Use service account token
+        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
+    contexts:
+    - context:
+        cluster: local
+        user: service-account
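The ConfigMap above gives control-plane pods a kubeconfig whose context pairs the `local` cluster with the `service-account` user, so clients authenticate with the mounted service-account token. A rough sketch of how a client resolves such a kubeconfig into connection settings (`${server}` is substituted by Terraform; a placeholder URL stands in for it here):

```python
# Sketch: resolving a kubeconfig like the one above, the way kubectl
# pairs a context's cluster and user entries. Placeholder server URL.
kubeconfig = {
    "clusters": [{"name": "local", "cluster": {
        "server": "https://node1.example.com:443",  # stands in for ${server}
        "certificate-authority": "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
    }}],
    "users": [{"name": "service-account", "user": {
        "tokenFile": "/var/run/secrets/kubernetes.io/serviceaccount/token",
    }}],
    "contexts": [{"context": {"cluster": "local", "user": "service-account"}}],
}

def resolve(cfg: dict) -> dict:
    """Return the server and token file named by the first context."""
    ctx = cfg["contexts"][0]["context"]
    cluster = next(c["cluster"] for c in cfg["clusters"] if c["name"] == ctx["cluster"])
    user = next(u["user"] for u in cfg["users"] if u["name"] == ctx["user"])
    return {"server": cluster["server"], "token_file": user["tokenFile"]}

settings = resolve(kubeconfig)
assert settings["server"] == "https://node1.example.com:443"
```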
13 resources/manifests/pod-checkpointer-role-binding.yaml Normal file
@@ -0,0 +1,13 @@
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: pod-checkpointer
+  namespace: kube-system
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: pod-checkpointer
+subjects:
+- kind: ServiceAccount
+  name: pod-checkpointer
+  namespace: kube-system
12 resources/manifests/pod-checkpointer-role.yaml Normal file
@@ -0,0 +1,12 @@
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: pod-checkpointer
+  namespace: kube-system
+rules:
+- apiGroups: [""] # "" indicates the core API group
+  resources: ["pods"]
+  verbs: ["get", "watch", "list"]
+- apiGroups: [""] # "" indicates the core API group
+  resources: ["secrets", "configmaps"]
+  verbs: ["get"]
5 resources/manifests/pod-checkpointer-sa.yaml Normal file
@@ -0,0 +1,5 @@
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  namespace: kube-system
+  name: pod-checkpointer
@@ -1,4 +1,4 @@
-apiVersion: "extensions/v1beta1"
+apiVersion: apps/v1
 kind: DaemonSet
 metadata:
   name: pod-checkpointer
@@ -7,6 +7,10 @@ metadata:
     tier: control-plane
     k8s-app: pod-checkpointer
 spec:
+  selector:
+    matchLabels:
+      tier: control-plane
+      k8s-app: pod-checkpointer
   template:
     metadata:
       labels:
@@ -17,11 +21,11 @@ spec:
     spec:
       containers:
       - name: pod-checkpointer
-        image: quay.io/coreos/pod-checkpointer:0cd390e0bc1dcdcc714b20eda3435c3d00669d0e
+        image: ${pod_checkpointer_image}
        command:
        - /checkpoint
        - --v=4
        - --lock-file=/var/run/lock/pod-checkpointer.lock
+        - --kubeconfig=/etc/checkpointer/kubeconfig
        env:
        - name: NODE_NAME
          valueFrom:
@@ -37,10 +41,13 @@ spec:
              fieldPath: metadata.namespace
        imagePullPolicy: Always
        volumeMounts:
+        - mountPath: /etc/checkpointer
+          name: kubeconfig
        - mountPath: /etc/kubernetes
          name: etc-kubernetes
        - mountPath: /var/run
          name: var-run
+      serviceAccountName: pod-checkpointer
      hostNetwork: true
      nodeSelector:
        node-role.kubernetes.io/master: ""
@@ -50,6 +57,9 @@ spec:
        operator: Exists
        effect: NoSchedule
      volumes:
+      - name: kubeconfig
+        configMap:
+          name: kubeconfig-in-cluster
      - name: etc-kubernetes
        hostPath:
          path: /etc/kubernetes
@@ -57,6 +67,6 @@ spec:
        hostPath:
          path: /var/run
  updateStrategy:
-      rollingUpdate:
-        maxUnavailable: 1
-      type: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 1
+    type: RollingUpdate
@@ -3,4 +3,3 @@ api_servers = ["node1.example.com"]
 etcd_servers = ["node1.example.com"]
 asset_dir = "/home/core/mycluster"
 networking = "flannel"
-experimental_self_hosted_etcd = false
27 tls-etcd.tf
@@ -96,17 +96,13 @@ resource "tls_cert_request" "client" {

   ip_addresses = [
     "127.0.0.1",
-    "${cidrhost(var.service_cidr, 15)}",
-    "${cidrhost(var.service_cidr, 20)}",
   ]

-  dns_names = "${concat(
+  dns_names = ["${concat(
     var.etcd_servers,
     list(
       "localhost",
-      "*.kube-etcd.kube-system.svc.cluster.local",
-      "kube-etcd-client.kube-system.svc.cluster.local",
-  ))}"
+  ))}"]
 }

 resource "tls_locally_signed_cert" "client" {
@@ -142,17 +138,13 @@ resource "tls_cert_request" "server" {

   ip_addresses = [
     "127.0.0.1",
-    "${cidrhost(var.service_cidr, 15)}",
-    "${cidrhost(var.service_cidr, 20)}",
   ]

-  dns_names = "${concat(
+  dns_names = ["${concat(
     var.etcd_servers,
     list(
       "localhost",
-      "*.kube-etcd.kube-system.svc.cluster.local",
-      "kube-etcd-client.kube-system.svc.cluster.local",
-  ))}"
+  ))}"]
 }

 resource "tls_locally_signed_cert" "server" {
@@ -186,16 +178,7 @@ resource "tls_cert_request" "peer" {
     organization = "etcd"
   }

-  ip_addresses = [
-    "${cidrhost(var.service_cidr, 20)}",
-  ]
-
-  dns_names = "${concat(
-    var.etcd_servers,
-    list(
-      "*.kube-etcd.kube-system.svc.cluster.local",
-      "kube-etcd-client.kube-system.svc.cluster.local",
-  ))}"
+  dns_names = ["${var.etcd_servers}"]
 }

 resource "tls_locally_signed_cert" "peer" {
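The removed `cidrhost(var.service_cidr, 15)` and `cidrhost(var.service_cidr, 20)` SANs were the service IPs reserved for the now-dropped self-hosted etcd. Terraform's `cidrhost(cidr, n)` returns the nth address within a CIDR block; a sketch of the same computation using Python's `ipaddress` module:

```python
# Sketch of what Terraform's cidrhost(cidr, n) computes: the nth
# address within a CIDR block (index 0 is the network address).
import ipaddress

def cidrhost(cidr: str, n: int) -> str:
    return str(ipaddress.ip_network(cidr)[n])

# With the module's default service_cidr of "10.3.0.0/24", the 15th and
# 20th service IPs reserved by the removed lines were:
assert cidrhost("10.3.0.0/24", 15) == "10.3.0.15"
assert cidrhost("10.3.0.0/24", 20) == "10.3.0.20"
```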
@@ -70,7 +70,7 @@ resource "tls_cert_request" "apiserver" {
     "kubernetes",
     "kubernetes.default",
     "kubernetes.default.svc",
-    "kubernetes.default.svc.cluster.local",
+    "kubernetes.default.svc.${var.cluster_domain_suffix}",
   ]

   ip_addresses = [
34 variables.tf
@@ -4,20 +4,15 @@ variable "cluster_name" {
 }

 variable "api_servers" {
-  description = "URL used to reach kube-apiserver"
+  description = "List of URLs used to reach kube-apiserver"
   type        = "list"
 }

 variable "etcd_servers" {
-  description = "List of etcd server URLs including protocol, host, and port. Ignored if experimental self-hosted etcd is enabled."
+  description = "List of URLs used to reach etcd servers."
   type        = "list"
 }

-variable "experimental_self_hosted_etcd" {
-  description = "(Experimental) Create self-hosted etcd assets"
-  default     = false
-}
-
 variable "asset_dir" {
   description = "Path to a directory where generated assets should be placed (contains secrets)"
   type        = "string"
@@ -50,23 +45,42 @@ variable "pod_cidr" {
 variable "service_cidr" {
   description = <<EOD
 CIDR IP range to assign Kubernetes services.
-The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns, the 15th IP will be reserved for self-hosted etcd, and the 20th IP will be reserved for bootstrap self-hosted etcd.
+The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
 EOD

   type    = "string"
   default = "10.3.0.0/24"
 }

+variable "cluster_domain_suffix" {
+  description = "Queries for domains with the suffix will be answered by kube-dns"
+  type        = "string"
+  default     = "cluster.local"
+}
+
 variable "container_images" {
   description = "Container images to use"
   type        = "map"

   default = {
-    hyperkube        = "quay.io/coreos/hyperkube:v1.7.5_coreos.0"
-    etcd             = "quay.io/coreos/etcd:v3.1.8"
+    calico           = "quay.io/calico/node:v3.0.4"
+    calico_cni       = "quay.io/calico/cni:v2.0.1"
+    flannel          = "quay.io/coreos/flannel:v0.10.0-amd64"
+    flannel_cni      = "quay.io/coreos/flannel-cni:v0.3.0"
+    hyperkube        = "k8s.gcr.io/hyperkube:v1.10.0"
+    kubedns          = "k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.9"
+    kubedns_dnsmasq  = "k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.9"
+    kubedns_sidecar  = "k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.9"
+    pod_checkpointer = "quay.io/coreos/pod-checkpointer:9dc83e1ab3bc36ca25c9f7c18ddef1b91d4a0558"
   }
 }

+variable "trusted_certs_dir" {
+  description = "Path to the directory on cluster nodes where trust TLS certs are kept"
+  type        = "string"
+  default     = "/usr/share/ca-certificates"
+}
+
 variable "ca_certificate" {
   description = "Existing PEM-encoded CA certificate (generated if blank)"
   type        = "string"