33 Commits

Author SHA1 Message Date
Dalton Hubble
31cfae5789 Update README to correspond to v0.9.0 2017-12-01 22:13:33 -08:00
Dalton Hubble
680244706c Update Calico from v2.6.1 to v2.6.3
* Bug fixes for Calico 2.6.x
https://github.com/projectcalico/calico/releases/tag/v2.6.3
* Bug fixes for cni-plugin (i.e. cni) v1.11.x
https://github.com/projectcalico/cni-plugin/releases/tag/v1.11.1
2017-11-28 21:33:51 -08:00
Dalton Hubble
dbcf3b599f Remove flock from bootstrap-apiserver and kube-apiserver
* https://github.com/kubernetes-incubator/bootkube/pull/616
2017-11-28 21:13:15 -08:00
Dalton Hubble
b7b56a6e55 Update hyperkube from v1.8.3 to v1.8.4 2017-11-28 21:11:52 -08:00
Dalton Hubble
a613c7dfa6 Remove unused critical-pod annotations in manifests
* https://github.com/kubernetes-incubator/bootkube/pull/777
2017-11-28 21:10:05 -08:00
Dalton Hubble
ab4d7becce Disable Calico termination grace period
* Disable termination grace period to account for Kubernetes v1.8
changes to DaemonSet rolling behavior
* https://github.com/projectcalico/calico/pull/1293
* Fix IPIP mode casing https://github.com/projectcalico/calico/pull/1233
2017-11-17 00:40:25 -08:00
Dalton Hubble
4d85d9c0d1 Update flannel version from v0.9.0 to v0.9.1
* https://github.com/kubernetes-incubator/bootkube/pull/776
2017-11-17 00:38:37 -08:00
Dalton Hubble
ec5f86b014 Use service accounts for kube-proxy and pod-checkpointer
* Create separate service accounts for kube-proxy and pod-checkpointer
* Switch kube-proxy and pod-checkpointer to use a kubeconfig that
references the local service account, rather than the host kubeconfig
* https://github.com/kubernetes-incubator/bootkube/pull/767
2017-11-17 00:33:22 -08:00
Dalton Hubble
92ff0f253a Update README to correspond to bootkube v0.8.2 2017-11-10 19:54:35 -08:00
Dalton Hubble
4f6af5b811 Update hyperkube from v1.8.2 to v1.8.3
* https://github.com/kubernetes-incubator/bootkube/pull/765
2017-11-08 21:48:21 -08:00
Dalton Hubble
f76e58b56d Update checkpointer with state machine impl
* https://github.com/kubernetes-incubator/bootkube/pull/759
2017-11-08 21:45:01 -08:00
Dalton Hubble
383aba4e8e Add /lib/modules mount to kube-proxy
* Starting in Kubernetes v1.8, kube-proxy modprobes ipvs
* kube-proxy still uses iptables, but may switch to ipvs in the
future; this change prepares the way for that switch
* https://github.com/kubernetes-incubator/bootkube/issues/741
2017-11-08 21:39:07 -08:00
Dalton Hubble
aebb45e6e9 Update README to correspond to bootkube v0.8.1 2017-10-28 12:44:06 -07:00
Dalton Hubble
b6b320ef6a Update hyperkube from v1.8.1 to v1.8.2
* v1.8.2 includes an apiserver memory leak fix
2017-10-24 21:27:46 -07:00
Dalton Hubble
9f4ffe273b Switch hyperkube from quay.io/coreos to gcr.io/google_containers
* Use the Kubernetes official hyperkube image
* Patches in quay.io/coreos/hyperkube are no longer needed
for kubernetes-incubator/bootkube clusters starting in
Kubernetes 1.8
2017-10-22 17:05:52 -07:00
Dalton Hubble
74366f6076 Enable hairpinMode in flannel CNI config
* Allow pods to communicate with themselves via service IP
* https://github.com/coreos/flannel/pull/849
2017-10-22 13:51:46 -07:00
Dalton Hubble
db7c13f5ee Update flannel from v0.8.0-amd64 to v0.9.0-amd64 2017-10-22 13:48:14 -07:00
Dalton Hubble
3ac28c9210 Add --no-negcache flag to dnsmasq args
* e1d6bcc227
2017-10-21 17:15:19 -07:00
Dalton Hubble
64748203ba Update assets generation for bootkube v0.8.0
* Update from Kubernetes v1.7.7 to v1.8.1
2017-10-19 20:48:24 -07:00
Dalton Hubble
262cc49856 Update README intro, repo name, and links 2017-10-08 23:00:58 -07:00
Dalton Hubble
125f29d43d Render images from the container_images map variable
* Container images may be customized to facilitate using mirrored
images or development with custom images
2017-10-08 22:29:26 -07:00
Dalton Hubble
aded06a0a7 Update assets generation for bootkube v0.7.0 2017-10-03 09:27:30 -07:00
Dalton Hubble
cc2b45780a Add square brackets for lists to be explicit
* Terraform's "type system" sometimes doesn't identify list
types correctly, so be explicit
* https://github.com/hashicorp/terraform/issues/12263#issuecomment-282571256
2017-10-03 09:23:25 -07:00
Dalton Hubble
d93b7e4dc8 Update kube-dns image to address dnsmasq vulnerability
* https://security.googleblog.com/2017/10/behind-masq-yet-more-dns-and-dhcp.html
2017-10-02 10:23:22 -07:00
Dalton Hubble
48b33db1f1 Update Calico from v2.6.0 to v2.6.1 2017-09-30 16:12:29 -07:00
Dalton Hubble
8a9b6f1270 Update Calico from v2.5.1 to v2.6.0
* Update cni sidecar image from v1.10.0 to v1.11.0
* Lower log level in CNI config from debug to info
2017-09-28 20:43:15 -07:00
Dalton Hubble
3b8d762081 Merge pull request #16 from poseidon/etcd-network-checkpointer
Add kube-etcd-network-checkpointer for self-hosted etcd only
2017-09-27 18:06:19 -07:00
Dalton Hubble
9c144e6522 Add kube-etcd-network-checkpointer for self-hosted etcd only 2017-09-26 00:39:42 -07:00
Dalton Hubble
c0d4f56a4c Merge pull request #12 from cloudnativelabs/doc-fix-etcd_servers
Update etcd_servers variable description
2017-09-26 00:12:34 -07:00
bzub
62c887f41b Update etcd_servers variable description. 2017-09-16 16:12:40 -05:00
Dalton Hubble
dbfb11c6ea Update assets generation for bootkube v0.6.2
* Update hyperkube to v1.7.5_coreos.0
* Update etcd-operator to v0.5.0
* Update pod-checkpointer
* Update flannel-cni to v0.2.0
* Change etcd-operator TPR to CRD
2017-09-08 13:46:28 -07:00
Dalton Hubble
5ffbfec46d Configure the Calico MTU
* Add a network_mtu input variable (default 1500)
* Set the Calico CNI config (i.e. workload network interfaces)
* Set the Calico IP in IP MTU (for tunnel network interfaces)
2017-09-05 10:50:26 -07:00
Dalton Hubble
a52f99e8cc Add support for calico networking
* Add support for using Calico pod networking instead of flannel
* Add variable "networking" which may be "calico" or "flannel"
* Users MUST move the contents of assets_dir/manifests-networking
into the assets_dir/manifests directory before running bootkube
start. This is needed because Terraform cannot generate conditional
files into a template_dir because other resources write to the same
directory and delete each other's files.
https://github.com/terraform-providers/terraform-provider-template/issues/10
2017-09-01 10:27:43 -07:00
38 changed files with 633 additions and 168 deletions

.gitignore

@@ -1,2 +1,4 @@
*.tfvars
.terraform
*.tfstate*
assets


@@ -1,38 +1,42 @@
-# bootkube-terraform
+# terraform-render-bootkube
-`bootkube-terraform` is a Terraform module that renders [bootkube](https://github.com/kubernetes-incubator/bootkube) assets, just like running the binary `bootkube render`. It aims to provide the same variable names, defaults, features, and outputs.
+`terraform-render-bootkube` is a Terraform module that renders [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube) assets for bootstrapping a Kubernetes cluster.
+## Audience
+`terraform-render-bootkube` is a low-level component of the [Typhoon](https://github.com/poseidon/typhoon) Kubernetes distribution. Use Typhoon modules to create and manage Kubernetes clusters across supported platforms. Use the bootkube module if you'd like to customize a Kubernetes control plane or build your own distribution.
## Usage
-Use the `bootkube-terraform` module within your existing Terraform configs. Provide the variables listed in `variables.tf` or check `terraform.tfvars.example` for examples.
+Use the module to declare bootkube assets. Check [variables.tf](variables.tf) for options and [terraform.tfvars.example](terraform.tfvars.example) for examples.
```hcl
module "bootkube" {
-  source = "git://https://github.com/dghubble/bootkube-terraform.git?ref=SHA"
+  source = "git://https://github.com/poseidon/terraform-render-bootkube.git?ref=SHA"
  cluster_name = "example"
  api_servers = ["node1.example.com"]
  etcd_servers = ["node1.example.com"]
  asset_dir = "/home/core/clusters/mycluster"
  experimental_self_hosted_etcd = false
}
```
-Alternately, use a local checkout of this repo and copy `terraform.tfvars.example` to `terraform.tfvars` to generate assets without an existing terraform config repo.
-Generate the bootkube assets.
+Generate the assets.
```sh
-terraform get
+terraform init
+terraform get --update
terraform plan
terraform apply
```
Find bootkube assets rendered to the `asset_dir` path. That's it.
### Comparison
-Render bootkube assets directly with bootkube v0.6.1.
+Render bootkube assets directly with bootkube v0.9.0.
-#### On-host etcd
+#### On-host etcd (recommended)
```sh
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --etcd-servers=https://node1.example.com:2379
@@ -41,10 +45,13 @@ bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 -
Compare assets. The only diffs you should see are TLS credentials.
```sh
+pushd /home/core/mycluster
+mv manifests-networking/* manifests
+popd
diff -rw assets /home/core/mycluster
```
-#### Self-hosted etcd
+#### Self-hosted etcd (deprecated)
```sh
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --experimental-self-hosted-etcd
@@ -56,6 +63,7 @@ Compare assets. Note that experimental must be generated to a separate directory
pushd /home/core/mycluster
mv experimental/bootstrap-manifests/* bootstrap-manifests
mv experimental/manifests/* manifests
+mv manifests-networking/* manifests
popd
diff -rw assets /home/core/mycluster
```


@@ -1,45 +0,0 @@
# Assets generated only when experimental self-hosted etcd is enabled

# bootstrap-etcd.yaml pod bootstrap-manifest
resource "template_dir" "experimental-bootstrap-manifests" {
  count           = "${var.experimental_self_hosted_etcd ? 1 : 0}"
  source_dir      = "${path.module}/resources/experimental/bootstrap-manifests"
  destination_dir = "${var.asset_dir}/experimental/bootstrap-manifests"

  vars {
    etcd_image                = "${var.container_images["etcd"]}"
    bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 20)}"
  }
}

# etcd subfolder - bootstrap-etcd-service.json and migrate-etcd-cluster.json TPR
resource "template_dir" "etcd-subfolder" {
  count           = "${var.experimental_self_hosted_etcd ? 1 : 0}"
  source_dir      = "${path.module}/resources/etcd"
  destination_dir = "${var.asset_dir}/etcd"

  vars {
    bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 20)}"
  }
}

# etcd-operator deployment and etcd-service manifests
# etcd client, server, and peer tls secrets
resource "template_dir" "experimental-manifests" {
  count           = "${var.experimental_self_hosted_etcd ? 1 : 0}"
  source_dir      = "${path.module}/resources/experimental/manifests"
  destination_dir = "${var.asset_dir}/experimental/manifests"

  vars {
    etcd_service_ip = "${cidrhost(var.service_cidr, 15)}"

    # Self-hosted etcd TLS certs / keys
    etcd_ca_cert     = "${base64encode(tls_self_signed_cert.etcd-ca.cert_pem)}"
    etcd_client_cert = "${base64encode(tls_locally_signed_cert.client.cert_pem)}"
    etcd_client_key  = "${base64encode(tls_private_key.client.private_key_pem)}"
    etcd_server_cert = "${base64encode(tls_locally_signed_cert.server.cert_pem)}"
    etcd_server_key  = "${base64encode(tls_private_key.server.private_key_pem)}"
    etcd_peer_cert   = "${base64encode(tls_locally_signed_cert.peer.cert_pem)}"
    etcd_peer_key    = "${base64encode(tls_private_key.peer.private_key_pem)}"
  }
}


@@ -19,23 +19,29 @@ resource "template_dir" "manifests" {
destination_dir = "${var.asset_dir}/manifests"
vars {
hyperkube_image = "${var.container_images["hyperkube"]}"
etcd_servers = "${var.experimental_self_hosted_etcd ? format("https://%s:2379", cidrhost(var.service_cidr, 15)) : join(",", formatlist("https://%s:2379", var.etcd_servers))}"
hyperkube_image = "${var.container_images["hyperkube"]}"
pod_checkpointer_image = "${var.container_images["pod_checkpointer"]}"
kubedns_image = "${var.container_images["kubedns"]}"
kubedns_dnsmasq_image = "${var.container_images["kubedns_dnsmasq"]}"
kubedns_sidecar_image = "${var.container_images["kubedns_sidecar"]}"
cloud_provider = "${var.cloud_provider}"
etcd_servers = "${var.experimental_self_hosted_etcd ? format("https://%s:2379", cidrhost(var.service_cidr, 15)) : join(",", formatlist("https://%s:2379", var.etcd_servers))}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
kube_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
ca_cert = "${base64encode(var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_certificate)}"
server = "${format("https://%s:443", element(var.api_servers, 0))}"
apiserver_key = "${base64encode(tls_private_key.apiserver.private_key_pem)}"
apiserver_cert = "${base64encode(tls_locally_signed_cert.apiserver.cert_pem)}"
serviceaccount_pub = "${base64encode(tls_private_key.service-account.public_key_pem)}"
serviceaccount_key = "${base64encode(tls_private_key.service-account.private_key_pem)}"
etcd_ca_cert = "${base64encode(tls_self_signed_cert.etcd-ca.cert_pem)}"
etcd_ca_cert = "${base64encode(tls_self_signed_cert.etcd-ca.cert_pem)}"
etcd_client_cert = "${base64encode(tls_locally_signed_cert.client.cert_pem)}"
etcd_client_key = "${base64encode(tls_private_key.client.private_key_pem)}"
etcd_client_key = "${base64encode(tls_private_key.client.private_key_pem)}"
}
}
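The `cidrhost` interpolations above reserve fixed offsets in `service_cidr`: 10 for the kube-dns service IP, 15 for the self-hosted etcd service, and 20 for bootstrap etcd. A minimal sketch of the address arithmetic, using an illustrative `service_cidr` of 10.3.0.0/16 (a placeholder, not a default taken from this module):

```python
import ipaddress

def cidrhost(cidr: str, hostnum: int) -> str:
    """Rough equivalent of Terraform's cidrhost(): the address at a
    fixed offset from the start of a CIDR block."""
    network = ipaddress.ip_network(cidr)
    return str(network.network_address + hostnum)

# 10.3.0.0/16 is an illustrative service_cidr value.
print(cidrhost("10.3.0.0/16", 10))  # kube-dns service IP: 10.3.0.10
print(cidrhost("10.3.0.0/16", 15))  # self-hosted etcd service IP: 10.3.0.15
print(cidrhost("10.3.0.0/16", 20))  # bootstrap etcd service IP: 10.3.0.20
```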
@@ -73,4 +79,3 @@ data "template_file" "user-kubeconfig" {
server = "${format("https://%s:443", element(var.api_servers, 0))}"
}
}

conditional.tf

@@ -0,0 +1,74 @@
# Assets generated conditionally: networking manifests for the chosen
# provider (flannel or Calico) and, when enabled, experimental self-hosted etcd
resource "template_dir" "flannel-manifests" {
  count           = "${var.networking == "flannel" ? 1 : 0}"
  source_dir      = "${path.module}/resources/flannel"
  destination_dir = "${var.asset_dir}/manifests-networking"

  vars {
    flannel_image     = "${var.container_images["flannel"]}"
    flannel_cni_image = "${var.container_images["flannel_cni"]}"
    pod_cidr          = "${var.pod_cidr}"
  }
}

resource "template_dir" "calico-manifests" {
  count           = "${var.networking == "calico" ? 1 : 0}"
  source_dir      = "${path.module}/resources/calico"
  destination_dir = "${var.asset_dir}/manifests-networking"

  vars {
    calico_image     = "${var.container_images["calico"]}"
    calico_cni_image = "${var.container_images["calico_cni"]}"
    network_mtu      = "${var.network_mtu}"
    pod_cidr         = "${var.pod_cidr}"
  }
}

# bootstrap-etcd.yaml pod bootstrap-manifest
resource "template_dir" "experimental-bootstrap-manifests" {
  count           = "${var.experimental_self_hosted_etcd ? 1 : 0}"
  source_dir      = "${path.module}/resources/experimental/bootstrap-manifests"
  destination_dir = "${var.asset_dir}/experimental/bootstrap-manifests"

  vars {
    etcd_image                = "${var.container_images["etcd"]}"
    bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 20)}"
  }
}

# etcd subfolder - bootstrap-etcd-service.json and migrate-etcd-cluster.json TPR
resource "template_dir" "etcd-subfolder" {
  count           = "${var.experimental_self_hosted_etcd ? 1 : 0}"
  source_dir      = "${path.module}/resources/etcd"
  destination_dir = "${var.asset_dir}/etcd"

  vars {
    bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 20)}"
  }
}

# etcd-operator deployment and etcd-service manifests
# etcd client, server, and peer tls secrets
resource "template_dir" "experimental-manifests" {
  count           = "${var.experimental_self_hosted_etcd ? 1 : 0}"
  source_dir      = "${path.module}/resources/experimental/manifests"
  destination_dir = "${var.asset_dir}/experimental/manifests"

  vars {
    etcd_operator_image     = "${var.container_images["etcd_operator"]}"
    etcd_checkpointer_image = "${var.container_images["etcd_checkpointer"]}"
    etcd_service_ip         = "${cidrhost(var.service_cidr, 15)}"

    # Self-hosted etcd TLS certs / keys
    etcd_ca_cert     = "${base64encode(tls_self_signed_cert.etcd-ca.cert_pem)}"
    etcd_client_cert = "${base64encode(tls_locally_signed_cert.client.cert_pem)}"
    etcd_client_key  = "${base64encode(tls_private_key.client.private_key_pem)}"
    etcd_server_cert = "${base64encode(tls_locally_signed_cert.server.cert_pem)}"
    etcd_server_key  = "${base64encode(tls_private_key.server.private_key_pem)}"
    etcd_peer_cert   = "${base64encode(tls_locally_signed_cert.peer.cert_pem)}"
    etcd_peer_key    = "${base64encode(tls_private_key.peer.private_key_pem)}"
  }
}


@@ -8,11 +8,9 @@ spec:
- name: kube-apiserver
image: ${hyperkube_image}
command:
- /usr/bin/flock
- /var/lock/api-server.lock
- /hyperkube
- apiserver
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --authorization-mode=RBAC


@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico BGP Peers
kind: CustomResourceDefinition
metadata:
  name: bgppeers.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPPeer
    plural: bgppeers
    singular: bgppeer


@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
  - kind: ServiceAccount
    name: calico-node
    namespace: kube-system


@@ -0,0 +1,53 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: calico-node
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources:
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - update
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups: ["extensions"]
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - bgppeers
      - globalbgpconfigs
      - ippools
      - globalnetworkpolicies
    verbs:
      - create
      - get
      - list
      - update
      - watch


@@ -0,0 +1,29 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: calico-config
  namespace: kube-system
data:
  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "type": "calico",
      "log_level": "info",
      "datastore_type": "kubernetes",
      "nodename": "__KUBERNETES_NODE_NAME__",
      "mtu": ${network_mtu},
      "ipam": {
        "type": "host-local",
        "subnet": "usePodCidr"
      },
      "policy": {
        "type": "k8s",
        "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
      },
      "kubernetes": {
        "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
        "kubeconfig": "__KUBECONFIG_FILEPATH__"
      }
    }
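Once Terraform's template_dir substitutes ${network_mtu}, the cni_network_config above must parse as CNI JSON (the __…__ placeholders are filled later by the install-cni container). A quick sanity check over a trimmed copy of the config, assuming an illustrative MTU of 1500:

```python
import json

# Trimmed copy of cni_network_config; ${network_mtu} mimics the Terraform template var.
template = '''
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.0",
  "type": "calico",
  "mtu": ${network_mtu},
  "ipam": {
    "type": "host-local",
    "subnet": "usePodCidr"
  }
}
'''

# Simulate template_dir's interpolation, then verify the result is valid JSON.
rendered = template.replace("${network_mtu}", "1500")
config = json.loads(rendered)
print(config["type"], config["mtu"])  # calico 1500
```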


@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Global Felix Configuration
kind: CustomResourceDefinition
metadata:
  name: globalfelixconfigs.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalFelixConfig
    plural: globalfelixconfigs
    singular: globalfelixconfig


@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Global BGP Configuration
kind: CustomResourceDefinition
metadata:
  name: globalbgpconfigs.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalBGPConfig
    plural: globalbgpconfigs
    singular: globalbgpconfig


@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico IP Pools
kind: CustomResourceDefinition
metadata:
  name: ippools.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPPool
    plural: ippools
    singular: ippool


@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Global Network Policies
kind: CustomResourceDefinition
metadata:
  name: globalnetworkpolicies.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkPolicy
    plural: globalnetworkpolicies
    singular: globalnetworkpolicy


@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system


@@ -0,0 +1,134 @@
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      hostNetwork: true
      serviceAccountName: calico-node
      tolerations:
        # Allow the pod to run on master nodes
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: calico-node
          image: ${calico_image}
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Enable felix info logging.
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPV6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              value: "${network_mtu}"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # The Calico IPv4 pool CIDR (should match `--cluster-cidr`).
            - name: CALICO_IPV4POOL_CIDR
              value: "${pod_cidr}"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Enable IP-in-IP within Felix.
            - name: FELIX_IPINIPENABLED
              value: "true"
            # Set node name based on k8s nodeName.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /readiness
              port: 9099
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
        # Install Calico CNI binaries and CNI network config file on nodes
        - name: install-cni
          image: ${calico_cni_image}
          command: ["/install-cni.sh"]
          env:
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            - name: CNI_NET_DIR
              value: "/etc/kubernetes/cni/net.d"
            # Set node name based on k8s nodeName
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
      terminationGracePeriodSeconds: 0
      volumes:
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/kubernetes/cni/net.d
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate


@@ -1,6 +1,6 @@
{
"apiVersion": "etcd.coreos.com/v1beta1",
"kind": "Cluster",
"apiVersion": "etcd.database.coreos.com/v1beta2",
"kind": "EtcdCluster",
"metadata": {
"name": "kube-etcd",
"namespace": "kube-system"


@@ -1,4 +1,4 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: etcd-operator
@@ -6,12 +6,10 @@ metadata:
labels:
k8s-app: etcd-operator
spec:
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
replicas: 1
selector:
matchLabels:
k8s-app: etcd-operator
template:
metadata:
labels:
@@ -19,10 +17,10 @@ spec:
spec:
containers:
- name: etcd-operator
-image: quay.io/coreos/etcd-operator:v0.4.2
+image: ${etcd_operator_image}
command:
- /usr/local/bin/etcd-operator
- --analytics=false
- /usr/local/bin/etcd-operator
- --analytics=false
env:
- name: MY_POD_NAMESPACE
valueFrom:
@@ -41,3 +39,8 @@ spec:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1


@@ -1,4 +1,4 @@
apiVersion: "extensions/v1beta1"
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
name: kube-etcd-network-checkpointer
@@ -7,6 +7,10 @@ metadata:
tier: control-plane
k8s-app: kube-etcd-network-checkpointer
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: kube-etcd-network-checkpointer
template:
metadata:
labels:
@@ -16,7 +20,7 @@ spec:
checkpointer.alpha.coreos.com/checkpoint: "true"
spec:
containers:
- image: quay.io/coreos/kenc:0.0.2
- image: ${etcd_checkpointer_image}
name: kube-etcd-network-checkpointer
securityContext:
privileged: true


@@ -0,0 +1,36 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    k8s-app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "${pod_cidr}",
      "Backend": {
        "Type": "vxlan"
      }
    }
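The cni-conf.json above is a CNI 0.3.x plugin list: flannel first, then portmap for hostPort support. A small check that both documents render to valid JSON once ${pod_cidr} is interpolated (10.2.0.0/16 here is only an illustrative value):

```python
import json

# Trimmed copies of the ConfigMap's cni-conf.json and net-conf.json.
cni_conf = '''
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {"type": "flannel", "delegate": {"hairpinMode": true, "isDefaultGateway": true}},
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}
'''
net_conf_template = '{"Network": "${pod_cidr}", "Backend": {"Type": "vxlan"}}'

plugins = json.loads(cni_conf)["plugins"]
# Simulate Terraform interpolating ${pod_cidr} with an illustrative CIDR.
net_conf = json.loads(net_conf_template.replace("${pod_cidr}", "10.2.0.0/16"))
print([p["type"] for p in plugins])  # ['flannel', 'portmap']
print(net_conf["Backend"]["Type"])   # vxlan
```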


@@ -1,4 +1,4 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
name: kube-flannel
@@ -7,6 +7,10 @@ metadata:
tier: node
k8s-app: flannel
spec:
selector:
matchLabels:
tier: node
k8s-app: flannel
template:
metadata:
labels:
@@ -15,7 +19,7 @@ spec:
spec:
containers:
- name: kube-flannel
-image: quay.io/coreos/flannel:v0.8.0-amd64
+image: ${flannel_image}
command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=$(POD_IP)"]
securityContext:
privileged: true
@@ -40,7 +44,7 @@ spec:
- name: flannel-cfg
mountPath: /etc/kube-flannel/
- name: install-cni
-image: quay.io/coreos/flannel-cni:0.1.0
+image: ${flannel_cni_image}
command: ["/install-cni.sh"]
env:
- name: CNI_NETWORK_CONFIG


@@ -1,4 +1,4 @@
apiVersion: "extensions/v1beta1"
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
name: kube-apiserver
@@ -7,6 +7,10 @@ metadata:
tier: control-plane
k8s-app: kube-apiserver
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: kube-apiserver
template:
metadata:
labels:
@@ -14,17 +18,14 @@ spec:
k8s-app: kube-apiserver
annotations:
checkpointer.alpha.coreos.com/checkpoint: "true"
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
containers:
- name: kube-apiserver
image: ${hyperkube_image}
command:
- /usr/bin/flock
- /var/lock/api-server.lock
- /hyperkube
- apiserver
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --anonymous-auth=false
@@ -66,8 +67,6 @@ spec:
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
@@ -82,6 +81,6 @@ spec:
hostPath:
path: /var/lock
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate


@@ -1,4 +1,4 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: kube-controller-manager
@@ -8,13 +8,15 @@ metadata:
k8s-app: kube-controller-manager
spec:
replicas: 2
selector:
matchLabels:
tier: control-plane
k8s-app: kube-controller-manager
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-controller-manager
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
affinity:
podAntiAffinity:
@@ -64,8 +66,6 @@ spec:
runAsNonRoot: true
runAsUser: 65534
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule


@@ -1,4 +1,4 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: kube-dns
@@ -23,14 +23,10 @@ spec:
metadata:
labels:
k8s-app: kube-dns
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
@@ -41,7 +37,7 @@ spec:
optional: true
containers:
- name: kubedns
-image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
+image: ${kubedns_image}
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
@@ -92,7 +88,7 @@ spec:
- name: kube-dns-config
mountPath: /kube-dns-config
- name: dnsmasq
-image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
+image: ${kubedns_dnsmasq_image}
livenessProbe:
httpGet:
path: /healthcheck/dnsmasq
@@ -110,6 +106,7 @@ spec:
- --
- -k
- --cache-size=1000
- --no-negcache
- --log-facility=-
- --server=/cluster.local/127.0.0.1#10053
- --server=/in-addr.arpa/127.0.0.1#10053
@@ -130,7 +127,7 @@ spec:
- name: kube-dns-config
mountPath: /etc/k8s/dns/dnsmasq-nanny
- name: sidecar
-image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
+image: ${kubedns_sidecar_image}
livenessProbe:
httpGet:
path: /metrics


@@ -1,25 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    k8s-app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true,
        "hairpinMode": true
      }
    }
  net-conf.json: |
    {
      "Network": "${pod_cidr}",
      "Backend": {
        "Type": "vxlan"
      }
    }


@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-proxy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-proxier # Automatically created system role.
subjects:
  - kind: ServiceAccount
    name: kube-proxy
    namespace: kube-system


@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: kube-system
  name: kube-proxy


@@ -1,4 +1,4 @@
apiVersion: "extensions/v1beta1"
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
name: kube-proxy
@@ -7,13 +7,15 @@ metadata:
tier: node
k8s-app: kube-proxy
spec:
selector:
matchLabels:
tier: node
k8s-app: kube-proxy
template:
metadata:
labels:
tier: node
k8s-app: kube-proxy
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
containers:
- name: kube-proxy
@@ -33,27 +35,32 @@ spec:
securityContext:
privileged: true
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
- name: etc-kubernetes
- name: kubeconfig
mountPath: /etc/kubernetes
readOnly: true
hostNetwork: true
serviceAccountName: kube-proxy
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
- name: etc-kubernetes
- name: lib-modules
hostPath:
path: /etc/kubernetes
path: /lib/modules
- name: ssl-certs-host
hostPath:
path: /usr/share/ca-certificates
- name: kubeconfig
secret:
secretName: kubeconfig-in-cluster
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
type: RollingUpdate


@@ -1,4 +1,4 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: kube-scheduler
@@ -8,13 +8,15 @@ metadata:
k8s-app: kube-scheduler
spec:
replicas: 2
selector:
matchLabels:
tier: control-plane
k8s-app: kube-scheduler
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-scheduler
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
affinity:
podAntiAffinity:
@@ -51,8 +53,6 @@ spec:
runAsNonRoot: true
runAsUser: 65534
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule


@@ -1,4 +1,4 @@
-apiVersion: rbac.authorization.k8s.io/v1alpha1
+apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:default-sa


@@ -0,0 +1,22 @@
apiVersion: v1
kind: Secret
metadata:
name: kubeconfig-in-cluster
namespace: kube-system
stringData:
kubeconfig: |
apiVersion: v1
clusters:
- name: local
cluster:
server: ${server}
certificate-authority-data: ${ca_cert}
users:
- name: service-account
user:
# Use service account token
tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
contexts:
- context:
cluster: local
user: service-account
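The `${server}` and `${ca_cert}` placeholders above are Terraform template variables. A minimal sketch of how this manifest might be rendered, assuming the old `template_file` data source; the file path and variable wiring here are hypothetical, not taken from this module:

```hcl
# Hypothetical rendering of the kubeconfig-in-cluster manifest.
# The vars keys must match the ${server} and ${ca_cert} placeholders.
data "template_file" "kubeconfig_in_cluster" {
  template = "${file("${path.module}/resources/manifests/kubeconfig-in-cluster.yaml")}"

  vars {
    server  = "${element(var.api_servers, 0)}"   # first kube-apiserver URL
    ca_cert = "${base64encode(var.ca_cert_pem)}" # cluster CA, base64-encoded for certificate-authority-data
  }
}
```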


@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: pod-checkpointer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: pod-checkpointer
subjects:
- kind: ServiceAccount
name: pod-checkpointer
namespace: kube-system


@@ -0,0 +1,11 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pod-checkpointer
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods"]
verbs: ["get", "watch", "list"]
- apiGroups: [""] # "" indicates the core API group
resources: ["secrets", "configmaps"]
verbs: ["get"]


@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: pod-checkpointer


@@ -1,4 +1,4 @@
apiVersion: "extensions/v1beta1"
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
name: pod-checkpointer
@@ -7,6 +7,10 @@ metadata:
tier: control-plane
k8s-app: pod-checkpointer
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: pod-checkpointer
template:
metadata:
labels:
@@ -17,11 +21,11 @@ spec:
spec:
containers:
- name: pod-checkpointer
image: quay.io/coreos/pod-checkpointer:4e7a7dab10bc4d895b66c21656291c6e0b017248
image: ${pod_checkpointer_image}
command:
- /checkpoint
- --v=4
- --lock-file=/var/run/lock/pod-checkpointer.lock
- --kubeconfig=/etc/checkpointer/kubeconfig
env:
- name: NODE_NAME
valueFrom:
@@ -37,10 +41,13 @@ spec:
fieldPath: metadata.namespace
imagePullPolicy: Always
volumeMounts:
- mountPath: /etc/checkpointer
name: kubeconfig
- mountPath: /etc/kubernetes
name: etc-kubernetes
- mountPath: /var/run
name: var-run
serviceAccountName: pod-checkpointer
hostNetwork: true
nodeSelector:
node-role.kubernetes.io/master: ""
@@ -50,6 +57,9 @@ spec:
operator: Exists
effect: NoSchedule
volumes:
- name: kubeconfig
secret:
secretName: kubeconfig-in-cluster
- name: etc-kubernetes
hostPath:
path: /etc/kubernetes
@@ -57,6 +67,6 @@ spec:
hostPath:
path: /var/run
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate


@@ -2,4 +2,5 @@ cluster_name = "example"
api_servers = ["node1.example.com"]
etcd_servers = ["node1.example.com"]
asset_dir = "/home/core/mycluster"
networking = "flannel"
experimental_self_hosted_etcd = false


@@ -100,13 +100,13 @@ resource "tls_cert_request" "client" {
"${cidrhost(var.service_cidr, 20)}",
]
dns_names = "${concat(
dns_names = ["${concat(
var.etcd_servers,
list(
"localhost",
"*.kube-etcd.kube-system.svc.cluster.local",
"kube-etcd-client.kube-system.svc.cluster.local",
))}"
))}"]
}
resource "tls_locally_signed_cert" "client" {
@@ -139,20 +139,20 @@ resource "tls_cert_request" "server" {
common_name = "etcd-server"
organization = "etcd"
}
ip_addresses = [
"127.0.0.1",
"${cidrhost(var.service_cidr, 15)}",
"${cidrhost(var.service_cidr, 20)}",
]
dns_names = "${concat(
dns_names = ["${concat(
var.etcd_servers,
list(
"localhost",
"*.kube-etcd.kube-system.svc.cluster.local",
"kube-etcd-client.kube-system.svc.cluster.local",
))}"
))}"]
}
resource "tls_locally_signed_cert" "server" {
@@ -185,17 +185,17 @@ resource "tls_cert_request" "peer" {
common_name = "etcd-peer"
organization = "etcd"
}
ip_addresses = [
"${cidrhost(var.service_cidr, 20)}"
"${cidrhost(var.service_cidr, 20)}",
]
dns_names = "${concat(
dns_names = ["${concat(
var.etcd_servers,
list(
"*.kube-etcd.kube-system.svc.cluster.local",
"kube-etcd-client.kube-system.svc.cluster.local",
))}"
))}"]
}
resource "tls_locally_signed_cert" "peer" {


@@ -4,12 +4,12 @@ variable "cluster_name" {
}
variable "api_servers" {
description = "URL used to reach kube-apiserver"
description = "List of URLs used to reach kube-apiserver"
type = "list"
}
variable "etcd_servers" {
description = "List of etcd server URLs including protocol, host, and port. Ignored if experimental self-hosted etcd is enabled."
description = "List of URLs used to reach etcd servers. Ignored if experimental self-hosted etcd is enabled."
type = "list"
}
@@ -29,6 +29,18 @@ variable "cloud_provider" {
default = ""
}
variable "networking" {
description = "Choice of networking provider (flannel or calico)"
type = "string"
default = "flannel"
}
variable "network_mtu" {
description = "CNI interface MTU (applies to calico only)"
type = "string"
default = "1500"
}
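The two new variables can be overridden from `terraform.tfvars`; a sketch, with an illustrative MTU value (1480 leaves room for Calico's 20-byte IP-in-IP encapsulation header on a standard 1500-byte link):

```hcl
# Example terraform.tfvars overrides for the new networking variables
networking  = "calico"
network_mtu = "1480"
```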
variable "pod_cidr" {
description = "CIDR IP range to assign Kubernetes pods"
type = "string"
@@ -50,8 +62,18 @@ variable "container_images" {
type = "map"
default = {
hyperkube = "quay.io/coreos/hyperkube:v1.7.3_coreos.0"
etcd = "quay.io/coreos/etcd:v3.1.8"
calico = "quay.io/calico/node:v2.6.3"
calico_cni = "quay.io/calico/cni:v1.11.1"
etcd = "quay.io/coreos/etcd:v3.1.8"
etcd_operator = "quay.io/coreos/etcd-operator:v0.5.0"
etcd_checkpointer = "quay.io/coreos/kenc:0.0.2"
flannel = "quay.io/coreos/flannel:v0.9.1-amd64"
flannel_cni = "quay.io/coreos/flannel-cni:v0.3.0"
hyperkube = "gcr.io/google_containers/hyperkube:v1.8.4"
kubedns = "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5"
kubedns_dnsmasq = "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5"
kubedns_sidecar = "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5"
pod_checkpointer = "quay.io/coreos/pod-checkpointer:e22cc0e3714378de92f45326474874eb602ca0ac"
}
}