30 Commits

Author SHA1 Message Date
Dalton Hubble
df22b04db7 Update README to correspond to bootkube v0.9.1 2017-12-15 01:40:25 -08:00
Dalton Hubble
6dc7630020 Fix Terraform formatting with fmt 2017-12-13 00:58:26 -08:00
Dalton Hubble
3ec47194ce Rename cluster_dns_fqdn variable to cluster_domain_suffix 2017-12-13 00:11:16 -08:00
Barak Michener
03ca146ef3 Add option for Cluster DNS having a FQDN other than cluster.local 2017-12-12 10:17:53 -08:00
Dalton Hubble
5763b447de Remove self-hosted etcd TLS cert SANs
* Remove the defunct self-hosted etcd service IP
2017-12-12 00:30:04 -08:00
Dalton Hubble
36243ff89b Update pod-checkpointer and drop ClusterRole to Role
* pod-checkpointer no longer needs to watch pods in all namespaces;
it should only have permission to watch kube-system
* https://github.com/kubernetes-incubator/bootkube/pull/784
2017-12-12 00:10:55 -08:00
Dalton Hubble
810ddfad9f Add controller-manager flag for service_cidr
* controller-manager can handle overlapping pod and service CIDRs
and avoid address collisions, if it's informed of both ranges
* Still favor non-overlapping pod and service ranges, of course
* https://github.com/kubernetes-incubator/bootkube/pull/797
2017-12-12 00:00:26 -08:00
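A minimal sketch of wiring distinct ranges through this module (values are illustrative; the required variables mirror the README usage example below):

```hcl
module "bootkube" {
  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=SHA"

  cluster_name = "example"
  api_servers  = ["node1.example.com"]
  etcd_servers = ["node1.example.com"]
  asset_dir    = "/home/core/mycluster"

  # Both ranges are rendered into the controller-manager flags
  # --cluster-cidr and --service-cluster-ip-range; keeping them
  # disjoint remains the favored configuration.
  pod_cidr     = "10.2.0.0/16"
  service_cidr = "10.3.0.0/24"
}
```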
Dalton Hubble
ec48758c5e Remove experimental self-hosted etcd options 2017-12-11 21:51:07 -08:00
Dalton Hubble
533e82f833 Update hyperkube from v1.8.4 to v1.8.5 2017-12-08 08:46:22 -08:00
Dalton Hubble
31cfae5789 Update README to correspond to v0.9.0 2017-12-01 22:13:33 -08:00
Dalton Hubble
680244706c Update Calico from v2.6.1 to v2.6.3
* Bug fixes for Calico 2.6.x
https://github.com/projectcalico/calico/releases/tag/v2.6.3
* Bug fixes for cni-plugin (i.e. cni) v1.11.x
https://github.com/projectcalico/cni-plugin/releases/tag/v1.11.1
2017-11-28 21:33:51 -08:00
Dalton Hubble
dbcf3b599f Remove flock from bootstrap-apiserver and kube-apiserver
* https://github.com/kubernetes-incubator/bootkube/pull/616
2017-11-28 21:13:15 -08:00
Dalton Hubble
b7b56a6e55 Update hyperkube from v1.8.3 to v1.8.4 2017-11-28 21:11:52 -08:00
Dalton Hubble
a613c7dfa6 Remove unused critical-pod annotations in manifests
* https://github.com/kubernetes-incubator/bootkube/pull/777
2017-11-28 21:10:05 -08:00
Dalton Hubble
ab4d7becce Disable Calico termination grace period
* Disable termination grace period to account for Kubernetes v1.8
changes to DaemonSet rolling behavior
* https://github.com/projectcalico/calico/pull/1293
* Fix IPIP mode casing https://github.com/projectcalico/calico/pull/1233
2017-11-17 00:40:25 -08:00
Dalton Hubble
4d85d9c0d1 Update flannel version from v0.9.0 to v0.9.1
* https://github.com/kubernetes-incubator/bootkube/pull/776
2017-11-17 00:38:37 -08:00
Dalton Hubble
ec5f86b014 Use service accounts for kube-proxy and pod-checkpointer
* Create separate service accounts for kube-proxy and pod-checkpointer
* Switch kube-proxy and pod-checkpointer to use a kubeconfig that
references the local service account, rather than the host kubeconfig
* https://github.com/kubernetes-incubator/bootkube/pull/767
2017-11-17 00:33:22 -08:00
Dalton Hubble
92ff0f253a Update README to correspond to bootkube v0.8.2 2017-11-10 19:54:35 -08:00
Dalton Hubble
4f6af5b811 Update hyperkube from v1.8.2 to v1.8.3
* https://github.com/kubernetes-incubator/bootkube/pull/765
2017-11-08 21:48:21 -08:00
Dalton Hubble
f76e58b56d Update checkpointer with state machine impl
* https://github.com/kubernetes-incubator/bootkube/pull/759
2017-11-08 21:45:01 -08:00
Dalton Hubble
383aba4e8e Add /lib/modules mount to kube-proxy
* Starting in Kubernetes v1.8, kube-proxy modprobes ipvs
* kube-proxy still uses iptables, but may switch to ipvs in the
future; this prepares the way for that to happen
* https://github.com/kubernetes-incubator/bootkube/issues/741
2017-11-08 21:39:07 -08:00
Dalton Hubble
aebb45e6e9 Update README to correspond to bootkube v0.8.1 2017-10-28 12:44:06 -07:00
Dalton Hubble
b6b320ef6a Update hyperkube from v1.8.1 to v1.8.2
* v1.8.2 includes an apiserver memory leak fix
2017-10-24 21:27:46 -07:00
Dalton Hubble
9f4ffe273b Switch hyperkube from quay.io/coreos to gcr.io/google_containers
* Use the official Kubernetes hyperkube image
* Patches in quay.io/coreos/hyperkube are no longer needed
for kubernetes-incubator/bootkube clusters starting in
Kubernetes 1.8
2017-10-22 17:05:52 -07:00
Dalton Hubble
74366f6076 Enable hairpinMode in flannel CNI config
* Allow pods to communicate with themselves via service IP
* https://github.com/coreos/flannel/pull/849
2017-10-22 13:51:46 -07:00
Dalton Hubble
db7c13f5ee Update flannel from v0.8.0-amd64 to v0.9.0-amd64 2017-10-22 13:48:14 -07:00
Dalton Hubble
3ac28c9210 Add --no-negcache flag to dnsmasq args
* e1d6bcc227
2017-10-21 17:15:19 -07:00
Dalton Hubble
64748203ba Update assets generation for bootkube v0.8.0
* Update from Kubernetes v1.7.7 to v1.8.1
2017-10-19 20:48:24 -07:00
Dalton Hubble
262cc49856 Update README intro, repo name, and links 2017-10-08 23:00:58 -07:00
Dalton Hubble
125f29d43d Render images from the container_images map variable
* Container images may be customized to facilitate using mirrored
images or developing with custom images
2017-10-08 22:29:26 -07:00
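A hedged sketch of overriding the map, e.g. to pull from a mirror (the mirror host is illustrative; depending on the Terraform version in use, unlisted keys may need to be supplied explicitly rather than falling back to the defaults in variables.tf):

```hcl
module "bootkube" {
  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=SHA"

  # ...required variables as in the README usage example...

  # Keys must match those declared in variables.tf.
  container_images = {
    hyperkube = "mirror.example.com/google_containers/hyperkube:v1.8.5"
  }
}
```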
37 changed files with 197 additions and 430 deletions

View File

@@ -1,18 +1,18 @@
# bootkube-terraform
# terraform-render-bootkube
`bootkube-terraform` is Terraform module that renders [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrapping assets. It functions as a low-level component of the [Typhoon](https://github.com/poseidon/typhoon) Kubernetes distribution.
`terraform-render-bootkube` is a Terraform module that renders [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube) assets for bootstrapping a Kubernetes cluster.
The module provides many of the same variable names, defaults, features, and outputs as running `bootkube render` directly.
## Audience
`terraform-render-bootkube` is a low-level component of the [Typhoon](https://github.com/poseidon/typhoon) Kubernetes distribution. Use Typhoon modules to create and manage Kubernetes clusters across supported platforms. Use the bootkube module if you'd like to customize a Kubernetes control plane or build your own distribution.
## Usage
Use [Typhoon](https://github.com/poseidon/typhoon) to create and manage Kubernetes clusters in different environments. Use `bootkube-terraform` if you require low-level customizations to the control plane or wish to build your own distribution.
Add the `bootkube-terraform` module alongside existing Terraform configs. Provide the variables listed in `variables.tf` or check `terraform.tfvars.example` for examples.
Use the module to declare bootkube assets. Check [variables.tf](variables.tf) for options and [terraform.tfvars.example](terraform.tfvars.example) for examples.
```hcl
module "bootkube" {
source = "git://https://github.com/poseidon/bootkube-terraform.git?ref=SHA"
source = "git://https://github.com/poseidon/terraform-render-bootkube.git?ref=SHA"
cluster_name = "example"
api_servers = ["node1.example.com"]
@@ -21,21 +21,20 @@ module "bootkube" {
}
```
Alternately, use a local checkout of this repo and copy `terraform.tfvars.example` to `terraform.tfvars` to generate assets without an existing terraform config repo.
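For example, a minimal `terraform.tfvars` along those lines (values mirror terraform.tfvars.example and are illustrative):

```hcl
cluster_name = "example"
api_servers  = ["node1.example.com"]
etcd_servers = ["node1.example.com"]
asset_dir    = "/home/core/mycluster"
networking   = "flannel"
```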
Generate the bootkube assets.
Generate the assets.
```sh
terraform get
terraform init
terraform get --update
terraform plan
terraform apply
```
Find bootkube assets rendered to the `asset_dir` path. That's it.
### Comparison
Render bootkube assets directly with bootkube v0.7.0.
#### On-host etcd
Render bootkube assets directly with bootkube v0.9.1.
```sh
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --etcd-servers=https://node1.example.com:2379
@@ -49,21 +48,3 @@ mv manifests-networking/* manifests
popd
diff -rw assets /home/core/mycluster
```
#### Self-hosted etcd
```sh
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --experimental-self-hosted-etcd
```
Compare assets. Note that experimental must be generated to a separate directory for terraform applies to sync. Move the experimental `bootstrap-manifests` and `manifests` files during deployment.
```sh
pushd /home/core/mycluster
mv experimental/bootstrap-manifests/* bootstrap-manifests
mv experimental/manifests/* manifests
mv manifests-networking/* manifests
popd
diff -rw assets /home/core/mycluster
```

View File

@@ -5,7 +5,7 @@ resource "template_dir" "bootstrap-manifests" {
vars {
hyperkube_image = "${var.container_images["hyperkube"]}"
etcd_servers = "${var.experimental_self_hosted_etcd ? format("https://%s:2379,https://127.0.0.1:12379", cidrhost(var.service_cidr, 15)) : join(",", formatlist("https://%s:2379", var.etcd_servers))}"
etcd_servers = "${join(",", formatlist("https://%s:2379", var.etcd_servers))}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
@@ -19,15 +19,22 @@ resource "template_dir" "manifests" {
destination_dir = "${var.asset_dir}/manifests"
vars {
hyperkube_image = "${var.container_images["hyperkube"]}"
etcd_servers = "${var.experimental_self_hosted_etcd ? format("https://%s:2379", cidrhost(var.service_cidr, 15)) : join(",", formatlist("https://%s:2379", var.etcd_servers))}"
hyperkube_image = "${var.container_images["hyperkube"]}"
pod_checkpointer_image = "${var.container_images["pod_checkpointer"]}"
kubedns_image = "${var.container_images["kubedns"]}"
kubedns_dnsmasq_image = "${var.container_images["kubedns_dnsmasq"]}"
kubedns_sidecar_image = "${var.container_images["kubedns_sidecar"]}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
kube_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
etcd_servers = "${join(",", formatlist("https://%s:2379", var.etcd_servers))}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
kube_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
ca_cert = "${base64encode(var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_certificate)}"
server = "${format("https://%s:443", element(var.api_servers, 0))}"
apiserver_key = "${base64encode(tls_private_key.apiserver.private_key_pem)}"
apiserver_cert = "${base64encode(tls_locally_signed_cert.apiserver.cert_pem)}"
serviceaccount_pub = "${base64encode(tls_private_key.service-account.public_key_pem)}"
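As a note on the `cidrhost` interpolation above: it returns the Nth host address within a CIDR range, which is how `kube_dns_service_ip` is derived from `service_cidr`. A self-contained sketch (the output name is illustrative):

```hcl
# With the default service_cidr of "10.3.0.0/24", this renders "10.3.0.10",
# matching the 10th-IP reservation for kube-dns described in variables.tf.
output "kube_dns_service_ip_example" {
  value = "${cidrhost("10.3.0.0/24", 10)}"
}
```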

View File

@@ -1,4 +1,4 @@
# Assets generated only when experimental self-hosted etcd is enabled
# Assets generated only when certain options are chosen
resource "template_dir" "flannel-manifests" {
count = "${var.networking == "flannel" ? 1 : 0}"
@@ -6,6 +6,9 @@ resource "template_dir" "flannel-manifests" {
destination_dir = "${var.asset_dir}/manifests-networking"
vars {
flannel_image = "${var.container_images["flannel"]}"
flannel_cni_image = "${var.container_images["flannel_cni"]}"
pod_cidr = "${var.pod_cidr}"
}
}
@@ -16,51 +19,10 @@ resource "template_dir" "calico-manifests" {
destination_dir = "${var.asset_dir}/manifests-networking"
vars {
calico_image = "${var.container_images["calico"]}"
calico_cni_image = "${var.container_images["calico_cni"]}"
network_mtu = "${var.network_mtu}"
pod_cidr = "${var.pod_cidr}"
}
}
# bootstrap-etcd.yaml pod bootstrap-manifest
resource "template_dir" "experimental-bootstrap-manifests" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
source_dir = "${path.module}/resources/experimental/bootstrap-manifests"
destination_dir = "${var.asset_dir}/experimental/bootstrap-manifests"
vars {
etcd_image = "${var.container_images["etcd"]}"
bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 20)}"
}
}
# etcd subfolder - bootstrap-etcd-service.json and migrate-etcd-cluster.json TPR
resource "template_dir" "etcd-subfolder" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
source_dir = "${path.module}/resources/etcd"
destination_dir = "${var.asset_dir}/etcd"
vars {
bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 20)}"
}
}
# etcd-operator deployment and etcd-service manifests
# etcd client, server, and peer tls secrets
resource "template_dir" "experimental-manifests" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
source_dir = "${path.module}/resources/experimental/manifests"
destination_dir = "${var.asset_dir}/experimental/manifests"
vars {
etcd_service_ip = "${cidrhost(var.service_cidr, 15)}"
# Self-hosted etcd TLS certs / keys
etcd_ca_cert = "${base64encode(tls_self_signed_cert.etcd-ca.cert_pem)}"
etcd_client_cert = "${base64encode(tls_locally_signed_cert.client.cert_pem)}"
etcd_client_key = "${base64encode(tls_private_key.client.private_key_pem)}"
etcd_server_cert = "${base64encode(tls_locally_signed_cert.server.cert_pem)}"
etcd_server_key = "${base64encode(tls_private_key.server.private_key_pem)}"
etcd_peer_cert = "${base64encode(tls_locally_signed_cert.peer.cert_pem)}"
etcd_peer_key = "${base64encode(tls_private_key.peer.private_key_pem)}"
}
}

View File

@@ -10,10 +10,6 @@ output "kube_dns_service_ip" {
value = "${cidrhost(var.service_cidr, 10)}"
}
output "etcd_service_ip" {
value = "${cidrhost(var.service_cidr, 15)}"
}
output "kubeconfig" {
value = "${data.template_file.kubeconfig.rendered}"
}

View File

@@ -8,11 +8,9 @@ spec:
- name: kube-apiserver
image: ${hyperkube_image}
command:
- /usr/bin/flock
- /var/lock/api-server.lock
- /hyperkube
- apiserver
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --authorization-mode=RBAC

View File

@@ -12,6 +12,7 @@ spec:
- controller-manager
- --allocate-node-cidrs=true
- --cluster-cidr=${pod_cidr}
- --service-cluster-ip-range=${service_cidr}
- --cloud-provider=${cloud_provider}
- --configure-cloud-routes=false
- --kubeconfig=/etc/kubernetes/kubeconfig

View File

@@ -1,4 +1,4 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node

View File

@@ -1,4 +1,4 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: calico-node

View File

@@ -1,4 +1,4 @@
apiVersion: extensions/v1beta1
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
name: calico-node
@@ -13,8 +13,6 @@ spec:
metadata:
labels:
k8s-app: calico-node
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
hostNetwork: true
serviceAccountName: calico-node
@@ -22,12 +20,9 @@ spec:
# Allow the pod to run on master nodes
- key: node-role.kubernetes.io/master
effect: NoSchedule
# Mark the pod as a critical add-on for rescheduling
- key: "CriticalAddonsOnly"
operator: "Exists"
containers:
- name: calico-node
image: quay.io/calico/node:v2.6.1
image: ${calico_image}
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
@@ -58,7 +53,7 @@ spec:
value: "${pod_cidr}"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "always"
value: "Always"
# Enable IP-in-IP within Felix.
- name: FELIX_IPINIPENABLED
value: "true"
@@ -99,7 +94,7 @@ spec:
readOnly: false
# Install Calico CNI binaries and CNI network config file on nodes
- name: install-cni
image: quay.io/calico/cni:v1.11.0
image: ${calico_cni_image}
command: ["/install-cni.sh"]
env:
- name: CNI_NETWORK_CONFIG
@@ -119,6 +114,7 @@ spec:
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
terminationGracePeriodSeconds: 0
volumes:
- name: lib-modules
hostPath:

View File

@@ -1,26 +0,0 @@
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"name": "bootstrap-etcd-service",
"namespace": "kube-system"
},
"spec": {
"selector": {
"k8s-app": "boot-etcd"
},
"clusterIP": "${bootstrap_etcd_service_ip}",
"ports": [
{
"name": "client",
"port": 12379,
"protocol": "TCP"
},
{
"name": "peers",
"port": 12380,
"protocol": "TCP"
}
]
}
}

View File

@@ -1,36 +0,0 @@
{
"apiVersion": "etcd.database.coreos.com/v1beta2",
"kind": "EtcdCluster",
"metadata": {
"name": "kube-etcd",
"namespace": "kube-system"
},
"spec": {
"size": 1,
"version": "v3.1.8",
"pod": {
"nodeSelector": {
"node-role.kubernetes.io/master": ""
},
"tolerations": [
{
"key": "node-role.kubernetes.io/master",
"operator": "Exists",
"effect": "NoSchedule"
}
]
},
"selfHosted": {
"bootMemberClientEndpoint": "https://${bootstrap_etcd_service_ip}:12379"
},
"TLS": {
"static": {
"member": {
"peerSecret": "etcd-peer-tls",
"serverSecret": "etcd-server-tls"
},
"operatorSecret": "etcd-client-tls"
}
}
}
}

View File

@@ -1,41 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: bootstrap-etcd
namespace: kube-system
labels:
k8s-app: boot-etcd
spec:
containers:
- name: etcd
image: ${etcd_image}
command:
- /usr/local/bin/etcd
- --name=boot-etcd
- --listen-client-urls=https://0.0.0.0:12379
- --listen-peer-urls=https://0.0.0.0:12380
- --advertise-client-urls=https://${bootstrap_etcd_service_ip}:12379
- --initial-advertise-peer-urls=https://${bootstrap_etcd_service_ip}:12380
- --initial-cluster=boot-etcd=https://${bootstrap_etcd_service_ip}:12380
- --initial-cluster-token=bootkube
- --initial-cluster-state=new
- --data-dir=/var/etcd/data
- --peer-client-cert-auth=true
- --peer-trusted-ca-file=/etc/kubernetes/secrets/etcd/peer-ca.crt
- --peer-cert-file=/etc/kubernetes/secrets/etcd/peer.crt
- --peer-key-file=/etc/kubernetes/secrets/etcd/peer.key
- --client-cert-auth=true
- --trusted-ca-file=/etc/kubernetes/secrets/etcd/server-ca.crt
- --cert-file=/etc/kubernetes/secrets/etcd/server.crt
- --key-file=/etc/kubernetes/secrets/etcd/server.key
volumeMounts:
- mountPath: /etc/kubernetes/secrets
name: secrets
readOnly: true
volumes:
- name: secrets
hostPath:
path: /etc/kubernetes/bootstrap-secrets
hostNetwork: true
restartPolicy: Never
dnsPolicy: ClusterFirstWithHostNet

View File

@@ -1,10 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: etcd-client-tls
namespace: kube-system
type: Opaque
data:
etcd-client-ca.crt: ${etcd_ca_cert}
etcd-client.crt: ${etcd_client_cert}
etcd-client.key: ${etcd_client_key}

View File

@@ -1,43 +0,0 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: etcd-operator
namespace: kube-system
labels:
k8s-app: etcd-operator
spec:
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
replicas: 1
template:
metadata:
labels:
k8s-app: etcd-operator
spec:
containers:
- name: etcd-operator
image: quay.io/coreos/etcd-operator:v0.5.0
command:
- /usr/local/bin/etcd-operator
- --analytics=false
env:
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
nodeSelector:
node-role.kubernetes.io/master: ""
securityContext:
runAsNonRoot: true
runAsUser: 65534
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule

View File

@@ -1,10 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: etcd-peer-tls
namespace: kube-system
type: Opaque
data:
peer-ca.crt: ${etcd_ca_cert}
peer.crt: ${etcd_peer_cert}
peer.key: ${etcd_peer_key}

View File

@@ -1,10 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: etcd-server-tls
namespace: kube-system
type: Opaque
data:
server-ca.crt: ${etcd_ca_cert}
server.crt: ${etcd_server_cert}
server.key: ${etcd_server_key}

View File

@@ -1,18 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: etcd-service
namespace: kube-system
# This alpha annotation will retain the endpoints even if the etcd pod isn't ready.
# This feature is always enabled in endpoint controller in k8s even it is alpha.
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
selector:
app: etcd
etcd_cluster: kube-etcd
clusterIP: ${etcd_service_ip}
ports:
- name: client
port: 2379
protocol: TCP

View File

@@ -1,58 +0,0 @@
apiVersion: "extensions/v1beta1"
kind: DaemonSet
metadata:
name: kube-etcd-network-checkpointer
namespace: kube-system
labels:
tier: control-plane
k8s-app: kube-etcd-network-checkpointer
spec:
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-etcd-network-checkpointer
annotations:
checkpointer.alpha.coreos.com/checkpoint: "true"
spec:
containers:
- image: quay.io/coreos/kenc:0.0.2
name: kube-etcd-network-checkpointer
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/kubernetes/selfhosted-etcd
name: checkpoint-dir
readOnly: false
- mountPath: /var/etcd
name: etcd-dir
readOnly: false
- mountPath: /var/lock
name: var-lock
readOnly: false
command:
- /usr/bin/flock
- /var/lock/kenc.lock
- -c
- "kenc -r -m iptables && kenc -m iptables"
hostNetwork: true
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- name: checkpoint-dir
hostPath:
path: /etc/kubernetes/checkpoint-iptables
- name: etcd-dir
hostPath:
path: /var/etcd
- name: var-lock
hostPath:
path: /var/lock
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate

View File

@@ -15,6 +15,7 @@ data:
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},

View File

@@ -1,4 +1,4 @@
apiVersion: extensions/v1beta1
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
name: kube-flannel
@@ -7,6 +7,10 @@ metadata:
tier: node
k8s-app: flannel
spec:
selector:
matchLabels:
tier: node
k8s-app: flannel
template:
metadata:
labels:
@@ -15,7 +19,7 @@ spec:
spec:
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.8.0-amd64
image: ${flannel_image}
command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=$(POD_IP)"]
securityContext:
privileged: true
@@ -40,7 +44,7 @@ spec:
- name: flannel-cfg
mountPath: /etc/kube-flannel/
- name: install-cni
image: quay.io/coreos/flannel-cni:v0.3.0
image: ${flannel_cni_image}
command: ["/install-cni.sh"]
env:
- name: CNI_NETWORK_CONFIG

View File

@@ -1,4 +1,4 @@
apiVersion: "extensions/v1beta1"
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
name: kube-apiserver
@@ -7,6 +7,10 @@ metadata:
tier: control-plane
k8s-app: kube-apiserver
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: kube-apiserver
template:
metadata:
labels:
@@ -14,17 +18,14 @@ spec:
k8s-app: kube-apiserver
annotations:
checkpointer.alpha.coreos.com/checkpoint: "true"
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
containers:
- name: kube-apiserver
image: ${hyperkube_image}
command:
- /usr/bin/flock
- /var/lock/api-server.lock
- /hyperkube
- apiserver
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --anonymous-auth=false
@@ -66,8 +67,6 @@ spec:
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule

View File

@@ -1,4 +1,4 @@
apiVersion: extensions/v1beta1
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: kube-controller-manager
@@ -8,13 +8,15 @@ metadata:
k8s-app: kube-controller-manager
spec:
replicas: 2
selector:
matchLabels:
tier: control-plane
k8s-app: kube-controller-manager
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-controller-manager
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
affinity:
podAntiAffinity:
@@ -41,6 +43,7 @@ spec:
- --allocate-node-cidrs=true
- --cloud-provider=${cloud_provider}
- --cluster-cidr=${pod_cidr}
- --service-cluster-ip-range=${service_cidr}
- --configure-cloud-routes=false
- --leader-elect=true
- --root-ca-file=/etc/kubernetes/secrets/ca.crt
@@ -64,8 +67,6 @@ spec:
runAsNonRoot: true
runAsUser: 65534
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule

View File

@@ -1,4 +1,4 @@
apiVersion: extensions/v1beta1
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: kube-dns
@@ -23,14 +23,10 @@ spec:
metadata:
labels:
k8s-app: kube-dns
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
@@ -41,7 +37,7 @@ spec:
optional: true
containers:
- name: kubedns
image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
image: ${kubedns_image}
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
@@ -71,7 +67,7 @@ spec:
initialDelaySeconds: 3
timeoutSeconds: 5
args:
- --domain=cluster.local.
- --domain=${cluster_domain_suffix}.
- --dns-port=10053
- --config-dir=/kube-dns-config
- --v=2
@@ -92,7 +88,7 @@ spec:
- name: kube-dns-config
mountPath: /kube-dns-config
- name: dnsmasq
image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
image: ${kubedns_dnsmasq_image}
livenessProbe:
httpGet:
path: /healthcheck/dnsmasq
@@ -110,8 +106,9 @@ spec:
- --
- -k
- --cache-size=1000
- --no-negcache
- --log-facility=-
- --server=/cluster.local/127.0.0.1#10053
- --server=/${cluster_domain_suffix}/127.0.0.1#10053
- --server=/in-addr.arpa/127.0.0.1#10053
- --server=/ip6.arpa/127.0.0.1#10053
ports:
@@ -130,7 +127,7 @@ spec:
- name: kube-dns-config
mountPath: /etc/k8s/dns/dnsmasq-nanny
- name: sidecar
image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5
image: ${kubedns_sidecar_image}
livenessProbe:
httpGet:
path: /metrics
@@ -143,8 +140,8 @@ spec:
args:
- --v=2
- --logtostderr
- --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
- --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
- --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.${cluster_domain_suffix},5,A
- --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.${cluster_domain_suffix},5,A
ports:
- containerPort: 10054
name: metrics

View File

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-proxy
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node-proxier # Automatically created system role.
subjects:
- kind: ServiceAccount
name: kube-proxy
namespace: kube-system

View File

@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: kube-proxy

View File

@@ -1,4 +1,4 @@
apiVersion: "extensions/v1beta1"
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
name: kube-proxy
@@ -7,13 +7,15 @@ metadata:
tier: node
k8s-app: kube-proxy
spec:
selector:
matchLabels:
tier: node
k8s-app: kube-proxy
template:
metadata:
labels:
tier: node
k8s-app: kube-proxy
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
containers:
- name: kube-proxy
@@ -33,26 +35,31 @@ spec:
securityContext:
privileged: true
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
- name: etc-kubernetes
- name: kubeconfig
mountPath: /etc/kubernetes
readOnly: true
hostNetwork: true
serviceAccountName: kube-proxy
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
- name: etc-kubernetes
- name: lib-modules
hostPath:
path: /etc/kubernetes
path: /lib/modules
- name: ssl-certs-host
hostPath:
path: /usr/share/ca-certificates
- name: kubeconfig
secret:
secretName: kubeconfig-in-cluster
updateStrategy:
rollingUpdate:
maxUnavailable: 1

View File

@@ -1,4 +1,4 @@
apiVersion: extensions/v1beta1
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: kube-scheduler
@@ -8,13 +8,15 @@ metadata:
k8s-app: kube-scheduler
spec:
replicas: 2
selector:
matchLabels:
tier: control-plane
k8s-app: kube-scheduler
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-scheduler
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
affinity:
podAntiAffinity:
@@ -51,8 +53,6 @@ spec:
runAsNonRoot: true
runAsUser: 65534
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule

View File

@@ -1,4 +1,4 @@
apiVersion: rbac.authorization.k8s.io/v1alpha1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:default-sa

View File

@@ -0,0 +1,22 @@
apiVersion: v1
kind: Secret
metadata:
name: kubeconfig-in-cluster
namespace: kube-system
stringData:
kubeconfig: |
apiVersion: v1
clusters:
- name: local
cluster:
server: ${server}
certificate-authority-data: ${ca_cert}
users:
- name: service-account
user:
# Use service account token
tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
contexts:
- context:
cluster: local
user: service-account

View File

@@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pod-checkpointer
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pod-checkpointer
subjects:
- kind: ServiceAccount
name: pod-checkpointer
namespace: kube-system

View File

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-checkpointer
namespace: kube-system
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods"]
verbs: ["get", "watch", "list"]
- apiGroups: [""] # "" indicates the core API group
resources: ["secrets", "configmaps"]
verbs: ["get"]

View File

@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: pod-checkpointer

View File

@@ -1,4 +1,4 @@
apiVersion: "extensions/v1beta1"
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
name: pod-checkpointer
@@ -7,6 +7,10 @@ metadata:
tier: control-plane
k8s-app: pod-checkpointer
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: pod-checkpointer
template:
metadata:
labels:
@@ -17,11 +21,11 @@ spec:
spec:
containers:
- name: pod-checkpointer
image: quay.io/coreos/pod-checkpointer:abdcbc46df985b832cccf805b34f4652a0ca9d56
image: ${pod_checkpointer_image}
command:
- /checkpoint
- --v=4
- --lock-file=/var/run/lock/pod-checkpointer.lock
- --kubeconfig=/etc/checkpointer/kubeconfig
env:
- name: NODE_NAME
valueFrom:
@@ -37,10 +41,13 @@ spec:
fieldPath: metadata.namespace
imagePullPolicy: Always
volumeMounts:
- mountPath: /etc/checkpointer
name: kubeconfig
- mountPath: /etc/kubernetes
name: etc-kubernetes
- mountPath: /var/run
name: var-run
serviceAccountName: pod-checkpointer
hostNetwork: true
nodeSelector:
node-role.kubernetes.io/master: ""
@@ -50,6 +57,9 @@ spec:
operator: Exists
effect: NoSchedule
volumes:
- name: kubeconfig
secret:
secretName: kubeconfig-in-cluster
- name: etc-kubernetes
hostPath:
path: /etc/kubernetes

View File

@@ -3,4 +3,3 @@ api_servers = ["node1.example.com"]
etcd_servers = ["node1.example.com"]
asset_dir = "/home/core/mycluster"
networking = "flannel"
experimental_self_hosted_etcd = false

View File

@@ -96,16 +96,12 @@ resource "tls_cert_request" "client" {
ip_addresses = [
"127.0.0.1",
"${cidrhost(var.service_cidr, 15)}",
"${cidrhost(var.service_cidr, 20)}",
]
dns_names = ["${concat(
var.etcd_servers,
list(
"localhost",
"*.kube-etcd.kube-system.svc.cluster.local",
"kube-etcd-client.kube-system.svc.cluster.local",
))}"]
}
@@ -142,16 +138,12 @@ resource "tls_cert_request" "server" {
ip_addresses = [
"127.0.0.1",
"${cidrhost(var.service_cidr, 15)}",
"${cidrhost(var.service_cidr, 20)}",
]
dns_names = ["${concat(
var.etcd_servers,
list(
"localhost",
"*.kube-etcd.kube-system.svc.cluster.local",
"kube-etcd-client.kube-system.svc.cluster.local",
))}"]
}
@@ -186,16 +178,7 @@ resource "tls_cert_request" "peer" {
organization = "etcd"
}
ip_addresses = [
"${cidrhost(var.service_cidr, 20)}",
]
dns_names = ["${concat(
var.etcd_servers,
list(
"*.kube-etcd.kube-system.svc.cluster.local",
"kube-etcd-client.kube-system.svc.cluster.local",
))}"]
dns_names = ["${var.etcd_servers}"]
}
resource "tls_locally_signed_cert" "peer" {

View File

@@ -70,7 +70,7 @@ resource "tls_cert_request" "apiserver" {
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster.local",
"kubernetes.default.svc.${var.cluster_domain_suffix}",
]
ip_addresses = [

View File

@@ -9,15 +9,10 @@ variable "api_servers" {
}
variable "etcd_servers" {
description = "List of URLs used to reach etcd servers. Ignored if experimental self-hosted etcd is enabled."
description = "List of URLs used to reach etcd servers."
type = "list"
}
variable "experimental_self_hosted_etcd" {
description = "(Experimental) Create self-hosted etcd assets"
default = false
}
variable "asset_dir" {
description = "Path to a directory where generated assets should be placed (contains secrets)"
type = "string"
@@ -50,20 +45,33 @@ variable "pod_cidr" {
variable "service_cidr" {
description = <<EOD
CIDR IP range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns, the 15th IP will be reserved for self-hosted etcd, and the 20th IP will be reserved for bootstrap self-hosted etcd.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
EOD
type = "string"
default = "10.3.0.0/24"
}
variable "cluster_domain_suffix" {
description = "Queries for domains with the suffix will be answered by kube-dns"
type = "string"
default = "cluster.local"
}
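A hedged sketch of overriding the suffix from a calling config (the domain is illustrative); the value is templated into the apiserver certificate SANs and the kube-dns and dnsmasq flags shown in the manifests above:

```hcl
module "bootkube" {
  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=SHA"

  # ...required variables as in the README usage example...

  # Queries for this suffix are answered by kube-dns instead of
  # the default "cluster.local".
  cluster_domain_suffix = "cluster.example.com"
}
```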
variable "container_images" {
description = "Container images to use"
type = "map"
default = {
hyperkube = "quay.io/coreos/hyperkube:v1.7.7_coreos.0"
etcd = "quay.io/coreos/etcd:v3.1.8"
calico = "quay.io/calico/node:v2.6.3"
calico_cni = "quay.io/calico/cni:v1.11.1"
flannel = "quay.io/coreos/flannel:v0.9.1-amd64"
flannel_cni = "quay.io/coreos/flannel-cni:v0.3.0"
hyperkube = "gcr.io/google_containers/hyperkube:v1.8.5"
kubedns = "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5"
kubedns_dnsmasq = "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5"
kubedns_sidecar = "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5"
pod_checkpointer = "quay.io/coreos/pod-checkpointer:08fa021813231323e121ecca7383cc64c4afe888"
}
}