41 Commits

Author SHA1 Message Date
Dalton Hubble
92ff0f253a Update README to correspond to bootkube v0.8.2 2017-11-10 19:54:35 -08:00
Dalton Hubble
4f6af5b811 Update hyperkube from v1.8.2 to v1.8.3
* https://github.com/kubernetes-incubator/bootkube/pull/765
2017-11-08 21:48:21 -08:00
Dalton Hubble
f76e58b56d Update checkpointer with state machine impl
* https://github.com/kubernetes-incubator/bootkube/pull/759
2017-11-08 21:45:01 -08:00
Dalton Hubble
383aba4e8e Add /lib/modules mount to kube-proxy
* Starting in Kubernetes v1.8, kube-proxy modprobes ipvs
* kube-proxy still uses iptables, but may switch to ipvs in
the future; this prepares the way for that to happen
* https://github.com/kubernetes-incubator/bootkube/issues/741
2017-11-08 21:39:07 -08:00
Dalton Hubble
aebb45e6e9 Update README to correspond to bootkube v0.8.1 2017-10-28 12:44:06 -07:00
Dalton Hubble
b6b320ef6a Update hyperkube from v1.8.1 to v1.8.2
* v1.8.2 includes an apiserver memory leak fix
2017-10-24 21:27:46 -07:00
Dalton Hubble
9f4ffe273b Switch hyperkube from quay.io/coreos to gcr.io/google_containers
* Use the Kubernetes official hyperkube image
* Patches in quay.io/coreos/hyperkube are no longer needed
for kubernetes-incubator/bootkube clusters starting in
Kubernetes 1.8
2017-10-22 17:05:52 -07:00
Dalton Hubble
74366f6076 Enable hairpinMode in flannel CNI config
* Allow pods to communicate with themselves via service IP
* https://github.com/coreos/flannel/pull/849
2017-10-22 13:51:46 -07:00
Dalton Hubble
db7c13f5ee Update flannel from v0.8.0-amd64 to v0.9.0-amd64 2017-10-22 13:48:14 -07:00
Dalton Hubble
3ac28c9210 Add --no-negcache flag to dnsmasq args
* e1d6bcc227
2017-10-21 17:15:19 -07:00
Dalton Hubble
64748203ba Update assets generation for bootkube v0.8.0
* Update from Kubernetes v1.7.7 to v1.8.1
2017-10-19 20:48:24 -07:00
Dalton Hubble
262cc49856 Update README intro, repo name, and links 2017-10-08 23:00:58 -07:00
Dalton Hubble
125f29d43d Render images from the container_images map variable
* Container images may be customized to facilitate using mirrored
images or development with custom images
2017-10-08 22:29:26 -07:00
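For example, a mirrored image could be substituted by overriding an entry of the map. A sketch only: the mirror registry is hypothetical, and on older Terraform versions a supplied map may replace rather than merge with the defaults, in which case the remaining keys must be repeated.

```hcl
module "bootkube" {
  source = "git://https://github.com/poseidon/terraform-render-bootkube.git?ref=SHA"

  cluster_name = "example"
  api_servers  = ["node1.example.com"]
  etcd_servers = ["node1.example.com"]
  asset_dir    = "/home/core/clusters/mycluster"

  # Pull hyperkube from an internal mirror instead of the default registry
  container_images = {
    hyperkube = "mirror.example.com/google_containers/hyperkube:v1.8.3"
  }
}
```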
Dalton Hubble
aded06a0a7 Update assets generation for bootkube v0.7.0 2017-10-03 09:27:30 -07:00
Dalton Hubble
cc2b45780a Add square brackets for lists to be explicit
* Terraform's "type system" sometimes doesn't identify list
types correctly, so be explicit
* https://github.com/hashicorp/terraform/issues/12263#issuecomment-282571256
2017-10-03 09:23:25 -07:00
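The workaround above can be sketched in a module call (the `controller_domain` variable here is hypothetical, not from this repo):

```hcl
# Terraform 0.x sometimes mis-types values passed between modules.
# Wrapping the interpolation in square brackets makes the list type
# explicit rather than relying on inference.
variable "controller_domain" {
  type = "string"
}

module "bootkube" {
  source = "git://https://github.com/poseidon/terraform-render-bootkube.git?ref=SHA"

  cluster_name = "example"
  # Explicit brackets: a one-element list, not a bare string
  api_servers  = ["${var.controller_domain}"]
  etcd_servers = ["${var.controller_domain}"]
  asset_dir    = "/home/core/clusters/mycluster"
}
```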
Dalton Hubble
d93b7e4dc8 Update kube-dns image to address dnsmasq vulnerability
* https://security.googleblog.com/2017/10/behind-masq-yet-more-dns-and-dhcp.html
2017-10-02 10:23:22 -07:00
Dalton Hubble
48b33db1f1 Update Calico from v2.6.0 to v2.6.1 2017-09-30 16:12:29 -07:00
Dalton Hubble
8a9b6f1270 Update Calico from v2.5.1 to v2.6.0
* Update cni sidecar image from v1.10.0 to v1.11.0
* Lower log level in CNI config from debug to info
2017-09-28 20:43:15 -07:00
Dalton Hubble
3b8d762081 Merge pull request #16 from poseidon/etcd-network-checkpointer
Add kube-etcd-network-checkpointer for self-hosted etcd only
2017-09-27 18:06:19 -07:00
Dalton Hubble
9c144e6522 Add kube-etcd-network-checkpointer for self-hosted etcd only 2017-09-26 00:39:42 -07:00
Dalton Hubble
c0d4f56a4c Merge pull request #12 from cloudnativelabs/doc-fix-etcd_servers
Update etcd_servers variable description
2017-09-26 00:12:34 -07:00
bzub
62c887f41b Update etcd_servers variable description. 2017-09-16 16:12:40 -05:00
Dalton Hubble
dbfb11c6ea Update assets generation for bootkube v0.6.2
* Update hyperkube to v1.7.5_coreos.0
* Update etcd-operator to v0.5.0
* Update pod-checkpointer
* Update flannel-cni to v0.2.0
* Change etcd-operator TPR to CRD
2017-09-08 13:46:28 -07:00
Dalton Hubble
5ffbfec46d Configure the Calico MTU
* Add a network_mtu input variable (default 1500)
* Set the Calico CNI config (i.e. workload network interfaces)
* Set the Calico IP in IP MTU (for tunnel network interfaces)
2017-09-05 10:50:26 -07:00
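For instance, on a network whose host interfaces use a 1450-byte MTU, the variable could be set like this (values are illustrative):

```hcl
module "bootkube" {
  source = "git://https://github.com/poseidon/terraform-render-bootkube.git?ref=SHA"

  cluster_name = "example"
  api_servers  = ["node1.example.com"]
  etcd_servers = ["node1.example.com"]
  asset_dir    = "/home/core/clusters/mycluster"

  networking  = "calico"
  # Sets both the Calico CNI (workload) MTU and the IP-in-IP tunnel MTU
  network_mtu = 1450
}
```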
Dalton Hubble
a52f99e8cc Add support for calico networking
* Add support for using Calico pod networking instead of flannel
* Add variable "networking" which may be "calico" or "flannel"
* Users MUST move the contents of assets_dir/manifests-networking
into the assets_dir/manifests directory before running bootkube
start. This is needed because Terraform cannot generate conditional
files into a template_dir, since other resources write to and
delete files in the same directory.
https://github.com/terraform-providers/terraform-provider-template/issues/10
2017-09-01 10:27:43 -07:00
Dalton Hubble
1c1c4b36f8 Enable hairpin mode on cbr0 in kube-flannel-cfg 2017-08-16 18:22:42 -07:00
Dalton Hubble
c4e87f9695 Update assets generation for bootkube v0.6.1 2017-08-16 18:20:40 -07:00
Dalton Hubble
4cd0360a1a Add MIT License 2017-08-02 00:05:04 -07:00
Dalton Hubble
e7d2c1e597 Update assets generation for bootkube v0.6.0 2017-07-24 13:12:32 -07:00
Dalton Hubble
ce1cc6ae34 Update assets generation for bootkube v0.5.1 2017-07-19 10:46:24 -07:00
Dalton Hubble
498a7b0aea Merge pull request #5 from dghubble/bootkube-v0.5.0
Update assets generation for bootkube v0.5.0
2017-07-12 20:07:23 -07:00
Dalton Hubble
c8c56ca64a Update assets generation for bootkube v0.5.0 2017-07-12 19:17:11 -07:00
Dalton Hubble
99f50c5317 *: Upgrade manifests for Kubernetes v1.6.6 and bootkube v0.4.5
* Enable TLS for experimental self-hosted etcd
* Update the flannel Daemonset based on upstream
* Switch control plane components to run as non-root
* Add UpdateStrategy to control plane components
2017-06-24 14:05:32 -07:00
Dalton Hubble
dd26460395 Fix bootkube version mentioned in the README 2017-06-12 14:11:43 -07:00
Dalton Hubble
21131aa65e Add generated etcd credentials to kube-apiserver-secret.yaml 2017-06-07 16:11:19 -07:00
Dalton Hubble
f03b4c1c60 Update etcd_servers example in README.md 2017-06-07 13:55:11 -07:00
Dalton Hubble
99bf97aa79 Fix etcd_servers interpolation and example 2017-06-07 13:51:07 -07:00
Dalton Hubble
4cadd6f873 Output etcd TLS assets and fmt configs 2017-06-07 11:33:56 -07:00
Dalton Hubble
dc66e59fb2 Merge pull request #3 from dghubble/on-host-etcd
Generate on-host etcd CA, client, and peer TLS cert/key pairs
2017-06-07 10:56:24 -07:00
Dalton Hubble
6e8f0f9a1d Generate on-host etcd CA, client, and peer TLS cert/key pairs 2017-06-06 18:01:36 -07:00
Dalton Hubble
3720aff28a Bump hyperkube to v1.6.4_coreos.0 for bootkube v0.4.4 2017-05-19 16:26:52 -07:00
41 changed files with 982 additions and 201 deletions

.gitignore

@@ -1,2 +1,4 @@
*.tfvars
.terraform
*.tfstate*
assets

LICENSE

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2017 Dalton Hubble
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@@ -1,50 +1,57 @@
# bootkube-terraform
# terraform-render-bootkube
`bootkube-terraform` is a Terraform module that renders [bootkube](https://github.com/kubernetes-incubator/bootkube) assets, just like running the binary `bootkube render`. It aims to provide the same variable names, defaults, features, and outputs.
`terraform-render-bootkube` is a Terraform module that renders [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube) assets for bootstrapping a Kubernetes cluster.
## Audience
`terraform-render-bootkube` is a low-level component of the [Typhoon](https://github.com/poseidon/typhoon) Kubernetes distribution. Use Typhoon modules to create and manage Kubernetes clusters across supported platforms. Use the bootkube module if you'd like to customize a Kubernetes control plane or build your own distribution.
## Usage
Use the `bootkube-terraform` module within your existing Terraform configs. Provide the variables listed in `variables.tf` or check `terraform.tfvars.example` for examples.
Use the module to declare bootkube assets. Check [variables.tf](variables.tf) for options and [terraform.tfvars.example](terraform.tfvars.example) for examples.
```hcl
module "bootkube" {
source = "git://https://github.com/dghubble/bootkube-terraform.git"
source = "git://https://github.com/poseidon/terraform-render-bootkube.git?ref=SHA"
cluster_name = "example"
api_servers = ["node1.example.com"]
etcd_servers = ["http://127.0.0.1:2379"]
etcd_servers = ["node1.example.com"]
asset_dir = "/home/core/clusters/mycluster"
experimental_self_hosted_etcd = false
}
```
Alternately, use a local checkout of this repo and copy `terraform.tfvars.example` to `terraform.tfvars` to generate assets without an existing terraform config repo.
Generate the bootkube assets.
Generate the assets.
```sh
terraform get
terraform init
terraform get --update
terraform plan
terraform apply
```
Find bootkube assets rendered to the `asset_dir` path. That's it.
### Comparison
Render bootkube assets directly with bootkube v0.4.2.
Render bootkube assets directly with bootkube v0.8.2.
#### On-host etcd
#### On-host etcd (recommended)
```sh
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --etcd-servers=http://127.0.0.1:2379
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --etcd-servers=https://node1.example.com:2379
```
Compare assets. The only diffs you should see are TLS credentials.
```sh
diff -rw assets /home/core/cluster/mycluster
pushd /home/core/mycluster
mv manifests-networking/* manifests
popd
diff -rw assets /home/core/mycluster
```
#### Self-hosted etcd
#### Self-hosted etcd (deprecated)
```sh
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --experimental-self-hosted-etcd
@@ -53,6 +60,11 @@ bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 -
Compare assets. Note that experimental assets must be generated to a separate directory for terraform apply to sync them. Move the experimental `bootstrap-manifests` and `manifests` files during deployment.
```sh
diff -rw assets /home/core/cluster/mycluster
pushd /home/core/mycluster
mv experimental/bootstrap-manifests/* bootstrap-manifests
mv experimental/manifests/* manifests
mv manifests-networking/* manifests
popd
diff -rw assets /home/core/mycluster
```


@@ -1,11 +1,11 @@
# Self-hosted Kubernetes bootstrap manifests
# Self-hosted Kubernetes bootstrap-manifests
resource "template_dir" "bootstrap-manifests" {
source_dir = "${path.module}/resources/bootstrap-manifests"
destination_dir = "${var.asset_dir}/bootstrap-manifests"
vars {
hyperkube_image = "${var.container_images["hyperkube"]}"
etcd_servers = "${var.experimental_self_hosted_etcd ? format("http://%s:2379,http://127.0.0.1:12379", cidrhost(var.service_cidr, 15)) : join(",", var.etcd_servers)}"
etcd_servers = "${var.experimental_self_hosted_etcd ? format("https://%s:2379,https://127.0.0.1:12379", cidrhost(var.service_cidr, 15)) : join(",", formatlist("https://%s:2379", var.etcd_servers))}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
@@ -19,13 +19,17 @@ resource "template_dir" "manifests" {
destination_dir = "${var.asset_dir}/manifests"
vars {
hyperkube_image = "${var.container_images["hyperkube"]}"
etcd_servers = "${var.experimental_self_hosted_etcd ? format("http://%s:2379", cidrhost(var.service_cidr, 15)) : join(",", var.etcd_servers)}"
hyperkube_image = "${var.container_images["hyperkube"]}"
pod_checkpointer_image = "${var.container_images["pod_checkpointer"]}"
kubedns_image = "${var.container_images["kubedns"]}"
kubedns_dnsmasq_image = "${var.container_images["kubedns_dnsmasq"]}"
kubedns_sidecar_image = "${var.container_images["kubedns_sidecar"]}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
etcd_servers = "${var.experimental_self_hosted_etcd ? format("https://%s:2379", cidrhost(var.service_cidr, 15)) : join(",", formatlist("https://%s:2379", var.etcd_servers))}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
kube_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
ca_cert = "${base64encode(var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_certificate)}"
@@ -33,11 +37,25 @@ resource "template_dir" "manifests" {
apiserver_cert = "${base64encode(tls_locally_signed_cert.apiserver.cert_pem)}"
serviceaccount_pub = "${base64encode(tls_private_key.service-account.public_key_pem)}"
serviceaccount_key = "${base64encode(tls_private_key.service-account.private_key_pem)}"
etcd_ca_cert = "${base64encode(tls_self_signed_cert.etcd-ca.cert_pem)}"
etcd_client_cert = "${base64encode(tls_locally_signed_cert.client.cert_pem)}"
etcd_client_key = "${base64encode(tls_private_key.client.private_key_pem)}"
}
}
# Generated kubeconfig
resource "local_file" "kubeconfig" {
content = "${data.template_file.kubeconfig.rendered}"
filename = "${var.asset_dir}/auth/kubeconfig"
}
# Generated kubeconfig with user-context
resource "local_file" "user-kubeconfig" {
content = "${data.template_file.user-kubeconfig.rendered}"
filename = "${var.asset_dir}/auth/${var.cluster_name}-config"
}
# Generated kubeconfig (auth/kubeconfig)
data "template_file" "kubeconfig" {
template = "${file("${path.module}/resources/kubeconfig")}"
@@ -49,12 +67,6 @@ data "template_file" "kubeconfig" {
}
}
resource "local_file" "kubeconfig" {
content = "${data.template_file.kubeconfig.rendered}"
filename = "${var.asset_dir}/auth/kubeconfig"
}
# Generated kubeconfig (auth/kubeconfig)
data "template_file" "user-kubeconfig" {
template = "${file("${path.module}/resources/user-kubeconfig")}"
@@ -66,8 +78,3 @@ data "template_file" "user-kubeconfig" {
server = "${format("https://%s:443", element(var.api_servers, 0))}"
}
}
resource "local_file" "user-kubeconfig" {
content = "${data.template_file.user-kubeconfig.rendered}"
filename = "${var.asset_dir}/auth/${var.cluster_name}-config"
}

conditional.tf

@@ -0,0 +1,74 @@
# Assets generated conditionally (networking provider, experimental self-hosted etcd)
resource "template_dir" "flannel-manifests" {
count = "${var.networking == "flannel" ? 1 : 0}"
source_dir = "${path.module}/resources/flannel"
destination_dir = "${var.asset_dir}/manifests-networking"
vars {
flannel_image = "${var.container_images["flannel"]}"
flannel_cni_image = "${var.container_images["flannel_cni"]}"
pod_cidr = "${var.pod_cidr}"
}
}
resource "template_dir" "calico-manifests" {
count = "${var.networking == "calico" ? 1 : 0}"
source_dir = "${path.module}/resources/calico"
destination_dir = "${var.asset_dir}/manifests-networking"
vars {
calico_image = "${var.container_images["calico"]}"
calico_cni_image = "${var.container_images["calico_cni"]}"
network_mtu = "${var.network_mtu}"
pod_cidr = "${var.pod_cidr}"
}
}
# bootstrap-etcd.yaml pod bootstrap-manifest
resource "template_dir" "experimental-bootstrap-manifests" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
source_dir = "${path.module}/resources/experimental/bootstrap-manifests"
destination_dir = "${var.asset_dir}/experimental/bootstrap-manifests"
vars {
etcd_image = "${var.container_images["etcd"]}"
bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 20)}"
}
}
# etcd subfolder - bootstrap-etcd-service.json and migrate-etcd-cluster.json TPR
resource "template_dir" "etcd-subfolder" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
source_dir = "${path.module}/resources/etcd"
destination_dir = "${var.asset_dir}/etcd"
vars {
bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 20)}"
}
}
# etcd-operator deployment and etcd-service manifests
# etcd client, server, and peer tls secrets
resource "template_dir" "experimental-manifests" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
source_dir = "${path.module}/resources/experimental/manifests"
destination_dir = "${var.asset_dir}/experimental/manifests"
vars {
etcd_operator_image = "${var.container_images["etcd_operator"]}"
etcd_checkpointer_image = "${var.container_images["etcd_checkpointer"]}"
etcd_service_ip = "${cidrhost(var.service_cidr, 15)}"
# Self-hosted etcd TLS certs / keys
etcd_ca_cert = "${base64encode(tls_self_signed_cert.etcd-ca.cert_pem)}"
etcd_client_cert = "${base64encode(tls_locally_signed_cert.client.cert_pem)}"
etcd_client_key = "${base64encode(tls_private_key.client.private_key_pem)}"
etcd_server_cert = "${base64encode(tls_locally_signed_cert.server.cert_pem)}"
etcd_server_key = "${base64encode(tls_private_key.server.private_key_pem)}"
etcd_peer_cert = "${base64encode(tls_locally_signed_cert.peer.cert_pem)}"
etcd_peer_key = "${base64encode(tls_private_key.peer.private_key_pem)}"
}
}


@@ -1,68 +0,0 @@
# Experimental self-hosted etcd
# etcd pod and service bootstrap-manifests
data "template_file" "bootstrap-etcd" {
template = "${file("${path.module}/resources/experimental/bootstrap-manifests/bootstrap-etcd.yaml")}"
vars {
etcd_image = "${var.container_images["etcd"]}"
bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 200)}"
}
}
resource "local_file" "bootstrap-etcd" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
content = "${data.template_file.bootstrap-etcd.rendered}"
filename = "${var.asset_dir}/experimental/bootstrap-manifests/bootstrap-etcd.yaml"
}
data "template_file" "bootstrap-etcd-service" {
template = "${file("${path.module}/resources/etcd/bootstrap-etcd-service.json")}"
vars {
bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 200)}"
}
}
resource "local_file" "bootstrap-etcd-service" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
content = "${data.template_file.bootstrap-etcd-service.rendered}"
filename = "${var.asset_dir}/etcd/bootstrap-etcd-service.json"
}
data "template_file" "etcd-tpr" {
template = "${file("${path.module}/resources/etcd/migrate-etcd-cluster.json")}"
vars {
bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 200)}"
}
}
resource "local_file" "etcd-tpr" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
content = "${data.template_file.etcd-tpr.rendered}"
filename = "${var.asset_dir}/etcd/migrate-etcd-cluster.json"
}
# etcd operator deployment and service manifests
resource "local_file" "etcd-operator" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
depends_on = ["template_dir.manifests"]
content = "${file("${path.module}/resources/experimental/manifests/etcd-operator.yaml")}"
filename = "${var.asset_dir}/experimental/manifests/etcd-operator.yaml"
}
data "template_file" "etcd-service" {
template = "${file("${path.module}/resources/experimental/manifests/etcd-service.yaml")}"
vars {
etcd_service_ip = "${cidrhost(var.service_cidr, 15)}"
}
}
resource "local_file" "etcd-service" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
depends_on = ["template_dir.manifests"]
content = "${data.template_file.etcd-service.rendered}"
filename = "${var.asset_dir}/experimental/manifests/etcd-service.yaml"
}


@@ -22,6 +22,36 @@ output "user-kubeconfig" {
value = "${local_file.user-kubeconfig.filename}"
}
# etcd TLS assets
output "etcd_ca_cert" {
value = "${tls_self_signed_cert.etcd-ca.cert_pem}"
}
output "etcd_client_cert" {
value = "${tls_locally_signed_cert.client.cert_pem}"
}
output "etcd_client_key" {
value = "${tls_private_key.client.private_key_pem}"
}
output "etcd_server_cert" {
value = "${tls_locally_signed_cert.server.cert_pem}"
}
output "etcd_server_key" {
value = "${tls_private_key.server.private_key_pem}"
}
output "etcd_peer_cert" {
value = "${tls_locally_signed_cert.peer.cert_pem}"
}
output "etcd_peer_key" {
value = "${tls_private_key.peer.private_key_pem}"
}
# Some platforms may need to reconstruct the kubeconfig directly in user-data.
# That can't be done with the way template_file interpolates multi-line
# contents, so the raw components of the kubeconfig may be needed.


@@ -9,17 +9,19 @@ spec:
image: ${hyperkube_image}
command:
- /usr/bin/flock
- --exclusive
- --timeout=30
- /var/lock/api-server.lock
- /hyperkube
- apiserver
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --authorization-mode=RBAC
- --bind-address=0.0.0.0
- --client-ca-file=/etc/kubernetes/secrets/ca.crt
- --etcd-cafile=/etc/kubernetes/secrets/etcd-client-ca.crt
- --etcd-certfile=/etc/kubernetes/secrets/etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/secrets/etcd-client.key
- --etcd-quorum-read=true
- --etcd-servers=${etcd_servers}
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt


@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico BGP Peers
kind: CustomResourceDefinition
metadata:
name: bgppeers.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPPeer
plural: bgppeers
singular: bgppeer


@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system


@@ -0,0 +1,53 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: calico-node
namespace: kube-system
rules:
- apiGroups: [""]
resources:
- namespaces
verbs:
- get
- list
- watch
- apiGroups: [""]
resources:
- pods/status
verbs:
- update
- apiGroups: [""]
resources:
- pods
verbs:
- get
- list
- watch
- apiGroups: [""]
resources:
- nodes
verbs:
- get
- list
- update
- watch
- apiGroups: ["extensions"]
resources:
- networkpolicies
verbs:
- get
- list
- watch
- apiGroups: ["crd.projectcalico.org"]
resources:
- globalfelixconfigs
- bgppeers
- globalbgpconfigs
- ippools
- globalnetworkpolicies
verbs:
- create
- get
- list
- update
- watch


@@ -0,0 +1,29 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: calico-config
namespace: kube-system
data:
# The CNI network configuration to install on each node.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.0",
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": ${network_mtu},
"ipam": {
"type": "host-local",
"subnet": "usePodCidr"
},
"policy": {
"type": "k8s",
"k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
},
"kubernetes": {
"k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
}


@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Global Felix Configuration
kind: CustomResourceDefinition
metadata:
name: globalfelixconfigs.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalFelixConfig
plural: globalfelixconfigs
singular: globalfelixconfig


@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Global BGP Configuration
kind: CustomResourceDefinition
metadata:
name: globalbgpconfigs.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalBGPConfig
plural: globalbgpconfigs
singular: globalbgpconfig


@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico IP Pools
kind: CustomResourceDefinition
metadata:
name: ippools.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPPool
plural: ippools
singular: ippool


@@ -0,0 +1,13 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Global Network Policies
kind: CustomResourceDefinition
metadata:
name: globalnetworkpolicies.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkPolicy
plural: globalnetworkpolicies
singular: globalnetworkpolicy


@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system


@@ -0,0 +1,138 @@
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
template:
metadata:
labels:
k8s-app: calico-node
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
hostNetwork: true
serviceAccountName: calico-node
tolerations:
# Allow the pod to run on master nodes
- key: node-role.kubernetes.io/master
effect: NoSchedule
# Mark the pod as a critical add-on for rescheduling
- key: "CriticalAddonsOnly"
operator: "Exists"
containers:
- name: calico-node
image: ${calico_image}
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
value: "kubernetes"
# Enable felix info logging.
- name: FELIX_LOGSEVERITYSCREEN
value: "info"
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPV6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
value: "${network_mtu}"
# Wait for the datastore.
- name: WAIT_FOR_DATASTORE
value: "true"
# The Calico IPv4 pool CIDR (should match `--cluster-cidr`).
- name: CALICO_IPV4POOL_CIDR
value: "${pod_cidr}"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "always"
# Enable IP-in-IP within Felix.
- name: FELIX_IPINIPENABLED
value: "true"
# Set node name based on k8s nodeName.
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
livenessProbe:
httpGet:
path: /liveness
port: 9099
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
httpGet:
path: /readiness
port: 9099
periodSeconds: 10
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
# Install Calico CNI binaries and CNI network config file on nodes
- name: install-cni
image: ${calico_cni_image}
command: ["/install-cni.sh"]
env:
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
- name: CNI_NET_DIR
value: "/etc/kubernetes/cni/net.d"
# Set node name based on k8s nodeName
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
volumes:
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/kubernetes/cni/net.d
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate


@@ -1,13 +1,13 @@
{
"apiVersion": "etcd.coreos.com/v1beta1",
"kind": "Cluster",
"apiVersion": "etcd.database.coreos.com/v1beta2",
"kind": "EtcdCluster",
"metadata": {
"name": "kube-etcd",
"namespace": "kube-system"
},
"spec": {
"size": 1,
"version": "v3.1.6",
"version": "v3.1.8",
"pod": {
"nodeSelector": {
"node-role.kubernetes.io/master": ""
@@ -21,7 +21,16 @@
]
},
"selfHosted": {
"bootMemberClientEndpoint": "http://${bootstrap_etcd_service_ip}:12379"
"bootMemberClientEndpoint": "https://${bootstrap_etcd_service_ip}:12379"
},
"TLS": {
"static": {
"member": {
"peerSecret": "etcd-peer-tls",
"serverSecret": "etcd-server-tls"
},
"operatorSecret": "etcd-client-tls"
}
}
}
}


@@ -12,18 +12,30 @@ spec:
command:
- /usr/local/bin/etcd
- --name=boot-etcd
- --listen-client-urls=http://0.0.0.0:12379
- --listen-peer-urls=http://0.0.0.0:12380
- --advertise-client-urls=http://${bootstrap_etcd_service_ip}:12379
- --initial-advertise-peer-urls=http://${bootstrap_etcd_service_ip}:12380
- --initial-cluster=boot-etcd=http://${bootstrap_etcd_service_ip}:12380
- --listen-client-urls=https://0.0.0.0:12379
- --listen-peer-urls=https://0.0.0.0:12380
- --advertise-client-urls=https://${bootstrap_etcd_service_ip}:12379
- --initial-advertise-peer-urls=https://${bootstrap_etcd_service_ip}:12380
- --initial-cluster=boot-etcd=https://${bootstrap_etcd_service_ip}:12380
- --initial-cluster-token=bootkube
- --initial-cluster-state=new
- --data-dir=/var/etcd/data
env:
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- --peer-client-cert-auth=true
- --peer-trusted-ca-file=/etc/kubernetes/secrets/etcd/peer-ca.crt
- --peer-cert-file=/etc/kubernetes/secrets/etcd/peer.crt
- --peer-key-file=/etc/kubernetes/secrets/etcd/peer.key
- --client-cert-auth=true
- --trusted-ca-file=/etc/kubernetes/secrets/etcd/server-ca.crt
- --cert-file=/etc/kubernetes/secrets/etcd/server.crt
- --key-file=/etc/kubernetes/secrets/etcd/server.key
volumeMounts:
- mountPath: /etc/kubernetes/secrets
name: secrets
readOnly: true
volumes:
- name: secrets
hostPath:
path: /etc/kubernetes/bootstrap-secrets
hostNetwork: true
restartPolicy: Never
dnsPolicy: ClusterFirstWithHostNet


@@ -0,0 +1,10 @@
apiVersion: v1
kind: Secret
metadata:
name: etcd-client-tls
namespace: kube-system
type: Opaque
data:
etcd-client-ca.crt: ${etcd_ca_cert}
etcd-client.crt: ${etcd_client_cert}
etcd-client.key: ${etcd_client_key}


@@ -1,4 +1,4 @@
apiVersion: extensions/v1beta1
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: etcd-operator
@@ -7,6 +7,9 @@ metadata:
k8s-app: etcd-operator
spec:
replicas: 1
selector:
matchLabels:
k8s-app: etcd-operator
template:
metadata:
labels:
@@ -14,10 +17,10 @@ spec:
spec:
containers:
- name: etcd-operator
image: quay.io/coreos/etcd-operator:v0.3.0
image: ${etcd_operator_image}
command:
- /usr/local/bin/etcd-operator
- --analytics=false
- /usr/local/bin/etcd-operator
- --analytics=false
env:
- name: MY_POD_NAMESPACE
valueFrom:
@@ -29,7 +32,15 @@ spec:
fieldPath: metadata.name
nodeSelector:
node-role.kubernetes.io/master: ""
securityContext:
runAsNonRoot: true
runAsUser: 65534
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1


@@ -0,0 +1,10 @@
apiVersion: v1
kind: Secret
metadata:
name: etcd-peer-tls
namespace: kube-system
type: Opaque
data:
peer-ca.crt: ${etcd_ca_cert}
peer.crt: ${etcd_peer_cert}
peer.key: ${etcd_peer_key}


@@ -0,0 +1,10 @@
apiVersion: v1
kind: Secret
metadata:
name: etcd-server-tls
namespace: kube-system
type: Opaque
data:
server-ca.crt: ${etcd_ca_cert}
server.crt: ${etcd_server_cert}
server.key: ${etcd_server_key}


@@ -3,6 +3,10 @@ kind: Service
metadata:
name: etcd-service
namespace: kube-system
# This alpha annotation retains the endpoints even if the etcd pod isn't ready.
# The feature is always enabled in the k8s endpoint controller, even though it is alpha.
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
selector:
app: etcd

@@ -1,4 +1,4 @@
apiVersion: "extensions/v1beta1"
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
name: kube-etcd-network-checkpointer
@@ -7,6 +7,10 @@ metadata:
tier: control-plane
k8s-app: kube-etcd-network-checkpointer
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: kube-etcd-network-checkpointer
template:
metadata:
labels:
@@ -16,7 +20,7 @@ spec:
checkpointer.alpha.coreos.com/checkpoint: "true"
spec:
containers:
- image: quay.io/coreos/kenc:48b6feceeee56c657ea9263f47b6ea091e8d3035
- image: ${etcd_checkpointer_image}
name: kube-etcd-network-checkpointer
securityContext:
privileged: true
@@ -24,6 +28,9 @@ spec:
- mountPath: /etc/kubernetes/selfhosted-etcd
name: checkpoint-dir
readOnly: false
- mountPath: /var/etcd
name: etcd-dir
readOnly: false
- mountPath: /var/lock
name: var-lock
readOnly: false
@@ -43,6 +50,13 @@ spec:
- name: checkpoint-dir
hostPath:
path: /etc/kubernetes/checkpoint-iptables
- name: etcd-dir
hostPath:
path: /var/etcd
- name: var-lock
hostPath:
path: /var/lock
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate

@@ -0,0 +1,36 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
k8s-app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "${pod_cidr}",
"Backend": {
"Type": "vxlan"
}
}
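The `cni-conf.json` above is a CNI 0.3.x plugin *chain*: flannel (with `hairpinMode` enabled in its delegate) runs first, then `portmap` handles hostPort mappings. A small Python check that the rendered config is well-formed, with a sample CIDR substituted for the `${pod_cidr}` template variable:

```python
import json

# cni-conf.json from the ConfigMap above.
cni_conf = json.loads("""
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {"type": "flannel", "delegate": {"hairpinMode": true, "isDefaultGateway": true}},
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}
""")

# net-conf.json with an example value in place of ${pod_cidr}.
net_conf = json.loads('{"Network": "10.2.0.0/16", "Backend": {"Type": "vxlan"}}')

# The plugin chain runs flannel first, then portmap.
plugin_types = [p["type"] for p in cni_conf["plugins"]]
assert plugin_types == ["flannel", "portmap"]
assert cni_conf["plugins"][0]["delegate"]["hairpinMode"] is True
assert net_conf["Backend"]["Type"] == "vxlan"
```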

@@ -1,4 +1,4 @@
apiVersion: extensions/v1beta1
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
name: kube-flannel
@@ -7,6 +7,10 @@ metadata:
tier: node
k8s-app: flannel
spec:
selector:
matchLabels:
tier: node
k8s-app: flannel
template:
metadata:
labels:
@@ -15,7 +19,7 @@ spec:
spec:
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.7.1-amd64
image: ${flannel_image}
command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=$(POD_IP)"]
securityContext:
privileged: true
@@ -40,13 +44,19 @@ spec:
- name: flannel-cfg
mountPath: /etc/kube-flannel/
- name: install-cni
image: busybox
command: [ "/bin/sh", "-c", "set -e -x; TMP=/etc/cni/net.d/.tmp-flannel-cfg; cp /etc/kube-flannel/cni-conf.json $TMP; mv $TMP /etc/cni/net.d/10-flannel.conf; while :; do sleep 3600; done" ]
image: ${flannel_cni_image}
command: ["/install-cni.sh"]
env:
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: kube-flannel-cfg
key: cni-conf.json
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
mountPath: /host/etc/cni/net.d
- name: host-cni-bin
mountPath: /host/opt/cni/bin/
hostNetwork: true
tolerations:
- key: node-role.kubernetes.io/master
@@ -62,3 +72,10 @@ spec:
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
- name: host-cni-bin
hostPath:
path: /opt/cni/bin
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate

@@ -9,3 +9,6 @@ data:
apiserver.crt: ${apiserver_cert}
service-account.pub: ${serviceaccount_pub}
ca.crt: ${ca_cert}
etcd-client-ca.crt: ${etcd_ca_cert}
etcd-client.crt: ${etcd_client_cert}
etcd-client.key: ${etcd_client_key}

@@ -1,4 +1,4 @@
apiVersion: "extensions/v1beta1"
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
name: kube-apiserver
@@ -7,6 +7,10 @@ metadata:
tier: control-plane
k8s-app: kube-apiserver
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: kube-apiserver
template:
metadata:
labels:
@@ -21,12 +25,10 @@ spec:
image: ${hyperkube_image}
command:
- /usr/bin/flock
- --exclusive
- --timeout=30
- /var/lock/api-server.lock
- /hyperkube
- apiserver
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --anonymous-auth=false
@@ -34,6 +36,10 @@ spec:
- --bind-address=0.0.0.0
- --client-ca-file=/etc/kubernetes/secrets/ca.crt
- --cloud-provider=${cloud_provider}
- --etcd-cafile=/etc/kubernetes/secrets/etcd-client-ca.crt
- --etcd-certfile=/etc/kubernetes/secrets/etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/secrets/etcd-client.key
- --etcd-quorum-read=true
- --etcd-servers=${etcd_servers}
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
@@ -79,3 +85,7 @@ spec:
- name: var-lock
hostPath:
path: /var/lock
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate

@@ -1,4 +1,4 @@
apiVersion: extensions/v1beta1
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: kube-controller-manager
@@ -8,6 +8,10 @@ metadata:
k8s-app: kube-controller-manager
spec:
replicas: 2
selector:
matchLabels:
tier: control-plane
k8s-app: kube-controller-manager
template:
metadata:
labels:
@@ -30,7 +34,7 @@ spec:
- key: k8s-app
operator: In
values:
- kube-contoller-manager
- kube-controller-manager
topologyKey: kubernetes.io/hostname
containers:
- name: kube-controller-manager
@@ -60,6 +64,9 @@ spec:
readOnly: true
nodeSelector:
node-role.kubernetes.io/master: ""
securityContext:
runAsNonRoot: true
runAsUser: 65534
tolerations:
- key: CriticalAddonsOnly
operator: Exists

@@ -1,4 +1,4 @@
apiVersion: extensions/v1beta1
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: kube-dns
@@ -6,6 +6,7 @@ metadata:
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
# replicas: not specified here:
# 1. So that the Addon Manager does not reconcile this replicas parameter.
@@ -25,9 +26,22 @@ spec:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- name: kube-dns-config
configMap:
name: kube-dns
optional: true
containers:
- name: kubedns
image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
image: ${kubedns_image}
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
@@ -78,7 +92,7 @@ spec:
- name: kube-dns-config
mountPath: /kube-dns-config
- name: dnsmasq
image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
image: ${kubedns_dnsmasq_image}
livenessProbe:
httpGet:
path: /healthcheck/dnsmasq
@@ -96,6 +110,7 @@ spec:
- --
- -k
- --cache-size=1000
- --no-negcache
- --log-facility=-
- --server=/cluster.local/127.0.0.1#10053
- --server=/in-addr.arpa/127.0.0.1#10053
@@ -116,7 +131,7 @@ spec:
- name: kube-dns-config
mountPath: /etc/k8s/dns/dnsmasq-nanny
- name: sidecar
image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1
image: ${kubedns_sidecar_image}
livenessProbe:
httpGet:
path: /metrics
@@ -140,16 +155,3 @@ spec:
memory: 20Mi
cpu: 10m
dnsPolicy: Default # Don't use cluster DNS.
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- name: kube-dns-config
configMap:
name: kube-dns
optional: true

@@ -1,24 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
k8s-app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"type": "flannel",
"delegate": {
"isDefaultGateway": true
}
}
net-conf.json: |
{
"Network": "${pod_cidr}",
"Backend": {
"Type": "vxlan"
}
}

@@ -1,4 +1,4 @@
apiVersion: "extensions/v1beta1"
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
name: kube-proxy
@@ -7,6 +7,10 @@ metadata:
tier: node
k8s-app: kube-proxy
spec:
selector:
matchLabels:
tier: node
k8s-app: kube-proxy
template:
metadata:
labels:
@@ -19,7 +23,7 @@ spec:
- name: kube-proxy
image: ${hyperkube_image}
command:
- /hyperkube
- ./hyperkube
- proxy
- --cluster-cidr=${pod_cidr}
- --hostname-override=$(NODE_NAME)
@@ -33,6 +37,9 @@ spec:
securityContext:
privileged: true
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
@@ -47,9 +54,16 @@ spec:
operator: Exists
effect: NoSchedule
volumes:
- hostPath:
- name: lib-modules
hostPath:
path: /lib/modules
- name: ssl-certs-host
hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
- name: etc-kubernetes
hostPath:
path: /etc/kubernetes
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate

@@ -1,4 +1,4 @@
apiVersion: extensions/v1beta1
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: kube-scheduler
@@ -8,6 +8,10 @@ metadata:
k8s-app: kube-scheduler
spec:
replicas: 2
selector:
matchLabels:
tier: control-plane
k8s-app: kube-scheduler
template:
metadata:
labels:
@@ -47,6 +51,9 @@ spec:
timeoutSeconds: 15
nodeSelector:
node-role.kubernetes.io/master: ""
securityContext:
runAsNonRoot: true
runAsUser: 65534
tolerations:
- key: CriticalAddonsOnly
operator: Exists

@@ -1,4 +1,4 @@
apiVersion: rbac.authorization.k8s.io/v1alpha1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:default-sa

@@ -1,4 +1,4 @@
apiVersion: "extensions/v1beta1"
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
name: pod-checkpointer
@@ -7,6 +7,10 @@ metadata:
tier: control-plane
k8s-app: pod-checkpointer
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: pod-checkpointer
template:
metadata:
labels:
@@ -16,11 +20,10 @@ spec:
checkpointer.alpha.coreos.com/checkpoint: "true"
spec:
containers:
- name: checkpoint
image: quay.io/coreos/pod-checkpointer:2cad4cac4186611a79de1969e3ea4924f02f459e
- name: pod-checkpointer
image: ${pod_checkpointer_image}
command:
- /checkpoint
- --v=4
- --lock-file=/var/run/lock/pod-checkpointer.lock
env:
- name: NODE_NAME
@@ -56,3 +59,7 @@ spec:
- name: var-run
hostPath:
path: /var/run
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate

@@ -1,5 +1,6 @@
cluster_name = "example"
api_servers = ["node1.example.com"]
etcd_servers = ["http://127.0.0.1:2379"]
asset_dir = "/home/core/clusters/mycluster"
etcd_servers = ["node1.example.com"]
asset_dir = "/home/core/mycluster"
networking = "flannel"
experimental_self_hosted_etcd = false

tls-etcd.tf (new file, 216 lines)

@@ -0,0 +1,216 @@
# etcd-client-ca.crt
resource "local_file" "etcd_client_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
filename = "${var.asset_dir}/tls/etcd-client-ca.crt"
}
# etcd-client.crt
resource "local_file" "etcd_client_crt" {
content = "${tls_locally_signed_cert.client.cert_pem}"
filename = "${var.asset_dir}/tls/etcd-client.crt"
}
# etcd-client.key
resource "local_file" "etcd_client_key" {
content = "${tls_private_key.client.private_key_pem}"
filename = "${var.asset_dir}/tls/etcd-client.key"
}
# server-ca.crt
resource "local_file" "etcd_server_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
filename = "${var.asset_dir}/tls/etcd/server-ca.crt"
}
# server.crt
resource "local_file" "etcd_server_crt" {
content = "${tls_locally_signed_cert.server.cert_pem}"
filename = "${var.asset_dir}/tls/etcd/server.crt"
}
# server.key
resource "local_file" "etcd_server_key" {
content = "${tls_private_key.server.private_key_pem}"
filename = "${var.asset_dir}/tls/etcd/server.key"
}
# peer-ca.crt
resource "local_file" "etcd_peer_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
filename = "${var.asset_dir}/tls/etcd/peer-ca.crt"
}
# peer.crt
resource "local_file" "etcd_peer_crt" {
content = "${tls_locally_signed_cert.peer.cert_pem}"
filename = "${var.asset_dir}/tls/etcd/peer.crt"
}
# peer.key
resource "local_file" "etcd_peer_key" {
content = "${tls_private_key.peer.private_key_pem}"
filename = "${var.asset_dir}/tls/etcd/peer.key"
}
# certificates and keys
resource "tls_private_key" "etcd-ca" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_self_signed_cert" "etcd-ca" {
key_algorithm = "${tls_private_key.etcd-ca.algorithm}"
private_key_pem = "${tls_private_key.etcd-ca.private_key_pem}"
subject {
common_name = "etcd-ca"
organization = "etcd"
}
is_ca_certificate = true
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"cert_signing",
]
}
# client certs are used for client (apiserver, locksmith, etcd-operator)
# to etcd communication
resource "tls_private_key" "client" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "client" {
key_algorithm = "${tls_private_key.client.algorithm}"
private_key_pem = "${tls_private_key.client.private_key_pem}"
subject {
common_name = "etcd-client"
organization = "etcd"
}
ip_addresses = [
"127.0.0.1",
"${cidrhost(var.service_cidr, 15)}",
"${cidrhost(var.service_cidr, 20)}",
]
dns_names = ["${concat(
var.etcd_servers,
list(
"localhost",
"*.kube-etcd.kube-system.svc.cluster.local",
"kube-etcd-client.kube-system.svc.cluster.local",
))}"]
}
resource "tls_locally_signed_cert" "client" {
cert_request_pem = "${tls_cert_request.client.cert_request_pem}"
ca_key_algorithm = "${join(" ", tls_self_signed_cert.etcd-ca.*.key_algorithm)}"
ca_private_key_pem = "${join(" ", tls_private_key.etcd-ca.*.private_key_pem)}"
ca_cert_pem = "${join(" ", tls_self_signed_cert.etcd-ca.*.cert_pem)}"
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"server_auth",
"client_auth",
]
}
resource "tls_private_key" "server" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "server" {
key_algorithm = "${tls_private_key.server.algorithm}"
private_key_pem = "${tls_private_key.server.private_key_pem}"
subject {
common_name = "etcd-server"
organization = "etcd"
}
ip_addresses = [
"127.0.0.1",
"${cidrhost(var.service_cidr, 15)}",
"${cidrhost(var.service_cidr, 20)}",
]
dns_names = ["${concat(
var.etcd_servers,
list(
"localhost",
"*.kube-etcd.kube-system.svc.cluster.local",
"kube-etcd-client.kube-system.svc.cluster.local",
))}"]
}
resource "tls_locally_signed_cert" "server" {
cert_request_pem = "${tls_cert_request.server.cert_request_pem}"
ca_key_algorithm = "${join(" ", tls_self_signed_cert.etcd-ca.*.key_algorithm)}"
ca_private_key_pem = "${join(" ", tls_private_key.etcd-ca.*.private_key_pem)}"
ca_cert_pem = "${join(" ", tls_self_signed_cert.etcd-ca.*.cert_pem)}"
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"server_auth",
"client_auth",
]
}
resource "tls_private_key" "peer" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "peer" {
key_algorithm = "${tls_private_key.peer.algorithm}"
private_key_pem = "${tls_private_key.peer.private_key_pem}"
subject {
common_name = "etcd-peer"
organization = "etcd"
}
ip_addresses = [
"${cidrhost(var.service_cidr, 20)}",
]
dns_names = ["${concat(
var.etcd_servers,
list(
"*.kube-etcd.kube-system.svc.cluster.local",
"kube-etcd-client.kube-system.svc.cluster.local",
))}"]
}
resource "tls_locally_signed_cert" "peer" {
cert_request_pem = "${tls_cert_request.peer.cert_request_pem}"
ca_key_algorithm = "${join(" ", tls_self_signed_cert.etcd-ca.*.key_algorithm)}"
ca_private_key_pem = "${join(" ", tls_private_key.etcd-ca.*.private_key_pem)}"
ca_cert_pem = "${join(" ", tls_self_signed_cert.etcd-ca.*.cert_pem)}"
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"server_auth",
"client_auth",
]
}


@@ -4,18 +4,18 @@ variable "cluster_name" {
}
variable "api_servers" {
description = "URL used to reach kube-apiserver"
description = "List of URLs used to reach kube-apiserver"
type = "list"
}
variable "etcd_servers" {
description = "List of etcd server URLs including protocol, host, and port"
description = "List of URLs used to reach etcd servers. Ignored if experimental self-hosted etcd is enabled."
type = "list"
}
variable "experimental_self_hosted_etcd" {
description = "(Experimental) Create self-hosted etcd assets"
default = false
default = false
}
variable "asset_dir" {
@@ -29,6 +29,18 @@ variable "cloud_provider" {
default = ""
}
variable "networking" {
description = "Choice of networking provider (flannel or calico)"
type = "string"
default = "flannel"
}
variable "network_mtu" {
description = "CNI interface MTU (applies to calico only)"
type = "string"
default = "1500"
}
variable "pod_cidr" {
description = "CIDR IP range to assign Kubernetes pods"
type = "string"
@@ -38,10 +50,11 @@ variable "pod_cidr" {
variable "service_cidr" {
description = <<EOD
CIDR IP range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns, the 15th IP will be reserved for self-hosted etcd, and the 200th IP will be reserved for bootstrap self-hosted etcd.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns, the 15th IP will be reserved for self-hosted etcd, and the 20th IP will be reserved for bootstrap self-hosted etcd.
EOD
type = "string"
default = "10.3.0.0/24"
type = "string"
default = "10.3.0.0/24"
}
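The reserved addresses described in the `service_cidr` description are computed with Terraform's `cidrhost()` interpolation function (also used in `tls-etcd.tf` for certificate SANs). Its behavior can be reproduced with Python's `ipaddress` module; the helper name below is just for illustration:

```python
import ipaddress

def cidrhost(cidr: str, n: int) -> str:
    """Mimic Terraform's cidrhost(): the nth address within the prefix."""
    net = ipaddress.ip_network(cidr)
    return str(net.network_address + n)

# Reserved service IPs for the default service_cidr = "10.3.0.0/24":
assert cidrhost("10.3.0.0/24", 1) == "10.3.0.1"    # kube-apiserver
assert cidrhost("10.3.0.0/24", 10) == "10.3.0.10"  # kube-dns
assert cidrhost("10.3.0.0/24", 15) == "10.3.0.15"  # self-hosted etcd
assert cidrhost("10.3.0.0/24", 20) == "10.3.0.20"  # bootstrap self-hosted etcd
```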
variable "container_images" {
@@ -49,8 +62,18 @@ variable "container_images" {
type = "map"
default = {
hyperkube = "quay.io/coreos/hyperkube:v1.6.2_coreos.0"
etcd = "quay.io/coreos/etcd:v3.1.6"
calico = "quay.io/calico/node:v2.6.1"
calico_cni = "quay.io/calico/cni:v1.11.0"
etcd = "quay.io/coreos/etcd:v3.1.8"
etcd_operator = "quay.io/coreos/etcd-operator:v0.5.0"
etcd_checkpointer = "quay.io/coreos/kenc:0.0.2"
flannel = "quay.io/coreos/flannel:v0.9.0-amd64"
flannel_cni = "quay.io/coreos/flannel-cni:v0.3.0"
hyperkube = "gcr.io/google_containers/hyperkube:v1.8.3"
kubedns = "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5"
kubedns_dnsmasq = "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5"
kubedns_sidecar = "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5"
pod_checkpointer = "quay.io/coreos/pod-checkpointer:e22cc0e3714378de92f45326474874eb602ca0ac"
}
}