35 Commits

Author SHA1 Message Date
Dalton Hubble
8005052cfb Remove unused raw kubeconfig field outputs
* Remove unused `ca_cert`, `kubelet_cert`, `kubelet_key`,
and `server` outputs
* These outputs were once needed to support clusters with
managed instance groups, but that hasn't been the case for
quite some time
2019-11-13 16:49:07 -08:00
Dalton Hubble
0f1f16c612 Add small CPU resource requests to static pods
* Set small CPU requests on static pods kube-apiserver,
kube-controller-manager, and kube-scheduler to align with
upstream tooling and for edge cases
* Control plane nodes are tainted to isolate them from
ordinary workloads. Even dense workloads can only compress
CPU resources on worker nodes.
* Control plane static pods use the highest priority class, so
contention favors control plane pods (over, say, node-exporter),
and CPU is compressible too.
* Effectively, a practical case for these requests hasn't been
observed. However, a small static pod CPU request may offer
a slight benefit if a controller became overloaded and the
above mechanisms were insufficient for some reason (a bit of a
stretch, given CPU compressibility)
* Continue to avoid setting a memory request for static pods.
It would impose a hard size requirement on controller nodes,
which isn't warranted and is handled more gently by Typhoon's
default instance types across clouds and via docs
2019-11-13 16:44:33 -08:00
Dalton Hubble
43e1230c55 Update CoreDNS from v1.6.2 to v1.6.5
* Add the health `lameduck` option, set to 5s. Before CoreDNS shuts
down, it will wait and report unhealthy for 5s to allow time for
plugins to shut down cleanly
* Minor bug fixes over a few releases
* https://coredns.io/2019/08/31/coredns-1.6.3-release/
* https://coredns.io/2019/09/27/coredns-1.6.4-release/
* https://coredns.io/2019/11/05/coredns-1.6.5-release/
2019-11-13 14:33:50 -08:00
Dalton Hubble
1bba891d95 Adopt Terraform v0.12 templatefile function
* Adopt Terraform v0.12 type features and the templatefile
function to replace the use of terraform-provider-template's
`template_dir`
* Use of `for_each` to write local assets requires
that consumers use Terraform v0.12.6+ (action required)
* Continue using `template_file` since it's quite common. In the
future, we may replace it as well.
* Remove outputs `id` and `content_hash` (no longer used)

Background:

* `template_dir` was added to `terraform-provider-template`
to support template directory rendering in the CoreOS
Tectonic Kubernetes distribution (~2017)
* Terraform v0.12 introduced a native `templatefile` function
and v0.12.6 introduced native `for_each` support (July 2019)
that makes it possible to replace `template_dir` usage
2019-11-13 14:05:01 -08:00
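
A minimal sketch of the pattern this commit adopts (directory and
variable names borrowed from the manifests.tf added below):

```hcl
locals {
  # Render each template in resources/manifests to its content
  manifests = {
    for name in fileset("${path.module}/resources/manifests", "*.yaml") :
    "manifests/${name}" => templatefile("${path.module}/resources/manifests/${name}", {
      pod_cidr = var.pod_cidr
    })
  }
}

# Write each rendered manifest beneath the asset directory
# (local_file for_each requires Terraform v0.12.6+, per the commit)
resource "local_file" "manifests" {
  for_each = local.manifests

  content  = each.value
  filename = "${var.asset_dir}/${each.key}"
}
```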
Dalton Hubble
0daa1276c6 Update Kubernetes from v1.16.2 to v1.16.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1163
2019-11-13 13:02:01 -08:00
Dalton Hubble
a2b1dbe2c0 Update Calico from v3.10.0 to v3.10.1
* https://docs.projectcalico.org/v3.10/release-notes/
2019-11-07 11:07:15 -08:00
Dalton Hubble
3c7334ab55 Upgrade Calico from v3.9.2 to v3.10.0
* Change the calico-node livenessProbe from httpGet to an exec
of `calico-node -felix-ready`, as recommended by Calico
* Allow advertising Kubernetes service ClusterIPs
2019-10-27 01:06:09 -07:00
Dalton Hubble
e09d6bef33 Switch kube-proxy from iptables mode to ipvs mode
* Kubernetes v1.11 declared kube-proxy IPVS mode GA
* Many problems were found https://github.com/poseidon/typhoon/pull/321
* Since then, major blockers seem to have been addressed
2019-10-15 22:55:17 -07:00
Dalton Hubble
0fcc067476 Update Kubernetes from v1.16.1 to v1.16.2
* https://github.com/kubernetes/kubernetes/releases/tag/v1.16.2
2019-10-15 22:38:51 -07:00
Dalton Hubble
6f2734bb3c Update Calico from v3.9.1 to v3.9.2
* https://github.com/projectcalico/calico/releases/tag/v3.9.2
2019-10-15 22:36:37 -07:00
Dalton Hubble
10d9cec5c2 Add stricter type constraints to variables 2019-10-06 20:41:50 -07:00
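
For illustration, stricter v0.12 type constraints on two of this
module's variables might look like the sketch below (the descriptions
and the default are assumptions, not taken from the commit):

```hcl
variable "etcd_servers" {
  type        = list(string)
  description = "List of etcd server FQDNs"
}

variable "enable_aggregation" {
  type        = bool
  description = "Enable the Kubernetes aggregation layer"
  default     = true
}
```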
Dalton Hubble
1f8b634652 Remove unneeded control plane flags
* Several flags now default to the arguments we've been
setting and are no longer needed
2019-10-06 20:25:46 -07:00
Dalton Hubble
586d6e36f6 Update Kubernetes from v1.16.0 to v1.16.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1161
2019-10-02 21:22:11 -07:00
Dalton Hubble
18b7a74d30 Update Calico from v3.8.2 to v3.9.1
* https://docs.projectcalico.org/v3.9/release-notes/
2019-09-29 11:14:20 -07:00
Dalton Hubble
539b725093 Update Kubernetes from v1.15.3 to v1.16.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1160
2019-09-17 21:15:46 -07:00
Dalton Hubble
d6206abedd Replace Terraform element function with indexing
* Better to explicitly index (and error on out-of-bounds) than
use Terraform `element` (which has special wrap-around behavior)
* https://www.terraform.io/docs/configuration/functions/element.html
2019-09-14 16:46:27 -07:00
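
A minimal sketch of the difference described in this commit:

```hcl
locals {
  servers = ["a.example.com", "b.example.com"]
}

# element() wraps around, so index 2 silently yields "a.example.com"
output "wrapped" {
  value = element(local.servers, 2)
}

# Native indexing fails on out-of-bounds instead of wrapping
output "indexed" {
  value = local.servers[0] # local.servers[2] would be an error
}
```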
Dalton Hubble
e839ec5a2b Fix Terraform formatting 2019-09-14 16:44:36 -07:00
Dalton Hubble
3dade188f2 Rename project to terraform-render-bootstrap
* Rename from terraform-render-bootkube to terraform-render-bootstrap
* Generated manifest and certificate assets are no longer geared
specifically for bootkube (no longer used)
2019-09-14 16:16:49 -07:00
Dalton Hubble
97bbed6c3a Rename CA organization from bootkube to typhoon
* Rename the organization in generated CA certificates for
clusters from bootkube to typhoon
* Mainly helpful to avoid confusion with bootkube CA certificates
if users inspect their CA, especially now that bootkube isn't used
(better that their searches lead to Typhoon)
2019-09-14 16:08:06 -07:00
Dalton Hubble
6e59af7113 Migrate from a self-hosted to static pod control plane
* Run kube-apiserver, kube-scheduler, and kube-controller-manager
as static pods on each controller node
* Bootstrap a minimal control plane by copying `static-manifests`
to the Kubelet `--pod-manifest-path` and tls/auth secrets to
`/etc/kubernetes/bootstrap-secrets`. Then, kubectl apply the
Kubernetes manifests.
* Discontinue using bootkube to bootstrap and pivot to a self-hosted
control plane.
* Remove bootkube self-hosted kube-apiserver DaemonSet and
kube-scheduler and kube-controller-manager Deployments
* Remove pod-checkpointer manifests (no longer needed)

Advantages:

* Reduce control plane bootstrapping complexity. Self-hosted pivot and
pod checkpointing worked well, but in-place edits to kube-apiserver,
kube-controller-manager, or kube-scheduler are infrequently used. The
concept was originally geared toward continuously in-place upgrading
clusters, a goal Typhoon doesn't take on (rec. blue/green clusters).
As such, the value-add doesn't justify the extra components for this
particular project.
* Static pods still provide kubectl visibility and log access

Drawbacks:

* In-place edits to kube-apiserver, kube-controller-manager, and
kube-scheduler are not possible via kubectl (non-goal)
* Assets must be copied to each controller (not just one)
* Static pods must load credentials via hostPath, which is less clean
than the former Kubernetes secrets and service accounts
2019-09-02 20:52:46 -07:00
Dalton Hubble
98cc19f80f Update CoreDNS from v1.5.0 to v1.6.2
* https://coredns.io/2019/06/26/coredns-1.5.1-release/
* https://coredns.io/2019/07/03/coredns-1.5.2-release/
* https://coredns.io/2019/07/28/coredns-1.6.0-release/
* https://coredns.io/2019/08/02/coredns-1.6.1-release/
* https://coredns.io/2019/08/13/coredns-1.6.2-release/
2019-08-31 15:20:55 -07:00
Dalton Hubble
248675e7a9 Update Kubernetes from v1.15.2 to v1.15.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md/#v1153
2019-08-19 14:41:54 -07:00
Dalton Hubble
8b3738b2cc Update Calico from v3.8.1 to v3.8.2
* https://docs.projectcalico.org/v3.8/release-notes/
2019-08-16 14:53:20 -07:00
Dalton Hubble
c21da02249 Update Kubernetes from v1.15.1 to v1.15.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#downloads-for-v1152
2019-08-05 08:44:54 -07:00
Dalton Hubble
83dd5a7cfc Update Calico from v3.8.0 to v3.8.1
* https://github.com/projectcalico/calico/releases/tag/v3.8.1
2019-07-27 15:17:47 -07:00
Dalton Hubble
ed94836925 Update kube-router from v0.3.1 to v0.3.2
* kube-router is experimental and not supported or validated
* Bumping so the next time kube-router is evaluated, we're on
a modern version
* https://github.com/cloudnativelabs/kube-router/releases/tag/v0.3.2
2019-07-27 15:12:43 -07:00
Dalton Hubble
5b9faa9031 Update Kubernetes from v1.15.0 to v1.15.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#downloads-for-v1151
2019-07-19 01:18:09 -07:00
Dalton Hubble
119cb00fa7 Upgrade Calico from v3.7.4 to v3.8.0
* Enable CNI bandwidth plugin for traffic shaping
* https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#support-traffic-shaping
2019-07-11 21:00:58 -07:00
Dalton Hubble
4caca47776 Run kube-apiserver as non-root user (nobody) 2019-07-06 13:51:54 -07:00
Dalton Hubble
3bfd1253ec Always run kube-apiserver on port 6443 (internally)
* Require that the bootstrap-kube-apiserver and kube-apiserver
components listen on port 6443 (internally) to allow kube-apiserver
pods to run with lower user privilege
* Remove variable `apiserver_port`. The kube-apiserver listen
port is no longer customizable.
* Add variable `external_apiserver_port` to allow architectures
where a load balancer fronts kube-apiserver 6443 backends, but
listens on a different port externally. For example, Google Cloud
TCP Proxy load balancers cannot listen on 6443
2019-07-06 13:50:22 -07:00
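
A hedged usage sketch of the new variable (module source per the
README; the port value 443 is illustrative and other required
variables are omitted):

```hcl
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=SHA"

  cluster_name = "example"
  api_servers  = ["node1.example.com"]

  # kube-apiserver always listens on 6443 internally; a fronting
  # load balancer (e.g. Google Cloud TCP Proxy) may listen
  # externally on a different port
  external_apiserver_port = 443
}
```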
Dalton Hubble
95f6fc7fa5 Update Calico from v3.7.3 to v3.7.4
* https://docs.projectcalico.org/v3.7/release-notes/
2019-07-02 20:15:53 -07:00
Dalton Hubble
62df9ad69c Update Kubernetes from v1.14.3 to v1.15.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#v1150
2019-06-23 13:04:13 -07:00
Dalton Hubble
89c3ab4e27 Update Calico from v3.7.2 to v3.7.3
* https://docs.projectcalico.org/v3.7/release-notes/
2019-06-13 23:36:35 -07:00
Dalton Hubble
0103bc06bb Define module required provider versions 2019-06-06 09:39:48 -07:00
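
A sketch of what such version pins look like in Terraform v0.12
syntax, for the providers this module uses (the exact constraints
are illustrative, not taken from the commit):

```hcl
terraform {
  required_version = "~> 0.12.0"

  # Providers used by this module: local_file, template_file, tls_*
  required_providers {
    local    = "~> 1.2"
    template = "~> 2.1"
    tls      = "~> 2.0"
  }
}
```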
Dalton Hubble
33d033f1a6 Migrate from Terraform v0.11.x to v0.12.x (breaking!)
* Terraform v0.12 is a major Terraform release with breaking changes
to the HCL language. In v0.11, redundant brackets were required as
interpolation type hints to pass lists or to concat and flatten lists
and strings. In v0.12, that work-around is no longer supported. Lists
are represented as first-class objects and the redundant brackets create
nested lists. Consequently, it's not possible to pass lists in a way
that works with both v0.11 and v0.12 at the same time. We've made the
difficult choice to pursue a hard cutover to Terraform v0.12.x
* https://www.terraform.io/upgrade-guides/0-12.html#referring-to-list-variables
* Use expression syntax instead of interpolated strings, where suggested
* Define Terraform required_version ~> v0.12.0 (>= v0.12.0, < v0.13)
2019-06-06 09:39:46 -07:00
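
The cutover is visible in the etcd TLS diff below; in short:

```hcl
# Terraform v0.11: redundant brackets as an interpolation type hint
dns_names = ["${concat(var.etcd_servers, list("localhost"))}"]

# Terraform v0.12: lists are first-class; the brackets would nest a list
dns_names = concat(var.etcd_servers, ["localhost"])
```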
57 changed files with 386 additions and 841 deletions

@@ -1,18 +1,18 @@
# terraform-render-bootkube
# terraform-render-bootstrap
`terraform-render-bootkube` is a Terraform module that renders [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube) assets for bootstrapping a Kubernetes cluster.
`terraform-render-bootstrap` is a Terraform module that renders TLS certificates, static pods, and manifests for bootstrapping a Kubernetes cluster.
## Audience
`terraform-render-bootkube` is a low-level component of the [Typhoon](https://github.com/poseidon/typhoon) Kubernetes distribution. Use Typhoon modules to create and manage Kubernetes clusters across supported platforms. Use the bootkube module if you'd like to customize a Kubernetes control plane or build your own distribution.
`terraform-render-bootstrap` is a low-level component of the [Typhoon](https://github.com/poseidon/typhoon) Kubernetes distribution. Use Typhoon modules to create and manage Kubernetes clusters across supported platforms. Use the bootstrap module if you'd like to customize a Kubernetes control plane or build your own distribution.
## Usage
Use the module to declare bootkube assets. Check [variables.tf](variables.tf) for options and [terraform.tfvars.example](terraform.tfvars.example) for examples.
Use the module to declare bootstrap assets. Check [variables.tf](variables.tf) for options and [terraform.tfvars.example](terraform.tfvars.example) for examples.
```hcl
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=SHA"
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=SHA"
cluster_name = "example"
api_servers = ["node1.example.com"]
@@ -29,21 +29,5 @@ terraform plan
terraform apply
```
Find bootkube assets rendered to the `asset_dir` path. That's it.
Find bootstrap assets rendered to the `asset_dir` path. That's it.
### Comparison
Render bootkube assets directly with bootkube v0.14.0.
```sh
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:6443 --api-server-alt-names=DNS=node1.example.com --etcd-servers=https://node1.example.com:2379
```
Compare assets. Rendered assets may differ slightly from bootkube assets to reflect decisions made by the [Typhoon](https://github.com/poseidon/typhoon) distribution.
```sh
pushd /home/core/mycluster
mv manifests-networking/* manifests
popd
diff -rw assets /home/core/mycluster
```

assets.tf

@@ -1,110 +1,42 @@
# Self-hosted Kubernetes bootstrap-manifests
resource "template_dir" "bootstrap-manifests" {
source_dir = "${path.module}/resources/bootstrap-manifests"
destination_dir = "${var.asset_dir}/bootstrap-manifests"
# Generated kubeconfig for Kubelets
data "template_file" "kubeconfig-kubelet" {
template = file("${path.module}/resources/kubeconfig-kubelet")
vars {
hyperkube_image = "${var.container_images["hyperkube"]}"
etcd_servers = "${join(",", formatlist("https://%s:2379", var.etcd_servers))}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
trusted_certs_dir = "${var.trusted_certs_dir}"
apiserver_port = "${var.apiserver_port}"
vars = {
ca_cert = base64encode(tls_self_signed_cert.kube-ca.cert_pem)
kubelet_cert = base64encode(tls_locally_signed_cert.kubelet.cert_pem)
kubelet_key = base64encode(tls_private_key.kubelet.private_key_pem)
server = format("https://%s:%s", var.api_servers[0], var.external_apiserver_port)
}
}
# Self-hosted Kubernetes manifests
resource "template_dir" "manifests" {
source_dir = "${path.module}/resources/manifests"
destination_dir = "${var.asset_dir}/manifests"
# Generated admin kubeconfig to bootstrap control plane
data "template_file" "kubeconfig-admin" {
template = file("${path.module}/resources/kubeconfig-admin")
vars {
hyperkube_image = "${var.container_images["hyperkube"]}"
pod_checkpointer_image = "${var.container_images["pod_checkpointer"]}"
coredns_image = "${var.container_images["coredns"]}"
etcd_servers = "${join(",", formatlist("https://%s:2379", var.etcd_servers))}"
control_plane_replicas = "${max(2, length(var.etcd_servers))}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
cluster_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
trusted_certs_dir = "${var.trusted_certs_dir}"
apiserver_port = "${var.apiserver_port}"
ca_cert = "${base64encode(tls_self_signed_cert.kube-ca.cert_pem)}"
ca_key = "${base64encode(tls_private_key.kube-ca.private_key_pem)}"
server = "${format("https://%s:%s", element(var.api_servers, 0), var.apiserver_port)}"
apiserver_key = "${base64encode(tls_private_key.apiserver.private_key_pem)}"
apiserver_cert = "${base64encode(tls_locally_signed_cert.apiserver.cert_pem)}"
serviceaccount_pub = "${base64encode(tls_private_key.service-account.public_key_pem)}"
serviceaccount_key = "${base64encode(tls_private_key.service-account.private_key_pem)}"
etcd_ca_cert = "${base64encode(tls_self_signed_cert.etcd-ca.cert_pem)}"
etcd_client_cert = "${base64encode(tls_locally_signed_cert.client.cert_pem)}"
etcd_client_key = "${base64encode(tls_private_key.client.private_key_pem)}"
aggregation_flags = "${var.enable_aggregation == "true" ? indent(8, local.aggregation_flags) : ""}"
aggregation_ca_cert = "${var.enable_aggregation == "true" ? base64encode(join(" ", tls_self_signed_cert.aggregation-ca.*.cert_pem)) : ""}"
aggregation_client_cert = "${var.enable_aggregation == "true" ? base64encode(join(" ", tls_locally_signed_cert.aggregation-client.*.cert_pem)) : ""}"
aggregation_client_key = "${var.enable_aggregation == "true" ? base64encode(join(" ", tls_private_key.aggregation-client.*.private_key_pem)) : ""}"
vars = {
name = var.cluster_name
ca_cert = base64encode(tls_self_signed_cert.kube-ca.cert_pem)
kubelet_cert = base64encode(tls_locally_signed_cert.admin.cert_pem)
kubelet_key = base64encode(tls_private_key.admin.private_key_pem)
server = format("https://%s:%s", var.api_servers[0], var.external_apiserver_port)
}
}
locals {
aggregation_flags = <<EOF
- --proxy-client-cert-file=/etc/kubernetes/secrets/aggregation-client.crt
- --proxy-client-key-file=/etc/kubernetes/secrets/aggregation-client.key
- --requestheader-client-ca-file=/etc/kubernetes/secrets/aggregation-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
EOF
}
# Generated kubeconfig for Kubelets
resource "local_file" "kubeconfig-kubelet" {
content = "${data.template_file.kubeconfig-kubelet.rendered}"
content = data.template_file.kubeconfig-kubelet.rendered
filename = "${var.asset_dir}/auth/kubeconfig-kubelet"
}
# Generated admin kubeconfig (bootkube requires it be at auth/kubeconfig)
# https://github.com/kubernetes-incubator/bootkube/blob/master/pkg/bootkube/bootkube.go#L42
# Generated admin kubeconfig to bootstrap control plane
resource "local_file" "kubeconfig-admin" {
content = "${data.template_file.kubeconfig-admin.rendered}"
content = data.template_file.kubeconfig-admin.rendered
filename = "${var.asset_dir}/auth/kubeconfig"
}
# Generated admin kubeconfig in a file named after the cluster
resource "local_file" "kubeconfig-admin-named" {
content = "${data.template_file.kubeconfig-admin.rendered}"
content = data.template_file.kubeconfig-admin.rendered
filename = "${var.asset_dir}/auth/${var.cluster_name}-config"
}
data "template_file" "kubeconfig-kubelet" {
template = "${file("${path.module}/resources/kubeconfig-kubelet")}"
vars {
ca_cert = "${base64encode(tls_self_signed_cert.kube-ca.cert_pem)}"
kubelet_cert = "${base64encode(tls_locally_signed_cert.kubelet.cert_pem)}"
kubelet_key = "${base64encode(tls_private_key.kubelet.private_key_pem)}"
server = "${format("https://%s:%s", element(var.api_servers, 0), var.apiserver_port)}"
}
}
data "template_file" "kubeconfig-admin" {
template = "${file("${path.module}/resources/kubeconfig-admin")}"
vars {
name = "${var.cluster_name}"
ca_cert = "${base64encode(tls_self_signed_cert.kube-ca.cert_pem)}"
kubelet_cert = "${base64encode(tls_locally_signed_cert.admin.cert_pem)}"
kubelet_key = "${base64encode(tls_private_key.admin.private_key_pem)}"
server = "${format("https://%s:%s", element(var.api_servers, 0), var.apiserver_port)}"
}
}

@@ -1,47 +1,76 @@
# Assets generated only when certain options are chosen
resource "template_dir" "flannel-manifests" {
count = "${var.networking == "flannel" ? 1 : 0}"
source_dir = "${path.module}/resources/flannel"
destination_dir = "${var.asset_dir}/manifests-networking"
locals {
# flannel manifests (manifest.yaml => content)
flannel_manifests = {
for name in fileset("${path.module}/resources/flannel", "*.yaml"):
"manifests-networking/${name}" => templatefile(
"${path.module}/resources/flannel/${name}",
{
flannel_image = var.container_images["flannel"]
flannel_cni_image = var.container_images["flannel_cni"]
pod_cidr = var.pod_cidr
}
)
if var.networking == "flannel"
}
vars {
flannel_image = "${var.container_images["flannel"]}"
flannel_cni_image = "${var.container_images["flannel_cni"]}"
# calico manifests (manifest.yaml => content)
calico_manifests = {
for name in fileset("${path.module}/resources/calico", "*.yaml"):
"manifests-networking/${name}" => templatefile(
"${path.module}/resources/calico/${name}",
{
calico_image = var.container_images["calico"]
calico_cni_image = var.container_images["calico_cni"]
network_mtu = var.network_mtu
network_encapsulation = indent(2, var.network_encapsulation == "vxlan" ? "vxlanMode: Always" : "ipipMode: Always")
ipip_enabled = var.network_encapsulation == "ipip" ? true : false
ipip_readiness = var.network_encapsulation == "ipip" ? indent(16, "- --bird-ready") : ""
vxlan_enabled = var.network_encapsulation == "vxlan" ? true : false
network_ip_autodetection_method = var.network_ip_autodetection_method
pod_cidr = var.pod_cidr
enable_reporting = var.enable_reporting
}
)
if var.networking == "calico"
}
pod_cidr = "${var.pod_cidr}"
# kube-router manifests (manifest.yaml => content)
kube_router_manifests = {
for name in fileset("${path.module}/resources/kube-router", "*.yaml"):
"manifests-networking/${name}" => templatefile(
"${path.module}/resources/kube-router/${name}",
{
kube_router_image = var.container_images["kube_router"]
flannel_cni_image = var.container_images["flannel_cni"]
network_mtu = var.network_mtu
}
)
if var.networking == "kube-router"
}
}
resource "template_dir" "calico-manifests" {
count = "${var.networking == "calico" ? 1 : 0}"
source_dir = "${path.module}/resources/calico"
destination_dir = "${var.asset_dir}/manifests-networking"
# flannel manifests
resource "local_file" "flannel-manifests" {
for_each = local.flannel_manifests
vars {
calico_image = "${var.container_images["calico"]}"
calico_cni_image = "${var.container_images["calico_cni"]}"
network_mtu = "${var.network_mtu}"
network_encapsulation = "${indent(2, var.network_encapsulation == "vxlan" ? "vxlanMode: Always" : "ipipMode: Always")}"
ipip_enabled = "${var.network_encapsulation == "ipip" ? true : false}"
ipip_readiness = "${var.network_encapsulation == "ipip" ? indent(16, "- --bird-ready") : ""}"
vxlan_enabled = "${var.network_encapsulation == "vxlan" ? true : false}"
network_ip_autodetection_method = "${var.network_ip_autodetection_method}"
pod_cidr = "${var.pod_cidr}"
enable_reporting = "${var.enable_reporting}"
}
filename = "${var.asset_dir}/${each.key}"
content = each.value
}
resource "template_dir" "kube-router-manifests" {
count = "${var.networking == "kube-router" ? 1 : 0}"
source_dir = "${path.module}/resources/kube-router"
destination_dir = "${var.asset_dir}/manifests-networking"
# Calico manifests
resource "local_file" "calico-manifests" {
for_each = local.calico_manifests
vars {
kube_router_image = "${var.container_images["kube_router"]}"
flannel_cni_image = "${var.container_images["flannel_cni"]}"
network_mtu = "${var.network_mtu}"
}
filename = "${var.asset_dir}/${each.key}"
content = each.value
}
# kube-router manifests
resource "local_file" "kube-router-manifests" {
for_each = local.kube_router_manifests
filename = "${var.asset_dir}/${each.key}"
content = each.value
}

manifests.tf (new file)

@@ -0,0 +1,65 @@
locals {
# Kubernetes static pod manifests (manifest.yaml => content)
static_manifests = {
for name in fileset("${path.module}/resources/static-manifests", "*.yaml"):
"static-manifests/${name}" => templatefile(
"${path.module}/resources/static-manifests/${name}",
{
hyperkube_image = var.container_images["hyperkube"]
etcd_servers = join(",", formatlist("https://%s:2379", var.etcd_servers))
cloud_provider = var.cloud_provider
pod_cidr = var.pod_cidr
service_cidr = var.service_cidr
trusted_certs_dir = var.trusted_certs_dir
aggregation_flags = var.enable_aggregation ? indent(4, local.aggregation_flags) : ""
}
)
}
# Kubernetes control plane manifests (manifest.yaml => content)
manifests = {
for name in fileset("${path.module}/resources/manifests", "**/*.yaml"):
"manifests/${name}" => templatefile(
"${path.module}/resources/manifests/${name}",
{
hyperkube_image = var.container_images["hyperkube"]
coredns_image = var.container_images["coredns"]
control_plane_replicas = max(2, length(var.etcd_servers))
pod_cidr = var.pod_cidr
cluster_domain_suffix = var.cluster_domain_suffix
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
trusted_certs_dir = var.trusted_certs_dir
server = format("https://%s:%s", var.api_servers[0], var.external_apiserver_port)
}
)
}
}
# Kubernetes static pod manifests
resource "local_file" "static-manifests" {
for_each = local.static_manifests
content = each.value
filename = "${var.asset_dir}/${each.key}"
}
# Kubernetes control plane manifests
resource "local_file" "manifests" {
for_each = local.manifests
content = each.value
filename = "${var.asset_dir}/${each.key}"
}
locals {
aggregation_flags = <<EOF
- --proxy-client-cert-file=/etc/kubernetes/secrets/aggregation-client.crt
- --proxy-client-key-file=/etc/kubernetes/secrets/aggregation-client.key
- --requestheader-client-ca-file=/etc/kubernetes/secrets/aggregation-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
EOF
}

@@ -1,71 +1,44 @@
output "id" {
value = "${sha1("${template_dir.bootstrap-manifests.id} ${template_dir.manifests.id}")}"
}
output "content_hash" {
value = "${sha1("${template_dir.bootstrap-manifests.id} ${template_dir.manifests.id}")}"
}
output "cluster_dns_service_ip" {
value = "${cidrhost(var.service_cidr, 10)}"
value = cidrhost(var.service_cidr, 10)
}
// Generated kubeconfig for Kubelets (i.e. lower privilege than admin)
output "kubeconfig-kubelet" {
value = "${data.template_file.kubeconfig-kubelet.rendered}"
value = data.template_file.kubeconfig-kubelet.rendered
}
// Generated kubeconfig for admins (i.e. human super-user)
output "kubeconfig-admin" {
value = "${data.template_file.kubeconfig-admin.rendered}"
value = data.template_file.kubeconfig-admin.rendered
}
# etcd TLS assets
output "etcd_ca_cert" {
value = "${tls_self_signed_cert.etcd-ca.cert_pem}"
value = tls_self_signed_cert.etcd-ca.cert_pem
}
output "etcd_client_cert" {
value = "${tls_locally_signed_cert.client.cert_pem}"
value = tls_locally_signed_cert.client.cert_pem
}
output "etcd_client_key" {
value = "${tls_private_key.client.private_key_pem}"
value = tls_private_key.client.private_key_pem
}
output "etcd_server_cert" {
value = "${tls_locally_signed_cert.server.cert_pem}"
value = tls_locally_signed_cert.server.cert_pem
}
output "etcd_server_key" {
value = "${tls_private_key.server.private_key_pem}"
value = tls_private_key.server.private_key_pem
}
output "etcd_peer_cert" {
value = "${tls_locally_signed_cert.peer.cert_pem}"
value = tls_locally_signed_cert.peer.cert_pem
}
output "etcd_peer_key" {
value = "${tls_private_key.peer.private_key_pem}"
}
# Some platforms may need to reconstruct the kubeconfig directly in user-data.
# That can't be done with the way template_file interpolates multi-line
# contents so the raw components of the kubeconfig may be needed.
output "ca_cert" {
value = "${base64encode(tls_self_signed_cert.kube-ca.cert_pem)}"
}
output "kubelet_cert" {
value = "${base64encode(tls_locally_signed_cert.kubelet.cert_pem)}"
}
output "kubelet_key" {
value = "${base64encode(tls_private_key.kubelet.private_key_pem)}"
}
output "server" {
value = "${format("https://%s:%s", element(var.api_servers, 0), var.apiserver_port)}"
value = tls_private_key.peer.private_key_pem
}

@@ -67,18 +67,19 @@ rules:
- ipamblocks
- globalnetworkpolicies
- globalnetworksets
- networksets
- networkpolicies
- networksets
- clusterinformations
- hostendpoints
- blockaffinities
verbs:
- get
- list
- watch
- apiGroups: ["crd.projectcalico.org"]
resources:
- felixconfigurations
- ippools
- felixconfigurations
- clusterinformations
verbs:
- create

@@ -36,6 +36,10 @@ data:
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
},
{
"type": "bandwidth",
"capabilities": {"bandwidth": true}
}
]
}

@@ -139,10 +139,10 @@ spec:
requests:
cpu: 150m
livenessProbe:
httpGet:
path: /liveness
port: 9099
host: localhost
exec:
command:
- /bin/calico-node
- -felix-ready
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6

@@ -1,4 +1,4 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kube-router

@@ -7,7 +7,9 @@ data:
Corefile: |
.:53 {
errors
health
health {
lameduck 5s
}
ready
log . {
class error

@@ -6,7 +6,6 @@ metadata:
labels:
k8s-app: coredns
kubernetes.io/name: "CoreDNS"
kubernetes.io/cluster-service: "true"
spec:
replicas: ${control_plane_replicas}
strategy:

@@ -3,5 +3,3 @@ kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"

@@ -8,7 +8,6 @@ metadata:
prometheus.io/port: "9153"
labels:
k8s-app: coredns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
spec:
selector:

@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-apiserver
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kube-apiserver
namespace: kube-system

@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: kube-apiserver

@@ -1,18 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: kube-apiserver
namespace: kube-system
type: Opaque
data:
apiserver.key: ${apiserver_key}
apiserver.crt: ${apiserver_cert}
service-account.pub: ${serviceaccount_pub}
ca.crt: ${ca_cert}
etcd-client-ca.crt: ${etcd_ca_cert}
etcd-client.crt: ${etcd_client_cert}
etcd-client.key: ${etcd_client_key}
aggregation-ca.crt: ${aggregation_ca_cert}
aggregation-client.crt: ${aggregation_client_cert}
aggregation-client.key: ${aggregation_client_key}

@@ -1,82 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-apiserver
namespace: kube-system
labels:
tier: control-plane
k8s-app: kube-apiserver
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: kube-apiserver
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-apiserver
annotations:
checkpointer.alpha.coreos.com/checkpoint: "true"
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
nodeSelector:
node-role.kubernetes.io/master: ""
priorityClassName: system-cluster-critical
serviceAccountName: kube-apiserver
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: kube-apiserver
image: ${hyperkube_image}
command:
- /hyperkube
- apiserver
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --anonymous-auth=false
- --authorization-mode=RBAC
- --bind-address=0.0.0.0
- --client-ca-file=/etc/kubernetes/secrets/ca.crt
- --cloud-provider=${cloud_provider}
- --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,Priority
- --etcd-cafile=/etc/kubernetes/secrets/etcd-client-ca.crt
- --etcd-certfile=/etc/kubernetes/secrets/etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/secrets/etcd-client.key
- --etcd-servers=${etcd_servers}
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname${aggregation_flags}
- --secure-port=${apiserver_port}
- --service-account-key-file=/etc/kubernetes/secrets/service-account.pub
- --service-cluster-ip-range=${service_cidr}
- --storage-backend=etcd3
- --tls-cert-file=/etc/kubernetes/secrets/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/secrets/apiserver.key
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/secrets
readOnly: true
- name: ssl-certs-host
mountPath: /etc/ssl/certs
readOnly: true
volumes:
- name: secrets
secret:
secretName: kube-apiserver
- name: ssl-certs-host
hostPath:
path: ${trusted_certs_dir}

@@ -1,11 +0,0 @@
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: kube-controller-manager
namespace: kube-system
spec:
minAvailable: 1
selector:
matchLabels:
tier: control-plane
k8s-app: kube-controller-manager

@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-controller-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-controller-manager
subjects:
- kind: ServiceAccount
name: kube-controller-manager
namespace: kube-system

@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: kube-controller-manager

@@ -1,11 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: kube-controller-manager
namespace: kube-system
type: Opaque
data:
service-account.key: ${serviceaccount_key}
ca.crt: ${ca_cert}
ca.key: ${ca_key}

@@ -1,96 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-controller-manager
namespace: kube-system
labels:
tier: control-plane
k8s-app: kube-controller-manager
spec:
replicas: ${control_plane_replicas}
selector:
matchLabels:
tier: control-plane
k8s-app: kube-controller-manager
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-controller-manager
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: tier
operator: In
values:
- control-plane
- key: k8s-app
operator: In
values:
- kube-controller-manager
topologyKey: kubernetes.io/hostname
nodeSelector:
node-role.kubernetes.io/master: ""
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
serviceAccountName: kube-controller-manager
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: kube-controller-manager
image: ${hyperkube_image}
command:
- ./hyperkube
- controller-manager
- --use-service-account-credentials
- --allocate-node-cidrs=true
- --cloud-provider=${cloud_provider}
- --cluster-cidr=${pod_cidr}
- --service-cluster-ip-range=${service_cidr}
- --cluster-signing-cert-file=/etc/kubernetes/secrets/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/secrets/ca.key
- --configure-cloud-routes=false
- --leader-elect=true
- --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins
- --pod-eviction-timeout=1m
- --root-ca-file=/etc/kubernetes/secrets/ca.crt
- --service-account-private-key-file=/etc/kubernetes/secrets/service-account.key
livenessProbe:
httpGet:
scheme: HTTPS
path: /healthz
port: 10257
initialDelaySeconds: 15
timeoutSeconds: 15
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/secrets
readOnly: true
- name: volumeplugins
mountPath: /var/lib/kubelet/volumeplugins
readOnly: true
- name: ssl-host
mountPath: /etc/ssl/certs
readOnly: true
volumes:
- name: secrets
secret:
secretName: kube-controller-manager
- name: ssl-host
hostPath:
path: ${trusted_certs_dir}
- name: volumeplugins
hostPath:
path: /var/lib/kubelet/volumeplugins
dnsPolicy: Default # Don't use cluster DNS.

@@ -36,11 +36,11 @@ spec:
image: ${hyperkube_image}
command:
- ./hyperkube
- proxy
- kube-proxy
- --cluster-cidr=${pod_cidr}
- --hostname-override=$(NODE_NAME)
- --kubeconfig=/etc/kubernetes/kubeconfig
- --proxy-mode=iptables
- --proxy-mode=ipvs
env:
- name: NODE_NAME
valueFrom:

@@ -1,11 +0,0 @@
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: kube-scheduler
namespace: kube-system
spec:
minAvailable: 1
selector:
matchLabels:
tier: control-plane
k8s-app: kube-scheduler

@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-scheduler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-scheduler
subjects:
- kind: ServiceAccount
name: kube-scheduler
namespace: kube-system

@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: kube-scheduler

@@ -1,13 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: volume-scheduler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:volume-scheduler
subjects:
- kind: ServiceAccount
name: kube-scheduler
namespace: kube-system

@@ -1,63 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-scheduler
namespace: kube-system
labels:
tier: control-plane
k8s-app: kube-scheduler
spec:
replicas: ${control_plane_replicas}
selector:
matchLabels:
tier: control-plane
k8s-app: kube-scheduler
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-scheduler
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: tier
operator: In
values:
- control-plane
- key: k8s-app
operator: In
values:
- kube-scheduler
topologyKey: kubernetes.io/hostname
nodeSelector:
node-role.kubernetes.io/master: ""
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
serviceAccountName: kube-scheduler
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: kube-scheduler
image: ${hyperkube_image}
command:
- ./hyperkube
- scheduler
- --leader-elect=true
livenessProbe:
httpGet:
scheme: HTTPS
path: /healthz
port: 10259
initialDelaySeconds: 15
timeoutSeconds: 15

@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: pod-checkpointer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: pod-checkpointer
subjects:
- kind: ServiceAccount
name: pod-checkpointer
namespace: kube-system

@@ -1,11 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pod-checkpointer
rules:
- apiGroups: [""]
resources:
- nodes
- nodes/proxy
verbs:
- get

@@ -1,13 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pod-checkpointer
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pod-checkpointer
subjects:
- kind: ServiceAccount
name: pod-checkpointer
namespace: kube-system

@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-checkpointer
namespace: kube-system
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods"]
verbs: ["get", "watch", "list"]
- apiGroups: [""] # "" indicates the core API group
resources: ["secrets", "configmaps"]
verbs: ["get"]

@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: pod-checkpointer

@@ -1,72 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: pod-checkpointer
namespace: kube-system
labels:
tier: control-plane
k8s-app: pod-checkpointer
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: pod-checkpointer
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
tier: control-plane
k8s-app: pod-checkpointer
annotations:
checkpointer.alpha.coreos.com/checkpoint: "true"
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
nodeSelector:
node-role.kubernetes.io/master: ""
priorityClassName: system-node-critical
serviceAccountName: pod-checkpointer
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: pod-checkpointer
image: ${pod_checkpointer_image}
command:
- /checkpoint
- --lock-file=/var/run/lock/pod-checkpointer.lock
- --kubeconfig=/etc/checkpointer/kubeconfig
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: kubeconfig
mountPath: /etc/checkpointer
- name: etc-kubernetes
mountPath: /etc/kubernetes
- name: var-run
mountPath: /var/run
volumes:
- name: kubeconfig
configMap:
name: kubeconfig-in-cluster
- name: etc-kubernetes
hostPath:
path: /etc/kubernetes
- name: var-run
hostPath:
path: /var/run

@@ -1,26 +1,31 @@
apiVersion: v1
kind: Pod
metadata:
name: bootstrap-kube-apiserver
name: kube-apiserver
namespace: kube-system
labels:
k8s-app: kube-apiserver
tier: control-plane
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
containers:
- name: kube-apiserver
image: ${hyperkube_image}
command:
- /hyperkube
- apiserver
- kube-apiserver
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --anonymous-auth=false
- --authorization-mode=RBAC
- --bind-address=0.0.0.0
- --client-ca-file=/etc/kubernetes/secrets/ca.crt
- --cloud-provider=${cloud_provider}
- --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,Priority
- --etcd-cafile=/etc/kubernetes/secrets/etcd-client-ca.crt
- --etcd-certfile=/etc/kubernetes/secrets/etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/secrets/etcd-client.key
@@ -28,11 +33,10 @@ spec:
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --secure-port=${apiserver_port}
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname${aggregation_flags}
- --secure-port=6443
- --service-account-key-file=/etc/kubernetes/secrets/service-account.pub
- --service-cluster-ip-range=${service_cidr}
- --storage-backend=etcd3
- --tls-cert-file=/etc/kubernetes/secrets/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/secrets/apiserver.key
env:
@@ -40,6 +44,9 @@ spec:
valueFrom:
fieldRef:
fieldPath: status.podIP
resources:
requests:
cpu: 200m
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/secrets

@@ -1,17 +1,25 @@
apiVersion: v1
kind: Pod
metadata:
name: bootstrap-kube-controller-manager
name: kube-controller-manager
namespace: kube-system
labels:
k8s-app: kube-controller-manager
tier: control-plane
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
containers:
- name: kube-controller-manager
image: ${hyperkube_image}
command:
- ./hyperkube
- controller-manager
- /hyperkube
- kube-controller-manager
- --allocate-node-cidrs=true
- --cluster-cidr=${pod_cidr}
- --service-cluster-ip-range=${service_cidr}
@@ -23,6 +31,17 @@ spec:
- --leader-elect=true
- --root-ca-file=/etc/kubernetes/secrets/ca.crt
- --service-account-private-key-file=/etc/kubernetes/secrets/service-account.key
livenessProbe:
httpGet:
scheme: HTTPS
host: 127.0.0.1
path: /healthz
port: 10257
initialDelaySeconds: 15
timeoutSeconds: 15
resources:
requests:
cpu: 200m
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/secrets
@@ -30,7 +49,6 @@ spec:
- name: ssl-host
mountPath: /etc/ssl/certs
readOnly: true
hostNetwork: true
volumes:
- name: secrets
hostPath:

@@ -1,24 +1,42 @@
apiVersion: v1
kind: Pod
metadata:
name: bootstrap-kube-scheduler
name: kube-scheduler
namespace: kube-system
labels:
k8s-app: kube-scheduler
tier: control-plane
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
containers:
- name: kube-scheduler
image: ${hyperkube_image}
command:
- ./hyperkube
- scheduler
- /hyperkube
- kube-scheduler
- --kubeconfig=/etc/kubernetes/secrets/kubeconfig
- --leader-elect=true
livenessProbe:
httpGet:
scheme: HTTPS
host: 127.0.0.1
path: /healthz
port: 10259
initialDelaySeconds: 15
timeoutSeconds: 15
resources:
requests:
cpu: 100m
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/secrets
readOnly: true
hostNetwork: true
volumes:
- name: secrets
hostPath:

@@ -1,27 +1,18 @@
# NOTE: Across this module, the following workaround is used:
# `"${var.some_var == "condition" ? join(" ", tls_private_key.aggregation-ca.*.private_key_pem) : ""}"`
# Due to https://github.com/hashicorp/hil/issues/50, both sides of conditions
# are evaluated, until one of them is discarded. When a `count` is used resources
# can be referenced as lists with the `.*` notation, and arrays are allowed to be
# empty. The `join()` interpolation function is then used to cast them back to
# a string. Since `count` can only be 0 or 1, the returned value is either empty
# (and discarded anyways) or the desired value.
# Kubernetes Aggregation CA (i.e. front-proxy-ca)
# Files: tls/{aggregation-ca.crt,aggregation-ca.key}
resource "tls_private_key" "aggregation-ca" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
count = var.enable_aggregation ? 1 : 0
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_self_signed_cert" "aggregation-ca" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
count = var.enable_aggregation ? 1 : 0
key_algorithm = "${tls_private_key.aggregation-ca.algorithm}"
private_key_pem = "${tls_private_key.aggregation-ca.private_key_pem}"
key_algorithm = tls_private_key.aggregation-ca[0].algorithm
private_key_pem = tls_private_key.aggregation-ca[0].private_key_pem
subject {
common_name = "kubernetes-front-proxy-ca"
@@ -38,16 +29,16 @@ resource "tls_self_signed_cert" "aggregation-ca" {
}
resource "local_file" "aggregation-ca-key" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
count = var.enable_aggregation ? 1 : 0
content = "${tls_private_key.aggregation-ca.private_key_pem}"
content = tls_private_key.aggregation-ca[0].private_key_pem
filename = "${var.asset_dir}/tls/aggregation-ca.key"
}
resource "local_file" "aggregation-ca-crt" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
count = var.enable_aggregation ? 1 : 0
content = "${tls_self_signed_cert.aggregation-ca.cert_pem}"
content = tls_self_signed_cert.aggregation-ca[0].cert_pem
filename = "${var.asset_dir}/tls/aggregation-ca.crt"
}
@@ -55,17 +46,17 @@ resource "local_file" "aggregation-ca-crt" {
# Files: tls/{aggregation-client.crt,aggregation-client.key}
resource "tls_private_key" "aggregation-client" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
count = var.enable_aggregation ? 1 : 0
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "aggregation-client" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
count = var.enable_aggregation ? 1 : 0
key_algorithm = "${tls_private_key.aggregation-client.algorithm}"
private_key_pem = "${tls_private_key.aggregation-client.private_key_pem}"
key_algorithm = tls_private_key.aggregation-client[0].algorithm
private_key_pem = tls_private_key.aggregation-client[0].private_key_pem
subject {
common_name = "kube-apiserver"
@@ -73,13 +64,13 @@ resource "tls_cert_request" "aggregation-client" {
}
resource "tls_locally_signed_cert" "aggregation-client" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
count = var.enable_aggregation ? 1 : 0
cert_request_pem = "${tls_cert_request.aggregation-client.cert_request_pem}"
cert_request_pem = tls_cert_request.aggregation-client[0].cert_request_pem
ca_key_algorithm = "${tls_self_signed_cert.aggregation-ca.key_algorithm}"
ca_private_key_pem = "${tls_private_key.aggregation-ca.private_key_pem}"
ca_cert_pem = "${tls_self_signed_cert.aggregation-ca.cert_pem}"
ca_key_algorithm = tls_self_signed_cert.aggregation-ca[0].key_algorithm
ca_private_key_pem = tls_private_key.aggregation-ca[0].private_key_pem
ca_cert_pem = tls_self_signed_cert.aggregation-ca[0].cert_pem
validity_period_hours = 8760
@@ -91,15 +82,16 @@ resource "tls_locally_signed_cert" "aggregation-client" {
}
resource "local_file" "aggregation-client-key" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
count = var.enable_aggregation ? 1 : 0
content = "${tls_private_key.aggregation-client.private_key_pem}"
content = tls_private_key.aggregation-client[0].private_key_pem
filename = "${var.asset_dir}/tls/aggregation-client.key"
}
resource "local_file" "aggregation-client-crt" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
count = var.enable_aggregation ? 1 : 0
content = "${tls_locally_signed_cert.aggregation-client.cert_pem}"
content = tls_locally_signed_cert.aggregation-client[0].cert_pem
filename = "${var.asset_dir}/tls/aggregation-client.crt"
}

@@ -1,66 +1,66 @@
# etcd-ca.crt
resource "local_file" "etcd_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
content = tls_self_signed_cert.etcd-ca.cert_pem
filename = "${var.asset_dir}/tls/etcd-ca.crt"
}
# etcd-ca.key
resource "local_file" "etcd_ca_key" {
content = "${tls_private_key.etcd-ca.private_key_pem}"
content = tls_private_key.etcd-ca.private_key_pem
filename = "${var.asset_dir}/tls/etcd-ca.key"
}
# etcd-client-ca.crt
resource "local_file" "etcd_client_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
content = tls_self_signed_cert.etcd-ca.cert_pem
filename = "${var.asset_dir}/tls/etcd-client-ca.crt"
}
# etcd-client.crt
resource "local_file" "etcd_client_crt" {
content = "${tls_locally_signed_cert.client.cert_pem}"
content = tls_locally_signed_cert.client.cert_pem
filename = "${var.asset_dir}/tls/etcd-client.crt"
}
# etcd-client.key
resource "local_file" "etcd_client_key" {
content = "${tls_private_key.client.private_key_pem}"
content = tls_private_key.client.private_key_pem
filename = "${var.asset_dir}/tls/etcd-client.key"
}
# server-ca.crt
resource "local_file" "etcd_server_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
content = tls_self_signed_cert.etcd-ca.cert_pem
filename = "${var.asset_dir}/tls/etcd/server-ca.crt"
}
# server.crt
resource "local_file" "etcd_server_crt" {
content = "${tls_locally_signed_cert.server.cert_pem}"
content = tls_locally_signed_cert.server.cert_pem
filename = "${var.asset_dir}/tls/etcd/server.crt"
}
# server.key
resource "local_file" "etcd_server_key" {
content = "${tls_private_key.server.private_key_pem}"
content = tls_private_key.server.private_key_pem
filename = "${var.asset_dir}/tls/etcd/server.key"
}
# peer-ca.crt
resource "local_file" "etcd_peer_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
content = tls_self_signed_cert.etcd-ca.cert_pem
filename = "${var.asset_dir}/tls/etcd/peer-ca.crt"
}
# peer.crt
resource "local_file" "etcd_peer_crt" {
content = "${tls_locally_signed_cert.peer.cert_pem}"
content = tls_locally_signed_cert.peer.cert_pem
filename = "${var.asset_dir}/tls/etcd/peer.crt"
}
# peer.key
resource "local_file" "etcd_peer_key" {
content = "${tls_private_key.peer.private_key_pem}"
content = tls_private_key.peer.private_key_pem
filename = "${var.asset_dir}/tls/etcd/peer.key"
}
@@ -72,8 +72,8 @@ resource "tls_private_key" "etcd-ca" {
}
resource "tls_self_signed_cert" "etcd-ca" {
key_algorithm = "${tls_private_key.etcd-ca.algorithm}"
private_key_pem = "${tls_private_key.etcd-ca.private_key_pem}"
key_algorithm = tls_private_key.etcd-ca.algorithm
private_key_pem = tls_private_key.etcd-ca.private_key_pem
subject {
common_name = "etcd-ca"
@@ -98,8 +98,8 @@ resource "tls_private_key" "client" {
}
resource "tls_cert_request" "client" {
key_algorithm = "${tls_private_key.client.algorithm}"
private_key_pem = "${tls_private_key.client.private_key_pem}"
key_algorithm = tls_private_key.client.algorithm
private_key_pem = tls_private_key.client.private_key_pem
subject {
common_name = "etcd-client"
@@ -110,19 +110,15 @@ resource "tls_cert_request" "client" {
"127.0.0.1",
]
dns_names = ["${concat(
var.etcd_servers,
list(
"localhost",
))}"]
dns_names = concat(var.etcd_servers, ["localhost"])
}
resource "tls_locally_signed_cert" "client" {
cert_request_pem = "${tls_cert_request.client.cert_request_pem}"
cert_request_pem = tls_cert_request.client.cert_request_pem
ca_key_algorithm = "${join(" ", tls_self_signed_cert.etcd-ca.*.key_algorithm)}"
ca_private_key_pem = "${join(" ", tls_private_key.etcd-ca.*.private_key_pem)}"
ca_cert_pem = "${join(" ", tls_self_signed_cert.etcd-ca.*.cert_pem)}"
ca_key_algorithm = tls_self_signed_cert.etcd-ca.key_algorithm
ca_private_key_pem = tls_private_key.etcd-ca.private_key_pem
ca_cert_pem = tls_self_signed_cert.etcd-ca.cert_pem
validity_period_hours = 8760
@@ -140,8 +136,8 @@ resource "tls_private_key" "server" {
}
resource "tls_cert_request" "server" {
key_algorithm = "${tls_private_key.server.algorithm}"
private_key_pem = "${tls_private_key.server.private_key_pem}"
key_algorithm = tls_private_key.server.algorithm
private_key_pem = tls_private_key.server.private_key_pem
subject {
common_name = "etcd-server"
@@ -152,19 +148,15 @@ resource "tls_cert_request" "server" {
"127.0.0.1",
]
dns_names = ["${concat(
var.etcd_servers,
list(
"localhost",
))}"]
dns_names = concat(var.etcd_servers, ["localhost"])
}
resource "tls_locally_signed_cert" "server" {
cert_request_pem = "${tls_cert_request.server.cert_request_pem}"
cert_request_pem = tls_cert_request.server.cert_request_pem
ca_key_algorithm = "${join(" ", tls_self_signed_cert.etcd-ca.*.key_algorithm)}"
ca_private_key_pem = "${join(" ", tls_private_key.etcd-ca.*.private_key_pem)}"
ca_cert_pem = "${join(" ", tls_self_signed_cert.etcd-ca.*.cert_pem)}"
ca_key_algorithm = tls_self_signed_cert.etcd-ca.key_algorithm
ca_private_key_pem = tls_private_key.etcd-ca.private_key_pem
ca_cert_pem = tls_self_signed_cert.etcd-ca.cert_pem
validity_period_hours = 8760
@@ -182,23 +174,23 @@ resource "tls_private_key" "peer" {
}
resource "tls_cert_request" "peer" {
key_algorithm = "${tls_private_key.peer.algorithm}"
private_key_pem = "${tls_private_key.peer.private_key_pem}"
key_algorithm = tls_private_key.peer.algorithm
private_key_pem = tls_private_key.peer.private_key_pem
subject {
common_name = "etcd-peer"
organization = "etcd"
}
dns_names = ["${var.etcd_servers}"]
dns_names = var.etcd_servers
}
resource "tls_locally_signed_cert" "peer" {
cert_request_pem = "${tls_cert_request.peer.cert_request_pem}"
cert_request_pem = tls_cert_request.peer.cert_request_pem
ca_key_algorithm = "${join(" ", tls_self_signed_cert.etcd-ca.*.key_algorithm)}"
ca_private_key_pem = "${join(" ", tls_private_key.etcd-ca.*.private_key_pem)}"
ca_cert_pem = "${join(" ", tls_self_signed_cert.etcd-ca.*.cert_pem)}"
ca_key_algorithm = tls_self_signed_cert.etcd-ca.key_algorithm
ca_private_key_pem = tls_private_key.etcd-ca.private_key_pem
ca_cert_pem = tls_self_signed_cert.etcd-ca.cert_pem
validity_period_hours = 8760
@@ -209,3 +201,4 @@ resource "tls_locally_signed_cert" "peer" {
"client_auth",
]
}

@@ -6,12 +6,12 @@ resource "tls_private_key" "kube-ca" {
}
resource "tls_self_signed_cert" "kube-ca" {
key_algorithm = "${tls_private_key.kube-ca.algorithm}"
private_key_pem = "${tls_private_key.kube-ca.private_key_pem}"
key_algorithm = tls_private_key.kube-ca.algorithm
private_key_pem = tls_private_key.kube-ca.private_key_pem
subject {
common_name = "kubernetes-ca"
organization = "bootkube"
organization = "typhoon"
}
is_ca_certificate = true
@@ -25,12 +25,12 @@ resource "tls_self_signed_cert" "kube-ca" {
}
resource "local_file" "kube-ca-key" {
content = "${tls_private_key.kube-ca.private_key_pem}"
content = tls_private_key.kube-ca.private_key_pem
filename = "${var.asset_dir}/tls/ca.key"
}
resource "local_file" "kube-ca-crt" {
content = "${tls_self_signed_cert.kube-ca.cert_pem}"
content = tls_self_signed_cert.kube-ca.cert_pem
filename = "${var.asset_dir}/tls/ca.crt"
}
@@ -42,33 +42,33 @@ resource "tls_private_key" "apiserver" {
 }

 resource "tls_cert_request" "apiserver" {
-  key_algorithm   = "${tls_private_key.apiserver.algorithm}"
-  private_key_pem = "${tls_private_key.apiserver.private_key_pem}"
+  key_algorithm   = tls_private_key.apiserver.algorithm
+  private_key_pem = tls_private_key.apiserver.private_key_pem

   subject {
     common_name  = "kube-apiserver"
     organization = "system:masters"
   }

-  dns_names = [
-    "${var.api_servers}",
+  dns_names = flatten([
+    var.api_servers,
     "kubernetes",
     "kubernetes.default",
     "kubernetes.default.svc",
     "kubernetes.default.svc.${var.cluster_domain_suffix}",
-  ]
+  ])

   ip_addresses = [
-    "${cidrhost(var.service_cidr, 1)}",
+    cidrhost(var.service_cidr, 1),
   ]
 }

 resource "tls_locally_signed_cert" "apiserver" {
-  cert_request_pem = "${tls_cert_request.apiserver.cert_request_pem}"
+  cert_request_pem = tls_cert_request.apiserver.cert_request_pem

-  ca_key_algorithm   = "${tls_self_signed_cert.kube-ca.key_algorithm}"
-  ca_private_key_pem = "${tls_private_key.kube-ca.private_key_pem}"
-  ca_cert_pem        = "${tls_self_signed_cert.kube-ca.cert_pem}"
+  ca_key_algorithm   = tls_self_signed_cert.kube-ca.key_algorithm
+  ca_private_key_pem = tls_private_key.kube-ca.private_key_pem
+  ca_cert_pem        = tls_self_signed_cert.kube-ca.cert_pem

   validity_period_hours = 8760
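
Two v0.12 functions do the work in this hunk: `flatten()` collapses nested lists so the `var.api_servers` list can sit alongside literal SAN strings, and `cidrhost()` computes a given host address within a CIDR range. A sketch using the module's default `service_cidr` of "10.3.0.0/24" (the API server name is hypothetical):

locals {
  # flatten() keeps non-list elements and splices list elements in place:
  # => ["k8s.example.com", "kubernetes", "kubernetes.default"]
  sans = flatten([["k8s.example.com"], "kubernetes", "kubernetes.default"])

  # cidrhost() returns host number n within the range; per the
  # service_cidr description, the 1st IP is reserved for kube-apiserver:
  # cidrhost("10.3.0.0/24", 1) => "10.3.0.1"
  apiserver_ip = cidrhost("10.3.0.0/24", 1)
}
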
@@ -81,12 +81,12 @@ resource "tls_locally_signed_cert" "apiserver" {
 }

 resource "local_file" "apiserver-key" {
-  content  = "${tls_private_key.apiserver.private_key_pem}"
+  content  = tls_private_key.apiserver.private_key_pem
   filename = "${var.asset_dir}/tls/apiserver.key"
 }

 resource "local_file" "apiserver-crt" {
-  content  = "${tls_locally_signed_cert.apiserver.cert_pem}"
+  content  = tls_locally_signed_cert.apiserver.cert_pem
   filename = "${var.asset_dir}/tls/apiserver.crt"
 }
@@ -98,8 +98,8 @@ resource "tls_private_key" "admin" {
 }

 resource "tls_cert_request" "admin" {
-  key_algorithm   = "${tls_private_key.admin.algorithm}"
-  private_key_pem = "${tls_private_key.admin.private_key_pem}"
+  key_algorithm   = tls_private_key.admin.algorithm
+  private_key_pem = tls_private_key.admin.private_key_pem

   subject {
     common_name = "kubernetes-admin"
@@ -108,11 +108,11 @@ resource "tls_cert_request" "admin" {
 }

 resource "tls_locally_signed_cert" "admin" {
-  cert_request_pem = "${tls_cert_request.admin.cert_request_pem}"
+  cert_request_pem = tls_cert_request.admin.cert_request_pem

-  ca_key_algorithm   = "${tls_self_signed_cert.kube-ca.key_algorithm}"
-  ca_private_key_pem = "${tls_private_key.kube-ca.private_key_pem}"
-  ca_cert_pem        = "${tls_self_signed_cert.kube-ca.cert_pem}"
+  ca_key_algorithm   = tls_self_signed_cert.kube-ca.key_algorithm
+  ca_private_key_pem = tls_private_key.kube-ca.private_key_pem
+  ca_cert_pem        = tls_self_signed_cert.kube-ca.cert_pem

   validity_period_hours = 8760
@@ -124,12 +124,12 @@ resource "tls_locally_signed_cert" "admin" {
 }

 resource "local_file" "admin-key" {
-  content  = "${tls_private_key.admin.private_key_pem}"
+  content  = tls_private_key.admin.private_key_pem
   filename = "${var.asset_dir}/tls/admin.key"
 }

 resource "local_file" "admin-crt" {
-  content  = "${tls_locally_signed_cert.admin.cert_pem}"
+  content  = tls_locally_signed_cert.admin.cert_pem
   filename = "${var.asset_dir}/tls/admin.crt"
 }
@@ -141,12 +141,12 @@ resource "tls_private_key" "service-account" {
 }

 resource "local_file" "service-account-key" {
-  content  = "${tls_private_key.service-account.private_key_pem}"
+  content  = tls_private_key.service-account.private_key_pem
   filename = "${var.asset_dir}/tls/service-account.key"
 }

 resource "local_file" "service-account-crt" {
-  content  = "${tls_private_key.service-account.public_key_pem}"
+  content  = tls_private_key.service-account.public_key_pem
   filename = "${var.asset_dir}/tls/service-account.pub"
 }
@@ -158,8 +158,8 @@ resource "tls_private_key" "kubelet" {
 }

 resource "tls_cert_request" "kubelet" {
-  key_algorithm   = "${tls_private_key.kubelet.algorithm}"
-  private_key_pem = "${tls_private_key.kubelet.private_key_pem}"
+  key_algorithm   = tls_private_key.kubelet.algorithm
+  private_key_pem = tls_private_key.kubelet.private_key_pem

   subject {
     common_name = "kubelet"
@@ -168,11 +168,11 @@ resource "tls_cert_request" "kubelet" {
 }

 resource "tls_locally_signed_cert" "kubelet" {
-  cert_request_pem = "${tls_cert_request.kubelet.cert_request_pem}"
+  cert_request_pem = tls_cert_request.kubelet.cert_request_pem

-  ca_key_algorithm   = "${tls_self_signed_cert.kube-ca.key_algorithm}"
-  ca_private_key_pem = "${tls_private_key.kube-ca.private_key_pem}"
-  ca_cert_pem        = "${tls_self_signed_cert.kube-ca.cert_pem}"
+  ca_key_algorithm   = tls_self_signed_cert.kube-ca.key_algorithm
+  ca_private_key_pem = tls_private_key.kube-ca.private_key_pem
+  ca_cert_pem        = tls_self_signed_cert.kube-ca.cert_pem

   validity_period_hours = 8760
@@ -185,11 +185,12 @@ resource "tls_locally_signed_cert" "kubelet" {
 }

 resource "local_file" "kubelet-key" {
-  content  = "${tls_private_key.kubelet.private_key_pem}"
+  content  = tls_private_key.kubelet.private_key_pem
   filename = "${var.asset_dir}/tls/kubelet.key"
 }

 resource "local_file" "kubelet-crt" {
-  content  = "${tls_locally_signed_cert.kubelet.cert_pem}"
+  content  = tls_locally_signed_cert.kubelet.cert_pem
   filename = "${var.asset_dir}/tls/kubelet.crt"
 }
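
Only the `content` expressions changed in these `local_file` resources; the `filename` values keep `"${var.asset_dir}/..."` because interpolation inside a larger quoted string remains valid v0.12 syntax (it is the lone `"${...}"` wrapper around a single reference that becomes redundant). A generic sketch with a hypothetical path:

resource "local_file" "example" {
  content  = tls_private_key.kubelet.private_key_pem
  filename = "/tmp/assets/tls/example.key" # hypothetical asset path
}
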


@@ -1,113 +1,114 @@
 variable "cluster_name" {
+  type        = string
   description = "Cluster name"
-  type        = "string"
 }

 variable "api_servers" {
+  type        = list(string)
   description = "List of URLs used to reach kube-apiserver"
-  type        = "list"
 }

 variable "etcd_servers" {
+  type        = list(string)
   description = "List of URLs used to reach etcd servers."
-  type        = "list"
 }

 variable "asset_dir" {
-  description = "Path to a directory where generated assets should be placed (contains secrets)"
-  type        = "string"
+  type        = string
+  description = "Absolute path to a directory where generated assets should be placed (contains secrets)"
 }

 variable "cloud_provider" {
+  type        = string
   description = "The provider for cloud services (empty string for no provider)"
-  type        = "string"
   default     = ""
 }

 variable "networking" {
+  type        = string
   description = "Choice of networking provider (flannel or calico or kube-router)"
-  type        = "string"
   default     = "flannel"
 }

 variable "network_mtu" {
+  type        = number
   description = "CNI interface MTU (only applies to calico and kube-router)"
-  type        = "string"
-  default     = "1500"
+  default     = 1500
 }

 variable "network_encapsulation" {
+  type        = string
   description = "Network encapsulation mode either ipip or vxlan (only applies to calico)"
-  type        = "string"
   default     = "ipip"
 }

 variable "network_ip_autodetection_method" {
+  type        = string
   description = "Method to autodetect the host IPv4 address (only applies to calico)"
-  type        = "string"
   default     = "first-found"
 }

 variable "pod_cidr" {
+  type        = string
   description = "CIDR IP range to assign Kubernetes pods"
-  type        = "string"
   default     = "10.2.0.0/16"
 }

 variable "service_cidr" {
+  type        = string
   description = <<EOD
 CIDR IP range to assign Kubernetes services.
 The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
 EOD
-  type        = "string"
   default     = "10.3.0.0/24"
 }

-variable "cluster_domain_suffix" {
-  description = "Queries for domains with the suffix will be answered by kube-dns"
-  type        = "string"
-  default     = "cluster.local"
-}

 variable "container_images" {
+  type        = map(string)
   description = "Container images to use"
-  type        = "map"

   default = {
-    calico           = "quay.io/calico/node:v3.7.2"
-    calico_cni       = "quay.io/calico/cni:v3.7.2"
-    flannel          = "quay.io/coreos/flannel:v0.11.0-amd64"
-    flannel_cni      = "quay.io/coreos/flannel-cni:v0.3.0"
-    kube_router      = "cloudnativelabs/kube-router:v0.3.1"
-    hyperkube        = "k8s.gcr.io/hyperkube:v1.14.3"
-    coredns          = "k8s.gcr.io/coredns:1.5.0"
-    pod_checkpointer = "quay.io/coreos/pod-checkpointer:83e25e5968391b9eb342042c435d1b3eeddb2be1"
+    calico      = "quay.io/calico/node:v3.10.1"
+    calico_cni  = "quay.io/calico/cni:v3.10.1"
+    flannel     = "quay.io/coreos/flannel:v0.11.0-amd64"
+    flannel_cni = "quay.io/coreos/flannel-cni:v0.3.0"
+    kube_router = "cloudnativelabs/kube-router:v0.3.2"
+    hyperkube   = "k8s.gcr.io/hyperkube:v1.16.3"
+    coredns     = "k8s.gcr.io/coredns:1.6.5"
   }
 }

-variable "enable_reporting" {
-  type        = "string"
-  description = "Enable usage or analytics reporting to upstream component owners (Tigera: Calico)"
-  default     = "false"
-}

 variable "trusted_certs_dir" {
+  type        = string
   description = "Path to the directory on cluster nodes where trust TLS certs are kept"
-  type        = "string"
   default     = "/usr/share/ca-certificates"
 }

+variable "enable_reporting" {
+  type        = bool
+  description = "Enable usage or analytics reporting to upstream component owners (Tigera: Calico)"
+  default     = false
+}

 variable "enable_aggregation" {
+  type        = bool
   description = "Enable the Kubernetes Aggregation Layer (defaults to false, recommended)"
-  type        = "string"
-  default     = "false"
+  default     = false
 }

 # unofficial, temporary, may be removed without notice
-variable "apiserver_port" {
-  description = "kube-apiserver port"
-  type        = "string"
-  default     = "6443"
+variable "external_apiserver_port" {
+  type        = number
+  description = "External kube-apiserver port (e.g. 6443 to match internal kube-apiserver port)"
+  default     = 6443
 }

+variable "cluster_domain_suffix" {
+  type        = string
+  description = "Queries for domains with the suffix will be answered by kube-dns"
+  default     = "cluster.local"
+}
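
The through-line in this file: v0.12 replaces quoted type names ("string", "list", "map") with real type constraints (string, number, bool, list(string), map(string)), so mistyped module inputs fail at plan time instead of being silently coerced. A minimal sketch with a hypothetical variable, not one of this module's:

variable "worker_count" {
  type        = number
  description = "Number of worker nodes"
  default     = 1
}

# A caller passing worker_count = "two" now fails validation, where a
# v0.11 "string" type would have accepted it and failed later, if at all.
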

versions.tf (new file)

@@ -0,0 +1,10 @@
+# Terraform version and plugin versions
+terraform {
+  required_version = "~> 0.12.0"
+
+  required_providers {
+    local    = "~> 1.2"
+    template = "~> 2.1"
+    tls      = "~> 2.0"
+  }
+}
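
The `~>` operator is a pessimistic version constraint: the rightmost listed component may float upward. So `~> 0.12.0` accepts any v0.12.x release but not v0.13, and `~> 2.0` accepts 2.x but not 3.0. A consumer of the module could pin its own configuration the same way, e.g.:

terraform {
  # hypothetical consumer-side pin; any Terraform v0.12.x satisfies it
  required_version = "~> 0.12.0"
}
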