404 Commits

Author SHA1 Message Date
Dalton Hubble
f04e07c001 Update kube-apiserver runtime-config flags
* MutatingAdmissionPolicy is available as a beta and alpha API
2025-11-23 16:05:07 -08:00
dghubble-renovate[bot]
a589c32870 Bump actions/checkout action from v5 to v6 2025-11-21 09:57:50 -08:00
Dalton Hubble
3c8c071333 Update Kubernetes from v1.34.1 to v1.34.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.34.md
2025-11-21 09:30:14 -08:00
Dalton Hubble
a4e9ef0430 Update Kubernetes components from v1.33.1 to v1.34.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.34.md
2025-11-21 09:14:02 -08:00
dependabot[bot]
01667f6904 Bump actions/checkout from 4 to 5
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-24 21:31:06 -07:00
Dalton Hubble
c7e2a637d7 Rollback Cilium from v1.17.6 to v1.17.5
* Cilium v1.17.6 is broken, see https://github.com/cilium/cilium/issues/40571
2025-07-27 14:20:21 -07:00
Dalton Hubble
cd82a41654 Update Kubernetes from v1.33.2 to v1.33.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.33.md#v1333
2025-07-19 09:48:16 -07:00
Dalton Hubble
9af5837c35 Update Cilium and flannel container images 2025-06-29 17:30:21 -07:00
Dalton Hubble
36d543051b Update Kubernetes from v1.33.1 to v1.33.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.33.md#v1332
2025-06-29 17:20:32 -07:00
Dalton Hubble
2c7e627201 Update Kubernetes from v1.33.0 to v1.33.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.33.md#v1331
2025-05-24 20:22:11 -07:00
Dalton Hubble
18eb9cded5 Update Kubernetes from v1.32.3 to v1.33.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.33.md#v1330
2025-05-06 19:59:01 -07:00
Dalton Hubble
1e4b00eab9 Update Cilium and flannel container images
* Bump component images for those using the builtin bootstrap
2025-03-18 20:08:24 -07:00
Dalton Hubble
209e02b4f2 Update Kubernetes from v1.32.1 to v1.32.3
* Update Cilium from v1.16.5 to v1.17.1 as well
2025-03-12 21:06:46 -07:00
Dalton Hubble
c50071487c Add service_account_issuer variable for kube-apiserver
* Allow the service account token issuer to be adjusted or served
from a public bucket or static cache
* Output the public key used to sign service account tokens so that
it can be used to compute JWKS (JSON Web Key Sets) if desired

Docs: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery
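A minimal sketch of wiring this up from a consuming configuration (the issuer URL is a placeholder and the output name shown here is illustrative, not confirmed by the commit; other inputs are typical required values):

```
module "bootstrap" {
  source = "github.com/poseidon/terraform-render-bootstrap"

  cluster_name = "example"
  api_servers  = ["k8s.example.com"]
  etcd_servers = ["etcd0.example.com"]

  # Serve the issuer (and its JWKS) from a public bucket or static cache
  service_account_issuer = "https://issuer.example.com"
}

# Public key used to sign service account tokens, usable to compute a JWKS
output "service_account_public_key" {
  value = module.bootstrap.service_account_public_key
}
```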
2025-02-07 10:58:54 -08:00
Dalton Hubble
997f6012b5 Update Kubernetes from v1.32.0 to v1.32.1
* Enable the Kubernetes MutatingAdmissionPolicy alpha via feature gate
* Update CoreDNS from v1.11.4 to v1.12.0
* Update flannel from v0.26.2 to v0.26.3

Docs: https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/
2025-01-20 15:23:22 -08:00
Dalton Hubble
3edb0ae646 Change flannel port from 4789 to 8472
* flannel and Cilium default to UDP 8472 for VXLAN traffic to
avoid conflicts with other VXLAN usage (e.g. Open vSwitch)
* Aligning flannel and Cilium to use the same vxlan port makes
firewall rules or security policies simpler across clouds
2024-12-30 11:59:36 -08:00
Dalton Hubble
33f8d2083c Remove calico_manifests from assets_dist outputs 2024-12-28 20:37:28 -08:00
Dalton Hubble
79b8ae1280 Remove Calico and associated variables
* Drop support for Calico CNI
2024-12-28 20:34:29 -08:00
Dalton Hubble
0d3f17393e Change the default Pod CIDR to 10.20.0.0/14
* Change the default Pod CIDR from 10.2.0.0/16 to 10.20.0.0/14
(10.20.0.0 - 10.23.255.255) to support 1024 nodes by default
* Most CNI providers divide the Pod CIDR so that each node has
a /24 to allocate to local pods (256). The previous `10.2.0.0/16`
default only fits 256 /24's so 256 nodes were supported without
customizing the pod_cidr
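As a worked check of the sizing, a small sketch using Terraform's CIDR functions (these locals are illustrative, not part of the module):

```
locals {
  pod_cidr = "10.20.0.0/14"

  # Each node typically gets a /24, so capacity is the number of /24s
  # that fit: 2^(24-14) = 1024 (versus 2^(24-16) = 256 for the old /16)
  node_capacity = pow(2, 24 - tonumber(split("/", local.pod_cidr)[1]))

  # Example: the first node's /24 carved from the Pod CIDR
  first_node_subnet = cidrsubnet(local.pod_cidr, 24 - 14, 0) # "10.20.0.0/24"
}
```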
2024-12-23 10:16:42 -08:00
Dalton Hubble
c775b4de9a Update Kubernetes from v1.31.4 to v1.32.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.32.md#v1320
2024-12-20 16:58:35 -08:00
Dalton Hubble
fbe7fa0a57 Update Kubernetes from v1.31.3 to v1.31.4
* Update flannel from v0.26.0 to v0.26.2
* Update Cilium from v1.16.4 to v1.16.5
2024-12-20 15:06:55 -08:00
Dalton Hubble
e6a1c7bccf Update Kubernetes from v1.31.2 to v1.31.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#v1313
2024-11-24 08:40:22 -08:00
Dalton Hubble
95203db11c Update Kubernetes from v1.31.1 to v1.31.2
* Update Cilium from v1.16.1 to v1.16.3
* Update flannel from v0.25.6 to v0.26.0
2024-10-26 08:30:38 -07:00
Dalton Hubble
1cfc654494 Update Kubernetes from v1.31.0 to v1.31.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#v1311
2024-09-20 14:29:33 -07:00
Dalton Hubble
1ddecb1cef Change Cilium configuration to use kube-proxy replacement
* Skip creating the kube-proxy DaemonSet when Cilium is chosen
2024-08-23 07:15:18 -07:00
Dalton Hubble
0b78c87997 Fix flannel-cni container image version to v0.4.2
* This was mistakenly bumped to v0.4.4 which doesn't exist

Rel: https://github.com/poseidon/terraform-render-bootstrap/pull/379
2024-08-22 19:19:37 -07:00
Dalton Hubble
7e8551750c Update Kubernetes from v1.30.4 to v1.31.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#v1310
2024-08-17 08:02:30 -07:00
Dalton Hubble
8b6a3a4c0d Update Kubernetes from v1.30.3 to v1.30.4
* Update Cilium from v1.16.0 to v1.16.1
2024-08-16 08:21:49 -07:00
Dalton Hubble
66d8fe3a4d Update CoreDNS and Cilium components
* Update CoreDNS from v1.11.1 to v1.11.3
* Update Cilium from v1.15.7 to v1.16.0
2024-08-04 21:03:23 -07:00
Dalton Hubble
45b6b7e877 Rename context in kubeconfig-admin
* Use the cluster_name as the kubeconfig context, cluster,
and user. Drop the trailing -context suffix
2024-08-04 20:43:03 -07:00
Dalton Hubble
1609060f4f Update Kubernetes from v1.30.2 to v1.30.3
* Update builtin Cilium manifests from v1.15.6 to v1.15.7
* Update builtin flannel manifests from v0.25.4 to v0.25.5
2024-07-20 10:59:20 -07:00
Dalton Hubble
886f501bf7 Update Kubernetes from v1.30.1 to v1.30.2
* Update CoreDNS from v1.9.4 to v1.11.1
* Update Cilium from v1.15.5 to v1.15.6
* Update flannel from v0.25.1 to v0.25.4
2024-06-17 08:11:41 -07:00
Dalton Hubble
e1b1e0c75e Update Cilium from v1.15.4 to v1.15.5
* https://github.com/cilium/cilium/releases/tag/v1.15.5
2024-05-19 16:36:55 -07:00
Dalton Hubble
a54fe54d98 Extend components variable with flannel, calico, and cilium
* By default the `networking` CNI provider is pre-installed,
but the components variable provides an extensible mechanism
to skip installing these components
* Validate that networking can only be set to one of flannel,
calico, or cilium
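A sketch of the kind of validation described, assuming a plain string variable (the module's actual block may differ):

```
variable "networking" {
  type        = string
  description = "CNI provider to pre-install (flannel, calico, or cilium)"

  validation {
    condition     = contains(["flannel", "calico", "cilium"], var.networking)
    error_message = "networking must be one of flannel, calico, or cilium."
  }
}
```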
2024-05-18 14:56:44 -07:00
Dalton Hubble
452bcf379d Update Kubernetes from v1.30.0 to v1.30.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md#v1301
2024-05-15 21:56:50 -07:00
Dalton Hubble
990286021a Organize CoreDNS and kube-proxy manifests so they're optional
* Add a `coredns` variable to configure the CoreDNS manifests,
with an `enable` field to determine whether CoreDNS manifests
are applied to the cluster during provisioning (default true)
* Add a `kube-proxy` variable to configure kube-proxy manifests,
with an `enable` field to determine whether the kube-proxy
Daemonset is applied to the cluster during provisioning (default
true)
* These options allow provisioning clusters without CoreDNS
or kube-proxy, so these components can be customized or managed
through separate plan/apply processes or automation
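A hedged example of a caller skipping the built-in manifests (the object shapes and other inputs are assumptions based on the commit's description):

```
module "bootstrap" {
  source = "github.com/poseidon/terraform-render-bootstrap"

  cluster_name = "example"
  api_servers  = ["k8s.example.com"]
  etcd_servers = ["etcd0.example.com"]

  # Skip applying CoreDNS and kube-proxy at provisioning time so they can be
  # customized or managed via a separate plan/apply process
  coredns = {
    enable = false
  }
  kube_proxy = {
    enable = false
  }
}
```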
2024-05-12 18:05:55 -07:00
Dalton Hubble
baf406f261 Update Cilium and flannel container images
* Update Cilium from v1.15.3 to v1.15.4
* Update flannel from v0.24.4 to v0.25.1
2024-05-12 08:26:50 -07:00
dghubble-renovate[bot]
2bb4ec5bfd Bump provider tls from 3.4.0 to v4 2024-05-04 09:00:14 -07:00
Dalton Hubble
d233e90754 Update Kubernetes from v1.29.3 to v1.30.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md#v1300
2024-04-23 20:43:33 -07:00
Dalton Hubble
959b9ea04d Update Calico and Cilium container image versions
* Update Cilium from v1.15.2 to v1.15.3
* Update Calico from v3.27.2 to v3.27.3
2024-04-03 22:43:55 -07:00
Dalton Hubble
9145a587b3 Update Kubernetes from v1.29.2 to v1.29.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#v1293
2024-03-23 00:45:02 -07:00
Dalton Hubble
5dfa185b9d Update Cilium and flannel container image versions
* https://github.com/cilium/cilium/releases/tag/v1.15.2
* https://github.com/flannel-io/flannel/releases/tag/v0.24.4
2024-03-22 11:12:32 -07:00
Dalton Hubble
e9d52a997e Update Calico from v3.26.3 to v3.27.2
* Calico update fixes https://github.com/projectcalico/calico/issues/8372
2024-02-25 12:01:23 -08:00
Dalton Hubble
da65b4816d Update Cilium from v1.14.3 to v1.15.1
* https://github.com/cilium/cilium/releases/tag/v1.15.1
2024-02-23 22:46:20 -08:00
Dalton Hubble
2909ea9da3 Update flannel from v0.22.3 to v0.24.2
* https://github.com/flannel-io/flannel/releases/tag/v0.24.2
2024-02-18 16:13:19 -08:00
Dalton Hubble
763f56d0a5 Update Kubernetes from v1.29.1 to v1.29.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#v1292
2024-02-18 15:46:02 -08:00
Dalton Hubble
acc7460fcc Update Kubernetes from v1.29.0 to v1.29.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#v1291
2024-02-04 10:43:58 -08:00
Dalton Hubble
f0d22ec895 Update Kubernetes from v1.28.4 to v1.29.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#v1290
2023-12-22 09:01:31 -08:00
Dalton Hubble
a6e637d196 Update Kubernetes from v1.28.3 to v1.28.4 2023-11-21 06:11:30 -08:00
dependabot[bot]
521cf9604f Bump hashicorp/setup-terraform from 2 to 3
Bumps [hashicorp/setup-terraform](https://github.com/hashicorp/setup-terraform) from 2 to 3.
- [Release notes](https://github.com/hashicorp/setup-terraform/releases)
- [Changelog](https://github.com/hashicorp/setup-terraform/blob/main/CHANGELOG.md)
- [Commits](https://github.com/hashicorp/setup-terraform/compare/v2...v3)

---
updated-dependencies:
- dependency-name: hashicorp/setup-terraform
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-31 09:10:42 -07:00
Dalton Hubble
9a942ce016 Update flannel from v0.22.2 to v0.22.3
* https://github.com/flannel-io/flannel/releases/tag/v0.22.3
2023-10-29 22:11:26 -07:00
Dalton Hubble
d151ab77b7 Update Calico from v3.26.1 to v3.26.3
* https://github.com/projectcalico/calico/releases/tag/v3.26.3
2023-10-29 16:31:18 -07:00
Dalton Hubble
f911337cd8 Update Cilium from v1.14.2 to v1.14.3
* https://github.com/cilium/cilium/releases/tag/v1.14.3
2023-10-29 16:18:08 -07:00
Dalton Hubble
720adbeb43 Configure Cilium agents to connect to apiserver explicitly
* Cilium v1.14 seems to have problems reliably accessing the
apiserver via default in-cluster service discovery (relies on
kube-proxy instead of DNS) after some time
* Configure Cilium agents to use the DNS name resolving to the
cluster's load balanced apiserver and port. Regrettably, this
relies on external DNS rather than being self-contained, but it's
what Cilium pushes users towards
2023-10-29 16:08:21 -07:00
Dalton Hubble
ae571974b0 Update Kubernetes from v1.28.2 to v1.28.3
* https://github.com/poseidon/kubelet/pull/95
2023-10-22 18:38:28 -07:00
Dalton Hubble
19b59cc66f Update Cilium from v1.14.1 to v1.14.2
* https://github.com/cilium/cilium/releases/tag/v1.14.2
2023-09-16 17:07:19 +02:00
Dalton Hubble
e3ffe4a5d5 Bump Kubernetes images from v1.28.1 to v1.28.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md#v1282
2023-09-14 12:58:55 -07:00
dependabot[bot]
ebfd639ff8 Bump actions/checkout from 3 to 4
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-04 13:42:29 -07:00
Dalton Hubble
0065e511c5 Update Kubernetes from v1.28.0 to v1.28.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md#v1281
2023-08-26 11:34:33 -07:00
Dalton Hubble
251adf88d4 Emulate Cilium KubeProxyReplacement partial mode
* Cilium KubeProxyReplacement mode used to support a partial
option, but in v1.14 it became true or false
* Emulate the old partial mode by disabling KubeProxyReplacement
but turning on the individual features
* The alternative of enabling KubeProxyReplacement has ramifications
because Cilium then needs to be configured with the apiserver server
address, which creates a dependency on the cloud provider's DNS,
clashes with kube-proxy, and removing kube-proxy creates complications
for how node health is assessed. Removing kube-proxy is further
complicated by the fact it's still used by other supported CNIs, which
creates a tricky support matrix

Docs: https://docs.cilium.io/en/latest/network/kubernetes/kubeproxy-free/#kube-proxy-hybrid-modes
2023-08-26 10:45:23 -07:00
Dalton Hubble
a4fc73db7e Fix Cilium kube-proxy-replacement mode to true
* In Cilium v1.14, kube-proxy-replacement mode again changed
its valid values, this time from partial to true/false. The
value should be true for Cilium to support HostPort features
as expected

```
cilium status --verbose
Services:
  - ClusterIP:      Enabled
  - NodePort:       Enabled (Range: 30000-32767)
  - LoadBalancer:   Enabled
  - externalIPs:    Enabled
  - HostPort:       Enabled
```
2023-08-21 19:53:46 -07:00
Dalton Hubble
29e81aedd4 Update Cilium from v1.14.0 to v1.14.1
* Also bump flannel from v0.22.1 to v0.22.2
2023-08-20 16:03:43 -07:00
Dalton Hubble
a55741d51d Update Kubernetes from v1.27.4 to v1.28.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md#v1280
2023-08-20 14:57:39 -07:00
Dalton Hubble
35848a50c6 Upgrade Cilium from v1.13.4 to v1.14.0
* https://github.com/cilium/cilium/releases/tag/v1.14.0
2023-07-30 09:17:31 -07:00
Dalton Hubble
d4da2f99fb Update flannel from v0.22.0 to v0.22.1
* https://github.com/flannel-io/flannel/releases/tag/v0.22.1
2023-07-29 17:38:33 -07:00
Dalton Hubble
31a13c53af Update Kubernetes from v1.27.3 to v1.27.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#v1274
2023-07-21 07:58:14 -07:00
Dalton Hubble
162baaf5e1 Upgrade Calico from v3.25.1 to v3.26.1
* Calico made some changes to how the install-cni init container
runs and to RBAC, and adds a new bgpfilters CRD
* https://github.com/projectcalico/calico/releases/tag/v3.26.1
2023-06-19 12:25:36 -07:00
Dalton Hubble
e727c63cc2 Update Kubernetes from v1.27.2 to v1.27.3
* Update Cilium from v1.13.3 to v1.13.4

Rel: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#v1273
2023-06-16 08:25:32 -07:00
Dalton Hubble
8c3ca3e935 Update flannel from v0.21.2 to v0.22.0
* https://github.com/flannel-io/flannel/releases/tag/v0.22.0
2023-06-11 19:56:57 -07:00
Dalton Hubble
9c4134240f Update Cilium from v1.13.2 to v1.13.3
* https://github.com/cilium/cilium/releases/tag/v1.13.3
2023-06-11 19:53:33 -07:00
Tobias Jungel
7c559e15e2 Update cilium masquerade flag
14ced84f7e
introduced `enable-ipv6-masquerade` and `enable-ipv4-masquerade`. This
updates the Cilium ConfigMap to align with the expected flags.

enable-ipv4-masquerade is enabled and enable-ipv6-masquerade is
disabled.
2023-05-23 17:47:23 -07:00
Dalton Hubble
9932d03696 Update Kubernetes from v1.27.1 to v1.27.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#v1272
2023-05-21 14:00:23 -07:00
Dalton Hubble
39d7b3eff9 Update Cilium from v1.13.1 to v1.13.2
* https://github.com/cilium/cilium/releases/tag/v1.13.2
2023-04-20 08:41:17 -07:00
Dalton Hubble
4d3eeadb35 Update Kubernetes from v1.27.0 to v1.27.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#v1271
2023-04-15 23:03:39 -07:00
Dalton Hubble
c0a4082796 Update Calico from v3.25.0 to v3.25.1
* https://github.com/projectcalico/calico/releases/tag/v3.25.1
2023-04-15 22:58:00 -07:00
Dalton Hubble
54ebf13564 Update Kubernetes from v1.26.3 to v1.27.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#v1270
2023-04-13 20:00:20 -07:00
Dalton Hubble
0a5d722de6 Change Cilium to use an init container to install CNI plugins
* Starting in Cilium v1.13.1, the cilium-cni plugin is installed
via an init container rather than by the Cilium agent container

Rel: https://github.com/cilium/cilium/issues/24457
2023-03-29 10:03:08 -07:00
Dalton Hubble
44315b8c02 Update Kubernetes from v1.26.2 to v1.26.3
* Update Cilium from v1.13.0 to v1.13.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1263
2023-03-21 18:15:16 -07:00
Dalton Hubble
5fe3380d5f Update Cilium from v1.12.6 to v1.13.0
* https://github.com/cilium/cilium/releases/tag/v1.13.0
* Change kube-proxy-replacement from probe (deprecated) to
partial and disable nodeport health checks as recommended
* Add ciliumloadbalancerippools to ClusterRole
* Enable BPF socket load balancing from host namespace
2023-03-14 11:13:23 -07:00
Dalton Hubble
607a05692b Update Kubernetes from v1.26.1 to v1.26.2
* Update Kubernetes from v1.26.1 to v1.26.2
* Update flannel from v0.21.1 to v0.21.2
* Fix flannel container image location to docker.io/flannel/flannel

Rel:

* https://github.com/kubernetes/kubernetes/releases/tag/v1.26.2
* https://github.com/flannel-io/flannel/releases/tag/v0.21.2
2023-03-01 17:09:30 -08:00
Dalton Hubble
7f9853fca3 Update flannel from v0.20.2 to v0.21.1
* https://github.com/flannel-io/flannel/releases/tag/v0.21.1
2023-02-09 09:55:02 -08:00
Dalton Hubble
9a2822282b Update Cilium from v1.12.5 to v1.12.6
* https://github.com/cilium/cilium/releases/tag/v1.12.6
2023-02-09 09:39:00 -08:00
Dalton Hubble
4621c6b256 Update Calico from v3.24.5 to v3.25.0
* https://github.com/projectcalico/calico/blob/v3.25.0/calico/_includes/release-notes/v3.25.0-release-notes.md
2023-01-23 09:22:27 -08:00
Dalton Hubble
adcc942508 Update Kubernetes from v1.26.0 to v1.26.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1261
* Update CoreDNS from v1.9.3 to v1.9.4

Rel: https://github.com/coredns/coredns/releases/tag/v1.9.4
2023-01-19 08:12:12 -08:00
Dalton Hubble
4476e946f6 Update Cilium from v1.12.4 to v1.12.5
* https://github.com/cilium/cilium/releases/tag/v1.12.5
2022-12-21 08:06:13 -08:00
Dalton Hubble
8b17f2e85e Update Kubernetes from v1.26.0-rc.1 to v1.26.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1260
2022-12-08 08:41:47 -08:00
Dalton Hubble
f863f7a551 Update Kubernetes from v1.26.0-rc.0 to v1.26.0-rc.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1260-rc1
2022-12-05 08:55:29 -08:00
Dalton Hubble
616069203e Update Kubernetes from v1.25.4 to v1.26.0-rc.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1260-rc0
2022-11-30 08:45:09 -08:00
Dalton Hubble
88d0ea5a87 Fix flannel container image registry location
* flannel moved their container image to docker.io/flannelcni/flannel
2022-11-23 16:10:00 -08:00
Dalton Hubble
7350fd24fc Update flannel from v0.15.1 to v0.20.1
* https://github.com/flannel-io/flannel/releases/tag/v0.20.1
2022-11-23 10:54:08 -08:00
Dalton Hubble
8f6b55859b Update Cilium from v1.12.3 to v1.12.4
* https://github.com/cilium/cilium/releases/tag/v1.12.4
2022-11-23 10:52:40 -08:00
Dalton Hubble
dc652cf469 Add Mastodon badge alongside Twitter 2022-11-10 09:43:21 -08:00
Dalton Hubble
9b56c710b3 Update Kubernetes from v1.25.3 to v1.25.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1254
2022-11-10 09:34:19 -08:00
Dalton Hubble
e57a66623b Update Calico from v3.24.3 to v3.24.5
* https://github.com/projectcalico/calico/releases/tag/v3.24.5
2022-11-10 09:31:58 -08:00
Dalton Hubble
8fb30b7732 Update Calico from v3.24.2 to v3.24.3
* https://github.com/projectcalico/calico/releases/tag/v3.24.3
2022-10-23 21:53:27 -07:00
Dalton Hubble
5b2fbbef84 Allow Kubelet kubeconfig to drain nodes
* Allow the Kubelet kubeconfig to get/list workloads and
evict pods to perform drain operations, via the kubelet-delete
ClusterRole bound to the system:nodes group
* Previously, the ClusterRole only allowed node deletion
2022-10-23 21:49:38 -07:00
Dalton Hubble
3db4055ccf Update Calico from v3.24.1 to v3.24.2
* https://github.com/projectcalico/calico/releases/tag/v3.24.2
2022-10-20 09:25:53 -07:00
Dalton Hubble
946d81be09 Update Cilium from v1.12.2 to v1.12.3
* https://github.com/cilium/cilium/releases/tag/v1.12.3
2022-10-17 17:16:56 -07:00
Dalton Hubble
bf465a8525 Update Kubernetes from v1.25.2 to v1.25.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1253
2022-10-13 20:53:23 -07:00
Dalton Hubble
457894c1a4 Update Kubernetes from v1.25.1 to v1.25.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1252
2022-09-22 08:11:15 -07:00
Dalton Hubble
c5928dbe5e Update Cilium from v1.12.1 to v1.12.2
* https://github.com/cilium/cilium/releases/tag/v1.12.2
2022-09-15 08:26:23 -07:00
Dalton Hubble
f3220d34cc Update Kubernetes from v1.25.0 to v1.25.1
* https://github.com/kubernetes/kubernetes/releases/tag/v1.25.1
2022-09-15 08:15:51 -07:00
Dalton Hubble
50d43778d0 Update Calico from v3.23.3 to v3.24.1
* https://github.com/projectcalico/calico/releases/tag/v3.24.1
2022-09-14 08:00:17 -07:00
Dalton Hubble
3fa08c542c Update Kubernetes from v1.24.4 to v1.25.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md
2022-08-23 17:30:42 -07:00
Dalton Hubble
31bbef9024 Update Kubernetes from v1.24.3 to v1.24.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1244
2022-08-17 20:06:24 -07:00
Dalton Hubble
c58cbec52b Update Cilium from v1.12.0 to v1.12.1
* https://github.com/cilium/cilium/releases/tag/v1.12.1
2022-08-17 08:22:48 -07:00
Dalton Hubble
bf8bdd4fb5 Switch upstream Kubernetes registry from k8s.gcr.io to registry.k8s.io
* Upstream would like to switch to registry.k8s.io to reduce costs

Rel: https://groups.google.com/g/kubernetes-sig-testing/c/U7b_im9vRrM
2022-08-13 15:45:13 -07:00
Dalton Hubble
0d981c24cd Upgrade CoreDNS from v1.8.5 to v1.9.3
* https://coredns.io/2022/05/27/coredns-1.9.3-release/
* https://coredns.io/2022/05/13/coredns-1.9.2-release/
* https://coredns.io/2022/03/09/coredns-1.9.1-release/
* https://coredns.io/2022/02/01/coredns-1.9.0-release/
* https://coredns.io/2021/12/09/coredns-1.8.7-release/
* https://coredns.io/2021/10/07/coredns-1.8.6-release/
2022-08-13 15:37:30 -07:00
Dalton Hubble
6d92cab7a0 Update Cilium from v1.11.7 to v1.12.0
* https://github.com/cilium/cilium/releases/tag/v1.12.0
2022-08-08 19:56:05 -07:00
Dalton Hubble
13e40a342b Add Terraform fmt GitHub Action and dependabot config
* Run terraform fmt on pull requests and merge to main
* Show workflow status in README
* Add dependabot.yaml to keep GitHub Actions updated
2022-08-01 09:45:38 -07:00
Dalton Hubble
b7136c94c2 Add badges to README 2022-07-31 17:43:36 -07:00
Dalton Hubble
97fe45c93e Update Calico from v3.23.1 to v3.23.3
* https://github.com/projectcalico/calico/releases/tag/v3.23.3
2022-07-30 18:10:02 -07:00
Dalton Hubble
77981d7fd4 Update Cilium from v1.11.6 to v1.11.7
* https://github.com/cilium/cilium/releases/tag/v1.11.7
2022-07-19 09:04:58 -07:00
Dalton Hubble
19a19c0e7a Update Kubernetes from v1.24.2 to v1.24.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1243
2022-07-13 20:57:47 -07:00
Dalton Hubble
178664d84e Update Calico from v3.22.2 to v3.23.1
* https://github.com/projectcalico/calico/releases/tag/v3.23.1
2022-06-18 18:49:58 -07:00
Dalton Hubble
dee92368af Update Cilium from v1.11.5 to v1.11.6
* https://github.com/cilium/cilium/releases/tag/v1.11.6
2022-06-18 18:42:44 -07:00
Dalton Hubble
70764c32c5 Update Kubernetes from v1.24.1 to v1.24.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1242
2022-06-18 18:27:38 -07:00
Dalton Hubble
f325be5041 Update Cilium from v1.11.4 to v1.11.5
* https://github.com/cilium/cilium/releases/tag/v1.11.5
2022-05-31 15:21:36 +01:00
Dalton Hubble
22ab988fdb Update Kubernetes from v1.24.0 to v1.24.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1241
2022-05-27 09:56:57 +01:00
Dalton Hubble
81e4c5b267 Update Kubernetes from v1.23.6 to v1.24.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1240
2022-05-03 07:38:42 -07:00
Dalton Hubble
7a18a221bb Remove unneeded use of key_algorithm and ca_key_algorithm
* Remove uses of `key_algorithm` on `tls_self_signed_cert` and
`tls_cert_request` resources. The field is deprecated and inferred
from the `private_key_pem`
* Remove uses of `ca_key_algorithm` on `tls_locally_signed_cert`
resources. The field is deprecated and inferred from the
`ca_private_key_pem`
* Require at least hashicorp/tls provider v3.2

Rel: https://github.com/hashicorp/terraform-provider-tls/blob/main/CHANGELOG.md#320-april-04-2022
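For reference, a generic locally-signed certificate with hashicorp/tls v3.2+ simply omits the deprecated field (resource names here are illustrative, not the module's exact ones):

```
resource "tls_locally_signed_cert" "admin" {
  cert_request_pem   = tls_cert_request.admin.cert_request_pem
  ca_cert_pem        = tls_self_signed_cert.kube_ca.cert_pem
  ca_private_key_pem = tls_private_key.kube_ca.private_key_pem
  # ca_key_algorithm is no longer set; it is inferred from ca_private_key_pem

  validity_period_hours = 8760

  allowed_uses = [
    "key_encipherment",
    "digital_signature",
    "client_auth",
  ]
}
```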
2022-04-20 19:45:27 -07:00
Dalton Hubble
3f21908175 Update Kubernetes from v1.23.5 to v1.23.6
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1236
2022-04-20 18:48:58 -07:00
James Harmison
5bbca44f66 Update cilium ds name and label to align with upstream 2022-04-20 18:47:59 -07:00
Dalton Hubble
031e9fdb6c Update Cilium and Calico CNI providers
* Update Cilium from v1.11.3 to v1.11.4
* Update Calico from v3.22.1 to v3.22.2
2022-04-19 08:25:54 -07:00
Dalton Hubble
ab5e18bba9 Update Cilium from v1.11.2 to v1.11.3
* https://github.com/cilium/cilium/releases/tag/v1.11.3
2022-04-01 16:40:17 -07:00
Dalton Hubble
e5bdb6f6c6 Update Kubernetes from v1.23.4 to v1.23.5
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1235
2022-03-16 20:57:32 -07:00
Dalton Hubble
fa4745d155 Update Calico from v3.21.2 to v3.22.1
* Calico aims to fix https://github.com/projectcalico/calico/issues/5011
2022-03-11 10:57:07 -08:00
Dalton Hubble
db159bbd99 Update Cilium from v1.11.1 to v1.11.2
* https://github.com/cilium/cilium/releases/tag/v1.11.2
2022-03-11 10:04:11 -08:00
Dalton Hubble
205e5f212b Update Kubernetes from v1.23.3 to v1.23.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1234
2022-02-17 08:48:14 -08:00
Dalton Hubble
26bea83b95 Update Kubernetes from v1.23.2 to v1.23.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1233
2022-01-27 09:21:43 -08:00
Dalton Hubble
f45deec67e Update Kubernetes from v1.23.1 to v1.23.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1232
2022-01-19 17:04:06 -08:00
Dalton Hubble
5b5f7a00fd Update Cilium from v1.11.0 to v1.11.1
* https://github.com/cilium/cilium/releases/tag/v1.11.1
2022-01-19 17:01:40 -08:00
Dalton Hubble
0d2135e687 Remove use of template provider
* Switch to using Terraform `templatefile` instead of the
`template` provider (i.e. `data.template_file`)
* Available since Terraform v0.12
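Roughly, the switch replaces a `data "template_file"` block with an inline call to the built-in `templatefile` function (the file path and variables below are illustrative):

```
locals {
  kube_apiserver_manifest = templatefile("${path.module}/resources/kube-apiserver.yaml", {
    etcd_servers = var.etcd_servers
    pod_cidr     = var.pod_cidr
  })
}
```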
2022-01-14 09:42:32 -08:00
Dalton Hubble
4dc0388149 Update Kubernetes from v1.23.0 to v1.23.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1231
2021-12-20 08:32:37 -08:00
Dalton Hubble
37f45cb28b Update Cilium from v1.10.5 to v1.11.0
* https://github.com/cilium/cilium/releases/tag/v1.11.0
2021-12-10 11:23:56 -08:00
Dalton Hubble
8add7022d1 Normalize CA certs mounts in static Pods and kube-proxy
* Mount both /etc/ssl/certs and /etc/pki into control plane static
pods and kube-proxy, rather than choosing one based on a variable
(set based on Flatcar Linux or Fedora CoreOS)
* Remove `trusted_certs_dir` variable
* Remove deprecated `--port` from `kube-scheduler` static Pod
2021-12-09 09:26:28 -08:00
Dalton Hubble
362158a6d6 Add missing caliconodestatuses CRD for Calico
* https://github.com/projectcalico/calico/pull/5012
2021-12-09 09:19:12 -08:00
Dalton Hubble
091ebeaed6 Update Kubernetes from v1.22.4 to v1.23.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1230
2021-12-09 09:16:52 -08:00
Dalton Hubble
cb1f4410ed Update minimum Terraform provider versions
* Update minimum required versions for `tls`, `template`,
and `random`. Older versions have some differing behaviors
(e.g. `random` may be missing sensitive fields) and we'd
rather shake loose any setups still using very old
providers than continue allowing them
* Remove the upper bound version constraint since providers
are more regularly updated these days and require less
manual vetting to allow use
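The resulting constraints look something like the following sketch (version numbers are illustrative; see versions.tf for the actual minimums):

```
terraform {
  required_providers {
    tls = {
      source  = "hashicorp/tls"
      version = ">= 3.0" # minimum only, no upper bound
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 3.0"
    }
    template = {
      source  = "hashicorp/template"
      version = ">= 2.2"
    }
  }
}
```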
2021-12-07 16:16:28 -08:00
Dalton Hubble
2d60731cef Update Calico from v3.21.1 to v3.21.2
* https://github.com/projectcalico/calico/releases/tag/v3.21.2
2021-12-07 14:48:08 -08:00
Dalton Hubble
c32e1c73ee Update Calico from v3.21.0 to v3.21.1
* https://github.com/projectcalico/calico/releases/tag/v3.21.1
2021-11-28 16:44:38 -08:00
Dalton Hubble
5353769db6 Update Kubernetes from v1.22.3 to v1.22.4
* Update flannel from v0.15.0 to v0.15.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#v1224
2021-11-17 18:48:53 -08:00
Dalton Hubble
e6193bbdcf Update CoreDNS from v1.8.4 to v1.8.6
* https://github.com/kubernetes/kubernetes/pull/106091
2021-11-12 10:59:20 -08:00
Dalton Hubble
9f9d7708c3 Update Calico and flannel CNI providers
* Update Calico from v3.20.2 to v3.21.0
* Update flannel from v0.14.0 to v0.15.0
2021-11-11 14:25:11 -08:00
Dalton Hubble
f587918c33 Update Kubernetes from v1.22.2 to v1.22.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#v1223
2021-10-28 10:05:42 -07:00
Dalton Hubble
7fbbbe7923 Update flannel from v0.13.0 to v0.14.0
* https://github.com/flannel-io/flannel/releases/tag/v0.14.0
2021-10-17 12:33:22 -07:00
Dalton Hubble
6b5d088795 Update Cilium from v1.10.4 to v1.10.5
* https://github.com/cilium/cilium/releases/tag/v1.10.5
2021-10-17 11:22:59 -07:00
Dalton Hubble
0b102c4089 Update Calico from v3.20.1 to v3.20.2
* https://github.com/projectcalico/calico/releases/tag/v3.20.2
* Add support for iptables legacy vs nft detection
2021-10-05 19:33:09 -07:00
Dalton Hubble
fadb5bbdaa Enable Kubernetes aggregation by default
* Change `enable_aggregation` default from false to true
* These days, Kubernetes control plane components emit annoying
messages related to assumptions baked into the Kubernetes API
Aggregation Layer if you don't enable it. Further the conformance
tests force you to remember to enable it if you care about passing
those
* This change is motivated by eliminating annoyances, rather than
any enthusiasm for Kubernetes' aggregation features
* https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/
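The change itself amounts to a default flip, sketched here (the variable's exact type and description may differ):

```
variable "enable_aggregation" {
  type        = bool
  description = "Enable the Kubernetes Aggregation Layer"
  default     = true # previously false
}
```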
2021-10-05 19:20:26 -07:00
Dalton Hubble
c6fa09bda1 Update Calico and Cilium CNI providers
* Update Calico from v3.20.0 to v3.20.1
* Update Cilium from v1.10.3 to v1.10.4
* Remove Cilium wait for BPF mount
2021-09-21 09:11:49 -07:00
Dalton Hubble
2f29d99d8a Update Kubernetes from v1.22.1 to v1.22.2 2021-09-15 19:47:11 -07:00
Dalton Hubble
bfc2fa9697 Fix ClusterIP access when using Cilium
* When a router sets node(s) as next-hops in a network,
ClusterIP Services should be able to respond as usual
* https://github.com/cilium/cilium/issues/14581
2021-09-15 19:43:58 -07:00
Dalton Hubble
d7fd3f6266 Update Kubernetes from v1.22.0 to v1.22.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#v1221
2021-08-19 21:09:01 -07:00
Dalton Hubble
a2e1cdfd8a Update Calico from v3.19.2 to v3.20.0
* https://github.com/projectcalico/calico/blob/v3.20.0/_includes/charts/calico/templates/calico-node.yaml
2021-08-18 19:43:40 -07:00
Dalton Hubble
074c6ed5f3 Update Calico from v3.19.1 to v3.19.2
* https://github.com/projectcalico/calico/releases/tag/v3.19.2
2021-08-13 18:15:55 -07:00
Dalton Hubble
b5f5d843ec Disable kube-scheduler insecure port
* Kubernetes v1.22.0 disables kube-controller-manager insecure
port, which was used internally for Prometheus metrics scraping.
In Typhoon, we'll switch to using the https port, which requires
Prometheus to present a bearer token
* Go ahead and disable the insecure port for kube-scheduler too,
we'll configure Prometheus to scrape it with a bearer token as
well
* Remove unused kube-apiserver `--port` flag

Rel:

* https://github.com/kubernetes/kubernetes/pull/96216
2021-08-10 21:11:30 -07:00
Dalton Hubble
b766ff2346 Update Kubernetes from v1.21.3 to v1.22.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#v1220
2021-08-04 21:38:23 -07:00
Dalton Hubble
5c0bebc1e7 Add Cilium init container to auto-mount cgroup2
* Add init container to auto-mount /sys/fs/cgroup cgroup2
at /run/cilium/cgroupv2 for the Cilium agent
* Enable CNI exclusive mode, to disable other configs
found in /etc/cni/net.d/
* https://github.com/cilium/cilium/pull/16259
2021-07-24 10:30:06 -07:00
Dalton Hubble
5746f9c221 Update Kubernetes from v1.21.2 to v1.21.3
* https://github.com/kubernetes/kubernetes/releases/tag/v1.21.3
2021-07-17 18:15:06 -07:00
Dalton Hubble
bde255228d Update Cilium from v1.10.2 to v1.10.3
* https://github.com/cilium/cilium/releases/tag/v1.10.3
2021-07-17 18:12:06 -07:00
Dalton Hubble
48ac8945d1 Update Cilium from v1.10.1 to v1.10.2
* https://github.com/cilium/cilium/releases/tag/v1.10.2
2021-07-04 10:09:38 -07:00
Dalton Hubble
c0718e8552 Move daemonset tolerations up, they're documented 2021-06-27 18:01:34 -07:00
Dalton Hubble
362f42a7a2 Update CoreDNS from v1.8.0 to v1.8.4
* https://coredns.io/2021/01/20/coredns-1.8.1-release/
* https://coredns.io/2021/02/23/coredns-1.8.2-release/
* https://coredns.io/2021/02/24/coredns-1.8.3-release/
* https://coredns.io/2021/05/28/coredns-1.8.4-release/
2021-06-23 23:26:27 -07:00
Dalton Hubble
e1543746cb Update Kubernetes from v1.21.1 to v1.21.2
* https://github.com/kubernetes/kubernetes/releases/tag/v1.21.2
2021-06-17 16:10:52 -07:00
Dalton Hubble
33a85e6603 Add support for Terraform v1.0.0 2021-06-17 13:24:45 -07:00
Dalton Hubble
0f33aeba5d Update Cilium from v1.10.0 to v1.10.1
* https://github.com/cilium/cilium/releases/tag/v1.10.1
2021-06-16 11:40:42 -07:00
Dalton Hubble
d17684dd5b Update Calico from v3.19.0 to v3.19.1
* https://docs.projectcalico.org/archive/v3.19/release-notes/
2021-05-24 11:55:34 -07:00
Dalton Hubble
067405ecc4 Update Cilium from v1.10.0-rc1 to v1.10.0
* https://github.com/cilium/cilium/releases/tag/v1.10.0
2021-05-24 10:44:08 -07:00
Dalton Hubble
c3b16275af Update Cilium from v1.9.6 to v1.10.0-rc1
* Add multi-arch container images and arm64 support!
* https://github.com/cilium/cilium/releases/tag/v1.10.0-rc1
2021-05-14 14:23:55 -07:00
Dalton Hubble
ebe3d5526a Update Kubernetes from v1.21.0 to v1.21.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#v1211
2021-05-13 11:18:39 -07:00
Dalton Hubble
7052c66882 Update Calico from v3.18.1 to v3.19.0
* https://docs.projectcalico.org/archive/v3.19/release-notes/
2021-05-13 11:17:48 -07:00
Dalton Hubble
079b348bf7 Update Cilium from v1.9.5 to v1.9.6
* https://github.com/cilium/cilium/releases/tag/v1.9.6
2021-04-26 10:52:51 -07:00
Dalton Hubble
f8fd2f8912 Update required Terraform versions to v0.13 <= x < v0.16
* Allow Terraform v0.13.x, v0.14.x, or v0.15.x
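Expressed as a Terraform version constraint, this is roughly:

```
terraform {
  # Allow Terraform v0.13.x, v0.14.x, or v0.15.x
  required_version = ">= 0.13.0, < 0.16.0"
}
```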
2021-04-15 19:16:41 -07:00
Dalton Hubble
a4ecf168df Update static Pod manifests for Kubernetes v1.21.0
* Set `kube-apiserver` `service-account-jwks-uri` because conformance
ServiceAccountIssuerDiscovery OIDC discovery will access a JWT endpoint
using the kube-apiserver's advertise address by default, instead of
using the intended in-cluster service (10.3.0.1) resolved by cluster DNS
`kubernetes.default.svc.cluster.local`, which causes a cert SAN error
* Set the authentication and authorization kubeconfig for kube-scheduler
and kube-controller-manager. Here, authn/z refer to aggregated API
use cases only, so it's not strictly necessary and warnings about
missing `extension-apiserver-authentication` when enable_aggregation
is false can be ignored
* Mount `/var/lib/kubelet/volumeplugins` to the default location
expected within kube-controller-manager to remove the need for a flag
* Enable `tokencleaner` controller to automatically delete expired
bootstrap tokens (the default node token is good for 1 year, so cleanup won't
really matter at that point, but enable regardless)
* Remove unused `cloud-provider` flag, we never intend to use in-tree
cloud providers or support custom providers
2021-04-10 17:42:18 -07:00
Dalton Hubble
55e1633376 Update Kubernetes from v1.20.5 to v1.21.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#v1210
2021-04-08 21:21:04 -07:00
Dalton Hubble
f87aa7f96a Change CNI config directory to /etc/cni/net.d
* Change CNI config directory from `/etc/kubernetes/cni/net.d`
to `/etc/cni/net.d` (Kubelet default)
2021-04-01 16:48:46 -07:00
Dalton Hubble
8c2e766d18 Update CoreDNS from v1.7.0 to v1.8.0
* https://coredns.io/2020/10/22/coredns-1.8.0-release/
2021-03-20 15:43:58 -07:00
Dalton Hubble
5f4378a0e1 Update Kubernetes from v1.20.4 to v1.20.5
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#v1205
2021-03-19 10:32:06 -07:00
Dalton Hubble
ca37685867 Update Cilium from v1.9.4 to v1.9.5
* https://github.com/cilium/cilium/releases/tag/v1.9.5
2021-03-14 11:25:15 -07:00
Dalton Hubble
8fc689b89c Switch bootstrap token to a random_password
* Terraform `random_password` is identical to `random_string` except
its value is marked sensitive so it isn't displayed in terraform
plan and other outputs
* Prefer marking the bootstrap token as sensitive for cases where
terraform is run in an automated CI/CD system
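A sketch of the swap, assuming the token is built from two random parts in the bootstrap-token format `[a-z0-9]{6}.[a-z0-9]{16}` (resource names are illustrative):

```
# Values are marked sensitive, so they are hidden in `terraform plan` output
resource "random_password" "bootstrap_token_id" {
  length  = 6
  upper   = false
  special = false
}

resource "random_password" "bootstrap_token_secret" {
  length  = 16
  upper   = false
  special = false
}

locals {
  bootstrap_token = "${random_password.bootstrap_token_id.result}.${random_password.bootstrap_token_secret.result}"
}
```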
2021-03-14 10:45:27 -07:00
Dalton Hubble
adcba1c211 Update Calico from v3.17.3 to v3.18.1
* https://docs.projectcalico.org/archive/v3.18/release-notes/
2021-03-14 10:15:39 -07:00
Dalton Hubble
5633f97f75 Update Kubernetes from v1.20.3 to v1.20.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#v1204
2021-02-18 23:57:04 -08:00
Dalton Hubble
e7b05a5d20 Update Calico from v3.17.2 to v3.17.3
* https://github.com/projectcalico/calico/releases/tag/v3.17.3
2021-02-18 23:55:46 -08:00
Dalton Hubble
213cd16c38 Update Kubernetes from v1.20.2 to v1.20.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#v1203
2021-02-17 22:26:33 -08:00
Dalton Hubble
efd750d7a8 Update flannel-cni from v0.4.1 to v0.4.2
* https://github.com/poseidon/flannel-cni/releases/tag/v0.4.2
2021-02-14 12:03:07 -08:00
Dalton Hubble
75fc91deb8 Update Calico from v3.17.1 to v3.17.2
* https://github.com/projectcalico/calico/releases/tag/v3.17.2
2021-02-04 22:01:40 -08:00
Dalton Hubble
ae5449a9fb Update Cilium from v1.9.3 to v1.9.4
* https://github.com/cilium/cilium/releases/tag/v1.9.4
2021-02-03 23:06:28 -08:00
Dalton Hubble
ae9bc1af60 Update Cilium from v1.9.2 to v1.9.3
* https://github.com/cilium/cilium/releases/tag/v1.9.3
2021-01-24 23:05:30 -08:00
Dalton Hubble
9304f46ec7 Update Cilium from v1.9.1 to v1.9.2
* https://github.com/cilium/cilium/releases/tag/v1.9.2
2021-01-20 22:05:01 -08:00
Dalton Hubble
b3bf2ecbbe Update Kubernetes from v1.20.1 to v1.20.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#v1202
2021-01-13 17:44:27 -08:00
Dalton Hubble
80a350bce5 Update Kubernetes from v1.20.0 to v1.20.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#v1201
2020-12-19 12:53:47 -08:00
Ben Drucker
445627e1c3 Allow v3 of tls and random providers
* https://github.com/hashicorp/terraform-provider-random/blob/master/CHANGELOG.md#300-october-09-2020
* https://github.com/hashicorp/terraform-provider-tls/blob/master/CHANGELOG.md#300-october-14-2020
2020-12-19 12:52:48 -08:00
Dalton Hubble
4edd79dd02 Update Calico from v3.17.0 to v3.17.1
* https://github.com/projectcalico/calico/releases/tag/v3.17.1
2020-12-10 22:47:40 -08:00
Dalton Hubble
c052741cc3 Update Kubernetes from v1.20.0-rc.0 to v1.20.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#v1200
2020-12-08 18:24:24 -08:00
Dalton Hubble
64793aa593 Update required Terraform versions to v0.13 <= x < v0.15
* Allow Terraform v0.13.x or v0.14.x to be used
* Drop support for Terraform v0.12 since Typhoon already
requires v0.13+ https://github.com/poseidon/typhoon/pull/880
2020-12-07 00:09:27 -08:00
Dalton Hubble
2ed597002a Update Cilium from v1.9.0 to v1.9.1
* https://github.com/cilium/cilium/releases/tag/v1.9.1
2020-12-04 13:59:50 -08:00
Dalton Hubble
0e9c3598bd Update Kubernetes from v1.19.4 to v1.20.0-rc.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#v1200-rc0
2020-12-02 23:29:00 -08:00
Dalton Hubble
84972373d4 Rename bootstrap-secrets directory to pki
* Change control plane static pods to mount `/etc/kubernetes/pki`,
instead of `/etc/kubernetes/bootstrap-secrets` to better reflect
their purpose and match some loose conventions upstream
* Require TLS assets to be placed at `/etc/kubernetes/pki`, instead
of `/etc/kubernetes/bootstrap-secrets` on hosts (breaking)
* Mount to `/etc/kubernetes/pki` to match the host (less surprise)
* https://kubernetes.io/docs/setup/best-practices/certificates/
2020-12-02 23:13:53 -08:00
Dalton Hubble
ac5cb95774 Generate kubeconfig's for kube-scheduler and kube-controller-manager
* Generate TLS client certificates for kube-scheduler and
kube-controller-manager with `system:kube-scheduler` and
`system:kube-controller-manager` CNs
* Template separate kubeconfigs for kube-scheduler and
kube-controller manager (`scheduler.conf` and
`controller-manager.conf`). Rename admin for clarity
* Before v1.16.0, Typhoon scheduled a self-hosted control
plane, which allowed the steady-state kube-scheduler and
kube-controller-manager to use a scoped ServiceAccount.
With a static pod control plane, separate CN TLS client
certificates are the nearest equivalent.
* https://kubernetes.io/docs/setup/best-practices/certificates/
* Remove unused Kubelet certificate, TLS bootstrap is used
instead
2020-12-01 20:18:36 -08:00
Dalton Hubble
19c3ce61bd Add TokenReview and TokenRequestProjection kube-apiserver flags
* Add kube-apiserver flags for TokenReview and TokenRequestProjection
(beta, defaults on) to allow using Service Account Token Volume Projection
to create and mount service account tokens tied to a Pod's lifecycle
* Both features will be promoted from beta to stable in v1.20
* Rename `experimental-cluster-signing-duration` to just
`cluster-signing-duration`

Rel:

* https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection
2020-12-01 19:50:25 -08:00
Dalton Hubble
fd10b94f87 Update Calico from v3.16.5 to v3.17.0
* Consider Calico's MTU auto-detection, but leave
Calico MTU variable for now (`network_mtu` ignored)
* Remove SELinux level setting workaround for
https://github.com/projectcalico/cni-plugin/issues/874
2020-11-25 11:18:59 -08:00
Dalton Hubble
49216ab82c Update Kubernetes from v1.19.3 to v1.19.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#v1194
2020-11-11 22:26:43 -08:00
Dalton Hubble
1c3d293f7c Update Cilium from v1.9.0-rc3 to v1.9.0
* https://github.com/cilium/cilium/releases/tag/v1.9.0
* https://github.com/cilium/cilium/pull/13937
2020-11-10 23:15:30 -08:00
Dalton Hubble
ef17534c33 Update Calico from v3.16.4 to v3.16.5
* https://docs.projectcalico.org/v3.16/release-notes/
2020-11-10 18:27:20 -08:00
Starbuck
74c299bf2c Restore kube-controller-manager --use-service-account-credentials
* kube-controller-manager Pods can start control loops with credentials
that have been granted relevant controller manager roles or using
generated service accounts bound to each role
* During the migration of the control plane from self-hosted to static
pods (https://github.com/poseidon/terraform-render-bootstrap/pull/148)
the flag for using separate service accounts was inadvertently dropped
* Restore the --use-service-account-credentials flag used before v1.16

Related:

* https://kubernetes.io/docs/reference/access-authn-authz/rbac/#controller-roles
* https://github.com/poseidon/terraform-render-bootstrap/pull/225
2020-11-10 12:06:51 -08:00
Dalton Hubble
c6e3a2bcdc Update Cilium from v1.8.5 to v1.9.0-rc3
* https://github.com/cilium/cilium/releases/tag/v1.9.0-rc3
* https://github.com/cilium/cilium/releases/tag/v1.9.0-rc2
* https://github.com/cilium/cilium/releases/tag/v1.9.0-rc1
2020-11-03 00:05:32 -08:00
Dalton Hubble
3a0feda171 Update Cilium from v1.8.4 to v1.8.5
* https://github.com/cilium/cilium/releases/tag/v1.8.5
2020-10-29 00:48:39 -07:00
Dalton Hubble
7036f64891 Update Calico from v3.16.3 to v3.16.4
* https://docs.projectcalico.org/v3.16/release-notes/
2020-10-25 11:50:43 -07:00
Dalton Hubble
9037d7311b Remove asset_dir variable and optional asset writes
* Originally, generated TLS certificates, manifests, and
cluster "assets" were written to local disk (`asset_dir`) during
terraform apply cluster bootstrap
* Typhoon v1.17.0 introduced bootstrapping using only Terraform
state to store cluster assets, to avoid ever writing sensitive
materials to disk and improve automated use-cases. `asset_dir`
was changed to optional and defaulted to "" (no writes)
* Typhoon v1.18.0 deprecated the `asset_dir` variable, removed
docs, and announced it would be deleted in future.
* Remove the `asset_dir` variable

Cluster assets are now stored in Terraform state only. For those
who wish to write those assets to local files, this can still be
done explicitly:

```
resource "local_file" "assets" {
  for_each = module.bootstrap.assets_dist
  filename = "some-assets/${each.key}"
  content  = each.value
}
```

Related:

* https://github.com/poseidon/typhoon/pull/595
* https://github.com/poseidon/typhoon/pull/678
2020-10-17 14:57:13 -07:00
Maikel
84f897b5f1 Add Cilium manifests to local_file asset_dir (#221)
* Note, asset_dir is deprecated https://github.com/poseidon/typhoon/pull/678
2020-10-17 14:30:50 -07:00
Dalton Hubble
7988fb7159 Update Calico from v3.15.3 to v3.16.3
* https://github.com/projectcalico/calico/releases/tag/v3.16.3
* https://docs.projectcalico.org/v3.16/release-notes/
2020-10-15 20:00:41 -07:00
Dalton Hubble
5bebcc5f00 Update Kubernetes from v1.19.2 to v1.19.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#v1193
2020-10-14 20:42:07 -07:00
Dalton Hubble
4448143f64 Update flannel from v0.13.0-rc2 to v0.13.0
* https://github.com/coreos/flannel/releases/tag/v0.13.0
2020-10-14 20:29:20 -07:00
Dalton Hubble
a2eb1dcbcf Update Cilium from v1.8.3 to v1.8.4
* https://github.com/cilium/cilium/releases/tag/v1.8.4
2020-10-02 00:20:19 -07:00
Dalton Hubble
d0f2123c59 Update Kubernetes from v1.19.1 to v1.19.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#v1192
2020-09-16 20:03:44 -07:00
Dalton Hubble
9315350f55 Update flannel and flannel-cni image versions
* Update flannel from v0.12.0 to v0.13.0-rc2
* Use new flannel multi-arch image
* Update flannel-cni to update CNI plugins from v0.8.6 to
v0.8.7
2020-09-16 20:02:42 -07:00
Nesc58
016d4ebd0c Mount /run/xtables.lock in flannel Daemonset
* Mount xtables.lock (like Calico and Cilium) since iptables
may be called by other processes (kube-proxy)
2020-09-16 19:01:42 -07:00
Dalton Hubble
f2dd897d67 Change seccomp annotations to Pod seccompProfile
* seccomp graduated to GA in Kubernetes v1.19. Support
for seccomp alpha annotations will be removed in v1.22
* Replace seccomp annotations with the GA seccompProfile
field in the PodTemplate securityContext
* Switch profile from `docker/default` to `runtime/default`
(no effective change, since docker is the runtime)
* Verify with docker inspect SecurityOpt. Without the
profile, you'd see `seccomp=unconfined`

Related:
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#seccomp-graduates-to-general-availability
2020-09-10 00:28:58 -07:00
Dalton Hubble
c72826908b Update Kubernetes from v1.19.0 to v1.19.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#v1191
2020-09-09 20:42:43 -07:00
Dalton Hubble
81ac7e6e2f Update Calico from v3.15.2 to v3.15.3
* https://github.com/projectcalico/calico/releases/tag/v3.15.3
2020-09-09 20:40:41 -07:00
Dalton Hubble
9ce9148557 Update Cilium from v1.8.2 to v1.8.3
* https://github.com/cilium/cilium/releases/tag/v1.8.3
2020-09-07 17:54:56 -07:00
Dalton Hubble
2686d59203 Allow leader election among Cilium operator replicas
* Allow Cilium operator Pods to leader elect when Deployment
has more than one replica
* Use topology spread constraint to keep multiple operators
from running on the same node (pods bind hostNetwork ports)
2020-09-07 17:48:19 -07:00
Dalton Hubble
79343f02ae Update Kubernetes from v1.18.8 to v1.19.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md/#v1190
2020-08-26 19:31:20 -07:00
Dalton Hubble
91738c35ff Update Calico from v3.15.1 to v3.15.2
* https://docs.projectcalico.org/release-notes/
2020-08-26 19:30:37 -07:00
Dalton Hubble
8ef2fe7c99 Update Kubernetes from v1.18.6 to v1.18.8
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1188
2020-08-13 20:43:42 -07:00
Dalton Hubble
60540868e0 Relax Terraform version constraint
* bootstrap uses only Hashicorp Terraform modules, allow
Terraform v0.13.x usage
2020-08-10 21:21:54 -07:00
Dalton Hubble
3675b3a539 Update from coreos/flannel-cni to poseidon/flannel-cni
* Update CNI plugins from v0.6.0 to v0.8.6 to fix several CVEs
* Update the base image to alpine:3.12
* Use `flannel-cni` as an init container and remove sleep
* Add Linux ARM64 and multi-arch container images
* https://github.com/poseidon/flannel-cni
* https://quay.io/repository/poseidon/flannel-cni

Background

* Switch from github.com/coreos/flannel-cni v0.3.0 which was last
published by me in 2017 and which is no longer accessible to me
to maintain or patch
* Port to the poseidon/flannel-cni rewrite, which releases v0.4.0
to continue the prior release numbering
2020-08-02 15:06:18 -07:00
Dalton Hubble
45053a62cb Update Cilium from v1.8.1 to v1.8.2
* Drop unused option https://github.com/cilium/cilium/pull/12618
2020-07-25 15:52:19 -07:00
Dalton Hubble
9de4267c28 Update CoreDNS from v1.6.7 to v1.7.0
* https://coredns.io/2020/06/15/coredns-1.7.0-release/
2020-07-25 13:08:29 -07:00
Dalton Hubble
835890025b Update Calico from v3.15.0 to v3.15.1
* https://docs.projectcalico.org/v3.15/release-notes/
2020-07-15 22:03:54 -07:00
Dalton Hubble
2bab6334ad Update Kubernetes from v1.18.5 to v1.18.6
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1186
2020-07-15 21:55:02 -07:00
Dalton Hubble
9a5132b2ad Update Cilium from v1.8.0 to v1.8.1
* https://github.com/cilium/cilium/releases/tag/v1.8.1
2020-07-05 15:58:53 -07:00
Dalton Hubble
5a7c963caf Update Kubernetes from v1.18.4 to v1.18.5
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1185
2020-06-27 13:49:10 -07:00
Dalton Hubble
5043456b05 Update Calico from v3.14.1 to v3.15.0
* https://docs.projectcalico.org/v3.15/release-notes/
2020-06-26 02:39:01 -07:00
Dalton Hubble
c014b77090 Update Cilium from v1.8.0-rc4 to v1.8.0
* https://github.com/cilium/cilium/releases/tag/v1.8.0
2020-06-22 22:25:38 -07:00
Dalton Hubble
1c07dfbc2a Remove experimental kube-router CNI provider 2020-06-21 21:55:56 -07:00
Dalton Hubble
af36c53936 Add experimental Cilium CNI provider
* Accept experimental CNI `networking` mode "cilium"
* Run Cilium v1.8.0 with overlay vxlan tunnels and a
minimal set of features. We're interested in:
  * IPAM: Divide pod_cidr into /24 subnets per node
  * CNI networking pod-to-pod, pod-to-external
  * BPF masquerade
  * NetworkPolicy as defined by Kubernetes (no L7)
* Continue using kube-proxy with Cilium probe mode
* Firewall changes:
  * Require UDP 8472 for vxlan (Linux kernel default) between nodes
  * Optional ICMP echo(8) between nodes for host reachability (health)
  * Optional TCP 4240 between nodes for host reachability (health)
2020-06-21 16:21:09 -07:00
Dalton Hubble
e75697ce35 Rename controller node label and NoSchedule taint
* Use node label `node.kubernetes.io/controller` to select
controller nodes (action required)
* Tolerate node taint `node-role.kubernetes.io/controller`
for workloads that should run on controller nodes. Don't
tolerate `node-role.kubernetes.io/master` (action required)
2020-06-17 22:46:35 -07:00
Dalton Hubble
3fe903d0ac Update Kubernetes from v1.18.3 to v1.18.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1184
* Remove unused template file
2020-06-17 19:23:12 -07:00
Dalton Hubble
fc1a7bac89 Remove unused Kubelet certificate and key pair
* Kubelet certificate and key pair in state (not distributed)
are no longer needed with Kubelet TLS bootstrap
* https://github.com/poseidon/terraform-render-bootstrap/pull/185

Fix https://github.com/poseidon/typhoon/issues/757
2020-06-11 21:20:41 -07:00
Dalton Hubble
c3b1f23b5d Update Calico from v3.14.0 to v3.14.1
* https://docs.projectcalico.org/v3.14/release-notes/
2020-05-29 00:35:16 -07:00
Dalton Hubble
ff7ec52d0a Update Kubernetes from v1.18.2 to v1.18.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md
2020-05-20 20:34:42 -07:00
Dalton Hubble
a83ddbb30e Add CoreDNS "soft" nodeAffinity for controller nodes
* Add nodeAffinity to CoreDNS deployment PodSpec to
prefer running CoreDNS pods on controllers, while
relying on podAntiAffinity for spreading.
* For single master clusters, running two CoreDNS pods
on the master or running one pod on a worker is
permissible.
* Note: It's still _possible_ to end up with CoreDNS pods
all running on workers since we only express scheduling
preference ("soft"), but unlikely. Plus the motivating
scenario (below) is also rare.

Background:

* CoreDNS replicas are set to the higher of 2 or the
number of control plane nodes to (at a minimum) support
Deployment updates or pod restarts and match the cluster
size (e.g. 5 master/controller nodes likely means a
larger cluster, so run 5 CoreDNS replicas)
* In the past (before v1.14), we required kube-dns (CoreDNS's
predecessor) pods to run on master nodes. With
CoreDNS this node selection was relaxed. We'd like a
gentler form of it now.

Motivation:

* On clusters using 100% preemptible/spot workers, it is
possible that CoreDNS pods schedule to workers that are all
preempted at the same time, causing a loss of cluster internal
DNS service until a CoreDNS pod reschedules (1 min). We'd like
CoreDNS to prefer controller/master nodes (which aren't preempted)
to reduce the possibility of control plane disruption
2020-05-09 22:48:56 -07:00
Dalton Hubble
157336db92 Update Calico from v3.13.3 to v3.14.0
* https://docs.projectcalico.org/v3.14/release-notes/
2020-05-09 16:02:38 -07:00
Dalton Hubble
1dc36b58b8 Fix Calico node crash loop on Pod restart
* Set a consistent MCS level/range for Calico install-cni
* Note: Rebooting a node was a workaround, because Kubelet
relabels /etc/kubernetes(/cni/net.d)

Background:

* On SELinux enforcing systems, the Calico CNI install-cni
container ran with default SELinux context and a random MCS
pair. install-cni places CNI configs by first creating a
temporary file and then moving it into place, which means
the file's MCS categories depend on the container's SELinux
context.
* calico-node Pod restarts create a new install-cni container
with a different MCS pair that cannot access the earlier
written file (it places configs every time), causing the
init container to error and calico-node to crash loop
* https://github.com/projectcalico/cni-plugin/issues/874

```
mv: inter-device move failed: '/calico.conf.tmp' to
'/host/etc/cni/net.d/10-calico.conflist'; unable to remove target:
Permission denied
Failed to mv files. This may be caused by selinux configuration on the
host, or something else.
```

Note, this isn't a host SELinux configuration issue.
2020-05-09 15:20:06 -07:00
Dalton Hubble
924beb4b0c Enable Kubelet TLS bootstrap and NodeRestriction
* Enable bootstrap token authentication on kube-apiserver
* Generate the bootstrap.kubernetes.io/token Secret that
may be used as a bootstrap token
* Generate a bootstrap kubeconfig (with a bootstrap token)
to be securely distributed to nodes. Each Kubelet will use
the bootstrap kubeconfig to authenticate to kube-apiserver
as `system:bootstrappers` and send a node-unique CSR for
kube-controller-manager to automatically approve to issue
a Kubelet certificate and kubeconfig (expires in 72 hours)
* Add ClusterRoleBinding for bootstrap token subjects
(`system:bootstrappers`) to have the `system:node-bootstrapper`
ClusterRole
* Add ClusterRoleBinding for bootstrap token subjects
(`system:bootstrappers`) to have the csr nodeclient ClusterRole
* Add ClusterRoleBinding for bootstrap token subjects
(`system:bootstrappers`) to have the csr selfnodeclient ClusterRole
* Enable NodeRestriction admission controller to limit the
scope of Node or Pod objects a Kubelet can modify to those of
the node itself
* Ability for a Kubelet to delete its Node object is retained
as preemptible nodes or those in auto-scaling instance groups
need to be able to remove themselves on shutdown. This need
continues to have precedence over any risk of a node deleting
itself maliciously

Security notes:

1. Issued Kubelet certificates authenticate as user `system:node:NAME`
and group `system:nodes` and are limited in their authorization
to perform API operations by Node authorization and NodeRestriction
admission. Previously, a Kubelet's authorization was broader. This
is the primary security motivation.

2. The bootstrap kubeconfig credential has the same sensitivity
as the previous generated TLS client-certificate kubeconfig.
It must be distributed securely to nodes. Its compromise still
allows an attacker to obtain a Kubelet kubeconfig

3. Bootstrapping Kubelet kubeconfigs with a limited lifetime offers
a slight security improvement.
  * An attacker who obtains the kubeconfig can likely obtain the
  bootstrap kubeconfig as well, to obtain the ability to renew
  their access
  * A compromised bootstrap kubeconfig could plausibly be handled
  by replacing the bootstrap token Secret, distributing the new token
  to nodes, and letting the old token expire. Whereas a compromised
  TLS client-certificate kubeconfig can't be revoked (no CRL). However,
  replacing a bootstrap token can be impractical in real cluster
  environments, so the limited lifetime is mostly a theoretical
  benefit.
  * Cluster CSR objects are visible via kubectl which is nice

4. Bootstrapping node-unique Kubelet kubeconfigs means Kubelet
clients have more identity information, which can improve the
utility of audits and future features

Rel: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/
2020-04-25 19:38:56 -07:00
Dalton Hubble
c62c7f5a1a Update Calico from v3.13.1 to v3.13.3
* https://docs.projectcalico.org/v3.13/release-notes/
2020-04-22 20:26:32 -07:00
Dalton Hubble
14d0b20879 Update Kubernetes from v1.18.1 to v1.18.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#downloads-for-v1182
2020-04-16 23:33:42 -07:00
Dalton Hubble
1ad53d3b1c Update Kubernetes from v1.18.0 to v1.18.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md
2020-04-08 19:37:27 -07:00
Dalton Hubble
45dc2f5c0c Update flannel from v0.11.0 to v0.12.0
* https://github.com/coreos/flannel/releases/tag/v0.12.0
2020-03-31 18:23:57 -07:00
Dalton Hubble
42723d13a6 Change default kube-system DaemonSet tolerations
* Change kube-proxy, flannel, and calico-node DaemonSet
tolerations to tolerate `node.kubernetes.io/not-ready`
and `node-role.kubernetes.io/master` (i.e. controllers)
explicitly, rather than tolerating all taints
* kube-system DaemonSets will no longer tolerate custom
node taints by default. Instead, custom node taints must
be enumerated to opt-in to scheduling/executing the
kube-system DaemonSets.

Background: Tolerating all taints ruled out use-cases
where certain nodes might legitimately need to keep
kube-proxy or CNI networking disabled
2020-03-25 22:43:50 -07:00
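A minimal sketch of opting kube-system DaemonSets back in to a custom taint, assuming the `daemonset_tolerations` variable that appears in the conditional manifest templates later in this diff (a list of extra taint keys to tolerate); the taint key is illustrative:

```hcl
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=SHA"

  cluster_name = "example"
  api_servers  = ["node1.example.com"]
  etcd_servers = ["node1.example.com"]

  # enumerate custom taint keys that kube-proxy and CNI DaemonSets
  # should tolerate (no longer tolerated by default)
  daemonset_tolerations = ["role.example.com/storage"]
}
```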
Dalton Hubble
cb170f802d Update Kubernetes from v1.17.4 to v1.18.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md
2020-03-25 17:47:30 -07:00
Dalton Hubble
e76f0a09fa Switch from upstream hyperkube to component images
* Kubernetes plans to stop releasing the hyperkube image in
the future.
* Upstream will continue releasing container images for
`kube-apiserver`, `kube-controller-manager`, `kube-proxy`,
and `kube-scheduler`. Typhoon will use these images
* Upstream will release the kubelet as a binary for distros
to package, either as a traditional DEB/RPM or as a container
image for container-optimized operating systems. Typhoon will
take on the packaging of Kubelet and its dependencies as a new
container image (alongside kubectl)

Rel: https://github.com/kubernetes/kubernetes/pull/88676
See: https://github.com/poseidon/kubelet
2020-03-17 22:13:42 -07:00
Dalton Hubble
73784c1b2c Update Kubernetes from v1.17.3 to v1.17.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md#v1174
2020-03-12 22:57:14 -07:00
Dalton Hubble
804029edd5 Update Calico from v3.12.0 to v3.13.1
* https://docs.projectcalico.org/v3.13/release-notes/
2020-03-12 22:55:57 -07:00
Dalton Hubble
d1831e626a Update CoreDNS from v1.6.6 to v1.6.7
* https://coredns.io/2020/01/28/coredns-1.6.7-release/
2020-02-17 14:24:17 -08:00
Dalton Hubble
7961945834 Update Kubernetes from v1.17.2 to v1.17.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md#v1173
2020-02-11 20:18:02 -08:00
Dalton Hubble
1ea8fe7a85 Update Calico from v3.11.2 to v3.12.0
* https://docs.projectcalico.org/release-notes/#v3120
2020-02-06 00:03:00 -08:00
Dalton Hubble
05297b94a9 Update Kubernetes from v1.17.1 to v1.17.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.17.md#v1172
2020-01-21 18:25:46 -08:00
Dalton Hubble
de85f1da7d Update Calico from v3.11.1 to v3.11.2
* https://docs.projectcalico.org/v3.11/release-notes/
2020-01-18 13:37:17 -08:00
Dalton Hubble
5ce4fc6953 Update Kubernetes from v1.17.0 to v1.17.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.17.md#v1171
2020-01-14 20:17:30 -08:00
Dalton Hubble
ac4b7af570 Configure kube-proxy to serve /metrics on 0.0.0.0:10249
* Set kube-proxy --metrics-bind-address to 0.0.0.0 (default
127.0.0.1) so Prometheus metrics can be scraped
* Add pod port list (informational only)
* Require node firewall rules to be updated before scrapes
can succeed
2019-12-29 11:56:52 -08:00
Dalton Hubble
c8c21deb76 Update Calico from v3.10.2 to v3.11.1
* https://docs.projectcalico.org/v3.11/release-notes/
2019-12-28 10:51:11 -08:00
Dalton Hubble
f021d9cb34 Update CoreDNS from v1.6.5 to v1.6.6
* https://coredns.io/2019/12/11/coredns-1.6.6-release/
2019-12-22 10:41:43 -05:00
Dalton Hubble
24e5513ee6 Update Calico from v3.10.1 to v3.10.2
* https://docs.projectcalico.org/v3.10/release-notes/
2019-12-09 20:56:18 -08:00
Dalton Hubble
0ddd90fd05 Update Kubernetes from v1.16.3 to v1.17.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.17.md/#v1170
2019-12-09 18:29:06 -08:00
Dalton Hubble
4369c706e2 Restore kube-controller-manager settings lost in static pod migration
* Migration from a self-hosted to a static pod control plane dropped
a few kube-controller-manager customizations
* Reduce kube-controller-manager --pod-eviction-timeout from 5m to 1m
to move pods more quickly when nodes are preempted
* Fix flex-volume-plugin-dir since the Kubernetes default points to
a read-only filesystem on Container Linux / Fedora CoreOS

Related:

* https://github.com/poseidon/terraform-render-bootstrap/pull/148
* 7b06557b7a
2019-12-08 22:37:36 -08:00
Dalton Hubble
7df6bd8d1e Tune static pod CPU requests slightly lower
* Reduce kube-apiserver and kube-controller-manager CPU
requests from 200m to 150m. Prefer slightly lower commitment
after running with the requests chosen in #161 for a while
* Reduce calico-node CPU request from 150m to 100m to match
CoreDNS and flannel
2019-12-08 22:25:58 -08:00
Dalton Hubble
dce49114a0 Fix terraform format with fmt 2019-12-05 01:02:01 -08:00
Dalton Hubble
50a221e042 Annotate sensitive output variables to suppress display
* Annotate terraform output variables containing generated TLS
credentials and kubeconfigs as sensitive to suppress / mask
them in terraform CLI display.
* Allow for easier use in automation systems and logged environments
2019-12-05 00:57:07 -08:00
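A minimal sketch of the pattern, assuming the module's `kubeconfig-admin` output and a local value of the same name; `sensitive = true` masks the value in Terraform CLI display:

```hcl
output "kubeconfig-admin" {
  value     = local.kubeconfig-admin
  sensitive = true
}
```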
Dalton Hubble
4d7484f72a Change asset_dir variable from required to optional
* `asset_dir` is an absolute path to a directory where generated
assets from terraform-render-bootstrap are written (sensitive)
* Change `asset_dir` to default to "" so no assets are written
(favor Terraform output mechanisms). Previously, asset_dir was
required so all users set some path. To take advantage of the
new optionality, remove asset_dir or set it to ""
2019-12-05 00:56:54 -08:00
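A sketch of the new default usage based on the README example: omit `asset_dir` (or set it to "") and no assets are written to disk; read them from module outputs instead:

```hcl
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=SHA"

  cluster_name = "example"
  api_servers  = ["node1.example.com"]
  etcd_servers = ["node1.example.com"]
  # asset_dir omitted, so it defaults to "" and nothing is written locally
}
```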
Dalton Hubble
6c7ba3864f Introduce a Terraform output map with distribution assets
* Introduce a new `assets_dist` output variable that provides
a mapping from suggested asset paths to asset contents (for
assets that should be distributed to controller nodes). This
new output format is intended to align with a modified asset
distribution style in Typhoon.
* Lay the groundwork for `assets_dir` to become optional. The
output map provides output variable access to the minimal assets
that are required for bootstrap
* Assets that aren't required for bootstrap itself (e.g.
the etcd CA key) but can be used by admins may later be added
as specific output variables to further reduce asset_dir use

Background:

* `terraform-render-bootstrap` rendered assets were previously
only provided by rendering files to an `asset_dir`. This was
necessary, but created a responsibility to maintain those
assets on the machine where terraform apply was run
2019-12-04 20:15:40 -08:00
Dalton Hubble
8005052cfb Remove unused raw kubeconfig field outputs
* Remove unused `ca_cert`, `kubelet_cert`, `kubelet_key`,
and `server` outputs
* These outputs were once needed to support clusters with
managed instance groups, but that hasn't been the case for
quite some time
2019-11-13 16:49:07 -08:00
Dalton Hubble
0f1f16c612 Add small CPU resource requests to static pods
* Set small CPU requests on static pods kube-apiserver,
kube-controller-manager, and kube-scheduler to align with
upstream tooling and for edge cases
* Control plane nodes are tainted to isolate them from
ordinary workloads. Even dense workloads can only compress
CPU resources on worker nodes.
* Control plane static pods use the highest priority class, so
contention favors control plane pods (over say node-exporter)
and CPU is compressible too.
* Effectively, a practical case for these requests hasn't been
observed. However, a small static pod CPU request may offer
a slight benefit if a controller became overloaded and the
above mechanisms were insufficient for some reason (bit of a
stretch, due to CPU compressibility)
* Continue to avoid setting a memory request for static pods.
It would impose a hard size requirement on controller nodes,
which isn't warranted and is handled more gently by Typhoon
default instance types across clouds and via docs
2019-11-13 16:44:33 -08:00
Dalton Hubble
43e1230c55 Update CoreDNS from v1.6.2 to v1.6.5
* Add health `lameduck` option 5s. Before CoreDNS shuts down,
it will wait and report unhealthy for 5s to allow time for
plugins to shutdown cleanly
* Minor bug fixes over a few releases
* https://coredns.io/2019/08/31/coredns-1.6.3-release/
* https://coredns.io/2019/09/27/coredns-1.6.4-release/
* https://coredns.io/2019/11/05/coredns-1.6.5-release/
2019-11-13 14:33:50 -08:00
Dalton Hubble
1bba891d95 Adopt Terraform v0.12 templatefile function
* Adopt Terraform v0.12 type and templatefile function
features to replace the use of terraform-provider-template's
`template_dir`
* Use of `for_each` to write local assets requires
that consumers use Terraform v0.12.6+ (action required)
* Continue use of `template_file` as it's quite common. In
future, we may replace it as well.
* Remove outputs `id` and `content_hash` (no longer used)

Background:

* `template_dir` was added to `terraform-provider-template`
to add support for template directory rendering in CoreOS
Tectonic Kubernetes distribution (~2017)
* Terraform v0.12 introduced a native `templatefile` function
and v0.12.6 introduced native `for_each` support (July 2019)
that makes it possible to replace `template_dir` usage
2019-11-13 14:05:01 -08:00
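A minimal sketch of the pattern that replaces `template_dir`, with illustrative names and template variables; `fileset` and `templatefile` render each template, and `for_each` writes the results as local files (requires Terraform v0.12.6+):

```hcl
locals {
  manifests = {
    for name in fileset("${path.module}/resources/manifests", "*.yaml") :
    "manifests/${name}" => templatefile("${path.module}/resources/manifests/${name}", {
      pod_cidr = var.pod_cidr
    })
  }
}

# write rendered manifests under the asset directory
resource "local_file" "manifests" {
  for_each = local.manifests
  filename = "${var.asset_dir}/${each.key}"
  content  = each.value
}
```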
Dalton Hubble
0daa1276c6 Update Kubernetes from v1.16.2 to v1.16.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1163
2019-11-13 13:02:01 -08:00
Dalton Hubble
a2b1dbe2c0 Update Calico from v3.10.0 to v3.10.1
* https://docs.projectcalico.org/v3.10/release-notes/
2019-11-07 11:07:15 -08:00
Dalton Hubble
3c7334ab55 Upgrade Calico from v3.9.2 to v3.10.0
* Change the calico-node livenessProbe from httpGet to exec
`calico-node -felix-ready`, as recommended by Calico
* Allow advertising Kubernetes service ClusterIPs
2019-10-27 01:06:09 -07:00
Dalton Hubble
e09d6bef33 Switch kube-proxy from iptables mode to ipvs mode
* Kubernetes v1.11 considered kube-proxy IPVS mode GA
* Many problems were found https://github.com/poseidon/typhoon/pull/321
* Since then, major blockers seem to have been addressed
2019-10-15 22:55:17 -07:00
Dalton Hubble
0fcc067476 Update Kubernetes from v1.16.1 to v1.16.2
* https://github.com/kubernetes/kubernetes/releases/tag/v1.16.2
2019-10-15 22:38:51 -07:00
Dalton Hubble
6f2734bb3c Update Calico from v3.9.1 to v3.9.2
* https://github.com/projectcalico/calico/releases/tag/v3.9.2
2019-10-15 22:36:37 -07:00
Dalton Hubble
10d9cec5c2 Add stricter type constraints to variables 2019-10-06 20:41:50 -07:00
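Illustrative examples of v0.12 type constraints (not necessarily the module's exact definitions):

```hcl
variable "api_servers" {
  type = list(string)
}

variable "container_images" {
  type = map(string)
}
```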
Dalton Hubble
1f8b634652 Remove unneeded control plane flags
* Several flags now default to the arguments we've been
setting and are no longer needed
2019-10-06 20:25:46 -07:00
Dalton Hubble
586d6e36f6 Update Kubernetes from v1.16.0 to v1.16.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1161
2019-10-02 21:22:11 -07:00
Dalton Hubble
18b7a74d30 Update Calico from v3.8.2 to v3.9.1
* https://docs.projectcalico.org/v3.9/release-notes/
2019-09-29 11:14:20 -07:00
Dalton Hubble
539b725093 Update Kubernetes from v1.15.3 to v1.16.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1160
2019-09-17 21:15:46 -07:00
Dalton Hubble
d6206abedd Replace Terraform element function with indexing
* Better to explicitly index (and error on out-of-bounds) than
use Terraform `element` (which has special wrap-around behavior)
* https://www.terraform.io/docs/configuration/functions/element.html
2019-09-14 16:46:27 -07:00
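A small sketch of the difference:

```hcl
locals {
  # element() silently wraps around for out-of-range indices
  first_wraparound = element(var.api_servers, 0)
  # explicit indexing errors on an out-of-bounds index, which is preferred here
  first = var.api_servers[0]
}
```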
Dalton Hubble
e839ec5a2b Fix Terraform formatting 2019-09-14 16:44:36 -07:00
Dalton Hubble
3dade188f2 Rename project to terraform-render-bootstrap
* Rename from terraform-render-bootkube to terraform-render-bootstrap
* Generated manifest and certificate assets are no longer geared
specifically for bootkube (no longer used)
2019-09-14 16:16:49 -07:00
Dalton Hubble
97bbed6c3a Rename CA organization from bootkube to typhoon
* Rename the organization in generated CA certificates for
clusters from bootkube to typhoon
* Mainly helpful to avoid confusion with bootkube CA certificates
if users inspect their CA, especially now that bootkube isn't used
(better their searches lead to Typhoon)
2019-09-14 16:08:06 -07:00
Dalton Hubble
6e59af7113 Migrate from a self-hosted to static pod control plane
* Run kube-apiserver, kube-scheduler, and kube-controller-manager
as static pods on each controller node
* Bootstrap a minimal control plane by copying `static-manifests`
to the Kubelet `--pod-manifest-path` and tls/auth secrets to
`/etc/kubernetes/bootstrap-secrets`. Then, kubectl apply Kubernetes
manifests.
* Discontinue using bootkube to bootstrap and pivot to a self-hosted
control plane.
* Remove bootkube self-hosted kube-apiserver DaemonSet and
kube-scheduler and kube-controller-manager Deployments
* Remove pod-checkpointer manifests (no longer needed)

Advantages:

* Reduce control plane bootstrapping complexity. Self-hosted pivot and
pod checkpointing worked well, but in-place edits to kube-apiserver,
kube-controller-manager, or kube-scheduler are infrequently used. The
concept was originally geared toward continuously in-place upgrading
clusters, a goal Typhoon doesn't take on (rec. blue/green clusters).
As such, the value-add doesn't justify the extra components for this
particular project.
* Static pods still provide kubectl visibility and log access

Drawbacks:

* In-place edits to kube-apiserver, kube-controller-manager, and
kube-scheduler are not possible via kubectl (non-goal)
* Assets must be copied to each controller (not just one)
* Static pods must load credentials via hostPath, which is less clean
compared with the former Kubernetes secrets and service accounts
2019-09-02 20:52:46 -07:00
Dalton Hubble
98cc19f80f Update CoreDNS from v1.5.0 to v1.6.2
* https://coredns.io/2019/06/26/coredns-1.5.1-release/
* https://coredns.io/2019/07/03/coredns-1.5.2-release/
* https://coredns.io/2019/07/28/coredns-1.6.0-release/
* https://coredns.io/2019/08/02/coredns-1.6.1-release/
* https://coredns.io/2019/08/13/coredns-1.6.2-release/
2019-08-31 15:20:55 -07:00
Dalton Hubble
248675e7a9 Update Kubernetes from v1.15.2 to v1.15.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md/#v1153
2019-08-19 14:41:54 -07:00
Dalton Hubble
8b3738b2cc Update Calico from v3.8.1 to v3.8.2
* https://docs.projectcalico.org/v3.8/release-notes/
2019-08-16 14:53:20 -07:00
Dalton Hubble
c21da02249 Update Kubernetes from v1.15.1 to v1.15.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#downloads-for-v1152
2019-08-05 08:44:54 -07:00
Dalton Hubble
83dd5a7cfc Update Calico from v3.8.0 to v3.8.1
* https://github.com/projectcalico/calico/releases/tag/v3.8.1
2019-07-27 15:17:47 -07:00
Dalton Hubble
ed94836925 Update kube-router from v0.3.1 to v0.3.2
* kube-router is experimental and not supported or validated
* Bumping so the next time kube-router is evaluated, we're on
a modern version
* https://github.com/cloudnativelabs/kube-router/releases/tag/v0.3.2
2019-07-27 15:12:43 -07:00
Dalton Hubble
5b9faa9031 Update Kubernetes from v1.15.0 to v1.15.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#downloads-for-v1151
2019-07-19 01:18:09 -07:00
Dalton Hubble
119cb00fa7 Upgrade Calico from v3.7.4 to v3.8.0
* Enable CNI bandwidth plugin for traffic shaping
* https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#support-traffic-shaping
2019-07-11 21:00:58 -07:00
Dalton Hubble
4caca47776 Run kube-apiserver as non-root user (nobody) 2019-07-06 13:51:54 -07:00
Dalton Hubble
3bfd1253ec Always run kube-apiserver on port 6443 (internally)
* Require the bootstrap-kube-apiserver and kube-apiserver components
to listen on port 6443 (internally) to allow kube-apiserver pods to
run with lower user privilege
* Remove variable `apiserver_port`. The kube-apiserver listen
port is no longer customizable.
* Add variable `external_apiserver_port` to allow architectures
where a load balancer fronts kube-apiserver 6443 backends, but
listens on a different port externally. For example, Google Cloud
TCP Proxy load balancers cannot listen on 6443
2019-07-06 13:50:22 -07:00
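A sketch of the load-balancer case, assuming the README example's other arguments; kube-apiserver itself always listens on 6443:

```hcl
module "bootstrap" {
  # ... cluster_name, api_servers, etcd_servers as in the README example

  # external load balancer listens on 443 and forwards to 6443 backends
  external_apiserver_port = 443
}
```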
Dalton Hubble
95f6fc7fa5 Update Calico from v3.7.3 to v3.7.4
* https://docs.projectcalico.org/v3.7/release-notes/
2019-07-02 20:15:53 -07:00
Dalton Hubble
62df9ad69c Update Kubernetes from v1.14.3 to v1.15.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#v1150
2019-06-23 13:04:13 -07:00
Dalton Hubble
89c3ab4e27 Update Calico from v3.7.2 to v3.7.3
* https://docs.projectcalico.org/v3.7/release-notes/
2019-06-13 23:36:35 -07:00
Dalton Hubble
0103bc06bb Define module required provider versions 2019-06-06 09:39:48 -07:00
Dalton Hubble
33d033f1a6 Migrate from Terraform v0.11.x to v0.12.x (breaking!)
* Terraform v0.12 is a major Terraform release with breaking changes
to the HCL language. In v0.11, it was required to use redundant brackets
as interpolation type hints to pass lists or to concat and flatten lists and
strings. In v0.12, that work-around is no longer supported. Lists are
represented as first-class objects and the redundant brackets create
nested lists. Consequently, it's not possible to pass lists in a way that
works with both v0.11 and v0.12 at the same time. We've made the
difficult choice to pursue a hard cutover to Terraform v0.12.x
* https://www.terraform.io/upgrade-guides/0-12.html#referring-to-list-variables
* Use expression syntax instead of interpolated strings, where suggested
* Define Terraform required_version ~> v0.12.0 (>= v0.12.0, < v0.13)
2019-06-06 09:39:46 -07:00
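A before/after sketch of the list-passing change, using a hypothetical `var.hosts` list:

```hcl
terraform {
  required_version = "~> 0.12.0"
}

module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=SHA"

  cluster_name = "example"
  # v0.11 required a bracket work-around like api_servers = ["${var.hosts}"],
  # which v0.12 treats as a nested list; pass the list value directly instead
  api_servers  = var.hosts
  etcd_servers = var.hosts
}
```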
Dalton Hubble
082921d679 Update Kubernetes from v1.14.2 to v1.14.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.14.md#v1143
2019-05-31 01:05:00 -07:00
Dalton Hubble
efd1cfd9bf Update CoreDNS from v1.3.1 to v1.5.0
* Add `ready` plugin and change the readinessProbe to check
default port 8181 to ensure all plugins are ready
* `upstream [ADDRESS]` defines upstream resolvers for external
services. If no address is given, resolution is against CoreDNS
itself, which is the default. So `upstream` can be removed
2019-05-27 00:07:59 -07:00
Dalton Hubble
85571f6dae Update Kubernetes from v1.14.1 to v1.14.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.14.md#v1142
2019-05-17 13:00:30 +02:00
Dalton Hubble
eca7c49fe1 Update Calico from v3.7.0 to v3.7.2
* https://docs.projectcalico.org/v3.7/release-notes/
2019-05-17 12:26:02 +02:00
Dalton Hubble
42b9e782b2 Update kube-router from v0.3.0 to v0.3.1
* kube-router is experimental and not supported
* https://github.com/cloudnativelabs/kube-router/releases/tag/v0.3.1
2019-05-17 12:20:23 +02:00
Dalton Hubble
fc7a6fb20a Change flannel port from 8472 to 4789
* Change flannel port from the kernel default 8472 to the
IANA assigned VXLAN port 4789
* Requires a change to firewall rules or security groups
depending on the platform (**action required!**)
* Why now? Calico now offers its own VXLAN backend so
standardizing on the IANA port simplifies configuration
* https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan
2019-05-06 21:23:08 -07:00
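The required firewall change depends on the platform; as a hypothetical AWS example (resource and security group names are illustrative), workers would need to allow UDP 4789 from one another:

```hcl
resource "aws_security_group_rule" "worker-vxlan" {
  security_group_id = aws_security_group.worker.id

  # allow the IANA-assigned VXLAN port between worker instances
  type      = "ingress"
  protocol  = "udp"
  from_port = 4789
  to_port   = 4789
  self      = true
}
```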
Dalton Hubble
b96d641f6d Update Calico from v3.6.1 to v3.7.0
* Accept a `network_encapsulation` variable to choose whether the
default IPPool should use ipip (default) or vxlan encapsulation
* Use `network_mtu` as the MTU for workload interfaces for ipip
or vxlan (although Calico can have IPPools with a mix, we're
picking ipip xor vxlan)
2019-05-05 20:41:53 -07:00
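A sketch of choosing vxlan encapsulation, assuming the README example's other arguments; the MTU value is illustrative:

```hcl
module "bootstrap" {
  # ... cluster_name, api_servers, etcd_servers as in the README example

  networking            = "calico"
  network_encapsulation = "vxlan" # default is "ipip"
  network_mtu           = 1450    # MTU used for workload interfaces
}
```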
Dalton Hubble
614defe090 Update kube-router from v0.2.5 to v0.3.0
* https://github.com/cloudnativelabs/kube-router/releases/tag/v0.3.0
* Recall, kube-router is experimental and not vouched for
as part of clusters
2019-05-04 11:38:19 -07:00
Dalton Hubble
a80eed2b6a Update Kubernetes from v1.14.0 to v1.14.1 2019-04-09 21:43:39 -07:00
Dalton Hubble
53b2520d70 Remove deprecated user-kubeconfig output
* Use kubeconfig-admin output instead
* https://github.com/poseidon/terraform-render-bootkube/pull/100
2019-04-09 21:41:26 -07:00
Dalton Hubble
feb6e4cb3e Fix a few ca_cert vars that are lists and should be strings
* Error introduced in prior commit #104
2019-04-07 11:59:33 -07:00
Dalton Hubble
88fd15c2f6 Remove support for using a pre-existing certificate authority
* Remove the `ca_certificate`, `ca_key_alg`, and `ca_private_key`
variables
* Typhoon does not plan to expose custom CA support. Continuing
to support it clutters the implementation and security auditing
* Using an existing CA certificate and private key has been
supported in terraform-render-bootkube only to match bootkube
2019-04-07 11:42:57 -07:00
Dalton Hubble
b9bef14a0b Add enable_aggregation option (defaults to false)
* Add an `enable_aggregation` variable to enable the kube-apiserver
aggregation layer for adding extension apiservers to clusters
* Aggregation is **disabled** by default. Typhoon recommends you not
enable aggregation. Consider whether less invasive ways to achieve
your goals are possible and whether those goals are well-founded
* Enabling aggregation and extension apiservers increases the attack
surface of a cluster and makes extensions a part of the control plane.
Admins must scrutinize and trust any extension apiserver used.
* Passing a v1.14 CNCF conformance test requires aggregation be enabled.
Having an option for aggregation keeps compliance, but retains the stricter
security posture on default clusters
2019-04-07 02:27:40 -07:00
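A sketch of opting in, assuming the README example's other arguments:

```hcl
module "bootstrap" {
  # ... cluster_name, api_servers, etcd_servers as in the README example

  # disabled by default; weigh the larger control plane attack surface
  enable_aggregation = true
}
```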
Dalton Hubble
a693381400 Update Kubernetes from v1.13.5 to v1.14.0 2019-03-31 17:45:25 -07:00
Dalton Hubble
bcb015e105 Update Calico from v3.6.0 to v3.6.1
* https://docs.projectcalico.org/v3.6/release-notes/
2019-03-31 17:41:15 -07:00
Dalton Hubble
da0321287b Update hyperkube from v1.13.4 to v1.13.5
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#v1135
2019-03-25 21:37:15 -07:00
Dalton Hubble
9862888bb2 Reduce calico-node CPU request from 250m to 150m
* calico-node uses only a small fraction of its CPU request
(i.e. reservation) even under stress. The unbounded limit
already allows usage to scale favorably in bursty cases
* Motivation: On instance types that skew memory-optimized
(e.g. GCP n1), over-requesting can push the system toward
overcommitment (alerts can be tuned)
* Overcommitment is not necessarily bad, but 250m seems too
generous a minimum given the actual usage
2019-03-24 11:55:56 -07:00
Dalton Hubble
23f81a5e8c Upgrade Calico from v3.5.2 to v3.6.0
* Add calico-ipam CRDs and RBAC permissions
* Switch IPAM from host-local to calico-ipam!
  * `calico-ipam` subnets `ippools` (defaults to pod CIDR) into
`ipamblocks` (defaults to /26, but set to /24 in Typhoon)
  * `host-local` subnets the pod CIDR based on the node PodCIDR
field (set via kube-controller-manager as /24's)
* Create a custom default IPv4 IPPool to ensure the block size
is kept at /24 to allow 110 pods per node (Kubernetes default)
* Retaining host-local was slightly preferred, but Calico v3.6
is migrating all usage to calico-ipam. The codepath that skipped
calico-ipam for KDD was removed
*  https://docs.projectcalico.org/v3.6/release-notes/
2019-03-18 22:28:48 -07:00
Dalton Hubble
6cda319b9d Revert "Update Calico from v3.5.2 to v3.6.0"
* Calico is not using host-local IPAM as desired
* This reverts commit e6e051ef47.
2019-03-18 21:32:23 -07:00
Dalton Hubble
e6e051ef47 Update Calico from v3.5.2 to v3.6.0
* Add calico-ipam CRDs and RBAC permissions
* Continue using host-local IPAM
*  https://docs.projectcalico.org/v3.6/release-notes/
2019-03-18 21:03:27 -07:00
Dalton Hubble
1528266595 Resolve in-addr.arpa and ip6.arpa zones with CoreDNS kubernetes plugin
* Resolve in-addr.arpa and ip6.arpa DNS PTR requests for Kubernetes
service IPs and pod IPs
* Previously, CoreDNS was configured to resolve in-addr.arpa PTR
records for service IPs (but not pod IPs)
2019-03-04 22:33:21 -08:00
Dalton Hubble
953521dbba Update hyperkube from v1.13.3 to v1.13.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#v1134
2019-02-28 22:22:35 -08:00
Dalton Hubble
0a7c4fda35 Update Calico from v3.5.1 to v3.5.2
* https://docs.projectcalico.org/v3.5/releases/
2019-02-25 21:20:47 -08:00
Dalton Hubble
593f0e3655 Add a readinessProbe to CoreDNS
* https://github.com/kubernetes/kubernetes/pull/74137
2019-02-23 13:11:19 -08:00
Dalton Hubble
c5f5aacce9 Assign Pod Priority Classes to control plane components
* Priority Admission Controller has been enabled since Typhoon
v1.11.1
* Assign cluster and node components a builtin priorityClassName
(higher is higher priority) to inform scheduler preemption,
scheduling order, and node out-of-resource eviction order
2019-02-17 17:12:46 -08:00
Dalton Hubble
4d315afd41 Update Calico from v3.5.0 to v3.5.1
* https://github.com/projectcalico/confd/pull/205
2019-02-09 11:45:38 -08:00
Dalton Hubble
c12a11c800 Update hyperkube from v1.13.2 to v1.13.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#v1133
2019-02-01 23:23:07 -08:00
Dalton Hubble
1de56ef7c8 Update kube-router from v0.2.4 to v0.2.5
* https://github.com/cloudnativelabs/kube-router/releases/tag/v0.2.5
2019-02-01 23:21:58 -08:00
Dalton Hubble
7dc8f8bf8c Switch CoreDNS to use the forward plugin instead of proxy
* Use the forward plugin to forward to upstream resolvers, instead
of the proxy plugin. The forward plugin is reported to be a faster
alternative since it can re-use open sockets
* https://coredns.io/explugins/forward/
* https://coredns.io/plugins/proxy/
* https://github.com/kubernetes/kubernetes/issues/73254
2019-01-30 22:19:13 -08:00
Dalton Hubble
c5bc23ef7a Update flannel from v0.10.0 to v0.11.0
* https://github.com/coreos/flannel/releases/tag/v0.11.0
2019-01-29 21:48:47 -08:00
Dalton Hubble
54f15b6c8c Update Calico from v3.4.0 to v3.5.0
* https://docs.projectcalico.org/v3.5/releases/
2019-01-27 16:25:57 -08:00
Dalton Hubble
7b06557b7a Reduce kube-controller-manager --pod-eviction-timeout to 1m
* Pods on preempted nodes should be moved to healthy nodes
more quickly (1 min instead of 5 minutes)
2019-01-27 16:20:01 -08:00
Dalton Hubble
ef99293eb2 Update CoreDNS from v1.3.0 to v1.3.1
* https://coredns.io/2019/01/13/coredns-1.3.1-release/
2019-01-15 21:22:40 -08:00
Dalton Hubble
e892e291b5 Restore Kubelet authorization to delete nodes
* Fix a regression caused by lowering the Kubelet TLS client
certificate to system:nodes group (#100) since dropping
cluster-admin dropped the Kubelet's ability to delete nodes.
* On clouds where workers can scale down (manual terraform apply,
AWS spot termination, Azure low priority deletion), worker shutdown
runs the delete-node.service to remove a node to prevent NotReady
nodes from accumulating
* Allow Kubelets to delete cluster nodes via system:nodes group. Kubelets
acting with the system:node and kubelet-delete ClusterRoles are still an
improvement over acting as cluster-admin
2019-01-14 23:26:41 -08:00
Dalton Hubble
2353c586a1 Update kube-router from v0.2.3 to v0.2.4
* https://github.com/cloudnativelabs/kube-router/releases/tag/v0.2.4
2019-01-12 14:19:36 -08:00
Dalton Hubble
bcbdddd8d0 Update hyperkube from v1.13.1 to v1.13.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#v1132
2019-01-11 23:59:24 -08:00
Dalton Hubble
f1e69f1d93 Re-enable kube-scheduler and kube-controller-manager HTTP ports
* Fix regression added in 48730c0f12, allow Prometheus to scrape
metrics from kube-scheduler and kube-controller-manager
2019-01-11 23:52:57 -08:00
Dalton Hubble
48730c0f12 Probe kube-scheduler and kube-controller-manager HTTPS ports
* Disable kube-scheduler and kube-controller-manager HTTP ports
2019-01-09 20:50:57 -08:00
Dalton Hubble
0e65e3567e Enable certificates.k8s.io API certificate issuance
* Allow kube-controller-manager to sign Approved CSRs using the
cluster CA private key to issue cluster certificates
* System components that need to use certificates signed by the
cluster CA can submit a CSR to the apiserver, have an admin
inspect and manually approve it, and be issued a certificate
* Admins should inspect CSRs very carefully to ensure their
origin and authorization level are appropriate
* https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/#approving-certificate-signing-requests
2019-01-06 17:17:03 -08:00
Dalton Hubble
4f8952a956 Disable anonymous auth on the bootstrap kube-apiserver
* Anonymous auth isn't used during bootstrapping and can
be disabled
2019-01-05 21:48:40 -08:00
Dalton Hubble
ea30087577 Structure control plane manifests neatly 2019-01-05 21:47:30 -08:00
Dalton Hubble
847ec5929b Consolidate both variants of the admin kubeconfig
* Provide an admin kubeconfig which includes a named context
and also sets that context as the current-context
* Retains support for both the KUBECONFIG=path style of usage
and adding many kubeconfigs to a ~/.kube/configs folder and
using `kubectl config use-context CLUSTER-context`
2019-01-05 14:56:45 -08:00
Dalton Hubble
f5ea389e8c Update CoreDNS from v1.2.6 to v1.3.0
* https://coredns.io/2018/12/15/coredns-1.3.0-release/
* Limit log plugin to just log error class
2019-01-05 13:21:10 -08:00
Dalton Hubble
3431a12ac1 Remove deprecated kube_dns_service_ip output
* Use cluster_dns_service_ip output instead
2019-01-05 13:11:15 -08:00
Dalton Hubble
a7bd306679 Add admin kubeconfig and limit Kubelet cert to system:nodes group
* Change Kubelet TLS client certificate to belong to the system:nodes
group instead of the system:masters group (more limited)
* Bind the system:node ClusterRole to the system:nodes group (yes,
the ClusterRole is singular)
* Generate separate admin.crt and admin.key files (which do still use
system:masters). Output kubeconfig-kubelet and kubeconfig-admin values
from the module
* Remove the kubeconfig output to force users to pick the correct
kubeconfig, depending on how the output is used (action required!)

Related:

* https://kubernetes.io/docs/reference/access-authn-authz/rbac/#core-component-roles

Note, NodeAuthorizer/NodeRestriction would be an enhancement, but to
work across platforms it effectively requires TLS bootstrapping, which
doesn't have a viable attestation strategy and clashes with CCM. This
change improves Kubelet limitations, but intentionally doesn't aim to
steer toward NodeAuthorizer/NodeRestriction
2019-01-02 23:08:09 -08:00
Dalton Hubble
f382415f2b Edit CA certificate CommonName to match upstream
* Consistency with https://kubernetes.io/docs/setup/certificates/#single-root-ca
2019-01-01 17:30:33 -08:00
Dalton Hubble
7bcca25043 Use a kube-apiserver ServiceAccount and ClusterRoleBinding
* Switch kube-apiserver from using the kube-system default ServiceAccount
(with cluster-admin) to using a kube-apiserver ServiceAccount bound to
cluster-admin (as before)
* Remove the default-sa ClusterRoleBinding that allowed kube-apiserver
and kube-scheduler (or other 3rd-party components added to kube-system)
to use the kube-system default ServiceAccount for cluster-admin
* Require all future components in kube-system define their own
ServiceAccount
2019-01-01 17:30:28 -08:00
Dalton Hubble
fa4c2d8a68 Use a kube-scheduler ServiceAccount and ClusterRoleBinding
* Switch kube-scheduler from using the kube-system default ServiceAccount
(with cluster-admin) to using a kube-scheduler ServiceAccount bound to
the builtin system:kube-scheduler and system:volume-scheduler
(required for StorageClass) ClusterRoles
* https://kubernetes.io/docs/reference/access-authn-authz/rbac/#core-component-roles
2019-01-01 17:29:36 -08:00
Dalton Hubble
d14348a368 Update Calico from v3.3.2 to v3.4.0
* Use an init container to install CNI plugins
* Update the calico-node ClusterRole
2018-12-15 18:04:25 -08:00
Dalton Hubble
51e3323a6d Update hyperkube from v1.13.0 to v1.13.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#v1131
2018-12-15 11:42:32 -08:00
Dalton Hubble
95e568935c Update Calico from v3.3.1 to v3.3.2
* https://docs.projectcalico.org/v3.3/releases/
2018-12-06 22:49:48 -08:00
Dalton Hubble
b101fddf6e Configure kube-router to use in-cluster-kubeconfig
* Use access token, but access apiserver via apiserver endpoint
rather than internal service IP
2018-12-06 22:39:59 -08:00
Dalton Hubble
cff13f9248 Update hyperkube from v1.12.3 to v1.13.0
* Remove controller-manager empty dir mount added for v1.12
https://github.com/kubernetes/kubernetes/issues/68973
* No longer required https://github.com/kubernetes/kubernetes/pull/69884
2018-12-03 20:42:14 -08:00
Dalton Hubble
9d6f0c31d3 Add experimental kube-router CNI provider
* Allow using kube-router for pod-to-pod networking
and for NetworkPolicy
2018-12-03 19:42:02 -08:00
Dalton Hubble
7dc6e199f9 Fix terraform fmt 2018-12-03 19:41:30 -08:00
Hielke Christian Braun
bfb3d23d1b Write etcd CA cert and key to the asset directory
* Provide the etcd CA key for administrator usage. Note that
the key should rarely, if ever, be used
2018-12-03 19:37:25 -08:00
Dalton Hubble
4021467b7f Update hyperkube from v1.12.2 to v1.12.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md/#v1123
2018-11-26 20:56:11 -08:00
Dalton Hubble
bffb5d5d23 Update pod-checkpointer image to query Kubelet secure api
* Updates pod-checkpointer to prefer the Kubelet secure
API (before falling back to the Kubelet read-only API that
is disabled on Typhoon clusters since
https://github.com/poseidon/typhoon/pull/324)
* Previously, pod-checkpointer checkpointed an initial set
of pods during bootstrapping so recovery from power cycling
clusters was unaffected, but logs were noisy
* https://github.com/kubernetes-incubator/bootkube/pull/1027
* https://github.com/kubernetes-incubator/bootkube/pull/1025
2018-11-26 20:11:01 -08:00
Dalton Hubble
dbf67da1cb Disable Calico usage reporting by default
* Calico Felix has been reporting anonymous usage data about
Calico version and cluster size
* https://docs.projectcalico.org/v3.3/reference/felix/configuration
* Add an enable_reporting variable and default to false
2018-11-18 23:41:19 -08:00
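A sketch of opting back in to reporting, assuming the README example's other arguments:

```hcl
module "bootstrap" {
  # ... cluster_name, api_servers, etcd_servers as in the README example

  networking       = "calico"
  enable_reporting = true # default false
}
```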
Dalton Hubble
3d9f957aec Update CoreDNS from v1.2.4 to v1.2.6
* https://coredns.io/2018/11/05/coredns-1.2.6-release/
2018-11-18 16:18:52 -08:00
Dalton Hubble
39f9afb336 Add resource request to flannel and mount /run/flannel
* Request 100m CPU without a limit (similar to Calico)
2018-11-11 15:56:13 -08:00
Dalton Hubble
3f3ab6b5c0 Enable CoreDNS loop and loadbalance plugins
* loop sends an initial query to detect infinite forwarding
loops in configured upstream DNS servers and fast-exits with
an error (it's a fatal misconfiguration on the network that
will otherwise cause resolvers to consume memory/CPU until
crashing, masking the problem)
* https://github.com/coredns/coredns/tree/master/plugin/loop
* loadbalance randomizes the ordering of A, AAAA, and MX records
in responses to provide round-robin load balancing (as usual,
clients may still cache responses though)
* https://github.com/coredns/coredns/tree/master/plugin/loadbalance
2018-11-10 17:33:30 -08:00
Dalton Hubble
1cb00c8270 Update README to correspond to bootkube v0.14.0 2018-11-10 17:32:47 -08:00
Dalton Hubble
d045a8e6b8 Structure flannel/Calico manifests consistently
* Organize flannel and Calico manifests to use consistent
naming, structure, and ordering to align
* Downside: Makes direct diff'ing with upstream harder, but
that's become difficult lately anyway, since Calico uses a
templating engine
2018-11-10 13:14:36 -08:00
Dalton Hubble
8742024bbf Update Calico from v3.3.0 to v3.3.1
* https://docs.projectcalico.org/v3.3/releases/
2018-11-10 12:41:32 -08:00
Dalton Hubble
365d089610 Set kube-apiserver's kubelet preferred address types
* Prefer InternalIP and ExternalIP over the node's hostname,
to match upstream behavior and kubeadm
* Previously, hostname-override was used to set node names
to internal IP's to work around some cloud providers not
resolving hostnames for instances (e.g. DO droplets)
2018-11-03 14:58:30 -07:00
Dalton Hubble
f39f8294c4 Update hyperkube from v1.12.1 to v1.12.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md/#v1122
2018-10-27 15:35:49 -07:00
Dalton Hubble
6a77775e52 Update CoreDNS from v1.2.2 to v1.2.4
* https://coredns.io/2018/10/17/coredns-1.2.4-release/
* https://coredns.io/2018/10/16/coredns-1.2.3-release/
2018-10-27 15:35:21 -07:00
Dalton Hubble
e0e5577d37 Update Calico from v3.2.3 to v3.3.0
* https://docs.projectcalico.org/v3.3/releases/
2018-10-23 20:26:48 -07:00
Dalton Hubble
79065baa8c Fix CoreDNS AntiAffinity to prefer spreading pods 2018-10-17 22:15:53 -07:00
Dalton Hubble
81f19507fa Update Kubernetes from v1.11.3 to v1.12.1
* Mount an empty dir for the controller-manager to work around
https://github.com/kubernetes/kubernetes/issues/68973
* Update coreos/pod-checkpointer to strip affinity from
checkpointed pod manifests. Kubernetes v1.12.0-rc.1 introduced
a default affinity that appears on checkpointed manifests, but
it prevented scheduling. Checkpointed pods should not have an
affinity; they're run directly by the Kubelet on the local node
* https://github.com/kubernetes-incubator/bootkube/issues/1001
* https://github.com/kubernetes/kubernetes/pull/68173
2018-10-16 20:03:04 -07:00
Dalton Hubble
2437023c10 Add docker/default seccomp profile to control plane pods
* By default, Kubernetes starts containers without the Docker
runtime's default seccomp profile (e.g. seccomp=unconfined)
* https://docs.docker.com/engine/security/seccomp/#pass-a-profile-for-a-container
2018-10-13 18:06:34 -07:00
Dalton Hubble
4e0ad77f96 Add livenessProbe to kube-proxy DaemonSet 2018-10-13 17:59:44 -07:00
Dalton Hubble
f7c2f8d590 Raise CoreDNS replica count to at least 2
* Run at least two replicas of CoreDNS to better support
rolling updates (previously, kube-dns had a pod nanny)
* On multi-master clusters, set the CoreDNS replica count
to match the number of masters (e.g. a 3-master cluster
previously used replicas:1, now replicas:3)
* Add AntiAffinity preferred rule to favor distributing
CoreDNS pods across nodes
2018-10-13 17:19:02 -07:00
Dalton Hubble
7797377d50 Raise scheduler/controller-manager replicas in multi-master
* Continue to ensure scheduler and controller-manager run
at least two replicas to support performing kubectl edits
on single-master clusters (no change)
* For multi-master clusters, set scheduler / controller-manager
replica count to the number of masters (e.g. a 3-master cluster
previously used replicas:2, now replicas:3)
2018-10-13 15:43:31 -07:00
Dalton Hubble
bccf3da096 Update Calico from v3.2.1 to v3.2.3
* https://github.com/projectcalico/calico/releases/tag/v3.2.2
* https://github.com/projectcalico/calico/releases/tag/v3.2.3
2018-10-02 15:59:50 +02:00
Dalton Hubble
9929abef7d Update CoreDNS from 1.1.3 to 1.2.2
* https://github.com/coredns/coredns/releases/tag/v1.2.2
* https://github.com/coredns/coredns/issues/2056
2018-10-02 15:58:07 +02:00
Dalton Hubble
5378e166ef Update hyperkube from v1.11.2 to v1.11.3
*  https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1113
2018-09-13 18:42:16 -07:00
Dalton Hubble
6f024c457e Update Calico from v3.1.3 to v3.2.1
* Most upstream changes were buried in calico#1884 which
switched from non-templated manifests to templating
* https://github.com/projectcalico/calico/pull/1884
* https://github.com/projectcalico/calico/pull/1853
* https://github.com/projectcalico/calico/pull/2069
* https://github.com/projectcalico/calico/pull/2032
* https://github.com/projectcalico/calico/pull/1841
* https://github.com/projectcalico/calico/pull/1770
2018-08-25 17:46:31 -07:00
Dalton Hubble
70c2839970 Update hyperkube from v1.11.1 to v1.11.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1112
2018-08-07 21:49:27 -07:00
Dalton Hubble
9e6fc7e697 Update hyperkube from v1.11.0 to v1.11.1
* Kubernetes v1.11.1 defaults to enabling the Priority
admission controller. List the Priority admission controller
explicitly for readability
2018-07-20 00:27:31 -07:00
Dalton Hubble
81ba300e71 Switch from kube-dns to CoreDNS
* Add system:coredns ClusterRole and binding
* Annotate CoreDNS service for Prometheus metrics scraping
* Remove kube-dns deployment, service, service account, and
variables
* Deprecate kube_dns_service_ip module output, use
cluster_dns_service_ip instead
2018-07-01 16:17:04 -07:00
Dalton Hubble
eb2dfa64de Explicitly disable apiserver 127.0.0.1 insecure port
* Although the --insecure-port flag is deprecated, apiserver
continues to default to listening on 127.0.0.1:8080
* Explicitly disable the insecure local listener since it's unused
* https://github.com/kubernetes/kubernetes/pull/59018#discussion_r177849954
* 5f3546b66f
2018-06-27 22:30:29 -07:00
Dalton Hubble
34992426f6 Update hyperkube from v1.10.5 to v1.11.0 2018-06-27 22:29:21 -07:00
Dalton Hubble
1d4db824f0 Update hyperkube from v1.10.4 to v1.10.5
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1105
2018-06-21 22:46:00 -07:00
Dalton Hubble
2bcf61b2b5 Change apiserver port from 443 to 6443
* Requires updating load balancers, firewall rules,
security groups, and potentially routers/balancers
* Temporarily allow apiserver_port override to accommodate
edge cases or migration
* https://github.com/kubernetes-incubator/bootkube/pull/789
2018-06-19 23:40:09 -07:00
Dalton Hubble
0e98e89e14 Update hyperkube from v1.10.3 to v1.10.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1104
2018-06-06 23:11:33 -07:00
Dalton Hubble
24e900af46 Update Calico from v3.1.2 to v3.1.3
* https://github.com/projectcalico/calico/releases/tag/v3.1.3
* https://github.com/projectcalico/cni-plugin/releases/tag/v3.1.3
2018-05-30 21:17:46 -07:00
Dalton Hubble
3fa3c2d73b Update hyperkube from v1.10.2 to v1.10.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1103
2018-05-21 20:17:36 -07:00
Dalton Hubble
2a776e7054 Update Calico from v3.1.1 to v3.1.2
* https://github.com/projectcalico/calico/releases/tag/v3.1.2
2018-05-21 20:15:49 -07:00
Dalton Hubble
28f68db28e Switch apiserver certificate to system:masters org
* A Kubernetes apiserver should be authorized to make requests
to kubelets using an admin role associated with system:masters
* Kubelet defaults to AlwaysAllow so an apiserver that presented
a valid certificate had all access to the Kubelet. With Webhook
authorization, we're making that admin access explicit
* It's important the apiserver be able to perform or proxy to
kubelets for kubectl log, exec, port-forward, etc.
* https://github.com/poseidon/typhoon/issues/215
2018-05-13 23:04:25 -07:00
Dalton Hubble
305c813234 Allow specifying the Calico IP autodetection method
* Calico's default method "first-found" is appropriate for
single-NIC or bonded-NIC bare-metal and for clouds
* On bare-metal machines with multiple NICs, first-found
may result in Calico pods picking an unintended IP address
(perhaps an admin has dedicated certain NICs to certain
purposes). It may be helpful to use `can-reach=DEST` or
`interface=REGEX` to select the host's address
* Caveat: autodetection method is set for the Calico
DaemonSet so the choice must be appropriate for all
machines in the cluster.
* https://docs.projectcalico.org/v3.1/reference/node/configuration#ip-autodetection-methods
2018-05-13 19:57:44 -07:00
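A sketch of selecting an autodetection method, assuming the README example's other arguments; the variable name and destination address below are illustrative, not confirmed by this commit:

```hcl
module "bootstrap" {
  # ... cluster_name, api_servers, etcd_servers as in the README example

  networking = "calico"
  # e.g. prefer the NIC that can reach a given destination
  network_ip_autodetection_method = "can-reach=10.0.0.1"
}
```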
Dalton Hubble
911f411508 Update kube-dns from v1.14.9 to v1.14.10
* https://github.com/kubernetes/kubernetes/pull/62676
2018-04-28 00:39:44 -07:00
Dalton Hubble
a43af2562c Update hyperkube from v1.10.1 to v1.10.2 2018-04-27 23:50:57 -07:00
Ruben Das
dc721063af Fix typo in README module example 2018-04-27 23:49:58 -07:00
Dalton Hubble
6ec5e3c3af Update Calico from v3.0.4 to v3.1.1
* https://github.com/projectcalico/calico/releases/tag/v3.1.1
* https://github.com/projectcalico/calico/releases/tag/v3.1.0
* CNI config now defaults to having Kubelet CNI plugin read
from /var/lib/calico/nodename
* https://github.com/projectcalico/calico/pull/1722
2018-04-21 15:09:06 -07:00
Dalton Hubble
db36b92abc Update hyperkube from v1.10.0 to v1.10.1 2018-04-12 20:09:52 -07:00
80 changed files with 2015 additions and 1645 deletions

6
.github/dependabot.yaml vendored Normal file
@@ -0,0 +1,6 @@
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "daily"

21
.github/workflows/test.yaml vendored Normal file
@@ -0,0 +1,21 @@
name: test
on:
  push:
    branches:
      - main
  pull_request:
jobs:
  terraform:
    name: fmt
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v6
      - name: terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: 1.11.1
      - name: fmt
        run: terraform fmt -check -diff -recursive

README.md

@@ -1,50 +1,42 @@
# terraform-render-bootkube
# terraform-render-bootstrap
[![Workflow](https://github.com/poseidon/terraform-render-bootstrap/actions/workflows/test.yaml/badge.svg)](https://github.com/poseidon/terraform-render-bootstrap/actions/workflows/test.yaml?query=branch%3Amain)
[![Sponsors](https://img.shields.io/github/sponsors/poseidon?logo=github)](https://github.com/sponsors/poseidon)
[![Mastodon](https://img.shields.io/badge/follow-news-6364ff?logo=mastodon)](https://fosstodon.org/@typhoon)
`terraform-render-bootkube` is a Terraform module that renders [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube) assets for bootstrapping a Kubernetes cluster.
`terraform-render-bootstrap` is a Terraform module that renders TLS certificates, static pods, and manifests for bootstrapping a Kubernetes cluster.
## Audience
`terraform-render-bootkube` is a low-level component of the [Typhoon](https://github.com/poseidon/typhoon) Kubernetes distribution. Use Typhoon modules to create and manage Kubernetes clusters across supported platforms. Use the bootkube module if you'd like to customize a Kubernetes control plane or build your own distribution.
`terraform-render-bootstrap` is a low-level component of the [Typhoon](https://github.com/poseidon/typhoon) Kubernetes distribution. Use Typhoon modules to create and manage Kubernetes clusters across supported platforms. Use the bootstrap module if you'd like to customize a Kubernetes control plane or build your own distribution.
## Usage
Use the module to declare bootkube assets. Check [variables.tf](variables.tf) for options and [terraform.tfvars.example](terraform.tfvars.example) for examples.
Use the module to declare bootstrap assets. Check [variables.tf](variables.tf) for options and [terraform.tfvars.example](terraform.tfvars.example) for examples.
```hcl
module "bootkube" {
source = "git://https://github.com/poseidon/terraform-render-bootkube.git?ref=SHA"
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=SHA"
cluster_name = "example"
api_servers = ["node1.example.com"]
etcd_servers = ["node1.example.com"]
asset_dir = "/home/core/clusters/mycluster"
}
```
Generate the assets.
Generate assets in Terraform state.
```sh
terraform init
terraform get --update
terraform plan
terraform apply
```
Find bootkube assets rendered to the `asset_dir` path. That's it.
To inspect and write assets locally (e.g. debugging) use the `assets_dist` Terraform output.
### Comparison
Render bootkube assets directly with bootkube v0.12.0.
```sh
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --etcd-servers=https://node1.example.com:2379
```
Compare assets. Rendered assets may differ slightly from bootkube assets to reflect decisions made by the [Typhoon](https://github.com/poseidon/typhoon) distribution.
```sh
pushd /home/core/mycluster
mv manifests-networking/* manifests
popd
diff -rw assets /home/core/mycluster
resource local_file "assets" {
for_each = module.bootstrap.assets_dist
filename = "some-assets/${each.key}"
content = each.value
}
```

@@ -1,85 +0,0 @@
# Self-hosted Kubernetes bootstrap-manifests
resource "template_dir" "bootstrap-manifests" {
source_dir = "${path.module}/resources/bootstrap-manifests"
destination_dir = "${var.asset_dir}/bootstrap-manifests"
vars {
hyperkube_image = "${var.container_images["hyperkube"]}"
etcd_servers = "${join(",", formatlist("https://%s:2379", var.etcd_servers))}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
trusted_certs_dir = "${var.trusted_certs_dir}"
}
}
# Self-hosted Kubernetes manifests
resource "template_dir" "manifests" {
source_dir = "${path.module}/resources/manifests"
destination_dir = "${var.asset_dir}/manifests"
vars {
hyperkube_image = "${var.container_images["hyperkube"]}"
pod_checkpointer_image = "${var.container_images["pod_checkpointer"]}"
kubedns_image = "${var.container_images["kubedns"]}"
kubedns_dnsmasq_image = "${var.container_images["kubedns_dnsmasq"]}"
kubedns_sidecar_image = "${var.container_images["kubedns_sidecar"]}"
etcd_servers = "${join(",", formatlist("https://%s:2379", var.etcd_servers))}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
kube_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
trusted_certs_dir = "${var.trusted_certs_dir}"
ca_cert = "${base64encode(var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_certificate)}"
server = "${format("https://%s:443", element(var.api_servers, 0))}"
apiserver_key = "${base64encode(tls_private_key.apiserver.private_key_pem)}"
apiserver_cert = "${base64encode(tls_locally_signed_cert.apiserver.cert_pem)}"
serviceaccount_pub = "${base64encode(tls_private_key.service-account.public_key_pem)}"
serviceaccount_key = "${base64encode(tls_private_key.service-account.private_key_pem)}"
etcd_ca_cert = "${base64encode(tls_self_signed_cert.etcd-ca.cert_pem)}"
etcd_client_cert = "${base64encode(tls_locally_signed_cert.client.cert_pem)}"
etcd_client_key = "${base64encode(tls_private_key.client.private_key_pem)}"
}
}
# Generated kubeconfig
resource "local_file" "kubeconfig" {
content = "${data.template_file.kubeconfig.rendered}"
filename = "${var.asset_dir}/auth/kubeconfig"
}
# Generated kubeconfig with user-context
resource "local_file" "user-kubeconfig" {
content = "${data.template_file.user-kubeconfig.rendered}"
filename = "${var.asset_dir}/auth/${var.cluster_name}-config"
}
data "template_file" "kubeconfig" {
template = "${file("${path.module}/resources/kubeconfig")}"
vars {
ca_cert = "${base64encode(var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_certificate)}"
kubelet_cert = "${base64encode(tls_locally_signed_cert.kubelet.cert_pem)}"
kubelet_key = "${base64encode(tls_private_key.kubelet.private_key_pem)}"
server = "${format("https://%s:443", element(var.api_servers, 0))}"
}
}
data "template_file" "user-kubeconfig" {
template = "${file("${path.module}/resources/user-kubeconfig")}"
vars {
name = "${var.cluster_name}"
ca_cert = "${base64encode(var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_certificate)}"
kubelet_cert = "${base64encode(tls_locally_signed_cert.kubelet.cert_pem)}"
kubelet_key = "${base64encode(tls_private_key.kubelet.private_key_pem)}"
server = "${format("https://%s:443", element(var.api_servers, 0))}"
}
}

68
auth.tf Normal file
@@ -0,0 +1,68 @@
locals {
# component kubeconfigs assets map
auth_kubeconfigs = {
"auth/admin.conf" = local.kubeconfig-admin,
"auth/controller-manager.conf" = local.kubeconfig-controller-manager
"auth/scheduler.conf" = local.kubeconfig-scheduler
}
}
locals {
# Generated admin kubeconfig to bootstrap control plane
kubeconfig-admin = templatefile("${path.module}/resources/kubeconfig-admin",
{
name = var.cluster_name
ca_cert = base64encode(tls_self_signed_cert.kube-ca.cert_pem)
kubelet_cert = base64encode(tls_locally_signed_cert.admin.cert_pem)
kubelet_key = base64encode(tls_private_key.admin.private_key_pem)
server = format("https://%s:%s", var.api_servers[0], var.external_apiserver_port)
}
)
# Generated kube-controller-manager kubeconfig
kubeconfig-controller-manager = templatefile("${path.module}/resources/kubeconfig-admin",
{
name = var.cluster_name
ca_cert = base64encode(tls_self_signed_cert.kube-ca.cert_pem)
kubelet_cert = base64encode(tls_locally_signed_cert.controller-manager.cert_pem)
kubelet_key = base64encode(tls_private_key.controller-manager.private_key_pem)
server = format("https://%s:%s", var.api_servers[0], var.external_apiserver_port)
}
)
# Generated kube-controller-manager kubeconfig
kubeconfig-scheduler = templatefile("${path.module}/resources/kubeconfig-admin",
{
name = var.cluster_name
ca_cert = base64encode(tls_self_signed_cert.kube-ca.cert_pem)
kubelet_cert = base64encode(tls_locally_signed_cert.scheduler.cert_pem)
kubelet_key = base64encode(tls_private_key.scheduler.private_key_pem)
server = format("https://%s:%s", var.api_servers[0], var.external_apiserver_port)
}
)
# Generated kubeconfig to bootstrap Kubelets
kubeconfig-bootstrap = templatefile("${path.module}/resources/kubeconfig-bootstrap",
{
ca_cert = base64encode(tls_self_signed_cert.kube-ca.cert_pem)
server = format("https://%s:%s", var.api_servers[0], var.external_apiserver_port)
token_id = random_password.bootstrap-token-id.result
token_secret = random_password.bootstrap-token-secret.result
}
)
}
# Generate a cryptographically random token id (public)
resource "random_password" "bootstrap-token-id" {
length = 6
upper = false
special = false
}
# Generate a cryptographically random token secret
resource "random_password" "bootstrap-token-secret" {
length = 16
upper = false
special = false
}
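The two random_password resources above are what the kubelet bootstrap flow hinges on: kubelets authenticate with the token "<token-id>.<token-secret>", and the token id also names the bootstrap-token Secret (see the bootstrap templates later in this diff). A minimal, illustrative sketch of how the pieces combine (not part of this module):

locals {
  # The kubelet bootstrap token is the concatenation "<token-id>.<token-secret>",
  # exactly as rendered into the kubeconfig-bootstrap template and the
  # bootstrap-token Secret manifest.
  bootstrap_token_example = "${random_password.bootstrap-token-id.result}.${random_password.bootstrap-token-secret.result}"
}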

View File

@@ -1,28 +1,36 @@
# Assets generated only when certain options are chosen
resource "template_dir" "flannel-manifests" {
count = "${var.networking == "flannel" ? 1 : 0}"
source_dir = "${path.module}/resources/flannel"
destination_dir = "${var.asset_dir}/manifests-networking"
locals {
# flannel manifests map
# { manifests/network/manifest.yaml => content }
flannel_manifests = {
for name in fileset("${path.module}/resources/flannel", "*.yaml") :
"manifests/network/${name}" => templatefile(
"${path.module}/resources/flannel/${name}",
{
flannel_image = var.container_images["flannel"]
flannel_cni_image = var.container_images["flannel_cni"]
pod_cidr = var.pod_cidr
daemonset_tolerations = var.daemonset_tolerations
}
)
if var.components.enable && var.components.flannel.enable && var.networking == "flannel"
}
vars {
flannel_image = "${var.container_images["flannel"]}"
flannel_cni_image = "${var.container_images["flannel_cni"]}"
pod_cidr = "${var.pod_cidr}"
# cilium manifests map
# { manifests/network/manifest.yaml => content }
cilium_manifests = {
for name in fileset("${path.module}/resources/cilium", "**/*.yaml") :
"manifests/network/${name}" => templatefile(
"${path.module}/resources/cilium/${name}",
{
cilium_agent_image = var.container_images["cilium_agent"]
cilium_operator_image = var.container_images["cilium_operator"]
pod_cidr = var.pod_cidr
daemonset_tolerations = var.daemonset_tolerations
}
)
if var.components.enable && var.components.cilium.enable && var.networking == "cilium"
}
}
resource "template_dir" "calico-manifests" {
count = "${var.networking == "calico" ? 1 : 0}"
source_dir = "${path.module}/resources/calico"
destination_dir = "${var.asset_dir}/manifests-networking"
vars {
calico_image = "${var.container_images["calico"]}"
calico_cni_image = "${var.container_images["calico_cni"]}"
network_mtu = "${var.network_mtu}"
pod_cidr = "${var.pod_cidr}"
}
}

77
manifests.tf Normal file
View File

@@ -0,0 +1,77 @@
locals {
# Kubernetes static pod manifests map
# { static-manifests/manifest.yaml => content }
static_manifests = {
for name in fileset("${path.module}/resources/static-manifests", "*.yaml") :
"static-manifests/${name}" => templatefile(
"${path.module}/resources/static-manifests/${name}",
{
kube_apiserver_image = var.container_images["kube_apiserver"]
kube_controller_manager_image = var.container_images["kube_controller_manager"]
kube_scheduler_image = var.container_images["kube_scheduler"]
etcd_servers = join(",", formatlist("https://%s:2379", var.etcd_servers))
pod_cidr = var.pod_cidr
service_cidr = var.service_cidr
service_account_issuer = var.service_account_issuer
aggregation_flags = var.enable_aggregation ? indent(4, local.aggregation_flags) : ""
}
)
}
# Kubernetes control plane manifests map
# { manifests/manifest.yaml => content }
manifests = merge({
for name in fileset("${path.module}/resources/manifests", "**/*.yaml") :
"manifests/${name}" => templatefile(
"${path.module}/resources/manifests/${name}",
{
server = format("https://%s:%s", var.api_servers[0], var.external_apiserver_port)
apiserver_host = var.api_servers[0]
apiserver_port = var.external_apiserver_port
token_id = random_password.bootstrap-token-id.result
token_secret = random_password.bootstrap-token-secret.result
}
)
},
# CoreDNS manifests (optional)
{
for name in fileset("${path.module}/resources/coredns", "*.yaml") :
"manifests/coredns/${name}" => templatefile(
"${path.module}/resources/coredns/${name}",
{
coredns_image = var.container_images["coredns"]
control_plane_replicas = max(2, length(var.etcd_servers))
cluster_domain_suffix = var.cluster_domain_suffix
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
}
) if var.components.enable && var.components.coredns.enable
},
# kube-proxy manifests (optional)
{
for name in fileset("${path.module}/resources/kube-proxy", "*.yaml") :
"manifests/kube-proxy/${name}" => templatefile(
"${path.module}/resources/kube-proxy/${name}",
{
kube_proxy_image = var.container_images["kube_proxy"]
pod_cidr = var.pod_cidr
daemonset_tolerations = var.daemonset_tolerations
}
) if var.components.enable && var.components.kube_proxy.enable && var.networking != "cilium"
}
)
}
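The CoreDNS Service IP is computed with cidrhost(var.service_cidr, 10), i.e. the tenth host address in the service CIDR. A quick sketch of the arithmetic, assuming a service_cidr of 10.3.0.0/16 (an assumed value for illustration, not taken from this diff):

output "example_cluster_dns_ip" {
  # cidrhost("10.3.0.0/16", 10) evaluates to "10.3.0.10"
  value = cidrhost("10.3.0.0/16", 10)
}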
locals {
aggregation_flags = <<EOF
- --proxy-client-cert-file=/etc/kubernetes/pki/aggregation-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/aggregation-client.key
- --requestheader-client-ca-file=/etc/kubernetes/pki/aggregation-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
EOF
}
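The aggregation_flags heredoc is interpolated into the kube-apiserver static pod template via indent(4, local.aggregation_flags) in the static_manifests local above; indent() adds four spaces to every line after the first, so the extra flags line up under the container's already-indented command list in the rendered YAML. A minimal sketch of the effect (illustrative only; the actual static pod template file is not shown in this diff):

output "example_indented_flags" {
  # Every line of the heredoc except the first gains four leading spaces,
  # letting it be spliced directly into an indented YAML list.
  value = indent(4, local.aggregation_flags)
}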

View File

@@ -1,69 +1,76 @@
output "id" {
value = "${sha1("${template_dir.bootstrap-manifests.id} ${local_file.kubeconfig.id}")}"
output "cluster_dns_service_ip" {
value = cidrhost(var.service_cidr, 10)
}
output "content_hash" {
value = "${sha1("${template_dir.bootstrap-manifests.id} ${template_dir.manifests.id}")}"
// Generated kubeconfig for Kubelets (i.e. lower privilege than admin)
output "kubeconfig-kubelet" {
value = local.kubeconfig-bootstrap
sensitive = true
}
output "kube_dns_service_ip" {
value = "${cidrhost(var.service_cidr, 10)}"
// Generated kubeconfig for admins (i.e. human super-user)
output "kubeconfig-admin" {
value = local.kubeconfig-admin
sensitive = true
}
output "kubeconfig" {
value = "${data.template_file.kubeconfig.rendered}"
}
output "user-kubeconfig" {
value = "${data.template_file.user-kubeconfig.rendered}"
# assets to distribute to controllers
# { some/path => content }
output "assets_dist" {
# combine maps of assets
value = merge(
local.auth_kubeconfigs,
local.etcd_tls,
local.kubernetes_tls,
local.aggregation_tls,
local.static_manifests,
local.manifests,
local.flannel_manifests,
local.cilium_manifests,
)
sensitive = true
}
# etcd TLS assets
output "etcd_ca_cert" {
value = "${tls_self_signed_cert.etcd-ca.cert_pem}"
value = tls_self_signed_cert.etcd-ca.cert_pem
sensitive = true
}
output "etcd_client_cert" {
value = "${tls_locally_signed_cert.client.cert_pem}"
value = tls_locally_signed_cert.client.cert_pem
sensitive = true
}
output "etcd_client_key" {
value = "${tls_private_key.client.private_key_pem}"
value = tls_private_key.client.private_key_pem
sensitive = true
}
output "etcd_server_cert" {
value = "${tls_locally_signed_cert.server.cert_pem}"
value = tls_locally_signed_cert.server.cert_pem
sensitive = true
}
output "etcd_server_key" {
value = "${tls_private_key.server.private_key_pem}"
value = tls_private_key.server.private_key_pem
sensitive = true
}
output "etcd_peer_cert" {
value = "${tls_locally_signed_cert.peer.cert_pem}"
value = tls_locally_signed_cert.peer.cert_pem
sensitive = true
}
output "etcd_peer_key" {
value = "${tls_private_key.peer.private_key_pem}"
value = tls_private_key.peer.private_key_pem
sensitive = true
}
# Some platforms may need to reconstruct the kubeconfig directly in user-data.
# That can't be done with the way template_file interpolates multi-line
# contents so the raw components of the kubeconfig may be needed.
# Kubernetes TLS assets
output "ca_cert" {
value = "${base64encode(var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_certificate)}"
}
output "kubelet_cert" {
value = "${base64encode(tls_locally_signed_cert.kubelet.cert_pem)}"
}
output "kubelet_key" {
value = "${base64encode(tls_private_key.kubelet.private_key_pem)}"
}
output "server" {
value = "${format("https://%s:443", element(var.api_servers, 0))}"
output "service_account_public_key" {
value = tls_private_key.service-account.public_key_pem
}
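assets_dist is the map a platform module is expected to consume, copying each { path => content } entry onto controller nodes. One possible consumer, sketched with assumed names (module.bootstrap, the assets/ staging directory, and the local_file approach are illustrative, not part of this repository):

resource "local_file" "distributed_assets" {
  # nonsensitive() is required because assets_dist is marked sensitive
  for_each = nonsensitive(module.bootstrap.assets_dist)

  filename = "${path.cwd}/assets/${each.key}"
  content  = each.value
}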

View File

@@ -1,57 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: bootstrap-kube-apiserver
namespace: kube-system
spec:
containers:
- name: kube-apiserver
image: ${hyperkube_image}
command:
- /hyperkube
- apiserver
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --authorization-mode=RBAC
- --bind-address=0.0.0.0
- --client-ca-file=/etc/kubernetes/secrets/ca.crt
- --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
- --etcd-cafile=/etc/kubernetes/secrets/etcd-client-ca.crt
- --etcd-certfile=/etc/kubernetes/secrets/etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/secrets/etcd-client.key
- --etcd-servers=${etcd_servers}
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
- --secure-port=443
- --service-account-key-file=/etc/kubernetes/secrets/service-account.pub
- --service-cluster-ip-range=${service_cidr}
- --cloud-provider=${cloud_provider}
- --storage-backend=etcd3
- --tls-cert-file=/etc/kubernetes/secrets/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/secrets/apiserver.key
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
- mountPath: /etc/kubernetes/secrets
name: secrets
readOnly: true
- mountPath: /var/lock
name: var-lock
readOnly: false
hostNetwork: true
volumes:
- name: secrets
hostPath:
path: /etc/kubernetes/bootstrap-secrets
- name: ssl-certs-host
hostPath:
path: ${trusted_certs_dir}
- name: var-lock
hostPath:
path: /var/lock

View File

@@ -1,36 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: bootstrap-kube-controller-manager
namespace: kube-system
spec:
containers:
- name: kube-controller-manager
image: ${hyperkube_image}
command:
- ./hyperkube
- controller-manager
- --allocate-node-cidrs=true
- --cluster-cidr=${pod_cidr}
- --service-cluster-ip-range=${service_cidr}
- --cloud-provider=${cloud_provider}
- --configure-cloud-routes=false
- --kubeconfig=/etc/kubernetes/kubeconfig
- --leader-elect=true
- --root-ca-file=/etc/kubernetes/bootstrap-secrets/ca.crt
- --service-account-private-key-file=/etc/kubernetes/bootstrap-secrets/service-account.key
volumeMounts:
- name: kubernetes
mountPath: /etc/kubernetes
readOnly: true
- name: ssl-host
mountPath: /etc/ssl/certs
readOnly: true
hostNetwork: true
volumes:
- name: kubernetes
hostPath:
path: /etc/kubernetes
- name: ssl-host
hostPath:
path: ${trusted_certs_dir}

View File

@@ -1,23 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: bootstrap-kube-scheduler
namespace: kube-system
spec:
containers:
- name: kube-scheduler
image: ${hyperkube_image}
command:
- ./hyperkube
- scheduler
- --kubeconfig=/etc/kubernetes/kubeconfig
- --leader-elect=true
volumeMounts:
- name: kubernetes
mountPath: /etc/kubernetes
readOnly: true
hostNetwork: true
volumes:
- name: kubernetes
hostPath:
path: /etc/kubernetes

View File

@@ -1,13 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico BGP Configuration
kind: CustomResourceDefinition
metadata:
name: bgpconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPConfiguration
plural: bgpconfigurations
singular: bgpconfiguration

View File

@@ -1,13 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico BGP Peers
kind: CustomResourceDefinition
metadata:
name: bgppeers.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPPeer
plural: bgppeers
singular: bgppeer

View File

@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system

View File

@@ -1,68 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: calico-node
rules:
- apiGroups: [""]
resources:
- namespaces
verbs:
- get
- list
- watch
- apiGroups: [""]
resources:
- pods/status
verbs:
- update
- apiGroups: [""]
resources:
- pods
verbs:
- get
- list
- watch
- patch
- apiGroups: [""]
resources:
- services
verbs:
- get
- apiGroups: [""]
resources:
- endpoints
verbs:
- get
- apiGroups: [""]
resources:
- nodes
verbs:
- get
- list
- update
- watch
- apiGroups: ["extensions"]
resources:
- networkpolicies
verbs:
- get
- list
- watch
- apiGroups: ["crd.projectcalico.org"]
resources:
- globalfelixconfigs
- felixconfigurations
- bgppeers
- globalbgpconfigs
- bgpconfigurations
- ippools
- globalnetworkpolicies
- globalnetworksets
- networkpolicies
- clusterinformations
verbs:
- create
- get
- list
- update
- watch

View File

@@ -1,40 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: calico-config
namespace: kube-system
data:
# Disable Typha for now.
typha_service_name: "none"
# The CNI network configuration to install on each node.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": ${network_mtu},
"ipam": {
"type": "host-local",
"subnet": "usePodCidr"
},
"policy": {
"type": "k8s",
"k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
},
"kubernetes": {
"k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
}
]
}

View File

@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system

View File

@@ -1,152 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: calico-node
spec:
hostNetwork: true
serviceAccountName: calico-node
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
containers:
- name: calico-node
image: ${calico_image}
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
value: "kubernetes"
# Enable felix info logging.
- name: FELIX_LOGSEVERITYSCREEN
value: "info"
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPV6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
value: "${network_mtu}"
# Wait for the datastore.
- name: WAIT_FOR_DATASTORE
value: "true"
# The Calico IPv4 pool CIDR (should match `--cluster-cidr`).
- name: CALICO_IPV4POOL_CIDR
value: "${pod_cidr}"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "Always"
# Enable IP-in-IP within Felix.
- name: FELIX_IPINIPENABLED
value: "true"
# Typha support: controlled by the ConfigMap.
- name: FELIX_TYPHAK8SSERVICENAME
valueFrom:
configMapKeyRef:
name: calico-config
key: typha_service_name
# Set node name based on k8s nodeName.
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
livenessProbe:
httpGet:
path: /liveness
port: 9099
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
httpGet:
path: /readiness
port: 9099
periodSeconds: 10
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
- mountPath: /var/lib/calico
name: var-lib-calico
readOnly: false
# Install Calico CNI binaries and CNI network config file on nodes
- name: install-cni
image: ${calico_cni_image}
command: ["/install-cni.sh"]
env:
# Name of the CNI config file to create on each node.
- name: CNI_CONF_NAME
value: "10-calico.conflist"
# Contents of the CNI config to create on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
# Set node name based on k8s nodeName
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CNI_NET_DIR
value: "/etc/kubernetes/cni/net.d"
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
terminationGracePeriodSeconds: 0
volumes:
# Used by calico/node
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: var-lib-calico
hostPath:
path: /var/lib/calico
# Used by install-cni
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/kubernetes/cni/net.d

View File

@@ -1,13 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Cluster Information
kind: CustomResourceDefinition
metadata:
name: clusterinformations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: ClusterInformation
plural: clusterinformations
singular: clusterinformation

View File

@@ -1,13 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Felix Configuration
kind: CustomResourceDefinition
metadata:
name: felixconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: FelixConfiguration
plural: felixconfigurations
singular: felixconfiguration

View File

@@ -1,13 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Global Network Policies
kind: CustomResourceDefinition
metadata:
name: globalnetworkpolicies.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkPolicy
plural: globalnetworkpolicies
singular: globalnetworkpolicy

View File

@@ -1,13 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Global Network Sets
kind: CustomResourceDefinition
metadata:
name: globalnetworksets.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkSet
plural: globalnetworksets
singular: globalnetworkset

View File

@@ -1,13 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico IP Pools
kind: CustomResourceDefinition
metadata:
name: ippools.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPPool
plural: ippools
singular: ippool

View File

@@ -1,13 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
description: Calico Network Policies
kind: CustomResourceDefinition
metadata:
name: networkpolicies.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkPolicy
plural: networkpolicies
singular: networkpolicy

View File

@@ -0,0 +1,27 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cilium-operator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cilium-operator
subjects:
- kind: ServiceAccount
name: cilium-operator
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cilium-agent
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cilium-agent
subjects:
- kind: ServiceAccount
name: cilium-agent
namespace: kube-system

View File

@@ -0,0 +1,188 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cilium-operator
rules:
- apiGroups:
- ""
resources:
# to automatically delete [core|kube]dns pods so that they start being
# managed by Cilium
- pods
verbs:
- get
- list
- watch
- delete
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
- nodes/status
verbs:
- patch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
# to perform LB IP allocation for BGP
- services/status
verbs:
- update
- apiGroups:
- ""
resources:
# to perform the translation of a CNP that contains `ToGroup` to its endpoints
- services
- endpoints
# to check apiserver connectivity
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- cilium.io
resources:
- ciliumnetworkpolicies
- ciliumnetworkpolicies/status
- ciliumnetworkpolicies/finalizers
- ciliumclusterwidenetworkpolicies
- ciliumclusterwidenetworkpolicies/status
- ciliumclusterwidenetworkpolicies/finalizers
- ciliumendpoints
- ciliumendpoints/status
- ciliumendpoints/finalizers
- ciliumnodes
- ciliumnodes/status
- ciliumnodes/finalizers
- ciliumidentities
- ciliumidentities/status
- ciliumidentities/finalizers
- ciliumlocalredirectpolicies
- ciliumlocalredirectpolicies/status
- ciliumlocalredirectpolicies/finalizers
- ciliumendpointslices
- ciliumloadbalancerippools
- ciliumloadbalancerippools/status
- ciliumcidrgroups
- ciliuml2announcementpolicies
- ciliuml2announcementpolicies/status
- ciliumpodippools
verbs:
- '*'
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- create
- get
- list
- update
- watch
# Cilium operator leader-elects among multiple operator replicas
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cilium-agent
rules:
- apiGroups:
- networking.k8s.io
resources:
- networkpolicies
verbs:
- get
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- namespaces
- services
- pods
- endpoints
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- create
- get
- list
- watch
- update
- apiGroups:
- cilium.io
resources:
- ciliumnetworkpolicies
- ciliumnetworkpolicies/status
- ciliumclusterwidenetworkpolicies
- ciliumclusterwidenetworkpolicies/status
- ciliumendpoints
- ciliumendpoints/status
- ciliumnodes
- ciliumnodes/status
- ciliumidentities
- ciliumidentities/status
- ciliumlocalredirectpolicies
- ciliumlocalredirectpolicies/status
- ciliumegressnatpolicies
- ciliumendpointslices
- ciliumcidrgroups
- ciliuml2announcementpolicies
- ciliuml2announcementpolicies/status
- ciliumpodippools
verbs:
- '*'

View File

@@ -0,0 +1,175 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: cilium
namespace: kube-system
data:
# Identity allocation mode selects how identities are shared between cilium
# nodes by setting how they are stored. The options are "crd" or "kvstore".
# - "crd" stores identities in kubernetes as CRDs (custom resource definition).
# These can be queried with:
# kubectl get ciliumid
# - "kvstore" stores identities in a kvstore, etcd or consul, that is
# configured below. Cilium versions before 1.6 supported only the kvstore
# backend. Upgrades from these older cilium versions should continue using
# the kvstore by commenting out the identity-allocation-mode below, or
# setting it to "kvstore".
identity-allocation-mode: crd
cilium-endpoint-gc-interval: "5m0s"
nodes-gc-interval: "5m0s"
# If you want to run cilium in debug mode change this value to true
debug: "false"
# The agent can be put into the following three policy enforcement modes
# default, always and never.
# https://docs.cilium.io/en/latest/policy/intro/#policy-enforcement-modes
enable-policy: "default"
# Prometheus
# enable-metrics: "true"
# prometheus-serve-addr: ":foo"
# operator-prometheus-serve-addr: ":bar"
# Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
# address.
enable-ipv4: "true"
# Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
# address.
enable-ipv6: "false"
# Enable probing for a more efficient clock source for the BPF datapath
enable-bpf-clock-probe: "true"
# Enable use of transparent proxying mechanisms (Linux 5.7+)
enable-bpf-tproxy: "false"
# If you want cilium monitor to aggregate tracing for packets, set this level
# to "low", "medium", or "maximum". The higher the level, the less packets
# that will be seen in monitor output.
monitor-aggregation: medium
# The monitor aggregation interval governs the typical time between monitor
# notification events for each allowed connection.
#
# Only effective when monitor aggregation is set to "medium" or higher.
monitor-aggregation-interval: 5s
# The monitor aggregation flags determine which TCP flags, upon their
# first observation, cause monitor notifications to be generated.
#
# Only effective when monitor aggregation is set to "medium" or higher.
monitor-aggregation-flags: all
# Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
# sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
bpf-map-dynamic-size-ratio: "0.0025"
# bpf-policy-map-max specifies the maximum number of entries in the endpoint
# policy map (per endpoint)
bpf-policy-map-max: "16384"
# bpf-lb-map-max specifies the maximum number of entries in bpf lb service,
# backend and affinity maps.
bpf-lb-map-max: "65536"
# Pre-allocation of map entries allows per-packet latency to be reduced, at
# the expense of up-front memory allocation for the entries in the maps. The
# default value below will minimize memory usage in the default installation;
# users who are sensitive to latency may consider setting this to "true".
#
# This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
# this option and behave as though it is set to "true".
#
# If this value is modified, then during the next Cilium startup the restore
# of existing endpoints and tracking of ongoing connections may be disrupted.
# As a result, reply packets may be dropped and the load-balancing decisions
# for established connections may change.
#
# If this option is set to "false" during an upgrade from 1.3 or earlier to
# 1.4 or later, then it may cause one-time disruptions during the upgrade.
preallocate-bpf-maps: "false"
# Name of the cluster. Only relevant when building a mesh of clusters.
cluster-name: default
# Unique ID of the cluster. Must be unique across all connected clusters and
# in the range of 1 and 255. Only relevant when building a mesh of clusters.
cluster-id: "0"
# Encapsulation mode for communication between nodes
# Possible values:
# - disabled
# - vxlan (default)
# - geneve
routing-mode: "tunnel"
tunnel: vxlan
# Enables L7 proxy for L7 policy enforcement and visibility
enable-l7-proxy: "true"
auto-direct-node-routes: "false"
# enableXTSocketFallback enables the fallback compatibility solution
# when the xt_socket kernel module is missing and it is needed for
# the datapath L7 redirection to work properly. See documentation
# for details on when this can be disabled:
# http://docs.cilium.io/en/latest/install/system_requirements/#admin-kernel-version.
enable-xt-socket-fallback: "true"
# installIptablesRules enables installation of iptables rules to allow for
# TPROXY (L7 proxy injection), iptables-based masquerading, and compatibility
# with kube-proxy. See documentation for details on when this can be
# disabled.
install-iptables-rules: "true"
# masquerade traffic leaving the node destined for outside
enable-ipv4-masquerade: "true"
enable-ipv6-masquerade: "false"
# bpfMasquerade enables masquerading with BPF instead of iptables
enable-bpf-masquerade: "true"
# kube-proxy
kube-proxy-replacement: "true"
kube-proxy-replacement-healthz-bind-address: ":10256"
enable-session-affinity: "true"
# ClusterIPs from host namespace
bpf-lb-sock: "true"
# ClusterIPs from external nodes
bpf-lb-external-clusterip: "true"
# NodePort
enable-node-port: "true"
enable-health-check-nodeport: "false"
# ExternalIPs
enable-external-ips: "true"
# HostPort
enable-host-port: "true"
# IPAM
ipam: "cluster-pool"
disable-cnp-status-updates: "true"
cluster-pool-ipv4-cidr: "${pod_cidr}"
cluster-pool-ipv4-mask-size: "24"
# Health
agent-health-port: "9876"
enable-health-checking: "true"
enable-endpoint-health-checking: "true"
# Identity
enable-well-known-identities: "false"
enable-remote-node-identity: "true"
# Misc
enable-bandwidth-manager: "false"
enable-local-redirect-policy: "false"
policy-audit-mode: "false"
operator-api-serve-addr: "127.0.0.1:9234"
enable-l2-neigh-discovery: "true"
enable-k8s-terminating-endpoint: "true"
enable-k8s-networkpolicy: "true"
external-envoy-proxy: "false"
write-cni-conf-when-ready: /host/etc/cni/net.d/05-cilium.conflist
cni-exclusive: "true"
cni-log-file: "/var/run/cilium/cilium-cni.log"
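With ipam set to cluster-pool, the operator carves per-node pod CIDRs out of cluster-pool-ipv4-cidr using cluster-pool-ipv4-mask-size. A quick illustration of the sizing, assuming a pod_cidr of 10.2.0.0/16 and the /24 mask above (a Terraform sketch for illustration only, not part of these manifests):

output "example_node_pod_blocks" {
  # A /16 pool split into /24 per-node blocks yields 2^(24-16) = 256 blocks,
  # i.e. room for up to 256 nodes, each with roughly 254 usable pod addresses.
  value = pow(2, 24 - 16)
}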

View File

@@ -0,0 +1,219 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: cilium
namespace: kube-system
labels:
k8s-app: cilium
spec:
selector:
matchLabels:
k8s-app: cilium-agent
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: cilium-agent
spec:
hostNetwork: true
priorityClassName: system-node-critical
serviceAccountName: cilium-agent
securityContext:
seccompProfile:
type: RuntimeDefault
tolerations:
- key: node-role.kubernetes.io/controller
operator: Exists
- key: node.kubernetes.io/not-ready
operator: Exists
%{~ for key in daemonset_tolerations ~}
- key: ${key}
operator: Exists
%{~ endfor ~}
initContainers:
# Cilium v1.13.1 starts installing CNI plugins in yet another init container
# https://github.com/cilium/cilium/pull/24075
- name: install-cni
image: ${cilium_agent_image}
command:
- /install-plugin.sh
securityContext:
privileged: true
capabilities:
drop:
- ALL
volumeMounts:
- name: cni-bin-dir
mountPath: /host/opt/cni/bin
# Required to mount cgroup2 filesystem on the underlying Kubernetes node.
# We use nsenter command with host's cgroup and mount namespaces enabled.
- name: mount-cgroup
image: ${cilium_agent_image}
command:
- sh
- -ec
# The statically linked Go program binary is invoked to avoid any
# dependency on utilities like sh and mount that can be missing on certain
# distros installed on the underlying host. Copy the binary to the
# same directory where we install cilium cni plugin so that exec permissions
# are available.
- 'cp /usr/bin/cilium-mount /hostbin/cilium-mount && nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "$${BIN_PATH}/cilium-mount" $CGROUP_ROOT; rm /hostbin/cilium-mount'
env:
- name: CGROUP_ROOT
value: /run/cilium/cgroupv2
- name: BIN_PATH
value: /opt/cni/bin
securityContext:
privileged: true
volumeMounts:
- name: hostproc
mountPath: /hostproc
- name: cni-bin-dir
mountPath: /hostbin
- name: clean-cilium-state
image: ${cilium_agent_image}
command:
- /init-container.sh
securityContext:
privileged: true
volumeMounts:
- name: sys-fs-bpf
mountPath: /sys/fs/bpf
- name: var-run-cilium
mountPath: /var/run/cilium
# Required to mount cgroup filesystem from the host to cilium agent pod
- name: cilium-cgroup
mountPath: /run/cilium/cgroupv2
mountPropagation: HostToContainer
containers:
- name: cilium-agent
image: ${cilium_agent_image}
command:
- cilium-agent
args:
- --config-dir=/tmp/cilium/config-map
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: CILIUM_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: KUBERNETES_SERVICE_HOST
valueFrom:
configMapKeyRef:
name: in-cluster
key: apiserver-host
- name: KUBERNETES_SERVICE_PORT
valueFrom:
configMapKeyRef:
name: in-cluster
key: apiserver-port
ports:
# Not yet used, prefer exec's
- name: health
protocol: TCP
containerPort: 9876
lifecycle:
preStop:
exec:
command:
- /cni-uninstall.sh
securityContext:
privileged: true
livenessProbe:
exec:
command:
- cilium
- status
- --brief
periodSeconds: 30
initialDelaySeconds: 120
successThreshold: 1
failureThreshold: 10
timeoutSeconds: 5
readinessProbe:
exec:
command:
- cilium
- status
- --brief
periodSeconds: 20
initialDelaySeconds: 5
successThreshold: 1
failureThreshold: 3
timeoutSeconds: 5
volumeMounts:
# Load kernel modules
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: xtables-lock
mountPath: /run/xtables.lock
# Keep state between restarts
- name: var-run-cilium
mountPath: /var/run/cilium
- name: sys-fs-bpf
mountPath: /sys/fs/bpf
mountPropagation: Bidirectional
# Configuration
- name: config
mountPath: /tmp/cilium/config-map
readOnly: true
# Install config on host
- name: cni-conf-dir
mountPath: /host/etc/cni/net.d
terminationGracePeriodSeconds: 1
volumes:
# Load kernel modules
- name: lib-modules
hostPath:
path: /lib/modules
# Access iptables concurrently with other processes (e.g. kube-proxy)
- name: xtables-lock
hostPath:
type: FileOrCreate
path: /run/xtables.lock
# Keep state between restarts
- name: var-run-cilium
hostPath:
path: /var/run/cilium
type: DirectoryOrCreate
# Keep state between restarts for bpf maps
- name: sys-fs-bpf
hostPath:
path: /sys/fs/bpf
type: DirectoryOrCreate
# Mount host cgroup2 filesystem
- name: hostproc
hostPath:
path: /proc
type: Directory
- name: cilium-cgroup
hostPath:
path: /run/cilium/cgroupv2
type: DirectoryOrCreate
# Read configuration
- name: config
configMap:
name: cilium
# Install CNI plugin and config on host
- name: cni-bin-dir
hostPath:
type: DirectoryOrCreate
path: /opt/cni/bin
- name: cni-conf-dir
hostPath:
type: DirectoryOrCreate
path: /etc/cni/net.d
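The %{~ for key in daemonset_tolerations ~} ... %{~ endfor ~} block in the tolerations above is a Terraform template directive: each entry of the daemonset_tolerations list variable renders one additional toleration. A standalone sketch of the same mechanism (the variable value is illustrative only):

locals {
  example_daemonset_tolerations = ["node-role.kubernetes.io/master"]

  # Renders one "key / operator: Exists" toleration per list entry; the ~ marker
  # trims surrounding whitespace so the YAML indentation stays intact.
  example_rendered_tolerations = <<-EOT
    %{~ for key in local.example_daemonset_tolerations ~}
    - key: ${key}
      operator: Exists
    %{~ endfor ~}
  EOT
}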

View File

@@ -0,0 +1,103 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: cilium-operator
namespace: kube-system
spec:
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
name: cilium-operator
template:
metadata:
labels:
name: cilium-operator
spec:
hostNetwork: true
priorityClassName: system-cluster-critical
serviceAccountName: cilium-operator
securityContext:
seccompProfile:
type: RuntimeDefault
tolerations:
- key: node-role.kubernetes.io/controller
operator: Exists
- key: node.kubernetes.io/not-ready
operator: Exists
containers:
- name: cilium-operator
image: ${cilium_operator_image}
command:
- cilium-operator-generic
args:
- --config-dir=/tmp/cilium/config-map
- --debug=$(CILIUM_DEBUG)
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: CILIUM_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: KUBERNETES_SERVICE_HOST
valueFrom:
configMapKeyRef:
name: in-cluster
key: apiserver-host
- name: KUBERNETES_SERVICE_PORT
valueFrom:
configMapKeyRef:
name: in-cluster
key: apiserver-port
- name: CILIUM_DEBUG
valueFrom:
configMapKeyRef:
name: cilium
key: debug
optional: true
ports:
- name: health
protocol: TCP
containerPort: 9234
livenessProbe:
httpGet:
scheme: HTTP
host: 127.0.0.1
port: 9234
path: /healthz
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 3
readinessProbe:
httpGet:
scheme: HTTP
host: 127.0.0.1
port: 9234
path: /healthz
periodSeconds: 15
timeoutSeconds: 3
failureThreshold: 5
volumeMounts:
- name: config
mountPath: /tmp/cilium/config-map
readOnly: true
topologySpreadConstraints:
- topologyKey: kubernetes.io/hostname
labelSelector:
matchLabels:
name: cilium-operator
maxSkew: 1
whenUnsatisfiable: DoNotSchedule
volumes:
# Read configuration
- name: config
configMap:
name: cilium

View File

@@ -0,0 +1,13 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: cilium-operator
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: cilium-agent
namespace: kube-system

View File

@@ -0,0 +1,16 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:coredns
labels:
kubernetes.io/bootstrapping: rbac-defaults
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system

View File

@@ -0,0 +1,27 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:coredns
labels:
kubernetes.io/bootstrapping: rbac-defaults
rules:
- apiGroups: [""]
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
- apiGroups: ["discovery.k8s.io"]
resources:
- endpointslices
verbs:
- list
- watch
- apiGroups: [""]
resources:
- nodes
verbs:
- get

View File

@@ -0,0 +1,27 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
log . {
class error
}
kubernetes ${cluster_domain_suffix} in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}

View File

@@ -0,0 +1,109 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: coredns
kubernetes.io/name: "CoreDNS"
spec:
replicas: ${control_plane_replicas}
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
tier: control-plane
k8s-app: coredns
template:
metadata:
labels:
tier: control-plane
k8s-app: coredns
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
preference:
matchExpressions:
- key: node.kubernetes.io/controller
operator: Exists
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: tier
operator: In
values:
- control-plane
- key: k8s-app
operator: In
values:
- coredns
topologyKey: kubernetes.io/hostname
priorityClassName: system-cluster-critical
securityContext:
seccompProfile:
type: RuntimeDefault
serviceAccountName: coredns
tolerations:
- key: node-role.kubernetes.io/controller
effect: NoSchedule
containers:
- name: coredns
image: ${coredns_image}
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config
mountPath: /etc/coredns
readOnly: true
ports:
- name: dns
protocol: UDP
containerPort: 53
- name: dns-tcp
protocol: TCP
containerPort: 53
- name: metrics
protocol: TCP
containerPort: 9153
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /ready
port: 8181
scheme: HTTP
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
dnsPolicy: Default
volumes:
- name: config
configMap:
name: coredns
items:
- key: Corefile
path: Corefile

View File

@@ -1,5 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-dns
name: coredns
namespace: kube-system

View File

@@ -0,0 +1,22 @@
apiVersion: v1
kind: Service
metadata:
name: coredns
namespace: kube-system
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9153"
labels:
k8s-app: coredns
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: coredns
clusterIP: ${cluster_dns_service_ip}
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP

View File

@@ -1,7 +1,7 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-flannel-cfg
name: flannel-config
namespace: kube-system
labels:
tier: node
@@ -31,6 +31,7 @@ data:
{
"Network": "${pod_cidr}",
"Backend": {
"Type": "vxlan"
"Type": "vxlan",
"Port": 8472
}
}

View File

@@ -0,0 +1,99 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: flannel
namespace: kube-system
labels:
k8s-app: flannel
spec:
selector:
matchLabels:
k8s-app: flannel
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: flannel
spec:
hostNetwork: true
priorityClassName: system-node-critical
serviceAccountName: flannel
securityContext:
seccompProfile:
type: RuntimeDefault
tolerations:
- key: node-role.kubernetes.io/controller
operator: Exists
- key: node.kubernetes.io/not-ready
operator: Exists
%{~ for key in daemonset_tolerations ~}
- key: ${key}
operator: Exists
%{~ endfor ~}
initContainers:
- name: install-cni
image: ${flannel_cni_image}
command: ["/install-cni.sh"]
env:
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: flannel-config
key: cni-conf.json
volumeMounts:
- name: cni-bin-dir
mountPath: /host/opt/cni/bin/
- name: cni-conf-dir
mountPath: /host/etc/cni/net.d
containers:
- name: flannel
image: ${flannel_image}
command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=$(POD_IP)"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
securityContext:
privileged: true
resources:
requests:
cpu: 100m
volumeMounts:
- name: flannel-config
mountPath: /etc/kube-flannel/
- name: run-flannel
mountPath: /run/flannel
- name: xtables-lock
mountPath: /run/xtables.lock
volumes:
- name: flannel-config
configMap:
name: flannel-config
- name: run-flannel
hostPath:
path: /run/flannel
# Used by install-cni
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-conf-dir
hostPath:
type: DirectoryOrCreate
path: /etc/cni/net.d
# Access iptables concurrently
- name: xtables-lock
hostPath:
type: FileOrCreate
path: /run/xtables.lock

View File

@@ -1,83 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel
namespace: kube-system
labels:
tier: node
k8s-app: flannel
spec:
selector:
matchLabels:
tier: node
k8s-app: flannel
template:
metadata:
labels:
tier: node
k8s-app: flannel
spec:
serviceAccountName: flannel
containers:
- name: kube-flannel
image: ${flannel_image}
command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=$(POD_IP)"]
securityContext:
privileged: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- name: run
mountPath: /run
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
- name: install-cni
image: ${flannel_cni_image}
command: ["/install-cni.sh"]
env:
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: kube-flannel-cfg
key: cni-conf.json
volumeMounts:
- name: cni
mountPath: /host/etc/cni/net.d
- name: host-cni-bin
mountPath: /host/opt/cni/bin/
hostNetwork: true
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
volumes:
- name: run
hostPath:
path: /run
- name: cni
hostPath:
path: /etc/kubernetes/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
- name: host-cni-bin
hostPath:
path: /opt/cni/bin
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate

View File

@@ -0,0 +1,99 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-proxy
namespace: kube-system
labels:
tier: node
k8s-app: kube-proxy
spec:
selector:
matchLabels:
tier: node
k8s-app: kube-proxy
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
tier: node
k8s-app: kube-proxy
spec:
hostNetwork: true
priorityClassName: system-node-critical
securityContext:
seccompProfile:
type: RuntimeDefault
serviceAccountName: kube-proxy
tolerations:
- key: node-role.kubernetes.io/controller
operator: Exists
- key: node.kubernetes.io/not-ready
operator: Exists
%{~ for key in daemonset_tolerations ~}
- key: ${key}
operator: Exists
%{~ endfor ~}
containers:
- name: kube-proxy
image: ${kube_proxy_image}
command:
- kube-proxy
- --cluster-cidr=${pod_cidr}
- --hostname-override=$(NODE_NAME)
- --kubeconfig=/etc/kubernetes/kubeconfig
- --metrics-bind-address=0.0.0.0
- --proxy-mode=ipvs
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
ports:
- name: metrics
containerPort: 10249
- name: health
containerPort: 10256
livenessProbe:
httpGet:
path: /healthz
port: 10256
initialDelaySeconds: 15
timeoutSeconds: 15
securityContext:
privileged: true
volumeMounts:
- name: kubeconfig
mountPath: /etc/kubernetes
readOnly: true
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: etc-ssl
mountPath: /etc/ssl/certs
readOnly: true
- name: etc-pki
mountPath: /etc/pki
readOnly: true
- name: xtables-lock
mountPath: /run/xtables.lock
volumes:
- name: kubeconfig
configMap:
name: kubeconfig-in-cluster
- name: lib-modules
hostPath:
path: /lib/modules
- name: etc-ssl
hostPath:
path: /etc/ssl/certs
- name: etc-pki
hostPath:
path: /etc/pki
# Access iptables concurrently
- name: xtables-lock
hostPath:
type: FileOrCreate
path: /run/xtables.lock

View File

@@ -1,17 +1,18 @@
apiVersion: v1
kind: Config
clusters:
- name: ${name}-cluster
- name: ${name}
cluster:
server: ${server}
certificate-authority-data: ${ca_cert}
users:
- name: ${name}-user
- name: ${name}
user:
client-certificate-data: ${kubelet_cert}
client-key-data: ${kubelet_key}
current-context: ${name}
contexts:
- name: ${name}-context
- name: ${name}
context:
cluster: ${name}-cluster
user: ${name}-user
cluster: ${name}
user: ${name}

View File

@@ -8,8 +8,7 @@ clusters:
users:
- name: kubelet
user:
client-certificate-data: ${kubelet_cert}
client-key-data: ${kubelet_key}
token: ${token_id}.${token_secret}
contexts:
- context:
cluster: local

View File

@@ -0,0 +1,13 @@
# Bind system:bootstrappers to ClusterRole for node bootstrap
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: bootstrap-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:bootstrappers

View File

@@ -0,0 +1,13 @@
# Approve new CSRs from "system:bootstrappers" subjects
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: bootstrap-approve-new
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:bootstrappers

View File

@@ -0,0 +1,13 @@
# Approve renewal CSRs from "system:nodes" subjects
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: bootstrap-approve-renew
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:nodes

View File

@@ -0,0 +1,12 @@
apiVersion: v1
kind: Secret
type: bootstrap.kubernetes.io/token
metadata:
# Name MUST be of form "bootstrap-token-<token_id>"
name: bootstrap-token-${token_id}
namespace: kube-system
stringData:
description: "Typhoon generated bootstrap token"
token-id: ${token_id}
token-secret: ${token_secret}
usage-bootstrap-authentication: "true"

View File

@@ -0,0 +1,10 @@
# in-cluster ConfigMap is for control plane components that must reach
# kube-apiserver before service IPs are available (e.g. 10.3.0.1)
apiVersion: v1
kind: ConfigMap
metadata:
name: in-cluster
namespace: kube-system
data:
apiserver-host: ${apiserver_host}
apiserver-port: "${apiserver_port}"

View File

@@ -1,14 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: kube-apiserver
namespace: kube-system
type: Opaque
data:
apiserver.key: ${apiserver_key}
apiserver.crt: ${apiserver_cert}
service-account.pub: ${serviceaccount_pub}
ca.crt: ${ca_cert}
etcd-client-ca.crt: ${etcd_ca_cert}
etcd-client.crt: ${etcd_client_cert}
etcd-client.key: ${etcd_client_key}

View File

@@ -1,83 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-apiserver
namespace: kube-system
labels:
tier: control-plane
k8s-app: kube-apiserver
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: kube-apiserver
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-apiserver
annotations:
checkpointer.alpha.coreos.com/checkpoint: "true"
spec:
containers:
- name: kube-apiserver
image: ${hyperkube_image}
command:
- /hyperkube
- apiserver
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --anonymous-auth=false
- --authorization-mode=RBAC
- --bind-address=0.0.0.0
- --client-ca-file=/etc/kubernetes/secrets/ca.crt
- --cloud-provider=${cloud_provider}
- --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
- --etcd-cafile=/etc/kubernetes/secrets/etcd-client-ca.crt
- --etcd-certfile=/etc/kubernetes/secrets/etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/secrets/etcd-client.key
- --etcd-servers=${etcd_servers}
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
- --secure-port=443
- --service-account-key-file=/etc/kubernetes/secrets/service-account.pub
- --service-cluster-ip-range=${service_cidr}
- --storage-backend=etcd3
- --tls-cert-file=/etc/kubernetes/secrets/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/secrets/apiserver.key
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
- mountPath: /etc/kubernetes/secrets
name: secrets
readOnly: true
- mountPath: /var/lock
name: var-lock
readOnly: false
hostNetwork: true
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- name: ssl-certs-host
hostPath:
path: ${trusted_certs_dir}
- name: secrets
secret:
secretName: kube-apiserver
- name: var-lock
hostPath:
path: /var/lock
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate

View File

@@ -1,11 +0,0 @@
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: kube-controller-manager
namespace: kube-system
spec:
minAvailable: 1
selector:
matchLabels:
tier: control-plane
k8s-app: kube-controller-manager

View File

@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: controller-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-controller-manager
subjects:
- kind: ServiceAccount
name: kube-controller-manager
namespace: kube-system

View File

@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: kube-controller-manager

View File

@@ -1,9 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: kube-controller-manager
namespace: kube-system
type: Opaque
data:
service-account.key: ${serviceaccount_key}
ca.crt: ${ca_cert}

View File

@@ -1,89 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-controller-manager
namespace: kube-system
labels:
tier: control-plane
k8s-app: kube-controller-manager
spec:
replicas: 2
selector:
matchLabels:
tier: control-plane
k8s-app: kube-controller-manager
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-controller-manager
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: tier
operator: In
values:
- control-plane
- key: k8s-app
operator: In
values:
- kube-controller-manager
topologyKey: kubernetes.io/hostname
containers:
- name: kube-controller-manager
image: ${hyperkube_image}
command:
- ./hyperkube
- controller-manager
- --use-service-account-credentials
- --allocate-node-cidrs=true
- --cloud-provider=${cloud_provider}
- --cluster-cidr=${pod_cidr}
- --service-cluster-ip-range=${service_cidr}
- --configure-cloud-routes=false
- --leader-elect=true
- --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins
- --root-ca-file=/etc/kubernetes/secrets/ca.crt
- --service-account-private-key-file=/etc/kubernetes/secrets/service-account.key
livenessProbe:
httpGet:
path: /healthz
port: 10252 # Note: Using default port. Update if --port option is set differently.
initialDelaySeconds: 15
timeoutSeconds: 15
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/secrets
readOnly: true
- name: volumeplugins
mountPath: /var/lib/kubelet/volumeplugins
readOnly: true
- name: ssl-host
mountPath: /etc/ssl/certs
readOnly: true
nodeSelector:
node-role.kubernetes.io/master: ""
securityContext:
runAsNonRoot: true
runAsUser: 65534
serviceAccountName: kube-controller-manager
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- name: secrets
secret:
secretName: kube-controller-manager
- name: ssl-host
hostPath:
path: ${trusted_certs_dir}
- name: volumeplugins
hostPath:
path: /var/lib/kubelet/volumeplugins
dnsPolicy: Default # Don't use cluster DNS.

View File

@@ -1,154 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
# replicas: not specified here:
# 1. So that the Addon Manager does not reconcile this replicas parameter.
# 2. Default is 1.
# 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
spec:
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- name: kube-dns-config
configMap:
name: kube-dns
optional: true
containers:
- name: kubedns
image: ${kubedns_image}
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
livenessProbe:
httpGet:
path: /healthcheck/kubedns
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
# we poll on pod startup for the Kubernetes master service and
# only set up the /readiness HTTP server once that's available.
initialDelaySeconds: 3
timeoutSeconds: 5
args:
- --domain=${cluster_domain_suffix}.
- --dns-port=10053
- --config-dir=/kube-dns-config
- --v=2
env:
- name: PROMETHEUS_PORT
value: "10055"
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- containerPort: 10055
name: metrics
protocol: TCP
volumeMounts:
- name: kube-dns-config
mountPath: /kube-dns-config
- name: dnsmasq
image: ${kubedns_dnsmasq_image}
livenessProbe:
httpGet:
path: /healthcheck/dnsmasq
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- -v=2
- -logtostderr
- -configDir=/etc/k8s/dns/dnsmasq-nanny
- -restartDnsmasq=true
- --
- -k
- --cache-size=1000
- --no-negcache
- --log-facility=-
- --server=/${cluster_domain_suffix}/127.0.0.1#10053
- --server=/in-addr.arpa/127.0.0.1#10053
- --server=/ip6.arpa/127.0.0.1#10053
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
# see: https://github.com/kubernetes/kubernetes/issues/29055 for details
resources:
requests:
cpu: 150m
memory: 20Mi
volumeMounts:
- name: kube-dns-config
mountPath: /etc/k8s/dns/dnsmasq-nanny
- name: sidecar
image: ${kubedns_sidecar_image}
livenessProbe:
httpGet:
path: /metrics
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- --v=2
- --logtostderr
- --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.${cluster_domain_suffix},5,SRV
- --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.${cluster_domain_suffix},5,SRV
ports:
- containerPort: 10054
name: metrics
protocol: TCP
resources:
requests:
memory: 20Mi
cpu: 10m
dnsPolicy: Default # Don't use cluster DNS.
serviceAccountName: kube-dns

View File

@@ -1,20 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: ${kube_dns_service_ip}
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP

View File

@@ -1,67 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-proxy
namespace: kube-system
labels:
tier: node
k8s-app: kube-proxy
spec:
selector:
matchLabels:
tier: node
k8s-app: kube-proxy
template:
metadata:
labels:
tier: node
k8s-app: kube-proxy
spec:
containers:
- name: kube-proxy
image: ${hyperkube_image}
command:
- ./hyperkube
- proxy
- --cluster-cidr=${pod_cidr}
- --hostname-override=$(NODE_NAME)
- --kubeconfig=/etc/kubernetes/kubeconfig
- --proxy-mode=iptables
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
privileged: true
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
- name: kubeconfig
mountPath: /etc/kubernetes
readOnly: true
hostNetwork: true
serviceAccountName: kube-proxy
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
volumes:
- name: lib-modules
hostPath:
path: /lib/modules
- name: ssl-certs-host
hostPath:
path: ${trusted_certs_dir}
- name: kubeconfig
configMap:
name: kubeconfig-in-cluster
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate


@@ -1,11 +0,0 @@
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: kube-scheduler
namespace: kube-system
spec:
minAvailable: 1
selector:
matchLabels:
tier: control-plane
k8s-app: kube-scheduler


@@ -1,58 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-scheduler
namespace: kube-system
labels:
tier: control-plane
k8s-app: kube-scheduler
spec:
replicas: 2
selector:
matchLabels:
tier: control-plane
k8s-app: kube-scheduler
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-scheduler
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: tier
operator: In
values:
- control-plane
- key: k8s-app
operator: In
values:
- kube-scheduler
topologyKey: kubernetes.io/hostname
containers:
- name: kube-scheduler
image: ${hyperkube_image}
command:
- ./hyperkube
- scheduler
- --leader-elect=true
livenessProbe:
httpGet:
path: /healthz
port: 10251 # Note: Using default port. Update if --port option is set differently.
initialDelaySeconds: 15
timeoutSeconds: 15
nodeSelector:
node-role.kubernetes.io/master: ""
securityContext:
runAsNonRoot: true
runAsUser: 65534
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule


@@ -9,6 +9,8 @@ data:
clusters:
- name: local
cluster:
# kubeconfig-in-cluster is for control plane components that must reach
# kube-apiserver before service IPs are available (e.g. 10.3.0.1)
server: ${server}
certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
users:
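The added comment notes that ${server} must be reachable before cluster service IPs exist, so it points at an external kube-apiserver address rather than the in-cluster service IP. A minimal sketch of how that address could be composed from the api_servers and external_apiserver_port variables defined later in this section; the URL format and the choice of the first api_servers entry are assumptions:

locals {
  # Illustrative: an external kube-apiserver URL, usable before the
  # in-cluster service IP (e.g. 10.3.0.1) is routable.
  kubeconfig_server = format("https://%s:%d", var.api_servers[0], var.external_apiserver_port)
}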


@@ -1,12 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:default-sa
subjects:
- kind: ServiceAccount
name: default
namespace: kube-system
name: kubelet-delete
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubelet-delete
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:nodes


@@ -0,0 +1,23 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kubelet-delete
rules:
- apiGroups: [""]
resources:
- nodes
verbs:
- delete
- apiGroups: ["apps"]
resources:
- deployments
- daemonsets
- statefulsets
verbs:
- get
- list
- apiGroups: [""]
resources:
- pods/eviction
verbs:
- create


@@ -1,13 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pod-checkpointer
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pod-checkpointer
subjects:
- kind: ServiceAccount
name: pod-checkpointer
namespace: kube-system


@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-checkpointer
namespace: kube-system
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods"]
verbs: ["get", "watch", "list"]
- apiGroups: [""] # "" indicates the core API group
resources: ["secrets", "configmaps"]
verbs: ["get"]


@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: pod-checkpointer


@@ -1,72 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: pod-checkpointer
namespace: kube-system
labels:
tier: control-plane
k8s-app: pod-checkpointer
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: pod-checkpointer
template:
metadata:
labels:
tier: control-plane
k8s-app: pod-checkpointer
annotations:
checkpointer.alpha.coreos.com/checkpoint: "true"
spec:
containers:
- name: pod-checkpointer
image: ${pod_checkpointer_image}
command:
- /checkpoint
- --lock-file=/var/run/lock/pod-checkpointer.lock
- --kubeconfig=/etc/checkpointer/kubeconfig
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
imagePullPolicy: Always
volumeMounts:
- mountPath: /etc/checkpointer
name: kubeconfig
- mountPath: /etc/kubernetes
name: etc-kubernetes
- mountPath: /var/run
name: var-run
serviceAccountName: pod-checkpointer
hostNetwork: true
nodeSelector:
node-role.kubernetes.io/master: ""
restartPolicy: Always
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- name: kubeconfig
configMap:
name: kubeconfig-in-cluster
- name: etc-kubernetes
hostPath:
path: /etc/kubernetes
- name: var-run
hostPath:
path: /var/run
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate


@@ -0,0 +1,73 @@
apiVersion: v1
kind: Pod
metadata:
name: kube-apiserver
namespace: kube-system
labels:
k8s-app: kube-apiserver
tier: control-plane
spec:
hostNetwork: true
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
seccompProfile:
type: RuntimeDefault
containers:
- name: kube-apiserver
image: ${kube_apiserver_image}
command:
- kube-apiserver
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --anonymous-auth=false
- --authorization-mode=Node,RBAC
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --enable-admission-plugins=NodeRestriction
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/etc/kubernetes/pki/etcd-client-ca.crt
- --etcd-certfile=/etc/kubernetes/pki/etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/etcd-client.key
- --etcd-servers=${etcd_servers}
- --feature-gates=MutatingAdmissionPolicy=true
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname${aggregation_flags}
- --runtime-config=admissionregistration.k8s.io/v1beta1=true,admissionregistration.k8s.io/v1alpha1=true
- --secure-port=6443
- --service-account-issuer=${service_account_issuer}
- --service-account-jwks-uri=${service_account_issuer}/openid/v1/jwks
- --service-account-key-file=/etc/kubernetes/pki/service-account.pub
- --service-account-signing-key-file=/etc/kubernetes/pki/service-account.key
- --service-cluster-ip-range=${service_cidr}
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
resources:
requests:
cpu: 150m
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/pki
readOnly: true
- name: etc-ssl
mountPath: /etc/ssl/certs
readOnly: true
- name: etc-pki
mountPath: /etc/pki
readOnly: true
volumes:
- name: secrets
hostPath:
path: /etc/kubernetes/pki
- name: etc-ssl
hostPath:
path: /etc/ssl/certs
- name: etc-pki
hostPath:
path: /etc/pki
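This static pod manifest is itself a Terraform template: placeholders such as ${kube_apiserver_image}, ${etcd_servers}, and ${aggregation_flags} are filled when the module renders its assets. A hedged sketch of that rendering, where the file path, the local name, and the etcd URL format are assumptions:

locals {
  # Sketch only: render the kube-apiserver static pod from its template.
  kube_apiserver_pod = templatefile("${path.module}/resources/static-manifests/kube-apiserver.yaml", {
    kube_apiserver_image   = var.container_images["kube_apiserver"]
    etcd_servers           = join(",", formatlist("https://%s:2379", var.etcd_servers))
    aggregation_flags      = ""  # see the aggregation sketch after tls-aggregation.tf below
    service_account_issuer = var.service_account_issuer
    service_cidr           = var.service_cidr
  })
}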


@@ -0,0 +1,75 @@
apiVersion: v1
kind: Pod
metadata:
name: kube-controller-manager
namespace: kube-system
labels:
k8s-app: kube-controller-manager
tier: control-plane
spec:
hostNetwork: true
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
seccompProfile:
type: RuntimeDefault
containers:
- name: kube-controller-manager
image: ${kube_controller_manager_image}
command:
- kube-controller-manager
- --authentication-kubeconfig=/etc/kubernetes/pki/controller-manager.conf
- --authorization-kubeconfig=/etc/kubernetes/pki/controller-manager.conf
- --allocate-node-cidrs=true
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cluster-cidr=${pod_cidr}
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --cluster-signing-duration=72h
- --controllers=*,tokencleaner
- --configure-cloud-routes=false
- --kubeconfig=/etc/kubernetes/pki/controller-manager.conf
- --leader-elect=true
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/service-account.key
- --service-cluster-ip-range=${service_cidr}
- --use-service-account-credentials=true
livenessProbe:
httpGet:
scheme: HTTPS
host: 127.0.0.1
path: /healthz
port: 10257
initialDelaySeconds: 25
timeoutSeconds: 15
failureThreshold: 8
resources:
requests:
cpu: 150m
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/pki
readOnly: true
- name: etc-ssl
mountPath: /etc/ssl/certs
readOnly: true
- name: etc-pki
mountPath: /etc/pki
readOnly: true
- name: flex
mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
volumes:
- name: secrets
hostPath:
path: /etc/kubernetes/pki
- name: etc-ssl
hostPath:
path: /etc/ssl/certs
- name: etc-pki
hostPath:
path: /etc/pki
- name: flex
hostPath:
type: DirectoryOrCreate
path: /var/lib/kubelet/volumeplugins


@@ -0,0 +1,44 @@
apiVersion: v1
kind: Pod
metadata:
name: kube-scheduler
namespace: kube-system
labels:
k8s-app: kube-scheduler
tier: control-plane
spec:
hostNetwork: true
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
seccompProfile:
type: RuntimeDefault
containers:
- name: kube-scheduler
image: ${kube_scheduler_image}
command:
- kube-scheduler
- --authentication-kubeconfig=/etc/kubernetes/pki/scheduler.conf
- --authorization-kubeconfig=/etc/kubernetes/pki/scheduler.conf
- --kubeconfig=/etc/kubernetes/pki/scheduler.conf
- --leader-elect=true
livenessProbe:
httpGet:
scheme: HTTPS
host: 127.0.0.1
path: /healthz
port: 10259
initialDelaySeconds: 15
timeoutSeconds: 15
resources:
requests:
cpu: 100m
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/pki/scheduler.conf
readOnly: true
volumes:
- name: secrets
hostPath:
path: /etc/kubernetes/pki/scheduler.conf


@@ -1,5 +1,4 @@
cluster_name = "example"
api_servers = ["node1.example.com"]
etcd_servers = ["node1.example.com"]
asset_dir = "/home/core/mycluster"
networking = "flannel"

tls-aggregation.tf

@@ -0,0 +1,75 @@
locals {
# Kubernetes Aggregation TLS assets map
aggregation_tls = var.enable_aggregation ? {
"tls/k8s/aggregation-ca.crt" = tls_self_signed_cert.aggregation-ca[0].cert_pem,
"tls/k8s/aggregation-client.crt" = tls_locally_signed_cert.aggregation-client[0].cert_pem,
"tls/k8s/aggregation-client.key" = tls_private_key.aggregation-client[0].private_key_pem,
} : {}
}
# Kubernetes Aggregation CA (i.e. front-proxy-ca)
# Files: tls/{aggregation-ca.crt,aggregation-ca.key}
resource "tls_private_key" "aggregation-ca" {
count = var.enable_aggregation ? 1 : 0
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_self_signed_cert" "aggregation-ca" {
count = var.enable_aggregation ? 1 : 0
private_key_pem = tls_private_key.aggregation-ca[0].private_key_pem
subject {
common_name = "kubernetes-front-proxy-ca"
}
is_ca_certificate = true
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"cert_signing",
]
}
# Kubernetes apiserver (i.e. front-proxy-client)
# Files: tls/{aggregation-client.crt,aggregation-client.key}
resource "tls_private_key" "aggregation-client" {
count = var.enable_aggregation ? 1 : 0
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "aggregation-client" {
count = var.enable_aggregation ? 1 : 0
private_key_pem = tls_private_key.aggregation-client[0].private_key_pem
subject {
common_name = "kube-apiserver"
}
}
resource "tls_locally_signed_cert" "aggregation-client" {
count = var.enable_aggregation ? 1 : 0
cert_request_pem = tls_cert_request.aggregation-client[0].cert_request_pem
ca_private_key_pem = tls_private_key.aggregation-ca[0].private_key_pem
ca_cert_pem = tls_self_signed_cert.aggregation-ca[0].cert_pem
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"client_auth",
]
}
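When enable_aggregation is true, kube-apiserver needs front-proxy flags that reference these aggregation certificates; the static pod template earlier in this section leaves an ${aggregation_flags} placeholder for them. A sketch of how those flags could be composed, assuming the assets are mounted under /etc/kubernetes/pki; the local name and list indentation are illustrative, while the flag names are standard kube-apiserver options:

locals {
  # Sketch only: extra kube-apiserver flags for the aggregation layer.
  aggregation_flags = var.enable_aggregation ? join("", [
    "\n    - --proxy-client-cert-file=/etc/kubernetes/pki/aggregation-client.crt",
    "\n    - --proxy-client-key-file=/etc/kubernetes/pki/aggregation-client.key",
    "\n    - --requestheader-client-ca-file=/etc/kubernetes/pki/aggregation-ca.crt",
    "\n    - --requestheader-allowed-names=kube-apiserver",
    "\n    - --requestheader-extra-headers-prefix=X-Remote-Extra-",
    "\n    - --requestheader-group-headers=X-Remote-Group",
    "\n    - --requestheader-username-headers=X-Remote-User",
  ]) : ""
}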


@@ -1,58 +1,19 @@
# etcd-client-ca.crt
resource "local_file" "etcd_client_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
filename = "${var.asset_dir}/tls/etcd-client-ca.crt"
locals {
# etcd TLS assets map
etcd_tls = {
"tls/etcd/etcd-client-ca.crt" = tls_self_signed_cert.etcd-ca.cert_pem,
"tls/etcd/etcd-client.crt" = tls_locally_signed_cert.client.cert_pem,
"tls/etcd/etcd-client.key" = tls_private_key.client.private_key_pem
"tls/etcd/server-ca.crt" = tls_self_signed_cert.etcd-ca.cert_pem,
"tls/etcd/server.crt" = tls_locally_signed_cert.server.cert_pem
"tls/etcd/server.key" = tls_private_key.server.private_key_pem
"tls/etcd/peer-ca.crt" = tls_self_signed_cert.etcd-ca.cert_pem,
"tls/etcd/peer.crt" = tls_locally_signed_cert.peer.cert_pem
"tls/etcd/peer.key" = tls_private_key.peer.private_key_pem
}
}
# etcd-client.crt
resource "local_file" "etcd_client_crt" {
content = "${tls_locally_signed_cert.client.cert_pem}"
filename = "${var.asset_dir}/tls/etcd-client.crt"
}
# etcd-client.key
resource "local_file" "etcd_client_key" {
content = "${tls_private_key.client.private_key_pem}"
filename = "${var.asset_dir}/tls/etcd-client.key"
}
# server-ca.crt
resource "local_file" "etcd_server_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
filename = "${var.asset_dir}/tls/etcd/server-ca.crt"
}
# server.crt
resource "local_file" "etcd_server_crt" {
content = "${tls_locally_signed_cert.server.cert_pem}"
filename = "${var.asset_dir}/tls/etcd/server.crt"
}
# server.key
resource "local_file" "etcd_server_key" {
content = "${tls_private_key.server.private_key_pem}"
filename = "${var.asset_dir}/tls/etcd/server.key"
}
# peer-ca.crt
resource "local_file" "etcd_peer_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
filename = "${var.asset_dir}/tls/etcd/peer-ca.crt"
}
# peer.crt
resource "local_file" "etcd_peer_crt" {
content = "${tls_locally_signed_cert.peer.cert_pem}"
filename = "${var.asset_dir}/tls/etcd/peer.crt"
}
# peer.key
resource "local_file" "etcd_peer_key" {
content = "${tls_private_key.peer.private_key_pem}"
filename = "${var.asset_dir}/tls/etcd/peer.key"
}
# certificates and keys
# etcd CA
resource "tls_private_key" "etcd-ca" {
algorithm = "RSA"
@@ -60,8 +21,7 @@ resource "tls_private_key" "etcd-ca" {
}
resource "tls_self_signed_cert" "etcd-ca" {
key_algorithm = "${tls_private_key.etcd-ca.algorithm}"
private_key_pem = "${tls_private_key.etcd-ca.private_key_pem}"
private_key_pem = tls_private_key.etcd-ca.private_key_pem
subject {
common_name = "etcd-ca"
@@ -78,16 +38,15 @@ resource "tls_self_signed_cert" "etcd-ca" {
]
}
# client certs are used for client (apiserver, locksmith, etcd-operator)
# to etcd communication
# etcd Client (apiserver to etcd communication)
resource "tls_private_key" "client" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "client" {
key_algorithm = "${tls_private_key.client.algorithm}"
private_key_pem = "${tls_private_key.client.private_key_pem}"
private_key_pem = tls_private_key.client.private_key_pem
subject {
common_name = "etcd-client"
@@ -98,19 +57,14 @@ resource "tls_cert_request" "client" {
"127.0.0.1",
]
dns_names = ["${concat(
var.etcd_servers,
list(
"localhost",
))}"]
dns_names = concat(var.etcd_servers, ["localhost"])
}
resource "tls_locally_signed_cert" "client" {
cert_request_pem = "${tls_cert_request.client.cert_request_pem}"
cert_request_pem = tls_cert_request.client.cert_request_pem
ca_key_algorithm = "${join(" ", tls_self_signed_cert.etcd-ca.*.key_algorithm)}"
ca_private_key_pem = "${join(" ", tls_private_key.etcd-ca.*.private_key_pem)}"
ca_cert_pem = "${join(" ", tls_self_signed_cert.etcd-ca.*.cert_pem)}"
ca_private_key_pem = tls_private_key.etcd-ca.private_key_pem
ca_cert_pem = tls_self_signed_cert.etcd-ca.cert_pem
validity_period_hours = 8760
@@ -122,14 +76,15 @@ resource "tls_locally_signed_cert" "client" {
]
}
# etcd Server
resource "tls_private_key" "server" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "server" {
key_algorithm = "${tls_private_key.server.algorithm}"
private_key_pem = "${tls_private_key.server.private_key_pem}"
private_key_pem = tls_private_key.server.private_key_pem
subject {
common_name = "etcd-server"
@@ -140,19 +95,14 @@ resource "tls_cert_request" "server" {
"127.0.0.1",
]
dns_names = ["${concat(
var.etcd_servers,
list(
"localhost",
))}"]
dns_names = concat(var.etcd_servers, ["localhost"])
}
resource "tls_locally_signed_cert" "server" {
cert_request_pem = "${tls_cert_request.server.cert_request_pem}"
cert_request_pem = tls_cert_request.server.cert_request_pem
ca_key_algorithm = "${join(" ", tls_self_signed_cert.etcd-ca.*.key_algorithm)}"
ca_private_key_pem = "${join(" ", tls_private_key.etcd-ca.*.private_key_pem)}"
ca_cert_pem = "${join(" ", tls_self_signed_cert.etcd-ca.*.cert_pem)}"
ca_private_key_pem = tls_private_key.etcd-ca.private_key_pem
ca_cert_pem = tls_self_signed_cert.etcd-ca.cert_pem
validity_period_hours = 8760
@@ -164,29 +114,29 @@ resource "tls_locally_signed_cert" "server" {
]
}
# etcd Peer
resource "tls_private_key" "peer" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "peer" {
key_algorithm = "${tls_private_key.peer.algorithm}"
private_key_pem = "${tls_private_key.peer.private_key_pem}"
private_key_pem = tls_private_key.peer.private_key_pem
subject {
common_name = "etcd-peer"
organization = "etcd"
}
dns_names = ["${var.etcd_servers}"]
dns_names = var.etcd_servers
}
resource "tls_locally_signed_cert" "peer" {
cert_request_pem = "${tls_cert_request.peer.cert_request_pem}"
cert_request_pem = tls_cert_request.peer.cert_request_pem
ca_key_algorithm = "${join(" ", tls_self_signed_cert.etcd-ca.*.key_algorithm)}"
ca_private_key_pem = "${join(" ", tls_private_key.etcd-ca.*.private_key_pem)}"
ca_cert_pem = "${join(" ", tls_self_signed_cert.etcd-ca.*.cert_pem)}"
ca_private_key_pem = tls_private_key.etcd-ca.private_key_pem
ca_cert_pem = tls_self_signed_cert.etcd-ca.cert_pem
validity_period_hours = 8760
@@ -197,3 +147,4 @@ resource "tls_locally_signed_cert" "peer" {
"client_auth",
]
}


@@ -1,33 +1,28 @@
# NOTE: Across this module, the following syntax is used at various places:
# `"${var.ca_certificate == "" ? join(" ", tls_private_key.kube-ca.*.private_key_pem) : var.ca_private_key}"`
#
# Due to https://github.com/hashicorp/hil/issues/50, both sides of conditions
# are evaluated, until one of them is discarded. Unfortunately, the
# `{tls_private_key/tls_self_signed_cert}.kube-ca` resources are created
# conditionally and might not be present - in which case an error is
# generated. Because a `count` is used on these resources, the resources can be
# referenced as lists with the `.*` notation, and arrays are allowed to be
# empty. The `join()` interpolation function is then used to cast them back to
# a string. Since `count` can only be 0 or 1, the returned value is either empty
# (and discarded anyways) or the desired value.
locals {
# Kubernetes TLS assets map
kubernetes_tls = {
"tls/k8s/ca.crt" = tls_self_signed_cert.kube-ca.cert_pem,
"tls/k8s/ca.key" = tls_private_key.kube-ca.private_key_pem,
"tls/k8s/apiserver.crt" = tls_locally_signed_cert.apiserver.cert_pem,
"tls/k8s/apiserver.key" = tls_private_key.apiserver.private_key_pem,
"tls/k8s/service-account.pub" = tls_private_key.service-account.public_key_pem
"tls/k8s/service-account.key" = tls_private_key.service-account.private_key_pem
}
}
# Kubernetes CA (tls/{ca.crt,ca.key})
resource "tls_private_key" "kube-ca" {
count = "${var.ca_certificate == "" ? 1 : 0}"
resource "tls_private_key" "kube-ca" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_self_signed_cert" "kube-ca" {
count = "${var.ca_certificate == "" ? 1 : 0}"
key_algorithm = "${tls_private_key.kube-ca.algorithm}"
private_key_pem = "${tls_private_key.kube-ca.private_key_pem}"
private_key_pem = tls_private_key.kube-ca.private_key_pem
subject {
common_name = "kube-ca"
organization = "bootkube"
common_name = "kubernetes-ca"
organization = "typhoon"
}
is_ca_certificate = true
@@ -40,50 +35,39 @@ resource "tls_self_signed_cert" "kube-ca" {
]
}
resource "local_file" "kube-ca-key" {
content = "${var.ca_certificate == "" ? join(" ", tls_private_key.kube-ca.*.private_key_pem) : var.ca_private_key}"
filename = "${var.asset_dir}/tls/ca.key"
}
resource "local_file" "kube-ca-crt" {
content = "${var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_certificate}"
filename = "${var.asset_dir}/tls/ca.crt"
}
# Kubernetes API Server (tls/{apiserver.key,apiserver.crt})
resource "tls_private_key" "apiserver" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "apiserver" {
key_algorithm = "${tls_private_key.apiserver.algorithm}"
private_key_pem = "${tls_private_key.apiserver.private_key_pem}"
private_key_pem = tls_private_key.apiserver.private_key_pem
subject {
common_name = "kube-apiserver"
organization = "kube-master"
organization = "system:masters"
}
dns_names = [
"${var.api_servers}",
dns_names = flatten([
var.api_servers,
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.${var.cluster_domain_suffix}",
]
])
ip_addresses = [
"${cidrhost(var.service_cidr, 1)}",
cidrhost(var.service_cidr, 1),
]
}
resource "tls_locally_signed_cert" "apiserver" {
cert_request_pem = "${tls_cert_request.apiserver.cert_request_pem}"
cert_request_pem = tls_cert_request.apiserver.cert_request_pem
ca_key_algorithm = "${var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.key_algorithm) : var.ca_key_alg}"
ca_private_key_pem = "${var.ca_certificate == "" ? join(" ", tls_private_key.kube-ca.*.private_key_pem) : var.ca_private_key}"
ca_cert_pem = "${var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem): var.ca_certificate}"
ca_private_key_pem = tls_private_key.kube-ca.private_key_pem
ca_cert_pem = tls_self_signed_cert.kube-ca.cert_pem
validity_period_hours = 8760
@@ -95,71 +79,101 @@ resource "tls_locally_signed_cert" "apiserver" {
]
}
resource "local_file" "apiserver-key" {
content = "${tls_private_key.apiserver.private_key_pem}"
filename = "${var.asset_dir}/tls/apiserver.key"
# kube-controller-manager
resource "tls_private_key" "controller-manager" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "local_file" "apiserver-crt" {
content = "${tls_locally_signed_cert.apiserver.cert_pem}"
filename = "${var.asset_dir}/tls/apiserver.crt"
resource "tls_cert_request" "controller-manager" {
private_key_pem = tls_private_key.controller-manager.private_key_pem
subject {
common_name = "system:kube-controller-manager"
}
}
resource "tls_locally_signed_cert" "controller-manager" {
cert_request_pem = tls_cert_request.controller-manager.cert_request_pem
ca_private_key_pem = tls_private_key.kube-ca.private_key_pem
ca_cert_pem = tls_self_signed_cert.kube-ca.cert_pem
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"client_auth",
]
}
# kube-scheduler
resource "tls_private_key" "scheduler" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "scheduler" {
private_key_pem = tls_private_key.scheduler.private_key_pem
subject {
common_name = "system:kube-scheduler"
}
}
resource "tls_locally_signed_cert" "scheduler" {
cert_request_pem = tls_cert_request.scheduler.cert_request_pem
ca_private_key_pem = tls_private_key.kube-ca.private_key_pem
ca_cert_pem = tls_self_signed_cert.kube-ca.cert_pem
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"client_auth",
]
}
# Kubernetes Admin (tls/{admin.key,admin.crt})
resource "tls_private_key" "admin" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "admin" {
private_key_pem = tls_private_key.admin.private_key_pem
subject {
common_name = "kubernetes-admin"
organization = "system:masters"
}
}
resource "tls_locally_signed_cert" "admin" {
cert_request_pem = tls_cert_request.admin.cert_request_pem
ca_private_key_pem = tls_private_key.kube-ca.private_key_pem
ca_cert_pem = tls_self_signed_cert.kube-ca.cert_pem
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"client_auth",
]
}
# Kubernetes Service Account (tls/{service-account.key,service-account.pub})
resource "tls_private_key" "service-account" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "local_file" "service-account-key" {
content = "${tls_private_key.service-account.private_key_pem}"
filename = "${var.asset_dir}/tls/service-account.key"
}
resource "local_file" "service-account-crt" {
content = "${tls_private_key.service-account.public_key_pem}"
filename = "${var.asset_dir}/tls/service-account.pub"
}
# Kubelet
resource "tls_private_key" "kubelet" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "kubelet" {
key_algorithm = "${tls_private_key.kubelet.algorithm}"
private_key_pem = "${tls_private_key.kubelet.private_key_pem}"
subject {
common_name = "kubelet"
organization = "system:masters"
}
}
resource "tls_locally_signed_cert" "kubelet" {
cert_request_pem = "${tls_cert_request.kubelet.cert_request_pem}"
ca_key_algorithm = "${var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.key_algorithm) : var.ca_key_alg}"
ca_private_key_pem = "${var.ca_certificate == "" ? join(" ", tls_private_key.kube-ca.*.private_key_pem) : var.ca_private_key}"
ca_cert_pem = "${var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_certificate}"
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"server_auth",
"client_auth",
]
}
resource "local_file" "kubelet-key" {
content = "${tls_private_key.kubelet.private_key_pem}"
filename = "${var.asset_dir}/tls/kubelet.key"
}
resource "local_file" "kubelet-crt" {
content = "${tls_locally_signed_cert.kubelet.cert_pem}"
filename = "${var.asset_dir}/tls/kubelet.crt"
}
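This refactor replaces per-file local_file resources with maps of PEM contents (kubernetes_tls here, alongside etcd_tls and aggregation_tls in the sibling files). One benefit is that the maps compose cleanly; a minimal sketch, where the combined local name is illustrative and aggregation_tls is simply an empty map when aggregation is disabled:

locals {
  # Illustrative only: merge the TLS asset maps defined across these files.
  assets = merge(local.kubernetes_tls, local.etcd_tls, local.aggregation_tls)
}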


@@ -1,100 +1,140 @@
variable "cluster_name" {
type = string
description = "Cluster name"
type = "string"
}
variable "api_servers" {
type = list(string)
description = "List of URLs used to reach kube-apiserver"
type = "list"
}
variable "etcd_servers" {
type = list(string)
description = "List of URLs used to reach etcd servers."
type = "list"
}
variable "asset_dir" {
description = "Path to a directory where generated assets should be placed (contains secrets)"
type = "string"
}
variable "cloud_provider" {
description = "The provider for cloud services (empty string for no provider)"
type = "string"
default = ""
}
# optional
variable "networking" {
description = "Choice of networking provider (flannel or calico)"
type = "string"
default = "flannel"
}
variable "network_mtu" {
description = "CNI interface MTU (applies to calico only)"
type = "string"
default = "1500"
type = string
description = "Choice of networking provider (flannel or cilium)"
default = "cilium"
validation {
condition = contains(["flannel", "cilium"], var.networking)
error_message = "networking can be flannel or cilium."
}
}
variable "pod_cidr" {
type = string
description = "CIDR IP range to assign Kubernetes pods"
type = "string"
default = "10.2.0.0/16"
default = "10.20.0.0/14"
}
variable "service_cidr" {
type = string
description = <<EOD
CIDR IP range to assign Kubernetes services.
The 1st IP will be reserved for kube-apiserver and the 10th IP will be reserved for kube-dns.
EOD
type = "string"
default = "10.3.0.0/24"
}
variable "cluster_domain_suffix" {
description = "Queries for domains with the suffix will be answered by kube-dns"
type = "string"
default = "cluster.local"
default = "10.3.0.0/24"
}
variable "container_images" {
type = map(string)
description = "Container images to use"
type = "map"
default = {
calico = "quay.io/calico/node:v3.0.4"
calico_cni = "quay.io/calico/cni:v2.0.1"
flannel = "quay.io/coreos/flannel:v0.10.0-amd64"
flannel_cni = "quay.io/coreos/flannel-cni:v0.3.0"
hyperkube = "k8s.gcr.io/hyperkube:v1.10.0"
kubedns = "k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.9"
kubedns_dnsmasq = "k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.9"
kubedns_sidecar = "k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.9"
pod_checkpointer = "quay.io/coreos/pod-checkpointer:9dc83e1ab3bc36ca25c9f7c18ddef1b91d4a0558"
cilium_agent = "quay.io/cilium/cilium:v1.18.4"
cilium_operator = "quay.io/cilium/operator-generic:v1.18.4"
coredns = "registry.k8s.io/coredns/coredns:v1.13.1"
flannel = "docker.io/flannel/flannel:v0.27.0"
flannel_cni = "quay.io/poseidon/flannel-cni:v0.4.2"
kube_apiserver = "registry.k8s.io/kube-apiserver:v1.34.2"
kube_controller_manager = "registry.k8s.io/kube-controller-manager:v1.34.2"
kube_scheduler = "registry.k8s.io/kube-scheduler:v1.34.2"
kube_proxy = "registry.k8s.io/kube-proxy:v1.34.2"
}
}
variable "trusted_certs_dir" {
description = "Path to the directory on cluster nodes where trusted TLS certs are kept"
type = "string"
default = "/usr/share/ca-certificates"
variable "enable_aggregation" {
type = bool
description = "Enable the Kubernetes Aggregation Layer (defaults to true)"
default = true
}
variable "ca_certificate" {
description = "Existing PEM-encoded CA certificate (generated if blank)"
type = "string"
default = ""
variable "daemonset_tolerations" {
type = list(string)
description = "List of additional taint keys kube-system DaemonSets should tolerate (e.g. ['custom-role', 'gpu-role'])"
default = []
}
variable "ca_key_alg" {
description = "Algorithm used to generate ca_key (required if ca_cert is specified)"
type = "string"
default = "RSA"
# unofficial, temporary, may be removed without notice
variable "external_apiserver_port" {
type = number
description = "External kube-apiserver port (e.g. 6443 to match internal kube-apiserver port)"
default = 6443
}
variable "ca_private_key" {
description = "Existing Certificate Authority private key (required if ca_certificate is set)"
type = "string"
default = ""
variable "cluster_domain_suffix" {
type = string
description = "Queries for domains with the suffix will be answered by kube-dns"
default = "cluster.local"
}
variable "components" {
description = "Configure pre-installed cluster components"
type = object({
enable = optional(bool, true)
coredns = optional(
object({
enable = optional(bool, true)
}),
{
enable = true
}
)
kube_proxy = optional(
object({
enable = optional(bool, true)
}),
{
enable = true
}
)
# CNI providers are enabled for pre-install by default, but only the
# provider matching var.networking is actually installed.
flannel = optional(
object({
enable = optional(bool, true)
}),
{
enable = true
}
)
cilium = optional(
object({
enable = optional(bool, true)
}),
{
enable = true
}
)
})
default = {
enable = true
coredns = null
kube_proxy = null
flannel = null
cilium = null
}
# Set the variable value to the default value when the caller
# sets it to null.
nullable = false
}
variable "service_account_issuer" {
type = string
description = "kube-apiserver service account token issuer (used as an identifier in 'iss' claims)"
default = "https://kubernetes.default.svc.cluster.local"
}
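Taken together, these variables are driven by a caller. A hedged usage sketch that reuses the example values shown earlier in this section; the module name and source address are assumptions:

module "bootstrap" {
  # The source is an assumption for illustration.
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git"

  cluster_name = "example"
  api_servers  = ["node1.example.com"]
  etcd_servers = ["node1.example.com"]
  networking   = "cilium"

  # Keep the defaults but skip pre-installing kube-proxy.
  components = {
    enable     = true
    kube_proxy = { enable = false }
  }
}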

versions.tf

@@ -0,0 +1,9 @@
# Terraform version and plugin versions
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
random = "~> 3.1"
tls = "~> 4.0"
}
}
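The shorthand constraints above are the legacy form; on Terraform 0.13+ they resolve to providers in the hashicorp namespace. The equivalent explicit source/version form, shown only for clarity:

terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    random = {
      source  = "hashicorp/random"
      version = "~> 3.1"
    }
    tls = {
      source  = "hashicorp/tls"
      version = "~> 4.0"
    }
  }
}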