307 Commits

Author SHA1 Message Date
Dalton Hubble
f04e07c001 Update kube-apiserver runtime-config flags
* MutatingAdmissionPolicy is available as a beta and alpha API
2025-11-23 16:05:07 -08:00
dghubble-renovate[bot]
a589c32870 Bump actions/checkout action from v5 to v6 2025-11-21 09:57:50 -08:00
Dalton Hubble
3c8c071333 Update Kubernetes from v1.34.1 to v1.34.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.34.md
2025-11-21 09:30:14 -08:00
Dalton Hubble
a4e9ef0430 Update Kubernetes components from v1.33.1 to v1.34.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.34.md
2025-11-21 09:14:02 -08:00
dependabot[bot]
01667f6904 Bump actions/checkout from 4 to 5
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-24 21:31:06 -07:00
Dalton Hubble
c7e2a637d7 Rollback Cilium from v1.17.6 to v1.17.5
* Cilium v1.17.6 is broken, see https://github.com/cilium/cilium/issues/40571
2025-07-27 14:20:21 -07:00
Dalton Hubble
cd82a41654 Update Kubernetes from v1.33.2 to v1.33.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.33.md#v1333
2025-07-19 09:48:16 -07:00
Dalton Hubble
9af5837c35 Update Cilium and flannel container images 2025-06-29 17:30:21 -07:00
Dalton Hubble
36d543051b Update Kubernetes from v1.33.1 to v1.33.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.33.md#v1332
2025-06-29 17:20:32 -07:00
Dalton Hubble
2c7e627201 Update Kubernetes from v1.33.0 to v1.33.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.33.md#v1331
2025-05-24 20:22:11 -07:00
Dalton Hubble
18eb9cded5 Update Kubernetes from v1.32.3 to v1.33.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.33.md#v1330
2025-05-06 19:59:01 -07:00
Dalton Hubble
1e4b00eab9 Update Cilium and flannel container images
* Bump component images for those using the builtin bootstrap
2025-03-18 20:08:24 -07:00
Dalton Hubble
209e02b4f2 Update Kubernetes from v1.32.1 to v1.32.3
* Update Cilium from v1.16.5 to v1.17.1 as well
2025-03-12 21:06:46 -07:00
Dalton Hubble
c50071487c Add service_account_issuer variable for kube-apiserver
* Allow the service account token issuer to be adjusted or served
from a public bucket or static cache
* Output the public key used to sign service account tokens so that
it can be used to compute JWKS (JSON Web Key Sets) if desired

Docs: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery
2025-02-07 10:58:54 -08:00
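A minimal sketch of consuming this from a cluster module. The `module.bootstrap` usage, placeholder issuer URL, and the `service_account_public_key` output name are assumptions for illustration, not the module's confirmed interface:

```
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=SHA"

  # ...cluster_name, api_servers, etcd_servers, and other arguments...

  # Serve the issuer from an adjusted URL (e.g. a public bucket or static cache)
  service_account_issuer = "https://issuer.example.com"
}

# Expose the public key used to sign service account tokens so a JWKS
# document can be computed out-of-band (output name assumed)
output "service_account_public_key" {
  value = module.bootstrap.service_account_public_key
}
```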
Dalton Hubble
997f6012b5 Update Kubernetes from v1.32.0 to v1.32.1
* Enable the Kubernetes MutatingAdmissionPolicy alpha via feature gate
* Update CoreDNS from v1.11.4 to v1.12.0
* Update flannel from v0.26.2 to v0.26.3

Docs: https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/
2025-01-20 15:23:22 -08:00
Dalton Hubble
3edb0ae646 Change flannel port from 4789 to 8472
* flannel and Cilium default to UDP 8472 for VXLAN traffic to
avoid conflicts with other VXLAN usage (e.g. Open vSwitch)
* Aligning flannel and Cilium to use the same VXLAN port makes
firewall rules or security policies simpler across clouds
2024-12-30 11:59:36 -08:00
Dalton Hubble
33f8d2083c Remove calico_manifests from assets_dist outputs 2024-12-28 20:37:28 -08:00
Dalton Hubble
79b8ae1280 Remove Calico and associated variables
* Drop support for Calico CNI
2024-12-28 20:34:29 -08:00
Dalton Hubble
0d3f17393e Change the default Pod CIDR to 10.20.0.0/14
* Change the default Pod CIDR from 10.2.0.0/16 to 10.20.0.0/14
(10.20.0.0 - 10.23.255.255) to support 1024 nodes by default
* Most CNI providers divide the Pod CIDR so that each node has
a /24 to allocate to local pods (256). The previous `10.2.0.0/16`
default only fits 256 /24's so 256 nodes were supported without
customizing the pod_cidr
2024-12-23 10:16:42 -08:00
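For illustration, overriding the Pod CIDR looks roughly like the sketch below (module block trimmed; the arithmetic in the comment follows the commit message):

```
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=SHA"

  # ...other cluster arguments...

  # The new default: a /14 divided into per-node /24s yields
  # 2^(24-14) = 1024 node subnets (10.20.0.0 - 10.23.255.255)
  pod_cidr = "10.20.0.0/14"
}
```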
Dalton Hubble
c775b4de9a Update Kubernetes from v1.31.4 to v1.32.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.32.md#v1320
2024-12-20 16:58:35 -08:00
Dalton Hubble
fbe7fa0a57 Update Kubernetes from v1.31.3 to v1.31.4
* Update flannel from v0.26.0 to v0.26.2
* Update Cilium from v1.16.4 to v1.16.5
2024-12-20 15:06:55 -08:00
Dalton Hubble
e6a1c7bccf Update Kubernetes from v1.31.2 to v1.31.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#v1313
2024-11-24 08:40:22 -08:00
Dalton Hubble
95203db11c Update Kubernetes from v1.31.1 to v1.31.2
* Update Cilium from v1.16.1 to v1.16.3
* Update flannel from v0.25.6 to v0.26.0
2024-10-26 08:30:38 -07:00
Dalton Hubble
1cfc654494 Update Kubernetes from v1.31.0 to v1.31.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#v1311
2024-09-20 14:29:33 -07:00
Dalton Hubble
1ddecb1cef Change Cilium configuration to use kube-proxy replacement
* Skip creating the kube-proxy DaemonSet when Cilium is chosen
2024-08-23 07:15:18 -07:00
Dalton Hubble
0b78c87997 Fix flannel-cni container image version to v0.4.2
* This was mistakenly bumped to v0.4.4 which doesn't exist

Rel: https://github.com/poseidon/terraform-render-bootstrap/pull/379
2024-08-22 19:19:37 -07:00
Dalton Hubble
7e8551750c Update Kubernetes from v1.30.4 to v1.31.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#v1310
2024-08-17 08:02:30 -07:00
Dalton Hubble
8b6a3a4c0d Update Kubernetes from v1.30.3 to v1.30.4
* Update Cilium from v1.16.0 to v1.16.1
2024-08-16 08:21:49 -07:00
Dalton Hubble
66d8fe3a4d Update CoreDNS and Cilium components
* Update CoreDNS from v1.11.1 to v1.11.3
* Update Cilium from v1.15.7 to v1.16.0
2024-08-04 21:03:23 -07:00
Dalton Hubble
45b6b7e877 Rename context in kubeconfig-admin
* Use the cluster_name as the kubeconfig context, cluster,
and user. Drop the trailing -context suffix
2024-08-04 20:43:03 -07:00
Dalton Hubble
1609060f4f Update Kubernetes from v1.30.2 to v1.30.3
* Update builtin Cilium manifests from v1.15.6 to v1.15.7
* Update builtin flannel manifests from v0.25.4 to v0.25.5
2024-07-20 10:59:20 -07:00
Dalton Hubble
886f501bf7 Update Kubernetes from v1.30.1 to v1.30.2
* Update CoreDNS from v1.9.4 to v1.11.1
* Update Cilium from v1.15.5 to v1.15.6
* Update flannel from v0.25.1 to v0.25.4
2024-06-17 08:11:41 -07:00
Dalton Hubble
e1b1e0c75e Update Cilium from v1.15.4 to v1.15.5
* https://github.com/cilium/cilium/releases/tag/v1.15.5
2024-05-19 16:36:55 -07:00
Dalton Hubble
a54fe54d98 Extend components variable with flannel, calico, and cilium
* By default the `networking` CNI provider is pre-installed,
but the components variable provides an extensible mechanism
to skip installing these components
* Validate that networking can only be set to one of flannel,
calico, or cilium
2024-05-18 14:56:44 -07:00
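A sketch of what such a validation could look like in the module's variables (illustrative only; the description text and lack of a default are assumptions):

```
variable "networking" {
  type        = string
  description = "Choice of CNI provider (flannel, calico, or cilium)"

  validation {
    condition     = contains(["flannel", "calico", "cilium"], var.networking)
    error_message = "The networking provider must be flannel, calico, or cilium."
  }
}
```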
Dalton Hubble
452bcf379d Update Kubernetes from v1.30.0 to v1.30.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md#v1301
2024-05-15 21:56:50 -07:00
Dalton Hubble
990286021a Organize CoreDNS and kube-proxy manifests so they're optional
* Add a `coredns` variable to configure the CoreDNS manifests,
with an `enable` field to determine whether CoreDNS manifests
are applied to the cluster during provisioning (default true)
* Add a `kube-proxy` variable to configure kube-proxy manifests,
with an `enable` field to determine whether the kube-proxy
Daemonset is applied to the cluster during provisioning (default
true)
* These options allow provisioning clusters without CoreDNS
or kube-proxy, so these components can be customized or managed
through separate plan/apply processes or automation
2024-05-12 18:05:55 -07:00
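A hedged sketch of opting out of the builtin CoreDNS and kube-proxy manifests via these variables; the exact variable names and object shapes are taken from the commit text and should be treated as assumptions:

```
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=SHA"

  # ...other cluster arguments...

  # Skip applying CoreDNS and kube-proxy during provisioning so they
  # can be managed through a separate plan/apply process
  coredns = {
    enable = false
  }
  kube-proxy = {
    enable = false
  }
}
```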
Dalton Hubble
baf406f261 Update Cilium and flannel container images
* Update Cilium from v1.15.3 to v1.15.4
* Update flannel from v0.24.4 to v0.25.1
2024-05-12 08:26:50 -07:00
dghubble-renovate[bot]
2bb4ec5bfd Bump provider tls from 3.4.0 to v4 2024-05-04 09:00:14 -07:00
Dalton Hubble
d233e90754 Update Kubernetes from v1.29.3 to v1.30.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md#v1300
2024-04-23 20:43:33 -07:00
Dalton Hubble
959b9ea04d Update Calico and Cilium container image versions
* Update Cilium from v1.15.2 to v1.15.3
* Update Calico from v3.27.2 to v3.27.3
2024-04-03 22:43:55 -07:00
Dalton Hubble
9145a587b3 Update Kubernetes from v1.29.2 to v1.29.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#v1293
2024-03-23 00:45:02 -07:00
Dalton Hubble
5dfa185b9d Update Cilium and flannel container image versions
* https://github.com/cilium/cilium/releases/tag/v1.15.2
* https://github.com/flannel-io/flannel/releases/tag/v0.24.4
2024-03-22 11:12:32 -07:00
Dalton Hubble
e9d52a997e Update Calico from v3.26.3 to v3.27.2
* Calico update fixes https://github.com/projectcalico/calico/issues/8372
2024-02-25 12:01:23 -08:00
Dalton Hubble
da65b4816d Update Cilium from v1.14.3 to v1.15.1
* https://github.com/cilium/cilium/releases/tag/v1.15.1
2024-02-23 22:46:20 -08:00
Dalton Hubble
2909ea9da3 Update flannel from v0.22.3 to v0.24.2
* https://github.com/flannel-io/flannel/releases/tag/v0.24.2
2024-02-18 16:13:19 -08:00
Dalton Hubble
763f56d0a5 Update Kubernetes from v1.29.1 to v1.29.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#v1292
2024-02-18 15:46:02 -08:00
Dalton Hubble
acc7460fcc Update Kubernetes from v1.29.0 to v1.29.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#v1291
2024-02-04 10:43:58 -08:00
Dalton Hubble
f0d22ec895 Update Kubernetes from v1.28.4 to v1.29.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#v1290
2023-12-22 09:01:31 -08:00
Dalton Hubble
a6e637d196 Update Kubernetes from v1.28.3 to v1.28.4 2023-11-21 06:11:30 -08:00
dependabot[bot]
521cf9604f Bump hashicorp/setup-terraform from 2 to 3
Bumps [hashicorp/setup-terraform](https://github.com/hashicorp/setup-terraform) from 2 to 3.
- [Release notes](https://github.com/hashicorp/setup-terraform/releases)
- [Changelog](https://github.com/hashicorp/setup-terraform/blob/main/CHANGELOG.md)
- [Commits](https://github.com/hashicorp/setup-terraform/compare/v2...v3)

---
updated-dependencies:
- dependency-name: hashicorp/setup-terraform
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-31 09:10:42 -07:00
Dalton Hubble
9a942ce016 Update flannel from v0.22.2 to v0.22.3
* https://github.com/flannel-io/flannel/releases/tag/v0.22.3
2023-10-29 22:11:26 -07:00
Dalton Hubble
d151ab77b7 Update Calico from v3.26.1 to v3.26.3
* https://github.com/projectcalico/calico/releases/tag/v3.26.3
2023-10-29 16:31:18 -07:00
Dalton Hubble
f911337cd8 Update Cilium from v1.14.2 to v1.14.3
* https://github.com/cilium/cilium/releases/tag/v1.14.3
2023-10-29 16:18:08 -07:00
Dalton Hubble
720adbeb43 Configure Cilium agents to connect to apiserver explicitly
* Cilium v1.14 seems to have problems reliably accessing the
apiserver via default in-cluster service discovery (relies on
kube-proxy instead of DNS) after some time
* Configure Cilium agents to use the DNS name resolving to the
cluster's load balanced apiserver and port. Regrettably, this
relies on external DNS rather than being self-contained, but it's
what Cilium pushes users towards
2023-10-29 16:08:21 -07:00
Dalton Hubble
ae571974b0 Update Kubernetes from v1.28.2 to v1.28.3
* https://github.com/poseidon/kubelet/pull/95
2023-10-22 18:38:28 -07:00
Dalton Hubble
19b59cc66f Update Cilium from v1.14.1 to v1.14.2
* https://github.com/cilium/cilium/releases/tag/v1.14.2
2023-09-16 17:07:19 +02:00
Dalton Hubble
e3ffe4a5d5 Bump Kubernetes images from v1.28.1 to v1.28.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md#v1282
2023-09-14 12:58:55 -07:00
dependabot[bot]
ebfd639ff8 Bump actions/checkout from 3 to 4
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-04 13:42:29 -07:00
Dalton Hubble
0065e511c5 Update Kubernetes from v1.28.0 to v1.28.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md#v1281
2023-08-26 11:34:33 -07:00
Dalton Hubble
251adf88d4 Emulate Cilium KubeProxyReplacement partial mode
* Cilium KubeProxyReplacement mode used to support a partial
option, but in v1.14 it became true or false
* Emulate the old partial mode by disabling KubeProxyReplacement
but turning on the individual features
* The alternative of enabling KubeProxyReplacement has ramifications
because Cilium then needs to be configured with the apiserver
address, which creates a dependency on the cloud provider's DNS,
clashes with kube-proxy, and removing kube-proxy creates complications
for how node health is assessed. Removing kube-proxy is further
complicated by the fact it's still used by other supported CNIs, which
creates a tricky support matrix

Docs: https://docs.cilium.io/en/latest/network/kubernetes/kubeproxy-free/#kube-proxy-hybrid-modes
2023-08-26 10:45:23 -07:00
Dalton Hubble
a4fc73db7e Fix Cilium kube-proxy-replacement mode to true
* In Cilium v1.14, kube-proxy-replacement mode again changed
its valid values, this time from partial to true/false. The
value should be true for Cilium to support HostPort features
as expected

```
cilium status --verbose
Services:
  - ClusterIP:      Enabled
  - NodePort:       Enabled (Range: 30000-32767)
  - LoadBalancer:   Enabled
  - externalIPs:    Enabled
  - HostPort:       Enabled
```
2023-08-21 19:53:46 -07:00
Dalton Hubble
29e81aedd4 Update Cilium from v1.14.0 to v1.14.1
* Also bump flannel from v0.22.1 to v0.22.2
2023-08-20 16:03:43 -07:00
Dalton Hubble
a55741d51d Update Kubernetes from v1.27.4 to v1.28.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md#v1280
2023-08-20 14:57:39 -07:00
Dalton Hubble
35848a50c6 Upgrade Cilium from v1.13.4 to v1.14.0
* https://github.com/cilium/cilium/releases/tag/v1.14.0
2023-07-30 09:17:31 -07:00
Dalton Hubble
d4da2f99fb Update flannel from v0.22.0 to v0.22.1
* https://github.com/flannel-io/flannel/releases/tag/v0.22.1
2023-07-29 17:38:33 -07:00
Dalton Hubble
31a13c53af Update Kubernetes from v1.27.3 to v1.27.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#v1274
2023-07-21 07:58:14 -07:00
Dalton Hubble
162baaf5e1 Upgrade Calico from v3.25.1 to v3.26.1
* Calico made some changes related to how the install-cni init
container runs, RBAC, and adds a new CRD bgpfilters
* https://github.com/projectcalico/calico/releases/tag/v3.26.1
2023-06-19 12:25:36 -07:00
Dalton Hubble
e727c63cc2 Update Kubernetes from v1.27.2 to v1.27.3
* Update Cilium from v1.13.3 to v1.13.4

Rel: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#v1273
2023-06-16 08:25:32 -07:00
Dalton Hubble
8c3ca3e935 Update flannel from v0.21.2 to v0.22.0
* https://github.com/flannel-io/flannel/releases/tag/v0.22.0
2023-06-11 19:56:57 -07:00
Dalton Hubble
9c4134240f Update Cilium from v1.13.2 to v1.13.3
* https://github.com/cilium/cilium/releases/tag/v1.13.3
2023-06-11 19:53:33 -07:00
Tobias Jungel
7c559e15e2 Update cilium masquerade flag
14ced84f7e
introduced `enable-ipv6-masquerade` and `enable-ipv4-masquerade`. This
updates the ConfigMap of cilium to align with the expected flag.

enable-ipv4-masquerade is enabled and enable-ipv6-masquerade is
disabled.
2023-05-23 17:47:23 -07:00
Dalton Hubble
9932d03696 Update Kubernetes from v1.27.1 to v1.27.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#v1272
2023-05-21 14:00:23 -07:00
Dalton Hubble
39d7b3eff9 Update Cilium from v1.13.1 to v1.13.2
* https://github.com/cilium/cilium/releases/tag/v1.13.2
2023-04-20 08:41:17 -07:00
Dalton Hubble
4d3eeadb35 Update Kubernetes from v1.27.0 to v1.27.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#v1271
2023-04-15 23:03:39 -07:00
Dalton Hubble
c0a4082796 Update Calico from v3.25.0 to v3.25.1
* https://github.com/projectcalico/calico/releases/tag/v3.25.1
2023-04-15 22:58:00 -07:00
Dalton Hubble
54ebf13564 Update Kubernetes from v1.26.3 to v1.27.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#v1270
2023-04-13 20:00:20 -07:00
Dalton Hubble
0a5d722de6 Change Cilium to use an init container to install CNI plugins
* Starting in Cilium v1.13.1, the cilium-cni plugin is installed
via an init container rather than by the Cilium agent container

Rel: https://github.com/cilium/cilium/issues/24457
2023-03-29 10:03:08 -07:00
Dalton Hubble
44315b8c02 Update Kubernetes from v1.26.2 to v1.26.3
* Update Cilium from v1.13.0 to v1.13.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1263
2023-03-21 18:15:16 -07:00
Dalton Hubble
5fe3380d5f Update Cilium from v1.12.6 to v1.13.0
* https://github.com/cilium/cilium/releases/tag/v1.13.0
* Change kube-proxy-replacement from probe (deprecated) to
partial and disable nodeport health checks as recommended
* Add ciliumloadbalancerippools to ClusterRole
* Enable BPF socket load balancing from host namespace
2023-03-14 11:13:23 -07:00
Dalton Hubble
607a05692b Update Kubernetes from v1.26.1 to v1.26.2
* Update Kubernetes from v1.26.1 to v1.26.2
* Update flannel from v0.21.1 to v0.21.2
* Fix flannel container image location to docker.io/flannel/flannel

Rel:

* https://github.com/kubernetes/kubernetes/releases/tag/v1.26.2
* https://github.com/flannel-io/flannel/releases/tag/v0.21.2
2023-03-01 17:09:30 -08:00
Dalton Hubble
7f9853fca3 Update flannel from v0.20.2 to v0.21.1
* https://github.com/flannel-io/flannel/releases/tag/v0.21.1
2023-02-09 09:55:02 -08:00
Dalton Hubble
9a2822282b Update Cilium from v1.12.5 to v1.12.6
* https://github.com/cilium/cilium/releases/tag/v1.12.6
2023-02-09 09:39:00 -08:00
Dalton Hubble
4621c6b256 Update Calico from v3.24.5 to v3.25.0
* https://github.com/projectcalico/calico/blob/v3.25.0/calico/_includes/release-notes/v3.25.0-release-notes.md
2023-01-23 09:22:27 -08:00
Dalton Hubble
adcc942508 Update Kubernetes from v1.26.0 to v1.26.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1261
* Update CoreDNS from v1.9.3 to v1.9.4

Rel: https://github.com/coredns/coredns/releases/tag/v1.9.4
2023-01-19 08:12:12 -08:00
Dalton Hubble
4476e946f6 Update Cilium from v1.12.4 to v1.12.5
* https://github.com/cilium/cilium/releases/tag/v1.12.5
2022-12-21 08:06:13 -08:00
Dalton Hubble
8b17f2e85e Update Kubernetes from v1.26.0-rc.1 to v1.26.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1260
2022-12-08 08:41:47 -08:00
Dalton Hubble
f863f7a551 Update Kubernetes from v1.26.0-rc.0 to v1.26.0-rc.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1260-rc1
2022-12-05 08:55:29 -08:00
Dalton Hubble
616069203e Update Kubernetes from v1.25.4 to v1.26.0-rc.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1260-rc0
2022-11-30 08:45:09 -08:00
Dalton Hubble
88d0ea5a87 Fix flannel container image registry location
* flannel moved their container image to docker.io/flannelcni/flannel
2022-11-23 16:10:00 -08:00
Dalton Hubble
7350fd24fc Update flannel from v0.15.1 to v0.20.1
* https://github.com/flannel-io/flannel/releases/tag/v0.20.1
2022-11-23 10:54:08 -08:00
Dalton Hubble
8f6b55859b Update Cilium from v1.12.3 to v1.12.4
* https://github.com/cilium/cilium/releases/tag/v1.12.4
2022-11-23 10:52:40 -08:00
Dalton Hubble
dc652cf469 Add Mastodon badge alongside Twitter 2022-11-10 09:43:21 -08:00
Dalton Hubble
9b56c710b3 Update Kubernetes from v1.25.3 to v1.25.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1254
2022-11-10 09:34:19 -08:00
Dalton Hubble
e57a66623b Update Calico from v3.24.3 to v3.24.5
* https://github.com/projectcalico/calico/releases/tag/v3.24.5
2022-11-10 09:31:58 -08:00
Dalton Hubble
8fb30b7732 Update Calico from v3.24.2 to v3.24.3
* https://github.com/projectcalico/calico/releases/tag/v3.24.3
2022-10-23 21:53:27 -07:00
Dalton Hubble
5b2fbbef84 Allow Kubelet kubeconfig to drain nodes
* Allow the Kubelet kubeconfig to get/list workloads and
evict pods to perform drain operations, via the kubelet-delete
ClusterRole bound to the system:nodes group
* Previously, the ClusterRole only allowed node deletion
2022-10-23 21:49:38 -07:00
Dalton Hubble
3db4055ccf Update Calico from v3.24.1 to v3.24.2
* https://github.com/projectcalico/calico/releases/tag/v3.24.2
2022-10-20 09:25:53 -07:00
Dalton Hubble
946d81be09 Update Cilium from v1.12.2 to v1.12.3
* https://github.com/cilium/cilium/releases/tag/v1.12.3
2022-10-17 17:16:56 -07:00
Dalton Hubble
bf465a8525 Update Kubernetes from v1.25.2 to v1.25.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1253
2022-10-13 20:53:23 -07:00
Dalton Hubble
457894c1a4 Update Kubernetes from v1.25.1 to v1.25.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1252
2022-09-22 08:11:15 -07:00
Dalton Hubble
c5928dbe5e Update Cilium from v1.12.1 to v1.12.2
* https://github.com/cilium/cilium/releases/tag/v1.12.2
2022-09-15 08:26:23 -07:00
Dalton Hubble
f3220d34cc Update Kubernetes from v1.25.0 to v1.25.1
* https://github.com/kubernetes/kubernetes/releases/tag/v1.25.1
2022-09-15 08:15:51 -07:00
Dalton Hubble
50d43778d0 Update Calico from v3.23.3 to v3.24.1
* https://github.com/projectcalico/calico/releases/tag/v3.24.1
2022-09-14 08:00:17 -07:00
Dalton Hubble
3fa08c542c Update Kubernetes from v1.24.4 to v1.25.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md
2022-08-23 17:30:42 -07:00
Dalton Hubble
31bbef9024 Update Kubernetes from v1.24.3 to v1.24.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1244
2022-08-17 20:06:24 -07:00
Dalton Hubble
c58cbec52b Update Cilium from v1.12.0 to v1.12.1
* https://github.com/cilium/cilium/releases/tag/v1.12.1
2022-08-17 08:22:48 -07:00
Dalton Hubble
bf8bdd4fb5 Switch upstream Kubernetes registry from k8s.gcr.io to registry.k8s.io
* Upstream would like to switch to registry.k8s.io to reduce costs

Rel: https://groups.google.com/g/kubernetes-sig-testing/c/U7b_im9vRrM
2022-08-13 15:45:13 -07:00
Dalton Hubble
0d981c24cd Upgrade CoreDNS from v1.8.5 to v1.9.3
* https://coredns.io/2022/05/27/coredns-1.9.3-release/
* https://coredns.io/2022/05/13/coredns-1.9.2-release/
* https://coredns.io/2022/03/09/coredns-1.9.1-release/
* https://coredns.io/2022/02/01/coredns-1.9.0-release/
* https://coredns.io/2021/12/09/coredns-1.8.7-release/
* https://coredns.io/2021/10/07/coredns-1.8.6-release/
2022-08-13 15:37:30 -07:00
Dalton Hubble
6d92cab7a0 Update Cilium from v1.11.7 to v1.12.0
* https://github.com/cilium/cilium/releases/tag/v1.12.0
2022-08-08 19:56:05 -07:00
Dalton Hubble
13e40a342b Add Terraform fmt GitHub Action and dependabot config
* Run terraform fmt on pull requests and merges to main
* Show workflow status in README
* Add dependabot.yaml to keep GitHub Actions updated
2022-08-01 09:45:38 -07:00
Dalton Hubble
b7136c94c2 Add badges to README 2022-07-31 17:43:36 -07:00
Dalton Hubble
97fe45c93e Update Calico from v3.23.1 to v3.23.3
* https://github.com/projectcalico/calico/releases/tag/v3.23.3
2022-07-30 18:10:02 -07:00
Dalton Hubble
77981d7fd4 Update Cilium from v1.11.6 to v1.11.7
* https://github.com/cilium/cilium/releases/tag/v1.11.7
2022-07-19 09:04:58 -07:00
Dalton Hubble
19a19c0e7a Update Kubernetes from v1.24.2 to v1.24.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1243
2022-07-13 20:57:47 -07:00
Dalton Hubble
178664d84e Update Calico from v3.22.2 to v3.23.1
* https://github.com/projectcalico/calico/releases/tag/v3.23.1
2022-06-18 18:49:58 -07:00
Dalton Hubble
dee92368af Update Cilium from v1.11.5 to v1.11.6
* https://github.com/cilium/cilium/releases/tag/v1.11.6
2022-06-18 18:42:44 -07:00
Dalton Hubble
70764c32c5 Update Kubernetes from v1.24.1 to v1.24.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1242
2022-06-18 18:27:38 -07:00
Dalton Hubble
f325be5041 Update Cilium from v1.11.4 to v1.11.5
* https://github.com/cilium/cilium/releases/tag/v1.11.5
2022-05-31 15:21:36 +01:00
Dalton Hubble
22ab988fdb Update Kubernetes from v1.24.0 to v1.24.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1241
2022-05-27 09:56:57 +01:00
Dalton Hubble
81e4c5b267 Update Kubernetes from v1.23.6 to v1.24.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1240
2022-05-03 07:38:42 -07:00
Dalton Hubble
7a18a221bb Remove unneeded use of key_algorithm and ca_key_algorithm
* Remove uses of `key_algorithm` on `tls_self_signed_cert` and
`tls_cert_request` resources. The field is deprecated and inferred
from the `private_key_pem`
* Remove uses of `ca_key_algorithm` on `tls_locally_signed_cert`
resources. The field is deprecated and inferred from the
`ca_private_key_pem`
* Require at least hashicorp/tls provider v3.2

Rel: https://github.com/hashicorp/terraform-provider-tls/blob/main/CHANGELOG.md#320-april-04-2022
2022-04-20 19:45:27 -07:00
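A minimal sketch of the resulting resource shapes with hashicorp/tls v3.2+, where the algorithm is inferred from the key PEMs. Resource names, subjects, and validity periods here are illustrative, not the module's actual values:

```
resource "tls_private_key" "ca" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

# key_algorithm is no longer set; it is inferred from private_key_pem
resource "tls_self_signed_cert" "ca" {
  private_key_pem       = tls_private_key.ca.private_key_pem
  is_ca_certificate     = true
  validity_period_hours = 8760
  allowed_uses          = ["key_encipherment", "digital_signature", "cert_signing"]

  subject {
    common_name = "example-ca"
  }
}

resource "tls_private_key" "client" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "tls_cert_request" "client" {
  private_key_pem = tls_private_key.client.private_key_pem

  subject {
    common_name  = "example-admin"
    organization = "system:masters"
  }
}

# ca_key_algorithm is no longer set; it is inferred from ca_private_key_pem
resource "tls_locally_signed_cert" "client" {
  cert_request_pem      = tls_cert_request.client.cert_request_pem
  ca_private_key_pem    = tls_private_key.ca.private_key_pem
  ca_cert_pem           = tls_self_signed_cert.ca.cert_pem
  validity_period_hours = 8760
  allowed_uses          = ["key_encipherment", "digital_signature", "client_auth"]
}
```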
Dalton Hubble
3f21908175 Update Kubernetes from v1.23.5 to v1.23.6
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1236
2022-04-20 18:48:58 -07:00
James Harmison
5bbca44f66 Update cilium ds name and label to align with upstream 2022-04-20 18:47:59 -07:00
Dalton Hubble
031e9fdb6c Update Cilium and Calico CNI providers
* Update Cilium from v1.11.3 to v1.11.4
* Update Calico from v3.22.1 to v3.22.2
2022-04-19 08:25:54 -07:00
Dalton Hubble
ab5e18bba9 Update Cilium from v1.11.2 to v1.11.3
* https://github.com/cilium/cilium/releases/tag/v1.11.3
2022-04-01 16:40:17 -07:00
Dalton Hubble
e5bdb6f6c6 Update Kubernetes from v1.23.4 to v1.23.5
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1235
2022-03-16 20:57:32 -07:00
Dalton Hubble
fa4745d155 Update Calico from v3.21.2 to v3.22.1
* Calico aims to fix https://github.com/projectcalico/calico/issues/5011
2022-03-11 10:57:07 -08:00
Dalton Hubble
db159bbd99 Update Cilium from v1.11.1 to v1.11.2
* https://github.com/cilium/cilium/releases/tag/v1.11.2
2022-03-11 10:04:11 -08:00
Dalton Hubble
205e5f212b Update Kubernetes from v1.23.3 to v1.23.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1234
2022-02-17 08:48:14 -08:00
Dalton Hubble
26bea83b95 Update Kubernetes from v1.23.2 to v1.23.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1233
2022-01-27 09:21:43 -08:00
Dalton Hubble
f45deec67e Update Kubernetes from v1.23.1 to v1.23.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1232
2022-01-19 17:04:06 -08:00
Dalton Hubble
5b5f7a00fd Update Cilium from v1.11.0 to v1.11.1
* https://github.com/cilium/cilium/releases/tag/v1.11.1
2022-01-19 17:01:40 -08:00
Dalton Hubble
0d2135e687 Remove use of template provider
* Switch to using Terraform `templatefile` instead of the
`template` provider (i.e. `data.template_file`)
* Available since Terraform v0.12
2022-01-14 09:42:32 -08:00
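A before/after sketch of the switch; the template path and variables are hypothetical:

```
# Before: the template provider
# data "template_file" "kubeconfig" {
#   template = file("${path.module}/resources/kubeconfig.tpl")
#   vars     = { cluster_name = "example" }
# }
# ...referenced as data.template_file.kubeconfig.rendered

# After: the builtin templatefile function (Terraform v0.12+)
locals {
  kubeconfig = templatefile("${path.module}/resources/kubeconfig.tpl", {
    cluster_name = "example"
  })
}
```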
Dalton Hubble
4dc0388149 Update Kubernetes from v1.23.0 to v1.23.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1231
2021-12-20 08:32:37 -08:00
Dalton Hubble
37f45cb28b Update Cilium from v1.10.5 to v1.11.0
* https://github.com/cilium/cilium/releases/tag/v1.11.0
2021-12-10 11:23:56 -08:00
Dalton Hubble
8add7022d1 Normalize CA certs mounts in static Pods and kube-proxy
* Mount both /etc/ssl/certs and /etc/pki into control plane static
pods and kube-proxy, rather than choosing one based on a variable
(set based on Flatcar Linux or Fedora CoreOS)
* Remove `trusted_certs_dir` variable
* Remove deprecated `--port` from `kube-scheduler` static Pod
2021-12-09 09:26:28 -08:00
Dalton Hubble
362158a6d6 Add missing caliconodestatuses CRD for Calico
* https://github.com/projectcalico/calico/pull/5012
2021-12-09 09:19:12 -08:00
Dalton Hubble
091ebeaed6 Update Kubernetes from v1.22.4 to v1.23.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1230
2021-12-09 09:16:52 -08:00
Dalton Hubble
cb1f4410ed Update minimum Terraform provider versions
* Update minimum required versions for `tls`, `template`,
and `random`. Older versions have some differing behaviors
(e.g. `random` may be missing sensitive fields) and we'd
prefer to shake loose any setups still using very old
providers rather than continue allowing them
* Remove the upper bound version constraint since providers
are more regularly updated these days and require less
manual vetting to allow use
2021-12-07 16:16:28 -08:00
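Roughly, the constraints become minimum-only, with no upper bound (the exact version floors below are assumptions):

```
terraform {
  required_providers {
    tls = {
      source  = "hashicorp/tls"
      version = ">= 3.0"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 3.0"
    }
    template = {
      source  = "hashicorp/template"
      version = ">= 2.2"
    }
  }
}
```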
Dalton Hubble
2d60731cef Update Calico from v3.21.1 to v3.21.2
* https://github.com/projectcalico/calico/releases/tag/v3.21.2
2021-12-07 14:48:08 -08:00
Dalton Hubble
c32e1c73ee Update Calico from v3.21.0 to v3.21.1
* https://github.com/projectcalico/calico/releases/tag/v3.21.1
2021-11-28 16:44:38 -08:00
Dalton Hubble
5353769db6 Update Kubernetes from v1.22.3 to v1.22.4
* Update flannel from v0.15.0 to v0.15.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#v1224
2021-11-17 18:48:53 -08:00
Dalton Hubble
e6193bbdcf Update CoreDNS from v1.8.4 to v1.8.6
* https://github.com/kubernetes/kubernetes/pull/106091
2021-11-12 10:59:20 -08:00
Dalton Hubble
9f9d7708c3 Update Calico and flannel CNI providers
* Update Calico from v3.20.2 to v3.21.0
* Update flannel from v0.14.0 to v0.15.0
2021-11-11 14:25:11 -08:00
Dalton Hubble
f587918c33 Update Kubernetes from v1.22.2 to v1.22.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#v1223
2021-10-28 10:05:42 -07:00
Dalton Hubble
7fbbbe7923 Update flannel from v0.13.0 to v0.14.0
* https://github.com/flannel-io/flannel/releases/tag/v0.14.0
2021-10-17 12:33:22 -07:00
Dalton Hubble
6b5d088795 Update Cilium from v1.10.4 to v1.10.5
* https://github.com/cilium/cilium/releases/tag/v1.10.5
2021-10-17 11:22:59 -07:00
Dalton Hubble
0b102c4089 Update Calico from v3.20.1 to v3.20.2
* https://github.com/projectcalico/calico/releases/tag/v3.20.2
* Add support for iptables legacy vs nft detection
2021-10-05 19:33:09 -07:00
Dalton Hubble
fadb5bbdaa Enable Kubernetes aggregation by default
* Change `enable_aggregation` default from false to true
* These days, Kubernetes control plane components emit annoying
messages related to assumptions baked into the Kubernetes API
Aggregation Layer if you don't enable it. Further, the conformance
tests force you to remember to enable it if you care about passing
them
* This change is motivated by eliminating annoyances, rather than
any enthusiasm for Kubernetes' aggregation features
* https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/
2021-10-05 19:20:26 -07:00
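Those who don't want the aggregation layer can still opt out explicitly. A sketch, with other module arguments omitted:

```
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=SHA"

  # ...other cluster arguments...

  # Revert to the old behavior if the aggregation layer isn't wanted
  enable_aggregation = false
}
```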
Dalton Hubble
c6fa09bda1 Update Calico and Cilium CNI providers
* Update Calico from v3.20.0 to v3.20.1
* Update Cilium from v1.10.3 to v1.10.4
* Remove Cilium wait for BPF mount
2021-09-21 09:11:49 -07:00
Dalton Hubble
2f29d99d8a Update Kubernetes from v1.22.1 to v1.22.2 2021-09-15 19:47:11 -07:00
Dalton Hubble
bfc2fa9697 Fix ClusterIP access when using Cilium
* When a router sets node(s) as next-hops in a network,
ClusterIP Services should be able to respond as usual
* https://github.com/cilium/cilium/issues/14581
2021-09-15 19:43:58 -07:00
Dalton Hubble
d7fd3f6266 Update Kubernetes from v1.22.0 to v1.22.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#v1221
2021-08-19 21:09:01 -07:00
Dalton Hubble
a2e1cdfd8a Update Calico from v3.19.2 to v3.20.0
* https://github.com/projectcalico/calico/blob/v3.20.0/_includes/charts/calico/templates/calico-node.yaml
2021-08-18 19:43:40 -07:00
Dalton Hubble
074c6ed5f3 Update Calico from v3.19.1 to v3.19.2
* https://github.com/projectcalico/calico/releases/tag/v3.19.2
2021-08-13 18:15:55 -07:00
Dalton Hubble
b5f5d843ec Disable kube-scheduler insecure port
* Kubernetes v1.22.0 disables kube-controller-manager insecure
port, which was used internally for Prometheus metrics scraping.
In Typhoon, we'll switch to using the https port, which requires
Prometheus present a bearer token
* Go ahead and disable the insecure port for kube-scheduler too,
we'll configure Prometheus to scrape it with a bearer token as
well
* Remove unused kube-apiserver `--port` flag

Rel:

* https://github.com/kubernetes/kubernetes/pull/96216
2021-08-10 21:11:30 -07:00
Dalton Hubble
b766ff2346 Update Kubernetes from v1.21.3 to v1.22.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#v1220
2021-08-04 21:38:23 -07:00
Dalton Hubble
5c0bebc1e7 Add Cilium init container to auto-mount cgroup2
* Add init container to auto-mount /sys/fs/cgroup cgroup2
at /run/cilium/cgroupv2 for the Cilium agent
* Enable CNI exclusive mode, to disable other configs
found in /etc/cni/net.d/
* https://github.com/cilium/cilium/pull/16259
2021-07-24 10:30:06 -07:00
Dalton Hubble
5746f9c221 Update Kubernetes from v1.21.2 to v1.21.3
* https://github.com/kubernetes/kubernetes/releases/tag/v1.21.3
2021-07-17 18:15:06 -07:00
Dalton Hubble
bde255228d Update Cilium from v1.10.2 to v1.10.3
* https://github.com/cilium/cilium/releases/tag/v1.10.3
2021-07-17 18:12:06 -07:00
Dalton Hubble
48ac8945d1 Update Cilium from v1.10.1 to v1.10.2
* https://github.com/cilium/cilium/releases/tag/v1.10.2
2021-07-04 10:09:38 -07:00
Dalton Hubble
c0718e8552 Move daemonset tolerations up, they're documented 2021-06-27 18:01:34 -07:00
Dalton Hubble
362f42a7a2 Update CoreDNS from v1.8.0 to v1.8.4
* https://coredns.io/2021/01/20/coredns-1.8.1-release/
* https://coredns.io/2021/02/23/coredns-1.8.2-release/
* https://coredns.io/2021/02/24/coredns-1.8.3-release/
* https://coredns.io/2021/05/28/coredns-1.8.4-release/
2021-06-23 23:26:27 -07:00
Dalton Hubble
e1543746cb Update Kubernetes from v1.21.1 to v1.21.2
* https://github.com/kubernetes/kubernetes/releases/tag/v1.21.2
2021-06-17 16:10:52 -07:00
Dalton Hubble
33a85e6603 Add support for Terraform v1.0.0 2021-06-17 13:24:45 -07:00
Dalton Hubble
0f33aeba5d Update Cilium from v1.10.0 to v1.10.1
* https://github.com/cilium/cilium/releases/tag/v1.10.1
2021-06-16 11:40:42 -07:00
Dalton Hubble
d17684dd5b Update Calico from v3.19.0 to v3.19.1
* https://docs.projectcalico.org/archive/v3.19/release-notes/
2021-05-24 11:55:34 -07:00
Dalton Hubble
067405ecc4 Update Cilium from v1.10.0-rc1 to v1.10.0
* https://github.com/cilium/cilium/releases/tag/v1.10.0
2021-05-24 10:44:08 -07:00
Dalton Hubble
c3b16275af Update Cilium from v1.9.6 to v1.10.0-rc1
* Add multi-arch container images and arm64 support!
* https://github.com/cilium/cilium/releases/tag/v1.10.0-rc1
2021-05-14 14:23:55 -07:00
Dalton Hubble
ebe3d5526a Update Kubernetes from v1.21.0 to v1.21.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#v1211
2021-05-13 11:18:39 -07:00
Dalton Hubble
7052c66882 Update Calico from v3.18.1 to v3.19.0
* https://docs.projectcalico.org/archive/v3.19/release-notes/
2021-05-13 11:17:48 -07:00
Dalton Hubble
079b348bf7 Update Cilium from v1.9.5 to v1.9.6
* https://github.com/cilium/cilium/releases/tag/v1.9.6
2021-04-26 10:52:51 -07:00
Dalton Hubble
f8fd2f8912 Update required Terraform versions to v0.13 <= x < v0.16
* Allow Terraform v0.13.x, v0.14.x, or v0.15.x
2021-04-15 19:16:41 -07:00
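In Terraform syntax, that constraint reads roughly as:

```
terraform {
  # Allow Terraform v0.13.x, v0.14.x, or v0.15.x
  required_version = ">= 0.13.0, < 0.16.0"
}
```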
Dalton Hubble
a4ecf168df Update static Pod manifests for Kubernetes v1.21.0
* Set `kube-apiserver` `service-account-jwks-uri` because conformance
ServiceAccountIssuerDiscovery OIDC discovery will access a JWT endpoint
using the kube-apiserver's advertise address by default, instead of
using the intended in-cluster service (10.3.0.1) resolved by cluster DNS
`kubernetes.default.svc.cluster.local`, which causes a cert SAN error
* Set the authentication and authorization kubeconfig for kube-scheduler
and kube-controller-manager. Here, authn/z refer to aggregated API
use cases only, so it's not strictly necessary and warnings about
missing `extension-apiserver-authentication` when enable_aggregation
is false can be ignored
* Mount `/var/lib/kubelet/volumeplugins` to the default location
expected within kube-controller-manager to remove the need for a flag
* Enable `tokencleaner` controller to automatically delete expired
bootstrap tokens (the default node token is good for 1 year, so cleanup
won't really matter at that point, but enable it regardless)
* Remove unused `cloud-provider` flag, we never intend to use in-tree
cloud providers or support custom providers
2021-04-10 17:42:18 -07:00
Dalton Hubble
55e1633376 Update Kubernetes from v1.20.5 to v1.21.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#v1210
2021-04-08 21:21:04 -07:00
Dalton Hubble
f87aa7f96a Change CNI config directory to /etc/cni/net.d
* Change CNI config directory from `/etc/kubernetes/cni/net.d`
to `/etc/cni/net.d` (Kubelet default)
2021-04-01 16:48:46 -07:00
Dalton Hubble
8c2e766d18 Update CoreDNS from v1.7.0 to v1.8.0
* https://coredns.io/2020/10/22/coredns-1.8.0-release/
2021-03-20 15:43:58 -07:00
Dalton Hubble
5f4378a0e1 Update Kubernetes from v1.20.4 to v1.20.5
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#v1205
2021-03-19 10:32:06 -07:00
Dalton Hubble
ca37685867 Update Cilium from v1.9.4 to v1.9.5
* https://github.com/cilium/cilium/releases/tag/v1.9.5
2021-03-14 11:25:15 -07:00
Dalton Hubble
8fc689b89c Switch bootstrap token to a random_password
* Terraform `random_password` is identical to `random_string` except
its value is marked sensitive so it isn't displayed in terraform
plan and other outputs
* Prefer marking the bootstrap token as sensitive for cases where
terraform is run in an automated CI/CD system
2021-03-14 10:45:27 -07:00
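A sketch of the swap, using the Kubernetes bootstrap token format of a 6-character id and 16-character secret; the resource names and generator settings are assumptions:

```
# Previously random_string; random_password is identical except the result
# is marked sensitive and hidden from `terraform plan` output
resource "random_password" "bootstrap-token-id" {
  length  = 6
  upper   = false
  special = false
}

resource "random_password" "bootstrap-token-secret" {
  length  = 16
  upper   = false
  special = false
}

# The token takes the form "<id>.<secret>"
locals {
  bootstrap_token = "${random_password.bootstrap-token-id.result}.${random_password.bootstrap-token-secret.result}"
}
```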
Dalton Hubble
adcba1c211 Update Calico from v3.17.3 to v3.18.1
* https://docs.projectcalico.org/archive/v3.18/release-notes/
2021-03-14 10:15:39 -07:00
Dalton Hubble
5633f97f75 Update Kubernetes from v1.20.3 to v1.20.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#v1204
2021-02-18 23:57:04 -08:00
Dalton Hubble
e7b05a5d20 Update Calico from v3.17.2 to v3.17.3
* https://github.com/projectcalico/calico/releases/tag/v3.17.3
2021-02-18 23:55:46 -08:00
Dalton Hubble
213cd16c38 Update Kubernetes from v1.20.2 to v1.20.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#v1203
2021-02-17 22:26:33 -08:00
Dalton Hubble
efd750d7a8 Update flannel-cni from v0.4.1 to v0.4.2
* https://github.com/poseidon/flannel-cni/releases/tag/v0.4.2
2021-02-14 12:03:07 -08:00
Dalton Hubble
75fc91deb8 Update Calico from v3.17.1 to v3.17.2
* https://github.com/projectcalico/calico/releases/tag/v3.17.2
2021-02-04 22:01:40 -08:00
Dalton Hubble
ae5449a9fb Update Cilium from v1.9.3 to v1.9.4
* https://github.com/cilium/cilium/releases/tag/v1.9.4
2021-02-03 23:06:28 -08:00
Dalton Hubble
ae9bc1af60 Update Cilium from v1.9.2 to v1.9.3
* https://github.com/cilium/cilium/releases/tag/v1.9.3
2021-01-24 23:05:30 -08:00
Dalton Hubble
9304f46ec7 Update Cilium from v1.9.1 to v1.9.2
* https://github.com/cilium/cilium/releases/tag/v1.9.2
2021-01-20 22:05:01 -08:00
Dalton Hubble
b3bf2ecbbe Update Kubernetes from v1.20.1 to v1.20.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#v1202
2021-01-13 17:44:27 -08:00
Dalton Hubble
80a350bce5 Update Kubernetes from v1.20.0 to v1.20.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#v1201
2020-12-19 12:53:47 -08:00
Ben Drucker
445627e1c3 Allow v3 of tls and random providers
* https://github.com/hashicorp/terraform-provider-random/blob/master/CHANGELOG.md#300-october-09-2020
* https://github.com/hashicorp/terraform-provider-tls/blob/master/CHANGELOG.md#300-october-14-2020
2020-12-19 12:52:48 -08:00
Dalton Hubble
4edd79dd02 Update Calico from v3.17.0 to v3.17.1
* https://github.com/projectcalico/calico/releases/tag/v3.17.1
2020-12-10 22:47:40 -08:00
Dalton Hubble
c052741cc3 Update Kubernetes from v1.20.0-rc.0 to v1.20.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#v1200
2020-12-08 18:24:24 -08:00
Dalton Hubble
64793aa593 Update required Terraform versions to v0.13 <= x < v0.15
* Allow Terraform v0.13.x or v0.14.x to be used
* Drop support for Terraform v0.12 since Typhoon already
requires v0.13+ https://github.com/poseidon/typhoon/pull/880
2020-12-07 00:09:27 -08:00
Dalton Hubble
2ed597002a Update Cilium from v1.9.0 to v1.9.1
* https://github.com/cilium/cilium/releases/tag/v1.9.1
2020-12-04 13:59:50 -08:00
Dalton Hubble
0e9c3598bd Update Kubernetes from v1.19.4 to v1.20.0-rc.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#v1200-rc0
2020-12-02 23:29:00 -08:00
Dalton Hubble
84972373d4 Rename bootstrap-secrets directory to pki
* Change control plane static pods to mount `/etc/kubernetes/pki`,
instead of `/etc/kubernetes/bootstrap-secrets` to better reflect
their purpose and match some loose conventions upstream
* Require TLS assets to be placed at `/etc/kubernetes/pki`, instead
of `/etc/kubernetes/bootstrap-secrets` on hosts (breaking)
* Mount to `/etc/kubernetes/pki` to match the host (less surprise)
* https://kubernetes.io/docs/setup/best-practices/certificates/
2020-12-02 23:13:53 -08:00
Dalton Hubble
ac5cb95774 Generate kubeconfig's for kube-scheduler and kube-controller-manager
* Generate TLS client certificates for kube-scheduler and
kube-controller-manager with `system:kube-scheduler` and
`system:kube-controller-manager` CNs
* Template separate kubeconfigs for kube-scheduler and
kube-controller manager (`scheduler.conf` and
`controller-manager.conf`). Rename admin for clarity
* Before v1.16.0, Typhoon scheduled a self-hosted control
plane, which allowed the steady-state kube-scheduler and
kube-controller-manager to use a scoped ServiceAccount.
With a static pod control plane, separate CN TLS client
certificates are the nearest equiv.
* https://kubernetes.io/docs/setup/best-practices/certificates/
* Remove unused Kubelet certificate, TLS bootstrap is used
instead
2020-12-01 20:18:36 -08:00
Dalton Hubble
19c3ce61bd Add TokenReview and TokenRequestProjection kube-apiserver flags
* Add kube-apiserver flags for TokenReview and TokenRequestProjection
(beta, defaults on) to allow using Service Account Token Volume Projection
to create and mount service account tokens tied to a Pod's lifecycle
* Both features will be promoted from beta to stable in v1.20
* Rename `experimental-cluster-signing-duration` to just
`cluster-signing-duration`

Rel:

* https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection
2020-12-01 19:50:25 -08:00
Dalton Hubble
fd10b94f87 Update Calico from v3.16.5 to v3.17.0
* Consider Calico's MTU auto-detection, but leave
Calico MTU variable for now (`network_mtu` ignored)
* Remove SELinux level setting workaround for
https://github.com/projectcalico/cni-plugin/issues/874
2020-11-25 11:18:59 -08:00
Dalton Hubble
49216ab82c Update Kubernetes from v1.19.3 to v1.19.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#v1194
2020-11-11 22:26:43 -08:00
Dalton Hubble
1c3d293f7c Update Cilium from v1.9.0-rc3 to v1.9.0
* https://github.com/cilium/cilium/releases/tag/v1.9.0
* https://github.com/cilium/cilium/pull/13937
2020-11-10 23:15:30 -08:00
Dalton Hubble
ef17534c33 Update Calico from v3.16.4 to v3.16.5
* https://docs.projectcalico.org/v3.16/release-notes/
2020-11-10 18:27:20 -08:00
Starbuck
74c299bf2c Restore kube-controller-manager --use-service-account-credentials
* kube-controller-manager Pods can start control loops with credentials
that have been granted relevant controller manager roles or using
generated service accounts bound to each role
* During the migration of the control plane from self-hosted to static
pods (https://github.com/poseidon/terraform-render-bootstrap/pull/148)
the flag for using separate service accounts was inadvertently dropped
* Restore the --use-service-account-credentials flag used before v1.16

Related:

* https://kubernetes.io/docs/reference/access-authn-authz/rbac/#controller-roles
* https://github.com/poseidon/terraform-render-bootstrap/pull/225
2020-11-10 12:06:51 -08:00
Dalton Hubble
c6e3a2bcdc Update Cilium from v1.8.5 to v1.9.0-rc3
* https://github.com/cilium/cilium/releases/tag/v1.9.0-rc3
* https://github.com/cilium/cilium/releases/tag/v1.9.0-rc2
* https://github.com/cilium/cilium/releases/tag/v1.9.0-rc1
2020-11-03 00:05:32 -08:00
Dalton Hubble
3a0feda171 Update Cilium from v1.8.4 to v1.8.5
* https://github.com/cilium/cilium/releases/tag/v1.8.5
2020-10-29 00:48:39 -07:00
Dalton Hubble
7036f64891 Update Calico from v3.16.3 to v3.16.4
* https://docs.projectcalico.org/v3.16/release-notes/
2020-10-25 11:50:43 -07:00
Dalton Hubble
9037d7311b Remove asset_dir variable and optional asset writes
* Originally, generated TLS certificates, manifests, and
cluster "assets" written to local disk (`asset_dir`) during
terraform apply cluster bootstrap
* Typhoon v1.17.0 introduced bootstrapping using only Terraform
state to store cluster assets, to avoid ever writing sensitive
materials to disk and improve automated use-cases. `asset_dir`
was changed to optional and defaulted to "" (no writes)
* Typhoon v1.18.0 deprecated the `asset_dir` variable, removed
docs, and announced it would be deleted in future.
* Remove the `asset_dir` variable

Cluster assets are now stored in Terraform state only. For those
who wish to write those assets to local files, this can be
done explicitly.

```
resource "local_file" "assets" {
  for_each = module.bootstrap.assets_dist
  filename = "some-assets/${each.key}"
  content  = each.value
}
```

Related:

* https://github.com/poseidon/typhoon/pull/595
* https://github.com/poseidon/typhoon/pull/678
2020-10-17 14:57:13 -07:00
Maikel
84f897b5f1 Add Cilium manifests to local_file asset_dir (#221)
* Note, asset_dir is deprecated https://github.com/poseidon/typhoon/pull/678
2020-10-17 14:30:50 -07:00
Dalton Hubble
7988fb7159 Update Calico from v3.15.3 to v3.16.3
* https://github.com/projectcalico/calico/releases/tag/v3.16.3
* https://docs.projectcalico.org/v3.16/release-notes/
2020-10-15 20:00:41 -07:00
Dalton Hubble
5bebcc5f00 Update Kubernetes from v1.19.2 to v1.19.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#v1193
2020-10-14 20:42:07 -07:00
Dalton Hubble
4448143f64 Update flannel from v0.13.0-rc2 to v0.13.0
* https://github.com/coreos/flannel/releases/tag/v0.13.0
2020-10-14 20:29:20 -07:00
Dalton Hubble
a2eb1dcbcf Update Cilium from v1.8.3 to v1.8.4
* https://github.com/cilium/cilium/releases/tag/v1.8.4
2020-10-02 00:20:19 -07:00
Dalton Hubble
d0f2123c59 Update Kubernetes from v1.19.1 to v1.19.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#v1192
2020-09-16 20:03:44 -07:00
Dalton Hubble
9315350f55 Update flannel and flannel-cni image versions
* Update flannel from v0.12.0 to v0.13.0-rc2
* Use new flannel multi-arch image
* Update flannel-cni to update CNI plugins from v0.8.6 to
v0.8.7
2020-09-16 20:02:42 -07:00
Nesc58
016d4ebd0c Mount /run/xtables.lock in flannel Daemonset
* Mount xtables.lock (like Calico and Cilium) since iptables
may be called by other processes (kube-proxy)
2020-09-16 19:01:42 -07:00
Dalton Hubble
f2dd897d67 Change seccomp annotations to Pod seccompProfile
* seccomp graduated to GA in Kubernetes v1.19. Support
for seccomp alpha annotations will be removed in v1.22
* Replace seccomp annotations with the GA seccompProfile
field in the PodTemplate securityContext
* Switch profile from `docker/default` to `runtime/default`
(no effective change, since docker is the runtime)
* Verify with docker inspect SecurityOpt. Without the
profile, you'd see `seccomp=unconfined`

Related:
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#seccomp-graduates-to-general-availability
2020-09-10 00:28:58 -07:00
Dalton Hubble
c72826908b Update Kubernetes from v1.19.0 to v1.19.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#v1191
2020-09-09 20:42:43 -07:00
Dalton Hubble
81ac7e6e2f Update Calico from v3.15.2 to v3.15.3
* https://github.com/projectcalico/calico/releases/tag/v3.15.3
2020-09-09 20:40:41 -07:00
Dalton Hubble
9ce9148557 Update Cilium from v1.8.2 to v1.8.3
* https://github.com/cilium/cilium/releases/tag/v1.8.3
2020-09-07 17:54:56 -07:00
Dalton Hubble
2686d59203 Allow leader election among Cilium operator replicas
* Allow Cilium operator Pods to leader elect when Deployment
has more than one replica
* Use topology spread constraint to keep multiple operators
from running on the same node (pods bind hostNetwork ports)
2020-09-07 17:48:19 -07:00
Dalton Hubble
79343f02ae Update Kubernetes from v1.18.8 to v1.19.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md/#v1190
2020-08-26 19:31:20 -07:00
Dalton Hubble
91738c35ff Update Calico from v3.15.1 to v3.15.2
* https://docs.projectcalico.org/release-notes/
2020-08-26 19:30:37 -07:00
Dalton Hubble
8ef2fe7c99 Update Kubernetes from v1.18.6 to v1.18.8
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1188
2020-08-13 20:43:42 -07:00
Dalton Hubble
60540868e0 Relax Terraform version constraint
* bootstrap uses only Hashicorp Terraform modules, allow
Terraform v0.13.x usage
2020-08-10 21:21:54 -07:00
Dalton Hubble
3675b3a539 Update from coreos/flannel-cni to poseidon/flannel-cni
* Update CNI plugins from v0.6.0 to v0.8.6 to fix several CVEs
* Update the base image to alpine:3.12
* Use `flannel-cni` as an init container and remove sleep
* Add Linux ARM64 and multi-arch container images
* https://github.com/poseidon/flannel-cni
* https://quay.io/repository/poseidon/flannel-cni

Background

* Switch from github.com/coreos/flannel-cni v0.3.0 which was last
published by me in 2017 and which is no longer accessible to me
to maintain or patch
* Port to the poseidon/flannel-cni rewrite, which releases v0.4.0
to continue the prior release numbering
2020-08-02 15:06:18 -07:00
Dalton Hubble
45053a62cb Update Cilium from v1.8.1 to v1.8.2
* Drop unused option https://github.com/cilium/cilium/pull/12618
2020-07-25 15:52:19 -07:00
Dalton Hubble
9de4267c28 Update CoreDNS from v1.6.7 to v1.7.0
* https://coredns.io/2020/06/15/coredns-1.7.0-release/
2020-07-25 13:08:29 -07:00
Dalton Hubble
835890025b Update Calico from v3.15.0 to v3.15.1
* https://docs.projectcalico.org/v3.15/release-notes/
2020-07-15 22:03:54 -07:00
Dalton Hubble
2bab6334ad Update Kubernetes from v1.18.5 to v1.18.6
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1186
2020-07-15 21:55:02 -07:00
Dalton Hubble
9a5132b2ad Update Cilium from v1.8.0 to v1.8.1
* https://github.com/cilium/cilium/releases/tag/v1.8.1
2020-07-05 15:58:53 -07:00
Dalton Hubble
5a7c963caf Update Kubernetes from v1.18.4 to v1.18.5
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1185
2020-06-27 13:49:10 -07:00
Dalton Hubble
5043456b05 Update Calico from v3.14.1 to v3.15.0
* https://docs.projectcalico.org/v3.15/release-notes/
2020-06-26 02:39:01 -07:00
Dalton Hubble
c014b77090 Update Cilium from v1.8.0-rc4 to v1.8.0
* https://github.com/cilium/cilium/releases/tag/v1.8.0
2020-06-22 22:25:38 -07:00
Dalton Hubble
1c07dfbc2a Remove experimental kube-router CNI provider 2020-06-21 21:55:56 -07:00
Dalton Hubble
af36c53936 Add experimental Cilium CNI provider
* Accept experimental CNI `networking` mode "cilium"
* Run Cilium v1.8.0 with overlay vxlan tunnels and a
minimal set of features. We're interested in:
  * IPAM: Divide pod_cidr into /24 subnets per node
  * CNI networking pod-to-pod, pod-to-external
  * BPF masquerade
  * NetworkPolicy as defined by Kubernetes (no L7)
* Continue using kube-proxy with Cilium probe mode
* Firewall changes:
  * Require UDP 8472 for vxlan (Linux kernel default) between nodes
  * Optional ICMP echo(8) between nodes for host reachability (health)
  * Optional TCP 4240 between nodes for host reachability (health)
2020-06-21 16:21:09 -07:00
Dalton Hubble
e75697ce35 Rename controller node label and NoSchedule taint
* Use node label `node.kubernetes.io/controller` to select
controller nodes (action required)
* Tolerate node taint `node-role.kubernetes.io/controller`
for workloads that should run on controller nodes. Don't
tolerate `node-role.kubernetes.io/master` (action required)
2020-06-17 22:46:35 -07:00
Dalton Hubble
3fe903d0ac Update Kubernetes from v1.18.3 to v1.18.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1184
* Remove unused template file
2020-06-17 19:23:12 -07:00
Dalton Hubble
fc1a7bac89 Remove unused Kubelet certificate and key pair
* Kubelet certificate and key pair in state (not distributed)
are no longer needed with Kubelet TLS bootstrap
* https://github.com/poseidon/terraform-render-bootstrap/pull/185

Fix https://github.com/poseidon/typhoon/issues/757
2020-06-11 21:20:41 -07:00
Dalton Hubble
c3b1f23b5d Update Calico from v3.14.0 to v3.14.1
* https://docs.projectcalico.org/v3.14/release-notes/
2020-05-29 00:35:16 -07:00
Dalton Hubble
ff7ec52d0a Update Kubernetes from v1.18.2 to v1.18.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md
2020-05-20 20:34:42 -07:00
Dalton Hubble
a83ddbb30e Add CoreDNS "soft" nodeAffinity for controller nodes
* Add nodeAffinity to CoreDNS deployment PodSpec to
prefer running CoreDNS pods on controllers, while
relying on podAntiAffinity for spreading.
* For single master clusters, running two CoreDNS pods
on the master or running one pod on a worker is
permissible.
* Note: It's still _possible_ to end up with CoreDNS pods
all running on workers since we only express scheduling
preference ("soft"), but unlikely. Plus the motivating
scenario (below) is also rare.

Background:

* CoreDNS replicas are set to the higher of 2 or the
number of control plane nodes to (at a minimum) support
Deployment updates or pod restarts and match the cluster
size (e.g. 5 master/controller nodes likely means a
larger cluster, so run 5 CoreDNS replicas)
* In the past (before v1.14), we required kube-dns (CoreDNS's
predecessor) pods to run on master nodes. With CoreDNS,
this node selection was relaxed. We'd like a
gentler form of it now.

Motivation:

* On clusters using 100% preemptible/spot workers, it is
possible that CoreDNS pods schedule to workers that are all
preempted at the same time, causing a loss of cluster internal
DNS service until a CoreDNS pod reschedules (1 min). We'd like
CoreDNS to prefer controller/master nodes (which aren't preempted)
to reduce the possibility of control plane disruption
2020-05-09 22:48:56 -07:00
Dalton Hubble
157336db92 Update Calico from v3.13.3 to v3.14.0
* https://docs.projectcalico.org/v3.14/release-notes/
2020-05-09 16:02:38 -07:00
Dalton Hubble
1dc36b58b8 Fix Calico node crash loop on Pod restart
* Set a consistent MCS level/range for Calico install-cni
* Note: Rebooting a node was a workaround, because Kubelet
relabels /etc/kubernetes(/cni/net.d)

Background:

* On SELinux enforcing systems, the Calico CNI install-cni
container ran with default SELinux context and a random MCS
pair. install-cni places CNI configs by first creating a
temporary file and then moving it into place, which means
the file's MCS categories depend on the container's SELinux
context.
* A calico-node Pod restart creates a new install-cni container
with a different MCS pair that cannot access the earlier
written file (it places configs every time), causing the
init container to error and calico-node to crash loop
* https://github.com/projectcalico/cni-plugin/issues/874

```
mv: inter-device move failed: '/calico.conf.tmp' to
'/host/etc/cni/net.d/10-calico.conflist'; unable to remove target:
Permission denied
Failed to mv files. This may be caused by selinux configuration on the
host, or something else.
```

Note, this isn't a host SELinux configuration issue.
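Minimal sketch of the idea: pin the init container to a fixed SELinux MCS level so files it writes stay accessible across Pod restarts. The level value below is an example, not necessarily what this commit chose:

```yaml
# DaemonSet PodSpec fragment (sketch); the level value is illustrative
initContainers:
  - name: install-cni
    image: ${calico_cni_image}
    securityContext:
      seLinuxOptions:
        level: "s0:c123,c456"
```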
2020-05-09 15:20:06 -07:00
Dalton Hubble
924beb4b0c Enable Kubelet TLS bootstrap and NodeRestriction
* Enable bootstrap token authentication on kube-apiserver
* Generate the bootstrap.kubernetes.io/token Secret that
may be used as a bootstrap token
* Generate a bootstrap kubeconfig (with a bootstrap token)
to be securely distributed to nodes. Each Kubelet will use
the bootstrap kubeconfig to authenticate to kube-apiserver
as `system:bootstrappers` and send a node-unique CSR for
kube-controller-manager to automatically approve to issue
a Kubelet certificate and kubeconfig (expires in 72 hours)
* Add ClusterRoleBinding for bootstrap token subjects
(`system:bootstrappers`) to have the `system:node-bootstrapper`
ClusterRole
* Add ClusterRoleBinding for bootstrap token subjects
(`system:bootstrappers`) to have the csr nodeclient ClusterRole
* Add ClusterRoleBinding for bootstrap token subjects
(`system:bootstrappers`) to have the csr selfnodeclient ClusterRole
* Enable NodeRestriction admission controller to limit the
scope of Node or Pod objects a Kubelet can modify to those of
the node itself
* Ability for a Kubelet to delete its Node object is retained
as preemptible nodes or those in auto-scaling instance groups
need to be able to remove themselves on shutdown. This need
continues to have precedence over any risk of a node deleting
itself maliciously

Security notes:

1. Issued Kubelet certificates authenticate as user `system:node:NAME`
and group `system:nodes` and are limited in their authorization
to perform API operations by Node authorization and NodeRestriction
admission. Previously, a Kubelet's authorization was broader. This
is the primary security motivation.

2. The bootstrap kubeconfig credential has the same sensitivity
as the previous generated TLS client-certificate kubeconfig.
It must be distributed securely to nodes. Its compromise still
allows an attacker to obtain a Kubelet kubeconfig

3. Bootstrapping Kubelet kubeconfigs with a limited lifetime offers
a slight security improvement.
  * An attacker who obtains the kubeconfig can likely obtain the
  bootstrap kubeconfig as well, and with it the ability to renew
  their access
  * A compromised bootstrap kubeconfig could plausibly be handled
  by replacing the bootstrap token Secret, distributing the token
  to new nodes, and expiration. Whereas a compromised TLS-client
  certificate kubeconfig can't be revoked (no CRL). However,
  replacing a bootstrap token can be impractical in real cluster
  environments, so the limited lifetime is mostly a theoretical
  benefit.
  * Cluster CSR objects are visible via kubectl which is nice

4. Bootstrapping node-unique Kubelet kubeconfigs means Kubelet
clients have more identity information, which can improve the
utility of audits and future features

Rel: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/
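For reference, a bootstrap token Secret has this shape (the token values below are the example values from the Kubernetes docs, not real credentials):

```yaml
apiVersion: v1
kind: Secret
metadata:
  # name must be "bootstrap-token-<token-id>"
  name: bootstrap-token-07401b
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: "07401b"                     # public, 6 characters
  token-secret: "f395accd246ae52d"       # secret, 16 characters
  usage-bootstrap-authentication: "true" # token may authenticate to kube-apiserver
```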
2020-04-25 19:38:56 -07:00
Dalton Hubble
c62c7f5a1a Update Calico from v3.13.1 to v3.13.3
* https://docs.projectcalico.org/v3.13/release-notes/
2020-04-22 20:26:32 -07:00
Dalton Hubble
14d0b20879 Update Kubernetes from v1.18.1 to v1.18.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#downloads-for-v1182
2020-04-16 23:33:42 -07:00
Dalton Hubble
1ad53d3b1c Update Kubernetes from v1.18.0 to v1.18.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md
2020-04-08 19:37:27 -07:00
Dalton Hubble
45dc2f5c0c Update flannel from v0.11.0 to v0.12.0
* https://github.com/coreos/flannel/releases/tag/v0.12.0
2020-03-31 18:23:57 -07:00
Dalton Hubble
42723d13a6 Change default kube-system DaemonSet tolerations
* Change kube-proxy, flannel, and calico-node DaemonSet
tolerations to tolerate `node.kubernetes.io/not-ready`
and `node-role.kubernetes.io/master` (i.e. controllers)
explicitly, rather than tolerating all taints
* kube-system DaemonSets will no longer tolerate custom
node taints by default. Instead, custom node taints must
be enumerated to opt-in to scheduling/executing the
kube-system DaemonSets.

Background: Tolerating all taints ruled out use-cases
where certain nodes might legitimately need to keep
kube-proxy or CNI networking disabled
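Roughly, the enumerated tolerations look like this (a sketch of the pattern, not the exact manifests):

```yaml
tolerations:
  # allow scheduling on controller nodes
  - key: node-role.kubernetes.io/master
    operator: Exists
  # allow running before the node reports Ready (pod networking not yet up)
  - key: node.kubernetes.io/not-ready
    operator: Exists
```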
2020-03-25 22:43:50 -07:00
Dalton Hubble
cb170f802d Update Kubernetes from v1.17.4 to v1.18.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md
2020-03-25 17:47:30 -07:00
Dalton Hubble
e76f0a09fa Switch from upstream hyperkube to component images
* Kubernetes plans to stop releasing the hyperkube image in
the future.
* Upstream will continue releasing container images for
`kube-apiserver`, `kube-controller-manager`, `kube-proxy`,
and `kube-scheduler`. Typhoon will use these images
* Upstream will release the kubelet as a binary for distros
to package, either as a traditional DEB/RPM or as a container
image for container-optimized operating systems. Typhoon will
take on the packaging of Kubelet and its dependencies as a new
container image (alongside kubectl)

Rel: https://github.com/kubernetes/kubernetes/pull/88676
See: https://github.com/poseidon/kubelet
2020-03-17 22:13:42 -07:00
Dalton Hubble
73784c1b2c Update Kubernetes from v1.17.3 to v1.17.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md#v1174
2020-03-12 22:57:14 -07:00
Dalton Hubble
804029edd5 Update Calico from v3.12.0 to v3.13.1
* https://docs.projectcalico.org/v3.13/release-notes/
2020-03-12 22:55:57 -07:00
Dalton Hubble
d1831e626a Update CoreDNS from v1.6.6 to v1.6.7
* https://coredns.io/2020/01/28/coredns-1.6.7-release/
2020-02-17 14:24:17 -08:00
Dalton Hubble
7961945834 Update Kubernetes from v1.17.2 to v1.17.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md#v1173
2020-02-11 20:18:02 -08:00
Dalton Hubble
1ea8fe7a85 Update Calico from v3.11.2 to v3.12.0
* https://docs.projectcalico.org/release-notes/#v3120
2020-02-06 00:03:00 -08:00
Dalton Hubble
05297b94a9 Update Kubernetes from v1.17.1 to v1.17.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.17.md#v1172
2020-01-21 18:25:46 -08:00
Dalton Hubble
de85f1da7d Update Calico from v3.11.1 to v3.11.2
* https://docs.projectcalico.org/v3.11/release-notes/
2020-01-18 13:37:17 -08:00
Dalton Hubble
5ce4fc6953 Update Kubernetes from v1.17.0 to v1.17.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.17.md#v1171
2020-01-14 20:17:30 -08:00
Dalton Hubble
ac4b7af570 Configure kube-proxy to serve /metrics on 0.0.0.0:10249
* Set kube-proxy --metrics-bind-address to 0.0.0.0 (default
127.0.0.1) so Prometheus metrics can be scraped
* Add pod port list (informational only)
* Require node firewall rules to be updated before scrapes
can succeed
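The change amounts to one kube-proxy flag, roughly (other flags and the surrounding container spec omitted):

```yaml
command:
  - kube-proxy
  - --metrics-bind-address=0.0.0.0  # default 127.0.0.1; serves /metrics on :10249
```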
2019-12-29 11:56:52 -08:00
Dalton Hubble
c8c21deb76 Update Calico from v3.10.2 to v3.11.1
* https://docs.projectcalico.org/v3.11/release-notes/
2019-12-28 10:51:11 -08:00
Dalton Hubble
f021d9cb34 Update CoreDNS from v1.6.5 to v1.6.6
* https://coredns.io/2019/12/11/coredns-1.6.6-release/
2019-12-22 10:41:43 -05:00
Dalton Hubble
24e5513ee6 Update Calico from v3.10.1 to v3.10.2
* https://docs.projectcalico.org/v3.10/release-notes/
2019-12-09 20:56:18 -08:00
Dalton Hubble
0ddd90fd05 Update Kubernetes from v1.16.3 to v1.17.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.17.md/#v1170
2019-12-09 18:29:06 -08:00
Dalton Hubble
4369c706e2 Restore kube-controller-manager settings lost in static pod migration
* Migration from a self-hosted to a static pod control plane dropped
a few kube-controller-manager customizations
* Reduce kube-controller-manager --pod-eviction-timeout from 5m to 1m
to move pods more quickly when nodes are preempted
* Fix flex-volume-plugin-dir since the Kubernetes default points to
a read-only filesystem on Container Linux / Fedora CoreOS

Related:

* https://github.com/poseidon/terraform-render-bootstrap/pull/148
* 7b06557b7a
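Sketch of the restored flags on the kube-controller-manager static pod; the flex-volume path shown is a typical writable location and is an assumption, not quoted from the manifest:

```yaml
command:
  - kube-controller-manager
  - --pod-eviction-timeout=1m
  - --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins  # assumed path
```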
2019-12-08 22:37:36 -08:00
Dalton Hubble
7df6bd8d1e Tune static pod CPU requests slightly lower
* Reduce kube-apiserver and kube-controller-manager CPU
requests from 200m to 150m. Prefer slightly lower commitment
after running with the requests chosen in #161 for a while
* Reduce calico-node CPU request from 150m to 100m to match
CoreDNS and flannel
2019-12-08 22:25:58 -08:00
Dalton Hubble
dce49114a0 Fix terraform format with fmt 2019-12-05 01:02:01 -08:00
Dalton Hubble
50a221e042 Annotate sensitive output variables to suppress display
* Annotate terraform output variables containing generated TLS
credentials and kubeconfigs as sensitive to suppress / mask
them in terraform CLI display.
* Allow for easier use in automation systems and logged environments
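For example, the admin kubeconfig output carries the annotation (matching the pattern in the outputs diff below):

```hcl
output "kubeconfig-admin" {
  value     = local.kubeconfig-admin
  sensitive = true
}
```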
2019-12-05 00:57:07 -08:00
Dalton Hubble
4d7484f72a Change asset_dir variable from required to optional
* `asset_dir` is an absolute path to a directory where generated
assets from terraform-render-bootstrap are written (sensitive)
* Change `asset_dir` to default to "" so no assets are written
(favor Terraform output mechanisms). Previously, asset_dir was
required so all users set some path. To take advantage of the
new optionality, remove asset_dir or set it to ""
2019-12-05 00:56:54 -08:00
Dalton Hubble
6c7ba3864f Introduce a Terraform output map with distribution assets
* Introduce a new `assets_dist` output variable that provides
a mapping from suggested asset paths to asset contents (for
assets that should be distributed to controller nodes). This
new output format is intended to align with a modified asset
distribution style in Typhoon.
* Lay the groundwork for `assets_dir` to become optional. The
output map provides output variable access to the minimal assets
that are required for bootstrap
* Assets that aren't required for bootstrap itself (e.g.
the etcd CA key) but can be used by admins may later be added
as specific output variables to further reduce asset_dir use

Background:

* `terraform-render-bootstrap` rendered assets were previously
only provided by rendering files to an `asset_dir`. This was
necessary, but created a responsibility to maintain those
assets on the machine where terraform apply was run
2019-12-04 20:15:40 -08:00
Dalton Hubble
8005052cfb Remove unused raw kubeconfig field outputs
* Remove unused `ca_cert`, `kubelet_cert`, `kubelet_key`,
and `server` outputs
* These outputs were once needed to support clusters with
managed instance groups, but that hasn't been the case for
quite some time
2019-11-13 16:49:07 -08:00
Dalton Hubble
0f1f16c612 Add small CPU resource requests to static pods
* Set small CPU requests on static pods kube-apiserver,
kube-controller-manager, and kube-scheduler to align with
upstream tooling and for edge cases
* Control plane nodes are tainted to isolate them from
ordinary workloads. Even dense workloads can only compress
CPU resources on worker nodes.
* Control plane static pods use the highest priority class, so
contention favors control plane pods (over say node-exporter)
and CPU is compressible too.
* Effectively, a practical case for these requests hasn't been
observed. However, a small static pod CPU request may offer
a slight benefit if a controller became overloaded and the
above mechanisms were insufficient for some reason (bit of a
stretch, due to CPU compressibility)
* Continue to avoid setting a memory request for static pods.
It would impose a hard size requirement on controller nodes,
which isn't warranted and is handled more gently by Typhoon
default instance types across clouds and via docs
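Sketch of the resulting resource stanza for kube-apiserver and kube-controller-manager (200m reflects the value chosen here; a later commit, listed above, lowers it to 150m):

```yaml
resources:
  requests:
    cpu: 200m
  # intentionally no memory request, to avoid a hard size requirement on controllers
```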
2019-11-13 16:44:33 -08:00
Dalton Hubble
43e1230c55 Update CoreDNS from v1.6.2 to v1.6.5
* Add health `lameduck` option 5s. Before CoreDNS shuts down,
it will wait and report unhealthy for 5s to allow time for
plugins to shutdown cleanly
* Minor bug fixes over a few releases
* https://coredns.io/2019/08/31/coredns-1.6.3-release/
* https://coredns.io/2019/09/27/coredns-1.6.4-release/
* https://coredns.io/2019/11/05/coredns-1.6.5-release/
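The option lives in the CoreDNS Corefile carried by the coredns ConfigMap; a fragment showing only the health stanza (other plugins omitted):

```yaml
data:
  Corefile: |
    .:53 {
        health {
            lameduck 5s
        }
    }
```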
2019-11-13 14:33:50 -08:00
Dalton Hubble
1bba891d95 Adopt Terraform v0.12 templatefile function
* Adopt Terraform v0.12 types and the templatefile function
features to replace the use of terraform-provider-template's
`template_dir`
* Use of `for_each` to write local assets requires
that consumers use Terraform v0.12.6+ (action required)
* Continue use of `template_file` as it's quite common. In
future, we may replace it as well.
* Remove outputs `id` and `content_hash` (no longer used)

Background:

* `template_dir` was added to `terraform-provider-template`
to add support for template directory rendering in CoreOS
Tectonic Kubernetes distribution (~2017)
* Terraform v0.12 introduced a native `templatefile` function
and v0.12.6 introduced native `for_each` support (July 2019)
that makes it possible to replace `template_dir` usage
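Condensed sketch of the pattern (the real maps live in `manifests.tf`, shown later in this diff):

```hcl
locals {
  # render every template in a directory into a { path => content } map
  manifests = {
    for name in fileset("${path.module}/resources/manifests", "*.yaml") :
    "manifests/${name}" => templatefile("${path.module}/resources/manifests/${name}", {
      pod_cidr = var.pod_cidr
    })
  }
}

# writing local files with for_each requires Terraform v0.12.6+
resource "local_file" "manifests" {
  for_each = local.manifests
  filename = "${var.asset_dir}/${each.key}"
  content  = each.value
}
```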
2019-11-13 14:05:01 -08:00
Dalton Hubble
0daa1276c6 Update Kubernetes from v1.16.2 to v1.16.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1163
2019-11-13 13:02:01 -08:00
Dalton Hubble
a2b1dbe2c0 Update Calico from v3.10.0 to v3.10.1
* https://docs.projectcalico.org/v3.10/release-notes/
2019-11-07 11:07:15 -08:00
Dalton Hubble
3c7334ab55 Upgrade Calico from v3.9.2 to v3.10.0
* Change the calico-node livenessProbe from httpGet to exec
`calico-node -felix-ready`, as recommended by Calico
* Allow advertising Kubernetes service ClusterIPs
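Per the description, the liveness check becomes an exec probe along these lines (a sketch, not the exact manifest):

```yaml
livenessProbe:
  exec:
    command:
      - /bin/calico-node
      - -felix-ready
  initialDelaySeconds: 10
  periodSeconds: 10
```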
2019-10-27 01:06:09 -07:00
Dalton Hubble
e09d6bef33 Switch kube-proxy from iptables mode to ipvs mode
* Kubernetes v1.11 considered kube-proxy IPVS mode GA
* Many problems were found https://github.com/poseidon/typhoon/pull/321
* Since then, major blockers seem to have been addressed
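The switch itself is one kube-proxy flag, roughly (other flags omitted):

```yaml
command:
  - kube-proxy
  - --proxy-mode=ipvs  # previously iptables (the default)
```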
2019-10-15 22:55:17 -07:00
Dalton Hubble
0fcc067476 Update Kubernetes from v1.16.1 to v1.16.2
* https://github.com/kubernetes/kubernetes/releases/tag/v1.16.2
2019-10-15 22:38:51 -07:00
Dalton Hubble
6f2734bb3c Update Calico from v3.9.1 to v3.9.2
* https://github.com/projectcalico/calico/releases/tag/v3.9.2
2019-10-15 22:36:37 -07:00
Dalton Hubble
10d9cec5c2 Add stricter type constraints to variables 2019-10-06 20:41:50 -07:00
Dalton Hubble
1f8b634652 Remove unneeded control plane flags
* Several flags now default to the arguments we've been
setting and are no longer needed
2019-10-06 20:25:46 -07:00
Dalton Hubble
586d6e36f6 Update Kubernetes from v1.16.0 to v1.16.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1161
2019-10-02 21:22:11 -07:00
Dalton Hubble
18b7a74d30 Update Calico from v3.8.2 to v3.9.1
* https://docs.projectcalico.org/v3.9/release-notes/
2019-09-29 11:14:20 -07:00
Dalton Hubble
539b725093 Update Kubernetes from v1.15.3 to v1.16.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1160
2019-09-17 21:15:46 -07:00
Dalton Hubble
d6206abedd Replace Terraform element function with indexing
* Better to explicitly index (and error on out-of-bounds) than
use Terraform `element` (which has special wrap-around behavior)
* https://www.terraform.io/docs/configuration/functions/element.html
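For example, the API server address is now built with explicit indexing (see `auth.tf` below):

```hcl
# errors on an empty list instead of silently wrapping around
server = format("https://%s:%s", var.api_servers[0], var.external_apiserver_port)
```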
2019-09-14 16:46:27 -07:00
Dalton Hubble
e839ec5a2b Fix Terraform formatting 2019-09-14 16:44:36 -07:00
Dalton Hubble
3dade188f2 Rename project to terraform-render-bootstrap
* Rename from terraform-render-bootkube to terraform-render-bootstrap
* Generated manifest and certificate assets are no longer geared
specifically for bootkube (no longer used)
2019-09-14 16:16:49 -07:00
Dalton Hubble
97bbed6c3a Rename CA organization from bootkube to typhoon
* Rename the organization in generated CA certificates for
clusters from bootkube to typhoon
* Mainly helpful to avoid confusion with bootkube CA certificates
if users inspect their CA, especially now that bootkube isn't used
(better their searches lead to Typhoon)
2019-09-14 16:08:06 -07:00
Dalton Hubble
6e59af7113 Migrate from a self-hosted to static pod control plane
* Run kube-apiserver, kube-scheduler, and kube-controller-manager
as static pods on each controller node
* Bootstrap a minimal control plane by copying `static-manifests`
to the Kubelet `--pod-manifest-path` and tls/auth secrets to
`/etc/kubernetes/bootstrap-secrets`. Then, kubectl apply Kubernetes
manifests.
* Discontinue using bootkube to bootstrap and pivot to a self-hosted
control plane.
* Remove bootkube self-hosted kube-apiserver DaemonSet and
kube-scheduler and kube-controller-manager Deployments
* Remove pod-checkpointer manifests (no longer needed)

Advantages:

* Reduce control plane bootstrapping complexity. Self-hosted pivot and
pod checkpointing worked well, but in-place edits to kube-apiserver,
kube-controller-manager, or kube-scheduler are infrequently used. The
concept was originally geared toward continuously in-place upgrading
clusters, a goal Typhoon doesn't take on (rec. blue/green clusters).
As such, the value-add doesn't justify the extra components for this
particular project.
* Static pods still provide kubectl visibility and log access

Drawbacks:

* In-place edits to kube-apiserver, kube-controller-manager, and
kube-scheduler are not possible via kubectl (non-goal)
* Assets must be copied to each controller (not just one)
* Static pods must load credentials via hostPath, which is less clean
compared with the former Kubernetes secrets and service accounts
2019-09-02 20:52:46 -07:00
Dalton Hubble
98cc19f80f Update CoreDNS from v1.5.0 to v1.6.2
* https://coredns.io/2019/06/26/coredns-1.5.1-release/
* https://coredns.io/2019/07/03/coredns-1.5.2-release/
* https://coredns.io/2019/07/28/coredns-1.6.0-release/
* https://coredns.io/2019/08/02/coredns-1.6.1-release/
* https://coredns.io/2019/08/13/coredns-1.6.2-release/
2019-08-31 15:20:55 -07:00
Dalton Hubble
248675e7a9 Update Kubernetes from v1.15.2 to v1.15.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md/#v1153
2019-08-19 14:41:54 -07:00
Dalton Hubble
8b3738b2cc Update Calico from v3.8.1 to v3.8.2
* https://docs.projectcalico.org/v3.8/release-notes/
2019-08-16 14:53:20 -07:00
Dalton Hubble
c21da02249 Update Kubernetes from v1.15.1 to v1.15.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#downloads-for-v1152
2019-08-05 08:44:54 -07:00
Dalton Hubble
83dd5a7cfc Update Calico from v3.8.0 to v3.8.1
* https://github.com/projectcalico/calico/releases/tag/v3.8.1
2019-07-27 15:17:47 -07:00
Dalton Hubble
ed94836925 Update kube-router from v0.3.1 to v0.3.2
* kube-router is experimental and not supported or validated
* Bumping so the next time kube-router is evaluated, we're on
a modern version
* https://github.com/cloudnativelabs/kube-router/releases/tag/v0.3.2
2019-07-27 15:12:43 -07:00
Dalton Hubble
5b9faa9031 Update Kubernetes from v1.15.0 to v1.15.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#downloads-for-v1151
2019-07-19 01:18:09 -07:00
Dalton Hubble
119cb00fa7 Upgrade Calico from v3.7.4 to v3.8.0
* Enable CNI bandwidth plugin for traffic shaping
* https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#support-traffic-shaping
2019-07-11 21:00:58 -07:00
Dalton Hubble
4caca47776 Run kube-apiserver as non-root user (nobody) 2019-07-06 13:51:54 -07:00
Dalton Hubble
3bfd1253ec Always run kube-apiserver on port 6443 (internally)
* Require the bootstrap-kube-apiserver and kube-apiserver components
to listen on port 6443 (internally) to allow kube-apiserver pods to
run with lower user privilege
* Remove variable `apiserver_port`. The kube-apiserver listen
port is no longer customizable.
* Add variable `external_apiserver_port` to allow architectures
where a load balancer fronts kube-apiserver 6443 backends, but
listens on a different port externally. For example, Google Cloud
TCP Proxy load balancers cannot listen on 6443
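Sketch of the new variable; the default shown is an assumption:

```hcl
variable "external_apiserver_port" {
  type        = number
  default     = 6443  # assumed default; TCP proxy load balancers may use e.g. 443
  description = "External port for reaching kube-apiserver (backends listen on 6443)"
}
```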
2019-07-06 13:50:22 -07:00
Dalton Hubble
95f6fc7fa5 Update Calico from v3.7.3 to v3.7.4
* https://docs.projectcalico.org/v3.7/release-notes/
2019-07-02 20:15:53 -07:00
Dalton Hubble
62df9ad69c Update Kubernetes from v1.14.3 to v1.15.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#v1150
2019-06-23 13:04:13 -07:00
Dalton Hubble
89c3ab4e27 Update Calico from v3.7.2 to v3.7.3
* https://docs.projectcalico.org/v3.7/release-notes/
2019-06-13 23:36:35 -07:00
Dalton Hubble
0103bc06bb Define module required provider versions 2019-06-06 09:39:48 -07:00
Dalton Hubble
33d033f1a6 Migrate from Terraform v0.11.x to v0.12.x (breaking!)
* Terraform v0.12 is a major Terraform release with breaking changes
to the HCL language. In v0.11, redundant brackets were required as
type hints to pass lists or to concat and flatten lists and
strings. In v0.12, that work-around is no longer supported. Lists are
represented as first-class objects and the redundant brackets create
nested lists. Consequently, it's not possible to pass lists in a way that
works with both v0.11 and v0.12 at the same time. We've made the
difficult choice to pursue a hard cutover to Terraform v0.12.x
* https://www.terraform.io/upgrade-guides/0-12.html#referring-to-list-variables
* Use expression syntax instead of interpolated strings, where suggested
* Define Terraform required_version ~> v0.12.0 (>= v0.12.0, < v0.13)
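For example, callers now pass list variables directly (mirroring the README usage shown below):

```hcl
module "bootstrap" {
  source       = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=SHA"
  cluster_name = "example"
  # the v0.11 work-around was api_servers = ["${var.api_servers}"]; under v0.12
  # the brackets would create a nested list, so pass the list directly
  api_servers  = var.api_servers
  etcd_servers = var.etcd_servers
}
```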
2019-06-06 09:39:46 -07:00
92 changed files with 1595 additions and 1898 deletions

.github/dependabot.yaml

@@ -0,0 +1,6 @@
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "daily"

.github/workflows/test.yaml

@@ -0,0 +1,21 @@
name: test
on:
push:
branches:
- main
pull_request:
jobs:
terraform:
name: fmt
runs-on: ubuntu-latest
steps:
- name: checkout
uses: actions/checkout@v6
- name: terraform
uses: hashicorp/setup-terraform@v3
with:
terraform_version: 1.11.1
- name: fmt
run: terraform fmt -check -diff -recursive


@@ -1,27 +1,29 @@
# terraform-render-bootkube
# terraform-render-bootstrap
[![Workflow](https://github.com/poseidon/terraform-render-bootstrap/actions/workflows/test.yaml/badge.svg)](https://github.com/poseidon/terraform-render-bootstrap/actions/workflows/test.yaml?query=branch%3Amain)
[![Sponsors](https://img.shields.io/github/sponsors/poseidon?logo=github)](https://github.com/sponsors/poseidon)
[![Mastodon](https://img.shields.io/badge/follow-news-6364ff?logo=mastodon)](https://fosstodon.org/@typhoon)
`terraform-render-bootkube` is a Terraform module that renders [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube) assets for bootstrapping a Kubernetes cluster.
`terraform-render-bootstrap` is a Terraform module that renders TLS certificates, static pods, and manifests for bootstrapping a Kubernetes cluster.
## Audience
`terraform-render-bootkube` is a low-level component of the [Typhoon](https://github.com/poseidon/typhoon) Kubernetes distribution. Use Typhoon modules to create and manage Kubernetes clusters across supported platforms. Use the bootkube module if you'd like to customize a Kubernetes control plane or build your own distribution.
`terraform-render-bootstrap` is a low-level component of the [Typhoon](https://github.com/poseidon/typhoon) Kubernetes distribution. Use Typhoon modules to create and manage Kubernetes clusters across supported platforms. Use the bootstrap module if you'd like to customize a Kubernetes control plane or build your own distribution.
## Usage
Use the module to declare bootkube assets. Check [variables.tf](variables.tf) for options and [terraform.tfvars.example](terraform.tfvars.example) for examples.
Use the module to declare bootstrap assets. Check [variables.tf](variables.tf) for options and [terraform.tfvars.example](terraform.tfvars.example) for examples.
```hcl
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=SHA"
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=SHA"
cluster_name = "example"
api_servers = ["node1.example.com"]
etcd_servers = ["node1.example.com"]
asset_dir = "/home/core/clusters/mycluster"
}
```
Generate the assets.
Generate assets in Terraform state.
```sh
terraform init
@@ -29,21 +31,12 @@ terraform plan
terraform apply
```
Find bootkube assets rendered to the `asset_dir` path. That's it.
To inspect and write assets locally (e.g. debugging) use the `assets_dist` Terraform output.
### Comparison
Render bootkube assets directly with bootkube v0.14.0.
```sh
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:6443 --api-server-alt-names=DNS=node1.example.com --etcd-servers=https://node1.example.com:2379
```
Compare assets. Rendered assets may differ slightly from bootkube assets to reflect decisions made by the [Typhoon](https://github.com/poseidon/typhoon) distribution.
```sh
pushd /home/core/mycluster
mv manifests-networking/* manifests
popd
diff -rw assets /home/core/mycluster
resource local_file "assets" {
for_each = module.bootstrap.assets_dist
filename = "some-assets/${each.key}"
content = each.value
}
```

assets.tf

@@ -1,110 +0,0 @@
# Self-hosted Kubernetes bootstrap-manifests
resource "template_dir" "bootstrap-manifests" {
source_dir = "${path.module}/resources/bootstrap-manifests"
destination_dir = "${var.asset_dir}/bootstrap-manifests"
vars {
hyperkube_image = "${var.container_images["hyperkube"]}"
etcd_servers = "${join(",", formatlist("https://%s:2379", var.etcd_servers))}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
trusted_certs_dir = "${var.trusted_certs_dir}"
apiserver_port = "${var.apiserver_port}"
}
}
# Self-hosted Kubernetes manifests
resource "template_dir" "manifests" {
source_dir = "${path.module}/resources/manifests"
destination_dir = "${var.asset_dir}/manifests"
vars {
hyperkube_image = "${var.container_images["hyperkube"]}"
pod_checkpointer_image = "${var.container_images["pod_checkpointer"]}"
coredns_image = "${var.container_images["coredns"]}"
etcd_servers = "${join(",", formatlist("https://%s:2379", var.etcd_servers))}"
control_plane_replicas = "${max(2, length(var.etcd_servers))}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
cluster_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
trusted_certs_dir = "${var.trusted_certs_dir}"
apiserver_port = "${var.apiserver_port}"
ca_cert = "${base64encode(tls_self_signed_cert.kube-ca.cert_pem)}"
ca_key = "${base64encode(tls_private_key.kube-ca.private_key_pem)}"
server = "${format("https://%s:%s", element(var.api_servers, 0), var.apiserver_port)}"
apiserver_key = "${base64encode(tls_private_key.apiserver.private_key_pem)}"
apiserver_cert = "${base64encode(tls_locally_signed_cert.apiserver.cert_pem)}"
serviceaccount_pub = "${base64encode(tls_private_key.service-account.public_key_pem)}"
serviceaccount_key = "${base64encode(tls_private_key.service-account.private_key_pem)}"
etcd_ca_cert = "${base64encode(tls_self_signed_cert.etcd-ca.cert_pem)}"
etcd_client_cert = "${base64encode(tls_locally_signed_cert.client.cert_pem)}"
etcd_client_key = "${base64encode(tls_private_key.client.private_key_pem)}"
aggregation_flags = "${var.enable_aggregation == "true" ? indent(8, local.aggregation_flags) : ""}"
aggregation_ca_cert = "${var.enable_aggregation == "true" ? base64encode(join(" ", tls_self_signed_cert.aggregation-ca.*.cert_pem)) : ""}"
aggregation_client_cert = "${var.enable_aggregation == "true" ? base64encode(join(" ", tls_locally_signed_cert.aggregation-client.*.cert_pem)) : ""}"
aggregation_client_key = "${var.enable_aggregation == "true" ? base64encode(join(" ", tls_private_key.aggregation-client.*.private_key_pem)) : ""}"
}
}
locals {
aggregation_flags = <<EOF
- --proxy-client-cert-file=/etc/kubernetes/secrets/aggregation-client.crt
- --proxy-client-key-file=/etc/kubernetes/secrets/aggregation-client.key
- --requestheader-client-ca-file=/etc/kubernetes/secrets/aggregation-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
EOF
}
# Generated kubeconfig for Kubelets
resource "local_file" "kubeconfig-kubelet" {
content = "${data.template_file.kubeconfig-kubelet.rendered}"
filename = "${var.asset_dir}/auth/kubeconfig-kubelet"
}
# Generated admin kubeconfig (bootkube requires it be at auth/kubeconfig)
# https://github.com/kubernetes-incubator/bootkube/blob/master/pkg/bootkube/bootkube.go#L42
resource "local_file" "kubeconfig-admin" {
content = "${data.template_file.kubeconfig-admin.rendered}"
filename = "${var.asset_dir}/auth/kubeconfig"
}
# Generated admin kubeconfig in a file named after the cluster
resource "local_file" "kubeconfig-admin-named" {
content = "${data.template_file.kubeconfig-admin.rendered}"
filename = "${var.asset_dir}/auth/${var.cluster_name}-config"
}
data "template_file" "kubeconfig-kubelet" {
template = "${file("${path.module}/resources/kubeconfig-kubelet")}"
vars {
ca_cert = "${base64encode(tls_self_signed_cert.kube-ca.cert_pem)}"
kubelet_cert = "${base64encode(tls_locally_signed_cert.kubelet.cert_pem)}"
kubelet_key = "${base64encode(tls_private_key.kubelet.private_key_pem)}"
server = "${format("https://%s:%s", element(var.api_servers, 0), var.apiserver_port)}"
}
}
data "template_file" "kubeconfig-admin" {
template = "${file("${path.module}/resources/kubeconfig-admin")}"
vars {
name = "${var.cluster_name}"
ca_cert = "${base64encode(tls_self_signed_cert.kube-ca.cert_pem)}"
kubelet_cert = "${base64encode(tls_locally_signed_cert.admin.cert_pem)}"
kubelet_key = "${base64encode(tls_private_key.admin.private_key_pem)}"
server = "${format("https://%s:%s", element(var.api_servers, 0), var.apiserver_port)}"
}
}

auth.tf

@@ -0,0 +1,68 @@
locals {
# component kubeconfigs assets map
auth_kubeconfigs = {
"auth/admin.conf" = local.kubeconfig-admin,
"auth/controller-manager.conf" = local.kubeconfig-controller-manager
"auth/scheduler.conf" = local.kubeconfig-scheduler
}
}
locals {
# Generated admin kubeconfig to bootstrap control plane
kubeconfig-admin = templatefile("${path.module}/resources/kubeconfig-admin",
{
name = var.cluster_name
ca_cert = base64encode(tls_self_signed_cert.kube-ca.cert_pem)
kubelet_cert = base64encode(tls_locally_signed_cert.admin.cert_pem)
kubelet_key = base64encode(tls_private_key.admin.private_key_pem)
server = format("https://%s:%s", var.api_servers[0], var.external_apiserver_port)
}
)
# Generated kube-controller-manager kubeconfig
kubeconfig-controller-manager = templatefile("${path.module}/resources/kubeconfig-admin",
{
name = var.cluster_name
ca_cert = base64encode(tls_self_signed_cert.kube-ca.cert_pem)
kubelet_cert = base64encode(tls_locally_signed_cert.controller-manager.cert_pem)
kubelet_key = base64encode(tls_private_key.controller-manager.private_key_pem)
server = format("https://%s:%s", var.api_servers[0], var.external_apiserver_port)
}
)
# Generated kube-scheduler kubeconfig
kubeconfig-scheduler = templatefile("${path.module}/resources/kubeconfig-admin",
{
name = var.cluster_name
ca_cert = base64encode(tls_self_signed_cert.kube-ca.cert_pem)
kubelet_cert = base64encode(tls_locally_signed_cert.scheduler.cert_pem)
kubelet_key = base64encode(tls_private_key.scheduler.private_key_pem)
server = format("https://%s:%s", var.api_servers[0], var.external_apiserver_port)
}
)
# Generated kubeconfig to bootstrap Kubelets
kubeconfig-bootstrap = templatefile("${path.module}/resources/kubeconfig-bootstrap",
{
ca_cert = base64encode(tls_self_signed_cert.kube-ca.cert_pem)
server = format("https://%s:%s", var.api_servers[0], var.external_apiserver_port)
token_id = random_password.bootstrap-token-id.result
token_secret = random_password.bootstrap-token-secret.result
}
)
}
# Generate a cryptographically random token id (public)
resource "random_password" "bootstrap-token-id" {
length = 6
upper = false
special = false
}
# Generate a cryptographically random token secret
resource "random_password" "bootstrap-token-secret" {
length = 16
upper = false
special = false
}


@@ -1,47 +1,36 @@
# Assets generated only when certain options are chosen
resource "template_dir" "flannel-manifests" {
count = "${var.networking == "flannel" ? 1 : 0}"
source_dir = "${path.module}/resources/flannel"
destination_dir = "${var.asset_dir}/manifests-networking"
locals {
# flannel manifests map
# { manifests-networking/manifest.yaml => content }
flannel_manifests = {
for name in fileset("${path.module}/resources/flannel", "*.yaml") :
"manifests/network/${name}" => templatefile(
"${path.module}/resources/flannel/${name}",
{
flannel_image = var.container_images["flannel"]
flannel_cni_image = var.container_images["flannel_cni"]
pod_cidr = var.pod_cidr
daemonset_tolerations = var.daemonset_tolerations
}
)
if var.components.enable && var.components.flannel.enable && var.networking == "flannel"
}
vars {
flannel_image = "${var.container_images["flannel"]}"
flannel_cni_image = "${var.container_images["flannel_cni"]}"
pod_cidr = "${var.pod_cidr}"
# cilium manifests map
# { manifests-networking/manifest.yaml => content }
cilium_manifests = {
for name in fileset("${path.module}/resources/cilium", "**/*.yaml") :
"manifests/network/${name}" => templatefile(
"${path.module}/resources/cilium/${name}",
{
cilium_agent_image = var.container_images["cilium_agent"]
cilium_operator_image = var.container_images["cilium_operator"]
pod_cidr = var.pod_cidr
daemonset_tolerations = var.daemonset_tolerations
}
)
if var.components.enable && var.components.cilium.enable && var.networking == "cilium"
}
}
resource "template_dir" "calico-manifests" {
count = "${var.networking == "calico" ? 1 : 0}"
source_dir = "${path.module}/resources/calico"
destination_dir = "${var.asset_dir}/manifests-networking"
vars {
calico_image = "${var.container_images["calico"]}"
calico_cni_image = "${var.container_images["calico_cni"]}"
network_mtu = "${var.network_mtu}"
network_encapsulation = "${indent(2, var.network_encapsulation == "vxlan" ? "vxlanMode: Always" : "ipipMode: Always")}"
ipip_enabled = "${var.network_encapsulation == "ipip" ? true : false}"
ipip_readiness = "${var.network_encapsulation == "ipip" ? indent(16, "- --bird-ready") : ""}"
vxlan_enabled = "${var.network_encapsulation == "vxlan" ? true : false}"
network_ip_autodetection_method = "${var.network_ip_autodetection_method}"
pod_cidr = "${var.pod_cidr}"
enable_reporting = "${var.enable_reporting}"
}
}
resource "template_dir" "kube-router-manifests" {
count = "${var.networking == "kube-router" ? 1 : 0}"
source_dir = "${path.module}/resources/kube-router"
destination_dir = "${var.asset_dir}/manifests-networking"
vars {
kube_router_image = "${var.container_images["kube_router"]}"
flannel_cni_image = "${var.container_images["flannel_cni"]}"
network_mtu = "${var.network_mtu}"
}
}

manifests.tf

@@ -0,0 +1,77 @@
locals {
# Kubernetes static pod manifests map
# {static-manifests/manifest.yaml => content }
static_manifests = {
for name in fileset("${path.module}/resources/static-manifests", "*.yaml") :
"static-manifests/${name}" => templatefile(
"${path.module}/resources/static-manifests/${name}",
{
kube_apiserver_image = var.container_images["kube_apiserver"]
kube_controller_manager_image = var.container_images["kube_controller_manager"]
kube_scheduler_image = var.container_images["kube_scheduler"]
etcd_servers = join(",", formatlist("https://%s:2379", var.etcd_servers))
pod_cidr = var.pod_cidr
service_cidr = var.service_cidr
service_account_issuer = var.service_account_issuer
aggregation_flags = var.enable_aggregation ? indent(4, local.aggregation_flags) : ""
}
)
}
# Kubernetes control plane manifests map
# { manifests/manifest.yaml => content }
manifests = merge({
for name in fileset("${path.module}/resources/manifests", "**/*.yaml") :
"manifests/${name}" => templatefile(
"${path.module}/resources/manifests/${name}",
{
server = format("https://%s:%s", var.api_servers[0], var.external_apiserver_port)
apiserver_host = var.api_servers[0]
apiserver_port = var.external_apiserver_port
token_id = random_password.bootstrap-token-id.result
token_secret = random_password.bootstrap-token-secret.result
}
)
},
# CoreDNS manifests (optional)
{
for name in fileset("${path.module}/resources/coredns", "*.yaml") :
"manifests/coredns/${name}" => templatefile(
"${path.module}/resources/coredns/${name}",
{
coredns_image = var.container_images["coredns"]
control_plane_replicas = max(2, length(var.etcd_servers))
cluster_domain_suffix = var.cluster_domain_suffix
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
}
) if var.components.enable && var.components.coredns.enable
},
# kube-proxy manifests (optional)
{
for name in fileset("${path.module}/resources/kube-proxy", "*.yaml") :
"manifests/kube-proxy/${name}" => templatefile(
"${path.module}/resources/kube-proxy/${name}",
{
kube_proxy_image = var.container_images["kube_proxy"]
pod_cidr = var.pod_cidr
daemonset_tolerations = var.daemonset_tolerations
}
) if var.components.enable && var.components.kube_proxy.enable && var.networking != "cilium"
}
)
}
locals {
aggregation_flags = <<EOF
- --proxy-client-cert-file=/etc/kubernetes/pki/aggregation-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/aggregation-client.key
- --requestheader-client-ca-file=/etc/kubernetes/pki/aggregation-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
EOF
}


@@ -1,71 +1,76 @@
output "id" {
value = "${sha1("${template_dir.bootstrap-manifests.id} ${template_dir.manifests.id}")}"
}
output "content_hash" {
value = "${sha1("${template_dir.bootstrap-manifests.id} ${template_dir.manifests.id}")}"
}
output "cluster_dns_service_ip" {
value = "${cidrhost(var.service_cidr, 10)}"
value = cidrhost(var.service_cidr, 10)
}
// Generated kubeconfig for Kubelets (i.e. lower privilege than admin)
output "kubeconfig-kubelet" {
value = "${data.template_file.kubeconfig-kubelet.rendered}"
value = local.kubeconfig-bootstrap
sensitive = true
}
// Generated kubeconfig for admins (i.e. human super-user)
output "kubeconfig-admin" {
value = "${data.template_file.kubeconfig-admin.rendered}"
value = local.kubeconfig-admin
sensitive = true
}
# assets to distribute to controllers
# { some/path => content }
output "assets_dist" {
# combine maps of assets
value = merge(
local.auth_kubeconfigs,
local.etcd_tls,
local.kubernetes_tls,
local.aggregation_tls,
local.static_manifests,
local.manifests,
local.flannel_manifests,
local.cilium_manifests,
)
sensitive = true
}
# etcd TLS assets
output "etcd_ca_cert" {
value = "${tls_self_signed_cert.etcd-ca.cert_pem}"
value = tls_self_signed_cert.etcd-ca.cert_pem
sensitive = true
}
output "etcd_client_cert" {
value = "${tls_locally_signed_cert.client.cert_pem}"
value = tls_locally_signed_cert.client.cert_pem
sensitive = true
}
output "etcd_client_key" {
value = "${tls_private_key.client.private_key_pem}"
value = tls_private_key.client.private_key_pem
sensitive = true
}
output "etcd_server_cert" {
value = "${tls_locally_signed_cert.server.cert_pem}"
value = tls_locally_signed_cert.server.cert_pem
sensitive = true
}
output "etcd_server_key" {
value = "${tls_private_key.server.private_key_pem}"
value = tls_private_key.server.private_key_pem
sensitive = true
}
output "etcd_peer_cert" {
value = "${tls_locally_signed_cert.peer.cert_pem}"
value = tls_locally_signed_cert.peer.cert_pem
sensitive = true
}
output "etcd_peer_key" {
value = "${tls_private_key.peer.private_key_pem}"
value = tls_private_key.peer.private_key_pem
sensitive = true
}
# Some platforms may need to reconstruct the kubeconfig directly in user-data.
# That can't be done with the way template_file interpolates multi-line
# contents so the raw components of the kubeconfig may be needed.
# Kubernetes TLS assets
output "ca_cert" {
value = "${base64encode(tls_self_signed_cert.kube-ca.cert_pem)}"
}
output "kubelet_cert" {
value = "${base64encode(tls_locally_signed_cert.kubelet.cert_pem)}"
}
output "kubelet_key" {
value = "${base64encode(tls_private_key.kubelet.private_key_pem)}"
}
output "server" {
value = "${format("https://%s:%s", element(var.api_servers, 0), var.apiserver_port)}"
output "service_account_public_key" {
value = tls_private_key.service-account.public_key_pem
}


@@ -1,56 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: bootstrap-kube-apiserver
namespace: kube-system
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
containers:
- name: kube-apiserver
image: ${hyperkube_image}
command:
- /hyperkube
- apiserver
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --anonymous-auth=false
- --authorization-mode=RBAC
- --bind-address=0.0.0.0
- --client-ca-file=/etc/kubernetes/secrets/ca.crt
- --cloud-provider=${cloud_provider}
- --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,Priority
- --etcd-cafile=/etc/kubernetes/secrets/etcd-client-ca.crt
- --etcd-certfile=/etc/kubernetes/secrets/etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/secrets/etcd-client.key
- --etcd-servers=${etcd_servers}
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --secure-port=${apiserver_port}
- --service-account-key-file=/etc/kubernetes/secrets/service-account.pub
- --service-cluster-ip-range=${service_cidr}
- --storage-backend=etcd3
- --tls-cert-file=/etc/kubernetes/secrets/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/secrets/apiserver.key
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/secrets
readOnly: true
- name: ssl-certs-host
mountPath: /etc/ssl/certs
readOnly: true
volumes:
- name: secrets
hostPath:
path: /etc/kubernetes/bootstrap-secrets
- name: ssl-certs-host
hostPath:
path: ${trusted_certs_dir}


@@ -1,40 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: bootstrap-kube-controller-manager
namespace: kube-system
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
containers:
- name: kube-controller-manager
image: ${hyperkube_image}
command:
- ./hyperkube
- controller-manager
- --allocate-node-cidrs=true
- --cluster-cidr=${pod_cidr}
- --service-cluster-ip-range=${service_cidr}
- --cloud-provider=${cloud_provider}
- --cluster-signing-cert-file=/etc/kubernetes/secrets/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/secrets/ca.key
- --configure-cloud-routes=false
- --kubeconfig=/etc/kubernetes/secrets/kubeconfig
- --leader-elect=true
- --root-ca-file=/etc/kubernetes/secrets/ca.crt
- --service-account-private-key-file=/etc/kubernetes/secrets/service-account.key
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/secrets
readOnly: true
- name: ssl-host
mountPath: /etc/ssl/certs
readOnly: true
hostNetwork: true
volumes:
- name: secrets
hostPath:
path: /etc/kubernetes/bootstrap-secrets
- name: ssl-host
hostPath:
path: ${trusted_certs_dir}


@@ -1,25 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: bootstrap-kube-scheduler
namespace: kube-system
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
containers:
- name: kube-scheduler
image: ${hyperkube_image}
command:
- ./hyperkube
- scheduler
- --kubeconfig=/etc/kubernetes/secrets/kubeconfig
- --leader-elect=true
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/secrets
readOnly: true
hostNetwork: true
volumes:
- name: secrets
hostPath:
path: /etc/kubernetes/bootstrap-secrets


@@ -1,12 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgpconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPConfiguration
plural: bgpconfigurations
singular: bgpconfiguration


@@ -1,12 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgppeers.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPPeer
plural: bgppeers
singular: bgppeer


@@ -1,12 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: blockaffinities.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BlockAffinity
plural: blockaffinities
singular: blockaffinity


@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system


@@ -1,108 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: calico-node
rules:
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
verbs:
- get
- apiGroups: [""]
resources:
- endpoints
- services
verbs:
- watch
- list
# Used by Calico for policy information
- apiGroups: [""]
resources:
- pods
- namespaces
- serviceaccounts
verbs:
- list
- watch
- apiGroups: [""]
resources:
- nodes/status
verbs:
# Calico patches the node NetworkUnavailable status
- patch
# Calico updates some info in node annotations
- update
# CNI plugin patches pods/status
- apiGroups: [""]
resources:
- pods/status
verbs:
- patch
# Calico reads some info on nodes
- apiGroups: [""]
resources:
- nodes
verbs:
- get
- list
- watch
# Calico monitors Kubernetes NetworkPolicies
- apiGroups: ["networking.k8s.io"]
resources:
- networkpolicies
verbs:
- watch
- list
# Calico monitors its CRDs
- apiGroups: ["crd.projectcalico.org"]
resources:
- globalfelixconfigs
- felixconfigurations
- bgppeers
- globalbgpconfigs
- bgpconfigurations
- ippools
- ipamblocks
- globalnetworkpolicies
- globalnetworksets
- networksets
- networkpolicies
- clusterinformations
- hostendpoints
verbs:
- get
- list
- watch
- apiGroups: ["crd.projectcalico.org"]
resources:
- felixconfigurations
- ippools
- clusterinformations
verbs:
- create
- update
# Calico may perform IPAM allocations
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
- ipamblocks
- ipamhandles
verbs:
- get
- list
- create
- update
- delete
- apiGroups: ["crd.projectcalico.org"]
resources:
- ipamconfigs
verbs:
- get
# Watch block affinities for route aggregation
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
verbs:
- watch


@@ -1,12 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: clusterinformations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: ClusterInformation
plural: clusterinformations
singular: clusterinformation


@@ -1,41 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: calico-config
namespace: kube-system
data:
# Disable Typha for now.
typha_service_name: "none"
# Calico backend to use
calico_backend: "bird"
# Calico MTU
veth_mtu: "${network_mtu}"
# The CNI network configuration to install on each node.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
}
]
}


@@ -1,191 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: calico-node
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
priorityClassName: system-node-critical
serviceAccountName: calico-node
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
initContainers:
# Install Calico CNI binaries and CNI network config file on nodes
- name: install-cni
image: ${calico_cni_image}
command: ["/install-cni.sh"]
env:
# Name of the CNI config file to create on each node.
- name: CNI_CONF_NAME
value: "10-calico.conflist"
# Set node name based on k8s nodeName
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Contents of the CNI config to create on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
- name: CNI_NET_DIR
value: "/etc/kubernetes/cni/net.d"
- name: CNI_MTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
- name: SLEEP
value: "false"
volumeMounts:
- name: cni-bin-dir
mountPath: /host/opt/cni/bin
- name: cni-conf-dir
mountPath: /host/etc/cni/net.d
containers:
- name: calico-node
image: ${calico_image}
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
value: "kubernetes"
# Wait for datastore
- name: WAIT_FOR_DATASTORE
value: "true"
# Typha support: controlled by the ConfigMap.
- name: FELIX_TYPHAK8SSERVICENAME
valueFrom:
configMapKeyRef:
name: calico-config
key: typha_service_name
- name: FELIX_USAGEREPORTINGENABLED
value: "${enable_reporting}"
# Set node name based on k8s nodeName.
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Calico network backend
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
- name: IP_AUTODETECTION_METHOD
value: "${network_ip_autodetection_method}"
# Whether Felix should enable IP-in-IP tunnel
- name: FELIX_IPINIPENABLED
value: "${ipip_enabled}"
# MTU to set on the IPIP tunnel (if enabled)
- name: FELIX_IPINIPMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Whether Felix should enable VXLAN tunnel
- name: FELIX_VXLANENABLED
value: "${vxlan_enabled}"
# MTU to set on the VXLAN tunnel (if enabled)
- name: FELIX_VXLANMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
- name: NO_DEFAULT_POOLS
value: "true"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPV6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
# Enable felix info logging.
- name: FELIX_LOGSEVERITYSCREEN
value: "info"
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 150m
livenessProbe:
httpGet:
path: /liveness
port: 9099
host: localhost
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
exec:
command:
- /bin/calico-node
- -felix-ready
${ipip_readiness}
periodSeconds: 10
volumeMounts:
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: var-lib-calico
mountPath: /var/lib/calico
readOnly: false
- name: var-run-calico
mountPath: /var/run/calico
readOnly: false
- name: xtables-lock
mountPath: /run/xtables.lock
readOnly: false
terminationGracePeriodSeconds: 0
volumes:
# Used by calico/node
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-lib-calico
hostPath:
path: /var/lib/calico
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: xtables-lock
hostPath:
type: FileOrCreate
path: /run/xtables.lock
# Used by install-cni
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-conf-dir
hostPath:
path: /etc/kubernetes/cni/net.d


@@ -1,10 +0,0 @@
apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
name: default-ipv4-ippool
spec:
blockSize: 24
cidr: ${pod_cidr}
${network_encapsulation}
natOutgoing: true
nodeSelector: all()


@@ -1,12 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: felixconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: FelixConfiguration
plural: felixconfigurations
singular: felixconfiguration


@@ -1,12 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworkpolicies.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkPolicy
plural: globalnetworkpolicies
singular: globalnetworkpolicy


@@ -1,12 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworksets.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkSet
plural: globalnetworksets
singular: globalnetworkset


@@ -1,12 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: hostendpoints.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: HostEndpoint
plural: hostendpoints
singular: hostendpoint


@@ -1,12 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamblocks.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMBlock
plural: ipamblocks
singular: ipamblock


@@ -1,12 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamconfigs.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMConfig
plural: ipamconfigs
singular: ipamconfig


@@ -1,12 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamhandles.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMHandle
plural: ipamhandles
singular: ipamhandle


@@ -1,12 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ippools.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPPool
plural: ippools
singular: ippool


@@ -1,12 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networkpolicies.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkPolicy
plural: networkpolicies
singular: networkpolicy


@@ -1,12 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networksets.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkSet
plural: networksets
singular: networkset


@@ -0,0 +1,27 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cilium-operator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cilium-operator
subjects:
- kind: ServiceAccount
name: cilium-operator
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cilium-agent
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cilium-agent
subjects:
- kind: ServiceAccount
name: cilium-agent
namespace: kube-system


@@ -0,0 +1,188 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cilium-operator
rules:
- apiGroups:
- ""
resources:
# to automatically delete [core|kube]dns pods so that they start being
# managed by Cilium
- pods
verbs:
- get
- list
- watch
- delete
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
- nodes/status
verbs:
- patch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
# to perform LB IP allocation for BGP
- services/status
verbs:
- update
- apiGroups:
- ""
resources:
# to perform the translation of a CNP that contains `ToGroup` to its endpoints
- services
- endpoints
# to check apiserver connectivity
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- cilium.io
resources:
- ciliumnetworkpolicies
- ciliumnetworkpolicies/status
- ciliumnetworkpolicies/finalizers
- ciliumclusterwidenetworkpolicies
- ciliumclusterwidenetworkpolicies/status
- ciliumclusterwidenetworkpolicies/finalizers
- ciliumendpoints
- ciliumendpoints/status
- ciliumendpoints/finalizers
- ciliumnodes
- ciliumnodes/status
- ciliumnodes/finalizers
- ciliumidentities
- ciliumidentities/status
- ciliumidentities/finalizers
- ciliumlocalredirectpolicies
- ciliumlocalredirectpolicies/status
- ciliumlocalredirectpolicies/finalizers
- ciliumendpointslices
- ciliumloadbalancerippools
- ciliumloadbalancerippools/status
- ciliumcidrgroups
- ciliuml2announcementpolicies
- ciliuml2announcementpolicies/status
- ciliumpodippools
verbs:
- '*'
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- create
- get
- list
- update
- watch
# Cilium operator performs leader election when running multiple replicas
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cilium-agent
rules:
- apiGroups:
- networking.k8s.io
resources:
- networkpolicies
verbs:
- get
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- namespaces
- services
- pods
- endpoints
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- create
- get
- list
- watch
- update
- apiGroups:
- cilium.io
resources:
- ciliumnetworkpolicies
- ciliumnetworkpolicies/status
- ciliumclusterwidenetworkpolicies
- ciliumclusterwidenetworkpolicies/status
- ciliumendpoints
- ciliumendpoints/status
- ciliumnodes
- ciliumnodes/status
- ciliumidentities
- ciliumidentities/status
- ciliumlocalredirectpolicies
- ciliumlocalredirectpolicies/status
- ciliumegressnatpolicies
- ciliumendpointslices
- ciliumcidrgroups
- ciliuml2announcementpolicies
- ciliuml2announcementpolicies/status
- ciliumpodippools
verbs:
- '*'


@@ -0,0 +1,175 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: cilium
namespace: kube-system
data:
# Identity allocation mode selects how identities are shared between cilium
# nodes by setting how they are stored. The options are "crd" or "kvstore".
# - "crd" stores identities in kubernetes as CRDs (custom resource definition).
# These can be queried with:
# kubectl get ciliumid
# - "kvstore" stores identities in a kvstore, etcd or consul, that is
# configured below. Cilium versions before 1.6 supported only the kvstore
# backend. Upgrades from these older cilium versions should continue using
# the kvstore by commenting out the identity-allocation-mode below, or
# setting it to "kvstore".
identity-allocation-mode: crd
cilium-endpoint-gc-interval: "5m0s"
nodes-gc-interval: "5m0s"
# If you want to run cilium in debug mode, change this value to true
debug: "false"
# The agent can be put into the following three policy enforcement modes
# default, always and never.
# https://docs.cilium.io/en/latest/policy/intro/#policy-enforcement-modes
enable-policy: "default"
# Prometheus
# enable-metrics: "true"
# prometheus-serve-addr: ":foo"
# operator-prometheus-serve-addr: ":bar"
# Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
# address.
enable-ipv4: "true"
# Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
# address.
enable-ipv6: "false"
# Enable probing for a more efficient clock source for the BPF datapath
enable-bpf-clock-probe: "true"
# Enable use of transparent proxying mechanisms (Linux 5.7+)
enable-bpf-tproxy: "false"
# If you want cilium monitor to aggregate tracing for packets, set this level
# to "low", "medium", or "maximum". The higher the level, the less packets
# that will be seen in monitor output.
monitor-aggregation: medium
# The monitor aggregation interval governs the typical time between monitor
# notification events for each allowed connection.
#
# Only effective when monitor aggregation is set to "medium" or higher.
monitor-aggregation-interval: 5s
# The monitor aggregation flags determine which TCP flags, upon the
# first observation, cause monitor notifications to be generated.
#
# Only effective when monitor aggregation is set to "medium" or higher.
monitor-aggregation-flags: all
# Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
# sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
bpf-map-dynamic-size-ratio: "0.0025"
# bpf-policy-map-max specifies the maximum number of entries in endpoint
# policy map (per endpoint)
bpf-policy-map-max: "16384"
# bpf-lb-map-max specifies the maximum number of entries in bpf lb service,
# backend and affinity maps.
bpf-lb-map-max: "65536"
# Pre-allocation of map entries allows per-packet latency to be reduced, at
# the expense of up-front memory allocation for the entries in the maps. The
# default value below will minimize memory usage in the default installation;
# users who are sensitive to latency may consider setting this to "true".
#
# This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
# this option and behave as though it is set to "true".
#
# If this value is modified, then during the next Cilium startup the restore
# of existing endpoints and tracking of ongoing connections may be disrupted.
# As a result, reply packets may be dropped and the load-balancing decisions
# for established connections may change.
#
# If this option is set to "false" during an upgrade from 1.3 or earlier to
# 1.4 or later, then it may cause one-time disruptions during the upgrade.
preallocate-bpf-maps: "false"
# Name of the cluster. Only relevant when building a mesh of clusters.
cluster-name: default
# Unique ID of the cluster. Must be unique across all connected clusters and
# in the range of 1 to 255. Only relevant when building a mesh of clusters.
cluster-id: "0"
# Encapsulation mode for communication between nodes
# Possible values:
# - disabled
# - vxlan (default)
# - geneve
routing-mode: "tunnel"
tunnel: vxlan
# Enables L7 proxy for L7 policy enforcement and visibility
enable-l7-proxy: "true"
auto-direct-node-routes: "false"
# enableXTSocketFallback enables the fallback compatibility solution
# when the xt_socket kernel module is missing and it is needed for
# the datapath L7 redirection to work properly. See documentation
# for details on when this can be disabled:
# http://docs.cilium.io/en/latest/install/system_requirements/#admin-kernel-version.
enable-xt-socket-fallback: "true"
# installIptablesRules enables installation of iptables rules to allow for
# TPROXY (L7 proxy injection), iptables-based masquerading, and compatibility
# with kube-proxy. See documentation for details on when this can be
# disabled.
install-iptables-rules: "true"
# masquerade traffic leaving the node destined for outside
enable-ipv4-masquerade: "true"
enable-ipv6-masquerade: "false"
# bpfMasquerade enables masquerading with BPF instead of iptables
enable-bpf-masquerade: "true"
# kube-proxy
kube-proxy-replacement: "true"
kube-proxy-replacement-healthz-bind-address: ":10256"
enable-session-affinity: "true"
# ClusterIPs from host namespace
bpf-lb-sock: "true"
# ClusterIPs from external nodes
bpf-lb-external-clusterip: "true"
# NodePort
enable-node-port: "true"
enable-health-check-nodeport: "false"
# ExternalIPs
enable-external-ips: "true"
# HostPort
enable-host-port: "true"
# IPAM
ipam: "cluster-pool"
disable-cnp-status-updates: "true"
cluster-pool-ipv4-cidr: "${pod_cidr}"
cluster-pool-ipv4-mask-size: "24"
# Health
agent-health-port: "9876"
enable-health-checking: "true"
enable-endpoint-health-checking: "true"
# Identity
enable-well-known-identities: "false"
enable-remote-node-identity: "true"
# Misc
enable-bandwidth-manager: "false"
enable-local-redirect-policy: "false"
policy-audit-mode: "false"
operator-api-serve-addr: "127.0.0.1:9234"
enable-l2-neigh-discovery: "true"
enable-k8s-terminating-endpoint: "true"
enable-k8s-networkpolicy: "true"
external-envoy-proxy: "false"
write-cni-conf-when-ready: /host/etc/cni/net.d/05-cilium.conflist
cni-exclusive: "true"
cni-log-file: "/var/run/cilium/cilium-cni.log"
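
The cluster-pool IPAM keys above are filled in by Terraform templating. As a minimal sketch, assuming a hypothetical pod_cidr of 10.2.0.0/16 (illustrative only), the rendered excerpt would look like the following; with cluster-pool-ipv4-mask-size at 24, the Cilium operator assigns each node a /24 slice of this pool:

# excerpt of the rendered ConfigMap data (hypothetical pod_cidr)
ipam: "cluster-pool"
cluster-pool-ipv4-cidr: "10.2.0.0/16"
cluster-pool-ipv4-mask-size: "24"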


@@ -0,0 +1,219 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: cilium
namespace: kube-system
labels:
k8s-app: cilium
spec:
selector:
matchLabels:
k8s-app: cilium-agent
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: cilium-agent
spec:
hostNetwork: true
priorityClassName: system-node-critical
serviceAccountName: cilium-agent
securityContext:
seccompProfile:
type: RuntimeDefault
tolerations:
- key: node-role.kubernetes.io/controller
operator: Exists
- key: node.kubernetes.io/not-ready
operator: Exists
%{~ for key in daemonset_tolerations ~}
- key: ${key}
operator: Exists
%{~ endfor ~}
initContainers:
# Cilium v1.13.1 starts installing CNI plugins in yet another init container
# https://github.com/cilium/cilium/pull/24075
- name: install-cni
image: ${cilium_agent_image}
command:
- /install-plugin.sh
securityContext:
privileged: true
capabilities:
drop:
- ALL
volumeMounts:
- name: cni-bin-dir
mountPath: /host/opt/cni/bin
# Required to mount cgroup2 filesystem on the underlying Kubernetes node.
# We use nsenter command with host's cgroup and mount namespaces enabled.
- name: mount-cgroup
image: ${cilium_agent_image}
command:
- sh
- -ec
# The statically linked Go program binary is invoked to avoid any
# dependency on utilities like sh and mount that can be missing on certain
# distros installed on the underlying host. Copy the binary to the
# same directory where we install cilium cni plugin so that exec permissions
# are available.
- 'cp /usr/bin/cilium-mount /hostbin/cilium-mount && nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "$${BIN_PATH}/cilium-mount" $CGROUP_ROOT; rm /hostbin/cilium-mount'
env:
- name: CGROUP_ROOT
value: /run/cilium/cgroupv2
- name: BIN_PATH
value: /opt/cni/bin
securityContext:
privileged: true
volumeMounts:
- name: hostproc
mountPath: /hostproc
- name: cni-bin-dir
mountPath: /hostbin
- name: clean-cilium-state
image: ${cilium_agent_image}
command:
- /init-container.sh
securityContext:
privileged: true
volumeMounts:
- name: sys-fs-bpf
mountPath: /sys/fs/bpf
- name: var-run-cilium
mountPath: /var/run/cilium
# Required to mount cgroup filesystem from the host to cilium agent pod
- name: cilium-cgroup
mountPath: /run/cilium/cgroupv2
mountPropagation: HostToContainer
containers:
- name: cilium-agent
image: ${cilium_agent_image}
command:
- cilium-agent
args:
- --config-dir=/tmp/cilium/config-map
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: CILIUM_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: KUBERNETES_SERVICE_HOST
valueFrom:
configMapKeyRef:
name: in-cluster
key: apiserver-host
- name: KUBERNETES_SERVICE_PORT
valueFrom:
configMapKeyRef:
name: in-cluster
key: apiserver-port
ports:
# Not yet used, prefer exec's
- name: health
protocol: TCP
containerPort: 9876
lifecycle:
preStop:
exec:
command:
- /cni-uninstall.sh
securityContext:
privileged: true
livenessProbe:
exec:
command:
- cilium
- status
- --brief
periodSeconds: 30
initialDelaySeconds: 120
successThreshold: 1
failureThreshold: 10
timeoutSeconds: 5
readinessProbe:
exec:
command:
- cilium
- status
- --brief
periodSeconds: 20
initialDelaySeconds: 5
successThreshold: 1
failureThreshold: 3
timeoutSeconds: 5
volumeMounts:
# Load kernel modules
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: xtables-lock
mountPath: /run/xtables.lock
# Keep state between restarts
- name: var-run-cilium
mountPath: /var/run/cilium
- name: sys-fs-bpf
mountPath: /sys/fs/bpf
mountPropagation: Bidirectional
# Configuration
- name: config
mountPath: /tmp/cilium/config-map
readOnly: true
# Install config on host
- name: cni-conf-dir
mountPath: /host/etc/cni/net.d
terminationGracePeriodSeconds: 1
volumes:
# Load kernel modules
- name: lib-modules
hostPath:
path: /lib/modules
# Access iptables concurrently with other processes (e.g. kube-proxy)
- name: xtables-lock
hostPath:
type: FileOrCreate
path: /run/xtables.lock
# Keep state between restarts
- name: var-run-cilium
hostPath:
path: /var/run/cilium
type: DirectoryOrCreate
# Keep state between restarts for bpf maps
- name: sys-fs-bpf
hostPath:
path: /sys/fs/bpf
type: DirectoryOrCreate
# Mount host cgroup2 filesystem
- name: hostproc
hostPath:
path: /proc
type: Directory
- name: cilium-cgroup
hostPath:
path: /run/cilium/cgroupv2
type: DirectoryOrCreate
# Read configuration
- name: config
configMap:
name: cilium
# Install CNI plugin and config on host
- name: cni-bin-dir
hostPath:
type: DirectoryOrCreate
path: /opt/cni/bin
- name: cni-conf-dir
hostPath:
type: DirectoryOrCreate
path: /etc/cni/net.d
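
The %{ for } lines in the tolerations above are Terraform template directives rather than YAML; they expand once per entry of the daemonset_tolerations variable. As a sketch, assuming a hypothetical daemonset_tolerations = ["node-role.kubernetes.io/infra"], the tolerations block would render roughly as:

tolerations:
- key: node-role.kubernetes.io/controller
  operator: Exists
- key: node.kubernetes.io/not-ready
  operator: Exists
- key: node-role.kubernetes.io/infra
  operator: Exists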


@@ -0,0 +1,103 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: cilium-operator
namespace: kube-system
spec:
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
name: cilium-operator
template:
metadata:
labels:
name: cilium-operator
spec:
hostNetwork: true
priorityClassName: system-cluster-critical
serviceAccountName: cilium-operator
securityContext:
seccompProfile:
type: RuntimeDefault
tolerations:
- key: node-role.kubernetes.io/controller
operator: Exists
- key: node.kubernetes.io/not-ready
operator: Exists
containers:
- name: cilium-operator
image: ${cilium_operator_image}
command:
- cilium-operator-generic
args:
- --config-dir=/tmp/cilium/config-map
- --debug=$(CILIUM_DEBUG)
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: CILIUM_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: KUBERNETES_SERVICE_HOST
valueFrom:
configMapKeyRef:
name: in-cluster
key: apiserver-host
- name: KUBERNETES_SERVICE_PORT
valueFrom:
configMapKeyRef:
name: in-cluster
key: apiserver-port
- name: CILIUM_DEBUG
valueFrom:
configMapKeyRef:
name: cilium
key: debug
optional: true
ports:
- name: health
protocol: TCP
containerPort: 9234
livenessProbe:
httpGet:
scheme: HTTP
host: 127.0.0.1
port: 9234
path: /healthz
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 3
readinessProbe:
httpGet:
scheme: HTTP
host: 127.0.0.1
port: 9234
path: /healthz
periodSeconds: 15
timeoutSeconds: 3
failureThreshold: 5
volumeMounts:
- name: config
mountPath: /tmp/cilium/config-map
readOnly: true
topologySpreadConstraints:
- topologyKey: kubernetes.io/hostname
labelSelector:
matchLabels:
name: cilium-operator
maxSkew: 1
whenUnsatisfiable: DoNotSchedule
volumes:
# Read configuration
- name: config
configMap:
name: cilium


@@ -0,0 +1,13 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: cilium-operator
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: cilium-agent
namespace: kube-system


@@ -14,6 +14,12 @@ rules:
verbs:
- list
- watch
- apiGroups: ["discovery.k8s.io"]
resources:
- endpointslices
verbs:
- list
- watch
- apiGroups: [""]
resources:
- nodes


@@ -7,7 +7,9 @@ data:
Corefile: |
.:53 {
errors
health
health {
lameduck 5s
}
ready
log . {
class error


@@ -6,7 +6,6 @@ metadata:
labels:
k8s-app: coredns
kubernetes.io/name: "CoreDNS"
kubernetes.io/cluster-service: "true"
spec:
replicas: ${control_plane_replicas}
strategy:
@@ -22,10 +21,15 @@ spec:
labels:
tier: control-plane
k8s-app: coredns
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
preference:
matchExpressions:
- key: node.kubernetes.io/controller
operator: Exists
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
@@ -42,9 +46,12 @@ spec:
- coredns
topologyKey: kubernetes.io/hostname
priorityClassName: system-cluster-critical
securityContext:
seccompProfile:
type: RuntimeDefault
serviceAccountName: coredns
tolerations:
- key: node-role.kubernetes.io/master
- key: node-role.kubernetes.io/controller
effect: NoSchedule
containers:
- name: coredns


@@ -1,5 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
name: coredns
namespace: kube-system


@@ -8,7 +8,6 @@ metadata:
prometheus.io/port: "9153"
labels:
k8s-app: coredns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
spec:
selector:


@@ -32,6 +32,6 @@ data:
"Network": "${pod_cidr}",
"Backend": {
"Type": "vxlan",
"Port": 4789
"Port": 8472
}
}
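
For reference, a sketch of the complete rendered flannel backend config, assuming a hypothetical pod_cidr of 10.2.0.0/16 and assuming the enclosing ConfigMap key is flannel's usual net-conf.json (both are illustrative only):

net-conf.json: |
  {
    "Network": "10.2.0.0/16",
    "Backend": {
      "Type": "vxlan",
      "Port": 8472
    }
  }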


@@ -17,17 +17,37 @@ spec:
metadata:
labels:
k8s-app: flannel
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
priorityClassName: system-node-critical
serviceAccountName: flannel
securityContext:
seccompProfile:
type: RuntimeDefault
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
- key: node-role.kubernetes.io/controller
operator: Exists
- key: node.kubernetes.io/not-ready
operator: Exists
%{~ for key in daemonset_tolerations ~}
- key: ${key}
operator: Exists
%{~ endfor ~}
initContainers:
- name: install-cni
image: ${flannel_cni_image}
command: ["/install-cni.sh"]
env:
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: flannel-config
key: cni-conf.json
volumeMounts:
- name: cni-bin-dir
mountPath: /host/opt/cni/bin/
- name: cni-conf-dir
mountPath: /host/etc/cni/net.d
containers:
- name: flannel
image: ${flannel_image}
@@ -55,20 +75,8 @@ spec:
mountPath: /etc/kube-flannel/
- name: run-flannel
mountPath: /run/flannel
- name: install-cni
image: ${flannel_cni_image}
command: ["/install-cni.sh"]
env:
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: flannel-config
key: cni-conf.json
volumeMounts:
- name: cni-bin-dir
mountPath: /host/opt/cni/bin/
- name: cni-conf-dir
mountPath: /host/etc/cni/net.d
- name: xtables-lock
mountPath: /run/xtables.lock
volumes:
- name: flannel-config
configMap:
@@ -82,4 +90,10 @@ spec:
path: /opt/cni/bin
- name: cni-conf-dir
hostPath:
path: /etc/kubernetes/cni/net.d
type: DirectoryOrCreate
path: /etc/cni/net.d
# Access iptables concurrently
- name: xtables-lock
hostPath:
type: FileOrCreate
path: /run/xtables.lock


@@ -20,32 +20,42 @@ spec:
labels:
tier: node
k8s-app: kube-proxy
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
priorityClassName: system-node-critical
securityContext:
seccompProfile:
type: RuntimeDefault
serviceAccountName: kube-proxy
tolerations:
- effect: NoSchedule
- key: node-role.kubernetes.io/controller
operator: Exists
- effect: NoExecute
- key: node.kubernetes.io/not-ready
operator: Exists
%{~ for key in daemonset_tolerations ~}
- key: ${key}
operator: Exists
%{~ endfor ~}
containers:
- name: kube-proxy
image: ${hyperkube_image}
image: ${kube_proxy_image}
command:
- ./hyperkube
- proxy
- kube-proxy
- --cluster-cidr=${pod_cidr}
- --hostname-override=$(NODE_NAME)
- --kubeconfig=/etc/kubernetes/kubeconfig
- --proxy-mode=iptables
- --metrics-bind-address=0.0.0.0
- --proxy-mode=ipvs
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
ports:
- name: metrics
containerPort: 10249
- name: health
containerPort: 10256
livenessProbe:
httpGet:
path: /healthz
@@ -61,9 +71,14 @@ spec:
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: ssl-certs-host
- name: etc-ssl
mountPath: /etc/ssl/certs
readOnly: true
- name: etc-pki
mountPath: /etc/pki
readOnly: true
- name: xtables-lock
mountPath: /run/xtables.lock
volumes:
- name: kubeconfig
configMap:
@@ -71,6 +86,14 @@ spec:
- name: lib-modules
hostPath:
path: /lib/modules
- name: ssl-certs-host
- name: etc-ssl
hostPath:
path: ${trusted_certs_dir}
path: /etc/ssl/certs
- name: etc-pki
hostPath:
path: /etc/pki
# Access iptables concurrently
- name: xtables-lock
hostPath:
type: FileOrCreate
path: /run/xtables.lock


@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-router
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kube-router
subjects:
- kind: ServiceAccount
name: kube-router
namespace: kube-system


@@ -1,33 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: kube-router
rules:
- apiGroups:
- ""
resources:
- namespaces
- pods
- services
- nodes
- endpoints
verbs:
- list
- get
- watch
- apiGroups:
- "networking.k8s.io"
resources:
- networkpolicies
verbs:
- list
- get
- watch
- apiGroups:
- extensions
resources:
- networkpolicies
verbs:
- get
- list
- watch


@@ -1,30 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-router-config
namespace: kube-system
data:
cni-conf.json: |
{
"name": "pod-network",
"cniVersion": "0.3.1",
"plugins":[
{
"name": "kube-router",
"type": "bridge",
"bridge": "kube-bridge",
"isDefaultGateway": true,
"mtu": ${network_mtu},
"ipam": {
"type": "host-local"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {
"portMappings": true
}
}
]
}


@@ -1,90 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-router
namespace: kube-system
labels:
k8s-app: kube-router
spec:
selector:
matchLabels:
k8s-app: kube-router
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: kube-router
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
priorityClassName: system-node-critical
serviceAccountName: kube-router
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
containers:
- name: kube-router
image: ${kube_router_image}
args:
- --kubeconfig=/etc/kubernetes/kubeconfig
- --run-router=true
- --run-firewall=true
- --run-service-proxy=false
- --v=5
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: KUBE_ROUTER_CNI_CONF_FILE
value: /etc/cni/net.d/10-kuberouter.conflist
securityContext:
privileged: true
volumeMounts:
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: cni-conf-dir
mountPath: /etc/cni/net.d
- name: kubeconfig
mountPath: /etc/kubernetes
readOnly: true
- name: install-cni
image: ${flannel_cni_image}
command: ["/install-cni.sh"]
env:
- name: CNI_OLD_NAME
value: 10-flannel.conflist
- name: CNI_CONF_NAME
value: 10-kuberouter.conflist
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: kube-router-config
key: cni-conf.json
volumeMounts:
- name: cni-bin-dir
mountPath: /host/opt/cni/bin
- name: cni-conf-dir
mountPath: /host/etc/cni/net.d
volumes:
# Used by kube-router
- name: lib-modules
hostPath:
path: /lib/modules
- name: kubeconfig
configMap:
name: kubeconfig-in-cluster
# Used by install-cni
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-conf-dir
hostPath:
path: /etc/kubernetes/cni/net.d


@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-router
namespace: kube-system


@@ -1,18 +1,18 @@
apiVersion: v1
kind: Config
clusters:
- name: ${name}-cluster
- name: ${name}
cluster:
server: ${server}
certificate-authority-data: ${ca_cert}
users:
- name: ${name}-user
- name: ${name}
user:
client-certificate-data: ${kubelet_cert}
client-key-data: ${kubelet_key}
current-context: ${name}-context
current-context: ${name}
contexts:
- name: ${name}-context
- name: ${name}
context:
cluster: ${name}-cluster
user: ${name}-user
cluster: ${name}
user: ${name}
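
A sketch of how this kubeconfig template might render for a hypothetical cluster named "example" served at https://node1.example.com:6443 (names and endpoint are illustrative; certificate data elided):

apiVersion: v1
kind: Config
clusters:
- name: example
  cluster:
    server: https://node1.example.com:6443
    certificate-authority-data: <base64 CA certificate>
users:
- name: example
  user:
    client-certificate-data: <base64 client certificate>
    client-key-data: <base64 client key>
current-context: example
contexts:
- name: example
  context:
    cluster: example
    user: example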


@@ -8,8 +8,7 @@ clusters:
users:
- name: kubelet
user:
client-certificate-data: ${kubelet_cert}
client-key-data: ${kubelet_key}
token: ${token_id}.${token_secret}
contexts:
- context:
cluster: local


@@ -0,0 +1,13 @@
# Bind system:bootstrappers to ClusterRole for node bootstrap
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: bootstrap-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:bootstrappers


@@ -0,0 +1,13 @@
# Approve new CSRs from "system:bootstrappers" subjects
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: bootstrap-approve-new
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:bootstrappers


@@ -0,0 +1,13 @@
# Approve renewal CSRs from "system:nodes" subjects
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: bootstrap-approve-renew
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:nodes


@@ -0,0 +1,12 @@
apiVersion: v1
kind: Secret
type: bootstrap.kubernetes.io/token
metadata:
# Name MUST be of form "bootstrap-token-<token_id>"
name: bootstrap-token-${token_id}
namespace: kube-system
stringData:
description: "Typhoon generated bootstrap token"
token-id: ${token_id}
token-secret: ${token_secret}
usage-bootstrap-authentication: "true"
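
A sketch with hypothetical token values (token_id "07401b" and token_secret "f395accd246ae52d" are illustrative only). The same two values are joined as "07401b.f395accd246ae52d" in the token field of the bootstrap kubeconfig shown earlier:

apiVersion: v1
kind: Secret
type: bootstrap.kubernetes.io/token
metadata:
  name: bootstrap-token-07401b
  namespace: kube-system
stringData:
  description: "Typhoon generated bootstrap token"
  token-id: 07401b
  token-secret: f395accd246ae52d
  usage-bootstrap-authentication: "true"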


@@ -1,7 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"


@@ -0,0 +1,10 @@
# in-cluster ConfigMap is for control plane components that must reach
# kube-apiserver before service IPs are available (e.g. 10.3.0.1)
apiVersion: v1
kind: ConfigMap
metadata:
name: in-cluster
namespace: kube-system
data:
apiserver-host: ${apiserver_host}
apiserver-port: "${apiserver_port}"
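
As a sketch, assuming the API server is reachable at a hypothetical node1.example.com on port 6443 (values illustrative only), the rendered data would be:

data:
  apiserver-host: node1.example.com
  apiserver-port: "6443"

The Cilium agent and operator manifests above consume these keys via configMapKeyRef env vars to set KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT.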


@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-apiserver
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kube-apiserver
namespace: kube-system


@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: kube-apiserver


@@ -1,18 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: kube-apiserver
namespace: kube-system
type: Opaque
data:
apiserver.key: ${apiserver_key}
apiserver.crt: ${apiserver_cert}
service-account.pub: ${serviceaccount_pub}
ca.crt: ${ca_cert}
etcd-client-ca.crt: ${etcd_ca_cert}
etcd-client.crt: ${etcd_client_cert}
etcd-client.key: ${etcd_client_key}
aggregation-ca.crt: ${aggregation_ca_cert}
aggregation-client.crt: ${aggregation_client_cert}
aggregation-client.key: ${aggregation_client_key}


@@ -1,82 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-apiserver
namespace: kube-system
labels:
tier: control-plane
k8s-app: kube-apiserver
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: kube-apiserver
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-apiserver
annotations:
checkpointer.alpha.coreos.com/checkpoint: "true"
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
nodeSelector:
node-role.kubernetes.io/master: ""
priorityClassName: system-cluster-critical
serviceAccountName: kube-apiserver
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: kube-apiserver
image: ${hyperkube_image}
command:
- /hyperkube
- apiserver
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --anonymous-auth=false
- --authorization-mode=RBAC
- --bind-address=0.0.0.0
- --client-ca-file=/etc/kubernetes/secrets/ca.crt
- --cloud-provider=${cloud_provider}
- --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,Priority
- --etcd-cafile=/etc/kubernetes/secrets/etcd-client-ca.crt
- --etcd-certfile=/etc/kubernetes/secrets/etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/secrets/etcd-client.key
- --etcd-servers=${etcd_servers}
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname${aggregation_flags}
- --secure-port=${apiserver_port}
- --service-account-key-file=/etc/kubernetes/secrets/service-account.pub
- --service-cluster-ip-range=${service_cidr}
- --storage-backend=etcd3
- --tls-cert-file=/etc/kubernetes/secrets/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/secrets/apiserver.key
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/secrets
readOnly: true
- name: ssl-certs-host
mountPath: /etc/ssl/certs
readOnly: true
volumes:
- name: secrets
secret:
secretName: kube-apiserver
- name: ssl-certs-host
hostPath:
path: ${trusted_certs_dir}


@@ -1,11 +0,0 @@
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: kube-controller-manager
namespace: kube-system
spec:
minAvailable: 1
selector:
matchLabels:
tier: control-plane
k8s-app: kube-controller-manager


@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-controller-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-controller-manager
subjects:
- kind: ServiceAccount
name: kube-controller-manager
namespace: kube-system


@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: kube-controller-manager


@@ -1,11 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: kube-controller-manager
namespace: kube-system
type: Opaque
data:
service-account.key: ${serviceaccount_key}
ca.crt: ${ca_cert}
ca.key: ${ca_key}


@@ -1,96 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-controller-manager
namespace: kube-system
labels:
tier: control-plane
k8s-app: kube-controller-manager
spec:
replicas: ${control_plane_replicas}
selector:
matchLabels:
tier: control-plane
k8s-app: kube-controller-manager
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-controller-manager
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: tier
operator: In
values:
- control-plane
- key: k8s-app
operator: In
values:
- kube-controller-manager
topologyKey: kubernetes.io/hostname
nodeSelector:
node-role.kubernetes.io/master: ""
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
serviceAccountName: kube-controller-manager
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: kube-controller-manager
image: ${hyperkube_image}
command:
- ./hyperkube
- controller-manager
- --use-service-account-credentials
- --allocate-node-cidrs=true
- --cloud-provider=${cloud_provider}
- --cluster-cidr=${pod_cidr}
- --service-cluster-ip-range=${service_cidr}
- --cluster-signing-cert-file=/etc/kubernetes/secrets/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/secrets/ca.key
- --configure-cloud-routes=false
- --leader-elect=true
- --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins
- --pod-eviction-timeout=1m
- --root-ca-file=/etc/kubernetes/secrets/ca.crt
- --service-account-private-key-file=/etc/kubernetes/secrets/service-account.key
livenessProbe:
httpGet:
scheme: HTTPS
path: /healthz
port: 10257
initialDelaySeconds: 15
timeoutSeconds: 15
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/secrets
readOnly: true
- name: volumeplugins
mountPath: /var/lib/kubelet/volumeplugins
readOnly: true
- name: ssl-host
mountPath: /etc/ssl/certs
readOnly: true
volumes:
- name: secrets
secret:
secretName: kube-controller-manager
- name: ssl-host
hostPath:
path: ${trusted_certs_dir}
- name: volumeplugins
hostPath:
path: /var/lib/kubelet/volumeplugins
dnsPolicy: Default # Don't use cluster DNS.


@@ -1,11 +0,0 @@
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: kube-scheduler
namespace: kube-system
spec:
minAvailable: 1
selector:
matchLabels:
tier: control-plane
k8s-app: kube-scheduler


@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-scheduler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-scheduler
subjects:
- kind: ServiceAccount
name: kube-scheduler
namespace: kube-system


@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: kube-scheduler


@@ -1,13 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: volume-scheduler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:volume-scheduler
subjects:
- kind: ServiceAccount
name: kube-scheduler
namespace: kube-system


@@ -1,63 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-scheduler
namespace: kube-system
labels:
tier: control-plane
k8s-app: kube-scheduler
spec:
replicas: ${control_plane_replicas}
selector:
matchLabels:
tier: control-plane
k8s-app: kube-scheduler
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-scheduler
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: tier
operator: In
values:
- control-plane
- key: k8s-app
operator: In
values:
- kube-scheduler
topologyKey: kubernetes.io/hostname
nodeSelector:
node-role.kubernetes.io/master: ""
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
serviceAccountName: kube-scheduler
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: kube-scheduler
image: ${hyperkube_image}
command:
- ./hyperkube
- scheduler
- --leader-elect=true
livenessProbe:
httpGet:
scheme: HTTPS
path: /healthz
port: 10259
initialDelaySeconds: 15
timeoutSeconds: 15


@@ -7,6 +7,6 @@ roleRef:
kind: ClusterRole
name: kubelet-delete
subjects:
- kind: Group
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:nodes
apiGroup: rbac.authorization.k8s.io


@@ -8,3 +8,16 @@ rules:
- nodes
verbs:
- delete
- apiGroups: ["apps"]
resources:
- deployments
- daemonsets
- statefulsets
verbs:
- get
- list
- apiGroups: [""]
resources:
- pods/eviction
verbs:
- create


@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system-nodes
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node
subjects:
- kind: Group
name: system:nodes
apiGroup: rbac.authorization.k8s.io


@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: pod-checkpointer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: pod-checkpointer
subjects:
- kind: ServiceAccount
name: pod-checkpointer
namespace: kube-system


@@ -1,11 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pod-checkpointer
rules:
- apiGroups: [""]
resources:
- nodes
- nodes/proxy
verbs:
- get


@@ -1,13 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pod-checkpointer
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pod-checkpointer
subjects:
- kind: ServiceAccount
name: pod-checkpointer
namespace: kube-system


@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-checkpointer
namespace: kube-system
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods"]
verbs: ["get", "watch", "list"]
- apiGroups: [""] # "" indicates the core API group
resources: ["secrets", "configmaps"]
verbs: ["get"]


@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: pod-checkpointer


@@ -1,72 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: pod-checkpointer
namespace: kube-system
labels:
tier: control-plane
k8s-app: pod-checkpointer
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: pod-checkpointer
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
tier: control-plane
k8s-app: pod-checkpointer
annotations:
checkpointer.alpha.coreos.com/checkpoint: "true"
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
nodeSelector:
node-role.kubernetes.io/master: ""
priorityClassName: system-node-critical
serviceAccountName: pod-checkpointer
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: pod-checkpointer
image: ${pod_checkpointer_image}
command:
- /checkpoint
- --lock-file=/var/run/lock/pod-checkpointer.lock
- --kubeconfig=/etc/checkpointer/kubeconfig
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: kubeconfig
mountPath: /etc/checkpointer
- name: etc-kubernetes
mountPath: /etc/kubernetes
- name: var-run
mountPath: /var/run
volumes:
- name: kubeconfig
configMap:
name: kubeconfig-in-cluster
- name: etc-kubernetes
hostPath:
path: /etc/kubernetes
- name: var-run
hostPath:
path: /var/run


@@ -0,0 +1,73 @@
apiVersion: v1
kind: Pod
metadata:
name: kube-apiserver
namespace: kube-system
labels:
k8s-app: kube-apiserver
tier: control-plane
spec:
hostNetwork: true
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
seccompProfile:
type: RuntimeDefault
containers:
- name: kube-apiserver
image: ${kube_apiserver_image}
command:
- kube-apiserver
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --anonymous-auth=false
- --authorization-mode=Node,RBAC
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --enable-admission-plugins=NodeRestriction
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/etc/kubernetes/pki/etcd-client-ca.crt
- --etcd-certfile=/etc/kubernetes/pki/etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/etcd-client.key
- --etcd-servers=${etcd_servers}
- --feature-gates=MutatingAdmissionPolicy=true
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname${aggregation_flags}
- --runtime-config=admissionregistration.k8s.io/v1beta1=true,admissionregistration.k8s.io/v1alpha1=true
- --secure-port=6443
- --service-account-issuer=${service_account_issuer}
- --service-account-jwks-uri=${service_account_issuer}/openid/v1/jwks
- --service-account-key-file=/etc/kubernetes/pki/service-account.pub
- --service-account-signing-key-file=/etc/kubernetes/pki/service-account.key
- --service-cluster-ip-range=${service_cidr}
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
resources:
requests:
cpu: 150m
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/pki
readOnly: true
- name: etc-ssl
mountPath: /etc/ssl/certs
readOnly: true
- name: etc-pki
mountPath: /etc/pki
readOnly: true
volumes:
- name: secrets
hostPath:
path: /etc/kubernetes/pki
- name: etc-ssl
hostPath:
path: /etc/ssl/certs
- name: etc-pki
hostPath:
path: /etc/pki
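
The --feature-gates=MutatingAdmissionPolicy=true and --runtime-config flags above enable the MutatingAdmissionPolicy API group versions. As a rough sketch only, based on the upstream Kubernetes documentation (field names can differ between the alpha and beta APIs; the policy name and label are hypothetical), a minimal policy that labels newly created Pods might look like:

apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingAdmissionPolicy
metadata:
  name: add-owner-label.example.com
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["pods"]
  failurePolicy: Fail
  reinvocationPolicy: Never
  mutations:
  - patchType: ApplyConfiguration
    applyConfiguration:
      expression: >
        Object{
          metadata: Object.metadata{
            labels: {"owner": "platform-team"}
          }
        }

A corresponding MutatingAdmissionPolicyBinding is also required before such a policy takes effect.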


@@ -0,0 +1,75 @@
apiVersion: v1
kind: Pod
metadata:
name: kube-controller-manager
namespace: kube-system
labels:
k8s-app: kube-controller-manager
tier: control-plane
spec:
hostNetwork: true
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
seccompProfile:
type: RuntimeDefault
containers:
- name: kube-controller-manager
image: ${kube_controller_manager_image}
command:
- kube-controller-manager
- --authentication-kubeconfig=/etc/kubernetes/pki/controller-manager.conf
- --authorization-kubeconfig=/etc/kubernetes/pki/controller-manager.conf
- --allocate-node-cidrs=true
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cluster-cidr=${pod_cidr}
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --cluster-signing-duration=72h
- --controllers=*,tokencleaner
- --configure-cloud-routes=false
- --kubeconfig=/etc/kubernetes/pki/controller-manager.conf
- --leader-elect=true
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/service-account.key
- --service-cluster-ip-range=${service_cidr}
- --use-service-account-credentials=true
livenessProbe:
httpGet:
scheme: HTTPS
host: 127.0.0.1
path: /healthz
port: 10257
initialDelaySeconds: 25
timeoutSeconds: 15
failureThreshold: 8
resources:
requests:
cpu: 150m
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/pki
readOnly: true
- name: etc-ssl
mountPath: /etc/ssl/certs
readOnly: true
- name: etc-pki
mountPath: /etc/pki
readOnly: true
- name: flex
mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
volumes:
- name: secrets
hostPath:
path: /etc/kubernetes/pki
- name: etc-ssl
hostPath:
path: /etc/ssl/certs
- name: etc-pki
hostPath:
path: /etc/pki
- name: flex
hostPath:
type: DirectoryOrCreate
path: /var/lib/kubelet/volumeplugins


@@ -0,0 +1,44 @@
apiVersion: v1
kind: Pod
metadata:
name: kube-scheduler
namespace: kube-system
labels:
k8s-app: kube-scheduler
tier: control-plane
spec:
hostNetwork: true
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
seccompProfile:
type: RuntimeDefault
containers:
- name: kube-scheduler
image: ${kube_scheduler_image}
command:
- kube-scheduler
- --authentication-kubeconfig=/etc/kubernetes/pki/scheduler.conf
- --authorization-kubeconfig=/etc/kubernetes/pki/scheduler.conf
- --kubeconfig=/etc/kubernetes/pki/scheduler.conf
- --leader-elect=true
livenessProbe:
httpGet:
scheme: HTTPS
host: 127.0.0.1
path: /healthz
port: 10259
initialDelaySeconds: 15
timeoutSeconds: 15
resources:
requests:
cpu: 100m
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/pki/scheduler.conf
readOnly: true
volumes:
- name: secrets
hostPath:
path: /etc/kubernetes/pki/scheduler.conf


@@ -1,5 +1,4 @@
cluster_name = "example"
api_servers = ["node1.example.com"]
etcd_servers = ["node1.example.com"]
asset_dir = "/home/core/mycluster"
networking = "flannel"


@@ -1,27 +1,26 @@
# NOTE: Across this module, the following workaround is used:
# `"${var.some_var == "condition" ? join(" ", tls_private_key.aggregation-ca.*.private_key_pem) : ""}"`
# Due to https://github.com/hashicorp/hil/issues/50, both sides of conditions
# are evaluated, until one of them is discarded. When a `count` is used resources
# can be referenced as lists with the `.*` notation, and arrays are allowed to be
# empty. The `join()` interpolation function is then used to cast them back to
# a string. Since `count` can only be 0 or 1, the returned value is either empty
# (and discarded anyways) or the desired value.
locals {
# Kubernetes Aggregation TLS assets map
aggregation_tls = var.enable_aggregation ? {
"tls/k8s/aggregation-ca.crt" = tls_self_signed_cert.aggregation-ca[0].cert_pem,
"tls/k8s/aggregation-client.crt" = tls_locally_signed_cert.aggregation-client[0].cert_pem,
"tls/k8s/aggregation-client.key" = tls_private_key.aggregation-client[0].private_key_pem,
} : {}
}
# Kubernetes Aggregation CA (i.e. front-proxy-ca)
# Files: tls/{aggregation-ca.crt,aggregation-ca.key}
resource "tls_private_key" "aggregation-ca" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
count = var.enable_aggregation ? 1 : 0
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_self_signed_cert" "aggregation-ca" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
count = var.enable_aggregation ? 1 : 0
key_algorithm = "${tls_private_key.aggregation-ca.algorithm}"
private_key_pem = "${tls_private_key.aggregation-ca.private_key_pem}"
private_key_pem = tls_private_key.aggregation-ca[0].private_key_pem
subject {
common_name = "kubernetes-front-proxy-ca"
@@ -37,35 +36,20 @@ resource "tls_self_signed_cert" "aggregation-ca" {
]
}
resource "local_file" "aggregation-ca-key" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
content = "${tls_private_key.aggregation-ca.private_key_pem}"
filename = "${var.asset_dir}/tls/aggregation-ca.key"
}
resource "local_file" "aggregation-ca-crt" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
content = "${tls_self_signed_cert.aggregation-ca.cert_pem}"
filename = "${var.asset_dir}/tls/aggregation-ca.crt"
}
# Kubernetes apiserver (i.e. front-proxy-client)
# Files: tls/{aggregation-client.crt,aggregation-client.key}
resource "tls_private_key" "aggregation-client" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
count = var.enable_aggregation ? 1 : 0
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "aggregation-client" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
count = var.enable_aggregation ? 1 : 0
key_algorithm = "${tls_private_key.aggregation-client.algorithm}"
private_key_pem = "${tls_private_key.aggregation-client.private_key_pem}"
private_key_pem = tls_private_key.aggregation-client[0].private_key_pem
subject {
common_name = "kube-apiserver"
@@ -73,13 +57,12 @@ resource "tls_cert_request" "aggregation-client" {
}
resource "tls_locally_signed_cert" "aggregation-client" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
count = var.enable_aggregation ? 1 : 0
cert_request_pem = "${tls_cert_request.aggregation-client.cert_request_pem}"
cert_request_pem = tls_cert_request.aggregation-client[0].cert_request_pem
ca_key_algorithm = "${tls_self_signed_cert.aggregation-ca.key_algorithm}"
ca_private_key_pem = "${tls_private_key.aggregation-ca.private_key_pem}"
ca_cert_pem = "${tls_self_signed_cert.aggregation-ca.cert_pem}"
ca_private_key_pem = tls_private_key.aggregation-ca[0].private_key_pem
ca_cert_pem = tls_self_signed_cert.aggregation-ca[0].cert_pem
validity_period_hours = 8760
@@ -90,16 +73,3 @@ resource "tls_locally_signed_cert" "aggregation-client" {
]
}
resource "local_file" "aggregation-client-key" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
content = "${tls_private_key.aggregation-client.private_key_pem}"
filename = "${var.asset_dir}/tls/aggregation-client.key"
}
resource "local_file" "aggregation-client-crt" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
content = "${tls_locally_signed_cert.aggregation-client.cert_pem}"
filename = "${var.asset_dir}/tls/aggregation-client.crt"
}


@@ -1,70 +1,19 @@
# etcd-ca.crt
resource "local_file" "etcd_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
filename = "${var.asset_dir}/tls/etcd-ca.crt"
locals {
# etcd TLS assets map
etcd_tls = {
"tls/etcd/etcd-client-ca.crt" = tls_self_signed_cert.etcd-ca.cert_pem,
"tls/etcd/etcd-client.crt" = tls_locally_signed_cert.client.cert_pem,
"tls/etcd/etcd-client.key" = tls_private_key.client.private_key_pem
"tls/etcd/server-ca.crt" = tls_self_signed_cert.etcd-ca.cert_pem,
"tls/etcd/server.crt" = tls_locally_signed_cert.server.cert_pem
"tls/etcd/server.key" = tls_private_key.server.private_key_pem
"tls/etcd/peer-ca.crt" = tls_self_signed_cert.etcd-ca.cert_pem,
"tls/etcd/peer.crt" = tls_locally_signed_cert.peer.cert_pem
"tls/etcd/peer.key" = tls_private_key.peer.private_key_pem
}
}
# etcd-ca.key
resource "local_file" "etcd_ca_key" {
content = "${tls_private_key.etcd-ca.private_key_pem}"
filename = "${var.asset_dir}/tls/etcd-ca.key"
}
# etcd-client-ca.crt
resource "local_file" "etcd_client_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
filename = "${var.asset_dir}/tls/etcd-client-ca.crt"
}
# etcd-client.crt
resource "local_file" "etcd_client_crt" {
content = "${tls_locally_signed_cert.client.cert_pem}"
filename = "${var.asset_dir}/tls/etcd-client.crt"
}
# etcd-client.key
resource "local_file" "etcd_client_key" {
content = "${tls_private_key.client.private_key_pem}"
filename = "${var.asset_dir}/tls/etcd-client.key"
}
# server-ca.crt
resource "local_file" "etcd_server_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
filename = "${var.asset_dir}/tls/etcd/server-ca.crt"
}
# server.crt
resource "local_file" "etcd_server_crt" {
content = "${tls_locally_signed_cert.server.cert_pem}"
filename = "${var.asset_dir}/tls/etcd/server.crt"
}
# server.key
resource "local_file" "etcd_server_key" {
content = "${tls_private_key.server.private_key_pem}"
filename = "${var.asset_dir}/tls/etcd/server.key"
}
# peer-ca.crt
resource "local_file" "etcd_peer_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
filename = "${var.asset_dir}/tls/etcd/peer-ca.crt"
}
# peer.crt
resource "local_file" "etcd_peer_crt" {
content = "${tls_locally_signed_cert.peer.cert_pem}"
filename = "${var.asset_dir}/tls/etcd/peer.crt"
}
# peer.key
resource "local_file" "etcd_peer_key" {
content = "${tls_private_key.peer.private_key_pem}"
filename = "${var.asset_dir}/tls/etcd/peer.key"
}
# certificates and keys
# etcd CA
resource "tls_private_key" "etcd-ca" {
algorithm = "RSA"
@@ -72,8 +21,7 @@ resource "tls_private_key" "etcd-ca" {
}
resource "tls_self_signed_cert" "etcd-ca" {
key_algorithm = "${tls_private_key.etcd-ca.algorithm}"
private_key_pem = "${tls_private_key.etcd-ca.private_key_pem}"
private_key_pem = tls_private_key.etcd-ca.private_key_pem
subject {
common_name = "etcd-ca"
@@ -90,16 +38,15 @@ resource "tls_self_signed_cert" "etcd-ca" {
]
}
# client certs are used for client (apiserver, locksmith, etcd-operator)
# to etcd communication
# etcd Client (apiserver to etcd communication)
resource "tls_private_key" "client" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "client" {
key_algorithm = "${tls_private_key.client.algorithm}"
private_key_pem = "${tls_private_key.client.private_key_pem}"
private_key_pem = tls_private_key.client.private_key_pem
subject {
common_name = "etcd-client"
@@ -110,19 +57,14 @@ resource "tls_cert_request" "client" {
"127.0.0.1",
]
dns_names = ["${concat(
var.etcd_servers,
list(
"localhost",
))}"]
dns_names = concat(var.etcd_servers, ["localhost"])
}
resource "tls_locally_signed_cert" "client" {
cert_request_pem = "${tls_cert_request.client.cert_request_pem}"
cert_request_pem = tls_cert_request.client.cert_request_pem
ca_key_algorithm = "${join(" ", tls_self_signed_cert.etcd-ca.*.key_algorithm)}"
ca_private_key_pem = "${join(" ", tls_private_key.etcd-ca.*.private_key_pem)}"
ca_cert_pem = "${join(" ", tls_self_signed_cert.etcd-ca.*.cert_pem)}"
ca_private_key_pem = tls_private_key.etcd-ca.private_key_pem
ca_cert_pem = tls_self_signed_cert.etcd-ca.cert_pem
validity_period_hours = 8760
@@ -134,14 +76,15 @@ resource "tls_locally_signed_cert" "client" {
]
}
# etcd Server
resource "tls_private_key" "server" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "server" {
key_algorithm = "${tls_private_key.server.algorithm}"
private_key_pem = "${tls_private_key.server.private_key_pem}"
private_key_pem = tls_private_key.server.private_key_pem
subject {
common_name = "etcd-server"
@@ -152,19 +95,14 @@ resource "tls_cert_request" "server" {
"127.0.0.1",
]
dns_names = ["${concat(
var.etcd_servers,
list(
"localhost",
))}"]
dns_names = concat(var.etcd_servers, ["localhost"])
}
resource "tls_locally_signed_cert" "server" {
cert_request_pem = "${tls_cert_request.server.cert_request_pem}"
cert_request_pem = tls_cert_request.server.cert_request_pem
ca_key_algorithm = "${join(" ", tls_self_signed_cert.etcd-ca.*.key_algorithm)}"
ca_private_key_pem = "${join(" ", tls_private_key.etcd-ca.*.private_key_pem)}"
ca_cert_pem = "${join(" ", tls_self_signed_cert.etcd-ca.*.cert_pem)}"
ca_private_key_pem = tls_private_key.etcd-ca.private_key_pem
ca_cert_pem = tls_self_signed_cert.etcd-ca.cert_pem
validity_period_hours = 8760
@@ -176,29 +114,29 @@ resource "tls_locally_signed_cert" "server" {
]
}
# etcd Peer
resource "tls_private_key" "peer" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "peer" {
key_algorithm = "${tls_private_key.peer.algorithm}"
private_key_pem = "${tls_private_key.peer.private_key_pem}"
private_key_pem = tls_private_key.peer.private_key_pem
subject {
common_name = "etcd-peer"
organization = "etcd"
}
dns_names = ["${var.etcd_servers}"]
dns_names = var.etcd_servers
}
resource "tls_locally_signed_cert" "peer" {
cert_request_pem = "${tls_cert_request.peer.cert_request_pem}"
cert_request_pem = tls_cert_request.peer.cert_request_pem
ca_key_algorithm = "${join(" ", tls_self_signed_cert.etcd-ca.*.key_algorithm)}"
ca_private_key_pem = "${join(" ", tls_private_key.etcd-ca.*.private_key_pem)}"
ca_cert_pem = "${join(" ", tls_self_signed_cert.etcd-ca.*.cert_pem)}"
ca_private_key_pem = tls_private_key.etcd-ca.private_key_pem
ca_cert_pem = tls_self_signed_cert.etcd-ca.cert_pem
validity_period_hours = 8760
@@ -209,3 +147,4 @@ resource "tls_locally_signed_cert" "peer" {
"client_auth",
]
}


@@ -1,3 +1,15 @@
locals {
# Kubernetes TLS assets map
kubernetes_tls = {
"tls/k8s/ca.crt" = tls_self_signed_cert.kube-ca.cert_pem,
"tls/k8s/ca.key" = tls_private_key.kube-ca.private_key_pem,
"tls/k8s/apiserver.crt" = tls_locally_signed_cert.apiserver.cert_pem,
"tls/k8s/apiserver.key" = tls_private_key.apiserver.private_key_pem,
"tls/k8s/service-account.pub" = tls_private_key.service-account.public_key_pem
"tls/k8s/service-account.key" = tls_private_key.service-account.private_key_pem
}
}
# Kubernetes CA (tls/{ca.crt,ca.key})
resource "tls_private_key" "kube-ca" {
@@ -6,12 +18,11 @@ resource "tls_private_key" "kube-ca" {
}
resource "tls_self_signed_cert" "kube-ca" {
key_algorithm = "${tls_private_key.kube-ca.algorithm}"
private_key_pem = "${tls_private_key.kube-ca.private_key_pem}"
private_key_pem = tls_private_key.kube-ca.private_key_pem
subject {
common_name = "kubernetes-ca"
organization = "bootkube"
organization = "typhoon"
}
is_ca_certificate = true
@@ -24,16 +35,6 @@ resource "tls_self_signed_cert" "kube-ca" {
]
}
resource "local_file" "kube-ca-key" {
content = "${tls_private_key.kube-ca.private_key_pem}"
filename = "${var.asset_dir}/tls/ca.key"
}
resource "local_file" "kube-ca-crt" {
content = "${tls_self_signed_cert.kube-ca.cert_pem}"
filename = "${var.asset_dir}/tls/ca.crt"
}
# Kubernetes API Server (tls/{apiserver.key,apiserver.crt})
resource "tls_private_key" "apiserver" {
@@ -42,33 +43,31 @@ resource "tls_private_key" "apiserver" {
}
resource "tls_cert_request" "apiserver" {
key_algorithm = "${tls_private_key.apiserver.algorithm}"
private_key_pem = "${tls_private_key.apiserver.private_key_pem}"
private_key_pem = tls_private_key.apiserver.private_key_pem
subject {
common_name = "kube-apiserver"
organization = "system:masters"
}
dns_names = [
"${var.api_servers}",
dns_names = flatten([
var.api_servers,
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.${var.cluster_domain_suffix}",
]
])
ip_addresses = [
"${cidrhost(var.service_cidr, 1)}",
cidrhost(var.service_cidr, 1),
]
}
resource "tls_locally_signed_cert" "apiserver" {
cert_request_pem = "${tls_cert_request.apiserver.cert_request_pem}"
cert_request_pem = tls_cert_request.apiserver.cert_request_pem
ca_key_algorithm = "${tls_self_signed_cert.kube-ca.key_algorithm}"
ca_private_key_pem = "${tls_private_key.kube-ca.private_key_pem}"
ca_cert_pem = "${tls_self_signed_cert.kube-ca.cert_pem}"
ca_private_key_pem = tls_private_key.kube-ca.private_key_pem
ca_cert_pem = tls_self_signed_cert.kube-ca.cert_pem
validity_period_hours = 8760
@@ -80,14 +79,64 @@ resource "tls_locally_signed_cert" "apiserver" {
]
}
resource "local_file" "apiserver-key" {
content = "${tls_private_key.apiserver.private_key_pem}"
filename = "${var.asset_dir}/tls/apiserver.key"
# kube-controller-manager
resource "tls_private_key" "controller-manager" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "local_file" "apiserver-crt" {
content = "${tls_locally_signed_cert.apiserver.cert_pem}"
filename = "${var.asset_dir}/tls/apiserver.crt"
resource "tls_cert_request" "controller-manager" {
private_key_pem = tls_private_key.controller-manager.private_key_pem
subject {
common_name = "system:kube-controller-manager"
}
}
resource "tls_locally_signed_cert" "controller-manager" {
cert_request_pem = tls_cert_request.controller-manager.cert_request_pem
ca_private_key_pem = tls_private_key.kube-ca.private_key_pem
ca_cert_pem = tls_self_signed_cert.kube-ca.cert_pem
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"client_auth",
]
}
# kube-scheduler
resource "tls_private_key" "scheduler" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "scheduler" {
private_key_pem = tls_private_key.scheduler.private_key_pem
subject {
common_name = "system:kube-scheduler"
}
}
resource "tls_locally_signed_cert" "scheduler" {
cert_request_pem = tls_cert_request.scheduler.cert_request_pem
ca_private_key_pem = tls_private_key.kube-ca.private_key_pem
ca_cert_pem = tls_self_signed_cert.kube-ca.cert_pem
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"client_auth",
]
}
# Kubernetes Admin (tls/{admin.key,admin.crt})
@@ -98,8 +147,7 @@ resource "tls_private_key" "admin" {
}
resource "tls_cert_request" "admin" {
key_algorithm = "${tls_private_key.admin.algorithm}"
private_key_pem = "${tls_private_key.admin.private_key_pem}"
private_key_pem = tls_private_key.admin.private_key_pem
subject {
common_name = "kubernetes-admin"
@@ -108,11 +156,10 @@ resource "tls_cert_request" "admin" {
}
resource "tls_locally_signed_cert" "admin" {
cert_request_pem = "${tls_cert_request.admin.cert_request_pem}"
cert_request_pem = tls_cert_request.admin.cert_request_pem
ca_key_algorithm = "${tls_self_signed_cert.kube-ca.key_algorithm}"
ca_private_key_pem = "${tls_private_key.kube-ca.private_key_pem}"
ca_cert_pem = "${tls_self_signed_cert.kube-ca.cert_pem}"
ca_private_key_pem = tls_private_key.kube-ca.private_key_pem
ca_cert_pem = tls_self_signed_cert.kube-ca.cert_pem
validity_period_hours = 8760
@@ -123,16 +170,6 @@ resource "tls_locally_signed_cert" "admin" {
]
}
resource "local_file" "admin-key" {
content = "${tls_private_key.admin.private_key_pem}"
filename = "${var.asset_dir}/tls/admin.key"
}
resource "local_file" "admin-crt" {
content = "${tls_locally_signed_cert.admin.cert_pem}"
filename = "${var.asset_dir}/tls/admin.crt"
}
# Kubernetes Service Account (tls/{service-account.key,service-account.pub})
resource "tls_private_key" "service-account" {
@@ -140,56 +177,3 @@ resource "tls_private_key" "service-account" {
rsa_bits = "2048"
}
resource "local_file" "service-account-key" {
content = "${tls_private_key.service-account.private_key_pem}"
filename = "${var.asset_dir}/tls/service-account.key"
}
resource "local_file" "service-account-crt" {
content = "${tls_private_key.service-account.public_key_pem}"
filename = "${var.asset_dir}/tls/service-account.pub"
}
# Kubelet
resource "tls_private_key" "kubelet" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "kubelet" {
key_algorithm = "${tls_private_key.kubelet.algorithm}"
private_key_pem = "${tls_private_key.kubelet.private_key_pem}"
subject {
common_name = "kubelet"
organization = "system:nodes"
}
}
resource "tls_locally_signed_cert" "kubelet" {
cert_request_pem = "${tls_cert_request.kubelet.cert_request_pem}"
ca_key_algorithm = "${tls_self_signed_cert.kube-ca.key_algorithm}"
ca_private_key_pem = "${tls_private_key.kube-ca.private_key_pem}"
ca_cert_pem = "${tls_self_signed_cert.kube-ca.cert_pem}"
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"server_auth",
"client_auth",
]
}
resource "local_file" "kubelet-key" {
content = "${tls_private_key.kubelet.private_key_pem}"
filename = "${var.asset_dir}/tls/kubelet.key"
}
resource "local_file" "kubelet-crt" {
content = "${tls_locally_signed_cert.kubelet.cert_pem}"
filename = "${var.asset_dir}/tls/kubelet.crt"
}
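The hunks above drop the local_file copies written under asset_dir and add dedicated client certificates for kube-controller-manager and kube-scheduler. As a minimal sketch of how such a client certificate is typically consumed (the template path and argument names below are assumptions for illustration, not part of this change), a kubeconfig can be rendered from it:

locals {
  # Illustrative only: render a kubeconfig for the kube-scheduler client certificate.
  # resources/kubeconfig.yaml.tmpl is a hypothetical template, not a file in this diff.
  scheduler_kubeconfig = templatefile("${path.module}/resources/kubeconfig.yaml.tmpl", {
    name        = "kube-scheduler"
    server      = format("https://%s:%d", var.api_servers[0], var.external_apiserver_port)
    ca_cert     = base64encode(tls_self_signed_cert.kube-ca.cert_pem)
    client_cert = base64encode(tls_locally_signed_cert.scheduler.cert_pem)
    client_key  = base64encode(tls_private_key.scheduler.private_key_pem)
  })
}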

variables.tf

@@ -1,113 +1,140 @@
variable "cluster_name" {
type = string
description = "Cluster name"
type = "string"
}
variable "api_servers" {
type = list(string)
description = "List of URLs used to reach kube-apiserver"
type = "list"
}
variable "etcd_servers" {
type = list(string)
description = "List of URLs used to reach etcd servers."
type = "list"
}
variable "asset_dir" {
description = "Path to a directory where generated assets should be placed (contains secrets)"
type = "string"
}
variable "cloud_provider" {
description = "The provider for cloud services (empty string for no provider)"
type = "string"
default = ""
}
# optional
variable "networking" {
description = "Choice of networking provider (flannel or calico or kube-router)"
type = "string"
default = "flannel"
}
variable "network_mtu" {
description = "CNI interface MTU (only applies to calico and kube-router)"
type = "string"
default = "1500"
}
variable "network_encapsulation" {
description = "Network encapsulation mode either ipip or vxlan (only applies to calico)"
type = "string"
default = "ipip"
}
variable "network_ip_autodetection_method" {
description = "Method to autodetect the host IPv4 address (only applies to calico)"
type = "string"
default = "first-found"
type = string
description = "Choice of networking provider (flannel or cilium)"
default = "cilium"
validation {
condition = contains(["flannel", "cilium"], var.networking)
error_message = "networking can be flannel or cilium."
}
}
variable "pod_cidr" {
type = string
description = "CIDR IP range to assign Kubernetes pods"
type = "string"
default = "10.2.0.0/16"
default = "10.20.0.0/14"
}
variable "service_cidr" {
type = string
description = <<EOD
CIDR IP range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
EOD
type = "string"
default = "10.3.0.0/24"
}
variable "cluster_domain_suffix" {
description = "Queries for domains with the suffix will be answered by kube-dns"
type = "string"
default = "cluster.local"
default = "10.3.0.0/24"
}
variable "container_images" {
type = map(string)
description = "Container images to use"
type = "map"
default = {
calico = "quay.io/calico/node:v3.7.2"
calico_cni = "quay.io/calico/cni:v3.7.2"
flannel = "quay.io/coreos/flannel:v0.11.0-amd64"
flannel_cni = "quay.io/coreos/flannel-cni:v0.3.0"
kube_router = "cloudnativelabs/kube-router:v0.3.1"
hyperkube = "k8s.gcr.io/hyperkube:v1.14.3"
coredns = "k8s.gcr.io/coredns:1.5.0"
pod_checkpointer = "quay.io/coreos/pod-checkpointer:83e25e5968391b9eb342042c435d1b3eeddb2be1"
cilium_agent = "quay.io/cilium/cilium:v1.18.4"
cilium_operator = "quay.io/cilium/operator-generic:v1.18.4"
coredns = "registry.k8s.io/coredns/coredns:v1.13.1"
flannel = "docker.io/flannel/flannel:v0.27.0"
flannel_cni = "quay.io/poseidon/flannel-cni:v0.4.2"
kube_apiserver = "registry.k8s.io/kube-apiserver:v1.34.2"
kube_controller_manager = "registry.k8s.io/kube-controller-manager:v1.34.2"
kube_scheduler = "registry.k8s.io/kube-scheduler:v1.34.2"
kube_proxy = "registry.k8s.io/kube-proxy:v1.34.2"
}
}
variable "enable_reporting" {
type = "string"
description = "Enable usage or analytics reporting to upstream component owners (Tigera: Calico)"
default = "false"
}
variable "trusted_certs_dir" {
description = "Path to the directory on cluster nodes where trust TLS certs are kept"
type = "string"
default = "/usr/share/ca-certificates"
}
variable "enable_aggregation" {
description = "Enable the Kubernetes Aggregation Layer (defaults to false, recommended)"
type = "string"
default = "false"
type = bool
description = "Enable the Kubernetes Aggregation Layer (defaults to true)"
default = true
}
variable "daemonset_tolerations" {
type = list(string)
description = "List of additional taint keys kube-system DaemonSets should tolerate (e.g. ['custom-role', 'gpu-role'])"
default = []
}
# unofficial, temporary, may be removed without notice
variable "apiserver_port" {
description = "kube-apiserver port"
type = "string"
default = "6443"
variable "external_apiserver_port" {
type = number
description = "External kube-apiserver port (e.g. 6443 to match internal kube-apiserver port)"
default = 6443
}
variable "cluster_domain_suffix" {
type = string
description = "Queries for domains with the suffix will be answered by kube-dns"
default = "cluster.local"
}
variable "components" {
description = "Configure pre-installed cluster components"
type = object({
enable = optional(bool, true)
coredns = optional(
object({
enable = optional(bool, true)
}),
{
enable = true
}
)
kube_proxy = optional(
object({
enable = optional(bool, true)
}),
{
enable = true
}
)
# CNI providers are enabled for pre-install by default, but only the
# provider matching var.networking is actually installed.
flannel = optional(
object({
enable = optional(bool, true)
}),
{
enable = true
}
)
cilium = optional(
object({
enable = optional(bool, true)
}),
{
enable = true
}
)
})
default = {
enable = true
coredns = null
kube_proxy = null
flannel = null
cilium = null
}
# Set the variable value to the default value when the caller
# sets it to null.
nullable = false
}
variable "service_account_issuer" {
type = string
description = "kube-apiserver service account token issuer (used as an identifier in 'iss' claims)"
default = "https://kubernetes.default.svc.cluster.local"
}
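For orientation, a hedged sketch of calling the module with the reworked variables might look like the following; the source path, hostnames, and values are placeholders, and only a subset of the variables above is set (the rest fall back to their declared defaults):

module "bootstrap" {
  # Placeholder source path; point this at the module containing the variables above
  source = "./bootstrap"

  cluster_name = "example"
  api_servers  = ["kube.example.com"]
  etcd_servers = ["https://etcd0.example.com:2379"]

  # Cilium is now the default networking provider; flannel remains selectable
  networking = "cilium"

  # Pre-install CoreDNS and Cilium, but skip the kube-proxy and flannel manifests
  components = {
    kube_proxy = { enable = false }
    flannel    = { enable = false }
  }

  # Recorded in the 'iss' claim of issued service account tokens
  service_account_issuer = "https://kubernetes.default.svc.cluster.local"
}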

versions.tf Normal file

@@ -0,0 +1,9 @@
# Terraform version and plugin versions
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
random = "~> 3.1"
tls = "~> 4.0"
}
}
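For reference, the string shorthand used in required_providers above is Terraform's legacy form; it is equivalent to the explicit source/version syntax, which pins the same hashicorp registry namespace:

terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    random = {
      source  = "hashicorp/random"
      version = "~> 3.1"
    }
    tls = {
      source  = "hashicorp/tls"
      version = "~> 4.0"
    }
  }
}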