187 Commits

Author SHA1 Message Date
Dalton Hubble
082921d679 Update Kubernetes from v1.14.2 to v1.14.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.14.md#v1143
2019-05-31 01:05:00 -07:00
Dalton Hubble
efd1cfd9bf Update CoreDNS from v1.3.1 to v1.5.0
* Add `ready` plugin and change the readinessProbe to check
default port 8181 to ensure all plugins are ready
* `upstream [ADDRESS]` defines upstream resolvers for external
services. If no address is given, resolution is against CoreDNS
itself, which is the default. So `upstream` can be removed
2019-05-27 00:07:59 -07:00
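For illustration, a minimal sketch of the probe change described in the commit above (only the port comes from the commit message; other probe fields are illustrative, not the exact rendered manifest):

```yaml
# CoreDNS Deployment container excerpt: probe the `ready` plugin's
# default port 8181 so the pod only reports Ready once every plugin
# has signaled readiness.
readinessProbe:
  httpGet:
    path: /ready
    port: 8181
    scheme: HTTP
```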
Dalton Hubble
85571f6dae Update Kubernetes from v1.14.1 to v1.14.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.14.md#v1142
2019-05-17 13:00:30 +02:00
Dalton Hubble
eca7c49fe1 Update Calico from v3.7.0 to v3.7.2
* https://docs.projectcalico.org/v3.7/release-notes/
2019-05-17 12:26:02 +02:00
Dalton Hubble
42b9e782b2 Update kube-router from v0.3.0 to v0.3.1
* kube-router is experimental and not supported
* https://github.com/cloudnativelabs/kube-router/releases/tag/v0.3.1
2019-05-17 12:20:23 +02:00
Dalton Hubble
fc7a6fb20a Change flannel port from 8472 to 4789
* Change flannel port from the kernel default 8472 to the
IANA assigned VXLAN port 4789
* Requires a change to firewall rules or security groups
depending on the platform (**action required!**)
* Why now? Calico now offers its own VXLAN backend so
standardizing on the IANA port simplifies configuration
* https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan
2019-05-06 21:23:08 -07:00
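As a sketch, the VXLAN port is pinned in flannel's net-conf.json (the pod CIDR shown is illustrative; the `Port` field is the flannel VXLAN backend option documented in the link above):

```yaml
# kube-flannel ConfigMap excerpt: use the IANA-assigned VXLAN port
# 4789 instead of the Linux kernel default 8472.
net-conf.json: |
  {
    "Network": "10.2.0.0/16",
    "Backend": {
      "Type": "vxlan",
      "Port": 4789
    }
  }
```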
Dalton Hubble
b96d641f6d Update Calico from v3.6.1 to v3.7.0
* Accept a `network_encapsulation` variable to choose whether the
default IPPool should use ipip (default) or vxlan encapsulation
* Use `network_mtu` as the MTU for workload interfaces for ipip
or vxlan (although Calico can have IPPools with a mix, we're
picking ipip xor vxlan)
2019-05-05 20:41:53 -07:00
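A hedged sketch of the default IPPool rendered when `network_encapsulation = "vxlan"` is chosen (name and CIDR are illustrative; ipip remains the default):

```yaml
apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.2.0.0/16      # pod CIDR (illustrative)
  blockSize: 24          # per-node block size this module keeps at /24
  ipipMode: Never        # "Always" when network_encapsulation = "ipip" (default)
  vxlanMode: Always      # "Never" when using ipip
  natOutgoing: true
```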
Dalton Hubble
614defe090 Update kube-router from v0.2.5 to v0.3.0
* https://github.com/cloudnativelabs/kube-router/releases/tag/v0.3.0
* Recall, kube-router is experimental and not vouched for
as part of clusters
2019-05-04 11:38:19 -07:00
Dalton Hubble
a80eed2b6a Update Kubernetes from v1.14.0 to v1.14.1 2019-04-09 21:43:39 -07:00
Dalton Hubble
53b2520d70 Remove deprecated user-kubeconfig output
* Use kubeconfig-admin output instead
* https://github.com/poseidon/terraform-render-bootkube/pull/100
2019-04-09 21:41:26 -07:00
Dalton Hubble
feb6e4cb3e Fix a few ca_cert vars that are lists and should be strings
* Error introduced in prior commit #104
2019-04-07 11:59:33 -07:00
Dalton Hubble
88fd15c2f6 Remove support for using a pre-existing certificate authority
* Remove the `ca_certificate`, `ca_key_alg`, and `ca_private_key`
variables
* Typhoon does not plan to expose custom CA support. Continuing
to support it clutters the implementation and security auditing
* Using an existing CA certificate and private key has been
supported in terraform-render-bootkube only to match bootkube
2019-04-07 11:42:57 -07:00
Dalton Hubble
b9bef14a0b Add enable_aggregation option (defaults to false)
* Add an `enable_aggregation` variable to enable the kube-apiserver
aggregation layer for adding extension apiservers to clusters
* Aggregation is **disabled** by default. Typhoon recommends you not
enable aggregation. Consider whether less invasive ways to achieve
your goals are possible and whether those goals are well-founded
* Enabling aggregation and extension apiservers increases the attack
surface of a cluster and makes extensions a part of the control plane.
Admins must scrutinize and trust any extension apiserver used.
* Passing a v1.14 CNCF conformance test requires aggregation be enabled.
Having an option for aggregation keeps compliance, but retains the stricter
security posture on default clusters
2019-04-07 02:27:40 -07:00
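When enabled, the module adds the aggregation flags to kube-apiserver; the same list appears in the `aggregation_flags` template shown in the diff further below:

```yaml
# kube-apiserver command excerpt with enable_aggregation = "true"
- --proxy-client-cert-file=/etc/kubernetes/secrets/aggregation-client.crt
- --proxy-client-key-file=/etc/kubernetes/secrets/aggregation-client.key
- --requestheader-client-ca-file=/etc/kubernetes/secrets/aggregation-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
```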
Dalton Hubble
a693381400 Update Kubernetes from v1.13.5 to v1.14.0 2019-03-31 17:45:25 -07:00
Dalton Hubble
bcb015e105 Update Calico from v3.6.0 to v3.6.1
* https://docs.projectcalico.org/v3.6/release-notes/
2019-03-31 17:41:15 -07:00
Dalton Hubble
da0321287b Update hyperkube from v1.13.4 to v1.13.5
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#v1135
2019-03-25 21:37:15 -07:00
Dalton Hubble
9862888bb2 Reduce calico-node CPU request from 250m to 150m
* calico-node uses only a small fraction of its CPU request
(i.e. reservation) even under stress. The unbounded limit
already allows usage to scale favorably in bursty cases
* Motivation: On instance types that skew memory-optimized
(e.g. GCP n1), over-requesting can push the system toward
overcommitment (alerts can be tuned)
* Overcommitment is not necessarily bad, but 250m seems too
generous a minimum given the actual usage
2019-03-24 11:55:56 -07:00
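In manifest terms, the change is a one-line request reduction (sketch; other resource fields are left unset):

```yaml
# calico-node container excerpt: lower the CPU request and keep CPU
# unbounded (no limit) so bursty usage can still scale.
resources:
  requests:
    cpu: 150m
```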
Dalton Hubble
23f81a5e8c Upgrade Calico from v3.5.2 to v3.6.0
* Add calico-ipam CRDs and RBAC permissions
* Switch IPAM from host-local to calico-ipam!
  * `calico-ipam` subnets `ippools` (defaults to pod CIDR) into
`ipamblocks` (defaults to /26, but set to /24 in Typhoon)
  * `host-local` subnets the pod CIDR based on the node PodCIDR
field (set via kube-controller-manager as /24's)
* Create a custom default IPv4 IPPool to ensure the block size
is kept at /24 to allow 110 pods per node (Kubernetes default)
* Retaining host-local was slightly preferred, but Calico v3.6
is migrating all usage to calico-ipam. The codepath that skipped
calico-ipam for KDD was removed
*  https://docs.projectcalico.org/v3.6/release-notes/
2019-03-18 22:28:48 -07:00
Dalton Hubble
6cda319b9d Revert "Update Calico from v3.5.2 to v3.6.0"
* Calico is not using host-local IPAM as desired
* This reverts commit e6e051ef47.
2019-03-18 21:32:23 -07:00
Dalton Hubble
e6e051ef47 Update Calico from v3.5.2 to v3.6.0
* Add calico-ipam CRDs and RBAC permissions
* Continue using host-local IPAM
*  https://docs.projectcalico.org/v3.6/release-notes/
2019-03-18 21:03:27 -07:00
Dalton Hubble
1528266595 Resolve in-addr.arpa and ip6.arpa zones with CoreDNS kubernetes plugin
* Resolve in-addr.arpa and ip6.arpa DNS PTR requests for Kubernetes
service IPs and pod IPs
* Previously, CoreDNS was configured to resolve in-addr.arpa PTR
records for service IPs (but not pod IPs)
2019-03-04 22:33:21 -08:00
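A minimal sketch of the Corefile change (shown as ConfigMap data; surrounding plugins omitted and options illustrative):

```yaml
Corefile: |
  .:53 {
      # Listing the reverse zones lets the kubernetes plugin answer
      # PTR queries for service and pod IPs.
      kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
      }
  }
```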
Dalton Hubble
953521dbba Update hyperkube from v1.13.3 to v1.13.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#v1134
2019-02-28 22:22:35 -08:00
Dalton Hubble
0a7c4fda35 Update Calico from v3.5.1 to v3.5.2
* https://docs.projectcalico.org/v3.5/releases/
2019-02-25 21:20:47 -08:00
Dalton Hubble
593f0e3655 Add a readinessProbe to CoreDNS
* https://github.com/kubernetes/kubernetes/pull/74137
2019-02-23 13:11:19 -08:00
Dalton Hubble
c5f5aacce9 Assign Pod Priority Classes to control plane components
* Priority Admission Controller has been enabled since Typhoon
v1.11.1
* Assign cluster and node components a builtin priorityClassName
(higher values mean higher priority) to inform scheduler preemption,
scheduling order, and node out-of-resource eviction order
2019-02-17 17:12:46 -08:00
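For example, a control plane Deployment gains a single field (sketch; node-level components such as kube-proxy typically use system-node-critical instead):

```yaml
# Pod template excerpt: builtin priority classes inform preemption,
# scheduling order, and out-of-resource eviction order.
priorityClassName: system-cluster-critical
```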
Dalton Hubble
4d315afd41 Update Calico from v3.5.0 to v3.5.1
* https://github.com/projectcalico/confd/pull/205
2019-02-09 11:45:38 -08:00
Dalton Hubble
c12a11c800 Update hyperkube from v1.13.2 to v1.13.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#v1133
2019-02-01 23:23:07 -08:00
Dalton Hubble
1de56ef7c8 Update kube-router from v0.2.4 to v0.2.5
* https://github.com/cloudnativelabs/kube-router/releases/tag/v0.2.5
2019-02-01 23:21:58 -08:00
Dalton Hubble
7dc8f8bf8c Switch CoreDNS to use the forward plugin instead of proxy
* Use the forward plugin to forward to upstream resolvers, instead
of the proxy plugin. The forward plugin is reported to be a faster
alternative since it can re-use open sockets
* https://coredns.io/explugins/forward/
* https://coredns.io/plugins/proxy/
* https://github.com/kubernetes/kubernetes/issues/73254
2019-01-30 22:19:13 -08:00
Dalton Hubble
c5bc23ef7a Update flannel from v0.10.0 to v0.11.0
* https://github.com/coreos/flannel/releases/tag/v0.11.0
2019-01-29 21:48:47 -08:00
Dalton Hubble
54f15b6c8c Update Calico from v3.4.0 to v3.5.0
* https://docs.projectcalico.org/v3.5/releases/
2019-01-27 16:25:57 -08:00
Dalton Hubble
7b06557b7a Reduce kube-controller-manager --pod-eviction-timeout to 1m
* Pods on preempted nodes should be moved to healthy nodes
more quickly (1 min instead of 5 minutes)
2019-01-27 16:20:01 -08:00
Dalton Hubble
ef99293eb2 Update CoreDNS from v1.3.0 to v1.3.1
* https://coredns.io/2019/01/13/coredns-1.3.1-release/
2019-01-15 21:22:40 -08:00
Dalton Hubble
e892e291b5 Restore Kubelet authorization to delete nodes
* Fix a regression caused by lowering the Kubelet TLS client
certificate to system:nodes group (#100) since dropping
cluster-admin dropped the Kubelet's ability to delete nodes.
* On clouds where workers can scale down (manual terraform apply,
AWS spot termination, Azure low priority deletion), worker shutdown
runs the delete-node.service to remove a node to prevent NotReady
nodes from accumulating
* Allow Kubelets to delete cluster nodes via system:nodes group. Kubelets
acting with system:node and kubelet-delete ClusterRoles is still an
improvement over acting as cluster-admin
2019-01-14 23:26:41 -08:00
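A hedged sketch of the RBAC involved (the rule contents are an assumption; only the kubelet-delete name and the system:nodes subject come from the commit message):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet-delete
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["delete"]        # assumed: just enough to remove a Node on shutdown
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-delete
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubelet-delete
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:nodes
```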
Dalton Hubble
2353c586a1 Update kube-router from v0.2.3 to v0.2.4
* https://github.com/cloudnativelabs/kube-router/releases/tag/v0.2.4
2019-01-12 14:19:36 -08:00
Dalton Hubble
bcbdddd8d0 Update hyperkube from v1.13.1 to v1.13.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#v1132
2019-01-11 23:59:24 -08:00
Dalton Hubble
f1e69f1d93 Re-enable kube-scheduler and kube-controller-manager HTTP ports
* Fix regression added in 48730c0f12, allow Prometheus to scrape
metrics from kube-scheduler and kube-controller-manager
2019-01-11 23:52:57 -08:00
Dalton Hubble
48730c0f12 Probe kube-scheduler and kube-controller-manager HTTPS ports
* Disable kube-scheduler and kube-controller-manager HTTP ports
2019-01-09 20:50:57 -08:00
Dalton Hubble
0e65e3567e Enable certificates.k8s.io API certificate issuance
* Allow kube-controller-manager to sign Approved CSR's using the
cluster CA private key to issue cluster certificates
* System components that need to use certificates signed by the
cluster CA can submit a CSR to the apiserver, have an admin
inspect and manually approve it, and be issued a certificate
* Admins should inspect CSRs very carefully to ensure their
origin and authorization level are appropriate
* https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/#approving-certificate-signing-requests
2019-01-06 17:17:03 -08:00
Dalton Hubble
4f8952a956 Disable anonymous auth on the bootstrap kube-apiserver
* Anonymous auth isn't used during bootstrapping and can
be disabled
2019-01-05 21:48:40 -08:00
Dalton Hubble
ea30087577 Structure control plane manifests neatly 2019-01-05 21:47:30 -08:00
Dalton Hubble
847ec5929b Consolidate both variants of the admin kubeconfig
* Provide an admin kubeconfig which includes a named context
and also sets that context as the current-context
* Retains support for both the KUBECONFIG=path style of usage
or adding many kubeconfigs to a ~/.kube/configs folder and
using `kubectl config use-context CLUSTER-context`
2019-01-05 14:56:45 -08:00
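A sketch of the consolidated kubeconfig shape (cluster name, server, and credential data are placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: example
    cluster:
      server: https://node1.example.com:6443
      certificate-authority-data: <base64 CA cert>
users:
  - name: example-admin
    user:
      client-certificate-data: <base64 admin cert>
      client-key-data: <base64 admin key>
contexts:
  - name: example-context
    context:
      cluster: example
      user: example-admin
# The named context doubles as the default context.
current-context: example-context
```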
Dalton Hubble
f5ea389e8c Update CoreDNS from v1.2.6 to v1.3.0
* https://coredns.io/2018/12/15/coredns-1.3.0-release/
* Limit log plugin to just log error class
2019-01-05 13:21:10 -08:00
Dalton Hubble
3431a12ac1 Remove deprecated kube_dns_service_ip output
* Use cluster_dns_service_ip output instead
2019-01-05 13:11:15 -08:00
Dalton Hubble
a7bd306679 Add admin kubeconfig and limit Kubelet cert to system:nodes group
* Change Kubelet TLS client certificate to belong to the system:nodes
group instead of the system:masters group (more limited)
* Bind the system:node ClusterRole to the system:nodes group (yes,
the ClusterRole is singular)
* Generate separate admin.crt and admin.key files (which do still use
system:masters). Output kubeconfig-kubelet and kubeconfig-admin values
from the module
* Remove the kubeconfig output to force users to pick the correct
kubeconfig, depending on how the output is used (action required!)

Related:

* https://kubernetes.io/docs/reference/access-authn-authz/rbac/#core-component-roles

Note, NodeAuthorizer/NodeRestriction would be an enhancement, but to
work across platforms it effectively requires TLS bootstraping which
doesn't have a viable attestation strategy and clashes with CCM. This
change improves Kubelet limitations, but intentionally doesn't aim to
steer toward NodeAuthorizer/NodeRestriction
2019-01-02 23:08:09 -08:00
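A sketch of the group binding described above (the binding's metadata name is an assumption):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:node          # assumed name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node          # builtin role (singular)
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:nodes       # group the Kubelet certs now belong to
```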
Dalton Hubble
f382415f2b Edit CA certificate CommonName to match upstream
* Consistency with https://kubernetes.io/docs/setup/certificates/#single-root-ca
2019-01-01 17:30:33 -08:00
Dalton Hubble
7bcca25043 Use a kube-apiserver ServiceAccount and ClusterRoleBinding
* Switch kube-apiserver from using the kube-system default ServiceAccount
(with cluster-admin) to using a kube-apiserver ServiceAccount bound to
cluster-admin (as before)
* Remove the default-sa ClusterRoleBinding that allowed kube-apiserver
and kube-scheduler (or other 3rd-party components added to kube-system)
to use the kube-system default ServiceAccount for cluster-admin
* Require all future components in kube-system define their own
ServiceAccount
2019-01-01 17:30:28 -08:00
Dalton Hubble
fa4c2d8a68 Use a kube-scheduler ServiceAccount and ClusterRoleBinding
* Switch kube-scheduler from using the kube-system default ServiceAccount
(with cluster-admin) to using a kube-scheduler ServiceAccount bound to
the builtin system:kube-scheduler and system:volume-scheduler
(required for StorageClass) ClusterRoles
* https://kubernetes.io/docs/reference/access-authn-authz/rbac/#core-component-roles
2019-01-01 17:29:36 -08:00
Dalton Hubble
d14348a368 Update Calico from v3.3.2 to v3.4.0
* Use an init container to install CNI plugins
* Update the calico-node ClusterRole
2018-12-15 18:04:25 -08:00
Dalton Hubble
51e3323a6d Update hyperkube from v1.13.0 to v1.13.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#v1131
2018-12-15 11:42:32 -08:00
Dalton Hubble
95e568935c Update Calico from v3.3.1 to v3.3.2
* https://docs.projectcalico.org/v3.3/releases/
2018-12-06 22:49:48 -08:00
Dalton Hubble
b101fddf6e Configure kube-router to use in-cluster-kubeconfig
* Use the in-cluster access token, but reach the apiserver via the
apiserver endpoint rather than the internal service IP
2018-12-06 22:39:59 -08:00
Dalton Hubble
cff13f9248 Update hyperkube from v1.12.3 to v1.13.0
* Remove controller-manager empty dir mount added for v1.12
https://github.com/kubernetes/kubernetes/issues/68973
* No longer required https://github.com/kubernetes/kubernetes/pull/69884
2018-12-03 20:42:14 -08:00
Dalton Hubble
9d6f0c31d3 Add experimental kube-router CNI provider
* Allow using kube-router for pod-to-pod networking
and for NetworkPolicy
2018-12-03 19:42:02 -08:00
Dalton Hubble
7dc6e199f9 Fix terraform fmt 2018-12-03 19:41:30 -08:00
Hielke Christian Braun
bfb3d23d1b Write etcd CA cert and key to the asset directory
* Provide the etcd CA key for administrator usage. Note that
the key should rarely, if ever, be used
2018-12-03 19:37:25 -08:00
Dalton Hubble
4021467b7f Update hyperkube from v1.12.2 to v1.12.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md/#v1123
2018-11-26 20:56:11 -08:00
Dalton Hubble
bffb5d5d23 Update pod-checkpointer image to query Kubelet secure api
* Updates pod-checkpointer to prefer the Kubelet secure
API (before falling back to the Kubelet read-only API that
is disabled on Typhoon clusters since
https://github.com/poseidon/typhoon/pull/324)
* Previously, pod-checkpointer checkpointed an initial set
of pods during bootstrapping so recovery from power cycling
clusters was unaffected, but logs were noisy
* https://github.com/kubernetes-incubator/bootkube/pull/1027
* https://github.com/kubernetes-incubator/bootkube/pull/1025
2018-11-26 20:11:01 -08:00
Dalton Hubble
dbf67da1cb Disable Calico usage reporting by default
* Calico Felix has been reporting anonymous usage data about
Calico version and cluster size
* https://docs.projectcalico.org/v3.3/reference/felix/configuration
* Add an enable_reporting variable and default to false
2018-11-18 23:41:19 -08:00
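The wiring is roughly one environment variable on calico-node (an assumption about the mechanism; Felix accepts FELIX_* overrides for its configuration parameters):

```yaml
# calico-node container env excerpt (sketch of the enable_reporting wiring)
env:
  - name: FELIX_USAGEREPORTINGENABLED
    value: "false"
```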
Dalton Hubble
3d9f957aec Update CoreDNS from v1.2.4 to v1.2.6
* https://coredns.io/2018/11/05/coredns-1.2.6-release/
2018-11-18 16:18:52 -08:00
Dalton Hubble
39f9afb336 Add resource request to flannel and mount /run/flannel
* Request 100m CPU without a limit (similar to Calico)
2018-11-11 15:56:13 -08:00
Dalton Hubble
3f3ab6b5c0 Enable CoreDNS loop and loadbalance plugins
* loop sends an initial query to detect infinite forwarding
loops in configured upstream DNS servers and exits fast with
an error (it's a fatal misconfiguration on the network that
will otherwise cause resolvers to consume memory/CPU until
crashing, masking the problem)
* https://github.com/coredns/coredns/tree/master/plugin/loop
* loadbalance randomizes the ordering of A, AAAA, and MX records
in responses to provide round-robin load balancing (as usual,
clients may still cache responses though)
* https://github.com/coredns/coredns/tree/master/plugin/loadbalance
2018-11-10 17:33:30 -08:00
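As a sketch, both plugins are single directives in the Corefile (other plugins omitted):

```yaml
Corefile: |
  .:53 {
      forward . /etc/resolv.conf
      # fail fast if a forwarding loop through the upstreams is detected
      loop
      # shuffle A, AAAA, and MX record ordering in responses
      loadbalance
  }
```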
Dalton Hubble
1cb00c8270 Update README to correspond to bootkube v0.14.0 2018-11-10 17:32:47 -08:00
Dalton Hubble
d045a8e6b8 Structure flannel/Calico manifests consistently
* Organize flannel and Calico manifests to use consistent
naming, structure, and ordering to align
* Downside: Makes direct diff'ing with upstream harder, but
that's become difficult lately anyway, since Calico uses a
templating engine
2018-11-10 13:14:36 -08:00
Dalton Hubble
8742024bbf Update Calico from v3.3.0 to v3.3.1
* https://docs.projectcalico.org/v3.3/releases/
2018-11-10 12:41:32 -08:00
Dalton Hubble
365d089610 Set kube-apiserver's kubelet preferred address types
* Prefer InternalIP and ExternalIP over the node's hostname,
to match upstream behavior and kubeadm
* Previously, hostname-override was used to set node names
to internal IP's to work around some cloud providers not
resolving hostnames for instances (e.g. DO droplets)
2018-11-03 14:58:30 -07:00
Dalton Hubble
f39f8294c4 Update hyperkube from v1.12.1 to v1.12.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md/#v1122
2018-10-27 15:35:49 -07:00
Dalton Hubble
6a77775e52 Update CoreDNS from v1.2.2 to v1.2.4
* https://coredns.io/2018/10/17/coredns-1.2.4-release/
* https://coredns.io/2018/10/16/coredns-1.2.3-release/
2018-10-27 15:35:21 -07:00
Dalton Hubble
e0e5577d37 Update Calico from v3.2.3 to v3.3.0
* https://docs.projectcalico.org/v3.3/releases/
2018-10-23 20:26:48 -07:00
Dalton Hubble
79065baa8c Fix CoreDNS AntiAffinity to prefer spreading pods 2018-10-17 22:15:53 -07:00
Dalton Hubble
81f19507fa Update Kubernetes from v1.11.3 to v1.12.1
* Mount an empty dir for the controller-manager to work around
https://github.com/kubernetes/kubernetes/issues/68973
* Update coreos/pod-checkpointer to strip affinity from
checkpointed pod manifests. Kubernetes v1.12.0-rc.1 introduced
a default affinity that appears on checkpointed manifests, but
it prevented scheduling. Checkpointed pods should not have an
affinity; they're run directly by the Kubelet on the local node
* https://github.com/kubernetes-incubator/bootkube/issues/1001
* https://github.com/kubernetes/kubernetes/pull/68173
2018-10-16 20:03:04 -07:00
Dalton Hubble
2437023c10 Add docker/default seccomp profile to control plane pods
* By default, Kubernetes starts containers without the Docker
runtime's default seccomp profile (e.g. seccomp=unconfined)
* https://docs.docker.com/engine/security/seccomp/#pass-a-profile-for-a-container
2018-10-13 18:06:34 -07:00
Dalton Hubble
4e0ad77f96 Add livenessProbe to kube-proxy DaemonSet 2018-10-13 17:59:44 -07:00
Dalton Hubble
f7c2f8d590 Raise CoreDNS replica count to at least 2
* Run at least two replicas of CoreDNS to better support
rolling updates (previously, kube-dns had a pod nanny)
* On multi-master clusters, set the CoreDNS replica count
to match the number of masters (e.g. a 3-master cluster
previously used replicas:1, now replicas:3)
* Add AntiAffinity preferred rule to favor distributing
CoreDNS pods across nodes
2018-10-13 17:19:02 -07:00
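A sketch of the preferred anti-affinity rule (the label selector is illustrative):

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              k8s-app: coredns
```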
Dalton Hubble
7797377d50 Raise scheduler/controller-manager replicas in multi-master
* Continue to ensure scheduler and controller-manager run
at least two replicas to support performing kubectl edits
on single-master clusters (no change)
* For multi-master clusters, set scheduler / controller-manager
replica count to the number of masters (e.g. a 3-master cluster
previously used replicas:2, now replicas:3)
2018-10-13 15:43:31 -07:00
Dalton Hubble
bccf3da096 Update Calico from v3.2.1 to v3.2.3
* https://github.com/projectcalico/calico/releases/tag/v3.2.2
* https://github.com/projectcalico/calico/releases/tag/v3.2.3
2018-10-02 15:59:50 +02:00
Dalton Hubble
9929abef7d Update CoreDNS from 1.1.3 to 1.2.2
* https://github.com/coredns/coredns/releases/tag/v1.2.2
* https://github.com/coredns/coredns/issues/2056
2018-10-02 15:58:07 +02:00
Dalton Hubble
5378e166ef Update hyperkube from v1.11.2 to v1.11.3
*  https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1113
2018-09-13 18:42:16 -07:00
Dalton Hubble
6f024c457e Update Calico from v3.1.3 to v3.2.1
* Most upstream changes were buried in calico#1884 which
switched from non-templated manifests to templating
* https://github.com/projectcalico/calico/pull/1884
* https://github.com/projectcalico/calico/pull/1853
* https://github.com/projectcalico/calico/pull/2069
* https://github.com/projectcalico/calico/pull/2032
* https://github.com/projectcalico/calico/pull/1841
* https://github.com/projectcalico/calico/pull/1770
2018-08-25 17:46:31 -07:00
Dalton Hubble
70c2839970 Update hyperkube from v1.11.1 to v1.11.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1112
2018-08-07 21:49:27 -07:00
Dalton Hubble
9e6fc7e697 Update hyperkube from v1.11.0 to v1.11.1
* Kubernetes v1.11.1 defaults to enabling the Priority
admission controller. List the Priority admission controller
explicitly for readability
2018-07-20 00:27:31 -07:00
Dalton Hubble
81ba300e71 Switch from kube-dns to CoreDNS
* Add system:coredns ClusterRole and binding
* Annotate CoreDNS service for Prometheus metrics scraping
* Remove kube-dns deployment, service, service account, and
variables
* Deprecate kube_dns_service_ip module output, use
cluster_dns_service_ip instead
2018-07-01 16:17:04 -07:00
Dalton Hubble
eb2dfa64de Explicitly disable apiserver 127.0.0.1 insecure port
* Although the --insecure-port flag is deprecated, apiserver
continues to default to listening on 127.0.0.1:8080
* Explicitly disable the insecure local listener since it's unused
* https://github.com/kubernetes/kubernetes/pull/59018#discussion_r177849954
* 5f3546b66f
2018-06-27 22:30:29 -07:00
Dalton Hubble
34992426f6 Update hyperkube from v1.10.5 to v1.11.0 2018-06-27 22:29:21 -07:00
Dalton Hubble
1d4db824f0 Update hyperkube from v1.10.4 to v1.10.5
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1105
2018-06-21 22:46:00 -07:00
Dalton Hubble
2bcf61b2b5 Change apiserver port from 443 to 6443
* Requires updating load balancers, firewall rules,
security groups, and potentially routers/balancers
* Temporarily allow apiserver_port override to accommodate
edge cases or migration
* https://github.com/kubernetes-incubator/bootkube/pull/789
2018-06-19 23:40:09 -07:00
Dalton Hubble
0e98e89e14 Update hyperkube from v1.10.3 to v1.10.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1104
2018-06-06 23:11:33 -07:00
Dalton Hubble
24e900af46 Update Calico from v3.1.2 to v3.1.3
* https://github.com/projectcalico/calico/releases/tag/v3.1.3
* https://github.com/projectcalico/cni-plugin/releases/tag/v3.1.3
2018-05-30 21:17:46 -07:00
Dalton Hubble
3fa3c2d73b Update hyperkube from v1.10.2 to v1.10.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1103
2018-05-21 20:17:36 -07:00
Dalton Hubble
2a776e7054 Update Calico from v3.1.1 to v3.1.2
* https://github.com/projectcalico/calico/releases/tag/v3.1.2
2018-05-21 20:15:49 -07:00
Dalton Hubble
28f68db28e Switch apiserver certificate to system:masters org
* A kubernetes apiserver should be authorized to make requests
to kubelets using an admin role associated with system:masters
* Kubelet defaults to AlwaysAllow so an apiserver that presented
a valid certificate had all access to the Kubelet. With Webhook
authorization, we're making that admin access explicit
* It's important the apiserver be able to perform or proxy requests
to kubelets for kubectl logs, exec, port-forward, etc.
* https://github.com/poseidon/typhoon/issues/215
2018-05-13 23:04:25 -07:00
Dalton Hubble
305c813234 Allow specifying the Calico IP autodetection method
* Calico's default method "first-found" is appropriate for
single-NIC or bonded-NIC bare-metal and for clouds
* On bare-metal machines with multiple NICs, first-found
may result in Calico pods picking an unintended IP address
(perhaps an admin has dedicated certain NICs to certain
purposes). It may be helpful to use `can-reach=DEST` or
`interface=REGEX` to select the host's address
* Caveat: autodetection method is set for the Calico
DaemonSet so the choice must be appropriate for all
machines in the cluster.
* https://docs.projectcalico.org/v3.1/reference/node/configuration#ip-autodetection-methods
2018-05-13 19:57:44 -07:00
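The method is passed to calico-node via an environment variable (sketch; the value shown is only an example of the can-reach form):

```yaml
# calico-node container env excerpt
env:
  - name: IP_AUTODETECTION_METHOD
    value: can-reach=8.8.8.8   # or interface=REGEX; default is first-found
```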
Dalton Hubble
911f411508 Update kube-dns from v1.14.9 to v1.14.10
* https://github.com/kubernetes/kubernetes/pull/62676
2018-04-28 00:39:44 -07:00
Dalton Hubble
a43af2562c Update hyperkube from v1.10.1 to v1.10.2 2018-04-27 23:50:57 -07:00
Ruben Das
dc721063af Fix typo in README module example 2018-04-27 23:49:58 -07:00
Dalton Hubble
6ec5e3c3af Update Calico from v3.0.4 to v3.1.1
* https://github.com/projectcalico/calico/releases/tag/v3.1.1
* https://github.com/projectcalico/calico/releases/tag/v3.1.0
* CNI config now defaults to having Kubelet CNI plugin read
from /var/lib/calico/nodename
* https://github.com/projectcalico/calico/pull/1722
2018-04-21 15:09:06 -07:00
Dalton Hubble
db36b92abc Update hyperkube from v1.10.0 to v1.10.1 2018-04-12 20:09:52 -07:00
Dalton Hubble
581f24d11a Update README to correspond to bootkube v0.12.0 2018-04-12 20:09:05 -07:00
Dalton Hubble
15b380a471 Remove deprecated bootstrap apiserver flags
* Remove flags deprecated in Kubernetes v1.10.x
* https://github.com/poseidon/terraform-render-bootkube/pull/50
2018-04-12 19:50:25 -07:00
Dalton Hubble
33e00a6dc5 Use k8s.gcr.io instead of gcr.io/google_containers
* Kubernetes recommends using the alias to fetch images
from the nearest GCR regional mirror, to abstract the
use of GCR, and to drop names containing "google"
* https://groups.google.com/forum/#!msg/kubernetes-dev/ytjk_rNrTa0/3EFUHvovCAAJ
2018-04-08 11:41:48 -07:00
qbast
109ddd2dc1 Add flexvolume plugin mount to controller-manager
* Mount /var/lib/kubelet/volumeplugins by default
2018-04-08 11:37:21 -07:00
Dalton Hubble
b408d80c59 Update kube-dns from v1.14.8 to v1.14.9
* https://github.com/kubernetes/kubernetes/pull/61908
2018-04-04 20:49:59 -07:00
Dalton Hubble
61fb176647 Add optional trusted certs directory variable 2018-04-04 00:35:00 -07:00
Dalton Hubble
5f3546b66f Remove deprecated apiserver flags 2018-03-26 20:52:56 -07:00
Dalton Hubble
e01ff60e42 Update hyperkube from v1.9.6 to v1.10.0
* Update pod checkpointer from CRI v1alpha1 to v1alpha2
* https://github.com/kubernetes-incubator/bootkube/pull/940
* https://github.com/kubernetes-incubator/bootkube/pull/938
2018-03-26 19:45:14 -07:00
Dalton Hubble
88b361207d Update hyperkube from v1.9.5 to v1.9.6 2018-03-21 20:27:11 -07:00
Dalton Hubble
747603e90d Update Calico from v3.0.3 to v3.0.4
* Update cni-plugin from v2.0.0 to v2.0.1
* https://github.com/projectcalico/calico/releases/tag/v3.0.4
* https://github.com/projectcalico/cni-plugin/releases/tag/v2.0.1
2018-03-21 20:25:04 -07:00
Andy Cobaugh
366f751283 Change user-kubeconfig output to rendered content 2018-03-21 20:21:04 -07:00
Dalton Hubble
457b596fa0 Update hyperkube from v1.9.4 to v1.9.5 2018-03-18 17:10:15 -07:00
Dalton Hubble
36bf88af70 Add /var/lib/calico volume mount for Calico
* 73705b2cb3
2018-03-18 16:35:45 -07:00
Dalton Hubble
c5fc93d95f Update hyperkube from v1.9.3 to v1.9.4 2018-03-10 23:00:59 -08:00
Dalton Hubble
c92f3589db Update Calico from v3.0.2 to v3.0.3
* https://github.com/projectcalico/calico/releases/tag/v3.0.3
2018-02-24 19:10:49 -08:00
Dalton Hubble
13a20039f5 Update README to correspond to bootkube v0.11.0 2018-02-22 21:48:30 -08:00
Dalton Hubble
070d184644 Update pod-checkpointer image version
* No notable changes except a grace period flag we don't use
* https://github.com/kubernetes-incubator/bootkube/pull/826
2018-02-15 08:03:16 -08:00
Dalton Hubble
cd6f6fa20d Remove PersistentVolumeLabel admission controller flag
* PersistentVolumeLabel admission controller is deprecated in 1.9
2018-02-11 11:25:02 -08:00
Dalton Hubble
8159561165 Switch Deployments and DaemonSets to apps/v1 2018-02-11 11:22:52 -08:00
Dalton Hubble
203b90169e Add Calico GlobalNetworkSet CRD 2018-02-10 13:04:13 -08:00
Dalton Hubble
72ab2b6aa8 Update Calico from v3.0.1 to v3.0.2
* https://github.com/projectcalico/calico/releases/tag/v3.0.2
2018-02-10 12:58:07 -08:00
Dalton Hubble
5d8a9e8986 Remove deprecated apiserver --etcd-quorum-read flag 2018-02-09 17:53:55 -08:00
Dalton Hubble
27857322df Update hyperkube from v1.9.2 to v1.9.3 2018-02-09 16:44:54 -08:00
Dalton Hubble
27d5f62f6c Change DaemonSets to tolerate NoSchedule and NoExecute taints
* Change kube-proxy, flannel, and calico to tolerate any NoSchedule
or NoExecute taint, not just allow running on masters
* https://github.com/kubernetes-incubator/bootkube/pull/704
2018-02-03 05:58:23 +01:00
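In manifest terms (sketch), the DaemonSets move from a master-only toleration to wildcard effect tolerations:

```yaml
tolerations:
  # Tolerate any NoSchedule or NoExecute taint, not just the master taint.
  - effect: NoSchedule
    operator: Exists
  - effect: NoExecute
    operator: Exists
```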
Dalton Hubble
20adb15d32 Add flannel service account and RBAC cluster role
* Define a limited ClusterRole and service account for flannel
* https://github.com/kubernetes-incubator/bootkube/pull/869
2018-02-03 05:46:31 +01:00
Dalton Hubble
8d40d6c64d Update flannel from v0.9.0 to v0.10.0
* https://github.com/coreos/flannel/releases/tag/v0.10.0
2018-01-28 22:19:42 -08:00
Dalton Hubble
f4ccbeee10 Migrate from Calico v2.6.6 to v3.0.1
* https://github.com/projectcalico/calico/releases/tag/v3.0.1
2018-01-19 23:04:57 -08:00
Dalton Hubble
b339254ed5 Update README to correspond to bootkube v0.10.0 2018-01-19 23:03:03 -08:00
Dalton Hubble
9ccedf7b1e Update Calico from v2.6.5 to v2.6.6
* https://github.com/projectcalico/calico/releases/tag/v2.6.6
2018-01-19 22:18:58 -08:00
Dalton Hubble
9795894004 Update hyperkube from v1.9.1 to v1.9.2 2018-01-19 08:19:28 -08:00
Dalton Hubble
bf07c3edad Update kube-dns from v1.14.7 to v1.14.8
* https://github.com/kubernetes/kubernetes/pull/57918
2018-01-12 09:57:01 -08:00
Dalton Hubble
41a16db127 Add separate service account for kube-dns 2018-01-12 09:15:36 -08:00
Dalton Hubble
b83e321b35 Enable portmap plugin to fix hostPort with Calico
* Ask the Calico sidecar to add a CNI conflist to each node
(for the calico and portmap plugins), switching from a CNI conf to a conflist
* https://github.com/projectcalico/cni-plugin/blob/v1.11.2/k8s-install/scripts/install-cni.sh
* Related https://github.com/kubernetes-incubator/bootkube/pull/711
2018-01-06 13:33:17 -08:00
Dalton Hubble
28333ec9da Update Calico from v2.6.4 to 2.6.5 2018-01-06 13:17:46 -08:00
Dalton Hubble
891e88a70b Update apiserver --admission-control for v1.9.x
* https://kubernetes.io/docs/admin/admission-controllers
2018-01-06 13:16:27 -08:00
Dalton Hubble
5326239074 Update hyperkube from v1.9.0 to v1.9.1 2018-01-06 11:25:26 -08:00
Dalton Hubble
abe1f6dbf3 Update kube-dns from v1.14.6 to v1.14.7
* https://github.com/kubernetes/kubernetes/pull/54443
2018-01-06 11:24:55 -08:00
Dalton Hubble
4260d9ae87 Update kube-dns version and probe for SRV records
* https://github.com/kubernetes/kubernetes/pull/51378
2018-01-06 11:24:55 -08:00
Dalton Hubble
84c86ed81a Update hyperkube from v1.8.6 to v1.9.0 2018-01-06 11:24:55 -08:00
Dalton Hubble
a97f2ea8de Use an isolated service account for controller-manager
* https://github.com/kubernetes-incubator/bootkube/pull/795
2018-01-06 11:19:11 -08:00
Dalton Hubble
5072569bb7 Update calico/cni sidecar from v1.11.1 to v1.11.2 2017-12-21 11:16:55 -08:00
Dalton Hubble
7a52b30713 Update hyperkube image from v1.8.5 to v1.8.6 2017-12-21 10:26:06 -08:00
Dalton Hubble
73fcee2471 Switch kubeconfig-in-cluster from Secret to ConfigMap
* kubeconfig-in-cluster doesn't contain secrets, just references
to locations
2017-12-21 09:15:15 -08:00
Dalton Hubble
b25d802e3e Update Calico from v2.6.3 to v2.6.4
* https://github.com/projectcalico/calico/releases/tag/v2.6.4
2017-12-21 08:57:02 -08:00
Dalton Hubble
df22b04db7 Update README to correspond to bootkube v0.9.1 2017-12-15 01:40:25 -08:00
Dalton Hubble
6dc7630020 Fix Terraform formatting with fmt 2017-12-13 00:58:26 -08:00
Dalton Hubble
3ec47194ce Rename cluster_dns_fqdn variable to cluster_domain_suffix 2017-12-13 00:11:16 -08:00
Barak Michener
03ca146ef3 Add option for Cluster DNS having a FQDN other than cluster.local 2017-12-12 10:17:53 -08:00
Dalton Hubble
5763b447de Remove self-hosted etcd TLS cert SANs
* Remove the now-defunct self-hosted etcd service IP output
2017-12-12 00:30:04 -08:00
Dalton Hubble
36243ff89b Update pod-checkpointer and drop ClusterRole to Role
* pod-checkpointer no longer needs to watch pods in all namespaces;
it should only have permission to watch kube-system
* https://github.com/kubernetes-incubator/bootkube/pull/784
2017-12-12 00:10:55 -08:00
Dalton Hubble
810ddfad9f Add controller-manager flag for service_cidr
* controller-manager can handle overlapping pod and service CIDRs
to avoid address collisions, if it's informed of both ranges
* Still favor non-overlapping pod and service ranges of course
* https://github.com/kubernetes-incubator/bootkube/pull/797
2017-12-12 00:00:26 -08:00
Dalton Hubble
ec48758c5e Remove experimental self-hosted etcd options 2017-12-11 21:51:07 -08:00
Dalton Hubble
533e82f833 Update hyperkube from v1.8.4 to v1.8.5 2017-12-08 08:46:22 -08:00
Dalton Hubble
31cfae5789 Update README to correspond to v0.9.0 2017-12-01 22:13:33 -08:00
Dalton Hubble
680244706c Update Calico from v2.6.1 to v2.6.3
* Bug fixes for Calico 2.6.x
https://github.com/projectcalico/calico/releases/tag/v2.6.3
* Bug fixes for cni-plugin (i.e. cni) v1.11.x
https://github.com/projectcalico/cni-plugin/releases/tag/v1.11.1
2017-11-28 21:33:51 -08:00
Dalton Hubble
dbcf3b599f Remove flock from bootstrap-apiserver and kube-apiserver
* https://github.com/kubernetes-incubator/bootkube/pull/616
2017-11-28 21:13:15 -08:00
Dalton Hubble
b7b56a6e55 Update hyperkube from v1.8.3 to v1.8.4 2017-11-28 21:11:52 -08:00
Dalton Hubble
a613c7dfa6 Remove unused critical-pod annotations in manifests
* https://github.com/kubernetes-incubator/bootkube/pull/777
2017-11-28 21:10:05 -08:00
Dalton Hubble
ab4d7becce Disable Calico termination grace period
* Disable termination grace period to account for Kubernetes v1.8
changes to DaemonSet rolling behavior
* https://github.com/projectcalico/calico/pull/1293
* Fix IPIP mode casing https://github.com/projectcalico/calico/pull/1233
2017-11-17 00:40:25 -08:00
Dalton Hubble
4d85d9c0d1 Update flannel version from v0.9.0 to v0.9.1
* https://github.com/kubernetes-incubator/bootkube/pull/776
2017-11-17 00:38:37 -08:00
Dalton Hubble
ec5f86b014 Use service accounts for kube-proxy and pod-checkpointer
* Create separate service accounts for kube-proxy and pod-checkpointer
* Switch kube-proxy and pod-checkpointer to use a kubeconfig that
references the local service account, rather than the host kubeconfig
* https://github.com/kubernetes-incubator/bootkube/pull/767
2017-11-17 00:33:22 -08:00
Dalton Hubble
92ff0f253a Update README to correspond to bootkube v0.8.2 2017-11-10 19:54:35 -08:00
Dalton Hubble
4f6af5b811 Update hyperkube from v1.8.2 to v1.8.3
* https://github.com/kubernetes-incubator/bootkube/pull/765
2017-11-08 21:48:21 -08:00
Dalton Hubble
f76e58b56d Update checkpointer with state machine impl
* https://github.com/kubernetes-incubator/bootkube/pull/759
2017-11-08 21:45:01 -08:00
Dalton Hubble
383aba4e8e Add /lib/modules mount to kube-proxy
* Starting in Kubernetes v1.8, kube-proxy modprobes ipvs
* kube-proxy still uses iptables, but may switch to ipvs in the
future; this prepares the way for that to happen
* https://github.com/kubernetes-incubator/bootkube/issues/741
2017-11-08 21:39:07 -08:00
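A sketch of the added mount (volume name is an assumption; volumeMounts sits in the container spec, volumes in the pod spec):

```yaml
# kube-proxy DaemonSet excerpts
volumeMounts:
  - name: lib-modules
    mountPath: /lib/modules
    readOnly: true
volumes:
  - name: lib-modules
    hostPath:
      path: /lib/modules
```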
Dalton Hubble
aebb45e6e9 Update README to correspond to bootkube v0.8.1 2017-10-28 12:44:06 -07:00
Dalton Hubble
b6b320ef6a Update hyperkube from v1.8.1 to v1.8.2
* v1.8.2 includes an apiserver memory leak fix
2017-10-24 21:27:46 -07:00
Dalton Hubble
9f4ffe273b Switch hyperkube from quay.io/coreos to gcr.io/google_containers
* Use the Kubernetes official hyperkube image
* Patches in quay.io/coreos/hyperkube are no longer needed
for kubernetes-incubator/bootkube clusters starting in
Kubernetes 1.8
2017-10-22 17:05:52 -07:00
Dalton Hubble
74366f6076 Enable hairpinMode in flannel CNI config
* Allow pods to communicate with themselves via service IP
* https://github.com/coreos/flannel/pull/849
2017-10-22 13:51:46 -07:00
Dalton Hubble
db7c13f5ee Update flannel from v0.8.0-amd64 to v0.9.0-amd64 2017-10-22 13:48:14 -07:00
Dalton Hubble
3ac28c9210 Add --no-negcache flag to dnsmasq args
* e1d6bcc227
2017-10-21 17:15:19 -07:00
Dalton Hubble
64748203ba Update assets generation for bootkube v0.8.0
* Update from Kubernetes v1.7.7 to v1.8.1
2017-10-19 20:48:24 -07:00
Dalton Hubble
262cc49856 Update README intro, repo name, and links 2017-10-08 23:00:58 -07:00
Dalton Hubble
125f29d43d Render images from the container_images map variable
* Container images may be customized to facilitate using mirrored
images or development with custom images
2017-10-08 22:29:26 -07:00
Dalton Hubble
aded06a0a7 Update assets generation for bootkube v0.7.0 2017-10-03 09:27:30 -07:00
Dalton Hubble
cc2b45780a Add square brackets for lists to be explicit
* Terraform's "type system" sometimes doesn't identify list
types correctly so be explicit
* https://github.com/hashicorp/terraform/issues/12263#issuecomment-282571256
2017-10-03 09:23:25 -07:00
Dalton Hubble
d93b7e4dc8 Update kube-dns image to address dnsmasq vulnerability
* https://security.googleblog.com/2017/10/behind-masq-yet-more-dns-and-dhcp.html
2017-10-02 10:23:22 -07:00
Dalton Hubble
48b33db1f1 Update Calico from v2.6.0 to v2.6.1 2017-09-30 16:12:29 -07:00
Dalton Hubble
8a9b6f1270 Update Calico from v2.5.1 to v2.6.0
* Update cni sidecar image from v1.10.0 to v1.11.0
* Lower log level in CNI config from debug to info
2017-09-28 20:43:15 -07:00
Dalton Hubble
3b8d762081 Merge pull request #16 from poseidon/etcd-network-checkpointer
Add kube-etcd-network-checkpointer for self-hosted etcd only
2017-09-27 18:06:19 -07:00
Dalton Hubble
9c144e6522 Add kube-etcd-network-checkpointer for self-hosted etcd only 2017-09-26 00:39:42 -07:00
Dalton Hubble
c0d4f56a4c Merge pull request #12 from cloudnativelabs/doc-fix-etcd_servers
Update etcd_servers variable description
2017-09-26 00:12:34 -07:00
bzub
62c887f41b Update etcd_servers variable description. 2017-09-16 16:12:40 -05:00
Dalton Hubble
dbfb11c6ea Update assets generation for bootkube v0.6.2
* Update hyperkube to v1.7.5_coreos.0
* Update etcd-operator to v0.5.0
* Update pod-checkpointer
* Update flannel-cni to v0.2.0
* Change etcd-operator TPR to CRD
2017-09-08 13:46:28 -07:00
Dalton Hubble
5ffbfec46d Configure the Calico MTU
* Add a network_mtu input variable (default 1500)
* Set the MTU in the Calico CNI config (i.e. for workload network interfaces)
* Set the Calico IP-in-IP MTU (for tunnel network interfaces)
2017-09-05 10:50:26 -07:00
Dalton Hubble
a52f99e8cc Add support for calico networking
* Add support for using Calico pod networking instead of flannel
* Add variable "networking" which may be "calico" or "flannel"
* Users MUST move the contents of assets_dir/manifests-networking
into the assets_dir/manifests directory before running bootkube
start. This is needed because Terraform cannot generate conditional
files into a template_dir because other resources write to the same
directory and delete.
https://github.com/terraform-providers/terraform-provider-template/issues/10
2017-09-01 10:27:43 -07:00
Dalton Hubble
1c1c4b36f8 Enable hairpin mode on cbr0 in kube-flannel-cfg 2017-08-16 18:22:42 -07:00
Dalton Hubble
c4e87f9695 Update assets generation for bootkube v0.6.1 2017-08-16 18:20:40 -07:00
Dalton Hubble
4cd0360a1a Add MIT License 2017-08-02 00:05:04 -07:00
Dalton Hubble
e7d2c1e597 Update assets generation for bootkube v0.6.0 2017-07-24 13:12:32 -07:00
91 changed files with 1838 additions and 859 deletions

2
.gitignore vendored

@@ -1,2 +1,4 @@
*.tfvars
.terraform
*.tfstate*
assets

21
LICENSE Normal file

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2017 Dalton Hubble
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@@ -1,62 +1,49 @@
# bootkube-terraform
# terraform-render-bootkube
`bootkube-terraform` is a Terraform module that renders [bootkube](https://github.com/kubernetes-incubator/bootkube) assets, just like running the binary `bootkube render`. It aims to provide the same variable names, defaults, features, and outputs.
`terraform-render-bootkube` is a Terraform module that renders [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube) assets for bootstrapping a Kubernetes cluster.
## Audience
`terraform-render-bootkube` is a low-level component of the [Typhoon](https://github.com/poseidon/typhoon) Kubernetes distribution. Use Typhoon modules to create and manage Kubernetes clusters across supported platforms. Use the bootkube module if you'd like to customize a Kubernetes control plane or build your own distribution.
## Usage
Use the `bootkube-terraform` module within your existing Terraform configs. Provide the variables listed in `variables.tf` or check `terraform.tfvars.example` for examples.
Use the module to declare bootkube assets. Check [variables.tf](variables.tf) for options and [terraform.tfvars.example](terraform.tfvars.example) for examples.
```hcl
module "bootkube" {
source = "git://https://github.com/dghubble/bootkube-terraform.git?ref=SHA"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=SHA"
cluster_name = "example"
api_servers = ["node1.example.com"]
etcd_servers = ["node1.example.com"]
asset_dir = "/home/core/clusters/mycluster"
experimental_self_hosted_etcd = false
}
```
Alternately, use a local checkout of this repo and copy `terraform.tfvars.example` to `terraform.tfvars` to generate assets without an existing terraform config repo.
Generate the bootkube assets.
Generate the assets.
```sh
terraform get
terraform init
terraform plan
terraform apply
```
Find bootkube assets rendered to the `asset_dir` path. That's it.
### Comparison
Render bootkube assets directly with bootkube v0.5.1.
#### On-host etcd
Render bootkube assets directly with bootkube v0.14.0.
```sh
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --etcd-servers=https://node1.example.com:2379
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:6443 --api-server-alt-names=DNS=node1.example.com --etcd-servers=https://node1.example.com:2379
```
Compare assets. The only diffs you should see are TLS credentials.
```sh
diff -rw assets /home/core/mycluster
```
#### Self-hosted etcd
```sh
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --experimental-self-hosted-etcd
```
Compare assets. Note that experimental must be generated to a separate directory for terraform applies to sync. Move the experimental `bootstrap-manifests` and `manifests` files during deployment.
Compare assets. Rendered assets may differ slightly from bootkube assets to reflect decisions made by the [Typhoon](https://github.com/poseidon/typhoon) distribution.
```sh
pushd /home/core/mycluster
mv experimental/bootstrap-manifests/* bootstrap-manifests
mv experimental/manifests/* manifests
mv manifests-networking/* manifests
popd
diff -rw assets /home/core/mycluster
```


@@ -1,45 +0,0 @@
# Assets generated only when experimental self-hosted etcd is enabled
# bootstrap-etcd.yaml pod bootstrap-manifest
resource "template_dir" "experimental-bootstrap-manifests" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
source_dir = "${path.module}/resources/experimental/bootstrap-manifests"
destination_dir = "${var.asset_dir}/experimental/bootstrap-manifests"
vars {
etcd_image = "${var.container_images["etcd"]}"
bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 20)}"
}
}
# etcd subfolder - bootstrap-etcd-service.json and migrate-etcd-cluster.json TPR
resource "template_dir" "etcd-subfolder" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
source_dir = "${path.module}/resources/etcd"
destination_dir = "${var.asset_dir}/etcd"
vars {
bootstrap_etcd_service_ip = "${cidrhost(var.service_cidr, 20)}"
}
}
# etcd-operator deployment and etcd-service manifests
# etcd client, server, and peer tls secrets
resource "template_dir" "experimental-manifests" {
count = "${var.experimental_self_hosted_etcd ? 1 : 0}"
source_dir = "${path.module}/resources/experimental/manifests"
destination_dir = "${var.asset_dir}/experimental/manifests"
vars {
etcd_service_ip = "${cidrhost(var.service_cidr, 15)}"
# Self-hosted etcd TLS certs / keys
etcd_ca_cert = "${base64encode(tls_self_signed_cert.etcd-ca.cert_pem)}"
etcd_client_cert = "${base64encode(tls_locally_signed_cert.client.cert_pem)}"
etcd_client_key = "${base64encode(tls_private_key.client.private_key_pem)}"
etcd_server_cert = "${base64encode(tls_locally_signed_cert.server.cert_pem)}"
etcd_server_key = "${base64encode(tls_private_key.server.private_key_pem)}"
etcd_peer_cert = "${base64encode(tls_locally_signed_cert.peer.cert_pem)}"
etcd_peer_key = "${base64encode(tls_private_key.peer.private_key_pem)}"
}
}


@@ -5,11 +5,14 @@ resource "template_dir" "bootstrap-manifests" {
vars {
hyperkube_image = "${var.container_images["hyperkube"]}"
etcd_servers = "${var.experimental_self_hosted_etcd ? format("https://%s:2379,https://127.0.0.1:12379", cidrhost(var.service_cidr, 15)) : join(",", formatlist("https://%s:2379", var.etcd_servers))}"
etcd_servers = "${join(",", formatlist("https://%s:2379", var.etcd_servers))}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
trusted_certs_dir = "${var.trusted_certs_dir}"
apiserver_port = "${var.apiserver_port}"
}
}
@@ -19,58 +22,89 @@ resource "template_dir" "manifests" {
destination_dir = "${var.asset_dir}/manifests"
vars {
hyperkube_image = "${var.container_images["hyperkube"]}"
etcd_servers = "${var.experimental_self_hosted_etcd ? format("https://%s:2379", cidrhost(var.service_cidr, 15)) : join(",", formatlist("https://%s:2379", var.etcd_servers))}"
hyperkube_image = "${var.container_images["hyperkube"]}"
pod_checkpointer_image = "${var.container_images["pod_checkpointer"]}"
coredns_image = "${var.container_images["coredns"]}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
kube_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
etcd_servers = "${join(",", formatlist("https://%s:2379", var.etcd_servers))}"
control_plane_replicas = "${max(2, length(var.etcd_servers))}"
ca_cert = "${base64encode(var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_certificate)}"
cloud_provider = "${var.cloud_provider}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
cluster_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
trusted_certs_dir = "${var.trusted_certs_dir}"
apiserver_port = "${var.apiserver_port}"
ca_cert = "${base64encode(tls_self_signed_cert.kube-ca.cert_pem)}"
ca_key = "${base64encode(tls_private_key.kube-ca.private_key_pem)}"
server = "${format("https://%s:%s", element(var.api_servers, 0), var.apiserver_port)}"
apiserver_key = "${base64encode(tls_private_key.apiserver.private_key_pem)}"
apiserver_cert = "${base64encode(tls_locally_signed_cert.apiserver.cert_pem)}"
serviceaccount_pub = "${base64encode(tls_private_key.service-account.public_key_pem)}"
serviceaccount_key = "${base64encode(tls_private_key.service-account.private_key_pem)}"
etcd_ca_cert = "${base64encode(tls_self_signed_cert.etcd-ca.cert_pem)}"
etcd_ca_cert = "${base64encode(tls_self_signed_cert.etcd-ca.cert_pem)}"
etcd_client_cert = "${base64encode(tls_locally_signed_cert.client.cert_pem)}"
etcd_client_key = "${base64encode(tls_private_key.client.private_key_pem)}"
etcd_client_key = "${base64encode(tls_private_key.client.private_key_pem)}"
aggregation_flags = "${var.enable_aggregation == "true" ? indent(8, local.aggregation_flags) : ""}"
aggregation_ca_cert = "${var.enable_aggregation == "true" ? base64encode(join(" ", tls_self_signed_cert.aggregation-ca.*.cert_pem)) : ""}"
aggregation_client_cert = "${var.enable_aggregation == "true" ? base64encode(join(" ", tls_locally_signed_cert.aggregation-client.*.cert_pem)) : ""}"
aggregation_client_key = "${var.enable_aggregation == "true" ? base64encode(join(" ", tls_private_key.aggregation-client.*.private_key_pem)) : ""}"
}
}
# Generated kubeconfig
resource "local_file" "kubeconfig" {
content = "${data.template_file.kubeconfig.rendered}"
locals {
aggregation_flags = <<EOF
- --proxy-client-cert-file=/etc/kubernetes/secrets/aggregation-client.crt
- --proxy-client-key-file=/etc/kubernetes/secrets/aggregation-client.key
- --requestheader-client-ca-file=/etc/kubernetes/secrets/aggregation-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
EOF
}
# Generated kubeconfig for Kubelets
resource "local_file" "kubeconfig-kubelet" {
content = "${data.template_file.kubeconfig-kubelet.rendered}"
filename = "${var.asset_dir}/auth/kubeconfig-kubelet"
}
# Generated admin kubeconfig (bootkube requires it be at auth/kubeconfig)
# https://github.com/kubernetes-incubator/bootkube/blob/master/pkg/bootkube/bootkube.go#L42
resource "local_file" "kubeconfig-admin" {
content = "${data.template_file.kubeconfig-admin.rendered}"
filename = "${var.asset_dir}/auth/kubeconfig"
}
# Generated kubeconfig with user-context
resource "local_file" "user-kubeconfig" {
content = "${data.template_file.user-kubeconfig.rendered}"
# Generated admin kubeconfig in a file named after the cluster
resource "local_file" "kubeconfig-admin-named" {
content = "${data.template_file.kubeconfig-admin.rendered}"
filename = "${var.asset_dir}/auth/${var.cluster_name}-config"
}
data "template_file" "kubeconfig" {
template = "${file("${path.module}/resources/kubeconfig")}"
data "template_file" "kubeconfig-kubelet" {
template = "${file("${path.module}/resources/kubeconfig-kubelet")}"
vars {
ca_cert = "${base64encode(var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_certificate)}"
ca_cert = "${base64encode(tls_self_signed_cert.kube-ca.cert_pem)}"
kubelet_cert = "${base64encode(tls_locally_signed_cert.kubelet.cert_pem)}"
kubelet_key = "${base64encode(tls_private_key.kubelet.private_key_pem)}"
server = "${format("https://%s:443", element(var.api_servers, 0))}"
server = "${format("https://%s:%s", element(var.api_servers, 0), var.apiserver_port)}"
}
}
data "template_file" "user-kubeconfig" {
template = "${file("${path.module}/resources/user-kubeconfig")}"
data "template_file" "kubeconfig-admin" {
template = "${file("${path.module}/resources/kubeconfig-admin")}"
vars {
name = "${var.cluster_name}"
ca_cert = "${base64encode(var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_certificate)}"
kubelet_cert = "${base64encode(tls_locally_signed_cert.kubelet.cert_pem)}"
kubelet_key = "${base64encode(tls_private_key.kubelet.private_key_pem)}"
server = "${format("https://%s:443", element(var.api_servers, 0))}"
ca_cert = "${base64encode(tls_self_signed_cert.kube-ca.cert_pem)}"
kubelet_cert = "${base64encode(tls_locally_signed_cert.admin.cert_pem)}"
kubelet_key = "${base64encode(tls_private_key.admin.private_key_pem)}"
server = "${format("https://%s:%s", element(var.api_servers, 0), var.apiserver_port)}"
}
}

47
conditional.tf Normal file

@@ -0,0 +1,47 @@
# Assets generated only when certain options are chosen
resource "template_dir" "flannel-manifests" {
count = "${var.networking == "flannel" ? 1 : 0}"
source_dir = "${path.module}/resources/flannel"
destination_dir = "${var.asset_dir}/manifests-networking"
vars {
flannel_image = "${var.container_images["flannel"]}"
flannel_cni_image = "${var.container_images["flannel_cni"]}"
pod_cidr = "${var.pod_cidr}"
}
}
resource "template_dir" "calico-manifests" {
count = "${var.networking == "calico" ? 1 : 0}"
source_dir = "${path.module}/resources/calico"
destination_dir = "${var.asset_dir}/manifests-networking"
vars {
calico_image = "${var.container_images["calico"]}"
calico_cni_image = "${var.container_images["calico_cni"]}"
network_mtu = "${var.network_mtu}"
network_encapsulation = "${indent(2, var.network_encapsulation == "vxlan" ? "vxlanMode: Always" : "ipipMode: Always")}"
ipip_enabled = "${var.network_encapsulation == "ipip" ? true : false}"
ipip_readiness = "${var.network_encapsulation == "ipip" ? indent(16, "- --bird-ready") : ""}"
vxlan_enabled = "${var.network_encapsulation == "vxlan" ? true : false}"
network_ip_autodetection_method = "${var.network_ip_autodetection_method}"
pod_cidr = "${var.pod_cidr}"
enable_reporting = "${var.enable_reporting}"
}
}
resource "template_dir" "kube-router-manifests" {
count = "${var.networking == "kube-router" ? 1 : 0}"
source_dir = "${path.module}/resources/kube-router"
destination_dir = "${var.asset_dir}/manifests-networking"
vars {
kube_router_image = "${var.container_images["kube_router"]}"
flannel_cni_image = "${var.container_images["flannel_cni"]}"
network_mtu = "${var.network_mtu}"
}
}


@@ -1,25 +1,23 @@
output "id" {
value = "${sha1("${template_dir.bootstrap-manifests.id} ${local_file.kubeconfig.id}")}"
value = "${sha1("${template_dir.bootstrap-manifests.id} ${template_dir.manifests.id}")}"
}
output "content_hash" {
value = "${sha1("${template_dir.bootstrap-manifests.id} ${template_dir.manifests.id}")}"
}
output "kube_dns_service_ip" {
output "cluster_dns_service_ip" {
value = "${cidrhost(var.service_cidr, 10)}"
}
output "etcd_service_ip" {
value = "${cidrhost(var.service_cidr, 15)}"
// Generated kubeconfig for Kubelets (i.e. lower privilege than admin)
output "kubeconfig-kubelet" {
value = "${data.template_file.kubeconfig-kubelet.rendered}"
}
output "kubeconfig" {
value = "${data.template_file.kubeconfig.rendered}"
}
output "user-kubeconfig" {
value = "${local_file.user-kubeconfig.filename}"
// Generated kubeconfig for admins (i.e. human super-user)
output "kubeconfig-admin" {
value = "${data.template_file.kubeconfig-admin.rendered}"
}
# etcd TLS assets
@@ -57,7 +55,7 @@ output "etcd_peer_key" {
# contents so the raw components of the kubeconfig may be needed.
output "ca_cert" {
value = "${base64encode(var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_certificate)}"
value = "${base64encode(tls_self_signed_cert.kube-ca.cert_pem)}"
}
output "kubelet_cert" {
@@ -69,5 +67,5 @@ output "kubelet_key" {
}
output "server" {
value = "${format("https://%s:443", element(var.api_servers, 0))}"
value = "${format("https://%s:%s", element(var.api_servers, 0), var.apiserver_port)}"
}
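Since the user-kubeconfig output is removed, consumers that previously copied the written file should switch to the rendered kubeconfig-admin output. A minimal sketch, assuming the module is named bootkube and the target path is up to the caller:

# Persist the rendered admin kubeconfig alongside the other generated assets.
resource "local_file" "kubeconfig-admin" {
  content  = "${module.bootkube.kubeconfig-admin}"
  filename = "/home/core/mycluster/auth/kubeconfig"
}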


@@ -3,35 +3,36 @@ kind: Pod
metadata:
name: bootstrap-kube-apiserver
namespace: kube-system
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
containers:
- name: kube-apiserver
image: ${hyperkube_image}
command:
- /usr/bin/flock
- /var/lock/api-server.lock
- /hyperkube
- apiserver
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --anonymous-auth=false
- --authorization-mode=RBAC
- --bind-address=0.0.0.0
- --client-ca-file=/etc/kubernetes/secrets/ca.crt
- --cloud-provider=${cloud_provider}
- --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,Priority
- --etcd-cafile=/etc/kubernetes/secrets/etcd-client-ca.crt
- --etcd-certfile=/etc/kubernetes/secrets/etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/secrets/etcd-client.key
- --etcd-quorum-read=true
- --etcd-servers=${etcd_servers}
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
- --secure-port=443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --secure-port=${apiserver_port}
- --service-account-key-file=/etc/kubernetes/secrets/service-account.pub
- --service-cluster-ip-range=${service_cidr}
- --cloud-provider=${cloud_provider}
- --storage-backend=etcd3
- --tls-ca-file=/etc/kubernetes/secrets/ca.crt
- --tls-cert-file=/etc/kubernetes/secrets/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/secrets/apiserver.key
env:
@@ -40,23 +41,16 @@ spec:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- mountPath: /etc/ssl/certs
name: ssl-certs-host
- name: secrets
mountPath: /etc/kubernetes/secrets
readOnly: true
- mountPath: /etc/kubernetes/secrets
name: secrets
- name: ssl-certs-host
mountPath: /etc/ssl/certs
readOnly: true
- mountPath: /var/lock
name: var-lock
readOnly: false
hostNetwork: true
volumes:
- name: secrets
hostPath:
path: /etc/kubernetes/bootstrap-secrets
- name: ssl-certs-host
hostPath:
path: /usr/share/ca-certificates
- name: var-lock
hostPath:
path: /var/lock
path: ${trusted_certs_dir}


@@ -3,6 +3,8 @@ kind: Pod
metadata:
name: bootstrap-kube-controller-manager
namespace: kube-system
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
containers:
- name: kube-controller-manager
@@ -12,24 +14,27 @@ spec:
- controller-manager
- --allocate-node-cidrs=true
- --cluster-cidr=${pod_cidr}
- --service-cluster-ip-range=${service_cidr}
- --cloud-provider=${cloud_provider}
- --cluster-signing-cert-file=/etc/kubernetes/secrets/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/secrets/ca.key
- --configure-cloud-routes=false
- --kubeconfig=/etc/kubernetes/kubeconfig
- --kubeconfig=/etc/kubernetes/secrets/kubeconfig
- --leader-elect=true
- --root-ca-file=/etc/kubernetes/bootstrap-secrets/ca.crt
- --service-account-private-key-file=/etc/kubernetes/bootstrap-secrets/service-account.key
- --root-ca-file=/etc/kubernetes/secrets/ca.crt
- --service-account-private-key-file=/etc/kubernetes/secrets/service-account.key
volumeMounts:
- name: kubernetes
mountPath: /etc/kubernetes
- name: secrets
mountPath: /etc/kubernetes/secrets
readOnly: true
- name: ssl-host
mountPath: /etc/ssl/certs
readOnly: true
hostNetwork: true
volumes:
- name: kubernetes
- name: secrets
hostPath:
path: /etc/kubernetes
path: /etc/kubernetes/bootstrap-secrets
- name: ssl-host
hostPath:
path: /usr/share/ca-certificates
path: ${trusted_certs_dir}


@@ -3,6 +3,8 @@ kind: Pod
metadata:
name: bootstrap-kube-scheduler
namespace: kube-system
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
containers:
- name: kube-scheduler
@@ -10,14 +12,14 @@ spec:
command:
- ./hyperkube
- scheduler
- --kubeconfig=/etc/kubernetes/kubeconfig
- --kubeconfig=/etc/kubernetes/secrets/kubeconfig
- --leader-elect=true
volumeMounts:
- name: kubernetes
mountPath: /etc/kubernetes
- name: secrets
mountPath: /etc/kubernetes/secrets
readOnly: true
hostNetwork: true
volumes:
- name: kubernetes
- name: secrets
hostPath:
path: /etc/kubernetes
path: /etc/kubernetes/bootstrap-secrets


@@ -0,0 +1,12 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgpconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPConfiguration
plural: bgpconfigurations
singular: bgpconfiguration


@@ -0,0 +1,12 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgppeers.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPPeer
plural: bgppeers
singular: bgppeer


@@ -0,0 +1,12 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: blockaffinities.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BlockAffinity
plural: blockaffinities
singular: blockaffinity


@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system


@@ -0,0 +1,108 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: calico-node
rules:
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
verbs:
- get
- apiGroups: [""]
resources:
- endpoints
- services
verbs:
- watch
- list
# Used by Calico for policy information
- apiGroups: [""]
resources:
- pods
- namespaces
- serviceaccounts
verbs:
- list
- watch
- apiGroups: [""]
resources:
- nodes/status
verbs:
# Calico patches the node NetworkUnavailable status
- patch
# Calico updates some info in node annotations
- update
# CNI plugin patches pods/status
- apiGroups: [""]
resources:
- pods/status
verbs:
- patch
# Calico reads some info on nodes
- apiGroups: [""]
resources:
- nodes
verbs:
- get
- list
- watch
# Calico monitors Kubernetes NetworkPolicies
- apiGroups: ["networking.k8s.io"]
resources:
- networkpolicies
verbs:
- watch
- list
# Calico monitors its CRDs
- apiGroups: ["crd.projectcalico.org"]
resources:
- globalfelixconfigs
- felixconfigurations
- bgppeers
- globalbgpconfigs
- bgpconfigurations
- ippools
- ipamblocks
- globalnetworkpolicies
- globalnetworksets
- networksets
- networkpolicies
- clusterinformations
- hostendpoints
verbs:
- get
- list
- watch
- apiGroups: ["crd.projectcalico.org"]
resources:
- felixconfigurations
- ippools
- clusterinformations
verbs:
- create
- update
# Calico may perform IPAM allocations
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
- ipamblocks
- ipamhandles
verbs:
- get
- list
- create
- update
- delete
- apiGroups: ["crd.projectcalico.org"]
resources:
- ipamconfigs
verbs:
- get
# Watch block affinities for route aggregation
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
verbs:
- watch


@@ -0,0 +1,12 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: clusterinformations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: ClusterInformation
plural: clusterinformations
singular: clusterinformation


@@ -0,0 +1,41 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: calico-config
namespace: kube-system
data:
# Disable Typha for now.
typha_service_name: "none"
# Calico backend to use
calico_backend: "bird"
# Calico MTU
veth_mtu: "${network_mtu}"
# The CNI network configuration to install on each node.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
}
]
}


@@ -0,0 +1,191 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: calico-node
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
priorityClassName: system-node-critical
serviceAccountName: calico-node
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
initContainers:
# Install Calico CNI binaries and CNI network config file on nodes
- name: install-cni
image: ${calico_cni_image}
command: ["/install-cni.sh"]
env:
# Name of the CNI config file to create on each node.
- name: CNI_CONF_NAME
value: "10-calico.conflist"
# Set node name based on k8s nodeName
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Contents of the CNI config to create on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
- name: CNI_NET_DIR
value: "/etc/kubernetes/cni/net.d"
- name: CNI_MTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
- name: SLEEP
value: "false"
volumeMounts:
- name: cni-bin-dir
mountPath: /host/opt/cni/bin
- name: cni-conf-dir
mountPath: /host/etc/cni/net.d
containers:
- name: calico-node
image: ${calico_image}
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
value: "kubernetes"
# Wait for datastore
- name: WAIT_FOR_DATASTORE
value: "true"
# Typha support: controlled by the ConfigMap.
- name: FELIX_TYPHAK8SSERVICENAME
valueFrom:
configMapKeyRef:
name: calico-config
key: typha_service_name
- name: FELIX_USAGEREPORTINGENABLED
value: "${enable_reporting}"
# Set node name based on k8s nodeName.
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Calico network backend
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
- name: IP_AUTODETECTION_METHOD
value: "${network_ip_autodetection_method}"
# Whether Felix should enable IP-in-IP tunnel
- name: FELIX_IPINIPENABLED
value: "${ipip_enabled}"
# MTU to set on the IPIP tunnel (if enabled)
- name: FELIX_IPINIPMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Whether Felix should enable VXLAN tunnel
- name: FELIX_VXLANENABLED
value: "${vxlan_enabled}"
# MTU to set on the VXLAN tunnel (if enabled)
- name: FELIX_VXLANMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
- name: NO_DEFAULT_POOLS
value: "true"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPV6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
# Enable felix info logging.
- name: FELIX_LOGSEVERITYSCREEN
value: "info"
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 150m
livenessProbe:
httpGet:
path: /liveness
port: 9099
host: localhost
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
exec:
command:
- /bin/calico-node
- -felix-ready
${ipip_readiness}
periodSeconds: 10
volumeMounts:
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: var-lib-calico
mountPath: /var/lib/calico
readOnly: false
- name: var-run-calico
mountPath: /var/run/calico
readOnly: false
- name: xtables-lock
mountPath: /run/xtables.lock
readOnly: false
terminationGracePeriodSeconds: 0
volumes:
# Used by calico/node
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-lib-calico
hostPath:
path: /var/lib/calico
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: xtables-lock
hostPath:
type: FileOrCreate
path: /run/xtables.lock
# Used by install-cni
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-conf-dir
hostPath:
path: /etc/kubernetes/cni/net.d


@@ -0,0 +1,10 @@
apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
name: default-ipv4-ippool
spec:
blockSize: 24
cidr: ${pod_cidr}
${network_encapsulation}
natOutgoing: true
nodeSelector: all()


@@ -0,0 +1,12 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: felixconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: FelixConfiguration
plural: felixconfigurations
singular: felixconfiguration


@@ -0,0 +1,12 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworkpolicies.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkPolicy
plural: globalnetworkpolicies
singular: globalnetworkpolicy


@@ -0,0 +1,12 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworksets.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkSet
plural: globalnetworksets
singular: globalnetworkset


@@ -0,0 +1,12 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: hostendpoints.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: HostEndpoint
plural: hostendpoints
singular: hostendpoint


@@ -0,0 +1,12 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamblocks.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMBlock
plural: ipamblocks
singular: ipamblock


@@ -0,0 +1,12 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamconfigs.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMConfig
plural: ipamconfigs
singular: ipamconfig


@@ -0,0 +1,12 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamhandles.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMHandle
plural: ipamhandles
singular: ipamhandle


@@ -0,0 +1,12 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ippools.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPPool
plural: ippools
singular: ippool


@@ -0,0 +1,12 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networkpolicies.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkPolicy
plural: networkpolicies
singular: networkpolicy


@@ -0,0 +1,12 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networksets.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkSet
plural: networksets
singular: networkset


@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system


@@ -1,26 +0,0 @@
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"name": "bootstrap-etcd-service",
"namespace": "kube-system"
},
"spec": {
"selector": {
"k8s-app": "boot-etcd"
},
"clusterIP": "${bootstrap_etcd_service_ip}",
"ports": [
{
"name": "client",
"port": 12379,
"protocol": "TCP"
},
{
"name": "peers",
"port": 12380,
"protocol": "TCP"
}
]
}
}


@@ -1,36 +0,0 @@
{
"apiVersion": "etcd.coreos.com/v1beta1",
"kind": "Cluster",
"metadata": {
"name": "kube-etcd",
"namespace": "kube-system"
},
"spec": {
"size": 1,
"version": "v3.1.8",
"pod": {
"nodeSelector": {
"node-role.kubernetes.io/master": ""
},
"tolerations": [
{
"key": "node-role.kubernetes.io/master",
"operator": "Exists",
"effect": "NoSchedule"
}
]
},
"selfHosted": {
"bootMemberClientEndpoint": "https://${bootstrap_etcd_service_ip}:12379"
},
"TLS": {
"static": {
"member": {
"peerSecret": "etcd-peer-tls",
"serverSecret": "etcd-server-tls"
},
"operatorSecret": "etcd-client-tls"
}
}
}
}


@@ -1,41 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: bootstrap-etcd
namespace: kube-system
labels:
k8s-app: boot-etcd
spec:
containers:
- name: etcd
image: ${etcd_image}
command:
- /usr/local/bin/etcd
- --name=boot-etcd
- --listen-client-urls=https://0.0.0.0:12379
- --listen-peer-urls=https://0.0.0.0:12380
- --advertise-client-urls=https://${bootstrap_etcd_service_ip}:12379
- --initial-advertise-peer-urls=https://${bootstrap_etcd_service_ip}:12380
- --initial-cluster=boot-etcd=https://${bootstrap_etcd_service_ip}:12380
- --initial-cluster-token=bootkube
- --initial-cluster-state=new
- --data-dir=/var/etcd/data
- --peer-client-cert-auth=true
- --peer-trusted-ca-file=/etc/kubernetes/secrets/etcd/peer-ca.crt
- --peer-cert-file=/etc/kubernetes/secrets/etcd/peer.crt
- --peer-key-file=/etc/kubernetes/secrets/etcd/peer.key
- --client-cert-auth=true
- --trusted-ca-file=/etc/kubernetes/secrets/etcd/server-ca.crt
- --cert-file=/etc/kubernetes/secrets/etcd/server.crt
- --key-file=/etc/kubernetes/secrets/etcd/server.key
volumeMounts:
- mountPath: /etc/kubernetes/secrets
name: secrets
readOnly: true
volumes:
- name: secrets
hostPath:
path: /etc/kubernetes/bootstrap-secrets
hostNetwork: true
restartPolicy: Never
dnsPolicy: ClusterFirstWithHostNet


@@ -1,10 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: etcd-client-tls
namespace: kube-system
type: Opaque
data:
etcd-client-ca.crt: ${etcd_ca_cert}
etcd-client.crt: ${etcd_client_cert}
etcd-client.key: ${etcd_client_key}


@@ -1,43 +0,0 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: etcd-operator
namespace: kube-system
labels:
k8s-app: etcd-operator
spec:
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
replicas: 1
template:
metadata:
labels:
k8s-app: etcd-operator
spec:
containers:
- name: etcd-operator
image: quay.io/coreos/etcd-operator:v0.4.2
command:
- /usr/local/bin/etcd-operator
- --analytics=false
env:
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
nodeSelector:
node-role.kubernetes.io/master: ""
securityContext:
runAsNonRoot: true
runAsUser: 65534
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule


@@ -1,10 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: etcd-peer-tls
namespace: kube-system
type: Opaque
data:
peer-ca.crt: ${etcd_ca_cert}
peer.crt: ${etcd_peer_cert}
peer.key: ${etcd_peer_key}


@@ -1,10 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: etcd-server-tls
namespace: kube-system
type: Opaque
data:
server-ca.crt: ${etcd_ca_cert}
server.crt: ${etcd_server_cert}
server.key: ${etcd_server_key}


@@ -1,18 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: etcd-service
namespace: kube-system
# This alpha annotation will retain the endpoints even if the etcd pod isn't ready.
# This feature is always enabled in the endpoint controller in k8s even though it is alpha.
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
selector:
app: etcd
etcd_cluster: kube-etcd
clusterIP: ${etcd_service_ip}
ports:
- name: client
port: 2379
protocol: TCP


@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system


@@ -0,0 +1,24 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: flannel
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch


@@ -0,0 +1,37 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: flannel-config
namespace: kube-system
labels:
tier: node
k8s-app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "${pod_cidr}",
"Backend": {
"Type": "vxlan",
"Port": 4789
}
}


@@ -0,0 +1,85 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: flannel
namespace: kube-system
labels:
k8s-app: flannel
spec:
selector:
matchLabels:
k8s-app: flannel
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: flannel
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
priorityClassName: system-node-critical
serviceAccountName: flannel
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
containers:
- name: flannel
image: ${flannel_image}
command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=$(POD_IP)"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
securityContext:
privileged: true
resources:
requests:
cpu: 100m
volumeMounts:
- name: flannel-config
mountPath: /etc/kube-flannel/
- name: run-flannel
mountPath: /run/flannel
- name: install-cni
image: ${flannel_cni_image}
command: ["/install-cni.sh"]
env:
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: flannel-config
key: cni-conf.json
volumeMounts:
- name: cni-bin-dir
mountPath: /host/opt/cni/bin/
- name: cni-conf-dir
mountPath: /host/etc/cni/net.d
volumes:
- name: flannel-config
configMap:
name: flannel-config
- name: run-flannel
hostPath:
path: /run/flannel
# Used by install-cni
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-conf-dir
hostPath:
path: /etc/kubernetes/cni/net.d


@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system


@@ -1,12 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1alpha1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:default-sa
name: kube-router
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kube-router
subjects:
- kind: ServiceAccount
name: default
name: kube-router
namespace: kube-system
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io


@@ -0,0 +1,33 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: kube-router
rules:
- apiGroups:
- ""
resources:
- namespaces
- pods
- services
- nodes
- endpoints
verbs:
- list
- get
- watch
- apiGroups:
- "networking.k8s.io"
resources:
- networkpolicies
verbs:
- list
- get
- watch
- apiGroups:
- extensions
resources:
- networkpolicies
verbs:
- get
- list
- watch


@@ -0,0 +1,30 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-router-config
namespace: kube-system
data:
cni-conf.json: |
{
"name": "pod-network",
"cniVersion": "0.3.1",
"plugins":[
{
"name": "kube-router",
"type": "bridge",
"bridge": "kube-bridge",
"isDefaultGateway": true,
"mtu": ${network_mtu},
"ipam": {
"type": "host-local"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {
"portMappings": true
}
}
]
}


@@ -0,0 +1,90 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-router
namespace: kube-system
labels:
k8s-app: kube-router
spec:
selector:
matchLabels:
k8s-app: kube-router
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: kube-router
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
priorityClassName: system-node-critical
serviceAccountName: kube-router
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
containers:
- name: kube-router
image: ${kube_router_image}
args:
- --kubeconfig=/etc/kubernetes/kubeconfig
- --run-router=true
- --run-firewall=true
- --run-service-proxy=false
- --v=5
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: KUBE_ROUTER_CNI_CONF_FILE
value: /etc/cni/net.d/10-kuberouter.conflist
securityContext:
privileged: true
volumeMounts:
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: cni-conf-dir
mountPath: /etc/cni/net.d
- name: kubeconfig
mountPath: /etc/kubernetes
readOnly: true
- name: install-cni
image: ${flannel_cni_image}
command: ["/install-cni.sh"]
env:
- name: CNI_OLD_NAME
value: 10-flannel.conflist
- name: CNI_CONF_NAME
value: 10-kuberouter.conflist
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: kube-router-config
key: cni-conf.json
volumeMounts:
- name: cni-bin-dir
mountPath: /host/opt/cni/bin
- name: cni-conf-dir
mountPath: /host/etc/cni/net.d
volumes:
# Used by kube-router
- name: lib-modules
hostPath:
path: /lib/modules
- name: kubeconfig
configMap:
name: kubeconfig-in-cluster
# Used by install-cni
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-conf-dir
hostPath:
path: /etc/kubernetes/cni/net.d


@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-router
namespace: kube-system


@@ -10,6 +10,7 @@ users:
user:
client-certificate-data: ${kubelet_cert}
client-key-data: ${kubelet_key}
current-context: ${name}-context
contexts:
- name: ${name}-context
context:


@@ -0,0 +1,16 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:coredns
labels:
kubernetes.io/bootstrapping: rbac-defaults
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system


@@ -0,0 +1,21 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:coredns
labels:
kubernetes.io/bootstrapping: rbac-defaults
rules:
- apiGroups: [""]
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
- apiGroups: [""]
resources:
- nodes
verbs:
- get


@@ -0,0 +1,25 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health
ready
log . {
class error
}
kubernetes ${cluster_domain_suffix} in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}


@@ -0,0 +1,102 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: coredns
kubernetes.io/name: "CoreDNS"
kubernetes.io/cluster-service: "true"
spec:
replicas: ${control_plane_replicas}
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
tier: control-plane
k8s-app: coredns
template:
metadata:
labels:
tier: control-plane
k8s-app: coredns
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: tier
operator: In
values:
- control-plane
- key: k8s-app
operator: In
values:
- coredns
topologyKey: kubernetes.io/hostname
priorityClassName: system-cluster-critical
serviceAccountName: coredns
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: coredns
image: ${coredns_image}
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config
mountPath: /etc/coredns
readOnly: true
ports:
- name: dns
protocol: UDP
containerPort: 53
- name: dns-tcp
protocol: TCP
containerPort: 53
- name: metrics
protocol: TCP
containerPort: 9153
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /ready
port: 8181
scheme: HTTP
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
dnsPolicy: Default
volumes:
- name: config
configMap:
name: coredns
items:
- key: Corefile
path: Corefile


@@ -0,0 +1,7 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"


@@ -0,0 +1,23 @@
apiVersion: v1
kind: Service
metadata:
name: coredns
namespace: kube-system
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9153"
labels:
k8s-app: coredns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: coredns
clusterIP: ${cluster_dns_service_ip}
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP


@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-apiserver
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kube-apiserver
namespace: kube-system


@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: kube-apiserver


@@ -12,3 +12,7 @@ data:
etcd-client-ca.crt: ${etcd_ca_cert}
etcd-client.crt: ${etcd_client_cert}
etcd-client.key: ${etcd_client_key}
aggregation-ca.crt: ${aggregation_ca_cert}
aggregation-client.crt: ${aggregation_client_cert}
aggregation-client.key: ${aggregation_client_key}


@@ -1,4 +1,4 @@
apiVersion: "extensions/v1beta1"
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-apiserver
@@ -7,6 +7,14 @@ metadata:
tier: control-plane
k8s-app: kube-apiserver
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: kube-apiserver
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
@@ -14,17 +22,23 @@ spec:
k8s-app: kube-apiserver
annotations:
checkpointer.alpha.coreos.com/checkpoint: "true"
scheduler.alpha.kubernetes.io/critical-pod: ''
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
nodeSelector:
node-role.kubernetes.io/master: ""
priorityClassName: system-cluster-critical
serviceAccountName: kube-apiserver
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: kube-apiserver
image: ${hyperkube_image}
command:
- /usr/bin/flock
- /var/lock/api-server.lock
- /hyperkube
- apiserver
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --anonymous-auth=false
@@ -32,19 +46,19 @@ spec:
- --bind-address=0.0.0.0
- --client-ca-file=/etc/kubernetes/secrets/ca.crt
- --cloud-provider=${cloud_provider}
- --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,Priority
- --etcd-cafile=/etc/kubernetes/secrets/etcd-client-ca.crt
- --etcd-certfile=/etc/kubernetes/secrets/etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/secrets/etcd-client.key
- --etcd-quorum-read=true
- --etcd-servers=${etcd_servers}
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
- --secure-port=443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname${aggregation_flags}
- --secure-port=${apiserver_port}
- --service-account-key-file=/etc/kubernetes/secrets/service-account.pub
- --service-cluster-ip-range=${service_cidr}
- --storage-backend=etcd3
- --tls-ca-file=/etc/kubernetes/secrets/ca.crt
- --tls-cert-file=/etc/kubernetes/secrets/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/secrets/apiserver.key
env:
@@ -53,35 +67,16 @@ spec:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- mountPath: /etc/ssl/certs
name: ssl-certs-host
- name: secrets
mountPath: /etc/kubernetes/secrets
readOnly: true
- mountPath: /etc/kubernetes/secrets
name: secrets
- name: ssl-certs-host
mountPath: /etc/ssl/certs
readOnly: true
- mountPath: /var/lock
name: var-lock
readOnly: false
hostNetwork: true
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- name: ssl-certs-host
hostPath:
path: /usr/share/ca-certificates
- name: secrets
secret:
secretName: kube-apiserver
- name: var-lock
- name: ssl-certs-host
hostPath:
path: /var/lock
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
path: ${trusted_certs_dir}


@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-controller-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-controller-manager
subjects:
- kind: ServiceAccount
name: kube-controller-manager
namespace: kube-system


@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: kube-controller-manager


@@ -7,3 +7,5 @@ type: Opaque
data:
service-account.key: ${serviceaccount_key}
ca.crt: ${ca_cert}
ca.key: ${ca_key}


@@ -1,4 +1,4 @@
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-controller-manager
@@ -7,14 +7,18 @@ metadata:
tier: control-plane
k8s-app: kube-controller-manager
spec:
replicas: 2
replicas: ${control_plane_replicas}
selector:
matchLabels:
tier: control-plane
k8s-app: kube-controller-manager
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-controller-manager
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
affinity:
podAntiAffinity:
@@ -32,48 +36,61 @@ spec:
values:
- kube-controller-manager
topologyKey: kubernetes.io/hostname
nodeSelector:
node-role.kubernetes.io/master: ""
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
serviceAccountName: kube-controller-manager
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: kube-controller-manager
image: ${hyperkube_image}
command:
- ./hyperkube
- controller-manager
- --use-service-account-credentials
- --allocate-node-cidrs=true
- --cloud-provider=${cloud_provider}
- --cluster-cidr=${pod_cidr}
- --service-cluster-ip-range=${service_cidr}
- --cluster-signing-cert-file=/etc/kubernetes/secrets/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/secrets/ca.key
- --configure-cloud-routes=false
- --leader-elect=true
- --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins
- --pod-eviction-timeout=1m
- --root-ca-file=/etc/kubernetes/secrets/ca.crt
- --service-account-private-key-file=/etc/kubernetes/secrets/service-account.key
livenessProbe:
httpGet:
scheme: HTTPS
path: /healthz
port: 10252 # Note: Using default port. Update if --port option is set differently.
port: 10257
initialDelaySeconds: 15
timeoutSeconds: 15
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/secrets
readOnly: true
- name: volumeplugins
mountPath: /var/lib/kubelet/volumeplugins
readOnly: true
- name: ssl-host
mountPath: /etc/ssl/certs
readOnly: true
nodeSelector:
node-role.kubernetes.io/master: ""
securityContext:
runAsNonRoot: true
runAsUser: 65534
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- name: secrets
secret:
secretName: kube-controller-manager
- name: ssl-host
hostPath:
path: /usr/share/ca-certificates
path: ${trusted_certs_dir}
- name: volumeplugins
hostPath:
path: /var/lib/kubelet/volumeplugins
dnsPolicy: Default # Don't use cluster DNS.


@@ -1,155 +0,0 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
spec:
# replicas: not specified here:
# 1. So that the Addon Manager does not reconcile this replicas parameter.
# 2. Default is 1.
# 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
containers:
- name: kubedns
image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
livenessProbe:
httpGet:
path: /healthcheck/kubedns
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
# we poll on pod startup for the Kubernetes master service and
# only setup the /readiness HTTP server once that's available.
initialDelaySeconds: 3
timeoutSeconds: 5
args:
- --domain=cluster.local.
- --dns-port=10053
- --config-dir=/kube-dns-config
- --v=2
env:
- name: PROMETHEUS_PORT
value: "10055"
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- containerPort: 10055
name: metrics
protocol: TCP
volumeMounts:
- name: kube-dns-config
mountPath: /kube-dns-config
- name: dnsmasq
image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
livenessProbe:
httpGet:
path: /healthcheck/dnsmasq
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- -v=2
- -logtostderr
- -configDir=/etc/k8s/dns/dnsmasq-nanny
- -restartDnsmasq=true
- --
- -k
- --cache-size=1000
- --log-facility=-
- --server=/cluster.local/127.0.0.1#10053
- --server=/in-addr.arpa/127.0.0.1#10053
- --server=/ip6.arpa/127.0.0.1#10053
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
# see: https://github.com/kubernetes/kubernetes/issues/29055 for details
resources:
requests:
cpu: 150m
memory: 20Mi
volumeMounts:
- name: kube-dns-config
mountPath: /etc/k8s/dns/dnsmasq-nanny
- name: sidecar
image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1
livenessProbe:
httpGet:
path: /metrics
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- --v=2
- --logtostderr
- --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
- --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
ports:
- containerPort: 10054
name: metrics
protocol: TCP
resources:
requests:
memory: 20Mi
cpu: 10m
dnsPolicy: Default # Don't use cluster DNS.
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- name: kube-dns-config
configMap:
name: kube-dns
optional: true


@@ -1,20 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: ${kube_dns_service_ip}
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP


@@ -1,58 +0,0 @@
apiVersion: "extensions/v1beta1"
kind: DaemonSet
metadata:
name: kube-etcd-network-checkpointer
namespace: kube-system
labels:
tier: control-plane
k8s-app: kube-etcd-network-checkpointer
spec:
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-etcd-network-checkpointer
annotations:
checkpointer.alpha.coreos.com/checkpoint: "true"
spec:
containers:
- image: quay.io/coreos/kenc:8f6e2e885f790030fbbb0496ea2a2d8830e58b8f
name: kube-etcd-network-checkpointer
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/kubernetes/selfhosted-etcd
name: checkpoint-dir
readOnly: false
- mountPath: /var/etcd
name: etcd-dir
readOnly: false
- mountPath: /var/lock
name: var-lock
readOnly: false
command:
- /usr/bin/flock
- /var/lock/kenc.lock
- -c
- "kenc -r -m iptables && kenc -m iptables"
hostNetwork: true
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- name: checkpoint-dir
hostPath:
path: /etc/kubernetes/checkpoint-iptables
- name: etcd-dir
hostPath:
path: /var/etcd
- name: var-lock
hostPath:
path: /var/lock
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate


@@ -1,24 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
k8s-app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"type": "flannel",
"delegate": {
"isDefaultGateway": true
}
}
net-conf.json: |
{
"Network": "${pod_cidr}",
"Backend": {
"Type": "vxlan"
}
}


@@ -1,77 +0,0 @@
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: kube-flannel
namespace: kube-system
labels:
tier: node
k8s-app: flannel
spec:
template:
metadata:
labels:
tier: node
k8s-app: flannel
spec:
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.7.1-amd64
command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=$(POD_IP)"]
securityContext:
privileged: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- name: run
mountPath: /run
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
- name: install-cni
image: quay.io/coreos/flannel-cni:0.1.0
command: ["/install-cni.sh"]
env:
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: kube-flannel-cfg
key: cni-conf.json
volumeMounts:
- name: cni
mountPath: /host/etc/cni/net.d
- name: host-cni-bin
mountPath: /host/opt/cni/bin/
hostNetwork: true
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- name: run
hostPath:
path: /run
- name: cni
hostPath:
path: /etc/kubernetes/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
- name: host-cni-bin
hostPath:
path: /opt/cni/bin
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate


@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-proxy
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node-proxier # Automatically created system role.
subjects:
- kind: ServiceAccount
name: kube-proxy
namespace: kube-system


@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: kube-proxy


@@ -1,4 +1,4 @@
apiVersion: "extensions/v1beta1"
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-proxy
@@ -7,14 +7,30 @@ metadata:
tier: node
k8s-app: kube-proxy
spec:
selector:
matchLabels:
tier: node
k8s-app: kube-proxy
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
tier: node
k8s-app: kube-proxy
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
priorityClassName: system-node-critical
serviceAccountName: kube-proxy
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
containers:
- name: kube-proxy
image: ${hyperkube_image}
@@ -30,30 +46,31 @@ spec:
valueFrom:
fieldRef:
fieldPath: spec.nodeName
livenessProbe:
httpGet:
path: /healthz
port: 10256
initialDelaySeconds: 15
timeoutSeconds: 15
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
- name: etc-kubernetes
- name: kubeconfig
mountPath: /etc/kubernetes
readOnly: true
hostNetwork: true
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: ssl-certs-host
mountPath: /etc/ssl/certs
readOnly: true
volumes:
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
- name: etc-kubernetes
- name: kubeconfig
configMap:
name: kubeconfig-in-cluster
- name: lib-modules
hostPath:
path: /etc/kubernetes
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
path: /lib/modules
- name: ssl-certs-host
hostPath:
path: ${trusted_certs_dir}


@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-scheduler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-scheduler
subjects:
- kind: ServiceAccount
name: kube-scheduler
namespace: kube-system


@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: kube-scheduler


@@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: volume-scheduler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:volume-scheduler
subjects:
- kind: ServiceAccount
name: kube-scheduler
namespace: kube-system


@@ -1,4 +1,4 @@
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-scheduler
@@ -7,14 +7,18 @@ metadata:
tier: control-plane
k8s-app: kube-scheduler
spec:
replicas: 2
replicas: ${control_plane_replicas}
selector:
matchLabels:
tier: control-plane
k8s-app: kube-scheduler
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-scheduler
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
affinity:
podAntiAffinity:
@@ -32,6 +36,17 @@ spec:
values:
- kube-scheduler
topologyKey: kubernetes.io/hostname
nodeSelector:
node-role.kubernetes.io/master: ""
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
serviceAccountName: kube-scheduler
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: kube-scheduler
image: ${hyperkube_image}
@@ -41,18 +56,8 @@ spec:
- --leader-elect=true
livenessProbe:
httpGet:
scheme: HTTPS
path: /healthz
port: 10251 # Note: Using default port. Update if --port option is set differently.
port: 10259
initialDelaySeconds: 15
timeoutSeconds: 15
nodeSelector:
node-role.kubernetes.io/master: ""
securityContext:
runAsNonRoot: true
runAsUser: 65534
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule


@@ -0,0 +1,24 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: kubeconfig-in-cluster
namespace: kube-system
data:
kubeconfig: |
apiVersion: v1
clusters:
- name: local
cluster:
# kubeconfig-in-cluster is for control plane components that must reach
# kube-apiserver before service IPs are available (e.g. 10.3.0.1)
server: ${server}
certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
users:
- name: service-account
user:
# Use service account token
tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
contexts:
- context:
cluster: local
user: service-account


@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubelet-delete
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubelet-delete
subjects:
- kind: Group
name: system:nodes
apiGroup: rbac.authorization.k8s.io


@@ -0,0 +1,10 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kubelet-delete
rules:
- apiGroups: [""]
resources:
- nodes
verbs:
- delete


@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system-nodes
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node
subjects:
- kind: Group
name: system:nodes
apiGroup: rbac.authorization.k8s.io


@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: pod-checkpointer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: pod-checkpointer
subjects:
- kind: ServiceAccount
name: pod-checkpointer
namespace: kube-system


@@ -0,0 +1,11 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pod-checkpointer
rules:
- apiGroups: [""]
resources:
- nodes
- nodes/proxy
verbs:
- get


@@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pod-checkpointer
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pod-checkpointer
subjects:
- kind: ServiceAccount
name: pod-checkpointer
namespace: kube-system


@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-checkpointer
namespace: kube-system
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods"]
verbs: ["get", "watch", "list"]
- apiGroups: [""] # "" indicates the core API group
resources: ["secrets", "configmaps"]
verbs: ["get"]


@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: pod-checkpointer


@@ -1,4 +1,4 @@
apiVersion: "extensions/v1beta1"
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: pod-checkpointer
@@ -7,6 +7,14 @@ metadata:
tier: control-plane
k8s-app: pod-checkpointer
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: pod-checkpointer
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
@@ -14,14 +22,24 @@ spec:
k8s-app: pod-checkpointer
annotations:
checkpointer.alpha.coreos.com/checkpoint: "true"
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
nodeSelector:
node-role.kubernetes.io/master: ""
priorityClassName: system-node-critical
serviceAccountName: pod-checkpointer
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: pod-checkpointer
image: quay.io/coreos/pod-checkpointer:4e7a7dab10bc4d895b66c21656291c6e0b017248
image: ${pod_checkpointer_image}
command:
- /checkpoint
- --v=4
- --lock-file=/var/run/lock/pod-checkpointer.lock
- --kubeconfig=/etc/checkpointer/kubeconfig
env:
- name: NODE_NAME
valueFrom:
@@ -35,28 +53,20 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
imagePullPolicy: Always
volumeMounts:
- mountPath: /etc/kubernetes
name: etc-kubernetes
- mountPath: /var/run
name: var-run
hostNetwork: true
nodeSelector:
node-role.kubernetes.io/master: ""
restartPolicy: Always
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
- name: kubeconfig
mountPath: /etc/checkpointer
- name: etc-kubernetes
mountPath: /etc/kubernetes
- name: var-run
mountPath: /var/run
volumes:
- name: kubeconfig
configMap:
name: kubeconfig-in-cluster
- name: etc-kubernetes
hostPath:
path: /etc/kubernetes
- name: var-run
hostPath:
path: /var/run
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate


@@ -2,4 +2,4 @@ cluster_name = "example"
api_servers = ["node1.example.com"]
etcd_servers = ["node1.example.com"]
asset_dir = "/home/core/mycluster"
experimental_self_hosted_etcd = false
networking = "flannel"

tls-aggregation.tf (new file, 105 lines)

@@ -0,0 +1,105 @@
# NOTE: Across this module, the following workaround is used:
# `"${var.some_var == "condition" ? join(" ", tls_private_key.aggregation-ca.*.private_key_pem) : ""}"`
# Due to https://github.com/hashicorp/hil/issues/50, both sides of a conditional
# are evaluated before one of them is discarded. When a `count` is used, resources
# can be referenced as lists with the `.*` notation, and those lists are allowed
# to be empty. The `join()` interpolation function then casts them back to a
# string. Since `count` can only be 0 or 1, the returned value is either empty
# (and discarded anyway) or the desired value.
# Kubernetes Aggregation CA (i.e. front-proxy-ca)
# Files: tls/{aggregation-ca.crt,aggregation-ca.key}
resource "tls_private_key" "aggregation-ca" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_self_signed_cert" "aggregation-ca" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
key_algorithm = "${tls_private_key.aggregation-ca.algorithm}"
private_key_pem = "${tls_private_key.aggregation-ca.private_key_pem}"
subject {
common_name = "kubernetes-front-proxy-ca"
}
is_ca_certificate = true
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"cert_signing",
]
}
resource "local_file" "aggregation-ca-key" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
content = "${tls_private_key.aggregation-ca.private_key_pem}"
filename = "${var.asset_dir}/tls/aggregation-ca.key"
}
resource "local_file" "aggregation-ca-crt" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
content = "${tls_self_signed_cert.aggregation-ca.cert_pem}"
filename = "${var.asset_dir}/tls/aggregation-ca.crt"
}
# Kubernetes apiserver (i.e. front-proxy-client)
# Files: tls/{aggregation-client.crt,aggregation-client.key}
resource "tls_private_key" "aggregation-client" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "aggregation-client" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
key_algorithm = "${tls_private_key.aggregation-client.algorithm}"
private_key_pem = "${tls_private_key.aggregation-client.private_key_pem}"
subject {
common_name = "kube-apiserver"
}
}
resource "tls_locally_signed_cert" "aggregation-client" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
cert_request_pem = "${tls_cert_request.aggregation-client.cert_request_pem}"
ca_key_algorithm = "${tls_self_signed_cert.aggregation-ca.key_algorithm}"
ca_private_key_pem = "${tls_private_key.aggregation-ca.private_key_pem}"
ca_cert_pem = "${tls_self_signed_cert.aggregation-ca.cert_pem}"
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"client_auth",
]
}
resource "local_file" "aggregation-client-key" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
content = "${tls_private_key.aggregation-client.private_key_pem}"
filename = "${var.asset_dir}/tls/aggregation-client.key"
}
resource "local_file" "aggregation-client-crt" {
count = "${var.enable_aggregation == "true" ? 1 : 0}"
content = "${tls_locally_signed_cert.aggregation-client.cert_pem}"
filename = "${var.asset_dir}/tls/aggregation-client.crt"
}
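Every aggregation resource above carries the same enable_aggregation ternary in its count, so none of these TLS assets exist unless the consuming configuration opts in. A minimal sketch, reusing the assumed module block from the earlier example:

module "bootkube" {
  # ... other variables unchanged ...

  # compared as a string in the ternaries above, so the value stays quoted
  enable_aggregation = "true"
}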


@@ -1,3 +1,15 @@
# etcd-ca.crt
resource "local_file" "etcd_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
filename = "${var.asset_dir}/tls/etcd-ca.crt"
}
# etcd-ca.key
resource "local_file" "etcd_ca_key" {
content = "${tls_private_key.etcd-ca.private_key_pem}"
filename = "${var.asset_dir}/tls/etcd-ca.key"
}
# etcd-client-ca.crt
resource "local_file" "etcd_client_ca_crt" {
content = "${tls_self_signed_cert.etcd-ca.cert_pem}"
@@ -96,17 +108,13 @@ resource "tls_cert_request" "client" {
ip_addresses = [
"127.0.0.1",
"${cidrhost(var.service_cidr, 15)}",
"${cidrhost(var.service_cidr, 20)}",
]
dns_names = "${concat(
dns_names = ["${concat(
var.etcd_servers,
list(
"localhost",
"*.kube-etcd.kube-system.svc.cluster.local",
"kube-etcd-client.kube-system.svc.cluster.local",
))}"
))}"]
}
resource "tls_locally_signed_cert" "client" {
@@ -139,20 +147,16 @@ resource "tls_cert_request" "server" {
common_name = "etcd-server"
organization = "etcd"
}
ip_addresses = [
"127.0.0.1",
"${cidrhost(var.service_cidr, 15)}",
"${cidrhost(var.service_cidr, 20)}",
]
dns_names = "${concat(
dns_names = ["${concat(
var.etcd_servers,
list(
"localhost",
"*.kube-etcd.kube-system.svc.cluster.local",
"kube-etcd-client.kube-system.svc.cluster.local",
))}"
))}"]
}
resource "tls_locally_signed_cert" "server" {
@@ -185,17 +189,8 @@ resource "tls_cert_request" "peer" {
common_name = "etcd-peer"
organization = "etcd"
}
- ip_addresses = [
- "${cidrhost(var.service_cidr, 20)}"
- ]
- dns_names = "${concat(
- var.etcd_servers,
- list(
- "*.kube-etcd.kube-system.svc.cluster.local",
- "kube-etcd-client.kube-system.svc.cluster.local",
- ))}"
+ dns_names = ["${var.etcd_servers}"]
}
resource "tls_locally_signed_cert" "peer" {

View File

@@ -1,32 +1,16 @@
- # NOTE: Across this module, the following syntax is used at various places:
- # `"${var.ca_certificate == "" ? join(" ", tls_private_key.kube-ca.*.private_key_pem) : var.ca_private_key}"`
- #
- # Due to https://github.com/hashicorp/hil/issues/50, both sides of a condition
- # are evaluated until one of them is discarded. Unfortunately, the
- # `{tls_private_key/tls_self_signed_cert}.kube-ca` resources are created
- # conditionally and might not be present, in which case an error is
- # generated. Because a `count` is used on these resources, the resources can be
- # referenced as lists with the `.*` notation, and lists are allowed to be
- # empty. The `join()` interpolation function is then used to cast them back to
- # a string. Since `count` can only be 0 or 1, the returned value is either empty
- # (and discarded anyway) or the desired value.
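To make the note above (removed in this change) concrete, a minimal sketch of the pattern it describes, using a hypothetical example-ca resource together with the soon-to-be-removed ca_certificate / ca_private_key inputs:

resource "tls_private_key" "example-ca" {
  # Generated only when no external CA certificate is supplied.
  count = "${var.ca_certificate == "" ? 1 : 0}"

  algorithm = "RSA"
  rsa_bits  = "2048"
}

resource "local_file" "example-ca-key" {
  # The splat reference is a list (possibly empty) and join() casts it back to
  # a string, so both branches of the conditional remain valid even when the
  # counted resource was never created.
  content  = "${var.ca_certificate == "" ? join(" ", tls_private_key.example-ca.*.private_key_pem) : var.ca_private_key}"
  filename = "${var.asset_dir}/tls/example-ca.key"
}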
# Kubernetes CA (tls/{ca.crt,ca.key})
resource "tls_private_key" "kube-ca" {
count = "${var.ca_certificate == "" ? 1 : 0}"
resource "tls_private_key" "kube-ca" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_self_signed_cert" "kube-ca" {
count = "${var.ca_certificate == "" ? 1 : 0}"
key_algorithm = "${tls_private_key.kube-ca.algorithm}"
private_key_pem = "${tls_private_key.kube-ca.private_key_pem}"
subject {
common_name = "kube-ca"
common_name = "kubernetes-ca"
organization = "bootkube"
}
@@ -41,16 +25,17 @@ resource "tls_self_signed_cert" "kube-ca" {
}
resource "local_file" "kube-ca-key" {
content = "${var.ca_certificate == "" ? join(" ", tls_private_key.kube-ca.*.private_key_pem) : var.ca_private_key}"
content = "${tls_private_key.kube-ca.private_key_pem}"
filename = "${var.asset_dir}/tls/ca.key"
}
resource "local_file" "kube-ca-crt" {
content = "${var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_certificate}"
content = "${tls_self_signed_cert.kube-ca.cert_pem}"
filename = "${var.asset_dir}/tls/ca.crt"
}
# Kubernetes API Server (tls/{apiserver.key,apiserver.crt})
resource "tls_private_key" "apiserver" {
algorithm = "RSA"
rsa_bits = "2048"
@@ -62,7 +47,7 @@ resource "tls_cert_request" "apiserver" {
subject {
common_name = "kube-apiserver"
organization = "kube-master"
organization = "system:masters"
}
dns_names = [
@@ -70,7 +55,7 @@ resource "tls_cert_request" "apiserver" {
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster.local",
"kubernetes.default.svc.${var.cluster_domain_suffix}",
]
ip_addresses = [
@@ -81,9 +66,9 @@ resource "tls_cert_request" "apiserver" {
resource "tls_locally_signed_cert" "apiserver" {
cert_request_pem = "${tls_cert_request.apiserver.cert_request_pem}"
ca_key_algorithm = "${var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.key_algorithm) : var.ca_key_alg}"
ca_private_key_pem = "${var.ca_certificate == "" ? join(" ", tls_private_key.kube-ca.*.private_key_pem) : var.ca_private_key}"
ca_cert_pem = "${var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem): var.ca_certificate}"
ca_key_algorithm = "${tls_self_signed_cert.kube-ca.key_algorithm}"
ca_private_key_pem = "${tls_private_key.kube-ca.private_key_pem}"
ca_cert_pem = "${tls_self_signed_cert.kube-ca.cert_pem}"
validity_period_hours = 8760
@@ -105,7 +90,51 @@ resource "local_file" "apiserver-crt" {
filename = "${var.asset_dir}/tls/apiserver.crt"
}
# Kubernetes Admin (tls/{admin.key,admin.crt})
resource "tls_private_key" "admin" {
algorithm = "RSA"
rsa_bits = "2048"
}
resource "tls_cert_request" "admin" {
key_algorithm = "${tls_private_key.admin.algorithm}"
private_key_pem = "${tls_private_key.admin.private_key_pem}"
subject {
common_name = "kubernetes-admin"
organization = "system:masters"
}
}
resource "tls_locally_signed_cert" "admin" {
cert_request_pem = "${tls_cert_request.admin.cert_request_pem}"
ca_key_algorithm = "${tls_self_signed_cert.kube-ca.key_algorithm}"
ca_private_key_pem = "${tls_private_key.kube-ca.private_key_pem}"
ca_cert_pem = "${tls_self_signed_cert.kube-ca.cert_pem}"
validity_period_hours = 8760
allowed_uses = [
"key_encipherment",
"digital_signature",
"client_auth",
]
}
resource "local_file" "admin-key" {
content = "${tls_private_key.admin.private_key_pem}"
filename = "${var.asset_dir}/tls/admin.key"
}
resource "local_file" "admin-crt" {
content = "${tls_locally_signed_cert.admin.cert_pem}"
filename = "${var.asset_dir}/tls/admin.crt"
}
# Kubernetes Service Account (tls/{service-account.key,service-account.pub})
resource "tls_private_key" "service-account" {
algorithm = "RSA"
rsa_bits = "2048"
@@ -122,6 +151,7 @@ resource "local_file" "service-account-crt" {
}
# Kubelet
resource "tls_private_key" "kubelet" {
algorithm = "RSA"
rsa_bits = "2048"
@@ -133,16 +163,16 @@ resource "tls_cert_request" "kubelet" {
subject {
common_name = "kubelet"
organization = "system:masters"
organization = "system:nodes"
}
}
resource "tls_locally_signed_cert" "kubelet" {
cert_request_pem = "${tls_cert_request.kubelet.cert_request_pem}"
ca_key_algorithm = "${var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.key_algorithm) : var.ca_key_alg}"
ca_private_key_pem = "${var.ca_certificate == "" ? join(" ", tls_private_key.kube-ca.*.private_key_pem) : var.ca_private_key}"
ca_cert_pem = "${var.ca_certificate == "" ? join(" ", tls_self_signed_cert.kube-ca.*.cert_pem) : var.ca_certificate}"
ca_key_algorithm = "${tls_self_signed_cert.kube-ca.key_algorithm}"
ca_private_key_pem = "${tls_private_key.kube-ca.private_key_pem}"
ca_cert_pem = "${tls_self_signed_cert.kube-ca.cert_pem}"
validity_period_hours = 8760

View File

@@ -4,20 +4,15 @@ variable "cluster_name" {
}
variable "api_servers" {
description = "URL used to reach kube-apiserver"
description = "List of URLs used to reach kube-apiserver"
type = "list"
}
variable "etcd_servers" {
description = "List of etcd server URLs including protocol, host, and port. Ignored if experimental self-hosted etcd is enabled."
description = "List of URLs used to reach etcd servers."
type = "list"
}
variable "experimental_self_hosted_etcd" {
description = "(Experimental) Create self-hosted etcd assets"
default = false
}
variable "asset_dir" {
description = "Path to a directory where generated assets should be placed (contains secrets)"
type = "string"
@@ -29,6 +24,30 @@ variable "cloud_provider" {
default = ""
}
variable "networking" {
description = "Choice of networking provider (flannel or calico or kube-router)"
type = "string"
default = "flannel"
}
variable "network_mtu" {
description = "CNI interface MTU (only applies to calico and kube-router)"
type = "string"
default = "1500"
}
variable "network_encapsulation" {
description = "Network encapsulation mode either ipip or vxlan (only applies to calico)"
type = "string"
default = "ipip"
}
variable "network_ip_autodetection_method" {
description = "Method to autodetect the host IPv4 address (only applies to calico)"
type = "string"
default = "first-found"
}
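As a rough usage sketch (module name shown is illustrative and required inputs are omitted), choosing Calico's vxlan backend usually goes hand in hand with lowering the workload MTU, since VXLAN encapsulation adds roughly 50 bytes of overhead on a 1500-byte host interface:

module "bootkube" {
  source = "git::https://github.com/poseidon/terraform-render-bootkube.git"

  # cluster_name, api_servers, etcd_servers, asset_dir, etc. omitted for brevity

  networking            = "calico"
  network_encapsulation = "vxlan"
  network_mtu           = "1450"
}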
variable "pod_cidr" {
description = "CIDR IP range to assign Kubernetes pods"
type = "string"
@@ -38,37 +57,57 @@ variable "pod_cidr" {
variable "service_cidr" {
description = <<EOD
CIDR IP range to assign Kubernetes services.
- The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns, the 15th IP will be reserved for self-hosted etcd, and the 20th IP will be reserved for bootstrap self-hosted etcd.
+ The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
EOD
type = "string"
default = "10.3.0.0/24"
}
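For illustration, the reserved addresses fall out of the default range via Terraform's cidrhost() function (the output below is hypothetical, not part of the module):

output "reserved_service_ips" {
  value = {
    kube_apiserver = "${cidrhost("10.3.0.0/24", 1)}"  # 10.3.0.1
    kube_dns       = "${cidrhost("10.3.0.0/24", 10)}" # 10.3.0.10
  }
}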
variable "cluster_domain_suffix" {
description = "Queries for domains with the suffix will be answered by kube-dns"
type = "string"
default = "cluster.local"
}
variable "container_images" {
description = "Container images to use"
type = "map"
default = {
hyperkube = "quay.io/coreos/hyperkube:v1.6.7_coreos.0"
etcd = "quay.io/coreos/etcd:v3.1.8"
calico = "quay.io/calico/node:v3.7.2"
calico_cni = "quay.io/calico/cni:v3.7.2"
flannel = "quay.io/coreos/flannel:v0.11.0-amd64"
flannel_cni = "quay.io/coreos/flannel-cni:v0.3.0"
kube_router = "cloudnativelabs/kube-router:v0.3.1"
hyperkube = "k8s.gcr.io/hyperkube:v1.14.3"
coredns = "k8s.gcr.io/coredns:1.5.0"
pod_checkpointer = "quay.io/coreos/pod-checkpointer:83e25e5968391b9eb342042c435d1b3eeddb2be1"
}
}
variable "ca_certificate" {
description = "Existing PEM-encoded CA certificate (generated if blank)"
variable "enable_reporting" {
type = "string"
default = ""
description = "Enable usage or analytics reporting to upstream component owners (Tigera: Calico)"
default = "false"
}
variable "ca_key_alg" {
description = "Algorithm used to generate ca_key (required if ca_cert is specified)"
variable "trusted_certs_dir" {
description = "Path to the directory on cluster nodes where trust TLS certs are kept"
type = "string"
default = "RSA"
default = "/usr/share/ca-certificates"
}
variable "ca_private_key" {
description = "Existing Certificate Authority private key (required if ca_certificate is set)"
variable "enable_aggregation" {
description = "Enable the Kubernetes Aggregation Layer (defaults to false, recommended)"
type = "string"
default = ""
default = "false"
}
# unofficial, temporary, may be removed without notice
variable "apiserver_port" {
description = "kube-apiserver port"
type = "string"
default = "6443"
}