Migrate from a self-hosted to static pod control plane

* Run kube-apiserver, kube-scheduler, and kube-controller-manager
as static pods on each controller node
* Bootstrap a minimal control plane by copying `static-manifests`
to the Kubelet `--pod-manifest-path` and TLS/auth secrets to
`/etc/kubernetes/bootstrap-secrets`. Then, kubectl apply Kubernetes
manifests (see the sketch after this list).
* Discontinue using bootkube to bootstrap and pivot to a self-hosted
control plane.
* Remove the bootkube self-hosted kube-apiserver DaemonSet and the
kube-scheduler and kube-controller-manager Deployments
* Remove pod-checkpointer manifests (no longer needed)
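
A minimal sketch of the new flow on one controller node, assuming the
Kubelet's `--pod-manifest-path` is `/etc/kubernetes/manifests` and
assets were rendered to `mycluster/` (the `tls/` directory name is
illustrative; `auth/kubeconfig` matches the rendered assets):

```sh
# Static pods: the kubelet starts these directly, no API server needed
sudo cp mycluster/static-manifests/* /etc/kubernetes/manifests/
# TLS/auth material the static pods mount via hostPath
sudo mkdir -p /etc/kubernetes/bootstrap-secrets
sudo cp mycluster/tls/* mycluster/auth/* /etc/kubernetes/bootstrap-secrets/
# Once kube-apiserver answers, apply the remaining control plane manifests
kubectl --kubeconfig=mycluster/auth/kubeconfig apply -f mycluster/manifests/
```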

Advantages:

* Reduce control plane bootstrapping complexity. The self-hosted pivot
and pod checkpointing worked well, but in-place edits to kube-apiserver,
kube-controller-manager, or kube-scheduler are infrequently used. The
concept was originally geared toward continuously upgrading clusters
in place, a goal Typhoon doesn't take on (blue/green clusters are
recommended instead). As such, the value-add doesn't justify the extra
components for this particular project.
* Static pods still provide kubectl visibility and log access
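
For instance (node name borrowed from the README example below), static
pods surface as mirror pods, so the usual commands still work:

```sh
# Mirror pods are named <static-pod>-<node-name>
kubectl -n kube-system get pod kube-apiserver-node1.example.com
kubectl -n kube-system logs kube-apiserver-node1.example.com
```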

Drawbacks:

* In-place edits to kube-apiserver, kube-controller-manager, and
kube-scheduler are not possible via kubectl (non-goal)
* Assets must be copied to each controller (not just one)
* Static pods must load credentials via hostPath, which is less clean
than the former Kubernetes Secrets and ServiceAccounts (illustrated
below)
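
An illustrative contrast, using the Secret name from the removed
manifests:

```sh
# Before: credentials shipped as in-cluster Secrets, e.g.
#   kubectl -n kube-system get secret kube-apiserver
# After: each controller holds them on disk for hostPath volume mounts
sudo ls /etc/kubernetes/bootstrap-secrets
```
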
Dalton Hubble
2019-08-18 20:50:46 -07:00
parent 98cc19f80f
commit 6e59af7113
28 changed files with 58 additions and 543 deletions

View File

@@ -1,17 +1,17 @@
# terraform-render-bootkube
`terraform-render-bootkube` is a Terraform module that renders [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube) assets for bootstrapping a Kubernetes cluster.
`terraform-render-bootkube` is a Terraform module that renders TLS certificates, static pods, and manifests for bootstrapping a Kubernetes cluster.
## Audience
`terraform-render-bootkube` is a low-level component of the [Typhoon](https://github.com/poseidon/typhoon) Kubernetes distribution. Use Typhoon modules to create and manage Kubernetes clusters across supported platforms. Use the bootkube module if you'd like to customize a Kubernetes control plane or build your own distribution.
`terraform-render-bootstrap` is a low-level component of the [Typhoon](https://github.com/poseidon/typhoon) Kubernetes distribution. Use Typhoon modules to create and manage Kubernetes clusters across supported platforms. Use the bootstrap module if you'd like to customize a Kubernetes control plane or build your own distribution.
## Usage
Use the module to declare bootkube assets. Check [variables.tf](variables.tf) for options and [terraform.tfvars.example](terraform.tfvars.example) for examples.
Use the module to declare bootstrap assets. Check [variables.tf](variables.tf) for options and [terraform.tfvars.example](terraform.tfvars.example) for examples.
```hcl
module "bootkube" {
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=SHA"
cluster_name = "example"
@@ -29,21 +29,5 @@ terraform plan
terraform apply
```
Find bootkube assets rendered to the `asset_dir` path. That's it.
Find bootstrap assets rendered to the `asset_dir` path. That's it.
### Comparison
Render bootkube assets directly with bootkube v0.14.0.
```sh
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:6443 --api-server-alt-names=DNS=node1.example.com --etcd-servers=https://node1.example.com:2379
```
Compare assets. Rendered assets may differ slightly from bootkube assets to reflect decisions made by the [Typhoon](https://github.com/poseidon/typhoon) distribution.
```sh
pushd /home/core/mycluster
mv manifests-networking/* manifests
popd
diff -rw assets /home/core/mycluster
```

View File

@@ -1,7 +1,7 @@
# Self-hosted Kubernetes bootstrap-manifests
resource "template_dir" "bootstrap-manifests" {
source_dir = "${path.module}/resources/bootstrap-manifests"
destination_dir = "${var.asset_dir}/bootstrap-manifests"
# Kubernetes static pod manifests
resource "template_dir" "static-manifests" {
source_dir = "${path.module}/resources/static-manifests"
destination_dir = "${var.asset_dir}/static-manifests"
vars = {
hyperkube_image = var.container_images["hyperkube"]
@@ -10,44 +10,24 @@ resource "template_dir" "bootstrap-manifests" {
pod_cidr = var.pod_cidr
service_cidr = var.service_cidr
trusted_certs_dir = var.trusted_certs_dir
aggregation_flags = var.enable_aggregation == "true" ? indent(4, local.aggregation_flags) : ""
}
}
# Self-hosted Kubernetes manifests
# Kubernetes control plane manifests
resource "template_dir" "manifests" {
source_dir = "${path.module}/resources/manifests"
destination_dir = "${var.asset_dir}/manifests"
vars = {
hyperkube_image = var.container_images["hyperkube"]
pod_checkpointer_image = var.container_images["pod_checkpointer"]
coredns_image = var.container_images["coredns"]
etcd_servers = join(",", formatlist("https://%s:2379", var.etcd_servers))
control_plane_replicas = max(2, length(var.etcd_servers))
cloud_provider = var.cloud_provider
pod_cidr = var.pod_cidr
service_cidr = var.service_cidr
cluster_domain_suffix = var.cluster_domain_suffix
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
trusted_certs_dir = var.trusted_certs_dir
ca_cert = base64encode(tls_self_signed_cert.kube-ca.cert_pem)
ca_key = base64encode(tls_private_key.kube-ca.private_key_pem)
server = format("https://%s:%s", element(var.api_servers, 0), var.external_apiserver_port)
apiserver_key = base64encode(tls_private_key.apiserver.private_key_pem)
apiserver_cert = base64encode(tls_locally_signed_cert.apiserver.cert_pem)
serviceaccount_pub = base64encode(tls_private_key.service-account.public_key_pem)
serviceaccount_key = base64encode(tls_private_key.service-account.private_key_pem)
etcd_ca_cert = base64encode(tls_self_signed_cert.etcd-ca.cert_pem)
etcd_client_cert = base64encode(tls_locally_signed_cert.client.cert_pem)
etcd_client_key = base64encode(tls_private_key.client.private_key_pem)
aggregation_flags = var.enable_aggregation == "true" ? indent(8, local.aggregation_flags) : ""
aggregation_ca_cert = var.enable_aggregation == "true" ? base64encode(join(" ", tls_self_signed_cert.aggregation-ca.*.cert_pem)) : ""
aggregation_client_cert = var.enable_aggregation == "true" ? base64encode(
join(" ", tls_locally_signed_cert.aggregation-client.*.cert_pem),
) : ""
aggregation_client_key = var.enable_aggregation == "true" ? base64encode(
join(" ", tls_private_key.aggregation-client.*.private_key_pem),
) : ""
server = format("https://%s:%s", var.api_servers[0], var.external_apiserver_port)
}
}
@@ -69,8 +49,7 @@ resource "local_file" "kubeconfig-kubelet" {
filename = "${var.asset_dir}/auth/kubeconfig-kubelet"
}
# Generated admin kubeconfig (bootkube requires it be at auth/kubeconfig)
# https://github.com/kubernetes-incubator/bootkube/blob/master/pkg/bootkube/bootkube.go#L42
# Generated admin kubeconfig to bootstrap control plane
resource "local_file" "kubeconfig-admin" {
content = data.template_file.kubeconfig-admin.rendered
filename = "${var.asset_dir}/auth/kubeconfig"

View File

@@ -1,9 +1,9 @@
output "id" {
value = sha1("${template_dir.bootstrap-manifests.id} ${template_dir.manifests.id}")
value = sha1("${template_dir.static-manifests.id} ${template_dir.manifests.id}")
}
output "content_hash" {
value = sha1("${template_dir.bootstrap-manifests.id} ${template_dir.manifests.id}")
value = sha1("${template_dir.static-manifests.id} ${template_dir.manifests.id}")
}
output "cluster_dns_service_ip" {

View File

@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-apiserver
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kube-apiserver
namespace: kube-system

View File

@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: kube-apiserver

View File

@@ -1,18 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: kube-apiserver
namespace: kube-system
type: Opaque
data:
apiserver.key: ${apiserver_key}
apiserver.crt: ${apiserver_cert}
service-account.pub: ${serviceaccount_pub}
ca.crt: ${ca_cert}
etcd-client-ca.crt: ${etcd_ca_cert}
etcd-client.crt: ${etcd_client_cert}
etcd-client.key: ${etcd_client_key}
aggregation-ca.crt: ${aggregation_ca_cert}
aggregation-client.crt: ${aggregation_client_cert}
aggregation-client.key: ${aggregation_client_key}

View File

@@ -1,85 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-apiserver
namespace: kube-system
labels:
tier: control-plane
k8s-app: kube-apiserver
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: kube-apiserver
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-apiserver
annotations:
checkpointer.alpha.coreos.com/checkpoint: "true"
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
nodeSelector:
node-role.kubernetes.io/master: ""
priorityClassName: system-cluster-critical
serviceAccountName: kube-apiserver
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: kube-apiserver
image: ${hyperkube_image}
command:
- /hyperkube
- kube-apiserver
- --advertise-address=$(POD_IP)
- --allow-privileged=true
- --anonymous-auth=false
- --authorization-mode=RBAC
- --bind-address=0.0.0.0
- --client-ca-file=/etc/kubernetes/secrets/ca.crt
- --cloud-provider=${cloud_provider}
- --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,Priority
- --etcd-cafile=/etc/kubernetes/secrets/etcd-client-ca.crt
- --etcd-certfile=/etc/kubernetes/secrets/etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/secrets/etcd-client.key
- --etcd-servers=${etcd_servers}
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname${aggregation_flags}
- --secure-port=6443
- --service-account-key-file=/etc/kubernetes/secrets/service-account.pub
- --service-cluster-ip-range=${service_cidr}
- --storage-backend=etcd3
- --tls-cert-file=/etc/kubernetes/secrets/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/secrets/apiserver.key
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/secrets
readOnly: true
- name: ssl-certs-host
mountPath: /etc/ssl/certs
readOnly: true
securityContext:
runAsNonRoot: true
runAsUser: 65534
volumes:
- name: secrets
secret:
secretName: kube-apiserver
- name: ssl-certs-host
hostPath:
path: ${trusted_certs_dir}

View File

@@ -1,11 +0,0 @@
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: kube-controller-manager
namespace: kube-system
spec:
minAvailable: 1
selector:
matchLabels:
tier: control-plane
k8s-app: kube-controller-manager

View File

@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-controller-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-controller-manager
subjects:
- kind: ServiceAccount
name: kube-controller-manager
namespace: kube-system

View File

@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: kube-controller-manager

View File

@@ -1,11 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: kube-controller-manager
namespace: kube-system
type: Opaque
data:
service-account.key: ${serviceaccount_key}
ca.crt: ${ca_cert}
ca.key: ${ca_key}

View File

@@ -1,96 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-controller-manager
namespace: kube-system
labels:
tier: control-plane
k8s-app: kube-controller-manager
spec:
replicas: ${control_plane_replicas}
selector:
matchLabels:
tier: control-plane
k8s-app: kube-controller-manager
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-controller-manager
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: tier
operator: In
values:
- control-plane
- key: k8s-app
operator: In
values:
- kube-controller-manager
topologyKey: kubernetes.io/hostname
nodeSelector:
node-role.kubernetes.io/master: ""
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
serviceAccountName: kube-controller-manager
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: kube-controller-manager
image: ${hyperkube_image}
command:
- ./hyperkube
- kube-controller-manager
- --use-service-account-credentials
- --allocate-node-cidrs=true
- --cloud-provider=${cloud_provider}
- --cluster-cidr=${pod_cidr}
- --service-cluster-ip-range=${service_cidr}
- --cluster-signing-cert-file=/etc/kubernetes/secrets/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/secrets/ca.key
- --configure-cloud-routes=false
- --leader-elect=true
- --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins
- --pod-eviction-timeout=1m
- --root-ca-file=/etc/kubernetes/secrets/ca.crt
- --service-account-private-key-file=/etc/kubernetes/secrets/service-account.key
livenessProbe:
httpGet:
scheme: HTTPS
path: /healthz
port: 10257
initialDelaySeconds: 15
timeoutSeconds: 15
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/secrets
readOnly: true
- name: volumeplugins
mountPath: /var/lib/kubelet/volumeplugins
readOnly: true
- name: ssl-host
mountPath: /etc/ssl/certs
readOnly: true
volumes:
- name: secrets
secret:
secretName: kube-controller-manager
- name: ssl-host
hostPath:
path: ${trusted_certs_dir}
- name: volumeplugins
hostPath:
path: /var/lib/kubelet/volumeplugins
dnsPolicy: Default # Don't use cluster DNS.

View File

@@ -1,11 +0,0 @@
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: kube-scheduler
namespace: kube-system
spec:
minAvailable: 1
selector:
matchLabels:
tier: control-plane
k8s-app: kube-scheduler

View File

@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-scheduler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-scheduler
subjects:
- kind: ServiceAccount
name: kube-scheduler
namespace: kube-system

View File

@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: kube-scheduler

View File

@@ -1,13 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: volume-scheduler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:volume-scheduler
subjects:
- kind: ServiceAccount
name: kube-scheduler
namespace: kube-system

View File

@@ -1,63 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-scheduler
namespace: kube-system
labels:
tier: control-plane
k8s-app: kube-scheduler
spec:
replicas: ${control_plane_replicas}
selector:
matchLabels:
tier: control-plane
k8s-app: kube-scheduler
template:
metadata:
labels:
tier: control-plane
k8s-app: kube-scheduler
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: tier
operator: In
values:
- control-plane
- key: k8s-app
operator: In
values:
- kube-scheduler
topologyKey: kubernetes.io/hostname
nodeSelector:
node-role.kubernetes.io/master: ""
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
serviceAccountName: kube-scheduler
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: kube-scheduler
image: ${hyperkube_image}
command:
- ./hyperkube
- kube-scheduler
- --leader-elect=true
livenessProbe:
httpGet:
scheme: HTTPS
path: /healthz
port: 10259
initialDelaySeconds: 15
timeoutSeconds: 15

View File

@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: pod-checkpointer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: pod-checkpointer
subjects:
- kind: ServiceAccount
name: pod-checkpointer
namespace: kube-system

View File

@@ -1,11 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pod-checkpointer
rules:
- apiGroups: [""]
resources:
- nodes
- nodes/proxy
verbs:
- get

View File

@@ -1,13 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pod-checkpointer
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pod-checkpointer
subjects:
- kind: ServiceAccount
name: pod-checkpointer
namespace: kube-system

View File

@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-checkpointer
namespace: kube-system
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods"]
verbs: ["get", "watch", "list"]
- apiGroups: [""] # "" indicates the core API group
resources: ["secrets", "configmaps"]
verbs: ["get"]

View File

@@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: pod-checkpointer

View File

@@ -1,72 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: pod-checkpointer
namespace: kube-system
labels:
tier: control-plane
k8s-app: pod-checkpointer
spec:
selector:
matchLabels:
tier: control-plane
k8s-app: pod-checkpointer
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
tier: control-plane
k8s-app: pod-checkpointer
annotations:
checkpointer.alpha.coreos.com/checkpoint: "true"
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
nodeSelector:
node-role.kubernetes.io/master: ""
priorityClassName: system-node-critical
serviceAccountName: pod-checkpointer
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: pod-checkpointer
image: ${pod_checkpointer_image}
command:
- /checkpoint
- --lock-file=/var/run/lock/pod-checkpointer.lock
- --kubeconfig=/etc/checkpointer/kubeconfig
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: kubeconfig
mountPath: /etc/checkpointer
- name: etc-kubernetes
mountPath: /etc/kubernetes
- name: var-run
mountPath: /var/run
volumes:
- name: kubeconfig
configMap:
name: kubeconfig-in-cluster
- name: etc-kubernetes
hostPath:
path: /etc/kubernetes
- name: var-run
hostPath:
path: /var/run

View File

@@ -1,12 +1,19 @@
apiVersion: v1
kind: Pod
metadata:
name: bootstrap-kube-apiserver
name: kube-apiserver
namespace: kube-system
labels:
k8s-app: kube-apiserver
tier: control-plane
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
containers:
- name: kube-apiserver
image: ${hyperkube_image}
@@ -28,7 +35,7 @@ spec:
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname${aggregation_flags}
- --secure-port=6443
- --service-account-key-file=/etc/kubernetes/secrets/service-account.pub
- --service-cluster-ip-range=${service_cidr}

View File

@@ -1,11 +1,19 @@
apiVersion: v1
kind: Pod
metadata:
name: bootstrap-kube-controller-manager
name: kube-controller-manager
namespace: kube-system
labels:
k8s-app: kube-controller-manager
tier: control-plane
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
containers:
- name: kube-controller-manager
image: ${hyperkube_image}
@@ -23,6 +31,14 @@ spec:
- --leader-elect=true
- --root-ca-file=/etc/kubernetes/secrets/ca.crt
- --service-account-private-key-file=/etc/kubernetes/secrets/service-account.key
livenessProbe:
httpGet:
scheme: HTTPS
host: 127.0.0.1
path: /healthz
port: 10257
initialDelaySeconds: 15
timeoutSeconds: 15
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/secrets
@@ -30,7 +46,6 @@ spec:
- name: ssl-host
mountPath: /etc/ssl/certs
readOnly: true
hostNetwork: true
volumes:
- name: secrets
hostPath:

View File

@@ -1,11 +1,19 @@
apiVersion: v1
kind: Pod
metadata:
name: bootstrap-kube-scheduler
name: kube-scheduler
namespace: kube-system
labels:
k8s-app: kube-scheduler
tier: control-plane
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
hostNetwork: true
priorityClassName: system-cluster-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534
containers:
- name: kube-scheduler
image: ${hyperkube_image}
@@ -14,11 +22,18 @@ spec:
- kube-scheduler
- --kubeconfig=/etc/kubernetes/secrets/kubeconfig
- --leader-elect=true
livenessProbe:
httpGet:
scheme: HTTPS
host: 127.0.0.1
path: /healthz
port: 10259
initialDelaySeconds: 15
timeoutSeconds: 15
volumeMounts:
- name: secrets
mountPath: /etc/kubernetes/secrets
readOnly: true
hostNetwork: true
volumes:
- name: secrets
hostPath:

View File

@@ -83,7 +83,6 @@ variable "container_images" {
kube_router = "cloudnativelabs/kube-router:v0.3.2"
hyperkube = "k8s.gcr.io/hyperkube:v1.15.3"
coredns = "k8s.gcr.io/coredns:1.6.2"
pod_checkpointer = "quay.io/coreos/pod-checkpointer:83e25e5968391b9eb342042c435d1b3eeddb2be1"
}
}