Mirror of https://github.com/outbackdingo/matchbox.git (synced 2026-01-27 18:19:36 +00:00)

Merge pull request #403 from coreos/k8s-v1.4.7

Update Kubernetes clusters to v1.4.7
@@ -7,6 +7,12 @@
 * Deprecate Pixiecore support
 * Update Fuze and Ignition to v0.11.2
+
+#### Examples
+
+* Upgrade Kubernetes v1.4.7 (static) example clusters
+* Upgrade Kubernetes v1.4.7 (self-hosted) example cluster
+* Combine rktnetes Ignition into Kubernetes static cluster
 
 ## v0.4.2 (2016-12-7)
 
 #### Improvements
@@ -1,7 +1,7 @@
 
 # Self-Hosted Kubernetes
 
-The self-hosted Kubernetes example provisions a 3 node "self-hosted" Kubernetes v1.4.6 cluster. On-host kubelets wait for an apiserver to become reachable, then yield to kubelet pods scheduled via daemonset. [bootkube](https://github.com/kubernetes-incubator/bootkube) is run on any controller to bootstrap a temporary apiserver which schedules control plane components as pods before exiting. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
+The self-hosted Kubernetes example provisions a 3 node "self-hosted" Kubernetes v1.4.7 cluster. On-host kubelets wait for an apiserver to become reachable, then yield to kubelet pods scheduled via daemonset. [bootkube](https://github.com/kubernetes-incubator/bootkube) is run on any controller to bootstrap a temporary apiserver which schedules control plane components as pods before exiting. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
 
 ## Requirements
 
@@ -15,7 +15,7 @@ Ensure that you've gone through the [bootcfg with rkt](getting-started-rkt.md) o
 Build and install the [fork of bootkube](https://github.com/dghubble/bootkube), which supports DNS names.
 
     $ bootkube version
-    Version: bd5a87af28f84898272519894b09d16c5e5df441
+    Version: 4a4ae6e78a59258b528f7de57db8d5cf5786a8a5
 
 ## Examples
 
@@ -63,10 +63,11 @@ Secure copy the `kubeconfig` to `/etc/kubernetes/kubeconfig` on **every** node w
 Secure copy the `bootkube` generated assets to any controller node and run `bootkube-start`.
 
     scp -r assets core@node1.example.com:/home/core/assets
-    ssh core@node1.example.com 'sudo ./bootkube-start'
+    ssh core@node1.example.com 'sudo systemctl start bootkube'
 
-Watch the temporary control plane logs until the scheduled kubelet takes over in place of the on-host kubelet.
+Optionally watch the Kubernetes control plane bootstrapping with the bootkube temporary api-server. You will see quite a bit of output.
 
     $ ssh core@node1.example.com 'journalctl -f -u bootkube'
     [ 299.241291] bootkube[5]: Pod Status: kube-api-checkpoint Running
     [ 299.241618] bootkube[5]: Pod Status: kube-apiserver Running
     [ 299.241804] bootkube[5]: Pod Status: kube-scheduler Running
@@ -88,17 +89,17 @@ You may cleanup the `bootkube` assets on the node, but you should keep the copy
 
     $ kubectl get pods --all-namespaces
     NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE
-    kube-system   kube-api-checkpoint-node1.example.com      1/1       Running   0          4m
-    kube-system   kube-apiserver-iffsz                       2/2       Running   0          5m
-    kube-system   kube-controller-manager-1148212084-1zx9g   1/1       Running   0          6m
-    kube-system   kube-dns-v20-3531996453-r18ht              3/3       Running   0          5m
-    kube-system   kube-proxy-36jj8                           1/1       Running   0          5m
-    kube-system   kube-proxy-fdt2t                           1/1       Running   0          6m
-    kube-system   kube-proxy-sttgn                           1/1       Running   0          5m
-    kube-system   kube-scheduler-1921762579-z6jn6            1/1       Running   0          6m
-    kube-system   kubelet-1ibsf                              1/1       Running   0          6m
-    kube-system   kubelet-65h6j                              1/1       Running   0          5m
-    kube-system   kubelet-d1qql                              1/1       Running   0          5m
+    kube-system   checkpoint-installer-cpjrm                 1/1       Running   0          2m
+    kube-system   kube-apiserver-rvjes                       1/1       Running   0          2m
+    kube-system   kube-controller-manager-3900529476-5n9xb   1/1       Running   0          4m
+    kube-system   kube-controller-manager-3900529476-rq6p8   1/1       Running   0          4m
+    kube-system   kube-dns-4101612645-oeu5g                  4/4       Running   0          4m
+    kube-system   kube-proxy-f5kb8                           1/1       Running   0          2m
+    kube-system   kube-proxy-jkg4z                           1/1       Running   0          2m
+    kube-system   kube-proxy-rrmuv                           1/1       Running   0          2m
+    kube-system   kube-scheduler-1084603659-qaqbk            1/1       Running   0          3m
+    kube-system   kube-scheduler-1084603659-rztvw            1/1       Running   0          4m
+    kube-system   pod-checkpointer-node1.example.com         1/1       Running   0          1m
 
 Try deleting pods to see that the cluster is resilient to failures and machine restarts (CoreOS auto-updates).
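The resilience check mentioned above can be driven with `kubectl`; a minimal sketch against a live cluster (the pod name is illustrative — substitute one from your own `kubectl get pods` output):

```shell
# Delete a kube-proxy pod; its daemonset should reschedule a replacement.
kubectl delete pod -n kube-system kube-proxy-f5kb8
# Watch the replacement pod get created and become Running.
kubectl get pods -n kube-system --watch
```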
@@ -1,7 +1,7 @@
 
 # Kubernetes
 
-The Kubernetes example provisions a 3 node Kubernetes v1.4.6 cluster with one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
+The Kubernetes example provisions a 3 node Kubernetes v1.4.7 cluster with one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
 
 ## Requirements
 
@@ -1,6 +1,6 @@
 # Kubernetes (with rkt)
 
-The `rktnetes` example provisions a 3 node Kubernetes v1.4.6 cluster with [rkt](https://github.com/coreos/rkt) as the container runtime. The cluster has one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
+The `rktnetes` example provisions a 3 node Kubernetes v1.4.7 cluster with [rkt](https://github.com/coreos/rkt) as the container runtime. The cluster has one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
 
 ## Requirements
 
@@ -7,6 +7,7 @@
     "mac": "52:54:00:a1:9c:ae"
   },
   "metadata": {
+    "container_runtime": "docker",
     "domain_name": "node1.example.com",
     "etcd_initial_cluster": "node1=http://node1.example.com:2380",
     "etcd_name": "node1",
@@ -7,6 +7,7 @@
     "mac": "52:54:00:b2:2f:86"
   },
   "metadata": {
+    "container_runtime": "docker",
     "domain_name": "node2.example.com",
     "etcd_initial_cluster": "node1=http://node1.example.com:2380",
     "k8s_cert_endpoint": "http://bootcfg.foo:8080/assets",
@@ -7,6 +7,7 @@
     "mac": "52:54:00:c3:61:77"
   },
   "metadata": {
+    "container_runtime": "docker",
     "domain_name": "node3.example.com",
     "etcd_initial_cluster": "node1=http://node1.example.com:2380",
     "k8s_cert_endpoint": "http://bootcfg.foo:8080/assets",
@@ -6,6 +6,7 @@
     "mac": "52:54:00:a1:9c:ae"
   },
   "metadata": {
+    "container_runtime": "docker",
     "domain_name": "node1.example.com",
     "etcd_initial_cluster": "node1=http://node1.example.com:2380",
     "etcd_name": "node1",
@@ -6,6 +6,7 @@
     "mac": "52:54:00:b2:2f:86"
   },
   "metadata": {
+    "container_runtime": "docker",
     "domain_name": "node2.example.com",
     "etcd_initial_cluster": "node1=http://node1.example.com:2380",
     "k8s_cert_endpoint": "http://bootcfg.foo:8080/assets",
@@ -6,6 +6,7 @@
     "mac": "52:54:00:c3:61:77"
   },
   "metadata": {
+    "container_runtime": "docker",
     "domain_name": "node3.example.com",
     "etcd_initial_cluster": "node1=http://node1.example.com:2380",
     "k8s_cert_endpoint": "http://bootcfg.foo:8080/assets",
@@ -1,12 +1,13 @@
 {
   "id": "node1",
   "name": "k8s controller",
-  "profile": "rktnetes-controller",
+  "profile": "k8s-controller",
   "selector": {
     "mac": "52:54:00:a1:9c:ae",
     "os": "installed"
   },
   "metadata": {
+    "container_runtime": "rkt",
     "domain_name": "node1.example.com",
     "etcd_initial_cluster": "node1=http://node1.example.com:2380",
     "etcd_name": "node1",
@@ -1,12 +1,13 @@
 {
   "id": "node2",
   "name": "k8s worker",
-  "profile": "rktnetes-worker",
+  "profile": "k8s-worker",
   "selector": {
     "mac": "52:54:00:b2:2f:86",
     "os": "installed"
   },
   "metadata": {
+    "container_runtime": "rkt",
     "domain_name": "node2.example.com",
     "etcd_initial_cluster": "node1=http://node1.example.com:2380",
     "k8s_cert_endpoint": "http://bootcfg.foo:8080/assets",
@@ -1,12 +1,13 @@
 {
   "id": "node3",
   "name": "k8s worker",
-  "profile": "rktnetes-worker",
+  "profile": "k8s-worker",
   "selector": {
     "mac": "52:54:00:c3:61:77",
     "os": "installed"
   },
   "metadata": {
+    "container_runtime": "rkt",
     "domain_name": "node3.example.com",
     "etcd_initial_cluster": "node1=http://node1.example.com:2380",
     "k8s_cert_endpoint": "http://bootcfg.foo:8080/assets",
@@ -1,11 +1,12 @@
 {
   "id": "node1",
   "name": "k8s controller",
-  "profile": "rktnetes-controller",
+  "profile": "k8s-controller",
   "selector": {
     "mac": "52:54:00:a1:9c:ae"
   },
   "metadata": {
+    "container_runtime": "rkt",
     "domain_name": "node1.example.com",
     "etcd_initial_cluster": "node1=http://node1.example.com:2380",
     "etcd_name": "node1",
@@ -1,11 +1,12 @@
 {
   "id": "node2",
   "name": "k8s worker",
-  "profile": "rktnetes-worker",
+  "profile": "k8s-worker",
   "selector": {
     "mac": "52:54:00:b2:2f:86"
   },
   "metadata": {
+    "container_runtime": "rkt",
     "domain_name": "node2.example.com",
     "etcd_initial_cluster": "node1=http://node1.example.com:2380",
     "k8s_cert_endpoint": "http://bootcfg.foo:8080/assets",
@@ -1,11 +1,12 @@
 {
   "id": "node3",
   "name": "k8s worker",
-  "profile": "rktnetes-worker",
+  "profile": "k8s-worker",
   "selector": {
     "mac": "52:54:00:c3:61:77"
   },
   "metadata": {
+    "container_runtime": "rkt",
     "domain_name": "node3.example.com",
     "etcd_initial_cluster": "node1=http://node1.example.com:2380",
     "k8s_cert_endpoint": "http://bootcfg.foo:8080/assets",
@@ -70,7 +70,6 @@ systemd:
           --allow-privileged \
           --hostname-override={{.domain_name}} \
           --node-labels=master=true \
-          --minimum-container-ttl-duration=6m0s \
           --cluster_dns={{.k8s_dns_service_ip}} \
           --cluster_domain=cluster.local
         ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
@@ -78,7 +77,13 @@ systemd:
         RestartSec=10
         [Install]
         WantedBy=multi-user.target
+    - name: bootkube.service
+      contents: |
+        [Unit]
+        Description=Bootstrap a Kubernetes control plane with a temp api-server
+        [Service]
+        Type=simple
+        ExecStart=/home/core/bootkube-start
 storage:
   {{ if index . "pxe" }}
   disks:
@@ -103,7 +108,7 @@ storage:
       contents:
         inline: |
           KUBELET_ACI=quay.io/coreos/hyperkube
-          KUBELET_VERSION=v1.4.6_coreos.0
+          KUBELET_VERSION=v1.4.7_coreos.0
     - path: /etc/hostname
       filesystem: root
       mode: 0644
@@ -128,7 +133,7 @@ storage:
           # Wrapper for bootkube start
           set -e
           BOOTKUBE_ACI="${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
-          BOOTKUBE_VERSION="${BOOTKUBE_VERSION:-v0.2.5}"
+          BOOTKUBE_VERSION="${BOOTKUBE_VERSION:-v0.2.6}"
          BOOTKUBE_ASSETS="${BOOTKUBE_ASSETS:-/home/core/assets}"
          exec /usr/bin/rkt run \
            --trust-keys-from-https \
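The `bootkube-start` wrapper pins image versions with shell default expansion, so the pinned `BOOTKUBE_VERSION` is used unless the caller overrides it from the environment. A minimal standalone sketch of the `${VAR:-default}` pattern (values illustrative):

```shell
#!/bin/sh
# Use the environment's BOOTKUBE_VERSION if set and non-empty,
# otherwise fall back to the pinned default.
BOOTKUBE_VERSION="${BOOTKUBE_VERSION:-v0.2.6}"
echo "$BOOTKUBE_VERSION"
```

Run plainly it prints `v0.2.6`; run as `BOOTKUBE_VERSION=v0.3.0 sh bootkube-start` it prints the override instead.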
@@ -60,7 +60,6 @@ systemd:
           --pod-manifest-path=/etc/kubernetes/manifests \
           --allow-privileged \
           --hostname-override={{.domain_name}} \
-          --minimum-container-ttl-duration=6m0s \
           --cluster_dns={{.k8s_dns_service_ip}} \
           --cluster_domain=cluster.local
         ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
@@ -93,7 +92,7 @@ storage:
       contents:
         inline: |
           KUBELET_ACI=quay.io/coreos/hyperkube
-          KUBELET_VERSION=v1.4.6_coreos.0
+          KUBELET_VERSION=v1.4.7_coreos.0
     - path: /etc/hostname
       filesystem: root
       mode: 0644
@@ -64,10 +64,18 @@ systemd:
         Requires=k8s-assets.target
         After=k8s-assets.target
         [Service]
-        Environment=KUBELET_VERSION=v1.4.6_coreos.0
+        Environment=KUBELET_VERSION=v1.4.7_coreos.0
         Environment="RKT_OPTS=--uuid-file-save=/var/run/kubelet-pod.uuid \
           --volume dns,kind=host,source=/etc/resolv.conf \
           --mount volume=dns,target=/etc/resolv.conf \
+          {{ if eq .container_runtime "rkt" -}}
+          --volume rkt,kind=host,source=/opt/bin/host-rkt \
+          --mount volume=rkt,target=/usr/bin/rkt \
+          --volume var-lib-rkt,kind=host,source=/var/lib/rkt \
+          --mount volume=var-lib-rkt,target=/var/lib/rkt \
+          --volume stage,kind=host,source=/tmp \
+          --mount volume=stage,target=/tmp \
+          {{ end -}}
           --volume var-log,kind=host,source=/var/log \
           --mount volume=var-log,target=/var/log"
         ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
@@ -79,6 +87,9 @@ systemd:
           --register-schedulable=true \
           --cni-conf-dir=/etc/kubernetes/cni/net.d \
           --network-plugin=cni \
+          --container-runtime={{.container_runtime}} \
+          --rkt-path=/usr/bin/rkt \
+          --rkt-stage1-image=coreos.com/rkt/stage1-coreos \
           --allow-privileged=true \
           --pod-manifest-path=/etc/kubernetes/manifests \
           --hostname-override={{.domain_name}} \
@@ -99,6 +110,34 @@ systemd:
         ExecStart=/opt/k8s-addons
         [Install]
         WantedBy=multi-user.target
+    {{ if eq .container_runtime "rkt" }}
+    - name: rkt-api.service
+      enable: true
+      contents: |
+        [Unit]
+        Before=kubelet.service
+        [Service]
+        ExecStart=/usr/bin/rkt api-service
+        Restart=always
+        RestartSec=10
+        [Install]
+        RequiredBy=kubelet.service
+    - name: load-rkt-stage1.service
+      enable: true
+      contents: |
+        [Unit]
+        Description=Load rkt stage1 images
+        Documentation=http://github.com/coreos/rkt
+        Requires=network-online.target
+        After=network-online.target
+        Before=rkt-api.service
+        [Service]
+        Type=oneshot
+        RemainAfterExit=yes
+        ExecStart=/usr/bin/rkt fetch /usr/lib/rkt/stage1-images/stage1-coreos.aci /usr/lib/rkt/stage1-images/stage1-fly.aci --insecure-options=image
+        [Install]
+        RequiredBy=rkt-api.service
+    {{ end }}
 
 storage:
   {{ if index . "pxe" }}
@@ -116,7 +155,7 @@ storage:
           force: true
           options:
             - "-LROOT"
-  {{end}}
+  {{ end }}
   files:
     - path: /etc/kubernetes/cni/net.d/10-flannel.conf
       filesystem: root
@@ -149,11 +188,13 @@ storage:
           metadata:
             name: kube-proxy
             namespace: kube-system
+            annotations:
+              rkt.alpha.kubernetes.io/stage1-name-override: coreos.com/rkt/stage1-fly
           spec:
             hostNetwork: true
             containers:
             - name: kube-proxy
-              image: quay.io/coreos/hyperkube:v1.4.6_coreos.0
+              image: quay.io/coreos/hyperkube:v1.4.7_coreos.0
               command:
               - /hyperkube
               - proxy
@@ -164,10 +205,16 @@ storage:
             - mountPath: /etc/ssl/certs
               name: ssl-certs-host
               readOnly: true
+            - mountPath: /var/run/dbus
+              name: dbus
+              readOnly: false
             volumes:
             - hostPath:
                 path: /usr/share/ca-certificates
               name: ssl-certs-host
+            - hostPath:
+                path: /var/run/dbus
+              name: dbus
     - path: /etc/kubernetes/manifests/kube-apiserver.yaml
       filesystem: root
       contents:
@@ -181,7 +228,7 @@ storage:
             hostNetwork: true
             containers:
             - name: kube-apiserver
-              image: quay.io/coreos/hyperkube:v1.4.6_coreos.0
+              image: quay.io/coreos/hyperkube:v1.4.7_coreos.0
               command:
               - /hyperkube
               - apiserver
@@ -241,7 +288,7 @@ storage:
           spec:
             containers:
             - name: kube-controller-manager
-              image: quay.io/coreos/hyperkube:v1.4.6_coreos.0
+              image: quay.io/coreos/hyperkube:v1.4.7_coreos.0
               command:
               - /hyperkube
               - controller-manager
@@ -287,7 +334,7 @@ storage:
             hostNetwork: true
             containers:
             - name: kube-scheduler
-              image: quay.io/coreos/hyperkube:v1.4.6_coreos.0
+              image: quay.io/coreos/hyperkube:v1.4.7_coreos.0
               command:
               - /hyperkube
               - scheduler
@@ -619,6 +666,23 @@ storage:
             fi
           }
           init_flannel
+    {{ if eq .container_runtime "rkt" }}
+    - path: /opt/bin/host-rkt
+      filesystem: root
+      mode: 0544
+      contents:
+        inline: |
+          #!/bin/sh
+          # This is bind mounted into the kubelet rootfs and all rkt shell-outs go
+          # through this rkt wrapper. It essentially enters the host mount namespace
+          # (which it is already in) only for the purpose of breaking out of the chroot
+          # before calling rkt. It makes things like rkt gc work and avoids bind mounting
+          # in certain rkt filesystem dependencies into the kubelet rootfs. This can
+          # eventually be obviated when the write-api stuff gets upstream and rkt gc is
+          # through the api-server. Related issue:
+          # https://github.com/coreos/rkt/issues/2878
+          exec nsenter -m -u -i -n -p -t 1 -- /usr/bin/rkt "$@"
+    {{ end }}
     - path: /opt/k8s-addons
       filesystem: root
       mode: 0544
@@ -58,10 +58,18 @@ systemd:
         Requires=k8s-assets.target
         After=k8s-assets.target
         [Service]
-        Environment=KUBELET_VERSION=v1.4.6_coreos.0
+        Environment=KUBELET_VERSION=v1.4.7_coreos.0
         Environment="RKT_OPTS=--uuid-file-save=/var/run/kubelet-pod.uuid \
           --volume dns,kind=host,source=/etc/resolv.conf \
           --mount volume=dns,target=/etc/resolv.conf \
+          {{ if eq .container_runtime "rkt" -}}
+          --volume rkt,kind=host,source=/opt/bin/host-rkt \
+          --mount volume=rkt,target=/usr/bin/rkt \
+          --volume var-lib-rkt,kind=host,source=/var/lib/rkt \
+          --mount volume=var-lib-rkt,target=/var/lib/rkt \
+          --volume stage,kind=host,source=/tmp \
+          --mount volume=stage,target=/tmp \
+          {{ end -}}
           --volume var-log,kind=host,source=/var/log \
           --mount volume=var-log,target=/var/log"
         ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
@@ -69,9 +77,12 @@ systemd:
         ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
         ExecStart=/usr/lib/coreos/kubelet-wrapper \
           --api-servers={{.k8s_controller_endpoint}} \
-          --register-node=true \
           --cni-conf-dir=/etc/kubernetes/cni/net.d \
           --network-plugin=cni \
+          --container-runtime={{.container_runtime}} \
+          --rkt-path=/usr/bin/rkt \
+          --rkt-stage1-image=coreos.com/rkt/stage1-coreos \
+          --register-node=true \
           --allow-privileged=true \
           --pod-manifest-path=/etc/kubernetes/manifests \
           --hostname-override={{.domain_name}} \
@@ -85,6 +96,34 @@ systemd:
         RestartSec=10
         [Install]
         WantedBy=multi-user.target
+    {{ if eq .container_runtime "rkt" }}
+    - name: rkt-api.service
+      enable: true
+      contents: |
+        [Unit]
+        Before=kubelet.service
+        [Service]
+        ExecStart=/usr/bin/rkt api-service
+        Restart=always
+        RestartSec=10
+        [Install]
+        RequiredBy=kubelet.service
+    - name: load-rkt-stage1.service
+      enable: true
+      contents: |
+        [Unit]
+        Description=Load rkt stage1 images
+        Documentation=http://github.com/coreos/rkt
+        Requires=network-online.target
+        After=network-online.target
+        Before=rkt-api.service
+        [Service]
+        Type=oneshot
+        RemainAfterExit=yes
+        ExecStart=/usr/bin/rkt fetch /usr/lib/rkt/stage1-images/stage1-coreos.aci /usr/lib/rkt/stage1-images/stage1-fly.aci --insecure-options=image
+        [Install]
+        RequiredBy=rkt-api.service
+    {{ end }}
 
 storage:
   {{ if index . "pxe" }}
@@ -156,11 +195,13 @@ storage:
           metadata:
             name: kube-proxy
             namespace: kube-system
+            annotations:
+              rkt.alpha.kubernetes.io/stage1-name-override: coreos.com/rkt/stage1-fly
           spec:
             hostNetwork: true
             containers:
             - name: kube-proxy
-              image: quay.io/coreos/hyperkube:v1.4.6_coreos.0
+              image: quay.io/coreos/hyperkube:v1.4.7_coreos.0
               command:
               - /hyperkube
               - proxy
@@ -177,6 +218,9 @@ storage:
             - mountPath: /etc/kubernetes/ssl
               name: "etc-kube-ssl"
               readOnly: true
+            - mountPath: /var/run/dbus
+              name: dbus
+              readOnly: false
             volumes:
             - name: "ssl-certs"
               hostPath:
@@ -187,11 +231,31 @@ storage:
             - name: "etc-kube-ssl"
               hostPath:
                 path: "/etc/kubernetes/ssl"
+            - hostPath:
+                path: /var/run/dbus
+              name: dbus
     - path: /etc/flannel/options.env
       filesystem: root
       contents:
         inline: |
           FLANNELD_ETCD_ENDPOINTS={{.k8s_etcd_endpoints}}
+    {{ if eq .container_runtime "rkt" }}
+    - path: /opt/bin/host-rkt
+      filesystem: root
+      mode: 0544
+      contents:
+        inline: |
+          #!/bin/sh
+          # This is bind mounted into the kubelet rootfs and all rkt shell-outs go
+          # through this rkt wrapper. It essentially enters the host mount namespace
+          # (which it is already in) only for the purpose of breaking out of the chroot
+          # before calling rkt. It makes things like rkt gc work and avoids bind mounting
+          # in certain rkt filesystem dependencies into the kubelet rootfs. This can
+          # eventually be obviated when the write-api stuff gets upstream and rkt gc is
+          # through the api-server. Related issue:
+          # https://github.com/coreos/rkt/issues/2878
+          exec nsenter -m -u -i -n -p -t 1 -- /usr/bin/rkt "$@"
+    {{ end }}
 
 {{ if index . "ssh_authorized_keys" }}
 passwd:
@@ -1,709 +0,0 @@
|
||||
---
|
||||
systemd:
|
||||
units:
|
||||
- name: etcd2.service
|
||||
enable: true
|
||||
dropins:
|
||||
- name: 40-etcd-cluster.conf
|
||||
contents: |
|
||||
[Service]
|
||||
Environment="ETCD_NAME={{.etcd_name}}"
|
||||
Environment="ETCD_ADVERTISE_CLIENT_URLS=http://{{.domain_name}}:2379"
|
||||
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=http://{{.domain_name}}:2380"
|
||||
Environment="ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379"
|
||||
Environment="ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380"
|
||||
Environment="ETCD_INITIAL_CLUSTER={{.etcd_initial_cluster}}"
|
||||
Environment="ETCD_STRICT_RECONFIG_CHECK=true"
|
||||
- name: flanneld.service
|
||||
dropins:
|
||||
- name: 40-ExecStartPre-symlink.conf
|
||||
contents: |
|
||||
[Service]
|
||||
EnvironmentFile=-/etc/flannel/options.env
|
||||
ExecStartPre=/opt/init-flannel
|
||||
- name: docker.service
|
||||
dropins:
|
||||
- name: 40-flannel.conf
|
||||
contents: |
|
||||
[Unit]
|
||||
Requires=flanneld.service
|
||||
After=flanneld.service
|
||||
[Service]
|
||||
EnvironmentFile=/etc/kubernetes/cni/docker_opts_cni.env
|
||||
- name: locksmithd.service
|
||||
dropins:
|
||||
- name: 40-etcd-lock.conf
|
||||
contents: |
|
||||
[Service]
|
||||
Environment="REBOOT_STRATEGY=etcd-lock"
|
||||
- name: k8s-certs@.service
|
||||
contents: |
|
||||
[Unit]
|
||||
Description=Fetch Kubernetes certificate assets
|
||||
Requires=network-online.target
|
||||
After=network-online.target
|
||||
[Service]
|
||||
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/ssl
|
||||
ExecStart=/usr/bin/bash -c "[ -f /etc/kubernetes/ssl/%i ] || curl {{.k8s_cert_endpoint}}/tls/%i -o /etc/kubernetes/ssl/%i"
|
||||
- name: k8s-assets.target
|
||||
contents: |
|
||||
[Unit]
|
||||
Description=Load Kubernetes Assets
|
||||
Requires=k8s-certs@apiserver.pem.service
|
||||
After=k8s-certs@apiserver.pem.service
|
||||
Requires=k8s-certs@apiserver-key.pem.service
|
||||
After=k8s-certs@apiserver-key.pem.service
|
||||
Requires=k8s-certs@ca.pem.service
|
||||
After=k8s-certs@ca.pem.service
|
||||
- name: kubelet.service
|
||||
enable: true
|
||||
contents: |
|
||||
[Unit]
|
||||
Description=Kubelet via Hyperkube ACI
|
||||
Wants=flanneld.service
|
||||
Requires=k8s-assets.target
|
||||
After=k8s-assets.target
|
||||
[Service]
|
||||
Environment=KUBELET_VERSION=v1.4.6_coreos.0
|
||||
Environment="RKT_OPTS=--uuid-file-save=/var/run/kubelet-pod.uuid \
|
||||
--volume dns,kind=host,source=/etc/resolv.conf \
|
||||
--mount volume=dns,target=/etc/resolv.conf \
|
||||
--volume rkt,kind=host,source=/opt/bin/host-rkt \
|
||||
--mount volume=rkt,target=/usr/bin/rkt \
|
||||
--volume var-lib-rkt,kind=host,source=/var/lib/rkt \
|
||||
--mount volume=var-lib-rkt,target=/var/lib/rkt \
|
||||
--volume stage,kind=host,source=/tmp \
|
||||
--mount volume=stage,target=/tmp \
|
||||
--volume var-log,kind=host,source=/var/log \
|
||||
--mount volume=var-log,target=/var/log"
|
||||
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
|
||||
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
|
||||
ExecStartPre=/usr/bin/systemctl is-active flanneld.service
|
||||
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
|
||||
ExecStart=/usr/lib/coreos/kubelet-wrapper \
|
||||
--api-servers=http://127.0.0.1:8080 \
|
||||
--register-schedulable=true \
|
||||
--cni-conf-dir=/etc/kubernetes/cni/net.d \
|
||||
--network-plugin=cni \
|
||||
--container-runtime=rkt \
|
||||
--rkt-path=/usr/bin/rkt \
|
||||
--rkt-stage1-image=coreos.com/rkt/stage1-coreos \
|
||||
--allow-privileged=true \
|
||||
--pod-manifest-path=/etc/kubernetes/manifests \
|
||||
--hostname-override={{.domain_name}} \
|
||||
--cluster_dns={{.k8s_dns_service_ip}} \
|
||||
--cluster_domain=cluster.local
|
||||
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
- name: k8s-addons.service
|
||||
enable: true
|
||||
contents: |
|
||||
[Unit]
|
||||
Description=Kubernetes Addons
|
||||
[Service]
|
||||
Type=oneshot
|
||||
ExecStart=/opt/k8s-addons
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
- name: rkt-api.service
|
||||
enable: true
|
||||
contents: |
|
||||
[Unit]
|
||||
Before=kubelet.service
|
||||
[Service]
|
||||
ExecStart=/usr/bin/rkt api-service
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
[Install]
|
||||
RequiredBy=kubelet.service
|
||||
- name: load-rkt-stage1.service
|
||||
enable: true
|
||||
contents: |
|
||||
[Unit]
|
||||
Description=Load rkt stage1 images
|
||||
Documentation=http://github.com/coreos/rkt
|
||||
Requires=network-online.target
|
||||
After=network-online.target
|
||||
Before=rkt-api.service
|
||||
[Service]
|
||||
Type=oneshot
|
||||
RemainAfterExit=yes
|
||||
ExecStart=/usr/bin/rkt fetch /usr/lib/rkt/stage1-images/stage1-coreos.aci /usr/lib/rkt/stage1-images/stage1-fly.aci --insecure-options=image
|
||||
[Install]
|
||||
RequiredBy=rkt-api.service
|
||||
|
||||
storage:
|
||||
{{ if index . "pxe" }}
|
||||
disks:
|
||||
- device: /dev/sda
|
||||
wipe_table: true
|
||||
partitions:
|
||||
- label: ROOT
|
||||
filesystems:
|
||||
- name: root
|
||||
mount:
|
||||
device: "/dev/sda1"
|
||||
format: "ext4"
|
||||
create:
|
||||
force: true
|
||||
options:
|
||||
- "-LROOT"
|
||||
{{end}}
|
||||
files:
|
||||
- path: /etc/kubernetes/cni/net.d/10-flannel.conf
|
||||
filesystem: root
|
||||
contents:
|
||||
inline: |
|
||||
{
|
||||
"name": "podnet",
|
||||
"type": "flannel",
|
||||
"delegate": {
|
||||
"isDefaultGateway": true
|
||||
}
|
||||
}
|
||||
- path: /etc/kubernetes/cni/docker_opts_cni.env
|
||||
filesystem: root
|
||||
contents:
|
||||
inline: |
|
||||
DOCKER_OPT_BIP=""
|
||||
DOCKER_OPT_IPMASQ=""
|
||||
- path: /etc/sysctl.d/max-user-watches.conf
|
||||
filesystem: root
|
||||
contents:
|
||||
inline: |
|
||||
fs.inotify.max_user_watches=16184
|
||||
- path: /etc/kubernetes/manifests/kube-proxy.yaml
|
||||
filesystem: root
|
||||
contents:
|
||||
inline: |
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: kube-proxy
|
||||
namespace: kube-system
|
||||
annotations:
|
||||
rkt.alpha.kubernetes.io/stage1-name-override: coreos.com/rkt/stage1-fly
|
||||
spec:
|
||||
hostNetwork: true
|
||||
containers:
|
||||
- name: kube-proxy
|
||||
image: quay.io/coreos/hyperkube:v1.4.6_coreos.0
|
||||
command:
|
||||
- /hyperkube
|
||||
- proxy
|
||||
- --master=http://127.0.0.1:8080
|
||||
securityContext:
|
||||
privileged: true
|
||||
volumeMounts:
|
||||
- mountPath: /etc/ssl/certs
|
||||
name: ssl-certs-host
|
||||
readOnly: true
|
||||
- mountPath: /var/run/dbus
|
||||
name: dbus
|
||||
readOnly: false
|
||||
volumes:
|
||||
- hostPath:
|
||||
path: /usr/share/ca-certificates
|
||||
name: ssl-certs-host
|
||||
- hostPath:
|
||||
path: /var/run/dbus
|
||||
name: dbus
|
||||
- path: /etc/kubernetes/manifests/kube-apiserver.yaml
|
||||
filesystem: root
|
||||
contents:
|
||||
inline: |
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: kube-apiserver
|
||||
namespace: kube-system
|
||||
spec:
|
||||
hostNetwork: true
|
||||
containers:
|
||||
- name: kube-apiserver
|
||||
image: quay.io/coreos/hyperkube:v1.4.6_coreos.0
|
||||
command:
|
||||
- /hyperkube
|
||||
- apiserver
|
||||
- --bind-address=0.0.0.0
|
||||
- --etcd-servers={{.k8s_etcd_endpoints}}
|
||||
- --allow-privileged=true
|
||||
- --service-cluster-ip-range={{.k8s_service_ip_range}}
|
||||
- --secure-port=443
|
||||
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
|
||||
- --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
|
||||
- --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
|
||||
- --client-ca-file=/etc/kubernetes/ssl/ca.pem
|
||||
- --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
|
||||
- --runtime-config=extensions/v1beta1/networkpolicies=true
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
host: 127.0.0.1
|
||||
port: 8080
|
||||
path: /healthz
|
||||
initialDelaySeconds: 15
|
||||
timeoutSeconds: 15
|
||||
ports:
|
||||
- containerPort: 443
|
||||
hostPort: 443
|
||||
name: https
|
||||
- containerPort: 8080
|
||||
hostPort: 8080
|
||||
name: local
|
||||
volumeMounts:
|
||||
- mountPath: /etc/kubernetes/ssl
|
||||
name: ssl-certs-kubernetes
|
||||
readOnly: true
|
||||
- mountPath: /etc/ssl/certs
|
||||
name: ssl-certs-host
|
||||
readOnly: true
|
||||
volumes:
|
||||
- hostPath:
|
||||
path: /etc/kubernetes/ssl
|
||||
name: ssl-certs-kubernetes
|
||||
- hostPath:
|
||||
path: /usr/share/ca-certificates
|
||||
name: ssl-certs-host
|
||||
- path: /etc/flannel/options.env
|
||||
filesystem: root
|
||||
contents:
|
||||
inline: |
|
||||
FLANNELD_ETCD_ENDPOINTS={{.k8s_etcd_endpoints}}
|
||||
    - path: /etc/kubernetes/manifests/kube-controller-manager.yaml
      filesystem: root
      contents:
        inline: |
          apiVersion: v1
          kind: Pod
          metadata:
            name: kube-controller-manager
            namespace: kube-system
          spec:
            containers:
            - name: kube-controller-manager
              image: quay.io/coreos/hyperkube:v1.4.6_coreos.0
              command:
              - /hyperkube
              - controller-manager
              - --master=http://127.0.0.1:8080
              - --leader-elect=true
              - --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
              - --root-ca-file=/etc/kubernetes/ssl/ca.pem
              resources:
                requests:
                  cpu: 200m
              livenessProbe:
                httpGet:
                  host: 127.0.0.1
                  path: /healthz
                  port: 10252
                initialDelaySeconds: 15
                timeoutSeconds: 15
              volumeMounts:
              - mountPath: /etc/kubernetes/ssl
                name: ssl-certs-kubernetes
                readOnly: true
              - mountPath: /etc/ssl/certs
                name: ssl-certs-host
                readOnly: true
            hostNetwork: true
            volumes:
            - hostPath:
                path: /etc/kubernetes/ssl
              name: ssl-certs-kubernetes
            - hostPath:
                path: /usr/share/ca-certificates
              name: ssl-certs-host
    - path: /etc/kubernetes/manifests/kube-scheduler.yaml
      filesystem: root
      contents:
        inline: |
          apiVersion: v1
          kind: Pod
          metadata:
            name: kube-scheduler
            namespace: kube-system
          spec:
            hostNetwork: true
            containers:
            - name: kube-scheduler
              image: quay.io/coreos/hyperkube:v1.4.6_coreos.0
              command:
              - /hyperkube
              - scheduler
              - --master=http://127.0.0.1:8080
              - --leader-elect=true
              resources:
                requests:
                  cpu: 100m
              livenessProbe:
                httpGet:
                  host: 127.0.0.1
                  path: /healthz
                  port: 10251
                initialDelaySeconds: 15
                timeoutSeconds: 15
    - path: /srv/kubernetes/manifests/kube-dns-rc.yaml
      filesystem: root
      contents:
        inline: |
          apiVersion: v1
          kind: ReplicationController
          metadata:
            name: kube-dns-v20
            namespace: kube-system
            labels:
              k8s-app: kube-dns
              version: v20
              kubernetes.io/cluster-service: "true"
          spec:
            replicas: 1
            selector:
              k8s-app: kube-dns
              version: v20
            template:
              metadata:
                labels:
                  k8s-app: kube-dns
                  version: v20
                annotations:
                  scheduler.alpha.kubernetes.io/critical-pod: ''
                  scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
              spec:
                containers:
                - name: kubedns
                  image: gcr.io/google_containers/kubedns-amd64:1.8
                  resources:
                    limits:
                      memory: 170Mi
                    requests:
                      cpu: 100m
                      memory: 70Mi
                  livenessProbe:
                    httpGet:
                      path: /healthz-kubedns
                      port: 8080
                      scheme: HTTP
                    initialDelaySeconds: 60
                    timeoutSeconds: 5
                    successThreshold: 1
                    failureThreshold: 5
                  readinessProbe:
                    httpGet:
                      path: /readiness
                      port: 8081
                      scheme: HTTP
                    initialDelaySeconds: 3
                    timeoutSeconds: 5
                  args:
                  - --domain=cluster.local.
                  - --dns-port=10053
                  ports:
                  - containerPort: 10053
                    name: dns-local
                    protocol: UDP
                  - containerPort: 10053
                    name: dns-tcp-local
                    protocol: TCP
                - name: dnsmasq
                  image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4
                  livenessProbe:
                    httpGet:
                      path: /healthz-dnsmasq
                      port: 8080
                      scheme: HTTP
                    initialDelaySeconds: 60
                    timeoutSeconds: 5
                    successThreshold: 1
                    failureThreshold: 5
                  args:
                  - --cache-size=1000
                  - --no-resolv
                  - --server=127.0.0.1#10053
                  - --log-facility=-
                  ports:
                  - containerPort: 53
                    name: dns
                    protocol: UDP
                  - containerPort: 53
                    name: dns-tcp
                    protocol: TCP
                - name: healthz
                  image: gcr.io/google_containers/exechealthz-amd64:1.2
                  resources:
                    limits:
                      memory: 50Mi
                    requests:
                      cpu: 10m
                      memory: 50Mi
                  args:
                  - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
                  - --url=/healthz-dnsmasq
                  - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
                  - --url=/healthz-kubedns
                  - --port=8080
                  - --quiet
                  ports:
                  - containerPort: 8080
                    protocol: TCP
                dnsPolicy: Default
    - path: /srv/kubernetes/manifests/kube-dns-svc.yaml
      filesystem: root
      contents:
        inline: |
          apiVersion: v1
          kind: Service
          metadata:
            name: kube-dns
            namespace: kube-system
            labels:
              k8s-app: kube-dns
              kubernetes.io/cluster-service: "true"
              kubernetes.io/name: "KubeDNS"
          spec:
            selector:
              k8s-app: kube-dns
            clusterIP: {{.k8s_dns_service_ip}}
            ports:
            - name: dns
              port: 53
              protocol: UDP
            - name: dns-tcp
              port: 53
              protocol: TCP
    - path: /srv/kubernetes/manifests/heapster-deployment.yaml
      filesystem: root
      contents:
        inline: |
          apiVersion: extensions/v1beta1
          kind: Deployment
          metadata:
            name: heapster-v1.2.0
            namespace: kube-system
            labels:
              k8s-app: heapster
              kubernetes.io/cluster-service: "true"
              version: v1.2.0
          spec:
            replicas: 1
            selector:
              matchLabels:
                k8s-app: heapster
                version: v1.2.0
            template:
              metadata:
                labels:
                  k8s-app: heapster
                  version: v1.2.0
                annotations:
                  scheduler.alpha.kubernetes.io/critical-pod: ''
                  scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
              spec:
                containers:
                - image: gcr.io/google_containers/heapster:v1.2.0
                  name: heapster
                  livenessProbe:
                    httpGet:
                      path: /healthz
                      port: 8082
                      scheme: HTTP
                    initialDelaySeconds: 180
                    timeoutSeconds: 5
                  resources:
                    # keep request = limit to keep this container in guaranteed class
                    limits:
                      cpu: 80m
                      memory: 200Mi
                    requests:
                      cpu: 80m
                      memory: 200Mi
                  command:
                  - /heapster
                  - --source=kubernetes.summary_api:''
                - image: gcr.io/google_containers/addon-resizer:1.6
                  name: heapster-nanny
                  resources:
                    limits:
                      cpu: 50m
                      memory: 90Mi
                    requests:
                      cpu: 50m
                      memory: 90Mi
                  env:
                  - name: MY_POD_NAME
                    valueFrom:
                      fieldRef:
                        fieldPath: metadata.name
                  - name: MY_POD_NAMESPACE
                    valueFrom:
                      fieldRef:
                        fieldPath: metadata.namespace
                  command:
                  - /pod_nanny
                  - --cpu=80m
                  - --extra-cpu=4m
                  - --memory=200Mi
                  - --extra-memory=4Mi
                  - --threshold=5
                  - --deployment=heapster-v1.2.0
                  - --container=heapster
                  - --poll-period=300000
                  - --estimator=exponential
    - path: /srv/kubernetes/manifests/heapster-svc.yaml
      filesystem: root
      contents:
        inline: |
          kind: Service
          apiVersion: v1
          metadata:
            name: heapster
            namespace: kube-system
            labels:
              kubernetes.io/cluster-service: "true"
              kubernetes.io/name: "Heapster"
          spec:
            ports:
            - port: 80
              targetPort: 8082
            selector:
              k8s-app: heapster
    - path: /srv/kubernetes/manifests/kube-dashboard-rc.yaml
      filesystem: root
      contents:
        inline: |
          apiVersion: v1
          kind: ReplicationController
          metadata:
            name: kubernetes-dashboard-v1.4.1
            namespace: kube-system
            labels:
              k8s-app: kubernetes-dashboard
              version: v1.4.1
              kubernetes.io/cluster-service: "true"
          spec:
            replicas: 1
            selector:
              k8s-app: kubernetes-dashboard
            template:
              metadata:
                labels:
                  k8s-app: kubernetes-dashboard
                  version: v1.4.1
                  kubernetes.io/cluster-service: "true"
                annotations:
                  scheduler.alpha.kubernetes.io/critical-pod: ''
                  scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
              spec:
                containers:
                - name: kubernetes-dashboard
                  image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.1
                  resources:
                    limits:
                      cpu: 100m
                      memory: 50Mi
                    requests:
                      cpu: 100m
                      memory: 50Mi
                  ports:
                  - containerPort: 9090
                  livenessProbe:
                    httpGet:
                      path: /
                      port: 9090
                    initialDelaySeconds: 30
                    timeoutSeconds: 30
    - path: /srv/kubernetes/manifests/kube-dashboard-svc.yaml
      filesystem: root
      contents:
        inline: |
          apiVersion: v1
          kind: Service
          metadata:
            name: kubernetes-dashboard
            namespace: kube-system
            labels:
              k8s-app: kubernetes-dashboard
              kubernetes.io/cluster-service: "true"
          spec:
            selector:
              k8s-app: kubernetes-dashboard
            ports:
            - port: 80
              targetPort: 9090
    - path: /opt/init-flannel
      filesystem: root
      mode: 0544
      contents:
        inline: |
          #!/bin/bash -ex
          function init_flannel {
            echo "Waiting for etcd..."
            while true
            do
              IFS=',' read -ra ES <<< "{{.k8s_etcd_endpoints}}"
              for ETCD in "${ES[@]}"; do
                echo "Trying: $ETCD"
                if [ -n "$(curl --silent "$ETCD/v2/machines")" ]; then
                  local ACTIVE_ETCD=$ETCD
                  break
                fi
                sleep 1
              done
              if [ -n "$ACTIVE_ETCD" ]; then
                break
              fi
            done
            RES=$(curl --silent -X PUT -d "value={\"Network\":\"{{.k8s_pod_network}}\",\"Backend\":{\"Type\":\"vxlan\"}}" "$ACTIVE_ETCD/v2/keys/coreos.com/network/config?prevExist=false")
            if [ -z "$(echo $RES | grep '"action":"create"')" ] && [ -z "$(echo $RES | grep 'Key already exists')" ]; then
              echo "Unexpected error configuring flannel pod network: $RES"
            fi
          }
          init_flannel
    - path: /opt/bin/host-rkt
      filesystem: root
      mode: 0544
      contents:
        inline: |
          #!/bin/sh
          # This is bind mounted into the kubelet rootfs and all rkt shell-outs go
          # through this rkt wrapper. It essentially enters the host mount namespace
          # (which it is already in) only for the purpose of breaking out of the chroot
          # before calling rkt. It makes things like rkt gc work and avoids bind mounting
          # in certain rkt filesystem dependencies into the kubelet rootfs. This can
          # eventually be obviated when the write-api stuff gets upstream and rkt gc is
          # through the api-server. Related issue:
          # https://github.com/coreos/rkt/issues/2878
          exec nsenter -m -u -i -n -p -t 1 -- /usr/bin/rkt "$@"
    - path: /opt/k8s-addons
      filesystem: root
      mode: 0544
      contents:
        inline: |
          #!/bin/bash -ex
          echo "Waiting for Kubernetes API..."
          until curl --silent "http://127.0.0.1:8080/version"
          do
            sleep 5
          done
          echo "K8S: DNS addon"
          curl --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dns-rc.yaml)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/replicationcontrollers" > /dev/null
          curl --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dns-svc.yaml)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services" > /dev/null
          echo "K8S: Heapster addon"
          curl --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/heapster-deployment.yaml)" "http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/kube-system/deployments"
          curl --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/heapster-svc.yaml)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services"
          echo "K8S: Dashboard addon"
          curl --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dashboard-rc.yaml)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/replicationcontrollers" > /dev/null
          curl --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dashboard-svc.yaml)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services" > /dev/null

{{ if index . "ssh_authorized_keys" }}
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        {{ range $element := .ssh_authorized_keys }}
        - {{$element}}
        {{end}}
{{end}}
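The `init_flannel` function in the config above seeds flannel's overlay settings by PUTting a JSON document into etcd's `/v2/keys/coreos.com/network/config` key. A minimal sketch of the payload it constructs, using a made-up pod network CIDR in place of the `{{.k8s_pod_network}}` template variable:

```shell
# Hypothetical value standing in for {{.k8s_pod_network}}; the real value
# comes from matchbox group metadata.
POD_NETWORK="10.2.0.0/16"
# Build the same JSON document init_flannel sends to etcd, escaping quotes
# exactly as the inline script does.
PAYLOAD="{\"Network\":\"${POD_NETWORK}\",\"Backend\":{\"Type\":\"vxlan\"}}"
echo "$PAYLOAD"
```

With `?prevExist=false` on the PUT, only the first controller to reach etcd creates the key; later attempts see "Key already exists", which the script treats as success.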
@@ -1,261 +0,0 @@
---
systemd:
  units:
    - name: etcd2.service
      enable: true
      dropins:
        - name: 40-etcd-cluster.conf
          contents: |
            [Service]
            Environment="ETCD_PROXY=on"
            Environment="ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379"
            Environment="ETCD_INITIAL_CLUSTER={{.etcd_initial_cluster}}"
    - name: flanneld.service
      dropins:
        - name: 40-add-options.conf
          contents: |
            [Service]
            EnvironmentFile=-/etc/flannel/options.env
    - name: docker.service
      dropins:
        - name: 40-flannel.conf
          contents: |
            [Unit]
            Requires=flanneld.service
            After=flanneld.service
            [Service]
            EnvironmentFile=/etc/kubernetes/cni/docker_opts_cni.env
    - name: locksmithd.service
      dropins:
        - name: 40-etcd-lock.conf
          contents: |
            [Service]
            Environment="REBOOT_STRATEGY=etcd-lock"
    - name: k8s-certs@.service
      contents: |
        [Unit]
        Description=Fetch Kubernetes certificate assets
        Requires=network-online.target
        After=network-online.target
        [Service]
        ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/ssl
        ExecStart=/usr/bin/bash -c "[ -f /etc/kubernetes/ssl/%i ] || curl {{.k8s_cert_endpoint}}/tls/%i -o /etc/kubernetes/ssl/%i"
    - name: k8s-assets.target
      contents: |
        [Unit]
        Description=Load Kubernetes Assets
        Requires=k8s-certs@worker.pem.service
        After=k8s-certs@worker.pem.service
        Requires=k8s-certs@worker-key.pem.service
        After=k8s-certs@worker-key.pem.service
        Requires=k8s-certs@ca.pem.service
        After=k8s-certs@ca.pem.service
    - name: kubelet.service
      enable: true
      contents: |
        [Unit]
        Description=Kubelet via Hyperkube ACI
        Requires=k8s-assets.target
        After=k8s-assets.target
        [Service]
        Environment=KUBELET_VERSION=v1.4.6_coreos.0
        Environment="RKT_OPTS=--uuid-file-save=/var/run/kubelet-pod.uuid \
          --volume dns,kind=host,source=/etc/resolv.conf \
          --mount volume=dns,target=/etc/resolv.conf \
          --volume rkt,kind=host,source=/opt/bin/host-rkt \
          --mount volume=rkt,target=/usr/bin/rkt \
          --volume var-lib-rkt,kind=host,source=/var/lib/rkt \
          --mount volume=var-lib-rkt,target=/var/lib/rkt \
          --volume stage,kind=host,source=/tmp \
          --mount volume=stage,target=/tmp \
          --volume var-log,kind=host,source=/var/log \
          --mount volume=var-log,target=/var/log"
        ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
        ExecStart=/usr/lib/coreos/kubelet-wrapper \
          --api-servers={{.k8s_controller_endpoint}} \
          --cni-conf-dir=/etc/kubernetes/cni/net.d \
          --network-plugin=cni \
          --container-runtime=rkt \
          --rkt-path=/usr/bin/rkt \
          --rkt-stage1-image=coreos.com/rkt/stage1-coreos \
          --register-node=true \
          --allow-privileged=true \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --hostname-override={{.domain_name}} \
          --cluster_dns={{.k8s_dns_service_ip}} \
          --cluster_domain=cluster.local \
          --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
          --tls-cert-file=/etc/kubernetes/ssl/worker.pem \
          --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem
        ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
        Restart=always
        RestartSec=10
        [Install]
        WantedBy=multi-user.target
    - name: rkt-api.service
      enable: true
      contents: |
        [Unit]
        Before=kubelet.service
        [Service]
        ExecStart=/usr/bin/rkt api-service
        Restart=always
        RestartSec=10
        [Install]
        RequiredBy=kubelet.service
    - name: load-rkt-stage1.service
      enable: true
      contents: |
        [Unit]
        Description=Load rkt stage1 images
        Documentation=http://github.com/coreos/rkt
        Requires=network-online.target
        After=network-online.target
        Before=rkt-api.service
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/bin/rkt fetch /usr/lib/rkt/stage1-images/stage1-coreos.aci /usr/lib/rkt/stage1-images/stage1-fly.aci --insecure-options=image
        [Install]
        RequiredBy=rkt-api.service

storage:
  {{ if index . "pxe" }}
  disks:
    - device: /dev/sda
      wipe_table: true
      partitions:
        - label: ROOT
  filesystems:
    - name: root
      mount:
        device: "/dev/sda1"
        format: "ext4"
        create:
          force: true
          options:
            - "-LROOT"
  {{end}}
  files:
    - path: /etc/kubernetes/cni/net.d/10-flannel.conf
      filesystem: root
      contents:
        inline: |
          {
            "name": "podnet",
            "type": "flannel",
            "delegate": {
              "isDefaultGateway": true
            }
          }
    - path: /etc/kubernetes/cni/docker_opts_cni.env
      filesystem: root
      contents:
        inline: |
          DOCKER_OPT_BIP=""
          DOCKER_OPT_IPMASQ=""
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
    - path: /etc/kubernetes/worker-kubeconfig.yaml
      filesystem: root
      contents:
        inline: |
          apiVersion: v1
          kind: Config
          clusters:
          - name: local
            cluster:
              certificate-authority: /etc/kubernetes/ssl/ca.pem
          users:
          - name: kubelet
            user:
              client-certificate: /etc/kubernetes/ssl/worker.pem
              client-key: /etc/kubernetes/ssl/worker-key.pem
          contexts:
          - context:
              cluster: local
              user: kubelet
            name: kubelet-context
          current-context: kubelet-context
    - path: /etc/kubernetes/manifests/kube-proxy.yaml
      filesystem: root
      contents:
        inline: |
          apiVersion: v1
          kind: Pod
          metadata:
            name: kube-proxy
            namespace: kube-system
            annotations:
              rkt.alpha.kubernetes.io/stage1-name-override: coreos.com/rkt/stage1-fly
          spec:
            hostNetwork: true
            containers:
            - name: kube-proxy
              image: quay.io/coreos/hyperkube:v1.4.6_coreos.0
              command:
              - /hyperkube
              - proxy
              - --master={{.k8s_controller_endpoint}}
              - --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml
              securityContext:
                privileged: true
              volumeMounts:
              - mountPath: /etc/ssl/certs
                name: "ssl-certs"
              - mountPath: /etc/kubernetes/worker-kubeconfig.yaml
                name: "kubeconfig"
                readOnly: true
              - mountPath: /etc/kubernetes/ssl
                name: "etc-kube-ssl"
                readOnly: true
              - mountPath: /var/run/dbus
                name: dbus
                readOnly: false
            volumes:
            - name: "ssl-certs"
              hostPath:
                path: "/usr/share/ca-certificates"
            - name: "kubeconfig"
              hostPath:
                path: "/etc/kubernetes/worker-kubeconfig.yaml"
            - name: "etc-kube-ssl"
              hostPath:
                path: "/etc/kubernetes/ssl"
            - hostPath:
                path: /var/run/dbus
              name: dbus
    - path: /etc/flannel/options.env
      filesystem: root
      contents:
        inline: |
          FLANNELD_ETCD_ENDPOINTS={{.k8s_etcd_endpoints}}
    - path: /opt/bin/host-rkt
      filesystem: root
      mode: 0544
      contents:
        inline: |
          #!/bin/sh
          # This is bind mounted into the kubelet rootfs and all rkt shell-outs go
          # through this rkt wrapper. It essentially enters the host mount namespace
          # (which it is already in) only for the purpose of breaking out of the chroot
          # before calling rkt. It makes things like rkt gc work and avoids bind mounting
          # in certain rkt filesystem dependencies into the kubelet rootfs. This can
          # eventually be obviated when the write-api stuff gets upstream and rkt gc is
          # through the api-server. Related issue:
          # https://github.com/coreos/rkt/issues/2878
          exec nsenter -m -u -i -n -p -t 1 -- /usr/bin/rkt "$@"

{{ if index . "ssh_authorized_keys" }}
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        {{ range $element := .ssh_authorized_keys }}
        - {{$element}}
        {{end}}
{{end}}
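The templated `k8s-certs@.service` unit above fetches `/etc/kubernetes/ssl/%i` only when the file is absent, making retries and reboots idempotent. A small sketch exercising the same `[ -f ... ] || fetch` pattern on a temp file, with `touch` standing in for the curl fetch from the `{{.k8s_cert_endpoint}}` asset server:

```shell
# fetch_if_absent mimics the unit's ExecStart: create the target only if
# it does not already exist (touch substitutes for the real curl download).
fetch_if_absent() { [ -f "$1" ] || touch "$1"; }

cert=$(mktemp -u)         # a path that does not exist yet
fetch_if_absent "$cert"   # first call "fetches" (creates) the file
fetch_if_absent "$cert"   # second call is a no-op
```

Because the unit is a systemd template, `%i` expands to the instance name, so `k8s-certs@worker.pem.service`, `k8s-certs@worker-key.pem.service`, and `k8s-certs@ca.pem.service` each fetch one asset.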
@@ -1,17 +0,0 @@
{
  "id": "rktnetes-controller",
  "name": "Kubernetes Controller",
  "boot": {
    "kernel": "/assets/coreos/1185.3.0/coreos_production_pxe.vmlinuz",
    "initrd": ["/assets/coreos/1185.3.0/coreos_production_pxe_image.cpio.gz"],
    "args": [
      "root=/dev/sda1",
      "coreos.config.url=http://bootcfg.foo:8080/ignition?uuid=${uuid}&mac=${net0/mac:hexhyp}",
      "coreos.first_boot=yes",
      "console=tty0",
      "console=ttyS0",
      "coreos.autologin"
    ]
  },
  "ignition_id": "rktnetes-controller.yaml"
}
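In the profile above, iPXE expands `${uuid}` and `${net0/mac:hexhyp}` before the machine requests its Ignition config. A sketch of the same substitution with made-up machine identifiers, showing the URL a booting node would request from bootcfg (`hexhyp` renders the MAC as hyphen-separated hex bytes):

```shell
# Hypothetical identifiers; at boot time iPXE supplies the real values.
UUID="16e7d8a7-bfa9-428b-9117-363341bb330b"
MAC="52-54-00-a1-9c-ae"   # hexhyp-encoded MAC address
# The rendered coreos.config.url the machine fetches on first boot.
URL="http://bootcfg.foo:8080/ignition?uuid=${UUID}&mac=${MAC}"
echo "$URL"
```

bootcfg matches these query parameters against group selectors to pick the profile, then renders the referenced Ignition template with that group's metadata.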
@@ -1,17 +0,0 @@
{
  "id": "rktnetes-worker",
  "name": "Kubernetes Worker",
  "boot": {
    "kernel": "/assets/coreos/1185.3.0/coreos_production_pxe.vmlinuz",
    "initrd": ["/assets/coreos/1185.3.0/coreos_production_pxe_image.cpio.gz"],
    "args": [
      "root=/dev/sda1",
      "coreos.config.url=http://bootcfg.foo:8080/ignition?uuid=${uuid}&mac=${net0/mac:hexhyp}",
      "coreos.first_boot=yes",
      "console=tty0",
      "console=ttyS0",
      "coreos.autologin"
    ]
  },
  "ignition_id": "rktnetes-worker.yaml"
}