Merge pull request #356 from coreos/update-kubernetes

examples/{k8s,rktnetes}: Update Kubernetes to v1.4.0_coreos.2
Dalton Hubble committed 2016-10-08 20:22:34 -07:00 (via GitHub). 10 changed files with 691 additions and 787 deletions.

@@ -9,7 +9,8 @@
#### Examples
* Add Kubernetes example with rkt container runtime (i.e. rktnetes)
* Upgrade Kubernetes v1.3.6 (static manifest) example clusters
* Upgrade Kubernetes v1.4.0 (static manifest) example clusters
* Upgrade Kubernetes v1.4.0 (rktnetes) example clusters
* Upgrade Kubernetes v1.3.4 (self-hosted) example cluster
* Add etcd3 example cluster (PXE in-RAM or install to disk)
* Use DNS names (instead of IPs) in example clusters (except bootkube)

@@ -19,7 +19,7 @@ Build and install the [fork of bootkube](https://github.com/dghubble/bootkube),
## Examples
The [examples](../examples) statically assign IP addresses to libvirt client VMs created by `scripts/libvirt`. The examples can be used for physical machines if you update the MAC/IP addresses. See [network setup](network-setup.md) and [deployment](deployment.md).
The [examples](../examples) statically assign IP addresses to libvirt client VMs created by `scripts/libvirt`. The examples can be used for physical machines if you update the MAC addresses. See [network setup](network-setup.md) and [deployment](deployment.md).
* [bootkube](../examples/groups/bootkube) - iPXE boot a self-hosted Kubernetes cluster
* [bootkube-install](../examples/groups/bootkube-install) - Install a self-hosted Kubernetes cluster

@@ -1,7 +1,7 @@
# Kubernetes
The Kubernetes example provisions a 3 node Kubernetes v1.3.6 cluster with one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
The Kubernetes example provisions a 3 node Kubernetes v1.4.0 cluster with one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
## Requirements
@@ -13,7 +13,7 @@ Ensure that you've gone through the [bootcfg with rkt](getting-started-rkt.md) o
## Examples
The [examples](../examples) statically assign IP addresses to libvirt client VMs created by `scripts/libvirt`. VMs are set up on the `metal0` CNI bridge for rkt or the `docker0` bridge for Docker. The examples can be used for physical machines if you update the MAC/IP addresses. See [network setup](network-setup.md) and [deployment](deployment.md).
The [examples](../examples) statically assign IP addresses to libvirt client VMs created by `scripts/libvirt`. VMs are set up on the `metal0` CNI bridge for rkt or the `docker0` bridge for Docker. The examples can be used for physical machines if you update the MAC addresses. See [network setup](network-setup.md) and [deployment](deployment.md).
* [k8s](../examples/groups/k8s) - iPXE boot a Kubernetes cluster
* [k8s-install](../examples/groups/k8s-install) - Install a Kubernetes cluster to disk
@@ -25,7 +25,7 @@ Download the CoreOS image assets referenced in the target [profile](../examples/
./scripts/get-coreos alpha 1153.0.0 ./examples/assets
Add your SSH public key to each machine group definition [as shown](../examples/README.md#ssh-keys).
Optionally, add your SSH public key to each machine group definition [as shown](../examples/README.md#ssh-keys).
Generate a root CA and Kubernetes TLS assets for components (`admin`, `apiserver`, `worker`).
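For reference, the generation mirrors the rktnetes example later in this change (a sketch; adjust the API server IP and SANs to your network):

rm -rf examples/assets/tls
# for Kubernetes on CNI metal0 (for rkt)
./scripts/tls/k8s-certgen -d examples/assets/tls -s 172.15.0.21 -m IP.1=10.3.0.1,IP.2=172.15.0.21,DNS.1=node1.example.com -w DNS.1=node2.example.com,DNS.2=node3.example.com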
@@ -50,29 +50,29 @@ Client machines should boot and provision themselves. Local client VMs should ne
$ cd /path/to/coreos-baremetal
$ kubectl --kubeconfig=examples/assets/tls/kubeconfig get nodes
NAME STATUS AGE
node1.example.com Ready 43s
node2.example.com Ready 38s
node3.example.com Ready 37s
node1.example.com Ready 3m
node2.example.com Ready 3m
node3.example.com Ready 3m
Get all pods.
$ kubectl --kubeconfig=examples/assets/tls/kubeconfig get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-v1.1.0-3647315203-oearg 2/2 Running 0 12m
kube-system kube-apiserver-node1.example.com 1/1 Running 0 13m
kube-system kube-controller-manager-node1.example.com 1/1 Running 0 13m
kube-system kube-dns-v17.1-atlcx 3/3 Running 0 13m
kube-system kube-proxy-node1.example.com 1/1 Running 0 13m
kube-system kube-proxy-node2.example.com 1/1 Running 0 12m
kube-system kube-proxy-node3.example.com 1/1 Running 0 12m
kube-system kube-scheduler-node1.example.com 1/1 Running 0 12m
kube-system kubernetes-dashboard-v1.1.1-hf87z 1/1 Running 0 13m
kube-system heapster-v1.2.0-4088228293-k3yn8 2/2 Running 0 3m
kube-system kube-apiserver-node1.example.com 1/1 Running 0 4m
kube-system kube-controller-manager-node1.example.com 1/1 Running 0 3m
kube-system kube-dns-v19-l2u8r 3/3 Running 0 4m
kube-system kube-proxy-node1.example.com 1/1 Running 0 3m
kube-system kube-proxy-node2.example.com 1/1 Running 0 3m
kube-system kube-proxy-node3.example.com 1/1 Running 0 3m
kube-system kube-scheduler-node1.example.com 1/1 Running 0 3m
kube-system kubernetes-dashboard-v1.4.0-0iy07 1/1 Running 0 4m
## Kubernetes Dashboard
Access the Kubernetes Dashboard with `kubeconfig` credentials by port forwarding to the dashboard pod.
$ kubectl --kubeconfig=examples/assets/tls/kubeconfig port-forward kubernetes-dashboard-v1.1.1-SOME-ID 9090 --namespace=kube-system
$ kubectl --kubeconfig=examples/assets/tls/kubeconfig port-forward kubernetes-dashboard-v1.4.0-SOME-ID 9090 --namespace=kube-system
Forwarding from 127.0.0.1:9090 -> 9090
Then visit [http://127.0.0.1:9090](http://127.0.0.1:9090/).
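To verify the forward without a browser, request the dashboard over the local port (any HTTP client works; `curl` shown here):

$ curl -s http://127.0.0.1:9090/ | head -n 5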

Documentation/rktnetes.md (new file)

@@ -0,0 +1,79 @@
# Kubernetes (with rkt)
The `rktnetes` example provisions a 3 node Kubernetes v1.4.0 cluster with [rkt](https://github.com/coreos/rkt) as the container runtime. The cluster has one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
## Requirements
Ensure that you've gone through the [bootcfg with rkt](getting-started-rkt.md) or [bootcfg with docker](getting-started-docker.md) guide and understand the basics. In particular, you should be able to:
* Use rkt or Docker to start `bootcfg`
* Create a network boot environment with `coreos/dnsmasq`
* Create the example libvirt client VMs
## Examples
The [examples](../examples) statically assign IP addresses to libvirt client VMs created by `scripts/libvirt`. VMs are set up on the `metal0` CNI bridge for rkt or the `docker0` bridge for Docker. The examples can be used for physical machines if you update the MAC addresses. See [network setup](network-setup.md) and [deployment](deployment.md).
* [rktnetes](../examples/groups/rktnetes) - iPXE boot a Kubernetes cluster
* [rktnetes-install](../examples/groups/rktnetes-install) - Install a Kubernetes cluster to disk
* [Lab examples](https://github.com/dghubble/metal) - Lab hardware examples
### Assets
Download the CoreOS image assets referenced in the target [profile](../examples/profiles).
./scripts/get-coreos alpha 1153.0.0 ./examples/assets
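The script should place images under a per-version directory (assuming the default layout); a quick listing confirms the download:

ls examples/assets/coreos/1153.0.0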
Optionally, add your SSH public key to each machine group definition [as shown](../examples/README.md#ssh-keys).
Generate a root CA and Kubernetes TLS assets for components (`admin`, `apiserver`, `worker`).
rm -rf examples/assets/tls
# for Kubernetes on CNI metal0 (for rkt)
./scripts/tls/k8s-certgen -d examples/assets/tls -s 172.15.0.21 -m IP.1=10.3.0.1,IP.2=172.15.0.21,DNS.1=node1.example.com -w DNS.1=node2.example.com,DNS.2=node3.example.com
# for Kubernetes on docker0 (for docker)
./scripts/tls/k8s-certgen -d examples/assets/tls -s 172.17.0.21 -m IP.1=10.3.0.1,IP.2=172.17.0.21,DNS.1=node1.example.com -w DNS.1=node2.example.com,DNS.2=node3.example.com
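To sanity-check the generated assets, the apiserver certificate's SANs can be inspected with openssl (assuming `k8s-certgen` writes `apiserver.pem` under the `-d` directory):

openssl x509 -in examples/assets/tls/apiserver.pem -text -noout | grep -A 1 'Subject Alternative Name'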
**Note**: TLS assets are served to any machines which request them, which requires a trusted network. Alternatively, provisioning may be tweaked to require TLS assets be securely copied to each host. Read about our longer term security plans at [Distributed Trusted Computing](https://coreos.com/blog/coreos-trusted-computing.html).
## Containers
Use rkt or Docker to start `bootcfg` and mount the desired example resources. Create a network boot environment and power on your machines. Revisit [bootcfg with rkt](getting-started-rkt.md) or [bootcfg with Docker](getting-started-docker.md) for help.
Client machines should boot and provision themselves. Local client VMs should network boot CoreOS in about 1 minute and the Kubernetes API should be available after 3-4 minutes (each node downloads a ~160MB Hyperkube). If you chose `rktnetes-install`, notice that machines install CoreOS and then reboot (in libvirt, you must hit "power" again). Time to network boot and provision Kubernetes clusters on physical hardware depends on a number of factors (POST duration, boot device iteration, network speed, etc.).
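As a reference point, a minimal Docker invocation might look like the following (a sketch patterned on the getting-started guide, whose image tag, mounts, and flags are authoritative):

sudo docker run -p 8080:8080 --rm -v $PWD/examples:/var/lib/bootcfg:Z -v $PWD/examples/groups/rktnetes:/var/lib/bootcfg/groups:Z quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug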
## Verify
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your laptop. Use the generated kubeconfig to access the Kubernetes cluster created on rkt `metal0` or `docker0`.
$ cd /path/to/coreos-baremetal
$ kubectl --kubeconfig=examples/assets/tls/kubeconfig get nodes
NAME STATUS AGE
node1.example.com Ready 3m
node2.example.com Ready 3m
node3.example.com Ready 3m
Get all pods.
$ kubectl --kubeconfig=examples/assets/tls/kubeconfig get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-v1.2.0-4088228293-k3yn8 2/2 Running 0 3m
kube-system kube-apiserver-node1.example.com 1/1 Running 0 4m
kube-system kube-controller-manager-node1.example.com 1/1 Running 0 3m
kube-system kube-dns-v19-l2u8r 3/3 Running 0 4m
kube-system kube-proxy-node1.example.com 1/1 Running 0 3m
kube-system kube-proxy-node2.example.com 1/1 Running 0 3m
kube-system kube-proxy-node3.example.com 1/1 Running 0 3m
kube-system kube-scheduler-node1.example.com 1/1 Running 0 3m
kube-system kubernetes-dashboard-v1.4.0-0iy07 1/1 Running 0 4m
## Kubernetes Dashboard
Access the Kubernetes Dashboard with `kubeconfig` credentials by port forwarding to the dashboard pod.
$ kubectl --kubeconfig=examples/assets/tls/kubeconfig port-forward kubernetes-dashboard-v1.4.0-SOME-ID 9090 --namespace=kube-system
Forwarding from 127.0.0.1:9090 -> 9090
Then visit [http://127.0.0.1:9090](http://127.0.0.1:9090/).
<img src='img/kubernetes-dashboard.png' class="img-center" alt="Kubernetes Dashboard"/>

@@ -14,7 +14,7 @@ Ensure that you've gone through the [bootcfg with rkt](getting-started-rkt.md) g
## Examples
The [examples](../examples) statically assign IP addresses to libvirt client VMs created by `scripts/libvirt`. The examples can be used for physical machines if you update the MAC/IP addresses. See [network setup](network-setup.md) and [deployment](deployment.md).
The [examples](../examples) statically assign IP addresses to libvirt client VMs created by `scripts/libvirt`. The examples can be used for physical machines if you update the MAC addresses. See [network setup](network-setup.md) and [deployment](deployment.md).
* [torus](../examples/groups/torus) - iPXE boot a Torus cluster

@@ -14,8 +14,8 @@ These examples network boot and provision machines into CoreOS clusters using `b
| etcd3-install | Install a 3 node etcd3 cluster to disk | alpha/1153.0.0 | Disk | None |
| k8s | Kubernetes cluster with 1 master, 2 workers, and TLS-authentication | alpha/1153.0.0 | Disk | [tutorial](../Documentation/kubernetes.md) |
| k8s-install | Kubernetes cluster, installed to disk | alpha/1153.0.0 | Disk | [tutorial](../Documentation/kubernetes.md) |
| rktnetes | Kubernetes cluster with rkt container runtime, 1 master, workers, TLS auth (experimental) | alpha/1153.0.0 | Disk | None |
| rktnetes-install | Kubernetes cluster with rkt container runtime, installed to disk (experimental) | alpha/1153.0.0 | Disk | None |
| rktnetes | Kubernetes cluster with rkt container runtime, 1 master, workers, TLS auth (experimental) | alpha/1153.0.0 | Disk | [tutorial](../Documentation/rktnetes.md) |
| rktnetes-install | Kubernetes cluster with rkt container runtime, installed to disk (experimental) | alpha/1153.0.0 | Disk | [tutorial](../Documentation/rktnetes.md) |
| bootkube | iPXE boot a self-hosted Kubernetes cluster (with bootkube) | alpha/1153.0.0 | Disk | [tutorial](../Documentation/bootkube.md) |
| bootkube-install | Install a self-hosted Kubernetes cluster (with bootkube) | alpha/1153.0.0 | Disk | [tutorial](../Documentation/bootkube.md) |
| torus | Torus distributed storage | alpha/1153.0.0 | Disk | [tutorial](../Documentation/torus.md) |
@@ -28,6 +28,7 @@ Get started running `bootcfg` on your Linux machine to network boot and provisio
* [bootcfg with rkt](../Documentation/getting-started-rkt.md)
* [bootcfg with Docker](../Documentation/getting-started-docker.md)
* [Kubernetes (static manifests)](../Documentation/kubernetes.md)
* [Kubernetes (rktnetes)](../Documentation/rktnetes.md)
* [Kubernetes (self-hosted)](../Documentation/bootkube.md)
* [Torus Storage](../Documentation/torus.md)
* [Lab Examples](https://github.com/dghubble/metal)

@@ -28,6 +28,8 @@ systemd:
[Unit]
Requires=flanneld.service
After=flanneld.service
[Service]
EnvironmentFile=/etc/kubernetes/cni/docker_opts_cni.env
- name: k8s-certs@.service
contents: |
[Unit]
@@ -56,7 +58,7 @@ systemd:
Requires=k8s-assets.target
After=k8s-assets.target
[Service]
Environment=KUBELET_VERSION=v1.3.6_coreos.0
Environment=KUBELET_VERSION=v1.4.0_coreos.2
Environment="RKT_OPTS=--volume dns,kind=host,source=/etc/resolv.conf \
--mount volume=dns,target=/etc/resolv.conf \
--volume var-log,kind=host,source=/var/log \
@@ -67,6 +69,8 @@ systemd:
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--api-servers=http://127.0.0.1:8080 \
--register-schedulable=true \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--network-plugin=cni \
--allow-privileged=true \
--config=/etc/kubernetes/manifests \
--hostname-override={{.domain_name}} \
@@ -105,6 +109,23 @@ storage:
- "-LROOT"
{{end}}
files:
- path: /etc/kubernetes/cni/net.d/10-flannel.conf
filesystem: root
contents:
inline: |
{
"name": "podnet",
"type": "flannel",
"delegate": {
"isDefaultGateway": true
}
}
- path: /etc/kubernetes/cni/docker_opts_cni.env
filesystem: root
contents:
inline: |
DOCKER_OPT_BIP=""
DOCKER_OPT_IPMASQ=""
- path: /etc/kubernetes/manifests/kube-proxy.yaml
filesystem: root
contents:
@@ -118,7 +139,7 @@ storage:
hostNetwork: true
containers:
- name: kube-proxy
image: quay.io/coreos/hyperkube:v1.3.6_coreos.0
image: quay.io/coreos/hyperkube:v1.4.0_coreos.2
command:
- /hyperkube
- proxy
@@ -146,7 +167,7 @@ storage:
hostNetwork: true
containers:
- name: kube-apiserver
image: quay.io/coreos/hyperkube:v1.3.6_coreos.0
image: quay.io/coreos/hyperkube:v1.4.0_coreos.2
command:
- /hyperkube
- apiserver
@@ -155,7 +176,7 @@ storage:
- --allow-privileged=true
- --service-cluster-ip-range={{.k8s_service_ip_range}}
- --secure-port=443
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
- --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
- --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
- --client-ca-file=/etc/kubernetes/ssl/ca.pem
@@ -206,7 +227,7 @@ storage:
spec:
containers:
- name: kube-controller-manager
image: quay.io/coreos/hyperkube:v1.3.6_coreos.0
image: quay.io/coreos/hyperkube:v1.4.0_coreos.2
command:
- /hyperkube
- controller-manager
@@ -252,7 +273,7 @@ storage:
hostNetwork: true
containers:
- name: kube-scheduler
image: quay.io/coreos/hyperkube:v1.3.6_coreos.0
image: quay.io/coreos/hyperkube:v1.4.0_coreos.2
command:
- /hyperkube
- scheduler
@@ -268,392 +289,282 @@ storage:
port: 10251
initialDelaySeconds: 15
timeoutSeconds: 15
- path: /srv/kubernetes/manifests/kube-dns-rc.json
- path: /srv/kubernetes/manifests/kube-dns-rc.yaml
filesystem: root
contents:
inline: |
{
"apiVersion": "v1",
"kind": "ReplicationController",
"metadata": {
"labels": {
"k8s-app": "kube-dns",
"kubernetes.io/cluster-service": "true",
"version": "v17.1"
},
"name": "kube-dns-v17.1",
"namespace": "kube-system"
},
"spec": {
"replicas": 1,
"selector": {
"k8s-app": "kube-dns",
"version": "v17.1"
},
"template": {
"metadata": {
"labels": {
"k8s-app": "kube-dns",
"kubernetes.io/cluster-service": "true",
"version": "v17.1"
}
},
"spec": {
"containers": [
{
"args": [
"--domain=cluster.local.",
"--dns-port=10053"
],
"image": "gcr.io/google_containers/kubedns-amd64:1.5",
"livenessProbe": {
"failureThreshold": 5,
"httpGet": {
"path": "/healthz",
"port": 8080,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"successThreshold": 1,
"timeoutSeconds": 5
},
"name": "kubedns",
"ports": [
{
"containerPort": 10053,
"name": "dns-local",
"protocol": "UDP"
},
{
"containerPort": 10053,
"name": "dns-tcp-local",
"protocol": "TCP"
}
],
"readinessProbe": {
"httpGet": {
"path": "/readiness",
"port": 8081,
"scheme": "HTTP"
},
"initialDelaySeconds": 30,
"timeoutSeconds": 5
},
"resources": {
"limits": {
"cpu": "100m",
"memory": "170Mi"
},
"requests": {
"cpu": "100m",
"memory": "70Mi"
}
}
},
{
"args": [
"--cache-size=1000",
"--no-resolv",
"--server=127.0.0.1#10053"
],
"image": "gcr.io/google_containers/kube-dnsmasq-amd64:1.3",
"name": "dnsmasq",
"ports": [
{
"containerPort": 53,
"name": "dns",
"protocol": "UDP"
},
{
"containerPort": 53,
"name": "dns-tcp",
"protocol": "TCP"
}
]
},
{
"args": [
"-cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null",
"-port=8080",
"-quiet"
],
"image": "gcr.io/google_containers/exechealthz-amd64:1.1",
"name": "healthz",
"ports": [
{
"containerPort": 8080,
"protocol": "TCP"
}
],
"resources": {
"limits": {
"cpu": "10m",
"memory": "50Mi"
},
"requests": {
"cpu": "10m",
"memory": "50Mi"
}
}
}
],
"dnsPolicy": "Default"
}
}
}
}
- path: /srv/kubernetes/manifests/kube-dns-svc.json
apiVersion: v1
kind: ReplicationController
metadata:
name: kube-dns-v19
namespace: kube-system
labels:
k8s-app: kube-dns
version: v19
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kube-dns
version: v19
template:
metadata:
labels:
k8s-app: kube-dns
version: v19
kubernetes.io/cluster-service: "true"
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: kubedns
image: gcr.io/google_containers/kubedns-amd64:1.7
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
args:
- --domain=cluster.local.
- --dns-port=10053
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- name: dnsmasq
image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
args:
- --cache-size=1000
- --no-resolv
- --server=127.0.0.1#10053
- --log-facility=-
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- name: healthz
image: gcr.io/google_containers/exechealthz-amd64:1.1
resources:
limits:
memory: 50Mi
requests:
cpu: 10m
memory: 50Mi
args:
- -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
- -port=8080
- -quiet
ports:
- containerPort: 8080
protocol: TCP
dnsPolicy: Default
- path: /srv/kubernetes/manifests/kube-dns-svc.yaml
filesystem: root
contents:
inline: |
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"labels": {
"k8s-app": "kube-dns",
"kubernetes.io/cluster-service": "true",
"kubernetes.io/name": "KubeDNS"
},
"name": "kube-dns",
"namespace": "kube-system"
},
"spec": {
"clusterIP": "{{.k8s_dns_service_ip}}",
"ports": [
{
"name": "dns",
"port": 53,
"protocol": "UDP"
},
{
"name": "dns-tcp",
"port": 53,
"protocol": "TCP"
}
],
"selector": {
"k8s-app": "kube-dns"
}
}
}
- path: /srv/kubernetes/manifests/heapster-deployment.json
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: {{.k8s_dns_service_ip}}
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
- path: /srv/kubernetes/manifests/heapster-deployment.yaml
filesystem: root
contents:
inline: |
{
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
"labels": {
"k8s-app": "heapster",
"kubernetes.io/cluster-service": "true",
"version": "v1.1.0"
},
"name": "heapster-v1.1.0",
"namespace": "kube-system"
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"k8s-app": "heapster",
"version": "v1.1.0"
}
},
"template": {
"metadata": {
"labels": {
"k8s-app": "heapster",
"version": "v1.1.0"
}
},
"spec": {
"containers": [
{
"command": [
"/heapster",
"--source=kubernetes.summary_api:''"
],
"image": "gcr.io/google_containers/heapster:v1.1.0",
"name": "heapster",
"resources": {
"limits": {
"cpu": "100m",
"memory": "200Mi"
},
"requests": {
"cpu": "100m",
"memory": "200Mi"
}
}
},
{
"command": [
"/pod_nanny",
"--cpu=100m",
"--extra-cpu=0.5m",
"--memory=200Mi",
"--extra-memory=4Mi",
"--threshold=5",
"--deployment=heapster-v1.1.0",
"--container=heapster",
"--poll-period=300000",
"--estimator=exponential"
],
"env": [
{
"name": "MY_POD_NAME",
"valueFrom": {
"fieldRef": {
"fieldPath": "metadata.name"
}
}
},
{
"name": "MY_POD_NAMESPACE",
"valueFrom": {
"fieldRef": {
"fieldPath": "metadata.namespace"
}
}
}
],
"image": "gcr.io/google_containers/addon-resizer:1.3",
"name": "heapster-nanny",
"resources": {
"limits": {
"cpu": "50m",
"memory": "100Mi"
},
"requests": {
"cpu": "50m",
"memory": "100Mi"
}
}
}
]
}
}
}
}
- path: /srv/kubernetes/manifests/heapster-svc.json
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: heapster-v1.2.0
namespace: kube-system
labels:
k8s-app: heapster
kubernetes.io/cluster-service: "true"
version: v1.2.0
spec:
replicas: 1
selector:
matchLabels:
k8s-app: heapster
version: v1.2.0
template:
metadata:
labels:
k8s-app: heapster
version: v1.2.0
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- image: gcr.io/google_containers/heapster:v1.2.0
name: heapster
livenessProbe:
httpGet:
path: /healthz
port: 8082
scheme: HTTP
initialDelaySeconds: 180
timeoutSeconds: 5
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 80m
memory: 200Mi
requests:
cpu: 80m
memory: 200Mi
command:
- /heapster
- --source=kubernetes.summary_api:''
- image: gcr.io/google_containers/addon-resizer:1.6
name: heapster-nanny
resources:
limits:
cpu: 50m
memory: 90Mi
requests:
cpu: 50m
memory: 90Mi
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
command:
- /pod_nanny
- --cpu=80m
- --extra-cpu=4m
- --memory=200Mi
- --extra-memory=4Mi
- --threshold=5
- --deployment=heapster-v1.2.0
- --container=heapster
- --poll-period=300000
- --estimator=exponential
- path: /srv/kubernetes/manifests/heapster-svc.yaml
filesystem: root
contents:
inline: |
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"labels": {
"kubernetes.io/cluster-service": "true",
"kubernetes.io/name": "Heapster"
},
"name": "heapster",
"namespace": "kube-system"
},
"spec": {
"ports": [
{
"port": 80,
"targetPort": 8082
}
],
"selector": {
"k8s-app": "heapster"
}
}
}
- path: /srv/kubernetes/manifests/kube-dashboard-rc.json
kind: Service
apiVersion: v1
metadata:
name: heapster
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "Heapster"
spec:
ports:
- port: 80
targetPort: 8082
selector:
k8s-app: heapster
- path: /srv/kubernetes/manifests/kube-dashboard-rc.yaml
filesystem: root
contents:
inline: |
{
"apiVersion": "v1",
"kind": "ReplicationController",
"metadata": {
"labels": {
"k8s-app": "kubernetes-dashboard",
"kubernetes.io/cluster-service": "true",
"version": "v1.1.1"
},
"name": "kubernetes-dashboard-v1.1.1",
"namespace": "kube-system"
},
"spec": {
"replicas": 1,
"selector": {
"k8s-app": "kubernetes-dashboard"
},
"template": {
"metadata": {
"labels": {
"k8s-app": "kubernetes-dashboard",
"kubernetes.io/cluster-service": "true",
"version": "v1.1.1"
}
},
"spec": {
"containers": [
{
"image": "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1",
"livenessProbe": {
"httpGet": {
"path": "/",
"port": 9090
},
"initialDelaySeconds": 30,
"timeoutSeconds": 30
},
"name": "kubernetes-dashboard",
"ports": [
{
"containerPort": 9090
}
],
"resources": {
"limits": {
"cpu": "100m",
"memory": "50Mi"
},
"requests": {
"cpu": "100m",
"memory": "50Mi"
}
}
}
]
}
}
}
}
- path: /srv/kubernetes/manifests/kube-dashboard-svc.json
apiVersion: v1
kind: ReplicationController
metadata:
name: kubernetes-dashboard-v1.4.0
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
version: v1.4.0
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
version: v1.4.0
kubernetes.io/cluster-service: "true"
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: kubernetes-dashboard
image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0
resources:
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
ports:
- containerPort: 9090
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
- path: /srv/kubernetes/manifests/kube-dashboard-svc.yaml
filesystem: root
contents:
inline: |
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"labels": {
"k8s-app": "kubernetes-dashboard",
"kubernetes.io/cluster-service": "true"
},
"name": "kubernetes-dashboard",
"namespace": "kube-system"
},
"spec": {
"ports": [
{
"port": 80,
"targetPort": 9090
}
],
"selector": {
"k8s-app": "kubernetes-dashboard"
}
}
}
apiVersion: v1
kind: Service
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
spec:
selector:
k8s-app: kubernetes-dashboard
ports:
- port: 80
targetPort: 9090
- path: /opt/init-flannel
filesystem: root
mode: 0544
@@ -695,14 +606,14 @@ storage:
sleep 5
done
echo "K8S: DNS addon"
curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dns-rc.json)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/replicationcontrollers" > /dev/null
curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dns-svc.json)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services" > /dev/null
curl --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dns-rc.yaml)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/replicationcontrollers" > /dev/null
curl --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dns-svc.yaml)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services" > /dev/null
echo "K8S: Heapster addon"
curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /srv/kubernetes/manifests/heapster-deployment.json)" "http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/kube-system/deployments"
curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /srv/kubernetes/manifests/heapster-svc.json)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services"
curl --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/heapster-deployment.yaml)" "http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/kube-system/deployments"
curl --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/heapster-svc.yaml)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services"
echo "K8S: Dashboard addon"
curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dashboard-rc.json)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/replicationcontrollers" > /dev/null
curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dashboard-svc.json)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services" > /dev/null
curl --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dashboard-rc.yaml)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/replicationcontrollers" > /dev/null
curl --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dashboard-svc.yaml)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services" > /dev/null
{{ if index . "ssh_authorized_keys" }}
passwd:
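Aside: because the addon manifests are now plain YAML files on disk, they could also be registered with kubectl against the local insecure port once the apiserver is up. A hypothetical alternative to the curl loop above, not part of this template:

kubectl -s http://127.0.0.1:8080 create -f /srv/kubernetes/manifests/kube-dns-rc.yaml
kubectl -s http://127.0.0.1:8080 create -f /srv/kubernetes/manifests/kube-dns-svc.yaml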

@@ -23,6 +23,8 @@ systemd:
[Unit]
Requires=flanneld.service
After=flanneld.service
[Service]
EnvironmentFile=/etc/kubernetes/cni/docker_opts_cni.env
- name: k8s-certs@.service
contents: |
[Unit]
@@ -50,7 +52,7 @@ systemd:
Requires=k8s-assets.target
After=k8s-assets.target
[Service]
Environment=KUBELET_VERSION=v1.3.6_coreos.0
Environment=KUBELET_VERSION=v1.4.0_coreos.2
Environment="RKT_OPTS=--volume dns,kind=host,source=/etc/resolv.conf \
--mount volume=dns,target=/etc/resolv.conf \
--volume var-log,kind=host,source=/var/log \
@@ -60,6 +62,8 @@ systemd:
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--api-servers={{.k8s_controller_endpoint}} \
--register-node=true \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--network-plugin=cni \
--allow-privileged=true \
--config=/etc/kubernetes/manifests \
--hostname-override={{.domain_name}} \
@@ -91,6 +95,23 @@ storage:
- "-LROOT"
{{end}}
files:
- path: /etc/kubernetes/cni/net.d/10-flannel.conf
filesystem: root
contents:
inline: |
{
"name": "podnet",
"type": "flannel",
"delegate": {
"isDefaultGateway": true
}
}
- path: /etc/kubernetes/cni/docker_opts_cni.env
filesystem: root
contents:
inline: |
DOCKER_OPT_BIP=""
DOCKER_OPT_IPMASQ=""
- path: /etc/kubernetes/worker-kubeconfig.yaml
filesystem: root
contents:
@@ -125,7 +146,7 @@ storage:
hostNetwork: true
containers:
- name: kube-proxy
image: quay.io/coreos/hyperkube:v1.3.6_coreos.0
image: quay.io/coreos/hyperkube:v1.4.0_coreos.2
command:
- /hyperkube
- proxy

@@ -58,6 +58,7 @@ systemd:
Requires=k8s-assets.target
After=k8s-assets.target
[Service]
Environment=KUBELET_VERSION=v1.4.0_coreos.2
Environment="RKT_OPTS=--volume dns,kind=host,source=/etc/resolv.conf \
--mount volume=dns,target=/etc/resolv.conf \
--volume rkt,kind=host,source=/opt/bin/host-rkt \
@@ -68,13 +69,13 @@ systemd:
--mount volume=stage,target=/tmp \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log"
Environment=KUBELET_VERSION=v1.3.6_coreos.0
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStartPre=/usr/bin/systemctl is-active flanneld.service
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--api-servers=http://127.0.0.1:8080 \
--register-schedulable=true \
--network-plugin-dir=/etc/kubernetes/cni/net.d \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--network-plugin=cni \
--container-runtime=rkt \
--rkt-path=/usr/bin/rkt \
@@ -175,7 +176,7 @@ storage:
hostNetwork: true
containers:
- name: kube-proxy
image: quay.io/coreos/hyperkube:v1.3.6_coreos.0
image: quay.io/coreos/hyperkube:v1.4.0_coreos.2
command:
- /hyperkube
- proxy
@@ -209,7 +210,7 @@ storage:
hostNetwork: true
containers:
- name: kube-apiserver
image: quay.io/coreos/hyperkube:v1.3.6_coreos.0
image: quay.io/coreos/hyperkube:v1.4.0_coreos.2
command:
- /hyperkube
- apiserver
@@ -218,7 +219,7 @@ storage:
- --allow-privileged=true
- --service-cluster-ip-range={{.k8s_service_ip_range}}
- --secure-port=443
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
- --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
- --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
- --client-ca-file=/etc/kubernetes/ssl/ca.pem
@@ -269,7 +270,7 @@ storage:
spec:
containers:
- name: kube-controller-manager
image: quay.io/coreos/hyperkube:v1.3.6_coreos.0
image: quay.io/coreos/hyperkube:v1.4.0_coreos.2
command:
- /hyperkube
- controller-manager
@@ -315,7 +316,7 @@ storage:
hostNetwork: true
containers:
- name: kube-scheduler
image: quay.io/coreos/hyperkube:v1.3.6_coreos.0
image: quay.io/coreos/hyperkube:v1.4.0_coreos.2
command:
- /hyperkube
- scheduler
@@ -331,392 +332,282 @@ storage:
port: 10251
initialDelaySeconds: 15
timeoutSeconds: 15
- path: /srv/kubernetes/manifests/kube-dns-rc.json
- path: /srv/kubernetes/manifests/kube-dns-rc.yaml
filesystem: root
contents:
inline: |
{
"apiVersion": "v1",
"kind": "ReplicationController",
"metadata": {
"labels": {
"k8s-app": "kube-dns",
"kubernetes.io/cluster-service": "true",
"version": "v17.1"
},
"name": "kube-dns-v17.1",
"namespace": "kube-system"
},
"spec": {
"replicas": 1,
"selector": {
"k8s-app": "kube-dns",
"version": "v17.1"
},
"template": {
"metadata": {
"labels": {
"k8s-app": "kube-dns",
"kubernetes.io/cluster-service": "true",
"version": "v17.1"
}
},
"spec": {
"containers": [
{
"args": [
"--domain=cluster.local.",
"--dns-port=10053"
],
"image": "gcr.io/google_containers/kubedns-amd64:1.5",
"livenessProbe": {
"failureThreshold": 5,
"httpGet": {
"path": "/healthz",
"port": 8080,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"successThreshold": 1,
"timeoutSeconds": 5
},
"name": "kubedns",
"ports": [
{
"containerPort": 10053,
"name": "dns-local",
"protocol": "UDP"
},
{
"containerPort": 10053,
"name": "dns-tcp-local",
"protocol": "TCP"
}
],
"readinessProbe": {
"httpGet": {
"path": "/readiness",
"port": 8081,
"scheme": "HTTP"
},
"initialDelaySeconds": 30,
"timeoutSeconds": 5
},
"resources": {
"limits": {
"cpu": "100m",
"memory": "170Mi"
},
"requests": {
"cpu": "100m",
"memory": "70Mi"
}
}
},
{
"args": [
"--cache-size=1000",
"--no-resolv",
"--server=127.0.0.1#10053"
],
"image": "gcr.io/google_containers/kube-dnsmasq-amd64:1.3",
"name": "dnsmasq",
"ports": [
{
"containerPort": 53,
"name": "dns",
"protocol": "UDP"
},
{
"containerPort": 53,
"name": "dns-tcp",
"protocol": "TCP"
}
]
},
{
"args": [
"-cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null",
"-port=8080",
"-quiet"
],
"image": "gcr.io/google_containers/exechealthz-amd64:1.1",
"name": "healthz",
"ports": [
{
"containerPort": 8080,
"protocol": "TCP"
}
],
"resources": {
"limits": {
"cpu": "10m",
"memory": "50Mi"
},
"requests": {
"cpu": "10m",
"memory": "50Mi"
}
}
}
],
"dnsPolicy": "Default"
}
}
}
}
- path: /srv/kubernetes/manifests/kube-dns-svc.json
apiVersion: v1
kind: ReplicationController
metadata:
name: kube-dns-v19
namespace: kube-system
labels:
k8s-app: kube-dns
version: v19
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kube-dns
version: v19
template:
metadata:
labels:
k8s-app: kube-dns
version: v19
kubernetes.io/cluster-service: "true"
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: kubedns
image: gcr.io/google_containers/kubedns-amd64:1.7
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
args:
- --domain=cluster.local.
- --dns-port=10053
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- name: dnsmasq
image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
args:
- --cache-size=1000
- --no-resolv
- --server=127.0.0.1#10053
- --log-facility=-
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- name: healthz
image: gcr.io/google_containers/exechealthz-amd64:1.1
resources:
limits:
memory: 50Mi
requests:
cpu: 10m
memory: 50Mi
args:
- -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
- -port=8080
- -quiet
ports:
- containerPort: 8080
protocol: TCP
dnsPolicy: Default
- path: /srv/kubernetes/manifests/kube-dns-svc.yaml
filesystem: root
contents:
inline: |
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"labels": {
"k8s-app": "kube-dns",
"kubernetes.io/cluster-service": "true",
"kubernetes.io/name": "KubeDNS"
},
"name": "kube-dns",
"namespace": "kube-system"
},
"spec": {
"clusterIP": "{{.k8s_dns_service_ip}}",
"ports": [
{
"name": "dns",
"port": 53,
"protocol": "UDP"
},
{
"name": "dns-tcp",
"port": 53,
"protocol": "TCP"
}
],
"selector": {
"k8s-app": "kube-dns"
}
}
}
- path: /srv/kubernetes/manifests/heapster-deployment.json
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: {{.k8s_dns_service_ip}}
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
- path: /srv/kubernetes/manifests/heapster-deployment.yaml
filesystem: root
contents:
inline: |
{
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
"labels": {
"k8s-app": "heapster",
"kubernetes.io/cluster-service": "true",
"version": "v1.1.0"
},
"name": "heapster-v1.1.0",
"namespace": "kube-system"
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"k8s-app": "heapster",
"version": "v1.1.0"
}
},
"template": {
"metadata": {
"labels": {
"k8s-app": "heapster",
"version": "v1.1.0"
}
},
"spec": {
"containers": [
{
"command": [
"/heapster",
"--source=kubernetes.summary_api:''"
],
"image": "gcr.io/google_containers/heapster:v1.1.0",
"name": "heapster",
"resources": {
"limits": {
"cpu": "100m",
"memory": "200Mi"
},
"requests": {
"cpu": "100m",
"memory": "200Mi"
}
}
},
{
"command": [
"/pod_nanny",
"--cpu=100m",
"--extra-cpu=0.5m",
"--memory=200Mi",
"--extra-memory=4Mi",
"--threshold=5",
"--deployment=heapster-v1.1.0",
"--container=heapster",
"--poll-period=300000",
"--estimator=exponential"
],
"env": [
{
"name": "MY_POD_NAME",
"valueFrom": {
"fieldRef": {
"fieldPath": "metadata.name"
}
}
},
{
"name": "MY_POD_NAMESPACE",
"valueFrom": {
"fieldRef": {
"fieldPath": "metadata.namespace"
}
}
}
],
"image": "gcr.io/google_containers/addon-resizer:1.3",
"name": "heapster-nanny",
"resources": {
"limits": {
"cpu": "50m",
"memory": "100Mi"
},
"requests": {
"cpu": "50m",
"memory": "100Mi"
}
}
}
]
}
}
}
}
- path: /srv/kubernetes/manifests/heapster-svc.json
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: heapster-v1.2.0
namespace: kube-system
labels:
k8s-app: heapster
kubernetes.io/cluster-service: "true"
version: v1.2.0
spec:
replicas: 1
selector:
matchLabels:
k8s-app: heapster
version: v1.2.0
template:
metadata:
labels:
k8s-app: heapster
version: v1.2.0
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- image: gcr.io/google_containers/heapster:v1.2.0
name: heapster
livenessProbe:
httpGet:
path: /healthz
port: 8082
scheme: HTTP
initialDelaySeconds: 180
timeoutSeconds: 5
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 80m
memory: 200Mi
requests:
cpu: 80m
memory: 200Mi
command:
- /heapster
- --source=kubernetes.summary_api:''
- image: gcr.io/google_containers/addon-resizer:1.6
name: heapster-nanny
resources:
limits:
cpu: 50m
memory: 90Mi
requests:
cpu: 50m
memory: 90Mi
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
command:
- /pod_nanny
- --cpu=80m
- --extra-cpu=4m
- --memory=200Mi
- --extra-memory=4Mi
- --threshold=5
- --deployment=heapster-v1.2.0
- --container=heapster
- --poll-period=300000
- --estimator=exponential
- path: /srv/kubernetes/manifests/heapster-svc.yaml
filesystem: root
contents:
inline: |
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"labels": {
"kubernetes.io/cluster-service": "true",
"kubernetes.io/name": "Heapster"
},
"name": "heapster",
"namespace": "kube-system"
},
"spec": {
"ports": [
{
"port": 80,
"targetPort": 8082
}
],
"selector": {
"k8s-app": "heapster"
}
}
}
- path: /srv/kubernetes/manifests/kube-dashboard-rc.json
kind: Service
apiVersion: v1
metadata:
name: heapster
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "Heapster"
spec:
ports:
- port: 80
targetPort: 8082
selector:
k8s-app: heapster
- path: /srv/kubernetes/manifests/kube-dashboard-rc.yaml
filesystem: root
contents:
inline: |
{
"apiVersion": "v1",
"kind": "ReplicationController",
"metadata": {
"labels": {
"k8s-app": "kubernetes-dashboard",
"kubernetes.io/cluster-service": "true",
"version": "v1.1.1"
},
"name": "kubernetes-dashboard-v1.1.1",
"namespace": "kube-system"
},
"spec": {
"replicas": 1,
"selector": {
"k8s-app": "kubernetes-dashboard"
},
"template": {
"metadata": {
"labels": {
"k8s-app": "kubernetes-dashboard",
"kubernetes.io/cluster-service": "true",
"version": "v1.1.1"
}
},
"spec": {
"containers": [
{
"image": "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1",
"livenessProbe": {
"httpGet": {
"path": "/",
"port": 9090
},
"initialDelaySeconds": 30,
"timeoutSeconds": 30
},
"name": "kubernetes-dashboard",
"ports": [
{
"containerPort": 9090
}
],
"resources": {
"limits": {
"cpu": "100m",
"memory": "50Mi"
},
"requests": {
"cpu": "100m",
"memory": "50Mi"
}
}
}
]
}
}
}
}
- path: /srv/kubernetes/manifests/kube-dashboard-svc.json
apiVersion: v1
kind: ReplicationController
metadata:
name: kubernetes-dashboard-v1.4.0
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
version: v1.4.0
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
version: v1.4.0
kubernetes.io/cluster-service: "true"
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: kubernetes-dashboard
image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0
resources:
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
ports:
- containerPort: 9090
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
- path: /srv/kubernetes/manifests/kube-dashboard-svc.yaml
filesystem: root
contents:
inline: |
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"labels": {
"k8s-app": "kubernetes-dashboard",
"kubernetes.io/cluster-service": "true"
},
"name": "kubernetes-dashboard",
"namespace": "kube-system"
},
"spec": {
"ports": [
{
"port": 80,
"targetPort": 9090
}
],
"selector": {
"k8s-app": "kubernetes-dashboard"
}
}
}
apiVersion: v1
kind: Service
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
spec:
selector:
k8s-app: kubernetes-dashboard
ports:
- port: 80
targetPort: 9090
- path: /opt/init-flannel
filesystem: root
mode: 0544
@@ -773,14 +664,14 @@ storage:
sleep 5
done
echo "K8S: DNS addon"
curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dns-rc.json)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/replicationcontrollers" > /dev/null
curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dns-svc.json)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services" > /dev/null
curl --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dns-rc.yaml)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/replicationcontrollers" > /dev/null
curl --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dns-svc.yaml)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services" > /dev/null
echo "K8S: Heapster addon"
curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /srv/kubernetes/manifests/heapster-deployment.json)" "http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/kube-system/deployments"
curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /srv/kubernetes/manifests/heapster-svc.json)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services"
curl --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/heapster-deployment.yaml)" "http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/kube-system/deployments"
curl --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/heapster-svc.yaml)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services"
echo "K8S: Dashboard addon"
curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dashboard-rc.json)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/replicationcontrollers" > /dev/null
curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dashboard-svc.json)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services" > /dev/null
curl --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dashboard-rc.yaml)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/replicationcontrollers" > /dev/null
curl --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dashboard-svc.yaml)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services" > /dev/null
{{ if index . "ssh_authorized_keys" }}
passwd:

@@ -52,6 +52,7 @@ systemd:
Requires=k8s-assets.target
After=k8s-assets.target
[Service]
Environment=KUBELET_VERSION=v1.4.0_coreos.2
Environment="RKT_OPTS=--volume dns,kind=host,source=/etc/resolv.conf \
--mount volume=dns,target=/etc/resolv.conf \
--volume rkt,kind=host,source=/opt/bin/host-rkt \
@@ -62,11 +63,10 @@ systemd:
--mount volume=stage,target=/tmp \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log"
Environment=KUBELET_VERSION=v1.3.6_coreos.0
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--api-servers={{.k8s_controller_endpoint}} \
--network-plugin-dir=/etc/kubernetes/cni/net.d \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--network-plugin=cni \
--container-runtime=rkt \
--rkt-path=/usr/bin/rkt \
@@ -182,7 +182,7 @@ storage:
hostNetwork: true
containers:
- name: kube-proxy
image: quay.io/coreos/hyperkube:v1.3.6_coreos.0
image: quay.io/coreos/hyperkube:v1.4.0_coreos.2
command:
- /hyperkube
- proxy