examples: Update k8s clusters to v1.3.6

* Update Kubernetes hyperkube image to v1.3.6_coreos.0
* Update kube-dns to v17.1
* Update kubernetes-dashboard to v1.1.1
This commit is contained in:
Dalton Hubble
2016-09-09 00:27:19 -07:00
parent 83d3d90b3e
commit cc675906c7
4 changed files with 129 additions and 121 deletions
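A bump like this repeats the same image tag across several templates, so a quick pre-commit sanity check is to grep the working tree for the old tag. A minimal sketch in shell (the temporary directory and `kubelet.yaml` file stand in for a real checkout; the tag strings are the ones from this commit):

```shell
# Stand-in for a checkout that has just been bumped; the file name is hypothetical.
old_tag="v1.3.4_coreos.0"
new_tag="v1.3.6_coreos.0"

workdir="$(mktemp -d)"
printf 'image: quay.io/coreos/hyperkube:%s\n' "$new_tag" > "$workdir/kubelet.yaml"

# After the bump, the old tag should not appear anywhere in the tree.
if grep -R -q "$old_tag" "$workdir"; then
  msg="stale tag found"
else
  msg="bump looks complete"
fi
echo "$msg"
rm -rf "$workdir"
```

The same check applies to the kube-dns and dashboard tags (`v15` to `v17.1`, `v1.1.0` to `v1.1.1`) changed below.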


@@ -9,7 +9,7 @@
#### Examples
* Add Kubernetes example with rkt container runtime (i.e. rktnetes)
-* Upgrade Kubernetes v1.3.4 (static manifest) example clusters
+* Upgrade Kubernetes v1.3.6 (static manifest) example clusters
* Upgrade Kubernetes v1.3.4 (self-hosted) example cluster
* Add etcd3 example cluster (PXE in-RAM or install to disk)
* Use DNS names (instead of IPs) in example clusters (except bootkube)


@@ -1,7 +1,7 @@
# Kubernetes
-The Kubernetes example provisions a 3 node Kubernetes v1.3.4 cluster with one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
+The Kubernetes example provisions a 3 node Kubernetes v1.3.6 cluster with one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
## Requirements
@@ -57,22 +57,22 @@ Client machines should boot and provision themselves. Local client VMs should ne
Get all pods.
$ kubectl --kubeconfig=examples/assets/tls/kubeconfig get pods --all-namespaces
-NAMESPACE     NAME                                        READY     STATUS    RESTARTS   AGE
-kube-system   heapster-v1.1.0-3647315203-tes6g            2/2       Running   0          14m
-kube-system   kube-apiserver-172.15.0.21                  1/1       Running   0          14m
-kube-system   kube-controller-manager-172.15.0.21         1/1       Running   0          14m
-kube-system   kube-dns-v15-nfbz4                          3/3       Running   0          14m
-kube-system   kube-proxy-172.15.0.21                      1/1       Running   0          14m
-kube-system   kube-proxy-172.15.0.22                      1/1       Running   0          14m
-kube-system   kube-proxy-172.15.0.23                      1/1       Running   0          14m
-kube-system   kube-scheduler-172.15.0.21                  1/1       Running   0          13m
-kube-system   kubernetes-dashboard-v1.1.0-m1gyy           1/1       Running   0          14m
+NAMESPACE     NAME                                        READY     STATUS    RESTARTS   AGE
+kube-system   heapster-v1.1.0-3647315203-oearg            2/2       Running   0          12m
+kube-system   kube-apiserver-node1.example.com            1/1       Running   0          13m
+kube-system   kube-controller-manager-node1.example.com   1/1       Running   0          13m
+kube-system   kube-dns-v17.1-atlcx                        3/3       Running   0          13m
+kube-system   kube-proxy-node1.example.com                1/1       Running   0          13m
+kube-system   kube-proxy-node2.example.com                1/1       Running   0          12m
+kube-system   kube-proxy-node3.example.com                1/1       Running   0          12m
+kube-system   kube-scheduler-node1.example.com            1/1       Running   0          12m
+kube-system   kubernetes-dashboard-v1.1.1-hf87z           1/1       Running   0          13m
## Kubernetes Dashboard
Access the Kubernetes Dashboard with `kubeconfig` credentials by port forwarding to the dashboard pod.
-$ kubectl --kubeconfig=examples/assets/tls/kubeconfig port-forward kubernetes-dashboard-v1.1.0-SOME-ID 9090 --namespace=kube-system
+$ kubectl --kubeconfig=examples/assets/tls/kubeconfig port-forward kubernetes-dashboard-v1.1.1-SOME-ID 9090 --namespace=kube-system
Forwarding from 127.0.0.1:9090 -> 9090
Then visit [http://127.0.0.1:9090](http://127.0.0.1:9090/).


@@ -56,9 +56,13 @@ systemd:
Requires=k8s-assets.target
After=k8s-assets.target
[Service]
-Environment="RKT_OPTS=--volume=resolv,kind=host,source=/etc/resolv.conf --mount volume=resolv,target=/etc/resolv.conf --volume var-log,kind=host,source=/var/log --mount volume=var-log,target=/var/log"
-Environment=KUBELET_VERSION=v1.3.4_coreos.0
+Environment=KUBELET_VERSION=v1.3.6_coreos.0
+Environment="RKT_OPTS=--volume dns,kind=host,source=/etc/resolv.conf \
+  --mount volume=dns,target=/etc/resolv.conf \
+  --volume var-log,kind=host,source=/var/log \
+  --mount volume=var-log,target=/var/log"
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
+ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStartPre=/usr/bin/systemctl is-active flanneld.service
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--api-servers=http://127.0.0.1:8080 \
@@ -114,7 +118,7 @@ storage:
hostNetwork: true
containers:
- name: kube-proxy
-image: quay.io/coreos/hyperkube:v1.3.4_coreos.0
+image: quay.io/coreos/hyperkube:v1.3.6_coreos.0
command:
- /hyperkube
- proxy
@@ -142,7 +146,7 @@ storage:
hostNetwork: true
containers:
- name: kube-apiserver
-image: quay.io/coreos/hyperkube:v1.3.4_coreos.0
+image: quay.io/coreos/hyperkube:v1.3.6_coreos.0
command:
- /hyperkube
- apiserver
@@ -202,7 +206,7 @@ storage:
spec:
containers:
- name: kube-controller-manager
-image: quay.io/coreos/hyperkube:v1.3.4_coreos.0
+image: quay.io/coreos/hyperkube:v1.3.6_coreos.0
command:
- /hyperkube
- controller-manager
@@ -248,7 +252,7 @@ storage:
hostNetwork: true
containers:
- name: kube-scheduler
-image: quay.io/coreos/hyperkube:v1.3.4_coreos.0
+image: quay.io/coreos/hyperkube:v1.3.6_coreos.0
command:
- /hyperkube
- scheduler
@@ -275,123 +279,123 @@ storage:
    "labels": {
      "k8s-app": "kube-dns",
      "kubernetes.io/cluster-service": "true",
-     "version": "v15"
+     "version": "v17.1"
    },
-   "name": "kube-dns-v15",
+   "name": "kube-dns-v17.1",
    "namespace": "kube-system"
  },
  "spec": {
    "replicas": 1,
    "selector": {
-     "k8s-app": "kube-dns",
-     "version": "v15"
+     "k8s-app": "kube-dns",
+     "version": "v17.1"
    },
    "template": {
      "metadata": {
        "labels": {
          "k8s-app": "kube-dns",
          "kubernetes.io/cluster-service": "true",
-         "version": "v15"
+         "version": "v17.1"
        }
      },
      "spec": {
        "containers": [
-         {
-           "args": [
-             "--domain=cluster.local.",
-             "--dns-port=10053"
-           ],
-           "image": "gcr.io/google_containers/kubedns-amd64:1.3",
-           "livenessProbe": {
-             "failureThreshold": 5,
-             "httpGet": {
-               "path": "/healthz",
-               "port": 8080,
-               "scheme": "HTTP"
-             },
-             "initialDelaySeconds": 60,
-             "successThreshold": 1,
-             "timeoutSeconds": 5
-           },
-           "name": "kubedns",
-           "ports": [
-             {
-               "containerPort": 10053,
-               "name": "dns-local",
-               "protocol": "UDP"
-             },
-             {
-               "containerPort": 10053,
-               "name": "dns-tcp-local",
-               "protocol": "TCP"
-             }
-           ],
-           "readinessProbe": {
-             "httpGet": {
-               "path": "/readiness",
-               "port": 8081,
-               "scheme": "HTTP"
-             },
-             "initialDelaySeconds": 30,
-             "timeoutSeconds": 5
-           },
-           "resources": {
-             "limits": {
-               "cpu": "100m",
-               "memory": "200Mi"
-             },
-             "requests": {
-               "cpu": "100m",
-               "memory": "50Mi"
-             }
-           }
-         },
-         {
-           "args": [
-             "--cache-size=1000",
-             "--no-resolv",
-             "--server=127.0.0.1#10053"
-           ],
-           "image": "gcr.io/google_containers/kube-dnsmasq-amd64:1.3",
-           "name": "dnsmasq",
-           "ports": [
-             {
-               "containerPort": 53,
-               "name": "dns",
-               "protocol": "UDP"
-             },
-             {
-               "containerPort": 53,
-               "name": "dns-tcp",
-               "protocol": "TCP"
-             }
-           ]
-         },
-         {
-           "args": [
-             "-cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null",
-             "-port=8080",
-             "-quiet"
-           ],
-           "image": "gcr.io/google_containers/exechealthz-amd64:1.0",
-           "name": "healthz",
-           "ports": [
-             {
-               "containerPort": 8080,
-               "protocol": "TCP"
-             }
-           ],
-           "resources": {
-             "limits": {
-               "cpu": "10m",
-               "memory": "20Mi"
-             },
-             "requests": {
-               "cpu": "10m",
-               "memory": "20Mi"
-             }
-           }
-         }
+         {
+           "args": [
+             "--domain=cluster.local.",
+             "--dns-port=10053"
+           ],
+           "image": "gcr.io/google_containers/kubedns-amd64:1.5",
+           "livenessProbe": {
+             "failureThreshold": 5,
+             "httpGet": {
+               "path": "/healthz",
+               "port": 8080,
+               "scheme": "HTTP"
+             },
+             "initialDelaySeconds": 60,
+             "successThreshold": 1,
+             "timeoutSeconds": 5
+           },
+           "name": "kubedns",
+           "ports": [
+             {
+               "containerPort": 10053,
+               "name": "dns-local",
+               "protocol": "UDP"
+             },
+             {
+               "containerPort": 10053,
+               "name": "dns-tcp-local",
+               "protocol": "TCP"
+             }
+           ],
+           "readinessProbe": {
+             "httpGet": {
+               "path": "/readiness",
+               "port": 8081,
+               "scheme": "HTTP"
+             },
+             "initialDelaySeconds": 30,
+             "timeoutSeconds": 5
+           },
+           "resources": {
+             "limits": {
+               "cpu": "100m",
+               "memory": "170Mi"
+             },
+             "requests": {
+               "cpu": "100m",
+               "memory": "70Mi"
+             }
+           }
+         },
+         {
+           "args": [
+             "--cache-size=1000",
+             "--no-resolv",
+             "--server=127.0.0.1#10053"
+           ],
+           "image": "gcr.io/google_containers/kube-dnsmasq-amd64:1.3",
+           "name": "dnsmasq",
+           "ports": [
+             {
+               "containerPort": 53,
+               "name": "dns",
+               "protocol": "UDP"
+             },
+             {
+               "containerPort": 53,
+               "name": "dns-tcp",
+               "protocol": "TCP"
+             }
+           ]
+         },
+         {
+           "args": [
+             "-cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null",
+             "-port=8080",
+             "-quiet"
+           ],
+           "image": "gcr.io/google_containers/exechealthz-amd64:1.1",
+           "name": "healthz",
+           "ports": [
+             {
+               "containerPort": 8080,
+               "protocol": "TCP"
+             }
+           ],
+           "resources": {
+             "limits": {
+               "cpu": "10m",
+               "memory": "50Mi"
+             },
+             "requests": {
+               "cpu": "10m",
+               "memory": "50Mi"
+             }
+           }
+         }
        ],
        "dnsPolicy": "Default"
      }
@@ -571,9 +575,9 @@ storage:
"labels": {
"k8s-app": "kubernetes-dashboard",
"kubernetes.io/cluster-service": "true",
-"version": "v1.1.0"
+"version": "v1.1.1"
},
-"name": "kubernetes-dashboard-v1.1.0",
+"name": "kubernetes-dashboard-v1.1.1",
"namespace": "kube-system"
},
"spec": {
@@ -586,13 +590,13 @@ storage:
"labels": {
"k8s-app": "kubernetes-dashboard",
"kubernetes.io/cluster-service": "true",
-"version": "v1.1.0"
+"version": "v1.1.1"
}
},
"spec": {
"containers": [
{
-"image": "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0",
+"image": "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1",
"livenessProbe": {
"httpGet": {
"path": "/",


@@ -50,9 +50,13 @@ systemd:
Requires=k8s-assets.target
After=k8s-assets.target
[Service]
-Environment="RKT_OPTS=--volume=resolv,kind=host,source=/etc/resolv.conf --mount volume=resolv,target=/etc/resolv.conf --volume var-log,kind=host,source=/var/log --mount volume=var-log,target=/var/log"
-Environment=KUBELET_VERSION=v1.3.4_coreos.0
+Environment=KUBELET_VERSION=v1.3.6_coreos.0
+Environment="RKT_OPTS=--volume dns,kind=host,source=/etc/resolv.conf \
+  --mount volume=dns,target=/etc/resolv.conf \
+  --volume var-log,kind=host,source=/var/log \
+  --mount volume=var-log,target=/var/log"
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
+ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--api-servers={{.k8s_controller_endpoint}} \
--register-node=true \
@@ -121,7 +125,7 @@ storage:
hostNetwork: true
containers:
- name: kube-proxy
-image: quay.io/coreos/hyperkube:v1.3.4_coreos.0
+image: quay.io/coreos/hyperkube:v1.3.6_coreos.0
command:
- /hyperkube
- proxy