Merge pull request #428 from coreos/update-kubernetes

Update example Kubernetes clusters to v1.5.2

Dalton Hubble (committed by GitHub)
2017-01-30 14:26:13 -08:00
14 changed files with 54 additions and 38 deletions


@@ -4,6 +4,9 @@ Notable changes between releases.
+## Latest
+* Upgrade Kubernetes v1.5.2 (static) example clusters
+* Upgrade Kubernetes v1.5.2 (self-hosted) example cluster
## v0.5.0 (2017-01-23)
* Rename project to CoreOS `matchbox`!


@@ -1,7 +1,7 @@
# Self-Hosted Kubernetes
-The self-hosted Kubernetes example provisions a 3 node "self-hosted" Kubernetes v1.5.1 cluster. On-host kubelets wait for an apiserver to become reachable, then yield to kubelet pods scheduled via daemonset. [bootkube](https://github.com/kubernetes-incubator/bootkube) is run on any controller to bootstrap a temporary apiserver which schedules control plane components as pods before exiting. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
+The self-hosted Kubernetes example provisions a 3 node "self-hosted" Kubernetes v1.5.2 cluster. On-host kubelets wait for an apiserver to become reachable, then yield to kubelet pods scheduled via daemonset. [bootkube](https://github.com/kubernetes-incubator/bootkube) is run on any controller to bootstrap a temporary apiserver which schedules control plane components as pods before exiting. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
## Requirements
@@ -12,12 +12,12 @@ Ensure that you've gone through the [matchbox with rkt](getting-started-rkt.md)
* Create the example libvirt client VMs
* `/etc/hosts` entries for `node[1-3].example.com` (or pass custom names to `k8s-certgen`)
-Install [bootkube](https://github.com/kubernetes-incubator/bootkube/releases/tag/v0.3.4) v0.3.4 and add it somewhere on your PATH.
+Install [bootkube](https://github.com/kubernetes-incubator/bootkube/releases/tag/v0.3.5) v0.3.5 and add it somewhere on your PATH.
-$ wget https://github.com/kubernetes-incubator/bootkube/releases/download/v0.3.4/bootkube.tar.gz
+$ wget https://github.com/kubernetes-incubator/bootkube/releases/download/v0.3.5/bootkube.tar.gz
$ tar xzf bootkube.tar.gz
$ ./bin/linux/bootkube version
-Version: v0.3.4
+Version: v0.3.5
## Examples
@@ -64,8 +64,8 @@ Secure copy the `kubeconfig` to `/etc/kubernetes/kubeconfig` on **every** node w
Secure copy the `bootkube` generated assets to any controller node and run `bootkube-start`.
-scp -r assets core@node1.example.com:/home/core/assets
-ssh core@node1.example.com 'sudo systemctl start bootkube'
+scp -r assets core@node1.example.com:/home/core
+ssh core@node1.example.com 'sudo mv assets /opt/bootkube/assets && sudo systemctl start bootkube'
Optionally watch the Kubernetes control plane bootstrapping with the bootkube temporary api-server. You will see quite a bit of output.
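The copy-and-start steps above (scp the rendered assets, then move them into place and start the bootkube unit) can be wrapped in a small helper. This is a hypothetical sketch, not part of the repo; the node name and paths follow the example cluster:

```shell
# Write a throwaway deploy script (deploy-assets.sh is an invented name).
cat > deploy-assets.sh <<'EOF'
#!/bin/sh
# Copy bootkube-rendered assets to a controller and launch bootstrapping.
set -e
NODE="${1:-node1.example.com}"
scp -r assets "core@${NODE}:/home/core"
ssh "core@${NODE}" 'sudo mv assets /opt/bootkube/assets && sudo systemctl start bootkube'
EOF
chmod +x deploy-assets.sh
```

Run as `./deploy-assets.sh node1.example.com` from the directory containing `assets`.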
@@ -113,4 +113,4 @@ Try deleting pods to see that the cluster is resilient to failures and machine r
## Going Further
-[Learn](bootkube-upgrades.md) to upgrade a self-hosted Kubernetes cluster.
\ No newline at end of file
+[Learn](bootkube-upgrades.md) to upgrade a self-hosted Kubernetes cluster.


@@ -1,7 +1,7 @@
# Kubernetes
-The Kubernetes example provisions a 3 node Kubernetes v1.5.1 cluster with one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
+The Kubernetes example provisions a 3 node Kubernetes v1.5.2 cluster with one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
## Requirements


@@ -1,6 +1,6 @@
# Kubernetes (with rkt)
-The `rktnetes` example provisions a 3 node Kubernetes v1.4.7 cluster with [rkt](https://github.com/coreos/rkt) as the container runtime. The cluster has one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
+The `rktnetes` example provisions a 3 node Kubernetes v1.5.2 cluster with [rkt](https://github.com/coreos/rkt) as the container runtime. The cluster has one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
## Requirements
@@ -74,4 +74,4 @@ Access the Kubernetes Dashboard with `kubeconfig` credentials by port forwarding
Then visit [http://127.0.0.1:9090](http://127.0.0.1:9090/).
-<img src='img/kubernetes-dashboard.png' class="img-center" alt="Kubernetes Dashboard"/>
\ No newline at end of file
+<img src='img/kubernetes-dashboard.png' class="img-center" alt="Kubernetes Dashboard"/>
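The port-forward step mentioned above can be captured in a tiny wrapper. A sketch under stated assumptions: the dashboard pod runs in the `kube-system` namespace, `kubeconfig` sits in the working directory, and the generated pod name is passed as an argument (script name and paths are invented):

```shell
cat > dashboard-forward.sh <<'EOF'
#!/bin/sh
# Forward local port 9090 to the dashboard pod given as $1.
exec kubectl --kubeconfig=kubeconfig port-forward "$1" 9090:9090 --namespace=kube-system
EOF
chmod +x dashboard-forward.sh
```

Then `./dashboard-forward.sh <dashboard-pod-name>` and browse to 127.0.0.1:9090.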


@@ -11,9 +11,6 @@
"etcd_initial_cluster": "node1=http://node1.example.com:2380",
"etcd_name": "node1",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_etcd_endpoints": "http://node1.example.com:2379",
"k8s_pod_network": "10.2.0.0/16",
"k8s_service_ip_range": "10.3.0.0/24",
"ssh_authorized_keys": [
"ADD ME"
]


@@ -10,8 +10,6 @@
"domain_name": "node2.example.com",
"etcd_initial_cluster": "node1=http://node1.example.com:2380",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_pod_network": "10.2.0.0/16",
"k8s_service_ip_range": "10.3.0.0/24",
"ssh_authorized_keys": [
"ADD ME"
]


@@ -10,8 +10,6 @@
"domain_name": "node3.example.com",
"etcd_initial_cluster": "node1=http://node1.example.com:2380",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_pod_network": "10.2.0.0/16",
"k8s_service_ip_range": "10.3.0.0/24",
"ssh_authorized_keys": [
"ADD ME"
]


@@ -10,12 +10,9 @@
"etcd_initial_cluster": "node1=http://node1.example.com:2380",
"etcd_name": "node1",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_etcd_endpoints": "http://node1.example.com:2379",
"k8s_pod_network": "10.2.0.0/16",
"k8s_service_ip_range": "10.3.0.0/24",
"pxe": "true",
"ssh_authorized_keys": [
"ADD ME"
]
}
}
}
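Group files like the ones above are easy to break when hand-editing, e.g. a dangling comma left behind after removing a metadata key. A quick validity check with Python's stdlib JSON tool, run against a throwaway sample (field values and the profile name are illustrative, not taken from the repo):

```shell
# Create a small sample group file to check (group-sample.json is invented).
cat > group-sample.json <<'EOF'
{
  "profile": "bootkube-controller",
  "selector": {"mac": "52:54:00:a1:9c:ae"},
  "metadata": {
    "domain_name": "node1.example.com",
    "ssh_authorized_keys": ["ADD ME"]
  }
}
EOF
# json.tool exits non-zero on malformed JSON, so this catches edit mistakes.
python3 -m json.tool group-sample.json > /dev/null && echo valid
```

The same one-liner works on the real files under `groups/` after editing them.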


@@ -9,8 +9,6 @@
"domain_name": "node2.example.com",
"etcd_initial_cluster": "node1=http://node1.example.com:2380",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_pod_network": "10.2.0.0/16",
"k8s_service_ip_range": "10.3.0.0/24",
"pxe": "true",
"ssh_authorized_keys": [
"ADD ME"


@@ -9,8 +9,6 @@
"domain_name": "node3.example.com",
"etcd_initial_cluster": "node1=http://node1.example.com:2380",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_pod_network": "10.2.0.0/16",
"k8s_service_ip_range": "10.3.0.0/24",
"pxe": "true",
"ssh_authorized_keys": [
"ADD ME"


@@ -31,6 +31,19 @@ systemd:
PathExists=/etc/kubernetes/kubeconfig
[Install]
WantedBy=multi-user.target
+- name: wait-for-dns.service
+enable: true
+contents: |
+[Unit]
+Description=Wait for DNS entries
+Wants=systemd-resolved.service
+Before=kubelet.service
+[Service]
+Type=oneshot
+RemainAfterExit=true
+ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
+[Install]
+RequiredBy=kubelet.service
- name: kubelet.service
contents: |
[Unit]
@@ -74,7 +87,8 @@ systemd:
Description=Bootstrap a Kubernetes control plane with a temp api-server
[Service]
Type=simple
-ExecStart=/home/core/bootkube-start
+WorkingDirectory=/opt/bootkube
+ExecStart=/opt/bootkube/bootkube-start
storage:
{{ if index . "pxe" }}
disks:
@@ -99,7 +113,7 @@ storage:
contents:
inline: |
KUBELET_ACI=quay.io/coreos/hyperkube
-KUBELET_VERSION=v1.5.1_coreos.0
+KUBELET_VERSION=v1.5.2_coreos.0
- path: /etc/hostname
filesystem: root
mode: 0644
@@ -111,7 +125,7 @@ storage:
contents:
inline: |
fs.inotify.max_user_watches=16184
-- path: /home/core/bootkube-start
+- path: /opt/bootkube/bootkube-start
filesystem: root
mode: 0544
user:
@@ -124,8 +138,8 @@ storage:
# Wrapper for bootkube start
set -e
BOOTKUBE_ACI="${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
-BOOTKUBE_VERSION="${BOOTKUBE_VERSION:-v0.3.4}"
-BOOTKUBE_ASSETS="${BOOTKUBE_ASSETS:-/home/core/assets}"
+BOOTKUBE_VERSION="${BOOTKUBE_VERSION:-v0.3.5}"
+BOOTKUBE_ASSETS="${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
exec /usr/bin/rkt run \
--trust-keys-from-https \
--volume assets,kind=host,source=$BOOTKUBE_ASSETS \
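The wait-for-dns unit in the config above blocks the kubelet until `/etc/resolv.conf` contains a real entry, i.e. a line starting with neither `#` nor whitespace. The grep it uses can be exercised against a throwaway file (paths and the nameserver value are examples):

```shell
check_resolv() {
  # Succeeds once the file has a line starting with neither '#' nor whitespace.
  grep -q '^[^#[:space:]]' "$1" && echo ready || echo waiting
}
tmp=$(mktemp)
printf '# Generated by resolvconf\n' > "$tmp"
before=$(check_resolv "$tmp")     # comments only: no real entry yet
printf 'nameserver 8.8.8.8\n' >> "$tmp"
after=$(check_resolv "$tmp")      # a real nameserver line now exists
echo "$before -> $after"          # prints: waiting -> ready
rm -f "$tmp"
```

The unit simply loops this check once per second until it succeeds.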


@@ -27,6 +27,19 @@ systemd:
PathExists=/etc/kubernetes/kubeconfig
[Install]
WantedBy=multi-user.target
+- name: wait-for-dns.service
+enable: true
+contents: |
+[Unit]
+Description=Wait for DNS entries
+Wants=systemd-resolved.service
+Before=kubelet.service
+[Service]
+Type=oneshot
+RemainAfterExit=true
+ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
+[Install]
+RequiredBy=kubelet.service
- name: kubelet.service
contents: |
[Unit]
@@ -88,7 +101,7 @@ storage:
contents:
inline: |
KUBELET_ACI=quay.io/coreos/hyperkube
-KUBELET_VERSION=v1.5.1_coreos.0
+KUBELET_VERSION=v1.5.2_coreos.0
- path: /etc/hostname
filesystem: root
mode: 0644


@@ -64,7 +64,7 @@ systemd:
Requires=k8s-assets.target
After=k8s-assets.target
[Service]
-Environment=KUBELET_VERSION=v1.5.1_coreos.0
+Environment=KUBELET_VERSION=v1.5.2_coreos.0
Environment="RKT_OPTS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume dns,kind=host,source=/etc/resolv.conf \
--mount volume=dns,target=/etc/resolv.conf \
@@ -194,7 +194,7 @@ storage:
hostNetwork: true
containers:
- name: kube-proxy
-image: quay.io/coreos/hyperkube:v1.5.1_coreos.0
+image: quay.io/coreos/hyperkube:v1.5.2_coreos.0
command:
- /hyperkube
- proxy
@@ -228,7 +228,7 @@ storage:
hostNetwork: true
containers:
- name: kube-apiserver
-image: quay.io/coreos/hyperkube:v1.5.1_coreos.0
+image: quay.io/coreos/hyperkube:v1.5.2_coreos.0
command:
- /hyperkube
- apiserver
@@ -289,7 +289,7 @@ storage:
spec:
containers:
- name: kube-controller-manager
-image: quay.io/coreos/hyperkube:v1.5.1_coreos.0
+image: quay.io/coreos/hyperkube:v1.5.2_coreos.0
command:
- /hyperkube
- controller-manager
@@ -335,7 +335,7 @@ storage:
hostNetwork: true
containers:
- name: kube-scheduler
-image: quay.io/coreos/hyperkube:v1.5.1_coreos.0
+image: quay.io/coreos/hyperkube:v1.5.2_coreos.0
command:
- /hyperkube
- scheduler


@@ -58,7 +58,7 @@ systemd:
Requires=k8s-assets.target
After=k8s-assets.target
[Service]
-Environment=KUBELET_VERSION=v1.5.1_coreos.0
+Environment=KUBELET_VERSION=v1.5.2_coreos.0
Environment="RKT_OPTS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume dns,kind=host,source=/etc/resolv.conf \
--mount volume=dns,target=/etc/resolv.conf \
@@ -201,7 +201,7 @@ storage:
hostNetwork: true
containers:
- name: kube-proxy
-image: quay.io/coreos/hyperkube:v1.5.1_coreos.0
+image: quay.io/coreos/hyperkube:v1.5.2_coreos.0
command:
- /hyperkube
- proxy
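A version bump like this one has to change every `KUBELET_VERSION` and hyperkube image tag in lockstep; a missed file leaves nodes on mixed versions. A quick post-edit check, sketched against a sample directory (the directory name and file are illustrative; point the grep at the real profile configs):

```shell
# Stand in for the edited profile configs with one sample file.
mkdir -p profiles-sample
cat > profiles-sample/controller.yaml <<'EOF'
Environment=KUBELET_VERSION=v1.5.2_coreos.0
image: quay.io/coreos/hyperkube:v1.5.2_coreos.0
EOF
# Any surviving old version string means an incomplete upgrade.
if grep -r 'v1\.5\.1' profiles-sample > /dev/null; then
  echo "stale version strings remain"
else
  echo "all files updated"
fi
```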