Merge pull request #442 from ElijahCaine/codeblock-std
Docs: standardize codeblocks to ``` fencing

Serves a static iPXE boot script which gathers client machine attributes and chainloads to the iPXE endpoint. Use DHCP/TFTP to point iPXE clients to this endpoint as the next-server.

```
GET http://matchbox.foo/boot.ipxe
GET http://matchbox.foo/boot.ipxe.0 // for dnsmasq
```

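With dnsmasq, for example, pointing iPXE clients at this script is a one-line option (a sketch; adjust the host and port to your deployment):

```
dhcp-boot=tag:ipxe,http://matchbox.foo:8080/boot.ipxe
```
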
**Response**

```
#!ipxe
chain ipxe?uuid=${uuid}&mac=${mac:hexhyp}&domain=${domain}&hostname=${hostname}&serial=${serial}
```

Clients booted with the `/boot.ipxe` endpoint will introspect and make a request to `/ipxe` with their `uuid`, `mac`, `domain`, `hostname`, and `serial` values as query arguments.

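For example, a booting client might follow up with a request like the following (values are illustrative):

```
GET http://matchbox.foo/ipxe?uuid=16e7d8a7-bfa9-428b-9117-363341bb330b&mac=52-54-00-a1-9c-ae&domain=example.com&hostname=node1&serial=
```
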
Finds the profile for the machine and renders the network boot config (kernel, options, initrd) as an iPXE script.

```
GET http://matchbox.foo/ipxe?label=value
```

**Query Parameters**

**Response**

```
#!ipxe
kernel /assets/coreos/1235.9.0/coreos_production_pxe.vmlinuz coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp} coreos.first_boot=1 coreos.autologin
initrd /assets/coreos/1235.9.0/coreos_production_pxe_image.cpio.gz
boot
```

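You can preview the script matchbox would render for a known machine with curl (a sketch, assuming the default HTTP port and the example groups):

```sh
$ curl 'http://matchbox.foo:8080/ipxe?mac=52:54:00:a1:9c:ae'
```
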
## GRUB2

Finds the profile for the machine and renders the network boot config as a GRUB config. Use DHCP/TFTP to point GRUB clients to this endpoint as the next-server.

```
GET http://matchbox.foo/grub?label=value
```

**Query Parameters**

**Response**

```
default=0
timeout=1
menuentry "CoreOS" {
  echo "Loading kernel"
  linuxefi "(http;matchbox.foo:8080)/assets/coreos/1235.9.0/coreos_production_pxe.vmlinuz" "coreos.autologin" "coreos.config.url=http://matchbox.foo:8080/ignition" "coreos.first_boot"
  echo "Loading initrd"
  initrdefi "(http;matchbox.foo:8080)/assets/coreos/1235.9.0/coreos_production_pxe_image.cpio.gz"
}
```

## Cloud Config

Finds the profile matching the machine and renders the corresponding Cloud-Config with group metadata, selectors, and query params.

```
GET http://matchbox.foo/cloud?label=value
```

**Query Parameters**

**Response**

```yaml
#cloud-config
coreos:
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
```

## Ignition Config

Finds the profile matching the machine and renders the corresponding Ignition Config with group metadata, selectors, and query params.

```
GET http://matchbox.foo/ignition?label=value
```

**Query Parameters**

**Response**

```json
{
  "ignition": { "version": "2.0.0" },
  "systemd": {
    "units": [{
      "name": "example.service",
      "enable": true,
      "contents": "[Service]\nType=oneshot\nExecStart=/usr/bin/echo Hello World\n\n[Install]\nWantedBy=multi-user.target"
    }]
  }
}
```

## Generic Config

Finds the profile matching the machine and renders the corresponding generic config with group metadata, selectors, and query params.

```
GET http://matchbox.foo/generic?label=value
```

**Query Parameters**

**Response**

```json
{
  "uuid": "",
  "mac": "52:54:00:a1:9c:ae",
  "osInstalled": true,
  "rawQuery": "mac=52:54:00:a1:9c:ae&os=installed"
}
```

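A machine (or you, while debugging) can fetch its generic config with an ordinary HTTP request; for instance, matching the response above (illustrative):

```sh
$ curl 'http://matchbox.foo/generic?mac=52:54:00:a1:9c:ae&os=installed'
```
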
## Metadata

Finds the matching machine group and renders the group metadata, selectors, and query params in an "env file" style response.

```
GET http://matchbox.foo/metadata?mac=52-54-00-a1-9c-ae&foo=bar&count=3&gate=true
```

**Query Parameters**

**Response**

```
META=data
ETCD_NAME=node1
SOME_NESTED_DATA=some-value
MAC=52:54:00:a1:9c:ae
REQUEST_QUERY_MAC=52:54:00:a1:9c:ae
REQUEST_QUERY_FOO=bar
REQUEST_QUERY_COUNT=3
REQUEST_QUERY_GATE=true
REQUEST_RAW_QUERY=mac=52-54-00-a1-9c-ae&foo=bar&count=3&gate=true
```

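Because the response is an env file, a client can source it directly; a minimal sketch, reusing the request above:

```sh
$ curl -s 'http://matchbox.foo/metadata?mac=52-54-00-a1-9c-ae&foo=bar&count=3&gate=true' -o metadata.env
$ source metadata.env && echo $ETCD_NAME
node1
```
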
## OpenPGP Signatures

OpenPGP signature endpoints serve detached binary and ASCII armored signatures.

Get a config and its detached ASCII armored signature.

```
GET http://matchbox.foo/ipxe?label=value
GET http://matchbox.foo/ipxe.asc?label=value
```

**Response**

## Assets

If you need to serve static assets (e.g. kernel, initrd), `matchbox` can serve arbitrary assets from the `-assets-path`.

```
matchbox.foo/assets/
└── coreos
    ├── 1235.9.0
    │   ├── coreos_production_pxe.vmlinuz
    │   └── coreos_production_pxe_image.cpio.gz
    └── 1153.0.0
        ├── coreos_production_pxe.vmlinuz
        └── coreos_production_pxe_image.cpio.gz
```

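A quick way to confirm an asset is being served (a sketch; adjust the host and version to your deployment):

```sh
$ curl -I http://matchbox.foo:8080/assets/coreos/1235.9.0/coreos_production_pxe.vmlinuz
```
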
Let's upgrade a self-hosted Kubernetes v1.4.1 cluster to v1.4.3 as an example.

Show the control plane daemonsets and deployments which will need to be updated.

```sh
$ kubectl get daemonsets -n=kube-system
NAME             DESIRED   CURRENT   NODE-SELECTOR   AGE
kube-apiserver   1         1         master=true     5m
kube-proxy       3         3         <none>          5m
kubelet          3         3         <none>          5m

$ kubectl get deployments -n=kube-system
NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-controller-manager   1         1         1            1           5m
kube-dns-v20              1         1         1            1           5m
kube-scheduler            1         1         1            1           5m
```

Check the current Kubernetes version.

```sh
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"a16c0a7f71a6f93c7e0f222d961f4675cd97a46b", GitTreeState:"clean", BuildDate:"2016-09-26T18:16:57Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1+coreos.0", GitCommit:"b7a02f46b972c5211e5c04fdb1d5b86ac16c00eb", GitTreeState:"clean", BuildDate:"2016-10-11T20:13:55Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
```

In this case, Kubernetes is `v1.4.1+coreos.0` and our goal is to upgrade to `v1.4.3+coreos.0`. First, update the control plane pods. Then the kubelets and proxies on all nodes.

### kube-apiserver

Edit the kube-apiserver daemonset. Change the container image name to `quay.io/coreos/hyperkube:v1.4.3_coreos.0`.

```sh
$ kubectl edit daemonset kube-apiserver -n=kube-system
```

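In the editor, the change is the daemonset's container image field; an illustrative excerpt (not the full manifest):

```yaml
spec:
  template:
    spec:
      containers:
        - name: kube-apiserver
          image: quay.io/coreos/hyperkube:v1.4.3_coreos.0
```
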
Since daemonsets don't yet support rolling updates, manually delete each apiserver pod one by one and wait for each to be re-scheduled.

```sh
$ kubectl get pods -n=kube-system
# WARNING: Self-hosted Kubernetes is still new and this may fail
$ kubectl delete pod kube-apiserver-s62kb -n=kube-system
```

If you only have one apiserver, your cluster will be temporarily unavailable. Remember, the hyperkube image is quite large and pulling it can take a minute.

```sh
$ kubectl get pods -n=kube-system
NAME                                       READY     STATUS    RESTARTS   AGE
kube-api-checkpoint-node1.example.com      1/1       Running   0          12m
kube-apiserver-vyg3t                       2/2       Running   0          2m
kube-controller-manager-1510822774-qebia   1/1       Running   2          12m
kube-dns-v20-3531996453-0tlv9              3/3       Running   0          12m
kube-proxy-8jthl                           1/1       Running   0          12m
kube-proxy-bnvgy                           1/1       Running   0          12m
kube-proxy-gkyx8                           1/1       Running   0          12m
kube-scheduler-2099299605-67ezp            1/1       Running   2          12m
kubelet-exe5k                              1/1       Running   0          12m
kubelet-p3g98                              1/1       Running   0          12m
kubelet-quhhg                              1/1       Running   0          12m
```

### kube-scheduler

Edit the scheduler deployment to rolling update the scheduler. Change the container image name for the hyperkube.

```sh
$ kubectl edit deployments kube-scheduler -n=kube-system
```

Wait for the scheduler to be deployed.

### kube-controller-manager

Edit the controller-manager deployment to rolling update the controller manager. Change the container image name for the hyperkube.

```sh
$ kubectl edit deployments kube-controller-manager -n=kube-system
```

Wait for the controller manager to be deployed.

```sh
$ kubectl get pods -n=kube-system
NAME                                       READY     STATUS    RESTARTS   AGE
kube-api-checkpoint-node1.example.com      1/1       Running   0          28m
kube-apiserver-vyg3t                       2/2       Running   0          18m
kube-controller-manager-1709527928-zj8c4   1/1       Running   0          4m
kube-dns-v20-3531996453-0tlv9              3/3       Running   0          28m
kube-proxy-8jthl                           1/1       Running   0          28m
kube-proxy-bnvgy                           1/1       Running   0          28m
kube-proxy-gkyx8                           1/1       Running   0          28m
kube-scheduler-2255275287-hti6w            1/1       Running   0          6m
kubelet-exe5k                              1/1       Running   0          28m
kubelet-p3g98                              1/1       Running   0          28m
kubelet-quhhg                              1/1       Running   0          28m
```

### Verify

At this point, the control plane components have been upgraded to v1.4.3.

```sh
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"a16c0a7f71a6f93c7e0f222d961f4675cd97a46b", GitTreeState:"clean", BuildDate:"2016-09-26T18:16:57Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.3+coreos.0", GitCommit:"7819c84f25e8c661321ee80d6b9fa5f4ff06676f", GitTreeState:"clean", BuildDate:"2016-10-17T21:19:17Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
```

Finally, upgrade the kubelets and kube-proxies.

Show the current kubelet and kube-proxy version on each node.

```sh
$ kubectl get nodes -o yaml | grep 'kubeletVersion\|kubeProxyVersion'
kubeProxyVersion: v1.4.1+coreos.0
kubeletVersion: v1.4.1+coreos.0
kubeProxyVersion: v1.4.1+coreos.0
kubeletVersion: v1.4.1+coreos.0
kubeProxyVersion: v1.4.1+coreos.0
kubeletVersion: v1.4.1+coreos.0
```

Edit the kubelet and kube-proxy daemonsets. Change the container image name for the hyperkube.

```sh
$ kubectl edit daemonset kubelet -n=kube-system
$ kubectl edit daemonset kube-proxy -n=kube-system
```

Since daemonsets don't yet support rolling updates, manually delete each kubelet and each kube-proxy pod. The daemonset controller will create new (upgraded) replicas.

```sh
$ kubectl get pods -n=kube-system
$ kubectl delete pod kubelet-quhhg
...repeat
$ kubectl delete pod kube-proxy-8jthl -n=kube-system
...repeat

$ kubectl get pods -n=kube-system
NAME                                       READY     STATUS    RESTARTS   AGE
kube-api-checkpoint-node1.example.com      1/1       Running   0          1h
kube-apiserver-vyg3t                       2/2       Running   0          1h
kube-controller-manager-1709527928-zj8c4   1/1       Running   0          47m
kube-dns-v20-3531996453-0tlv9              3/3       Running   0          1h
kube-proxy-6dbne                           1/1       Running   0          1s
kube-proxy-sm4jv                           1/1       Running   0          8s
kube-proxy-xmuao                           1/1       Running   0          14s
kube-scheduler-2255275287-hti6w            1/1       Running   0          49m
kubelet-hfdwr                              1/1       Running   0          38s
kubelet-oia47                              1/1       Running   0          52s
kubelet-s6dab                              1/1       Running   0          59s
```

## Verify

Verify that the kubelet and kube-proxy on each node have been upgraded.

```sh
$ kubectl get nodes -o yaml | grep 'kubeletVersion\|kubeProxyVersion'
kubeProxyVersion: v1.4.3+coreos.0
kubeletVersion: v1.4.3+coreos.0
kubeProxyVersion: v1.4.3+coreos.0
kubeletVersion: v1.4.3+coreos.0
kubeProxyVersion: v1.4.3+coreos.0
kubeletVersion: v1.4.3+coreos.0
```

Now, Kubernetes components have been upgraded to a new version of Kubernetes!

Bare-metal or virtualized self-hosted Kubernetes clusters can be upgraded in place in 5-10 minutes. Here is a bare-metal example:

```sh
$ kubectl -n=kube-system get pods
NAME                                       READY     STATUS    RESTARTS   AGE
kube-api-checkpoint-ibm0.lab.dghubble.io   1/1       Running   0          2d
kube-apiserver-j6atn                       2/2       Running   0          5m
kube-controller-manager-1709527928-y05n5   1/1       Running   0          1m
kube-dns-v20-3531996453-zwbl8              3/3       Running   0          2d
kube-proxy-e49p5                           1/1       Running   0          14s
kube-proxy-eu5dc                           1/1       Running   0          8s
kube-proxy-gjrzq                           1/1       Running   0          3s
kube-scheduler-2255275287-96n56            1/1       Running   0          2m
kubelet-9ob0e                              1/1       Running   0          19s
kubelet-bvwp0                              1/1       Running   0          14s
kubelet-xlrql                              1/1       Running   0          24s
```

Check upstream for updates to addons like `kube-dns` or `kube-dashboard` and update them like any other applications. Some kube-system components use version labels and you may wish to clean those up as well.

Ensure that you've gone through the [matchbox with rkt](getting-started-rkt.md) guide.

Install [bootkube](https://github.com/kubernetes-incubator/bootkube/releases) v0.3.7 and add it somewhere on your PATH.

```sh
$ wget https://github.com/kubernetes-incubator/bootkube/releases/download/v0.3.7/bootkube.tar.gz
$ tar xzf bootkube.tar.gz
$ ./bin/linux/bootkube version
Version: v0.3.7
```

## Examples

The [examples](../examples) statically assign IP addresses to libvirt client VMs.

Download the CoreOS image assets referenced in the target [profile](../examples/profiles).

```sh
$ ./scripts/get-coreos stable 1235.9.0 ./examples/assets
```

Add your SSH public key to each machine group definition [as shown](../examples/README.md#ssh-keys).

```json
{
  "profile": "bootkube-worker",
  "metadata": {
    "ssh_authorized_keys": ["ssh-rsa pub-key-goes-here"]
  }
}
```

Use the `bootkube` tool to render Kubernetes manifests and credentials into an `--asset-dir`. Later, `bootkube` will schedule these manifests during bootstrapping and the credentials will be used to access your cluster.

```sh
$ bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com
```

## Containers

We're ready to use bootkube to create a temporary control plane and bootstrap a cluster.

Secure copy the `kubeconfig` to `/etc/kubernetes/kubeconfig` on **every** node, which path-activates the `kubelet.service`.

```bash
for node in 'node1' 'node2' 'node3'; do
  scp assets/auth/kubeconfig core@$node.example.com:/home/core/kubeconfig
  ssh core@$node.example.com 'sudo mv kubeconfig /etc/kubernetes/kubeconfig'
done
```

Secure copy the `bootkube` generated assets to any controller node and run `bootkube-start`.

```sh
$ scp -r assets core@node1.example.com:/home/core
$ ssh core@node1.example.com 'sudo mv assets /opt/bootkube/assets && sudo systemctl start bootkube'
```

Optionally watch the Kubernetes control plane bootstrapping with the bootkube temporary api-server. You will see quite a bit of output.

```sh
$ ssh core@node1.example.com 'journalctl -f -u bootkube'
[ 299.241291] bootkube[5]: Pod Status: kube-api-checkpoint Running
[ 299.241618] bootkube[5]: Pod Status: kube-apiserver Running
[ 299.241804] bootkube[5]: Pod Status: kube-scheduler Running
[ 299.241993] bootkube[5]: Pod Status: kube-controller-manager Running
[ 299.311743] bootkube[5]: All self-hosted control plane components successfully started
```

You may clean up the `bootkube` assets on the node, but you should keep the copy on your laptop. It contains a `kubeconfig` used to access the cluster.

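For example, to remove the copied assets from the controller node (a sketch, assuming the `/opt/bootkube/assets` location used above):

```sh
$ ssh core@node1.example.com 'sudo rm -rf /opt/bootkube/assets'
```
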
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your laptop. Use the generated kubeconfig to access the Kubernetes cluster. Verify that the cluster is accessible and that the kubelet, apiserver, scheduler, and controller-manager are running as pods.

```sh
$ export KUBECONFIG=assets/auth/kubeconfig
$ kubectl get nodes
NAME                STATUS    AGE
node1.example.com   Ready     3m
node2.example.com   Ready     3m
node3.example.com   Ready     3m

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE
kube-system   checkpoint-installer-p8g8r                 1/1       Running   1          13m
kube-system   kube-apiserver-s5gnx                       1/1       Running   1          41s
kube-system   kube-controller-manager-3438979800-jrlnd   1/1       Running   1          13m
kube-system   kube-controller-manager-3438979800-tkjx7   1/1       Running   1          13m
kube-system   kube-dns-4101612645-xt55f                  4/4       Running   4          13m
kube-system   kube-flannel-pl5c2                         2/2       Running   0          13m
kube-system   kube-flannel-r9t5r                         2/2       Running   3          13m
kube-system   kube-flannel-vfb0s                         2/2       Running   4          13m
kube-system   kube-proxy-cvhmj                           1/1       Running   0          13m
kube-system   kube-proxy-hf9mh                           1/1       Running   1          13m
kube-system   kube-proxy-kpl73                           1/1       Running   1          13m
kube-system   kube-scheduler-694795526-1l23b             1/1       Running   1          13m
kube-system   kube-scheduler-694795526-fks0b             1/1       Running   1          13m
kube-system   pod-checkpointer-node1.example.com         1/1       Running   2          10m
```

Try deleting pods to see that the cluster is resilient to failures and machine restarts (CoreOS auto-updates).

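For example, delete one of the scheduler replicas and watch a replacement get scheduled (pod name taken from the listing above; yours will differ):

```sh
$ kubectl delete pod -n kube-system kube-scheduler-694795526-1l23b
```
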
CoreOS Cloud-Config is a system for configuring machines with a Cloud-Config file.

Cloud-Config template files can be added in `/var/lib/matchbox/cloud` or in a `cloud` subdirectory of a custom `-data-path`. Template files may contain [Go template](https://golang.org/pkg/text/template/) elements which will be evaluated with group metadata, selectors, and query params.

```
/var/lib/matchbox
├── cloud
│   ├── cloud.yaml
│   └── script.sh
├── ignition
└── profiles
```

## Reference

Reference a Cloud-Config in a [Profile](matchbox.md#profiles) with `cloud_id`.

Here is an example Cloud-Config which starts some units and writes a file.

<!-- {% raw %} -->
```yaml
#cloud-config
coreos:
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
write_files:
  - path: "/home/core/welcome"
    owner: "core"
    permissions: "0644"
    content: |
      {{.greeting}}
```
<!-- {% endraw %} -->

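The `{{.greeting}}` variable is resolved from group metadata; a matching group definition might look like this (illustrative values, following the group format shown in the examples):

```json
{
  "profile": "worker",
  "metadata": {
    "greeting": "Hello from matchbox"
  }
}
```
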
The Cloud-Config [Validator](https://coreos.com/validate/) is also useful for checking your Cloud-Config files for errors.

Configuration arguments can be provided as flags or as environment variables.

## Version

```sh
$ ./bin/matchbox -version
$ sudo rkt run quay.io/coreos/matchbox:latest -- -version
$ sudo docker run quay.io/coreos/matchbox:latest -version
```

## Usage

Run the binary.

```sh
$ ./bin/matchbox -address=0.0.0.0:8080 -log-level=debug -data-path=examples -assets-path=examples/assets
```

Run the latest ACI with rkt.

```sh
$ sudo rkt run --mount volume=assets,target=/var/lib/matchbox/assets --volume assets,kind=host,source=$PWD/examples/assets quay.io/coreos/matchbox:latest -- -address=0.0.0.0:8080 -log-level=debug
```

Run the latest Docker image.

```sh
$ sudo docker run -p 8080:8080 --rm -v $PWD/examples/assets:/var/lib/matchbox/assets:Z quay.io/coreos/matchbox:latest -address=0.0.0.0:8080 -log-level=debug
```

#### With Examples

Mount `examples` to pre-load the [example](../examples/README.md) machine groups and profiles. Run the container with rkt,

```sh
$ sudo rkt run --net=metal0:IP=172.18.0.2 --mount volume=data,target=/var/lib/matchbox --volume data,kind=host,source=$PWD/examples --mount volume=groups,target=/var/lib/matchbox/groups --volume groups,kind=host,source=$PWD/examples/groups/etcd quay.io/coreos/matchbox:latest -- -address=0.0.0.0:8080 -log-level=debug
```

or with Docker.

```sh
$ sudo docker run -p 8080:8080 --rm -v $PWD/examples:/var/lib/matchbox:Z -v $PWD/examples/groups/etcd:/var/lib/matchbox/groups:Z quay.io/coreos/matchbox:latest -address=0.0.0.0:8080 -log-level=debug
```

### gRPC API

The gRPC API allows clients with a TLS client certificate and key to make RPC requests.

Run the binary with TLS credentials from `examples/etc/matchbox`.

```sh
$ ./bin/matchbox -address=0.0.0.0:8080 -rpc-address=0.0.0.0:8081 -log-level=debug -data-path=examples -assets-path=examples/assets -cert-file examples/etc/matchbox/server.crt -key-file examples/etc/matchbox/server.key -ca-file examples/etc/matchbox/ca.crt
```

Clients, such as `bootcmd`, verify the server's certificate with a CA bundle passed via `-ca-file` and present a client certificate and key via `-cert-file` and `-key-file` to call the gRPC API.

```sh
$ ./bin/bootcmd profile list --endpoints 127.0.0.1:8081 --ca-file examples/etc/matchbox/ca.crt --cert-file examples/etc/matchbox/client.crt --key-file examples/etc/matchbox/client.key
```

#### With rkt

Run the ACI with rkt and TLS credentials from `examples/etc/matchbox`.

```sh
$ sudo rkt run --net=metal0:IP=172.18.0.2 --mount volume=data,target=/var/lib/matchbox --volume data,kind=host,source=$PWD/examples,readOnly=true --mount volume=config,target=/etc/matchbox --volume config,kind=host,source=$PWD/examples/etc/matchbox --mount volume=groups,target=/var/lib/matchbox/groups --volume groups,kind=host,source=$PWD/examples/groups/etcd quay.io/coreos/matchbox:latest -- -address=0.0.0.0:8080 -rpc-address=0.0.0.0:8081 -log-level=debug
```

A `bootcmd` client can call the gRPC API running at the IP used in the rkt example.

```sh
$ ./bin/bootcmd profile list --endpoints 172.18.0.2:8081 --ca-file examples/etc/matchbox/ca.crt --cert-file examples/etc/matchbox/client.crt --key-file examples/etc/matchbox/client.key
```

#### With docker

Run the Docker image with TLS credentials from `examples/etc/matchbox`.

```sh
$ sudo docker run -p 8080:8080 -p 8081:8081 --rm -v $PWD/examples:/var/lib/matchbox:Z -v $PWD/examples/etc/matchbox:/etc/matchbox:Z,ro -v $PWD/examples/groups/etcd:/var/lib/matchbox/groups:Z quay.io/coreos/matchbox:latest -address=0.0.0.0:8080 -rpc-address=0.0.0.0:8081 -log-level=debug
```

A `bootcmd` client can call the gRPC API running at the IP used in the Docker example.

```sh
$ ./bin/bootcmd profile list --endpoints 127.0.0.1:8081 --ca-file examples/etc/matchbox/ca.crt --cert-file examples/etc/matchbox/client.crt --key-file examples/etc/matchbox/client.key
```

### OpenPGP [Signing](openpgp.md)

Run the binary with a test key.

```sh
$ export MATCHBOX_PASSPHRASE=test
$ ./bin/matchbox -address=0.0.0.0:8080 -key-ring-path matchbox/sign/fixtures/secring.gpg -data-path=examples -assets-path=examples/assets
```

Run the ACI with a test key.

```sh
$ sudo rkt run --net=metal0:IP=172.18.0.2 --set-env=MATCHBOX_PASSPHRASE=test --mount volume=secrets,target=/secrets --volume secrets,kind=host,source=$PWD/matchbox/sign/fixtures --mount volume=data,target=/var/lib/matchbox --volume data,kind=host,source=$PWD/examples --mount volume=groups,target=/var/lib/matchbox/groups --volume groups,kind=host,source=$PWD/examples/groups/etcd quay.io/coreos/matchbox:latest -- -address=0.0.0.0:8080 -key-ring-path secrets/secring.gpg
```

Run the Docker image with a test key.

```sh
$ sudo docker run -p 8080:8080 --rm --env MATCHBOX_PASSPHRASE=test -v $PWD/examples:/var/lib/matchbox:Z -v $PWD/examples/groups/etcd:/var/lib/matchbox/groups:Z -v $PWD/matchbox/sign/fixtures:/secrets:Z quay.io/coreos/matchbox:latest -address=0.0.0.0:8080 -log-level=debug -key-ring-path secrets/secring.gpg
```

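With signing enabled, clients can fetch a config alongside its detached ASCII armored signature from the `.asc` endpoints described in the API docs (illustrative request):

```sh
$ curl 'http://127.0.0.1:8080/ipxe?mac=52:54:00:a1:9c:ae'
$ curl 'http://127.0.0.1:8080/ipxe.asc?mac=52:54:00:a1:9c:ae'
```
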
Verify the release has been signed by the [CoreOS App Signing Key](https://coreos.com/security/app-signing-key/).

```sh
$ gpg --keyserver pgp.mit.edu --recv-key 18AD5014C99EF7E3BA5F6CE950BDD3E0FC8A365E
$ gpg --verify matchbox-v0.5.0-linux-amd64.tar.gz.asc matchbox-v0.5.0-linux-amd64.tar.gz
# gpg: Good signature from "CoreOS Application Signing Key <security@coreos.com>"
```

Customize matchbox by editing the systemd unit or adding a systemd dropin. Find the complete set of `matchbox` flags and environment variables at [config](config.md).

```sh
$ sudo systemctl edit matchbox
```

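For instance, a dropin could raise the log level via the environment variables described in [config](config.md) (a sketch; the variable name assumes the `MATCHBOX_` mapping of the `-log-level` flag):

```
[Service]
Environment=MATCHBOX_LOG_LEVEL=debug
```
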
By default, the read-only HTTP machine endpoint will be exposed on port **8080**.

and verify the images are accessible.

```sh
$ curl http://matchbox.example.com:8080/assets/coreos/1235.9.0/
<pre>...
```

Create machine profiles, groups, or Ignition configs at runtime with `bootcmd` or the gRPC API.

Create a `matchbox` Kubernetes `Deployment` and `Service` based on the example manifests provided in [contrib/k8s](../contrib/k8s).

```sh
$ kubectl apply -f contrib/k8s/matchbox-deployment.yaml
$ kubectl apply -f contrib/k8s/matchbox-service.yaml
```

To develop `matchbox` locally, compile the binary and build the container image.

Build the static binary.

```sh
$ make build
```

Test with vendored dependencies.

```sh
$ make test
```

## Container Image

Build an ACI `matchbox.aci`.

```sh
$ make aci
```

Alternately, build a Docker image `coreos/matchbox:latest`.

```sh
$ make docker-image
```

## Version

```sh
$ ./bin/matchbox -version
$ sudo rkt --insecure-options=image run matchbox.aci -- -version
$ sudo docker run coreos/matchbox:latest -version
```

## Run

Run the binary.

```sh
$ ./bin/matchbox -address=0.0.0.0:8080 -log-level=debug -data-path examples -assets-path examples/assets
```

Run the container image with rkt, on `metal0`.

```sh
$ sudo rkt --insecure-options=image run --net=metal0:IP=172.18.0.2 --mount volume=data,target=/var/lib/matchbox --volume data,kind=host,source=$PWD/examples --mount volume=config,target=/etc/matchbox --volume config,kind=host,source=$PWD/examples/etc/matchbox --mount volume=groups,target=/var/lib/matchbox/groups --volume groups,kind=host,source=$PWD/examples/groups/etcd matchbox.aci -- -address=0.0.0.0:8080 -rpc-address=0.0.0.0:8081 -log-level=debug
```

Alternately, run the Docker image on `docker0`.

```sh
$ sudo docker run -p 8080:8080 --rm -v $PWD/examples:/var/lib/matchbox:Z -v $PWD/examples/groups/etcd:/var/lib/matchbox/groups:Z coreos/matchbox:latest -address=0.0.0.0:8080 -log-level=debug
```

## bootcmd

Run `bootcmd` against the gRPC API of the service running via rkt.

```sh
$ ./bin/bootcmd profile list --endpoints 172.18.0.2:8081 --cacert examples/etc/matchbox/ca.crt
```

## Vendor

Use `glide` and `glide-vc` to manage dependencies committed to the `vendor` directory.

```sh
$ make vendor
```

## Codegen

Generate code from *proto* definitions using `protoc` and the `protoc-gen-go` plugin.

```sh
$ make codegen
```

This guide covers releasing new versions of matchbox.

Create a release commit which updates old version references.

```sh
$ export VERSION=v0.5.0
```

## Tag

Tag, sign the release version, and push it to Github.

```sh
$ git tag -s vX.Y.Z -m 'vX.Y.Z'
$ git push origin --tags
$ git push origin master
```

## Images

Travis CI will build the Docker image and push it to Quay.io when the tag is pushed to master. Verify the new image and version.

```sh
$ sudo docker run quay.io/coreos/matchbox:$VERSION -version
$ sudo rkt run --no-store quay.io/coreos/matchbox:$VERSION -- -version
```

## Github Release

Publish the release on Github with release notes.

Build the release tarballs.

```sh
$ make release
```

Verify the reported version.

```sh
$ ./_output/matchbox-v0.5.0-linux-amd64/matchbox -version
```

## ACI

Build the rkt ACI on a Linux host with `acbuild`,

```sh
$ make aci
```

Check that the listed version is correct/clean.

```sh
$ sudo rkt --insecure-options=image run matchbox.aci -- -version
```

Add the ACI to `_output` for signing.

```sh
$ mv matchbox.aci _output/matchbox-$VERSION-linux-amd64.aci
```

## Signing

Sign the release tarballs and ACI with a [CoreOS App Signing Key](https://coreos.com/security/app-signing-key/) subkey.

```sh
$ cd _output
$ gpg2 -a --default-key FC8A365E --detach-sign matchbox-$VERSION-linux-amd64.aci
$ gpg2 -a --default-key FC8A365E --detach-sign matchbox-$VERSION-linux-amd64.tar.gz
$ gpg2 -a --default-key FC8A365E --detach-sign matchbox-$VERSION-darwin-amd64.tar.gz
$ gpg2 -a --default-key FC8A365E --detach-sign matchbox-$VERSION-linux-arm.tar.gz
$ gpg2 -a --default-key FC8A365E --detach-sign matchbox-$VERSION-linux-arm64.tar.gz
```

Verify the signatures.

```sh
$ gpg2 --verify matchbox-$VERSION-linux-amd64.aci.asc matchbox-$VERSION-linux-amd64.aci
$ gpg2 --verify matchbox-$VERSION-linux-amd64.tar.gz.asc matchbox-$VERSION-linux-amd64.tar.gz
$ gpg2 --verify matchbox-$VERSION-darwin-amd64.tar.gz.asc matchbox-$VERSION-darwin-amd64.tar.gz
$ gpg2 --verify matchbox-$VERSION-linux-arm.tar.gz.asc matchbox-$VERSION-linux-arm.tar.gz
$ gpg2 --verify matchbox-$VERSION-linux-arm64.tar.gz.asc matchbox-$VERSION-linux-arm64.tar.gz
```

## Publish

In this tutorial, we'll run `matchbox` on your Linux machine with Docker to network boot and provision a cluster of QEMU/KVM CoreOS machines locally.

Install the package dependencies and start the Docker daemon.

```sh
# Fedora
$ sudo dnf install docker virt-install virt-manager
$ sudo systemctl start docker

# Debian/Ubuntu
# check Docker's docs to install Docker 1.8+ on Debian/Ubuntu
$ sudo apt-get install virt-manager virtinst qemu-kvm
```

Clone the [matchbox](https://github.com/coreos/matchbox) source which contains the examples and scripts.

```sh
$ git clone https://github.com/coreos/matchbox.git
$ cd matchbox
```

Download CoreOS image assets referenced by the `etcd-docker` [example](../examples) to `examples/assets`.

```sh
$ ./scripts/get-coreos stable 1235.9.0 ./examples/assets
```

For development convenience, add `/etc/hosts` entries for nodes so they may be referenced by name as you would in production.

```
# /etc/hosts
...
172.17.0.21 node1.example.com
172.17.0.22 node2.example.com
172.17.0.23 node3.example.com
```

## Containers

Run the latest `matchbox` Docker image from `quay.io/coreos/matchbox` with the `etcd-docker` example. The container should receive the IP address 172.17.0.2 on the `docker0` bridge.

```sh
$ sudo docker pull quay.io/coreos/matchbox:latest
$ sudo docker run -p 8080:8080 --rm -v $PWD/examples:/var/lib/matchbox:Z -v $PWD/examples/groups/etcd3:/var/lib/matchbox/groups:Z quay.io/coreos/matchbox:latest -address=0.0.0.0:8080 -log-level=debug
```

Take a look at the [etcd3 groups](../examples/groups/etcd3) to get an idea of how machines are mapped to Profiles. Explore some endpoints exposed by the service, say for QEMU/KVM node1.

Since the virtual network has no network boot services, use the `dnsmasq` image to create an iPXE network boot environment which runs DHCP, DNS, and TFTP.

```sh
$ sudo docker run --name dnsmasq --cap-add=NET_ADMIN -v $PWD/contrib/dnsmasq/docker0.conf:/etc/dnsmasq.conf:Z quay.io/coreos/dnsmasq -d
```

In this case, dnsmasq runs a DHCP server allocating IPs to VMs between 172.17.0.43 and 172.17.0.99, resolves `matchbox.foo` to 172.17.0.2 (the IP where `matchbox` runs), and points iPXE clients to `http://matchbox.foo:8080/boot.ipxe`.

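Conceptually, that dnsmasq configuration resembles the following sketch (illustrative; see `contrib/dnsmasq/docker0.conf` for the real file):

```
dhcp-range=172.17.0.43,172.17.0.99,30m
# legacy PXE clients chainload iPXE; iPXE clients fetch the boot script
enable-tftp
tftp-root=/var/lib/tftpboot
dhcp-userclass=set:ipxe,iPXE
dhcp-boot=tag:!ipxe,undionly.kpxe
dhcp-boot=tag:ipxe,http://matchbox.foo:8080/boot.ipxe
address=/matchbox.foo/172.17.0.2
```
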
Create QEMU/KVM VMs which have known hardware attributes. The nodes will be attached to the `docker0` bridge, where Docker's containers run.

```sh
$ sudo ./scripts/libvirt create-docker
```

You can connect to the serial console of any node. If you provisioned nodes with an SSH key, you can SSH after bring-up.

```sh
$ sudo virsh console node1
```

You can also use `virt-manager` to watch the console.

```sh
$ sudo virt-manager
```

Use the wrapper script to act on all nodes.

```sh
$ sudo ./scripts/libvirt [start|reboot|shutdown|poweroff|destroy]
```

## Verify

The VMs should network boot and provision themselves into a three node etcd3 cluster.

The example profile added autologin so you can verify that etcd3 works between nodes.

```sh
$ systemctl status etcd-member
$ export ETCDCTL_API=3
$ etcdctl put /message hello
$ etcdctl get /message
```

## Cleanup

Clean up the containers and VMs.

```sh
$ sudo docker rm -f dnsmasq
$ sudo ./scripts/libvirt poweroff
$ sudo ./scripts/libvirt destroy
```

## Going Further

Learn more about [matchbox](matchbox.md) or explore the other [example](../examples) clusters. Try the [k8s example](kubernetes.md) to produce a TLS-authenticated Kubernetes cluster you can access locally with `kubectl`.

# Getting Started with rkt

In this tutorial, we'll run `matchbox` on your Linux machine with `rkt` and `CNI` to network boot and provision a cluster of QEMU/KVM CoreOS machines locally. You'll be able to create Kubernetes clusters, etcd3 clusters, and test network setups.

Install [rkt](https://coreos.com/rkt/docs/latest/distributions.html) 1.8 or higher.

Next, install the package dependencies.

    # Fedora
    sudo dnf install virt-install virt-manager

    # Debian/Ubuntu
    sudo apt-get install virt-manager virtinst qemu-kvm systemd-container

```sh
# Fedora
$ sudo dnf install virt-install virt-manager

# Debian/Ubuntu
$ sudo apt-get install virt-manager virtinst qemu-kvm systemd-container
```

**Note**: rkt does not yet integrate with SELinux on Fedora. As a workaround, temporarily set enforcement to permissive if you are comfortable (`sudo setenforce Permissive`). Check the rkt [distribution notes](https://github.com/coreos/rkt/blob/master/Documentation/distributions.md) or see the tracking [issue](https://github.com/coreos/rkt/issues/1727).

Clone the [matchbox](https://github.com/coreos/matchbox) source which contains the examples and scripts.

    git clone https://github.com/coreos/matchbox.git
    cd matchbox

```sh
$ git clone https://github.com/coreos/matchbox.git
$ cd matchbox
```

Download CoreOS image assets referenced by the `etcd` [example](../examples) to `examples/assets`.

    ./scripts/get-coreos stable 1235.9.0 ./examples/assets

```sh
$ ./scripts/get-coreos stable 1235.9.0 ./examples/assets
```

Define the `metal0` virtual bridge with [CNI](https://github.com/appc/cni).
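
The bridge config itself falls outside this hunk's context; a representative sketch, assuming rkt's `/etc/rkt/net.d` config directory and the 172.18.0.0/24 subnet used by the examples:

```sh
# Hypothetical CNI bridge config; adjust the path and subnet to your environment
sudo bash -c 'cat > /etc/rkt/net.d/20-metal.conf << EOF
{
  "name": "metal0",
  "type": "bridge",
  "bridge": "metal0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "172.18.0.0/24"
  }
}
EOF'
```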

@@ -50,17 +55,21 @@ EOF'

On Fedora, add the `metal0` interface to the trusted zone in your firewall configuration.

    sudo firewall-cmd --add-interface=metal0 --zone=trusted

```sh
$ sudo firewall-cmd --add-interface=metal0 --zone=trusted
```

After a recent update, you may see a warning that NetworkManager controls the interface. Work around this by using the firewall-config GUI to add `metal0` to the trusted zone.

For development convenience, add `/etc/hosts` entries for nodes so they may be referenced by name as you would in production.

    # /etc/hosts
    ...
    172.18.0.21 node1.example.com
    172.18.0.22 node2.example.com
    172.18.0.23 node3.example.com

```
# /etc/hosts
...
172.18.0.21 node1.example.com
172.18.0.22 node2.example.com
172.18.0.23 node3.example.com
```

Trust the needed ACIs.

@@ -70,21 +79,27 @@ Run the `matchbox` and `dnsmasq` services on the `metal0` bridge. `dnsmasq` will

Trust the needed ACIs.

    sudo rkt trust --prefix quay.io/coreos/matchbox
    sudo rkt trust --prefix quay.io/coreos/alpine-sh
    sudo rkt trust --prefix coreos.com/dnsmasq

```sh
$ sudo rkt trust --prefix quay.io/coreos/matchbox
$ sudo rkt trust --prefix quay.io/coreos/alpine-sh
$ sudo rkt trust --prefix coreos.com/dnsmasq
```

The `devnet` wrapper script can quickly rkt run `matchbox` and `dnsmasq` in systemd transient units. The `create` subcommand takes the name of any example cluster in [examples](../examples).

    sudo ./scripts/devnet create etcd3

```sh
$ sudo ./scripts/devnet create etcd3
```

Inspect the journal logs or check the status of the systemd services.

    # quick status
    sudo ./scripts/devnet status
    # tail logs
    journalctl -f -u dev-matchbox
    journalctl -f -u dev-dnsmasq

```sh
# quick status
$ sudo ./scripts/devnet status
# tail logs
$ journalctl -f -u dev-matchbox
$ journalctl -f -u dev-dnsmasq
```

Take a look at the [etcd3 groups](../examples/groups/etcd3) to get an idea of how machines are mapped to Profiles. Explore some endpoints exposed by the service, say for QEMU/KVM node1.

@@ -96,32 +111,44 @@ Take a look at the [etcd3 groups](../examples/groups/etcd3) to get an idea of ho

If you prefer to start the containers yourself, instead of using `devnet`:

    # matchbox with etcd3 example
    sudo rkt run --net=metal0:IP=172.18.0.2 --mount volume=data,target=/var/lib/matchbox --volume data,kind=host,source=$PWD/examples --mount volume=groups,target=/var/lib/matchbox/groups --volume groups,kind=host,source=$PWD/examples/groups/etcd3 quay.io/coreos/matchbox:latest -- -address=0.0.0.0:8080 -log-level=debug
    # dnsmasq
    sudo rkt run coreos.com/dnsmasq:v0.3.0 --net=metal0:IP=172.18.0.3 --mount volume=config,target=/etc/dnsmasq.conf --volume config,kind=host,source=$PWD/contrib/dnsmasq/metal0.conf

```sh
# matchbox with etcd3 example
$ sudo rkt run --net=metal0:IP=172.18.0.2 --mount volume=data,target=/var/lib/matchbox --volume data,kind=host,source=$PWD/examples --mount volume=groups,target=/var/lib/matchbox/groups --volume groups,kind=host,source=$PWD/examples/groups/etcd3 quay.io/coreos/matchbox:latest -- -address=0.0.0.0:8080 -log-level=debug
# dnsmasq
$ sudo rkt run coreos.com/dnsmasq:v0.3.0 --net=metal0:IP=172.18.0.3 --mount volume=config,target=/etc/dnsmasq.conf --volume config,kind=host,source=$PWD/contrib/dnsmasq/metal0.conf
```

If you get an error about the IP assignment, stop old pods and run garbage collection.

    sudo rkt gc --grace-period=0

```sh
$ sudo rkt gc --grace-period=0
```

## Client VMs

Create QEMU/KVM VMs which have known hardware attributes. The nodes will be attached to the `metal0` bridge, where your pods run.

    sudo ./scripts/libvirt create

```sh
$ sudo ./scripts/libvirt create
```

You can connect to the serial console of any node. If you provisioned nodes with an SSH key, you can SSH after bring-up.

    sudo virsh console node1

```sh
$ sudo virsh console node1
```

You can also use `virt-manager` to watch the console.

    sudo virt-manager

```sh
$ sudo virt-manager
```

Use the wrapper script to act on all nodes.

    sudo ./scripts/libvirt [start|reboot|shutdown|poweroff|destroy]

```sh
$ sudo ./scripts/libvirt [start|reboot|shutdown|poweroff|destroy]
```

## Verify

@@ -129,20 +156,26 @@ The VMs should network boot and provision themselves into a three node etcd3 clu

The example profile added autologin so you can verify that etcd3 works between nodes.

    systemctl status etcd-member
    ETCDCTL_API=3
    etcdctl set /message hello
    etcdctl get /message

```sh
$ systemctl status etcd-member
$ ETCDCTL_API=3
$ etcdctl set /message hello
$ etcdctl get /message
```

## Cleanup

Clean up the systemd units running `matchbox` and `dnsmasq`.

    sudo ./scripts/devnet destroy

```sh
$ sudo ./scripts/devnet destroy
```

Clean up the VMs.

    sudo ./scripts/libvirt destroy

```sh
$ sudo ./scripts/libvirt destroy
```

Press `^]` three times to stop any rkt pod.

@@ -19,24 +19,34 @@ Run `matchbox` with rkt, but mount the [grub](../examples/groups/grub) group exa

On Fedora, add the `metal0` interface to the trusted zone in your firewall configuration.

    sudo firewall-cmd --add-interface=metal0 --zone=trusted

```sh
$ sudo firewall-cmd --add-interface=metal0 --zone=trusted
```

Run the `coreos.com/dnsmasq` ACI with rkt.

    sudo rkt run coreos.com/dnsmasq:v0.3.0 --net=metal0:IP=172.18.0.3 -- -d -q --dhcp-range=172.18.0.50,172.18.0.99 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-match=set:efi-bc,option:client-arch,7 --dhcp-boot=tag:efi-bc,grub.efi --dhcp-userclass=set:grub,GRUB2 --dhcp-boot=tag:grub,"(http;matchbox.foo:8080)/grub","172.18.0.2" --log-queries --log-dhcp --dhcp-userclass=set:ipxe,iPXE --dhcp-boot=tag:pxe,undionly.kpxe --dhcp-boot=tag:ipxe,http://matchbox.foo:8080/boot.ipxe --address=/matchbox.foo/172.18.0.2

```sh
$ sudo rkt run coreos.com/dnsmasq:v0.3.0 --net=metal0:IP=172.18.0.3 -- -d -q --dhcp-range=172.18.0.50,172.18.0.99 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-match=set:efi-bc,option:client-arch,7 --dhcp-boot=tag:efi-bc,grub.efi --dhcp-userclass=set:grub,GRUB2 --dhcp-boot=tag:grub,"(http;matchbox.foo:8080)/grub","172.18.0.2" --log-queries --log-dhcp --dhcp-userclass=set:ipxe,iPXE --dhcp-boot=tag:pxe,undionly.kpxe --dhcp-boot=tag:ipxe,http://matchbox.foo:8080/boot.ipxe --address=/matchbox.foo/172.18.0.2
```

## Client VM

Create UEFI VM nodes which have known hardware attributes.

    sudo ./scripts/libvirt create-uefi

```sh
$ sudo ./scripts/libvirt create-uefi
```

## Docker

If you use Docker, run `matchbox` according to [matchbox with Docker](getting-started-docker.md), but mount the [grub](../examples/groups/grub) group example. Then start the `coreos/dnsmasq` Docker image, which bundles a `grub.efi`.

    sudo docker run --rm --cap-add=NET_ADMIN quay.io/coreos/dnsmasq -d -q --dhcp-range=172.17.0.43,172.17.0.99 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-match=set:efi-bc,option:client-arch,7 --dhcp-boot=tag:efi-bc,grub.efi --dhcp-userclass=set:grub,GRUB2 --dhcp-boot=tag:grub,"(http;matchbox.foo:8080)/grub","172.17.0.2" --log-queries --log-dhcp --dhcp-option=3,172.17.0.1 --dhcp-userclass=set:ipxe,iPXE --dhcp-boot=tag:pxe,undionly.kpxe --dhcp-boot=tag:ipxe,http://matchbox.foo:8080/boot.ipxe --address=/matchbox.foo/172.17.0.2

```sh
$ sudo docker run --rm --cap-add=NET_ADMIN quay.io/coreos/dnsmasq -d -q --dhcp-range=172.17.0.43,172.17.0.99 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-match=set:efi-bc,option:client-arch,7 --dhcp-boot=tag:efi-bc,grub.efi --dhcp-userclass=set:grub,GRUB2 --dhcp-boot=tag:grub,"(http;matchbox.foo:8080)/grub","172.17.0.2" --log-queries --log-dhcp --dhcp-option=3,172.17.0.1 --dhcp-userclass=set:ipxe,iPXE --dhcp-boot=tag:pxe,undionly.kpxe --dhcp-boot=tag:ipxe,http://matchbox.foo:8080/boot.ipxe --address=/matchbox.foo/172.17.0.2
```

Create a VM to verify the machine network boots.

    sudo virt-install --name uefi-test --pxe --boot=uefi,network --disk pool=default,size=4 --network=bridge=docker0,model=e1000 --memory=1024 --vcpus=1 --os-type=linux --noautoconsole

```sh
$ sudo virt-install --name uefi-test --pxe --boot=uefi,network --disk pool=default,size=4 --network=bridge=docker0,model=e1000 --memory=1024 --vcpus=1 --os-type=linux --noautoconsole
```

@@ -1,4 +1,3 @@

# Ignition

Ignition is a system for declaratively provisioning disks during the initramfs, before systemd starts. It runs only on the first boot and handles partitioning disks, formatting partitions, writing files (regular files, systemd units, networkd units, etc.), and configuring users. See the Ignition [docs](https://coreos.com/ignition/docs/latest/) for details.

@@ -13,14 +12,16 @@ The [Fuze schema](https://github.com/coreos/fuze/blob/master/doc/configuration.m

Fuze template files can be added in the `/var/lib/matchbox/ignition` directory or in an `ignition` subdirectory of a custom `-data-path`. Template files may contain [Go template](https://golang.org/pkg/text/template/) elements which will be evaluated with group metadata, selectors, and query params.

    /var/lib/matchbox
    ├── cloud
    ├── ignition
    │   └── k8s-controller.yaml
    │   └── etcd.yaml
    │   └── k8s-worker.yaml
    │   └── raw.ign
    └── profiles

```
/var/lib/matchbox
├── cloud
├── ignition
│   └── k8s-controller.yaml
│   └── etcd.yaml
│   └── k8s-worker.yaml
│   └── raw.ign
└── profiles
```

### Reference

@@ -57,102 +58,109 @@ Here is an example Fuze template. This template will be rendered into a Fuze con

ignition/format-disk.yaml.tmpl:

<!-- {% raw %} -->
```yaml
---
storage:
  disks:
    - device: /dev/sda
      wipe_table: true
      partitions:
        - label: ROOT
  filesystems:
    - name: root
      mount:
        device: "/dev/sda1"
        format: "ext4"
        create:
          force: true
          options:
            - "-LROOT"
  files:
    - filesystem: root
      path: /home/core/foo
      mode: 0644
      user:
        id: 500
      group:
        id: 500
      contents:
        inline: |
          {{.example_contents}}
{{ if index . "ssh_authorized_keys" }}
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        {{ range $element := .ssh_authorized_keys }}
        - {{$element}}
        {{end}}
{{end}}
```
<!-- {% endraw %} -->

The Ignition config response (formatted) to a query `/ignition?label=value` for a CoreOS instance supporting Ignition 2.0.0 would be:

```json
{
  "ignition": {
    "version": "2.0.0",
    "config": {}
  },
  "storage": {
    "disks": [
      {
        "device": "/dev/sda",
        "wipeTable": true,
        "partitions": [
          {
            "label": "ROOT",
            "number": 0,
            "size": 0,
            "start": 0
          }
        ]
      }
    ],
    "filesystems": [
      {
        "name": "root",
        "mount": {
          "device": "/dev/sda1",
          "format": "ext4",
          "create": {
            "force": true,
            "options": [
              "-LROOT"
            ]
          }
        }
      }
    ],
    "files": [
      {
        "filesystem": "root",
        "path": "/home/core/foo",
        "contents": {
          "source": "data:,Example%20file%20contents%0A",
          "verification": {}
        },
        "mode": 420,
        "user": {
          "id": 500
        },
        "group": {
          "id": 500
        }
      }
    ]
  },
  "systemd": {},
  "networkd": {},
  "passwd": {}
}
```
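
To spot-check a rendered response yourself, fetch it the way a booting machine would; a sketch, assuming `matchbox` listens at `matchbox.foo:8080` as elsewhere in these docs (`jq` is only for pretty-printing):

```sh
curl 'http://matchbox.foo:8080/ignition?label=value' | jq .
```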

See [examples/ignition](../examples/ignition) for numerous Fuze template examples.

@@ -1,4 +1,3 @@

# Kubernetes

The Kubernetes example provisions a 3 node Kubernetes v1.5.2 cluster with one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).

@@ -24,14 +23,18 @@ The [examples](../examples) statically assign IP addresses to libvirt client VMs

Download the CoreOS image assets referenced in the target [profile](../examples/profiles).

    ./scripts/get-coreos stable 1235.9.0 ./examples/assets

```sh
$ ./scripts/get-coreos stable 1235.9.0 ./examples/assets
```

Optionally, add your SSH public key to each machine group definition [as shown](../examples/README.md#ssh-keys).

Generate a root CA and Kubernetes TLS assets for components (`admin`, `apiserver`, `worker`) with SANs for `node1.example.com`, etc.

    rm -rf examples/assets/tls
    ./scripts/tls/k8s-certgen

```sh
$ rm -rf examples/assets/tls
$ ./scripts/tls/k8s-certgen
```
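
To sanity-check the generated assets, inspecting a certificate works; a sketch, assuming the script writes a `ca.pem` under `examples/assets/tls` (the exact filenames depend on `k8s-certgen`):

```sh
# Print the CA certificate's subject and validity window (filename is an assumption)
openssl x509 -in examples/assets/tls/ca.pem -noout -subject -dates
```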

**Note**: TLS assets are served to any machines which request them, which requires a trusted network. Alternatively, provisioning may be tweaked to require that TLS assets be securely copied to each host.

@@ -45,34 +48,40 @@ Client machines should boot and provision themselves. Local client VMs should ne

[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your laptop. Use the generated kubeconfig to access the Kubernetes cluster created on rkt `metal0` or `docker0`.

    $ KUBECONFIG=examples/assets/tls/kubeconfig
    $ kubectl get nodes
    NAME                 STATUS    AGE
    node1.example.com    Ready     3m
    node2.example.com    Ready     3m
    node3.example.com    Ready     3m

```sh
$ KUBECONFIG=examples/assets/tls/kubeconfig
$ kubectl get nodes
NAME                 STATUS    AGE
node1.example.com    Ready     3m
node2.example.com    Ready     3m
node3.example.com    Ready     3m
```

Get all pods.

    $ kubectl get pods --all-namespaces
    NAMESPACE     NAME                                         READY     STATUS    RESTARTS   AGE
    kube-system   heapster-v1.2.0-4088228293-5xbgg             2/2       Running   0          41m
    kube-system   kube-apiserver-node1.example.com             1/1       Running   0          40m
    kube-system   kube-controller-manager-node1.example.com    1/1       Running   0          40m
    kube-system   kube-dns-782804071-326dd                     4/4       Running   0          41m
    kube-system   kube-dns-autoscaler-2715466192-8bm78         1/1       Running   0          41m
    kube-system   kube-proxy-node1.example.com                 1/1       Running   0          41m
    kube-system   kube-proxy-node2.example.com                 1/1       Running   0          41m
    kube-system   kube-proxy-node3.example.com                 1/1       Running   0          40m
    kube-system   kube-scheduler-node1.example.com             1/1       Running   0          40m
    kube-system   kubernetes-dashboard-3543765157-2nqgh        1/1       Running   0          41m

```sh
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                         READY     STATUS    RESTARTS   AGE
kube-system   heapster-v1.2.0-4088228293-5xbgg             2/2       Running   0          41m
kube-system   kube-apiserver-node1.example.com             1/1       Running   0          40m
kube-system   kube-controller-manager-node1.example.com    1/1       Running   0          40m
kube-system   kube-dns-782804071-326dd                     4/4       Running   0          41m
kube-system   kube-dns-autoscaler-2715466192-8bm78         1/1       Running   0          41m
kube-system   kube-proxy-node1.example.com                 1/1       Running   0          41m
kube-system   kube-proxy-node2.example.com                 1/1       Running   0          41m
kube-system   kube-proxy-node3.example.com                 1/1       Running   0          40m
kube-system   kube-scheduler-node1.example.com             1/1       Running   0          40m
kube-system   kubernetes-dashboard-3543765157-2nqgh        1/1       Running   0          41m
```

## Kubernetes Dashboard

Access the Kubernetes Dashboard with `kubeconfig` credentials by port forwarding to the dashboard pod.

    $ kubectl port-forward kubernetes-dashboard-SOME-ID 9090 -n=kube-system
    Forwarding from 127.0.0.1:9090 -> 9090

```sh
$ kubectl port-forward kubernetes-dashboard-SOME-ID 9090 -n=kube-system
Forwarding from 127.0.0.1:9090 -> 9090
```

Then visit [http://127.0.0.1:9090](http://127.0.0.1:9090/).

@@ -1,4 +1,3 @@

# Lifecycle of a Physical Machine

Physical machines [network boot](network-booting.md) in a network boot environment with DHCP/TFTP/DNS services or with [coreos/dnsmasq](../contrib/dnsmasq).

@@ -29,25 +29,27 @@ A `Store` stores machine Groups, Profiles, and associated Ignition configs, clou

Prepare `/var/lib/matchbox` with `groups`, `profiles`, `ignition`, `cloud`, and `generic` subdirectories. You may wish to keep these files under version control.

    /var/lib/matchbox
    ├── cloud
    │   ├── cloud.yaml.tmpl
    │   └── worker.sh.tmpl
    ├── ignition
    │   └── raw.ign
    │   └── etcd.yaml.tmpl
    │   └── simple.yaml.tmpl
    ├── generic
    │   └── config.yaml
    │   └── setup.cfg
    │   └── datacenter-1.tmpl
    ├── groups
    │   └── default.json
    │   └── node1.json
    │   └── us-central1-a.json
    └── profiles
        └── etcd.json
        └── worker.json

```
/var/lib/matchbox
├── cloud
│   ├── cloud.yaml.tmpl
│   └── worker.sh.tmpl
├── ignition
│   └── raw.ign
│   └── etcd.yaml.tmpl
│   └── simple.yaml.tmpl
├── generic
│   └── config.yaml
│   └── setup.cfg
│   └── datacenter-1.tmpl
├── groups
│   └── default.json
│   └── node1.json
│   └── us-central1-a.json
└── profiles
    └── etcd.json
    └── worker.json
```
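
One way to create this layout up front; a convenience sketch (adjust ownership to whichever user runs `matchbox`):

```sh
sudo mkdir -p /var/lib/matchbox/{cloud,ignition,generic,groups,profiles}
```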

The [examples](../examples) directory is a valid data directory with some pre-defined configs. Note that `examples/groups` contains many possible groups in nested directories for demo purposes (tutorials pick one to mount). Your machine groups should be kept directly inside the `groups` directory as shown above.

@@ -55,22 +57,24 @@ The [examples](../examples) directory is a valid data directory with some pre-de

Profiles reference an Ignition config, Cloud-Config, and/or generic config by name and define network boot settings.

    {
      "id": "etcd",
      "name": "CoreOS with etcd2",
      "cloud_id": "",
      "ignition_id": "etcd.yaml",
      "generic_id": "some-service.cfg",
      "boot": {
        "kernel": "/assets/coreos/1235.9.0/coreos_production_pxe.vmlinuz",
        "initrd": ["/assets/coreos/1235.9.0/coreos_production_pxe_image.cpio.gz"],
        "args": [
          "coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}",
          "coreos.first_boot=yes",
          "coreos.autologin"
        ]
      }
    }

```json
{
  "id": "etcd",
  "name": "CoreOS with etcd2",
  "cloud_id": "",
  "ignition_id": "etcd.yaml",
  "generic_id": "some-service.cfg",
  "boot": {
    "kernel": "/assets/coreos/1235.9.0/coreos_production_pxe.vmlinuz",
    "initrd": ["/assets/coreos/1235.9.0/coreos_production_pxe_image.cpio.gz"],
    "args": [
      "coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}",
      "coreos.first_boot=yes",
      "coreos.autologin"
    ]
  }
}
```

The `"boot"` settings will be used to render configs to network boot programs such as iPXE, GRUB, or Pixiecore. You may reference remote kernel and initrd assets or [local assets](#assets).
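
A quick way to see both renderings; a sketch, assuming a group selects the MAC shown (the node1 example below uses this one):

```sh
# The same profile rendered for an iPXE client and for a GRUB client
curl 'http://matchbox.foo:8080/ipxe?mac=52:54:00:89:d8:10'
curl 'http://matchbox.foo:8080/grub?mac=52:54:00:89:d8:10'
```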

@@ -84,30 +88,34 @@ Groups define selectors which match zero or more machines. Machine(s) matching a

Create a group definition with a `Profile` to be applied, selectors for matching machines, and any `metadata` needed to render templated configs. For example, `/var/lib/matchbox/groups/node1.json` matches a single machine with MAC address `52:54:00:89:d8:10`.

    # /var/lib/matchbox/groups/node1.json
    {
      "name": "node1",
      "profile": "etcd",
      "selector": {
        "mac": "52:54:00:89:d8:10"
      },
      "metadata": {
        "fleet_metadata": "role=etcd,name=node1",
        "etcd_name": "node1",
        "etcd_initial_cluster": "node1=http://node1.example.com:2380,node2=http://node2.example.com:2380,node3=http://node3.example.com:2380"
      }
    }

```json
# /var/lib/matchbox/groups/node1.json
{
  "name": "node1",
  "profile": "etcd",
  "selector": {
    "mac": "52:54:00:89:d8:10"
  },
  "metadata": {
    "fleet_metadata": "role=etcd,name=node1",
    "etcd_name": "node1",
    "etcd_initial_cluster": "node1=http://node1.example.com:2380,node2=http://node2.example.com:2380,node3=http://node3.example.com:2380"
  }
}
```

Meanwhile, `/var/lib/matchbox/groups/proxy.json` acts as the default machine group since it has no selectors.

    {
      "name": "etcd-proxy",
      "profile": "etcd-proxy",
      "metadata": {
        "fleet_metadata": "role=etcd-proxy",
        "etcd_initial_cluster": "node1=http://node1.example.com:2380,node2=http://node2.example.com:2380,node3=http://node3.example.com:2380"
      }
    }

```json
{
  "name": "etcd-proxy",
  "profile": "etcd-proxy",
  "metadata": {
    "fleet_metadata": "role=etcd-proxy",
    "etcd_initial_cluster": "node1=http://node1.example.com:2380,node2=http://node2.example.com:2380,node3=http://node3.example.com:2380"
  }
}
```

For example, a request to `/ignition?mac=52:54:00:89:d8:10` would render the Ignition template in the "etcd" `Profile`, with the machine group's metadata. A request to `/ignition` would match the default group (which has no selectors) and render the Ignition in the "etcd-proxy" Profile. Avoid defining multiple default groups as resolution will not be deterministic.
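
You can exercise both match paths from the command line; a sketch, assuming the two groups above are mounted:

```sh
# Matches the node1 group via its MAC selector ("etcd" profile)
curl 'http://matchbox.foo:8080/ignition?mac=52:54:00:89:d8:10'
# No labels given: falls through to the default "etcd-proxy" group
curl 'http://matchbox.foo:8080/ignition'
```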

@@ -133,18 +141,22 @@ For details and examples:

Within Ignition/Fuze templates, Cloud-Config templates, or generic templates, you can use group metadata, selectors, or request-scoped query params. For example, a request `/generic?mac=52-54-00-89-d8-10&foo=some-param&bar=b` would match the `node1.json` machine group shown above. If the group's profile ("etcd") referenced a generic template, the following variables could be used.

    # Untyped generic config file
    # Selector
    {{.mac}}                  # 52:54:00:89:d8:10 (normalized)
    # Metadata
    {{.etcd_name}}            # node1
    {{.fleet_metadata}}       # role=etcd,name=node1
    # Query
    {{.request.query.mac}}    # 52:54:00:89:d8:10 (normalized)
    {{.request.query.foo}}    # some-param
    {{.request.query.bar}}    # b
    # Special Addition
    {{.request.raw_query}}    # mac=52:54:00:89:d8:10&foo=some-param&bar=b

<!-- {% raw %} -->
```
# Untyped generic config file
# Selector
{{.mac}}                  # 52:54:00:89:d8:10 (normalized)
# Metadata
{{.etcd_name}}            # node1
{{.fleet_metadata}}       # role=etcd,name=node1
# Query
{{.request.query.mac}}    # 52:54:00:89:d8:10 (normalized)
{{.request.query.foo}}    # some-param
{{.request.query.bar}}    # b
# Special Addition
{{.request.raw_query}}    # mac=52:54:00:89:d8:10&foo=some-param&bar=b
```
<!-- {% endraw %} -->

Note that `.request` is reserved for these purposes, so group metadata nested under a top-level "request" key will be overwritten.
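
To see the rendered values, request the generic endpoint with the same query; a sketch, reusing the request from the paragraph above:

```sh
curl 'http://matchbox.foo:8080/generic?mac=52-54-00-89-d8-10&foo=some-param&bar=b'
```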

@@ -152,11 +164,13 @@ Note that `.request` is reserved for these purposes so group metadata with data

`matchbox` can serve `-assets-path` static assets at `/assets`. This is helpful for reducing bandwidth usage when serving the kernel and initrd to network booted machines. The default assets-path is `/var/lib/matchbox/assets` or you can pass `-assets-path=""` to disable asset serving.

    matchbox.foo/assets/
    └── coreos
        └── VERSION
            ├── coreos_production_pxe.vmlinuz
            └── coreos_production_pxe_image.cpio.gz

```
matchbox.foo/assets/
└── coreos
    └── VERSION
        ├── coreos_production_pxe.vmlinuz
        └── coreos_production_pxe_image.cpio.gz
```

For example, a `Profile` might refer to a local asset `/assets/coreos/VERSION/coreos_production_pxe.vmlinuz` instead of `http://stable.release.core-os.net/amd64-usr/VERSION/coreos_production_pxe.vmlinuz`.
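
A HEAD request is a quick way to confirm an asset is being served; a sketch, using the version downloaded in the tutorials:

```sh
curl -I 'http://matchbox.foo:8080/assets/coreos/1235.9.0/coreos_production_pxe.vmlinuz'
```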

@@ -171,6 +185,3 @@ See the [get-coreos](../scripts/README.md#get-coreos) script to quickly download

* [gRPC API Usage](config.md#grpc-api)
* [Metadata](api.md#metadata)
* OpenPGP [Signing](api.md#openpgp-signatures)

@@ -21,8 +21,10 @@ Machines can be booted and configured with CoreOS using several network boot pro

[PXELINUX](http://www.syslinux.org/wiki/index.php/PXELINUX) is a common network boot program which loads a config file from `mybootdir/pxelinux.cfg/` over TFTP. The file is chosen based on the client's UUID, MAC address, IP address, or a default.

    mybootdir/pxelinux.cfg/b8945908-d6a6-41a9-611d-74a6ab80b83d
    mybootdir/pxelinux.cfg/default

```sh
$ mybootdir/pxelinux.cfg/b8945908-d6a6-41a9-611d-74a6ab80b83d
$ mybootdir/pxelinux.cfg/default
```

Here is an example PXE config file which boots a CoreOS image hosted on the TFTP server.

@@ -1,4 +1,3 @@

# Network Setup

This guide shows how to create a DHCP/TFTP/DNS network boot environment to work with `matchbox` to boot and provision PXE, iPXE, or GRUB2 client machines.

@@ -34,19 +33,25 @@ The setup of DHCP, TFTP, and DNS services on a network varies greatly. If you wi

Add a DNS entry (e.g. `matchbox.foo`, `provisioner.mycompany-internal`) that resolves to a deployment of the CoreOS `matchbox` service from machines you intend to boot and provision.

    dig matchbox.foo

```sh
$ dig matchbox.foo
```

If you deployed `matchbox` to a known IP address (e.g. dedicated host, load balanced endpoint, Kubernetes NodePort) and use `dnsmasq`, a domain name to IPv4/IPv6 address mapping could be added to `/etc/dnsmasq.conf`.

    # dnsmasq.conf
    address=/matchbox.foo/172.18.0.2

```
# dnsmasq.conf
address=/matchbox.foo/172.18.0.2
```

## iPXE

Servers with DHCP/TFTP services which already network boot iPXE clients can use the `chain` command to make clients download and execute the iPXE boot script from `matchbox`.

    # /var/www/html/ipxe/default.ipxe
    chain http://matchbox.foo:8080/boot.ipxe

```
# /var/www/html/ipxe/default.ipxe
chain http://matchbox.foo:8080/boot.ipxe
```

You can chainload from a menu entry or use other [iPXE commands](http://ipxe.org/cmd) if you have needs beyond just delegating to the iPXE script served by `matchbox`.

@@ -87,9 +92,11 @@ address=/matchbox.foo/192.168.1.100

Add [undionly.kpxe](http://boot.ipxe.org/undionly.kpxe) (and undionly.kpxe.0 if using dnsmasq) to your tftp-root (e.g. `/var/lib/tftpboot`).

    sudo systemctl start dnsmasq
    sudo firewall-cmd --add-service=dhcp --add-service=tftp [--add-service=dns]
    sudo firewall-cmd --list-services

```sh
$ sudo systemctl start dnsmasq
$ sudo firewall-cmd --add-service=dhcp --add-service=tftp [--add-service=dns]
$ sudo firewall-cmd --list-services
```

#### proxy DHCP

@@ -118,21 +125,21 @@ log-dhcp

Add [undionly.kpxe](http://boot.ipxe.org/undionly.kpxe) (and undionly.kpxe.0 if using dnsmasq) to your tftp-root (e.g. `/var/lib/tftpboot`).

```sh
$ sudo systemctl start dnsmasq
$ sudo firewall-cmd --add-service=dhcp --add-service=tftp [--add-service=dns]
$ sudo firewall-cmd --list-services
```

With rkt:

```sh
$ sudo rkt run coreos.com/dnsmasq:v0.3.0 --net=host -- -d -q --dhcp-range=192.168.1.1,proxy,255.255.255.0 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-userclass=set:ipxe,iPXE --pxe-service=tag:#ipxe,x86PC,"PXE chainload to iPXE",undionly.kpxe --pxe-service=tag:ipxe,x86PC,"iPXE",http://matchbox.foo:8080/boot.ipxe --log-queries --log-dhcp
```

With Docker:

```sh
$ sudo docker run --net=host --rm --cap-add=NET_ADMIN quay.io/coreos/dnsmasq -d -q --dhcp-range=192.168.1.1,proxy,255.255.255.0 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-userclass=set:ipxe,iPXE --pxe-service=tag:#ipxe,x86PC,"PXE chainload to iPXE",undionly.kpxe --pxe-service=tag:ipxe,x86PC,"iPXE",http://matchbox.foo:8080/boot.ipxe --log-queries --log-dhcp
```

### Configurable TFTP

@@ -141,11 +148,13 @@ If your DHCP server is configured to PXE boot clients, but you don't have contro

Example `/var/lib/tftpboot/pxelinux.cfg/default`:

    timeout 10
    default iPXE
    LABEL iPXE
    KERNEL ipxe.lkrn
    APPEND dhcp && chain http://matchbox.foo:8080/boot.ipxe

```
timeout 10
default iPXE
LABEL iPXE
KERNEL ipxe.lkrn
APPEND dhcp && chain http://matchbox.foo:8080/boot.ipxe
```

Add ipxe.lkrn to `/var/lib/tftpboot` (see [iPXE docs](http://ipxe.org/embed)).

@@ -156,24 +165,24 @@ On networks without network services, the `coreos.com/dnsmasq:v0.3.0` rkt ACI or

With rkt:

```sh
$ sudo rkt trust --prefix coreos.com/dnsmasq
# gpg key fingerprint is: 18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E
```

```sh
$ sudo rkt run coreos.com/dnsmasq:v0.3.0 --net=host -- -d -q --dhcp-range=192.168.1.3,192.168.1.254 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-userclass=set:ipxe,iPXE --dhcp-boot=tag:#ipxe,undionly.kpxe --dhcp-boot=tag:ipxe,http://matchbox.foo:8080/boot.ipxe --address=/matchbox.foo/192.168.1.2 --log-queries --log-dhcp
```

With Docker:

```sh
$ sudo docker run --rm --cap-add=NET_ADMIN --net=host quay.io/coreos/dnsmasq -d -q --dhcp-range=192.168.1.3,192.168.1.254 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-userclass=set:ipxe,iPXE --dhcp-boot=tag:#ipxe,undionly.kpxe --dhcp-boot=tag:ipxe,http://matchbox.foo:8080/boot.ipxe --address=/matchbox.foo/192.168.1.2 --log-queries --log-dhcp
```

Ensure that `matchbox.foo` resolves to a `matchbox` deployment and that you've allowed the services to run in your firewall configuration.

```sh
$ sudo firewall-cmd --add-service=dhcp --add-service=tftp --add-service=dns
```

## Troubleshooting

@@ -25,13 +25,15 @@ Verify a signature response and config response from the command line using the

**Warning: The test fixture keyring is for examples only.**

    $ gpg --homedir sign/fixtures --verify sig_file response_file
    gpg: Signature made Mon 08 Feb 2016 11:37:03 PM PST using RSA key ID 9896356A
    gpg: sign/fixtures/trustdb.gpg: trustdb created
    gpg: Good signature from "Fake Bare Metal Key (Do not use) <do-not-use@example.com>"
    gpg: WARNING: This key is not certified with a trusted signature!
    gpg: There is no indication that the signature belongs to the owner.
    Primary key fingerprint: BE2F 12BC 3642 2594 570A CCBB 8DC4 2020 9896 356A

```sh
$ gpg --homedir sign/fixtures --verify sig_file response_file
gpg: Signature made Mon 08 Feb 2016 11:37:03 PM PST using RSA key ID 9896356A
gpg: sign/fixtures/trustdb.gpg: trustdb created
gpg: Good signature from "Fake Bare Metal Key (Do not use) <do-not-use@example.com>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: BE2F 12BC 3642 2594 570A CCBB 8DC4 2020 9896 356A
```

## Signing Key Generation

@@ -39,13 +41,17 @@ Create a signing key or subkey according to your requirements and security polic

### gpg

    mkdir -m 700 path/in/vault
    gpg --homedir path/in/vault --expert --gen-key
    ...

```sh
$ mkdir -m 700 path/in/vault
$ gpg --homedir path/in/vault --expert --gen-key
...
```

### gpg2

    mkdir -m 700 path/in/vault
    gpg2 --homedir path/in/vault --expert --gen-key
    ...
    gpg2 --homedir path/in/vault --export-secret-key KEYID > path/in/vault/secring.gpg

```sh
$ mkdir -m 700 path/in/vault
$ gpg2 --homedir path/in/vault --expert --gen-key
...
$ gpg2 --homedir path/in/vault --export-secret-key KEYID > path/in/vault/secring.gpg
```

@@ -23,14 +23,18 @@ The [examples](../examples) statically assign IP addresses to libvirt client VMs

Download the CoreOS image assets referenced in the target [profile](../examples/profiles).

    ./scripts/get-coreos stable 1235.9.0 ./examples/assets

```sh
$ ./scripts/get-coreos stable 1235.9.0 ./examples/assets
```

Optionally, add your SSH public key to each machine group definition [as shown](../examples/README.md#ssh-keys).

Generate a root CA and Kubernetes TLS assets for components (`admin`, `apiserver`, `worker`) with SANs for `node1.example.com`, etc.

    rm -rf examples/assets/tls
    ./scripts/tls/k8s-certgen

```sh
$ rm -rf examples/assets/tls
$ ./scripts/tls/k8s-certgen
```

**Note**: TLS assets are served to any machines which request them, which requires a trusted network. Alternatively, provisioning may be tweaked to require that TLS assets be securely copied to each host.

@@ -44,33 +48,39 @@ Client machines should boot and provision themselves. Local client VMs should ne

[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your laptop. Use the generated kubeconfig to access the Kubernetes cluster created on rkt `metal0` or `docker0`.

    $ KUBECONFIG=examples/assets/tls/kubeconfig
    $ kubectl get nodes
    NAME                 STATUS    AGE
    node1.example.com    Ready     3m
    node2.example.com    Ready     3m
    node3.example.com    Ready     3m

```sh
$ KUBECONFIG=examples/assets/tls/kubeconfig
$ kubectl get nodes
NAME                 STATUS    AGE
node1.example.com    Ready     3m
node2.example.com    Ready     3m
node3.example.com    Ready     3m
```

Get all pods.

    $ kubectl get pods --all-namespaces
    NAMESPACE     NAME                                         READY     STATUS    RESTARTS   AGE
    kube-system   heapster-v1.2.0-4088228293-k3yn8             2/2       Running   0          3m
    kube-system   kube-apiserver-node1.example.com             1/1       Running   0          4m
    kube-system   kube-controller-manager-node1.example.com    1/1       Running   0          3m
    kube-system   kube-dns-v19-l2u8r                           3/3       Running   0          4m
    kube-system   kube-proxy-node1.example.com                 1/1       Running   0          3m
    kube-system   kube-proxy-node2.example.com                 1/1       Running   0          3m
    kube-system   kube-proxy-node3.example.com                 1/1       Running   0          3m
    kube-system   kube-scheduler-node1.example.com             1/1       Running   0          3m
    kube-system   kubernetes-dashboard-v1.4.1-0iy07            1/1       Running   0          4m

```sh
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                         READY     STATUS    RESTARTS   AGE
kube-system   heapster-v1.2.0-4088228293-k3yn8             2/2       Running   0          3m
kube-system   kube-apiserver-node1.example.com             1/1       Running   0          4m
kube-system   kube-controller-manager-node1.example.com    1/1       Running   0          3m
kube-system   kube-dns-v19-l2u8r                           3/3       Running   0          4m
kube-system   kube-proxy-node1.example.com                 1/1       Running   0          3m
kube-system   kube-proxy-node2.example.com                 1/1       Running   0          3m
kube-system   kube-proxy-node3.example.com                 1/1       Running   0          3m
kube-system   kube-scheduler-node1.example.com             1/1       Running   0          3m
kube-system   kubernetes-dashboard-v1.4.1-0iy07            1/1       Running   0          4m
```

## Kubernetes Dashboard

Access the Kubernetes Dashboard with `kubeconfig` credentials by port forwarding to the dashboard pod.

    $ kubectl port-forward kubernetes-dashboard-v1.4.1-SOME-ID 9090 -n=kube-system
    Forwarding from 127.0.0.1:9090 -> 9090

```sh
$ kubectl port-forward kubernetes-dashboard-v1.4.1-SOME-ID 9090 -n=kube-system
Forwarding from 127.0.0.1:9090 -> 9090
```

Then visit [http://127.0.0.1:9090](http://127.0.0.1:9090/).

@@ -1,4 +1,3 @@

# Troubleshooting

## Firewall

@@ -9,10 +8,12 @@ Running DHCP or proxyDHCP with `coreos/dnsmasq` on a host requires that the Fire

Running DHCP or proxyDHCP can cause "port already in use" collisions, depending on what else is running. Fedora, for example, runs bootp listening on udp/67. Find the service using the port.

    sudo lsof -i :67

```sh
$ sudo lsof -i :67
```

Evaluate whether you can configure the existing service or whether you'd like to stop it and test with `coreos/dnsmasq`.

## No boot filename received

PXE client firmware did not receive a DHCP Offer with PXE-Options after several attempts. If you're using the `coreos/dnsmasq` image with `-d`, each request should log to stdout. Using the wrong `-i` interface is the most common reason DHCP requests are not received. Otherwise, Wireshark can be useful for investigating.
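
If the logs are inconclusive, capturing DHCP traffic directly narrows things down; a sketch, where the interface is whichever one dnsmasq should be serving:

```sh
# BOOTP/DHCP uses UDP ports 67 (server) and 68 (client); substitute your interface
sudo tcpdump -ni docker0 port 67 or port 68
```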