Documentation: Use /etc/hosts node names in docs
@@ -10,6 +10,7 @@ Ensure that you've gone through the [bootcfg with rkt](getting-started-rkt.md) o
* Use rkt or Docker to start `bootcfg`
* Create a network boot environment with `coreos/dnsmasq`
* Create the example libvirt client VMs
* `/etc/hosts` entries for `node[1-3].example.com` (or pass custom names to `k8s-certgen`)

Build and install the [fork of bootkube](https://github.com/dghubble/bootkube), which supports DNS names.
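A hedged sketch of that build step, assuming a working Go toolchain and that the fork keeps upstream bootkube's `cmd/bootkube` package path; the fork's README is authoritative.

```
# fetch, build, and install the forked bootkube into $GOPATH/bin (GOPATH-era Go tooling)
go get -u github.com/dghubble/bootkube/cmd/bootkube
# make the binary reachable as `bootkube` for the render steps below
export PATH="$PATH:$(go env GOPATH)/bin"
```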
@@ -40,8 +41,7 @@ Add your SSH public key to each machine group definition [as shown](../examples/

Use the `bootkube` tool to render Kubernetes manifests and credentials into an `--asset-dir`. Later, `bootkube` will schedule these manifests during bootstrapping and the credentials will be used to access your cluster.

# If running with docker, use 172.17.0.21 instead of 172.15.0.21
bootkube render --asset-dir=assets --api-servers=https://172.15.0.21:443 --api-server-alt-names=DNS=node1.example.com,IP=172.15.0.21
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com

## Containers
@@ -53,17 +53,17 @@ Client machines should boot and provision themselves. Local client VMs should ne

We're ready to use bootkube to create a temporary control plane and bootstrap a self-hosted Kubernetes cluster.

Secure copy the `kubeconfig` to `/etc/kubernetes/kubeconfig` on **every** node (i.e. 172.15.0.21-23 for metal0 or 172.17.0.21-23 for docker0).
Secure copy the `kubeconfig` to `/etc/kubernetes/kubeconfig` on **every** node, which will path-activate the `kubelet.service`.

for node in '172.15.0.21' '172.15.0.22' '172.15.0.23'; do
scp assets/auth/kubeconfig core@$node:/home/core/kubeconfig
ssh core@$node 'sudo mv kubeconfig /etc/kubernetes/kubeconfig'
for node in 'node1' 'node2' 'node3'; do
scp assets/auth/kubeconfig core@$node.example.com:/home/core/kubeconfig
ssh core@$node.example.com 'sudo mv kubeconfig /etc/kubernetes/kubeconfig'
done

Secure copy the `bootkube` generated assets to any controller node and run `bootkube-start`.

scp -r assets core@172.15.0.21:/home/core/assets
ssh core@172.15.0.21 'sudo ./bootkube-start'
scp -r assets core@node1.example.com:/home/core/assets
ssh core@node1.example.com 'sudo ./bootkube-start'

Watch the temporary control plane logs until the scheduled kubelet takes over in place of the on-host kubelet.
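While waiting, you can also poll the API server from your laptop using the rendered credentials; a minimal sketch, assuming `kubectl` is installed locally and `node1.example.com` resolves via your `/etc/hosts` entries.

```
# list the nodes that have registered so far
kubectl --kubeconfig=assets/auth/kubeconfig get nodes
# watch kube-system pods appear as the self-hosted control plane replaces the bootstrap one
kubectl --kubeconfig=assets/auth/kubeconfig get pods --all-namespaces -w
```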
@@ -26,6 +26,14 @@ Download CoreOS image assets referenced by the `etcd-docker` [example](../exampl

./scripts/get-coreos stable 1185.3.0 ./examples/assets

For development convenience, add `/etc/hosts` entries for nodes so they may be referenced by name as you would in production.

# /etc/hosts
...
172.17.0.21 node1.example.com
172.17.0.22 node2.example.com
172.17.0.23 node3.example.com

## Containers

Run the latest `bootcfg` Docker image from `quay.io/coreos/bootcfg` with the `etcd-docker` example. The container should receive the IP address 172.17.0.2 on the `docker0` bridge.
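For orientation, that invocation has roughly the following shape. This is a sketch only: the volume layout and the `-address`/`-log-level` flags are assumptions here, so confirm them against the unchanged portion of this doc.

```
# serve the examples directory, using the etcd-docker groups (flag names assumed)
sudo docker run -p 8080:8080 --rm \
  -v $PWD/examples:/var/lib/bootcfg:Z \
  -v $PWD/examples/groups/etcd-docker:/var/lib/bootcfg/groups:Z \
  quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug
```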
@@ -73,7 +81,9 @@ The example profile added autologin so you can verify that etcd works between no
etcdctl set /message hello
etcdctl get /message

Clean up the VM machines.
## Cleanup

Clean up the containers and VM machines.

sudo docker rm -f dnsmasq
sudo ./scripts/libvirt poweroff
@@ -81,5 +91,5 @@ Clean up the VM machines.

## Going Further

Learn more about [bootcfg](bootcfg.md) or explore the other [example](../examples) clusters. Try the [k8s-docker example](kubernetes.md) to produce a TLS-authenticated Kubernetes cluster you can access locally with `kubectl`.
Learn more about [bootcfg](bootcfg.md) or explore the other [example](../examples) clusters. Try the [k8s example](kubernetes.md) to produce a TLS-authenticated Kubernetes cluster you can access locally with `kubectl`.
@@ -54,6 +54,14 @@ On Fedora, add the `metal0` interface to the trusted zone in your firewall confi

After a recent update, you may see a warning that NetworkManager controls the interface. Work around this by using the firewall-config GUI to add `metal0` to the trusted zone.
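The same can be done from the command line with stock firewalld tooling; a sketch, noting that NetworkManager may still override it, which is why the GUI route is suggested above.

```
# add metal0 to the trusted zone now, and persist it across reloads
sudo firewall-cmd --zone=trusted --add-interface=metal0
sudo firewall-cmd --zone=trusted --add-interface=metal0 --permanent
```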
For development convenience, add `/etc/hosts` entries for nodes so they may be referenced by name as you would in production.

# /etc/hosts
...
172.15.0.21 node1.example.com
172.15.0.22 node2.example.com
172.15.0.23 node3.example.com

## Containers

Run the latest `bootcfg` ACI with rkt and the `etcd` example.
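A sketch of the shape of that rkt invocation; the CNI address, volume names, and `bootcfg` flags are assumptions here, so defer to the doc's unchanged command.

```
# mount the examples directory into the pod and serve it on the metal0 CNI network (details assumed)
sudo rkt run --net=metal0:IP=172.15.0.2 \
  --mount volume=data,target=/var/lib/bootcfg --volume data,kind=host,source=$PWD/examples \
  quay.io/coreos/bootcfg:latest -- -address=0.0.0.0:8080 -log-level=debug
```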
@@ -111,6 +119,8 @@ The example profile added autologin so you can verify that etcd works between no
etcdctl set /message hello
etcdctl get /message

## Cleanup

Press ^] three times to stop a rkt pod. Clean up the VM machines.

sudo ./scripts/libvirt poweroff
@@ -10,6 +10,7 @@ Ensure that you've gone through the [bootcfg with rkt](getting-started-rkt.md) o
* Use rkt or Docker to start `bootcfg`
* Create a network boot environment with `coreos/dnsmasq`
* Create the example libvirt client VMs
* `/etc/hosts` entries for `node[1-3].example.com` (or pass custom names to `k8s-certgen`)

## Examples

@@ -27,15 +28,12 @@ Download the CoreOS image assets referenced in the target [profile](../examples/

Optionally, add your SSH public key to each machine group definition [as shown](../examples/README.md#ssh-keys).

Generate a root CA and Kubernetes TLS assets for components (`admin`, `apiserver`, `worker`).
Generate a root CA and Kubernetes TLS assets for components (`admin`, `apiserver`, `worker`) with SANs for `node1.example.com`, etc.

rm -rf examples/assets/tls
# for Kubernetes on CNI metal0 (for rkt)
./scripts/tls/k8s-certgen -d examples/assets/tls -s 172.15.0.21 -m IP.1=10.3.0.1,IP.2=172.15.0.21,DNS.1=node1.example.com -w DNS.1=node2.example.com,DNS.2=node3.example.com
# for Kubernetes on docker0 (for docker)
./scripts/tls/k8s-certgen -d examples/assets/tls -s 172.17.0.21 -m IP.1=10.3.0.1,IP.2=172.17.0.21,DNS.1=node1.example.com -w DNS.1=node2.example.com,DNS.2=node3.example.com
./scripts/tls/k8s-certgen

**Note**: TLS assets are served to any machines which request them, which requires a trusted network. Alternately, provisioning may be tweaked to require TLS assets be securely copied to each host. Read about our longer term security plans at [Distributed Trusted Computing](https://coreos.com/blog/coreos-trusted-computing.html).
**Note**: TLS assets are served to any machines which request them, which requires a trusted network. Alternately, provisioning may be tweaked to require TLS assets be securely copied to each host.

## Containers
@@ -66,7 +64,7 @@ Get all pods.
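The listing below is what a query across all namespaces should show; a minimal sketch of the command, assuming `k8s-certgen` wrote a `kubeconfig` under `examples/assets/tls` (per its `-d`/`-s` options):

```
# list every pod in every namespace using the generated credentials (kubeconfig path assumed)
kubectl --kubeconfig=examples/assets/tls/kubeconfig get pods --all-namespaces
```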
kube-system kube-proxy-node2.example.com 1/1 Running 0 3m
kube-system kube-proxy-node3.example.com 1/1 Running 0 3m
kube-system kube-scheduler-node1.example.com 1/1 Running 0 3m
kube-system kubernetes-dashboard-v1.4.0-0iy07 1/1 Running 0 4m
kube-system kubernetes-dashboard-v1.4.1-0iy07 1/1 Running 0 4m

## Kubernetes Dashboard
@@ -9,6 +9,7 @@ Ensure that you've gone through the [bootcfg with rkt](getting-started-rkt.md) o
* Use rkt or Docker to start `bootcfg`
* Create a network boot environment with `coreos/dnsmasq`
* Create the example libvirt client VMs
* `/etc/hosts` entries for `node[1-3].example.com` (or pass custom names to `k8s-certgen`)

## Examples

@@ -26,15 +27,12 @@ Download the CoreOS image assets referenced in the target [profile](../examples/

Optionally, add your SSH public key to each machine group definition [as shown](../examples/README.md#ssh-keys).

Generate a root CA and Kubernetes TLS assets for components (`admin`, `apiserver`, `worker`).
Generate a root CA and Kubernetes TLS assets for components (`admin`, `apiserver`, `worker`) with SANs for `node1.example.com`, etc.

rm -rf examples/assets/tls
# for Kubernetes on CNI metal0 (for rkt)
./scripts/tls/k8s-certgen -d examples/assets/tls -s 172.15.0.21 -m IP.1=10.3.0.1,IP.2=172.15.0.21,DNS.1=node1.example.com -w DNS.1=node2.example.com,DNS.2=node3.example.com
# for Kubernetes on docker0 (for docker)
./scripts/tls/k8s-certgen -d examples/assets/tls -s 172.17.0.21 -m IP.1=10.3.0.1,IP.2=172.17.0.21,DNS.1=node1.example.com -w DNS.1=node2.example.com,DNS.2=node3.example.com
./scripts/tls/k8s-certgen

**Note**: TLS assets are served to any machines which request them, which requires a trusted network. Alternately, provisioning may be tweaked to require TLS assets be securely copied to each host. Read about our longer term security plans at [Distributed Trusted Computing](https://coreos.com/blog/coreos-trusted-computing.html).
**Note**: TLS assets are served to any machines which request them, which requires a trusted network. Alternately, provisioning may be tweaked to require TLS assets be securely copied to each host.

## Containers
@@ -65,7 +63,7 @@ Get all pods.
kube-system kube-proxy-node2.example.com 1/1 Running 0 3m
kube-system kube-proxy-node3.example.com 1/1 Running 0 3m
kube-system kube-scheduler-node1.example.com 1/1 Running 0 3m
kube-system kubernetes-dashboard-v1.4.0-0iy07 1/1 Running 0 4m
kube-system kubernetes-dashboard-v1.4.1-0iy07 1/1 Running 0 4m

## Kubernetes Dashboard
@@ -10,6 +10,7 @@ Ensure that you've gone through the [bootcfg with rkt](getting-started-rkt.md) g
* Use rkt or Docker to start `bootcfg`
* Create a network boot environment with `coreos/dnsmasq`
* Create the example libvirt client VMs
* `/etc/hosts` entries for `node[1-3].example.com` (or pass custom names to `k8s-certgen`)
* Install the Torus [binaries](https://github.com/coreos/torus/releases)

## Examples

@@ -36,7 +37,7 @@ Create a network boot environment and power-on your machines. Revisit [bootcfg w

Install the Torus [binaries](https://github.com/coreos/torus/releases) on your laptop. Torus uses etcd3 for coordination and metadata storage, so any etcd node in the cluster can be queried with `torusctl`.

./torusctl --etcd 172.15.0.21:2379 list-peers
./torusctl --etcd node1.example.com:2379 list-peers

Run `list-peers` to report the status of data nodes in the Torus cluster.
@@ -57,11 +58,11 @@ Torus has already initialized its metadata within etcd3 to format the cluster an

Create a new replicated, virtual block device or `volume` on Torus.

./torusctl --etcd=172.15.0.21:2379 block create hello 500MiB
./torusctl --etcd=node1.example.com:2379 block create hello 500MiB

List the current volumes,

./torusctl --etcd=172.15.0.21:2379 volume list
./torusctl --etcd=node1.example.com:2379 volume list

and verify that `hello` was created.
@@ -78,7 +79,7 @@ and verify that `hello` was created.

Let's attach the Torus volume, create a filesystem, and add some files. Add the `nbd` kernel module.

sudo modprobe nbd
sudo ./torusblk --etcd=172.15.0.21:2379 nbd hello
sudo ./torusblk --etcd=node1.example.com:2379 nbd hello

In a new shell, create a new filesystem on the volume and mount it on your system.
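A minimal sketch of that step, assuming `torusblk` exposed the volume as `/dev/nbd0` and that `/mnt/torus` is your chosen mount point:

```
# format the attached block device and mount it (device and mount point assumed)
sudo mkfs.ext4 /dev/nbd0
sudo mkdir -p /mnt/torus
sudo mount /dev/nbd0 /mnt/torus
```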
@@ -100,14 +101,14 @@ By default, Torus uses a replication factor of 2. You may write some data and po

Check the Torus data nodes.

$ ./torusctl --etcd 172.15.0.21:2379 list-peers
$ ./torusctl --etcd node1.example.com:2379 list-peers

```
+--------------------------+--------------------------------------+---------+--------+--------+---------------+--------------+
| ADDRESS | UUID | SIZE | USED | MEMBER | UPDATED | REB/REP DATA |
+--------------------------+--------------------------------------+---------+--------+--------+---------------+--------------+
| http://172.15.0.21:40000 | 016fad6a-2e23-11e6-8ced-525400a19cae | 1.0 GiB | 22 MiB | OK | 3 seconds ago | 0 B/sec |
| http://172.15.0.22:40000 | 0c67d31c-2e23-11e6-91f5-525400b22f86 | 1.0 GiB | 22 MiB | OK | 3 seconds ago | 0 B/sec |
| http://node1.example.com:40000 | 016fad6a-2e23-11e6-8ced-525400a19cae | 1.0 GiB | 22 MiB | OK | 3 seconds ago | 0 B/sec |
| http://node2.example.com:40000 | 0c67d31c-2e23-11e6-91f5-525400b22f86 | 1.0 GiB | 22 MiB | OK | 3 seconds ago | 0 B/sec |
| | 0408cbba-2e23-11e6-9871-525400c36177 | ??? | ??? | DOWN | Missing | |
+--------------------------+--------------------------------------+---------+--------+--------+---------------+--------------+
Balanced: true Usage: 2.15%
@@ -7,6 +7,6 @@
},
"metadata": {
"fleet_metadata": "role=etcd-proxy",
"etcd_initial_cluster": "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
"etcd_initial_cluster": "node1=http://node1.example.com:2380,node2=http://node2.example.com:2380,node3=http://node3.example.com:2380"
}
}
@@ -3,16 +3,16 @@
USAGE="Usage: $(basename $0)
Options:
-d DEST Destination for generated files (default: ./examples/assets/tls)
-s SERVER Reachable Server IP for kubeconfig (e.g. 172.15.0.21)
-m MASTERS Master Node Names/Addresses in SAN format (e.g. IP.1=10.3.0.1,IP.2=172.15.0.21).
-w WORKERS Worker Node Names/Addresses in SAN format (e.g. IP.1=172.15.0.22,IP.2=172.15.0.23)
-s SERVER Reachable Server IP for kubeconfig (e.g. node1.example.com)
-m MASTERS Controller Node Names/Addresses in SAN format (e.g. IP.1=10.3.0.1,DNS.1=node1.example.com).
-w WORKERS Worker Node Names/Addresses in SAN format (e.g. DNS.1=node2.example.com,DNS.2=node3.example.com)
-h Show help.
"

DEST="./examples/assets/tls"
SERVER="172.15.0.21"
MASTERS="IP.1=10.3.0.1,IP.2=172.15.0.21"
WORKERS="IP.1=172.15.0.22,IP.2=172.15.0.23"
SERVER="node1.example.com"
MASTERS="IP.1=10.3.0.1,DNS.1=node1.example.com"
WORKERS="DNS.1=node2.example.com,DNS.2=node3.example.com"

while getopts "d:s:m:w:vh" opt; do
case $opt in
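With the new defaults above, running `./scripts/tls/k8s-certgen` with no arguments is equivalent to spelling the options out; a sketch of that explicit form, using the default values from this diff:

```
# explicit form of the DNS-based defaults shown above
./scripts/tls/k8s-certgen \
  -d ./examples/assets/tls \
  -s node1.example.com \
  -m IP.1=10.3.0.1,DNS.1=node1.example.com \
  -w DNS.1=node2.example.com,DNS.2=node3.example.com
```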