# Getting Started with Docker

Get started with bootcfg on your Linux machine with Docker. If you're ready to try rkt, see Getting Started with rkt.

In this tutorial, we'll run bootcfg to boot and provision a cluster of four VMs on the `docker0` bridge. You'll be able to boot etcd clusters, Kubernetes clusters, and more, while testing different network setups.

## Requirements

Install the dependencies and start the Docker daemon.

```sh
# Fedora
sudo dnf install docker virt-install virt-manager
sudo systemctl start docker

# Debian/Ubuntu
# check Docker's docs to install Docker 1.8+ on Debian/Ubuntu
sudo apt-get install virt-manager virtinst qemu-kvm
```
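
If you'd like a quick sanity check before continuing, the following commands (not part of the original steps) confirm that the Docker daemon is running and the virtualization tools are installed:

```sh
# Confirm the Docker daemon responds and virt-install is on the PATH.
sudo docker info
virt-install --version
```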

Clone the coreos-baremetal repository, which contains the examples and scripts.

```sh
git clone https://github.com/coreos/coreos-baremetal.git
cd coreos-baremetal
```

Create four VM nodes with known hardware attributes. The nodes will be attached to the `docker0` bridge, where containers run.

```sh
sudo ./scripts/libvirt create-docker
sudo virt-manager
```
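
To confirm the nodes were created, you can list the libvirt domains; the exact node names depend on the script, so treat any particular names in the output as an assumption:

```sh
# List all libvirt domains, running or not, to confirm the four nodes exist.
sudo virsh list --all
```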

Download the CoreOS PXE image assets to `assets/coreos`. The examples instruct machines to load these assets from bootcfg.

```sh
./scripts/get-coreos                  # stable, default version
./scripts/get-coreos channel version  # or pass a specific channel and version
```
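
As a quick check, the downloaded kernel and initrd images should now appear under `assets/coreos`:

```sh
# bootcfg serves these assets to PXE clients during boot.
ls assets/coreos/
```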

## Containers

Run bootcfg on the default bridge `docker0`. The bridge should assign it the IP 172.17.0.2 (verify with `sudo docker network inspect bridge`).

```sh
sudo docker run -p 8080:8080 --rm \
  -v $PWD/examples:/data:Z \
  -v $PWD/assets:/assets:Z \
  quay.io/coreos/bootcfg:latest \
  -address=0.0.0.0:8080 -log-level=debug -config /data/etcd-docker.yaml
```

Take a look at `etcd-docker.yaml` to see how machines are matched to specifications. Explore some of the endpoints exposed by the service, which is port-mapped to `localhost:8080`.
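
For example, you can fetch the iPXE boot script that network-booting clients are pointed at later in this guide (other endpoint paths vary by bootcfg version, so only `/boot.ipxe` is taken from this walkthrough):

```sh
# The iPXE script bootcfg serves to network-booting clients.
curl http://localhost:8080/boot.ipxe
```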

Since the virtual network has no network boot services, run the dnsmasq container to set up an example iPXE environment providing DHCP, DNS, and TFTP. The dnsmasq container is handy for testing different network setups.

```sh
sudo docker run --rm --cap-add=NET_ADMIN quay.io/coreos/dnsmasq \
  -d -q \
  --dhcp-range=172.17.0.43,172.17.0.99 \
  --enable-tftp --tftp-root=/var/lib/tftpboot \
  --dhcp-userclass=set:ipxe,iPXE \
  --dhcp-boot=tag:#ipxe,undionly.kpxe \
  --dhcp-boot=tag:ipxe,http://bootcfg.foo:8080/boot.ipxe \
  --log-queries --log-dhcp \
  --dhcp-option=3,172.17.0.1 \
  --address=/bootcfg.foo/172.17.0.2
```

In this case, dnsmasq runs a DHCP server that allocates IPs between 172.17.0.43 and 172.17.0.99 to VMs, resolves `bootcfg.foo` to 172.17.0.2 (the IP where bootcfg runs), and points iPXE clients to `http://bootcfg.foo:8080/boot.ipxe`.

## Verify

Reboot the VMs and use virt-manager to watch their consoles.

```sh
sudo ./scripts/libvirt poweroff
sudo ./scripts/libvirt start
```

At this point, the VMs will PXE boot and use Ignition (preferred over cloud-config) to set up a three-node etcd cluster, with the other nodes acting as etcd proxies.

The example spec enables autologin so you can verify that etcd works between nodes.

```sh
systemctl status etcd2       # check the etcd2 service on a node
etcdctl set /message hello   # write a key
etcdctl get /message         # read it back, on the same or another node
```
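
As a supplementary check (not part of the original walkthrough), you can also inspect cluster membership and health; proxies follow the cluster but are not listed as members:

```sh
# List the three etcd members and check overall cluster health.
etcdctl member list
etcdctl cluster-health
```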

Clean up the VMs.

```sh
sudo ./scripts/libvirt poweroff
sudo ./scripts/libvirt destroy
sudo ./scripts/libvirt delete-disks
```

## Going Further

Explore the examples. Try the `k8s-docker.yaml` example to provision a TLS-authenticated Kubernetes cluster you can access locally with kubectl.
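
Once kubectl is configured with the cluster's TLS credentials (the exact kubeconfig setup comes from that example's docs, not this guide), a basic check might look like:

```sh
# Hypothetical verification once kubectl points at the new cluster.
kubectl get nodes
kubectl get pods --all-namespaces
```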

Learn more about bootcfg, enable OpenPGP signing, or adapt an example for your own physical hardware and network.