# Examples

This directory contains Config Service data directories showcasing different network-bootable baremetal clusters. These examples work with the libvirt VMs created by `scripts/libvirt` and the Libvirt Guide.
| Name | Description | Docs |
|---|---|---|
| etcd-small | Cluster with 1 etcd node, 4 proxies | reference |
| etcd-large | Cluster with 3 etcd nodes, 2 proxies | reference |
| kubernetes | Kubernetes cluster with 1 master, 1 worker, 1 dedicated etcd node | reference |
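Each example is a Config Service data directory. As a rough sketch (the exact files vary by example, and the names below are illustrative rather than copied from the repository), the layout looks like:

```
etcd-small/
├── groups/       # machine groups: match machines to profiles
├── profiles/     # boot settings: kernel, initrd, kernel args
└── ignition/     # Ignition configs referenced by profiles
```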
## Experimental
These examples are experimental and designed to demonstrate booting and configuring CoreOS clusters, especially locally. They have NOT been hardened for production yet.
## Virtual or Physical Hardware
Create a network of virtual hardware on your Linux machine before running the Config Service with one of these examples.
Create 5 libvirt VM nodes, which should be enough for any of the examples. The `scripts/libvirt` script will create 5 VM nodes with known hardware attributes on the `docker0` bridge network.
```sh
# Fedora/RHEL
dnf install virt-manager

# create node1 ... node5
./scripts/libvirt create
```
The nodes can be conveniently managed together.
```sh
./scripts/libvirt reboot
./scripts/libvirt shutdown    # graceful
./scripts/libvirt poweroff    # non-graceful
./scripts/libvirt destroy
```
The Config Service can use these examples to provision physical clusters as well, but you'll need to edit the static IPs to suit your network and hardware. See the Baremetal Guide.
## Config Service
The Config Service matches machines to boot configurations, Ignition configs, and cloud-configs. It can optionally serve OS images and other assets.
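As an illustration of the matching idea, a group definition might tie a machine with a known MAC address to a named profile. The JSON below is a sketch under assumed field names (`name`, `profile`, `require`, `metadata`); check the example data directories for the exact schema:

```json
{
  "name": "node1",
  "profile": "etcd",
  "require": {
    "mac": "52:54:00:a1:9c:ae"
  },
  "metadata": {
    "ipv4_address": "172.17.0.21"
  }
}
```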
Let's run the Config Service on the virtual network.
```sh
docker pull quay.io/coreos/bootcfg:latest
```
Run the command for the example you wish to use.
### etcd-small Cluster

```sh
docker run -p 8080:8080 --name=bootcfg --rm \
  -v $PWD/examples/etcd-small:/data:Z \
  -v $PWD/assets:/assets:Z \
  quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug
```
### etcd-large Cluster

```sh
docker run -p 8080:8080 --name=bootcfg --rm \
  -v $PWD/examples/etcd-large:/data:Z \
  -v $PWD/assets:/assets:Z \
  quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug
```
### Kubernetes Cluster

```sh
docker run -p 8080:8080 --name=bootcfg --rm \
  -v $PWD/examples/kubernetes:/data:Z \
  -v $PWD/assets:/assets:Z \
  quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug
```
The mounted data directory (e.g. `-v $PWD/examples/etcd-small:/data:Z`) depends on the example you wish to run.
## Assets
The examples require the CoreOS stable PXE kernel and initrd images to be served by the Config Service. Run `get-coreos` to download those images into `assets`.

```sh
./scripts/get-coreos
```
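After the download, the kernel and initrd should sit under `assets`. The file names below match CoreOS's published PXE artifacts, but the version directory is illustrative:

```
assets/
└── coreos/
    └── 1010.5.0/
        ├── coreos_production_pxe.vmlinuz
        └── coreos_production_pxe_image.cpio.gz
```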
Some examples require a few additional assets to be downloaded or generated. Acquire those assets before continuing.
## Network Environment
Run an iPXE setup with DHCP and TFTP on the virtual network on your machine, similar to what would be present on a real network. This allocates IP addresses to the VM hosts, points PXE-booting clients at the Config Service, and chainloads iPXE.
```sh
docker run --rm --cap-add=NET_ADMIN quay.io/coreos/dnsmasq \
  -d -q \
  --dhcp-range=172.17.0.43,172.17.0.99 \
  --enable-tftp --tftp-root=/var/lib/tftpboot \
  --dhcp-userclass=set:ipxe,iPXE \
  --dhcp-boot=tag:#ipxe,undionly.kpxe \
  --dhcp-boot=tag:ipxe,http://172.17.0.2:8080/boot.ipxe
```
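To unpack those flags: clients that are not yet running iPXE (tag `#ipxe`) are handed `undionly.kpxe` over TFTP, which chainloads iPXE; clients already running iPXE are pointed at the Config Service's boot script. That script is generated by the service, but it roughly amounts to an iPXE chainload like the following sketch (the endpoint path and query parameters are assumptions, not the exact generated script):

```
#!ipxe
chain http://172.17.0.2:8080/ipxe?uuid=${uuid}&mac=${mac:hexhyp}
```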
## Boot
Reboot the nodes to PXE boot them into your new cluster!
```sh
./scripts/libvirt reboot

# if nodes are in a non-booted state
./scripts/libvirt poweroff
```
All examples use autologin so you can check whether your nodes were set up correctly, depending on the cluster example you chose.
If something goes wrong, see troubleshooting.
If everything works, congratulations! Stay tuned for developments.
## Further Reading
See the Libvirt Guide or Baremetal Guide for more information.