docs: Add libvirt docker0 walkthrough

Dalton Hubble
2015-12-21 14:07:44 -08:00
parent 0b55b57e87
commit 458b264849
6 changed files with 113 additions and 97 deletions

2
.gitignore

@@ -25,5 +25,5 @@ _testmain.go
bin/
coverage/
Godeps/_workspace/src/github.com/coreos/coreos-baremetal
Godeps/_workspace/src/
images/

101
README.md

@@ -1,101 +1,14 @@
# CoreOS on Baremetal
# CoreOS on Baremetal [![Docker Repository on Quay](https://quay.io/repository/coreos/bootcfg/status "Docker Repository on Quay")](https://quay.io/repository/coreos/bootcfg)
CoreOS on Baremetal contains guides for booting and configuring CoreOS clusters on virtual or physical hardware. It includes Dockerfiles and Vagrantfiles for setting up a network boot environment and the `bootcfg` HTTP service for providing configs to machines based on their attributes.
## Guides
[Getting Started](docs/getting-started.md)
[Boot Config Service](docs/bootcfg.md)
[Libvirt Guide](docs/virtual-hardware.md)
[Baremetal Guide](docs/physical-hardware.md)
[bootcfg Config](docs/config.md)
[bootcfg API](docs/api.md)
* [Getting Started](docs/getting-started.md)
* [Boot Config Service](docs/bootcfg.md)
* [Libvirt Guide](docs/virtual-hardware.md)
* [Baremetal Guide](docs/physical-hardware.md)
* [bootcfg Config](docs/config.md)
* [bootcfg API](docs/api.md)
## Networking
Use PXE, iPXE, or Pixiecore with `bootcfg` to define kernel boot images, options, and cloud configs for machines within your network.
To get started in a libvirt development environment (under Linux), you can run the `bootcfg` container on the docker0 virtual bridge, alongside either the included `ipxe` container or the `danderson/pixiecore` and `dhcp` containers. Then you'll be able to boot libvirt VMs on the same bridge, or bare metal PXE clients attached to your development machine via a network adapter and added to the bridge. `docker0` defaults to the subnet 172.17.0.0/16. See [clients](#clients).
List all your bridges with
brctl show
and on Docker 1.9+, you can inspect the bridge and its interfaces with
docker network inspect bridge # docker calls the bridge "bridge"
To get started on bare metal, run the `bootcfg` container with the `--net=host` argument and configure PXE, iPXE, or Pixiecore. If you do not yet have any of these services running on your network, the [dnsmasq image](dockerfiles/dnsmasq) can be helpful.
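For reference, a host-networked run might look like the following sketch, which reuses the flags from the containerized example in the libvirt guide:

```
# minimal sketch: run bootcfg on the host network for bare metal setups
docker run --net=host --rm -v $PWD/data:/data:Z -v $PWD/images:/images:Z \
  coreos/bootcfg:latest -address=0.0.0.0:8080 -data-path=/data -images-path=/images
```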
### iPXE
Configure your iPXE server to chainload the `bootcfg` iPXE boot script hosted at `$BOOTCFG_BASE_URL/boot.ipxe`. The base URL should be of the form `protocol://hostname:port`, where hostname is the IP address of the config server or a DNS name for it.
# dnsmasq.conf
# if request from iPXE, serve bootcfg's iPXE boot script (via HTTP)
dhcp-boot=tag:ipxe,http://172.17.0.2:8080/boot.ipxe
#### docker0
Try running a PXE/iPXE server alongside `bootcfg` on the docker0 bridge. Use the included `ipxe` container to run an example PXE/iPXE server on that bridge.
The `ipxe` Docker image uses dnsmasq DHCP to point PXE/iPXE clients to the boot config service (hardcoded to http://172.17.0.2:8080). It also runs a TFTP server to serve the iPXE firmware to older PXE clients via chainloading.
cd dockerfiles/ipxe
./docker-build
./docker-run
Now create local PXE boot [clients](#clients) as libvirt VMs or by attaching bare metal machines to the docker0 bridge.
### PXE
To use `bootcfg` with PXE, you must [chainload iPXE](http://ipxe.org/howto/chainloading). This can be done by configuring your PXE/DHCP/TFTP server to send `undionly.kpxe` over TFTP to PXE clients.
# dnsmasq.conf
enable-tftp
tftp-root=/var/lib/tftpboot
# if regular PXE firmware, serve iPXE firmware (via TFTP)
dhcp-boot=tag:!ipxe,undionly.kpxe
`bootcfg` does not respond to DHCP requests or serve files over TFTP.
### Pixiecore
Pixiecore is a ProxyDHCP, TFTP, and HTTP server and calls through to the `bootcfg` API to get a boot config for `pxelinux` to boot. No modification of your existing DHCP server is required in production.
#### docker0
Try running a DHCP server, Pixiecore, and `bootcfg` on the docker0 bridge. Use the included `dhcp` container to run an example DHCP server and the official Pixiecore container image.
# DHCP
cd dockerfiles/dhcp
./docker-build
./docker-run
Start Pixiecore using the script, which attempts to detect the IP and port of `bootcfg` on the Docker host, or run it manually.
# Pixiecore
./scripts/pixiecore
# manual
docker run -v $PWD/images:/images:Z danderson/pixiecore -api http://$BOOTCFG_HOST:$BOOTCFG_PORT/pixiecore
Now create local PXE boot [clients](#clients) as libvirt VMs or by attaching bare metal machines to the docker0 bridge.
## Clients
Once boot services are running, create a PXE boot client VM or attach a bare metal machine to your host.
### VM
Create a VM using the virt-manager UI, select Network Boot with PXE, and for the network selection, choose "Specify Shared Device" with bridge name `docker0`. The VM should PXE boot using the boot configuration determined by its MAC address, which can be tweaked in virt-manager.
### Bare Metal
Connect a bare metal machine that has boot firmware (BIOS) support for PXE to your host with a network adapter. Find the new link and attach it to the bridge.
ip link show # find new link e.g. enp0s20u2
brctl addif docker0 enp0s20u2
Configure the boot firmware to prefer PXE booting or network booting and restart the machine. It should PXE boot using the boot configuration determined by its MAC address.


@@ -51,7 +51,7 @@ In these guides, PXE is used to load the iPXE boot file so iPXE can chainload sc
[iPXE](http://ipxe.org/) is an enhanced implementation of the PXE client firmware and a network boot program which uses iPXE scripts rather than config files and can download scripts and images with HTTP.
<img src='img/ipxe.png' class="img-center" alt="Basic PXE client server protocol flow"/>
<img src='img/ipxe.png' class="img-center" alt="iPXE client server protocol flow"/>
A DHCPOFFER to iPXE client firmware specifies an HTTP boot script such as `http://example.provisioner.net/boot.ipxe`.
@@ -86,7 +86,7 @@ Many networks have DHCP services which are impractical to modify or disable. Cor
To address this, PXE client firmware listens for a DHCPOFFER from a non-PXE DHCP server *and* a DHCPOFFER from a PXE-enabled **proxyDHCP server**, which is configured to respond with just the next server and boot filename. The client firmware combines the two responses as if they had come from a single DHCP server which provided PXE Options.
<img src='img/proxydhcp.png' class="img-center" alt="Basic PXE client server protocol flow"/>
<img src='img/proxydhcp.png' class="img-center" alt="DHCP and proxyDHCP responses are merged to get PXE Options"/>
The [libvirt guide](virtual-hardware.md) shows how to set up a network environment with a standalone PXE-enabled DHCP server or with a separate DHCP server and proxyDHCP server.

BIN
docs/img/libvirt-ipxe.png Normal file (binary image, 21 KiB)

BIN
docs/img/virt-manager.png Normal file (binary image, 30 KiB)

103
docs/virtual-hardware.md Normal file

@@ -0,0 +1,103 @@
# Libvirt Virtual Hardware
CoreOS can be booted and configured on virtual hardware within a libvirt environment (under Linux), with the various network services running as Docker containers on the `docker0` virtual bridge. Client VMs, or even bare metal hardware attached to the bridge, can be booted and configured from the network.
Docker containers run on the `docker0` virtual bridge, typically on the 172.17.0.0/16 subnet. Docker assigns IPs to containers started through the Docker CLI, but the bridge does not run a DHCP service. List the network bridges on your host and, on Docker 1.9+, inspect the bridge (the Docker CLI refers to `docker0` as `bridge`).
brctl show
docker network inspect bridge
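For instance, to confirm which subnet the bridge uses, you can filter the inspect output (the exact JSON layout may vary by Docker version):

```
# confirm the bridge subnet; the value below is the usual default
docker network inspect bridge | grep Subnet
#   "Subnet": "172.17.0.0/16"
```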
## Boot Config Service
First, run the `bootcfg` container with your configs and images directories.
docker run -p 8080:8080 --name=bootcfg --rm -v $PWD/data:/data:Z -v $PWD/images:/images:Z coreos/bootcfg:latest -address=0.0.0.0:8080 -data-path=/data -images-path=/images
or use `./docker-run`.
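To verify the service is up before pointing DHCP at it, fetch the iPXE boot script it serves (assuming the container received the address 172.17.0.2 on `docker0`):

```
# sanity check: bootcfg should return its iPXE boot script over HTTP
curl http://172.17.0.2:8080/boot.ipxe
```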
## PXE Setups
As discussed in [getting started](getting-started.md), there are several variations of PXE network boot environments. We'll show how to set up and test each network environment.
Several setups make use of the `dnsmasq` program which can run a PXE-enabled DHCP server, proxy DHCP server, and/or TFTP server.
### PXE
To boot PXE clients, configure a PXE network environment to [chainload iPXE](http://ipxe.org/howto/chainloading). The iPXE setup below configures DHCP to send `undionly.kpxe` over TFTP to older PXE clients for this purpose.
With `dnsmasq`, the relevant `dnsmasq.conf` settings would be:
enable-tftp
tftp-root=/var/lib/tftpboot
# if PXE request came from regular firmware, serve iPXE firmware (via TFTP)
dhcp-boot=tag:!ipxe,undionly.kpxe
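As a sketch, such a configuration could be tested by running `dnsmasq` in the foreground on the host, assuming the settings are saved as `dnsmasq.conf` and `undionly.kpxe` is present in the TFTP root:

```
# run dnsmasq in the foreground with the settings above (requires root)
sudo dnsmasq -d -C dnsmasq.conf
```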
### iPXE
Create a PXE/iPXE network environment by running the included `ipxe` container on the `docker0` bridge alongside `bootcfg`.
cd dockerfiles/ipxe
./docker-build
./docker-run
The `ipxe` image uses `dnsmasq` to run DHCP and TFTP. It allocates IPs in the `docker0` subnet and sends options to chainload older PXE clients to iPXE. iPXE clients are pointed to the `bootcfg` service (assumed to be running on 172.17.0.2:8080) to get a boot script.
```
# dnsmasq.conf
dhcp-range=172.17.0.43,172.17.0.99,30m
enable-tftp
tftp-root=/var/lib/tftpboot
# set tag "ipxe" if request comes from iPXE ("iPXE" user class)
dhcp-userclass=set:ipxe,iPXE
# if PXE request came from regular firmware, serve iPXE firmware (via TFTP)
dhcp-boot=tag:!ipxe,undionly.kpxe
# if PXE request came from iPXE, serve an iPXE boot script (via HTTP)
dhcp-boot=tag:ipxe,http://172.17.0.2:8080/boot.ipxe
```
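To spot-check the TFTP side, curl builds with TFTP support can fetch the chainload image (the container IP here is an assumption; it is typically the next address on the bridge after `bootcfg`):

```
# verify the iPXE firmware is served over TFTP
curl -o /dev/null tftp://172.17.0.3/undionly.kpxe
```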
<img src='img/libvirt-ipxe.png' class="img-center" alt="Libvirt iPXE network environment"/>
Continue to [clients](#clients) to create a client VM or attach a bare metal machine to boot.
### Pixiecore
Create a Pixiecore network environment by running the `danderson/pixiecore` container alongside `bootcfg`. Since Pixiecore is a proxyDHCP/TFTP/HTTP server and the `docker0` bridge does not run DHCP services, you'll need to run DHCP for your client machines.
The `dhcp` image uses `dnsmasq` just to provide DHCP to the `docker0` subnet.
cd dockerfiles/dhcp
./docker-build
./docker-run
Start Pixiecore using the script, which attempts to detect the IP and port `bootcfg` is using on `docker0`, or run it manually.
# Pixiecore
./scripts/pixiecore
# manual
docker run -v $PWD/images:/images:Z danderson/pixiecore -api http://$BOOTCFG_HOST:$BOOTCFG_PORT/pixiecore
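For the manual invocation, one way to fill in the variables (a sketch, assuming `bootcfg` was started with `--name=bootcfg` as above):

```
# look up the bootcfg container's IP on docker0; the port matches its -address flag
BOOTCFG_HOST=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' bootcfg)
BOOTCFG_PORT=8080
```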
Continue to [clients](#clients) to create a client VM or attach a bare metal machine to boot.
## Clients
Once a network environment is prepared to boot client machines, create a libvirt VM configured to PXE boot or attach a bare metal machine to your Docker host.
### libvirt VM
Use `virt-manager` to create a new client VM. Select Network Boot with PXE and for the network selection, choose "Specify Shared Device" with the bridge name `docker0`.
<img src='img/virt-manager.png' class="img-center" alt="Virt-Manager showing PXE network boot method"/>
The VM should PXE boot using the boot config and cloud config based on its UUID, MAC address, or your configured defaults. `virt-manager` shows the UUID and the MAC address of the NIC on the shared bridge, which you can use when naming configs.
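If you prefer the command line, a roughly equivalent client VM can be created with `virt-install` (a sketch; the name and disk size are placeholders):

```
# hypothetical example: create a VM that PXE boots on the docker0 bridge
virt-install --name pxe-client --ram 1024 --vcpus 1 --disk size=6 \
  --network bridge=docker0 --pxe
```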
### Bare Metal
Connect a bare metal client machine to your libvirt Docker host machine and ensure that the client's boot firmware (probably BIOS) has been configured to prefer PXE booting.
Find the network interface and attach it to the virtual bridge.
ip link show # find new link e.g. enp0s20u2
brctl addif docker0 enp0s20u2
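If the adapter isn't already up, bring the link up so the bridge forwards traffic to it:

```
# ensure the attached link is up
ip link set dev enp0s20u2 up
```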
Restart the client machine and it should PXE boot using the boot config and cloud config based on its UUID, MAC address, or your configured defaults.