pixiecore: Add pixiecore network example

Dalton Hubble
2015-12-07 11:54:42 -08:00
parent 46da9091a7
commit 8e4dcf4172
7 changed files with 107 additions and 32 deletions

View File

@@ -2,5 +2,5 @@ FROM busybox:latest
MAINTAINER Dalton Hubble <dalton.hubble@coreos.com>
ADD bin/server /bin/server
EXPOSE 8080
EXPOSE 8081
CMD ./bin/server

View File

@@ -10,9 +10,18 @@ docker-build:
aci-build:
	./acifile
docker-run:
	docker run -p 8080:8080 -v $(shell echo $$PWD)/static:/static dghubble.io/metapxe:latest
run-docker:
	docker run -p 8081:8081 -v $(shell echo $$PWD)/static:/static dghubble.io/metapxe:latest
rkt-run:
# Fedora 23 issue https://github.com/coreos/rkt/issues/1727
run-rkt:
	rkt --insecure-options=image run --no-overlay bin/metapxe-0.0.1-linux-amd64.aci
run-pixiecore:
	docker run -v $(shell echo $$PWD)/static:/static danderson/pixiecore -api http://172.17.0.2:8081/
run-dhcp:
	./scripts/vethdhcp
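The `-api` flag in `run-pixiecore` assumes the config service container received 172.17.0.2 on the `docker0` bridge, which is typically the address Docker hands the first container it starts. A quick way to confirm the actual address before pointing Pixiecore at it (the container ID placeholder below is whatever `docker ps` reports for the config service):

    # confirm the config service container's address on the docker0 bridge
    docker ps                                                 # note the config service container ID
    docker inspect -f '{{ .NetworkSettings.IPAddress }}' <container-id>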

View File

@@ -1,48 +1,44 @@
Boot and cloud config service for PXE, iPXE, and Pixiecore.
## Validation

The config service can be validated in scenarios which use PXE, iPXE, or Pixiecore on a libvirt virtual network or on a physical network of bare metal machines.

### libvirt

A libvirt virtual network of containers and VMs can be used to validate PXE booting of VM clients in various scenarios (PXE, iPXE, Pixiecore). To do this, start the appropriate set of containerized services for the scenario, then boot a VM configured to use the PXE boot method.
Docker starts containers with virtual ethernet connections to the `docker0` bridge,

    $ brctl show
    $ docker network inspect bridge    # the docker client names the bridge "bridge"

which uses the default subnet 172.17.0.0/16. It is also possible to create your own network bridge and reconfigure Docker to start containers on that bridge, but that approach is not used here.
PXE boot client VMs can be started within the same subnet by attaching them to the `docker0` bridge. Create a VM using the virt-manager UI, select Network Boot with PXE, and for the network selection, choose "Specify Shared Device" with bridge name `docker0`. The VM should PXE boot using the boot config determined by the MAC address of its virtual network card, which can be inspected in virt-manager.
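The same client can be created from the command line with `virt-install`; the VM name, memory size, and OS variant below are illustrative rather than anything this repo requires.

    # diskless client VM that network boots, attached to the docker0 bridge
    # (CoreOS PXE images run from RAM, so give the VM ample memory)
    virt-install --name pxe-client --ram 2048 --vcpus 1 \
      --pxe --nodisks --os-variant generic \
      --network bridge=docker0,model=virtio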
### iPXE

#### Pixiecore

Run the config service container.

    make run-docker

Run the Pixiecore server container, which uses `api` mode to call through to the config service.

    make run-pixiecore
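Before booting clients, it can help to confirm the config service is reachable at the address Pixiecore was given. The routes it serves are defined by the config service itself, so a plain GET is only a liveness check:

    # 172.17.0.2:8081 is the address passed to pixiecore's -api flag above
    curl -v http://172.17.0.2:8081/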
Finally, run the `vethdhcp` script to create a virtual ethernet connection on the `docker0` bridge, assign it an IP address, and run dnsmasq to provide DHCP service to the VMs we'll add to the bridge.

    make run-dhcp

Create a PXE boot client VM using virt-manager as described above. Let the VM PXE boot using the boot config determined by the MAC address to config mapping set in the config service.
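A couple of quick checks, assuming the defaults in `scripts/vethdhcp`, confirm the bridge and DHCP pieces are wired up before booting clients. dnsmasq runs in the foreground, so DHCPDISCOVER/DHCPOFFER lines appear in its output as clients boot.

    ip addr show vethdhcp     # should report 172.17.0.42/16 and state UP
    brctl show docker0        # vethdhcp_b should appear in the interfaces column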

View File

@@ -7,7 +7,7 @@ import (
"github.com/coreos/coreos-baremetal/server"
)
const address = ":8080"
const address = ":8081"
func main() {
bootConfigProvider := server.NewBootConfigProvider()

scripts/vethdhcp (new executable file, 26 lines)
View File

@@ -0,0 +1,26 @@
#!/bin/bash -e
# Connect dnsmasq DHCP to the docker0 bridge via a veth
BRIDGE=docker0
VETH=vethdhcp
VETH_ADDR=172.17.0.42/16
# DHCP
ADDR_RANGE_START=172.17.0.43
ADDR_RANGE_END=172.17.0.99
LEASE_TIME=30m
# create and attach the veth if it is missing
if ! ip link show $VETH; then
  # create a virtual ethernet device (veth pair)
  ip link add $VETH type veth peer name ${VETH}_b
  # attach the "b" side of the veth to the bridge
  brctl addif $BRIDGE ${VETH}_b
  # assign an IP address to the veth
  ip addr add $VETH_ADDR dev $VETH
  # set both links to be up
  ip link set $VETH up
  ip link set ${VETH}_b up
fi
dnsmasq --no-daemon --port=0 -i $VETH --dhcp-range=$ADDR_RANGE_START,$ADDR_RANGE_END,$LEASE_TIME
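The script has no teardown step. Assuming the default names above, something along these lines undoes it (run as root, like the script itself):

    pkill dnsmasq            # if dnsmasq was left running; it runs in the foreground by default
    ip link del vethdhcp     # deleting one end of a veth pair removes its peer as well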

vagrant/README.md (new file, 45 lines)
View File

@@ -0,0 +1,45 @@
# Vagrant Development
`pxe` and `pixiecore` provide Vagrantfiles and scripts for setting up a PXE or Pixiecore provisioning server in libvirt for development.
To get started, install the dependencies.

    # Fedora 22/23
    dnf install vagrant vagrant-libvirt virt-manager
## Usage
Create a PXE or Pixiecore server VM with `vagrant up`.
    vagrant up --provider libvirt
    vagrant ssh
The PXE server will allocate DHCP leases, run a TFTP server with a CoreOS kernel image and initramfs, and host a cloud-config over HTTP. The Pixiecore server itself is a proxy DHCP, TFTP, and HTTP server for images.
By default, the PXE server runs at 192.168.32.10 on the `vagrant-pxe` virtual network. The Pixiecore server runs at 192.168.33.10 on the `vagrant-pixiecore` virtual network.
### Clients
Once the provisioning server has started, PXE boot enabled client VMs in the same network should boot with CoreOS.
Launch `virt-manager` to create a new virtual machine. When prompted, select Network Boot (PXE), skip adding a disk, and choose the `vagrant-libvirt` network.
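As an alternative to clicking through virt-manager, a roughly equivalent `virt-install` invocation is sketched below; the VM name, memory size, and OS variant are illustrative, and the libvirt network should match the one your provisioning server is attached to.

    # diskless client VM that network boots on the server's libvirt network
    virt-install --name pxe-client --ram 2048 --vcpus 1 \
      --pxe --nodisks --os-variant generic \
      --network network=vagrant-libvirt,model=virtio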
If you see "Nothing to boot", try force resetting the client VM.
Use SSH to connect to a client VM after boot and cloud-config succeed. The CLIENT_IP will be visible in the virt-manager console.
    ssh core@CLIENT_IP    # requires ssh_authorized_keys entry in cloud-config
### Configuration
The Vagrantfile parses the `config.rb` file for several variables you can use to configure network settings.
### Reload
If you change the Vagrantfile or a configuration variable, reload the VM with
    vagrant reload --provision
To try a new cloud-config, you can also scp the file onto the dev PXE server.
    scp new-config.yml core@NODE_IP:/var/www/html/cloud-config.yml

View File

@@ -41,7 +41,6 @@ systemctl start httpd
# Pixiecore
docker pull danderson/pixiecore
#docker run -v /var/lib/image:/image --net=host danderson/pixiecore -kernel /image/coreos_production_pxe.vmlinuz -initrd /image/coreos_production_pxe_image.cpio.gz --cmdline cloud-config-url=http://$PIXIECORE_SERVER_IP/cloud-config.yml
cat << EOF > /etc/systemd/system/pixiecore.service
[Unit]