Documentation: Add getting started with rkt and docker

Dalton Hubble
2016-02-02 11:35:12 -08:00
parent 54e28591dc
commit 3949e84fae
6 changed files with 191 additions and 85 deletions


@@ -0,0 +1,73 @@
# Getting Started with Docker
Get started with the Config service on your Linux machine with Docker. If you're ready to try [rkt](https://coreos.com/rkt/docs/latest/), see [Getting Started with rkt](getting-started-rkt.md).
In this tutorial, we'll run the Config service (`bootcfg`) to boot and provision a cluster of 5 VMs on the `docker0` bridge. You'll be able to boot etcd clusters, Kubernetes clusters, and more, while emulating different network setups.
## Requirements
Install the dependencies. These examples have been tested on Fedora 23.
sudo dnf install virt-install docker virt-manager
sudo systemctl start docker
Clone the [coreos-baremetal](https://github.com/coreos/coreos-baremetal) source which contains the examples and scripts.
git clone https://github.com/coreos/coreos-baremetal.git
cd coreos-baremetal
Create 5 VM nodes with known hardware attributes. The nodes will be attached to the `docker0` bridge where your containers run.
sudo ./scripts/libvirt create-docker
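If you'd like to confirm the nodes were defined, `virsh` (installed alongside the libvirt tooling) can list the domains; treat this as an optional sanity check, since the exact node names come from the libvirt script.
```sh
# List all libvirt domains, running or not; the 5 new nodes should appear here.
sudo virsh list --all
```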
Download the CoreOS PXE image assets to `assets/coreos`. The examples instruct machines to load these from the Config server, though you could change this.
./scripts/get-coreos
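The script downloads the CoreOS PXE kernel and initramfs images; you can inspect what landed with a plain `ls` (exact file names depend on the CoreOS channel and version fetched).
```sh
# Inspect the downloaded boot assets.
ls -R assets/coreos
```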
## Containers
Run the Config service (`bootcfg`). The `docker0` bridge should assign it the IP 172.17.0.2 (verify with `sudo docker network inspect bridge`).
sudo docker run -p 8080:8080 --rm -v $PWD/examples:/data:Z -v $PWD/assets:/assets:Z quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug -config /data/etcd-docker.yaml
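Before wiring up DHCP, it's worth confirming the service answers on the published port; `boot.ipxe` is the same script the dnsmasq container will point iPXE clients at later.
```sh
# Should return a small iPXE script rather than an error.
curl http://127.0.0.1:8080/boot.ipxe
```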
Take a look at [etcd-docker.yaml](../examples/etcd-docker.yaml) to get an idea of how machines are matched to specifications. Explore some endpoints, which are port-mapped to localhost:8080 (or fetch them with `curl`, as shown after the list).
* [node1's ipxe](http://127.0.0.1:8080/ipxe?uuid=16e7d8a7-bfa9-428b-9117-363341bb330b)
* [node1's Ignition](http://127.0.0.1:8080/ignition?uuid=16e7d8a7-bfa9-428b-9117-363341bb330b)
* [node1's Metadata](http://127.0.0.1:8080/metadata?uuid=16e7d8a7-bfa9-428b-9117-363341bb330b)
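The same endpoints can be fetched from a terminal; the UUID below is node1's hardware UUID from the example profiles.
```sh
# Rendered iPXE script, Ignition config, and metadata for node1, matched by UUID.
curl 'http://127.0.0.1:8080/ipxe?uuid=16e7d8a7-bfa9-428b-9117-363341bb330b'
curl 'http://127.0.0.1:8080/ignition?uuid=16e7d8a7-bfa9-428b-9117-363341bb330b'
curl 'http://127.0.0.1:8080/metadata?uuid=16e7d8a7-bfa9-428b-9117-363341bb330b'
```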
Since the virtual network has no network boot services, use the `dnsmasq` container to set up an example iPXE environment which runs DHCP, DNS, and TFTP. The `dnsmasq` container can help test different network setups.
sudo docker run --rm --cap-add=NET_ADMIN quay.io/coreos/dnsmasq -d -q --dhcp-range=172.17.0.43,172.17.0.99 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-userclass=set:ipxe,iPXE --dhcp-boot=tag:#ipxe,undionly.kpxe --dhcp-boot=tag:ipxe,http://bootcfg.foo:8080/boot.ipxe --log-queries --log-dhcp --dhcp-option=3,172.17.0.1 --address=/bootcfg.foo/172.17.0.2
In this case, it runs a DHCP server allocating IPs to VMs between 172.17.0.43 and 172.17.0.99, resolves bootcfg.foo to 172.17.0.2 (the IP where `bootcfg` runs), and points iPXE clients to `http://bootcfg.foo:8080/boot.ipxe`.
## Verify
Reboot the VMs and use `virt-manager` to watch the consoles.
sudo ./scripts/libvirt poweroff
sudo ./scripts/libvirt start
At this point, the VMs will PXE boot and use Ignition (preferred over cloud-config) to set up a three-node etcd cluster, with the remaining nodes acting as etcd proxies.
On VMs with autologin, verify that etcd2 works across the nodes; a couple of optional cluster-health checks follow the commands below.
systemctl status etcd2
etcdctl set /message hello
etcdctl get /message
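Optional extra checks (standard etcd2/etcdctl commands, run on any cluster member):
```sh
# List cluster members and check overall health.
etcdctl member list
etcdctl cluster-health
```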
Clean up the VMs.
sudo ./scripts/libvirt poweroff
sudo ./scripts/libvirt destroy
## Going Further
Explore the [examples](../examples). Try the `k8s-docker.yaml` config to produce a TLS-authenticated Kubernetes cluster you can access locally with `kubectl`.
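For example, stop the running `bootcfg` container and start it again with the Kubernetes example config; only the `-config` value changes. Once the nodes have provisioned, `kubectl get nodes` against the cluster should list them, assuming you've pointed `kubectl` at the cluster's credentials.
```sh
# Serve the Kubernetes example instead of the etcd example.
sudo docker run -p 8080:8080 --rm -v $PWD/examples:/data:Z -v $PWD/assets:/assets:Z quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug -config /data/k8s-docker.yaml
```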
Add a GPG key to sign all rendered configs.
Learn more about the [config service](bootcfg.md) or adapt an example for your own [physical hardware and network](physical-hardware.md).


@@ -0,0 +1,108 @@
# Getting Started with rkt
Get started with the Config service on your Linux machine with rkt, CNI, and appc.
In this tutorial, we'll run the Config service (`bootcfg`) to boot and provision a cluster of 5 VMs on a CNI bridge (`metal0`). You'll be able to boot etcd clusters, Kubernetes clusters, and more, while emulating different network setups.
## Requirements
**Note**: Currently, rkt does not work with the Fedora/RHEL/CentOS SELinux policies. See the [issue](https://github.com/coreos/rkt/issues/1727) tracking the work and policy changes. To test these examples on your laptop, set SELinux to permissive mode if you are comfortable doing so (`sudo setenforce 0`), and re-enable enforcement when you are finished.
Install [rkt](https://github.com/coreos/rkt/releases), [acbuild](https://github.com/appc/acbuild), and package dependencies. These examples have been tested on Fedora 23.
sudo dnf install virt-install virt-manager
Clone the [coreos-baremetal](https://github.com/coreos/coreos-baremetal) source which contains the examples and scripts.
git clone https://github.com/coreos/coreos-baremetal.git
cd coreos-baremetal
Download the CoreOS PXE image assets to `assets/coreos`. The examples instruct machines to load these from the Config server, though you could change this.
./scripts/get-coreos
Define the `metal0` virtual bridge with [CNI](https://github.com/appc/cni).
cat > /etc/rkt/net.d/20-metal.conf << EOF
{
  "name": "metal0",
  "type": "bridge",
  "bridge": "metal0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "172.15.0.0/16",
    "routes" : [ { "dst" : "172.15.0.0/16" } ]
  }
}
EOF
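Note that the redirection above writes under `/etc`, which requires root. If you're not in a root shell, an equivalent using `sudo tee` is sketched below (same JSON, different invocation).
```sh
sudo mkdir -p /etc/rkt/net.d
sudo tee /etc/rkt/net.d/20-metal.conf > /dev/null << 'EOF'
{
  "name": "metal0",
  "type": "bridge",
  "bridge": "metal0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "172.15.0.0/16",
    "routes" : [ { "dst" : "172.15.0.0/16" } ]
  }
}
EOF
```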
## Application Container
Run the Config service (`bootcfg`) on the `metal0` network, with a known IP that we'll reference via DNS in later steps.
sudo rkt --insecure-options=image fetch docker://quay.io/coreos/bootcfg
sudo rkt run --net=metal0:IP=172.15.0.2 --mount volume=assets,target=/assets --volume assets,kind=host,source=$PWD/assets --mount volume=data,target=/data --volume data,kind=host,source=$PWD/examples quay.io/coreos/bootcfg -- -address=0.0.0.0:8080 -log-level=debug -config /data/etcd-rkt.yaml
Currently, the `--insecure-options=image` flag is needed because Docker images do not support signature verification. We'll ship an ACI soon to address this.
If you get an error about the IP assignment (e.g. the address is already in use), garbage collect old pods.
sudo rkt gc --grace-period=0
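Afterwards, `rkt list` should show no lingering pods holding the address:
```sh
# Should be empty (or at least show no pod still using 172.15.0.2).
sudo rkt list
```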
Take a look at [etcd-rkt.yaml](../examples/etcd-rkt.yaml) to get an idea of how machines are matched to specifications. Explore some endpoints exposed by the service (or fetch them with `curl`, as shown after the list).
* [node1's ipxe](http://172.15.0.2:8080/ipxe?uuid=16e7d8a7-bfa9-428b-9117-363341bb330b)
* [node1's Ignition](http://172.15.0.2:8080/ignition?uuid=16e7d8a7-bfa9-428b-9117-363341bb330b)
* [node1's Metadata](http://172.15.0.2:8080/metadata?uuid=16e7d8a7-bfa9-428b-9117-363341bb330b)
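As with the Docker walkthrough, these can also be fetched with `curl`, e.g. node1's rendered Ignition config:
```sh
curl 'http://172.15.0.2:8080/ignition?uuid=16e7d8a7-bfa9-428b-9117-363341bb330b'
```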
## Client VMs
Create 5 VM nodes with known hardware attributes. The nodes will be attached to the `metal0` bridge where your pods run.
sudo ./scripts/libvirt create-rkt
In your Firewall Configuration, add `metal0` as a trusted interface.
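On Fedora with firewalld, one way to do that from the command line (an equivalent of the Firewall Configuration GUI step; adjust for your own firewall) is:
```sh
# Trust traffic on the metal0 bridge so DHCP/TFTP/HTTP between pods and VMs isn't blocked.
sudo firewall-cmd --zone=trusted --add-interface=metal0
```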
## Network
Since the virtual network has no network boot services, use the `dnsmasq` ACI to set up an example iPXE environment which runs DHCP, DNS, and TFTP. The `dnsmasq` container can help test different network setups.
Build the `dnsmasq.aci` image.
cd contrib/dnsmasq
sudo ./build-aci
Run `dnsmasq.aci` to create a DHCP and TFTP server pointing to the config server.
sudo rkt --insecure-options=image run dnsmasq.aci --net=metal0 -- -d -q --dhcp-range=172.15.0.50,172.15.0.99 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-userclass=set:ipxe,iPXE --dhcp-boot=tag:#ipxe,undionly.kpxe --dhcp-boot=tag:ipxe,http://bootcfg.foo:8080/boot.ipxe --log-queries --log-dhcp --dhcp-option=3,172.15.0.1 --address=/bootcfg.foo/172.15.0.2
In this case, dnsmasq runs a DHCP server allocating IPs to VMs between 172.15.0.50 and 172.15.0.99, resolves bootcfg.foo to 172.15.0.2 (the IP where `bootcfg` runs), and points iPXE clients to `http://bootcfg.foo:8080/boot.ipxe`.
## Verify
Reboot the VMs and use `virt-manager` to watch the consoles.
sudo ./scripts/libvirt poweroff
sudo ./scripts/libvirt start
At this point, the VMs will PXE boot and use Ignition (preferred over cloud-config) to set up a three-node etcd cluster, with the remaining nodes acting as etcd proxies.
On VMs with autologin, verify that etcd2 works across the nodes.
systemctl status etcd2
etcdctl set /message hello
etcdctl get /message
Press ^] three times to stop a rkt pod. Clean up the VMs.
sudo ./scripts/libvirt poweroff
sudo ./scripts/libvirt destroy
## Going Further
Explore the [examples](../examples). Try the `k8s-rkt.yaml` config to produce a TLS-authenticated Kubernetes cluster you can access locally with `kubectl`.
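As with the etcd example, only the `-config` flag changes; a sketch of re-running `bootcfg` with the Kubernetes example:
```sh
# Serve the Kubernetes example instead of the etcd example.
sudo rkt run --net=metal0:IP=172.15.0.2 --mount volume=assets,target=/assets --volume assets,kind=host,source=$PWD/assets --mount volume=data,target=/data --volume data,kind=host,source=$PWD/examples quay.io/coreos/bootcfg -- -address=0.0.0.0:8080 -log-level=debug -config /data/k8s-rkt.yaml
```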
Add a GPG key to sign all rendered configs.
Learn more about the [config service](bootcfg.md) or adapt an example for your own [physical hardware and network](physical-hardware.md).


@@ -1,74 +0,0 @@
# rkt Tutorial
**Like the Docker libvirt setup, the rkt libvirt setup is meant for local development and testing**
Get started on your laptop with `rkt`, `libvirt`, and `virt-manager` (tested on Fedora 23).
Install a [rkt](https://github.com/coreos/rkt/releases) and [acbuild](https://github.com/appc/acbuild/releases) release and get the `libvirt` and `virt-manager` packages.
sudo dnf install virt-manager virt-install
Clone the source.
git clone https://github.com/coreos/coreos-baremetal.git
cd coreos-baremetal
Currently, rkt does not work with the Fedora/RHEL/CentOS SELinux policies. See the [issue](https://github.com/coreos/rkt/issues/1727) tracking the work and policy changes. To test these examples on your laptop, set SELinux to permissive mode if you are comfortable doing so (`sudo setenforce 0`), and re-enable enforcement when you are finished.
Download the CoreOS network boot images to assets/coreos:
cd coreos-baremetal
./scripts/get-coreos
Define the `metal0` virtual bridge with [CNI](https://github.com/appc/cni).
cat > /etc/rkt/net.d/20-metal.conf << EOF
{
  "name": "metal0",
  "type": "bridge",
  "bridge": "metal0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "172.15.0.0/16",
    "routes" : [ { "dst" : "172.15.0.0/16" } ]
  }
}
EOF
Run the config server on `metal0` with the IP address corresponding to the examples (or add DNS).
sudo rkt --insecure-options=image fetch docker://quay.io/coreos/bootcfg
The insecure flag is needed because Docker images do not support signature verification.
sudo rkt run --net=metal0:IP=172.15.0.2 --mount volume=assets,target=/assets --volume assets,kind=host,source=$PWD/assets --mount volume=data,target=/data --volume data,kind=host,source=$PWD/examples quay.io/coreos/bootcfg -- -address=0.0.0.0:8080 -log-level=debug -config /data/etcd-rkt.yaml
If you get an error about the IP already being assigned, garbage collect old pods:
sudo rkt gc --grace-period=0
sudo rkt list # should be empty
Create 5 VM nodes on the `metal0` bridge, which have known "hardware" attributes that match the examples.
sudo ./scripts/libvirt create-rkt
# if you previously tried the docker examples, cleanup first
sudo ./scripts/libvirt shutdown
sudo ./scripts/libvirt destroy
In your firewall settings, configure the `metal0` interface as trusted.
Build a dnsmasq ACI and run it to create a DNS server, TFTP server, and DHCP server which points network boot clients to the config server started above.
cd contrib/dnsmasq
sudo ./build-aci
Run `dnsmasq.aci` to create a DHCP and TFTP server pointing to the config server.
sudo rkt --insecure-options=image run dnsmasq.aci --net=metal0 -- -d -q --dhcp-range=172.15.0.50,172.15.0.99 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-userclass=set:ipxe,iPXE --dhcp-boot=tag:#ipxe,undionly.kpxe --dhcp-boot=tag:ipxe,http://bootcfg.foo:8080/boot.ipxe --log-queries --log-dhcp --dhcp-option=3,172.15.0.1 --address=/bootcfg.foo/172.15.0.2
Reboot the nodes.
sudo ./scripts/libvirt poweroff
sudo ./scripts/libvirt start


@@ -9,21 +9,22 @@ CoreOS on Baremetal contains guides for network booting and configuring CoreOS c
* [Network Booting](Documentation/network-booting.md)
* [Config Service](Documentation/bootcfg.md)
-* [rkt Tutorial](Documentation/rkt.md)
* [Libvirt Guide](Documentation/virtual-hardware.md)
* [Baremetal Guide](Documentation/physical-hardware.md)
## Config Service
-The config service provides network boot (PXE, iPXE, Pixiecore), [Ignition](https://coreos.com/ignition/docs/latest/what-is-ignition.html), and [Cloud-Init](https://github.com/coreos/coreos-cloudinit) configs to machines based on hardware attributes (e.g. UUID, MAC, hostname) or free-form tags.
+The config service renders signed [Ignition](https://coreos.com/ignition/docs/latest/what-is-ignition.html) configs, [Cloud-Init](https://github.com/coreos/coreos-cloudinit) configs, and metadata to machines based on hardware attributes (e.g. UUID, MAC) or arbitrary tags (e.g. os=installed). Network boot endpoints provide PXE, iPXE, and Pixiecore support.
+* Getting Started
+  - [Using rkt](Documentation/getting-started-rkt.md)
+  - [Using docker](Documentation/getting-started-docker.md)
* [API](Documentation/api.md)
* [Flags](Documentation/config.md)
-## Examples
+### Examples
-Get started with the declarative [examples](examples) which network boot different CoreOS clusters. Use the [libvirt script](scripts/libvirt) to quickly setup a network of virtual hardware on your Linux box.
+Boot machines into CoreOS clusters of higher-order systems like Kubernetes or etcd according to the declarative [examples](examples). Use the [libvirt script](scripts/libvirt) to quickly setup a network of virtual hardware on your Linux box.
* Single Node etcd Cluster
* Multi Node etcd Cluster
* Kubernetes Cluster (1 master, 1 worker, 1 etcd)


@@ -42,9 +42,8 @@ groups:
  - name: default
    spec: etcd_proxy
    metadata:
-     etcd_initial_cluster: "node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380"
-   metadata:
      networkd_name: ens3
      networkd_gateway: 172.17.0.1
-     networkd_dns: 172.17.0.3
+     networkd_dns: 172.17.0.3
+     etcd_initial_cluster: "node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380"


@@ -42,9 +42,8 @@ groups:
  - name: default
    spec: etcd_proxy
    metadata:
-     etcd_initial_cluster: "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
-   metadata:
      networkd_name: ens3
      networkd_gateway: 172.15.0.1
-     networkd_dns: 172.15.0.3
+     networkd_dns: 172.15.0.3
+     etcd_initial_cluster: "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"