cmd, docs: Change default config, data, and assets locations

* Change default -config path to /etc/bootcfg.conf
* Change default -data-path to /etc/bootcfg
* Change default -assets-path to /var/bootcfg
This commit is contained in:
Dalton Hubble
2016-03-14 16:22:16 -07:00
parent 75de03bf8f
commit 040665930d
11 changed files with 143 additions and 152 deletions

View File

@@ -8,6 +8,9 @@
#### Changes
* Change default `-config` path to `/etc/bootcfg.conf`
* Change default `-data-path` to `/etc/bootcfg`
* Change default `-assets-path` to `/var/bootcfg`
* Remove HTTP `/spec/id` JSON endpoint
* Require `metadata` values in the YAML config to be strings, lists of strings, or nested maps of strings. Ignore and log other values.

View File

@@ -5,33 +5,28 @@
The aim is to use CoreOS Linux's early-boot capabilities to boot machines into functional cluster members with end to end [Distributed Trusted Computing](https://coreos.com/blog/coreos-trusted-computing.html). PXE, iPXE, and [Pixiecore](https://github.com/danderson/pixiecore/blob/master/README.api.md) endpoints provide support for network booting. The `bootcfg` service can be run as an [application container](https://github.com/appc/spec) with rkt, as a Docker container, or as a binary.
## Usage
Fetch the application container image (ACI) from [Quay](https://quay.io/repository/coreos/bootcfg?tab=tags).
sudo rkt --insecure-options=image fetch docker://quay.io/coreos/bootcfg
Alternately, pull the Docker image.
sudo docker pull quay.io/coreos/bootcfg
The `latest` image corresponds to the most recent commit on master, so choose a tagged [release](https://github.com/coreos/coreos-baremetal/releases) if you require more stability.
## Getting Started
Get started running `bootcfg` with rkt or Docker to network boot libvirt VMs on your laptop into CoreOS clusters.
* [Getting Started with rkt](getting-started-rkt.md)
* [Getting Started with Docker](getting-started-docker.md)
Once you've tried those examples, you're ready to write your own configs.
## Flags
See [flags and variables](config.md)
## API
See [API](api.md)
## Data
A `Store` stores Ignition configs, cloud-configs, and named Specs. By default, `bootcfg` uses a `FileStore` to search a data directory (`-data-path`) for these resources.
Prepare a data directory similar to the [examples](../examples) directory, with `ignition`, `cloud`, and `specs` subdirectories. You might keep this directory under version control since it will define the early boot behavior of your machines.
Prepare `/etc/bootcfg` or a custom `-data-path` with `ignition`, `cloud`, and `specs` subdirectories. You may wish to keep these files under version control. The [examples](../examples) directory is a valid target with some pre-defined configs.
data
├── config.yaml
/etc/bootcfg
├── cloud
│   ├── cloud.yaml
│   └── worker.sh
@@ -45,14 +40,14 @@ Prepare a data directory similar to the [examples](../examples) directory, with
└── worker
└── spec.json
Ignition files can be JSON files or Ignition YAML. Cloud-Configs can be YAML or scripts. Both may contain [Go template](https://golang.org/pkg/text/template/) elements which will be evaluated with [metadata](#groups-and-metadata). For details and examples:
Ignition templates can be JSON files or Ignition YAML. Cloud-Config templates can be YAML or scripts. Both may contain [Go template](https://golang.org/pkg/text/template/) elements which will be evaluated with [metadata](#groups-and-metadata). For details and examples:
* [Ignition Config](ignition.md)
* [Cloud-Config](cloud-config.md)
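For instance, a minimal Cloud-Config template might interpolate group metadata with Go template syntax; the `etcd_name` and `ipv4_address` keys below are hypothetical metadata values that a matching group would need to define.
#cloud-config
coreos:
  etcd2:
    # hypothetical metadata keys, substituted during bootcfg's template evaluation
    name: {{.etcd_name}}
    advertise-client-urls: http://{{.ipv4_address}}:2379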
#### Spec
Specs specify the Ignition config, cloud-config, and PXE boot settings (kernel, options, initrd) of a matched machine.
Specs specify the Ignition config, cloud-config, and network boot settings (kernel, options, initrd) of a matched machine.
{
"id": "etcd_profile",
@@ -70,19 +65,19 @@ Specs specify the Ignition config, cloud-config, and PXE boot settings (kernel,
}
}
The `"boot"` settings will be used to render configs to the network boot programs used in PXE, iPXE, or Pixiecore setups. You may reference remote kernel and initrd assets or [local assets](#assets).
The `"boot"` settings will be used to render configs to network boot programs used in PXE, iPXE, or Pixiecore setups. You may reference remote kernel and initrd assets or [local assets](#assets).
To use cloud-config, set the `cloud-config-url` kernel option to the `bootcfg` [Cloud-Config endpoint](api.md#cloud-config) `/cloud?param=val`, which will render the `cloud_id` file.
To use cloud-config, set the `cloud-config-url` kernel option to reference the `bootcfg` [Cloud-Config endpoint](api.md#cloud-config), which will render the `cloud_id` file.
To use Ignition, set the `coreos.config.url` kernel option to the `bootcfg` [Ignition endpoint](api.md#ignition-config) `/ignition?param=val`, which will render the `ignition_id` file. Be sure to add the `coreos.first_boot` option as well.
To use Ignition, set the `coreos.config.url` kernel option to reference the `bootcfg` [Ignition endpoint](api.md#ignition-config), which will render the `ignition_id` file. Be sure to add the `coreos.first_boot` option as well.
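For example, an Ignition-booted Spec might carry kernel options like the following; the `bootcfg.foo` hostname and the iPXE `${uuid}` variable are illustrative, and any address that reaches `bootcfg` plus matching query parameters will do.
coreos.config.url=http://bootcfg.foo:8080/ignition?uuid=${uuid}
coreos.first_boot
For cloud-config, the analogous option would be `cloud-config-url=http://bootcfg.foo:8080/cloud?uuid=${uuid}`.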
## Groups and Metadata
Groups define a set of required tags which match zero or more machines. Machines matching a group will boot and provision themselves according to the group's `spec` and metadata. Currently, `bootcfg` loads group definitions from a YAML config file specified by the `-config` flag. When running `bootcfg` as a container, it is easiest to keep the config file in the [data](#data) directory so it is mounted and versioned.
Groups define a set of required tags which match zero or more machines. Machines matching a group will boot and provision themselves according to the group's `spec` and metadata.
Define a list of named groups, name the `Spec` that should be applied, add the tags required to match the group, and add your own `metadata` needed to render your Ignition or Cloud configs.
Define a list of groups, name the `Spec` that should be applied, add the tags required to match each group, and add any `metadata` needed to render the templates in your Ignition or Cloud configs.
Here is an example `bootcfg` config.yaml:
Here is an example `/etc/bootcfg.conf` YAML file:
---
api_version: v1alpha1
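# A sketch of a group entry; the tag values, metadata keys, and field names here
# are illustrative, and the full schema is shown by the configs in the examples directory.
groups:
  - name: node1
    spec: etcd_profile
    require:
      uuid: 16e7d8a7-bfa9-428b-9117-363341bb330b
    metadata:
      ipv4_address: 172.15.0.21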
@@ -144,14 +139,7 @@ For example, a `Spec` might refer to a local asset `/assets/coreos/VERSION/coreo
See the [get-coreos](../scripts/README.md#get-coreos) script to quickly download, verify, and move CoreOS assets to `assets`.
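For instance (the version and filename are illustrative), a kernel image dropped into the assets directory is served under the HTTP `/assets` path:
/var/bootcfg/coreos/962.0.0/coreos_production_pxe.vmlinuz  ->  http://bootcfg.foo:8080/assets/coreos/962.0.0/coreos_production_pxe.vmlinuz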
## Endpoints
The [API](api.md) documents the available endpoints.
## Network
`bootcfg` does not implement a DHCP/TFTP server or monitor running instances. If you need a quick DHCP, proxyDHCP, TFTP, or DNS setup, the [coreos/dnsmasq](../contrib/dnsmasq) image can create a suitable network boot environment on a virtual or physical network. Use `--net` to specify a network bridge and `--dhcp-boot` to point clients to `bootcfg`.
## Virtual and Physical Machine Guides
Next, setup a network of virtual machines with libvirt or boot a cluster of physical hardware. Follow the [libvirt guide](virtual-hardware.md) or [physical hardware guide](physical-hardware.md).

View File

@@ -5,27 +5,45 @@ Configuration arguments can be provided as flags or as environment variables.
| flag | variable | example |
|------|----------|---------|
| -address | BOOTCFG_ADDRESS | 0.0.0.0:8080 |
| -config | BOOTCFG_CONFIG | ./data/config.yaml |
| -data-path | BOOTCFG_DATA_PATH | ./data |
| -assets-path | BOOTCFG_ASSETS_PATH | ./assets |
| -address | BOOTCFG_ADDRESS | 127.0.0.1:8080 |
| -config | BOOTCFG_CONFIG | /etc/bootcfg.conf |
| -data-path | BOOTCFG_DATA_PATH | /etc/bootcfg |
| -assets-path | BOOTCFG_ASSETS_PATH | /var/bootcfg |
| -key-ring-path | BOOTCFG_KEY_RING_PATH | ~/.secrets/vault/bootcfg/secring.gpg |
| Disallowed | BOOTCFG_PASSPHRASE | secret passphrase |
| -log-level | BOOTCFG_LOG_LEVEL | critical, error, warning, notice, info, debug |
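For example, the two invocations below are equivalent; the paths shown are the new defaults, so both could also be dropped entirely.
./bin/bootcfg -address=0.0.0.0:8080 -data-path=/etc/bootcfg -assets-path=/var/bootcfg
BOOTCFG_ADDRESS=0.0.0.0:8080 BOOTCFG_DATA_PATH=/etc/bootcfg BOOTCFG_ASSETS_PATH=/var/bootcfg ./bin/bootcfg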
## Files and Directories
| Contents | Default Location |
|-----------|-------------------|
| conf file | /etc/bootcfg.conf |
| configs | /etc/bootcfg/ |
| assets | /var/bootcfg/ |
## Check Version
./bin/bootcfg -version
sudo rkt --insecure-options=image run quay.io/coreos/bootcfg:latest -- -version
sudo docker run quay.io/coreos/bootcfg:latest -version
## Examples
Build the static binary.
./build
Run
Run the binary.
./bin/bootcfg -address=0.0.0.0:8080 -log-level=debug -data-path examples/ -config examples/etcd-rkt.yaml
Run with a fake signing key.
### With [OpenPGP Signing](openpgp.md)
Run the binary with a fake key.
export BOOTCFG_PASSPHRASE=test
./bin/bootcfg -address=0.0.0.0:8080 -key-ring-path bootcfg/sign/fixtures/secring.gpg -data-path examples/ -config examples/etcd-rkt.yaml
Run the ACI with a fake key.
sudo rkt --insecure-options=image run --net=metal0:IP=172.15.0.2 --set-env=BOOTCFG_PASSPHRASE=test --mount volume=secrets,target=/secrets --volume secrets,kind=host,source=$PWD/bootcfg/sign/fixtures --mount volume=assets,target=/var/bootcfg --volume assets,kind=host,source=$PWD/assets --mount volume=data,target=/etc/bootcfg --volume data,kind=host,source=$PWD/examples quay.io/coreos/bootcfg:latest -- -address=0.0.0.0:8080 -config /etc/bootcfg/etcd-rkt.yaml -key-ring-path secrets/secring.gpg
Run the Docker image with a fake key.
sudo docker run -p 8080:8080 --rm --env BOOTCFG_PASSPHRASE=test -v $PWD/examples:/etc/bootcfg:Z -v $PWD/assets:/var/bootcfg:Z -v $PWD/bootcfg/sign/fixtures:/secrets:Z quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug -config /etc/bootcfg/etcd-docker.yaml -key-ring-path secrets/secring.gpg

View File

@@ -23,18 +23,22 @@ Alternately, build a Docker image `coreos/bootcfg:latest`.
sudo ./build-docker
## Run
Run the ACI with rkt on `metal0`.
sudo rkt --insecure-options=image run --net=metal0:IP=172.15.0.2 --mount volume=assets,target=/assets --volume assets,kind=host,source=$PWD/assets --mount volume=data,target=/data --volume data,kind=host,source=$PWD/examples bootcfg.aci -- -address=0.0.0.0:8080 -log-level=debug -config /data/etcd-rkt.yaml
Alternately, run the Docker image on `docker0`.
sudo docker run -p 8080:8080 --rm -v $PWD/examples:/data:Z -v $PWD/assets:/assets:Z coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug -config /data/etcd-docker.yaml
## Version
## Check Version
./bin/bootcfg -version
sudo rkt --insecure-options=image run bootcfg.aci -- -version
sudo docker run coreos/bootcfg:latest -version
## Run
Run the binary.
./bin/bootcfg -address=0.0.0.0:8080 -log-level=debug -data-path examples/ -config examples/etcd-rkt.yaml
Run the ACI with rkt on `metal0`.
sudo rkt --insecure-options=image run --net=metal0:IP=172.15.0.2 --mount volume=assets,target=/var/bootcfg --volume assets,kind=host,source=$PWD/assets --mount volume=data,target=/etc/bootcfg --volume data,kind=host,source=$PWD/examples bootcfg.aci -- -address=0.0.0.0:8080 -log-level=debug -config /etc/bootcfg/etcd-rkt.yaml
Alternately, run the Docker image on `docker0`.
sudo docker run -p 8080:8080 --rm -v $PWD/examples:/etc/bootcfg:Z -v $PWD/assets:/var/bootcfg:Z coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug -config /etc/bootcfg/etcd-docker.yaml

View File

@@ -2,9 +2,9 @@
# Getting Started with Docker
Get started with `bootcfg` on your Linux machine with Docker. If you're ready to try [rkt](https://coreos.com/rkt/docs/latest/), see [Getting Started with rkt](getting-started-rkt.md).
In this tutorial, we'll run `bootcfg` on your Linux machine, with Docker, to network boot and provision a cluster of CoreOS machines. You'll be able to create Kubernetes clusters, etcd clusters, or just install CoreOS and test network setups locally.
In this tutorial, we'll run `bootcfg` to boot and provision a cluster of four VMs on the `docker0` bridge. You'll be able to boot etcd clusters, Kubernetes clusters, and more, while testing different network setups.
If you're ready to try [rkt](https://coreos.com/rkt/docs/latest/), see [Getting Started with rkt](getting-started-rkt.md).
## Requirements
@@ -23,11 +23,6 @@ Clone the [coreos-baremetal](https://github.com/coreos/coreos-baremetal) source
git clone https://github.com/coreos/coreos-baremetal.git
cd coreos-baremetal
Create four VM nodes which have known hardware attributes. The nodes will be attached to the `docker0` bridge where containers run.
sudo ./scripts/libvirt create-docker
sudo virt-manager
Download the CoreOS PXE image assets to `assets/coreos`. The examples instruct machines to load these from `bootcfg`.
./scripts/get-coreos
@@ -35,9 +30,17 @@ Download the CoreOS PXE image assets to `assets/coreos`. The examples instruct m
## Containers
Run `bootcfg` on the default bridge `docker0`. The bridge should assign it the IP 172.17.0.2 (`sudo docker network inspect bridge`).
Run `quay.io/coreos/bootcfg`, which should receive an IP address 172.17.0.2 on the `docker0` bridge.
sudo docker run -p 8080:8080 --rm -v $PWD/examples:/data:Z -v $PWD/assets:/assets:Z quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug -config /data/etcd-docker.yaml
#### Latest
sudo docker run -p 8080:8080 --rm -v $PWD/examples:/etc/bootcfg:Z -v $PWD/assets:/var/bootcfg:Z quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug -config /etc/bootcfg/etcd-docker.yaml
#### Release
Alternately, run one of the recently tagged [releases](https://github.com/coreos/coreos-baremetal/releases).
sudo docker run -p 8080:8080 --rm -v $PWD/examples:/data:Z -v $PWD/assets:/assets:Z quay.io/coreos/bootcfg:v0.2.0 -address=0.0.0.0:8080 -log-level=debug -config /data/etcd-docker.yaml
Take a look at [etcd-docker.yaml](../examples/etcd-docker.yaml) to get an idea of how machines are matched to specifications. Explore some endpoints port mapped to localhost:8080.
@@ -45,22 +48,31 @@ Take a look at [etcd-docker.yaml](../examples/etcd-docker.yaml) to get an idea o
* [node1's Ignition](http://127.0.0.1:8080/ignition?uuid=16e7d8a7-bfa9-428b-9117-363341bb330b)
* [node1's Metadata](http://127.0.0.1:8080/metadata?uuid=16e7d8a7-bfa9-428b-9117-363341bb330b)
Since the virtual network has no network boot services, use the `dnsmasq` container to set up an example iPXE environment which runs DHCP, DNS, and TFTP. The `dnsmasq` container can help test different network setups.
## Network
Since the virtual network has no network boot services, use the `dnsmasq` image to create an iPXE network boot environment which runs DHCP, DNS, and TFTP.
sudo docker run --rm --cap-add=NET_ADMIN quay.io/coreos/dnsmasq -d -q --dhcp-range=172.17.0.43,172.17.0.99 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-userclass=set:ipxe,iPXE --dhcp-boot=tag:#ipxe,undionly.kpxe --dhcp-boot=tag:ipxe,http://bootcfg.foo:8080/boot.ipxe --log-queries --log-dhcp --dhcp-option=3,172.17.0.1 --address=/bootcfg.foo/172.17.0.2
In this case, it runs a DHCP server allocating IPs to VMs between 172.17.0.43 and 172.17.0.99, resolves bootcfg.foo to 172.17.0.2 (the IP where `bootcfg` runs), and points iPXE clients to `http://bootcfg.foo:8080/boot.ipxe`.
In this case, dnsmasq runs a DHCP server allocating IPs to VMs between 172.17.0.43 and 172.17.0.99, resolves `bootcfg.foo` to 172.17.0.2 (the IP where `bootcfg` runs), and points iPXE clients to `http://bootcfg.foo:8080/boot.ipxe`.
## Verify
## Client VMs
Reboot the VM machines and use `virt-manager` to watch the console.
Create VM nodes which have known hardware attributes. The nodes will be attached to the `docker0` bridge where Docker's containers run.
sudo ./scripts/libvirt create-docker
sudo virt-manager
You can use `virt-manager` to watch the console and reboot the VMs with
sudo ./scripts/libvirt poweroff
sudo ./scripts/libvirt start
At this point, the VMs will PXE boot and use Ignition (preferred over cloud config) to set up a three node etcd cluster, with other nodes behaving as etcd proxies.
## Verify
The example spec added autologin so you can check that etcd works between nodes.
The VMs should network boot and provision themselves into a three node etcd cluster, with other nodes behaving as etcd proxies.
The example spec added autologin so you can verify that etcd works between nodes.
systemctl status etcd2
etcdctl set /message hello
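On another node's console, reading the key back shows the value replicated across the cluster.
etcdctl get /message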

View File

@@ -1,15 +1,10 @@
# Getting Started with rkt
Get started with `bootcfg` on your Linux machine with rkt, CNI, and appc.
In this tutorial, we'll run `bootcfg` to boot and provision a cluster of four VMs on a CNI bridge (`metal0`). You'll be able to boot etcd clusters, Kubernetes clusters, and more, while testing different network setups.
In this tutorial, we'll run `bootcfg` on your Linux machine with `rkt` and `CNI`, to network boot and provision a cluster of CoreOS machines. You'll be able to create Kubernetes clusters, etcd clusters, or just install CoreOS and test network setups locally.
## Requirements
Install [rkt](https://github.com/coreos/rkt/releases) and [acbuild](https://github.com/appc/acbuild/releases) from the latest releases. For rkt, see the setup and privilege separation [docs](https://coreos.com/rkt/docs/latest/trying-out-rkt.html). For acbuild:
tar xzvf acbuild.tar.gz
sudo ln -s /path/to/acbuild /usr/local/bin/acbuild
Install [rkt](https://github.com/coreos/rkt/releases) and [acbuild](https://github.com/appc/acbuild/releases) from the latest releases ([example script](https://github.com/dghubble/phoenix/blob/master/scripts/fedora/sources.sh)). Optionally set up rkt [privilege separation](https://coreos.com/rkt/docs/latest/trying-out-rkt.html).
Install package dependencies.
@@ -19,7 +14,7 @@ Install package dependencies.
# Debian/Ubuntu
sudo apt-get install virt-manager virtinst qemu-kvm systemd-container
**Note**: Currently, rkt does not integrate with SELinux on Fedora. As a workaround, temporarily set enforcement to permissive if you are comfortable (`sudo setenforce Permissive`). Check the rkt [distribution notes](https://github.com/coreos/rkt/blob/master/Documentation/distributions.md) or track the [issue](https://github.com/coreos/rkt/issues/1727).
**Note**: rkt does not yet integrate with SELinux on Fedora. As a workaround, temporarily set enforcement to permissive if you are comfortable (`sudo setenforce Permissive`). Check the rkt [distribution notes](https://github.com/coreos/rkt/blob/master/Documentation/distributions.md) or see the tracking [issue](https://github.com/coreos/rkt/issues/1727).
Clone the [coreos-baremetal](https://github.com/coreos/coreos-baremetal) source which contains the examples and scripts.
@@ -57,24 +52,31 @@ On Fedora, add the `metal0` interface to the trusted zone in your firewall confi
## Application Container
Trust the CoreOS App Signing [primary key](https://coreos.com/dist/pubkeys/app-signing-pubkey.gpg) for image signature verification, after cross referencing.
#### Latest
Run `quay.io/coreos/bootcfg:latest` to get the latest commit from master as an ACI from [Quay.io](https://quay.io/repository/coreos/bootcfg).
sudo rkt --insecure-options=image run --net=metal0:IP=172.15.0.2 --mount volume=assets,target=/var/bootcfg --volume assets,kind=host,source=$PWD/assets --mount volume=data,target=/etc/bootcfg --volume data,kind=host,source=$PWD/examples quay.io/coreos/bootcfg:latest -- -address=0.0.0.0:8080 -log-level=debug -config /etc/bootcfg/etcd-rkt.yaml
Note: The insecure flag is needed for this case, since Docker images (docker2aci) don't support signatures.
#### Release
Alternately, run one of the recently tagged [releases](https://github.com/coreos/coreos-baremetal/releases).
Trust the [CoreOS App Signing Key](https://coreos.com/dist/pubkeys/app-signing-pubkey.gpg) for image signature verification.
sudo rkt trust --prefix coreos.com/bootcfg
# Fingerprint 18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E
Run `bootcfg` on the `metal0` network, with a known IP we'll have DNS point to.
Run a `bootcfg` release on the `metal0` network, with a known IP.
sudo rkt run --net=metal0:IP=172.15.0.2 --mount volume=assets,target=/assets --volume assets,kind=host,source=$PWD/assets --mount volume=data,target=/data --volume data,kind=host,source=$PWD/examples coreos.com/bootcfg:v0.2.0 -- -address=0.0.0.0:8080 -log-level=debug -config /data/etcd-rkt.yaml
If you'd like to try the latest build from master, fetch an ACI from Quay.io (uses docker2aci) and run `quay.io/coreos/bootcfg`.
sudo rkt --insecure-options=image fetch docker://quay.io/coreos/bootcfg
Note: The insecure flag is needed for this case, since Docker images don't support signatures.
If you get an error about the IP assignment, garbage collect old pods.
sudo rkt gc --grace-period=0
./scripts/rkt-gc-force
Take a look at [etcd-rkt.yaml](../examples/etcd-rkt.yaml) to get an idea of how machines are matched to specifications. Explore some endpoints exposed by the service.
@@ -82,16 +84,9 @@ Take a look at [etcd-rkt.yaml](../examples/etcd-rkt.yaml) to get an idea of how
* [node1's Ignition](http://172.15.0.2:8080/ignition?uuid=16e7d8a7-bfa9-428b-9117-363341bb330b)
* [node1's Metadata](http://172.15.0.2:8080/metadata?uuid=16e7d8a7-bfa9-428b-9117-363341bb330b)
## Client VMs
Create four VM nodes which have known hardware attributes. The nodes will be attached to the `metal0` bridge where your pods run.
sudo ./scripts/libvirt create-rkt
sudo virt-manager
## Network
Since the virtual network has no network boot services, use the `dnsmasq` ACI to create an iPXE network boot environment which runs DHCP, DNS, and TFTP. The `dnsmasq` container can help test different network setups.
Since the virtual network has no network boot services, use the `dnsmasq` ACI to create an iPXE network boot environment which runs DHCP, DNS, and TFTP.
Build the `dnsmasq.aci` ACI.
@@ -99,22 +94,29 @@ Build the `dnsmasq.aci` ACI.
./get-tftp-files
sudo ./build-aci
Run `dnsmasq.aci` to create a DHCP and TFTP server pointing to config server.
Run `dnsmasq.aci`.
sudo rkt --insecure-options=image run dnsmasq.aci --net=metal0:IP=172.15.0.3 -- -d -q --dhcp-range=172.15.0.50,172.15.0.99 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-userclass=set:ipxe,iPXE --dhcp-boot=tag:#ipxe,undionly.kpxe --dhcp-boot=tag:ipxe,http://bootcfg.foo:8080/boot.ipxe --log-queries --log-dhcp --dhcp-option=3,172.15.0.1 --address=/bootcfg.foo/172.15.0.2
In this case, dnsmasq runs a DHCP server allocating IPs to VMs between 172.15.0.50 and 172.15.0.99, resolves bootcfg.foo to 172.15.0.2 (the IP where `bootcfg` runs), and points iPXE clients to `http://bootcfg.foo:8080/boot.ipxe`.
In this case, dnsmasq runs a DHCP server allocating IPs to VMs between 172.15.0.50 and 172.15.0.99, resolves `bootcfg.foo` to 172.15.0.2 (the IP where `bootcfg` runs), and points iPXE clients to `http://bootcfg.foo:8080/boot.ipxe`.
## Verify
## Client VMs
Reboot the VM machines and use `virt-manager` to watch the console.
Create VM nodes which have known hardware attributes. The nodes will be attached to the `metal0` bridge where your pods run.
sudo ./scripts/libvirt create-rkt
sudo virt-manager
You can use `virt-manager` to watch the console and reboot the VMs with
sudo ./scripts/libvirt poweroff
sudo ./scripts/libvirt start
At this point, the VMs will PXE boot and use Ignition (preferred over cloud config) to set up a three node etcd cluster, with other nodes behaving as etcd proxies.
## Verify
The example spec added autologin so you can check that etcd works between nodes.
The VMs should network boot and provision themselves into a three node etcd cluster, with other nodes behaving as etcd proxies.
The example spec added autologin so you can verify that etcd works between nodes.
systemctl status etcd2
etcdctl set /message hello
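On another node's console, reading the key back shows the value replicated across the cluster.
etcdctl get /message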

View File

@@ -19,19 +19,6 @@ In production, mount your signing keyring and source the passphrase from a [Kube
To try it locally, you may use the test fixture keyring. **Warning: The test fixture keyring is for examples only.**
**Binary**
export BOOTCFG_PASSPHRASE=test
./bin/bootcfg -address=0.0.0.0:8080 -key-ring-path sign/fixtures/secring.gpg -config examples/etcd-rkt.yaml -data-path examples/
**rkt**
sudo rkt run --set-env=BOOTCFG_PASSPHRASE=test --mount volume=secrets,target=/secrets --volume secrets,kind=host,source=$PWD/sign/fixtures --mount volume=assets,target=/assets --volume assets,kind=host,source=$PWD/assets --mount volume=data,target=/data --volume data,kind=host,source=$PWD/examples quay.io/coreos/bootcfg -- -address=0.0.0.0:8080 -config /data/etcd-rkt.yaml -key-ring-path secrets/secring.gpg
**docker**
sudo docker run -p 8080:8080 --rm --env BOOTCFG_PASSPHRASE=test -v $PWD/examples:/data:Z -v $PWD/assets:/assets:Z -v $PWD/sign/fixtures:/secrets:Z quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -config=/data/etcd-docker.yaml -key-ring-path secrets/secring.gpg
## Verify
Verify a signature response and config response from the command line using the public key. Notice that most configs have a trailing newline.
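As a sketch, assuming the detached signature endpoint is the config endpoint with an `.asc` suffix (see the [API](api.md) docs for the exact paths) and that the matching public key is already in your keyring:
curl -o ignition 'http://127.0.0.1:8080/ignition?uuid=16e7d8a7-bfa9-428b-9117-363341bb330b'
curl -o ignition.asc 'http://127.0.0.1:8080/ignition.asc?uuid=16e7d8a7-bfa9-428b-9117-363341bb330b'
gpg --verify ignition.asc ignition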

View File

@@ -16,17 +16,23 @@ CoreOS on Baremetal contains guides for network booting and configuring CoreOS c
* [Getting Started with rkt](Documentation/getting-started-rkt.md)
* [Getting Started with Docker](Documentation/getting-started-docker.md)
* [bootcfg](Documentation/bootcfg.md)
* [Groups](Documentation/bootcfg.md#groups-and-metadata)
* [Specs](Documentation/bootcfg.md#spec)
* [Ignition](Documentation/ignition.md)
* [Cloud-Config](Documentation/cloud-config.md)
* [Groups](Documentation/bootcfg.md#groups-and-metadata)
* [OpenPGP Signing](Documentation/openpgp.md)
* [Flags](Documentation/config.md)
* [API](Documentation/api.md)
* [Troubleshooting](Documentation/troubleshooting.md)
* [Hacking](Documentation/dev/develop.md)
### Examples
Use the [examples](examples) to boot machines into CoreOS clusters of higher-order systems, like Kubernetes. Quickly set up a network of virtual hardware on your Linux box for testing with [libvirt](scripts/README.md#libvirt).
* TLS-auth Kubernetes cluster (1 master, 1 worker, 1 etcd)
* Multi Node etcd cluster
* Install CoreOS to disk with followup Ignition stages
* Multi-node Kubernetes cluster with TLS
* Multi-node etcd cluster
* Install CoreOS to disk and provision with Ignition
* GRUB Netboot CoreOS
* PXE Boot CoreOS with a root fs
* PXE Boot CoreOS

View File

@@ -32,8 +32,8 @@ func main() {
help bool
}{}
flag.StringVar(&flags.address, "address", "127.0.0.1:8081", "gRPC listen address")
flag.StringVar(&flags.configPath, "config", "./data/config.yaml", "Path to config file")
flag.StringVar(&flags.dataPath, "data-path", "./data", "Path to data directory")
flag.StringVar(&flags.configPath, "config", "/etc/bootcfg.conf", "Path to config file")
flag.StringVar(&flags.dataPath, "data-path", "/etc/bootcfg", "Path to data directory")
// subcommands
flag.BoolVar(&flags.version, "version", false, "print version and exit")
flag.BoolVar(&flags.help, "help", false, "print usage and exit")

View File

@@ -35,9 +35,9 @@ func main() {
help bool
}{}
flag.StringVar(&flags.address, "address", "127.0.0.1:8080", "HTTP listen address")
flag.StringVar(&flags.configPath, "config", "./data/config.yaml", "Path to config file")
flag.StringVar(&flags.dataPath, "data-path", "./data", "Path to data directory")
flag.StringVar(&flags.assetsPath, "assets-path", "./assets", "Path to static assets")
flag.StringVar(&flags.configPath, "config", "/etc/bootcfg.conf", "Path to config file")
flag.StringVar(&flags.dataPath, "data-path", "/etc/bootcfg", "Path to data directory")
flag.StringVar(&flags.assetsPath, "assets-path", "/var/bootcfg", "Path to static assets")
flag.StringVar(&flags.keyRingPath, "key-ring-path", "", "Path to a private keyring file")
// available log levels https://godoc.org/github.com/coreos/pkg/capnslog#LogLevel
flag.StringVar(&flags.logLevel, "log-level", "info", "Set the logging level")
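The `BOOTCFG_*` environment variables documented in config.md are conventionally layered over flag defaults like these after parsing; a minimal sketch of that pattern, assuming the `coreos/pkg` flagutil helper is in use:
package main

import (
    "flag"
    "log"

    "github.com/coreos/pkg/flagutil"
)

func main() {
    dataPath := flag.String("data-path", "/etc/bootcfg", "Path to data directory")
    assetsPath := flag.String("assets-path", "/var/bootcfg", "Path to static assets")
    flag.Parse()
    // Fill flags that were not set on the command line from BOOTCFG_DATA_PATH,
    // BOOTCFG_ASSETS_PATH, and so on.
    if err := flagutil.SetFlagsFromEnv(flag.CommandLine, "BOOTCFG"); err != nil {
        log.Fatal(err)
    }
    log.Printf("data-path=%s assets-path=%s", *dataPath, *assetsPath)
}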

View File

@@ -5,15 +5,16 @@ These examples show declarative configurations for network booting libvirt VMs i
| Name | Description | CoreOS Version | FS | Reference |
|------------|-------------|----------------|----|-----------|
| pxe | CoreOS alpha node | alpha/962.0.0 | RAM | [reference](https://coreos.com/os/docs/latest/booting-with-ipxe.html) |
| pxe-disk | CoreOS alpha node, partition disk and root fs | alpha/962.0.0 | Disk | [reference](https://coreos.com/os/docs/latest/booting-with-ipxe.html) |
| pxe | CoreOS via iPXE | alpha/962.0.0 | RAM | [reference](https://coreos.com/os/docs/latest/booting-with-ipxe.html) |
| grub | CoreOS via GRUB2 Netboot | | | |
| pxe-disk | CoreOS via iPXE, with a root filesystem | alpha/962.0.0 | Disk | [reference](https://coreos.com/os/docs/latest/booting-with-ipxe.html) |
| coreos-install | 2-stage Ignition: Install CoreOS, provision etcd cluster | alpha/962.0.0 | Disk | [reference](https://coreos.com/os/docs/latest/installing-to-disk.html) |
| etcd-rkt, etcd-docker | Cluster with 3 etcd nodes, 2 proxies | beta/899.6.0 | RAM | [reference](https://coreos.com/os/docs/latest/cluster-architectures.html) |
| k8s-rkt, k8s-docker | Kubernetes cluster with 1 master, 1 worker, 1 dedicated etcd node, TLS-authentication | beta/899.6.0 | RAM | [reference](https://github.com/coreos/coreos-kubernetes) |
| coreos-install | 2-stage Ignition: Install CoreOS, provision etcd cluster | alpha/962.0.0 | Disk | [reference](https://coreos.com/os/docs/latest/installing-to-disk.html) |
## Experimental
These CoreOS clusters are experimental and have **NOT** been hardened for production yet. They demonstrate Ignition (initrd) and cloud-init provisioning of higher order clusters.
These CoreOS clusters are experimental and have **NOT** been hardened for production yet. They demonstrate Ignition and cloud-init provisioning of higher order clusters.
## Getting Started
@@ -22,36 +23,6 @@ Get started running the `bootcfg` on your Linux machine to boot clusters of libv
* [Getting Started with rkt](../Documentation/getting-started-rkt.md)
* [Getting Started with Docker](../Documentation/getting-started-docker.md)
## Physical Hardware
Run `bootcfg` to boot and configure physical machines (for testing). Update the network values in the `*.yaml` config to match your hardware and network. Generate TLS assets if required for the example (e.g. Kubernetes).
Continue to the [Physical Hardware Guide](../Documentation/physical-hardware.md) for details.
## Examples
See the Getting Started with [rkt](getting-started-rkt.md) or [Docker](getting-started-docker.md) for a walk-through.
### rkt
etcd cluster with 3 nodes on `metal0`, other nodes act as proxies.
sudo rkt run --net=metal0:IP=172.15.0.2 --mount volume=assets,target=/assets --volume assets,kind=host,source=$PWD/assets --mount volume=data,target=/data --volume data,kind=host,source=$PWD/examples quay.io/coreos/bootcfg -- -address=0.0.0.0:8080 -log-level=debug -config /data/etcd-rkt.yaml
Kubernetes cluster with one master, one worker, and one dedicated etcd on `metal0`.
sudo rkt run --net=metal0:IP=172.15.0.2 --mount volume=assets,target=/assets --volume assets,kind=host,source=$PWD/assets --mount volume=data,target=/data --volume data,kind=host,source=$PWD/examples quay.io/coreos/bootcfg -- -address=0.0.0.0:8080 -log-level=debug -config /data/k8s-rkt.yaml
### Docker
etcd cluster with 3 nodes on `docker0`, other nodes act as proxies.
sudo docker run -p 8080:8080 --rm -v $PWD/examples:/data:Z -v $PWD/assets:/assets:Z quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug -config /data/etcd-docker.yaml
Kubernetes cluster with one master, one worker, and one dedicated etcd on `docker0`.
sudo docker run -p 8080:8080 --rm -v $PWD/examples:/data:Z -v $PWD/assets:/assets:Z quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug -config /data/k8s-docker.yaml
## Kubernetes
The Kubernetes cluster examples create a TLS-authenticated Kubernetes cluster with 1 master node, 1 worker node, and 1 etcd node, running without a disk.