Documentation: Refresh documentation for v0.3.0

This commit is contained in:
Dalton Hubble
2016-04-13 15:44:07 -07:00
parent fcf68ce69c
commit d4afc6ae97
11 changed files with 71 additions and 84 deletions

View File

@@ -2,7 +2,7 @@
## Latest
## v0.3.0
## v0.3.0 (2016-04-14)
#### Features
@@ -25,7 +25,7 @@
- Rename Group field `spec` to `profile`
- Rename Group field `require` to `selector` (#147)
* Allow asset serving to be disabled with `-assets-path=""` (#118)
* Allow `selector` key/value pairs to be used in Ignition and Cloud config teplates (#64)
* Allow `selector` key/value pairs to be used in Ignition and Cloud config templates (#64)
* Change default `-data-path` to `/var/lib/bootcfg` (#132)
* Change default `-assets-path` to `/var/lib/bootcfg/assets` (#132)
* Change the default assets download location to `examples/assets`

View File

@@ -140,12 +140,12 @@ OpenPGP signature endpoints serve detached binary and ASCII armored signatures
| Endpoint | Signature Endpoint | ASCII Signature Endpoint |
|------------|--------------------|-------------------------|
| Ignition | `http://bootcfg.foo/ignition.sig` | `http://bootcfg.foo/ignition.asc` |
| Cloud-init | `http://bootcfg.foo/cloud.sig` | `http://bootcfg.foo/cloud.asc` |
| iPXE | `http://bootcfg.foo/boot.ipxe.sig` | `http://bootcfg.foo/boot.ipxe.asc` |
| iPXE | `http://bootcfg.foo/ipxe.sig` | `http://bootcfg.foo/ipxe.asc` |
| Pixiecore | `http://bootcfg.foo/pixiecore/v1/boot.sig/:MAC` | `http://bootcfg.foo/pixiecore/v1/boot.asc/:MAC` |
| GRUB2 | `http://bootcfg.foo/grub.sig` | `http://bootcfg.foo/grub.asc` |
| Ignition | `http://bootcfg.foo/ignition.sig` | `http://bootcfg.foo/ignition.asc` |
| Cloud-Config | `http://bootcfg.foo/cloud.sig` | `http://bootcfg.foo/cloud.asc` |
| Metadata | `http://bootcfg.foo/metadata.sig` | `http://bootcfg.foo/metadata.asc` |
Get a config and its detached ASCII armored signature.
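For example, fetch a rendered config and its armored signature, then verify the pair (a sketch; the domain and MAC query value are illustrative, and the signing public key is assumed to already be in your keyring):

curl 'http://bootcfg.foo/ignition?mac=52-54-00-a1-9c-ae' -o ignition.json
curl 'http://bootcfg.foo/ignition.asc?mac=52-54-00-a1-9c-ae' -o ignition.json.asc
gpg --verify ignition.json.asc ignition.json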
@@ -169,7 +169,7 @@ NO+p24BL3PHZyKw0nsrm275C913OxEVgnNZX7TQltaweW23Cd1YBNjcfb3zv+Zo=
## Assets
If you need to serve static assets (e.g. kernel, initrd), `bootcfg` can serve arbitrary assets from `-assets-path` at `/assets/`.
If you need to serve static assets (e.g. kernel, initrd), `bootcfg` can serve arbitrary assets from the `-assets-path`.
bootcfg.foo/assets/
└── coreos
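With the layout above, machines and admins can fetch images directly from the assets endpoint; for example (the CoreOS version is illustrative):

curl -O http://bootcfg.foo/assets/coreos/899.6.0/coreos_production_pxe.vmlinuz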

View File

@@ -1,13 +1,13 @@
# bootcfg
`bootcfg` is a HTTP and gRPC service that renders signed [Ignition configs](https://coreos.com/ignition/docs/latest/what-is-ignition.html), [cloud-configs](https://coreos.com/os/docs/latest/cloud-config.html), network boot configs, and metadata to machines to create clusters of CoreOS machines. `bootcfg` maintains **Group** definitions which match machines to *profiles* based on labels (e.g. UUID, MAC address, stage, region). A **Profile** is a named set of config templates (e.g. iPXE, GRUB, Ignition config, Cloud-Config). The aim is to use CoreOS Linux's early-boot capabilities to provision CoreOS machines into clusters.
`bootcfg` is an HTTP and gRPC service that renders signed [Ignition configs](https://coreos.com/ignition/docs/latest/what-is-ignition.html), [cloud-configs](https://coreos.com/os/docs/latest/cloud-config.html), network boot configs, and metadata to machines to create CoreOS clusters. `bootcfg` maintains **Group** definitions which match machines to *profiles* based on labels (e.g. UUID, MAC address, stage, region). A **Profile** is a named set of config templates (e.g. iPXE, GRUB, Ignition config, Cloud-Config). The aim is to use CoreOS Linux's early-boot capabilities to provision CoreOS machines.
Network boot endpoints provide PXE, iPXE, GRUB, and [Pixiecore](https://github.com/danderson/pixiecore/blob/master/README.api.md) support. The `bootcfg` service can be run as binary, as an [application container](https://github.com/appc/spec) with rkt, or as a Docker container.
Network boot endpoints provide PXE, iPXE, GRUB, and [Pixiecore](https://github.com/danderson/pixiecore/blob/master/README.api.md) support. `bootcfg` can be deployed as a binary, as an [appc](https://github.com/appc/spec) container with rkt, or as a Docker container.
## Getting Started
Get started running `bootcfg` on your laptop, with rkt or Docker, to network boot libvirt VMs into CoreOS clusters.
Get started running `bootcfg` on your Linux machine, with rkt or Docker, to network boot virtual or physical machines into CoreOS clusters.
* [Getting Started with rkt](getting-started-rkt.md)
* [Getting Started with Docker](getting-started-docker.md)
@@ -22,11 +22,11 @@ See [API](api.md)
## Data
A `Store` stores machine Profiles, Groups, Ignition configs, and cloud-configs. By default, `bootcfg` uses a `FileStore` to search a `-data-path` for these resources ([#133](https://github.com/coreos/coreos-baremetal/issues/133)).
A `Store` stores machine Profiles, Groups, Ignition configs, and cloud-configs. By default, `bootcfg` uses a `FileStore` to search a `-data-path` for these resources.
Prepare `/var/lib/bootcfg` with `profile`, `groups`, `ignition`, and `cloud` subdirectories. You may wish to keep these files under version control. The [examples](../examples) directory is a valid target with some pre-defined configs and templates.
/etc/bootcfg
/var/lib/bootcfg
├── cloud
│   ├── cloud.yaml
│   └── worker.sh
@@ -42,14 +42,14 @@ Prepare `/var/lib/bootcfg` with `profile`, `groups`, `ignition`, and `cloud` sub
└── etcd.json
└── worker.json
Ignition templates can be JSON or YAML files. Cloud-Config templates can be a script or YAML file. Both may contain may contain [Go template](https://golang.org/pkg/text/template/) elements which will be executed machine group [metadata](#groups-and-metadata). For details and examples:
Ignition templates can be JSON or YAML files (rendered as JSON). Cloud-Config templates can be a script or YAML file. Both may contain [Go template](https://golang.org/pkg/text/template/) elements which will be executed with machine Group [metadata](#groups-and-metadata). For details and examples:
* [Ignition Config](ignition.md)
* [Cloud-Config](cloud-config.md)
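For instance, a minimal Ignition template in YAML might use Go template elements to substitute Group metadata (a sketch; the `ignition_version` key and metadata names here are illustrative, not required names):

---
ignition_version: 1
networkd:
  units:
    - name: 00-eth0.network
      contents: |
        [Match]
        Name=eth0
        [Network]
        Gateway={{.networkd_gateway}}
        Address={{.networkd_address}}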
### Profiles
Profiles specify the Ignition config, Cloud-Config, and network boot config to be used by machine(s).
Profiles specify an Ignition config, Cloud-Config, and network boot config.
{
"id": "etcd",
@@ -94,7 +94,7 @@ Create a group definition with a `Profile` to be applied, selectors for matching
}
}
While `/var/lib/bootcfg/groups/proxy.json` is the default machine group, since it has no selectors.
Meanwhile, `/var/lib/bootcfg/groups/proxy.json` acts as the default machine group since it has no selectors.
{
"name": "etcd-proxy",
@@ -113,10 +113,10 @@ Some labels are normalized or parsed specially because they have reserved semant
* `uuid` - machine UUID
* `mac` - network interface physical address (MAC address)
* `hostname`
* `serial`
* `hostname` - hostname reported by a network boot program
* `serial` - serial reported by a network boot program
Client's booted with the `/ipxe.boot` endpoint will introspect and make a request to `/ipxe` with the `uuid`, `mac`, `hostname`, and `serial` value as query arguments. Pixiecore which can only detect MAC addresss and cannot substitute it into later config requests ([issue](https://github.com/coreos/coreos-baremetal/issues/36)).
Clients booted with the `/ipxe.boot` endpoint will introspect and make a request to `/ipxe` with the `uuid`, `mac`, `hostname`, and `serial` values as query arguments. Pixiecore can only detect MAC addresses and cannot substitute them into later config requests ([issue](https://github.com/coreos/coreos-baremetal/issues/36)).
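The boot script served at `/ipxe.boot` looks roughly like the following iPXE sketch, chaining to `/ipxe` with the introspected values:

#!ipxe
chain ipxe?uuid=${uuid}&mac=${net0/mac:hexhyp}&hostname=${hostname}&serial=${serial}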
## Assets

View File

@@ -1,11 +1,13 @@
# Cloud Config
**Note:** We recommend you migrate to [Ignition](https://coreos.com/blog/introducing-ignition.html).
CoreOS Cloud-Config is a system for configuring machines with a Cloud-Config file or executable script from user-data. Cloud-Config runs in userspace on each boot and implements a subset of the [cloud-init spec](http://cloudinit.readthedocs.org/en/latest/topics/format.html#cloud-config-data). See the cloud-config [docs](https://coreos.com/os/docs/latest/cloud-config.html) for details.
Cloud-Config template files can be added in the `/etc/bootcfg/cloud` directory or in a `cloud` subdirectory of a custom `-data-path`. Template files may contain [Go template](https://golang.org/pkg/text/template/) elements which will be evaluated with `metadata` when served.
Cloud-Config template files can be added in `/var/lib/bootcfg/cloud` or in a `cloud` subdirectory of a custom `-data-path`. Template files may contain [Go template](https://golang.org/pkg/text/template/) elements which will be evaluated with Group `metadata` when served.
data/
/var/lib/bootcfg
├── cloud
│   ├── cloud.yaml
│   └── script.sh
@@ -14,21 +16,6 @@ Cloud-Config template files can be added in the `/etc/bootcfg/cloud` directory o
Reference a Cloud-Config in a [Profile](bootcfg.md#profiles). When PXE booting, use the kernel option `cloud-config-url` to point to `bootcfg` [cloud-config endpoint](api.md#cloud-config).
profile.json:
{
"id": "worker_profile",
"cloud_id": "worker.yaml",
"ignition_id": "",
"boot": {
"kernel": "/assets/coreos/899.6.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/899.6.0/coreos_production_pxe_image.cpio.gz"],
"cmdline": {
"cloud-config-url": "http://bootcfg.foo/cloud?uuid=${uuid}&mac=${net0/mac:hexhyp}"
}
}
}
## Configs
Here is an example Cloud-Config which starts some units and writes a file.
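The full example is elided from this diff; a minimal sketch of a Cloud-Config in that spirit:

#cloud-config
coreos:
  units:
    - name: etcd2.service
      command: start
write_files:
  - path: /etc/motd
    content: |
      provisioned by bootcfg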
@@ -57,6 +44,6 @@ The Cloud-Config [Validator](https://coreos.com/validate/) is useful for checkin
## Comparison with Ignition
Cloud-Config starts after userspace has started, on every boot.Ignition starts before PID 1 and only runs on the first boot. Ignition favors immutable infrastructure.
Cloud-Config starts after userspace has started, on every boot. Ignition starts before PID 1 and only runs on the first boot. Ignition favors immutable infrastructure.
Ignition is favored as the eventual replacement for CoreOS Cloud-Config. Tasks often only need to be run once and can be performed more easily before systemd has started (e.g. configuring networking). Ignition can write service units for tasks that need to be run on each boot. Instead of depending on Cloud-Config variable substitution, leverage systemd's EnvironmentFile expansion to start units with a metadata file from a source of truth.
Ignition is favored as the replacement for CoreOS Cloud-Config. Tasks often only need to be run once and can be performed more easily before systemd has started (e.g. configuring networking). Ignition can write service units for tasks that need to be run on each boot. Instead of depending on Cloud-Config variable substitution, Ignition favors using systemd's EnvironmentFile expansion to start units with a metadata file from a metadata source.
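For example, a unit can load variables from a metadata file and expand them at start time (a sketch; the metadata file path is hypothetical):

[Unit]
Description=Example service configured from metadata

[Service]
EnvironmentFile=/run/metadata/bootcfg
ExecStart=/usr/bin/echo "running in region ${REGION}"

[Install]
WantedBy=multi-user.target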

View File

@@ -1,14 +1,13 @@
# Getting Started with Docker
In this tutorial, we'll run `bootcfg` on your Linux machine with Docker to network boot and provision a cluster of CoreOS machines. You'll be able to create Kubernetes clustes, etcd clusters, or just install CoreOS and test network setups locally.
In this tutorial, we'll run `bootcfg` on your Linux machine with Docker to network boot and provision a cluster of CoreOS machines locally. You'll be able to create Kubernetes clusters, etcd clusters, and test network setups.
If you're ready to try [rkt](https://coreos.com/rkt/docs/latest/), see [Getting Started with rkt](getting-started-rkt.md).
## Requirements
Install the dependencies and start the Docker daemon.
Install the package dependencies and start the Docker daemon.
# Fedora
sudo dnf install docker virt-install virt-manager
@@ -38,9 +37,9 @@ Run the latest Docker image from `quay.io/coreos/bootcfg` with the `etcd-docker`
#### Release
Alternately, run a recent tagged [release](https://github.com/coreos/coreos-baremetal/releases).
Alternately, run the most recent tagged [release](https://github.com/coreos/coreos-baremetal/releases).
sudo docker run -p 8080:8080 --rm -v $PWD/examples:/data:Z -v $PWD/assets:/assets:Z quay.io/coreos/bootcfg:v0.2.0 -address=0.0.0.0:8080 -log-level=debug -config /data/etcd-docker.yaml
sudo docker run -p 8080:8080 --rm -v $PWD/examples:/var/lib/bootcfg:Z -v $PWD/examples/groups/etcd-docker:/var/lib/bootcfg/groups:Z quay.io/coreos/bootcfg:v0.3.0 -address=0.0.0.0:8080 -log-level=debug
Take a look at the [etcd groups](../examples/groups/etcd-docker) to get an idea of how machines are mapped to Profiles. Explore some endpoints port mapped to localhost:8080.
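For example (the MAC value is illustrative and should match one of your machine groups):

curl 'http://localhost:8080/ipxe?mac=52:54:00:a1:9c:ae'
curl 'http://localhost:8080/ignition?mac=52:54:00:a1:9c:ae'
curl 'http://localhost:8080/metadata?mac=52:54:00:a1:9c:ae'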

View File

@@ -1,12 +1,13 @@
# Getting Started with rkt
In this tutorial, we'll run `bootcfg` on your Linux machine with `rkt` and `CNI` to network boot and provision a cluster of CoreOS machines. You'll be able to create Kubernetes clustes, etcd clusters, or just install CoreOS and test network setups locally.
In this tutorial, we'll run `bootcfg` on your Linux machine with `rkt` and `CNI` to network boot and provision a cluster of CoreOS machines locally. You'll be able to create Kubernetes clusters, etcd clusters, and test network setups.
## Requirements
Install [rkt](https://github.com/coreos/rkt/releases) and [acbuild](https://github.com/appc/acbuild/releases) from the latest releases ([example script](https://github.com/dghubble/phoenix/blob/master/scripts/fedora/sources.sh)). Optionally setup rkt [privilege separation](https://coreos.com/rkt/docs/latest/trying-out-rkt.html).
Install package dependencies.
Next, install the package dependencies.
# Fedora
sudo dnf install virt-install virt-manager
@@ -54,24 +55,24 @@ On Fedora, add the `metal0` interface to the trusted zone in your firewall confi
#### Latest
Run the latest ACI with rkt with the `etcd` example.
Run the latest ACI with rkt and the `etcd` example.
sudo rkt --insecure-options=image run --net=metal0:IP=172.15.0.2 --mount volume=data,target=/var/lib/bootcfg --volume data,kind=host,source=$PWD/examples --mount volume=groups,target=/var/lib/bootcfg/groups --volume groups,kind=host,source=$PWD/examples/groups/etcd quay.io/coreos/bootcfg:latest -- -address=0.0.0.0:8080 -log-level=debug
Note: The insecure flag is needed for this case, since [Quay.io](https://quay.io/repository/coreos/bootcfg) serves ACIs coverted from Docker images (docker2aci) and Docker images don't support signatures.
Note: The insecure flag is needed since [Quay.io](https://quay.io/repository/coreos/bootcfg) serves ACIs converted from Docker images (docker2aci) and Docker images don't support signatures.
#### Release
Alternately, run a recent tagged and signed [release](https://github.com/coreos/coreos-baremetal/releases). Trust the [CoreOS App Signing Key](https://coreos.com/dist/pubkeys/app-signing-pubkey.gpg) for image signature verification.
Alternately, run the most recent tagged and signed [release](https://github.com/coreos/coreos-baremetal/releases). Trust the [CoreOS App Signing Key](https://coreos.com/security/app-signing-key/) for image signature verification.
sudo rkt trust --prefix coreos.com/bootcfg
# gpg key fingerprint is: 18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E
sudo rkt run --net=metal0:IP=172.15.0.2 --mount volume=assets,target=/assets --volume assets,kind=host,source=$PWD/assets --mount volume=data,target=/data --volume data,kind=host,source=$PWD/examples coreos.com/bootcfg:v0.2.0 -- -address=0.0.0.0:8080 -log-level=debug -config /data/etcd-rkt.yaml
sudo rkt run --net=metal0:IP=172.15.0.2 --mount volume=data,target=/var/lib/bootcfg --volume data,kind=host,source=$PWD/examples --mount volume=groups,target=/var/lib/bootcfg/groups --volume groups,kind=host,source=$PWD/examples/groups/etcd coreos.com/bootcfg:v0.3.0 -- -address=0.0.0.0:8080 -log-level=debug
If you get an error about the IP assignment, garbage collect old pods.
sudo rkt gc --grace-period=0
./scripts/rkt-gc-force
./scripts/rkt-gc-force # sometimes needed
Take a look at the [etcd groups](../examples/groups/etcd) to get an idea of how machines are mapped to Profiles. Explore some endpoints exposed by the service.
@@ -83,7 +84,7 @@ Take a look at the [etcd groups](../examples/groups/etcd) to get an idea of how
Since the virtual network has no network boot services, use the `dnsmasq` ACI to create an iPXE network boot environment which runs DHCP, DNS, and TFTP.
Trust the [CoreOS App Signing Key](https://coreos.com/dist/pubkeys/app-signing-pubkey.gpg).
Trust the [CoreOS App Signing Key](https://coreos.com/security/app-signing-key/).
sudo rkt trust --prefix coreos.com/dnsmasq
# gpg key fingerprint is: 18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E

View File

@@ -1,11 +1,11 @@
# Ignition
Ignition is a system for declaratively provisioning disks during the initramfs, before systemd starts. It runs only on the first boot and handles formatting partitions, writing files (systemd units, networkd units, dropins, regular files), and configuring users. See the Ignition [docs](https://coreos.com/ignition/docs/latest/) for details.
Ignition is a system for declaratively provisioning disks from the initramfs, before systemd starts. It runs only on the first boot and handles formatting partitions, writing files (systemd units, networkd units, dropins, regular files), and configuring users. See the Ignition [docs](https://coreos.com/ignition/docs/latest/) for details.
Ignition template files can be added in the `/etc/bootcfg/ignition` directory or in an `ignition` subdirectory of a custom `-data-path`. Template files should contain Ignition JSON or YAML (which will be rendered as JSON) and may contain [Go template](https://golang.org/pkg/text/template/) elements which will be evaluated with `metadata` when served.
Ignition template files can be added in the `/var/lib/bootcfg/ignition` directory or in an `ignition` subdirectory of a custom `-data-path`. Template files should contain Ignition JSON or YAML (which will be rendered as JSON) and may contain [Go template](https://golang.org/pkg/text/template/) elements which will be evaluated with Group `metadata` when served.
/etc/bootcfg/
/var/lib/bootcfg
├── cloud
├── ignition
│   └── simple.json
@@ -16,22 +16,6 @@ Ignition template files can be added in the `/etc/bootcfg/ignition` directory or
Reference an Ignition config in a [Profile](bootcfg.md#profiles). When PXE booting, use the kernel option `coreos.first_boot=1` and `coreos.config.url` to point to the `bootcfg` [Ignition endpoint](api.md#ignition-config).
profile.json:
{
"id": "etcd_profile",
"boot": {
"kernel": "/assets/coreos/899.6.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/899.6.0/coreos_production_pxe_image.cpio.gz"],
"cmdline": {
"coreos.config.url": "http://bootcfg.foo/ignition?uuid=${uuid}&mac=${net0/mac:hexhyp}",
"coreos.first_boot": "1"
}
},
"cloud_id": "",
"ignition_id": "etcd.yaml"
}
## Configs
Here is an example Ignition config for static networking, which will be rendered, with metadata, into YAML and transformed into machine-friendly JSON.
@@ -78,7 +62,7 @@ Response from `/ignition?mac=address` for a particular machine.
"passwd": {}
}
Note that Ignition does **not** allow variables - the response has been fully rendered with `metadata` for the requesting machine.
Note that rendered Ignition does **not** allow variables - the response has been fully rendered with `metadata` for the requesting machine.
Ignition configs can be provided directly as JSON as well. This is useful for simple cases or if you prefer to use your own templating solution to generate Ignition configs.
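For reference, a minimal raw Ignition config in the v1 schema used at the time might look like this (a sketch; treat the exact schema as an assumption):

{
  "ignitionVersion": 1,
  "systemd": {
    "units": [
      {"name": "etcd2.service", "enable": true}
    ]
  }
}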

View File

@@ -1,14 +1,15 @@
# Lifecycle of a Physical Machine
Physical machines [network boot](network-booting.md) in an network boot environment with DHCP/TFTP/DNS services or with [coreos/dnsmasq](../contrib/dnsmasq).
`bootcfg` serves iPXE, GRUB, or Pixiecore boot configs via HTTP to machines based on Group selectors (e.g. UUID, MAC, region, etc.) and machine Profiles. Kernel and initrd images are fetched and booted with Ignition to install CoreOS. The "first boot" Ignition config if fetched and CoreOS is installed.
CoreOS boots ("first boot" from disk) and runs Ignition to provision its disk with systemd units, files, keys, and more to become a cluster node. Systemd units may fetch metadata from a remote source if needed.
Coordinated auto-updates are enabled. Systems like [fleet](https://coreos.com/docs/#fleet) or [Kubernetes](http://kubernetes.io/docs/) coordinate container services. IPMI, vendor utilities, or first-boot are used to re-provision machines into new roles.
![Machine Lifecycle](img/machine-lifecycle.png)
A physical machine [network boots](network-booting.md) in a network boot environment created by [coreos/dnsmasq](../contrib/dnsmasq) or a custom DHCP/TFTP/DNS setup.
`bootcfg` serves iPXE, GRUB, or Pixiecore boot configs via HTTP to machines based on group selectors (e.g. UUID, MAC, region, etc.). Kernel and initrd images are fetched and booted with an initial Ignition config for installing CoreOS. CoreOS is installed to disk and the provisioning Ignition config for the machine is fetched before rebooting.
CoreOS boots ("first boot" from disk) and runs Ignition to provision its disk with systemd units, files, keys, and more. On subsequent reboots, systemd units may fetch dynamic metadata if needed.
CoreOS hosts should have automatic updates enabled and use a system like fleet or Kubernetes to run containers, so node updates or failures are tolerated without operator intervention. Use IPMI, vendor utilities, or first-boot to re-provision machines to change their role, rather than mutating them in place.

View File

@@ -9,12 +9,12 @@ Here are example signature endpoints without their query parameters.
| Endpoint | Signature Endpoint | ASCII Signature Endpoint |
|------------|--------------------|-------------------------|
| Ignition | `http://bootcfg.foo/ignition.sig` | `http://bootcfg.foo/ignition.asc` |
| Cloud-init | `http://bootcfg.foo/cloud.sig` | `http://bootcfg.foo/cloud.asc` |
| iPXE | `http://bootcfg.foo/boot.ipxe.sig` | `http://bootcfg.foo/boot.ipxe.asc` |
| iPXE | `http://bootcfg.foo/ipxe.sig` | `http://bootcfg.foo/ipxe.asc` |
| Pixiecore | `http://bootcfg.foo/pixiecore/v1/boot.sig/:MAC` | `http://bootcfg.foo/pixiecore/v1/boot.asc/:MAC` |
| GRUB2 | `http://bootcfg.foo/grub.sig` | `http://bootcfg.foo/grub.asc` |
| Ignition | `http://bootcfg.foo/ignition.sig` | `http://bootcfg.foo/ignition.asc` |
| Cloud-Config | `http://bootcfg.foo/cloud.sig` | `http://bootcfg.foo/cloud.asc` |
| Metadata | `http://bootcfg.foo/metadata.sig` | `http://bootcfg.foo/metadata.asc` |
In production, mount your signing keyring and source the passphrase from a [Kubernetes secret](http://kubernetes.io/v1.1/docs/user-guide/secrets.html). Use a signing subkey exported to a keyring by itself, which can be revoked by a primary key, if needed.
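For example, a dedicated signing keyring can be produced with gpg (the key ID and output path are hypothetical):

gpg --output signing-subkey.gpg --export-secret-subkeys '0xDEADBEEFDEADBEEF!'

The `!` suffix exports just that subkey, so the primary key can stay offline and revoke the subkey later if needed.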

View File

@@ -3,7 +3,7 @@
[![Build Status](https://travis-ci.org/coreos/coreos-baremetal.svg?branch=master)](https://travis-ci.org/coreos/coreos-baremetal) [![GoDoc](https://godoc.org/github.com/coreos/coreos-baremetal?status.png)](https://godoc.org/github.com/coreos/coreos-baremetal) [![Docker Repository on Quay](https://quay.io/repository/coreos/bootcfg/status "Docker Repository on Quay")](https://quay.io/repository/coreos/bootcfg)
CoreOS on Baremetal contains guides for network booting and configuring CoreOS clusters on virtual or physical hardware.
CoreOS on Baremetal provides guides and a service for network booting and provisioning CoreOS clusters on virtual or physical hardware.
## Guides
@@ -12,7 +12,7 @@ CoreOS on Baremetal contains guides for network booting and configuring CoreOS c
## bootcfg
`bootcfg` is a HTTP and gRPC service that renders signed [Ignition configs](https://coreos.com/ignition/docs/latest/what-is-ignition.html), [cloud-configs](https://coreos.com/os/docs/latest/cloud-config.html), network boot configs, and metadata to machines based on attribute labels (e.g. UUID, MAC, stage, region) to create CoreOS clusters. Network boot endpoints provide PXE, iPXE, GRUB, and Pixiecore support. `bootcfg` can run as an [ACI](https://github.com/appc/spec) with [rkt](https://coreos.com/rkt/docs/latest/), as a Docker container, or as a binary.
`bootcfg` is an HTTP and gRPC service that renders signed [Ignition configs](https://coreos.com/ignition/docs/latest/what-is-ignition.html), [cloud-configs](https://coreos.com/os/docs/latest/cloud-config.html), network boot configs, and metadata to machines to create CoreOS clusters. Groups match machines based on labels (e.g. UUID, MAC, stage, region) and use named Profiles for provisioning. Network boot endpoints provide PXE, iPXE, GRUB, and Pixiecore support. `bootcfg` can be deployed as a binary, as an [appc](https://github.com/appc/spec) container with [rkt](https://coreos.com/rkt/docs/latest/), or as a Docker container.
* [Getting Started with rkt](Documentation/getting-started-rkt.md)
* [Getting Started with Docker](Documentation/getting-started-docker.md)
@@ -23,7 +23,9 @@ CoreOS on Baremetal contains guides for network booting and configuring CoreOS c
* [Cloud-Config](Documentation/cloud-config.md)
* [Flags](Documentation/config.md)
* [API](Documentation/api.md)
* [Deployment](Documentation/deployment.md)
* Backends
* [FileStore](Documentation/bootcfg.md#data)
* Deployment via
* [systemd](Documentation/deployment.md#systemd)
* [Troubleshooting](Documentation/troubleshooting.md)
* Going Further

View File

@@ -3,7 +3,7 @@
## get-coreos
Run the `get-coreos` script to download CoreOS kernel and initrd images, verify them, and move them into `assets`.
Run the `get-coreos` script to download CoreOS kernel and initrd images, verify them, and move them into `examples/assets`.
./scripts/get-coreos
./scripts/get-coreos channel version
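For example, to fetch the image version referenced by the example Profiles (channel and version are illustrative):

./scripts/get-coreos beta 899.6.0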
@@ -21,7 +21,7 @@ This will create:
## libvirt
Create libvirt VM nodes which are configured to boot from the network or from disk (empty). The `scripts/libvirt` script will create virtual machines on the `metal0` or `docker0` bridge with known hardware attributes (e.g. UUID, MAC address).
Create libvirt VM nodes which are configured to boot from the network. The `scripts/libvirt` script will create virtual machines on the `metal0` or `docker0` bridge with known hardware attributes (e.g. UUID, MAC address).
$ sudo ./scripts/libvirt
USAGE: libvirt <command>
@@ -34,3 +34,16 @@ Create libvirt VM nodes which are configured to boot from the network or from di
poweroff poweroff the libvirt nodes
destroy destroy the libvirt nodes
## k8s-certgen
Generate TLS certificates needed for a multi-node Kubernetes cluster. See the [examples](../examples/README.md#assets).
$ ./scripts/tls/k8s-certgen -h
./scripts/tls/k8s-certgen -h
Usage: k8s-certgen
Options:
-d DEST Destination for generated files (default: ./examples/assets/tls)
-s SERVER Reachable Server IP for kubeconfig (e.g. 172.15.0.21)
-m MASTERS Master Node Names/Addresses in SAN format (e.g. IP.1=10.3.0.1,IP.2=172.15.0.21)
-w WORKERS Worker Node Names/Addresses in SAN format (e.g. IP.1=172.15.0.22,IP.2=172.15.0.23)
-h Show help.
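For example, using the libvirt example addresses shown in the option help above:

./scripts/tls/k8s-certgen -d examples/assets/tls -s 172.15.0.21 -m IP.1=10.3.0.1,IP.2=172.15.0.21 -w IP.1=172.15.0.22,IP.2=172.15.0.23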