Documentation,CHANGES.md: Update docs for v0.4.0

This commit is contained in:
Dalton Hubble
2016-07-19 16:49:25 -07:00
parent 6d3649778c
commit d8f3ceafbe
14 changed files with 262 additions and 288 deletions


@@ -2,44 +2,53 @@
## v0.4.0 (2016-06-21)
#### Features
* Add/improve rkt, Docker, Kubernetes, and binary/systemd deployment docs
* TLS Client Authentication:
* Add gRPC API TLS and TLS client-to-server authentication (#140)
* Enable gRPC API by providing a TLS server `-cert-file` and `-key-file`, and a `-ca-file` to authenticate client certificates
* Provide the `bootcmd` tool a TLS client `-cert-file` and `-key-file`, and a `-ca-file` to verify the server identity.
* Improvements to Ignition Support:
* Allow Ignition 2.0.0 JSON and YAML template files (#141)
* Allow Fuze YAML template files for Ignition 2.0.0 (#141)
* Stop requiring Ignition templates to use file extensions (#176)
* Logging Improvements:
* Add structured logging with Logrus (#254, #268)
* Log requests for bootcfg assets (#214)
* Show `bootcfg` message at the home path `/`
* Fix http package log messages (#173)
* Templating:
* Allow query parameters to be used as template variables as `{{.request.query.foo}}` (#182)
* Support nested maps in responses from the "env file" metadata endpoint (#84)
* Error when a template is rendered with variables which are missing a referenced key. Previously, missing lookups defaulted to "no value" (#210)
* gRPC API:
* Add DialTimeout to gRPC client config (#273)
* Add IgnitionPut and Close to the client (#160,#193)
#### Changes
* gRPC API requires TLS client authentication
* Replace Ignition YAML templates with Fuze templates
- Fuze formalizes the transform from Fuze configs (YAML) to Ignition 2.0.0 (JSON)
- [Migrate templates from v0.3.0](Documentation/ignition.md#migration-from-v030)
- Require CoreOS 1010.1.0 or newer
- Drop support for Ignition v1 format
* Replace template variable `{{.query}}` with `{{.request.raw_query}}`
#### Examples
* Kubernetes
* Upgrade Kubernetes v1.3.0 (static manifest) example clusters
* Add Kubernetes v1.3.0-beta.2 (self-hosted) example cluster
* Mount /etc/resolv.conf into host kubelet for skydns and pod DNS lookups (#237,#260)
* Fix a bug in the k8s example k8s-certs@.service file check (#156)
* Avoid systemd dependency failures by restarting components (#257,#274)
* Verify Kubernetes v1.2.4 and v1.3.0 clusters pass conformance tests (#71,#265)
* Add Torus distributed storage cluster example (PXE boot)
* Add `create-uefi` subcommand to `scripts/libvirt` for UEFI/GRUB testing
* Install CoreOS to disk from a cached copy via bootcfg baseurl (#228)
* Remove 8.8.8.8 from networkd example Ignition configs (#184)
* Match machines by MAC address in examples to simplify networkd device matching (#209)
* With rkt 1.8+, you can use `rkt gc --grace-period=0` to cleanup rkt IP assignments in examples. The `rkt-gc-force` script has been removed.


@@ -1,5 +1,5 @@
# HTTP API
## iPXE Script
@@ -13,18 +13,21 @@ Serves a static iPXE boot script which gathers client machine attributes and cha
#!ipxe
chain ipxe?uuid=${uuid}&mac=${net0/mac:hexhyp}&domain=${domain}&hostname=${hostname}&serial=${serial}
Clients booted with the `/ipxe.boot` endpoint will introspect and make a request to `/ipxe` with the `uuid`, `mac`, `hostname`, and `serial` values as query arguments.
## iPXE
Finds the profile for the machine and renders the network boot config (kernel, options, initrd) as an iPXE script.
GET http://bootcfg.foo/ipxe
GET http://bootcfg.foo/ipxe?label=value
**Query Parameters**
| Name | Type | Description |
|------|--------|-----------------|
| uuid | string | Hardware UUID |
| mac | string | MAC address |
| * | string | Arbitrary label |
**Response**
@@ -37,14 +40,15 @@ Finds the profile for the machine and renders the network boot config (kernel, o
Finds the profile for the machine and renders the network boot config as a GRUB config. Use DHCP/TFTP to point GRUB clients to this endpoint as the next-server.
GET http://bootcfg.foo/grub
GET http://bootcfg.foo/grub?label=value
**Query Parameters**
| Name | Type | Description |
|------|--------|-----------------|
| uuid | string | Hardware UUID |
| mac | string | MAC address |
| * | string | Arbitrary label |
**Response**
@@ -82,16 +86,17 @@ Finds the profile matching the machine and renders the network boot config as JS
## Cloud Config
Finds the profile matching the machine and renders the corresponding Cloud-Config with group metadata, selectors, and query params.
GET http://bootcfg.foo/cloud
GET http://bootcfg.foo/cloud?label=value
**Query Parameters**
| Name | Type | Description |
|------|--------|-----------------|
| uuid | string | Hardware UUID |
| mac | string | MAC address |
| * | string | Arbitrary label |
**Response**
@@ -105,47 +110,44 @@ Finds the profile matching the machine and renders the corresponding Cloud-Confi
## Ignition Config
Finds the profile matching the machine and renders the corresponding Ignition Config with group metadata, selectors, and query params.
GET http://bootcfg.foo/ignition
GET http://bootcfg.foo/ignition?label=value
**Query Parameters**
| Name | Type | Description |
|------|--------|-----------------|
| uuid | string | Hardware UUID |
| mac | string | MAC address |
| * | string | Arbitrary label |
**Response**
{
    "ignition": { "version": "2.0.0" },
    "systemd": {
        "units": [{
            "name": "example.service",
            "enable": true,
            "contents": "[Service]\nType=oneshot\nExecStart=/usr/bin/echo Hello World\n\n[Install]\nWantedBy=multi-user.target"
        }]
    }
}
## Generic Config
Finds the profile matching the machine and renders the corresponding generic config with group metadata, selectors, and query params.
GET http://bootcfg.foo/generic
GET http://bootcfg.foo/generic?label=value
**Query Parameters**
| Name | Type | Description |
|------|--------|-----------------|
| uuid | string | Hardware UUID |
| mac | string | MAC address |
| * | string | Arbitrary label |
**Response**
@@ -159,28 +161,29 @@ Finds the profile matching the machine and renders the corresponding Generic con
## Metadata
Finds the matching machine group and renders the group metadata, selectors, and query params in an "env file" style response.
GET http://bootcfg.foo/metadata?mac=52-54-00-a1-9c-ae&foo=bar&count=3&gate=true
**Query Parameters**
| Name | Type | Description |
|------|--------|-----------------|
| uuid | string | Hardware UUID |
| mac | string | MAC address |
| * | string | Arbitrary label |
**Response**
META=data
ETCD_NAME=node1
FLEET_METADATA=role=etcd,name=node1
SOME_NESTED_DATA=some-value
MAC=52:54:00:a1:9c:ae
REQUEST_QUERY_MAC=52:54:00:a1:9c:ae
REQUEST_QUERY_FOO=bar
REQUEST_QUERY_COUNT=3
REQUEST_QUERY_GATE=true
REQUEST_RAW_QUERY=mac=52-54-00-a1-9c-ae&foo=bar&count=3&gate=true
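The nested metadata support (#84) flattens nested maps into upper-case, underscore-joined keys (e.g. `SOME_NESTED_DATA` above). A rough Go sketch of that transform, illustrative rather than bootcfg's actual implementation:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// flatten walks nested maps and emits env-file style KEY=value pairs,
// joining nested keys with underscores and upper-casing the result.
// Illustrative approximation of the "env file" endpoint behavior.
func flatten(prefix string, v interface{}, out map[string]string) {
	switch t := v.(type) {
	case map[string]interface{}:
		for k, child := range t {
			key := k
			if prefix != "" {
				key = prefix + "_" + k
			}
			flatten(key, child, out)
		}
	default:
		out[strings.ToUpper(prefix)] = fmt.Sprintf("%v", t)
	}
}

func main() {
	// Illustrative group metadata with one nested map.
	meta := map[string]interface{}{
		"etcd_name": "node1",
		"some":      map[string]interface{}{"nested_data": "some-value"},
	}
	out := map[string]string{}
	flatten("", meta, out)

	// Print deterministically, like an env file.
	keys := make([]string, 0, len(out))
	for k := range out {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Printf("%s=%s\n", k, out[k])
	}
}
```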
## OpenPGP Signatures
@@ -193,6 +196,7 @@ OpenPGP signature endpoints serve detached binary and ASCII armored signatures
| GRUB2 | `http://bootcfg.foo/grub.sig` | `http://bootcfg.foo/grub.asc` |
| Ignition | `http://bootcfg.foo/ignition.sig` | `http://bootcfg.foo/ignition.asc` |
| Cloud-Config | `http://bootcfg.foo/cloud.sig` | `http://bootcfg.foo/cloud.asc` |
| Generic | `http://bootcfg.foo/generic.sig` | `http://bootcfg.foo/generic.asc` |
| Metadata | `http://bootcfg.foo/metadata.sig` | `http://bootcfg.foo/metadata.asc` |
Get a config and its detached ASCII armored signature.


@@ -1,22 +1,22 @@
# bootcfg
`bootcfg` is an HTTP and gRPC service that renders signed [Ignition configs](https://coreos.com/ignition/docs/latest/what-is-ignition.html), [cloud-configs](https://coreos.com/os/docs/latest/cloud-config.html), network boot configs, and metadata to machines to create CoreOS clusters. `bootcfg` maintains **Group** definitions which match machines to *profiles* based on labels (e.g. MAC address, UUID, stage, region). A **Profile** is a named set of config templates (e.g. iPXE, GRUB, Ignition config, Cloud-Config, generic configs). The aim is to use CoreOS Linux's early-boot capabilities to provision CoreOS machines.
Network boot endpoints provide iPXE, GRUB, and [Pixiecore](https://github.com/danderson/pixiecore/blob/master/README.api.md) support. `bootcfg` can be deployed as a binary, as an [appc](https://github.com/appc/spec) container with rkt, or as a Docker container.
<img src='img/overview.png' class="img-center" alt="Bootcfg Overview"/>
## Getting Started
Get started running `bootcfg` on your Linux machine, with rkt or Docker.
* [bootcfg with rkt](getting-started-rkt.md)
* [bootcfg with Docker](getting-started-docker.md)
## Flags
See [configuration](config.md) flags and variables.
## API
@@ -25,9 +25,9 @@ See [flags and variables](config.md)
## Data
A `Store` stores machine Groups, Profiles, and associated Ignition configs, cloud-configs, and generic configs. By default, `bootcfg` uses a `FileStore` to search a `-data-path` for these resources.
Prepare `/var/lib/bootcfg` with `groups`, `profile`, `ignition`, `cloud`, and `generic` subdirectories. You may wish to keep these files under version control.
/var/lib/bootcfg
├── cloud
@@ -49,46 +49,40 @@ Prepare `/var/lib/bootcfg` with `profile`, `groups`, `ignition`, and `cloud` sub
└── etcd.json
└── worker.json
The [examples](../examples) directory is a valid data directory with some pre-defined configs. Note that `examples/groups` contains many possible groups in nested directories for demo purposes (tutorials pick one to mount). Your machine groups should be kept directly inside the `groups` directory as shown above.
### Profiles
Profiles reference an Ignition config, Cloud-Config, and/or generic config by name and define network boot settings.
{
    "id": "etcd",
    "name": "CoreOS with etcd2",
    "cloud_id": "",
    "ignition_id": "etcd.yaml",
    "generic_id": "some-service.cfg",
    "boot": {
        "kernel": "/assets/coreos/1053.2.0/coreos_production_pxe.vmlinuz",
        "initrd": ["/assets/coreos/1053.2.0/coreos_production_pxe_image.cpio.gz"],
        "cmdline": {
            "coreos.config.url": "http://bootcfg.foo:8080/ignition?uuid=${uuid}&mac=${net0/mac:hexhyp}",
            "coreos.autologin": "",
            "coreos.first_boot": "1"
        }
    }
}
The `"boot"` settings will be used to render configs to network boot programs such as iPXE, GRUB, or Pixiecore. You may reference remote kernel and initrd assets or [local assets](#assets).
To use Ignition, set the `coreos.config.url` kernel option to reference the `bootcfg` [Ignition endpoint](api.md#ignition-config), which will render the `ignition_id` file. Be sure to add the `coreos.first_boot` option as well.
#### Configs
To use cloud-config, set the `cloud-config-url` kernel option to reference the `bootcfg` [Cloud-Config endpoint](api.md#cloud-config), which will render the `cloud_id` file.
## Groups and Metadata
Groups define selectors which match zero or more machines. Machine(s) matching a group will boot and provision according to the group's `Profile` and `metadata`.
Create a group definition with a `Profile` to be applied, selectors for matching machines, and any `metadata` needed to render templated configs. For example `/var/lib/bootcfg/groups/node1.json` matches a single machine with MAC address `52:54:00:89:d8:10`.
# /var/lib/bootcfg/groups/node1.json
{
@@ -117,16 +111,42 @@ Meanwhile, `/var/lib/bootcfg/groups/proxy.json` acts as the default machine grou
For example, a request to `/ignition?mac=52:54:00:89:d8:10` would render the Ignition template in the "etcd" `Profile`, with the machine group's metadata. A request to `/ignition` would match the default group (which has no selectors) and render the Ignition in the "etcd-proxy" Profile. Avoid defining multiple default groups as resolution will not be deterministic.
#### Reserved Selectors
Group selectors can use any key/value pairs you find useful. However, several labels have a defined purpose and will be normalized or parsed specially.
* `uuid` - machine UUID
* `mac` - network interface physical address (normalized MAC address)
* `hostname` - hostname reported by a network boot program
* `serial` - serial reported by a network boot program
Clients booted with the `/ipxe.boot` endpoint will introspect and make a request to `/ipxe` with the `uuid`, `mac`, `hostname`, and `serial` values as query arguments. Pixiecore can only detect MAC addresses and cannot substitute them into later config requests ([issue](https://github.com/coreos/coreos-baremetal/issues/36)).
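For illustration, MAC normalization can be sketched with Go's standard `net.ParseMAC`, which accepts colon, dash, and dot notations. `normalizeMAC` is a hypothetical helper, not bootcfg's exact implementation:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// normalizeMAC parses a MAC given in colon, dash, or dot notation and
// returns the lowercase colon-separated form. Hypothetical helper.
func normalizeMAC(s string) (string, error) {
	hw, err := net.ParseMAC(s)
	if err != nil {
		return "", err
	}
	return strings.ToLower(hw.String()), nil
}

func main() {
	// iPXE substitutes ${net0/mac:hexhyp}, yielding dash-separated MACs.
	m, err := normalizeMAC("52-54-00-89-D8-10")
	if err != nil {
		panic(err)
	}
	fmt.Println(m) // 52:54:00:89:d8:10
}
```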
### Config Templates
Profiles can reference various templated configs. Ignition JSON configs can be generated from [Fuze config](https://github.com/coreos/fuze/blob/master/doc/configuration.md) template files. Cloud-Config template files can be used to render a script or Cloud-Config. Generic template files can be used to render arbitrary untyped configs (experimental). Each template may contain [Go template](https://golang.org/pkg/text/template/) elements which will be rendered with machine group metadata, selectors, and query params.
For details and examples:
* [Ignition Config](ignition.md)
* [Cloud-Config](cloud-config.md)
#### Variables
Within Ignition/Fuze templates, Cloud-Config templates, or generic templates, you can use group metadata, selectors, or request-scoped query params. For example, a request `/generic?mac=52-54-00-89-d8-10&foo=some-param&bar=b` would match the `node1.json` machine group shown above. If the group's profile ("etcd") referenced a generic template, the following variables could be used.
# Untyped generic config file
# Selector
{{.mac}} # 52:54:00:89:d8:10 (normalized)
# Metadata
{{.etcd_name}} # node1
{{.fleet_metadata}} # role=etcd,name=node1
# Query
{{.request.query.mac}} # 52:54:00:89:d8:10 (normalized)
{{.request.query.foo}} # some-param
{{.request.query.bar}} # b
# Special Addition
{{.request.raw_query}} # mac=52-54-00-89-d8-10&foo=some-param&bar=b
Note that `.request` is reserved for this purpose, so group metadata nested under a top-level "request" key will be overwritten.
## Assets
@@ -144,5 +164,13 @@ See the [get-coreos](../scripts/README.md#get-coreos) script to quickly download
## Network
`bootcfg` does not implement or exec a DHCP/TFTP server. Read [network setup](network-setup.md) or use the [coreos/dnsmasq](../contrib/dnsmasq) image if you need a quick DHCP, proxyDHCP, TFTP, or DNS setup.
## Going Further
* [gRPC API Usage](config.md#grpc-api)
* [Metadata](api.md#metadata)
* OpenPGP [Signing](api.md#openpgp-signatures)


@@ -1,7 +1,7 @@
# Self-Hosted Kubernetes
The self-hosted Kubernetes example provisions a 3 node Kubernetes v1.3.0-beta.2 cluster with etcd, flannel, and a special "runonce" host Kubelet. The CoreOS [bootkube](https://github.com/coreos/bootkube) tool is used to bootstrap the kubelet, apiserver, scheduler, and controller-manager as pods, which can be managed via kubectl. `bootkube start` is run on any controller (master) to create a temporary control-plane and start Kubernetes components initially. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
## Experimental
@@ -19,7 +19,7 @@ Build and install [bootkube](https://github.com/coreos/bootkube/releases) v0.1.1
## Examples
The [examples](../examples) statically assign IP addresses to libvirt client VMs created by `scripts/libvirt`. The examples can be used for physical machines if you update the MAC/IP addresses. See [network setup](network-setup.md) and [deployment](deployment.md).
* [bootkube](../examples/groups/bootkube) - iPXE boot a bootkube-ready cluster (use rkt)
* [bootkube-install](../examples/groups/bootkube-install) - Install a bootkube-ready cluster (use rkt)
@@ -30,10 +30,6 @@ Download the CoreOS image assets referenced in the target [profile](../examples/
./scripts/get-coreos alpha 1053.2.0 ./examples/assets
Add your SSH public key to each machine group definition [as shown](../examples/README.md#ssh-keys).
{
@@ -43,13 +39,19 @@ Add your SSH public key to each machine group definition [as shown](../examples/
}
}
Use the `bootkube` tool to render Kubernetes manifests and credentials into an `--asset-dir`. Later, `bootkube` will schedule these manifests during bootstrapping and the credentials will be used to access your cluster.
bootkube render --asset-dir=assets --api-servers=https://172.15.0.21:443 --etcd-servers=http://172.15.0.21:2379 --api-server-alt-names=IP=172.15.0.21
## Containers
Run the latest `bootcfg` ACI with rkt and the `bootkube` example (or `bootkube-install`).
sudo rkt run --net=metal0:IP=172.15.0.2 --mount volume=data,target=/var/lib/bootcfg --volume data,kind=host,source=$PWD/examples --mount volume=groups,target=/var/lib/bootcfg/groups --volume groups,kind=host,source=$PWD/examples/groups/bootkube quay.io/coreos/bootcfg:latest -- -address=0.0.0.0:8080 -log-level=debug
Create a network boot environment and power-on your machines. Revisit [bootcfg with rkt](getting-started-rkt.md) for help.
Client machines should boot and provision themselves. Local client VMs should network boot CoreOS and become available via SSH in about 1 minute. If you chose `bootkube-install`, notice that machines install CoreOS and then reboot (in libvirt, you must hit "power" again). Time to network boot and provision physical hardware depends on a number of factors (POST duration, boot device iteration, network speed, etc.).
## bootkube


@@ -1,11 +1,11 @@
# Cloud Config
**Note:** We recommend migrating to [Ignition](ignition.md) for hardware provisioning.
CoreOS Cloud-Config is a system for configuring machines with a Cloud-Config file or executable script from user-data. Cloud-Config runs in userspace on each boot and implements a subset of the [cloud-init spec](http://cloudinit.readthedocs.org/en/latest/topics/format.html#cloud-config-data). See the cloud-config [docs](https://coreos.com/os/docs/latest/cloud-config.html) for details.
Cloud-Config template files can be added in `/var/lib/bootcfg/cloud` or in a `cloud` subdirectory of a custom `-data-path`. Template files may contain [Go template](https://golang.org/pkg/text/template/) elements which will be evaluated with group metadata, selectors, and query params.
/var/lib/bootcfg
├── cloud
@@ -14,9 +14,11 @@ Cloud-Config template files can be added in `/var/lib/bootcfg/cloud` or in a `cl
├── ignition
└── profiles
## Configs
Reference a Cloud-Config in a [Profile](bootcfg.md#profiles) with `cloud_id`. When PXE booting, use the kernel option `cloud-config-url` to point to `bootcfg` [cloud-config endpoint](api.md#cloud-config).
## Examples
Here is an example Cloud-Config which starts some units and writes a file.
@@ -34,13 +36,7 @@ Here is an example Cloud-Config which starts some units and writes a file.
content: |
{{.greeting}}
See [examples/cloud](../examples/cloud) for example Cloud-Config files.
### Validator
The Cloud-Config [Validator](https://coreos.com/validate/) is also useful for checking your Cloud-Config files for errors.
## Comparison with Ignition


@@ -9,7 +9,7 @@ Configuration arguments can be provided as flags or as environment variables.
| -log-level | BOOTCFG_LOG_LEVEL | info | critical, error, warning, notice, info, debug |
| -data-path | BOOTCFG_DATA_PATH | /var/lib/bootcfg | ./examples |
| -assets-path | BOOTCFG_ASSETS_PATH | /var/lib/bootcfg/assets | ./examples/assets |
| -rpc-address | BOOTCFG_RPC_ADDRESS | (gRPC API disabled) | 0.0.0.0:8081 |
| -cert-file | BOOTCFG_CERT_FILE | /etc/bootcfg/server.crt | ./examples/etc/bootcfg/server.crt |
| -key-file | BOOTCFG_KEY_FILE | /etc/bootcfg/server.key | ./examples/etc/bootcfg/server.key |
| -ca-file | BOOTCFG_CA_FILE | /etc/bootcfg/ca.crt | ./examples/etc/bootcfg/ca.crt |
@@ -20,9 +20,8 @@ Configuration arguments can be provided as flags or as environment variables.
| Data | Default Location |
|:---------|:--------------------------------------------------|
| data | /var/lib/bootcfg/{profiles,groups,ignition,cloud,generic} |
| assets | /var/lib/bootcfg/assets |
| environment variables | /etc/bootcfg.env |
| gRPC API TLS Credentials | Default Location |
|:---------|:--------------------------------------------------|
@@ -38,44 +37,44 @@ Configuration arguments can be provided as flags or as environment variables.
sudo rkt run quay.io/coreos/bootcfg:latest -- -version
sudo docker run quay.io/coreos/bootcfg:latest -version
## Usage
Run the binary.
./bin/bootcfg -address=0.0.0.0:8080 -log-level=debug -data-path=examples -assets-path=examples/assets
Run the latest ACI with rkt.
sudo rkt run --net=metal0:IP=172.15.0.2 --mount volume=assets,target=/var/lib/bootcfg/assets --volume assets,kind=host,source=$PWD/examples/assets quay.io/coreos/bootcfg:latest -- -address=0.0.0.0:8080 -log-level=debug
Run the latest Docker image.
sudo docker run -p 8080:8080 --rm -v $PWD/examples/assets:/var/lib/bootcfg/assets:Z quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug
#### With Examples
Mount `examples` to pre-load the [example](../examples/README.md) machine groups and profiles. Run the container with rkt,
sudo rkt run --net=metal0:IP=172.15.0.2 --mount volume=data,target=/var/lib/bootcfg --volume data,kind=host,source=$PWD/examples --mount volume=groups,target=/var/lib/bootcfg/groups --volume groups,kind=host,source=$PWD/examples/groups/etcd quay.io/coreos/bootcfg:latest -- -address=0.0.0.0:8080 -log-level=debug
or with Docker.
sudo docker run -p 8080:8080 --rm -v $PWD/examples:/var/lib/bootcfg:Z -v $PWD/examples/groups/etcd:/var/lib/bootcfg/groups:Z quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug
### gRPC API
The gRPC API allows clients with a TLS client certificate and key to make RPC requests to programmatically create or update `bootcfg` resources. The API can be enabled with the `-rpc-address` flag and by providing a TLS server certificate and key with `-cert-file` and `-key-file` and a CA certificate for authenticating clients with `-ca-file`.
Run the binary with TLS credentials from `examples/etc/bootcfg`.
./bin/bootcfg -address=0.0.0.0:8080 -rpc-address=0.0.0.0:8081 -log-level=debug -data-path=examples -assets-path=examples/assets -cert-file examples/etc/bootcfg/server.crt -key-file examples/etc/bootcfg/server.key -ca-file examples/etc/bootcfg/ca.crt
Clients, such as `bootcmd`, verify the server's certificate with a CA bundle passed via `-ca-file` and present a client certificate and key via `-cert-file` and `-key-file` to call the gRPC API.
./bin/bootcmd profile list --endpoints 127.0.0.1:8081 --ca-file examples/etc/bootcfg/ca.crt --cert-file examples/etc/bootcfg/client.crt --key-file examples/etc/bootcfg/client.key
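The repository ships test credentials under `examples/etc/bootcfg`. If you'd rather generate throwaway credentials yourself, a minimal `openssl` sketch (all subject names here are hypothetical) could look like:

```shell
# Throwaway PKI for local testing only; all names are hypothetical.
# 1. Self-signed CA used to sign both the server and client certificates.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=fake-bootcfg-ca"
# 2. Server certificate and key (bootcfg's -cert-file and -key-file).
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=bootcfg.example.com"
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out server.crt
# 3. Client certificate and key (bootcmd's -cert-file and -key-file).
openssl req -newkey rsa:2048 -nodes \
  -keyout client.key -out client.csr -subj "/CN=bootcmd-client"
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out client.crt
# Both leaf certificates verify against the CA.
openssl verify -CAfile ca.crt server.crt client.crt
```

Clients verifying the server certificate also check the hostname, so for real use generate the server certificate with the DNS names or IPs machines will actually dial.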
#### With rkt
Run the ACI with rkt and TLS credentials from `examples/etc/bootcfg`.
sudo rkt run --net=metal0:IP=172.15.0.2 --mount volume=data,target=/var/lib/bootcfg --volume data,kind=host,source=$PWD/examples,readOnly=true --mount volume=config,target=/etc/bootcfg --volume config,kind=host,source=$PWD/examples/etc/bootcfg --mount volume=groups,target=/var/lib/bootcfg/groups --volume groups,kind=host,source=$PWD/examples/groups/etcd quay.io/coreos/bootcfg:latest -- -address=0.0.0.0:8080 -rpc-address=0.0.0.0:8081 -log-level=debug
A `bootcmd` client can call the gRPC API running at the IP used in the rkt example.
./bin/bootcmd profile list --endpoints 172.15.0.2:8081 --ca-file examples/etc/bootcfg/ca.crt --cert-file examples/etc/bootcfg/client.crt --key-file examples/etc/bootcfg/client.key
#### With Docker
Run the Docker image with TLS credentials from `examples/etc/bootcfg`.
sudo docker run -p 8080:8080 -p 8081:8081 --rm -v $PWD/examples:/var/lib/bootcfg:Z -v $PWD/examples/etc/bootcfg:/etc/bootcfg:Z,ro -v $PWD/examples/groups/etcd:/var/lib/bootcfg/groups:Z quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -rpc-address=0.0.0.0:8081 -log-level=debug
A `bootcmd` client can call the gRPC API running at the IP used in the Docker example.
./bin/bootcmd profile list --endpoints 127.0.0.1:8081 --ca-file examples/etc/bootcfg/ca.crt --cert-file examples/etc/bootcfg/client.crt --key-file examples/etc/bootcfg/client.key
### OpenPGP [Signing](openpgp.md)
Run the binary with a test key.

Run the most recent tagged and signed `bootcfg` [release](https://github.com/coreos/coreos-baremetal/releases).
sudo rkt trust --prefix coreos.com/bootcfg
# gpg key fingerprint is: 18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E
sudo rkt run --net=host --mount volume=assets,target=/var/lib/bootcfg/assets --volume assets,kind=host,source=$PWD/examples/assets quay.io/coreos/bootcfg:v0.4.0 -- -address=0.0.0.0:8080 -log-level=debug
Create machine profiles, groups, or Ignition configs at runtime with `bootcmd` or by using your own `/var/lib/bootcfg` volume mounts.
Run the latest or the most recently tagged `bootcfg` [release](https://github.com/coreos/coreos-baremetal/releases) Docker image.
sudo docker run --net=host --rm -v $PWD/examples/assets:/var/lib/bootcfg/assets:Z quay.io/coreos/bootcfg:v0.4.0 -address=0.0.0.0:8080 -log-level=debug
Create machine profiles, groups, or Ignition configs at runtime with `bootcmd` or by using your own `/var/lib/bootcfg` volume mounts.
The example manifests use Kubernetes `emptyDir` volumes to back the `bootcfg` FileStore.
The `bootcfg` service should be run by a non-root user with access to the `bootcfg` data directory (e.g. `/var/lib/bootcfg`). Create a `bootcfg` user and group.
sudo useradd -U bootcfg
sudo mkdir -p /var/lib/bootcfg/assets
sudo chown -R bootcfg:bootcfg /var/lib/bootcfg
Add yourself to the `bootcfg` group if you'd like to edit configs directly rather than through the `bootcmd` client.
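If several members of the `bootcfg` group will edit configs directly, you may additionally want group-writable directories with the SGID bit set so that new files inherit the `bootcfg` group (a suggested hardening, not required by `bootcfg` itself). A scratch-directory sketch of what the mode does, runnable without root:

```shell
# The leading 2 in mode 2770 is the SGID bit: files created inside the
# directory inherit the directory's group rather than the creator's group.
rm -rf /tmp/bootcfg-sgid-demo
mkdir -m 2770 /tmp/bootcfg-sgid-demo
stat -c '%a' /tmp/bootcfg-sgid-demo
# prints: 2770
```

On the real data directory that would be, e.g., `sudo chmod 2770 /var/lib/bootcfg/groups` after the `chown` above.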
Verify the signature from the CoreOS App Signing Key.
Install the `bootcfg` static binary to `/usr/local/bin`.
tar xzvf bootcfg-VERSION-linux-amd64.tar.gz
sudo cp bootcfg /usr/local/bin
### Source
Build `bootcfg` from source.
Install the `bootcfg` static binary to `/usr/local/bin`.
$ sudo make install
### Run
Run the `bootcfg` server.

# Ignition
Ignition is a system for declaratively provisioning disks during the initramfs, before systemd starts. It runs only on the first boot and handles partitioning disks, formatting partitions, writing files (regular files, systemd units, networkd units, etc.), and configuring users. See the Ignition [docs](https://coreos.com/ignition/docs/latest/) for details.
## Fuze Configs
Ignition 2.0.0+ configs are versioned, *machine-friendly* JSON documents (which contain encoded file contents). Operators should write and maintain configs in a *human-friendly* format, such as CoreOS [fuze](https://github.com/coreos/fuze) configs. As of `bootcfg` v0.4.0, Fuze configs are the primary way to use CoreOS Ignition.
The [Fuze schema](https://github.com/coreos/fuze/blob/master/doc/configuration.md) formalizes and improves upon the YAML to Ignition JSON transform. Fuze provides better support for Ignition 2.0.0+, handles file content encoding, patches Ignition bugs, performs better validations, and lets services (like `bootcfg`) negotiate the Ignition version required by a CoreOS client.
Fuze automatically handles file content encoding so that you can continue to write and maintain readable inline file contents.
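For instance, Ignition 2.0.0 stores file contents as data URLs inside the JSON, while a Fuze config lets you write the same contents as plain `inline` text and performs the encoding for you. A rough sketch of that encoding (`jq` is used here purely for illustration; Fuze does this internally):

```shell
# 'foo bar' written as Fuze inline content becomes a percent-encoded
# data URL in the rendered Ignition JSON.
printf 'foo bar' | jq -sRr '@uri' | sed 's/^/data:,/'
# prints: data:,foo%20bar
```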
## Adding Fuze Configs
Fuze template files can be added in the `/var/lib/bootcfg/ignition` directory or in an `ignition` subdirectory of a custom `-data-path`. Template files may contain [Go template](https://golang.org/pkg/text/template/) elements which will be evaluated with group metadata, selectors, and query params.
/var/lib/bootcfg
├── cloud
├── ignition
│   └── k8s-master.yaml
│   └── etcd.yaml
│   └── k8s-worker.yaml
│   └── raw.ign
└── profiles
### Reference
Reference a Fuze config in a [Profile](bootcfg.md#profiles) with `ignition_id`. When PXE booting, use the kernel option `coreos.first_boot=1` and `coreos.config.url` to point to the `bootcfg` [Ignition endpoint](api.md#ignition-config).
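For example, an iPXE kernel line pointing a first-boot machine at that endpoint might look like the following (hypothetical host, port, and asset path; `${mac:hexhyp}` is an iPXE variable):

```
kernel http://bootcfg.example.com:8080/assets/coreos/1053.2.0/coreos_production_pxe.vmlinuz coreos.first_boot=1 coreos.config.url=http://bootcfg.example.com:8080/ignition?mac=${mac:hexhyp}
```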
### Migration from v0.3.0
In v0.4.0, `bootcfg` switched to using the CoreOS [fuze](https://github.com/coreos/fuze) library, which formalizes and improves upon the YAML to Ignition JSON transform. Fuze provides better support for Ignition 2.0.0+, handles file content encoding, patches Ignition bugs, and performs better validations.
Upgrade your Ignition YAML templates to match the [Fuze config schema](https://github.com/coreos/fuze/blob/master/doc/configuration.md). Typically, you'll need to do the following:
* Remove `ignition_version: 1`; Fuze configs are version-less
* Update `filesystems` section and set the `name`
* Update `files` section to use `inline` as shown below
* Replace `uid` and `gid` with `user` and `group` objects as shown above
Maintain readable inline file contents in Fuze:
```
...
files:
foo bar
```
Support for the older Ignition v1 format has been dropped, so CoreOS machines must be **1010.1.0 or newer**. Read the upstream Ignition v1 to 2.0.0 [migration guide](https://coreos.com/ignition/docs/latest/migrating-configs.html) to understand the reasons behind schema changes.
## Examples
See [examples/ignition](../examples/ignition) for numerous Fuze template examples.
Here is an example Fuze template. This template will be rendered into a Fuze config (YAML), using group metadata, selectors, and query params as template variables. Finally, the Fuze config is served to client machines as Ignition JSON.
ignition/format-disk.yaml.tmpl:
{{end}}
{{end}}
The Ignition config response (formatted) to a query `/ignition?label=value` for a CoreOS instance supporting Ignition 2.0.0 would be:
{
"ignition": {
...
"passwd": {}
}
### Raw Ignition
If you prefer to design your own templating solution, raw Ignition files (suffixed with `.ign` or `.ignition`) are served directly.

# Kubernetes
The Kubernetes example provisions a 3 node Kubernetes v1.3.0 cluster with one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
## Requirements
Ensure that you've gone through the [bootcfg with rkt](getting-started-rkt.md) or [bootcfg with docker](getting-started-docker.md) guide and understand the basics. In particular, you should be able to:
* Use rkt or Docker to start `bootcfg`
* Create a network boot environment with `coreos/dnsmasq`
## Examples
The [examples](../examples) statically assign IP addresses to libvirt client VMs created by `scripts/libvirt`. VMs are setup on the `metal0` CNI bridge for rkt or the `docker0` bridge for Docker. The examples can be used for physical machines if you update the MAC/IP addresses. See [network setup](network-setup.md) and [deployment](deployment.md).
* [k8s](../examples/groups/k8s) - iPXE boot a Kubernetes cluster (use rkt)
* [k8s-docker](../examples/groups/k8s-docker) - iPXE boot a Kubernetes cluster on `docker0` (use docker)
* [k8s-install](../examples/groups/k8s-install) - Install a Kubernetes cluster to disk (use rkt)
* [Lab examples](https://github.com/dghubble/metal) - Lab hardware examples
### Assets
Download the CoreOS image assets referenced in the target profile.
./scripts/get-coreos alpha 1053.2.0 ./examples/assets
Add your SSH public key to each machine group definition [as shown](../examples/README.md#ssh-keys).
Generate a root CA and Kubernetes TLS assets for components (`admin`, `apiserver`, `worker`).
rm -rf examples/assets/tls
# for Kubernetes on docker0
./scripts/tls/k8s-certgen -d examples/assets/tls -s 172.17.0.21 -m IP.1=10.3.0.1,IP.2=172.17.0.21 -w IP.1=172.17.0.22,IP.2=172.17.0.23
**Note**: TLS assets are served to any machines which request them, which requires a trusted network. Alternately, provisioning may be tweaked to require TLS assets be securely copied to each host. Read about our longer term security plans at [Distributed Trusted Computing](https://coreos.com/blog/coreos-trusted-computing.html).
## Containers
Use rkt or docker to start `bootcfg` and mount the desired example resources. Create a network boot environment and power-on your machines. Revisit [bootcfg with rkt](getting-started-rkt.md) or [bootcfg with Docker](getting-started-docker.md) for help.
Client machines should boot and provision themselves. Local client VMs should network boot CoreOS in about a minute and the Kubernetes API should be available after 2-3 minutes. If you chose `k8s-install`, notice that machines install CoreOS and then reboot (in libvirt, you must hit "power" again). Time to network boot and provision Kubernetes clusters on physical hardware depends on a number of factors (POST duration, boot device iteration, network speed, etc.).
## Verify
Get all pods.
kube-system kube-scheduler-172.15.0.21 1/1 Running 0 13m
kube-system kubernetes-dashboard-v1.1.0-m1gyy 1/1 Running 0 14m
## Kubernetes Dashboard
Access the Kubernetes Dashboard with `kubeconfig` credentials by port forwarding to the dashboard pod.
Then visit [http://127.0.0.1:9090](http://127.0.0.1:9090/).
## Tectonic
Sign up for [Tectonic Starter](https://tectonic.com/starter/) for free and deploy the [Tectonic Console](https://tectonic.com/enterprise/docs/latest/deployer/tectonic_console.html) with a few `kubectl` commands!
<img src='img/tectonic-console.png' class="img-center" alt="Tectonic Console"/>

# Torus Storage
The Torus example provisions a 3 node CoreOS cluster, with `etcd3` and Torus, to demonstrate a stand-alone storage cluster. Each of the 3 nodes runs a Torus instance which makes 1GiB of space available (configured per node by "torus_storage_size" in machine group metadata).
## Requirements
Ensure that you've gone through the [bootcfg with rkt](getting-started-rkt.md) guide and understand the basics.
## Examples
The [examples](../examples) statically assign IP addresses (172.15.0.21, 172.15.0.22, 172.15.0.23) to libvirt client VMs created by `scripts/libvirt`. The examples can be used for physical machines if you update the MAC/IP addresses. See [network setup](network-setup.md) and [deployment](deployment.md).
* [torus](../examples/groups/torus) - iPXE boot a Torus cluster (use rkt)
Run the latest `bootcfg` ACI with rkt and the `torus` example.
sudo rkt run --net=metal0:IP=172.15.0.2 --mount volume=data,target=/var/lib/bootcfg --volume data,kind=host,source=$PWD/examples --mount volume=groups,target=/var/lib/bootcfg/groups --volume groups,kind=host,source=$PWD/examples/groups/torus quay.io/coreos/bootcfg:latest -- -address=0.0.0.0:8080 -log-level=debug
Create a network boot environment and power-on your machines. Revisit [bootcfg with rkt](getting-started-rkt.md) for help. Client machines should network boot and provision themselves.
## Verify

# CoreOS on Baremetal
[![Build Status](https://travis-ci.org/coreos/coreos-baremetal.svg?branch=master)](https://travis-ci.org/coreos/coreos-baremetal) [![GoDoc](https://godoc.org/github.com/coreos/coreos-baremetal?status.png)](https://godoc.org/github.com/coreos/coreos-baremetal) [![Docker Repository on Quay](https://quay.io/repository/coreos/bootcfg/status "Docker Repository on Quay")](https://quay.io/repository/coreos/bootcfg) [![IRC](https://img.shields.io/badge/irc-%23coreos-449FD8.svg)](https://botbot.me/freenode/coreos)
Guides and a service for network booting and provisioning CoreOS clusters on virtual or physical hardware.
`bootcfg` is an HTTP and gRPC service that renders signed [Ignition configs](https://coreos.com/ignition/docs/latest/what-is-ignition.html), [cloud-configs](https://coreos.com/os/docs/latest/cloud-config.html), network boot configs, and metadata to machines to create CoreOS clusters. Groups match machines based on labels (e.g. MAC, UUID, stage, region) and use named Profiles for provisioning. Network boot endpoints provide PXE, iPXE, GRUB, and Pixiecore support. `bootcfg` can be deployed as a binary, as an [appc](https://github.com/appc/spec) container with [rkt](https://coreos.com/rkt/docs/latest/), or as a Docker container.
* [bootcfg Service](Documentation/bootcfg.md)
* [Profiles](Documentation/bootcfg.md#profiles)
* [Groups](Documentation/bootcfg.md#groups-and-metadata)
* Config Templates
* [Ignition](Documentation/ignition.md)
* [Cloud-Config](Documentation/cloud-config.md)
* Tutorials (libvirt)
* [bootcfg with rkt](Documentation/getting-started-rkt.md)
* [bootcfg with Docker](Documentation/getting-started-docker.md)
* [Configuration](Documentation/config.md)
* [HTTP API](Documentation/api.md)
* [gRPC API](https://godoc.org/github.com/coreos/coreos-baremetal/bootcfg/client)
@@ -35,8 +36,9 @@ Guides and a service for network booting and provisioning CoreOS clusters on vir
* [binary](Documentation/deployment.md#binary) / [systemd](Documentation/deployment.md#systemd)
* [Troubleshooting](Documentation/troubleshooting.md)
* Going Further
* [gRPC API Usage](config.md#grpc-api)
* [Metadata](api.md#metadata)
* OpenPGP [Signing](api.md#openpgp-signatures)
### Examples

// gRPC Server (feature disabled by default)
if flags.rpcAddress != "" {
log.Infof("Starting bootcfg gRPC server on %s", flags.rpcAddress)
log.Infof("Using TLS server certificate: %s", flags.certFile)
log.Infof("Using TLS server key: %s", flags.keyFile)
log.Infof("Using CA certificate: %s to authenticate client certificates", flags.caFile)
ArmoredSigner: armoredSigner,
}
httpServer := web.NewServer(config)
log.Infof("Starting bootcfg HTTP server on %s", flags.address)
err = http.ListenAndServe(flags.address, httpServer.HTTPHandler())
if err != nil {
log.Fatalf("failed to start listening: %v", err)

These examples network boot and provision machines into CoreOS clusters using `bootcfg`.
| pxe-disk | CoreOS via iPXE, with a root filesystem | alpha/1053.2.0 | Disk | [reference](https://coreos.com/os/docs/latest/booting-with-ipxe.html) |
| etcd, etcd-docker | iPXE boot a 3 node etcd cluster and proxy | alpha/1053.2.0 | RAM | [reference](https://coreos.com/os/docs/latest/cluster-architectures.html) |
| etcd-install | Install a 3-node etcd cluster to disk | alpha/1053.2.0 | Disk | [reference](https://coreos.com/os/docs/latest/installing-to-disk.html) |
| k8s, k8s-docker | Kubernetes cluster with 1 master, 2 workers, and TLS-authentication | alpha/1053.2.0 | Disk | [tutorial](../Documentation/kubernetes.md) |
| k8s-install | Install a Kubernetes cluster to disk | alpha/1053.2.0 | Disk | [tutorial](../Documentation/kubernetes.md) |
| bootkube | iPXE boot a self-hosted Kubernetes cluster (with bootkube) | alpha/1053.2.0 | Disk | [tutorial](../Documentation/bootkube.md) |
| bootkube-install | Install a self-hosted Kubernetes cluster (with bootkube) | alpha/1053.2.0 | Disk | [tutorial](../Documentation/bootkube.md) |
| torus | Torus distributed storage | alpha/1053.2.0 | Disk | [tutorial](../Documentation/torus.md) |
## Tutorials
Get started running `bootcfg` on your Linux machine to network boot and provision clusters of VMs or physical hardware.
* Getting Started
* [bootcfg with rkt](../Documentation/getting-started-rkt.md)
* [bootcfg with Docker](../Documentation/getting-started-docker.md)
* [Kubernetes (static manifests)](../Documentation/kubernetes.md)
* [Kubernetes (self-hosted)](../Documentation/bootkube.md)
* [Torus Storage](../Documentation/torus.md)
## Experimental
These examples demonstrate booting and provisioning various (often experimental) CoreOS clusters. They have **NOT** been hardened for production yet. You should write or adapt Ignition configs to suit your needs and hardware.
* [Lab Examples](https://github.com/dghubble/metal)
## SSH Keys
