585 Commits

Author SHA1 Message Date
Stephen Demos
e78150218f Merge pull request #727 from sdemos/release
release v0.7.1
2018-11-01 14:37:41 -07:00
Stephen Demos
cccb588855 *: update matchbox version to v0.7.1 2018-11-01 14:28:26 -07:00
Stephen Demos
9a177e83d7 changes: update changes document with relevant changes 2018-11-01 14:20:09 -07:00
Stephen Demos
dfd0457e03 Merge pull request #713 from anitakumar/master
HTTPS support for web server
2018-11-01 13:58:24 -07:00
Anita Kumar
9de30aea59 documentation: document HTTPS flags
Updated Documentation to include HTTPS
2018-11-01 13:41:51 -07:00
Anita Kumar
910ee6f18c cmd/matchbox: HTTPS support for web server 2018-11-01 13:41:04 -07:00
Stephen Demos
0994b860b5 Merge pull request #720 from salarmgh/feature/autologin
Add kernel args variable
2018-11-01 13:26:54 -07:00
Stephen Demos
78f7e8d492 Merge pull request #722 from kkohtaka/fix-terraform-modules-example
Fix an example usage of terraform modules
2018-11-01 13:25:18 -07:00
Stephen Demos
e804ace9e2 Merge pull request #726 from schu/schu/scripts-get-flatcar
scripts: add helper script `get-flatcar`
2018-10-30 11:00:33 -07:00
Kazumasa Kohtaka
0012d691f4 Fix an example usage of terraform modules 2018-10-30 02:37:43 +09:00
Michael Schubert
e170c600b3 scripts: add helper script get-flatcar
Similar to `get-coreos`, add a helper script `get-flatcar` to download
Flatcar assets.

Follow up for https://github.com/poseidon/typhoon/pull/315
2018-10-29 16:53:22 +01:00
Stephen Demos
4f229d5d9a Merge pull request #723 from sdemos/master
travis: update to latest supported go major versions
2018-10-19 14:17:48 -07:00
Stephen Demos
3cd8ba0a05 travis: update to latest supported go major versions
this also updates the golint URL to its new location, fixing CI.
2018-10-19 12:14:20 -07:00
Salar Moghaddam
74f13a2f86 Add description and default value 2018-09-24 15:59:08 +03:30
Salar Moghaddam
4eee84b17d Add kernel args variable 2018-09-24 15:15:17 +03:30
Stephen Demos
845d1d0adc Merge pull request #717 from olleolleolle/patch-2
README: Use SVG badge for GoDoc
2018-09-13 11:56:00 -07:00
Stephen Demos
5b1c790d0c Merge pull request #716 from olleolleolle/patch-1
[docs] Typo fix
2018-09-13 11:55:45 -07:00
Olle Jonsson
70400b7dd0 README: Use SVG badge for GoDoc 2018-09-12 16:09:20 +02:00
Olle Jonsson
c6ebdfeb92 [docs] Typo fix 2018-09-12 13:22:35 +02:00
Stephen Demos
99acdf4c6b Merge pull request #709 from dghubble/update-kubernetes
Update Kubernetes (terraform) example to v1.10.3
2018-05-30 10:07:21 -07:00
Dalton Hubble
be057ed9c8 Update Kubernetes (terraform) example to v1.10.3
* https://github.com/poseidon/typhoon/releases/tag/v1.10.3
2018-05-30 00:34:05 -07:00
Stephen Demos
8bb99143e8 Merge pull request #704 from ae-v/master
fixes typo in scripts/tls/README.md
2018-04-09 16:26:48 -07:00
Stephen Demos
c802ce5805 Merge pull request #703 from dghubble/master
Update terraform Kubernetes examples to v1.10.0
2018-04-09 13:20:28 -07:00
Andre Veelken
c4e82c03a4 fixes typo in scripts/tls/README.md 2018-04-09 10:20:55 +02:00
Dalton Hubble
29c93046ef Update terraform Kubernetes examples to v1.10.0 2018-04-04 01:23:11 -07:00
Dalton Hubble
34e981dc7c examples: Update terraform Kubernetes examples to v1.9.3 2018-02-13 16:18:45 -08:00
Dalton Hubble
3a88a663c3 Merge pull request #696 from zbwright/example-links
docs: change links to work with sync
2018-01-25 15:20:51 -08:00
Dalton Hubble
572c8d26eb Merge pull request #695 from coreos/fix-cert-gen
scripts/tls: Fix cert-gen to add index.txt.attr
2018-01-25 15:09:43 -08:00
Beth Wright
c22b273548 docs: change links to work with sync 2018-01-25 14:04:56 -08:00
Dalton Hubble
c3ef870ce5 scripts/tls: Fix cert-gen to add index.txt.attr 2018-01-25 11:35:09 -08:00
Dalton Hubble
e9ce7325ab Merge pull request #689 from diegs/env
scripts: fix shebangs.
2018-01-10 10:02:38 -08:00
Diego Pontoriero
948bdee165 scripts: fix shebangs.
/bin/bash is not an LSB path.
2018-01-09 17:59:15 -08:00
Dalton Hubble
50e923730e Merge pull request #687 from coreos/bump-cl
Bump Container Linux version from 1576.4.0 to 1576.5.0
2018-01-09 04:28:40 -08:00
Dalton Hubble
1799c8e23e Bump Container Linux version from 1576.4.0 to 1576.5.0 2018-01-08 16:33:48 -08:00
Dalton Hubble
454ae972a1 Merge pull request #686 from ericchiang/coc
automated PR: update CoC
2018-01-08 06:55:38 -08:00
Eric Chiang
fe0c3438fd update CoC 2018-01-04 12:30:28 -08:00
Dalton Hubble
65b410e20b Merge pull request #683 from coreos/update-kubernetes
Update Kubernetes from v1.8.4 to v1.8.5
2017-12-18 16:09:39 -08:00
Dalton Hubble
dced573acb examples: Update Kubernetes from v1.8.4 to v1.8.5 2017-12-14 13:23:57 -08:00
Dalton Hubble
4888c04dee contrib: Change nginx-ingress ssl-passthrough annotation
* nginx-ingress controller 0.9.0-beta.18 and above changed the
annotations prefix to nginx.ingress.kubernetes.io
2017-12-13 15:24:24 -08:00
Dalton Hubble
4e9d542a87 Merge pull request #682 from coreos/release-v0.7.0
*: Update Matchbox version to v0.7.0
2017-12-12 17:00:55 -08:00
Dalton Hubble
08f4e9908b *: Update Matchbox version to v0.7.0 2017-12-12 14:57:09 -08:00
Dalton Hubble
dd96f58417 Merge pull request #681 from coreos/allow-terraform-11
examples: Fix examples to work with Terraform v0.11.x
2017-12-12 14:48:45 -08:00
Dalton Hubble
f5ef2d156b examples: Fix examples to work with Terraform v0.11.x
* Explicitly pass provider modules to satisfy constraints
* https://github.com/hashicorp/terraform/issues/16824
2017-12-12 14:36:38 -08:00
Dalton Hubble
f673d48007 Merge pull request #680 from coreos/bump-cl
examples: Update Container Linux to stable 1576.4.0
2017-12-12 13:33:13 -08:00
Dalton Hubble
7a58d944d8 examples: Update Container Linux to stable 1576.4.0
* Use Docker 17.09 by default in Kubernetes clusters
2017-12-11 21:40:51 -08:00
Dalton Hubble
5d975ec42a Merge pull request #678 from coreos/update-bootkube
examples: Update from Kubernetes v1.8.3 to v1.8.4
2017-12-11 21:40:26 -08:00
Dalton Hubble
2404d34b0e examples: Update from Kubernetes v1.8.3 to v1.8.4 2017-12-11 21:30:26 -08:00
Dalton Hubble
c9b9711bca Merge pull request #677 from dghubble/bump-version
scripts/devnet: Bump matchbox image version
2017-11-27 16:12:17 -08:00
Dalton Hubble
ae524f57f2 scripts/devnet: Bump matchbox image version
* Examples use Ignition v2.1.0 spec
2017-11-27 11:14:47 -08:00
Dalton Hubble
f26224c57d Merge pull request #675 from redbaron/multiple-initrd
fix loading multiple initrds
2017-11-22 13:45:42 -08:00
Dalton Hubble
2c063a4674 Merge pull request #676 from coreos/fix-matchbox-endpoint
examples: Fix endpoint name for manual examples
2017-11-20 14:10:46 -08:00
Dalton Hubble
7d5656ffe3 examples: Fix endpoint name for manual examples
* Bug introduced by b10c777729
2017-11-20 13:46:02 -08:00
Maxim Ivanov
a683e8261e iPXE loads multiple initrds when each is given to its own initrd command 2017-11-20 19:23:04 +00:00
Dalton Hubble
c75fc8f88e Merge pull request #674 from coreos/efi
contrib/dnsmasq: Add ipxe.efi for dnsmasq:v0.5.0
2017-11-17 11:21:24 -08:00
Dalton Hubble
b10c777729 contrib/dnsmasq: Remove old matchbox endpoint from dnsmasq configs 2017-11-16 23:41:29 -08:00
Dalton Hubble
5992ba6ad5 scripts/libvirt: Add disk hd to UEFI VM boot order 2017-11-16 23:41:29 -08:00
Dalton Hubble
ca223f800b examples: Add UEFI initrd option to Terraform examples 2017-11-16 23:41:27 -08:00
Dalton Hubble
1246d5a0db contrib/dnsmasq: Add ipxe.efi for dnsmasq:v0.5.0
* Add ipxe.efi to dnsmasq image's /var/lib/tftpboot directory
* Add initrd kernel argument respected only by UEFI
https://github.com/coreos/bugs/issues/1239
* Improve network-setup docs and scripts to cover UEFI clients
and to support launching UEFI QEMU/KVM clusters locally
* Reduce references to grub.efi flow, it's not a happy path
2017-11-16 23:40:52 -08:00
Dalton Hubble
4f7dd0942c Merge pull request #673 from coreos/update-kubernetes
examples: Update Kubernetes from v1.8.2 to v1.8.3
2017-11-09 16:29:45 -08:00
Dalton Hubble
3e6aa4ee73 examples: Update Kubernetes from v1.8.2 to v1.8.3 2017-11-09 16:01:43 -08:00
Dalton Hubble
9c39221b71 Merge pull request #672 from coreos/fix-publishing
travis.yml: Ensure deploy condition matches build matrix
2017-11-08 15:41:40 -08:00
Dalton Hubble
4103461778 travis.yml: Ensure deploy condition matches build matrix
* Build binaries for Docker images with Go 1.8.5
* Travis "deploy" stage should publish the quay image for Go 1.8.5
2017-11-08 15:09:43 -08:00
Dalton Hubble
9a6d815978 Merge pull request #671 from coreos/fix-publishing
travis.yml: Fix travis to publish master images
2017-11-08 15:00:39 -08:00
Dalton Hubble
6aa8759bfd travis.yml: Fix travis to publish master images 2017-11-08 14:47:40 -08:00
Dalton Hubble
d5027950e2 Merge pull request #670 from coreos/update-ignition
Update Ignition config version to v2.1.0
2017-11-08 12:58:29 -08:00
Dalton Hubble
85a2a6b252 matchbox: Update tests due to Ignition 2.1.0 format 2017-11-07 15:23:41 -08:00
Dalton Hubble
4bc5fcdc5e vendor: Vendor glide.yaml ct, Ignition, and dependencies 2017-11-06 14:13:54 -08:00
Dalton Hubble
2f4d5b95e4 glide.yaml: Update ct to v0.5.0 and Ignition to v0.19.0
* Change `/ignition` endpoint to serve a v2.1.0 Ignition config
* Drops support for Container Linux versions before 1465.0.0
2017-11-06 13:29:42 -08:00
Dalton Hubble
257f2fa553 Merge pull request #667 from dghubble/bump-cl
examples: Bump Container Linux to stable 1520.8.0
2017-10-30 17:11:50 -07:00
Dalton Hubble
7829c14d52 examples: Bump Container Linux to stable 1520.8.0
* Increase minimum RAM required to use PXE image
* https://coreos.com/releases/#1520.5.0
2017-10-30 13:58:17 -07:00
Dalton Hubble
ce72fb72a0 Merge pull request #665 from coreos/hyperkube
Update to Kubernetes v1.8.2
2017-10-27 16:39:07 -07:00
Dalton Hubble
41d5db4723 examples: Update examples to Kubernetes v1.8.2
* Fixes v1.8.1 kube-apiserver memory leak
2017-10-27 15:49:53 -07:00
Dalton Hubble
dfd08e48e5 Switch from quay.io to gcr.io hyperkube image 2017-10-27 15:49:53 -07:00
Dalton Hubble
347e142db9 Merge pull request #664 from coreos/docker-docs
Switch local QEMU/KVM tutorial to favor Docker
2017-10-27 13:51:36 -07:00
Dalton Hubble
b63e9b2589 scripts/devnet: Use a tagged matchbox release in devnet 2017-10-23 13:50:07 -07:00
Dalton Hubble
4a32b0cd59 scripts: Switch default tutorial from rkt to docker 2017-10-23 13:49:09 -07:00
Dalton Hubble
b0b8d97539 examples: Update examples to Kubernetes v1.8.1
* Use bootkube v0.8.0
2017-10-20 15:04:09 -07:00
Dalton Hubble
581be69da7 Merge pull request #659 from rlenferink/master
Documentation: minor documentation changes
2017-10-05 14:01:28 -07:00
Roy Lenferink
dc75fcc869 Documentation: minor improvements
Fixed example hostname in docker run command

Added bash statements for storing certificates
2017-10-05 22:51:12 +02:00
Dalton Hubble
fc3e688c97 Merge pull request #658 from zbwright/fix-link
docs: fix broken link
2017-10-04 17:14:10 -07:00
Beth Wright
f07dc758c4 docs: fix broken link 2017-10-04 16:40:30 -07:00
Dalton Hubble
d2827d7ed0 Merge pull request #656 from coreos/update-kubernetes
examples: Update Kubernetes from v1.7.5 to v1.7.7
2017-10-04 10:13:33 -07:00
Dalton Hubble
692bf81df8 examples: Update Kubernetes from v1.7.5 to v1.7.7
* Update from bootkube v0.6.2 to v0.7.0
* Update kube-dns to fix dnsmasq vulnerability
2017-10-04 09:55:37 -07:00
Dalton Hubble
cfcec6ac03 Merge pull request #655 from coreos/update-terraform-module
examples/terraform: Update bare-metal module version
2017-09-29 10:52:18 -07:00
Dalton Hubble
592969134c examples/terraform: Update bare-metal module version
* Upstream fixes to bump all control plane components to v1.7.5
* Stop including etcd-network-checkpointer with on-host etcd
* Remove experimental_self_hosted_etcd support
2017-09-28 11:25:52 -07:00
Dalton Hubble
2b605c8d9c Merge pull request #653 from coreos/improve-ctx
matchbox: Use Go 1.7 request Context, remove ContextHandler
2017-09-25 17:07:45 -07:00
Dalton Hubble
63a95188be matchbox: Use Go 1.7 request Context, remove ContextHandler
* Starting in Go 1.7, the standard library http.Request includes
a Context for passing request-scoped values between chained handlers
* Delete the ContextHandler (breaking, should not have been
exported to begin with)
2017-09-21 17:12:33 -07:00
Dalton Hubble
5aa301b72d Merge pull request #648 from coreos/bump-container-linux
examples: Bump Container Linux to stable 1465.7.0
2017-09-18 16:35:48 -07:00
Dalton Hubble
7647a5d095 Merge pull request #649 from radhus/add_select_client
matchbox/client: Expose Select endpoint
2017-09-18 15:09:50 -07:00
Dalton Hubble
06f80fa003 examples: Bump Container Linux to stable 1465.7.0 2017-09-18 15:08:08 -07:00
Dalton Hubble
01a767ab3e Merge pull request #651 from coreos/cleanup
examples: Remove unused example module
2017-09-18 14:57:34 -07:00
Dalton Hubble
6be5c0f59c examples: Remove unused example module
* Terraform-based Kubernetes example now uses a community project's
Terraform module to show Matchbox usage
2017-09-18 14:33:51 -07:00
William Johansson
5efc514097 matchbox/client: Expose Select endpoint
Exposes the Select endpoint in matchbox/client just as the other
endpoints like Profiles, Ignition and Generic.
2017-09-17 21:19:37 +02:00
Dalton Hubble
757f46e96f Merge pull request #647 from dvrkps/patch-1
travis: update go versions
2017-09-15 10:43:05 -07:00
Dalton Hubble
5aeb2d1d3d Merge pull request #646 from coreos/update-kubernetes
examples: Update Kubernetes from v1.7.3 to v1.7.5
2017-09-15 10:38:59 -07:00
Davor Kapsa
1119bb22f0 travis: update go versions 2017-09-15 12:15:03 +02:00
Dalton Hubble
6195ae377e examples/ignition: Update kubelet.service to match upstream
* Mount host /opt/cni/bin in Kubelet to use host's CNI plugins
* Switch /var/run/kubelet-pod.uuid to /var/cache/kubelet-pod.uuid
to persist between reboots and cleanup old Kubelet pods
* Organize Kubelet flags in alphabetical order
2017-09-14 16:53:42 -07:00
Dalton Hubble
d7783a94e9 examples: Update Kubernetes from v1.7.3 to v1.7.5
* Switch Terraform example to use Typhoon project's module
instead: https://github.com/poseidon/typhoon
* Includes support for Calico and Flannel
2017-09-14 15:52:58 -07:00
Dalton Hubble
4228ccb330 README: List notable projects using Matchbox 2017-09-11 15:59:05 -07:00
Dalton Hubble
e5d5280658 Merge pull request #644 from squeed/fix-pxe-flag
libvirt: don't pass --pxe
2017-08-22 10:47:31 -07:00
Casey Callendrello
46f0477614 libvirt: don't pass --pxe
In virt-install v1.4.2, the meaning of `--pxe` changed from "allow pxe
boot" to "always pxe boot." This breaks matchbox, since we expect hosts
to pxe-boot only with empty hds. On hosts with v1.4.2, the VMs loop,
re-installing CL over and over.

The flag isn't necessary anyway, since we pass `--boot=hd,network`,
which enables pxe-booting.
2017-08-22 11:19:16 +02:00
Dalton Hubble
0e4265b2bc Merge pull request #643 from coreos/bump-kubernetes
examples: Update Kubernetes from v1.7.1 to v1.7.3
2017-08-21 15:00:57 -07:00
Dalton Hubble
18de74e85b examples: Update Kubernetes from v1.7.1 to v1.7.3 2017-08-21 11:19:39 -07:00
Dalton Hubble
31040e9729 Merge pull request #642 from coreos/bump-fix
Update CLUO version and bootkube-terraform location
2017-08-18 10:28:29 -07:00
Dalton Hubble
f0a4cfd1cb *: Update location of bootkube-terraform module 2017-08-17 15:56:49 -07:00
Dalton Hubble
aeca5b08f9 examples/addons: Update CLUO to v0.3.1 2017-08-17 15:38:34 -07:00
Dalton Hubble
7c1b9b17dc Merge pull request #636 from jcmoraisjr/jm-add-version
Add version.txt download on get-coreos
2017-08-15 17:15:47 -07:00
Dalton Hubble
0e6ce19172 Merge pull request #640 from andrewrothstein/typo
fix typo in documentation
2017-08-15 10:49:50 -07:00
Andrew Rothstein
281fd5226a fix typo 2017-08-14 19:35:49 -04:00
Joao Morais
fb0ee0f05a Add version.txt download on get-coreos
The version.txt file is used by coreos-install if
the version number is "current".
2017-08-09 22:10:59 -03:00
Dalton Hubble
7def0d7e86 Merge pull request #635 from dghubble/better-validation
matchbox/client: Validate client endpoint is a host:port
2017-08-09 14:45:57 -07:00
Dalton Hubble
1c076875c2 matchbox/client: Validate client endpoint is a host:port
* Provide better errors to clients that forget to specify the
port or include a protocol scheme by mistake
* grpc-go uses net.SplitHostPort to validate server listener
addresses are 'host:port', but doesn't validate Dial targets
2017-08-09 10:50:25 -07:00
Dalton Hubble
7ba0f1476b Merge pull request #632 from dghubble/update-ct-and-ignition
glide.yaml: Update ct and Ignition
2017-08-08 13:55:21 -07:00
Dalton Hubble
ec6844a43a glide.yaml: Update ct and Ignition
* Fix container-linux-config-transpiler calls that changed
* Update container-linux-config-transpiler to v0.4.2
* Update Ignition to v0.17.2
2017-08-08 13:30:14 -07:00
Dalton Hubble
6857c1319a Merge pull request #629 from heyitsanthony/etcdctl-api
Documentation: remove ETCDCTL_API=3 settings
2017-08-07 09:48:44 -07:00
Anthony Romano
cb6bb3c90d Documentation: remove ETCDCTL_API=3 settings
etcd examples set ETCDCTL_API=3 but are using v2 etcdctl commands. This
works on CL by accident because it ships with 2.3, so etcdctl doesn't
recognize the API env var.
2017-08-04 23:04:19 -07:00
Dalton Hubble
5c5be5ce5b Merge pull request #628 from alrs/fix-swallowed-test-errors
Fix swallowed errors in server package tests
2017-08-04 17:02:41 -07:00
Lars Lehtonen
4cbf2b7448 Fix swallowed errors in server package tests 2017-08-03 18:59:15 -07:00
Dalton Hubble
d781e43212 Merge pull request #627 from coreos/fix-module-location
*: Fix location of the bootkube-terraform module
2017-08-03 16:09:57 -07:00
Dalton Hubble
3ca88334d2 *: Fix location of the bootkube-terraform module 2017-08-03 14:00:35 -07:00
Dalton Hubble
c7a649c731 Merge pull request #626 from coreos/bump-dnsmasq
*: Bump dnsmasq references to use v0.4.1
2017-08-01 23:21:18 -07:00
Dalton Hubble
d03f256976 *: Bump dnsmasq references to use v0.4.1 2017-08-01 16:47:18 -07:00
Dalton Hubble
9ecfcac0b9 Merge pull request #625 from coreos/dnsmasq
contrib/dnsmasq: Bump dnsmasq image to v0.4.1
2017-08-01 16:17:06 -07:00
Dalton Hubble
035b01634f contrib/dnsmasq: Bump dnsmasq image to v0.4.1
* Update from alpine:3.5 to alpine:3.6
* List ports 67 and 69 so ACI conversion still works
2017-07-31 14:26:05 -07:00
Dalton Hubble
e8d3e8c70c Merge pull request #617 from coreos/kubernetes-v1.7
examples: Update Kubernetes to v1.7.1
2017-07-24 17:14:51 -07:00
Dalton Hubble
cc490ff55d examples: Update Kubernetes to v1.7.1 2017-07-24 15:52:57 -07:00
Dalton Hubble
df6354ad45 Merge pull request #618 from dghubble/cluo
examples/addons: Update CLUO from v0.2.1 to v0.2.2
2017-07-21 16:05:43 -07:00
Dalton Hubble
3d8a3777f0 examples/addons: Update CLUO from v0.2.1 to v0.2.2 2017-07-21 15:12:23 -07:00
Dalton Hubble
dfee550522 Merge pull request #615 from dghubble/in-place-upgrade
Documentation: Refresh Kubernetes in-place upgrade doc
2017-07-21 13:50:00 -07:00
Dalton Hubble
07e9676457 Merge pull request #616 from coreos/bump-cl
examples: Install clusters at Container Linux 1409.7.0 (stable)
2017-07-20 11:52:57 -07:00
Dalton Hubble
a69f6dd2d8 examples: Install clusters at Container Linux 1409.7.0 (stable) 2017-07-20 11:13:43 -07:00
Dalton Hubble
26d8b7d480 Documentation: Refresh Kubernetes in-place upgrade doc 2017-07-19 17:15:12 -07:00
Dalton Hubble
2c02549cd6 Merge branch 'celevra' 2017-07-19 13:06:44 -07:00
Philipp Zeitschel
3c999d27e9 Documentation: Export variables in example commands 2017-07-19 13:04:44 -07:00
Dalton Hubble
52b317dff9 Merge pull request #614 from coreos/kubernetes-v1.6.7
examples: Update Kubernetes from v1.6.6 to v1.6.7
2017-07-19 11:59:12 -07:00
Dalton Hubble
97985b213b examples: Update Kubernetes from v1.6.6 to v1.6.7 2017-07-19 11:30:54 -07:00
Dalton Hubble
1ba353e5b6 Merge pull request #611 from coreos/fix-bootkube-tests
tests/smoke: Fix etcd certs distribution in bootkube test
2017-07-17 14:15:38 -07:00
Dalton Hubble
398d12e148 tests/smoke: Fix etcd certs distribution in bootkube test
* Introduced in ce3154cae9
* Masked by larger-scale timeouts / issues in the testing env
2017-07-17 13:25:48 -07:00
Dalton Hubble
be8fd3d488 Merge pull request #608 from coreos/locksmithd-to-cluo
Switch Kubernetes clusters from locksmith to Container Linux Update Operator
2017-07-17 11:26:14 -07:00
Dalton Hubble
27d1139a07 examples/terraform: Switch Kubernetes to use CLUO
* Users should deploy the Container Linux Update Operator to coordinate
reboots of Container Linux nodes in a Kubernetes cluster
* Write cluster addon docs to describe CLUO
* Terraform modules `bootkube` and `profiles` (Kubernetes) disable
locksmithd
2017-07-14 15:12:53 -07:00
Dalton Hubble
ee3445454e examples: Switch Kubernetes (non-terraform) to use CLUO
* Use the container linux update operator to coordinate reboots
* Stop using locksmithd for reboot coordination
* etcd TLS assets now only need to be distributed to controller
nodes which are etcd peers
2017-07-14 14:11:33 -07:00
Dalton Hubble
170f8c09ec Merge pull request #605 from coreos/fix-bootkube-version
scripts/dev: Update bootkube render binary for tests
2017-07-14 10:23:35 -07:00
Dalton Hubble
e10525ded0 scripts/dev: Fix bootkube render binary for tests 2017-07-13 10:26:30 -07:00
Dalton Hubble
4c47adf390 Merge pull request #604 from coreos/bootkube-v0.5.0
examples: Update terraform Kubernetes to use bootkube v0.5.0
2017-07-13 09:37:41 -07:00
Dalton Hubble
ce3154cae9 examples: Update terraform Kubernetes to use bootkube v0.5.0 2017-07-12 20:13:04 -07:00
Dalton Hubble
5e54960a92 Merge pull request #603 from coreos/non-terraform-bootkube
Update non-terraform Kubernetes to use bootkube v0.5.0
2017-07-12 15:27:16 -07:00
Dalton Hubble
e008b8ea5e Jenkinsfile: Bump Kubernetes test timeouts
* Hyperkube image downloads can be very slow, though the
clusters themselves are considered correctly configured
2017-07-12 13:42:34 -07:00
Dalton Hubble
b636fc7a3d examples: Update non-terraform Kubernetes to use bootkube v0.5.0 2017-07-12 13:41:33 -07:00
Dalton Hubble
30cf06853d Merge pull request #597 from ivy/doc-tweaks
Documentation tweaks
2017-07-10 11:46:43 -07:00
Ivy Evans
61377d2955 Documentation: Add syntax highlighting for example 2017-07-06 18:38:57 -07:00
Ivy Evans
a7ba7714f5 Documentation: Fix typo "template" => "templates" 2017-07-06 18:34:26 -07:00
Dalton Hubble
ff916686e7 Merge pull request #596 from euank/retry-curl
examples: include 'curl' retries
2017-06-30 14:53:08 -07:00
Euan Kemp
fbc4b39c59 examples: include 'curl' retries
`After=network-online.target` *should* mean this isn't needed in most
cases, but per
https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/, the
definition of "network-online" is a little shaky.

Regardless, being a little more resilient to network flakes and races is
a good thing. The count of `10` was arbitrarily chosen.
2017-06-30 10:58:51 -07:00
Dalton Hubble
be46b389bf Merge pull request #594 from sdemos/master
scripts/devnet: open port 8081 when using docker
2017-06-28 14:51:53 -07:00
Stephen Demos
a14e6c8bb9 scripts/devnet: open port 8081 when using docker
otherwise the gRPC server is not accessible
2017-06-28 14:10:07 -07:00
Dalton Hubble
c03b7a9627 Merge branch 'readme-cleanup' 2017-06-26 17:38:48 -07:00
Dalton Hubble
ac40eeedb5 README: Remove duplicated Tectonic docs
* Ensure that Matchbox (open-source) and Tectonic (enterprise)
are kept separate; Tectonic has its own docs
* Matchbox is agnostic to Kubernetes distribution
2017-06-26 17:03:10 -07:00
Dalton Hubble
9e23f3a86d examples: Fix LOCKSMITHD_ENDPOINT protocol to be https
* Fix auto-update issue introduced in 6f02107 which occurs
when self-hosted etcd is used and locksmithd cannot auth
* See #590
2017-06-26 16:02:17 -07:00
Dalton Hubble
d1baa3fb65 Merge pull request #591 from coreos/fix-locksmithd
examples: Use etcd client certs in locksmithd dropin
2017-06-26 15:11:33 -07:00
Dalton Hubble
c915fc2b52 examples: Use etcd client certs in locksmithd dropin
* Fixes a regression introduced in 6f02107 which upgraded to
Kubernetes v1.6.6 and added self-hosted etcd with TLS
* Both on-host and self-hosted etcd now require clients to use
TLS client certs, so locksmithd must present them as well
2017-06-26 14:39:54 -07:00
Dalton Hubble
6f02107448 Merge pull request #585 from coreos/kubernetes-upgrade
examples: Upgrade Kubernetes to v1.6.6
2017-06-24 15:02:20 -07:00
Dalton Hubble
ff06990edb examples: Upgrade Kubernetes to v1.6.6
* Upgrade to bootkube v0.4.5
* Enable TLS for experimental self-hosted etcd
* Upstream manifest generation changes modify the flannel
Daemonset, switch several control plane components to run
as non-root, and add an explicit UpdateStrategy to the
control plane components
2017-06-24 14:39:10 -07:00
Dalton Hubble
9bc6edc65b Merge pull request #583 from coreos/etcd3-update
examples: Update etcd3 from v3.1.6 to v3.2.0
2017-06-16 15:19:02 -07:00
Dalton Hubble
5b8006ae35 examples: Update etcd3 from v3.1.6 to v3.2.0 2017-06-16 14:23:38 -07:00
Dalton Hubble
ff5cd0468e Merge pull request #547 from coreos/enable-bootkube-tests
Re-enable bootkube-terraform cluster tests
2017-06-15 16:56:30 -07:00
Dalton Hubble
4d9bd82c12 tests/smoke: Re-enable bootkube-terraform cluster tests
* Simplify script to not launch subshells
* Verify tests don't leave behind processes running terraform apply
2017-06-15 11:59:34 -07:00
Dalton Hubble
882793f230 Merge pull request #577 from notnamed/patch-1
Correct path to client.crt and client.key
2017-06-15 11:31:05 -07:00
Dalton Hubble
858e1bda73 Merge pull request #572 from coreos/allow-docker
scripts: Improve devnet script to allow using rkt or docker
2017-06-15 11:30:52 -07:00
Dalton Hubble
cfbb9cebd0 scripts: Improve devnet script to allow using rkt or docker
* Add create, status, and destroy subcommands that use docker as
the container runtime for testing local QEMU/KVM clusters. Before,
only rkt could be used.
* Update local QEMU/KVM tutorial documentation
2017-06-15 11:06:22 -07:00
Jordan Cooks
edbe5bab20 Correct path to client.crt and client.key
gRPC API verification step has invalid paths to client.crt and client.key; these are created in ~/matchbox-v0.6.1-linux-amd64/scripts/tls (depending on where the matchbox installer is extracted).
2017-06-14 09:19:55 -07:00
Dalton Hubble
299701e7ea Merge pull request #576 from coreos/fix-ingress-resource
contrib/k8s: Use two Ingress resources for HTTP and TLS gRPC
2017-06-13 17:15:02 -07:00
Dalton Hubble
a20720a0d4 contrib/k8s: Use two Ingress resources for HTTP and TLS gRPC
* Fixes Ingress controller issue upgrading from nginx-ingress-controller
0.9-beta.3 to 0.9-beta.4 through 0.9-beta.7
2017-06-13 14:06:53 -07:00
Dalton Hubble
5a9c24ceb3 Merge pull request #573 from coreos/base-image
Dockerfile: Update base image from alpine:3.5 to alpine:3.6
2017-06-13 09:57:52 -07:00
Dalton Hubble
82af3f747d Dockerfile: Update base image from alpine:3.5 to alpine:3.6 2017-06-12 16:45:18 -07:00
Dalton Hubble
e955fecd30 Merge pull request #571 from coreos/missing-output
examples/terraform/modules: Add outputs.tf with kubeconfig
2017-06-12 14:18:31 -07:00
Dalton Hubble
0c1e20db27 Merge pull request #569 from coreos/deprecate-cloud
matchbox,Documentation: Mark Cloud-Config as deprecated
2017-06-12 09:48:29 -07:00
Dalton Hubble
8d6d0397ff examples/terraform/modules: Add outputs.tf with kubeconfig 2017-06-12 00:46:14 -07:00
Dalton Hubble
abc7eb8dfb Merge pull request #568 from dghubble/changelog
CHANGES.md: Add missing changelog notes
2017-06-09 11:18:41 -07:00
Dalton Hubble
149f441ad8 matchbox,Documentation: Mark Cloud-Config as deprecated
* Warn that Cloud-Config support will be removed in the
future
2017-06-09 10:53:49 -07:00
Dalton Hubble
cf43908a72 CHANGES.md: Add missing changelog notes 2017-06-09 10:35:27 -07:00
Benjamin Gilbert
523b15ed13 Merge pull request #567 from bgilbert/container-linux
*: CoreOS -> Container Linux
2017-06-08 15:33:37 -07:00
Benjamin Gilbert
aac270e937 README: Shorten line 2017-06-08 15:14:03 -07:00
Dalton Hubble
1cfdce2970 Merge branch 'add-generic' 2017-06-08 14:37:18 -07:00
Benjamin Gilbert
9d3d08a26f *: CoreOS -> Container Linux 2017-06-08 12:29:00 -07:00
Wagner Sartori Junior
b176de805e cli,client,http,rpc,server,storage: Add gRPC API for generic (experimental) templates
Matchbox added generic template support to enable experimenting with
rendering different kinds of templates, beyond Container Linux configs
and cloud-configs. We'd like to add a gRPC endpoint for generic
templates, as is done for other configs to support gRPC clients.
2017-06-08 11:34:09 -07:00
Dalton Hubble
009b44b25d Merge pull request #566 from coreos/on-host-etcd-tls
examples: Use Kubernetes on-host etcd TLS
2017-06-08 09:51:44 -07:00
Dalton Hubble
57e473b6f5 examples/terraform: Enable on-host etcd TLS for terraform-based bootkube 2017-06-07 16:38:54 -07:00
Dalton Hubble
66cd8da417 examples: Use Kubernetes on-host etcd TLS
* etcd3 cluster requires peers and clients to be TLS authenticated
* kube-apiserver (incl. bootstrap) communicates with TLS
authenticated on-host etcd cluster
2017-06-07 10:56:55 -07:00
Dalton Hubble
50a3d11414 Merge pull request #564 from coreos/remove-cmdline
matchbox: Remove Profile cmdline map field
2017-06-06 13:53:14 -07:00
Dalton Hubble
6fa13007c8 matchbox: Remove Profile cmdline map field 2017-06-05 13:04:09 -07:00
Dalton Hubble
500a7b25e1 Merge pull request #561 from joshix/patch-1
Doc/deployment.md: Cp local config to correct location
2017-06-02 14:35:45 -07:00
Josh Wood
951e5ec4a3 Doc/deployment.md: Cp local config to correct location
Copy matchbox-local.service to /etc/systemd/system/matchbox.service
rather than bare dir.
2017-06-02 14:11:19 -07:00
Dalton Hubble
f92743fa57 Merge pull request #556 from coreos/terraform-improvements
Add some minor Terraform variables
2017-06-01 11:12:01 -07:00
Dalton Hubble
d84bb8e398 examples/terraform: Configure whether to install CL from cache
* Module "profiles" provides container-linux-install and
cached-container-linux-install Profiles
* Module bootkube accepts cached_install variable to determine
whether the cluster should install Container Linux from cache
or from the public download site (default)
2017-05-31 13:57:12 -07:00
Dalton Hubble
d54562f429 examples/terraform: Add install_disk optional override 2017-05-30 16:00:37 -07:00
Dalton Hubble
395494c1d9 examples/terraform: Template variables early where possible 2017-05-30 16:00:37 -07:00
Dalton Hubble
ddbe17cd31 Merge pull request #555 from coreos/declarative-jenkinsfile
Jenkinsfile: Switch to declarative-style Jenkins pipeline
2017-05-26 16:34:27 -07:00
Dalton Hubble
b1a866370a Jenkinsfile: Cleanup workspace directories 2017-05-26 14:40:34 -07:00
Dalton Hubble
b8326e6db6 Jenkinsfile: Switch to declarative-style Jenkins pipeline 2017-05-26 11:17:14 -07:00
Dalton Hubble
7864e64fd2 Merge pull request #554 from dghubble/documentation-fix
*: Update docs references to v0.6.1
2017-05-25 14:39:09 -07:00
Dalton Hubble
89bb5125b5 *: Update docs references to v0.6.1 2017-05-25 14:24:04 -07:00
Dalton Hubble
cff053328d Merge pull request #551 from coreos/prep-point-release
CHANGES.md: Prepare for a v0.6.1 docs point release
2017-05-25 10:43:58 -07:00
Dalton Hubble
698b6f6118 CHANGES.md: Prepare for a v0.6.1 docs point release 2017-05-25 10:27:43 -07:00
Dalton Hubble
23f23c1dcb Merge pull request #552 from coreos/go-bump
Update openpgp package and bump Go to 1.8.3
2017-05-24 15:39:35 -07:00
Dalton Hubble
51cf859587 glide.yaml: Update and vendor the crypto openpgp package 2017-05-24 15:28:16 -07:00
Dalton Hubble
8061f57346 travis.yml: Use Go 1.8.3 in tests and published images 2017-05-24 15:14:31 -07:00
Dalton Hubble
8000c323b6 Merge pull request #524 from coreos/organize-scripts
scripts: Organize dev-only scripts and use a single scripts/tls location
2017-05-24 14:21:00 -07:00
Dalton Hubble
314a317271 scripts: Move examples/etc/matchbox to scripts/tls
* Use the same TLS cert-gen location in source as in releases
2017-05-24 13:19:21 -07:00
Dalton Hubble
d437167ebf scripts: Move development-only scripts under scripts/dev 2017-05-24 10:15:24 -07:00
Dalton Hubble
4067702641 Merge pull request #548 from coreos/multi-controller
examples/terraform: Add tfvars showing multi-controller case
2017-05-24 09:49:21 -07:00
Dalton Hubble
86c07da76e examples/terraform: Add tfvars showing multi-controller case 2017-05-23 15:54:18 -07:00
Dalton Hubble
be00fdbca0 Merge pull request #546 from coreos/update-container-linux
Bump Container Linux version to stable 1353.7.0
2017-05-23 12:09:05 -07:00
enilfodne
abbf7faf56 examples: Bump Container Linux version to stable 1353.7.0 2017-05-23 11:01:24 -07:00
Dalton Hubble
76cc8cb13c scripts: Remove unused static k8s generation scripts
* Remove static rktnetes cluster docs
* Bump devnet matchbox version
2017-05-22 18:11:11 -07:00
Dalton Hubble
ed6dde528a Merge pull request #543 from coreos/remove-pixiecore
Remove pixiecore handler and support
2017-05-22 17:51:21 -07:00
Dalton Hubble
1e095661ad matchbox: Remove pixiecore handler and support
* Pixiecore was deprecated in v0.5.0 and can be removed
2017-05-22 17:13:02 -07:00
Dalton Hubble
3f70f9f2e5 Merge pull request #544 from coreos/remove-static-kubernetes
Remove static Kubernetes and rktnetes example clusters
2017-05-22 17:11:11 -07:00
Dalton Hubble
dabba64850 examples: Remove static Kubernetes and rktnetes example clusters
* Static Kubernetes / rktnetes examples are no longer going to be
maintained by this repo or upgraded to Kubernetes v1.6. This is not
considered a deprecation because the reference clusters are examples.
* Remove static Kubernetes cluster examples so users don't choose it
* Self-hosted Kubernetes (bootkube) is now the standard recommended
Kubernetes cluster configuration
2017-05-22 16:13:26 -07:00
Dalton Hubble
7a2764b17b Merge pull request #542 from coreos/disable-terraform-tests
tests: Temporarily disable bootkube (terraform-based) cluster testing
2017-05-22 16:11:29 -07:00
Dalton Hubble
9de41e29ab scripts/test: Fix fmt test for local tests
* examples/terraform modules may contain Go files which
should be ignored
2017-05-22 15:55:19 -07:00
Dalton Hubble
0592503652 tests/smoke: Get nodes/pods should not fail bootkube tests
* Listing pods or nodes as the final step of cluster creation should
not fail the entire build; it's mainly for pretty output
* There is no official definition of when a Kubernetes cluster is
"done" bootstrapping; clusters can momentarily fail to respond in the
first minute or so as components stabilize
2017-05-22 15:12:29 -07:00
Dalton Hubble
40926b6d0f tests: Temporarily disable bootkube (terraform-based) tests 2017-05-22 14:51:25 -07:00
Dalton Hubble
859ea5888b Merge pull request #538 from coreos/kubernetes-upgrade
Update Kubernetes from v1.6.2 to v1.6.4
2017-05-19 20:44:51 -07:00
Dalton Hubble
1736af5024 tests/smoke: Be sure terraform destroy runs 2017-05-19 18:08:50 -07:00
Dalton Hubble
c476cf8928 examples: Update Kubernetes clusters to v1.6.4
* Update bootkube example cluster to v1.6.4
* Update bootkube (terraform-based) cluster to v1.6.4
* Update bootkube Terraform module to v1.6.4
* Uses bootkube v0.4.4
2017-05-19 16:52:37 -07:00
Dalton Hubble
a47087ec6a Merge pull request #536 from coreos/calc-ips
Calculate Kubernetes service IPs based on the service CIDR
2017-05-19 16:46:48 -07:00
Dalton Hubble
0961e50f64 examples: Remove Kubernetes service IP inputs
* Calculate the required service IP values from the service CIDR
* These inputs were never truly customizable anyway since bootkube
start assumed the 1st, 10th, and 15th offsets for named services
2017-05-19 15:05:42 -07:00
Dalton Hubble
7a017c2d7d Merge pull request #537 from coreos/etcd3-terraform-state
tests/smoke: Ensure etcd3-terraform tests cleans state
2017-05-19 13:21:31 -07:00
Dalton Hubble
41aaad3d6f tests/smoke: Ensure etcd3-terraform tests cleans state 2017-05-19 12:41:37 -07:00
Dalton Hubble
ddf1f88cb9 Merge pull request #535 from coreos/bootkube-tests
tests: Add cluster tests for bootkube-install (terraform-based)
2017-05-19 11:39:55 -07:00
Dalton Hubble
af8abc7dc2 tests: Add cluster tests for bootkube-install (terraform-based)
* Terraform-based cluster examples are doing disk installs so they
take a bit longer than their counterparts
2017-05-19 10:14:22 -07:00
Dalton Hubble
0d2173e446 Merge pull request #534 from coreos/bootkube-v0.4.3
examples: Update Kubernetes to use bootkube v0.4.3
2017-05-18 16:10:00 -07:00
Dalton Hubble
e9bf13963c examples: Update Kubernetes to use bootkube v0.4.3
* Update terraform-based bootkube-install cluster example
* Update manual bootkube cluster example
2017-05-18 15:37:51 -07:00
Dalton Hubble
dbba1316b2 Merge branch 'support-oem' 2017-05-18 12:04:38 -07:00
enilfodne
34d0f5003a examples/terraform: Add support for OEM images 2017-05-18 04:43:24 +03:00
Dalton Hubble
79e5240d3f Merge pull request #531 from coreos/examples-and-links
Organize README examples listing and links
2017-05-17 16:46:10 -07:00
Dalton Hubble
46dd95da0c README: Organize examples listing and links 2017-05-17 16:32:00 -07:00
Dalton Hubble
f6522a561b Merge pull request #528 from coreos/controller-taints
examples: Add NoSchedule taint to bootkube controllers
2017-05-15 16:49:08 -07:00
Dalton Hubble
e4fdcb204e examples: Add NoSchedule taint to bootkube controllers 2017-05-15 13:50:19 -07:00
Dalton Hubble
81e00d7e79 Merge pull request #522 from coreos/bootkube-automate
examples/terraform: Automate terraform-based bootkube-install
2017-05-15 13:43:54 -07:00
Dalton Hubble
06a9a28d7c examples/terraform: Add optional variables commented out 2017-05-15 13:11:48 -07:00
Dalton Hubble
756c28f2fc examples/terraform: Fix terraform fmt 2017-05-14 14:14:47 -07:00
Dalton Hubble
cc240286f3 examples/terraform: Automate terraform-based bootkube-install
* Use the dghubble/bootkube-terraform terraform module to generate
the exact same assets that `bootkube render` would
* Use terraform to automate the kubeconfig copy and bootkube start
* Removes the requirement to download a bootkube binary, render assets,
and manually copy assets to nodes
2017-05-14 14:14:10 -07:00
Dalton Hubble
75e428aece Merge pull request #520 from coreos/etcd3-terraform
Jenkinsfile,tests: Add etcd3-terraform cluster to pipeline
2017-05-12 15:46:14 -07:00
Dalton Hubble
51c4371e39 Jenkinsfile,tests: Add etcd3-terraform cluster to pipeline
* Test the Terraform-based etcd3 cluster in parallel
2017-05-12 14:54:42 -07:00
Dalton Hubble
ef85730d69 Merge pull request #517 from dghubble/self-hosted-etcd
examples/terraform: Add experimental self-hosted etcd option
2017-05-10 09:55:33 -07:00
Dalton Hubble
3752ee78d5 Merge pull request #519 from brianredbeard/source-url-fix
contrib/rpm: Fixing the source URL format
2017-05-09 20:35:21 -04:00
Brian 'Redbeard' Harrington
ea9042e86e contrib/rpm: Fixing the source URL format
Fixing the source URL format to conform to more normative rpmbuild
standards and to allow for proper use of spectool/rpmspectool.  This
change now produces a proper archive with the name and version number
used.
2017-05-09 17:26:42 -07:00
Dalton Hubble
d4e33efb38 Merge pull request #516 from coreos/local-disk-size
scripts/libvirt: Allow QEMU/KVM disk size to be customized
2017-05-09 17:37:19 -04:00
Dalton Hubble
459ce2d8bc examples/terraform: Add experimental self-hosted etcd option
* Add an option to try experimental self-hosted etcd which uses
the etcd-operator to deploy an etcd cluster as pods atop Kubernetes
and disables the on-host etcd cluster
* When enabled, configure locksmithd to coordinate reboots through
self-hosted etcd
2017-05-09 14:00:51 -07:00
Dalton Hubble
31ed8dba2f scripts/libvirt: Allow QEMU/KVM disk size to be customized 2017-05-08 16:43:38 -07:00
Dalton Hubble
2d69b2d734 Merge pull request #514 from coreos/container-install
Documentation: Add missing mkdir for rkt/docker installation
2017-05-08 18:13:01 -04:00
Dalton Hubble
2aea18e048 Documentation: Add missing mkdir for rkt/docker installation 2017-05-08 13:47:00 -07:00
Dalton Hubble
c2e5196d1a Merge pull request #510 from dghubble/squid-proxy
Add squid proxy docs as contrib drafts
2017-05-02 17:47:26 -07:00
Dalton Hubble
47d3dbacb1 contrib/squid: Move Squid docs to contrib as a draft 2017-05-02 14:11:02 -07:00
Daneyon Hansen
5e2adb1eda Adds documentation for using a Squid proxy with Matchbox. 2017-05-02 13:57:30 -07:00
Dalton Hubble
7ee68aa1a4 Merge pull request #509 from coreos/improve-examples
Improve terraform examples, tutorials, and re-usable modules
2017-05-02 13:12:57 -07:00
Dalton Hubble
e1cabcf8e8 examples/terraform: Add etcd3 tutorial and Terraform modules doc 2017-05-02 12:56:08 -07:00
Dalton Hubble
6500ed51f3 examples/terraform: Improve configurability of cluster examples
* Add matchbox_http_endpoint and matchbox_rpc_endpoint as variables
* Remove dghubble ssh public key from default
* Add a terraform.tfvars.example and gitignore terraform.tfvars
2017-05-01 21:25:12 -07:00
Dalton Hubble
4fb3ea2c7e examples/terraform: Rename coreos-install to container-linux-install
* Add container-linux-install profile to install Container Linux
* Add cached-container-linux-install profile to install Container Linux
from cached matchbox assets
2017-05-01 17:54:18 -07:00
Dalton Hubble
b1beebe855 Merge pull request #506 from coreos/bootkube-v0.4.2
examples: Update from bootkube v0.4.1 to v0.4.2
2017-05-01 16:48:39 -07:00
Dalton Hubble
6743944390 examples: Update from bootkube v0.4.1 to v0.4.2
* Contains a few fixes to bootkube logging and checkpointing
2017-05-01 15:31:29 -07:00
Dalton Hubble
4451425db8 Merge pull request #505 from danehans/issue_502
examples: updates terraform readme to include get
2017-04-28 11:13:36 -07:00
Daneyon Hansen
23959a4dd2 examples: updates terraform readme to include get
Previously, the terraform readme was incomplete, covering only the
terraform plan and apply commands. The readme was also updated to
include instructions for updating the profiles module source.

Fixes #502
2017-04-28 11:28:07 -06:00
Dalton Hubble
0825fd2492 Merge pull request #504 from coreos/bootkube-bump
examples: Update self-hosted Kubernetes to v1.6.2
2017-04-27 17:59:01 -07:00
Dalton Hubble
bb08cd5087 examples: Update self-hosted Kubernetes to v1.6.2 2017-04-27 17:47:59 -07:00
Dalton Hubble
a117af6500 Merge pull request #503 from coreos/init-flannel
examples/ignition: Remove --fail from curl PUT/POST's
2017-04-27 15:39:32 -07:00
Dalton Hubble
4304ee2aa5 examples/ignition: Remove --fail from curl PUT/POST's
* Reverts parts of #470
2017-04-27 13:38:30 -07:00
Dalton Hubble
6d6879ca4a Merge pull request #501 from dghubble/copr-fix
contrib/rpm: Bump to re-build RPM release now Copr is fixed
2017-04-25 17:39:39 -07:00
Dalton Hubble
cf301eed45 Merge pull request #500 from dghubble/fix-signing-docs
Documentation/dev/release: Update commands used for signing
2017-04-25 17:37:16 -07:00
Dalton Hubble
7bbd1f651f contrib/rpm: Bump to re-build RPM release now Copr is fixed 2017-04-25 17:34:49 -07:00
Dalton Hubble
6455528f3c Documentation/dev/release: Update commands used for signing 2017-04-25 16:46:27 -07:00
Dalton Hubble
a6fde5a0c6 Merge pull request #496 from coreos/add-rpm-spec
contrib/rpm: Add matchbox RPM spec file
2017-04-25 11:28:16 -07:00
Dalton Hubble
32baac329d Merge pull request #497 from coreos/caps-retain
Documentation: Add back original rkt run dnsmasq --caps-retain
2017-04-25 11:27:58 -07:00
Dalton Hubble
73d40db168 Documentation: Add back original dnsmasq Linux --caps-retain 2017-04-24 17:08:55 -07:00
Dalton Hubble
96259aa5da contrib/rpm: Add matchbox RPM spec file 2017-04-24 16:43:29 -07:00
Dalton Hubble
fed01db5a6 README: Add v0.6.0 release announcement 2017-04-24 11:24:24 -07:00
Dalton Hubble
c8af40108f Merge pull request #495 from coreos/prep-release
*: Prepare for v0.6.0 release
2017-04-24 11:15:42 -07:00
Dalton Hubble
bcae94efc7 Merge pull request #493 from dghubble/terraform-tutorial
Documentation,examples: Add getting started tutorial with terraform
2017-04-24 11:13:33 -07:00
Dalton Hubble
348b48d886 *: Prepare for v0.6.0 release 2017-04-24 11:04:50 -07:00
Dalton Hubble
2bc6934e44 Documentation,examples: Add getting started tutorial w terraform 2017-04-24 10:52:52 -07:00
Dalton Hubble
6a53726119 Merge pull request #492 from coreos/update-cl
Update Container Linux to 1298.7.0 and pin dev/test matchbox version
2017-04-24 10:07:18 -07:00
Dalton Hubble
64168bc42e Merge pull request #494 from coreos/sync-bootkube
examples: Sync with bootkube and etcd3
2017-04-22 20:05:58 -07:00
Dalton Hubble
37b050db3e examples: Update etcd3 from 3.1.0 to 3.1.6 2017-04-22 19:34:37 -07:00
Dalton Hubble
4e544a8f39 examples: Update self-hosted Kubernetes node labels
* Update kubelet wrapper env variables and location
2017-04-22 19:34:37 -07:00
Dalton Hubble
c0c43abf49 examples: Update Container Linux image from 1235.9.0 to 1298.7.0 2017-04-21 14:28:58 -07:00
Dalton Hubble
12fc4f37cc scripts: Set the matchbox tag development and test run against
* Stop using the latest tag, since this may be cached (i.e. rkt)
2017-04-21 13:54:31 -07:00
Dalton Hubble
1cefbe5d97 Merge pull request #491 from dghubble/readme-structure
README: Update README structure and description
2017-04-21 11:40:10 -07:00
Dalton Hubble
2b96139ff7 README: Update README structure and description 2017-04-21 11:33:18 -07:00
Dalton Hubble
fa5a76d9de Merge pull request #489 from coreos/fix-terraform-bootkube
examples/terraform: Fix bootkube worker etcd_endpoints port
2017-04-20 14:58:20 -07:00
Dalton Hubble
e30f800b2b examples/terraform: Fix bootkube worker etcd_endpoints port
* The Terraform example typos the port number in the etcd_endpoints
* Causes the worker etcd-gateway to fail, so Container Linux updates may
not be coordinated by locksmith
2017-04-19 22:13:36 -07:00
Dalton Hubble
7dfb04c4af Merge pull request #487 from coreos/dnsmasq-docs-fixes
Update docs, changelog, and scripts for dnsmasq:v0.4.0
2017-04-19 14:53:32 -07:00
Dalton Hubble
45bece3cf7 Documentation: Update dnsmasq image version mentions
* Update coreos.com/dnsmasq references to quay.io/coreos/dnsmasq
with the v0.4.0 when specifying the tag
* Add CHANGES.md for dnsmasq image releases
2017-04-19 14:36:12 -07:00
Dalton Hubble
fa31b0a58c Merge pull request #484 from coreos/dnsmasq-update
contrib/dnsmasq: Add dnsmasq Makefile, v0.4.0 bump
2017-04-19 00:07:33 -07:00
Dalton Hubble
fbbd1b88f7 scripts/devnet: Fix rkt run quay.io/coreos/dnsmasq
* Set --dns=host so /etc/resolv.conf is not empty
* Set Linux capabilities to run dnsmasq
2017-04-18 20:00:51 -07:00
Dalton Hubble
8aac29bdf1 Merge pull request #482 from dghubble/changes
CHANGE.md: Update changelog with notable recent changes
2017-04-18 14:00:54 -07:00
Dalton Hubble
d2fdc8bfab contrib/dnsmasq: Add dnsmasq Makefile, v0.4.0 bump
* Add grub.efi to get-tftp-files script. This matches prior
dnsmasq images, but was not part of a repeatable build
* Switch rkt run examples to pull from quay.io
* Remove script using acbuild to create ACIs
2017-04-18 13:22:27 -07:00
Dalton Hubble
c66360bee0 Merge pull request #479 from coreos/update-ignition-cloud-ct
Update Ignition, Container Linux Config transpiler, and Cloud-init
2017-04-18 11:38:02 -07:00
Dalton Hubble
c9d8fcfbc1 Merge pull request #480 from coreos/remove-bootcmd
Makefile: Remove bootcmd from the release tarball
2017-04-18 11:22:42 -07:00
Dalton Hubble
311f1ec7cd Documentation: Move ignition.md to container-linux-config.md 2017-04-18 10:22:38 -07:00
Dalton Hubble
32d48018e1 glide.yaml: Update and vendor coreos-cloudinit v1.13.0 2017-04-18 10:22:38 -07:00
Dalton Hubble
a948a97339 glide.yaml: Vendor Container Linux Config transpiler
* Update and vendor ct, Ignition, and deps
2017-04-18 10:22:38 -07:00
Dalton Hubble
3f43e4ecb6 matchbox,docs: Switch from Fuze to Container Linux Config
* Container Linux Configs are Fuze configs, just renamed
2017-04-18 10:20:44 -07:00
Chris Jones
2a83612ffb matchbox/http: Update grub endpoint to use profile kernel args 2017-04-18 00:31:08 -07:00
Dalton Hubble
bf7c6abc1d CHANGE.md: Update changelog with notable recent changes
* Fix a few remaining CoreOS -> Container Linux cases
2017-04-17 22:49:24 -07:00
Dalton Hubble
ed57a2a04a Makefile: Remove bootcmd from the release tarball
* Stop shipping or mentioning bootcmd CLI tool
  * bootcmd has not been built out into a user-facing tool
  * terraform-provider-matchbox addresses some of the need
* Keep bootcmd implementation as an example matchbox gRPC client
2017-04-17 22:16:17 -07:00
Dalton Hubble
fd2c5e303d Merge pull request #477 from coreos/terraform-examples
Add terraform examples for etcd3 and self-hosted Kubernetes
2017-04-17 13:56:55 -07:00
Dalton Hubble
2eed5fdf58 Add terraform examples for etcd3 and self-hosted Kubernetes 2017-04-17 11:31:33 -07:00
Dalton Hubble
54f0cc51ba Merge pull request #476 from coreos/upgrade-kubernetes
examples: Upgrade self-hosted Kubernetes to v1.6.1
2017-04-15 00:20:18 -07:00
Dalton Hubble
bd17dd07a3 examples: Upgrade self-hosted Kubernetes to v1.6.1
* Render self-hosted assets with bootkube v0.4.0
* Relax bootkube smoke test Jenkins timeout
2017-04-14 21:58:12 -07:00
Dalton Hubble
f162ab8943 Merge pull request #475 from coreos/matchbox-on-kubernetes
contrib/k8s: Run matchbox on Kubernetes behind Ingress
2017-04-14 18:40:35 -07:00
Dalton Hubble
370790804b contrib/k8s: Run matchbox on Kubernetes behind Ingress
* Show matchbox deployment, service, and TLS secret creation
* Provide an Ingress resource for exposing HTTP and gRPC APIs
* Add note mentioning matchbox can be run publicly if best practices
are followed
2017-04-14 15:07:31 -07:00
Dalton Hubble
9a42fb0701 Merge pull request #474 from coreos/terraform-experiment
README: Announce Matchbox Terraform Provider experiment
2017-04-12 13:07:24 -07:00
Dalton Hubble
a93a7f12bb README: Announce Matchbox Terraform Provider experiment 2017-04-12 11:37:06 -07:00
Dalton Hubble
5eb257f2eb Merge pull request #472 from coreos/ignition-get-delete
Add IgnitionGet and IgnitionDelete gRPC methods
2017-04-12 10:14:24 -07:00
Dalton Hubble
43ce5c1d91 matchbox/rpc: Add IgnitionGet and IgnitionDelete gRPC methods 2017-04-11 13:35:40 -07:00
Dalton Hubble
6bbf4a30a6 matchbox/storage: Add Ignition deletes to the Store 2017-04-11 12:01:34 -07:00
Dalton Hubble
d65b1b58ec Merge pull request #469 from coreos/update-protobuf
Update protoc, Go protobuf plugin, and gRPC package
2017-04-11 11:31:00 -07:00
Dalton Hubble
7aaf0bce1e glide.yaml: Update and vendor golang protobuf plugin and gRPC
* Update Go protobuf plugin to a recent SHA
* Update Go gRPC package to v1.2.1
* Regenerate code from proto files (no changes)
2017-04-11 11:12:04 -07:00
Dalton Hubble
a0a508b16b scripts/get-protoc: Update protoc from 3.1.0 to 3.2.0
* Update protoc Protocol Buffer Compiler codegen tool
2017-04-11 11:12:04 -07:00
Dalton Hubble
e5f428d412 scripts: Remove unused gentools script 2017-04-11 11:12:04 -07:00
Dalton Hubble
585ce50284 Merge pull request #468 from coreos/deletion-apis
gRPC: Add ProfileDelete and GroupDelete gRPC methods
2017-04-11 11:03:23 -07:00
Dalton Hubble
fcdabd2f23 Merge pull request #470 from coreos/curl-exit-code
examples/ignition: Return non-zero exit code for curl failures
2017-04-11 10:38:13 -07:00
Dalton Hubble
ebfc9b3f57 examples/ignition: Return non-zero exit code for curl failures 2017-04-10 16:48:52 -07:00
Dalton Hubble
3464e38c85 matchbox/rpc: Add ProfileDelete and GroupDelete gRPC methods 2017-04-10 14:52:02 -07:00
Dalton Hubble
81989cc64e matchbox/storage: Add profile and group deletes to the Store
* Add deleteFile to the Dir restricted filesystem accessor
2017-04-10 13:44:09 -07:00
Dalton Hubble
7e05672ee7 Merge pull request #466 from coreos/update-bootkube
examples: Update self-hosted Kubernetes to v1.5.6
2017-04-05 10:51:16 -07:00
Dalton Hubble
5cd275bdc1 Jenkinsfile: Relax static Kubernetes bring-up timeout 2017-04-05 09:58:16 -07:00
Dalton Hubble
1537676484 examples: Update self-hosted Kubernetes to v1.5.6
* Bump bootkube to v0.3.13
* Use hyperkube v1.5.6_coreos.0
2017-04-05 09:55:21 -07:00
Dalton Hubble
9a3347f1b5 Merge pull request #465 from coreos/vm-memory
scripts: Allow libvirt VM_MEMORY configuration
2017-04-03 20:03:21 -07:00
Dalton Hubble
7787c6b787 scripts: Allow libvirt VM_MEMORY configuration 2017-04-03 14:48:48 -07:00
Dalton Hubble
630026a1ae Merge pull request #464 from coreos/update-kubernetes
examples: Update Kubernetes (static, self-hosted, rktnetes) to v1.5.5
2017-03-21 22:14:40 -07:00
Dalton Hubble
ca4ab1a230 examples: Update Kubernetes (static, self-hosted, rktnetes) to v1.5.5
* Bump kubectl version to v1.5.5
* Bump bootkube to v0.3.12
2017-03-21 21:47:14 -07:00
Dalton Hubble
1a48a51253 Merge pull request #460 from coreos/bump-k8s
examples: Update static Kubernetes to v1.5.4
2017-03-13 15:46:15 -07:00
Dalton Hubble
07e22ca6ed examples: Update static Kubernetes to v1.5.4 2017-03-13 15:30:35 -07:00
Dalton Hubble
a79d94947f Merge pull request #459 from tvon/patch-1
Add missing comma
2017-03-13 15:28:30 -07:00
Tom von Schwerdtner
31993b2e69 Add missing comma 2017-03-13 13:15:01 -04:00
Dalton Hubble
00dbbd9588 Merge pull request #458 from coreos/download-bins
scripts: Always download bootkube and kubectl
2017-03-10 18:04:14 -08:00
Dalton Hubble
c498665bdd scripts: Always download bootkube and kubectl
* Cached bootkube binary causes smoke test failures
when a new version is needed
2017-03-10 17:27:52 -08:00
Dalton Hubble
ad5caa1eee Merge pull request #457 from coreos/bootkube-update
examples: Update self-hosted Kubernetes to v1.5.4
2017-03-10 16:31:20 -08:00
Dalton Hubble
66fb51f006 examples: Update self-hosted Kubernetes to v1.5.4
* Use bootkube v0.3.11 binary and image
* Disable anonymous-auth flag for on-host kubelet
* Set the client CA for on-host kubelet, based on kubeconfig
2017-03-10 14:45:11 -08:00
Dalton Hubble
7c53dc5b60 Merge pull request #443 from coreos/go-1.8
travis.yml: Build matchbox with Go 1.8
2017-03-10 14:39:18 -08:00
Dalton Hubble
020768834c travis.yml: Build matchbox with Go 1.8 2017-03-09 11:28:31 -08:00
Dalton Hubble
aaa29ec6b2 Merge pull request #456 from coreos/improve-tests
tests: Improve Jenkins smoke testing reliability
2017-03-09 11:24:03 -08:00
Dalton Hubble
d465d97201 Jenkinsfile: Smoke test against the checkout scm 2017-03-09 10:57:25 -08:00
Dalton Hubble
fa23c0706f tests: Increase etcd3 timeout and trap EXITs to cleanup 2017-03-09 10:54:38 -08:00
Dalton Hubble
3946d9ee66 tests,scripts: Simplify bootkube and kubectl binary curling 2017-03-09 10:51:10 -08:00
Dalton Hubble
b03f62814d Merge pull request #453 from coreos/fix-address-parse
cmd/matchbox: fix -address parsing when built with Go 1.8
2017-03-08 10:08:16 -08:00
Dalton Hubble
0d1beeb632 cmd/matchbox: fix -address parsing when built with Go 1.8
* 1.8 changed the behavior of url.Parse so it is no longer
appropriate for parsing values like 0.0.0.0:8080
* Pass the address directly to http.ListenAndServe which gives
reasonable errors when bad values are given
2017-03-08 01:45:30 -08:00
Josh Wood
3ff11fad17 Merge pull request #452 from radhikapc/master
Documentation/*.md: Make headings sentence case
2017-03-07 17:00:33 -08:00
Radhika Puthiyetath
08504aabc5 Documentation/*.md: Make headings sentence case. Performed the following checks:
sentence case
index heading 2
matchbox - proper noun and capitalized
2017-03-07 16:51:16 -08:00
Dalton Hubble
05cc9d8f1b Merge pull request #451 from coreos/jenkins
Add a Jenkinsfile to perform smoke tests
2017-03-07 13:45:03 -08:00
Dalton Hubble
30600915c6 Add a Jenkinsfile to perform smoke tests 2017-03-07 12:34:43 -08:00
Dalton Hubble
fb80af3fe5 Merge pull request #450 from coreos/add-worker-node
contrib/dnsmasq: Add a 4th node for multi-k8s tests
2017-03-07 10:35:35 -08:00
Dalton Hubble
ce34cc8fa4 contrib/dnsmasq: Add a 4th node for multi-k8s tests 2017-03-06 17:55:21 -08:00
Dalton Hubble
033efb5ebf tests/smoke: Cleanup TLS assets before k8s smoke tests 2017-03-06 12:55:00 -08:00
Dalton Hubble
f3f20104aa Merge pull request #449 from coreos/bootkube-update
Update self-hosted Kubernetes to v1.5.3 and use bootkube v0.3.9
2017-03-04 12:14:11 -08:00
Dalton Hubble
b84e92a05a examples: Update self-hosted Kubernetes to v1.5.3
* Use kubernetes-incubator/bootkube v0.3.9
2017-03-03 15:33:59 -08:00
Dalton Hubble
26957d6fb3 Merge pull request #447 from coreos/bootkube-tests
Add bootkube cluster bring-up to smoke tests
2017-03-03 15:33:08 -08:00
Dalton Hubble
90532afa3d tests: Move smoke tests from internal to tests 2017-03-03 15:16:00 -08:00
Dalton Hubble
2849f94cd9 internal: Reduce smoke test duplication 2017-03-03 13:51:14 -08:00
Dalton Hubble
fa14cf8c9c Add bootkube cluster bring-up to smoke tests 2017-03-02 01:32:34 -08:00
Dalton Hubble
d2ec4f1ced Merge pull request #431 from coreos/smoke-tests
internal: Add smoke testing scripts
2017-03-01 14:28:18 -08:00
Dalton Hubble
9a1d87b143 internal: Add internal smoke test scripts
* Test etcd and k8s cluster bring-up
2017-03-01 13:38:32 -08:00
Dalton Hubble
1e6b8ece14 Merge pull request #444 from coreos/jx-generic-linux
Doc/deployment: s/general/generic linux distros
2017-03-01 10:48:52 -08:00
Josh Wood
1b86919bac Doc/deployment: s/general/generic linux distros 2017-03-01 10:30:08 -08:00
Dalton Hubble
58fa667888 Merge pull request #445 from siadat/fix-hosts-in-doc
Docs: fix entries in the /etc/hosts section
2017-03-01 10:26:14 -08:00
Sina Siadat
e47227750a Docs: fix entries in the /etc/hosts section
Remove unnecessary "$" prefixes
2017-03-01 11:26:32 +03:30
Dalton Hubble
66f2e35616 Merge pull request #442 from ElijahCaine/codeblock-std
Docs: standardize codeblocks to ``` fencing
2017-02-27 09:40:10 -08:00
Elijah C. Voigt
6a12032f51 Docs: standardize codeblock fencing. 2017-02-24 17:17:41 -08:00
Dalton Hubble
2e5375f495 Merge pull request #441 from joshix/markerrun
Documentation: Update outdated links (marker)
2017-02-23 13:02:58 -08:00
Josh Wood
5fad9943da Documentation: Update outdated links (marker) 2017-02-23 11:33:30 -08:00
Dalton Hubble
dde6a8972f Merge pull request #440 from coreos/etcd2-to-etcd3
Switch from etcd2 to etcd3
2017-02-19 12:04:20 -08:00
Dalton Hubble
8fd4bea89b examples: Update bootkube to v0.3.7 and etcd3 2017-02-17 18:36:12 -08:00
Dalton Hubble
dabf0eae54 examples: Remove etcd (i.e. etcd2) cluster examples 2017-02-17 15:21:12 -08:00
Dalton Hubble
e02f8f7a9e examples: Update etcd3 (etcd-member) and use the etcd3 gateway 2017-02-17 15:10:37 -08:00
Dalton Hubble
b27e1a8afa Merge pull request #439 from coreos/test
Documentation: Remove mkdocs site builder
2017-02-16 15:53:38 -08:00
Dalton Hubble
1a93282cb5 Documentation: Remove mkdocs site builder 2017-02-16 15:47:29 -08:00
Dalton Hubble
fd2b6e1cb1 Merge pull request #438 from andrewrothstein/fix-typo
fix typo in documentation
2017-02-16 15:45:34 -08:00
Andrew Rothstein
2d5bce04c1 fix typo 2017-02-14 23:13:16 -05:00
Dalton Hubble
189f790a7e examples: Remove torus example cluster 2017-02-13 16:39:16 -08:00
Dalton Hubble
cfdec8cea0 Merge pull request #435 from coreos/bump-coreos
Bump CoreOS image from 1185.3.0 to 1235.9.0
2017-02-13 16:33:25 -08:00
Dalton Hubble
0419b2f327 Bump CoreOS image from 1185.3.0 to 1235.9.0 2017-02-13 15:44:06 -08:00
Dalton Hubble
e1fda6b22b Merge pull request #434 from cjyar/filestore-logging
Don't swallow errors; that's rude.
2017-02-13 15:30:45 -08:00
Chris Jones
697d7ec73d storage: report errors in GroupList
Errors should be reported instead of silently failing to read files.
2017-02-10 16:45:05 -07:00
Dalton Hubble
7fa2c96d5d Merge pull request #428 from coreos/update-kubernetes
Update example Kubernetes clusters to v1.5.2
2017-01-30 14:26:13 -08:00
Dalton Hubble
1526a2edaf examples: Ensure kubelet awaits /etc/resolv.conf setup
* Reflects a change we applied in Tectonic clusters
2017-01-29 17:37:36 -08:00
Dalton Hubble
e024a6b7b0 examples: Update Kubernetes clusters to v1.5.2
* Update bootkube to v0.3.5 binary
2017-01-29 17:37:19 -08:00
Dalton Hubble
5812cab050 Documentation: Update RPM Copr repository 2017-01-25 11:24:11 -08:00
Dalton Hubble
40f13e0587 *: Prepare for v0.5.0 release 2017-01-23 04:05:55 -08:00
Máximo Cuadros
cedeb868f9 ipxe: use mac instead of net0/mac 2017-01-19 21:06:45 -08:00
Dalton Hubble
354b434ffc Merge pull request #427 from urzds/patch-2
Lock architecture to ARMv6 for linux-arm images
2017-01-19 14:01:42 -08:00
Dalton Hubble
88f9d637b1 Merge branch 'master' into patch-2 2017-01-19 11:48:51 -08:00
Dalton Hubble
a0d578d547 Merge pull request #426 from urzds/patch-1
Fix typo: AMR64->ARM64
2017-01-19 11:48:25 -08:00
Dennis Schridde
e739b98305 Lock architecture to ARMv6 for linux-arm images
Unless `GOARM` is specified, the Go compiler will choose the compiling system's architecture as a target (similar to GCC's `-march=native`). This commit prevents that by explicitly selecting ARMv6 (e.g. Raspberry Pi 1) as the target.

According to the documentation, ARM64 does not need this, as ARMv8 is implicit and the value of `GOARM` is not considered for this architecture.

See-Also: https://github.com/golang/go/wiki/GoArm
2017-01-19 13:49:23 +01:00
Dennis Schridde
ea10f41886 Fix typo: AMR64->ARM64 2017-01-19 11:30:38 +01:00
Dalton Hubble
ed017d7a86 Merge pull request #425 from coreos/devnet
scripts/devnet: Improve devnet script usability
2017-01-18 13:04:29 -08:00
Dalton Hubble
3337f9ef60 scripts/devnet: Improve devnet script usability
* Add devnet script to getting-started-rkt
2017-01-18 11:37:03 -08:00
Dalton Hubble
23216b2a97 Merge pull request #417 from coreos/update-protobuf
Update protobuf tools, re-vendor deps, re-generate code
2017-01-18 11:36:03 -08:00
Dalton Hubble
f609b85d30 matchbox: Re-generate matchbox protobuf code 2017-01-18 11:17:08 -08:00
Dalton Hubble
3ac3063995 vendor: Re-vendor protobuf and grpc 2017-01-18 11:17:08 -08:00
Dalton Hubble
11c739949f glide.yaml: Update protobuf and grpc
* Update grpc to 1.0.5
* Update protobuf to 8ee79997227bf9b34611aee7946ae64735e6fd93
* Add protoc-gen-go subpackage which is needed by codegen
2017-01-18 11:17:08 -08:00
Dalton Hubble
6c6e2aadaf Merge pull request #422 from coreos/makefile
Switch to a Makefile driven develop/release process
2017-01-18 10:21:10 -08:00
Dalton Hubble
219da4d934 *: Switch to a Makefile driven develop/release process
* Add make targets for vendor, docker-image, and tools
* Move scripts into the scripts folder
2017-01-18 02:11:27 -08:00
Dalton Hubble
b9d73c58ee Merge pull request #421 from coreos/project-rename
*: Rename coreos-baremetal to matchbox
2017-01-17 01:48:35 -08:00
Dalton Hubble
c749a28662 *: Rename coreos-baremetal to matchbox 2017-01-17 00:58:03 -08:00
Dalton Hubble
9e3efa5229 Merge pull request #411 from coreos/self-hosted-flannel
examples: Update Kubernetes to use self-hosted flannel
2017-01-13 16:57:07 -08:00
Dalton Hubble
e54418633c examples: Update Kubernetes to use self-hosted flannel
* Self-hosted Kubernetes now runs flannel as a daemonset
2017-01-13 15:29:23 -08:00
Dalton Hubble
e56e5a3a03 Update dockerignore to slim the build context 2017-01-12 17:21:16 -08:00
Dalton Hubble
e4b4f82177 Merge pull request #416 from coreos/pin-base-image
Dockerfile: Pin the matchbox and dnsmasq base images
2017-01-11 18:01:46 -08:00
Dalton Hubble
79a2bf2326 Dockerfile: Pin the matchbox and dnsmasq base images 2017-01-11 15:47:31 -08:00
Dalton Hubble
2b23f4a17c Merge pull request #414 from coreos/standard-context
matchbox: Switch to Go 1.7+ standard context
2017-01-11 10:58:09 -08:00
Dalton Hubble
89accb281f matchbox: Switch to Go 1.7+ standard context
* Drop build support for Go 1.6
2017-01-11 03:00:33 -08:00
Dalton Hubble
fd61297a43 README: Update README with move notice
* Add an Enterprise section to show Tectonic
2017-01-11 02:44:05 -08:00
Dalton Hubble
f064d72c28 .travis.yml: Add Go 1.8rc1, fix Go tip tests 2017-01-11 01:28:18 -08:00
Dalton Hubble
9165dc202b contrib: Rename systemd units from bootcfg to matchbox 2017-01-11 01:25:32 -08:00
Dalton Hubble
2d8977b2e1 Merge pull request #413 from coreos/rename-bootcfg
Rename bootcfg to matchbox
2017-01-11 00:59:54 -08:00
Dalton Hubble
27427dbd1b Add CHANGES, migration notes, and update contrib
* Update k8s and systemd contrib examples for v0.5.0
2017-01-09 04:34:06 -08:00
Dalton Hubble
b7377f54bc Discontinue signed and tagged ACIs
* Move toward a unified container image which is run
either by rkt or docker
* Sadly, image signing is only supported by rkt and is not
part of the new standard
2017-01-09 04:33:01 -08:00
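The unified image described above runs the same way under either runtime. A hedged sketch (the tag, volume path, and flags follow deployment docs of that era and may differ for your version; commands are echoed rather than executed):

```shell
# Hypothetical invocations of the unified quay.io/coreos/matchbox image.
# Docker:
echo docker run -p 8080:8080 -v /var/lib/matchbox:/var/lib/matchbox:Z \
  quay.io/coreos/matchbox:v0.5.0 -address=0.0.0.0:8080
# rkt consumes the same image content:
echo rkt run --net=host quay.io/coreos/matchbox:v0.5.0 -- -address=0.0.0.0:8080
```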
Dalton Hubble
d496192032 Rename bootcfg to matchbox in docs, examples, scripts
* Verify all examples and docs work correctly
* Exclude contrib k8s and systemd which will be updated
and verified in a followup commit
2017-01-09 04:32:45 -08:00
Dalton Hubble
86f737ff93 Rename bootcfg code references to matchbox
* Change config directory to /etc/matchbox
* Change data-path to /var/lib/matchbox
* Change assets-path to /var/lib/matchbox/assets
2017-01-09 04:30:30 -08:00
Dalton Hubble
50432159d7 Update imports from bootcfg to matchbox 2017-01-09 02:25:56 -08:00
Dalton Hubble
88da59560d Move bootcfg package to matchbox 2017-01-09 02:21:47 -08:00
Dalton Hubble
3073e1ed22 Rename bootcfg to matchbox in development scripts
* Build and cross-compile binary named matchbox
* Use matchbox binary in container images
2017-01-09 02:15:30 -08:00
Dalton Hubble
6320cae91e Merge pull request #409 from coreos/update-k8s
Update Kubernetes (static) to v1.5.1
2017-01-05 13:14:34 -08:00
Dalton Hubble
0e06878714 examples: Update Kubernetes (static) addons for v1.5.1 2017-01-05 13:04:19 -08:00
Dalton Hubble
5ee63aebe5 examples: Update Kubernetes (static) to v1.5.1 2017-01-05 10:54:06 -08:00
Dalton Hubble
4a72802a32 Merge pull request #408 from coreos/virt-install-reboots
scripts: Add virt-install flag so reboots work
2017-01-04 16:55:42 -08:00
Dalton Hubble
da95af5625 scripts: Add virt-install flag so reboots work
* VMs created by virt-install that reboot themselves should
actually reboot, to mirror real machines
2017-01-04 16:26:51 -08:00
Dalton Hubble
8b50a84c4a Merge pull request #407 from coreos/update-self-hosted
examples: Upgrade self-hosted Kubernetes to v1.5.1
2016-12-30 17:05:07 -08:00
Dalton Hubble
b9f0f61bd5 examples: Upgrade self-hosted Kubernetes to v1.5.1
* Update bootkube version to stop requiring the forked
binary; bootkube now allows DNS names
* https://github.com/kubernetes-incubator/bootkube/pull/151
2016-12-30 16:27:53 -08:00
Dalton Hubble
8823e0fc30 Merge pull request #405 from stephanlindauer/patch-1
typo
2016-12-19 17:56:17 -08:00
stephan lindauer
dcb37f1acc typo 2016-12-20 02:25:35 +01:00
Dalton Hubble
e1334730ce Merge pull request #403 from coreos/k8s-v1.4.7
Update Kubernetes clusters to v1.4.7
2016-12-17 23:38:46 -08:00
Dalton Hubble
6963994942 examples: Update static Kubernetes clusters to v1.4.7
* Combine rktnetes Ignition into Kubernetes static cluster
* Set the container_runtime metadata to docker or rkt
2016-12-17 17:07:56 -08:00
Dalton Hubble
9f27efba9b examples/ignition/bootkube: Wrap bootkube in systemd service
* Start `bootkube start` via systemctl, don't require a persistent
SSH connection during the script run
2016-12-17 14:43:50 -08:00
Dalton Hubble
b2317ec35e examples/bootkube: Update self-hosted k8s to v1.4.7 2016-12-17 14:26:31 -08:00
Dalton Hubble
eb9809ee86 examples/ignition: Remove old kubelet by uuid-file
* Fix typo, --uuid should be --uuid-file
2016-12-13 14:16:24 -08:00
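The `--uuid-file` cleanup above amounts to a start-time step: save the pod UUID on launch, then remove that exact pod on the next start instead of waiting for `rkt gc`. A sketch (paths are illustrative, and the real `rkt rm` call is shown commented since rkt may not be installed here):

```shell
# Stand-in for the kubelet unit's /var/cache/kubelet-pod.uuid file.
UUID_FILE="$(mktemp)"
echo "0f1234ab-cdef-4a5b-8c9d-0123456789ab" > "$UUID_FILE"

# On the next service start: remove the previous pod by its saved UUID,
# then forget it, rather than leaving files/network for garbage collection.
if [ -s "$UUID_FILE" ]; then
  # rkt rm --uuid-file="$UUID_FILE"   # real cleanup step
  : > "$UUID_FILE"                    # forget the old pod once removed
fi
```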
Dalton Hubble
112c2949fa Merge pull request #396 from coreos/kernel-args
bootcfg/storage: Change kernel args to be a slice
2016-12-12 11:13:30 -08:00
Dalton Hubble
54f0be2b8a bootcfg/storage: Change kernel args to be a slice
* Add Profile 'args' field as a list of kernel args
* Deprecate 'cmdline' field map of kernel args
* Add missing console=tty0 console=ttyS0 kernel args
to all example clusters
* Show `virsh console nodeN` command for development
with local QEMU/KVM nodes
2016-12-12 11:05:05 -08:00
Dalton Hubble
1e367634f3 Documentation: Deprecate Pixiecore support
* Focus on real-world, varied network environments
and flexibility, and bare-metal Tectonic
* iPXE provides better hardware introspection than being
limited to MAC
* ISC DHCP and dnsmasq provides production DHCP service
* Drop Pixiecore from the setups we can support or recommend
2016-12-12 10:45:26 -08:00
Dalton Hubble
0e495c5720 bootcfg: Parse and convert Fuze configs to Ignition 2016-12-11 21:08:40 -08:00
Dalton Hubble
377ca3b1e8 vendor: Update Ignition and Fuze version 2016-12-11 21:08:12 -08:00
Dalton Hubble
d48b8e884f Merge pull request #395 from coreos/kubelet-cleanup
examples: Remove old kubelet pods by uuid-file
2016-12-09 16:57:45 -08:00
Dalton Hubble
6cd016d019 examples: Remove old kubelet pods by uuid-file
* Save the rkt pod uuid on start and remove pod resources
(files, network) on restart, without waiting on gc
2016-12-09 15:08:56 -08:00
Dalton Hubble
d654c525dd Documentation: Update release process docs 2016-12-07 14:04:24 -08:00
Dalton Hubble
3e2593c673 Update version from v0.4.1 to v0.4.2 2016-12-07 13:14:24 -08:00
Dalton Hubble
8005c51d56 Merge pull request #391 from coreos/add-mkdocs
Add mkdocs.yaml and index page
2016-12-07 13:02:56 -08:00
Dalton Hubble
9124f3f461 Add mkdocs.yaml and index page 2016-12-07 12:15:31 -08:00
Dalton Hubble
485f7bcc99 Documentation: Add RPM install notes 2016-12-07 12:02:29 -08:00
Dalton Hubble
ce381ff788 contrib/systemd: Update bootcfg systemd units 2016-12-07 11:41:08 -08:00
Dalton Hubble
a77dd0f55b Merge pull request #389 from coreos/bump-go-version
Update .travis Go version and deployment settings
2016-12-06 20:42:38 -08:00
Dalton Hubble
2a73345e0f travis: Pull requests always skip deploy
* https://docs.travis-ci.com/user/deployment/#Pull-Requests
2016-12-06 20:13:07 -08:00
Dalton Hubble
de093cb7aa travis: Update Go version to 1.7.4 2016-12-06 20:01:00 -08:00
Dalton Hubble
e62e8419cd Documentation: Remove rkt trust command
* CoreOS systemd unit doesn't use the signed image
coreos.com/bootcfg currently. Trust does nothing
2016-12-06 16:37:13 -08:00
Dalton Hubble
37a3fd9b3a Merge pull request #387 from bzub/doc-fix
Documentation: update deployment.md cert-gen references
2016-12-04 12:58:30 -08:00
bzub
61eafcb861 Documentation: update deployment.md cert-gen references
Deployment document refers to `scripts/tls` for self-signed certificates.
The current location is actually `examples/etc/bootcfg`
2016-12-04 13:25:46 -06:00
Dalton Hubble
74e5e884ec contrib/dnsmasq: Add address for a Tectonic test 2016-11-29 17:17:46 -08:00
Dalton Hubble
1394ee4fd8 scripts/devnet: Fix CoreOS download help text 2016-11-28 11:17:31 -08:00
Quentin Machu
a78c3a0f75 Documentation: Update pixiecore link 2016-11-28 11:06:24 -08:00
Dmitry Bashkatov
2bdffc7569 scripts/tls: Fix kube-conf for darwin OS type 2016-11-28 11:04:31 -08:00
Dalton Hubble
43cf9cba66 Merge pull request #383 from coreos/metal0-cidr
Documentation: Change metal0 bridge to 172.18.0.0/24
2016-11-21 11:41:38 -08:00
Dalton Hubble
b492b1a23a Documentation: Change metal0 bridge to 172.18.0.0/24
* Change CIDR from 172.15.0.0/16, which isn't a reserved
private range
* Use a smaller CIDR, /24 is sufficient
2016-11-21 11:01:41 -08:00
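A bridge config using the new range might look like the following sketch. Only the `172.18.0.0/24` subnet comes from the commit; the file layout and extra fields are assumptions modeled on a typical CNI bridge config:

```shell
# Write a hypothetical CNI config for the metal0 bridge to a temp file.
conf="$(mktemp)"
cat > "$conf" <<'EOF'
{
  "name": "metal0",
  "type": "bridge",
  "bridge": "metal0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "172.18.0.0/24"
  }
}
EOF
grep '172.18.0.0/24' "$conf"
```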
Dalton Hubble
a00437c8c4 Merge pull request #380 from coreos/update-examples
Update CoreOS and tutorial/docs
2016-11-20 15:32:12 -08:00
Dalton Hubble
002412df2e Documentation: Use /etc/hosts node names in docs 2016-11-20 01:05:44 -08:00
Dalton Hubble
dc36b7858d examples/README: Add autologin clarification 2016-11-19 23:18:04 -08:00
Dalton Hubble
d5c5dde2e4 docs,examples: Update CoreOS to stable 1185.3.0 2016-11-19 23:09:58 -08:00
Dalton Hubble
7edf503807 Merge pull request #378 from coreos/docs-deployment
Documentation: Update deployment docs to use v0.4.1
2016-11-17 20:52:03 -08:00
Dalton Hubble
3a07ea3ac2 Documentation: Update deployment docs for v0.4.1 2016-11-17 20:25:58 -08:00
Dalton Hubble
e1727e6cb3 Merge pull request #377 from coreos/update-kubernetes
Update Kubernetes clusters to v1.4.6
2016-11-17 11:15:42 -08:00
Dalton Hubble
afa5068dd6 Documentation: Update Kubernetes dashboard screenshot 2016-11-17 00:50:14 -08:00
Dalton Hubble
cc07099687 examples: Bump self-hosted Kubernetes to v1.4.6 2016-11-17 00:46:14 -08:00
Dalton Hubble
962474e667 examples: Bump Kubernetes to v1.4.6
* Bump static docker/rkt Kubernetes clusters to v1.4.6
2016-11-16 23:21:10 -08:00
Dalton Hubble
91d42b9e1f Merge pull request #373 from coreos/locksmith
examples/ignition: Set reboot strategy to etcd-lock
2016-11-10 16:07:27 -08:00
Dalton Hubble
60842c155c examples/ignition: Set reboot strategy to etcd-lock
* locksmithd should use etcd to lock for reboots
* The default best-effort strategy uses the reboot
strategy if etcd isn't running for some reason
2016-11-10 15:28:54 -08:00
Denis Andrejew
dbc081913e fix anchor link in network-setup.md 2016-11-10 14:37:16 -08:00
Dalton Hubble
7a13843b21 examples: Increase the inotify max_user_watches
* Kubelet is reported to crash if cadvisor can't watch
2016-11-08 10:09:38 -08:00
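The watch-limit bump can be sketched as a sysctl entry. The value below is illustrative, and the examples write it via Ignition rather than at runtime:

```shell
key="fs.inotify.max_user_watches"
val=16184                     # illustrative value, not necessarily the one shipped
line="${key}=${val}"
echo "$line"                  # what an /etc/sysctl.d/ entry would contain
# To apply live instead: sudo sysctl -w fs.inotify.max_user_watches=16184
```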
Dalton Hubble
b2735d1f41 Merge pull request #370 from coreos/on-host-kubelet
examples: Use Kubelet --pod-manifest-path and EnvironmentFile
2016-11-02 13:16:18 -07:00
Dalton Hubble
2c68b4a0d9 examples: Use Kubelet --pod-manifest-path and EnvironmentFile 2016-11-01 16:39:14 -07:00
Dalton Hubble
f97df85304 Documentation: Fix some images and links 2016-10-29 18:01:09 -07:00
Dalton Hubble
056a29ad0f Merge pull request #367 from coreos/bump-kubernetes
Upgrade Kubernetes clusters to v1.4.3
2016-10-19 14:14:11 -07:00
Dalton Hubble
eb0e109f09 Upgrade Kubernetes clusters to v1.4.3
* Upgrade rktnetes Kubernetes clusters
* Upgrade dockernetes Kubernetes clusters
* Upgrade self-hosted Kubernetes clusters
2016-10-19 12:11:19 -07:00
Dalton Hubble
cee750cf3e Merge pull request #365 from coreos/self-hosted-upgrades
Documentation: Show self-hosted Kubernetes upgrade process
2016-10-19 09:49:20 -07:00
Dalton Hubble
faf8e37938 Documentation: Show self-hosted Kubernetes upgrade process 2016-10-19 02:24:53 -07:00
Dalton Hubble
fb2ab2a5d9 examples: Update all examples to CoreOS Beta 1185.1.0 2016-10-16 16:25:26 -07:00
Dalton Hubble
cef7c97945 examples: Update Kubernetes/rktnetes to v1.4.1 2016-10-16 15:38:11 -07:00
Dalton Hubble
f3cb1db4bc examples: Fix etcd and etcd3 proxy references 2016-10-16 15:38:11 -07:00
Dalton Hubble
279ede31c7 examples: Update self-hosted Kubernetes to v1.4.1 2016-10-16 15:37:43 -07:00
Dalton Hubble
ddd78bd2e0 Merge pull request #361 from coreos/self-hosted-k8s-v.1.4.0
examples: Update self-hosted Kubernetes to v1.4.0
2016-10-12 10:56:50 -07:00
Dalton Hubble
6692148b87 examples: Update self-hosted Kubernetes to v1.4.0
* Render with dghubble/bootkube fork which supports DNS
* Start with bootkube v0.2.0 with rkt
* Update on-host hyperkube to v1.4.0_coreos.0
2016-10-12 02:37:56 -07:00
Dalton Hubble
621d6fce7d Merge pull request #360 from coreos/bump-coreos
examples/rktnetes: Update to CoreOS Beta 1185.1.0
2016-10-10 23:10:02 -07:00
Dalton Hubble
82c2ec62d1 examples/rktnetes: Update to CoreOS Beta 1185.1.0
* rkt v1.14.0 is needed for rktnetes
2016-10-10 17:14:58 -07:00
Dalton Hubble
ee15dae003 Merge pull request #356 from coreos/update-kubernetes
examples/{k8s,rktnetes}: Update Kubernetes to v1.4.0_coreos.2
2016-10-08 20:22:34 -07:00
Dalton Hubble
e739c3adfa Documentation: Update docs for k8s/rktnetes 1.4 2016-10-08 20:11:43 -07:00
Dalton Hubble
2a11b387fb examples/{k8s,rktnetes}: Update addons for 1.4, use YAML 2016-10-08 18:33:38 -07:00
Dalton Hubble
1ed777edb2 examples/{k8s,rktnetes}: Bump to v1.4.0_coreos.2 hyperkube
* Replace --network-plugin-dir with --cni-conf-dir
* Add DefaultStorageClass to admission-control list
2016-10-08 18:33:38 -07:00
Dalton Hubble
4daf997a73 examples/k8s: Use CNI for Kubernetes clusters
* Set the Docker bridge IP and IP masq to empty string
* https://github.com/coreos/coreos-kubernetes/pull/551
2016-10-08 18:33:37 -07:00
Dalton Hubble
640f734e50 Merge pull request #359 from coreos/fix-devnet
scripts/devnet: Fix devnet to start named examples
2016-10-08 18:33:07 -07:00
Dalton Hubble
c53062d491 scripts/devnet: Fix devnet to start named examples 2016-10-08 16:26:30 -07:00
Dalton Hubble
abdb74f3b2 Merge pull request #357 from coreos/update-deployment-docs
Documentation: Update deployment and network docs
2016-10-03 13:47:11 -07:00
Dalton Hubble
9f791af195 Documentation: Update deployment and network docs 2016-10-03 13:29:07 -07:00
Dalton Hubble
eae23dc30c Merge pull request #354 from ebraminio/patch-1
[doc] Fix sample Docker command
2016-09-30 16:34:23 -07:00
Ebrahim Byagowi
197501c04a [doc] Fix sample Docker command 2016-09-26 22:08:29 +00:00
Dalton Hubble
cbbd2a4a8a Merge pull request #352 from ericchiang/fix-devnet-script-no-args
scripts/devnet: print usage if no arguments are provided
2016-09-22 11:12:31 -07:00
Eric Chiang
f0465c0d0b scripts/devnet: print usage if no arguments are provided
Before:

    $ sudo ./scripts/devnet
    ./scripts/devnet: line 20: $1: unbound variable

After:

    $ sudo ./scripts/devnet
    USAGE: devnet <command>
    Commands:
    	create	create bootcfg and PXE services on the bridge
    	destroy	destroy the services on the bridge
2016-09-22 11:04:34 -07:00
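The fix amounts to checking `$#` before `$1` is ever referenced (under `set -u`, touching `$1` with no arguments aborts with "unbound variable"). A minimal sketch, not the full devnet script:

```shell
usage() {
  printf 'USAGE: devnet <command>\n'
  printf 'Commands:\n'
  printf '\tcreate\tcreate bootcfg and PXE services on the bridge\n'
  printf '\tdestroy\tdestroy the services on the bridge\n'
}

main() {
  if [ "$#" -eq 0 ]; then   # guard runs before "$1" is ever expanded
    usage
    return 2
  fi
  echo "running: $1"
}

main create   # → running: create
```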
Dalton Hubble
7b1640b1c6 examples/etc/bootcfg: Fix typo in cert-gen help 2016-09-21 16:20:06 -07:00
Dalton Hubble
b3dab0aa98 Merge pull request #348 from coreos/devnet
scripts: Add devnet script to setup PXE/bootcfg bridge
2016-09-20 11:31:03 -07:00
Dalton Hubble
32b8a1108d scripts/devnet: Add devnet script to setup PXE/bootcfg 2016-09-19 19:26:55 -07:00
Dalton Hubble
bd131fc60d examples: Remove unneeded k8s-*-install profiles
* Use the same k8s-controller and k8s-worker profiles
whether booting a live cluster or installing to disk
* Extra root=/dev/sda kernel arg during install is fine
2016-09-17 21:07:45 -07:00
Dalton Hubble
14c1a37e71 examples: Replace pxe/pxe-disk with simple/simple-install
* simple example just network boots CoreOS machines
* simple-install example just network boots and installs CoreOS
* Simple examples don't do much provisioning, except adding pubkeys
2016-09-17 15:16:31 -07:00
Dalton Hubble
f6aec67eb8 scripts: Add libvirt create subcommand and --os-variant
* Add `scripts/libvirt create` subcommand for rkt setups
* Add --os-variant=generic to silence nag messages about specifying a variant
* Rename places QEMU/KVM VMs were called libvirt VMs
2016-09-17 02:39:04 -07:00
Dalton Hubble
dde306dec2 Merge pull request #345 from coreos/jonboulle-patch-1
docs: fix typo (kuberentes -> kubernetes)
2016-09-16 11:54:25 -07:00
Jonathan Boulle
c4f46f1db2 docs: fix typo (kuberentes -> kubernetes) 2016-09-16 11:38:19 -07:00
Dalton Hubble
27bd21eefa Merge pull request #346 from rothgar/master
Finished renaming Master -> controller
2016-09-16 11:30:53 -07:00
Justin Garrison
37ef166c15 Finished renaming Master -> controller 2016-09-16 11:07:00 -07:00
Dalton Hubble
34c7d01997 Merge pull request #343 from coreos/bump-k8s
Update static dockernetes and rktnetes to v1.3.6
2016-09-09 13:08:34 -07:00
Dalton Hubble
9b364b8efa examples: Update rktnetes clusters to v1.3.6
* Update Kubernetes hyperkube image to v1.3.6_coreos.0
* Update kube-dns to v17.1
* Update Kubernetes-dashboard to 1.1.1
2016-09-09 11:16:56 -07:00
Dalton Hubble
94db98d854 *: Rename k8s-master to k8s-controller 2016-09-09 11:16:56 -07:00
Dalton Hubble
cc675906c7 examples: Update k8s clusters to v1.3.6
* Update Kubernetes hyperkube image to v1.3.6_coreos.0
* Update kube-dns to v17.1
* Update Kubernetes-dashboard to 1.1.1
2016-09-09 11:15:41 -07:00
Dalton Hubble
83d3d90b3e Merge pull request #340 from coreos/etcd-fix
examples/ignition: Fix etcd peer listen urls to use IPs
2016-09-09 00:03:44 -07:00
Dalton Hubble
4b12a21acf examples/ignition: Fix etcd peer listen urls to use IPs
* See github.com/coreos/etcd/pull/6365
2016-09-08 23:43:26 -07:00
Dalton Hubble
e185a1e86c Merge pull request #341 from ericchiang/docs-bootkube-copy-kubeconfig-to-all-nodes
bootkube docs: make scp kubeconfig command copy-pastable
2016-09-07 13:43:53 -07:00
Eric Chiang
85d0d194fe Documentation/bootkube.md: make scp kubeconfig command copy-pastable
In the bootkube documentation make the command to copy kubeconfig
files to all nodes copy-pastable by adding a for loop.
2016-09-07 13:37:21 -07:00
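The copy-pastable form is just a `for` loop over the node names. Hostnames and the kubeconfig path here are placeholders, not the exact ones from the docs, and the commands are printed rather than run:

```shell
# Print one scp command per node; drop the echo to actually copy.
for node in node1.example.com node2.example.com node3.example.com; do
  echo scp assets/auth/kubeconfig "core@${node}:/home/core/kubeconfig"
done
```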
Dalton Hubble
099f3dbf2d vendor: Update fuze, cloudinit, and go-systemd 2016-09-07 11:44:43 -07:00
Dalton Hubble
549727aae9 glide,vendor,Documentation: Update glide min version to 0.12 2016-09-06 17:02:36 -07:00
Dennis Schridde
d56bf78e58 scripts/get-coreos: Make gpg binary customisable
Distributions like Debian 8 ship a `gpg` (1.4.x) and a `gpg2` (2.1.x) binary,
 which both use the same config files, and thus cannot be used at the same
 time, due to incompatible options. Thus we allow the user to specify which
 gpg binary they want to use.
2016-09-06 14:43:55 -07:00
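The customisation described above is a one-line parameter default; the variable name `GPG` is an assumption about the script's interface:

```shell
GPG="${GPG:-gpg}"   # caller may export GPG=gpg2 on Debian 8-style systems
echo "verifying with: ${GPG}"
# The script would then invoke the chosen binary, e.g.:
# "$GPG" --verify "$IMAGE.sig" "$IMAGE"
```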
Dalton Hubble
3b389cc524 README: Update links and examples list 2016-09-06 14:30:21 -07:00
Dalton Hubble
9c241ad384 examples: Add rknetes-install example cluster
* Add reference cluster which installs CoreOS and
provisions Kubernetes with rkt as the container
runtime
2016-08-30 10:55:57 -07:00
Dalton Hubble
bbadbc582e Merge pull request #330 from coreos/bump-coreos
examples: Update example clusters to CoreOS 1153.0.0
2016-08-30 10:54:58 -07:00
Dalton Hubble
dbbbc228b5 examples: Update example clusters to CoreOS 1153.0.0
* CoreOS 1153.0.0 adds rkt 1.13.0 which should resolve a
docker2aci bug in rktnetes observed with rkt 1.11.0
* https://github.com/coreos/rkt/pull/3026
2016-08-30 00:28:33 -07:00
Dalton Hubble
55b3b06c00 Merge branch 'fix-get-coreos' 2016-08-29 18:30:23 -07:00
Dalton Hubble
ee788bf077 scripts/get-coreos: Use grep -E
* egrep is equivalent, but technically deprecated
2016-08-29 18:29:28 -07:00
Dalton Hubble
c23824075c Merge pull request #327 from coreos/torus-etcd3
examples: Run Torus' etcd3 with rkt
2016-08-27 15:17:04 -07:00
Dalton Hubble
5342e28754 examples: Run Torus' etcd3 with rkt
* Bump etcd3 version to 3.0.6
2016-08-27 15:10:52 -07:00
Dalton Hubble
0cd039811a Merge pull request #335 from nak3/fix-torus.md
Doc: fix invalid link to examples
2016-08-27 15:09:18 -07:00
Kenjiro Nakayama
4760763401 Doc: fix invalid link to examples 2016-08-27 17:32:03 +09:00
Dennis Schridde
75ca3bca90 scripts/get-coreos: Relax version/channel check
Previously this matched very specific HTTP status codes only, while now it
 matches any success or redirection status code. It also works for "HTTP/2"
 answers in addition to "HTTP/2.0".

Fixes: #331
2016-08-27 10:01:50 +02:00
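The relaxed check can be sketched as a single `grep -E` over the status line, accepting any 2xx/3xx status from `HTTP/2`- or `HTTP/2.0`-style responses. The function name is illustrative, not the script's actual code:

```shell
# Return success for any HTTP success/redirect status line.
status_ok() {
  printf '%s\n' "$1" | grep -qE '^HTTP/[0-9.]+ +[23][0-9][0-9]'
}

status_ok "HTTP/2 200"             && echo "accepted: HTTP/2 200"
status_ok "HTTP/1.1 302 Found"     && echo "accepted: HTTP/1.1 302 Found"
status_ok "HTTP/1.1 404 Not Found" || echo "rejected: HTTP/1.1 404 Not Found"
```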
Dalton Hubble
cd2f7d4bfb Merge pull request #332 from coreos/rktnetes-fixes
Update rktnetes to 1151.0.0 and fix Docker 1.12 issue
2016-08-26 15:21:37 -07:00
Dalton Hubble
87d41c1a7f examples/rktnetes: Use EnvironmentFile to configure cni
* Docker v1.12 dockerd fails if given the daemon argument
* Applies https://github.com/coreos/coreos-kubernetes/pull/642
2016-08-26 15:04:49 -07:00
Dalton Hubble
90d9ca588a examples/rktnetes: Update to CoreOS 1151.0.0
* Resolves several docker2aci bugs in rkt 1.11.0
https://github.com/coreos/rkt/pull/3026
* Install to disk (rktnetes-install) is not yet working
https://github.com/coreos/bugs/issues/1541
2016-08-26 14:42:48 -07:00
Dalton Hubble
7d2f6b8b04 Merge pull request #324 from coreos/universal-root
examples: Use universal root filesystem
2016-08-25 14:55:16 -07:00
Dalton Hubble
54a16ffda4 examples: Use universal root filesystem
* Use the "root" filesystem from the Ignition universal
base config (path /sysroot)
* No need for custom named filesystem anymore
2016-08-25 14:18:07 -07:00
Dalton Hubble
d6ce07021f Merge pull request #328 from coreos/etcd3-updates
Update and add to etcd3 example clusters
2016-08-25 12:11:30 -07:00
Dalton Hubble
128f5d9b36 examples: Add etcd3-install cluster
* etcd3-install installs CoreOS to disk and sets
up a 3-node etcd3 cluster
* Additional machines are setup as etcd3 proxies
2016-08-25 11:52:28 -07:00
Dalton Hubble
6ff16e0813 examples/ignition/etcd3: Use notify service type 2016-08-25 11:52:28 -07:00
Dalton Hubble
bd420ba25e Merge pull request #293 from coreos/dns-self-hosted
examples/bootkube: Use DNS names for self-hosted Kubernetes
2016-08-25 11:38:38 -07:00
Dalton Hubble
9d35698b9b examples/groups/bootkube-install: Replace IPs with DNS names 2016-08-25 01:07:49 -07:00
Dalton Hubble
e194bf0355 examples/bootkube: Use DNS names for self-hosted Kubernetes
* Self-hosted Kubernetes api-server comes up without a hostname
override and detects the hostname it should use from `uname -n`
* kube-apiserver name must correspond to routable kubelet
hostname-override
* Provision /etc/hostname with the FQDN so `uname -n` can be
used by Kubernetes (until NODE_NAME is avail. in k8s 1.4)
2016-08-25 00:42:08 -07:00
Dalton Hubble
22ae896c85 travis.yml: Add Go 1.7, remove Go 1.5 2016-08-17 19:57:24 -07:00
Dalton Hubble
88fa2341e5 Merge pull request #317 from coreos/bootkube-bump
examples: Update self-hosted Kubernetes to v1.3.4
2016-08-17 17:19:16 -07:00
Dalton Hubble
747245c2f8 examples: Update self-hosted Kubernetes to v1.3.4
* Use bootkube v0.1.4 for self-hosted bootstrapping
2016-08-17 16:51:24 -07:00
Dalton Hubble
cae3135ef6 Merge pull request #315 from coreos/etcd3
Add etcd3 example cluster with rkt
2016-08-16 11:49:50 -07:00
Dalton Hubble
28c95f3255 Add etcd3 example cluster with rkt 2016-08-16 11:41:53 -07:00
Dalton Hubble
f9d9bb2367 contrib/dnsmasq: Add a cluster DNS name to conf 2016-08-16 11:18:38 -07:00
Dalton Hubble
796ae7e82c Merge pull request #313 from coreos/rktnetes
examples: Add Kubernetes with rkt runtime (rktnetes)
2016-08-16 10:23:14 -07:00
Dalton Hubble
e179256194 examples: Add Kubernetes with rkt runtime
* Add an example Kubernetes cluster with rkt as the container
runtime, CoreOS 1122.0.0, and rkt 1.11.0 (i.e. rktnetes)
2016-08-16 10:08:22 -07:00
Dalton Hubble
af3ce324cc Merge pull request #310 from coreos/usability-fixes
Usability improvements to tls scripts and docs
2016-08-12 16:01:12 -07:00
Dalton Hubble
0a486cb991 examples/etc/bootcfg/cert-gen: Improve cert-gen usability
* Print helpful message if SAN is unset
* Don't prompt to sign certs, false illusion of choice. Users
running cert-gen need self-signed certs.
* Remove intermediate cert signing requests
* Decrease the scariness of the self-signed warnings
2016-08-12 14:33:04 -07:00
Dalton Hubble
e46bab96c7 Documentation: Work-around NetworkManager firewall zone issue 2016-08-12 14:31:42 -07:00
Dalton Hubble
f8663ea36f README,CHANGES: Update README and recent changelog
* Add changelog entries since v0.4.0
* Link to Tectonic Installer post
* Don't highlight pixiecore on README. Endpoints support
it, but it's not encouraged.
2016-08-12 13:55:39 -07:00
Dalton Hubble
2a61827a6b Merge pull request #309 from coreos/add-arm-binary
Add ARM to release architectures
2016-08-12 12:00:43 -07:00
Dalton Hubble
db2ea9704f scripts: Include contrib files in releases 2016-08-12 11:38:41 -07:00
Dalton Hubble
5325f104d3 Makefile: Add ARM to release architectures 2016-08-12 11:38:41 -07:00
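Adding ARM to the release matrix boils down to extra GOOS/GOARCH entries for the cross-compile loop. A sketch (output names and the exact target list are assumptions; commands are printed rather than run):

```shell
# Print the build command for each release architecture.
for arch in amd64 arm arm64; do
  echo GOOS=linux GOARCH="$arch" go build -o "bin/matchbox-linux-$arch" .
done
```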
Dalton Hubble
bfc58161d5 Merge pull request #307 from chancez/common_node_attrs
scripts: Move node info into common sourceable scripts
2016-08-12 11:32:20 -07:00
Chance Zibolski
62d269f92f scripts: Move node info into common sourceable scripts 2016-08-12 11:24:26 -07:00
Dalton Hubble
f5441e35a1 Merge pull request #306 from coreos/coreos-install-guide
Documentation: Add guide for installing on CoreOS
2016-08-08 13:08:56 -07:00
Dalton Hubble
6222fcf802 Documentation: Add guide for installing on CoreOS
* Provide a systemd service unit which rkt runs bootcfg
2016-08-08 11:00:38 -07:00
Dalton Hubble
05da923fc3 Makefile, scripts: Add make codegen target
* Get/build protoc and protoc-gen-go binaries under tools
so they do not interfere with any user installations
* Ensure protoc and protoc-gen-go versions are pinned
2016-08-05 12:07:19 -07:00
Dalton Hubble
724615f539 Merge branch 'rothgar-k8s-certgen-dns' 2016-08-05 10:58:47 -07:00
Justin Garrison
db9141ec1e scripts/tls: Update DNS values
* Fixes #299
2016-08-05 10:58:02 -07:00
Dalton Hubble
43e1637c18 Merge pull request #300 from coreos/k8s-bump
examples: Bump k8s version to v1.3.4_coreos.0
2016-08-04 16:35:23 -07:00
Dalton Hubble
183285a03b examples: Bump k8s version to v1.3.4_coreos.0 2016-08-04 15:23:19 -07:00
Chance Zibolski
8705f78aee examples: Update kubelet.service to mount /var/log in rkt 2016-08-04 00:53:14 -07:00
Dalton Hubble
8349b25587 README.md: Fix minor README typos 2016-08-03 15:43:50 -07:00
Dalton Hubble
82b12d227b Merge pull request #295 from coreos/installation-docs
Documentation: Update installation guide for v0.4.0
2016-07-27 18:44:35 -07:00
Dalton Hubble
aa31228f7b Documentation: Update installation guide for v0.4.0 2016-07-27 15:20:32 -07:00
Dalton Hubble
4d46848417 examples/torus: Use DNS names for Torus cluster
* Change `torusblk volume create` to `torusctl block create`
2016-07-26 14:52:58 -07:00
Dalton Hubble
dc5b3e24e5 examples/k8s: Use DNS names in Kubernetes clusters
* Use DNS names to refer to nodes to mirror production
2016-07-26 14:41:03 -07:00
Dalton Hubble
6157217f6b Merge pull request #290 from coreos/etcd
examples/etcd: Use DNS names in etcd clusters, no IPs
2016-07-25 15:43:08 -07:00
Dalton Hubble
ed0f54da27 examples/etcd: Use DNS names in etcd clusters, no IPs
* Use DNS names to refer to nodes in etcd examples to mirror
production
* Add dnsmasq.conf files for metal0 (rkt) and docker0 examples
which include static MAC->IP and Name->IP mappings
* Remove the etcd-docker example cluster, no longer needed
2016-07-25 12:03:26 -07:00
Dalton Hubble
07e8289282 Merge pull request #289 from coreos/update-examples
examples: Update CoreOS version and bootkube
2016-07-22 14:42:22 -07:00
Dalton Hubble
dcfc6dae96 examples/bootkube: Update from bootkube v0.1.1 to v0.1.2
* Update self-hosted Kubernetes cluster example to use bootkube
v0.1.2.
* Bump Kubernetes from v1.3.0-beta.2_coreos.0 to v1.3.0_coreos.1
2016-07-21 16:54:40 -07:00
Dalton Hubble
0e4a809600 examples: Bump CoreOS Alpha from 1053.2.0 to 1109.1.0
* Clusters which install to disk auto-update so this bump just
changes the "starting" version. Deployed alpha clusters should
already be using 1109.1.0.
2016-07-21 14:48:44 -07:00
Dalton Hubble
4681c227f9 Documentation: Add snippet for running v0.4.0 release
* Show how to use the v0.4.0 tagged release rkt or Docker
image in addition to running the latest image
2016-07-20 17:07:50 -07:00
Dalton Hubble
d33945bfad Documentation/dev/release.md: Update release process 2016-07-20 16:59:34 -07:00
3946 changed files with 23351 additions and 726424 deletions


@@ -1,8 +1,2 @@
-contrib/
-Documentation/
-examples/
-Godeps/
-scripts/
-vendor/
-vagrant/
-*.aci
+*
+!bin/matchbox

.gitignore vendored

@@ -26,7 +26,10 @@ _testmain.go
 *.test
 *.prof
-_output/
-bin/
-assets/
 *.aci
+assets/
+bin/
+_output/
+tools/
+contrib/registry/data
+contrib/rpm/*.tar.gz


@@ -3,27 +3,23 @@ sudo: required
 services:
 - docker
 go:
-- 1.5.4
-- 1.6.2
+- 1.10.x
+- 1.11.x
+- 1.11.1
 - tip
 matrix:
   allow_failures:
   - go: tip
-env:
-  global:
-  - GO15VENDOREXPERIMENT="1"
 install:
-- go get github.com/golang/lint/golint
+- go get golang.org/x/lint/golint
 script:
-- ./test
+- make test
 deploy:
   provider: script
-  script: scripts/travis-docker-push
+  script: scripts/dev/travis-docker-push
   skip_cleanup: true
   on:
     branch: master
-    go: '1.6.2'
     condition: "$TRAVIS_PULL_REQUEST = false"
+    go: '1.11.1'
 notifications:
   email: change


@@ -1,8 +1,140 @@
-# coreos-baremetal bootcfg
+# Matchbox
 Notable changes between releases.
 ## Latest
-## v0.4.0 (2016-06-21)
+## v0.7.1 (2018-11-01)
+* Add `kernel_args` variable to the terraform bootkube-install cluster definition
+* Add `get-flatcar` helper script
+* Add optional TLS support to read-only HTTP API
+* Build Matchbox with Go 1.11.1 for images and binaries
+### Examples
+* Upgrade Kubernetes example clusters to v1.10.0 (Terraform-based)
+* Upgrade Kubernetes example clusters to v1.8.5
+## v0.7.0 (2017-12-12)
+* Add gRPC API endpoints for managing generic (experimental) templates
+* Update Container Linux config transpiler to v0.5.0
+* Update Ignition to v0.19.0, render v2.1.0 Ignition configs
+* Drop support for Container Linux versions below 1465.0.0 (breaking)
+* Build Matchbox with Go 1.8.5 for images and binaries
+* Remove Profile `Cmdline` map (deprecated in v0.5.0), use `Args` slice instead
+* Remove pixiecore support (deprecated in v0.5.0)
+* Remove `ContextHandler`, `ContextHandlerFunc`, and `NewHandler` from the `matchbox/http` package.
+### Examples / Modules
+* Upgrade Kubernetes example clusters to v1.8.4
+* Kubernetes examples clusters enable etcd TLS
+* Deploy the Container Linux Update Operator (CLUO) to coordinate reboots of Container Linux nodes in Kubernetes clusters. See the cluster [addon docs](Documentation/cluster-addons.md).
+* Kubernetes examples (terraform and non-terraform) mask locksmithd
+* Terraform modules `bootkube` and `profiles` (Kubernetes) mask locksmithd
+## v0.6.1 (2017-05-25)
+* Improve the installation documentation
+* Move examples/etc/matchbox/cert-gen to scripts/tls
+* Build Matchbox with Go 1.8.3 for images and binaries
+### Examples
+* Upgrade self-hosted Kubernetes cluster examples to v1.6.4
+* Add NoSchedule taint to self-hosted Kubernetes controllers
+* Remove static Kubernetes and rktnetes cluster examples
+## v0.6.0 (2017-04-25)
+* New [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) plugin for Terraform users!
+* New hosted [documentation](https://coreos.com/matchbox/docs/latest) on coreos.com
+* Add `ProfileDelete`, `GroupDelete`, `IgnitionGet` and `IgnitionDelete` gRPC endpoints
+* Build matchbox with Go 1.8 for container images and binaries
+* Generate code with gRPC v1.2.1 and matching Go protoc-gen-go plugin
+* Update Ignition to v0.14.0 and coreos-cloudinit to v1.13.0
+* Update "fuze" docs to the new name [Container Linux Configs](https://coreos.com/os/docs/latest/configuration.html)
+* Remove `bootcmd` binary from release tarballs
+### Examples
+* Upgrade Kubernetes v1.5.5 (static) example clusters
+* Upgrade Kubernetes v1.6.1 (self-hosted) example cluster
+* Use etcd3 by default in all clusters (remove etcd2 clusters)
+* Add Terraform examples for etcd3 and self-hosted Kubernetes 1.6.1
+## v0.5.0 (2017-01-23)
+* Rename project to CoreOS `matchbox`!
+* Add Profile `args` field to list kernel args
+* Update [Fuze](https://github.com/coreos/container-linux-config-transpiler) and [Ignition](https://github.com/coreos/ignition) to v0.11.2
+* Switch from `golang.org/x/net/context` to `context`
+* Deprecate Profile `cmd` field map of kernel args
+* Deprecate Pixiecore support
+* Drop build support for Go 1.6
+#### Rename
+* Move repo from github.com/coreos/coreos-baremetal to github.com/coreos/matchbox
+* Rename `bootcfg` binary to `matchbox`
+* Rename `bootcfg` packages to `matchbox`
+* Publish a `quay.io/coreos/matchbox` container image. The `quay.io/coreos/bootcfg` image will no longer be updated.
+* Rename environment variable prefix from `BOOTCFG*` to `MATCHBOX*`
+* Change config directory to `/etc/matchbox`
+* Change default `-data-path` to `/var/lib/matchbox`
+* Change default `-assets-path` to `/var/lib/matchbox/assets`
+#### Examples
+* Upgrade Kubernetes v1.5.1 (static) example clusters
+* Upgrade Kubernetes v1.5.1 (self-hosted) example cluster
+* Switch Kubernetes (self-hosted) to run flannel as pods
+* Combine rktnetes Ignition into Kubernetes static cluster
+#### Migration
+* binary users should install the `matchbox` binary (see [installation](Documentation/deployment.md))
+* rkt/docker users should start using `quay.io/coreos/matchbox` (see [installation](Documentation/deployment.md))
+* RPM users should uninstall bootcfg and install matchbox (see [installation](Documentation/deployment.md))
+* Move `/etc/bootcfg` configs and certificates to `/etc/matchbox`
+* Move `/var/lib/bootcfg` data to `/var/lib/matchbox`
+* See the new [contrib/systemd](contrib/systemd) service examples
+* Remove the old `bootcfg` user if you created one
+## v0.4.2 (2016-12-7)
+#### Improvements
+* Add RPM packages to Copr
+* Fix packaged `contrib/systemd` units
+* Update Go version to 1.7.4
+#### Examples
+* Upgrade Kubernetes v1.4.6 (static manifest) example clusters
+* Upgrade Kubernetes v1.4.6 (rktnetes) example clusters
+* Upgrade Kubernetes v1.4.6 (self-hosted) example cluster
+## v0.4.1 (2016-10-17)
+#### Improvements
+* Add ARM and ARM64 release architectures (#309)
+* Add guide for installing bootcfg on CoreOS (#306)
+* Improvements to the bootcfg cert-gen script (#310)
+#### Examples
+* Add Kubernetes example with rkt container runtime (i.e. rktnetes)
+* Upgrade Kubernetes v1.4.1 (static manifest) example clusters
+* Upgrade Kubernetes v1.4.1 (rktnetes) example clusters
+* Upgrade Kubernetes v1.4.1 (self-hosted) example cluster
+* Add etcd3 example cluster (PXE in-RAM or install to disk)
+* Use DNS names (instead of IPs) in example clusters (except bootkube)
+## v0.4.0 (2016-07-21)
 #### Features

View File

@@ -1,5 +1,5 @@
FROM alpine:latest
FROM alpine:3.6
MAINTAINER Dalton Hubble <dalton.hubble@coreos.com>
COPY bin/bootcfg /bootcfg
COPY bin/matchbox /matchbox
EXPOSE 8080
ENTRYPOINT ["/bootcfg"]
ENTRYPOINT ["/matchbox"]

View File

@@ -1,17 +1,21 @@
# HTTP API
## iPXE Script
## iPXE script
Serves a static iPXE boot script which gathers client machine attributes and chainloads to the iPXE endpoint. Use DHCP/TFTP to point iPXE clients to this endpoint as the next-server.
GET http://bootcfg.foo/boot.ipxe
GET http://bootcfg.foo/boot.ipxe.0 // for dnsmasq
```
GET http://matchbox.foo/boot.ipxe
GET http://matchbox.foo/boot.ipxe.0 // for dnsmasq
```
**Response**
#!ipxe
chain ipxe?uuid=${uuid}&mac=${net0/mac:hexhyp}&domain=${domain}&hostname=${hostname}&serial=${serial}
```
#!ipxe
chain ipxe?uuid=${uuid}&mac=${mac:hexhyp}&domain=${domain}&hostname=${hostname}&serial=${serial}
```
Clients booted via the `/boot.ipxe` endpoint will introspect and make a request to `/ipxe` with the `uuid`, `mac`, `hostname`, and `serial` values as query arguments.
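For illustration (the hostname and attribute values here are placeholders), the request such a client ends up constructing looks like this. The `${net0/mac:hexhyp}` setting renders the MAC with hyphens, emulated below with `tr`:

```sh
# Build the query string an iPXE client sends to the /ipxe endpoint.
uuid="16e7d8a7-bfa9-428b-9117-363341bb330b"
mac="52:54:00:89:d8:10"
url="http://matchbox.foo/ipxe?uuid=${uuid}&mac=$(echo "$mac" | tr ':' '-')"
echo "$url"
# curl "$url"   # would return the rendered iPXE script for this machine
```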
@@ -19,9 +23,11 @@ Client's booted with the `/ipxe.boot` endpoint will introspect and make a reques
Finds the profile for the machine and renders the network boot config (kernel, options, initrd) as an iPXE script.
GET http://bootcfg.foo/ipxe?label=value
```
GET http://matchbox.foo/ipxe?label=value
```
**Query Parameters**
**Query parameters**
| Name | Type | Description |
|------|--------|-----------------|
@@ -31,16 +37,49 @@ Finds the profile for the machine and renders the network boot config (kernel, o
**Response**
#!ipxe
kernel /assets/coreos/1053.2.0/coreos_production_pxe.vmlinuz coreos.config.url=http://bootcfg.foo:8080/ignition?uuid=${uuid}&mac=${net0/mac:hexhyp} coreos.first_boot=1 coreos.autologin
initrd /assets/coreos/1053.2.0/coreos_production_pxe_image.cpio.gz
boot
```
#!ipxe
kernel /assets/coreos/1576.5.0/coreos_production_pxe.vmlinuz coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp} coreos.first_boot=1 coreos.autologin
initrd /assets/coreos/1576.5.0/coreos_production_pxe_image.cpio.gz
boot
```
## GRUB2
Finds the profile for the machine and renders the network boot config as a GRUB config. Use DHCP/TFTP to point GRUB clients to this endpoint as the next-server.
GET http://bootcfg.foo/grub?label=value
```
GET http://matchbox.foo/grub?label=value
```
**Query parameters**
| Name | Type | Description |
|------|--------|-----------------|
| uuid | string | Hardware UUID |
| mac | string | MAC address |
| * | string | Arbitrary label |
**Response**
```
default=0
timeout=1
menuentry "CoreOS" {
echo "Loading kernel"
linuxefi "(http;matchbox.foo:8080)/assets/coreos/1576.5.0/coreos_production_pxe.vmlinuz" "coreos.autologin" "coreos.config.url=http://matchbox.foo:8080/ignition" "coreos.first_boot"
echo "Loading initrd"
initrdefi "(http;matchbox.foo:8080)/assets/coreos/1576.5.0/coreos_production_pxe_image.cpio.gz"
}
```
## Cloud config
DEPRECATED: Finds the profile matching the machine and renders the corresponding Cloud-Config with group metadata, selectors, and query params.
```
GET http://matchbox.foo/cloud?label=value
```
**Query Parameters**
@@ -52,69 +91,25 @@ Finds the profile for the machine and renders the network boot config as a GRUB
**Response**
default=0
timeout=1
menuentry "CoreOS" {
echo "Loading kernel"
linuxefi "(http;bootcfg.foo:8080)/assets/coreos/1053.2.0/coreos_production_pxe.vmlinuz" "coreos.autologin" "coreos.config.url=http://bootcfg.foo:8080/ignition" "coreos.first_boot"
echo "Loading initrd"
initrdefi "(http;bootcfg.foo:8080)/assets/coreos/1053.2.0/coreos_production_pxe_image.cpio.gz"
}
```yaml
#cloud-config
coreos:
units:
- name: etcd2.service
command: start
- name: fleet.service
command: start
```
## Pixiecore
Finds the profile matching the machine and renders the network boot config as JSON to implement the [Pixiecore API](https://github.com/danderson/pixiecore/blob/master/README.api.md). Currently, Pixiecore only provides the machine's MAC address for matching.
GET http://bootcfg.foo/pixiecore/v1/boot/:MAC
**URL Parameters**
| Name | Type | Description |
|------|--------|-------------|
| mac | string | MAC address |
**Response**
{
"kernel":"/assets/coreos/1032.0.0/coreos_production_pxe.vmlinuz",
"initrd":["/assets/coreos/1032.0.0/coreos_production_pxe_image.cpio.gz"],
"cmdline":{
"cloud-config-url":"http://bootcfg.foo/cloud?mac=ADDRESS",
"coreos.autologin":""
}
}
## Cloud Config
Finds the profile matching the machine and renders the corresponding Cloud-Config with group metadata, selectors, and query params.
GET http://bootcfg.foo/cloud?label=value
**Query Parameters**
| Name | Type | Description |
|------|--------|-----------------|
| uuid | string | Hardware UUID |
| mac | string | MAC address |
| * | string | Arbitrary label |
**Response**
#cloud-config
coreos:
units:
- name: etcd2.service
command: start
- name: fleet.service
command: start
## Ignition Config
## Container Linux Config / Ignition Config
Finds the profile matching the machine and renders the corresponding Ignition Config with group metadata, selectors, and query params.
GET http://bootcfg.foo/ignition?label=value
```
GET http://matchbox.foo/ignition?label=value
```
**Query Parameters**
**Query parameters**
| Name | Type | Description |
|------|--------|-----------------|
@@ -124,24 +119,28 @@ Finds the profile matching the machine and renders the corresponding Ignition Co
**Response**
{
"ignition": { "version": "2.0.0" },
"systemd": {
"units": [{
"name": "example.service",
"enable": true,
"contents": "[Service]\nType=oneshot\nExecStart=/usr/bin/echo Hello World\n\n[Install]\nWantedBy=multi-user.target"
}]
}
}
```json
{
"ignition": { "version": "2.0.0" },
"systemd": {
"units": [{
"name": "example.service",
"enable": true,
"contents": "[Service]\nType=oneshot\nExecStart=/usr/bin/echo Hello World\n\n[Install]\nWantedBy=multi-user.target"
}]
}
}
```
## Generic Config
## Generic config
Finds the profile matching the machine and renders the corresponding generic config with group metadata, selectors, and query params.
GET http://bootcfg.foo/generic?label=value
```
GET http://matchbox.foo/generic?label=value
```
**Query Parameters**
**Query parameters**
| Name | Type | Description |
|------|--------|-----------------|
@@ -151,19 +150,22 @@ Finds the profile matching the machine and renders the corresponding generic con
**Response**
{
"uuid": "",
"mac": "52:54:00:a1:9c:ae",
"osInstalled": true,
"rawQuery": "mac=52:54:00:a1:9c:ae&os=installed"
}
```
{
"uuid": "",
"mac": "52:54:00:a1:9c:ae",
"osInstalled": true,
"rawQuery": "mac=52:54:00:a1:9c:ae&os=installed"
}
```
## Metadata
Finds the matching machine group and renders the group metadata, selectors, and query params in an "env file" style response.
GET http://bootcfg.foo/metadata?mac=52-54-00-a1-9c-ae&foo=bar&count=3&gate=true
```
GET http://matchbox.foo/metadata?mac=52-54-00-a1-9c-ae&foo=bar&count=3&gate=true
```
**Query Parameters**
@@ -175,34 +177,37 @@ Finds the matching machine group and renders the group metadata, selectors, and
**Response**
META=data
ETCD_NAME=node1
SOME_NESTED_DATA=some-value
MAC=52:54:00:a1:9c:ae
REQUEST_QUERY_MAC=52:54:00:a1:9c:ae
REQUEST_QUERY_FOO=bar
REQUEST_QUERY_COUNT=3
REQUEST_QUERY_GATE=true
REQUEST_RAW_QUERY=mac=52-54-00-a1-9c-ae&foo=bar&count=3&gate=true
```
META=data
ETCD_NAME=node1
SOME_NESTED_DATA=some-value
MAC=52:54:00:a1:9c:ae
REQUEST_QUERY_MAC=52:54:00:a1:9c:ae
REQUEST_QUERY_FOO=bar
REQUEST_QUERY_COUNT=3
REQUEST_QUERY_GATE=true
REQUEST_RAW_QUERY=mac=52-54-00-a1-9c-ae&foo=bar&count=3&gate=true
```
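Because the response is env-file formatted, a provisioning script can source it directly. A sketch (the response is simulated with a heredoc here; note that values containing shell metacharacters, such as `REQUEST_RAW_QUERY` with `&`, would need quoting before sourcing):

```sh
# Consume a /metadata env-file response from a provisioning script.
meta=$(mktemp)
cat > "$meta" <<'EOF'
ETCD_NAME=node1
MAC=52:54:00:a1:9c:ae
EOF
set -a        # export every variable assigned while sourcing
. "$meta"
set +a
echo "$ETCD_NAME on $MAC"   # prints: node1 on 52:54:00:a1:9c:ae
```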
## OpenPGP Signatures
## OpenPGP signatures
OpenPGP signature endpoints serve detached binary and ASCII armored signatures of rendered configs, if enabled. See [OpenPGP Signing](openpgp.md).
| Endpoint | Signature Endpoint | ASCII Signature Endpoint |
|------------|--------------------|-------------------------|
| iPXE | `http://bootcfg.foo/ipxe.sig` | `http://bootcfg.foo/ipxe.asc` |
| Pixiecore | `http://bootcfg/pixiecore/v1/boot.sig/:MAC` | `http://bootcfg/pixiecore/v1/boot.asc/:MAC` |
| GRUB2 | `http://bootcfg.foo/grub.sig` | `http://bootcfg.foo/grub.asc` |
| Ignition | `http://bootcfg.foo/ignition.sig` | `http://bootcfg.foo/ignition.asc` |
| Cloud-Config | `http://bootcfg.foo/cloud.sig` | `http://bootcfg.foo/cloud.asc` |
| Generic | `http://bootcfg.foo/generic.sig` | `http://bootcfg.foo/generic.asc` |
| Metadata | `http://bootcfg.foo/metadata.sig` | `http://bootcfg.foo/metadata.asc` |
| iPXE | `http://matchbox.foo/ipxe.sig` | `http://matchbox.foo/ipxe.asc` |
| GRUB2 | `http://matchbox.foo/grub.sig` | `http://matchbox.foo/grub.asc` |
| Ignition | `http://matchbox.foo/ignition.sig` | `http://matchbox.foo/ignition.asc` |
| Cloud-Config | `http://matchbox.foo/cloud.sig` | `http://matchbox.foo/cloud.asc` |
| Generic | `http://matchbox.foo/generic.sig` | `http://matchbox.foo/generic.asc` |
| Metadata | `http://matchbox.foo/metadata.sig` | `http://matchbox.foo/metadata.asc` |
Get a config and its detached ASCII armored signature.
GET http://bootcfg.foo/ipxe?label=value
GET http://bootcfg.foo/ipxe.asc?label=value
```
GET http://matchbox.foo/ipxe?label=value
GET http://matchbox.foo/ipxe.asc?label=value
```
**Response**
@@ -221,14 +226,15 @@ NO+p24BL3PHZyKw0nsrm275C913OxEVgnNZX7TQltaweW23Cd1YBNjcfb3zv+Zo=
## Assets
If you need to serve static assets (e.g. kernel, initrd), `bootcfg` can serve arbitrary assets from the `-assets-path`.
bootcfg.foo/assets/
└── coreos
└── 1053.2.0
├── coreos_production_pxe.vmlinuz
└── coreos_production_pxe_image.cpio.gz
└── 1032.0.0
├── coreos_production_pxe.vmlinuz
└── coreos_production_pxe_image.cpio.gz
If you need to serve static assets (e.g. kernel, initrd), `matchbox` can serve arbitrary assets from the `-assets-path`.
```
matchbox.foo/assets/
└── coreos
└── 1576.5.0
├── coreos_production_pxe.vmlinuz
└── coreos_production_pxe_image.cpio.gz
└── 1153.0.0
├── coreos_production_pxe.vmlinuz
└── coreos_production_pxe_image.cpio.gz
```

View File

@@ -1,176 +0,0 @@
# bootcfg
`bootcfg` is an HTTP and gRPC service that renders signed [Ignition configs](https://coreos.com/ignition/docs/latest/what-is-ignition.html), [cloud-configs](https://coreos.com/os/docs/latest/cloud-config.html), network boot configs, and metadata to machines to create CoreOS clusters. `bootcfg` maintains **Group** definitions which match machines to *profiles* based on labels (e.g. MAC address, UUID, stage, region). A **Profile** is a named set of config templates (e.g. iPXE, GRUB, Ignition config, Cloud-Config, generic configs). The aim is to use CoreOS Linux's early-boot capabilities to provision CoreOS machines.
Network boot endpoints provide iPXE, GRUB, and [Pixiecore](https://github.com/danderson/pixiecore/blob/master/README.api.md) support. `bootcfg` can be deployed as a binary, as an [appc](https://github.com/appc/spec) container with rkt, or as a Docker container.
<img src='img/overview.png' class="img-center" alt="Bootcfg Overview"/>
## Getting Started
Get started running `bootcfg` on your Linux machine, with rkt or Docker.
* [bootcfg with rkt](getting-started-rkt.md)
* [bootcfg with Docker](getting-started-docker.md)
## Flags
See [configuration](config.md) flags and variables.
## API
* [HTTP API](api.md)
* [gRPC API](https://godoc.org/github.com/coreos/coreos-baremetal/bootcfg/client)
## Data
A `Store` stores machine Groups, Profiles, and associated Ignition configs, cloud-configs, and generic configs. By default, `bootcfg` uses a `FileStore` to search a `-data-path` for these resources.
Prepare `/var/lib/bootcfg` with `groups`, `profile`, `ignition`, `cloud`, and `generic` subdirectories. You may wish to keep these files under version control.
```
/var/lib/bootcfg
├── cloud
│   ├── cloud.yaml.tmpl
│   └── worker.sh.tmpl
├── ignition
│   ├── raw.ign
│   ├── etcd.yaml.tmpl
│   └── simple.yaml.tmpl
├── generic
│   ├── config.yaml
│   ├── setup.cfg
│   └── datacenter-1.tmpl
├── groups
│   ├── default.json
│   ├── node1.json
│   └── us-central1-a.json
└── profiles
    ├── etcd.json
    └── worker.json
```
The [examples](../examples) directory is a valid data directory with some pre-defined configs. Note that `examples/groups` contains many possible groups in nested directories for demo purposes (tutorials pick one to mount). Your machine groups should be kept directly inside the `groups` directory as shown above.
### Profiles
Profiles reference an Ignition config, Cloud-Config, and/or generic config by name and define network boot settings.
```json
{
    "id": "etcd",
    "name": "CoreOS with etcd2",
    "cloud_id": "",
    "ignition_id": "etcd.yaml",
    "generic_id": "some-service.cfg",
    "boot": {
        "kernel": "/assets/coreos/1053.2.0/coreos_production_pxe.vmlinuz",
        "initrd": ["/assets/coreos/1053.2.0/coreos_production_pxe_image.cpio.gz"],
        "cmdline": {
            "coreos.config.url": "http://bootcfg.foo:8080/ignition?uuid=${uuid}&mac=${net0/mac:hexhyp}",
            "coreos.autologin": "",
            "coreos.first_boot": "1"
        }
    }
}
```
The `"boot"` settings will be used to render configs to network boot programs such as iPXE, GRUB, or Pixiecore. You may reference remote kernel and initrd assets or [local assets](#assets).
To use Ignition, set the `coreos.config.url` kernel option to reference the `bootcfg` [Ignition endpoint](api.md#ignition-config), which will render the `ignition_id` file. Be sure to add the `coreos.first_boot` option as well.
To use cloud-config, set the `cloud-config-url` kernel option to reference the `bootcfg` [Cloud-Config endpoint](api.md#cloud-config), which will render the `cloud_id` file.
### Groups
Groups define selectors which match zero or more machines. Machine(s) matching a group will boot and provision according to the group's `Profile`.
Create a group definition with a `Profile` to be applied, selectors for matching machines, and any `metadata` needed to render templated configs. For example `/var/lib/bootcfg/groups/node1.json` matches a single machine with MAC address `52:54:00:89:d8:10`.
```
# /var/lib/bootcfg/groups/node1.json
{
    "name": "node1",
    "profile": "etcd",
    "selector": {
        "mac": "52:54:00:89:d8:10"
    },
    "metadata": {
        "fleet_metadata": "role=etcd,name=node1",
        "etcd_name": "node1",
        "etcd_initial_cluster": "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
    }
}
```
Meanwhile, `/var/lib/bootcfg/groups/proxy.json` acts as the default machine group since it has no selectors.
```json
{
    "name": "etcd-proxy",
    "profile": "etcd-proxy",
    "metadata": {
        "fleet_metadata": "role=etcd-proxy",
        "etcd_initial_cluster": "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
    }
}
```
For example, a request to `/ignition?mac=52:54:00:89:d8:10` would render the Ignition template in the "etcd" `Profile`, with the machine group's metadata. A request to `/ignition` would match the default group (which has no selectors) and render the Ignition in the "etcd-proxy" Profile. Avoid defining multiple default groups as resolution will not be deterministic.
#### Reserved Selectors
Group selectors can use any key/value pairs you find useful. However, several labels have a defined purpose and will be normalized or parsed specially.
* `uuid` - machine UUID
* `mac` - network interface physical address (normalized MAC address)
* `hostname` - hostname reported by a network boot program
* `serial` - serial reported by a network boot program
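As an illustration of normalization (this is not matchbox's actual implementation, just a sketch of the equivalent transformation), a hyphenated or uppercase MAC maps to the canonical lowercase, colon-separated form:

```sh
# Sketch: canonicalize a MAC address to lowercase colon form.
normalize_mac() {
  echo "$1" | tr 'A-F' 'a-f' | tr '-' ':'
}
normalize_mac "52-54-00-89-D8-10"   # -> 52:54:00:89:d8:10
```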
### Config Templates
Profiles can reference various templated configs. Ignition JSON configs can be generated from [Fuze config](https://github.com/coreos/fuze/blob/master/doc/configuration.md) template files. Cloud-Config template files can be used to render a script or Cloud-Config. Generic template files can be used to render arbitrary untyped configs (experimental). Each template may contain [Go template](https://golang.org/pkg/text/template/) elements which will be rendered with machine group metadata, selectors, and query params.
For details and examples:
* [Ignition Config](ignition.md)
* [Cloud-Config](cloud-config.md)
#### Variables
Within Ignition/Fuze templates, Cloud-Config templates, or generic templates, you can use group metadata, selectors, or request-scoped query params. For example, a request `/generic?mac=52-54-00-89-d8-10&foo=some-param&bar=b` would match the `node1.json` machine group shown above. If the group's profile ("etcd") referenced a generic template, the following variables could be used.
```
# Untyped generic config file
# Selector
{{.mac}}                   # 52:54:00:89:d8:10 (normalized)
# Metadata
{{.etcd_name}}             # node1
{{.fleet_metadata}}        # role=etcd,name=node1
# Query
{{.request.query.mac}}     # 52:54:00:89:d8:10 (normalized)
{{.request.query.foo}}     # some-param
{{.request.query.bar}}     # b
# Special Addition
{{.request.raw_query}}     # mac=52:54:00:89:d8:10&foo=some-param&bar=b
```
Note that `.request` is reserved for these purposes so group metadata with data nested under a top level "request" key will be overwritten.
## Assets
`bootcfg` can serve `-assets-path` static assets at `/assets`. This is helpful for reducing bandwidth usage when serving the kernel and initrd to network booted machines. The default assets-path is `/var/lib/bootcfg/assets` or you can pass `-assets-path=""` to disable asset serving.
```
bootcfg.foo/assets/
└── coreos
    └── VERSION
        ├── coreos_production_pxe.vmlinuz
        └── coreos_production_pxe_image.cpio.gz
```
For example, a `Profile` might refer to a local asset `/assets/coreos/VERSION/coreos_production_pxe.vmlinuz` instead of `http://stable.release.core-os.net/amd64-usr/VERSION/coreos_production_pxe.vmlinuz`.
See the [get-coreos](../scripts/README.md#get-coreos) script to quickly download, verify, and place CoreOS assets.
## Network
`bootcfg` does not implement or exec a DHCP/TFTP server. Read [network setup](network-setup.md) or use the [coreos/dnsmasq](../contrib/dnsmasq) image if you need a quick DHCP, proxyDHCP, TFTP, or DNS setup.
## Going Further
* [gRPC API Usage](config.md#grpc-api)
* [Metadata](api.md#metadata)
* OpenPGP [Signing](api.md#openpgp-signatures)

View File

@@ -0,0 +1,147 @@
# Upgrading self-hosted Kubernetes
CoreOS Kubernetes clusters "self-host" the apiserver, scheduler, controller-manager, flannel, kube-dns, and kube-proxy as Kubernetes pods, like ordinary applications (except with taint tolerations). This allows upgrades to be performed in-place using (mostly) `kubectl` as an alternative to re-provisioning.
Let's upgrade a Kubernetes v1.6.6 cluster to v1.6.7 as an example.
## Stability
This guide shows how to attempt an in-place upgrade of a Kubernetes cluster set up via the [examples](../examples). It does not provide exact diffs or migrations between breaking changes, does not match the stability of a fresh re-provision, and makes no guarantees. Evaluate whether in-place updates are appropriate for your Kubernetes cluster and be prepared to perform a fresh re-provision if something goes wrong, especially between Kubernetes minor releases (e.g. 1.6 to 1.7).
Matchbox Kubernetes examples provide a vanilla Kubernetes cluster with only free (as in freedom and cost) software components. If you require curated updates, migrations, or guarantees for production, consider [Tectonic](https://coreos.com/tectonic/) by CoreOS.
**Note: Tectonic users should NOT manually upgrade. Follow the [Tectonic docs](https://coreos.com/tectonic/docs/latest/admin/upgrade.html)**
## Inspect
Show the control plane daemonsets and deployments which will need to be updated.
```sh
$ kubectl get daemonsets -n=kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE-SELECTOR AGE
kube-apiserver 1 1 1 1 1 node-role.kubernetes.io/master= 21d
kube-etcd-network-checkpointer 1 1 1 1 1 node-role.kubernetes.io/master= 21d
kube-flannel 4 4 4 4 4 <none> 21d
kube-proxy 4 4 4 4 4 <none> 21d
pod-checkpointer 1 1 1 1 1 node-role.kubernetes.io/master= 21d
$ kubectl get deployments -n=kube-system
NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-controller-manager   2         2         2            2           21d
kube-dns                  1         1         1            1           21d
kube-scheduler            2         2         2            2           21d
```
Check the current Kubernetes version.
```sh
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6+coreos.1", GitCommit:"42a5c8b99c994a51d9ceaed5d0254f177e97d419", GitTreeState:"clean", BuildDate:"2017-06-21T01:10:07Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
```
```sh
$ kubectl get nodes
NAME STATUS AGE VERSION
node1.example.com Ready 21d v1.6.6+coreos.1
node2.example.com Ready 21d v1.6.6+coreos.1
node3.example.com Ready 21d v1.6.6+coreos.1
node4.example.com Ready 21d v1.6.6+coreos.1
```
## Strategy
Update control plane components with `kubectl`. Then update the `kubelet` systemd unit on each host.
Prepare the changes to the Kubernetes manifests by generating assets for a target Kubernetes cluster (e.g. bootkube `v0.5.0` produces Kubernetes 1.6.6 and bootkube `v0.5.1` produces Kubernetes 1.6.7). Choose the tool used during creation of the cluster:
* [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube) - install the `bootkube` binary for the target version and render assets
* [poseidon/bootkube-terraform](https://github.com/poseidon/bootkube-terraform) - checkout the tag for the target version and `terraform apply` to render assets
Diff the generated assets against the assets used when originally creating the cluster. In simple cases, you may only need to bump the hyperkube image. In more complex cases, some manifests may have new flags or configuration.
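A quick way to spot what changed is to diff the freshly rendered assets against the originals. A rehearsed sketch (the scratch manifest trees and the hyperkube image names are assumptions for illustration; on a real upgrade the two directories would come from the old and new bootkube renders):

```sh
# Diff old vs. new rendered manifests; rehearsed here on a scratch pair.
old=$(mktemp -d); new=$(mktemp -d)
echo 'image: quay.io/coreos/hyperkube:v1.6.6_coreos.1' > "$old/kube-apiserver.yaml"
echo 'image: quay.io/coreos/hyperkube:v1.6.7_coreos.0' > "$new/kube-apiserver.yaml"
diff -ru "$old" "$new" || true   # diff exits non-zero when trees differ
```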
## Control Plane
### kube-apiserver
Edit the `kube-apiserver` daemonset to rolling update the apiserver.
```sh
$ kubectl edit daemonset kube-apiserver -n=kube-system
```
If you only have one apiserver, the cluster may be momentarily unavailable.
### kube-scheduler
Edit the `kube-scheduler` deployment to rolling update the scheduler.
```sh
$ kubectl edit deployments kube-scheduler -n=kube-system
```
### kube-controller-manager
Edit the `kube-controller-manager` deployment to rolling update the controller manager.
```sh
$ kubectl edit deployments kube-controller-manager -n=kube-system
```
### kube-proxy
Edit the `kube-proxy` daemonset to rolling update the proxy.
```sh
$ kubectl edit daemonset kube-proxy -n=kube-system
```
### Others
If there are changes between the prior version and target version manifests, update the `kube-dns` deployment, `kube-flannel` daemonset, or `pod-checkpointer` daemonset.
### Verify
Verify the control plane components updated.
```sh
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.7+coreos.0", GitCommit:"c8c505ee26ac3ab4d1dff506c46bc5538bc66733", GitTreeState:"clean", BuildDate:"2017-07-06T17:38:33Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
```
```sh
$ kubectl get nodes
NAME STATUS AGE VERSION
node1.example.com Ready 21d v1.6.7+coreos.0
node2.example.com Ready 21d v1.6.7+coreos.0
node3.example.com Ready 21d v1.6.7+coreos.0
node4.example.com Ready 21d v1.6.7+coreos.0
```
## kubelet
SSH to each node and update `/etc/kubernetes/kubelet.env`. Restart the `kubelet.service`.
```sh
ssh core@node1.example.com
sudo vim /etc/kubernetes/kubelet.env
sudo systemctl restart kubelet
```
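The edit itself can be scripted. A sketch, rehearsed against a scratch copy and assuming `kubelet.env` pins an image tag such as `KUBELET_IMAGE_TAG=v1.6.6_coreos.1` (check your node's actual file before scripting this):

```sh
# Bump the pinned kubelet image tag (scratch copy for illustration; on a
# real node the target is /etc/kubernetes/kubelet.env, edited with sudo,
# followed by: sudo systemctl restart kubelet).
env_file=$(mktemp)
cat > "$env_file" <<'EOF'
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_IMAGE_TAG=v1.6.6_coreos.1
EOF
sed 's/v1\.6\.6_coreos\.1/v1.6.7_coreos.0/' "$env_file" > "$env_file.new" \
  && mv "$env_file.new" "$env_file"
grep KUBELET_IMAGE_TAG "$env_file"   # -> KUBELET_IMAGE_TAG=v1.6.7_coreos.0
```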
### Verify
Verify the kubelet and kube-proxy of each node updated.
```sh
$ kubectl get nodes -o yaml | grep 'kubeletVersion\|kubeProxyVersion'
kubeProxyVersion: v1.6.7+coreos.0
kubeletVersion: v1.6.7+coreos.0
kubeProxyVersion: v1.6.7+coreos.0
kubeletVersion: v1.6.7+coreos.0
kubeProxyVersion: v1.6.7+coreos.0
kubeletVersion: v1.6.7+coreos.0
kubeProxyVersion: v1.6.7+coreos.0
kubeletVersion: v1.6.7+coreos.0
```
Kubernetes control plane components have been successfully updated!

View File

@@ -1,106 +1,139 @@
# Kubernetes
# Self-Hosted Kubernetes
The self-hosted Kubernetes example provisions a 3 node Kubernetes v1.3.0-beta.2 cluster with etcd, flannel, and a special "runonce" host Kubelet. The CoreOS [bootkube](https://github.com/coreos/bootkube) tool is used to bootstrap the kubelet, apiserver, scheduler, and controller-manager as pods, which can be managed via kubectl. `bootkube start` is run on any controller (master) to create a temporary control plane and start Kubernetes components initially. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
## Experimental
Self-hosted Kubernetes is under very active development by CoreOS.
The Kubernetes example provisions a 3 node Kubernetes v1.8.5 cluster. [bootkube](https://github.com/kubernetes-incubator/bootkube) is run once on a controller node to bootstrap Kubernetes control plane components as pods before exiting. An etcd3 cluster across controllers is used to back Kubernetes.
## Requirements
Ensure that you've gone through the [bootcfg with rkt](getting-started-rkt.md) guide and understand the basics. In particular, you should be able to:
Ensure that you've gone through the [matchbox with rkt](getting-started-rkt.md) or [matchbox with docker](getting-started-docker.md) guide and understand the basics. In particular, you should be able to:
* Use rkt to start `bootcfg`
* Use rkt or Docker to start `matchbox`
* Create a network boot environment with `coreos/dnsmasq`
* Create the example libvirt client VMs
* `/etc/hosts` entries for `node[1-3].example.com`
Build and install [bootkube](https://github.com/coreos/bootkube/releases) v0.1.1.
Install [bootkube](https://github.com/kubernetes-incubator/bootkube/releases) v0.9.1 and add it on your $PATH.
```sh
$ bootkube version
Version: v0.9.1
```
## Examples
The [examples](../examples) statically assign IP addresses to libvirt client VMs created by `scripts/libvirt`. The examples can be used for physical machines if you update the MAC/IP addresses. See [network setup](network-setup.md) and [deployment](deployment.md).
The [examples](../examples) statically assign IP addresses to libvirt client VMs created by `scripts/libvirt`. The examples can be used for physical machines if you update the MAC addresses. See [network setup](network-setup.md) and [deployment](deployment.md).
* [bootkube](../examples/groups/bootkube) - iPXE boot a bootkube-ready cluster (use rkt)
* [bootkube-install](../examples/groups/bootkube-install) - Install a bootkube-ready cluster (use rkt)
* [bootkube](../examples/groups/bootkube) - iPXE boot a self-hosted Kubernetes cluster
* [bootkube-install](../examples/groups/bootkube-install) - Install a self-hosted Kubernetes cluster
### Assets
## Assets
Download the CoreOS image assets referenced in the target [profile](../examples/profiles).
Download the CoreOS Container Linux image assets referenced in the target [profile](../examples/profiles).
./scripts/get-coreos alpha 1053.2.0 ./examples/assets
```sh
$ ./scripts/get-coreos stable 1576.5.0 ./examples/assets
```
Add your SSH public key to each machine group definition [as shown](../examples/README.md#ssh-keys).
{
"profile": "bootkube-worker",
"metadata": {
"ssh_authorized_keys": ["ssh-rsa pub-key-goes-here"]
}
```json
{
"profile": "bootkube-worker",
"metadata": {
"ssh_authorized_keys": ["ssh-rsa pub-key-goes-here"]
}
}
```
Use the `bootkube` tool to render Kubernetes manifests and credentials into an `--asset-dir`. Later, `bootkube` will schedule these manifests during bootstrapping and the credentials will be used to access your cluster.
Use the `bootkube` tool to render Kubernetes manifests and credentials into an `--asset-dir`. Set the `--network-provider` to `flannel` (default) or `experimental-calico` if desired.
bootkube render --asset-dir=assets --api-servers=https://172.15.0.21:443 --etcd-servers=http://172.15.0.21:2379 --api-server-alt-names=IP=172.15.0.21
```sh
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --etcd-servers=https://node1.example.com:2379
```
Later, a controller will use `bootkube` to bootstrap these manifests and the credentials will be used to access your cluster.
## Containers
Run the latest `bootcfg` ACI with rkt and the `bootkube` example (or `bootkube-install`).
Use rkt or docker to start `matchbox` and mount the desired example resources. Create a network boot environment and power-on your machines. Revisit [matchbox with rkt](getting-started-rkt.md) or [matchbox with Docker](getting-started-docker.md) for help.
sudo rkt run --net=metal0:IP=172.15.0.2 --mount volume=data,target=/var/lib/bootcfg --volume data,kind=host,source=$PWD/examples --mount volume=groups,target=/var/lib/bootcfg/groups --volume groups,kind=host,source=$PWD/examples/groups/bootkube quay.io/coreos/bootcfg:latest -- -address=0.0.0.0:8080 -log-level=debug
Create a network boot environment and power-on your machines. Revisit [bootcfg with rkt](getting-started-rkt.md) for help.
Client machines should boot and provision themselves. Local client VMs should network boot CoreOS and become available via SSH in about 1 minute. If you chose `bootkube-install`, notice that machines install CoreOS and then reboot (in libvirt, you must hit "power" again). Time to network boot and provision physical hardware depends on a number of factors (POST duration, boot device iteration, network speed, etc.).
Client machines should boot and provision themselves. Local client VMs should network boot Container Linux and become available via SSH in about 1 minute. If you chose `bootkube-install`, notice that machines install Container Linux and then reboot (in libvirt, you must hit "power" again). Time to network boot and provision physical hardware depends on a number of factors (POST duration, boot device iteration, network speed, etc.).
## bootkube
We're ready to use [bootkube](https://github.com/coreos/bootkube) to create a temporary control plane and bootstrap a self-hosted Kubernetes cluster.
We're ready to use bootkube to create a temporary control plane and bootstrap a self-hosted Kubernetes cluster.
Secure copy the `kubeconfig` to `/etc/kubernetes/kubeconfig` on **every** node (i.e. repeat for 172.15.0.22, 172.15.0.23).
Secure copy the etcd TLS assets to `/etc/ssl/etcd/*` on **every controller** node.
scp assets/auth/kubeconfig core@172.15.0.21:/home/core/kubeconfig
ssh core@172.15.0.21
sudo mv kubeconfig /etc/kubernetes/kubeconfig
```sh
for node in 'node1'; do
scp -r assets/tls/etcd-* assets/tls/etcd core@$node.example.com:/home/core/
ssh core@$node.example.com 'sudo mkdir -p /etc/ssl/etcd && sudo mv etcd-* etcd /etc/ssl/etcd/ && sudo chown -R etcd:etcd /etc/ssl/etcd && sudo chmod -R 500 /etc/ssl/etcd/'
done
```
Secure copy the `bootkube` generated assets to any one of the master nodes.
Secure copy the `kubeconfig` to `/etc/kubernetes/kubeconfig` on **every node** to path-activate the `kubelet.service`.
scp -r assets core@172.15.0.21:/home/core/assets
```sh
for node in 'node1' 'node2' 'node3'; do
scp assets/auth/kubeconfig core@$node.example.com:/home/core/kubeconfig
ssh core@$node.example.com 'sudo mv kubeconfig /etc/kubernetes/kubeconfig'
done
```
SSH to the chosen master node and bootstrap the cluster with `bootkube-start`.
Secure copy the `bootkube` generated assets to **any controller** node and run `bootkube-start` (takes ~10 minutes).
```sh
scp -r assets core@node1.example.com:/home/core
ssh core@node1.example.com 'sudo mv assets /opt/bootkube/assets && sudo systemctl start bootkube'
```
Watch the Kubernetes control plane bootstrapping with the bootkube temporary api-server. You will see quite a bit of output.
```sh
$ ssh core@node1.example.com 'journalctl -f -u bootkube'
[ 299.241291] bootkube[5]: Pod Status: kube-api-checkpoint Running
[ 299.241618] bootkube[5]: Pod Status: kube-apiserver Running
[ 299.241804] bootkube[5]: Pod Status: kube-scheduler Running
[ 299.241993] bootkube[5]: Pod Status: kube-controller-manager Running
[ 299.311743] bootkube[5]: All self-hosted control plane components successfully started
```
[Verify](#verify) the Kubernetes cluster is accessible once complete. Then install **important** cluster [addons](cluster-addons.md). You may cleanup the `bootkube` assets on the node, but you should keep the copy on your laptop. It contains a `kubeconfig` used to access the cluster.
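If you choose to clean up the node-side copy, a minimal sketch (assuming the assets were moved to `/opt/bootkube/assets` as above; adjust the hostname and path to your layout):

```sh
# remove the copied bootstrap assets from the controller node
# (keep the copy on your laptop -- it contains the kubeconfig)
$ ssh core@node1.example.com 'sudo rm -rf /opt/bootkube/assets'
```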
## Verify
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your laptop. Use the generated kubeconfig to access the Kubernetes cluster. Verify that the cluster is accessible and that the apiserver, scheduler, and controller-manager are running as pods.
```sh
$ export KUBECONFIG=assets/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
node1.example.com Ready 11m v1.8.5
node2.example.com Ready 11m v1.8.5
node3.example.com Ready 11m v1.8.5
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-apiserver-zd1k3 1/1 Running 0 7m
kube-system kube-controller-manager-762207937-2ztxb 1/1 Running 0 7m
kube-system kube-controller-manager-762207937-vf6bk 1/1 Running 1 7m
kube-system kube-dns-2431531914-qc752 3/3 Running 0 7m
kube-system kube-flannel-180mz 2/2 Running 1 7m
kube-system kube-flannel-jjr0x 2/2 Running 0 7m
kube-system kube-flannel-mlr9w 2/2 Running 0 7m
kube-system kube-proxy-0jlq7 1/1 Running 0 7m
kube-system kube-proxy-k4mjl 1/1 Running 0 7m
kube-system kube-proxy-l4xrd 1/1 Running 0 7m
kube-system kube-scheduler-1873228005-5d2mk 1/1 Running 0 7m
kube-system kube-scheduler-1873228005-s4w27 1/1 Running 0 7m
kube-system pod-checkpointer-hb960 1/1 Running 0 7m
kube-system pod-checkpointer-hb960-node1.example.com 1/1 Running 0 6m
```
Try deleting pods to see that the cluster is resilient to failures and machine restarts (Container Linux auto-updates).
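As a quick smoke test (the pod name below is illustrative; use a name from your own `kubectl get pods` output), delete a control plane pod and watch its Deployment recreate it:

```sh
# pick a scheduler pod from your cluster, then delete it
$ kubectl get pods -n kube-system | grep kube-scheduler
$ kubectl delete pod -n kube-system kube-scheduler-1873228005-5d2mk
# a replacement pod should appear within seconds
$ kubectl get pods -n kube-system | grep kube-scheduler
```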
## Addons
Install **important** cluster [addons](cluster-addons.md).
## Going further
[Learn](bootkube-upgrades.md) to upgrade a self-hosted Kubernetes cluster.

# Cloud config
**Note:** Please migrate to [Container Linux Configs](container-linux-config.md). Cloud-Config support will be removed in the future.
CoreOS Cloud-Config is a system for configuring machines with a Cloud-Config file or executable script from user-data. Cloud-Config runs in userspace on each boot and implements a subset of the [cloud-init spec](http://cloudinit.readthedocs.org/en/latest/topics/format.html#cloud-config-data). See the cloud-config [docs](https://coreos.com/os/docs/latest/cloud-config.html) for details.
Cloud-Config template files can be added in `/var/lib/matchbox/cloud` or in a `cloud` subdirectory of a custom `-data-path`. Template files may contain [Go template](https://golang.org/pkg/text/template/) elements which will be evaluated with group metadata, selectors, and query params.
```
/var/lib/matchbox
├── cloud
│   ├── cloud.yaml
│   └── script.sh
├── ignition
└── profiles
```
## Reference
Reference a Cloud-Config in a [Profile](matchbox.md#profiles) with `cloud_id`. When PXE booting, use the kernel option `cloud-config-url` to point to `matchbox` [cloud-config endpoint](api.md#cloud-config).
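For example, a hypothetical profile (the id, name, and asset paths are illustrative) that serves a Cloud-Config via `cloud_id` and points the kernel at the cloud-config endpoint:

```json
{
  "id": "cloud-worker",
  "name": "CoreOS worker (Cloud-Config)",
  "cloud_id": "cloud.yaml",
  "boot": {
    "kernel": "/assets/coreos/1576.5.0/coreos_production_pxe.vmlinuz",
    "initrd": ["/assets/coreos/1576.5.0/coreos_production_pxe_image.cpio.gz"],
    "args": [
      "initrd=coreos_production_pxe_image.cpio.gz",
      "cloud-config-url=http://matchbox.example.com:8080/cloud?uuid=${uuid}&mac=${mac:hexhyp}"
    ]
  }
}
```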
## Examples
Here is an example Cloud-Config which starts some units and writes a file.
<!-- {% raw %} -->
```yaml
#cloud-config
coreos:
units:
- name: etcd2.service
command: start
- name: fleet.service
command: start
write_files:
- path: "/home/core/welcome"
owner: "core"
permissions: "0644"
content: |
{{.greeting}}
```
<!-- {% endraw %} -->
The Cloud-Config [Validator](https://coreos.com/validate/) is also useful for checking your Cloud-Config files for errors.

## Cluster Addons
Kubernetes clusters run cluster addons atop Kubernetes itself. Addons may be considered essential for bootstrapping (non-optional), important (highly recommended), or optional.
## Essential
Several addons are considered essential. CoreOS cluster creation tools ensure these addons are included. Kubernetes clusters deployed via the Matchbox examples or using our Terraform Modules include these addons as well.
### kube-proxy
`kube-proxy` is deployed as a DaemonSet.
### kube-dns
`kube-dns` is deployed as a Deployment.
## Important
### Container Linux Update Operator
The [Container Linux Update Operator](https://github.com/coreos/container-linux-update-operator) (i.e. CLUO) coordinates reboots of auto-updating Container Linux nodes so that one node reboots at a time and nodes are drained before reboot. CLUO enables the auto-update behavior Container Linux clusters are known for, but does it in a Kubernetes native way. Deploying CLUO is strongly recommended.
Create the `update-operator` deployment and `update-agent` DaemonSet.
```sh
kubectl apply -f examples/addons/cluo/update-operator.yaml
kubectl apply -f examples/addons/cluo/update-agent.yaml
```
*Note, CLUO replaces `locksmithd` reboot coordination. The `update_engine` systemd unit on hosts still performs the Container Linux update check, download, and install to the inactive partition.*

# Flags and variables
Configuration arguments can be provided as flags or as environment variables.
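For example, these two invocations are equivalent (each flag maps to an upper-cased, `MATCHBOX_`-prefixed environment variable, per the table below):

```sh
$ ./bin/matchbox -address=0.0.0.0:8080 -log-level=debug
$ MATCHBOX_ADDRESS=0.0.0.0:8080 MATCHBOX_LOG_LEVEL=debug ./bin/matchbox
```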
| flag | variable | default | example |
|------|----------|---------|---------|
| -address | MATCHBOX_ADDRESS | 127.0.0.1:8080 | 0.0.0.0:8080 |
| -log-level | MATCHBOX_LOG_LEVEL | info | critical, error, warning, notice, info, debug |
| -data-path | MATCHBOX_DATA_PATH | /var/lib/matchbox | ./examples |
| -assets-path | MATCHBOX_ASSETS_PATH | /var/lib/matchbox/assets | ./examples/assets |
| -rpc-address | MATCHBOX_RPC_ADDRESS | (gRPC API disabled) | 0.0.0.0:8081 |
| -cert-file | MATCHBOX_CERT_FILE | /etc/matchbox/server.crt | ./examples/etc/matchbox/server.crt |
| -key-file | MATCHBOX_KEY_FILE | /etc/matchbox/server.key | ./examples/etc/matchbox/server.key |
| -ca-file | MATCHBOX_CA_FILE | /etc/matchbox/ca.crt | ./examples/etc/matchbox/ca.crt |
| -key-ring-path | MATCHBOX_KEY_RING_PATH | (no key ring) | ~/.secrets/vault/matchbox/secring.gpg |
| (no flag) | MATCHBOX_PASSPHRASE | (no passphrase) | "secret passphrase" |
## Files and directories
| Data | Default Location |
|:---------|:--------------------------------------------------|
| data | /var/lib/matchbox/{profiles,groups,ignition,cloud,generic} |
| assets | /var/lib/matchbox/assets |
| gRPC API TLS Credentials | Default Location |
|:---------|:--------------------------------------------------|
| CA certificate | /etc/matchbox/ca.crt |
| Server certificate | /etc/matchbox/server.crt |
| Server private key | /etc/matchbox/server.key |
| Client certificate | /etc/matchbox/client.crt |
| Client private key | /etc/matchbox/client.key |
## Version
```sh
$ ./bin/matchbox -version
$ sudo rkt run quay.io/coreos/matchbox:latest -- -version
$ sudo docker run quay.io/coreos/matchbox:latest -version
```
## Usage
Run the binary.
```sh
$ ./bin/matchbox -address=0.0.0.0:8080 -log-level=debug -data-path=examples -assets-path=examples/assets
```
Run the latest ACI with rkt.
```sh
$ sudo rkt run --mount volume=assets,target=/var/lib/matchbox/assets --volume assets,kind=host,source=$PWD/examples/assets quay.io/coreos/matchbox:latest -- -address=0.0.0.0:8080 -log-level=debug
```
Run the latest Docker image.
```sh
$ sudo docker run -p 8080:8080 --rm -v $PWD/examples/assets:/var/lib/matchbox/assets:Z quay.io/coreos/matchbox:latest -address=0.0.0.0:8080 -log-level=debug
```
### With examples
Mount `examples` to pre-load the [example](../examples/README.md) machine groups and profiles. Run the container with rkt,
```sh
$ sudo rkt run --net=metal0:IP=172.18.0.2 --mount volume=data,target=/var/lib/matchbox --volume data,kind=host,source=$PWD/examples --mount volume=groups,target=/var/lib/matchbox/groups --volume groups,kind=host,source=$PWD/examples/groups/etcd quay.io/coreos/matchbox:latest -- -address=0.0.0.0:8080 -log-level=debug
```
or with Docker.
```sh
$ sudo docker run -p 8080:8080 --rm -v $PWD/examples:/var/lib/matchbox:Z -v $PWD/examples/groups/etcd:/var/lib/matchbox/groups:Z quay.io/coreos/matchbox:latest -address=0.0.0.0:8080 -log-level=debug
```
### With gRPC API
The gRPC API allows clients with a TLS client certificate and key to make RPC requests to programmatically create or update `matchbox` resources. The API can be enabled with the `-rpc-address` flag and by providing a TLS server certificate and key with `-cert-file` and `-key-file` and a CA certificate for authenticating clients with `-ca-file`.
Run the binary with TLS credentials from `examples/etc/matchbox`.
```sh
$ ./bin/matchbox -address=0.0.0.0:8080 -rpc-address=0.0.0.0:8081 -log-level=debug -data-path=examples -assets-path=examples/assets -cert-file examples/etc/matchbox/server.crt -key-file examples/etc/matchbox/server.key -ca-file examples/etc/matchbox/ca.crt
```
Clients, such as `bootcmd`, verify the server's certificate with a CA bundle passed via `-ca-file` and present a client certificate and key via `-cert-file` and `-key-file` to call the gRPC API.
```sh
$ ./bin/bootcmd profile list --endpoints 127.0.0.1:8081 --ca-file examples/etc/matchbox/ca.crt --cert-file examples/etc/matchbox/client.crt --key-file examples/etc/matchbox/client.key
```
### With rkt
Run the ACI with rkt and TLS credentials from `examples/etc/matchbox`.
```sh
$ sudo rkt run --net=metal0:IP=172.18.0.2 --mount volume=data,target=/var/lib/matchbox --volume data,kind=host,source=$PWD/examples,readOnly=true --mount volume=config,target=/etc/matchbox --volume config,kind=host,source=$PWD/examples/etc/matchbox --mount volume=groups,target=/var/lib/matchbox/groups --volume groups,kind=host,source=$PWD/examples/groups/etcd quay.io/coreos/matchbox:latest -- -address=0.0.0.0:8080 -rpc-address=0.0.0.0:8081 -log-level=debug
```
A `bootcmd` client can call the gRPC API running at the IP used in the rkt example.
```sh
$ ./bin/bootcmd profile list --endpoints 172.18.0.2:8081 --ca-file examples/etc/matchbox/ca.crt --cert-file examples/etc/matchbox/client.crt --key-file examples/etc/matchbox/client.key
```
### With docker
Run the Docker image with TLS credentials from `examples/etc/matchbox`.
```sh
$ sudo docker run -p 8080:8080 -p 8081:8081 --rm -v $PWD/examples:/var/lib/matchbox:Z -v $PWD/examples/etc/matchbox:/etc/matchbox:Z,ro -v $PWD/examples/groups/etcd:/var/lib/matchbox/groups:Z quay.io/coreos/matchbox:latest -address=0.0.0.0:8080 -rpc-address=0.0.0.0:8081 -log-level=debug
```
A `bootcmd` client can call the gRPC API running at the IP used in the Docker example.
```sh
$ ./bin/bootcmd profile list --endpoints 127.0.0.1:8081 --ca-file examples/etc/matchbox/ca.crt --cert-file examples/etc/matchbox/client.crt --key-file examples/etc/matchbox/client.key
```
### With OpenPGP [Signing](openpgp.md)
Run the binary with a test key.
```sh
$ export MATCHBOX_PASSPHRASE=test
$ ./bin/matchbox -address=0.0.0.0:8080 -key-ring-path matchbox/sign/fixtures/secring.gpg -data-path=examples -assets-path=examples/assets
```
Run the ACI with a test key.
```sh
$ sudo rkt run --net=metal0:IP=172.18.0.2 --set-env=MATCHBOX_PASSPHRASE=test --mount volume=secrets,target=/secrets --volume secrets,kind=host,source=$PWD/matchbox/sign/fixtures --mount volume=data,target=/var/lib/matchbox --volume data,kind=host,source=$PWD/examples --mount volume=groups,target=/var/lib/matchbox/groups --volume groups,kind=host,source=$PWD/examples/groups/etcd quay.io/coreos/matchbox:latest -- -address=0.0.0.0:8080 -key-ring-path secrets/secring.gpg
```
Run the Docker image with a test key.
```sh
$ sudo docker run -p 8080:8080 --rm --env MATCHBOX_PASSPHRASE=test -v $PWD/examples:/var/lib/matchbox:Z -v $PWD/examples/groups/etcd:/var/lib/matchbox/groups:Z -v $PWD/matchbox/sign/fixtures:/secrets:Z quay.io/coreos/matchbox:latest -address=0.0.0.0:8080 -log-level=debug -key-ring-path secrets/secring.gpg
```

# Container Linux Configs
A Container Linux Config is a YAML document which declares how Container Linux instances' disks should be provisioned on network boot and first-boot from disk. Configs can declare disk partitions, write files (regular files, systemd units, networkd units, etc.), and configure users. See the Container Linux Config [spec](https://coreos.com/os/docs/latest/configuration.html).
### Ignition
Container Linux Configs are validated and converted to *machine-friendly* Ignition configs (JSON) by matchbox when serving to booting machines. [Ignition](https://coreos.com/ignition/docs/latest/), the provisioning utility shipped in Container Linux, will parse and execute the Ignition config to realize the desired configuration. Matchbox users usually only need to write Container Linux Configs.
*Note: Container Linux directory names are still named "ignition" for historical reasons as outlined below. A future breaking change will rename to "container-linux-config".*
## Adding Container Linux Configs
Container Linux Config templates can be added to the `/var/lib/matchbox/ignition` directory or in an `ignition` subdirectory of a custom `-data-path`. Template files may contain [Go template](https://golang.org/pkg/text/template/) elements which will be evaluated with group metadata, selectors, and query params.
```
/var/lib/matchbox
├── cloud
├── ignition
│   ├── k8s-controller.yaml
│   ├── etcd.yaml
│   ├── k8s-worker.yaml
│   └── raw.ign
└── profiles
```
## Referencing in Profiles
Profiles can include a Container Linux Config for provisioning machines. Specify the Container Linux Config in a [Profile](matchbox.md#profiles) with `ignition_id`. When PXE booting, use the kernel option `coreos.first_boot=1` and `coreos.config.url` to point to the `matchbox` [Ignition endpoint](api.md#ignition-config).
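For example, a hypothetical profile (the id, name, and asset paths are illustrative) that references a Container Linux Config template via `ignition_id` and sets the required kernel options:

```json
{
  "id": "format-disk",
  "name": "Format disk and write a file",
  "ignition_id": "format-disk.yaml.tmpl",
  "boot": {
    "kernel": "/assets/coreos/1576.5.0/coreos_production_pxe.vmlinuz",
    "initrd": ["/assets/coreos/1576.5.0/coreos_production_pxe_image.cpio.gz"],
    "args": [
      "initrd=coreos_production_pxe_image.cpio.gz",
      "coreos.first_boot=1",
      "coreos.config.url=http://matchbox.example.com:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}"
    ]
  }
}
```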
## Examples
Here is an example Container Linux Config template. Variables will be interpreted using group metadata, selectors, and query params. Matchbox will convert the config to Ignition to serve Container Linux machines.
ignition/format-disk.yaml.tmpl:
<!-- {% raw %} -->
```yaml
---
storage:
  disks:
    - device: /dev/sda
      wipe_table: true
      partitions:
        - label: ROOT
  filesystems:
    - name: root
      mount:
        device: "/dev/sda1"
        format: "ext4"
        create:
          force: true
          options:
            - "-LROOT"
  files:
    - filesystem: root
      path: /home/core/foo
      mode: 0644
      user:
        id: 500
      group:
        id: 500
      contents:
        inline: |
          {{.example_contents}}
{{ if index . "ssh_authorized_keys" }}
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        {{ range $element := .ssh_authorized_keys }}
        - {{$element}}
        {{end}}
{{end}}
```
<!-- {% endraw %} -->
The Ignition config response (formatted) to a query `/ignition?label=value` for a Container Linux instance supporting Ignition 2.0.0 would be:
```json
{
  "ignition": {
    "version": "2.0.0",
    "config": {}
  },
  "storage": {
    "disks": [
      {
        "device": "/dev/sda",
        "wipeTable": true,
        "partitions": [
          {
            "label": "ROOT",
            "number": 0,
            "size": 0,
            "start": 0
          }
        ]
      }
    ],
    "filesystems": [
      {
        "name": "root",
        "mount": {
          "device": "/dev/sda1",
          "format": "ext4",
          "create": {
            "force": true,
            "options": [
              "-LROOT"
            ]
          }
        }
      }
    ],
    "files": [
      {
        "filesystem": "root",
        "path": "/home/core/foo",
        "contents": {
          "source": "data:,Example%20file%20contents%0A",
          "verification": {}
        },
        "mode": 420,
        "user": {
          "id": 500
        },
        "group": {
          "id": 500
        }
      }
    ]
  },
  "systemd": {},
  "networkd": {},
  "passwd": {}
}
```
See [examples/ignition](../examples/ignition) for numerous Container Linux Config template examples.
### Raw Ignition
If you prefer to design your own templating solution, raw Ignition files (suffixed with `.ign` or `.ignition`) are served directly.

# Deployment
This guide walks through deploying the `matchbox` service on a Linux host (via RPM, rkt, docker, or binary) or on a Kubernetes cluster.
## Provisioner
`matchbox` is a service for network booting and provisioning machines to create CoreOS Container Linux clusters. `matchbox` should be installed on a provisioner machine (Container Linux or any Linux distribution) or cluster (Kubernetes) which can serve configs to client machines in a lab or datacenter.
Choose one of the supported installation options:
* [CoreOS Container Linux (rkt)](#coreos-container-linux)
* [RPM-based](#rpm-based-distro)
* [Generic Linux (binary)](#generic-linux)
* [With rkt](#rkt)
* [With docker](#docker)
* [Kubernetes Service](#kubernetes)
## Download
Download the latest matchbox [release](https://github.com/coreos/matchbox/releases) to the provisioner host.
```sh
$ wget https://github.com/coreos/matchbox/releases/download/v0.7.1/matchbox-v0.7.1-linux-amd64.tar.gz
$ wget https://github.com/coreos/matchbox/releases/download/v0.7.1/matchbox-v0.7.1-linux-amd64.tar.gz.asc
```
Verify the release has been signed by the [CoreOS App Signing Key](https://coreos.com/security/app-signing-key/).
```sh
$ gpg --keyserver pgp.mit.edu --recv-key 18AD5014C99EF7E3BA5F6CE950BDD3E0FC8A365E
$ gpg --verify matchbox-v0.7.1-linux-amd64.tar.gz.asc matchbox-v0.7.1-linux-amd64.tar.gz
# gpg: Good signature from "CoreOS Application Signing Key <security@coreos.com>"
```
Untar the release.
```sh
$ tar xzvf matchbox-v0.7.1-linux-amd64.tar.gz
$ cd matchbox-v0.7.1-linux-amd64
```
## Install
### RPM-based distro
On an RPM-based provisioner (Fedora 24+), install the `matchbox` RPM from the Copr [repository](https://copr.fedorainfracloud.org/coprs/g/CoreOS/matchbox/) using `dnf`.
```sh
$ sudo dnf copr enable @CoreOS/matchbox
$ sudo dnf install matchbox
```
RPMs are not currently available for CentOS and RHEL (due to Go version). CentOS and RHEL users should follow the Generic Linux section below.
### CoreOS Container Linux
On a Container Linux provisioner, rkt run `matchbox` image with the provided systemd unit.
```sh
$ sudo cp contrib/systemd/matchbox-on-coreos.service /etc/systemd/system/matchbox.service
```
### Generic Linux
Pre-built binaries are available for generic Linux distributions. Copy the `matchbox` static binary to an appropriate location on the host.
```sh
$ sudo cp matchbox /usr/local/bin
```
#### Set up User/Group
The `matchbox` service should be run by a non-root user with access to the `matchbox` data directory (`/var/lib/matchbox`). Create a `matchbox` user and group.
```sh
$ sudo useradd -U matchbox
$ sudo mkdir -p /var/lib/matchbox/assets
$ sudo chown -R matchbox:matchbox /var/lib/matchbox
```
#### Create systemd service
Copy the provided `matchbox` systemd unit file.
```sh
$ sudo cp contrib/systemd/matchbox-local.service /etc/systemd/system/matchbox.service
```
## Customization
Customize matchbox by editing the systemd unit or adding a systemd dropin. Find the complete set of `matchbox` flags and environment variables at [config](config.md).
```sh
$ sudo systemctl edit matchbox
```
By default, the read-only HTTP machine endpoint will be exposed on port **8080**.
```ini
# /etc/systemd/system/matchbox.service.d/override.conf
[Service]
Environment="MATCHBOX_ADDRESS=0.0.0.0:8080"
Environment="MATCHBOX_LOG_LEVEL=debug"
```
A common customization is enabling the gRPC API to allow clients with a TLS client certificate to change machine configs.
```ini
# /etc/systemd/system/matchbox.service.d/override.conf
[Service]
Environment="MATCHBOX_ADDRESS=0.0.0.0:8080"
Environment="MATCHBOX_RPC_ADDRESS=0.0.0.0:8081"
```
The Tectonic [Installer](https://tectonic.com/enterprise/docs/latest/install/bare-metal/index.html) uses this API. Tectonic users with a Container Linux provisioner can start with an example that enables it.
```sh
$ sudo cp contrib/systemd/matchbox-for-tectonic.service /etc/systemd/system/matchbox.service
```
Customize `matchbox` to suit your preferences.
## Firewall
Allow your port choices on the provisioner's firewall so the clients can access the service. Here are the commands for those using `firewalld`:
```sh
$ sudo firewall-cmd --zone=MYZONE --add-port=8080/tcp --permanent
$ sudo firewall-cmd --zone=MYZONE --add-port=8081/tcp --permanent
```
## Generate TLS Certificates
The Matchbox gRPC API allows clients (terraform-provider-matchbox) to create and update Matchbox resources. TLS credentials are needed for client authentication and to establish a secure communication channel. Client machines (those PXE booting) read from the HTTP endpoints and do not require this setup.
The `cert-gen` helper script generates a self-signed CA, server certificate, and client certificate. **Prefer your organization's PKI, if possible.**
Navigate to the `scripts/tls` directory.
```sh
$ cd scripts/tls
```
Export `SAN` to set the Subject Alt Names which should be used in certificates. Provide the fully qualified domain name or IP (discouraged) where Matchbox will be installed.
```sh
# DNS or IP Subject Alt Names where matchbox runs
$ export SAN=DNS.1:matchbox.example.com,IP.1:172.18.0.2
```
Generate a `ca.crt`, `server.crt`, `server.key`, `client.crt`, and `client.key`.
```sh
$ ./cert-gen
```
Move TLS credentials to the matchbox server's default location.
```sh
$ sudo mkdir -p /etc/matchbox
$ sudo cp ca.crt server.crt server.key /etc/matchbox
```
Save `client.crt`, `client.key`, and `ca.crt` for later use (e.g. `~/.matchbox`).
```sh
$ mkdir -p ~/.matchbox
$ cp client.crt client.key ca.crt ~/.matchbox/
```
## Start matchbox
Start the `matchbox` service and enable it if you'd like it to start on every boot.
```sh
$ sudo systemctl daemon-reload
$ sudo systemctl start matchbox
$ sudo systemctl enable matchbox
```
## Verify
Verify the matchbox service is running and can be reached by client machines (those being provisioned).
```sh
$ systemctl status matchbox
$ dig matchbox.example.com
```
Verify you receive a response from the HTTP and API endpoints.
```sh
$ curl http://matchbox.example.com:8080
matchbox
```
If you enabled the gRPC API,
```sh
$ openssl s_client -connect matchbox.example.com:8081 -CAfile /etc/matchbox/ca.crt -cert scripts/tls/client.crt -key scripts/tls/client.key
CONNECTED(00000003)
depth=1 CN = fake-ca
verify return:1
depth=0 CN = fake-server
verify return:1
---
Certificate chain
0 s:/CN=fake-server
i:/CN=fake-ca
---
....
```
## Download Container Linux (optional)
`matchbox` can serve Container Linux images in development or lab environments to reduce bandwidth usage and increase the speed of Container Linux PXE boots and installs to disk.
Download a recent Container Linux [release](https://coreos.com/releases/) with signatures.
```sh
$ ./scripts/get-coreos stable 1576.5.0 . # note the "." 3rd argument
```
Move the images to `/var/lib/matchbox/assets`,
```sh
$ sudo cp -r coreos /var/lib/matchbox/assets
```
```
/var/lib/matchbox/assets/
├── coreos
│   └── 1576.5.0
│   ├── CoreOS_Image_Signing_Key.asc
│   ├── coreos_production_image.bin.bz2
│   ├── coreos_production_image.bin.bz2.sig
│   ├── coreos_production_pxe_image.cpio.gz
│   ├── coreos_production_pxe_image.cpio.gz.sig
│   ├── coreos_production_pxe.vmlinuz
│   └── coreos_production_pxe.vmlinuz.sig
```
and verify the images are accessible.
```sh
$ curl http://matchbox.example.com:8080/assets/coreos/1576.5.0/
<pre>...
```
For large production environments, use a cache proxy or mirror suitable for your environment to serve Container Linux images. See [contrib/squid](../contrib/squid/README.md) for details.
## Network
Review [network setup](https://github.com/coreos/matchbox/blob/master/Documentation/network-setup.md) with your network administrator to set up DHCP, TFTP, and DNS services on your network. At a high level, your goals are to:
* Chainload PXE firmwares to iPXE
* Point iPXE client machines to the `matchbox` iPXE HTTP endpoint `http://matchbox.example.com:8080/boot.ipxe`
* Ensure `matchbox.example.com` resolves to your `matchbox` deployment
CoreOS provides [dnsmasq](https://github.com/coreos/matchbox/tree/master/contrib/dnsmasq) as `quay.io/coreos/dnsmasq`, if you wish to use rkt or Docker.
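For illustration, a minimal dnsmasq config achieving these goals might look like the following sketch (the DHCP range and addresses are placeholders for your subnet; see the [contrib/dnsmasq](https://github.com/coreos/matchbox/tree/master/contrib/dnsmasq) examples for tested configs):
```
# /etc/dnsmasq.conf (illustrative values)
dhcp-range=192.168.1.100,192.168.1.200,30m
enable-tftp
tftp-root=/var/lib/tftpboot
# chainload legacy PXE clients to iPXE
dhcp-userclass=set:ipxe,iPXE
dhcp-boot=tag:#ipxe,undionly.kpxe
# point iPXE clients to the matchbox boot script
dhcp-boot=tag:ipxe,http://matchbox.example.com:8080/boot.ipxe
# resolve the matchbox deployment
address=/matchbox.example.com/192.168.1.2
log-queries
log-dhcp
```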
## rkt
Run the latest or most recent tagged `matchbox` [release](https://github.com/coreos/matchbox/releases) ACI. Trust the [CoreOS App Signing Key](https://coreos.com/security/app-signing-key/) for image signature verification.
```sh
$ mkdir -p /var/lib/matchbox/assets
$ sudo rkt run --net=host --mount volume=data,target=/var/lib/matchbox --volume data,kind=host,source=/var/lib/matchbox quay.io/coreos/matchbox:latest --mount volume=config,target=/etc/matchbox --volume config,kind=host,source=/etc/matchbox,readOnly=true -- -address=0.0.0.0:8080 -rpc-address=0.0.0.0:8081 -log-level=debug
```
Create machine profiles, groups, or Ignition configs by adding files to `/var/lib/matchbox`.
## Docker
Run the latest or most recently tagged `matchbox` [release](https://github.com/coreos/matchbox/releases) Docker image.
```sh
$ mkdir -p /var/lib/matchbox/assets
$ sudo docker run --net=host --rm -v /var/lib/matchbox:/var/lib/matchbox:Z -v /etc/matchbox:/etc/matchbox:Z,ro quay.io/coreos/matchbox:latest -address=0.0.0.0:8080 -rpc-address=0.0.0.0:8081 -log-level=debug
```
Create machine profiles, groups, or Ignition configs by adding files to `/var/lib/matchbox`.
## Kubernetes
*Note: Enhancements to the gRPC API, CLI, and `EtcdStore` backend will improve this deployment strategy in the future.*
Install `matchbox` on a Kubernetes cluster by creating a `Deployment` and `Service` based on the example manifests in [contrib/k8s](../contrib/k8s).
```sh
$ kubectl apply -f contrib/k8s/matchbox-deployment.yaml
$ kubectl apply -f contrib/k8s/matchbox-service.yaml
$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
matchbox 10.3.0.145 <none> 8080/TCP,8081/TCP 46m
```
Example manifests in [contrib/k8s](../contrib/k8s) enable the gRPC API to allow client apps to update matchbox objects. Generate TLS server credentials for `matchbox-rpc.example.com` [as shown](#generate-tls-credentials) and create a Kubernetes secret. Alternately, edit the example manifests if you don't need the gRPC API enabled.
```sh
$ kubectl create secret generic matchbox-rpc --from-file=ca.crt --from-file=server.crt --from-file=server.key
```
Check the deployment, service, and pod status.
```sh
$ kubectl get deployments
$ kubectl get services
$ kubectl get pods
$ kubectl logs POD-NAME
```
Create an Ingress resource to expose the HTTP read-only and gRPC API endpoints. The Ingress example requires the cluster to have a functioning [Nginx Ingress Controller](https://github.com/kubernetes/ingress).
The example manifests use Kubernetes `emptyDir` volumes to back the `matchbox` FileStore (`/var/lib/matchbox`). This doesn't provide long-term persistent storage, so you may wish to mount your machine groups, profiles, and Ignition configs with a [gitRepo](http://kubernetes.io/docs/user-guide/volumes/#gitrepo) volume and host image assets on a file server.
```sh
$ kubectl create -f contrib/k8s/matchbox-ingress.yaml
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
matchbox matchbox.example.com 10.128.0.3,10... 80 29m
matchbox-rpc matchbox-rpc.example.com 10.128.0.3,10... 80, 443 29m
```
Add DNS records `matchbox.example.com` and `matchbox-rpc.example.com` to route traffic to the Ingress Controller.
Verify `http://matchbox.example.com` responds with the text "matchbox" and verify gRPC clients can connect to `matchbox-rpc.example.com:443`.
```sh
$ curl http://matchbox.example.com
$ openssl s_client -connect matchbox-rpc.example.com:443 -CAfile ca.crt -cert client.crt -key client.key
```
## HTTPS
The read-only matchbox HTTP API is also available over HTTPS. To start matchbox in this mode, set the following flags:

| Name           | Type   | Description                              |
|----------------|--------|------------------------------------------|
| -web-ssl       | bool   | Enable HTTPS for the read-only HTTP API  |
| -web-cert-file | string | Path to the server TLS certificate file  |
| -web-key-file  | string | Path to the server TLS key file          |

## Binary
### User/Group
The `matchbox` service should be run by a non-root user with access to the `matchbox` data directory (e.g. `/var/lib/matchbox`). Create a `matchbox` user and group.
```sh
$ sudo useradd -U matchbox
$ sudo mkdir -p /var/lib/matchbox/assets
$ sudo chown -R matchbox:matchbox /var/lib/matchbox
```
Add yourself to the `matchbox` group if you'd like to edit configs directly.
```sh
$ SELF=$(whoami)
$ sudo gpasswd --add $SELF matchbox
```
### Prebuilt
Download a prebuilt binary from the Github [releases](https://github.com/coreos/matchbox/releases).
```sh
$ wget https://github.com/coreos/matchbox/releases/download/VERSION/matchbox-VERSION-linux-amd64.tar.gz
$ wget https://github.com/coreos/matchbox/releases/download/VERSION/matchbox-VERSION-linux-amd64.tar.gz.asc
```
Verify the signature from the [CoreOS App Signing Key](https://coreos.com/security/app-signing-key/).
```sh
$ gpg --keyserver pgp.mit.edu --recv-key 18AD5014C99EF7E3BA5F6CE950BDD3E0FC8A365E
$ gpg --verify matchbox-VERSION-linux-amd64.tar.gz.asc matchbox-VERSION-linux-amd64.tar.gz
# gpg: Good signature from "CoreOS Application Signing Key <security@coreos.com>"
```
Install the `matchbox` static binary to `/usr/local/bin`.
```sh
$ tar xzvf matchbox-VERSION-linux-amd64.tar.gz
$ sudo cp matchbox-VERSION-linux-amd64/matchbox /usr/local/bin
```
### Source
Clone the [matchbox](https://github.com/coreos/matchbox) project into your `$GOPATH` and build `matchbox` from source.
```sh
$ go get -d github.com/coreos/matchbox/cmd/matchbox
$ cd $GOPATH/src/github.com/coreos/matchbox
$ make build
```
Install the `matchbox` static binary to `/usr/local/bin`.
```sh
$ sudo cp bin/matchbox /usr/local/bin
```
### Run
Run the `matchbox` server.
```sh
$ matchbox -version
$ matchbox -address 0.0.0.0:8080
main: starting matchbox HTTP server on 0.0.0.0:8080
```
See [flags and variables](config.md).
### systemd
First, install the `matchbox` binary from a prebuilt binary or from source. Then add and start the example matchbox systemd unit.
```sh
$ sudo cp contrib/systemd/matchbox.service /etc/systemd/system/
$ sudo systemctl daemon-reload
$ sudo systemctl start matchbox.service
```
Check the status and logs.
```sh
$ systemctl status matchbox.service
$ journalctl -u matchbox.service
```
Enable the `matchbox` service if you'd like it to start at boot time.
```sh
$ sudo systemctl enable matchbox.service
```
### Uninstall
```sh
$ sudo systemctl stop matchbox.service
$ sudo make uninstall
```
### Operational notes
* Secrets: Matchbox **can** be run as a public facing service. However, you **must** follow best practices and avoid writing secret material into machine user-data. Instead, load secret materials from an internal secret store.
* Storage: Example manifests use Kubernetes `emptyDir` volumes to store `matchbox` data. Swap those out for a Kubernetes persistent volume if available.

# Development
To develop `matchbox` locally, compile the binary and build the container image.
## Binary
Build the static binary.
```sh
$ make build
```
Test with vendored dependencies.
```sh
$ make test
```
## Container image
Build an ACI `matchbox.aci`.
```sh
$ make aci
```
Alternately, build a Docker image `coreos/matchbox:latest`.
```sh
$ make docker-image
```
## Version
```sh
$ ./bin/matchbox -version
$ sudo rkt --insecure-options=image run matchbox.aci -- -version
$ sudo docker run coreos/matchbox:latest -version
```
## Run
Run the binary.
```sh
$ ./bin/matchbox -address=0.0.0.0:8080 -log-level=debug -data-path examples -assets-path examples/assets
```
Run the container image with rkt on `metal0`.
```sh
$ sudo rkt --insecure-options=image run --net=metal0:IP=172.18.0.2 --mount volume=data,target=/var/lib/matchbox --volume data,kind=host,source=$PWD/examples --mount volume=config,target=/etc/matchbox --volume config,kind=host,source=$PWD/examples/etc/matchbox --mount volume=groups,target=/var/lib/matchbox/groups --volume groups,kind=host,source=$PWD/examples/groups/etcd matchbox.aci -- -address=0.0.0.0:8080 -rpc-address=0.0.0.0:8081 -log-level=debug
```
Alternately, run the Docker image on `docker0`.
```sh
$ sudo docker run -p 8080:8080 --rm -v $PWD/examples:/var/lib/matchbox:Z -v $PWD/examples/groups/etcd:/var/lib/matchbox/groups:Z coreos/matchbox:latest -address=0.0.0.0:8080 -log-level=debug
```
## bootcmd
Run `bootcmd` against the gRPC API of the service running via rkt.
```sh
$ ./bin/bootcmd profile list --endpoints 172.18.0.2:8081 --cacert examples/etc/matchbox/ca.crt
```
## Vendor
Use `glide` and `glide-vc` to manage dependencies committed to the `vendor` directory.
```sh
$ make vendor
```
## Codegen
Generate code from *proto* definitions using `protoc` and the `protoc-gen-go` plugin.
```sh
$ make codegen
```
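`make codegen` wraps the underlying invocation. As a rough sketch of the shape of that command (the proto path is illustrative, not the repo's actual layout):
```sh
protoc -I . --go_out=plugins=grpc:. path/to/api.proto
```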

# Release guide
This guide covers releasing new versions of matchbox.
## Release notes
Create a pre-release with the [changelog](../CHANGES.md) contents.
## Version
Create a release commit which updates old version references.
```sh
$ export VERSION=v0.7.1
```
## Tag
Tag, sign the release version, and push it to Github.
```sh
$ git tag -s vX.Y.Z -m 'vX.Y.Z'
$ git push origin --tags
$ git push origin master
```
## Images
Travis CI will build the Docker image and push it to Quay.io when the tag is pushed to master. Verify the new image and version.
```sh
$ sudo docker run quay.io/coreos/matchbox:$VERSION -version
$ sudo rkt run --no-store quay.io/coreos/matchbox:$VERSION -- -version
```
## Github release
Publish the release on Github with release notes.
## Tarballs
Build the release tarballs.
```sh
$ make release
```
Verify the reported version.
```sh
$ ./_output/matchbox-v0.7.1-linux-amd64/matchbox -version
```
## Signing
Sign the release tarballs and ACI with a [CoreOS App Signing Key](https://coreos.com/security/app-signing-key/) subkey.
```sh
cd _output
gpg2 --armor --local-user A6F71EE5BEDDBA18! --detach-sign matchbox-$VERSION-linux-amd64.tar.gz
gpg2 --armor --local-user A6F71EE5BEDDBA18! --detach-sign matchbox-$VERSION-darwin-amd64.tar.gz
gpg2 --armor --local-user A6F71EE5BEDDBA18! --detach-sign matchbox-$VERSION-linux-arm.tar.gz
gpg2 --armor --local-user A6F71EE5BEDDBA18! --detach-sign matchbox-$VERSION-linux-arm64.tar.gz
```
Verify the signatures.
```sh
gpg2 --verify matchbox-$VERSION-linux-amd64.tar.gz.asc matchbox-$VERSION-linux-amd64.tar.gz
gpg2 --verify matchbox-$VERSION-darwin-amd64.tar.gz.asc matchbox-$VERSION-darwin-amd64.tar.gz
gpg2 --verify matchbox-$VERSION-linux-arm.tar.gz.asc matchbox-$VERSION-linux-arm.tar.gz
gpg2 --verify matchbox-$VERSION-linux-arm64.tar.gz.asc matchbox-$VERSION-linux-arm64.tar.gz
```
## Publish
Upload the signed tarball(s) with the Github release. Promote the release from a `pre-release` to an official release.

# Getting Started with Docker
In this tutorial, we'll run `matchbox` on your Linux machine with Docker to network boot and provision a cluster of QEMU/KVM Container Linux machines locally. You'll be able to create Kubernetes clusters, etcd3 clusters, and test network setups.
*Note*: To provision physical machines, see [network setup](network-setup.md) and [deployment](deployment.md).
Install the package dependencies and start the Docker daemon.
```sh
$ # Fedora
$ sudo dnf install docker virt-install virt-manager
$ sudo systemctl start docker

$ # Debian/Ubuntu
$ # check Docker's docs to install Docker 1.8+ on Debian/Ubuntu
$ sudo apt-get install virt-manager virtinst qemu-kvm
```
Clone the [matchbox](https://github.com/coreos/matchbox) source which contains the examples and scripts.
```sh
$ git clone https://github.com/coreos/matchbox.git
$ cd matchbox
```
Download CoreOS Container Linux image assets referenced by the `etcd3` [example](../examples) to `examples/assets`.
```sh
$ ./scripts/get-coreos stable 1576.5.0 ./examples/assets
```
For development convenience, add `/etc/hosts` entries for nodes so they may be referenced by name.
```sh
# /etc/hosts
...
172.17.0.21 node1.example.com
172.17.0.22 node2.example.com
172.17.0.23 node3.example.com
```
## Containers
Run the `matchbox` and `dnsmasq` services on the `docker0` bridge. `dnsmasq` will run DHCP, DNS and TFTP services to create a suitable network boot environment. `matchbox` will serve configs to machines as they PXE boot.
The `devnet` convenience script can start these services and accepts the name of any example cluster in [examples](../examples).
```sh
$ sudo ./scripts/devnet create etcd3
```
Inspect the logs.
```sh
$ sudo ./scripts/devnet status
```
Take a look at the [etcd3 groups](../examples/groups/etcd3) to get an idea of how machines are mapped to Profiles. Explore some endpoints exposed by the service, say for QEMU/KVM node1.
* iPXE [http://127.0.0.1:8080/ipxe?mac=52:54:00:a1:9c:ae](http://127.0.0.1:8080/ipxe?mac=52:54:00:a1:9c:ae)
* Ignition [http://127.0.0.1:8080/ignition?mac=52:54:00:a1:9c:ae](http://127.0.0.1:8080/ignition?mac=52:54:00:a1:9c:ae)
* Metadata [http://127.0.0.1:8080/metadata?mac=52:54:00:a1:9c:ae](http://127.0.0.1:8080/metadata?mac=52:54:00:a1:9c:ae)
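You can also fetch the rendered configs from the command line:
```sh
$ curl 'http://127.0.0.1:8080/ipxe?mac=52:54:00:a1:9c:ae'
$ curl 'http://127.0.0.1:8080/ignition?mac=52:54:00:a1:9c:ae'
```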
### Manual
If you prefer to start the containers yourself, instead of using `devnet`,
```sh
$ sudo docker run -p 8080:8080 --rm -v $PWD/examples:/var/lib/matchbox:Z -v $PWD/examples/groups/etcd3:/var/lib/matchbox/groups:Z quay.io/coreos/matchbox:latest -address=0.0.0.0:8080 -log-level=debug
$ sudo docker run --name dnsmasq --cap-add=NET_ADMIN -v $PWD/contrib/dnsmasq/docker0.conf:/etc/dnsmasq.conf:Z quay.io/coreos/dnsmasq -d
```
## Client VMs
Create QEMU/KVM VMs which have known hardware attributes. The nodes will be attached to the `docker0` bridge, where Docker containers run.
```sh
$ sudo ./scripts/libvirt create
```
You can connect to the serial console of any node (ctrl+] to exit). If you provisioned nodes with an SSH key, you can SSH after bring-up.
```sh
$ sudo virsh console node1
$ ssh core@node1.example.com
```
You can also use `virt-manager` to watch the console.
```sh
$ sudo virt-manager
```
Use the wrapper script to act on all nodes.
```sh
$ sudo ./scripts/libvirt [start|reboot|shutdown|poweroff|destroy]
```
## Verify
The VMs should network boot and provision themselves into a three node etcd3 cluster, with other nodes behaving as etcd3 gateways.
The example profile added autologin so you can verify that etcd3 works between nodes.
```sh
$ systemctl status etcd-member
$ etcdctl set /message hello
$ etcdctl get /message
```
## Clean up
Clean up the containers and VM machines.
```sh
$ sudo ./scripts/devnet destroy
$ sudo ./scripts/libvirt destroy
```
## Going further
Learn more about [matchbox](matchbox.md) or explore the other [example](../examples) clusters. Try the [k8s example](bootkube.md) to produce a TLS-authenticated Kubernetes cluster you can access locally with `kubectl`.

# Getting Started with rkt
In this tutorial, we'll run `matchbox` on your Linux machine with `rkt` and `CNI` to network boot and provision a cluster of QEMU/KVM Container Linux machines locally. You'll be able to create Kubernetes clusters, etcd3 clusters, and test network setups.
*Note*: To provision physical machines, see [network setup](network-setup.md) and [deployment](deployment.md).
## Requirements
Install [rkt](https://coreos.com/rkt/docs/latest/distributions.html) 1.12.0 or higher ([example script](https://github.com/dghubble/phoenix/blob/master/fedora/sources.sh)) and setup rkt [privilege separation](https://coreos.com/rkt/docs/latest/trying-out-rkt.html).
Next, install the package dependencies.
```sh
# Fedora
$ sudo dnf install virt-install virt-manager

# Debian/Ubuntu
$ sudo apt-get install virt-manager virtinst qemu-kvm systemd-container
```
**Note**: rkt does not yet integrate with SELinux on Fedora. As a workaround, temporarily set enforcement to permissive if you are comfortable (`sudo setenforce Permissive`). Check the rkt [distribution notes](https://github.com/coreos/rkt/blob/master/Documentation/distributions.md) or see the tracking [issue](https://github.com/coreos/rkt/issues/1727).
Clone the [matchbox](https://github.com/coreos/matchbox) source which contains the examples and scripts.
```sh
$ git clone https://github.com/coreos/matchbox.git
$ cd matchbox
```
Download CoreOS Container Linux image assets referenced by the `etcd3` [example](../examples) to `examples/assets`.
```sh
$ ./scripts/get-coreos stable 1576.5.0 ./examples/assets
```
## Network
Define the `metal0` virtual bridge with [CNI](https://github.com/appc/cni).
```sh
sudo mkdir -p /etc/rkt/net.d
sudo bash -c 'cat > /etc/rkt/net.d/20-metal.conf << EOF
{
  "name": "metal0",
  "type": "bridge",
  "bridge": "metal0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "172.18.0.0/24",
    "routes" : [ { "dst" : "0.0.0.0/0" } ]
  }
}
EOF'
```
On Fedora, add the `metal0` interface to the trusted zone in your firewall configuration.
```sh
$ sudo firewall-cmd --add-interface=metal0 --zone=trusted
$ sudo firewall-cmd --add-interface=metal0 --zone=trusted --permanent
```
For development convenience, you may wish to add `/etc/hosts` entries for nodes to refer to them by name.
```
# /etc/hosts
...
172.18.0.21 node1.example.com
172.18.0.22 node2.example.com
172.18.0.23 node3.example.com
```
## Containers
Run the `matchbox` and `dnsmasq` services on the `metal0` bridge. `dnsmasq` will run DHCP, DNS, and TFTP services to create a suitable network boot environment. `matchbox` will serve configs to machines as they PXE boot.
The `devnet` convenience script can run these services with rkt as systemd transient units and accepts the name of any example cluster in [examples](../examples).
```sh
$ export CONTAINER_RUNTIME=rkt
$ sudo -E ./scripts/devnet create etcd3
```
Inspect the journal logs.
```sh
$ sudo -E ./scripts/devnet status
$ journalctl -f -u dev-matchbox
$ journalctl -f -u dev-dnsmasq
```
Take a look at the [etcd3 groups](../examples/groups/etcd3) to get an idea of how machines are mapped to Profiles. Explore some endpoints exposed by the service, say for QEMU/KVM node1.
* iPXE [http://172.18.0.2:8080/ipxe?mac=52:54:00:a1:9c:ae](http://172.18.0.2:8080/ipxe?mac=52:54:00:a1:9c:ae)
* Ignition [http://172.18.0.2:8080/ignition?mac=52:54:00:a1:9c:ae](http://172.18.0.2:8080/ignition?mac=52:54:00:a1:9c:ae)
* Metadata [http://172.18.0.2:8080/metadata?mac=52:54:00:a1:9c:ae](http://172.18.0.2:8080/metadata?mac=52:54:00:a1:9c:ae)
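Likewise, the rendered configs can be fetched from the command line:
```sh
$ curl 'http://172.18.0.2:8080/ipxe?mac=52:54:00:a1:9c:ae'
$ curl 'http://172.18.0.2:8080/ignition?mac=52:54:00:a1:9c:ae'
```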
### Manual
If you prefer to start the containers yourself, instead of using `devnet`,
```sh
sudo rkt run --net=metal0:IP=172.18.0.2 \
--mount volume=data,target=/var/lib/matchbox \
--volume data,kind=host,source=$PWD/examples \
--mount volume=groups,target=/var/lib/matchbox/groups \
--volume groups,kind=host,source=$PWD/examples/groups/etcd3 \
quay.io/coreos/matchbox:v0.7.1 -- -address=0.0.0.0:8080 -log-level=debug
```
```sh
sudo rkt run --net=metal0:IP=172.18.0.3 \
--dns=host \
--mount volume=config,target=/etc/dnsmasq.conf \
--volume config,kind=host,source=$PWD/contrib/dnsmasq/metal0.conf \
quay.io/coreos/dnsmasq:v0.4.1 \
--caps-retain=CAP_NET_ADMIN,CAP_NET_BIND_SERVICE,CAP_SETGID,CAP_SETUID,CAP_NET_RAW
```
If you get an error about the IP assignment, stop old pods and run garbage collection.
```sh
$ sudo rkt gc --grace-period=0
```
## Client VMs
Create QEMU/KVM VMs which have known hardware attributes. The nodes will be attached to the `metal0` bridge, where your pods run.
```sh
$ sudo ./scripts/libvirt create-rkt
```
You can connect to the serial console of any node (ctrl+] to exit). If you provisioned nodes with an SSH key, you can SSH after bring-up.
```sh
$ sudo virsh console node1
$ ssh core@node1.example.com
```
You can also use `virt-manager` to watch the console.
```sh
$ sudo virt-manager
```
Use the wrapper script to act on all nodes.
```sh
$ sudo ./scripts/libvirt [start|reboot|shutdown|poweroff|destroy]
```
## Verify
The VMs should network boot and provision themselves into a three node etcd3 cluster, with other nodes behaving as etcd3 gateways.
The example profile added autologin so you can verify that etcd3 works between nodes.
```sh
$ systemctl status etcd-member
$ etcdctl set /message hello
$ etcdctl get /message
```
## Clean up
Clean up the systemd units running `matchbox` and `dnsmasq`.
```sh
$ sudo -E ./scripts/devnet destroy
```
Clean up VM machines.
```sh
$ sudo ./scripts/libvirt destroy
```
Press ^] three times to stop any rkt pod.
## Going further
Learn more about [matchbox](matchbox.md) or explore the other [example](../examples) clusters. Try the [k8s example](bootkube.md) to produce a TLS-authenticated Kubernetes cluster you can access locally with `kubectl`.

# Getting started
In this tutorial, we'll show how to use Terraform with `matchbox` to provision Container Linux machines.
You'll install the `matchbox` service, set up a PXE network boot environment, and then use Terraform configs to describe your infrastructure and the Terraform CLI to create those resources on `matchbox`.
## matchbox
Install `matchbox` on a dedicated server or Kubernetes cluster. Generate TLS credentials and enable the gRPC API as directed. Save the `ca.crt`, `client.crt`, and `client.key` on your local machine (e.g. `~/.matchbox`).
* Installing on [Container Linux / other distros](deployment.md)
* Installing on [Kubernetes](deployment.md#kubernetes)
* Running with [rkt](deployment.md#rkt) / [docker](deployment.md#docker)
Verify the matchbox read-only HTTP endpoints are accessible.
```sh
$ curl http://matchbox.example.com:8080
matchbox
```
Verify your TLS client certificate and key can be used to access the gRPC API.
```sh
$ openssl s_client -connect matchbox.example.com:8081 \
    -CAfile ~/.matchbox/ca.crt \
    -cert ~/.matchbox/client.crt \
    -key ~/.matchbox/client.key
```
## Terraform
Install [Terraform][terraform-dl] v0.9+ on your system.
```sh
$ terraform version
Terraform v0.9.4
```
Add the `terraform-provider-matchbox` plugin binary on your system.
```sh
$ wget https://github.com/coreos/terraform-provider-matchbox/releases/download/v0.1.0/terraform-provider-matchbox-v0.1.0-linux-amd64.tar.gz
$ tar xzf terraform-provider-matchbox-v0.1.0-linux-amd64.tar.gz
```
Add the plugin to your `~/.terraformrc`.
```hcl
providers {
  matchbox = "/path/to/terraform-provider-matchbox"
}
```
## First cluster
Clone the matchbox source and take a look at the Terraform examples.
```sh
$ git clone https://github.com/coreos/matchbox.git
$ cd matchbox/examples/terraform
```
Let's start with the `simple-install` example. With `simple-install`, any machines which PXE boot from matchbox will install Container Linux to `/dev/sda`, reboot, and have your SSH key set. It's not much of a cluster, but we'll get to that later.
```sh
$ cd simple-install
```
Configure the variables in `variables.tf` by creating a `terraform.tfvars` file.
```hcl
matchbox_http_endpoint = "http://matchbox.example.com:8080"
matchbox_rpc_endpoint = "matchbox.example.com:8081"
ssh_authorized_key = "YOUR_SSH_KEY"
```
Terraform can now interact with the matchbox service and create resources.
```sh
$ terraform plan
Plan: 4 to add, 0 to change, 0 to destroy.
```
Let's review the terraform config and learn a bit about matchbox.
#### Provider
Matchbox is configured as a provider platform for bare-metal resources.
```hcl
// Configure the matchbox provider
provider "matchbox" {
  endpoint    = "${var.matchbox_rpc_endpoint}"
  client_cert = "${file("~/.matchbox/client.crt")}"
  client_key  = "${file("~/.matchbox/client.key")}"
  ca          = "${file("~/.matchbox/ca.crt")}"
}
```
#### Profiles
Machine profiles specify the kernel, initrd, kernel args, Container Linux Config, Cloud-config, or other configs used to network boot and provision a bare-metal machine. This profile will PXE boot machines using the current stable Container Linux kernel and initrd (see [assets](api.md#assets) to learn about caching for speed) and supply a Container Linux Config specifying that a disk install and reboot should be performed. Learn more about [Container Linux configs](https://coreos.com/os/docs/latest/configuration.html).
```hcl
// Create a CoreOS-install profile
resource "matchbox_profile" "coreos-install" {
  name   = "coreos-install"
  kernel = "https://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz"

  initrd = [
    "https://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz"
  ]

  args = [
    "coreos.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
    "coreos.first_boot=yes",
    "console=tty0",
    "console=ttyS0",
  ]

  container_linux_config = "${file("./cl/coreos-install.yaml.tmpl")}"
}
```
#### Groups
Matcher groups match machines to profiles based on labels (e.g. MAC, UUID) and template in machine-specific values. This group does not have a `selector` block, so any machine which network boots from matchbox will match this group and be provisioned using the `coreos-install` profile. Machines are matched to the most specific matching group.
```hcl
resource "matchbox_group" "default" {
  name    = "default"
  profile = "${matchbox_profile.coreos-install.name}"

  # no selector means all machines can be matched
  metadata {
    ignition_endpoint  = "${var.matchbox_http_endpoint}/ignition"
    ssh_authorized_key = "${var.ssh_authorized_key}"
  }
}
```
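For contrast, a group with a `selector` block (a hypothetical sketch, not part of the `simple-install` example) would match only the machine carrying those labels, taking precedence over the selector-less default group:

```hcl
// Hypothetical: pin one machine, identified by MAC address, to the profile
resource "matchbox_group" "node1" {
  name    = "node1"
  profile = "${matchbox_profile.coreos-install.name}"

  selector {
    mac = "52:54:00:89:d8:10"
  }

  metadata {
    ignition_endpoint  = "${var.matchbox_http_endpoint}/ignition"
    ssh_authorized_key = "${var.ssh_authorized_key}"
  }
}
```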
### Apply
Apply the terraform configuration.
```sh
$ terraform apply
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
```
If you're curious, matchbox serves configs to machines and respects query parameters:
* iPXE default - [/ipxe](http://matchbox.example.com:8080/ipxe)
* Ignition default - [/ignition](http://matchbox.example.com:8080/ignition)
* Ignition post-install - [/ignition?os=installed](http://matchbox.example.com:8080/ignition?os=installed)
* GRUB default - [/grub](http://matchbox.example.com:8080/grub)
## Network
Matchbox can integrate with many on-premise network setups. It does not seek to be the DHCP server, TFTP server, or DNS server for the network. Instead, matchbox serves iPXE scripts and GRUB configs as the entrypoint for provisioning network booted machines. PXE clients are supported by chainloading iPXE firmware.
In the simplest case, an iPXE-enabled network can chain to matchbox,
```
# /var/www/html/ipxe/default.ipxe
chain http://matchbox.foo:8080/boot.ipxe
```
Read [network-setup.md](network-setup.md) for the complete range of options. Network admins have a great amount of flexibility:
* May keep using existing DHCP, TFTP, and DNS services
* May configure subnets, architectures, or specific machines to delegate to matchbox
* May place matchbox behind a menu entry (timeout and default to matchbox)
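As a sketch of the last option (hostnames and the timeout are illustrative, assuming an iPXE-scripted menu), the menu entry might default to matchbox after a short timeout:

```
#!ipxe
# Hypothetical menu: default to matchbox provisioning after 5 seconds
menu Boot options
item matchbox Provision via matchbox
item local Boot from local disk
choose --timeout 5000 --default matchbox selected && goto ${selected}

:matchbox
chain http://matchbox.example.com:8080/boot.ipxe

:local
exit
```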
If you've never set up a PXE-enabled network before or you're trying to set up a home lab, check out the [quay.io/coreos/dnsmasq](https://quay.io/repository/coreos/dnsmasq) container image [copy-paste examples](https://github.com/coreos/matchbox/blob/master/Documentation/network-setup.md#coreosdnsmasq) and see the section about [proxy-DHCP](https://github.com/coreos/matchbox/blob/master/Documentation/network-setup.md#proxy-dhcp).
## Boot
It's time to network boot your machines. Use the BMC's remote management capabilities (which may be vendor-specific) to set the boot device (for the next boot only) to PXE and power on each machine.
```sh
$ ipmitool -H node1.example.com -U USER -P PASS power off
$ ipmitool -H node1.example.com -U USER -P PASS chassis bootdev pxe
$ ipmitool -H node1.example.com -U USER -P PASS power on
```
Each machine should chainload iPXE, delegate to `matchbox`, receive its iPXE config (or other supported configs) and begin the provisioning process. The `simple-install` example assumes your machines are configured to boot from disk first and PXE only when requested, but you can write profiles for different cases.
Once the Container Linux install completes and the machine reboots you can SSH,
```sh
$ ssh core@node1.example.com
```
To re-provision the machine for another purpose, run `terraform apply` and PXE boot it again.
## Going Further
Matchbox can be used to provision multi-node Container Linux clusters at one or many on-premise sites if deployed in an HA way. Machines can be matched individually by MAC address, UUID, region, or other labels you choose. Installs can be made much faster by caching images in the built-in HTTP [assets](api.md#assets) server.
[Container Linux configs](https://coreos.com/os/docs/latest/configuration.html) can be used to partition disks and filesystems, write systemd units, write networkd configs or regular files, and create users. Container Linux nodes can be provisioned into a system that meets your needs. Check out the examples which create a 3 node [etcd](../examples/terraform/etcd3-install) cluster or a 3 node [Kubernetes](../examples/terraform/bootkube-install) cluster.
[terraform-dl]: https://www.terraform.io/downloads.html

# GRUB2 netboot
Use GRUB to network boot UEFI hardware.
For local development, install the dependencies for libvirt with UEFI.
* [UEFI with QEMU](https://fedoraproject.org/wiki/Using_UEFI_with_QEMU)
Ensure that you've gone through the [matchbox with rkt](getting-started-rkt.md) and [matchbox](matchbox.md) guides and understand the basics.
## Containers
Run `matchbox` with rkt, but mount the [grub](../examples/groups/grub) group example.
## Network
On Fedora, add the `metal0` interface to the trusted zone in your firewall configuration.
```sh
$ sudo firewall-cmd --add-interface=metal0 --zone=trusted
```
Run the `quay.io/coreos/dnsmasq` container image with rkt or docker.
```sh
sudo rkt run --net=metal0:IP=172.18.0.3 quay.io/coreos/dnsmasq \
  --caps-retain=CAP_NET_ADMIN,CAP_NET_BIND_SERVICE,CAP_SETGID,CAP_SETUID,CAP_NET_RAW \
  -- -d -q \
  --dhcp-range=172.18.0.50,172.18.0.99 \
  --enable-tftp \
  --tftp-root=/var/lib/tftpboot \
  --dhcp-match=set:efi-bc,option:client-arch,7 \
  --dhcp-boot=tag:efi-bc,grub.efi \
  --dhcp-userclass=set:grub,GRUB2 \
  --dhcp-boot=tag:grub,"(http;matchbox.example.com:8080)/grub","172.18.0.2" \
  --log-queries \
  --log-dhcp \
  --dhcp-userclass=set:ipxe,iPXE \
  --dhcp-boot=tag:pxe,undionly.kpxe \
  --dhcp-boot=tag:ipxe,http://matchbox.example.com:8080/boot.ipxe \
  --address=/matchbox.example.com/172.18.0.2
```
## Client VM
Create UEFI VM nodes which have known hardware attributes.
```sh
$ sudo ./scripts/libvirt create-uefi
```
## Docker
If you use Docker, run `matchbox` according to [matchbox with Docker](getting-started-docker.md), but mount the [grub](../examples/groups/grub) group example. Then start the `coreos/dnsmasq` Docker image, which bundles a `grub.efi`.
```sh
$ sudo docker run --rm --cap-add=NET_ADMIN quay.io/coreos/dnsmasq -d -q --dhcp-range=172.17.0.43,172.17.0.99 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-match=set:efi-bc,option:client-arch,7 --dhcp-boot=tag:efi-bc,grub.efi --dhcp-userclass=set:grub,GRUB2 --dhcp-boot=tag:grub,"(http;matchbox.foo:8080)/grub","172.17.0.2" --log-queries --log-dhcp --dhcp-option=3,172.17.0.1 --dhcp-userclass=set:ipxe,iPXE --dhcp-boot=tag:pxe,undionly.kpxe --dhcp-boot=tag:ipxe,http://matchbox.foo:8080/boot.ipxe --address=/matchbox.foo/172.17.0.2
```
Create a VM to verify the machine network boots.
```sh
$ sudo virt-install --name uefi-test --boot=uefi,network --disk pool=default,size=4 --network=bridge=docker0,model=e1000 --memory=1024 --vcpus=1 --os-type=linux --noautoconsole
```

# Ignition
Ignition is a system for declaratively provisioning disks during the initramfs, before systemd starts. It runs only on the first boot and handles partitioning disks, formatting partitions, writing files (regular files, systemd units, networkd units, etc.), and configuring users. See the Ignition [docs](https://coreos.com/ignition/docs/latest/) for details.
## Fuze Configs
Ignition 2.0.0+ configs are versioned, *machine-friendly* JSON documents (which contain encoded file contents). Operators should write and maintain configs in a *human-friendly* format, such as CoreOS [fuze](https://github.com/coreos/fuze) configs. As of `bootcfg` v0.4.0, Fuze configs are the primary way to use CoreOS Ignition.
The [Fuze schema](https://github.com/coreos/fuze/blob/master/doc/configuration.md) formalizes and improves upon the YAML to Ignition JSON transform. Fuze provides better support for Ignition 2.0.0+, handles file content encoding, patches Ignition bugs, performs better validations, and lets services (like `bootcfg`) negotiate the Ignition version required by a CoreOS client.
## Adding Fuze Configs
Fuze template files can be added in the `/var/lib/bootcfg/ignition` directory or in an `ignition` subdirectory of a custom `-data-path`. Template files may contain [Go template](https://golang.org/pkg/text/template/) elements which will be evaluated with group metadata, selectors, and query params.
```
/var/lib/bootcfg
├── cloud
├── ignition
│   ├── k8s-master.yaml
│   ├── etcd.yaml
│   ├── k8s-worker.yaml
│   └── raw.ign
└── profiles
```
### Reference
Reference a Fuze config in a [Profile](bootcfg.md#profiles) with `ignition_id`. When PXE booting, use the kernel options `coreos.first_boot=1` and `coreos.config.url` to point to the `bootcfg` [Ignition endpoint](api.md#ignition-config).
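For instance, a hypothetical Profile (file names and asset paths are illustrative) might reference a Fuze template and set those kernel options:

```json
{
  "id": "worker",
  "name": "CoreOS Worker",
  "ignition_id": "k8s-worker.yaml",
  "boot": {
    "kernel": "/assets/coreos/current/coreos_production_pxe.vmlinuz",
    "initrd": ["/assets/coreos/current/coreos_production_pxe_image.cpio.gz"],
    "args": [
      "coreos.config.url=http://bootcfg.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}",
      "coreos.first_boot=1"
    ]
  }
}
```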
### Migration from v0.3.0
In v0.4.0, `bootcfg` switched to using the CoreOS [fuze](https://github.com/coreos/fuze) library, which formalizes and improves upon the YAML to Ignition JSON transform. Fuze provides better support for Ignition 2.0.0+, handles file content encoding, patches Ignition bugs, and performs better validations.
Upgrade your Ignition YAML templates to match the [Fuze config schema](https://github.com/coreos/fuze/blob/master/doc/configuration.md). Typically, you'll need to do the following:
* Remove `ignition_version: 1`, Fuze configs are version-less
* Update `filesystems` section and set the `name`
* Update `files` section to use `inline` as shown below
* Replace `uid` and `gid` with `user` and `group` objects as shown in the examples below
Maintain readable inline file contents in Fuze:
```
...
files:
- path: /etc/foo.conf
filesystem: rootfs
contents:
inline: |
foo bar
```
Support for the older Ignition v1 format has been dropped, so CoreOS machines must be **1010.1.0 or newer**. Read the upstream Ignition v1 to 2.0.0 [migration guide](https://coreos.com/ignition/docs/latest/migrating-configs.html) to understand the reasons behind schema changes.
## Examples
Here is an example Fuze template. This template will be rendered into a Fuze config (YAML), using group metadata, selectors, and query params as template variables. Finally, the Fuze config is served to client machines as Ignition JSON.
ignition/format-disk.yaml.tmpl:
```yaml
---
storage:
  disks:
    - device: /dev/sda
      wipe_table: true
      partitions:
        - label: ROOT
  filesystems:
    - name: rootfs
      mount:
        device: "/dev/sda1"
        format: "ext4"
        create:
          force: true
          options:
            - "-LROOT"
  files:
    - filesystem: rootfs
      path: /home/core/foo
      mode: 0644
      user:
        id: 500
      group:
        id: 500
      contents:
        inline: |
          {{.example_contents}}
{{ if index . "ssh_authorized_keys" }}
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        {{ range $element := .ssh_authorized_keys }}
        - {{$element}}
        {{end}}
{{end}}
```
The Ignition config response (formatted) to a query `/ignition?label=value` for a CoreOS instance supporting Ignition 2.0.0 would be:
```json
{
  "ignition": {
    "version": "2.0.0",
    "config": {}
  },
  "storage": {
    "disks": [
      {
        "device": "/dev/sda",
        "wipeTable": true,
        "partitions": [
          {
            "label": "ROOT",
            "number": 0,
            "size": 0,
            "start": 0
          }
        ]
      }
    ],
    "filesystems": [
      {
        "name": "rootfs",
        "mount": {
          "device": "/dev/sda1",
          "format": "ext4",
          "create": {
            "force": true,
            "options": [
              "-LROOT"
            ]
          }
        }
      }
    ],
    "files": [
      {
        "filesystem": "rootfs",
        "path": "/home/core/foo",
        "contents": {
          "source": "data:,Example%20file%20contents%0A",
          "verification": {}
        },
        "mode": 420,
        "user": {
          "id": 500
        },
        "group": {
          "id": 500
        }
      }
    ]
  },
  "systemd": {},
  "networkd": {},
  "passwd": {}
}
```
See [examples/ignition](../examples/ignition) for numerous Fuze template examples.
### Raw Ignition
If you prefer to design your own templating solution, raw Ignition files (suffixed with `.ign` or `.ignition`) are served directly.

# Kubernetes
The Kubernetes example provisions a 3 node Kubernetes v1.3.0 cluster with one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
## Requirements
Ensure that you've gone through the [bootcfg with rkt](getting-started-rkt.md) or [bootcfg with docker](getting-started-docker.md) guide and understand the basics. In particular, you should be able to:
* Use rkt or Docker to start `bootcfg`
* Create a network boot environment with `coreos/dnsmasq`
* Create the example libvirt client VMs
## Examples
The [examples](../examples) statically assign IP addresses to libvirt client VMs created by `scripts/libvirt`. VMs are setup on the `metal0` CNI bridge for rkt or the `docker0` bridge for Docker. The examples can be used for physical machines if you update the MAC/IP addresses. See [network setup](network-setup.md) and [deployment](deployment.md).
* [k8s](../examples/groups/k8s) - iPXE boot a Kubernetes cluster (use rkt)
* [k8s-docker](../examples/groups/k8s-docker) - iPXE boot a Kubernetes cluster on `docker0` (use docker)
* [k8s-install](../examples/groups/k8s-install) - Install a Kubernetes cluster to disk (use rkt)
* [Lab examples](https://github.com/dghubble/metal) - Lab hardware examples
### Assets
Download the CoreOS image assets referenced in the target [profile](../examples/profiles).
```sh
$ ./scripts/get-coreos alpha 1053.2.0 ./examples/assets
```
Add your SSH public key to each machine group definition [as shown](../examples/README.md#ssh-keys).
Generate a root CA and Kubernetes TLS assets for components (`admin`, `apiserver`, `worker`).
```sh
$ rm -rf examples/assets/tls
# for Kubernetes on CNI metal0, i.e. rkt
$ ./scripts/tls/k8s-certgen -d examples/assets/tls -s 172.15.0.21 -m IP.1=10.3.0.1,IP.2=172.15.0.21 -w IP.1=172.15.0.22,IP.2=172.15.0.23
# for Kubernetes on docker0
$ ./scripts/tls/k8s-certgen -d examples/assets/tls -s 172.17.0.21 -m IP.1=10.3.0.1,IP.2=172.17.0.21 -w IP.1=172.17.0.22,IP.2=172.17.0.23
```
**Note**: TLS assets are served to any machines which request them, which requires a trusted network. Alternately, provisioning may be tweaked to require TLS assets be securely copied to each host. Read about our longer term security plans at [Distributed Trusted Computing](https://coreos.com/blog/coreos-trusted-computing.html).
## Containers
Use rkt or docker to start `bootcfg` and mount the desired example resources. Create a network boot environment and power-on your machines. Revisit [bootcfg with rkt](getting-started-rkt.md) or [bootcfg with Docker](getting-started-docker.md) for help.
Client machines should boot and provision themselves. Local client VMs should network boot CoreOS in about 1 minute and the Kubernetes API should be available after 2-3 minutes. If you chose `k8s-install`, notice that machines install CoreOS and then reboot (in libvirt, you must hit "power" again). Time to network boot and provision Kubernetes clusters on physical hardware depends on a number of factors (POST duration, boot device iteration, network speed, etc.).
## Verify
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your laptop. Use the generated kubeconfig to access the Kubernetes cluster created on rkt `metal0` or `docker0`.
```sh
$ cd /path/to/coreos-baremetal
$ kubectl --kubeconfig=examples/assets/tls/kubeconfig get nodes
NAME          STATUS    AGE
172.15.0.21   Ready     6m
172.15.0.22   Ready     5m
172.15.0.23   Ready     6m
```
Get all pods.
```sh
$ kubectl --kubeconfig=examples/assets/tls/kubeconfig get pods --all-namespaces
NAMESPACE     NAME                                  READY     STATUS    RESTARTS   AGE
kube-system   heapster-v1.1.0-3647315203-tes6g      2/2       Running   0          14m
kube-system   kube-apiserver-172.15.0.21            1/1       Running   0          14m
kube-system   kube-controller-manager-172.15.0.21   1/1       Running   0          14m
kube-system   kube-dns-v15-nfbz4                    3/3       Running   0          14m
kube-system   kube-proxy-172.15.0.21                1/1       Running   0          14m
kube-system   kube-proxy-172.15.0.22                1/1       Running   0          14m
kube-system   kube-proxy-172.15.0.23                1/1       Running   0          14m
kube-system   kube-scheduler-172.15.0.21            1/1       Running   0          13m
kube-system   kubernetes-dashboard-v1.1.0-m1gyy     1/1       Running   0          14m
```
## Kubernetes Dashboard
Access the Kubernetes Dashboard with `kubeconfig` credentials by port forwarding to the dashboard pod.
```sh
$ kubectl --kubeconfig=examples/assets/tls/kubeconfig port-forward kubernetes-dashboard-v1.1.0-SOME-ID 9090 --namespace=kube-system
Forwarding from 127.0.0.1:9090 -> 9090
```
Then visit [http://127.0.0.1:9090](http://127.0.0.1:9090/).
<img src='img/kubernetes-dashboard.png' class="img-center" alt="Kubernetes Dashboard"/>
## Tectonic
Sign up for [Tectonic Starter](https://tectonic.com/starter/) for free and deploy the [Tectonic Console](https://tectonic.com/enterprise/docs/latest/deployer/tectonic_console.html) with a few `kubectl` commands!
<img src='img/tectonic-console.png' class="img-center" alt="Tectonic Console"/>

# Lifecycle of a physical machine
## About boot environment
Physical machines [network boot](network-booting.md) in a network boot environment with DHCP/TFTP/DNS services or with [coreos/dnsmasq](../contrib/dnsmasq).
`matchbox` serves iPXE or GRUB configs via HTTP to machines based on Group selectors (e.g. UUID, MAC, region, etc.) and machine Profiles. Kernel and initrd images are fetched and booted with Ignition to install CoreOS Container Linux. The "first boot" Ignition config is fetched and Container Linux is installed.
Container Linux boots ("first boot" from disk) and runs Ignition to provision its disk with systemd units, files, keys, and more to become a cluster node. Systemd units may fetch metadata from a remote source if needed.
Coordinated auto-updates are enabled. Systems like [fleet](https://coreos.com/docs/#fleet) or [Kubernetes](http://kubernetes.io/docs/) coordinate container services. IPMI, vendor utilities, or first-boot are used to re-provision machines into new roles.
## Machine lifecycle
![Machine Lifecycle](img/machine-lifecycle.png)

# matchbox
`matchbox` is an HTTP and gRPC service that renders signed [Ignition configs](https://coreos.com/ignition/docs/latest/what-is-ignition.html), [cloud-configs](https://coreos.com/os/docs/latest/cloud-config.html), network boot configs, and metadata to machines to create CoreOS Container Linux clusters. `matchbox` maintains **Group** definitions which match machines to *profiles* based on labels (e.g. MAC address, UUID, stage, region). A **Profile** is a named set of config templates (e.g. iPXE, GRUB, Ignition config, Cloud-Config, generic configs). The aim is to use Container Linux's early-boot capabilities to provision Container Linux machines.
Network boot endpoints provide PXE, iPXE, and GRUB support. `matchbox` can be deployed as a binary, as an [appc](https://github.com/appc/spec) container with rkt, or as a Docker container.
![Matchbox Overview](img/overview.png)
## Getting started
Get started running `matchbox` on your Linux machine, with rkt or Docker.
* [matchbox with rkt](getting-started-rkt.md)
* [matchbox with Docker](getting-started-docker.md)
## Flags
See [configuration](config.md) flags and variables.
## API
* [HTTP API](api.md)
* [gRPC API](https://godoc.org/github.com/coreos/matchbox/matchbox/client)
## Data
A `Store` stores machine Groups, Profiles, and associated Ignition configs, cloud-configs, and generic configs. By default, `matchbox` uses a `FileStore` to search a `-data-path` for these resources.
Prepare `/var/lib/matchbox` with `groups`, `profiles`, `ignition`, `cloud`, and `generic` subdirectories. You may wish to keep these files under version control.
```
/var/lib/matchbox
├── cloud
│   ├── cloud.yaml.tmpl
│   └── worker.sh.tmpl
├── ignition
│   ├── raw.ign
│   ├── etcd.yaml.tmpl
│   └── simple.yaml.tmpl
├── generic
│   ├── config.yaml
│   ├── setup.cfg
│   └── datacenter-1.tmpl
├── groups
│   ├── default.json
│   ├── node1.json
│   └── us-central1-a.json
└── profiles
    ├── etcd.json
    └── worker.json
```
The [examples](../examples) directory is a valid data directory with some pre-defined configs. Note that `examples/groups` contains many possible groups in nested directories for demo purposes (tutorials pick one to mount). Your machine groups should be kept directly inside the `groups` directory as shown above.
### Profiles
Profiles reference an Ignition config, Cloud-Config, and/or generic config by name and define network boot settings.
```json
{
  "id": "etcd",
  "name": "Container Linux with etcd2",
  "cloud_id": "",
  "ignition_id": "etcd.yaml",
  "generic_id": "some-service.cfg",
  "boot": {
    "kernel": "/assets/coreos/1576.5.0/coreos_production_pxe.vmlinuz",
    "initrd": ["/assets/coreos/1576.5.0/coreos_production_pxe_image.cpio.gz"],
    "args": [
      "coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}",
      "coreos.first_boot=yes",
      "coreos.autologin"
    ]
  }
}
```
The `"boot"` settings will be used to render configs to network boot programs such as iPXE or GRUB. You may reference remote kernel and initrd assets or [local assets](#assets).
To use Ignition, set the `coreos.config.url` kernel option to reference the `matchbox` [Ignition endpoint](api.md#ignition-config), which will render the `ignition_id` file. Be sure to add the `coreos.first_boot` option as well.
To use cloud-config, set the `cloud-config-url` kernel option to reference the `matchbox` [Cloud-Config endpoint](api.md#cloud-config), which will render the `cloud_id` file.
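As a sketch, the relevant kernel `args` for each case might look like the following (the endpoint hostname is illustrative):

```
# Ignition
coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}
coreos.first_boot=yes

# Cloud-Config
cloud-config-url=http://matchbox.foo:8080/cloud?uuid=${uuid}&mac=${mac:hexhyp}
```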
### Groups
Groups define selectors which match zero or more machines. Machine(s) matching a group will boot and provision according to the group's `Profile`.
Create a group definition with a `Profile` to be applied, selectors for matching machines, and any `metadata` needed to render templated configs. For example `/var/lib/matchbox/groups/node1.json` matches a single machine with MAC address `52:54:00:89:d8:10`.
```json
{
  "name": "node1",
  "profile": "etcd",
  "selector": {
    "mac": "52:54:00:89:d8:10"
  },
  "metadata": {
    "fleet_metadata": "role=etcd,name=node1",
    "etcd_name": "node1",
    "etcd_initial_cluster": "node1=http://node1.example.com:2380,node2=http://node2.example.com:2380,node3=http://node3.example.com:2380"
  }
}
```
Meanwhile, `/var/lib/matchbox/groups/proxy.json` acts as the default machine group since it has no selectors.
```json
{
  "name": "etcd-proxy",
  "profile": "etcd-proxy",
  "metadata": {
    "fleet_metadata": "role=etcd-proxy",
    "etcd_initial_cluster": "node1=http://node1.example.com:2380,node2=http://node2.example.com:2380,node3=http://node3.example.com:2380"
  }
}
```
For example, a request to `/ignition?mac=52:54:00:89:d8:10` would render the Ignition template in the "etcd" `Profile`, with the machine group's metadata. A request to `/ignition` would match the default group (which has no selectors) and render the Ignition in the "etcd-proxy" Profile. Avoid defining multiple default groups as resolution will not be deterministic.
#### Reserved selectors
Group selectors can use any key/value pairs you find useful. However, several labels have a defined purpose and will be normalized or parsed specially.
* `uuid` - machine UUID
* `mac` - network interface physical address (normalized MAC address)
* `hostname` - hostname reported by a network boot program
* `serial` - serial reported by a network boot program
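For example, a hypothetical group (all values are illustrative) could select a machine by `uuid` instead of `mac`:

```json
{
  "name": "node2",
  "profile": "etcd",
  "selector": {
    "uuid": "16e7d8a7-bfa9-428b-9117-363341bb330b"
  },
  "metadata": {
    "etcd_name": "node2"
  }
}
```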
### Config templates
Profiles can reference various templated configs. Ignition JSON configs can be generated from [Container Linux Config](https://github.com/coreos/container-linux-config-transpiler/blob/master/doc/configuration.md) template files. Cloud-Config template files can be used to render a script or Cloud-Config. Generic template files can be used to render arbitrary untyped configs (experimental). Each template may contain [Go template](https://golang.org/pkg/text/template/) elements which will be rendered with machine group metadata, selectors, and query params.
For details and examples:
* [Container Linux Config](container-linux-config.md)
* [Cloud-Config](cloud-config.md)
#### Variables
Within Container Linux Config templates, Cloud-Config templates, or generic templates, you can use group metadata, selectors, or request-scoped query params. For example, a request `/generic?mac=52-54-00-89-d8-10&foo=some-param&bar=b` would match the `node1.json` machine group shown above. If the group's profile ("etcd") referenced a generic template, the following variables could be used.
<!-- {% raw %} -->
```
# Untyped generic config file
# Selector
{{.mac}} # 52:54:00:89:d8:10 (normalized)
# Metadata
{{.etcd_name}} # node1
{{.fleet_metadata}} # role=etcd,name=node1
# Query
{{.request.query.mac}} # 52:54:00:89:d8:10 (normalized)
{{.request.query.foo}} # some-param
{{.request.query.bar}} # b
# Special Addition
{{.request.raw_query}} # mac=52:54:00:89:d8:10&foo=some-param&bar=b
```
<!-- {% endraw %} -->
Note that `.request` is reserved for these purposes so group metadata with data nested under a top level "request" key will be overwritten.
## Assets
`matchbox` can serve `-assets-path` static assets at `/assets`. This is helpful for reducing bandwidth usage when serving the kernel and initrd to network booted machines. The default assets-path is `/var/lib/matchbox/assets` or you can pass `-assets-path=""` to disable asset serving.
```
matchbox.foo/assets/
└── coreos
    └── VERSION
        ├── coreos_production_pxe.vmlinuz
        └── coreos_production_pxe_image.cpio.gz
```
For example, a `Profile` might refer to a local asset `/assets/coreos/VERSION/coreos_production_pxe.vmlinuz` instead of `http://stable.release.core-os.net/amd64-usr/VERSION/coreos_production_pxe.vmlinuz`.
See the [get-coreos](../scripts/README.md#get-coreos) script to quickly download, verify, and place Container Linux assets.
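As a sketch, the tree above can be staged locally before pointing `-assets-path` at it. The version directory name and temp location here are illustrative.

```shell
# Illustrative: stage a Container Linux asset tree for matchbox to serve.
# The version directory name is an example.
root=$(mktemp -d)
mkdir -p "$root/coreos/1235.9.0"
touch "$root/coreos/1235.9.0/coreos_production_pxe.vmlinuz" \
      "$root/coreos/1235.9.0/coreos_production_pxe_image.cpio.gz"
# matchbox -assets-path=$root would serve these at /assets/coreos/1235.9.0/
find "$root" -type f | sort
```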
## Network
`matchbox` does not implement or exec a DHCP/TFTP server. Read [network setup](network-setup.md) or use the [coreos/dnsmasq](../contrib/dnsmasq) image if you need a quick DHCP, proxyDHCP, TFTP, or DNS setup.
## Going further
* [gRPC API Usage](config.md#grpc-api)
* [Metadata](api.md#metadata)
* OpenPGP [Signing](api.md#openpgp-signatures)

View File

@@ -1,5 +1,5 @@
# Network Boot Environments
# Network boot environments
This guide reviews network boot protocols and the different ways client machines can be PXE booted.
@@ -7,24 +7,26 @@ This guide reviews network boot protocols and the different ways client machines
The Preboot eXecution Environment (PXE) defines requirements for consistent, hardware-independent network-based machine booting and configuration. Formally, PXE specifies pre-boot protocol services that client NIC firmware must provide (DHCP, TFTP, UDP/IP), specifies boot firmware requirements, and defines a client-server protocol for obtaining a network boot program (NBP) which automates OS installation and configuration.
<img src='img/pxelinux.png' class="img-center" alt="Basic PXE client server protocol flow"/>
![PXE protocol](img/pxelinux.png)
At power-on, if a client machine's BIOS or UEFI boot firmware is set to perform network booting, the network interface card's PXE firmware broadcasts a DHCPDISCOVER packet identifying itself as a PXEClient to the network environment.
The network environment can be set up in a number of ways, which we'll discuss. In the simplest, a PXE-enabled DHCP Server responds with a DHCPOFFER with Options, which include a TFTP server IP ("next server") and the name of an NBP ("boot filename") to download (e.g. pxelinux.0). PXE firmware then downloads the NBP over TFTP and starts it. Finally, the NBP loads configs, scripts, and/or images it requires to run an OS.
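For example, in dnsmasq those two Options (next server and boot filename) reduce to a single directive; the values below are illustrative:

```ini
# Illustrative dnsmasq fragment: PXE-enable an existing DHCP server.
enable-tftp
tftp-root=/var/lib/tftpboot
# <boot filename>,<TFTP server name>,<"next server" address>
dhcp-boot=pxelinux.0,tftpserver,192.168.1.100
```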
### Network Boot Programs
### Network boot programs
Machines can be booted and configured with CoreOS using several network boot programs and approaches. Let's review them. If you're new to network booting or unsure which to choose, iPXE is a reasonable and flexible choice.
Machines can be booted and configured with CoreOS Container Linux using several network boot programs and approaches. Let's review them. If you're new to network booting or unsure which to choose, iPXE is a reasonable and flexible choice.
#### PXELINUX
[PXELINUX](http://www.syslinux.org/wiki/index.php/PXELINUX) is a common network boot program which loads a config file from `mybootdir/pxelinux.cfg/` over TFTP. The file is chosen based on the client's UUID, MAC address, IP address, or a default.
mybootdir/pxelinux.cfg/b8945908-d6a6-41a9-611d-74a6ab80b83d
mybootdir/pxelinux.cfg/default
```
mybootdir/pxelinux.cfg/b8945908-d6a6-41a9-611d-74a6ab80b83d
mybootdir/pxelinux.cfg/default
```
Here is an example PXE config file which boots a CoreOS image hosted on the TFTP server.
Here is an example PXE config file which boots a Container Linux image hosted on the TFTP server.
```
default coreos
```
@@ -47,11 +49,11 @@ This approach has a number of drawbacks. TFTP can be slow, managing config files
[iPXE](http://ipxe.org/) is an enhanced implementation of the PXE client firmware and a network boot program which uses iPXE scripts rather than config files and can download scripts and images with HTTP.
<img src='img/ipxe.png' class="img-center" alt="iPXE client server protocol flow"/>
![iPXE flow](img/ipxe.png)
A DHCPOFFER to iPXE client firmware specifies an HTTP boot script such as `http://bootcfg.foo/boot.ipxe`.
A DHCPOFFER to iPXE client firmware specifies an HTTP boot script such as `http://matchbox.foo/boot.ipxe`.
Here is an example iPXE script for booting the remote CoreOS stable image.
Here is an example iPXE script for booting the remote Container Linux stable image.
```
#!ipxe
```
@@ -64,11 +66,7 @@ boot
A TFTP server is used only to provide the `undionly.kpxe` boot program to older PXE firmware in order to bootstrap into iPXE.
CoreOS `bootcfg` can render signed iPXE scripts to machines based on their hardware attributes. Setup involves configuring your DHCP server to point iPXE clients to the `bootcfg` [iPXE endpoint](api.md#ipxe).
#### Pixiecore
[Pixiecore](https://github.com/danderson/pixiecore) is a newer service which implements a proxyDHCP server, TFTP server, and HTTP server all-in-one and calls through to an HTTP API. CoreOS `bootcfg` can serve Pixiecore JSON (optionally signed) based on the supplied MAC address, to implement the Pixiecore HTTP API.
CoreOS `matchbox` can render signed iPXE scripts to machines based on their hardware attributes. Setup involves configuring your DHCP server to point iPXE clients to the `matchbox` [iPXE endpoint](api.md#ipxe).
## DHCP
@@ -76,4 +74,4 @@ Many networks have DHCP services which are impractical to modify or disable. Com
To address this, PXE client firmware listens for DHCPOFFERs from a non-PXE DHCP server *and* a PXE-enabled **proxyDHCP server** configured to respond with the next server and boot filename only. Client firmware combines the two responses as if they had come from a single PXE-enabled DHCP server.
<img src='img/proxydhcp.png' class="img-center" alt="DHCP and proxyDHCP responses are merged to get PXE Options"/>
![Proxy DHCP flow](img/proxydhcp.png)

View File

@@ -1,9 +1,8 @@
# Network setup
# Network Setup
This guide shows how to create a DHCP/TFTP/DNS network boot environment to boot and provision BIOS/PXE, iPXE, or UEFI client machines.
This guide shows how to create a DHCP/TFTP/DNS network boot environment to work with `bootcfg` to boot and provision PXE, iPXE, or GRUB2 client machines.
`bootcfg` serves iPXE scripts or GRUB configs over HTTP to serve as the entrypoint for CoreOS cluster bring-up. It does not implement or exec a DHCP, TFTP, or DNS server. Instead, you can configure your own network services to point to `bootcfg` or use the convenient [coreos/dnsmasq](../contrib/dnsmasq) container image (used in libvirt demos).
Matchbox serves iPXE scripts over HTTP to serve as the entrypoint for provisioning clusters. It does not implement or exec a DHCP, TFTP, or DNS server. Instead, configure your network environment to point to Matchbox or use the convenient [coreos/dnsmasq](../contrib/dnsmasq) container image (used in local QEMU/KVM setup).
*Note*: These are just suggestions. Your network administrator or system administrator should choose the right network setup for your company.
@@ -13,148 +12,243 @@ Client hardware must have a network interface which supports PXE or iPXE.
## Goals
* Add a DNS name which resolves to a `bootcfg` deploy.
* Chainload PXE firmware to iPXE or GRUB2
* Point iPXE clients to `http://bootcfg.foo:port/boot.ipxe`
* Point GRUB clients to `http://bootcfg.foo:port/grub`
* Add a DNS name which resolves to a `matchbox` deploy.
* Chainload BIOS clients (legacy PXE) to iPXE (undionly.kpxe)
* Chainload UEFI clients to iPXE (ipxe.efi)
* Point iPXE clients to `http://matchbox.example.com:port/boot.ipxe`
* Point GRUB clients to `http://matchbox.example.com:port/grub`
## Setup
Many companies already have DHCP/TFTP configured to "PXE-boot" PXE/iPXE clients. In this case, machines (or a subset of machines) can be made to chainload from `chain http://bootcfg.foo:port/boot.ipxe`. Older PXE clients can be made to chainload into iPXE or GRUB to be able to fetch subsequent configs via HTTP.
Many companies already have DHCP/TFTP configured to "PXE-boot" PXE/iPXE clients. In this case, machines (or a subset of machines) can be made to chainload from `chain http://matchbox.example.com:port/boot.ipxe`. Older PXE clients can be made to chainload into iPXE to be able to fetch subsequent configs via HTTP.
On simpler networks, such as what a developer might have at home, a relatively inflexible DHCP server may be in place, with no TFTP server. In this case, a proxy DHCP server can be run alongside a non-PXE capable DHCP server.
This diagram can point you to the **right section(s)** of this document.
<img src='img/network-setup-flow.png' class="img-center" alt="Network Setup Flow"/>
![Network Setup](img/network-setup-flow.png)
The setup of DHCP, TFTP, and DNS services on a network varies greatly. If you wish to use rkt or Docker to quickly run DHCP, proxyDHCP, TFTP, or DNS services, use [coreos/dnsmasq](#coreos/dnsmasq).
The setup of DHCP, TFTP, and DNS services on a network varies greatly. If you wish to use rkt or Docker to quickly run DHCP, proxyDHCP, TFTP, or DNS services, use [coreos/dnsmasq](#coreosdnsmasq).
## DNS
Add a DNS entry (e.g. `bootcfg.foo`, `provisioner.mycompany-internal`) that resolves to a deployment of the CoreOS `bootcfg` service from machines you intend to boot and provision.
Add a DNS entry (e.g. `matchbox.example.com`, `provisioner.mycompany-internal`) that resolves to a deployment of the CoreOS `matchbox` service from machines you intend to boot and provision.
dig bootcfg.foo
```sh
$ dig matchbox.example.com
```
If you deployed `bootcfg` to a known IP address (e.g. dedicated host, load balanced endpoint, Kubernetes NodePort) and use `dnsmasq`, a domain name to IPv4/IPv6 address mapping could be added to the `/etc/dnsmasq.conf`.
If you deployed `matchbox` to a known IP address (e.g. dedicated host, load balanced endpoint, Kubernetes NodePort) and use `dnsmasq`, a domain name to IPv4/IPv6 address mapping could be added to the `/etc/dnsmasq.conf`.
# dnsmasq.conf
address=/bootcfg.foo/172.15.0.2
```
# dnsmasq.conf
address=/matchbox.example.com/172.18.0.2
```
## iPXE
Servers with DHCP/TFTP services which already network boot iPXE clients can use the `chain` command to make clients download and execute the iPXE boot script from `bootcfg`.
Networks which already run DHCP and TFTP services to network boot PXE/iPXE clients can add an iPXE config to delegate or `chain` to the matchbox service's iPXE entrypoint.
# /var/www/html/ipxe/default.ipxe
chain http://bootcfg.foo:8080/boot.ipxe
```
# /var/www/html/ipxe/default.ipxe
chain http://matchbox.example.com:8080/boot.ipxe
```
You can chainload from a menu entry or use other [iPXE commands](http://ipxe.org/cmd) if you have needs beyond just delegating to the iPXE script served by `bootcfg`.
You can chainload from a menu entry or use other [iPXE commands](http://ipxe.org/cmd) if you need to do more than simple delegation.
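For instance, a minimal iPXE menu could offer the chainload alongside a fallback; the hostname and labels below are illustrative:

```
#!ipxe
menu Network boot
item matchbox Provision via matchbox
item shell    Drop to the iPXE shell
choose target && goto ${target}

:matchbox
chain http://matchbox.example.com:8080/boot.ipxe

:shell
shell
```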
## GRUB
### PXE-enabled DHCP
Needs docs.
### Configuring DHCP
Configure your DHCP server to supply options to older PXE client firmware to specify the location of an iPXE or GRUB network boot program on your TFTP server. Send clients to the `bootcfg` iPXE script or GRUB config endpoints.
Configure your DHCP server to supply options to older PXE client firmware to specify the location of an iPXE or GRUB network boot program on your TFTP server. Send clients to the `matchbox` iPXE script or GRUB config endpoints.
Here is an example `/etc/dnsmasq.conf`:
dhcp-range=192.168.1.1,192.168.1.254,30m
enable-tftp
tftp-root=/var/lib/tftpboot
# if request comes from older PXE ROM, chainload to iPXE (via TFTP)
dhcp-boot=tag:!ipxe,undionly.kpxe
# if request comes from iPXE user class, set tag "ipxe"
dhcp-userclass=set:ipxe,iPXE
# point ipxe tagged requests to the bootcfg iPXE boot script (via HTTP)
dhcp-boot=tag:ipxe,http://bootcfg.foo:8080/boot.ipxe
# verbose
log-queries
log-dhcp
# (optional) disable DNS
port=0
# (optional) static DNS assignments
# address=/bootcfg.foo/192.168.1.100
```ini
dhcp-range=192.168.1.1,192.168.1.254,30m
enable-tftp
tftp-root=/var/lib/tftpboot
# Legacy PXE
dhcp-match=set:bios,option:client-arch,0
dhcp-boot=tag:bios,undionly.kpxe
# UEFI
dhcp-match=set:efi32,option:client-arch,6
dhcp-boot=tag:efi32,ipxe.efi
dhcp-match=set:efibc,option:client-arch,7
dhcp-boot=tag:efibc,ipxe.efi
dhcp-match=set:efi64,option:client-arch,9
dhcp-boot=tag:efi64,ipxe.efi
# iPXE - chainload to matchbox ipxe boot script
dhcp-userclass=set:ipxe,iPXE
dhcp-boot=tag:ipxe,http://matchbox.example.com:8080/boot.ipxe
# verbose
log-queries
log-dhcp
# static DNS assignments
address=/matchbox.example.com/192.168.1.100
# (optional) disable DNS and specify alternate
# port=0
# dhcp-option=6,192.168.1.100
```
Add [undionly.kpxe](http://boot.ipxe.org/undionly.kpxe) (and undionly.kpxe.0 if using dnsmasq) to your tftp-root (e.g. `/var/lib/tftpboot`).
sudo systemctl start dnsmasq
sudo firewall-cmd --add-service=dhcp --add-service=tftp [--add-service=dns]
sudo firewall-cmd --list-services
#### proxy DHCP
Alternately, a DHCP proxy server can be run alongside an existing non-PXE DHCP server. The proxy DHCP server provides only the next server and boot filename Options, leaving IP allocation to the DHCP server. Clients listen for both DHCP offers and merge the responses as though they had come from one PXE-enabled DHCP server.
Add [ipxe.efi](http://boot.ipxe.org/ipxe.efi) and [undionly.kpxe](http://boot.ipxe.org/undionly.kpxe) to your tftp-root (e.g. `/var/lib/tftpboot`).
```sh
$ sudo systemctl start dnsmasq
$ sudo firewall-cmd --add-service=dhcp --add-service=tftp [--add-service=dns]
$ sudo firewall-cmd --list-services
```
See [dnsmasq](#coreosdnsmasq) below to run dnsmasq with a container.
#### Proxy-DHCP
Alternately, a proxy-DHCP server can be run alongside an existing non-PXE DHCP server. The proxy DHCP server provides only the next server and boot filename Options, leaving IP allocation to the DHCP server. Clients listen for both DHCP offers and merge the responses as though they had come from one PXE-enabled DHCP server.
Example `/etc/dnsmasq.conf`:
dhcp-range=192.168.1.1,proxy,255.255.255.0
enable-tftp
tftp-root=/var/lib/tftpboot
# if request comes from older PXE ROM, chainload to iPXE (via TFTP)
pxe-service=tag:#ipxe,x86PC,"PXE chainload to iPXE",undionly.kpxe
# if request comes from iPXE user class, set tag "ipxe"
dhcp-userclass=set:ipxe,iPXE
# point ipxe tagged requests to the bootcfg iPXE boot script (via HTTP)
pxe-service=tag:ipxe,x86PC,"iPXE",http://bootcfg.foo:8080/boot.ipxe
# verbose
log-queries
log-dhcp
```ini
dhcp-range=192.168.1.1,proxy,255.255.255.0
enable-tftp
tftp-root=/var/lib/tftpboot
# if request comes from older PXE ROM, chainload to iPXE (via TFTP)
pxe-service=tag:#ipxe,x86PC,"PXE chainload to iPXE",undionly.kpxe
# if request comes from iPXE user class, set tag "ipxe"
dhcp-userclass=set:ipxe,iPXE
# point ipxe tagged requests to the matchbox iPXE boot script (via HTTP)
pxe-service=tag:ipxe,x86PC,"iPXE",http://matchbox.example.com:8080/boot.ipxe
# verbose
log-queries
log-dhcp
```
Add [undionly.kpxe](http://boot.ipxe.org/undionly.kpxe) (and undionly.kpxe.0 if using dnsmasq) to your tftp-root (e.g. `/var/lib/tftpboot`).
sudo systemctl start dnsmasq
sudo firewall-cmd --add-service=dhcp --add-service=tftp [--add-service=dns]
sudo firewall-cmd --list-services
```sh
$ sudo systemctl start dnsmasq
$ sudo firewall-cmd --add-service=dhcp --add-service=tftp [--add-service=dns]
$ sudo firewall-cmd --list-services
```
With rkt:
sudo rkt run coreos.com/dnsmasq:v0.3.0 --net=host -- -d -q --dhcp-range=192.168.1.1,proxy,255.255.255.0 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-userclass=set:ipxe,iPXE --pxe-service=tag:#ipxe,x86PC,"PXE chainload to iPXE",undionly.kpxe --pxe-service=tag:ipxe,x86PC,"iPXE",http://bootcfg.foo:8080/boot.ipxe --log-queries --log-dhcp
With Docker:
sudo docker run --net=host --rm --cap-add=NET_ADMIN quay.io/coreos/dnsmasq -d -q --dhcp-range=192.168.1.1,proxy,255.255.255.0 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-userclass=set:ipxe,iPXE --pxe-service=tag:#ipxe,x86PC,"PXE chainload to iPXE",undionly.kpxe --pxe-service=tag:ipxe,x86PC,"iPXE",http://bootcfg.foo:8080/boot.ipxe --log-queries --log-dhcp
See [dnsmasq](#coreosdnsmasq) below to run dnsmasq with a container.
### Configurable TFTP
If your DHCP server is configured to PXE boot clients, but you don't have control over this configuration, you can modify the pxelinux.cfg's served to PXE clients.
If your DHCP server is configured to network boot PXE clients (but not iPXE clients), add a pxelinux.cfg to serve an iPXE kernel image and append commands.
Example `/var/lib/tftpboot/pxelinux.cfg/default`:
timeout 10
default iPXE
LABEL iPXE
KERNEL ipxe.lkrn
APPEND dhcp && chain http://bootcfg.foo:8080/boot.ipxe
```
timeout 10
default iPXE
LABEL iPXE
KERNEL ipxe.lkrn
APPEND dhcp && chain http://matchbox.example.com:8080/boot.ipxe
```
Add ipxe.lkrn to `/var/lib/tftpboot` (see [iPXE docs](http://ipxe.org/embed)).
## coreos/dnsmasq
On networks without network services, the `coreos.com/dnsmasq:v0.3.0` rkt ACI or `coreos/dnsmasq:latest` Docker image can setup an appropriate environment quickly. The images bundle `undionly.kpxe` and `grub.efi` for convenience. Here are some examples which run a DHCP/TFTP/DNS server on your host's network:
The [quay.io/coreos/dnsmasq](https://quay.io/repository/coreos/dnsmasq) container image can run DHCP, TFTP, and DNS services via rkt or docker. The image bundles `ipxe.efi`, `undionly.kpxe`, and `grub.efi` for convenience. See [contrib/dnsmasq](../contrib/dnsmasq) for details.
With rkt:
Run DHCP, TFTP, and DNS on the host's network:
```sh
sudo rkt run --net=host quay.io/coreos/dnsmasq \
--caps-retain=CAP_NET_ADMIN,CAP_NET_BIND_SERVICE,CAP_SETGID,CAP_SETUID,CAP_NET_RAW \
-- -d -q \
--dhcp-range=192.168.1.3,192.168.1.254 \
--enable-tftp \
--tftp-root=/var/lib/tftpboot \
--dhcp-match=set:bios,option:client-arch,0 \
--dhcp-boot=tag:bios,undionly.kpxe \
--dhcp-match=set:efi32,option:client-arch,6 \
--dhcp-boot=tag:efi32,ipxe.efi \
--dhcp-match=set:efibc,option:client-arch,7 \
--dhcp-boot=tag:efibc,ipxe.efi \
--dhcp-match=set:efi64,option:client-arch,9 \
--dhcp-boot=tag:efi64,ipxe.efi \
--dhcp-userclass=set:ipxe,iPXE \
--dhcp-boot=tag:ipxe,http://matchbox.example.com:8080/boot.ipxe \
--address=/matchbox.example.com/192.168.1.2 \
--log-queries \
--log-dhcp
```
sudo rkt trust --prefix coreos.com/dnsmasq
# gpg key fingerprint is: 18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E
```sh
sudo docker run --rm --cap-add=NET_ADMIN --net=host quay.io/coreos/dnsmasq \
-d -q \
--dhcp-range=192.168.1.3,192.168.1.254 \
--enable-tftp --tftp-root=/var/lib/tftpboot \
--dhcp-match=set:bios,option:client-arch,0 \
--dhcp-boot=tag:bios,undionly.kpxe \
--dhcp-match=set:efi32,option:client-arch,6 \
--dhcp-boot=tag:efi32,ipxe.efi \
--dhcp-match=set:efibc,option:client-arch,7 \
--dhcp-boot=tag:efibc,ipxe.efi \
--dhcp-match=set:efi64,option:client-arch,9 \
--dhcp-boot=tag:efi64,ipxe.efi \
--dhcp-userclass=set:ipxe,iPXE \
--dhcp-boot=tag:ipxe,http://matchbox.example.com:8080/boot.ipxe \
--address=/matchbox.example.com/192.168.1.2 \
--log-queries \
--log-dhcp
```
Run a proxy-DHCP and TFTP service on the host's network:
```sh
sudo rkt run --net=host quay.io/coreos/dnsmasq \
--caps-retain=CAP_NET_ADMIN,CAP_NET_BIND_SERVICE,CAP_SETGID,CAP_SETUID,CAP_NET_RAW \
-- -d -q \
--dhcp-range=192.168.1.1,proxy,255.255.255.0 \
--enable-tftp --tftp-root=/var/lib/tftpboot \
--dhcp-userclass=set:ipxe,iPXE \
--pxe-service=tag:#ipxe,x86PC,"PXE chainload to iPXE",undionly.kpxe \
--pxe-service=tag:ipxe,x86PC,"iPXE",http://matchbox.example.com:8080/boot.ipxe \
--log-queries \
--log-dhcp
```
sudo rkt run coreos.com/dnsmasq:v0.3.0 --net=host -- -d -q --dhcp-range=192.168.1.3,192.168.1.254 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-userclass=set:ipxe,iPXE --dhcp-boot=tag:#ipxe,undionly.kpxe --dhcp-boot=tag:ipxe,http://bootcfg.foo:8080/boot.ipxe --address=/bootcfg.foo/192.168.1.2 --log-queries --log-dhcp
```sh
sudo docker run --rm --cap-add=NET_ADMIN --net=host quay.io/coreos/dnsmasq \
-d -q \
--dhcp-range=192.168.1.1,proxy,255.255.255.0 \
--enable-tftp --tftp-root=/var/lib/tftpboot \
--dhcp-userclass=set:ipxe,iPXE \
--pxe-service=tag:#ipxe,x86PC,"PXE chainload to iPXE",undionly.kpxe \
--pxe-service=tag:ipxe,x86PC,"iPXE",http://matchbox.example.com:8080/boot.ipxe \
--log-queries \
--log-dhcp
```
With Docker:
Be sure to allow enabled services in your firewall configuration.
```
sudo docker run --rm --cap-add=NET_ADMIN --net=host quay.io/coreos/dnsmasq -d -q --dhcp-range=192.168.1.3,192.168.1.254 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-userclass=set:ipxe,iPXE --dhcp-boot=tag:#ipxe,undionly.kpxe --dhcp-boot=tag:ipxe,http://bootcfg.foo:8080/boot.ipxe --address=/bootcfg.foo/192.168.1.2 --log-queries --log-dhcp
```
```sh
$ sudo firewall-cmd --add-service=dhcp --add-service=tftp --add-service=dns
```
Ensure that `bootcfg.foo` resolves to a `bootcfg` deployment and that you've allowed the services to run in your firewall configuration.
## UEFI
sudo firewall-cmd --add-service=dhcp --add-service=tftp --add-service=dns
### Development
Install the dependencies for [QEMU with UEFI](https://fedoraproject.org/wiki/Using_UEFI_with_QEMU). Walk through the [getting-started-with-docker](getting-started-with-docker.md) tutorial. Launch client VMs using `create-uefi`.
Create UEFI QEMU/KVM VMs attached to the `docker0` bridge.
```sh
$ sudo ./scripts/libvirt create-uefi
```
UEFI clients should chainload `ipxe.efi`, load iPXE and Ignition configs from Matchbox, and Container Linux should boot as usual.
## Troubleshooting
See [troubleshooting](troubleshooting.md).

View File

@@ -1,22 +1,21 @@
# OpenPGP Signing
# OpenPGP signing
The `bootcfg` OpenPGP signature endpoints serve detached binary and ASCII armored signatures of rendered configs, if enabled. Each config endpoint has corresponding signature endpoints, typically suffixed with `.sig` or `.asc`.
The `matchbox` OpenPGP signature endpoints serve detached binary and ASCII armored signatures of rendered configs, if enabled. Each config endpoint has corresponding signature endpoints, typically suffixed with `.sig` or `.asc`.
To enable OpenPGP signing, provide the path to a secret keyring containing a single signing key with `-key-ring-path` or by setting `BOOTCFG_KEY_RING_PATH`. If a passphrase is required, set it via the `BOOTCFG_PASSPHRASE` environment variable.
To enable OpenPGP signing, provide the path to a secret keyring containing a single signing key with `-key-ring-path` or by setting `MATCHBOX_KEY_RING_PATH`. If a passphrase is required, set it via the `MATCHBOX_PASSPHRASE` environment variable.
Here are example signature endpoints without their query parameters.
| Endpoint | Signature Endpoint | ASCII Signature Endpoint |
|------------|--------------------|-------------------------|
| iPXE | `http://bootcfg.foo/ipxe.sig` | `http://bootcfg.foo/ipxe.asc` |
| Pixiecore | `http://bootcfg/pixiecore/v1/boot.sig/:MAC` | `http://bootcfg/pixiecore/v1/boot.asc/:MAC` |
| GRUB2 | `http://bootcfg.foo/grub.sig` | `http://bootcfg.foo/grub.asc` |
| Ignition | `http://bootcfg.foo/ignition.sig` | `http://bootcfg.foo/ignition.asc` |
| Cloud-Config | `http://bootcfg.foo/cloud.sig` | `http://bootcfg.foo/cloud.asc` |
| Metadata | `http://bootcfg.foo/metadata.sig` | `http://bootcfg.foo/metadata.asc` |
| iPXE | `http://matchbox.foo/ipxe.sig` | `http://matchbox.foo/ipxe.asc` |
| GRUB2 | `http://matchbox.foo/grub.sig` | `http://matchbox.foo/grub.asc` |
| Ignition | `http://matchbox.foo/ignition.sig` | `http://matchbox.foo/ignition.asc` |
| Cloud-Config | `http://matchbox.foo/cloud.sig` | `http://matchbox.foo/cloud.asc` |
| Metadata | `http://matchbox.foo/metadata.sig` | `http://matchbox.foo/metadata.asc` |
In production, mount your signing keyring and source the passphrase from a [Kubernetes secret](http://kubernetes.io/v1.1/docs/user-guide/secrets.html). Use a signing subkey exported to a keyring by itself, which can be revoked by a primary key, if needed.
In production, mount your signing keyring and source the passphrase from a [Kubernetes secret](https://kubernetes.io/docs/user-guide/secrets/). Use a signing subkey exported to a keyring by itself, which can be revoked by a primary key, if needed.
To try it locally, you may use the test fixture keyring. **Warning: The test fixture keyring is for examples only.**
@@ -26,28 +25,33 @@ Verify a signature response and config response from the command line using the
**Warning: The test fixture keyring is for examples only.**
$ gpg --homedir sign/fixtures --verify sig_file response_file
gpg: Signature made Mon 08 Feb 2016 11:37:03 PM PST using RSA key ID 9896356A
gpg: sign/fixtures/trustdb.gpg: trustdb created
gpg: Good signature from "Fake Bare Metal Key (Do not use) <do-not-use@example.com>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: BE2F 12BC 3642 2594 570A CCBB 8DC4 2020 9896 356A
```sh
$ gpg --homedir sign/fixtures --verify sig_file response_file
gpg: Signature made Mon 08 Feb 2016 11:37:03 PM PST using RSA key ID 9896356A
gpg: sign/fixtures/trustdb.gpg: trustdb created
gpg: Good signature from "Fake Bare Metal Key (Do not use) <do-not-use@example.com>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: BE2F 12BC 3642 2594 570A CCBB 8DC4 2020 9896 356A
```
## Signing Key Generation
## Signing key generation
Create a signing key or subkey according to your requirements and security policies. Here are some basic [guides](https://coreos.com/rkt/docs/latest/signing-and-verification-guide.html).
### gpg
mkdir -m 700 path/in/vault
gpg --homedir path/in/vault --expert --gen-key
...
```sh
$ mkdir -m 700 path/in/vault
$ gpg --homedir path/in/vault --expert --gen-key
...
```
### gpg2
mkdir -m 700 path/in/vault
gpg2 --homedir path/in/vault --expert --gen-key
...
gpg2 --homedir path/in/vault --export-secret-key KEYID > path/in/vault/secring.gpg
```sh
$ mkdir -m 700 path/in/vault
$ gpg2 --homedir path/in/vault --expert --gen-key
...
$ gpg2 --homedir path/in/vault --export-secret-key KEYID > path/in/vault/secring.gpg
```

View File

@@ -1,115 +0,0 @@
# Torus Storage
The Torus example provisions a 3 node CoreOS cluster, with `etcd3` and Torus, to demonstrate a stand-alone storage cluster. Each of the 3 nodes runs a Torus instance which makes 1GiB of space available (configured per node by "torus_storage_size" in machine group metadata).
## Requirements
Ensure that you've gone through the [bootcfg with rkt](getting-started-rkt.md) guide and understand the basics. In particular, you should be able to:
* Use rkt to start `bootcfg`
* Create a network boot environment with `coreos/dnsmasq`
* Create the example libvirt client VMs
* Install the Torus [binaries](https://github.com/coreos/torus/releases)
## Examples
The [examples](../examples) statically assign IP addresses (172.15.0.21, 172.15.0.22, 172.15.0.23) to libvirt client VMs created by `scripts/libvirt`. The examples can be used for physical machines if you update the MAC/IP addresses. See [network setup](network-setup.md) and [deployment](deployment.md).
* [torus](../examples/groups/torus) - iPXE boot a Torus cluster (use rkt)
## Assets
Download the CoreOS image assets referenced in the target [profile](../examples/profiles).
```sh
./scripts/get-coreos alpha 1053.2.0 ./examples/assets
```
## Containers
Run the latest `bootcfg` ACI with rkt and the `torus` example.
```sh
sudo rkt run --net=metal0:IP=172.15.0.2 --mount volume=data,target=/var/lib/bootcfg --volume data,kind=host,source=$PWD/examples --mount volume=groups,target=/var/lib/bootcfg/groups --volume groups,kind=host,source=$PWD/examples/groups/torus quay.io/coreos/bootcfg:latest -- -address=0.0.0.0:8080 -log-level=debug
```
Create a network boot environment and power-on your machines. Revisit [bootcfg with rkt](getting-started-rkt.md) for help. Client machines should network boot and provision themselves.
## Verify
Install the Torus [binaries](https://github.com/coreos/torus/releases) on your laptop. Torus uses etcd3 for coordination and metadata storage, so any etcd node in the cluster can be queried with `torusctl`.
```sh
./torusctl --etcd 172.15.0.21:2379 list-peers
```
Run `list-peers` to report the status of data nodes in the Torus cluster.
```
+--------------------------+--------------------------------------+---------+------+--------+---------------+--------------+
| ADDRESS | UUID | SIZE | USED | MEMBER | UPDATED | REB/REP DATA |
+--------------------------+--------------------------------------+---------+------+--------+---------------+--------------+
| http://172.15.0.21:40000 | 016fad6a-2e23-11e6-8ced-525400a19cae | 1.0 GiB | 0 B | OK | 1 second ago | 0 B/sec |
| http://172.15.0.23:40000 | 0408cbba-2e23-11e6-9871-525400c36177 | 1.0 GiB | 0 B | OK | 2 seconds ago | 0 B/sec |
| http://172.15.0.22:40000 | 0c67d31c-2e23-11e6-91f5-525400b22f86 | 1.0 GiB | 0 B | OK | 3 seconds ago | 0 B/sec |
+--------------------------+--------------------------------------+---------+------+--------+---------------+--------------+
```
Torus has already initialized its metadata within etcd3 to format the cluster and added all peers to the pool. Each node provides 1 GiB of storage and has `MEMBER` status `OK`.
### Volume Creation
Create a new replicated, virtual block device or `volume` on Torus.
```sh
./torusblk --etcd=172.15.0.21:2379 volume create hello 500MiB
```
List the current volumes,
```sh
./torusctl --etcd=172.15.0.21:2379 volume list
```
and verify that `hello` was created.
```
+-------------+---------+
| VOLUME NAME | SIZE |
+-------------+---------+
| hello | 500 MiB |
+-------------+---------+
```
### Filesystems and Mounting
Let's attach the Torus volume, create a filesystem, and add some files. Add the `nbd` kernel module.
```sh
sudo modprobe nbd
sudo ./torusblk --etcd=172.15.0.21:2379 nbd hello
```
In a new shell, create a new filesystem on the volume and mount it on your system.
```sh
sudo mkfs.ext4 /dev/nbd0
sudo mkdir -p /mnt/hello
sudo mount /dev/nbd0 -o discard,noatime /mnt/hello
```
Check that the mounted filesystem is present.
```sh
$ mount | grep nbd
/dev/nbd0 on /mnt/hello type ext4 (rw,noatime,seclabel,discard,data=ordered)
```
By default, Torus uses a replication factor of 2. You may write some data and poweroff one of the three nodes if you wish.
```sh
sudo sh -c "echo 'hello world' > /mnt/hello/world"
sudo virsh destroy node3 # actually equivalent to poweroff
```
Check the Torus data nodes.
```sh
$ ./torusctl --etcd 172.15.0.21:2379 list-peers
```
```
+--------------------------+--------------------------------------+---------+--------+--------+---------------+--------------+
| ADDRESS | UUID | SIZE | USED | MEMBER | UPDATED | REB/REP DATA |
+--------------------------+--------------------------------------+---------+--------+--------+---------------+--------------+
| http://172.15.0.21:40000 | 016fad6a-2e23-11e6-8ced-525400a19cae | 1.0 GiB | 22 MiB | OK | 3 seconds ago | 0 B/sec |
| http://172.15.0.22:40000 | 0c67d31c-2e23-11e6-91f5-525400b22f86 | 1.0 GiB | 22 MiB | OK | 3 seconds ago | 0 B/sec |
| | 0408cbba-2e23-11e6-9871-525400c36177 | ??? | ??? | DOWN | Missing | |
+--------------------------+--------------------------------------+---------+--------+--------+---------------+--------------+
Balanced: true Usage: 2.15%
```
## Going Further
See the [Torus](https://github.com/coreos/torus) project to learn more about Torus and contribute.

View File

@@ -1,18 +1,19 @@
# Troubleshooting
## Firewall
Running DHCP or proxyDHCP with `coreos/dnsmasq` on a host requires that the firewall allow DHCP and TFTP (for chainloading) services to run.
## Port Collision
## Port collision
Running DHCP or proxyDHCP can cause "port already in use" collisions, depending on what's running. For example, Fedora runs bootp listening on udp/67. Find the service using the port.
sudo lsof -i :67
```sh
$ sudo lsof -i :67
```
Evaluate whether you can configure the existing service or whether you'd like to stop it and test with `coreos/dnsmasq`.
## No boot filename received
PXE client firmware did not receive a DHCP Offer with PXE options after several attempts. If you're using the `coreos/dnsmasq` image with `-d`, each request should log to stdout. Using the wrong `-i` interface is the most common reason DHCP requests are not received. Otherwise, Wireshark can be useful for investigating.

Jenkinsfile

@@ -0,0 +1,63 @@
pipeline {
agent none
options {
timeout(time:45, unit:'MINUTES')
buildDiscarder(logRotator(numToKeepStr:'20'))
}
stages {
stage('Cluster Tests') {
steps {
parallel (
etcd3: {
node('fedora && bare-metal') {
timeout(time:5, unit:'MINUTES') {
checkout scm
sh '''#!/bin/bash -e
export ASSETS_DIR=~/assets; ./tests/smoke/etcd3
'''
deleteDir()
}
}
},
bootkube: {
node('fedora && bare-metal') {
timeout(time:60, unit:'MINUTES') {
checkout scm
sh '''#!/bin/bash -e
chmod 600 ./tests/smoke/fake_rsa
export ASSETS_DIR=~/assets; ./tests/smoke/bootkube
'''
deleteDir()
}
}
},
"etcd3-terraform": {
node('fedora && bare-metal') {
timeout(time:10, unit:'MINUTES') {
checkout scm
sh '''#!/bin/bash -e
export ASSETS_DIR=~/assets; export CONFIG_DIR=~/matchbox/examples/etc/matchbox; ./tests/smoke/etcd3-terraform
'''
deleteDir()
}
}
},
"bootkube-terraform": {
node('fedora && bare-metal') {
timeout(time:60, unit:'MINUTES') {
checkout scm
sh '''#!/bin/bash -e
chmod 600 ./tests/smoke/fake_rsa
export ASSETS_DIR=~/assets; export CONFIG_DIR=~/matchbox/examples/etc/matchbox; ./tests/smoke/bootkube-terraform
'''
deleteDir()
}
}
},
)
}
}
}
}


@@ -1,45 +1,86 @@
export CGO_ENABLED:=0
LD_FLAGS="-w -X github.com/coreos/coreos-baremetal/bootcfg/version.Version=$(shell ./git-version)"
LOCAL_BIN=/usr/local/bin
VERSION=$(shell ./scripts/dev/git-version)
LD_FLAGS="-w -X github.com/coreos/matchbox/matchbox/version.Version=$(VERSION)"
REPO=github.com/coreos/matchbox
IMAGE_REPO=coreos/matchbox
QUAY_REPO=quay.io/coreos/matchbox
all: build
build: clean bin/bootcfg bin/bootcmd
bin/bootcfg:
go build -o bin/bootcfg -ldflags $(LD_FLAGS) -a github.com/coreos/coreos-baremetal/cmd/bootcfg
build: clean bin/matchbox
bin/bootcmd:
go build -o bin/bootcmd -ldflags $(LD_FLAGS) -a github.com/coreos/coreos-baremetal/cmd/bootcmd
bin/%:
@go build -o bin/$* -v -ldflags $(LD_FLAGS) $(REPO)/cmd/$*
test:
./test
@./scripts/dev/test
install:
cp bin/bootcfg $(LOCAL_BIN)
cp bin/bootcmd $(LOCAL_BIN)
.PHONY: aci
aci: clean build
@sudo ./scripts/dev/build-aci
release: clean _output/coreos-baremetal-linux-amd64.tar.gz _output/coreos-baremetal-darwin-amd64.tar.gz
.PHONY: docker-image
docker-image:
@sudo docker build --rm=true -t $(IMAGE_REPO):$(VERSION) .
@sudo docker tag $(IMAGE_REPO):$(VERSION) $(IMAGE_REPO):latest
bin/%/bootcfg:
GOOS=$* go build -o bin/$*/bootcfg -ldflags $(LD_FLAGS) -a github.com/coreos/coreos-baremetal/cmd/bootcfg
.PHONY: docker-push
docker-push: docker-image
@sudo docker tag $(IMAGE_REPO):$(VERSION) $(QUAY_REPO):latest
@sudo docker tag $(IMAGE_REPO):$(VERSION) $(QUAY_REPO):$(VERSION)
@sudo docker push $(QUAY_REPO):latest
@sudo docker push $(QUAY_REPO):$(VERSION)
bin/%/bootcmd:
GOOS=$* go build -o bin/$*/bootcmd -ldflags $(LD_FLAGS) -a github.com/coreos/coreos-baremetal/cmd/bootcmd
.PHONY: vendor
vendor:
@glide update --strip-vendor
@glide-vc --use-lock-file --no-tests --only-code
_output/coreos-baremetal-%-amd64.tar.gz: NAME=coreos-baremetal-$(VERSION)-$*-amd64
_output/coreos-baremetal-%-amd64.tar.gz: DEST=_output/$(NAME)
_output/coreos-baremetal-%-amd64.tar.gz: bin/%/bootcfg bin/%/bootcmd
mkdir -p $(DEST)
cp bin/$*/bootcfg $(DEST)
cp bin/$*/bootcmd $(DEST)
./scripts/release-files $(DEST)
tar zcvf $(DEST).tar.gz -C _output $(NAME)
.PHONY: codegen
codegen: tools
@./scripts/dev/codegen
.PHONY: tools
tools: bin/protoc bin/protoc-gen-go
bin/protoc:
@./scripts/dev/get-protoc
bin/protoc-gen-go:
@go build -o bin/protoc-gen-go $(REPO)/vendor/github.com/golang/protobuf/protoc-gen-go
clean:
rm -rf bin
rm -rf _output
@rm -rf bin
.PHONY: all build test install release clean
.SECONDARY: _output/coreos-baremetal-linux-amd64 _output/coreos-baremetal-darwin-amd64
clean-release:
@rm -rf _output
release: \
clean \
clean-release \
_output/matchbox-linux-amd64.tar.gz \
_output/matchbox-linux-arm.tar.gz \
_output/matchbox-linux-arm64.tar.gz \
_output/matchbox-darwin-amd64.tar.gz
bin/linux-amd64/matchbox: GOARGS = GOOS=linux GOARCH=amd64
bin/linux-arm/matchbox: GOARGS = GOOS=linux GOARCH=arm GOARM=6
bin/linux-arm64/matchbox: GOARGS = GOOS=linux GOARCH=arm64
bin/darwin-amd64/matchbox: GOARGS = GOOS=darwin GOARCH=amd64
bin/%/matchbox:
$(GOARGS) go build -o $@ -ldflags $(LD_FLAGS) -a $(REPO)/cmd/matchbox
_output/matchbox-%.tar.gz: NAME=matchbox-$(VERSION)-$*
_output/matchbox-%.tar.gz: DEST=_output/$(NAME)
_output/matchbox-%.tar.gz: bin/%/matchbox
mkdir -p $(DEST)
cp bin/$*/matchbox $(DEST)
./scripts/dev/release-files $(DEST)
tar zcvf $(DEST).tar.gz -C _output $(NAME)
.PHONY: all build clean test release
.SECONDARY: _output/matchbox-linux-amd64 _output/matchbox-darwin-amd64


@@ -1,56 +1,52 @@
# matchbox [![Build Status](https://travis-ci.org/coreos/matchbox.svg?branch=master)](https://travis-ci.org/coreos/matchbox) [![GoDoc](https://godoc.org/github.com/coreos/matchbox?status.svg)](https://godoc.org/github.com/coreos/matchbox) [![Docker Repository on Quay](https://quay.io/repository/coreos/matchbox/status "Docker Repository on Quay")](https://quay.io/repository/coreos/matchbox) [![IRC](https://img.shields.io/badge/irc-%23coreos-449FD8.svg)](https://botbot.me/freenode/coreos)
# CoreOS on Baremetal
`matchbox` is a service that matches bare-metal machines (based on labels like MAC, UUID, etc.) to profiles that PXE boot and provision Container Linux clusters. Profiles specify the kernel/initrd, kernel arguments, iPXE config, GRUB config, [Container Linux Config][cl-config], or other configs a machine should use. Matchbox can be [installed](Documentation/deployment.md) as a binary, RPM, container image, or deployed on a Kubernetes cluster and it provides an authenticated gRPC API for clients like [Terraform][terraform].
[![Build Status](https://travis-ci.org/coreos/coreos-baremetal.svg?branch=master)](https://travis-ci.org/coreos/coreos-baremetal) [![GoDoc](https://godoc.org/github.com/coreos/coreos-baremetal?status.png)](https://godoc.org/github.com/coreos/coreos-baremetal) [![Docker Repository on Quay](https://quay.io/repository/coreos/bootcfg/status "Docker Repository on Quay")](https://quay.io/repository/coreos/bootcfg) [![IRC](https://img.shields.io/badge/irc-%23coreos-449FD8.svg)](https://botbot.me/freenode/coreos)
Guides and a service for network booting and provisioning CoreOS clusters on virtual or physical hardware.
## Guides
* [Network Setup](Documentation/network-setup.md)
* [Machine Lifecycle](Documentation/machine-lifecycle.md)
* [Documentation][docs]
* [matchbox Service](Documentation/matchbox.md)
* [Profiles](Documentation/matchbox.md#profiles)
* [Groups](Documentation/matchbox.md#groups)
* Config Templates
* [Container Linux Config][cl-config]
* [Cloud-Config][cloud-config]
* [Configuration](Documentation/config.md)
* [HTTP API](Documentation/api.md) / [gRPC API](https://godoc.org/github.com/coreos/matchbox/matchbox/client)
* [Background: Machine Lifecycle](Documentation/machine-lifecycle.md)
* [Background: PXE Booting](Documentation/network-booting.md)
## bootcfg
### Installation
`bootcfg` is an HTTP and gRPC service that renders signed [Ignition configs](https://coreos.com/ignition/docs/latest/what-is-ignition.html), [cloud-configs](https://coreos.com/os/docs/latest/cloud-config.html), network boot configs, and metadata to machines to create CoreOS clusters. Groups match machines based on labels (e.g. MAC, UUID, stage, region) and use named Profiles for provisioning. Network boot endpoints provide PXE, iPXE, GRUB, and Pixiecore support. `bootcfg` can be deployed as a binary, as an [appc](https://github.com/appc/spec) container with [rkt](https://coreos.com/rkt/docs/latest/), or as a Docker container.
* Installation
* Installing on [Container Linux / other distros](Documentation/deployment.md)
* Installing on [Kubernetes](Documentation/deployment.md#kubernetes)
* Running with [rkt](Documentation/deployment.md#rkt) / [docker](Documentation/deployment.md#docker)
* [Network Setup](Documentation/network-setup.md)
* [bootcfg Service](Documentation/bootcfg.md)
* [Profiles](Documentation/bootcfg.md#profiles)
* [Groups](Documentation/bootcfg.md#groups-and-metadata)
* Config Templates
* [Ignition](Documentation/ignition.md)
* [Cloud-Config](Documentation/cloud-config.md)
* Tutorials (libvirt)
* [bootcfg with rkt](Documentation/getting-started-rkt.md)
* [bootcfg with Docker](Documentation/getting-started-docker.md)
* [Configuration](Documentation/config.md)
* [HTTP API](Documentation/api.md)
* [gRPC API](https://godoc.org/github.com/coreos/coreos-baremetal/bootcfg/client)
* Backends
* [FileStore](Documentation/bootcfg.md#data)
* Deployment via
* [rkt](Documentation/deployment.md#rkt)
* [docker](Documentation/deployment.md#docker)
* [Kubernetes](Documentation/deployment.md#kubernetes)
* [binary](Documentation/deployment.md#binary) / [systemd](Documentation/deployment.md#systemd)
* [Troubleshooting](Documentation/troubleshooting.md)
* Going Further
* [gRPC API Usage](config.md#grpc-api)
* [Metadata](api.md#metadata)
* OpenPGP [Signing](api.md#openpgp-signatures)
### Tutorials
### Examples
* [Getting Started](Documentation/getting-started.md) - provision physical machines with Container Linux
* Local QEMU/KVM
* [matchbox with Docker](Documentation/getting-started-docker.md)
* [matchbox with rkt](Documentation/getting-started-rkt.md)
* Clusters
* [etcd3](Documentation/getting-started-rkt.md) - Install a 3-node etcd3 cluster
* [Kubernetes](Documentation/bootkube.md) - Install a 3-node Kubernetes v1.8.5 cluster
* Clusters (Terraform-based)
* [etcd3](examples/terraform/etcd3-install/README.md) - Install a 3-node etcd3 cluster
* [Kubernetes](examples/terraform/bootkube-install/README.md) - Install a 3-node Kubernetes v1.10.3 cluster
The [examples](examples) network boot and provision CoreOS clusters. Network boot [libvirt](scripts/README.md#libvirt) VMs to try the examples on your Linux laptop.
### Projects
* Multi-node [Kubernetes cluster](Documentation/kubernetes.md) with TLS
* Multi-node [self-hosted Kubernetes cluster](Documentation/bootkube.md)
* Multi-node etcd cluster
* Multi-node [Torus](Documentation/torus.md) distributed storage cluster
* Network boot or Install to Disk
* Multi-stage CoreOS installs
* [GRUB Netboot](Documentation/grub.md) CoreOS
* iPXE Boot CoreOS with a root fs
* iPXE Boot CoreOS
* Lab [examples](https://github.com/dghubble/metal)
* [Tectonic](https://coreos.com/tectonic/docs/latest/index.html) - enterprise-ready Kubernetes
* [Typhoon](https://typhoon.psdn.io/) - minimal and free Kubernetes
## Contrib
* [dnsmasq](contrib/dnsmasq/README.md) - Run DHCP, TFTP, and DNS services with docker or rkt
* [squid](contrib/squid/README.md) - Run a transparent cache proxy
* [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) - Terraform provider plugin for Matchbox
[docs]: https://coreos.com/matchbox/docs/latest
[terraform]: https://github.com/coreos/terraform-provider-matchbox
[cl-config]: Documentation/container-linux-config.md
[cloud-config]: Documentation/cloud-config.md


@@ -1,16 +0,0 @@
package client
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestNew_MissingEndpoints(t *testing.T) {
cfg := &Config{
Endpoints: []string{},
}
client, err := New(cfg)
assert.Nil(t, client)
assert.Equal(t, errNoEndpoints, err)
}


@@ -1,47 +0,0 @@
package http
import (
"net/http"
"golang.org/x/net/context"
)
// ContextHandler defines a handler which receives a passed context.Context
// with the standard ResponseWriter and Request.
type ContextHandler interface {
ServeHTTP(context.Context, http.ResponseWriter, *http.Request)
}
// ContextHandlerFunc type is an adapter to allow the use of an ordinary
// function as a ContextHandler. If f is a function with the correct
// signature, ContextHandlerFunc(f) is a ContextHandler that calls f.
type ContextHandlerFunc func(context.Context, http.ResponseWriter, *http.Request)
// ServeHTTP calls the function f(ctx, w, req).
func (f ContextHandlerFunc) ServeHTTP(ctx context.Context, w http.ResponseWriter, req *http.Request) {
f(ctx, w, req)
}
// handler wraps a ContextHandler to implement the http.Handler interface for
// compatibility with ServeMux and middlewares.
//
// Middlewares which do not pass a ctx break the chain, so place them before
// or after chains of ContextHandlers.
type handler struct {
ctx context.Context
handler ContextHandler
}
// NewHandler returns an http.Handler which wraps the given ContextHandler
// and creates a background context.Context.
func NewHandler(h ContextHandler) http.Handler {
return &handler{
ctx: context.Background(),
handler: h,
}
}
// ServeHTTP lets handler implement the http.Handler interface.
func (h *handler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
h.handler.ServeHTTP(h.ctx, w, req)
}


@@ -1,22 +0,0 @@
package http
import (
"fmt"
"net/http"
"net/http/httptest"
"testing"
"github.com/stretchr/testify/assert"
"golang.org/x/net/context"
)
func TestNewHandler(t *testing.T) {
fn := func(ctx context.Context, w http.ResponseWriter, req *http.Request) {
fmt.Fprintf(w, "ContextHandler called")
}
h := NewHandler(ContextHandlerFunc(fn))
w := httptest.NewRecorder()
req, _ := http.NewRequest("GET", "/", nil)
h.ServeHTTP(w, req)
assert.Equal(t, "ContextHandler called", w.Body.String())
}


@@ -1,2 +0,0 @@
// Package http provides the bootcfg HTTP server
package http


@@ -1,57 +0,0 @@
package http
import (
"net/http"
"path/filepath"
"github.com/Sirupsen/logrus"
"golang.org/x/net/context"
"github.com/coreos/coreos-baremetal/bootcfg/server"
pb "github.com/coreos/coreos-baremetal/bootcfg/server/serverpb"
)
// pixiecoreHandler returns a handler that renders the boot config JSON for
// the requester, to implement the Pixiecore API specification.
// https://github.com/danderson/pixiecore/blob/master/README.api.md
func (s *Server) pixiecoreHandler(core server.Server) ContextHandler {
fn := func(ctx context.Context, w http.ResponseWriter, req *http.Request) {
// pixiecore only provides a MAC address label
macAddr, err := parseMAC(filepath.Base(req.URL.Path))
if err != nil {
s.logger.Errorf("unparseable MAC address: %v", err)
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
attrs := map[string]string{"mac": macAddr.String()}
group, err := core.SelectGroup(ctx, &pb.SelectGroupRequest{Labels: attrs})
if err != nil {
s.logger.WithFields(logrus.Fields{
"label": macAddr,
}).Infof("No matching group")
http.NotFound(w, req)
return
}
profile, err := core.ProfileGet(ctx, &pb.ProfileGetRequest{Id: group.Profile})
if err != nil {
s.logger.WithFields(logrus.Fields{
"label": macAddr,
"group": group.Id,
}).Infof("No profile named: %s", group.Profile)
http.NotFound(w, req)
return
}
// match was successful
s.logger.WithFields(logrus.Fields{
"label": macAddr,
"group": group.Id,
"profile": profile.Id,
}).Debug("Matched a Pixiecore config")
s.renderJSON(w, profile.Boot)
}
return ContextHandlerFunc(fn)
}


@@ -1,73 +0,0 @@
package http
import (
"net/http"
"net/http/httptest"
"testing"
logtest "github.com/Sirupsen/logrus/hooks/test"
"github.com/stretchr/testify/assert"
"golang.org/x/net/context"
"github.com/coreos/coreos-baremetal/bootcfg/server"
"github.com/coreos/coreos-baremetal/bootcfg/storage/storagepb"
fake "github.com/coreos/coreos-baremetal/bootcfg/storage/testfakes"
)
func TestPixiecoreHandler(t *testing.T) {
store := &fake.FixedStore{
Groups: map[string]*storagepb.Group{testGroupWithMAC.Id: testGroupWithMAC},
Profiles: map[string]*storagepb.Profile{testGroupWithMAC.Profile: fake.Profile},
}
logger, _ := logtest.NewNullLogger()
srv := NewServer(&Config{Logger: logger})
c := server.NewServer(&server.Config{Store: store})
h := srv.pixiecoreHandler(c)
w := httptest.NewRecorder()
req, _ := http.NewRequest("GET", "/"+validMACStr, nil)
h.ServeHTTP(context.Background(), w, req)
// assert that:
// - MAC address parameter is used for Group matching
// - the Profile's NetBoot config is rendered as Pixiecore JSON
expectedJSON := `{"kernel":"/image/kernel","initrd":["/image/initrd_a","/image/initrd_b"],"cmdline":{"a":"b","c":""}}`
assert.Equal(t, http.StatusOK, w.Code)
assert.Equal(t, jsonContentType, w.HeaderMap.Get(contentType))
assert.Equal(t, expectedJSON, w.Body.String())
}
func TestPixiecoreHandler_InvalidMACAddress(t *testing.T) {
logger, _ := logtest.NewNullLogger()
srv := NewServer(&Config{Logger: logger})
c := server.NewServer(&server.Config{Store: &fake.EmptyStore{}})
h := srv.pixiecoreHandler(c)
w := httptest.NewRecorder()
req, _ := http.NewRequest("GET", "/", nil)
h.ServeHTTP(context.Background(), w, req)
assert.Equal(t, http.StatusBadRequest, w.Code)
assert.Equal(t, "invalid MAC address /\n", w.Body.String())
}
func TestPixiecoreHandler_NoMatchingGroup(t *testing.T) {
logger, _ := logtest.NewNullLogger()
srv := NewServer(&Config{Logger: logger})
c := server.NewServer(&server.Config{Store: &fake.EmptyStore{}})
h := srv.pixiecoreHandler(c)
w := httptest.NewRecorder()
req, _ := http.NewRequest("GET", "/"+validMACStr, nil)
h.ServeHTTP(context.Background(), w, req)
assert.Equal(t, http.StatusNotFound, w.Code)
}
func TestPixiecoreHandler_NoMatchingProfile(t *testing.T) {
store := &fake.FixedStore{
Groups: map[string]*storagepb.Group{fake.Group.Id: fake.Group},
}
logger, _ := logtest.NewNullLogger()
srv := NewServer(&Config{Logger: logger})
c := server.NewServer(&server.Config{Store: store})
h := srv.pixiecoreHandler(c)
w := httptest.NewRecorder()
req, _ := http.NewRequest("GET", "/"+validMACStr, nil)
h.ServeHTTP(context.Background(), w, req)
assert.Equal(t, http.StatusNotFound, w.Code)
}


@@ -1,2 +0,0 @@
// Package rpc provides the bootcfg gRPC server
package rpc


@@ -1,25 +0,0 @@
package rpc
import (
"golang.org/x/net/context"
"github.com/coreos/coreos-baremetal/bootcfg/rpc/rpcpb"
"github.com/coreos/coreos-baremetal/bootcfg/server"
pb "github.com/coreos/coreos-baremetal/bootcfg/server/serverpb"
)
// ignitionServer takes a bootcfg Server and implements a gRPC IgnitionServer.
type ignitionServer struct {
srv server.Server
}
func newIgnitionServer(s server.Server) rpcpb.IgnitionServer {
return &ignitionServer{
srv: s,
}
}
func (s *ignitionServer) IgnitionPut(ctx context.Context, req *pb.IgnitionPutRequest) (*pb.IgnitionPutResponse, error) {
_, err := s.srv.IgnitionPut(ctx, req)
return &pb.IgnitionPutResponse{}, grpcError(err)
}


@@ -1,2 +0,0 @@
// Package server is a bootcfg library package for implementing servers.
package server


@@ -1,2 +0,0 @@
// Package serverpb provides bootcfg message types.
package serverpb


@@ -1,326 +0,0 @@
// Code generated by protoc-gen-go.
// source: messages.proto
// DO NOT EDIT!
/*
Package serverpb is a generated protocol buffer package.
It is generated from these files:
messages.proto
It has these top-level messages:
SelectGroupRequest
SelectGroupResponse
SelectProfileRequest
SelectProfileResponse
GroupPutRequest
GroupPutResponse
GroupGetRequest
GroupListRequest
GroupGetResponse
GroupListResponse
ProfilePutRequest
ProfilePutResponse
ProfileGetRequest
ProfileGetResponse
ProfileListRequest
ProfileListResponse
IgnitionPutRequest
IgnitionPutResponse
*/
package serverpb
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import storagepb "github.com/coreos/coreos-baremetal/bootcfg/storage/storagepb"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
const _ = proto.ProtoPackageIsVersion1
type SelectGroupRequest struct {
Labels map[string]string `protobuf:"bytes,1,rep,name=labels" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
}
func (m *SelectGroupRequest) Reset() { *m = SelectGroupRequest{} }
func (m *SelectGroupRequest) String() string { return proto.CompactTextString(m) }
func (*SelectGroupRequest) ProtoMessage() {}
func (*SelectGroupRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *SelectGroupRequest) GetLabels() map[string]string {
if m != nil {
return m.Labels
}
return nil
}
type SelectGroupResponse struct {
Group *storagepb.Group `protobuf:"bytes,1,opt,name=group" json:"group,omitempty"`
}
func (m *SelectGroupResponse) Reset() { *m = SelectGroupResponse{} }
func (m *SelectGroupResponse) String() string { return proto.CompactTextString(m) }
func (*SelectGroupResponse) ProtoMessage() {}
func (*SelectGroupResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *SelectGroupResponse) GetGroup() *storagepb.Group {
if m != nil {
return m.Group
}
return nil
}
type SelectProfileRequest struct {
Labels map[string]string `protobuf:"bytes,1,rep,name=labels" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
}
func (m *SelectProfileRequest) Reset() { *m = SelectProfileRequest{} }
func (m *SelectProfileRequest) String() string { return proto.CompactTextString(m) }
func (*SelectProfileRequest) ProtoMessage() {}
func (*SelectProfileRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (m *SelectProfileRequest) GetLabels() map[string]string {
if m != nil {
return m.Labels
}
return nil
}
type SelectProfileResponse struct {
Profile *storagepb.Profile `protobuf:"bytes,1,opt,name=profile" json:"profile,omitempty"`
}
func (m *SelectProfileResponse) Reset() { *m = SelectProfileResponse{} }
func (m *SelectProfileResponse) String() string { return proto.CompactTextString(m) }
func (*SelectProfileResponse) ProtoMessage() {}
func (*SelectProfileResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
func (m *SelectProfileResponse) GetProfile() *storagepb.Profile {
if m != nil {
return m.Profile
}
return nil
}
type GroupPutRequest struct {
Group *storagepb.Group `protobuf:"bytes,1,opt,name=group" json:"group,omitempty"`
}
func (m *GroupPutRequest) Reset() { *m = GroupPutRequest{} }
func (m *GroupPutRequest) String() string { return proto.CompactTextString(m) }
func (*GroupPutRequest) ProtoMessage() {}
func (*GroupPutRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
func (m *GroupPutRequest) GetGroup() *storagepb.Group {
if m != nil {
return m.Group
}
return nil
}
type GroupPutResponse struct {
}
func (m *GroupPutResponse) Reset() { *m = GroupPutResponse{} }
func (m *GroupPutResponse) String() string { return proto.CompactTextString(m) }
func (*GroupPutResponse) ProtoMessage() {}
func (*GroupPutResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
type GroupGetRequest struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
}
func (m *GroupGetRequest) Reset() { *m = GroupGetRequest{} }
func (m *GroupGetRequest) String() string { return proto.CompactTextString(m) }
func (*GroupGetRequest) ProtoMessage() {}
func (*GroupGetRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
type GroupListRequest struct {
}
func (m *GroupListRequest) Reset() { *m = GroupListRequest{} }
func (m *GroupListRequest) String() string { return proto.CompactTextString(m) }
func (*GroupListRequest) ProtoMessage() {}
func (*GroupListRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
type GroupGetResponse struct {
Group *storagepb.Group `protobuf:"bytes,1,opt,name=group" json:"group,omitempty"`
}
func (m *GroupGetResponse) Reset() { *m = GroupGetResponse{} }
func (m *GroupGetResponse) String() string { return proto.CompactTextString(m) }
func (*GroupGetResponse) ProtoMessage() {}
func (*GroupGetResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} }
func (m *GroupGetResponse) GetGroup() *storagepb.Group {
if m != nil {
return m.Group
}
return nil
}
type GroupListResponse struct {
Groups []*storagepb.Group `protobuf:"bytes,1,rep,name=groups" json:"groups,omitempty"`
}
func (m *GroupListResponse) Reset() { *m = GroupListResponse{} }
func (m *GroupListResponse) String() string { return proto.CompactTextString(m) }
func (*GroupListResponse) ProtoMessage() {}
func (*GroupListResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9} }
func (m *GroupListResponse) GetGroups() []*storagepb.Group {
if m != nil {
return m.Groups
}
return nil
}
type ProfilePutRequest struct {
Profile *storagepb.Profile `protobuf:"bytes,1,opt,name=profile" json:"profile,omitempty"`
}
func (m *ProfilePutRequest) Reset() { *m = ProfilePutRequest{} }
func (m *ProfilePutRequest) String() string { return proto.CompactTextString(m) }
func (*ProfilePutRequest) ProtoMessage() {}
func (*ProfilePutRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{10} }
func (m *ProfilePutRequest) GetProfile() *storagepb.Profile {
if m != nil {
return m.Profile
}
return nil
}
type ProfilePutResponse struct {
}
func (m *ProfilePutResponse) Reset() { *m = ProfilePutResponse{} }
func (m *ProfilePutResponse) String() string { return proto.CompactTextString(m) }
func (*ProfilePutResponse) ProtoMessage() {}
func (*ProfilePutResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{11} }
type ProfileGetRequest struct {
Id string `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
}
func (m *ProfileGetRequest) Reset() { *m = ProfileGetRequest{} }
func (m *ProfileGetRequest) String() string { return proto.CompactTextString(m) }
func (*ProfileGetRequest) ProtoMessage() {}
func (*ProfileGetRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{12} }
type ProfileGetResponse struct {
Profile *storagepb.Profile `protobuf:"bytes,1,opt,name=profile" json:"profile,omitempty"`
}
func (m *ProfileGetResponse) Reset() { *m = ProfileGetResponse{} }
func (m *ProfileGetResponse) String() string { return proto.CompactTextString(m) }
func (*ProfileGetResponse) ProtoMessage() {}
func (*ProfileGetResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{13} }
func (m *ProfileGetResponse) GetProfile() *storagepb.Profile {
if m != nil {
return m.Profile
}
return nil
}
type ProfileListRequest struct {
}
func (m *ProfileListRequest) Reset() { *m = ProfileListRequest{} }
func (m *ProfileListRequest) String() string { return proto.CompactTextString(m) }
func (*ProfileListRequest) ProtoMessage() {}
func (*ProfileListRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{14} }
type ProfileListResponse struct {
Profiles []*storagepb.Profile `protobuf:"bytes,1,rep,name=profiles" json:"profiles,omitempty"`
}
func (m *ProfileListResponse) Reset() { *m = ProfileListResponse{} }
func (m *ProfileListResponse) String() string { return proto.CompactTextString(m) }
func (*ProfileListResponse) ProtoMessage() {}
func (*ProfileListResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{15} }
func (m *ProfileListResponse) GetProfiles() []*storagepb.Profile {
if m != nil {
return m.Profiles
}
return nil
}
type IgnitionPutRequest struct {
Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
Config []byte `protobuf:"bytes,2,opt,name=config,proto3" json:"config,omitempty"`
}
func (m *IgnitionPutRequest) Reset() { *m = IgnitionPutRequest{} }
func (m *IgnitionPutRequest) String() string { return proto.CompactTextString(m) }
func (*IgnitionPutRequest) ProtoMessage() {}
func (*IgnitionPutRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{16} }
type IgnitionPutResponse struct {
}
func (m *IgnitionPutResponse) Reset() { *m = IgnitionPutResponse{} }
func (m *IgnitionPutResponse) String() string { return proto.CompactTextString(m) }
func (*IgnitionPutResponse) ProtoMessage() {}
func (*IgnitionPutResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{17} }
func init() {
proto.RegisterType((*SelectGroupRequest)(nil), "serverpb.SelectGroupRequest")
proto.RegisterType((*SelectGroupResponse)(nil), "serverpb.SelectGroupResponse")
proto.RegisterType((*SelectProfileRequest)(nil), "serverpb.SelectProfileRequest")
proto.RegisterType((*SelectProfileResponse)(nil), "serverpb.SelectProfileResponse")
proto.RegisterType((*GroupPutRequest)(nil), "serverpb.GroupPutRequest")
proto.RegisterType((*GroupPutResponse)(nil), "serverpb.GroupPutResponse")
proto.RegisterType((*GroupGetRequest)(nil), "serverpb.GroupGetRequest")
proto.RegisterType((*GroupListRequest)(nil), "serverpb.GroupListRequest")
proto.RegisterType((*GroupGetResponse)(nil), "serverpb.GroupGetResponse")
proto.RegisterType((*GroupListResponse)(nil), "serverpb.GroupListResponse")
proto.RegisterType((*ProfilePutRequest)(nil), "serverpb.ProfilePutRequest")
proto.RegisterType((*ProfilePutResponse)(nil), "serverpb.ProfilePutResponse")
proto.RegisterType((*ProfileGetRequest)(nil), "serverpb.ProfileGetRequest")
proto.RegisterType((*ProfileGetResponse)(nil), "serverpb.ProfileGetResponse")
proto.RegisterType((*ProfileListRequest)(nil), "serverpb.ProfileListRequest")
proto.RegisterType((*ProfileListResponse)(nil), "serverpb.ProfileListResponse")
proto.RegisterType((*IgnitionPutRequest)(nil), "serverpb.IgnitionPutRequest")
proto.RegisterType((*IgnitionPutResponse)(nil), "serverpb.IgnitionPutResponse")
}
var fileDescriptor0 = []byte{
// 441 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xac, 0x54, 0x5d, 0x8b, 0xd3, 0x40,
0x14, 0x25, 0x5d, 0x37, 0xae, 0xb7, 0xb2, 0x76, 0xa7, 0x5d, 0x59, 0xf6, 0x49, 0x47, 0x90, 0x20,
0x3a, 0x85, 0xf5, 0xc5, 0x5d, 0x58, 0x58, 0x17, 0xca, 0xa2, 0xec, 0x43, 0x89, 0xbf, 0x20, 0x89,
0xb7, 0x31, 0x98, 0x64, 0xe2, 0xcc, 0xa4, 0xd0, 0x9f, 0xe1, 0x83, 0xff, 0xd7, 0x76, 0x3e, 0xe2,
0xa4, 0x15, 0xb1, 0xe2, 0xd3, 0xdc, 0xb9, 0xf7, 0x9c, 0x73, 0x7b, 0xce, 0x94, 0xc0, 0x71, 0x85,
0x52, 0x26, 0x39, 0x4a, 0xd6, 0x08, 0xae, 0x38, 0x39, 0x92, 0x28, 0x96, 0x28, 0x9a, 0xf4, 0xfc,
0x63, 0x5e, 0xa8, 0x2f, 0x6d, 0xca, 0x32, 0x5e, 0x4d, 0x33, 0x2e, 0x90, 0x4b, 0x7b, 0xbc, 0x49,
0x13, 0x81, 0x15, 0xaa, 0xa4, 0x9c, 0xa6, 0x9c, 0xab, 0x6c, 0x91, 0x4f, 0xa5, 0xe2, 0x62, 0x2d,
0xe2, 0xce, 0x26, 0x75, 0x95, 0x51, 0xa5, 0xdf, 0x03, 0x20, 0x9f, 0xb0, 0xc4, 0x4c, 0xdd, 0x09,
0xde, 0x36, 0x31, 0x7e, 0x6b, 0x51, 0x2a, 0x72, 0x03, 0x61, 0x99, 0xa4, 0x58, 0xca, 0xb3, 0xe0,
0xd9, 0x41, 0x34, 0xbc, 0x88, 0x98, 0xdb, 0xce, 0x76, 0xd1, 0xec, 0x5e, 0x43, 0x67, 0xb5, 0x12,
0xab, 0xd8, 0xf2, 0xce, 0x2f, 0x61, 0xe8, 0xb5, 0xc9, 0x08, 0x0e, 0xbe, 0xe2, 0x6a, 0xad, 0x16,
0x44, 0x8f, 0xe2, 0x4d, 0x49, 0x26, 0x70, 0xb8, 0x4c, 0xca, 0x16, 0xcf, 0x06, 0xba, 0x67, 0x2e,
0x57, 0x83, 0x77, 0x01, 0xbd, 0x86, 0x71, 0x6f, 0x89, 0x6c, 0x78, 0x2d, 0x91, 0xbc, 0x84, 0xc3,
0x7c, 0xd3, 0xd0, 0x22, 0xc3, 0x8b, 0x11, 0xeb, 0x3c, 0x31, 0x03, 0x34, 0x63, 0xfa, 0x23, 0x80,
0x89, 0xe1, 0xcf, 0x05, 0x5f, 0x14, 0x25, 0x3a, 0x53, 0xb7, 0x5b, 0xa6, 0x5e, 0x6d, 0x9b, 0xea,
0xe3, 0xff, 0xb7, 0xad, 0x19, 0x9c, 0x6e, 0xad, 0xb1, 0xc6, 0x5e, 0xc3, 0xc3, 0xc6, 0xb4, 0xac,
0x35, 0xe2, 0x59, 0x73, 0x60, 0x07, 0xa1, 0x97, 0xf0, 0x44, 0xdb, 0x9d, 0xb7, 0xca, 0x19, 0xfb,
0xdb, 0x64, 0x08, 0x8c, 0x7e, 0x51, 0xcd, 0x72, 0xfa, 0xdc, 0xca, 0xdd, 0x61, 0x27, 0x77, 0x0c,
0x83, 0xe2, 0xb3, 0xf5, 0xb4, 0xae, 0x3a, 0xda, 0x7d, 0x21, 0x1d, 0x86, 0x5e, 0xd9, 0x9e, 0xa6,
0xed, 0xf9, 0x40, 0xd7, 0x70, 0xe2, 0xe9, 0x59, 0x72, 0x04, 0xa1, 0x9e, 0xba, 0xc7, 0xd9, 0x65,
0xdb, 0x39, 0x7d, 0x0f, 0x27, 0x36, 0x14, 0x2f, 0x82, 0xfd, 0x32, 0x9c, 0x00, 0xf1, 0x25, 0x6c,
0x14, 0x2f, 0x3a, 0xe1, 0x3f, 0x84, 0x71, 0xdb, 0x51, 0x7d, 0xeb, 0xff, 0xba, 0xde, 0x8f, 0x74,
0x06, 0xe3, 0x5e, 0xd7, 0x4a, 0x33, 0x38, 0xb2, 0x3c, 0x17, 0xcd, 0xef, 0xb4, 0x3b, 0x0c, 0xbd,
0x01, 0xf2, 0x21, 0xaf, 0x0b, 0x55, 0xf0, 0xda, 0xcb, 0x87, 0xc0, 0x83, 0x3a, 0xa9, 0xd0, 0x1a,
0xd1, 0x35, 0x79, 0x0a, 0x61, 0xc6, 0xeb, 0x45, 0x91, 0xeb, 0xff, 0xea, 0xe3, 0xd8, 0xde, 0xe8,
0x29, 0x8c, 0x7b, 0x0a, 0xe6, 0x87, 0xa4, 0xa1, 0xfe, 0x62, 0xbc, 0xfd, 0x19, 0x00, 0x00, 0xff,
0xff, 0x19, 0xef, 0xe3, 0x5b, 0x99, 0x04, 0x00, 0x00,
}


@@ -1,2 +0,0 @@
// Package sign adds signatures to bootcfg responses.
package sign


@@ -1,2 +0,0 @@
// Package storage defines bootcfg's storage and object types.
package storage


@@ -1,72 +0,0 @@
package storagepb
import (
"testing"
"github.com/stretchr/testify/assert"
)
var (
testProfile = &Profile{
Id: "id",
CloudId: "cloud.yaml",
IgnitionId: "ignition.json",
}
)
func TestProfileParse(t *testing.T) {
cases := []struct {
json string
profile *Profile
}{
{`{"id": "id", "cloud_id": "cloud.yaml", "ignition_id": "ignition.json"}`, testProfile},
}
for _, c := range cases {
profile, _ := ParseProfile([]byte(c.json))
assert.Equal(t, c.profile, profile)
}
}
func TestProfileValidate(t *testing.T) {
cases := []struct {
profile *Profile
valid bool
}{
{testProfile, true},
{&Profile{Id: "a1b2c3d4"}, true},
{&Profile{}, false},
}
for _, c := range cases {
valid := c.profile.AssertValid() == nil
assert.Equal(t, c.valid, valid)
}
}
func TestProfileCopy(t *testing.T) {
profile := &Profile{
Id: "id",
CloudId: "cloudy.tmpl",
IgnitionId: "ignition.tmpl",
Boot: &NetBoot{
Kernel: "/image/kernel",
Initrd: []string{"/image/initrd_a"},
Cmdline: map[string]string{"a": "b"},
},
}
copy := profile.Copy()
// assert that:
// - Profile fields are copied
// - mutation of the copy does not affect the original
assert.Equal(t, profile.Id, copy.Id)
assert.Equal(t, profile.Name, copy.Name)
assert.Equal(t, profile.IgnitionId, copy.IgnitionId)
assert.Equal(t, profile.CloudId, copy.CloudId)
assert.Equal(t, profile.Boot, copy.Boot)
copy.Id = "a-copy"
copy.Boot.Initrd = []string{"/image/initrd_b"}
copy.Boot.Cmdline["c"] = "d"
assert.NotEqual(t, profile.Id, copy.Id)
assert.NotEqual(t, profile.Boot.Initrd, copy.Boot.Initrd)
assert.NotEqual(t, profile.Boot.Cmdline, copy.Boot.Cmdline)
}

build

@@ -1,7 +0,0 @@
#!/bin/bash -e
LD_FLAGS="-w -X github.com/coreos/coreos-baremetal/bootcfg/version.Version=$(./git-version)"
CGO_ENABLED=0 go build -o bin/bootcfg -ldflags "$LD_FLAGS" -a github.com/coreos/coreos-baremetal/cmd/bootcfg
# bootcmd CLI binary
CGO_ENABLED=0 go build -o bin/bootcmd -ldflags "$LD_FLAGS" -a github.com/coreos/coreos-baremetal/cmd/bootcmd


@@ -1,31 +0,0 @@
#!/usr/bin/env bash
set -e
GIT_SHA=$(./git-version)
# Start with an empty ACI
acbuild --debug begin
# In the event of the script exiting, end the build
trap "{ export EXT=$?; acbuild --debug end && exit $EXT; }" EXIT
# Name the ACI
acbuild --debug set-name coreos.com/bootcfg
# Add a version label
acbuild --debug label add version $GIT_SHA
# Add alpine base dependency
acbuild --debug dep add quay.io/coreos/alpine-sh
# Copy the static binary
acbuild --debug copy bin/bootcfg /bootcfg
# Add a port for HTTP traffic
acbuild --debug port add www tcp 8080
# Set the exec command
acbuild --debug set-exec -- /bootcfg
# Save and overwrite any older bootcfg ACI
acbuild --debug write --overwrite bootcfg.aci


@@ -1,7 +0,0 @@
#!/bin/bash -e
REPO=coreos/bootcfg
GIT_SHA=$(./git-version)
docker build -q --rm=true -t $REPO:$GIT_SHA .
docker tag $REPO:$GIT_SHA $REPO:latest


@@ -1,171 +0,0 @@
package main
import (
"flag"
"fmt"
"net"
"net/http"
"net/url"
"os"
"github.com/Sirupsen/logrus"
"github.com/coreos/pkg/flagutil"
web "github.com/coreos/coreos-baremetal/bootcfg/http"
"github.com/coreos/coreos-baremetal/bootcfg/rpc"
"github.com/coreos/coreos-baremetal/bootcfg/server"
"github.com/coreos/coreos-baremetal/bootcfg/sign"
"github.com/coreos/coreos-baremetal/bootcfg/storage"
"github.com/coreos/coreos-baremetal/bootcfg/tlsutil"
"github.com/coreos/coreos-baremetal/bootcfg/version"
)
var (
// Defaults to info logging
log = logrus.New()
)
func main() {
flags := struct {
address string
rpcAddress string
dataPath string
assetsPath string
logLevel string
certFile string
keyFile string
caFile string
keyRingPath string
version bool
help bool
}{}
flag.StringVar(&flags.address, "address", "127.0.0.1:8080", "HTTP listen address")
flag.StringVar(&flags.rpcAddress, "rpc-address", "", "RPC listen address")
flag.StringVar(&flags.dataPath, "data-path", "/var/lib/bootcfg", "Path to data directory")
flag.StringVar(&flags.assetsPath, "assets-path", "/var/lib/bootcfg/assets", "Path to static assets")
// Log levels https://github.com/Sirupsen/logrus/blob/master/logrus.go#L36
flag.StringVar(&flags.logLevel, "log-level", "info", "Set the logging level")
// gRPC Server TLS
flag.StringVar(&flags.certFile, "cert-file", "/etc/bootcfg/server.crt", "Path to the server TLS certificate file")
flag.StringVar(&flags.keyFile, "key-file", "/etc/bootcfg/server.key", "Path to the server TLS key file")
// TLS Client Authentication
flag.StringVar(&flags.caFile, "ca-file", "/etc/bootcfg/ca.crt", "Path to the CA certificate used to verify and authenticate client certificates")
// Signing
flag.StringVar(&flags.keyRingPath, "key-ring-path", "", "Path to a private keyring file")
// subcommands
flag.BoolVar(&flags.version, "version", false, "print version and exit")
flag.BoolVar(&flags.help, "help", false, "print usage and exit")
// parse command-line and environment variable arguments
flag.Parse()
if err := flagutil.SetFlagsFromEnv(flag.CommandLine, "BOOTCFG"); err != nil {
log.Fatal(err.Error())
}
// restrict OpenPGP passphrase to pass via environment variable only
passphrase := os.Getenv("BOOTCFG_PASSPHRASE")
if flags.version {
fmt.Println(version.Version)
return
}
if flags.help {
flag.Usage()
return
}
// validate arguments
if url, err := url.Parse(flags.address); err != nil || url.String() == "" {
log.Fatal("A valid HTTP listen address is required")
}
if finfo, err := os.Stat(flags.dataPath); err != nil || !finfo.IsDir() {
log.Fatal("A valid -data-path is required")
}
if flags.assetsPath != "" {
if finfo, err := os.Stat(flags.assetsPath); err != nil || !finfo.IsDir() {
log.Fatalf("Provide a valid -assets-path or '' to disable asset serving: %s", flags.assetsPath)
}
}
if flags.rpcAddress != "" {
if _, err := os.Stat(flags.certFile); err != nil {
log.Fatalf("Provide a valid TLS server certificate with -cert-file: %v", err)
}
if _, err := os.Stat(flags.keyFile); err != nil {
log.Fatalf("Provide a valid TLS server key with -key-file: %v", err)
}
if _, err := os.Stat(flags.caFile); err != nil {
log.Fatalf("Provide a valid TLS certificate authority for authorizing client certificates: %v", err)
}
}
// logging setup
lvl, err := logrus.ParseLevel(flags.logLevel)
if err != nil {
log.Fatalf("invalid log-level: %v", err)
}
log.Level = lvl
// (optional) signing
var signer, armoredSigner sign.Signer
if flags.keyRingPath != "" {
entity, err := sign.LoadGPGEntity(flags.keyRingPath, passphrase)
if err != nil {
log.Fatal(err)
}
signer = sign.NewGPGSigner(entity)
armoredSigner = sign.NewArmoredGPGSigner(entity)
}
// storage
store := storage.NewFileStore(&storage.Config{
Root: flags.dataPath,
})
// core logic
server := server.NewServer(&server.Config{
Store: store,
})
// gRPC Server (feature disabled by default)
if flags.rpcAddress != "" {
log.Infof("Starting bootcfg gRPC server on %s", flags.rpcAddress)
log.Infof("Using TLS server certificate: %s", flags.certFile)
log.Infof("Using TLS server key: %s", flags.keyFile)
log.Infof("Using CA certificate: %s to authenticate client certificates", flags.caFile)
lis, err := net.Listen("tcp", flags.rpcAddress)
if err != nil {
log.Fatalf("failed to start listening: %v", err)
}
tlsinfo := tlsutil.TLSInfo{
CertFile: flags.certFile,
KeyFile: flags.keyFile,
CAFile: flags.caFile,
}
tlscfg, err := tlsinfo.ServerConfig()
if err != nil {
log.Fatalf("Invalid TLS credentials: %v", err)
}
grpcServer := rpc.NewServer(server, tlscfg)
go grpcServer.Serve(lis)
defer grpcServer.Stop()
}
// HTTP Server
config := &web.Config{
Core: server,
Logger: log,
AssetsPath: flags.assetsPath,
Signer: signer,
ArmoredSigner: armoredSigner,
}
httpServer := web.NewServer(config)
log.Infof("Starting bootcfg HTTP server on %s", flags.address)
err = http.ListenAndServe(flags.address, httpServer.HTTPHandler())
if err != nil {
log.Fatalf("failed to start listening: %v", err)
}
}


@@ -1,6 +1,6 @@
package main
import "github.com/coreos/coreos-baremetal/bootcfg/cli"
import "github.com/coreos/matchbox/matchbox/cli"
func main() {
cli.Execute()

cmd/matchbox/main.go Normal file

@@ -0,0 +1,197 @@
package main
import (
"flag"
"fmt"
"net"
"net/http"
"os"
"github.com/Sirupsen/logrus"
web "github.com/coreos/matchbox/matchbox/http"
"github.com/coreos/matchbox/matchbox/rpc"
"github.com/coreos/matchbox/matchbox/server"
"github.com/coreos/matchbox/matchbox/sign"
"github.com/coreos/matchbox/matchbox/storage"
"github.com/coreos/matchbox/matchbox/tlsutil"
"github.com/coreos/matchbox/matchbox/version"
"github.com/coreos/pkg/flagutil"
)
var (
// Defaults to info logging
log = logrus.New()
)
func main() {
flags := struct {
address string
rpcAddress string
dataPath string
assetsPath string
logLevel string
grpcCAFile string
grpcCertFile string
grpcKeyFile string
tlsCertFile string
tlsKeyFile string
tlsEnabled bool
keyRingPath string
version bool
help bool
}{}
flag.StringVar(&flags.address, "address", "127.0.0.1:8080", "HTTP listen address")
flag.StringVar(&flags.rpcAddress, "rpc-address", "", "RPC listen address")
flag.StringVar(&flags.dataPath, "data-path", "/var/lib/matchbox", "Path to data directory")
flag.StringVar(&flags.assetsPath, "assets-path", "/var/lib/matchbox/assets", "Path to static assets")
// Log levels https://github.com/Sirupsen/logrus/blob/master/logrus.go#L36
flag.StringVar(&flags.logLevel, "log-level", "info", "Set the logging level")
// gRPC Server TLS
flag.StringVar(&flags.grpcCertFile, "cert-file", "/etc/matchbox/server.crt", "Path to the server TLS certificate file")
flag.StringVar(&flags.grpcKeyFile, "key-file", "/etc/matchbox/server.key", "Path to the server TLS key file")
// gRPC TLS Client Authentication
flag.StringVar(&flags.grpcCAFile, "ca-file", "/etc/matchbox/ca.crt", "Path to the CA certificate used to verify and authenticate client certificates")
// Signing
flag.StringVar(&flags.keyRingPath, "key-ring-path", "", "Path to a private keyring file")
// SSL flags
flag.StringVar(&flags.tlsCertFile, "web-cert-file", "/etc/matchbox/ssl/server.crt", "Path to the server TLS certificate file")
flag.StringVar(&flags.tlsKeyFile, "web-key-file", "/etc/matchbox/ssl/server.key", "Path to the server TLS key file")
flag.BoolVar(&flags.tlsEnabled, "web-ssl", false, "True to enable HTTPS")
// subcommands
flag.BoolVar(&flags.version, "version", false, "print version and exit")
flag.BoolVar(&flags.help, "help", false, "print usage and exit")
// parse command-line and environment variable arguments
flag.Parse()
if err := flagutil.SetFlagsFromEnv(flag.CommandLine, "MATCHBOX"); err != nil {
log.Fatal(err.Error())
}
// restrict OpenPGP passphrase to pass via environment variable only
passphrase := os.Getenv("MATCHBOX_PASSPHRASE")
if flags.version {
fmt.Println(version.Version)
return
}
if flags.help {
flag.Usage()
return
}
// validate arguments
if finfo, err := os.Stat(flags.dataPath); err != nil || !finfo.IsDir() {
log.Fatal("A valid -data-path is required")
}
if flags.assetsPath != "" {
if finfo, err := os.Stat(flags.assetsPath); err != nil || !finfo.IsDir() {
log.Fatalf("Provide a valid -assets-path or '' to disable asset serving: %s", flags.assetsPath)
}
}
if flags.rpcAddress != "" {
if _, err := os.Stat(flags.grpcCertFile); err != nil {
log.Fatalf("Provide a valid TLS server certificate with -cert-file: %v", err)
}
if _, err := os.Stat(flags.grpcKeyFile); err != nil {
log.Fatalf("Provide a valid TLS server key with -key-file: %v", err)
}
if _, err := os.Stat(flags.grpcCAFile); err != nil {
log.Fatalf("Provide a valid TLS certificate authority for authorizing client certificates: %v", err)
}
}
if flags.tlsEnabled {
if _, err := os.Stat(flags.tlsCertFile); err != nil {
log.Fatalf("Provide a valid SSL server certificate with -web-cert-file: %v", err)
}
if _, err := os.Stat(flags.tlsKeyFile); err != nil {
log.Fatalf("Provide a valid SSL server key with -web-key-file: %v", err)
}
}
// logging setup
lvl, err := logrus.ParseLevel(flags.logLevel)
if err != nil {
log.Fatalf("invalid log-level: %v", err)
}
log.Level = lvl
// (optional) signing
var signer, armoredSigner sign.Signer
if flags.keyRingPath != "" {
entity, err := sign.LoadGPGEntity(flags.keyRingPath, passphrase)
if err != nil {
log.Fatal(err)
}
signer = sign.NewGPGSigner(entity)
armoredSigner = sign.NewArmoredGPGSigner(entity)
}
// storage
store := storage.NewFileStore(&storage.Config{
Root: flags.dataPath,
Logger: log,
})
// core logic
server := server.NewServer(&server.Config{
Store: store,
})
// gRPC Server (feature disabled by default)
if flags.rpcAddress != "" {
log.Infof("Starting matchbox gRPC server on %s", flags.rpcAddress)
log.Infof("Using TLS server certificate: %s", flags.grpcCertFile)
log.Infof("Using TLS server key: %s", flags.grpcKeyFile)
log.Infof("Using CA certificate: %s to authenticate client certificates", flags.grpcCAFile)
lis, err := net.Listen("tcp", flags.rpcAddress)
if err != nil {
log.Fatalf("failed to start listening: %v", err)
}
tlsinfo := tlsutil.TLSInfo{
CertFile: flags.grpcCertFile,
KeyFile: flags.grpcKeyFile,
CAFile: flags.grpcCAFile,
}
tlscfg, err := tlsinfo.ServerConfig()
if err != nil {
log.Fatalf("Invalid TLS credentials: %v", err)
}
grpcServer := rpc.NewServer(server, tlscfg)
go grpcServer.Serve(lis)
defer grpcServer.Stop()
}
config := &web.Config{
Core: server,
Logger: log,
AssetsPath: flags.assetsPath,
Signer: signer,
ArmoredSigner: armoredSigner,
}
httpServer := web.NewServer(config)
if flags.tlsEnabled {
// HTTPS Server
log.Infof("Starting matchbox HTTPS server on %s", flags.address)
log.Infof("Using SSL server certificate: %s", flags.tlsCertFile)
log.Infof("Using SSL server key: %s", flags.tlsKeyFile)
err = http.ListenAndServeTLS(flags.address, flags.tlsCertFile, flags.tlsKeyFile, httpServer.HTTPHandler())
if err != nil {
log.Fatalf("failed to start listening: %v", err)
}
} else {
// HTTP Server
log.Infof("Starting matchbox HTTP server on %s", flags.address)
err = http.ListenAndServe(flags.address, httpServer.HTTPHandler())
if err != nil {
log.Fatalf("failed to start listening: %v", err)
}
}
}
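
The new `-web-ssl` code path above can be exercised with a self-signed certificate. This is a sketch, assuming a locally built `matchbox` binary and the hypothetical hostname `matchbox.example.com`; paths match the `-web-cert-file`/`-web-key-file` defaults only if you place them under `/etc/matchbox/ssl`:

```shell
# Generate a throwaway self-signed certificate and key (hypothetical CN).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=matchbox.example.com"

# Serve the HTTP API over HTTPS instead of HTTP. Because of
# flagutil.SetFlagsFromEnv above, these flags may equivalently be set via
# MATCHBOX_WEB_SSL, MATCHBOX_WEB_CERT_FILE, and MATCHBOX_WEB_KEY_FILE.
./matchbox -address=0.0.0.0:8080 \
  -web-ssl \
  -web-cert-file=server.crt \
  -web-key-file=server.key
```

If `-web-ssl` is omitted, the server falls back to the plain `http.ListenAndServe` branch shown above.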

code-of-conduct.md Normal file

@@ -0,0 +1,61 @@
## CoreOS Community Code of Conduct
### Contributor Code of Conduct
As contributors and maintainers of this project, and in the interest of
fostering an open and welcoming community, we pledge to respect all people who
contribute through reporting issues, posting feature requests, updating
documentation, submitting pull requests or patches, and other activities.
We are committed to making participation in this project a harassment-free
experience for everyone, regardless of level of experience, gender, gender
identity and expression, sexual orientation, disability, personal appearance,
body size, race, ethnicity, age, religion, or nationality.
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing others' private information, such as physical or electronic addresses, without explicit permission
* Other unethical or unprofessional conduct.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct. By adopting this Code of Conduct,
project maintainers commit themselves to fairly and consistently applying these
principles to every aspect of managing this project. Project maintainers who do
not follow or enforce the Code of Conduct may be permanently removed from the
project team.
This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting a project maintainer, Brandon Philips
<brandon.philips@coreos.com>, and/or Rithu John <rithu.john@coreos.com>.
This Code of Conduct is adapted from the Contributor Covenant
(http://contributor-covenant.org), version 1.2.0, available at
http://contributor-covenant.org/version/1/2/0/
### CoreOS Events Code of Conduct
CoreOS events are working conferences intended for professional networking and
collaboration in the CoreOS community. Attendees are expected to behave
according to professional standards and in accordance with their employers
policies on appropriate workplace behavior.
While at CoreOS events or related social networking opportunities, attendees
should not engage in discriminatory or offensive speech or actions including
but not limited to gender, sexuality, race, age, disability, or religion.
Speakers should be especially aware of these concerns.
CoreOS does not condone any statements by speakers contrary to these standards.
CoreOS reserves the right to deny entrance and/or eject from an event (without
refund) any individual found to be engaging in discriminatory or offensive
speech or actions.
Please bring any concerns to the immediate attention of designated on-site
staff, Brandon Philips <brandon.philips@coreos.com>, and/or Rithu John <rithu.john@coreos.com>.


@@ -0,0 +1,18 @@
# dnsmasq
Notable changes between image releases. The dnsmasq project [upstream](http://www.thekelleys.org.uk/dnsmasq/doc.html) has its own [changelog](http://www.thekelleys.org.uk/dnsmasq/CHANGELOG).
## v0.4.1
* Rebuild with alpine:3.6 base image
* Add EXPOSE ports 67 and 69 to Dockerfile
## v0.4.0
* `dnsmasq` package version 2.76
* Rebuild with alpine:3.5 base image to receive patches
* Update CoreOS `grub.efi` to be recent (stable, 1298.7.0)
## v0.3.0
* `dnsmasq` package version 2.75


@@ -1,6 +1,6 @@
FROM alpine:latest
FROM alpine:3.6
MAINTAINER Dalton Hubble <dalton.hubble@coreos.com>
RUN apk -U add dnsmasq curl
COPY tftpboot /var/lib/tftpboot
EXPOSE 53
ENTRYPOINT ["/usr/sbin/dnsmasq"]
EXPOSE 53 67 69
ENTRYPOINT ["/usr/sbin/dnsmasq"]

contrib/dnsmasq/Makefile Normal file

@@ -0,0 +1,23 @@
VERSION=v0.5.0
IMAGE_REPO=coreos/dnsmasq
QUAY_REPO=quay.io/coreos/dnsmasq
.PHONY: all
all: docker-image
.PHONY: tftp
tftp:
@./get-tftp-files
.PHONY: docker-image
docker-image: tftp
@sudo docker build --rm=true -t $(IMAGE_REPO):$(VERSION) .
@sudo docker tag $(IMAGE_REPO):$(VERSION) $(IMAGE_REPO):latest
.PHONY: docker-push
docker-push:
@sudo docker tag $(IMAGE_REPO):$(VERSION) $(QUAY_REPO):latest
@sudo docker tag $(IMAGE_REPO):$(VERSION) $(QUAY_REPO):$(VERSION)
@sudo docker push $(QUAY_REPO):latest
@sudo docker push $(QUAY_REPO):$(VERSION)


@@ -1,62 +1,79 @@
# dnsmasq [![Docker Repository on Quay](https://quay.io/repository/coreos/dnsmasq/status "Docker Repository on Quay")](https://quay.io/repository/coreos/dnsmasq)
# dnsmasq
`dnsmasq` provides a container image for running DHCP, proxy DHCP, DNS, and/or TFTP with [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc.html). Use it to test different network setups with clusters of network bootable machines.
[![Docker Repository on Quay](https://quay.io/repository/coreos/dnsmasq/status "Docker Repository on Quay")](https://quay.io/repository/coreos/dnsmasq)
`dnsmasq` provides an App Container Image (ACI) or Docker image for running DHCP, proxy DHCP, DNS, and/or TFTP with [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc.html) in a container/pod. Use it to test different network setups with clusters of network bootable machines.
The image bundles `undionly.kpxe` which chainloads PXE clients to iPXE and `grub.efi` (experimental) which chainloads UEFI architectures to GRUB2.
The image bundles `undionly.kpxe`, `ipxe.efi`, and `grub.efi` (experimental) for chainloading BIOS and UEFI clients to iPXE.
## Usage
Run the `coreos.com/dnsmasq` ACI with rkt.
Run the container image as a DHCP, DNS, and TFTP service.
sudo rkt trust --prefix coreos.com/dnsmasq
# gpg key fingerprint is: 18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E
sudo rkt run coreos.com/dnsmasq:v0.3.0
```sh
sudo rkt run --net=host quay.io/coreos/dnsmasq \
--caps-retain=CAP_NET_ADMIN,CAP_NET_BIND_SERVICE,CAP_SETGID,CAP_SETUID,CAP_NET_RAW \
-- -d -q \
--dhcp-range=192.168.1.3,192.168.1.254 \
--enable-tftp \
--tftp-root=/var/lib/tftpboot \
--dhcp-match=set:bios,option:client-arch,0 \
--dhcp-boot=tag:bios,undionly.kpxe \
--dhcp-match=set:efi32,option:client-arch,6 \
--dhcp-boot=tag:efi32,ipxe.efi \
--dhcp-match=set:efibc,option:client-arch,7 \
--dhcp-boot=tag:efibc,ipxe.efi \
--dhcp-match=set:efi64,option:client-arch,9 \
--dhcp-boot=tag:efi64,ipxe.efi \
--dhcp-userclass=set:ipxe,iPXE \
--dhcp-boot=tag:ipxe,http://matchbox.example.com:8080/boot.ipxe \
--address=/matchbox.example.com/192.168.1.2 \
--log-queries \
--log-dhcp
```
Press ^] three times to kill the container.
```sh
sudo docker run --rm --cap-add=NET_ADMIN --net=host quay.io/coreos/dnsmasq \
-d -q \
--dhcp-range=192.168.1.3,192.168.1.254 \
--enable-tftp --tftp-root=/var/lib/tftpboot \
--dhcp-match=set:bios,option:client-arch,0 \
--dhcp-boot=tag:bios,undionly.kpxe \
--dhcp-match=set:efi32,option:client-arch,6 \
--dhcp-boot=tag:efi32,ipxe.efi \
--dhcp-match=set:efibc,option:client-arch,7 \
--dhcp-boot=tag:efibc,ipxe.efi \
--dhcp-match=set:efi64,option:client-arch,9 \
--dhcp-boot=tag:efi64,ipxe.efi \
--dhcp-userclass=set:ipxe,iPXE \
--dhcp-boot=tag:ipxe,http://matchbox.example.com:8080/boot.ipxe \
--address=/matchbox.example/192.168.1.2 \
--log-queries \
--log-dhcp
```
Alternately, Docker can be used.
docker pull quay.io/coreos/dnsmasq
docker run quay.io/coreos/dnsmasq --cap-add=NET_ADMIN
Press ^] three times to stop the rkt pod. Press ctrl-C to stop the Docker container.
## Configuration Flags
Configuration arguments can be provided at the command line. Check the dnsmasq [man pages](http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html) for a complete list, but here are some important flags.
Configuration arguments can be provided as flags. Check the dnsmasq [man pages](http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html) for a complete list.
| flag | description | example |
|----------|-------------|---------|
| -dhcp-range | Enable DHCP, lease given range | `172.15,0.50,172.15.0.99`, `192.168.1.1,proxy,255.255.255.0` |
| --dhcp-boot | DHCP next server option | `http://bootcfg.foo:8080/boot.ipxe` |
| --dhcp-range | Enable DHCP, lease given range | `172.18.0.50,172.18.0.99`, `192.168.1.1,proxy,255.255.255.0` |
| --dhcp-boot | DHCP next server option | `http://matchbox.foo:8080/boot.ipxe` |
| --enable-tftp | Enable serving from tftp-root over TFTP | NA |
| --address | IP address for a domain name | /bootcfg.foo/172.15.0.2 |
| --address | IP address for a domain name | /matchbox.foo/172.18.0.2 |
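The `proxy` form of `--dhcp-range` in the table above runs dnsmasq alongside an existing DHCP server: it answers PXE clients with boot options only, without handing out leases. A sketch under assumed addresses (`192.168.1.0/24` subnet, matchbox at `matchbox.example.com`):

```shell
# Proxy DHCP: an existing DHCP server keeps assigning leases; dnsmasq
# only supplies PXE boot options. PXE clients are chainloaded to iPXE,
# and iPXE clients are pointed at matchbox's boot script.
sudo docker run --rm --cap-add=NET_ADMIN --net=host quay.io/coreos/dnsmasq \
  -d -q \
  --dhcp-range=192.168.1.1,proxy,255.255.255.0 \
  --enable-tftp --tftp-root=/var/lib/tftpboot \
  --dhcp-userclass=set:ipxe,iPXE \
  --pxe-service=tag:#ipxe,x86PC,"PXE chainload to iPXE",undionly.kpxe \
  --dhcp-boot=tag:ipxe,http://matchbox.example.com:8080/boot.ipxe
```

In proxy mode, `--pxe-service` (rather than plain `--dhcp-boot`) is what advertises the boot file to first-stage PXE clients.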
## ACI
## Development
Build a `dnsmasq` ACI with the build script which uses [acbuild](https://github.com/appc/acbuild).
Build a container image locally.
cd contrib/dnsmasq
./get-tftp-files
sudo ./build-aci
```
make docker-image
```
Run `dnsmasq.aci` with rkt to run DHCP/proxyDHCP/TFTP/DNS services.
Run the image with Docker on the `docker0` bridge (default).
DHCP+TFTP+DNS on the `metal0` bridge:
```
sudo docker run --rm --cap-add=NET_ADMIN coreos/dnsmasq -d -q
```
sudo rkt --insecure-options=image run dnsmasq.aci --net=metal0 -- -d -q --dhcp-range=172.15.0.50,172.15.0.99 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-userclass=set:ipxe,iPXE --dhcp-boot=tag:#ipxe,undionly.kpxe --dhcp-boot=tag:ipxe,http://bootcfg.foo:8080/boot.ipxe --log-queries --log-dhcp --dhcp-option=3,172.15.0.1 --address=/bootcfg.foo/172.15.0.2
## Docker
Build a Docker image locally using the tag `latest`.
cd contrib/dnsmasq
./get-tftp-files
sudo ./build-docker
Run the Docker image to run DHCP/proxyDHCP/TFTP/DNS services.
DHCP+TFTP+DNS on the `docker0` bridge:
sudo docker run --rm --cap-add=NET_ADMIN quay.io/coreos/dnsmasq -d -q --dhcp-range=172.17.0.43,172.17.0.99 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-userclass=set:ipxe,iPXE --dhcp-boot=tag:#ipxe,undionly.kpxe --dhcp-boot=tag:ipxe,http://bootcfg.foo:8080/boot.ipxe --log-queries --log-dhcp --dhcp-option=3,172.17.0.1 --address=/bootcfg.foo/172.17.0.2


@@ -1,42 +0,0 @@
#!/usr/bin/env bash
set -e
if [ "$EUID" -ne 0 ]; then
echo "Please run as root"
exit 1
fi
# Start with an empty ACI
acbuild --debug begin
# In the event of the script exiting, end the build
trap "{ export EXT=$?; acbuild --debug end && exit $EXT; }" EXIT
# Name the ACI
acbuild --debug set-name coreos.com/dnsmasq
# Add a version label
acbuild --debug label add version v0.3.0
# Add alpine base dependency
acbuild --debug dep add quay.io/coreos/alpine-sh
# Install dnsmasq and curl
acbuild --debug run apk update
acbuild --debug run apk add dnsmasq curl
# Copy the PXE->iPXE chainloader
acbuild --debug copy tftpboot /var/lib/tftpboot
# Add DHCP and DNS ports for dnsmasq
acbuild --debug port add dhcp udp 67
acbuild --debug port add dns udp 53
# Elevate network admin capabilities
echo "{\"set\": [\"CAP_NET_ADMIN\", \"CAP_NET_BIND_SERVICE\", \"CAP_SETGID\", \"CAP_SETUID\", \"CAP_NET_RAW\"]}" | acbuild --debug isolator add os/linux/capabilities-retain-set -
# Set the exec command
acbuild --debug set-exec -- /usr/sbin/dnsmasq -d
# Save and override any older ACI
acbuild --debug write --overwrite dnsmasq.aci


@@ -1,5 +0,0 @@
#!/bin/bash -e
REPO=coreos/dnsmasq
docker build -q --rm=true -t $REPO:latest .


@@ -0,0 +1,40 @@
# dnsmasq.conf
no-daemon
dhcp-range=172.17.0.50,172.17.0.99
dhcp-option=3,172.17.0.1
dhcp-host=52:54:00:a1:9c:ae,172.17.0.21,1h
dhcp-host=52:54:00:b2:2f:86,172.17.0.22,1h
dhcp-host=52:54:00:c3:61:77,172.17.0.23,1h
dhcp-host=52:54:00:d7:99:c7,172.17.0.24,1h
enable-tftp
tftp-root=/var/lib/tftpboot
# Legacy PXE
dhcp-match=set:bios,option:client-arch,0
dhcp-boot=tag:bios,undionly.kpxe
# UEFI
dhcp-match=set:efi32,option:client-arch,6
dhcp-boot=tag:efi32,ipxe.efi
dhcp-match=set:efibc,option:client-arch,7
dhcp-boot=tag:efibc,ipxe.efi
dhcp-match=set:efi64,option:client-arch,9
dhcp-boot=tag:efi64,ipxe.efi
# iPXE
dhcp-userclass=set:ipxe,iPXE
dhcp-boot=tag:ipxe,http://matchbox.example.com:8080/boot.ipxe
log-queries
log-dhcp
address=/matchbox.example.com/172.17.0.2
address=/node1.example.com/172.17.0.21
address=/node2.example.com/172.17.0.22
address=/node3.example.com/172.17.0.23
address=/node4.example.com/172.17.0.24
address=/cluster.example.com/172.17.0.21


@@ -1,6 +1,7 @@
#!/bin/bash -e
#!/usr/bin/env bash
set -eu
DEST=tftpboot
DEST=${1:-"tftpboot"}
if [ ! -d $DEST ]; then
echo "Creating directory $DEST"
@@ -9,3 +10,7 @@ fi
curl -s -o $DEST/undionly.kpxe http://boot.ipxe.org/undionly.kpxe
cp $DEST/undionly.kpxe $DEST/undionly.kpxe.0
curl -s -o $DEST/ipxe.efi http://boot.ipxe.org/ipxe.efi
# Any vaguely recent CoreOS grub.efi is fine
curl -s -o $DEST/grub.efi https://stable.release.core-os.net/amd64-usr/1353.7.0/coreos_production_pxe_grub.efi


@@ -0,0 +1,30 @@
# dnsmasq.conf
no-daemon
dhcp-range=172.18.0.50,172.18.0.99
dhcp-option=3,172.18.0.1
dhcp-host=52:54:00:a1:9c:ae,172.18.0.21,1h
dhcp-host=52:54:00:b2:2f:86,172.18.0.22,1h
dhcp-host=52:54:00:c3:61:77,172.18.0.23,1h
dhcp-host=52:54:00:d7:99:c7,172.18.0.24,1h
enable-tftp
tftp-root=/var/lib/tftpboot
dhcp-userclass=set:ipxe,iPXE
dhcp-boot=tag:#ipxe,undionly.kpxe
dhcp-boot=tag:ipxe,http://matchbox.example.com:8080/boot.ipxe
log-queries
log-dhcp
address=/matchbox.example.com/172.18.0.2
address=/node1.example.com/172.18.0.21
address=/node2.example.com/172.18.0.22
address=/node3.example.com/172.18.0.23
address=/node4.example.com/172.18.0.24
address=/cluster.example.com/172.18.0.21
# for a Tectonic test, ignore
address=/tectonic.example.com/172.18.0.22
address=/tectonic.example.com/172.18.0.23


@@ -1,55 +0,0 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: bootcfg
namespace: default
spec:
replicas: 1
strategy:
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
name: bootcfg
phase: prod
spec:
containers:
- name: bootcfg
image: quay.io/coreos/bootcfg:latest
env:
- {name: BOOTCFG_ADDRESS, value: "0.0.0.0:8080"}
- {name: BOOTCFG_LOG_LEVEL, value: "debug"}
ports:
# port exposed on pod IP
- containerPort: 8080
resources:
requests:
cpu: "50m"
memory: "50Mi"
volumeMounts:
- name: groups
mountPath: /var/lib/bootcfg/groups
- name: profiles
mountPath: /var/lib/bootcfg/profiles
- name: ignition
mountPath: /var/lib/bootcfg/ignition
- name: cloud
mountPath: /var/lib/bootcfg/cloud
- name: assets
mountPath: /var/lib/bootcfg/assets
dnsPolicy: ClusterFirst
restartPolicy: Always
terminationGracePeriodSeconds: 30
volumes:
- name: groups
emptyDir: {}
- name: profiles
emptyDir: {}
- name: ignition
emptyDir: {}
- name: cloud
emptyDir: {}
- name: assets
emptyDir: {}


@@ -1,16 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: bootcfg
spec:
type: NodePort
selector:
name: bootcfg
phase: prod
ports:
- protocol: TCP
port: 80
# port exposed on each node
nodePort: 31488
# name or port exposed on targeted pod(s)
targetPort: 8080


@@ -0,0 +1,52 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: matchbox
spec:
replicas: 1
strategy:
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
name: matchbox
phase: prod
spec:
containers:
- name: matchbox
image: quay.io/coreos/matchbox:v0.7.1
env:
- name: MATCHBOX_ADDRESS
value: "0.0.0.0:8080"
- name: MATCHBOX_RPC_ADDRESS
value: "0.0.0.0:8081"
- name: MATCHBOX_LOG_LEVEL
value: "debug"
ports:
- name: http
containerPort: 8080
- name: https
containerPort: 8081
resources:
requests:
cpu: "50m"
memory: "50Mi"
volumeMounts:
- name: config
mountPath: /etc/matchbox
- name: data
mountPath: /var/lib/matchbox
- name: assets
mountPath: /var/lib/matchbox/assets
dnsPolicy: ClusterFirst
restartPolicy: Always
terminationGracePeriodSeconds: 30
volumes:
- name: config
secret:
secretName: matchbox-rpc
- name: data
emptyDir: {}
- name: assets
emptyDir: {}


@@ -0,0 +1,32 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: matchbox
spec:
rules:
- host: matchbox.example.com
http:
paths:
- path: /
backend:
serviceName: matchbox
servicePort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: matchbox
annotations:
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
tls:
- hosts:
- matchbox-rpc.example.com
rules:
- host: matchbox-rpc.example.com
http:
paths:
- path: /
backend:
serviceName: matchbox
servicePort: 8081


@@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
name: matchbox
spec:
type: ClusterIP
selector:
name: matchbox
phase: prod
ports:
- name: http
protocol: TCP
port: 8080
targetPort: 8080
- name: https
protocol: TCP
port: 8081
targetPort: 8081

contrib/rpm/matchbox.spec Normal file

@@ -0,0 +1,86 @@
%global import_path github.com/coreos/matchbox
%global repo matchbox
%global debug_package %{nil}
Name: matchbox
Version: 0.6.0
Release: 2%{?dist}
Summary: Network boot and provision CoreOS machines
License: ASL 2.0
URL: https://%{import_path}
Source0: https://%{import_path}/archive/v%{version}/%{name}-%{version}.tar.gz
BuildRequires: golang
BuildRequires: systemd
%{?systemd_requires}
Requires(pre): shadow-utils
%description
matchbox is a service that matches machines to profiles to PXE boot and provision
clusters. Profiles specify the kernel/initrd, kernel args, iPXE config, GRUB
config, Container Linux config, Cloud-config, or other configs. matchbox provides
a read-only HTTP API for machines and an authenticated gRPC API for clients.
# Limit to architectures supported by golang or gcc-go compilers
ExclusiveArch: %{go_arches}
# Use golang or gcc-go compiler depending on architecture
BuildRequires: compiler(golang)
%prep
%setup -q -n %{repo}-%{version}
%build
# create a Go workspace with a symlink to builddir source
mkdir -p src/github.com/coreos
ln -s ../../../ src/github.com/coreos/matchbox
export GOPATH=$(pwd):%{gopath}
export GO15VENDOREXPERIMENT=1
function gobuild { go build -a -ldflags "-w -X github.com/coreos/matchbox/matchbox/version.Version=v%{version}" "$@"; }
gobuild -o bin/matchbox %{import_path}/cmd/matchbox
%install
install -d %{buildroot}/%{_bindir}
install -d %{buildroot}%{_sharedstatedir}/%{name}
install -p -m 0755 bin/matchbox %{buildroot}/%{_bindir}
# systemd service unit
mkdir -p %{buildroot}%{_unitdir}
cp contrib/systemd/%{name}.service %{buildroot}%{_unitdir}/
%files
%doc README.md CHANGES.md CONTRIBUTING.md LICENSE NOTICE DCO
%{_bindir}/matchbox
%{_sharedstatedir}/%{name}
%{_unitdir}/%{name}.service
%pre
getent group matchbox >/dev/null || groupadd -r matchbox
getent passwd matchbox >/dev/null || \
useradd -r -g matchbox -s /sbin/nologin matchbox
%post
%systemd_post matchbox.service
%preun
%systemd_preun matchbox.service
%postun
%systemd_postun_with_restart matchbox.service
%changelog
* Mon Apr 24 2017 <dalton.hubble@coreos.com> - 0.6.0-1
- New support for terraform-provider-matchbox plugin
- Add ProfileDelete, GroupDelete, IgnitionGet and IgnitionDelete gRPC endpoints
- Generate code with gRPC v1.2.1 and matching Go protoc-gen-go plugin
- Update Ignition to v0.14.0 and coreos-cloudinit to v1.13.0
- New documentation at https://coreos.com/matchbox/docs/latest
* Wed Jan 25 2017 <dalton.hubble@coreos.com> - 0.5.0-1
- Rename project from bootcfg to matchbox
* Sat Dec 3 2016 <dalton.hubble@coreos.com> - 0.4.1-3
- Add missing ldflags which caused bootcfg -version to report wrong version
* Fri Dec 2 2016 <dalton.hubble@coreos.com> - 0.4.1-2
- Fix bootcfg user creation
* Fri Dec 2 2016 <dalton.hubble@coreos.com> - 0.4.1-1
- Initial package

contrib/squid/README.md

@@ -0,0 +1,96 @@
# Squid Proxy (DRAFT)
This guide shows how to set up a [Squid](http://www.squid-cache.org/) cache proxy for serving kernel/initrd files to PXE, iPXE, or GRUB2 client machines. This setup runs Squid as a Docker container using the [sameersbn/squid](https://quay.io/repository/sameersbn/squid) image.
The Squid container requires a squid.conf file to run. Download the example squid.conf file from the [sameersbn/docker-squid](https://github.com/sameersbn/docker-squid) repo:
```
curl -O https://raw.githubusercontent.com/sameersbn/docker-squid/master/squid.conf
```
Squid [interception caching](http://wiki.squid-cache.org/SquidFaq/InterceptionProxy#Concepts_of_Interception_Caching) is required for proxying PXE, iPXE, or GRUB2 client machines. Set the intercept mode in squid.conf:
```
sed -i -e 's/http_port 3128/http_port 3128 intercept/g' squid.conf
```
By default, Squid caches objects that are 4MB or less. Increase the maximum object size to cache large files such as kernel and initrd images. The following example increases the maximum object size to 300MB:
```
sed -i -e 's/# maximum_object_size 4 MB/maximum_object_size 300 MB/g' squid.conf
```
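The two `sed` edits can be sanity-checked against a minimal stand-in file before touching the real download; the two directive lines below are the only assumption:
```
# Create a minimal stand-in squid.conf containing just the two directives
# the guide edits (a placeholder, not the full upstream file).
printf 'http_port 3128\n# maximum_object_size 4 MB\n' > squid.conf

# Apply both edits in one pass, as in the steps above.
sed -i \
  -e 's/http_port 3128/http_port 3128 intercept/' \
  -e 's/# maximum_object_size 4 MB/maximum_object_size 300 MB/' squid.conf

cat squid.conf
# http_port 3128 intercept
# maximum_object_size 300 MB
```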
Squid supports a wide range of cache configurations. Review the Squid [documentation](http://www.squid-cache.org/Doc/) to learn more about configuring Squid.
This example uses systemd to manage Squid. Create the squid service systemd unit file:
```
cat /etc/systemd/system/squid.service
#/etc/systemd/system/squid.service
[Unit]
Description=squid proxy service
After=docker.service
Requires=docker.service
[Service]
Restart=always
TimeoutStartSec=0
ExecStart=/usr/bin/docker run --net=host --rm \
-v /path/to/squid.conf:/etc/squid3/squid.conf:Z \
-v /srv/docker/squid/cache:/var/spool/squid3:Z \
quay.io/sameersbn/squid
[Install]
WantedBy=multi-user.target
```
Start Squid:
```
systemctl start squid
```
If your Squid host runs iptables or firewalld, adjust the rules to allow the interception and redirection of traffic. In the following examples, 192.168.10.1 is the IP address of the interface facing the PXE, iPXE, or GRUB2 client machines. The default Squid port is 3128.
For firewalld:
```
firewall-cmd --permanent --zone=internal --add-forward-port=port=80:proto=tcp:toport=3128:toaddr=192.168.10.1
firewall-cmd --permanent --zone=internal --add-port=3128/tcp
firewall-cmd --reload
firewall-cmd --zone=internal --list-all
```
For iptables:
```
iptables -t nat -A POSTROUTING -o enp15s0 -j MASQUERADE
iptables -t nat -A PREROUTING -i enp14s0 -p tcp --dport 80 -j REDIRECT --to-port 3128
```
**Note**: enp14s0 faces the PXE, iPXE, or GRUB2 clients; enp15s0 faces the Internet.
Configure your DHCP server so that the Squid host is the default gateway for PXE, iPXE, or GRUB2 clients. For deployments that run Squid on the same host as dnsmasq, remove any DHCP option 3 settings (e.g. `--dhcp-option=3,192.168.10.1`).
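With dnsmasq, the default gateway is DHCP option 3. A hypothetical configuration fragment (the range and addresses are placeholders, not values from this repo):
```
# dnsmasq.conf sketch (addresses are placeholders)
dhcp-range=192.168.10.50,192.168.10.150,12h
# Squid on a separate host: advertise it as the default gateway
dhcp-option=3,192.168.10.1
# Squid on this dnsmasq host: omit dhcp-option=3 entirely;
# dnsmasq advertises its own address as the gateway by default
```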
Update Matchbox profiles to use the URL of the Container Linux kernel/initrd download site:
```
cat policy/etcd3.json
{
"id": "etcd3",
"name": "etcd3",
"boot": {
"kernel": "http://stable.release.core-os.net/amd64-usr/1235.9.0/coreos_production_pxe.vmlinuz",
"initrd": ["http://stable.release.core-os.net/amd64-usr/1235.9.0/coreos_production_pxe_image.cpio.gz"],
"args": [
"coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}",
"coreos.first_boot=yes",
"console=tty0",
"console=ttyS0",
"coreos.autologin"
]
},
"ignition_id": "etcd3.yaml"
}
```
(Optional) Configure Matchbox to not serve static assets by providing an empty assets-path value.
```
# /etc/systemd/system/matchbox.service.d/override.conf
[Service]
Environment="MATCHBOX_ASSETS_PATHS="
```
Boot your PXE, iPXE, or GRUB2 clients.


@@ -1,17 +0,0 @@
[Unit]
Description=CoreOS bootcfg Server
Documentation=https://github.com/coreos/coreos-baremetal
[Service]
Type=simple
User=bootcfg
Group=bootcfg
ExecStart=/usr/local/bin/bootcfg -address=0.0.0.0:8080 -rpc-address=0.0.0.0:8081 -log-level=debug
# systemd.exec
ProtectHome=yes
ProtectSystem=full
ReadWriteDirectories=/var/lib/bootcfg
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,24 @@
[Unit]
Description=CoreOS matchbox Server
Documentation=https://github.com/coreos/matchbox
[Service]
Environment="IMAGE=quay.io/coreos/matchbox"
Environment="VERSION=v0.7.1"
Environment="MATCHBOX_ADDRESS=0.0.0.0:8080"
Environment="MATCHBOX_RPC_ADDRESS=0.0.0.0:8081"
Environment="MATCHBOX_LOG_LEVEL=debug"
ExecStartPre=/usr/bin/mkdir -p /etc/matchbox
ExecStartPre=/usr/bin/mkdir -p /var/lib/matchbox/assets
ExecStart=/usr/bin/rkt run \
--net=host \
--inherit-env \
--trust-keys-from-https \
--mount volume=data,target=/var/lib/matchbox \
--mount volume=config,target=/etc/matchbox \
--volume data,kind=host,source=/var/lib/matchbox \
--volume config,kind=host,source=/etc/matchbox \
${IMAGE}:${VERSION}
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,16 @@
[Unit]
Description=CoreOS matchbox Server
Documentation=https://github.com/coreos/matchbox
[Service]
User=matchbox
Group=matchbox
Environment="MATCHBOX_ADDRESS=0.0.0.0:8080"
ExecStart=/usr/local/bin/matchbox
# systemd.exec
ProtectHome=yes
ProtectSystem=full
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,22 @@
[Unit]
Description=CoreOS matchbox Server
Documentation=https://github.com/coreos/matchbox
[Service]
Environment="IMAGE=quay.io/coreos/matchbox"
Environment="VERSION=v0.7.1"
Environment="MATCHBOX_ADDRESS=0.0.0.0:8080"
ExecStartPre=/usr/bin/mkdir -p /etc/matchbox
ExecStartPre=/usr/bin/mkdir -p /var/lib/matchbox/assets
ExecStart=/usr/bin/rkt run \
--net=host \
--inherit-env \
--trust-keys-from-https \
--mount volume=data,target=/var/lib/matchbox \
--mount volume=config,target=/etc/matchbox \
--volume data,kind=host,source=/var/lib/matchbox \
--volume config,kind=host,source=/etc/matchbox \
${IMAGE}:${VERSION}
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,16 @@
[Unit]
Description=CoreOS matchbox Server
Documentation=https://github.com/coreos/matchbox
[Service]
User=matchbox
Group=matchbox
Environment="MATCHBOX_ADDRESS=0.0.0.0:8080"
ExecStart=/usr/bin/matchbox
# systemd.exec
ProtectHome=yes
ProtectSystem=full
[Install]
WantedBy=multi-user.target


@@ -1,38 +1,46 @@
# Examples
These examples network boot and provision machines into CoreOS clusters using `bootcfg`. You can re-use their profiles to provision your own physical machines.
Matchbox automates network booting and provisioning of clusters. These examples show how to use matchbox on-premise or locally with [QEMU/KVM](scripts/README.md#libvirt).
## Terraform Examples
These examples use [Terraform](https://www.terraform.io/intro/) as a client to Matchbox.
| Name | Description |
|-------------------------------|-------------------------------|
| [simple-install](terraform/simple-install/) | Install Container Linux with an SSH key |
| [etcd3-install](terraform/etcd3-install/) | Install a 3-node etcd3 cluster |
| [bootkube-install](terraform/bootkube-install/) | Install a 3-node Kubernetes v1.10.3 cluster |
### Customization
You are encouraged to look through the examples and Terraform modules. Implement your own profiles or package them as modules to meet your needs. We've just provided a starting point. Learn more about [matchbox](../Documentation/matchbox.md) and [Container Linux configs](../Documentation/container-linux-config.md).
## Manual Examples
These examples mount raw Matchbox objects into a Matchbox server's `/var/lib/matchbox/` directory.
| Name | Description | CoreOS Container Linux Version | FS | Docs |
|------------|-------------|----------------|----|-----------|
| pxe | CoreOS via iPXE | alpha/1053.2.0 | RAM | [reference](https://coreos.com/os/docs/latest/booting-with-ipxe.html) |
| grub | CoreOS via GRUB2 Netboot | alpha/1053.2.0 | RAM | NA |
| pxe-disk | CoreOS via iPXE, with a root filesystem | alpha/1053.2.0 | Disk | [reference](https://coreos.com/os/docs/latest/booting-with-ipxe.html) |
| etcd, etcd-docker | iPXE boot a 3 node etcd cluster and proxy | alpha/1053.2.0 | RAM | [reference](https://coreos.com/os/docs/latest/cluster-architectures.html) |
| etcd-install | Install a 3-node etcd cluster to disk | alpha/1053.2.0 | Disk | [reference](https://coreos.com/os/docs/latest/installing-to-disk.html) |
| k8s, k8s-docker | Kubernetes cluster with 1 master, 2 workers, and TLS-authentication | alpha/1053.2.0 | Disk | [tutorial](../Documentation/kubernetes.md) |
| k8s-install | Install a Kubernetes cluster to disk | alpha/1053.2.0 | Disk | [tutorial](../Documentation/kubernetes.md) |
| bootkube | iPXE boot a self-hosted Kubernetes cluster (with bootkube) | alpha/1053.2.0 | Disk | [tutorial](../Documentation/bootkube.md) |
| bootkube-install | Install a self-hosted Kubernetes cluster (with bootkube) | alpha/1053.2.0 | Disk | [tutorial](../Documentation/bootkube.md) |
| torus | Torus distributed storage | alpha/1053.2.0 | Disk | [tutorial](../Documentation/torus.md) |
| simple | CoreOS Container Linux with autologin, using iPXE | stable/1576.5.0 | RAM | [reference](https://coreos.com/os/docs/latest/booting-with-ipxe.html) |
| simple-install | CoreOS Container Linux Install, using iPXE | stable/1576.5.0 | RAM | [reference](https://coreos.com/os/docs/latest/booting-with-ipxe.html) |
| grub | CoreOS Container Linux via GRUB2 Netboot | stable/1576.5.0 | RAM | NA |
| etcd3 | PXE boot a 3-node etcd3 cluster with proxies | stable/1576.5.0 | RAM | None |
| etcd3-install | Install a 3-node etcd3 cluster to disk | stable/1576.5.0 | Disk | None |
| bootkube | PXE boot a 3-node Kubernetes v1.8.5 cluster | stable/1576.5.0 | Disk | [tutorial](../Documentation/bootkube.md) |
| bootkube-install | Install a 3-node Kubernetes v1.8.5 cluster | stable/1576.5.0 | Disk | [tutorial](../Documentation/bootkube.md) |
## Tutorials
### Customization
Get started running `bootcfg` on your Linux machine to network boot and provision clusters of VMs or physical hardware.
#### Autologin
* Getting Started
* [bootcfg with rkt](../Documentation/getting-started-rkt.md)
* [bootcfg with Docker](../Documentation/getting-started-docker.md)
* [Kubernetes (static manifests)](../Documentation/kubernetes.md)
* [Kubernetes (self-hosted)](../Documentation/bootkube.md)
* [Torus Storage](../Documentation/torus.md)
* [Lab Examples](https://github.com/dghubble/metal)
Example profiles pass the `coreos.autologin` kernel argument. This skips the password prompt for development and troubleshooting and should be removed **before production**.
## SSH Keys
Most examples allow `ssh_authorized_keys` to be added for the `core` user as machine group metadata.
Example groups allow `ssh_authorized_keys` to be added for the `core` user as metadata. You might also include keys directly in your Ignition config.
# /var/lib/bootcfg/groups/default.json
# /var/lib/matchbox/groups/default.json
{
"name": "Example Machine Group",
"profile": "pxe",
@@ -41,12 +49,8 @@ Most examples allow `ssh_authorized_keys` to be added for the `core` user as mac
}
}
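Filled out, a minimal group file might look like this (the key below is a placeholder; substitute your own public key):
```
{
  "name": "Example Machine Group",
  "profile": "pxe",
  "metadata": {
    "ssh_authorized_keys": [
      "ssh-rsa AAAA...your-public-key core@example.com"
    ]
  }
}
```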
## Conditional Variables
#### Conditional Variables
### "pxe"
**"pxe"**
Some examples check the `pxe` variable to determine whether to create a `/dev/sda1` filesystem and partition for PXEing with `root=/dev/sda1` ("pxe":"true") or to write files to the existing filesystem on `/dev/disk/by-label/ROOT` ("pxe":"false").
### "skip_networkd"
Some examples (mainly the Kubernetes examples) check the `skip_networkd` variable to determine whether to skip configuring networkd. When `true`, the default networkd config is used, which uses DHCP to set up networking. Use this if you've pre-configured static IP mappings for Kubernetes nodes in your DHCP server. Otherwise, the `networkd_address`, `networkd_dns`, and `networkd_gateway` machine metadata are used to populate a networkd configuration on each host.
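Both variables are ordinary machine group metadata. A sketch of the relevant keys, reusing addresses from the example group files in this diff:
```
"metadata": {
  "pxe": "true",
  "networkd_address": "172.15.0.21/16",
  "networkd_dns": "172.15.0.3",
  "networkd_gateway": "172.15.0.1"
}
```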


@@ -0,0 +1,56 @@
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: container-linux-update-agent
namespace: kube-system
spec:
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
app: container-linux-update-agent
spec:
containers:
- name: update-agent
image: quay.io/coreos/container-linux-update-operator:v0.3.1
command:
- "/bin/update-agent"
volumeMounts:
- mountPath: /var/run/dbus
name: var-run-dbus
- mountPath: /etc/coreos
name: etc-coreos
- mountPath: /usr/share/coreos
name: usr-share-coreos
- mountPath: /etc/os-release
name: etc-os-release
env:
# read by update-agent as the node name to manage reboots for
- name: UPDATE_AGENT_NODE
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- name: var-run-dbus
hostPath:
path: /var/run/dbus
- name: etc-coreos
hostPath:
path: /etc/coreos
- name: usr-share-coreos
hostPath:
path: /usr/share/coreos
- name: etc-os-release
hostPath:
path: /etc/os-release


@@ -0,0 +1,22 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: container-linux-update-operator
namespace: kube-system
spec:
replicas: 1
template:
metadata:
labels:
app: container-linux-update-operator
spec:
containers:
- name: update-operator
image: quay.io/coreos/container-linux-update-operator:v0.3.1
command:
- "/bin/update-operator"
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace


@@ -1,44 +0,0 @@
## gRPC API Credentials
Create FAKE TLS credentials for running the `bootcfg` gRPC API examples.
**DO NOT** use these certificates for anything other than running `bootcfg` examples. Use your organization's production PKI for production deployments.
Navigate to the example directory which will be mounted as `/etc/bootcfg` in examples:
cd coreos-baremetal/examples/etc/bootcfg
Set the certificate subject alt names by exporting `SAN`. Use the DNS name or IP at which `bootcfg` is hosted.
# for examples on metal0 or docker0 bridges
export SAN=IP.1:127.0.0.1,IP.2:172.15.0.2
# production example
export SAN=DNS.1:bootcfg.example.com
Create a fake `ca.crt`, `server.crt`, `server.key`, `client.crt`, and `client.key`. Type 'Y' when prompted.
$ ./cert-gen
Creating FAKE CA, server cert/key, and client cert/key...
...
...
...
******************************************************************
WARNING: Generated TLS credentials are ONLY SUITABLE FOR EXAMPLES!
Use your organization's production PKI for production deployments!
## Inspect
Inspect the generated FAKE certificates if desired.
openssl x509 -noout -text -in ca.crt
openssl x509 -noout -text -in server.crt
openssl x509 -noout -text -in client.crt
## Verify
Verify that the FAKE server and client certificates were signed by the fake CA.
openssl verify -CAfile ca.crt server.crt
openssl verify -CAfile ca.crt client.crt


@@ -1,11 +1,11 @@
{
"id": "coreos-install",
"name": "CoreOS Install",
"name": "CoreOS Container Linux Install",
"profile": "install-reboot",
"metadata": {
"coreos_channel": "alpha",
"coreos_version": "1053.2.0",
"ignition_endpoint": "http://bootcfg.foo:8080/ignition",
"baseurl": "http://bootcfg.foo:8080/assets/coreos"
"coreos_channel": "stable",
"coreos_version": "1576.5.0",
"ignition_endpoint": "http://matchbox.example.com:8080/ignition",
"baseurl": "http://matchbox.example.com:8080/assets/coreos"
}
}


@@ -1,25 +1,19 @@
{
"id": "node1",
"name": "Master Node",
"profile": "bootkube-master",
"name": "Controller Node",
"profile": "bootkube-controller",
"selector": {
"mac": "52:54:00:a1:9c:ae",
"os": "installed"
},
"metadata": {
"ipv4_address": "172.15.0.21",
"etcd_initial_cluster": "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380",
"domain_name": "node1.example.com",
"etcd_initial_cluster": "node1=https://node1.example.com:2380",
"etcd_name": "node1",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_master_endpoint": "https://172.15.0.21:443",
"k8s_pod_network": "10.2.0.0/16",
"k8s_service_ip_range": "10.3.0.0/24",
"k8s_etcd_endpoints": "http://172.15.0.21:2379,http://172.15.0.22:2379,http://172.15.0.23:2379",
"networkd_address": "172.15.0.21/16",
"networkd_dns": "172.15.0.3",
"networkd_gateway": "172.15.0.1",
"ssh_authorized_keys": [
"ADD ME"
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPQFdwVLr+alsWIgYRz9OdqDhnx9jjuFbkdSdpqq4gd9uZApYlivMDD4UgjFazQpezx8DiNhu9ym7i6LgAcdwi+10hE4L9yoJv9uBgbBxOAd65znqLqF91NtV4mlKP5YfJtR7Ehs+pTB+IIC+o5veDbPn+BYgDMJ2x7Osbn1/gFSDken/yoOFbYbRMGMfVEQYjJzC4r/qCKH0bl/xuVNLxf9FkWSTCcQFKGOndwuGITDkshD4r2Kk8gUddXPxoahBv33/2QH0CY5zbKYjhgN6I6WtwO+O1uJwtNeV1AGhYjurdd60qggNwx+W7623uK3nIXvJd3hzDO8u5oa53/tIL fake-test-key-REMOVE-ME"
]
}
}


@@ -7,18 +7,10 @@
"os": "installed"
},
"metadata": {
"ipv4_address": "172.15.0.22",
"etcd_initial_cluster": "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380",
"etcd_name": "node2",
"domain_name": "node2.example.com",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_master_endpoint": "https://172.15.0.21:443",
"k8s_pod_network": "10.2.0.0/16",
"k8s_service_ip_range": "10.3.0.0/24",
"networkd_address": "172.15.0.22/16",
"networkd_dns": "172.15.0.3",
"networkd_gateway": "172.15.0.1",
"ssh_authorized_keys": [
"ADD ME"
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPQFdwVLr+alsWIgYRz9OdqDhnx9jjuFbkdSdpqq4gd9uZApYlivMDD4UgjFazQpezx8DiNhu9ym7i6LgAcdwi+10hE4L9yoJv9uBgbBxOAd65znqLqF91NtV4mlKP5YfJtR7Ehs+pTB+IIC+o5veDbPn+BYgDMJ2x7Osbn1/gFSDken/yoOFbYbRMGMfVEQYjJzC4r/qCKH0bl/xuVNLxf9FkWSTCcQFKGOndwuGITDkshD4r2Kk8gUddXPxoahBv33/2QH0CY5zbKYjhgN6I6WtwO+O1uJwtNeV1AGhYjurdd60qggNwx+W7623uK3nIXvJd3hzDO8u5oa53/tIL fake-test-key-REMOVE-ME"
]
}
}


@@ -7,18 +7,10 @@
"os": "installed"
},
"metadata": {
"ipv4_address": "172.15.0.23",
"etcd_initial_cluster": "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380",
"etcd_name": "node3",
"domain_name": "node3.example.com",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_master_endpoint": "https://172.15.0.21:443",
"k8s_pod_network": "10.2.0.0/16",
"k8s_service_ip_range": "10.3.0.0/24",
"networkd_address": "172.15.0.23/16",
"networkd_dns": "172.15.0.3",
"networkd_gateway": "172.15.0.1",
"ssh_authorized_keys": [
"ADD ME"
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPQFdwVLr+alsWIgYRz9OdqDhnx9jjuFbkdSdpqq4gd9uZApYlivMDD4UgjFazQpezx8DiNhu9ym7i6LgAcdwi+10hE4L9yoJv9uBgbBxOAd65znqLqF91NtV4mlKP5YfJtR7Ehs+pTB+IIC+o5veDbPn+BYgDMJ2x7Osbn1/gFSDken/yoOFbYbRMGMfVEQYjJzC4r/qCKH0bl/xuVNLxf9FkWSTCcQFKGOndwuGITDkshD4r2Kk8gUddXPxoahBv33/2QH0CY5zbKYjhgN6I6WtwO+O1uJwtNeV1AGhYjurdd60qggNwx+W7623uK3nIXvJd3hzDO8u5oa53/tIL fake-test-key-REMOVE-ME"
]
}
}


@@ -1,25 +1,18 @@
{
"id": "node1",
"name": "Master Node",
"profile": "bootkube-master",
"name": "Controller Node",
"profile": "bootkube-controller",
"selector": {
"mac": "52:54:00:a1:9c:ae"
},
"metadata": {
"etcd_initial_cluster": "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380",
"domain_name": "node1.example.com",
"etcd_initial_cluster": "node1=https://node1.example.com:2380",
"etcd_name": "node1",
"ipv4_address": "172.15.0.21",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_etcd_endpoints": "http://172.15.0.21:2379,http://172.15.0.22:2379,http://172.15.0.23:2379",
"k8s_master_endpoint": "https://172.15.0.21:443",
"k8s_pod_network": "10.2.0.0/16",
"k8s_service_ip_range": "10.3.0.0/24",
"networkd_address": "172.15.0.21/16",
"networkd_dns": "172.15.0.3",
"networkd_gateway": "172.15.0.1",
"pxe": "true",
"ssh_authorized_keys": [
"ADD ME"
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPQFdwVLr+alsWIgYRz9OdqDhnx9jjuFbkdSdpqq4gd9uZApYlivMDD4UgjFazQpezx8DiNhu9ym7i6LgAcdwi+10hE4L9yoJv9uBgbBxOAd65znqLqF91NtV4mlKP5YfJtR7Ehs+pTB+IIC+o5veDbPn+BYgDMJ2x7Osbn1/gFSDken/yoOFbYbRMGMfVEQYjJzC4r/qCKH0bl/xuVNLxf9FkWSTCcQFKGOndwuGITDkshD4r2Kk8gUddXPxoahBv33/2QH0CY5zbKYjhgN6I6WtwO+O1uJwtNeV1AGhYjurdd60qggNwx+W7623uK3nIXvJd3hzDO8u5oa53/tIL fake-test-key-REMOVE-ME"
]
}
}


@@ -6,19 +6,11 @@
"mac": "52:54:00:b2:2f:86"
},
"metadata": {
"ipv4_address": "172.15.0.22",
"etcd_initial_cluster": "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380",
"etcd_name": "node2",
"domain_name": "node2.example.com",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_master_endpoint": "https://172.15.0.21:443",
"k8s_pod_network": "10.2.0.0/16",
"k8s_service_ip_range": "10.3.0.0/24",
"networkd_address": "172.15.0.22/16",
"networkd_dns": "172.15.0.3",
"networkd_gateway": "172.15.0.1",
"pxe": "true",
"ssh_authorized_keys": [
"ADD ME"
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPQFdwVLr+alsWIgYRz9OdqDhnx9jjuFbkdSdpqq4gd9uZApYlivMDD4UgjFazQpezx8DiNhu9ym7i6LgAcdwi+10hE4L9yoJv9uBgbBxOAd65znqLqF91NtV4mlKP5YfJtR7Ehs+pTB+IIC+o5veDbPn+BYgDMJ2x7Osbn1/gFSDken/yoOFbYbRMGMfVEQYjJzC4r/qCKH0bl/xuVNLxf9FkWSTCcQFKGOndwuGITDkshD4r2Kk8gUddXPxoahBv33/2QH0CY5zbKYjhgN6I6WtwO+O1uJwtNeV1AGhYjurdd60qggNwx+W7623uK3nIXvJd3hzDO8u5oa53/tIL fake-test-key-REMOVE-ME"
]
}
}


@@ -6,19 +6,11 @@
"mac": "52:54:00:c3:61:77"
},
"metadata": {
"ipv4_address": "172.15.0.23",
"etcd_initial_cluster": "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380",
"etcd_name": "node3",
"domain_name": "node3.example.com",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_master_endpoint": "https://172.15.0.21:443",
"k8s_pod_network": "10.2.0.0/16",
"k8s_service_ip_range": "10.3.0.0/24",
"networkd_address": "172.15.0.23/16",
"networkd_dns": "172.15.0.3",
"networkd_gateway": "172.15.0.1",
"pxe": "true",
"ssh_authorized_keys": [
"ADD ME"
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPQFdwVLr+alsWIgYRz9OdqDhnx9jjuFbkdSdpqq4gd9uZApYlivMDD4UgjFazQpezx8DiNhu9ym7i6LgAcdwi+10hE4L9yoJv9uBgbBxOAd65znqLqF91NtV4mlKP5YfJtR7Ehs+pTB+IIC+o5veDbPn+BYgDMJ2x7Osbn1/gFSDken/yoOFbYbRMGMfVEQYjJzC4r/qCKH0bl/xuVNLxf9FkWSTCcQFKGOndwuGITDkshD4r2Kk8gUddXPxoahBv33/2QH0CY5zbKYjhgN6I6WtwO+O1uJwtNeV1AGhYjurdd60qggNwx+W7623uK3nIXvJd3hzDO8u5oa53/tIL fake-test-key-REMOVE-ME"
]
}
}


@@ -1,15 +0,0 @@
{
"id": "etcd-aws",
"name": "etcd Node",
"profile": "etcd-aws",
"selector": {
"name": "etcd",
"platform": "aws"
},
"metadata": {
"etcd_discovery": "token from https://discovery.etcd.io/new?size=N",
"ssh_authorized_keys": [
"ssh-rsa pub-key-goes-here"
]
}
}


@@ -1,9 +0,0 @@
{
"id": "default",
"name": "default",
"profile": "etcd-proxy",
"metadata": {
"etcd_initial_cluster": "node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380",
"fleet_metadata": "role=etcd-proxy"
}
}


@@ -1,17 +0,0 @@
{
"id": "node1",
"name": "etcd Node 1",
"profile": "etcd",
"selector": {
"mac": "52:54:00:a1:9c:ae"
},
"metadata": {
"etcd_initial_cluster": "node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380",
"etcd_name": "node1",
"fleet_metadata": "role=etcd,name=node1",
"ipv4_address": "172.17.0.21",
"networkd_address": "172.17.0.21/16",
"networkd_dns": "172.17.0.3",
"networkd_gateway": "172.17.0.1"
}
}


@@ -1,17 +0,0 @@
{
"id": "node2",
"name": "etcd Node 2",
"profile": "etcd",
"selector": {
"mac": "52:54:00:b2:2f:86"
},
"metadata": {
"etcd_initial_cluster": "node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380",
"etcd_name": "node2",
"fleet_metadata": "role=etcd,name=node2",
"ipv4_address": "172.17.0.22",
"networkd_address": "172.17.0.22/16",
"networkd_dns": "172.17.0.3",
"networkd_gateway": "172.17.0.1"
}
}


@@ -1,17 +0,0 @@
{
"id": "node3",
"name": "etcd Node 3",
"profile": "etcd",
"selector": {
"mac": "52:54:00:c3:61:77"
},
"metadata": {
"etcd_initial_cluster": "node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380",
"etcd_name": "node3",
"fleet_metadata": "role=etcd,name=node3",
"ipv4_address": "172.17.0.23",
"networkd_address": "172.17.0.23/16",
"networkd_dns": "172.17.0.3",
"networkd_gateway": "172.17.0.1"
}
}


@@ -1,11 +0,0 @@
{
"id": "coreos-install",
"name": "CoreOS Install",
"profile": "install-reboot",
"metadata": {
"coreos_channel": "alpha",
"coreos_version": "1053.2.0",
"ignition_endpoint": "http://bootcfg.foo:8080/ignition",
"baseurl": "http://bootcfg.foo:8080/assets/coreos"
}
}


@@ -1,18 +0,0 @@
{
"id": "node1",
"name": "etcd Node 1",
"profile": "etcd",
"selector": {
"mac": "52:54:00:a1:9c:ae",
"os": "installed"
},
"metadata": {
"ipv4_address": "172.15.0.21",
"networkd_gateway": "172.15.0.1",
"networkd_dns": "172.15.0.3",
"networkd_address": "172.15.0.21/16",
"fleet_metadata": "role=etcd,name=node1",
"etcd_name": "node1",
"etcd_initial_cluster": "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
}
}


@@ -1,18 +0,0 @@
{
"id": "node2",
"name": "etcd Node 2",
"profile": "etcd",
"selector": {
"mac": "52:54:00:b2:2f:86",
"os": "installed"
},
"metadata": {
"ipv4_address": "172.15.0.22",
"networkd_gateway": "172.15.0.1",
"networkd_dns": "172.15.0.3",
"networkd_address": "172.15.0.22/16",
"fleet_metadata": "role=etcd,name=node2",
"etcd_name": "node2",
"etcd_initial_cluster": "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
}
}


@@ -1,18 +0,0 @@
{
"id": "node3",
"name": "etcd Node 3",
"profile": "etcd",
"selector": {
"mac": "52:54:00:c3:61:77",
"os": "installed"
},
"metadata": {
"ipv4_address": "172.15.0.23",
"networkd_gateway": "172.15.0.1",
"networkd_dns": "172.15.0.3",
"networkd_address": "172.15.0.23/16",
"fleet_metadata": "role=etcd,name=node3",
"etcd_name": "node3",
"etcd_initial_cluster": "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
}
}


@@ -1,12 +0,0 @@
{
"id": "etcd-proxies",
"name": "etcd Proxy",
"profile": "etcd-proxy",
"selector": {
"os": "installed"
},
"metadata": {
"fleet_metadata": "role=etcd-proxy",
"etcd_initial_cluster": "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
}
}


@@ -1,9 +0,0 @@
{
"id": "default",
"name": "default",
"profile": "etcd-proxy",
"metadata": {
"fleet_metadata": "role=etcd-proxy",
"etcd_initial_cluster": "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
}
}
