68 Commits

Author SHA1 Message Date
Dalton Hubble
2c04ea46ee CHANGES.md: Prepare for a v0.6.1 docs point release 2017-05-25 10:33:04 -07:00
Dalton Hubble
317307f6f6 glide.yaml: Update and vendor the crypto openpgp package 2017-05-25 10:32:42 -07:00
Dalton Hubble
3d30cff9ec travis.yml: Use Go 1.8.3 in tests and published images 2017-05-25 10:32:42 -07:00
Dalton Hubble
15a1128398 scripts: Move examples/etc/matchbox to scripts/tls
* Use the same TLS cert-gen location in source as in releases
2017-05-25 10:32:42 -07:00
Dalton Hubble
1b7f60b895 scripts: Move development-only scripts under scripts/dev 2017-05-25 10:32:42 -07:00
Dalton Hubble
26a901ecd6 examples/terraform: Add tfvars showing multi-controller case 2017-05-25 10:32:42 -07:00
enilfodne
19a402c187 examples: Bump Container Linux version to stable 1353.7.0 2017-05-25 10:32:42 -07:00
Dalton Hubble
02f7fb7f7c scripts: Remove unused static k8s generation scripts
* Remove static rktnetes cluster docs
* Bump devnet matchbox version
2017-05-25 10:32:42 -07:00
Dalton Hubble
3f70f9f2e5 Merge pull request #544 from coreos/remove-static-kubernetes
Remove static Kubernetes and rktnetes example clusters
2017-05-22 17:11:11 -07:00
Dalton Hubble
dabba64850 examples: Remove static Kubernetes and rktnetes example clusters
* Static Kubernetes / rktnetes examples are no longer going to be
maintained by this repo or upgraded to Kubernetes v1.6. This is not
considered a deprecation because the reference clusters are examples.
* Remove static Kubernetes cluster examples so users don't choose it
* Self-hosted Kubernetes (bootkube) is now the standard recommended
Kubernetes cluster configuration
2017-05-22 16:13:26 -07:00
Dalton Hubble
7a2764b17b Merge pull request #542 from coreos/disable-terraform-tests
tests: Temporarily disable bootkube (terraform-based) cluster testing
2017-05-22 16:11:29 -07:00
Dalton Hubble
9de41e29ab scripts/test: Fix fmt test for local tests
* examples/terraform modules may contain Go files which
should be ignored
2017-05-22 15:55:19 -07:00
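The fix above amounts to excluding vendored module directories from the list of Go files that the fmt test checks. A minimal sketch of such a filter, assuming hypothetical directory names and a function name not taken from the repo's actual test scripts:

```python
from pathlib import Path

def go_files_to_check(root=".", exclude=("examples/terraform", "vendor")):
    """Yield Go source files that gofmt should check, skipping
    directories (like examples/terraform modules) that may contain
    third-party Go files the project does not own."""
    root_path = Path(root)
    for path in root_path.rglob("*.go"):
        rel = path.relative_to(root_path)
        # Skip any file whose relative path starts with an excluded prefix
        if not any(rel.as_posix().startswith(prefix) for prefix in exclude):
            yield rel
```

The actual repo scripts do this in shell; this is only an illustration of the filtering logic.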
Dalton Hubble
0592503652 tests/smoke: Get nodes/pods should not fail bootkube tests
* Listing pods or nodes as the final step of cluster creation should
not fail the entire build; it's mainly for pretty output
* There is no official definition of when a Kubernetes cluster is
"done" bootstrapping, they can momentarily fail to response in the
first minute or so as components stabalize
2017-05-22 15:12:29 -07:00
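The idea in this commit, treating the final `kubectl get nodes`/`get pods` as informational rather than a pass/fail gate, can be sketched as a best-effort runner. The repo's smoke tests are not written in Python; this is an illustrative sketch with hypothetical names:

```python
import subprocess
import time

def best_effort(cmd, attempts=3, delay=2.0):
    """Run an informational command (e.g. kubectl get nodes) without
    ever failing the build: retry a few times, then give up quietly.
    Components may momentarily fail to respond while they stabilize."""
    for attempt in range(attempts):
        try:
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
            if result.returncode == 0:
                return result.stdout
        except (subprocess.TimeoutExpired, OSError):
            pass  # transient failure; retry
        if attempt < attempts - 1:
            time.sleep(delay)
    return None  # pretty output only; the caller ignores failure
```

The caller logs the output when available and proceeds regardless, which matches the commit's rationale that listing pods or nodes should never fail the build.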
Dalton Hubble
40926b6d0f tests: Temporarily disable bootkube (terraform-based) tests 2017-05-22 14:51:25 -07:00
Dalton Hubble
859ea5888b Merge pull request #538 from coreos/kubernetes-upgrade
Update Kubernetes from v1.6.2 to v1.6.4
2017-05-19 20:44:51 -07:00
Dalton Hubble
1736af5024 tests/smoke: Be sure terraform destroy runs 2017-05-19 18:08:50 -07:00
Dalton Hubble
c476cf8928 examples: Update Kubernetes clusters to v1.6.4
* Update bootkube example cluster to v1.6.4
* Update bootkube (terraform-based) cluster to v1.6.4
* Update bootkube Terraform module to v1.6.4
* Uses bootkube v0.4.4
2017-05-19 16:52:37 -07:00
Dalton Hubble
a47087ec6a Merge pull request #536 from coreos/calc-ips
Calculate Kubernetes service IPs based on the service CIDR
2017-05-19 16:46:48 -07:00
Dalton Hubble
0961e50f64 examples: Remove Kubernetes service IP inputs
* Calculate the required service IP values from the service CIDR
* These inputs were never truly customizable anyway since bootkube
start assumed the 1st, 10th, and 15th offsets for named services
2017-05-19 15:05:42 -07:00
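The fixed offsets noted above (the 1st, 10th, and 15th addresses within the service CIDR) can be derived mechanically. A sketch using Python's `ipaddress` module; the mapping of particular services to particular offsets and the key names are illustrative assumptions, only the 1/10/15 offsets come from the commit message:

```python
import ipaddress

def service_ips(service_cidr: str) -> dict:
    """Derive well-known Kubernetes service IPs from the service CIDR.

    bootkube start assumed fixed offsets for named services: the 1st,
    10th, and 15th addresses within the CIDR. The service-to-offset
    mapping below is an assumption for illustration.
    """
    net = ipaddress.ip_network(service_cidr)
    base = net.network_address
    return {
        "kube_apiserver_service_ip": str(base + 1),
        "kube_dns_service_ip": str(base + 10),
        "kube_etcd_service_ip": str(base + 15),
    }

print(service_ips("10.3.0.0/24"))
```

Because the values are pure functions of the CIDR, exposing them as separate inputs added nothing, which is why the commit removes them.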
Dalton Hubble
7a017c2d7d Merge pull request #537 from coreos/etcd3-terraform-state
tests/smoke: Ensure etcd3-terraform tests cleans state
2017-05-19 13:21:31 -07:00
Dalton Hubble
41aaad3d6f tests/smoke: Ensure etcd3-terraform tests cleans state 2017-05-19 12:41:37 -07:00
Dalton Hubble
ddf1f88cb9 Merge pull request #535 from coreos/bootkube-tests
tests: Add cluster tests for bootkube-install (terraform-based)
2017-05-19 11:39:55 -07:00
Dalton Hubble
af8abc7dc2 tests: Add cluster tests for bootkube-install (terraform-based)
* Terraform-based cluster examples are doing disk installs so they
take a bit longer than their counterparts
2017-05-19 10:14:22 -07:00
Dalton Hubble
0d2173e446 Merge pull request #534 from coreos/bootkube-v0.4.3
examples: Update Kubernetes to use bootkube v0.4.3
2017-05-18 16:10:00 -07:00
Dalton Hubble
e9bf13963c examples: Update Kubernetes to use bootkube v0.4.3
* Update terraform-based bootkube-install cluster example
* Update manual bootkube cluster example
2017-05-18 15:37:51 -07:00
Dalton Hubble
dbba1316b2 Merge branch 'support-oem' 2017-05-18 12:04:38 -07:00
enilfodne
34d0f5003a examples/terraform: Add support for OEM images 2017-05-18 04:43:24 +03:00
Dalton Hubble
79e5240d3f Merge pull request #531 from coreos/examples-and-links
Organize README examples listing and links
2017-05-17 16:46:10 -07:00
Dalton Hubble
46dd95da0c README: Organize examples listing and links 2017-05-17 16:32:00 -07:00
Dalton Hubble
f6522a561b Merge pull request #528 from coreos/controller-taints
examples: Add NoSchedule taint to bootkube controllers
2017-05-15 16:49:08 -07:00
Dalton Hubble
e4fdcb204e examples: Add NoSchedule taint to bootkube controllers 2017-05-15 13:50:19 -07:00
Dalton Hubble
81e00d7e79 Merge pull request #522 from coreos/bootkube-automate
examples/terraform: Automate terraform-based bootkube-install
2017-05-15 13:43:54 -07:00
Dalton Hubble
06a9a28d7c examples/terraform: Add optional variables commented out 2017-05-15 13:11:48 -07:00
Dalton Hubble
756c28f2fc examples/terraform: Fix terraform fmt 2017-05-14 14:14:47 -07:00
Dalton Hubble
cc240286f3 examples/terraform: Automate terraform-based bootkube-install
* Use the dghubble/bootkube-terraform terraform module to generate
the exact same assets that `bootkube render` would
* Use terraform to automate the kubeconfig copy and bootkube start
* Removes the requirement to download a bootkube binary, render assets,
and manually copy assets to nodes
2017-05-14 14:14:10 -07:00
Dalton Hubble
75e428aece Merge pull request #520 from coreos/etcd3-terraform
Jenkinsfile,tests: Add etcd3-terraform cluster to pipeline
2017-05-12 15:46:14 -07:00
Dalton Hubble
51c4371e39 Jenkinsfile,tests: Add etcd3-terraform cluster to pipeline
* Test the Terraform-based etcd3 cluster in parallel
2017-05-12 14:54:42 -07:00
Dalton Hubble
ef85730d69 Merge pull request #517 from dghubble/self-hosted-etcd
examples/terraform: Add experimental self-hosted etcd option
2017-05-10 09:55:33 -07:00
Dalton Hubble
3752ee78d5 Merge pull request #519 from brianredbeard/source-url-fix
contrib/rpm: Fixing the source URL format
2017-05-09 20:35:21 -04:00
Brian 'Redbeard' Harrington
ea9042e86e contrib/rpm: Fixing the source URL format
Fixing the source URL format to conform to more normative rpmbuild
standards and to allow for proper use of spectool/rpmspectool.  This
change now produces a proper archive with the name and version number
used.
2017-05-09 17:26:42 -07:00
Dalton Hubble
d4e33efb38 Merge pull request #516 from coreos/local-disk-size
scripts/libvirt: Allow QEMU/KVM disk size to be customized
2017-05-09 17:37:19 -04:00
Dalton Hubble
459ce2d8bc examples/terraform: Add experimental self-hosted etcd option
* Add an option to try experimental self-hosted etcd which uses
the etcd-operator to deploy an etcd cluster as pods atop Kubernetes
and disables the on-host etcd cluster
* When enabled, configure locksmithd to coordinate reboots through
self-hosted etcd
2017-05-09 14:00:51 -07:00
Dalton Hubble
31ed8dba2f scripts/libvirt: Allow QEMU/KVM disk size to be customized 2017-05-08 16:43:38 -07:00
Dalton Hubble
2d69b2d734 Merge pull request #514 from coreos/container-install
Documentation: Add missing mkdir for rkt/docker installation
2017-05-08 18:13:01 -04:00
Dalton Hubble
2aea18e048 Documentation: Add missing mkdir for rkt/docker installation 2017-05-08 13:47:00 -07:00
Dalton Hubble
c2e5196d1a Merge pull request #510 from dghubble/squid-proxy
Add squid proxy docs as contrib drafts
2017-05-02 17:47:26 -07:00
Dalton Hubble
47d3dbacb1 contrib/squid: Move Squid docs to contrib as a draft 2017-05-02 14:11:02 -07:00
Daneyon Hansen
5e2adb1eda Adds documentation for using a Squid proxy with Matchbox. 2017-05-02 13:57:30 -07:00
Dalton Hubble
7ee68aa1a4 Merge pull request #509 from coreos/improve-examples
Improve terraform examples, tutorials, and re-usable modules
2017-05-02 13:12:57 -07:00
Dalton Hubble
e1cabcf8e8 examples/terraform: Add etcd3 tutorial and Terraform modules doc 2017-05-02 12:56:08 -07:00
Dalton Hubble
6500ed51f3 examples/terraform: Improve configurability of cluster examples
* Add matchbox_http_endpoint and matchbox_rpc_endpoint as variables
* Remove dghubble ssh public key from default
* Add a terraform.tfvars.example and gitignore terraform.tfvars
2017-05-01 21:25:12 -07:00
Dalton Hubble
4fb3ea2c7e examples/terraform: Rename coreos-install to container-linux-install
* Add container-linux-install profile to install Container Linux
* Add cached-container-linux-install profile to install Container Linux
from cached matchbox assets
2017-05-01 17:54:18 -07:00
Dalton Hubble
b1beebe855 Merge pull request #506 from coreos/bootkube-v0.4.2
examples: Update from bootkube v0.4.1 to v0.4.2
2017-05-01 16:48:39 -07:00
Dalton Hubble
6743944390 examples: Update from bootkube v0.4.1 to v0.4.2
* Contains a few fixes to bootkube logging and checkpointing
2017-05-01 15:31:29 -07:00
Dalton Hubble
4451425db8 Merge pull request #505 from danehans/issue_502
examples: updates terraform readme to include get
2017-04-28 11:13:36 -07:00
Daneyon Hansen
23959a4dd2 examples: updates terraform readme to include get
Previously, the terraform readme was incomplete, covering only the
terraform plan and apply commands. The readme was also updated to
include instructions for updating the profiles module source.

Fixes #502
2017-04-28 11:28:07 -06:00
Dalton Hubble
0825fd2492 Merge pull request #504 from coreos/bootkube-bump
examples: Update self-hosted Kubernetes to v1.6.2
2017-04-27 17:59:01 -07:00
Dalton Hubble
bb08cd5087 examples: Update self-hosted Kubernetes to v1.6.2 2017-04-27 17:47:59 -07:00
Dalton Hubble
a117af6500 Merge pull request #503 from coreos/init-flannel
examples/ignition: Remove --fail from curl PUT/POST's
2017-04-27 15:39:32 -07:00
Dalton Hubble
4304ee2aa5 examples/ignition: Remove --fail from curl PUT/POST's
* Reverts parts of #470
2017-04-27 13:38:30 -07:00
Dalton Hubble
6d6879ca4a Merge pull request #501 from dghubble/copr-fix
contrib/rpm: Bump to re-build RPM release now Copr is fixed
2017-04-25 17:39:39 -07:00
Dalton Hubble
cf301eed45 Merge pull request #500 from dghubble/fix-signing-docs
Documentation/dev/release: Update commands used for signing
2017-04-25 17:37:16 -07:00
Dalton Hubble
7bbd1f651f contrib/rpm: Bump to re-build RPM release now Copr is fixed 2017-04-25 17:34:49 -07:00
Dalton Hubble
6455528f3c Documentation/dev/release: Update commands used for signing 2017-04-25 16:46:27 -07:00
Dalton Hubble
a6fde5a0c6 Merge pull request #496 from coreos/add-rpm-spec
contrib/rpm: Add matchbox RPM spec file
2017-04-25 11:28:16 -07:00
Dalton Hubble
32baac329d Merge pull request #497 from coreos/caps-retain
Documentation: Add back original rkt run dnsmasq --caps-retain
2017-04-25 11:27:58 -07:00
Dalton Hubble
73d40db168 Documentation: Add back original dnsmasq Linux --caps-retain 2017-04-24 17:08:55 -07:00
Dalton Hubble
96259aa5da contrib/rpm: Add matchbox RPM spec file 2017-04-24 16:43:29 -07:00
125 changed files with 1566 additions and 2214 deletions

.gitignore

@@ -32,5 +32,4 @@ bin/
_output/
tools/
contrib/registry/data
terraform.tfvars
contrib/rpm/*.tar.gz


@@ -4,7 +4,7 @@ services:
 - docker
 go:
 - 1.7.4
-- 1.8
+- 1.8.3
 - tip
 matrix:
 allow_failures:
@@ -15,10 +15,10 @@ script:
 - make test
 deploy:
 provider: script
-script: scripts/travis-docker-push
+script: scripts/dev/travis-docker-push
 skip_cleanup: true
 on:
 branch: master
-go: '1.8'
+go: '1.8.3'
 notifications:
 email: change


@@ -4,6 +4,20 @@ Notable changes between releases.
## Latest
+* Remove pixiecore support (deprecated in v0.5.0)
+## v0.6.1 (2017-05-25)
+* Improve the installation documentation
+* Move examples/etc/matchbox/cert-gen to scripts/tls
+* Build Matchbox with Go 1.8.3 for images and binaries
+### Examples
+* Upgrade self-hosted Kubernetes cluster examples to v1.6.4
+* Add NoSchedule taint to self-hosted Kubernetes controllers
+* Remove static Kubernetes and rktnetes cluster examples
## v0.6.0 (2017-04-25)
* New [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) plugin for Terraform users!


@@ -39,8 +39,8 @@ GET http://matchbox.foo/ipxe?label=value
```
#!ipxe
-kernel /assets/coreos/1298.7.0/coreos_production_pxe.vmlinuz coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp} coreos.first_boot=1 coreos.autologin
-initrd /assets/coreos/1298.7.0/coreos_production_pxe_image.cpio.gz
+kernel /assets/coreos/1353.7.0/coreos_production_pxe.vmlinuz coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp} coreos.first_boot=1 coreos.autologin
+initrd /assets/coreos/1353.7.0/coreos_production_pxe_image.cpio.gz
boot
```
@@ -67,9 +67,9 @@ default=0
timeout=1
menuentry "CoreOS" {
echo "Loading kernel"
linuxefi "(http;matchbox.foo:8080)/assets/coreos/1298.7.0/coreos_production_pxe.vmlinuz" "coreos.autologin" "coreos.config.url=http://matchbox.foo:8080/ignition" "coreos.first_boot"
linuxefi "(http;matchbox.foo:8080)/assets/coreos/1353.7.0/coreos_production_pxe.vmlinuz" "coreos.autologin" "coreos.config.url=http://matchbox.foo:8080/ignition" "coreos.first_boot"
echo "Loading initrd"
initrdefi "(http;matchbox.foo:8080)/assets/coreos/1298.7.0/coreos_production_pxe_image.cpio.gz"
initrdefi "(http;matchbox.foo:8080)/assets/coreos/1353.7.0/coreos_production_pxe_image.cpio.gz"
}
```
@@ -231,7 +231,7 @@ If you need to serve static assets (e.g. kernel, initrd), `matchbox` can serve a
```
matchbox.foo/assets/
└── coreos
-└── 1298.7.0
+└── 1353.7.0
├── coreos_production_pxe.vmlinuz
└── coreos_production_pxe_image.cpio.gz
└── 1153.0.0


@@ -1,6 +1,6 @@
# Self-hosted Kubernetes
-The self-hosted Kubernetes example provisions a 3 node "self-hosted" Kubernetes v1.6.1 cluster. On-host kubelets wait for an apiserver to become reachable, then yield to kubelet pods scheduled via daemonset. [bootkube](https://github.com/kubernetes-incubator/bootkube) is run on any controller to bootstrap a temporary apiserver which schedules control plane components as pods before exiting. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
+The self-hosted Kubernetes example provisions a 3 node "self-hosted" Kubernetes v1.6.4 cluster. [bootkube](https://github.com/kubernetes-incubator/bootkube) is run once on a controller node to bootstrap Kubernetes control plane components as pods before exiting. An etcd3 cluster across controllers is used to back Kubernetes and coordinate Container Linux auto-updates (enabled for disk installs).
## Requirements
@@ -11,11 +11,11 @@ Ensure that you've gone through the [matchbox with rkt](getting-started-rkt.md)
* Create the example libvirt client VMs
* `/etc/hosts` entries for `node[1-3].example.com` (or pass custom names to `k8s-certgen`)
-Install [bootkube](https://github.com/kubernetes-incubator/bootkube/releases) v0.4.0 and add it somewhere on your PATH.
+Install [bootkube](https://github.com/kubernetes-incubator/bootkube/releases) v0.4.4 and add it somewhere on your PATH.
```sh
$ bootkube version
-Version: v0.4.0
+Version: v0.4.4
```
## Examples
@@ -30,7 +30,7 @@ The [examples](../examples) statically assign IP addresses to libvirt client VMs
Download the CoreOS image assets referenced in the target [profile](../examples/profiles).
```sh
-$ ./scripts/get-coreos stable 1298.7.0 ./examples/assets
+$ ./scripts/get-coreos stable 1353.7.0 ./examples/assets
```
Add your SSH public key to each machine group definition [as shown](../examples/README.md#ssh-keys).
@@ -47,7 +47,7 @@ Add your SSH public key to each machine group definition [as shown](../examples/
Use the `bootkube` tool to render Kubernetes manifests and credentials into an `--asset-dir`. Later, `bootkube` will schedule these manifests during bootstrapping and the credentials will be used to access your cluster.
```sh
-$ bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com
+$ bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --etcd-servers=http://127.0.0.1:2379
```
## Containers


@@ -43,13 +43,15 @@ $ cd matchbox-v0.6.0-linux-amd64
### RPM-based distro
-On an RPM-based provisioner, install the `matchbox` RPM from the Copr [repository](https://copr.fedorainfracloud.org/coprs/g/CoreOS/matchbox/) using `dnf` or `yum`.
+On an RPM-based provisioner (Fedora 24+), install the `matchbox` RPM from the Copr [repository](https://copr.fedorainfracloud.org/coprs/g/CoreOS/matchbox/) using `dnf`.
```sh
dnf copr enable @CoreOS/matchbox
dnf install matchbox
```
+RPMs are not currently available for CentOS and RHEL (due to Go version). CentOS and RHEL users should follow the Generic Linux section below.
### CoreOS
On a CoreOS provisioner, rkt run `matchbox` image with the provided systemd unit.
@@ -127,31 +129,39 @@ $ sudo firewall-cmd --zone=MYZONE --add-port=8080/tcp --permanent
$ sudo firewall-cmd --zone=MYZONE --add-port=8081/tcp --permanent
```
-## Generate TLS credentials
+## Generate TLS Certificates
*Skip this unless you need to enable the gRPC API*
-The Matchbox gRPC API allows clients (terraform-provider-matchbox) to create and update Matchbox resources. TLS credentials are needed for client authentication and to establish a secure communication channel. Client machines (those PXE booting) read from the HTTP endpoints and do not require this setup.
+The `matchbox` gRPC API allows client apps (terraform-provider-matchbox, Tectonic Installer, etc.) to update how machines are provisioned. TLS credentials are needed for client authentication and to establish a secure communication channel. Client machines (those PXE booting) read from the HTTP endpoints and do not require this setup.
The `cert-gen` helper script generates a self-signed CA, server certificate, and client certificate. **Prefer your organization's PKI, if possible**
If your organization manages public key infrastructure and a certificate authority, create a server certificate and key for the `matchbox` service and a client certificate and key for each client tool.
Otherwise, generate a self-signed `ca.crt`, a server certificate (`server.crt`, `server.key`), and client credentials (`client.crt`, `client.key`) with the `examples/etc/matchbox/cert-gen` script. Export the DNS name or IP (discouraged) of the provisioner host.
Navigate to the `scripts/tls` directory.
```sh
$ cd scripts/tls
```
Export `SAN` to set the Subject Alt Names which should be used in certificates. Provide the fully qualified domain name or IP (discouraged) where Matchbox will be installed.
```sh
# DNS or IP Subject Alt Names where matchbox runs
$ export SAN=DNS.1:matchbox.example.com,IP.1:172.18.0.2
```
Generate a `ca.crt`, `server.crt`, `server.key`, `client.crt`, and `client.key`.
```sh
$ cd examples/etc/matchbox
# DNS or IP Subject Alt Names where matchbox can be reached
$ export SAN=DNS.1:matchbox.example.com,IP.1:192.168.1.42
$ ./cert-gen
```
-Place the TLS credentials in the default location:
+Move TLS credentials to the matchbox server's default location.
```sh
$ sudo mkdir -p /etc/matchbox
-$ sudo cp ca.crt server.crt server.key /etc/matchbox/
+$ sudo cp ca.crt server.crt server.key /etc/matchbox
```
-Save `client.crt`, `client.key`, and `ca.crt` to use with a client tool later.
+Save `client.crt`, `client.key`, and `ca.crt` for later use (e.g. `~/.matchbox`).
## Start matchbox
@@ -203,7 +213,7 @@ Certificate chain
Download a recent CoreOS [release](https://coreos.com/releases/) with signatures.
```sh
-$ ./scripts/get-coreos stable 1298.7.0 . # note the "." 3rd argument
+$ ./scripts/get-coreos stable 1353.7.0 . # note the "." 3rd argument
```
Move the images to `/var/lib/matchbox/assets`,
@@ -215,7 +225,7 @@ $ sudo cp -r coreos /var/lib/matchbox/assets
```
/var/lib/matchbox/assets/
├── coreos
-│   └── 1298.7.0
+│   └── 1353.7.0
│   ├── CoreOS_Image_Signing_Key.asc
│   ├── coreos_production_image.bin.bz2
│   ├── coreos_production_image.bin.bz2.sig
@@ -228,11 +238,11 @@ $ sudo cp -r coreos /var/lib/matchbox/assets
and verify the images are accessible.
```sh
-$ curl http://matchbox.example.com:8080/assets/coreos/1298.7.0/
+$ curl http://matchbox.example.com:8080/assets/coreos/1353.7.0/
<pre>...
```
-For large production environments, use a cache proxy or mirror suitable for your environment to serve CoreOS images.
+For large production environments, use a cache proxy or mirror suitable for your environment to serve CoreOS images. See [contrib/squid](../contrib/squid/README.md) for details.
## Network
@@ -251,6 +261,7 @@ Run the container image with rkt.
latest or most recent tagged `matchbox` [release](https://github.com/coreos/matchbox/releases) ACI. Trust the [CoreOS App Signing Key](https://coreos.com/security/app-signing-key/) for image signature verification.
```sh
+$ mkdir -p /var/lib/matchbox/assets
$ sudo rkt run --net=host --mount volume=data,target=/var/lib/matchbox --volume data,kind=host,source=/var/lib/matchbox quay.io/coreos/matchbox:latest --mount volume=config,target=/etc/matchbox --volume config,kind=host,source=/etc/matchbox,readOnly=true -- -address=0.0.0.0:8080 -rpc-address=0.0.0.0:8081 -log-level=debug
```
@@ -261,7 +272,8 @@ Create machine profiles, groups, or Ignition configs by adding files to `/var/li
Run the container image with docker.
```sh
-sudo docker run --net=host --rm -v /var/lib/matchbox:/var/lib/matchbox:Z -v /etc/matchbox:/etc/matchbox:Z,ro quay.io/coreos/matchbox:latest -address=0.0.0.0:8080 -rpc-address=0.0.0.0:8081 -log-level=debug
+$ mkdir -p /var/lib/matchbox/assets
+$ sudo docker run --net=host --rm -v /var/lib/matchbox:/var/lib/matchbox:Z -v /etc/matchbox:/etc/matchbox:Z,ro quay.io/coreos/matchbox:latest -address=0.0.0.0:8080 -rpc-address=0.0.0.0:8081 -log-level=debug
```
Create machine profiles, groups, or Ignition configs by adding files to `/var/lib/matchbox`.


@@ -53,20 +53,20 @@ Verify the reported version.
Sign the release tarballs and ACI with a [CoreOS App Signing Key](https://coreos.com/security/app-signing-key/) subkey.
```sh
-$ cd _output
-$ gpg2 -a --default-key FC8A365E --detach-sign matchbox-$VERSION-linux-amd64.tar.gz
-$ gpg2 -a --default-key FC8A365E --detach-sign matchbox-$VERSION-darwin-amd64.tar.gz
-$ gpg2 -a --default-key FC8A365E --detach-sign matchbox-$VERSION-linux-arm.tar.gz
-$ gpg2 -a --default-key FC8A365E --detach-sign matchbox-$VERSION-linux-arm64.tar.gz
+cd _output
+gpg2 --armor --local-user FC8A365E! --detach-sign matchbox-$VERSION-linux-amd64.tar.gz
+gpg2 --armor --local-user FC8A365E! --detach-sign matchbox-$VERSION-darwin-amd64.tar.gz
+gpg2 --armor --local-user FC8A365E! --detach-sign matchbox-$VERSION-linux-arm.tar.gz
+gpg2 --armor --local-user FC8A365E! --detach-sign matchbox-$VERSION-linux-arm64.tar.gz
```
Verify the signatures.
```sh
-$ gpg2 --verify matchbox-$VERSION-linux-amd64.tar.gz.asc matchbox-$VERSION-linux-amd64.tar.gz
-$ gpg2 --verify matchbox-$VERSION-darwin-amd64.tar.gz.asc matchbox-$VERSION-darwin-amd64.tar.gz
-$ gpg2 --verify matchbox-$VERSION-linux-arm.tar.gz.asc matchbox-$VERSION-linux-arm.tar.gz
-$ gpg2 --verify matchbox-$VERSION-linux-arm64.tar.gz.asc matchbox-$VERSION-linux-arm64.tar.gz
+gpg2 --verify matchbox-$VERSION-linux-amd64.tar.gz.asc matchbox-$VERSION-linux-amd64.tar.gz
+gpg2 --verify matchbox-$VERSION-darwin-amd64.tar.gz.asc matchbox-$VERSION-darwin-amd64.tar.gz
+gpg2 --verify matchbox-$VERSION-linux-arm.tar.gz.asc matchbox-$VERSION-linux-arm.tar.gz
+gpg2 --verify matchbox-$VERSION-linux-arm64.tar.gz.asc matchbox-$VERSION-linux-arm64.tar.gz
```
## Publish


@@ -29,7 +29,7 @@ $ cd matchbox
Download CoreOS image assets referenced by the `etcd-docker` [example](../examples) to `examples/assets`.
```sh
-$ ./scripts/get-coreos stable 1298.7.0 ./examples/assets
+$ ./scripts/get-coreos stable 1353.7.0 ./examples/assets
```
For development convenience, add `/etc/hosts` entries for nodes so they may be referenced by name as you would in production.
@@ -117,4 +117,4 @@ $ sudo ./scripts/libvirt destroy
## Going further
-Learn more about [matchbox](matchbox.md) or explore the other [example](../examples) clusters. Try the [k8s example](kubernetes.md) to produce a TLS-authenticated Kubernetes cluster you can access locally with `kubectl`.
+Learn more about [matchbox](matchbox.md) or explore the other [example](../examples) clusters. Try the [k8s example](bootkube.md) to produce a TLS-authenticated Kubernetes cluster you can access locally with `kubectl`.


@@ -30,7 +30,7 @@ $ cd matchbox
Download CoreOS image assets referenced by the `etcd` [example](../examples) to `examples/assets`.
```sh
-$ ./scripts/get-coreos stable 1298.7.0 ./examples/assets
+$ ./scripts/get-coreos stable 1353.7.0 ./examples/assets
```
## Network
@@ -114,7 +114,7 @@ sudo rkt run --net=metal0:IP=172.18.0.3 \
--mount volume=config,target=/etc/dnsmasq.conf \
--volume config,kind=host,source=$PWD/contrib/dnsmasq/metal0.conf \
quay.io/coreos/dnsmasq:v0.4.0 \
---caps-retain=CAP_NET_ADMIN,CAP_NET_BIND_SERVICE
+--caps-retain=CAP_NET_ADMIN,CAP_NET_BIND_SERVICE,CAP_SETGID,CAP_SETUID,CAP_NET_RAW
```
If you get an error about the IP assignment, stop old pods and run garbage collection.
@@ -180,4 +180,4 @@ Press ^] three times to stop any rkt pod.
## Going further
-Learn more about [matchbox](matchbox.md) or explore the other [example](../examples) clusters. Try the [k8s example](kubernetes.md) to produce a TLS-authenticated Kubernetes cluster you can access locally with `kubectl`.
+Learn more about [matchbox](matchbox.md) or explore the other [example](../examples) clusters. Try the [k8s example](bootkube.md) to produce a TLS-authenticated Kubernetes cluster you can access locally with `kubectl`.


@@ -34,7 +34,7 @@ Install [Terraform][terraform-dl] v0.9+ on your system.
```sh
$ terraform version
-Terraform v0.9.2
+Terraform v0.9.4
```
Add the `terraform-provider-matchbox` plugin binary on your system.


@@ -26,7 +26,7 @@ Run the `quay.io/coreos/dnsmasq` container image with rkt or docker.
```sh
sudo rkt run --net=metal0:IP=172.18.0.3 quay.io/coreos/dnsmasq \
---caps-retain=CAP_NET_ADMIN,CAP_NET_BIND_SERVICE \
+--caps-retain=CAP_NET_ADMIN,CAP_NET_BIND_SERVICE,CAP_SETGID,CAP_SETUID,CAP_NET_RAW \
-- -d -q \
--dhcp-range=172.18.0.50,172.18.0.99 \
--enable-tftp \


@@ -1,88 +0,0 @@
# Kubernetes
The Kubernetes example provisions a 3 node Kubernetes v1.5.5 cluster with one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
## Requirements
Ensure that you've gone through the [matchbox with rkt](getting-started-rkt.md) or [matchbox with docker](getting-started-docker.md) guide and understand the basics. In particular, you should be able to:
* Use rkt or Docker to start `matchbox`
* Create a network boot environment with `coreos/dnsmasq`
* Create the example libvirt client VMs
* `/etc/hosts` entries for `node[1-3].example.com` (or pass custom names to `k8s-certgen`)
## Examples
The [examples](../examples) statically assign IP addresses to libvirt client VMs created by `scripts/libvirt`. VMs are setup on the `metal0` CNI bridge for rkt or the `docker0` bridge for Docker. The examples can be used for physical machines if you update the MAC addresses. See [network setup](network-setup.md) and [deployment](deployment.md).
* [k8s](../examples/groups/k8s) - iPXE boot a Kubernetes cluster
* [k8s-install](../examples/groups/k8s-install) - Install a Kubernetes cluster to disk
* [Lab examples](https://github.com/dghubble/metal) - Lab hardware examples
### Assets
Download the CoreOS image assets referenced in the target [profile](../examples/profiles).
```sh
$ ./scripts/get-coreos stable 1298.7.0 ./examples/assets
```
Optionally, add your SSH public key to each machine group definition [as shown](../examples/README.md#ssh-keys).
Generate a root CA and Kubernetes TLS assets for components (`admin`, `apiserver`, `worker`) with SANs for `node1.example.com`, etc.
```sh
$ rm -rf examples/assets/tls
$ ./scripts/tls/k8s-certgen
```
**Note**: TLS assets are served to any machines which request them, which requires a trusted network. Alternately, provisioning may be tweaked to require TLS assets be securely copied to each host.
## Containers
Use rkt or docker to start `matchbox` and mount the desired example resources. Create a network boot environment and power-on your machines. Revisit [matchbox with rkt](getting-started-rkt.md) or [matchbox with Docker](getting-started-docker.md) for help.
Client machines should boot and provision themselves. Local client VMs should network boot CoreOS in about 1 minute and the Kubernetes API should be available after 3-4 minutes (each node downloads a ~160MB Hyperkube). If you chose `k8s-install`, notice that machines install CoreOS and then reboot (in libvirt, you must hit "power" again). Time to network boot and provision Kubernetes clusters on physical hardware depends on a number of factors (POST duration, boot device iteration, network speed, etc.).
## Verify
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your laptop. Use the generated kubeconfig to access the Kubernetes cluster created on rkt `metal0` or `docker0`.
```sh
$ KUBECONFIG=examples/assets/tls/kubeconfig
$ kubectl get nodes
NAME STATUS AGE
node1.example.com Ready 3m
node2.example.com Ready 3m
node3.example.com Ready 3m
```
Get all pods.
```sh
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-v1.2.0-4088228293-5xbgg 2/2 Running 0 41m
kube-system kube-apiserver-node1.example.com 1/1 Running 0 40m
kube-system kube-controller-manager-node1.example.com 1/1 Running 0 40m
kube-system kube-dns-782804071-326dd 4/4 Running 0 41m
kube-system kube-dns-autoscaler-2715466192-8bm78 1/1 Running 0 41m
kube-system kube-proxy-node1.example.com 1/1 Running 0 41m
kube-system kube-proxy-node2.example.com 1/1 Running 0 41m
kube-system kube-proxy-node3.example.com 1/1 Running 0 40m
kube-system kube-scheduler-node1.example.com 1/1 Running 0 40m
kube-system kubernetes-dashboard-3543765157-2nqgh 1/1 Running 0 41m
```
## Kubernetes Dashboard
Access the Kubernetes Dashboard with `kubeconfig` credentials by port forwarding to the dashboard pod.
```sh
$ kubectl port-forward kubernetes-dashboard-SOME-ID 9090 -n=kube-system
Forwarding from 127.0.0.1:9090 -> 9090
```
Then visit [http://127.0.0.1:9090](http://127.0.0.1:9090/).
<img src='img/kubernetes-dashboard.png' class="img-center" alt="Kubernetes Dashboard"/>


@@ -64,8 +64,8 @@ Profiles reference an Ignition config, Cloud-Config, and/or generic config by na
"ignition_id": "etcd.yaml",
"generic_id": "some-service.cfg",
"boot": {
"kernel": "/assets/coreos/1298.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1298.7.0/coreos_production_pxe_image.cpio.gz"],
"kernel": "/assets/coreos/1353.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1353.7.0/coreos_production_pxe_image.cpio.gz"],
"args": [
"coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}",
"coreos.first_boot=yes",


@@ -154,7 +154,7 @@ Run DHCP, TFTP, and DNS on the host's network:
```sh
sudo rkt run --net=host quay.io/coreos/dnsmasq \
---caps-retain=CAP_NET_ADMIN,CAP_NET_BIND_SERVICE \
+--caps-retain=CAP_NET_ADMIN,CAP_NET_BIND_SERVICE,CAP_SETGID,CAP_SETUID,CAP_NET_RAW \
-- -d -q \
--dhcp-range=192.168.1.3,192.168.1.254 \
--enable-tftp \
@@ -183,7 +183,7 @@ Run a proxy-DHCP and TFTP service on the host's network:
```sh
sudo rkt run --net=host quay.io/coreos/dnsmasq \
- --caps-retain=CAP_NET_ADMIN,CAP_NET_BIND_SERVICE \
+ --caps-retain=CAP_NET_ADMIN,CAP_NET_BIND_SERVICE,CAP_SETGID,CAP_SETUID,CAP_NET_RAW \
-- -d -q \
--dhcp-range=192.168.1.1,proxy,255.255.255.0 \
--enable-tftp --tftp-root=/var/lib/tftpboot \


@@ -1,87 +0,0 @@
# Kubernetes (with rkt)
The `rktnetes` example provisions a 3 node Kubernetes v1.5.5 cluster with [rkt](https://github.com/coreos/rkt) as the container runtime. The cluster has one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
## Requirements
Ensure that you've gone through the [matchbox with rkt](getting-started-rkt.md) or [matchbox with docker](getting-started-docker.md) guide and understand the basics. In particular, you should be able to:
* Use rkt or Docker to start `matchbox`
* Create a network boot environment with `coreos/dnsmasq`
* Create the example libvirt client VMs
* Create `/etc/hosts` entries for `node[1-3].example.com` (or pass custom names to `k8s-certgen`)
## Examples
The [examples](../examples) statically assign IP addresses to libvirt client VMs created by `scripts/libvirt`. VMs are set up on the `metal0` CNI bridge for rkt or the `docker0` bridge for Docker. The examples can be used for physical machines if you update the MAC addresses. See [network setup](network-setup.md) and [deployment](deployment.md).
* [rktnetes](../examples/groups/rktnetes) - iPXE boot a Kubernetes cluster
* [rktnetes-install](../examples/groups/rktnetes-install) - Install a Kubernetes cluster to disk
* [Lab examples](https://github.com/dghubble/metal) - Lab hardware examples
## Assets
Download the CoreOS image assets referenced in the target [profile](../examples/profiles).
```sh
$ ./scripts/get-coreos stable 1298.7.0 ./examples/assets
```
Optionally, add your SSH public key to each machine group definition [as shown](../examples/README.md#ssh-keys).
Generate a root CA and Kubernetes TLS assets for components (`admin`, `apiserver`, `worker`) with SANs for `node1.example.com`, etc.
```sh
$ rm -rf examples/assets/tls
$ ./scripts/tls/k8s-certgen
```
**Note**: TLS assets are served to any machines which request them, which requires a trusted network. Alternately, provisioning may be tweaked to require TLS assets be securely copied to each host.
## Containers
Use rkt or Docker to start `matchbox` and mount the desired example resources. Create a network boot environment and power on your machines. Revisit [matchbox with rkt](getting-started-rkt.md) or [matchbox with Docker](getting-started-docker.md) for help.
Client machines should boot and provision themselves. Local client VMs should network boot CoreOS in about 1 minute, and the Kubernetes API should be available after 3-4 minutes (each node downloads a ~160MB Hyperkube). If you chose `rktnetes-install`, notice that machines install CoreOS and then reboot (in libvirt, you must hit "power" again). The time to network boot and provision Kubernetes clusters on physical hardware depends on a number of factors (POST duration, boot device iteration, network speed, etc.).
## Verify
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your laptop. Use the generated kubeconfig to access the Kubernetes cluster created on rkt `metal0` or `docker0`.
```sh
$ export KUBECONFIG=examples/assets/tls/kubeconfig
$ kubectl get nodes
NAME STATUS AGE
node1.example.com Ready 3m
node2.example.com Ready 3m
node3.example.com Ready 3m
```
Get all pods.
```sh
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-v1.2.0-4088228293-k3yn8 2/2 Running 0 3m
kube-system kube-apiserver-node1.example.com 1/1 Running 0 4m
kube-system kube-controller-manager-node1.example.com 1/1 Running 0 3m
kube-system kube-dns-v19-l2u8r 3/3 Running 0 4m
kube-system kube-proxy-node1.example.com 1/1 Running 0 3m
kube-system kube-proxy-node2.example.com 1/1 Running 0 3m
kube-system kube-proxy-node3.example.com 1/1 Running 0 3m
kube-system kube-scheduler-node1.example.com 1/1 Running 0 3m
kube-system kubernetes-dashboard-v1.4.1-0iy07 1/1 Running 0 4m
```
## Kubernetes Dashboard
Access the Kubernetes Dashboard with `kubeconfig` credentials by port forwarding to the dashboard pod.
```sh
$ kubectl port-forward kubernetes-dashboard-v1.4.1-SOME-ID 9090 -n=kube-system
Forwarding from 127.0.0.1:9090 -> 9090
```
Then visit [http://127.0.0.1:9090](http://127.0.0.1:9090/).
<img src='img/kubernetes-dashboard.png' class="img-center" alt="Kubernetes Dashboard"/>

Jenkinsfile

@@ -9,41 +9,38 @@ parallel (
etcd3: {
node('fedora && bare-metal') {
stage('etcd3') {
- timeout(time:3, unit:'MINUTES') {
+ timeout(time:5, unit:'MINUTES') {
checkout scm
sh '''#!/bin/bash -e
cat /etc/os-release
export ASSETS_DIR=~/assets; ./tests/smoke/etcd3
'''
}
}
}
},
k8s: {
node('fedora && bare-metal') {
stage('k8s') {
timeout(time:12, unit:'MINUTES') {
checkout scm
sh '''#!/bin/bash -e
cat /etc/os-release
export ASSETS_DIR=~/assets; ./tests/smoke/k8s
'''
}
}
}
},
bootkube: {
node('fedora && bare-metal') {
stage('bootkube') {
timeout(time:12, unit:'MINUTES') {
checkout scm
sh '''#!/bin/bash -e
cat /etc/os-release
chmod 600 ./tests/smoke/fake_rsa
export ASSETS_DIR=~/assets; ./tests/smoke/bootkube
'''
}
}
}
}
},
"etcd3-terraform": {
node('fedora && bare-metal') {
stage('etcd3-terraform') {
timeout(time:10, unit:'MINUTES') {
checkout scm
sh '''#!/bin/bash -e
export ASSETS_DIR=~/assets; export CONFIG_DIR=~/matchbox/examples/etc/matchbox; ./tests/smoke/etcd3-terraform
'''
}
}
}
},
)


@@ -1,6 +1,6 @@
export CGO_ENABLED:=0
- VERSION=$(shell ./scripts/git-version)
+ VERSION=$(shell ./scripts/dev/git-version)
LD_FLAGS="-w -X github.com/coreos/matchbox/matchbox/version.Version=$(VERSION)"
REPO=github.com/coreos/matchbox
@@ -15,11 +15,11 @@ bin/%:
@go build -o bin/$* -v -ldflags $(LD_FLAGS) $(REPO)/cmd/$*
test:
- @./scripts/test
+ @./scripts/dev/test
.PHONY: aci
aci: clean build
- @sudo ./scripts/build-aci
+ @sudo ./scripts/dev/build-aci
.PHONY: docker-image
docker-image:
@@ -40,13 +40,13 @@ vendor:
.PHONY: codegen
codegen: tools
- @./scripts/codegen
+ @./scripts/dev/codegen
.PHONY: tools
tools: bin/protoc bin/protoc-gen-go
bin/protoc:
- @./scripts/get-protoc
+ @./scripts/dev/get-protoc
bin/protoc-gen-go:
@go build -o bin/protoc-gen-go $(REPO)/vendor/github.com/golang/protobuf/protoc-gen-go
@@ -78,7 +78,7 @@ _output/matchbox-%.tar.gz: DEST=_output/$(NAME)
_output/matchbox-%.tar.gz: bin/%/matchbox
mkdir -p $(DEST)
cp bin/$*/matchbox $(DEST)
- ./scripts/release-files $(DEST)
+ ./scripts/dev/release-files $(DEST)
tar zcvf $(DEST).tar.gz -C _output $(NAME)
.PHONY: all build clean test release


@@ -1,12 +1,8 @@
# matchbox [![Build Status](https://travis-ci.org/coreos/matchbox.svg?branch=master)](https://travis-ci.org/coreos/matchbox) [![GoDoc](https://godoc.org/github.com/coreos/matchbox?status.png)](https://godoc.org/github.com/coreos/matchbox) [![Docker Repository on Quay](https://quay.io/repository/coreos/matchbox/status "Docker Repository on Quay")](https://quay.io/repository/coreos/matchbox) [![IRC](https://img.shields.io/badge/irc-%23coreos-449FD8.svg)](https://botbot.me/freenode/coreos)
Network boot and provision Container Linux clusters on virtual or physical hardware.
**Announcement**: Matchbox [v0.6.0](https://github.com/coreos/matchbox/releases) is released with a new [Matchbox Terraform Provider][terraform] and [tutorial](Documentation/getting-started.md).
## matchbox
- `matchbox` is a service that matches machines (based on labels like MAC, UUID, etc.) to profiles to PXE boot and provision Container Linux clusters. Profiles specify the kernel/initrd, kernel arguments, iPXE config, GRUB config, [Container Linux Config][cl-config], [Cloud-Config][cloud-config], or other configs a machine should use. Matchbox can be [installed](Documentation/deployment.md) as a binary, RPM, container image, or deployed on a Kubernetes cluster and it provides an authenticated gRPC API for clients like [terraform][terraform].
+ `matchbox` is a service that matches bare-metal machines (based on labels like MAC, UUID, etc.) to profiles to PXE boot and provision Container Linux clusters. Profiles specify the kernel/initrd, kernel arguments, iPXE config, GRUB config, [Container Linux Config][cl-config], [Cloud-Config][cloud-config], or other configs a machine should use. Matchbox can be [installed](Documentation/deployment.md) as a binary, RPM, container image, or deployed on a Kubernetes cluster and it provides an authenticated gRPC API for clients like [terraform][terraform].
* [Documentation][docs]
* [matchbox Service](Documentation/matchbox.md)
@@ -16,8 +12,7 @@ Network boot and provision Container Linux clusters on virtual or physical hardw
* [Container Linux Config][cl-config]
* [Cloud-Config][cloud-config]
* [Configuration](Documentation/config.md)
- * [HTTP API](Documentation/api.md)
- * [gRPC API](https://godoc.org/github.com/coreos/matchbox/matchbox/client)
+ * [HTTP API](Documentation/api.md) / [gRPC API](https://godoc.org/github.com/coreos/matchbox/matchbox/client)
* [Background: Machine Lifecycle](Documentation/machine-lifecycle.md)
* [Background: PXE Booting](Documentation/network-booting.md)
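The label-to-profile matching described above can be sketched minimally. This is an illustration of the idea (a machine's labels must satisfy all of a group's selector labels, and the most specific match wins), not matchbox's actual implementation:

```python
def match_group(groups, machine_labels):
    """Return the group whose selector labels all match the machine's
    labels, preferring the most specific (largest) selector; None if
    nothing matches. A rough sketch of matchbox's matching idea."""
    candidates = [
        g for g in groups
        if all(machine_labels.get(k) == v for k, v in g["selector"].items())
    ]
    return max(candidates, key=lambda g: len(g["selector"]), default=None)

groups = [
    # A group with an empty selector acts as a catch-all default.
    {"id": "default", "profile": "coreos-install", "selector": {}},
    {"id": "node1", "profile": "k8s-controller",
     "selector": {"mac": "52:54:00:a1:9c:ae"}},
]
print(match_group(groups, {"mac": "52:54:00:a1:9c:ae"})["id"])  # node1
```

Machines whose labels match no specific group fall through to the empty-selector group, which mirrors how the example groups pair per-node entries with a default install group.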
@@ -40,22 +35,31 @@ Local QEMU/KVM
### Example Clusters
- Network boot the [examples](examples) with [QEMU/KVM](scripts/README.md#libvirt) VMs to try them on your Linux laptop.
+ Create [example](examples) clusters on-premise or locally with [QEMU/KVM](scripts/README.md#libvirt).
* Multi-node [self-hosted](Documentation/bootkube.md) Kubernetes cluster
* [Upgrading](Documentation/bootkube-upgrades.md) self-hosted Kubernetes clusters
* Multi-node [Kubernetes cluster](Documentation/kubernetes.md)
* Multi-node [rktnetes](Documentation/rktnetes.md) cluster (i.e. Kubernetes with rkt as the container runtime)
**Terraform-based**
* [simple-install](Documentation/getting-started.md) - Install Container Linux with an SSH key on all machines (beginner)
* [etcd3](examples/terraform/etcd3-install/README.md) - Install a 3-node etcd3 cluster
* [Kubernetes](examples/terraform/bootkube-install/README.md) - Install a 3-node self-hosted Kubernetes v1.6.4 cluster
* Terraform [Modules](examples/terraform/modules) - Re-usable Terraform Modules
**Manual**
* [etcd3](Documentation/getting-started-rkt.md) - Install a 3-node etcd3 cluster
* [Kubernetes](Documentation/bootkube.md) - Install a 3-node self-hosted Kubernetes v1.6.4 cluster
## Contrib
* [dnsmasq](contrib/dnsmasq/README.md) - Run DHCP, TFTP, and DNS services with docker or rkt
* [squid](contrib/squid/README.md) - Run a transparent cache proxy
* [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) - Terraform plugin which supports "matchbox" provider
## Enterprise
[Tectonic](https://coreos.com/tectonic/) is the enterprise-ready Kubernetes offering from CoreOS (free for 10 nodes!). The [Tectonic Installer](https://coreos.com/tectonic/docs/latest/install/bare-metal/#4-tectonic-installer) app integrates directly with `matchbox` through its gRPC API to provide a rich graphical client for populating `matchbox` with machine configs.
- Learn more from our [docs](https://coreos.com/tectonic/docs/latest/) or [blog](https://coreos.com/blog/tectonic-1-5-2.html).
+ Learn more from our [docs](https://coreos.com/tectonic/docs/latest/) or [blog](https://coreos.com/blog/announcing-tectonic-1.6).
![Tectonic Installer](Documentation/img/tectonic-installer.png)


@@ -9,7 +9,9 @@ The image bundles `undionly.kpxe` which chainloads PXE clients to iPXE and `grub
Run the container image as a DHCP, DNS, and TFTP service.
```sh
- sudo rkt run --net=host quay.io/coreos/dnsmasq -- -d -q \
+ sudo rkt run --net=host quay.io/coreos/dnsmasq \
+ --caps-retain=CAP_NET_ADMIN,CAP_NET_BIND_SERVICE,CAP_SETGID,CAP_SETUID,CAP_NET_RAW \
+ -- -d -q \
--dhcp-range=192.168.1.3,192.168.1.254 \
--enable-tftp \
--tftp-root=/var/lib/tftpboot \


@@ -12,4 +12,4 @@ curl -s -o $DEST/undionly.kpxe http://boot.ipxe.org/undionly.kpxe
cp $DEST/undionly.kpxe $DEST/undionly.kpxe.0
# Any vaguely recent CoreOS grub.efi is fine
- curl -s -o $DEST/grub.efi https://stable.release.core-os.net/amd64-usr/1298.7.0/coreos_production_pxe_grub.efi
+ curl -s -o $DEST/grub.efi https://stable.release.core-os.net/amd64-usr/1353.7.0/coreos_production_pxe_grub.efi

contrib/rpm/matchbox.spec

@@ -0,0 +1,86 @@
%global import_path github.com/coreos/matchbox
%global repo matchbox
%global debug_package %{nil}
Name: matchbox
Version: 0.6.0
Release: 2%{?dist}
Summary: Network boot and provision CoreOS machines
License: ASL 2.0
URL: https://%{import_path}
Source0: https://%{import_path}/archive/v%{version}/%{name}-%{version}.tar.gz
BuildRequires: golang
BuildRequires: systemd
%{?systemd_requires}
Requires(pre): shadow-utils
%description
matchbox is a service that matches machines to profiles to PXE boot and provision
clusters. Profiles specify the kernel/initrd, kernel args, iPXE config, GRUB
config, Container Linux config, Cloud-config, or other configs. matchbox provides
a read-only HTTP API for machines and an authenticated gRPC API for clients.
# Limit to architectures supported by golang or gcc-go compilers
ExclusiveArch: %{go_arches}
# Use golang or gcc-go compiler depending on architecture
BuildRequires: compiler(golang)
%prep
%setup -q -n %{repo}-%{version}
%build
# create a Go workspace with a symlink to builddir source
mkdir -p src/github.com/coreos
ln -s ../../../ src/github.com/coreos/matchbox
export GOPATH=$(pwd):%{gopath}
export GO15VENDOREXPERIMENT=1
function gobuild { go build -a -ldflags "-w -X github.com/coreos/matchbox/matchbox/version.Version=v%{version}" "$@"; }
gobuild -o bin/matchbox %{import_path}/cmd/matchbox
%install
install -d %{buildroot}/%{_bindir}
install -d %{buildroot}%{_sharedstatedir}/%{name}
install -p -m 0755 bin/matchbox %{buildroot}/%{_bindir}
# systemd service unit
mkdir -p %{buildroot}%{_unitdir}
cp contrib/systemd/%{name}.service %{buildroot}%{_unitdir}/
%files
%doc README.md CHANGES.md CONTRIBUTING.md LICENSE NOTICE DCO
%{_bindir}/matchbox
%{_sharedstatedir}/%{name}
%{_unitdir}/%{name}.service
%pre
getent group matchbox >/dev/null || groupadd -r matchbox
getent passwd matchbox >/dev/null || \
useradd -r -g matchbox -s /sbin/nologin matchbox
%post
%systemd_post matchbox.service
%preun
%systemd_preun matchbox.service
%postun
%systemd_postun_with_restart matchbox.service
%changelog
* Mon Apr 24 2017 <dalton.hubble@coreos.com> - 0.6.0-1
- New support for terraform-provider-matchbox plugin
- Add ProfileDelete, GroupDelete, IgnitionGet and IgnitionDelete gRPC endpoints
- Generate code with gRPC v1.2.1 and matching Go protoc-gen-go plugin
- Update Ignition to v0.14.0 and coreos-cloudinit to v1.13.0
- New documentation at https://coreos.com/matchbox/docs/latest
* Wed Jan 25 2017 <dalton.hubble@coreos.com> - 0.5.0-1
- Rename project from bootcfg to matchbox
* Sat Dec 3 2016 <dalton.hubble@coreos.com> - 0.4.1-3
- Add missing ldflags which caused bootcfg -version to report wrong version
* Fri Dec 2 2016 <dalton.hubble@coreos.com> - 0.4.1-2
- Fix bootcfg user creation
* Fri Dec 2 2016 <dalton.hubble@coreos.com> - 0.4.1-1
- Initial package

contrib/squid/README.md

@@ -0,0 +1,96 @@
# Squid Proxy (DRAFT)
This guide shows how to set up a [Squid](http://www.squid-cache.org/) cache proxy for providing kernel/initrd files to PXE, iPXE, or GRUB2 client machines. This setup runs Squid as a Docker container using the [sameersbn/squid](https://quay.io/repository/sameersbn/squid) image.
The Squid container requires a squid.conf file to run. Download the example squid.conf file from the [sameersbn/docker-squid](https://github.com/sameersbn/docker-squid) repo:
```
curl -O https://raw.githubusercontent.com/sameersbn/docker-squid/master/squid.conf
```
Squid [interception caching](http://wiki.squid-cache.org/SquidFaq/InterceptionProxy#Concepts_of_Interception_Caching) is required for proxying PXE, iPXE, or GRUB2 client machines. Set the intercept mode in squid.conf:
```
sed -ie 's/http_port 3128/http_port 3128 intercept/g' squid.conf
```
By default, Squid caches objects that are 4MB or less. Increase the maximum object size to cache large files such as kernel and initrd images. The following example increases the maximum object size to 300MB:
```
sed -ie 's/# maximum_object_size 4 MB/maximum_object_size 300 MB/g' squid.conf
```
Squid supports a wide range of cache configurations. Review the Squid [documentation](http://www.squid-cache.org/Doc/) to learn more about configuring Squid.
This example uses systemd to manage squid. Create the squid service systemd unit file:
```
cat /etc/systemd/system/squid.service
#/etc/systemd/system/squid.service
[Unit]
Description=squid proxy service
After=docker.service
Requires=docker.service
[Service]
Restart=always
TimeoutStartSec=0
ExecStart=/usr/bin/docker run --net=host --rm \
-v /path/to/squid.conf:/etc/squid3/squid.conf:Z \
-v /srv/docker/squid/cache:/var/spool/squid3:Z \
quay.io/sameersbn/squid
[Install]
WantedBy=multi-user.target
```
Start Squid:
```
systemctl start squid
```
If your Squid host is running iptables or firewalld, modify rules to allow the interception and redirection of traffic. In the following example, 192.168.10.1 is the IP address of the interface facing PXE, iPXE, or GRUB2 client machines. The default port number used by squid is 3128.
For firewalld:
```
firewall-cmd --permanent --zone=internal --add-forward-port=port=80:proto=tcp:toport=3128:toaddr=192.168.10.1
firewall-cmd --permanent --zone=internal --add-port=3128/tcp
firewall-cmd --reload
firewall-cmd --zone=internal --list-all
```
For iptables:
```
iptables -t nat -A POSTROUTING -o enp15s0 -j MASQUERADE
iptables -t nat -A PREROUTING -i enp14s0 -p tcp --dport 80 -j REDIRECT --to-port 3128
```
**Note**: enp14s0 faces PXE, iPXE, or GRUB2 clients and enp15s0 faces Internet access.
Your DHCP server should be configured so the Squid host is the default gateway for PXE, iPXE, or GRUB2 clients. For deployments that run Squid on the same host as dnsmasq, remove any DHCP option 3 settings (for example, `--dhcp-option=3,192.168.10.1`).
Update Matchbox profiles to use the URL of the CoreOS kernel/initrd download site:
```
cat policy/etcd3.json
{
"id": "etcd3",
"name": "etcd3",
"boot": {
"kernel": "http://stable.release.core-os.net/amd64-usr/1235.9.0/coreos_production_pxe.vmlinuz",
"initrd": ["http://stable.release.core-os.net/amd64-usr/1235.9.0/coreos_production_pxe_image.cpio.gz"],
"args": [
"coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}",
"coreos.first_boot=yes",
"console=tty0",
"console=ttyS0",
"coreos.autologin"
]
},
"ignition_id": "etcd3.yaml"
}
```
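The `${mac:hexhyp}` variable in the `coreos.config.url` kernel argument above is substituted with the booting machine's MAC address, with colons replaced by hyphens so it is safe in a URL. A rough sketch of that rendering (the helper name is ours, not matchbox's):

```python
def mac_hexhyp(mac: str) -> str:
    """Render a MAC address the way matchbox's ``${mac:hexhyp}``
    template variable does: lowercase, colons replaced by hyphens.
    This is our sketch of the behavior, not matchbox's actual code."""
    return mac.lower().replace(":", "-")

# Roughly how the config URL above expands for one machine:
url = "http://matchbox.foo:8080/ignition?mac=" + mac_hexhyp("52:54:00:A1:9C:AE")
print(url)
```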
(Optional) Configure Matchbox not to serve static assets by providing an empty assets-path value.
```
# /etc/systemd/system/matchbox.service.d/override.conf
[Service]
Environment="MATCHBOX_ASSETS_PATHS="
```
Boot your PXE, iPXE, or GRUB2 clients.


@@ -1,35 +1,38 @@
# Examples
- These examples network boot and provision machines into Container Linux clusters using `matchbox`. You can re-use their profiles to provision your own physical machines.
+ Matchbox automates network booting and provisioning of clusters. These examples show how to use matchbox on-premise or locally with [QEMU/KVM](scripts/README.md#libvirt).
## Terraform Examples
These examples use [Terraform](https://www.terraform.io/intro/) as a client to Matchbox.
| Name | Description |
|-------------------------------|-------------------------------|
| [simple-install](terraform/simple-install) | Install Container Linux with an SSH key |
| [etcd3-install](terraform/etcd3-install) | Install a 3-node etcd3 cluster |
| [bootkube-install](terraform/bootkube-install) | Install a 3-node self-hosted Kubernetes v1.6.4 cluster |
### Customization
You are encouraged to look through the examples and Terraform modules. Implement your own profiles or package them as modules to meet your needs. We've just provided a starting point. Learn more about [matchbox](../Documentation/matchbox.md) and [Container Linux configs](../Documentation/container-linux-config.md).
## Manual Examples
These examples mount raw Matchbox objects into a Matchbox server's `/var/lib/matchbox/` directory.
| Name | Description | CoreOS Version | FS | Docs |
|------------|-------------|----------------|----|-----------|
- | simple | CoreOS with autologin, using iPXE | stable/1298.7.0 | RAM | [reference](https://coreos.com/os/docs/latest/booting-with-ipxe.html) |
- | simple-install | CoreOS Install, using iPXE | stable/1298.7.0 | RAM | [reference](https://coreos.com/os/docs/latest/booting-with-ipxe.html) |
- | grub | CoreOS via GRUB2 Netboot | stable/1298.7.0 | RAM | NA |
- | etcd3 | A 3 node etcd3 cluster with proxies | stable/1298.7.0 | RAM | None |
- | etcd3-install | Install a 3 node etcd3 cluster to disk | stable/1298.7.0 | Disk | None |
- | k8s | Kubernetes cluster with 1 master, 2 workers, and TLS-authentication | stable/1298.7.0 | Disk | [tutorial](../Documentation/kubernetes.md) |
- | k8s-install | Kubernetes cluster, installed to disk | stable/1298.7.0 | Disk | [tutorial](../Documentation/kubernetes.md) |
- | rktnetes | Kubernetes cluster with rkt container runtime, 1 master, workers, TLS auth (experimental) | stable/1298.7.0 | Disk | [tutorial](../Documentation/rktnetes.md) |
- | rktnetes-install | Kubernetes cluster with rkt container runtime, installed to disk (experimental) | stable/1298.7.0 | Disk | [tutorial](../Documentation/rktnetes.md) |
- | bootkube | iPXE boot a self-hosted Kubernetes cluster (with bootkube) | stable/1298.7.0 | Disk | [tutorial](../Documentation/bootkube.md) |
- | bootkube-install | Install a self-hosted Kubernetes cluster (with bootkube) | stable/1298.7.0 | Disk | [tutorial](../Documentation/bootkube.md) |
+ | simple | CoreOS with autologin, using iPXE | stable/1353.7.0 | RAM | [reference](https://coreos.com/os/docs/latest/booting-with-ipxe.html) |
+ | simple-install | CoreOS Install, using iPXE | stable/1353.7.0 | RAM | [reference](https://coreos.com/os/docs/latest/booting-with-ipxe.html) |
+ | grub | CoreOS via GRUB2 Netboot | stable/1353.7.0 | RAM | NA |
+ | etcd3 | PXE boot a 3 node etcd3 cluster with proxies | stable/1353.7.0 | RAM | None |
+ | etcd3-install | Install a 3 node etcd3 cluster to disk | stable/1353.7.0 | Disk | None |
+ | bootkube | PXE boot a self-hosted Kubernetes v1.6.4 cluster | stable/1353.7.0 | Disk | [tutorial](../Documentation/bootkube.md) |
+ | bootkube-install | Install a self-hosted Kubernetes v1.6.4 cluster | stable/1353.7.0 | Disk | [tutorial](../Documentation/bootkube.md) |
## Tutorials
### Customization
Get started running `matchbox` on your Linux machine to network boot and provision clusters of VMs or physical hardware.
* [Getting Started](../Documentation/getting-started.md)
* [matchbox with rkt](../Documentation/getting-started-rkt.md)
* [matchbox with Docker](../Documentation/getting-started-docker.md)
- * [Kubernetes (static manifests)](../Documentation/kubernetes.md)
- * [Kubernetes (rktnetes)](../Documentation/rktnetes.md)
* [Kubernetes (self-hosted)](../Documentation/bootkube.md)
* [Lab Examples](https://github.com/dghubble/metal)
- ## Autologin
+ #### Autologin
Example profiles pass the `coreos.autologin` kernel argument. This skips the password prompt for development and troubleshooting and should be removed **before production**.
@@ -46,8 +49,8 @@ Example groups allow `ssh_authorized_keys` to be added for the `core` user as me
}
}
- ## Conditional Variables
+ #### Conditional Variables
- ### "pxe"
+ **"pxe"**
Some examples check the `pxe` variable to determine whether to create a `/dev/sda1` filesystem and partition for PXEing with `root=/dev/sda1` ("pxe":"true") or to write files to the existing filesystem on `/dev/disk/by-label/ROOT` ("pxe":"false").
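Concretely, a group that PXE-boots entirely from RAM would carry the variable in its metadata. This is a hypothetical fragment modeled on the example group files, not one shipped in the repo:

```json
{
  "id": "node1",
  "name": "k8s controller",
  "profile": "k8s-controller",
  "selector": {
    "mac": "52:54:00:a1:9c:ae"
  },
  "metadata": {
    "pxe": "true"
  }
}
```

With `"pxe": "false"` (e.g. after a disk install), the templates instead write to the existing filesystem on `/dev/disk/by-label/ROOT`.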


@@ -1,44 +0,0 @@
## gRPC API Credentials
Create FAKE TLS credentials for running the `matchbox` gRPC API examples.
**DO NOT** use these certificates for anything other than running `matchbox` examples. Use your organization's production PKI for production deployments.
Navigate to the example directory which will be mounted as `/etc/matchbox` in examples:
cd matchbox/examples/etc/matchbox
Set the certificate subject alt names by exporting `SAN`. Use the DNS name or IP at which `matchbox` is hosted.
# for examples on metal0 or docker0 bridges
export SAN=IP.1:127.0.0.1,IP.2:172.18.0.2
# production example
export SAN=DNS.1:matchbox.example.com
Create a fake `ca.crt`, `server.crt`, `server.key`, `client.crt`, and `client.key`. Type 'Y' when prompted.
$ ./cert-gen
Creating FAKE CA, server cert/key, and client cert/key...
...
...
...
******************************************************************
WARNING: Generated TLS credentials are ONLY SUITABLE FOR EXAMPLES!
Use your organization's production PKI for production deployments!
## Inspect
Inspect the generated FAKE certificates if desired.
openssl x509 -noout -text -in ca.crt
openssl x509 -noout -text -in server.crt
openssl x509 -noout -text -in client.crt
## Verify
Verify that the FAKE server and client certificates were signed by the fake CA.
openssl verify -CAfile ca.crt server.crt
openssl verify -CAfile ca.crt client.crt


@@ -4,7 +4,7 @@
"profile": "install-reboot",
"metadata": {
"coreos_channel": "stable",
- "coreos_version": "1298.7.0",
+ "coreos_version": "1353.7.0",
"ignition_endpoint": "http://matchbox.foo:8080/ignition",
"baseurl": "http://matchbox.foo:8080/assets/coreos"
}


@@ -4,7 +4,7 @@
"profile": "install-reboot",
"metadata": {
"coreos_channel": "stable",
- "coreos_version": "1298.7.0",
+ "coreos_version": "1353.7.0",
"ignition_endpoint": "http://matchbox.foo:8080/ignition",
"baseurl": "http://matchbox.foo:8080/assets/coreos"
}


@@ -1,11 +0,0 @@
{
"id": "coreos-install",
"name": "CoreOS Install",
"profile": "install-reboot",
"metadata": {
"coreos_channel": "stable",
"coreos_version": "1298.7.0",
"ignition_endpoint": "http://matchbox.foo:8080/ignition",
"baseurl": "http://matchbox.foo:8080/assets/coreos"
}
}


@@ -1,20 +0,0 @@
{
"id": "node1",
"name": "k8s controller",
"profile": "k8s-controller",
"selector": {
"os": "installed",
"mac": "52:54:00:a1:9c:ae"
},
"metadata": {
"container_runtime": "docker",
"domain_name": "node1.example.com",
"etcd_initial_cluster": "node1=http://node1.example.com:2380",
"etcd_name": "node1",
"k8s_cert_endpoint": "http://matchbox.foo:8080/assets",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_etcd_endpoints": "http://node1.example.com:2379",
"k8s_pod_network": "10.2.0.0/16",
"k8s_service_ip_range": "10.3.0.0/24"
}
}


@@ -1,18 +0,0 @@
{
"id": "node2",
"name": "k8s worker",
"profile": "k8s-worker",
"selector": {
"os": "installed",
"mac": "52:54:00:b2:2f:86"
},
"metadata": {
"container_runtime": "docker",
"domain_name": "node2.example.com",
"etcd_initial_cluster": "node1=http://node1.example.com:2380",
"k8s_cert_endpoint": "http://matchbox.foo:8080/assets",
"k8s_controller_endpoint": "https://node1.example.com",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_etcd_endpoints": "http://node1.example.com:2379"
}
}


@@ -1,18 +0,0 @@
{
"id": "node3",
"name": "k8s worker",
"profile": "k8s-worker",
"selector": {
"os": "installed",
"mac": "52:54:00:c3:61:77"
},
"metadata": {
"container_runtime": "docker",
"domain_name": "node3.example.com",
"etcd_initial_cluster": "node1=http://node1.example.com:2380",
"k8s_cert_endpoint": "http://matchbox.foo:8080/assets",
"k8s_controller_endpoint": "https://node1.example.com",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_etcd_endpoints": "http://node1.example.com:2379"
}
}


@@ -1,20 +0,0 @@
{
"id": "node1",
"name": "k8s controller",
"profile": "k8s-controller",
"selector": {
"mac": "52:54:00:a1:9c:ae"
},
"metadata": {
"container_runtime": "docker",
"domain_name": "node1.example.com",
"etcd_initial_cluster": "node1=http://node1.example.com:2380",
"etcd_name": "node1",
"k8s_cert_endpoint": "http://matchbox.foo:8080/assets",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_etcd_endpoints": "http://node1.example.com:2379",
"k8s_pod_network": "10.2.0.0/16",
"k8s_service_ip_range": "10.3.0.0/24",
"pxe": "true"
}
}


@@ -1,18 +0,0 @@
{
"id": "node2",
"name": "k8s worker",
"profile": "k8s-worker",
"selector": {
"mac": "52:54:00:b2:2f:86"
},
"metadata": {
"container_runtime": "docker",
"domain_name": "node2.example.com",
"etcd_initial_cluster": "node1=http://node1.example.com:2380",
"k8s_cert_endpoint": "http://matchbox.foo:8080/assets",
"k8s_controller_endpoint": "https://node1.example.com",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_etcd_endpoints": "http://node1.example.com:2379",
"pxe": "true"
}
}


@@ -1,18 +0,0 @@
{
"id": "node3",
"name": "k8s worker",
"profile": "k8s-worker",
"selector": {
"mac": "52:54:00:c3:61:77"
},
"metadata": {
"container_runtime": "docker",
"domain_name": "node3.example.com",
"etcd_initial_cluster": "node1=http://node1.example.com:2380",
"k8s_cert_endpoint": "http://matchbox.foo:8080/assets",
"k8s_controller_endpoint": "https://node1.example.com",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_etcd_endpoints": "http://node1.example.com:2379",
"pxe": "true"
}
}


@@ -1,11 +0,0 @@
{
"id": "coreos-install",
"name": "CoreOS Install",
"profile": "install-reboot",
"metadata": {
"coreos_channel": "stable",
"coreos_version": "1298.7.0",
"ignition_endpoint": "http://matchbox.foo:8080/ignition",
"baseurl": "http://matchbox.foo:8080/assets/coreos"
}
}


@@ -1,20 +0,0 @@
{
"id": "node1",
"name": "k8s controller",
"profile": "k8s-controller",
"selector": {
"mac": "52:54:00:a1:9c:ae",
"os": "installed"
},
"metadata": {
"container_runtime": "rkt",
"domain_name": "node1.example.com",
"etcd_initial_cluster": "node1=http://node1.example.com:2380",
"etcd_name": "node1",
"k8s_cert_endpoint": "http://matchbox.foo:8080/assets",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_etcd_endpoints": "http://node1.example.com:2379",
"k8s_pod_network": "10.2.0.0/16",
"k8s_service_ip_range": "10.3.0.0/24"
}
}


@@ -1,18 +0,0 @@
{
"id": "node2",
"name": "k8s worker",
"profile": "k8s-worker",
"selector": {
"mac": "52:54:00:b2:2f:86",
"os": "installed"
},
"metadata": {
"container_runtime": "rkt",
"domain_name": "node2.example.com",
"etcd_initial_cluster": "node1=http://node1.example.com:2380",
"k8s_cert_endpoint": "http://matchbox.foo:8080/assets",
"k8s_controller_endpoint": "https://node1.example.com",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_etcd_endpoints": "http://node1.example.com:2379"
}
}


@@ -1,18 +0,0 @@
{
"id": "node3",
"name": "k8s worker",
"profile": "k8s-worker",
"selector": {
"mac": "52:54:00:c3:61:77",
"os": "installed"
},
"metadata": {
"container_runtime": "rkt",
"domain_name": "node3.example.com",
"etcd_initial_cluster": "node1=http://node1.example.com:2380",
"k8s_cert_endpoint": "http://matchbox.foo:8080/assets",
"k8s_controller_endpoint": "https://node1.example.com",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_etcd_endpoints": "http://node1.example.com:2379"
}
}


@@ -1,20 +0,0 @@
{
"id": "node1",
"name": "k8s controller",
"profile": "k8s-controller",
"selector": {
"mac": "52:54:00:a1:9c:ae"
},
"metadata": {
"container_runtime": "rkt",
"domain_name": "node1.example.com",
"etcd_initial_cluster": "node1=http://node1.example.com:2380",
"etcd_name": "node1",
"k8s_cert_endpoint": "http://matchbox.foo:8080/assets",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_etcd_endpoints": "http://node1.example.com:2379",
"k8s_pod_network": "10.2.0.0/16",
"k8s_service_ip_range": "10.3.0.0/24",
"pxe": "true"
}
}


@@ -1,18 +0,0 @@
{
"id": "node2",
"name": "k8s worker",
"profile": "k8s-worker",
"selector": {
"mac": "52:54:00:b2:2f:86"
},
"metadata": {
"container_runtime": "rkt",
"domain_name": "node2.example.com",
"etcd_initial_cluster": "node1=http://node1.example.com:2380",
"k8s_cert_endpoint": "http://matchbox.foo:8080/assets",
"k8s_controller_endpoint": "https://node1.example.com",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_etcd_endpoints": "http://node1.example.com:2379",
"pxe": "true"
}
}

View File

@@ -1,18 +0,0 @@
{
"id": "node3",
"name": "k8s worker",
"profile": "k8s-worker",
"selector": {
"mac": "52:54:00:c3:61:77"
},
"metadata": {
"container_runtime": "rkt",
"domain_name": "node3.example.com",
"etcd_initial_cluster": "node1=http://node1.example.com:2380",
"k8s_cert_endpoint": "http://matchbox.foo:8080/assets",
"k8s_controller_endpoint": "https://node1.example.com",
"k8s_dns_service_ip": "10.3.0.10",
"k8s_etcd_endpoints": "http://node1.example.com:2379",
"pxe": "true"
}
}

View File

@@ -4,7 +4,7 @@
"profile": "simple-install",
"metadata": {
"coreos_channel": "stable",
"coreos_version": "1298.7.0",
"coreos_version": "1353.7.0",
"ignition_endpoint": "http://matchbox.foo:8080/ignition",
"baseurl": "http://matchbox.foo:8080/assets/coreos"
}

View File

@@ -50,8 +50,7 @@ systemd:
[Unit]
Description=Kubelet via Hyperkube ACI
[Service]
Environment=KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
Environment=KUBELET_IMAGE_TAG=v1.6.1_coreos.0
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
@@ -78,8 +77,8 @@ systemd:
--pod-manifest-path=/etc/kubernetes/manifests \
--allow-privileged \
--hostname-override={{.domain_name}} \
--node-labels=master=true \
--node-labels=node-role.kubernetes.io/master \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
--cluster_dns={{.k8s_dns_service_ip}} \
--cluster_domain=cluster.local
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
@@ -113,12 +112,13 @@ storage:
- "-LROOT"
{{end}}
files:
- path: /etc/kubernetes/.empty
- path: /etc/kubernetes/kubelet.env
filesystem: root
mode: 0644
contents:
inline: |
empty
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_IMAGE_TAG=v1.6.4_coreos.0
- path: /etc/hostname
filesystem: root
mode: 0644
@@ -142,20 +142,20 @@ storage:
#!/bin/bash
# Wrapper for bootkube start
set -e
mkdir -p /tmp/bootkube
BOOTKUBE_ACI="${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
BOOTKUBE_VERSION="${BOOTKUBE_VERSION:-v0.4.0}"
BOOTKUBE_VERSION="${BOOTKUBE_VERSION:-v0.4.4}"
BOOTKUBE_ASSETS="${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
exec /usr/bin/rkt run \
--trust-keys-from-https \
--volume assets,kind=host,source=$BOOTKUBE_ASSETS \
--mount volume=assets,target=/assets \
--volume bootstrap,kind=host,source=/etc/kubernetes/manifests \
--mount volume=bootstrap,target=/etc/kubernetes/manifests \
--volume temp,kind=host,source=/tmp/bootkube \
--mount volume=temp,target=/tmp/bootkube \
--volume bootstrap,kind=host,source=/etc/kubernetes \
--mount volume=bootstrap,target=/etc/kubernetes \
$RKT_OPTS \
${BOOTKUBE_ACI}:${BOOTKUBE_VERSION} --net=host --exec=/bootkube -- start --asset-dir=/assets "$@"
${BOOTKUBE_ACI}:${BOOTKUBE_VERSION} \
--net=host \
--dns=host \
--exec=/bootkube -- start --asset-dir=/assets "$@"
{{ if index . "ssh_authorized_keys" }}
passwd:

View File

@@ -47,8 +47,7 @@ systemd:
[Unit]
Description=Kubelet via Hyperkube ACI
[Service]
Environment=KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
Environment=KUBELET_IMAGE_TAG=v1.6.1_coreos.0
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
@@ -102,12 +101,13 @@ storage:
- "-LROOT"
{{end}}
files:
- path: /etc/kubernetes/.empty
- path: /etc/kubernetes/kubelet.env
filesystem: root
mode: 0644
contents:
inline: |
empty
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_IMAGE_TAG=v1.6.4_coreos.0
- path: /etc/hostname
filesystem: root
mode: 0644

View File

@@ -1,37 +0,0 @@
---
systemd:
units:
- name: installer.service
enable: true
contents: |
[Unit]
Requires=network-online.target
After=network-online.target
[Service]
Type=simple
ExecStart=/opt/installer
[Install]
WantedBy=multi-user.target
storage:
files:
- path: /opt/installer
filesystem: root
mode: 0500
contents:
inline: |
#!/bin/bash -ex
curl --fail "{{.ignition_endpoint}}?{{.request.raw_query}}&os=installed" -o ignition.json
coreos-install -d /dev/sda -C {{.coreos_channel}} -V {{.coreos_version}} -i ignition.json {{if index . "baseurl"}}-b {{.baseurl}}{{end}}
udevadm settle
systemctl poweroff
{{ if index . "ssh_authorized_keys" }}
passwd:
users:
- name: core
ssh_authorized_keys:
{{ range $element := .ssh_authorized_keys }}
- {{$element}}
{{end}}
{{end}}

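The removed installer's `curl` line re-queries the Ignition endpoint with the machine's original query string plus `os=installed`, which is the hook that flips the node to its post-install group on later boots. A sketch of the URL it builds, assuming example `uuid`/`mac` values (made up here; the real ones come from `{{.request.raw_query}}`):

```shell
# Sketch of the re-fetch URL the installer script constructs.
# ignition_endpoint mirrors the template variable; uuid/mac are made-up examples.
ignition_endpoint="http://matchbox.foo:8080/ignition"
raw_query="uuid=16e7d8a7&mac=52-54-00-b2-2f-86"
url="${ignition_endpoint}?${raw_query}&os=installed"
echo "$url"
```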
View File

@@ -1,778 +0,0 @@
---
systemd:
units:
- name: etcd2.service
enable: true
dropins:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_NAME={{.etcd_name}}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=http://{{.domain_name}}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=http://{{.domain_name}}:2380"
Environment="ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379"
Environment="ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380"
Environment="ETCD_INITIAL_CLUSTER={{.etcd_initial_cluster}}"
Environment="ETCD_STRICT_RECONFIG_CHECK=true"
- name: flanneld.service
dropins:
- name: 40-ExecStartPre-symlink.conf
contents: |
[Service]
EnvironmentFile=-/etc/flannel/options.env
ExecStartPre=/opt/init-flannel
- name: docker.service
dropins:
- name: 40-flannel.conf
contents: |
[Unit]
Requires=flanneld.service
After=flanneld.service
[Service]
EnvironmentFile=/etc/kubernetes/cni/docker_opts_cni.env
- name: locksmithd.service
dropins:
- name: 40-etcd-lock.conf
contents: |
[Service]
Environment="REBOOT_STRATEGY=etcd-lock"
- name: k8s-certs@.service
contents: |
[Unit]
Description=Fetch Kubernetes certificate assets
Requires=network-online.target
After=network-online.target
[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/ssl
ExecStart=/usr/bin/bash -c "[ -f /etc/kubernetes/ssl/%i ] || curl --fail {{.k8s_cert_endpoint}}/tls/%i -o /etc/kubernetes/ssl/%i"
- name: k8s-assets.target
contents: |
[Unit]
Description=Load Kubernetes Assets
Requires=k8s-certs@apiserver.pem.service
After=k8s-certs@apiserver.pem.service
Requires=k8s-certs@apiserver-key.pem.service
After=k8s-certs@apiserver-key.pem.service
Requires=k8s-certs@ca.pem.service
After=k8s-certs@ca.pem.service
- name: kubelet.service
enable: true
contents: |
[Unit]
Description=Kubelet via Hyperkube ACI
Wants=flanneld.service
Requires=k8s-assets.target
After=k8s-assets.target
[Service]
Environment=KUBELET_VERSION=v1.5.5_coreos.0
Environment="RKT_OPTS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume dns,kind=host,source=/etc/resolv.conf \
--mount volume=dns,target=/etc/resolv.conf \
{{ if eq .container_runtime "rkt" -}}
--volume rkt,kind=host,source=/opt/bin/host-rkt \
--mount volume=rkt,target=/usr/bin/rkt \
--volume var-lib-rkt,kind=host,source=/var/lib/rkt \
--mount volume=var-lib-rkt,target=/var/lib/rkt \
--volume stage,kind=host,source=/tmp \
--mount volume=stage,target=/tmp \
{{ end -}}
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log"
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStartPre=/usr/bin/systemctl is-active flanneld.service
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--api-servers=http://127.0.0.1:8080 \
--register-schedulable=true \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--network-plugin=cni \
--container-runtime={{.container_runtime}} \
--rkt-path=/usr/bin/rkt \
--rkt-stage1-image=coreos.com/rkt/stage1-coreos \
--allow-privileged=true \
--pod-manifest-path=/etc/kubernetes/manifests \
--hostname-override={{.domain_name}} \
--cluster_dns={{.k8s_dns_service_ip}} \
--cluster_domain=cluster.local
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
- name: k8s-addons.service
enable: true
contents: |
[Unit]
Description=Kubernetes Addons
[Service]
Type=oneshot
ExecStart=/opt/k8s-addons
[Install]
WantedBy=multi-user.target
{{ if eq .container_runtime "rkt" }}
- name: rkt-api.service
enable: true
contents: |
[Unit]
Before=kubelet.service
[Service]
ExecStart=/usr/bin/rkt api-service
Restart=always
RestartSec=10
[Install]
RequiredBy=kubelet.service
- name: load-rkt-stage1.service
enable: true
contents: |
[Unit]
Description=Load rkt stage1 images
Documentation=http://github.com/coreos/rkt
Requires=network-online.target
After=network-online.target
Before=rkt-api.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/rkt fetch /usr/lib/rkt/stage1-images/stage1-coreos.aci /usr/lib/rkt/stage1-images/stage1-fly.aci --insecure-options=image
[Install]
RequiredBy=rkt-api.service
{{ end }}
storage:
{{ if index . "pxe" }}
disks:
- device: /dev/sda
wipe_table: true
partitions:
- label: ROOT
filesystems:
- name: root
mount:
device: "/dev/sda1"
format: "ext4"
create:
force: true
options:
- "-LROOT"
{{ end }}
files:
- path: /etc/kubernetes/cni/net.d/10-flannel.conf
filesystem: root
contents:
inline: |
{
"name": "podnet",
"type": "flannel",
"delegate": {
"isDefaultGateway": true
}
}
- path: /etc/kubernetes/cni/docker_opts_cni.env
filesystem: root
contents:
inline: |
DOCKER_OPT_BIP=""
DOCKER_OPT_IPMASQ=""
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /etc/kubernetes/manifests/kube-proxy.yaml
filesystem: root
contents:
inline: |
apiVersion: v1
kind: Pod
metadata:
name: kube-proxy
namespace: kube-system
annotations:
rkt.alpha.kubernetes.io/stage1-name-override: coreos.com/rkt/stage1-fly
spec:
hostNetwork: true
containers:
- name: kube-proxy
image: quay.io/coreos/hyperkube:v1.5.5_coreos.0
command:
- /hyperkube
- proxy
- --master=http://127.0.0.1:8080
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
- mountPath: /var/run/dbus
name: dbus
readOnly: false
volumes:
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
- hostPath:
path: /var/run/dbus
name: dbus
- path: /etc/kubernetes/manifests/kube-apiserver.yaml
filesystem: root
contents:
inline: |
apiVersion: v1
kind: Pod
metadata:
name: kube-apiserver
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-apiserver
image: quay.io/coreos/hyperkube:v1.5.5_coreos.0
command:
- /hyperkube
- apiserver
- --bind-address=0.0.0.0
- --etcd-servers={{.k8s_etcd_endpoints}}
- --allow-privileged=true
- --service-cluster-ip-range={{.k8s_service_ip_range}}
- --secure-port=443
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
- --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
- --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
- --client-ca-file=/etc/kubernetes/ssl/ca.pem
- --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
- --runtime-config=extensions/v1beta1/networkpolicies=true
- --anonymous-auth=false
livenessProbe:
httpGet:
host: 127.0.0.1
port: 8080
path: /healthz
initialDelaySeconds: 15
timeoutSeconds: 15
ports:
- containerPort: 443
hostPort: 443
name: https
- containerPort: 8080
hostPort: 8080
name: local
volumeMounts:
- mountPath: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
readOnly: true
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
volumes:
- hostPath:
path: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
- path: /etc/flannel/options.env
filesystem: root
contents:
inline: |
FLANNELD_ETCD_ENDPOINTS={{.k8s_etcd_endpoints}}
- path: /etc/kubernetes/manifests/kube-controller-manager.yaml
filesystem: root
contents:
inline: |
apiVersion: v1
kind: Pod
metadata:
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- name: kube-controller-manager
image: quay.io/coreos/hyperkube:v1.5.5_coreos.0
command:
- /hyperkube
- controller-manager
- --master=http://127.0.0.1:8080
- --leader-elect=true
- --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
- --root-ca-file=/etc/kubernetes/ssl/ca.pem
resources:
requests:
cpu: 200m
livenessProbe:
httpGet:
host: 127.0.0.1
path: /healthz
port: 10252
initialDelaySeconds: 15
timeoutSeconds: 15
volumeMounts:
- mountPath: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
readOnly: true
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
hostNetwork: true
volumes:
- hostPath:
path: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
- path: /etc/kubernetes/manifests/kube-scheduler.yaml
filesystem: root
contents:
inline: |
apiVersion: v1
kind: Pod
metadata:
name: kube-scheduler
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-scheduler
image: quay.io/coreos/hyperkube:v1.5.5_coreos.0
command:
- /hyperkube
- scheduler
- --master=http://127.0.0.1:8080
- --leader-elect=true
resources:
requests:
cpu: 100m
livenessProbe:
httpGet:
host: 127.0.0.1
path: /healthz
port: 10251
initialDelaySeconds: 15
timeoutSeconds: 15
- path: /srv/kubernetes/manifests/kube-dns-autoscaler-deployment.yaml
filesystem: root
contents:
inline: |
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-dns-autoscaler
namespace: kube-system
labels:
k8s-app: kube-dns-autoscaler
kubernetes.io/cluster-service: "true"
spec:
template:
metadata:
labels:
k8s-app: kube-dns-autoscaler
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: autoscaler
image: gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.0.0
resources:
requests:
cpu: "20m"
memory: "10Mi"
command:
- /cluster-proportional-autoscaler
- --namespace=kube-system
- --configmap=kube-dns-autoscaler
- --mode=linear
- --target=Deployment/kube-dns
- --default-params={"linear":{"coresPerReplica":256,"nodesPerReplica":16,"min":1}}
- --logtostderr=true
- --v=2
- path: /srv/kubernetes/manifests/kube-dns-deployment.yaml
filesystem: root
contents:
inline: |
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
spec:
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: kubedns
image: gcr.io/google_containers/kubedns-amd64:1.9
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
livenessProbe:
httpGet:
path: /healthz-kubedns
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
initialDelaySeconds: 3
timeoutSeconds: 5
args:
- --domain=cluster.local.
- --dns-port=10053
- --config-map=kube-dns
- --v=2
env:
- name: PROMETHEUS_PORT
value: "10055"
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- containerPort: 10055
name: metrics
protocol: TCP
- name: dnsmasq
image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4
livenessProbe:
httpGet:
path: /healthz-dnsmasq
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- --cache-size=1000
- --no-resolv
- --server=127.0.0.1#10053
- --log-facility=-
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
resources:
requests:
cpu: 150m
memory: 10Mi
- name: dnsmasq-metrics
image: gcr.io/google_containers/dnsmasq-metrics-amd64:1.0
livenessProbe:
httpGet:
path: /metrics
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- --v=2
- --logtostderr
ports:
- containerPort: 10054
name: metrics
protocol: TCP
resources:
requests:
memory: 10Mi
- name: healthz
image: gcr.io/google_containers/exechealthz-amd64:1.2
resources:
limits:
memory: 50Mi
requests:
cpu: 10m
memory: 50Mi
args:
- --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
- --url=/healthz-dnsmasq
- --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
- --url=/healthz-kubedns
- --port=8080
- --quiet
ports:
- containerPort: 8080
protocol: TCP
dnsPolicy: Default
- path: /srv/kubernetes/manifests/kube-dns-svc.yaml
filesystem: root
contents:
inline: |
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: {{.k8s_dns_service_ip}}
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
- path: /srv/kubernetes/manifests/heapster-deployment.yaml
filesystem: root
contents:
inline: |
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: heapster-v1.2.0
namespace: kube-system
labels:
k8s-app: heapster
kubernetes.io/cluster-service: "true"
version: v1.2.0
spec:
replicas: 1
selector:
matchLabels:
k8s-app: heapster
version: v1.2.0
template:
metadata:
labels:
k8s-app: heapster
version: v1.2.0
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- image: gcr.io/google_containers/heapster:v1.2.0
name: heapster
livenessProbe:
httpGet:
path: /healthz
port: 8082
scheme: HTTP
initialDelaySeconds: 180
timeoutSeconds: 5
command:
- /heapster
- --source=kubernetes.summary_api:''
- image: gcr.io/google_containers/addon-resizer:1.6
name: heapster-nanny
resources:
limits:
cpu: 50m
memory: 90Mi
requests:
cpu: 50m
memory: 90Mi
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
command:
- /pod_nanny
- --cpu=80m
- --extra-cpu=4m
- --memory=200Mi
- --extra-memory=4Mi
- --threshold=5
- --deployment=heapster-v1.2.0
- --container=heapster
- --poll-period=300000
- --estimator=exponential
- path: /srv/kubernetes/manifests/heapster-svc.yaml
filesystem: root
contents:
inline: |
kind: Service
apiVersion: v1
metadata:
name: heapster
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "Heapster"
spec:
ports:
- port: 80
targetPort: 8082
selector:
k8s-app: heapster
- path: /srv/kubernetes/manifests/kube-dashboard-deployment.yaml
filesystem: root
contents:
inline: |
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
spec:
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: kubernetes-dashboard
image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
ports:
- containerPort: 9090
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
- path: /srv/kubernetes/manifests/kube-dashboard-svc.yaml
filesystem: root
contents:
inline: |
apiVersion: v1
kind: Service
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
spec:
selector:
k8s-app: kubernetes-dashboard
ports:
- port: 80
targetPort: 9090
- path: /opt/init-flannel
filesystem: root
mode: 0544
contents:
inline: |
#!/bin/bash -ex
function init_flannel {
echo "Waiting for etcd..."
while true
do
IFS=',' read -ra ES <<< "{{.k8s_etcd_endpoints}}"
for ETCD in "${ES[@]}"; do
echo "Trying: $ETCD"
if [ -n "$(curl --fail --silent "$ETCD/v2/machines")" ]; then
local ACTIVE_ETCD=$ETCD
break
fi
sleep 1
done
if [ -n "$ACTIVE_ETCD" ]; then
break
fi
done
RES=$(curl --fail --silent -X PUT -d "value={\"Network\":\"{{.k8s_pod_network}}\",\"Backend\":{\"Type\":\"vxlan\"}}" "$ACTIVE_ETCD/v2/keys/coreos.com/network/config?prevExist=false")
if [ -z "$(echo $RES | grep '"action":"create"')" ] && [ -z "$(echo $RES | grep 'Key already exists')" ]; then
echo "Unexpected error configuring flannel pod network: $RES"
fi
}
init_flannel
{{ if eq .container_runtime "rkt" }}
- path: /opt/bin/host-rkt
filesystem: root
mode: 0544
contents:
inline: |
#!/bin/sh
# This is bind mounted into the kubelet rootfs and all rkt shell-outs go
# through this rkt wrapper. It essentially enters the host mount namespace
# (which it is already in) only for the purpose of breaking out of the chroot
# before calling rkt. It makes things like rkt gc work and avoids bind mounting
# in certain rkt filesystem dependencies into the kubelet rootfs. This can
# eventually be obviated when the write-api stuff gets upstream and rkt gc is
# through the api-server. Related issue:
# https://github.com/coreos/rkt/issues/2878
exec nsenter -m -u -i -n -p -t 1 -- /usr/bin/rkt "$@"
{{ end }}
- path: /opt/k8s-addons
filesystem: root
mode: 0544
contents:
inline: |
#!/bin/bash -ex
echo "Waiting for Kubernetes API..."
until curl --fail --silent "http://127.0.0.1:8080/version"
do
sleep 5
done
echo "K8S: DNS addon"
curl --fail --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dns-deployment.yaml)" "http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/kube-system/deployments"
curl --fail --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dns-svc.yaml)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services"
curl --fail --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dns-autoscaler-deployment.yaml)" "http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/kube-system/deployments"
echo "K8S: Heapster addon"
curl --fail --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/heapster-deployment.yaml)" "http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/kube-system/deployments"
curl --fail --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/heapster-svc.yaml)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services"
echo "K8S: Dashboard addon"
curl --fail --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dashboard-deployment.yaml)" "http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/kube-system/deployments"
curl --fail --silent -H "Content-Type: application/yaml" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dashboard-svc.yaml)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services"
{{ if index . "ssh_authorized_keys" }}
passwd:
users:
- name: core
ssh_authorized_keys:
{{ range $element := .ssh_authorized_keys }}
- {{$element}}
{{end}}
{{end}}

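The `/opt/init-flannel` script in the removed controller template publishes the pod network into etcd with `prevExist=false`, so when several controllers race, only the first writer succeeds and the rest see `Key already exists`. A sketch of the payload it PUTs, rendered with this example's metadata (`10.2.0.0/16`; `vxlan` is hard-coded in the template):

```shell
# Sketch of the etcd value init-flannel PUTs at
# /v2/keys/coreos.com/network/config (prevExist=false = first writer wins).
pod_network="10.2.0.0/16"
payload=$(printf '{"Network":"%s","Backend":{"Type":"vxlan"}}' "$pod_network")
echo "value=$payload"
```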
View File

@@ -1,268 +0,0 @@
---
systemd:
units:
- name: etcd2.service
enable: true
dropins:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_PROXY=on"
Environment="ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379"
Environment="ETCD_INITIAL_CLUSTER={{.etcd_initial_cluster}}"
- name: flanneld.service
dropins:
- name: 40-add-options.conf
contents: |
[Service]
EnvironmentFile=-/etc/flannel/options.env
- name: docker.service
dropins:
- name: 40-flannel.conf
contents: |
[Unit]
Requires=flanneld.service
After=flanneld.service
[Service]
EnvironmentFile=/etc/kubernetes/cni/docker_opts_cni.env
- name: locksmithd.service
dropins:
- name: 40-etcd-lock.conf
contents: |
[Service]
Environment="REBOOT_STRATEGY=etcd-lock"
- name: k8s-certs@.service
contents: |
[Unit]
Description=Fetch Kubernetes certificate assets
Requires=network-online.target
After=network-online.target
[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/ssl
ExecStart=/usr/bin/bash -c "[ -f /etc/kubernetes/ssl/%i ] || curl --fail {{.k8s_cert_endpoint}}/tls/%i -o /etc/kubernetes/ssl/%i"
- name: k8s-assets.target
contents: |
[Unit]
Description=Load Kubernetes Assets
Requires=k8s-certs@worker.pem.service
After=k8s-certs@worker.pem.service
Requires=k8s-certs@worker-key.pem.service
After=k8s-certs@worker-key.pem.service
Requires=k8s-certs@ca.pem.service
After=k8s-certs@ca.pem.service
- name: kubelet.service
enable: true
contents: |
[Unit]
Description=Kubelet via Hyperkube ACI
Requires=k8s-assets.target
After=k8s-assets.target
[Service]
Environment=KUBELET_VERSION=v1.5.5_coreos.0
Environment="RKT_OPTS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume dns,kind=host,source=/etc/resolv.conf \
--mount volume=dns,target=/etc/resolv.conf \
{{ if eq .container_runtime "rkt" -}}
--volume rkt,kind=host,source=/opt/bin/host-rkt \
--mount volume=rkt,target=/usr/bin/rkt \
--volume var-lib-rkt,kind=host,source=/var/lib/rkt \
--mount volume=var-lib-rkt,target=/var/lib/rkt \
--volume stage,kind=host,source=/tmp \
--mount volume=stage,target=/tmp \
{{ end -}}
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log"
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--api-servers={{.k8s_controller_endpoint}} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--network-plugin=cni \
--container-runtime={{.container_runtime}} \
--rkt-path=/usr/bin/rkt \
--rkt-stage1-image=coreos.com/rkt/stage1-coreos \
--register-node=true \
--allow-privileged=true \
--pod-manifest-path=/etc/kubernetes/manifests \
--hostname-override={{.domain_name}} \
--cluster_dns={{.k8s_dns_service_ip}} \
--cluster_domain=cluster.local \
--kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
--tls-cert-file=/etc/kubernetes/ssl/worker.pem \
--tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
{{ if eq .container_runtime "rkt" }}
- name: rkt-api.service
enable: true
contents: |
[Unit]
Before=kubelet.service
[Service]
ExecStart=/usr/bin/rkt api-service
Restart=always
RestartSec=10
[Install]
RequiredBy=kubelet.service
- name: load-rkt-stage1.service
enable: true
contents: |
[Unit]
Description=Load rkt stage1 images
Documentation=http://github.com/coreos/rkt
Requires=network-online.target
After=network-online.target
Before=rkt-api.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/rkt fetch /usr/lib/rkt/stage1-images/stage1-coreos.aci /usr/lib/rkt/stage1-images/stage1-fly.aci --insecure-options=image
[Install]
RequiredBy=rkt-api.service
{{ end }}
storage:
{{ if index . "pxe" }}
disks:
- device: /dev/sda
wipe_table: true
partitions:
- label: ROOT
filesystems:
- name: root
mount:
device: "/dev/sda1"
format: "ext4"
create:
force: true
options:
- "-LROOT"
{{end}}
files:
- path: /etc/kubernetes/cni/net.d/10-flannel.conf
filesystem: root
contents:
inline: |
{
"name": "podnet",
"type": "flannel",
"delegate": {
"isDefaultGateway": true
}
}
- path: /etc/kubernetes/cni/docker_opts_cni.env
filesystem: root
contents:
inline: |
DOCKER_OPT_BIP=""
DOCKER_OPT_IPMASQ=""
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /etc/kubernetes/worker-kubeconfig.yaml
filesystem: root
contents:
inline: |
apiVersion: v1
kind: Config
clusters:
- name: local
cluster:
certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
user:
client-certificate: /etc/kubernetes/ssl/worker.pem
client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
cluster: local
user: kubelet
name: kubelet-context
current-context: kubelet-context
- path: /etc/kubernetes/manifests/kube-proxy.yaml
filesystem: root
contents:
inline: |
apiVersion: v1
kind: Pod
metadata:
name: kube-proxy
namespace: kube-system
annotations:
rkt.alpha.kubernetes.io/stage1-name-override: coreos.com/rkt/stage1-fly
spec:
hostNetwork: true
containers:
- name: kube-proxy
image: quay.io/coreos/hyperkube:v1.5.5_coreos.0
command:
- /hyperkube
- proxy
- --master={{.k8s_controller_endpoint}}
- --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/ssl/certs
name: "ssl-certs"
- mountPath: /etc/kubernetes/worker-kubeconfig.yaml
name: "kubeconfig"
readOnly: true
- mountPath: /etc/kubernetes/ssl
name: "etc-kube-ssl"
readOnly: true
- mountPath: /var/run/dbus
name: dbus
readOnly: false
volumes:
- name: "ssl-certs"
hostPath:
path: "/usr/share/ca-certificates"
- name: "kubeconfig"
hostPath:
path: "/etc/kubernetes/worker-kubeconfig.yaml"
- name: "etc-kube-ssl"
hostPath:
path: "/etc/kubernetes/ssl"
- hostPath:
path: /var/run/dbus
name: dbus
- path: /etc/flannel/options.env
filesystem: root
contents:
inline: |
FLANNELD_ETCD_ENDPOINTS={{.k8s_etcd_endpoints}}
{{ if eq .container_runtime "rkt" }}
- path: /opt/bin/host-rkt
filesystem: root
mode: 0544
contents:
inline: |
#!/bin/sh
# This is bind mounted into the kubelet rootfs and all rkt shell-outs go
# through this rkt wrapper. It essentially enters the host mount namespace
# (which it is already in) only for the purpose of breaking out of the chroot
# before calling rkt. It makes things like rkt gc work and avoids bind mounting
# in certain rkt filesystem dependencies into the kubelet rootfs. This can
# eventually be obviated when the write-api stuff gets upstream and rkt gc is
# through the api-server. Related issue:
# https://github.com/coreos/rkt/issues/2878
exec nsenter -m -u -i -n -p -t 1 -- /usr/bin/rkt "$@"
{{ end }}
{{ if index . "ssh_authorized_keys" }}
passwd:
users:
- name: core
ssh_authorized_keys:
{{ range $element := .ssh_authorized_keys }}
- {{$element}}
{{end}}
{{end}}

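Both removed templates gate the `disks`/`filesystems` section on a `pxe` metadata key via `{{ if index . "pxe" }}`: only groups that set `"pxe": "true"` (network-booted nodes) wipe and format `/dev/sda`; the installed-to-disk path leaves partitions alone. A rough shell stand-in for that template guard (illustrative only; the real logic is Go template evaluation, which checks key presence):

```shell
# Illustrative stand-in for the {{ if index . "pxe" }} guard: the disk-wiping
# storage section renders only when the group metadata contains a "pxe" key.
render_storage() {
  case " $* " in
    *" pxe="*) echo "wipe /dev/sda; format ROOT ext4" ;;
    *)         echo "leave disks untouched" ;;
  esac
}
render_storage mac=52:54:00:b2:2f:86 pxe=true
render_storage mac=52:54:00:b2:2f:86 os=installed
```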
View File

@@ -2,8 +2,8 @@
"id": "bootkube-controller",
"name": "bootkube Ready Controller",
"boot": {
"kernel": "/assets/coreos/1298.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1298.7.0/coreos_production_pxe_image.cpio.gz"],
"kernel": "/assets/coreos/1353.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1353.7.0/coreos_production_pxe_image.cpio.gz"],
"args": [
"root=/dev/sda1",
"coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}",

View File

@@ -2,8 +2,8 @@
"id": "bootkube-worker",
"name": "bootkube Ready Worker",
"boot": {
"kernel": "/assets/coreos/1298.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1298.7.0/coreos_production_pxe_image.cpio.gz"],
"kernel": "/assets/coreos/1353.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1353.7.0/coreos_production_pxe_image.cpio.gz"],
"args": [
"root=/dev/sda1",
"coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}",

View File

@@ -2,8 +2,8 @@
"id": "etcd3-gateway",
"name": "etcd3-gateway",
"boot": {
"kernel": "/assets/coreos/1298.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1298.7.0/coreos_production_pxe_image.cpio.gz"],
"kernel": "/assets/coreos/1353.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1353.7.0/coreos_production_pxe_image.cpio.gz"],
"args": [
"coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}",
"coreos.first_boot=yes",

View File

@@ -2,8 +2,8 @@
"id": "etcd3",
"name": "etcd3",
"boot": {
"kernel": "/assets/coreos/1298.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1298.7.0/coreos_production_pxe_image.cpio.gz"],
"kernel": "/assets/coreos/1353.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1353.7.0/coreos_production_pxe_image.cpio.gz"],
"args": [
"coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}",
"coreos.first_boot=yes",

View File

@@ -2,8 +2,8 @@
"id": "grub",
"name": "CoreOS via GRUB2",
"boot": {
"kernel": "(http;matchbox.foo:8080)/assets/coreos/1298.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["(http;matchbox.foo:8080)/assets/coreos/1298.7.0/coreos_production_pxe_image.cpio.gz"],
"kernel": "(http;matchbox.foo:8080)/assets/coreos/1353.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["(http;matchbox.foo:8080)/assets/coreos/1353.7.0/coreos_production_pxe_image.cpio.gz"],
"args": [
"coreos.config.url=http://matchbox.foo:8080/ignition",
"coreos.first_boot=yes",

View File

@@ -2,8 +2,8 @@
"id": "install-reboot",
"name": "Install CoreOS and Reboot",
"boot": {
"kernel": "/assets/coreos/1298.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1298.7.0/coreos_production_pxe_image.cpio.gz"],
"kernel": "/assets/coreos/1353.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1353.7.0/coreos_production_pxe_image.cpio.gz"],
"args": [
"coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}",
"coreos.first_boot=yes",

View File

@@ -1,16 +0,0 @@
{
"id": "install-shutdown",
"name": "Install CoreOS and Shutdown",
"boot": {
"kernel": "/assets/coreos/1298.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1298.7.0/coreos_production_pxe_image.cpio.gz"],
"args": [
"coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}",
"coreos.first_boot=yes",
"console=tty0",
"console=ttyS0",
"coreos.autologin"
]
},
"ignition_id": "install-shutdown.yaml"
}

View File

@@ -1,17 +0,0 @@
{
"id": "k8s-controller",
"name": "Kubernetes Controller",
"boot": {
"kernel": "/assets/coreos/1298.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1298.7.0/coreos_production_pxe_image.cpio.gz"],
"args": [
"root=/dev/sda1",
"coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}",
"coreos.first_boot=yes",
"console=tty0",
"console=ttyS0",
"coreos.autologin"
]
},
"ignition_id": "k8s-controller.yaml"
}

View File

@@ -1,17 +0,0 @@
{
"id": "k8s-worker",
"name": "Kubernetes Worker",
"boot": {
"kernel": "/assets/coreos/1298.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1298.7.0/coreos_production_pxe_image.cpio.gz"],
"args": [
"root=/dev/sda1",
"coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}",
"coreos.first_boot=yes",
"console=tty0",
"console=ttyS0",
"coreos.autologin"
]
},
"ignition_id": "k8s-worker.yaml"
}

View File

@@ -2,8 +2,8 @@
"id": "simple-install",
"name": "Simple CoreOS Alpha Install",
"boot": {
"kernel": "/assets/coreos/1298.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1298.7.0/coreos_production_pxe_image.cpio.gz"],
"kernel": "/assets/coreos/1353.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1353.7.0/coreos_production_pxe_image.cpio.gz"],
"args": [
"coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}",
"coreos.first_boot=yes",

View File

@@ -2,8 +2,8 @@
"id": "simple",
"name": "Simple CoreOS Alpha",
"boot": {
"kernel": "/assets/coreos/1298.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1298.7.0/coreos_production_pxe_image.cpio.gz"],
"kernel": "/assets/coreos/1353.7.0/coreos_production_pxe.vmlinuz",
"initrd": ["/assets/coreos/1353.7.0/coreos_production_pxe_image.cpio.gz"],
"args": [
"coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}",
"coreos.first_boot=yes",

View File

@@ -1,3 +1,4 @@
*.tfstate*
terraform.tfvars
.terraform
*.tfstate*
assets

View File

@@ -1,71 +1,121 @@
# Self-hosted Kubernetes
The self-hosted Kubernetes example provisions a 3 node "self-hosted" Kubernetes v1.6.1 cluster. On-host kubelets wait for an apiserver to become reachable, then yield to kubelet pods scheduled via daemonset. [bootkube](https://github.com/kubernetes-incubator/bootkube) is run on any controller to bootstrap a temporary apiserver which schedules control plane components as pods before exiting. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
The self-hosted Kubernetes example shows how to use matchbox to network boot and provision a 3 node "self-hosted" Kubernetes v1.6.4 cluster. [bootkube](https://github.com/kubernetes-incubator/bootkube) is run once on a controller node to bootstrap Kubernetes control plane components as pods before exiting.
## Requirements
* Create a PXE network boot environment (e.g. with `coreos/dnsmasq`)
* Run a `matchbox` service with the gRPC API enabled
* 3 machines with known DNS names and MAC addresses for this example
* Matchbox provider credentials: a `client.crt`, `client.key`, and `ca.crt`.
Follow the getting started [tutorial](../../../Documentation/getting-started.md) to learn about matchbox and set up an environment that meets the requirements:
Install [bootkube](https://github.com/kubernetes-incubator/bootkube/releases) v0.4.0 and add it somewhere on your PATH.
* Matchbox v0.6+ [installation](../../../Documentation/deployment.md) with gRPC API enabled
* Matchbox provider credentials `client.crt`, `client.key`, and `ca.crt`
* PXE [network boot](../../../Documentation/network-setup.md) environment
* Terraform v0.9+ and [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) installed locally on your system
* Machines with known DNS names and MAC addresses
If you prefer to provision QEMU/KVM VMs on your local Linux machine, set up the matchbox [development environment](../../../Documentation/getting-started-rkt.md).
```sh
bootkube version
Version v0.4.0
```

```sh
sudo ./scripts/devnet create
```
Use the `bootkube` tool to render Kubernetes manifests and credentials into an `--asset-dir`. Later, `bootkube` will schedule these manifests during bootstrapping and the credentials will be used to access your cluster.
## Usage
Clone the [matchbox](https://github.com/coreos/matchbox) project and take a look at the cluster examples.
```sh
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com
$ git clone https://github.com/coreos/matchbox.git
$ cd matchbox/examples/terraform/bootkube-install
```
## Infrastructure
Copy the `terraform.tfvars.example` file to `terraform.tfvars`. Ensure `provider.tf` references your matchbox credentials.
Plan and apply terraform configurations. Create `bootkube-controller`, `bootkube-worker`, and `install-reboot` profiles and Container Linux configs. Create matcher groups for `node1.example.com`, `node2.example.com`, and `node3.example.com`.
```hcl
matchbox_http_endpoint = "http://matchbox.example.com:8080"
matchbox_rpc_endpoint = "matchbox.example.com:8081"
cluster_name = "demo"
container_linux_version = "1353.7.0"
container_linux_channel = "stable"
ssh_authorized_key = "ADD ME"
```

```sh
cd examples/bootkube-install
terraform plan
terraform apply
```
Power on each machine and wait for it to PXE boot, install CoreOS to disk, and provision itself.
## Bootstrap
Secure copy the kubeconfig to /etc/kubernetes/kubeconfig on every node, which will path-activate the `kubelet.service`.
Provide ordered lists of controller names, MAC addresses, and domain names, and likewise for the workers.
```sh
for node in 'node1' 'node2' 'node3'; do
    scp assets/auth/kubeconfig core@$node.example.com:/home/core/kubeconfig
    ssh core@$node.example.com 'sudo mv kubeconfig /etc/kubernetes/kubeconfig'
done
```

```hcl
controller_names = ["node1"]
controller_macs = ["52:54:00:a1:9c:ae"]
controller_domains = ["node1.example.com"]
worker_names = ["node2", "node3"]
worker_macs = ["52:54:00:b2:2f:86", "52:54:00:c3:61:77"]
worker_domains = ["node2.example.com", "node3.example.com"]
```
Secure copy the bootkube-generated assets to any controller node and run bootkube-start.
Finally, provide an `asset_dir` for generated manifests and a DNS name which you've set up to resolve to the controller(s) (e.g. round-robin). Worker nodes and your kubeconfig will communicate via this endpoint.
```sh
scp -r assets core@node1.example.com:/home/core
ssh core@node1.example.com 'sudo mv assets /opt/bootkube/assets && sudo systemctl start bootkube'
```

```hcl
k8s_domain_name = "cluster.example.com"
asset_dir = "assets"
```
Optionally watch the Kubernetes control plane bootstrapping with the bootkube temporary api-server. You will see quite a bit of output.
### Options
You may set `experimental_self_hosted_etcd = "true"` to deploy "self-hosted" etcd atop Kubernetes instead of running etcd on hosts directly. Warning: this is experimental and potentially dangerous.
The example above defines a Kubernetes cluster with 1 controller and 2 workers. See `multi-controller.tfvars.example` for a variant which defines 3 controllers and 1 worker.
## Apply
Fetch the [bootkube](../README.md#modules) Terraform [module](https://www.terraform.io/docs/modules/index.html) for bare-metal, which is maintained in the matchbox repo.
```sh
$ terraform get
```
```sh
$ ssh core@node1.example.com 'journalctl -f -u bootkube'
[  299.241291] bootkube[5]: Pod Status:     kube-api-checkpoint Running
[  299.241618] bootkube[5]: Pod Status:          kube-apiserver Running
[  299.241804] bootkube[5]: Pod Status:          kube-scheduler Running
[  299.241993] bootkube[5]: Pod Status: kube-controller-manager Running
[  299.311743] bootkube[5]: All self-hosted control plane components successfully started
```
Plan and apply to create the resources on Matchbox.
```sh
$ terraform plan
Plan: 37 to add, 0 to change, 0 to destroy.
```
Terraform will configure matchbox with profiles (e.g. `cached-container-linux-install`, `bootkube-controller`, `bootkube-worker`) and add groups to match machines by MAC address to a profile. These resources declare that each machine should PXE boot and install Container Linux to disk. `node1` will provision itself as a controller, while `node2` and `node3` provision themselves as workers.
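As an aside on MAC matching: group selectors use the colon-delimited MAC form shown above, while the `${mac:hexhyp}` variable seen in profile kernel args renders the same address with hyphens. A plain-shell sketch of that transformation (not matchbox itself):

```shell
# A machine's MAC as used in a group selector (colon form)
mac="52:54:00:a1:9c:ae"
# matchbox's ${mac:hexhyp} renders the hyphenated equivalent
hexhyp=$(echo "$mac" | tr ':' '-')
echo "$hexhyp"
```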
The module referenced in `cluster.tf` will also generate bootkube assets to `asset_dir` (exactly like the [bootkube](https://github.com/kubernetes-incubator/bootkube) binary would). These assets include Kubernetes bootstrapping and control plane manifests as well as a kubeconfig you can use to access the cluster.
```sh
$ terraform apply
module.cluster.null_resource.copy-kubeconfig.0: Still creating... (5m0s elapsed)
module.cluster.null_resource.copy-kubeconfig.1: Still creating... (5m0s elapsed)
module.cluster.null_resource.copy-kubeconfig.2: Still creating... (5m0s elapsed)
...
module.cluster.null_resource.bootkube-start: Still creating... (8m40s elapsed)
...
Apply complete! Resources: 37 added, 0 changed, 0 destroyed.
```
Apply will loop until it can successfully copy the kubeconfig to each node and start the one-time Kubernetes bootstrapping process on a controller. In practice, `apply` may fail if it connects before the disk install has completed; run `terraform apply` again until it reconciles successfully, then move on to the "Machines" section.
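A minimal wrapper for that re-apply loop (a sketch only; the attempt count and delay are arbitrary choices, not part of the examples):

```shell
# Re-run a command until it succeeds, since `terraform apply` can fail if it
# connects before a machine's disk install has completed.
retry() {
  local attempts=0
  until "$@"; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge 20 ]; then
      echo "giving up after $attempts attempts" >&2
      return 1
    fi
    sleep "${RETRY_DELAY:-60}"
  done
}

# usage: retry terraform apply
```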
Note: The `cached-container-linux-install` profile will PXE boot and install Container Linux from matchbox [assets](https://github.com/coreos/matchbox/blob/master/Documentation/api.md#assets). If you have not populated the assets cache, use the `container-linux-install` profile to use public images (slower).
## Machines
Power on each machine (with PXE boot device on next boot). Machines should network boot, install Container Linux to disk, reboot, and provision themselves as bootkube controllers or workers.
```sh
$ ipmitool -H node1.example.com -U USER -P PASS chassis bootdev pxe
$ ipmitool -H node1.example.com -U USER -P PASS power on
```
For local QEMU/KVM development, create the QEMU/KVM VMs.
```sh
$ sudo ./scripts/libvirt create
$ sudo ./scripts/libvirt [start|reboot|shutdown|poweroff|destroy]
```
## Verify
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your laptop. Use the generated kubeconfig to access the Kubernetes cluster. Verify that the cluster is accessible and that the kubelet, apiserver, scheduler, and controller-manager are running as pods.
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your laptop. Use the generated kubeconfig to access the Kubernetes cluster. Verify that the cluster is accessible and that the apiserver, scheduler, and controller-manager are running as pods.
```sh
$ export KUBECONFIG=assets/auth/kubeconfig
@@ -93,4 +143,8 @@ kube-system kube-scheduler-694795526-fks0b 1/1 Running 1
kube-system pod-checkpointer-node1.example.com 1/1 Running 2 10m
```
Try deleting pods to see that the cluster is resilient to failures and machine restarts (CoreOS auto-updates).
Try restarting machines or deleting pods to see that the cluster is resilient to failures.
## Going Further
Learn more about [matchbox](../../../Documentation/matchbox.md) or explore the other [example](../) clusters.

View File

@@ -1,70 +0,0 @@
// Create popular machine Profiles (convenience module)
module "profiles" {
source = "../modules/profiles"
matchbox_http_endpoint = "http://matchbox.example.com:8080"
coreos_version = "1298.7.0"
}
// Install CoreOS to disk before provisioning
resource "matchbox_group" "default" {
name = "default"
profile = "${module.profiles.coreos-install}"
// No selector, matches all nodes
metadata {
coreos_channel = "stable"
coreos_version = "1298.7.0"
ignition_endpoint = "http://matchbox.example.com:8080/ignition"
baseurl = "http://matchbox.example.com:8080/assets/coreos"
ssh_authorized_key = "${var.ssh_authorized_key}"
}
}
// Create a controller matcher group
resource "matchbox_group" "node1" {
name = "node1"
profile = "${module.profiles.bootkube-controller}"
selector {
mac = "52:54:00:a1:9c:ae"
os = "installed"
}
metadata {
domain_name = "node1.example.com"
etcd_name = "node1"
etcd_initial_cluster = "node1=http://node1.example.com:2380"
k8s_dns_service_ip = "${var.k8s_dns_service_ip}"
ssh_authorized_key = "${var.ssh_authorized_key}"
}
}
// Create worker matcher groups
resource "matchbox_group" "node2" {
name = "node2"
profile = "${module.profiles.bootkube-worker}"
selector {
mac = "52:54:00:b2:2f:86"
os = "installed"
}
metadata {
domain_name = "node2.example.com"
etcd_endpoints = "node1.example.com:2379"
k8s_dns_service_ip = "${var.k8s_dns_service_ip}"
ssh_authorized_key = "${var.ssh_authorized_key}"
}
}
resource "matchbox_group" "node3" {
name = "node3"
profile = "${module.profiles.bootkube-worker}"
selector {
mac = "52:54:00:c3:61:77"
os = "installed"
}
metadata {
domain_name = "node3.example.com"
etcd_endpoints = "node1.example.com:2379"
k8s_dns_service_ip = "${var.k8s_dns_service_ip}"
ssh_authorized_key = "${var.ssh_authorized_key}"
}
}

View File

@@ -0,0 +1,27 @@
// Self-hosted Kubernetes cluster
module "cluster" {
source = "../modules/bootkube"
matchbox_http_endpoint = "${var.matchbox_http_endpoint}"
ssh_authorized_key = "${var.ssh_authorized_key}"
cluster_name = "${var.cluster_name}"
container_linux_channel = "${var.container_linux_channel}"
container_linux_version = "${var.container_linux_version}"
# Machines
controller_names = "${var.controller_names}"
controller_macs = "${var.controller_macs}"
controller_domains = "${var.controller_domains}"
worker_names = "${var.worker_names}"
worker_macs = "${var.worker_macs}"
worker_domains = "${var.worker_domains}"
# bootkube assets
k8s_domain_name = "${var.k8s_domain_name}"
asset_dir = "${var.asset_dir}"
# Optional
container_linux_oem = "${var.container_linux_oem}"
experimental_self_hosted_etcd = "${var.experimental_self_hosted_etcd}"
}

View File

@@ -0,0 +1,23 @@
matchbox_http_endpoint = "http://matchbox.example.com:8080"
matchbox_rpc_endpoint = "matchbox.example.com:8081"
# ssh_authorized_key = "ADD ME"
cluster_name = "example"
container_linux_version = "1353.7.0"
container_linux_channel = "stable"
# Machines
controller_names = ["node1", "node2", "node3"]
controller_macs = ["52:54:00:a1:9c:ae", "52:54:00:b2:2f:86", "52:54:00:c3:61:77"]
controller_domains = ["node1.example.com", "node2.example.com", "node3.example.com"]
worker_names = ["node4"]
worker_macs = ["52:54:00:d7:99:c7"]
worker_domains = ["node4.example.com"]
# Bootkube
k8s_domain_name = "cluster.example.com"
asset_dir = "assets"
# Optional
# container_linux_oem = ""
# experimental_self_hosted_etcd = "true"

View File

@@ -1,7 +1,7 @@
// Configure the matchbox provider
provider "matchbox" {
endpoint = "matchbox.example.com:8081"
endpoint = "${var.matchbox_rpc_endpoint}"
client_cert = "${file("~/.matchbox/client.crt")}"
client_key = "${file("~/.matchbox/client.key")}"
ca = "${file("~/.matchbox/ca.crt")}"
client_key = "${file("~/.matchbox/client.key")}"
ca = "${file("~/.matchbox/ca.crt")}"
}

View File

@@ -0,0 +1,23 @@
matchbox_http_endpoint = "http://matchbox.example.com:8080"
matchbox_rpc_endpoint = "matchbox.example.com:8081"
# ssh_authorized_key = "ADD ME"
cluster_name = "example"
container_linux_version = "1353.7.0"
container_linux_channel = "stable"
# Machines
controller_names = ["node1"]
controller_macs = ["52:54:00:a1:9c:ae"]
controller_domains = ["node1.example.com"]
worker_names = ["node2", "node3"]
worker_macs = ["52:54:00:b2:2f:86", "52:54:00:c3:61:77"]
worker_domains = ["node2.example.com", "node3.example.com"]
# Bootkube
k8s_domain_name = "cluster.example.com"
asset_dir = "assets"
# Optional
# container_linux_oem = ""
# experimental_self_hosted_etcd = "true"

View File

@@ -1,10 +1,94 @@
variable "ssh_authorized_key" {
type = "string"
default = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCt3BebCHqnSsgpLjo4kVvyfY/z2BS8t27r/7du+O2pb4xYkr7n+KFpbOz523vMTpQ+o1jY4u4TgexglyT9nqasWgLOvo1qjD1agHme8LlTPQSk07rXqOB85Uq5p7ig2zoOejF6qXhcc3n1c7+HkxHrgpBENjLVHOBpzPBIAHkAGaZcl07OCqbsG5yxqEmSGiAlh/IiUVOZgdDMaGjCRFy0wk0mQaGD66DmnFc1H5CzcPjsxr0qO65e7lTGsE930KkO1Vc+RHCVwvhdXs+c2NhJ2/3740Kpes9n1/YullaWZUzlCPDXtRuy6JRbFbvy39JUgHWGWzB3d+3f8oJ/N4qZ cardno:000603633110"
variable "matchbox_http_endpoint" {
type = "string"
description = "Matchbox HTTP read-only endpoint (e.g. http://matchbox.example.com:8080)"
}
variable "k8s_dns_service_ip" {
type = "string"
default = "10.3.0.10"
description = "Cluster DNS service IP address passed via the kubelet --cluster-dns flag"
variable "matchbox_rpc_endpoint" {
type = "string"
description = "Matchbox gRPC API endpoint, without the protocol (e.g. matchbox.example.com:8081)"
}
variable "container_linux_channel" {
type = "string"
description = "Container Linux channel corresponding to the container_linux_version"
}
variable "container_linux_version" {
type = "string"
description = "Container Linux version of the kernel/initrd to PXE or the image to install"
}
variable "cluster_name" {
type = "string"
description = "Cluster name"
}
variable "ssh_authorized_key" {
type = "string"
description = "SSH public key to set as an authorized_key on machines"
}
# Machines
# Terraform's crude "type system" does not properly support lists of maps, so we do this.
variable "controller_names" {
type = "list"
}
variable "controller_macs" {
type = "list"
}
variable "controller_domains" {
type = "list"
}
variable "worker_names" {
type = "list"
}
variable "worker_macs" {
type = "list"
}
variable "worker_domains" {
type = "list"
}
# bootkube assets
variable "k8s_domain_name" {
description = "Controller DNS name which resolves to a controller instance. Workers and kubeconfigs will communicate with this endpoint (e.g. cluster.example.com)"
type = "string"
}
variable "asset_dir" {
description = "Path to a directory where generated assets should be placed (contains secrets)"
type = "string"
}
variable "pod_cidr" {
description = "CIDR IP range to assign Kubernetes pods"
type = "string"
default = "10.2.0.0/16"
}
variable "service_cidr" {
description = <<EOD
CIDR IP range to assign Kubernetes services.
The 1st IP will be reserved for kube-apiserver, the 10th IP for kube-dns, the 15th IP for self-hosted etcd, and the 200th IP for bootstrap self-hosted etcd.
EOD
type = "string"
default = "10.3.0.0/16"
}
variable "container_linux_oem" {
type = "string"
default = ""
description = "Specify an OEM image id to use as base for the installation (e.g. ami, vmware_raw, xen) or leave blank for the default image"
}
variable "experimental_self_hosted_etcd" {
default = "false"
description = "Create a self-hosted etcd cluster as pods on Kubernetes, instead of directly on hosts"
}
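For the default `service_cidr` of `10.3.0.0/16`, the reservations described above work out to the addresses below (Terraform derives them internally, e.g. via `cidrhost()`; this shell sketch only handles host numbers under 256):

```shell
# Reserved service IPs for service_cidr = 10.3.0.0/16:
#   1 -> kube-apiserver, 10 -> kube-dns, 15 -> self-hosted etcd,
#   200 -> bootstrap self-hosted etcd
base="10.3.0"
for hostnum in 1 10 15 200; do
  echo "$base.$hostnum"
done
```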

View File

@@ -0,0 +1,93 @@
# etcd3
The `etcd3-install` example shows how to use matchbox to network boot and provision a 3-node etcd3 cluster on bare-metal in an automated way.
## Requirements
Follow the getting started [tutorial](../../../Documentation/getting-started.md) to learn about matchbox and set up an environment that meets the requirements:
* Matchbox v0.6+ [installation](../../../Documentation/deployment.md) with gRPC API enabled
* Matchbox provider credentials `client.crt`, `client.key`, and `ca.crt`
* PXE [network boot](../../../Documentation/network-setup.md) environment
* Terraform v0.9+ and [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) installed locally on your system
* 3 machines with known DNS names and MAC addresses
If you prefer to provision QEMU/KVM VMs on your local Linux machine, set up the matchbox [development environment](../../../Documentation/getting-started-rkt.md).
```sh
sudo ./scripts/devnet create
```
## Usage
Clone the [matchbox](https://github.com/coreos/matchbox) project and take a look at the cluster examples.
```sh
$ git clone https://github.com/coreos/matchbox.git
$ cd matchbox/examples/terraform/etcd3-install
```
Copy the `terraform.tfvars.example` file to `terraform.tfvars`. Ensure `provider.tf` references your matchbox credentials.
```hcl
matchbox_http_endpoint = "http://matchbox.example.com:8080"
matchbox_rpc_endpoint = "matchbox.example.com:8081"
ssh_authorized_key = "ADD ME"
```
Configs in `etcd3-install` configure the matchbox provider, define profiles (e.g. `cached-container-linux-install`, `etcd3`), and define 3 groups which match machines by MAC address to a profile. These resources declare that the machines should PXE boot, install Container Linux to disk, and provision themselves into peers in a 3-node etcd3 cluster.
Fetch the [profiles](../README.md#modules) Terraform [module](https://www.terraform.io/docs/modules/index.html), which lets you use common machine profiles maintained in the matchbox repo (like `etcd3`).
```sh
$ terraform get
```
Plan and apply to create the resources on Matchbox.
```sh
$ terraform plan
Plan: 10 to add, 0 to change, 0 to destroy.
$ terraform apply
Apply complete! Resources: 10 added, 0 changed, 0 destroyed.
```
Note: The `cached-container-linux-install` profile will PXE boot and install Container Linux from matchbox [assets](https://github.com/coreos/matchbox/blob/master/Documentation/api.md#assets). If you have not populated the assets cache, use the `container-linux-install` profile to use public images (slower).
## Machines
Power on each machine (with PXE boot device on next boot). Machines should network boot, install Container Linux to disk, reboot, and provision themselves as a 3-node etcd3 cluster.
```sh
$ ipmitool -H node1.example.com -U USER -P PASS chassis bootdev pxe
$ ipmitool -H node1.example.com -U USER -P PASS power on
```
For local QEMU/KVM development, create the QEMU/KVM VMs.
```sh
$ sudo ./scripts/libvirt create
$ sudo ./scripts/libvirt [start|reboot|shutdown|poweroff|destroy]
```
## Verify
Verify each node is running etcd3 (i.e. etcd-member.service).
```sh
$ ssh core@node1.example.com
$ systemctl status etcd-member
```
Verify that etcd3 peers are healthy and communicating.
```sh
$ export ETCDCTL_API=3
$ etcdctl endpoint health
$ etcdctl put message hello
$ etcdctl get message
```
## Going Further
Learn more about [matchbox](../../../Documentation/matchbox.md) or explore the other [example](../) clusters.

View File

@@ -1,68 +1,76 @@
// Create popular machine Profiles (convenience module)
// Create popular profiles (convenience module)
module "profiles" {
source = "../modules/profiles"
matchbox_http_endpoint = "http://matchbox.example.com:8080"
coreos_version = "1298.7.0"
source = "../modules/profiles"
matchbox_http_endpoint = "${var.matchbox_http_endpoint}"
container_linux_version = "1353.7.0"
container_linux_channel = "stable"
}
// Install CoreOS to disk before provisioning
// Install Container Linux to disk before provisioning
resource "matchbox_group" "default" {
name = "default"
profile = "${module.profiles.coreos-install}"
name = "default"
profile = "${module.profiles.cached-container-linux-install}"
// No selector, matches all nodes
metadata {
coreos_channel = "stable"
coreos_version = "1298.7.0"
ignition_endpoint = "http://matchbox.example.com:8080/ignition"
baseurl = "http://matchbox.example.com:8080/assets/coreos"
ssh_authorized_key = "${var.ssh_authorized_key}"
container_linux_channel = "stable"
container_linux_version = "1353.7.0"
container_linux_oem = "${var.container_linux_oem}"
ignition_endpoint = "${var.matchbox_http_endpoint}/ignition"
baseurl = "${var.matchbox_http_endpoint}/assets/coreos"
ssh_authorized_key = "${var.ssh_authorized_key}"
}
}
// Create matcher groups for 3 machines
resource "matchbox_group" "node1" {
name = "node1"
name = "node1"
profile = "${module.profiles.etcd3}"
selector {
mac = "52:54:00:a1:9c:ae"
os = "installed"
os = "installed"
}
metadata {
domain_name = "node1.example.com"
etcd_name = "node1"
domain_name = "node1.example.com"
etcd_name = "node1"
etcd_initial_cluster = "node1=http://node1.example.com:2380,node2=http://node2.example.com:2380,node3=http://node3.example.com:2380"
ssh_authorized_key = "${var.ssh_authorized_key}"
ssh_authorized_key = "${var.ssh_authorized_key}"
}
}
resource "matchbox_group" "node2" {
name = "node2"
name = "node2"
profile = "${module.profiles.etcd3}"
selector {
mac = "52:54:00:b2:2f:86"
os = "installed"
os = "installed"
}
metadata {
domain_name = "node2.example.com"
etcd_name = "node2"
domain_name = "node2.example.com"
etcd_name = "node2"
etcd_initial_cluster = "node1=http://node1.example.com:2380,node2=http://node2.example.com:2380,node3=http://node3.example.com:2380"
ssh_authorized_key = "${var.ssh_authorized_key}"
ssh_authorized_key = "${var.ssh_authorized_key}"
}
}
resource "matchbox_group" "node3" {
name = "node3"
name = "node3"
profile = "${module.profiles.etcd3}"
selector {
mac = "52:54:00:c3:61:77"
os = "installed"
os = "installed"
}
metadata {
domain_name = "node3.example.com"
etcd_name = "node3"
domain_name = "node3.example.com"
etcd_name = "node3"
etcd_initial_cluster = "node1=http://node1.example.com:2380,node2=http://node2.example.com:2380,node3=http://node3.example.com:2380"
ssh_authorized_key = "${var.ssh_authorized_key}"
ssh_authorized_key = "${var.ssh_authorized_key}"
}
}

View File

@@ -1,7 +1,7 @@
// Configure the matchbox provider
provider "matchbox" {
endpoint = "matchbox.example.com:8081"
endpoint = "${var.matchbox_rpc_endpoint}"
client_cert = "${file("~/.matchbox/client.crt")}"
client_key = "${file("~/.matchbox/client.key")}"
ca = "${file("~/.matchbox/ca.crt")}"
client_key = "${file("~/.matchbox/client.key")}"
ca = "${file("~/.matchbox/ca.crt")}"
}

View File

@@ -0,0 +1,6 @@
matchbox_http_endpoint = "http://matchbox.example.com:8080"
matchbox_rpc_endpoint = "matchbox.example.com:8081"
# ssh_authorized_key = "ADD ME"
# Optional
# container_linux_oem = ""

View File

@@ -1,4 +1,20 @@
variable "ssh_authorized_key" {
type = "string"
default = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCt3BebCHqnSsgpLjo4kVvyfY/z2BS8t27r/7du+O2pb4xYkr7n+KFpbOz523vMTpQ+o1jY4u4TgexglyT9nqasWgLOvo1qjD1agHme8LlTPQSk07rXqOB85Uq5p7ig2zoOejF6qXhcc3n1c7+HkxHrgpBENjLVHOBpzPBIAHkAGaZcl07OCqbsG5yxqEmSGiAlh/IiUVOZgdDMaGjCRFy0wk0mQaGD66DmnFc1H5CzcPjsxr0qO65e7lTGsE930KkO1Vc+RHCVwvhdXs+c2NhJ2/3740Kpes9n1/YullaWZUzlCPDXtRuy6JRbFbvy39JUgHWGWzB3d+3f8oJ/N4qZ cardno:000603633110"
variable "matchbox_http_endpoint" {
type = "string"
description = "Matchbox HTTP read-only endpoint (e.g. http://matchbox.example.com:8080)"
}
variable "matchbox_rpc_endpoint" {
type = "string"
description = "Matchbox gRPC API endpoint, without the protocol (e.g. matchbox.example.com:8081)"
}
variable "ssh_authorized_key" {
type = "string"
description = "SSH public key to set as an authorized_key on machines"
}
variable "container_linux_oem" {
type = "string"
default = ""
description = "Specify an OEM image id to use as base for the installation (e.g. ami, vmware_raw, xen) or leave blank for the default image"
}

View File

@@ -0,0 +1,36 @@
# Terraform Modules
Matchbox provides Terraform [modules](https://www.terraform.io/docs/modules/usage.html) you can re-use directly within your own Terraform configs. Modules are updated regularly, so it is **recommended** that you pin the module version (e.g. `ref=sha`) to keep your configs deterministic.
```hcl
module "profiles" {
source = "git::https://github.com/coreos/matchbox.git//examples/terraform/modules/profiles?ref=4451425db8f230012c36de6e6628c72aa34e1c10"
matchbox_http_endpoint = "${var.matchbox_http_endpoint}"
container_linux_version = "${var.container_linux_version}"
container_linux_channel = "${var.container_linux_channel}"
}
```
Download referenced Terraform modules.
```sh
$ terraform get # does not check for updates
$ terraform get --update # checks for updates
```
Available modules:
| Module | Includes | Description |
|----------|-----------|-------------|
| profiles | * | Creates machine profiles you can reference in matcher groups |
| | container-linux-install | Install Container Linux to disk from core-os.net |
| | cached-container-linux-install | Install Container Linux to disk from matchbox assets cache |
| | etcd3 | Provision an etcd3 peer node |
| | etcd3-gateway | Provision an etcd3 gateway node |
| | bootkube-controller | Provision a self-hosted Kubernetes controller/master node |
| | bootkube-worker | Provision a self-hosted Kubernetes worker node |
| bootkube | | Creates a multi-controller, multi-worker self-hosted Kubernetes cluster |
## Customization
You are encouraged to look through the examples and modules. Implement your own profiles or package them as modules to meet your needs. We've just provided a starting point. Learn more about [matchbox](../../Documentation/matchbox.md) and [Container Linux configs](../../Documentation/container-linux-config.md).

View File

@@ -0,0 +1,12 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/dghubble/bootkube-terraform.git?ref=3720aff28a465987e079dcd74fe3b6d5046d7010"
cluster_name = "${var.cluster_name}"
api_servers = ["${var.k8s_domain_name}"]
etcd_servers = ["http://127.0.0.1:2379"]
asset_dir = "${var.asset_dir}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
experimental_self_hosted_etcd = "${var.experimental_self_hosted_etcd}"
}

View File

@@ -0,0 +1,61 @@
// Install Container Linux to disk
resource "matchbox_group" "container-linux-install" {
count = "${length(var.controller_names) + length(var.worker_names)}"
name = "${format("container-linux-install-%s", element(concat(var.controller_names, var.worker_names), count.index))}"
profile = "${module.profiles.cached-container-linux-install}"
selector {
mac = "${element(concat(var.controller_macs, var.worker_macs), count.index)}"
}
metadata {
container_linux_channel = "${var.container_linux_channel}"
container_linux_version = "${var.container_linux_version}"
container_linux_oem = "${var.container_linux_oem}"
ignition_endpoint = "${var.matchbox_http_endpoint}/ignition"
baseurl = "${var.matchbox_http_endpoint}/assets/coreos"
ssh_authorized_key = "${var.ssh_authorized_key}"
}
}
resource "matchbox_group" "controller" {
count = "${length(var.controller_names)}"
name = "${format("%s-%s", var.cluster_name, element(var.controller_names, count.index))}"
profile = "${module.profiles.bootkube-controller}"
selector {
mac = "${element(var.controller_macs, count.index)}"
os = "installed"
}
metadata {
domain_name = "${element(var.controller_domains, count.index)}"
etcd_name = "${element(var.controller_names, count.index)}"
etcd_initial_cluster = "${join(",", formatlist("%s=http://%s:2380", var.controller_names, var.controller_domains))}"
etcd_on_host = "${var.experimental_self_hosted_etcd ? "false" : "true"}"
k8s_dns_service_ip = "${module.bootkube.kube_dns_service_ip}"
k8s_etcd_service_ip = "${module.bootkube.etcd_service_ip}"
ssh_authorized_key = "${var.ssh_authorized_key}"
}
}
resource "matchbox_group" "worker" {
count = "${length(var.worker_names)}"
name = "${format("%s-%s", var.cluster_name, element(var.worker_names, count.index))}"
profile = "${module.profiles.bootkube-worker}"
selector {
mac = "${element(var.worker_macs, count.index)}"
os = "installed"
}
metadata {
domain_name = "${element(var.worker_domains, count.index)}"
etcd_endpoints = "${join(",", formatlist("%s:2379", var.controller_domains))}"
etcd_on_host = "${var.experimental_self_hosted_etcd ? "false" : "true"}"
k8s_dns_service_ip = "${module.bootkube.kube_dns_service_ip}"
k8s_etcd_service_ip = "${module.bootkube.etcd_service_ip}"
ssh_authorized_key = "${var.ssh_authorized_key}"
}
}
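The `etcd_initial_cluster` interpolation above zips controller names and domains into etcd's comma-separated peer list. A minimal shell sketch of the same string construction, using hypothetical node names and domains:

```shell
#!/usr/bin/env bash
# Build "name=http://domain:2380" pairs and join them with commas,
# mirroring Terraform's formatlist + join (values are hypothetical).
names=(node1 node2 node3)
domains=(node1.example.com node2.example.com node3.example.com)

parts=()
for i in "${!names[@]}"; do
  parts+=("${names[$i]}=http://${domains[$i]}:2380")
done

# Join array elements with "," inside a subshell so IFS is not leaked
etcd_initial_cluster=$(IFS=,; printf '%s' "${parts[*]}")
echo "$etcd_initial_cluster"
```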

View File

@@ -0,0 +1,7 @@
// Create common profiles
module "profiles" {
source = "../profiles"
matchbox_http_endpoint = "${var.matchbox_http_endpoint}"
container_linux_version = "${var.container_linux_version}"
container_linux_channel = "${var.container_linux_channel}"
}

View File

@@ -0,0 +1,51 @@
# Secure copy kubeconfig to all nodes to activate kubelet.service
resource "null_resource" "copy-kubeconfig" {
count = "${length(var.controller_names) + length(var.worker_names)}"
connection {
type = "ssh"
host = "${element(concat(var.controller_domains, var.worker_domains), count.index)}"
user = "core"
timeout = "60m"
}
provisioner "file" {
content = "${module.bootkube.kubeconfig}"
destination = "$HOME/kubeconfig"
}
provisioner "remote-exec" {
inline = [
"sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
]
}
}
# Secure copy bootkube assets to ONE controller and start bootkube to perform
# one-time self-hosted cluster bootstrapping.
resource "null_resource" "bootkube-start" {
# Without depends_on, this remote-exec may start before the kubeconfig copy.
# Terraform only does one task at a time, so it would try to bootstrap
# Kubernetes and Tectonic while no Kubelets are running. Ensure all nodes
# receive a kubeconfig before proceeding with bootkube and tectonic.
depends_on = ["null_resource.copy-kubeconfig"]
connection {
type = "ssh"
host = "${element(var.controller_domains, 0)}"
user = "core"
timeout = "60m"
}
provisioner "file" {
source = "${var.asset_dir}"
destination = "$HOME/assets"
}
provisioner "remote-exec" {
inline = [
"sudo mv /home/core/assets /opt/bootkube",
"sudo systemctl start bootkube",
]
}
}

View File

@@ -0,0 +1,89 @@
variable "matchbox_http_endpoint" {
type = "string"
description = "Matchbox HTTP read-only endpoint (e.g. http://matchbox.example.com:8080)"
}
variable "container_linux_channel" {
type = "string"
description = "Container Linux channel corresponding to the container_linux_version"
}
variable "container_linux_version" {
type = "string"
description = "Container Linux version of the kernel/initrd to PXE or the image to install"
}
variable "cluster_name" {
type = "string"
description = "Cluster name"
}
variable "ssh_authorized_key" {
type = "string"
description = "SSH public key to set as an authorized_key on machines"
}
# Machines
# Terraform's crude "type system" does not properly support lists of maps, so we use parallel lists.
variable "controller_names" {
type = "list"
}
variable "controller_macs" {
type = "list"
}
variable "controller_domains" {
type = "list"
}
variable "worker_names" {
type = "list"
}
variable "worker_macs" {
type = "list"
}
variable "worker_domains" {
type = "list"
}
# bootkube assets
variable "k8s_domain_name" {
description = "Controller DNS name which resolves to a controller instance. Workers and kubeconfigs will communicate with this endpoint (e.g. cluster.example.com)"
type = "string"
}
variable "asset_dir" {
description = "Path to a directory where generated assets should be placed (contains secrets)"
type = "string"
}
variable "pod_cidr" {
description = "CIDR IP range to assign Kubernetes pods"
type = "string"
default = "10.2.0.0/16"
}
variable "service_cidr" {
description = <<EOD
CIDR IP range to assign Kubernetes services.
The 1st IP is reserved for kube_apiserver, the 10th for kube-dns, the 15th for self-hosted etcd, and the 200th for bootstrap self-hosted etcd.
EOD
type = "string"
default = "10.3.0.0/16"
}
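The reserved offsets in the `service_cidr` description translate into concrete addresses. A minimal sketch, assuming the default `10.3.0.0/16` (valid only while the reserved offsets stay below 256, so just the last octet changes):

```shell
#!/usr/bin/env bash
# Derive the reserved service IPs from the service CIDR (sketch; assumes
# all reserved offsets fit in the last octet, as with the /16 default).
service_cidr="10.3.0.0/16"
network="${service_cidr%/*}"   # 10.3.0.0
base="${network%.*}"           # 10.3.0

kube_apiserver_ip="$base.1"    # 1st IP
kube_dns_ip="$base.10"         # 10th IP
self_hosted_etcd_ip="$base.15" # 15th IP
bootstrap_etcd_ip="$base.200"  # 200th IP
echo "$kube_apiserver_ip $kube_dns_ip $self_hosted_etcd_ip $bootstrap_etcd_ip"
```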
variable "container_linux_oem" {
type = "string"
default = ""
description = "Specify an OEM image id to use as base for the installation (e.g. ami, vmware_raw, xen) or leave blank for the default image"
}
variable "experimental_self_hosted_etcd" {
default = "false"
description = "Create self-hosted etcd cluster as pods on Kubernetes, instead of on-hosts"
}

View File

@@ -1,6 +1,7 @@
---
systemd:
units:
{{ if eq .etcd_on_host "true" }}
- name: etcd-member.service
enable: true
dropins:
@@ -15,6 +16,7 @@ systemd:
Environment="ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380"
Environment="ETCD_INITIAL_CLUSTER={{.etcd_initial_cluster}}"
Environment="ETCD_STRICT_RECONFIG_CHECK=true"
{{ end }}
- name: docker.service
enable: true
- name: locksmithd.service
@@ -23,6 +25,9 @@ systemd:
contents: |
[Service]
Environment="REBOOT_STRATEGY=etcd-lock"
{{ if eq .etcd_on_host "false" -}}
Environment="LOCKSMITHD_ENDPOINT=http://{{.k8s_etcd_service_ip}}:2379"
{{ end }}
- name: kubelet.path
enable: true
contents: |
@@ -50,8 +55,7 @@ systemd:
[Unit]
Description=Kubelet via Hyperkube ACI
[Service]
Environment=KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
Environment=KUBELET_IMAGE_TAG=v1.6.1_coreos.0
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
@@ -78,8 +82,8 @@ systemd:
--pod-manifest-path=/etc/kubernetes/manifests \
--allow-privileged \
--hostname-override={{.domain_name}} \
--node-labels=master=true \
--node-labels=node-role.kubernetes.io/master \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
--cluster_dns={{.k8s_dns_service_ip}} \
--cluster_domain=cluster.local
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
@@ -91,10 +95,13 @@ systemd:
contents: |
[Unit]
Description=Bootstrap a Kubernetes control plane with a temp api-server
ConditionPathExists=!/opt/bootkube/init_bootkube.done
[Service]
Type=simple
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootkube
ExecStart=/opt/bootkube/bootkube-start
ExecStartPost=/bin/touch /opt/bootkube/init_bootkube.done
storage:
{{ if index . "pxe" }}
disks:
@@ -113,12 +120,13 @@ storage:
- "-LROOT"
{{end}}
files:
- path: /etc/kubernetes/.empty
- path: /etc/kubernetes/kubelet.env
filesystem: root
mode: 0644
contents:
inline: |
empty
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_IMAGE_TAG=v1.6.4_coreos.0
- path: /etc/hostname
filesystem: root
mode: 0644
@@ -142,20 +150,23 @@ storage:
#!/bin/bash
# Wrapper for bootkube start
set -e
mkdir -p /tmp/bootkube
# Move experimental manifests
[ -d /opt/bootkube/assets/experimental/manifests ] && mv /opt/bootkube/assets/experimental/manifests/* /opt/bootkube/assets/manifests && rm -r /opt/bootkube/assets/experimental/manifests
[ -d /opt/bootkube/assets/experimental/bootstrap-manifests ] && mv /opt/bootkube/assets/experimental/bootstrap-manifests/* /opt/bootkube/assets/bootstrap-manifests && rm -r /opt/bootkube/assets/experimental/bootstrap-manifests
BOOTKUBE_ACI="${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
BOOTKUBE_VERSION="${BOOTKUBE_VERSION:-v0.4.0}"
BOOTKUBE_VERSION="${BOOTKUBE_VERSION:-v0.4.4}"
BOOTKUBE_ASSETS="${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
exec /usr/bin/rkt run \
--trust-keys-from-https \
--volume assets,kind=host,source=$BOOTKUBE_ASSETS \
--mount volume=assets,target=/assets \
--volume bootstrap,kind=host,source=/etc/kubernetes/manifests \
--mount volume=bootstrap,target=/etc/kubernetes/manifests \
--volume temp,kind=host,source=/tmp/bootkube \
--mount volume=temp,target=/tmp/bootkube \
--volume bootstrap,kind=host,source=/etc/kubernetes \
--mount volume=bootstrap,target=/etc/kubernetes \
$RKT_OPTS \
${BOOTKUBE_ACI}:${BOOTKUBE_VERSION} --net=host --exec=/bootkube -- start --asset-dir=/assets "$@"
${BOOTKUBE_ACI}:${BOOTKUBE_VERSION} \
--net=host \
--dns=host \
--exec=/bootkube -- start --asset-dir=/assets "$@"
passwd:
users:
- name: core

View File

@@ -1,6 +1,7 @@
---
systemd:
units:
{{ if eq .etcd_on_host "true" }}
- name: etcd-member.service
enable: true
dropins:
@@ -12,6 +13,7 @@ systemd:
ExecStart=/usr/lib/coreos/etcd-wrapper gateway start \
--listen-addr=127.0.0.1:2379 \
--endpoints={{.etcd_endpoints}}
{{ end }}
- name: docker.service
enable: true
- name: locksmithd.service
@@ -20,6 +22,9 @@ systemd:
contents: |
[Service]
Environment="REBOOT_STRATEGY=etcd-lock"
{{ if eq .etcd_on_host "false" -}}
Environment="LOCKSMITHD_ENDPOINT=http://{{.k8s_etcd_service_ip}}:2379"
{{ end }}
- name: kubelet.path
enable: true
contents: |
@@ -47,8 +52,7 @@ systemd:
[Unit]
Description=Kubelet via Hyperkube ACI
[Service]
Environment=KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
Environment=KUBELET_IMAGE_TAG=v1.6.1_coreos.0
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
@@ -102,12 +106,13 @@ storage:
- "-LROOT"
{{end}}
files:
- path: /etc/kubernetes/.empty
- path: /etc/kubernetes/kubelet.env
filesystem: root
mode: 0644
contents:
inline: |
empty
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_IMAGE_TAG=v1.6.4_coreos.0
- path: /etc/hostname
filesystem: root
mode: 0644

View File

@@ -21,7 +21,7 @@ storage:
inline: |
#!/bin/bash -ex
curl "{{.ignition_endpoint}}?{{.request.raw_query}}&os=installed" -o ignition.json
coreos-install -d /dev/sda -C {{.coreos_channel}} -V {{.coreos_version}} -i ignition.json {{if index . "baseurl"}}-b {{.baseurl}}{{end}}
coreos-install -d /dev/sda -C {{.container_linux_channel}} -V {{.container_linux_version}} -i ignition.json {{if index . "baseurl"}}-b {{.baseurl}}{{end}} {{if index . "container_linux_oem"}}-o {{.container_linux_oem}}{{end}}
udevadm settle
systemctl reboot
passwd:
@@ -29,3 +29,4 @@ passwd:
- name: core
ssh_authorized_keys:
- {{.ssh_authorized_key}}
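The install template above fetches a second Ignition config with `os=installed` appended to the original query, so after reboot the machine matches the groups that select `os = "installed"`. A sketch of the resulting URL, with a hypothetical matchbox endpoint and query values:

```shell
#!/usr/bin/env bash
# Compose the post-install Ignition URL: the machine's original raw query
# is echoed back with os=installed appended (endpoint/values hypothetical).
ignition_endpoint="http://matchbox.example.com:8080/ignition"
raw_query="uuid=16e7d8a7-bfa9-428b-9117-363341bb330b&mac=52-54-00-a1-9c-ae"
ignition_url="${ignition_endpoint}?${raw_query}&os=installed"
echo "$ignition_url"
```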

View File

@@ -1,5 +1,9 @@
output "coreos-install" {
value = "${matchbox_profile.coreos-install.name}"
output "container-linux-install" {
value = "${matchbox_profile.container-linux-install.name}"
}
output "cached-container-linux-install" {
value = "${matchbox_profile.cached-container-linux-install.name}"
}
output "etcd3" {

View File

@@ -1,39 +1,62 @@
// CoreOS Install Profile
resource "matchbox_profile" "coreos-install" {
name = "coreos-install"
kernel = "/assets/coreos/${var.coreos_version}/coreos_production_pxe.vmlinuz"
// Container Linux Install profile (from release.core-os.net)
resource "matchbox_profile" "container-linux-install" {
name = "container-linux-install"
kernel = "http://${var.container_linux_channel}.release.core-os.net/amd64-usr/${var.container_linux_version}/coreos_production_pxe.vmlinuz"
initrd = [
"/assets/coreos/${var.coreos_version}/coreos_production_pxe_image.cpio.gz"
"http://${var.container_linux_channel}.release.core-os.net/amd64-usr/${var.container_linux_version}/coreos_production_pxe_image.cpio.gz",
]
args = [
"coreos.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"coreos.first_boot=yes",
"console=tty0",
"console=ttyS0"
"console=ttyS0",
]
container_linux_config = "${file("${path.module}/cl/coreos-install.yaml.tmpl")}"
container_linux_config = "${file("${path.module}/cl/container-linux-install.yaml.tmpl")}"
}
// Container Linux Install profile (from matchbox /assets cache)
// Note: Admin must have downloaded container_linux_version into matchbox assets.
resource "matchbox_profile" "cached-container-linux-install" {
name = "cached-container-linux-install"
kernel = "/assets/coreos/${var.container_linux_version}/coreos_production_pxe.vmlinuz"
initrd = [
"/assets/coreos/${var.container_linux_version}/coreos_production_pxe_image.cpio.gz",
]
args = [
"coreos.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"coreos.first_boot=yes",
"console=tty0",
"console=ttyS0",
]
container_linux_config = "${file("${path.module}/cl/container-linux-install.yaml.tmpl")}"
}
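Both install profiles pass `$${mac:hexhyp}` in the kernel args: the doubled `$$` escapes Terraform interpolation so iPXE receives `${mac:hexhyp}`, which it expands to the machine's MAC with hyphen separators. A sketch of that normalization from a colon-separated MAC (hypothetical value):

```shell
#!/usr/bin/env bash
# iPXE's ${mac:hexhyp} renders the MAC address with hyphens instead of
# colons; reproduce that form from a colon-separated MAC (hypothetical).
mac="52:54:00:a1:9c:ae"
mac_hexhyp=$(printf '%s' "$mac" | tr ':' '-')
echo "$mac_hexhyp"
```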
// etcd3 profile
resource "matchbox_profile" "etcd3" {
name = "etcd3"
name = "etcd3"
container_linux_config = "${file("${path.module}/cl/etcd3.yaml.tmpl")}"
}
// etcd3 Gateway profile
resource "matchbox_profile" "etcd3-gateway" {
name = "etcd3-gateway"
name = "etcd3-gateway"
container_linux_config = "${file("${path.module}/cl/etcd3-gateway.yaml.tmpl")}"
}
// Self-hosted Kubernetes (bootkube) Controller profile
resource "matchbox_profile" "bootkube-controller" {
name = "bootkube-controller"
name = "bootkube-controller"
container_linux_config = "${file("${path.module}/cl/bootkube-controller.yaml.tmpl")}"
}
// Self-hosted Kubernetes (bootkube) Worker profile
resource "matchbox_profile" "bootkube-worker" {
name = "bootkube-worker"
name = "bootkube-worker"
container_linux_config = "${file("${path.module}/cl/bootkube-worker.yaml.tmpl")}"
}

View File

@@ -1,9 +1,14 @@
variable "matchbox_http_endpoint" {
type = "string"
type = "string"
description = "Matchbox HTTP read-only endpoint (e.g. http://matchbox.example.com:8080)"
}
variable "coreos_version" {
type = "string"
description = "CoreOS kernel/initrd version to PXE boot. Must be present in matchbox assets."
variable "container_linux_version" {
type = "string"
description = "Container Linux version of the kernel/initrd to PXE or the image to install"
}
variable "container_linux_channel" {
type = "string"
description = "Container Linux channel corresponding to the container_linux_version"
}

View File

@@ -1,21 +1,24 @@
// Default matcher group for machines
resource "matchbox_group" "default" {
name = "default"
name = "default"
profile = "${matchbox_profile.coreos-install.name}"
# no selector means all machines can be matched
metadata {
ignition_endpoint = "${var.matchbox_http_endpoint}/ignition"
ignition_endpoint = "${var.matchbox_http_endpoint}/ignition"
ssh_authorized_key = "${var.ssh_authorized_key}"
}
}
// Match machines which have CoreOS installed
resource "matchbox_group" "node1" {
name = "node1"
name = "node1"
profile = "${matchbox_profile.simple.name}"
selector {
os = "installed"
}
metadata {
ssh_authorized_key = "${var.ssh_authorized_key}"
}

View File

@@ -1,21 +1,24 @@
// Create a CoreOS-install profile
resource "matchbox_profile" "coreos-install" {
name = "coreos-install"
name = "coreos-install"
kernel = "http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz"
initrd = [
"http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz"
"http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz",
]
args = [
"coreos.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"coreos.first_boot=yes",
"console=tty0",
"console=ttyS0",
]
container_linux_config = "${file("./cl/coreos-install.yaml.tmpl")}"
}
// Create a simple profile which just sets an SSH authorized_key
resource "matchbox_profile" "simple" {
name = "simple"
name = "simple"
container_linux_config = "${file("./cl/simple.yaml.tmpl")}"
}

View File

@@ -1,7 +1,7 @@
// Configure the matchbox provider
provider "matchbox" {
endpoint = "${var.matchbox_rpc_endpoint}"
endpoint = "${var.matchbox_rpc_endpoint}"
client_cert = "${file("~/.matchbox/client.crt")}"
client_key = "${file("~/.matchbox/client.key")}"
ca = "${file("~/.matchbox/ca.crt")}"
client_key = "${file("~/.matchbox/client.key")}"
ca = "${file("~/.matchbox/ca.crt")}"
}

View File

@@ -1,14 +1,14 @@
variable "matchbox_http_endpoint" {
type = "string"
type = "string"
description = "Matchbox HTTP read-only endpoint (e.g. http://matchbox.example.com:8080)"
}
variable "matchbox_rpc_endpoint" {
type = "string"
type = "string"
description = "Matchbox gRPC API endpoint, without the protocol (e.g. matchbox.example.com:8081)"
}
variable "ssh_authorized_key" {
type = "string"
type = "string"
description = "SSH public key to set as an authorized_key on machines"
}

glide.lock generated
View File

@@ -1,5 +1,5 @@
hash: 205de0b66ed059a1f10d3fb36c7d465439818123940a9aaa68ddc71cc3bbfddd
updated: 2017-04-17T17:09:48.864562358-07:00
hash: 7de5ab95677974311285feaa83e24f127bbb4c64a68740bab24d71f491e8b689
updated: 2017-05-24T15:28:05.291154327-07:00
imports:
- name: github.com/ajeddeloh/go-json
version: 73d058cf8437a1989030afe571eeab9f90eebbbd
@@ -80,7 +80,7 @@ imports:
subpackages:
- errorutil
- name: golang.org/x/crypto
version: 5dc8cb4b8a8eb076cbb5a06bc3b8682c15bdbbd3
version: 7e9105388ebff089b3f99f0ef676ea55a6da3a7e
subpackages:
- cast5
- openpgp

View File

@@ -59,7 +59,7 @@ import:
- package: github.com/spf13/cobra
version: 65a708cee0a4424f4e353d031ce440643e312f92
- package: golang.org/x/crypto
version: 5dc8cb4b8a8eb076cbb5a06bc3b8682c15bdbbd3
version: 7e9105388ebff089b3f99f0ef676ea55a6da3a7e
subpackages:
- cast5
- openpgp

View File

@@ -1,11 +0,0 @@
NODE1_NAME=node1
NODE1_MAC=52:54:00:a1:9c:ae
NODE2_NAME=node2
NODE2_MAC=52:54:00:b2:2f:86
NODE3_NAME=node3
NODE3_MAC=52:54:00:c3:61:77
NODE4_NAME=node4
NODE4_MAC=52:54:00:d7:99:c7

View File

@@ -1,7 +1,8 @@
#!/usr/bin/env bash
set -e
GIT_SHA=$(./scripts/git-version)
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
GIT_SHA=$($DIR/git-version)
# Start with an empty ACI
acbuild --debug begin

View File

@@ -4,7 +4,7 @@
set -eu
DEST=${1:-"bin"}
VERSION="v0.4.0"
VERSION="v0.4.4"
URL="https://github.com/kubernetes-incubator/bootkube/releases/download/${VERSION}/bootkube.tar.gz"

View File

@@ -4,7 +4,7 @@
set -eu
DEST=${1:-"bin"}
VERSION="v1.5.5"
VERSION="v1.6.4"
URL="https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kubectl"

View File

@@ -18,9 +18,9 @@ cp README.md $DEST
# scripts
mkdir -p $SCRIPTS/tls
cp scripts/get-coreos $SCRIPTS
cp examples/etc/matchbox/README.md $SCRIPTS/tls
cp examples/etc/matchbox/cert-gen $SCRIPTS/tls
cp examples/etc/matchbox/openssl.conf $SCRIPTS/tls
cp scripts/tls/README.md $SCRIPTS/tls
cp scripts/tls/cert-gen $SCRIPTS/tls
cp scripts/tls/openssl.conf $SCRIPTS/tls
# systemd
mkdir -p $CONTRIB/systemd

View File

@@ -1,10 +1,8 @@
#!/bin/bash -e
PKGS=$(go list ./... | grep -v /vendor)
FORMATTABLE="$(ls -d */ | grep -v vendor/)"
LINT_EXCLUDE='(/vendor|pb$)'
LINTABLE=$(go list ./... | grep -v -E $LINT_EXCLUDE)
FORMATTABLE=$(ls -d */ | grep -v -E '(vendor/|examples/)')
LINTABLE=$(go list ./... | grep -v -E '(vendor/|pb$)')
go test $PKGS -cover
go vet $PKGS

View File

@@ -10,10 +10,11 @@ DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
EXAMPLE=${2:-}
BRIDGE=metal0
COREOS_CHANNEL=stable
COREOS_VERSION=1298.7.0
COREOS_VERSION=1353.7.0
MATCHBOX_ARGS=""
ASSETS_DIR="${ASSETS_DIR:-$PWD/examples/assets}"
CONFIG_DIR="${CONFIG_DIR:-$PWD/examples/etc/matchbox}"
if [ "$EUID" -ne 0 ]
then echo "Please run as root"
@@ -87,10 +88,10 @@ function create {
--trust-keys-from-https \
--net=metal0:IP=172.18.0.2 \
--mount volume=config,target=/etc/matchbox \
--volume config,kind=host,source=$PWD/examples/etc/matchbox,readOnly=true \
--volume config,kind=host,source=$CONFIG_DIR,readOnly=true \
--mount volume=data,target=/var/lib/matchbox \
$DATA_MOUNT \
quay.io/coreos/matchbox:c8af40108fb06f345a5fdae915874b0b1b606e1a -- -address=0.0.0.0:8080 -log-level=debug $MATCHBOX_ARGS
quay.io/coreos/matchbox:23f23c1dcb78b123754ffb4e64f21cd8269093ce -- -address=0.0.0.0:8080 -log-level=debug $MATCHBOX_ARGS
echo "Starting dnsmasq to provide DHCP/TFTP/DNS services"
rkt rm --uuid-file=/var/run/dnsmasq-pod.uuid > /dev/null 2>&1

View File

@@ -1,13 +1,17 @@
#!/bin/bash
# USAGE: ./scripts/get-coreos
# USAGE: ./scripts/get-coreos channel version dest
#
# ENV VARS:
# - OEM_ID - specify OEM image id to download, alongside the default one
set -eou pipefail
GPG=${GPG:-/usr/bin/gpg}
CHANNEL=${1:-"stable"}
VERSION=${2:-"1298.7.0"}
VERSION=${2:-"1353.7.0"}
DEST_DIR=${3:-"$PWD/examples/assets"}
OEM_ID=${OEM_ID:-""}
DEST=$DEST_DIR/coreos/$VERSION
BASE_URL=https://$CHANNEL.release.core-os.net/amd64-usr/$VERSION
@@ -22,6 +26,16 @@ if [ ! -d "$DEST" ]; then
mkdir -p $DEST
fi
if [[ -n "${OEM_ID}" ]]; then
IMAGE_NAME="coreos_production_${OEM_ID}_image.bin.bz2"
# check if the oem version exists based on the header response
if ! curl -s -I $BASE_URL/$IMAGE_NAME | grep -q -E '^HTTP/[0-9.]+ [23][0-9][0-9]' ; then
echo "OEM version not found"
exit 1
fi
fi
echo "Downloading CoreOS $CHANNEL $VERSION images and sigs to $DEST"
echo "CoreOS Image Signing Key"
@@ -46,7 +60,20 @@ curl -# $BASE_URL/coreos_production_image.bin.bz2 -o $DEST/coreos_production_ima
echo "coreos_production_image.bin.bz2.sig"
curl -# $BASE_URL/coreos_production_image.bin.bz2.sig -o $DEST/coreos_production_image.bin.bz2.sig
# Install oem image
if [[ -n "${IMAGE_NAME-}" ]]; then
echo $IMAGE_NAME
curl -# $BASE_URL/$IMAGE_NAME -o $DEST/$IMAGE_NAME
echo "$IMAGE_NAME.sig"
curl -# $BASE_URL/$IMAGE_NAME.sig -o $DEST/$IMAGE_NAME.sig
fi
# verify signatures
$GPG --verify $DEST/coreos_production_pxe.vmlinuz.sig
$GPG --verify $DEST/coreos_production_pxe_image.cpio.gz.sig
$GPG --verify $DEST/coreos_production_image.bin.bz2.sig
# verify oem signature
if [[ -n "${IMAGE_NAME-}" ]]; then
$GPG --verify $DEST/$IMAGE_NAME.sig
fi

Some files were not shown because too many files have changed in this diff.