Merge pull request #477 from coreos/terraform-examples

Add terraform examples for etcd3 and self-hosted Kubernetes
This commit is contained in:
Dalton Hubble
2017-04-17 13:56:55 -07:00
committed by GitHub
16 changed files with 703 additions and 0 deletions

examples/terraform/.gitignore vendored Normal file
View File

@@ -0,0 +1,3 @@
*.tfstate*
.terraform
assets

View File

@@ -0,0 +1,96 @@
# Self-hosted Kubernetes
This example provisions a 3-node "self-hosted" Kubernetes v1.6.1 cluster. On-host kubelets wait for an apiserver to become reachable, then yield to kubelet pods scheduled via a DaemonSet. [bootkube](https://github.com/kubernetes-incubator/bootkube) is run on any controller to bootstrap a temporary apiserver, which schedules the control plane components as pods before exiting. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
## Requirements
* Create a PXE network boot environment (e.g. with `coreos/dnsmasq`)
* Run a `matchbox` service with the gRPC API enabled (a quick reachability check is shown after this list)
* 3 machines with known DNS names and MAC addresses for this example
* Matchbox provider credentials: a `client.crt`, `client.key`, and `ca.crt`.
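Before provisioning, it can help to confirm both matchbox endpoints respond. A minimal sketch, assuming matchbox serves the read-only HTTP API on `matchbox.example.com:8080`, the gRPC API on `:8081`, and that the TLS credentials live under `~/.matchbox/` (as in the provider config below):

```sh
# Read-only HTTP API used by machines (serves Ignition configs and assets);
# expect a short "matchbox" response
curl http://matchbox.example.com:8080

# gRPC API used by the Terraform provider (requires the client credentials)
openssl s_client -connect matchbox.example.com:8081 \
  -CAfile ~/.matchbox/ca.crt \
  -cert ~/.matchbox/client.crt \
  -key ~/.matchbox/client.key
```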
Install [bootkube](https://github.com/kubernetes-incubator/bootkube/releases) v0.4.0 and add it somewhere on your PATH.
```sh
$ bootkube version
Version v0.4.0
```
Use the `bootkube` tool to render Kubernetes manifests and credentials into an `--asset-dir`. Later, `bootkube` will schedule these manifests during bootstrapping and the credentials will be used to access your cluster.
```sh
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com
```
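If rendering succeeds, the asset directory holds the manifests and credentials bootkube will use during bootstrapping. The exact layout varies by bootkube version; roughly, expect something like:

```sh
ls assets
# auth  bootstrap-manifests  manifests  tls
ls assets/auth
# kubeconfig
```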
## Infrastructure
Plan and apply the Terraform configuration. It creates `bootkube-controller`, `bootkube-worker`, and `install-reboot` profiles with Container Linux configs, and matcher groups for `node1.example.com`, `node2.example.com`, and `node3.example.com`.
```
cd examples/bootkube-install
terraform plan
terraform apply
```
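Optionally, confirm matchbox matches the machines and renders an Ignition config for them. A sketch assuming the example MAC addresses from `groups.tf` (substitute your own):

```sh
# Config matchbox would serve node1 once it reports os=installed
curl "http://matchbox.example.com:8080/ignition?mac=52:54:00:a1:9c:ae&os=installed"
```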
Power on each machine and wait for it to PXE boot, install CoreOS to disk, and provision itself.
## Bootstrap
Secure copy the kubeconfig to `/etc/kubernetes/kubeconfig` on every node. The `kubelet.path` unit watches for this file and path-activates `kubelet.service` once it exists.
```
for node in 'node1' 'node2' 'node3'; do
scp assets/auth/kubeconfig core@$node.example.com:/home/core/kubeconfig
ssh core@$node.example.com 'sudo mv kubeconfig /etc/kubernetes/kubeconfig'
done
```
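Once the kubeconfig is in place, the `kubelet.path` unit should path-activate the kubelet on each node. A quick check, assuming SSH access as the `core` user:

```sh
ssh core@node1.example.com 'systemctl status kubelet.path kubelet.service --no-pager'
```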
Secure copy the bootkube-generated assets to any controller node and start the `bootkube` service.
```
scp -r assets core@node1.example.com:/home/core
ssh core@node1.example.com 'sudo mv assets /opt/bootkube/assets && sudo systemctl start bootkube'
```
Optionally, watch the bootkube temporary apiserver bootstrap the Kubernetes control plane. Expect quite a bit of output.
```
$ ssh core@node1.example.com 'journalctl -f -u bootkube'
[ 299.241291] bootkube[5]: Pod Status: kube-api-checkpoint Running
[ 299.241618] bootkube[5]: Pod Status: kube-apiserver Running
[ 299.241804] bootkube[5]: Pod Status: kube-scheduler Running
[ 299.241993] bootkube[5]: Pod Status: kube-controller-manager Running
[ 299.311743] bootkube[5]: All self-hosted control plane components successfully started
```
## Verify
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your laptop. Use the generated kubeconfig to access the Kubernetes cluster. Verify that the cluster is accessible and that the kubelet, apiserver, scheduler, and controller-manager are running as pods.
```sh
$ export KUBECONFIG=assets/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE
node1.example.com Ready 3m
node2.example.com Ready 3m
node3.example.com Ready 3m
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system checkpoint-installer-p8g8r 1/1 Running 1 13m
kube-system kube-apiserver-s5gnx 1/1 Running 1 41s
kube-system kube-controller-manager-3438979800-jrlnd 1/1 Running 1 13m
kube-system kube-controller-manager-3438979800-tkjx7 1/1 Running 1 13m
kube-system kube-dns-4101612645-xt55f 4/4 Running 4 13m
kube-system kube-flannel-pl5c2 2/2 Running 0 13m
kube-system kube-flannel-r9t5r 2/2 Running 3 13m
kube-system kube-flannel-vfb0s 2/2 Running 4 13m
kube-system kube-proxy-cvhmj 1/1 Running 0 13m
kube-system kube-proxy-hf9mh 1/1 Running 1 13m
kube-system kube-proxy-kpl73 1/1 Running 1 13m
kube-system kube-scheduler-694795526-1l23b 1/1 Running 1 13m
kube-system kube-scheduler-694795526-fks0b 1/1 Running 1 13m
kube-system pod-checkpointer-node1.example.com 1/1 Running 2 10m
```
Try deleting pods, or let machines reboot (CoreOS auto-updates), to verify the control plane recovers from failures and restarts.
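For example, delete one of the scheduler pods and watch a replacement get created (pod names will differ in your cluster; the one below comes from the listing above):

```sh
kubectl --namespace=kube-system delete pod kube-scheduler-694795526-1l23b
kubectl --namespace=kube-system get pods --watch
```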

View File

@@ -0,0 +1,70 @@
// Create popular machine Profiles (convenience module)
module "profiles" {
  source                 = "../modules/profiles"
  matchbox_http_endpoint = "http://matchbox.example.com:8080"
  coreos_version         = "1235.9.0"
}

// Install CoreOS to disk before provisioning
resource "matchbox_group" "default" {
  name    = "default"
  profile = "${module.profiles.coreos-install}"

  // No selector, matches all nodes
  metadata {
    coreos_channel     = "stable"
    coreos_version     = "1235.9.0"
    ignition_endpoint  = "http://matchbox.example.com:8080/ignition"
    baseurl            = "http://matchbox.example.com:8080/assets/coreos"
    ssh_authorized_key = "${var.ssh_authorized_key}"
  }
}

// Create a controller matcher group
resource "matchbox_group" "node1" {
  name    = "node1"
  profile = "${module.profiles.bootkube-controller}"

  selector {
    mac = "52:54:00:a1:9c:ae"
    os  = "installed"
  }

  metadata {
    domain_name          = "node1.example.com"
    etcd_name            = "node1"
    etcd_initial_cluster = "node1=http://node1.example.com:2380"
    k8s_dns_service_ip   = "${var.k8s_dns_service_ip}"
    ssh_authorized_key   = "${var.ssh_authorized_key}"
  }
}
// Create worker matcher groups
resource "matchbox_group" "node2" {
  name    = "node2"
  profile = "${module.profiles.bootkube-worker}"

  selector {
    mac = "52:54:00:b2:2f:86"
    os  = "installed"
  }

  metadata {
    domain_name        = "node2.example.com"
    // etcd gateway endpoints use the etcd client port
    etcd_endpoints     = "node1.example.com:2379"
    k8s_dns_service_ip = "${var.k8s_dns_service_ip}"
    ssh_authorized_key = "${var.ssh_authorized_key}"
  }
}
resource "matchbox_group" "node3" {
name = "node3"
profile = "${module.profiles.bootkube-worker}"
selector {
mac = "52:54:00:c3:61:77"
os = "installed"
}
metadata {
domain_name = "node3.example.com"
etcd_endpoints = "node1.example.com:2380"
k8s_dns_service_ip = "${var.k8s_dns_service_ip}"
ssh_authorized_key = "${var.ssh_authorized_key}"
}
}

View File

@@ -0,0 +1,7 @@
// Configure the matchbox provider
provider "matchbox" {
  endpoint    = "matchbox.example.com:8081"
  client_cert = "${file("~/.matchbox/client.crt")}"
  client_key  = "${file("~/.matchbox/client.key")}"
  ca          = "${file("~/.matchbox/ca.crt")}"
}

View File

@@ -0,0 +1,10 @@
variable "ssh_authorized_key" {
type = "string"
default = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCt3BebCHqnSsgpLjo4kVvyfY/z2BS8t27r/7du+O2pb4xYkr7n+KFpbOz523vMTpQ+o1jY4u4TgexglyT9nqasWgLOvo1qjD1agHme8LlTPQSk07rXqOB85Uq5p7ig2zoOejF6qXhcc3n1c7+HkxHrgpBENjLVHOBpzPBIAHkAGaZcl07OCqbsG5yxqEmSGiAlh/IiUVOZgdDMaGjCRFy0wk0mQaGD66DmnFc1H5CzcPjsxr0qO65e7lTGsE930KkO1Vc+RHCVwvhdXs+c2NhJ2/3740Kpes9n1/YullaWZUzlCPDXtRuy6JRbFbvy39JUgHWGWzB3d+3f8oJ/N4qZ cardno:000603633110"
}
variable "k8s_dns_service_ip" {
type = "string"
default = "10.3.0.10"
description = "Cluster DNS servce IP address passed via the Kubelet --cluster-dns flag"
}

View File

@@ -0,0 +1,68 @@
// Create popular machine Profiles (convenience module)
module "profiles" {
  source                 = "../modules/profiles"
  matchbox_http_endpoint = "http://matchbox.example.com:8080"
  coreos_version         = "1235.9.0"
}

// Install CoreOS to disk before provisioning
resource "matchbox_group" "default" {
  name    = "default"
  profile = "${module.profiles.coreos-install}"

  // No selector, matches all nodes
  metadata {
    coreos_channel     = "stable"
    coreos_version     = "1235.9.0"
    ignition_endpoint  = "http://matchbox.example.com:8080/ignition"
    baseurl            = "http://matchbox.example.com:8080/assets/coreos"
    ssh_authorized_key = "${var.ssh_authorized_key}"
  }
}

// Create matcher groups for 3 machines
resource "matchbox_group" "node1" {
  name    = "node1"
  profile = "${module.profiles.etcd3}"

  selector {
    mac = "52:54:00:a1:9c:ae"
    os  = "installed"
  }

  metadata {
    domain_name          = "node1.example.com"
    etcd_name            = "node1"
    etcd_initial_cluster = "node1=http://node1.example.com:2380,node2=http://node2.example.com:2380,node3=http://node3.example.com:2380"
    ssh_authorized_key   = "${var.ssh_authorized_key}"
  }
}

resource "matchbox_group" "node2" {
  name    = "node2"
  profile = "${module.profiles.etcd3}"

  selector {
    mac = "52:54:00:b2:2f:86"
    os  = "installed"
  }

  metadata {
    domain_name          = "node2.example.com"
    etcd_name            = "node2"
    etcd_initial_cluster = "node1=http://node1.example.com:2380,node2=http://node2.example.com:2380,node3=http://node3.example.com:2380"
    ssh_authorized_key   = "${var.ssh_authorized_key}"
  }
}

resource "matchbox_group" "node3" {
  name    = "node3"
  profile = "${module.profiles.etcd3}"

  selector {
    mac = "52:54:00:c3:61:77"
    os  = "installed"
  }

  metadata {
    domain_name          = "node3.example.com"
    etcd_name            = "node3"
    etcd_initial_cluster = "node1=http://node1.example.com:2380,node2=http://node2.example.com:2380,node3=http://node3.example.com:2380"
    ssh_authorized_key   = "${var.ssh_authorized_key}"
  }
}

View File

@@ -0,0 +1,7 @@
// Configure the matchbox provider
provider "matchbox" {
  endpoint    = "matchbox.example.com:8081"
  client_cert = "${file("~/.matchbox/client.crt")}"
  client_key  = "${file("~/.matchbox/client.key")}"
  ca          = "${file("~/.matchbox/ca.crt")}"
}

View File

@@ -0,0 +1,4 @@
variable "ssh_authorized_key" {
type = "string"
default = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCt3BebCHqnSsgpLjo4kVvyfY/z2BS8t27r/7du+O2pb4xYkr7n+KFpbOz523vMTpQ+o1jY4u4TgexglyT9nqasWgLOvo1qjD1agHme8LlTPQSk07rXqOB85Uq5p7ig2zoOejF6qXhcc3n1c7+HkxHrgpBENjLVHOBpzPBIAHkAGaZcl07OCqbsG5yxqEmSGiAlh/IiUVOZgdDMaGjCRFy0wk0mQaGD66DmnFc1H5CzcPjsxr0qO65e7lTGsE930KkO1Vc+RHCVwvhdXs+c2NhJ2/3740Kpes9n1/YullaWZUzlCPDXtRuy6JRbFbvy39JUgHWGWzB3d+3f8oJ/N4qZ cardno:000603633110"
}

View File

@@ -0,0 +1,162 @@
---
systemd:
  units:
    - name: etcd-member.service
      enable: true
      dropins:
        - name: 40-etcd-cluster.conf
          contents: |
            [Service]
            Environment="ETCD_IMAGE_TAG=v3.1.0"
            Environment="ETCD_NAME={{.etcd_name}}"
            Environment="ETCD_ADVERTISE_CLIENT_URLS=http://{{.domain_name}}:2379"
            Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=http://{{.domain_name}}:2380"
            Environment="ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379"
            Environment="ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380"
            Environment="ETCD_INITIAL_CLUSTER={{.etcd_initial_cluster}}"
            Environment="ETCD_STRICT_RECONFIG_CHECK=true"
    - name: docker.service
      enable: true
    - name: locksmithd.service
      dropins:
        - name: 40-etcd-lock.conf
          contents: |
            [Service]
            Environment="REBOOT_STRATEGY=etcd-lock"
    - name: kubelet.path
      enable: true
      contents: |
        [Unit]
        Description=Watch for kubeconfig
        [Path]
        PathExists=/etc/kubernetes/kubeconfig
        [Install]
        WantedBy=multi-user.target
    - name: wait-for-dns.service
      enable: true
      contents: |
        [Unit]
        Description=Wait for DNS entries
        Wants=systemd-resolved.service
        Before=kubelet.service
        [Service]
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
        [Install]
        RequiredBy=kubelet.service
    - name: kubelet.service
      contents: |
        [Unit]
        Description=Kubelet via Hyperkube ACI
        [Service]
        Environment="RKT_OPTS=--uuid-file-save=/var/run/kubelet-pod.uuid \
          --volume=resolv,kind=host,source=/etc/resolv.conf \
          --mount volume=resolv,target=/etc/resolv.conf \
          --volume var-lib-cni,kind=host,source=/var/lib/cni \
          --mount volume=var-lib-cni,target=/var/lib/cni \
          --volume var-log,kind=host,source=/var/log \
          --mount volume=var-log,target=/var/log"
        EnvironmentFile=/etc/kubernetes/kubelet.env
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
        ExecStartPre=/bin/mkdir -p /var/lib/cni
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
        ExecStart=/usr/lib/coreos/kubelet-wrapper \
          --kubeconfig=/etc/kubernetes/kubeconfig \
          --require-kubeconfig \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --anonymous-auth=false \
          --cni-conf-dir=/etc/kubernetes/cni/net.d \
          --network-plugin=cni \
          --lock-file=/var/run/lock/kubelet.lock \
          --exit-on-lock-contention \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --allow-privileged \
          --hostname-override={{.domain_name}} \
          --node-labels=master=true \
          --cluster_dns={{.k8s_dns_service_ip}} \
          --cluster_domain=cluster.local
        ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
        Restart=always
        RestartSec=10
        [Install]
        WantedBy=multi-user.target
    - name: bootkube.service
      contents: |
        [Unit]
        Description=Bootstrap a Kubernetes control plane with a temp api-server
        [Service]
        Type=simple
        WorkingDirectory=/opt/bootkube
        ExecStart=/opt/bootkube/bootkube-start
storage:
  {{ if index . "pxe" }}
  disks:
    - device: /dev/sda
      wipe_table: true
      partitions:
        - label: ROOT
  filesystems:
    - name: root
      mount:
        device: "/dev/sda1"
        format: "ext4"
        create:
          force: true
          options:
            - "-LROOT"
  {{end}}
  files:
    - path: /etc/kubernetes/kubelet.env
      filesystem: root
      mode: 0644
      contents:
        inline: |
          KUBELET_ACI=quay.io/coreos/hyperkube
          KUBELET_VERSION=v1.6.1_coreos.0
    - path: /etc/hostname
      filesystem: root
      mode: 0644
      contents:
        inline:
          {{.domain_name}}
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
    - path: /opt/bootkube/bootkube-start
      filesystem: root
      mode: 0544
      user:
        id: 500
      group:
        id: 500
      contents:
        inline: |
          #!/bin/bash
          # Wrapper for bootkube start
          set -e
          mkdir -p /tmp/bootkube
          BOOTKUBE_ACI="${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
          BOOTKUBE_VERSION="${BOOTKUBE_VERSION:-v0.4.0}"
          BOOTKUBE_ASSETS="${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
          exec /usr/bin/rkt run \
            --trust-keys-from-https \
            --volume assets,kind=host,source=$BOOTKUBE_ASSETS \
            --mount volume=assets,target=/assets \
            --volume bootstrap,kind=host,source=/etc/kubernetes/manifests \
            --mount volume=bootstrap,target=/etc/kubernetes/manifests \
            --volume temp,kind=host,source=/tmp/bootkube \
            --mount volume=temp,target=/tmp/bootkube \
            $RKT_OPTS \
            ${BOOTKUBE_ACI}:${BOOTKUBE_VERSION} --net=host --exec=/bootkube -- start --asset-dir=/assets "$@"
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - {{.ssh_authorized_key}}

View File

@@ -0,0 +1,125 @@
---
systemd:
  units:
    - name: etcd-member.service
      enable: true
      dropins:
        - name: 40-etcd-cluster.conf
          contents: |
            [Service]
            Environment="ETCD_IMAGE_TAG=v3.1.0"
            ExecStart=
            ExecStart=/usr/lib/coreos/etcd-wrapper gateway start \
              --listen-addr=127.0.0.1:2379 \
              --endpoints={{.etcd_endpoints}}
    - name: docker.service
      enable: true
    - name: locksmithd.service
      dropins:
        - name: 40-etcd-lock.conf
          contents: |
            [Service]
            Environment="REBOOT_STRATEGY=etcd-lock"
    - name: kubelet.path
      enable: true
      contents: |
        [Unit]
        Description=Watch for kubeconfig
        [Path]
        PathExists=/etc/kubernetes/kubeconfig
        [Install]
        WantedBy=multi-user.target
    - name: wait-for-dns.service
      enable: true
      contents: |
        [Unit]
        Description=Wait for DNS entries
        Wants=systemd-resolved.service
        Before=kubelet.service
        [Service]
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
        [Install]
        RequiredBy=kubelet.service
    - name: kubelet.service
      contents: |
        [Unit]
        Description=Kubelet via Hyperkube ACI
        [Service]
        Environment="RKT_OPTS=--uuid-file-save=/var/run/kubelet-pod.uuid \
          --volume=resolv,kind=host,source=/etc/resolv.conf \
          --mount volume=resolv,target=/etc/resolv.conf \
          --volume var-lib-cni,kind=host,source=/var/lib/cni \
          --mount volume=var-lib-cni,target=/var/lib/cni \
          --volume var-log,kind=host,source=/var/log \
          --mount volume=var-log,target=/var/log"
        EnvironmentFile=/etc/kubernetes/kubelet.env
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
        ExecStartPre=/bin/mkdir -p /var/lib/cni
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
        ExecStart=/usr/lib/coreos/kubelet-wrapper \
          --kubeconfig=/etc/kubernetes/kubeconfig \
          --require-kubeconfig \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --anonymous-auth=false \
          --cni-conf-dir=/etc/kubernetes/cni/net.d \
          --network-plugin=cni \
          --lock-file=/var/run/lock/kubelet.lock \
          --exit-on-lock-contention \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --allow-privileged \
          --hostname-override={{.domain_name}} \
          --cluster_dns={{.k8s_dns_service_ip}} \
          --cluster_domain=cluster.local
        ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
        Restart=always
        RestartSec=5
        [Install]
        WantedBy=multi-user.target
storage:
  {{ if index . "pxe" }}
  disks:
    - device: /dev/sda
      wipe_table: true
      partitions:
        - label: ROOT
  filesystems:
    - name: root
      mount:
        device: "/dev/sda1"
        format: "ext4"
        create:
          force: true
          options:
            - "-LROOT"
  {{end}}
  files:
    - path: /etc/kubernetes/kubelet.env
      filesystem: root
      mode: 0644
      contents:
        inline: |
          KUBELET_ACI=quay.io/coreos/hyperkube
          KUBELET_VERSION=v1.6.1_coreos.0
    - path: /etc/hostname
      filesystem: root
      mode: 0644
      contents:
        inline:
          {{.domain_name}}
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - {{.ssh_authorized_key}}

View File

@@ -0,0 +1,31 @@
---
systemd:
  units:
    - name: installer.service
      enable: true
      contents: |
        [Unit]
        Requires=network-online.target
        After=network-online.target
        [Service]
        Type=simple
        ExecStart=/opt/installer
        [Install]
        WantedBy=multi-user.target
storage:
  files:
    - path: /opt/installer
      filesystem: root
      mode: 0500
      contents:
        inline: |
          #!/bin/bash -ex
          curl "{{.ignition_endpoint}}?{{.request.raw_query}}&os=installed" -o ignition.json
          coreos-install -d /dev/sda -C {{.coreos_channel}} -V {{.coreos_version}} -i ignition.json {{if index . "baseurl"}}-b {{.baseurl}}{{end}}
          udevadm settle
          systemctl reboot
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - {{.ssh_authorized_key}}

View File

@@ -0,0 +1,25 @@
---
systemd:
  units:
    - name: etcd-member.service
      enable: true
      dropins:
        - name: 40-etcd-cluster.conf
          contents: |
            [Service]
            Environment="ETCD_IMAGE_TAG=v3.1.0"
            ExecStart=
            ExecStart=/usr/lib/coreos/etcd-wrapper gateway start \
              --listen-addr=127.0.0.1:2379 \
              --endpoints={{.etcd_endpoints}}
    - name: locksmithd.service
      dropins:
        - name: 40-etcd-lock.conf
          contents: |
            [Service]
            Environment="REBOOT_STRATEGY=etcd-lock"
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - {{.ssh_authorized_key}}

View File

@@ -0,0 +1,28 @@
---
systemd:
  units:
    - name: etcd-member.service
      enable: true
      dropins:
        - name: 40-etcd-cluster.conf
          contents: |
            [Service]
            Environment="ETCD_IMAGE_TAG=v3.1.0"
            Environment="ETCD_NAME={{.etcd_name}}"
            Environment="ETCD_ADVERTISE_CLIENT_URLS=http://{{.domain_name}}:2379"
            Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=http://{{.domain_name}}:2380"
            Environment="ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379"
            Environment="ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380"
            Environment="ETCD_INITIAL_CLUSTER={{.etcd_initial_cluster}}"
            Environment="ETCD_STRICT_RECONFIG_CHECK=true"
    - name: locksmithd.service
      dropins:
        - name: 40-etcd-lock.conf
          contents: |
            [Service]
            Environment="REBOOT_STRATEGY=etcd-lock"
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - {{.ssh_authorized_key}}

View File

@@ -0,0 +1,19 @@
output "coreos-install" {
value = "${matchbox_profile.coreos-install.name}"
}
output "etcd3" {
value = "${matchbox_profile.etcd3.name}"
}
output "etcd3-gateway" {
value = "${matchbox_profile.etcd3-gateway.name}"
}
output "bootkube-controller" {
value = "${matchbox_profile.bootkube-controller.name}"
}
output "bootkube-worker" {
value = "${matchbox_profile.bootkube-worker.name}"
}

View File

@@ -0,0 +1,39 @@
// CoreOS Install Profile
resource "matchbox_profile" "coreos-install" {
  name   = "coreos-install"
  kernel = "/assets/coreos/${var.coreos_version}/coreos_production_pxe.vmlinuz"

  initrd = [
    "/assets/coreos/${var.coreos_version}/coreos_production_pxe_image.cpio.gz"
  ]

  args = [
    "coreos.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
    "coreos.first_boot=yes",
    "console=tty0",
    "console=ttyS0"
  ]

  container_linux_config = "${file("${path.module}/cl/coreos-install.yaml.tmpl")}"
}

// etcd3 profile
resource "matchbox_profile" "etcd3" {
  name                   = "etcd3"
  container_linux_config = "${file("${path.module}/cl/etcd3.yaml.tmpl")}"
}

// etcd3 Gateway profile
resource "matchbox_profile" "etcd3-gateway" {
  name                   = "etcd3-gateway"
  container_linux_config = "${file("${path.module}/cl/etcd3-gateway.yaml.tmpl")}"
}

// Self-hosted Kubernetes (bootkube) Controller profile
resource "matchbox_profile" "bootkube-controller" {
  name                   = "bootkube-controller"
  container_linux_config = "${file("${path.module}/cl/bootkube-controller.yaml.tmpl")}"
}

// Self-hosted Kubernetes (bootkube) Worker profile
resource "matchbox_profile" "bootkube-worker" {
  name                   = "bootkube-worker"
  container_linux_config = "${file("${path.module}/cl/bootkube-worker.yaml.tmpl")}"
}

View File

@@ -0,0 +1,9 @@
variable "matchbox_http_endpoint" {
type = "string"
description = "Matchbox HTTP read-only endpoint (e.g. http://matchbox.example.com:8080)"
}
variable "coreos_version" {
type = "string"
description = "CoreOS kernel/initrd version to PXE boot. Must be present in matchbox assets."
}