examples/terraform: Add etcd3 tutorial and Terraform modules doc

This commit is contained in:
Dalton Hubble
2017-05-01 23:21:56 -07:00
parent 6500ed51f3
commit e1cabcf8e8
4 changed files with 216 additions and 39 deletions

View File

@@ -1,6 +1,6 @@
# Self-hosted Kubernetes
The self-hosted Kubernetes example provisions a 3 node "self-hosted" Kubernetes v1.6.2 cluster. On-host kubelets wait for an apiserver to become reachable, then yield to kubelet pods scheduled via daemonset. [bootkube](https://github.com/kubernetes-incubator/bootkube) is run on any controller to bootstrap a temporary apiserver which schedules control plane components as pods before exiting. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
The self-hosted Kubernetes example provisions a 3 node "self-hosted" Kubernetes v1.6.2 cluster. [bootkube](https://github.com/kubernetes-incubator/bootkube) is run once on a controller node to bootstrap Kubernetes control plane components as pods before exiting. An etcd3 cluster across controllers is used to back Kubernetes and coordinate Container Linux auto-updates (enabled for disk installs).
## Requirements

View File

@@ -0,0 +1,45 @@
# Examples with Terraform
Matchbox automates network booting and provisioning of Container Linux clusters. These examples show how to use matchbox with Terraform to create official reference clusters.
| Name | Description |
|-------------------------------|-------------------------------|
| [etcd3-install](etcd3-install) | Install a 3-node etcd3 cluster |
| [bootkube-install](bootkube-install) | Install a 3-node self-hosted Kubernetes v1.6.2 cluster |
## Modules
Matchbox also provides Terraform [modules](https://www.terraform.io/docs/modules/usage.html) you can use directly within your own Terraform configs. Modules are updated regularly so it is **recommended** that you pin the module version (e.g. `ref=sha`) to keep your configs deterministic.
```hcl
module "profiles" {
source = "git::https://github.com/coreos/matchbox.git//examples/terraform/modules/profiles?ref=4451425db8f230012c36de6e6628c72aa34e1c10"
matchbox_http_endpoint = "${var.matchbox_http_endpoint}"
container_linux_version = "${var.container_linux_version}"
container_linux_channel = "${var.container_linux_channel}"
}
```
Download referenced Terraform modules.
```sh
$ terraform get # does not check for updates
$ terraform get --update # checks for updates
```
Available modules:
| Module | Includes | Description |
|----------|-----------|-------------|
| profiles | * | Creates machine profiles you can reference in matcher groups |
| | container-linux-install | Install Container Linux to disk from core-os.net |
| | cached-container-linux-install | Install Container Linux to disk from matchbox assets cache |
| | etcd3 | Provision an etcd3 peer node |
| | etcd3-gateway | Provision an etcd3 gateway node |
| | bootkube-controller | Provision a self-hosted Kubernetes controller/master node |
| | bootkube-worker | Provision a self-hosted Kubernetes worker node |
## Customization
You are encouraged to look through the examples and modules. Implement your own profiles or package them as modules to meet your needs. We've just provided a starting point. Learn more about [matchbox](../../Documentation/matchbox.md) and [Container Linux configs](../../Documentation/container-linux-config.md).
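As a sketch of what a custom profile might look like, here is a hypothetical `matchbox_profile` resource. The asset paths, kernel args, and Container Linux Config template filename are assumptions to adapt to your environment, not files shipped with this repo.
```hcl
resource "matchbox_profile" "custom-worker" {
  name   = "custom-worker"
  # assumed matchbox assets layout; match the Container Linux version you serve
  kernel = "/assets/coreos/${var.container_linux_version}/coreos_production_pxe.vmlinuz"
  initrd = [
    "/assets/coreos/${var.container_linux_version}/coreos_production_pxe_image.cpio.gz",
  ]
  args = [
    "coreos.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
    "coreos.first_boot=yes",
    "console=tty0",
    "console=ttyS0",
  ]
  # hypothetical Container Linux Config template kept alongside your own configs
  container_linux_config = "${file("${path.module}/cl/custom-worker.yaml.tmpl")}"
}
```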

View File

@@ -1,13 +1,78 @@
# Self-hosted Kubernetes
The self-hosted Kubernetes example provisions a 3 node "self-hosted" Kubernetes v1.6.2 cluster. On-host kubelets wait for an apiserver to become reachable, then yield to kubelet pods scheduled via daemonset. [bootkube](https://github.com/kubernetes-incubator/bootkube) is run on any controller to bootstrap a temporary apiserver which schedules control plane components as pods before exiting. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
The self-hosted Kubernetes example shows how to use matchbox to network boot and provision a 3 node "self-hosted" Kubernetes v1.6.2 cluster. [bootkube](https://github.com/kubernetes-incubator/bootkube) is run once on a controller node to bootstrap Kubernetes control plane components as pods before exiting. An etcd3 cluster across controllers is used to back Kubernetes and coordinate Container Linux auto-updates (enabled for disk installs).
## Requirements
* Create a PXE network boot environment (e.g. with `coreos/dnsmasq`)
* Run a `matchbox` service with the gRPC API enabled
* 3 machines with known DNS names and MAC addresses for this example
* Matchbox provider credentials: a `client.crt`, `client.key`, and `ca.crt`.
Follow the getting started [tutorial](../../../Documentation/getting-started.md) to learn about matchbox and set up an environment that meets the requirements:
* Matchbox v0.6+ [installation](../../../Documentation/deployment.md) with gRPC API enabled
* Matchbox provider credentials `client.crt`, `client.key`, and `ca.crt`
* PXE [network boot](../../../Documentation/network-setup.md) environment
* Terraform v0.9+ and [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) installed locally on your system
* 3 machines with known DNS names and MAC addresses
If you prefer to provision QEMU/KVM VMs on your local Linux machine, set up the matchbox [development environment](../../../Documentation/getting-started-rkt.md).
```sh
sudo ./scripts/devnet create
```
## Usage
Clone the [matchbox](https://github.com/coreos/matchbox) project and take a look at the cluster examples.
```sh
$ git clone https://github.com/coreos/matchbox.git
$ cd matchbox/examples/terraform/bootkube-install
```
Copy the `terraform.tfvars.example` file to `terraform.tfvars`. Ensure `provider.tf` references your matchbox credentials.
```hcl
matchbox_http_endpoint = "http://matchbox.example.com:8080"
matchbox_rpc_endpoint = "matchbox.example.com:8081"
ssh_authorized_key = "ADD ME"
```
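For reference, a minimal `provider.tf` might look like the sketch below. The credential file paths are assumptions; point them at the `client.crt`, `client.key`, and `ca.crt` issued for your matchbox installation.
```hcl
provider "matchbox" {
  endpoint    = "${var.matchbox_rpc_endpoint}"
  client_cert = "${file("~/.matchbox/client.crt")}"
  client_key  = "${file("~/.matchbox/client.key")}"
  ca          = "${file("~/.matchbox/ca.crt")}"
}
```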
Configs in `bootkube-install` configure the matchbox provider, define profiles (e.g. `cached-container-linux-install`, `bootkube-controller`, `bootkube-worker`), and define 3 groups which match machines by MAC address to a profile. These resources declare that each machine should PXE boot and install Container Linux to disk. `node1` will provision itself as a controller, while `node2` and `node3` provision themselves as workers.
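Roughly, each group pairs a MAC address selector with one of the module's profiles and some per-node metadata. The sketch below is illustrative only; the module output name and metadata keys are assumptions, so consult the configs in this directory for the exact definitions.
```hcl
resource "matchbox_group" "node1" {
  name    = "node1"
  # assumed output from the profiles module fetched below
  profile = "${module.profiles.bootkube-controller}"
  selector {
    mac = "52:54:00:a1:9c:ae"
  }
  metadata {
    domain_name        = "node1.example.com"
    ssh_authorized_key = "${var.ssh_authorized_key}"
  }
}
```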
Fetch the [profiles](../README.md#modules) Terraform [module](https://www.terraform.io/docs/modules/index.html) which lets you use common machine profiles maintained in the matchbox repo (like `bootkube-controller` and `bootkube-worker`).
```sh
$ terraform get
```
Plan and apply to create the resources on Matchbox.
```sh
$ terraform plan
Plan: 10 to add, 0 to change, 0 to destroy.
$ terraform apply
Apply complete! Resources: 10 added, 0 changed, 0 destroyed.
```
Note: The `cached-container-linux-install` profile will PXE boot and install Container Linux from matchbox [assets](https://github.com/coreos/matchbox/blob/master/Documentation/api.md#assets). If you have not populated the assets cache, use the `container-linux-install` profile to use public images (slower).
## Machines
Power on each machine (with PXE boot device on next boot). Machines should network boot, install Container Linux to disk, reboot, and provision themselves as bootkube controllers or workers.
```sh
$ ipmitool -H node1.example.com -U USER -P PASS chassis bootdev pxe
$ ipmitool -H node1.example.com -U USER -P PASS power on
```
For local QEMU/KVM development, create the QEMU/KVM VMs.
```sh
$ sudo ./scripts/libvirt create
$ sudo ./scripts/libvirt [start|reboot|shutdown|poweroff|destroy]
```
## bootkube
*This section will soon be automated by terraform*
Install [bootkube](https://github.com/kubernetes-incubator/bootkube/releases) v0.4.2 and add it somewhere on your PATH.
@@ -22,36 +87,6 @@ Use the `bootkube` tool to render Kubernetes manifests and credentials into an `
bootkube render --asset-dir=assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --etcd-servers=http://127.0.0.1:2379
```
## Infrastructure
The bootkube-install example uses a [Terraform module](https://www.terraform.io/docs/modules/index.html) called `profiles`. Before planning and applying the Terraform configurations, you must download and install this module.
```
cd examples/bootkube-install
terraform get
Get: file:///path/to/your/matchbox/examples/terraform/modules/profiles
```
Instead of modifying the example `profiles` module, you can create your own module and update the source reference in `bootkube.tf`. Terraform supports several [sources](https://www.terraform.io/docs/modules/sources.html) from which modules can be downloaded. Here is an example of using a GitHub source.
```
module "profiles" {
source = "git::https://github.com/coreos/matchbox.git//examples/terraform/modules/profiles?ref=64168bc42edd5f249b5f6c542319c202a308434b"
matchbox_http_endpoint = "${var.matchbox_http_endpoint}"
coreos_version = "${var.container_linux_version}"
}
```
Plan and apply the Terraform configurations. This creates `bootkube-controller`, `bootkube-worker`, and `install-reboot` profiles and Container Linux configs, and creates matcher groups for `node1.example.com`, `node2.example.com`, and `node3.example.com`.
```
terraform plan
terraform apply
```
Power on each machine and wait for it to PXE boot, install CoreOS to disk, and provision itself.
## Bootstrap
Secure copy the kubeconfig to /etc/kubernetes/kubeconfig on every node; the presence of this file path-activates (starts) the `kubelet.service`.
```
@@ -68,7 +103,7 @@ scp -r assets core@node1.example.com:/home/core
ssh core@node1.example.com 'sudo mv assets /opt/bootkube/assets && sudo systemctl start bootkube'
```
Optionally watch the Kubernetes control plane bootstrapping with the bootkube temporary api-server. You will see quite a bit of output.
Optionally watch bootkube start the Kubernetes control plane.
```
$ ssh core@node1.example.com 'journalctl -f -u bootkube'
@@ -81,7 +116,7 @@ $ ssh core@node1.example.com 'journalctl -f -u bootkube'
## Verify
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your laptop. Use the generated kubeconfig to access the Kubernetes cluster. Verify that the cluster is accessible and that the kubelet, apiserver, scheduler, and controller-manager are running as pods.
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your laptop. Use the generated kubeconfig to access the Kubernetes cluster. Verify that the cluster is accessible and that the apiserver, scheduler, and controller-manager are running as pods.
```sh
$ export KUBECONFIG=assets/auth/kubeconfig
@@ -109,4 +144,8 @@ kube-system kube-scheduler-694795526-fks0b 1/1 Running 1
kube-system pod-checkpointer-node1.example.com 1/1 Running 2 10m
```
Try deleting pods to see that the cluster is resilient to failures and machine restarts (CoreOS auto-updates).
Try restarting machines or deleting pods to see that the cluster is resilient to failures.
## Going Further
Learn more about [matchbox](../../../Documentation/matchbox.md) or explore the other [example](../) clusters.

View File

@@ -0,0 +1,93 @@
# etcd3
The `etcd3-install` example shows how to use matchbox to network boot and provision a 3-node etcd3 cluster on bare-metal in an automated way.
## Requirements
Follow the getting started [tutorial](../../../Documentation/getting-started.md) to learn about matchbox and set up an environment that meets the requirements:
* Matchbox v0.6+ [installation](../../../Documentation/deployment.md) with gRPC API enabled
* Matchbox provider credentials `client.crt`, `client.key`, and `ca.crt`
* PXE [network boot](../../../Documentation/network-setup.md) environment
* Terraform v0.9+ and [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) installed locally on your system
* 3 machines with known DNS names and MAC addresses
If you prefer to provision QEMU/KVM VMs on your local Linux machine, set up the matchbox [development environment](../../../Documentation/getting-started-rkt.md).
```sh
sudo ./scripts/devnet create
```
## Usage
Clone the [matchbox](https://github.com/coreos/matchbox) project and take a look at the cluster examples.
```sh
$ git clone https://github.com/coreos/matchbox.git
$ cd matchbox/examples/terraform/etcd3-install
```
Copy the `terraform.tfvars.example` file to `terraform.tfvars`. Ensure `provider.tf` references your matchbox credentials.
```hcl
matchbox_http_endpoint = "http://matchbox.example.com:8080"
matchbox_rpc_endpoint = "matchbox.example.com:8081"
ssh_authorized_key = "ADD ME"
```
Configs in `etcd3-install` configure the matchbox provider, define profiles (e.g. `cached-container-linux-install`, `etcd3`), and define 3 groups which match machines by MAC address to a profile. These resources declare that the machines should PXE boot, install Container Linux to disk, and provision themselves as peers in a 3-node etcd3 cluster.
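Each group looks roughly like the sketch below, pairing a MAC address selector with the module's etcd3 profile and per-node metadata; the module output name and metadata keys shown are assumptions, so check the configs in this directory for the real values.
```hcl
resource "matchbox_group" "node2" {
  name    = "node2"
  # assumed output from the profiles module fetched below
  profile = "${module.profiles.etcd3}"
  selector {
    mac = "52:54:00:b2:2f:86"
  }
  metadata {
    domain_name        = "node2.example.com"
    etcd_name          = "node2"
    ssh_authorized_key = "${var.ssh_authorized_key}"
  }
}
```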
Fetch the [profiles](../README.md#modules) Terraform [module](https://www.terraform.io/docs/modules/index.html) which lets you use common machine profiles maintained in the matchbox repo (like `etcd3`).
```sh
$ terraform get
```
Plan and apply to create the resources on Matchbox.
```sh
$ terraform plan
Plan: 10 to add, 0 to change, 0 to destroy.
$ terraform apply
Apply complete! Resources: 10 added, 0 changed, 0 destroyed.
```
Note: The `cached-container-linux-install` profile will PXE boot and install Container Linux from matchbox [assets](https://github.com/coreos/matchbox/blob/master/Documentation/api.md#assets). If you have not populated the assets cache, use the `container-linux-install` profile to use public images (slower).
## Machines
Power on each machine (with PXE boot device on next boot). Machines should network boot, install Container Linux to disk, reboot, and provision themselves as a 3-node etcd3 cluster.
```sh
$ ipmitool -H node1.example.com -U USER -P PASS chassis bootdev pxe
$ ipmitool -H node1.example.com -U USER -P PASS power on
```
For local QEMU/KVM development, create the QEMU/KVM VMs.
```sh
$ sudo ./scripts/libvirt create
$ sudo ./scripts/libvirt [start|reboot|shutdown|poweroff|destroy]
```
## Verify
Verify that each node is running etcd3 (i.e. the `etcd-member.service` unit).
```sh
$ ssh core@node1.example.com
$ systemctl status etcd-member
```
Verify that etcd3 peers are healthy and communicating.
```sh
$ export ETCDCTL_API=3
$ etcdctl endpoint health
$ etcdctl put message hello
$ etcdctl get message
```
## Going Further
Learn more about [matchbox](../../../Documentation/matchbox.md) or explore the other [example](../) clusters.