examples: Combine etcd, k8s, docker, and rkt examples

* Different sets of examples subfolders are not needed
@@ -21,7 +21,7 @@ Set up `coreos/bootcfg` according to the [docs](bootcfg.md). Pull the `coreos/bo

Run the `bootcfg` container to serve configs for any of the network environments we'll discuss next.

-    docker run -p 8080:8080 --net=host --name=bootcfg --rm -v $PWD/examples/dev:/data:Z -v $PWD/assets:/assets:Z coreos/bootcfg:latest -address=0.0.0.0:8080 [-log-level=debug]
+    docker run -p 8080:8080 --net=host --name=bootcfg --rm -v $PWD/examples:/data:Z -v $PWD/assets:/assets:Z coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug -config /data/etcd-docker.yaml

Note, the kernel options in the `Spec` [examples](../examples) reference 172.17.0.2 (the libvirt case). Your kernel cmdline options should reference the IP or DNS name where `bootcfg` runs.
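For instance, if `bootcfg` is reachable at `192.168.1.100` (an address assumed purely for illustration), the `Spec` cmdline entries would point at it like so:

    cloud-config-url=http://192.168.1.100:8080/cloud?uuid=${uuid}&mac=${net0/mac:hexhyp}
    coreos.config.url=http://192.168.1.100:8080/ignition?uuid=${uuid}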
@@ -43,7 +43,7 @@ Run the config server on `metal0` with the IP address corresponding to the examp

The insecure flag is needed because Docker images do not support signature verification.

-    sudo rkt run --net=metal0:IP=172.15.0.2 --mount volume=assets,target=/assets --volume assets,kind=host,source=$PWD/assets --mount volume=data,target=/data --volume data,kind=host,source=$PWD/examples/etcd-large quay.io/coreos/bootcfg -- -address=0.0.0.0:8080 -log-level=debug
+    sudo rkt run --net=metal0:IP=172.15.0.2 --mount volume=assets,target=/assets --volume assets,kind=host,source=$PWD/assets --mount volume=data,target=/data --volume data,kind=host,source=$PWD/examples quay.io/coreos/bootcfg -- -address=0.0.0.0:8080 -log-level=debug -config /data/etcd-rkt.yaml

If you get an error about the IP already being assigned, a previous pod may still hold the address.
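If the offending pod has already exited, standard rkt garbage collection should release the address:

    sudo rkt gc --grace-period=0s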
@@ -10,11 +10,11 @@ Docker containers run on the `docker0` virtual bridge, typically on a subnet 172

## Config Service

-Set up `coreos/bootcfg` according to the [docs](bootcfg.md). Pull the `coreos/bootcfg` image, prepare a data volume with `Machine` definitions, `Spec` definitions and ignition/cloud configs. Optionally, include a volume of downloaded image assets.
+Set up `coreos/bootcfg` according to the [docs](bootcfg.md). Pull the `coreos/bootcfg` image, prepare a data volume with `Spec` definitions and ignition/cloud configs. Optionally, include a volume of downloaded image assets.

Run the `bootcfg` container to serve configs for any of the network environments we'll discuss next.

-    docker run -p 8080:8080 --name=bootcfg --rm -v $PWD/examples/dev:/data:Z -v $PWD/assets:/assets:Z coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug
+    docker run -p 8080:8080 --name=bootcfg --rm -v $PWD/examples:/data:Z -v $PWD/assets:/assets:Z coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug -config /data/etcd-docker.yaml

Note, the kernel options in the `Spec` [examples](../examples) reference 172.17.0.2, the first container IP Docker is likely to assign to `bootcfg`. Ensure your kernel options point to where `bootcfg` runs.
@@ -91,8 +91,9 @@ Create 5 libvirt VM nodes configured to boot from the network. The `scripts/libv

    sudo ./scripts/libvirt
    USAGE: libvirt <command>
    Commands:
-     create         create 5 libvirt nodes
-     start          start 5 libvirt nodes
+     create-docker  create 5 libvirt nodes on the docker0 bridge
+     create-rkt     create 5 libvirt nodes on a rkt CNI metal0 bridge
+     start          start the 5 libvirt nodes
      reboot         reboot the 5 libvirt nodes
      shutdown       shutdown the 5 libvirt nodes
      poweroff       poweroff the 5 libvirt nodes
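For example, to create nodes attached to the rkt CNI `metal0` bridge and boot them (commands taken from the usage listing above):

    sudo ./scripts/libvirt create-rkt
    sudo ./scripts/libvirt start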
@@ -5,7 +5,6 @@ Examples contains Config Service data directories showcasing different network-b

| Name       | Description                                                       | Docs |
|------------|-------------------------------------------------------------------|------|
| etcd-small | Cluster with 1 etcd node, 4 proxies                               | [reference](https://coreos.com/os/docs/latest/cluster-architectures.html) |
| etcd-large | Cluster with 3 etcd nodes, 2 proxies                              | [reference](https://coreos.com/os/docs/latest/cluster-architectures.html) |
| kubernetes | Kubernetes cluster with 1 master, 1 worker, 1 dedicated etcd node | [reference](https://github.com/coreos/coreos-kubernetes) |
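After this change, the example content lives in a single directory; judging from the `-config` flags used throughout this commit, its layout includes at least (a sketch, not exhaustive):

    ls examples/
    # etcd-docker.yaml  etcd-rkt.yaml  k8s-docker.yaml  kubeconfig  ...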
@@ -49,19 +48,15 @@ Let's run the config service on the virtual network.

Run the command for the example you wish to use.

-**etcd-small Cluster**
+**etcd Cluster**

-    docker run -p 8080:8080 --name=bootcfg --rm -v $PWD/examples/etcd-small:/data:Z -v $PWD/assets:/assets:Z quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug
-
-**etcd-large Cluster**
-
-    docker run -p 8080:8080 --name=bootcfg --rm -v $PWD/examples/etcd-large:/data:Z -v $PWD/assets:/assets:Z quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug
+    sudo docker run -p 8080:8080 --rm -v $PWD/examples:/data:Z -v $PWD/assets:/assets:Z quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug -config /data/etcd-docker.yaml

**Kubernetes Cluster**

-    docker run -p 8080:8080 --name=bootcfg --rm -v $PWD/examples/kubernetes:/data:Z -v $PWD/assets:/assets:Z quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug
+    sudo docker run -p 8080:8080 --rm -v $PWD/examples:/data:Z -v $PWD/assets:/assets:Z quay.io/coreos/bootcfg:latest -address=0.0.0.0:8080 -log-level=debug -config /data/k8s-docker.yaml

-The mounted data directory (e.g. `-v $PWD/examples/etcd-small:/data:Z`) depends on the example you wish to run.
+The `-config` file describes the desired state of booted machines.
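The `etcd-docker.yaml` and `k8s-docker.yaml` files themselves are not shown in this diff, but the group definitions elsewhere in the commit follow the `v1alpha1` groups schema, so a minimal sketch of such a file (group names, specs, and UUID borrowed from those definitions) might look like:

    ---
    api_version: v1alpha1
    groups:
      - name: node1
        spec: etcd
        require:
          uuid: 16e7d8a7-bfa9-428b-9117-363341bb330b

      - name: default
        spec: worker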
## Assets
@@ -98,3 +93,51 @@ If everything works, congratulations! Stay tuned for developments.

See the [libvirt guide](../Documentation/virtual-hardware.md) or [baremetal guide](../Documentation/physical-hardware.md) for more information.
# Kubernetes

This example provisions a Kubernetes cluster with 1 master node, 1 worker node, and a dedicated etcd node. Each node uses a static IP address on the local network.

## Assets

Download the required CoreOS Beta image assets.

    ./scripts/get-coreos beta 877.1.0

Next, add or generate a root CA and Kubernetes TLS assets for each component.

### TLS Assets

Note: In this example, TLS assets are served to any machines which request them. The network and any machines on it cannot be trusted yet, so this example is **not suitable for production**. [Distributed Trusted Computing](https://coreos.com/blog/coreos-trusted-computing.html) work will soon let machines with TPMs establish secure channels to improve secret distribution and cluster attestation.

Use the `generate-kubernetes-secrets` script to generate throw-away TLS assets. The script will generate a root CA and `admin`, `apiserver`, and `worker` certificates in `assets/tls`.

    cd coreos-baremetal
    ./scripts/tls/generate-kubernetes-secrets

Alternately, if you have existing Public Key Infrastructure, add your CA certificate, entity certificates, and entity private keys to `assets/tls` (for testing only, not secure yet).

* ca.pem
* apiserver.pem
* apiserver-key.pem
* worker.pem
* worker-key.pem
* admin.pem
* admin-key.pem
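Whether generated or imported, the assets can be sanity-checked with plain `openssl` (assumed available on the host; not part of this repo):

    openssl x509 -in assets/tls/apiserver.pem -noout -subject -issuer
    openssl verify -CAfile assets/tls/ca.pem assets/tls/worker.pem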
See the [Cluster TLS OpenSSL Generation](https://coreos.com/kubernetes/docs/latest/openssl.html) document or [Kubernetes Step by Step](https://coreos.com/kubernetes/docs/latest/getting-started.html) for more details.
Return to the general examples [README](../README).

## Usage

Install `kubectl` on your host and use the `examples/kubeconfig` file, which references the top-level `assets/tls`.

    cd /path/to/coreos-baremetal
    kubectl --kubeconfig=examples/kubeconfig get nodes

Get all pods.

    kubectl --kubeconfig=examples/kubeconfig get pods --all-namespaces
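To watch pods come up as the cluster converges, add the watch flag (as the earlier version of this guide suggested):

    kubectl --kubeconfig=examples/kubeconfig get pods --all-namespaces -w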
On my laptop, it takes about 1 minute from boot until the Kubernetes API comes up. Then it takes another 1-2 minutes for all components including DNS to be pulled and started.
@@ -2,36 +2,36 @@
set -e

# List of etcd servers (http://ip:port), comma separated
-export ETCD_ENDPOINTS=http://172.17.0.23:2379
+export ETCD_ENDPOINTS={{.k8s_etcd_endpoints}}

# Specify the version (vX.Y.Z) of Kubernetes assets to deploy
-export K8S_VER=v1.1.2
+export K8S_VER={{.k8s_version}}

# The CIDR network to use for pod IPs.
# Each pod launched in the cluster will be assigned an IP out of this range.
# Each node will be configured such that these IPs will be routable using the flannel overlay network.
-export POD_NETWORK=10.2.0.0/16
+export POD_NETWORK={{.k8s_pod_network}}

# The CIDR network to use for service cluster IPs.
# Each service will be assigned a cluster IP out of this range.
# This must not overlap with any IP ranges assigned to the POD_NETWORK, or other existing network infrastructure.
# Routing to these IPs is handled by a proxy service local to each node, and they are not required to be routable between nodes.
-export SERVICE_IP_RANGE=10.3.0.0/24
+export SERVICE_IP_RANGE={{.k8s_service_ip_range}}

# The IP address of the Kubernetes API Service
# If the SERVICE_IP_RANGE is changed above, this must be set to the first IP in that range.
-export K8S_SERVICE_IP=10.3.0.1
+export K8S_SERVICE_IP={{.k8s_service_ip}}

# The IP address of the cluster DNS service.
# This IP must be in the range of the SERVICE_IP_RANGE and cannot be the first IP in the range.
# This same IP must be configured on all worker nodes to enable DNS service discovery.
-export DNS_SERVICE_IP=10.3.0.10
+export DNS_SERVICE_IP={{.k8s_dns_service_ip}}

# ADVERTISE_IP is the host node's IP.
-export ADVERTISE_IP=172.17.0.21
+export ADVERTISE_IP={{.k8s_advertise_ip}}

# TLS Certificate assets are hosted by the Config Server
-export CERT_ENDPOINT=172.17.0.2:8080/assets
+export CERT_ENDPOINT={{.k8s_cert_endpoint}}

function init_config {
  local REQUIRED=('ADVERTISE_IP' 'POD_NETWORK' 'ETCD_ENDPOINTS' 'SERVICE_IP_RANGE' 'K8S_SERVICE_IP' 'DNS_SERVICE_IP' 'K8S_VER')
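The `{{.k8s_*}}` placeholders above are rendered by the config server from group `metadata`; with `k8s_etcd_endpoints: http://172.17.0.23:2379` from the group definitions later in this commit, the first export renders back to its former literal value:

    export ETCD_ENDPOINTS=http://172.17.0.23:2379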
@@ -47,9 +47,9 @@ function init_config {
function get_certs {
  DEST=/etc/kubernetes/ssl
  mkdir -p $DEST
-  curl http://$CERT_ENDPOINT/tls/apiserver.pem -o $DEST/apiserver.pem
-  curl http://$CERT_ENDPOINT/tls/apiserver-key.pem -o $DEST/apiserver-key.pem
-  curl http://$CERT_ENDPOINT/tls/ca.pem -o $DEST/ca.pem
+  curl $CERT_ENDPOINT/tls/apiserver.pem -o $DEST/apiserver.pem
+  curl $CERT_ENDPOINT/tls/apiserver-key.pem -o $DEST/apiserver-key.pem
+  curl $CERT_ENDPOINT/tls/ca.pem -o $DEST/ca.pem
}

function init_flannel {
@@ -2,25 +2,25 @@
set -e

# List of etcd servers (http://ip:port), comma separated
-export ETCD_ENDPOINTS=http://172.17.0.23:2379
+export ETCD_ENDPOINTS={{.k8s_etcd_endpoints}}

# The endpoint the worker node should use to contact controller nodes (https://ip:port)
# In HA configurations this should be an external DNS record or load balancer in front of the control nodes.
# However, it is also possible to point directly to a single control node.
-export CONTROLLER_ENDPOINT=https://172.17.0.21
+export CONTROLLER_ENDPOINT={{.k8s_controller_endpoint}}

# Specify the version (vX.Y.Z) of Kubernetes assets to deploy
-export K8S_VER=v1.1.2
+export K8S_VER={{.k8s_version}}

# The IP address of the cluster DNS service.
# This must be the same DNS_SERVICE_IP used when configuring the controller nodes.
-export DNS_SERVICE_IP=10.3.0.10
+export DNS_SERVICE_IP={{.k8s_dns_service_ip}}

# ADVERTISE_IP is the host node's IP.
-export ADVERTISE_IP=172.17.0.22
+export ADVERTISE_IP={{.k8s_advertise_ip}}

# TLS Certificate assets are hosted by the Config Server
-export CERT_ENDPOINT=172.17.0.2:8080/assets
+export CERT_ENDPOINT={{.k8s_cert_endpoint}}

function init_config {
  local REQUIRED=('ADVERTISE_IP' 'ETCD_ENDPOINTS' 'CONTROLLER_ENDPOINT' 'DNS_SERVICE_IP' 'K8S_VER')
@@ -36,9 +36,9 @@ function init_config {
function get_certs {
  DEST=/etc/kubernetes/ssl
  mkdir -p $DEST
-  curl http://$CERT_ENDPOINT/tls/worker.pem -o $DEST/worker.pem
-  curl http://$CERT_ENDPOINT/tls/worker-key.pem -o $DEST/worker-key.pem
-  curl http://$CERT_ENDPOINT/tls/ca.pem -o $DEST/ca.pem
+  curl $CERT_ENDPOINT/tls/worker.pem -o $DEST/worker.pem
+  curl $CERT_ENDPOINT/tls/worker-key.pem -o $DEST/worker-key.pem
+  curl $CERT_ENDPOINT/tls/ca.pem -o $DEST/ca.pem
}

function init_templates {
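This simplification works because `k8s_cert_endpoint` now carries its own scheme (`http://bootcfg.foo:8080/assets` in the group metadata below), so a rendered fetch expands to, for example:

    curl http://bootcfg.foo:8080/assets/tls/worker.pem -o /etc/kubernetes/ssl/worker.pem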
@@ -1,13 +0,0 @@
#cloud-config
coreos:
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
write_files:
  - path: "/home/core/cloud"
    owner: "core"
    permissions: "0644"
    content: |
      File added by the default cloud config.
@@ -1,13 +0,0 @@
#cloud-config
coreos:
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
write_files:
  - path: "/home/core/cloud"
    owner: "core"
    permissions: "0644"
    content: |
      File added by node1.yml.
@@ -1,13 +0,0 @@
#cloud-config
coreos:
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
write_files:
  - path: "/home/core/cloud"
    owner: "core"
    permissions: "0644"
    content: |
      File added by node2.yml.
@@ -1,26 +0,0 @@
---
api_version: v1alpha1
groups:
  - name: node1
    spec: node1
    require:
      uuid: 16e7d8a7-bfa9-428b-9117-363341bb330b
    metadata:
      greeting: hello
      networkd_name: ens3
      networkd_gateway: 172.17.0.1
      networkd_dns: 172.17.0.3
      networkd_address: 172.17.0.21/16

  - name: node2
    spec: node2
    require:
      uuid: 264cd073-ca62-44b3-98c0-50aad5b5f819
    metadata:
      networkd_name: ens3
      networkd_gateway: 172.17.0.1
      networkd_dns: 172.17.0.3
      networkd_address: 172.17.0.22/16

  - name: default
    spec: default
@@ -1 +0,0 @@
{"ignitionVersion":1,"storage":{},"systemd":{"units":[{"name":"metadata.service","enable":true,"contents":"[Unit]\nDescription=Bare Metal Metadata Agent\n[Service]\nType=oneshot\nEnvironment=OUTPUT=/run/metadata/bootcfg\nExecStart=/usr/bin/mkdir --parent /run/metadata\nExecStart=/usr/bin/bash -c 'curl --url \"http://bootcfg.foo:8080/metadata?{{.query}}\" --retry 10 --output ${OUTPUT}'\n[Install]\nWantedBy=multi-user.target\n"},{"name":"greeting.service","enable":true,"contents":"[Unit]\nDescription=Greeting using Ignition Metadata\nRequires=metadata.service\nAfter=metadata.service\n[Service]\nType=oneshot\nEnvironmentFile=/run/metadata/bootcfg\nExecStart=/usr/bin/echo ${GREETING}\n[Install]\nWantedBy=multi-user.target\n"}]},"networkd":{"units":[{"name":"00-{{.networkd_name}}.network","contents":"[Match]\nName={{.networkd_name}}\n[Network]\nGateway={{.networkd_gateway}}\nDNS={{.networkd_dns}}\nAddress={{.networkd_address}}\n"}]},"passwd":{}}
@@ -1,39 +0,0 @@
---
ignition_version: 1
systemd:
  units:
    - name: metadata.service
      enable: true
      contents: |
        [Unit]
        Description=Bare Metal Metadata Agent
        [Service]
        Type=oneshot
        Environment=OUTPUT=/run/metadata/bootcfg
        ExecStart=/usr/bin/mkdir --parent /run/metadata
        ExecStart=/usr/bin/bash -c 'curl --url "http://bootcfg.foo:8080/metadata?{{.query}}" --retry 10 --output ${OUTPUT}'
        [Install]
        WantedBy=multi-user.target
    - name: greeting.service
      enable: true
      contents: |
        [Unit]
        Description=Greeting using Ignition Metadata
        Requires=metadata.service
        After=metadata.service
        [Service]
        Type=oneshot
        EnvironmentFile=/run/metadata/bootcfg
        ExecStart=/usr/bin/echo ${GREETING}
        [Install]
        WantedBy=multi-user.target
networkd:
  units:
    - name: 00-{{.networkd_name}}.network
      contents: |
        [Match]
        Name={{.networkd_name}}
        [Network]
        Gateway={{.networkd_gateway}}
        DNS={{.networkd_dns}}
        Address={{.networkd_address}}
@@ -1 +0,0 @@
{"ignitionVersion":1,"storage":{},"systemd":{"units":[{"name":"metadata.service","enable":true,"contents":"[Unit]\nDescription=Bare Metal Metadata Agent\n[Service]\nType=oneshot\nEnvironment=OUTPUT=/run/metadata/bootcfg\nExecStart=/usr/bin/mkdir --parent /run/metadata\nExecStart=/usr/bin/bash -c 'curl --url \"http://bootcfg.foo:8080/metadata?{{.query}}\" --retry 10 --output ${OUTPUT}'\n[Install]\nWantedBy=multi-user.target\n"}]},"networkd":{"units":[{"name":"00-{{.networkd_name}}.network","contents":"[Match]\nName={{.networkd_name}}\n[Network]\nGateway={{.networkd_gateway}}\nDNS={{.networkd_dns}}\nAddress={{.networkd_address}}\n"}]},"passwd":{}}
@@ -1,26 +0,0 @@
---
ignition_version: 1
systemd:
  units:
    - name: metadata.service
      enable: true
      contents: |
        [Unit]
        Description=Bare Metal Metadata Agent
        [Service]
        Type=oneshot
        Environment=OUTPUT=/run/metadata/bootcfg
        ExecStart=/usr/bin/mkdir --parent /run/metadata
        ExecStart=/usr/bin/bash -c 'curl --url "http://bootcfg.foo:8080/metadata?{{.query}}" --retry 10 --output ${OUTPUT}'
        [Install]
        WantedBy=multi-user.target
networkd:
  units:
    - name: 00-{{.networkd_name}}.network
      contents: |
        [Match]
        Name={{.networkd_name}}
        [Network]
        Gateway={{.networkd_gateway}}
        DNS={{.networkd_dns}}
        Address={{.networkd_address}}
@@ -1,13 +0,0 @@
{
  "id": "default",
  "boot": {
    "kernel": "/assets/coreos/835.9.0/coreos_production_pxe.vmlinuz",
    "initrd": ["/assets/coreos/835.9.0/coreos_production_pxe_image.cpio.gz"],
    "cmdline": {
      "cloud-config-url": "http://bootcfg.foo:8080/cloud?uuid=${uuid}&mac=${net0/mac:hexhyp}",
      "coreos.autologin": ""
    }
  },
  "cloud_id": "default.yaml",
  "ignition_id": ""
}
@@ -1,15 +0,0 @@
{
  "id": "node1",
  "boot": {
    "kernel": "/assets/coreos/835.9.0/coreos_production_pxe.vmlinuz",
    "initrd": ["/assets/coreos/835.9.0/coreos_production_pxe_image.cpio.gz"],
    "cmdline": {
      "cloud-config-url": "http://bootcfg.foo:8080/cloud?uuid=${uuid}&mac=${net0/mac:hexhyp}",
      "coreos.config.url": "http://bootcfg.foo:8080/ignition?uuid=${uuid}&mac=${net0/mac:hexhyp}",
      "coreos.autologin": "",
      "coreos.first_boot": ""
    }
  },
  "cloud_id": "node1.yaml",
  "ignition_id": "node1.json"
}
@@ -1,15 +0,0 @@
{
  "id": "node2",
  "boot": {
    "kernel": "/assets/coreos/835.9.0/coreos_production_pxe.vmlinuz",
    "initrd": ["/assets/coreos/835.9.0/coreos_production_pxe_image.cpio.gz"],
    "cmdline": {
      "cloud-config-url": "http://bootcfg.foo:8080/cloud?uuid=${uuid}&mac=${net0/mac:hexhyp}",
      "coreos.config.url": "http://bootcfg.foo:8080/ignition?uuid=${uuid}&mac=${net0/mac:hexhyp}",
      "coreos.autologin": "",
      "coreos.first_boot": ""
    }
  },
  "cloud_id": "node2.yaml",
  "ignition_id": "node2.json"
}
@@ -1,12 +0,0 @@
#cloud-config
coreos:
  etcd2:
    name: etcdserver
    initial-cluster: etcdserver=http://172.17.0.21:2380
    initial-advertise-peer-urls: http://172.17.0.21:2380
    advertise-client-urls: http://172.17.0.21:2379
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://0.0.0.0:2380
  units:
    - name: etcd2.service
      command: start
@@ -1,13 +0,0 @@
#cloud-config
coreos:
  etcd2:
    proxy: on
    listen-client-urls: http://localhost:2379
    initial-cluster: etcdserver=http://172.17.0.21:2380
  fleet:
    etcd_servers: "http://localhost:2379"
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
@@ -1,10 +0,0 @@
---
api_version: v1alpha1
groups:
  - name: node1
    spec: etcd
    require:
      uuid: 16e7d8a7-bfa9-428b-9117-363341bb330b

  - name: default
    spec: worker
@@ -1,11 +0,0 @@
{
  "ignitionVersion": 1,
  "networkd": {
    "units": [
      {
        "name": "00-ens3.network",
        "contents": "[Match]\nName=ens3\n\n[Network]\nDNS=8.8.8.8\nGateway=172.17.0.1\nAddress=172.17.0.21"
      }
    ]
  }
}
@@ -1,15 +0,0 @@
{
  "id": "etcd",
  "boot": {
    "kernel": "/assets/coreos/835.9.0/coreos_production_pxe.vmlinuz",
    "initrd": ["/assets/coreos/835.9.0/coreos_production_pxe_image.cpio.gz"],
    "cmdline": {
      "cloud-config-url": "http://172.17.0.2:8080/cloud?uuid=${uuid}&mac=${net0/mac:hexhyp}",
      "coreos.config.url": "http://172.17.0.2:8080/ignition?uuid=${uuid}",
      "coreos.autologin": "",
      "coreos.first_boot": ""
    }
  },
  "cloud_id": "etcd.yaml",
  "ignition_id": "etcd.json"
}
@@ -1,13 +0,0 @@
{
  "id": "worker",
  "boot": {
    "kernel": "/assets/coreos/835.9.0/coreos_production_pxe.vmlinuz",
    "initrd": ["/assets/coreos/835.9.0/coreos_production_pxe_image.cpio.gz"],
    "cmdline": {
      "cloud-config-url": "http://172.17.0.2:8080/cloud?uuid=${uuid}&mac=${net0/mac:hexhyp}",
      "coreos.autologin": ""
    }
  },
  "cloud_id": "worker.yaml",
  "ignition_id": ""
}
@@ -10,6 +10,14 @@ groups:
      networkd_gateway: 172.17.0.1
      networkd_dns: 172.17.0.3
      networkd_address: 172.17.0.21/16
+      k8s_etcd_endpoints: http://172.17.0.23:2379
+      k8s_version: v1.1.2
+      k8s_pod_network: 10.2.0.0/16
+      k8s_service_ip_range: 10.3.0.0/24
+      k8s_service_ip: 10.3.0.1
+      k8s_dns_service_ip: 10.3.0.10
+      k8s_advertise_ip: 172.17.0.21
+      k8s_cert_endpoint: http://bootcfg.foo:8080/assets

  - name: Worker Node
    spec: kubernetes-worker
@@ -20,6 +28,12 @@ groups:
      networkd_gateway: 172.17.0.1
      networkd_dns: 172.17.0.3
      networkd_address: 172.17.0.22/16
+      k8s_etcd_endpoints: http://172.17.0.23:2379
+      k8s_controller_endpoint: https://172.17.0.21
+      k8s_version: v1.1.2
+      k8s_dns_service_ip: 10.3.0.10
+      k8s_advertise_ip: 172.17.0.22
+      k8s_cert_endpoint: http://bootcfg.foo:8080/assets

  - name: etcd Node
    spec: etcd
@@ -30,4 +44,7 @@ groups:
      networkd_gateway: 172.17.0.1
      networkd_dns: 172.17.0.3
      networkd_address: 172.17.0.23/16
+      ipv4_address: 172.17.0.23
+      etcd_name: solo
+      etcd_initial_cluster: "solo=http://172.17.0.23:2380"
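To preview the cloud-config a matched machine will receive, you can query the config server directly with that machine's identifiers (a UUID from the group files in this commit; `bootcfg.foo` assumed resolvable from your host):

    curl 'http://bootcfg.foo:8080/cloud?uuid=16e7d8a7-bfa9-428b-9117-363341bb330b'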
examples/kubeconfig (new file, 19 lines)
@@ -0,0 +1,19 @@
apiVersion: v1
kind: Config
clusters:
  - cluster:
      certificate-authority: ../assets/tls/ca.pem
      server: https://172.17.0.21:443
    name: k8s-docker
contexts:
  - context:
      cluster: k8s-docker
      namespace: default
      user: k8s-docker
    name: k8s-docker
current-context: k8s-docker
users:
  - name: k8s-docker
    user:
      client-certificate: ../assets/tls/admin.pem
      client-key: ../assets/tls/admin-key.pem
@@ -1,51 +0,0 @@
# Kubernetes

This example provisions a Kubernetes cluster with 1 master node, 1 worker node, and a dedicated etcd node. Each node uses a static IP address on the local network.

## Assets

Download the required CoreOS Beta image assets.

    ./scripts/get-coreos beta 877.1.0

Next, add or generate a root CA and Kubernetes TLS assets for each component.

### TLS Assets

Note: In this example, TLS assets are served to any machines which request them. The network and any machines on it cannot be trusted yet, so this example is **not suitable for production**. [Distributed Trusted Computing](https://coreos.com/blog/coreos-trusted-computing.html) work will soon let machines with TPMs establish secure channels to improve secret distribution and cluster attestation.

Use the `generate-tls` script to generate throw-away TLS assets. The script will generate a root CA and `admin`, `apiserver`, and `worker` certificates in `assets/tls`.

    ./examples/kubernetes/scripts/generate-tls

Alternately, if you have existing Public Key Infrastructure, add your CA certificate, entity certificates, and entity private keys to `assets/tls`.

* ca.pem
* apiserver.pem
* apiserver-key.pem
* worker.pem
* worker-key.pem
* admin.pem
* admin-key.pem

See the [Cluster TLS OpenSSL Generation](https://coreos.com/kubernetes/docs/latest/openssl.html) document or [Kubernetes Step by Step](https://coreos.com/kubernetes/docs/latest/getting-started.html) for more details.

Return to the general examples [README](../README).

## Usage

Install `kubectl` on your host and use the `examples/kubernetes/kubeconfig` file, which references the top-level `assets/tls`.

    cd /path/to/coreos-baremetal
    kubectl --kubeconfig=examples/kubernetes/kubeconfig get nodes

Watch pod events.

    kubectl --kubeconfig=examples/kubernetes/kubeconfig get pods --all-namespaces -w

Get all pods.

    kubectl --kubeconfig=examples/kubernetes/kubeconfig get pods --all-namespaces

On my laptop, it takes about 1 minute from boot until the Kubernetes API comes up. Then it takes another 1-2 minutes for all components including DNS to be pulled and started.
@@ -1,18 +0,0 @@
#cloud-config
coreos:
  etcd2:
    name: etcdserver
    initial-cluster: etcdserver=http://172.17.0.23:2380
    initial-advertise-peer-urls: http://172.17.0.23:2380
    advertise-client-urls: http://172.17.0.23:2379
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://0.0.0.0:2380
  units:
    - name: etcd2.service
      command: start
write_files:
  - path: "/home/core/etcd"
    owner: "core"
    permissions: "0644"
    content: |
      File added by etcd.yaml.
@@ -1,19 +0,0 @@
apiVersion: v1
kind: Config
clusters:
  - cluster:
      certificate-authority: ../../assets/tls/ca.pem
      server: https://172.17.0.21:443
    name: baremetal-cluster
contexts:
  - context:
      cluster: baremetal-cluster
      namespace: default
      user: baremetal-admin
    name: baremetal
current-context: baremetal
users:
  - name: baremetal-admin
    user:
      client-certificate: ../../assets/tls/admin.pem
      client-key: ../../assets/tls/admin-key.pem
@@ -1,15 +0,0 @@
#!/bin/bash -e
# USAGE: ./examples/kubernetes/scripts/generate-tls

DEST=${1:-"assets/tls"}

if [ ! -d "$DEST" ]; then
  echo "Creating directory $DEST"
  mkdir -p $DEST
fi

./examples/kubernetes/scripts/root-ca $DEST
./examples/kubernetes/scripts/kubernetes-cert $DEST admin kube-admin
./examples/kubernetes/scripts/kubernetes-cert $DEST apiserver kube-apiserver IP.1=10.3.0.1,IP.2=172.17.0.21
./examples/kubernetes/scripts/kubernetes-cert $DEST worker kube-worker IP.1=172.17.0.22
@@ -1,15 +0,0 @@
{
  "id": "etcd",
  "boot": {
    "kernel": "/assets/coreos/877.1.0/coreos_production_pxe.vmlinuz",
    "initrd": ["/assets/coreos/877.1.0/coreos_production_pxe_image.cpio.gz"],
    "cmdline": {
      "cloud-config-url": "http://bootcfg.foo:8080/cloud?uuid=${uuid}&mac=${net0/mac:hexhyp}",
      "coreos.config.url": "http://bootcfg.foo:8080/ignition?uuid=${uuid}&mac=${net0/mac:hexhyp}",
      "coreos.autologin": "",
      "coreos.first_boot": ""
    }
  },
  "cloud_id": "etcd.yaml",
  "ignition_id": "network.json"
}
@@ -10,6 +10,6 @@
      "coreos.first_boot": ""
    }
  },
-  "cloud_id": "master.sh",
+  "cloud_id": "kubernetes-master.sh",
  "ignition_id": "network.json"
}
@@ -10,6 +10,6 @@
      "coreos.first_boot": ""
    }
  },
-  "cloud_id": "worker.sh",
+  "cloud_id": "kubernetes-worker.sh",
  "ignition_id": "network.json"
}
scripts/tls/generate-kubernetes-secrets (new executable file, 14 lines)
@@ -0,0 +1,14 @@
#!/bin/bash -e
# USAGE: ./scripts/tls/generate-kubernetes-secrets

DEST=${1:-"assets/tls"}

if [ ! -d "$DEST" ]; then
  echo "Creating directory $DEST"
  mkdir -p $DEST
fi

./scripts/tls/root-ca $DEST
./scripts/tls/kubernetes-cert $DEST admin kube-admin
./scripts/tls/kubernetes-cert $DEST apiserver kube-apiserver IP.1=10.3.0.1,IP.2=172.17.0.21
./scripts/tls/kubernetes-cert $DEST worker kube-worker IP.1=172.17.0.22
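A typical invocation from the repository root, writing to the default `assets/tls` destination:

    cd /path/to/coreos-baremetal
    ./scripts/tls/generate-kubernetes-secrets
    ls assets/tls   # expect ca.pem plus the admin, apiserver, and worker certs and keys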