examples/k8s: Use DNS names in Kubernetes clusters
* Use DNS names to refer to nodes, mirroring production setups
@@ -15,9 +15,8 @@ Ensure that you've gone through the [bootcfg with rkt](getting-started-rkt.md) o
 The [examples](../examples) statically assign IP addresses to libvirt client VMs created by `scripts/libvirt`. VMs are set up on the `metal0` CNI bridge for rkt or the `docker0` bridge for Docker. The examples can be used for physical machines if you update the MAC/IP addresses. See [network setup](network-setup.md) and [deployment](deployment.md).
 
-* [k8s](../examples/groups/k8s) - iPXE boot a Kubernetes cluster (use rkt)
-* [k8s-docker](../examples/groups/k8s-docker) - iPXE boot a Kubernetes cluster on `docker0` (use docker)
-* [k8s-install](../examples/groups/k8s-install) - Install a Kubernetes cluster to disk (use rkt)
+* [k8s](../examples/groups/k8s) - iPXE boot a Kubernetes cluster
+* [k8s-install](../examples/groups/k8s-install) - Install a Kubernetes cluster to disk
 * [Lab examples](https://github.com/dghubble/metal) - Lab hardware examples
 
 ### Assets
@@ -31,10 +30,10 @@ Add your SSH public key to each machine group definition [as shown](../examples/
 
 Generate a root CA and Kubernetes TLS assets for components (`admin`, `apiserver`, `worker`).
 
     rm -rf examples/assets/tls
-    # for Kubernetes on CNI metal0, i.e. rkt
-    ./scripts/tls/k8s-certgen -d examples/assets/tls -s 172.15.0.21 -m IP.1=10.3.0.1,IP.2=172.15.0.21 -w IP.1=172.15.0.22,IP.2=172.15.0.23
-    # for Kubernetes on docker0
-    ./scripts/tls/k8s-certgen -d examples/assets/tls -s 172.17.0.21 -m IP.1=10.3.0.1,IP.2=172.17.0.21 -w IP.1=172.17.0.22,IP.2=172.17.0.23
+    # for Kubernetes on CNI metal0 (for rkt)
+    ./scripts/tls/k8s-certgen -d examples/assets/tls -s 172.15.0.21 -m IP.1=10.3.0.1,IP.2=172.15.0.21,DNS.1=node1.example.com -w DNS.1=node2.example.com,DNS.2=node3.example.com
+    # for Kubernetes on docker0 (for docker)
+    ./scripts/tls/k8s-certgen -d examples/assets/tls -s 172.17.0.21 -m IP.1=10.3.0.1,IP.2=172.17.0.21,DNS.1=node1.example.com -w DNS.1=node2.example.com,DNS.2=node3.example.com
 
 **Note**: TLS assets are served to any machines which request them, which requires a trusted network. Alternatively, provisioning may be tweaked to require TLS assets be securely copied to each host. Read about our longer term security plans at [Distributed Trusted Computing](https://coreos.com/blog/coreos-trusted-computing.html).
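
To sanity-check that the regenerated certificates carry the new DNS SANs, inspect them with openssl (a quick check; it assumes `k8s-certgen` writes `apiserver.pem` and `worker.pem` into the `-d` directory):

    # the SAN extension should now list node1.example.com, node2.example.com, etc.
    openssl x509 -in examples/assets/tls/apiserver.pem -noout -text | grep -A1 'Subject Alternative Name'
    openssl x509 -in examples/assets/tls/worker.pem -noout -text | grep -A1 'Subject Alternative Name'
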
@@ -42,7 +41,7 @@ Generate a root CA and Kubernetes TLS assets for components (`admin`, `apiserver
 
 Use rkt or docker to start `bootcfg` and mount the desired example resources. Create a network boot environment and power on your machines. Revisit [bootcfg with rkt](getting-started-rkt.md) or [bootcfg with Docker](getting-started-docker.md) for help.
 
-Client machines should boot and provision themselves. Local client VMs should network boot CoreOS in about a minute and the Kubernetes API should be available after 2-3 minutes. If you chose `k8s-install`, notice that machines install CoreOS and then reboot (in libvirt, you must hit "power" again). Time to network boot and provision Kubernetes clusters on physical hardware depends on a number of factors (POST duration, boot device iteration, network speed, etc.).
+Client machines should boot and provision themselves. Local client VMs should network boot CoreOS in about a minute and the Kubernetes API should be available after 3-4 minutes (each node downloads a ~160MB Hyperkube). If you chose `k8s-install`, notice that machines install CoreOS and then reboot (in libvirt, you must hit "power" again). Time to network boot and provision Kubernetes clusters on physical hardware depends on a number of factors (POST duration, boot device iteration, network speed, etc.).
 
 ## Verify
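
While machines provision, you can poll the API with the generated kubeconfig until nodes register (a convenience loop, assuming the asset paths used above):

    # repeat until the apiserver responds and lists the nodes
    until kubectl --kubeconfig=examples/assets/tls/kubeconfig get nodes; do sleep 10; done
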
@@ -50,10 +49,10 @@ Client machines should boot and provision themselves. Local client VMs should ne
 
     $ cd /path/to/coreos-baremetal
     $ kubectl --kubeconfig=examples/assets/tls/kubeconfig get nodes
-    NAME           STATUS    AGE
-    172.15.0.21    Ready     6m
-    172.15.0.22    Ready     5m
-    172.15.0.23    Ready     6m
+    NAME                STATUS    AGE
+    node1.example.com   Ready     43s
+    node2.example.com   Ready     38s
+    node3.example.com   Ready     37s
 
 Get all pods.
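
The pod listing itself is not shown in this hunk; following the pattern above, the command would presumably be:

    kubectl --kubeconfig=examples/assets/tls/kubeconfig get pods --all-namespaces
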
@@ -10,7 +10,7 @@ These examples network boot and provision machines into CoreOS clusters using `b
 
 | pxe-disk | CoreOS via iPXE, with a root filesystem | alpha/1109.1.0 | Disk | [reference](https://coreos.com/os/docs/latest/booting-with-ipxe.html) |
 | etcd | iPXE boot a 3 node etcd cluster and proxy | alpha/1109.1.0 | RAM | [reference](https://coreos.com/os/docs/latest/cluster-architectures.html) |
 | etcd-install | Install a 3-node etcd cluster to disk | alpha/1109.1.0 | Disk | [reference](https://coreos.com/os/docs/latest/installing-to-disk.html) |
-| k8s, k8s-docker | Kubernetes cluster with 1 master, 2 workers, and TLS-authentication | alpha/1109.1.0 | Disk | [tutorial](../Documentation/kubernetes.md) |
+| k8s | Kubernetes cluster with 1 master, 2 workers, and TLS-authentication | alpha/1109.1.0 | Disk | [tutorial](../Documentation/kubernetes.md) |
 | k8s-install | Install a Kubernetes cluster to disk | alpha/1109.1.0 | Disk | [tutorial](../Documentation/kubernetes.md) |
 | bootkube | iPXE boot a self-hosted Kubernetes cluster (with bootkube) | alpha/1109.1.0 | Disk | [tutorial](../Documentation/bootkube.md) |
 | bootkube-install | Install a self-hosted Kubernetes cluster (with bootkube) | alpha/1109.1.0 | Disk | [tutorial](../Documentation/bootkube.md) |
@@ -46,7 +46,3 @@ Most examples allow `ssh_authorized_keys` to be added for the `core` user as mac
 
 ### "pxe"
 
 Some examples check the `pxe` variable to determine whether to create a `/dev/sda1` filesystem and partition for PXEing with `root=/dev/sda1` ("pxe":"true") or to write files to the existing filesystem on `/dev/disk/by-label/ROOT` ("pxe":"false").
 
-### "skip_networkd"
-
-Some examples (mainly Kubernetes examples) check the `skip_networkd` variable to determine whether to skip configuring networkd. When `true`, the default networkd config is used, which uses DHCP to set up networking. Use this if you've pre-configured static IP mappings for Kubernetes nodes in your DHCP server. Otherwise, `networkd_address`, `networkd_dns`, and `networkd_gateway` machine metadata are used to populate a networkd configuration on each host.
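
Since nodes are now addressed by DNS name, the expectation is that your DHCP/DNS server pins each machine to a stable address and name. With dnsmasq, for example, mappings for these example machines might look like this (a sketch; substitute your own MACs, IPs, and domain):

    # pin each node's MAC to a fixed IP and hostname
    dhcp-host=52:54:00:a1:9c:ae,node1,172.15.0.21
    dhcp-host=52:54:00:b2:2f:86,node2,172.15.0.22
    dhcp-host=52:54:00:c3:61:77,node3,172.15.0.23
    # resolve the FQDNs used by the k8s example groups
    address=/node1.example.com/172.15.0.21
    address=/node2.example.com/172.15.0.22
    address=/node3.example.com/172.15.0.23
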
@@ -1,22 +0,0 @@
-{
-  "id": "node1",
-  "name": "k8s controller",
-  "profile": "k8s-master",
-  "selector": {
-    "mac": "52:54:00:a1:9c:ae"
-  },
-  "metadata": {
-    "etcd_initial_cluster": "node1=http://172.17.0.21:2380",
-    "etcd_name": "node1",
-    "ipv4_address": "172.17.0.21",
-    "k8s_cert_endpoint": "http://bootcfg.foo:8080/assets",
-    "k8s_dns_service_ip": "10.3.0.10",
-    "k8s_etcd_endpoints": "http://172.17.0.21:2379",
-    "k8s_pod_network": "10.2.0.0/16",
-    "k8s_service_ip_range": "10.3.0.0/24",
-    "networkd_address": "172.17.0.21/16",
-    "networkd_dns": "172.17.0.3",
-    "networkd_gateway": "172.17.0.1",
-    "pxe": "true"
-  }
-}
@@ -1,20 +0,0 @@
-{
-  "id": "node2",
-  "name": "k8s worker",
-  "profile": "k8s-worker",
-  "selector": {
-    "mac": "52:54:00:b2:2f:86"
-  },
-  "metadata": {
-    "etcd_initial_cluster": "node1=http://172.17.0.21:2380",
-    "ipv4_address": "172.17.0.22",
-    "k8s_cert_endpoint": "http://bootcfg.foo:8080/assets",
-    "k8s_controller_endpoint": "https://172.17.0.21",
-    "k8s_dns_service_ip": "10.3.0.10",
-    "k8s_etcd_endpoints": "http://172.17.0.21:2379",
-    "networkd_address": "172.17.0.22/16",
-    "networkd_dns": "172.17.0.3",
-    "networkd_gateway": "172.17.0.1",
-    "pxe": "true"
-  }
-}
@@ -1,20 +0,0 @@
-{
-  "id": "node3",
-  "name": "k8s worker",
-  "profile": "k8s-worker",
-  "selector": {
-    "mac": "52:54:00:c3:61:77"
-  },
-  "metadata": {
-    "etcd_initial_cluster": "node1=http://172.17.0.21:2380",
-    "ipv4_address": "172.17.0.23",
-    "k8s_cert_endpoint": "http://bootcfg.foo:8080/assets",
-    "k8s_controller_endpoint": "https://172.17.0.21",
-    "k8s_dns_service_ip": "10.3.0.10",
-    "k8s_etcd_endpoints": "http://172.17.0.21:2379",
-    "networkd_address": "172.17.0.23/16",
-    "networkd_dns": "172.17.0.3",
-    "networkd_gateway": "172.17.0.1",
-    "pxe": "true"
-  }
-}
@@ -7,16 +7,13 @@
     "mac": "52:54:00:a1:9c:ae"
   },
   "metadata": {
-    "etcd_initial_cluster": "node1=http://172.15.0.21:2380",
+    "domain_name": "node1.example.com",
+    "etcd_initial_cluster": "node1=http://node1.example.com:2380",
     "etcd_name": "node1",
-    "ipv4_address": "172.15.0.21",
     "k8s_cert_endpoint": "http://bootcfg.foo:8080/assets",
     "k8s_dns_service_ip": "10.3.0.10",
-    "k8s_etcd_endpoints": "http://172.15.0.21:2379",
+    "k8s_etcd_endpoints": "http://node1.example.com:2379",
     "k8s_pod_network": "10.2.0.0/16",
-    "k8s_service_ip_range": "10.3.0.0/24",
-    "networkd_address": "172.15.0.21/16",
-    "networkd_dns": "172.15.0.3",
-    "networkd_gateway": "172.15.0.1"
+    "k8s_service_ip_range": "10.3.0.0/24"
   }
 }
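
With machines now referred to by DNS name, those names must resolve on the network where nodes boot (the old metadata pointed nodes at a DNS server on 172.15.0.3). A quick resolution check from a host on that network (a sketch):

    # each name should map to the matching 172.15.0.2x address
    getent hosts node1.example.com node2.example.com node3.example.com
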
@@ -7,14 +7,11 @@
     "mac": "52:54:00:b2:2f:86"
   },
   "metadata": {
-    "etcd_initial_cluster": "node1=http://172.15.0.21:2380",
-    "ipv4_address": "172.15.0.22",
+    "domain_name": "node2.example.com",
+    "etcd_initial_cluster": "node1=http://node1.example.com:2380",
     "k8s_cert_endpoint": "http://bootcfg.foo:8080/assets",
-    "k8s_controller_endpoint": "https://172.15.0.21",
+    "k8s_controller_endpoint": "https://node1.example.com",
     "k8s_dns_service_ip": "10.3.0.10",
-    "k8s_etcd_endpoints": "http://172.15.0.21:2379",
-    "networkd_address": "172.15.0.22/16",
-    "networkd_dns": "172.15.0.3",
-    "networkd_gateway": "172.15.0.1"
+    "k8s_etcd_endpoints": "http://node1.example.com:2379"
   }
 }
@@ -7,14 +7,11 @@
     "mac": "52:54:00:c3:61:77"
   },
   "metadata": {
-    "etcd_initial_cluster": "node1=http://172.15.0.21:2380",
-    "ipv4_address": "172.15.0.23",
+    "domain_name": "node3.example.com",
+    "etcd_initial_cluster": "node1=http://node1.example.com:2380",
     "k8s_cert_endpoint": "http://bootcfg.foo:8080/assets",
-    "k8s_controller_endpoint": "https://172.15.0.21",
+    "k8s_controller_endpoint": "https://node1.example.com",
     "k8s_dns_service_ip": "10.3.0.10",
-    "k8s_etcd_endpoints": "http://172.15.0.21:2379",
-    "networkd_address": "172.15.0.23/16",
-    "networkd_dns": "172.15.0.3",
-    "networkd_gateway": "172.15.0.1"
+    "k8s_etcd_endpoints": "http://node1.example.com:2379"
   }
 }
@@ -6,17 +6,14 @@
     "mac": "52:54:00:a1:9c:ae"
   },
   "metadata": {
-    "etcd_initial_cluster": "node1=http://172.15.0.21:2380",
+    "domain_name": "node1.example.com",
+    "etcd_initial_cluster": "node1=http://node1.example.com:2380",
     "etcd_name": "node1",
-    "ipv4_address": "172.15.0.21",
     "k8s_cert_endpoint": "http://bootcfg.foo:8080/assets",
     "k8s_dns_service_ip": "10.3.0.10",
-    "k8s_etcd_endpoints": "http://172.15.0.21:2379,http://172.15.0.22:2379,http://172.15.0.23:2379",
+    "k8s_etcd_endpoints": "http://node1.example.com:2379",
     "k8s_pod_network": "10.2.0.0/16",
     "k8s_service_ip_range": "10.3.0.0/24",
-    "networkd_address": "172.15.0.21/16",
-    "networkd_dns": "172.15.0.3",
-    "networkd_gateway": "172.15.0.1",
     "pxe": "true"
   }
 }
@@ -6,15 +6,12 @@
     "mac": "52:54:00:b2:2f:86"
   },
   "metadata": {
-    "etcd_initial_cluster": "node1=http://172.15.0.21:2380",
-    "ipv4_address": "172.15.0.22",
+    "domain_name": "node2.example.com",
+    "etcd_initial_cluster": "node1=http://node1.example.com:2380",
     "k8s_cert_endpoint": "http://bootcfg.foo:8080/assets",
-    "k8s_controller_endpoint": "https://172.15.0.21",
+    "k8s_controller_endpoint": "https://node1.example.com",
     "k8s_dns_service_ip": "10.3.0.10",
-    "k8s_etcd_endpoints": "http://172.15.0.21:2379",
-    "networkd_address": "172.15.0.22/16",
-    "networkd_dns": "172.15.0.3",
-    "networkd_gateway": "172.15.0.1",
+    "k8s_etcd_endpoints": "http://node1.example.com:2379",
     "pxe": "true"
   }
 }
@@ -6,15 +6,12 @@
     "mac": "52:54:00:c3:61:77"
   },
   "metadata": {
-    "etcd_initial_cluster": "node1=http://172.15.0.21:2380",
-    "ipv4_address": "172.15.0.23",
+    "domain_name": "node3.example.com",
+    "etcd_initial_cluster": "node1=http://node1.example.com:2380",
     "k8s_cert_endpoint": "http://bootcfg.foo:8080/assets",
-    "k8s_controller_endpoint": "https://172.15.0.21",
+    "k8s_controller_endpoint": "https://node1.example.com",
     "k8s_dns_service_ip": "10.3.0.10",
-    "k8s_etcd_endpoints": "http://172.15.0.21:2379",
-    "networkd_address": "172.15.0.23/16",
-    "networkd_dns": "172.15.0.3",
-    "networkd_gateway": "172.15.0.1",
+    "k8s_etcd_endpoints": "http://node1.example.com:2379",
     "pxe": "true"
   }
 }
@@ -8,10 +8,10 @@ systemd:
           contents: |
             [Service]
             Environment="ETCD_NAME={{.etcd_name}}"
-            Environment="ETCD_ADVERTISE_CLIENT_URLS=http://{{.ipv4_address}}:2379"
-            Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=http://{{.ipv4_address}}:2380"
+            Environment="ETCD_ADVERTISE_CLIENT_URLS=http://{{.domain_name}}:2379"
+            Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=http://{{.domain_name}}:2380"
             Environment="ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379"
-            Environment="ETCD_LISTEN_PEER_URLS=http://{{.ipv4_address}}:2380"
+            Environment="ETCD_LISTEN_PEER_URLS=http://{{.domain_name}}:2380"
             Environment="ETCD_INITIAL_CLUSTER={{.etcd_initial_cluster}}"
             Environment="ETCD_STRICT_RECONFIG_CHECK=true"
     - name: flanneld.service
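
With the advertise URLs switching to `{{.domain_name}}`, a booted master should be reachable through its name. One way to confirm, using the v2-era etcdctl from any host that can resolve it (a sketch):

    # query cluster health via the DNS name instead of the node IP
    etcdctl --endpoints=http://node1.example.com:2379 cluster-health
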
@@ -19,7 +19,7 @@ systemd:
         - name: 40-ExecStartPre-symlink.conf
           contents: |
             [Service]
-            ExecStartPre=/usr/bin/ln -sf /etc/flannel/options.env /run/flannel/options.env
+            EnvironmentFile=-/etc/flannel/options.env
             ExecStartPre=/opt/init-flannel
     - name: docker.service
       dropins:
@@ -65,7 +65,7 @@ systemd:
             --register-schedulable=true \
             --allow-privileged=true \
             --config=/etc/kubernetes/manifests \
-            --hostname-override={{.ipv4_address}} \
+            --hostname-override={{.domain_name}} \
             --cluster_dns={{.k8s_dns_service_ip}} \
             --cluster_domain=cluster.local
           Restart=always
@@ -157,7 +157,6 @@ storage:
         - --allow-privileged=true
         - --service-cluster-ip-range={{.k8s_service_ip_range}}
         - --secure-port=443
-        - --advertise-address={{.ipv4_address}}
         - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
         - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
         - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
@@ -196,7 +195,6 @@ storage:
       filesystem: rootfs
       contents:
         inline: |
-          FLANNELD_IFACE={{.ipv4_address}}
           FLANNELD_ETCD_ENDPOINTS={{.k8s_etcd_endpoints}}
     - path: /etc/kubernetes/manifests/kube-controller-manager.yaml
       filesystem: rootfs
@@ -708,19 +706,6 @@ storage:
           curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dashboard-rc.json)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/replicationcontrollers" > /dev/null
           curl --silent -H "Content-Type: application/json" -XPOST -d"$(cat /srv/kubernetes/manifests/kube-dashboard-svc.json)" "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services" > /dev/null
 
-{{ if not (index . "skip_networkd") }}
-networkd:
-  units:
-    - name: 10-static.network
-      contents: |
-        [Match]
-        MACAddress={{.mac}}
-        [Network]
-        Gateway={{.networkd_gateway}}
-        DNS={{.networkd_dns}}
-        Address={{.networkd_address}}
-{{end}}
-
 {{ if index . "ssh_authorized_keys" }}
 passwd:
   users:
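
With the static `10-static.network` unit removed, nodes fall back to systemd-networkd's default DHCP behavior for addressing and DNS. To see what a booted node actually received, run on the node itself (a quick check):

    # show per-link state, including the DHCP-assigned address
    networkctl status
    # confirm which DNS servers name lookups will use
    cat /etc/resolv.conf
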
@@ -12,10 +12,10 @@ systemd:
             Environment="ETCD_INITIAL_CLUSTER={{.etcd_initial_cluster}}"
     - name: flanneld.service
       dropins:
-        - name: 40-ExecStartPre-symlink.conf
+        - name: 40-add-options.conf
           contents: |
             [Service]
-            ExecStartPre=/usr/bin/ln -sf /etc/flannel/options.env /run/flannel/options.env
+            EnvironmentFile=-/etc/flannel/options.env
     - name: docker.service
       dropins:
         - name: 40-flannel.conf
@@ -58,7 +58,7 @@ systemd:
             --register-node=true \
             --allow-privileged=true \
             --config=/etc/kubernetes/manifests \
-            --hostname-override={{.ipv4_address}} \
+            --hostname-override={{.domain_name}} \
             --cluster_dns={{.k8s_dns_service_ip}} \
             --cluster_domain=cluster.local \
             --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
@@ -158,22 +158,8 @@ storage:
       filesystem: rootfs
       contents:
         inline: |
-          FLANNELD_IFACE={{.ipv4_address}}
           FLANNELD_ETCD_ENDPOINTS={{.k8s_etcd_endpoints}}
 
-{{ if not (index . "skip_networkd") }}
-networkd:
-  units:
-    - name: 10-static.network
-      contents: |
-        [Match]
-        MACAddress={{.mac}}
-        [Network]
-        Gateway={{.networkd_gateway}}
-        DNS={{.networkd_dns}}
-        Address={{.networkd_address}}
-{{end}}
-
 {{ if index . "ssh_authorized_keys" }}
 passwd:
   users: