diff --git a/Documentation/getting-started-docker.md b/Documentation/getting-started-docker.md
index 5f2bb566..b23356ca 100644
--- a/Documentation/getting-started-docker.md
+++ b/Documentation/getting-started-docker.md
@@ -77,6 +77,7 @@ The example profile added autologin so you can verify that etcd works between no
     systemctl status etcd2
     etcdctl set /message hello
     etcdctl get /message
+    fleetctl list-machines

 Clean up the VM machines.

diff --git a/Documentation/getting-started-rkt.md b/Documentation/getting-started-rkt.md
index fa4c7804..01535a24 100644
--- a/Documentation/getting-started-rkt.md
+++ b/Documentation/getting-started-rkt.md
@@ -50,7 +50,7 @@ On Fedora, add the `metal0` interface to the trusted zone in your firewall confi

     sudo firewall-cmd --add-interface=metal0 --zone=trusted

-## Application Container
+## Containers

 #### Latest

@@ -115,6 +115,7 @@ The example profile added autologin so you can verify that etcd works between no
     systemctl status etcd2
     etcdctl set /message hello
     etcdctl get /message
+    fleetctl list-machines

 Press ^] three times to stop a rkt pod. Clean up the VM machines.

diff --git a/examples/README.md b/examples/README.md
index d12b501b..afcbc8bf 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -9,8 +9,8 @@ These examples show declarative configurations for network booting libvirt VMs i
 | grub | CoreOS via GRUB2 Netboot | beta/899.6.0 | RAM | NA |
 | pxe-disk | CoreOS via iPXE, with a root filesystem | alpha/962.0.0 | Disk | [reference](https://coreos.com/os/docs/latest/booting-with-ipxe.html) |
 | coreos-install | 2-stage Ignition: Install CoreOS, provision etcd cluster | alpha/962.0.0 | Disk | [reference](https://coreos.com/os/docs/latest/installing-to-disk.html) |
-| etcd-rkt, etcd-docker | Cluster with 3 etcd nodes, 2 proxies | beta/899.6.0 | RAM | [reference](https://coreos.com/os/docs/latest/cluster-architectures.html) |
-| k8s-rkt, k8s-docker | Kubernetes cluster with 1 master, 1 worker, 1 dedicated etcd node, TLS-authentication | alpha/962.0.0 | Disk | [reference](https://github.com/coreos/coreos-kubernetes) |
+| etcd-rkt, etcd-docker | Cluster with 3 etcd nodes, 2 proxies | alpha/983.0.0 | RAM | [reference](https://coreos.com/os/docs/latest/cluster-architectures.html) |
+| k8s-rkt, k8s-docker | Kubernetes cluster with 1 master and 2 workers, TLS authentication | alpha/983.0.0 | Disk | [reference](https://github.com/coreos/coreos-kubernetes) |

 ## Experimental

@@ -38,17 +38,19 @@ Most example profiles configure machines with a `core` user and `ssh_authorized_

 ## Kubernetes

-The Kubernetes cluster examples create a TLS-authenticated Kubernetes cluster with 1 master node, 1 worker node, and 1 etcd node, running without a disk.
+The Kubernetes examples create Kubernetes clusters with CoreOS hosts and TLS authentication.

-You'll need to download the CoreOS Beta image, which ships with the kubelet, and generate TLS assets.
+### Assets

-### TLS Assets
+Download the CoreOS PXE image assets to `assets/coreos`. These images are served to network boot machines by `bootcfg`.
+
+    ./scripts/get-coreos alpha 983.0.0

 **Note**: TLS assets are served to any machines which request them. This is unsuitable for production where machines and networks are untrusted. Read about our longer term security plans at [Distributed Trusted Computing](https://coreos.com/blog/coreos-trusted-computing.html).

-Generate a root CA and Kubernetes TLS assets for each component (`admin`, `apiserver`, `worker`).
+Generate a root CA and Kubernetes TLS assets for components (`admin`, `apiserver`, `worker`).

-    cd coreos-baremetal
+    rm -rf assets/tls
     # for Kubernetes on CNI metal0, i.e. rkt
     ./scripts/tls/gen-rkt-k8s-secrets
     # for Kubernetes on docker0
@@ -80,5 +82,9 @@ Get all pods.

     kubectl --kubeconfig=examples/kubecfg-rkt get pods --all-namespaces

-On my laptop, it takes about 1 minute from boot until the Kubernetes API comes up. Then it takes another 1-2 minutes for all components including DNS to be pulled and started.
+On my laptop, VMs download and network boot CoreOS in the first 45 seconds, the Kubernetes API becomes available after about 150 seconds, and add-on pods are scheduled by 180 seconds. On physical hosts and networks, OS and container image download times are a bit longer.
+
+## Tectonic
+
+Now sign up for [Tectonic Starter](https://tectonic.com/starter/) and deploy the [Tectonic Console](https://tectonic.com/enterprise/docs/latest/deployer/tectonic_console.html) with a few `kubectl` commands!

diff --git a/examples/cloud/kubernetes-master.sh b/examples/cloud/kubernetes-master.sh
index 8abd50ff..4f9f79b6 100644
--- a/examples/cloud/kubernetes-master.sh
+++ b/examples/cloud/kubernetes-master.sh
@@ -28,7 +28,7 @@ export K8S_SERVICE_IP={{.k8s_service_ip}}
 export DNS_SERVICE_IP={{.k8s_dns_service_ip}}

 # ADVERTISE_IP is the host node's IP.
-export ADVERTISE_IP={{.k8s_advertise_ip}}
+export ADVERTISE_IP={{.ipv4_address}}

 # TLS Certificate assets are hosted by the Config Server
 export CERT_ENDPOINT={{.k8s_cert_endpoint}}
diff --git a/examples/cloud/kubernetes-worker.sh b/examples/cloud/kubernetes-worker.sh
index 96d6debc..8f3c795e 100644
--- a/examples/cloud/kubernetes-worker.sh
+++ b/examples/cloud/kubernetes-worker.sh
@@ -17,7 +17,7 @@ export K8S_VER=v1.1.8_coreos.0
 export DNS_SERVICE_IP={{.k8s_dns_service_ip}}

 # ADVERTISE_IP is the host node's IP.
-export ADVERTISE_IP={{.k8s_advertise_ip}}
+export ADVERTISE_IP={{.ipv4_address}}

 # TLS Certificate assets are hosted by the Config Server
 export CERT_ENDPOINT={{.k8s_cert_endpoint}}
diff --git a/examples/etcd-docker.yaml b/examples/etcd-docker.yaml
index b22d0ab6..d043b5f9 100644
--- a/examples/etcd-docker.yaml
+++ b/examples/etcd-docker.yaml
@@ -6,11 +6,11 @@ groups:
     require:
       uuid: 16e7d8a7-bfa9-428b-9117-363341bb330b
     metadata:
+      ipv4_address: 172.17.0.21
       networkd_name: ens3
       networkd_gateway: 172.17.0.1
       networkd_dns: 172.17.0.3
       networkd_address: 172.17.0.21/16
-      ipv4_address: 172.17.0.21
       fleet_metadata: "role=etcd,name=node1"
       etcd_name: node1
       etcd_initial_cluster: "node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380"
@@ -20,11 +20,11 @@ groups:
     require:
       uuid: 264cd073-ca62-44b3-98c0-50aad5b5f819
     metadata:
+      ipv4_address: 172.17.0.22
       networkd_name: ens3
       networkd_gateway: 172.17.0.1
       networkd_dns: 172.17.0.3
       networkd_address: 172.17.0.22/16
-      ipv4_address: 172.17.0.22
       fleet_metadata: "role=etcd,name=node2"
       etcd_name: node2
       etcd_initial_cluster: "node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380"
@@ -34,11 +34,11 @@ groups:
     require:
       uuid: 39d2e747-2648-4d68-ae92-bbc70b245055
     metadata:
+      ipv4_address: 172.17.0.23
       networkd_name: ens3
       networkd_gateway: 172.17.0.1
       networkd_dns: 172.17.0.3
       networkd_address: 172.17.0.23/16
-      ipv4_address: 172.17.0.23
       fleet_metadata: "role=etcd,name=node3"
       etcd_name: node3
       etcd_initial_cluster: "node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380"
diff --git a/examples/etcd-rkt.yaml b/examples/etcd-rkt.yaml
index 661f7a67..92c3e8d7 100644
--- a/examples/etcd-rkt.yaml
+++ b/examples/etcd-rkt.yaml
@@ -6,11 +6,11 @@ groups:
     require:
       uuid: 16e7d8a7-bfa9-428b-9117-363341bb330b
     metadata:
+      ipv4_address: 172.15.0.21
       networkd_name: ens3
       networkd_gateway: 172.15.0.1
       networkd_dns: 172.15.0.3
       networkd_address: 172.15.0.21/16
-      ipv4_address: 172.15.0.21
       fleet_metadata: "role=etcd,name=node1"
       etcd_name: node1
       etcd_initial_cluster: "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
@@ -20,11 +20,11 @@ groups:
     require:
       uuid: 264cd073-ca62-44b3-98c0-50aad5b5f819
     metadata:
+      ipv4_address: 172.15.0.22
       networkd_name: ens3
       networkd_gateway: 172.15.0.1
       networkd_dns: 172.15.0.3
       networkd_address: 172.15.0.22/16
-      ipv4_address: 172.15.0.22
       fleet_metadata: "role=etcd,name=node2"
       etcd_name: node2
       etcd_initial_cluster: "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
@@ -34,11 +34,11 @@ groups:
     require:
       uuid: 39d2e747-2648-4d68-ae92-bbc70b245055
     metadata:
+      ipv4_address: 172.15.0.23
       networkd_name: ens3
       networkd_gateway: 172.15.0.1
       networkd_dns: 172.15.0.3
       networkd_address: 172.15.0.23/16
-      ipv4_address: 172.15.0.23
       fleet_metadata: "role=etcd,name=node3"
       etcd_name: node3
       etcd_initial_cluster: "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
diff --git a/examples/ignition/network.yaml b/examples/ignition/network.yaml
index 28e50fa9..8ee93a68 100644
--- a/examples/ignition/network.yaml
+++ b/examples/ignition/network.yaml
@@ -14,6 +14,30 @@ systemd:
         ExecStart=/usr/bin/bash -c 'curl --url "http://bootcfg.foo:8080/metadata?{{.query}}" --retry 10 --output ${OUTPUT}'
         [Install]
         WantedBy=multi-user.target
+    - name: fleet.service
+      enable: true
+      dropins:
+        - name: fleet-metadata.conf
+          contents: |
+            [Service]
+            Environment="FLEET_METADATA={{.fleet_metadata}}"
+    - name: etcd2.service
+      enable: true
+      dropins:
+        - name: etcd-metadata.conf
+          contents: |
+            [Unit]
+            Requires=metadata.service
+            After=metadata.service
+            [Service]
+            # ETCD_NAME, ETCD_INITIAL_CLUSTER
+            EnvironmentFile=/run/metadata/bootcfg
+            ExecStart=
+            ExecStart=/usr/bin/etcd2 \
+              --advertise-client-urls=http://${IPV4_ADDRESS}:2379 \
+              --initial-advertise-peer-urls=http://${IPV4_ADDRESS}:2380 \
+              --listen-client-urls=http://0.0.0.0:2379 \
+              --listen-peer-urls=http://${IPV4_ADDRESS}:2380
 storage:
   disks:
     - device: /dev/sda
diff --git a/examples/k8s-docker.yaml b/examples/k8s-docker.yaml
index 4151be11..69e2b711 100644
--- a/examples/k8s-docker.yaml
+++ b/examples/k8s-docker.yaml
@@ -6,43 +6,54 @@ groups:
     require:
       uuid: 16e7d8a7-bfa9-428b-9117-363341bb330b
     metadata:
+      ipv4_address: 172.17.0.21
       networkd_name: ens3
       networkd_gateway: 172.17.0.1
       networkd_dns: 172.17.0.3
       networkd_address: 172.17.0.21/16
-      k8s_etcd_endpoints: http://172.17.0.23:2379
+      k8s_etcd_endpoints: "http://172.17.0.21:2379,http://172.17.0.22:2379,http://172.17.0.23:2379"
       k8s_pod_network: 10.2.0.0/16
       k8s_service_ip_range: 10.3.0.0/24
       k8s_service_ip: 10.3.0.1
       k8s_dns_service_ip: 10.3.0.10
-      k8s_advertise_ip: 172.17.0.21
       k8s_cert_endpoint: http://bootcfg.foo:8080/assets
+      fleet_metadata: "role=etcd,name=node1"
+      etcd_name: node1
+      etcd_initial_cluster: "node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380"

-  - name: Worker Node
+  - name: Worker 1
    profile: kubernetes-worker
     require:
       uuid: 264cd073-ca62-44b3-98c0-50aad5b5f819
     metadata:
+      ipv4_address: 172.17.0.22
       networkd_name: ens3
       networkd_gateway: 172.17.0.1
       networkd_dns: 172.17.0.3
       networkd_address: 172.17.0.22/16
-      k8s_etcd_endpoints: http://172.17.0.23:2379
+      k8s_etcd_endpoints: "http://172.17.0.21:2379,http://172.17.0.22:2379,http://172.17.0.23:2379"
       k8s_controller_endpoint: https://172.17.0.21
       k8s_dns_service_ip: 10.3.0.10
-      k8s_advertise_ip: 172.17.0.22
       k8s_cert_endpoint: http://bootcfg.foo:8080/assets
+      fleet_metadata: "role=etcd,name=node2"
+      etcd_name: node2
+      etcd_initial_cluster: "node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380"

-  - name: etcd Node
-    profile: etcd
+  - name: Worker 2
+    profile: kubernetes-worker
     require:
       uuid: 39d2e747-2648-4d68-ae92-bbc70b245055
     metadata:
+      ipv4_address: 172.17.0.23
       networkd_name: ens3
       networkd_gateway: 172.17.0.1
       networkd_dns: 172.17.0.3
       networkd_address: 172.17.0.23/16
-      ipv4_address: 172.17.0.23
-      etcd_name: solo
-      etcd_initial_cluster: "solo=http://172.17.0.23:2380"
+      k8s_etcd_endpoints: "http://172.17.0.21:2379,http://172.17.0.22:2379,http://172.17.0.23:2379"
+      k8s_controller_endpoint: https://172.17.0.21
+      k8s_dns_service_ip: 10.3.0.10
+      k8s_cert_endpoint: http://bootcfg.foo:8080/assets
+      fleet_metadata: "role=etcd,name=node3"
+      etcd_name: node3
+      etcd_initial_cluster: "node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380"
diff --git a/examples/k8s-rkt.yaml b/examples/k8s-rkt.yaml
index 475917ea..5fa3b3b6 100644
--- a/examples/k8s-rkt.yaml
+++ b/examples/k8s-rkt.yaml
@@ -6,44 +6,56 @@ groups:
     require:
       uuid: 16e7d8a7-bfa9-428b-9117-363341bb330b
     metadata:
+      ipv4_address: 172.15.0.21
       networkd_name: ens3
       networkd_gateway: 172.15.0.1
       networkd_dns: 172.15.0.3
       networkd_address: 172.15.0.21/16
-      k8s_etcd_endpoints: http://172.15.0.23:2379
+      k8s_etcd_endpoints: "http://172.15.0.21:2379,http://172.15.0.22:2379,http://172.15.0.23:2379"
       k8s_pod_network: 10.2.0.0/16
       k8s_service_ip_range: 10.3.0.0/24
       k8s_service_ip: 10.3.0.1
       k8s_dns_service_ip: 10.3.0.10
-      k8s_advertise_ip: 172.15.0.21
       k8s_cert_endpoint: http://bootcfg.foo:8080/assets
+      fleet_metadata: "role=etcd,name=node1"
+      etcd_name: node1
+      etcd_initial_cluster: "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
       ssh_authorized_keys:

-  - name: Worker Node
+  - name: Worker 1
     profile: kubernetes-worker
     require:
       uuid: 264cd073-ca62-44b3-98c0-50aad5b5f819
     metadata:
+      ipv4_address: 172.15.0.22
       networkd_name: ens3
       networkd_gateway: 172.15.0.1
       networkd_dns: 172.15.0.3
       networkd_address: 172.15.0.22/16
-      k8s_etcd_endpoints: http://172.15.0.23:2379
+      k8s_etcd_endpoints: "http://172.15.0.21:2379,http://172.15.0.22:2379,http://172.15.0.23:2379"
       k8s_controller_endpoint: https://172.15.0.21
       k8s_dns_service_ip: 10.3.0.10
-      k8s_advertise_ip: 172.15.0.22
       k8s_cert_endpoint: http://bootcfg.foo:8080/assets
+      fleet_metadata: "role=etcd,name=node2"
+      etcd_name: node2
+      etcd_initial_cluster: "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
+      ssh_authorized_keys:

-  - name: etcd Node
-    profile: etcd
+  - name: Worker 2
+    profile: kubernetes-worker
     require:
       uuid: 39d2e747-2648-4d68-ae92-bbc70b245055
     metadata:
+      ipv4_address: 172.15.0.23
       networkd_name: ens3
       networkd_gateway: 172.15.0.1
       networkd_dns: 172.15.0.3
       networkd_address: 172.15.0.23/16
-      ipv4_address: 172.15.0.23
-      etcd_name: solo
-      etcd_initial_cluster: "solo=http://172.15.0.23:2380"
-
+      k8s_etcd_endpoints: "http://172.15.0.21:2379,http://172.15.0.22:2379,http://172.15.0.23:2379"
+      k8s_controller_endpoint: https://172.15.0.21
+      k8s_dns_service_ip: 10.3.0.10
+      k8s_cert_endpoint: http://bootcfg.foo:8080/assets
+      fleet_metadata: "role=etcd,name=node3"
+      etcd_name: node3
+      etcd_initial_cluster: "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
+      ssh_authorized_keys:
\ No newline at end of file
diff --git a/examples/profiles/etcd/profile.json b/examples/profiles/etcd/profile.json
index db83afaf..99b6ae65 100644
--- a/examples/profiles/etcd/profile.json
+++ b/examples/profiles/etcd/profile.json
@@ -1,8 +1,8 @@
 {
   "id": "etcd",
   "boot": {
-    "kernel": "/assets/coreos/899.6.0/coreos_production_pxe.vmlinuz",
-    "initrd": ["/assets/coreos/899.6.0/coreos_production_pxe_image.cpio.gz"],
+    "kernel": "/assets/coreos/983.0.0/coreos_production_pxe.vmlinuz",
+    "initrd": ["/assets/coreos/983.0.0/coreos_production_pxe_image.cpio.gz"],
     "cmdline": {
       "coreos.config.url": "http://bootcfg.foo:8080/ignition?uuid=${uuid}&mac=${net0/mac:hexhyp}",
       "coreos.autologin": "",
diff --git a/examples/profiles/etcd_proxy/profile.json b/examples/profiles/etcd_proxy/profile.json
index e6f394d3..c055e2d5 100755
--- a/examples/profiles/etcd_proxy/profile.json
+++ b/examples/profiles/etcd_proxy/profile.json
@@ -1,8 +1,8 @@
 {
   "id": "etcd_proxy",
   "boot": {
-    "kernel": "/assets/coreos/899.6.0/coreos_production_pxe.vmlinuz",
-    "initrd": ["/assets/coreos/899.6.0/coreos_production_pxe_image.cpio.gz"],
+    "kernel": "/assets/coreos/983.0.0/coreos_production_pxe.vmlinuz",
+    "initrd": ["/assets/coreos/983.0.0/coreos_production_pxe_image.cpio.gz"],
     "cmdline": {
       "coreos.config.url": "http://bootcfg.foo:8080/ignition?uuid=${uuid}&mac=${net0/mac:hexhyp}",
       "coreos.autologin": "",
diff --git a/examples/profiles/kubernetes-master/profile.json b/examples/profiles/kubernetes-master/profile.json
index 685695db..ba3ea97f 100644
--- a/examples/profiles/kubernetes-master/profile.json
+++ b/examples/profiles/kubernetes-master/profile.json
@@ -1,8 +1,8 @@
 {
   "id": "kubernetes-master",
"kubernetes-master", "boot": { - "kernel": "/assets/coreos/962.0.0/coreos_production_pxe.vmlinuz", - "initrd": ["/assets/coreos/962.0.0/coreos_production_pxe_image.cpio.gz"], + "kernel": "/assets/coreos/983.0.0/coreos_production_pxe.vmlinuz", + "initrd": ["/assets/coreos/983.0.0/coreos_production_pxe_image.cpio.gz"], "cmdline": { "root": "/dev/sda1", "cloud-config-url": "http://bootcfg.foo:8080/cloud?uuid=${uuid}&mac=${net0/mac:hexhyp}", @@ -13,4 +13,4 @@ }, "cloud_id": "kubernetes-master.sh", "ignition_id": "network.yaml" -} \ No newline at end of file +} diff --git a/examples/profiles/kubernetes-worker/profile.json b/examples/profiles/kubernetes-worker/profile.json index fcada939..8fe8d7aa 100644 --- a/examples/profiles/kubernetes-worker/profile.json +++ b/examples/profiles/kubernetes-worker/profile.json @@ -1,8 +1,8 @@ { "id": "kubernetes-worker", "boot": { - "kernel": "/assets/coreos/962.0.0/coreos_production_pxe.vmlinuz", - "initrd": ["/assets/coreos/962.0.0/coreos_production_pxe_image.cpio.gz"], + "kernel": "/assets/coreos/983.0.0/coreos_production_pxe.vmlinuz", + "initrd": ["/assets/coreos/983.0.0/coreos_production_pxe_image.cpio.gz"], "cmdline": { "root": "/dev/sda1", "cloud-config-url": "http://bootcfg.foo:8080/cloud?uuid=${uuid}&mac=${net0/mac:hexhyp}", @@ -13,4 +13,4 @@ }, "cloud_id": "kubernetes-worker.sh", "ignition_id": "network.yaml" -} \ No newline at end of file +} diff --git a/scripts/get-coreos b/scripts/get-coreos index af6b9f63..6e4767e1 100755 --- a/scripts/get-coreos +++ b/scripts/get-coreos @@ -1,9 +1,9 @@ #!/bin/bash -e # USAGE: ./scripts/get-coreos -# USAGE: ./scripts/get-coreos alpha 942.0.0 +# USAGE: ./scripts/get-coreos channel version -CHANNEL=${1:-"beta"} -VERSION=${2:-"899.6.0"} +CHANNEL=${1:-"alpha"} +VERSION=${2:-"983.0.0"} DEST=${PWD}/assets/coreos/$VERSION BASE_URL=http://$CHANNEL.release.core-os.net/amd64-usr/$VERSION diff --git a/scripts/tls/gen-bm-k8s-secrets b/scripts/tls/gen-bm-k8s-secrets new file mode 100755 index 00000000..b5f5c9ed --- /dev/null +++ b/scripts/tls/gen-bm-k8s-secrets @@ -0,0 +1,14 @@ +#!/bin/bash -e +# USAGE: ./scripts/generate-kubernetes-secrets + +DEST=${1:-"assets/tls"} + +if [ ! 
-d "$DEST" ]; then + echo "Creating directory $DEST" + mkdir -p $DEST +fi + +./scripts/tls/root-ca $DEST +./scripts/tls/kubernetes-cert $DEST admin kube-admin +./scripts/tls/kubernetes-cert $DEST apiserver kube-apiserver IP.1=10.3.0.1,IP.2=192.168.1.21 +./scripts/tls/kubernetes-cert $DEST worker kube-worker IP.1=192.168.1.22,IP.2=192.168.1.23 diff --git a/scripts/tls/gen-docker0-k8s-secrets b/scripts/tls/gen-docker0-k8s-secrets index 597aee24..5981b55a 100755 --- a/scripts/tls/gen-docker0-k8s-secrets +++ b/scripts/tls/gen-docker0-k8s-secrets @@ -11,4 +11,4 @@ fi ./scripts/tls/root-ca $DEST ./scripts/tls/kubernetes-cert $DEST admin kube-admin ./scripts/tls/kubernetes-cert $DEST apiserver kube-apiserver IP.1=10.3.0.1,IP.2=172.17.0.21 -./scripts/tls/kubernetes-cert $DEST worker kube-worker IP.1=172.17.0.22 +./scripts/tls/kubernetes-cert $DEST worker kube-worker IP.1=172.17.0.22,IP.2=172.17.0.23 diff --git a/scripts/tls/gen-rkt-k8s-secrets b/scripts/tls/gen-rkt-k8s-secrets index 27e11c53..7648ae96 100755 --- a/scripts/tls/gen-rkt-k8s-secrets +++ b/scripts/tls/gen-rkt-k8s-secrets @@ -11,4 +11,4 @@ fi ./scripts/tls/root-ca $DEST ./scripts/tls/kubernetes-cert $DEST admin kube-admin ./scripts/tls/kubernetes-cert $DEST apiserver kube-apiserver IP.1=10.3.0.1,IP.2=172.15.0.21 -./scripts/tls/kubernetes-cert $DEST worker kube-worker IP.1=172.15.0.22 +./scripts/tls/kubernetes-cert $DEST worker kube-worker IP.1=172.15.0.22,IP.2=172.15.0.23