Merge pull request #113 from coreos/multi-worker-k8s
Create k8s clusters with 3-node etcd, 2 workers, and fleet
@@ -77,6 +77,7 @@ The example profile added autologin so you can verify that etcd works between nodes.
 
     systemctl status etcd2
     etcdctl set /message hello
    etcdctl get /message
+    fleetctl list-machines
 
 Clean up the VM machines.
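
The added `fleetctl list-machines` step confirms that each node's fleet daemon registered with the metadata from its group (for example `role=etcd,name=node1`). A hypothetical narrower check, assuming fleet's standard `--fields` selection flag:

    fleetctl list-machines --fields=machine,ip,metadata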
@@ -50,7 +50,7 @@ On Fedora, add the `metal0` interface to the trusted zone in your firewall configuration.
 
     sudo firewall-cmd --add-interface=metal0 --zone=trusted
 
-## Application Container
+## Containers
 
 #### Latest
@@ -115,6 +115,7 @@ The example profile added autologin so you can verify that etcd works between nodes.
 
     systemctl status etcd2
     etcdctl set /message hello
    etcdctl get /message
+    fleetctl list-machines
 
 Press ^] three times to stop a rkt pod. Clean up the VM machines.
@@ -9,8 +9,8 @@ These examples show declarative configurations for network booting libvirt VMs
 | grub | CoreOS via GRUB2 Netboot | beta/899.6.0 | RAM | NA |
 | pxe-disk | CoreOS via iPXE, with a root filesystem | alpha/962.0.0 | Disk | [reference](https://coreos.com/os/docs/latest/booting-with-ipxe.html) |
 | coreos-install | 2-stage Ignition: Install CoreOS, provision etcd cluster | alpha/962.0.0 | Disk | [reference](https://coreos.com/os/docs/latest/installing-to-disk.html) |
-| etcd-rkt, etcd-docker | Cluster with 3 etcd nodes, 2 proxies | beta/899.6.0 | RAM | [reference](https://coreos.com/os/docs/latest/cluster-architectures.html) |
-| k8s-rkt, k8s-docker | Kubernetes cluster with 1 master, 1 worker, 1 dedicated etcd node, TLS-authentication | alpha/962.0.0 | Disk | [reference](https://github.com/coreos/coreos-kubernetes) |
+| etcd-rkt, etcd-docker | Cluster with 3 etcd nodes, 2 proxies | alpha/983.0.0 | RAM | [reference](https://coreos.com/os/docs/latest/cluster-architectures.html) |
+| k8s-rkt, k8s-docker | Kubernetes cluster with 1 master and 2 workers, TLS-authentication | alpha/983.0.0 | Disk | [reference](https://github.com/coreos/coreos-kubernetes) |
 
 ## Experimental
@@ -38,17 +38,19 @@ Most example profiles configure machines with a `core` user and `ssh_authorized_keys`.
 ## Kubernetes
 
-The Kubernetes cluster examples create a TLS-authenticated Kubernetes cluster with 1 master node, 1 worker node, and 1 etcd node, running without a disk.
+The Kubernetes examples create Kubernetes clusters with CoreOS hosts and TLS authentication.
 
-You'll need to download the CoreOS Beta image, which ships with the kubelet, and generate TLS assets.
-
-### Assets
+### TLS Assets
+
+Download the CoreOS PXE image assets to `assets/coreos`. These images are served to network boot machines by `bootcfg`.
+
+    ./scripts/get-coreos alpha 991.0.0
 
 **Note**: TLS assets are served to any machines which request them. This is unsuitable for production where machines and networks are untrusted. Read about our longer term security plans at [Distributed Trusted Computing](https://coreos.com/blog/coreos-trusted-computing.html).
 
-Generate a root CA and Kubernetes TLS assets for each component (`admin`, `apiserver`, `worker`).
+Generate a root CA and Kubernetes TLS assets for components (`admin`, `apiserver`, `worker`).
 
     cd coreos-baremetal
     rm -rf assets/tls
     # for Kubernetes on CNI metal0, i.e. rkt
     ./scripts/tls/gen-rkt-k8s-secrets
     # for Kubernetes on docker0
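
To sanity-check the generated TLS assets, a hypothetical `openssl` spot check can be run against the output directory (the `ca.pem` and `apiserver.pem` file names are assumed conventions, not shown in this diff):

    openssl x509 -in assets/tls/ca.pem -noout -subject
    openssl x509 -in assets/tls/apiserver.pem -noout -text | grep -A1 'Subject Alternative Name'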
@@ -80,5 +82,9 @@ Get all pods.
 
     kubectl --kubeconfig=examples/kubecfg-rkt get pods --all-namespaces
 
-On my laptop, it takes about 1 minute from boot until the Kubernetes API comes up. Then it takes another 1-2 minutes for all components including DNS to be pulled and started.
+On my laptop, VMs download and network boot CoreOS in the first 45 seconds, the Kubernetes API becomes available after about 150 seconds, and add-on pods are scheduled by 180 seconds. On physical hosts and networks, OS and container image download times are a bit longer.
+
+## Tectonic
+
+Now sign up for [Tectonic Starter](https://tectonic.com/starter/) and deploy the [Tectonic Console](https://tectonic.com/enterprise/docs/latest/deployer/tectonic_console.html) with a few `kubectl` commands!
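
Before listing pods, it can help to confirm that the master and both workers registered with the API server, using the same example kubeconfig:

    kubectl --kubeconfig=examples/kubecfg-rkt get nodes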
@@ -28,7 +28,7 @@ export K8S_SERVICE_IP={{.k8s_service_ip}}
 export DNS_SERVICE_IP={{.k8s_dns_service_ip}}
 
 # ADVERTISE_IP is the host node's IP.
-export ADVERTISE_IP={{.k8s_advertise_ip}}
+export ADVERTISE_IP={{.ipv4_address}}
 
 # TLS Certificate assets are hosted by the Config Server
 export CERT_ENDPOINT={{.k8s_cert_endpoint}}
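
With this change the advertise address is derived from the group's `ipv4_address` metadata rather than a separate `k8s_advertise_ip` value. For a master whose group sets `ipv4_address: 172.17.0.21`, the rendered line becomes:

    export ADVERTISE_IP=172.17.0.21

The worker cloud-config in the next hunk receives the same substitution.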
@@ -17,7 +17,7 @@ export K8S_VER=v1.1.8_coreos.0
 export DNS_SERVICE_IP={{.k8s_dns_service_ip}}
 
 # ADVERTISE_IP is the host node's IP.
-export ADVERTISE_IP={{.k8s_advertise_ip}}
+export ADVERTISE_IP={{.ipv4_address}}
 
 # TLS Certificate assets are hosted by the Config Server
 export CERT_ENDPOINT={{.k8s_cert_endpoint}}
@@ -6,11 +6,11 @@ groups:
     require:
       uuid: 16e7d8a7-bfa9-428b-9117-363341bb330b
     metadata:
-      ipv4_address: 172.17.0.21
       networkd_name: ens3
       networkd_gateway: 172.17.0.1
       networkd_dns: 172.17.0.3
       networkd_address: 172.17.0.21/16
+      ipv4_address: 172.17.0.21
+      fleet_metadata: "role=etcd,name=node1"
       etcd_name: node1
       etcd_initial_cluster: "node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380"
@@ -20,11 +20,11 @@ groups:
     require:
       uuid: 264cd073-ca62-44b3-98c0-50aad5b5f819
     metadata:
-      ipv4_address: 172.17.0.22
       networkd_name: ens3
       networkd_gateway: 172.17.0.1
       networkd_dns: 172.17.0.3
       networkd_address: 172.17.0.22/16
+      ipv4_address: 172.17.0.22
+      fleet_metadata: "role=etcd,name=node2"
       etcd_name: node2
       etcd_initial_cluster: "node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380"
@@ -34,11 +34,11 @@ groups:
     require:
       uuid: 39d2e747-2648-4d68-ae92-bbc70b245055
     metadata:
-      ipv4_address: 172.17.0.23
       networkd_name: ens3
       networkd_gateway: 172.17.0.1
       networkd_dns: 172.17.0.3
       networkd_address: 172.17.0.23/16
+      ipv4_address: 172.17.0.23
+      fleet_metadata: "role=etcd,name=node3"
       etcd_name: node3
       etcd_initial_cluster: "node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380"
@@ -6,11 +6,11 @@ groups:
     require:
       uuid: 16e7d8a7-bfa9-428b-9117-363341bb330b
     metadata:
-      ipv4_address: 172.15.0.21
       networkd_name: ens3
       networkd_gateway: 172.15.0.1
       networkd_dns: 172.15.0.3
       networkd_address: 172.15.0.21/16
+      ipv4_address: 172.15.0.21
+      fleet_metadata: "role=etcd,name=node1"
       etcd_name: node1
       etcd_initial_cluster: "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
@@ -20,11 +20,11 @@ groups:
     require:
       uuid: 264cd073-ca62-44b3-98c0-50aad5b5f819
     metadata:
-      ipv4_address: 172.15.0.22
       networkd_name: ens3
       networkd_gateway: 172.15.0.1
       networkd_dns: 172.15.0.3
       networkd_address: 172.15.0.22/16
+      ipv4_address: 172.15.0.22
+      fleet_metadata: "role=etcd,name=node2"
       etcd_name: node2
       etcd_initial_cluster: "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
@@ -34,11 +34,11 @@ groups:
     require:
       uuid: 39d2e747-2648-4d68-ae92-bbc70b245055
     metadata:
-      ipv4_address: 172.15.0.23
       networkd_name: ens3
       networkd_gateway: 172.15.0.1
       networkd_dns: 172.15.0.3
       networkd_address: 172.15.0.23/16
+      ipv4_address: 172.15.0.23
+      fleet_metadata: "role=etcd,name=node3"
       etcd_name: node3
       etcd_initial_cluster: "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
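
With the three-member `etcd_initial_cluster` above, cluster formation can be verified from any node using standard etcd2 client commands (not part of this diff):

    etcdctl cluster-health
    etcdctl member list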
@@ -14,6 +14,30 @@ systemd:
         ExecStart=/usr/bin/bash -c 'curl --url "http://bootcfg.foo:8080/metadata?{{.query}}" --retry 10 --output ${OUTPUT}'
         [Install]
         WantedBy=multi-user.target
+    - name: fleet.service
+      enable: true
+      dropins:
+        - name: fleet-metadata.conf
+          contents: |
+            [Service]
+            Environment="FLEET_METADATA={{.fleet_metadata}}"
+    - name: etcd2.service
+      enable: true
+      dropins:
+        - name: etcd-metadata.conf
+          contents: |
+            [Unit]
+            Requires=metadata.service
+            After=metadata.service
+            [Service]
+            # ETCD_NAME, ETCD_INITIAL_CLUSTER
+            EnvironmentFile=/run/metadata/bootcfg
+            ExecStart=
+            ExecStart=/usr/bin/etcd2 \
+              --advertise-client-urls=http://${IPV4_ADDRESS}:2379 \
+              --initial-advertise-peer-urls=http://${IPV4_ADDRESS}:2380 \
+              --listen-client-urls=http://0.0.0.0:2379 \
+              --listen-peer-urls=http://${IPV4_ADDRESS}:2380
 storage:
   disks:
     - device: /dev/sda
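
The `etcd2.service` drop-in takes `ETCD_NAME`, `ETCD_INITIAL_CLUSTER`, and `IPV4_ADDRESS` from the environment file that `metadata.service` fetches from `bootcfg`. For a machine matched to the node1 group in the surrounding hunks, `/run/metadata/bootcfg` would contain roughly the following (exact key formatting is assumed from the variable names in the unit):

    IPV4_ADDRESS=172.17.0.21
    ETCD_NAME=node1
    ETCD_INITIAL_CLUSTER=node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380
    FLEET_METADATA=role=etcd,name=node1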
@@ -6,43 +6,54 @@ groups:
     require:
       uuid: 16e7d8a7-bfa9-428b-9117-363341bb330b
     metadata:
+      ipv4_address: 172.17.0.21
       networkd_name: ens3
       networkd_gateway: 172.17.0.1
       networkd_dns: 172.17.0.3
       networkd_address: 172.17.0.21/16
-      k8s_etcd_endpoints: http://172.17.0.23:2379
+      k8s_etcd_endpoints: "http://172.17.0.21:2379,http://172.17.0.22:2379,http://172.17.0.23:2379"
       k8s_pod_network: 10.2.0.0/16
       k8s_service_ip_range: 10.3.0.0/24
       k8s_service_ip: 10.3.0.1
       k8s_dns_service_ip: 10.3.0.10
-      k8s_advertise_ip: 172.17.0.21
       k8s_cert_endpoint: http://bootcfg.foo:8080/assets
+      fleet_metadata: "role=etcd,name=node1"
+      etcd_name: node1
+      etcd_initial_cluster: "node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380"
 
-  - name: Worker Node
+  - name: Worker 1
     profile: kubernetes-worker
     require:
       uuid: 264cd073-ca62-44b3-98c0-50aad5b5f819
     metadata:
+      ipv4_address: 172.17.0.22
       networkd_name: ens3
       networkd_gateway: 172.17.0.1
       networkd_dns: 172.17.0.3
       networkd_address: 172.17.0.22/16
-      k8s_etcd_endpoints: http://172.17.0.23:2379
+      k8s_etcd_endpoints: "http://172.17.0.21:2379,http://172.17.0.22:2379,http://172.17.0.23:2379"
       k8s_controller_endpoint: https://172.17.0.21
       k8s_dns_service_ip: 10.3.0.1
-      k8s_advertise_ip: 172.17.0.22
       k8s_cert_endpoint: http://bootcfg.foo:8080/assets
+      fleet_metadata: "role=etcd,name=node2"
+      etcd_name: node2
+      etcd_initial_cluster: "node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380"
 
-  - name: etcd Node
-    profile: etcd
+  - name: Worker 2
+    profile: kubernetes-worker
     require:
       uuid: 39d2e747-2648-4d68-ae92-bbc70b245055
     metadata:
-      ipv4_address: 172.17.0.23
       networkd_name: ens3
       networkd_gateway: 172.17.0.1
       networkd_dns: 172.17.0.3
       networkd_address: 172.17.0.23/16
-      etcd_name: solo
-      etcd_initial_cluster: "solo=http://172.17.0.23:2380"
+      ipv4_address: 172.17.0.23
+      k8s_etcd_endpoints: "http://172.17.0.21:2379,http://172.17.0.22:2379,http://172.17.0.23:2379"
+      k8s_controller_endpoint: https://172.17.0.21
+      k8s_dns_service_ip: 10.3.0.1
+      k8s_cert_endpoint: http://bootcfg.foo:8080/assets
+      fleet_metadata: "role=etcd,name=node3"
+      etcd_name: node3
+      etcd_initial_cluster: "node1=http://172.17.0.21:2380,node2=http://172.17.0.22:2380,node3=http://172.17.0.23:2380"
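
Every node now runs an etcd member and points Kubernetes at the full three-member cluster via `k8s_etcd_endpoints`, instead of at a single dedicated etcd node. A hypothetical liveness check against one member's v2 health endpoint:

    curl http://172.17.0.21:2379/health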
@@ -6,44 +6,56 @@ groups:
     require:
       uuid: 16e7d8a7-bfa9-428b-9117-363341bb330b
     metadata:
+      ipv4_address: 172.15.0.21
       networkd_name: ens3
       networkd_gateway: 172.15.0.1
       networkd_dns: 172.15.0.3
       networkd_address: 172.15.0.21/16
-      k8s_etcd_endpoints: http://172.15.0.23:2379
+      k8s_etcd_endpoints: "http://172.15.0.21:2379,http://172.15.0.22:2379,http://172.15.0.23:2379"
       k8s_pod_network: 10.2.0.0/16
       k8s_service_ip_range: 10.3.0.0/24
       k8s_service_ip: 10.3.0.1
       k8s_dns_service_ip: 10.3.0.10
-      k8s_advertise_ip: 172.15.0.21
       k8s_cert_endpoint: http://bootcfg.foo:8080/assets
+      fleet_metadata: "role=etcd,name=node1"
+      etcd_name: node1
+      etcd_initial_cluster: "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
       ssh_authorized_keys:
 
-  - name: Worker Node
+  - name: Worker 1
     profile: kubernetes-worker
     require:
       uuid: 264cd073-ca62-44b3-98c0-50aad5b5f819
     metadata:
+      ipv4_address: 172.15.0.22
       networkd_name: ens3
       networkd_gateway: 172.15.0.1
       networkd_dns: 172.15.0.3
       networkd_address: 172.15.0.22/16
-      k8s_etcd_endpoints: http://172.15.0.23:2379
+      k8s_etcd_endpoints: "http://172.15.0.21:2379,http://172.15.0.22:2379,http://172.15.0.23:2379"
       k8s_controller_endpoint: https://172.15.0.21
       k8s_dns_service_ip: 10.3.0.1
-      k8s_advertise_ip: 172.15.0.22
       k8s_cert_endpoint: http://bootcfg.foo:8080/assets
+      fleet_metadata: "role=etcd,name=node2"
+      etcd_name: node2
+      etcd_initial_cluster: "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
       ssh_authorized_keys:
 
-  - name: etcd Node
-    profile: etcd
+  - name: Worker 2
+    profile: kubernetes-worker
     require:
       uuid: 39d2e747-2648-4d68-ae92-bbc70b245055
     metadata:
-      ipv4_address: 172.15.0.23
       networkd_name: ens3
       networkd_gateway: 172.15.0.1
       networkd_dns: 172.15.0.3
       networkd_address: 172.15.0.23/16
-      etcd_name: solo
-      etcd_initial_cluster: "solo=http://172.15.0.23:2380"
+      ipv4_address: 172.15.0.23
+      k8s_etcd_endpoints: "http://172.15.0.21:2379,http://172.15.0.22:2379,http://172.15.0.23:2379"
+      k8s_controller_endpoint: https://172.15.0.21
+      k8s_dns_service_ip: 10.3.0.1
+      k8s_cert_endpoint: http://bootcfg.foo:8080/assets
+      fleet_metadata: "role=etcd,name=node3"
+      etcd_name: node3
+      etcd_initial_cluster: "node1=http://172.15.0.21:2380,node2=http://172.15.0.22:2380,node3=http://172.15.0.23:2380"
       ssh_authorized_keys:
@@ -1,8 +1,8 @@
 {
   "id": "etcd",
   "boot": {
-    "kernel": "/assets/coreos/899.6.0/coreos_production_pxe.vmlinuz",
-    "initrd": ["/assets/coreos/899.6.0/coreos_production_pxe_image.cpio.gz"],
+    "kernel": "/assets/coreos/983.0.0/coreos_production_pxe.vmlinuz",
+    "initrd": ["/assets/coreos/983.0.0/coreos_production_pxe_image.cpio.gz"],
     "cmdline": {
       "coreos.config.url": "http://bootcfg.foo:8080/ignition?uuid=${uuid}&mac=${net0/mac:hexhyp}",
       "coreos.autologin": "",
@@ -1,8 +1,8 @@
 {
   "id": "etcd_proxy",
   "boot": {
-    "kernel": "/assets/coreos/899.6.0/coreos_production_pxe.vmlinuz",
-    "initrd": ["/assets/coreos/899.6.0/coreos_production_pxe_image.cpio.gz"],
+    "kernel": "/assets/coreos/983.0.0/coreos_production_pxe.vmlinuz",
+    "initrd": ["/assets/coreos/983.0.0/coreos_production_pxe_image.cpio.gz"],
     "cmdline": {
       "coreos.config.url": "http://bootcfg.foo:8080/ignition?uuid=${uuid}&mac=${net0/mac:hexhyp}",
       "coreos.autologin": "",
@@ -1,8 +1,8 @@
 {
   "id": "kubernetes-master",
   "boot": {
-    "kernel": "/assets/coreos/962.0.0/coreos_production_pxe.vmlinuz",
-    "initrd": ["/assets/coreos/962.0.0/coreos_production_pxe_image.cpio.gz"],
+    "kernel": "/assets/coreos/983.0.0/coreos_production_pxe.vmlinuz",
+    "initrd": ["/assets/coreos/983.0.0/coreos_production_pxe_image.cpio.gz"],
     "cmdline": {
       "root": "/dev/sda1",
       "cloud-config-url": "http://bootcfg.foo:8080/cloud?uuid=${uuid}&mac=${net0/mac:hexhyp}",
@@ -13,4 +13,4 @@
   },
   "cloud_id": "kubernetes-master.sh",
   "ignition_id": "network.yaml"
-}
\ No newline at end of file
+}
@@ -1,8 +1,8 @@
 {
   "id": "kubernetes-worker",
   "boot": {
-    "kernel": "/assets/coreos/962.0.0/coreos_production_pxe.vmlinuz",
-    "initrd": ["/assets/coreos/962.0.0/coreos_production_pxe_image.cpio.gz"],
+    "kernel": "/assets/coreos/983.0.0/coreos_production_pxe.vmlinuz",
+    "initrd": ["/assets/coreos/983.0.0/coreos_production_pxe_image.cpio.gz"],
     "cmdline": {
       "root": "/dev/sda1",
       "cloud-config-url": "http://bootcfg.foo:8080/cloud?uuid=${uuid}&mac=${net0/mac:hexhyp}",
@@ -13,4 +13,4 @@
   },
   "cloud_id": "kubernetes-worker.sh",
   "ignition_id": "network.yaml"
-}
\ No newline at end of file
+}
@@ -1,9 +1,9 @@
 #!/bin/bash -e
 # USAGE: ./scripts/get-coreos
-# USAGE: ./scripts/get-coreos alpha 942.0.0
+# USAGE: ./scripts/get-coreos channel version
 
-CHANNEL=${1:-"beta"}
-VERSION=${2:-"899.6.0"}
+CHANNEL=${1:-"alpha"}
+VERSION=${2:-"983.0.0"}
 DEST=${PWD}/assets/coreos/$VERSION
 BASE_URL=http://$CHANNEL.release.core-os.net/amd64-usr/$VERSION
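
With the new defaults, running the script with no arguments now fetches the alpha 983.0.0 image referenced by the updated profiles:

    ./scripts/get-coreos
    # equivalent to
    ./scripts/get-coreos alpha 983.0.0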
scripts/tls/gen-bm-k8s-secrets (new executable file, 14 lines)
@@ -0,0 +1,14 @@
+#!/bin/bash -e
+# USAGE: ./scripts/generate-kubernetes-secrets
+
+DEST=${1:-"assets/tls"}
+
+if [ ! -d "$DEST" ]; then
+  echo "Creating directory $DEST"
+  mkdir -p $DEST
+fi
+
+./scripts/tls/root-ca $DEST
+./scripts/tls/kubernetes-cert $DEST admin kube-admin
+./scripts/tls/kubernetes-cert $DEST apiserver kube-apiserver IP.1=10.3.0.1,IP.2=192.168.1.21
+./scripts/tls/kubernetes-cert $DEST worker kube-worker IP.1=192.168.1.22,IP.2=192.168.1.23
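
Like the existing per-network scripts, the new bare-metal variant defaults to `assets/tls` and accepts an alternate destination as its first argument:

    ./scripts/tls/gen-bm-k8s-secrets
    ./scripts/tls/gen-bm-k8s-secrets /some/other/dir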
@@ -11,4 +11,4 @@ fi
 ./scripts/tls/root-ca $DEST
 ./scripts/tls/kubernetes-cert $DEST admin kube-admin
 ./scripts/tls/kubernetes-cert $DEST apiserver kube-apiserver IP.1=10.3.0.1,IP.2=172.17.0.21
-./scripts/tls/kubernetes-cert $DEST worker kube-worker IP.1=172.17.0.22
+./scripts/tls/kubernetes-cert $DEST worker kube-worker IP.1=172.17.0.22,IP.2=172.17.0.23
@@ -11,4 +11,4 @@ fi
 ./scripts/tls/root-ca $DEST
 ./scripts/tls/kubernetes-cert $DEST admin kube-admin
 ./scripts/tls/kubernetes-cert $DEST apiserver kube-apiserver IP.1=10.3.0.1,IP.2=172.15.0.21
-./scripts/tls/kubernetes-cert $DEST worker kube-worker IP.1=172.15.0.22
+./scripts/tls/kubernetes-cert $DEST worker kube-worker IP.1=172.15.0.22,IP.2=172.15.0.23