scripts: Remove unused static k8s generation scripts

* Remove static rktnetes cluster docs
* Bump devnet matchbox version
Dalton Hubble
2017-05-22 18:02:31 -07:00
parent 3f70f9f2e5
commit 02f7fb7f7c
7 changed files with 2 additions and 289 deletions


@@ -1,87 +0,0 @@
# Kubernetes (with rkt)
The `rktnetes` example provisions a 3-node Kubernetes v1.5.5 cluster with [rkt](https://github.com/coreos/rkt) as the container runtime. The cluster has one controller, two workers, and TLS authentication. An etcd cluster backs Kubernetes and coordinates CoreOS auto-updates (enabled for disk installs).
## Requirements
Ensure that you've gone through the [matchbox with rkt](getting-started-rkt.md) or [matchbox with Docker](getting-started-docker.md) guide and understand the basics. In particular, you should be able to:
* Use rkt or Docker to start `matchbox`
* Create a network boot environment with `coreos/dnsmasq`
* Create the example libvirt client VMs
* Add `/etc/hosts` entries for `node[1-3].example.com` (or pass custom names to `k8s-certgen`); see the sketch below
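A sketch of suitable `/etc/hosts` entries. The `172.18.0.x` addresses are an assumption based on the static IPs the examples assign on the rkt `metal0` bridge; adjust for `docker0` or your own network.

```sh
# /etc/hosts (example entries; IPs assume the metal0 bridge assignments)
172.18.0.21 node1.example.com
172.18.0.22 node2.example.com
172.18.0.23 node3.example.com
```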
## Examples
The [examples](../examples) statically assign IP addresses to libvirt client VMs created by `scripts/libvirt`. VMs are set up on the `metal0` CNI bridge for rkt or the `docker0` bridge for Docker. The examples can be used for physical machines if you update the MAC addresses. See [network setup](network-setup.md) and [deployment](deployment.md).
* [rktnetes](../examples/groups/rktnetes) - iPXE boot a Kubernetes cluster
* [rktnetes-install](../examples/groups/rktnetes-install) - Install a Kubernetes cluster to disk
* [Lab examples](https://github.com/dghubble/metal) - Lab hardware examples
## Assets
Download the CoreOS image assets referenced in the target [profile](../examples/profiles).
```sh
$ ./scripts/get-coreos stable 1298.7.0 ./examples/assets
```
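If the download succeeds, the PXE kernel, initrd, and install image (plus signatures) land in a versioned directory. A sketch of the expected layout; the exact file set may vary by channel and version:

```sh
$ ls examples/assets/coreos/1298.7.0
coreos_production_image.bin.bz2      coreos_production_pxe_image.cpio.gz
coreos_production_image.bin.bz2.sig  coreos_production_pxe_image.cpio.gz.sig
coreos_production_pxe.vmlinuz        coreos_production_pxe.vmlinuz.sig
```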
Optionally, add your SSH public key to each machine group definition [as shown](../examples/README.md#ssh-keys).
Generate a root CA and Kubernetes TLS assets for components (`admin`, `apiserver`, `worker`) with SANs for `node1.example.com`, etc.
```sh
$ rm -rf examples/assets/tls
$ ./scripts/tls/k8s-certgen
```
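The defaults match the example cluster; to target other machines, pass `k8s-certgen`'s `-d`, `-s`, `-m`, and `-w` flags explicitly (shown here with the default values for illustration; see the script's usage in this commit):

```sh
$ ./scripts/tls/k8s-certgen -d examples/assets/tls \
    -s node1.example.com \
    -m IP.1=10.3.0.1,DNS.1=node1.example.com \
    -w DNS.1=node2.example.com,DNS.2=node3.example.com
```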
**Note**: TLS assets are served to any machine that requests them, which requires a trusted network. Alternatively, provisioning may be tweaked to require that TLS assets be securely copied to each host.
## Containers
Use rkt or Docker to start `matchbox` and mount the desired example resources, as in the sketch below. Create a network boot environment and power on your machines. Revisit [matchbox with rkt](getting-started-rkt.md) or [matchbox with Docker](getting-started-docker.md) for help.
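For instance, with Docker, a minimal sketch of starting `matchbox` with the rktnetes example mounted. The flags and image tag here are assumptions; see the getting started guide for the exact invocation:

```sh
$ sudo docker run -p 8080:8080 --rm \
    -v $PWD/examples:/var/lib/matchbox:Z \
    -v $PWD/examples/groups/rktnetes:/var/lib/matchbox/groups:Z \
    quay.io/coreos/matchbox:v0.6.0 -address=0.0.0.0:8080 -log-level=debug
```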
Client machines should boot and provision themselves. Local client VMs should network boot CoreOS in about one minute, and the Kubernetes API should be available after 3-4 minutes (each node downloads a ~160MB Hyperkube image). If you chose `rktnetes-install`, notice that machines install CoreOS and then reboot (in libvirt, you must hit "power" again). The time to network boot and provision Kubernetes clusters on physical hardware depends on a number of factors (POST duration, boot device iteration, network speed, etc.).
## Verify
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your laptop. Use the generated kubeconfig to access the Kubernetes cluster created on the rkt `metal0` or Docker `docker0` bridge.
```sh
$ export KUBECONFIG=examples/assets/tls/kubeconfig
$ kubectl get nodes
NAME STATUS AGE
node1.example.com Ready 3m
node2.example.com Ready 3m
node3.example.com Ready 3m
```
Get all pods.
```sh
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-v1.2.0-4088228293-k3yn8 2/2 Running 0 3m
kube-system kube-apiserver-node1.example.com 1/1 Running 0 4m
kube-system kube-controller-manager-node1.example.com 1/1 Running 0 3m
kube-system kube-dns-v19-l2u8r 3/3 Running 0 4m
kube-system kube-proxy-node1.example.com 1/1 Running 0 3m
kube-system kube-proxy-node2.example.com 1/1 Running 0 3m
kube-system kube-proxy-node3.example.com 1/1 Running 0 3m
kube-system kube-scheduler-node1.example.com 1/1 Running 0 3m
kube-system kubernetes-dashboard-v1.4.1-0iy07 1/1 Running 0 4m
```
## Kubernetes Dashboard
Access the Kubernetes Dashboard with `kubeconfig` credentials by port forwarding to the dashboard pod.
```sh
$ kubectl port-forward kubernetes-dashboard-v1.4.1-SOME-ID 9090 -n=kube-system
Forwarding from 127.0.0.1:9090 -> 9090
```
Then visit [http://127.0.0.1:9090](http://127.0.0.1:9090/).
<img src='img/kubernetes-dashboard.png' class="img-center" alt="Kubernetes Dashboard"/>


@@ -91,7 +91,7 @@ function create {
   --volume config,kind=host,source=$CONFIG_DIR,readOnly=true \
   --mount volume=data,target=/var/lib/matchbox \
   $DATA_MOUNT \
-  quay.io/coreos/matchbox:v0.6.0 -- -address=0.0.0.0:8080 -log-level=debug $MATCHBOX_ARGS
+  quay.io/coreos/matchbox:ed6dde528a0146fe55551a317cc55849cec6ec80 -- -address=0.0.0.0:8080 -log-level=debug $MATCHBOX_ARGS
   echo "Starting dnsmasq to provide DHCP/TFTP/DNS services"
   rkt rm --uuid-file=/var/run/dnsmasq-pod.uuid > /dev/null 2>&1
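For local development, the `devnet` script wraps these rkt commands. A sketch of typical usage; the `create` subcommand appears in the hunk above, while `destroy` and the example argument are assumptions:

```sh
$ sudo ./scripts/devnet create rktnetes   # start the matchbox and dnsmasq pods
$ sudo ./scripts/devnet destroy           # tear both pods down
```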


@@ -4,7 +4,7 @@
 set -eu
 DEST=${1:-"bin"}
-VERSION="v1.5.5"
+VERSION="v1.6.4"
 URL="https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kubectl"


@@ -1,42 +0,0 @@
#!/bin/bash -e

USAGE="Usage: $(basename $0)
Options:
  -d DEST     Destination for generated files (default: ./examples/assets/tls)
  -s SERVER   Reachable Server IP for kubeconfig (e.g. node1.example.com)
  -m MASTERS  Controller Node Names/Addresses in SAN format (e.g. IP.1=10.3.0.1,DNS.1=node1.example.com)
  -w WORKERS  Worker Node Names/Addresses in SAN format (e.g. DNS.1=node2.example.com,DNS.2=node3.example.com)
  -h          Show help
"

DEST="./examples/assets/tls"
SERVER="node1.example.com"
MASTERS="IP.1=10.3.0.1,DNS.1=node1.example.com"
WORKERS="DNS.1=node2.example.com,DNS.2=node3.example.com"

while getopts "d:s:m:w:h" opt; do
  case $opt in
    d) DEST="$OPTARG" ;;
    s) SERVER="$OPTARG" ;;
    m) MASTERS="$OPTARG" ;;
    w) WORKERS="$OPTARG" ;;
    h) echo "$USAGE"; exit;;
    *) exit 1;;
  esac
done

if [ ! -d "$DEST" ]; then
  echo "Creating directory $DEST"
  mkdir -p "$DEST"
fi

# create root CA
./scripts/tls/root-ca $DEST

# create Kubernetes master and worker certificates
./scripts/tls/kubernetes-cert $DEST admin kube-admin
./scripts/tls/kubernetes-cert $DEST apiserver kube-apiserver $MASTERS
./scripts/tls/kubernetes-cert $DEST worker kube-worker $WORKERS

# create a kubeconfig
./scripts/tls/kube-conf $DEST $SERVER
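A successful run leaves the CA, the per-component key pairs, and a kubeconfig under `DEST`, alongside the intermediate CSRs, request configs, and tar bundles produced by the helper scripts. A rough sketch of the expected listing, inferred from the `root-ca`, `kubernetes-cert`, and `kube-conf` steps:

```sh
$ ls examples/assets/tls
admin-key.pem  apiserver-key.pem  ca-key.pem  kube-admin.tar  worker-key.pem
admin.pem      apiserver.pem      ca.pem      kubeconfig      worker.pem
# ...plus *.csr, *-req.cnf, and the remaining *.tar bundles
```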


@@ -1,52 +0,0 @@
#!/bin/bash -e

function usage {
  echo "USAGE: $0 DEST MASTER_IP"
  echo "example: $0 dest/path 192.168.1.21"
}

function base64_encode {
  if [[ "$OSTYPE" == darwin* ]]; then
    base64 $1
  else
    base64 -w 0 $1
  fi
}

if [ -z "$1" ] || [ -z "$2" ]; then
  usage
  exit 1
fi

DEST="$1"
MASTER_IP="$2"

ADMIN_CERT_BASE64="$(base64_encode $DEST/admin.pem)"
ADMIN_KEY_BASE64="$(base64_encode $DEST/admin-key.pem)"
CA_CERT_BASE64="$(base64_encode $DEST/ca.pem)"

if [ -f "$DEST/kubeconfig" ]; then
  echo "$DEST/kubeconfig already exists"
  exit 1
fi

cat << EOF > $DEST/kubeconfig
apiVersion: v1
kind: Config
users:
- name: matchbox-user
  user:
    client-certificate-data: ${ADMIN_CERT_BASE64}
    client-key-data: ${ADMIN_KEY_BASE64}
clusters:
- name: matchbox-cluster
  cluster:
    certificate-authority-data: ${CA_CERT_BASE64}
    server: https://${MASTER_IP}:443
contexts:
- context:
    cluster: matchbox-cluster
    user: matchbox-user
  name: matchbox-context
current-context: matchbox-context
EOF
echo "Wrote kubeconfig to $DEST/kubeconfig"


@@ -1,74 +0,0 @@
#!/bin/bash -e

# define location of openssl binary manually since running this
# script under Vagrant fails on some systems without it
OPENSSL=/usr/bin/openssl

function usage {
  echo "USAGE: $0 <output-dir> <cert-base-name> <CN> [SAN,SAN,SAN]"
  echo "  example: $0 ./ssl/ worker kube-worker IP.1=127.0.0.1,IP.2=10.0.0.1"
}

if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ]; then
  usage
  exit 1
fi

OUTDIR="$1"
CERTBASE="$2"
CN="$3"
SANS="$4"

if [ ! -d $OUTDIR ]; then
  echo "ERROR: output directory does not exist: $OUTDIR"
  exit 1
fi

OUTFILE="$OUTDIR/$CN.tar"
if [ -f "$OUTFILE" ]; then
  exit 0
fi

CNF_TEMPLATE="
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.101 = kubernetes
DNS.102 = kubernetes.default
DNS.103 = kubernetes.default.svc
DNS.104 = kubernetes.default.svc.cluster.local
"

echo "Generating SSL artifacts in $OUTDIR"

CONFIGFILE="$OUTDIR/$CERTBASE-req.cnf"
CAFILE="$OUTDIR/ca.pem"
CAKEYFILE="$OUTDIR/ca-key.pem"
KEYFILE="$OUTDIR/$CERTBASE-key.pem"
CSRFILE="$OUTDIR/$CERTBASE.csr"
PEMFILE="$OUTDIR/$CERTBASE.pem"
CONTENTS="${CAFILE} ${KEYFILE} ${PEMFILE}"

# Add SANs to openssl config
echo "$CNF_TEMPLATE$(echo $SANS | tr ',' '\n')" > "$CONFIGFILE"

$OPENSSL genrsa -out "$KEYFILE" 2048
$OPENSSL req -new -key "$KEYFILE" -out "$CSRFILE" -subj "/CN=$CN" -config "$CONFIGFILE"
$OPENSSL x509 -req -in "$CSRFILE" -CA "$CAFILE" -CAkey "$CAKEYFILE" -CAcreateserial -out "$PEMFILE" -days 365 -extensions v3_req -extfile "$CONFIGFILE"

tar -cf $OUTFILE -C $OUTDIR $(for f in $CONTENTS; do printf "$(basename $f) "; done)

echo "Bundled SSL artifacts into $OUTFILE"
echo "$CONTENTS"


@@ -1,32 +0,0 @@
#!/bin/bash -e

# define location of openssl binary manually since running this
# script under Vagrant fails on some systems without it
OPENSSL=/usr/bin/openssl

function usage {
  echo "USAGE: $0 <output-dir>"
  echo "  example: $0 ./ssl/"
}

if [ -z "$1" ]; then
  usage
  exit 1
fi

OUTDIR="$1"
if [ ! -d $OUTDIR ]; then
  echo "ERROR: output directory does not exist: $OUTDIR"
  exit 1
fi

OUTFILE="$OUTDIR/ca.pem"
if [ -f "$OUTFILE" ]; then
  exit 0
fi

# establish cluster CA and self-sign a cert
$OPENSSL genrsa -out "$OUTDIR/ca-key.pem" 2048
$OPENSSL req -x509 -new -nodes -key "$OUTDIR/ca-key.pem" -days 10000 -out "$OUTFILE" -subj "/CN=kube-ca"
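A quick sanity check of the generated CA:

```sh
$ openssl x509 -in examples/assets/tls/ca.pem -noout -subject -dates
```

This should report `CN=kube-ca` and a validity window of 10000 days, matching the `req -x509` invocation above.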