Merge pull request #664 from coreos/docker-docs

Switch local QEMU/KVM tutorial to favor Docker
Dalton Hubble
2017-10-27 13:51:36 -07:00
committed by GitHub
6 changed files with 20 additions and 51 deletions


@@ -48,14 +48,13 @@ Run the `matchbox` and `dnsmasq` services on the `docker0` bridge. `dnsmasq` wil
The `devnet` convenience script can start these services and accepts the name of any example cluster in [examples](../examples).
```sh
-$ export CONTAINER_RUNTIME=docker
-$ sudo -E ./scripts/devnet create etcd3
+$ sudo ./scripts/devnet create etcd3
```
Inspect the logs.
```
-$ sudo -E ./scripts/devnet status
+$ sudo ./scripts/devnet status
```
Take a look at the [etcd3 groups](../examples/groups/etcd3) to get an idea of how machines are mapped to Profiles. Explore some endpoints exposed by the service, say for QEMU/KVM node1.
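Once the services are up, matchbox's HTTP endpoints can be probed directly. A minimal sketch, assuming the example hostname `matchbox.example.com` and a hypothetical MAC address for node1 (substitute the address from your own group definition):

```sh
# Hypothetical MAC address for node1; use the one in your etcd3 group definition
MAC="52:54:00:a1:9c:ae"

# iPXE boot script matchbox renders for the machine with this MAC
curl "http://matchbox.example.com:8080/ipxe?mac=${MAC}"

# Metadata matchbox selects for the same machine
curl "http://matchbox.example.com:8080/metadata?mac=${MAC}"
```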
@@ -78,13 +77,14 @@ $ sudo docker run --name dnsmasq --cap-add=NET_ADMIN -v $PWD/contrib/dnsmasq/doc
Create QEMU/KVM VMs which have known hardware attributes. The nodes will be attached to the `docker0` bridge, where Docker containers run.
```sh
-$ sudo ./scripts/libvirt create-docker
+$ sudo ./scripts/libvirt create
```
-You can connect to the serial console of any node. If you provisioned nodes with an SSH key, you can SSH after bring-up.
+You can connect to the serial console of any node (ctrl+] to exit). If you provisioned nodes with an SSH key, you can SSH after bring-up.
```sh
$ sudo virsh console node1
+$ ssh core@node1.example.com
```
You can also use `virt-manager` to watch the console.
@@ -115,7 +115,7 @@ $ etcdctl get /message
Clean up the containers and VMs.
```sh
-$ sudo -E ./scripts/devnet destroy
+$ sudo ./scripts/devnet destroy
$ sudo ./scripts/libvirt destroy
```
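For reference, the `$ etcdctl get /message` check shown in the hunk context above assumes a key was written earlier in the tutorial. A hedged sketch using the etcd v2 API, with `node1.example.com:2379` standing in for any cluster member:

```sh
# Write a key, then read it back; the endpoint hostname is illustrative
etcdctl --endpoints=http://node1.example.com:2379 set /message hello
etcdctl --endpoints=http://node1.example.com:2379 get /message
```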


@@ -129,13 +129,14 @@ $ sudo rkt gc --grace-period=0
Create QEMU/KVM VMs which have known hardware attributes. The nodes will be attached to the `metal0` bridge, where your pods run.
```sh
-$ sudo ./scripts/libvirt create
+$ sudo ./scripts/libvirt create-rkt
```
-You can connect to the serial console of any node. If you provisioned nodes with an SSH key, you can SSH after bring-up.
+You can connect to the serial console of any node (ctrl+] to exit). If you provisioned nodes with an SSH key, you can SSH after bring-up.
```sh
$ sudo virsh console node1
+$ ssh core@node1.example.com
```
You can also use `virt-manager` to watch the console.


@@ -7,8 +7,8 @@
* [Profiles](Documentation/matchbox.md#profiles)
* [Groups](Documentation/matchbox.md#groups)
* Config Templates
-* [Container Linux Config][cl-config]
-* [Cloud-Config][cloud-config]
+* [Container Linux Config][cl-config]
+* [Cloud-Config][cloud-config]
* [Configuration](Documentation/config.md)
* [HTTP API](Documentation/api.md) / [gRPC API](https://godoc.org/github.com/coreos/matchbox/matchbox/client)
* [Background: Machine Lifecycle](Documentation/machine-lifecycle.md)
@@ -17,17 +17,17 @@
### Installation
* Installation
-* Installing on [Container Linux / other distros](Documentation/deployment.md)
-* Installing on [Kubernetes](Documentation/deployment.md#kubernetes)
-* Running with [rkt](Documentation/deployment.md#rkt) / [docker](Documentation/deployment.md#docker)
+* Installing on [Container Linux / other distros](Documentation/deployment.md)
+* Installing on [Kubernetes](Documentation/deployment.md#kubernetes)
+* Running with [rkt](Documentation/deployment.md#rkt) / [docker](Documentation/deployment.md#docker)
* [Network Setup](Documentation/network-setup.md)
### Tutorials
-* [Getting Started](Documentation/getting-started.md) - provision Container Linux machines (beginner)
+* [Getting Started](Documentation/getting-started.md) - provision physical machines with Container Linux
* Local QEMU/KVM
-* [matchbox with rkt](Documentation/getting-started-rkt.md)
-* [matchbox with Docker](Documentation/getting-started-docker.md)
+* [matchbox with Docker](Documentation/getting-started-docker.md)
+* [matchbox with rkt](Documentation/getting-started-rkt.md)
* Clusters
* [etcd3](Documentation/getting-started-rkt.md) - Install a 3-node etcd3 cluster
* [Kubernetes](Documentation/bootkube.md) - Install a 3-node Kubernetes v1.8.1 cluster


@@ -1,32 +0,0 @@
-#!/usr/bin/env bash
-set -e
-DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
-GIT_SHA=$($DIR/git-version)
-# Start with an empty ACI
-acbuild --debug begin
-# In the event of the script exiting, end the build
-trap "{ export EXT=$?; acbuild --debug end && exit $EXT; }" EXIT
-# Name the ACI
-acbuild --debug set-name coreos.com/matchbox
-# Add a version label
-acbuild --debug label add version $GIT_SHA
-# Add alpine base dependency
-acbuild --debug dep add quay.io/coreos/alpine-sh
-# Copy the static binary
-acbuild --debug copy bin/matchbox /matchbox
-# Add a port for HTTP traffic
-acbuild --debug port add www tcp 8080
-# Set the exec command
-acbuild --debug set-exec -- /matchbox
-# Save and overwrite any older matchbox ACI
-acbuild --debug write --overwrite matchbox.aci


@@ -9,7 +9,7 @@ DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
EXAMPLE=${2:-}
# Local Container Runtime (docker or rkt)
-CONTAINER_RUNTIME="${CONTAINER_RUNTIME:-rkt}"
+CONTAINER_RUNTIME="${CONTAINER_RUNTIME:-docker}"
BRIDGE=metal0
ASSETS_DIR="${ASSETS_DIR:-$PWD/examples/assets}"
CONFIG_DIR="${CONFIG_DIR:-$PWD/examples/etc/matchbox}"
@@ -117,7 +117,7 @@ function rkt_create {
--volume config,kind=host,source=$CONFIG_DIR,readOnly=true \
--mount volume=data,target=/var/lib/matchbox \
$DATA_MOUNT \
-quay.io/coreos/matchbox:23f23c1dcb78b123754ffb4e64f21cd8269093ce -- -address=0.0.0.0:8080 -log-level=debug $MATCHBOX_ARGS
+quay.io/coreos/matchbox:v0.6.1 -- -address=0.0.0.0:8080 -log-level=debug $MATCHBOX_ARGS
echo "Starting dnsmasq to provide DHCP/TFTP/DNS services"
rkt rm --uuid-file=/var/run/dnsmasq-pod.uuid > /dev/null 2>&1
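Since the script's default runtime flips to docker here, rkt users now opt in by overriding the variable, mirroring the `export` plus `sudo -E` pattern the docs previously used:

```sh
# Opt back into rkt for one run; docker is now the default runtime
export CONTAINER_RUNTIME=rkt
sudo -E ./scripts/devnet create etcd3   # -E preserves CONTAINER_RUNTIME across sudo
```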


@@ -11,7 +11,7 @@ fi
function main {
case "$1" in
-"create") create_rkt;;
+"create") create_docker;;
-"create-docker") create_docker;;
+"create-rkt") create_rkt;;
"create-uefi") create_uefi;;