mirror of
https://github.com/optim-enterprises-bv/kubernetes.git
synced 2025-11-03 03:38:15 +00:00
add raw flag for GitHub download links
@@ -100,7 +100,7 @@ spec:
       emptyDir: {}
 ```

-[Download example](cassandra-controller.yaml)
+[Download example](cassandra-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE cassandra-controller.yaml -->

There are a few things to note in this description. First is that we are running the ```kubernetes/cassandra``` image. This is a standard Cassandra installation on top of Debian. However, it also adds a custom [```SeedProvider```](https://svn.apache.org/repos/asf/cassandra/trunk/src/java/org/apache/cassandra/locator/SeedProvider.java) to Cassandra. In Cassandra, a ```SeedProvider``` bootstraps the gossip protocol that Cassandra uses to find other nodes. The ```KubernetesSeedProvider``` discovers the Kubernetes API server using the built-in Kubernetes discovery service, and then uses the Kubernetes API to find new nodes (more on this later).
@@ -131,7 +131,7 @@ spec:
     name: cassandra
 ```

-[Download example](cassandra-service.yaml)
+[Download example](cassandra-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE cassandra-service.yaml -->

The important thing to note here is the ```selector```. It is a query over labels that identifies the set of _Pods_ contained by the _Service_. In this case the selector is ```name=cassandra```. If you look back at the Pod specification above, you'll see that the pod has the corresponding label, so it will be selected for membership in this Service.
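That selector-to-label relationship can be sketched minimally as follows. This is a hedged illustration, not the linked `cassandra-service.yaml`: the `apiVersion` and the port value (9042, Cassandra's CQL port) are assumptions.

```yaml
# Hypothetical minimal Service illustrating label selection.
apiVersion: v1
kind: Service
metadata:
  name: cassandra
spec:
  ports:
    - port: 9042        # assumed CQL port; see the linked file for the real value
  selector:
    name: cassandra     # matches the `name: cassandra` label on the pod
```

Any pod carrying the label `name: cassandra` is then added to this Service's endpoints automatically.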
@@ -241,7 +241,7 @@ spec:
       emptyDir: {}
 ```

-[Download example](cassandra-controller.yaml)
+[Download example](cassandra-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE cassandra-controller.yaml -->

Most of this replication controller definition is identical to the Cassandra pod definition above; it simply gives the replication controller a recipe to use when it creates new Cassandra pods. The other differentiating parts are the ```selector``` attribute, which contains the controller's selector query, and the ```replicas``` attribute, which specifies the desired number of replicas, in this case 1.

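The shape of that recipe can be sketched roughly as below. This is an assumed skeleton, not the linked `cassandra-controller.yaml`; only the image name and the replica count come from the surrounding text.

```yaml
# Hypothetical skeleton of a replication controller: `replicas` sets the
# desired count, `selector` finds the pods it owns, and `template` is the
# recipe used to stamp out new pods.
apiVersion: v1
kind: ReplicationController
metadata:
  name: cassandra
spec:
  replicas: 1
  selector:
    name: cassandra
  template:
    metadata:
      labels:
        name: cassandra          # must satisfy the selector above
    spec:
      containers:
        - name: cassandra
          image: kubernetes/cassandra   # image named in the text; tag omitted
```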
@@ -81,7 +81,7 @@ spec:
     component: rabbitmq
 ```

-[Download example](rabbitmq-service.yaml)
+[Download example](rabbitmq-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE rabbitmq-service.yaml -->

To start the service, run:
@@ -126,7 +126,7 @@ spec:
         cpu: 100m
 ```

-[Download example](rabbitmq-controller.yaml)
+[Download example](rabbitmq-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE rabbitmq-controller.yaml -->

Running `$ kubectl create -f examples/celery-rabbitmq/rabbitmq-controller.yaml` brings up a replication controller that ensures one pod exists which is running a RabbitMQ instance.
@@ -167,7 +167,7 @@ spec:
         cpu: 100m
 ```

-[Download example](celery-controller.yaml)
+[Download example](celery-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE celery-controller.yaml -->

There are several things to point out here...
@@ -238,7 +238,7 @@ spec:
   type: LoadBalancer
 ```

-[Download example](flower-service.yaml)
+[Download example](flower-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE flower-service.yaml -->

It is marked as external (LoadBalanced). However, on many platforms you will have to add an explicit firewall rule to open port 5555.
@@ -279,7 +279,7 @@ spec:
         cpu: 100m
 ```

-[Download example](flower-controller.yaml)
+[Download example](flower-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE flower-controller.yaml -->

This will bring up a new pod with Flower installed and port 5555 (Flower's default port) exposed through the service endpoint. This image uses the following command to start Flower:

@@ -100,7 +100,7 @@ spec:
     - containerPort: 6379
 ```

-[Download example](redis-master-controller.yaml)
+[Download example](redis-master-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE redis-master-controller.yaml -->

Change to the `<kubernetes>/examples/guestbook` directory if you're not already there. Create the redis master pod in your Kubernetes cluster by running:
@@ -221,7 +221,7 @@ spec:
     name: redis-master
 ```

-[Download example](redis-master-service.yaml)
+[Download example](redis-master-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE redis-master-service.yaml -->

Create the service by running:
@@ -296,7 +296,7 @@ spec:
     - containerPort: 6379
 ```

-[Download example](redis-slave-controller.yaml)
+[Download example](redis-slave-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE redis-slave-controller.yaml -->

and create the replication controller by running:
@@ -347,7 +347,7 @@ spec:
     name: redis-slave
 ```

-[Download example](redis-slave-service.yaml)
+[Download example](redis-slave-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE redis-slave-service.yaml -->

This time the selector for the service is `name=redis-slave`, because that identifies the pods running redis slaves. It may also be helpful to set labels on your service itself, as we've done here, to make it easy to locate services with the `kubectl get services -l "label=value"` command.
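A minimal sketch of a Service that both carries a label and selects on one might look like this. The field layout is an assumption for illustration, not a copy of the linked `redis-slave-service.yaml`.

```yaml
# Hypothetical sketch: the `labels` block tags the Service object itself
# (so `kubectl get services -l name=redis-slave` can find it), while
# `selector` is the label query that picks the pods backing it.
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    name: redis-slave   # label on the Service object
spec:
  ports:
    - port: 6379        # Redis port, matching the containerPort above
  selector:
    name: redis-slave   # label query over pods
```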
@@ -398,7 +398,7 @@ spec:
     - containerPort: 80
 ```

-[Download example](frontend-controller.yaml)
+[Download example](frontend-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE frontend-controller.yaml -->

Using this file, you can turn up your frontend with:
@@ -501,7 +501,7 @@ spec:
     name: frontend
 ```

-[Download example](frontend-service.yaml)
+[Download example](frontend-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE frontend-service.yaml -->

#### Using 'type: LoadBalancer' for the frontend service (cloud-provider-specific)

@@ -83,7 +83,7 @@ spec:
     name: hazelcast
 ```

-[Download example](hazelcast-service.yaml)
+[Download example](hazelcast-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE hazelcast-service.yaml -->

The important thing to note here is the `selector`. It is a query over labels that identifies the set of _Pods_ contained by the _Service_. In this case the selector is `name: hazelcast`. If you look at the Replication Controller specification below, you'll see that the pod has the corresponding label, so it will be selected for membership in this Service.
@@ -138,7 +138,7 @@ spec:
     name: hazelcast
 ```

-[Download example](hazelcast-controller.yaml)
+[Download example](hazelcast-controller.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE hazelcast-controller.yaml -->

There are a few things to note in this description. First is that we are running the `quay.io/pires/hazelcast-kubernetes` image, tag `0.5`. This is a `busybox` installation with JRE 8 Update 45. However, it also adds a custom [`application`](https://github.com/pires/hazelcast-kubernetes-bootstrapper) that finds any Hazelcast nodes in the cluster and bootstraps a Hazelcast instance accordingly. The `HazelcastDiscoveryController` discovers the Kubernetes API server using the built-in Kubernetes discovery service, and then uses the Kubernetes API to find new nodes (more on this later).

@@ -131,7 +131,7 @@ spec:
       fsType: ext4
 ```

-[Download example](mysql.yaml)
+[Download example](mysql.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE mysql.yaml -->

Note that we've defined a volume mount for `/var/lib/mysql`, and specified a volume that uses the persistent disk (`mysql-disk`) that you created.
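That pairing of a volume mount with a persistent-disk volume looks roughly like the fragment below. This is a hedged sketch assuming a GCE persistent disk (as the `fsType: ext4` context suggests); the volume name and image reference are illustrative, not taken from the linked `mysql.yaml`.

```yaml
# Hypothetical pod-spec fragment: the container mounts the volume at
# /var/lib/mysql, and the volume maps to the pre-created disk.
spec:
  containers:
    - name: mysql
      image: mysql                 # illustrative; see the linked mysql.yaml
      volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
  volumes:
    - name: mysql-persistent-storage
      gcePersistentDisk:
        pdName: mysql-disk         # the disk created earlier
        fsType: ext4
```

Because the data lives on the disk rather than in the container, it survives pod restarts and rescheduling.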
@@ -186,7 +186,7 @@ spec:
     name: mysql
 ```

-[Download example](mysql-service.yaml)
+[Download example](mysql-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE mysql-service.yaml -->

Start the service like this:
@@ -241,7 +241,7 @@ spec:
       fsType: ext4
 ```

-[Download example](wordpress.yaml)
+[Download example](wordpress.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE wordpress.yaml -->

Create the pod:
@@ -282,7 +282,7 @@ spec:
   type: LoadBalancer
 ```

-[Download example](wordpress-service.yaml)
+[Download example](wordpress-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE wordpress-service.yaml -->

Note the `type: LoadBalancer` setting. This will set up the wordpress service behind an external IP.

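Where that setting sits in a Service can be sketched as follows. This outline is an assumption for illustration (the port and selector values are guesses); the linked `wordpress-service.yaml` is authoritative.

```yaml
# Hypothetical sketch: `type: LoadBalancer` asks the cloud provider to
# provision an external IP that forwards traffic to the selected pods.
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: LoadBalancer
  ports:
    - port: 80           # assumed HTTP port
  selector:
    name: wordpress      # assumed pod label
```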
@@ -98,7 +98,7 @@ To start Phabricator server use the file [`examples/phabricator/phabricator-cont
 }
 ```

-[Download example](phabricator-controller.json)
+[Download example](phabricator-controller.json?raw=true)
 <!-- END MUNGE: EXAMPLE phabricator-controller.json -->

Create the phabricator pod in your Kubernetes cluster by running:
@@ -188,7 +188,7 @@ To automate this process and make sure that a proper host is authorized even if
 }
 ```

-[Download example](authenticator-controller.json)
+[Download example](authenticator-controller.json?raw=true)
 <!-- END MUNGE: EXAMPLE authenticator-controller.json -->

To create the pod run:
@@ -237,7 +237,7 @@ Use the file [`examples/phabricator/phabricator-service.json`](phabricator-servi
 }
 ```

-[Download example](phabricator-service.json)
+[Download example](phabricator-service.json?raw=true)
 <!-- END MUNGE: EXAMPLE phabricator-service.json -->

To create the service run: