Added k8s deployment (#14)

* Added k8s deployment

* Make helm chart dependent on xmidt-* helm charts

* Change repo user to xmidt

* Update Readme
Authored by Flori on 2019-12-02 20:16:11 +01:00; committed by Jack Murdock.
parent 30e8cf38c4, commit b55dbed03f
18 changed files with 613 additions and 175 deletions

.gitignore

@@ -1 +1,6 @@
diagrams/*.png
rendered.*
deploy/kubernetes/helm/xmidt-cloud-ship/
deploy/kubernetes/helm/charts/
deploy/kubernetes/helm/.tmp/


@@ -1,175 +1,8 @@
# Deploying XMiDT
## Docker
To deploy with Docker, make sure [Docker is installed](https://docs.docker.com/install/).
#### Deploy
_**Note**_: While Tr1d1um is not part of XMiDT (it is part of WebPA), it is recommended to bring it
up for ease of use. Future releases will deprecate Tr1d1um.
1. Clone this repository
2. Run `deploy/docker-compose/deploy.sh`
This will build `simulator` and `goaws` locally, then run `docker-compose up`, which pulls the `talaria`, `scytale`, `petasos`, `caduceus`, and `tr1d1um` images from Docker Hub.
To pull specific versions of the images, set the `<SERVICE>_VERSION` environment variables when running the shell script:
```bash
TALARIA_VERSION=x.x.x deploy/docker-compose/deploy.sh
```
If you only want to bring up, for example, scytale and talaria, run:
```bash
deploy/docker-compose/deploy.sh scytale talaria
```
This can be done with any combination of services.
_**Note**_: Bringing up only a subset of services can cause problems.
3. To bring the containers down:
```bash
docker-compose -f deploy/docker-compose/docker-compose.yml down
```
### Info
The docker-compose file provides one full datacenter, plus a single talaria in a "backup"
datacenter. Since this is plain Docker rather than Swarm or Kubernetes, it is easiest to
work with just one datacenter. Because all ports are exposed, the service names may look a little odd.
#### Connection
##### Inside Docker
If the Parodus instance is inside Docker, life is easy! Just connect to the cluster with `petasos:6400`.
##### Outside Docker
If the Parodus instance is outside of Docker and the ports are exposed correctly, life
is harder because you will need to handle the redirect yourself.
You can initially connect to `localhost:6400`, but on the redirect change `talaria-1:6210` to `localhost:6210`,
or you can connect directly to a talaria at `localhost:6200`.
Once connected, you should see the device appear in the [metrics](http://localhost:9090/graph?g0.range_input=1h&g0.expr=xmidt_talaria_device_count&g0.tab=0).
### Interact with the machines
Check that petasos is working:
```bash
curl localhost:6400 -H "X-Webpa-Device-Name: mac:112233445566" -i
```
Should give you the following:
```
HTTP/1.1 307 Temporary Redirect
Content-Type: text/html; charset=utf-8
Location: http://talaria-0:6200
X-Petasos-Build: development
X-Petasos-Flavor: development
X-Petasos-Region: local
X-Petasos-Server: localhost
X-Petasos-Start-Time: 04 Jun 19 02:12 UTC
Date: Tue, 04 Jun 2019 02:16:58 GMT
Content-Length: 57
<a href="http://talaria-0:6200">Temporary Redirect</a>.
```
Check that tr1d1um can talk to scytale and talaria (the URL is quoted so the shell does not glob the `?`):
```bash
curl 'localhost:6100/api/v2/device/mac:112233445577/config?names=Foo' -i -H "Authorization: Basic dXNlcjpwYXNz"
```
Should give you:
```
HTTP/1.1 404 Not Found
X-Scytale-Build: development
X-Scytale-Flavor: development
X-Scytale-Region: local
X-Scytale-Server: localhost
X-Scytale-Start-Time: 04 Jun 19 02:12 UTC
X-Talaria-Build: development
X-Talaria-Flavor: development
X-Talaria-Region: local
X-Talaria-Server: localhost
X-Talaria-Start-Time: 04 Jun 19 02:12 UTC
X-Tr1d1um-Build: development
X-Tr1d1um-Flavor: development
X-Tr1d1um-Region: local
X-Tr1d1um-Server: localhost
X-Tr1d1um-Start-Time: 04 Jun 19 02:11 UTC
X-Webpa-Transaction-Id: LQxoB5sUSGWPNgAzxRIXLA
X-Xmidt-Message-Error: The device does not exist
X-Xmidt-Span: "http://petasos:6400/api/v2/device/send","2019-06-04T02:27:26Z","2.185274ms"
Date: Tue, 04 Jun 2019 02:27:26 GMT
Content-Length: 87
Content-Type: text/plain; charset=utf-8
{"code": 404, "message": "Could not process device request: The device does not exist"}
```
Check that your simulator is connected:
```bash
curl -H "Authorization: Basic dXNlcjpwYXNz" localhost:6100/api/v2/device/mac:112233445566/stat -i
```
Should give you something similar to:
```
HTTP/1.1 200 OK
Content-Type: application/json
X-Scytale-Build: development
X-Scytale-Flavor: development
X-Scytale-Region: local
X-Scytale-Server: localhost
X-Scytale-Start-Time: 10 Jun 19 06:36 UTC
X-Talaria-Build: development
X-Talaria-Flavor: development
X-Talaria-Region: local
X-Talaria-Server: localhost
X-Talaria-Start-Time: 10 Jun 19 06:36 UTC
X-Tr1d1um-Build: development
X-Tr1d1um-Flavor: development
X-Tr1d1um-Region: local
X-Tr1d1um-Server: localhost
X-Tr1d1um-Start-Time: 10 Jun 19 06:36 UTC
X-Webpa-Transaction-Id: CIyqnI23RjWhyC_vNO7hbA
X-Xmidt-Span: "http://petasos:6400/api/v2/device/mac:112233445566/stat","2019-06-10T06:38:13Z","2.947332ms"
Date: Mon, 10 Jun 2019 06:38:13 GMT
Content-Length: 231
{"id": "mac:112233445566", "pending": 0, "statistics": {"bytesSent": 0, "messagesSent": 0, "bytesReceived": 0, "messagesReceived": 0, "duplications": 0, "connectedAt": "2019-06-10T06:37:02.915435853Z", "upTime": "1m10.110197482s"}}
```
Read a single parameter:
```bash
curl -H "Authorization: Basic dXNlcjpwYXNz" 'localhost:6100/api/v2/device/mac:112233445566/config?names=Device.DeviceInfo.X_CISCO_COM_BootloaderVersion' -i
```
Results in:
```
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
X-Scytale-Build: development
X-Scytale-Flavor: development
X-Scytale-Region: local
X-Scytale-Server: localhost
X-Scytale-Start-Time: 10 Jun 19 06:36 UTC
X-Talaria-Build: development
X-Talaria-Flavor: development
X-Talaria-Region: local
X-Talaria-Server: localhost
X-Talaria-Start-Time: 10 Jun 19 06:36 UTC
X-Tr1d1um-Build: development
X-Tr1d1um-Flavor: development
X-Tr1d1um-Region: local
X-Tr1d1um-Server: localhost
X-Tr1d1um-Start-Time: 10 Jun 19 06:36 UTC
X-Webpa-Transaction-Id: cDYIIKLgoDtrt3XrVfUKkg
X-Xmidt-Span: "http://petasos:6400/api/v2/device/send","2019-06-10T06:45:04Z","15.869854ms"
Date: Mon, 10 Jun 2019 06:45:04 GMT
Content-Length: 163
{"parameters":[{"name":"Device.DeviceInfo.X_CISCO_COM_BootloaderVersion","value":"4.2.0.45","dataType":0,"parameterCount":1,"message":"Success"}],"statusCode":200}
```
# Deploy
Currently, two example deployments are provided:
- `docker-compose`
- [README.md](https://github.com/xmidt-org/xmidt/tree/master/deploy/docker-compose/README.md)
- see folder with example deployment: [docker-compose](https://github.com/xmidt-org/xmidt/tree/master/deploy/docker-compose)
- `kubernetes`
- [README.md](https://github.com/xmidt-org/xmidt/tree/master/deploy/kubernetes/README.md)
- see folder with example deployment: [kubernetes](https://github.com/xmidt-org/xmidt/tree/master/deploy/kubernetes)


@@ -0,0 +1,175 @@
# Deploying XMiDT
## Docker
To deploy with Docker, make sure [Docker is installed](https://docs.docker.com/install/).
#### Deploy
_**Note**_: While Tr1d1um is not part of XMiDT (it is part of WebPA), it is recommended to bring it
up for ease of use. Future releases will deprecate Tr1d1um.
1. Clone this repository
2. Run `deploy/docker-compose/deploy.sh`
This will build `simulator` and `goaws` locally, then run `docker-compose up`, which pulls the `talaria`, `scytale`, `petasos`, `caduceus`, and `tr1d1um` images from Docker Hub.
To pull specific versions of the images, set the `<SERVICE>_VERSION` environment variables when running the shell script:
```bash
TALARIA_VERSION=x.x.x deploy/docker-compose/deploy.sh
```
If you only want to bring up, for example, scytale and talaria, run:
```bash
deploy/docker-compose/deploy.sh scytale talaria
```
This can be done with any combination of services.
_**Note**_: Bringing up only a subset of services can cause problems.
3. To bring the containers down:
```bash
docker-compose -f deploy/docker-compose/docker-compose.yml down
```
### Info
The docker-compose file provides one full datacenter, plus a single talaria in a "backup"
datacenter. Since this is plain Docker rather than Swarm or Kubernetes, it is easiest to
work with just one datacenter. Because all ports are exposed, the service names may look a little odd.
#### Connection
##### Inside Docker
If the Parodus instance is inside Docker, life is easy! Just connect to the cluster with `petasos:6400`.
##### Outside Docker
If the Parodus instance is outside of Docker and the ports are exposed correctly, life
is harder because you will need to handle the redirect yourself.
You can initially connect to `localhost:6400`, but on the redirect change `talaria-1:6210` to `localhost:6210`,
or you can connect directly to a talaria at `localhost:6200`.
Once connected, you should see the device appear in the [metrics](http://localhost:9090/graph?g0.range_input=1h&g0.expr=xmidt_talaria_device_count&g0.tab=0).
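If you prefer the command line, the same check works against the Prometheus HTTP API (a standard Prometheus endpoint; the port assumes the compose setup above):
```bash
# Instant query for the talaria device count; a connected device or
# simulator should show a value of at least 1.
curl 'http://localhost:9090/api/v1/query?query=xmidt_talaria_device_count'
```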
### Interact with the machines
Check that petasos is working:
```bash
curl localhost:6400 -H "X-Webpa-Device-Name: mac:112233445566" -i
```
Should give you the following:
```
HTTP/1.1 307 Temporary Redirect
Content-Type: text/html; charset=utf-8
Location: http://talaria-0:6200
X-Petasos-Build: development
X-Petasos-Flavor: development
X-Petasos-Region: local
X-Petasos-Server: localhost
X-Petasos-Start-Time: 04 Jun 19 02:12 UTC
Date: Tue, 04 Jun 2019 02:16:58 GMT
Content-Length: 57
<a href="http://talaria-0:6200">Temporary Redirect</a>.
```
Check that tr1d1um can talk to scytale and talaria (the URL is quoted so the shell does not glob the `?`):
```bash
curl 'localhost:6100/api/v2/device/mac:112233445577/config?names=Foo' -i -H "Authorization: Basic dXNlcjpwYXNz"
```
Should give you:
```
HTTP/1.1 404 Not Found
X-Scytale-Build: development
X-Scytale-Flavor: development
X-Scytale-Region: local
X-Scytale-Server: localhost
X-Scytale-Start-Time: 04 Jun 19 02:12 UTC
X-Talaria-Build: development
X-Talaria-Flavor: development
X-Talaria-Region: local
X-Talaria-Server: localhost
X-Talaria-Start-Time: 04 Jun 19 02:12 UTC
X-Tr1d1um-Build: development
X-Tr1d1um-Flavor: development
X-Tr1d1um-Region: local
X-Tr1d1um-Server: localhost
X-Tr1d1um-Start-Time: 04 Jun 19 02:11 UTC
X-Webpa-Transaction-Id: LQxoB5sUSGWPNgAzxRIXLA
X-Xmidt-Message-Error: The device does not exist
X-Xmidt-Span: "http://petasos:6400/api/v2/device/send","2019-06-04T02:27:26Z","2.185274ms"
Date: Tue, 04 Jun 2019 02:27:26 GMT
Content-Length: 87
Content-Type: text/plain; charset=utf-8
{"code": 404, "message": "Could not process device request: The device does not exist"}
```
Check that your simulator is connected:
```bash
curl -H "Authorization: Basic dXNlcjpwYXNz" localhost:6100/api/v2/device/mac:112233445566/stat -i
```
Should give you something similar to:
```
HTTP/1.1 200 OK
Content-Type: application/json
X-Scytale-Build: development
X-Scytale-Flavor: development
X-Scytale-Region: local
X-Scytale-Server: localhost
X-Scytale-Start-Time: 10 Jun 19 06:36 UTC
X-Talaria-Build: development
X-Talaria-Flavor: development
X-Talaria-Region: local
X-Talaria-Server: localhost
X-Talaria-Start-Time: 10 Jun 19 06:36 UTC
X-Tr1d1um-Build: development
X-Tr1d1um-Flavor: development
X-Tr1d1um-Region: local
X-Tr1d1um-Server: localhost
X-Tr1d1um-Start-Time: 10 Jun 19 06:36 UTC
X-Webpa-Transaction-Id: CIyqnI23RjWhyC_vNO7hbA
X-Xmidt-Span: "http://petasos:6400/api/v2/device/mac:112233445566/stat","2019-06-10T06:38:13Z","2.947332ms"
Date: Mon, 10 Jun 2019 06:38:13 GMT
Content-Length: 231
{"id": "mac:112233445566", "pending": 0, "statistics": {"bytesSent": 0, "messagesSent": 0, "bytesReceived": 0, "messagesReceived": 0, "duplications": 0, "connectedAt": "2019-06-10T06:37:02.915435853Z", "upTime": "1m10.110197482s"}}
```
Read a single parameter:
```bash
curl -H "Authorization: Basic dXNlcjpwYXNz" 'localhost:6100/api/v2/device/mac:112233445566/config?names=Device.DeviceInfo.X_CISCO_COM_BootloaderVersion' -i
```
Results in:
```
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
X-Scytale-Build: development
X-Scytale-Flavor: development
X-Scytale-Region: local
X-Scytale-Server: localhost
X-Scytale-Start-Time: 10 Jun 19 06:36 UTC
X-Talaria-Build: development
X-Talaria-Flavor: development
X-Talaria-Region: local
X-Talaria-Server: localhost
X-Talaria-Start-Time: 10 Jun 19 06:36 UTC
X-Tr1d1um-Build: development
X-Tr1d1um-Flavor: development
X-Tr1d1um-Region: local
X-Tr1d1um-Server: localhost
X-Tr1d1um-Start-Time: 10 Jun 19 06:36 UTC
X-Webpa-Transaction-Id: cDYIIKLgoDtrt3XrVfUKkg
X-Xmidt-Span: "http://petasos:6400/api/v2/device/send","2019-06-10T06:45:04Z","15.869854ms"
Date: Mon, 10 Jun 2019 06:45:04 GMT
Content-Length: 163
{"parameters":[{"name":"Device.DeviceInfo.X_CISCO_COM_BootloaderVersion","value":"4.2.0.45","dataType":0,"parameterCount":1,"message":"Success"}],"statusCode":200}
```
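Writes follow the same WebPA-style config endpoint. A hedged sketch of a SET request (the parameter name is reused from the read above; whether the simulator actually persists the new value is not verified here):
```bash
# PATCH the config endpoint with a parameters list to set a value.
curl -i -X PATCH 'localhost:6100/api/v2/device/mac:112233445566/config' \
  -H "Authorization: Basic dXNlcjpwYXNz" \
  -H "Content-Type: application/json" \
  -d '{"parameters":[{"dataType":0,"name":"Device.DeviceInfo.X_CISCO_COM_BootloaderVersion","value":"4.2.0.46"}]}'
```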


@@ -0,0 +1,78 @@
# Deploying XMiDT With Kubernetes
## Kubernetes
Make sure you have a Kubernetes cluster ready. You can use any of:
* [minikube](https://kubernetes.io/docs/setup/learning-environment/minikube/)
* [kind](https://github.com/kubernetes-sigs/kind)
* a cluster provided by a cloud provider

To deploy to Kubernetes (k8s), you need to set up kubectl to talk to your cluster.
## Prerequisites
The XMiDT cloud components are deployed with a Helm chart, which lets us declare dependencies on third-party services such as Consul or Prometheus.
Make sure [helm](https://github.com/helm/helm) >= v3.0.0-beta.3 is installed on your system; see the [helm quickstart](https://v3.helm.sh/docs/intro/quickstart/).
If you need to make customizations (e.g. because of unique characteristics of your k8s instance), [ship](https://github.com/replicatedhq/ship) is recommended.
## Getting Started
Use helm to render the chart to *rendered.yaml*:
```bash
helm template xmidt-cloud ./deploy/kubernetes/helm/xmidt-cloud/ > rendered.yaml
```
Now you can deploy the rendered chart to your k8s cluster:
```bash
kubectl apply -f rendered.yaml
```
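As a shortcut, the render and apply steps can be piped together (same commands as above):
```bash
helm template xmidt-cloud ./deploy/kubernetes/helm/xmidt-cloud/ | kubectl apply -f -
```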
Check if all pods are running (scytale might need a couple of restarts as it depends on consul):
```
kubectl get all
--
pod/caduceus-0 1/1 Running 0 5d
pod/petasos-0 1/1 Running 0 5d
pod/prometheus-0 1/1 Running 0 5d
pod/rdkb-simulator-0 1/1 Running 0 5d
pod/scytale-0 1/1 Running 4 5d
pod/talaria-0 1/1 Running 0 5d
pod/tr1d1um-0 1/1 Running 0 5d
pod/xmidt-cloud-consul-0 1/1 Running 0 5d
pod/xmidt-cloud-consul-1 1/1 Running 0 5d
pod/xmidt-cloud-consul-2 1/1 Running 0 5d
```
To get the ports for tr1d1um and petasos, use:
```
kubectl get all | grep 'service/tr1d1um-nodeport\|service/petasos-nodeport'
--
service/petasos-nodeport NodePort 10.247.241.180 <none> 6400:32659/TCP 3m
service/tr1d1um-nodeport NodePort 10.247.63.209 <none> 6100:31425/TCP
```
This means you can reach tr1d1um on port 31425 and petasos on port 32659 on every node of your cluster.
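For example, with the NodePorts from the sample output above (the basic-auth credentials are the development defaults from the Docker example and are an assumption here):
```bash
# <node-ip> is any node's address, e.g. the output of `minikube ip`;
# 31425 is the tr1d1um NodePort from the sample output and will differ per cluster.
curl -i -H "Authorization: Basic dXNlcjpwYXNz" \
  "http://<node-ip>:31425/api/v2/device/mac:112233445566/stat"
```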
## Delete
You can delete your deployment with:
```bash
kubectl delete -f rendered.yaml
```
## Updating this chart
If you make changes to a chart in a child repository (talaria, etc.), you can use the *Makefile* in *helm* to update this chart to the newest versions.
```bash
cd helm
make update-latest
```
This downloads all child charts and packages them into *xmidt-cloud/charts*.
## FAQ
1. Consul pods are not running.
   Consul depends on a `PersistentVolumeClaim`; make sure your k8s instance supports this.
   You may also have to add annotations to the `PersistentVolumeClaim` section.
   If this is the case, use [ship](https://github.com/replicatedhq/ship) to customize the xmidt helm chart.
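A few standard kubectl commands help diagnose this (the consul label selector is an assumption; adjust it to your chart):
```bash
kubectl get pvc                       # are the claims Bound or Pending?
kubectl describe pvc -l app=consul    # events explain why a claim is stuck
kubectl get storageclass              # a default StorageClass must exist
```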


@@ -0,0 +1,34 @@
# GitHub organization to clone the xmidt-* service repos from
REPO_USER := xmidt

# download xmidt-cloud dependencies to local folder ./charts
.PHONY: dependencies update-latest
dependencies:
	@-rm -rf charts
	@-rm -rf .tmp
	@mkdir -p charts
	@mkdir -p .tmp
	@cd .tmp && git clone https://github.com/${REPO_USER}/tr1d1um
	@cd charts && helm package ../.tmp/tr1d1um/deploy/helm/tr1d1um
	@cd .tmp && git clone https://github.com/${REPO_USER}/scytale
	@cd charts && helm package ../.tmp/scytale/deploy/helm/scytale
	@cd .tmp && git clone https://github.com/${REPO_USER}/petasos
	@cd charts && helm package ../.tmp/petasos/deploy/helm/petasos
	@cd .tmp && git clone https://github.com/${REPO_USER}/talaria
	@cd charts && helm package ../.tmp/talaria/deploy/helm/talaria
	@cd .tmp && git clone https://github.com/${REPO_USER}/caduceus
	@cd charts && helm package ../.tmp/caduceus/deploy/helm/caduceus

# update helm chart (xmidt-cloud) with latest dependencies from xmidt-* services
update-latest: dependencies
	@-rm xmidt-cloud/charts/tr1d1um*
	@cp charts/tr1d1um* xmidt-cloud/charts/
	@-rm xmidt-cloud/charts/scytale*
	@cp charts/scytale* xmidt-cloud/charts/
	@-rm xmidt-cloud/charts/petasos*
	@cp charts/petasos* xmidt-cloud/charts/
	@-rm xmidt-cloud/charts/talaria*
	@cp charts/talaria* xmidt-cloud/charts/
	@-rm xmidt-cloud/charts/caduceus*
	@cp charts/caduceus* xmidt-cloud/charts/
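Because `REPO_USER` is an ordinary make variable, it can be overridden on the command line, e.g. to build the charts from a fork (the fork name is a placeholder):
```bash
make update-latest REPO_USER=my-fork
```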


@@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj


@@ -0,0 +1,6 @@
dependencies:
- name: consul
  repository: https://kubernetes-charts.storage.googleapis.com
  version: 3.8.1
digest: sha256:b6289735c401fdd1c4e506961a5128d14d603e60fc12bb91967d3a50e91a0c72
generated: "2019-09-25T10:36:38.023910322+02:00"
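Given this lock file, the pinned consul chart can be re-fetched reproducibly with a standard helm 3 command:
```bash
cd deploy/kubernetes/helm/xmidt-cloud
helm dependency build   # downloads dependencies exactly as recorded in the lock file
```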


@@ -0,0 +1,30 @@
apiVersion: v2
name: xmidt-cloud
description: A Helm chart for Kubernetes

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 0.1.5

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: 0.0.1

dependencies:
  - name: consul
    repository: https://kubernetes-charts.storage.googleapis.com
    version: 3.8.1
  # - name: prometheus
  #   repository: https://kubernetes-charts.storage.googleapis.com
  #   version: 9.1.1
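After editing the dependency list (for example, uncommenting prometheus), refresh the charts/ directory and the lock file with a standard helm 3 command:
```bash
cd deploy/kubernetes/helm/xmidt-cloud
helm dependency update   # re-resolves versions and rewrites the lock file
```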


@@ -0,0 +1,156 @@
apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  type: NodePort
  ports:
    - port: 9090
  selector:
    app: xmidt-app
---
apiVersion: v1
data:
  prometheus.yml: |
    # my global config
    global:
      scrape_interval: 2s     # Scrape targets every 2 seconds. The default is every 1 minute.
      evaluation_interval: 2s # Evaluate rules every 2 seconds. The default is every 1 minute.
      # scrape_timeout is set to the global default (10s).

      # Attach these labels to any time series or alerts when communicating with
      # external systems (federation, remote storage, Alertmanager).
      external_labels:
        monitor: "codelab-monitor"

    # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
    # rule_files:
    #   - "first.rules"
    #   - "second.rules"

    # The job name is added as a label `job=<job_name>` to any timeseries scraped from a config.
    # For every job, metrics_path defaults to '/metrics' and scheme defaults to 'http'.
    scrape_configs:
      # Prometheus scraping itself.
      - job_name: "prometheus"
        static_configs:
          - targets: ["prometheus:9090"]

      - job_name: "docker"
        static_configs:
          - targets: ["docker.for.mac.host.internal:9323"]

      - job_name: "caduceus"
        static_configs:
          - targets: ["caduceus:6003"]

      - job_name: "petasos"
        static_configs:
          - targets: ["petasos:6403"]

      - job_name: "scytale"
        static_configs:
          - targets: ["scytale:6303"]

      - job_name: "talaria"
        static_configs:
          - targets: ["talaria:6204"]

      - job_name: "tr1d1um"
        static_configs:
          - targets: ["tr1d1um:6103"]

      - job_name: "consul"
        metrics_path: "/v1/agent/metrics"
        params:
          format: ["prometheus"]
        static_configs:
          - targets: ["{{ .Release.Name }}-consul:8500"]
kind: ConfigMap
metadata:
  labels:
    app: xmidt-app
  name: prometheus-config
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus
  labels:
    app: xmidt-app
spec:
  selector:
    matchLabels:
      app: xmidt-app
  updateStrategy:
    type: RollingUpdate
  replicas: 1
  serviceName: xmidt-app
  template:
    metadata:
      labels:
        app: xmidt-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: "kubernetes.io/hostname"
              labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - xmidt-app
      volumes:
        - name: prometheus-config
          projected:
            sources:
              - configMap:
                  name: prometheus-config
                  items:
                    - key: prometheus.yml
                      path: prometheus.yml
                      mode: 0755
      # securityContext:
      #   runAsNonRoot: false
      #   runAsUser: 999
      #   supplementalGroups: [999]
      containers:
        - image: prom/prometheus
          name: prometheus
          args:
            [
              "--log.level=debug",
              "--config.file=/prometheus-data/prometheus.yml",
            ]
          ports:
            - containerPort: 9090
              protocol: TCP
          volumeMounts:
            - name: prometheus-config
              mountPath: "/prometheus-data"
              readOnly: true
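To inspect Prometheus from your workstation without knowing the NodePort, a standard port-forward works:
```bash
kubectl port-forward service/prometheus 9090:9090 &
# verify every scrape job from the ConfigMap is up
curl 'http://localhost:9090/api/v1/targets'
```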


@@ -0,0 +1,37 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rdkb-simulator
  labels:
    app: xmidt-app-rdkb-simulator
spec:
  selector:
    matchLabels:
      app: xmidt-app-rdkb-simulator
  updateStrategy:
    type: RollingUpdate
  replicas: 1
  serviceName: xmidt-app
  template:
    metadata:
      labels:
        app: xmidt-app-rdkb-simulator
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: "kubernetes.io/hostname"
              labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - xmidt-app-rdkb-simulator
      containers:
        - image: {{ .Values.simulator.image }}
          name: rdkb-simulator
          command:
            - /usr/bin/simulate
          env:
            - name: URL
              value: "http://petasos:6400"


@@ -0,0 +1,32 @@
apiVersion: v1
kind: Service
metadata:
  name: petasos-nodeport
spec:
  type: NodePort
  ports:
    - port: 6400
  selector:
    app: xmidt-app-petasos
---
apiVersion: v1
kind: Service
metadata:
  name: tr1d1um-nodeport
spec:
  type: NodePort
  ports:
    - port: 6100
  selector:
    app: xmidt-app-tr1d1um
---
apiVersion: v1
kind: Service
metadata:
  name: scytale-nodeport
spec:
  type: NodePort
  ports:
    - port: 6300
  selector:
    app: xmidt-app-scytale
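Kubernetes assigns the external port for each NodePort service at creation time; it can be read back with a jsonpath query (standard kubectl):
```bash
kubectl get service tr1d1um-nodeport \
  -o jsonpath='{.spec.ports[0].nodePort}'
```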


@@ -0,0 +1,31 @@
# Default values for xmidt-cloud.
simulator:
  image: xmidt/rdkb-simulator

#######################################################
#################### Child Charts #####################
#######################################################
caduceus:
  caduceus:
    image: xmidt/caduceus:0.2.1
scytale:
  scytale:
    image: xmidt/scytale:0.1.5
  fanout:
    endpoints: ["http://petasos:6400/api/v2/device/send"]
petasos:
  petasos:
    image: xmidt/petasos:0.1.4
talaria:
  talaria:
    image: xmidt/talaria:0.1.3
tr1d1um:
  tr1d1um:
    image: xmidt/tr1d1um:0.1.5
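Each top-level key here is forwarded to the child chart of the same name, so any of these values can be overridden at render time without editing the file; for example (the talaria tag shown is a placeholder):
```bash
helm template xmidt-cloud ./deploy/kubernetes/helm/xmidt-cloud/ \
  --set talaria.talaria.image=xmidt/talaria:0.1.4 > rendered.yaml
```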