Mirror of https://github.com/Telecominfraproject/wlan-toolsmith.git, synced 2025-10-29 18:12:20 +00:00

Merge pull request #207 from Telecominfraproject/fix/tools-150--deprecate-api

[TOOLS-150] Chg: move kibana chart to local
@@ -6,7 +6,8 @@ spec:
   scaleTargetRef:
     name: wlan-testing-small-deployment
   minReplicas: 1
-  maxReplicas: 10
+  maxReplicas: 5
   scaleUpTriggers:
-  - githubEvent: {}
+  - githubEvent:
+      workflowJob: {}
     duration: "24h"
21  helmfile/cloud-sdk/charts/kibana/.helmignore  Normal file

@@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
14  helmfile/cloud-sdk/charts/kibana/Chart.yaml  Normal file

@@ -0,0 +1,14 @@
apiVersion: v1
appVersion: 6.7.0
deprecated: true
description: DEPRECATED - Kibana is an open source data visualization plugin for Elasticsearch
engine: gotpl
home: https://www.elastic.co/products/kibana
icon: https://static-www.elastic.co/v3/assets/bltefdd0b53724fa2ce/blt8781708f8f37ed16/5c11ec2edf09df047814db23/logo-elastic-kibana-lt.svg
keywords:
- elasticsearch
- kibana
name: kibana
sources:
- https://github.com/elastic/kibana
version: 3.2.8
6  helmfile/cloud-sdk/charts/kibana/OWNERS  Normal file

@@ -0,0 +1,6 @@
approvers:
- compleatang
- monotek
reviewers:
- compleatang
- monotek
156  helmfile/cloud-sdk/charts/kibana/README.md  Normal file

@@ -0,0 +1,156 @@
# kibana

**CHART WAS DEPRECATED! USE THE OFFICIAL CHART INSTEAD @ <https://github.com/elastic/helm-charts/tree/master/kibana>**

[kibana](https://github.com/elastic/kibana) is your window into the Elastic Stack. Specifically, it's an open source (Apache Licensed), browser-based analytics and search dashboard for Elasticsearch.

## Pre-deprecation notice

As mentioned in #14935 we are planning on deprecating this chart in favour of the official Elastic Helm Chart. The Elastic Helm Chart supports version 7 of Kibana.

## TL;DR;

```console
$ helm install stable/kibana
```

## Introduction

This chart bootstraps a kibana deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.

## Installing the Chart

To install the chart with the release name `my-release`:

```console
$ helm install stable/kibana --name my-release
```

The command deploys kibana on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.

NOTE: Low resource constraints given to the chart and its plugins are likely not to work well.

## Uninstalling the Chart

To uninstall/delete the `my-release` deployment:

```console
$ helm delete my-release
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

The following table lists the configurable parameters of the kibana chart and their default values.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| `affinity` | node/pod affinities | None |
| `env` | Environment variables to configure Kibana | `{}` |
| `envFromSecrets` | Environment variables from secrets to the cronjob container | `{}` |
| `envFromSecrets.*.from.secret` | - `secretKeyRef.name` used for environment variable | |
| `envFromSecrets.*.from.key` | - `secretKeyRef.key` used for environment variable | |
| `files` | Kibana configuration files | None |
| `livenessProbe.enabled` | livenessProbe to be enabled? | `false` |
| `livenessProbe.path` | path for livenessProbe | `/status` |
| `livenessProbe.initialDelaySeconds` | number of seconds | 30 |
| `livenessProbe.timeoutSeconds` | number of seconds | 10 |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `image.repository` | Image repository | `docker.elastic.co/kibana/kibana-oss` |
| `image.tag` | Image tag | `6.7.0` |
| `image.pullSecrets` | Specify image pull secrets | `nil` |
| `commandline.args` | add additional commandline args | `nil` |
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations | None |
| `ingress.hosts` | Ingress accepted hostnames | None |
| `ingress.tls` | Ingress TLS configuration | None |
| `nodeSelector` | node labels for pod assignment | `{}` |
| `podAnnotations` | annotations to add to each pod | `{}` |
| `podLabels` | labels to add to each pod | `{}` |
| `replicaCount` | desired number of pods | `1` |
| `revisionHistoryLimit` | revisionHistoryLimit | `3` |
| `serviceAccountName` | DEPRECATED: use serviceAccount.name | `nil` |
| `serviceAccount.create` | create a serviceAccount to run the pod | `false` |
| `serviceAccount.name` | name of the serviceAccount to create | `kibana.fullname` |
| `authProxyEnabled` | enables authproxy. Create container in extraContainers | `false` |
| `extraContainers` | Sidecar containers to add to the kibana pod | `{}` |
| `extraVolumeMounts` | additional volumeMounts for the kibana pod | `[]` |
| `extraVolumes` | additional volumes to add to the kibana pod | `[]` |
| `resources` | pod resource requests & limits | `{}` |
| `priorityClassName` | priorityClassName | `nil` |
| `service.externalPort` | external port for the service | `443` |
| `service.internalPort` | internal port for the service | `4180` |
| `service.portName` | service port name | None |
| `service.authProxyPort` | port to use when using sidecar authProxy | None |
| `service.externalIPs` | external IP addresses | None |
| `service.loadBalancerIP` | Load Balancer IP address | None |
| `service.loadBalancerSourceRanges` | Limit load balancer source IPs to list of CIDRs (where available) | `[]` |
| `service.nodePort` | NodePort value if service.type is NodePort | None |
| `service.type` | type of service | `ClusterIP` |
| `service.clusterIP` | static clusterIP or None for headless services | None |
| `service.annotations` | Kubernetes service annotations | None |
| `service.labels` | Kubernetes service labels | None |
| `service.selector` | Kubernetes service selector | `{}` |
| `tolerations` | List of node taints to tolerate | `[]` |
| `dashboardImport.enabled` | Enable dashboard import | `false` |
| `dashboardImport.timeout` | Time in seconds waiting for Kibana to be in green overall state | `60` |
| `dashboardImport.basePath` | Customizing base path url during dashboard import | `/` |
| `dashboardImport.xpackauth.enabled` | Enable Xpack auth | `false` |
| `dashboardImport.xpackauth.username` | Optional Xpack username | `myuser` |
| `dashboardImport.xpackauth.password` | Optional Xpack password | `mypass` |
| `dashboardImport.dashboards` | Dashboards | `{}` |
| `plugins.enabled` | Enable installation of plugins. | `false` |
| `plugins.reset` | Optional: Remove all installed plugins before installing all new ones | `false` |
| `plugins.values` | List of plugins to install. Format | None |
| `persistentVolumeClaim.enabled` | Enable PVC for plugins | `false` |
| `persistentVolumeClaim.existingClaim` | Use your own PVC for plugins | `false` |
| `persistentVolumeClaim.annotations` | Add your annotations for the PVC | `{}` |
| `persistentVolumeClaim.accessModes` | Access mode to the PVC | `ReadWriteOnce` |
| `persistentVolumeClaim.size` | Size of the PVC | `5Gi` |
| `persistentVolumeClaim.storageClass` | Storage class of the PVC | None |
| `readinessProbe.enabled` | readinessProbe to be enabled? | `false` |
| `readinessProbe.path` | path for readinessProbe | `/status` |
| `readinessProbe.initialDelaySeconds` | number of seconds | 30 |
| `readinessProbe.timeoutSeconds` | number of seconds | 10 |
| `readinessProbe.periodSeconds` | number of seconds | 10 |
| `readinessProbe.successThreshold` | number of successes | 5 |
| `securityContext.enabled` | Enable security context (should be true for PVC) | `false` |
| `securityContext.allowPrivilegeEscalation` | Allow privilege escalation | `false` |
| `securityContext.runAsUser` | User id to run in pods | `1000` |
| `securityContext.fsGroup` | fsGroup id to run in pods | `2000` |
| `extraConfigMapMounts` | Additional configmaps to be mounted | `[]` |
| `deployment.annotations` | Annotations for deployment | `{}` |
| `initContainers` | Init containers to add to the kibana deployment | `{}` |
| `testFramework.enabled` | enable the test framework | `true` |
| `testFramework.image` | `test-framework` image repository. | `dduportal/bats` |
| `testFramework.tag` | `test-framework` image tag. | `0.4.0` |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example:

- The Kibana configuration file properties can also be set through the `env` parameter.
- All the files listed under the `files` variable will overwrite any existing files by the same name in the kibana config directory.
- Files not mentioned under this variable will remain unaffected.

```console
$ helm install stable/kibana --name my-release \
  --set=image.tag=v0.0.2,resources.limits.cpu=200m
```
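The `envFromSecrets` rows in the table above map a secret key to a container environment variable; the deployment template upper-cases the map key to form the variable name. A hypothetical `values.yaml` fragment — the secret name `my-kibana-secret` and key `password` are invented for illustration:

```yaml
envFromSecrets:
  elasticsearch_password:        # exposed as env var ELASTICSEARCH_PASSWORD
    from:
      secret: my-kibana-secret   # hypothetical Kubernetes Secret name
      key: password              # hypothetical key within that Secret
```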
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example:

```console
$ helm install stable/kibana --name my-release -f values.yaml
```

> **Tip**: You can use the default [values.yaml](values.yaml)

## Dashboard import

- A dashboard for `dashboardImport.dashboards` can be a JSON document or a download URL pointing to a JSON file.

## Upgrading

### To 2.3.0

The default value of `elasticsearch.url` (for kibana < 6.6) has been removed in favor of `elasticsearch.hosts` (for kibana >= 6.6).
@@ -0,0 +1,3 @@
---
# disable internal port by setting authProxyEnabled
authProxyEnabled: true
21  helmfile/cloud-sdk/charts/kibana/ci/dashboard-values.yaml  Normal file

@@ -0,0 +1,21 @@
---
# enable the dashboard init container with dashboard embedded in configmap

dashboardImport:
  enabled: true
  dashboards:
    1_create_index: |-
      {
        "version": "6.7.0",
        "objects": [
          {
            "id": "a88738e0-d3c1-11e8-b38e-a37c21cf8c95",
            "version": 2,
            "attributes": {
              "title": "logstash-*",
              "timeFieldName": "@timestamp",
              "fields": "[{\"name\":\"@timestamp\",\"type\":\"date\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true}]"
            }
          }
        ]
      }
@@ -0,0 +1,6 @@
---
extraConfigMapMounts:
  - name: logtrail-configs
    configMap: kibana-logtrail
    mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json
    subPath: logtrail.json
@@ -0,0 +1,3 @@
ingress:
  hosts:
    - localhost.localdomain/kibana
3  helmfile/cloud-sdk/charts/kibana/ci/ingress-hosts.yaml  Normal file

@@ -0,0 +1,3 @@
ingress:
  hosts:
    - kibana.localhost.localdomain
@@ -0,0 +1,23 @@
---
# enable all init container types

# A dashboard is defined by a name and a string with the json payload or the download url
dashboardImport:
  enabled: true
  dashboards:
    k8s: https://raw.githubusercontent.com/monotek/kibana-dashboards/master/k8s-fluentd-elasticsearch.json

# Enable the plugin init container with plugins retrieved from an URL
plugins:
  enabled: true
  reset: false
  # Use <plugin_name,version,url> to add/upgrade plugin
  values:
    - analyze-api-ui-plugin,6.7.0,https://github.com/johtani/analyze-api-ui-plugin/releases/download/6.7.0/analyze-api-ui-plugin-6.7.0.zip
    # - other_plugin

# Add your own init container
initContainers:
  echo-container:
    image: "busybox"
    command: ['sh', '-c', 'echo Hello from init container! && sleep 3']
@@ -0,0 +1,18 @@
---
# enable user-defined init containers

initContainers:
  numbers-container:
    image: "busybox"
    imagePullPolicy: "IfNotPresent"
    command:
      - "/bin/sh"
      - "-c"
      - |
        for i in $(seq 1 10); do
          echo $i
        done

  echo-container:
    image: "busybox"
    command: ['sh', '-c', 'echo Hello from init container! && sleep 3']
9  helmfile/cloud-sdk/charts/kibana/ci/plugin-install.yaml  Normal file

@@ -0,0 +1,9 @@
---
# enable the plugin init container with plugins retrieved from an URL
plugins:
  enabled: true
  reset: false
  # Use <plugin_name,version,url> to add/upgrade plugin
  values:
    - analyze-api-ui-plugin,6.7.0,https://github.com/johtani/analyze-api-ui-plugin/releases/download/6.7.0/analyze-api-ui-plugin-6.7.0.zip
    # - other_plugin
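The `# Use <plugin_name,version,url>` comment above describes a comma-separated triple that the chart's plugin init container later splits apart. A minimal bash sketch of that split, using the same `IFS=',' read -ra` idiom as the deployment template (the entry value is taken from the file above):

```shell
#!/usr/bin/env bash
# Split one plugins.values entry of the form <plugin_name,version,url>
# into its three fields, as the plugin init container does.
entry="analyze-api-ui-plugin,6.7.0,https://github.com/johtani/analyze-api-ui-plugin/releases/download/6.7.0/analyze-api-ui-plugin-6.7.0.zip"
IFS=',' read -ra PLUGIN <<< "$entry"
echo "name:    ${PLUGIN[0]}"   # analyze-api-ui-plugin
echo "version: ${PLUGIN[1]}"   # 6.7.0
echo "url:     ${PLUGIN[2]}"
```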
11  helmfile/cloud-sdk/charts/kibana/ci/pvc.yaml  Normal file

@@ -0,0 +1,11 @@
---
persistentVolumeClaim:
  # set to true to use pvc
  enabled: true
  # set to true to use your own pvc
  existingClaim: false
  annotations: {}

  accessModes:
    - ReadWriteOnce
  size: "5Gi"
@@ -0,0 +1,6 @@
---
securityContext:
  enabled: true
  allowPrivilegeEscalation: false
  runAsUser: 1000
  fsGroup: 2000
4  helmfile/cloud-sdk/charts/kibana/ci/service-values.yaml  Normal file

@@ -0,0 +1,4 @@
---
service:
  selector:
    foo: bar
@@ -0,0 +1,7 @@
---
# enable the dashboard init container with dashboard retrieved from an URL

dashboardImport:
  enabled: true
  dashboards:
    k8s: https://raw.githubusercontent.com/monotek/kibana-dashboards/master/k8s-fluentd-elasticsearch.json
23  helmfile/cloud-sdk/charts/kibana/templates/NOTES.txt  Normal file

@@ -0,0 +1,23 @@

THE CHART HAS BEEN DEPRECATED!

Find the new official version @ https://github.com/elastic/helm-charts/tree/master/kibana

To verify that {{ template "kibana.fullname" . }} has started, run:

  kubectl --namespace={{ .Release.Namespace }} get pods -l "app={{ template "kibana.name" . }}"

Kibana can be accessed:

  * From outside the cluster, run these commands in the same shell:
{{- if contains "NodePort" .Values.service.type }}

    export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "kibana.fullname" . }})
    export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
    echo http://$NODE_IP:$NODE_PORT
{{- else if contains "ClusterIP" .Values.service.type }}

    export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "kibana.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
    echo "Visit http://127.0.0.1:5601 to use Kibana"
    kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME 5601:5601
{{- end }}
40  helmfile/cloud-sdk/charts/kibana/templates/_helpers.tpl  Normal file

@@ -0,0 +1,40 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "kibana.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "kibana.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- printf .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}

{{/*
Create the name of the service account to use
*/}}
{{- define "kibana.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "kibana.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{- if .Values.serviceAccountName -}}
{{- .Values.serviceAccountName }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{- end -}}
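As a rough illustration of what the `kibana.fullname` helper above computes in the common case (release name does not contain the chart name), here is a bash sketch of the same join, truncate, and trim sequence; the release and chart names are example values:

```shell
#!/usr/bin/env bash
# Join release and chart name, truncate to 63 characters (the DNS name
# limit mentioned in the template comment), and strip a trailing "-",
# mirroring trunc 63 | trimSuffix "-" in kibana.fullname.
release="my-release"
name="kibana"
full="${release}-${name}"
full="${full:0:63}"   # trunc 63
full="${full%-}"      # trimSuffix "-"
echo "$full"          # my-release-kibana
```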
@@ -0,0 +1,67 @@
{{- if .Values.dashboardImport.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "kibana.fullname" . }}-importscript
  labels:
    app: {{ template "kibana.name" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  dashboardImport.sh: |
    #!/usr/bin/env bash
    #
    # kibana dashboard import script
    #

    cd /kibanadashboards

    echo "Starting Kibana..."

    /usr/local/bin/kibana-docker $@ &

    echo "Waiting up to {{ .Values.dashboardImport.timeout }} seconds for Kibana to get in green overall state..."

    for i in {1..{{ .Values.dashboardImport.timeout }}}; do
      curl -s localhost:5601{{ .Values.dashboardImport.basePath }}/api/status | python -c 'import sys, json; print json.load(sys.stdin)["status"]["overall"]["state"]' 2> /dev/null | grep green > /dev/null && break || sleep 1
    done

    for DASHBOARD_FILE in *; do
      echo -e "Importing ${DASHBOARD_FILE} dashboard..."

      if ! python -c 'import sys, json; print json.load(sys.stdin)' < "${DASHBOARD_FILE}" &> /dev/null ; then
        echo "${DASHBOARD_FILE} is not valid JSON, assuming it's an URL..."
        TMP_FILE="$(mktemp)"
        curl -s $(cat ${DASHBOARD_FILE}) > ${TMP_FILE}
        curl -v {{ if .Values.dashboardImport.xpackauth.enabled }}--user {{ .Values.dashboardImport.xpackauth.username }}:{{ .Values.dashboardImport.xpackauth.password }}{{ end }} -s --connect-timeout 60 --max-time 60 -XPOST localhost:5601{{ .Values.dashboardImport.basePath }}/api/kibana/dashboards/import?force=true -H 'kbn-xsrf:true' -H 'Content-type:application/json' -d @${TMP_FILE}
        rm ${TMP_FILE}
      else
        echo "Valid JSON found in ${DASHBOARD_FILE}, importing..."
        curl -v {{ if .Values.dashboardImport.xpackauth.enabled }}--user {{ .Values.dashboardImport.xpackauth.username }}:{{ .Values.dashboardImport.xpackauth.password }}{{ end }} -s --connect-timeout 60 --max-time 60 -XPOST localhost:5601{{ .Values.dashboardImport.basePath }}/api/kibana/dashboards/import?force=true -H 'kbn-xsrf:true' -H 'Content-type:application/json' -d @./${DASHBOARD_FILE}
      fi

      if [ "$?" != "0" ]; then
        echo -e "\nImport of ${DASHBOARD_FILE} dashboard failed... Exiting..."
        exit 1
      else
        echo -e "\nImport of ${DASHBOARD_FILE} dashboard finished :-)"
      fi

    done
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "kibana.fullname" . }}-dashboards
  labels:
    app: {{ template "kibana.name" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
{{- range $key, $value := .Values.dashboardImport.dashboards }}
  {{ $key }}: |-
{{ $value | indent 4 }}
{{- end -}}
{{- end -}}
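The `dashboardImport.sh` script above decides between "import directly" and "download first" by piping each dashboard file through a Python 2 `json.load` one-liner. A sketch of the same check using `python3` instead (the file contents here are made up for illustration):

```shell
#!/usr/bin/env bash
# Succeed if the file parses as JSON, as dashboardImport.sh does.
# The original script uses a Python 2 print statement; python3 shown here.
is_json() { python3 -c 'import sys, json; json.load(sys.stdin)' < "$1" 2>/dev/null; }

f="$(mktemp)"
echo '{"version": "6.7.0", "objects": []}' > "$f"
if is_json "$f"; then
  echo "valid JSON, import directly"
else
  echo "not JSON, assume it is a URL and download it first"
fi
rm "$f"
```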
14  helmfile/cloud-sdk/charts/kibana/templates/configmap.yaml  Normal file

@@ -0,0 +1,14 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "kibana.fullname" . }}
  labels:
    app: {{ template "kibana.name" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
{{- range $key, $value := .Values.files }}
  {{ $key }}: |
{{ toYaml $value | default "{}" | indent 4 }}
{{- end -}}
251  helmfile/cloud-sdk/charts/kibana/templates/deployment.yaml  Normal file

@@ -0,0 +1,251 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ template "kibana.name" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "kibana.fullname" . }}
{{- if .Values.deployment.annotations }}
  annotations:
{{ toYaml .Values.deployment.annotations | indent 4 }}
{{- end }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "kibana.name" . }}
      release: {{ .Release.Name }}
  revisionHistoryLimit: {{ .Values.revisionHistoryLimit }}
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
      labels:
        app: {{ template "kibana.name" . }}
        release: "{{ .Release.Name }}"
{{- if .Values.podLabels }}
{{ toYaml .Values.podLabels | indent 8 }}
{{- end }}
    spec:
      serviceAccountName: {{ template "kibana.serviceAccountName" . }}
{{- if .Values.priorityClassName }}
      priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
{{- if or (.Values.initContainers) (.Values.dashboardImport.enabled) (.Values.plugins.enabled) }}
      initContainers:
{{- if .Values.initContainers }}
{{- range $key, $value := .Values.initContainers }}
      - name: {{ $key | quote }}
{{ toYaml $value | indent 8 }}
{{- end }}
{{- end }}
{{- if .Values.dashboardImport.enabled }}
      - name: {{ .Chart.Name }}-dashboardimport
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        command: ["/bin/bash"]
        args:
          - "-c"
          - "/tmp/dashboardImport.sh"
{{- if .Values.commandline.args }}
{{ toYaml .Values.commandline.args | indent 10 }}
{{- end }}
        env:
{{- range $key, $value := .Values.env }}
        - name: {{ $key | quote }}
          value: {{ tpl $value $ | quote }}
{{- end }}
        volumeMounts:
        - name: {{ template "kibana.fullname" . }}-dashboards
          mountPath: "/kibanadashboards"
        - name: {{ template "kibana.fullname" . }}-importscript
          mountPath: "/tmp/dashboardImport.sh"
          subPath: dashboardImport.sh
{{- range $configFile := (keys .Values.files) }}
        - name: {{ template "kibana.name" $ }}
          mountPath: "/usr/share/kibana/config/{{ $configFile }}"
          subPath: {{ $configFile }}
{{- end }}
{{- end }}
{{- if .Values.plugins.enabled}}
      - name: {{ .Chart.Name }}-plugins-install
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        command:
          - /bin/bash
          - "-c"
          - |
            set -e
            rm -rf plugins/lost+found
            plugins=(
            {{- range .Values.plugins.values }}
            {{ . }}
            {{- end }}
            )
            if {{ .Values.plugins.reset }}
            then
              for p in $(./bin/kibana-plugin list | cut -d "@" -f1)
              do
                ./bin/kibana-plugin remove ${p}
              done
            fi
            for i in "${plugins[@]}"
            do
              IFS=',' read -ra PLUGIN <<< "$i"
              pluginInstalledCheck=$(./bin/kibana-plugin list | grep "${PLUGIN[0]}" | cut -d '@' -f1 || true)
              pluginVersionCheck=$(./bin/kibana-plugin list | grep "${PLUGIN[0]}" | cut -d '@' -f2 || true)
              if [ "${pluginInstalledCheck}" = "${PLUGIN[0]}" ]
              then
                if [ "${pluginVersionCheck}" != "${PLUGIN[1]}" ]
                then
                  ./bin/kibana-plugin remove "${PLUGIN[0]}"
                  ./bin/kibana-plugin install "${PLUGIN[2]}"
                fi
              else
                ./bin/kibana-plugin install "${PLUGIN[2]}"
              fi
            done
        env:
{{- range $key, $value := .Values.env }}
        - name: {{ $key | quote }}
          value: {{ tpl $value $ | quote }}
{{- end }}
        volumeMounts:
        - name: plugins
          mountPath: /usr/share/kibana/plugins
{{- range $configFile := (keys .Values.files) }}
        - name: {{ template "kibana.name" $ }}
          mountPath: "/usr/share/kibana/config/{{ $configFile }}"
          subPath: {{ $configFile }}
{{- end }}
{{- if .Values.securityContext.enabled }}
        securityContext:
          allowPrivilegeEscalation: {{ .Values.securityContext.allowPrivilegeEscalation }}
{{- end }}
{{- end }}
{{- end }}
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- if .Values.commandline.args }}
        args:
          - "/bin/bash"
          - "/usr/local/bin/kibana-docker"
{{ toYaml .Values.commandline.args | indent 10 }}
{{- end }}
        env:
{{- range $key, $value := .Values.env }}
        - name: {{ $key | quote }}
          value: {{ tpl $value $ | quote }}
{{- end }}
{{- if .Values.envFromSecrets }}
{{- range $key,$value := .Values.envFromSecrets }}
        - name: {{ $key | upper | quote}}
          valueFrom:
            secretKeyRef:
              name: {{ $value.from.secret | quote}}
              key: {{ $value.from.key | quote}}
{{- end }}
{{- end }}
{{- if (not .Values.authProxyEnabled) }}
        ports:
        - containerPort: {{ .Values.service.internalPort }}
          name: {{ template "kibana.name" . }}
          protocol: TCP
{{- end }}
{{- if .Values.livenessProbe.enabled }}
        livenessProbe:
          httpGet:
            path: {{ .Values.livenessProbe.path }}
            port: {{ .Values.service.internalPort }}
          initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
{{- end }}
{{- if .Values.readinessProbe.enabled }}
        readinessProbe:
          httpGet:
            path: {{ .Values.readinessProbe.path }}
            port: {{ .Values.service.internalPort }}
          initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
          successThreshold: {{ .Values.readinessProbe.successThreshold }}
          periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
{{- end }}
        resources:
{{ toYaml .Values.resources | indent 10 }}
        volumeMounts:
{{- range $configFile := (keys .Values.files) }}
        - name: {{ template "kibana.name" $ }}
          mountPath: "/usr/share/kibana/config/{{ $configFile }}"
          subPath: {{ $configFile }}
{{- end }}
{{- if .Values.extraVolumeMounts }}
{{ toYaml .Values.extraVolumeMounts | indent 8 }}
{{- end }}
{{- if .Values.plugins.enabled}}
        - name: plugins
          mountPath: /usr/share/kibana/plugins
{{- end }}
{{- with .Values.extraContainers }}
{{ tpl . $ | indent 6 }}
{{- end }}
{{- range .Values.extraConfigMapMounts }}
        - name: {{ .name }}
          mountPath: {{ .mountPath }}
          subPath: {{ .subPath }}
{{- end }}
{{- if .Values.image.pullSecrets }}
      imagePullSecrets:
{{ toYaml .Values.image.pullSecrets | indent 8 }}
{{- end }}
{{- if .Values.affinity }}
      affinity:
{{ toYaml .Values.affinity | indent 8 }}
{{- end }}
{{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
      tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- if .Values.securityContext.enabled }}
      securityContext:
        runAsUser: {{ .Values.securityContext.runAsUser }}
        fsGroup: {{ .Values.securityContext.fsGroup }}
{{- end }}
      volumes:
      - name: {{ template "kibana.name" . }}
        configMap:
          name: {{ template "kibana.fullname" . }}
{{- if .Values.plugins.enabled}}
      - name: plugins
{{- if .Values.persistentVolumeClaim.enabled }}
        persistentVolumeClaim:
          claimName: {{ template "kibana.fullname" . }}
{{- else }}
        emptyDir: {}
{{- end }}
{{- end }}
{{- if .Values.dashboardImport.enabled }}
      - name: {{ template "kibana.fullname" . }}-dashboards
        configMap:
          name: {{ template "kibana.fullname" . }}-dashboards
      - name: {{ template "kibana.fullname" . }}-importscript
        configMap:
          name: {{ template "kibana.fullname" . }}-importscript
          defaultMode: 0777
{{- end }}
{{- range .Values.extraConfigMapMounts }}
      - name: {{ .name }}
        configMap:
          name: {{ .configMap }}
{{- end }}
{{- if .Values.extraVolumes }}
{{ toYaml .Values.extraVolumes | indent 8 }}
{{- end }}
36
helmfile/cloud-sdk/charts/kibana/templates/ingress.yaml
Normal file
@@ -0,0 +1,36 @@
{{- if .Values.ingress.enabled -}}
{{- $serviceName := include "kibana.fullname" . -}}
{{- $servicePort := .Values.service.externalPort -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    app: {{ template "kibana.name" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "kibana.fullname" . }}
  annotations:
    {{- range $key, $value := .Values.ingress.annotations }}
    {{ $key }}: {{ $value | quote }}
    {{- end }}
spec:
  rules:
  {{- range .Values.ingress.hosts }}
  {{- $url := splitList "/" . }}
  - host: {{ first $url }}
    http:
      paths:
      - path: /{{ rest $url | join "/" }}
        pathType: Prefix
        backend:
          service:
            name: {{ $serviceName }}
            port:
              number: {{ $servicePort }}
  {{- end -}}
  {{- if .Values.ingress.tls }}
  tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
  {{- end -}}
{{- end -}}
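The ingress template above derives the host and path from a single `hosts` entry (e.g. `localhost.localdomain/kibana`) with `splitList "/"`, `first`, and `rest`. A minimal sketch of that splitting logic, as a hypothetical Python helper written only for illustration:

```python
# Mirrors the template's splitList "/" host/path handling.
# `split_ingress_host` is a hypothetical name, not part of the chart.
def split_ingress_host(entry: str) -> tuple[str, str]:
    parts = entry.split("/")
    host = parts[0]                       # {{ first $url }}
    path = "/" + "/".join(parts[1:])      # /{{ rest $url | join "/" }}
    return host, path

print(split_ingress_host("localhost.localdomain/kibana"))
# → ('localhost.localdomain', '/kibana')
print(split_ingress_host("kibana.localhost.localdomain"))
# → ('kibana.localhost.localdomain', '/')
```

An entry without a `/` therefore serves Kibana at the root path of that host.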
56
helmfile/cloud-sdk/charts/kibana/templates/service.yaml
Normal file
@@ -0,0 +1,56 @@
apiVersion: v1
kind: Service
metadata:
  labels:
    app: {{ template "kibana.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
{{- range $key, $value := .Values.service.labels }}
    {{ $key }}: {{ $value | quote }}
{{- end }}
  name: {{ template "kibana.fullname" . }}
{{- with .Values.service.annotations }}
  annotations:
{{- range $key, $value := . }}
    {{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
{{- if .Values.service.loadBalancerSourceRanges }}
  loadBalancerSourceRanges:
{{- range $cidr := .Values.service.loadBalancerSourceRanges }}
    - {{ $cidr }}
{{- end }}
{{- end }}
  type: {{ .Values.service.type }}
{{- if and (eq .Values.service.type "ClusterIP") .Values.service.clusterIP }}
  clusterIP: {{ .Values.service.clusterIP }}
{{- end }}
  ports:
    - port: {{ .Values.service.externalPort }}
{{- if not .Values.authProxyEnabled }}
      targetPort: {{ .Values.service.internalPort }}
{{- else }}
      targetPort: {{ .Values.service.authProxyPort }}
{{- end }}
      protocol: TCP
{{ if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
      nodePort: {{ .Values.service.nodePort }}
{{ end }}
{{- if .Values.service.portName }}
      name: {{ .Values.service.portName }}
{{- end }}
{{- if .Values.service.externalIPs }}
  externalIPs:
{{ toYaml .Values.service.externalIPs | indent 4 }}
{{- end }}
  selector:
    app: {{ template "kibana.name" . }}
    release: {{ .Release.Name }}
{{- range $key, $value := .Values.service.selector }}
    {{ $key }}: {{ $value | quote }}
{{- end }}
{{- if .Values.service.loadBalancerIP }}
  loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
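The Service template routes `externalPort` either to Kibana's own port or, when `authProxyEnabled` is set, to the auth-proxy sidecar port defined via `extraContainers`. A minimal sketch of that selection, assuming the values shape from values.yaml (`resolve_target_port` is a hypothetical helper, not chart code):

```python
# Mirrors service.yaml's targetPort branch: auth proxy port when
# authProxyEnabled, otherwise Kibana's internalPort.
def resolve_target_port(values: dict) -> int:
    service = values["service"]
    if values.get("authProxyEnabled"):
        return service["authProxyPort"]
    return service["internalPort"]

print(resolve_target_port({"service": {"internalPort": 5601, "authProxyPort": 5602}}))
# → 5601
```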
@@ -0,0 +1,11 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ template "kibana.serviceAccountName" . }}
  labels:
    app: {{ template "kibana.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
{{- end -}}
@@ -0,0 +1,37 @@
{{- if .Values.testFramework.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "kibana.fullname" . }}-test
  labels:
    app: {{ template "kibana.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    heritage: "{{ .Release.Service }}"
    release: "{{ .Release.Name }}"
data:
  run.sh: |-
    @test "Test Status" {
    {{- if .Values.service.selector }}
      skip "Can't guarantee pod names with selector"
    {{- else }}
      {{- $port := .Values.service.externalPort }}
      url="http://{{ template "kibana.fullname" . }}{{ if $port }}:{{ $port }}{{ end }}/api{{ .Values.livenessProbe.path }}"

      # retry for 1 minute
      run curl -s -o /dev/null -I -w "%{http_code}" --retry 30 --retry-delay 2 $url

      code=$(curl -s -o /dev/null -I -w "%{http_code}" $url)
      body=$(curl $url)
      if [ "$code" == "503" ]
      then
        skip "Kibana Unavailable (503), can't get status - see pod logs: $body"
      fi

      result=$(echo $body | jq -cr '.status.statuses[]')
      [ "$result" != "" ]

      result=$(echo $body | jq -cr '.status.statuses[] | select(.state != "green")')
      [ "$result" == "" ]
    {{- end }}
    }
{{- end }}
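The bats test above asserts two things with jq: the status API returns at least one entry under `.status.statuses`, and no entry has a non-green state. The same check can be sketched in Python (the `kibana_all_green` helper is hypothetical, written only to illustrate the jq logic):

```python
import json

# Mirrors run.sh's two jq assertions: statuses must be non-empty,
# and filtering for state != "green" must yield nothing.
def kibana_all_green(body: str) -> bool:
    statuses = json.loads(body)["status"]["statuses"]
    non_green = [s for s in statuses if s["state"] != "green"]
    return bool(statuses) and not non_green

sample = '{"status": {"statuses": [{"state": "green"}, {"state": "green"}]}}'
print(kibana_all_green(sample))  # → True
```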
44
helmfile/cloud-sdk/charts/kibana/templates/tests/test.yaml
Normal file
@@ -0,0 +1,44 @@
{{- if .Values.testFramework.enabled }}
apiVersion: v1
kind: Pod
metadata:
  name: {{ template "kibana.fullname" . }}-test
  labels:
    app: {{ template "kibana.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    heritage: "{{ .Release.Service }}"
    release: "{{ .Release.Name }}"
  annotations:
    "helm.sh/hook": test-success
spec:
  initContainers:
    - name: test-framework
      image: "{{ .Values.testFramework.image }}:{{ .Values.testFramework.tag }}"
      command:
        - "bash"
        - "-c"
        - |
          set -ex
          # copy bats to tools dir
          cp -R /usr/local/libexec/ /tools/bats/
      volumeMounts:
        - mountPath: /tools
          name: tools
  containers:
    - name: {{ .Release.Name }}-test
      image: "dwdraju/alpine-curl-jq"
      command: ["/tools/bats/bats", "-t", "/tests/run.sh"]
      volumeMounts:
        - mountPath: /tests
          name: tests
          readOnly: true
        - mountPath: /tools
          name: tools
  volumes:
    - name: tests
      configMap:
        name: {{ template "kibana.fullname" . }}-test
    - name: tools
      emptyDir: {}
  restartPolicy: Never
{{- end }}
31
helmfile/cloud-sdk/charts/kibana/templates/volume-claim.yaml
Normal file
@@ -0,0 +1,31 @@
{{- if and .Values.plugins.enabled .Values.persistentVolumeClaim.enabled -}}
{{- if not .Values.persistentVolumeClaim.existingClaim -}}
apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
{{- if .Values.persistentVolumeClaim.annotations }}
  annotations:
{{ toYaml .Values.persistentVolumeClaim.annotations | indent 4 }}
{{- end }}
  labels:
    app: {{ template "kibana.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    component: "{{ .Values.persistentVolumeClaim.name }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "kibana.fullname" . }}
spec:
  accessModes:
{{ toYaml .Values.persistentVolumeClaim.accessModes | indent 4 }}
{{- if .Values.persistentVolumeClaim.storageClass }}
{{- if (eq "-" .Values.persistentVolumeClaim.storageClass) }}
  storageClassName: ""
{{- else }}
  storageClassName: "{{ .Values.persistentVolumeClaim.storageClass }}"
{{- end }}
{{- end }}
  resources:
    requests:
      storage: "{{ .Values.persistentVolumeClaim.size }}"
{{- end -}}
{{- end -}}
241
helmfile/cloud-sdk/charts/kibana/values.yaml
Normal file
@@ -0,0 +1,241 @@
image:
  repository: "docker.elastic.co/kibana/kibana-oss"
  tag: "6.7.0"
  pullPolicy: "IfNotPresent"

testFramework:
  enabled: "true"
  image: "dduportal/bats"
  tag: "0.4.0"

commandline:
  args: []

env: {}
## All Kibana configuration options are adjustable via env vars.
## To map a config option to an env var, uppercase it and replace `.` with `_`.
## Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
## For kibana < 6.6, use ELASTICSEARCH_URL instead
# ELASTICSEARCH_HOSTS: http://elasticsearch-client:9200
# SERVER_PORT: 5601
# LOGGING_VERBOSE: "true"
# SERVER_DEFAULTROUTE: "/app/kibana"

envFromSecrets: {}
## Create a secret manually and reference it here to inject environment variables
# ELASTICSEARCH_USERNAME:
#   from:
#     secret: secret-name-here
#     key: ELASTICSEARCH_USERNAME
# ELASTICSEARCH_PASSWORD:
#   from:
#     secret: secret-name-here
#     key: ELASTICSEARCH_PASSWORD

files:
  kibana.yml:
    ## Default Kibana configuration from kibana-docker.
    server.name: kibana
    server.host: "0"
    ## For kibana < 6.6, use elasticsearch.url instead
    elasticsearch.hosts: http://elasticsearch:9200

    ## Custom config properties below
    ## Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
    # server.port: 5601
    # logging.verbose: "true"
    # server.defaultRoute: "/app/kibana"

deployment:
  annotations: {}

service:
  type: ClusterIP
  # clusterIP: None
  # portName: kibana-svc
  externalPort: 443
  internalPort: 5601
  # authProxyPort: 5602  # to be used with authProxyEnabled and a proxy extraContainer
  ## External IP addresses of service
  ## Default: nil
  ##
  # externalIPs:
  # - 192.168.0.1
  #
  ## LoadBalancer IP if service.type is LoadBalancer
  ## Default: nil
  ##
  # loadBalancerIP: 10.2.2.2
  annotations: {}
  # Annotation example: setup ssl with aws cert when service.type is LoadBalancer
  # service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:EXAMPLE_CERT
  labels: {}
  ## Label example: show service URL in `kubectl cluster-info`
  # kubernetes.io/cluster-service: "true"
  ## Limit load balancer source ips to list of CIDRs (where available)
  # loadBalancerSourceRanges: []
  selector: {}

ingress:
  enabled: false
  # hosts:
  #   - kibana.localhost.localdomain
  #   - localhost.localdomain/kibana
  # annotations:
  #   kubernetes.io/ingress.class: nginx
  #   kubernetes.io/tls-acme: "true"
  # tls:
  #   - secretName: chart-example-tls
  #     hosts:
  #       - chart-example.local

serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template.
  # If set and create is false, the service account must already exist.
  name:

livenessProbe:
  enabled: false
  path: /status
  initialDelaySeconds: 30
  timeoutSeconds: 10

readinessProbe:
  enabled: false
  path: /status
  initialDelaySeconds: 30
  timeoutSeconds: 10
  periodSeconds: 10
  successThreshold: 5

# Enable an authproxy. Specify the container in extraContainers.
authProxyEnabled: false

extraContainers: |
# - name: proxy
#   image: quay.io/gambol99/keycloak-proxy:latest
#   args:
#     - --resource=uri=/*
#     - --discovery-url=https://discovery-url
#     - --client-id=client
#     - --client-secret=secret
#     - --listen=0.0.0.0:5602
#     - --upstream-url=http://127.0.0.1:5601
#   ports:
#     - name: web
#       containerPort: 9090

extraVolumeMounts: []

extraVolumes: []

resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 300Mi
  # requests:
  #   cpu: 100m
  #   memory: 300Mi

priorityClassName: ""

# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# affinity: {}

# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}

podAnnotations: {}
replicaCount: 1
revisionHistoryLimit: 3

# Custom labels for pod assignment
podLabels: {}

# To export a dashboard from a running Kibana 6.3.x use:
# curl --user <username>:<password> -XGET https://kibana.yourdomain.com:5601/api/kibana/dashboards/export?dashboard=<some-dashboard-uuid> > my-dashboard.json
# A dashboard is defined by a name and a string with the json payload or the download url
dashboardImport:
  enabled: false
  timeout: 60
  basePath: /
  xpackauth:
    enabled: false
    username: myuser
    password: mypass
  dashboards: {}
    # k8s: https://raw.githubusercontent.com/monotek/kibana-dashboards/master/k8s-fluentd-elasticsearch.json

# List of plugins to install using an initContainer
# NOTE: low resource constraints on the chart combined with plugins are likely not going to work well.
plugins:
  # set to true to enable plugins installation
  enabled: false
  # set to true to remove all kibana plugins before installation
  reset: false
  # Use <plugin_name,version,url> to add/upgrade a plugin
  values:
    # - elastalert-kibana-plugin,1.0.1,https://github.com/bitsensor/elastalert-kibana-plugin/releases/download/1.0.1/elastalert-kibana-plugin-1.0.1-6.4.2.zip
    # - logtrail,0.1.31,https://github.com/sivasamyk/logtrail/releases/download/v0.1.31/logtrail-6.6.0-0.1.31.zip
    # - other_plugin

persistentVolumeClaim:
  # set to true to use a pvc
  enabled: false
  # set to true to use your own pvc
  existingClaim: false
  annotations: {}

  accessModes:
    - ReadWriteOnce
  size: "5Gi"
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner. (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"

# default security context
securityContext:
  enabled: false
  allowPrivilegeEscalation: false
  runAsUser: 1000
  fsGroup: 2000

extraConfigMapMounts: []
  # - name: logtrail-configs
  #   configMap: kibana-logtrail
  #   mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json
  #   subPath: logtrail.json

# Add your own init container or uncomment and modify the given example.
initContainers: {}
  ## Don't start kibana till Elasticsearch is reachable.
  ## Ensure that it is available at http://elasticsearch:9200
  ##
  # es-check:  # <- will be used as the container name
  #   image: "appropriate/curl:latest"
  #   imagePullPolicy: "IfNotPresent"
  #   command:
  #     - "/bin/sh"
  #     - "-c"
  #     - |
  #       is_down=true
  #       while "$is_down"; do
  #         if curl -sSf --fail-early --connect-timeout 5 http://elasticsearch:9200; then
  #           is_down=false
  #         else
  #           sleep 5
  #         fi
  #       done
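The env-var convention noted in values.yaml (uppercase the kibana.yml setting and replace `.` with `_`) can be sketched as a one-line transformation (`to_env_var` is a hypothetical name for illustration):

```python
# Mirrors the convention from the values.yaml comments:
# "elasticsearch.hosts" -> "ELASTICSEARCH_HOSTS"
def to_env_var(setting: str) -> str:
    return setting.upper().replace(".", "_")

print(to_env_var("elasticsearch.hosts"))   # → ELASTICSEARCH_HOSTS
print(to_env_var("server.defaultRoute"))   # → SERVER_DEFAULTROUTE
```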
@@ -1,5 +1,5 @@
 ---
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: prometheus-operator-grafana-oauth
@@ -15,6 +15,9 @@ spec:
     http:
       paths:
       - path: /oauth2
+        pathType: Prefix
         backend:
-          serviceName: oauth2-proxy
-          servicePort: 4180
+          service:
+            name: oauth2-proxy
+            port:
+              number: 4180
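The same v1beta1-to-v1 Ingress migration is applied by hand to each oauth ingress below: the flat `serviceName`/`servicePort` backend becomes a nested `service` object with a `port.number`. A sketch of that reshaping as a dict transformation (hypothetical helper, for illustration only):

```python
# Reshapes a networking.k8s.io/v1beta1 ingress backend into the v1 form
# used throughout this PR.
def migrate_backend(old: dict) -> dict:
    return {
        "service": {
            "name": old["serviceName"],
            "port": {"number": old["servicePort"]},
        }
    }

print(migrate_backend({"serviceName": "oauth2-proxy", "servicePort": 4180}))
# → {'service': {'name': 'oauth2-proxy', 'port': {'number': 4180}}}
```

Note that v1 also requires `pathType` on each path, which is why `pathType: Prefix` is added alongside the backend change.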
@@ -1,5 +1,5 @@
 ---
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: k8s-dashboard-kubernetes-dashboard-oauth
@@ -15,6 +15,9 @@ spec:
     http:
       paths:
       - path: /oauth2
+        pathType: Prefix
         backend:
-          serviceName: oauth2-proxy
-          servicePort: 4180
+          service:
+            name: oauth2-proxy
+            port:
+              number: 4180
@@ -1,5 +1,5 @@
 ---
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: kibana-oauth
@@ -15,6 +15,9 @@ spec:
     http:
       paths:
       - path: /oauth2
+        pathType: Prefix
         backend:
-          serviceName: oauth2-proxy
-          servicePort: 4180
+          service:
+            name: oauth2-proxy
+            port:
+              number: 4180
@@ -8,4 +8,4 @@ metadata:
     k8s-app: oauth2-proxy
 spec:
   type: ExternalName
-  externalName: oauth2-proxy.{{ .Values.proxy.namespace }}.svc.cluster.local
+  externalName: oauth2-proxy.{{ .Values.proxy.namespace }}.svc.cluster.local
@@ -1,5 +1,5 @@
 ---
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: prometheus-operator-prometheus-oauth
@@ -15,6 +15,9 @@ spec:
     http:
       paths:
       - path: /oauth2
+        pathType: Prefix
         backend:
-          serviceName: oauth2-proxy
-          servicePort: 4180
+          service:
+            name: oauth2-proxy
+            port:
+              number: 4180
@@ -1,2 +1,4 @@
 monitoring:
   namespace: monitoring
+proxy:
+  namespace: kube-system
@@ -22,4 +22,12 @@ expanderPriorities: |-
   10:
     - .*spot.*
   0:
-    - .*
+    - .*
+
+resources:
+  limits:
+    cpu: 100m
+    memory: 200Mi
+  requests:
+    cpu: 100m
+    memory: 200Mi
@@ -2,7 +2,7 @@ version: v0.139.7
 dependencies:
 - name: actions-runner-controller
   repository: https://actions-runner-controller.github.io/actions-runner-controller
-  version: 0.16.1
+  version: 0.19.1
 - name: aws-load-balancer-controller
   repository: https://aws.github.io/eks-charts
   version: 1.4.2
@@ -23,19 +23,16 @@ dependencies:
   version: 2.2.3
 - name: external-dns
   repository: https://charts.bitnami.com/bitnami
-  version: 6.0.2
+  version: 6.1.0
 - name: fluentd-elasticsearch
   repository: https://kokuwaio.github.io/helm-charts
   version: 13.1.0
 - name: influxdb2
   repository: https://helm.influxdata.com
-  version: 2.0.1
+  version: 2.0.3
 - name: ingress-nginx
   repository: https://kubernetes.github.io/ingress-nginx
   version: 3.4.0
-- name: kibana
-  repository: https://charts.helm.sh/stable
-  version: 3.2.8
 - name: kube-prometheus-stack
   repository: https://prometheus-community.github.io/helm-charts
   version: 30.0.0
@@ -51,5 +48,5 @@ dependencies:
 - name: tigera-operator
   repository: https://projectcalico.docs.tigera.io/charts
   version: v3.22.2
-digest: sha256:b1881ad1d9967f1cc65ba31946be63f56329dc55920c8e541e16b16a105c9307
-generated: "2022-05-27T12:53:47.055630086+03:00"
+digest: sha256:289fecc40f3cfb7c7ef4d458d313295cce9d950559391d51b973775333f4f07f
+generated: "2022-06-23T14:05:20.599854754+03:00"
@@ -133,7 +133,7 @@ releases:
     <<: *default
     <<: *external-dns
     chart: bitnami/external-dns
-    version: 6.0.2
+    version: 6.1.0
     labels:
       role: setup
       group: system
@@ -389,7 +389,7 @@ releases:
     - monitoring:
         namespace: {{ .Environment.Values.monitoring.namespace }}
         domain: {{ .Environment.Values.domain }}
-        proxy:
+    - proxy:
         namespace: kube-system
 
   - name: prometheus-operator-ingress-auth
@@ -514,7 +514,7 @@ releases:
   - name: kibana
     condition: kibana.enabled
     namespace: {{ .Environment.Values.monitoring.namespace }}
-    chart: stable/kibana
+    chart: charts/kibana
     labels:
       role: setup
       group: monitoring
@@ -848,7 +848,7 @@ releases:
   - name: influxdb
     namespace: test-bss
     chart: influxdata/influxdb2
-    version: 2.0.1
+    version: 2.0.3
     condition: influxdb.enabled
     labels:
       role: setup
@@ -870,6 +870,13 @@ releases:
         size: 10Gi
     - service:
         type: NodePort
+    - resources:
+        limits:
+          cpu: 500m
+          memory: 500Mi
+        requests:
+          cpu: 500m
+          memory: 500Mi
     - ingress:
         enabled: true
         annotations:
@@ -888,7 +895,7 @@ releases:
     condition: actions-runner-controller.enabled
     namespace: actions-runner-controller
     chart: actions-runner-controller/actions-runner-controller
-    version: 0.16.1
+    version: 0.19.1
     labels:
       app: actions-runner-controller
     values: