Mirror of https://github.com/Telecominfraproject/wlan-toolsmith.git (synced 2025-10-29 18:12:20 +00:00)
[WIFI-11295] elasticsearch-client crashing in restart loop (#222)
* Increase heap size for elasticsearch-client
* Adapt resource limits after heapsize increase
* Switch to local Elasticsearch chart and increase timeoutSeconds
* Increase CPU limit for elasticsearch-data pods
* Commit Terraform lock file for last PR

Signed-off-by: Johann Hoffmann <johann.hoffmann@mailbox.org>
helmfile/cloud-sdk/charts/elasticsearch/.helmignore (new file)
@@ -0,0 +1,3 @@
.git
# OWNERS file for Kubernetes
OWNERS
helmfile/cloud-sdk/charts/elasticsearch/Chart.yaml (new executable file)
@@ -0,0 +1,18 @@
apiVersion: v1
name: elasticsearch
home: https://www.elastic.co/products/elasticsearch
version: 1.32.5
appVersion: 6.8.6
# The elasticsearch chart is deprecated and no longer maintained. For details on the deprecation, see the PROCESSES.md file.
deprecated: true
description: DEPRECATED Flexible and powerful open source, distributed real-time search and analytics engine.
icon: https://static-www.elastic.co/assets/blteb1c97719574938d/logo-elastic-elasticsearch-lt.svg
sources:
- https://www.elastic.co/products/elasticsearch
- https://github.com/jetstack/elasticsearch-pet
- https://github.com/giantswarm/kubernetes-elastic-stack
- https://github.com/GoogleCloudPlatform/elasticsearch-docker
- https://github.com/clockworksoul/helm-elasticsearch
- https://github.com/pires/kubernetes-elasticsearch-cluster
maintainers: []
helmfile/cloud-sdk/charts/elasticsearch/README.md (new file)
@@ -0,0 +1,278 @@
# Elasticsearch Helm Chart

This chart uses a standard Docker image of Elasticsearch (docker.elastic.co/elasticsearch/elasticsearch-oss) and uses a service pointing to the master's transport port for service discovery.
Elasticsearch does not communicate with the Kubernetes API, hence no need for RBAC permissions.
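As an illustration of that discovery mechanism, the non-master nodes point at a Service named after the release with a `-discovery` suffix (see `DISCOVERY_SERVICE` in the templates below). A quick sanity check, assuming the release name `my-release` and the default chart name, might be:

```bash
# Inspect the master transport endpoints used for discovery
# (service name is <fullname>-discovery; adjust for your release/chart name)
$ kubectl get endpoints my-release-elasticsearch-discovery -o wide
```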

## This Helm chart is deprecated

As mentioned in #10543, this chart has been deprecated in favour of the official [Elastic Helm Chart](https://github.com/elastic/helm-charts/tree/master/elasticsearch).
We have made steps towards that goal by producing a [migration guide](https://github.com/elastic/helm-charts/blob/master/elasticsearch/examples/migration/README.md) to help people switch the management of their clusters over to the new Charts.
The Elastic Helm Chart supports versions 6 and 7 of Elasticsearch, and it was decided it would be easier for people to upgrade after migrating to the Elastic Helm Chart because its upgrade process works better.
During the deprecation process we want to make sure that the Elastic Chart covers what people are using this chart for.
Please look at the Elastic Helm Charts and, if you see anything missing, please [open an issue](https://github.com/elastic/helm-charts/issues/new/choose) to let us know what you need.
The Elastic Chart repo is also in [Helm Hub](https://hub.helm.sh).

## Warning for previous users

If you are currently using an earlier version of this Chart, you will need to redeploy your Elasticsearch clusters. The discovery method used here is incompatible with using RBAC.
If you are upgrading to Elasticsearch 6 from the 5.5 version used in this chart before, please note that your cluster needs to do a full cluster restart.
The simplest way to do that is to delete the installation (keep the PVs) and install this chart again with the new version.
If you want to avoid doing that, upgrade to Elasticsearch 5.6 first before moving on to Elasticsearch 6.0.

## Prerequisites Details

* Kubernetes 1.10+
* PV dynamic provisioning support on the underlying infrastructure

## StatefulSets Details
* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/

## StatefulSets Caveats
* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations

## Todo

* Implement TLS/Auth/Security
* Smarter upscaling/downscaling
* Solution for memory locking

## Chart Details
This chart will do the following:

* Implement a dynamically scalable Elasticsearch cluster using Kubernetes StatefulSets/Deployments
* Multi-role deployment: master, client (coordinating) and data nodes
* StatefulSets support scaling down without degrading the cluster

## Installing the Chart

To install the chart with the release name `my-release`:

```bash
$ helm install --name my-release stable/elasticsearch
```

## Deleting the Charts

Delete the Helm deployment as normal

```
$ helm delete my-release
```

Deletion of the StatefulSet doesn't cascade to deleting associated PVCs. To delete them:

```
$ kubectl delete pvc -l release=my-release,component=data
```

## Configuration

The following table lists the configurable parameters of the elasticsearch chart and their default values.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| `appVersion` | Application Version (Elasticsearch) | `6.8.2` |
| `image.repository` | Container image name | `docker.elastic.co/elasticsearch/elasticsearch-oss` |
| `image.tag` | Container image tag | `6.8.2` |
| `image.pullPolicy` | Container pull policy | `IfNotPresent` |
| `image.pullSecrets` | Container image pull secrets | `[]` |
| `initImage.repository` | Init container image name | `busybox` |
| `initImage.tag` | Init container image tag | `latest` |
| `initImage.pullPolicy` | Init container pull policy | `Always` |
| `schedulerName` | Name of the k8s scheduler (other than default) | `nil` |
| `cluster.name` | Cluster name | `elasticsearch` |
| `cluster.xpackEnable` | Writes the X-Pack configuration options to the configuration file | `false` |
| `cluster.config` | Additional cluster config appended | `{}` |
| `cluster.keystoreSecret` | Name of secret holding secure config options in an es keystore | `nil` |
| `cluster.env` | Cluster environment variables | `{MINIMUM_MASTER_NODES: "2"}` |
| `cluster.bootstrapShellCommand` | Post-init command to run in separate Job | `""` |
| `cluster.additionalJavaOpts` | Cluster parameters to be added to `ES_JAVA_OPTS` environment variable | `""` |
| `cluster.plugins` | List of Elasticsearch plugins to install | `[]` |
| `cluster.loggingYml` | Cluster logging configuration for ES v2 | see `values.yaml` for defaults |
| `cluster.log4j2Properties` | Cluster logging configuration for ES v5 and 6 | see `values.yaml` for defaults |
| `client.name` | Client component name | `client` |
| `client.replicas` | Client node replicas (deployment) | `2` |
| `client.resources` | Client node resources requests & limits | `{} - cpu limit must be an integer` |
| `client.priorityClassName` | Client priorityClass | `nil` |
| `client.heapSize` | Client node heap size | `512m` |
| `client.podAnnotations` | Client Deployment annotations | `{}` |
| `client.nodeSelector` | Node labels for client pod assignment | `{}` |
| `client.tolerations` | Client tolerations | `[]` |
| `client.terminationGracePeriodSeconds` | Client nodes: Termination grace period (seconds) | `nil` |
| `client.serviceAnnotations` | Client Service annotations | `{}` |
| `client.serviceType` | Client service type | `ClusterIP` |
| `client.httpNodePort` | Client service HTTP NodePort port number. Has no effect if client.serviceType is not `NodePort`. | `nil` |
| `client.loadBalancerIP` | Client loadBalancerIP | `{}` |
| `client.loadBalancerSourceRanges` | Client loadBalancerSourceRanges | `{}` |
| `client.antiAffinity` | Client anti-affinity policy | `soft` |
| `client.nodeAffinity` | Client node affinity policy | `{}` |
| `client.initResources` | Client initContainer resources requests & limits | `{}` |
| `client.hooks.preStop` | Client nodes: Lifecycle hook script to execute before the pod stops | `nil` |
| `client.hooks.postStart` | Client nodes: Lifecycle hook script to execute after the pod starts | `nil` |
| `client.additionalJavaOpts` | Parameters to be added to `ES_JAVA_OPTS` environment variable for client | `""` |
| `client.ingress.enabled` | Enable Client Ingress | `false` |
| `client.ingress.user` | If this & password are set, enable basic-auth on ingress | `nil` |
| `client.ingress.password` | If this & user are set, enable basic-auth on ingress | `nil` |
| `client.ingress.annotations` | Client Ingress annotations | `{}` |
| `client.ingress.hosts` | Client Ingress Hostnames | `[]` |
| `client.ingress.tls` | Client Ingress TLS configuration | `[]` |
| `client.exposeTransportPort` | Expose transport port 9300 on client service (ClusterIP) | `false` |
| `master.initResources` | Master initContainer resources requests & limits | `{}` |
| `master.additionalJavaOpts` | Parameters to be added to `ES_JAVA_OPTS` environment variable for master | `""` |
| `master.exposeHttp` | Expose http port 9200 on master Pods for monitoring, etc | `false` |
| `master.name` | Master component name | `master` |
| `master.replicas` | Master node replicas (deployment) | `2` |
| `master.resources` | Master node resources requests & limits | `{} - cpu limit must be an integer` |
| `master.priorityClassName` | Master priorityClass | `nil` |
| `master.podAnnotations` | Master Deployment annotations | `{}` |
| `master.nodeSelector` | Node labels for master pod assignment | `{}` |
| `master.tolerations` | Master tolerations | `[]` |
| `master.terminationGracePeriodSeconds` | Master nodes: Termination grace period (seconds) | `nil` |
| `master.heapSize` | Master node heap size | `512m` |
| `master.persistence.enabled` | Master persistent enabled/disabled | `true` |
| `master.persistence.name` | Master statefulset PVC template name | `data` |
| `master.persistence.size` | Master persistent volume size | `4Gi` |
| `master.persistence.storageClass` | Master persistent volume Class | `nil` |
| `master.persistence.accessMode` | Master persistent Access Mode | `ReadWriteOnce` |
| `master.readinessProbe` | Master container readiness probes | see `values.yaml` for defaults |
| `master.antiAffinity` | Master anti-affinity policy | `soft` |
| `master.nodeAffinity` | Master node affinity policy | `{}` |
| `master.podManagementPolicy` | Master pod creation strategy | `OrderedReady` |
| `master.updateStrategy` | Master node update strategy policy | `{type: "onDelete"}` |
| `master.hooks.preStop` | Master nodes: Lifecycle hook script to execute before the pod stops | `nil` |
| `master.hooks.postStart` | Master nodes: Lifecycle hook script to execute after the pod starts | `nil` |
| `data.initResources` | Data initContainer resources requests & limits | `{}` |
| `data.additionalJavaOpts` | Parameters to be added to `ES_JAVA_OPTS` environment variable for data | `""` |
| `data.exposeHttp` | Expose http port 9200 on data Pods for monitoring, etc | `false` |
| `data.replicas` | Data node replicas (statefulset) | `2` |
| `data.resources` | Data node resources requests & limits | `{} - cpu limit must be an integer` |
| `data.priorityClassName` | Data priorityClass | `nil` |
| `data.heapSize` | Data node heap size | `1536m` |
| `data.hooks.drain.enabled` | Data nodes: Enable drain pre-stop and post-start hook | `true` |
| `data.hooks.preStop` | Data nodes: Lifecycle hook script to execute before the pod stops. Ignored if `data.hooks.drain.enabled` is `true` | `nil` |
| `data.hooks.postStart` | Data nodes: Lifecycle hook script to execute after the pod starts. Ignored if `data.hooks.drain.enabled` is `true` | `nil` |
| `data.persistence.enabled` | Data persistent enabled/disabled | `true` |
| `data.persistence.name` | Data statefulset PVC template name | `data` |
| `data.persistence.size` | Data persistent volume size | `30Gi` |
| `data.persistence.storageClass` | Data persistent volume Class | `nil` |
| `data.persistence.accessMode` | Data persistent Access Mode | `ReadWriteOnce` |
| `data.readinessProbe` | Readiness probes for data-containers | see `values.yaml` for defaults |
| `data.podAnnotations` | Data StatefulSet annotations | `{}` |
| `data.nodeSelector` | Node labels for data pod assignment | `{}` |
| `data.tolerations` | Data tolerations | `[]` |
| `data.terminationGracePeriodSeconds` | Data termination grace period (seconds) | `3600` |
| `data.antiAffinity` | Data anti-affinity policy | `soft` |
| `data.nodeAffinity` | Data node affinity policy | `{}` |
| `data.podManagementPolicy` | Data pod creation strategy | `OrderedReady` |
| `data.updateStrategy` | Data node update strategy policy | `{type: "onDelete"}` |
| `sysctlInitContainer.enabled` | If true, the sysctl init container is enabled (does not stop chownInitContainer or extraInitContainers from running) | `true` |
| `chownInitContainer.enabled` | If true, the chown init container is enabled (does not stop sysctlInitContainer or extraInitContainers from running) | `true` |
| `extraInitContainers` | Additional init container passed through the tpl | `` |
| `podSecurityPolicy.annotations` | Specify pod annotations in the pod security policy | `{}` |
| `podSecurityPolicy.enabled` | Specify if a pod security policy must be created | `false` |
| `securityContext.enabled` | If true, add securityContext to client, master and data pods | `false` |
| `securityContext.runAsUser` | User ID to run containerized process | `1000` |
| `serviceAccounts.client.create` | If true, create the client service account | `true` |
| `serviceAccounts.client.name` | Name of the client service account to use or create | `{{ elasticsearch.client.fullname }}` |
| `serviceAccounts.master.create` | If true, create the master service account | `true` |
| `serviceAccounts.master.name` | Name of the master service account to use or create | `{{ elasticsearch.master.fullname }}` |
| `serviceAccounts.data.create` | If true, create the data service account | `true` |
| `serviceAccounts.data.name` | Name of the data service account to use or create | `{{ elasticsearch.data.fullname }}` |
| `testFramework.image` | `test-framework` image repository | `dduportal/bats` |
| `testFramework.tag` | `test-framework` image tag | `0.4.0` |
| `forceIpv6` | Force to use IPv6 address to listen if set to true | `false` |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.

In terms of memory resources, make sure that you follow this equation:

- `${role}HeapSize < ${role}MemoryRequests < ${role}MemoryLimits`
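A minimal `values.yaml` sketch for the data role that respects this inequality (the 1536m heap is the chart default; the request and limit figures are illustrative assumptions):

```yaml
data:
  heapSize: "1536m"        # JVM heap (-Xms/-Xmx)
  resources:
    requests:
      memory: "2560Mi"     # larger than the heap, leaves room for off-heap usage
    limits:
      cpu: "1"             # cpu limit must be an integer (see table above)
      memory: "3072Mi"     # larger than the request, so heap < request < limit
```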

The YAML value of `cluster.config` is appended to the `elasticsearch.yml` file for additional customization (for example, `script.inline: on` to allow inline scripting).
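For example, the inline-scripting setting from the sentence above could be supplied as follows; any other valid `elasticsearch.yml` key works the same way:

```yaml
cluster:
  config:
    script.inline: on      # appended verbatim to elasticsearch.yml
    # any additional elasticsearch.yml settings can go here
```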

# Deep dive

## Application Version

This chart aims to support Elasticsearch v2 to v6 deployments by specifying the `values.yaml` parameter `appVersion`.

### Version Specific Features

* Memory Locking *(variable renamed)*
* Ingest Node *(v5)*
* X-Pack Plugin *(v5)*

Upgrade paths & more info: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html
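A sketch of pinning the deployed version, assuming you keep `appVersion` and the image tag in sync (the chart's templates switch behaviour on the `appVersion` prefix, while the image tag controls what actually runs):

```yaml
appVersion: "6.8.6"    # drives version-specific template logic (v2/v5/v6)
image:
  repository: docker.elastic.co/elasticsearch/elasticsearch-oss
  tag: "6.8.6"         # keep aligned with appVersion
```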

## Mlocking

This is a limitation in Kubernetes right now. There is no way to raise the limits of lockable memory, so that these memory areas won't be swapped. This would degrade performance heavily. The issue is tracked in [kubernetes/#3595](https://github.com/kubernetes/kubernetes/issues/3595).

```
[WARN ][bootstrap] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[WARN ][bootstrap] This can result in part of the JVM being swapped out.
[WARN ][bootstrap] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
```

## Minimum Master Nodes
> The minimum_master_nodes setting is extremely important to the stability of your cluster. This setting helps prevent split brains, the existence of two masters in a single cluster.

> When you have a split brain, your cluster is at danger of losing data. Because the master is considered the supreme ruler of the cluster, it decides when new indices can be created, how shards are moved, and so forth. If you have two masters, data integrity becomes perilous, since you have two nodes that think they are in charge.

> This setting tells Elasticsearch to not elect a master unless there are enough master-eligible nodes available. Only then will an election take place.

> This setting should always be configured to a quorum (majority) of your master-eligible nodes. A quorum is (number of master-eligible nodes / 2) + 1.

More info: https://www.elastic.co/guide/en/elasticsearch/guide/1.x/_important_configuration_changes.html#_minimum_master_nodes
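As a concrete example: with three master-eligible nodes the quorum is `(3 / 2) + 1 = 2` (integer division), which also matches the chart's default `cluster.env`. A values sketch wiring that through the parameters from the table above:

```yaml
master:
  replicas: 3                    # three master-eligible nodes
cluster:
  env:
    MINIMUM_MASTER_NODES: "2"    # quorum = (3 / 2) + 1 = 2 (integer division)
```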

# Client and Coordinating Nodes

Elasticsearch v5 terminology has been updated, and now refers to a `Client Node` as a `Coordinating Node`.

More info: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/modules-node.html#coordinating-node

## Enabling elasticsearch internal monitoring
Requires version 6.3+ and the standard non-`oss` repository. Starting with 6.3, X-Pack is partially free and enabled by default. You need to set a new config option to enable the collection of these internal metrics (https://www.elastic.co/guide/en/elasticsearch/reference/6.3/monitoring-settings.html).

To do this through this Helm chart, override the following three values:
```
image.repository: docker.elastic.co/elasticsearch/elasticsearch
cluster.xpackEnable: true
cluster.env.XPACK_MONITORING_ENABLED: true
```

Note: to see these changes you will need to update your kibana repo to `image.repository: docker.elastic.co/kibana/kibana` instead of the `oss` version.
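Equivalently, the same overrides can live in a values file passed with `-f`; a minimal sketch (the file name is just an example):

```yaml
# monitoring-values.yaml (example name)
image:
  repository: docker.elastic.co/elasticsearch/elasticsearch   # non-oss image
cluster:
  xpackEnable: true
  env:
    XPACK_MONITORING_ENABLED: "true"   # rendered into the pod environment
```

It would then be applied with something like `helm install --name my-release -f monitoring-values.yaml stable/elasticsearch`.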

## Select the right storage class for SSD volumes

### GCE + Kubernetes 1.5

Create StorageClass for SSD-PD

```
$ kubectl create -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
EOF
```
Create cluster with Storage class `ssd` on Kubernetes 1.5+

```
$ helm install stable/elasticsearch --name my-release --set data.persistence.storageClass=ssd,data.storage=100Gi
```

### Usage of the `tpl` Function

The `tpl` function allows us to pass string values from `values.yaml` through the templating engine. It is used for the following values:

* `extraInitContainers`

It is important that these values be configured as strings. Otherwise, installation will fail.
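To make the "configured as strings" requirement concrete: `extraInitContainers` must be a YAML block scalar (one string that `tpl` renders), not a structured list. A minimal, illustrative sketch:

```yaml
extraInitContainers: |       # note the "|": the whole value is one string rendered via tpl
  - name: print-release      # illustrative init container
    image: busybox:latest
    command: ["sh", "-c", "echo preparing {{ .Release.Name }}"]
```

One of the CI values files below uses the same pattern to install Elasticsearch plugins via init containers.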

@@ -0,0 +1,5 @@
---
# Expose transport port on ClusterIP service

client:
  exposeTransportPort: true

@@ -0,0 +1,9 @@
extraInitContainers: |
  - name: "plugin-install-ingest-attachment"
    image: "docker.elastic.co/elasticsearch/elasticsearch-oss:6.6.1"
    command: ["/bin/bash"]
    args: ["-c", "yes | /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-attachment"]
  - name: "plugin-install-mapper-size"
    image: "docker.elastic.co/elasticsearch/elasticsearch-oss:6.6.1"
    command: ["/bin/bash"]
    args: ["-c", "yes | /usr/share/elasticsearch/bin/elasticsearch-plugin install mapper-size"]

helmfile/cloud-sdk/charts/elasticsearch/ci/hooks-values.yaml (new file)
@@ -0,0 +1,31 @@
---
# Enable custom lifecycle hooks for client, data and master pods

client:
  hooks:
    preStop: |-
      #!/bin/bash
      echo "Node {{ template "elasticsearch.client.fullname" . }} is shutting down"
    postStart: |-
      #!/bin/bash
      echo "Node {{ template "elasticsearch.client.fullname" . }} is ready to be used"

data:
  hooks:
    drain:
      enabled: false
    preStop: |-
      #!/bin/bash
      echo "Node {{ template "elasticsearch.data.fullname" . }} is shutting down"
    postStart: |-
      #!/bin/bash
      echo "Node {{ template "elasticsearch.data.fullname" . }} is ready to be used"

master:
  hooks:
    preStop: |-
      #!/bin/bash
      echo "Node {{ template "elasticsearch.master.fullname" . }} is shutting down"
    postStart: |-
      #!/bin/bash
      echo "Node {{ template "elasticsearch.master.fullname" . }} is ready to be used"

@@ -0,0 +1,12 @@
---
# Deploy Chart as non-root and unprivileged

chownInitContainer:
  enabled: false

securityContext:
  enabled: true
  runAsUser: 1000

sysctlInitContainer:
  enabled: false

@@ -0,0 +1,7 @@
---
# Enable init container for installing plugins

cluster:
  plugins:
    - ingest-attachment
    - mapper-size

@@ -0,0 +1,7 @@
data:
  updateStrategy:
    type: RollingUpdate

master:
  updateStrategy:
    type: RollingUpdate

helmfile/cloud-sdk/charts/elasticsearch/templates/NOTES.txt (new file)
@@ -0,0 +1,35 @@
This Helm chart is deprecated. Please use https://github.com/elastic/helm-charts/tree/master/elasticsearch instead.

---

The elasticsearch cluster has been installed.

Elasticsearch can be accessed:

  * Within your cluster, at the following DNS name at port 9200:

    {{ template "elasticsearch.client.fullname" . }}.{{ .Release.Namespace }}.svc

  * From outside the cluster, run these commands in the same shell:
{{- if contains "NodePort" .Values.client.serviceType }}

    export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "elasticsearch.client.fullname" . }})
    export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
    echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.client.serviceType }}

  WARNING: You have likely exposed your Elasticsearch cluster directly to the internet.
           Elasticsearch does not implement any security for public-facing clusters by default.
           As a minimum level of security, switch to ClusterIP/NodePort and place an Nginx gateway in front of the cluster in order to lock down access to dangerous HTTP endpoints and verbs.

  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        You can watch the status of it by running 'kubectl get svc -w {{ template "elasticsearch.client.fullname" . }}'

    export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "elasticsearch.client.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo http://$SERVICE_IP:9200
{{- else if contains "ClusterIP" .Values.client.serviceType }}

    export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "elasticsearch.name" . }},component={{ .Values.client.name }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
    echo "Visit http://127.0.0.1:9200 to use Elasticsearch"
    kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME 9200:9200
{{- end }}

helmfile/cloud-sdk/charts/elasticsearch/templates/_helpers.tpl (new file)
@@ -0,0 +1,114 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "elasticsearch.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}

{{/*
Create a default fully qualified client name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.client.fullname" -}}
{{ template "elasticsearch.fullname" . }}-{{ .Values.client.name }}
{{- end -}}

{{/*
Create a default fully qualified data name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.data.fullname" -}}
{{ template "elasticsearch.fullname" . }}-{{ .Values.data.name }}
{{- end -}}

{{/*
Create a default fully qualified master name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.master.fullname" -}}
{{ template "elasticsearch.fullname" . }}-{{ .Values.master.name }}
{{- end -}}

{{/*
Create the name of the service account to use for the client component
*/}}
{{- define "elasticsearch.serviceAccountName.client" -}}
{{- if .Values.serviceAccounts.client.create -}}
{{ default (include "elasticsearch.client.fullname" .) .Values.serviceAccounts.client.name }}
{{- else -}}
{{ default "default" .Values.serviceAccounts.client.name }}
{{- end -}}
{{- end -}}

{{/*
Create the name of the service account to use for the data component
*/}}
{{- define "elasticsearch.serviceAccountName.data" -}}
{{- if .Values.serviceAccounts.data.create -}}
{{ default (include "elasticsearch.data.fullname" .) .Values.serviceAccounts.data.name }}
{{- else -}}
{{ default "default" .Values.serviceAccounts.data.name }}
{{- end -}}
{{- end -}}

{{/*
Create the name of the service account to use for the master component
*/}}
{{- define "elasticsearch.serviceAccountName.master" -}}
{{- if .Values.serviceAccounts.master.create -}}
{{ default (include "elasticsearch.master.fullname" .) .Values.serviceAccounts.master.name }}
{{- else -}}
{{ default "default" .Values.serviceAccounts.master.name }}
{{- end -}}
{{- end -}}

{{/*
plugin installer template
*/}}
{{- define "plugin-installer" -}}
- name: es-plugin-install
  image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
  imagePullPolicy: {{ .Values.image.pullPolicy }}
  securityContext:
    capabilities:
      add:
        - IPC_LOCK
        - SYS_RESOURCE
  command:
    - "sh"
    - "-c"
    - |
      {{- range .Values.cluster.plugins }}
      PLUGIN_NAME="{{ . }}"
      echo "Installing $PLUGIN_NAME..."
      if /usr/share/elasticsearch/bin/elasticsearch-plugin list | grep "$PLUGIN_NAME" > /dev/null; then
        echo "Plugin $PLUGIN_NAME already exists, skipping."
      else
        /usr/share/elasticsearch/bin/elasticsearch-plugin install -b $PLUGIN_NAME
      fi
      {{- end }}
  volumeMounts:
    - mountPath: /usr/share/elasticsearch/plugins/
      name: plugindir
    - mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
      name: config
      subPath: elasticsearch.yml
{{- end -}}
@@ -0,0 +1,11 @@
{{- if and ( .Values.client.ingress.user ) ( .Values.client.ingress.password ) }}
---
apiVersion: v1
kind: Secret
metadata:
  name: '{{ include "elasticsearch.client.fullname" . }}-auth'
type: Opaque
data:
  auth: {{ printf "%s:{PLAIN}%s\n" .Values.client.ingress.user .Values.client.ingress.password | b64enc | quote }}
{{- end }}

@@ -0,0 +1,209 @@
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: {{ template "elasticsearch.name" . }}
|
||||
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
|
||||
component: "{{ .Values.client.name }}"
|
||||
heritage: {{ .Release.Service }}
|
||||
release: {{ .Release.Name }}
|
||||
name: {{ template "elasticsearch.client.fullname" . }}
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: {{ template "elasticsearch.name" . }}
|
||||
component: "{{ .Values.client.name }}"
|
||||
release: {{ .Release.Name }}
|
||||
replicas: {{ .Values.client.replicas }}
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: {{ template "elasticsearch.name" . }}
|
||||
component: "{{ .Values.client.name }}"
|
||||
release: {{ .Release.Name }}
|
||||
annotations:
|
||||
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
|
||||
{{- if .Values.client.podAnnotations }}
|
||||
{{ toYaml .Values.client.podAnnotations | indent 8 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
serviceAccountName: {{ template "elasticsearch.serviceAccountName.client" . }}
|
||||
{{- if .Values.client.priorityClassName }}
|
||||
priorityClassName: "{{ .Values.client.priorityClassName }}"
|
||||
{{- end }}
|
||||
securityContext:
|
||||
fsGroup: 1000
|
||||
{{- if or .Values.client.antiAffinity .Values.client.nodeAffinity }}
|
||||
affinity:
|
||||
{{- end }}
|
||||
{{- if eq .Values.client.antiAffinity "hard" }}
|
||||
podAntiAffinity:
|
||||
requiredDuringSchedulingIgnoredDuringExecution:
|
||||
- topologyKey: "kubernetes.io/hostname"
|
||||
labelSelector:
|
||||
matchLabels:
|
||||
app: "{{ template "elasticsearch.name" . }}"
|
||||
release: "{{ .Release.Name }}"
|
||||
component: "{{ .Values.client.name }}"
|
||||
{{- else if eq .Values.client.antiAffinity "soft" }}
|
||||
podAntiAffinity:
|
||||
preferredDuringSchedulingIgnoredDuringExecution:
|
||||
- weight: 1
|
||||
podAffinityTerm:
|
||||
topologyKey: kubernetes.io/hostname
|
||||
labelSelector:
|
||||
matchLabels:
|
||||
app: "{{ template "elasticsearch.name" . }}"
|
||||
release: "{{ .Release.Name }}"
|
||||
component: "{{ .Values.client.name }}"
|
||||
{{- end }}
|
||||
{{- with .Values.client.nodeAffinity }}
|
||||
nodeAffinity:
|
||||
{{ toYaml . | indent 10 }}
|
||||
{{- end }}
|
||||
{{- if .Values.client.nodeSelector }}
|
||||
nodeSelector:
|
||||
{{ toYaml .Values.client.nodeSelector | indent 8 }}
|
||||
{{- end }}
|
||||
{{- if .Values.client.tolerations }}
|
||||
tolerations:
|
||||
{{ toYaml .Values.client.tolerations | indent 8 }}
|
||||
{{- end }}
|
||||
{{- if .Values.client.terminationGracePeriodSeconds }}
|
||||
terminationGracePeriodSeconds: {{ .Values.client.terminationGracePeriodSeconds }}
|
||||
{{- end }}
|
||||
{{- if or .Values.extraInitContainers .Values.sysctlInitContainer.enabled .Values.cluster.plugins }}
|
||||
initContainers:
|
||||
{{- if .Values.sysctlInitContainer.enabled }}
|
||||
# see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
|
||||
# and https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html#mlockall
|
||||
- name: "sysctl"
|
||||
image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
|
||||
imagePullPolicy: {{ .Values.initImage.pullPolicy | quote }}
|
||||
resources:
|
||||
{{ toYaml .Values.client.initResources | indent 12 }}
|
||||
command: ["sysctl", "-w", "vm.max_map_count=262144"]
|
||||
securityContext:
|
||||
privileged: true
|
||||
{{- end }}
|
||||
{{- if .Values.extraInitContainers }}
|
||||
{{ tpl .Values.extraInitContainers . | indent 6 }}
|
||||
{{- end }}
|
||||
{{- if .Values.cluster.plugins }}
|
||||
{{ include "plugin-installer" . | indent 6 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
containers:
|
||||
- name: elasticsearch
|
||||
env:
|
||||
- name: NODE_DATA
|
||||
value: "false"
|
||||
{{- if hasPrefix "5." .Values.appVersion }}
|
||||
- name: NODE_INGEST
|
||||
value: "false"
|
||||
{{- end }}
|
||||
- name: NODE_MASTER
|
||||
value: "false"
|
||||
- name: DISCOVERY_SERVICE
|
||||
value: {{ template "elasticsearch.fullname" . }}-discovery
|
||||
- name: PROCESSORS
|
||||
valueFrom:
|
||||
resourceFieldRef:
|
||||
resource: limits.cpu
|
||||
- name: ES_JAVA_OPTS
|
||||
value: "-Djava.net.preferIPv4Stack=true -Xms{{ .Values.client.heapSize }} -Xmx{{ .Values.client.heapSize }} {{ .Values.cluster.additionalJavaOpts }} {{ .Values.client.additionalJavaOpts }}"
|
||||
{{- range $key, $value := .Values.cluster.env }}
|
||||
- name: {{ $key }}
|
||||
value: {{ $value | quote }}
|
||||
{{- end }}
|
||||
resources:
|
||||
{{ toYaml .Values.client.resources | indent 12 }}
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /_cluster/health
|
||||
port: 9200
|
||||
initialDelaySeconds: 5
|
||||
timeoutSeconds: 30
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /_cluster/health?local=true
|
||||
port: 9200
|
||||
initialDelaySeconds: 90
|
||||
timeoutSeconds: 30
|
||||
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
|
||||
{{- if .Values.securityContext.enabled }}
|
||||
securityContext:
|
||||
runAsUser: {{ .Values.securityContext.runAsUser }}
|
||||
{{- end }}
|
||||
ports:
|
||||
- containerPort: 9200
|
||||
name: http
|
||||
- containerPort: 9300
|
||||
name: transport
|
||||
volumeMounts:
|
||||
- mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
|
||||
name: config
|
||||
subPath: elasticsearch.yml
|
||||
{{- if .Values.cluster.plugins }}
|
||||
- mountPath: /usr/share/elasticsearch/plugins/
|
||||
name: plugindir
|
||||
{{- end }}
|
||||
{{- if hasPrefix "2." .Values.appVersion }}
|
||||
- mountPath: /usr/share/elasticsearch/config/logging.yml
|
||||
name: config
|
||||
subPath: logging.yml
|
||||
{{- end }}
|
||||
{{- if hasPrefix "5." .Values.appVersion }}
|
||||
- mountPath: /usr/share/elasticsearch/config/log4j2.properties
|
||||
name: config
|
||||
subPath: log4j2.properties
|
||||
{{- end }}
|
||||
{{- if .Values.cluster.keystoreSecret }}
|
||||
- name: keystore
|
||||
mountPath: "/usr/share/elasticsearch/config/elasticsearch.keystore"
|
||||
subPath: elasticsearch.keystore
|
||||
readOnly: true
|
||||
{{- end }}
|
||||
{{- if .Values.client.hooks.preStop }}
|
||||
- name: config
|
||||
mountPath: /client-pre-stop-hook.sh
|
||||
subPath: client-pre-stop-hook.sh
|
||||
{{- end }}
|
||||
{{- if .Values.client.hooks.postStart }}
|
||||
- name: config
|
||||
mountPath: /client-post-start-hook.sh
|
||||
subPath: client-post-start-hook.sh
|
||||
{{- end }}
|
||||
{{- if or .Values.client.hooks.preStop .Values.client.hooks.postStart }}
|
||||
lifecycle:
|
||||
{{- if .Values.client.hooks.preStop }}
|
||||
preStop:
|
||||
exec:
|
||||
command: ["/bin/bash","/client-pre-stop-hook.sh"]
|
||||
{{- end }}
|
||||
{{- if .Values.client.hooks.postStart }}
|
||||
postStart:
|
||||
exec:
|
||||
command: ["/bin/bash","/client-post-start-hook.sh"]
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.image.pullSecrets }}
|
||||
imagePullSecrets:
|
||||
{{- range $pullSecret := .Values.image.pullSecrets }}
|
||||
- name: {{ $pullSecret }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
volumes:
|
||||
- name: config
|
||||
configMap:
|
||||
name: {{ template "elasticsearch.fullname" . }}
|
||||
{{- if .Values.cluster.plugins }}
|
||||
- name: plugindir
|
||||
emptyDir: {}
|
||||
{{- end }}
|
||||
{{- if .Values.cluster.keystoreSecret }}
|
||||
- name: keystore
|
||||
secret:
|
||||
secretName: {{ .Values.cluster.keystoreSecret }}
|
||||
{{- end }}
|
||||
@@ -0,0 +1,44 @@
|
||||
{{- if .Values.client.ingress.enabled -}}
|
||||
{{- $fullName := include "elasticsearch.client.fullname" . -}}
|
||||
{{- $ingressPath := .Values.client.ingress.path -}}
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: {{ $fullName }}
|
||||
labels:
|
||||
app: {{ template "elasticsearch.name" . }}
|
||||
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
|
||||
component: "{{ .Values.client.name }}"
|
||||
heritage: {{ .Release.Service }}
|
||||
release: {{ .Release.Name }}
|
||||
annotations:
|
||||
{{- with .Values.client.ingress.annotations }}
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
{{- if and ( .Values.client.ingress.user ) ( .Values.client.ingress.password ) }}
|
||||
nginx.ingress.kubernetes.io/auth-type: basic
|
||||
nginx.ingress.kubernetes.io/auth-secret: '{{ include "elasticsearch.client.fullname" . }}-auth'
|
||||
nginx.ingress.kubernetes.io/auth-realm: "Authentication-Required"
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- if .Values.client.ingress.tls }}
|
||||
tls:
|
||||
{{- range .Values.client.ingress.tls }}
|
||||
- hosts:
|
||||
{{- range .hosts }}
|
||||
- {{ . | quote }}
|
||||
{{- end }}
|
||||
secretName: {{ .secretName }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
rules:
|
||||
{{- range .Values.client.ingress.hosts }}
|
||||
- host: {{ . | quote }}
|
||||
http:
|
||||
paths:
|
||||
- path: {{ $ingressPath }}
|
||||
backend:
|
||||
serviceName: {{ $fullName }}
|
||||
servicePort: http
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
@@ -0,0 +1,24 @@
|
||||
{{- if .Values.client.podDisruptionBudget.enabled }}
|
||||
apiVersion: policy/v1beta1
|
||||
kind: PodDisruptionBudget
|
||||
metadata:
|
||||
labels:
|
||||
app: {{ template "elasticsearch.name" . }}
|
||||
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
|
||||
component: "{{ .Values.client.name }}"
|
||||
heritage: {{ .Release.Service }}
|
||||
release: {{ .Release.Name }}
|
||||
name: {{ template "elasticsearch.client.fullname" . }}
|
||||
spec:
|
||||
{{- if .Values.client.podDisruptionBudget.minAvailable }}
|
||||
minAvailable: {{ .Values.client.podDisruptionBudget.minAvailable }}
|
||||
{{- end }}
|
||||
{{- if .Values.client.podDisruptionBudget.maxUnavailable }}
|
||||
maxUnavailable: {{ .Values.client.podDisruptionBudget.maxUnavailable }}
|
||||
{{- end }}
|
||||
selector:
|
||||
matchLabels:
|
||||
app: {{ template "elasticsearch.name" . }}
|
||||
component: "{{ .Values.client.name }}"
|
||||
release: {{ .Release.Name }}
|
||||
{{- end }}
|
||||
@@ -0,0 +1,12 @@
|
||||
{{- if .Values.serviceAccounts.client.create }}
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
labels:
|
||||
app: {{ template "elasticsearch.name" . }}
|
||||
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
|
||||
component: "{{ .Values.client.name }}"
|
||||
heritage: {{ .Release.Service }}
|
||||
release: {{ .Release.Name }}
|
||||
name: {{ template "elasticsearch.client.fullname" . }}
|
||||
{{- end }}
|
||||
@@ -0,0 +1,42 @@
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: {{ template "elasticsearch.name" . }}
|
||||
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
|
||||
component: "{{ .Values.client.name }}"
|
||||
heritage: {{ .Release.Service }}
|
||||
release: {{ .Release.Name }}
|
||||
name: {{ template "elasticsearch.client.fullname" . }}
|
||||
{{- if .Values.client.serviceAnnotations }}
|
||||
annotations:
|
||||
{{ toYaml .Values.client.serviceAnnotations | indent 4 }}
|
||||
{{- end }}
|
||||
|
||||
spec:
|
||||
ports:
|
||||
- name: http
|
||||
port: 9200
|
||||
{{- if and .Values.client.httpNodePort (eq .Values.client.serviceType "NodePort") }}
|
||||
nodePort: {{ .Values.client.httpNodePort }}
|
||||
{{- end }}
|
||||
targetPort: http
|
||||
{{- if .Values.client.exposeTransportPort }}
|
||||
- name: transport
|
||||
port: 9300
|
||||
targetPort: transport
|
||||
{{- end }}
|
||||
selector:
|
||||
app: {{ template "elasticsearch.name" . }}
|
||||
component: "{{ .Values.client.name }}"
|
||||
release: {{ .Release.Name }}
|
||||
type: {{ .Values.client.serviceType }}
|
||||
{{- if .Values.client.loadBalancerIP }}
|
||||
loadBalancerIP: "{{ .Values.client.loadBalancerIP }}"
|
||||
{{- end }}
|
||||
{{if .Values.client.loadBalancerSourceRanges}}
|
||||
loadBalancerSourceRanges:
|
||||
{{range $rangeList := .Values.client.loadBalancerSourceRanges}}
|
||||
- {{ $rangeList }}
|
||||
{{end}}
|
||||
{{end}}
|
||||
168
helmfile/cloud-sdk/charts/elasticsearch/templates/configmap.yaml
Normal file
168
helmfile/cloud-sdk/charts/elasticsearch/templates/configmap.yaml
Normal file
@@ -0,0 +1,168 @@
|
||||
{{ $minorAppVersion := regexFind "[0-9]*.[0-9]*" .Values.appVersion | float64 -}}
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: {{ template "elasticsearch.fullname" . }}
|
||||
labels:
|
||||
app: {{ template "elasticsearch.fullname" . }}
|
||||
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
|
||||
release: "{{ .Release.Name }}"
|
||||
heritage: "{{ .Release.Service }}"
|
||||
data:
|
||||
elasticsearch.yml: |-
|
||||
cluster.name: {{ .Values.cluster.name }}
|
||||
|
||||
node.data: ${NODE_DATA:true}
|
||||
node.master: ${NODE_MASTER:true}
|
||||
{{- if hasPrefix "5." .Values.appVersion }}
|
||||
node.ingest: ${NODE_INGEST:true}
|
||||
{{- else if hasPrefix "6." .Values.appVersion }}
|
||||
node.ingest: ${NODE_INGEST:true}
|
||||
{{- end }}
|
||||
node.name: ${HOSTNAME}
|
||||
|
||||
{{- if .Values.forceIpv6 }}
|
||||
network.host: "::"
|
||||
{{- else }}
|
||||
network.host: 0.0.0.0
|
||||
{{- end }}
|
||||
|
||||
{{- if hasPrefix "2." .Values.appVersion }}
|
||||
# see https://github.com/kubernetes/kubernetes/issues/3595
|
||||
bootstrap.mlockall: ${BOOTSTRAP_MLOCKALL:false}
|
||||
|
||||
discovery:
|
||||
zen:
|
||||
ping.unicast.hosts: ${DISCOVERY_SERVICE:}
|
||||
minimum_master_nodes: ${MINIMUM_MASTER_NODES:2}
|
||||
{{- else if hasPrefix "5." .Values.appVersion }}
|
||||
# see https://github.com/kubernetes/kubernetes/issues/3595
|
||||
bootstrap.memory_lock: ${BOOTSTRAP_MEMORY_LOCK:false}
|
||||
|
||||
discovery:
|
||||
zen:
|
||||
ping.unicast.hosts: ${DISCOVERY_SERVICE:}
|
||||
minimum_master_nodes: ${MINIMUM_MASTER_NODES:2}
|
||||
|
||||
{{- if .Values.cluster.xpackEnable }}
|
||||
# see https://www.elastic.co/guide/en/x-pack/current/xpack-settings.html
|
||||
{{- if or ( gt $minorAppVersion 5.4 ) ( eq $minorAppVersion 5.4 ) }}
|
||||
xpack.ml.enabled: ${XPACK_ML_ENABLED:false}
|
||||
{{- end }}
|
||||
xpack.monitoring.enabled: ${XPACK_MONITORING_ENABLED:false}
|
||||
xpack.security.enabled: ${XPACK_SECURITY_ENABLED:false}
|
||||
xpack.watcher.enabled: ${XPACK_WATCHER_ENABLED:false}
|
||||
{{- else }}
|
||||
{{- if or ( gt $minorAppVersion 5.4 ) ( eq $minorAppVersion 5.4 ) }}
|
||||
xpack.ml.enabled: false
|
||||
{{- end }}
|
||||
xpack.monitoring.enabled: false
|
||||
xpack.security.enabled: false
|
||||
xpack.watcher.enabled: false
|
||||
{{- end }}
|
||||
{{- else if hasPrefix "6." .Values.appVersion }}
|
||||
# see https://github.com/kubernetes/kubernetes/issues/3595
|
||||
bootstrap.memory_lock: ${BOOTSTRAP_MEMORY_LOCK:false}
|
||||
|
||||
discovery:
|
||||
zen:
|
||||
ping.unicast.hosts: ${DISCOVERY_SERVICE:}
|
||||
minimum_master_nodes: ${MINIMUM_MASTER_NODES:2}
|
||||
|
||||
{{- if and ( .Values.cluster.xpackEnable ) ( gt $minorAppVersion 6.3 ) }}
|
||||
# see https://www.elastic.co/guide/en/x-pack/current/xpack-settings.html
|
||||
# After 6.3 xpack systems changed and are enabled by default and different configs manage them this enables monitoring
|
||||
xpack.monitoring.collection.enabled: ${XPACK_MONITORING_ENABLED:false}
|
||||
{{- else if .Values.cluster.xpackEnable }}
|
||||
# see https://www.elastic.co/guide/en/x-pack/current/xpack-settings.html
|
||||
xpack.ml.enabled: ${XPACK_ML_ENABLED:false}
|
||||
xpack.monitoring.enabled: ${XPACK_MONITORING_ENABLED:false}
|
||||
xpack.security.enabled: ${XPACK_SECURITY_ENABLED:false}
|
||||
xpack.watcher.enabled: ${XPACK_WATCHER_ENABLED:false}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
# see https://github.com/elastic/elasticsearch-definitive-guide/pull/679
|
||||
processors: ${PROCESSORS:}
|
||||
|
||||
# avoid split-brain w/ a minimum consensus of two masters plus a data node
|
||||
gateway.expected_master_nodes: ${EXPECTED_MASTER_NODES:2}
|
||||
gateway.expected_data_nodes: ${EXPECTED_DATA_NODES:1}
|
||||
gateway.recover_after_time: ${RECOVER_AFTER_TIME:5m}
|
||||
gateway.recover_after_master_nodes: ${RECOVER_AFTER_MASTER_NODES:2}
|
||||
gateway.recover_after_data_nodes: ${RECOVER_AFTER_DATA_NODES:1}
|
||||
{{- with .Values.cluster.config }}
|
||||
{{ toYaml . | indent 4 }}
|
||||
{{- end }}
|
||||
{{- if hasPrefix "2." .Values.appVersion }}
|
||||
logging.yml: |-
|
||||
{{ toYaml .Values.cluster.loggingYml | indent 4 }}
|
||||
{{- else }}
|
||||
log4j2.properties: |-
|
||||
{{ tpl .Values.cluster.log4j2Properties . | indent 4 }}
|
||||
{{- end }}
|
||||
{{- if .Values.data.hooks.drain.enabled }}
|
||||
data-pre-stop-hook.sh: |-
|
||||
#!/bin/bash
|
||||
exec &> >(tee -a "/var/log/elasticsearch-hooks.log")
|
||||
NODE_NAME=${HOSTNAME}
|
||||
echo "Prepare to migrate data of the node ${NODE_NAME}"
|
||||
echo "Move all data from node ${NODE_NAME}"
|
||||
curl -s -XPUT -H 'Content-Type: application/json' '{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/settings' -d "{
|
||||
\"transient\" :{
|
||||
\"cluster.routing.allocation.exclude._name\" : \"${NODE_NAME}\"
|
||||
}
|
||||
}"
|
||||
echo ""
|
||||
|
||||
while true ; do
|
||||
echo -e "Wait for node ${NODE_NAME} to become empty"
|
||||
SHARDS_ALLOCATION=$(curl -s -XGET 'http://{{ template "elasticsearch.client.fullname" . }}:9200/_cat/shards')
|
||||
if ! echo "${SHARDS_ALLOCATION}" | grep -E "${NODE_NAME}"; then
|
||||
break
|
||||
fi
|
||||
sleep 1
|
||||
done
|
||||
echo "Node ${NODE_NAME} is ready to shutdown"
|
||||
data-post-start-hook.sh: |-
|
||||
#!/bin/bash
|
||||
exec &> >(tee -a "/var/log/elasticsearch-hooks.log")
|
||||
NODE_NAME=${HOSTNAME}
|
||||
CLUSTER_SETTINGS=$(curl -s -XGET "http://{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/settings")
|
||||
if echo "${CLUSTER_SETTINGS}" | grep -E "${NODE_NAME}"; then
|
||||
echo "Activate node ${NODE_NAME}"
|
||||
curl -s -XPUT -H 'Content-Type: application/json' "http://{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/settings" -d "{
|
||||
\"transient\" :{
|
||||
\"cluster.routing.allocation.exclude._name\" : null
|
||||
}
|
||||
}"
|
||||
fi
|
||||
echo "Node ${NODE_NAME} is ready to be used"
|
||||
{{- else }}
|
||||
{{- if .Values.data.hooks.preStop }}
|
||||
data-pre-stop-hook.sh: |-
|
||||
{{ tpl .Values.data.hooks.preStop . | indent 4 }}
|
||||
{{- end }}
|
||||
{{- if .Values.data.hooks.postStart }}
|
||||
data-post-start-hook.sh: |-
|
||||
{{ tpl .Values.data.hooks.postStart . | indent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{- if .Values.client.hooks.preStop }}
|
||||
client-pre-stop-hook.sh: |-
|
||||
{{ tpl .Values.client.hooks.preStop . | indent 4 }}
|
||||
{{- end }}
|
||||
{{- if .Values.client.hooks.postStart }}
|
||||
client-post-start-hook.sh: |-
|
||||
{{ tpl .Values.client.hooks.postStart . | indent 4 }}
|
||||
{{- end }}
|
||||
|
||||
{{- if .Values.master.hooks.preStop }}
|
||||
master-pre-stop-hook.sh: |-
|
||||
{{ tpl .Values.master.hooks.preStop . | indent 4 }}
|
||||
{{- end }}
|
||||
{{- if .Values.master.hooks.postStart }}
|
||||
master-post-start-hook.sh: |-
|
||||
{{ tpl .Values.master.hooks.postStart . | indent 4 }}
|
||||
{{- end }}
|
||||
@@ -0,0 +1,24 @@
|
||||
{{- if .Values.data.podDisruptionBudget.enabled }}
|
||||
apiVersion: policy/v1beta1
|
||||
kind: PodDisruptionBudget
|
||||
metadata:
|
||||
labels:
|
||||
app: {{ template "elasticsearch.name" . }}
|
||||
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
|
||||
component: "{{ .Values.data.name }}"
|
||||
heritage: {{ .Release.Service }}
|
||||
release: {{ .Release.Name }}
|
||||
name: {{ template "elasticsearch.data.fullname" . }}
|
||||
spec:
|
||||
{{- if .Values.data.podDisruptionBudget.minAvailable }}
|
||||
minAvailable: {{ .Values.data.podDisruptionBudget.minAvailable }}
|
||||
{{- end }}
|
||||
{{- if .Values.data.podDisruptionBudget.maxUnavailable }}
|
||||
maxUnavailable: {{ .Values.data.podDisruptionBudget.maxUnavailable }}
|
||||
{{- end }}
|
||||
selector:
|
||||
matchLabels:
|
||||
app: {{ template "elasticsearch.name" . }}
|
||||
component: "{{ .Values.data.name }}"
|
||||
release: {{ .Release.Name }}
|
||||
{{- end }}
|
||||
@@ -0,0 +1,12 @@
|
||||
{{- if .Values.serviceAccounts.data.create }}
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
labels:
|
||||
app: {{ template "elasticsearch.name" . }}
|
||||
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
|
||||
component: "{{ .Values.data.name }}"
|
||||
heritage: {{ .Release.Service }}
|
||||
release: {{ .Release.Name }}
|
||||
name: {{ template "elasticsearch.data.fullname" . }}
|
||||
{{- end }}
|
||||
@@ -0,0 +1,256 @@
|
||||
apiVersion: apps/v1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
labels:
|
||||
app: {{ template "elasticsearch.name" . }}
|
||||
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
|
||||
component: "{{ .Values.data.name }}"
|
||||
heritage: {{ .Release.Service }}
|
||||
release: {{ .Release.Name }}
|
||||
name: {{ template "elasticsearch.data.fullname" . }}
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: {{ template "elasticsearch.name" . }}
|
||||
component: "{{ .Values.data.name }}"
|
||||
release: {{ .Release.Name }}
|
||||
role: data
|
||||
serviceName: {{ template "elasticsearch.data.fullname" . }}
|
||||
replicas: {{ .Values.data.replicas }}
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: {{ template "elasticsearch.name" . }}
|
||||
component: "{{ .Values.data.name }}"
|
||||
release: {{ .Release.Name }}
|
||||
role: data
|
||||
{{- if or .Values.data.podAnnotations (eq .Values.data.updateStrategy.type "RollingUpdate") }}
|
||||
annotations:
|
||||
{{- if .Values.data.podAnnotations }}
|
||||
{{ toYaml .Values.data.podAnnotations | indent 8 }}
|
||||
{{- end }}
|
||||
{{- if eq .Values.data.updateStrategy.type "RollingUpdate" }}
|
||||
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- if .Values.schedulerName }}
|
||||
schedulerName: "{{ .Values.schedulerName }}"
|
||||
{{- end }}
|
||||
serviceAccountName: {{ template "elasticsearch.serviceAccountName.data" . }}
|
||||
{{- if .Values.data.priorityClassName }}
|
||||
priorityClassName: "{{ .Values.data.priorityClassName }}"
|
||||
{{- end }}
|
||||
securityContext:
|
||||
fsGroup: 1000
|
||||
{{- if or .Values.data.antiAffinity .Values.data.nodeAffinity }}
|
||||
affinity:
|
||||
{{- end }}
|
||||
{{- if eq .Values.data.antiAffinity "hard" }}
|
||||
podAntiAffinity:
|
||||
requiredDuringSchedulingIgnoredDuringExecution:
|
||||
- topologyKey: "kubernetes.io/hostname"
|
||||
labelSelector:
|
||||
matchLabels:
app: "{{ template "elasticsearch.name" . }}"
release: "{{ .Release.Name }}"
component: "{{ .Values.data.name }}"
{{- else if eq .Values.data.antiAffinity "soft" }}
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: kubernetes.io/hostname
labelSelector:
matchLabels:
app: "{{ template "elasticsearch.name" . }}"
release: "{{ .Release.Name }}"
component: "{{ .Values.data.name }}"
{{- end }}
{{- with .Values.data.nodeAffinity }}
nodeAffinity:
{{ toYaml . | indent 10 }}
{{- end }}
{{- if .Values.data.nodeSelector }}
nodeSelector:
{{ toYaml .Values.data.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.data.tolerations }}
tolerations:
{{ toYaml .Values.data.tolerations | indent 8 }}
{{- end }}
{{- if or .Values.extraInitContainers .Values.sysctlInitContainer.enabled .Values.chownInitContainer.enabled .Values.cluster.plugins }}
initContainers:
{{- end }}
{{- if .Values.sysctlInitContainer.enabled }}
# see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
# and https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html#mlockall
- name: "sysctl"
image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
imagePullPolicy: {{ .Values.initImage.pullPolicy | quote }}
resources:
{{ toYaml .Values.data.initResources | indent 12 }}
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
{{- end }}
{{- if .Values.chownInitContainer.enabled }}
- name: "chown"
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
resources:
{{ toYaml .Values.data.initResources | indent 12 }}
command:
- /bin/bash
- -c
- >
set -e;
set -x;
chown elasticsearch:elasticsearch /usr/share/elasticsearch/data;
for datadir in $(find /usr/share/elasticsearch/data -mindepth 1 -maxdepth 1 -not -name ".snapshot"); do
chown -R elasticsearch:elasticsearch $datadir;
done;
chown elasticsearch:elasticsearch /usr/share/elasticsearch/logs;
for logfile in $(find /usr/share/elasticsearch/logs -mindepth 1 -maxdepth 1 -not -name ".snapshot"); do
chown -R elasticsearch:elasticsearch $logfile;
done
securityContext:
runAsUser: 0
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: data
{{- end }}
{{- if .Values.extraInitContainers }}
{{ tpl .Values.extraInitContainers . | indent 6 }}
{{- end }}
{{- if .Values.cluster.plugins }}
{{ include "plugin-installer" . | indent 6 }}
{{- end }}
containers:
- name: elasticsearch
env:
- name: DISCOVERY_SERVICE
value: {{ template "elasticsearch.fullname" . }}-discovery
- name: NODE_MASTER
value: "false"
- name: PROCESSORS
valueFrom:
resourceFieldRef:
resource: limits.cpu
- name: ES_JAVA_OPTS
value: "-Djava.net.preferIPv4Stack=true -Xms{{ .Values.data.heapSize }} -Xmx{{ .Values.data.heapSize }} {{ .Values.cluster.additionalJavaOpts }} {{ .Values.data.additionalJavaOpts }}"
{{- range $key, $value := .Values.cluster.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
{{- if .Values.securityContext.enabled }}
securityContext:
runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}
ports:
- containerPort: 9300
name: transport
{{ if .Values.data.exposeHttp }}
- containerPort: 9200
name: http
{{ end }}
resources:
{{ toYaml .Values.data.resources | indent 12 }}
readinessProbe:
{{ toYaml .Values.data.readinessProbe | indent 10 }}
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: data
- mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
name: config
subPath: elasticsearch.yml
{{- if .Values.cluster.plugins }}
- mountPath: /usr/share/elasticsearch/plugins/
name: plugindir
{{- end }}
{{- if hasPrefix "2." .Values.appVersion }}
- mountPath: /usr/share/elasticsearch/config/logging.yml
name: config
subPath: logging.yml
{{- end }}
{{- if hasPrefix "5." .Values.appVersion }}
- mountPath: /usr/share/elasticsearch/config/log4j2.properties
name: config
subPath: log4j2.properties
{{- end }}
{{- if .Values.cluster.keystoreSecret }}
- name: keystore
mountPath: "/usr/share/elasticsearch/config/elasticsearch.keystore"
subPath: elasticsearch.keystore
readOnly: true
{{- end }}
{{- if or .Values.data.hooks.preStop .Values.data.hooks.drain.enabled }}
- name: config
mountPath: /data-pre-stop-hook.sh
subPath: data-pre-stop-hook.sh
{{- end }}
{{- if or .Values.data.hooks.postStart .Values.data.hooks.drain.enabled }}
- name: config
mountPath: /data-post-start-hook.sh
subPath: data-post-start-hook.sh
{{- end }}
{{- if or .Values.data.hooks.preStop .Values.data.hooks.postStart .Values.data.hooks.drain.enabled }}
lifecycle:
{{- if or .Values.data.hooks.preStop .Values.data.hooks.drain.enabled }}
preStop:
exec:
command: ["/bin/bash","/data-pre-stop-hook.sh"]
{{- end }}
{{- if or .Values.data.hooks.postStart .Values.data.hooks.drain.enabled }}
postStart:
exec:
command: ["/bin/bash","/data-post-start-hook.sh"]
{{- end }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.data.terminationGracePeriodSeconds }}
{{- if .Values.image.pullSecrets }}
imagePullSecrets:
{{- range $pullSecret := .Values.image.pullSecrets }}
- name: {{ $pullSecret }}
{{- end }}
{{- end }}
volumes:
- name: config
configMap:
name: {{ template "elasticsearch.fullname" . }}
{{- if .Values.cluster.plugins }}
- name: plugindir
emptyDir: {}
{{- end }}
{{- if .Values.cluster.keystoreSecret }}
- name: keystore
secret:
secretName: {{ .Values.cluster.keystoreSecret }}
{{- end }}
{{- if not .Values.data.persistence.enabled }}
- name: data
emptyDir: {}
{{- end }}
podManagementPolicy: {{ .Values.data.podManagementPolicy }}
updateStrategy:
type: {{ .Values.data.updateStrategy.type }}
{{- if .Values.data.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: {{ .Values.data.persistence.name }}
spec:
accessModes:
- {{ .Values.data.persistence.accessMode | quote }}
{{- if .Values.data.persistence.storageClass }}
{{- if (eq "-" .Values.data.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.data.persistence.storageClass }}"
{{- end }}
{{- end }}
resources:
requests:
storage: "{{ .Values.data.persistence.size }}"
{{- end }}
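The data StatefulSet above interpolates `data.heapSize` into `-Xms`/`-Xmx` of `ES_JAVA_OPTS`, while the container memory comes from `data.resources`, so the two are tuned together. A minimal values-override sketch is shown below; the `3g`/`6Gi` figures are illustrative assumptions, not the values shipped in this commit, and the usual guidance is to keep the heap at roughly half the container memory limit.

```yaml
# Illustrative override only -- numbers are assumptions, not this deployment's settings.
data:
  heapSize: "3g"          # rendered as -Xms3g -Xmx3g in ES_JAVA_OPTS
  resources:
    limits:
      cpu: "2"
      memory: "6Gi"       # leaves headroom above the 3g heap
    requests:
      cpu: "1"
      memory: "6Gi"
```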
34
helmfile/cloud-sdk/charts/elasticsearch/templates/job.yaml
Normal file
34
helmfile/cloud-sdk/charts/elasticsearch/templates/job.yaml
Normal file
@@ -0,0 +1,34 @@
{{- if .Values.cluster.bootstrapShellCommand }}
apiVersion: batch/v1
kind: Job
metadata:
name: {{ template "elasticsearch.fullname" . }}-bootstrap
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "10"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
metadata:
name: {{ template "elasticsearch.fullname" . }}-bootstrap
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
containers:
- name: bootstrap-elasticsearch
image: byrnedo/alpine-curl
command:
- "sh"
- "-c"
- {{ .Values.cluster.bootstrapShellCommand | quote }}
restartPolicy: Never
backoffLimit: 20
{{- end }}
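This Job only renders when `cluster.bootstrapShellCommand` is non-empty; it then runs the command via `sh -c` in a curl-capable image after install and upgrade. A hypothetical example follows; the command body and the `elasticsearch-client` host are assumptions for illustration (the real service name depends on the release fullname), not part of this commit.

```yaml
# Hypothetical bootstrap command -- adjust the host to the release's client service.
cluster:
  bootstrapShellCommand: >-
    curl -s -XPUT -H 'Content-Type: application/json'
    http://elasticsearch-client:9200/_template/default
    -d '{"index_patterns":["*"],"settings":{"number_of_replicas":1}}'
```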
@@ -0,0 +1,24 @@
{{- if .Values.master.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
component: "{{ .Values.master.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "elasticsearch.master.fullname" . }}
spec:
{{- if .Values.master.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.master.podDisruptionBudget.minAvailable }}
{{- end }}
{{- if .Values.master.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.master.podDisruptionBudget.maxUnavailable }}
{{- end }}
selector:
matchLabels:
app: {{ template "elasticsearch.name" . }}
component: "{{ .Values.master.name }}"
release: {{ .Release.Name }}
{{- end }}
@@ -0,0 +1,12 @@
{{- if .Values.serviceAccounts.master.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
component: "{{ .Values.master.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "elasticsearch.master.fullname" . }}
{{- end }}
@@ -0,0 +1,262 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
component: "{{ .Values.master.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "elasticsearch.master.fullname" . }}
spec:
selector:
matchLabels:
app: {{ template "elasticsearch.name" . }}
component: "{{ .Values.master.name }}"
release: {{ .Release.Name }}
role: master
serviceName: {{ template "elasticsearch.master.fullname" . }}
replicas: {{ .Values.master.replicas }}
template:
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
component: "{{ .Values.master.name }}"
release: {{ .Release.Name }}
role: master
{{- if or .Values.master.podAnnotations (eq .Values.master.updateStrategy.type "RollingUpdate") }}
annotations:
{{- if .Values.master.podAnnotations }}
{{ toYaml .Values.master.podAnnotations | indent 8 }}
{{- end }}
{{- if eq .Values.master.updateStrategy.type "RollingUpdate" }}
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
{{- end }}
{{- end }}
spec:
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
serviceAccountName: {{ template "elasticsearch.serviceAccountName.master" . }}
{{- if .Values.master.priorityClassName }}
priorityClassName: "{{ .Values.master.priorityClassName }}"
{{- end }}
securityContext:
fsGroup: 1000
{{- if or .Values.master.antiAffinity .Values.master.nodeAffinity }}
affinity:
{{- end }}
{{- if eq .Values.master.antiAffinity "hard" }}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: "kubernetes.io/hostname"
labelSelector:
matchLabels:
app: "{{ template "elasticsearch.name" . }}"
release: "{{ .Release.Name }}"
component: "{{ .Values.master.name }}"
{{- else if eq .Values.master.antiAffinity "soft" }}
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: kubernetes.io/hostname
labelSelector:
matchLabels:
app: "{{ template "elasticsearch.name" . }}"
release: "{{ .Release.Name }}"
component: "{{ .Values.master.name }}"
{{- end }}
{{- with .Values.master.nodeAffinity }}
nodeAffinity:
{{ toYaml . | indent 10 }}
{{- end }}
{{- if .Values.master.nodeSelector }}
nodeSelector:
{{ toYaml .Values.master.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.master.tolerations }}
tolerations:
{{ toYaml .Values.master.tolerations | indent 8 }}
{{- end }}
{{- if .Values.master.terminationGracePeriodSeconds }}
terminationGracePeriodSeconds: {{ .Values.master.terminationGracePeriodSeconds }}
{{- end }}
{{- if or .Values.extraInitContainers .Values.sysctlInitContainer.enabled .Values.chownInitContainer.enabled .Values.cluster.plugins }}
initContainers:
{{- end }}
{{- if .Values.sysctlInitContainer.enabled }}
# see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
# and https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html#mlockall
- name: "sysctl"
image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
imagePullPolicy: {{ .Values.initImage.pullPolicy | quote }}
resources:
{{ toYaml .Values.master.initResources | indent 12 }}
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
{{- end }}
{{- if .Values.chownInitContainer.enabled }}
- name: "chown"
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
resources:
{{ toYaml .Values.master.initResources | indent 12 }}
command:
- /bin/bash
- -c
- >
set -e;
set -x;
chown elasticsearch:elasticsearch /usr/share/elasticsearch/data;
for datadir in $(find /usr/share/elasticsearch/data -mindepth 1 -maxdepth 1 -not -name ".snapshot"); do
chown -R elasticsearch:elasticsearch $datadir;
done;
chown elasticsearch:elasticsearch /usr/share/elasticsearch/logs;
for logfile in $(find /usr/share/elasticsearch/logs -mindepth 1 -maxdepth 1 -not -name ".snapshot"); do
chown -R elasticsearch:elasticsearch $logfile;
done
securityContext:
runAsUser: 0
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: data
{{- end }}
{{- if .Values.extraInitContainers }}
{{ tpl .Values.extraInitContainers . | indent 6 }}
{{- end }}
{{- if .Values.cluster.plugins }}
{{ include "plugin-installer" . | indent 6 }}
{{- end }}
containers:
- name: elasticsearch
env:
- name: NODE_DATA
value: "false"
{{- if hasPrefix "5." .Values.appVersion }}
- name: NODE_INGEST
value: "false"
{{- end }}
- name: DISCOVERY_SERVICE
value: {{ template "elasticsearch.fullname" . }}-discovery
- name: PROCESSORS
valueFrom:
resourceFieldRef:
resource: limits.cpu
- name: ES_JAVA_OPTS
value: "-Djava.net.preferIPv4Stack=true -Xms{{ .Values.master.heapSize }} -Xmx{{ .Values.master.heapSize }} {{ .Values.cluster.additionalJavaOpts }} {{ .Values.master.additionalJavaOpts }}"
{{- range $key, $value := .Values.cluster.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
resources:
{{ toYaml .Values.master.resources | indent 12 }}
readinessProbe:
{{ toYaml .Values.master.readinessProbe | indent 10 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
{{- if .Values.securityContext.enabled }}
securityContext:
runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}
ports:
- containerPort: 9300
name: transport
{{ if .Values.master.exposeHttp }}
- containerPort: 9200
name: http
{{ end }}
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: data
- mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
name: config
subPath: elasticsearch.yml
{{- if .Values.cluster.plugins }}
- mountPath: /usr/share/elasticsearch/plugins/
name: plugindir
{{- end }}
{{- if hasPrefix "2." .Values.appVersion }}
- mountPath: /usr/share/elasticsearch/config/logging.yml
name: config
subPath: logging.yml
{{- end }}
{{- if hasPrefix "5." .Values.appVersion }}
- mountPath: /usr/share/elasticsearch/config/log4j2.properties
name: config
subPath: log4j2.properties
{{- end }}
{{- if .Values.cluster.keystoreSecret }}
- name: keystore
mountPath: "/usr/share/elasticsearch/config/elasticsearch.keystore"
subPath: elasticsearch.keystore
readOnly: true
{{- end }}
{{- if .Values.master.hooks.preStop }}
- name: config
mountPath: /master-pre-stop-hook.sh
subPath: master-pre-stop-hook.sh
{{- end }}
{{- if .Values.master.hooks.postStart }}
- name: config
mountPath: /master-post-start-hook.sh
subPath: master-post-start-hook.sh
{{- end }}
{{- if or .Values.master.hooks.preStop .Values.master.hooks.postStart }}
lifecycle:
{{- if .Values.master.hooks.preStop }}
preStop:
exec:
command: ["/bin/bash","/master-pre-stop-hook.sh"]
{{- end }}
{{- if .Values.master.hooks.postStart }}
postStart:
exec:
command: ["/bin/bash","/master-post-start-hook.sh"]
{{- end }}
{{- end }}
{{- if .Values.image.pullSecrets }}
imagePullSecrets:
{{- range $pullSecret := .Values.image.pullSecrets }}
- name: {{ $pullSecret }}
{{- end }}
{{- end }}
volumes:
- name: config
configMap:
name: {{ template "elasticsearch.fullname" . }}
{{- if .Values.cluster.plugins }}
- name: plugindir
emptyDir: {}
{{- end }}
{{- if .Values.cluster.keystoreSecret }}
- name: keystore
secret:
secretName: {{ .Values.cluster.keystoreSecret }}
{{- end }}
{{- if not .Values.master.persistence.enabled }}
- name: data
emptyDir: {}
{{- end }}
podManagementPolicy: {{ .Values.master.podManagementPolicy }}
updateStrategy:
type: {{ .Values.master.updateStrategy.type }}
{{- if .Values.master.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: {{ .Values.master.persistence.name }}
spec:
accessModes:
- {{ .Values.master.persistence.accessMode | quote }}
{{- if .Values.master.persistence.storageClass }}
{{- if (eq "-" .Values.master.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.master.persistence.storageClass }}"
{{- end }}
{{- end }}
resources:
requests:
storage: "{{ .Values.master.persistence.size }}"
{{ end }}
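Both the master and data StatefulSets mount an `elasticsearch.keystore` only when `cluster.keystoreSecret` names an existing Secret. A hypothetical sketch of such a Secret is shown below; the name `es-keystore` and the placeholder payload are assumptions for illustration only.

```yaml
# Hypothetical Secret providing the keystore; then set cluster.keystoreSecret: es-keystore
apiVersion: v1
kind: Secret
metadata:
  name: es-keystore
type: Opaque
data:
  elasticsearch.keystore: <base64-encoded keystore file>
```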
@@ -0,0 +1,19 @@
apiVersion: v1
kind: Service
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
component: "{{ .Values.master.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "elasticsearch.fullname" . }}-discovery
spec:
clusterIP: None
ports:
- port: 9300
targetPort: transport
selector:
app: {{ template "elasticsearch.name" . }}
component: "{{ .Values.master.name }}"
release: {{ .Release.Name }}
@@ -0,0 +1,43 @@
{{- if .Values.podSecurityPolicy.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "elasticsearch.fullname" . }}
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
{{- if .Values.podSecurityPolicy.annotations }}
{{ toYaml .Values.podSecurityPolicy.annotations | indent 4 }}
{{- end }}
spec:
privileged: true
allowPrivilegeEscalation: true
volumes:
- 'configMap'
- 'secret'
- 'emptyDir'
- 'persistentVolumeClaim'
hostNetwork: false
hostPID: false
hostIPC: false
runAsUser:
rule: 'RunAsAny'
runAsGroup:
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1000
max: 1000
readOnlyRootFilesystem: false
hostPorts:
- min: 1
max: 65535
{{- end }}
17
helmfile/cloud-sdk/charts/elasticsearch/templates/role.yaml
Normal file
17
helmfile/cloud-sdk/charts/elasticsearch/templates/role.yaml
Normal file
@@ -0,0 +1,17 @@
{{- if .Values.podSecurityPolicy.enabled }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: {{ template "elasticsearch.fullname" . }}
labels:
app: {{ template "elasticsearch.name" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- {{ template "elasticsearch.fullname" . }}
{{- end }}
@@ -0,0 +1,26 @@
{{- if .Values.podSecurityPolicy.enabled }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: {{ template "elasticsearch.fullname" . }}
labels:
app: {{ template "elasticsearch.name" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
roleRef:
kind: Role
name: {{ template "elasticsearch.fullname" . }}
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: {{ template "elasticsearch.serviceAccountName.client" . }}
namespace: {{ .Release.Namespace }}
- kind: ServiceAccount
name: {{ template "elasticsearch.serviceAccountName.data" . }}
namespace: {{ .Release.Namespace }}
- kind: ServiceAccount
name: {{ template "elasticsearch.serviceAccountName.master" . }}
namespace: {{ .Release.Namespace }}
{{- end }}
@@ -0,0 +1,15 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "elasticsearch.fullname" . }}-test
labels:
app: {{ template "elasticsearch.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: "{{ .Release.Service }}"
release: "{{ .Release.Name }}"
data:
run.sh: |-
@test "Test Access and Health" {
curl -D - http://{{ template "elasticsearch.client.fullname" . }}:9200
curl -D - http://{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/health?wait_for_status=green
}
@@ -0,0 +1,48 @@
apiVersion: v1
kind: Pod
metadata:
name: {{ template "elasticsearch.fullname" . }}-test
labels:
app: {{ template "elasticsearch.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: "{{ .Release.Service }}"
release: "{{ .Release.Name }}"
annotations:
"helm.sh/hook": test-success
spec:
{{- if .Values.image.pullSecrets }}
imagePullSecrets:
{{- range $pullSecret := .Values.image.pullSecrets }}
- name: {{ $pullSecret }}
{{- end }}
{{- end }}
initContainers:
- name: test-framework
image: "{{ .Values.testFramework.image}}:{{ .Values.testFramework.tag }}"
command:
- "bash"
- "-c"
- |
set -ex
# copy bats to tools dir
cp -R /usr/local/libexec/ /tools/bats/
volumeMounts:
- mountPath: /tools
name: tools
containers:
- name: {{ .Release.Name }}-test
image: "{{ .Values.testFramework.image}}:{{ .Values.testFramework.tag }}"
command: ["/tools/bats/bats", "-t", "/tests/run.sh"]
volumeMounts:
- mountPath: /tests
name: tests
readOnly: true
- mountPath: /tools
name: tools
volumes:
- name: tests
configMap:
name: {{ template "elasticsearch.fullname" . }}-test
- name: tools
emptyDir: {}
restartPolicy: Never
317
helmfile/cloud-sdk/charts/elasticsearch/values.yaml
Normal file
317
helmfile/cloud-sdk/charts/elasticsearch/values.yaml
Normal file
@@ -0,0 +1,317 @@
# Default values for elasticsearch.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
appVersion: "6.8.6"

## Define serviceAccount names for components. Defaults to component's fully qualified name.
##
serviceAccounts:
client:
create: true
name:
master:
create: true
name:
data:
create: true
name:

## Specify if a Pod Security Policy for Elasticsearch must be created
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
##
podSecurityPolicy:
enabled: false
annotations: {}
## Specify pod annotations
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
##
# seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
# seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
# apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'

securityContext:
enabled: false
runAsUser: 1000

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName: "default-scheduler"

image:
repository: "docker.elastic.co/elasticsearch/elasticsearch-oss"
tag: "6.8.6"
pullPolicy: "IfNotPresent"
# If specified, use these secrets to access the image
# pullSecrets:
# - registry-secret

testFramework:
image: "dduportal/bats"
tag: "0.4.0"

initImage:
repository: "busybox"
tag: "latest"
pullPolicy: "Always"

cluster:
name: "elasticsearch"
# If you want X-Pack installed, switch to an image that includes it, enable this option and toggle the features you want
# enabled in the environment variables outlined in the README
xpackEnable: false
# Some settings must be placed in a keystore, so they need to be mounted in from a secret.
# Use this setting to specify the name of the secret
# keystoreSecret: eskeystore
config: {}
# Custom parameters, as string, to be added to ES_JAVA_OPTS environment variable
additionalJavaOpts: ""
# Command to run at the end of deployment
bootstrapShellCommand: ""
env:
# IMPORTANT: https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html#minimum_master_nodes
# To prevent data loss, it is vital to configure the discovery.zen.minimum_master_nodes setting so that each master-eligible
# node knows the minimum number of master-eligible nodes that must be visible in order to form a cluster.
MINIMUM_MASTER_NODES: "2"
# List of plugins to install via dedicated init container
plugins: []
# - ingest-attachment
# - mapper-size

loggingYml:
# you can override this by setting a system property, for example -Des.logger.level=DEBUG
es.logger.level: INFO
rootLogger: ${es.logger.level}, console
logger:
# log action execution errors for easier debugging
action: DEBUG
# reduce the logging for aws, too much is logged under the default INFO
com.amazonaws: WARN
appender:
console:
type: console
layout:
type: consolePattern
conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

log4j2Properties: |
status = error
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
logger.searchguard.name = com.floragunn
logger.searchguard.level = info

client:
name: client
replicas: 2
serviceType: ClusterIP
## If coupled with serviceType = "NodePort", this will set a specific nodePort to the client HTTP port
# httpNodePort: 30920
loadBalancerIP: {}
loadBalancerSourceRanges: {}
## (dict) If specified, apply these annotations to the client service
# serviceAnnotations:
# example: client-svc-foo
heapSize: "512m"
# additionalJavaOpts: "-XX:MaxRAM=512m"
antiAffinity: "soft"
nodeAffinity: {}
nodeSelector: {}
tolerations: []
# terminationGracePeriodSeconds: 60
initResources: {}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
resources:
limits:
cpu: "1"
# memory: "1024Mi"
requests:
cpu: "25m"
memory: "512Mi"
priorityClassName: ""
## (dict) If specified, apply these annotations to each client Pod
# podAnnotations:
# example: client-foo
podDisruptionBudget:
enabled: false
minAvailable: 1
# maxUnavailable: 1
hooks: {}
## (string) Script to execute before the client pod stops.
# preStop: |-

## (string) Script to execute after the client pod starts.
# postStart: |-
ingress:
enabled: false
# user: NAME
# password: PASSWORD
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local

master:
name: master
exposeHttp: false
replicas: 3
heapSize: "512m"
# additionalJavaOpts: "-XX:MaxRAM=512m"
persistence:
enabled: true
accessMode: ReadWriteOnce
name: data
size: "4Gi"
# storageClass: "ssd"
readinessProbe:
httpGet:
path: /_cluster/health?local=true
port: 9200
initialDelaySeconds: 5
antiAffinity: "soft"
nodeAffinity: {}
nodeSelector: {}
tolerations: []
# terminationGracePeriodSeconds: 60
initResources: {}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
resources:
limits:
cpu: "1"
# memory: "1024Mi"
requests:
cpu: "25m"
memory: "512Mi"
priorityClassName: ""
## (dict) If specified, apply these annotations to each master Pod
# podAnnotations:
# example: master-foo
podManagementPolicy: OrderedReady
podDisruptionBudget:
enabled: false
minAvailable: 2 # Same as `cluster.env.MINIMUM_MASTER_NODES`
# maxUnavailable: 1
updateStrategy:
type: OnDelete
hooks: {}
## (string) Script to execute before the master pod stops.
# preStop: |-

## (string) Script to execute after the master pod starts.
# postStart: |-

data:
name: data
exposeHttp: false
replicas: 2
heapSize: "1536m"
# additionalJavaOpts: "-XX:MaxRAM=1536m"
persistence:
enabled: true
accessMode: ReadWriteOnce
name: data
size: "30Gi"
# storageClass: "ssd"
readinessProbe:
httpGet:
path: /_cluster/health?local=true
port: 9200
initialDelaySeconds: 5
terminationGracePeriodSeconds: 3600
antiAffinity: "soft"
nodeAffinity: {}
nodeSelector: {}
tolerations: []
initResources: {}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
resources:
limits:
cpu: "1"
# memory: "2048Mi"
requests:
cpu: "25m"
memory: "1536Mi"
priorityClassName: ""
## (dict) If specified, apply these annotations to each data Pod
# podAnnotations:
# example: data-foo
podDisruptionBudget:
enabled: false
# minAvailable: 1
maxUnavailable: 1
podManagementPolicy: OrderedReady
updateStrategy:
type: OnDelete
hooks:
## Drain the node before stopping it and re-integrate it into the cluster after start.
## When enabled, it supersedes `data.hooks.preStop` and `data.hooks.postStart` defined below.
drain:
enabled: true

## (string) Script to execute before the data pod stops. Ignored if `data.hooks.drain.enabled` is true (default)
# preStop: |-
# #!/bin/bash
# exec &> >(tee -a "/var/log/elasticsearch-hooks.log")
# NODE_NAME=${HOSTNAME}
# curl -s -XPUT -H 'Content-Type: application/json' '{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/settings' -d "{
# \"transient\" :{
# \"cluster.routing.allocation.exclude._name\" : \"${NODE_NAME}\"
# }
# }"
# echo "Node ${NODE_NAME} is excluded from the allocation"

## (string) Script to execute after the data pod starts. Ignored if `data.hooks.drain.enabled` is true (default)
# postStart: |-
# #!/bin/bash
# exec &> >(tee -a "/var/log/elasticsearch-hooks.log")
# NODE_NAME=${HOSTNAME}
# CLUSTER_SETTINGS=$(curl -s -XGET "http://{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/settings")
# if echo "${CLUSTER_SETTINGS}" | grep -E "${NODE_NAME}"; then
# echo "Activate node ${NODE_NAME}"
# curl -s -XPUT -H 'Content-Type: application/json' "http://{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/settings" -d "{
# \"transient\" :{
# \"cluster.routing.allocation.exclude._name\" : null
# }
# }"
# fi
# echo "Node ${NODE_NAME} is ready to be used"

## Sysctl init container to setup vm.max_map_count
# see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
# and https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html#mlockall
sysctlInitContainer:
enabled: true
## Chown init container to change ownership of data and logs directories to elasticsearch user
chownInitContainer:
enabled: true
## Additional init containers
extraInitContainers: |

forceIpv6: false
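`extraInitContainers` is left as an empty template string here; both StatefulSets render it through `tpl ... | indent 6` under `initContainers`, so whatever is supplied must be a valid container list once indented. A hypothetical sketch follows; the `wait-for-dns` container and `busybox:1.35` image are illustrative assumptions, not part of this commit.

```yaml
# Hypothetical extraInitContainers value -- rendered via tpl, so Helm templates are allowed.
extraInitContainers: |
  - name: wait-for-dns
    image: busybox:1.35
    command: ["sh", "-c", "until nslookup {{ template \"elasticsearch.fullname\" . }}-discovery; do sleep 2; done"]
```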
@@ -477,7 +477,7 @@ releases:
- name: elasticsearch
condition: elastic.enabled
namespace: {{ .Environment.Values.monitoring.namespace }}
chart: stable/elasticsearch
chart: charts/elasticsearch
labels:
role: setup
group: monitoring
@@ -489,9 +489,10 @@ releases:
- client:
resources:
limits:
memory: 1000Mi
memory: 2Gi
requests:
memory: 800Mi
memory: 1024Mi
heapSize: "1024m"

- master:
resources:
@@ -507,7 +508,7 @@ releases:
size: 650Gi
resources:
limits:
cpu: 1500m
cpu: 3
memory: 4Gi
requests:
cpu: 1500m
36
terraform/wifi-289708231103/core-dumps-s3/.terraform.lock.hcl
generated
Normal file
36
terraform/wifi-289708231103/core-dumps-s3/.terraform.lock.hcl
generated
Normal file
@@ -0,0 +1,36 @@
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.

provider "registry.terraform.io/carlpett/sops" {
version = "0.7.1"
constraints = "~> 0.5"
hashes = [
"h1:/LNLI9qKgRjlHhyl1M/6BA+HVUMQ9RQApZgyfV4RAJ4=",
"zh:203d5ab6af38efb9fc84fdbb303218aa5012dc8d28e700642be41bbc4b1c2fa1",
"zh:5684a2dc65da50824fb4275c10ac452e6512dd0d60a9abd5f505e67e7b9d759a",
"zh:b4311d7cae0b29f2dcf5a18a8297ed0787f59b140102547da9f8b61af27e15b6",
"zh:bbf9e6956191a95dfbb8336b1cc8a059ceba4d3f1f22a83e4f08662cd1cabe9b",
"zh:cd8f244d26f9733b9b238db22b520e69cdc68262093db3389ec466b1df2cadd8",
"zh:d855e4dc2ad41d8a877dd5dcd51061233fc5976c5c9afceb5a973e6a9f76b1d9",
"zh:ed584cf42015e1f10359cc2d85b12e348c5c1581ae781be29e0e3dfb7f43590b",
]
}

provider "registry.terraform.io/hashicorp/aws" {
version = "4.33.0"
hashes = [
"h1:2MWU+HIKKivfhY8dAU1cR0xxwlzNrWOZEQs8BApQ/Ao=",
"zh:421b24e21d7fac4d65d97438d2c0a4effe71d3a1bd15820d6fde2879e49fe817",
"zh:4378a84ca8e2a6990f47abc24367b801e884be928671b37ad7b8e7b656f73e48",
"zh:54e0d7884edf3cefd096715794d32b6532138dca905f0b2fe84fb2117594293c",
"zh:6269a7d0312057db5ded669e9f7f9bd80fb6dcb549b50d8d7f3f3b2a0361b8a5",
"zh:67f57d16aa3db493a3174c3c5f30385c7af9767c4e3cdca14e5a4bf384ff59d9",
"zh:7d4d4a1d963e431ffdc3348e3a578d3ba0fa782b1f4bf55fd5c0e527d24fed81",
"zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425",
"zh:cd8e3d32485acb49c1b06f63916fec8e73a4caa6cf88ae9c4bf236d6f5d9b914",
"zh:d586fd01195bd3775346495e61806e79b6012e745dc05e31a30b958acf968abe",
"zh:d76122060f25ab87887a743096a42d47ba091c2c019ac13ce6b3973b2babe5a3",
"zh:e917d36fe18eddc42ec743b3152b4dcb4853b75ea7a679abd19bdf271bc48221",
"zh:eb780860d5c04f43a018aef564e76a2d84e9aa68984fa1f968ca8c09d23a611a",
]
}