Compare commits


121 Commits

Author SHA1 Message Date
Dmitry Dunaev
ea263e23d6 [TOOLS-146] Del: usage of docker credentials as anonymous pull is allowed 2021-06-24 12:54:43 +03:00
Max
f0cdde9a87 increase memory values for SSC, SPC and prov service (#81)
* increase memory values for SSC, SPC and prov service
* update values for Kafka and Zookeeper
2021-06-23 15:09:31 +02:00
AkshayJagadish-ne
d8544d52f0 Merge pull request #78 from Telecominfraproject/WIFI-2434
WIFI 2434: Update SDK master to use image tag 1.2.0-SNAPSHOT
2021-05-28 11:52:22 -04:00
Akshay Jagadish
8d750222fa Merge branch 'WIFI-2434' of https://github.com/Telecominfraproject/wlan-cloud-helm into WIFI-2434 2021-05-28 11:21:38 -04:00
Akshay Jagadish
402abf4876 Update SDK master to use image tag 1.2.0-SNAPSHOT 2021-05-28 11:07:21 -04:00
Akshay Jagadish
0f9d28113a Update SDK master to use image tag 1.2.0-SNAPSHOT 2021-05-28 11:07:21 -04:00
Max
02c8dbc94c make all container images configurable (#67)
* make all container images configurable and default to tip-docker-cache-repo.jfrog.io registry
2021-05-27 13:39:24 +02:00
Akshay Jagadish
a13323f4ca Update SDK master to use image tag 1.2.0-SNAPSHOT 2021-05-26 18:54:29 -04:00
Akshay Jagadish
e492e51ae8 Update SDK master to use image tag 1.2.0-SNAPSHOT 2021-05-26 18:51:08 -04:00
Dmitry Dunaev
a4659451c1 Merge pull request #77 from Telecominfraproject/feature/aws-internal-example
Add: values for AWS internal setup [TOOLS-136]
2021-05-12 14:16:33 +03:00
Dmitry Dunaev
884d9411da Add: values for AWS internal setup [TOOLS-136] 2021-05-12 12:16:14 +02:00
Max
deb12d9d24 add AWS EKS values file (#76) 2021-04-26 16:06:09 +02:00
Max
7f0da5969d WIFI-1998 support newer api versions (#74)
* migrate to newer Ingress API versions
* add changelog entry
2021-04-26 16:05:58 +02:00
norm-traxler
c73350c535 Merge pull request #75 from Telecominfraproject/WIFI-2026
WIFI-2026 Change docker tags from 0.0.1-SNAPSHOT to 1.1.0-SNAPSHOT
2021-04-16 17:09:28 -04:00
Akshay Jagadish
f253034335 WIFI-2026 Change docker tags from 0.0.1-SNAPSHOT to 1.1.0-SNAPSHOT 2021-04-16 16:12:00 -04:00
Max
f85004ffc4 add post deployment notes (#73) 2021-04-13 12:06:10 +02:00
Max Brenner
d5af204c09 rebase v1.0.1 changelog entries 2021-04-13 10:52:58 +02:00
Max Brenner
ca70570de7 rebase v1.0.0 changelog 2021-04-01 13:52:16 +02:00
Max
d0a504a7c2 automate Helm chart release process (#69)
* automate Helm chart release process
2021-03-24 14:37:08 +01:00
norm-traxler
0d9e6e0afc Merge pull request #70 from Telecominfraproject/WIFI-1724-cassandra-reconnect
[WIFI-1724] SSC reconnect to cassandra after cassandra pod restart
2021-03-23 12:14:55 -04:00
Norm Traxler
af22e767b5 [WIFI-1724] SSC reconnect to cassandra after cassandra pod restart 2021-03-22 15:57:33 -04:00
Dmitry Toptygin
63e784482f WIFI-1877 - add topics in kafka where messages are partitioned by locationId - location_metrics and location_events 2021-03-22 12:34:36 -04:00
AkshayJagadish-ne
9a532cf290 Merge pull request #68 from Telecominfraproject/WIFI-1812-Revert
Changed FE image tag back to 0.0.1-SNAPSHOT
2021-03-19 10:41:40 -04:00
Akshay Jagadish
7c1dd0f5b8 Changed FE image tag back to 0.0.1-SNAPSHOT 2021-03-19 09:51:13 -04:00
AkshayJagadish-ne
724ab141dc Merge pull request #66 from Telecominfraproject/WIFI-1812
WIFI-1812: Change the front-end image tags
2021-03-15 19:06:36 -04:00
Akshay Jagadish
0fb7b37c2c WIFI-1812: Change the front-end image tags 2021-03-15 18:45:54 -04:00
Max
5e68d20255 add exporting of servo beans (#65)
* add exporting of servo beans
* add changelog entry
2021-03-08 13:59:23 +01:00
Gleb Boushev
fa533dde56 Update statefulset.yaml (#64)
work around the imagepullbackoff issue on updates
2021-03-03 14:07:42 +03:00
yongchen-cu
0060ce09ac Merge pull request #61 from Telecominfraproject/WIFI-1319-SslIssue
Wifi 1319 ssl issue
2021-02-22 14:32:54 -05:00
yongchen-cu
8670131e21 Merge pull request #62 from Telecominfraproject/WIFI-1610
WIFI-1610: Changed tag of FE components from latest to 0.0.1-SNAPSHOT
2021-02-22 14:24:33 -05:00
Akshay Jagadish
a15f091632 WIFI-1610: Changed tag of FE components from latest to 0.0.1-SNAPSHOT 2021-02-20 17:57:55 -05:00
Rahul Sharma
b833901b14 WIFI-1319: Renaming tlsv1.3 flag 2021-02-19 22:22:26 -05:00
Rahul Sharma
f8161542cf Moving Ssl.properties out of Secret and reading it instead as a file 2021-02-19 18:36:12 -05:00
Rahul Sharma
98e29d4f21 WIFI-1319: Adding ssl.properties directly 2021-02-19 18:24:38 -05:00
Rahul Sharma
be0f3512ae WIFI-1319: Updating charts to add TLS related properties in ssl.properties.
Since these are only relevant to microK8s environment, we only enable them in it.
2021-02-19 18:04:53 -05:00
Max
de8e8897f1 WIFI-1172 add JMX to Prometheus PoC (#51)
* add JMX to Prometheus PoC

* add JMX prometheus exporter to all Java services
2021-02-19 12:12:43 -05:00
Max
43233798b2 add debug output on failure (#60) 2021-02-17 18:32:05 +01:00
Max
73eec7509a WIFI-1524 add nightly microk8s scenario test (#57)
* add nightly microk8s scenario test
* add README for microk8s setup
2021-02-16 12:47:55 +01:00
Max
f824125224 WIFI-1028 remove vendor specific default values (#40)
* remove vendor specific default values
2021-02-15 12:15:52 +01:00
Max
6b4934c451 adjust resource request/limit values (#59)
* adjust resource request/limit values
* adjust cassandra values
* adjust postgres values
2021-02-11 13:02:24 +01:00
Gleb Boushev
d4a45ad10a found an error (#58)
* found an error
2021-02-11 12:49:33 +03:00
4c74356b41
915eb1d625 WIFI-1478 - all credentials moved to globals (#54)
* all credentials moved to globals

* cassandra fix

* centralized certificates, removed unneded entities

* minor fixes, local-multi-namespace example fixes

* removing unneeded sections in the yaml files

* updates to changelog and multiple namespaces examples

* fixing last couple of services, removed not needed secrets, centralized httpclientconfig.json and ssl.properties

* minor improvements

* changelog reformatted

* fixing startupprobe and changelog

Co-authored-by: Gleb Boushev <4c74356b41@outlook.com>
2021-02-04 13:03:51 +03:00
Max
cfda82150b Create enforce-jira-issue-key.yml (#55) 2021-02-03 11:33:41 +01:00
4c74356b41
fc783ea948 Merge pull request #53 from Telecominfraproject/feature/thirdparties-fixes
fixing docker secret and fixing kafka topics
2021-01-29 16:29:22 +01:00
4c74356b41
86c29ae62c Update README.md 2021-01-29 13:13:41 +03:00
Gleb Boushev
8484fc3f87 fixing docker secret and fixing kafka topics 2021-01-29 10:52:53 +03:00
4c74356b41
a3e523f922 Feature/thirdparties (#49)
* thirdparties replaced with latest bitnami charts

* migration values example for persistence, dev-local example for thirdparties

* removing hardcoded passwords

* changing storage classes to mimic what minikube has

* fixing missing folder

* fixing PR comments, fixing testing build

* forgot to fix the namespace in the testing build

* fixing path issues

* fixing another path issue

* fixing build issues

* improving namespace support

* fixing cleanup task

* fixing yaml files

* further yaml formatting

* Update README.md

* Update testing.yml

Co-authored-by: Gleb Boushev <4c74356b41@outlook.com>
Co-authored-by: Leonid Mirsky <leonid@opsfleet.com>
2021-01-28 16:51:39 +02:00
Leonid Mirsky
c8c1650f5b Merge pull request #52 from Telecominfraproject/changelog-v0.4-bump
Bump Helm chart's version and add initial Changelog
2021-01-28 16:30:42 +02:00
Leonid Mirsky
d8516225a9 Bump Helm chart's version and add initial Changelog 2021-01-27 15:32:34 +02:00
AkshayJagadish-ne
e1b2008a89 WIFI-1287 (#50)
* WIFI-1287

*Removed run.sh as it overrides the run.sh from the back-end
https://github.com/Telecominfraproject/wlan-cloud-services/blob/master/port-forwarding-gateway-docker/src/main/docker/app/run.sh

*created two env variables LOCAL_PORT_RANGE_START and
LOCAL_PORT_RANGE_END with values {{ include "apDebugPortsStart" . }} and
{{ include "apDebugPortsStart" . }} respectively

* removed files folder from configmap
subtracted end port by 1
2021-01-20 15:44:36 -05:00
Chris Busch
7bd33edb36 fix wlan.local references
Signed-off-by: Chris Busch <cbusch@fb.com>
2021-01-19 08:25:18 -05:00
AkshayJagadish-ne
cc987968d8 Merge pull request #48 from Telecominfraproject/revert-46-WIFI-1287
Revert "WIFI-1287  Add configurable variable for pfgw local port range"
2021-01-18 19:40:30 -05:00
AkshayJagadish-ne
d98d4ace39 Revert "WIFI-1287 Add configurable variable for pfgw local port range (#46)"
This reverts commit aac7b07801.
2021-01-18 19:39:49 -05:00
AkshayJagadish-ne
aac7b07801 WIFI-1287 Add configurable variable for pfgw local port range (#46) 2021-01-18 16:02:34 -05:00
Chris Busch
da7bbf1723 Merge pull request #47 from Telecominfraproject/microk8s-patch-1
New microk8s with metallb deployment file
2021-01-18 11:53:50 -05:00
Chris Busch
76fca7ef14 New microk8s with metallb deployment file
Signed-off-by: Chris Busch <cbusch@fb.com>
2021-01-17 15:08:09 -05:00
Max
e5d5c92f61 decrease cpu requests (#45) 2021-01-14 17:43:39 +01:00
Max
b2d8d7b205 WIFI-990 prefix ap debug ports (#34)
* WIFI-990: disable hardcoded NodePort for AWS deployments
* add prefix to AP debug ports
* update debug port range in port-forwarding run.sh

Co-authored-by: Eugene Taranov <eugene@opsfleet.com>
2021-01-11 14:34:20 +01:00
Max
0a1f9abd00 WIFI-1259 disable kube-score validation (#44)
* disable kube-score validation
2021-01-11 10:24:20 +01:00
Max
63a175bd29 WIFI-1246 add default CPU/memory limits/requests (#42)
* add default CPU/memory limits/requests
* set minimum values to 128Mi and 500m
2021-01-05 18:15:38 +01:00
Max
ee606a6204 remove reference of Toolsmith repo branch (#43) 2021-01-04 18:34:25 +01:00
Max
448ad243a4 add prototype of workflow (#39)
* add prototype of workflow
* generate Helm values for PR deployment
* add proper test
2021-01-04 18:14:22 +01:00
Max
174f1a4308 WIFI-1207 add example values for a multi namespace deployment (#41)
* add example value for a multi namespace deployment
2021-01-04 12:09:44 +01:00
Max
83c14c6548 WIFI-1238 set all imagePullPolicies to Always (#38)
* set all imagePullPolicies to Always
* use chart-level and global image pull policy
* make images configurable
2020-12-23 12:13:09 +01:00
Max
4960fb3654 add default server cert for Ingresses (#36) 2020-12-22 18:34:31 +01:00
Max
2550ed3ec2 replace hardcoded PSQL references (#35) 2020-12-22 17:43:53 +01:00
AkshayJagadish-ne
786fb43652 Merge pull request #37 from Telecominfraproject/zone3-albingress-testing
Changed mqtt version to 2.0.3 for more logs
2020-12-22 11:35:30 -05:00
Akshay Jagadish
ea829b67c8 Increase mqtt logging 2020-12-22 11:00:53 -05:00
Akshay Jagadish
63163f7520 Changed version to 2.0.3 for more logs 2020-12-21 19:07:59 -05:00
Max
3c1afd50cb Merge pull request #32 from Telecominfraproject/WIFI-1199-fix-cassandra-headless-hostname
WIFI-1199 fix cassandra headless hostname
2020-12-17 16:13:46 +01:00
Max
f46612fa61 Merge pull request #33 from Telecominfraproject/WIFI-1120-remove-hardcoded-kafka-references
WIFI-1120 remove hardcoded kafka references
2020-12-17 15:43:18 +01:00
Max Brenner
f10c416e19 add missing env variable for Kafka 2020-12-17 14:52:54 +01:00
Max Brenner
b5a47cc61c use common include for Cassandra headless service 2020-12-15 12:35:27 +01:00
Max Brenner
fac4df0a64 remove hardcoded reference from Cassandra test config 2020-12-15 12:34:32 +01:00
Max Brenner
5b81f38a0c remove hardcoded Kafka headless service references 2020-12-15 11:45:37 +01:00
Max Brenner
13cac13445 remove hardcoded reference in Kafka config 2020-12-14 18:31:58 +01:00
Max Brenner
2174cd4971 preserve newlines in config map 2020-12-14 18:19:58 +01:00
Max Brenner
ab6a4528d8 dynamically adjust references to Cassandra headless service 2020-12-14 17:56:05 +01:00
Dmitry Toptygin
6a846f9358 added scalability section with known tunable parameters into opensync-gw-cloud, wlan-portal-service, wlan-prov-service, wlan-spc-service, wlan-ssc-service. provided an example of overrides for those parameters in tip-wlan/resources/environments/dev-local.yaml 2020-12-09 16:35:51 -05:00
Dmitry Toptygin
8bd62a3dc6 second attempt to use explicit string as the value of environment variable tip_wlan_ovsdb_listener_threadPoolSize 2020-12-09 15:18:44 -05:00
Dmitry Toptygin
4ad3bb3b0c use explicit string as the value of environment variable tip_wlan_ovsdb_listener_threadPoolSize 2020-12-09 15:08:31 -05:00
Dmitry Toptygin
22ab0dbcf0 in opensync-gw-cloud exposed a property tip_wlan_ovsdb_listener_threadPoolSize 2020-12-09 14:00:59 -05:00
eugenetaranov-opsfleet
0c6f53eb9e WIFI-991: alb ingress (#29)
* rebased

* removed map-hash-bucket-size from nginx config map

* synced requests/limits for cassandra

* enabled alb for graphql and static

* added plain http

* added ingress + alb for tip-wlan-wlan-portal-service

* enabled http->https redirect for ingress/alb services

* enabled nlb for opensync-gw-cloud and mqtt services

* disabled nginx ingress

* refactored service annotations

* sync: works

* disabled nodePort hardcoded value for wlan-portal-service AWS deployment

* synced helm values

* sync

* fix prov service ssl creds

* enabled efs

* sync dev-amazon-tip from master

* disabled nodeport static for opensync-gw-cloud

* wlan-port-forwarding-gateway-service nodeport static

* opensync-mqttt-broker nodeport static

* removed whitespace in tip-wlan/charts/nginx-ingress-controller/templates/controller-configmap.yaml

* renamed nodePort_static

* renamed alb_https_redirect

* added comment to nodePortStatic

* added a comment to   lb_https_redirect

Co-authored-by: Eugene Taranov <eugene@taranov.me>
2020-12-03 12:48:52 +03:00
eugenetaranov-opsfleet
b5ff727d92 WIFI-752: healthchecks cloudsdk (#28)
* opensync-gw-cloud healthchecks

* added sha image ref

* upd depends-on to latest tag
2020-12-02 09:56:07 +03:00
AkshayJagadish-ne
d6a6caf2b3 Merge pull request #25 from Telecominfraproject/Reversion
Reverted lb auto-provisioning changes due to issues with qa deployment
2020-11-20 18:25:27 -05:00
Akshay Jagadish
0a9968fb5b changed testcluster to demo as a part of reversion 2020-11-20 18:18:48 -05:00
Akshay Jagadish
b5e1ae767f resolved conflict 2020-11-20 17:25:27 -05:00
AkshayJagadish-ne
58434b97e3 Merge pull request #24 from Telecominfraproject/NETEXP-485
added readme
2020-11-20 14:25:14 -05:00
Akshay Jagadish
c370a7f9de adde readme 2020-11-20 13:28:06 -05:00
AkshayJagadish-ne
c84c9357e7 Merge pull request #23 from Telecominfraproject/NETEXP-485
NETEXP-485: reorganized fields to support external address and port customization
2020-11-19 00:18:41 -05:00
Akshay Jagadish
59fbd585a3 reverted fqdn change 2020-11-19 00:15:15 -05:00
Akshay Jagadish
04a3cd4c40 NETEXP-485: reorganized fields to support old netexp-485 merge 2020-11-19 00:12:01 -05:00
AkshayJagadish-ne
9dd7585298 Merge pull request #22 from Telecominfraproject/NETEXP-485
NETEXP-485
2020-11-18 18:22:15 -05:00
Akshay Jagadish
052d03c056 Grouped env variables and seperated port/address fields for
opensync-externalhost
2020-11-18 13:59:19 -05:00
eugenetaranov-opsfleet
d113550060 WIFI-991: alb ingress (#21)
* synced requests/limits for cassandra

* enabled alb for graphql and static

* added ingress + alb for tip-wlan-wlan-portal-service

* enabled http->https redirect for ingress/alb services

* enabled nlb for opensync-gw-cloud and mqtt services

* disabled nginx ingress

* refactored service annotations

* disabled nodePort hardcoded value for wlan-portal-service AWS deployment
2020-11-18 18:36:30 +03:00
Akshay Jagadish
972827d7dc NETEXP-485
added support for overriding external host ports
2020-11-16 18:25:31 -05:00
Rahul Sharma
0bf4009350 Merge pull request #20 from Telecominfraproject/RemoveCloudDeploymentFlag
Remove CloudDeployment flag from Helm charts.
2020-11-04 19:54:43 -05:00
Rahul Sharma
dfa6bfc728 Remove CloudDeployment flag from Helm charts.
Not needed if pods(running in local env) can reach Jfrog to retrieve schema.
2020-11-04 18:54:41 -05:00
eugenetaranov-opsfleet
2adf0ae0ef local development env with minikube (#19)
* - added dev-local.yaml values file;
- updated README;

* - enabling logs;

* - removed hardcoded credentials;

* cleanup

* upd docs

* removed hardcoded values in kafka/admin-client.properties
2020-10-28 20:29:19 +03:00
eugenetaranov-opsfleet
a996f58f4d Revert minikube branch (#18)
* Revert "cloudsdk for minikube local environment (#9)"

This reverts commit 6af16ea911.

* Revert "fixed missing values in wlan-prov (#17)"

This reverts commit bd2a939b90.
2020-10-22 19:13:45 +03:00
eugenetaranov-opsfleet
bd2a939b90 fixed missing values in wlan-prov (#17) 2020-10-22 18:17:38 +03:00
eugenetaranov-opsfleet
6af16ea911 cloudsdk for minikube local environment (#9)
* minikube local deployment;
minor refactoring of duplicated healthchecks

* fix readinessProbe for mqtt

* reverted tip-wlan/resources/environments/dev-amazon-tip.yaml

* templated cassandra-application.conf

* removed common:

* removed comments

* rolled back DUMMY_PASSWORD

* rollback password

* templated user/passwd for cassandra

Co-authored-by: Eugene Taranov <eugene@taranov.me>
2020-10-21 22:47:00 +03:00
Rahul Sharma
a5f3594b35 Updated the VisibleHostName property in run.sh in the port-forwarder chart 2020-10-09 13:16:32 -04:00
AkshayJagadish-ne
9259e8c168 Merge pull request #15 from Telecominfraproject/AJ
Changes to support Port forwarding gateway properties
2020-10-08 13:23:18 -04:00
Akshay Jagadish
d41af1c6c0 Changes to support Port forwarding gateway properties 2020-10-08 13:22:16 -04:00
AkshayJagadish-ne
af9efe1970 Merge pull request #14 from Telecominfraproject/AJsBranch
correction in pg_hba.cong: changed ::0 to ::/0
2020-10-07 22:55:29 -04:00
Akshay Jagadish
a1f14776ef correction in pg_hba.cong: changed ::0 to ::/0 2020-10-07 22:53:48 -04:00
AkshayJagadish-ne
be8419e950 Merge pull request #13 from Telecominfraproject/AJsBranch
Reverting changes
2020-10-07 20:07:39 -04:00
Akshay Jagadish
514d6b7085 Reverting changes 2020-10-07 20:05:34 -04:00
AkshayJagadish-ne
554ffeb425 Merge pull request #12 from Telecominfraproject/AJsBranch
Changes to support Port-forwarding-gateway properties: PF_GATEWAY_EXT_HOST and  PF_GATEWAY_EXT_PORT
2020-10-07 19:33:21 -04:00
Akshay Jagadish
eab613b458 Created seperate 'externallyVisible' field with host and port values of
wlan-port-forwarding-gateway-service
2020-10-07 19:28:52 -04:00
Akshay Jagadish
bed988a49f Changes to support Port forwarding gateway properties
PF_GATEWAY_EXT_HOST and  PF_GATEWAY_EXT_PORT
2020-10-07 17:09:55 -04:00
Rahul Sharma
7a12cc59f9 Adding IpV6 support in pg_hba.conf and disabling port-forwarding chart in disable-allcharts.yaml 2020-10-07 14:51:28 -04:00
AkshayJagadish-ne
552cd31453 Merge pull request #11 from Telecominfraproject/AJ
Aj
2020-10-02 19:27:47 -04:00
Akshay Jagadish
4e4104c8b6 corrections 2020-10-02 19:15:53 -04:00
Akshay Jagadish
ae2afd1a9e removed timestamp 2020-10-02 18:28:56 -04:00
Akshay Jagadish
ae10d4d024 indentation 2020-10-02 16:45:41 -04:00
Akshay Jagadish
bae4ec6afa indentation 2020-10-02 16:44:14 -04:00
Akshay Jagadish
220407760c WIFI-845: Support Kubernetes Deployment Versioning 2020-10-02 14:39:51 -04:00
Akshay Jagadish
012050b8ce Indentation 2020-09-28 14:42:15 -04:00
Akshay Jagadish
33217abda1 Review changes that would be added to deployment testing 2020-09-28 14:40:06 -04:00
395 changed files with 3471 additions and 14697 deletions


@@ -0,0 +1,37 @@
name: Ensure Jira issue is linked
on:
pull_request:
types: [opened, edited, reopened, synchronize]
jobs:
check_for_issue_key:
runs-on: ubuntu-latest
steps:
- name: Log into Jira
uses: atlassian/gajira-login@v2.0.0
env:
JIRA_BASE_URL: ${{ secrets.TIP_JIRA_URL }}
JIRA_USER_EMAIL: ${{ secrets.TIP_JIRA_USER_EMAIL }}
JIRA_API_TOKEN: ${{ secrets.TIP_JIRA_API_TOKEN }}
- name: Find issue key in PR title
id: issue_key_pr_title
continue-on-error: true
uses: atlassian/gajira-find-issue-key@v2.0.2
with:
string: ${{ github.event.pull_request.title }}
from: "" # required workaround for bug https://github.com/atlassian/gajira-find-issue-key/issues/24
- name: Find issue key in branch name
continue-on-error: true
id: issue_key_branch_name
uses: atlassian/gajira-find-issue-key@v2.0.2
with:
string: ${{ github.event.pull_request.head.ref }}
from: "" # required workaround for bug https://github.com/atlassian/gajira-find-issue-key/issues/24
- name: Check if issue key was found
run: |
if [[ -z "${{ steps.issue_key_pr_title.outputs.issue }}" && -z "${{ steps.issue_key_branch_name.outputs.issue }}" ]]; then
echo "Jira issue key could not be found!"
exit 1
fi
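The two `gajira-find-issue-key` steps above reduce to matching a Jira key pattern against the PR title and the branch name. A minimal sketch of that matching — the `extract_issue_key` helper is hypothetical, not part of the action:

```shell
# Hypothetical helper mimicking what gajira-find-issue-key looks for:
# an uppercase project key, a dash, and an issue number (e.g. WIFI-2434).
extract_issue_key() {
  printf '%s\n' "$1" | grep -oE '[A-Z][A-Z0-9]+-[0-9]+' | head -n 1
}

extract_issue_key "WIFI-2434: Update SDK master to use image tag 1.2.0-SNAPSHOT"
# prints: WIFI-2434
```

The final check then only has to verify that at least one of the two steps produced a non-empty key.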


@@ -3,28 +3,60 @@ name: Helm CI - TIP WLAN Cloud Master
on:
push:
branches: [ master ]
tags: [ "v*" ]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
ssh-key: ${{ secrets.GH_AUTOMATION_KEY }}
submodules: true
- name: Login to TIP Docker registry
uses: azure/docker-login@v1
with:
login-server: tip-tip-wlan-cloud-docker-repo.jfrog.io
username: build-pipeline
password: ${{ secrets.DOCKER_REPO_PASSWORD }}
- name: Login to TIP Helm chart registry
run: helm repo add tip-wlan-cloud-helm-virtual-repo https://tip.jfrog.io/artifactory/tip-wlan-cloud-helm-virtual-repo --username build-pipeline --password ${{ secrets.HELM_REPO_PASSWORD }}
- name: Build tip-wlan chart file
run: tar -czf tip-wlan.tgz tip-wlan
- name: Upload tip-wlan chart to the TIP helm registry
run: curl -ubuild-pipeline:${{ secrets.HELM_REPO_PASSWORD }} -T tip-wlan.tgz "https://tip.jfrog.io/artifactory/tip-wlan-cloud-helm-repo/tip-wlan.tgz"
- name: Verify that chart was uploaded successfully
run: |
helm repo update
helm search repo tip
if [[ "${{ github.ref }}" == "refs/tags/"* ]]; then
PACKAGE_OPTS="--version ${GITHUB_REF#refs/tags/v}"
else
PACKAGE_OPTS=""
fi
helm package $PACKAGE_OPTS -u tip-wlan
- name: Store chart as artifact
uses: actions/upload-artifact@v2
with:
name: helm-chart
path: tip-wlan-*.tgz
- name: Upload tip-wlan chart to the TIP helm registry
run: |
if [[ "${{ github.ref }}" == "refs/tags/"* ]]; then
curl -ubuild-pipeline:${{ secrets.HELM_REPO_PASSWORD }} -T tip-wlan-${GITHUB_REF#refs/tags/v}.tgz "https://tip.jfrog.io/artifactory/tip-wlan-cloud-helm-repo/tip-wlan-${GITHUB_REF#refs/tags/v}.tgz"
else
curl -ubuild-pipeline:${{ secrets.HELM_REPO_PASSWORD }} -T tip-wlan-*.tgz "https://tip.jfrog.io/artifactory/tip-wlan-cloud-helm-repo/tip-wlan-master.tgz"
fi
release:
runs-on: ubuntu-latest
needs: [ build ]
if: startsWith(github.ref, 'refs/tags/')
steps:
- uses: actions/checkout@v2
- name: setup Python
uses: actions/setup-python@v2
with:
python-version: "3.8"
- name: install keepachangelog
run: pip install keepachangelog
- name: create release description
continue-on-error: true
run: python .github/workflows/prepare-release-description.py ${GITHUB_REF#refs/tags/v} > RELEASE.md
- name: download Helm chart artifact
uses: actions/download-artifact@v2
with:
name: helm-chart
- name: create release
uses: softprops/action-gh-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
files: tip-wlan-*.tgz
body_path: RELEASE.md
prerelease: ${{ contains(github.ref, 'rc') }}
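The `${GITHUB_REF#refs/tags/v}` expansions above use plain POSIX prefix stripping to turn a pushed tag ref into a chart version. A standalone sketch of the convention — the ref value is an example, not taken from a real run:

```shell
# GITHUB_REF is set by GitHub Actions; the value below is an example for a tag push.
GITHUB_REF="refs/tags/v1.0.1"

case "$GITHUB_REF" in
  refs/tags/*) VERSION="${GITHUB_REF#refs/tags/v}" ;;  # strip the literal prefix "refs/tags/v"
  *)           VERSION="master" ;;                     # branch pushes publish under "master"
esac

echo "$VERSION"   # prints: 1.0.1
```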


@@ -45,7 +45,8 @@ jobs:
helm template -f values-test.yaml . | /tmp/k8s-validators/kubeval --ignore-missing-schemas
echo "Kube-score test"
helm template -f values-test.yaml . | /tmp/k8s-validators/kube-score score -
# will be fixed and enabled again in https://telecominfraproject.atlassian.net/browse/WIFI-1258
helm template -f values-test.yaml . | /tmp/k8s-validators/kube-score score - || true
- name: Test glusterfs
working-directory: glusterfs/kube-templates
run: |
@@ -53,4 +54,5 @@ jobs:
/tmp/k8s-validators/kubeval *.yaml
echo "Kube-score test"
/tmp/k8s-validators/kube-score score *.yaml
# will be fixed and enabled again in https://telecominfraproject.atlassian.net/browse/WIFI-1258
/tmp/k8s-validators/kube-score score *.yaml || true
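The `|| true` appended to the kube-score invocations is what demotes validation failures to warnings: CI run steps use fail-fast shell semantics, so a non-zero exit would otherwise abort the job. A minimal sketch of the effect, with a hypothetical stand-in for a failing kube-score run:

```shell
set -e                       # CI run steps abort on the first failing command
kube_score() { return 1; }   # hypothetical stand-in for a failing `kube-score score` run

kube_score || true           # '|| true' forces a zero exit status, so the step keeps going
echo "kube-score reported issues, but the job continues"
```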


@@ -0,0 +1,98 @@
name: Nightly testing of all supported deployment scenarios
on:
workflow_dispatch:
schedule:
- cron: '15 0 * * *'
defaults:
run:
shell: bash
jobs:
microk8s:
runs-on: ubuntu-latest
steps:
- name: Checkout PKI scripts repo
uses: actions/checkout@v2
with:
path: wlan-pki-cert-scripts
repository: Telecominfraproject/wlan-pki-cert-scripts
- name: Checkout Cloud SDK repo
uses: actions/checkout@v2
with:
path: wlan-cloud-helm
repository: Telecominfraproject/wlan-cloud-helm
- name: Generate and copy certs
working-directory: wlan-pki-cert-scripts
run: |
./generate_all.sh
./copy-certs-to-helm.sh ../wlan-cloud-helm
- name: Determine public IP address
id: ip
uses: haythem/public-ip@v1.2
- uses: balchua/microk8s-actions@v0.2.1
with:
channel: 'latest/stable'
addons: '["dns", "helm3", "storage", "metallb:${{ steps.ip.outputs.ipv4 }}-${{ steps.ip.outputs.ipv4 }}"]'
- name: Deploy Cloud SDK
working-directory: wlan-cloud-helm
run: |
helm dependency update tip-wlan
# Github runners only have 2 CPU cores and 7GB of RAM. Thus we need to disable some of our resource requests
helm upgrade --install tip-wlan tip-wlan -f tip-wlan/example-values/microk8s-basic/values.yaml --create-namespace --namespace tip --set cassandra.resources=null --wait --timeout 10m
- name: Show pod state on deployment failure
if: failure()
run: |
kubectl get pods -n tip
kubectl describe pods -n tip
- name: Set custom DNS entries
run: |
sudo sh -c "echo -n \"\n${{ steps.ip.outputs.ipv4 }} wlan-ui.wlan.local wlan-ui-graphql.wlan.local\" >> /etc/hosts"
- name: Test HTTP endpoints
run: |
# this is needed to make until work
set +e
urls="https://wlan-ui.wlan.local https://wlan-ui-graphql.wlan.local/graphql"
for url in $urls; do
max_retry=300
counter=0
until curl --silent --insecure $url > /dev/null
do
sleep 1
[[ counter -eq $max_retry ]] && echo "$url not reachable after $counter tries...giving up" && exit 1
echo "#$counter: $url not reachable. trying again..."
((counter++))
done
echo Successfully reached URL $url
done
- name: Test MQTT and OpenSync endpoints
working-directory: wlan-cloud-helm/tip-wlan/resources/certs
run: |
# this is needed to make until work
set +e
endpoints="${{ steps.ip.outputs.ipv4 }}:1883 ${{ steps.ip.outputs.ipv4 }}:6640 ${{ steps.ip.outputs.ipv4 }}:6643"
for endpoint in $endpoints; do
max_retry=300
counter=0
until echo Q | openssl s_client -connect $endpoint -CAfile cacert.pem -cert clientcert.pem -key clientkey.pem > /dev/null
do
sleep 1
[[ counter -eq $max_retry ]] && echo "$endpoint not reachable after $counter tries...giving up" && exit 1
echo "#$counter: $endpoint not reachable. trying again..."
((counter++))
done
echo Successfully reached endpoint $endpoint
done
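Both test steps above share the same poll-until-reachable loop (with `set +e` so the `until` condition may fail without killing the step). Factored into a reusable function — the `wait_for` name and the bounded retry budget are illustrative, not part of the workflow:

```shell
# Sketch of the retry loop used in both test steps above: poll a URL until
# curl succeeds or the retry budget is exhausted.
wait_for() {
  url="$1"
  max_retry="${2:-300}"
  counter=0
  until curl --silent --insecure "$url" > /dev/null; do
    if [ "$counter" -ge "$max_retry" ]; then
      echo "$url not reachable after $counter tries...giving up" >&2
      return 1
    fi
    sleep 1
    counter=$((counter + 1))
  done
  echo "Successfully reached $url"
}
```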


@@ -0,0 +1,24 @@
import sys
import keepachangelog
CATEGORIES = ['added', 'changed', 'deprecated', 'removed', 'fixed', 'security']
version = sys.argv[1]
try:
changes = keepachangelog.to_dict("CHANGELOG.md")[version]
except KeyError:
print(f'No changelog entry for version {version}', file=sys.stderr)
exit(1)
print('## Changelog')
for category in CATEGORIES:
entries = changes.get(category, [])
if entries:
print(f'### {category.capitalize()}')
for entry in entries:
print(f'- {entry}')

.github/workflows/testing.yml

@@ -0,0 +1,103 @@
name: CloudSDK deployment and testing
env:
PR_NUMBER: ${{ github.event.number }}
HELM_RELEASE_PREFIX: tip-wlan
AWS_EKS_NAME: tip-wlan-main
AWS_DEFAULT_OUTPUT: json
AWS_DEFAULT_REGION: us-east-2
AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
on:
pull_request:
branches: [ master ]
defaults:
run:
shell: bash
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout required repos
uses: actions/checkout@v2
with:
path: wlan-pki-cert-scripts
repository: Telecominfraproject/wlan-pki-cert-scripts
- name: Checkout Cloud SDK repo
uses: actions/checkout@v2
with:
path: wlan-cloud-helm
repository: Telecominfraproject/wlan-cloud-helm
- name: Checkout helm values repo
uses: actions/checkout@v2
with:
path: Toolsmith
repository: Telecominfraproject/Toolsmith
token: ${{ secrets.PAT_TOKEN }}
- name: Generate Helm values file
run: |
./Toolsmith/helm-values/aws-cicd-testing-pr-deployment.yaml.sh ${{ env.PR_NUMBER }} > pr-deployment.yaml
- name: Generate certs
working-directory: wlan-pki-cert-scripts
run: |
./generate_all.sh
./copy-certs-to-helm.sh ../wlan-cloud-helm
- name: Get kubeconfig for EKS ${{ env.AWS_EKS_NAME }}
run: |
aws eks update-kubeconfig --name ${{ env.AWS_EKS_NAME }}
- name: Deploy Cloud SDK
run: |
helm dependency update wlan-cloud-helm/${{ env.HELM_RELEASE_PREFIX }}
# using a timeout of 20 minutes as the EKS nodes may need to be scaled which takes some time
helm upgrade --install ${{ env.HELM_RELEASE_PREFIX }}-pr-${{ env.PR_NUMBER }} wlan-cloud-helm/tip-wlan -f pr-deployment.yaml --create-namespace --namespace ${{ env.HELM_RELEASE_PREFIX }}-pr-${{ env.PR_NUMBER }} --wait --timeout 20m
test:
runs-on: ubuntu-latest
needs: [ deploy ]
steps:
- name: Execute tests
run: |
echo Running tests...
# this is needed to make until work
set +e
urls="https://wlan-ui-pr-$PR_NUMBER.cicd.lab.wlan.tip.build https://wlan-graphql-pr-$PR_NUMBER.cicd.lab.wlan.tip.build/graphql"
for url in $urls; do
max_retry=300
counter=0
until curl --silent $url > /dev/null
do
sleep 1
[[ counter -eq $max_retry ]] && echo "$url not reachable after $counter tries...giving up" && exit 1
echo "#$counter: $url not reachable. trying again..."
((counter++))
done
echo Successfully reached URL $url
done
echo Tests were successful
cleanup:
runs-on: ubuntu-latest
needs: [ deploy, test ]
if: ${{ always() }}
steps:
- name: Get kubeconfig for EKS ${{ env.AWS_EKS_NAME }}
run: |
aws eks update-kubeconfig --name ${{ env.AWS_EKS_NAME }}
- name: Delete Cloud SDK Helm release
run: |
helm delete ${{ env.HELM_RELEASE_PREFIX }}-pr-${{ env.PR_NUMBER }} --namespace ${{ env.HELM_RELEASE_PREFIX }}-pr-${{ env.PR_NUMBER }} || true
- name: Delete namespace
run: |
kubectl delete namespace ${{ env.HELM_RELEASE_PREFIX }}-pr-${{ env.PR_NUMBER }} --wait=true --ignore-not-found=true
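Every job above addresses the per-PR deployment through the same composed name, which is what lets the cleanup job delete everything for one PR wholesale. A sketch of the convention — the values mirror the workflow's env block and are examples:

```shell
# Example values mirroring the workflow's env block.
HELM_RELEASE_PREFIX="tip-wlan"
PR_NUMBER="81"

# Release name and namespace are both "<prefix>-pr-<number>", so one PR maps
# to one isolated namespace that cleanup can remove in a single delete.
RELEASE_NAME="${HELM_RELEASE_PREFIX}-pr-${PR_NUMBER}"
NAMESPACE="$RELEASE_NAME"

echo "$RELEASE_NAME"   # prints: tip-wlan-pr-81
```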

.gitignore

@@ -2,6 +2,16 @@
*.jks
*.pkcs12
*.p12
*.csr
*.cnf
*.key
*.DS_Store
# local development
*.lock
*.local_dev
*.zip
*.tgz
stern*
helmfile

CHANGELOG.md

@@ -0,0 +1,63 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased] - YYYY-MM-DD
### Added
- export servo MBeans with JMX Prometheus exporter [#65](https://github.com/Telecominfraproject/wlan-cloud-helm/pull/65)
- render post-deployment message [#73](https://github.com/Telecominfraproject/wlan-cloud-helm/pull/73)
### Changed
- migrate to networking.k8s.io/v1 API version for Ingress resources [#74](https://github.com/Telecominfraproject/wlan-cloud-helm/pull/74)
## [1.0.1] - 2021-04-12
### Changed
- bump cloud controller version to 1.0.1
### Fixed
- correct SQL and CQL schema URLs
### Changed
- make images for all init containers configurable [#67](https://github.com/Telecominfraproject/wlan-cloud-helm/pull/67)
## [1.0.0] - 2021-04-01
### Added
- replaced cassandra, postgres and kafka with upstream charts [#49](https://github.com/Telecominfraproject/wlan-cloud-helm/pull/49)
- centralized secrets to the parent chart [#54](https://github.com/Telecominfraproject/wlan-cloud-helm/pull/54)
### Changed
- improved kafka setup templating [#53](https://github.com/Telecominfraproject/wlan-cloud-helm/pull/53)
- improved values.yaml [#53](https://github.com/Telecominfraproject/wlan-cloud-helm/pull/53)
- improved default values and added yaml anchors [#54](https://github.com/Telecominfraproject/wlan-cloud-helm/pull/54)
- make SSC service able to reconnect to Cassandra [#70](https://github.com/Telecominfraproject/wlan-cloud-helm/pull/70)
### Removed
- removed hardcoded docker secret in favor of variables [#53](https://github.com/Telecominfraproject/wlan-cloud-helm/pull/53)
- various outdated sections in values.yaml and environment files
- various secrets in subcharts as they are now part of the parent chart
- references to vendor specific values [#40](https://github.com/Telecominfraproject/wlan-cloud-helm/pull/40)
### Fixed
- make SSC service able to reconnect to Cassandra [#70](https://github.com/Telecominfraproject/wlan-cloud-helm/pull/70)
## [0.4.0] - 2021-01-28
### Added
- initial changelog entry. This is the first versioned release. Next releases will include a detailed overview of all the major changes introduced since the last version.
- [changes since first commit](https://github.com/Telecominfraproject/wlan-cloud-helm/compare/f7c67645736e3dac498e2caec8c267f04d08b7bc...v0.4)

README.md

@@ -1,16 +1,68 @@
# wlan-cloud-helm
This repository contains Helm charts for various deployment types of the TIP WLAN cloud services.
# IMPORTANT - Cloud Controller Helm charts v0.4 to v1.x migration procedure
We've introduced breaking changes to how Cloud Controller database charts are managed.
If you want to preserve your data when moving from v0.4 to v1.x of the Cloud Controller Helm charts, follow the steps outlined below.
If you can re-install your Cloud Controller and don't mind losing your data, you can skip these steps and simply install the new charts version with no changes to the default installation procedure.
## Prerequisites
1. Check out the latest wlan-cloud-helm repository
2. Have the certificates for your existing installation
3. Helm 3.2+
## Procedure
All of the commands should be run from the tip-wlan-helm directory.
1. Delete your current Helm release. The following commands will remove the pods; however, the PVCs (your database data) **won't be deleted**:
```
helm list -n default                 # look up the name of the release
helm uninstall -n default tip-wlan   # tip-wlan is usually the name of the release
```
2. Replace `REPLACEME` with your storage class name in the `tip-wlan/resources/environments/migration.yaml` file. You can check the available storage classes with the `kubectl get storageclass` command.
3. Update the values file that you used for deploying the original release with the values from `migration.yaml` to preserve the existing Cassandra/Postgres data (or skip this step and use the second upgrade command mentioned in step 7)
4. If you want to preserve the PKI certificates from the original Helm installation, copy them to a new location using the command below (or check out the latest wlan-pki-cert-scripts repo and use `copy-certs-to-helm.sh %path_to_new_helm_code%` to generate new self-signed keys):
```
find . -regextype posix-extended -regex '.+(jks|pem|key|pkcs12|p12)$' -exec cp "{}" tip-wlan/resources/certs/ \;
```
5. Remove the old charts from the helm directory, so that the upgrade command can successfully pull new chart dependencies:
```
rm -rf tip-wlan/charts/cassandra tip-wlan/charts/kafka tip-wlan/charts/postgresql
```
6. Pull 3rd party subcharts:
```
helm dependency update tip-wlan
```
7. Perform Helm upgrade:
```
helm upgrade --install tip-wlan tip-wlan/ --namespace tip --create-namespace -f tip-wlan/resources/environments/your_values_with_fixes.yaml
```
Alternatively, you can run the upgrade command as follows (the order of the -f arguments is important!):
```
helm upgrade --install tip-wlan tip-wlan/ --namespace tip --create-namespace -f tip-wlan/resources/environments/original_values.yaml -f tip-wlan/resources/environments/migration.yaml
```
As a precaution, you can also run `helm template` with the same arguments as the upgrade command and examine the output before actually installing the chart.
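The certificate-copy pattern from step 4 can be exercised in a scratch directory first, so the regex is verified before it touches real keys (the file names below are made up for the demo):

```shell
# Create a scratch source dir with a mix of cert-like and unrelated files.
src=$(mktemp -d)
dest=$(mktemp -d)
touch "$src/cacert.pem" "$src/truststore.jks" "$src/README.md"

# Same regex as the migration step; only jks/pem/key/pkcs12/p12 files match.
find "$src" -regextype posix-extended -regex '.+(jks|pem|key|pkcs12|p12)$' \
  -exec cp "{}" "$dest/" \;

ls "$dest"   # cacert.pem and truststore.jks are copied; README.md is not
```

Note that `-regextype posix-extended` is GNU find syntax; on BSD/macOS, `find -E "$src" -regex …` is the rough equivalent.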
# Deploying the wlan-cloud deployment
Run the following commands from the tip-wlan-helm directory:
```
helm dependency update tip-wlan
helm upgrade --install <RELEASE_NAME> tip-wlan/ --namespace tip --create-namespace -f tip-wlan/resources/environments/dev.yaml
```
More details can be found here: https://telecominfraproject.atlassian.net/wiki/spaces/WIFI/pages/262176803/Pre-requisites+before+deploying+Tip-Wlan+solution
# Deleting the wlan-cloud deployment
Run the following command (replace the namespace with your namespace):
```
helm del tip-wlan -n tip
```
(Note: this will not delete the tip namespace or any PVC/PV/Endpoints under it. These are kept so that the same PVC mounts can be reused when the pods are restarted.)
To get rid of them (PVC/PV/Endpoints), you can use the following script (it expects that you are in the `tip` namespace; otherwise add `-n tip` to the commands below):
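As an illustration of one such cleanup step, the PersistentVolumes bound to claims in the `tip` namespace can be selected offline by feeding sample `kubectl get pv` output to `awk` (the resource names below are hypothetical; CLAIM is the sixth column):

```shell
# Stand-in for `kubectl get pv --no-headers` output:
# NAME  CAPACITY  ACCESS-MODES  RECLAIM-POLICY  STATUS  CLAIM ...
sample='pv-001  8Gi  RWO  Retain  Bound  tip/data-tip-wlan-postgresql-0
pv-002  8Gi  RWO  Retain  Bound  default/some-other-claim'

# Print the names of PVs whose claim lives in the tip namespace.
printf '%s\n' "$sample" | awk '$6 ~ /^tip\// { print $1 }'   # -> pv-001
```

On a live cluster, the selected names could be piped into `xargs kubectl delete pv` once the corresponding PVCs are removed; note that this permanently deletes the data.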
@@ -45,3 +97,133 @@ This repository contains helm charts for various deployment types of the tip wla
- Run the following command from the tip-wlan-helm directory _after_ the components are running:
- helm test <RELEASE_NAME> -n default
(For more details, add the `--debug` flag to the above command.)
# Local environment
In `wlan-pki-cert-scripts` repository edit the following files and add/replace strings as specified below:
```
mqtt-server.cnf:
-commonName_default = opensync-mqtt-broker.zone1.lab.wlan.tip.build
+commonName_default = opensync-mqtt-broker.wlan.local
openssl-server.cnf:
-DNS.1 = opensync-redirector.zone1.lab.wlan.tip.build
-DNS.2 = opensync-controller.zone1.lab.wlan.tip.build
+DNS.1 = opensync-redirector.wlan.local
+DNS.2 = opensync-controller.wlan.local
DNS.3 = tip-wlan-postgresql
-DNS.4 = ftp.example.com
```
In the `wlan-pki-cert-scripts` repository, run `./generate_all.sh` to generate the CA and certificates, then run `./copy-certs-to-helm.sh <local path to wlan-cloud-helm repo>` to copy the certificates into the Helm charts.
Optionally, to speed up the first and subsequent runs, you can cache some images:
```
minikube cache add zookeeper:3.5.5
minikube cache add bitnami/postgresql:11.8.0-debian-10-r58
minikube cache add postgres:latest
minikube cache add gcr.io/k8s-minikube/storage-provisioner:v3
minikube cache add eclipse-mosquitto:latest
minikube cache add opsfleet/depends-on
```
These images may occasionally need to be updated with these commands:
```
minikube cache reload ## reload images from the upstream
eval $( minikube docker-env )
for img in $( docker images --format '{{.Repository}}:{{.Tag}}' | egrep 'busybox|alpine|confluentinc/cp-kafka|zookeeper|k8s.gcr.io/pause|nginx/nginx-ingress|bitnami/cassandra|bitnami/postgresql|postgres|bitnami/minideb' ); do
minikube cache add $img;
done
```
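The selection logic of that loop depends only on the shell and `egrep`, so it can be sanity-checked without a running Docker daemon by substituting a sample image list (the image names below are arbitrary):

```shell
# Stand-in for `docker images --format '{{.Repository}}:{{.Tag}}'` output.
images='busybox:1.32
alpine:3.12
confluentinc/cp-kafka:5.3.1
myapp/custom:latest'

# Same filter as the cache loop above.
selected=$(printf '%s\n' "$images" \
  | egrep 'busybox|alpine|confluentinc/cp-kafka|zookeeper|k8s.gcr.io/pause|nginx/nginx-ingress|bitnami/cassandra|bitnami/postgresql|postgres|bitnami/minideb')
printf '%s\n' "$selected"   # myapp/custom:latest is filtered out
```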
Run minikube:
```
minikube start --memory=10g --cpus=4 --driver=virtualbox --extra-config=kubelet.serialize-image-pulls=false --extra-config=kubelet.image-pull-progress-deadline=3m0s --docker-opt=max-concurrent-downloads=10
```
Please note that you may choose another driver (parallels, vmwarefusion, hyperkit, vmware, docker, podman) that might be more suitable for your setup. Omitting this option enables auto-discovery of available drivers.
Deploy Cloud Controller chart:
```
helm upgrade --install tip-wlan tip-wlan -f tip-wlan/resources/environments/dev-local.yaml -n default
```
Wait a few minutes; once all pods are in the `Running` state, obtain the web UI link with `minikube service tip-wlan-wlan-cloud-static-portal -n tip --url` and open it in the browser. Importing or trusting the certificate might be needed.
Services may be exposed to the local machine or local network via port forwarding with ssh, kubectl, or kubefwd; see the examples below.
Kubefwd:
kubefwd forwards Kubernetes services to a local workstation, easing the development of applications that communicate with other services. It is for development purposes only; for production/staging environments, services need to be exposed via load balancers.
Download the latest release from https://github.com/eugenetaranov/kubefwd/releases and run the binary.
Forward to all interfaces (useful if you need to connect from other devices in your local network):
```
sudo kubefwd services --namespace tip -l "app.kubernetes.io/name in (nginx-ingress-controller,wlan-portal-service,opensync-gw-cloud,opensync-mqtt-broker)" --allinterfaces --extrahosts wlan-ui-graphql.wlan.local,wlan-ui.wlan.local
```
Kubectl port forwarding (alternative to kubefwd):
```
kubectl -n tip port-forward --address 0.0.0.0 $(kubectl -n tip get pods -l app=tip-wlan-nginx-ingress-controller -o jsonpath='{.items[0].metadata.name}') 443:443 &
kubectl -n tip port-forward --address 0.0.0.0 $(kubectl -n tip get pods -l app.kubernetes.io/name=wlan-portal-service -o jsonpath='{.items[0].metadata.name}') 9051:9051 &
kubectl -n tip port-forward --address 0.0.0.0 $(kubectl -n tip get pods -l app.kubernetes.io/name=opensync-gw-cloud -o jsonpath='{.items[0].metadata.name}') 6643:6643 &
kubectl -n tip port-forward --address 0.0.0.0 $(kubectl -n tip get pods -l app.kubernetes.io/name=opensync-gw-cloud -o jsonpath='{.items[0].metadata.name}') 6640:6640 &
kubectl -n tip port-forward --address 0.0.0.0 $(kubectl -n tip get pods -l app.kubernetes.io/name=opensync-mqtt-broker -o jsonpath='{.items[0].metadata.name}') 1883:1883 &
```
Add the certificate to the trust store.
Firefox:
1. Open settings, `Privacy and security`, `View certificates`.
2. Click on `Add Exception...`, enter `https://wlan-ui.wlan.local` into Location field, click on `Get certificate`, check `Permanently store this exception` and click on `Confirm Security Exception`.
Repeat the step for `https://wlan-ui-graphql.wlan.local`
Chrome and other browsers using system certificate store:
1. Save the certificate below into the file `wlan-ui-graphql.wlan.local.crt` (it is the one defined at `tip-wlan/resources/environments/dev-local.yaml:143`):
```
-----BEGIN CERTIFICATE-----
MIIFWjCCA0KgAwIBAgIUQNaP/spvRHtBTAKwYRNwbxRfFAswDQYJKoZIhvcNAQEL
BQAwHTEbMBkGA1UEAwwSd2xhbi11aS53bGFuLmxvY2FsMB4XDTIwMDgyNzIwMjY1
NloXDTMwMDgyNTIwMjY1NlowHTEbMBkGA1UEAwwSd2xhbi11aS53bGFuLmxvY2Fs
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAwRagiDWzCNYBtWwBcK+f
TkkQmMt+QAgTjYr0KS8DPJCJf6KkPfZHCu3w4LvrxzY9Nmieh2XU834amdJxIuCw
6IbNo6zskjsyfoO8wFDmlLVWLeg5H9G9doem+WTeKPaEHi3oquzNgt6wLs3mvvOA
TviTIoc88ELjk4dSR2T4dhh0qKCCj+HdXBA6V/9biru+jV+/kxEQuL2zM39DvVd8
9ks35zMVUze36lD4ICOnl7hgaTNBi45O9sdLD0YaUmjiFwQltJUdmPKpaAdbvjUO
nsupnDYjm+Um+9aEpqM4te23efC8N8j1ukexzJrE2GeF/WB/Y1LFIG2wjqVnsPcs
nFF4Yd9EBRRne1EZeXBu3FELFy6lCOHI146oBcc/Ib617rdTKXqxtv/2NL6/TqFk
ns/EEjve6kQYzlBZwWHWpZwQfg3mo6NaoFZpTag98Myu5rZoOofTcxXH6pLm5Px1
OAzgLna9O+2FmA4FjrgHcMY1NIzynZL+DH8fibt1F/v2F2MA+R9vo84vR5ROGNdD
va2ApevkLcjQg/LwsXv0gTopQ/XIzejh6bdUkOrKSwJzT2C9/e9GQn0gppV8LBuK
1zQHoROLnA41MCFvQLQHo+Xt8KGw+Ubaly6hOxBZF51L/BbqjkDH9AEFaJLptiEy
qn1E5v+3whgFS5IZT8IW5uUCAwEAAaOBkTCBjjAdBgNVHQ4EFgQUy2bAUyNPXHS9
3VTSD+woN7t3q8EwHwYDVR0jBBgwFoAUy2bAUyNPXHS93VTSD+woN7t3q8EwDwYD
VR0TAQH/BAUwAwEB/zA7BgNVHREENDAyghp3bGFuLXVpLWdyYXBocWwud2xhbi5s
b2NhbIIOYXBpLndsYW4ubG9jYWyHBMCoAAEwDQYJKoZIhvcNAQELBQADggIBAKH+
bqJee11n34SYgBDvgoZ8lJLQRwsFnqExcSr/plZ7GVIGFH5/Q2Kyo9VyEiTPwrIs
KsErC1evH6xt1URfMzp05zVQ0LYM5+ksamRDagAg3M1cm7oKOdms/dqzPe2gZfGJ
pVdtVW1CHrL0RLTR93h7kgSiBlSEIYMoeKfN5H9AavJ4KryygQs63kkGQ5M9esAp
u6bB307zyfzgS3tmQsU01rgJfhEHQ/Y+Ak9wDuOgvmfx0TWgAOGbKq6Tu8MKYdej
Ie7rV1G5Uv7KfgozVX76g2KdnTVBfspSKo3zyrZkckzApvUu9IefHdToe4JMEU0y
fk7lEU/exzByyNxp+6hdu/ZIg3xb1yA1oVY8NEd1rL1zAViPe351SENEKeJpRanC
kCL3RAFkbxQ7Ihacjox8belR+gmo8cyFZpj9XaoPlSFScdwz573CT0h97v76A7sw
yC+CiSp85gWEV5vgBitNJ7R9onjBdsuH2lgEtMD3JNOs8cCSRihYxriwZSqhT7o/
tcIlcJ84W5m6X6zHJ3GmtuKG3QPNOms0/VVoDTp9qdpL+Ek17uB2A41Npxz3US+l
6yK+pdQQj7ALzKuRfOyg80XbNw2v4SnpI5qbXFBRum52f86sPemFq1KcuNWe4EVC
xDG3eKlu+dllUtKx/PN6yflbT5xcGgcdmrwzRaWS
-----END CERTIFICATE-----
```
2. Double-click it and enter the system admin password if prompted.
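Before trusting a certificate system-wide, it can be worth double-checking its subject and SAN entries. The snippet below generates a throwaway self-signed certificate purely to demonstrate the inspection commands (requires OpenSSL 1.1.1+); in practice you would point `openssl x509` at the saved `wlan-ui-graphql.wlan.local.crt`:

```shell
tmpdir=$(mktemp -d)

# Throwaway self-signed cert with SANs mirroring the one above (demo only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=wlan-ui.wlan.local" \
  -addext "subjectAltName=DNS:wlan-ui-graphql.wlan.local,DNS:api.wlan.local" \
  -keyout "$tmpdir/key.pem" -out "$tmpdir/cert.pem" 2>/dev/null

# Inspect the subject and SAN entries before adding the cert to a trust store.
openssl x509 -in "$tmpdir/cert.pem" -noout -subject
openssl x509 -in "$tmpdir/cert.pem" -noout -ext subjectAltName
```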


@@ -28,7 +28,6 @@ For other issues faced during deployment, see here:
- If namespace is passed, we will create (if it does not exist) and use that namespace for glusterFS resources.
- If namespace is NOT passed, we will create (if it does not exist) namespace='gluster-ns' and use it for glusterFS resources.
- Deletion:
./gk-deploy --admin-key <ADMIN_KEY> --user-key <USER_KEY> --abort -v -n <GLUSTER_NAMESPACE>
- Note:


@@ -990,7 +990,6 @@ parameters:
output ""
fi
if [[ ${DEPLOY_OBJECT} -eq 1 ]] && [[ "${OBJ_ACCOUNT}" != "" ]] && [[ "${OBJ_USER}" != "" ]] && [[ "${OBJ_PASSWORD}" != "" ]] && [[ ${EXISTS_OBJECT} -eq 0 ]]; then
if [[ "${OBJ_STORAGE_CLASS}" == "glusterfs-for-s3" ]]; then
eval_output "${CLI} create secret generic heketi-${NAMESPACE}-admin-secret --from-literal=key=${ADMIN_KEY} --type=kubernetes.io/glusterfs"


@@ -1,6 +0,0 @@
# Chart for deploying Common templates that are used by other charts
apiVersion: v1
description: Common templates for inclusion in other charts
name: common
version: 0.1.0


@@ -1,6 +0,0 @@
{{- define "common.env" -}}
- name: {{ .Values.env.ssc_url }}
value: "{{ .Values.env.protocol }}://{{ .Release.Name }}-{{ .Values.env.ssc.service }}:{{ .Values.env.ssc.port}}"
- name: {{ .Values.env.prov_url }}
value: "{{ .Values.env.protocol }}://{{ .Release.Name }}-{{ .Values.env.prov.service }}:{{ .Values.env.prov.port}}"
{{- end -}}


@@ -1,74 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "common.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "common.resource.name" -}}
{{- printf "tip-%s-common" $.Release.Namespace | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "common.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "common.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "common.labels" -}}
helm.sh/chart: {{ include "common.chart" . }}
{{ include "common.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "common.selectorLabels" -}}
app.kubernetes.io/name: {{ include "common.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "common.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "common.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Expand service name.
*/}}
{{- define "common.serviceName" -}}
{{- default (include "common.name" .) .Values.controller.service.name }}
{{- end -}}


@@ -1,24 +0,0 @@
{{/*
This template will be used to iterate through the debug-ports and generate
debug-ports mapping
*/}}
{{- define "container.dev.debugport" -}}
{{- range $index, $portid := .Values.debugPorts }}
- name: debugport-{{ $index }}
containerPort: {{ $portid }}
protocol: TCP
{{- end }}
{{- end -}}
{{- define "service.dev.debugport" -}}
{{- range $index, $portid := .Values.debugPorts }}
- port: {{ $portid }}
targetPort: {{ $portid }}
protocol: TCP
name: debugport-{{ $index }}
{{- if eq $.Values.service.type "NodePort" }}
nodePort: {{ $portid }}
{{- end }}
{{- end }}
{{- end -}}


@@ -1,83 +0,0 @@
{{/*
Resolve the Postgres service-name to apply to a chart.
*/}}
{{- define "postgresql.service" -}}
{{- printf "postgres-%s-%s" .Release.Namespace .Values.postgresql.url | trunc 63 -}}
{{- end -}}
{{/*
Form the Zookeeper Service. If zookeeper is installed as part of this chart, use k8s service discovery,
else use user-provided URL
*/}}
{{- define "zookeeper.service" }}
{{- if .Values.zookeeper.enabled -}}
{{- printf "%s" (include "kafka.zookeeper.fullname" .) }}
{{- else -}}
{{- $zookeeperService := printf "%s-%s" .Release.Name .Values.zookeeper.url }}
{{- default $zookeeperService }}
{{- end -}}
{{- end -}}
{{/*
Resolve the Kafka service-name to apply to a chart.
*/}}
{{- define "kafka.service" -}}
{{- printf "kafka-%s-headless" .Release.Namespace | trunc 63 -}}
{{- end -}}
{{/*
Resolve the Cassandra service-name to apply to a chart.
*/}}
{{- define "cassandra.service" -}}
{{- printf "cassandra-%s-headless" .Release.Namespace | trunc 63 -}}
{{- end -}}
{{/*
Resolve the MQTT service-name to apply to a chart.
*/}}
{{- define "mqtt.service" -}}
{{- printf "%s-%s" .Release.Name .Values.mqtt.url | trunc 63 -}}
{{- end -}}
{{/*
Resolve the integratedcloudcomponent service-name to apply to a chart.
*/}}
{{- define "integratedcloudcomponent.service" -}}
{{- printf "%s-%s:%.f" .Release.Name .Values.integratedcloudcomponent.url .Values.integratedcloudcomponent.port | trunc 63 -}}
{{- end -}}
{{/*
Resolve the provisioning service-name to apply to a chart.
*/}}
{{- define "prov.service" -}}
{{- printf "%s-%s:%.f" .Release.Name .Values.prov.url .Values.prov.port | trunc 63 -}}
{{- end -}}
{{/*
Resolve the ssc service-name to apply to a chart.
*/}}
{{- define "ssc.service" -}}
{{- printf "%s-%s:%.f" .Release.Name .Values.ssc.url .Values.ssc.port | trunc 63 -}}
{{- end -}}
{{/*
Resolve the Opensync-gw service-name to apply to a chart.
*/}}
{{- define "opensyncgw.service" -}}
{{- printf "%s-%s:%.f" .Release.Name .Values.opensyncgw.url .Values.opensyncgw.port | trunc 63 -}}
{{- end -}}
{{/*
Resolve the PVC name that would be mounted by 2 charts - Portal and Opensync-gw
*/}}
{{- define "portal.sharedPvc.name" -}}
{{- printf "%s-%s-%s-%.f" .Values.portal.sharedPvc.name .Release.Name .Values.portal.url .Values.portal.sharedPvc.ordinal | trunc 63 -}}
{{- end -}}
{{/*
Resolve the filestore-directory name that would be mounted by 2 charts - Portal and Opensync-gw
*/}}
{{- define "filestore.dir.name" -}}
{{- printf "%s" .Values.filestore.internal | trunc 63 -}}
{{- end -}}


@@ -1,4 +0,0 @@
#################################################################
# Global configuration default values that can be inherited by
# all subcharts.
#################################################################


@@ -1,13 +0,0 @@
# Chart for deploying Common templates that are used by other charts
apiVersion: v1
description: creds secrets for reuse in other charts
name: creds
type: application
appVersion: 0.0.1
version: 0.1.0
dependencies:
- name: common
version: 0.1.0
repository: file://../common


@@ -1,6 +0,0 @@
dependencies:
- name: common
repository: file://../common
version: 0.1.0
digest: sha256:636a65e9846bdff17cc4e65b0849061f783759a37aa51fb85ff6fd8ba5e68467
generated: "2020-10-19T10:42:00.072252Z"


@@ -1,24 +0,0 @@
-----BEGIN CERTIFICATE REQUEST-----
MIID/zCCAucCAQAwgZIxCzAJBgNVBAYTAkNBMRAwDgYDVQQIDAdPbnRhcmlvMQ8w
DQYDVQQHDAZPdHRhd2ExHzAdBgNVBAoMFkNvbm5lY3RVcyBUZWNobm9sb2dpZXMx
HjAcBgNVBAMMFVRlc3RfU2VydmVyX0Nhc3NhbmRyYTEfMB0GCSqGSIb3DQEJARYQ
dGVzdEBleGFtcGxlLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
AL6y03nvC/xCn8i8McxmQw0zL4C0CiF49oDxBCkSr/8qXec4Mz0M5M+8mQ536d58
sseE0DPh+P4ITg05F4FSPVcpJKXZ++5y4VB5Ydyrt8mGpKtaD+96BGy9DOB5Sv2t
VKTZFUODe3R8yWpgpVwWi6zgkhdU09fwWVM7LeKn0YwN4qc6f/o8E71dGhOjGyMB
J8krEDxPE4v18MW6fnI85MFR1KOjXakvbptC2EhafyMZ2l7MY9ddTlHyR8I4ty8v
yGWc5iMXlV1M8/3h20DMNRNnsdfF9asIGENTPi9LKpIjVbZVkNxtUP7p2Mi7+jp9
Rl+3cO4aqPO867mK7cpOsd0CAwEAAaCCASUwggEhBgkqhkiG9w0BCQ4xggESMIIB
DjAdBgNVHQ4EFgQUXfA+Ct7sBUMZPYXQzPsgYPvWTlIwDAYDVR0TAQH/BAIwADAO
BgNVHQ8BAf8EBAMCA6gwIAYDVR0lAQH/BBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMC
MG0GA1UdEQRmMGSCC2V4YW1wbGUuY29tgg93d3cuZXhhbXBsZS5jb22CEG1haWwu
ZXhhbXBsZS5jb22CD2Z0cC5leGFtcGxlLmNvbYIJbG9jYWxob3N0hwR/AAABhxAA
AAAAAAAAAAAAAAAAAAABMD4GCWCGSAGG+EIBDQQxFi9PcGVuU1NMIEdlbmVyYXRl
ZCBTZXJ2ZXIgYW5kIENsaWVudCBDZXJ0aWZpY2F0ZTANBgkqhkiG9w0BAQsFAAOC
AQEAJNgWEgB/Z60deJRjIoNkkCMKfOKrHnw9y6awVo8/+VstE+roCXtdWeEm8u3f
/vbQ50ichn2lYRE2gTfH2PZLecjDOlpQ5/LRhN87BzzFNkAIzPA6ISv14XGk5fTO
yVj++a/wnKSpRjFFunY+nsVrKUHmP8DYfoSJuelXfo7nY7diTlj0pdxhQ4l1786g
iauYtpaLlqLqU4qhZDTSTa03kxPlXU0hMWvoKvV5kn64y1HBcJ1uTscVYjnd2wYj
5ZM8ODyCbrN/RceUuU3mPVIS7Firj93DHPUX3heoUxDxXQQgVpxn9jRxeOWbBzYi
VgvEplmzT/Gptyc6vQju+EHuaQ==
-----END CERTIFICATE REQUEST-----


@@ -1,21 +0,0 @@
-----BEGIN CERTIFICATE REQUEST-----
MIIDaTCCAlECAQAwgYQxCzAJBgNVBAYTAkNBMRAwDgYDVQQIDAdPbnRhcmlvMQ8w
DQYDVQQHDAZPdHRhd2ExHzAdBgNVBAoMFkNvbm5lY3RVcyBUZWNobm9sb2dpZXMx
EDAOBgNVBAMMB09wZW5fQVAxHzAdBgkqhkiG9w0BCQEWEHRlc3RAZXhhbXBsZS5j
b20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDh1nv/bZEoNN8/z0yb
Qi3dCCQ0Q0eHCoP05gy5KJMMO84K1HJ65M3Jk5/6WQFDScLdn4O/0xf52rxX1VFR
GAXDm0+2bqRPt73cLtonufxgf8uA0YVGmorevj2X8cDLuSkyPvZqiHT8w9tSLolT
y5D4AIIF4594xWCdT0wnt4skfxp4GS5YsImBM/ehbLmhssXXhPM9Q2jfEL/0UtbS
O6rN3sjZB4ki9li3s5qx6Ki4kmQ/AF3v02lkCReOJB/mCc+Dh+l/+j/o5w+1VdFl
N6COTZjivJ+0Cz8OCOM+zr8al1vTGDlYKpx+UstIGWJOs3XQPi/9vWPp06rfTQVD
j3CZAgMBAAGggZ4wgZsGCSqGSIb3DQEJDjGBjTCBijAdBgNVHQ4EFgQU7K15oRUA
LiNwGeJJaq7WtS4BncQwDAYDVR0TAQH/BAIwADAOBgNVHQ8BAf8EBAMCBaAwFgYD
VR0lAQH/BAwwCgYIKwYBBQUHAwIwMwYJYIZIAYb4QgENBCYWJE9wZW5TU0wgR2Vu
ZXJhdGVkIENsaWVudCBDZXJ0aWZpY2F0ZTANBgkqhkiG9w0BAQsFAAOCAQEAsqeH
k9yGncyfdLsRHIGqtgaMssLoHBSNshcEOjDawDEKy94jN6XFicUJUgs7BOQgRZHT
fx4RHUsKJRvmauu9FEiss712Fw8z1yXqNvj3sk7vxRdm3I78brdqTHHz8fPwpgah
ony/oMJscjUMRsAXKEN/MV2zQ+uzkiQhiX47yTNprwn0xwlO+8mRD1f71Sz6OPXH
47Z8Lv3IPcg9m+oY4e+e6JYC3/fQMsuplQhh+eVhfOi6FSg2SoPZP+o9Twx59But
NkZNsE26+JbfxjChunaEGR1/Khusnc0O9+5niapGOwfp/67xWnymXfta/IWBJFv3
Q05BhCLqy22kR9fIwg==
-----END CERTIFICATE REQUEST-----


@@ -1,24 +0,0 @@
-----BEGIN CERTIFICATE REQUEST-----
MIID9TCCAt0CAQAwgYgxCzAJBgNVBAYTAkNBMRAwDgYDVQQIDAdPbnRhcmlvMQ8w
DQYDVQQHDAZPdHRhd2ExHzAdBgNVBAoMFkNvbm5lY3RVcyBUZWNobm9sb2dpZXMx
FDASBgNVBAMMC1Rlc3RfU2VydmVyMR8wHQYJKoZIhvcNAQkBFhB0ZXN0QGV4YW1w
bGUuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAv0oM77mgApW3
wdr9i+X24Swf/kYDYkB7wkilW/oi8tQVSLw261fEx/1e0+H34+vBaFtDj/lINTVi
yQMjztigDIWNHkjU99M+/514RbZTCvlvBJOarD2cfs6vFp7T4tuo21ztEbG15x7D
YaQKBYF0e6zzjN1bR0uWJz8+9hzrVcwtURY6r7qa+iYm5GvVLFxzVtBQxbaTNUI0
GrIXOQHOr7omAVFeihAyrUQPK+LTE32uVKRX4agtTAdVHyshiQw/5N3tVGGufzoR
onlsOjiKAKGfDmk6wCSQG17H0DFkEe8/H2Xr50BI/kjkKWUFiH4a22+4GbMBQP7v
x4tVlkoEGwIDAQABoIIBJTCCASEGCSqGSIb3DQEJDjGCARIwggEOMB0GA1UdDgQW
BBQ/nZ9a2IsHW7mOtoW/1Y1G3CCnKDAMBgNVHRMBAf8EAjAAMA4GA1UdDwEB/wQE
AwIDqDAgBgNVHSUBAf8EFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwbQYDVR0RBGYw
ZIILZXhhbXBsZS5jb22CD3d3dy5leGFtcGxlLmNvbYIQbWFpbC5leGFtcGxlLmNv
bYIPZnRwLmV4YW1wbGUuY29tgglsb2NhbGhvc3SHBH8AAAGHEAAAAAAAAAAAAAAA
AAAAAAEwPgYJYIZIAYb4QgENBDEWL09wZW5TU0wgR2VuZXJhdGVkIFNlcnZlciBh
bmQgQ2xpZW50IENlcnRpZmljYXRlMA0GCSqGSIb3DQEBCwUAA4IBAQCZbMT+zgkm
mQnPFt2UT9sxvygaUMxmywso5E89BvgwFt7/kkoKR9zo7TnLUGJ7cCWIHXPYokd5
na1Lomdfe5HTXO7BvNPAkhQAra25iFimAyopQjiLFEm5T79OOVkwWgzHUbhu18/e
LJWVL2Lu+SIvFSzD0q+2x0+IkbXkAHRCs/f1jlRafQi6AH/gzgJDwpQTZKe3S6PN
HST3czqbtpg17ZQuZ4XCxVAczDTZdC/eZ8xpglat7EZQs+6gSbX2FIFkju1CP7an
JvbPItPfwuLSe1EpC2nKFwpd1tcdATHMzQcTdjNN0/tMu5/8M9/4QJdn+ALoWIvn
if3dRjVJn4yr
-----END CERTIFICATE REQUEST-----


@@ -1 +0,0 @@
mypassword


@@ -1,22 +0,0 @@
-----BEGIN CERTIFICATE REQUEST-----
MIIDjzCCAncCAQAwgaoxCzAJBgNVBAYTAkNBMRAwDgYDVQQIDAdPbnRhcmlvMQ8w
DQYDVQQHDAZPdHRhd2ExHzAdBgNVBAoMFkNvbm5lY3RVcyBUZWNobm9sb2dpZXMx
NjA0BgNVBAMMLW9wZW5zeW5jLW1xdHQtYnJva2VyLnpvbmUxLmxhYi53bGFuLnRp
cC5idWlsZDEfMB0GCSqGSIb3DQEJARYQdGVzdEBleGFtcGxlLmNvbTCCASIwDQYJ
KoZIhvcNAQEBBQADggEPADCCAQoCggEBALAIR+8VJAnyD/gnuCDrXcapc7peDBI0
Tzp2dhU0X6THN3r3+TSruQGQKupbgxoF7STMXVMf1R94XWJR5J78tBvr+yI5c7P/
iXKA3OyUh4rb3+S14fn9tEO9IXaPcdKuwhoTtVE2aTl9360B7KLpFCJTY3LP+IDn
fOfcvnmOgE2xXz/8fRRld2BPHN2JHwAtI2lSlY1wOwjW/2AiRV/lXiHg0miXiHFd
qKbMKinEfXWUjQlHUM5G75HQZUsBPD6PP/iEXlzt3yprlDQ0uw4x6qKpHLODBuPI
n+emzPh8ZWJPWAZpm6y+Tk4P3rfTQ0GU8stJgajry/+JSo6movSTb30CAwEAAaCB
njCBmwYJKoZIhvcNAQkOMYGNMIGKMB0GA1UdDgQWBBQPnNwcKpj6cfFpRCzezdaj
e79PIzAMBgNVHRMBAf8EAjAAMA4GA1UdDwEB/wQEAwIDqDAWBgNVHSUBAf8EDDAK
BggrBgEFBQcDATAzBglghkgBhvhCAQ0EJhYkT3BlblNTTCBHZW5lcmF0ZWQgU2Vy
dmVyIENlcnRpZmljYXRlMA0GCSqGSIb3DQEBCwUAA4IBAQAjVorFs2MvFXVzSL8x
TNVQD0OtD5neHGLnTCktKqXh6DD4mUGWm33a2Ql7BjnwteERqz7Khu9EQEA9dj3n
3du4xXOZk6oquxFqfNgKHXa9MRT1jto6oKQ9RFspMDfQSiUGZUW3mMF3FkHH0l67
aGjLasbenOJwIl67gMGW/c/cHJRrI1v4fKp0TU+pgjMWzp6KUP8us+QkybodoEK5
6e7FsEQE0HPojbOR8QcQvnwz1YWt0AZuK+DpQou8DyCzJR0x9IBDd2EpF/N4G70q
wIFTBMRBTUQJxj1JJ0aS/lFVvvKcJU3P1dyFLRxmWT7wFQSaha6/d7tIbEEAtFn6
esX3
-----END CERTIFICATE REQUEST-----


@@ -1,24 +0,0 @@
-----BEGIN CERTIFICATE REQUEST-----
MIIEGTCCAwECAQAwgYgxCzAJBgNVBAYTAkNBMRAwDgYDVQQIDAdPbnRhcmlvMQ8w
DQYDVQQHDAZPdHRhd2ExHzAdBgNVBAoMFkNvbm5lY3RVcyBUZWNobm9sb2dpZXMx
FDASBgNVBAMMC1Rlc3RfU2VydmVyMR8wHQYJKoZIhvcNAQkBFhB0ZXN0QGV4YW1w
bGUuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAqcpUeC79hZlV
lEDaKFr5WqyJ29MY1aAidv0jHQMc4oqvIBjV/77qA0c5IzANHtmjQDF/hC2zIFdo
cQwlNZKNfK8ak4/ixVoYdvr8VUENOz0M8AzpJjJkMYXPmHQapysUsXRptZXi1tyI
KiPsPwxrd25irUm7cghios3VQLTqt0IeKa24Zm/7xL0KIeZfWc0bc51hJw2RE2TR
7diAGVyqZYi5QqEc8Ju94jB2YWJE2Khy/6uX13ZhxDwvY9f2nMFcYicQELC1ZHNm
dWyuTu7wGnpjsdqriLMEDnP6Ne/WUr4ISQrfn4UCwHkLCNxsrRKig5COJt7HHzNr
ObEZkPdb6QIDAQABoIIBSTCCAUUGCSqGSIb3DQEJDjGCATYwggEyMB0GA1UdDgQW
BBRrmzSs74NDLOHB4kOj4XWDXDLZkDAMBgNVHRMBAf8EAjAAMA4GA1UdDwEB/wQE
AwIDqDAWBgNVHSUBAf8EDDAKBggrBgEFBQcDATCBpQYDVR0RBIGdMIGagixvcGVu
c3luYy1yZWRpcmVjdG9yLnpvbmUxLmxhYi53bGFuLnRpcC5idWlsZIIsb3BlbnN5
bmMtY29udHJvbGxlci56b25lMS5sYWIud2xhbi50aXAuYnVpbGSCE3RpcC13bGFu
LXBvc3RncmVzcWyCD2Z0cC5leGFtcGxlLmNvbYcEfwAAAYcQAAAAAAAAAAAAAAAA
AAAAATAzBglghkgBhvhCAQ0EJhYkT3BlblNTTCBHZW5lcmF0ZWQgU2VydmVyIENl
cnRpZmljYXRlMA0GCSqGSIb3DQEBCwUAA4IBAQBSzzzuMSFZurx9RJnf9kesKTEY
LtRWwxY7Zs0D4PvTpOgJMR48D5R69N1nY2miMyH8SAFLhRTik0fOC5hoNkojITDk
XIRSqeA1+GxGfh+4sJRXfRZkdyWVYwaHexS8wBN6rVhAEnJb/FOmmh2p+wn8SRxp
lDzb5Hyr5bi8LoIMe7nSTs3ihpWhNz8W/v/fFsUBgnokRHF2Yy1mQoSvz2p8iDeS
lr+55h2ANdIAgtbjXB6eVa8UY4Uhh2YxkzazJyjnMI8EBtyc3KQCJGI8oO8jIGvY
rFfq5gBiBOSBzQ3yHzHtPB4iyzILpBOwzzn4O7rsQJdYw/15MdxfvxF0kIbS
-----END CERTIFICATE REQUEST-----


@@ -1 +0,0 @@
mypassword


@@ -1,11 +0,0 @@
ssl.endpoint.identification.algorithm=
security.protocol=SSL
ssl.key.password=mypassword
ssl.keystore.location=/bitnami/kafka/kafka-server.pkcs12
ssl.keystore.password=mypassword
ssl.keystore.type=PKCS12
ssl.truststore.location=/bitnami/kafka/truststore.jks
ssl.truststore.password=mypassword
ssl.truststore.type=JKS
bootstrap.servers=tip-wlan-kafka-headless:9093


@@ -1,17 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: tip-{{ .Release.Namespace }}-common-kafka-config
namespace: {{ .Release.Namespace }}
data:
{{ tpl (.Files.Glob "resources/config/server.properties").AsConfig . | indent 2 }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: tip-{{ .Release.Namespace }}-common-postgres-scripts
namespace: {{ .Release.Namespace }}
data:
{{ tpl (.Files.Glob "resources/scripts/creation-replication-user-role.sh").AsConfig . | indent 2 }}


@@ -1,13 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Namespace }}-docker-registry-key
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "common.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
.dockerconfigjson: {{ .Values.dockerRegistrySecret }}
type: kubernetes.io/dockerconfigjson


@@ -1,109 +0,0 @@
---
apiVersion: v1
kind: Secret
metadata:
name: tip-{{ .Release.Namespace }}-common-cassandra-certs
namespace: {{ .Release.Namespace }}
type: Opaque
data:
truststore: {{ .Files.Get "resources/certs/truststore.jks" | b64enc }}
truststore-password: {{ .Files.Get "resources/certs/truststore_creds" | b64enc }}
keystore: {{ .Files.Get "resources/certs/cassandra_server_keystore.jks" | b64enc }}
keystore-password: {{ .Files.Get "resources/certs/keystore_creds" | b64enc }}
cassandraservercert.pem: {{ .Files.Get "resources/certs/cassandraservercert.pem" | b64enc }}
cassandraserverkey_dec.pem: {{ .Files.Get "resources/certs/cassandraserverkey_dec.pem" | b64enc }}
---
apiVersion: v1
kind: Secret
metadata:
name: tip-{{ .Release.Namespace }}-common-cassandra-client-certs
namespace: {{ .Release.Namespace }}
type: Opaque
data:
cacert.pem: {{ .Files.Get "resources/certs/cacert.pem" | b64enc }}
cassandra_server_keystore.jks: {{ .Files.Get "resources/certs/cassandra_server_keystore.jks" | b64enc }}
cassandraservercert.pem: {{ .Files.Get "resources/certs/cassandraservercert.pem" | b64enc }}
cassandraserverkey_dec.pem: {{ .Files.Get "resources/certs/cassandraserverkey_dec.pem" | b64enc }}
kafka-server.pkcs12: {{ .Files.Get "resources/certs/kafka-server.pkcs12" | b64enc }}
truststore.jks: {{ .Files.Get "resources/certs/truststore.jks" | b64enc }}
server.pkcs12: {{ .Files.Get "resources/certs/server.pkcs12" | b64enc }}
---
apiVersion: v1
kind: Secret
metadata:
name: tip-{{ .Release.Namespace }}-common-kafka-certs
namespace: {{ .Release.Namespace }}
type: Opaque
data:
truststore: {{ .Files.Get "resources/certs/truststore.jks" | b64enc }}
truststore-password: {{ .Files.Get "resources/certs/truststore_creds" | b64enc }}
keystore: {{ .Files.Get "resources/certs/cassandra_server_keystore.jks" | b64enc }}
keystore-password: {{ .Files.Get "resources/certs/keystore_creds" | b64enc }}
cassandraservercert.pem: {{ .Files.Get "resources/certs/cassandraservercert.pem" | b64enc }}
cassandraserverkey_dec.pem: {{ .Files.Get "resources/certs/cassandraserverkey_dec.pem" | b64enc }}
kafka-0.keystore.jks: {{ .Files.Get "resources/certs/client_keystore.jks" | b64enc }}
kafka.truststore.jks: {{ .Files.Get "resources/certs/truststore.jks" | b64enc }}
---
apiVersion: v1
kind: Secret
metadata:
name: tip-{{ .Release.Namespace }}-common-kafka-client-certs
namespace: {{ .Release.Namespace }}
type: Opaque
data:
client_keystore.jks: {{ .Files.Get "resources/certs/client_keystore.jks" | b64enc }}
kafka-server.pkcs12: {{ .Files.Get "resources/certs/kafka-server.pkcs12" | b64enc }}
truststore.jks: {{ .Files.Get "resources/certs/truststore.jks" | b64enc }}
server.pkcs12: {{ .Files.Get "resources/certs/server.pkcs12" | b64enc }}
---
apiVersion: v1
kind: Secret
metadata:
name: tip-{{ .Release.Namespace }}-common-postgres-certs
namespace: {{ .Release.Namespace }}
type: Opaque
data:
cacert.pem: {{ .Files.Get "resources/certs/cacert.pem" | b64enc }}
cert.crt: {{ .Files.Get "resources/certs/servercert.pem" | b64enc }}
cert.key: {{ .Files.Get "resources/certs/serverkey_dec.pem" | b64enc }}
postgresclientcert.pem: {{ .Files.Get "resources/certs/postgresclientcert.pem" | b64enc }}
postgresclientkey_dec.pem: {{ .Files.Get "resources/certs/postgresclientkey_dec.pem" | b64enc }}
---
apiVersion: v1
kind: Secret
metadata:
name: tip-{{ .Release.Namespace }}-common-postgres-client-certs
namespace: {{ .Release.Namespace }}
type: Opaque
data:
cacert.pem: {{ .Files.Get "resources/certs/cacert.pem" | b64enc }}
client_keystore.jks: {{ .Files.Get "resources/certs/client_keystore.jks" | b64enc }}
postgresclient.p12: {{ .Files.Get "resources/certs/postgresclient.p12" | b64enc }}
postgresclientcert.pem: {{ .Files.Get "resources/certs/postgresclientcert.pem" | b64enc }}
postgresclientkey_dec.pem: {{ .Files.Get "resources/certs/postgresclientkey_dec.pem" | b64enc }}
server.pkcs12: {{ .Files.Get "resources/certs/server.pkcs12" | b64enc }}
truststore.jks: {{ .Files.Get "resources/certs/truststore.jks" | b64enc }}
---
apiVersion: v1
kind: Secret
metadata:
name: tip-{{ .Release.Namespace }}-common-credentials
namespace: {{ .Release.Namespace }}
type: Opaque
data:
cassandra_tip_user: {{ .Values.cassandra.tip_user | b64enc }}
cassandra_tip_password: {{ .Values.cassandra.tip_password | b64enc }}
postgresql-password: {{ .Values.db.postgresUser.password | b64enc }}
tipuser-password: {{ .Values.db.tipUser.password | b64enc }}
schema-repo-user: {{ .Values.schema_repo.username | b64enc }}
schema-repo-password: {{ .Values.schema_repo.password | b64enc }}
sslKeyPassword: {{ .Values.ssl.keyPassword | b64enc }}
sslKeystorePassword: {{ .Values.ssl.keystorePassword | b64enc }}
sslTruststorePassword: {{ .Values.ssl.truststorePassword | b64enc }}
websocketSessionTokenEncKey: {{ .Values.websocketSessionTokenEncKey | b64enc }}
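The `b64enc` function used throughout the Secret templates above is plain base64 encoding. A minimal shell sketch of the round trip (the value here is illustrative, not a real credential):

```shell
# b64enc in the Helm templates corresponds to plain base64;
# decoding a rendered .data value recovers the original content.
enc=$(printf 'tip_password' | base64)
printf '%s' "$enc" | base64 -d
```

The same decode step applies when inspecting any key under `.data` of a rendered Secret, e.g. via `kubectl get secret -o jsonpath`.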


@@ -1,20 +0,0 @@
#################################################################
# Credentials and secrets for reuse in other charts
#################################################################
creds:
ssl:
keyPassword: mypassword
keystorePassword: mypassword
truststorePassword: mypassword
db:
postgresUser:
password: DUMMY_POSTGRES_PASSWORD
tipUser:
password: tip_password
schema_repo:
username: tip-read
password: tip-read
cassandra:
tip_user: tip_user
tip_password: tip_password


@@ -1,18 +0,0 @@
bases:
- helmfile-environment.yaml
- helmfile-defaults.yaml
---
bases:
- helmfile-repositories.yaml.gotmpl
---
releases:
- name: namespace-{{ .Environment.Values.global.namespace }}
chart: incubator/raw
namespace: default
values:
- resources:
- apiVersion: v1
kind: Namespace
metadata:
name: {{ .Environment.Values.global.namespace }}


@@ -1,226 +0,0 @@
bases:
- helmfile-environment.yaml
- helmfile-defaults.yaml
---
releases:
- name: postgres-{{ .Environment.Values.global.namespace }}
namespace: {{ .Environment.Values.global.namespace }}
chart: bitnami/postgresql
version: 9.8.4
condition: postgres.enabled
labels:
role: prerequisites
app: postgres
values:
- postgresqlDatabase: tip
image:
tag: 11.8.0-debian-10-r58
debug: true
metrics:
enabled: true
serviceMonitor:
enabled: true
namespace: {{ .Environment.Values.global.monitoring.namespace }}
additionalLabels:
release: prometheus-operator
postgresqlUsername: {{ .Environment.Values.postgres.user }}
postgresqlPassword: {{ .Environment.Values.postgres.password }}
pgHbaConfiguration: |
hostssl replication repl_user 0.0.0.0/0 md5 clientcert=0
hostssl postgres postgres 0.0.0.0/0 cert clientcert=1
hostssl postgres postgres ::/0 cert clientcert=1
hostssl all all 0.0.0.0/0 md5 clientcert=1
replication:
enabled: true
user: {{ .Environment.Values.postgres.replication.user }}
password: {{ .Environment.Values.postgres.replication.password }}
slaveReplicas: 1
persistence:
enabled: true
storageClass: {{ .Environment.Values.storageClass }}
volumePermissions:
enabled: true
livenessProbe:
enabled: false
readinessProbe:
enabled: false
tls:
enabled: true
certificatesSecret: tip-{{ .Environment.Values.global.namespace }}-common-postgres-certs
certFilename: cert.crt
certKeyFilename: cert.key
certCAFilename: cacert.pem
initdbScriptsConfigMap: tip-{{ .Environment.Values.global.namespace }}-common-postgres-scripts
extraEnv:
- name: PGSSLCERT
value: /opt/tip-wlan/certs/postgresclientcert.pem
- name: PGSSLKEY
value: /opt/tip-wlan/certs/postgresclientkey_dec.pem
- name: PGSSLROOTCERT
value: "/opt/tip-wlan/certs/cacert.pem"
slave:
extraVolumes:
jsonPatches:
- target:
version: v1
group: apps
kind: StatefulSet
name: postgres-{{ .Environment.Values.global.namespace }}-postgresql-master
patch:
- op: replace
path: /spec/template/spec/initContainers/0/command
value:
- /bin/sh
- -cx
- |
chown 1001:1001 /bitnami/postgresql
mkdir -p /bitnami/postgresql/data /bitnami/postgresql/conf
chmod 700 /bitnami/postgresql/data /bitnami/postgresql/conf
find /bitnami/postgresql -mindepth 1 -maxdepth 1 -not -name ".snapshot" -not -name "lost+found" | xargs chown -R 1001:1001
chmod -R 777 /dev/shm
cp /tmp/certs/* /opt/bitnami/postgresql/certs/
chown -R 1001:1001 /opt/bitnami/postgresql/certs/
chmod 600 /opt/bitnami/postgresql/certs/cert.key
chmod 600 /opt/bitnami/postgresql/certs/postgresclientkey_dec.pem
- name: zookeeper-{{ .Environment.Values.global.namespace }}
namespace: {{ .Environment.Values.global.namespace }}
chart: incubator/zookeeper
version: 2.1.4
condition: zookeeper.enabled
labels:
role: prerequisites
app: zookeeper
values:
- persistence:
enabled: true
storageClass: {{ .Environment.Values.storageClass }}
replicaCount: 1
- name: kafka-{{ .Environment.Values.global.namespace }}
namespace: {{ .Environment.Values.global.namespace }}
chart: bitnami/kafka
version: 11.8.7
condition: kafka.enabled
labels:
role: prerequisites
app: kafka
values:
- replicaCount: 1
image:
debug: true
auth:
clientProtocol: mtls
interBrokerProtocol: plaintext
jksSecret: tip-{{ .Environment.Values.global.namespace }}-common-kafka-certs
jksPassword: {{ .Environment.Values.credentials.keyPassword }}
tlsEndpointIdentificationAlgorithm: https
jaas:
clientUsers:
- brokerUser
clientPassword:
- brokerPassword
# existingConfigmap: tip-{{ .Environment.Values.global.namespace }}-common-kafka-config
# allowPlaintextListener: true
persistence:
enabled: true
storageClass: {{ .Environment.Values.storageClass }}
metrics:
serviceMonitor:
enabled: false
namespace: {{ .Environment.Values.global.monitoring.namespace }}
selector:
release: prometheus-operator
zookeeper:
enabled: false
externalZookeeper:
servers:
- zookeeper-{{ .Environment.Values.global.namespace }}
- name: cassandra-{{ .Environment.Values.global.namespace }}
namespace: {{ .Environment.Values.global.namespace }}
chart: bitnami/cassandra
version: 6.0.1
condition: cassandra.enabled
labels:
role: prerequisites
app: cassandra
values:
- tlsEncryptionSecretName: tip-{{ .Environment.Values.global.namespace }}-common-cassandra-certs
- image:
debug: true
- persistence:
enabled: true
storageClass: {{ .Environment.Values.storageClass }}
- replicaCount: 3
- cluster:
name: TipWlanCluster
seedCount: 1
internodeEncryption: all
clientEncryption: true
- exporter:
enabled: false
serviceMonitor:
enabled: true
additionalLabels:
release: prometheus-operator
- dbUser:
user: {{ .Environment.Values.cassandra.user }}
password: {{ .Environment.Values.cassandra.password }}
- resources:
limits: {}
requests:
cpu: 1
memory: 3Gi
- name: tip-{{ .Environment.Values.global.namespace }}-credentials
namespace: {{ .Environment.Values.global.namespace }}
chart: credentials
labels:
role: prerequisites
app: credentials
values:
- ssl:
keyPassword: {{ .Environment.Values.credentials.keyPassword }}
keystorePassword: {{ .Environment.Values.credentials.keystorePassword }}
truststorePassword: {{ .Environment.Values.credentials.truststorePassword }}
db:
postgresUser:
password: {{ .Environment.Values.postgres.password }}
tipUser:
password: {{ .Environment.Values.postgres.password }}
schema_repo:
username: {{ .Environment.Values.credentials.jFrog.user }}
password: {{ .Environment.Values.credentials.jFrog.password }}
cassandra:
tip_user: {{ .Environment.Values.cassandra.user }}
tip_password: {{ .Environment.Values.cassandra.password }}
websocketSessionTokenEncKey: {{ .Environment.Values.credentials.websocketSessionTokenEncKey }}
dockerRegistrySecret: {{ .Environment.Values.credentials.dockerSecret }}
- name: tip-{{ .Environment.Values.global.namespace }}-efs-provisioner
namespace: {{ .Environment.Values.global.namespace }}
chart: stable/efs-provisioner
version: 0.13.0
condition: efs-provisioner.enabled
labels:
role: prerequisites
app: efs-provisioner
values:
- serviceAccount:
create: true
name: efs-provisioner
- provisioner:
nameExtension: efs-provisioner
replicaCount: 1
strategyType: Recreate
image:
name: quay.io/external_storage/efs-provisioner
tag: latest
efsFileSystemId: fs-8a3fa867
awsRegion: ca-central-1
dnsName: ""
provisionerName: shared-provisioner
efsDnsName: fs-8a3fa867.efs.ca-central-1.amazonaws.com
storageClass: aws-efs


@@ -1,196 +0,0 @@
bases:
- helmfile-environment.yaml
- helmfile-defaults.yaml
---
releases:
- name: tip-{{ .Environment.Values.global.namespace }}-opensync-gw-cloud
namespace: {{ .Environment.Values.global.namespace }}
chart: opensync-gw-cloud
condition: opensync-gw-cloud.enabled
labels:
role: payload
app: opensync-gw-cloud
values:
- global:
nodePortPrefixExt: {{ .Environment.Values.global.nodePortPrefixExt }}
nodePortPrefix: {{ .Environment.Values.global.nodePortPrefix }}
repository: {{ .Environment.Values.global.repository }}
isCloudDeployment: true
pullPolicy: Always
- externalhostaddress:
ovsdb: tip-wlan-opensync-gw-cloud
mqtt: tip-wlan-opensync-mqtt-broker
persistence:
enabled: true
filestore:
url: "https://tip-wlan-opensync-gw-cloud:9096"
- name: tip-{{ .Environment.Values.global.namespace }}-opensync-gw-static
namespace: {{ .Environment.Values.global.namespace }}
chart: opensync-gw-static
condition: opensync-gw-static.enabled
labels:
role: payload
app: opensync-gw-static
values:
- global:
nodePortPrefixExt: {{ .Environment.Values.global.nodePortPrefixExt }}
nodePortPrefix: {{ .Environment.Values.global.nodePortPrefix }}
repository: {{ .Environment.Values.global.repository }}
isCloudDeployment: true
pullPolicy: Always
- name: tip-{{ .Environment.Values.global.namespace }}-opensync-mqtt-broker
namespace: {{ .Environment.Values.global.namespace }}
chart: opensync-mqtt-broker
condition: opensync-mqtt-broker.enabled
labels:
role: payload
app: opensync-mqtt-broker
values:
- global:
nodePortPrefixExt: {{ .Environment.Values.global.nodePortPrefixExt }}
nodePortPrefix: {{ .Environment.Values.global.nodePortPrefix }}
repository: {{ .Environment.Values.global.repository }}
isCloudDeployment: true
pullPolicy: Always
- replicaCount: 1
persistence:
enabled: true
storageClass: {{ .Environment.Values.storageClass }}
- name: tip-{{ .Environment.Values.global.namespace }}-wlan-cloud-graphql-gw
namespace: {{ .Environment.Values.global.namespace }}
chart: wlan-cloud-graphql-gw
condition: wlan-cloud-graphql-gw.enabled
labels:
role: payload
app: wlan-cloud-graphql-gw
values:
- global:
nodePortPrefixExt: {{ .Environment.Values.global.nodePortPrefixExt }}
nodePortPrefix: {{ .Environment.Values.global.nodePortPrefix }}
repository: {{ .Environment.Values.global.repository }}
isCloudDeployment: true
pullPolicy: Always
- env:
portalsvc: graphql.{{ .Environment.Values.global.domain }}
ingress:
hosts:
- host: graphql.{{ .Environment.Values.global.domain }}
paths:
- "/"
- name: tip-{{ .Environment.Values.global.namespace }}-wlan-cloud-static-portal
namespace: {{ .Environment.Values.global.namespace }}
chart: wlan-cloud-static-portal
condition: wlan-cloud-static-portal.enabled
labels:
role: payload
app: wlan-cloud-static-portal
values:
- global:
nodePortPrefixExt: {{ .Environment.Values.global.nodePortPrefixExt }}
nodePortPrefix: {{ .Environment.Values.global.nodePortPrefix }}
repository: {{ .Environment.Values.global.repository }}
isCloudDeployment: true
pullPolicy: Always
- ingress:
hosts:
- host: portal.{{ .Environment.Values.global.domain }}
paths:
- "/"
- name: tip-{{ .Environment.Values.global.namespace }}-wlan-integrated-cloud-component-service
namespace: {{ .Environment.Values.global.namespace }}
chart: wlan-integrated-cloud-component-service
condition: wlan-integrated-cloud-component-service.enabled
labels:
role: payload
app: wlan-integrated-cloud-component-service
values:
- global:
nodePortPrefixExt: {{ .Environment.Values.global.nodePortPrefixExt }}
nodePortPrefix: {{ .Environment.Values.global.nodePortPrefix }}
repository: {{ .Environment.Values.global.repository }}
isCloudDeployment: true
pullPolicy: Always
- name: tip-{{ .Environment.Values.global.namespace }}-wlan-port-forwarding-gateway-service
namespace: {{ .Environment.Values.global.namespace }}
chart: wlan-port-forwarding-gateway-service
condition: wlan-port-forwarding-gateway-service.enabled
labels:
role: payload
app: port-forwarding-gateway-service
values:
- global:
nodePortPrefixExt: {{ .Environment.Values.global.nodePortPrefixExt }}
nodePortPrefix: {{ .Environment.Values.global.nodePortPrefix }}
repository: {{ .Environment.Values.global.repository }}
isCloudDeployment: true
pullPolicy: Always
- name: tip-{{ .Environment.Values.global.namespace }}-wlan-portal-service
namespace: {{ .Environment.Values.global.namespace }}
chart: wlan-portal-service
condition: wlan-portal-service.enabled
labels:
role: payload
app: wlan-portal-service
values:
- global:
nodePortPrefixExt: {{ .Environment.Values.global.nodePortPrefixExt }}
nodePortPrefix: {{ .Environment.Values.global.nodePortPrefix }}
repository: {{ .Environment.Values.global.repository }}
isCloudDeployment: true
pullPolicy: Always
- persistence:
enabled: true
storageClass: {{ .Environment.Values.storageClass }}
- name: tip-{{ .Environment.Values.global.namespace }}-wlan-prov-service
namespace: {{ .Environment.Values.global.namespace }}
chart: wlan-prov-service
condition: wlan-prov-service.enabled
labels:
role: payload
app: wlan-prov-service
values:
- global:
nodePortPrefixExt: {{ .Environment.Values.global.nodePortPrefixExt }}
nodePortPrefix: {{ .Environment.Values.global.nodePortPrefix }}
repository: {{ .Environment.Values.global.repository }}
isCloudDeployment: true
pullPolicy: Always
- name: tip-{{ .Environment.Values.global.namespace }}-wlan-spc-service
namespace: {{ .Environment.Values.global.namespace }}
chart: wlan-spc-service
condition: wlan-spc-service.enabled
labels:
role: payload
app: wlan-spc-service
values:
- global:
nodePortPrefixExt: {{ .Environment.Values.global.nodePortPrefixExt }}
nodePortPrefix: {{ .Environment.Values.global.nodePortPrefix }}
repository: {{ .Environment.Values.global.repository }}
isCloudDeployment: true
pullPolicy: Always
- name: tip-{{ .Environment.Values.global.namespace }}-wlan-ssc-service
namespace: {{ .Environment.Values.global.namespace }}
chart: wlan-ssc-service
condition: wlan-ssc-service.enabled
labels:
role: payload
app: wlan-ssc-service
values:
- global:
nodePortPrefixExt: {{ .Environment.Values.global.nodePortPrefixExt }}
nodePortPrefix: {{ .Environment.Values.global.nodePortPrefix }}
repository: {{ .Environment.Values.global.repository }}
isCloudDeployment: true
pullPolicy: Always


@@ -1,5 +0,0 @@
helmDefaults:
createNamespace: false
force: false
verify: false
wait: false


@@ -1,65 +0,0 @@
environments:
default:
values:
- global:
namespace: testota
domain: lab.wlan.tip.build
repository: tip-tip-wlan-cloud-docker-repo.jfrog.io
monitoring:
namespace: monitoring
nodePortPrefix: 311
nodePortPrefixExt: 313
- credentials:
jFrog:
user: tip-read
password: tip-read
websocketSessionTokenEncKey: MyToKeN0MyToKeN1
keyPassword: mypassword
keystorePassword: mypassword
truststorePassword: mypassword
dockerSecret: ewoJImF1dGhzIjogewoJCSJ0aXAtdGlwLXdsYW4tY2xvdWQtZG9ja2VyLXJlcG8uamZyb2cuaW8iOiB7CgkJCSJhdXRoIjogImRHbHdMWEpsWVdRNmRHbHdMWEpsWVdRPSIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuOCAobGludXgpIgoJfQp9
# Stateful components start here
- storageClass: gp2
- postgres:
enabled: true
user: tip_user
password: DUMMY_POSTGRES_PASSWORD
replication:
user: repl_user
password: repl_password
- zookeeper:
enabled: true
- kafka:
enabled: true
- cassandra:
enabled: true
user: cassandra
password: cassandra
- efs-provisioner:
enabled: false
# Wlan components start here
- opensync-gw-cloud:
enabled: true
- opensync-gw-static:
enabled: true
- opensync-mqtt-broker:
enabled: true
- wlan-cloud-graphql-gw:
enabled: true
- wlan-cloud-static-portal:
enabled: true
- wlan-integrated-cloud-component-service:
enabled: true
- wlan-port-forwarding-gateway-service:
enabled: true
- wlan-portal-service:
enabled: true
- wlan-prov-service:
enabled: true
- wlan-spc-service:
enabled: true
- wlan-ssc-service:
enabled: true
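The `dockerSecret` value in the defaults above is a base64-encoded Docker config JSON. A shell sketch of how such a blob is constructed (the `tip-read:tip-read` credentials mirror the jFrog defaults above; the registry name comes from `global.repository`):

```shell
# A Docker registry secret embeds base64(user:password) inside a JSON
# config, which is then base64-encoded once more for use in a Secret.
auth=$(printf 'tip-read:tip-read' | base64)
printf '{"auths":{"tip-tip-wlan-cloud-docker-repo.jfrog.io":{"auth":"%s"}}}' "$auth" | base64
```

Decoding the committed `dockerSecret` blob with `base64 -d` shows the same structure.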


@@ -1,7 +0,0 @@
repositories:
- name: stable
url: https://kubernetes-charts.storage.googleapis.com
- name: incubator
url: https://kubernetes-charts-incubator.storage.googleapis.com
- name: bitnami
url: https://charts.bitnami.com/bitnami


@@ -1,3 +0,0 @@
helmfiles:
- helmfile-0*


@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -1,6 +0,0 @@
dependencies:
- name: common
repository: file://../common
version: 0.1.0
digest: sha256:636a65e9846bdff17cc4e65b0849061f783759a37aa51fb85ff6fd8ba5e68467
generated: "2020-10-19T11:29:27.1946594Z"


@@ -1,12 +0,0 @@
apiVersion: v2
name: opensync-gw-cloud
description: WLAN Opensync Gateway Cloud Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: 1.16.0
dependencies:
- name: common
version: 0.1.0
repository: file://../common


@@ -1,18 +0,0 @@
{
"maxConnectionsTotal":100,
"maxConnectionsPerRoute":10,
"truststoreType":"JKS",
"truststoreProvider":"SUN",
"truststoreFile":"file:/opt/tip-wlan/certs/truststore.jks",
"truststorePass":"mypassword",
"keystoreType":"JKS",
"keystoreProvider":"SUN",
"keystoreFile":"file:/opt/tip-wlan/certs/client_keystore.jks",
"keystorePass":"mypassword",
"keyAlias":"clientkeyalias",
"credentialsList":[
{"host":"localhost","port":-1,"user":"user","password":"password"}
]
}


@@ -1,13 +0,0 @@
truststorePass=mypassword
truststoreFile=file:///opt/tip-wlan/certs/truststore.jks
truststoreType=JKS
truststoreProvider=SUN
keyAlias=1
keystorePass=mypassword
keystoreFile=file:///opt/tip-wlan/certs/server.pkcs12
keystoreType=pkcs12
keystoreProvider=SunJSSE
sslProtocol=TLS


@@ -1,78 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!-- For assistance related to logback-translator or configuration -->
<!-- files in general, please contact the logback user mailing list -->
<!-- at http://www.qos.ch/mailman/listinfo/logback-user -->
<!-- -->
<!-- For professional support please see -->
<!-- http://www.qos.ch/shop/products/professionalSupport -->
<!-- -->
<configuration>
<appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
</encoder>
</appender>
<appender name="mqttDataFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>/app/logs/mqttData.log</file>
<append>true</append>
<encoder>
<pattern>%date %msg%n</pattern>
</encoder>
<rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
<fileNamePattern>/app/logs/mqttData.%i.log.gz</fileNamePattern>
<minIndex>1</minIndex>
<maxIndex>3</maxIndex>
</rollingPolicy>
<triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
<maxFileSize>20MB</maxFileSize>
</triggeringPolicy>
</appender>
<appender name="logfile" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>/app/logs/opensyncgw.log</file>
<append>true</append>
<encoder>
<pattern>%date %level [%thread] %logger{36} [%file:%line] %msg%n</pattern>
</encoder>
<rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
<fileNamePattern>/app/logs/opensyncgw.%i.log.gz</fileNamePattern>
<minIndex>1</minIndex>
<maxIndex>3</maxIndex>
</rollingPolicy>
<triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
<maxFileSize>20MB</maxFileSize>
</triggeringPolicy>
</appender>
<!--
details: http://logback.qos.ch/manual/configuration.html#auto_configuration
runtime configuration, if need to override the defaults:
-Dlogback.configurationFile=/path/to/logback.xml
for log configuration debugging - use
-Dlogback.statusListenerClass=ch.qos.logback.core.status.OnConsoleStatusListener
log levels:
OFF ERROR WARN INFO DEBUG TRACE
-->
<logger name="org.apache.catalina.startup.DigesterFactory" level="ERROR"/>
<logger name="org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping" level="INFO"/>
<logger name="org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer" level="INFO"/>
<logger name="com.telecominfraproject.wlan" level="DEBUG"/>
<logger name="com.netflix.servo.tag.aws.AwsInjectableTag" level="OFF"/>
<logger name="com.vmware.ovsdb.service.OvsdbConnectionInfo" level="OFF"/>
<logger name="com.vmware.ovsdb.netty.OvsdbConnectionHandler" level="ERROR"/>
<logger name="MQTT_DATA" level="DEBUG" additivity="false">
<appender-ref ref="mqttDataFile"/>
</logger>
<root level="WARN">
<appender-ref ref="logfile"/>
</root>
</configuration>


@@ -1,21 +0,0 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "common.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "common.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "common.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "common.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:80
{{- end }}


@@ -1,7 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "common.fullname" . }}-log-config
namespace: {{ .Release.Namespace }}
data:
{{ tpl (.Files.Glob "resources/config/logback.xml").AsConfig . | indent 2 }}


@@ -1,287 +0,0 @@
{{- $icc := include "integratedcloudcomponent.service" . -}}
{{- $prov := include "prov.service" . -}}
{{- $ssc := include "ssc.service" . -}}
{{- $mqtt := include "mqtt.service" . -}}
{{- $file_store_path := include "filestore.dir.name" . -}}
{{- $cloudeployment := .Values.global.isCloudDeployment -}}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "common.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "common.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "common.selectorLabels" . | nindent 8 }}
spec:
imagePullSecrets:
- name: "{{ .Release.Namespace }}-docker-registry-key"
serviceAccountName: {{ include "common.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
initContainers:
- name: {{ include "common.name" . }}-mqtt-readiness
image: eclipse-mosquitto:latest
imagePullPolicy: {{ .Values.global.pullPolicy }}
command:
- sh
- -c
- |
mosquitto_pub -h {{ $mqtt }} -p 1883 --cafile /certs/cacert.pem --cert /certs/clientcert.pem --key /certs/clientkey.pem --insecure -t "/ap/test" -q 0 -m "CheckingMQTTAliveness"
status=$?
echo "mosquitto_pub exit status = $status"
counter=0
while [ $counter -lt 10 ] && [ $status -ne 0 ]
do
echo "{{ $mqtt }} service isn't ready. Tried $counter times"
sleep 2
counter=$((counter + 1))
mosquitto_pub -h {{ $mqtt }} -p 1883 --cafile /certs/cacert.pem --cert /certs/clientcert.pem --key /certs/clientkey.pem --insecure -t "/ap/test" -q 0 -m "CheckingMQTTAliveness"
status=$?
echo "mosquitto_pub exit status = $status"
done
if [ $status -eq 0 ]
then
echo "{{ $mqtt }} service is ready!"
else
echo "{{ $mqtt }} service failed to respond after 20 secs"
exit 1
fi
volumeMounts:
- mountPath: /certs/cacert.pem
name: certificates
subPath: cacert.pem
- mountPath: /certs/clientcert.pem
name: certificates
subPath: clientcert.pem
- mountPath: /certs/clientkey.pem
name: certificates
subPath: clientkey.pem
{{- if .Values.global.integratedDeployment }}
- name: {{ include "common.name" . }}-readiness-int-cloud
image: alpine
imagePullPolicy: {{ .Values.global.pullPolicy }}
command:
- sh
- -c
- |
if [ {{ $cloudeployment }} = false ]
then
echo "151.101.112.249 dl-cdn.alpinelinux.org" >> /etc/hosts
echo "Added name-resolution for local deployments"
fi
apk add curl
url=https://{{ $icc }}/ping
counter=0
status=$(curl --insecure --head --location --connect-timeout 5 --write-out %{http_code} --silent --output /dev/null ${url});
while [ $counter -lt 10 ] && [ $status -ne 200 ]
do
echo "${url} service isn't ready. Tried $counter times"
sleep 5
counter=$((counter + 1))
status=$(curl --insecure --head --location --connect-timeout 5 --write-out %{http_code} --silent --output /dev/null ${url});
echo "HTTP response code of ping request = $status"
done
if [ $status -eq 200 ]
then
echo "${url} service is ready!"
else
echo "${url} service failed to respond after 50 secs"
exit 1
fi
{{- else }}
- name: {{ include "common.name" . }}-readiness-prov
image: alpine
imagePullPolicy: {{ .Values.global.pullPolicy }}
command:
- sh
- -c
- |
if [ {{ $cloudeployment }} = false ]
then
echo "151.101.112.249 dl-cdn.alpinelinux.org" >> /etc/hosts
echo "Added name-resolution for local deployments"
fi
apk add curl
url=https://{{ $prov }}/ping
counter=0
status=$(curl --insecure --head --location --connect-timeout 5 --write-out %{http_code} --silent --output /dev/null ${url});
while [ $counter -lt 10 ] && [ $status -ne 200 ]
do
echo "${url} service isn't ready. Tried $counter times"
sleep 5
counter=$((counter + 1))
status=$(curl --insecure --head --location --connect-timeout 5 --write-out %{http_code} --silent --output /dev/null ${url});
echo "HTTP response code of ping request = $status"
done
if [ $status -eq 200 ]
then
echo "${url} service is ready!"
else
echo "${url} service failed to respond after 50 secs"
exit 1
fi
- name: {{ include "common.name" . }}-readiness-ssc
image: alpine
imagePullPolicy: {{ .Values.global.pullPolicy }}
command:
- sh
- -c
- |
if [ {{ $cloudeployment }} = false ]
then
echo "151.101.112.249 dl-cdn.alpinelinux.org" >> /etc/hosts
echo "Added name-resolution for local deployments"
fi
apk add curl
url=https://{{ $ssc }}/ping
counter=0
status=$(curl --insecure --head --location --connect-timeout 5 --write-out %{http_code} --silent --output /dev/null ${url});
while [ $counter -lt 10 ] && [ $status -ne 200 ]
do
echo "${url} service isn't ready. Tried $counter times"
sleep 5
counter=$((counter + 1))
status=$(curl --insecure --head --location --connect-timeout 5 --write-out %{http_code} --silent --output /dev/null ${url});
echo "HTTP response code of ping request = $status"
done
if [ $status -eq 200 ]
then
echo "${url} service is ready!"
else
echo "${url} service failed to respond after 50 secs"
exit 1
fi
{{- end }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: {{ .Values.global.repository }}/{{ .Values.image.name }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.global.pullPolicy }}
{{- if .Values.probes.enabled }}
livenessProbe:
tcpSocket:
port: {{ .Values.service.port2 }}
initialDelaySeconds: {{ .Values.probes.livenessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.probes.livenessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.probes.livenessProbe.failureThreshold }}
periodSeconds: {{ .Values.probes.livenessProbe.periodSeconds }}
successThreshold: {{ .Values.probes.livenessProbe.successThreshold }}
readinessProbe:
tcpSocket:
port: {{ .Values.service.port2 }}
initialDelaySeconds: {{ .Values.probes.readinessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.probes.readinessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.probes.readinessProbe.failureThreshold }}
periodSeconds: {{ .Values.probes.readinessProbe.periodSeconds }}
successThreshold: {{ .Values.probes.readinessProbe.successThreshold }}
{{- end }}
env:
{{- include "common.env" . | nindent 12 }}
- name: OVSDB_MANAGER
value: {{ .Values.externalhostaddress.ovsdb }}
- name: OVSDB_MANAGER_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: MQTT_SERVER_INTERNAL
value: {{ .Release.Name }}-{{ .Values.mqtt.url }}
- name: MQTT_SERVER_EXTERNAL
value: {{ .Values.externalhostaddress.mqtt }}
{{- if .Values.global.integratedDeployment }}
- name: INTEGRATED_SERVER
value: {{ .Release.Name }}-{{ .Values.integratedcloudcomponent.url }}
{{- else }}
- name: PROV_SERVER
value: {{ .Release.Name }}-{{ .Values.prov.url }}
- name: SSC_SERVER
value: {{ .Release.Name }}-{{ .Values.ssc.url }}
{{- end }}
- name: FILE_STORE_DIRECTORY_INTERNAL
value: {{ $file_store_path }}
- name: FILE_STORE_URL
value: {{ .Values.filestore.url }}
- name: DEFAULT_LAN_NAME
value: {{ .Values.ethernetType.lanName }}
- name: DEFAULT_LAN_TYPE
value: {{ .Values.ethernetType.lanType }}
- name: DEFAULT_WAN_TYPE
value: {{ .Values.ethernetType.wanType }}
- name: DEFAULT_WAN_NAME
value: {{ .Values.ethernetType.wanName }}
volumeMounts:
- mountPath: /opt/tip-wlan/certs/client_keystore.jks
name: certificates
subPath: client_keystore.jks
- mountPath: /opt/tip-wlan/certs/truststore.jks
name: certificates
subPath: truststore.jks
- mountPath: /opt/tip-wlan/certs/server.pkcs12
name: certificates
subPath: server.pkcs12
- mountPath: /opt/tip-wlan/certs/httpClientConfig.json
name: certificates
subPath: httpClientConfig.json
- mountPath: /opt/tip-wlan/certs/ssl.properties
name: certificates
subPath: ssl.properties
- mountPath: /app/opensync/logback.xml
name: logback-config
subPath: logback.xml
- mountPath: {{ $file_store_path }}
name: file-store-data
ports:
- name: {{ .Values.service.name1 }}
containerPort: {{ .Values.service.port1 }}
protocol: TCP
- name: {{ .Values.service.name2 }}
containerPort: {{ .Values.service.port2 }}
protocol: TCP
- name: {{ .Values.service.name3 }}
containerPort: {{ .Values.service.port3 }}
protocol: TCP
- name: {{ .Values.service.name4 }}
containerPort: {{ .Values.service.port4 }}
protocol: TCP
{{- if .Values.debug.enabled }}
- name: {{ .Values.service.name5 }}
containerPort: {{ .Values.service.port5 }}
protocol: TCP
{{- end }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
- name: certificates
secret:
secretName: {{ include "common.fullname" . }}-certs
- name: logback-config
configMap:
name: {{ include "common.fullname" . }}-log-config
- name: file-store-data
{{- if .Values.persistence.enabled }}
persistentVolumeClaim:
claimName: {{ include "portal.sharedPvc.name" . }}
{{- else }}
emptyDir: {}
{{- end }}


@@ -1,42 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "common.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ . }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}


@@ -1,8 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ include "common.fullname" . }}-certs
namespace: {{ .Release.Namespace }}
type: Opaque
data:
{{ tpl (.Files.Glob "resources/config/certs/*").AsSecrets . | indent 2 }}


@@ -1,39 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "common.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port1 }}
targetPort: {{ .Values.service.port1 }}
protocol: TCP
name: {{ .Values.service.name1 }}
nodePort: {{ .Values.global.nodePortPrefix }}{{ .Values.service.nodePort1 }}
- port: {{ .Values.service.port2 }}
targetPort: {{ .Values.service.port2 }}
protocol: TCP
name: {{ .Values.service.name2 }}
nodePort: {{ .Values.global.nodePortPrefix }}{{ .Values.service.nodePort2 }}
- port: {{ .Values.service.port3 }}
targetPort: {{ .Values.service.port3 }}
protocol: TCP
name: {{ .Values.service.name3 }}
nodePort: {{ .Values.global.nodePortPrefix }}{{ .Values.service.nodePort3 }}
- port: {{ .Values.service.port4 }}
targetPort: {{ .Values.service.port4 }}
protocol: TCP
name: {{ .Values.service.name4 }}
nodePort: {{ .Values.global.nodePortPrefix }}{{ .Values.service.nodePort4 }}
{{- if .Values.debug.enabled }}
- port: {{ .Values.service.port5 }}
targetPort: {{ .Values.service.port5 }}
protocol: TCP
name: {{ .Values.service.name5 }}
nodePort: {{ .Values.global.nodePortPrefix }}{{ .Values.service.nodePort5 }}
{{- end }}
selector:
{{- include "common.selectorLabels" . | nindent 4 }}
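In the service template above, `nodePort` is rendered by string-concatenating `global.nodePortPrefix` with a per-port suffix such as `nodePort1`. A minimal sketch of the values involved (the prefix value is an assumed example, not taken from this chart):

```yaml
# Assumed example values, not copied from this chart:
global:
  nodePortPrefix: 302   # shared three-digit prefix for every NodePort service
service:
  port1: 6640
  nodePort1: 29         # renders as nodePort: 30229, inside the default
                        # NodePort range 30000-32767
```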


@@ -1,13 +0,0 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "common.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end -}}


@@ -1,18 +0,0 @@
{{- if .Values.testsEnabled -}}
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "common.fullname" . }}-test-connection"
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test-success
spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['{{ include "common.fullname" . }}:{{ .Values.service.port1 }}']
restartPolicy: Never
{{- end }}


@@ -1,35 +0,0 @@
apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
name: {{ include "common.name" . }}-controller
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
spec:
listener:
name: opensync-gw-controller-port-listener
protocol: TCP
upstreams:
- name: {{ include "common.name" . }}
service: {{ include "common.fullname" . }}
port: {{ .Values.service.port1 }}
action:
pass: {{ include "common.name" . }}
---
apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
name: {{ include "common.name" . }}-redirector
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
spec:
listener:
name: opensync-gw-redirector-port-listener
protocol: TCP
upstreams:
- name: {{ include "common.name" . }}
service: {{ include "common.fullname" . }}
port: {{ .Values.service.port2 }}
action:
pass: {{ include "common.name" . }}


@@ -1,170 +0,0 @@
# Default values for opensync-gw.
# This is a YAML-formatted file.
#################################################################
# Application configuration defaults.
#################################################################
# Declare variables to be passed into your templates.
replicaCount: 1
image:
name: opensync-gateway-cloud
tag: 0.0.1-SNAPSHOT
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name:
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
## Liveness and Readiness probe values.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
probes:
enabled: false
livenessProbe:
initialDelaySeconds: 5
periodSeconds: 30
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
readinessProbe:
initialDelaySeconds: 5
periodSeconds: 30
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
# Enable/Disable Helm tests
testsEnabled: false
# Enable/Disable Remote debugging
debug:
enabled: false
service:
type: NodePort
port1: 6640
nodePort1: 29
name1: controller
port2: 6643
name2: redirector
nodePort2: 30
port3: 9096
name3: server
nodePort3: 27
port4: 9097
name4: internal
nodePort4: 28
port5: 5005
name5: debug
nodePort5: 26
persistence:
enabled: false
## A manually managed Persistent Volume and Claim
## Requires persistence.enabled: true
# filestore.internal: location of the folder on the PV where UI files
# will be stored
# filestore.url: externally reachable URL, i.e. reachable from the AP,
# from which it can download the files. Override this value (url) to
# point at the HTTP server configured in your system
filestore:
internal: "/tmp/filestore"
url: DUMMY_FILESTORE_HTTPS_URL
integratedcloudcomponent:
url: wlan-integrated-cloud-component-service
port: 9091
prov:
url: wlan-prov-service
port: 9092
ssc:
url: wlan-ssc-service
port: 9032
mqtt:
url: opensync-mqtt-broker
portal:
url: wlan-portal-service
sharedPvc:
name: file-store-data
ordinal: 0
# This is a list of external host addresses for ovsdb and mqtt.
# These matter because they are what the AP sees. Please make sure
# to override them in the dev override file for your respective
# environments.
externalhostaddress:
ovsdb: opensync-gw-cloud
mqtt: opensync-mqtt-broker
ethernetType:
lanName: "lan"
lanType: "bridge"
wanType: "bridge"
wanName: "wan"
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases the chances that charts run on environments with
# little resources, such as Minikube. If you do want to specify resources, uncomment the
# following lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
env:
protocol: https
ssc_url: SSC_RELEASE_URL
prov_url: PROV_RELEASE_URL
ssc:
service: wlan-ssc-service
port: 9031
prov:
service: wlan-prov-service
port: 9091
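The comments in the values file above ask operators to override `filestore.url` and `externalhostaddress` per environment. A hypothetical dev override file (every hostname below is a placeholder, not taken from this repository) might look like:

```yaml
# dev-overrides.yaml -- hypothetical example, hostnames are placeholders
filestore:
  url: https://files.dev.example.com
externalhostaddress:
  ovsdb: gw.dev.example.com
  mqtt: mqtt.dev.example.com
```

applied with something like `helm upgrade --install <release> <chart> -f dev-overrides.yaml`.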


@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -1,6 +0,0 @@
dependencies:
- name: common
repository: file://../common
version: 0.1.0
digest: sha256:636a65e9846bdff17cc4e65b0849061f783759a37aa51fb85ff6fd8ba5e68467
generated: "2020-10-19T12:15:04.8106439Z"


@@ -1,12 +0,0 @@
apiVersion: v2
name: opensync-gw-static
description: WLAN Opensync Gateway Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: 1.16.0
dependencies:
- name: common
version: 0.1.0
repository: file://../common


@@ -1,2 +0,0 @@
Contains the certs needed for this service to start.
Please refer to this page: https://telecominfraproject.atlassian.net/wiki/spaces/WIFI/pages/262176803/Pre-requisites+before+deploying+Tip-Wlan+solution


@@ -1,18 +0,0 @@
{
"maxConnectionsTotal":100,
"maxConnectionsPerRoute":10,
"truststoreType":"JKS",
"truststoreProvider":"SUN",
"truststoreFile":"file:/opt/tip-wlan/certs/truststore.jks",
"truststorePass":"mypassword",
"keystoreType":"JKS",
"keystoreProvider":"SUN",
"keystoreFile":"file:/opt/tip-wlan/certs/client_keystore.jks",
"keystorePass":"mypassword",
"keyAlias":"clientkeyalias",
"credentialsList":[
{"host":"localhost","port":-1,"user":"user","password":"password"}
]
}


@@ -1,75 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!-- For assistance related to logback-translator or configuration -->
<!-- files in general, please contact the logback user mailing list -->
<!-- at http://www.qos.ch/mailman/listinfo/logback-user -->
<!-- -->
<!-- For professional support please see -->
<!-- http://www.qos.ch/shop/products/professionalSupport -->
<!-- -->
<configuration>
<appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
</encoder>
</appender>
<!--
<appender name="FILE" class="ch.qos.logback.core.FileAppender">
<file>myApp.log</file>
<encoder>
<pattern>%date %level [%thread] %logger{10} [%file:%line] %msg%n</pattern>
</encoder>
</appender>
-->
<appender name="logfile" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>/app/logs/opensyncgw.log</file>
<append>true</append>
<encoder>
<pattern>%date %level [%thread] %logger{36} [%file:%line] %msg%n</pattern>
</encoder>
<rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
<fileNamePattern>/app/logs/opensyncgw.%i.log.gz</fileNamePattern>
<minIndex>1</minIndex>
<maxIndex>3</maxIndex>
</rollingPolicy>
<triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
<maxFileSize>20MB</maxFileSize>
</triggeringPolicy>
</appender>
<!--
details: http://logback.qos.ch/manual/configuration.html#auto_configuration
runtime configuration, if need to override the defaults:
-Dlogback.configurationFile=/path/to/logback.xml
for log configuration debugging - use
-Dlogback.statusListenerClass=ch.qos.logback.core.status.OnConsoleStatusListener
log levels:
OFF ERROR WARN INFO DEBUG TRACE
-->
<logger name="org.apache.catalina.startup.DigesterFactory" level="ERROR"/>
<logger name="org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping" level="INFO"/>
<logger name="org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer" level="INFO"/>
<logger name="com.telecominfraproject.wlan" level="DEBUG"/>
<logger name="com.netflix.servo.tag.aws.AwsInjectableTag" level="OFF"/>
<logger name="com.vmware.ovsdb.service.OvsdbConnectionInfo" level="OFF"/>
<logger name="com.vmware.ovsdb.netty.OvsdbConnectionHandler" level="ERROR"/>
<logger name="MQTT_DATA" level="DEBUG"/>
<!--
<logger name="org.springframework.security.web.authentication.preauth" level="DEBUG"/>
-->
<root level="WARN">
<!-- <appender-ref ref="stdout"/>-->
<appender-ref ref="logfile"/>
</root>
</configuration>


@@ -1,21 +0,0 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "common.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "common.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "common.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "common.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:80
{{- end }}


@@ -1,7 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "common.fullname" . }}-log-config
namespace: {{ .Release.Namespace }}
data:
{{ tpl (.Files.Glob "resources/config/logback.xml").AsConfig . | indent 2 }}


@@ -1,94 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "common.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "common.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "common.selectorLabels" . | nindent 8 }}
spec:
imagePullSecrets:
- name: "{{ .Release.Namespace }}-docker-registry-key"
serviceAccountName: {{ include "common.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: {{ .Values.global.repository }}/{{ .Values.image.name }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.global.pullPolicy }}
{{- if .Values.probes.enabled }}
livenessProbe:
tcpSocket:
port: {{ .Values.service.port2 }}
initialDelaySeconds: {{ .Values.probes.livenessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.probes.livenessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.probes.livenessProbe.failureThreshold }}
periodSeconds: {{ .Values.probes.livenessProbe.periodSeconds }}
successThreshold: {{ .Values.probes.livenessProbe.successThreshold }}
readinessProbe:
tcpSocket:
port: {{ .Values.service.port2 }}
initialDelaySeconds: {{ .Values.probes.readinessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.probes.readinessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.probes.readinessProbe.failureThreshold }}
periodSeconds: {{ .Values.probes.readinessProbe.periodSeconds }}
successThreshold: {{ .Values.probes.readinessProbe.successThreshold }}
{{- end }}
volumeMounts:
- mountPath: /opt/tip-wlan/certs/client_keystore.jks
name: certificates
subPath: client_keystore.jks
- mountPath: /opt/tip-wlan/certs/truststore.jks
name: certificates
subPath: truststore.jks
- mountPath: /opt/tip-wlan/certs/server.pkcs12
name: certificates
subPath: server.pkcs12
- mountPath: /opt/tip-wlan/certs/httpClientConfig.json
name: certificates
subPath: httpClientConfig.json
- mountPath: /opt/tip-wlan/certs/ssl.properties
name: certificates
subPath: ssl.properties
- mountPath: /app/opensync/logback.xml
name: logback-config
subPath: logback.xml
ports:
- name: {{ .Values.service.name1 }}
containerPort: {{ .Values.service.port1 }}
protocol: TCP
- name: {{ .Values.service.name2 }}
containerPort: {{ .Values.service.port2 }}
protocol: TCP
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
- name: certificates
secret:
secretName: {{ include "common.fullname" . }}-certs
- name: logback-config
configMap:
name: {{ include "common.fullname" . }}-log-config


@@ -1,42 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "common.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ . }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}


@@ -1,8 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ include "common.fullname" . }}-certs
namespace: {{ .Release.Namespace }}
type: Opaque
data:
{{ tpl (.Files.Glob "resources/config/certs/*").AsSecrets . | indent 2 }}


@@ -1,20 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "common.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port1 }}
targetPort: {{ .Values.service.port1 }}
protocol: TCP
name: {{ .Values.service.name1 }}
- port: {{ .Values.service.port2 }}
targetPort: {{ .Values.service.port2 }}
protocol: TCP
name: {{ .Values.service.name2 }}
selector:
{{- include "common.selectorLabels" . | nindent 4 }}


@@ -1,13 +0,0 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "common.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end -}}


@@ -1,18 +0,0 @@
{{- if .Values.testsEnabled -}}
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "common.fullname" . }}-test-connection"
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test-success
spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['{{ include "common.fullname" . }}:{{ .Values.service.port1 }}']
restartPolicy: Never
{{- end }}


@@ -1,35 +0,0 @@
apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
name: {{ include "common.name" . }}-controller
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
spec:
listener:
name: opensync-gw-controller-port-listener
protocol: TCP
upstreams:
- name: {{ include "common.name" . }}
service: {{ include "common.fullname" . }}
port: {{ .Values.service.port1 }}
action:
pass: {{ include "common.name" . }}
---
apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
name: {{ include "common.name" . }}-redirector
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
spec:
listener:
name: opensync-gw-redirector-port-listener
protocol: TCP
upstreams:
- name: {{ include "common.name" . }}
service: {{ include "common.fullname" . }}
port: {{ .Values.service.port2 }}
action:
pass: {{ include "common.name" . }}


@@ -1,95 +0,0 @@
# Default values for opensync-gw.
# This is a YAML-formatted file.
#################################################################
# Application configuration defaults.
#################################################################
# Declare variables to be passed into your templates.
replicaCount: 1
image:
name: opensync-gateway-static
tag: 0.0.1-SNAPSHOT
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name:
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
## Liveness and Readiness probe values.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
probes:
enabled: false
livenessProbe:
initialDelaySeconds: 5
periodSeconds: 30
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
readinessProbe:
initialDelaySeconds: 5
periodSeconds: 30
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
# Enable/Disable Helm tests
testsEnabled: false
service:
type: ClusterIP
port1: 6640
name1: controller
port2: 6643
name2: redirector
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases the chances that charts run on environments with
# little resources, such as Minikube. If you do want to specify resources, uncomment the
# following lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}


@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -1,6 +0,0 @@
dependencies:
- name: common
repository: file://../common
version: 0.1.0
digest: sha256:636a65e9846bdff17cc4e65b0849061f783759a37aa51fb85ff6fd8ba5e68467
generated: "2020-10-19T12:15:25.5035557Z"


@@ -1,12 +0,0 @@
apiVersion: v2
name: opensync-mqtt-broker
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: 1.16.0
dependencies:
- name: common
version: 0.1.0
repository: file://../common


@@ -1,2 +0,0 @@
Contains the certs needed for this service to start.
Please refer to this page: https://telecominfraproject.atlassian.net/wiki/spaces/WIFI/pages/262176803/Pre-requisites+before+deploying+Tip-Wlan+solution


@@ -1,17 +0,0 @@
cafile /certs/cacert.pem
certfile /certs/mqttservercert.pem
keyfile /certs/mqttserverkey_dec.pem
require_certificate true
use_identity_as_username true
allow_anonymous false
allow_duplicate_messages true
autosave_interval 900
log_dest stdout
max_queued_bytes 0
max_queued_messages 0
message_size_limit 0
persistence true
persistence_file mosquitto.db
persistence_location /mosquitto/db/
pid_file /mosquitto/mosquitto.pid
port 1883
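One caveat with the broker values pinning `eclipse-mosquitto:latest`: Mosquitto 2.x deprecates the bare `port` directive used on the last line above in favor of `listener`, and without any listener it binds to localhost only. A hedged equivalent for 2.x (an assumption about the target broker version, not part of this repository) would end with:

```
listener 1883
```

Pinning the image to a known tag instead of `latest` avoids the config silently changing meaning on a broker upgrade.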


@@ -1,21 +0,0 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "common.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "common.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "common.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "common.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:80
{{- end }}


@@ -1,7 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: mosquitto-config
namespace: {{ .Release.Namespace }}
data:
{{ tpl (.Files.Glob "resources/config/mosquitto.conf").AsConfig . | indent 2 }}


@@ -1,42 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "common.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ . }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}


@@ -1,8 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: opensync-mqtt-broker-certs
namespace: {{ .Release.Namespace }}
type: Opaque
data:
{{ tpl (.Files.Glob "resources/config/certs/*").AsSecrets . | indent 2 }}


@@ -1,22 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "common.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port1 }}
targetPort: {{ .Values.service.port1 }}
protocol: TCP
name: {{ .Values.service.name1 }}
nodePort: {{ .Values.global.nodePortPrefix }}{{ .Values.service.nodePort1 }}
- port: {{ .Values.service.port2 }}
targetPort: {{ .Values.service.port2 }}
protocol: TCP
name: {{ .Values.service.name2 }}
nodePort: {{ .Values.global.nodePortPrefix }}{{ .Values.service.nodePort2 }}
selector:
{{- include "common.selectorLabels" . | nindent 4 }}


@@ -1,13 +0,0 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "common.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end -}}


@@ -1,168 +0,0 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ include "common.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
spec:
serviceName: {{ include "common.fullname" . }}
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "common.selectorLabels" . | nindent 6 }}
updateStrategy:
{{ toYaml .Values.updateStrategy | indent 4 }}
template:
metadata:
labels:
{{- include "common.selectorLabels" . | nindent 8 }}
{{- if .Values.podLabels }}
## Custom pod labels
{{- range $key, $value := .Values.podLabels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
{{- if .Values.podAnnotations }}
annotations:
## Custom pod annotations
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
spec:
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
imagePullSecrets:
- name: "{{ .Release.Namespace }}-docker-registry-key"
serviceAccountName: {{ include "common.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
initContainers:
- name: {{ include "common.name" . }}-init-dir-ownership-change
image: alpine:3.6
# Change ownership of the mounted volumes to the `mosquitto` user (UID/GID 1883)
command:
- sh
- -c
- |
chown -R 1883:1883 /mosquitto/data
chown -R 1883:1883 /mosquitto/db
volumeMounts:
- name: data
mountPath: /mosquitto/data
- name: db
mountPath: /mosquitto/db
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: {{ .Values.image.name }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.global.pullPolicy }}
{{- if .Values.probes.enabled }}
livenessProbe:
tcpSocket:
port: {{ .Values.service.port1 }}
initialDelaySeconds: {{ .Values.probes.livenessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.probes.livenessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.probes.livenessProbe.failureThreshold }}
periodSeconds: {{ .Values.probes.livenessProbe.periodSeconds }}
successThreshold: {{ .Values.probes.livenessProbe.successThreshold }}
readinessProbe:
tcpSocket:
port: {{ .Values.service.port1 }}
initialDelaySeconds: {{ .Values.probes.readinessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.probes.readinessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.probes.readinessProbe.failureThreshold }}
periodSeconds: {{ .Values.probes.readinessProbe.periodSeconds }}
successThreshold: {{ .Values.probes.readinessProbe.successThreshold }}
{{- end }}
volumeMounts:
- mountPath: /certs/cacert.pem
name: opensync-mqtt-broker-truststore
subPath: cacert.pem
- mountPath: /certs/mqttservercert.pem
name: opensync-mqtt-broker-truststore
subPath: mqttservercert.pem
- mountPath: /certs/mqttserverkey_dec.pem
name: opensync-mqtt-broker-truststore
subPath: mqttserverkey_dec.pem
- mountPath: /mosquitto/config/mosquitto.conf
name: opensync-mqtt-broker-conf
subPath: mosquitto.conf
- mountPath: /mosquitto/db/
name: db
- mountPath: /mosquitto/data/
name: data
ports:
- name: {{ .Values.service.name1 }}
containerPort: {{ .Values.service.port1 }}
protocol: TCP
- name: {{ .Values.service.name2 }}
containerPort: {{ .Values.service.port2 }}
protocol: TCP
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
- name: opensync-mqtt-broker-truststore
secret:
secretName: opensync-mqtt-broker-certs
- name: opensync-mqtt-broker-conf
configMap:
name: mosquitto-config
{{- if not .Values.persistence.enabled }}
- name: db
emptyDir: {}
- name: data
emptyDir: {}
{{- end }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: db
spec:
accessModes:
- {{ .Values.persistence.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.sizeDb | quote }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
- metadata:
name: data
spec:
accessModes:
- {{ .Values.persistence.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.sizeData | quote }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- end }}
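The `volumeClaimTemplates` above are rendered only when persistence is enabled, and a `storageClass` of `"-"` emits an empty `storageClassName`, which disables dynamic provisioning. A hedged values sketch matching the keys this template reads (the sizes are example assumptions):

```yaml
persistence:
  enabled: true
  accessMode: ReadWriteOnce
  sizeDb: 1Gi        # backs the /mosquitto/db claim
  sizeData: 1Gi      # backs the /mosquitto/data claim
  storageClass: "-"  # "-" renders storageClassName: "" (no dynamic provisioning)
```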


@@ -1,18 +0,0 @@
{{- if .Values.testsEnabled -}}
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "common.fullname" . }}-test-connection"
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test-success
spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['{{ include "common.fullname" . }}:{{ .Values.service.port1 }}']
restartPolicy: Never
{{- end }}


@@ -1,17 +0,0 @@
apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
name: {{ include "common.name" . }}-mqtt
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
spec:
listener:
name: opensync-mqtt-port-listener
protocol: TCP
upstreams:
- name: {{ include "common.name" . }}
service: {{ include "common.fullname" . }}
port: {{ .Values.service.port1 }}
action:
pass: {{ include "common.name" . }}
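
The TransportServer above refers to a named TCP listener, which the NGINX Ingress Controller resolves through a cluster-wide GlobalConfiguration resource. A minimal sketch of such a resource is shown below; the namespace and port number are assumptions and must match the controller's actual deployment:

```yaml
# Sketch only: the listener name must match the TransportServer's
# spec.listener.name; the port (1883) and namespace are assumed examples.
apiVersion: k8s.nginx.org/v1alpha1
kind: GlobalConfiguration
metadata:
  name: nginx-configuration
  namespace: nginx-ingress
spec:
  listeners:
  - name: opensync-mqtt-port-listener
    port: 1883
    protocol: TCP
```

Without a GlobalConfiguration that defines this listener, the controller rejects the TransportServer and no TCP traffic is forwarded to the MQTT service.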


@@ -1,129 +0,0 @@
# Default values for mqtt.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
name: eclipse-mosquitto
tag: latest
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name:
podSecurityContext: {}
# fsGroup: 2000
terminationGracePeriodSeconds: 1800 # Duration in seconds a mosquitto pod needs to terminate gracefully.
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
## Liveness and Readiness probe values.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
probes:
enabled: true
livenessProbe:
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
readinessProbe:
initialDelaySeconds: 5
periodSeconds: 30
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
# Enable/Disable Helm tests
testsEnabled: false
service:
type: NodePort
port1: 1883
name1: listener
nodePort1: 31
port2: 9001
name2: debug
nodePort2: 32
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases the chances that charts run in environments with limited
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
priorityClassName: ""
nodeSelector: {}
tolerations: []
affinity: {}
persistence:
enabled: false
## A manually managed Persistent Volume and Claim
## Requires persistence.enabled: true
## If defined, PVC must be created manually before volume will be bound
## existingClaimData: opensync-wifi-controller-opensync-mqtt-broker-data
## existingClaimDb: opensync-wifi-controller-opensync-mqtt-broker-db
## volumeReclaimPolicy: Retain
## If you want to bind to an existing PV, uncomment below with the pv name
## and comment out storageClass and the annotation below
## volumeNameDb: pvc-dc52b290-ae86-4cb3-aad0-f2c806a23114
## volumeNameData: pvc-735baedf-323b-47bc-9383-952e6bc5ce3e
## database data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass: "-"
accessMode: ReadWriteOnce
## Size of Db PVC
sizeDb: 1Gi
## Size of Data PVC
sizeData: 1Gi
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
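
With the defaults above, persistence is disabled and both broker volumes fall back to emptyDir. As a hedged example, persistence could be enabled with an override file such as the following (the gp2 storage class name is an assumption for an AWS EBS-backed cluster; the sizes are illustrative):

```yaml
# example-mqtt-values.yaml -- hypothetical override file
persistence:
  enabled: true
  storageClass: gp2   # assumed AWS EBS class; "-" disables dynamic provisioning
  accessMode: ReadWriteOnce
  sizeDb: 2Gi
  sizeData: 5Gi
```

Passed via `-f example-mqtt-values.yaml` on install or upgrade, this causes the StatefulSet's volumeClaimTemplates to render in place of the emptyDir volumes.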


@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -1,6 +0,0 @@
dependencies:
- name: common
repository: file://../common
version: 0.1.0
digest: sha256:636a65e9846bdff17cc4e65b0849061f783759a37aa51fb85ff6fd8ba5e68467
generated: "2020-10-19T12:15:26.5973407Z"


@@ -1,12 +0,0 @@
apiVersion: v2
name: wlan-cloud-graphql-gw
description: WLAN Cloud Apollo Server Helm Chart
type: application
version: 0.1.0
appVersion: 0.0.1
dependencies:
- name: common
version: 0.1.0
repository: file://../common


@@ -1,21 +0,0 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "common.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "common.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "common.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "common.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:80
{{- end }}


@@ -1,53 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "common.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "common.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "common.selectorLabels" . | nindent 8 }}
spec:
imagePullSecrets:
- name: "{{ .Release.Namespace }}-docker-registry-key"
serviceAccountName: {{ include "common.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: {{ .Values.global.repository }}/{{ .Values.image.name }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.global.pullPolicy }}
env:
- name: API
{{- if .Values.env.localService }}
value: {{ .Release.Name }}-{{ .Values.env.portalsvc }}
{{- else }}
value: {{ .Values.env.portalsvc }}
{{- end }}
ports:
- name: {{ .Values.service.name }}
containerPort: {{ .Values.service.port }}
protocol: TCP
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}


@@ -1,42 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "common.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ . }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}


@@ -1,17 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "common.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: {{ .Values.service.port }}
protocol: TCP
name: {{ .Values.service.name }}
nodePort: {{ .Values.global.nodePortPrefix | default .Values.nodePortPrefix }}{{ .Values.service.nodePort }}
selector:
{{- include "common.selectorLabels" . | nindent 4 }}
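
The nodePort value in the service template above is built by string concatenation: a global prefix followed by the chart-local suffix. As a hedged illustration, with a hypothetical global prefix of 302 and the chart default nodePort of 23, the template renders nodePort: 30223, which falls inside Kubernetes' default NodePort range (30000-32767):

```yaml
# Hypothetical override illustrating the concatenation scheme
global:
  nodePortPrefix: 302   # assumed value; must keep the result in 30000-32767
service:
  nodePort: 23          # chart default; renders as nodePort: 30223
```

The same scheme applies to the MQTT chart's nodePort1/nodePort2 suffixes (31 and 32).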


@@ -1,13 +0,0 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "common.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end -}}


@@ -1,18 +0,0 @@
{{- if .Values.testsEnabled -}}
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "common.fullname" . }}-test-connection"
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test-success
spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['{{ include "common.fullname" . }}:{{ .Values.service.port }}']
restartPolicy: Never
{{- end }}


@@ -1,86 +0,0 @@
# Default values for opensync-gw.
# This is a YAML-formatted file.
#################################################################
# Application configuration defaults.
#################################################################
# Declare variables to be passed into your templates.
replicaCount: 1
image:
name: wlan-cloud-graphql-gw
tag: latest
pullPolicy: IfNotPresent
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name:
podSecurityContext: {}
# fsGroup: 2000
# Enable/Disable Helm tests
testsEnabled: false
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
# If env.localService is true, the release name is prefixed to portalsvc so
# that the in-cluster service is reachable.
env:
portalsvc: wlan-portal-service:9051
localService: false
service:
type: NodePort
port: 4000
name: graphui
nodePort: 23
ingress:
enabled: true
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: wlan-ui-graphql.zone3.lab.connectus.ai
paths: [
/
]
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases the chances that charts run in environments with limited
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
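
Tying this back to the deployment template, when env.localService is true the API environment variable is rendered as `{{ .Release.Name }}-{{ .Values.env.portalsvc }}`. A hedged example override (the release name "tip-wlan" is an assumption):

```yaml
# Hypothetical override: with a release named "tip-wlan", the API
# variable would render as "tip-wlan-wlan-portal-service:9051"
env:
  portalsvc: wlan-portal-service:9051
  localService: true
```

Leaving localService at false keeps the value exactly as given in portalsvc, which suits a portal service deployed outside this release.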


@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -1,6 +0,0 @@
dependencies:
- name: common
repository: file://../common
version: 0.1.0
digest: sha256:636a65e9846bdff17cc4e65b0849061f783759a37aa51fb85ff6fd8ba5e68467
generated: "2020-10-19T12:15:47.5451817Z"


@@ -1,12 +0,0 @@
apiVersion: v2
name: wlan-cloud-static-portal
description: WLAN Cloud Portal Helm Chart
type: application
version: 0.1.0
appVersion: 0.0.1
dependencies:
- name: common
version: 0.1.0
repository: file://../common


@@ -1,21 +0,0 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "common.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "common.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "common.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "common.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:80
{{- end }}


@@ -1,49 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "common.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "common.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "common.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "common.selectorLabels" . | nindent 8 }}
spec:
imagePullSecrets:
- name: "{{ .Release.Namespace }}-docker-registry-key"
serviceAccountName: {{ include "common.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: {{ .Values.global.repository }}/{{ .Values.image.name }}:{{ .Values.image.tag }}
imagePullPolicy: {{ .Values.global.pullPolicy }}
env:
- name: API
value: {{ .Values.env.graphql }}
ports:
- name: {{ .Values.service.name }}
containerPort: {{ .Values.service.port }}
protocol: TCP
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}

Some files were not shown because too many files have changed in this diff.