The keystone chart recently had a change to fix the warning about
the world-readable fernet key directory, but an extra fsGroup
entry caused the chart to fail to deploy with helm v3.
This change removes the offending entry from the values file
in the keystone chart.
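For illustration only: the failure described is consistent with a
duplicated map key in values.yaml, which helm v3 parses more
strictly than helm v2. The key path and values below are a sketch,
not the chart's exact layout, and the duplicate-key shape is an
assumption based on the description above:

    pod:
      security_context:
        keystone:
          pod:
            runAsUser: 42424
            fsGroup: 42424
            fsGroup: 42424  # extra entry removed by this change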
Change-Id: I540854da7123f413215b627d3bfb077c6f4864c6
Now that the main linting job runs helm v3, this extra job is
no longer needed. This change removes the specific helm v3
linter job.
Change-Id: I40d6be368a4f36242c54b9a57b7e6f7328be8bb6
The current implementation of keystone prints a warning message if
the directory containing the fernet keys is world readable (o+r).
As OSH uses a volumeMount, read-only by default, to handle the
fernet keys, there is no meaningful way to make the directory (not
the keys) non-world-readable. Consequently, keystone just keeps
logging that warning, adding no value beyond flooding the log.
Rather than disabling the log message in keystone (as that warning
is meaningful from a security standpoint), this patch set changes
the way we deal with the secret volume so that the directory is no
longer world readable and keystone stops issuing the warning.
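As background, the projected file mode of a secret volume is the
usual lever for tightening permissions here. A minimal sketch of
the shape, with illustrative names and mode, and not necessarily
the exact mechanism this patch set uses:

    volumes:
      - name: keystone-fernet-keys
        secret:
          secretName: keystone-fernet-keys
          defaultMode: 0440  # octal; drops the o+r bit on projected files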
Signed-off-by: Tin Lam <t@lam.wtf>
Change-Id: Id29abe667f5ef0b61da3d3825b5bf795f2d98865
As part of the move to helm v3, the charts in the OSH repos
no longer lint/build properly due to the lack of helm serve
in helm v3.
This change points the helm-toolkit dependency at the
osh-infra repo in order to account for the removal of helm serve.
This work is part of the migration to helm v3 and will be utilized
in future changes.
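A sketch of what the updated dependency reference looks like,
assuming a requirements.yaml that previously pointed at a local
helm serve endpoint (paths and version are illustrative):

    dependencies:
      - name: helm-toolkit
        # was: repository: http://localhost:8879/charts (helm serve)
        repository: file://../../openstack-helm-infra/helm-toolkit
        version: 0.1.0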
Change-Id: I90d25943d69ad6c76455f7778a4894f00c525c46
With the move to helm v3, helm status requires a namespace to be
specified, but doing so breaks helm v2 compatibility. In order
to preserve our gating with both versions of helm while we make
the change from v2 to v3, this change removes the usage of helm
status in openstack-helm's deployment scripts.
Once we fully move to helm v3, these scripts can be improved and
cleaned up to be more compatible with the new v3 syntax.
Change-Id: I02b6bbf780abf8c8bc7c1783c35d9411d25e18a8
If labels are not specified on a Job, kubernetes defaults them
to include the labels of its underlying Pod template. Helm 3
injects metadata into all resources [0], including an
`app.kubernetes.io/managed-by: Helm` label. As a result, when
kubernetes sees a Job's labels they are no longer empty and do
not get defaulted to the underlying Pod template's labels. This
is a
problem since Job labels are depended on by
- Armada pre-upgrade delete hooks
- Armada wait logic configurations
- kubernetes-entrypoint dependencies
Thus, for Job templates previously missed, this adds labels matching
the underlying Pod template to retain the same labels that were
present with Helm 2.
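A sketch of the resulting pattern, assuming helm-toolkit's
kubernetes_metadata_labels snippet (chart and component names are
illustrative): the same labels are rendered on the Job and on its
Pod template rather than relying on the kubernetes default.

    apiVersion: batch/v1
    kind: Job
    metadata:
      labels:
    {{ tuple $envAll "keystone" "db-sync" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 4 }}
    spec:
      template:
        metadata:
          labels:
    {{ tuple $envAll "keystone" "db-sync" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }}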
[0]: https://github.com/helm/helm/pull/7649
Change-Id: Ie438b449a3d9853d786215d40a39c32d164e9950
If labels are not specified on a Job, kubernetes defaults them
to include the labels of its underlying Pod template. Helm 3
injects metadata into all resources [0], including an
`app.kubernetes.io/managed-by: Helm` label. As a result, when
kubernetes sees a Job's labels they are no longer empty and do
not get defaulted to the underlying Pod template's labels. This
is a
problem since Job labels are depended on by
- Armada pre-upgrade delete hooks
- Armada wait logic configurations
- kubernetes-entrypoint dependencies
Thus, for each Job template, this adds labels matching the
underlying Pod template to retain the same labels that were
present with Helm 2.
[0]: https://github.com/helm/helm/pull/7649
Change-Id: Ib5a7eb494fb776d74e1edc767b9522b02453b19d
The nova-service-cleaner job deletes services which are down. If
the database is down, the services will go down as well. When the
database comes back up, all the services start to return to an up
status. If nova-service-cleaner runs in this interim, services
that were down get deleted, even though they would have come back
up had the job not run. This adds a sleep to the job to give
services time to come back up while recovering; the sleep is set
to 2 times the report_interval.
Change-Id: Ia292d19508e9449ccb40d1100b1d56b1283e5d53
It's impossible to disable the helm.sh/hook for the nova-ks-service
job since the hook is added to the job dictionary in duplicate
before the check for Values.helm3_hook. This commit removes the
duplication so we can disable it properly.
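After the fix, the hook annotation is rendered only inside the
guard, roughly like this (a sketch; the annotation values follow
the pattern used elsewhere in these charts):

    {{- if .Values.helm3_hook }}
    annotations:
      "helm.sh/hook": post-install,post-upgrade
    {{- end }}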
Signed-off-by: Thiago Brito <thiago.brito@windriver.com>
Change-Id: Ie72a13afc81bce4424b10bbc542dc7c44dd38975
This adds a helm3_hook value in the values.yaml file in case the
hooks need to be disabled (e.g. on Helm v2).
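The toggle itself is a single values entry (the default shown is
an assumption):

    helm3_hook: true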
Signed-off-by: Thiago Brito <thiago.brito@windriver.com>
Change-Id: I1c03ea9ee88d1306283ce577b100c9864bec5d1b
This PS adds the rabbitmq secret volume + mount for the audit
usage cronjob, as it was previously missing and the job's command(s)
were failing when run.
In addition, add labels to the CronJob's metadata, so that it can
be picked up for pre-delete hooks.
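A sketch of the added wiring in the cronjob's pod template; all
names here are illustrative rather than the chart's actual
identifiers:

    containers:
      - name: audit-usage
        volumeMounts:
          - name: rabbitmq-secret
            mountPath: /etc/rabbitmq/secrets
            readOnly: true
    volumes:
      - name: rabbitmq-secret
        secret:
          secretName: rabbitmq-user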
Change-Id: I0a2ed0655702b4e41cc12d3908b9aed141e6f0d2
The policy documents hardcoded in the chart's values file conflict
with the policies in code, creating strange issues. As the
policies for nova, neutron, keystone, glance and cinder are
available in the horizon code, they have been removed from the
chart's values file.
Change-Id: I78b487c11d3d018b18ce823ffd9d8b8940dfa575
Remove the hardcoded policy document from the helm chart's values
file in favor of policy in code.
Change-Id: I5c3c4699cafc76d3aa7d9c94f6e15eeff3f22b6c
The script fails with "too many arguments" when a command like
"$(date -d 'now - 2 days')" is provided as the value for the
--before option. Adding quotes fixes the issue.
Change-Id: I0639d8aea368988976d5990c42e960de44844f61
The default of 'domain_config_dir' in keystone is '/etc/keystone/domains'.
This patch adds the missing slash.
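Assuming the usual OSH conf layout for keystone.conf's [identity]
section, the corrected value reads:

    conf:
      keystone:
        identity:
          domain_config_dir: /etc/keystone/domains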
Change-Id: I30523ec3fd3144811a76b9078e915eff4ffa2b66
Now that the kubeadm-aio is fixed, we can re-enable the
multinode jobs for gating against openstack-helm.
Change-Id: Ib1f1bca5f370e0326ea0211dfcfba9544bd458b2
This change removes a number of old and duplicated jobs, a
duplicate netpol job, and the old armada jobs that have not
been maintained. It also removes the tls job from experimental,
since we now run it in gating.
Change-Id: Ic19520d8790c52d66d62b20a23658c57d954697e
Chart upgrades fail because some fields in the jobs are immutable
and cannot be updated in place. To solve the problem, helm.sh/hook
annotations with post-install and post-upgrade values can be used
so that the jobs are the last manifests to be applied. As the jobs
depend on services, hook weights are used to maintain the job
creation order.
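In template terms, the hook annotations take roughly this shape
(the weight value is illustrative; lower weights are created
first):

    annotations:
      "helm.sh/hook": post-install,post-upgrade
      "helm.sh/hook-weight": "-5"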
Change-Id: I7551977599d376e4d240fff5cb9d002fc918d9fe
During an upgrade, the Cinder pods go through the upgrade
process. Sometimes the pods are unavailable to handle the
bootstrap requests even though the Cinder services are up.
This patchset gives the bootstrap job additional attempts to
finish its tasks.
Change-Id: Ie7bd8909f1c93b76b2242748318f892a6ff9c53d
The OPENSTACK_ENABLE_PASSWORD_RETRIEVE value is currently a
string, so it always evaluates to true regardless of the config
value.
Change-Id: I0fb1203f22ddd6e707eeb80f72a3685c3b9c350f
Chart upgrades were failing because some immutable fields need to
be updated before the jobs can be upgraded. To solve this issue,
we have added helm.sh/hook annotations with post-install and
post-upgrade values. hook-weight annotations have also been added
to control the order of job creation, as the jobs depend on one
another; for example, the db-init jobs need to run before db-sync,
and so on. A helm3_hook value is also introduced in values.yaml,
from which the hooks can be disabled if needed.
Change-Id: Ibc99cb20482864f55daa12321e8d81414c1ef9f8
Chart upgrades were failing because some immutable fields in the
jobs need to be updated. We have added helm.sh/hook annotations
with post-install and post-upgrade values, along with hook-weight
annotations to control the order of job creation, as the jobs
depend on one another; for example, the db-init jobs need to run
before db-sync, and so on. A helm3_hook value is also introduced
in values.yaml, which can be used to disable the helm hooks if
needed.
Change-Id: Idb4b992b4061f4a014570b7933a585df1a096299
Chart upgrades were failing because some immutable fields need to
be updated before the jobs can be upgraded. To solve this issue,
helm.sh/hook annotations with post-install and post-upgrade values
have been added, along with hook-weight annotations to control the
order of job creation, as the jobs depend on one another; for
example, the db-init jobs need to run before db-sync, and so on.
A helm3_hook value is also added in values.yaml in case the hooks
need to be disabled.
Change-Id: I4d489f5ded94f19dd3fcf58dafde00b18ff5bcae
This change adds a helm3 linter to the zuul check job list. The
job currently emits some warnings, which will be cleared up in
future changes.
Change-Id: I4d74ba5464e9e3d78b95298e9778b99f1b387fcd
This change adds the V and W OpenStack release jobs to the zuul
check list. This will bring OSH testing more in line with the
latest supported releases of OpenStack.
Change-Id: I2cc98159ee9bf1ad3ac5c70a772e2b4c1bbd7fa4