This change migrates the check jobs in OSH to use the
new helm v3 script when deploying kubernetes via
minikube.
This is one step in the move to helm v3. Future changes
will migrate the other jobs.
Change-Id: If741db5997a27ed06584b9af2d50485d8de34a2b
The move to helm v3 breaks the rendering for the ca-issuer chart.
While that gets fixed, we can temporarily make the job non-voting
in order to unblock the migration to helm v3.
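For illustration, marking a check job non-voting in the Zuul config is a
one-line change of roughly this shape (the job name below is a placeholder,
not the actual ca-issuer job):

```yaml
- job:
    name: openstack-helm-example-deploy   # placeholder job name
    voting: false                         # keep running the job, but do not gate on it
```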
Change-Id: Ia25ac1f85974fc8c8ac8cf3ffedff746a92f2cf5
This change updates the image references in the keystone chart
to the latest supported releases of both openstack and ubuntu.
Change-Id: If4f30252b5d839cfe517ee57cbef96e7775e7ec5
In some deployment environments, nova-compute processes take a bit
longer to register on all hosts, and a vm/server may be instantiated
almost immediately, before the process has registered on the remaining
hosts.
This PS enhances the cell-setup-init script with an option to
extend the wait before performing host discovery (discover_hosts).
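A hypothetical sketch of the kind of values knob this enables (the key
names are illustrative; the chart's actual names may differ):

```yaml
jobs:
  cell_setup:
    extended_wait:
      enabled: true
      iteration: 3    # number of extra wait cycles before giving up
      duration: 30    # seconds to sleep per cycle before running discover_hosts
```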
Change-Id: Ie9867e64c554d4f39fdc7432823a1869f0b4a520
The keystone chart recently had a change to fix the world
readable warning message, but an extra fsGroup entry causes
the chart to fail to deploy when using helm3.
This change removes the offending entry from the values file
in the keystone chart.
Change-Id: I540854da7123f413215b627d3bfb077c6f4864c6
Now that the main linting job runs helm v3, this extra job is
no longer needed. This change removes the specific helm v3
linter job.
Change-Id: I40d6be368a4f36242c54b9a57b7e6f7328be8bb6
The current implementation of Keystone prints a warning message if the
directory containing the fernet keys is world readable (o+r). As OSH
uses a volumeMount to handle fernet keys, and that mount is read-only
by default, there is no meaningful way to make the directory (not the
keys) non-world-readable. Consequently, keystone just keeps logging
that warning, adding no particular value besides flooding the log.
Rather than disabling the log message in keystone (as that warning is
meaningful from a security standpoint), this patch set changes the way
we deal with the secret volume so that the directory is no longer world
readable and keystone stops issuing that warning message.
Signed-off-by: Tin Lam <t@lam.wtf>
Change-Id: Id29abe667f5ef0b61da3d3825b5bf795f2d98865
As part of the move to helm v3, all the charts in the OSH repos
will no longer lint/build properly due to the lack of helm serve
in helm v3.
This change points the helm-toolkit repo location at the
osh-infra repo in order to account for the removal of helm serve.
This work is part of the migration to helm v3 and will be utilized
in future changes.
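As a rough sketch of what the dependency change looks like for a chart,
assuming it previously pulled helm-toolkit from the local helm serve
endpoint (paths are illustrative):

```yaml
# requirements.yaml of an OSH chart
dependencies:
  - name: helm-toolkit
    repository: file://../../openstack-helm-infra/helm-toolkit  # was: http://localhost:8879/charts (helm serve)
    version: ">= 0.1.0"
```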
Change-Id: I90d25943d69ad6c76455f7778a4894f00c525c46
With the move to helm v3, helm status requires a namespace to be
specified, but doing so breaks helm v2 compatibility. In order
to preserve our gating with both versions of helm while we make
the change from v2 to v3, this change removes the usage of helm
status in openstack-helm's deployment scripts.
Once we fully move to helm v3, these scripts can be improved and
cleaned up to be more compatible with the new v3 syntax.
Change-Id: I02b6bbf780abf8c8bc7c1783c35d9411d25e18a8
If labels are not specified on a Job, kubernetes defaults them
to include the labels of their underlying Pod template. Helm 3
injects metadata into all resources [0], including an
`app.kubernetes.io/managed-by: Helm` label. As a result, when
kubernetes sees a Job's labels they are no longer empty, so they do
not get defaulted to the underlying Pod template's labels. This is a
problem since Job labels are depended on by:
- Armada pre-upgrade delete hooks
- Armada wait logic configurations
- kubernetes-entrypoint dependencies
Thus, for Job templates previously missed, this adds labels matching
the underlying Pod template to retain the same labels that were
present with Helm 2.
[0]: https://github.com/helm/helm/pull/7649
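As a sketch of the shape of the fix (chart and component names are
illustrative), the Job metadata now carries the same helm-toolkit labels
as its Pod template instead of being left empty:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: keystone-db-sync
  labels:
{{ tuple $envAll "keystone" "db-sync" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 4 }}
spec:
  template:
    metadata:
      labels:
{{ tuple $envAll "keystone" "db-sync" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }}
```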
Change-Id: Ie438b449a3d9853d786215d40a39c32d164e9950
If labels are not specified on a Job, kubernetes defaults them
to include the labels of their underlying Pod template. Helm 3
injects metadata into all resources [0], including an
`app.kubernetes.io/managed-by: Helm` label. As a result, when
kubernetes sees a Job's labels they are no longer empty, so they do
not get defaulted to the underlying Pod template's labels. This is a
problem since Job labels are depended on by:
- Armada pre-upgrade delete hooks
- Armada wait logic configurations
- kubernetes-entrypoint dependencies
Thus for each Job template this adds labels matching the
underlying Pod template to retain the same labels that were
present with Helm 2.
[0]: https://github.com/helm/helm/pull/7649
Change-Id: Ib5a7eb494fb776d74e1edc767b9522b02453b19d
The nova-service-cleaner job deletes services that are down. If the
database is down, the services will go down as well. When the database
comes back up, all the services start to return to an up status. If
nova-service-cleaner runs in this interim period, the services that
were down get deleted, even though they would have come back up if the
job had not run. This change adds a sleep to the job to give services
time to come back up while recovering. The sleep is set to 2 times the
report_interval.
Change-Id: Ia292d19508e9449ccb40d1100b1d56b1283e5d53
It's impossible to disable the helm.sh/hook for the nova-ks-service
job since the hook is added to the job dictionary a second time,
before the check for Values.helm3_hook. This commit removes the
duplicate so we can disable it properly.
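A minimal sketch of the intended result, with the hook annotation
rendered only once and only when helm3_hook is enabled (the manifest
details are illustrative, not the exact helm-toolkit template):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: nova-ks-service
{{- if .Values.helm3_hook }}
  annotations:
    "helm.sh/hook": post-install,post-upgrade
{{- end }}
```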
Signed-off-by: Thiago Brito <thiago.brito@windriver.com>
Change-Id: Ie72a13afc81bce4424b10bbc542dc7c44dd38975
This adds a helm3_hook flag to the values.yaml file in case hooks need
to be disabled (e.g. on Helm v2).
Signed-off-by: Thiago Brito <thiago.brito@windriver.com>
Change-Id: I1c03ea9ee88d1306283ce577b100c9864bec5d1b
This PS adds the rabbitmq secret volume + mount for the audit
usage cronjob, as it was previously missing and the job's command(s)
were failing when run.
In addition, this adds labels to the CronJob's metadata so that it can
be picked up by pre-delete hooks.
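A rough sketch of the two additions, with placeholder names for the
secret, the mount path and the label values:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: volume-usage-audit                        # placeholder name
  labels:
{{ tuple $envAll "cinder" "volume-usage-audit" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 4 }}
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: volume-usage-audit
              volumeMounts:
                - name: rabbitmq-secret           # placeholder volume name
                  mountPath: /etc/rabbitmq/certs  # placeholder path
                  readOnly: true
          volumes:
            - name: rabbitmq-secret
              secret:
                secretName: rabbitmq-secret       # placeholder secret name
```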
Change-Id: I0a2ed0655702b4e41cc12d3908b9aed141e6f0d2
The policy documents in the chart values and the policy documents in
code conflict and create strange issues. As the policies for nova,
neutron, keystone, glance and cinder are available in the horizon code,
they have been removed from the chart values file.
Change-Id: I78b487c11d3d018b18ce823ffd9d8b8940dfa575
This removes the hardcoded policy documents from the helm chart values
file in favor of policy in code.
Change-Id: I5c3c4699cafc76d3aa7d9c94f6e15eeff3f22b6c
The script fails with "too many arguments" when a command like
"$(date -d 'now - 2 days')" is provided as the value for the --before
option. Adding quotes fixes the issue.
Change-Id: I0639d8aea368988976d5990c42e960de44844f61
The default of 'domain_config_dir' in keystone is '/etc/keystone/domains'.
This patch adds the missing slash.
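For reference, a sketch of the corrected value as it would sit under the
chart's conf tree (the exact location in values.yaml may differ):

```yaml
conf:
  keystone:
    identity:
      domain_config_dir: /etc/keystone/domains   # was missing a slash
```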
Change-Id: I30523ec3fd3144811a76b9078e915eff4ffa2b66
Now that the kubeadm-aio is fixed, we can re-enable the
multinode jobs for gating against openstack-helm.
Change-Id: Ib1f1bca5f370e0326ea0211dfcfba9544bd458b2
This change removes a bunch of old and duplicated jobs,
duplicate netpol and the old armada jobs that have not
been maintained. It also removes the tls job from experimental
since we now run it in gating.
Change-Id: Ic19520d8790c52d66d62b20a23658c57d954697e
Chart upgrades fail as some immutable fields in jobs need to be
applied earlier than the job manifests. To solve the problem,
helm.sh/hook annotations with post-install and post-upgrade values can
be used so that the jobs are the last manifests to be applied, after
all the others. As jobs are dependent on services, a hook weight is
used to maintain the job creation order.
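A minimal sketch of the kind of annotations this adds to a job (the
values shown are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: keystone-db-sync                # illustrative job
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "-5"         # lower weights run first, preserving job ordering
```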
Change-Id: I7551977599d376e4d240fff5cb9d002fc918d9fe