Fixes a minor issue with variable naming which prevents the script from
complying with the backup retention policy.
Change-Id: Ic241310a66af92ee423f5c762c413af7d6d53f0b
Added a parser for archive names to cover the situation where an archive
name can be represented in two different formats:
1) <database name>.<namespace>.<table name | all>.<date-time>.tar.gz
2) <database name>.<namespace>.<table name | all>.<backup mode>.<date-time>.tar.gz
The first format is the one currently in use; the second format is
recommended for future use.
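For illustration, the same archive would look as follows in each format (the
backup mode value "full" here is only a hypothetical example):
mariadb.openstack.all.2022-03-21T00:00:16Z.tar.gz
mariadb.openstack.all.full.2022-03-21T00:00:16Z.tar.gz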
Change-Id: I6b631b3b938c0a0242c5a8870284995b2cd8f27b
Minor change to list the archive directory including files in sub-directories,
as shown below. Without the change, only the directory name 'quarantine'
is displayed.
All Local Archives
==============================================
mariadb.openstack.all.2022-03-20T18:00:17Z.tar.gz
mariadb.openstack.all.2022-03-21T00:00:16Z.tar.gz
mariadb.openstack.all.2022-03-21T06:00:12Z.tar.gz
mariadb.openstack.all.2022-03-21T12:00:13Z.tar.gz
mariadb.openstack.all.2022-03-21T18:00:11Z.tar.gz
quarantine/mariadb.openstack.all.2022-03-23T00:00:12Z.tar.gz
quarantine/mariadb.openstack.all.2022-03-23T06:00:11Z.tar.gz
quarantine/mariadb.openstack.all.2022-03-23T12:00:14Z.tar.gz
quarantine/mariadb.openstack.all.2022-03-23T14:24:04Z.tar.gz
Change-Id: Ic47a30884b82cdecedbfff8ddf1d85fc00d89acc
This adds taint toleration support for openstack jobs.
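A rough values sketch, assuming a hypothetical tolerations override for the
jobs (the key layout is illustrative; only the toleration fields themselves
are standard Kubernetes):
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule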
Signed-off-by: Lucas Cavalcante <lucasmedeiros.cavalcante@windriver.com>
Change-Id: I168837f962465d1c89acc511b7bf4064ac4b546c
This is to cover a relatively rare situation in which backups
of different databases can share the same storage.
Change-Id: I0770e1baf3d33e2d56c34558a9a97a99a01e5e04
Modifies the backup script so that there will always be a given minimum
number of days of backups in both local and remote (if applicable)
locations, regardless of the date on which the backups were taken.
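A minimal values sketch, with hypothetical key names for the retention
settings:
conf:
  backup:
    days_to_keep: 3
    remote_backup:
      days_to_keep: 3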
Change-Id: I19d5e592905ce83acdba043f68ca4d0b042de065
The set -x has produced 6 identical log strings every time the
log_backup_error_exit function is called. Prometheus uses the
occurrence and count of certain log messages over a period of time to
evaluate whether a database backup has failed. Only one log entry
should be generated when a particular database backup scenario fails.
Upon discussion with the database backup and restore SME, it is
recommended to remove the set -x once and for all.
Change-Id: I846b5c16908f04ac40ee8f4d87d3b7df86036512
This is a code improvement to reuse the ceph monitor discovery function
in different templates. Calling the above-mentioned function from
a single place (helm-infra snippets) reduces code maintenance
and simplifies further development.
Rev. 0.1 Charts version bump for ceph-client, ceph-mon, ceph-osd,
ceph-provisioners and helm-toolkit
Rev. 0.2 Mon endpoint discovery functionality added for
the rados gateway. ClusterRole and ClusterRoleBinding added.
Rev. 0.3 checkdns is allowed to correct ceph.conf for RGW deployment.
Rev. 0.4 Added RoleBinding to the deployment-rgw.
Rev. 0.5 Remove _namespace-client-ceph-config-manager.sh.tpl and
the appropriate job, because of duplicated functionality.
Related configuration has been removed.
Rev. 0.6 RoleBinding logic has been changed to meet rules:
checkdns namespace - HAS ACCESS -> RGW namespace(s)
Change-Id: Ie0af212bdcbbc3aa53335689deed9b226e5d4d89
At the moment it is very difficult to pull images from a private
registry that hasn't been configured on Kubernetes nodes as there
is no way to specify imagePullSecrets on pods.
This change introduces a snippet that can return a set of image
pull secrets using either a default or a per pod value. It also
adds this new snippet to the manifests for standard job types.
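For illustration, with a default pull secret configured (the secret name here
is hypothetical), the snippet renders the standard Kubernetes field into the
pod spec, roughly:
spec:
  imagePullSecrets:
    - name: private-registry-key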
Change-Id: I710e1feffdf837627b80bc14320751f743e048cb
* Add capability to retry uploading a backup to the remote server a configured
number of times and delay the retries randomly between configured
minimum/maximum seconds (see the sketch below).
* Enhanced error checking, logging and retrying logic.
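A minimal values sketch of such a retry configuration; the key names are
illustrative only:
conf:
  backup:
    remote_backup:
      number_of_retries: 5
      delay_range:
        min: 30
        max: 60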
Change-Id: Ida3649420bdd6d39ac6ba7412c8c7078a75e0a10
We need flexibility to add securityContext to the ks-user job at pod and
container level, so that it can be executed without elevated privileges.
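A minimal values sketch following the usual per-pod/per-container security
context layout; the exact key names are illustrative:
pod:
  security_context:
    ks_user:
      pod:
        runAsUser: 65534
      container:
        ks-user:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true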
Change-Id: Ibd8abdc10906ca4648bfcaa91d0f122e56690606
In the cert-manager v1 API, the private key size field "keySize" was renamed
to "size" under "privateKey".
Support for pre-v1 API versions of certificates is also removed.
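For example, a key size that was previously declared as:
spec:
  keySize: 2048
is declared in the v1 API as:
spec:
  privateKey:
    size: 2048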
Change-Id: If3fa0e296b8a1c2ab473e67b24d4465fe42a5268
This reverts commit 5407b547bb.
Reason for revert: This outputs duplicate securityContext entries,
breaking the yamllinter in osh. This needs a slight rework.
Change-Id: I0c892be5aba7ccd6e3c378e4e45a79d2df03c06a
We need flexibility to add securityContext to the ks-user job, so that it can
be executed without elevated privileges.
Change-Id: I24544015816d57d86c1e69f44b90b6b0271e76a4
If labels are not specified on a Job, kubernetes defaults them
to include the labels of their underlying Pod template. Helm 3
injects metadata into all resources [0] including a
`app.kubernetes.io/managed-by: Helm` label. Thus when kubernetes
sees a Job's labels they are no longer empty and thus do not get
defaulted to the underlying Pod template's labels. This is a
problem since Job labels are depended on by
- Armada pre-upgrade delete hooks
- Armada wait logic configurations
- kubernetes-entrypoint dependencies
Thus for each Job template this adds labels matching the
underlying Pod template to retain the same labels that were
present with Helm 2.
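In practice each Job template gains an explicit labels block mirroring its
Pod template, roughly (using the placement db-init job shown later in this
log as an example):
metadata:
  labels:
    application: placement
    component: db-init
    release_group: placement
spec:
  template:
    metadata:
      labels:
        application: placement
        component: db-init
        release_group: placement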
[0]: https://github.com/helm/helm/pull/7649
Change-Id: I3b6b25fcc6a1af4d56f3e2b335615074e2f04b6d
Currently it isn't possible to set extra labels on pods that use
the labels snippet. This means users are required to fork the helm
repository for OpenStack services to add custom labels. An example
use case is injecting Istio sidecars.
This change introduces the ability to set one set of labels on all
resources that use the labels snippet.
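A rough values sketch, assuming a hypothetical override that the labels
snippet merges into every resource; the Istio injection label is just one
example:
labels:
  sidecar.istio.io/inject: "true"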
Change-Id: Iefc8465300f434b89c07b18ba75260fee0a05ef5
The return code from the send_to_remote_server function is being
eaten by an if statement, so we never hit the elif branch of the code.
Change-Id: Id3e256c991421ad6624713f65212abb4881240c1
In the process of secondary development, we found that we often need
to access secrets from a pod. However, helm-toolkit does not support
adding the secrets resource to a role. This commit fixes that.
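For illustration, the rendered Role then includes a rule along these lines
(standard Kubernetes RBAC; the verb list is only an example):
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get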
Change-Id: If384d6ccb7672a8da5a5e1403733fa655dfe40dd
There is an additional error status, 'Service Unavailable', which can
indicate the service is temporarily unavailable. Adding that error
status to the retry list in case the issue is resolved during the
backup timeframe.
Change-Id: I9e2fc1a9b33dea3858de06b10d512da98a635015
Remove the TLS_OPTION env from the helm-toolkit s3-bucket job. There
can be different options for the TLS connection, depending on whether
the rgw server is local or remote. This change allows the
create-s3-bucket script to customize its connection argument,
which can be pulled from values.yaml.
Change-Id: I2a34c1698e02cd71905bc6ef66f4aefcd5e25e44
The change enables:
(1) TLS for the Elasticsearch transport networking layer. The
transport networking layer is used for internal communication
between nodes in a cluster.
(2) TLS path between Elasticsearch and Ceph-rgw host.
Change-Id: Ifb6cb5db19bc5db2c8cb914f6a5887cf3d0f9434
These hooks were added as part of a previous change, however tiller
does not handle these correctly, and jobs get deleted without being
recreated. This change removes the hook from default htk annotations.
Change-Id: I2aa7bb241ebbb7b54c5dc9cf21cd5ba290b7e5fd
This change primarily changes the type of the api_objects yaml structure
to a map, which allows additional objects to be added via values
overrides (arrays/lists cannot be merged this way).
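For example, with a map a values override can add a new keyed entry without
restating the existing ones, which is not possible with a list; the structure
and names below are purely illustrative:
conf:
  api_objects:
    extra-repository:
      endpoint: _snapshot/extra-repository
      body:
        type: s3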
Also, in the previous change, some scripts in HTK were modified, while
others were copied over to the Elasticsearch chart. To simplify the chart's
structure, this change also moves the create_s3_bucket script to Elasticsearch
and reverts the changes in HTK.
Those HTK scripts are no longer referenced by osh charts, and could be
candidates for removal if that chart needs to be pruned.
Change-Id: I7d8d7ef28223948437450dcb64bd03f2975ad54d
This change updates how the Elasticsearch chart handles
S3 configuration and snapshot repository registration.
This allows for
- Multiple snapshot destinations to be configured
- Repositories to use a specific placement target
- Management of multiple account credentials
Change-Id: I12de918adc5964a4ded46f6f6cd3fa94c7235112
This is an update to address a behavior change introduced with
0ae8f4d21a.
If a Job's labels are empty/unspecified, they are taken from the template. If
(any) labels are specified on the Job, we do not get this behavior.
Specifically if we *apply*:
apiVersion: batch/v1
kind: Job
metadata:
  # no "labels:" here
  name: placement-db-init
  namespace: openstack
spec:
  template:
    metadata:
      labels:
        application: placement
        component: db-init
        release_group: placement
    spec:
      containers:
        # do stuffs
then *query* we see:
apiVersion: batch/v1
kind: Job
metadata:
  # k8s did this for us!
  labels:
    application: placement
    component: db-init
    job-name: placement-db-init
    release_group: placement
  name: placement-db-init
  namespace: openstack
spec:
  template:
    metadata:
      labels:
        application: placement
        component: db-init
        release_group: placement
    spec:
      containers:
        # do stuffs
The aforementioned change causes objects we apply and query to look
like:
apiVersion: batch/v1
kind: Job
metadata:
  # k8s did this for us!
  labels:
    application: placement
    # nothing else!
  name: placement-db-init
  namespace: openstack
spec:
  template:
    metadata:
      labels:
        application: placement
        component: db-init
        release_group: placement
    spec:
      containers:
        # do stuffs
Current users rely on this behavior and deployment systems use job
labels for synchronization, those labels being only specified in the
template and propagating to the job.
This change preserves functionality added recently and restores the
previous behavior.
The explicit "application" label is no longer needed as the
helm-toolkit.snippets.kubernetes_metadata_labels macro provides it.
Change-Id: I1582d008217b8848103579b826fae065c538aaf0
v1.2.0 of cert-manager now supports overriding the default value
of the ingress certificate expiry via annotations. This PS adds the
required annotation.
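For example, assuming the standard cert-manager ingress-shim annotation for
certificate duration:
metadata:
  annotations:
    cert-manager.io/duration: 2160h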
Change-Id: Ic81e47f24d4e488eb4fc09688c36a6cea324e9e2
- Add application label using service name
- Add before-hook-creation delete policy as a default
(this is the default in Helm v3; see the example below)
- Add custom metadata by passing params
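For example, the rendered job annotations would then include the standard
Helm hook delete policy:
metadata:
  annotations:
    helm.sh/hook-delete-policy: before-hook-creation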
Change-Id: Ie09f8491800031b9ff051a63feb3e018cb283342
On somewhat rare occasions the openstack service list call fails with
a connection aborted OSError 104 ECONNRESET. During an upgrade this failure
causes the script to think that the service it is checking for does not
exist and therefore it recreates the service. In turn this causes further
issues when other services try to use this duplicate service.
This is a temporary change in order to alleviate the issue while the root
cause is investigated.
[0] https://review.opendev.org/c/openstack/openstack-helm-infra/+/772416
Change-Id: Id0971a95eb54eca9486a9811f7ec6f603a007cbb
We've seen a few cases where the openstack service list is unable
to establish a connection with keystone thus causing the check to fail.
When this happens, an additional service is created unnecessarily.
When the additional service is created, it tends to cause issues since
there are no endpoints associated with the new service.
Allow this check to retry several times.
Change-Id: I5a1985c680e90de71549177ffc3faf848a831bfa
ClusterIssuer does not belong to a single namespace (unlike Issuer)
and can be referenced by Certificate resources from multiple different
namespaces. When internal TLS is added to multiple namespaces, the same
ClusterIssuer can be used instead of one Issuer per namespace.
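For illustration, a Certificate in any namespace can then reference the
shared issuer (names hypothetical):
spec:
  issuerRef:
    name: ca-clusterissuer
    kind: ClusterIssuer
    group: cert-manager.io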
Change-Id: I1576f486f30d693c4bc6b15e25c238d8004b4568
This patchset adds the capability to delete any archives that are stored
in the local file system or archives that are stored on the remote RGW
data store.
Change-Id: I68cade39e677f895e06ec8f2204f55ff913ce327
- Check issuer type to distinguish the annotation between
clusterissuer and issuer
- Add one more annotation "certmanager.k8s.io/xx" for old version
Change-Id: I320c1fe894c84ac38a2878af33e41706fb067422
Some services attempt to recreate the default domain
with both the values of "default" and "Default". Since this
domain already exists when keystone is deployed, this
creates redundant API calls that only result in conflicts.
This change enables nocasematch for string checking in order
to avoid making multiple unnecessary calls to keystone.
Change-Id: I698fd420dc41eae211a511269cb021d4ab7a5bfc
This PS fixes a problem with the main backup script in the helm-toolkit,
which tries to create a swift container using the SWIFT_URL. The problem
is that the SWIFT_URL is malformed because the call to get the openstack
catalog list has a different format in Train than it did in Stein, so a
solution that works for both Train and Stein is needed. This patch uses
openstack catalog show instead and extracts the public URL from
that output.
Change-Id: Ic326b0b4717951525e6b17ab015577f28e1d321a
The existing helm-toolkit function "helm-toolkit.manifests.ingress"
will create namespace-fqdn and cluster-fqdn Ingress objects when the
host_fqdn_override parameter is used, but only for a single hostname.
This change allows additional FQDNs to be associated with the same
Ingress, including the names defined in the list:
endpoints.$service.host_fqdn_override.$endpoint.tls.dnsNames
For example:
endpoints:
  grafana:
    host_fqdn_override:
      public:
        host: grafana.openstackhelm.example
        tls:
          dnsNames:
            - grafana-alt.openstackhelm.example
Will produce the following:
spec:
  tls:
    - secretName: grafana-tls-public
      hosts:
        - grafana.openstackhelm.example
        - grafana-alt.openstackhelm.example
  rules:
    - host: grafana.openstackhelm.example
      http:
        # ...
    - host: grafana-alt.openstackhelm.example
      http:
        # ...
Change-Id: I9b068f10d25923bf61220112da98d6fbfdf7ef8a
Added chart linting to the Zuul CI to enhance the stability of the charts.
Fixed some lint errors in the current charts.
Change-Id: I9df4024c7ccf8b3510e665fc07ba0f38871fcbdb