This chart can deploy fluentd either as a Deployment
or a DaemonSet. Both options use the deployment-fluentd
template with various sections toggled off based on values.yaml.
I'd like to know: does anyone run this chart as a Deployment?
We can simplify the chart, and the Zuul gates, by changing the
chart to deploy a DaemonSet exclusively.
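For context, a minimal sketch of the toggle this proposal would
remove, assuming a single values key selects the workload kind
(the key name is illustrative, not the chart's actual schema):

    {{- /* deployment-fluentd template (illustrative): both flavors
         render from one manifest, gated by a values toggle. */ -}}
    {{- if eq .Values.deployment.type "Deployment" }}
    kind: Deployment
    {{- else }}
    kind: DaemonSet
    {{- end }}

Dropping the Deployment branch would let the template hardcode
kind: DaemonSet and remove the Deployment-only knobs from values.yaml.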
Change-Id: Ie88ceadbf5113fc60e5bb0ddef09e18fe07a192c
This adds the changes required in the Fluentd chart to allow
deploying Fluentd as either a Deployment or a DaemonSet. This
follows the pattern laid out by the ingress chart. This also
updates the single-node and multinode jobs to deploy fluentd as
both a daemonset and a deployment for validation.
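A hedged sketch of how the toggle might look in values.yaml,
mirroring the ingress chart's approach (the key names here are
assumptions, not the chart's actual schema):

    # values.yaml (illustrative)
    deployment:
      type: DaemonSet   # or Deployment, per the ingress chart pattern
    pod:
      replicas:
        fluentd: 3      # honored only when type is Deployment; a
                        # DaemonSet schedules one pod per matching node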
Change-Id: I84353a2daa2ce56ff59882a8d33203286ed27e06
Signed-off-by: Steve Wilkerson <sw5822@att.com>
This begins to split the fluent-logging chart into two separate
charts, one for fluentbit and one for fluentd. This is to better
isolate each chart and its dependencies, and to treat each
service as its own entity.
This also moves the job for creating Elasticsearch templates to
the Elasticsearch chart, as the Elasticsearch chart should have
ownership of creating the templates for its indices.
This also performs some general cleanup of values keys that are
not currently used.
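A hedged sketch of where the relocated job's configuration could
live (the manifest toggle and key layout are assumptions, not the
chart's actual schema; the template body follows Elasticsearch's
index template format):

    # elasticsearch/values.yaml (illustrative)
    manifests:
      job_es_templates: true      # assumed toggle for the moved job
    conf:
      templates:
        fluent:                   # PUT to _template/fluent by the job
          index_patterns: ["logstash-*"]
          settings:
            number_of_shards: 1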
Change-Id: I827277d5faa62b8b59c5960330703d23c297ca47
Signed-off-by: Steve Wilkerson <sw5822@att.com>
This organizes the single-node gates for osh-infra by function.
This organization aims to improve the single-node gates in the
following ways (sketched below):
1. Reduce the number of services deployed in single-node jobs
2. Only deploy Ceph for the logging job, as Elasticsearch requires
   RGW for snapshot repositories
3. Use NFS-backed storage for the monitoring job, as Ceph is not a
   requirement for any of the services there
4. Remove services duplicated across multiple single-node jobs
5. Remove storage from the openstack-support job, as the only
   service requiring storage is RabbitMQ, which is deployed with
   storage enabled in the openstack-helm checks/gates
This also removes the documentation for the single-node deployments,
as those deployments no longer make sense with this change. This
should be revisited as a follow-up once we have a clear path forward
for the larger gate refactoring work.
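A hedged sketch of the resulting split in .zuul.yaml (job names
and variables are illustrative, not the actual job definitions):

    # .zuul.yaml (illustrative): one single-node job per function,
    # each deploying only the storage backend it actually needs.
    - job:
        name: openstack-helm-infra-logging
        vars:
          storage_backend: ceph   # Elasticsearch needs RGW snapshots
    - job:
        name: openstack-helm-infra-monitoring
        vars:
          storage_backend: nfs    # no Ceph requirement here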
Change-Id: I46951f76904fa2ab245a202d55f76019b7503362
This patch set adds the ability to use the Ceph RadosGW S3 API for
snapshot repositories. It removes the ability to use a ReadWriteMany
PVC, as the RadosGW solution provides a more robust approach for
storing index snapshots.
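A hedged sketch of the repository definition this enables, expressed
as chart values (the key layout is an assumption; the settings follow
Elasticsearch's s3 repository type, with RadosGW serving the S3 API):

    # values.yaml (illustrative layout)
    conf:
      elasticsearch:
        snapshots:
          enabled: true
          repositories:
            backups:
              type: s3      # RadosGW speaks the S3 API
              settings:
                bucket: elasticsearch-snapshots
                endpoint: radosgw.ceph.svc.cluster.local
                protocol: http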
Change-Id: Ie56ac41ccdc61bfadcac52b400cceb35403e9fae
This attempts to trim down the dev-deploy gates until further
gate refactoring is complete. It disables the Elasticsearch and
Fluentd exporters and removes the OpenStack exporter from the
single-node deployment gates to ease the load on nodepool VMs.
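A hedged sketch of what the trim amounts to as per-chart overrides
(the key follows the common openstack-helm monitoring toggle, but
treat it as an assumption here):

    # elasticsearch/fluentd gate overrides (illustrative)
    monitoring:
      prometheus:
        enabled: false    # skip deploying the chart's exporter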
Change-Id: If211511e8f52fe39d293966abbd7e62b45b65970
This moves the mariadb chart to openstack-helm-infra as part of
the effort to move charts to the appropriate repositories.
Change-Id: Ife56e28de46c536108cebb4f4cdf6bad2a415289
Story: 2002204
Task: 21725
This adds the process exporter to both the developer and multinode
gates, along with the relevant deployment steps in the docs.
Change-Id: I85d5c398fbbb62145c9bb4e3a885e9a774725e5a
This adds a Ceph developer gate to openstack-helm-infra, which
depends on the ceph chart moving to openstack-helm-infra. This also
replaces the NFS-backed storage in the multinode gate with Ceph.
Change-Id: I11268463aa037a2e037217a2dbc89c7432c0d277
This deploys the ingress chart in the openstack-helm-infra dev
and multinode gates, which allows enabling ingresses in the
charts where they are defined.
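A hedged sketch of enabling an ingress in one of the charts once the
controller is running (the service key and toggles follow the common
openstack-helm layout, but are assumptions for any given chart):

    # per-chart values (illustrative)
    network:
      grafana:            # hypothetical service key
        ingress:
          public: true    # render an Ingress for the public endpoint
    manifests:
      ingress: true       # turn the ingress template on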
Change-Id: I055c7b02d9af68f6e3c5eda33d69dd0b8b1b70ca
This aims to introduce documentation to openstack-helm-infra,
similar to what exists in openstack-helm.
Change-Id: If6a850d555c9bd4ddae36763733a47e795961a50