
Fluentd-logging
===============

OpenStack-Helm defines a centralized logging mechanism to provide insight into
the state of the OpenStack services and infrastructure components, as well as
the underlying Kubernetes platform. The requirements for a logging platform
vary widely in terms of where log data comes from and where it must be
delivered. To support these different logging scenarios, OpenStack-Helm should
provide a flexible mechanism that can be adapted to an operator's needs. This
chart pairs a fast, lightweight log forwarder with a full-featured log
aggregator, the two complementing each other to provide a flexible and
reliable solution. Specifically, Fluent-bit acts as the log forwarder and
Fluentd acts as the main log aggregator and processor.


Mechanism
---------

Fluent-bit and Fluentd meet OpenStack-Helm's logging requirements for
gathering, aggregating, and delivering logged events. Fluent-bit runs as a
daemonset on each node and mounts the /var/lib/docker/containers directory.
The Docker container runtime engine directs events posted to stdout and stderr
to this directory on the host. Fluent-bit then forwards the contents of that
directory to Fluentd.
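
For illustration, a minimal Fluent-bit configuration of the following shape
tails the container log files and forwards them to Fluentd. The paths, tag,
and service endpoint below are illustrative assumptions and do not reflect the
exact configuration rendered by this chart::

    [SERVICE]
        # Load the stock parser definitions shipped with Fluent-bit
        Parsers_File  parsers.conf

    [INPUT]
        # Tail the container log files written by the Docker engine
        Name    tail
        Path    /var/lib/docker/containers/*/*.log
        Parser  docker
        Tag     kube.*

    [OUTPUT]
        # Forward all collected records to the Fluentd aggregator service
        Name    forward
        Match   *
        Host    fluentd-logging
        Port    24224
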
Fluentd runs as a deployment on the designated nodes and exposes a service for
Fluent-bit to forward logs to. Fluentd then applies the Logstash format to the
logs. Fluentd can also write Kubernetes and OpenStack metadata to the logs.
Fluentd then forwards the results to Elasticsearch and, optionally, to Kafka.
Elasticsearch indexes the logs in a logstash-* index by default. Kafka stores
the logs in a 'logs' topic by default. Any external tool can then consume the
'logs' topic.
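
As a rough sketch, the Fluentd side of this pipeline can be expressed with
configuration along the following lines. The hostnames, port, and index prefix
are assumptions made for the example, not the values rendered by this chart::

    # Receive events forwarded by the Fluent-bit daemonset
    <source>
      @type forward
      port 24224
      bind 0.0.0.0
    </source>

    # Enrich container logs with Kubernetes metadata
    <filter kube.**>
      @type kubernetes_metadata
    </filter>

    # Ship everything to Elasticsearch using the Logstash index format
    <match **>
      @type elasticsearch
      host elasticsearch-logging
      port 9200
      logstash_format true
      logstash_prefix logstash
    </match>

The optional Kafka path described above would be covered by an additional
``<match>`` block using a Kafka output plugin writing to the 'logs' topic.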