This should give us much more context for a particular error without having to ask a customer to send us their logs or having to dig for them ourselves in our staging or production environment.
Resolves: #7176.
It appears that I have misunderstood the documentation of `sentry-tracing`: when a message gets logged as an event (rather than an "exception"), `std::error::Error`s attached as tracing `Value`s do not get recorded. It doesn't really matter whether we record our events as exceptions or messages. We should ideally look at all of them, and particularly noisy ones can be muted forever in Sentry so we don't end up in a "boy who cried wolf" situation. Therefore, this PR changes our event filter to also submit WARNs as exceptions to make sure they get recorded accordingly.
Resolves: #7161.
Related: https://github.com/getsentry/sentry-rust/issues/702.
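For illustration, a minimal sketch of the adjusted filter, assuming `sentry-tracing`'s `layer().event_filter(...)` API (the exact mapping in our code may differ):

```rust
use sentry_tracing::EventFilter;
use tracing::{Level, Metadata};

// WARN now maps to `Exception` as well, so attached
// `std::error::Error` values get recorded by Sentry.
fn event_filter(metadata: &Metadata<'_>) -> EventFilter {
    match *metadata.level() {
        Level::ERROR | Level::WARN => EventFilter::Exception,
        Level::INFO => EventFilter::Breadcrumb,
        _ => EventFilter::Ignore,
    }
}

// Registered via `sentry_tracing::layer().event_filter(event_filter)`.
```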
With the introduction of the `sentry-tracing` integration in #7105, we
started sending tracing spans to Sentry. By default, all spans with
level INFO and above get sampled at the configured rate and sent to
Sentry.
This results in a lot of useless transactions in Sentry because we use
INFO level spans in multiple places in connlib to attach contextual
information like the current connection ID.
This PR introduces the concept of `telemetry` spans: similar to the `telemetry` log target in #7147, the `telemetry` target qualifies a span for being sent to Sentry. By convention, these spans are also defined at TRACE level, which ensures we won't ever see them as part of regular log output.
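As a hypothetical example of the convention (the span name and field are made up):

```rust
fn handle_connection(connection_id: u64) {
    // The `telemetry` target opts this span in for Sentry; requiring
    // TRACE keeps it out of regular log output.
    let _span =
        tracing::trace_span!(target: "telemetry", "connection", id = %connection_id).entered();

    // ... work happening inside the span ...
}
```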
With #7105, all ERROR events from `tracing` get logged as exceptions in Sentry and all WARN events get logged as "messages". We don't want to fill up the user's hard drive with logs, which means we have to be somewhat conservative about what gets logged at INFO and above (INFO being the default log level). There are certain events though where it would be useful to know how often they happen, because too many of them can indicate a problem.
To solve this problem, we introduce a dedicated `telemetry` log target that the `sentry-tracing` integration layer watches for. Events on the `telemetry` target that are logged at TRACE will be sampled at a rate of 1% and submitted as messages to Sentry.
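A hypothetical call-site (the message is made up):

```rust
// Invisible in regular logs, but sampled at 1% by the Sentry layer
// and submitted as a message.
tracing::trace!(target: "telemetry", "Re-established connection to portal");
```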
Using a trait means the call-site of the log message will always be the `log_unwrap` module, despite the `#[track_caller]` annotation: that annotation unfortunately only works with `std::panic::Location`, which `tracing` doesn't use. Macros, on the other hand, are expanded at the call-site, so the messages show up with the correct module name.
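A minimal sketch of the macro-based approach (the macro name and body are hypothetical, not the actual `log_unwrap` code):

```rust
/// Logs `Err` values at WARN level and discards them.
///
/// Because the macro expands at the call-site, `tracing` attributes the
/// event to the calling module rather than to a shared helper module.
macro_rules! log_on_err {
    ($result:expr, $msg:literal) => {
        if let Err(e) = $result {
            tracing::warn!("{}: {e}", $msg);
        }
    };
}
```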
My theory for this issue is that we receive a UDP DNS response from an upstream server that is bigger than our MTU, and forwarding it therefore fails. This PR doesn't fix that issue by itself but only mitigates the actual panic. To properly fix the underlying issue, we need to parse the DNS message, truncate it, and set the TC bit.
Related: #7121.
Using the `sentry-tracing` integration, we can automatically capture
events based on what we log via `tracing`. The mapping is defined as
follows:
- ERROR: Gets captured as a fatal error
- WARN: Gets captured as a message
- INFO: Gets captured as a breadcrumb
- `_`: Does not get captured at all
If telemetry isn't active / configured, this integration does nothing.
It is therefore safe to just always enable it.
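Wiring it up is a matter of adding the layer to our subscriber; a sketch (our actual subscriber setup has more layers):

```rust
use tracing_subscriber::layer::SubscriberExt as _;
use tracing_subscriber::util::SubscriberInitExt as _;

fn init_logging() {
    // If telemetry isn't configured, the Sentry layer captures nothing,
    // so it is safe to install unconditionally.
    tracing_subscriber::registry()
        .with(tracing_subscriber::fmt::layer())
        .with(sentry_tracing::layer())
        .init();
}
```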
Our logging library, `tracing`, supports structured logging. This is useful because it preserves more than just the string representation of a value and thus allows the active logging backend(s) to capture more information for a particular value.
In the case of errors, this is especially useful because it allows us to
capture the sources of a particular error.
Unfortunately, recording an error as a tracing value is a bit cumbersome
because `tracing::Value` is only implemented for `&dyn
std::error::Error`. Casting an error to this is quite verbose. To make
it easier, we introduce two utility functions in `firezone-logging`:
- `std_dyn_err`
- `anyhow_dyn_err`
Tracking errors as correct `tracing::Value`s will be especially helpful
once we enable Sentry's `tracing` integration:
https://docs.rs/sentry-tracing/latest/sentry_tracing/#tracking-errors
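A minimal sketch of what these helpers boil down to (the actual signatures in `firezone-logging` may differ):

```rust
pub fn std_dyn_err<E>(e: &E) -> &(dyn std::error::Error + 'static)
where
    E: std::error::Error + 'static,
{
    e
}

pub fn anyhow_dyn_err(e: &anyhow::Error) -> &(dyn std::error::Error + 'static) {
    e.as_ref()
}

// Usage: `tracing::warn!(error = std_dyn_err(&e), "Failed to send packet");`
```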
Currently, any failure in the `StubResolver` while processing a query results only in a log message but no response. For UDP DNS queries, that isn't too bad because the client will simply retry. With the upcoming support for TCP DNS queries in #6944 however, we should really always reply with a message.
This PR refactors the handling of DNS messages to always generate a
reply. In the case of an error, we reply with SERVFAIL.
This opens up a few more refactorings where we can now collapse the handling of some branches into one. As part of that, I noticed the
recurring need for "unwrapping" a `Result<(), E>` and logging the error.
To make that easier, I introduced an extension trait that does exactly
that.
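A minimal sketch of such an extension trait (names are hypothetical):

```rust
trait UnwrapOrLog {
    /// Logs the error at WARN level instead of propagating it.
    fn unwrap_or_log(self, msg: &str);
}

impl<E> UnwrapOrLog for Result<(), E>
where
    E: std::fmt::Display,
{
    fn unwrap_or_log(self, msg: &str) {
        if let Err(e) = self {
            tracing::warn!("{msg}: {e}");
        }
    }
}

// Usage: `send_response(packet).unwrap_or_log("Failed to send DNS response");`
```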
This PR introduces a custom logging format for all Rust components. It is more or less a copy of `tracing_subscriber::fmt::format::Compact`, with the main difference that span names don't get logged.
Spans are super useful because they allow us to record contextual
values, like the current connection ID, for a certain scope. What is IMO
less useful about them is that in the default formatter configuration,
active spans cause a right-drift of the actual log message.
The actual log message is still what most accurately describes what `connlib` is currently doing. Spans only add contextual information that the reader may use to further understand what is happening. This optional nature of the utility of spans means, IMO, that they should come _after_ the actual log message.
Resolves: #7014.
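Illustratively (both lines are mock output that approximates the formats rather than copying them):

```
default Compact format - spans prefix the message and cause right-drift:
2024-06-01T12:00:00Z  INFO connection{id=42}:gateway{ip=10.0.0.1}: Sent ICMP packet

custom format - the message comes first, contextual values trail it:
2024-06-01T12:00:00Z  INFO Sent ICMP packet id=42 ip=10.0.0.1
```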
The `expect` attribute is similar to `allow` in that it will silence a particular lint. In contrast to `allow` however, `expect` will fail as soon as the lint is no longer emitted. This ensures we don't end up with stale `allow` attributes in our codebase. Additionally, it provides a way of adding a `reason` to document why the lint is being suppressed.
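For example (the function and `reason` here are made up):

```rust
// If `parse_legacy_header` gains a caller, this expectation is no longer
// fulfilled and the compiler emits `unfulfilled_lint_expectations`.
#[expect(dead_code, reason = "kept around until the old protocol is removed")]
fn parse_legacy_header() {}
```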
On Windows, the network notifier always notifies once at startup. We make the DNS notifier and the Linux network notifier match this behavior, and we assert it in the unit test.
Part of a yak shave towards removing Tauri.
Setting up a logger is something that pretty much every entrypoint needs to do, be it a test, a shared library embedded in another app, or a standalone application. Thus, it makes sense to introduce a dedicated crate that bundles together everything about how we want to do logging.
This allows us to introduce convenience functions like
`firezone_logging::test` which allow you to construct a logger for a
test as a one-liner.
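For example, a test could set up logging like this (the exact signature of `firezone_logging::test` is an assumption here):

```rust
#[test]
fn resolves_dns_queries() {
    // Hypothetical one-liner; the real function may take different arguments.
    let _guard = firezone_logging::test("debug");

    // ... actual test body ...
}
```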
Crucially though, introducing `firezone-logging` gives us a place to store a default log directive that silences very noisy crates. When looking into a problem, it is common to start by simply setting the log filter to `debug`. Without further action, this floods the output with logs from crates like `netlink_proto` on Linux, and it is very unlikely that those are the logs you want to see. Without a preset filter, the only alternative is to explicitly turn off the logs for `netlink_proto` by typing something like `RUST_LOG=netlink_proto=off,debug`. Especially when debugging issues with customers, this is annoying.
Log filters can be overridden, i.e. a second filter that matches the exact same scope overrides a previous one. Thus, with this design it is still possible to activate certain logs at runtime, even if they are silenced by default.
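A sketch of how the default and a user-supplied filter compose, assuming `tracing_subscriber::EnvFilter` (where a later directive for the same scope takes precedence):

```rust
use tracing_subscriber::EnvFilter;

// Crate-provided default: silence noisy crates.
const DEFAULT_DIRECTIVES: &str = "netlink_proto=off";

fn env_filter() -> EnvFilter {
    // User-supplied directives come last; a directive for the same scope
    // (e.g. `RUST_LOG=netlink_proto=debug`) overrides the default.
    let user = std::env::var("RUST_LOG").unwrap_or_else(|_| "info".to_owned());

    EnvFilter::try_new(format!("{DEFAULT_DIRECTIVES},{user}"))
        .expect("log directives should be valid")
}
```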
I'd expect `firezone-logging` to attract more functionality in the
future. For example, we want to support re-loading of log-filters on
other platforms. Additionally, where logs get stored could also be
defined in this crate.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Reactor Scram <ReactorScram@users.noreply.github.com>