Our Sentry client needs to resolve DNS before being able to send logs or
errors to the backend. Currently, this DNS resolution happens on-demand
as we don't take any control of the underlying HTTP client.
In addition, this will use HTTP/1.1 by default which isn't as efficient
as it could be, especially with concurrent requests.
Finally, if we ever decide to proxy all Sentry traffic through our
own domain, we have to take control of the underlying client anyway.
To resolve all of the above, we create a custom `TransportFactory` where
we reuse the existing `ReqwestHttpTransport` but provide an already
configured `reqwest::Client` that always uses HTTP/2 with a
pre-configured set of DNS records for the given ingest host.
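For illustration, here is a minimal sketch of such a factory, assuming the
`reqwest` transport feature of the Sentry SDK is enabled; the factory name,
ingest host and pinned address are placeholders:

```rust
use std::sync::Arc;

use sentry::transports::ReqwestHttpTransport;
use sentry::{ClientOptions, Transport, TransportFactory};

struct PinnedDnsTransportFactory;

impl TransportFactory for PinnedDnsTransportFactory {
    fn create_transport(&self, options: &ClientOptions) -> Arc<dyn Transport> {
        let client = reqwest::Client::builder()
            // Pin the ingest host to a known address; no DNS lookup needed.
            .resolve("o0.ingest.sentry.io", "203.0.113.10:443".parse().unwrap())
            // Always speak HTTP/2.
            .http2_prior_knowledge()
            .build()
            .expect("static client configuration is valid");

        Arc::new(ReqwestHttpTransport::with_client(options, client))
    }
}
```

The factory is then plugged into `ClientOptions` via its `transport` field.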
In addition to sending true/false for a feature-flag, PostHog also
allows us to attach a payload to it. We can use this to carry the
log-filter we'd like to stream logs for. With this, we can dynamically
change which logs we are getting forwarded to Sentry.
Unfortunately, this cannot be done on a per-user basis, meaning we will
always have the same log filter for all users where the feature-flag is
enabled.
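As a rough sketch, turning the payload into a filter could look like this
(the `log_filter` key and the payload shape are assumptions):

```rust
use tracing_subscriber::EnvFilter;

/// Extracts the log-filter from the feature-flag payload (sketch).
fn filter_from_payload(payload: &serde_json::Value) -> Option<EnvFilter> {
    let directives = payload.get("log_filter")?.as_str()?;

    EnvFilter::try_new(directives).ok()
}
```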
Rust 1.88 has been released and brings with it a quite exciting feature:
let-chains! They allow us to mix and match `if` and `let` expressions,
often reducing the "right-drift" of the relevant code and making it
easier to read.
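A toy example of the difference:

```rust
fn describe(value: Option<i32>) -> &'static str {
    // Previously, this needed a second `if` nested inside the `if let`.
    if let Some(n) = value
        && n > 0
    {
        "positive"
    } else {
        "zero, negative, or absent"
    }
}
```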
Rust 1.88 also comes with a new clippy lint that warns when creating a
mutable reference from an immutable pointer. Attempting to fix this
revealed that this is exactly what we are doing in the eBPF kernel.
Unfortunately, it doesn't seem to be possible to design this in a way
that is both accepted by the borrow-checker AND by the eBPF verifier.
Hence, we simply make the function `unsafe` and document for the
programmer what needs to be upheld.
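A minimal sketch of the resulting pattern (the function name is made up):

```rust
/// # Safety
///
/// `ptr` must be valid and well-aligned, and the pointee must not be
/// aliased by any other live reference for the duration of `'a`.
unsafe fn as_exclusive<'a, T>(ptr: *const T) -> &'a mut T {
    // This is exactly the cast the lint flags; the `unsafe fn`
    // signature shifts the burden of proof to the caller.
    unsafe { &mut *(ptr as *mut T) }
}
```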
Instead of conditionally enabling the `logs` feature in the Sentry
client, we always enable it and control via the `tracing` integration,
which events should get forwarded to Sentry. The feature-flag check
accesses only shared-memory and is therefore really fast.
We already re-evaluate feature flags on a timer which means this boolean
will flip over automatically and logs will be streamed to Sentry.
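Sketched out, the hot-path check is just an atomic load (names are
illustrative):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Flipped by the periodic feature-flag re-evaluation.
static STREAM_LOGS_TO_SENTRY: AtomicBool = AtomicBool::new(false);

/// Consulted by the `tracing` integration for every candidate event.
fn should_forward_to_sentry() -> bool {
    STREAM_LOGS_TO_SENTRY.load(Ordering::Relaxed)
}
```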
Originally, we introduced these to gather some data from logs / warnings
that we considered to be too spammy. We've since merged a
burst-protection that will at most submit the same event once every 5
minutes.
The data from the telemetry spans themselves has not been used at all.
Sentry has a new "Logs" feature where we can stream logs directly to
Sentry. Doing this for all Clients and Gateways would be way too much
data to collect though.
In order to aid debugging from customer installations, we add a
PostHog-managed feature flag that - if set to `true` - enables the
streaming of logs to Sentry. This feature flag is evaluated every time
the telemetry context is initialised:
- For all FFI usages of connlib, this happens every time a new session
is created.
- For the Windows/Linux Tunnel service, this also happens every time we
create a new session.
- For the Headless Client and Gateway, it happens on startup and every
  minute afterwards. The feature-flag context itself is only checked
  every 5 minutes though, so it might take up to 5 minutes before
this takes effect.
The default value - like all feature flags - is `false`. Therefore, if
there is any issue with the PostHog service, we will fall back to the
previous behaviour where logs are simply stored locally.
Resolves: #9600
As part of investigating #9400, I refactored the startup code of the GUI
client to play around with the initialization order of the runtime and
other parts. After finding the root cause (fixed in #9469), I figured it
would still be nice to land these improvements.
This refactors the app startup:
- Only initialize what is absolutely necessary outside of `try_main`:
The runtime and the telemetry instance.
- Settings are loaded earlier
- Telemetry is initialized earlier
- Add a bootstrap logger that logs to stdout whilst we are booting
- Re-order some code inside `gui::run` to bundle the initialization of
Tauri into a single command chain
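In rough shape (with `Telemetry` and `try_main` stubbed out for
illustration), the entry point now looks like this:

```rust
#[derive(Default)]
struct Telemetry;

impl Telemetry {
    fn capture_error(&self, _e: &anyhow::Error) {
        // Sketch: report the error before exiting.
    }
}

fn try_main(_runtime: &tokio::runtime::Runtime, _telemetry: &Telemetry) -> anyhow::Result<()> {
    // Everything else -- settings, bootstrap logger, Tauri -- happens here.
    Ok(())
}

fn main() -> anyhow::Result<()> {
    // Only what is absolutely necessary lives outside of `try_main`.
    let runtime = tokio::runtime::Runtime::new()?;
    let telemetry = Telemetry::default();

    let result = try_main(&runtime, &telemetry);

    if let Err(e) = &result {
        telemetry.capture_error(e);
    }

    result
}
```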
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Boolean logic is hard. Instead of filtering _out_ these spammy events on
a certain level, we accidentally configured the `SentryLayer` such that
only events on level ERROR would be sent to Sentry.
To fix this, we rewrite the filter such that it is phrased in an
affirmative manner, i.e. it only allows an event if it matches all of
the given criteria. This filter is then applied in a **negated** way,
i.e. we don't want to see events that match these criteria.
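In pseudo-Rust, the shape of the fix (names and the pattern are
illustrative):

```rust
use tracing::Level;

/// Affirmative: does the event match *all* of the spam criteria?
fn matches_spam_criteria(level: &Level, message: &str) -> bool {
    // In `tracing`, ERROR < WARN, so `<=` means "WARN or more severe".
    *level <= Level::WARN && message.contains("connection reset")
}

/// Applied negated: drop exactly the events that match.
fn should_send_to_sentry(level: &Level, message: &str) -> bool {
    !matches_spam_criteria(level, message)
}
```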
Resolves: #9400
Opentelemetry logs a DEBUG log every time it creates a new meter. That
happens fairly often now in our codebase which spams the logs on the
DEBUG level.
```
2025-05-09T02:31:51.147Z DEBUG opentelemetry_sdk: name="MeterProvider.ExistingMeterReturned" meter_name="connlib"
```
We fix that by setting `opentelemetry_sdk` to default to `INFO` if it is
not specified explicitly.
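A sketch of the idea, assuming the filter is given as an `EnvFilter`
directive string:

```rust
use tracing_subscriber::EnvFilter;

/// Defaults `opentelemetry_sdk` to INFO unless the user's filter
/// already names it explicitly (sketch).
fn apply_otel_default(user_filter: &str) -> EnvFilter {
    if user_filter.contains("opentelemetry_sdk") {
        EnvFilter::new(user_filter)
    } else {
        EnvFilter::new(format!("{user_filter},opentelemetry_sdk=info"))
    }
}
```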
By default, we send all WARN and ERROR logs to Sentry. This also
includes logs emitted through the `log` crate via a facade that `tracing`
installs. The wintun-rs bindings install such a logger in the native
WinTUN code. The WinTUN code has a bug where it doesn't handle adapter
removal idempotently. That is, if the adapter has already been removed
it logs an error instead of succeeding.
Typically, such logs can easily be suppressed in Sentry. In this case
however, Sentry fails to group these different logs together because
WinTUN's error message contains a path to a temporary directory which is
different every time it gets executed. As such, these logs spam our
Sentry instance with no way for us to disable them with existing tools.
The WireGuard mailing list for WinTUN also appears to be dead. We
attempted to contact the list in February and have not received a reply
yet. The last archive goes back to November 2024 [0]. We use WinTUN as a
prebuilt and signed DLL, so we also cannot meaningfully patch this on our
end without upstreaming it.
Thus, as a last resort, we add some infrastructure to our logging setup
that allows us to filter events client-side, based on a log level and
certain patterns in the message.
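A sketch of what such a filter rule could look like (the struct and the
example pattern are illustrative, not the actual WinTUN message):

```rust
use tracing::Level;

struct MessageFilter {
    level: Level,
    pattern: &'static str,
}

impl MessageFilter {
    /// Suppress the event if it matches both the level and the pattern.
    fn suppresses(&self, level: &Level, message: &str) -> bool {
        *level == self.level && message.contains(self.pattern)
    }
}

const FILTERS: &[MessageFilter] = &[MessageFilter {
    level: Level::ERROR,
    pattern: "Failed to delete adapter",
}];
```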
[0]: https://lists.zx2c4.com/pipermail/wireguard/
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Currently, when `connlib`'s log file gets deleted, we write logs into
nirvana until the corresponding process gets restarted. This is painful
for users because they need to restart the IPC service or Network
Extension. Instead, we can simply check if the log file exists prior to
writing to it and re-create it if it doesn't.
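Sketched, the idea looks like this:

```rust
use std::{fs::File, io::Write, path::PathBuf};

struct ResilientWriter {
    path: PathBuf,
    file: File,
}

impl ResilientWriter {
    fn write_line(&mut self, line: &str) -> std::io::Result<()> {
        // If the file was deleted, the old handle points at an unlinked
        // inode; re-create the file and swap our handle.
        if !self.path.exists() {
            self.file = File::create(&self.path)?;
        }

        writeln!(self.file, "{line}")
    }
}
```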
Resolves: #6850
Related: #7569
When debugging Firezone, it is useful to use `tail -f` on the current
logfile to see what `connlib` is doing. This is quite annoying to do
however because the log file rolls over with every restart of the
application. As a small QoL improvement, we always symlink the latest
log file to a link called `latest`. Therefore, all one needs to do is
re-run the same `tail -f ./latest` command to get the new logs.
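A sketch of the rollover hook (Unix flavour; Windows needs its own
symlink call):

```rust
use std::path::Path;

/// Points `latest` at the newly created logfile (sketch).
#[cfg(unix)]
fn update_latest_symlink(log_dir: &Path, new_log: &Path) -> std::io::Result<()> {
    let link = log_dir.join("latest");

    // `symlink` fails if the link already exists, so remove it first.
    let _ = std::fs::remove_file(&link);

    std::os::unix::fs::symlink(new_log, &link)
}
```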
Resolves: #8388
On Linux, logs sent to stdout from a systemd-service are automatically
captured by `journald`. This is where most admins expect logs to be and
frankly, doing any kind of debugging of Firezone is much easier if you
can do `journalctl -efu firezone-client-ipc.service` in a terminal and
check what the IPC service is doing.
On Windows, stdout from a service is (unfortunately) ignored.
To achieve this and also allow dynamically changing the log-filter, I
had to introduce a (long-overdue) abstraction over tracing's "reload"
layer that allows us to combine multiple reload-handles into one.
Unfortunately, neither the `reload::Layer` nor the `reload::Handle`
implement `Clone`, which makes this unnecessarily difficult.
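The abstraction boils down to something like this (a sketch; the real
type lives in our logging crate):

```rust
/// Fans a new filter out to several `reload::Handle`s, each captured in
/// a boxed closure because the handles themselves aren't `Clone`.
pub struct MergedReloadHandle {
    callbacks: Vec<Box<dyn Fn(&str) -> anyhow::Result<()> + Send + Sync>>,
}

impl MergedReloadHandle {
    pub fn reload(&self, directives: &str) -> anyhow::Result<()> {
        for callback in &self.callbacks {
            callback(directives)?;
        }

        Ok(())
    }
}
```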
Related: #8173
As it turns out, the effort in #7104 was not a good idea. By logging
errors as values, most of our Sentry reports have the same title and
thus cannot be differentiated from within the overview at all. To fix
this, we stringify errors with all their sources whenever they get
logged. This ensures log messages are unique and all Sentry issues will
have a useful title.
At present, the file logger for all Rust code starts each logfile with
`connlib.`. This is very confusing when exporting the logs from the GUI
client because even the logs from the client itself will start with
`connlib.`. To fix this, we make the base file name of the log file
configurable.
Nobody looks at these logs, and writing them uses unnecessary CPU +
storage on users' devices. Removing them also means one background
thread fewer because we need one less non-blocking writer.
This is another attempt at fixing #7386. Previous PR was #7379. The
difference is, this time it works! In the following screenshot,
`handle_input` is a currently active span.

I had to make some patches to Sentry, most notably:
- https://github.com/getsentry/sentry-rust/pull/708
- https://github.com/getsentry/sentry-rust/pull/712
The way we configure Sentry is quite tricky:
First and foremost, we need to understand that the `tracing` adapter for
Sentry has a `span_filter` configuration. When a span gets filtered out
there, the rest of `sentry-tracing` never sees the data in that span.
Thus, in order to capture variables from spans, we need to have a fairly
generous span filter. In this PR, we change this span filter to include
all spans except those on TRACE level.
Secondly, by default, the Sentry SDK doesn't send any spans to the
backend, i.e. the sampling rate is 0. Previously, we set the sampling
rate to 1.0 because the `span_filter` was already filtering out all
non-telemetry spans. A telemetry span is a concept that we invented. It
is a span that gets sampled at _creation_ time with a probability of 1%.
This is useful because creating a lot of spans is also expensive, so we
don't want to do it e.g. on a per-packet basis. With just these
configuration options, we now have a problem: We don't want to submit
all spans to Sentry but we need the `span_filter` to allow all spans
otherwise we can't capture the contextual fields from the span in
breadcrumbs. Luckily, the Sentry SDK has another configuration option:
`traces_sampler`.
The `traces_sampler` gets to compute a sampling rate for each individual
span. This allows us to discard all spans from being sent to Sentry
unless they are `telemetry` spans.
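In essence (the name-based check is illustrative; the real check
identifies telemetry spans):

```rust
use std::sync::Arc;

fn client_options() -> sentry::ClientOptions {
    sentry::ClientOptions {
        // Discard every span unless it is a `telemetry` span.
        traces_sampler: Some(Arc::new(|ctx: &sentry::TransactionContext| {
            if ctx.name() == "telemetry" {
                1.0
            } else {
                0.0
            }
        })),
        ..Default::default()
    }
}
```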
Resolves: #7386.
Rust 1.83 comes with a bunch of new lints for elidable lifetimes. Those
also trigger in the generated code of `derivative`. That crate is
actually unmaintained so we replace our usages of it with `derive_more`.
One of Rust's promises is "if it compiles, it works". However, there are
certain situations in which this isn't true. In particular, when using
dynamic typing patterns where trait objects are downcast to concrete
types, having two versions of the same dependency can silently break
things.
This happened in #7379 where I forgot to patch a certain Sentry
dependency. A similar problem exists with our `tracing-stackdriver`
dependency (see #7241).
Lastly, duplicate dependencies increase the compile-times of a project,
so we should aim for having as few duplicate versions of a particular
dependency as possible in our dependency graph.
This PR introduces `cargo deny`, a linter for Rust dependencies. In
addition to linting for duplicate dependencies, it also enforces that
all dependencies are compatible with an allow-list of licenses and it
warns when a dependency is referred to from multiple crates without
introducing a workspace dependency. Thanks to existing tooling
(https://github.com/mainmatter/cargo-autoinherit), transitioning all
dependencies to workspace dependencies was quite easy.
Resolves: #7241.
It was already a bit sus that we didn't receive as many errors in Sentry
from the IPC service as from the GUI client. Turns out that we forgot to
initialise our `sentry_layer` there. Additionally, we also didn't
initialise the `LogTracer`, meaning we didn't capture logs from the
`log` crate which is used by some of the dependencies, for example
`wintun`.
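The missing initialisation amounts to something like this (the layer
composition is simplified):

```rust
use tracing_subscriber::layer::SubscriberExt as _;

fn init_logging() -> anyhow::Result<()> {
    // Forward records from the `log` crate (e.g. from `wintun`) into `tracing`.
    tracing_log::LogTracer::init()?;

    let subscriber = tracing_subscriber::registry().with(sentry_tracing::layer());
    tracing::subscriber::set_global_default(subscriber)?;

    Ok(())
}
```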
Due to https://github.com/getsentry/sentry-rust/issues/702, errors which
are embedded as `tracing::Value` unfortunately get silently discarded
when reported as part of Sentry "Event"s and not "Exception"s.
The design idea of these telemetry events is that they aren't fatal
errors so we don't need to treat them with the highest priority. They
may also appear quite often, so to save performance and bandwidth, we
sample them at a rate of 1% at creation time.
In order to not lose the context of these errors, we instead format them
into the message. This makes them identical to the `debug!` logs that we
already have at every call-site of `telemetry_event!`, which prompted me
to make that `debug!` log implicit as part of `telemetry_event!`.
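Conceptually, the macro now looks something like this (the sampling
mechanism is illustrative):

```rust
macro_rules! telemetry_event {
    ($($arg:tt)*) => {{
        // The previously explicit `debug!` at every call-site is now implicit.
        tracing::debug!($($arg)*);

        // Sample the telemetry message at 1% at creation time.
        if rand::random::<f64>() < 0.01 {
            tracing::trace!(target: "telemetry", $($arg)*);
        }
    }};
}
```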
Resolves: #7343.
I think I finally understood and correctly traced where the use of ANSI
escape codes came from. It turns out the `with_ansi` switch on
`tracing_subscriber::fmt::Layer` is what you want to toggle. From there,
it trickles down to the `Writer` which we can then test for in our
`Format`.
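Roughly, the two ends of that chain (a sketch):

```rust
use tracing_subscriber::fmt;

/// `with_ansi` is the switch; inside a custom `FormatEvent` impl, the
/// handed-out writer then reports it via `writer.has_ansi_escapes()`.
fn stdout_layer<S>() -> fmt::Layer<S> {
    fmt::layer().with_ansi(false)
}
```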
Resolves: #7284.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Using the clippy lint `unwrap_used`, we can automatically lint against
all uses of `.unwrap()` on `Result` and `Option`. This turns up quite a
few results actually. In most cases, they are invariants that can't
actually be hit. For these, we change them to `Option`. In other cases,
they can actually be hit. For example, if the user supplies an invalid
log-filter.
Activating this lint ensures the compiler will yell at us every time we
use `.unwrap` to double-check whether we do indeed want to panic here.
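For example, with `#![deny(clippy::unwrap_used)]` at the crate root:

```rust
/// Parses a user-supplied log-filter fallibly instead of panicking (sketch).
fn parse_log_filter(input: &str) -> anyhow::Result<tracing_subscriber::EnvFilter> {
    // Previously: `EnvFilter::try_new(input).unwrap()` -- now a lint error.
    Ok(tracing_subscriber::EnvFilter::try_new(input)?)
}
```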
Resolves: #7292.
Our logging library `tracing` supports structured logging. Structured
logging means we can include values within a `tracing::Event` without
having to immediately format it as a string. Processing these values -
such as errors - as their original type allows the various `tracing`
layers to capture and represent them as they see fit.
One of these layers is responsible for sending ERROR and WARN events to
Sentry, as part of which `std::error::Error` values get automatically
captured as so-called "sentry exceptions".
Unfortunately, there is a caveat: If an `std::error::Error` value is
included in an event that does not get mapped to an exception, the
`error` field is completely lost. See
https://github.com/getsentry/sentry-rust/issues/702 for details.
To work around this, we introduce an `err_with_sources` adapter that
formats an error and all its sources together into a string. For all
`tracing::debug!` statements, we then use this to report these errors.
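The adapter boils down to walking the `source()` chain (a sketch):

```rust
use std::error::Error;

fn err_with_sources(e: &dyn Error) -> String {
    let mut message = e.to_string();

    let mut source = e.source();
    while let Some(s) = source {
        message.push_str(": ");
        message.push_str(&s.to_string());
        source = s.source();
    }

    message
}
```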
It is really unfortunate that we have to do this and cannot use the same
mechanism, regardless of the log level. However, until this is fixed
upstream, this will do and gives us better information in the log
submitted to Sentry.
"Just let it crash" is terrible advice for software that is shipped to
end users. Where possible, we should use proper error handling and only
fail the current function / task that is active, e.g. drop a particular
packet instead of failing all of connlib. We more or less already do
that.
Activating the clippy lint `unwrap_in_result` surfaced a few more places
where we panic despite being in a function that is fallible already.
These cases can easily be converted to not panic and return an error
instead.
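A representative conversion (example values made up):

```rust
use anyhow::Context as _;

fn parse_port(input: &str) -> anyhow::Result<u16> {
    // Previously: `input.parse().unwrap()` -- flagged by `unwrap_in_result`.
    input.parse().context("invalid port")
}
```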
DNS resolution is a critical part of `connlib`. If it is slow for
whatever reason, users will notice this. To make sure we notice as well,
we add `telemetry` spans to the client's and gateway's DNS resolution.
For the client, this applies to all DNS queries that we forward to the
upstream servers. For the gateway, this applies to all DNS resources.
In addition to those IO operations, we also instrument the
`match_resource_linear` function. This function operates in `O(n)` over
all defined DNS resources. It _should_ be fast enough to not create an
impact but it can't hurt to measure this regardless.
Lastly, we also instrument `refresh_translations` on the gateway.
Refreshing the DNS resolution of a DNS resource should really only
happen when the previous IP addresses become stale yet the user is
still trying to send traffic to them. We don't actually have any data on
how often that happens. By instrumenting it, we can gather some of this
data.
To make sure that none of these telemetry events and spans hurt the
end-user performance, we introduce macros to `firezone-logging` that
sample the creation of these events and spans at a rate of 1%. I ran a
flamegraph and none of these even showed up. The most critical one here
is probably the `match_resource_linear` span because it happens on every
DNS query.
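The span-side macro amounts to roughly this (the sampling mechanism is
illustrative):

```rust
macro_rules! telemetry_span {
    ($name:expr) => {
        // 99% of call-sites get a disabled span, so hot paths like
        // per-query DNS matching barely pay for the instrumentation.
        if rand::random::<f64>() < 0.01 {
            tracing::trace_span!(target: "telemetry", $name)
        } else {
            tracing::Span::none()
        }
    };
}
```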
Resolves: #7198.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
This should give us much more context for a particular error without
having to bother a customer with sending us the logs / digging for them
ourselves in our staging or production environment.
Resolves: #7176.
It appears that I have misunderstood the documentation of
`sentry-tracing`. When a message gets logged as an event (rather than an
"exception") `std::error::Error`s attached as tracing `Value`s do not
get recorded. It doesn't really matter whether we record our events as
exceptions or messages. We should ideally look at all of them and
particularly noisy ones can be muted forever in Sentry so we don't end
up in a "boy who cried wolf" situation. Therefore, this PR changes our
event filter to also submit WARNs as exceptions to make sure they get
logged accordingly.
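The resulting filter, roughly (attached via
`sentry_tracing::layer().event_filter(...)`):

```rust
use sentry_tracing::EventFilter;
use tracing::Level;

fn event_filter(meta: &tracing::Metadata<'_>) -> EventFilter {
    match *meta.level() {
        // WARN is now an exception too, so it shows up alongside errors.
        Level::ERROR | Level::WARN => EventFilter::Exception,
        Level::INFO => EventFilter::Breadcrumb,
        _ => EventFilter::Ignore,
    }
}
```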
Resolves: #7161.
Related: https://github.com/getsentry/sentry-rust/issues/702.
With the introduction of the `tracing-sentry` integration in #7105, we
started sending tracing spans to Sentry. By default, all spans with
level INFO and above get sampled at the configured rate and sent to
Sentry.
This results in a lot of useless transactions in Sentry because we use
INFO level spans in multiple places in connlib to attach contextual
information like the current connection ID.
This PR introduces the concept of `telemetry` spans which - similar to
the `telemetry` log target in #7147 - qualifies a span for being sent to
Sentry. By convention, these are also defined as requiring the TRACE
level. This ensures we won't ever see them as part of regular log
output.
With #7105, all ERROR events from `tracing` get logged as exceptions in
Sentry and all WARN events get logged as "messages". We don't want to
fill up the user's hard drive with logs, which means we have to be
somewhat conservative about what gets logged on INFO and above (with
INFO being the default log level). There are certain events though where
it would be useful to know how often they happen because too many of them
can indicate a problem.
To solve this problem, we introduce a dedicated `telemetry` log target
that the tracing-sentry integration layer watches for. Events for the
`telemetry` log target that get logged on TRACE will be sampled at a
rate of 1% and submitted as messages to Sentry.
Using a trait means the call-site of the log message will always be the
`log_unwrap` module, despite the `#[track_caller]` annotation. That one
only works for `std::panic::Location` unfortunately, which `tracing`
doesn't use.
Macros will be evaluated earlier and thus the messages will show up with
the correct module name.
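A sketch of why the macro version gets it right (the macro name is made
up):

```rust
macro_rules! log_and_discard {
    ($result:expr) => {
        if let Err(e) = $result {
            // Expanded in the caller's module, so the event's metadata
            // (module path, file, line) points at the caller.
            tracing::error!("Operation failed: {e}");
        }
    };
}
```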
My theory for this issue is that we receive a UDP DNS response from an
upstream server that is bigger than our MTU and thus forwarding it
fails.
This PR doesn't fix that issue by itself but only mitigates the actual
panic. To properly fix the underlying issue, we need to parse the DNS
message, truncate it, and set the TC bit.
Related: #7121.
Using the `sentry-tracing` integration, we can automatically capture
events based on what we log via `tracing`. The mapping is defined as
follows:
- ERROR: Gets captured as a fatal error
- WARN: Gets captured as a message
- INFO: Gets captured as a breadcrumb
- `_`: Does not get captured at all
If telemetry isn't active / configured, this integration does nothing.
It is therefore safe to just always enable it.
Our logging library, `tracing`, supports structured logging. This is
useful because it preserves more than just the string representation
of a value and thus allows the active logging backend(s) to capture more
information for a particular value.
In the case of errors, this is especially useful because it allows us to
capture the sources of a particular error.
Unfortunately, recording an error as a tracing value is a bit cumbersome
because `tracing::Value` is only implemented for `&dyn
std::error::Error`. Casting an error to this is quite verbose. To make
it easier, we introduce two utility functions in `firezone-logging`:
- `std_dyn_err`
- `anyhow_dyn_err`
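In essence, both helpers are just that cast, written once (a sketch):

```rust
use std::error::Error;

pub fn std_dyn_err<'a>(e: &'a (impl Error + 'static)) -> &'a (dyn Error + 'static) {
    e
}

pub fn anyhow_dyn_err<'a>(e: &'a anyhow::Error) -> &'a (dyn Error + 'static) {
    e.as_ref()
}

// Usage: tracing::warn!(error = std_dyn_err(&e), "Failed to connect");
```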
Tracking errors as correct `tracing::Value`s will be especially helpful
once we enable Sentry's `tracing` integration:
https://docs.rs/sentry-tracing/latest/sentry_tracing/#tracking-errors