We are currently in the process of transitioning the Firezone Clients
away from always hashing the ID before sending it to the portal. This
will make lookups and correlation of data between our systems much
easier.
The way we are performing this migration is that new installations of
Firezone directly generate a 64-character hex string as the Firezone ID.
If the ID looks like a UUID (the old format), we still hash it before
sending it to the portal; otherwise, we send it as-is.
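A minimal sketch of that check (the function name is illustrative; this assumes the `uuid`, `sha2` and `hex` crates):

```rust
use sha2::{Digest, Sha256};
use uuid::Uuid;

/// Returns the ID in the form the portal expects: legacy UUIDs are hashed,
/// new-style 64-character hex IDs are passed through unchanged.
fn portal_id(firezone_id: &str) -> String {
    if Uuid::parse_str(firezone_id).is_ok() {
        // Old format: hash it, just like existing installations already do.
        hex::encode(Sha256::digest(firezone_id.as_bytes()))
    } else {
        // New format: already a 64-character hex string, send as-is.
        firezone_id.to_owned()
    }
}
```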
Presently, the telemetry integrations with Sentry and PostHog do the
opposite: they always set the Firezone ID as-is and include an
`external_id` with the hashed form if they detect that the ID is a UUID
(or, in the case of PostHog, create an alias). It is much better to flip
this around and always set the hex string as the user ID. That way, we
can simply always filter by the `user.id` attribute in Sentry and always
refer to the ID that we see in the portal.
Feature flags may be accessed _very_ often, such as on every log
statement with #9780. To make sure this is as performant as possible, we
move from an `RwLock` to atomic booleans with relaxed ordering.
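As a rough sketch of the pattern (flag and function names are illustrative, not the actual module layout):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// One atomic per feature flag; written by the re-evaluation timer,
/// read from hot paths such as log statements.
static STREAM_LOGS: AtomicBool = AtomicBool::new(false);

/// Called whenever we re-evaluate the flags against PostHog.
fn update_stream_logs(enabled: bool) {
    STREAM_LOGS.store(enabled, Ordering::Relaxed);
}

/// Cheap enough to call on every log statement: a single relaxed load.
fn stream_logs_enabled() -> bool {
    STREAM_LOGS.load(Ordering::Relaxed)
}
```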
Instead of conditionally enabling the `logs` feature in the Sentry
client, we always enable it and control via the `tracing` integration
which events get forwarded to Sentry. The feature-flag check only
accesses shared memory and is therefore really fast.
We already re-evaluate feature flags on a timer, which means this
boolean will flip automatically and logs will be streamed to Sentry.
Originally, we introduced these to gather data from logs / warnings
that we considered too spammy. We've since merged a burst-protection
that submits the same event at most once every 5 minutes.
The data from the telemetry spans themselves has not been used at all.
Sentry has a new "Logs" feature where we can stream logs directly to
Sentry. Doing this for all Clients and Gateways would be way too much
data to collect though.
In order to aid debugging of customer installations, we add a
PostHog-managed feature flag that - if set to `true` - enables the
streaming of logs to Sentry. This feature flag is evaluated every time
the telemetry context is initialised:
- For all FFI usages of connlib, this happens every time a new session
is created.
- For the Windows/Linux Tunnel service, this also happens every time we
create a new session.
- For the Headless Client and Gateway, it happens on startup and every
minute afterwards. The feature-flag context itself is only checked every
5 minutes though, so it might take up to 5 minutes before this takes
effect.
The default value - like all feature flags - is `false`. Therefore, if
there is any issue with the PostHog service, we will fall back to the
previous behaviour where logs are simply stored locally.
Resolves: #9600
In order to more easily target customers with certain feature flags, we
include the `account_slug` in the `$identify` event to PostHog. This
will allow us to create Cohorts in PostHog and enable / disable feature
flags for all installations of Firezone for a particular customer.
A bit of legacy that we have inherited around our Firezone ID is that
the ID stored on the user's device is SHA-hashed before being passed to
the portal as the "external ID". This makes it difficult to correlate
IDs in Sentry and PostHog with the data we have in the portal. For
Sentry and PostHog, we submit the raw UUID stored on the user's device.
As a first step in overcoming this, we embed an "external ID" in those
services as well IF the provided Firezone ID is a valid UUID. This will
allow us to immediately correlate those events.
As a second step, we automatically generate all new Firezone IDs for the
Windows and Linux Client as `hex(sha256(uuid))`. These won't parse as
valid UUIDs and therefore will be submitted as is to the portal.
As a third step, we update all documentation around generating Firezone
IDs to use `uuidgen | sha256` instead of just `uuidgen`. This is
effectively the equivalent of (2) but for the Headless Client and
Gateway where the Firezone ID can be configured via environment
variables.
Resolves: #9382
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
It is in the nature of our application that errors may occur in rapid
succession if anything in the packet processing path fails. Most of the
time, these repeated errors don't add any additional information so
reporting one of them to Sentry is more than enough.
To achieve this, we add a `before_send` callback that utilizes a
concurrent cache with an upper bound of 10,000 items and a TTL of 5
minutes. In other words, if we have submitted an event to Sentry with
the exact same message in the last 5 minutes, we will not send it again.
Internally, `moka` uses a concurrent hash map and therefore the key is
hashed and not actually stored. Hash codes are u64, meaning the memory
footprint of this cache is only ~80 kB (10,000 × 8 bytes, not accounting
for the constant overhead of the cache internals).
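A sketch of what such a hook can look like with `moka`'s sync cache; keying on the event message is the idea described above, while the exact wiring into the Sentry client options is abbreviated:

```rust
use std::time::Duration;

use moka::sync::Cache;
use sentry::protocol::Event;

fn dedup_before_send() -> impl Fn(Event<'static>) -> Option<Event<'static>> {
    // At most 10,000 entries, each expiring 5 minutes after insertion.
    let seen: Cache<String, ()> = Cache::builder()
        .max_capacity(10_000)
        .time_to_live(Duration::from_secs(5 * 60))
        .build();

    move |event: Event<'static>| {
        // Fall back to an empty key when there is no message (sketch simplification).
        let key = event.message.clone().unwrap_or_default();

        if seen.contains_key(&key) {
            // Same message within the last 5 minutes: drop the event.
            return None;
        }

        seen.insert(key, ());
        Some(event)
    }
}
```

The returned closure would then be wrapped in an `Arc` and assigned to the client options' `before_send` field.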
This PR adds basic analytics to `connlib` by sending two events to
PostHog:
1. `new_session` which is sent every time we establish a new session
with a Firezone backend. This could be our production or staging
instance but also a session to an on-premise installation of Firezone.
We include the API URL in the event payload to further distinguish
these.
2. `$identify` to link the client + version as well as the operating
system to the user. The user is identified by the Firezone ID.
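For illustration, the two events might be captured with payloads along these lines (shaped after PostHog's capture API; treat the concrete property names such as `client`, `version` and `os` as placeholders rather than the exact schema we send):

```rust
use serde_json::json;

// Hypothetical payload shapes, not the exact schema connlib sends.
fn example_events(firezone_id: &str, api_url: &str) -> (serde_json::Value, serde_json::Value) {
    let new_session = json!({
        "event": "new_session",
        "distinct_id": firezone_id,
        "properties": {
            // Distinguishes production, staging and on-premise installations.
            "api_url": api_url
        }
    });

    let identify = json!({
        "event": "$identify",
        "distinct_id": firezone_id,
        "properties": {
            // Person properties linked to the user.
            "$set": {
                "client": "gui-client",                  // placeholder
                "version": env!("CARGO_PKG_VERSION"),
                "os": std::env::consts::OS
            }
        }
    });

    (new_session, identify)
}
```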
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
I am not sure if this is currently breaking anything but it seems more
correct to flush all events first and then end the session.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Once we start collecting metrics across various Clients and Gateways,
these metrics need to be tagged with the correct `service.name`,
`service.version` as well as an instance ID to differentiate metrics
from different instances.
The original idea of this feature flag was that we could easily disable
the eBPF router in case it causes issues in production. However,
reliably turning it on / off doesn't seem to work: without an explicit
toggle of the feature flag, the eBPF program doesn't seem to be loaded
correctly. This uncertainty makes me not trust the metrics we are
seeing because we don't know whether all relays are really using the
eBPF router to relay TURN traffic.
In order to draw truthful conclusions as to how much traffic we are
relaying via eBPF, this patch removes the feature flag again. As of
#8656, we can disable the eBPF program by not setting the
`EBPF_OFFLOADING` env variable. This requires a re-deploy / restart of
relays to take effect, which isn't quite as fast as toggling a feature
flag but is much more reliable and easier to maintain.
This PR implements a feature-flag in PostHog that we can use to toggle
the use of the eBPF data plane at runtime. At every tick of the
event-loop, the relay compares the (cached) configuration of the eBPF
program with the (cached) value of the feature-flag. If they differ, the
configuration is updated and, upon the next packet, the eBPF program
will act accordingly.
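Conceptually, the per-tick check is the following (types and field names here are stand-ins for the real relay state):

```rust
/// Hypothetical stand-ins for the relay's cached state; the real types differ.
struct FeatureFlags {
    ebpf_data_plane: bool, // refreshed from PostHog every 5 minutes
}

struct EbpfProgram {
    data_plane_enabled: bool, // cached configuration of the loaded program
}

impl EbpfProgram {
    fn set_data_plane_enabled(&mut self, enabled: bool) {
        // In the real relay, this updates the eBPF program's configuration.
        self.data_plane_enabled = enabled;
    }
}

/// Called at every tick of the event loop.
fn reconcile(flags: &FeatureFlags, ebpf: &mut EbpfProgram) {
    if flags.ebpf_data_plane != ebpf.data_plane_enabled {
        // The next packet will take the newly configured path.
        ebpf.set_data_plane_enabled(flags.ebpf_data_plane);
    }
}
```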
Feature-flags are re-evaluated every 5 minutes, meaning there is some
delay until this gets applied.
The default value of all our feature-flags is `false`, meaning that if
there is some problem with evaluating them, we turn the eBPF data plane
off. Performing routing in userspace is slower, but it is a safer
default.
Resolves: #8548
As per PostHog's recommendation [0], we now use different projects to
manage the feature-flags. This allows us to turn feature flags on / off
in staging or production without affecting the other environment.
[0]: https://posthog.com/tutorials/multiple-environments
In order to be able to dynamically configure long-running applications
such as the Gateway via feature-flags, we need to regularly re-evaluate
them by sending another POST request to the `/decide` endpoint.
To do this without impacting anything else, we create a separate runtime
that is lazily initialised on first access and use that to run the async
code for connecting to the PostHog service. In addition to that, we also
spawn a task that re-evaluates the feature flags for the currently set
user in the Sentry context every 5 minutes.
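A condensed sketch of that setup; the actual `/decide` call is left as a placeholder:

```rust
use std::sync::OnceLock;
use std::time::Duration;

use tokio::runtime::Runtime;

/// Dedicated runtime for PostHog IO, created lazily on first access.
fn posthog_runtime() -> &'static Runtime {
    static RUNTIME: OnceLock<Runtime> = OnceLock::new();

    RUNTIME.get_or_init(|| {
        tokio::runtime::Builder::new_multi_thread()
            .worker_threads(1)
            .enable_all()
            .build()
            .expect("failed to build PostHog runtime")
    })
}

/// Spawn the periodic re-evaluation of feature flags for the current user.
fn spawn_reevaluation_task() {
    posthog_runtime().spawn(async {
        let mut interval = tokio::time::interval(Duration::from_secs(5 * 60));

        loop {
            interval.tick().await;
            // Placeholder: POST to PostHog's `/decide` endpoint for the user
            // currently set in the Sentry scope and update the cached flags.
        }
    });
}
```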
Resolves: #8454
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
I suspect that one issue with local discovery is that we respond to
LLMNR queries with NXDOMAIN if the domain isn't a resource. This is
probably wrong: LLMNR works over multicast, so if a particular interface
can't answer a query with records, it should probably not respond at
all.
Related: #8266
Our PostHog integration was so lenient with regard to errors that I
didn't even notice that we failed to deserialise them correctly.
In PostHog, I configured the feature flags in `kebab-case`, but we
tried to deserialise them as `snake_case`.
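The fix boils down to aligning serde's rename rule with how the flags are named in PostHog; a minimal sketch with example flag names:

```rust
use serde::Deserialize;

/// Feature flags as returned by PostHog.
/// The flags are configured as e.g. `stream-logs`, so we must deserialise
/// the field names in kebab-case rather than snake_case.
#[derive(Debug, Default, Deserialize)]
#[serde(rename_all = "kebab-case", default)]
struct FeatureFlags {
    stream_logs: bool,     // matches `stream-logs`
    ebpf_data_plane: bool, // matches `ebpf-data-plane`
}
```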
Most of the time, these flags are only read from and not written to.
By using a read-lock, we make sure that even when we use feature flags
from multiple threads, they don't cause any contention.
In order to more safely roll out certain changes, being able to
runtime-toggle features is crucial. For this purpose, we build a simple
integration with PostHog that allows us to evaluate feature flags based
on the Firezone ID of a Client or Gateway.
The feature flags are also set in a dedicated context for Sentry events.
This allows us to see which feature flags were active when a certain
error was logged to Sentry.
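A sketch of how such a context can be attached, assuming the flags are available as simple key/value pairs (the context name `feature_flags` is illustrative):

```rust
use std::collections::BTreeMap;

use sentry::protocol::{Context, Value};

/// Attach the currently evaluated feature flags to all future Sentry events.
fn set_feature_flag_context(flags: &[(&str, bool)]) {
    let map: BTreeMap<String, Value> = flags
        .iter()
        .map(|(name, enabled)| (name.to_string(), Value::from(*enabled)))
        .collect();

    sentry::configure_scope(|scope| {
        scope.set_context("feature_flags", Context::Other(map));
    });
}
```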
Every time we start a new session, our telemetry context potentially
changes, i.e. the user may sign into a new account. This should ensure
that both the IPC service and the GUI always use the most up-to-date
`account_slug` as part of Sentry events. In addition, this will also set
the `account_slug` for clients that just signed in. Previously, the
`account_slug` would only get populated on the next start of the client.
We have a bug in our Rust telemetry code where starting a new telemetry
session for an **unsupported** environment doesn't stop the previous one
if one already exists.
This results in very confusing Sentry issues that cannot be correlated
to our infrastructure.
The application-split itself doesn't really warrant having two different
Sentry projects.
1. The location of the panic / log already tells us which component is
failing.
2. Both of the projects are built with Rust so the same "platform"
setting applies.
3. Reducing the number of Sentry projects makes things easier to manage.
4. The binaries are started as independent processes, so the two Sentry
contexts don't interfere.
What we should keep in mind is that one instance of an application will
now log into Sentry twice using the same DSN. I _think_ this means that
the number of sessions listed in Sentry will be double the number of
actual client runs. The same is true for the Apple client though, and
once we integrate Sentry for Android, the same will apply there, so
relative to each other, those numbers still make sense.
Previously, we needed to track our own user state in order to set the
whole thing in Sentry. That was necessary because Sentry didn't allow us
to _retrieve_ the current user of the scope but always required the full
user to be set. This was changed in
https://github.com/getsentry/sentry-rust/pull/715, which allows us to
remove some of that code and hopefully mitigate any lingering state when
it comes to telemetry sessions.
For persistent applications like the IPC service, it is possible that
telemetry gets initialised with different parameters depending on what
the user logs in with. Currently, only the first one is persisted and
all subsequent ones are ignored, leading to events that may be wrongly
tagged for a certain user / environment.
To fix this, we only skip the init if we are still in the same
environment. Otherwise, we close the previous session and initialise a
new one.
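In simplified form, the start logic becomes something like this (the `Telemetry` shape below is a stand-in, not the real type):

```rust
/// Simplified stand-in for the real `Telemetry` type.
struct Telemetry {
    current: Option<Session>,
}

struct Session {
    environment: String,
}

impl Telemetry {
    fn start(&mut self, environment: &str) {
        if let Some(session) = &self.current {
            if session.environment == environment {
                // Same environment: keep the existing session.
                return;
            }

            // Different environment: close the old session first.
            self.stop();
        }

        self.current = Some(Session {
            environment: environment.to_string(),
        });
        // ... initialise Sentry with the parameters for `environment` ...
    }

    fn stop(&mut self) {
        self.current = None;
        // ... flush events and end the Sentry session ...
    }
}
```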
Fixes: #7525.
On the one hand, learning about the edge cases in which our software
fails is useful, and thus having telemetry active for self-hosted users
is beneficial. On the other hand, we have neither control over nor a
contact for those self-hosted installations, and whatever they are doing
might spam our Sentry account with errors that we can't do anything
about.
To mitigate this, we disable telemetry for self-hosted users with the
next release.
Once we have more resources, we can consider enabling this again.
This is another attempt at fixing #7386. The previous PR was #7379. The
difference is, this time it works: the screenshot attached to the PR
shows `handle_input` as a currently active span.
I had to make some patches to Sentry, most notably:
- https://github.com/getsentry/sentry-rust/pull/708
- https://github.com/getsentry/sentry-rust/pull/712
The way we configure Sentry is quite tricky:
First and foremost, we need to understand that the `tracing` adapter for
Sentry has a `span_filter` configuration. When a span gets filtered out
there, the rest of `sentry-tracing` never sees the data in that span.
Thus, in order to capture variables from spans, we need to have a fairly
generous span filter. In this PR, we change this span filter to include
all spans except those on TRACE level.
Secondly, by default, the Sentry SDK doesn't send any spans to the
backend, i.e. the sampling rate is 0. Previously, we set the sampling
rate to 1.0 because the `span_filter` was already filtering out all
non-telemetry spans. A telemetry span is a concept that we invented. It
is a span that gets sampled at _creation_ time with a probability of 1%.
This is useful because creating a lot of spans is also expensive, so we
don't want to do it e.g. on a per-packet basis. With just these
configuration options, we now have a problem: we don't want to submit
all spans to Sentry, but we need the `span_filter` to allow all spans,
otherwise we can't capture the contextual fields from the span in
breadcrumbs. Luckily, the Sentry SDK has another configuration option:
`traces_sampler`.
The `traces_sampler` gets to compute a sampling rate for each individual
span. This allows us to discard all spans from being sent to Sentry
unless they are `telemetry` spans.
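Put together, the configuration looks roughly like the sketch below; the filter predicates and the `telemetry` check are simplified compared to the real code:

```rust
use std::sync::Arc;

use sentry::{ClientOptions, TransactionContext};
use tracing_subscriber::prelude::*;

fn init_sentry_tracing() -> sentry::ClientInitGuard {
    let guard = sentry::init(ClientOptions {
        // Decide per transaction whether it gets sent: only our `telemetry`
        // spans are forwarded, everything else is dropped.
        traces_sampler: Some(Arc::new(|ctx: &TransactionContext| -> f32 {
            if ctx.name() == "telemetry" {
                1.0
            } else {
                0.0
            }
        })),
        ..Default::default()
    });

    // Let everything except TRACE-level spans through to `sentry-tracing`
    // so that span fields remain available as context for breadcrumbs.
    let sentry_layer = sentry_tracing::layer()
        .span_filter(|meta| *meta.level() != tracing::Level::TRACE);

    tracing_subscriber::registry().with(sentry_layer).init();

    guard
}
```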
Resolves: #7386.
In addition to monitoring clients and gateways, it is also useful to
monitor relays in the same way. This gives us alerts on ERROR and WARN
messages logged by the relay as well as panics.
One of Rust's promises is "if it compiles, it works". However, there are
certain situations in which this isn't true. In particular, when using
dynamic typing patterns where trait objects are downcast to concrete
types, having two versions of the same dependency can silently break
things.
This happened in #7379 where I forgot to patch a certain Sentry
dependency. A similar problem exists with our `tracing-stackdriver`
dependency (see #7241).
Lastly, duplicate dependencies increase the compile-times of a project,
so we should aim for having as few duplicate versions of a particular
dependency as possible in our dependency graph.
This PR introduces `cargo deny`, a linter for Rust dependencies. In
addition to linting for duplicate dependencies, it also enforces that
all dependencies are compatible with an allow-list of licenses and it
warns when a dependency is referred to from multiple crates without
introducing a workspace dependency. Thanks to existing tooling
(https://github.com/mainmatter/cargo-autoinherit), transitioning all
dependencies to workspace dependencies was quite easy.
Resolves: #7241.
This switches our `sentry-tracing` dependency to a fork that includes
https://github.com/getsentry/sentry-rust/pull/708. Recording our span
fields with breadcrumbs is important to provide accurate context for the
message. Without the span fields, the messages give us a lot less
information.
Since the last release, the open issue about `flush` having a flipped
return value was fixed as well.
Using the clippy lint `unwrap_used`, we can automatically lint against
all uses of `.unwrap()` on `Result` and `Option`. This turns up quite a
few results, actually. In most cases, they are invariants that can't
actually be hit; for these, we change them to `Option`. In other cases,
they can actually be hit, for example if the user supplies an invalid
log-filter.
Activating this lint ensures the compiler will yell at us every time we
use `.unwrap`, prompting us to double-check whether we do indeed want to
panic here.
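For reference, a minimal illustration of the lint in action (using a port parse instead of the actual log-filter code):

```rust
#![deny(clippy::unwrap_used)]

fn parse_port(input: &str) -> Result<u16, std::num::ParseIntError> {
    // `input.parse().unwrap()` would now be flagged by `cargo clippy`,
    // forcing us to decide whether panicking here is really what we want.
    input.parse()
}
```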
Resolves: #7292.
Bundles together several minor improvements around telemetry:
- Removes the obsolete "Firezone" context: This is now included in the
user context as of #7310.
- Entirely encapsulates `sentry` within the `telemetry` module
- Concludes sessions that were not explicitly closed as "abnormal"
Sentry has a feature called the "User context" which allows us to assign
events to individual users. This in turn gives us statistics in Sentry
on how many users are affected by a certain issue.
Unfortunately, Sentry's user context cannot be built up step-by-step but
has to be set as a whole. To achieve this, we need to slightly refactor
`Telemetry` so that it is not `clone`d but instead passed around by
mutable reference.
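Setting the user then happens in one go, roughly like this (the `account_slug` field placement is an assumption):

```rust
use sentry::User;

/// The user context has to be set as a whole, so we build the full `User`
/// from our own state and hand it to Sentry in one call.
fn set_user(firezone_id: String, account_slug: Option<String>) {
    let mut user = User {
        id: Some(firezone_id),
        ..Default::default()
    };

    if let Some(slug) = account_slug {
        // Extra attributes go into the catch-all `other` map.
        user.other.insert("account_slug".into(), slug.into());
    }

    sentry::configure_scope(|scope| scope.set_user(Some(user)));
}
```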
Resolves: #7248.
Related: https://github.com/getsentry/sentry-rust/issues/706.
Reading the Git version requires the entire Git repository to be
present, including all tags. The tags are only created _after_ the
artifact is built, when we publish the release. Therefore, these tags
are never included in the actual released binary.
For Sentry, we use the `CARGO_PKG_VERSION` variable instead. This
doesn't tell us whether somebody built a client from source and then
used it, so there could be some confusion in Sentry events. It is quite
unlikely that this happens though, so for the majority of Sentry alerts,
this will give us the correct version.
For the Android client, we also depend on the `GITHUB_SHA` env variable
at compile-time. We do the same thing for the GUI client here.
Resolves: #6925.
DNS resolution is a critical part of `connlib`. If it is slow for
whatever reason, users will notice this. To make sure we notice as well,
we add `telemetry` spans to the client's and gateway's DNS resolution.
For the client, this applies to all DNS queries that we forward to the
upstream servers. For the gateway, this applies to all DNS resources.
In addition to those IO operations, we also instrument the
`match_resource_linear` function. This function operates in `O(n)` of
the number of defined DNS resources. It _should_ be fast enough not to
create an impact, but it can't hurt to measure this regardless.
Lastly, we also instrument `refresh_translations` on the gateway.
Refreshing the DNS resolution of a DNS resource should really only
happen when the previous IP addresses become stale yet the user is
still trying to send traffic to them. We don't actually have any data on
how often that happens. By instrumenting it, we can gather some of this
data.
To make sure that none of these telemetry events and spans hurt the
end-user performance, we introduce macros to `firezone-logging` that
sample the creation of these events and spans at a rate of 1%. I ran a
flamegraph and none of these even showed up. The most critical one here
is probably the `match_resource_linear` span because it happens on every
DNS query.
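The sampling macros boil down to something like this sketch (the real macros in `firezone-logging` may differ in shape):

```rust
/// Create the span only ~1% of the time; otherwise return a disabled span
/// so the call site stays cheap on the hot path.
macro_rules! sampled_telemetry_span {
    ($($args:tt)*) => {
        if rand::random::<f64>() < 0.01 {
            tracing::info_span!($($args)*)
        } else {
            tracing::Span::none()
        }
    };
}

fn lookup(domain: &str) {
    let _guard = sampled_telemetry_span!("match_resource_linear", %domain).entered();
    // ... linear scan over the configured DNS resources ...
}
```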
Resolves: #7198.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
`sentry`'s transport layer appears to be using blocking IO for flushing
events. Performing blocking IO within a future that is running on a
worker-thread of tokio causes this operation to hang and eventually
time out after 5 seconds. As a result, many events - especially traces -
don't get flushed to sentry when an app is being shut down.
To fix this, we make `Telemetry::stop` an `async fn` and offload the
flushing to a task on tokio's thread-pool for blocking IO.
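The flush offloading is essentially the following sketch; the real `Telemetry::stop` carries more state:

```rust
use std::time::Duration;

/// Flush Sentry from tokio's blocking thread-pool so the (blocking) transport
/// cannot stall an async worker thread during shutdown.
async fn stop_telemetry() {
    let _ = tokio::task::spawn_blocking(|| {
        if let Some(client) = sentry::Hub::current().client() {
            client.flush(Some(Duration::from_secs(5)));
        }
    })
    .await;
}
```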
Closes #7175
Also fixes a bug with the initialization order of Tokio and Sentry.
Previously:
1. Start Tokio, executor threads inherit main thread context
2. Load device ID and set it on the main telemetry hub
Now:
1. Load device ID and set it on the main telemetry hub
2. Start Tokio, executor threads inherit main thread context
The context and possibly tags didn't seem to propagate from the main hub
if we set them after the worker threads spawned.
Based on this understanding, the IPC service process is still wrong, but
a fix will have to wait, because telemetry in the IPC service is more
complicated than in the GUI process.
This should give us much more context for a particular error without
having to bother a customer with sending us the logs / digging for them
ourselves in our staging or production environment.
Resolves: #7176.