Rust 1.91 has been released and brings with it a few new lints that we
need to tidy up. In addition, it also stabilizes `BTreeMap::extract_if`:
A really nifty std-lib function that allows us to conditionally take
elements from a map. We need that in a bunch of places.
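As a rough illustration of why this is handy (a sketch, not code from this change; it assumes the stabilized signature that takes a range plus a predicate):

```rust
use std::collections::BTreeMap;

fn main() {
    let mut peers: BTreeMap<u32, &str> = BTreeMap::from([
        (1, "connected"),
        (2, "idle"),
        (3, "idle"),
        (4, "connected"),
    ]);

    // Drain all idle peers out of the map in a single pass; everything the
    // predicate rejects stays in place.
    let idle: Vec<(u32, &str)> = peers
        .extract_if(.., |_id, state| *state == "idle")
        .collect();

    assert_eq!(idle, vec![(2, "idle"), (3, "idle")]);
    assert_eq!(peers.len(), 2);
}
```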
With the fix for taking link-scoped routes into account in #10554, we
introduced a bug: If a customer defines routes in Firezone that conflict
with the link-scoped ones, the link-scoped routes currently take
priority because they are usually more specific.
To fix this, we introduce tiered routing tables controlled by a set of
rules with different priorities.
1. In the first "Firezone" routing table, we add all CIDR/IP routes that
users define in Firezone.
2. In the second "Firezone" routing table, we sync in all link-scope
routes from the system.
3. In the third "Firezone" routing table, we only add the Internet
Resource if it is active.
By evaluating the routing tables in this order, we effectively always
prioritize Firezone-controlled routes over local ones but still allow
access to LAN resources when the Internet Resource is active.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
In #10554, we added a syncing mechanism that would copy all link-scoped
routes of the `main` routing table over to the Firezone routing table.
Routes for interfaces that are currently offline cannot be added and
cause a netlink error of "Invalid argument".
To prevent unnecessary warnings from being logged to Sentry, we retrieve
the link state of each interface and skip routes for interfaces that are
not online.
## Context
On Linux, we create a dedicated routing table for all routes of the
Firezone TUN device, including the `0.0.0.0/0` route. At a minimum, this
routing table contains the following if the Internet Resource is active:
```
> ip route show table 539098368
default dev tun-firezone proto static
100.64.0.0/11 dev tun-firezone proto static
100.96.0.0/11 dev tun-firezone proto static
100.100.111.0/24 dev tun-firezone proto static
```
In addition, we also create a routing rule that bypasses this routing
table for all packets that are tagged with the `0xfd002021` mark:
```
> ip rule list
0: from all lookup local
32765: not from all fwmark 0xfd002021 lookup 539098368
32766: from all lookup main
32767: from all lookup default
```
Firezone's internal UDP and TCP sockets are tagged with this mark and
thus prevent routing loops where our own packets would otherwise get
redirected back into the tunnel.
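For illustration, tagging a socket with that mark could look roughly like this using the `socket2` crate (a sketch, not connlib's actual socket setup; it assumes the crate's `all` feature and that the process has `CAP_NET_ADMIN`):

```rust
use std::io;

use socket2::{Domain, Protocol, Socket, Type};

const FIREZONE_MARK: u32 = 0xfd00_2021;

fn make_marked_udp_socket() -> io::Result<Socket> {
    let socket = Socket::new(Domain::IPV4, Type::DGRAM, Some(Protocol::UDP))?;

    // Every packet sent from this socket carries the fwmark, so the
    // `not from all fwmark 0xfd002021` rule skips the Firezone table for it.
    socket.set_mark(FIREZONE_MARK)?;

    Ok(socket)
}
```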
Without the Internet Resource active, the rule `from all lookup main`
matches local LAN traffic and correctly routes it out via the local
interface.
For example, on my computer, the Linux kernel created the following
route with the `link` scope in the main table:
```
192.168.188.0/24 dev wlp192s0 proto kernel scope link src 192.168.188.112 metric 600
```
## The problem
With the Internet Resource active, there is a problem. The default route
matches ALL destinations, including those for local LAN destinations
which should actually be sent out via a different interface. As a
result, local LAN traffic is broken on Linux as soon as the Internet
Resource is active. Instead of being sent out via the local interface,
these packets get sent to `tun-firezone` where they get forwarded to the
Gateway and then dropped because their source IP is not a Firezone
Client IP.
## Solution
Fixing this is unfortunately non-trivial. The best I could come up with
is to create a copy of all link-scoped routes in the Firezone routing
table and keep those in sync with all route changes that happen. For
example, when we roam, the link-scoped routes obviously change because
we join a new subnet.
We therefore listen to change events from netlink and create a debounced
task that reads the current link-scoped routes from the main routing
table, compares them to the ones in the Firezone table and adds any
routes that are not present.
We don't need to worry about removing routes as link-scoped routes
automatically disappear once the corresponding interface goes away.
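A minimal sketch of the debounce pattern (hypothetical names and debounce window; the netlink plumbing and the actual route diffing are elided):

```rust
use std::time::Duration;

use tokio::sync::mpsc;

const DEBOUNCE: Duration = Duration::from_millis(500); // hypothetical window

/// Collapses bursts of netlink route-change events into a single sync.
async fn debounce_route_sync(mut events: mpsc::Receiver<()>) {
    while events.recv().await.is_some() {
        // Keep restarting the timer while more events arrive within the window.
        loop {
            tokio::select! {
                maybe_event = events.recv() => {
                    if maybe_event.is_none() {
                        return; // event source has shut down
                    }
                    // Another change arrived; the debounce window restarts.
                }
                _ = tokio::time::sleep(DEBOUNCE) => break,
            }
        }

        sync_link_scoped_routes().await;
    }
}

async fn sync_link_scoped_routes() {
    // Hypothetical: read link-scoped routes from the main table, diff them
    // against the Firezone table and add whatever is missing.
}
```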
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Unfortunately, it isn't very easy to detect whether a socket supports
GSO on Linux. Hence, `quinn-udp` simply probes for its support by trying
to send GSO batches and effectively disables GSO by setting the
`max-gso-segments` state variable to 1 if it encounters either EINVAL
(-22) or EIO (-5).
For EINVAL, `quinn-udp` has an internal retry mechanism. For EIO, the
`Transmit` which is passed to `quinn-udp` needs to be re-chunked and
thus cannot be automatically retried.
In order to avoid dropping packets, we therefore add a one-off retry
step to sending a datagram whenever we hit EIO on Linux or Android. If
the error was due to GSO not being supported, the second attempt should
succeed, and going forward even the first one should, until we roam the
socket (at which point this state variable gets reset).
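Conceptually, the retry looks roughly like this (a sketch; the `send` closure is a hypothetical stand-in that re-chunks the transmit according to the current `max_gso_segments` each time it is called):

```rust
use std::io;

/// Attempts to send a transmit, retrying once if the kernel rejects the
/// GSO batch with EIO.
fn send_with_gso_fallback(send: impl Fn() -> io::Result<()>) -> io::Result<()> {
    match send() {
        // `libc::EIO` is 5 on Linux.
        Err(e) if e.raw_os_error() == Some(libc::EIO) => {
            // quinn-udp has set `max_gso_segments` to 1 by now, so re-chunking
            // the transmit and sending again should succeed.
            send()
        }
        result => result,
    }
}
```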
These packet drops have been causing flakiness in CI ever since we
merged the eBPF tests. Those disable checksum offloading which appears
to trigger these errors.
Instead of recording the queue depths on every event-loop tick, we now
record them once a second by setting a Gauge. Not only is that a simpler
instrument to work with, it is also significantly more performant. When
metrics are enabled, the current version takes up quite a bit of CPU
time.
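A rough sketch of the new recording pattern (hypothetical function names; the actual instrument lives in our OpenTelemetry setup):

```rust
use std::time::Duration;

/// Records the queue depths once a second instead of on every event-loop tick.
async fn report_queue_depths(current_depth: impl Fn() -> u64) {
    let mut interval = tokio::time::interval(Duration::from_secs(1));

    loop {
        interval.tick().await;

        // Stand-in for recording the value on a Gauge instrument.
        set_queue_depth_gauge(current_depth());
    }
}

fn set_queue_depth_gauge(_depth: u64) {
    // Elided: `gauge.record(depth, &attributes)` on the configured meter.
}
```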
Resolves: #10237
Right now, the Client event-loops have a channel with a capacity of 1000
items for sending new resource lists and TUN device updates to the host
app. This is kind of unnecessary as we always only care about the last
version of these. Intermediate updates that the host app doesn't process
are effectively irrelevant.
We've had an issue before where a bug in the portal caused us to receive
many updates to resources which ended up crashing Client apps because
this channel filled up.
To be more resilient on this front, we refactor the Client event loop to
use a `watch` channel for this. Watch channels only retain the last
value that got sent into them.
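For illustration, this is the property of `tokio::sync::watch` we rely on (a toy example, not connlib's actual message types):

```rust
use tokio::sync::watch;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = watch::channel(Vec::<String>::new());

    // A fast producer: intermediate updates simply overwrite each other.
    tx.send(vec!["resource-a".to_string()]).unwrap();
    tx.send(vec!["resource-a".to_string(), "resource-b".to_string()]).unwrap();

    // A slow consumer (e.g. the host app) only ever observes the latest
    // value, so the channel can never "fill up".
    rx.changed().await.unwrap();
    assert_eq!(rx.borrow_and_update().len(), 2);
}
```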
When looking through customer logs, we see a lot of "Resolved best route
outside of tunnel" messages. Those get logged every time we need to
rerun our re-implementation of Windows' weighting algorithm that decides
which source interface / IP a packet should be sent from.
Currently, this gets cached in every socket instance so for the
peer-to-peer socket, this is only computed once per destination IP.
However, for DNS queries, we make a new socket for every query. Using a
new source port for each DNS query is recommended to avoid
fingerprinting of DNS queries. Using a new socket also means that we
need to re-run this algorithm every time we make a DNS query, which is
why we see this log so often.
To fix this, we need to share this cache across all UDP sockets. Cache
invalidation is one of the hardest problems in computer science and this
instance is no different. This cache needs to be reset every time we
roam as that changes the weighting of which source interface to use.
To achieve this, we extend the `SocketFactory` trait with a `reset`
method. This method is called whenever we roam and can then reset a
shared cache inside the `UdpSocketFactory`. The "source IP resolver"
function that is passed to the UDP socket now simply accesses this
shared cache and inserts a new entry when it needs to resolve the IP.
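A minimal sketch of what such a shared cache could look like (hypothetical types; the real cache lives inside the `UdpSocketFactory`):

```rust
use std::collections::HashMap;
use std::net::IpAddr;
use std::sync::{Arc, Mutex};

/// Maps destination IPs to the source IP we should send from.
#[derive(Clone, Default)]
struct SourceIpCache {
    inner: Arc<Mutex<HashMap<IpAddr, IpAddr>>>,
}

impl SourceIpCache {
    /// Returns the cached source IP for `dst`, computing it once if missing.
    fn resolve(&self, dst: IpAddr, compute: impl FnOnce(IpAddr) -> IpAddr) -> IpAddr {
        *self
            .inner
            .lock()
            .unwrap()
            .entry(dst)
            .or_insert_with(|| compute(dst))
    }

    /// Called from the socket factory's `reset` whenever we roam.
    fn reset(&self) {
        self.inner.lock().unwrap().clear();
    }
}
```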
As an added benefit, this may speed up DNS queries on Windows a bit
(although I haven't benchmarked it). It should certainly drastically
reduce the number of syscalls we make on Windows.
When failing to create the TUN device, the error messages are currently
pretty bare. Add a bit more context so users can more easily
self-diagnose what is wrong.
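For example, wrapping the error with extra context via `anyhow` could look roughly like this (a sketch with a stand-in `Tun` type and message, not the actual wording we ship):

```rust
use anyhow::{Context, Result};

struct Tun; // stand-in for the real TUN device handle

impl Tun {
    fn new() -> std::io::Result<Self> {
        Err(std::io::Error::other("TUN driver not found")) // pretend failure
    }
}

fn create_tun() -> Result<Tun> {
    Tun::new().context(
        "Failed to create TUN device; check that the driver is installed and that Firezone has the required privileges",
    )
}
```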
Rust 1.88 has been released and brings with it a quite exciting feature:
let-chains! They allow us to mix and match `if` and `let` expressions,
often reducing the rightward drift of the relevant code and making
it easier to read.
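For a generic illustration (not code from this PR; let-chains require the 2024 edition):

```rust
use std::collections::HashMap;

fn peer_name(peers: &HashMap<u32, String>, id: Option<u32>) -> Option<&String> {
    // Previously this required three nested `if`/`if let` blocks.
    if let Some(id) = id
        && let Some(name) = peers.get(&id)
        && !name.is_empty()
    {
        return Some(name);
    }

    None
}
```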
Rust 1.88 also comes with a new clippy lint that warns when creating a
mutable reference from an immutable pointer. Attempting to fix this
revealed that this is exactly what we are doing in the eBPF kernel.
Unfortunately, it doesn't seem to be possible to design this in a way
that is accepted by both the borrow-checker AND the eBPF verifier.
Hence, we simply make the function `unsafe` and document for the
programmer what needs to be upheld.
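The shape of the result is roughly the following (a hypothetical stand-in, not the actual eBPF code):

```rust
/// Returns a mutable view into the packet buffer at `offset`.
///
/// # Safety
///
/// - `ptr` must point to a buffer that is valid for reads and writes for at
///   least `offset + size_of::<T>()` bytes, and be suitably aligned for `T`.
/// - No other reference into that range may exist while the returned
///   reference is alive.
unsafe fn at_mut<'a, T>(ptr: *const u8, offset: usize) -> &'a mut T {
    // SAFETY: upheld by the caller per the contract documented above. Casting
    // the const pointer to a mutable one is exactly what the lint flags.
    unsafe { &mut *ptr.add(offset).cast::<T>().cast_mut() }
}
```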
At present, and as a result of how `connlib` evolved, we still implement
a `Poll`-based function for receiving data on our UDP socket. Ever since
we moved to dedicated threads for the UDP socket, we can directly block
on receiving datagrams and don't have to poll the socket.
This simplifies the implementation a fair bit. Additionally, it made me
realise that we currently don't expose any errors on the UDP socket.
Likely, those will be ephemeral but it is still better than completely
silencing them.
On Windows - in order to prevent routing loops - we resolve the best
"non-tunnel" route to a particular host for each IP address. The
resulting source IP is then used as the source for packets leaving our
interface. In case the system doesn't have IPv6 connectivity or there
are simply no routes available, we fail this "source IP resolver" with
an IO error.
Presently, this uses the "other" IO error type, which causes it to be
logged at WARN level in the event-loop. The IO error kinds
`HostUnreachable` and `NetworkUnreachable` are expected during normal
operation of Firezone and are therefore only logged at DEBUG.
By changing the IO error type, we fix the WARN log spam on Windows for
machines without IPv6 connectivity.
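For illustration, constructing the error with the more specific kind could look like this (a sketch with a hypothetical helper, not the actual resolver code):

```rust
use std::io;
use std::net::IpAddr;

// Hypothetical constructor for the "no route to this destination" case.
fn no_route_error(dst: IpAddr) -> io::Error {
    io::Error::new(
        // Expected during normal operation, so it is only logged at DEBUG.
        io::ErrorKind::HostUnreachable,
        format!("no non-tunnel route to {dst}"),
    )
}
```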
I believe some of the recent changes around how we load the
`firezone-id.json` from the GUI client surfaced that we in fact don't
always have access to it. Previously, this was silenced because we would
only optionally add it as context to the Sentry client.
Now, we need it to initialise telemetry so we know whether or not to
send logs to Sentry.
In order to be able to access the file, we need to change the config
directory and the file itself to be owned by the `firezone-client` group.
In order to detect network changes on Windows, we implement the
`INetworkEvents` callback interface. This callback notifies us every
time the connectivity of a certain network changes.
Performing a network reset in connlib on every one of these changes
hurts the user experience while Firezone is booting because it takes a
while for things to settle. Firezone itself makes changes to the
network, so several of these change events happen _because_ Firezone is
starting.
The documentation from Microsoft on what possible values the `NameType`
attribute can have is pretty thin but I did manage to find the following
values on the Internet:
- `6`: Wired network
- `71`: Wireless network
- `243`: Broadband network
We assume that the user is connected to the Internet through one of
these so we ignore network changes on all other networks.
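In code, the filter boils down to something like this (a hypothetical helper; the values are the ones listed above):

```rust
/// Only networks of these `NameType`s trigger a network reset.
fn is_internet_facing(name_type: u32) -> bool {
    // 6 = wired, 71 = wireless, 243 = broadband.
    matches!(name_type, 6 | 71 | 243)
}
```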
An alternative approach to reducing the number of false-positive change
events would be to react to a narrower list of change events. I
discarded this approach because it wasn't clear to me which of the
event types [0] would matter to us and when Windows emits them. I think
in order to effectively react to those, we'd have to do more
fine-grained tracking of which state a network is in and, for example,
only trigger a reset if we move from "Disconnected" to "Subnet
connectivity".
Windows also differentiates between local, subnet and Internet
connectivity, yet in my testing, I've never observed the "Internet"
connectivity being emitted.
Hence, it is deemed more robust to just filter out networks based on
their type. Firezone itself is of type 53 and is therefore automatically
filtered out as well. The risk here is that we don't react to
connectivity changes of a network that a customer is relying on.
Unfortunately, I don't think there is a better way to find this out
other than shipping this change and waiting for reports.
[0]:
https://learn.microsoft.com/en-us/windows/win32/api/netlistmgr/ne-netlistmgr-nlm_connectivity#constants
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
A bit of legacy that we have inherited around our Firezone ID is that
the ID stored on the user's device is sha'd before being passed to the
portal as the "external ID". This makes it difficult to correlate IDs in
Sentry and PostHog with the data we have in the portal. For Sentry and
PostHog, we submit the raw UUID stored on the user's device.
As a first step in overcoming this, we embed an "external ID" in those
services as well IF the provided Firezone ID is a valid UUID. This will
allow us to immediately correlate those events.
As a second step, we automatically generate all new Firezone IDs for the
Windows and Linux Client as `hex(sha256(uuid))`. These won't parse as
valid UUIDs and therefore will be submitted as is to the portal.
As a third step, we update all documentation around generating Firezone
IDs to use `uuidgen | sha256` instead of just `uuidgen`. This is
effectively the equivalent of (2) but for the Headless Client and
Gateway where the Firezone ID can be configured via environment
variables.
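A sketch of generating such an ID in Rust (the crate choices here — `uuid`, `sha2`, `hex` — are assumptions for illustration, not necessarily what the Clients use):

```rust
use sha2::{Digest, Sha256};
use uuid::Uuid;

/// Generates a new Firezone ID in the `hex(sha256(uuid))` form described above.
fn new_firezone_id() -> String {
    let uuid = Uuid::new_v4().to_string();

    hex::encode(Sha256::digest(uuid.as_bytes()))
}
```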
Resolves: #9382
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
At present, listening for DNS server change and network change events is
handled in the GUI client. Upon an event, a message is sent to the
tunnel service which then applies the new state to `connlib`.
We can avoid some of this boilerplate by moving these listeners to the
tunnel service as part of the handler. As a result, we get a few
improvements:
- We don't need to ignore these events if we don't have a session
because the lifetime of these listeners is tied to the IPC handler on
the service side.
- We need fewer IPC messages
- We can retry the connection directly from within the tunnel service in
case we have no Internet at the time of startup
- We can more easily model out the state machine of a connlib session in
the tunnel service
- On Linux, this means we no longer shell out to `resolvectl` from the
GUI process, unifying access to the "resolvers" from the tunnel service
- On Windows, we no longer need admin privileges on the GUI client for
optimized network-change detection. This now happens in the Tunnel
process which already runs as admin.
Resolves: #9465
In #9498, the Cloudflare response was updated to match what appears to
be a transient change on their end. It looks like this has changed
again, so to prevent this from breaking CI in the future we relax the
assertion.
It appears that Cloudflare changed the response that it is sending for
the 1.1.1.1 IP so we need to adapt our integration test for packet loops
in order to make CI pass.
When working on UI stuff for the Tauri clients on macOS it's helpful if
the UI is buildable. This is a first stab at getting a stub client to
launch on macOS with the help of our AI overlords. Feel free to close or
heavily critique if there is a better approach.
The latest release now also sorts workspace dependencies, as well as
different dependency sections. Keeping these things sorted reduces the
chances of merge conflicts when multiple PRs edit these files.
For our telemetry sessions with Sentry, we need to know which
environment we are running in, i.e. staging, production or on-prem. The
GUI client's tunnel service doesn't have a concept of an environment
until a GUI connects and sends the `StartTelemetry` message. Therefore,
we should scope a telemetry session to a GUI being connected over IPC.
Any errors around setting up / tearing down the background service are a
catch-22. Until a GUI connects, we can't initialise the telemetry
connection but if we fail to set up the background service, no GUI can
ever connect. Hence, the current setup and tear down of the `Telemetry`
module around the `ipc_listen` calls can safely be removed as they are
effectively no-ops anyway.
The name IPC service is not very descriptive. The GUI and the service
are separate processes, so by nature they need IPC to communicate with
each other. The important thing is that the service process has control
over the tunnel. Therefore, we rename everything to "Tunnel service".
The only parts that are not changed are historic changelog entries.
Resolves: #9048
In order to avoid routing loops on Windows, our UDP and TCP sockets in
`connlib` embed a "source IP resolver" that finds the "next best"
interface after our TUN device according to Windows' routing metrics.
This ensures that packets don't get routed back into our TUN device.
Currently, errors during this process are only logged on TRACE and
therefore not visible in Sentry. We fix this by moving around some of
the function interfaces and forwarding the error from the source IP
resolver together with some context about the destination IP.
When this crate is compiled by itself, these features are required. This
doesn't show up in CI because there we compile the entire workspace and
some other crate in the workspace already activates these features.
For at least one user, the threads shut down correctly, but we didn't
seem to have exited the loop. In
https://firezone-inc.sentry.io/issues/6335839279/events/c11596de18924ee3a1b64ced89b1fba2/?project=4508008945549312,
we can see that both flags are marked as `true` yet we still emitted the
message.
The only way I can explain this is that the thread shut down in between
the two times we called the `is_finished` function. To ensure this
doesn't happen, we now only read it once.
This however also shows that 5s may not be enough time for WinTUN to
shut down. Therefore, we increase the grace period to 10s.
The current `rust/` directory is a bit of a wild west in terms of how
the crates are organised. Most of them are simply at the top level when
in reality, they are all `connlib`-related. The Apple and Android FFI
crates - which are entrypoints into the Rust code - are defined several
layers deep.
To improve the situation, we move around and rename several crates. The
end result is that all top-level crates / directories are:
- Either entrypoints into the Rust code, i.e. applications such as
Gateway, Relay or a Client
- Or crates shared across all those entrypoints, such as `telemetry` or
`logging`
The module and crate structure around the GUI client and its background
service is currently a mess of circular dependencies. Most of the
service implementation actually sits in `firezone-headless-client`
because the headless-client and the service share certain modules. We
have recently moved most of these to `firezone-bin-shared` which is the
correct place for these modules.
In order to move the background service to `firezone-gui-client`, we
need to untangle a few more things in the GUI client. Those are done
commit-by-commit in this PR. With that out of the way, we can finally
move the service module to the GUI client, where it should actually live
given that it has nothing to do with the headless client.
As a result, the headless-client is - as one would expect - really just
a thin wrapper around connlib itself and is reduced down to 4 files with
this PR.
To make things more consistent in the GUI client, we also move the
`main.rs` file into `bin/`. By convention, `bin/` is where you define
binaries if a crate has more than one; Cargo will then build all of them.
Eventually, we can optimise the compile-times for `firezone-gui-client`
by splitting it into multiple crates:
- Shared structs like IPC messages
- Background service
- GUI client
This will be useful because it allows re-compiling only the GUI client
when nothing in `connlib` changes, and vice versa.
Resolves: #6913
Resolves: #5754
Both `device_id` and `device_info` are used by the headless-client and
the GUI client / IPC service. They should therefore be defined in the
`bin-shared` crate.
Currently, the platform-specific code for controlling DNS resolution on
a system sits in `firezone-headless-client`. This code is also used by
the GUI client. This creates a weird compile-time dependency from the
GUI client to the headless client.
For other components that have platform-specific implementations, we use
the `firezone-bin-shared` crate. As a first step of resolving the
compile-time dependency, we move the `dns_control` module to
`firezone-bin-shared`.
Presently, the network change detection on Windows is very naive and
simply emits a change event every time _anything_ changes. We can
optimise this and therefore improve the start-up time of Firezone by:
- Filtering out duplicate events
- Filtering out network change events for our own network adapter
This reduces the number of network change events to 1 during startup. As
far as I can tell from the code comments in this area, we explicitly
send this one to ensure we don't run into a race condition whilst we are
starting up.
Resolves: #8905
The `signals` module isn't something headless-client specific and should
live in our `bin-shared` crate. Once the `ipc_service` module is
decoupled from the headless-client crate, it will be used by both the
headless client and IPC service (which then will be defined in the GUI
client crate).
The `known_dirs` module is used across the headless-client and the GUI
client. It should live in `bin-shared` where all the other
cross-platform modules are.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
The `uptime` module from `firezone-headless-client` is also used in the
GUI client. In order to decouple this dependency, we move the module to
`bin-shared`, next to the other cross-platform modules.
This tunnel throughput benchmark isn't a very useful benchmark and it is
very flaky. Remove it entirely until we can replace it with something
more robust and useful.
Resolves: #8172
We have been using buffer pools for a while all over `connlib` as a way
to efficiently use heap-allocated memory. This PR harmonizes the usage
of buffer pools across the codebase by introducing a dedicated
`bufferpool` crate. This crate offers a convenient and easy-to-use API
for all the things we (currently) need from buffer pools. As a nice
bonus of having it all in one place, we can now also track metrics of
how many buffers we have currently allocated.
An example output from the local metrics exporter looks like this:
```
Name : system.buffer.count
Description : The number of buffers allocated in the pool.
Unit : {buffers}
Type : Sum
Sum DataPoints
Monotonic : false
Temporality : Cumulative
DataPoint #0
StartTime : 2025-04-29 12:41:25.278436
EndTime : 2025-04-29 12:42:25.278088
Value : 96
Attributes :
-> system.buffer.pool.name: udp-socket-v6
-> system.buffer.pool.buffer_size: 65535
DataPoint #1
StartTime : 2025-04-29 12:41:25.278436
EndTime : 2025-04-29 12:42:25.278088
Value : 7
Attributes :
-> system.buffer.pool.buffer_size: 131600
-> system.buffer.pool.name: gso-queue
DataPoint #2
StartTime : 2025-04-29 12:41:25.278436
EndTime : 2025-04-29 12:42:25.278088
Value : 128
Attributes :
-> system.buffer.pool.name: udp-socket-v4
-> system.buffer.pool.buffer_size: 65535
DataPoint #3
StartTime : 2025-04-29 12:41:25.278436
EndTime : 2025-04-29 12:42:25.278088
Value : 8
Attributes :
-> system.buffer.pool.buffer_size: 1336
-> system.buffer.pool.name: ip-packet
DataPoint #4
StartTime : 2025-04-29 12:41:25.278436
EndTime : 2025-04-29 12:42:25.278088
Value : 9
Attributes :
-> system.buffer.pool.buffer_size: 1336
-> system.buffer.pool.name: snownet
```
Resolves: #8385
When working on the Rust code of Firezone from a macOS computer, it is
useful to have pretty much all of the code at least compile so that we
detect problems early. Eventually, once we target features like a
headless macOS client, some of these stubs will actually be filled in
and become functional.
Correctly implementing asynchronous IO is notoriously hard. In order to
not drop packets in the process, one has to ensure a given socket is
ready to accept packets, buffer them if that is not the case, suspend
everything else until the socket is ready and then continue.
Until now, we did this because it was the only option to run the UDP
sockets on the same thread as the actual packet processing. That in turn
was motivated by wanting to pass around references to the received
packets for processing. Rust's borrow-checker does not allow passing
such references between threads, which forced us to keep the sockets on
the same thread as the packet processing.
Like we already did in other places in `connlib`, this can be solved
through the use of buffer pools. Using a buffer pool, we can use heap
allocations to store the received packets without having to make a new
allocation every time we read new packets. Instead, we can have a
dedicated thread that is connected to `connlib`'s packet processing
thread via two channels (one for inbound and one for outbound packets).
These channels are bounded, which ensures backpressure is maintained in
case one of the two threads lags behind. These bounds also mean that we
have at most N buffers from the buffer pool in-flight (where N is the
capacity of the channel).
Within those dedicated threads, we can then use `async/await` notation
to suspend the entire task when a socket isn't ready for sending.
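A minimal sketch of the receive half of this design (using plain `Vec`s where the real code uses pooled buffers):

```rust
use std::net::UdpSocket;

use tokio::sync::mpsc;

const CHANNEL_CAPACITY: usize = 128; // bounds how many buffers can be in flight

/// A dedicated thread blocks on the socket and hands datagrams to the
/// packet-processing side over a bounded channel.
fn spawn_recv_thread(socket: UdpSocket) -> mpsc::Receiver<Vec<u8>> {
    let (tx, rx) = mpsc::channel(CHANNEL_CAPACITY);

    std::thread::spawn(move || {
        let mut buf = vec![0u8; 65_535];

        while let Ok((len, _from)) = socket.recv_from(&mut buf) {
            // `blocking_send` applies backpressure: if packet processing lags
            // behind, this thread waits instead of piling up allocations.
            if tx.blocking_send(buf[..len].to_vec()).is_err() {
                break; // the processing side has shut down
            }
        }
    });

    rx
}
```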
Resolves: #8000
From the perspective of any application, Firezone is a layer-3 network
and will thus use the host's networking stack to form IP packets for
whichever application protocol is in use (UDP, TCP, etc). These packets
then get encapsulated into UDP packets by Firezone and sent to a
Gateway.
As a result of this design, the IP header seen by the networking stacks
of the Client and the receiving service is not visible to any
intermediary along the network path between the Client and Gateway.
In case this network path is congested and middleboxes such as routers
need to drop packets, they will look at the ECN bits in the IP header
(of the UDP packet generated by a Client or Gateway) and flip a bit in
case the previous value indicated support for ECN (`0b01` or `0b10`).
When received by a network stack that supports ECN, seeing `0b11` means
that the network path is congested and that it must reduce its
send/receive windows (or otherwise throttle the connection).
At present, this doesn't work with Firezone because of the
aforementioned encapsulation of IP packets. To support ECN, we therefore
need to:
- Copy ECN bits from a received IP packet to the datagram that
encapsulates it: This ensures that if the Client's network stack supports
ECN, we mirror that support on the wire.
- Copy ECN bits from a received datagram to the IP packet that is sent to
the TUN device: This ensures that if the "Congestion Experienced" bit
gets set along the network path between Client and Gateway, we reflect
that accordingly on the IP packet emitted by the TUN device.
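Concretely, the ECN field is the low two bits of the IPv4 TOS / IPv6 traffic-class byte, so copying it boils down to a small bit operation (a sketch, not the actual packet-handling code):

```rust
/// The ECN field occupies the low two bits of the TOS / traffic-class byte.
const ECN_MASK: u8 = 0b11;

/// Copies the ECN bits of an inner IP packet onto the TOS/traffic-class
/// value used for the encapsulating UDP datagram (and vice versa on receive).
fn copy_ecn(inner_tos: u8, outer_tos: u8) -> u8 {
    (outer_tos & !ECN_MASK) | (inner_tos & ECN_MASK)
}

#[test]
fn preserves_dscp_of_outer_header() {
    // DSCP 46 (EF) on the outer header, ECT(0) = 0b10 on the inner packet.
    assert_eq!(copy_ecn(0b10, 46 << 2), (46 << 2) | 0b10);
}
```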
Resolves: #3758
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil Bou Kheir <jamilbk@users.noreply.github.com>
The bugfix we attempted in #8156 turned out to be wrong. Reading the
source code, we have to call `Session::shutdown` in order to actually
cancel the `Session::receive_blocking` call. Not doing so means we run
into the timeout when discarding the `Tun` device because the
recv-thread is stuck in `Session::receive_blocking`.
Fixes: #8395
Dependabot appears to have a hard time bumping the Tauri dependencies
together as a group. Additionally, our dependency linter `cargo deny`
disallows duplicate dependencies by default. To avoid introducing more
duplicate dependencies, we depend on the upstream `main` branch of two
projects that have already updated their dependencies but have not yet
cut a release.
In #8159, we introduced a regression that could lead to a deadlock when
shutting down the TUN device. Whilst we did close the channel prior to
waiting for the thread to exit, we failed to notice that _another_
instance of the sender could be alive as part of an internally stored
"sending permit" within the `PollSender` in case another packet is
queued for sending. We need to explicitly call `abort_send` to free that.
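The relevant part of the shutdown sequence looks roughly like this (a sketch, not the actual device code):

```rust
use tokio_util::sync::PollSender;

/// Release any internally reserved sending permit before dropping the sender,
/// otherwise a `Sender` clone stays alive inside the permit.
fn release_sender(sender: &mut PollSender<Vec<u8>>) {
    // Frees the permit acquired by a previous `poll_reserve` (if any).
    sender.abort_send();

    // Now closing/dropping the sender actually lets the receiving side
    // observe channel closure.
    sender.close();
}
```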
Judging from the comment and a prior bug, this shutdown logic has been
buggy before. To further avoid this deadlock, we introduce two changes:
- The worker threads only receive a `Weak` reference to the
`wintun::Session`
- We move all device-related state into a dedicated `TunState` struct
that we can drop prior to joining the threads
The combination of these changes means that all strong references to
channels and the session are definitely dropped without having to wait
for anything. To provide a clean and synchronous shutdown, we wait for
at most 5s on the worker threads. If they haven't exited by then, we log
a warning and exit anyway.
This should greatly reduce the risk of future bugs here because the
session (and thus the WinTUN device) gets shut down in any case, so at
worst we have a few zombie threads around.
Resolves: #8265