Currently, only connlib's UDP sockets for sending and receiving STUN &
WireGuard traffic are protected from routing loops. This was done via
the `Sockets::with_protect` function. Connlib has additional sockets
though:
- A TCP socket to the portal.
- UDP & TCP sockets for DNS resolution via hickory.
Both of these can incur routing loops on certain platforms, which becomes
evident as we try to implement #2667.
To fix this, we generalise the idea of "protecting" a socket via a
`SocketFactory` abstraction. By allowing the different platforms to
provide a specialised `SocketFactory`, anything Linux-based can give
special treatment to the socket before handing it to connlib.
As an additional benefit, this allows us to remove the `Sockets`
abstraction from connlib's API again because we can now initialise it
internally via the provided `SocketFactory` for UDP sockets.
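As a rough illustration of the idea (the trait name and signature below are simplified stand-ins, not the exact connlib API), a platform-specific factory might look like this:

```rust
use std::io;
use std::net::{SocketAddr, UdpSocket};

/// Sketch only: each platform supplies a factory that creates and "protects"
/// sockets (e.g. via `SO_MARK` on Linux or `VpnService.protect` on Android)
/// before connlib ever sends traffic over them.
pub trait SocketFactory<S>: Send + Sync {
    fn bind(&self, local: SocketAddr) -> io::Result<S>;
}

/// Trivial factory for platforms that need no special treatment.
pub struct StdUdpSocketFactory;

impl SocketFactory<UdpSocket> for StdUdpSocketFactory {
    fn bind(&self, local: SocketAddr) -> io::Result<UdpSocket> {
        UdpSocket::bind(local)
    }
}
```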
---------
Signed-off-by: Gabi <gabrielalejandro7@gmail.com>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
When the property-based state machine test was first created, I
envisioned that we could also easily test advancing time. Unfortunately,
the tricky part of advancing time is to correctly encode the _expected_
behaviour as it requires knowledge of all timeouts etc.
Thus, the `Tick` transition has been left lingering and doesn't actually
test much. It is obviously still sampled by the test runner and thus
"wastes" test cases that don't end up exercising anything meaningful
because the time advancements are < 1000ms.
There are plans to test time-related things more thoroughly by implementing
delays between applying `Transmit`s. Until then, we can remove the
`Tick` transition.
Connlib's routing logic and networking code is entirely platform
agnostic. The only platform-specific bit is how we interact with the TUN
device. From connlib's perspective though, all it needs is an interface
for reading and writing. How the device gets initialised and updated is
the client's business.
For the most part, this is the same on all platforms: We call callbacks
and the client updates the state accordingly. The only annoying bit here
is that Android recreates the TUN interface on every update and thus our
old file descriptor is invalid. The current design works around this by
returning the new file descriptor on Android. This is a problematic
design for several reasons:
- It forces the callback handler to finish synchronously, halting
connlib until it is complete.
- The synchronous nature also means we cannot replace the callbacks with
events as events don't have a return value.
To fix this, we introduce a new `set_tun` method on `Tunnel`. This moves
the business of how the `Tun` device is created up to the client. The
clients are already platform-specific so this makes sense. In a future
iteration, we can move all the various `Tun` implementations all the way
up to the client-specific crates, thus co-locating the platform-specific
code.
Initialising `Tun` from the outside surfaces another issue: The routes
are still set via the `Tun` handle on Windows. To fix this, we introduce
a `make_tun` function on `TunDeviceManager` so that it can remember
the interface index on Windows, which lets us move the setting of
routes to `TunDeviceManager`.
This simplifies several of connlib's APIs, which are now infallible.
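A minimal sketch of the resulting flow (all names and signatures here are illustrative stubs, not the real types, which live in the firezone crates):

```rust
use std::io;
use std::net::IpAddr;

struct Tun;
struct TunDeviceManager; // remembers the interface index on Windows
struct Tunnel;

impl TunDeviceManager {
    fn make_tun(&mut self) -> io::Result<Tun> {
        Ok(Tun)
    }

    fn set_routes(&mut self, _v4: Vec<IpAddr>, _v6: Vec<IpAddr>) -> io::Result<()> {
        Ok(())
    }
}

impl Tunnel {
    // Infallible: connlib simply swaps in whatever device the client created.
    fn set_tun(&mut self, _tun: Tun) {}
}

// The client (re-)creates the device, e.g. after Android invalidated the old fd.
fn rebuild_tun(manager: &mut TunDeviceManager, tunnel: &mut Tunnel) -> io::Result<()> {
    let tun = manager.make_tun()?;
    manager.set_routes(Vec::new(), Vec::new())?;
    tunnel.set_tun(tun);

    Ok(())
}
```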
Resolves: #4473.
---------
Co-authored-by: Reactor Scram <ReactorScram@users.noreply.github.com>
Co-authored-by: conectado <gabrielalejandro7@gmail.com>
Currently, `tunnel_test` has some old code that attempted to handle
resource _updates_ as part of adding new ones. That is outdated and
wrong. The test is easier to reason about if we disallow updates to
resources as part of _adding_ a new one.
In production, resource IDs are unique, so this shouldn't actually
happen. At a later point, we can add explicit transitions for updating
an existing resource.
Currently, `tunnel_test` exercises a lot of code paths within connlib
already by adding & removing resources, roaming the client and sending
ICMP packets. Yet, it does all of this with just a single gateway
whereas in production, we are very likely using more than one gateway.
To capture these other code-paths, we now sample between 1 and 3
gateways and randomly assign the added resources to one of them, which
makes us hit the codepaths that select between different gateways.
Most importantly, the reference implementation has barely any knowledge
about those individual connections. Instead, it is implemented in
terms of connectivity to resources.
The `TunDeviceManager` is a component that the leaf-nodes of our
dependency tree need: the binaries. Thus, it is misplaced in the
`connlib-shared` crate which is at the very bottom of the dependency
tree.
This is necessary to allow the `TunDeviceManager` to actually construct
a `Tun` (which currently lives in `firezone-tunnel`).
Related: #5839.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Reactor Scram <ReactorScram@users.noreply.github.com>
Currently, the logging of fields in spans for encapsulate and
decapsulate operations is a bit inconsistent between client and gateway.
Logging the `from` field for every message is actually quite redundant
because most of these logs are emitted within `snownet`'s `Allocation`,
which can add its own span to indicate which relay we are talking to.
For most other operations, it is much more useful to log the connection
ID instead of IPs.
This should make the logs a bit more succinct.
We have several representations of `ResourceDescription` within connlib.
The ones within the `callbacks` module are meant for _presentation_ to
the clients and thus contain additional information like the site
status.
The `From` impls deleted within the PR are only used within tests. We
can rewrite those tests by asserting on the presented data instead.
This is better because it means information about resources only flows
in one direction: From connlib to the clients.
Applying the initial `init` closure may also print logs that are
currently not captured within the corresponding span. By using
`in_scope`, we ensure those logs are also correctly captured in the
corresponding span.
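A minimal sketch of the pattern (the span name and closure here are illustrative):

```rust
fn apply_init<T>(init: impl FnOnce() -> T) -> T {
    // Running the closure via `in_scope` ensures that any logs it emits are
    // attributed to this span instead of leaking outside of it.
    let span = tracing::info_span!("init");

    span.in_scope(init)
}
```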
We are referencing the `tokio` dependency a lot and it makes sense to
ensure that version is tracked only once across the whole workspace.
Extracted out of #5797.
---------
Co-authored-by: Not Applicable <ReactorScram@users.noreply.github.com>
Currently, the type hierarchy within `tunnel_test` is already quite
nested: We have a `Host` that wraps a `SimNode` which wraps a
`ClientState` or `GatewayState`. Additionally, a lot of state that is
actually _per_ client or _per_ gateway is tracked in the root of
`ReferenceState` and `TunnelTest`. That makes it difficult to introduce
multiple gateways / clients to this test.
To fix this, we introduce dedicated `RefClient` and `RefGateway` states.
Those track the expected state of a particular client / gateway.
Similarly, we introduce dedicated `SimClient` and `SimGateway` structs
that track the simulation state by wrapping the corresponding
system-under-test: `ClientState` or `GatewayState`.
This ends up moving a lot of code around but has the great benefit that
all the state is now scoped to a particular instance of a client or a
gateway, paving the way for creating multiple clients & gateways in a
single test.
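Roughly, the new shape looks like this (field and type names are illustrative, not the actual test code):

```rust
use std::collections::BTreeSet;

/// Tracks the *expected* state of one client, as computed by the reference
/// implementation.
struct RefClient {
    expected_resources: BTreeSet<u64>,
}

/// Wraps one system-under-test instance plus everything we observe about it.
struct SimClient<S> {
    sut: S,                         // e.g. `ClientState`
    received_packets: Vec<Vec<u8>>, // per-client, no longer tracked on the test root
}
```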
Closes #5601
It looks like we can hit 100+ Mbps in theory. This covers Wintun, Tokio,
and Windows OS overhead. It doesn't cover the cryptography or anything
in connlib itself.
The code is kinda messy but I'm not sure how to clean it up so I'll just
leave it for review.
This test should fail if there are any regressions in #5598.
It fails if any packet is dropped or if the speed is under 100 Mbps.
```[tasklist]
### Tasks
- [x] Use `ip_packet::make`
- [x] Switch to `cargo bench`
- [x] Extract windows ARM PR
- [x] Clean up wintun.dll install code
- [x] Re-request review
```
With the introduction of a routing table in #5786, we can very easily
introduce an additional relay to `tunnel_test`. In production, we are
always given two relays and thus, this mimics the production setup more
closely.
With the performance improvements of `tunnel_test` in #5786, the
`resource_management` test is now in the hot path of CI runtime. Reducing
the cycles to 50 should cut down overall CI time by ~1 minute as
the Windows builds are among the slowest.
Signed-off-by: Reactor Scram <ReactorScram@users.noreply.github.com>
Co-authored-by: Reactor Scram <ReactorScram@users.noreply.github.com>
This is a bit of a hack because features should never change behaviour.
Unfortunately, we can't use `cfg(test)` here because the proptests live
in a different crate; from their perspective, this crate is compiled with
`cfg(not(test))`.
Our `proptest` feature is really only meant to be activated during
testing so I think this is fine for now.
The benefit is that the test logs are much more terse because proptest
will shrink the IDs to `0`, `1` etc. With the upcoming addition of
multiple gateways and multiple relays, we will have a lot more IDs in
the logs. Thus, it is important that they stay legible.
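For illustration, the feature-gated `Arbitrary` impl could look roughly like this (the `ResourceId` type here is a stand-in for connlib's ID newtypes):

```rust
/// Illustrative stand-in for one of connlib's ID newtypes.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct ResourceId(uuid::Uuid);

#[cfg(feature = "proptest")]
impl proptest::arbitrary::Arbitrary for ResourceId {
    type Parameters = ();
    type Strategy = proptest::strategy::BoxedStrategy<Self>;

    fn arbitrary_with(_: ()) -> Self::Strategy {
        use proptest::prelude::*;

        // Generate IDs from small integers so proptest shrinks them towards 0, 1, ...
        any::<u64>()
            .prop_map(|n| ResourceId(uuid::Uuid::from_u128(u128::from(n))))
            .boxed()
    }
}
```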
By convention, `tests` modules are usually gated behind `#[cfg(test)]` so they
don't end up in production code. Additionally, a `use super::*;` import line
ensures we have access to the parent module, which is usually the one you want
to test.
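A minimal sketch of the convention:

```rust
pub fn add(a: u32, b: u32) -> u32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::*; // gives the tests access to the parent module's items

    #[test]
    fn adds_two_numbers() {
        assert_eq!(add(1, 2), 3);
    }
}
```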
Currently, `tunnel_test` uses a rather naive approach when dispatching
`Transmit`s. In particular, it checks client, gateway and relay
separately whether they "want" a certain packet. In a real network,
these packets are routed based on their IP.
To mimic something similar, we introduce a `Host` abstraction that wraps
each component: client, gateway and relay. Additionally, we introduce a
`RoutingTable` where we can add and remove hosts. With these things in
place, routing a `Transmit` is as easy as looking up the destination IP
in the routing table and dispatching to the corresponding host.
Our hosts are type-safe: client, gateway and relay have different types.
Thus, we abstract over them using a `HostId` in order to know which
host a certain message is for. Following these patches, we can easily
introduce multiple gateways and relays to this test by simply making
more entries in this routing table. This will increase the test coverage
of connlib.
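A stripped-down sketch of the routing idea (the real test uses richer types and IP ranges):

```rust
use std::collections::HashMap;
use std::net::IpAddr;

/// Every host registers the IPs it owns; dispatching a `Transmit` then becomes
/// a lookup of the destination IP.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum HostId {
    Client(u64),
    Gateway(u64),
    Relay(u64),
}

#[derive(Default)]
struct RoutingTable {
    routes: HashMap<IpAddr, HostId>,
}

impl RoutingTable {
    fn add_host(&mut self, host: HostId, ips: impl IntoIterator<Item = IpAddr>) {
        for ip in ips {
            self.routes.insert(ip, host);
        }
    }

    fn host_by_ip(&self, dst: IpAddr) -> Option<HostId> {
        self.routes.get(&dst).copied()
    }
}
```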
Lastly, this patch massively increases the performance of `tunnel_test`.
It turns out that previously, we spent a lot of CPU cycles accessing
"random" IPs from very large iterators. With this patch, we take a
limited range of 100 IPs that we sample from, thus drastically
increasing performance of this test. The configured 1000 testcases
execute in 3s on my machine now (with opt-level 1 which is what we use
in CI).
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Closes #5449
The smoke tests expect `last_crash.dmp` at a fixed path, so in this case
we write the file with a timestamped name, then copy it over
`last_crash.dmp`.
Within `snownet`'s test harness, packets are dispatched in a particular
order and, if none of them match, the packet is assumed to be for the node
directly. We add a debug assert to ensure that the given address is in
fact part of the "local" interfaces that we have configured in the
tests.
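Roughly, the added assertion looks like this (names are illustrative):

```rust
use std::collections::HashSet;
use std::net::SocketAddr;

/// If no peer claimed the packet, it must at least be addressed to one of the
/// interfaces configured on the node in the test.
fn assert_is_local(local_interfaces: &HashSet<SocketAddr>, dst: SocketAddr) {
    debug_assert!(
        local_interfaces.contains(&dst),
        "packet to {dst} is not destined for any configured interface"
    );
}
```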
Closes #5052
On my dev VMs:
- systemd-resolved = 15 ms to flush
- Windows = 600 ms to flush
I tested with the headless Clients on Linux and Windows and it fixes the
issue. On Windows I didn't reproduce the issue with the GUI Client; on
Linux this patch also fixes it for the GUI Client.
Since we only answer `A`, `AAAA` and `PTR` queries for the names we handle,
other record types can behave unexpectedly: using Firezone breaks `TXT`,
`MX` and other queries for the resources we handle.
So this is a bit of a refactor: we now look up a resource and explicitly
return `Some` when there is a record we should be returning (even if
it's empty due to IP exhaustion) or `None` when we should just forward
the query.
This has the added benefit of no longer breaking Bonjour or other
non-standard `PTR` queries.
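A simplified sketch of the new control flow (types and names here are illustrative, not the actual DNS module):

```rust
use std::net::IpAddr;

enum DnsDecision {
    Answer(Vec<IpAddr>), // may be empty, e.g. on proxy-IP exhaustion
    Forward,
}

/// Only answer queries for names we manage and record types we actually
/// resolve; everything else is forwarded upstream untouched.
fn handle_query(is_managed_name: bool, qtype: &str, ips: Vec<IpAddr>) -> DnsDecision {
    if !is_managed_name {
        return DnsDecision::Forward;
    }

    match qtype {
        "A" | "AAAA" | "PTR" => DnsDecision::Answer(ips),
        _ => DnsDecision::Forward, // TXT, MX, ... keep working for our resources
    }
}
```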
Fixes: #5673.
---------
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
Added for clarity when debugging. It used to look like:
```
2024-06-30T00:16:05.718337Z DEBUG firezone_tunnel::dns: No records for github.com, returning NXDOMAIN
```
And now looks like:
```
2024-06-30T00:16:05.718337Z DEBUG firezone_tunnel::dns: No MX records for github.com, returning NXDOMAIN
```
In a previous design of Firezone, relays used to be scoped to a certain
connection. For a while now, this constraint has been lifted and all
connections can use all relays. A related, outdated concern is the idea
of STUN-only servers. Those also used to be assigned on a per-connection
basis.
By removing any use of per-connection relays and STUN-only servers, the
entire `StunBinding` concept is unused code and can thus be deleted.
To push this over the finish line, the `snownet-tests`, which test the
hole-punching functionality, needed to be slightly adapted to make use of
the more recently introduced `Node::update_relays` API.
Resolves: #4749.
Currently, `snownet` still supports this notion of "reconnecting" which
is a mix of resetting some state while keeping the rest. In particular,
we currently retain the `StunBinding` and `Allocation` state. This used
to be important because allocations are bound to the 3-tuple of the
client and thus needed to be kept around in case we weren't actually
roaming.
We always rebind the local UDP sockets upon reconnecting and thus
the 3-tuple always changes anyway. In addition, we always reconnect to
the portal, meaning we receive another `init` message and thus can
actually completely clear the `Node`'s state.
This PR does that and, in the process, rebrands `reconnect` as `reset`,
which now makes more sense.
Related: #5619.
The [HTTP 1.1 RFC](https://datatracker.ietf.org/doc/html/rfc2616) states
that HTTP headers should be US-ASCII. This is not the case when the
macOS Client is run from a host that has a non-English language selected
as its system default due to the way we build the user agent.
This PR fixes that by normalizing how we build the user agent: we select
more granularly which fields compose it instead of just relying on
OS-provided version strings that may contain non-ASCII characters.
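For illustration, a sanitising helper could look roughly like this (the format and field names are made up, not the actual user agent string):

```rust
/// Build the user agent from individually sanitised fields so the header stays
/// US-ASCII regardless of the host's language settings.
fn user_agent(os: &str, os_version: &str, app_version: &str) -> String {
    let ascii = |s: &str| s.chars().filter(char::is_ascii).collect::<String>();

    format!(
        "Firezone/{} ({}; {})",
        ascii(app_version),
        ascii(os),
        ascii(os_version)
    )
}
```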
fixes https://github.com/firezone/firezone/issues/5467
---------
Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
Currently, we use the `tracing-oslog` crate to ingest logs on macOS and
iOS. This crate has a "feature" where it creates so-called "Activities"
for spans. Whilst that may initially sound useful, Apple's UI for
viewing these activities is absolutely useless.
Instead of tinkering around with that, we remove the `tracing-oslog`
crate and let `tracing-subscriber` format our logs first and then only
send a single string to the oslog backend.
Related: #5619.
Oops. It runs the same either way so we definitely don't need all that
RAM to be tied up. The Linux and macOS Clients probably have similar
buffer sizes already.
I tested before and after with CloudFlare's speed test and got roughly
140/12 with latency 50 ms both times. The error bars on speed tests are
pretty wide, but we definitely aren't falling 60 MiB behind on
processing and then catching up.
```[tasklist]
### Tasks
- [x] (failed, can't do it right now) ~~Log if we knowingly drop a lot of packets~~
- [x] Extract constant
- [x] Add comment about not knowing if we drop packets
- [x] Merge
- [ ] (skipped) Test while the CPU is loaded
```
Within each allocation, a client has 4095 channels that it can bind to
different peers. Each channel binding is valid for 10 minutes unless
rebound. Additionally, there is a 5-minute cool-down period after a channel
binding expires before it can be rebound to a different peer.
This patch fixes a bug in snownet where we would have first attempted to
rebind the last bound channel instead of just picking the next unused
one. In the case of a clock drift between client and relay, this caused
unnecessary errors when attempting to rebind channels.
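The gist of the fix, as a sketch (channel bounds and names are illustrative):

```rust
const FIRST_CHANNEL: u16 = 0x4000;
const LAST_CHANNEL: u16 = 0x4FFF;

struct ChannelPicker {
    next_unused: u16,
}

impl ChannelPicker {
    fn new() -> Self {
        Self {
            next_unused: FIRST_CHANNEL,
        }
    }

    /// Always hand out a never-used channel first; only once the whole range
    /// is exhausted do we need to consider rebinding expired channels.
    fn next(&mut self) -> Option<u16> {
        if self.next_unused > LAST_CHANNEL {
            return None;
        }

        let channel = self.next_unused;
        self.next_unused += 1;

        Some(channel)
    }
}
```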
Fixes: #5603.
---------
Co-authored-by: conectado <gabrielalejandro7@gmail.com>
Currently, we are sending each ICE candidate individually from the
client to the gateway and vice versa. This causes a slight delay as to
when each ICE candidate gets added on the remote ICE agent. As a result,
they all start being tested with a slight offset which causes "endpoint
hopping" whenever a connection expires as they expire just after each
other.
In addition, sending multiple messages to the portal causes unnecessary
load when establishing connections.
Finally, with #5283 we started **not** adding the server-reflexive
candidate to the local ICE agent. Because we talk to multiple relays, we
detect the same server-reflexive candidate multiple times if we are
behind a non-symmetric NAT. Not adding the server-reflexive candidate to
the ICE agent defeated our de-duplication strategy here, which means we
currently send the same candidate multiple times to a peer, causing
additional, unnecessary load.
All of this can be mitigated by batching all our ICE candidates
together into one message.
Resolves: #3978.
Whilst it has been helpful for finding issues such as #5611, having these
logs on `warn` spams the end user too much and creates a false sense
that things might not be working, as there can be a variety of reasons
why packets cannot be routed.
Currently, we refresh DNS mappings when:
* We translate a packet for the first time
* There are no more incoming packets for 120 seconds
* There is at least 1 outgoing packet in the last 10 seconds
The idea was to coordinate with conntrack somehow, to expire DNS
translation at the point where the NAT session of the OS stops being
valid. That way, if the triggered DNS refresh changes the resolved IPs
it would never kill the underlying connection.
However, TCP sessions by default can last for up to 5 days! And I have
no idea how long for ICMP. To prevent killing these connections, we
assume that for TCP and ICMP packets will elicit a response within 1s.
The DNS refresh for a translation mapping that hasn't seen any responses
is thus delayed by 1s after the last packet has been sent out.
To get an idea of how this works, you can imagine it like this:
|last incoming packet|------ 120 seconds + x seconds ------|outgoing packet|------ 1 second ------|dns refresh|
However, there is another case where the DNS refresh is triggered. Here, the
same outgoing packet both starts the refresh delay and counts as having been
sent within the last 10 seconds:
|last incoming packet|------ 111 seconds ------|outgoing packet|------ 9 seconds ------|dns refresh|
The unit tests should also make clear when we want to trigger a DNS
refresh and when we don't.
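A condensed sketch of the timing rules described above (constants and names are illustrative):

```rust
use std::time::{Duration, Instant};

const INCOMING_IDLE: Duration = Duration::from_secs(120);
const OUTGOING_RECENT: Duration = Duration::from_secs(10);
const RESPONSE_GRACE: Duration = Duration::from_secs(1);

/// Decides whether a DNS refresh is due for one translation mapping, given
/// when we last saw a packet in each direction.
fn refresh_due(now: Instant, last_incoming: Instant, last_outgoing: Instant) -> bool {
    let incoming_idle = now.duration_since(last_incoming) >= INCOMING_IDLE;
    let outgoing_recent = now.duration_since(last_outgoing) <= OUTGOING_RECENT;
    // Give TCP / ICMP a short window to elicit a response before we refresh.
    let grace_elapsed = now.duration_since(last_outgoing) >= RESPONSE_GRACE;

    incoming_idle && outgoing_recent && grace_elapsed
}
```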
---------
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
Closes #5589. Refs #5571
Improves upload speeds on my Windows 11 VM from 2 Mbps to 10.5 Mbps.
On the resource-constrained VM it improved from 3 to 7 Mbps.
```[tasklist]
### Tasks
- [x] Open for review
- [x] Manual test on resource-constrained VM
- [x] Run 5x replication steps from #5571 and make sure it doesn't deadlock again
- [x] Merge
- [ ] https://github.com/firezone/firezone/issues/5601
```
Sorted by decreasing speed, M = macOS host, W = Windows guest in
Parallels, RC = Resource-constrained Windows guest in VirtualBox:
- M, Internet - 16 Mbps
- W, Internet - 13 Mbps
- M, Firezone - 12 Mbps
- RC, Internet - 12 Mbps
- W, Firezone, after this PR - 10.5 Mbps
- RC, Firezone, after this PR - 8.5 Mbps
- RC, Firezone, before this PR - 4 Mbps
- W, Firezone, before this PR - 2 Mbps
So it's not perfect but the worst part is fixed.
The slow upload speeds were probably a regression from #5571. The MPSC
channel only has a few spots in it, so if connlib doesn't pick up every
packet immediately (which would be impossible under load), we drop
packets. I measured 25% packet drops in an earlier commit.
I first tried increasing the channel size from 5 to 64, and that worked.
But this solution is simpler. I switched back to `blocking_send` so if
connlib isn't clearing the MPSC channel, Wintun will just queue up
packets in its internal ring buffers, and we aren't responsible for
buffering.
Getting rid of `blocking_send` was a defense-in-depth thing to fix the
deadlock yesterday, but we still close the MPSC channel inside
`Tun::drop`, and I confirmed in a manual test that this will kick the
worker thread out of `blocking_send`, so the deadlock won't come back.
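As a minimal illustration of the back-pressure behaviour (this is not the actual Wintun worker code):

```rust
use tokio::sync::mpsc;

/// The producer thread parks in `blocking_send` when the channel is full
/// instead of dropping packets; it only errors (and exits) once the receiving
/// side has been dropped, which is what unblocks it on shutdown.
fn forward_packets(packets: impl Iterator<Item = Vec<u8>>, tx: mpsc::Sender<Vec<u8>>) {
    for packet in packets {
        if tx.blocking_send(packet).is_err() {
            break;
        }
    }
}
```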
We define a connection as idle if we haven't sent or received any
packets in the last 5 minutes. From `snownet`'s perspective, keep-alives
sent by upper layers (like TCP keep-alives) must be honored and thus
outgoing as well as incoming packets are accounted for.
If the underlying connection breaks, we will hit an ICE timeout which is
an implementation detail of `snownet`. The packets tracked here are IP
packets that the user wants to send / receive via the tunnel. Similarly,
WireGuard's keep-alives do not update these timestamps and thus don't
mark a connection as non-idle.
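As a sketch of the rule (constants and names are illustrative):

```rust
use std::time::{Duration, Instant};

const IDLE_AFTER: Duration = Duration::from_secs(5 * 60);

/// A connection counts as idle once the application has neither sent nor
/// received tunnelled IP packets for 5 minutes. STUN and WireGuard
/// keep-alives deliberately do not update these timestamps.
fn is_idle(now: Instant, last_packet_sent: Instant, last_packet_received: Instant) -> bool {
    now.duration_since(last_packet_sent) >= IDLE_AFTER
        && now.duration_since(last_packet_received) >= IDLE_AFTER
}
```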
---------
Co-authored-by: Jamil Bou Kheir <jamilbk@users.noreply.github.com>