If we received a newly nominated socket from `str0m` whilst our
connection was in idle mode, we mistakenly did not apply it and kept
using the old one. ICE would still be functioning in this case because
`str0m` would have updated its internal state, but we would be sending
packets into Nirvana.
I don't think that this is likely to be hit in production though as it
would be quite unusual to receive a new nomination whilst the connection
was completely idle.
When encrypting IP packets, `snownet` needs to prepare a buffer where
the encrypted packet is going to end up. Depending on whether we are
sending data via a relayed or a direct connection, this buffer needs to
be offset by 4 bytes to allow for the 4-byte channel-data header of the
TURN protocol.
At present, we always encrypt the packet first and then, on demand, move
it 4 bytes to the left if we **don't** need to send it via a relay.
Internally, this translates to a `memmove` instruction which actually
turns out to be very cheap (I couldn't measure a speed difference
between this and `main`).
All of this code has grown historically though, so I figured it is
better to clean it up a bit: first evaluate whether we have a direct or
relayed connection and, based on that, write the encrypted packet
directly to the front of the buffer or offset it by 4 bytes.
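A minimal sketch of the new structure (names like `encrypt_into` and `Transport` are hypothetical, not the actual `snownet` API):

```rust
/// The channel-data header of the TURN protocol: 2 bytes channel number + 2 bytes length.
const CHANNEL_DATA_HEADER_LEN: usize = 4;

enum Transport {
    Direct,
    Relayed,
}

/// Encrypts `packet` at the correct offset within `buf` right away,
/// making the later `memmove` unnecessary.
fn encrypt_into(transport: Transport, packet: &[u8], buf: &mut [u8]) -> usize {
    let offset = match transport {
        Transport::Direct => 0,
        // Leave room so the channel-data header can be prepended in place.
        Transport::Relayed => CHANNEL_DATA_HEADER_LEN,
    };

    let len = encrypt(packet, &mut buf[offset..]);

    offset + len
}

/// Stand-in for the actual WireGuard encryption.
fn encrypt(_packet: &[u8], _dst: &mut [u8]) -> usize {
    unimplemented!()
}
```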
Profiling has shown that using a spinlock-based buffer pool is
marginally (~1%) faster than the mutex-based one because it resolves
contention quicker.
Profiling has shown that checking whether the log level is enabled is
actually more expensive than checking whether the packet is a DNS
packet. This improves performance by about 3%.
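Presumably the cheaper check should therefore run first; a minimal sketch of that ordering (`is_dns_packet` is a hypothetical stand-in for the actual detection logic):

```rust
fn maybe_log_dns(packet: &[u8]) {
    // Cheap check first: bail out for the vast majority of packets
    // before consulting the comparatively expensive log-level check.
    if !is_dns_packet(packet) {
        return;
    }

    if tracing::enabled!(tracing::Level::TRACE) {
        tracing::trace!("DNS packet: {} bytes", packet.len());
    }
}

/// Stand-in for the actual DNS detection (e.g. matching on port 53).
fn is_dns_packet(_packet: &[u8]) -> bool {
    false
}
```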
When we are presented with an invalid peer certificate, there is no
reason to retry the connection: it is unlikely to fix itself. Plus, the
certificate may get (or already be) cached, in which case a restart of
the application is necessary.
Resolves: #9944
This was exposed by #9846. It is being added here as a dedicated PR
because the compatibility tests would fail, or at least be flaky, for
the latest client release, so we cannot add the integration test right away.
When receiving an `init` message from the portal, we will now revoke all
authorizations not listed in the `authorizations` list of the `init`
message.
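A minimal sketch of that reconciliation, with hypothetical types (`ClientId`, `ResourceId`) standing in for the real ones:

```rust
use std::collections::{HashMap, HashSet};

type ClientId = u64; // hypothetical
type ResourceId = u64; // hypothetical

struct Gateway {
    /// Which resources each connected Client is currently authorized for.
    authorizations: HashMap<ClientId, HashSet<ResourceId>>,
}

impl Gateway {
    /// On `init`, drop every authorization that the portal no longer lists.
    fn handle_init(&mut self, authorized: &HashSet<(ClientId, ResourceId)>) {
        for (client, resources) in self.authorizations.iter_mut() {
            resources.retain(|resource| authorized.contains(&(*client, *resource)));
        }

        self.authorizations.retain(|_, resources| !resources.is_empty());
    }
}
```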
We (partly) test this by introducing a new transition in our proptests
that de-authorizes a certain resource whilst the Gateway is simulated to
be partitioned. It is difficult to test that we cannot make a connection
once that has happened because we would have to simulate a malicious
client that knows about resources / connections or ignores the "remove
resource" message.
Testing this is deferred to a dedicated task. We do test that we hit the
code path of revoking the resource authorization, and because the other
resources keep working, we also test that we are at least not revoking
the wrong ones.
Resolves: #9892
From Sentry reports and user-submitted logs, we know that it is possible
for Client and Gateway to de-sync in regards to what each other's public
key is. In such a scenario, ICE will succeed in making a connection but
`boringtun` will fail to handshake a tunnel. By default, `boringtun`
tries for 90s to handshake a session before it gives up and expires it.
In Firezone, the ICE agent takes care of establishing connectivity
whereas `boringtun` itself just encrypts and decrypts packets. As such,
if ICE is working, we know that packets aren't getting lost but that
instead there must be some other issue as to why we cannot establish a session.
To improve the UX in these error cases, we reduce the rekey-attempt-time
to 15s. This roughly matches our ICE timeout. Those 15s count from the
moment we send the first handshake, which is just after ICE completes.
Thus we can be sure that after at most 15s, we either have a working
WireGuard session or the connection gets cleaned up.
Related: #9890
Related: #9850
When we invalidate or discard an allocation, it may happen that a relay
still sends channel-data messages to us. We don't recognize those and
will therefore attempt to parse them as WireGuard packets, ultimately
ending in a "Packet has unknown format" error.
To avoid this, we check whether the packet is a valid channel-data
message even if we presently don't have an allocation on the relay that
is sending us the packet. In those cases, we can stop processing the
packet, thus preventing these errors from being logged.
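A minimal sketch of such a check (RFC 5766 reserves the channel-number range 0x4000-0x7FFF, so the first byte of a channel-data message is always in 0x40-0x7F):

```rust
/// Returns `true` if the datagram looks like a TURN channel-data message.
///
/// Channel-data messages start with a 2-byte channel number in the range
/// 0x4000..=0x7FFF, followed by a 2-byte length field.
fn is_channel_data(datagram: &[u8]) -> bool {
    if datagram.len() < 4 {
        return false;
    }

    matches!(datagram[0], 0x40..=0x7F)
}
```

If this returns `true` but we don't have a matching allocation, we can silently discard the datagram instead of feeding it into the WireGuard parser.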
When a connection is in idle mode, it only sends a STUN request every 25
seconds. If the Client disconnects, e.g. due to a network partition, it
may send a new connection intent later. If the Gateway's connection is
still around at that point because it was in idle mode, the Gateway
won't send any candidates to the remote, making the Client's connection
fail with "no candidates received".
To alleviate this, we wake a connection out of idle mode every time it
is being upserted. This ensures that the connection will fail within 15s
IF the above scenario happens, allowing the Client to reconnect within a
much shorter time-frame.
Note that attempting to repair such a connection is likely pointless. It
is much safer to discard it and let both sides establish a new
connection.
Related: #9862
Whilst looking through the auth module of the relay, I noticed that we
unnecessarily convert back and forth between expiry timestamps and
username formats when we could just be using the already parsed version.
As per the WireGuard paper, `boringtun` tries to handshake with the
remote peer for 90s before it gives up. This timeout is important
because when a session is discarded due to e.g. missing replies,
WireGuard attempts to handshake a new session. Without this timeout, we
would then try to handshake a session forever.
Unfortunately, `boringtun` does not distinguish a missing handshake
response from a bad one. Decryption errors whilst decoding a handshake
response are simply passed up to the upper layer, in our case `snownet`.
I am not sure how we can actually fail to decrypt a handshake but the
pattern we are seeing in customer logs is that this happens over and
over again, so there is no point in having `boringtun` retry the
handshake. Therefore, we immediately fail the connection when this
happens.
Failed connections are immediately removed, triggering the Client to
send a new connection intent to the portal. Such a new connection intent
will then sync up the state between Client and Gateway so both of them
use the most recent public key.
Resolves: #9845
In the DNS resource NAT table, we track parts of the layer 4 protocol of
the connection in order to map packets back to the correct proxy IP in
case multiple DNS names resolve to the same real IP. The involvement of
layer 4 means we need to perform some packet inspection in case we
receive ICMP errors from an upstream router.
Presently, the only ICMP error we handle here is destination
unreachable. Those are generated e.g. when we are trying to contact an
IPv6 address but we don't have an IPv6 egress interface. An additional
error that we want to handle here is "time exceeded":
Time exceeded is sent when the TTL of a packet reaches 0. Typically,
TTLs are set high enough such that the packet makes it to its
destination. When using tools such as `tracepath` however, the TTL is
specifically only incremented one-by-one in order to resolve the exact
hops a packet is taking to a destination. Without handling the time
exceeded ICMP error, using `tracepath` through Firezone is broken
because the packets get dropped at the DNS resource NAT.
With this PR, we generalise the functionality of detecting destination
unreachable ICMP errors to also handle time-exceeded errors, allowing
tools such as `tracepath` to somewhat work:
```
❯ sudo docker compose exec --env RUST_LOG=info -it client /bin/sh -c 'tracepath -b example.com'
1?: [LOCALHOST] pmtu 1280
1: 100.82.110.64 (100.82.110.64) 0.795ms
1: 100.82.110.64 (100.82.110.64) 0.593ms
2: example.com (100.96.0.1) 0.696ms asymm 45
3: example.com (100.96.0.1) 5.788ms asymm 45
4: example.com (100.96.0.1) 7.787ms asymm 45
5: example.com (100.96.0.1) 8.412ms asymm 45
6: example.com (100.96.0.1) 9.545ms asymm 45
7: example.com (100.96.0.1) 7.312ms asymm 45
8: example.com (100.96.0.1) 8.779ms asymm 45
9: example.com (100.96.0.1) 9.455ms asymm 45
10: example.com (100.96.0.1) 14.410ms asymm 45
11: example.com (100.96.0.1) 24.244ms asymm 45
12: example.com (100.96.0.1) 31.286ms asymm 45
13: no reply
14: example.com (100.96.0.1) 303.860ms asymm 45
15: no reply
16: example.com (100.96.0.1) 135.616ms (This broken router returned corrupted payload) asymm 45
17: no reply
18: example.com (100.96.0.1) 161.647ms asymm 45
19: no reply
20: no reply
21: no reply
22: example.com (100.96.0.1) 238.066ms reached
Resume: pmtu 1280 hops 22 back 45
```
We say "somewhat work" because due to the NAT that is in place for DNS
resources, the output does not disclose the intermediary hops beyond the
Gateway.
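For reference, both error types can be handled uniformly: "destination unreachable" (type 3) and "time exceeded" (type 11) both carry the offending IP header plus the first 8 bytes of its payload after a 4-byte rest-of-header field, which is enough to recover the layer-4 information for the NAT lookup. A rough sketch (ICMPv4 only, raw offsets for illustration; the real code presumably uses its packet-parsing library):

```rust
/// Start of the embedded (original) packet inside an ICMPv4 error message:
/// 1 byte type + 1 byte code + 2 bytes checksum + 4 bytes rest-of-header.
const EMBEDDED_PACKET_OFFSET: usize = 8;

const ICMP_DEST_UNREACHABLE: u8 = 3;
const ICMP_TIME_EXCEEDED: u8 = 11;

/// Extracts the embedded original packet from an ICMPv4 error if it is one
/// of the error types we translate back through the DNS resource NAT.
fn embedded_packet(icmp: &[u8]) -> Option<&[u8]> {
    let ty = *icmp.first()?;

    if !matches!(ty, ICMP_DEST_UNREACHABLE | ICMP_TIME_EXCEEDED) {
        return None;
    }

    icmp.get(EMBEDDED_PACKET_OFFSET..)
}
```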
---------
Co-authored-by: Antoine Labarussias <antoinelabarussias@gmail.com>
The latest version of str0m includes a fix for an issue that would
result in an immediate ICE timeout if a remote candidate was added prior
to a local candidate. We mitigated this in #9793 to make Firezone
overall more resilient towards sudden changes in the ICE connection state.
As a defense-in-depth measure, we also fixed this issue in str0m by not
transitioning to `Disconnected` if we haven't even formed any candidate
pairs yet.
Diff:
2153bf0385...3d6e3d2f27
Rust 1.88 has been released and brings with it quite an exciting
feature: let-chains! It allows us to mix and match `if` and `let`
expressions, often reducing the "right-drift" of the relevant code and
making it easier to read.
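For illustration (made-up types, not actual connlib code; let-chains require the 2024 edition):

```rust
use std::collections::HashMap;

struct Connection {
    idle: bool,
}

// Before let-chains: nested `if let`s cause right-drift.
fn old(connections: &HashMap<u64, Connection>, id: u64) {
    if let Some(conn) = connections.get(&id) {
        if conn.idle {
            println!("connection {id} is idle");
        }
    }
}

// With Rust 1.88 let-chains: `let` bindings and boolean conditions in one `if`.
fn new(connections: &HashMap<u64, Connection>, id: u64) {
    if let Some(conn) = connections.get(&id)
        && conn.idle
    {
        println!("connection {id} is idle");
    }
}
```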
Rust 1.88 also comes with a new clippy lint that warns when creating a
mutable reference from an immutable pointer. Attempting to fix this
revealed that this is exactly what we are doing in the eBPF kernel.
Unfortunately, it doesn't seem to be possible to design this in a way
that is accepted both by the borrow-checker AND by the eBPF verifier.
Hence, we simply make the function `unsafe` and document for the
programmer what needs to be upheld.
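The shape of the result looks roughly like this (a sketch, not the actual eBPF helper):

```rust
/// Reinterprets an immutable pointer as a mutable reference.
///
/// # Safety
///
/// The caller must guarantee that
/// - the pointer is non-null, aligned and points to a valid, live `T`, and
/// - no other reference (shared or mutable) to the same memory is in use
///   while the returned reference is alive.
unsafe fn assume_mut<'a, T>(ptr: *const T) -> &'a mut T {
    // SAFETY: upheld by the caller as per the contract above.
    unsafe { &mut *(ptr as *mut T) }
}
```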
With this patch, we sample a list of DNS resources on each test run and
create a "TCP service" for each of their addresses. Using this list of
resources, we then change the `SendTcpPayload` transition to
`ConnectTcp` and establish TCP connections using `smoltcp` to these
services.
For now, we don't send any data on these connections but we do set the
keep-alive interval to 5s, meaning `smoltcp` itself will keep these
connections alive. We also set the timeout to 30s and after each
transition in a test-run, we assert that all TCP sockets are still in
their expected state:
- `ESTABLISHED` for most of them.
- `CLOSED` for all sockets where we ended up sampling an IPv4 address
but the DNS resource only supports IPv6 addresses (or vice-versa). In
these cases, we use the ICMP error sent by the Gateway to assert that
the socket is `CLOSED`. Unfortunately, `smoltcp` currently does not
handle ICMP messages for its sockets, so we have to call `abort`
ourselves (see the sketch below).
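A minimal sketch of the socket configuration and the per-transition assertion, assuming `smoltcp`'s TCP socket API (`set_keep_alive`, `set_timeout`, `state`):

```rust
use smoltcp::socket::tcp;
use smoltcp::time::Duration;

/// Keep-alives every 5s keep the connection alive without us sending data;
/// a 30s timeout closes it if the tunnel is broken for too long.
fn configure(socket: &mut tcp::Socket) {
    socket.set_keep_alive(Some(Duration::from_secs(5)));
    socket.set_timeout(Some(Duration::from_secs(30)));
}

/// Asserted after every transition of the proptest.
fn assert_socket_state(socket: &tcp::Socket, expect_unreachable: bool) {
    if expect_unreachable {
        // The Gateway answered with an ICMP error; `smoltcp` doesn't act on it,
        // so the test harness calls `abort` itself and expects `CLOSED` here.
        assert_eq!(socket.state(), tcp::State::Closed);
    } else {
        assert_eq!(socket.state(), tcp::State::Established);
    }
}
```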
Overall, this should assert that regardless of whether we roam networks,
switch relays or do other kinds of things with the underlying
connection, the tunneled TCP connection stays alive.
In order to make this work, I had to tweak the timeouts when we are
on-demand refreshing allocations. This only happens in one particular
case: When we are being given new relays by the portal, we refresh all
_other_ relays to make sure they are still present. In other words, all
relays that we didn't remove and didn't just add but still had in-memory
are refreshed. This is important for cases where we are
network-partitioned from the portal whilst relays are deployed or reset
their state otherwise. Instead of the previous 8s max elapsed time of
the exponential backoff like we have for other requests, we now only
use a single message with a 1s timeout there. With the increased ICE
timeout of 15s, a TCP connection with a 30s timeout would otherwise not
survive such an event. This is because it takes the above-mentioned 8s
for us to remove a non-functioning relay, all whilst trying to establish
a new connection (which then also incurs its own ICE timeout).
With the reduced timeout on the on-demand refresh of 1s, we detect the
disappeared relay much quicker and can immediately establish a new
connection via one of the new ones. As always with reduced timeouts,
this can create false-positives if the relay doesn't reply within 1s for
some reason.
Resolves: #9531
When defining a resource, a Firezone admin can define traffic filters to
only allow traffic on certain TCP and/or UDP ports and/or restrict
traffic on the ICMP protocol.
Presently, when a packet is filtered out on the Gateway, we simply drop
it. Dropping packets means the sending application can only react to
timeouts and has no other means of error handling. ICMP was conceived to
deal with these kinds of situations. In particular, the "destination
unreachable" type has a dedicated code for filtered packets:
"Communication administratively prohibited".
Instead of just dropping the disallowed packet, we now send back an
ICMP error with this particular code set, thus informing the sending
application that the packet did not get lost but was in fact not routed
for policy reasons.
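The relevant codes, for reference (IPv4: type 3 "destination unreachable", code 13 "communication administratively prohibited"; IPv6: type 1, code 1). The sketch below only shows the constants; the actual Gateway presumably builds the reply with its packet library:

```rust
/// ICMPv4 "destination unreachable".
const ICMPV4_DEST_UNREACHABLE: u8 = 3;
/// ICMPv4 code "communication administratively prohibited" (RFC 1812).
const ICMPV4_ADMIN_PROHIBITED: u8 = 13;

/// ICMPv6 "destination unreachable".
const ICMPV6_DEST_UNREACHABLE: u8 = 1;
/// ICMPv6 code "communication with destination administratively prohibited".
const ICMPV6_ADMIN_PROHIBITED: u8 = 1;

/// ICMP type and code to answer a packet that a traffic filter rejected.
fn filtered_packet_reply(is_ipv6: bool) -> (u8, u8) {
    if is_ipv6 {
        (ICMPV6_DEST_UNREACHABLE, ICMPV6_ADMIN_PROHIBITED)
    } else {
        (ICMPV4_DEST_UNREACHABLE, ICMPV4_ADMIN_PROHIBITED)
    }
}
```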
When setting a traffic filter that does not allow TCP traffic,
attempting to `curl` such a resource now results in the following:
```
❯ sudo docker compose exec --env RUST_LOG=info -it client /bin/sh -c 'curl -v example.com'
* Host example.com:80 was resolved.
* IPv6: fd00:2021:1111:8000::, fd00:2021:1111:8000::1, fd00:2021:1111:8000::2, fd00:2021:1111:8000::3
* IPv4: 100.96.0.1, 100.96.0.2, 100.96.0.3, 100.96.0.4
* Trying [fd00:2021:1111:8000::]:80...
* connect to fd00:2021:1111:8000:: port 80 from fd00:2021:1111::1e:7658 port 34560 failed: Permission denied
* Trying [fd00:2021:1111:8000::1]:80...
* connect to fd00:2021:1111:8000::1 port 80 from fd00:2021:1111::1e:7658 port 34828 failed: Permission denied
* Trying [fd00:2021:1111:8000::2]:80...
* connect to fd00:2021:1111:8000::2 port 80 from fd00:2021:1111::1e:7658 port 44314 failed: Permission denied
* Trying [fd00:2021:1111:8000::3]:80...
* connect to fd00:2021:1111:8000::3 port 80 from fd00:2021:1111::1e:7658 port 37628 failed: Permission denied
* Trying 100.96.0.1:80...
* connect to 100.96.0.1 port 80 from 100.66.110.26 port 53780 failed: Host is unreachable
* Trying 100.96.0.2:80...
* connect to 100.96.0.2 port 80 from 100.66.110.26 port 60748 failed: Host is unreachable
* Trying 100.96.0.3:80...
* connect to 100.96.0.3 port 80 from 100.66.110.26 port 38378 failed: Host is unreachable
* Trying 100.96.0.4:80...
* connect to 100.96.0.4 port 80 from 100.66.110.26 port 49866 failed: Host is unreachable
* Failed to connect to example.com port 80 after 9 ms: Could not connect to server
* closing connection #0
curl: (7) Failed to connect to example.com port 80 after 9 ms: Could not connect to server
```
In order to better track how well our `ENOBUFS` mitigation is working,
we should log the use of our feature flag to PostHog. This will give us
some stats on how often this is happening. That, combined with the lack
of error reports, should give us good confidence in permanently enabling
this behaviour.
When a packet gets filtered because we are unable to evaluate the source
protocol (i.e. TCP/UDP/ICMP), the current error message misleadingly
says that the packet got filtered because the protocol is not supported.
The truth however is that we were never able to apply the filter in the
first place. This is a subtle difference that is quite important when
debugging filtered packets. To improve this, we add an error message to
the stack here.
Firezone uses ICMP errors to signal to client applications that e.g. a
certain IP is not reachable. This happens for example if a DNS resource
only resolves to IPv4 addresses yet the client application attempted to
use an IPv6 proxy address to connect to it.
In the presence of traffic filters for such a resource that do _not_
allow ICMP, we currently filter out these ICMP errors because - well -
ICMP traffic is not allowed! However, even if ICMP traffic is allowed,
we would fail to evaluate this filter because the ICMP error packet is
not an ICMP echo reply and therefore doesn't have an ICMP identifier. We
require this in the DNS resource NAT to identify "connections" and NAT
them correctly. The same L4 component is used to evaluate the traffic
filters.
ICMP errors are critical to many usage scenarios and algorithms like
happy-eyeballs. Dropping them usually results in weird behaviour as
client applications can then only react to timeouts.
Socket APIs across operating systems vary in how they handle
back-pressure. In most cases, a non-blocking socket should return
`EWOULDBLOCK` when it cannot send a given datagram and would have to
block to wait for resources to free up.
It appears that macOS doesn't always behave like that. In particular, we
are seeing error logs from a few users where sending a datagram fails
with
> No buffer space available (os error 55)
Digging through `libc`, I've found that this error is known as `ENOBUFS`
[0].
There are reports on the Apple developer forum [1] that recommend
retrying when this error happens. It is however unclear whether it is
entirely safe to map this error to `EWOULDBLOCK`. Other non-blocking
event-loop implementations [2] appear to do that but we don't know
whether it is fully correct.
At present, Firezone's behaviour here is to drop the packet. This means
the host networking stack has to fall back to running into a timeout and
re-sending the packet. This very likely negatively impacts the UX for
the users hitting this.
In order to validate this assumption, we implement a feature flag. This
allows us to ship this code but switch back to the old behaviour, should
it negatively impact how Firezone behaves. In particular, if the
assumption that mapping `ENOBUFS` to `EWOULDBLOCK` is safe turns out to
be wrong and `kqueue` does in fact not signal readiness when more
buffers are available, then we may have missing wake-ups, which would
lead to a further delay in datagrams being sent.
[0]:
8e6f36c6ba/src/unix/bsd/apple/mod.rs (L2998)
[1]: https://developer.apple.com/forums/thread/42334
[2]:
aac866f399/src/unix/stream.c (L820)
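A minimal sketch of that mapping under the above assumption (feature-flag plumbing omitted):

```rust
use std::io;

/// Maps macOS' `ENOBUFS` to `WouldBlock` so the event loop waits for the
/// socket to become writable again instead of dropping the datagram.
///
/// Assumption: `kqueue` reports writability once buffer space frees up.
fn map_send_error(e: io::Error) -> io::Error {
    if cfg!(target_os = "macos") && e.raw_os_error() == Some(libc::ENOBUFS) {
        return io::Error::from(io::ErrorKind::WouldBlock);
    }

    e
}
```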
When receiving UDP packets that we cannot decode, we log an error. In
order to identify whether we might have bugs in our decoding logic, we
now also print the hex-encoding of the packet for further analysis on
DEBUG.
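Roughly (assuming the `hex` crate and `tracing`):

```rust
fn log_undecodable_packet(datagram: &[u8]) {
    // The hex dump only shows up when DEBUG logging is enabled.
    tracing::debug!(packet = %hex::encode(datagram), "Failed to decode packet");
}
```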
At present, and as a result of how `connlib` evolved, we still implement
a `Poll`-based function for receiving data on our UDP socket. Ever since
we moved to dedicated threads for the UDP socket, we can directly block
on receiving datagrams and don't have to poll the socket.
This simplifies the implementation a fair bit. Additionally, it made me
realise that we currently don't expose any errors on the UDP socket.
Likely, those will be ephemeral, but surfacing them is still better than
completely silencing them.
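A sketch of the resulting receive loop on the dedicated thread (names are illustrative):

```rust
use std::net::{SocketAddr, UdpSocket};
use std::sync::mpsc::Sender;

fn udp_recv_loop(socket: UdpSocket, tx: Sender<(Vec<u8>, SocketAddr)>) {
    let mut buf = vec![0u8; 65536];

    loop {
        // Blocking receive: no polling needed now that this runs on its own thread.
        match socket.recv_from(&mut buf) {
            Ok((len, from)) => {
                if tx.send((buf[..len].to_vec(), from)).is_err() {
                    return; // Receiver is gone; shut the thread down.
                }
            }
            // Previously these errors were silently swallowed; surface them instead.
            Err(e) => tracing::warn!("Failed to receive UDP datagram: {e}"),
        }
    }
}
```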
When we shipped the feature of optimistic server-reflexive candidates,
we failed to add a check that address and base are of the same IP
version before combining them. This is not harmful but creates
unnecessary noise.
When we create a new connection, we seed the local ICE agent with all
known local candidates, i.e. host addresses and allocations on relays.
Server-reflexive candidates are never added to the local agent because
you cannot send directly from a server-reflexive address. Instead, an
agent sends from the _base_ of a server-reflexive candidate, which in
turn is known as a host candidate.
The server-reflexive candidate is however signaled to the remote so it
can try and send packets to it. Those will then be mapped by the NAT to
our host candidate.
In case we have just performed a network reset, our own server-reflexive
candidate may not be known yet and therefore the seeding doesn't add any
candidates. With no candidates being seeded, we also can't signal them
to the remote.
For candidates discovered later in this process, the signalling happens
as part of adding them to the local agent. Because server-reflexive
candidates are not added to the local agent, we currently miss out on
signaling those to the remote IF they weren't already present when the
ICE agent got created.
This scenario can happen right after a network reset. In practice, it
shouldn't be much of an issue though. As soon as we start sending from
our host candidate, the remote will create a peer-reflexive candidate
for it. It is however cleaner to directly send the server-reflexive
candidate once we discover it.
When Firezone detects that the user is switching networks, we perform an
internal reset where we clear all connections and also all local
candidates. As part of the reset, we then send STUN requests to our
relays to re-discover our host and server-reflexive candidates. In this
scenario, the Gateway is still connected to its network and is therefore
able to send its candidates as soon as it receives the connection intent
from the portal.
This opens us up to the following race condition which leads to a
false-positive "ICE timeout":
1. Client roams network and clears all local state.
2. Client sends STUN binding requests to relays.
3. Client initiates a new connection.
4. Gateway acknowledges connection.
5. Client creates new ICE agent and attempts to seed it with local
candidates. We don't have a response from the relays yet and therefore
don't have any local candidates.
6. Client receives remote candidates and adds them to the agent.
7. ICE agent is unable to form pairs and therefore concludes that it is
disconnected.
8. We treat the disconnected event as a connection failure and clear the
connection.
9. Relays respond to STUN binding requests but we cannot add the new
candidates to the connection because it is already cleared.
The ICE spec states that after an agent transitions into the
"disconnected" state, it may transition back to "connected" if e.g. new
candidates are added as those allow the forming of new pairs. In
general, it is recommended to not treat "disconnected" as a permanent
state. To honor this recommendation, we introduce a 2s grace-period in
which we can recover from such a "disconnected" state.
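A minimal sketch of such a grace period (hypothetical structure, not the actual connection state machine):

```rust
use std::time::{Duration, Instant};

const DISCONNECT_GRACE_PERIOD: Duration = Duration::from_secs(2);

#[derive(Default)]
struct IceState {
    disconnected_since: Option<Instant>,
}

impl IceState {
    fn on_disconnected(&mut self, now: Instant) {
        self.disconnected_since.get_or_insert(now);
    }

    /// Called when e.g. new candidates allowed the agent to form pairs again.
    fn on_connected(&mut self) {
        self.disconnected_since = None;
    }

    /// Only treat the connection as failed once the grace period has elapsed.
    fn is_failed(&self, now: Instant) -> bool {
        self.disconnected_since
            .is_some_and(|since| now.duration_since(since) >= DISCONNECT_GRACE_PERIOD)
    }
}
```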
HTTP headers only reliably support ASCII characters. We include
information like the user's kernel build name in there and therefore
need to strip non-ASCII characters from that to avoid encoding errors.
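Roughly:

```rust
/// Strips non-ASCII characters so the value can safely be used in an HTTP header.
fn sanitize_header_value(value: &str) -> String {
    value.chars().filter(|c| c.is_ascii()).collect()
}
```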
Fixes: #9706
In #9656, we already tried to fix the pipelining of messages to the
portal. Unfortunately, a bug was introduced in a last-minute refactoring
where we would _only_ send messages while we were joining a room. Due to
a 2nd bug where we weren't actually processing the room-join replies
correctly, this didn't matter, so the PR was effectively a no-op and
didn't change any behaviour.
Further investigation of the code surfaced additional problems. For one,
we were not re-queuing the message into the correct buffer. Two, we were
only flushing after sending a message.
To fix both of these, we move the flushing out of the message sending
branch completely and duplicate some of the code for sending messages in
order to correctly handle join requests before other messages.
Finally, join requests have an _empty_ payload and are therefore
processed in a different branch. By moving the check for the replies to
join requests, we can correctly update the state and continue sending
messages once the join is successful.
Resolves: #9647
In a recent release, `str0m` downgraded all INFO logs to DEBUG. Whilst
generally appreciated, this means we don't have a lot of visibility
anymore into which candidates are being exchanged and what the ICE
credentials of the connections are.
We re-add this information to our existing logs when creating and
updating connections.
To avoid race conditions, we wait for all room joins on the WebSocket to
be successful before sending any messages to the portal. This requires
us to split room join messages from other messages so we can still send
them separately.
Resolves: #9647
Instead of a 1 minute TTL for all connections, we vary the TTL based on
the protocol being used. For TCP, that is 2 hours. For UDP and ICMP, we
use 2 minutes.
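Roughly (the `Protocol` enum is a stand-in for the real type):

```rust
use std::time::Duration;

enum Protocol {
    Tcp,
    Udp,
    Icmp,
}

/// TTL of a tracked connection, varying by protocol.
fn connection_ttl(protocol: Protocol) -> Duration {
    match protocol {
        Protocol::Tcp => Duration::from_secs(2 * 60 * 60), // 2 hours
        Protocol::Udp | Protocol::Icmp => Duration::from_secs(2 * 60), // 2 minutes
    }
}
```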
Resolves: #9645
Originally, we introduced these to gather some data from logs / warnings
that we considered to be too spammy. We've since merged a
burst-protection that will at most submit the same event once every 5
minutes.
The data from the telemetry spans themselves has not been used at all.
WireGuard implements a rate-limiting mechanism for when the number of
handshake initiations exceeds a certain limit. This is important because
handshakes involve asymmetric cryptography and are computationally
expensive. To prevent DoS attacks where other peers repeatedly ask for
new handshakes, the rate limiter implements a cookie mechanism where -
when under load - the remote peer needs to include a given cookie in new
handshakes. This cookie is tied to the peer's IP address to prevent it
from being reused by other peers.
Up until now, we have not been passing the sender's IP address to
`boringtun` and therefore, the only option when the rate limit was hit
was to error with `UnderLoad`.
By passing the source IP of the packet, `boringtun` can engage in the
cookie-reply mechanism and therefore avoid the `UnderLoad` error.
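A sketch of the call site, assuming `boringtun`'s `Tunn::decapsulate` takes the datagram's source address as its first argument:

```rust
use std::net::SocketAddr;

use boringtun::noise::{Tunn, TunnResult};

fn handle_incoming<'a>(
    tunn: &mut Tunn,
    from: SocketAddr,
    datagram: &[u8],
    dst: &'a mut [u8],
) -> TunnResult<'a> {
    // Passing the source IP lets `boringtun` engage the cookie-reply mechanism
    // instead of failing with `UnderLoad` when rate-limited.
    tunn.decapsulate(Some(from.ip()), datagram, dst)
}
```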
Resolves: #9643
When we receive the `account_slug` from the portal, the Gateway now
sends a `$identify` event to PostHog. This will allow us to target
Gateways with feature-flags based on the account they are connected to.
Presently, `connlib` always just lets the OS pick a random port for our
UDP socket. This works well in many cases but has the downside that IF
network admins would like to aid in the process of establishing direct
connections, they cannot open a specific port because it is always
random.
It doesn't cost us anything to try and bind to a particular port (here
52625) and fall back to a random one if something is already listening
there (sketched after the list below).
The port 52625 was chosen because:
- It is within the ephemeral port range and will therefore never be
registered to anything else.
- It is a palindrome and therefore easy to remember.
- When typing FIRE on a phone keypad, you get the numbers 3473. 52625 is
the port at offset 3473 into the ephemeral port range.
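A minimal sketch of the bind logic (the real code presumably goes through the existing socket abstractions rather than `std::net` directly):

```rust
use std::io;
use std::net::{Ipv4Addr, UdpSocket};

const PREFERRED_PORT: u16 = 52625;

fn bind_udp() -> io::Result<UdpSocket> {
    match UdpSocket::bind((Ipv4Addr::UNSPECIFIED, PREFERRED_PORT)) {
        Ok(socket) => Ok(socket),
        // Something is already bound to 52625; fall back to a random port.
        Err(_) => UdpSocket::bind((Ipv4Addr::UNSPECIFIED, 0)),
    }
}
```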
In order for this port to be useful in establishing direct connections,
we generate optimistic candidates based on existing remote candidates by
combining the IP of all server-reflexive candidates with the port of all
host candidates.
This patch deliberately does not publicly announce this feature in the
docs or the changelog so we can first gather experience with it in our
own test environment.
Resolves: #9559
A bit of legacy that we have inherited around our Firezone ID is that
the ID stored on the user's device is SHA-hashed before being passed to
the portal as the "external ID". This makes it difficult to correlate
IDs in Sentry and PostHog with the data we have in the portal: for
Sentry and PostHog, we submit the raw UUID stored on the user's device.
As a first step in overcoming this, we embed an "external ID" in those
services as well IF the provided Firezone ID is a valid UUID. This will
allow us to immediately correlate those events.
As a second step, we automatically generate all new Firezone IDs for the
Windows and Linux Client as `hex(sha256(uuid))`. These won't parse as
valid UUIDs and therefore will be submitted as is to the portal.
As a third step, we update all documentation around generating Firezone
IDs to use `uuidgen | sha256` instead of just `uuidgen`. This is
effectively the equivalent of (2) but for the Headless Client and
Gateway where the Firezone ID can be configured via environment
variables.
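For reference, the new ID generation is roughly equivalent to the following (assuming the `uuid`, `sha2` and `hex` crates):

```rust
use sha2::{Digest, Sha256};
use uuid::Uuid;

/// Generates a new Firezone ID as `hex(sha256(uuid))`.
fn generate_firezone_id() -> String {
    let uuid = Uuid::new_v4().to_string();

    hex::encode(Sha256::digest(uuid.as_bytes()))
}
```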
Resolves: #9382
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>