The default send and receive buffer sizes on Linux are too small (only
~200 KB). Checking `nstat` after an iperf run revealed that the number of dropped packets in the first interval directly correlates with the number of reported receive buffer errors.
We already try to increase the send and receive buffer sizes for our UDP socket but unfortunately, we cannot increase them beyond the limits imposed by the system. To work around this, we try to set `rmem_max` and `wmem_max` during startup of the Linux headless client and Gateway. This behaviour can be disabled by setting `FIREZONE_NO_INC_BUF=true`.
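For illustration, raising the limits at startup could look roughly like this; the paths are the standard Linux sysctls, but the concrete value and the privilege handling are assumptions, not the actual Firezone implementation:

```rust
// Rough sketch only: raise the kernel's socket buffer limits at startup,
// assuming we run with enough privileges. The 10 MiB value is illustrative.
use std::fs;
use std::io;

const BUF_SIZE_BYTES: &str = "10485760"; // 10 MiB

fn raise_socket_buffer_limits() -> io::Result<()> {
    // Opt-out switch as described above.
    if std::env::var("FIREZONE_NO_INC_BUF").is_ok_and(|v| v == "true") {
        return Ok(());
    }

    fs::write("/proc/sys/net/core/rmem_max", BUF_SIZE_BYTES)?;
    fs::write("/proc/sys/net/core/wmem_max", BUF_SIZE_BYTES)?;

    Ok(())
}
```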
This doesn't work in Docker unfortunately, so we set the values manually
in the CI perf tests and verify after the test that we didn't encounter
any send and receive buffer errors.
It is yet to be determined how we should deal with this problem for all the GUI clients. See #10350 for the issue tracking that.
Unfortunately, this doesn't fix all packet drops during the first iperf
interval. With this PR, we now see packet drops on the interface itself.
In earlier versions of Firezone, the WebSocket protocol with the portal was using the request-response semantics built into Phoenix. This however is quite cumbersome to work with due to the polymorphic nature of the protocol design.
We ended up moving away from it and instead only use one-way messages where each event directly corresponds to a message type. However, we never removed the reply-message capability from the `phoenix-channel` module; instead, all usages just set it to `()`.
We can simplify the code here by always setting this to `()`.
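As a rough sketch of the simplification (names and signatures are illustrative, not the actual `phoenix-channel` API):

```rust
// Illustrative only; the real `phoenix-channel` types look different.
use std::marker::PhantomData;

// Before: generic over a reply type that every caller instantiated with `()`.
pub struct PhoenixChannel<TInboundMsg, TReply> {
    _marker: PhantomData<(TInboundMsg, TReply)>,
}

// After: the reply type parameter is gone; internally, replies are hard-coded
// to `()` because the protocol only uses one-way messages.
pub struct SimplifiedPhoenixChannel<TInboundMsg> {
    _marker: PhantomData<TInboundMsg>,
}
```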
Resolves: #7091
Instead of recording the queue depths on every event-loop tick, we now record them once a second by setting a Gauge. Not only is that a simpler instrument to work with but it is also significantly more performant. The current version, when metrics are enabled, takes up quite a bit of CPU time.
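A minimal sketch of the idea, with made-up names standing in for the actual event loop and the OTEL gauge instrument:

```rust
// Sketch only: sample the queue depth once a second instead of on every tick.
use std::collections::VecDeque;
use std::time::{Duration, Instant};

struct EventLoop {
    queue: VecDeque<Vec<u8>>,
    last_sample: Instant,
}

impl EventLoop {
    fn on_tick(&mut self) {
        // ... process events ...

        if self.last_sample.elapsed() >= Duration::from_secs(1) {
            record_queue_depth(self.queue.len() as u64);
            self.last_sample = Instant::now();
        }
    }
}

fn record_queue_depth(_depth: u64) {
    // In the real code, this sets an OTEL gauge, e.g. `gauge.record(depth, &[])`.
}
```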
Resolves: #10237
Right now, connections cannot be actively closed in Firezone. The WireGuard tunnel and the ICE agent are coupled together, meaning we only clean up the connection once either of them fails. One exception here is when the Client roams. In that case, the Client simply clears its local memory completely and then re-establishes all necessary connections by re-requesting access.
There are three cases where gracefully closing a connection is useful:
1. If an access authorization is revoked or expires and this was the last resource authorization for that peer, we don't currently remove the connection on the Gateway. Instead, the Client is still able to send packets but they'll be dropped because we don't have a peer state anymore.
1. If a Gateway gets restarted due to e.g. an upgrade or other
maintenance work, it loses all its connections and every Client needs to
wait for the ICE timeout (~15 seconds) before it can establish a new
one.
1. If a Client has its access revoked for all resources it has access to in a particular site, we also don't remove this connection, even though it has become practically useless.
All of these cases are fixed with this PR. Here, we introduce a way to gracefully shut down a connection without forcing the other side into an ICE timeout. The graceful connection shutdown works by introducing a new
"goodbye" p2p control protocol message. Like all our p2p control
protocol messages, this is based on IP and therefore delivery is not
guaranteed. In other words, this "goodbye" message is sent on a
best-effort basis.
In the case of shutdown, the Gateway will wait for all UDP packets to be
flushed but will not resend them or wait for an ACK.
If either end receives such a "goodbye" message, they simply remove the local peer and connection state, just as if the connection had failed due to either ICE or WireGuard. For the Client, this means that the next packet for a resource will trigger a new access authorization request.
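A rough sketch of the flow (types and names are made up; the real p2p control protocol differs):

```rust
// Illustrative sketch only: best-effort "goodbye" handling.
use std::collections::HashMap;
use std::net::SocketAddr;

enum P2pControl {
    Goodbye,
    // ... other control messages
}

struct PeerState;

struct Node {
    peers: HashMap<SocketAddr, PeerState>,
}

impl Node {
    /// Gracefully close the connection to `peer`.
    fn close(&mut self, peer: SocketAddr) {
        // Best-effort: the message travels over IP, so delivery is not
        // guaranteed and we neither retransmit nor wait for an ACK.
        self.send_control(peer, P2pControl::Goodbye);
        self.peers.remove(&peer);
    }

    /// Handle an incoming "goodbye" from the remote peer.
    fn on_goodbye(&mut self, peer: SocketAddr) {
        // Remove local state exactly as if ICE or WireGuard had failed. On the
        // Client, the next packet for a resource then triggers a fresh access
        // authorization request.
        self.peers.remove(&peer);
    }

    fn send_control(&self, _peer: SocketAddr, _msg: P2pControl) {
        // Serialise and send the control message datagram (omitted).
    }
}
```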
With the introduction of the pre-resolved Sentry host, all Firezone clients now require Internet connectivity on startup. That is a significant usability hit that we can easily fix by simply falling back to resolving the host on-demand.
Our Sentry client needs to resolve DNS before being able to send logs or
errors to the backend. Currently, this DNS resolution happens on-demand
as we don't take any control of the underlying HTTP client.
In addition, this will use HTTP/1.1 by default which isn't as efficient
as it could be, especially with concurrent requests.
Finally, if we ever decide to proxy all Sentry traffic through our own domain, we have to take control of the underlying client anyway.
To resolve all of the above, we create a custom `TransportFactory` where
we reuse the existing `ReqwestHttpTransport` but provide an already
configured `reqwest::Client` that always uses HTTP/2 with a
pre-configured set of DNS records for the given ingest host.
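Sketched out, this could look roughly as follows, assuming the sentry crate's `TransportFactory` trait and the `ReqwestHttpTransport::with_client` constructor; the host and address wiring is illustrative:

```rust
// Sketch of a transport factory with a pre-configured reqwest client.
use std::net::SocketAddr;
use std::sync::Arc;

use sentry::transports::ReqwestHttpTransport;
use sentry::{ClientOptions, Transport, TransportFactory};

struct PreResolvedTransportFactory {
    ingest_host: &'static str,
    ingest_addr: SocketAddr,
}

impl TransportFactory for PreResolvedTransportFactory {
    fn create_transport(&self, options: &ClientOptions) -> Arc<dyn Transport> {
        let client = reqwest::Client::builder()
            // Skip on-demand DNS resolution for the ingest host.
            .resolve(self.ingest_host, self.ingest_addr)
            // Always speak HTTP/2; more efficient for concurrent requests.
            .http2_prior_knowledge()
            .build()
            .expect("static client configuration is valid");

        Arc::new(ReqwestHttpTransport::with_client(options, client))
    }
}
```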
We cannot poll the `PhoenixChannel` after it has returned an error, otherwise it will panic. Therefore, we exit the event-loop at that point. The outer event-loop also exits as soon as it receives an error from this channel, so this is fine.
`PhoenixChannel` only returns an error when it has irrecoverably
disconnected, e.g. after the retries have been exhausted or we hit a 4xx
error on the WebSocket connection.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
When the connection to a Client disappears, the Gateway currently clears
all state related to this peer. Whilst eagerly cleaning up memory can be
good, in this case, it may lead to the Client thinking it has access to
a resource when in reality it doesn't.
Just because the connection to a Client failed doesn't mean their access
authorizations are invalid. In case the Client reconnects, it should be
able to just continue sending traffic.
At the moment, this only works if the connection also failed on the Client and therefore its view of the world with regard to "which resources do I have access to" was also reset.
What we are seeing in Sentry reports though is that Clients are
attempting to access these resources, thinking they have access but the
Gateway denies it because it has lost the access authorization state.
With the removal of the NAT64/46 modules, we can now simplify the
internals of our `IpPacket` struct. The requirements for our `IpPacket`
struct are somewhat delicate.
On the one hand, we don't want to be overly restrictive in our parsing /
validation code because there is a lot of broken software out there that
doesn't necessarily follow RFCs. Hence, we want to be as lenient as
possible in what we accept.
On the other hand, we do need to verify certain aspects of the packet,
like the payload lengths. At the moment, we are somewhat too lenient
there which causes errors on the Gateway where we have to NAT or
otherwise manipulate the packets. See #9567 or #9552 for example.
To fix this, we make the parsing in the `IpPacket` constructor more
restrictive. If it is a UDP, TCP or ICMP packet, we attempt to fully
parse its headers and validate the payload lengths.
This parsing allows us to then rely on the integrity of the packet as part of the implementation. This does create several code paths that can in theory panic but in practice should be impossible to hit. To ensure that this does in fact not happen, we also tackle an issue that is long overdue: fuzzing.
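A minimal cargo-fuzz target for this could look like the sketch below; the crate and constructor names (`ip_packet::IpPacket::new`) are assumptions:

```rust
// fuzz/fuzz_targets/ip_packet.rs -- sketch of a libFuzzer target via cargo-fuzz.
#![no_main]

use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // Parsing arbitrary bytes must never panic; invalid packets should be
    // rejected with an error instead.
    let _ = ip_packet::IpPacket::new(data.to_vec());
});
```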
Resolves: #6667
Resolves: #9567
Resolves: #9552
When filtering through logs in Sentry, it is useful to narrow them down by the context of a Client, Gateway or Resource. Currently, these fields are sometimes called `client`, `cid`, `client_id` etc., and the same goes for the Gateway and Resources.
To make this filtering easier, we name all of them `cid` for Client IDs, `gid` for Gateway IDs and `rid` for Resource IDs.
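For example, a log line following this convention might look like this (the surrounding function and IDs are made up):

```rust
// Example of the naming convention using the `tracing` crate.
use uuid::Uuid;

fn log_authorization(client_id: Uuid, gateway_id: Uuid, resource_id: Uuid) {
    tracing::debug!(cid = %client_id, gid = %gateway_id, rid = %resource_id, "Authorized access to resource");
}
```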
These appear to happen on systems that e.g. don't have IPv6 support or where the destination cannot be reached. It is a bit of a catch-all but all the ones I am seeing in Sentry are false positives. To reduce the noise a bit, we log these on DEBUG now.
When looking through customer logs, we see a lot of "Resolved best route outside of tunnel" messages. Those get logged every time we need to rerun our re-implementation of Windows' weighting algorithm that decides which source interface / IP a packet should be sent from.
Currently, this gets cached in every socket instance, so for the peer-to-peer socket, this is only computed once per destination IP. However, for DNS queries, we make a new socket for every query. Using a new source port for DNS queries is recommended to avoid fingerprinting. Using a new socket also means that we need to re-run this algorithm every time we make a DNS query, which is why we see this log so often.
To fix this, we need to share this cache across all UDP sockets. Cache
invalidation is one of the hardest problems in computer science and this
instance is no different. This cache needs to be reset every time we
roam as that changes the weighting of which source interface to use.
To achieve this, we extend the `SocketFactory` trait with a `reset`
method. This method is called whenever we roam and can then reset a
shared cache inside the `UdpSocketFactory`. The "source IP resolver"
function that is passed to the UDP socket now simply accesses this
shared cache and inserts a new entry when it needs to resolve the IP.
As an added benefit, this may speed up DNS queries on Windows a bit (although I haven't benchmarked it). It should certainly drastically reduce the number of syscalls we make on Windows.
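A simplified sketch of the shape of this (the real trait and cache types differ):

```rust
// Simplified sketch of a shared source-IP cache behind `SocketFactory::reset`.
use std::collections::HashMap;
use std::net::IpAddr;
use std::sync::{Arc, Mutex};

trait SocketFactory {
    /// Called whenever we roam; invalidates any cached routing decisions.
    fn reset(&self);
}

#[derive(Default)]
struct UdpSocketFactory {
    /// Destination IP -> best source IP, shared by all UDP sockets.
    source_ip_cache: Arc<Mutex<HashMap<IpAddr, IpAddr>>>,
}

impl SocketFactory for UdpSocketFactory {
    fn reset(&self) {
        // Roaming changes the interface weighting, so all cached entries
        // are potentially stale.
        self.source_ip_cache.lock().unwrap().clear();
    }
}

impl UdpSocketFactory {
    /// The "source IP resolver" passed to each UDP socket consults the shared
    /// cache and only re-runs the expensive route resolution on a miss.
    fn resolve_source_ip(&self, dst: IpAddr) -> IpAddr {
        *self
            .source_ip_cache
            .lock()
            .unwrap()
            .entry(dst)
            .or_insert_with(|| resolve_best_route_outside_of_tunnel(dst))
    }
}

fn resolve_best_route_outside_of_tunnel(_dst: IpAddr) -> IpAddr {
    // Placeholder for the re-implementation of Windows' weighting algorithm.
    IpAddr::from([0, 0, 0, 0])
}
```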
When receiving an `init` message from the portal, we will now revoke all
authorizations not listed in the `authorizations` list of the `init`
message.
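Conceptually, the reconciliation boils down to something like the following sketch (state and message types are simplified stand-ins):

```rust
// Simplified sketch of revoking authorizations not present in `init`.
use std::collections::{HashMap, HashSet};

type ResourceId = u64; // stand-in for the real ID type

struct Authorization;

struct Gateway {
    /// Active access authorizations, keyed by resource.
    authorizations: HashMap<ResourceId, Authorization>,
}

impl Gateway {
    fn on_init(&mut self, still_authorized: &HashSet<ResourceId>) {
        // Anything the portal no longer lists in `init.authorizations`
        // is revoked locally.
        self.authorizations
            .retain(|rid, _| still_authorized.contains(rid));
    }
}
```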
We (partly) test this by introducing a new transition in our proptests
that de-authorizes a certain resource whilst the Gateway is simulated to
be partitioned. It is difficult to test that we cannot make a connection
once that has happened because we would have to simulate a malicious
client that knows about resources / connections or ignores the "remove
resource" message.
Testing this is deferred to a dedicated task. We do test that we hit the
code path of revoking the resource authorization and because the other
resources keep working, we also test that we are at least not revoking
the wrong ones.
Resolves: #9892
As a first step in preparation for sending OTEL metrics from Clients and
Gateways to a cloud-hosted OTEL collector, we extend the CLI of the
Gateway with configuration options to provide a gRPC endpoint to an OTEL
collector.
If `FIREZONE_METRICS` is set to `otel-collector` and an endpoint is
configured via `OTLP_GRPC_ENDPOINT`, we will report our metrics to that
collector.
The future plan for extending this is such that if `FIREZONE_METRICS` is
set to `otel-collector` (which will likely be the default) and no
`OTLP_GRPC_ENDPOINT` is set, then we will use our own, hosted OTEL
collector and report metrics IF the `export-metrics` feature-flag is set
to `true`.
This is a similar integration to the one we did for streaming logs to Sentry. We can therefore enable it at a similar granularity as we do with the logs and e.g. only enable it for the `firezone` account to start with.
In the meantime, customers can already make use of those metrics if they'd like by using the current integration.
Resolves: #1550
Related: #7419
---------
Co-authored-by: Antoine Labarussias <antoinelabarussias@gmail.com>
I am no longer able to compile `jemalloc` on my system in a debug build.
It fails with the following error:
```
src/malloc_io.c: In function ‘buferror’:
src/malloc_io.c:107:16: error: returning ‘char *’ from a function with return type ‘int’ makes integer from pointer without a cast [-Wint-conversion]
107 | return strerror_r(err, buf, buflen);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
This appears to be a problem with modern versions of clang/gcc. I
believe this started happening when I recently upgraded my system. The
upstream [`jemalloc`](https://github.com/jemalloc/jemalloc) repository
is now archived and thus unmaintained. I am not sure if we ever measured
a significant benefit in using `jemalloc`.
Related: https://github.com/servo/servo/issues/31059
Rust 1.88 has been released and brings with it a quite exciting feature: let-chains! It allows us to mix and match `if` and `let` expressions, often reducing the rightward drift of the relevant code and making it easier to read.
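For example (illustrative code, not from the Firezone codebase):

```rust
// Mixing a boolean condition and a `let` binding in one `if`, which
// previously required nesting.

enum Event {
    Packet(Vec<u8>),
}

fn process(_packet: Vec<u8>) {}

// Before Rust 1.88: the `let` needs its own nested `if`.
fn handle_nested(event: Option<Event>, logging_enabled: bool) {
    if logging_enabled {
        if let Some(Event::Packet(packet)) = event {
            process(packet);
        }
    }
}

// With let-chains: one condition, one level of indentation.
fn handle_chained(event: Option<Event>, logging_enabled: bool) {
    if logging_enabled && let Some(Event::Packet(packet)) = event {
        process(packet);
    }
}
```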
Rust 1.88 also comes with a new clippy lint that warns when creating a mutable reference from an immutable pointer. Attempting to fix this revealed that this is exactly what we are doing in the eBPF kernel. Unfortunately, it doesn't seem to be possible to design this in a way that is both accepted by the borrow-checker AND by the eBPF verifier. Hence, we simply make the function `unsafe` and document for the programmer what needs to be upheld.
At present, our primary indicator as to whether telemetry is active is
whether we have a Sentry session. For our analytics events however, we
currently require passing in the Firezone ID and API url again. This
makes it difficult to send analytics events from areas of the code that
don't have this information available.
To still allow for that, we integrate the `analytics` module more
tightly with the Sentry session. This allows us to drop two parameters
from the `$identify` event and also means we now respect the
`NO_TELEMETRY` setting for these events, except for `new_session`. This event is sent regardless because it allows us to track how many on-prem installations of Firezone are out there.
Instead of conditionally enabling the `logs` feature in the Sentry client, we always enable it and control via the `tracing` integration which events should get forwarded to Sentry. The feature-flag check accesses only shared memory and is therefore really fast.
We already re-evaluate feature flags on a timer which means this boolean
will flip over automatically and logs will be streamed to Sentry.
As with some of our other applications, it is useful to know when they restart and which version is running. Adding an INFO log on startup solves this.
Originally, we introduced these to gather some data from logs / warnings
that we considered to be too spammy. We've since merged a
burst-protection that will at most submit the same event once every 5
minutes.
The data from the telemetry spans themselves has not been used at all.
Sentry has a new "Logs" feature where we can stream logs directly to
Sentry. Doing this for all Clients and Gateways would be way too much
data to collect though.
In order to aid debugging from customer installations, we add a
PostHog-managed feature flag that - if set to `true` - enables the
streaming of logs to Sentry. This feature flag is evaluated every time
the telemetry context is initialised:
- For all FFI usages of connlib, this happens every time a new session
is created.
- For the Windows/Linux Tunnel service, this also happens every time we
create a new session.
- For the Headless Client and Gateway, it happens on startup and
afterwards, every minute. The feature-flag context itself is only
checked every 5 minutes though so it might take up to 5 minutes before
this takes effect.
The default value, as for all feature flags, is `false`. Therefore, if there is any issue with the PostHog service, we will fall back to the previous behaviour where logs are simply stored locally.
Resolves: #9600
When we receive the `account_slug` from the portal, the Gateway now
sends a `$identify` event to PostHog. This will allow us to target
Gateways with feature-flags based on the account they are connected to.
A bit of legacy that we have inherited around our Firezone ID is that the ID stored on the user's device is hashed (SHA-256) before being passed to the portal as the "external ID". This makes it difficult to correlate IDs in Sentry and PostHog with the data we have in the portal. For Sentry and PostHog, we submit the raw UUID stored on the user's device.
As a first step in overcoming this, we embed an "external ID" in those
services as well IF the provided Firezone ID is a valid UUID. This will
allow us to immediately correlate those events.
As a second step, we automatically generate all new Firezone IDs for the
Windows and Linux Client as `hex(sha256(uuid))`. These won't parse as
valid UUIDs and therefore will be submitted as is to the portal.
As a third step, we update all documentation around generating Firezone
IDs to use `uuidgen | sha256` instead of just `uuidgen`. This is
effectively the equivalent of (2) but for the Headless Client and
Gateway where the Firezone ID can be configured via environment
variables.
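For illustration, step (2) could be implemented roughly like this, using the `uuid`, `sha2` and `hex` crates; whether the Clients use exactly these crates and this byte encoding is an assumption:

```rust
// Sketch of generating a new Firezone ID as `hex(sha256(uuid))`.
use sha2::{Digest, Sha256};
use uuid::Uuid;

fn new_firezone_id() -> String {
    let uuid = Uuid::new_v4();
    let digest = Sha256::digest(uuid.to_string().as_bytes());

    // Not a valid UUID, so it is passed to the portal as-is, while the raw
    // value can still be correlated in Sentry / PostHog.
    hex::encode(digest)
}
```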
Resolves: #9382
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
This PR adds an optional field `account_slug` to the Gateway's init message. If populated, we will use this field to set the account slug in the telemetry context. This will allow us to know which customers a particular Sentry issue is related to.
These don't happen very often so they are safe to log on INFO. That is the default log level and it is useful to see why we are re-connecting to the portal.
When we implemented #8350, we chose an error handling strategy that would shut down the Gateway in case we didn't have a nameserver selected for handling those SRV and TXT queries. At the time, this was deemed to be sufficiently rare to be an adequate strategy. We have since learned that this can indeed happen when the Gateway starts without network connectivity, which is quite common when using tools such as Terraform to provision infrastructure.
In #9060, we fixed this by re-evaluating the fastest nameserver on a timer. This however doesn't change the error handling strategy when we don't have a working nameserver at all. It is practically impossible to have a working Gateway yet be unable to select a nameserver. We read the nameservers from `/etc/resolv.conf`, which is also what `libc` uses to resolve the domain we connect to for the WebSocket. A working WebSocket connection is required for us to establish connections to Clients, which in turn is a precursor to receiving DNS queries from a Client.
It causes unnecessary complexity to have a code path that can
potentially terminate the Gateway, yet is practically unreachable. To
fix this situation, we remove this code path and instead reply with a
DNS SERVFAIL error.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Validating checksums can be expensive so this is off by default. The intent is to turn it on in our staging environment so we can detect bugs in our packet handling code during testing.
Once we start collecting metrics across various Clients and Gateways, these metrics need to be tagged with the correct `service.name` and `service.version`, as well as an instance ID to differentiate metrics from different instances.
When working on the Rust code of Firezone from a macOS computer, it is useful to have pretty much all of the code at least compile, to detect problems early. Eventually, once we target features like a headless macOS client, some of these stubs will actually be filled in and become functional.
`jemalloc` is a modern allocator that is designed for multi-threaded
systems and can better handle smaller allocations that may otherwise
fragment the heap. Firezone's components, especially Relays and Gateways, are intended to operate with a long uptime and therefore need to handle memory efficiently.
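For reference, wiring up jemalloc in a Rust binary typically looks like this (shown here with the `tikv-jemallocator` crate; the exact crate used is not shown in this excerpt):

```rust
// Route all heap allocations of this binary through jemalloc.
use tikv_jemallocator::Jemalloc;

#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

fn main() {
    // Every allocation below now goes through jemalloc.
    let _buffers: Vec<Vec<u8>> = (0..1024).map(|_| vec![0u8; 64]).collect();
}
```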
The UDP socket threads added in #7590 are designed to never exit. UDP
sockets are stateless and therefore any error condition on them should
be isolated to sending / receiving a particular datagram. It is however
possible that code panics which will shut down the threads
irrecoverably. In this unlikely event, `connlib`'s event-loop would keep
spinning and spam the log with "UDP socket stopped". There is no good way to recover from such a situation automatically, so we just quit `connlib` in that case and shut everything down.
To model this new error path, we refactor the `DisconnectError` to be
internally backed by `anyhow`.
By default, we spawn 1 TUN send and 1 TUN receive thread on the Gateway.
In addition to that, we also have the main processing thread that
encrypts and decrypts packets. With #7590, we will be separating out the
UDP send and receive operations into yet another thread. As a result, we
will have at a minimum 4 threads running that perform IO or important
work.
Thus, in order to benefit from TUN multi-queue, we need more than 4
cores to be able to efficiently parallelise work.
Related: #8769
Currently, errors encountered as part of operating the tunnel are non-fatal and only logged on `TRACE` in order to not flood the logs. Recent improvements to how the event loop operates mean that we now emit far fewer errors, and ideally there should be none. Therefore, we can now employ a much stricter policy and log all errors here on `WARN` in order to get Sentry alerts.
This PR adds opentelemetry-based packet counter metrics to `connlib`. By default, the collection of these metrics is disabled. Without a registered metrics-provider, gathering these metrics is effectively a no-op. They will still incur 1 or 2 function calls per packet but that should be negligible compared to other operations such as encryption / decryption.
With this system in place, we can in the future add more metrics to make
debugging easier.
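As a sketch, such a per-packet counter might look like the following; the instrument name is made up and the exact builder API (`.init()` vs `.build()`) depends on the opentelemetry crate version:

```rust
// Sketch of a per-packet counter using the opentelemetry metrics API.
// Without a registered metrics provider, `add` is effectively a no-op.
use opentelemetry::global;
use opentelemetry::KeyValue;

fn main() {
    let meter = global::meter("connlib");
    let packets_in = meter.u64_counter("packets_in").init();

    // Hot path: one counter increment per packet.
    packets_in.add(1, &[KeyValue::new("direction", "inbound")]);
}
```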
A sizeable chunk of Firezone's Rust components deal with parsing,
manipulating and emitting DNS queries and responses. The API surface of
DNS is quite large and to make handling of all corner-cases easier, we
depend on the `domain` library to do the heavy-lifting for us.
For better or worse, `domain` follows a lazy-parsing approach. Thus,
creating a new DNS message doesn't actually verify that it is in fact
valid. Within Firezone, we make several assumptions around DNS messages,
such as that they will only ever contain a single question.
Historically, DNS allows for multiple questions per query but in practice, nobody uses that.
Due to how we handle DNS in Firezone, manipulating these messages
happens in multiple places. That combined with the lazy-parsing approach
from `domain` warrants having our own `dns-types` library that wraps
`domain` and provides us with types that offer the interface we need in
the rest of the codebase.
Resolves: #7019
The DNS server added in #8285 was only a dummy: it added the infrastructure to receive DNS queries on the IP of the TUN device at port 53535 but returned SERVFAIL for all queries. For this DNS server to be useful, we need to take those queries and replay them towards a DNS server that is configured locally on the Gateway.
To achieve this, we parse `/etc/resolv.conf` during startup of the
Gateway and pass the contained nameservers into the tunnel. From there,
the Gateway's event-loop can receive the queries, feed them into the
already existing machinery for performing recursive DNS queries that we
use on the Client and resolve the records.
In its current implementation, we only use the first nameserver defined
in `/etc/resolv.conf`. If the lookup fails, we send back a SERVFAIL
error and log a message.
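A simplified sketch of reading the nameservers at startup (the actual implementation likely uses a proper resolv.conf parser):

```rust
// Sketch: collect the `nameserver` entries from /etc/resolv.conf.
use std::fs;
use std::io;
use std::net::IpAddr;

fn system_nameservers() -> io::Result<Vec<IpAddr>> {
    let contents = fs::read_to_string("/etc/resolv.conf")?;

    let nameservers = contents
        .lines()
        .filter_map(|line| line.trim().strip_prefix("nameserver"))
        .filter_map(|addr| addr.trim().parse().ok())
        .collect();

    Ok(nameservers)
}
```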
Resolves: #8221
To support resolving SRV and TXT records for DNS-resources, we host a
DNS server on UDP/53535 and TCP/53535 on the IPv4 and IPv6 IP of the
Gateway's TUN device. This will later be used by connlib to send DNS
queries of particular types (concretely SRV and TXT) to the Gateway
itself.
With this PR, this DNS server is already functional and reachable but it
will answer all queries with SERVFAIL. Actual handling of these queries
is left to a future PR.
We listen on port 53535 because:
- Port 53 may be taken by another DNS server running on the customer's
machine where they deploy the Gateway
- Port 5353 is the standard port for mDNS
- I could not find anything on the Internet about it being used by a
specific application
In theory, we could also bind to a random port but then we'd have to communicate this port to the Client somehow. This could be done using a control protocol message but it just makes things more complicated. For example, there would be additional buffering needed on the Client side for the time period where we've already established a connection to the Gateway but haven't yet received the control protocol message telling us on which port the Gateway is hosting the DNS server.
If one knows the Gateway's IP (and has a connection to it already), this DNS server can be used with standard DNS tools such as `dig`:
```sh
dig @100.76.212.99 -p 53535 example.com
```
Related: #8221