In order for search-domains to work on Windows, we need to set the
`SearchList` registry key for our interface. This will result in Windows
sending us a DNS query with the expanded domain name from the search
list which we can then process like normal DNS queries.
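For illustration, setting that key with the `winreg` crate could look roughly like this; the registry path and per-interface GUID handling are assumptions, not necessarily what the client does:

```rust
// Sketch only: write the search domain into the per-interface `SearchList`
// value. Path and error handling are assumptions for illustration.
use winreg::enums::{HKEY_LOCAL_MACHINE, KEY_SET_VALUE};
use winreg::RegKey;

fn set_search_list(interface_guid: &str, search_domain: &str) -> std::io::Result<()> {
    let path = format!(
        r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{interface_guid}"
    );

    RegKey::predef(HKEY_LOCAL_MACHINE)
        .open_subkey_with_flags(&path, KEY_SET_VALUE)?
        .set_value("SearchList", &search_domain)?;

    Ok(())
}
```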
Related: #8410
A sizeable chunk of Firezone's Rust components deal with parsing,
manipulating and emitting DNS queries and responses. The API surface of
DNS is quite large and to make handling of all corner-cases easier, we
depend on the `domain` library to do the heavy-lifting for us.
For better or worse, `domain` follows a lazy-parsing approach. Thus,
creating a new DNS message doesn't actually verify that it is in fact
valid. Within Firezone, we make several assumptions around DNS messages,
such as that they will only ever contain a single question.
Historically, DNS allows for multiple questions per query but in
practice, nobody uses that.
Due to how we handle DNS in Firezone, manipulating these messages
happens in multiple places. That combined with the lazy-parsing approach
from `domain` warrants having our own `dns-types` library that wraps
`domain` and provides us with types that offer the interface we need in
the rest of the codebase.
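As a hypothetical sketch (not the actual `dns-types` API), the wrapper can validate eagerly what the rest of the codebase assumes, e.g. that a query carries exactly one question:

```rust
// Hypothetical sketch of the kind of wrapper `dns-types` provides around
// `domain`; the real API will differ. The point is eager validation.
use domain::base::Message;

pub struct Query<Octets> {
    inner: Message<Octets>,
}

impl<Octets: AsRef<[u8]>> Query<Octets> {
    pub fn parse(octets: Octets) -> anyhow::Result<Self> {
        let inner = Message::from_octets(octets)?;

        anyhow::ensure!(
            inner.header_counts().qdcount() == 1,
            "expected exactly one question"
        );

        Ok(Self { inner })
    }

    pub fn as_message(&self) -> &Message<Octets> {
        &self.inner
    }
}
```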
Resolves: #7019
Search domains are a way of performing a DNS lookup without typing the
fully-qualified domain name. For example, with a search domain of
`example.com`, performing a DNS query for `app` will automatically
expand the query to `app.example.com`. At present, this doesn't work
with Firezone because there is no way to configure an account-wide
search-domain.
With this PR, we extend the `Interface` message sent by the portal to
also include an optional `search_domain` field that must be a valid
domain name. If set, `connlib`'s DNS stub resolver will now append this
domain to all single-label queries and match the resulting domain
against all active DNS resources.
On Linux - with `systemd-resolved` as the DNS backend - we need to set
the search domain on the TUN interface as well and enable LLMNR in order
to be able to intercept these queries. With this configuration, however,
`resolved` expands the query for us, meaning we don't actually receive a
single-label query in `connlib`. Instead, we directly see
`app.example.com` when we type `host app` or `dig +search app` and have
`example.com` as our search domain.
MacOS has a similar system but with a different fallback. There, the
operating system will first try all configured search domains on the
system (typically just the ones set prior to Firezone starting), and
send queries for the resulting FQDNs to all resolvers. If none of the resolvers
(including Firezone's stub resolver) return results, it sends the
single-label query directly to the primary resolver. To handle this
case, Firezone needs to know about the search-domain and expand it
itself when it receives the single-label query. In the future, we may
want to look into how we can configure MacOS such that it performs this
expansion for us.
On Windows and Android, queries for a single-label domain will be
directly sent to Firezone's stub resolver where we then hit the same
codepath as explained above.
Specifically, the way this codepath works is that if we receive a
single-label query AND we have a search-domain set, we expand it and
match that particular query against our list of resources. In every
other case, we continue on with the single-label domain.
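A rough sketch of that decision (function and parameter names are made up for illustration):

```rust
// Sketch: expand only single-label names, and only if a search domain is set.
fn expand_query(name: &str, search_domain: Option<&str>) -> Option<String> {
    let name = name.trim_end_matches('.');
    let is_single_label = !name.contains('.');

    match (is_single_label, search_domain) {
        // The expanded name is matched against the active DNS resources.
        (true, Some(domain)) => Some(format!("{name}.{domain}")),
        // In every other case, we continue with the original name.
        _ => None,
    }
}
```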
Related: #8365
Fixes: #8377
On Linux, logs sent to stdout from a systemd-service are automatically
captured by `journald`. This is where most admins expect logs to be and
frankly, doing any kind of debugging of Firezone is much easier if you
can do `journalctl -efu firezone-client-ipc.service` in a terminal and
check what the IPC service is doing.
On Windows, stdout from a service is (unfortunately) ignored.
To capture logs on Windows as well and to allow dynamically changing the
log-filter, I had to introduce a (long-overdue) abstraction over tracing's
"reload" layer that allows us to combine multiple reload-handles into one.
Unfortunately, neither the `reload::Layer` nor the `reload::Handle`
implement `Clone`, which makes this unnecessarily difficult.
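One way to sidestep that (a sketch of an assumed design, not necessarily the abstraction introduced here) is to erase the concrete handle types behind closures and fan a single directive string out to all of them:

```rust
// Sketch: each concrete `reload::Handle` is wrapped in a closure when the
// layer stack is built, which erases its subscriber type parameter. Reloading
// then parses the directives once per handle and applies them everywhere.
use tracing_subscriber::EnvFilter;

struct MergedReloadHandle {
    handles: Vec<Box<dyn Fn(EnvFilter) -> anyhow::Result<()> + Send + Sync>>,
}

impl MergedReloadHandle {
    fn reload(&self, directives: &str) -> anyhow::Result<()> {
        for handle in &self.handles {
            handle(EnvFilter::try_new(directives)?)?;
        }

        Ok(())
    }
}
```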
Related: #8173
Every time we start a new session, our telemetry context potentially
changes, i.e. the user may sign into a new account. This should ensure
that both the IPC service and the GUI always use the most up-to-date
`account_slug` as part of Sentry events. In addition, this will also set
the `account_slug` for clients that just signed in. Previously, the
`account_slug` would only get populated on the next start of the client.
At present, `connlib` communicates with its host app via callbacks.
These callbacks are executed synchronously as part of `connlib`'s
event-loop, meaning `connlib` cannot do anything else whilst the
callback is executing in the host app. Additionally, this callback runs
within the `Future` that represents `connlib` and thus runs on a `tokio`
worker thread.
Attempting to interact with the session from within the callback can
lead to panics, for example when `Session::disconnect` is called which
uses `Runtime::block_on`. This isn't allowed by `tokio`: You cannot
block on the execution of an async task from within one of the worker
threads.
To solve both of these problems, we introduce a thread-pool of size 1
that is responsible for executing `connlib` callbacks. Not only does
this allow `connlib` to perform more work such as routing packets or
processing portal messages, it also means that it is not possible for the
host app to cause these panics within the `tokio` runtime because the
callbacks run on a different thread.
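A minimal sketch of such a single-threaded executor (names assumed, not the actual implementation):

```rust
// Sketch: callbacks are queued onto a dedicated thread, so a blocking (or
// `Runtime::block_on`-calling) callback never stalls connlib's tokio workers.
use std::sync::mpsc;
use std::thread;

struct CallbackExecutor {
    tx: mpsc::Sender<Box<dyn FnOnce() + Send>>,
}

impl CallbackExecutor {
    fn new() -> Self {
        let (tx, rx) = mpsc::channel::<Box<dyn FnOnce() + Send>>();

        thread::spawn(move || {
            for callback in rx {
                callback(); // Blocking here only stalls this thread.
            }
        });

        Self { tx }
    }

    fn execute(&self, callback: impl FnOnce() + Send + 'static) {
        let _ = self.tx.send(Box::new(callback));
    }
}
```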
The error code we see here means "There are no more endpoints available
from the endpoint mapper." This has something to do with Windows'
internal RPC communication between components. DNS deactivation is on a
best-effort basis and it appears that everything else is working just
fine, despite this error.
It appears to happen when we shut down our own service, so perhaps it is
just a race condition.
In order to test changes to the IPC service on Windows more easily, the
IPC service binary offers an `install` command that installs a new
"Debug" IPC service. Prior to that, the previous is uninstalled.
This doesn't work if one doesn't have a previous "Debug" IPC service so
the `install` command only works for devs that have at least run it once
with that part of the function commented out. To improve this Dev UX, we
don't abort if we can't uninstall the previous one.
Bumps [sd-notify](https://github.com/lnicola/sd-notify) from 0.4.3 to
0.4.5.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/lnicola/sd-notify/blob/master/CHANGELOG.md">sd-notify's
changelog</a>.</em></p>
<blockquote>
<h2>[0.4.5] - 2025-01-18</h2>
<h3>Fixed</h3>
<ul>
<li>fixed a dubious transmute between different slice types</li>
</ul>
<h2>[0.4.4] - 2025-01-16</h2>
<h3>Added</h3>
<ul>
<li>added <code>NotifyState::MonotonicUsec</code>, for use with
<code>Type=notify-reload</code></li>
</ul>
</blockquote>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This publishes the Windows headless client using the same convention set
forth by the Linux headless client.
Docs and website changes will come in a subsequent PR.
Related: #3782
Resolves: #8046
When the IPC service is shut down gracefully (i.e. purposely), we send a
`TerminatingGracefully` message over the IPC channel. This allows the GUI
to handle this case differently from a crash.
On Linux, this is achieved by reacting to signals that are sent to the
IPC process. Windows however doesn't send any signals to services.
Instead, we get an event that we are being shut down.
Currently, this event is handled separately from the signal handler and
the signal handler does nothing on Windows. To make this more uniform
and allow graceful shutdown of the IPC service on Windows, we introduce
a second constructor for the `Terminate` signal abstraction that is
already hooked up to the correct logic.
As it turns out, the effort in #7104 was not a good idea. By logging
errors as values, most of our Sentry reports have the same title and
thus cannot be differentiated from within the overview at all. To fix
this, we stringify errors with all their sources whenever they get
logged. This ensures log messages are unique and all Sentry issues will
have a useful title.
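For illustration, stringifying an error together with its sources could look roughly like this (a sketch, not the exact code):

```rust
// Sketch: render the full source chain into one string so each distinct
// error produces a distinct log message (and thus a distinct Sentry title).
fn error_with_sources(error: &dyn std::error::Error) -> String {
    let mut message = error.to_string();
    let mut source = error.source();

    while let Some(cause) = source {
        message.push_str(": ");
        message.push_str(&cause.to_string());
        source = cause.source();
    }

    message
}
```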
As part of fixing #6777, we added a code path to explicitly set
nameservers on the tunnel interface. This was necessary to make DNS
resources work under WSL. This same code also seems to be causing
issues where this particular registry key is not available on a user's
machine.
DNS resources outside of WSL will also work without this, therefore we
make this part optional to at least have something working on the user's
machine.
Related: #7962.
At present, the GUI client uses a separate task for reading messages
from the IPC connection and forwards them to another channel. The other
end of this channel is then used within the controller to actually react
to IPC messages.
We can simplify this by removing the intermediary task and processing
the messages from the IPC connection directly.
At present, the file logger for all Rust code starts each logfile with
`connlib.`. This is very confusing when exporting the logs from the GUI
client because even the logs from the client itself will start with
`connlib.`. To fix this, we make the base file name of the log file
configurable.
The current CLI of the headless-client allows passing the token as a
positional parameter in addition to an env variable. This can be very
confusing if you make a spelling error in the _command_ that you are
trying to pass to the CLI, i.e. `standalone`. A misspelled command will
be interpreted as the token to use to connect to the portal without any
warning that it is similar to a command. The env variable
`FIREZONE_TOKEN` is completely ignored in that case.
To fix this, we remove the ability to pass the token via stdin. The
token should instead be set via an env variable or read from a file at
`FIREZONE_TOKEN_PATH`.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
For a while now, `connlib` has been calling these two callbacks right
after each other because the internal event already bundles all the
information about the TUN device. With this PR, we also merge the two
callback functions in the layers above `connlib` itself.
Resolves: #6182.
The application-split itself doesn't really warrant having two different
Sentry projects.
1. The location of the panic / log already tells us which component is
failing.
2. Both of the projects are built with Rust so the same "platform"
setting applies.
3. Reducing the number of Sentry projects makes things easier to manage.
4. The binaries are started as independent processes, so the two Sentry
contexts don't interfere.
What we should keep in mind is that one instance of an application will
now log into Sentry twice using the same DSN. I _think_ this means that
the number of sessions listed in Sentry will be double the number of
actual client-runs. The same is true for the Apple client though and
once we integrate Sentry for Android, the same will apply there so
relative to each other, those numbers still make sense.
The current way this is implemented is a bit tricky to read. By
splitting out a dedicated function and adding some logging, it becomes
more apparent what we do here.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Due to how we currently initialise telemetry in the IPC service, I think
we are missing out on events when it _exits_ due to an error because we
don't explicitly stop the telemetry session. We have alerts from a fair
few users in Sentry where the IPC service appears to stop / disappear
but there are no corresponding events for the IPC service.
In case an upstream DNS server responds with a payload that exceeds the
available buffer space of an IP packet, we need to truncate the
response. Currently, this truncation uses the **wrong** constant to
check for the maximum allowed length. Instead of the
`MAX_DATAGRAM_PAYLOAD`, we actually need to check against a limit that
is less than the MTU because the IP layer and the UDP layer both add
overhead.
To fix this, we introduce such a constant and provide additional
documentation on the remaining ones to hopefully avoid future errors.
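Roughly, the relationship between the constants looks like this (names and the MTU value are assumptions for illustration):

```rust
// Sketch: the usable DNS payload of a UDP response is bounded by the tunnel
// MTU minus the IP and UDP headers, not by the maximum datagram payload.
const MTU: usize = 1280; // assumed tunnel MTU
const IPV6_HEADER_SIZE: usize = 40; // worst case of the two IP versions
const UDP_HEADER_SIZE: usize = 8;

/// Upper bound for a DNS response carried in a single UDP packet over the tunnel.
const MAX_UDP_DNS_PAYLOAD: usize = MTU - IPV6_HEADER_SIZE - UDP_HEADER_SIZE;

fn needs_truncation(response: &[u8]) -> bool {
    // A real implementation would also set the TC bit when truncating.
    response.len() > MAX_UDP_DNS_PAYLOAD
}
```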
In order for Sentry to parse our releases as semver, they need to be in
the form of `package@version` [0]. Without this, the feature of "Mark
this issue as resolved in the _next_ version" doesn't work properly
because Sentry compares versions by when it first saw them instead of
parsing the semver string itself. We test versions prior to releasing
them, meaning Sentry learns about a 1.4.0 version before it is actually
released. This causes false-positive "regressions" even though they are
fixed in a later (as per semver) release.
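With the Rust SDK, this format can be produced by `sentry::release_name!()`, which expands to `<crate-name>@<crate-version>`; a minimal sketch:

```rust
// Sketch: initialise Sentry with a `package@version` release name.
fn init_telemetry(dsn: &str) -> sentry::ClientInitGuard {
    sentry::init((
        dsn,
        sentry::ClientOptions {
            release: sentry::release_name!(), // e.g. "connlib@1.4.0"
            ..Default::default()
        },
    ))
}
```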
This creates some redundancy with the different DSNs that we are already
using. I think it would make sense to consider merging the two projects
we have for the GUI client for example. That is really just one project
that happens to run as two binaries.
For all other projects, I think the separation still makes sense because
we e.g. may add Sentry to the "host" applications of Android and
MacOS/iOS as well. For those, we would reuse the DSN and thus funnel the
issues into the same Sentry project.
As per Sentry's docs, releases are organisation-wide and therefore need
a package identifier to be grouped correctly.
[0]:
https://docs.sentry.io/platforms/javascript/configuration/releases/#bind-the-version
## Context
At present, we only have a single thread that reads and writes to the
TUN device on all platforms. On Linux, it is possible to open the file
descriptor of a TUN device multiple times by setting the
`IFF_MULTI_QUEUE` option using `ioctl`. Using multi-queue, we can then
spawn multiple threads that concurrently read and write to the TUN
device. This is critical for achieving better throughput.
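For illustration, opening an additional queue on Linux could look roughly like this (constants from `<linux/if_tun.h>`; not the actual Firezone code):

```rust
// Sketch: every successful TUNSETIFF with IFF_MULTI_QUEUE on a fresh
// /dev/net/tun fd attaches another queue to the same interface.
use std::fs::{File, OpenOptions};
use std::os::fd::AsRawFd;

const TUNSETIFF: libc::c_ulong = 0x4004_54ca;
const IFF_TUN: libc::c_short = 0x0001;
const IFF_NO_PI: libc::c_short = 0x1000;
const IFF_MULTI_QUEUE: libc::c_short = 0x0100;

#[repr(C)]
struct IfReq {
    ifr_name: [u8; libc::IFNAMSIZ],
    ifr_flags: libc::c_short,
    _pad: [u8; 22], // remainder of the `ifreq` union
}

fn open_queue(name: &str) -> std::io::Result<File> {
    assert!(name.len() < libc::IFNAMSIZ);

    let file = OpenOptions::new().read(true).write(true).open("/dev/net/tun")?;

    let mut ifr = IfReq {
        ifr_name: [0; libc::IFNAMSIZ],
        ifr_flags: IFF_TUN | IFF_NO_PI | IFF_MULTI_QUEUE,
        _pad: [0; 22],
    };
    ifr.ifr_name[..name.len()].copy_from_slice(name.as_bytes());

    if unsafe { libc::ioctl(file.as_raw_fd(), TUNSETIFF, &ifr) } < 0 {
        return Err(std::io::Error::last_os_error());
    }

    Ok(file)
}
```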
## Solution
`IFF_MULTI_QUEUE` is a Linux-only thing and therefore only applies to
headless-client, GUI-client on Linux and the Gateway (it may also be
possible on Android, I haven't tried). As such, we need to first change
our internal abstractions a bit to move the creation of the TUN thread
to the `Tun` abstraction itself. For this, we change the interface of
`Tun` to the following:
- `poll_recv_many`: An API, inspired by tokio's `mpsc::Receiver` where
multiple items in a channel can be batch-received.
- `poll_send_ready`: Mimics the API of `Sink` to check whether more
items can be written.
- `send`: Mimics the API of `Sink` to actually send an item.
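A rough sketch of that interface (signatures assumed, `IpPacket` stands in for the actual packet type):

```rust
use std::io;
use std::task::{Context, Poll};

struct IpPacket(Vec<u8>); // placeholder packet type

trait Tun: Send + 'static {
    /// Receive up to `max` packets into `buf`, returning how many were read.
    /// Modelled after tokio's `mpsc::Receiver::poll_recv_many`.
    fn poll_recv_many(
        &mut self,
        cx: &mut Context<'_>,
        buf: &mut Vec<IpPacket>,
        max: usize,
    ) -> Poll<usize>;

    /// `Sink`-style readiness check: can another packet be written?
    fn poll_send_ready(&mut self, cx: &mut Context<'_>) -> Poll<io::Result<()>>;

    /// Send one packet; only call after `poll_send_ready` returned `Ready(Ok(()))`.
    fn send(&mut self, packet: IpPacket) -> io::Result<()>;
}
```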
With these APIs in place, we can implement various (performance)
improvements for the different platforms.
- On Linux, this allows us to spawn multiple threads to read and write
from the TUN device and send all packets into the same channel. The `Io`
component of `connlib` then uses `poll_recv_many` to read batches of up
to 100 packets at once. This ties in well with #7210 because we can then
use GSO to send the encrypted packets in single syscalls to the OS.
- On Windows, we already have a dedicated recv thread because `WinTun`'s
most-convenient API uses blocking IO. As such, we can now also tie into
that by batch-receiving from this channel.
- In addition to using multiple threads, this API now also uses correct
readiness checks on Linux, Darwin and Android to uphold backpressure in
case we cannot write to the TUN device.
## Configuration
Local testing has shown that 2 threads give the best performance for a
local `iperf3` run. I suspect this is because there is only so much
traffic that a single application (i.e. `iperf3`) can generate. With
more than 2 threads, the throughput actually drops drastically because
`connlib`'s main thread is too busy with lock-contention and triggering
`Waker`s for the TUN threads (which mostly idle around if there are 4+
of them). I've made it configurable on the Gateway though so we can
experiment with this during concurrent speedtests etc.
In addition, switching `connlib` to a single-threaded tokio runtime
further increased the throughput. I suspect due to less task / context
switching.
## Results
Local testing with `iperf3` shows some very promising results. We now
achieve a throughput of 2+ Gbit/s.
```
Connecting to host 172.20.0.110, port 5201
Reverse mode, remote host 172.20.0.110 is sending
[ 5] local 100.80.159.34 port 57040 connected to 172.20.0.110 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 274 MBytes 2.30 Gbits/sec
[ 5] 1.00-2.00 sec 279 MBytes 2.34 Gbits/sec
[ 5] 2.00-3.00 sec 216 MBytes 1.82 Gbits/sec
[ 5] 3.00-4.00 sec 224 MBytes 1.88 Gbits/sec
[ 5] 4.00-5.00 sec 234 MBytes 1.96 Gbits/sec
[ 5] 5.00-6.00 sec 238 MBytes 2.00 Gbits/sec
[ 5] 6.00-7.00 sec 229 MBytes 1.92 Gbits/sec
[ 5] 7.00-8.00 sec 222 MBytes 1.86 Gbits/sec
[ 5] 8.00-9.00 sec 223 MBytes 1.87 Gbits/sec
[ 5] 9.00-10.00 sec 217 MBytes 1.82 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 2.30 GBytes 1.98 Gbits/sec 22247 sender
[ 5] 0.00-10.00 sec 2.30 GBytes 1.98 Gbits/sec receiver
iperf Done.
```
This is a pretty solid improvement over what is in `main`:
```
Connecting to host 172.20.0.110, port 5201
[ 5] local 100.65.159.3 port 56970 connected to 172.20.0.110 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 90.4 MBytes 758 Mbits/sec 1800 106 KBytes
[ 5] 1.00-2.00 sec 93.4 MBytes 783 Mbits/sec 1550 51.6 KBytes
[ 5] 2.00-3.00 sec 92.6 MBytes 777 Mbits/sec 1350 76.8 KBytes
[ 5] 3.00-4.00 sec 92.9 MBytes 779 Mbits/sec 1800 56.4 KBytes
[ 5] 4.00-5.00 sec 93.4 MBytes 783 Mbits/sec 1650 69.6 KBytes
[ 5] 5.00-6.00 sec 90.6 MBytes 760 Mbits/sec 1500 73.2 KBytes
[ 5] 6.00-7.00 sec 87.6 MBytes 735 Mbits/sec 1400 76.8 KBytes
[ 5] 7.00-8.00 sec 92.6 MBytes 777 Mbits/sec 1600 82.7 KBytes
[ 5] 8.00-9.00 sec 91.1 MBytes 764 Mbits/sec 1500 70.8 KBytes
[ 5] 9.00-10.00 sec 92.0 MBytes 771 Mbits/sec 1550 85.1 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 917 MBytes 769 Mbits/sec 15700 sender
[ 5] 0.00-10.00 sec 916 MBytes 768 Mbits/sec receiver
iperf Done.
```
In order to release the new control protocol to users, we need to bump
the versions of the clients to 1.4.0. The portal has a version gate to
only select gateways with version >= 1.4.0 for clients >= 1.4.0. Thus,
bumping these versions can only happen once testing has completed and
the gateway has actually been released as 1.4.0.
Co-authored-by: Jamil Bou Kheir <jamilbk@users.noreply.github.com>
At present, the GUI client shares the current log-directives with the
IPC service via the file system. Supposedly, this has been done to allow
the IPC service to start back up with the same log filter as before.
This behaviour appears to be buggy though as we are receiving a fair
number of error reports where this file is not writable.
Instead of relying on the file system to communicate, we send the
current log-directives to the IPC service as soon as we start up. The
IPC service then uses the file system as a cache for that log string and
re-applies it on the next startup. This way, no two programs need to read
/ write the same file. The IPC service runs with higher privileges, so
this should resolve the permission errors we are seeing in Sentry.
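A hypothetical sketch of that flow (message and function names are made up; the real IPC types differ):

```rust
// Sketch: the GUI pushes its current directives right after connecting; the
// IPC service applies them and caches them on disk for its next start.
use std::path::Path;

enum ClientMsg {
    SetLogDirectives(String),
    // ... other messages elided
}

fn handle_message(msg: ClientMsg, cache_path: &Path) -> anyhow::Result<()> {
    match msg {
        ClientMsg::SetLogDirectives(directives) => {
            apply_directives(&directives)?; // e.g. via a tracing reload-handle
            std::fs::write(cache_path, &directives)?; // cache for the next startup
        }
    }

    Ok(())
}

fn apply_directives(_directives: &str) -> anyhow::Result<()> {
    Ok(()) // placeholder
}
```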
This PR intends to be a pure refactoring, i.e. no behaviour change. It
simplifies a few aspects of the GUI controller event-loop by getting rid
of the `select!` macro. We also remove some indirection of the
`gui_controller::Builder`.