Currently, when `connlib`'s log file gets deleted, logs are silently lost
until the corresponding process is restarted. This is painful for users
because it means restarting the IPC service or Network Extension.
Instead, we can simply check whether the log file exists prior to
writing to it and re-create it if it doesn't.
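A minimal sketch of the idea (not the actual file logger; the helper name
and signature are illustrative):

```rust
use std::fs::{File, OpenOptions};
use std::io::{self, Write};
use std::path::Path;

fn write_log_line(path: &Path, file: &mut File, line: &str) -> io::Result<()> {
    if !path.exists() {
        // The file was deleted out from under us; re-create it and swap
        // out the now-stale handle.
        *file = OpenOptions::new().create(true).append(true).open(path)?;
    }

    file.write_all(line.as_bytes())
}
```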
Resolves: #6850
Related: #7569
When working on the Rust code of Firezone from a macOS computer, it is
useful to have pretty much all of the code at least compile in order to
detect problems early. Eventually, once we target features like a
headless macOS client, some of these stubs will actually be filled in
and become functional.
Having multiple threads for reading and writing the TUN device can cause
packet re-ordering on the client. All other clients use a single TUN
thread, so aligning this value gives Firezone more consistent behaviour
across all platforms.
This PR adds opentelemetry-based packet counter metrics to `connlib`. By
default, the collection of these metrics is disabled. Without a
registered metrics-provider, gathering these metrics is effectively a
no-op. They will still incur 1 or 2 function calls per packet, but that
should be negligible compared to other operations such as encryption /
decryption.
With this system in place, we can in the future add more metrics to make
debugging easier.
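A hedged sketch of what such a counter looks like with the `opentelemetry`
crate; the meter and counter names are illustrative, and the exact builder
method differs across crate versions:

```rust
use opentelemetry::{global, KeyValue};

fn count_packet() {
    // `global::meter` returns a no-op meter unless a MeterProvider is registered.
    let meter = global::meter("connlib");

    // `.build()` in recent opentelemetry versions; older releases used `.init()`.
    let tx_packets = meter.u64_counter("packets_tx").build();

    // Without a registered provider this is effectively a no-op,
    // costing only a couple of function calls per packet.
    tx_packets.add(1, &[KeyValue::new("direction", "tx")]);
}
```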
Bumps [windows-service](https://github.com/mullvad/windows-service-rs)
from 0.7.0 to 0.8.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/mullvad/windows-service-rs/blob/main/CHANGELOG.md">windows-service's
changelog</a>.</em></p>
<blockquote>
<h2>[0.8.0] - 2025-02-19</h2>
<h3>Added</h3>
<ul>
<li>Add missing ServiceAccess flags <code>READ_CONTROL</code>,
<code>WRITE_DAC</code> and <code>WRITE_OWNER</code>.</li>
</ul>
<h3>Changed</h3>
<ul>
<li>Upgrade <code>windows-sys</code> dependency to 0.59 and bump the
MSRV to 1.60.0.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ffaaf80ae3"><code>ffaaf80</code></a>
Bump version to 0.8.0 and add changelog</li>
<li><a
href="c6afc56e86"><code>c6afc56</code></a>
Bump windows-sys version to 0.59</li>
<li><a
href="96efa4ee71"><code>96efa4e</code></a>
Merge commit '9dc8af8'</li>
<li><a
href="9dc8af8513"><code>9dc8af8</code></a>
Add missing standard access rights</li>
<li>See full diff in <a
href="https://github.com/mullvad/windows-service-rs/compare/v0.7.0...v0.8.0">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
As a follow-up from #7959, we can now simplify the error handling a fair
bit as all codepaths that can fail in the client are threaded back to
the main function.
Within the event-loop, we already react to the channel being closed,
which happens when the `Sender` within the `Session` gets dropped. As
such, there is no need to send an explicit `Stop` command; dropping the
`Session` is equivalent.
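Conceptually, the event-loop side looks something like the following
sketch; the command type and names are illustrative, not connlib's
actual ones:

```rust
use tokio::sync::mpsc;

enum Command {
    Reset,
    SetDns(Vec<std::net::IpAddr>),
}

async fn event_loop(mut rx: mpsc::Receiver<Command>) {
    while let Some(cmd) = rx.recv().await {
        // Handle the command as before ...
        let _ = cmd;
    }

    // `recv()` returned `None`: every `Sender` (i.e. the `Session`) was
    // dropped, so we fall through and shut down gracefully.
}
```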
As it turns out, `swift-bridge` already calls `Drop` for us when the
last pointer is set to `nil`:
280a9dd999/swift/apple/FirezoneNetworkExtension/Connlib/Generated/connlib-client-apple/connlib-client-apple.swift (L24-L28)
Thus, we can also remove the explicit `disconnect` call on
`WrappedSession` entirely.
In order for search-domains to work on Windows, we need to set the
`SearchList` registry key for our interface. This will result in Windows
sending us a DNS query with the expanded domain name from the search
list which we can then process like normal DNS queries.
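A hedged sketch of setting that key with the `winreg` crate; the
interface GUID handling and function name are illustrative, but the
per-interface `Tcpip\Parameters\Interfaces` location is where Windows
looks for the `SearchList` value:

```rust
use winreg::enums::{HKEY_LOCAL_MACHINE, KEY_SET_VALUE};
use winreg::RegKey;

fn set_search_list(interface_guid: &str, search_domain: &str) -> std::io::Result<()> {
    let path = format!(
        r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{interface_guid}"
    );

    // Open the per-interface key with write access and set the DNS suffix list.
    let key = RegKey::predef(HKEY_LOCAL_MACHINE).open_subkey_with_flags(&path, KEY_SET_VALUE)?;
    key.set_value("SearchList", &search_domain)
}
```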
Related: #8410
A sizeable chunk of Firezone's Rust components deal with parsing,
manipulating and emitting DNS queries and responses. The API surface of
DNS is quite large and to make handling of all corner-cases easier, we
depend on the `domain` library to do the heavy-lifting for us.
For better or worse, `domain` follows a lazy-parsing approach. Thus,
creating a new DNS message doesn't actually verify that it is in fact
valid. Within Firezone, we make several assumptions around DNS messages,
such as that they will only ever contain a single question.
Historically, DNS allows for multiple questions per query but in
practice, nobody uses that.
Due to how we handle DNS in Firezone, manipulating these messages
happens in multiple places. That, combined with the lazy-parsing
approach of `domain`, warrants having our own `dns-types` library that
wraps `domain` and provides types that offer the interface we need in
the rest of the codebase.
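A rough sketch of the wrapper idea, assuming the `domain` crate's lazy
`Message` type; the `Query` type and error strings are illustrative, not
the real `dns-types` API:

```rust
use domain::base::Message;

pub struct Query<O> {
    inner: Message<O>,
}

impl<O: AsRef<[u8]>> Query<O> {
    /// Parses a DNS query and eagerly enforces our single-question invariant,
    /// instead of deferring validation until the message is used.
    pub fn parse(octets: O) -> Result<Self, &'static str> {
        let inner = Message::from_octets(octets).map_err(|_| "malformed DNS message")?;

        if inner.header_counts().qdcount() != 1 {
            return Err("expected exactly one question");
        }

        Ok(Self { inner })
    }
}
```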
Resolves: #7019
Search domains are a way of performing a DNS lookup without typing the
fully-qualified domain name. For example, with a search domain of
`example.com`, performing a DNS query for `app` will automatically
expand the query to `app.example.com`. At present, this doesn't work
with Firezone because there is no way to configure an account-wide
search-domain.
With this PR, we extend the `Interface` message sent by the portal to
also include an optional `search_domain` field that must be a valid
domain name. If set, `connlib`'s DNS stub resolver will now append this
domain to all single-label queries and match the resulting domain
against all active DNS resources.
On Linux - with `systemd-resolved` as the DNS backend - we need to set
the search domain on the TUN interface as well and enable LLMNR in order
to intercept these queries. However, `resolved` expands the query for
us, meaning that with this configuration we don't actually receive a
single-label query in `connlib`. Instead, if `example.com` is our search
domain and we type `host app` or `dig +search app`, we directly see a
query for `app.example.com`.
macOS has a similar system but with a different fallback. There, the
operating system will first try all configured search domains on the
system (typically just the ones set prior to Firezone starting) and send
queries for the resulting FQDNs to all resolvers. If none of the
resolvers (including Firezone's stub resolver) return results, it sends
the single-label query directly to the primary resolver. To handle this
case, Firezone needs to know about the search domain and expand it
itself when it receives the single-label query. In the future, we may
want to look into how we can configure macOS such that it performs this
expansion for us.
On Windows and Android, queries for a single-label domain will be
directly sent to Firezone's stub resolver where we then hit the same
codepath as explained above.
Specifically, the way this codepath works is that if we receive a
single-label query AND we have a search-domain set, we expand it and
match that particular query against our list of resources. In every
other case, we continue on with the single-label domain.
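A minimal sketch of that expansion step; the function name and argument
shapes are illustrative, not connlib's actual code:

```rust
fn maybe_expand(query_name: &str, search_domain: Option<&str>) -> String {
    let is_single_label = !query_name.trim_end_matches('.').contains('.');

    match (is_single_label, search_domain) {
        // Single-label query AND a search domain is set: expand it and match
        // the expanded name against our list of resources.
        (true, Some(domain)) => format!("{}.{domain}", query_name.trim_end_matches('.')),
        // In every other case, continue with the name as-is.
        _ => query_name.to_owned(),
    }
}
```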
Related: #8365
Fixes: #8377
On Linux, logs sent to stdout from a systemd service are automatically
captured by `journald`. This is where most admins expect logs to be,
and frankly, any kind of debugging of Firezone is much easier if you can
run `journalctl -efu firezone-client-ipc.service` in a terminal and
check what the IPC service is doing.
On Windows, stdout from a service is (unfortunately) ignored.
To achieve this and also allow dynamically changing the log-filter, I
had to introduce a (long-overdue) abstraction over tracing's "reload"
layer that allows us to combine multiple reload-handles into one.
Unfortunately, neither `reload::Layer` nor `reload::Handle` implements
`Clone`, which makes this unnecessarily difficult.
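A rough sketch of one way to combine multiple reload handles, not the
PR's actual abstraction: each concrete `reload::Handle` is erased behind
a boxed closure so a single `reload` call can update every layer. It
assumes `EnvFilter`-based layers:

```rust
use tracing_subscriber::{reload, EnvFilter};

type ReloadFn = Box<dyn Fn(EnvFilter) -> Result<(), reload::Error> + Send + Sync>;

pub struct CombinedReloadHandle {
    handles: Vec<ReloadFn>,
}

impl CombinedReloadHandle {
    // Callers wrap each concrete handle, e.g.
    // `Box::new(move |filter| stdout_handle.reload(filter))`.
    pub fn new(handles: Vec<ReloadFn>) -> Self {
        Self { handles }
    }

    /// Applies the same filter directives to every registered layer.
    pub fn reload(&self, directives: &str) -> Result<(), reload::Error> {
        for handle in &self.handles {
            handle(EnvFilter::new(directives))?;
        }

        Ok(())
    }
}
```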
Related: #8173
Every time we start a new session, our telemetry context potentially
changes, i.e. the user may sign into a new account. This should ensure
that both the IPC service and the GUI always use the most up-to-date
`account_slug` as part of Sentry events. In addition, this will also set
the `account_slug` for clients that just signed in. Previously, the
`account_slug` would only get populated on the next start of the client.
At present, `connlib` communicates with its host app via callbacks.
These callbacks are executed synchronously as part of `connlib`'s
event-loop, meaning `connlib` cannot do anything else whilst the
callback is executing in the host app. Additionally, this callback runs
within the `Future` that represents `connlib` and thus runs on a `tokio`
worker thread.
Attempting to interact with the session from within the callback can
lead to panics, for example when `Session::disconnect` is called which
uses `Runtime::block_on`. This isn't allowed by `tokio`: You cannot
block on the execution of an async task from within one of the worker
threads.
To solve both of these problems, we introduce a thread-pool of size 1
that is responsible for executing `connlib` callbacks. Not only does
this allow `connlib` to perform more work such as routing packets or
processing portal messages, it also means that it is not possible for the
host app to cause these panics within the `tokio` runtime because the
callbacks run on a different thread.
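Conceptually, a size-1 "thread pool" for callbacks boils down to a
dedicated thread draining a channel of boxed closures; the names below
are illustrative, not connlib's actual API:

```rust
use std::sync::mpsc;
use std::thread;

type Callback = Box<dyn FnOnce() + Send + 'static>;

pub struct CallbackExecutor {
    tx: mpsc::Sender<Callback>,
}

impl CallbackExecutor {
    pub fn new() -> Self {
        let (tx, rx) = mpsc::channel::<Callback>();

        thread::spawn(move || {
            // Runs outside the tokio runtime, so callbacks may block freely.
            for callback in rx {
                callback();
            }
        });

        Self { tx }
    }

    /// Queues a callback; `connlib`'s event-loop returns immediately.
    pub fn spawn(&self, callback: impl FnOnce() + Send + 'static) {
        let _ = self.tx.send(Box::new(callback));
    }
}
```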
The error code we see here means "There are no more endpoints available
from the endpoint mapper." This has something to do with Windows'
internal RPC communication between components. DNS deactivation happens
on a best-effort basis, and everything else appears to be working just
fine despite this error.
It appears to happen when we shut down our own service, so perhaps it is
just a race condition.
In order to test changes to the IPC service on Windows more easily, the
IPC service binary offers an `install` command that installs a new
"Debug" IPC service. Prior to that, the previous one is uninstalled.
This doesn't work if there is no previous "Debug" IPC service, so the
`install` command only works for devs that have run it at least once
with that part of the function commented out. To improve this dev UX, we
no longer abort if we can't uninstall the previous one.
Bumps [sd-notify](https://github.com/lnicola/sd-notify) from 0.4.3 to
0.4.5.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/lnicola/sd-notify/blob/master/CHANGELOG.md">sd-notify's
changelog</a>.</em></p>
<blockquote>
<h2>[0.4.5] - 2025-01-18</h2>
<h3>Fixed</h3>
<ul>
<li>fixed a dubious transmute between different slice types</li>
</ul>
<h2>[0.4.4] - 2025-01-16</h2>
<h3>Added</h3>
<ul>
<li>added <code>NotifyState::MonotonicUsec</code>, for use with
<code>Type=notify-reload</code></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="70a941baf1"><code>70a941b</code></a>
Bump to 0.4.5</li>
<li><a
href="6958ce12e4"><code>6958ce1</code></a>
Merge pull request <a
href="https://redirect.github.com/lnicola/sd-notify/issues/15">#15</a>
from tbu-/pr_slice_transmute</li>
<li><a
href="1e938f2fd5"><code>1e938f2</code></a>
Use <code>slice::from_raw_parts</code> instead of
<code>mem::transmute</code></li>
<li><a
href="cb4459a4bb"><code>cb4459a</code></a>
Prepare for new release</li>
<li><a
href="8eb2c5cab3"><code>8eb2c5c</code></a>
Add NotifyState::MonotonicUsec and helper</li>
<li><a
href="5462699164"><code>5462699</code></a>
Add NotifyState::MonotonicUsec and helper</li>
<li><a
href="6990e3733f"><code>6990e37</code></a>
Fix clippy warnings</li>
<li>See full diff in <a
href="https://github.com/lnicola/sd-notify/compare/v0.4.3...v0.4.5">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This publishes the Windows headless client using the same convention set
forth by the Linux headless client.
Docs and website changes will come in a subsequent PR.
Related: #3782
Resolves: #8046
When the IPC service is shut down gracefully (i.e. purposely), we send a
`TerminatingGracefully` message over the IPC channel. This allows the
GUI to handle this case differently from a crash.
On Linux, this is achieved by reacting to signals that are sent to the
IPC process. Windows, however, doesn't send any signals to services.
Instead, we get an event that we are being shut down.
Currently, this event is handled separately from the signal handler and
the signal handler does nothing on Windows. To make this more uniform
and allow graceful shutdown of the IPC service on Windows, we introduce
a second constructor for the `Terminate` signal abstraction that is
already hooked up to the correct logic here.
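An illustrative sketch only (not the actual `Terminate` abstraction): a
signal that can be constructed from OS signals on Linux or fired
manually by the Windows service-control handler when it receives the
stop event:

```rust
use std::sync::Arc;
use tokio::sync::Notify;

pub struct Terminate {
    notify: Arc<Notify>,
}

impl Terminate {
    /// Second constructor: returns the signal together with a handle that the
    /// Windows service-control handler can trigger on `ServiceControl::Stop`.
    pub fn from_notify() -> (Self, Arc<Notify>) {
        let notify = Arc::new(Notify::new());

        (Self { notify: notify.clone() }, notify)
    }

    /// Resolves once termination has been requested.
    pub async fn recv(&self) {
        self.notify.notified().await;
    }
}
```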
As it turns out, the effort in #7104 was not a good idea. By logging
errors as values, most of our Sentry reports end up with the same title
and thus cannot be differentiated in the overview at all. To fix this,
we stringify errors with all their sources whenever they get logged.
This ensures log messages are unique and all Sentry issues have a useful
title.
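A sketch of flattening an error together with all its sources into a
single string for logging; the helper name is illustrative:

```rust
use std::error::Error;

fn stringify_with_sources(error: &dyn Error) -> String {
    let mut message = error.to_string();
    let mut source = error.source();

    // Walk the chain of sources and append each one to the message.
    while let Some(cause) = source {
        message.push_str(": ");
        message.push_str(&cause.to_string());
        source = cause.source();
    }

    message
}
```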
As part of fixing #6777, we added a code path to explicitly set
nameservers on the tunnel interface. This was necessary to make DNS
resources work under WSL. This same code also seems to be causing
issues where this particular registry key is not available on a user's
machine.
DNS resources outside of WSL will also work without this, therefore we
make this part optional so that at least something works on the user's
machine.
Related: #7962.
At present, the GUI client uses a separate task for reading messages
from the IPC connection and forwards them to another channel. The other
end of this channel is then used within the controller to actually react
to IPC messages.
We can simplify this by removing the intermediary task and processing
the messages from the IPC connection directly.
At present, the file logger for all Rust code starts each logfile with
`connlib.`. This is very confusing when exporting the logs from the GUI
client because even the logs from the client itself will start with
`connlib.`. To fix this, we make the base file name of the log file
configurable.
The current CLI of the headless-client allows passing the token as a
positional parameter in addition to an env variable. This can be very
confusing if you make a spelling error in the _command_ that you are
trying to pass to the CLI, e.g. `standalone`. A misspelled command will
be interpreted as the token used to connect to the portal, without any
warning that it resembles a command. The env variable `FIREZONE_TOKEN`
is completely ignored in that case.
To fix this, we remove the ability to pass the token via stdin. The
token should instead be set via an env variable or read from a file at
`FIREZONE_TOKEN_PATH`.
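A sketch of the resulting token-resolution order; the helper name is
illustrative and error handling is simplified:

```rust
use std::env;
use std::fs;

fn read_token() -> Option<String> {
    // 1. Prefer the FIREZONE_TOKEN env variable.
    if let Ok(token) = env::var("FIREZONE_TOKEN") {
        return Some(token);
    }

    // 2. Fall back to reading the file pointed to by FIREZONE_TOKEN_PATH.
    let path = env::var("FIREZONE_TOKEN_PATH").ok()?;
    fs::read_to_string(path).ok().map(|token| token.trim().to_owned())
}
```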
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
For a while now, `connlib` has been calling these two callbacks right
after each other because the internal event already bundles all the
information about the TUN device. With this PR, we also merge the two
callback functions in the layers above `connlib` itself.
Resolves: #6182.
The application-split itself doesn't really warrant having two different
Sentry projects.
1. The location of the panic / log already tells us which component is
failing.
2. Both of the projects are built with Rust so the same "platform"
setting applies.
3. Reducing the number of Sentry projects makes things easier to manage.
4. The binaries are started as independent processes, so the two Sentry
contexts don't interfere.
What we should keep in mind is that one instance of an application will
now log into Sentry twice using the same DSN. I _think_ this means that
the number of sessions listed in Sentry will be double the number of
actual client-runs. The same is true for the Apple client though, and
once we integrate Sentry for Android, the same will apply there, so
relative to each other, those numbers still make sense.