Bumps [windows-service](https://github.com/mullvad/windows-service-rs)
from 0.7.0 to 0.8.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/mullvad/windows-service-rs/blob/main/CHANGELOG.md">windows-service's
changelog</a>.</em></p>
<blockquote>
<h2>[0.8.0] - 2025-02-19</h2>
<h3>Added</h3>
<ul>
<li>Add missing ServiceAccess flags <code>READ_CONTROL</code>,
<code>WRITE_DAC</code> and <code>WRITE_OWNER</code>.</li>
</ul>
<h3>Changed</h3>
<ul>
<li>Upgrade <code>windows-sys</code> dependency to 0.59 and bump the
MSRV to 1.60.0.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ffaaf80ae3"><code>ffaaf80</code></a>
Bump version to 0.8.0 and add changelog</li>
<li><a
href="c6afc56e86"><code>c6afc56</code></a>
Bump windows-sys version to 0.59</li>
<li><a
href="96efa4ee71"><code>96efa4e</code></a>
Merge commit '9dc8af8'</li>
<li><a
href="9dc8af8513"><code>9dc8af8</code></a>
Add missing standard access rights</li>
<li>See full diff in <a
href="https://github.com/mullvad/windows-service-rs/compare/v0.7.0...v0.8.0">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This PR implements the "reverse path" of handling TURN traffic, i.e. UDP
datagrams that arrive on an allocation port and need to be wrapped in a
channel-data message to be sent to the TURN client.
In order to achieve that, I had to rewrite most of the TURN code to not
use the `etherparse` crate. I couldn't quite figure out the details but
the eBPF verifier rejected my code in mysterious ways that I didn't
understand. Commenting out random code-paths seemed to make it happy but
all code-paths combined caused an error. Eventually, I decided that we
simply have to use fewer abstractions to implement the same logic.
All the "parsing" code is now using types inspired by `network-types`.
The only modification here is that we use byte-arrays within our structs
in order to directly receive them in big-endian ordering.
`network-types` uses `u16`s and `u32`s, which get interpreted as
little-endian on x86. Instead of converting back and forth between
endiannesses, it is much simpler to construct those values with the
right endianness exactly where we need them. I opened an issue with
upstream which - if accepted - will allow us to remove our own structs
and depend on upstream again.
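As a rough sketch of what the byte-array approach looks like (field
names follow `network-types`' UDP header; the exact layout in our crate
may differ):

```rust
/// Sketch of a UDP header that stores each field as a byte array,
/// keeping the wire (big-endian) representation intact.
#[repr(C)]
pub struct UdpHdr {
    pub source: [u8; 2],
    pub dest: [u8; 2],
    pub len: [u8; 2],
    pub check: [u8; 2],
}

impl UdpHdr {
    /// Construct the source port in host byte order only where we need it.
    #[inline(always)]
    pub fn source_port(&self) -> u16 {
        u16::from_be_bytes(self.source)
    }
}
```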
I also had to aggressively add `#[inline(always)]` to several functions,
otherwise the compiler would not optimise away our function calls,
causing the linker and / or eBPF verifier to fail.
This PR also fixes numerous bugs that I've found in the already existing
eBPF code. The number of bugs makes me question how this has been
working so far at all!
- We did not swap the Ethernet source and destination MAC address when
re-routing the packet. The integration-test didn't catch this because it
only operates on the loopback interface. Further testing on staging
should allow us to confirm that this is indeed working now.
- The UDP checksum update did not incorporate the new src and dst port.
The integration-test didn't catch that because it has UDP
disabled. We need to have that disabled in the test because UDP
checksumming is typically offloaded to the NIC and packets on the
loopback interface never leave the device.
Related: https://github.com/vadorovsky/network-types/issues/32.
Related: #7518
As part of iterating on #8496, the API of `relay::Server` had changed
and I had commented out the regression tests to move more quickly. In later
iterations, those API changes were reverted but I forgot to uncomment
them.
As part of working on https://github.com/aya-rs/aya/pull/1228, which
this PR depends on, I had to force-push, which will break CI. Opening
this to fix it.
## Abstract
This pull-request implements the first stage of off-loading routing of
TURN data channel messages to the kernel via an eBPF XDP program. In
particular, the eBPF kernel implemented here **only** handles the
decapsulation of IPv4 data channel messages into their embedded UDP
payload. Implementation of other data paths, such as the receiving of
UDP traffic on an allocation and wrapping it in a TURN channel data
message is deferred to a later point for reasons explained further down.
As it stands, this PR implements the bare minimum for us to start
experimenting and benefiting from eBPF. It is already massive as it is
due to the infrastructure required for actually doing this. Let's dive
into it!
## A refresher on TURN channel-data messages
TURN specifies a channel-data message for relaying data between two
peers. A channel data message has a fixed 4-byte header:
- The first two bytes specify the channel number
- The second two bytes specify the length of the encapsulated payload
Like all TURN traffic, channel data messages run over UDP by default,
meaning this header sits at the very front of the UDP payload. This will
be important later.
After making an allocation with a TURN server (i.e. reserving a port on
the TURN server's interfaces), a TURN client can bind channels on that
allocation. As such, channel numbers are scoped to a client's
allocation. Channel numbers are allocated by the client within a given
range (0x4000 - 0x4FFF). When binding a channel, the client specifies
the remote peer's address to which data sent on the channel should be
forwarded.
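To make the header format concrete, here is a minimal user-space sketch
of parsing a channel-data header, using the layout and channel-number
range described above:

```rust
/// Parse the fixed 4-byte channel-data header from the front of a UDP payload.
/// Returns the channel number and the length of the encapsulated payload.
fn parse_channel_data_header(payload: &[u8]) -> Option<(u16, u16)> {
    let header = payload.get(..4)?;

    let channel = u16::from_be_bytes([header[0], header[1]]);
    let length = u16::from_be_bytes([header[2], header[3]]);

    // Channel numbers are allocated from 0x4000 - 0x4FFF; anything else
    // is not a channel-data message.
    if !(0x4000..=0x4FFF).contains(&channel) {
        return None;
    }

    Some((channel, length))
}
```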
Given this setup, when a TURN server receives a channel data message, it
first looks at the sender's IP + port to infer the allocation (a client
can only ever have 1 allocation at a time). Within that allocation, the
server then looks for the channel number and retrieves the target socket
address from that. The allocation itself is a port on the relay's
interface. With that, we can now "unpack" the payload of the channel
data message and rewrite it to the new receiver:
- The new source IP can be set from the old dst IP (when operating in
user-space mode this is irrelevant because we are working with the
socket API).
- The new source port is the client's allocation.
- The new destination IP is retrieved from the mapping stored for the
channel number.
- The new destination port is likewise retrieved from that mapping.
Last but not least, we remove the channel-data header from the UDP
payload and send out the packet. In other words, we need to cut off the
first 4 bytes of the UDP payload.
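Putting the rewrite steps together, a simplified user-space, IPv4-only
sketch of the address rewrite might look like this (the types are
illustrative, not the actual implementation):

```rust
use std::net::SocketAddrV4;

/// What the channel-number lookup yields (illustrative).
struct ChannelBinding {
    allocation_port: u16,
    peer: SocketAddrV4,
}

/// Compute the rewritten source and destination for a channel-data message
/// that arrived on `relay_addr` (the old destination of the packet).
fn rewrite_addresses(
    relay_addr: SocketAddrV4,
    binding: &ChannelBinding,
) -> (SocketAddrV4, SocketAddrV4) {
    // New source: the old destination IP plus the client's allocation port.
    let new_src = SocketAddrV4::new(*relay_addr.ip(), binding.allocation_port);
    // New destination: the peer recorded when the channel was bound.
    let new_dst = binding.peer;

    (new_src, new_dst)
}
```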
## User-space relaying
At present, we implement the above flow in user-space. This is tricky to
do because we need to bind _many_ sockets, one for each possible
allocation port (of which there can be 16383). The actual work to be
done on these packets is also extremely minimal. All we do is cut off
(or add on) the data-channel header. Benchmarks show that we spend
pretty much all of our time copying data between user-space and
kernel-space. Cutting this out should give us a massive increase in
performance.
## Implementing an eBPF XDP TURN router
eBPF has been shown to be a very efficient way of speeding up a TURN
server [0]. After many failed experiments (e.g. using TC instead of XDP)
and countless rabbit-holes, we have also arrived at the design
documented within the paper. Most notably:
- The eBPF program is entirely optional. We try to load it on startup,
but if that fails, we will simply use the user-space mode.
- Retaining the user-space mode is also important because under certain
circumstances, the eBPF kernel needs to pass on the packet, for example,
when receiving IPv4 packets with options. Those make the header
dynamically-sized which makes further processing difficult because the
eBPF verifier disallows indexing into the packet with data derived from
the packet itself.
- In order to add/remove the channel-data header, we shift the packet
headers backwards / forwards and leave the payload in place as the
packet headers are constant in size and can thus easily and cheaply be
copied out.
In order to perform the relaying flow explained above, we introduce maps
that are shared with user-space. These maps go from a tuple of
(client-socket, channel-number) to a tuple of (allocation-port,
peer-socket) and thus give us all the data necessary to rewrite the
packet.
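A sketch of what such a map could look like, assuming the `aya-ebpf`
crate (the type and map names here are made up for illustration; byte
arrays are used per the endianness discussion above):

```rust
use aya_ebpf::{macros::map, maps::HashMap};

#[repr(C)]
#[derive(Clone, Copy)]
pub struct ClientAndChannel {
    pub client_ip: [u8; 4],
    pub client_port: [u8; 2],
    pub channel_number: [u8; 2],
}

#[repr(C)]
#[derive(Clone, Copy)]
pub struct PortAndPeer {
    pub allocation_port: [u8; 2],
    pub peer_ip: [u8; 4],
    pub peer_port: [u8; 2],
}

/// Shared with user-space: entries are inserted / removed as channel
/// bindings are created and expire; the XDP program only reads them.
#[map]
static CHANNEL_BINDINGS: HashMap<ClientAndChannel, PortAndPeer> =
    HashMap::with_max_entries(65536, 0);
```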
## Integration with our relay
Last but not least, to actually integrate the eBPF kernel with our
relay, we need to extend the `Server` with two more events so we can
learn when channel bindings are created and when they expire. Using
these events, we can then update the eBPF maps accordingly and therefore
influence the routing behaviour in the kernel.
## Scope
What is implemented here is only one of several possible data paths.
Implementing the others isn't conceptually difficult but it does
increase the scope. Landing something that already works allows us to
gain experience running it in staging (and possibly production).
Additionally, I've hit some issues with the eBPF verifier when adding
more codepaths to the kernel. I expect those to be possible to resolve
given sufficient debugging but I'd like to do so after merging this.
---
Depends-On: #8506
Depends-On: #8507
Depends-On: #8500
Resolves: #8501
[0]: https://dl.acm.org/doi/pdf/10.1145/3609021.3609296
At present, the Gateway implements a NAT64 conversion that can convert
IPv4 packets to IPv6 and vice versa. Doing this efficiently creates a
fair amount of complexity within our `ip-packet` crate. In addition,
routing ICMP errors back through our NAT is also complicated by this
because we may have to translate the packet embedded in the ICMP error
as well.
The NAT64 module was originally conceived as a result of the new stub
resolver-based DNS architecture. When the Client resolves IPs for a
domain, it doesn't know whether the domain will actually resolve to IPv4
AND IPv6 addresses so it simply assigns 4 of each to every domain. Thus,
when receiving an IPv6 packet for such a DNS resource, the Gateway may
only have IPv4 addresses available and can therefore not route the
packet (unless it translates it).
This problem is not novel. In fact, an IP being unroutable or a
particular route disappearing happens all the time on the Internet. ICMP
was conceived to handle this problem and it is doing a pretty good job
at it. We can make use of that and simply return an ICMP unreachable
error back to the client whenever it picks an IP that we cannot map to
one that we resolved.
In this PR, we leave all of the NAT64 code intact and only add a
feature-flag that - when active - sends the aforementioned ICMP error.
While offline (and thus also in our tests), the feature-flag evaluates
to false. It is however set to `true` in the backend, meaning on staging
and later in production, we will send these ICMP errors.
Once this is rolled out and proves to work as intended, we can
simplify our codebase and rip out the NAT64 module. At that point,
we will also have to adapt the test-suite.
As a follow-up from #7959, we can now simplify the error handling a fair
bit as all codepaths that can fail in the client are threaded back to
the main function.
From the perspective of any application, Firezone is a layer-3 network
and will thus use the host's networking stack to form IP packets for
whichever application protocol is in use (UDP, TCP, etc). These packets
then get encapsulated into UDP packets by Firezone and sent to a
Gateway.
As a result of this design, the IP header seen by the networking stacks
of the Client and the receiving service is not visible to any
intermediary along the network path between the Client and Gateway.
In case this network path is congested and middleboxes such as routers
need to drop packets, they will look at the ECN bits in the IP header
(of the UDP packet generated by a Client or Gateway) and flip a bit in
case the previous value indicated support for ECN (`0b01` or `0b10`).
When received by a network stack that supports ECN, seeing `0b11` means
that the network path is congested and that it must reduce its
send/receive windows (or otherwise throttle the connection).
At present, this doesn't work with Firezone because of the
aforementioned encapsulation of IP packets. To support ECN, we
therefore need to:
- Copy ECN bits from a received IP packet to the datagram that
encapsulates it: This ensures that if the Client's network stack supports
ECN, we mirror that support on the wire.
- Copy ECN bits from a received datagram to the IP packet that is sent to
the TUN device: This ensures that if the "Congestion Experienced" bit
gets set along the network path between Client and Gateway, we reflect
that accordingly on the IP packet emitted by the TUN device.
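A minimal sketch of the bit manipulation involved (the ECN field is the
low two bits of the IPv4 TOS / IPv6 traffic-class byte; how the
resulting byte is applied to the socket or TUN packet is
platform-specific and omitted here):

```rust
/// The ECN field occupies the low two bits of the TOS / traffic-class byte.
const ECN_MASK: u8 = 0b11;

/// Copy the ECN bits from one TOS byte onto another, leaving the DSCP
/// (high six bits) untouched.
fn copy_ecn(from_tos: u8, to_tos: u8) -> u8 {
    (to_tos & !ECN_MASK) | (from_tos & ECN_MASK)
}
```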
Resolves: #3758
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil Bou Kheir <jamilbk@users.noreply.github.com>
When we receive a DNS query for a resource, we refresh the DNS resource
NAT on the Gateway by clearing the local state. This ensures that if any
of the DNS records have changed, those will be reflected in the new NAT
table on the Gateway.
I cannot fully confirm my theory but I have a hunch that under certain
circumstances, this would lead to loss of buffered packets, which in
turn leads to connections getting reset. I couldn't confirm that in my
testing though.
The issues I experienced with github.com suddenly stopped
🙃
Within Firezone's Rust codebase, we use the `etherparse` crate
extensively to parse network packets. To provide a more ergonomic API,
this is all encapsulated in our `ip-packet` crate.
For #7518, we need to write an eBPF kernel that parses and manipulates
network packets. Etherparse itself doesn't provide any facilities to
manipulate network packets. That is an open feature request:
https://github.com/JulianSchmid/etherparse/issues/9. For the packet
manipulation that we are doing in `connlib`, we already wrote certain
extensions to the `etherparse` crate but today, those are all within the
`ip-packet` crate.
In order to reuse that within the eBPF kernel, we cannot just depend on
`ip-packet` directly because eBPF is a no-std and no-alloc environment,
thus no crate in the dependency tree is allowed to depend on Rust's
std-lib. `etherparse` itself actually has an `std` feature flag that we
can turn off. Introducing the same in `ip-packet` would require a lot of
conditional-compilation gates using `#[cfg]`. It is much easier to just
introduce a new crate that houses all our in-house extensions to
`etherparse`. Eventually, we can hopefully upstream those which is
another motivator to separate this out.
Tauri needs a tokio runtime in order to spawn tasks. If we don't supply
one, it will start its own runtime. Given that we already start a
runtime, this is unnecessary.
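A sketch of what supplying our own runtime could look like
(`tauri::async_runtime::set` takes a tokio runtime handle; the
surrounding setup is illustrative):

```rust
fn main() -> anyhow::Result<()> {
    // The runtime we already create for the rest of the application.
    let runtime = tokio::runtime::Runtime::new()?;
    let _guard = runtime.enter();

    // Hand Tauri the existing handle so it doesn't start its own runtime.
    tauri::async_runtime::set(runtime.handle().clone());

    // ... build and run the Tauri app as usual ...

    Ok(())
}
```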
At present, the Windows and Linux GUI clients launch the Tauri
application via the `App::run` method. This function never returns.
Instead, whenever we request the Tauri app to exit, Tauri will
internally call `std::process::exit`, thus preventing ordinary clean-up
from happening.
Whilst we somehow managed to work around this particular part, having
the app exit the process internally also makes error handling and
reporting to the user difficult as there are now two parts in the code
where we need to handle errors:
- Before we start up the Tauri app
- Before we end the Tauri app (i.e. signal to it that we want to exit)
It would be much easier to understand if we could call into Tauri, let
it do its thing and upon a requested exit by the user, the called
function (i.e. `App::run`) simply returns again. After diving into the
inner workings of Tauri, we have achieved just that by adding a new
function to `App`: `App::run_return`
(https://github.com/tauri-apps/tauri/pull/12668). Using
`App::run_return` we can now orchestrate a `gui::run` function that
simply returns after Tauri has shut down. Most importantly, it will also
exit upon any fatal errors that we encounter in the controller and thus
unify the error handling path into a single one. These errors are now
all handled at the call-site of `gui::run`.
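With `App::run_return`, the control flow can be sketched roughly like
this (the exact signature may differ from what is shown, and the error
handling is illustrative):

```rust
fn run_gui(app: tauri::App) -> anyhow::Result<()> {
    // Unlike `App::run`, this returns instead of calling `std::process::exit`.
    let exit_code = app.run_return(|_app_handle, _event| {
        // React to run events (exit requested, window closed, ...) here.
    });

    if exit_code != 0 {
        anyhow::bail!("Tauri exited with code {exit_code}");
    }

    Ok(()) // Ordinary clean-up and error reporting happen at the call-site.
}
```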
Building on top of this, we will be able to further simplify the error
handling within the GUI client. I am hoping to gradually replace our
monolithic `Error` enums with individual errors that we can extract from
an `anyhow::Error`. This would make it easier to reason about where
certain errors get generated and thus overall improve the UX of the
application by displaying better error messages, not failing the entire
app in certain cases, etc.
Bumps [semver](https://github.com/dtolnay/semver) from 1.0.25 to 1.0.26.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/dtolnay/semver/releases">semver's
releases</a>.</em></p>
<blockquote>
<h2>1.0.26</h2>
<ul>
<li>Documentation improvements</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="3e64fdbfce"><code>3e64fdb</code></a>
Release 1.0.26</li>
<li><a
href="dd8dc0ad90"><code>dd8dc0a</code></a>
Point standard library links to stable</li>
<li><a
href="479518de59"><code>479518d</code></a>
Unset doc-scrape-examples for lib target</li>
<li><a
href="4fa7acb318"><code>4fa7acb</code></a>
More precise gitignore patterns</li>
<li>See full diff in <a
href="https://github.com/dtolnay/semver/compare/1.0.25...1.0.26">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
On some Linux distributions (Amazon Linux 2023), the default `iptables`
install includes a blanket deny rule in the `FORWARD` chain that
prevents packets from the tunnel interface from ever leaving the host.
To fix this, we ensure our `FORWARD` chain rules are inserted with
priority 1, which takes precedence over the blanket-deny rule.
We also update our MASQUERADE rule in the NAT table to apply only to the
CIDR range possible for Gateway tunnel IPs, as opposed to the default
`0.0.0.0/0`.
Fixes #8481
Bumps
[android_log-sys](https://github.com/rust-mobile/android_log-sys-rs)
from 0.3.1 to 0.3.2.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/rust-mobile/android_log-sys-rs/commits">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Within the event-loop, we already react to the channel being closed,
which happens when the `Sender` within the `Session` gets dropped. As
such, there is no need to send an explicit `Stop` command; dropping the
`Session` is equivalent.
As it turns out, `swift-bridge` already calls `Drop` for us when the
last pointer is set to `nil`:
280a9dd999/swift/apple/FirezoneNetworkExtension/Connlib/Generated/connlib-client-apple/connlib-client-apple.swift (L24-L28)
Thus, we can also remove the explicit `disconnect` call to
`WrappedSession` entirely.
In order to be able to dynamically configure long-running applications
such as the Gateway via feature-flags, we need to regularly re-evaluate
them by sending another POST request to the `/decide` endpoint.
To do this without impacting anything else, we create a separate runtime
that is lazily initialised on first access and use that to run the async
code for connecting to the PostHog service. In addition to that, we also
spawn a task that re-evaluates the feature flags for the currently set
user in the Sentry context every 5 minutes.
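A sketch of the lazily-initialised runtime and the re-evaluation task
(names and the flag-refresh body are illustrative):

```rust
use std::sync::LazyLock;
use std::time::Duration;

/// Initialised on first access; dedicated to the PostHog integration.
static RUNTIME: LazyLock<tokio::runtime::Runtime> = LazyLock::new(|| {
    tokio::runtime::Builder::new_multi_thread()
        .worker_threads(1)
        .enable_all()
        .build()
        .expect("failed to build PostHog runtime")
});

fn spawn_flag_reevaluation() {
    RUNTIME.spawn(async {
        let mut interval = tokio::time::interval(Duration::from_secs(5 * 60));

        loop {
            interval.tick().await;
            // POST to `/decide` and update the flags for the currently
            // set user in the Sentry context.
        }
    });
}
```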
Resolves: #8454
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
The bugfix we attempted in #8156 turned out to be wrong. Reading the
source-code, we have to call `Session::shutdown` in order to actually
cancel the `Session::receive_blocking` call. Not doing so means we run
into the timeout when discarding the `Tun` device because the
recv-thread is stuck in `Session::receive_blocking`.
Fixes: #8395
Dependabot appears to have a hard time bumping the Tauri dependencies
together as a group. Additionally, our dependency linter `cargo deny`
disallows duplicate dependencies by default. To avoid introducing more
duplicate dependencies, we depend on the upstream `main` branch of two
projects that have already updated their dependencies but did not yet
cut a release.
Currently, we are only emitting updates to the `TunConfig` when the
routes or the DNS servers change. This isn't correct; we should also
emit updates when the IPs or the search-domain change.
In order to achieve that, we create a new `TunConfig` based on the
existing one every time we receive an `InterfaceConfig` update.
Depending on our current state, we may create an entirely new
`TunConfig` or create a new one where we copy the fields in from the new
`InterfaceConfig`. We then unconditionally call
`maybe_update_tun_config` which does the necessary work to only emit
updates when things actually changed.
To ensure this works in all cases and the latest update is always
reflected on the TUN device, we also extend the proptests to assert the
latest search domain.
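In essence (field names are illustrative; the real `TunConfig` has more
fields and lives in connlib):

```rust
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};

#[derive(Clone, PartialEq)]
struct TunConfig {
    ip4: Ipv4Addr,
    ip6: Ipv6Addr,
    dns_servers: Vec<IpAddr>,
    search_domain: Option<String>,
}

/// Called unconditionally with the candidate config; only returns (and
/// thus emits) a new config if anything actually changed.
fn maybe_update_tun_config(current: &mut TunConfig, new: TunConfig) -> Option<TunConfig> {
    if *current == new {
        return None;
    }

    *current = new.clone();

    Some(new)
}
```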
Fixes: #8451
Bumps [tokio-util](https://github.com/tokio-rs/tokio) from 0.7.12 to
0.7.13.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="0b31c2f73d"><code>0b31c2f</code></a>
chore: prepare tokio-util v0.7.13 (<a
href="https://redirect.github.com/tokio-rs/tokio/issues/7012">#7012</a>)</li>
<li><a
href="129f9fc0c8"><code>129f9fc</code></a>
codec: fix incorrect handling of invalid utf-8 in
<code>LinesCodec::decode_eof</code> (#...</li>
<li><a
href="b5c227d51f"><code>b5c227d</code></a>
tracing: move tracing instrumentation tests into tokio tests (<a
href="https://redirect.github.com/tokio-rs/tokio/issues/7007">#7007</a>)</li>
<li><a
href="dcae2b9eb8"><code>dcae2b9</code></a>
ci: unfreeze FreeBSD from rustc 1.81 (<a
href="https://redirect.github.com/tokio-rs/tokio/issues/7009">#7009</a>)</li>
<li><a
href="bb9d57017e"><code>bb9d570</code></a>
chore: prepare Tokio v1.42.0 (<a
href="https://redirect.github.com/tokio-rs/tokio/issues/7005">#7005</a>)</li>
<li><a
href="af9c683d52"><code>af9c683</code></a>
tests: fix typo in build test instructions (<a
href="https://redirect.github.com/tokio-rs/tokio/issues/7004">#7004</a>)</li>
<li><a
href="4bc5a1a058"><code>4bc5a1a</code></a>
ci: allow Unicode-3.0 license for unicode-ident (<a
href="https://redirect.github.com/tokio-rs/tokio/issues/7006">#7006</a>)</li>
<li><a
href="f8948ea021"><code>f8948ea</code></a>
runtime: do not defer <code>yield_now</code> inside
<code>block_in_place</code> (<a
href="https://redirect.github.com/tokio-rs/tokio/issues/6999">#6999</a>)</li>
<li><a
href="bce9780dd3"><code>bce9780</code></a>
time: use <code>array::from_fn</code> instead of manually creating array
(<a
href="https://redirect.github.com/tokio-rs/tokio/issues/7000">#7000</a>)</li>
<li><a
href="38151f30cb"><code>38151f3</code></a>
readme: unlist 1.32.x as LTS release (<a
href="https://redirect.github.com/tokio-rs/tokio/issues/6997">#6997</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/tokio-rs/tokio/compare/tokio-util-0.7.12...tokio-util-0.7.13">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [either](https://github.com/rayon-rs/either) from 1.13.0 to
1.15.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="59ae1fce0c"><code>59ae1fc</code></a>
Merge pull request <a
href="https://redirect.github.com/rayon-rs/either/issues/120">#120</a>
from cuviper/release-1.15.0</li>
<li><a
href="7f4bf0222d"><code>7f4bf02</code></a>
Release 1.15.0</li>
<li><a
href="56178e9fdb"><code>56178e9</code></a>
Merge pull request <a
href="https://redirect.github.com/rayon-rs/either/issues/119">#119</a>
from klkvr/klkvr/fix-no-std</li>
<li><a
href="80b6f2a7fd"><code>80b6f2a</code></a>
fix last references of use_std</li>
<li><a
href="2b71801b05"><code>2b71801</code></a>
serde 1.0.95</li>
<li><a
href="8c1ea3e557"><code>8c1ea3e</code></a>
use_std -> std</li>
<li><a
href="d743e25f52"><code>d743e25</code></a>
fix: no-std with serde feature</li>
<li><a
href="6e6dc26828"><code>6e6dc26</code></a>
Merge pull request <a
href="https://redirect.github.com/rayon-rs/either/issues/117">#117</a>
from cuviper/release-1.14.0</li>
<li><a
href="937620642b"><code>9376206</code></a>
Release 1.14.0</li>
<li><a
href="4db2c30e5f"><code>4db2c30</code></a>
Merge pull request <a
href="https://redirect.github.com/rayon-rs/either/issues/118">#118</a>
from cuviper/clippy</li>
<li>Additional commits viewable in <a
href="https://github.com/rayon-rs/either/compare/1.13.0...1.15.0">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
For existing `TunConfig`, we had a bug where we failed to update the
search_domain if the effective dns_servers were unchanged.
@thomaseizinger I can see why you want to refactor this; it's quite a
mess to follow ;-). I was going to try my hand at cleaning it up a
little bit just so I can grok it but I figured since this area is going
to be changing quite a bit in #8263, I'll leave those changes out for
now.
I suspect that one issue with local discovery is that we respond to
LLMNR queries with NXDOMAIN if the domain isn't a resource. This is
probably wrong. LLMNR works over multicast so if a particular interface
can't respond to a query with records, it should probably not respond at
all.
Related: #8266
In order for search-domains to work on Windows, we need to set the
`SearchList` registry key for our interface. This will result in Windows
sending us a DNS query with the expanded domain name from the search
list, which we can then process like normal DNS queries.
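A sketch using the `winreg` crate; the per-interface registry path is an
assumption here, and `interface_guid` is a placeholder for our tunnel
adapter's GUID (including the surrounding braces):

```rust
use winreg::enums::HKEY_LOCAL_MACHINE;
use winreg::RegKey;

fn set_search_list(interface_guid: &str, search_list: &str) -> std::io::Result<()> {
    // Assumed per-interface path under the TCP/IP service parameters.
    let path = format!(
        r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{interface_guid}"
    );

    let hklm = RegKey::predef(HKEY_LOCAL_MACHINE);
    let (key, _) = hklm.create_subkey(path)?;

    // Comma-separated search domains, e.g. "example.com,corp.example.com".
    key.set_value("SearchList", &search_list)?;

    Ok(())
}
```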
Related: #8410
Our PostHog integration was so lenient with regard to errors that I
didn't even notice that we failed to deserialise the feature flags
correctly. In PostHog, I configured the feature flags with `kebab-case`
but we tried to deserialise them as `snake_case`.
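The fix boils down to a serde attribute (the flag name below is made
up for illustration):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
#[serde(rename_all = "kebab-case")] // previously, snake_case field names were used as-is
struct FeatureFlags {
    // Deserialises from `icmp-unreachable-instead-of-nat64`.
    icmp_unreachable_instead_of_nat64: bool,
}
```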
In order to have the system expand search domains for us, we need to set
a very peculiar combination of configuration options in the
`NEDNSSettings` of the VPN configuration:
- We need to include our search domains in the list of `matchDomains`
- We need to set `matchDomainsNoSearch = false`
- We need to set the `searchDomains` field
Technically, we don't even need to set `searchDomains` at all. Reading
the docs for the `matchDomainsNoSearch` flag in more detail explains
why:
> A Boolean that specifies if the domains in the matchDomains list
should not be appended to the resolver’s list of search domains.
The double-negative here is confusing but essentially, what this says
is:
> If false, append the list of match domains to the resolver's search
domains.
That is exactly what we want. We want a search domain of e.g.
`example.com` to be appended to the list of search domains for the
primary resolver of non-scoped DNS queries.
I tested without setting `searchDomains` and it does still work: the
system will still expand the domain for us and send us an FQDN query for
e.g. `foo.example.com`. However, I figured not setting `searchDomains`
at all is quite confusing, so I left it in there.
Related: #8410 (Fixes it for macOS)
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: thomas <firezone@firezones-MacBook-Air.fritz.box>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>