For simplicity's sake, I assumed that any packet using port 53 would be a
DNS packet, but I forgot to exclude port 53 from the range of possible
ports used.
This can cause spurious CI failures.
Using the `sentry-tracing` integration, we can automatically capture
events based on what we log via `tracing`. The mapping is defined as
follows:
- ERROR: Gets captured as a fatal error
- WARN: Gets captured as a message
- INFO: Gets captured as a breadcrumb
- `_`: Does not get captured at all
If telemetry isn't active or configured, this integration does nothing.
It is therefore safe to always enable it.
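A minimal sketch of this mapping, using hypothetical standalone types (the real integration is configured through `sentry-tracing`'s layer; the `Level` and `Capture` enums here are illustrative stand-ins):

```rust
// Illustrative model of the level-to-Sentry mapping described above.
// The real code uses `tracing::Level` and `sentry-tracing`'s capture machinery.
#[derive(Debug, PartialEq)]
enum Level {
    Error,
    Warn,
    Info,
    Debug,
    Trace,
}

#[derive(Debug, PartialEq)]
enum Capture {
    FatalError, // ERROR
    Message,    // WARN
    Breadcrumb, // INFO
    Ignore,     // everything else
}

fn capture_for(level: Level) -> Capture {
    match level {
        Level::Error => Capture::FatalError,
        Level::Warn => Capture::Message,
        Level::Info => Capture::Breadcrumb,
        _ => Capture::Ignore,
    }
}
```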
Similar to the GUI and headless clients, adding error reporting via
Sentry should give us much better insight into how well gateways are
performing.
Resolves: #7099.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
- We don't need to control our deb's deps since we're sticking with
Tauri
- Specifying `pnpm tauri` fixes an odd issue on one dev system
- `tauri-cli` is a dev dep, not a runtime dep
Flushing events to Sentry requires us to be able to resolve domain
names. This is only possible while connlib is active or completely
disabled.
Without this, stopping telemetry pretty much always times out for me on
my local machine when using the headless-client.
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.129 to
1.0.132.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/serde-rs/json/releases">serde_json's
releases</a>.</em></p>
<blockquote>
<h2>1.0.132</h2>
<ul>
<li>Improve binary size and compile time for JSON array and JSON object
deserialization by about 50% (<a
href="https://redirect.github.com/serde-rs/json/issues/1205">#1205</a>)</li>
<li>Improve performance of JSON array and JSON object deserialization by
about 8% (<a
href="https://redirect.github.com/serde-rs/json/issues/1206">#1206</a>)</li>
</ul>
<h2>1.0.131</h2>
<ul>
<li>Implement Deserializer and IntoDeserializer for <code>Map<String,
Value></code> and <code>&Map<String, Value></code> (<a
href="https://redirect.github.com/serde-rs/json/issues/1135">#1135</a>,
thanks <a
href="https://github.com/swlynch99"><code>@swlynch99</code></a>)</li>
</ul>
<h2>1.0.130</h2>
<ul>
<li>Support converting and deserializing <code>Number</code> from i128
and u128 (<a
href="https://redirect.github.com/serde-rs/json/issues/1141">#1141</a>,
thanks <a
href="https://github.com/druide"><code>@druide</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="86d933cfd7"><code>86d933c</code></a>
Release 1.0.132</li>
<li><a
href="f45b422a3b"><code>f45b422</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/json/issues/1206">#1206</a>
from dtolnay/hasnext</li>
<li><a
href="f2082d2a04"><code>f2082d2</code></a>
Clearer order of comparisons</li>
<li><a
href="0f54a1a0df"><code>0f54a1a</code></a>
Handle early return sooner on eof in seq or map</li>
<li><a
href="2a4cb44f7c"><code>2a4cb44</code></a>
Rearrange 'match peek'</li>
<li><a
href="4cb90ce66d"><code>4cb90ce</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/json/issues/1205">#1205</a>
from dtolnay/hasnext</li>
<li><a
href="b71ccd2d8f"><code>b71ccd2</code></a>
Reduce duplicative instantiation of logic in SeqAccess and
MapAccess</li>
<li><a
href="a810ba9850"><code>a810ba9</code></a>
Release 1.0.131</li>
<li><a
href="0d084c5038"><code>0d084c5</code></a>
Touch up PR 1135</li>
<li><a
href="b4954a9561"><code>b4954a9</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/json/issues/1135">#1135</a>
from swlynch99/map-deserializer</li>
<li>Additional commits viewable in <a
href="https://github.com/serde-rs/json/compare/1.0.129...1.0.132">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Our logging library, `tracing`, supports structured logging. This is
useful because it preserves more than just the string representation
of a value and thus allows the active logging backend(s) to capture more
information for a particular value.
In the case of errors, this is especially useful because it allows us to
capture the sources of a particular error.
Unfortunately, recording an error as a tracing value is a bit cumbersome
because `tracing::Value` is only implemented for `&dyn
std::error::Error`. Casting an error to this is quite verbose. To make
it easier, we introduce two utility functions in `firezone-logging`:
- `std_dyn_err`
- `anyhow_dyn_err`
Tracking errors as correct `tracing::Value`s will be especially helpful
once we enable Sentry's `tracing` integration:
https://docs.rs/sentry-tracing/latest/sentry_tracing/#tracking-errors
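As a sketch, assuming the helper's signature (the real functions live in `firezone-logging`), `std_dyn_err` can be as simple as performing the cast:

```rust
use std::error::Error;

// Assumed signature: turn any concrete error into the `&dyn Error` trait
// object that `tracing` implements `Value` for, so call sites don't have to
// spell out the verbose cast themselves.
fn std_dyn_err<E: Error + 'static>(e: &E) -> &(dyn Error + 'static) {
    e
}
```

A call site can then record the error structurally, e.g. `tracing::warn!(error = std_dyn_err(&e), "request failed")`, instead of logging only its `Display` output.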
Currently, tests only send ICMP packets back and forth. To expand our
coverage and to later permit us to cover filters and resource picking,
this PR implements sending UDP and TCP packets as part of that logic too.
To keep this PR simpler at this stage, TCP packets don't track an actual
TCP connection, only that they are forwarded back and forth; this will
be fixed in a future PR by emulating TCP sockets.
We also unify how we handle CIDR/DNS/non-resources to reduce the number
of transitions.
Fixes #7003
---------
Signed-off-by: Gabi <gabrielalejandro7@gmail.com>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
When we query upstream DNS servers through the tunnel via TCP DNS, we
will always be successful in establishing a tunnel, regardless of how
many concurrent queries we send, because the TCP stack will keep
retrying. Thus, tracking which resources we are connected to after
sending a bunch of DNS queries needs to be split by UDP and TCP.
For UDP, only the "first" resource will be connected. However, with
concurrent TCP and UDP DNS queries, "first" isn't necessarily the order
in which we send the queries because with TCP DNS, one packet no longer
equates to one query.
This is quite hacky but it will get completely deleted once we buffer
packets during the connection setup.
This PR implements the new idempotent control protocol for the gateway.
We retain backwards-compatibility with old clients to allow admins to
perform a disruption-free update to the latest version.
With this new control protocol, we are moving the responsibility of
exchanging the proxy IPs we assigned to DNS resources to a p2p protocol
between client and gateway. As a result, wildcard DNS resources only get
authorized on the first access. Accessing a new domain within the same
resource will thus no longer require a roundtrip to the portal.
Overall, users will see a greatly decreased connection setup latency. On
top of that, the new protocol will allow us to more easily implement
packet buffering which will be another UX boost for Firezone.
At present, `connlib` only supports DNS over UDP on port 53. Responses
over UDP are constrained in size by the IP MTU, and thus not all DNS
responses fit into a UDP packet. RFC 9210 therefore mandates that all DNS
resolvers must also support DNS over TCP to overcome this limitation
[0].
Handling UDP packets is easy; handling TCP streams is more difficult
because we effectively need to implement a valid TCP state machine.
Building on top of a lot of earlier work (linked in issue), this is
relatively easy because we can now simply import
`dns_over_tcp::{Client,Server}` which do the heavy lifting of sending
and receiving the correct packets for us.
The main aspects of the integration that are worth pointing out are:
- We can handle at most 10 concurrent DNS TCP connections _per defined
resolver_. The assumption here is that most applications will first
query for DNS records over UDP and only fall back to TCP if the response
is truncated. Additionally, we assume that clients will close the TCP
connections once they no longer need them.
- Errors on the TCP stream to an upstream resolver result in `SERVFAIL`
responses to the client.
- All TCP connections to upstream resolvers get reset when we roam; all
currently ongoing queries will be answered with `SERVFAIL`.
- Upon network reset (i.e. roaming), we also re-allocate new local ports
for all TCP sockets, similar to our UDP sockets.
Resolves: #6140.
[0]: https://www.ietf.org/rfc/rfc9210.html#section-3-5
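The size limit disappears over TCP because each DNS message is framed with a two-byte, big-endian length prefix (RFC 1035, section 4.2.2). A sketch of that framing:

```rust
// Prefix a DNS message with its length in big-endian, as required for
// DNS over TCP. Messages can thus be up to 65535 bytes instead of one MTU.
fn frame_tcp_dns(message: &[u8]) -> Vec<u8> {
    let mut framed = Vec::with_capacity(2 + message.len());
    framed.extend_from_slice(&(message.len() as u16).to_be_bytes());
    framed.extend_from_slice(message);
    framed
}
```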
Currently, any failure in the `StubResolver` while processing a query
results only in a log but no response. For UDP DNS queries, that isn't
too bad because the client will simply retry. With the upcoming support
for TCP DNS queries in #6944, we should really always reply with a
message.
This PR refactors the handling of DNS messages to always generate a
reply. In the case of an error, we reply with SERVFAIL.
This opens up a few more refactorings where we can now collapse the
handling of some branches into the same code path. As part of that, I noticed the
recurring need for "unwrapping" a `Result<(), E>` and logging the error.
To make that easier, I introduced an extension trait that does exactly
that.
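A hedged sketch of what such an extension trait can look like (the trait and method names are assumptions; this sketch logs via `eprintln!` and returns a `bool` for easy testing, whereas the real helper would log via `tracing` and return nothing):

```rust
use std::fmt::Display;

// Illustrative extension trait: "unwrap" a `Result<(), E>` by logging the
// error instead of propagating or panicking.
trait LogErr {
    fn log_err(self, context: &str) -> bool;
}

impl<E: Display> LogErr for Result<(), E> {
    fn log_err(self, context: &str) -> bool {
        match self {
            Ok(()) => false,
            Err(e) => {
                eprintln!("{context}: {e}"); // real impl would use `tracing::warn!`
                true
            }
        }
    }
}
```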
In #6960, we split up the filter handling by resource type. This uncovered
an issue that was already known and fixed in the portal in #6962:
Upstream DNS servers must not be in any of the reserved IP ranges.
Related: #7060.
When forwarding UDP DNS queries through the tunnel, `connlib` needs to
mangle the IP header to set the upstream resolver as the correct destination
of the packet. In order to know which ones need to be mangled back, we keep a
map of query ID to timestamp. This isn't good enough, as the added
proptest failure case shows: if there are two concurrent queries with
the same query ID to different resolvers, we only handle one of those
and don't mangle the 2nd one.
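A sketch of the fix with illustrative types: key the in-flight queries by the pair of upstream resolver and query ID, so that identical IDs sent to different resolvers no longer collide:

```rust
use std::collections::HashMap;
use std::net::SocketAddr;
use std::time::Instant;

// Track each mangled query under (resolver, query ID) instead of query ID
// alone; two concurrent queries with the same ID stay distinguishable.
fn track_query(
    inflight: &mut HashMap<(SocketAddr, u16), Instant>,
    resolver: SocketAddr,
    query_id: u16,
    now: Instant,
) {
    inflight.insert((resolver, query_id), now);
}
```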
This buffer is effectively limited by the maximum size of our IP packets
(which is guided by our interface MTU). Passing a length is
unnecessarily abstract.
For implementing DNS over TCP, we will need to encapsulate packets that
are emitted by the `dns_over_tcp::Client` which requires creating such a
buffer on the fly.
In the future, we should probably consider also stack-allocating all our
`Transmit`s so we can get rid of passing around this buffer altogether.
The last released version of `smoltcp` is `0.11.0`. That version is
almost a year old. Since then, an important "bug" got fixed in the IPv6
handling code of `smoltcp`.
In order to route packets to our interface, we define a dummy IPv4 and
IPv6 address and create catch-all routes with our interface as the
gateway. Together with `set_any_ip(true)`, this makes `smoltcp` accept
any packet we pass to it. This is necessary because we don't directly
connect `smoltcp` to the TUN device but rather have an `InMemoryDevice`
that we explicitly feed certain packets to.
In the last released version, `smoltcp` only performs the above logic
for IPv4. For IPv6, the additional check for "do we have a route that
this packet matches" doesn't exist and thus no IPv6 traffic is accepted
by `smoltcp`.
Extracted out of #6944.
Currently, in our tests, traffic that is targeted at a resource is
handled "in-line" on the gateway. This doesn't really represent how the
real world works. In the real world, the gateway uses the IP forwarding
functionality of the Linux kernel and the corresponding NAT to send the
IP packet to the actual resource.
We don't want to implement this forwarding and NAT in the tests.
However, our testing harness is about to get more sophisticated. We will
be sending TCP DNS queries with #6944 and we want to test TCP and its
traffic filters with #7003.
The state of those TCP sockets needs to live _somewhere_.
If we "correctly" model this and introduce some kind of `HashMap` with
`dyn Resource` in `TunnelTest`, then we will have to actually implement
NAT for those packets to ensure that e.g. the SYN-ACK of a TCP handshake
makes it back to the correct(!) gateway.
That is rather cumbersome.
This PR suggests taking a shortcut there by associating the resources
with each gateway individually. At present, all we have are UDP DNS
servers. Those don't actually have any connection state themselves but
putting them in place gives us a framework for where we can put
connection-specific state. Most importantly, these resources MUST NOT
hold application-specific state. Instead, that state needs to be kept in
`ReferenceState` or `TunnelState` and passed in by reference, as we do
here for the DNS records.
This has effectively the same behaviour as correctly translating IP
packets back and forth between resources and gateways. The packets
"emitted" by a particular `UdpDnsServerResource` will always go back to
the correct gateway.
The `smoltcp` crate has its own time-related types like `Instant` which
are backed by a simple microsecond-based integer. It has an integration
with `std::time::Instant`, which we are currently using. That integration,
however, is impure because it relies on `Instant::now`.
To work around this, we initialise `smoltcp` with `Instant::ZERO` and
keep our own `std::time::Instant` around. Using that, we always compute
how much time has elapsed since we initialised `smoltcp` and pass the
correct `Instant` to it.
Whilst learning about this, I also discovered that `smoltcp` has the
equivalent of `poll_timeout` so I went ahead and implemented that too.
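A minimal sketch of that clock handling, under assumed names (`smoltcp`'s `Instant` is represented here by its raw microsecond count):

```rust
use std::time::Instant;

// Remember when we initialised `smoltcp` (whose own clock starts at
// `Instant::ZERO`) and convert elapsed wall-clock time to microseconds on
// every interaction with it.
struct SmoltcpClock {
    epoch: Instant,
}

impl SmoltcpClock {
    fn now_micros(&self, now: Instant) -> i64 {
        (now - self.epoch).as_micros() as i64
    }
}
```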
Previously, when a gateway checked if a packet was allowed through, it
used the real IP of the DNS resource that the client was trying to
communicate with.
The problem with this was that, if there's an overlapping CIDR resource
with the real IP, this would allow a user to send packets using the
filters for that resource instead of the DNS resource's filters.
This can be confusing for users, as packets can flow unexpectedly to the
resources even if the filter doesn't permit it, so we now match the
filters against the IP of the packet before we translate it to the real IP.
This doesn't change the security of this feature, as a user can simply
set the packet's destination to either the DNS resource's IP or the CIDR
resource's IP, according to what they need.
Fixes #6806
Because of the patch we apply, if we delete `Cargo.lock`, this line
causes an error. Deleting `Cargo.lock` should be valid in general.
---------
Signed-off-by: Reactor Scram <ReactorScram@users.noreply.github.com>
In order to forward TCP DNS queries to custom resolvers that are also
resources, `connlib` needs to establish its own TCP connection to that
upstream server.
In the current design of `dns_over_tcp::Client`, this connection gets
established immediately as soon as we learn which upstream
resolvers we need to use. This is problematic because the resolver might
not **yet** be a resource. Resources can change at any point, so until
they are a resource, we don't actually need to establish a TCP
connection. In fact, even if we wanted to, we couldn't, because we can't
map the resolver's IP to a `ResourceId`. TCP is a reliable transport, so
it will keep retrying to establish a connection until it eventually
gives up.
Not only is this wasteful, but it also causes problems in our tests,
where we have to model how and when connections are established. Having
a TCP stack within `connlib` that will retry to establish this connection
messes up that model.
To fix this, we change `connect_to_resolvers` to `set_resolvers`. This
will still create the sockets and allocate the ports but leaves the
socket in `Closed` state. We only issue a `connect` once we receive a
query that we need to send to that resolver.
Closes #6953
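A sketch of the lazy-connect behaviour with illustrative names: `set_resolvers` creates the socket in a `Closed` state, and the first query triggers the actual `connect`:

```rust
#[derive(Debug, PartialEq)]
enum SocketState {
    Closed,     // allocated and bound, but no `connect` issued yet
    Connecting, // `connect` issued once the first query arrived
}

struct ResolverSocket {
    state: SocketState,
}

impl ResolverSocket {
    // What `set_resolvers` would do per resolver: allocate, don't connect.
    fn new() -> Self {
        Self { state: SocketState::Closed }
    }

    // On the first query for this resolver, issue the TCP `connect`.
    fn on_query(&mut self) {
        if self.state == SocketState::Closed {
            self.state = SocketState::Connecting;
        }
    }
}
```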
- Increases heartbeat in unit test from 5 ms to 30 ms to avoid timer
aliasing on Windows
- Increases interval proportionally to 180 ms
- Corrects measurement of elapsed time for
`returns_heartbeat_after_interval`. The first `heartbeat.poll` seems to
consume a few ms, which can cause a false test failure
Bumps
[@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node)
from 22.7.4 to 22.7.5.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node">compare
view</a></li>
</ul>
</details>
<br />
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [tempfile](https://github.com/Stebalien/tempfile) from 3.12.0 to
3.13.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/Stebalien/tempfile/blob/master/CHANGELOG.md">tempfile's
changelog</a>.</em></p>
<blockquote>
<h2>3.13.0</h2>
<ul>
<li>Add <code>with_suffix</code> constructors for easily creating new
temporary files with a specific suffix (e.g., a specific file
extension). Thanks to <a
href="https://github.com/Borgerr"><code>@Borgerr</code></a>.</li>
<li>Update dependencies (fastrand & rustix).</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a354f8cb11"><code>a354f8c</code></a>
chore: release 3.13.0</li>
<li><a
href="d21b602fa2"><code>d21b602</code></a>
chore: update deps</li>
<li><a
href="d6600da8fc"><code>d6600da</code></a>
Add for <code>with_suffix</code> (<a
href="https://redirect.github.com/Stebalien/tempfile/issues/299">#299</a>)</li>
<li><a
href="19280c5889"><code>19280c5</code></a>
Document current default permissions for tempdirs (<a
href="https://redirect.github.com/Stebalien/tempfile/issues/296">#296</a>)</li>
<li><a
href="c5eac9f690"><code>c5eac9f</code></a>
fix: address clippy unnecessary deref lint in test (<a
href="https://redirect.github.com/Stebalien/tempfile/issues/294">#294</a>)</li>
<li>See full diff in <a
href="https://github.com/Stebalien/tempfile/compare/v3.12.0...v3.13.0">compare
view</a></li>
</ul>
</details>
<br />
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [lru](https://github.com/jeromefroe/lru-rs) from 0.12.4 to 0.12.5.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/jeromefroe/lru-rs/blob/master/CHANGELOG.md">lru's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/jeromefroe/lru-rs/tree/0.12.5">v0.12.5</a> -
2024-10-30</h2>
<ul>
<li>Upgrade hashbrown dependency to 0.15.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="2d18d2d333"><code>2d18d2d</code></a>
Merge pull request <a
href="https://redirect.github.com/jeromefroe/lru-rs/issues/203">#203</a>
from jeromefroe/jerome/prepare-0-12-5-release</li>
<li><a
href="b42486918b"><code>b424869</code></a>
Prepare 0.12.5 release</li>
<li><a
href="1ba5130174"><code>1ba5130</code></a>
Merge pull request <a
href="https://redirect.github.com/jeromefroe/lru-rs/issues/202">#202</a>
from torokati44/hashbrown-0.15</li>
<li><a
href="60a7e71c59"><code>60a7e71</code></a>
Use top-level DefaultHashBuilder type alias</li>
<li><a
href="12ed995b7b"><code>12ed995</code></a>
Update hashbrown to 0.15</li>
<li>See full diff in <a
href="https://github.com/jeromefroe/lru-rs/compare/0.12.4...0.12.5">compare
view</a></li>
</ul>
</details>
<br />
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [tailwindcss](https://github.com/tailwindlabs/tailwindcss) from
3.4.13 to 3.4.14.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/tailwindlabs/tailwindcss/releases">tailwindcss's
releases</a>.</em></p>
<blockquote>
<h2>v3.4.14</h2>
<h3>Fixed</h3>
<ul>
<li>Don't set <code>display: none</code> on elements that use
<code>hidden="until-found"</code> (<a
href="https://redirect.github.com/tailwindlabs/tailwindcss/pull/14625">#14625</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/tailwindlabs/tailwindcss/blob/v3.4.14/CHANGELOG.md">tailwindcss's
changelog</a>.</em></p>
<blockquote>
<h2>[3.4.14] - 2024-10-15</h2>
<h3>Fixed</h3>
<ul>
<li>Don't set <code>display: none</code> on elements that use
<code>hidden="until-found"</code> (<a
href="https://redirect.github.com/tailwindlabs/tailwindcss/pull/14625">#14625</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c616fb9562"><code>c616fb9</code></a>
3.4.14</li>
<li><a
href="b570e2b887"><code>b570e2b</code></a>
Don't set <code>display: none</code> on elements that use
<code>hidden="until-found"</code> (<a
href="https://redirect.github.com/tailwindlabs/tailwindcss/issues/14625">#14625</a>)</li>
<li>See full diff in <a
href="https://github.com/tailwindlabs/tailwindcss/compare/v3.4.13...v3.4.14">compare
view</a></li>
</ul>
</details>
<br />
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Currently, we have a lot of stupid code to forward data from the
`{Client,Gateway}Tunnel` interface to `{Client,Gateway}State`. Recent
refactorings such as #6919 made it possible to get rid of this
forwarding layer by directly exposing `&mut TRoleState`.
To maintain some type-privacy, several functions are made generic to
accept `impl Into` or `impl TryInto`.
Ports below 1024 are reserved and should not be used for outbound TCP
connections. Generally, a port from the ephemeral range should be used
instead.
To enforce this, we move the port range of the `dns_over_tcp::Client` to
const-generics. At present, `connlib` only uses a single port range so
we set those as the default too.
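A sketch of the const-generic approach under assumed names; the range is part of the type, so a caller can't accidentally allocate from the reserved ports:

```rust
// Illustrative port allocator whose range is fixed at the type level, ruling
// out reserved ports (< 1024) by construction.
struct PortAllocator<const MIN: u16, const MAX: u16> {
    next: u16,
}

impl<const MIN: u16, const MAX: u16> PortAllocator<MIN, MAX> {
    fn new() -> Self {
        Self { next: MIN }
    }

    // Hand out ports round-robin within [MIN, MAX].
    fn allocate(&mut self) -> u16 {
        let port = self.next;
        self.next = if port == MAX { MIN } else { port + 1 };
        port
    }
}

// The IANA ephemeral range (49152-65535) as a hypothetical default.
type EphemeralPorts = PortAllocator<49152, 65535>;
```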
UDP DNS queries for upstream resolvers that happen to be resources need
to be sent through the tunnel. For that to work correctly, `connlib`
needs to rewrite the IP header such that the destination IP points to
the actual address of the DNS server.
Currently, this happens rather "late" in the processing of the packets,
i.e. after `try_handle_dns` has returned (where that decision is
actually made). This is rather confusing and also forces us to re-parse
the packet as a DNS packet at a later stage.
To avoid this, we move the main functionality of
`maybe_mangle_dns_query_to_cidr_resource` into the branch where
`connlib`'s stub DNS resolver tells us that the query needs to be
forwarded via the tunnel.
With the upcoming support of TCP DNS queries, we will have a 2nd source
of IP packets that need to go through the tunnel: Packets emitted from
our internal TCP stack. Attempting to perform the same post-processing
on these TCP packets as we do with UDP is rather confusing, which is why
we want to remove this step from the `encapsulate` function.
Resolves: #5391.
Within `connlib`, the `encapsulate` and `decapsulate` functions on
`ClientState` and `GatewayState` are the entrypoint for sending and
receiving network traffic. For example, IP packets read from the TUN
device are processed using these functions.
Not all packets / traffic passed to these functions is meant to be
encrypted. Some of it is TURN traffic with relays, some of it is DNS
traffic that we intercept.
To clarify this, we rename these functions to `handle_tun_input` and
`handle_network_input`.
As part of this clarification, we also call `handle_timeout` in case we
don't emit a decrypted IP packet when handling network input. Once we
support DNS over TCP (#6944), some IP packets sent through the tunnel
will originate from DNS servers that we forwarded queries to. In that
case, those responses will be handled by `connlib`'s internal TCP stack
and thus not produce a decrypted IP packet. To correctly advance the
state in this case, we mirror what we already do for `handle_tun_input`
and call `handle_timeout` if `handle_network_input` yields `None`.
When handling DNS queries, `connlib` tries to be as transparent as
possible. For this reason, we byte-for-byte forward the DNS response
from the upstream resolver to the original source socket. In #6999, we
started modelling these DNS queries as explicit tasks in preparation for
DNS over TCP and DNS over HTTPS.
As part of that, we create a DNS response for _every_ IO error we
encounter as part of the recursive query. This includes timeouts, i.e.
when we don't receive a response at all. That actually breaks the rule
of "be a transparent DNS proxy".
In this PR, we slightly refactor the handling of the DNS response to
explicitly match on `io::ErrorKind::TimedOut` and not send a packet back,
thus mirroring the behaviour the DNS client would encounter without
Firezone being active.
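The core of the change can be sketched as follows (the `&'static str` stands in for a real DNS response message):

```rust
use std::io;

// Timeouts produce no response at all, mirroring an unreachable upstream
// resolver; every other IO error is answered with SERVFAIL.
fn response_for_error(error: &io::Error) -> Option<&'static str> {
    match error.kind() {
        io::ErrorKind::TimedOut => None,
        _ => Some("SERVFAIL"),
    }
}
```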
When performing recursive DNS queries over UDP, `connlib` needs to
remember the original source socket a particular query came from in
order to send the response back to the correct socket. Until now, this
was tracked in a separate `HashMap`, indexed by upstream server and
query ID.
When DNS queries are being retried, they may be resent using the same
query ID, causing "Unknown query" logs if the retry happens on a shorter
interval than the timeout of our recursive query.
We are already tracking a bunch of metadata alongside the actual
query, meaning we can just as easily add the original source socket to
that as well.
Once we add TCP DNS queries, we will need to track the handle of the TCP
socket in a similar manner.
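Conceptually, the per-query state grows by one field (the names here are illustrative, not the actual struct):

```rust
use std::net::SocketAddr;

// The original source now travels with the in-flight query itself instead of
// living in a separate map keyed by (upstream server, query ID).
struct InflightQuery {
    upstream: SocketAddr,
    query_id: u16,
    original_source: SocketAddr, // where to send the response back to
}
```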
This PR introduces a custom logging format for all Rust components. It
is more or less a copy of `tracing_subscriber::fmt::format::Compact`,
with the main difference that span names don't get logged.
Spans are super useful because they allow us to record contextual
values, like the current connection ID, for a certain scope. What is IMO
less useful about them is that in the default formatter configuration,
active spans cause a right-drift of the actual log message.
The actual log message is still what most accurately describes what
`connlib` is currently doing. Spans only add contextual information that
the reader may use to further understand what is happening. This
optional nature of spans' utility IMO means that they should come
_after_ the actual log message.
Resolves: #7014.