Currently, `connlib` depends on `hickory-resolver` to perform DNS
queries for non-resources. This is unnecessary. Instead of buffering the
original UDP DNS query, consulting hickory to resolve the name and
mapping the response back, we can simply take the UDP payload and send
it via our protected socket directly to the original upstream DNS
server.
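As a sketch of the idea (the helper and its shape are hypothetical; the real code goes through `connlib`'s socket abstractions):

```rust
use std::net::{SocketAddr, UdpSocket};

// Hypothetical helper; the actual implementation uses a socket that is
// protected from being routed back into the tunnel.
fn forward_dns_query(
    protected: &UdpSocket,
    upstream: SocketAddr, // the upstream resolver the client originally targeted
    payload: &[u8],       // the unmodified UDP payload of the DNS query
) -> std::io::Result<usize> {
    // The query ID inside the payload is enough to correlate the eventual
    // response, so no buffering, re-resolving or response mapping is needed.
    protected.send_to(payload, upstream)
}
```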
This ensures `connlib` is as transparent as possible for DNS queries for
non-resources. Additionally, it removes a lot of error handling and
other cruft that we currently need because we are going through
hickory. For example, hickory automatically retries a DNS query after
a certain timeout. However, the OS / client talking to `connlib` will
also retry after a certain timeout because it is making DNS queries over
an unreliable transport (UDP). It is thus unnecessary for us to retry
internally as well.
To correctly test this change, our test-suite needed some refactoring.
Specifically, DNS servers are now modelled as dedicated `Host`s that can
receive (UDP) traffic.
Lastly, we can remove our dependency on `hickory-proto` and
`hickory-resolver` everywhere and only use `domain` for parsing DNS
messages.
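For illustration, parsing a raw payload with `domain` could look roughly like this (a sketch; exact API details depend on the `domain` version in use):

```rust
use domain::base::Message;

// Sketch only: inspect a raw DNS payload without a stub resolver in the way.
fn inspect(payload: &[u8]) {
    match Message::from_octets(payload) {
        Ok(message) => {
            // The fixed-size header already carries the query ID and the
            // query/response flag.
            let header = message.header();
            println!("id = {}, is_response = {}", header.id(), header.qr());
        }
        Err(e) => println!("not a DNS message: {e}"),
    }
}
```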
Resolves: #6141.
Related: #6033.
Related: #4800. (Impossible to happen with this design)
Instead of having one giant, composed strategy, we introduce a dedicated
`stub_portal` strategy. It samples what the portal defines in
production: sites, gateways and resources.
Based on a sampled portal, we can then sample gateways, a client and DNS
records for our resources.
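As a rough sketch of the shape of such a strategy (the types and ranges here are made up for illustration, not `tunnel_test`'s actual definitions):

```rust
use proptest::prelude::*;

// Hypothetical, heavily simplified stand-in for the real portal model.
#[derive(Debug, Clone)]
struct StubPortal {
    num_sites: usize,
    num_gateways_per_site: usize,
    num_resources: usize,
}

// One dedicated strategy that samples everything the portal would define.
fn stub_portal() -> impl Strategy<Value = StubPortal> {
    (1..5usize, 1..3usize, 1..10usize).prop_map(
        |(num_sites, num_gateways_per_site, num_resources)| StubPortal {
            num_sites,
            num_gateways_per_site,
            num_resources,
        },
    )
}
```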
The `DnsServer` struct is quite nested. All it really contains
(currently) is a `SocketAddr`. To make logs containing this structure
easier to use, only print the inner address on debug.
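A sketch of that change (the field name is hypothetical):

```rust
use std::fmt;
use std::net::SocketAddr;

struct DnsServer {
    address: SocketAddr, // hypothetical field name
}

// Delegate to the inner address so logs don't show the nested struct.
impl fmt::Debug for DnsServer {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.address)
    }
}
```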
Without masquerading, packets sent by the gateway through the TUN
interface use the wrong source address (the TUN device's address)
instead of the gateway's actual network interface.
We set this env variable in all our uses of the gateway, so we might as
well remove it and always perform masquerading unconditionally.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Reactor Scram <ReactorScram@users.noreply.github.com>
Closes #5878
It won't work properly as admin (deep links will all fail), and this
improves UX by making it obvious that admin powers are no longer needed
for the GUI.
```[tasklist]
- [x] Write up `SAFETY` comments
```
Closes #5063, supersedes #5850
Other refactors and changes made as part of this:
- Adds the ability to disable DNS control on Windows
- Removes the spooky-action-at-a-distance `from_env` functions that used
to be buried in `tunnel`
- `FIREZONE_DNS_CONTROL` is now a regular `clap` argument again
---------
Signed-off-by: Reactor Scram <ReactorScram@users.noreply.github.com>
Mitigates #5880.
This should fix the issue for all practical purposes, but we don't need
a channel there, so it does not close the ticket. A more permanent fix
would involve factoring out the callbacks or cheating and using a Mutex
inside the callbacks to do a swap-and-notify thing.
This affects both the Headless Client and the GUI Client's IPC service,
on both Linux and Windows.
Builds on top of #6164
Part of the effort towards
https://github.com/firezone/firezone/issues/6074
Prepares connlib to call `setDisableResource` from Android.
Furthermore, we add a `disablable` parameter for resources, which
defaults to false for now. In the future, the portal will set it for the
internet resource, and further down the line it may be used for other
resources.
The `disablable` parameter only affects the UI.
This is just the API part for #6074
We expose a new API, `set_disabled_resources`, which, given a set of
resource IDs, does the following:
* Disconnect any active connection depending only on this resource
* Prevent any new connection with that resource id being established
The `set_disabled_resources` API is purposely not stateful. In other
words, resources cannot be incrementally enabled or disabled. Instead,
clients always need to send the latest state, i.e. all resources that
should be disabled. `connlib` will figure out the diff and correctly
enable / disable resources as necessary. Thus, enabling a resource is
done by calling `set_disabled_resources` without the previously disabled
resource ID.
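A sketch of the diffing behaviour (types heavily simplified; `ResourceId` and the state struct are stand-ins):

```rust
use std::collections::BTreeSet;

type ResourceId = u64; // stand-in; the real type is an ID newtype

struct ClientState {
    disabled_resources: BTreeSet<ResourceId>,
}

impl ClientState {
    // Callers always pass the complete set of disabled resources;
    // we compute the diff against the previous call.
    fn set_disabled_resources(&mut self, new: BTreeSet<ResourceId>) {
        for re_enabled in self.disabled_resources.difference(&new) {
            // No longer disabled: new connections may be established again.
            let _ = re_enabled;
        }
        for newly_disabled in new.difference(&self.disabled_resources) {
            // Disconnect any active connection depending only on this resource.
            let _ = newly_disabled;
        }
        self.disabled_resources = new;
    }
}
```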
Initially, this will only be used for the internet resource but the use
can be expanded for any other resource.
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.203 to
1.0.204.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/serde-rs/serde/releases">serde's
releases</a>.</em></p>
<blockquote>
<h2>v1.0.204</h2>
<ul>
<li>Apply #[diagnostic::on_unimplemented] attribute on Rust 1.78+ to
suggest adding serde derive or enabling a "serde" feature flag
in dependencies (<a
href="https://redirect.github.com/serde-rs/serde/issues/2767">#2767</a>,
thanks <a
href="https://github.com/weiznich"><code>@weiznich</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="18dcae0a77"><code>18dcae0</code></a>
Release 1.0.204</li>
<li><a
href="58c307f9cc"><code>58c307f</code></a>
Alphabetize list of rustc-check-cfg</li>
<li><a
href="8cc4809414"><code>8cc4809</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/serde/issues/2769">#2769</a>
from dtolnay/onunimpl</li>
<li><a
href="1179158def"><code>1179158</code></a>
Update ui test with diagnostic::on_unimplemented from PR 2767</li>
<li><a
href="91aa40e749"><code>91aa40e</code></a>
Add ui test of unsatisfied serde trait bound</li>
<li><a
href="595019e979"><code>595019e</code></a>
Cut test_suite from workspace members in old toolchain CI jobs</li>
<li><a
href="b0d7917f88"><code>b0d7917</code></a>
Pull in trybuild 'following types implement trait' fix</li>
<li><a
href="8e6637a1e4"><code>8e6637a</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/serde/issues/2767">#2767</a>
from weiznich/feature/diagnostic_on_unimplemented</li>
<li><a
href="694fe05953"><code>694fe05</code></a>
Use the <code>#[diagnostic::on_unimplemented]</code> attribute when
possible</li>
<li><a
href="f3dfd2a237"><code>f3dfd2a</code></a>
Suppress dead code warning in test of unit struct remote derive</li>
<li>Additional commits viewable in <a
href="https://github.com/serde-rs/serde/compare/v1.0.203...v1.0.204">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
With the adoption of #5080, connlib is now resilient against temporarily
failed connections as they'll be immediately re-established. Thus, we no
longer need any of the patches that we are currently maintaining in our
str0m fork.
The only difference is an adjustment of the ICE timeout parameters but
those can be made configurable in str0m.
Related: https://github.com/algesten/str0m/pull/537.
This seems to fix #6033
What **seems** to be happening is that responses are sometimes delayed
and hickory caches the negative response.
We disable the cache and the multiple retry attempts to be as
transparent as possible until #6141 is implemented.
Furthermore, the lack of the recursion-available flag in responses can
cause issues in some clients, and enabling it shouldn't cause any problems.
When a relay disconnects from the portal, either during deployment or
because of a network partition, the portal sends us a `relays_presence`
event. This allows us to discontinue use of a relay. Any connections
that currently use that relay get cut and the next packet reestablishes
a new one.
In the case of relays being re-deployed, their state is gone entirely
and we will receive new relays to use. In the case of a network
partition, the relay would have retained its state but we have already
discarded ours locally. Only one allocation per client (identified by
its 3-tuple) is allowed, so making a new allocation on that relay would
fail.
In order to sync up this inconsistency, we delete our current allocation
and make a new one if we detect this case. To test this, we introduce a
new state transition to `tunnel_test` that simulates such a network
partition.
In addition, we also remove the "upsert" behaviour of relays. The
credentials of a relay can only change if it reboots. Rebooting would
trigger a `relays_presence` event and tell us to disconnect from that
relay. Thus, receiving a relay that we already know is guaranteed to use
the same credentials.
Removal of this upserting behaviour is essentially the fix for #6067.
Due to a portal bug (#6099), we may receive a relay as connected that is
in fact shutting down. If a channel needs to be refreshed on exactly
that relay - whilst we are trying to refresh its allocation as part of
upserting - we end up in a busy loop of attempting to queue a message
but failing to do so because we haven't yet chosen an `active_socket`
for that relay.
Fixes: #6067.
Bumps [tailwindcss](https://github.com/tailwindlabs/tailwindcss) from
3.4.6 to 3.4.7.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/tailwindlabs/tailwindcss/releases">tailwindcss's
releases</a>.</em></p>
<blockquote>
<h2>v3.4.7</h2>
<h3>Fixed</h3>
<ul>
<li>Fix class detection in Slim templates with attached attributes and
ID (<a
href="https://redirect.github.com/tailwindlabs/tailwindcss/pull/14019">#14019</a>)</li>
<li>Ensure attribute values in <code>data-*</code> and
<code>aria-*</code> modifiers are always quoted in the generated CSS (<a
href="https://redirect.github.com/tailwindlabs/tailwindcss/pull/14037">#14037</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9824cb64a0"><code>9824cb6</code></a>
Update version in package.json</li>
<li><a
href="aa6c10f67f"><code>aa6c10f</code></a>
Add missing heading to changelog</li>
<li><a
href="245058c7fd"><code>245058c</code></a>
Update changelog for v3.4.7</li>
<li><a
href="605d8cd5eb"><code>605d8cd</code></a>
Update CHANGELOG.md</li>
<li><a
href="680c55c11c"><code>680c55c</code></a>
Normalize attribute selector for <code>data-*</code> and
<code>aria-*</code> modifiers (<a
href="https://redirect.github.com/tailwindlabs/tailwindcss/issues/14037">#14037</a>)</li>
<li><a
href="866860e6a6"><code>866860e</code></a>
Print eventual lightning CSS parsing errors when the CSS matcher fail
(<a
href="https://redirect.github.com/tailwindlabs/tailwindcss/issues/14034">#14034</a>)</li>
<li><a
href="bdc87ae1d7"><code>bdc87ae</code></a>
Fix class detection in Slim templates with attached attributes and IDs
(<a
href="https://redirect.github.com/tailwindlabs/tailwindcss/issues/14019">#14019</a>)</li>
<li>See full diff in <a
href="https://github.com/tailwindlabs/tailwindcss/compare/v3.4.6...v3.4.7">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps
[@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node)
from 20.14.12 to 22.0.2.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This almost always indicates a user-impacting connectivity error. For
customers troubleshooting their Gateways by grepping for `ERROR`, this
will make these much easier to find.
---------
Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
With the upcoming feature of full-route tunneling aka an "Internet
Resource", we need to expand the reference state machine in
`tunnel_test`. In particular, packets to non-resources will now be
routed to the gateway if we have previously activated the Internet
resource.
This is reasonably easy to model as we can see from the small diff.
Because `connlib` doesn't actually support the Internet resource yet,
the code snippet for where it is added to the list of all possible
resources to sample from is commented out.
When `tunnel_test` fails, it prints the initial state in verbose debug
formatting. Most of the fields in `RefClient` track state _during_ the
runtime of the test and are all empty initially. The same thing applies
to `Host`.
To make this output easier to read and scroll, we ignore some of these
fields in the debug output.
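For illustration, a trimmed-down `Debug` impl could skip the runtime-only fields like this (field names hypothetical):

```rust
use std::fmt;

// Hypothetical, trimmed-down version of the reference client.
struct RefClient {
    id: u64,
    // Runtime-only state: always empty in the initial state that gets printed.
    active_connections: Vec<u64>,
}

impl fmt::Debug for RefClient {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("RefClient")
            .field("id", &self.id)
            // `active_connections` et al. are skipped on purpose.
            .finish_non_exhaustive()
    }
}
```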
Closes #5846
Will be moved down to the IPC service eventually.
The goal for connection roaming is not a totally transparent "change
Wi-Fi networks without dropping SSH" handoff, but just for Firezone to
re-connect itself as quickly as possible so that everything above us can
re-connect as soon as it times out, and won't be hung up with a
broken tunnel.
On the gateway, the only packets we are interested in receiving on the
TUN device are the ones destined for clients. To achieve this, we
specifically set routes for the reserved IP ranges on our interface.
Multicast packets such as MLDv2 get sent to all interfaces and cause
unnecessary noise in our logs. Thus, as a defense-in-depth measure, we
drop all packets outside of the IP ranges reserved for our clients.
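As a sketch of such a filter, with a purely illustrative client range (the real ranges come from the interface configuration):

```rust
use std::net::Ipv4Addr;

// Illustrative client range; not the gateway's actual configuration.
const CLIENT_NET: Ipv4Addr = Ipv4Addr::new(100, 64, 0, 0);
const CLIENT_PREFIX: u8 = 11;

fn in_client_range(ip: Ipv4Addr) -> bool {
    let mask = u32::MAX << (32 - CLIENT_PREFIX as u32);
    (u32::from(ip) & mask) == (u32::from(CLIENT_NET) & mask)
}

// Drop anything read from the TUN device that isn't destined for a client,
// e.g. MLDv2 multicast noise.
fn should_forward(dst: Ipv4Addr) -> bool {
    in_client_range(dst)
}
```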
Currently, each connection always uses all relays. That is pretty
wasteful in terms of bandwidth usage and processing power because we
only ever need a single relay for a connection. When we re-deploy
relays, we actively invalidate them, meaning the connection gets cut
instantly without waiting for an ICE timeout and the next packet will
establish a new one.
This is now also asserted with a dedicated transition in `tunnel_test`.
To correctly simulate this in `tunnel_test`, we always cut the
connection to all relays. This frees us from modelling `connlib`'s
internal strategy for picking a relay which keeps the reference state
simple.
Resolves: #6014.
Bumps [zip](https://github.com/zip-rs/zip2) from 2.1.3 to 2.1.5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/zip-rs/zip2/releases">zip's
releases</a>.</em></p>
<blockquote>
<h2>v2.1.5</h2>
<h3>🚜 Refactor</h3>
<ul>
<li>change invalid_state() return type to io::Result&lt;T&gt;</li>
</ul>
<h2>v2.1.4</h2>
<h3>🐛 Bug Fixes</h3>
<ul>
<li>fix(<a
href="https://redirect.github.com/zip-rs/zip2/pull/215">#215</a>):
Upgrade to deflate64 0.1.9</li>
<li>Panic when reading a file truncated in the middle of an XZ block
header</li>
<li>Some archives with over u16::MAX files were handled incorrectly or
slowly (<a
href="https://redirect.github.com/zip-rs/zip2/pull/189">#189</a>)</li>
<li>Check number of files when deciding whether a CDE is the real
one</li>
<li>Could still select a fake CDE over a real one in some cases</li>
<li>May have to consider multiple CDEs before filtering for
validity</li>
<li>We now keep searching for a real CDE header after read an invalid
one from the file comment</li>
<li>Always search for data start when opening an archive for append, and
reject the header if data appears to start after central directory</li>
<li><code>deep_copy_file</code> no longer allows overwriting an existing
file, to match the behavior of <code>shallow_copy_file</code></li>
<li>File start position was wrong when extra data was present</li>
<li>Abort file if central extra data is too large</li>
<li>Overflow panic when central directory extra data is too large</li>
<li>ZIP64 header was being written twice when copying a file</li>
<li>ZIP64 header was being written to central header twice</li>
<li>Start position was incorrect when file had no extra data</li>
<li>Allow all reserved headers we can create</li>
<li>Fix a bug where alignment padding interacts with other extra-data
fields</li>
<li>Fix bugs involving alignment padding and Unicode extra fields</li>
<li>Incorrect header when adding AES-encrypted files</li>
<li>Parse the extra field and reject it if invalid</li>
<li>Incorrect behavior following a rare combination of
<code>merge_archive</code>, <code>abort_file</code> and
<code>deep_copy_file</code>. As well, we now return an error when a file
is being copied to itself.</li>
<li>path_to_string now properly handles the case of an empty path</li>
<li>Implement <code>Debug</code> for <code>ZipWriter</code> even when
it's not implemented for the inner writer's type</li>
<li>Fix an issue where the central directory could be incorrectly
detected</li>
<li><code>finish_into_readable()</code> would corrupt the archive if the
central directory had moved</li>
</ul>
<h3>🚜 Refactor</h3>
<ul>
<li>Verify with debug assertions that no FixedSizeBlock expects a
multi-byte alignment (<a
href="https://redirect.github.com/zip-rs/zip2/pull/198">#198</a>)</li>
<li>Use new do_or_abort_file method</li>
</ul>
<h3>⚡ Performance</h3>
<ul>
<li>Speed up CRC when encrypting small files</li>
<li>Limit the number of extra fields</li>
<li>Refactor extra-data validation</li>
<li>Store extra data in plain vectors until after validation</li>
<li>Only build one IndexMap after choosing among the possible valid
headers</li>
<li>Simplify validation of empty extra-data fields</li>
<li>Validate automatic extra-data fields only once, even if several are
present</li>
<li>Remove redundant <code>validate_extra_data()</code> call</li>
<li>Skip searching for the ZIP32 header if a valid ZIP64 header is
present (<a
href="https://redirect.github.com/zip-rs/zip2/pull/189">#189</a>)</li>
</ul>
<h3>⚙️ Miscellaneous Tasks</h3>
<ul>
<li>Fix a bug introduced by c934c824</li>
<li>Fix a failing unit test</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="8fb107ad5e"><code>8fb107a</code></a>
chore: release (<a
href="https://redirect.github.com/zip-rs/zip2/issues/222">#222</a>)</li>
<li><a
href="a7c1230dfa"><code>a7c1230</code></a>
publicly export and document the zip64 threshold constants (<a
href="https://redirect.github.com/zip-rs/zip2/issues/79">#79</a>)</li>
<li><a
href="a60bd79826"><code>a60bd79</code></a>
Merge pull request <a
href="https://redirect.github.com/zip-rs/zip2/issues/210">#210</a> from
a1phyr/multiple_refactors</li>
<li><a
href="7471cf526f"><code>7471cf5</code></a>
refactor: change invalid_state() return type to io::Result<T></li>
<li><a
href="9caa3b678f"><code>9caa3b6</code></a>
Merge pull request <a
href="https://redirect.github.com/zip-rs/zip2/issues/194">#194</a> from
zip-rs/release-plz-2024-06-15T04-17-17Z</li>
<li><a
href="8b11361b9e"><code>8b11361</code></a>
chore: release</li>
<li><a
href="55c2c64249"><code>55c2c64</code></a>
ci(fuzz): Set max length closer to current corpus entries' length</li>
<li><a
href="193bbe125b"><code>193bbe1</code></a>
fix(<a
href="https://redirect.github.com/zip-rs/zip2/issues/215">#215</a>):
Upgrade to deflate64 0.1.9</li>
<li><a
href="4e971d07ab"><code>4e971d0</code></a>
Commit unfinished corpus</li>
<li><a
href="c14986806a"><code>c149868</code></a>
Fix divergence from origin/master</li>
<li>Additional commits viewable in <a
href="https://github.com/zip-rs/zip2/compare/v2.1.3...v2.1.5">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
In `connlib`, traffic is sent through sockets in one of three ways:
1. Direct p2p traffic between clients and gateways: For these, we always
explicitly set the source IP (and thus interface).
2. UDP traffic to the relays: For these, we let the OS pick an
appropriate source interface.
3. WebSocket traffic over TCP to the portal: For this too, we let the OS
pick the source interface.
For (2) and (3), it is possible to run into routing loops, depending on
the routes that we have configured on the TUN device.
In Linux, we can prevent routing loops by marking a socket [0] and
repeating the mark when we add routes [1]. Packets sent via a marked
socket won't be routed by a rule that contains this mark. On Android, we
can do something similar by "protecting" a socket via a syscall on the
Java side [2].
On Windows, routing works slightly differently. There, the source
interface is determined based on a computed metric [3] [4]. To prevent
routing loops on Windows, we thus need to find the "next best" interface
after our TUN interface. We can achieve this with a combination of
several syscalls:
1. List all interfaces on the machine
2. Ask Windows for the best route on each interface, except our TUN
interface.
3. Sort by Windows' routing metric and pick the lowest one (lower is
better).
Thanks to the `SocketFactory` abstraction we previously introduced,
integrating this into `connlib` isn't too difficult:
1. For TCP sockets, we simply resolve the best route after creating the
socket and then bind it to that local interface. That way, all packets
will always go via that interface, regardless of which routes are
present on our TUN interface.
2. UDP is connection-less, so we need to decide per packet which
interface to use. "Pick the best interface for me" is modelled in
`connlib` via the `DatagramOut::src` field being `None`.
- To ensure those packets don't cause a routing loop, we introduce a
"source IP resolver" for our `UdpSocket`. This function gets called
every time we need to send a packet without a source IP.
- For improved performance, we cache these results (see the sketch after
this list). The Windows client uses this source IP resolver to apply the
strategy devised above and find a suitable source IP.
- In case source IP resolution fails, we don't send the packet. This is
important because otherwise the kernel might choose our TUN interface
again and trigger a routing loop.
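A minimal sketch of that resolver and its cache (names hypothetical):

```rust
use std::collections::HashMap;
use std::net::IpAddr;

// Sketch of the per-destination cache; the actual resolver is a
// platform-specific "best route" lookup.
struct SourceIpResolver {
    resolve: Box<dyn Fn(IpAddr) -> Option<IpAddr>>,
    cache: HashMap<IpAddr, IpAddr>,
}

impl SourceIpResolver {
    fn src_for(&mut self, dst: IpAddr) -> Option<IpAddr> {
        if let Some(src) = self.cache.get(&dst) {
            return Some(*src);
        }
        // Returning `None` means the caller drops the packet; sending without
        // a source IP could route it back into the TUN device.
        let src = (self.resolve)(dst)?;
        self.cache.insert(dst, src);
        Some(src)
    }
}
```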
The last remark to make here is that this also works for connection
roaming. The TCP socket gets thrown away when we reconnect to the
portal. Thus, the new socket will pick the new best interface as it is
re-created. The UDP sockets also get thrown away as part of roaming.
That clears the above cache which is what we want: Upon roaming, the
best interface for a given destination IP will likely have changed.
[0]:
59014a9622/rust/headless-client/src/linux.rs (L19-L29)
[1]:
59014a9622/rust/bin-shared/src/tun_device_manager/linux.rs (L204-L224)
[2]:
59014a9622/rust/connlib/clients/android/src/lib.rs (L535-L549)
[3]:
https://learn.microsoft.com/en-us/previous-versions/technet-magazine/cc137807(v=msdn.10)?redirectedfrom=MSDN
[4]:
https://learn.microsoft.com/en-us/windows-server/networking/technologies/network-subsystem/net-sub-interface-metric

Fixes: #5955.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
`connlib`'s event loop performs work in a very particular order:
1. Local buffers like IP, UDP and DNS packets are emptied.
2. Time-sensitive tasks, if any, are performed.
3. New UDP packets are processed.
4. New IP packets (from the TUN device) are processed.
This priority ensures we don't accept more work (i.e. new packets) until
we have finished processing existing work. As a result, we can keep
local buffers small and processing latencies low.
I am not completely confident about the cause of #6067, but if the
busy-loop originates from a bad timer, then the above priority means we
never get to the part where we read new UDP or IP packets, and
components such as `PhoenixChannel` - which operate outside of
`connlib`'s event loop - don't get any CPU time.
A naive fix for this problem is to just de-prioritise the polling of the
timer within `Io::poll`. I say naive because without additional changes,
this could delay the processing of time-sensitive tasks on a very busy
client / gateway where packets are constantly arriving and thus we
never[^1] reach the part where the timer gets polled.
To fix this, we make two distinct changes:
1. We pro-actively break from `connlib`'s event loop every 5000
iterations. This ensures that even on a very busy system, other
components like the `PhoenixChannel` get a chance to do _some_ work once
in a while.
2. In case we force-yield from the event loop, we call `handle_timeout`
and immediately schedule a new wake-up. This ensures time does advance
in regular intervals as well and we don't get wrongly suspended by the
runtime.
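The two changes could look roughly like this in a poll-style loop (a simplification; the constant and closure shape stand in for the real `Io::poll` structure):

```rust
use std::task::{Context, Poll};

// Illustrative constant mirroring the "every 5000 iterations" rule.
const MAX_ITERATIONS: usize = 5000;

fn poll_event_loop(
    cx: &mut Context<'_>,
    mut poll_once: impl FnMut(&mut Context<'_>) -> Poll<()>,
) -> Poll<()> {
    for _ in 0..MAX_ITERATIONS {
        match poll_once(cx) {
            Poll::Ready(()) => continue, // more work queued up; keep going
            Poll::Pending => return Poll::Pending, // nothing to do; runtime suspends us
        }
    }

    // Force-yield: other components (e.g. `PhoenixChannel`) get CPU time.
    // Waking ourselves schedules an immediate re-poll, so time keeps
    // advancing even while we are busy.
    cx.waker().wake_by_ref();
    Poll::Pending
}
```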
These changes don't prevent any timer-loops by themselves. With a
timer-loop, we still busy-loop for 5000 iterations and thus
unnecessarily burn through some CPU cycles. The important bit however is
that we stay operational and can accept packets and portal messages. Any
of them might change the state such that the timer value changes, thus
allowing `connlib` to self-heal from this loop.
Fixes: #6067.
[^1]: This is an assumption based on the possible control flow. In
practice, I believe that reading from the sockets or the TUN device is a
much slower operation than processing the packets. Thus, we should
eventually hit the timer path too.
We don't want the timer to fire multiple times at the same `Instant`
unless it has been specifically set to that `Instant` again. Thus, clear
the timer after it fired.
I don't think this fixed #6067 but it can't hurt.
Connection roaming within `connlib` has changed a fair bit since we
introduced the `reconnect` function. The new implementation is basically
a hard-reset of all state within `connlib`. Renaming this function
across all layers makes this more obvious.
Resolves: #6038.
Windows may delete the default route during roaming. To prevent this
from causing problems, we make `set_routes` add all routes regardless of
the previously stored ones. The known routes are only used to compute
which routes are to be removed.
For Linux, we do the same to keep behaviour consistent across platforms.
This also gives us the chance to not clear the cache when IPs are set,
since now all routes are always added, meaning they will always be
re-added when roaming.
Overall, this more closely aligns Linux and Windows with how Firezone
works on Apple and Android. There, we always remove all routes and set
new ones. Removing routes happens very rarely (only when CIDR resources
are deactivated), so not removing everything and instead always
re-adding the routes is still deemed worth it.
With the new implementation, applying the routes is guaranteed to always
take effect and is at the same time idempotent.
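A sketch of the add-unconditionally behaviour (types and platform calls are stand-ins):

```rust
use std::collections::HashSet;

// Hypothetical route type; the real code deals in CIDR networks.
type Route = (std::net::IpAddr, u8);

// Known routes only determine removals; every new route is (re-)added
// unconditionally, relying on route addition being idempotent.
fn set_routes(known: &mut HashSet<Route>, new: HashSet<Route>) {
    for gone in known.difference(&new) {
        remove_route(gone);
    }
    for route in &new {
        add_route(route); // re-adding an existing route is a no-op
    }
    *known = new;
}

fn remove_route(_route: &Route) { /* platform call elided */ }
fn add_route(_route: &Route) { /* platform call elided */ }
```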
---------
Signed-off-by: Gabi <gabrielalejandro7@gmail.com>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
The dependency update in #6003 introduced a regression: Connecting to
the OTLP exporter was hanging forever and thus the relay failed to start
up.
The hang seems to be related to _dropping_ the `meter_provider`. Looking
at the changelog update, this change was actually called out:
https://github.com/open-telemetry/opentelemetry-rust/blob/main/opentelemetry-otlp/CHANGELOG.md#v0170.
By setting these providers globally, the relay starts up just fine.
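A minimal sketch of that fix, assuming opentelemetry-rust's global registry (exact builder code elided):

```rust
use opentelemetry_sdk::metrics::SdkMeterProvider;

// Registering the provider globally keeps it alive for the process lifetime;
// letting it drop triggers a shutdown/flush, which is what appeared to hang.
fn install_metrics(meter_provider: SdkMeterProvider) {
    opentelemetry::global::set_meter_provider(meter_provider);
}
```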
To ensure this doesn't regress again, we add an OTEL collector to our
`docker-compose.yml` and configure the `relay-1` to connect to it.