Ever since #7289, we no longer issue DNS queries to `connlib` when we
reconnect to the portal. Thus, the "known hosts" feature conceived back
then, which allowed us to resolve that DNS query without an upstream
resolver, is no longer needed.
Rather than notarizing the embedded app, `notarytool` supports
notarizing the entire disk image, which recursively notarizes the
relevant binaries inside.
On macOS 12, returning an empty body for a `WindowGroup` can cause the
app UI to crash with the following error:
```
*** Assertion failure in void _NSWindowSetFrameIvar(NSWindow *, NSRect)(), NSWindow.m:935
```
Since the `menuBar` is not initialized when the app initializes, this
conditional can return an empty body due to a race condition.
Even though we're winding down support for macOS 12, it would be good to
fix this logic bug.
When `connlib` detects that no data is being sent on a connection, it
enters a "low-power" mode within which timers are set to a much longer
interval than usual. For `boringtun` this moves the timer from 1s to
30s.
At present, this timer also guards how often we actually update the
timer state within `boringtun`. Instead of following an "only update
exactly when this timer fires" policy, we now adopt an "update at least
this often" policy. The difference is that while we are already
executing the `handle_timeout` function, we might as well call into
`boringtun` and update its timer state too.
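As a rough sketch of the difference (the types below are stand-ins; the
real code calls `update_timers_at` on `boringtun`'s `Tunn`, whose exact
signature differs):

```rust
use std::time::{Duration, Instant};

const IDLE_WG_TIMER: Duration = Duration::from_secs(30); // low-power interval

/// Stand-in for boringtun's timer API.
trait WgTimers {
    fn update_timers_at(&mut self, now: Instant);
}

struct Peer<T> {
    tunn: T,
    next_wg_timer: Instant,
}

impl<T: WgTimers> Peer<T> {
    /// Called whenever *any* of our timers fires.
    fn handle_timeout(&mut self, now: Instant) {
        // "Update at least this often": we are executing anyway, so
        // opportunistically refresh boringtun's timer state instead of
        // waiting for the dedicated timer to fire exactly.
        self.tunn.update_timers_at(now);

        if now >= self.next_wg_timer {
            self.next_wg_timer = now + IDLE_WG_TIMER;
        }
    }
}
```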
Another side-effect of this timer is that `boringtun` may not be woken
in time to initiate a rekey when the session expires. WireGuard sessions
without activity expire after 3 minutes. Only the initiator should then
recreate the session. If this doesn't happen in time, the responder
(Gateway) may trigger a keep-alive timeout. Without an active session,
keep-alives also initiate sessions, resulting in us having two competing
sessions.
This fixes the failing test cases added in this PR: There, we ran into a
situation where a WireGuard tunnel idled for so long that the spec
requires the session to expire. In the test, we then sent a packet over
such an expired session, and the Gateway discarded it. The timers are
what check whether a session is expired:
- By calling `update_timers_at` more often, we can expire the session in
time and `boringtun` will buffer the to-be-sent packet until the new
session is established.
- By deactivating the keep-alive on the Gateway, we ensure that we only
ever have a single WireGuard session active.
- With https://github.com/firezone/boringtun/pull/53, we ensure the
Gateway doesn't initiate a new session in the beginning.
- With https://github.com/firezone/boringtun/pull/51, we ensure the
Client only ever initiates a single session.
To be entirely reliable, we also had to remove the idle WG timer and
update `boringtun`'s state every second. This is unfortunate but can
long-term be fixed by patching WireGuard to tell us exactly when it
wants to be woken, instead of us having to proactively wake it every
second _in case_ it needs to act on a timer.
Related: https://github.com/firezone/boringtun/issues/54.
Xcode doesn't allow wildcards in input file lists, so the rules I set up
in #7488 never took effect.
Upon further investigation, it appears that the `strip` command executed
unconditionally at the end of every Rust build was the culprit. Since
Xcode already does this for us, it's a useless step that adds about 30s
to the build time.
Unfortunately, there isn't a good way to tell Xcode not to build Rust.
But now we don't need to -- `cargo`'s build cache is smart enough to
skip builds and we are back to the ~1-2s range for repeated builds when
only Swift code has changed.
We also add the swift-bridge-generated code to version control. These
files don't change regularly, and Xcode sometimes complains that they
don't exist _before_ it lets you run the `cargo build` that generates
them 🙃.
Making Relay subnet changes on prod requires us to use completely
unused ranges. This is because `terraform apply` brings up the new
resources in addition to the existing ones, so the old ranges are still
occupied.
Because `e2-micro` instances are cheap (our current 24 instances cost
only about $50/mo), it makes sense to deploy a single one in each GCP
region that supports them.
This will increase our global presence, reducing latency for users
around the world especially if they happen to need to go through a Relay
because of a badly behaved NAT. The number of instances in each region
is reduced from `2` to `1` based on the logic that more heavily
populated parts of the world _already_ have a higher density of GCP
regions in them, and we don't need inter-region redundancy.
Also, this ensures our staging Relay deployment matches our production
Relay deployment, reducing the chance that drift between the two will
cause unforeseen downtime.
This will be tested on staging first, and if all goes well, will go out
to production over the weekend.
Previously, it was possible to use the Firezone relay in "standalone"
mode where it would not attempt to connect to a portal. A long time ago,
this mode was introduced in order for us to test the TURN compatibility
of the relay with non-Firezone TURN clients. These tests have long been
removed and thus the mode is no longer required.
The positive side-effect of this is that we can make the
`FIREZONE_API_URL` a mandatory parameter and thus direct self-hosted
users towards setting this to the endpoint of their self-hosted portal.
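As a sketch of what making the parameter mandatory looks like, assuming
the relay parses its CLI with `clap`'s derive API (the field name is
illustrative):

```rust
use clap::Parser;

#[derive(Parser)]
struct Args {
    /// The portal endpoint to connect to. Self-hosted users must point
    /// this at their own portal.
    #[arg(long, env = "FIREZONE_API_URL")]
    api_url: String,
    // With no default value and no `Option`, clap errors out at startup
    // if neither the flag nor the env var is set.
}
```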
For a while now, `connlib` has been calling these two callbacks right
after each other because the internal event already bundles all the
information about the TUN device. With this PR, we merge the two
callback functions in the layers above `connlib` as well.
Resolves: #6182.
This code doesn't make sense:
- In the Adapter, we are not running on the main UI thread
- In this callback, we are running on the `workQueue` anyway
- Strictly speaking, `Task` is not how you spawn a new thread in Swift
With #7684, we update our boringtun fork to support deterministic timers
and handshake jitter. Further testing revealed that there was a bug
within the jitter implementation that prevented the jitter from actually
applying (https://github.com/firezone/boringtun/pull/48). In addition,
we were only calling `update_timers_at` with a precision of 1s, making
the internal jittering of 0 to 333ms within `boringtun` useless.
To fix this, we introduced a `next_timer_update` function on `Tunn` in
https://github.com/firezone/boringtun/pull/49 and make use of it here.
Finally, https://github.com/firezone/boringtun/pull/50 prioritizes the
sending of these scheduled handshakes to further improve the timer
precision.
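Put together, the event loop can fold `boringtun`'s own deadline into
its next wake-up instead of polling on a 1-second grid. A sketch,
assuming `next_timer_update` yields the next deadline (the exact return
type in the fork may differ):

```rust
use std::time::Instant;

/// Stand-in for the `Tunn::next_timer_update` added in
/// firezone/boringtun#49.
trait NextTimerUpdate {
    fn next_timer_update(&self) -> Option<Instant>;
}

/// Compute the next wake-up across all deadlines. Because boringtun's
/// deadline participates directly, a scheduled handshake with 0-333ms
/// of jitter gets sent with sub-second precision.
fn next_wake<T: NextTimerUpdate>(tunn: &T, other_deadlines: &[Instant]) -> Option<Instant> {
    other_deadlines
        .iter()
        .copied()
        .chain(tunn.next_timer_update())
        .min()
}
```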
With these patches applied, this is what the rekey logs look like:
```
2025-01-08T13:20:09.209Z DEBUG boringtun::noise::timers: HANDSHAKE(REKEY_AFTER_TIME (on send)) cid=b3d34a15-55ab-40df-994b-a838e75d65d7
2025-01-08T13:20:09.209Z DEBUG boringtun::noise::timers: Scheduling new handshake jitter=204.361814ms cid=b3d34a15-55ab-40df-994b-a838e75d65d7
2025-01-08T13:20:09.415Z DEBUG boringtun::noise: Sending handshake_initiation cid=b3d34a15-55ab-40df-994b-a838e75d65d7
2025-01-08T13:20:09.537Z DEBUG boringtun::noise: Received handshake_response local_idx=2898279939 remote_idx=2039394307 cid=b3d34a15-55ab-40df-994b-a838e75d65d7
2025-01-08T13:20:09.540Z DEBUG boringtun::noise: New session session=2898279939 cid=b3d34a15-55ab-40df-994b-a838e75d65d7
```
We can see that the scheduled handshake now does indeed get sent with
the applied jitter of ~200ms.
Currently, telemetry via Sentry in our relay code is opt-out but won't
actually activate for a portal instance that isn't our staging or
production environment. However, this isn't enough to prevent alerts
from relay instances that aren't ours. It turns out that some
self-hosted customers don't realise that they have to change the portal
URL to their self-hosted portal. Without changing that, the relay will
attempt to authenticate to our production portal with an unknown token
and error out with a 401, logging a false-positive to Sentry.
In #7680, I stated that a VPN profile's status goes to `.invalid` when
its associated system extension is yanked. That's not always true. I
observed cases where the profile was still in the `.disconnected` state
with the system extension disabled or removed.
So, now we explicitly check on startup for two distinct states:
- whether the system extension is installed
- whether the VPN profile is valid
Based on this, we show an improved GrantVPNView (macOS only) with clear
steps the user must perform to get set up.
If the user accidentally closes this window, they can open the Menubar
and click "Allow VPN permission to sign in" which will both (re)install
the extension and allow the profile.
<img width="1012" alt="Screenshot 2025-01-07 at 5 23 19 PM"
src="https://github.com/user-attachments/assets/c36b078e-835b-4c6e-a186-bc2e5fef7799"
/>
<img width="1012" alt="Screenshot 2025-01-07 at 5 24 06 PM"
src="https://github.com/user-attachments/assets/23d84af4-4fdb-4f03-b8f9-07a1e09da891"
/>
<img width="1012" alt="Screenshot 2025-01-07 at 5 31 41 PM"
src="https://github.com/user-attachments/assets/5b88dfa4-1725-45f2-bd6e-1939b5639cf4"
/>
Makes a few additions to the Apple Changelog in prep for the 1.4.0
release. Also removes the GSO note for Apple because AFAIK Darwin
doesn't support it.
When file descriptors like sockets or the TUN device are opened in
non-blocking mode, performing an operation that would block emits the
`WouldBlock` IO error. These errors _should_ be translated into
`Poll::Pending`, with a waker registered that gets called whenever the
operation should be attempted again. Therefore, we should _never_ see
these IO errors.
Previously, the implementation of the tunnel's event loop did not yet
properly handle this backpressure and instead sometimes dropped packets
when it should have suspended. This has since been fixed, but the
branch introduced back then that just ignored
`io::ErrorKind::WouldBlock` errors had remained.
Changing this to a debug-assert will alert us whenever we accidentally
break this, without altering the behaviour of the release binary.
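A minimal sketch of the new guard, assuming a helper in the event
loop's error path (the function name and logging are illustrative):

```rust
use std::io;

/// Hypothetical error-path helper: in release builds the `debug_assert!`
/// compiles to nothing, preserving behaviour; in debug builds and tests
/// it flags the broken invariant immediately.
fn on_tun_error(e: &io::Error) {
    debug_assert!(
        e.kind() != io::ErrorKind::WouldBlock,
        "`WouldBlock` must be translated into `Poll::Pending`, not surfaced as an IO error"
    );

    // All other read/write errors on the TUN device remain non-fatal.
    eprintln!("error on TUN device: {e}");
}
```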
On macOS, there are two steps the user needs to complete for the VPN
profile to be active: enable the system extension and allow the VPN
profile.
The system extension must be installed before the VPN profile can be
allowed. This PR updates the flow so that the user is prompted to handle
both of those serially. Before, we tried to install the system extension
on launch and prompt the user to allow the profile at the same time.
This PR includes fixes to handle the edge cases where the user removes
the profile and/or extension while the tunnel is connected. When that
happens, the `NEVPNStatus` becomes `.invalid` and we replace the `Sign
In` link in the Menubar with a prompt to restart the grant permissions
flow. On iOS, the behavior is similar -- we move the view back to the
GrantPermissionsView.
Lastly, some refactoring is included to use consistent naming for
`VPNProfile` instead of `TunnelManager` to more closely match what Apple
calls these things.
At present, the WireGuard implementation within `boringtun` is impure
with regard to time due to calls to `Instant::now` and
`Instant::elapsed`. This makes it impossible to exhaustively test
time-related features because time cannot be advanced arbitrarily. The
rest of `connlib` is implemented in a sans-IO fashion where time is
controlled from the outside via `Instant` parameters on every function
that requires access to the current time.
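As a minimal sketch of this convention (the type is illustrative, not
`connlib`'s actual code):

```rust
use std::time::{Duration, Instant};

struct Session {
    created_at: Instant,
}

impl Session {
    /// Pure variant: the caller supplies `now`, so a test can advance
    /// time arbitrarily instead of actually sleeping. The impure
    /// equivalent would be `self.created_at.elapsed() > LIMIT`, which
    /// hides a call to `Instant::now`.
    fn is_expired_at(&self, now: Instant) -> bool {
        const LIMIT: Duration = Duration::from_secs(180); // 3-minute WG session expiry

        now.duration_since(self.created_at) > LIMIT
    }
}
```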
With this PR, we update to the latest version of our `boringtun` fork at
https://github.com/firezone/boringtun which introduces pure equivalents
of all functions that require access to the current time _and_ also
implements the missing handshake-delay jitter feature (see
https://github.com/firezone/boringtun/issues/19).
This is a pretty safe upgrade as the production code doesn't really
change and time advances at the same rate as before. To ensure this
passes our test-suite, I ran 50_000 iterations locally.
`sentry-cli debug-files upload` offers no option to exclude certain
files or directories when recursively searching the given path. Thus, we
need to remove this staging directory to prevent `sentry-cli` from
walking into it and inevitably erroring out when it hits a path it
doesn't have access to.
For our test-suite, we need to sample a unique, non-overlapping IP for
each component that is being simulated (client, gateways and relays).
These are sampled from a predefined range.
Currently, we only consider the first 100 IPs of this range and pick one
from an allocated `Vec`. This isn't ideal for performance and increases
the likelihood of two hosts having the same IP. IPv4 and IPv6 addresses
can also just be represented as numbers. Instead of sampling a random IP
from a list, we can simply sample a random number between the first and
last address of the particular IP network to achieve the same effect.
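A minimal sketch of that sampling strategy for IPv4 (the IPv6 case is
analogous with `u128`; the function is illustrative, not the
test-suite's actual helper):

```rust
use std::net::Ipv4Addr;

use rand::Rng;

/// Sample a uniformly random address between `first` and `last`
/// (inclusive) by treating IPv4 addresses as plain `u32`s, instead of
/// materialising a `Vec` of candidates.
fn sample_ipv4(rng: &mut impl Rng, first: Ipv4Addr, last: Ipv4Addr) -> Ipv4Addr {
    let n = rng.gen_range(u32::from(first)..=u32::from(last));

    Ipv4Addr::from(n)
}
```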
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.215 to
1.0.217.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/serde-rs/serde/releases">serde's
releases</a>.</em></p>
<blockquote>
<h2>v1.0.217</h2>
<ul>
<li>Support serializing externally tagged unit variant inside flattened
field (<a
href="https://redirect.github.com/serde-rs/serde/issues/2786">#2786</a>,
thanks <a
href="https://github.com/Mingun"><code>@Mingun</code></a>)</li>
</ul>
<h2>v1.0.216</h2>
<ul>
<li>Mark all generated impls with #[automatically_derived] to exclude
from code coverage (<a
href="https://redirect.github.com/serde-rs/serde/issues/2866">#2866</a>,
<a
href="https://redirect.github.com/serde-rs/serde/issues/2868">#2868</a>,
thanks <a
href="https://github.com/tdittr"><code>@tdittr</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="930401b0dd"><code>930401b</code></a>
Release 1.0.217</li>
<li><a
href="cb6eaea151"><code>cb6eaea</code></a>
Fix roundtrip inconsistency:</li>
<li><a
href="b6f339ca36"><code>b6f339c</code></a>
Resolve repr_packed_without_abi clippy lint in tests</li>
<li><a
href="2a5caea1a8"><code>2a5caea</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/serde/issues/2872">#2872</a>
from dtolnay/ehpersonality</li>
<li><a
href="b9f93f99aa"><code>b9f93f9</code></a>
Add no-std CI on stable compiler</li>
<li><a
href="eb5cd476ba"><code>eb5cd47</code></a>
Drop #[lang = "eh_personality"] from no-std test</li>
<li><a
href="8478a3b7dd"><code>8478a3b</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/serde/issues/2871">#2871</a>
from dtolnay/nostdstart</li>
<li><a
href="dbb909136e"><code>dbb9091</code></a>
Replace #[start] with extern fn main</li>
<li><a
href="ad8dd4148b"><code>ad8dd41</code></a>
Release 1.0.216</li>
<li><a
href="f91d2ed9ae"><code>f91d2ed</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/serde/issues/2868">#2868</a>
from dtolnay/automaticallyderived</li>
<li>Additional commits viewable in <a
href="https://github.com/serde-rs/serde/compare/v1.0.215...v1.0.217">compare
view</a></li>
</ul>
</details>
Bumps [mio](https://github.com/tokio-rs/mio) from 1.0.2 to 1.0.3.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/tokio-rs/mio/blob/master/CHANGELOG.md">mio's
changelog</a>.</em></p>
<blockquote>
<h1>1.0.3</h1>
<ul>
<li>Implement more I/O safety traits
(<a
href="https://redirect.github.com/tokio-rs/mio/pull/1831">tokio-rs/mio#1831</a>).</li>
<li>Remove hermit-abi dependency, now using libc
(<a
href="https://redirect.github.com/tokio-rs/mio/pull/1830">tokio-rs/mio#1830</a>).</li>
<li>Use <code>poll(2)</code> implementation on AIX, removing the need
for using
<code>mio_unsupported_force_poll_poll</code>
(<a
href="https://redirect.github.com/tokio-rs/mio/pull/1833">tokio-rs/mio#1833</a>).</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="f45f4928da"><code>f45f492</code></a>
Release v1.0.3 (<a
href="https://redirect.github.com/tokio-rs/mio/issues/1843">#1843</a>)</li>
<li><a
href="cbb53c71a2"><code>cbb53c7</code></a>
Use poll(2) implementation on AIX</li>
<li><a
href="d8d68ac637"><code>d8d68ac</code></a>
Implement more I/O safety traits</li>
<li><a
href="8b6c4b5d21"><code>8b6c4b5</code></a>
Remove dependency to hermit-abi (<a
href="https://redirect.github.com/tokio-rs/mio/issues/1830">#1830</a>)</li>
<li>See full diff in <a
href="https://github.com/tokio-rs/mio/compare/v1.0.2...v1.0.3">compare
view</a></li>
</ul>
</details>
This is a very minor regression caused by #7555: we didn't bind the
status update observer until after the VPN profile was created. In
practice this didn't cause an issue, because the status never changes
until a VPN profile is created, but I still thought it good to fix.
The JSON encoder is now configured in `init()` as well, like any other
instance variable should be.
If the user removes a VPN profile while it's connected, and then re-adds
a fresh one, the system will bring the tunnel back up with default
configuration, which means a blank `accountSlug`.
Instead of crashing because this isn't set, we now abort the tunnel
connection with an error.
Bumps the npm_and_yarn group in /website with 1 update:
[next](https://github.com/vercel/next.js).
Updates `next` from 14.2.18 to 14.2.21
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/vercel/next.js/releases">next's
releases</a>.</em></p>
<blockquote>
<h2>v14.2.21</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>Upgrade React from 14898b6a9 to 178c267a4e: <a
href="https://redirect.github.com/vercel/next.js/pull/74115">vercel/next.js#74115</a></li>
<li>Fix unstable_allowDynamic when used with pnpm: <a
href="https://redirect.github.com/vercel/next.js/pull/73765">vercel/next.js#73765</a></li>
</ul>
<h3>Misc Changes</h3>
<ul>
<li>chore(docs): add missing search: '' on remotePatterns: <a
href="https://redirect.github.com/vercel/next.js/pull/73927">vercel/next.js#73927</a></li>
<li>chore(docs): update version history of next/image: <a
href="https://redirect.github.com/vercel/next.js/pull/73926">vercel/next.js#73926</a></li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/unstubbable"><code>@unstubbable</code></a>, <a
href="https://github.com/ztanner"><code>@ztanner</code></a>, and <a
href="https://github.com/styfle"><code>@styfle</code></a> for
helping!</p>
<h2>v14.2.20</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>Fix fetch cloning bug (<a
href="https://redirect.github.com/vercel/next.js/pull/73532">vercel/next.js#73532</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/wyattjoh"><code>@wyattjoh</code></a> for
helping!</p>
<h2>v14.2.19</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>ensure worker exits bubble to parent process (<a
href="https://redirect.github.com/vercel/next.js/issues/73433">#73433</a>)</li>
<li>Increase max cache tags to 128 (<a
href="https://redirect.github.com/vercel/next.js/issues/73125">#73125</a>)</li>
</ul>
<h3>Misc Changes</h3>
<ul>
<li>Update max tag items limit in docs (<a
href="https://redirect.github.com/vercel/next.js/issues/73445">#73445</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/ztanner"><code>@ztanner</code></a> and <a
href="https://github.com/ijjk"><code>@ijjk</code></a> for helping!</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="2655f6efd3"><code>2655f6e</code></a>
v14.2.21</li>
<li><a
href="8803d2b46e"><code>8803d2b</code></a>
Backport (v14): Upgrade React from 14898b6a9 to 178c267a4e (<a
href="https://redirect.github.com/vercel/next.js/issues/74115">#74115</a>)</li>
<li><a
href="6e35243eae"><code>6e35243</code></a>
chore(docs): add missing <code>search: ''</code> on
<code>remotePatterns</code> (<a
href="https://redirect.github.com/vercel/next.js/issues/73925">#73925</a>)
(<a
href="https://redirect.github.com/vercel/next.js/issues/73927">#73927</a>)</li>
<li><a
href="54919d2f28"><code>54919d2</code></a>
chore(docs): update version history of <code>next/image</code> (<a
href="https://redirect.github.com/vercel/next.js/issues/73926">#73926</a>)</li>
<li><a
href="049a6907af"><code>049a690</code></a>
Backport: Fix <code>unstable_allowDynamic</code> when used with pnpm (<a
href="https://redirect.github.com/vercel/next.js/issues/73765">#73765</a>)</li>
<li><a
href="663fa9cb29"><code>663fa9c</code></a>
Fix SWC and React versions for <code>14-2-1</code> branch (<a
href="https://redirect.github.com/vercel/next.js/issues/73791">#73791</a>)</li>
<li><a
href="ed78a4aa67"><code>ed78a4a</code></a>
v14.2.20</li>
<li><a
href="530421d3a2"><code>530421d</code></a>
[backport] Fix/dedupe fetch clone (<a
href="https://redirect.github.com/vercel/next.js/issues/73532">#73532</a>)</li>
<li><a
href="cbc62adaba"><code>cbc62ad</code></a>
v14.2.19</li>
<li><a
href="92280dc435"><code>92280dc</code></a>
[backport] Update max tag items limit in docs (<a
href="https://redirect.github.com/vercel/next.js/issues/73445">#73445</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/vercel/next.js/compare/v14.2.18...v14.2.21">compare
view</a></li>
</ul>
</details>
Although interacting with the Keychain typically involves I/O, there
wasn't a good reason to make all of its functions `async` as this
infects the caller hierarchy. Instead, the calling code should decide
how to wrap the synchronous calls directly with their own preferred
mechanism for dealing with blocking operations.
In practice, it turns out that nearly all of the places where we were
interacting with the Keychain were _already_ within some sort of
non-critical Task or equivalent, especially the important parts like the
UI thread. So this complexity is not required.
The Keychain was moved from an `actor` to a regular enum because
we don't need the serialization guarantees here -- since the calls are
blocking, there is no risk of Keychain operations happening out of
order.
Tested on macOS 15.
If we pass a `completionHandler` to `sendProviderMessage`, the Network
Extension framework will expect that completion handler to be called.
This probably doesn't cause any issues as-is, but it's better to be
correct here.
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.93 to 1.0.95.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/dtolnay/anyhow/releases">anyhow's
releases</a>.</em></p>
<blockquote>
<h2>1.0.95</h2>
<ul>
<li>Add <a
href="https://docs.rs/anyhow/1/anyhow/struct.Error.html#method.from_boxed"><code>Error::from_boxed</code></a>
(<a
href="https://redirect.github.com/dtolnay/anyhow/issues/401">#401</a>,
<a
href="https://redirect.github.com/dtolnay/anyhow/issues/402">#402</a>)</li>
</ul>
<h2>1.0.94</h2>
<ul>
<li>Documentation improvements</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="48be1caa24"><code>48be1ca</code></a>
Release 1.0.95</li>
<li><a
href="a03d6d60f9"><code>a03d6d6</code></a>
Merge pull request <a
href="https://redirect.github.com/dtolnay/anyhow/issues/402">#402</a>
from dtolnay/fromboxed</li>
<li><a
href="52e4abb1f2"><code>52e4abb</code></a>
Add Error::from_boxed with documentation about bidirectional
<code>?</code></li>
<li><a
href="ffecefcfe0"><code>ffecefc</code></a>
Merge pull request <a
href="https://redirect.github.com/dtolnay/anyhow/issues/401">#401</a>
from dtolnay/construct</li>
<li><a
href="671f700dd3"><code>671f700</code></a>
Add construct_ prefix to name of private construct functions</li>
<li><a
href="8ceb5e988f"><code>8ceb5e9</code></a>
Release 1.0.94</li>
<li><a
href="b9009abc16"><code>b9009ab</code></a>
Merge pull request <a
href="https://redirect.github.com/dtolnay/anyhow/issues/399">#399</a>
from dtolnay/okvalue</li>
<li><a
href="863791a66d"><code>863791a</code></a>
Align naming between Ok function argument and its documentation</li>
<li><a
href="2081692170"><code>2081692</code></a>
Merge pull request <a
href="https://redirect.github.com/dtolnay/anyhow/issues/398">#398</a>
from zertosh/ok_doc_format</li>
<li><a
href="cc2cecb428"><code>cc2cecb</code></a>
Fix anyhow::Ok rustdoc code formatting</li>
<li>Additional commits viewable in <a
href="https://github.com/dtolnay/anyhow/compare/1.0.93...1.0.95">compare
view</a></li>
</ul>
</details>
Reading from and writing to the TUN device within `connlib` happens in
separate threads. The tasks running within these threads are connected
to the rest of `connlib` via channels. When the application shuts down,
these threads also need to exit. Currently, we attempt to detect this
from within the task when these channels close. It appears that there is
a race condition here, because we first attempt to read from the TUN
device before reading from the channels. We treat read & write errors on
the TUN device as non-fatal, so we loop around and attempt to read from
it again, causing an infinite loop and log spam.
To fix this, we swap the order in which we evaluate the two concurrent
tasks: the first task to be polled is now the channel for outbound
packets, and only if that one is empty do we attempt to read new packets
from the TUN device. This is also better from a backpressure point of
view: we should attempt to flush our local buffers of already-processed
packets before taking on "new work".
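Expressed as a tokio sketch (connlib's real channel and TUN types
differ; `biased` makes `select!` poll the branches top to bottom instead
of randomly):

```rust
use tokio::io::{AsyncRead, AsyncReadExt};
use tokio::sync::mpsc;

/// Illustrative shape of the re-ordered task, not connlib's actual code.
async fn tun_task<T: AsyncRead + Unpin>(mut outbound_rx: mpsc::Receiver<Vec<u8>>, mut tun: T) {
    let mut buf = vec![0u8; 1500];

    loop {
        tokio::select! {
            biased;

            // 1. Flush already-processed packets first; `None` means the
            //    channel closed, i.e. the application is shutting down.
            maybe_packet = outbound_rx.recv() => {
                let Some(_packet) = maybe_packet else { break };
                // ... write `_packet` to the TUN device ...
            }
            // 2. Only once the channel is empty do we take on new work.
            result = tun.read(&mut buf) => {
                let _new_packet = result; // read errors stay non-fatal
            }
        }
    }
}
```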
As a defense-in-depth strategy, we also attempt to detect the particular
error from the tokio runtime when it is being shut down and exit the
task.
Resolves: #7601.
Related: https://github.com/tokio-rs/tokio/issues/7056.
- Updates the Sentry release name to follow `component@version` format
- Uses the Sentry `dist` field to set the distribution type,
`appstore|standalone`
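A sketch of the resulting `sentry` initialisation: `sentry::release_name!()`
expands to `<crate-name>@<crate-version>`, matching the `component@version`
format; whether `dist` is settable directly on `ClientOptions` in the SDK
version in use is an assumption here:

```rust
fn init_telemetry() -> sentry::ClientInitGuard {
    sentry::init(sentry::ClientOptions {
        // `component@version`, e.g. "firezone-gui-client@1.4.0".
        release: sentry::release_name!(),
        // Assumption: the SDK exposes `dist` on its options; the value
        // distinguishes "appstore" from "standalone" builds.
        dist: Some("standalone".into()),
        ..Default::default()
    })
}
```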
The application-split itself doesn't really warrant having two different
Sentry projects.
1. The location of the panic / log already tells us which component is
failing.
2. Both of the projects are built with Rust so the same "platform"
setting applies.
3. Reducing the number of Sentry projects makes things easier to manage.
4. The binaries are started as independent processes, so the two Sentry
contexts don't interfere.
What we should keep in mind is that one instance of the application will
now report to Sentry twice using the same DSN. I _think_ this means that
the number of sessions listed in Sentry will be double the number of
actual client runs. The same is true for the Apple client though, and
once we integrate Sentry for Android, the same will apply there, so
relative to each other, those numbers still make sense.