```[tasklist]
- [x] Update website
- [x] Update blog entry with old link
- [ ] ~~Replace GitHub URL in GUI Client updater with our own links~~
- [ ] Wait for CI to go green
```
Refs #4531
This proposes a unified scheme for deb and MSI packages, and moves
Windows to that scheme.
This breaks compatibility. Existing Clients won't recognize the new
asset names once this is merged, so they won't show the "Firezone 1.0.0
is available" pop-up.
---------
Co-authored-by: Jamil Bou Kheir <jamilbk@users.noreply.github.com>
Closes #4682, closes #4691
```[tasklist]
# Before merging
- [x] Wait for `linux-group` test to go green on `main` (#4692)
- [ ] Wait for those browser tests to get fixed
- [ ] *All* compatibility tests must pass on this branch
```
---------
Signed-off-by: Reactor Scram <ReactorScram@users.noreply.github.com>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
In Rust, `Result::unwrap()` produces a panic with an owned `String`.
Currently, we only attempt to downcast the payload to a `str`, which
means those errors show up as "panicked with a non-string payload"
instead of the actual panic message.
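A minimal sketch of the broader downcast, checking for both payload types (the helper name `panic_message` is illustrative, not the actual connlib code):

```rust
use std::any::Any;
use std::panic;

/// Extract a human-readable message from a panic payload, trying both
/// `&str` (from `panic!("literal")`) and `String` (from formatted panics
/// such as the one produced by `Result::unwrap()`).
fn panic_message(payload: &(dyn Any + Send)) -> String {
    if let Some(s) = payload.downcast_ref::<&str>() {
        (*s).to_string()
    } else if let Some(s) = payload.downcast_ref::<String>() {
        s.clone()
    } else {
        "panicked with a non-string payload".to_string()
    }
}

fn main() {
    // Silence the default hook so the deliberate panic doesn't spam stderr.
    panic::set_hook(Box::new(|_| {}));

    let result = panic::catch_unwind(|| {
        let res: Result<(), String> = Err("boom".to_string());
        res.unwrap(); // panics with an owned `String` payload
    });

    let payload = result.unwrap_err();
    // Prints the real unwrap message instead of the generic fallback.
    println!("{}", panic_message(payload.as_ref()));
}
```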
Related: #4736.
As part of testing #4750, @jamilbk ran into an interesting but unrelated
bug. Currently, we never invalidate host candidates. However, because we
rebind our sockets, we get new ports and thus our old host candidates
are always invalid. Thus, if you have a setup where your gateway and
client are on the same subnet they end up settling on a host-host
connection. If the client then roams to a different network, we get a
new srflx IP but because we don't invalidate the host candidate, we run
into an ICE timeout and never switch over the connection.
We actually have a unit test for this, but it didn't catch the bug
because of a bug in str0m (https://github.com/algesten/str0m/pull/504):
candidates with the same IP but a different kind were incorrectly invalidated. In our
test, we don't have a NAT and thus host == srflx candidate. Thus, in the
roaming test, we invalidated the host candidate based on the new srflx
candidate which made the connection migration work.
With the patch included, the reconnect unit test actually fails to send
the packet, confirming this theory. By invalidating all host candidates
on `reconnect`, we fix this bug.
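The fix can be sketched like this (the `Connection` and `Candidate` types here are illustrative stand-ins, not the actual snownet / str0m types):

```rust
use std::net::SocketAddr;

#[derive(Debug, Clone, Copy, PartialEq)]
enum CandidateKind {
    Host,
    ServerReflexive,
}

#[derive(Debug, Clone)]
struct Candidate {
    kind: CandidateKind,
    addr: SocketAddr,
}

struct Connection {
    local_candidates: Vec<Candidate>,
}

impl Connection {
    fn reconnect(&mut self) {
        // Host candidates embed the old local port; after rebinding our
        // sockets they can never match incoming traffic, so drop all of
        // them. srflx candidates are refreshed via new BINDING responses.
        self.local_candidates
            .retain(|c| c.kind != CandidateKind::Host);
        // ... rebind sockets, gather new candidates, re-signal ...
    }
}

fn main() {
    let mut conn = Connection {
        local_candidates: vec![
            Candidate {
                kind: CandidateKind::Host,
                addr: "192.168.1.2:52625".parse().unwrap(),
            },
            Candidate {
                kind: CandidateKind::ServerReflexive,
                addr: "203.0.113.9:52625".parse().unwrap(),
            },
        ],
    };
    conn.reconnect();
    println!("{:?}", conn.local_candidates);
}
```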
---------
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
Currently, the portal returns us a flat list of relays where each entry
only has a single address. However, our relays can operate in dual-stack
mode, meaning that they listen on IPv4 and IPv6 at the same time. Thus,
for a relay that is in dual-stack mode, this list will contain two
entries with the same relay ID, one for each address.
This wasn't really a problem until #4567 where we started indexing
relays by ID. As a result, a relay that operates in dual-stack mode is
now only reachable under either its IPv4 or its IPv6 address. Which one
wins is non-deterministic due to the iteration order of `HashMap`s and
the order in which the list is returned from the portal.
For the TURN protocol, clients are indexed by their 3-tuple (IP, port,
protocol) which means a client talking to a relay over IPv4 is a
different client than one talking over IPv6. Thus, treating the same
relay as two different relays has additional consequences: It means we
allocate a pair of IPv4 & IPv6 addresses for each one, resulting in up
to 4 relay candidates per relay.
Both of these problems are solved in this PR.
1. Upon deserializing the list of relays from the portal, we group them
by ID and parse the addresses into a `RelaySocket`. This structure is
the equivalent of `IpStack` on the relay end and represents an enum with
3 different values:
- `V4`: Only an IPv4 address is known.
- `V6`: Only an IPv6 address is known.
- `Dual`: Both an IPv4 and an IPv6 address are known.
2. Instead of creating two `Allocation`s (one per address), we now
initialize an `Allocation` with this `RelaySocket`.
3. We let the `Allocation` figure out which socket to use. Let's look
at how we do that.
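Steps 1 and 2 can be sketched like this (`group_relays` and the exact field layout are illustrative assumptions, not the actual connlib code):

```rust
use std::collections::HashMap;
use std::net::{SocketAddr, SocketAddrV4, SocketAddrV6};

#[derive(Debug, Clone, Copy)]
enum RelaySocket {
    V4(SocketAddrV4),
    V6(SocketAddrV6),
    Dual { v4: SocketAddrV4, v6: SocketAddrV6 },
}

/// Group the portal's flat list of (relay_id, address) entries by ID,
/// merging a V4 and a V6 entry for the same relay into `Dual`.
fn group_relays(entries: Vec<(u64, SocketAddr)>) -> HashMap<u64, RelaySocket> {
    let mut relays = HashMap::new();
    for (id, addr) in entries {
        relays
            .entry(id)
            .and_modify(|existing: &mut RelaySocket| {
                *existing = match (*existing, addr) {
                    (RelaySocket::V4(v4), SocketAddr::V6(v6)) => {
                        RelaySocket::Dual { v4, v6 }
                    }
                    (RelaySocket::V6(v6), SocketAddr::V4(v4)) => {
                        RelaySocket::Dual { v4, v6 }
                    }
                    (other, _) => other, // duplicate entry for the same IP version
                };
            })
            .or_insert_with(|| match addr {
                SocketAddr::V4(v4) => RelaySocket::V4(v4),
                SocketAddr::V6(v6) => RelaySocket::V6(v6),
            });
    }
    relays
}

fn main() {
    let entries = vec![
        (1, "10.0.0.1:3478".parse().unwrap()),
        (1, "[2001:db8::1]:3478".parse().unwrap()),
    ];
    println!("{:?}", group_relays(entries));
}
```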
Previously, the first action of an `Allocation` was to send an
`ALLOCATE` request. A naive approach would be to simply send an
`ALLOCATE` request to both IPs. In case the client / gateway has a
properly configured IPv4 and IPv6 address, both of these will succeed!
Which one should we pick?
To avoid this problem, we don't send an `ALLOCATE` but a `BINDING`
request instead. `BINDING` requests don't have side-effects and just
return the observed address (this is commonly known as STUN). Once the
responses for the `BINDING` requests come back, we can deterministically
choose a socket to use for sending an `ALLOCATE` request. In particular,
we simply pick the response that comes back first! A successful `BINDING`
request means the network path is working, so we can also use it for the
`ALLOCATE`. In case both requests are answered, we record both responses
as server-reflexive candidates.
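The first-response-wins selection might look like this (a sketch with illustrative names, not the actual `Allocation` implementation):

```rust
use std::net::SocketAddr;

enum Selected {
    Undecided,
    Chosen(SocketAddr),
}

struct Allocation {
    selected: Selected,
}

impl Allocation {
    fn new() -> Self {
        Allocation { selected: Selected::Undecided }
    }

    /// Called when a BINDING (STUN) response arrives from `server`.
    /// Returns `true` if this path was just selected for the ALLOCATE.
    fn handle_binding_response(&mut self, server: SocketAddr) -> bool {
        match self.selected {
            Selected::Undecided => {
                // First response wins: this path is proven to work,
                // so use it for the ALLOCATE request too.
                self.selected = Selected::Chosen(server);
                true
            }
            // Later responses are only recorded as srflx candidates.
            Selected::Chosen(_) => false,
        }
    }
}

fn main() {
    let mut alloc = Allocation::new();
    let v6: SocketAddr = "[2001:db8::1]:3478".parse().unwrap();
    let v4: SocketAddr = "198.51.100.1:3478".parse().unwrap();
    assert!(alloc.handle_binding_response(v6)); // first response wins
    assert!(!alloc.handle_binding_response(v4)); // srflx candidate only
}
```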
Lastly, one final change with this PR is that we stop filtering the
relays returned by the portal based on the sockets that we have locally.
When a client roams, we may experience any combination of available
network interfaces (dual stack, IPv4 only and IPv6 only). Thus, it is
important that we always attempt to reach all relays over all network
paths and simply give up if we don't receive a response. Pre-filtering
relays based on the sockets that we currently have may leave us without
relays if we e.g. roam from an IPv4-only to an IPv6-only network. A
consequence of this design is that we might see a few more warnings in
the logs in case the client's / gateway's interface doesn't support a
particular IP version. The warnings read something like:
```
2024-04-23T07:09:05.209212Z WARN connlib_client_shared::eventloop: Tunnel error: failed send packet to 35.197.175.154:3478: no IPv4 socket
```
Resolves: #4726.
The sidebar was missing a conditional check when displaying the API
Clients link. This was only a bug in the sidebar UI as visiting the
actual API clients URL path showed a `404` as expected when the REST API
feature was disabled.
In https://github.com/firezone/firezone/pull/4537, we fixed a bug that
made an `Allocation` busy-loop with invalid credentials. There is no
point in keeping invalid credentials around, so with this PR, we clear
the credentials and free the memory associated with the `Allocation`.
This is another safeguard against these kinds of busy-loops, and it also
reduces the memory footprint of very long-running services.
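The idea in miniature (illustrative types, not the actual `Allocation` code):

```rust
struct Credentials {
    username: String,
    password: String,
}

struct Allocation {
    credentials: Option<Credentials>,
}

impl Allocation {
    /// Called when the relay rejects our credentials permanently.
    fn handle_auth_failure(&mut self) {
        // Clearing the credentials frees their memory and guarantees we
        // never busy-loop retrying with credentials we know are invalid.
        self.credentials = None;
    }

    fn can_authenticate(&self) -> bool {
        self.credentials.is_some()
    }
}

fn main() {
    let mut allocation = Allocation {
        credentials: Some(Credentials {
            username: "expired-user".to_string(),
            password: "expired-pass".to_string(),
        }),
    };
    allocation.handle_auth_failure();
    println!("can authenticate: {}", allocation.can_authenticate());
}
```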
Bumps the cargo group in /rust with 1 update:
[rustls](https://github.com/rustls/rustls).
Updates `rustls` from 0.22.3 to 0.22.4
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ae277befb5"><code>ae277be</code></a>
Prepare 0.22.4</li>
<li><a
href="5374108df6"><code>5374108</code></a>
complete_io: bail out if progress is impossible</li>
<li><a
href="00e695d68d"><code>00e695d</code></a>
Regression test for <code>complete_io</code> infinite loop bug</li>
<li><a
href="0c6cd7ef68"><code>0c6cd7e</code></a>
Don't specially handle unauthenticated close_notify alerts</li>
<li>See full diff in <a
href="https://github.com/rustls/rustls/compare/v/0.22.3...v/0.22.4">compare
view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/firezone/firezone/network/alerts).
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [time](https://github.com/time-rs/time) from 0.3.34 to 0.3.36.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/time-rs/time/releases">time's
releases</a>.</em></p>
<blockquote>
<h2>v0.3.36</h2>
<p>See the <a
href="https://github.com/time-rs/time/blob/main/CHANGELOG.md">changelog</a>
for details.</p>
<h2>v0.3.35</h2>
<p>See the <a
href="https://github.com/time-rs/time/blob/main/CHANGELOG.md">changelog</a>
for details.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/time-rs/time/blob/main/CHANGELOG.md">time's
changelog</a>.</em></p>
<blockquote>
<h2>0.3.36 [2024-04-10]</h2>
<h3>Fixed</h3>
<ul>
<li><code>FormatItem</code> can be used as part of an import path. See
<a href="https://redirect.github.com/time-rs/time/issues/675">#675</a>
for details.</li>
</ul>
<p><a
href="https://redirect.github.com/time-rs/time/issues/675">#675</a>: <a
href="https://redirect.github.com/time-rs/time/issues/675">time-rs/time#675</a></p>
<h2>0.3.35 [2024-04-10]</h2>
<h3>Added</h3>
<ul>
<li><code>Duration::checked_neg</code></li>
<li><code>ext::InstantExt</code>, which provides methods for using
<code>time::Duration</code> with <code>std::time::Instant</code></li>
</ul>
<h3>Changed</h3>
<ul>
<li><code>Instant</code> is deprecated. It is recommended to use
<code>std::time::Instant</code> directly, importing
<code>time::ext::InstantExt</code> for interoperability with
<code>time::Duration</code>.</li>
<li><code>FormatItem</code> has been renamed to
<code>BorrowedFormatItem</code>, avoiding confusion with
<code>OwnedFormatItem</code>.
An alias has been added for backwards compatibility.</li>
</ul>
<h3>Fixed</h3>
<ul>
<li>The weekday is optional when parsing RFC2822.</li>
<li>The range of sub-second values in <code>Duration</code> is
documented correctly. The previous documentation
contained an off-by-one error.</li>
<li>Leap seconds are now correctly handled when parsing ISO 8601.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="3c3c546a66"><code>3c3c546</code></a>
<code>pub use</code> instead of <code>pub type</code> re-exporting</li>
<li><a
href="266178da67"><code>266178d</code></a>
Update code coverage CI</li>
<li><a
href="131049ea15"><code>131049e</code></a>
v0.3.35 release</li>
<li><a
href="9c15ee3466"><code>9c15ee3</code></a>
Permit leap seconds when parsing ISO 8601</li>
<li><a
href="d279d8d38f"><code>d279d8d</code></a>
Fix invalid offset hour diagnostic test</li>
<li><a
href="f04a28feec"><code>f04a28f</code></a>
Eliminate unreachable branch</li>
<li><a
href="06a096d821"><code>06a096d</code></a>
Rename <code>FormatItem</code> to <code>BorrowedFormatItem</code></li>
<li><a
href="fd664eef0d"><code>fd664ee</code></a>
Include diagnostics regression</li>
<li><a
href="b8d09a7bcc"><code>b8d09a7</code></a>
Address nightly lints</li>
<li><a
href="330865ac90"><code>330865a</code></a>
Update deny.toml</li>
<li>Additional commits viewable in <a
href="https://github.com/time-rs/time/compare/v0.3.34...v0.3.36">compare
view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Reactor Scram <ReactorScram@users.noreply.github.com>
Considered using Elixir and Rust to write the tests.
For Elixir, `wallaby` doesn't seem to have a way to attach to an
existing `chromium` instance; it launches a new one each time, which
makes it hard to coordinate with the relay restart.
For Rust, we considered `thirtyfour`, which would be very nice since we
could test both Firefox and Chrome, but each time it connects to the
instance it launches a new session, making it hard to test the DNS cache
behavior.
We also considered `chrome_headless` for Rust. It needs a small patch to
prevent it from closing the browser on `Drop`, but it still presents a
problem: there is no easy way to retrieve whether loading a page has
succeeded. There are some workarounds, such as retrieving the title, but
after some testing they turned out to be quite finicky and we don't want
that for CI.
So I ended up settling for TypeScript but I'm open to other options, or
a fix for the previous ones!
There are still some modifications incoming for this PR: the test name
needs work, and the sleep in the middle of the test doesn't look good,
so I will probably add some retries. The gist is here though; I will
keep it in draft until we expect it to be passing.
So feel free to do some initial reviews.
Note: the number of lines changed is greatly exaggerated by
`package.lock`
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil Bou Kheir <jamilbk@users.noreply.github.com>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
Whenever we receive a `relays_presence` message from the portal, we
invalidate the candidates of all now disconnected relays and make
allocations on the new ones. This triggers signalling of new candidates
to the remote party and migrates the connection to the newly nominated
socket.
This still relies on #4613 until we have #4634.
Resolves: #4548.
---------
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
As a result of moving all logic into `ClientState` and `GatewayState`,
the concrete types of `Peer` are statically known everywhere. Thus, we
can remove this abstraction layer and directly store a `ClientOnGateway`
and `GatewayOnClient` struct in the `PeerStore`.
This makes code-navigation and reasoning easier because one can directly
jump to the function that is being called.
Resolves: #4224.
```[tasklist]
# Before merging
- [x] Remove file extension `.txt`
- [x] Wait for `linux-group` test to go green on `main` (#4692)
- [x] *all* compatibility tests must be green on this branch
```
Closes #4664, closes #4665
~~The compatibility tests are expected to fail until the next release is
cut, for the same reasons as in #4686~~
The compatibility test must be handled somehow, otherwise it'll turn
main red.
`linux-group` was moved out of integration / compatibility testing, but
the DNS tests do need the whole Docker + portal setup, so that one can't
move.
---------
Signed-off-by: Reactor Scram <ReactorScram@users.noreply.github.com>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
"User-Guides" isn't a great name. "End-user guides" is a tiny bit better
-- the goal for this was to have something an admin could distribute to
their end-users during onboarding.
Also, I tried to clarify that only SSO+sync requires the Enterprise tier
for Google/Okta/Entra.
With the introduction of `snownet`, we temporarily duplicated the
`IpPacket` abstraction from `firezone-tunnel` because there was no
common place to put it. Over time, these have grown in size, and we
needed to convert back and forth between them. Lately, we've also been adding
more tests to both `snownet` and `firezone-tunnel` that needed to create
`IpPacket`s as test data.
This seems like an appropriate time to do away with this duplication by
introducing a dedicated crate that acts as a facade for the
`pnet_packet` crate, extending it with the functionality that we need.
Resolves: #3926.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>