Commit Graph

505 Commits

Author SHA1 Message Date
Reactor Scram
4104d679cd refactor(windows): Add context to errors, add SAFETY comments, update TODOs (#3517)
The only semantic changes are:
- Add context to Windows errors
- Refactor some `bail!`'s that could be `context`'s

The rest is updating comments:
- Add `SAFETY: TODO` for unmarked unsafe blocks
- Elaborate on existing SAFETY comments 
- Close completed TODOs
- Link in GitHub issues for open TODOs
- Mark invariants or inter-dependencies between files that aren't
captured by tests or types yet

---------

Signed-off-by: Reactor Scram <ReactorScram@users.noreply.github.com>
2024-02-01 21:09:32 +00:00
Reactor Scram
5b041e3122 fix(windows): install and load wintun.dll from a well-known path instead of setting the current directory (#3430)
closes #3425

```[tasklist]
- [x] Switch to connlib-shared for BUNDLE_ID and stuff
- [x] Break out small things into other PRs if possible
- [x] Fix merge conflicts
```

---------

Signed-off-by: Reactor Scram <ReactorScram@users.noreply.github.com>
Co-authored-by: Gabi <gabrielalejandro7@gmail.com>
2024-02-01 19:16:19 +00:00
dependabot[bot]
d7ee1ebe88 build(deps-dev): Bump @types/node from 18.19.8 to 20.11.15 in /rust/windows-client (#3502)
Bumps
[@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node)
from 18.19.8 to 20.11.15.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=@types/node&package-manager=npm_and_yarn&previous-version=18.19.8&new-version=20.11.15)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-01 17:53:45 +00:00
Reactor Scram
9cb433dcc9 Reactorscram/fix webview2 crash (#3464)
Closes #3451 

I can't get it to log, because the file logger is destroyed when Tauri
bails, but it shows an error dialog and prints to stderr. Unfortunately
the error dialogs don't have selectable text, but oddly you _can_ press
Ctrl+C on them to copy this:

```
---------------------------
Firezone Error
---------------------------
Firezone cannot start because WebView2 is not installed. Follow the instructions at <https://www.firezone.dev/kb/user-guides/windows-client>.
---------------------------
OK   
---------------------------
```

I don't know where these numbers should go in the docs:

- 1 minute 30 seconds to install Firezone from MSI (including WebView2
download) on 80 Mbps wired Internet, with 2 CPU cores and 8 GB of RAM
allocated to the VM
- 2 minutes with 1 CPU core and 2 GB of RAM on the VM


![image](https://github.com/firezone/firezone/assets/13400041/8ebe6d62-e619-47a9-96ab-a43a0b0a53a8)
2024-02-01 17:13:37 +00:00
Andrew Dryga
a211f96109 feat(portal): Broadcast state changes to connected clients and gateways (#2240)
# Gateways
- [x] When Gateway Group is deleted all gateways should be disconnected
- [x] When Gateway Group is updated (eg. routing) broadcast to all
affected gateway to disconnect all the clients
- [x] When Gateway is deleted it should be disconnected
- [x] When Gateway Token is revoked all gateways that use it should be
disconnected

# Relays
- [x] When Relay Group is deleted all relays should be disconnected
- [x] When Relay is deleted it should be disconnected
- [x] When Relay Token is revoked all gateways that use it should be
disconnected

# Clients
- [x] Remove Delete Client button, show clients using the token on the
Actors page (#2669)
- [x] When client is deleted disconnect it
- [ ] ~When Gateway is offline, broadcast its status to the Clients
connected to it~
- [x] Persist `last_used_token_id` in Clients and show it in tokens UI

# Resources
- [x] When Resource is deleted it should be removed from all gateways
and clients
- [x] When a Resource connection is removed, it should be deleted from
the removed gateway groups
- [x] When Resource is updated (eg. traffic filters) all of its
authorizations should be removed

# Authentication
- [x] When Token is deleted related sessions are terminated
- [x] When an Actor is deleted or disabled it should be disconnected
from browser and client
- [x] When Identity is deleted its sessions should be disconnected from
browser and client
- [x] ^ Ensure the same happens for identities during IdP sync
- [x] When IdP is disabled act like all actors for it are disabled?
- [x] When IdP is deleted act like all actors for it are deleted?

# Authorization
- [x] When Policy is created clients that gain access to a resource
should get an update
- [x] When Policy is deleted we need to remove all authorizations it has made
- [x] When Policy is disabled we need to remove all authorizations it has made
- [x] When Actor Group adds or removes a user, related policies should
be re-evaluated
- [x] ^ Ensure the same happens for identities during IdP sync

# Settings
- [x] Re-send init message to Client when DNS settings change

# Code
- [x] Clear way to see all available topics and messages, do not use
binary topics any more

---------

Co-authored-by: conectado <gabrielalejandro7@gmail.com>
2024-02-01 11:02:13 -06:00
Thomas Eizinger
71afc6d9ff fix(snownet): don't try to allocate a new channel if we already have one (#3476)
Currently, we always try to allocate a channel when the user calls
`bind_channel`. This is a problem if we try to re-connect to a peer. The
channel binding will still be active so `bind_channel` needs to be a
no-op.

Resolves: #3475.
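The no-op behaviour described above could be sketched as follows. This is a minimal illustration, not the actual `snownet` code; the `Allocation` struct, field names, and the starting channel number are assumptions (0x4000 is the start of the TURN channel-number range):

```rust
use std::collections::HashMap;
use std::net::SocketAddr;

// Hypothetical sketch: `bind_channel` only requests a new binding if no
// channel to that peer exists yet; re-connecting to a peer is a no-op.
struct Allocation {
    channels: HashMap<SocketAddr, u16>, // peer -> channel number
    next_channel: u16,
}

impl Allocation {
    fn new() -> Self {
        Allocation { channels: HashMap::new(), next_channel: 0x4000 }
    }

    /// Returns `Some(channel)` if a new binding was requested,
    /// `None` if the peer already has one (the no-op case).
    fn bind_channel(&mut self, peer: SocketAddr) -> Option<u16> {
        if self.channels.contains_key(&peer) {
            return None; // binding still active, do nothing
        }
        let channel = self.next_channel;
        self.next_channel += 1;
        self.channels.insert(peer, channel);
        Some(channel)
    }
}

fn main() {
    let mut alloc = Allocation::new();
    let peer: SocketAddr = "10.0.0.1:5000".parse().unwrap();
    assert_eq!(alloc.bind_channel(peer), Some(0x4000));
    assert_eq!(alloc.bind_channel(peer), None); // second call is a no-op
}
```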
2024-02-01 04:16:28 +00:00
Thomas Eizinger
49ceb8ae83 fix(snownet): don't use unbound channels for relaying (#3474)
Currently, the `bound` flag is not considered when attempting to relay
data. This isn't actively harmful, because the relay drops such packets,
but it causes warnings in the logs. This PR adds a check so that we only
try to relay data via channels that are bound. Additionally, we now
handle failed channel bind requests by clearing the local state.
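The two fixes could look roughly like this. A hedged sketch only, with hypothetical `Channel` and `Binding` types standing in for whatever the real state machine uses:

```rust
// Hypothetical sketch: only relay over channels whose binding was
// confirmed, and clear local state when a bind request fails.
struct Channel {
    bound: bool, // set once the relay confirms the ChannelBind request
}

struct Binding {
    channel: Option<Channel>,
}

impl Binding {
    fn can_relay(&self) -> bool {
        self.channel.as_ref().map_or(false, |c| c.bound)
    }

    fn on_bind_response(&mut self, success: bool) {
        if success {
            if let Some(c) = self.channel.as_mut() {
                c.bound = true;
            }
        } else {
            // Failed bind: forget the channel so a fresh bind can be attempted.
            self.channel = None;
        }
    }
}

fn main() {
    let mut b = Binding { channel: Some(Channel { bound: false }) };
    assert!(!b.can_relay()); // not yet confirmed: don't relay, no warnings
    b.on_bind_response(true);
    assert!(b.can_relay());
    b.on_bind_response(false);
    assert!(!b.can_relay()); // failed bind cleared the local state
}
```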
2024-02-01 02:40:01 +00:00
Thomas Eizinger
84b3ac50ca fix(relay): correctly separate channel state for different peers (#3472)
Currently, there is a bug in the relay where the channel state of
different peers overlaps because the data isn't indexed correctly by
both peers and clients.

This PR fixes this, introduces more debug assertions (this bug was
caught by one) and also adds some new-type wrappers to avoid conflating
peers with clients.
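The indexing fix and the new-type wrappers could be sketched like this. The type and field names are illustrative, not the relay's real ones:

```rust
use std::collections::HashMap;
use std::net::SocketAddr;

// Hypothetical sketch: new-type wrappers stop a client address from being
// used where a peer address is expected, and channel state is indexed by
// the (client, channel number) pair instead of the channel number alone.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct ClientSocket(SocketAddr);

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct PeerSocket(SocketAddr);

#[derive(Default)]
struct ChannelTable {
    by_client: HashMap<(ClientSocket, u16), PeerSocket>,
}

impl ChannelTable {
    fn bind(&mut self, client: ClientSocket, number: u16, peer: PeerSocket) {
        self.by_client.insert((client, number), peer);
    }

    fn peer_for(&self, client: ClientSocket, number: u16) -> Option<PeerSocket> {
        self.by_client.get(&(client, number)).copied()
    }
}

fn main() {
    let c1 = ClientSocket("1.1.1.1:1000".parse().unwrap());
    let c2 = ClientSocket("2.2.2.2:2000".parse().unwrap());
    let p1 = PeerSocket("9.9.9.9:9000".parse().unwrap());
    let mut table = ChannelTable::default();
    table.bind(c1, 0x4000, p1);
    // The same channel number from a different client no longer overlaps.
    assert_eq!(table.peer_for(c1, 0x4000), Some(p1));
    assert_eq!(table.peer_for(c2, 0x4000), None);
}
```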
2024-02-01 01:53:54 +00:00
Reactor Scram
a5a6d81eb1 refactor(windows): change some anyhow errors into thiserror errors (#3461)
This is part of handling the WebView-not-installed error, #3451
2024-02-01 01:44:26 +00:00
Reactor Scram
e35dd53649 ci(windows): Upload Windows debug symbols (#3467)
Closes #3450 

I was able to get stacktraces from a crash generated inside my VM. It
picked out the correct line in gui.rs where the crash was triggered.


![image](https://github.com/firezone/firezone/assets/13400041/1fc521a1-059c-489b-b9b8-506570a4df0f)


![image](https://github.com/firezone/firezone/assets/13400041/17e4bdd9-cd2a-477a-821a-ab23e61eadf7)
2024-02-01 01:36:10 +00:00
Reactor Scram
e2efd725e3 feat(firezone-tunnel): sort resources alphabetically (#3465)
Closes #3217. I just now noticed that one was assigned to me


![image](https://github.com/firezone/firezone/assets/13400041/106ba400-fda8-49b9-ad81-b6ced8414ea4)

The sorting is naive: it just compares the UTF-8 encoded bytes, so
lowercase resource names come after all uppercase names, and it's
probably very wrong for anything outside Latin-1 and the English locale.
If the names are identical, the resource ID is used as a tie-break.
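The byte-wise sort with an ID tie-break could be sketched like this. The `Resource` struct here is a stand-in, not connlib's real type:

```rust
// Hypothetical sketch of the naive sort: order by the UTF-8 bytes of the
// name, falling back to the resource ID on ties.
#[derive(Debug, Clone, PartialEq)]
struct Resource {
    id: u64, // stand-in for the real resource ID type
    name: String,
}

fn sort_resources(resources: &mut [Resource]) {
    // `str` comparison in Rust is lexicographic over UTF-8 bytes, so all
    // uppercase names sort before lowercase ones ('Z' is 0x5A, 'a' is 0x61).
    resources.sort_by(|a, b| a.name.cmp(&b.name).then(a.id.cmp(&b.id)));
}

fn main() {
    let mut resources = vec![
        Resource { id: 2, name: "apple".into() },
        Resource { id: 3, name: "Zebra".into() },
        Resource { id: 1, name: "Zebra".into() },
    ];
    sort_resources(&mut resources);
    let names: Vec<&str> = resources.iter().map(|r| r.name.as_str()).collect();
    assert_eq!(names, ["Zebra", "Zebra", "apple"]);
    assert_eq!(resources[0].id, 1); // tie on "Zebra" broken by ID
}
```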
2024-02-01 01:14:30 +00:00
Reactor Scram
966432da5b refactor(windows): remove IPC code which is now unused (#3469) 2024-01-31 23:34:46 +00:00
Reactor Scram
5ef6e97f4d fix(windows): don't crash if the saved log filter is invalid (#3460)
Closes #3452
2024-01-31 23:01:05 +00:00
Reactor Scram
d9ac4fa443 fix(windows): CSS nit (#3463)
Before this change, some of the background was (252, 252, 252) (#fcfcfc,
bg-neutral-50) and some was #ffffff white


![image](https://github.com/firezone/firezone/assets/13400041/ebfd0488-2ee7-4790-85d2-dee86edbe272)

After this change, all the background is (248, 247, 247) (#f8f7f7,
bg-neutral-100)


![image](https://github.com/firezone/firezone/assets/13400041/22185728-aa1b-4f45-a888-74a8a4120a8d)

"Before" with exaggerated contrast: 

![image](https://github.com/firezone/firezone/assets/13400041/de63471b-48cd-4073-936b-bf5a0df888c8)
2024-01-31 20:49:17 +00:00
Jamil
2fba4406a6 fix(windows): Take the default button shade darker a notch (#3462)
<img width="688" alt="Screenshot 2024-01-31 at 11 16 11 AM"
src="https://github.com/firezone/firezone/assets/167144/891af931-9ff5-4975-8222-027e081e7ae6">
<img width="679" alt="Screenshot 2024-01-31 at 11 16 22 AM"
src="https://github.com/firezone/firezone/assets/167144/f84f886a-f7d9-428b-9199-3214a3002682">
2024-01-31 20:14:06 +00:00
Jamil
cd1f047575 fix(connlib): handle null-termination of TUN device path string correctly (#3449)
Credit to @Intuinewin from #3445

---------

Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
2024-01-31 01:49:51 +00:00
Thomas Eizinger
ab7c947d0f fix(connection): only emit Transmit.src that correspond to local sockets (#3411)
It turns out that we need to do some post-processing of the
`Transmit.source` attribute from `str0m`. In its current state, `str0m`
may also set that to a server-reflexive address which is **not** a local
socket. There is a longer discussion around this here:
https://github.com/algesten/str0m/issues/453.

This depends on an unmerged PR in `str0m`:
https://github.com/algesten/str0m/pull/455.
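The post-processing amounts to a membership check against our local sockets. A minimal sketch, assuming a `local_src` helper that does not exist under that name in the codebase:

```rust
use std::collections::HashSet;
use std::net::SocketAddr;

// Hypothetical sketch: only accept a `Transmit` source address if it is
// actually one of our local sockets. A server-reflexive address is how
// peers see us, not a socket we can send from, so it must be rejected.
fn local_src(src: SocketAddr, local_sockets: &HashSet<SocketAddr>) -> Option<SocketAddr> {
    local_sockets.contains(&src).then_some(src)
}

fn main() {
    let mut local = HashSet::new();
    let bound: SocketAddr = "192.168.0.2:5000".parse().unwrap();
    local.insert(bound);

    // Our bound socket passes through unchanged.
    assert_eq!(local_src(bound, &local), Some(bound));

    // A server-reflexive candidate is filtered out.
    let reflexive: SocketAddr = "203.0.113.7:5000".parse().unwrap();
    assert_eq!(local_src(reflexive, &local), None);
}
```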
2024-01-31 01:42:47 +00:00
Thomas Eizinger
3f8c6cb6eb feat(relay): allow channel bindings to IPv6 addresses (#3434)
Previously, we still had a hard-coded rule in the relay that would not
allow us to relay to an IPv6 peer. We can remove that and properly check
this based on the allocated addresses.

Resolves: #3405.
2024-01-31 00:36:54 +00:00
Thomas Eizinger
6a33516460 feat(connection): rebrand to snownet (#3435)
`firezone-connection` was a working title that I never really quite
liked. Here is a proposal to rebrand it to `snownet`. That is a lot more
concise and derives from the fact that we are establishing a network of
connections using ICE.
2024-01-31 00:54:00 +00:00
Reactor Scram
6c16d795e9 docs(windows): Update docs for Windows VM testing / resetting files Firezone creates (#3448) 2024-01-30 22:31:46 +00:00
Reactor Scram
f2f8464f02 fix(windows): use a well-known path for the crash handler socket (#3444)
I didn't notice that the socket is a Unix domain socket, and not a named
pipe, so it shows up in the normal Windows filesystem.

Since I'm trying to get rid of the `set_current_dir` call at startup,
this needs to use a well-known path instead of a relative path.
(https://github.com/firezone/firezone/pull/3430/files#diff-8ee58783aeb973dcbf764b93d3038dd0133d981cc0caae8c5429020eb002a52eL62)

So I stuck it in `%LOCALAPPDATA%/data/`.


![image](https://github.com/firezone/firezone/assets/13400041/85335b3f-064f-4c8d-be50-3f4e98b9302c)

I manually tested and made sure that the crash dump is written when we
pass `--crash-on-purpose`, so the client and server are able to reach
each other correctly.

---------

Signed-off-by: Reactor Scram <ReactorScram@users.noreply.github.com>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
2024-01-30 21:28:18 +00:00
Reactor Scram
9096eee396 feat(windows): enable crash handling on release builds (#3441)
Since #3263 is closed, we can enable crash handling for release builds
on Windows, too.
This should get rid of a dead code warning in CI:


![image](https://github.com/firezone/firezone/assets/13400041/7a6cd0ed-5943-4fa5-a23f-1426aa438f51)

Signed-off-by: Reactor Scram <ReactorScram@users.noreply.github.com>
2024-01-30 21:17:25 +00:00
Reactor Scram
aa25a46b72 refactor(windows): handle tray menu events in the main loop (#3446)
Closes #2983
2024-01-30 20:55:56 +00:00
Reactor Scram
9078b72e9b refactor(windows): use 'use' statements better in crash handling (#3442) 2024-01-30 20:43:43 +00:00
Thomas Eizinger
9f7080b669 feat(connection): allocate IPv6 address (#3436)
Resolve the two TODOs mentioned in the code. As part of #3399, we are
now correctly handling different combinations of available sockets and
requested addresses in the relay more gracefully. In particular, we
return whatever addresses we could allocate and only fail if we couldn't
allocate any at all.

The `Allocation` struct will extract whatever allocated addresses are
present in the response. Thus, it is safe for us to **always** request
both an IPv4 and an IPv6 address. A relay that only operates on one of
them will just return that one address.

Resolves: #3406.
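The "keep whatever we got, fail only on nothing" rule could be sketched as follows. The struct and constructor here are illustrative, not the real `Allocation` API:

```rust
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};

// Hypothetical sketch: always request both address families, keep
// whichever relayed addresses the response contains, and only treat a
// completely empty allocation as an error.
struct Allocation {
    ip4: Option<Ipv4Addr>,
    ip6: Option<Ipv6Addr>,
}

impl Allocation {
    fn from_response(addresses: &[IpAddr]) -> Result<Self, &'static str> {
        let mut ip4 = None;
        let mut ip6 = None;
        for addr in addresses {
            match addr {
                IpAddr::V4(a) => ip4 = Some(*a),
                IpAddr::V6(a) => ip6 = Some(*a),
            }
        }
        if ip4.is_none() && ip6.is_none() {
            return Err("relay did not allocate any address");
        }
        Ok(Allocation { ip4, ip6 })
    }
}

fn main() {
    // An IPv4-only relay just returns the one address it could allocate.
    let alloc = Allocation::from_response(&["127.0.0.1".parse().unwrap()]).unwrap();
    assert!(alloc.ip4.is_some());
    assert!(alloc.ip6.is_none());
    // Only a totally empty allocation is a failure.
    assert!(Allocation::from_response(&[]).is_err());
}
```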
2024-01-30 17:28:17 +00:00
Thomas Eizinger
e02aa2eb1f chore(relay): update docs in regards to spec-compliance (#3437) 2024-01-30 17:27:38 +00:00
Reactor Scram
f23e77e412 refactor(windows): set absolute paths for logs and wintun.dll (#3428)
This is part of fixing #3425 

Until now I changed the app's working directory into our %LOCALAPPDATA%
folder and then used relative paths.

But this causes two problems:
- Passing `.\wintun.dll` when loading the DLL can cause Windows to
search for the DLL. We don't want it to search, we want to put the DLL
in one place and make sure it uses that, since that's the version we'll
be updating
- It means I've been using the app's current working dir as de-facto
global mutable state

The reason I could not fix it sooner is that it needed the bundle ID to
be available before Tauri starts.

---------

Signed-off-by: Reactor Scram <ReactorScram@users.noreply.github.com>
Co-authored-by: Gabi <gabrielalejandro7@gmail.com>
2024-01-30 16:34:37 +00:00
Thomas Eizinger
2bad4c617f feat(phoenix-channel): reconnect on missed heartbeat from portal (#3410)
In case the portal does not reply to our heartbeat when we are about to
send the next one, we try to reconnect. For now, this affects only the
relay and the gateway but will be used in the clients in the future too.

Resolves: #2916.
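The rule "reconnect if the previous heartbeat is still unanswered when the next one is due" could be sketched as a small state machine. The names here are assumptions, not `phoenix-channel`'s real API:

```rust
use std::time::{Duration, Instant};

// Hypothetical sketch: a pending heartbeat that is still unacknowledged
// when the next one is due triggers a reconnect instead of another send.
struct Heartbeat {
    interval: Duration,
    sent_at: Option<Instant>, // pending heartbeat; None once acknowledged
}

#[derive(Debug, PartialEq)]
enum Action {
    Send,
    Reconnect,
    Wait,
}

impl Heartbeat {
    fn poll(&mut self, now: Instant) -> Action {
        match self.sent_at {
            // Previous heartbeat still outstanding and the next is due.
            Some(sent) if now >= sent + self.interval => Action::Reconnect,
            Some(_) => Action::Wait,
            None => {
                self.sent_at = Some(now);
                Action::Send
            }
        }
    }

    fn on_reply(&mut self) {
        self.sent_at = None;
    }
}

fn main() {
    let interval = Duration::from_secs(30);
    let mut hb = Heartbeat { interval, sent_at: None };
    let t0 = Instant::now();
    assert_eq!(hb.poll(t0), Action::Send);
    hb.on_reply(); // portal answered in time
    assert_eq!(hb.poll(t0 + interval), Action::Send);
    // No reply: by the time the next heartbeat is due, we reconnect instead.
    assert_eq!(hb.poll(t0 + 2 * interval), Action::Reconnect);
}
```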
2024-01-30 03:14:26 +00:00
Reactor Scram
471729c73d fix(windows): Show "Signing in..." menu during auto-sign-in (#3431)
closes #3403 

Given the token is saved on disk, when we start Firezone, then the menu
will show "Signing in..." while connlib connects.
2024-01-30 01:01:44 +00:00
dependabot[bot]
3948470539 build(deps): Bump serde from 1.0.195 to 1.0.196 in /rust (#3421)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.195 to
1.0.196.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/serde-rs/serde/releases">serde's
releases</a>.</em></p>
<blockquote>
<h2>v1.0.196</h2>
<ul>
<li>Improve formatting of &quot;invalid type&quot; error messages
involving floats (<a
href="https://redirect.github.com/serde-rs/serde/issues/2682">#2682</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ede9762a58"><code>ede9762</code></a>
Release 1.0.196</li>
<li><a
href="d438c2d67b"><code>d438c2d</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/serde/issues/2682">#2682</a>
from dtolnay/decimalpoint</li>
<li><a
href="bef110b92a"><code>bef110b</code></a>
Format Unexpected::Float with decimal point</li>
<li><a
href="b971ef11d1"><code>b971ef1</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/serde/issues/2681">#2681</a>
from dtolnay/workspacedeps</li>
<li><a
href="29d9f69399"><code>29d9f69</code></a>
Fix workspace.dependencies default-features future compat warning</li>
<li><a
href="aecb4083bd"><code>aecb408</code></a>
Sort workspace dependencies</li>
<li><a
href="1c675ab3a3"><code>1c675ab</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/serde/issues/2678">#2678</a>
from rodoufu/workspaceDependencies</li>
<li><a
href="dd619630a3"><code>dd61963</code></a>
Adding workspace dependencies</li>
<li><a
href="111803ab07"><code>111803a</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/serde/issues/2673">#2673</a>
from Sky9x/msrv-badge</li>
<li><a
href="0024f74f34"><code>0024f74</code></a>
Use shields.io's MSRV badges</li>
<li>See full diff in <a
href="https://github.com/serde-rs/serde/compare/v1.0.195...v1.0.196">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=serde&package-manager=cargo&previous-version=1.0.195&new-version=1.0.196)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-29 21:36:28 +00:00
Reactor Scram
3d8ed7f10e fix(windows): move crash dumps into logs dir so they get exported in the zip, closes #3263 (#3426)
![image](https://github.com/firezone/firezone/assets/13400041/0fffa284-4c37-4b35-a0cf-841442246c66)


![image](https://github.com/firezone/firezone/assets/13400041/2d275cba-5d1f-4289-9214-05804f4477c6)

Screenshots taken on 33737b3a38f7
I also refactored some of the related code.
2024-01-29 19:02:16 +00:00
dependabot[bot]
0a01d6c03f build(deps): Bump keyring from 2.3.1 to 2.3.2 in /rust (#3419)
Bumps [keyring](https://github.com/hwchen/keyring-rs) from 2.3.1 to
2.3.2.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/hwchen/keyring-rs/commits">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=keyring&package-manager=cargo&previous-version=2.3.1&new-version=2.3.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-29 18:52:44 +00:00
Thomas Eizinger
c9834ee8ee feat(relay): print stats every 10s (#3408)
In #3400, a discussion started on what the correct log level would be
for the production relay. Currently, the relay logs some stats about
each packet on debug, i.e. where it came from, where it is going to and
how big it is. This isn't very useful in production though and will fill
up our log disk quickly.

This PR introduces a stats timer like we already have it in other
components. We print the number of allocations, how many channels we
have and how much data we relayed over all these channels since we last
printed. The interval is currently set to 10 seconds.
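The "data relayed since we last printed" computation boils down to a counter reset on every tick. A minimal sketch, with a hypothetical `Stats` struct rather than the relay's actual one:

```rust
use std::time::{Duration, Instant};

// Hypothetical sketch: accumulate relayed bytes and convert them into a
// rate over the elapsed interval when the stats timer fires.
struct Stats {
    relayed_bytes: u64,
    last_print: Instant,
}

impl Stats {
    fn record(&mut self, bytes: usize) {
        self.relayed_bytes += bytes as u64;
    }

    /// Bytes per second since the last time the stats were printed.
    fn throughput(&mut self, now: Instant) -> f64 {
        let elapsed = now.duration_since(self.last_print).as_secs_f64();
        let rate = self.relayed_bytes as f64 / elapsed;
        self.relayed_bytes = 0; // reset for the next interval
        self.last_print = now;
        rate
    }
}

fn main() {
    let start = Instant::now();
    let mut stats = Stats { relayed_bytes: 0, last_print: start };
    stats.record(5_000);
    stats.record(5_000);
    // 10 kB over a 10 s interval is 1 kB/s.
    let rate = stats.throughput(start + Duration::from_secs(10));
    assert!((rate - 1_000.0).abs() < 1e-9);
}
```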

Here is what this output could look like (captured locally using
`relay/run_smoke_test.sh`, although slightly tweaked, printing every 2s,
using release mode and larger packets on the clients):

```
2024-01-26T05:01:02.445555Z  INFO relay: Seeding RNG from '0'
2024-01-26T05:01:02.445580Z  WARN relay: No portal token supplied, starting standalone mode
2024-01-26T05:01:02.445827Z  INFO relay: Listening for incoming traffic on UDP port 3478
2024-01-26T05:01:02.447035Z  INFO Eventloop::poll: relay: num_allocations=0 num_channels=0 throughput=0.00 B/s
2024-01-26T05:01:02.649194Z  INFO Eventloop::poll:handle_client_input{sender=127.0.0.1:39092 transaction_id="8f20177512495fcb563c60de" allocation=AID-1}: relay: Created new allocation first_relay_address=127.0.0.1 lifetime=600s
2024-01-26T05:01:02.650744Z  INFO Eventloop::poll:handle_client_input{sender=127.0.0.1:39092 transaction_id="6445943a353d5e8c262a821f" allocation=AID-1 peer=127.0.0.1:41094 channel=16384}: relay: Successfully bound channel
2024-01-26T05:01:04.446317Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=631.54 MB/s
2024-01-26T05:01:06.446319Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=698.73 MB/s
2024-01-26T05:01:08.446325Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=708.98 MB/s
2024-01-26T05:01:10.446324Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=690.79 MB/s
2024-01-26T05:01:12.446316Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=715.53 MB/s
2024-01-26T05:01:14.446315Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=706.90 MB/s
2024-01-26T05:01:16.446313Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=712.03 MB/s
2024-01-26T05:01:18.446319Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=717.54 MB/s
2024-01-26T05:01:20.446316Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=690.74 MB/s
2024-01-26T05:01:22.446313Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=705.08 MB/s
2024-01-26T05:01:24.446311Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=700.41 MB/s
2024-01-26T05:01:26.446319Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=717.57 MB/s
2024-01-26T05:01:28.446320Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=688.82 MB/s
2024-01-26T05:01:30.446329Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=696.35 MB/s
2024-01-26T05:01:32.446317Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=724.03 MB/s
2024-01-26T05:01:34.446320Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=713.46 MB/s
2024-01-26T05:01:36.446314Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=716.13 MB/s
2024-01-26T05:01:38.446327Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=687.16 MB/s
2024-01-26T05:01:40.446315Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=708.20 MB/s
2024-01-26T05:01:42.446314Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=689.36 MB/s
2024-01-26T05:01:44.446314Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=698.62 MB/s
2024-01-26T05:01:46.446315Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=696.21 MB/s
2024-01-26T05:01:48.446378Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=696.36 MB/s
2024-01-26T05:01:50.446314Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=709.47 MB/s
2024-01-26T05:01:52.446319Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=714.48 MB/s
2024-01-26T05:01:54.446323Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=690.71 MB/s
2024-01-26T05:01:56.446313Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=692.70 MB/s
2024-01-26T05:01:58.446321Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=687.87 MB/s
2024-01-26T05:02:00.446316Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=682.11 MB/s
2024-01-26T05:02:02.446312Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=700.07 MB/s
```
2024-01-29 18:52:15 +00:00
Reactor Scram
b521fbcf90 feat(windows): UI notification for reauth (#3329) (#3416)
```[tasklist]
- [x] Smoke-test MSI from CI
- [x] Move out of drafts
- [x] Phrasing
```
2024-01-29 16:31:30 +00:00
dependabot[bot]
5b2ef7a326 build(deps): Bump winreg from 0.51.0 to 0.52.0 in /rust (#3420)
Bumps [winreg](https://github.com/gentoo90/winreg-rs) from 0.51.0 to
0.52.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/gentoo90/winreg-rs/releases">winreg's
releases</a>.</em></p>
<blockquote>
<h2>0.52.0 (windows-rs)</h2>
<ul>
<li>Breaking change: <code>.commit()</code> and <code>.rollback()</code>
now consume the transaction (<a
href="https://redirect.github.com/gentoo90/winreg-rs/issues/62">#62</a>)</li>
<li>Add <code>RegKey::rename_subkey()</code> method (<a
href="https://redirect.github.com/gentoo90/winreg-rs/issues/58">#58</a>)</li>
<li>Make serialization modules public (<a
href="https://redirect.github.com/gentoo90/winreg-rs/issues/59">#59</a>)</li>
<li>Fix UB in <code>FromRegValue</code> for <code>u32</code> and
<code>u64</code> (<a
href="https://redirect.github.com/gentoo90/winreg-rs/issues/61">#61</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/gentoo90/winreg-rs/blob/master/CHANGELOG.md">winreg's
changelog</a>.</em></p>
<blockquote>
<h2>0.52.0</h2>
<ul>
<li>Breaking change: <code>.commit()</code> and <code>.rollback()</code>
now consume the transaction (<a
href="https://redirect.github.com/gentoo90/winreg-rs/issues/62">#62</a>)</li>
<li>Add <code>RegKey::rename_subkey()</code> method (<a
href="https://redirect.github.com/gentoo90/winreg-rs/issues/58">#58</a>)</li>
<li>Make serialization modules public (<a
href="https://redirect.github.com/gentoo90/winreg-rs/issues/59">#59</a>)</li>
<li>Fix UB in <code>FromRegValue</code> for <code>u32</code> and
<code>u64</code> (<a
href="https://redirect.github.com/gentoo90/winreg-rs/issues/61">#61</a>)</li>
</ul>
<h2>0.14.0</h2>
<ul>
<li>Breaking change: increase MSRV to 1.34</li>
<li>Fix UB in <code>FromRegValue</code> for <code>u32</code> and
<code>u64</code> (<a
href="https://redirect.github.com/gentoo90/winreg-rs/issues/61">#61</a>)</li>
</ul>
<h2>0.13.0</h2>
<ul>
<li>Breaking change: <code>.commit()</code> and <code>.rollback()</code>
now consume the transaction (<a
href="https://redirect.github.com/gentoo90/winreg-rs/issues/62">#62</a>)</li>
<li>Add <code>RegKey::rename_subkey()</code> method (<a
href="https://redirect.github.com/gentoo90/winreg-rs/issues/58">#58</a>)</li>
<li>Make serialization modules public (<a
href="https://redirect.github.com/gentoo90/winreg-rs/issues/59">#59</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1c56127adf"><code>1c56127</code></a>
Merge branch 'winapi'. Bump version to 0.52.0</li>
<li><a
href="4b0ba3e243"><code>4b0ba3e</code></a>
Bump version to 0.14.0</li>
<li><a
href="7634148bca"><code>7634148</code></a>
Fix UB in <code>FromRegValue</code> for <code>u32</code> and
<code>u64</code></li>
<li><a
href="9aea2ad818"><code>9aea2ad</code></a>
Bump version to 0.13.0</li>
<li><a
href="53125816b2"><code>5312581</code></a>
Fix build with rust 1.31</li>
<li><a
href="ecda4989e9"><code>ecda498</code></a>
Make serialization modules public.</li>
<li><a
href="0fd5bb8823"><code>0fd5bb8</code></a>
<code>.commit()</code> and <code>.rollback()</code> now consume the
transaction</li>
<li><a
href="fa2f1c851a"><code>fa2f1c8</code></a>
Add <code>RegKey::rename_subkey()</code> method</li>
<li>See full diff in <a
href="https://github.com/gentoo90/winreg-rs/compare/v0.51.0...v0.52.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=winreg&package-manager=cargo&previous-version=0.51.0&new-version=0.52.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-29 16:19:30 +00:00
Jamil
16f5401a73 fix(gateway): Remove /dev/net/tun requirement and clean up upgrade script (#3392)
* Clean up gateway upgrade script
* Fixes #3226 to remove another place where things can go wrong when
upgrading gateways
2024-01-29 04:19:59 +00:00
Thomas Eizinger
3ca94464b1 fix(connection): buffer channel bindings until we have an allocation (#3417)
Previously, there was a race condition where we would try to bind a
channel despite not yet having made an allocation on the relay.
2024-01-27 00:29:49 +00:00
Thomas Eizinger
76635eb336 feat(relay): print a log for error responses we send to the client (#3413) 2024-01-26 21:58:17 +00:00
Thomas Eizinger
3b8131920e chore(relay): change run_smoke_test.sh to use /usr/bin/env (#3409)
On systems like NixOS, there is no `/bin/bash` because they heavily rely
on lookups in `$PATH`. `/usr/bin/env` does such a lookup and will use
the first `bash` executable it finds to run the script.
2024-01-26 21:48:15 +00:00
Thomas Eizinger
c8eb53ab31 feat(connection): introduce keep-alive and expose last_seen (#3388)
We configure WireGuard's keep-alive to 5 seconds and patch `boringtun`
to expose the time since the last packet, which will always be <
KEEP_ALIVE (5 seconds) while the connection is intact. The policy on
what to do about missed keep-alives is pushed to the upper layer.
Instead of acting on it ourselves, we simply provide a `stats` function
that exposes the data.

An upper layer can then decide what to do in the case of missed
keep-alives.

Resolves: #3372.
2024-01-26 21:38:31 +00:00
Thomas Eizinger
8bc46eec84 fix(connection): accept relay traffic from non-listening interface (#3412)
Traffic to and from the relay can happen over a socket that we are not
listening on. We should still accept this traffic and only bail out if
we would be passing it to one of the `IceAgent`s.
2024-01-26 21:26:30 +00:00
Thomas Eizinger
b7ddbeea66 feat(connection): add a very basic connection timeout (#3397)
Adds a very basic connection timeout in case there is never a follow-up
with an `Answer`. This is mostly to prevent memory leaks. I am also
hoping that we can add many more unit tests in the same manner.
2024-01-26 00:14:11 +00:00
Thomas Eizinger
8f2e9efb21 feat(relay): improve logging and error handling (#3399)
By changing around how the fields are recorded in the tracing spans and
what the functions are called, the logs now align nicely:

Before:

```
[  relay] 2024-01-25T02:01:00.103279Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.103448Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.103627Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.103774Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.103955Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.104119Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.104303Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.104456Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.104650Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.104825Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.105015Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.105165Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.105332Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.105534Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.105739Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.105934Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.106155Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.106343Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.106534Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.106671Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.106838Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.106987Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.107148Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.107289Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.107496Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.107662Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.107846Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.108003Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.108189Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.108340Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.108529Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.108665Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.108835Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.109006Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.109193Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.109363Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.109563Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.109733Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.109919Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.110058Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.110228Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.110360Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.110510Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.110628Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.110773Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.110904Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.111049Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.111199Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.111359Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.111540Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.111735Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.111884Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.112064Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.112246Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.112422Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.112577Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.112754Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.112933Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.113109Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.113254Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.113421Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.113658Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
```

Now:

```
[  relay] 2024-01-25T01:57:42.045265Z DEBUG Eventloop::poll:handle_peer_traffic{sender=127.0.0.1:45679 allocation_id=AID-1 recipient=127.0.0.1:49137 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.045393Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:49137 allocation_id=AID-1 recipient=127.0.0.1:45679 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.045543Z DEBUG Eventloop::poll:handle_peer_traffic{sender=127.0.0.1:45679 allocation_id=AID-1 recipient=127.0.0.1:49137 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.045676Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:49137 allocation_id=AID-1 recipient=127.0.0.1:45679 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.045792Z DEBUG Eventloop::poll:handle_peer_traffic{sender=127.0.0.1:45679 allocation_id=AID-1 recipient=127.0.0.1:49137 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.045918Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:49137 allocation_id=AID-1 recipient=127.0.0.1:45679 channel=16384}: relay: Relaying 32 bytes
```
2024-01-26 00:12:47 +00:00
Thomas Eizinger
35cdf84578 feat(gateway): don't print stacktrace upon exit (#3404)
Previously, we would print the following whenever the gateway exited:

```
2024-01-25T17:37:53.258145Z  INFO init{user_agent="Alpine Linux/3.19.0 (x86_64;6.6.11;) connlib/1.0.0" login_topic="gateway"}: phoenix_channel: Connected to portal, waiting for `init` message
2024-01-25T17:37:53.260751Z  WARN init{user_agent="Alpine Linux/3.19.0 (x86_64;6.6.11;) connlib/1.0.0" login_topic="gateway"}: phoenix_channel: Fatal client error (401 Unauthorized) in portal connection: Invalid token

Error: websocket failed

Caused by:
    HTTP error: 401 Unauthorized

Stack backtrace:
   0: anyhow::error::<impl core::convert::From<E> for anyhow::Error>::from
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/anyhow-1.0.79/src/error.rs:565:25
   1: <core::result::Result<T,F> as core::ops::try_trait::FromResidual<core::result::Result<core::convert::Infallible,E>>>::from_residual
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/result.rs:1963:27
   2: firezone_gateway::run::{{closure}}
             at /build/gateway/src/main.rs:85:26
   3: <core::pin::Pin<P> as core::future::future::Future>::poll
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/future/future.rs:125:9
   4: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/task/core.rs:328:17
   5: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/loom/std/unsafe_cell.rs:16:9
   6: tokio::runtime::task::core::Core<T,S>::poll
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/task/core.rs:317:13
   7: tokio::runtime::task::harness::poll_future::{{closure}}
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/task/harness.rs:485:19
   8: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/panic/unwind_safe.rs:272:9
   9: std::panicking::try::do_call
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/std/src/panicking.rs:552:40
  10: __rust_try
  11: std::panicking::try
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/std/src/panicking.rs:516:19
  12: std::panic::catch_unwind
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/std/src/panic.rs:142:14
  13: tokio::runtime::task::harness::poll_future
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/task/harness.rs:473:18
  14: tokio::runtime::task::harness::Harness<T,S>::poll_inner
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/task/harness.rs:208:27
  15: tokio::runtime::task::harness::Harness<T,S>::poll
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/task/harness.rs:153:15
  16: tokio::runtime::task::raw::poll
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/task/raw.rs:271:5
  17: tokio::runtime::task::raw::RawTask::poll
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/task/raw.rs:201:18
  18: tokio::runtime::task::LocalNotified<S>::run
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/task/mod.rs:416:9
  19: tokio::runtime::scheduler::multi_thread::worker::Context::run_task::{{closure}}
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/scheduler/multi_thread/worker.rs:576:13
  20: tokio::runtime::coop::with_budget
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/coop.rs:107:5
  21: tokio::runtime::coop::budget
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/coop.rs:73:5
  22: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/scheduler/multi_thread/worker.rs:575:9
  23: tokio::runtime::scheduler::multi_thread::worker::Context::run
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/scheduler/multi_thread/worker.rs:526:24
  24: tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}::{{closure}}
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/scheduler/multi_thread/worker.rs:491:21
  25: tokio::runtime::context::scoped::Scoped<T>::set
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/context/scoped.rs:40:9
  26: tokio::runtime::context::set_scheduler::{{closure}}
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/context.rs:176:26
  27: std::thread::local::LocalKey<T>::try_with
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/std/src/thread/local.rs:270:16
  28: std::thread::local::LocalKey<T>::with
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/std/src/thread/local.rs:246:9
  29: tokio::runtime::context::set_scheduler
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/context.rs:176:9
  30: tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/scheduler/multi_thread/worker.rs:486:9
  31: tokio::runtime::context::runtime::enter_runtime
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/context/runtime.rs:65:16
  32: tokio::runtime::scheduler::multi_thread::worker::run
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/scheduler/multi_thread/worker.rs:478:5
  33: tokio::runtime::scheduler::multi_thread::worker::Launch::launch::{{closure}}
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/scheduler/multi_thread/worker.rs:447:45
  34: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/blocking/task.rs:42:21
  35: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/task/core.rs:328:17
  36: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/loom/std/unsafe_cell.rs:16:9
  37: tokio::runtime::task::core::Core<T,S>::poll
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/task/core.rs:317:13
  38: tokio::runtime::task::harness::poll_future::{{closure}}
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/task/harness.rs:485:19
  39: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/panic/unwind_safe.rs:272:9
  40: std::panicking::try::do_call
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/std/src/panicking.rs:552:40
  41: __rust_try
  42: std::panicking::try
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/std/src/panicking.rs:516:19
  43: std::panic::catch_unwind
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/std/src/panic.rs:142:14
  44: tokio::runtime::task::harness::poll_future
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/task/harness.rs:473:18
  45: tokio::runtime::task::harness::Harness<T,S>::poll_inner
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/task/harness.rs:208:27
  46: tokio::runtime::task::harness::Harness<T,S>::poll
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/task/harness.rs:153:15
  47: tokio::runtime::task::raw::poll
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/task/raw.rs:271:5
  48: tokio::runtime::task::raw::RawTask::poll
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/task/raw.rs:201:18
  49: tokio::runtime::task::UnownedTask<S>::run
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/task/mod.rs:453:9
  50: tokio::runtime::blocking::pool::Task::run
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/blocking/pool.rs:159:9
  51: tokio::runtime::blocking::pool::Inner::run
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/blocking/pool.rs:513:17
  52: tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}}
             at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.1/src/runtime/blocking/pool.rs:471:13
  53: std::sys_common::backtrace::__rust_begin_short_backtrace
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/std/src/sys_common/backtrace.rs:154:18
  54: std::thread::Builder::spawn_unchecked_::{{closure}}::{{closure}}
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/std/src/thread/mod.rs:529:17
  55: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/panic/unwind_safe.rs:272:9
  56: std::panicking::try::do_call
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/std/src/panicking.rs:552:40
  57: __rust_try
  58: std::panicking::try
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/std/src/panicking.rs:516:19
  59: std::panic::catch_unwind
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/std/src/panic.rs:142:14
  60: std::thread::Builder::spawn_unchecked_::{{closure}}
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/std/src/thread/mod.rs:528:30
  61: core::ops::function::FnOnce::call_once{{vtable.shim}}
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/ops/function.rs:250:5
  62: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/alloc/src/boxed.rs:2007:9
  63: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/alloc/src/boxed.rs:2007:9
  64: std::sys::unix::thread::Thread::new::thread_start
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/std/src/sys/unix/thread.rs:108:17
```

Now, we are just printing this:

```
2024-01-25T17:32:51.613258Z  INFO init{user_agent="Alpine Linux/3.19.0 (x86_64;6.6.11;) connlib/1.0.0" login_topic="gateway"}: phoenix_channel: Connected to portal, waiting for `init` message
2024-01-25T17:32:51.617971Z  WARN init{user_agent="Alpine Linux/3.19.0 (x86_64;6.6.11;) connlib/1.0.0" login_topic="gateway"}: phoenix_channel: Fatal client error (401 Unauthorized) in portal connection: Invalid token
2024-01-25T17:32:51.619680Z ERROR firezone_gateway: websocket failed: HTTP error: 401 Unauthorized
```

Resolves: #3401.
2024-01-26 00:12:32 +00:00
Thomas Eizinger
032b0a8db5 chore: update str0m dependency (#3402)
This includes https://github.com/algesten/str0m/pull/452.
2024-01-25 17:36:32 +00:00
Thomas Eizinger
85304329b9 fix(connection-tests): avoid rare flakiness of relay test (#3394)
This was a bug in my test harness, not `firezone-connection`:

For the relay test to succeed, we need to communicate all candidates
between the parties. I noticed that in the failing tests, one side did
not receive all the candidates. In particular, the `relay` candidate
was sometimes missing, which makes it impossible for the two clients to
communicate.

The candidates are communicated over Redis, and the Redis events are
retrieved in the same loop that polls the event-loop. `tokio::select!`
polls those futures simultaneously but **drops** the other one when one
of them becomes ready. If the dropped future was half-way through
receiving a candidate from Redis, that candidate is lost.

To mitigate this, we now use an `mpsc::channel` between the `Eventloop`
and a separately spawned task that can read from redis without being
interrupted.
2024-01-25 15:57:54 +00:00
Thomas Eizinger
f9f95677d5 feat: automatically rejoin channel on portal after reconnect (#3393)
In https://github.com/firezone/firezone/pull/3364, we forgot to rejoin
the channel on the portal. Additionally, I found a way to detect the
disconnect even more quickly.
2024-01-25 02:05:15 +00:00
Gabi
31f2f52d94 fix(gateway): tokio feature dependencies (#3396)
I don't know how this didn't fail in CI before
2024-01-25 01:18:28 +00:00
Reactor Scram
e7f3dedfe6 fix(windows): allow user to cancel sign-in flow (#3385)
Closes #3316 


![image](https://github.com/firezone/firezone/assets/13400041/51fe4cbb-cd2f-4ca0-aa15-8b40ea18fffd)

The cancel sign-in button uses the same code as a regular sign-out or
token expiration, which is idempotent.
2024-01-24 23:52:47 +00:00
Jamil
d469f6ad42 feat(ci): Test client gracefully handles portal and relay disconnects (#3376)
Test basic connectivity with the headless client after the portal API
restarts.

Based on top of #3364 to test that portal restarts don't cause a
cascading failure.
2024-01-24 21:04:02 +00:00