Commit Graph

107 Commits

Author SHA1 Message Date
Thomas Eizinger
be1a719e2c chore(relay): perform graceful shutdown upon receiving SIGTERM (#4552)
Upon receiving a SIGTERM, we immediately disconnect from the websocket
connection to the portal and set a flag that we are shutting down.

Once we are disconnected from the portal and no longer have any active
allocations, we exit with 0. A repeated SIGTERM signal will interrupt
this process and force the relay to shut down.

Disconnecting from the portal will (eventually) trigger a message to
clients and gateways that this relay should no longer be used. Thus,
depending on the timeout our supervisor has configured after sending
SIGTERM, the relay will continue all TURN operations until the number of
allocations drops to 0.

Currently, we still allow clients to make new allocations and to
refresh existing allocations. In the future, it may make sense to
implement a dedicated status code and refuse `ALLOCATE` and `REFRESH`
messages whilst we are shutting down.
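
For illustration, a minimal sketch of this shutdown flow (the portal disconnect and allocation tracking are stubbed out; the names are not the actual relay code):

```
use std::process::ExitCode;

use tokio::signal::unix::{signal, SignalKind};

#[tokio::main]
async fn main() -> std::io::Result<ExitCode> {
    let mut sigterm = signal(SignalKind::terminate())?;

    // First SIGTERM: disconnect from the portal so clients and gateways
    // stop being told about this relay, then drain existing allocations.
    sigterm.recv().await;
    // portal.disconnect(); // hypothetical: the real code closes the websocket here

    tokio::select! {
        _ = sigterm.recv() => {}                  // repeated SIGTERM: force shutdown
        _ = wait_for_allocations_to_drain() => {} // graceful: allocation count hit 0
    }

    Ok(ExitCode::SUCCESS)
}

async fn wait_for_allocations_to_drain() {
    // Stub: the real eventloop exits once it no longer has active allocations.
    std::future::pending::<()>().await
}
```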

Related: #4548.

---------

Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
2024-04-12 08:45:08 +00:00
Thomas Eizinger
31eec1aac7 chore(relay): connect to portal in the background during startup (#4594)
In a prior design of the relay and the `phoenix-channel`, connecting to
the portal was a blocking operation, i.e. we weren't meant to start the
relaying operations before the portal connection succeeded.

Since then, `phoenix-channel` got refactored to have an internal
(re)-connection mechanism, meaning we don't actually need to `.await`
anything to obtain a `PhoenixChannel` instance that we can use to
initialize the `Server`. Furthermore, we changed the health-check to
return 200 OK prior to the portal connection being established in #4553.

Taking both of these into account, there is no longer any need to block
on the portal connection being established, which allows us to remove
the use of `phoenix_channel::init` and connect in the background whilst
we already accept STUN & TURN traffic.
2024-04-12 03:48:09 +00:00
Thomas Eizinger
fb68e90829 chore(snownet): add unit-test for relayed connection (#4570)
This PR adds a unit-test to `snownet` that exercises all code paths that
are required for a relayed connection to work. This includes:

- Nodes make an allocation with real credentials, nonces, etc.
- Nodes exchange their ICE candidates
- Nodes bind data channels on the relay
- str0m performs ICE over these data channels
- Nodes handshake a wireguard tunnel on the nominated socket

I consider this a baseline. Once merged, I want to attempt writing a
test in #4568 that asserts migration of a connection to a new relay
without the connection expiring. At some point, we can even go further
and move these tests to `firezone-tunnel` and unit-test even more things
like connection intents etc.
2024-04-10 21:31:00 +00:00
Thomas Eizinger
03d89fec50 chore(relay): fail health-check with 400 on being partitioned for > 15min (#4553)
During the latest relay outage, we failed to send heartbeats to the
portal because we were busy-looping and never got to handle messages or
timers for the portal.

To mitigate this or similar bugs, we update an `Instant` every time we
send a heartbeat to the portal. In case we are actually
network-partitioned, this will cause the health-check to fail after 15
minutes. This value is the same as the partition timeout for the portal
connection itself[^1]. Very likely, we will never see a relay being
shut down because of a failing health check in this case, as it would have
already shut itself down.

An exception to this are bugs in the eventloop where we fail to interact
with the portal at all.
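
Sketched, under assumed names (the real wiring lives in the relay's eventloop and health-check server):

```
use std::sync::{Arc, Mutex};
use std::time::{Duration, Instant};

const PARTITION_TIMEOUT: Duration = Duration::from_secs(15 * 60);

#[derive(Clone)]
struct LastPortalContact(Arc<Mutex<Instant>>);

impl LastPortalContact {
    /// Called every time we send a heartbeat to the portal.
    fn record_heartbeat(&self) {
        *self.0.lock().unwrap() = Instant::now();
    }

    /// Consulted by the HTTP health-check: report healthy (200) only if we
    /// have talked to the portal within the partition timeout, else fail (400).
    fn is_healthy(&self) -> bool {
        self.0.lock().unwrap().elapsed() < PARTITION_TIMEOUT
    }
}
```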

Resolves: #4510.

[^1]: Previously, this was unlimited.
2024-04-10 02:05:59 +00:00
Thomas Eizinger
d92eaa30e2 chore(relay): remove stale arg (#4554)
This one slipped in as part of #4426. Originally, I intended to allow
for on-demand profiling of the relay but it didn't turn out to be
necessary.
2024-04-09 16:04:59 +00:00
Thomas Eizinger
8900e263ca refactor(relay): favor Instant over SystemTime (#4468)
This one is a bit tricky. Our auth scheme requires knowing the current
time as a UNIX timestamp, which I can only get from `SystemTime` but not
`Instant`. The `Server` is meant to be sans-IO, including the current
time, so technically I would have to pass that in as a parameter.

I ended up settling on a compromise of making the auth verification
impure and internally calling `SystemTime::now`. That results in a much
nicer API and allows us to use `Instant` for everything else, e.g.
expiry of channel bindings, allocations etc.
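
A sketch of that compromise (`Credentials` and the expiry field are illustrative stand-ins, not the actual types):

```
use std::time::{SystemTime, UNIX_EPOCH};

struct Credentials {
    expiry_unix_secs: u64,
}

impl Credentials {
    /// Impure on purpose: reads the wall clock internally so callers never
    /// have to thread a UNIX timestamp through the sans-IO `Server`.
    fn is_valid(&self) -> bool {
        let now = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("clock is after UNIX epoch")
            .as_secs();

        now < self.expiry_unix_secs
    }
}
```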

Resolves: #4464.
2024-04-08 23:37:19 +00:00
Thomas Eizinger
a1a7d925b1 chore(rust): enforce no wildcard matching (#4491)
A wildcard match was the underlying bug fixed in #4486. Despite being a
bit annoying in some cases, I think it is worth having this lint turned
on to ensure we don't wildcard match in situations where it can have bad
consequences, like `poll` functions.
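
The lint in question is presumably `clippy::wildcard_enum_match_arm`; with it denied, adding an enum variant forces a compile-time decision in every `match` instead of silently falling into a `_` arm:

```
#![deny(clippy::wildcard_enum_match_arm)]

enum Event {
    Readable,
    Writable,
}

fn handle(event: Event) {
    match event {
        Event::Readable => { /* read until `WouldBlock` */ }
        Event::Writable => { /* flush pending packets */ }
        // A `_ => {}` arm would compile but silently swallow future
        // variants; the lint rejects it.
    }
}

fn main() {
    handle(Event::Readable);
}
```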

---------

Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
2024-04-08 12:06:37 +00:00
Thomas Eizinger
4ca3a68253 Bump dependency 2024-04-08 20:26:54 +10:00
Thomas Eizinger
c036d1abe5 refactor(relay): remove heap-allocations from hotpath (#4457)
This required a mid-sized refactor of the relay's eventloop. The idea is
that we can use [`mio`](https://docs.rs/mio/latest/mio/) to do the
actual IO handling instead of `tokio`. `tokio` depends on `mio`
internally but doesn't expose its primitives. Most importantly, we don't
get access to the API where we can dynamically register file descriptors
to watch for readiness.

In order to avoid allocations on the relaying hotpath, we need to listen
on a dynamic number of sockets:

1. Our client-facing socket on port 3478
2. All sockets allocated by clients

`mio` is the building block of the async tokio runtime, hence it does
not provide async primitives itself. Instead, it blocks the current
thread it is running on and feeds you events that you need to deal with.
We still need our `tokio` runtime to register timers and for
communication with the portal. To integrate the two, we spawn a
dedicated thread for `mio::Poll` and communicate with it via channels
within the `Sockets` abstraction. Thus, the `Eventloop` itself has no
idea that `mio` is used for all the network communication.

Whenever `mio` sends us an event that a socket is ready, we try to read
from that specific socket. We must read from this socket until it
returns `WouldBlock` at which point we move on to the next event.

We only register for read-readiness. If a socket is not ready for
writing, we just drop the packet.

With this design in place, we can now have a single buffer that we read
incoming packets into and dispatch to `Server`, depending on which port
each packet was received on. A future refactoring could maybe even unify
these functions and let the `Server` deal with the ports internally.
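
Condensed, the dedicated IO thread looks roughly like this (a sketch: the real `Sockets` abstraction registers sockets dynamically and reuses one buffer instead of allocating per packet):

```
use std::io::ErrorKind;
use std::sync::mpsc::Sender;
use std::thread;

use mio::net::UdpSocket;
use mio::{Events, Interest, Poll, Token};

fn spawn_io_thread(mut socket: UdpSocket, tx: Sender<Vec<u8>>) {
    thread::spawn(move || {
        let mut poll = Poll::new().expect("failed to create poll");
        let mut events = Events::with_capacity(1024);
        let mut buf = [0u8; 65536];

        poll.registry()
            .register(&mut socket, Token(0), Interest::READABLE)
            .expect("failed to register socket");

        loop {
            poll.poll(&mut events, None).expect("poll failed");

            for _event in events.iter() {
                // Drain the socket: read until `WouldBlock` before moving
                // on to the next readiness event.
                loop {
                    match socket.recv_from(&mut buf) {
                        Ok((n, _from)) => {
                            // Hand the datagram to the tokio side.
                            let _ = tx.send(buf[..n].to_vec());
                        }
                        Err(e) if e.kind() == ErrorKind::WouldBlock => break,
                        Err(e) => panic!("recv failed: {e}"),
                    }
                }
            }
        }
    });
}
```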

Resolves: #4366.
2024-04-04 18:53:59 +00:00
Thomas Eizinger
283bf8271f fix(relay): don't busy-loop on poll_timeout (#4497)
The value returned from `poll_timeout` only needs to reset the `Sleep`;
we don't need to go back to the top of the loop. Instead, we move its
polling to below the resetting of `Sleep`. This correctly registers a
waker in case we did change the `Sleep`.

The `continue` caused a busy-loop and stopped the relay from dealing
with the `phoenix-channel`, which means the portal would eventually
consider it offline.

This was first introduced in #4455.
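
Schematically, the fixed ordering looks like this (`Server` is a stand-in for the relay's sans-IO state machine; the real eventloop also polls the portal and sockets in the same loop):

```
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
use std::time::Instant;

use tokio::time::Sleep;

struct Server;

impl Server {
    fn poll_timeout(&self) -> Option<Instant> { None }
    fn handle_timeout(&mut self, _now: Instant) {}
}

struct Eventloop {
    server: Server,
    sleep: Pin<Box<Sleep>>,
}

impl Eventloop {
    fn poll(&mut self, cx: &mut Context<'_>) -> Poll<()> {
        loop {
            // ... portal messages and socket readiness are handled here ...

            if let Some(deadline) = self.server.poll_timeout() {
                // Reset the timer but do NOT `continue` back to the top;
                // fall through so the `Sleep` is polled below and a waker
                // is registered for the new deadline.
                if self.sleep.deadline() != deadline.into() {
                    self.sleep.as_mut().reset(deadline.into());
                }
            }

            if self.sleep.as_mut().poll(cx).is_ready() {
                self.server.handle_timeout(Instant::now());
                continue; // expired state may have produced new work
            }

            return Poll::Pending;
        }
    }
}
```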
2024-04-03 19:33:09 -06:00
Thomas Eizinger
ddd0a3b986 fix(relay): always continue after ready events (#4494)
This is a similar fix as to #4486. I am not sure if this is / was
actively causing problems but using `continue` after _any_ ready event
is definitely more correct.

This is a low-risk change.
2024-04-04 01:10:30 +00:00
Thomas Eizinger
285249a384 fix(relay): only unbind a channel if it is actually bound (#4495)
Currently, we are emitting the "Channel is now expired" message multiple
times because we don't filter for the ones we have already unbound.
2024-04-04 01:09:58 +00:00
Thomas Eizinger
97e6a92e39 chore(rust): remove unused dependencies (#4475)
These were all found by `cargo-udeps`.

Resolves: #4403.
2024-04-03 14:11:02 +00:00
Thomas Eizinger
b668f8944b chore(rust): lint against redundant async (#4466)
I came across a redundant `async` within the relay code and thought:
"Hey, I know there is a lint against this, let's turn it on".
2024-04-03 02:43:49 +00:00
Thomas Eizinger
1b11d75a91 refactor(relay): replace Command::Wake with poll_timeout (#4455)
This is much more robust than the previous implementation because we now
go through all allocations and channels every time we get a
`handle_timeout` and clean up everything that is expired.
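
In sketch form, with illustrative types (the actual `Server` tracks more state than this):

```
use std::collections::HashMap;
use std::time::Instant;

struct Allocation { expires_at: Instant }
struct Channel { expires_at: Instant }

struct Server {
    allocations: HashMap<u64, Allocation>,
    channels: HashMap<u16, Channel>,
}

impl Server {
    /// The earliest instant at which `handle_timeout` should run again.
    fn poll_timeout(&self) -> Option<Instant> {
        let next_allocation = self.allocations.values().map(|a| a.expires_at).min();
        let next_channel = self.channels.values().map(|c| c.expires_at).min();

        next_allocation.into_iter().chain(next_channel).min()
    }

    /// Sweep everything that is expired; robust because no individual
    /// wake-up can be missed or forgotten.
    fn handle_timeout(&mut self, now: Instant) {
        self.allocations.retain(|_, a| a.expires_at > now);
        self.channels.retain(|_, c| c.expires_at > now);
    }
}
```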

Resolves: #4095.
2024-04-02 23:09:46 +00:00
Thomas Eizinger
5f718ad982 refactor(relay): reduce allocations during relaying (#4453)
Previously, we would allocate each message twice:

1. When receiving the original packet.
2. When forming the resulting channel-data message.

We can optimise this to only one allocation each by:

1. Carrying around the original `ChannelData` message for traffic from
clients to peers.
2. Pre-allocating enough space for the channel-data header for traffic
from peers to clients.

Local flamegraphing still shows most user-space activity as
allocations. I did occasionally see a throughput of ~10GBps with these
patches. I'd still like to work towards #4095 to ensure we handle
anything time-sensitive better.
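
The second point boils down to reserving the 4-byte channel-data header (2-byte channel number plus 2-byte length, per the TURN spec) up front; a sketch:

```
const CHANNEL_DATA_HEADER_LEN: usize = 4;

/// Frame a packet from a peer as a channel-data message with a single
/// allocation, sized for header + payload from the start.
fn to_channel_data(payload: &[u8], channel: u16) -> Vec<u8> {
    let mut msg = Vec::with_capacity(CHANNEL_DATA_HEADER_LEN + payload.len());
    msg.extend_from_slice(&channel.to_be_bytes());
    msg.extend_from_slice(&(payload.len() as u16).to_be_bytes());
    msg.extend_from_slice(payload);

    msg
}
```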
2024-04-02 22:00:36 +00:00
Thomas Eizinger
a1bd9248d0 chore(relay): reduce instrumentation overhead (#4426)
Previously, we were creating a lot of spans because they were all set to
`level = error`. We now reduce those spans to `debug`, which should help
with the CPU utilization.

Related: #4366.
2024-04-01 23:02:59 +00:00
Thomas Eizinger
a2a86703e7 chore(relay): make profiling in release build possible (#4441)
Currently, controlling the RNG seed is gated for debug builds only. This
makes profiling the release build impossible because we cannot generate
credentials upfront.

Additionally, for flamegraphs to be useful, we need to enable debug
symbols for the relay.
2024-04-01 21:37:11 +00:00
Thomas Eizinger
c96f32105c chore(relay): remove per-packet logs on debug level (#4439)
This follows the same policy as we applied in connlib: Anything that
happens on a per-packet basis is on `trace` level.

Resolves: #4333.
2024-04-01 21:35:11 +00:00
Thomas Eizinger
6efce31a94 chore(relay): apply log target consistently (#4440)
The main relay component uses the `relay` target to be more concise
whilst emitting logs. This PR fixes a few logs that were missing it.
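
An explicit `target:` on the macro is all that's needed; for example:

```
fn main() {
    tracing_subscriber::fmt::init();

    // Overrides the default module-path target so the line is prefixed
    // with the concise `relay:` regardless of which module emits it.
    tracing::info!(target: "relay", "Listening for incoming traffic on UDP port 3478");
}
```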
2024-04-01 20:54:24 +00:00
Thomas Eizinger
fb7f7c0b9a chore: apply lints consistently across workspace (#4357)
Motivated by: #4340.

I also activated
[`clippy::unnecessary_wraps`](https://rust-lang.github.io/rust-clippy/master/#/unnecessary_wraps)
which does create some false-positives for the platform-specific code
but is IMO overall a net-positive. With the amount of Rust code and
crates increasing, it is good to have tools point out simplifications
like these as they are otherwise hard to spot, especially across crate
boundaries.
2024-03-28 06:09:22 +00:00
Thomas Eizinger
de6bbbc10d chore(relay): fix flaky proptest (#4157)
This turned out to be a user error in how I was using proptest.

Related: https://github.com/proptest-rs/proptest/issues/72.
Resolves: #3965.
2024-03-16 01:09:39 +00:00
Thomas Eizinger
9767bddcca feat(gateway): add HTTP health check (#4120)
This adds the same kind of HTTP health-check that is already present in
the relay to the gateway. The health-check returns 200 OK for as long as
the gateway is active. The gateway automatically shuts down on fatal
errors (like authentication failures with the portal).

To enable this, I've extracted a crate `http-health-check` that shares
this code between the relay and the gateway.
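
For illustration, the essence of such a health endpoint (framework choice and route are assumptions here, not the actual `http-health-check` API):

```
use axum::{http::StatusCode, routing::get, Router};

#[tokio::main]
async fn main() {
    // Returns 200 OK for as long as the process is up and serving.
    let app = Router::new().route("/healthz", get(|| async { StatusCode::OK }));

    let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```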

Resolves: #2465.

---------

Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Reactor Scram <ReactorScram@users.noreply.github.com>
2024-03-13 21:05:21 +00:00
dependabot[bot]
9836c74ea5 build(deps): Bump the otel group in /rust with 3 updates (#3980)
Bumps the otel group in /rust with 3 updates:
[tracing-stackdriver](https://github.com/NAlexPear/tracing-stackdriver),
[tracing-opentelemetry](https://github.com/tokio-rs/tracing-opentelemetry)
and
[opentelemetry-otlp](https://github.com/open-telemetry/opentelemetry-rust).

Updates `tracing-stackdriver` from 0.8.0 to 0.9.0.

Updates `tracing-opentelemetry` from 0.21.0 to 0.22.0. Breaking changes:
an upgrade to `v0.21.0` of `opentelemetry` (see the
[v0.21.0 changelog](https://github.com/open-telemetry/opentelemetry-rust/blob/v0.21.0/opentelemetry/CHANGELOG.md))
and an MSRV bump to Rust 1.65+ (#68). Fixed: WASM support (#57) and a
potential deadlock (#59).

Updates `opentelemetry-otlp` from 0.13.0 to 0.15.0. Highlights from the
release notes: more resource detectors (#573), an exposed `Error` type
for custom error handlers (#551), configurable channels for the batch
span processor (#560), `Unit` moved into the `metrics` module (#564),
trace flags updated to match the spec (#565) and a fix for `TraceState`
not being able to insert new key-value pairs (#567), on top of the
v0.14.0 changes (dropped-attribute counts, batch-export and
resource-merge behaviour updates, removal of remote span context and
metrics quantiles, and assorted performance improvements).


---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
2024-03-12 19:16:35 +00:00
Thomas Eizinger
36848e31a9 fix(relay): actually expire channels which allows re-binding them (#4094)
Previously, the relay neither scheduled a `Wake` command nor did it
register a `TimedAction` to expire a channel binding. Such an action was
only scheduled after the first refresh.

This PR fixes this and adds a test that asserts we can re-bind the same
channel to a different peer after 15 minutes.

Resolves: #3979.
2024-03-12 10:13:47 +00:00
Thomas Eizinger
32e16ec927 feat(relay): improve logs for expiry and deletion of channel bindings (#4089)
Unfortunately, the current logs don't allow us to correlate which
allocation, and thus which client, the expired channel binding relates
to. This PR fixes this by adding more fields to the relevant log
messages.
2024-03-12 10:13:15 +00:00
Thomas Eizinger
407d20d817 refactor(connlib): use phoenix-channel crate for clients (#3682)
Depends-On: #4048.
Depends-On: #4015.

Resolves: #2158.

---------

Co-authored-by: conectado <gabrielalejandro7@gmail.com>
2024-03-12 08:10:56 +00:00
Thomas Eizinger
fdb33674cd refactor(connlib): introduce LoginUrl component (#4048)
Currently, we are passing a lot of data into `Session::connect`. Half of
this data is only needed to construct the URL we will use to connect to
the portal. We can simplify this by extracting a dedicated `LoginUrl`
component that captures and validates this data early.

Not only does this reduce the number of parameters we pass to
`Session::connect`, it also reduces the number of failure cases we have
to deal with in `Session::connect`. Any time the session fails, we have
to call `onDisconnected` to inform the client. Thus, we should perform
as much validation as we can early on. In other words, once
`Session::connect` returns, the client should be able to expect that the
tunnel is starting.
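
Illustrated with assumed names (the real `LoginUrl` validates more inputs than this):

```
use url::Url;

pub struct LoginUrl(Url);

impl LoginUrl {
    /// All URL-related validation happens here, long before
    /// `Session::connect`, so `connect` has fewer ways to fail.
    pub fn client(api_url: &str, token: &str, device_id: &str) -> Result<Self, url::ParseError> {
        let mut url = Url::parse(api_url)?;
        url.query_pairs_mut()
            .append_pair("token", token)
            .append_pair("external_id", device_id);

        Ok(LoginUrl(url))
    }
}
```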
2024-03-09 09:35:15 +00:00
Thomas Eizinger
4339030d03 refactor(phoenix-channel): reduce Error to fatal errors (#4015)
As part of doing https://github.com/firezone/firezone/pull/3682, we
noticed that the error handling surfaced to the clients needs to
differentiate between fatal errors that require clearing the token and
ones that don't.

Upon closer inspection of `phoenix_channel::Error`, it becomes obvious
that the current design is not good here. In particular, we handle
certain errors with retries internally but still expose those same
errors.

To make this more obvious, we reduce the public `Error` to the variants
that are actually fatal. There can really only be three:

- HTTP client errors (those are by definition non-retryable)
- Token expired
- We have reached our max number of retries
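
In sketch form (variant names illustrative):

```
#[derive(Debug)]
pub enum Error {
    /// HTTP client errors during the websocket handshake; by definition
    /// not retryable.
    Client(u16),
    /// The portal rejected our token; the client must clear it.
    TokenExpired,
    /// The internal (re)connect mechanism exhausted its retries.
    MaxRetriesReached,
}
```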
2024-03-09 08:03:25 +00:00
Thomas Eizinger
2c2bbe19ff chore(relay): add FIREZONE_NAME env variable (#3932)
Follow-up from #2544.
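
For illustration, this is the usual shape of an env-backed flag with clap (a sketch; the relay's actual CLI definition may differ):

```
use clap::Parser;

#[derive(Parser)]
struct Args {
    /// Friendly name for this relay; can also be set via FIREZONE_NAME.
    #[arg(long, env = "FIREZONE_NAME")]
    name: Option<String>,
}

fn main() {
    let args = Args::parse();
    println!("relay name: {:?}", args.name);
}
```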

---------

Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
2024-03-05 22:37:21 +00:00
Thomas Eizinger
9d8233a406 feat(phoenix-channel): remove concept of "inbound requests" (#3831)
We don't have a concept of "inbound requests", at least not natively in
the phoenix channel JSON format. Thus, we don't need to match on `ref`
for incoming messages.

Extracted out of: #3682.
2024-03-01 15:53:08 +00:00
Andrew Dryga
bfe1fb0ff4 refactor(portal): unify format of error payloads in websocket connection (#3697)
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
2024-02-28 23:06:52 +00:00
Gabi
4c0c8391d5 fix(relay): update tests for current values (#3746)
In #3726 this value was increased but the test didn't reflect that.

I've not the slightest idea how this is passing on CI. It isn't locally.

Now I have an idea: relay tests aren't run on CI.
2024-02-23 18:02:47 +00:00
Thomas Eizinger
b545a36ae7 feat(relay): increase number of allowed requests per nonce (#3726)
In the relay's authentication scheme, each nonce is only valid for a
certain number of requests. This guards against replay attacks.

Currently, this is set to 10, which means all requests after the 10th
will receive a "stale nonce" error. 10 turns out to be way too low and
greatly delays the setup of channels and allocations, which always comes
as a burst of messages that end up incurring additional round trips
because they all need to be re-sent with a new nonce.
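
The mechanism, sketched with illustrative names and an illustrative limit (see the PR for the actual new value):

```
use std::collections::HashMap;

const MAX_REQUESTS_PER_NONCE: u64 = 100; // illustrative, not the real value

#[derive(Default)]
struct Nonces(HashMap<[u8; 16], u64>);

#[derive(Debug)]
struct StaleNonce;

impl Nonces {
    /// Each nonce may only authenticate a limited number of requests;
    /// anything beyond that is answered with a "stale nonce" error.
    fn handle_request(&mut self, nonce: [u8; 16]) -> Result<(), StaleNonce> {
        let used = self.0.entry(nonce).or_insert(0);
        *used += 1;

        if *used > MAX_REQUESTS_PER_NONCE {
            return Err(StaleNonce);
        }

        Ok(())
    }
}
```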
2024-02-22 01:09:55 +00:00
Andrew Dryga
a211f96109 feat(portal): Broadcast state changes to connected clients and gateways (#2240)
# Gateways
- [x] When Gateway Group is deleted all gateways should be disconnected
- [x] When Gateway Group is updated (eg. routing) broadcast to all
affected gateways to disconnect all the clients
- [x] When Gateway is deleted it should be disconnected
- [x] When Gateway Token is revoked all gateways that use it should be
disconnected

# Relays
- [x] When Relay Group is deleted all relays should be disconnected
- [x] When Relay is deleted it should be disconnected
- [x] When Relay Token is revoked all relays that use it should be
disconnected

# Clients
- [x] Remove Delete Client button, show clients using the token on the
Actors page (#2669)
- [x] When client is deleted disconnect it
- [ ] ~When Gateway is offline broadcast its status to the Clients
connected to it~
- [x] Persist `last_used_token_id` in Clients and show it in tokens UI

# Resources
- [x] When Resource is deleted it should be removed from all gateways
and clients
- [x] When Resource connection is removed it should be deleted from
removed gateway groups
- [x] When Resource is updated (eg. traffic filters) all of its
authorizations should be removed

# Authentication
- [x] When Token is deleted related sessions are terminated
- [x] When an Actor is deleted or disabled it should be disconnected
from browser and client
- [x] When Identity is deleted its sessions should be disconnected from
browser and client
- [x] ^ Ensure the same happens for identities during IdP sync
- [x] When IdP is disabled act like all actors for it are disabled?
- [x] When IdP is deleted act like all actors for it are deleted?

# Authorization
- [x] When Policy is created clients that gain access to a resource
should get an update
- [x] When Policy is deleted we need to remove all authorizations it has made
- [x] When Policy is disabled we need to remove all authorizations it has made
- [x] When Actor Group adds or removes a user, related policies should
be re-evaluated
- [x] ^ Ensure the same happens for identities during IdP sync

# Settings
- [x] Re-send init message to Client when DNS settings change

# Code
- [x] Clear way to see all available topics and messages; do not use
binary topics any more

---------

Co-authored-by: conectado <gabrielalejandro7@gmail.com>
2024-02-01 11:02:13 -06:00
Thomas Eizinger
84b3ac50ca fix(relay): correctly separate channel state for different peers (#3472)
Currently, there is a bug in the relay where the channel state of
different peers overlaps because the data isn't indexed correctly by
both peers and clients.

This PR fixes this, introduces more debug assertions (this bug was
caught by one) and also adds some new-type wrappers to avoid conflating
peers with clients.
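
The new-type wrappers boil down to this (names as in spirit, possibly not verbatim):

```
use std::net::SocketAddr;

/// Distinct types for the two sides of an allocation: the compiler now
/// rejects indexing channel state by a peer's address where a client's
/// address is expected, and vice versa.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct ClientSocket(SocketAddr);

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct PeerSocket(SocketAddr);
```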
2024-02-01 01:53:54 +00:00
Thomas Eizinger
3f8c6cb6eb feat(relay): allow channel bindings to IPv6 addresses (#3434)
Previously, we still had a hard-coded rule in the relay that would not
allow us to relay to an IPv6 peer. We can remove that and properly check
this based on the allocated addresses.

Resolves: #3405.
2024-01-31 00:36:54 +00:00
Thomas Eizinger
e02aa2eb1f chore(relay): update docs in regards to spec-compliance (#3437) 2024-01-30 17:27:38 +00:00
Thomas Eizinger
c9834ee8ee feat(relay): print stats every 10s (#3408)
In #3400, a discussion started on what the correct log level would be
for the production relay. Currently, the relay logs some stats about
each packet on debug, i.e. where it came from, where it is going to and
how big it is. This isn't very useful in production though and will fill
up our log disk quickly.

This PR introduces a stats timer like we already have in other
components. We print the number of allocations, how many channels we
have and how much data we relayed over all these channels since we last
printed. The interval is currently set to 10 seconds.

Here is what this output could look like (captured locally using
`relay/run_smoke_test.sh`, although slightly tweaked: printing every 2s,
using release mode and larger packets on the clients):

```
2024-01-26T05:01:02.445555Z  INFO relay: Seeding RNG from '0'
2024-01-26T05:01:02.445580Z  WARN relay: No portal token supplied, starting standalone mode
2024-01-26T05:01:02.445827Z  INFO relay: Listening for incoming traffic on UDP port 3478
2024-01-26T05:01:02.447035Z  INFO Eventloop::poll: relay: num_allocations=0 num_channels=0 throughput=0.00 B/s
2024-01-26T05:01:02.649194Z  INFO Eventloop::poll:handle_client_input{sender=127.0.0.1:39092 transaction_id="8f20177512495fcb563c60de" allocation=AID-1}: relay: Created new allocation first_relay_address=127.0.0.1 lifetime=600s
2024-01-26T05:01:02.650744Z  INFO Eventloop::poll:handle_client_input{sender=127.0.0.1:39092 transaction_id="6445943a353d5e8c262a821f" allocation=AID-1 peer=127.0.0.1:41094 channel=16384}: relay: Successfully bound channel
2024-01-26T05:01:04.446317Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=631.54 MB/s
2024-01-26T05:01:06.446319Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=698.73 MB/s
...
2024-01-26T05:02:02.446312Z  INFO Eventloop::poll: relay: num_allocations=1 num_channels=1 throughput=700.07 MB/s
```
2024-01-29 18:52:15 +00:00
Thomas Eizinger
76635eb336 feat(relay): print a log for error responses we send to the client (#3413) 2024-01-26 21:58:17 +00:00
Thomas Eizinger
8f2e9efb21 feat(relay): improve logging and error handling (#3399)
By changing how the fields are recorded in the tracing spans and
renaming the handler functions, the logs now align nicely:
Before:

```
[  relay] 2024-01-25T02:01:00.103279Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.103448Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.103627Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.103774Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.103955Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.104119Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.104303Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.104456Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.104650Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.104825Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.105015Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.105165Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.105332Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.105534Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.105739Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.105934Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.106155Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.106343Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.106534Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.106671Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.106838Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.106987Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.107148Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.107289Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.107496Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.107662Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.107846Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.108003Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.108189Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.108340Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.108529Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.108665Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.108835Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.109006Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.109193Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.109363Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.109563Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.109733Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.109919Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.110058Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.110228Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.110360Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.110510Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.110628Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.110773Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.110904Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.111049Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.111199Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.111359Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.111540Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.111735Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.111884Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.112064Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.112246Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.112422Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.112577Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.112754Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.112933Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.113109Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.113254Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.113421Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:55880}:handle_channel_data_message{sender=127.0.0.1:55880 channel=16384 recipient=127.0.0.1:37062}: relay: Relaying 32 bytes
[  relay] 2024-01-25T02:01:00.113658Z DEBUG Eventloop::poll:handle_relay_input{sender=127.0.0.1:37062 allocation_id=AID-1 recipient=127.0.0.1:55880 channel=16384}: relay: Relaying 32 bytes
```

Now:

```
[  relay] 2024-01-25T01:57:42.045265Z DEBUG Eventloop::poll:handle_peer_traffic{sender=127.0.0.1:45679 allocation_id=AID-1 recipient=127.0.0.1:49137 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.045393Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:49137 allocation_id=AID-1 recipient=127.0.0.1:45679 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.045543Z DEBUG Eventloop::poll:handle_peer_traffic{sender=127.0.0.1:45679 allocation_id=AID-1 recipient=127.0.0.1:49137 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.045676Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:49137 allocation_id=AID-1 recipient=127.0.0.1:45679 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.045792Z DEBUG Eventloop::poll:handle_peer_traffic{sender=127.0.0.1:45679 allocation_id=AID-1 recipient=127.0.0.1:49137 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.045918Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:49137 allocation_id=AID-1 recipient=127.0.0.1:45679 channel=16384}: relay: Relaying 32 bytes
...
[  relay] 2024-01-25T01:57:42.052775Z DEBUG Eventloop::poll:handle_peer_traffic{sender=127.0.0.1:45679 allocation_id=AID-1 recipient=127.0.0.1:49137 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.052941Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:49137 allocation_id=AID-1 recipient=127.0.0.1:45679 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.053065Z DEBUG Eventloop::poll:handle_peer_traffic{sender=127.0.0.1:45679 allocation_id=AID-1 recipient=127.0.0.1:49137 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.053224Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:49137 allocation_id=AID-1 recipient=127.0.0.1:45679 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.053367Z DEBUG Eventloop::poll:handle_peer_traffic{sender=127.0.0.1:45679 allocation_id=AID-1 recipient=127.0.0.1:49137 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.053527Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:49137 allocation_id=AID-1 recipient=127.0.0.1:45679 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.053665Z DEBUG Eventloop::poll:handle_peer_traffic{sender=127.0.0.1:45679 allocation_id=AID-1 recipient=127.0.0.1:49137 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.053826Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:49137 allocation_id=AID-1 recipient=127.0.0.1:45679 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.053972Z DEBUG Eventloop::poll:handle_peer_traffic{sender=127.0.0.1:45679 allocation_id=AID-1 recipient=127.0.0.1:49137 channel=16384}: relay: Relaying 32 bytes
[  relay] 2024-01-25T01:57:42.054127Z DEBUG Eventloop::poll:handle_client_input{sender=127.0.0.1:49137 allocation_id=AID-1 recipient=127.0.0.1:45679 channel=16384}: relay: Relaying 32 bytes
```
2024-01-26 00:12:47 +00:00
Thomas Eizinger
f9f95677d5 feat: automatically rejoin channel on portal after reconnect (#3393)
In https://github.com/firezone/firezone/pull/3364, we forgot to rejoin
the channel on the portal. Additionally, I found a way to detect the
disconnect even more quickly.
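
For illustration, a minimal sketch of the missing rejoin step; all types and method names here are hypothetical, not the real `phoenix-channel` API:

```
// Hypothetical sketch: made-up types to illustrate the idea, not the
// actual `phoenix-channel` interface.
struct Channel;

impl Channel {
    async fn join(&mut self, topic: &str) {
        // Send a join frame for `topic` over the (re-)established socket.
        let _ = topic;
    }
}

enum Event {
    /// The underlying websocket was re-established after a disconnect.
    Reconnected,
    Message(String),
}

async fn handle(event: Event, channel: &mut Channel) {
    match event {
        // Without re-joining here, the connection comes back up but the
        // portal never resumes sending events for our topic.
        Event::Reconnected => channel.join("relay").await,
        Event::Message(msg) => println!("{msg}"),
    }
}
```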
2024-01-25 02:05:15 +00:00
Thomas Eizinger
6b789d6932 feat(phoenix-channel): automatically reconnect based on provided ExponentialBackoff (#3364)
Currently, only the gateway has reconnect logic for (transient) errors
when connecting to the portal. Instead of duplicating this for the
relay, I moved the reconnect state machine into `phoenix-channel`. This
means the relay now gets it automatically, and in the future the
clients will benefit from it as well.
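
As a rough sketch of the state machine's core loop, assuming the `backoff` crate's `ExponentialBackoff` (the commit title confirms the type, but `try_connect` is a placeholder for the real websocket-connect-and-join logic):

```
// Sketch of a reconnect loop driven by exponential backoff.
use backoff::backoff::Backoff;
use backoff::ExponentialBackoff;

async fn connect_with_backoff(url: &str) -> std::io::Result<()> {
    let mut backoff = ExponentialBackoff::default();

    loop {
        match try_connect(url).await {
            Ok(()) => return Ok(()),
            Err(e) => match backoff.next_backoff() {
                Some(delay) => {
                    tracing::warn!("connection failed ({e}), retrying in {delay:?}");
                    tokio::time::sleep(delay).await;
                }
                // Backoff exhausted: give up and surface the error.
                None => return Err(e),
            },
        }
    }
}

async fn try_connect(_url: &str) -> std::io::Result<()> {
    unimplemented!("placeholder for the actual connect + join")
}
```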

As a nice benefit, this also greatly simplifies the gateway's
`Eventloop` and removes a bunch of cruft with channels.

Resolves: #2915.
2024-01-24 16:39:53 +00:00
dependabot[bot]
d76c94d057 chore(deps): bump axum from 0.6.20 to 0.7.3 in /rust (#3068)
Bumps [axum](https://github.com/tokio-rs/axum) from 0.6.20 to 0.7.3.
Release notes (sourced from [axum's releases](https://github.com/tokio-rs/axum/releases)):

axum-extra - v0.7.3
- added: Implement `Deref` and `DerefMut` for built-in extractors (tokio-rs/axum#1922)
- added: Add `OptionalPath` extractor (tokio-rs/axum#1889)

axum - v0.7.3
- added: `Body` implements `From<()>` now (tokio-rs/axum#2411)
- change: Update version of multer used internally for multipart (tokio-rs/axum#2433)
- change: Update tokio-tungstenite to 0.21 (tokio-rs/axum#2435)
- added: Enable `tracing` feature by default (tokio-rs/axum#2460)
- added: Support graceful shutdown on `serve` (tokio-rs/axum#2398)
- added: `RouterIntoService` implements `Clone` (tokio-rs/axum#2456)

axum-extra - v0.7.2
- added: Implement `IntoResponse` for `MultipartError` (tokio-rs/axum#1861)

axum - v0.7.2
- added: Add `axum::body::to_bytes` (tokio-rs/axum#2373)
- fixed: Gracefully handle accept errors in `serve` (tokio-rs/axum#2400)

axum-extra - v0.7.1
- Updated to latest `axum-macros`

axum - v0.7.1
- fix: Fix readme.

axum-extra - v0.7.0
- breaking: Remove the `spa` feature which should have been removed in 0.6.0 (tokio-rs/axum#1802)
- added: Add `Multipart`. This is similar to `axum::extract::Multipart` except that it enforces field exclusivity at runtime instead of compile time, as this improves usability (tokio-rs/axum#1692)
- added: Implement `Clone` for `CookieJar`, `PrivateCookieJar` and `SignedCookieJar` (tokio-rs/axum#1808)
- fixed: Add `#[must_use]` attributes to types that do nothing unless used (tokio-rs/axum#1809)

... (truncated)
Commits:
- fe89ab5 Release (tokio-rs/axum#2461)
- b494d45 Implement `Clone` for `RouterIntoService` (tokio-rs/axum#2456)
- 560213a docs: add clarification about building middleware and error types (tokio-rs/axum#2448)
- ea6dd51 Enable tracing by default (tokio-rs/axum#2460)
- 12e8c62 Support graceful shutdown on `serve` (tokio-rs/axum#2398)
- 56159b0 JsonDeserializer extractor for zero-copy deserialization (tokio-rs/axum#2431)
- c3db223 Rework error handling example (tokio-rs/axum#2382)
- 6c276c3 Updated docs regarding constraints of Handler arguments (tokio-rs/axum#2451)
- 4f010d9 Updating `tls-rustls` example (tokio-rs/axum#2457)
- 3fda093 Use separate lexical scope for lock guard in docs (tokio-rs/axum#2439)
- Additional commits viewable in the [compare view](https://github.com/tokio-rs/axum/compare/axum-v0.6.20...axum-v0.7.3)
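
For context, the jump from 0.6 to 0.7 changed axum's entrypoint. A minimal sketch of the new style (the route and address here are made up for illustration):

```
use axum::{routing::get, Router};
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // axum 0.7 replaces the 0.6 `Server::bind(..)` builder with a plain
    // `TcpListener` plus `axum::serve`.
    let app = Router::new().route("/healthz", get(|| async { "OK" }));

    let listener = TcpListener::bind("0.0.0.0:8080").await?;
    axum::serve(listener, app).await
}
```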
---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
2024-01-15 03:31:32 +00:00
Thomas Eizinger
9a5f4e0ce2 fix(relay): ensure channel numbers are unique to a client (#2744)
Previously, we misinterpreted the spec as not allowing _different_
clients to use the same channel number. This is wrong: because channel
numbers are managed by clients, they only need to be unique _per
client_. This patch addresses that shortcoming.

I didn't include any dedicated tests for this. The existing ones still
passing means the feature works overall, and the data structure itself
guarantees that channel numbers are now unique per client (see the
sketch below).
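
A minimal sketch of the idea, using hypothetical types rather than the actual relay data structure: the map key includes the client, so two clients can both bind, say, channel 0x4000 without clashing.

```
use std::collections::HashMap;
use std::net::SocketAddr;

/// Hypothetical sketch: channel bindings keyed by (client, channel
/// number) instead of by channel number alone.
#[derive(Default)]
struct ChannelBindings {
    bindings: HashMap<(SocketAddr, u16), SocketAddr>,
}

impl ChannelBindings {
    fn bind(&mut self, client: SocketAddr, channel: u16, peer: SocketAddr) {
        self.bindings.insert((client, channel), peer);
    }

    /// Different clients may use the same `channel` value.
    fn peer_for(&self, client: SocketAddr, channel: u16) -> Option<SocketAddr> {
        self.bindings.get(&(client, channel)).copied()
    }
}
```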
2023-12-04 17:01:55 +00:00
Thomas Eizinger
81598dbaff feat(relay): reduce packet drops (#2737)
There is another channel whose size we hadn't yet increased: the one
between the allocation and the main task loop. Increasing it to 1000
means each allocation can potentially buffer ~65MB of data (1000
datagrams of up to 65,535 bytes each). With the biggest port range
(16383 allocations), that would be a theoretical memory consumption of
~1TB. But this would imply that we have 16383 connected clients all
sending data at max speed, saturating our downlink, whilst our uplink
is somehow ridiculously small. As long as uplink and downlink are
roughly in the same ballpark, it should be impossible to actually fill
up these buffers.

I suspect that the current packet drops in the iperf test happen
because, on localhost, sending 10 UDP packets is so quick that tokio is
unable to wake up the task in time to empty the queue.

In addition to the increased channel size, I've also added a check for
the other channels to avoid writing to them in case they are not ready
for some reason (see the sketch below).
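
A sketch of both changes, assuming a plain `tokio` mpsc channel (the names and capacity constant are illustrative, not the relay's actual code):

```
use tokio::sync::mpsc;

/// Illustrative capacity: 1000 datagrams x 65,535 bytes ≈ 65 MB per allocation.
const CHANNEL_CAPACITY: usize = 1000;

fn make_channel() -> (mpsc::Sender<Vec<u8>>, mpsc::Receiver<Vec<u8>>) {
    mpsc::channel(CHANNEL_CAPACITY)
}

fn forward(tx: &mpsc::Sender<Vec<u8>>, datagram: Vec<u8>) {
    // `try_send` fails fast instead of blocking when the receiving task
    // is not ready; dropping a packet is preferable to stalling the
    // event loop.
    if let Err(e) = tx.try_send(datagram) {
        tracing::warn!("dropping packet: {e}");
    }
}
```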

---------

Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
2023-11-29 19:01:17 +00:00
Gabi
aec5b97012 Add performance tests for client-gateway communication (#2655) 2023-11-17 00:32:34 -06:00
Jamil
2bca378f17 Allow data plane configuration at runtime (#2477)
## Changelog

- Updates connlib parameter API_URL (formerly known under different
names as `CONTROL_PLANE_URL`, `PORTAL_URL`, `PORTAL_WS_URL`, and
friends) to be configured as an "advanced" or "hidden" feature at
runtime so that we can test production builds on both staging and
production.
- Makes `AUTH_BASE_URL` configurable at runtime too
- Moves `CONNLIB_LOG_FILTER_STRING` to be configured like this as well
and simplifies its naming
- Fixes a timing attack bug on Android when comparing the `csrf` token
- Adds proper account ID validation to Android to prevent invalid URL
parameter strings from being saved and used
- Cleans up a number of UI / view issues on Android regarding typos,
consistency, etc
- Hides vars from the `relay` CLI that we may not want to expose just
yet
- `get_device_id()` is flawed for connlib components -- SMBIOS is rarely
available. Data plane components now require a `FIREZONE_ID` to use for
upserting instead (see the sketch below).
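
To illustrate the runtime configuration, a sketch only: the variable names follow the changelog above, but the fallback URL is an assumption, not the shipped default.

```
use std::env;

// Sketch: read the runtime-configurable values.
fn api_url() -> String {
    // The fallback here is made up for illustration.
    env::var("API_URL").unwrap_or_else(|_| "wss://api.example.com".to_string())
}

fn firezone_id() -> Result<String, env::VarError> {
    // Unlike `get_device_id()`, which relied on SMBIOS being available,
    // the ID is supplied explicitly by the operator.
    env::var("FIREZONE_ID")
}
```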


Fixes #2482 
Fixes #2471

---------

Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
Co-authored-by: Gabi <gabrielalejandro7@gmail.com>
2023-10-30 23:46:53 -07:00
Jamil
573124bd2f Document relay gateway client CLIs (#2424)
Fixes #2363 

* Rename `relay` package to `firezone-relay` so that the output
binaries match the `firezone-*` CLI naming scheme
* Rename `firezone-headless-client` package to `firezone-linux-client`
for consistency
* Add READMEs for user-facing CLI components (there will also be docs
later)
2023-10-19 00:59:17 +00:00
Jamil
6ec10b2669 Revert "Fix/website mdx" (#2434)
Reverts firezone/firezone#2433
2023-10-18 11:42:54 -07:00