Closes #5449
The smoke tests expect `last_crash.dmp` at a fixed path, so in this case
we write the file with a timestamped name, then copy it over
`last_crash.dmp`.
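For reference, the flow is roughly this (a sketch with illustrative names, not the actual implementation):

```rust
use std::fs;
use std::path::Path;
use std::time::{SystemTime, UNIX_EPOCH};

/// Write the dump under a timestamped name, then copy it over the fixed
/// `last_crash.dmp` path that the smoke tests expect.
fn persist_crash_dump(dir: &Path, dump: &[u8]) -> std::io::Result<()> {
    let timestamp = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is before the UNIX epoch")
        .as_secs();
    let timestamped = dir.join(format!("crash_{timestamp}.dmp"));
    fs::write(&timestamped, dump)?;
    fs::copy(&timestamped, dir.join("last_crash.dmp"))?;
    Ok(())
}
```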
Within `snownet`'s test harness, packets are dispatched in a particular order, and if none of the dispatch rules match, the packet is assumed to be for the node directly. We add a debug assert to ensure that the given address is in fact part of the "local" interfaces that we have configured in the tests.
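The assert looks roughly like this (hypothetical names for the harness types):

```rust
use std::collections::HashSet;
use std::net::SocketAddr;

/// If no dispatch rule matched, the packet is assumed to be for the node
/// itself; in tests, we can assert that this assumption actually holds.
fn assert_is_local(dst: SocketAddr, local_interfaces: &HashSet<SocketAddr>) {
    debug_assert!(
        local_interfaces.contains(&dst),
        "{dst} is not one of the configured local interfaces"
    );
}
```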
PR #5700 had a typo in it: I didn't notice that these match arms use `|`, so I accidentally flushed the DNS for an event that doesn't need it. Only `OnUpdateResources` should flush DNS.
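To illustrate the pitfall with entirely hypothetical event names: an arm of the form `A | B => body` runs the same body for every listed variant, so a side effect added for one variant silently applies to the others.

```rust
enum Event {
    OnUpdateResources,
    OnTunnelReady,
    OnDisconnect,
}

fn handle(event: Event) {
    match event {
        // Only this event should flush DNS.
        Event::OnUpdateResources => flush_dns(),
        // Shared arm: anything added to this body runs for *both* variants,
        // which is how the accidental flush slipped in.
        Event::OnTunnelReady | Event::OnDisconnect => log_event(),
    }
}

fn flush_dns() { /* ... */ }
fn log_event() { /* ... */ }
```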
```[tasklist]
### Tasks
- [x] Check the GUI saves its settings file
- [x] Check the IPC service writes the device ID to disk
- [x] Check the GUI writes a log file (skipped - we already check if the exported zip has any files in it)
- [x] Run the crash file through `minidump-stackwalk`
- [x] Reach feature parity with the original smoke tests
- [x] Ready for review
- [x] Finish #5452
- [ ] Start on #5453
```
I don't believe we use or need TCP for the Relays, so it's better to keep those ports closed. Also, docker-compose.yml is updated to allow the `relay-1` service to respond on all of its ports, since we typically don't need them mapped.
Closes #5052
On my dev VMs:
- systemd-resolved = 15 ms to flush
- Windows = 600 ms to flush
I tested with the headless Clients on Linux and Windows, and it fixes the issue. On Windows I couldn't replicate the issue with the GUI Client; on Linux this patch also fixes it for the GUI Client.
Temporary fix for #5566.
A better fix would be to merge the deep link and IPC service code, but I tried that a couple of times and failed because their interfaces are different.
```[tasklist]
### Tasks
- [x] Expand comment explaining the root cause
- [x] Re-request review
```
Since we only handle `A`, `AAAA` and `PTR` records for the names we manage, this can lead to unexpected behavior with other record types: using Firezone breaks `TXT`, `MX` and other record types for the resources we handle.
So this is a bit of a refactor: we now look up a resource and explicitly return `Some` when there is a record we should be returning (even if it's empty due to IP exhaustion), or `None` when we should just forward the query.
This has the added benefit of no longer breaking Bonjour or other non-standard `PTR` queries.
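Roughly, the new control flow looks like this (illustrative types, not the actual code): `Some` means "answer this query ourselves", `None` means "forward it upstream untouched".

```rust
use std::net::IpAddr;

enum RecordType {
    A,
    Aaaa,
    Ptr,
    Other, // TXT, MX, ...
}

struct DnsQuery {
    name: String,
    record_type: RecordType,
}

/// Look up the queried name among our resources. `Some` means we own the
/// answer (possibly an empty record set due to IP exhaustion); `None` means
/// we forward the query, so TXT, MX and non-standard PTR queries keep working.
fn resolve_resource(
    query: &DnsQuery,
    lookup: impl Fn(&str) -> Option<Vec<IpAddr>>,
) -> Option<Vec<IpAddr>> {
    match query.record_type {
        RecordType::A | RecordType::Aaaa | RecordType::Ptr => lookup(&query.name),
        RecordType::Other => None, // never a record we synthesize
    }
}
```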
Fixes: #5673.
---------
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
We added this to diagnose a hang in the IPC service, #5441. That hang, to the best of our knowledge, was caused by a deadlock which we fixed in #5571. So the heartbeat task just adds a lot of noise to stdout, which is annoying for debugging, and it won't be used in production logs.
Measuring system uptime is still useful, so we now log it just once when logging starts, next to the git version and log directives.
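A sketch of the one-time startup log, assuming Linux's `/proc/uptime` as the source (the real code may obtain the uptime differently):

```rust
/// Log system uptime once when logging starts, next to the git version and
/// log directives, instead of running a periodic heartbeat task.
fn log_startup_info(git_version: &str, log_directives: &str) {
    let uptime_secs = std::fs::read_to_string("/proc/uptime")
        .ok()
        .and_then(|s| s.split_whitespace().next()?.parse::<f64>().ok());
    tracing::info!(?uptime_secs, git_version, log_directives, "Logging started");
}
```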
If we see this pattern in either process' logs, we know something is
suspicious:
- Log file ends without a clean shutdown message
- Next log file starts with a high system uptime
Updates should always result in a clean shutdown message, and a sudden
power loss (mains power outage, or laptop battery dying) would result in
the system uptime being low for the 2nd log file.
Added for clarity when debugging. It used to look like:
```
2024-06-30T00:16:05.718337Z DEBUG firezone_tunnel::dns: No records for github.com, returning NXDOMAIN
```
And now looks like:
```
2024-06-30T00:16:05.718337Z DEBUG firezone_tunnel::dns: No MX records for github.com, returning NXDOMAIN
```
This will simplify #5590 somewhat. The API URL and auth URL still take effect on the next sign-in, but we no longer have to explain that the settings take effect only after restarting the entire Client process; they take effect almost immediately.
For some reason I see some lag; maybe the tracing layers don't check for a new filter on every span, or maybe they delay to save CPU time.
This does the same thing as #5621 without removing the library, since it will now compile against whatever version of `windows` we need.
We could do the same with `hostname`: either vendor it or ask upstream to bump deps, and then `windows` 0.52.0 should be gone.
```[tasklist]
### Tasks
- [x] Remove macOS code and shrink everything
```
Bumps
[@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node)
from 20.14.2 to 20.14.9.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [flowbite](https://github.com/themesberg/flowbite) from 2.3.0 to
2.4.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/themesberg/flowbite/releases">flowbite's
releases</a>.</em></p>
<blockquote>
<h2>v2.4.1</h2>
<ul>
<li>fix datepicker module declaration naming for TypeScript</li>
</ul>
<h2>v2.4.0</h2>
<ul>
<li>the datepicker is now a core component of Flowbite and has API
methods, events, and options</li>
<li>updated the documentation for the datepicker component and related
integration guides</li>
<li>minor visual bug fixes and improvements</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="8c8d65e489"><code>8c8d65e</code></a>
fix(typescript): datepicker naming and version bump to v2.4.1</li>
<li><a
href="2a8c18eed9"><code>2a8c18e</code></a>
Merge branch 'datepicker-instance'</li>
<li><a
href="6b160cc82d"><code>6b160cc</code></a>
chore(version): bump to v2.4.0</li>
<li><a
href="e9b8ae3715"><code>e9b8ae3</code></a>
Merge pull request <a
href="https://redirect.github.com/themesberg/flowbite/issues/907">#907</a>
from themesberg/datepicker-instance</li>
<li><a
href="1d76b8ffc1"><code>1d76b8f</code></a>
docs(changelog): add changelog</li>
<li><a
href="213577a394"><code>213577a</code></a>
docs(datepicker): update Phoenix and Rails docs for new datepicker
update</li>
<li><a
href="6a16510f28"><code>6a16510</code></a>
docs(datepicker): fix TypeScript example from docs</li>
<li><a
href="1e0d112435"><code>1e0d112</code></a>
fix(typescript): fix fucking typescript config for cross npm
declarations</li>
<li><a
href="6d1fbf3285"><code>6d1fbf3</code></a>
docs(nuxt): update Nuxt docs for Flowbite via composables</li>
<li><a
href="36eeab7fb9"><code>36eeab7</code></a>
docs(datepicker): update import statements for parent plugin</li>
<li>Additional commits viewable in <a
href="https://github.com/themesberg/flowbite/compare/v2.3.0...v2.4.1">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Currently, the relay logs all failed requests at WARN. This is a bit excessive because during normal operation, clients are expected to hit several 401s due to stale or missing nonces.
In order not to flood the logs with these, we introduce a new type, `ResponseErrorLevel`, that represents the subset of `tracing::Level` that `make_error_response` can log at:
- `Warn`
- `Debug`
Both variants map to the `tracing::Level` variants of the same name, and the function logs accordingly.
Now the caller can pick the level an error should be logged at, reducing the noise in the logs when the error is part of normal operation.
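A sketch of the type (the exact logging call in `make_error_response` is illustrative):

```rust
/// The subset of `tracing::Level` that `make_error_response` can log at.
#[derive(Debug, Clone, Copy)]
enum ResponseErrorLevel {
    Warn,
    Debug,
}

fn log_error_response(level: ResponseErrorLevel, error: &str) {
    match level {
        ResponseErrorLevel::Warn => tracing::warn!(error, "Failed to handle request"),
        ResponseErrorLevel::Debug => tracing::debug!(error, "Failed to handle request"),
    }
}
```

Callers pass `Debug` for errors like the expected 401s and `Warn` for everything else.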
Fixes: #5490.
---------
Co-authored-by: conectado <gabrielalejandro7@gmail.com>
In a previous design of firezone, relays used to be scoped to a certain
connection. For a while now, this constraint has been lifted and all
connections can use all relays. A related, outdated concern is the idea
of STUN-only servers. Those also used to be assigned on a per-connection
basis.
By removing any use of per-connection relays and STUN-only servers, the
entire `StunBinding` concept is unused code and can thus be deleted.
To push this over the finish line, the `snownet-tests` which test the
hole-punching functionality needed to be slightly adapted to make use of
the more recently introduced API `Node::update_relays`.
Resolves: #4749.
Currently, `snownet` still supports the notion of "reconnecting", which is a mix of resetting some state while keeping the rest. In particular, we currently retain the `StunBinding` and `Allocation` state. This used to be important because allocations are bound to the client's 3-tuple and thus needed to be kept around in case we weren't actually roaming.
We always rebind the local UDP sockets upon reconnecting, so the 3-tuple always changes anyway. In addition, we always reconnect to the portal, meaning we receive another `init` message and can thus completely clear the `Node`'s state.
This PR does that and, in the process, rebrands `reconnect` as `reset`, which now makes more sense.
Related: #5619.
The [HTTP 1.1 RFC](https://datatracker.ietf.org/doc/html/rfc2616) states that HTTP headers should be US-ASCII. This is not the case when the macOS Client is run on a host that has a non-English language selected as its system default, due to the way we build the user agent.
This PR fixes that by normalizing how we build the user agent: we select the fields that compose it more granularly instead of relying on OS-provided version strings that may contain non-ASCII characters.
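The idea, as a minimal sketch with an illustrative format string:

```rust
/// Build the user agent from individually chosen, ASCII-safe fields instead
/// of an OS-provided version string that may contain localized text.
fn user_agent(os: &str, os_version: &str, app_version: &str) -> String {
    let ua = format!("Firezone/{app_version} ({os} {os_version})");
    debug_assert!(ua.is_ascii(), "HTTP header values must be US-ASCII");
    ua
}
```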
fixes https://github.com/firezone/firezone/issues/5467
---------
Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
Currently, we use the `tracing-oslog` crate to ingest logs on macOS and iOS. This crate has a "feature" where it creates so-called "Activities" for spans. Whilst that may initially sound useful, Apple's UI for viewing these activities is absolutely useless.
Instead of tinkering around with that, we remove the `tracing-oslog`
crate and let `tracing-subscriber` format our logs first and then only
send a single string to the oslog backend.
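A sketch of the approach, under assumptions: `os_log` below stands in for whatever shim actually talks to Apple's unified logging; the point is that `tracing-subscriber` does all the formatting and the backend only ever sees a finished string.

```rust
use std::io::{self, Write};

struct OsLogWriter;

impl Write for OsLogWriter {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        // One fully formatted log line per call; no spans, no "Activities".
        os_log(&String::from_utf8_lossy(buf));
        Ok(buf.len())
    }

    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}

fn os_log(_line: &str) {
    // Placeholder for the actual oslog backend call.
}

fn init_logging() {
    tracing_subscriber::fmt()
        .with_writer(|| OsLogWriter)
        .init();
}
```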
Related: #5619.
This eliminates `windows` 0.54.0, so it should speed up Windows builds a little. It's 6% faster on my MacBook according to `cargo build --timings`, in debug mode.
Oops. It runs the same either way, so we definitely don't need all that RAM tied up. The Linux and macOS Clients probably have similar buffer sizes already.
I tested before and after with Cloudflare's speed test and got roughly 140/12 with 50 ms latency both times. The error bars on speed tests are pretty wide, but we definitely aren't falling 60 MiB behind on processing and then catching up.
```[tasklist]
### Tasks
- [x] (failed, can't do it right now) ~~Log if we knowingly drop a lot of packets~~
- [x] Extract constant
- [x] Add comment about not knowing if we drop packets
- [x] Merge
- [ ] (skipped) Test while the CPU is loaded
```
Within each allocation, a client has 4095 channels that it can bind to different peers. Each channel binding is valid for 10 minutes unless rebound. Additionally, there is a 5-minute cool-down period after a channel binding expires before it can be rebound to a different peer.
This patch fixes a bug in `snownet` where we would first attempt to rebind the most recently bound channel instead of just picking the next unused one. In the case of clock drift between client and relay, this caused unnecessary errors when attempting to rebind channels.
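The fixed selection logic, roughly (the channel-number bounds are an assumption for illustration):

```rust
use std::collections::HashSet;

const FIRST_CHANNEL: u16 = 0x4000;
const LAST_CHANNEL: u16 = 0x4FFE; // 4095 values

/// Pick the next channel that is neither bound nor in its cool-down period,
/// instead of trying to rebind the most recently used channel.
fn next_unused_channel(in_use: &HashSet<u16>) -> Option<u16> {
    (FIRST_CHANNEL..=LAST_CHANNEL).find(|c| !in_use.contains(c))
}
```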
Fixes: #5603.
---------
Co-authored-by: conectado <gabrielalejandro7@gmail.com>
Currently, we are sending each ICE candidate individually from the
client to the gateway and vice versa. This causes a slight delay as to
when each ICE candidate gets added on the remote ICE agent. As a result,
they all start being tested with a slight offset which causes "endpoint
hopping" whenever a connection expires as they expire just after each
other.
In addition, sending multiple messages to the portal causes unnecessary
load when establishing connections.
Finally, with #5283 we started **not** adding the server-reflexive candidate to the local ICE agent. Because we talk to multiple relays, we detect the same server-reflexive candidate multiple times if we are behind a non-symmetric NAT. Not adding the server-reflexive candidate to the ICE agent defeated our de-duplication strategy here, which means we currently send the same candidate multiple times to a peer, causing additional, unnecessary load.
All of this can be mitigated by batching all of our ICE candidates together into one message.
Resolves: #3978.
Whilst these logs have been helpful for finding issues such as #5611, having them at `warn` spams the end user too much and creates a false sense that things might not be working, since there is a variety of reasons why packets might not be routable.
Our NAT table uses TCP & UDP ports for its entries. To correctly handle
ICMP requests and responses, we use the ICMP identifier in those
packets. All other ICMP messages are currently unsupported.
The error paths for accessing these fields (i.e. the ports for UDP/TCP and the identifier for ICMP) currently conflate two different errors:
- Unsupported IP payload: it is neither TCP, UDP nor ICMP
- Unsupported ICMP type: it is not an ICMP request or response
This makes certain logs look worse than they are because we say "Unsupported IP protocol: Icmpv6". To avoid this, we create a dedicated error variant that calls out the unsupported ICMP type.
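A sketch of the split (variant names are illustrative):

```rust
#[derive(Debug)]
enum TranslationError {
    /// The IP payload is neither TCP, UDP nor ICMP.
    UnsupportedIpPayload(u8),
    /// The payload *is* ICMP, but not a request or response, so it carries
    /// no identifier we could use for the NAT table.
    UnsupportedIcmpType(u8),
}
```

With the dedicated variant, an unsupported ICMPv6 type no longer logs as "Unsupported IP protocol: Icmpv6".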
Fixes: #5594.
Currently, we refresh DNS mappings when:
* We translate a packet for the first time
* There are no more incoming packets for 120 seconds
* There is at least 1 outgoing packet in the last 10 seconds
The idea was to coordinate with conntrack somehow, expiring the DNS translation at the point where the OS's NAT session stops being valid. That way, if the triggered DNS refresh changes the resolved IPs, it would never kill the underlying connection.
However, TCP sessions can by default last for up to 5 days! And I have no idea how long ICMP sessions last. To prevent killing these connections, we assume that TCP and ICMP packets will elicit a response within 1s.
The DNS refresh for a translation mapping that hasn't seen any responses
is thus delayed by 1s after the last packet has been sent out.
To get an idea of how this works, you can imagine it like this:
|last incoming packet|------ 120 seconds + x seconds ----|outgoing packet|---- 1 second ----|dns refresh|
However, there is another case where the DNS refresh is triggered. Here, the same packet both starts the refresh period and counts as having been used within the last 10 seconds:
|last incoming packet|------ 111 seconds ----|outgoing packet|---- 9 seconds ----|dns refresh|
The unit tests should also make clear when we want to trigger a DNS refresh and when we don't.
---------
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
Closes #5589. Refs #5571.
Improves upload speeds on my Windows 11 VM from 2 Mbps to 10.5 Mbps.
On the resource-constrained VM it improved from 3 to 7 Mbps.
```[tasklist]
### Tasks
- [x] Open for review
- [x] Manual test on resource-constrained VM
- [x] Run 5x replication steps from #5571 and make sure it doesn't deadlock again
- [x] Merge
- [ ] https://github.com/firezone/firezone/issues/5601
```
Sorted by decreasing speed. M = macOS host, W = Windows guest in Parallels, RC = resource-constrained Windows guest in VirtualBox:
- M, Internet - 16 Mbps
- W, Internet - 13 Mbps
- M, Firezone - 12 Mbps
- RC, Internet - 12 Mbps
- W, Firezone, after this PR - 10.5 Mbps
- RC, Firezone, after this PR - 8.5 Mbps
- RC, Firezone, before this PR - 4 Mbps
- W, Firezone, before this PR - 2 Mbps
So it's not perfect but the worst part is fixed.
The slow upload speeds were probably a regression from #5571. The MPSC channel only has a few slots, so if connlib doesn't pick up every packet immediately (which would be impossible under load), we drop packets. I measured 25% packet drops in an earlier commit.
I first tried increasing the channel size from 5 to 64, and that worked. But this solution is simpler: I switch back to `blocking_send`, so if connlib isn't clearing the MPSC channel, Wintun will just queue up packets in its internal ring buffers, and we aren't responsible for buffering.
Getting rid of `blocking_send` was a defense-in-depth thing to fix the
deadlock yesterday, but we still close the MPSC channel inside
`Tun::drop`, and I confirmed in a manual test that this will kick the
worker thread out of `blocking_send`, so the deadlock won't come back.
We define a connection as idle if we haven't sent or received any
packets in the last 5 minutes. From `snownet`'s perspective, keep-alives
sent by upper layers (like TCP keep-alives) must be honored and thus
outgoing as well as incoming packets are accounted for.
If the underlying connection breaks, we will hit an ICE timeout, which is an implementation detail of `snownet`. The packets tracked here are IP packets that the user wants to send / receive via the tunnel. Similarly, WireGuard's keep-alives do not update these timestamps and thus don't mark a connection as non-idle.
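As a minimal sketch of the bookkeeping (names illustrative):

```rust
use std::time::{Duration, Instant};

const IDLE_AFTER: Duration = Duration::from_secs(5 * 60);

struct ConnectionActivity {
    last_ip_packet: Instant,
}

impl ConnectionActivity {
    /// Called for IP packets the user sends or receives via the tunnel; ICE
    /// probes and WireGuard keep-alives never touch this timestamp.
    fn on_ip_packet(&mut self, now: Instant) {
        self.last_ip_packet = now;
    }

    fn is_idle(&self, now: Instant) -> bool {
        now.duration_since(self.last_ip_packet) >= IDLE_AFTER
    }
}
```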
---------
Co-authored-by: Jamil Bou Kheir <jamilbk@users.noreply.github.com>
Currently, upon reconnecting, `snownet` returns a list of connection IDs that have been closed. This was done to avoid emitting many identical `ResourcesChanged` events. In all other events, `snownet` only ever references a single connection. To align this whilst not duplicating `ResourcesChanged` events, we use a dedicated `bool` to track whether any of the events emitted by `snownet` require updating the clients about our active resources.
Currently, enabling the `wire` log target is an all-or-nothing approach, logging incoming and outgoing messages from the TUN device, the network and the portal.
Often, only one or more of these is desired, but enabling all of `wire` spams the logs to the point where one cannot see the information they'd like. With this PR, we move some of the fields of the `wire` log statements into the log target instead. This allows controlling the logs via the `RUST_LOG` env variable.
For example, to only see messages sent and received to the API, one can
set `RUST_LOG=wire::api=trace` which will output something like:
```
2024-06-27T02:12:41.821374Z TRACE wire::api::send: {"topic":"client","event":"phx_join","payload":null,"ref":0}
2024-06-27T02:12:42.030573Z TRACE wire::api::recv: {"event":"phx_reply","ref":0,"topic":"client","payload":{"status":"ok","response":{}}}
```
Similarly, enabling `wire::net=trace` will give you logs for packets
sent over the network:
```
2024-06-27T02:12:50.487503Z TRACE wire::net::send: src=None dst=34.80.2.250:3478 num_bytes=20
2024-06-27T02:12:50.487589Z TRACE wire::net::send: src=None dst=[2600:1900:4030:b0d9:0:5::]:3478 num_bytes=20
2024-06-27T02:12:50.487622Z TRACE wire::net::send: src=None dst=34.87.210.10:3478 num_bytes=20
2024-06-27T02:12:50.487652Z TRACE wire::net::send: src=None dst=[2600:1900:40b0:1504:0:17::]:3478 num_bytes=20
2024-06-27T02:12:50.510049Z TRACE wire::net::recv: src=34.87.210.10:3478 dst=192.168.188.71:39207 num_bytes=32
2024-06-27T02:12:50.510382Z TRACE wire::net::send: src=None dst=34.87.210.10:3478 num_bytes=112
2024-06-27T02:12:50.526947Z TRACE wire::net::recv: src=34.87.210.10:3478 dst=192.168.188.71:39207 num_bytes=92
2024-06-27T02:12:50.527295Z TRACE wire::net::send: src=None dst=34.87.210.10:3478 num_bytes=152
```
These targets have been designed to take up equal amounts of space: all three types (`dev`, `net`, `api`) have 3 letters, and `send` and `recv` have 4. That way, these logs are always aligned, which makes them easier to scan.
In order to handle DNS resources, connlib intercepts all DNS requests on the system once it has started up. The DNS queries are then forwarded to the original DNS resolver in case the query isn't for one of the configured DNS resources, _except_ if the configured DNS resolver is also a CIDR resource. In that case, the DNS query will be tunneled to a gateway and forwarded to the DNS resolver from there.
Exactly this configuration results in a deadlock when roaming networks. To make roaming more reliable, we now drop all connections when detecting a network change (see #5308). As a result, DNS queries cannot be tunneled right away. This isn't usually a problem: we just send a connection intent to the portal to connect to the gateway. However, upon a network change, we also reconnect the websocket to the portal, which also requires resolving its domain name. Connlib's DNS resolver is still active at that point, and thus we deadlock ourselves: the DNS query to resolve the portal's domain is waiting for a connection to a gateway that can only be established once we are connected to the portal.
To prevent this, we extend connlib with a "known hosts" feature. These
are DNS records that are defined statically for the lifetime of a
connlib session and can thus always be resolved, regardless of the
connection state with the portal or the gateways. We populate these
records with the portal's API, allowing the reconnect to work without
having connected gateways.
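The idea, as a minimal sketch with illustrative types: statically configured records are consulted before any other resolution logic, so they resolve regardless of portal or gateway connectivity.

```rust
use std::collections::HashMap;
use std::net::IpAddr;

struct KnownHosts(HashMap<String, Vec<IpAddr>>);

impl KnownHosts {
    /// Answered before any forwarding or tunneling decision is made; the
    /// entries are fixed for the lifetime of the connlib session.
    fn resolve(&self, name: &str) -> Option<&[IpAddr]> {
        self.0.get(name).map(Vec::as_slice)
    }
}
```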
---------
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
There are several reasons why we would legitimately receive a packet that we can't handle, e.g. when a connection got cleared locally but the gateway is still trying to send us packets on that socket. Not handling these packets can be a bug, but more often than not it is not an issue.
Additionally, all our unit tests actually `.unwrap` the result of `Node::encapsulate`, so any unhandled packets in the tests will be caught.
Refs #5441, but without a reliable way to replicate that issue, I'm not
sure if this will completely fix it.
Before this PR, a deadlock can happen between 2 threads, call them "main
thread" and "worker thread".
The deadlock is more likely if more traffic is flowing through the
tunnel.
# Test results
I ran a build from this PR inside the resource-constrained VM and it's
likely the deadlock could have triggered there, since the packet channel
had 0 capacity (it was full) when we reached `Tun::drop`:
```jsonl
{"time":"2024-06-26T22:43:33.2398441Z","target":"firezone_headless_client::ipc_service","logging.googleapis.com/sourceLocation":{"file":"headless-client\\src\\ipc_service.rs","line":"304"},"severity":"INFO","gitVersion":"e591bb9","logFilter":"\"str0m=warn,info\""}
..
{"time":"2024-06-26T22:45:42.9035226Z","target":"firezone_tunnel::device_channel::tun_windows","logging.googleapis.com/sourceLocation":{"file":"connlib\\tunnel\\src\\device_channel\\tun_windows.rs","line":"45"},"severity":"INFO","channelCapacity":0,"message":"Shutting down packet channel..."}
{"time":"2024-06-26T22:45:42.9035467Z","target":"firezone_tunnel::device_channel::tun_windows","logging.googleapis.com/sourceLocation":{"file":"connlib\\tunnel\\src\\device_channel\\tun_windows.rs","line":"274"},"severity":"INFO","message":"recv_task exiting gracefully"}
{"time":"2024-06-26T22:45:43.4978015Z","target":"connlib_client_shared","logging.googleapis.com/sourceLocation":{"file":"connlib\\clients\\shared\\src\\lib.rs","line":"150"},"severity":"INFO","message":"connlib exited gracefully"}
```
I followed these steps:
- Run Firezone and sign in
- Start a speed test using Cloudflare
- During the download phase, quit the GUI
I did the same test with 0fac698 (`main`) and got the "All pipe
instances are busy" error dialog 3 out of 5 times.
# Details
The deadlock will happen in this scenario:
- The main thread enters `Tun::drop` here
0fac698dfc/rust/connlib/tunnel/src/device_channel/tun_windows.rs (L44)
- The worker thread is waiting for space in the packet channel
(`packet_tx` and `packet_rx`) here
0fac698dfc/rust/connlib/tunnel/src/device_channel/tun_windows.rs (L249)
- The main thread tells wintun to shut down. If the worker was on line
247 waiting on wintun, this would unblock it, but the worker is not on
line 247.
0fac698dfc/rust/connlib/tunnel/src/device_channel/tun_windows.rs (L45)
- The main thread waits to join the worker thread
0fac698dfc/rust/connlib/tunnel/src/device_channel/tun_windows.rs (L52)
The threads are now deadlocked. The main thread is waiting for the
worker thread to exit, and the worker thread is waiting for the main
thread to either call `poll_recv`, which would cause `blocking_send` to
return, or for the main thread to complete `Tun::drop`, which would
cause Rust to drop `packet_rx`, which would cause `blocking_send` to
return an error.
This PR makes 2 changes to prevent this deadlock. Each change alone
should work, but for defense-in-depth we make both changes:
1. When the main thread starts `Tun::drop`, we `close` the packet
channel, which would unblock any thread waiting on
`Sender::blocking_send`
2. We use `Sender::try_send` instead of `Sender::blocking_send`. If the
main thread can't consume packets fast enough, we're going to drop them
anyway, because the ring buffer in wintun will eventually fill up. So
dropping them here isn't much different from dropping them anywhere
else, and this keeps the worker thread from locking up.
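A sketch of change 2, assuming a `tokio` MPSC channel of packet buffers (types illustrative):

```rust
use tokio::sync::mpsc;

/// The worker thread never blocks on the channel: a full channel means we
/// drop the packet (we'd drop it somewhere anyway under load), and a closed
/// channel means `Tun::drop` ran and the worker should exit.
fn forward_packet(tx: &mpsc::Sender<Vec<u8>>, packet: Vec<u8>) -> bool {
    match tx.try_send(packet) {
        Ok(()) => true,
        Err(mpsc::error::TrySendError::Full(_)) => true, // dropped; keep running
        Err(mpsc::error::TrySendError::Closed(_)) => false, // shut down
    }
}
```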
`connlibSessionPtr` is a `Long`, which is 64 bits. On 32-bit Android architectures, this overwrites part of the `dns_list` argument of the `setDns` native function call, because Rust uses a 32-bit pointer for `SessionWrapper` in the function definition.
This causes a JNI crash, detailed below. To fix it, we make sure a `jlong` is received on the Rust side and do the pointer conversion in the body of the functions that need it.
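A sketch of the fixed signature (the function name mirrors the crash log below; the body is illustrative):

```rust
use jni::objects::{JClass, JString};
use jni::sys::jlong;
use jni::JNIEnv;

struct SessionWrapper;

/// The session pointer crosses JNI as a `jlong` (always 64 bits) and is only
/// narrowed to a pointer inside the body, so the argument layout on 32-bit
/// targets can no longer clobber `dns_list`.
#[no_mangle]
pub extern "system" fn Java_dev_firezone_android_tunnel_ConnlibSession_setDns(
    _env: JNIEnv,
    _class: JClass,
    session: jlong,
    _dns_list: JString,
) {
    let _session = session as usize as *const SessionWrapper;
    // ... use the pointer from here on ...
}
```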
Adding @ReactorScram to review for visibility.
```
runtime.cc:655] Runtime aborting...
runtime.cc:655] Dumping all threads without mutator lock held
runtime.cc:655] All threads:
runtime.cc:655] DALVIK THREADS (35):
runtime.cc:655] "ConnectivityThread" prio=5 tid=35 Runnable
runtime.cc:655] | group="" sCount=0 dsCount=0 flags=0 obj=0x131809a8 self=0xa42dea10
runtime.cc:655] | sysTid=8854 nice=0 cgrp=default sched=0/0 handle=0x7fbb71c0
runtime.cc:655] | state=R schedstat=( 0 0 0 ) utm=8 stm=0 core=2 HZ=100
runtime.cc:655] | stack=0x7fab4000-0x7fab6000 stackSize=1040KB
runtime.cc:655] | held mutexes= "abort lock" "mutator lock"(shared held)
runtime.cc:655] native: #00 pc 0037b1dd /apex/com.android.art/lib/libart.so (art::DumpNativeStack(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, int, BacktraceMap*, char const*, art::ArtMethod*, void*, bool)+76)
runtime.cc:655] native: #01 pc 0044cd01 /apex/com.android.art/lib/libart.so (art::Thread::DumpStack(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, bool, BacktraceMap*, bool) const+388)
runtime.cc:655] native: #02 pc 00448447 /apex/com.android.art/lib/libart.so (art::Thread::Dump(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, bool, BacktraceMap*, bool) const+34)
runtime.cc:655] native: #03 pc 00465995 /apex/com.android.art/lib/libart.so (art::DumpCheckpoint::Run(art::Thread*)+688)
runtime.cc:655] native: #04 pc 00460e57 /apex/com.android.art/lib/libart.so (art::ThreadList::RunCheckpoint(art::Closure*, art::Closure*)+354)
runtime.cc:655] native: #05 pc 0046034f /apex/com.android.art/lib/libart.so (art::ThreadList::Dump(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, bool)+1514)
runtime.cc:655] native: #06 pc 0040a3af /apex/com.android.art/lib/libart.so (art::Runtime::Abort(char const*)+1510)
runtime.cc:655] native: #07 pc 0000d989 /system/lib/libbase.so (android::base::SetAborter(std::__1::function<void (char const*)>&&)::$_3::__invoke(char const*)+48)
runtime.cc:655] native: #08 pc 0000d295 /system/lib/libbase.so (android::base::LogMessage::~LogMessage()+224)
runtime.cc:655] native: #09 pc 002965db /apex/com.android.art/lib/libart.so (art::JavaVMExt::JniAbort(char const*, char const*)+1962)
runtime.cc:655] native: #10 pc 002966a5 /apex/com.android.art/lib/libart.so (art::JavaVMExt::JniAbortF(char const*, char const*, ...)+64)
runtime.cc:655] native: #11 pc 004521c1 /apex/com.android.art/lib/libart.so (art::Thread::DecodeJObject(_jobject*) const+544)
runtime.cc:655] native: #12 pc 0028a6e7 /apex/com.android.art/lib/libart.so (art::(anonymous namespace)::ScopedCheck::CheckInstance(art::ScopedObjectAccess&, art::(anonymous namespace)::ScopedCheck::InstanceKind, _jobject*, bool)+82)
runtime.cc:655] native: #13 pc 00289779 /apex/com.android.art/lib/libart.so (art::(anonymous namespace)::ScopedCheck::CheckPossibleHeapValue(art::ScopedObjectAccess&, char, art::(anonymous namespace)::JniValueType)+552)
runtime.cc:655] native: #14 pc 00288f55 /apex/com.android.art/lib/libart.so (art::(anonymous namespace)::ScopedCheck::Check(art::ScopedObjectAccess&, bool, char const*, art::(anonymous namespace)::JniValueType*)+592)
runtime.cc:655] native: #15 pc 0027cbe7 /apex/com.android.art/lib/libart.so (art::(anonymous namespace)::CheckJNI::GetObjectClass(_JNIEnv*, _jobject*)+586)
runtime.cc:655] native: #16 pc 003412db /data/app/~~X6p_4xQWTraApNXlo4SIHA==/dev.firezone.android-zJrN9FN3yhs12tvUNeoOmw==/base.apk!libconnlib.so (offset ec000) (???)
runtime.cc:655] at dev.firezone.android.tunnel.ConnlibSession.setDns(Native method)
runtime.cc:655] at NetworkMonitor.onLinkPropertiesChanged(NetworkMonitor.kt:28)
runtime.cc:655] at android.net.ConnectivityManager$NetworkCallback.onAvailable(ConnectivityManager.java:3328)
runtime.cc:655] at android.net.ConnectivityManager$CallbackHandler.handleMessage(ConnectivityManager.java:3607)
runtime.cc:655] at android.os.Handler.dispatchMessage(Handler.java:106)
runtime.cc:655] at android.os.Looper.loop(Looper.java:223)
runtime.cc:655] at android.os.HandlerThread.run(HandlerThread.java:67)
```
---------
Co-authored-by: conectado <gabrielalejandro7@gmail.com>