Commit Graph

4885 Commits

Author SHA1 Message Date
Thomas Eizinger
04476880e7 ci: only set up runtime tauri deps for smoke tests (#5632)
Setting up Tauri's runtime dependencies takes about a minute and is
unnecessary for the Rust unit tests. The Rust Windows unit test jobs
are amongst the slowest and thus impact the overall CI runtime.

See
https://github.com/firezone/firezone/actions/runs/9719218798/job/26828616349
for a recent run on `main`.

---------

Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
2024-06-29 05:35:11 +00:00
Thomas Eizinger
b5fd980fb2 fix(relay): don't log all request failures on the same level (#5622)
Currently, the relay logs all failed requests on WARN. This is a bit
excessive because during normal operation, clients are expected to hit
several 401s due to stale or missing nonces.

In order to not flood the logs with these, we introduce a new type,
`ResponseErrorLevel` that represents the subset of `tracing::Level` that
`make_error_response` can log:

- `Warn`
- `Debug`

Both variants map to the `tracing::Level` variant of the same name, and
the function logs accordingly.

The caller can now pick the appropriate level for each error, reducing
log noise for failures that are part of normal operation.
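
A minimal sketch of the idea (the `ResponseErrorLevel` name comes from the PR description; the function body is illustrative and uses `println!` where the real code would call the `tracing` macros):

```rust
#[derive(Debug, Clone, Copy)]
enum ResponseErrorLevel {
    Warn,
    Debug,
}

// Illustrative stand-in for the relay's error-response builder: the caller
// decides how loudly the failure should be logged.
fn make_error_response(code: u16, level: ResponseErrorLevel) -> String {
    let msg = format!("request failed with status {code}");
    match level {
        // The real code would use `tracing::warn!` / `tracing::debug!` here.
        ResponseErrorLevel::Warn => println!("WARN: {msg}"),
        ResponseErrorLevel::Debug => println!("DEBUG: {msg}"),
    }
    msg
}

fn main() {
    // Expected 401s (stale or missing nonce) stay out of the WARN logs.
    let msg = make_error_response(401, ResponseErrorLevel::Debug);
    assert_eq!(msg, "request failed with status 401");
}
```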

Fixes: #5490.

---------

Co-authored-by: conectado <gabrielalejandro7@gmail.com>
2024-06-29 02:38:55 +00:00
Thomas Eizinger
96536a23cf refactor(connlib): ignore relays per connection (#5631)
In a previous design of firezone, relays used to be scoped to a certain
connection. For a while now, this constraint has been lifted and all
connections can use all relays. A related, outdated concern is the idea
of STUN-only servers. Those also used to be assigned on a per-connection
basis.

By removing any use of per-connection relays and STUN-only servers, the
entire `StunBinding` concept is unused code and can thus be deleted.

To push this over the finish line, the `snownet-tests` which test the
hole-punching functionality needed to be slightly adapted to make use of
the more recently introduced API `Node::update_relays`.

Resolves: #4749.
2024-06-29 02:36:17 +00:00
Thomas Eizinger
f2b6c205c2 refactor(snownet): change reconnect to reset (#5630)
Currently, `snownet` still supports this notion of "reconnecting" which
is a mix between resetting some state but keeping other. In particular,
we currently retain the `StunBinding` and `Allocation` state. This used
to be important because allocations are bound to the 3-tuple of the
client and thus needed to be kept around in case we weren't actually
roaming.

We always rebind the local UDP sockets upon reconnecting and thus
the 3-tuple always changes anyway. In addition, we always reconnect to
the portal, meaning we receive another `init` message and thus can
actually completely clear the `Node`'s state.

This PR does that and, in the process, rebrands `reconnect` as `reset`,
which now makes more sense.

Related: #5619.
2024-06-29 02:07:10 +00:00
Thomas Eizinger
38275ecad0 refactor(gateway): extract fn for update-device task (#5581)
Follow-up feedback from #5512.
2024-06-29 01:27:23 +00:00
Thomas Eizinger
839292b1e3 ci: use sccache for building Tauri clients (#5617)
Using sccache results in more efficient cache usage. GitHub's built-in
cache appears to grow over time and takes ~3 minutes to download for the
Windows Tauri builds, where it is ~2 GB large.

Whilst researching bad performance on Windows runners in general, I came
across the hint to disable Windows defender which appears to slow things
down massively in the case of sccache which performs many small network
downloads and file writes.

This PR harmonizes our cache usage and prefers sccache over GitHub's
cache for everything apart from `cross` builds. The runtimes are either
roughly the same or noticeably better. Overall, the GUI smoke tests are
usually among the last ones to finish, meaning these changes should have
an overall net-positive impact on CI time.


|[`main`](https://github.com/firezone/firezone/actions/runs/9707704927)|[`head`](https://github.com/firezone/firezone/actions/runs/9709368060)|
|---|---|
|![Screenshot from 2024-06-28 17-55-14](https://github.com/firezone/firezone/assets/5486389/63433f24-d6de-4651-8bd8-ed1eb4b5b445)|![Screenshot from 2024-06-28 17-59-33](https://github.com/firezone/firezone/assets/5486389/b82dd643-dd48-4c7f-9322-6bd45ab0fa70)|
|![Screenshot from 2024-06-28 17-55-17](https://github.com/firezone/firezone/assets/5486389/bc06fdb7-744a-4232-8e4f-c9bd7fd3c278)|![Screenshot from 2024-06-28 17-59-39](https://github.com/firezone/firezone/assets/5486389/0b0b5207-7d77-4ed4-94d9-1306878e552a)|
|![Screenshot from 2024-06-28 17-55-21](https://github.com/firezone/firezone/assets/5486389/a2187475-8678-4c6b-afef-a96575943c98)|![Screenshot from 2024-06-28 17-59-44](https://github.com/firezone/firezone/assets/5486389/90e9d335-536e-472a-846c-7ae0edf336fc)|
|![Screenshot from 2024-06-28 17-55-28](https://github.com/firezone/firezone/assets/5486389/a239f4f9-8c3b-4742-8b20-22e903082310)|![Screenshot from 2024-06-28 17-59-50](https://github.com/firezone/firezone/assets/5486389/be718857-e217-464a-b4e2-515e5ad4c48c)|
|![Screenshot from 2024-06-28 17-55-33](https://github.com/firezone/firezone/assets/5486389/25b2ff75-c5d2-46f0-ab7e-702f2202e3c7)|![Screenshot from 2024-06-28 17-59-55](https://github.com/firezone/firezone/assets/5486389/7e1ca3a8-dabc-4501-99bc-ff7993886e8f)|
|![Screenshot from 2024-06-28 17-55-37](https://github.com/firezone/firezone/assets/5486389/121a943d-db08-484a-8450-a0b8ca35cd10)|![Screenshot from 2024-06-28 18-01-51](https://github.com/firezone/firezone/assets/5486389/d1cc137f-0898-4fdb-9798-e473195346a8)|
2024-06-28 22:28:21 +00:00
Thomas Eizinger
8973cc5785 refactor(android): use fmt::Layer with custom writer (#5558)
Currently, the logs that go to logcat on Android are pretty badly
formatted because we use `tracing-android` and it formats the span
fields and message fields itself. There is actually no reason for doing
the formatting ourselves. Instead, we can use the `MakeWriter`
abstraction from `tracing_subscriber` to plug in a custom writer that
writes to Android's logcat.

This results in logs like this:

```
[nix-shell:~/src/github.com/firezone/firezone/rust]$ adb logcat -s connlib
--------- beginning of main
06-28 19:41:20.057 19955 20213 D connlib : phoenix_channel: Connecting to portal host=api.firez.one user_agent=Android/14 5.15.137-android14-11-gbf4f9bc41c3b-ab11664771 connlib/1.1.1
06-28 19:41:20.058 19955 20213 I connlib : firezone_tunnel::client: Network change detected
06-28 19:41:20.061 19955 20213 D connlib : snownet::node: Closed all connections as part of reconnecting num_connections=0
06-28 19:41:20.365 19955 20213 I connlib : phoenix_channel: Connected to portal host=api.firez.one
06-28 19:41:20.601 19955 20213 I connlib : firezone_tunnel::io: Setting new DNS resolvers
06-28 19:41:21.031 19955 20213 D connlib : firezone_tunnel::client: TUN device initialized ip4=100.66.86.233 ip6=fd00:2021:1111::f:d9c1 name=tun1
06-28 19:41:21.031 19955 20213 I connlib : connlib_client_shared::eventloop: Firezone Started!
06-28 19:41:21.031 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=*.slackb.com
06-28 19:41:21.031 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=*.test-ipv6.com
06-28 19:41:21.032 19955 20213 I connlib : firezone_tunnel::client: Activating CIDR resource address=5.4.6.7/32 name=5.4.6.7
06-28 19:41:21.032 19955 20213 I connlib : firezone_tunnel::client: Activating CIDR resource address=10.0.32.101/32 name=IPerf3
06-28 19:41:21.032 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=ifconfig.net
06-28 19:41:21.032 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=*.slack-imgs.com
06-28 19:41:21.032 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=*.google.com
06-28 19:41:21.032 19955 20213 I connlib : firezone_tunnel::client: Activating CIDR resource address=10.0.0.5/32 name=10.0.0.5
06-28 19:41:21.032 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=*.githubassets.com
06-28 19:41:21.032 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=dnsleaktest.com
06-28 19:41:21.033 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=*.slack-edge.com
06-28 19:41:21.033 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=*.github.com
06-28 19:41:21.033 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=speed.cloudflare.com
06-28 19:41:21.033 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=*.githubusercontent.com
06-28 19:41:21.033 19955 20213 I connlib : firezone_tunnel::client: Activating CIDR resource address=10.0.14.11/32 name=Staging resource performance
06-28 19:41:21.033 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=*.whatismyip.com
06-28 19:41:21.033 19955 20213 I connlib : firezone_tunnel::client: Activating CIDR resource address=10.0.0.8/32 name=10.0.0.8
06-28 19:41:21.033 19955 20213 I connlib : firezone_tunnel::client: Activating CIDR resource address=9.9.9.9/32 name=Quad9 DNS
06-28 19:41:21.034 19955 20213 I connlib : firezone_tunnel::client: Activating CIDR resource address=10.0.32.10/32 name=CoreDNS
06-28 19:41:21.216 19955 20213 I connlib : snownet::node: Added new TURN server id=bd6e9d1a-4696-4f8b-8337-aab5d5cea810 address=Dual { v4: 35.197.171.113:3478, v6: [2600:1900:40b0:1504:0:27::]:3478 }
```
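
A std-only sketch of the approach (the `LogcatWriter` name and the logcat sink are assumptions; the real client implements `tracing_subscriber`'s `MakeWriter` trait and calls Android's NDK logging API over FFI):

```rust
use std::io::{self, Write};

// Stand-in for Android's `__android_log_write`; in the real client this
// would cross the FFI boundary into the NDK logging API.
fn write_to_logcat(line: &str) {
    println!("connlib : {line}");
}

// `fmt::Layer` hands this writer an already-formatted event, so no
// duplicate formatting happens on our side (the problem with
// `tracing-android`).
struct LogcatWriter {
    buf: Vec<u8>,
}

impl Write for LogcatWriter {
    fn write(&mut self, data: &[u8]) -> io::Result<usize> {
        self.buf.extend_from_slice(data);
        Ok(data.len())
    }

    fn flush(&mut self) -> io::Result<()> {
        // Forward the completed line to logcat.
        write_to_logcat(String::from_utf8_lossy(&self.buf).trim_end());
        self.buf.clear();
        Ok(())
    }
}

fn main() {
    let mut writer = LogcatWriter { buf: Vec::new() };
    writeln!(writer, "firezone_tunnel::client: Network change detected").unwrap();
    writer.flush().unwrap();
}
```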

---------

Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
2024-06-28 22:15:10 +00:00
Jamil
8655b711db fix(connlib): Don't use operatingSystemVersionString on Apple OSes (#5628)
The [HTTP 1.1 RFC](https://datatracker.ietf.org/doc/html/rfc2616) states
that HTTP headers should be US-ASCII. This is not the case when the
macOS Client is run from a host that has a non-English language selected
as its system default due to the way we build the user agent.

This PR fixes that by normalizing how we build the user agent by more
granularly selecting which fields compose it, and not just relying on
OS-provided version strings that may contain non-ASCII characters.

fixes https://github.com/firezone/firezone/issues/5467

---------

Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
2024-06-28 21:59:02 +00:00
Thomas Eizinger
e5cba1caf4 refactor(apple): use fmt::Layer with custom writer (#5623)
Currently, we use the `tracing-oslog` crate to ingest logs on macOS and
iOS. This crate has a "feature" where it creates so-called "Activities"
for spans. Whilst that may initially sound useful, Apple's UI for
viewing these activities is absolutely useless.

Instead of tinkering around with that, we remove the `tracing-oslog`
crate and let `tracing-subscriber` format our logs first and then only
send a single string to the oslog backend.

Related: #5619.
2024-06-28 21:22:54 +00:00
Thomas Eizinger
9fe070c72f chore(aws-infra): log all portal messages on the gateway (#5615)
Now that we have split the `wire` target into sub-targets, we can
selectively enable the wire messages to and from the API which is very
helpful in debugging what messages are being sent.
2024-06-28 21:21:20 +00:00
Reactor Scram
37d3ebbb7c chore(gui-client/windows): bump tauri-winrt-notification (#5627)
This eliminates `windows` 0.54.0 so it should speed up Windows builds a
little. It's 6% faster on my MacBook according to `cargo build
--timing`, in debug mode.
2024-06-28 21:19:51 +00:00
Thomas Eizinger
d3a091f90b ci: pre-install required tools for smoke tests (#5620)
Currently, the smoke tests rebuild the `dump_syms` and
`minidump-stackwalk` tools from scratch every time which is slow,
especially on Windows.

We can speed this up by utilising the `taiki-e/install-action` GitHub
action which discovers and downloads the latest binary releases of those
projects and installs them into $PATH.

I think those binaries might also be cached as part of the Rust cache
action (https://github.com/Swatinem/rust-cache) so the visible speed-up
is only within a few seconds and comes from the binaries not being
re-built inside the script.

Caching those binaries on GitHub still requires us to build them at
least once and also rebuild them in case the cache gets invalidated.
Hence I still think this is a good idea on its own.
2024-06-28 21:19:43 +00:00
Reactor Scram
a315c49b3c chore(firezone-tunnel/windows): reduce ring buffer from 64 MiB to 1 MiB (#5609)
Oops. It runs the same either way so we definitely don't need all that
RAM to be tied up. The Linux and macOS Clients probably have similar
buffer sizes already.

I tested before and after with CloudFlare's speed test and got roughly
140/12 with latency 50 ms both times. The error bars on speed tests are
pretty wide, but we definitely aren't falling 60 MiB behind on
processing and then catching up.

```[tasklist]
### Tasks
- [x] (failed, can't do it right now) ~~Log if we knowingly drop a lot of packets~~
- [x] Extract constant
- [x] Add comment about not knowing if we drop packets
- [x] Merge
- [ ] (skipped) Test while the CPU is loaded
```
2024-06-28 21:03:18 +00:00
Reactor Scram
649db863ca chore(gui-client): explain why the update check has redirects disabled (#5608)
Closes #5383
2024-06-28 14:28:09 +00:00
Thomas Eizinger
ed34ca096b chore(gateway): remove dead IP detection (#5618)
This does not work as well as intended and spams the logs. We may need
#5542 before we can implement this properly.

Fixes: #5593.
2024-06-28 04:47:00 +00:00
Jamil
fc8d89ea73 docs: Add AWS NAT Gateway example (#5543)
- Adds the AWS equivalent of our GCP scalable NAT Gateway.
- Adds a new kb section `/kb/automate` that will contain various
automation / IaC recipes going forward. It's better to have these
guides in the main docs with all the other info.

~~Will update the GCP example in another PR.~~

Portal helper docs in the gateway deploy page will come in another PR
after this is merged.
2024-06-27 21:05:38 -07:00
Jamil
d529ace29c chore: Bump Windows to 1.1.1, update changelog with dl links (#5610)
Fixes #5597
2024-06-27 20:53:00 -07:00
Thomas Eizinger
66cb565915 fix(snownet): use unused channels before reused expired ones (#5613)
Within each allocation, a client has 4095 channels that it can bind to
different peers. Each channel binding is valid for 10 minutes unless
rebound. Additionally, there is a 5min cool-down period after a channel
binding expires before it can be rebound to a different peer.

This patch fixes a bug in snownet where we would have first attempted to
rebind the last bound channel instead of just picking the next unused
one. In the case of a clock drift between client and relay, this caused
unnecessary errors when attempting to rebind channels.
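A sketch of the fixed selection logic (field and type names are illustrative, not snownet's; the 0x4000 channel-number floor is an assumption based on the TURN spec):

```rust
use std::collections::HashMap;
use std::net::SocketAddr;

// Channel numbers in TURN (RFC 8656) start at 0x4000; the exact range
// used per allocation is an assumption here.
const FIRST_CHANNEL: u16 = 0x4000;
const LAST_CHANNEL: u16 = 0x4FFF;

struct Allocation {
    next_fresh: u16,
    bound: HashMap<u16, SocketAddr>,
}

impl Allocation {
    fn new() -> Self {
        Allocation { next_fresh: FIRST_CHANNEL, bound: HashMap::new() }
    }

    /// Prefer a never-used channel: rebinding a recently expired one to a
    /// different peer requires waiting out the 5-minute cool-down, which
    /// clock drift between client and relay can turn into spurious errors.
    fn bind_channel(&mut self, peer: SocketAddr) -> Option<u16> {
        if self.next_fresh <= LAST_CHANNEL {
            let channel = self.next_fresh;
            self.next_fresh += 1;
            self.bound.insert(channel, peer);
            Some(channel)
        } else {
            None // all channels used once; only now fall back to rebinding
        }
    }
}

fn main() {
    let mut alloc = Allocation::new();
    let peer1: SocketAddr = "10.0.0.1:51820".parse().unwrap();
    let peer2: SocketAddr = "10.0.0.2:51820".parse().unwrap();
    assert_eq!(alloc.bind_channel(peer1), Some(0x4000));
    assert_eq!(alloc.bind_channel(peer2), Some(0x4001));
}
```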

Fixes: #5603.

---------

Co-authored-by: conectado <gabrielalejandro7@gmail.com>
2024-06-28 03:14:16 +00:00
Thomas Eizinger
aadb045b27 chore(connlib): batch together sending of ICE candidates (#5616)
Currently, we are sending each ICE candidate individually from the
client to the gateway and vice versa. This causes a slight delay as to
when each ICE candidate gets added on the remote ICE agent. As a result,
they all start being tested with a slight offset which causes "endpoint
hopping" whenever a connection expires as they expire just after each
other.

In addition, sending multiple messages to the portal causes unnecessary
load when establishing connections.

Finally, with #5283 we started **not** adding the server-reflexive
candidate to the local ICE agent. Because we talk to multiple relays, we
detect the same server-reflexive candidate multiple times if we are
behind a non-symmetric NAT. Not adding the server-reflexive candidate to
the ICE agent defeated our de-duplication strategy here, which means we
currently send the same candidate multiple times to a peer, causing
additional, unnecessary load.

All of this can be mitigated by batching all our ICE candidates
together into one message.
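
The batching could look roughly like this (type and method names are illustrative, not the actual connlib API):

```rust
use std::collections::{BTreeMap, BTreeSet};

// Buffer candidates as the ICE agent emits them, then flush them to the
// portal as a single message per connection.
#[derive(Default)]
struct CandidateBatcher {
    pending: BTreeMap<String, BTreeSet<String>>, // connection id -> candidates
}

impl CandidateBatcher {
    fn add(&mut self, conn: &str, candidate: &str) {
        // The set also de-duplicates the identical server-reflexive
        // candidate observed via several relays.
        self.pending
            .entry(conn.to_string())
            .or_default()
            .insert(candidate.to_string());
    }

    /// Drain everything into one message per connection instead of one
    /// portal message per candidate.
    fn flush(&mut self) -> Vec<(String, Vec<String>)> {
        std::mem::take(&mut self.pending)
            .into_iter()
            .map(|(conn, cands)| (conn, cands.into_iter().collect()))
            .collect()
    }
}

fn main() {
    let mut batcher = CandidateBatcher::default();
    batcher.add("conn-1", "srflx 203.0.113.7:52000");
    batcher.add("conn-1", "srflx 203.0.113.7:52000"); // duplicate via second relay
    batcher.add("conn-1", "host 192.168.1.2:52000");
    let batches = batcher.flush();
    assert_eq!(batches.len(), 1);
    assert_eq!(batches[0].1.len(), 2); // duplicate collapsed
}
```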

Resolves: #3978.
2024-06-28 02:04:31 +00:00
Thomas Eizinger
79ff3f830b chore(gateway): downgrade warn logs (#5612)
Whilst it has been helpful to find issues such as #5611, having these
logs on `warn` spams the end user too much and creates a false sense
that things might not be working, as there can be a variety of reasons
why packets cannot be routed.
2024-06-28 01:13:29 +00:00
Thomas Eizinger
1aa95ed17e fix(connlib): be explicit about unsupported ICMP types (#5611)
Our NAT table uses TCP & UDP ports for its entries. To correctly handle
ICMP requests and responses, we use the ICMP identifier in those
packets. All other ICMP messages are currently unsupported.

The error paths for accessing these fields, i.e. ports for UDP/TCP and
the identifier for ICMP, currently conflate two different errors:

- Unsupported IP payload: it is neither TCP, UDP, nor ICMP
- Unsupported ICMP type: it is not an ICMP request or response

This makes certain logs look worse than they are because we say
"Unsupported IP protocol: Icmpv6". To avoid this, we create a dedicated
error variant that calls out the unsupported ICMP type.
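
A sketch of the split (variant names are assumed, not the actual connlib definitions):

```rust
// IP protocol numbers: 6 = TCP, 17 = UDP, 1 = ICMP, 58 = ICMPv6.
#[derive(Debug, PartialEq)]
enum Error {
    UnsupportedIpPayload(u8), // neither TCP, UDP, nor ICMP
    UnsupportedIcmpType(u8),  // ICMP, but not an echo request/reply
}

// Which packet field keys the NAT table entry.
fn nat_table_field(protocol: u8, icmp_type: u8) -> Result<&'static str, Error> {
    match protocol {
        6 => Ok("tcp source port"),
        17 => Ok("udp source port"),
        1 | 58 => match icmp_type {
            0 | 8 | 128 | 129 => Ok("icmp identifier"), // echo request/reply, v4 and v6
            other => Err(Error::UnsupportedIcmpType(other)),
        },
        other => Err(Error::UnsupportedIpPayload(other)),
    }
}

fn main() {
    assert_eq!(nat_table_field(6, 0), Ok("tcp source port"));
    // An ICMPv6 "time exceeded" (type 3) is no longer reported as an
    // unsupported IP protocol.
    assert_eq!(nat_table_field(58, 3), Err(Error::UnsupportedIcmpType(3)));
}
```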

Fixes: #5594.
2024-06-28 01:13:25 +00:00
Gabi
375a1b5586 fix(connlib): allow 1s ACK for packet before refreshing DNS (#5560)
Currently, we refresh DNS mappings when:
* We translate a packet for the first time
* There are no more incoming packets for 120 seconds
* There is at least 1 outgoing packet in the last 10 seconds

The idea was to coordinate with conntrack somehow, to expire DNS
translation at the point where the NAT session of the OS stops being
valid. That way, if the triggered DNS refresh changes the resolved IPs
it would never kill the underlying connection.

However, TCP sessions by default can last for up to 5 days! And I have
no idea how long for ICMP. To prevent killing these connections, we
assume that TCP and ICMP packets will elicit a response within 1s.
The DNS refresh for a translation mapping that hasn't seen any responses
is thus delayed by 1s after the last packet has been sent out.

To get an idea of how this works, you can imagine it like this:

|last incoming packet|------ 120 seconds + x seconds ----|outgoing packet|---- 1 second ----|dns refresh|

However, there is another case where the DNS refresh is triggered: here,
the same packet starts both the refresh period and the 10-second usage
window:

|last incoming packet|------ 111 seconds ----|outgoing packet|---- 9 seconds ----|dns refresh|

The unit tests should also make clear when we want to trigger a DNS
refresh and when we don't.
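
The check could be sketched like this (constants from the description above; struct and function names are illustrative, not connlib's):

```rust
use std::time::{Duration, Instant};

const NO_INCOMING_TIMEOUT: Duration = Duration::from_secs(120);
const RECENT_OUTGOING_WINDOW: Duration = Duration::from_secs(10);
const ACK_GRACE: Duration = Duration::from_secs(1);

struct Mapping {
    last_incoming: Instant,
    last_outgoing: Instant,
}

fn should_refresh(m: &Mapping, now: Instant) -> bool {
    // Peer went quiet for the conntrack timeout...
    now.duration_since(m.last_incoming) >= NO_INCOMING_TIMEOUT
        // ...the mapping is still in use...
        && now.duration_since(m.last_outgoing) <= RECENT_OUTGOING_WINDOW
        // ...and the last packet had 1 second to be answered.
        && now.duration_since(m.last_outgoing) >= ACK_GRACE
}

fn main() {
    let now = Instant::now();
    let quiet_peer = Mapping {
        last_incoming: now - Duration::from_secs(130),
        last_outgoing: now - Duration::from_secs(2),
    };
    assert!(should_refresh(&quiet_peer, now));

    // A packet that just went out gets its 1-second ACK grace period first.
    let just_sent = Mapping {
        last_incoming: now - Duration::from_secs(130),
        last_outgoing: now,
    };
    assert!(!should_refresh(&just_sent, now));
}
```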

---------

Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
2024-06-28 00:25:26 +00:00
FTB_lag
efd0218383 chore: fix contributing docs and fix feature flags in docker compose (#5572) 2024-06-27 11:45:59 -07:00
Reactor Scram
76e55e6138 fix(client/windows): fix upload speed by letting Wintun queue packets again (#5598)
Closes #5589. Refs #5571

Improves upload speeds on my Windows 11 VM from 2 Mbps to 10.5 Mbps.

On the resource-constrained VM it improved from 3 to 7 Mbps.

```[tasklist]
### Tasks
- [x] Open for review
- [x] Manual test on resource-constrained VM
- [x] Run 5x replication steps from #5571 and make sure it doesn't deadlock again
- [x] Merge
- [ ] https://github.com/firezone/firezone/issues/5601
```

Sorted by decreasing speed, M = macOS host, W = Windows guest in
Parallels, RC = Resource-constrained Windows guest in VirtualBox:

- M, Internet - 16 Mbps
- W, Internet - 13 Mbps
- M, Firezone - 12 Mbps
- RC, Internet - 12 Mbps
- W, Firezone, after this PR - 10.5 Mbps
- RC, Firezone, after this PR - 8.5 Mbps
- RC, Firezone, before this PR - 4 Mbps
- W, Firezone, before this PR - 2 Mbps

So it's not perfect but the worst part is fixed.

The slow upload speeds were probably a regression from #5571. The MPSC
channel only has a few spots in it, so if connlib doesn't pick up every
packet immediately (which would be impossible under load), we drop
packets. I measured 25% packet drops in an earlier commit.

I first tried increasing the channel size from 5 to 64, and that worked.

But this solution is simpler. I switch back to `blocking_send` so if
connlib isn't clearing the MPSC channel, Wintun will just queue up
packets in its internal ring buffers, and we aren't responsible for
buffering.

Getting rid of `blocking_send` was a defense-in-depth thing to fix the
deadlock yesterday, but we still close the MPSC channel inside
`Tun::drop`, and I confirmed in a manual test that this will kick the
worker thread out of `blocking_send`, so the deadlock won't come back.
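
The difference between the two send modes can be shown with std's bounded channel standing in for the tokio MPSC channel (a sketch, not the actual connlib code):

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // A bounded channel with only a few slots, standing in for the
    // channel between the Wintun worker thread and connlib.
    let (packet_tx, packet_rx) = sync_channel::<Vec<u8>>(5);

    // Fill the channel faster than connlib drains it.
    for i in 0..5u8 {
        packet_tx.try_send(vec![i]).unwrap();
    }

    // The sixth packet doesn't fit. With `try_send` we would drop it
    // (Wintun's ring buffer does the queueing); a blocking send parks
    // the worker thread here until the receiver makes room.
    let result = packet_tx.try_send(vec![5]);
    assert!(matches!(result, Err(TrySendError::Full(_))));

    // Queued packets are still delivered in order.
    assert_eq!(packet_rx.recv().unwrap(), vec![0]);
}
```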
2024-06-27 17:59:22 +00:00
Jamil
4f4cb51ecf fix(android): Use SCHEDULE_EXACT_ALARM instead of USE_EXACT_ALARM (#5595)
See
https://developer.android.com/develop/background-work/services/fg-service-types#system-exempted
and
https://developer.android.com/develop/background-work/services/alarms/schedule

We have to use `SCHEDULE_EXACT_ALARM` if we're not an alarm clock or
calendar app.
2024-06-27 16:31:44 +00:00
Reactor Scram
0d3b1314a0 ci(kotlin): replace deprecated Github actions (#5600)
Closes #5599


https://github.com/gradle/actions/blob/main/docs/deprecation-upgrade-guide.md#the-action-gradlegradle-build-action-has-been-replaced-by-gradleactionssetup-gradle
2024-06-27 16:23:16 +00:00
Jamil
7e67fa55f9 feat(website): Add changelog for updated 1.1.0 Clients and 1.1.1 Gateway (#5586)
Will be bumping the actual versions after testing. Draft until those
actually get cut and then bumped.

Asking review to make sure I didn't miss anything major.
2024-06-27 03:01:47 -07:00
Jamil
b5de55ac26 chore: Bump clients to 1.1.0, Gateway to 1.1.1 (#5591) 2024-06-27 02:43:48 -07:00
Jamil
444faaf911 fix(ci): restore version naming in _build_artifacts.yml (#5592)
Reverts part of #5487 to fix empty version numbers in release artifacts.
2024-06-27 02:08:31 -07:00
Thomas Eizinger
b6420eaa3e feat(snownet): close idle connections after 5min (#5576)
We define a connection as idle if we haven't sent or received any
packets in the last 5 minutes. From `snownet`'s perspective, keep-alives
sent by upper layers (like TCP keep-alives) must be honored and thus
outgoing as well as incoming packets are accounted for.

If the underlying connection breaks, we will hit an ICE timeout which is
an implementation detail of `snownet`. The packets tracked here are IP
packets that the user wants to send / receive via the tunnel. Similarly,
WireGuard's keep-alives do not update these timestamps and thus don't
mark a connection as non-idle.
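
A minimal sketch of the idle check (names assumed; only tunneled application packets refresh the timestamp, while WireGuard and ICE keep-alives deliberately do not):

```rust
use std::time::{Duration, Instant};

const IDLE_TIMEOUT: Duration = Duration::from_secs(5 * 60);

struct Connection {
    // Last application IP packet sent or received via the tunnel.
    last_app_packet: Instant,
}

impl Connection {
    fn on_app_packet(&mut self, now: Instant) {
        self.last_app_packet = now;
    }

    fn is_idle(&self, now: Instant) -> bool {
        now.duration_since(self.last_app_packet) >= IDLE_TIMEOUT
    }
}

fn main() {
    let start = Instant::now();
    let mut conn = Connection { last_app_packet: start };
    assert!(!conn.is_idle(start + Duration::from_secs(299)));
    assert!(conn.is_idle(start + Duration::from_secs(300)));

    // A TCP keep-alive from an upper layer counts as an app packet.
    conn.on_app_packet(start + Duration::from_secs(299));
    assert!(!conn.is_idle(start + Duration::from_secs(300)));
}
```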

---------

Co-authored-by: Jamil Bou Kheir <jamilbk@users.noreply.github.com>
2024-06-27 08:28:38 +00:00
Jamil
e82a9506ab fix(infra): use sensitive attribute for all secrets (#5562)
Is there a reason not to mark these `sensitive`?


https://developer.hashicorp.com/terraform/tutorials/configuration-language/sensitive-variables
2024-06-27 08:13:35 +00:00
Thomas Eizinger
58fad7cb2d refactor(connlib): batch resource change updates (#5575)
Currently, upon reconnecting, `snownet` returns a list of connection IDs
that have been closed. This was done to avoid emitting many identical
`ResourcesChanged` events. In all other events, `snownet` always only
references a single connection. To align this whilst not duplicating
`ResourcesChanged` events, we use a dedicated `bool` to check whether
any of the events emitted by `snownet` require updating the clients
about our active resources.
2024-06-27 07:48:41 +00:00
Thomas Eizinger
18b9c35316 chore(connlib): explicitly handle invalid_version error (#5577)
Ensures we correctly deserialize `invalid_version` and don't fall back
to `Other`.

Related: #5525.
2024-06-27 07:41:41 +00:00
Jamil
9abee60f4f ci: fix changelog link YAML (#5587)
Fixes the newline in the release changelog. This is maintained on the
website now.
2024-06-27 07:41:19 +00:00
Gabi
ad8c92ca35 fix(connlib): dont panic in invalid PTR records (#5588) 2024-06-27 07:24:06 +00:00
Thomas Eizinger
9ddee774b4 chore(connlib): allow filtering of wire log target (#5578)
Currently, enabling the `wire` log is an all or nothing approach,
logging incoming and outgoing messages from the TUN device, network and
the portal.

Often, only one or more of these is desired but enabling all of `wire`
spams the logs to the point where one cannot see the information they'd
like. With this PR, we move some of the fields of the `wire` log
statements to the log target instead. This allows controlling the logs
via the `RUST_LOG` env variable.

For example, to only see messages sent and received to the API, one can
set `RUST_LOG=wire::api=trace` which will output something like:

```
2024-06-27T02:12:41.821374Z TRACE wire::api::send: {"topic":"client","event":"phx_join","payload":null,"ref":0}
2024-06-27T02:12:42.030573Z TRACE wire::api::recv: {"event":"phx_reply","ref":0,"topic":"client","payload":{"status":"ok","response":{}}}
```

Similarly, enabling `wire::net=trace` will give you logs for packets
sent over the network:

```
2024-06-27T02:12:50.487503Z TRACE wire::net::send: src=None dst=34.80.2.250:3478 num_bytes=20
2024-06-27T02:12:50.487589Z TRACE wire::net::send: src=None dst=[2600:1900:4030:b0d9:0:5::]:3478 num_bytes=20
2024-06-27T02:12:50.487622Z TRACE wire::net::send: src=None dst=34.87.210.10:3478 num_bytes=20
2024-06-27T02:12:50.487652Z TRACE wire::net::send: src=None dst=[2600:1900:40b0:1504:0:17::]:3478 num_bytes=20
2024-06-27T02:12:50.510049Z TRACE wire::net::recv: src=34.87.210.10:3478 dst=192.168.188.71:39207 num_bytes=32
2024-06-27T02:12:50.510382Z TRACE wire::net::send: src=None dst=34.87.210.10:3478 num_bytes=112
2024-06-27T02:12:50.526947Z TRACE wire::net::recv: src=34.87.210.10:3478 dst=192.168.188.71:39207 num_bytes=92
2024-06-27T02:12:50.527295Z TRACE wire::net::send: src=None dst=34.87.210.10:3478 num_bytes=152
```

These targets have been designed to take up equal amounts of space. All
three types (`dev`, `net`, `api`) have 3 letters and `send` and `recv`
have 4. That way, these logs are always aligned which makes them easier
to scan.
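
The directive matching follows `RUST_LOG`'s target-prefix rules; a minimal sketch of why `wire::api=trace` also enables `wire::api::send` and `wire::api::recv`:

```rust
// A directive target matches an event target if they are equal or if the
// event target is a `::`-separated sub-target of the directive target.
fn directive_matches(directive_target: &str, event_target: &str) -> bool {
    event_target == directive_target
        || event_target
            .strip_prefix(directive_target)
            .map_or(false, |rest| rest.starts_with("::"))
}

fn main() {
    assert!(directive_matches("wire::api", "wire::api::send"));
    assert!(directive_matches("wire", "wire::net::recv"));
    assert!(!directive_matches("wire::api", "wire::net::send"));
}
```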
2024-06-27 06:36:49 +00:00
Gabi
e0e9e078a0 fix(connlib): statically resolve API domain (#5563)
In order to handle DNS resources, connlib intercepts all DNS requests on
the system once it has started up. The DNS queries are then forwarded to
the original DNS resolver in case the query isn't for one of the
configured DNS resources _except_ if the configured DNS resolver is also
a CIDR resource.

In that case, the DNS query will be tunneled to a gateway and forwarded
to the DNS resolver from there.

Exactly this configuration results in a deadlock when roaming networks.
To make roaming more reliable, we now drop all connections when
detecting a network change (see #5308). As a result, DNS queries cannot
be tunneled right away. This isn't usually a problem: We just send a
connection intent to the portal to connect to the gateway. Upon a
network change, we also reconnect the websocket to the portal, which also
requires resolving the domain name. Connlib's DNS resolver is still
active at that point, and thus we end up deadlocking ourselves because
the DNS query to resolve the portal's domain is waiting for a connection
to a gateway that can only be established once we are connected to the
portal.

To prevent this, we extend connlib with a "known hosts" feature. These
are DNS records that are defined statically for the lifetime of a
connlib session and can thus always be resolved, regardless of the
connection state with the portal or the gateways. We populate these
records with the portal's API, allowing the reconnect to work without
having connected gateways.
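
A sketch of the short-circuit (type names assumed, not connlib's; the record used here is hypothetical):

```rust
use std::collections::HashMap;
use std::net::IpAddr;

// Statically configured names are answered locally for the lifetime of
// the session, without touching the portal or any gateway.
struct KnownHosts {
    records: HashMap<String, Vec<IpAddr>>,
}

enum Resolution {
    Local(Vec<IpAddr>),
    Forward, // regular path: upstream resolver, possibly tunneled
}

impl KnownHosts {
    fn resolve(&self, name: &str) -> Resolution {
        match self.records.get(name) {
            Some(ips) => Resolution::Local(ips.clone()),
            None => Resolution::Forward,
        }
    }
}

fn main() {
    let mut records = HashMap::new();
    // Hypothetical entry; the real session is seeded with the portal's API host.
    records.insert("api.firez.one".to_string(), vec!["203.0.113.10".parse().unwrap()]);
    let hosts = KnownHosts { records };

    // The portal resolves even while no gateway connection exists.
    assert!(matches!(hosts.resolve("api.firez.one"), Resolution::Local(_)));
    assert!(matches!(hosts.resolve("example.com"), Resolution::Forward));
}
```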

---------

Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
2024-06-27 06:00:56 +00:00
Thomas Eizinger
c2b5379fba chore(connlib): demote log for unknown incoming packets to debug (#5584)
There are several reasons why we would legitimately receive a packet
that we can't handle, e.g. when a connection got cleared locally but the
gateway is still trying to send us packets for that socket. Not handling
these packets can be a bug but more often than not, it is not an issue.

Additionally, all our unit-tests actually `.unwrap` the
`Node::encapsulate` function so any unhandled packets in the tests will
be caught.
2024-06-27 05:58:04 +00:00
Jamil
786815dc39 fix(apple): use jsonl file suffix for app-side logs for consistency (#5582)
refs #5504
2024-06-27 05:51:44 +00:00
Thomas Eizinger
7e0a1f8511 ci(android): name jobs consistently (#5580)
Renames the old `build` job to `build-release` for consistency with
`build-debug`.

---------

Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
2024-06-27 04:48:41 +00:00
Thomas Eizinger
24dd85f8b1 ci(android): build and upload debug APK (#5574)
In order to make it easier to test PRs that affect Android, this patch
adds an additional CI job that builds a debug APK that can be installed
on any Android device right away.
2024-06-27 03:53:35 +00:00
Andrew Dryga
cfe777f389 fix(portal): Do not crash WebSocket when client version is invalid (#5525) 2024-06-26 18:50:43 -06:00
Reactor Scram
990f98e60f fix(windows): prevent deadlock when closing wintun (#5571)
Refs #5441, but without a reliable way to replicate that issue, I'm not
sure if this will completely fix it.

Before this PR, a deadlock can happen between 2 threads, call them "main
thread" and "worker thread".
The deadlock is more likely if more traffic is flowing through the
tunnel.

# Test results

I ran a build from this PR inside the resource-constrained VM and it's
likely the deadlock could have triggered there, since the packet channel
had 0 capacity (it was full) when we reached `Tun::drop`:

```jsonl
{"time":"2024-06-26T22:43:33.2398441Z","target":"firezone_headless_client::ipc_service","logging.googleapis.com/sourceLocation":{"file":"headless-client\\src\\ipc_service.rs","line":"304"},"severity":"INFO","gitVersion":"e591bb9","logFilter":"\"str0m=warn,info\""}
..
{"time":"2024-06-26T22:45:42.9035226Z","target":"firezone_tunnel::device_channel::tun_windows","logging.googleapis.com/sourceLocation":{"file":"connlib\\tunnel\\src\\device_channel\\tun_windows.rs","line":"45"},"severity":"INFO","channelCapacity":0,"message":"Shutting down packet channel..."}
{"time":"2024-06-26T22:45:42.9035467Z","target":"firezone_tunnel::device_channel::tun_windows","logging.googleapis.com/sourceLocation":{"file":"connlib\\tunnel\\src\\device_channel\\tun_windows.rs","line":"274"},"severity":"INFO","message":"recv_task exiting gracefully"}
{"time":"2024-06-26T22:45:43.4978015Z","target":"connlib_client_shared","logging.googleapis.com/sourceLocation":{"file":"connlib\\clients\\shared\\src\\lib.rs","line":"150"},"severity":"INFO","message":"connlib exited gracefully"}
```

I followed these steps:
- Run Firezone and sign in
- Start a speed test using Cloudflare
- During the download phase, quit the GUI

I did the same test with 0fac698 (`main`) and got the "All pipe
instances are busy" error dialog 3 out of 5 times.

# Details

The deadlock will happen in this scenario:

- The main thread enters `Tun::drop` here
0fac698dfc/rust/connlib/tunnel/src/device_channel/tun_windows.rs (L44)
- The worker thread is waiting for space in the packet channel
(`packet_tx` and `packet_rx`) here
0fac698dfc/rust/connlib/tunnel/src/device_channel/tun_windows.rs (L249)
- The main thread tells wintun to shut down. If the worker was on line
247 waiting on wintun, this would unblock it, but the worker is not on
line 247.
0fac698dfc/rust/connlib/tunnel/src/device_channel/tun_windows.rs (L45)
- The main thread waits to join the worker thread
0fac698dfc/rust/connlib/tunnel/src/device_channel/tun_windows.rs (L52)

The threads are now deadlocked. The main thread is waiting for the
worker thread to exit, and the worker thread is waiting for the main
thread either to call `poll_recv` (which would make `blocking_send`
return) or to finish `Tun::drop` (which would drop `packet_rx` and make
`blocking_send` return an error).

This PR makes two changes to prevent this deadlock. Either change alone
should be sufficient, but we make both for defense in depth:

1. When the main thread starts `Tun::drop`, we `close` the packet
channel, which unblocks any thread waiting in `Sender::blocking_send`.
2. We use `Sender::try_send` instead of `Sender::blocking_send`. If the
main thread can't consume packets fast enough, we're going to drop them
anyway, because the ring buffer in wintun will eventually fill up. So
dropping them here isn't much different from dropping them anywhere
else, and it keeps the worker thread from locking up.
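Change 2 can be sketched in isolation. This is a minimal reduction, not the real code: the standard library's bounded channel stands in for tokio's `mpsc` (its `SyncSender::try_send` has the same full/disconnected semantics for this purpose), and `worker_loop` is a hypothetical stand-in for the wintun recv task. Because the worker never blocks, dropping the receiver in `Tun::drop` can no longer strand it:

```rust
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};
use std::thread;

/// Stand-in for the wintun worker loop: forward "packets" without ever
/// blocking. Returns how many packets were dropped because the channel
/// was full.
fn worker_loop(packet_tx: SyncSender<Vec<u8>>) -> usize {
    let mut dropped = 0;
    for i in 0..100u8 {
        match packet_tx.try_send(vec![i]) {
            Ok(()) => {}
            // Channel full: drop the packet. wintun's ring buffer would
            // drop it shortly anyway, and blocking here is what deadlocked.
            Err(TrySendError::Full(_)) => dropped += 1,
            // Receiver gone (`Tun::drop` ran): exit gracefully.
            Err(TrySendError::Disconnected(_)) => break,
        }
    }
    dropped
}

fn main() {
    let (packet_tx, packet_rx) = sync_channel::<Vec<u8>>(1);
    let worker = thread::spawn(move || worker_loop(packet_tx));

    // Simulate `Tun::drop`: stop consuming and drop the receiver. With
    // `blocking_send`, the worker could be parked here forever; with
    // `try_send`, it always makes progress and observes the disconnect.
    drop(packet_rx);

    let dropped = worker.join().unwrap();
    println!("worker exited cleanly, dropped {dropped} packets");
}
```

The drop count is timing-dependent, but the worker always terminates, which is the property the fix is after.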
2024-06-26 23:52:20 +00:00
Gabi
0fac698dfc chore(connlib): set connection expiration to 120 seconds to respect the conntrack udp timeout (#5559) 2024-06-26 21:25:00 +00:00
Gabi
2d312ddc71 chore(connlib): reduce log level for unallowed packets in client (#5569)
Workaround for too many `unallowed packets` log entries.

Long-term fix tracked in #5568 and #5560
2024-06-26 21:24:13 +00:00
dependabot[bot]
c7fbb750be build(deps): Bump the npm_and_yarn group in /scripts/tests/browser with 2 updates (#5499)
Bumps the npm_and_yarn group in /scripts/tests/browser with 2 updates:
[ws](https://github.com/websockets/ws) and
[puppeteer](https://github.com/puppeteer/puppeteer).

Updates `ws` from 8.17.0 to 8.17.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/websockets/ws/releases">ws's
releases</a>.</em></p>
<blockquote>
<h2>8.17.1</h2>
<h1>Bug fixes</h1>
<ul>
<li>Fixed a DoS vulnerability (<a
href="https://redirect.github.com/websockets/ws/issues/2231">#2231</a>).</li>
</ul>
<p>A request with a number of headers exceeding the
[<code>server.maxHeadersCount</code>][] threshold could be used to crash
a ws server.</p>
<pre lang="js"><code>const http = require('http');
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 0 }, function () {
  const chars =
    &quot;!#$%&amp;'*+-.0123456789abcdefghijklmnopqrstuvwxyz^_`|~&quot;.split('');
  const headers = {};
  let count = 0;

  for (let i = 0; i &lt; chars.length; i++) {
    if (count === 2000) break;

    for (let j = 0; j &lt; chars.length; j++) {
      const key = chars[i] + chars[j];
      headers[key] = 'x';

      if (++count === 2000) break;
    }
  }

  headers.Connection = 'Upgrade';
  headers.Upgrade = 'websocket';
  headers['Sec-WebSocket-Key'] = 'dGhlIHNhbXBsZSBub25jZQ==';
  headers['Sec-WebSocket-Version'] = '13';

  const request = http.request({
    headers: headers,
    host: '127.0.0.1',
    port: wss.address().port
  });

  request.end();
});
</code></pre>
<p>The vulnerability was reported by <a
href="https://github.com/rrlapointe">Ryan LaPointe</a> in <a
href="https://redirect.github.com/websockets/ws/issues/2230">websockets/ws#2230</a>.</p>
<p>In vulnerable versions of ws, the issue can be mitigated in the
following ways:</p>
<ol>
<li>Reduce the maximum allowed length of the request headers using the
[<code>--max-http-header-size=size</code>][] and/or the
[<code>maxHeaderSize</code>][] options so
that no more headers than the <code>server.maxHeadersCount</code> limit
can be sent.</li>
</ol>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="3c56601092"><code>3c56601</code></a>
[dist] 8.17.1</li>
<li><a
href="e55e5106f1"><code>e55e510</code></a>
[security] Fix crash when the Upgrade header cannot be read (<a
href="https://redirect.github.com/websockets/ws/issues/2231">#2231</a>)</li>
<li><a
href="6a00029edd"><code>6a00029</code></a>
[test] Increase code coverage</li>
<li><a
href="ddfe4a804d"><code>ddfe4a8</code></a>
[perf] Reduce the amount of <code>crypto.randomFillSync()</code>
calls</li>
<li>See full diff in <a
href="https://github.com/websockets/ws/compare/8.17.0...8.17.1">compare
view</a></li>
</ul>
</details>
<br />

Updates `puppeteer` from 22.10.1 to 22.12.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/puppeteer/puppeteer/releases">puppeteer's
releases</a>.</em></p>
<blockquote>
<h2>puppeteer-core: v22.12.0</h2>
<h2><a
href="https://github.com/puppeteer/puppeteer/compare/puppeteer-core-v22.11.2...puppeteer-core-v22.12.0">22.12.0</a>
(2024-06-21)</h2>
<h3>Features</h3>
<ul>
<li>support AbortSignal in
page.waitForRequest/Response/NetworkIdle/Frame (<a
href="https://redirect.github.com/puppeteer/puppeteer/issues/12621">#12621</a>)
(<a
href="54ecea7db5">54ecea7</a>)</li>
<li><strong>webdriver:</strong> support for <code>PageEvent.Popup</code>
(<a
href="https://redirect.github.com/puppeteer/puppeteer/issues/12612">#12612</a>)
(<a
href="293926b61a">293926b</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>performance:</strong> clear targets on browser context close
(<a
href="https://redirect.github.com/puppeteer/puppeteer/issues/12609">#12609</a>)
(<a
href="660975824a">6609758</a>)</li>
<li>roll to Chrome 126.0.6478.62 (r1300313) (<a
href="https://redirect.github.com/puppeteer/puppeteer/issues/12615">#12615</a>)
(<a
href="80dd1316a0">80dd131</a>)</li>
<li>roll to Chrome 126.0.6478.63 (r1300313) (<a
href="https://redirect.github.com/puppeteer/puppeteer/issues/12632">#12632</a>)
(<a
href="20ed8fcb14">20ed8fc</a>)</li>
</ul>
<h2>puppeteer: v22.12.0</h2>
<h2><a
href="https://github.com/puppeteer/puppeteer/compare/puppeteer-v22.11.2...puppeteer-v22.12.0">22.12.0</a>
(2024-06-21)</h2>
<h3>Miscellaneous Chores</h3>
<ul>
<li><strong>puppeteer:</strong> Synchronize puppeteer versions</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li>The following workspace dependencies were updated
<ul>
<li>dependencies
<ul>
<li>puppeteer-core bumped from 22.11.2 to 22.12.0</li>
</ul>
</li>
</ul>
</li>
</ul>
<h2>puppeteer-core: v22.11.2</h2>
<h2><a
href="https://github.com/puppeteer/puppeteer/compare/puppeteer-core-v22.11.1...puppeteer-core-v22.11.2">22.11.2</a>
(2024-06-18)</h2>
<h3>Bug Fixes</h3>
<ul>
<li><strong>deps:</strong> bump ws to 8.17.1 (<a
href="https://redirect.github.com/puppeteer/puppeteer/issues/12605">#12605</a>)
(<a
href="49bcb2537e">49bcb25</a>)</li>
</ul>
<h2>puppeteer: v22.11.2</h2>
<h2><a
href="https://github.com/puppeteer/puppeteer/compare/puppeteer-v22.11.1...puppeteer-v22.11.2">22.11.2</a>
(2024-06-18)</h2>
<h3>Miscellaneous Chores</h3>
<ul>
<li><strong>puppeteer:</strong> Synchronize puppeteer versions</li>
</ul>
<h3>Dependencies</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="6937a76f0a"><code>6937a76</code></a>
chore: release main (<a
href="https://redirect.github.com/puppeteer/puppeteer/issues/12610">#12610</a>)</li>
<li><a
href="20ed8fcb14"><code>20ed8fc</code></a>
fix: roll to Chrome 126.0.6478.63 (r1300313) (<a
href="https://redirect.github.com/puppeteer/puppeteer/issues/12632">#12632</a>)</li>
<li><a
href="e8b29e6742"><code>e8b29e6</code></a>
chore(main): release ng-schematics 0.6.1 (<a
href="https://redirect.github.com/puppeteer/puppeteer/issues/12623">#12623</a>)</li>
<li><a
href="26d59cfccc"><code>26d59cf</code></a>
docs: document pierce/ (<a
href="https://redirect.github.com/puppeteer/puppeteer/issues/12630">#12630</a>)</li>
<li><a
href="8edf120b75"><code>8edf120</code></a>
docs: fix whitespace in docs (<a
href="https://redirect.github.com/puppeteer/puppeteer/issues/12629">#12629</a>)</li>
<li><a
href="34fe0d0309"><code>34fe0d0</code></a>
ci: fix canary ci (<a
href="https://redirect.github.com/puppeteer/puppeteer/issues/12628">#12628</a>)</li>
<li><a
href="c70c0780d4"><code>c70c078</code></a>
docs: update locator docs (<a
href="https://redirect.github.com/puppeteer/puppeteer/issues/12603">#12603</a>)</li>
<li><a
href="fdf40e90d5"><code>fdf40e9</code></a>
chore(deps): Bump the all group across 1 directory with 3 updates (<a
href="https://redirect.github.com/puppeteer/puppeteer/issues/12600">#12600</a>)</li>
<li><a
href="96022e0468"><code>96022e0</code></a>
build(deps): update ng-schematics (<a
href="https://redirect.github.com/puppeteer/puppeteer/issues/12624">#12624</a>)</li>
<li><a
href="54ecea7db5"><code>54ecea7</code></a>
feat: support AbortSignal in
page.waitForRequest/Response/NetworkIdle/Frame (...</li>
<li>Additional commits viewable in <a
href="https://github.com/puppeteer/puppeteer/compare/puppeteer-v22.10.1...puppeteer-v22.12.0">compare
view</a></li>
</ul>
</details>
<br />



Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
2024-06-26 20:48:07 +00:00
Jamil
89bb7c2c5d fix(android): Fix crash in setDns on 32-bit Android by using jlong consistently for the SessionWrapper pointer (#5564)
`connlibSessionPtr` is a `Long`, which is 64 bits. On 32-bit Android
architectures, this overwrites part of the `dns_list` for the `setDns`
native function call, because Rust used a 32-bit pointer for
`SessionWrapper` in the function definition.

This causes a JNI crash, detailed below. To fix it, we make sure a
`jlong` is received in Rust and do the pointer conversion in the body
of the functions that need it.
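A minimal stand-alone sketch of the conversion, with `jlong` aliased locally and a trimmed-down `SessionWrapper` and `set_dns` as hypothetical stand-ins for the real JNI exports (the real code goes through the `jni` crate):

```rust
/// What Kotlin's `Long` marshals to across JNI: always 64 bits,
/// regardless of the target's pointer width.
#[allow(non_camel_case_types)]
type jlong = i64;

struct SessionWrapper {
    dns_servers: Vec<String>,
}

// Before the fix, the extern signature took `session: *const SessionWrapper`,
// which is only 32 bits on 32-bit Android, so the 64-bit `Long` pushed by
// the JVM misaligned every following argument (clobbering `dns_list`).
// After the fix, we accept `jlong` and convert inside the body:
fn set_dns(connlib_session_ptr: jlong, dns: Vec<String>) {
    let session = connlib_session_ptr as usize as *mut SessionWrapper;
    // SAFETY: the pointer originated from `Box::into_raw` when the session
    // was created (see `main` below); the real code upholds the same contract.
    unsafe { (*session).dns_servers = dns };
}

fn main() {
    let session = Box::into_raw(Box::new(SessionWrapper { dns_servers: vec![] }));
    // The Kotlin side stores and passes the pointer as a Long:
    let as_jlong = session as usize as jlong;
    set_dns(as_jlong, vec!["1.1.1.1".to_string()]);
    // Reclaim ownership so the session is freed.
    let session = unsafe { Box::from_raw(session) };
    println!("dns: {:?}", session.dns_servers); // prints: dns: ["1.1.1.1"]
}
```

Widening a valid pointer to `i64` and narrowing it back is lossless on both 32- and 64-bit targets, which is why routing the value through `jlong` is safe everywhere.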

Adding @ReactorScram to review for visibility.


```
runtime.cc:655] Runtime aborting...
runtime.cc:655] Dumping all threads without mutator lock held
runtime.cc:655] All threads:
runtime.cc:655] DALVIK THREADS (35):
runtime.cc:655] "ConnectivityThread" prio=5 tid=35 Runnable
runtime.cc:655]   | group="" sCount=0 dsCount=0 flags=0 obj=0x131809a8 self=0xa42dea10
runtime.cc:655]   | sysTid=8854 nice=0 cgrp=default sched=0/0 handle=0x7fbb71c0
runtime.cc:655]   | state=R schedstat=( 0 0 0 ) utm=8 stm=0 core=2 HZ=100
runtime.cc:655]   | stack=0x7fab4000-0x7fab6000 stackSize=1040KB
runtime.cc:655]   | held mutexes= "abort lock" "mutator lock"(shared held)
runtime.cc:655]   native: #00 pc 0037b1dd  /apex/com.android.art/lib/libart.so (art::DumpNativeStack(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, int, BacktraceMap*, char const*, art::ArtMethod*, void*, bool)+76)
runtime.cc:655]   native: #01 pc 0044cd01  /apex/com.android.art/lib/libart.so (art::Thread::DumpStack(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, bool, BacktraceMap*, bool) const+388)
runtime.cc:655]   native: #02 pc 00448447  /apex/com.android.art/lib/libart.so (art::Thread::Dump(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, bool, BacktraceMap*, bool) const+34)
runtime.cc:655]   native: #03 pc 00465995  /apex/com.android.art/lib/libart.so (art::DumpCheckpoint::Run(art::Thread*)+688)
runtime.cc:655]   native: #04 pc 00460e57  /apex/com.android.art/lib/libart.so (art::ThreadList::RunCheckpoint(art::Closure*, art::Closure*)+354)
runtime.cc:655]   native: #05 pc 0046034f  /apex/com.android.art/lib/libart.so (art::ThreadList::Dump(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, bool)+1514)
runtime.cc:655]   native: #06 pc 0040a3af  /apex/com.android.art/lib/libart.so (art::Runtime::Abort(char const*)+1510)
runtime.cc:655]   native: #07 pc 0000d989  /system/lib/libbase.so (android::base::SetAborter(std::__1::function<void (char const*)>&&)::$_3::__invoke(char const*)+48)
runtime.cc:655]   native: #08 pc 0000d295  /system/lib/libbase.so (android::base::LogMessage::~LogMessage()+224)
runtime.cc:655]   native: #09 pc 002965db  /apex/com.android.art/lib/libart.so (art::JavaVMExt::JniAbort(char const*, char const*)+1962)
runtime.cc:655]   native: #10 pc 002966a5  /apex/com.android.art/lib/libart.so (art::JavaVMExt::JniAbortF(char const*, char const*, ...)+64)
runtime.cc:655]   native: #11 pc 004521c1  /apex/com.android.art/lib/libart.so (art::Thread::DecodeJObject(_jobject*) const+544)
runtime.cc:655]   native: #12 pc 0028a6e7  /apex/com.android.art/lib/libart.so (art::(anonymous namespace)::ScopedCheck::CheckInstance(art::ScopedObjectAccess&, art::(anonymous namespace)::ScopedCheck::InstanceKind, _jobject*, bool)+82)
runtime.cc:655]   native: #13 pc 00289779  /apex/com.android.art/lib/libart.so (art::(anonymous namespace)::ScopedCheck::CheckPossibleHeapValue(art::ScopedObjectAccess&, char, art::(anonymous namespace)::JniValueType)+552)
runtime.cc:655]   native: #14 pc 00288f55  /apex/com.android.art/lib/libart.so (art::(anonymous namespace)::ScopedCheck::Check(art::ScopedObjectAccess&, bool, char const*, art::(anonymous namespace)::JniValueType*)+592)
runtime.cc:655]   native: #15 pc 0027cbe7  /apex/com.android.art/lib/libart.so (art::(anonymous namespace)::CheckJNI::GetObjectClass(_JNIEnv*, _jobject*)+586)
runtime.cc:655]   native: #16 pc 003412db  /data/app/~~X6p_4xQWTraApNXlo4SIHA==/dev.firezone.android-zJrN9FN3yhs12tvUNeoOmw==/base.apk!libconnlib.so (offset ec000) (???)
runtime.cc:655]   at dev.firezone.android.tunnel.ConnlibSession.setDns(Native method)
runtime.cc:655]   at NetworkMonitor.onLinkPropertiesChanged(NetworkMonitor.kt:28)
runtime.cc:655]   at android.net.ConnectivityManager$NetworkCallback.onAvailable(ConnectivityManager.java:3328)
runtime.cc:655]   at android.net.ConnectivityManager$CallbackHandler.handleMessage(ConnectivityManager.java:3607)
runtime.cc:655]   at android.os.Handler.dispatchMessage(Handler.java:106)
runtime.cc:655]   at android.os.Looper.loop(Looper.java:223)
runtime.cc:655]   at android.os.HandlerThread.run(HandlerThread.java:67)
```

---------

Co-authored-by: conectado <gabrielalejandro7@gmail.com>
2024-06-26 19:24:44 +00:00
Jamil
624f7a7967 fix(website): Add API route to return deployed SHA (#5567)
Env vars are read at build time and injected into the client component.
This fixes that for `DEPLOYED_SHA`, which can change on each deploy.
2024-06-26 12:23:02 -07:00
Andrew
a5a60d253e Bump constant time for welcome emails 2024-06-26 12:54:17 -06:00
Andrew
ccf18cb9ad Add ops script to provision account users 2024-06-26 12:46:32 -06:00