`onViewCreated()` is called when the view initializes, and `onResume()` is
called right after it, as well as any time the view is shown again.
To prevent showing the VPN permission activity twice, we remove the
`checkTunnelState()` call from `onViewCreated()`, leaving only `onResume()`
to call it.
A boolean flag is added to track whether this is the "first" launch of
the app in order to determine whether to `connectOnStart`.
Fixes #9584
---------
Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
Instead of checking whether the buffer is exceeded _after_ adding new
timeseries to it, we should check before adding.
Variables were renamed to be a little clearer about what they represent.
`Repo.aggregate(:count)`, which performs a `COUNT(*)` query, should be
relatively fast if it's able to do an index-only scan. For that to
happen we need to ensure all of the fields in the WHERE clause are
indexed. Currently, we're missing an index on `actors.type`, so a full
scan is executed per account each time we calculate Billing limits,
every 5 minutes, for all accounts.
If we need to check these limits more often and/or our data grows in
size, it could be worth moving these to a limits counter field on
`accounts` which is maintained via INSERT/DELETE triggers.
Related:
https://firezone-inc.sentry.io/issues/6346235615/events/588a61860e0b4875a5dbe8531dbb806a/?project=4508756715569152&referrer=next-event
When reacting to `ActorGroupMembership` updates, we were issuing a query
to expire Flows given an `actor_id, actor_group_id` combination.
Unfortunately, this query never included an `account_id` to scope it,
causing a table scan of flows and associated join tables to resolve it.
To fix this, we introduce the `account_id` and make the expire-flows
query use this field so that only data for a single account is
considered.
Related:
https://firezone-inc.sentry.io/issues/6346235615/events/e225e1c488cb4ea3896649aabd529c50
The `compile_config` macro only works on environment and DB variables.
This caused recent confusion when determining where `database_pool_size`
was coming from.
To fix this issue, we rename `compile_config` to be clearer.
We also remove the technical debt around supporting "legacy keys" and
DB-based configuration.
The configuration compiler now works on environment variables only,
where it is still useful for:
- Casting environment variables to their expected type
- Alerting us when one that should be set is missing
Why:
* We recently had an issue where a space was entered into a provider
form field, which prevented our system from authenticating the admin
when setting up the auth provider and directory sync. To mitigate this
going forward, we are making sure all whitespace is trimmed in the form
fields. This commit focuses on the form fields for the auth providers.
Related: #9579
When profiling the relay, certain syscalls may get interrupted by the
kernel. At present, this crashes the relay, which makes profiling
impossible.
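One common way to handle this is to retry the operation whenever it fails with `ErrorKind::Interrupted` instead of propagating the error. The sketch below is a minimal, hypothetical illustration of that pattern (the helper name and the call site are assumptions, not the relay's actual code):

```rust
use std::io;

/// Retry an I/O operation that may fail with EINTR while a profiler
/// is delivering signals to the process.
fn retry_on_eintr<T>(mut op: impl FnMut() -> io::Result<T>) -> io::Result<T> {
    loop {
        match op() {
            // The kernel interrupted the syscall (e.g. by a profiling signal);
            // the operation is safe to retry.
            Err(e) if e.kind() == io::ErrorKind::Interrupted => continue,
            other => return other,
        }
    }
}

fn main() -> io::Result<()> {
    // Example call site (Linux-only path, used here purely for illustration):
    // a read that would previously bubble the EINTR up now simply retries.
    let cmdline = retry_on_eintr(|| std::fs::read("/proc/self/cmdline"))?;
    println!("read {} bytes", cmdline.len());
    Ok(())
}
```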
Co-authored-by: Antoine Labarussias <antoinelabarussias@gmail.com>
Rust and Swift disagree about the formatting of these, leading to
constant git file dirtiness when working on Apple code.
These were originally added when we didn't have as much automation to
regenerate them on each build.
Sentry isn't started when this runs, so start it and manually capture a
message to ensure we're reminded about pending conditional migrations.
Verified that this works with the Release script.
As recommended by the Sentry team [0], "App hang" tracking should be
disabled before calling into certain system APIs like showing alerts to
prevent false-positives.
[0]: https://github.com/getsentry/sentry-cocoa/issues/3472
When using alternative editors other than Xcode, Intellisense is usually
provided by `sourcekit-lsp`. The LSP server also uses `swift-format`
under the hood to format the code but appears to default to using 4
spaces for indentation. In order to standardise on how much indentation
is used, we add a `.swift-format` file that specifies it.
In #9562, we introduced a bug where the pending conditional migrations
check was run without the repo being started. Wrapping it with
`with_repo` fixes that.
The runner doing the Android builds is running out of disk space. Since
we don't use the emulator, adb, or other tools for the build, we can
save some space by not installing these.
Related: https://github.com/firezone/firezone/actions/runs/15742063800
Some of our users are facing issues on what looks to be very unreliable
network connections. At present, we consider a connection dead if we
don't receive a response within 9.25 seconds. Cutting a connection and
re-establishing it _should_ not be a problem in general and TCP
connections happening through Firezone should resume gracefully. Further
work on whether that is actually the case is due in #9531. Until then,
we increase the ICE timeout to ~15s.
Related: #9526
This table was added to try and gate the request rate to Google's
Metrics API.
However, this was a flawed endeavor, as we later discovered that the
time series points need to be spaced at least 5s apart, not the API
requests themselves.
This PR gets rid of the table and therefore the problematic DB query,
which is timing out quite often due to the contention involved in 12
Elixir nodes trying to grab a lock every 5s.
Related: #9539
Related:
https://firezone-inc.sentry.io/issues/6346235615/?project=4508756715569152&query=is%3Aunresolved&referrer=issue-stream&stream_index=1
Some migrations take a long time to run because they require locks or
modify large amounts of data. To prevent this from causing issues during
deploy, we leverage Ecto's native support for loading migrations from
multiple directories to introduce a `conditional_migrations/` directory
that houses any conditional migrations we want to run.
To run these migrations, you'll need to do one of the following:
- `dev, test`: `mix ecto.migrate` will run them by default because we
have aliased it to load `conditional_migrations/` for dev
- `prod`: Set the `RUN_CONDITIONAL_MIGRATIONS` env var to `true` before
starting a prod server using the `bin/migrate` script.
- `dev, test, prod`: Run `Domain.Release.migrate(conditional: true)`
from an IEx shell.
If conditional migrations were found that weren't executed during
`Domain.Release.migrate`, a warning is logged to remind us to run them.
---------
Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
Any well-behaved NAT should keep the port mappings of an established UDP
connection open for 120s, even without seeing any traffic. Not all NATs
in the wild are well-behaved, though, and a discarded port mapping
causes connectivity loss for customers.
To combat these situations, we decrease the timer for STUN probes on
idle connections from 60s to 25s.
Related: #9526
To make releases even smoother, this PR creates a bit of automation
that automatically bumps the versions in the `scripts/bump-versions.sh`
script and opens a PR for it.
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.39 to 4.5.40.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/clap-rs/clap/blob/master/CHANGELOG.md">clap's
changelog</a>.</em></p>
<blockquote>
<h2>[4.5.40] - 2025-06-09</h2>
<h3>Features</h3>
<ul>
<li>Support quoted ids in <code>arg!()</code> macro (e.g.
<code>arg!("check-config": ...)</code>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="cff27dbf57"><code>cff27db</code></a>
chore: Release</li>
<li><a
href="4ef41249f1"><code>4ef4124</code></a>
docs: Update changelog</li>
<li><a
href="ca896175c1"><code>ca89617</code></a>
Merge pull request <a
href="https://redirect.github.com/clap-rs/clap/issues/5848">#5848</a>
from jennings/jennings/push-xolwzyoornps</li>
<li><a
href="99b6391ee9"><code>99b6391</code></a>
fix(complete): Fix PowerShell dynamic completion</li>
<li>See full diff in <a
href="https://github.com/clap-rs/clap/compare/clap_complete-v4.5.39...clap_complete-v4.5.40">compare
view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
The recent changes to str0m include a bug fix for network constellations
where both peers are behind symmetric NAT and therefore need a
relay-relay candidate pair to succeed. In the current version, such
candidate pairs would erroneously be rejected as redundant with host
candidates.
Fixes: #9514
To make our FFI layer between Android and Rust safer, we adopt the
UniFFI tool from Mozilla. UniFFI allows us to create a dedicated crate
(here `client-ffi`) that contains Rust structs annotated with various
attributes. These macros then generate code at compile time that is
built into the shared object. Using a dedicated CLI from the UniFFI
project, we can then generate Kotlin bindings from this shared object.
The primary motivation for this effort is memory safety across the FFI
boundary. Most importantly, we want to ensure that:
- The session pointer is not used after it has been freed
- Disconnecting the session frees the pointer
- Freeing the session does not happen as part of a callback as that
triggers a cyclic dependency on the Rust side (callbacks are executed on
a runtime and that runtime is dropped as part of dropping the session)
To achieve all of these goals, we move away from callbacks altogether.
UniFFI has great support for async functions. We leverage this support
to expose a `suspend fn` to Android that returns `Event`s. These events
map to the current callback functions. Internally, these events are read
from a channel with a capacity of 1000 events. It is therefore not very
time-critical that the app reads from this channel. `connlib` will
happily continue even if the channel is full. 1000 events should be more
than sufficient though in case the host app cannot immediately process
them. We don't send events very often after all.
This event-based design has a major advantage: it allows us to make use
of `AutoCloseable` on the Kotlin side, meaning the `session` pointer is
only ever accessed as part of a `use` block and automatically closed
(and therefore freed) at the end of the block.
To communicate with the session, we introduce a `TunnelCommand` which
represents all actions that the host app can send to `connlib`. These
are passed through a channel to the `suspend fn` which continuously
listens for events and commands.
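To make this design concrete, here is a minimal, hypothetical sketch of the event-channel pattern using UniFFI's proc-macros and async support; the type names, variants, and fields are illustrative assumptions, not the actual `client-ffi` API:

```rust
use tokio::sync::{mpsc, Mutex};

uniffi::setup_scaffolding!();

/// Events surfaced to the host app; these replace the old callback functions.
/// The variants here are placeholders, not connlib's real event set.
#[derive(uniffi::Enum)]
pub enum Event {
    TunnelReady,
    Disconnected { reason: String },
}

/// Commands the host app can send to connlib (placeholder variants).
#[derive(uniffi::Enum)]
pub enum TunnelCommand {
    SetDns { servers: Vec<String> },
    Reset,
}

#[derive(uniffi::Object)]
pub struct Session {
    // connlib would construct these with a bounded channel, e.g. mpsc::channel(1_000).
    events: Mutex<mpsc::Receiver<Event>>,
    commands: mpsc::Sender<TunnelCommand>,
}

#[uniffi::export(async_runtime = "tokio")]
impl Session {
    /// Exposed to Kotlin as a `suspend fun`; the app awaits the next event
    /// instead of registering callbacks. Returns `None` once connlib shuts down.
    pub async fn next_event(&self) -> Option<Event> {
        self.events.lock().await.recv().await
    }

    /// Commands flow in the other direction over their own channel.
    pub async fn send_command(&self, command: TunnelCommand) {
        let _ = self.commands.send(command).await;
    }
}
```

On the Kotlin side, the generated object can then be consumed inside a `use { }` block that repeatedly awaits `next_event()`, and it is closed (freeing the underlying pointer) when the block exits.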
Resolves: #9499
Related: #3959
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil Bou Kheir <jamilbk@users.noreply.github.com>
This ensures we always know what the ICE timeouts of the agent are.
With the backoff implemented in the agent, it is not trivial to compute
this from the input parameters.
The shape of this from libcluster is `[:"NODE_NAME": connected_bool?]`,
so we need to extract the first element of each item before using this
var.
This is just for logging and doesn't affect how we actually connect to
nodes.
Why:
* Adding some simple logging around OIDC calls to help with better
debugging.
* Removing the `opentelemetry_liveview` package, as it has been pulled
into the `opentelemetry_phoenix` package that we are already using.
It is in the nature of our application that errors may occur in rapid
succession if anything in the packet processing path fails. Most of the
time, these repeated errors don't add any additional information so
reporting one of them to Sentry is more than enough.
To achieve this, we add a `before_send` callback that utilizes a
concurrent cache with an upper bound of 10000 items and a TTL of 5
minutes. In other words, if we have submitted an event to Sentry that
had the exact same message in the last 5 minutes, we will not send it.
Internally, `moka` uses a concurrent hash map and therefore the key is
hashed and not actually stored. Hash codes are u64, meaning the memory
footprint of this cache is only ~ 64kb (not accounting for constant
overhead of the cache internals).
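As a rough illustration, here is a minimal, hypothetical sketch of such a deduplicating `before_send` hook built on `moka`'s sync cache and the `sentry` crate. The capacity, TTL, and keying by a hashed message mirror the description above, but the exact wiring is an assumption, not the relay's actual code:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::Arc;
use std::time::Duration;

use moka::sync::Cache;
use sentry::protocol::Event;

/// Reduce a message to a u64 fingerprint so each cache entry stores 8 bytes
/// instead of the full string.
fn fingerprint(message: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    message.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Bounded, time-expiring cache of recently reported messages.
    let seen: Cache<u64, ()> = Cache::builder()
        .max_capacity(10_000)
        .time_to_live(Duration::from_secs(5 * 60))
        .build();

    let _guard = sentry::init(sentry::ClientOptions {
        before_send: Some(Arc::new(move |event: Event<'static>| {
            // Key on the event message; events without one are always sent.
            let Some(message) = event.message.as_deref() else {
                return Some(event);
            };
            let key = fingerprint(message);

            if seen.contains_key(&key) {
                // Same message reported within the last 5 minutes: drop it.
                return None;
            }
            seen.insert(key, ());

            Some(event)
        })),
        ..Default::default()
    });

    // Only the first of these two identical messages would reach Sentry.
    sentry::capture_message("packet processing failed", sentry::Level::Error);
    sentry::capture_message("packet processing failed", sentry::Level::Error);
}
```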
These don't happen very often, so they are safe to log at INFO. That is
the default log level, and it is useful to see why we are re-connecting
to the portal.
It appears that the code generation in our `build.rs` generates the code
with different formatting, and this therefore constantly shows up as
untracked changes in my editor.