431 Commits

Author SHA1 Message Date
Thomas Eizinger
bce2aa30b5 feat(portal): extend DNS settings to allow for DoH providers (#10882)
In order to allow customers to make use of connlib's DoH functionality,
we need a configuration UI for it. We take inspiration from the "New
Resource" page and implement a 3-choice UI component for configuring how
Clients should resolve DNS queries:

- System
- Secure DNS
- Custom

The secure and custom DNS options show an additional form when selected,
for either picking a DoH provider or entering the addresses of the custom
DNS servers.

Right now, the "Secure DNS" part is disabled if the
`DISABLE_DOH_PROVIDER` env variable is set. We render a "Coming soon"
tooltip on hover:

<img width="1534" height="1100" alt="image"
src="https://github.com/user-attachments/assets/a12a6ba4-806f-4d19-8aea-5c1cd981d609"
/>

This allows us to test this in staging and still ship to production if
needed prior to enabling it.
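
For reference, a minimal sketch of how such an env-based gate could be wired
up in runtime config (the application/key names are assumptions, not the
portal's actual config layout):

```elixir
# config/runtime.exs (hypothetical keys; the real config layout may differ)
import Config

config :web, :doh_providers_enabled?,
  System.get_env("DISABLE_DOH_PROVIDER") in [nil, "", "false"]
```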

Resolves: #10792
Resolves: #10786

---------

Co-authored-by: Jamil Bou Kheir <jamilbk@users.noreply.github.com>
2025-11-22 06:48:07 +00:00
Brian Manifold
c8900c2a94 fix(portal): update deletion circuit breaker (#10898)
Why:

* The directory sync circuit breaker was put in place to prevent errors
in IdP API responses from incorrectly deleting identities and groups, and
thus also deleting policies. This was especially an issue with Google
Workspace sync. The circuit breaker has done a good job of catching some
erroneous API responses; however, it is now preventing legitimate syncs
where large numbers of groups need to be deleted. While a few options
have been briefly discussed, no final solution has been decided on. In
the meantime, this commit raises the threshold of what is considered a
"mass deletion" from 25% to 75%. This should give some time to figure out
what a more permanent solution will look like.
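
As a rough illustration (function and attribute names are hypothetical, not
the actual implementation), the guard now looks something like this:

```elixir
# Hypothetical sketch: abort the sync when the planned deletions exceed
# 75% of the currently synced entities (previously 25%).
@mass_deletion_threshold 0.75

def mass_deletion?(_planned_deletions, 0), do: false

def mass_deletion?(planned_deletions, existing_count) do
  planned_deletions / existing_count > @mass_deletion_threshold
end
```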

Fixes #10892
2025-11-17 16:07:53 +00:00
Thomas Eizinger
cd650de1f8 refactor: prepare client init for upstream DoH servers (#10851)
In order to support multiple different protocols of upstream DNS
resolvers, we deprecate the `upstream_dns` field in the client's `init`
message and introduce two new fields:

- `upstream_do53`
- `upstream_doh`

For now, only `upstream_do53` is populated and `upstream_doh` is always
empty.

On the client side, for now we only introduce the `upstream_do53` field
but fall back to `upstream_dns` if it is empty. This makes this PR
backwards-compatible with the portal version that is currently deployed
in production, so this PR can be merged even prior to deploying the
portal.

Internally, we prepare connlib's abstractions to deal with different
kinds of upstreams by renaming all existing "upstream DNS" references to
`upstream_do53`: DNS over port 53. That includes UDP as well as TCP DNS
resolution.
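
For illustration, the `init` payload now carries something along these lines
(the field names follow this PR; the per-entry shapes are assumptions):

```elixir
%{
  # deprecated, still populated so older clients keep working
  upstream_dns: [%{address: "1.1.1.1"}],
  # plain DNS over port 53 (UDP and TCP)
  upstream_do53: [%{address: "1.1.1.1"}],
  # DNS-over-HTTPS resolvers; always empty for now
  upstream_doh: []
}
```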

Resolves: #10791

---------

Co-authored-by: Jamil Bou Kheir <jamilbk@users.noreply.github.com>
2025-11-12 05:40:58 +00:00
Thomas Eizinger
b8b52c1f07 fix(portal): do not allow ports for upstream DNS servers (#10772)
DNS servers are standardised to be contacted on port 53. This is also
hard-coded within `connlib` when we contact an upstream server. As such,
we should disallow users from entering a custom port for upstream DNS
servers. Luckily - or perhaps because it doesn't presently work - no
users in production have actually entered a port.
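
A minimal sketch of such a validation (assuming the address is validated as a
changeset field; the real check in the portal may differ):

```elixir
# Accept only a bare IPv4/IPv6 address: anything with a port fails to parse.
def validate_ip_without_port(changeset, field) do
  Ecto.Changeset.validate_change(changeset, field, fn ^field, value ->
    case :inet.parse_address(String.to_charlist(value)) do
      {:ok, _ip} -> []
      {:error, _} -> [{field, "must be an IP address without a port"}]
    end
  end)
end
```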

Resolves: #8330
2025-11-04 04:44:57 +00:00
Jamil
1e295742bc chore(portal): version gate resource can change sites (#10608)
On clients prior to https://github.com/firezone/firezone/pull/10604,
sending `resource_created_or_updated` for a site change would cause a
panic. With the introduction of that PR, the portal can now send this
message without needing to "toggle" the resource first by sending
`resource_deleted`.
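
A hedged sketch of the kind of gate involved (the cut-off version below is a
placeholder, not the real one):

```elixir
@min_site_change_version Version.parse!("1.5.0")  # placeholder cut-off

def supports_site_change?(client_version) do
  case Version.parse(client_version) do
    {:ok, version} -> Version.compare(version, @min_site_change_version) != :lt
    :error -> false
  end
end
```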

Fixes: #10593
2025-10-26 17:02:02 -07:00
Brian Manifold
28e35b6500 refactor(portal): remove all soft delete columns (#10685)
Why:

* Now that the soft-delete fields are no longer referenced in the
codebase and soft-deleted rows have been removed from the DB, we can
remove any remaining traces of soft-delete functionality.
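
The clean-up boils down to migrations of roughly this shape (table and column
names here are illustrative; the real migration touches every table that
carried soft-delete columns):

```elixir
defmodule Domain.Repo.Migrations.DropSoftDeleteColumns do
  use Ecto.Migration

  def change do
    alter table(:actors) do
      remove :deleted_at
    end
  end
end
```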

Resolves #8187
2025-10-22 21:13:59 +00:00
Brian Manifold
27565ea5c8 refactor(portal): remove soft delete elements from portal code (#10607)
Why:

* In previous commits, the portal code was updated to use hard deletion
rather than soft deletion of data. The fields used for soft deletion were
still kept in the DB and in the code to allow for a zero-downtime rollout
and an easy rollback if necessary. To continue that work, the portal code
has now been updated to remove any reference to the soft-delete fields
(e.g. deleted_at, persistent_id, etc.). While the code has been updated,
the actual data in the DB will need to remain for now, once again to
allow for a zero-downtime rollout. Once this commit has been deployed to
production, another PR can follow to remove the columns from the relevant
tables in the DB.


Related: #8187
2025-10-18 17:02:26 +00:00
Mariusz Klochowicz
a27676a903 fix(portal): support dark mode in outbound emails (#10493)
Ensure that users with dark mode enabled system-wide get a nice
experience whilst reading the emails.

Add a `mix test_emails` task to send all the emails and quickly inspect
them locally.

Before:

<img width="767" height="924" alt="image"
src="https://github.com/user-attachments/assets/aaac75bd-67ad-4fd8-82e8-6726ffea6bae"
/>


After (viewed via `mix test_emails`):

<img width="1063" height="928" alt="image"
src="https://github.com/user-attachments/assets/57d3a4d9-5b8f-4a45-8546-7615e15422d8"
/>

---------

Signed-off-by: Mariusz Klochowicz <mariusz@klochowicz.com>
Co-authored-by: Brian Manifold <bmanifold@users.noreply.github.com>
2025-10-16 19:35:36 +00:00
Jamil
4d43d2cb77 fix(portal): trigger email for Okta 4xx errors (#10578)
When Okta returned a 4xx status code from the API, we had updated the
error handler to grab the errors from the body or headers and return those.

However, the caller was expecting an explicit empty string for 401 and
403 errors in order to trigger the email send behavior.

Since that wasn't being matched, we were logging the error internally
only, and continuing to retry the sync indefinitely without sending the
user an email.
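
Roughly, the caller now needs to match the richer error shape rather than a
literal empty string; a hedged sketch (module and function names are
illustrative, not the actual sync code):

```elixir
# 401/403 from the IdP means credentials are invalid or revoked: email the admins.
defp handle_sync_result({:error, {status, _body}}, provider) when status in [401, 403] do
  send_sync_error_email(provider)
end

# any other failure is logged and retried on the next sync run
defp handle_sync_result({:error, _other}, _provider), do: :retry

defp handle_sync_result(:ok, _provider), do: :ok
```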


Fixes #8744 
Fixes #9825
2025-10-16 15:10:04 +00:00
Jamil
d329880ec8 fix(portal): don't use Web functions from Domain (#10546)
Fixes an issue introduced in #10510 where Web functions (like
VerifiedRoutes) cannot be called from Domain because they are not
available in the release.

This happens to work in dev mode because everything is available under
the same dev context.
2025-10-13 20:24:46 +00:00
Jamil
b61fd20de8 chore(portal): remove Jason in favor of JSON (#10550)
Since Elixir 1.18, JSON encoding and decoding support is included in the
standard library. This is built on OTP's native `json` module, which is
often faster than other implementations.

It mostly has the same API as the popular Jason library, differing
mainly in the format of the error responses returned when decoding
fails.

To minimize dependence on external libraries, we remove the Jason lib in
favor of the built-in JSON module.
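
For the common cases the swap is mechanical; the main visible difference is
the error shape on failed decodes (the exact reason terms below are
illustrative):

```elixir
# encoding and successful decoding look the same
JSON.encode!(%{hello: "world"})
{:ok, map} = JSON.decode(~s({"hello":"world"}))

# failed decodes differ:
# Jason.decode("{oops") #=> {:error, %Jason.DecodeError{...}}
# JSON.decode("{oops")  #=> {:error, reason} with a plain tuple reason
```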

Fixes #8011
2025-10-13 17:39:53 +00:00
Jamil
1635c81a69 chore(portal): remove dead telemetry/timer.ex (#10549) 2025-10-13 16:36:19 +00:00
Jamil
bb089846d7 chore(portal): bump phoenix to 1.8 (#10510)
Bumps Phoenix to 1.8 and Phoenix LiveView to 1.1. As part of the bump a
number of issues had to be addressed. Comments inline provide more
context.

Supersedes #10475 
Supersedes #10448
2025-10-10 15:08:50 +00:00
Jamil
f07d2932dc feat(portal): show outdated clients (#10456)
Clients more than one minor version away from a particular Gateway
version won't be able to connect, so it would be useful to help admins
recognize when this might be the case and encourage them to upgrade.

We accomplish that with two small UX improvements in this PR:

- In the outdated gateways email, we show them a count of clients that
will no longer be able to connect to the new gateway version, linking
them to a sorted view of the clients table
- In the clients table, we add a new sortable `version` column to allow
admins to see which clients are outdated
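
A hedged sketch of the sort of check behind the new column and email (this is
an illustrative rule, not the portal's exact compatibility calculation):

```elixir
# A client is treated as outdated for a gateway version if it is on a
# different major, or more than one minor version behind.
def outdated?(client_version, gateway_version) do
  client = Version.parse!(client_version)
  gateway = Version.parse!(gateway_version)

  client.major != gateway.major or gateway.minor - client.minor > 1
end
```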

Fixes #7727 
Fixes #10385

---------

Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
2025-09-30 13:23:30 +00:00
Thomas Eizinger
685acdac3a feat: add more specific component type to user-agent header (#10457)
In order to allow the portal to more easily classify what kind of
component is connecting, we extend the `get_user_agent` header to
include a component type instead of the generic `connlib/`.

---------

Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
2025-09-26 00:18:36 +00:00
Jamil
15283f1af5 feat(portal): batch_upsert and delete_unsynced functions (#10369)
In order to support the new, upcoming directory sync implementations, we
need the ability to batch upsert auth_identities, actors, actor_groups,
and actor_group_memberships. We also need the ability to delete entities
that were not upserted at the tail end of a sync job iteration in order
to remove entities that are no longer in the directory.

To support this, we add these functions and related tests here.
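
A hedged sketch of the two building blocks, using an identities-shaped
example (schema, fields, and conflict target are assumptions):

```elixir
import Ecto.Query

def batch_upsert_identities(rows) do
  Domain.Repo.insert_all(
    Domain.Auth.Identity,
    rows,
    on_conflict: {:replace, [:name, :email, :last_synced_at]},
    conflict_target: [:provider_id, :provider_identifier]
  )
end

def delete_unsynced_identities(provider_id, synced_at) do
  from(i in Domain.Auth.Identity,
    where: i.provider_id == ^provider_id and i.last_synced_at < ^synced_at
  )
  |> Domain.Repo.delete_all()
end
```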

Related: #6294

---------

Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-19 20:48:26 +00:00
Jamil
bfac486df5 refactor(portal): use list comprehensions in cache (#10376)
Elixir's [list
comprehensions](https://hexdocs.pm/elixir/comprehensions.html) are more
concise and [often
faster](https://stackoverflow.com/questions/55038704/elixir-enum-map-vs-for-comprehension)
(~2x) than using multiple Enum.filter and Enum.map calls.

Since I was in these modules debugging a possible race condition for
#10375, I decided to go ahead and update some of these hot functions to
use the more modern approach.
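
For example, a filter-then-map pipeline collapses into a single pass
(`authorized?/2` is a stand-in predicate):

```elixir
# two passes, with an intermediate list built in between
resources
|> Enum.filter(&authorized?(&1, subject))
|> Enum.map(& &1.id)

# one pass with a comprehension
for resource <- resources, authorized?(resource, subject), do: resource.id
```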

---------

Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-18 18:37:37 +00:00
Jamil
c043359c21 fix(portal): don't count internet site in limits (#10336)
Starter plans don't have access to the internet site so it's not fair to
count it against their limits.

Related:
https://app.hubspot.com/contacts/23723443/record/0-5/29628021256
2025-09-12 14:11:02 +00:00
Brian Manifold
826a304071 feat(portal): enable outdated gateway email (#10281)
Enables 'outdated gateway' notifications for all accounts.

Closes #8361
2025-09-04 03:56:01 +00:00
Brian Manifold
5797511ebd fix(portal): update directory sync job w/ new hard delete (#10267)
Why:

* During the refactor to move to hard delete data in the portal there
were a couple places of inconsistency in the directory sync job where
deletion was concerned.
2025-08-31 01:19:59 +00:00
Brian Manifold
6bd19ee9b0 refactor(portal): hard delete data (#9694) 2025-08-29 22:13:44 +00:00
Jamil
cafe6554ff refactor(portal): reduce cache memory usage (#10058)
Napkin math shows that we can save substantial memory (~3x or more) on
the API nodes as connected clients/gateways grow if we just store the
fields we need in order to keep the client and gateway state maintained
in the channel pids.

To facilitate this, we create new `Cacheable` structs that represent
their `Domain` cousins, which use byte arrays for `id`s and strip out
unused fields.
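
A hedged sketch of the idea (field names are illustrative; the real
`Cacheable` structs carry whatever the channels actually need):

```elixir
defmodule Domain.Cache.Cacheable.Resource do
  # store ids as raw 16-byte UUID binaries instead of 36-byte strings
  defstruct [:id, :type, :address]

  def from_schema(%Domain.Resources.Resource{} = resource) do
    %__MODULE__{
      id: Ecto.UUID.dump!(resource.id),
      type: resource.type,
      address: resource.address
    }
  end
end
```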

Additionally, all business logic involved with maintaining these caches
is now contained within two modules: `Domain.Cache.Client` and
`Domain.Cache.Gateway`, and type specs have been added to aid in static
analysis and code documentation.

Comprehensive testing is now added not only for the cache modules, but
for their associated channel modules as well to ensure we handle
different kinds of edge cases gracefully.

The `Events` nomenclature was renamed to `Changes` to better name what
we are doing: Change-Data-Capture.

Lastly, the following related changes are included in this PR since they
were "in the way" so to speak of getting this done:

- We save the last received LSN in each channel and drop the `change`
with a warning if we receive it twice in a row, or we receive it out of
order
- The client/gateway version compatibility calculations have been moved
to `Domain.Resources` and `Domain.Gateways` and have been simplified to
make them easier to understand and maintain going forward.


Related: #10174 
Fixes: #9392 
Fixes: #9965
Fixes: #9501 
Fixes: #10227

---------

Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-22 21:52:29 +00:00
Jamil
e5b2af1d4e chore(portal): add ChangeLogs.truncate/2 and tests (#10155)
In preparation to delete old change_logs based on account and insertion
time, we introduce a simple `truncate` function that removes old change
logs past a cutoff date.
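
A minimal sketch of what `truncate/2` amounts to (the query shape is an
assumption):

```elixir
import Ecto.Query

def truncate(account_id, cutoff) do
  from(c in Domain.ChangeLogs.ChangeLog,
    where: c.account_id == ^account_id and c.inserted_at < ^cutoff
  )
  |> Domain.Repo.delete_all()
end
```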

Related: https://github.com/firezone/firezone/issues/10146

---------

Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
2025-08-06 19:19:30 +00:00
Jamil
25e15bbd14 chore(portal): drop id in favor of lsn pkey (#10152)
On the `change_logs` table, we want to minimize write overhead as much
as possible. One major way to do this is to minimize the number of
indexes maintained.

Because `lsn` is guaranteed to be unique, we can use it as the primary
key, saving us an index (and a column).
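
The manual migration is roughly of this shape (a sketch only; the real
statements may differ):

```elixir
defmodule Domain.Repo.Migrations.ChangeLogsLsnPrimaryKey do
  use Ecto.Migration

  def up do
    # dropping `id` also drops its primary-key constraint and index
    execute "ALTER TABLE change_logs DROP COLUMN id"
    # `lsn` is unique, so it can carry the primary key from now on
    execute "ALTER TABLE change_logs ADD PRIMARY KEY (lsn)"
  end

  def down do
    execute "ALTER TABLE change_logs DROP CONSTRAINT change_logs_pkey"
    execute "ALTER TABLE change_logs ADD COLUMN id uuid PRIMARY KEY DEFAULT gen_random_uuid()"
  end
end
```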

**NOTE**: This migration will need to acquire a lock on the table, so
it's added as a manual migration to execute out of band. Since we don't
read ChangeLogs anywhere, it should be fine for the app servers to come
up without this migration applied.
2025-08-06 15:04:34 +00:00
Brian Manifold
80e1c3255f refactor(portal): refactor billing event handler (#10064)
Why:

* There were intermittent issues with account updates from Stripe
events, specifically when an account would update its subscription
from Starter to Team. The reason was that Stripe does not guarantee
order of delivery for its webhook events. At times we were seeing and
responding to an event that was a few seconds old after already
processing a newer event. This would have the effect of quickly
transitioning an account from Team back to Starter. This commit
refactors our event handler and adds a `processed_stripe_events` DB
table to make sure we don't process duplicate events, as well as to
prevent processing an event that was created prior to the last event
we've processed for a given account (see the sketch below).

* Along with refactoring the billing event handling, the Stripe mock
module has also been refactored to better reflect real Stripe objects.
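
A hedged sketch of the guard (names and shapes are illustrative; Stripe's
`created` field is a unix timestamp):

```elixir
# Skip events we've already seen, and events older than the newest event
# we've processed for this account.
def process_event?(event_id, event_created, last_processed_created) do
  cond do
    already_processed?(event_id) -> false
    not is_nil(last_processed_created) and event_created < last_processed_created -> false
    true -> true
  end
end
```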

Related: #8668
2025-08-05 16:56:52 +00:00
Jamil
54d91e2004 fix(portal): don't send reject_access for remaining flows (#10071)
This fixes a simple logic bug where we were mistakenly reacting to a
flow deletion event where flows still existed in the cache by sending
`reject_access`. This fixes that bug, and adds more comprehensive
logging to help diagnose issues like this more quickly in the future.

This PR also fixes the following issues found during the investigation:

- We were redundantly reacting to Token deletion in the channel pids.
This is unnecessary: we send a global socket disconnect from the Token
hook module instead.
- We had a bug that would crash the WAL consumer if a "global" token
(i.e. relay) was deleted or expired - these have no `account_id`.
- We now always use `min(max(all_conforming_policies_expiration),
token.expires_at)` when setting expiration on a new flow, to minimize the
possibility for access churn (see the sketch after this list).
- We now check that the token and gateway are still undeleted when
re-authorizing a given flow. This prevents us from failing to send
`reject_access` when the token or gateway corresponding to a flow is
deleted but the other entities would still have granted access.
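
A hedged sketch of that expiration rule (assuming every conforming policy
carries an `expires_at`; handling of never-expiring policies is omitted):

```elixir
def flow_expires_at(conforming_policies, token) do
  conforming_policies
  |> Enum.map(& &1.expires_at)
  |> Enum.max(DateTime)
  |> min_datetime(token.expires_at)
end

defp min_datetime(a, b) do
  if DateTime.compare(a, b) == :lt, do: a, else: b
end
```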


Related: https://firezone.statuspage.io/incidents/xrsm13tml3dh
Related: #10068 
Related: #9501
2025-08-01 00:03:00 +00:00
Jamil
ef3ee3aba8 fix(portal): relax gateway group perms (#10034)
This is hit by the client channel when a gateway group needs to be
hydrated, which should only require "connect gateways" permissions.
2025-07-28 19:58:11 +00:00
Jamil
f1a5af356d fix(portal): groom resource list and flows periodically (#10005)
Time-based policy conditions are tricky. When they authorize a flow, we
correctly tell the Gateway to remove access when the time window
expires.

However, we do nothing on the client to reset the connectivity state.
This means that whenever the access window was re-entered, the client
would essentially never be able to connect to it again until the
resource was toggled.

To fix this, we add a 1-minute check in the client channel that
re-checks allowed resources, and updates the client state with the
difference. This means that policies that have time-based conditions are
only accurate to the minute, but this is how they're presented anyhow.
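
A hedged sketch of the periodic re-check in the client channel (helper names
are illustrative; the real diffing lives elsewhere):

```elixir
@recheck_interval :timer.minutes(1)

def handle_info(:recheck_resources, socket) do
  Process.send_after(self(), :recheck_resources, @recheck_interval)

  allowed = allowed_resources(socket.assigns.subject)
  {added, removed} = diff_resources(socket.assigns.resources, allowed)

  for resource <- added, do: push(socket, "resource_created_or_updated", resource)
  for resource <- removed, do: push(socket, "resource_deleted", resource.id)

  {:noreply, assign(socket, :resources, allowed)}
end
```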


For good measure, we also add a periodic job that runs every minute to
delete expired Flows. This will propagate to the Gateway which, if the
access for a particular client-resource pair is determined to actually be
gone, will receive `reject_access`.

Zooming out a bit, this PR furthers the theme that:

- Client channels react to underlying resource / policy / membership
changes directly, while
- Gateway channels react primarily to flows being deleted, or the
downstream effects of a prior client authorization
2025-07-25 21:04:41 +00:00
Jamil
2959cca8ce fix(portal): use consistent wireguard psk (#10004)
Whenever a client requests a connection to a gateway, we need to generate
a preshared key that will be used for the underlying WireGuard tunnel.

When the connection setup broke or was otherwise lost _after_ the
gateway received the authorize_flow call, but _before_ the client
could receive the response (and initiate a tunnel), we would have to
wait until an ICE timeout occurred in order to reset state on the
gateway.

This is because the psk was not used to determine if this was a _new_
flow authorization. So the old authorization would be matched, and the
client would never be able to connect, since its tunnel was using the
new psk, and the gateway the old.

To fix this, we generate a secure random 32-byte `psk_base` on each
client and gateway. When a client wishes to connect to a gateway, we
compute the WireGuard preshared key as an HMAC over these two inputs.
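
A hedged sketch of the derivation (the key/message ordering and any extra
context are assumptions; only the "HMAC over the two bases" idea comes from
this PR):

```elixir
# HMAC-SHA256 yields 32 bytes, exactly the size WireGuard expects for a PSK.
def preshared_key(client_psk_base, gateway_psk_base)
    when byte_size(client_psk_base) == 32 and byte_size(gateway_psk_base) == 32 do
  :crypto.mac(:hmac, :sha256, client_psk_base, gateway_psk_base)
end
```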

This fixes the issue by ensuring that subsequent flow authorization
requests from a particular client to a particular gateway will yield the
same psk.

Related: #9999 
Related: https://github.com/firezone/infra/issues/99
2025-07-25 19:28:47 +00:00
Jamil
ccc736e63e fix(portal): reauthorize new flow when last flow deleted (#9974)
The `flows` table tracks authorizations we've made for a resource and
persists them, so that we can determine which authorizations are still
valid across deploys or hiccups in the control plane connections.

Before, when the "in-use" authorization for a resource was deleted, we
would have flapped the resource in the client, and sent `reject_access`
to the gateway. However, that would cause issues in the following edge
case:

- Client is currently connected to Resource A through Policy A
- Client websocket goes down
- Policy B is created for Resource A (for another actor group), and
Policy A is deleted by admin
- Client reconnects
- Client sees that its resource list is the same
- Gateway has since received `reject_access` because no new flows were
created for this client-resource combination

To prevent this from happening, we now try to "reauthorize" the flow
whenever the last cached flow is removed for a particular
client-resource pair. This avoids needing to toggle the resource on the
client since we won't have sent `reject_access` to the gateway.
2025-07-25 01:53:10 +00:00
Jamil
f41a6f9e0b fix(portal): don't use process.alive on remote pid (#9964)
This can be removed, since we handle the ArgumentError in the link
operation.
2025-07-22 09:42:51 -07:00
Jamil
2c3692582b fix(portal): more robust replication pid discovery (#9960)
When debugging why we're receiving "Failed to start replication
connection" errors on deploy, it was discovered that there's a bug in
the Process discovery mechanism that new nodes use to attempt to link to
the existing replication connection. When restarting an existing
`domain` container that's not doing replication, we see this:

```
{"message":"Elixir.Domain.Events.ReplicationConnection: Publication tables are up to date","time":"2025-07-22T07:18:45.948Z","domain":["elixir"],"application":"domain","severity":"INFO","logging.googleapis.com/sourceLocation":{"function":"Elixir.Domain.Events.ReplicationConnection.handle_publication_tables_diff/2","line":2,"file":"lib/domain/events/replication_connection.ex"},"logging.googleapis.com/operation":{"producer":"#PID<0.764.0>"}}
{"message":"notifier only receiving messages from its own node, functionality may be degraded","time":"2025-07-22T07:18:45.942Z","domain":["elixir"],"application":"oban","source":"oban","severity":"DEBUG","event":"notifier:switch","connectivity_status":"solitary","logging.googleapis.com/sourceLocation":{"function":"Elixir.Oban.Telemetry.log/2","line":624,"file":"lib/oban/telemetry.ex"},"logging.googleapis.com/operation":{"producer":"#PID<0.756.0>"}}
{"message":"Elixir.Domain.ChangeLogs.ReplicationConnection: Publication tables are up to date","time":"2025-07-22T07:18:45.952Z","domain":["elixir"],"application":"domain","severity":"INFO","logging.googleapis.com/sourceLocation":{"function":"Elixir.Domain.ChangeLogs.ReplicationConnection.handle_publication_tables_diff/2","line":2,"file":"lib/domain/change_logs/replication_connection.ex"},"logging.googleapis.com/operation":{"producer":"#PID<0.763.0>"}}
{"message":"Elixir.Domain.ChangeLogs.ReplicationConnection: Starting replication slot change_logs_slot","time":"2025-07-22T07:18:45.966Z","state":"[REDACTED]","domain":["elixir"],"application":"domain","severity":"INFO","logging.googleapis.com/sourceLocation":{"function":"Elixir.Domain.ChangeLogs.ReplicationConnection.handle_result/2","line":2,"file":"lib/domain/change_logs/replication_connection.ex"},"logging.googleapis.com/operation":{"producer":"#PID<0.763.0>"}}
{"message":"Elixir.Domain.Events.ReplicationConnection: Starting replication slot events_slot","time":"2025-07-22T07:18:45.966Z","state":"[REDACTED]","domain":["elixir"],"application":"domain","severity":"INFO","logging.googleapis.com/sourceLocation":{"function":"Elixir.Domain.Events.ReplicationConnection.handle_result/2","line":2,"file":"lib/domain/events/replication_connection.ex"},"logging.googleapis.com/operation":{"producer":"#PID<0.764.0>"}}
{"message":"Elixir.Domain.ChangeLogs.ReplicationConnection: Replication connection disconnected","time":"2025-07-22T07:18:45.977Z","domain":["elixir"],"application":"domain","counter":0,"severity":"INFO","logging.googleapis.com/sourceLocation":{"function":"Elixir.Domain.ChangeLogs.ReplicationConnection.handle_disconnect/1","line":2,"file":"lib/domain/change_logs/replication_connection.ex"},"logging.googleapis.com/operation":{"producer":"#PID<0.763.0>"}}
{"message":"Elixir.Domain.Events.ReplicationConnection: Replication connection disconnected","time":"2025-07-22T07:18:45.977Z","domain":["elixir"],"application":"domain","counter":0,"severity":"INFO","logging.googleapis.com/sourceLocation":{"function":"Elixir.Domain.Events.ReplicationConnection.handle_disconnect/1","line":2,"file":"lib/domain/events/replication_connection.ex"},"logging.googleapis.com/operation":{"producer":"#PID<0.764.0>"}}
{"message":"Failed to start replication connection Elixir.Domain.Events.ReplicationConnection","reason":"%Postgrex.Error{message: nil, postgres: %{code: :object_in_use, line: \"607\", message: \"replication slot \\\"events_slot\\\" is active for PID 135123\", file: \"slot.c\", unknown: \"ERROR\", severity: \"ERROR\", pg_code: \"55006\", routine: \"ReplicationSlotAcquire\"}, connection_id: 136400, query: nil}","time":"2025-07-22T07:18:45.978Z","domain":["elixir"],"application":"domain","max_retries":10,"severity":"INFO","logging.googleapis.com/sourceLocation":{"function":"Elixir.Domain.Replication.Manager.handle_info/2","line":41,"file":"lib/domain/replication/manager.ex"},"logging.googleapis.com/operation":{"producer":"#PID<0.761.0>"},"retries":0}
{"message":"Failed to start replication connection Elixir.Domain.ChangeLogs.ReplicationConnection","reason":"%Postgrex.Error{message: nil, postgres: %{code: :object_in_use, line: \"607\", message: \"replication slot \\\"change_logs_slot\\\" is active for PID 135124\", file: \"slot.c\", unknown: \"ERROR\", severity: \"ERROR\", pg_code: \"55006\", routine: \"ReplicationSlotAcquire\"}, connection_id: 136401, query: nil}","time":"2025-07-22T07:18:45.978Z","domain":["elixir"],"application":"domain","max_retries":10,"severity":"INFO","logging.googleapis.com/sourceLocation":{"function":"Elixir.Domain.Replication.Manager.handle_info/2","line":41,"file":"lib/domain/replication/manager.ex"},"logging.googleapis.com/operation":{"producer":"#PID<0.760.0>"},"retries":0}
```

Before, we relied on `start_link` telling us that there was an existing
pid running in the cluster. However, from the output above, it appears
that may not always be reliable.

Instead, we first check explicitly where the running process is and, if
alive, we try linking to it. If not, we try starting the connection
ourselves.
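
One way to sketch that flow (illustrative only; e.g. using `:global`
registration, which may not be what the manager actually uses):

```elixir
def ensure_replication_connection(name) do
  case :global.whereis_name(name) do
    pid when is_pid(pid) ->
      # another node already holds the slot: link to it and wait for it to exit
      Process.link(pid)
      {:linked, pid}

    :undefined ->
      # nobody is running it: race to start it ourselves; losing the race is fine
      start_replication_connection(name)
  end
end
```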

Once linked to the process, we react to it being torn down as well,
causing a first-one-wins scenario where all nodes will attempt to start
replication, minimizing downtime during deploys.

Now that https://github.com/firezone/infra/pull/94 is in place, I did
verify we are properly handling SIGTERM in the BEAM, so the deployment
would now go like this:

1. GCP brings up the new nodes, they all find the existing pid and link
to it
2. GCP sends SIGTERM to the old nodes
3. The _actual_ pid receives SIGTERM and exits
4. This exit propagates to all other nodes due to the link
5. Some node will "win", and the others will end up linking to it

Fixes #9911
2025-07-22 15:13:45 +00:00
Jamil
1b1bd6401a fix(portal): gracefully handle account deletions in changelog (#9955)
When an account is perma-deleted, we need to handle that with another
function clause matching the WAL message coming into the change logs
replication connection module.
2025-07-21 20:47:41 +00:00
Jamil
488cb96469 fix(portal): don't prematurely reject access (#9952)
Before:

- When a flow was deleted, we flapped the resource on the client, and
sent `reject_access` naively for the flow's `{client_id, resource_id}`
pair on the gateway. This resulted in lots of unneeded resource flappage
on the client whenever bulk flow deletions happened.

After:

- When a flow is deleted, we check if this is an active flow for the
client. If so, we flap the resource in order to trigger generation
of a new flow. If access was truly affected and that results in the loss
of a resource, we push `resource_deleted` for the update that triggered
the flow deletion (for example the resource/policy removal). On the
gateway, we only send `reject_access` if it was the last flow granting
access for a particular `client/resource` tuple.


Why:

- While the access state is still correct in the previous
implementation, we run the risk of pushing far too many resource
flaps to the client in an overly eager attempt to remove access the
client may no longer have.

cc @thomaseizinger 

Related:
https://firezonehq.slack.com/archives/C08FPHECLUF/p1753101115735179
2025-07-21 13:12:05 -07:00
Jamil
b5af132ae8 feat(portal): allow queue_target and queue_interval via ENV (#9943)
These parameters should be tuned to how long we expect "normal" queries
to take against the SQL instance. For smaller instances, "normal"
queries may take longer than 500ms, so we need to be able to configure
these via our Terraform configuration.

If not specified, the same defaults are used as before.
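
In runtime config this amounts to something like the following (env var names
and fallback values here are placeholders, not the actual defaults):

```elixir
import Config

config :domain, Domain.Repo,
  queue_target: String.to_integer(System.get_env("DATABASE_QUEUE_TARGET_MS") || "500"),
  queue_interval: String.to_integer(System.get_env("DATABASE_QUEUE_INTERVAL_MS") || "1000")
```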

Related: https://github.com/firezone/infra/pull/82
2025-07-20 12:28:04 -07:00
Jamil
f379e85e9b refactor(portal): cache access state in channel pids (#9773)
When changes occur in the Firezone DB that trigger side effects, we need
some mechanism to broadcast and handle these.

Before, the system we used was:

- Each process subscribes to a myriad of topics related to data it wants
to receive. In some cases it would subscribe to new topics based on
received events from existing topics (i.e. flows in the gateway
channel), and sometimes in a loop. It would then need to be sure to
_unsubscribe_ from these topics
- Handle the side effect in the `after_commit` hook of the Ecto function
call after it completes
- Broadcast only a simple (thin) event message with a DB id
- In the receiver, use the id(s) to re-evaluate, or lookup one or many
records associated with the change
- After the lookup completes, `push` the relevant message(s) to the
LiveView, `client` pid, or `gateway` pid in their respective channel
processes

This system had a number of drawbacks ranging from scalability issues to
undesirable access bugs:

1. The `after_commit` callback, on each App node, is not globally
ordered. Since we broadcast a thin event schema and read from the DB to
hydrate each event, this meant we had a `read after write` problem in
our event architecture, leading to the potential for lost updates. Case
in point: if a policy is updated from `resource_id-1` to
`resource_id-2`, and then back to `resource_id-1`, it's possible that,
given the right amount of delay, the gateway channel will receive two
`reject_access` events for `resource_id-1`, as opposed to one for
`resource_id-1` and one for `resource_id-2`, leading to the potential
for unauthorized access.
1. It was very difficult to ensure that the correct topics were being
subscribed to and unsubscribed from, and the correct number of times,
leading to maintenance issues for other engineers.
1. We had a nasty N+1 query problem whenever memberships were added or
removed, which resulted in essentially all access related to that
membership (so all Policies touching its actor group) being
re-evaluated and broadcast. This meant that any bulk addition or
deletion of memberships would generate so many queries that they'd
time out or consume the entire connection pool.
1. We had no durability for side-effect processing. In some places, we
were iterating over many returned records to send broadcasts.
Broadcasting is not a zero-time operation, each call takes a small
amount of CPU time to copy the message into the receiver's mailbox. If
we deployed while this was happening, the state update would be lost
forever. If this was a `reject_access` for a Gateway, the Gateway would
never remove access for that particular flow.
1. On each flow authorization, we needed to hit `us-east1` not only to
"authorize" the flow, but to log it as well. This incurs latency
especially for users in other parts of the world, which happens on
_each_ connection setup to a new resource.
1. Since we read and re-authorize access due to the thin events
broadcasted from side effects, we risk hitting thundering herd problems
(see the N+1 query problem above) where a single DB change could result
in all receivers hitting the DB at once to "hydrate" their
processing.
1. If an administrator modifies the DB directly, or if we need to run a
DB migration that involves side effects, those side effects will be lost,
because the side-effect triggers happened in `after_commit` hooks that
are only available when querying the DB through Ecto. Manually deleting
(or resurrecting) a policy, for example, would not have updated any
connected clients or gateways with the new state.


To fix all of the above, we move to the system introduced in this PR:

- All changes are now serialized (for free) by Postgres and broadcasted
as a single event stream
- The number of topics has been reduced to just one, the `account_id` of
an account. All receivers subscribe to this one topic for the lifetime
of their pid and then only filter the events they want to act upon,
ignoring all other messages (see the sketch after this list)
- The events themselves have been turned into "fat" structs based on the
schemas they represent. By making them properly typed, we can apply things
like the existing Policy authorizer functions to them as if we had just
fetched them from the DB.
- All flow creation now happens in memory and does not need to incur
a DB hit in `us-east1` to proceed.
- Since clients and gateways now track state in a push-based manner from
the DB, this means very few actual DB queries are needed to maintain
state in the channel procs, and it also means we can be smarter about
when to send `resource_deleted` and `resource_created_or_updated`
appropriately, since we can always diff between what the client _had_
access to, and what they _now_ have access to.
- All DB operations, whether they happen from the application code, a
`psql` prompt, or even via Google SQL Studio in the GCP console, will
trigger the _same_ side effects.
- We now use a replication consumer based off Postgres logical decoding
of the write-ahead log using a _durable slot_. This means that Postgres
will retain _all events_ until they are acknowledged, giving us the
ability to ensure at-least-once processing semantics for our system.
Today, the ACK is simply, "did we broadcast this event successfully".
But in the future, we can assert that replies are received before we
acknowledge the event as processed back to Postgres.
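
A hedged sketch of the new subscription model from a channel's point of view
(the struct and topic names are illustrative):

```elixir
def join("client", _payload, socket) do
  account_id = socket.assigns.subject.account.id
  :ok = Phoenix.PubSub.subscribe(Domain.PubSub, "account:#{account_id}")
  {:ok, socket}
end

# react only to the changes this channel cares about...
def handle_info({:change, %{op: :delete, old: %Domain.Policies.Policy{} = policy}}, socket) do
  {:noreply, handle_policy_deleted(socket, policy)}
end

# ...and ignore every other change broadcast for the account
def handle_info({:change, _other}, socket), do: {:noreply, socket}
```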



The tests in this PR have been updated to pass given the refactor.
However, since we are tracking more state now in the channel procs, it
would be a good idea to add more tests for those edge cases. That is
saved as a later PR because (1) this one is already huge, and (2) we
need to get this out to staging to smoke test everything anyhow.

Fixes: #9908 
Fixes: #9909 
Fixes: #9910
Fixes: #9900 
Related: #9501
2025-07-18 22:47:18 +00:00
Jamil
789a3012d6 fix(portal): only process jsonb strings (#9883)
As a followup to #9882, we need to ensure that `jsonb` columns that have
value data other than strings are not decoded as jsonb. An example of
when this happens is when Postgres sends an `:unchanged_toast` to
indicate the data hasn't changed.
2025-07-15 18:06:13 -07:00
Jamil
cce21a8dea fix(portal): handle jsonb for embedded schemas (#9882)
In #9664, we introduced the `Domain.struct_from_params/2` function which
converts a set of params containing string keys into a provided struct
representing a schema module. This is used to broadcast actual structs
pertaining to WAL data as opposed to simple string encodings of the
data.

The problem is that the function was a bit too naive and failed to properly
cast embedded schemas, resulting in all embedded schemas on the root
struct being `nil` or `[]`.

To fix this, we need to do two things:

1. We now decode JSON/JSONB fields from binaries (strings) into actual
lists and maps in the replication consumer module for downstream
processors to use
2. We update our `struct_from_params/2` function to properly cast
embedded schemas from these lists and maps using Ecto.Changeset's
`apply_changes` function, which uses the same logic to instantiate the
schemas as if we were saving a form or API request.
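
Put together, the flow looks roughly like this for a single (hypothetical)
schema with an embedded `conditions` field; the real `struct_from_params/2`
is generic over schemas:

```elixir
params = %{
  "id" => Ecto.UUID.generate(),
  # jsonb arrives from the WAL as a string and must be decoded first
  "conditions" => JSON.decode!(conditions_jsonb)
}

policy =
  %Domain.Policies.Policy{}
  |> Ecto.Changeset.cast(params, [:id])
  |> Ecto.Changeset.cast_embed(:conditions, with: &Domain.Policies.Condition.changeset/2)
  |> Ecto.Changeset.apply_changes()
```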

Lastly, tests are added to ensure this works under various scenarios,
including nested embedded schemas which we use in some places.

Fixes #9835

---------

Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-15 23:50:27 +00:00
Thomas Eizinger
cb497a7435 fix(portal): use correct password generation algorithm (#9874)
In #9870, the password generation algorithm was broken. The correct
order of the elements in the hash is: expiry, stamp_secret, salt. The
relay expects this order when it re-generates the password to validate
the message.

Due to a different bug in our CI system, we weren't actually checking
for warnings / errors in our perf-test suite:
https://github.com/firezone/firezone/actions/runs/16285038111/job/45982241021#step:9:66
2025-07-15 13:39:31 +00:00
Brian Manifold
0d9e865ea8 feat(portal): Update portal telemetry (#9868)
Why:

* Adding more BEAM VM metrics to give us better insight as to how our
BEAM cluster is running since we're in the middle of making some
moderately large architectural changes to the application.
2025-07-15 02:11:59 +00:00
Jamil
17d7e29b81 fix(portal): use public key for TURN creds (#9870)
As a followup to #9856, after talking with @bmanifold, we determined
using the public_key as the username for TURN credentials is a safer bet
because:

- It's by definition public and therefore does not need to be obfuscated
- It's shorter-lived than the token, especially for the gateway
- It essentially represents the data plane connection for client/gateway
and naturally rotates along with the key state for those
2025-07-15 01:48:02 +00:00
Jamil
1e577d31b9 fix(portal): use reproducible relay creds (#9857)
When giving TURN credentials to clients and gateways, it's important
that they remain consistent across hiccups in the portal connection so
that relayed connections are not interrupted during a deploy, or if the
user's internet is flaky, or the GCP load balancer decides to disconnect
the client/gateway.

Prior to this PR, that was not the case because we essentially tied TURN
credentials, required for data plane packet flows, to the WebSocket
connection, a control plane element. This happened because we generated
random `expires_at` and `salt` elements on _each_ connection to the
portal.

Instead, what we do now is make these reproducible and tied to the auth
token by hashing then base64-encoding it. The expiry is tied to the
auth-token's expiry.


Fixes #9856
2025-07-14 17:42:11 +00:00
Jamil
26cfab3b88 fix(portal): reply to all wal keepalives with ack (#9828)
The Postgres logical decoding protocol is lacking documentation and
unclear about keepalive behavior when `wal_sender_timeout` is set to 0
(disabled). We have it disabled so that Postgres doesn't terminate our
connection for falling too far behind.

What we failed to take into account is that on some installations,
Postgres _never_ requests an immediate reply (keepalive with the reply
now bit set) if wal_sender_timeout is disabled. This means we would
always reply with the empty message, failing to advance the position of
the LSN.

In this PR, we fix that to always respond to every keepalive message
with a standby status update to advance the LSN position.
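
Following the `Postgrex.ReplicationConnection` docs, the handler now looks
roughly like this (the real module reports its own last-processed LSN rather
than `wal_end + 1`):

```elixir
@epoch DateTime.to_unix(~U[2000-01-01 00:00:00Z], :microsecond)

# Primary keepalive ('k'): always answer with a standby status update ('r'),
# regardless of whether the reply-requested flag is set.
def handle_data(<<?k, wal_end::64, _server_clock::64, _reply_requested>>, state) do
  lsn = wal_end + 1
  now = System.os_time(:microsecond) - @epoch

  {:noreply, [<<?r, lsn::64, lsn::64, lsn::64, now::64, 0>>], state}
end
```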

Relevant documentation:
https://www.postgresql.org/docs/current/protocol-replication.html#PROTOCOL-REPLICATION-STANDBY-STATUS-UPDATE
2025-07-11 14:32:56 +00:00
Jamil
cfcd5b3b8f chore(portal): track more WAL monitoring info (#9826)
When debugging WAL processing, it's helpful to know what the last
replied LSN was and when the last keepalive message was received from
postgres.
2025-07-10 18:30:34 -07:00
Jamil
080818c466 fix(portal): fix reply for remaining wal message (#9824)
Missed one reply fix from #9821
2025-07-10 21:46:05 +00:00
Jamil
fb0dd36dbc chore(portal): ignore expected libcluster issue (#9822)
Adds another expected error message to the ignore list. We have a
different (less noisy) log that will alert us if the cluster is below
threshold.
2025-07-10 21:35:18 +00:00
Jamil
704ff9fd7a fix(portal): send empty reply for incoming wal messages (#9821)
In #9733, we changed the replies to the handle_data messages, which seems
to have caused Postgres to not respect our acknowledgements sent in the
keepalive.

To fix this, we revert to sending an empty message in response to write
messages.
2025-07-10 19:50:00 +00:00
Jamil
b20c141759 feat(portal): add batch-insert to change logs (#9733)
Inserting a change log incurs some minor overhead for sending the query
over the network and reacting to its response. In many cases, this makes
up the bulk of the actual time it takes to run the change log insert.

To reduce this overhead and avoid any kind of processing delay in the
WAL consumers, we introduce batch insert functionality with size `500`
and timeout `30` seconds. If either of those two are hit, we flush the
batch using `insert_all`.

`insert_all` does not use `Ecto.Changeset`, so we need to be a bit more
careful about the data we insert, and check the inserted LSNs to
determine what to update the acknowledged LSN pointer to.
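
A hedged sketch of the batching logic (the state shape and names are
illustrative):

```elixir
@flush_size 500
# a timer (not shown) also triggers flush/1 every @flush_interval
@flush_interval :timer.seconds(30)

def buffer(row, %{batch: batch} = state) when length(batch) + 1 >= @flush_size do
  :ok = flush([row | batch])
  %{state | batch: []}
end

def buffer(row, state), do: %{state | batch: [row | state.batch]}

def flush(rows) do
  {_count, _} = Domain.Repo.insert_all("change_logs", Enum.reverse(rows))
  :ok
end
```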

The functionality to determine when to call the new `on_flush/1`
callback lives in the replication_connection module, but the actual
behavior of `on_flush/1` is left to the child modules to implement. The
`Events.ReplicationConnection` module does not use flush behavior, and so
does not override the default, which is to not use a flush mechanism.

Related: #949
2025-07-05 19:03:28 +00:00
Jamil
c869bcfe13 chore(portal): tag Relay WAL todos (#9767)
These aren't a priority to clean up right now, but I wanted to tag them
so I don't forget to do it later on.
2025-07-04 22:30:06 +00:00
Jamil
2a38c532af chore(portal): remove gateway masquerade option (#9790)
AFAIK these are ignored by connlib. Instead, we configure masquerading
on the host.
2025-07-04 21:08:11 +00:00