In order to support multiple protocols for upstream DNS resolvers, we
deprecate the `upstream_dns` field in the client's `init`
message and introduce two new fields:
- `upstream_do53`
- `upstream_doh`
For now, only `upstream_do53` is populated and `upstream_doh` is always
empty.
On the client side, we only introduce the `upstream_do53` field for now
and fall back to `upstream_dns` if it is empty. This makes this PR
backwards-compatible with the portal version that is currently deployed
in production. Thus, this PR can be merged even prior to deploying the
portal.
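For illustration, here is a rough sketch of what the portal side of this could look like when building the `init` payload; the helper and field-sourcing here are assumptions, not the actual portal code:

```elixir
# Hypothetical sketch of rendering the new fields in the `init` message.
# Only `upstream_do53` is populated for now; `upstream_doh` stays empty, and
# the deprecated `upstream_dns` is still sent so older clients keep working.
defp render_upstream_dns(do53_servers) do
  %{
    upstream_dns: do53_servers,
    upstream_do53: do53_servers,
    upstream_doh: []
  }
end
```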
Internally, we prepare connlib's abstractions to deal with different
kinds of upstreams by renaming all existing "upstream DNS" references to
`upstream_do53`: DNS over port 53. That includes UDP as well as TCP DNS
resolution.
Resolves: #10791
---------
Co-authored-by: Jamil Bou Kheir <jamilbk@users.noreply.github.com>
Now that we have an APT repository for Debian / Ubuntu packages, we
should also tell our users about it. We introduce a new "Debian /
Ubuntu" tab on the deployments screen in the portal. The tab is selected
by default as it should provide the best user experience for manually
deployed Gateways:
- Updates are as easy as `sudo apt upgrade`
- The systemd unit file and token are fully managed in the background
Here is what the new tab looks like:
<img width="679" height="786" alt="image"
src="https://github.com/user-attachments/assets/da69fc55-6a6a-476d-bed4-634dd05df8bc"
/>
Resolves: #10701
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
DNS servers are standardised to be contacted on port 53. This is also
hard-coded within `connlib` when we contact an upstream server. As such,
we should disallow users from entering a custom port for upstream DNS
servers. Luckily - or perhaps because it doesn't presently work - no
users in production have actually configured a port.
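For reference, a minimal sketch of the kind of changeset validation this implies, assuming upstream resolvers are stored as bare IP address strings (the field and helper names are made up):

```elixir
# Hypothetical sketch: reject upstream DNS server addresses that carry a port,
# since connlib always contacts upstream resolvers on port 53.
defp validate_no_custom_port(changeset, field) do
  Ecto.Changeset.validate_change(changeset, field, fn ^field, address ->
    case :inet.parse_address(String.to_charlist(address)) do
      # A bare IPv4/IPv6 address parses cleanly and cannot carry a port.
      {:ok, _ip} -> []
      # Anything else (e.g. `1.1.1.1:5353` or `[::1]:5353`) is rejected.
      {:error, _} -> [{field, "must not include a port; port 53 is always used"}]
    end
  end)
end
```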
Resolves: #8330
In order to make the flow logs emitted by the Gateway more useful and
self-contained, we extend the `authorize_flow` message sent to the
Gateway with some more context around the Client and Actor of that flow.
In particular, we now also send the following to the Gateway:
- `client_version`
- `device_os_version`
- `device_os_name`
- `device_serial`
- `device_uuid`
- `device_identifier_for_vendor`
- `device_firebase_installation_id`
- `identity_id`
- `identity_name`
- `actor_id`
- `actor_email`
We only extend the `authorize_flow` message with these additional
properties. The legacy messages for 1.3.x Clients remain as is. For
those Clients, the above properties will be empty in the flow logs.
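Sketched in Elixir, the extra context amounts to a map like the one below, merged into the existing `authorize_flow` payload; the source struct fields are assumptions and may not match the actual schema:

```elixir
# Illustrative only: extra Client/Actor context attached to `authorize_flow`.
defp flow_context(client, identity, actor) do
  %{
    client_version: client.last_seen_version,
    device_os_version: client.last_seen_os_version,
    device_os_name: client.last_seen_os_name,
    device_serial: client.device_serial,
    device_uuid: client.device_uuid,
    device_identifier_for_vendor: client.identifier_for_vendor,
    device_firebase_installation_id: client.firebase_installation_id,
    identity_id: identity.id,
    identity_name: identity.name,
    actor_id: actor.id,
    actor_email: actor.email
  }
end
```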
Resolves: #10690
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
On clients prior to https://github.com/firezone/firezone/pull/10604,
sending `resource_created_or_updated` for a site change would cause a
panic. With the introduction of that PR, the portal can now send this
message without needing to "toggle" the resource first by sending
`resource_deleted`.
Fixes: #10593
Why:
* Now that references to the soft-delete fields have been removed from
the codebase and soft-deleted rows have been removed from the DB, we can
remove any remaining traces of the soft-delete functionality.
Resolves: #8187
When signing in, it's a good idea to clear any previous session cookie
and regenerate it, preventing any unchecked data in a possibly fixated
session cookie from being used.
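In Plug/Phoenix terms this boils down to the usual renew-on-login pattern; a minimal sketch (the helper name is ours):

```elixir
# On sign-in, drop any data from a previous (possibly fixated) session and
# force a fresh session ID before storing anything for the new session.
defp renew_session(conn) do
  conn
  |> Plug.Conn.configure_session(renew: true)
  |> Plug.Conn.clear_session()
end
```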
Why:
* In previous commits, the portal code had been updated to use hard
deletion rather than soft deletion of data. The fields used for soft
deletion were still kept in the DB and the code to allow for a
zero-downtime rollout and an easy rollback if necessary. To continue
that work, the portal code has now been updated to remove any reference
to the soft-deleted fields (e.g. deleted_at, persistent_id, etc.).
While the code has been updated, the actual data in the DB will need to
remain for now, once again to allow for a zero-downtime rollout. Once
this commit has been deployed to production, another PR can follow to
remove the columns from the necessary tables in the DB.
Related: #8187
When Okta returned a 4xx status code from the API, we had updated the
error handler to grab the errors from the body or headers and return
those. However, the caller was expecting an explicit empty string for
401 and 403 errors in order to trigger the email-send behavior.
Since that wasn't being matched, we were only logging the error
internally and continuing to retry the sync indefinitely without sending
the user an email.
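A hypothetical sketch of the intended handling, keyed on the status code rather than on an empty error string (`notify_fun` stands in for the real email-sending path):

```elixir
# 401/403 means our Okta credentials are no longer valid: notify the admins.
defp handle_sync_error(provider, {status, _errors}, notify_fun)
     when status in [401, 403] do
  notify_fun.(provider)
end

# Anything else is logged and the sync is retried on the next run.
defp handle_sync_error(_provider, error, _notify_fun) do
  require Logger
  Logger.warning("Directory sync failed: #{inspect(error)}")
end
```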
Fixes: #8744
Fixes: #9825
Fixes an issue introduced in #10510 where Web functions (like
VerifiedRoutes) cannot be called from Domain because they are not
available in the release.
This happens to work in dev mode because everything is available under
the same dev context.
Since Elixir 1.18, JSON encoding and decoding support is included in the
standard library. This is built on OTP's native JSON support, which is
often faster than other implementations.
It mostly has the same API as the popular Jason library, differing
mainly in the format of the error responses returned when decoding
fails.
To minimize dependence on external libraries, we remove the Jason lib in
favor of this built-in support.
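The call sites translate almost mechanically; the main difference shows up on failed decodes (the exact error tuple from the built-in decoder is approximate here):

```elixir
# Encoding is a drop-in replacement:
Jason.encode!(%{hello: "world"})  # => ~s({"hello":"world"})
JSON.encode!(%{hello: "world"})   # => ~s({"hello":"world"})

# Decoding differs mainly in the error term returned for invalid input:
Jason.decode("{")  # => {:error, %Jason.DecodeError{...}}
JSON.decode("{")   # => {:error, reason} where reason is a plain tuple
                   #    such as {:unexpected_end, 1} (shape approximate)
```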
Fixes: #8011
This doesn't appear to be used anywhere and eliminates one compile
warning due to the seemingly unmaintained
[sizeable](https://github.com/arvidkahl/sizeable) dep.
Bumps Phoenix to 1.8 and Phoenix LiveView to 1.1. As part of the bump a
number of issues had to be addressed. Comments inline provide more
context.
Supersedes #10475
Supersedes #10448
Bumps
[@fontsource-variable/source-sans-3](https://github.com/fontsource/font-files/tree/HEAD/fonts/variable/source-sans-3)
from 5.2.8 to 5.2.9.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Clients more than one minor version away from a given Gateway version
won't be able to connect, so it would be useful to help admins recognize
when this might be the case and encourage them to upgrade.
We accomplish that with two small UX improvements in this PR:
- In the outdated gateways email, we show them a count of clients that
will no longer be able to connect to the new gateway version, linking
them to a sorted view of the clients table
- In the clients table, we add a new sortable `version` column to allow
admins to see which clients are outdated
Fixes: #7727
Fixes: #10385
---------
Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
In Firezone, a Client requests an "access authorization" for a Resource
on the fly when it sees the first packet for said Resource going through
the tunnel. If we don't have a connection to the Gateway yet, this is
also where we will establish a connection and create the WireGuard
tunnel.
In order for this to work, the access authorization state between the
Client and the Gateway MUST NOT get out of sync. If the Client thinks it
has access to a Resource, it will just route the traffic to the Gateway.
If the access authorization on the Gateway has expired or vanished
otherwise, the packets will be black-holed.
Starting with #9816, the Gateway sends ICMP errors back to the
application whenever it filters a packet. This can happen either because
the access authorization is gone or because the traffic wasn't allowed
by the specific filter rules on the Resource.
With this patch, the Client will attempt to create a new flow (i.e.
re-authorize traffic) for this Resource whenever it sees such an ICMP
error, thereby synchronizing the view of the world between Client and
Gateway should they ever fall out of sync.
Testing turned out to be a bit tricky. If we let the authorization on
the Gateway lapse naturally, the portal will also toggle the Resource
off and on on the Client, resulting in "flushing" the current
authorizations. Additionally, if the Client had access to only one
Resource, the Gateway will gracefully close the connection, also
resulting in the Client creating a new flow for the next packet.
To actually trigger this new behaviour we need to:
- Access at least two resources via the same Gateway
- Directly send `reject_access` to the Gateway for this particular
resource
To achieve this, we dynamically eval some code on the API node and
instruct the Gateway channel to send `reject_access`. The connection
stays intact because there is still another active access authorization
but packets for the other resource are answered with ICMP errors.
To achieve a safe roll-out, the new behaviour is feature-flagged. In
order to still test it, we now also allow feature flags to be set via
env variables.
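A minimal sketch of what env-driven feature flags can look like in runtime config; the application, key, and variable names below are illustrative, not the actual flag:

```elixir
# config/runtime.exs (illustrative): let a feature flag be forced on via an
# environment variable so the new behaviour can be exercised in tests/staging.
import Config

config :domain, :feature_flags,
  icmp_error_reauth: System.get_env("FEATURE_FLAG_ICMP_ERROR_REAUTH") == "true"
```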
Resolves: #10074
---------
Co-authored-by: Mariusz Klochowicz <mariusz@klochowicz.com>
In order to allow the portal to more easily classify what kind of
component is connecting, we extend the `get_user_agent` header to
include a component type instead of the generic `connlib/`.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
When redirecting to paths that don't have LiveViews attached to them,
LiveView complains and emits a warning. To reduce alarm noise this PR
attempts to fix the issue.
- recreates the flows actor_group_membership index that didn't get
created due to name collision with an existing index
- adds missing resource_id, actor_group_id indexes on policies
- removes redundant `resource_id` index on resource_connections since
there's a composite index that matches already
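In Ecto migration terms this is roughly the following (index and column names are abridged and may not match the repo exactly):

```elixir
defmodule Domain.Repo.Migrations.FixFlowAndPolicyIndexes do
  use Ecto.Migration

  def change do
    # Recreate the flows index that was skipped due to a name collision.
    create index(:flows, [:actor_group_membership_id],
             name: :flows_actor_group_membership_id_idx
           )

    # Missing FK indexes used for policy lookups and cascade deletes.
    create index(:policies, [:resource_id])
    create index(:policies, [:actor_group_id])

    # Redundant: already covered by an existing composite index.
    drop index(:resource_connections, [:resource_id])
  end
end
```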
Related: #10396
Why:
* Now that hard-delete has been rolled out, we need to make sure that
all cascade deletes are efficient. Some of the foreign key references
didn't have indexes but needed them.
Fixes: #10393
In order to support the new, upcoming directory sync implementations, we
need the ability to batch upsert auth_identities, actors, actor_groups,
and actor_group_memberships. We also need the ability to delete entities
that were not upserted at the tail end of a sync job iteration in order
to remove entities that are no longer in the directory.
To support this, we add these functions and related tests here.
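The heart of the batch upsert looks roughly like this (the schema module, conflict target, and `last_synced_at` bookkeeping column are placeholders for illustration):

```elixir
# Illustrative sketch: upsert one batch of identities, then sweep anything
# belonging to this provider that the current sync pass did not touch.
def upsert_identities(repo, provider, attrs_list, synced_at) do
  {_count, upserted} =
    repo.insert_all(
      AuthIdentity,
      Enum.map(attrs_list, &Map.put(&1, :last_synced_at, synced_at)),
      on_conflict: {:replace_all_except, [:id, :inserted_at]},
      conflict_target: [:provider_id, :provider_identifier],
      returning: [:id]
    )

  import Ecto.Query

  # Rows not upserted in this pass are no longer present in the directory.
  repo.delete_all(
    from(i in AuthIdentity,
      where: i.provider_id == ^provider.id,
      where: is_nil(i.last_synced_at) or i.last_synced_at < ^synced_at
    )
  )

  upserted
end
```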
Related: #6294
---------
Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Why:
* During the refactor to move to hard-deleting data in the portal, there
were a couple of places in the directory sync job that were inconsistent
where deletion was concerned.
In preparation for transitioning the portal from soft-delete to
hard-delete, some updates to the foreign key constraints were required.
Most of these updates are adding `ON DELETE CASCADE` constraints and
relevant indexes. These new constraints should have no effect on the
current portal code as soft-deletes are still being used.
One other update in this PR is changing the FK constraint names on the
clients table. The names were `devices_*` due to the table originally
being called `devices`. This PR updates them to `clients_*`.
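Each of these boils down to a migration of roughly this shape (the table, column, and constraint names below are examples, not the exact constraints touched):

```elixir
defmodule Domain.Repo.Migrations.CascadeAndRenameClientFks do
  use Ecto.Migration

  def change do
    # Drop the legacy devices_* constraint and recreate it under the
    # clients_* name with ON DELETE CASCADE.
    drop constraint(:clients, "devices_actor_id_fkey")

    alter table(:clients) do
      modify :actor_id,
             references(:actors, type: :binary_id, on_delete: :delete_all),
             null: false
    end
  end
end
```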
Related: #8187
When filters are updated for a Resource, we need to first adapt the
resource before rendering it down to the Gateway. Otherwise, the gateway
may see a Resource that does not match its expected schema.
Napkin math shows that we can save substantial memory (~3x or more) on
the API nodes as connected clients/gateways grow if we just store the
fields we need in order to keep the client and gateway state maintained
in the channel pids.
To facilitate this, we create new `Cacheable` structs that represent
their `Domain` cousins, which use byte arrays for `id`s and strip out
unused fields.
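Conceptually, a `Cacheable` struct is just a trimmed-down mirror of its `Domain` schema; a sketch (the chosen fields and module name are assumptions):

```elixir
defmodule Domain.Cache.Cacheable.Resource do
  # Keep only what the channel needs, and store UUIDs as 16-byte binaries
  # (via Ecto.UUID.dump/1) instead of 36-byte strings.
  defstruct [:id, :address, :filters]

  def from_domain(resource) do
    %__MODULE__{
      id: dump_uuid!(resource.id),
      address: resource.address,
      filters: resource.filters
    }
  end

  defp dump_uuid!(uuid) do
    {:ok, bin} = Ecto.UUID.dump(uuid)
    bin
  end
end
```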
Additionally, all business logic involved with maintaining these caches
is now contained within two modules: `Domain.Cache.Client` and
`Domain.Cache.Gateway`, and type specs have been added to aid in static
analysis and code documentation.
Comprehensive testing is now added not only for the cache modules, but
for their associated channel modules as well to ensure we handle
different kinds of edge cases gracefully.
The `Events` nomenclature was renamed to `Changes` to better name what
we are doing: Change-Data-Capture.
Lastly, the following related changes are included in this PR since they
were "in the way" so to speak of getting this done:
- We save the last received LSN in each channel and drop the `change`
with a warning if we receive it twice in a row or we receive it out of
order (sketched below)
- The client/gateway version compatibility calculations have been moved
to `Domain.Resources` and `Domain.Gateways` and have been simplified to
make them easier to understand and maintain going forward.
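The LSN guard mentioned above is conceptually just this (a sketch; the channel state shape is assumed):

```elixir
# Drop any change whose LSN is not strictly greater than the last one seen.
defp handle_change(%{lsn: lsn} = change, %{last_lsn: last_lsn} = state) do
  if is_nil(last_lsn) or lsn > last_lsn do
    {:process, change, %{state | last_lsn: lsn}}
  else
    require Logger
    Logger.warning("Dropping duplicate or out-of-order change",
      lsn: lsn,
      last_lsn: last_lsn
    )

    {:skip, state}
  end
end
```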
Related: #10174
Fixes: #9392
Fixes: #9965
Fixes: #9501
Fixes: #10227
---------
Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Docker for Mac finally supports IPv6 in general availability. It's time
to add IPv6 to our suite of integration tests.
The thinking behind this PR is to try not to slow down CI much, if at
all, by testing IPv6 side by side with the existing IPv4 tests.
More comprehensive testing is being developed in #10131 that will test
things like IPv4-in-6 relaying, client / gateway IP stack mismatches,
and so forth.
On the `change_logs` table, we want to minimize write overhead as much
as possible. One major way to do this is to minimize the number of
indexes maintained.
Because `lsn` is guaranteed to be unique, we can use it as the primary
key, saving us an index (and a column).
**NOTE**: This migration will need to acquire a lock on the table, so
it's added as a manual migration to execute out of band. Since we don't
read ChangeLogs anywhere, it should be fine for the app servers to come
up without this migration applied.
Why:
* There were intermittent issues with accounts updates from Stripe
events. Specifically, when an account would update its subscription
from Starter to Team. The reason was that Stripe does not guarantee the
order of delivery for its webhook events. At times we
were seeing and responding to an event that was a few seconds old after
processing a newer event. This would have the effect of quickly
transitioning an account from Team back to Starter. This commit
refactors our event handler and adds a `processed_stripe_events` DB
table to make sure we don't process duplicate events as well as prevent
processing an event that was created prior to the last event we've
processed for a given account.
* Along with refactoring the billing event handling, the Stripe mock
module has also been refactored to better reflect real Stripe objects.
Related: #8668
We were mistakenly reacting to a flow deletion event by sending
`reject_access` even though flows still existed in the cache. This PR
fixes that bug and adds more comprehensive logging to help diagnose
issues like this more quickly in the future.
This PR also fixes the following issues found during the investigation:
- We were redundantly reacting to Token deletion in the channel pids.
This is unnecessary: we send a global socket disconnect from the Token
hook module instead.
- We had a bug that would crash the WAL consumer if a "global" token
(i.e. relay) was deleted or expired - these have no `account_id`.
- We now always use `min(max(all_conforming_policies_expiration),
token.expires_at)` when setting expiration on a new flow to minimize the
possibility for access churn (see the sketch after this list)
- We now check to ensure the token and gateway are still undeleted when
re-authorizing a given flow. This prevents us from failing to send
`reject_access` when the token or gateway corresponding to a flow is
deleted but the other entities would still have granted access.
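That expiration clamp is, in sketch form (assuming `DateTime` fields and ignoring policies without an expiration):

```elixir
# A new flow expires at the latest conforming-policy expiration, but never
# later than the token itself.
defp flow_expires_at(conforming_policies, token) do
  latest_policy_expiration =
    conforming_policies
    |> Enum.map(& &1.expires_at)
    |> Enum.max(DateTime)

  Enum.min([latest_policy_expiration, token.expires_at], DateTime)
end
```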
Related: https://firezone.statuspage.io/incidents/xrsm13tml3dh
Related: #10068
Related: #9501
The full `account` struct is only used to render the client's interface,
and doesn't need to be stored in the `client` struct when the `subject`
struct already tracks it.