When a customer signs up for Starter or Team, we don't enable tax
calculation by default. This means customers can upgrade to Team, start
paying invoices, and we won't collect taxes.
This creates a management burden and a possible tax liability, since I need
to reconcile these manually.
Instead, since we have Stripe Tax configured on our account, we can
enable automatic tax calculation when the subscription is created. Any
products (Starter/Team/Enterprise) in the subscription will therefore
collect tax automatically and appropriately.
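For illustration, a minimal sketch of what this looks like at subscription-creation time, assuming the `stripity_stripe` client (the customer and price IDs are placeholders; `automatic_tax` is Stripe's parameter):

```elixir
# Sketch only: create the subscription with Stripe Tax enabled.
customer_id = "cus_123"

{:ok, _subscription} =
  Stripe.Subscription.create(%{
    customer: customer_id,
    items: [%{price: "price_team_monthly"}],
    # Stripe Tax then computes the applicable rate per invoice.
    automatic_tax: %{enabled: true}
  })
```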
In most cases in the US, the tax rate is 0. For EU B2B sales, the tax
rate for us is also 0 (reverse charge basis). If we sell a Team
subscription to an EU individual, however, we need to collect VAT.
There doesn't seem to be a way to block consumer EU transactions in
Stripe, so we'll likely need to register for VAT in the EU if we cross
the reporting threshold.
A regression was introduced in d0f0de0f8d
whereby we started using the updated policy record when broadcasting
the `delete_policy` and `expire_flows` events. This caused a security
issue: if the actor group changed from `Everyone` to `thomas`, for
example, we'd only expire flows and broadcast policy removal (i.e.
resource removal) events for `thomas`, while `Everyone` would still have
access granted by the old policy.
To fix this, we broadcast the destructive events to the old policy, so
that its `actor_group_id` and `resource_id` are used, and not the new
policy's.
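A minimal sketch of the shape of the fix (helper, topic, and PubSub names here are illustrative, not the actual module API):

```elixir
# Broadcast destructive side effects using the OLD policy, so the
# actor group it granted access to is the one that loses access.
defp broadcast_policy_update(old_policy, new_policy) do
  broadcast(old_policy.actor_group_id, {:expire_flows, old_policy.id, old_policy.resource_id})
  broadcast(old_policy.actor_group_id, {:delete_policy, old_policy.id})

  # The NEW policy's actor group is the one that gains access.
  broadcast(new_policy.actor_group_id, {:create_policy, new_policy.id})
end

defp broadcast(actor_group_id, payload) do
  Phoenix.PubSub.broadcast(Domain.PubSub, "actor_group:#{actor_group_id}", payload)
end
```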
Fixes #8549
After removing some of the functionality for viewing the Internet
Resource, a customer was confused about where to find it again.
This places an `Internet` section on the Resources index page (similar
to the Sites page) with short help text and an action button to view the
Internet Resource.
This also adds a convenient helper that allows us to route to
`/#{account}/resources/internet` for a nicer-looking URL that users can
bookmark if needed.
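Roughly, the helper and route look like this (module names are hypothetical):

```elixir
# Path helper for a bookmarkable URL (assumes `use Web, :verified_routes`).
def internet_resource_path(account), do: ~p"/#{account}/resources/internet"

# And the corresponding route in router.ex:
live "/:account_id_or_slug/resources/internet", Web.Resources.Internet.Show
```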
<img width="1423" alt="Screenshot 2025-03-19 at 11 52 31 PM"
src="https://github.com/user-attachments/assets/f2da1c31-92b2-429e-832f-73ddd0524155"
/>
Fixes #8479
Why:
* This commit will allow account admins to send a request through the
Firezone portal to schedule a deletion of their account, rather than
having the account admins email their request manually. Doing this
through the portal allows us to verify that the request actually came
from an admin of the account.
I was debugging some of this just now and realized our naming / comments
are incorrect here, so I thought I'd open a PR to tidy things up for the
next person reading this.
Resource CIDRs actually occupy the `100.96.0.0/11` range (and IPv6
equivalent), but the portal doesn't generate these.
Why:
* Previously, when running a directory sync with the Google Workspace
IdP adapter, if a service account had been configured but there was a
problem getting an access token for the service account, the sync job
would fall back to using a personal access token. We no longer want to
rely on any personal access token once a service account has been
configured. This commit makes sure that, if a service account is
configured, there is no way to fall back to a personal access token.
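A sketch of the new behavior (config/field names are hypothetical):

```elixir
# Once a service account is configured, its token is authoritative:
# errors propagate and fail the sync instead of falling back.
defp fetch_access_token(provider) do
  case provider.adapter_config["service_account_json_key"] do
    nil ->
      # No service account configured: the personal token is the only option.
      {:ok, provider.adapter_state["access_token"]}

    key ->
      # Service account configured: never fall back to a personal token.
      fetch_service_account_access_token(key)
  end
end
```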
Fixes #8409
When deploying a Gateway from the admin portal UI, we show various
environment variables required for setup. Until now, we've relied on the
`/var/lib/firezone` persistence method for identifying the Gateway.
However, this can cause issues on some systems that don't have writable
access to `/var/lib/firezone`, or on old versions of systemd that don't
support sandboxed access to this directory.
This PR updates each deployment method to use `FIREZONE_ID` everywhere
instead. Additionally, since the Docker upgrade script needs to
reinvoke the new container using the same arguments (more or less) as
the install, we need to extract the old `/var/lib/firezone/gateway_id`
file out of the existing container if it exists, and try to insert it
into the upgraded container.
Tested both scripts, including upgrades for the Docker script.
Fixes: #8471
Finishes up the Internet Resource migration by:
- Enforcing no internet resources in non-internet sites
- Enforcing no regular resources in internet sites
- Removing the prompt to migrate
~~I've already migrated the existing internet resources in customers'
accounts. Everyone who was using the internet resource had already
migrated.~~
Edit: I started to head down that path, then decided doing this here in
a data migration was going to be a better approach.
Fixes #8212
[Step
2](https://cloud.google.com/sql/docs/postgres/pg-audit#set-pgaudit-flag-values)
of the pgaudit setup guide for Google Cloud SQL. It would be good to
have detailed pgaudit logs on the master application instance in case
things go wrong.
Notably, this prevents erroring out when the `pgaudit` extension is not
available, which is the default. Enabling the `pgaudit` extension for
our dev instance is left as a future endeavor.
Supersedes #5442
The submit button on the Settings -> DNS page has a couple of UX issues
with the new search domain section:
- It's ambiguous what `Save` is actually saving
- The spacing makes it look like it's only saving upstream resolvers
This PR introduces a simple fix that addresses both issues by:
- Updating the button text to `Save DNS Settings`
- Increasing the spacing between the submit button and the form elements
- Slightly decreasing the spacing between the `search domain` and `upstream
resolvers` inputs
<img width="968" alt="Screenshot 2025-03-14 at 12 06 02 AM"
src="https://github.com/user-attachments/assets/651f54c8-3b5f-4747-ad3a-e2ae32eccbf0"
/>
Related: #5248
Why:
* This commit updates the 500 error page in the portal to have the same
look and feel as the 404 error page, keeping the portal UI consistent.
- Adds a simple text input to configure search domains ("default DNS
suffix") in the Settings -> DNS page.
- Sends the `search_domain` field as part of the client's `init` message
- Fixes a minor UI alignment inconsistency for the upstream resolvers
field so that the total form width and `New resolver` button width are
the same.
<img width="1137" alt="Screenshot 2025-03-09 at 10 56 56 PM"
src="https://github.com/user-attachments/assets/a1d5a570-8eae-4aa9-8a1c-6aaeb9f4c33a"
/>
Fixes #8365
Introduces a simple `search_domain` field embedded into our existing
`Accounts.Account.Config` embedded schema. This will be sent to clients
to append to single-label DNS queries.
UI and API changes will come in subsequent PRs: this one adds the field
and (lots of) validations only.
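A sketch of the shape (full module name approximated; validations abridged, the real changeset covers many more cases):

```elixir
defmodule Domain.Accounts.Account.Config do
  use Ecto.Schema
  import Ecto.Changeset

  @primary_key false
  embedded_schema do
    field :search_domain, :string
  end

  def changeset(config, attrs) do
    config
    |> cast(attrs, [:search_domain])
    |> validate_length(:search_domain, max: 255)
    # Require at least two valid DNS labels, e.g. "corp.example.com".
    |> validate_format(
      :search_domain,
      ~r/^[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)+$/i
    )
  end
end
```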
Related: #8365
- Adds more actor groups to the existing `oidc_provider`
- Configures a rand seed so our seed data is reproducible across
machines
- Formats the seeds file to allow for some refactoring in a later PR
- Adds a `Mock` identity provider adapter with sync enabled
Rather than the current behavior of raising a 500 when we receive
missing / invalid params in IdP auth callbacks, it would be helpful to
show the user which params were provided, in case the IdP has set
anything useful to aid the user.
For example, we recently received these params from `okta` for a pilot
account (and subsequently rendered them a 500):
```elixir
%{"account_id_or_slug" => "<redacted>", "error" => "access_denied", "error_description" => "User is not assigned to the client application.", "provider_id" => "<redacted>", "state" => "<redacted>"}
```
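A sketch of the friendlier handling (controller action and redirect target are illustrative):

```elixir
# When the IdP reports an error (or required params are missing), surface
# what we received instead of raising a 500.
def callback(conn, %{"error" => error} = params) do
  description = params["error_description"] || "no description provided"

  conn
  |> put_flash(:error, "Sign in failed: #{error} (#{description})")
  |> redirect(to: ~p"/#{params["account_id_or_slug"]}")
end
```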
Adds the following endpoints:
- `PUT /clients/:id` for updating the `name`
- `PUT /clients/:client_id/verify` for verifying a client
- `PUT /clients/:client_id/unverify` for unverifying a client
- `GET /clients` for listing clients in an account
- `GET /clients/:id` for getting a single client
- `DELETE /clients/:id` for deleting a client
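In router terms, roughly (controller and scope names hypothetical):

```elixir
scope "/clients", API do
  get "/", ClientController, :index
  get "/:id", ClientController, :show
  put "/:id", ClientController, :update
  put "/:client_id/verify", ClientController, :verify
  put "/:client_id/unverify", ClientController, :unverify
  delete "/:id", ClientController, :delete
end
```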
Related: #8081
If the websocket connection between a relay and the portal experiences a
temporary network split, the portal will immediately send the
disconnected id of the relay to any connected clients and gateways, and
all relayed connections (and current allocations) will be immediately
revoked by connlib.
This tight coupling is needlessly disruptive. As we've seen in staging
and production logs, relay disconnects can happen randomly, and in the
vast majority of cases immediately reconnect. Currently we see about 1-2
dozen of these **per day**.
To better account for this, we introduce a debounce mechanism in the
portal for `relays_presence` disconnects that works as follows:
- When a relay disconnects, record its `stamp_secret` (this is somewhat
tricky as we don't get this at the time of disconnect - we need to cache
it by relay_id beforehand)
- If the same `relay_id` reconnects again with the same `stamp_secret`
within `relays_presence_debounce_timeout` -> no-op
- If the same `relay_id` reconnects again with a **different**
`stamp_secret` -> disconnect immediately
- If it doesn't reconnect, **then** send the `relays_presence` with the
disconnected_id after the `relays_presence_debounce_timeout`
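A sketch of the bookkeeping (state shape and helper names are illustrative):

```elixir
# On disconnect: start the debounce window instead of notifying immediately.
def handle_relay_down(relay_id, state) do
  timer = Process.send_after(self(), {:notify_disconnect, relay_id}, @debounce_timeout)
  # stamp_secret was cached by relay_id while the relay was connected.
  put_in(state.pending[relay_id], {timer, Map.fetch!(state.stamp_secrets, relay_id)})
end

# On reconnect: compare stamp_secrets to decide whether allocations survived.
def handle_relay_up(relay_id, stamp_secret, state) do
  case Map.pop(state.pending, relay_id) do
    {nil, _pending} ->
      state

    {{timer, ^stamp_secret}, pending} ->
      # Same secret within the window: allocations are intact, no-op.
      Process.cancel_timer(timer)
      %{state | pending: pending}

    {{timer, _other_secret}, pending} ->
      # Different secret: the relay restarted, disconnect immediately.
      Process.cancel_timer(timer)
      broadcast_disconnected_relay(relay_id)
      %{state | pending: pending}
  end
end

# Window elapsed without a reconnect: send relays_presence with the id.
def handle_info({:notify_disconnect, relay_id}, state) do
  broadcast_disconnected_relay(relay_id)
  {:noreply, update_in(state.pending, &Map.delete(&1, relay_id))}
end
```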
There are several ways connlib detects a relay is down:
1. Binding requests time out. These happen every 25s, so on average we
don't know a Relay is down for 12.5s plus the backoff timer.
2. `relays_presence` - this is currently the fastest way to detect
relays are down. With this change, the caveat is that we will now detect
this with a delay of `relays_presence_debounce_timeout`.
Fixes #8301
Bumps [flowbite](https://github.com/themesberg/flowbite) from 3.1.1 to
3.1.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/themesberg/flowbite/releases">flowbite's
releases</a>.</em></p>
<blockquote>
<h2>v3.1.2</h2>
<ul>
<li>create new theme file to move CSS variables</li>
<li>update quickstart guide to reflect this change</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="4ffec1008a"><code>4ffec10</code></a>
refactor(flowbite): move color theme variables to css file</li>
<li><a
href="38984c12ae"><code>38984c1</code></a>
refactor(colors): move colors from plugin to theme file</li>
<li><a
href="23732fd518"><code>23732fd</code></a>
docs(datepicker): specify that you need to set source</li>
<li>See full diff in <a
href="https://github.com/themesberg/flowbite/compare/v3.1.1...v3.1.2">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Instead of crashing, it would make sense to log these unhandled channel
messages and let the connected entity maintain its WebSocket connection.
This should never happen in practice if we maintain our version
compatibility matrix properly, but it will help reduce the blast radius
of a channel message bug that happens to slip out into the wild.
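A sketch of the catch-all (assumes `use Phoenix.Channel` and `require Logger` in the module):

```elixir
# Fallback clause, placed after all known handle_in/3 clauses.
def handle_in(event, payload, socket) do
  Logger.error("Unhandled channel message", event: event, payload: inspect(payload))

  # Keep the WebSocket alive rather than crashing the channel process.
  {:noreply, socket}
end
```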
Fixes #4679
In order to properly handle SRV and TXT records on the clients, we need
to be able to pick a Gateway using the initial query itself. After that,
we need to know the Gateway Tunnel IPs we're connecting to so we can
have the query perform the lookup.
Fixes #8281
We had a number of validation issues:
- DNS resources allowed addresses like `1.1.1.1` or `1.1.1.1/32`. These
are not valid and will cause issues during resolution.
- IP resources allowed basically any string on `edit`, caused by a logic
bug in the changeset
- CIDR resources: same as above
- `*.*.*.*.google.com` and similar DNS wildcard resources were not
allowed
This PR beefs all of those up so that we have a higher degree of
certainty that our data is valid. If invalid data reaches connlib, it
will cause a panic.
This PR also introduces a migration to migrate any invalid resources
into the proper format in the DB.
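For illustration, the kind of per-type checks this adds (helper names are hypothetical):

```elixir
# Validate the address according to the resource type.
defp validate_address(changeset) do
  case fetch_field!(changeset, :type) do
    :dns ->
      validate_change(changeset, :address, fn :address, address ->
        # Reject bare IPs/CIDRs masquerading as DNS names; allow wildcards
        # such as *.example.com and *.*.*.*.google.com.
        if valid_dns_pattern?(address),
          do: [],
          else: [address: "must be a valid DNS name or wildcard"]
      end)

    :ip ->
      validate_change(changeset, :address, fn :address, address ->
        case :inet.parse_strict_address(to_charlist(address)) do
          {:ok, _ip} -> []
          {:error, _} -> [address: "must be a valid IP address"]
        end
      end)

    :cidr ->
      # Same idea: parse "a.b.c.d/len" and bounds-check the prefix length.
      validate_cidr(changeset, :address)
  end
end
```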
Fixes #8287
Why:
* After merging #8267, it was discovered that there was a race condition
that allowed a `create_resource` message to end up at the Gateway
Channel process. Previously, this message would never have arrived,
because we were replacing Resource IDs when a breaking change was made.
Since that is no longer the case, a connection could be established
between the time the `delete_resource` and `create_resource` messages
are sent, and the `create_resource` would then end up at the Gateway
Channel process. This commit adds a no-op handler to make sure the
message gets processed without throwing an error.
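The handler is essentially (clause shape illustrative):

```elixir
# No-op: the Gateway learns about resources at connection-authorization
# time, so a stray create arriving here can be safely ignored.
def handle_info({:create_resource, _resource_id}, socket) do
  {:noreply, socket}
end
```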
Why:
* Rather than using a `persistent_id` field in Resources/Policies, it was
decided that we should allow "breaking changes" to these entities. This
means that Resources/Policies can now update all fields on the schema
without changing the primary key ID of the entity.
* This change will greatly help the API and Terraform provider
development.
@jamilbk, would you like me to put a migration in this PR to actually
get rid of all of the existing soft deleted entities?
@thomaseizinger, I tagged you on this, because I wanted to make sure
that these changes weren't going to break any expectations in the client
and/or gateways.
---------
Signed-off-by: Brian Manifold <bmanifold@users.noreply.github.com>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
Currently, it would theoretically be possible for an admin to connect
non-internet Resources to the Internet site. This PR fixes that by
enforcing that only the `internet` Resource type can belong to the
`Internet` gateway group.
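A sketch of the constraint (field and helper names are hypothetical, including the `managed_by` check):

```elixir
# Reject mismatched pairings when connecting a Resource to a Site.
defp validate_internet_site_membership(changeset, gateway_group) do
  internet_site? = gateway_group.managed_by == :system
  internet_resource? = fetch_field!(changeset, :type) == :internet

  cond do
    internet_site? and not internet_resource? ->
      add_error(changeset, :connections, "only the Internet Resource may be added to the Internet site")

    internet_resource? and not internet_site? ->
      add_error(changeset, :connections, "the Internet Resource may only be added to the Internet site")

    true ->
      changeset
  end
end
```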
Related: #6834
Sentry uncovered a bug in the resources index LiveView: some code
copy-pasted from the policies index view wasn't updated properly to work
in the resources LiveView, causing the view to crash if an admin was
viewing the table while resources were changed on another page.
In debugging that, I realized the best UX when viewing these tables is
usually just to show a `Reload` button and not update the data live
while the admin is viewing it, as live updates can cause missed clicks
and other annoyances.
This PR adds an optional `stale` component attribute that, if true,
renders a `Reload` button in the live table; clicking it reloads the
table.
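A sketch of the attribute on the live table component (internals abridged; assumes `use Phoenix.Component` and an existing `<.button>` component):

```elixir
attr :stale, :boolean, default: false, doc: "show a Reload button instead of live-updating rows"

def live_table(assigns) do
  ~H"""
  <div>
    <.button :if={@stale} phx-click="reload_table">Reload</.button>
    <table>
      <%!-- rows elided; rendered from the table's stream --%>
    </table>
  </div>
  """
end
```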
Not all index views are updated with this - in some views there is
already logic to make an intelligent update without breaking the view
when the data changes, as in the clients table.
Ideally, we'd live-update things that don't reflow the layout inline
(such as `online/offline` presence), and show the `Reload` button for
things that do cause a layout reflow (create/delete).
However, that work is saved for a future PR, as this one fixes the
immediate bug and is not the highest priority.
<img width="1195" alt="Screenshot 2025-02-16 at 8 44 43 AM"
src="https://github.com/user-attachments/assets/114efffa-85ea-490d-9cea-78c607081ce3"
/>
<img width="401" alt="Screenshot 2025-02-16 at 9 59 53 AM"
src="https://github.com/user-attachments/assets/8a570213-d4ec-4b6c-a489-dcd9ad1c351c"
/>
It's possible for a client or admin to try to load the redirect URL
directly, or for a misconfigured IdP to redirect back to us with missing
params. We should redirect with an error flash instead of 500'ing.
This PR fixes two issues:
1. Since we weren't updating any actual fields in the telemetry reporter
log record, it was never being updated, so optimistic locking was not
taking effect. To fix this, we use `Repo.update(force: true)`.
2. If a buffer is full, we write immediately, but we provide an empty
`%Log{}`, which causes a repetitive warning: `the current value of
last_flushed_at is nil and will not be used as a filter for optimistic
locking.`
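A sketch of the first fix (the lock field name is taken from the warning above; the real changeset pipeline may differ):

```elixir
# Even when no fields changed, force the UPDATE so the optimistic-lock
# field is both checked and advanced.
log
|> Ecto.Changeset.change()
|> Ecto.Changeset.optimistic_lock(:last_flushed_at, fn _ -> DateTime.utc_now() end)
|> Repo.update(force: true)
```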
Some customers have already picked the `Internet` name, which is making
our migrations fail.
This scopes the unique name index by `managed_by` so that our attempts
to create them succeed.
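The migration is roughly as follows (the table name and `where` predicate are assumptions based on this description):

```elixir
def change do
  drop unique_index(:gateway_groups, [:account_id, :name])

  # Only account-managed sites compete for names, so our system-managed
  # "Internet" site can coexist with a customer's existing "Internet".
  create unique_index(:gateway_groups, [:account_id, :name],
           where: "managed_by = 'account'"
         )
end
```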
By specifying the `before_send` hook, we can easily drop events based on
their data, such as `original_exception`, which contains the original
exception instance raised.
Leveraging this, we can add a `report_to_sentry` parameter to
`Web.LiveErrors.NotFound` to optionally keep certain not-found errors
from going to Sentry.
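A sketch of the wiring (filter module name hypothetical; returning `nil` from `before_send` drops the event in the Sentry SDK):

```elixir
# config.exs
config :sentry, before_send: {Web.Sentry, :filter_event}

# Hypothetical filter module:
defmodule Web.Sentry do
  def filter_event(%Sentry.Event{original_exception: %Web.LiveErrors.NotFound{report_to_sentry: false}}),
    do: nil

  def filter_event(event), do: event
end
```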
With the internet site changes now in, editing the Internet Resource is
impossible.
As such, the old instructions for using the Internet Resource no longer
apply, and we need to make sure the Internet Site and Internet Resource
are linked.
This migration ensures that's the case. However, if the internet
resource is already connected to another site, we don't move it; this
applies only to internet resources that aren't connected to any sites
yet.