feat(connlib): always use all candidates (#9979)

Thomas Eizinger · d244a99c58 · 2025-07-24
In #6876, we added functionality to only make use of new remote
candidates whilst we haven't yet nominated a socket with the remote.
The reason was that in the described edge case, where relays reboot or
get replaced whilst the client is partitioned from the portal (or we
experience a connection hiccup), only one of the two peers, i.e. Client
or Gateway, would migrate to the new relay, leaving the other one in an
inconsistent state.
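
For illustration, here is a minimal sketch of the behaviour this change
removes. The types and names are hypothetical, not connlib's actual
API; it only shows the gating on the nomination state:

```rust
/// Hypothetical sketch of remote-candidate handling (not connlib's real API).
struct Connection {
    has_nominated_socket: bool,
    remote_candidates: Vec<String>,
}

impl Connection {
    fn add_remote_candidate(&mut self, candidate: String) {
        // Behaviour from #6876: new remote candidates were dropped once a
        // socket had been nominated with the remote.
        //
        // if self.has_nominated_socket {
        //     return;
        // }

        // Behaviour after this PR: every candidate is always handed on.
        self.remote_candidates.push(candidate);
    }
}
```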

Looking at recent customer logs, I've been seeing a lot of these
messages:

> Unknown connection or socket has already been nominated

For this particular customer, these are then very quickly followed by
ICE timeouts, leaving the connection unusable.

Considering that, I no longer think that the above change was a good
idea; we should instead always make use of all candidates that we are
given. What we are seeing is that in deployment scenarios where the
link between Client and Gateway has very low latency (5-10ms) yet the
latency to the portal is higher (~30-50ms), we trigger a race condition
in which we temporarily nominate a _peer-reflexive_ candidate pair
instead of a regular one. This happens because, over such a low-latency
link, Client and Gateway are _faster_ at exchanging several STUN
bindings than the control plane is at delivering all the candidates.

Due to the functionality added in #6876, this then results in us not
accepting those candidates. It further appears that a nominated
peer-reflexive candidate pair does not provide a stable connection,
which is why we then run into an ICE timeout, requiring Firezone to
establish a new connection, only for the same thing to happen again.

This is very disruptive for the user experience as the connection only
works for a few moments at a time.

With #9793, we added a feature that is also at play here. Now that we
don't immediately act on an ICE timeout, it is possible for both Client
and Gateway to migrate a connection to a different relay, should the
one they are using get disconnected. In #9793, we added a 2s timeout
for this.

To make this fully work, we need to patch str0m to transition to
`Checking` early. Presently, str0m transitions directly from
`Disconnected` to `Connected` in this case, which in some of the
high-latency scenarios we test in CI does not happen fast enough to
recover the connection within 2s. By transitioning to `Checking` early,
we abort this timer.
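
As a rough sketch of the interaction (the state names mirror str0m's
ICE connection states, but the timer handling is hypothetical and
simplified):

```rust
use std::time::{Duration, Instant};

/// Simplified stand-in for str0m's ICE connection states.
enum IceConnectionState {
    Checking,
    Connected,
    Disconnected,
}

/// Hypothetical handler for state changes, assuming the 2s grace period
/// added in #9793 before a disconnected connection is torn down.
fn on_ice_state_change(state: IceConnectionState, reconnect_deadline: &mut Option<Instant>) {
    match state {
        IceConnectionState::Disconnected => {
            // Don't act immediately; give both peers 2s to migrate to a new relay.
            *reconnect_deadline = Some(Instant::now() + Duration::from_secs(2));
        }
        IceConnectionState::Checking | IceConnectionState::Connected => {
            // Checks have resumed (or succeeded), e.g. after migrating relays:
            // abort the pending teardown.
            *reconnect_deadline = None;
        }
    }
}
```

With str0m moving to `Checking` as soon as checks resume, the second
arm runs before the 2s deadline elapses, even on the high-latency links
we test in CI.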

Related: https://github.com/algesten/str0m/pull/676

This is a Next.js project bootstrapped with create-next-app.

## Getting Started

First, install dependencies and populate the `timestamps.json` file:

```
pnpm setup
```

Next, create the files `.env.local` and `.env.development.local` in this directory.

Put this in `.env.local`:

```
NEXT_PUBLIC_MIXPANEL_TOKEN=""
NEXT_PUBLIC_GOOGLE_ANALYTICS_ID=""
NEXT_PUBLIC_LINKEDIN_PARTNER_ID=""
FIREZONE_DEPLOYED_SHA=""
```

And this in `.env.development.local`:

```
# Created by Vercel CLI
EDGE_CONFIG=""
FIREZONE_DEPLOYED_SHA=""
SITE_URL=""
VERCEL_DEEP_CLONE=""
```

After that, contact the team for the correct values to fill in.

Then, run the development server:

```
npm run dev
# or
yarn dev
# or
pnpm dev
```

Open http://localhost:3000 with your browser to see the result.

You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.

## Linting

This project uses Prettier to format code and ensure a consistent style. Use the `.prettierrc.json` in the root of this repo to configure your editor.

## Learn More

To learn more about Next.js, take a look at the following resources:

- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial.

You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js/) - your feedback and contributions are welcome!

## Deploy on Vercel

The easiest way to deploy your Next.js app is to use the Vercel Platform from the creators of Next.js.

Check out our Next.js deployment documentation for more details.