Within `snownet` - `connlib`'s connectivity library - we use ICE to set up a UDP "connection" between a client and a gateway. UDP is an unreliable transport, meaning the only way we can detect that the connection is broken is for both parties to constantly send messages and acknowledgements back and forth. ICE uses STUN binding requests for this. In `str0m`'s default configuration, a STUN binding request is sent every 3s, and we tolerate at most 9 missing responses before we consider the connection broken. As responses go missing, `str0m` halves this interval, which results in a total ICE timeout of around 17 seconds.

We already tweak these values by reducing the number of tolerated missing responses to 8 and setting the interval to 1.5s. This results in a total ICE timeout of ~10s, which means there is at most a 10s lag between the connection breaking and us considering it broken, at which point new packets arriving at the TUN interface can trigger the setup of a new connection to the gateway. Lowering these timeouts improves the user experience in case of a broken connection because the user doesn't have to wait as long before they can access their resources again.

The downside of lowering these timeouts is that we generate a lot of background noise. This is especially bad on mobile devices because it prevents the CPU from going to sleep: simply being signed into Firezone will drain your battery, even if you don't use it. Note that none of this applies if the client application on top detects a network change; in that case, we hard-reset all connections and instantly create new ones.

We attempted to fix this in #5576 by closing idle connections after 5 minutes. That however created new problems such as #6778. The original problem is that we send too many STUN messages as soon as a connection is established.
Simply increasing the timeout is not an option because it would make the user experience really bad in case the connection actually drops for reasons the client app can't detect. In this patch, we solve the problem differently: detecting a broken connection is only critical if the user is actively using the tunnel (i.e. sending traffic). If there is no traffic, it doesn't matter if we take longer to detect a broken connection; the user won't notice because their phone is probably in their pocket or something.

With this patch, we implement the following behaviour:

- A connection is considered idle after 10s of no application traffic.
- On idle connections, we send a STUN request every 60s.
- On idle connections, we wait for at most 4 missing responses before considering the connection broken.
- Every connection performs a client-initiated WireGuard keep-alive every 25s, unless there is application traffic.

These values have been chosen while considering the following sources:

1. [RFC4787, REQ-5](https://www.rfc-editor.org/rfc/rfc4787.html#section-12) requires NATs to keep UDP NAT mappings alive for at least 2 minutes.
2. [`conntrack`](https://www.kernel.org/doc/Documentation/networking/nf_conntrack-sysctl.rst) adopts this requirement via the `nf_conntrack_udp_timeout_stream` setting.
3. 25s is the default keep-alive interval of the WireGuard kernel module.

In theory, the WireGuard keep-alive alone should be enough to keep all NAT bindings alive. In practice, missed keep-alives are not exposed by boringtun (the WireGuard implementation we rely on), so we need the additional STUN keep-alives to detect broken connections. We set those somewhat conservatively to 60s.
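The idle/active switching described above can be sketched as follows. This is a minimal, standalone model; the type and constant names are illustrative and are not `snownet`'s actual API:

```rust
use std::time::{Duration, Instant};

// Hypothetical constants mirroring the values from the description above.
const IDLE_THRESHOLD: Duration = Duration::from_secs(10); // idle after 10s of no app traffic
const ACTIVE_STUN_INTERVAL: Duration = Duration::from_millis(1500); // aggressive probing
const IDLE_STUN_INTERVAL: Duration = Duration::from_secs(60); // low-power probing
const ACTIVE_MAX_MISSED: u32 = 8;
const IDLE_MAX_MISSED: u32 = 4;

struct Connection {
    last_app_traffic: Instant,
}

impl Connection {
    fn is_idle(&self, now: Instant) -> bool {
        now.duration_since(self.last_app_traffic) >= IDLE_THRESHOLD
    }

    // Idle connections are probed far less often ...
    fn stun_interval(&self, now: Instant) -> Duration {
        if self.is_idle(now) {
            IDLE_STUN_INTERVAL
        } else {
            ACTIVE_STUN_INTERVAL
        }
    }

    // ... and we tolerate fewer missed responses before declaring them broken.
    fn max_missed_responses(&self, now: Instant) -> u32 {
        if self.is_idle(now) {
            IDLE_MAX_MISSED
        } else {
            ACTIVE_MAX_MISSED
        }
    }

    // Any application packet resets the idle timer, restoring aggressive
    // liveness checks immediately.
    fn on_app_traffic(&mut self, now: Instant) {
        self.last_app_traffic = now;
    }
}

fn main() {
    let now = Instant::now();
    let mut conn = Connection { last_app_traffic: now };
    assert!(!conn.is_idle(now));

    // 30s later with no traffic, the connection is idle and probed less often.
    let later = now + Duration::from_secs(30);
    assert!(conn.is_idle(later));
    assert_eq!(conn.stun_interval(later), IDLE_STUN_INTERVAL);

    // New application traffic immediately restores aggressive probing.
    conn.on_app_traffic(later);
    assert_eq!(conn.stun_interval(later), ACTIVE_STUN_INTERVAL);
}
```

The key property of this design is that the transition back to aggressive probing is driven by application traffic itself, so the faster failure detection is in place exactly when the user cares about it.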
As soon as the user generates new application traffic, these values revert to their defaults, meaning that even if the connection died just before the user started using it again, we will know within the usual 10s because we trigger STUN requests more often again.

Note that existing gateways still implement the "close idle connections after 5 minutes" behaviour. Customers will need to upgrade to a new gateway version to fully benefit from these new always-on, low-power connections.

Resolves: #6778.

---------

Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
A modern alternative to legacy VPNs.
Overview
Firezone is an open source platform to securely manage remote access for any-sized organization. Unlike most VPNs, Firezone takes a granular, least-privileged approach to access management with group-based policies that control access to individual applications, entire subnets, and everything in between.
Features
Firezone is:
- Fast: Built on WireGuard® to be 3-4 times faster than OpenVPN.
- Scalable: Deploy two or more gateways for automatic load balancing and failover.
- Private: Peer-to-peer, end-to-end encrypted tunnels prevent packets from routing through our infrastructure.
- Secure: Zero attack surface thanks to Firezone's holepunching tech which establishes tunnels on-the-fly at the time of access.
- Open: Our entire product is open-source, allowing anyone to audit the codebase.
- Flexible: Authenticate users via email, Google Workspace, Okta, Entra ID, or OIDC and sync users and groups automatically.
- Simple: Deploy gateways and configure access in minutes with a snappy admin UI.
Firezone is not:
- A tool for creating bi-directional mesh networks
- A full-featured router or firewall
- An IPSec or OpenVPN server
Contents of this repository
This is a monorepo containing the full Firezone product, marketing website, and product documentation, organized as follows:
- elixir: Control plane and internal Elixir libraries:
- elixir/apps/web: Admin UI
- elixir/apps/api: API for Clients, Relays and Gateways.
- rust/: Data plane and internal Rust libraries:
- rust/gateway: Gateway - Tunnel server based on WireGuard and deployed to your infrastructure.
- rust/relay: Relay - STUN/TURN server to facilitate holepunching.
- rust/headless-client: Cross-platform CLI client.
- rust/gui-client: Cross-platform GUI client.
- swift/: macOS / iOS clients.
- kotlin/: Android / ChromeOS clients.
- website/: Marketing website and product documentation.
- terraform/: Terraform files for various example deployments.
- terraform/examples/google-cloud/nat-gateway: Example Terraform configuration for deploying a cluster of Firezone Gateways behind a NAT gateway on GCP with a single egress IP.
- terraform/modules/google-cloud/apps/gateway-region-instance-group: Production-ready Terraform module for deploying regional Firezone Gateways to Google Cloud Compute using Regional Instance Groups.
Quickstart
The quickest way to get started with Firezone is to sign up for an account at https://app.firezone.dev/sign_up.
Once you've signed up, follow the instructions in the welcome email to get started.
Frequently asked questions (FAQ)
Can I self-host Firezone?
Our license won't stop you from self-hosting the entire Firezone product top to bottom, but our internal APIs are changing rapidly so we can't meaningfully support self-hosting Firezone in production at this time.
If you're feeling especially adventurous and want to self-host Firezone for educational or hobby purposes, follow the instructions to spin up a local development environment in CONTRIBUTING.md.
The latest published clients (on App Stores and on releases) are only guaranteed to work with the managed version of Firezone and may not work with a self-hosted portal built from this repository. This is because Apple and Google can sometimes delay updates to their app stores, and so the latest published version may not be compatible with the tip of main from this repository.
Therefore, if you're experimenting with self-hosting Firezone, you will probably want to use clients you build and distribute yourself as well.
See the READMEs in the following directories for more information on building each client:
- macOS / iOS: swift/apple
- Android / ChromeOS: kotlin/android
- Windows / Linux: rust/gui-client
How long will 0.7 be supported?
Firezone 0.7 is currently end-of-life and stopped receiving updates as of January 31st, 2024. It will continue to be available indefinitely from the legacy branch of this repo under the Apache 2.0 license.
How much does it cost?
We offer flexible per-seat monthly and annual plans for the cloud-managed version of Firezone, with optional invoicing for larger organizations. See our pricing page for more details.
Those experimenting with self-hosting can use Firezone for free without feature or seat limitations, but we can't provide support for self-hosted installations at this time.
Documentation
Additional documentation on general usage, troubleshooting, and configuration can be found at https://www.firezone.dev/kb.
Get Help
If you're looking for help installing, configuring, or using Firezone, check our community support options:
- Discussion Forums: Ask questions, report bugs, and suggest features.
- Join our Discord Server: Join live discussions, meet other users, and chat with the Firezone team.
- Open a PR: Contribute a bugfix or improvement to Firezone.
If you need help deploying or maintaining Firezone for your business, consider contacting our sales team to speak with a Firezone expert.
Star History
Developing and Contributing
See CONTRIBUTING.md.
Security
See SECURITY.md.
License
Portions of this software are licensed as follows:
- All content residing under the "elixir/" directory of this repository, if that directory exists, is licensed under the "Elastic License 2.0" license defined in "elixir/LICENSE".
- All third party components incorporated into the Firezone Software are licensed under the original license provided by the owner of the applicable component.
- Content outside of the above-mentioned directories or restrictions is available under the "Apache 2.0" license as defined in "LICENSE".
WireGuard® is a registered trademark of Jason A. Donenfeld.
