Network flow logs are a common feature of VPNs. Because a shared exit
node carries traffic for many users, a network analyst wants to know
which TCP connections are routed through the tunnel, who initiates them,
how long they last and how much traffic is sent across them.
With this PR, the Firezone Gateway gains the ability to detect the
TCP and UDP flows that are being routed through it. The information we
want to attach to these flows is spread out over several layers of the
packet handling code. To simplify the implementation and not complicate
the APIs unnecessarily, we chose to rely on TLS (thread-local storage)
for gathering all the necessary data as a packet gets passed through the
various layers. When using a const initializer, the overhead of a TLS
variable over an actual local variable is basically zero. The entire
routing state of the Gateway is also never sent across any threads,
making TLS variables a particularly good choice for this problem.
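To illustrate the approach (the struct, field and function names here are made up, not the Gateway's actual ones), a const-initialized thread-local can accumulate flow metadata as a packet moves through the layers:

```rust
use std::cell::RefCell;
use std::net::IpAddr;

// Illustrative only: the real flow record carries more information.
struct FlowContext {
    src: Option<IpAddr>,
    dst: Option<IpAddr>,
    bytes: u64,
}

thread_local! {
    // The `const` initializer keeps the cost of accessing this close to a plain local variable.
    static CURRENT_FLOW: RefCell<FlowContext> = const {
        RefCell::new(FlowContext { src: None, dst: None, bytes: 0 })
    };
}

// Each layer records what it knows without having to thread extra parameters through the APIs.
fn record_addresses(src: IpAddr, dst: IpAddr) {
    CURRENT_FLOW.with_borrow_mut(|flow| {
        flow.src = Some(src);
        flow.dst = Some(dst);
    });
}

fn record_payload_len(len: u64) {
    CURRENT_FLOW.with_borrow_mut(|flow| flow.bytes += len);
}
```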
In its MVP form, the detected flows are only emitted on stdout, and only
if `flow_logs=trace` is set via `RUST_LOG`. Early adopters
of this feature are encouraged to enable these logs as described and
then ingest the Gateway's logs into the SIEM of their choice for further
analysis.
Related: #8353
For working on the Android app, we need an installation of the JDK.
Currently, that is being installed separately in CI. We already have
`.tool-versions`, which is designed to take care of this, so we add
OpenJDK 17 to the list of required tools and remove the dedicated CI
step.
Similar to how resources can be edited to change their address, IP stack
or other properties, they can also be moved between different sites.
Currently, `connlib` requires the portal to explicitly remove the
resource and then re-add it for this to work.
The system becomes more robust if we also detect that the sites of a
resource have changed and handle that like any other addressability change.
To ensure that this works correctly, we also extend the proptests to
simulate addressability changes of resources.
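Conceptually, the change boils down to something like the following sketch, using made-up types rather than connlib's actual ones:

```rust
// Illustrative types only; connlib's real resource description is richer.
struct Resource {
    address: String,
    sites: Vec<String>,
}

enum Change {
    None,
    // Handled the same way as an address or IP-stack change: existing state is torn
    // down and recreated, instead of requiring an explicit remove + re-add from the portal.
    Addressability,
}

fn classify_update(old: &Resource, new: &Resource) -> Change {
    if old.address != new.address || old.sites != new.sites {
        Change::Addressability
    } else {
        Change::None
    }
}
```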
Resolves: #9881
Related: #10593
This PR creates the necessary CI infrastructure to copy `.deb` packages
from releases to our APT repository. Re-generation of the index is
separated out into a dedicated workflow to avoid concurrency issues and
so we can re-generate it without making a release.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
We'll be using a consistent `artifacts` storage account for these built
binaries, so we've renamed the container to `binaries`.
The apt packages would be under the `apt` container at
`artifacts.firezone.dev/apt/` accordingly.
Related: firezone/infra#182
All of these jobs should run every time we publish a release. It makes
sense to merge them into one workflow.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
In #10542, we split out a dedicated workflow for building the data plane
artifacts. Unfortunately, we forgot to add an input section to the
`workflow_dispatch` trigger. This is necessary to correctly build the
artifacts for e.g. an upcoming release.
Here is a test-run:
https://github.com/firezone/firezone/actions/runs/18485551622
Right now, draft releases for Gateways and headless-clients are created
on each merge to main. For all other components, we only create those
when we trigger the workflow for a specific commit.
To align this functionality, we split the `_build_artifacts.yml`
workflow into two:
- `_control-plane.yml`
- `_data-plane.yml`
Apart from the `sha` input, all inputs concern only the data plane,
which massively simplifies the control-plane workflow.
Additionally, the control-plane also doesn't have a manual trigger
because its artifacts never get released on GitHub.
Resolves: #10541
We primarily rely on sccache to speed up our Rust builds. The
`setup-rust-toolchain` action additionally configures `rust-cache`, but
tries to set it up for the root of the repository, which doesn't work in
our scenario. It is also unnecessary to set up additional caching for
the `target` directory because we use `sccache` anyway.
sccache only caches compile artifacts but it can also be beneficial to
cache the downloaded Rust sources. Those aren't very big so we use the
`rust-cache` action for that but disable the `target` cache. This config
option isn't forwarded by `setup-rust-toolchain` so we disable its cache
integration and use `rust-cache` directly.
In order to build the iOS app with the Xcode version that is installed
on the GitHub runners, we need to select the Xcode version by major and
minor version. Currently, the iOS builds are failing because Xcode 26.1
also exists but iOS 26.1 isn't supported (or released?).
See
https://github.com/firezone/firezone/actions/runs/18239282351/job/51938727311.
Bumps
[actions/download-artifact](https://github.com/actions/download-artifact)
from 4.3.0 to 5.0.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/download-artifact/releases">actions/download-artifact's
releases</a>.</em></p>
<blockquote>
<h2>v5.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@nebuk89</code></a> in <a
href="https://redirect.github.com/actions/download-artifact/pull/407">actions/download-artifact#407</a></li>
<li>BREAKING fix: inconsistent path behavior for single artifact
downloads by ID by <a
href="https://github.com/GrantBirki"><code>@GrantBirki</code></a> in <a
href="https://redirect.github.com/actions/download-artifact/pull/416">actions/download-artifact#416</a></li>
</ul>
<h2>v5.0.0</h2>
<h3>🚨 Breaking Change</h3>
<p>This release fixes an inconsistency in path behavior for single
artifact downloads by ID. <strong>If you're downloading single artifacts
by ID, the output path may change.</strong></p>
<h4>What Changed</h4>
<p>Previously, <strong>single artifact downloads</strong> behaved
differently depending on how you specified the artifact:</p>
<ul>
<li><strong>By name</strong>: <code>name: my-artifact</code> → extracted
to <code>path/</code> (direct)</li>
<li><strong>By ID</strong>: <code>artifact-ids: 12345</code> → extracted
to <code>path/my-artifact/</code> (nested)</li>
</ul>
<p>Now both methods are consistent:</p>
<ul>
<li><strong>By name</strong>: <code>name: my-artifact</code> → extracted
to <code>path/</code> (unchanged)</li>
<li><strong>By ID</strong>: <code>artifact-ids: 12345</code> → extracted
to <code>path/</code> (fixed - now direct)</li>
</ul>
<h4>Migration Guide</h4>
<h5>✅ No Action Needed If:</h5>
<ul>
<li>You download artifacts by <strong>name</strong></li>
<li>You download <strong>multiple</strong> artifacts by ID</li>
<li>You already use <code>merge-multiple: true</code> as a
workaround</li>
</ul>
<h5>⚠️ Action Required If:</h5>
<p>You download <strong>single artifacts by ID</strong> and your
workflows expect the nested directory structure.</p>
<p><strong>Before v5 (nested structure):</strong></p>
<pre lang="yaml"><code>- uses: actions/download-artifact@v4
with:
artifact-ids: 12345
path: dist
# Files were in: dist/my-artifact/
</code></pre>
<blockquote>
<p>Where <code>my-artifact</code> is the name of the artifact you
previously uploaded</p>
</blockquote>
<p><strong>To maintain old behavior (if needed):</strong></p>
<pre lang="yaml"><code></tr></table>
</code></pre>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="634f93cb29"><code>634f93c</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/download-artifact/issues/416">#416</a>
from actions/single-artifact-id-download-path</li>
<li><a
href="b19ff43027"><code>b19ff43</code></a>
refactor: resolve download path correctly in artifact download tests
(mainly ...</li>
<li><a
href="e262cbee4a"><code>e262cbe</code></a>
bundle dist</li>
<li><a
href="bff23f9308"><code>bff23f9</code></a>
update docs</li>
<li><a
href="fff8c148a8"><code>fff8c14</code></a>
fix download path logic when downloading a single artifact by id</li>
<li><a
href="448e3f862a"><code>448e3f8</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/download-artifact/issues/407">#407</a>
from actions/nebuk89-patch-1</li>
<li><a
href="47225c44b3"><code>47225c4</code></a>
Update README.md</li>
<li>See full diff in <a
href="d3f86a106a...634f93cb29">compare
view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
In Firezone, a Client requests an "access authorization" for a Resource
on the fly when it sees the first packet for said Resource going through
the tunnel. If we don't have a connection to the Gateway yet, this is
also where we will establish a connection and create the WireGuard
tunnel.
In order for this to work, the access authorization state between the
Client and the Gateway MUST NOT get out of sync. If the Client thinks it
has access to a Resource, it will just route the traffic to the Gateway.
If the access authorization on the Gateway has expired or otherwise
vanished, the packets will be black-holed.
Starting with #9816, the Gateway sends ICMP errors back to the
application whenever it filters a packet. This can happen either because
the access authorization is gone or because the traffic wasn't allowed
by the specific filter rules on the Resource.
With this patch, the Client will attempt to create a new flow (i.e.
re-authorize traffic) for this Resource whenever it sees such an ICMP
error, thereby providing a way of re-synchronizing the view of the world
between Client and Gateway should they ever get out of sync.
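A minimal sketch of the idea, with hypothetical type and method names rather than connlib's actual API:

```rust
use std::collections::HashMap;
use std::net::IpAddr;

#[derive(Clone, Copy)]
struct ResourceId(u64);

enum Event {
    // Ask the portal for a new access authorization (and thereby a new flow) for this Resource.
    RequestFlow(ResourceId),
}

struct ClientState {
    resource_by_ip: HashMap<IpAddr, ResourceId>,
}

impl ClientState {
    /// Called when we receive an ICMP error that the Gateway generated in response to a
    /// packet we sent for a Resource.
    fn on_icmp_error(&self, original_dst: IpAddr) -> Option<Event> {
        let resource = *self.resource_by_ip.get(&original_dst)?;

        Some(Event::RequestFlow(resource))
    }
}
```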
Testing turned out to be a bit tricky. If we let the authorization on
the Gateway lapse naturally, the portal will also toggle the Resource off
and on on the Client, resulting in "flushing" the current
authorizations. Additionally, if the Client only has access to one
Resource, the Gateway will gracefully close the connection, also
resulting in the Client creating a new flow for the next packet.
To actually trigger this new behaviour we need to:
- Access at least two resources via the same Gateway
- Directly send `reject_access` to the Gateway for this particular
resource
To achieve this, we dynamically eval some code on the API node and
instruct the Gateway channel to send `reject_access`. The connection
stays intact because there is still another active access authorization
but packets for the other resource are answered with ICMP errors.
To achieve a safe roll-out, the new behaviour is feature-flagged. In
order to still test it, we now also allow feature flags to be set via
env variables.
Resolves: #10074
---------
Co-authored-by: Mariusz Klochowicz <mariusz@klochowicz.com>
As it turns out, the flaky test was caused by a bug in the eBPF kernel where we read the old channel data header from the wrong offset. This made us essentially read garbage data for the channel number, causing us to:
a. Compute a bad checksum
b. Send the packet on a completely wrong channel
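For context, this is roughly the channel-data framing (RFC 8656) the eBPF code has to parse; reading these fields from the wrong offset is what produced the garbage channel numbers:

```rust
struct ChannelData<'a> {
    number: u16,       // bytes 0..2: channel number (0x4000..=0x7FFF)
    payload: &'a [u8], // starts at byte 4, `length` bytes long
}

fn parse_channel_data(buf: &[u8]) -> Option<ChannelData<'_>> {
    let number = u16::from_be_bytes([*buf.first()?, *buf.get(1)?]);
    let length = u16::from_be_bytes([*buf.get(2)?, *buf.get(3)?]);
    let payload = buf.get(4..4 + usize::from(length))?;

    Some(ChannelData { number, payload })
}
```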
The reason this only surfaced as a flaky test is that it requires one side to pick IPv4 to talk to the relay and the other side to pick IPv6. The happy-eyeballs approach of the `allocation` module made that non-deterministic, only exposing this bug occasionally.
To ensure these kinds of issues are detected earlier in the future, I am adding an additional CI step that checks all packets emitted by the eBPF kernel for checksum errors.
Fixes: #10404
Co-authored-by: Jamil Bou Kheir <jamilbk@users.noreply.github.com>
The majority of the log levels stated in the docker-compose file are
stale because those crates have long been deleted or renamed.
Additionally, the `wire` logs have already been disabled in release
builds, meaning we no longer need to patch them out before the perf
tests.
In 0b09d9f2f5, the `--migrations-path` arguments were added to the `Seed
database` step. These only work with `ecto.migrate` and its family of
commands, not with `ecto.seed`. This meant all of our CI has
essentially not been running the manual migrations.
Then, in #10377, we fixed the env var when starting the services so that
manual migrations are run. That env var, however, only applies to
"releases", which the `elixir` service is not.
This caused the API service to try and run the missing manual migrations
upon startup, which failed because they would then be run out-of-order.
To address all of the above, we change the `Seed database` step of the
perf-tests job to ensure it runs all migrations. The other places where
we seed the database already handled this properly.
The default send and receive buffer sizes on Linux are too small (only
~200 KB). Checking `nstat` after an iperf run revealed that the number
of dropped packets in the first interval directly correlates with the
number of receive buffer errors reported by `nstat`.
We already try to increase the send and receive buffer sizes for our UDP
socket but unfortunately, we cannot increase them beyond what the system
limits them to. To work around this, we try to set `rmem_max` and
`wmem_max` during startup of the Linux headless client and Gateway. This
behaviour can be disabled by setting `FIREZONE_NO_INC_BUF=true`.
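The startup logic is roughly this sketch (the env-var name matches this PR, the buffer value is illustrative, and writing the sysctls requires running as root):

```rust
use std::fs;

fn maybe_increase_buffer_limits() -> std::io::Result<()> {
    // Opt-out described above.
    if matches!(std::env::var("FIREZONE_NO_INC_BUF").as_deref(), Ok("true")) {
        return Ok(());
    }

    // Illustrative value; the point is to raise the caps so later SO_SNDBUF / SO_RCVBUF
    // requests on our UDP socket are no longer clamped to ~200 kB.
    const MAX: &str = "7340032";

    fs::write("/proc/sys/net/core/rmem_max", MAX)?;
    fs::write("/proc/sys/net/core/wmem_max", MAX)?;

    Ok(())
}
```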
This doesn't work in Docker unfortunately, so we set the values manually
in the CI perf tests and verify after the test that we didn't encounter
any send and receive buffer errors.
It is yet to be determined how we should deal with this problem for all
the GUI clients. See #10350 as an issue tracking that.
Unfortunately, this doesn't fix all packet drops during the first iperf
interval. With this PR, we now see packet drops on the interface itself.
Now that we have a more realistic network setup in our compose file, we
can extend our router containers to apply the latency on the network
path. This means any use of the compose file applies the latency by
default, simplifying our CI setup. It also allows us to restart
containers without having to re-apply the latency, which is useful during
performance testing.
Currently, the eBPF module can translate from channel data messages to
UDP packets and vice versa. It can even do that across IP stacks, i.e.
translate from an IPv6 UDP packet to an IPv4 channel data message.
What it cannot do is handle packets addressed to itself. This can happen
if both the Client and the Gateway pick the same relay to make an allocation. When
exchanging candidates, ICE will then form pairs between both relay
candidates, essentially requiring the relay to loop packets back to
itself.
In eBPF, we cannot do that. When sending a packet back out with
`XDP_TX`, it will actually go out on the wire without an additional
check whether it is addressed to our own IP.
Properly handling this in eBPF (by comparing the destination IP to our
public IP) adds more cases we need to handle. The current module
structure where everything is one file makes this quite hard to
understand, which is why I opted to create four sub-modules:
- `from_ipv4_channel`
- `from_ipv4_udp`
- `from_ipv6_channel`
- `from_ipv6_udp`
For traffic arriving via a data-channel, it is possible that we also
need to send it back out via a data-channel if the peer address we are
sending to is the relay itself. Therefore, the `from_ipX_channel`
modules have four sub-modules:
- `to_ipv4_channel`
- `to_ipv4_udp`
- `to_ipv6_channel`
- `to_ipv6_udp`
For the traffic arriving on an allocation port (`from_ipX_udp`), we
always map to a data-channel and therefore can never get into a routing
loop, resulting in only two modules:
- `to_ipv4_channel`
- `to_ipv6_channel`
The actual implementation of the new code paths is rather simple and
mostly copied from the existing ones. For half of them, we don't need to
make any adjustments to the buffer size (i.e. IPv4 channel to IPv4
channel). For the other half, we need to adjust for the difference in
the IP header size.
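Conceptually, the extra decision on the `from_ipX_channel` paths looks like this sketch (made-up names, plain Rust rather than the actual eBPF code):

```rust
use std::net::IpAddr;

enum Outgoing {
    // Forward the payload to the peer as a plain UDP packet.
    Udp,
    // The peer address is this relay itself, so wrap the payload into a channel-data
    // message again instead of putting a packet for ourselves on the wire via XDP_TX.
    Channel,
}

fn route_from_channel(peer: IpAddr, our_public_ips: &[IpAddr]) -> Outgoing {
    if our_public_ips.contains(&peer) {
        Outgoing::Channel
    } else {
        Outgoing::Udp
    }
}
```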
To test these changes, we add a new integration test that makes use of
the new docker-compose setup added in #10301 and configures masquerading
for both Client and Gateway. To make this more useful, we also remove
the `direct-` prefix from all tests as the test script itself no longer
makes any decisions as to whether it is operating over a direct or
relayed connection.
Resolves: #7518
Currently, the setup we have in docker-compose does not reflect
real-world scenarios very well because most components share the same
subnet. In reality, Clients, Gateways, relays and the backend are all in
separate subnets, connected via multiple routers on the Internet.
The current setup makes it hard to properly test relayed connections. To
fix this, we move all components into their own subnet with a dedicated
router container that performs source and destination NAT and also
acts as a firewall that blocks inbound traffic to the Client and Gateway
containers.
This setup will allow us to more easily test #10286 which requires port
randomization for outgoing traffic on the Client and Gateway side.
Initially, we added the graceful shutdown functionality to the relay to
better deal with deploys and achieve as little downtime as possible.
With the split of app and infrastructure that we now have, this
functionality is no longer necessary as portal deploys don't touch the
relay infra at all.
Thus, we can remove this functionality, which will actually speed up
deploys of the relays as systemd no longer has to time out after sending
the SIGTERM to the binary.
Ubuntu 22.04 is over 3 years old and therefore ships with quite an old
kernel. Our production VMs (for relays) all run Ubuntu 24.04 so it makes
sense to build and test them on the same kernel / OS release. For
consistency reasons, we therefore bump all runners to 24.04.
When we receive a DNS query for a DNS resource in Firezone, we take the
next available 4 IPs from the CG-NAT range and assign them to the domain
name. For example, if `example.com` is a DNS resource and it is the
first resource being queried in a Firezone session, we will assign the
IPs `100.96.0.1` - `100.96.0.4` to it. If the user now restarts Firezone
or signs out and back in, this state is lost and we assign those same
IPs to the next DNS query coming in.
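For illustration, the assignment works roughly like this sketch (the starting address matches the example above; connlib's real allocator differs):

```rust
use std::net::Ipv4Addr;

struct ProxyIpAllocator {
    next: u32,
}

impl ProxyIpAllocator {
    fn new() -> Self {
        Self { next: u32::from(Ipv4Addr::new(100, 96, 0, 1)) }
    }

    /// Assigns the next four consecutive IPs to a freshly queried domain.
    fn allocate(&mut self) -> [Ipv4Addr; 4] {
        let ips = std::array::from_fn(|i| Ipv4Addr::from(self.next + i as u32));
        self.next += 4;

        ips
    }
}
```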
This creates a problem for applications that re-query DNS rarely or
never: they expect these IPs not to change. Restarting software
or signing out and back in is a common approach to fixing software
problems, yet in this specific case, doing so may create even more
problems for the user.
To mitigate this, `ClientState` introduces a new event
`DnsRecordsChanged` that is emitted to the event-loop every time we
assign new records. The event-loop then caches this in memory and reuses
it in case a new session is initiated. The records are only stored
in-memory and not on disk. Most likely, the tunnel process will be alive
for the entire OS session.
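A condensed sketch of the flow, with simplified, made-up types (the real event and state live in connlib and carry more data):

```rust
use std::collections::BTreeMap;
use std::net::IpAddr;

type DnsRecords = BTreeMap<String, Vec<IpAddr>>;

enum ClientEvent {
    // Emitted every time new records are assigned.
    DnsRecordsChanged { records: DnsRecords },
}

struct ClientState {
    dns_records: DnsRecords,
}

impl ClientState {
    fn with_dns_records(dns_records: DnsRecords) -> Self {
        Self { dns_records }
    }
}

struct EventLoop {
    // Cached in memory only; lost when the tunnel process exits.
    cached_records: DnsRecords,
}

impl EventLoop {
    fn handle(&mut self, event: ClientEvent) {
        match event {
            ClientEvent::DnsRecordsChanged { records } => self.cached_records = records,
        }
    }

    /// Re-seed a new session so IPs handed out in the previous session keep routing.
    fn new_session(&self) -> ClientState {
        ClientState::with_dns_records(self.cached_records.clone())
    }
}
```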
To verify this behaviour, we add a new `RestartClient` transition to our
proptests. In the proptests, we already keep a mapping of all DNS names
we ever resolved, including DNS resources. When generating IP traffic,
we sample from this list of IPs and then expect the packet to be routed.
By replacing the `ClientState` as part of this transition and re-seeding
it with the previously exported DNS records, we can verify that packets
to IPs resolved from a previous session still get successfully routed to
the resource.
Related: #5498
To deploy the relays on Azure, we need to make sure the binaries are
copied there, similar to GCP. This adds a job step to do just that,
placing them into a storage account + container using new infra
provisioned in Azure.
In CI, eBPF in driver mode actually functions just fine with no changes
to our existing tests, given we apply a few workarounds and bugfixes:
- The interface learning mechanism had two flaws: (1) it only learned
per-CPU, which meant the risk of a missing entry grew as the core count
of the relay host grew, and (2) it did not filter for unicast IPs, so it
picked up broadcast and link-local addresses, causing cross-relay paths
to fail occasionally
- The `relay-relay` candidate where the two relays are the same relay
causes packet drops / loops in the Docker bridge setup, and possibly in
GCP too. I'm not sure this is a valid path that solves a real
connectivity issue in the wild. I can understand relay-relay paths where
two relays are different hosts, and the client and gateway both talk
over their TURN channel to each other (i.e. WireGuard is blocked in each
of their networks), but I can't think of an advantage for a relay-relay
candidate where the traffic simply hairpins (or is dropped) off the
nearest switch. This is now detected with a new `PacketLoop` error
that triggers whenever the source IP equals the destination IP (see the
sketch after this list).
- The relays in CI need a common next-hop to talk to for the MAC address
swapping to work. A simple router service is added which functions as a
basic L3 router (no NAT) that allows the MAC swapping to work.
- The `veth` driver has some peculiar requirements to allow it to
function with XDP_TX. If you send a packet out of one interface of a
veth pair with XDP_TX, you need to either make sure both interfaces have
GRO enabled, or you need to attach a dummy XDP program that simply does
XDP_PASS to the other interface so that the sk_buff is allocated before
going up the stack to the Docker bridge. The GRO method was unreliable
and didn't work in our case, causing massive packet delays and
unpredictable bursts that prevented ICE from working, so we use the
XDP_PASS method instead. A simple docker image is built and lives at
https://github.com/firezone/xdp-pass to handle this.
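A sketch of the loop check (only the `PacketLoop` name is taken from this PR; the rest is illustrative plain Rust rather than eBPF):

```rust
use std::net::IpAddr;

#[derive(Debug)]
enum Error {
    // A packet that would leave the relay with identical source and destination IP
    // would just hairpin (or be dropped) off the nearest switch.
    PacketLoop,
}

fn check_for_loop(src: IpAddr, dst: IpAddr) -> Result<(), Error> {
    if src == dst {
        return Err(Error::PacketLoop);
    }

    Ok(())
}
```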
Related: #10138
Related: #10260
- Removes the Swift DerivedData cache. This was added in an attempt to
speed up the Swift builds in CI but in reality, those are already fast
and the cache did not speed them up.
- Removes the runner.os/arch specifier from the Webview installer cache
key. The binary download is already hardcoded for a specific Windows
version / arch, so the cache key just adds unneeded complexity.
These caches are getting saved on PR runs which consumes excess GHA
cache storage.
To avoid burning Azure credits, we move the runners back down to the
free tier. Now that caching is properly set up, this should incur only a
minor increase in CI time.