Bumps
[actions/download-artifact](https://github.com/actions/download-artifact)
from 5.0.0 to 6.0.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/download-artifact/releases">actions/download-artifact's
releases</a>.</em></p>
<blockquote>
<h2>v6.0.0</h2>
<h2>What's Changed</h2>
<p><strong>BREAKING CHANGE:</strong> this update supports Node
<code>v24.x</code>. This is not a breaking change per-se but we're
treating it as such.</p>
<ul>
<li>Update README for download-artifact v5 changes by <a
href="https://github.com/yacaovsnc"><code>@yacaovsnc</code></a> in <a
href="https://redirect.github.com/actions/download-artifact/pull/417">actions/download-artifact#417</a></li>
<li>Update README with artifact extraction details by <a
href="https://github.com/yacaovsnc"><code>@yacaovsnc</code></a> in <a
href="https://redirect.github.com/actions/download-artifact/pull/424">actions/download-artifact#424</a></li>
<li>Readme: spell out the first use of GHES by <a
href="https://github.com/danwkennedy"><code>@danwkennedy</code></a> in
<a
href="https://redirect.github.com/actions/download-artifact/pull/431">actions/download-artifact#431</a></li>
<li>Bump <code>@actions/artifact</code> to <code>v4.0.0</code></li>
<li>Prepare <code>v6.0.0</code> by <a
href="https://github.com/danwkennedy"><code>@danwkennedy</code></a> in
<a
href="https://redirect.github.com/actions/download-artifact/pull/438">actions/download-artifact#438</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/danwkennedy"><code>@danwkennedy</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/download-artifact/pull/431">actions/download-artifact#431</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/download-artifact/compare/v5...v6.0.0">https://github.com/actions/download-artifact/compare/v5...v6.0.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="018cc2cf5b"><code>018cc2c</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/download-artifact/issues/438">#438</a>
from actions/danwkennedy/prepare-6.0.0</li>
<li><a
href="815651c680"><code>815651c</code></a>
Revert "Remove <code>github.dep.yml</code>"</li>
<li><a
href="bb3a066a8b"><code>bb3a066</code></a>
Remove <code>github.dep.yml</code></li>
<li><a
href="fa1ce46bbd"><code>fa1ce46</code></a>
Prepare <code>v6.0.0</code></li>
<li><a
href="4a24838f3d"><code>4a24838</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/download-artifact/issues/431">#431</a>
from danwkennedy/patch-1</li>
<li><a
href="5e3251c4ff"><code>5e3251c</code></a>
Readme: spell out the first use of GHES</li>
<li><a
href="abefc31eaf"><code>abefc31</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/download-artifact/issues/424">#424</a>
from actions/yacaovsnc/update_readme</li>
<li><a
href="ac43a6070a"><code>ac43a60</code></a>
Update README with artifact extraction details</li>
<li><a
href="de96f4613b"><code>de96f46</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/download-artifact/issues/417">#417</a>
from actions/yacaovsnc/update_readme</li>
<li><a
href="7993cb44e9"><code>7993cb4</code></a>
Remove migration guide for artifact download changes</li>
<li>Additional commits viewable in <a
href="634f93cb29...018cc2cf5b">compare
view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Setting `fail-fast: false` unsurprisingly makes our CI fail pretty
slowly. This is especially noticeable in the merge queue where a
long-running job could still hold up the entire queue even though a
different job has failed already and the PR is never going to make it in
anyway.
To avoid this scenario, we set `fail-fast: true` whenever we are in the
merge queue.
Currently, we launch the `required_check` right away with all others and
poll the GitHub API to see if all others have completed already. This
eats into our API quota.
An easier way to do the same thing is to declare a dependency of the
`required_check` job on all other jobs. Normally, this wouldn't work
because we skip certain jobs if the related files haven't been modified.
We can opt out of this default behaviour by telling GitHub to `always()`
run our job. That way, it naturally gets scheduled after all others,
even if some of the jobs have been skipped.
According to GitHub support, this API call is responsible for most of
our API usage. Until we find a better way of organising this, checking
only once a minute should be fine too, even if it slows down the merge
queue a bit.
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Right now, draft releases for Gateways and headless-clients are created
on each merge to main. For all other components, we only create those
when we trigger the workflow for a specific commit.
To align this functionality, we split the `_build_artifacts.yml`
workflow into two:
- `_control-plane.yml`
- `_data-plane.yml`
Apart from the `sha` input, all inputs only concern the data-plane,
which massively simplifies the control-plane workflow.
Additionally, the control-plane also doesn't have a manual trigger
because its artifacts never get released on GitHub.
Resolves: #10541
Bumps
[actions/download-artifact](https://github.com/actions/download-artifact)
from 4.3.0 to 5.0.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/download-artifact/releases">actions/download-artifact's
releases</a>.</em></p>
<blockquote>
<h2>v5.0.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@nebuk89</code></a> in <a
href="https://redirect.github.com/actions/download-artifact/pull/407">actions/download-artifact#407</a></li>
<li>BREAKING fix: inconsistent path behavior for single artifact
downloads by ID by <a
href="https://github.com/GrantBirki"><code>@GrantBirki</code></a> in <a
href="https://redirect.github.com/actions/download-artifact/pull/416">actions/download-artifact#416</a></li>
</ul>
<h2>v5.0.0</h2>
<h3>🚨 Breaking Change</h3>
<p>This release fixes an inconsistency in path behavior for single
artifact downloads by ID. <strong>If you're downloading single artifacts
by ID, the output path may change.</strong></p>
<h4>What Changed</h4>
<p>Previously, <strong>single artifact downloads</strong> behaved
differently depending on how you specified the artifact:</p>
<ul>
<li><strong>By name</strong>: <code>name: my-artifact</code> → extracted
to <code>path/</code> (direct)</li>
<li><strong>By ID</strong>: <code>artifact-ids: 12345</code> → extracted
to <code>path/my-artifact/</code> (nested)</li>
</ul>
<p>Now both methods are consistent:</p>
<ul>
<li><strong>By name</strong>: <code>name: my-artifact</code> → extracted
to <code>path/</code> (unchanged)</li>
<li><strong>By ID</strong>: <code>artifact-ids: 12345</code> → extracted
to <code>path/</code> (fixed - now direct)</li>
</ul>
<h4>Migration Guide</h4>
<h5>✅ No Action Needed If:</h5>
<ul>
<li>You download artifacts by <strong>name</strong></li>
<li>You download <strong>multiple</strong> artifacts by ID</li>
<li>You already use <code>merge-multiple: true</code> as a
workaround</li>
</ul>
<h5>⚠️ Action Required If:</h5>
<p>You download <strong>single artifacts by ID</strong> and your
workflows expect the nested directory structure.</p>
<p><strong>Before v5 (nested structure):</strong></p>
<pre lang="yaml"><code>- uses: actions/download-artifact@v4
with:
artifact-ids: 12345
path: dist
# Files were in: dist/my-artifact/
</code></pre>
<blockquote>
<p>Where <code>my-artifact</code> is the name of the artifact you
previously uploaded</p>
</blockquote>
<p><strong>To maintain old behavior (if needed):</strong></p>
<pre lang="yaml"><code></tr></table>
</code></pre>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="634f93cb29"><code>634f93c</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/download-artifact/issues/416">#416</a>
from actions/single-artifact-id-download-path</li>
<li><a
href="b19ff43027"><code>b19ff43</code></a>
refactor: resolve download path correctly in artifact download tests
(mainly ...</li>
<li><a
href="e262cbee4a"><code>e262cbe</code></a>
bundle dist</li>
<li><a
href="bff23f9308"><code>bff23f9</code></a>
update docs</li>
<li><a
href="fff8c148a8"><code>fff8c14</code></a>
fix download path logic when downloading a single artifact by id</li>
<li><a
href="448e3f862a"><code>448e3f8</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/download-artifact/issues/407">#407</a>
from actions/nebuk89-patch-1</li>
<li><a
href="47225c44b3"><code>47225c4</code></a>
Update README.md</li>
<li>See full diff in <a
href="d3f86a106a...634f93cb29">compare
view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
As it turns out, the flaky test was caused by a bug in the eBPF kernel where we read the old channel data header from the wrong offset. This made us essentially read garbage data for the channel number, causing us to:
a. Compute a bad checksum
b. Send the packet on a completely wrong channel
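For context, the ChannelData framing the eBPF kernel has to parse is simple: a 4-byte header (16-bit channel number, 16-bit length, both big-endian) sits at the very start of the payload, followed by the application data. A minimal sketch of the intended parsing; the function and types here are illustrative, not the actual eBPF code:

```rust
/// Parse a TURN ChannelData header (RFC 8656): 2 bytes channel number,
/// 2 bytes length, both big-endian, at the very start of the payload.
/// Reading these fields from any other offset yields garbage, which is
/// the bug described above. (Illustrative sketch, not the eBPF code.)
fn parse_channel_data_header(payload: &[u8]) -> Option<(u16, u16)> {
    let header: [u8; 4] = payload.get(..4)?.try_into().ok()?;

    let channel_number = u16::from_be_bytes([header[0], header[1]]);
    let length = u16::from_be_bytes([header[2], header[3]]);

    Some((channel_number, length))
}
```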
The reason this caused a flaky test is that it requires one side to pick IPv4 to talk to the relay and the other side to pick IPv6. The happy-eyeballs approach of the `allocation` module made that non-deterministic, only exposing this bug occasionally.
To ensure these kinds of issues are detected earlier in the future, I am adding an additional CI step that checks all packets emitted by the eBPF kernel for checksum errors.
Fixes: #10404
Co-authored-by: Jamil Bou Kheir <jamilbk@users.noreply.github.com>
The majority of the log levels stated in the docker-compose file are
stale because those crates have long been deleted or renamed.
Additionally, the `wire` logs have already been disabled in release
builds, meaning we no longer need to patch them out before the perf
tests.
In 0b09d9f2f5, the `--migrations-path` arguments were added to the `Seed
database` step. These only work for the `ecto.migrate` family of
commands, not with `ecto.seed`. This meant our CI has essentially not
been running the manual migrations.
Then, in #10377, we fixed the env var when starting the services so that
manual migrations are run. That env var, however, only applies to
"releases", which the `elixir` service is not.
This caused the API service to try and run the missing manual migrations
upon startup, which failed because they would then be run out-of-order.
To fix all of the above, we fix the `Seed database` step of the
perf-tests job to ensure it runs all migrations. The other places where
we seed the database already handled this properly.
The default send and receive buffer sizes on Linux are too small (only
~200 KB). Checking `nstat` after an iperf run revealed that the number
of dropped packets in the first interval directly correlates with the
number of receive buffer errors reported by `nstat`.
We already try to increase the send and receive buffer sizes for our UDP
socket, but unfortunately we cannot increase them beyond what the system
limits them to. To work around this, we try to set `rmem_max` and
`wmem_max` during startup of the Linux headless client and Gateway. This
behaviour can be disabled by setting `FIREZONE_NO_INC_BUF=true`.
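A rough sketch of that startup step; the 8 MiB target value and the error handling are illustrative assumptions, not the exact client/Gateway code:

```rust
use std::{env, fs, io};

/// Raise the system-wide UDP buffer limits so that later `SO_RCVBUF` /
/// `SO_SNDBUF` requests on our socket are not capped at ~200 KB.
/// Skipped entirely when the user sets `FIREZONE_NO_INC_BUF=true`.
fn maybe_increase_buffer_limits() -> io::Result<()> {
    let disabled = env::var("FIREZONE_NO_INC_BUF").is_ok_and(|v| v == "true");
    if disabled {
        return Ok(());
    }

    // Illustrative target value: 8 MiB for both directions.
    let wanted = (8 * 1024 * 1024).to_string();

    fs::write("/proc/sys/net/core/rmem_max", &wanted)?;
    fs::write("/proc/sys/net/core/wmem_max", &wanted)?;

    Ok(())
}
```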
This doesn't work in Docker unfortunately, so we set the values manually
in the CI perf tests and verify after the test that we didn't encounter
any send and receive buffer errors.
It is yet to be determined how we should deal with this problem for all
the GUI clients. See #10350 for the issue tracking that.
Unfortunately, this doesn't fix all packet drops during the first iperf
interval. With this PR, we now see packet drops on the interface itself.
Now that we have a more realistic network setup in our compose file, we
can extend our router containers to apply the latency on the network
path. This means any use of the compose file gets this latency by
default, simplifying our CI setup. It also allows us to restart
containers without having to re-apply the latency, which is useful
during performance testing.
Currently, the setup we have in docker-compose does not reflect
real-world scenarios very well because most components share the same
subnet. In reality, Clients, Gateways, relays and the backend are all in
separate subnets, connected via multiple routers on the Internet.
The current setup makes it hard to properly test relayed connections. To
fix this, we move all components into their own subnet with a dedicated
router container that performs source and destination NAT and acts as a
firewall, blocking inbound traffic to the Client and Gateway containers.
This setup will allow us to more easily test #10286 which requires port
randomization for outgoing traffic on the Client and Gateway side.
Ubuntu 22.04 is over 3 years old and therefore ships with quite an old
kernel. Our production VMs (for relays) all run Ubuntu 24.04 so it makes
sense to build and test them on the same kernel / OS release. For
consistency reasons, we therefore bump all runners to 24.04.
In CI, eBPF in driver mode actually functions just fine with no changes
to our existing tests, given we apply a few workarounds and bugfixes:
- The interface learning mechanism had two flaws: (1) it only learned
per-CPU, which meant the risk of a missing entry grew with the core count
of the relay host, and (2) it did not filter for unicast IPs, so it
picked up broadcast and link-local addresses, causing cross-relay paths
to fail occasionally.
- The `relay-relay` candidate where the two relays are the same relay
causes packet drops / loops in the Docker bridge setup, and possibly in
GCP too. I'm not sure this is a valid path that solves a real
connectivity issue in the wild. I can understand relay-relay paths where
two relays are different hosts, and the client and gateway both talk
over their TURN channel to each other (i.e. WireGuard is blocked in each
of their networks), but I can't think of an advantage for a relay-relay
candidate where the traffic simply hairpins (or is dropped) off the
nearest switch. This is now detected with a new `PacketLoop` error that
triggers whenever `source_ip == dest_ip`.
- The relays in CI need a common next-hop to talk to for the MAC address
swapping to work, so a simple router service is added that functions as
a basic L3 router (no NAT).
- The `veth` driver has some peculiar requirements to allow it to
function with XDP_TX. If you send a packet out of one interface of a
veth pair with XDP_TX, you need to either make sure both interfaces have
GRO enabled, or attach a dummy XDP program that simply does XDP_PASS to
the other interface so that the sk_buff is allocated before the packet
goes up the stack to the Docker bridge. The GRO method was unreliable
and didn't work in our case, causing massive packet delays and
unpredictable bursts that prevented ICE from working, so we use the
XDP_PASS method instead (see the sketch after this list). A simple
Docker image is built and lives at https://github.com/firezone/xdp-pass
to handle this.
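Such a pass-through program is tiny. Below is a minimal sketch using the `aya-ebpf` crate; the actual `firezone/xdp-pass` image may be implemented differently:

```rust
#![no_std]
#![no_main]

use aya_ebpf::{bindings::xdp_action, macros::xdp, programs::XdpContext};

// Attached to the peer interface of the veth pair. Returning XDP_PASS
// forces an sk_buff to be allocated so the frame travels up the regular
// stack towards the Docker bridge, which is what makes XDP_TX on the
// other end of the pair deliver packets reliably.
#[xdp]
pub fn xdp_pass(_ctx: XdpContext) -> u32 {
    xdp_action::XDP_PASS
}

#[panic_handler]
fn panic(_info: &core::panic::PanicInfo) -> ! {
    // eBPF programs cannot unwind; this is never reached.
    unsafe { core::hint::unreachable_unchecked() }
}
```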
Related: #10138
Related: #10260
To avoid burning Azure credits, we move the runners back down to the
free tier. Now that caching is properly set up, this should incur only a
minor increase in CI time.
Right now, `snownet` de-multiplexes WireGuard packets based on their
source tuple (IP + port) to the _first_ connection that would like to
handle this traffic. Based on observations from customer logs, it
appears that we sometimes dispatch the traffic to the wrong connection.
The WireGuard packet format uses session indices to declare which
session a packet is for. The local session index is selected during the
handshake for a particular session.
By associating the different session indices (we can have up to 8 in
parallel per peer) with our Firezone-specific connection ID, we can
change our de-multiplexing scheme to use these indices instead of the
source tuple. This is especially important for Gateways as those talk to
multiple different clients.
The session index is a 32-bit integer where the top 24 bits identify the
connection and the bottom 8 bits are used in a round-robin fashion to
identify individual sessions within the connection. Thus, to find the
correct connection, we right-shift the session index of an incoming
packet by 8 bits to arrive back at the 24-bit connection identifier.
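As a rough sketch (names are illustrative, not the actual `snownet` API), the mapping between session indices and connections looks like this:

```rust
/// Our Firezone-specific connection identifier; must fit in 24 bits.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct ConnectionId(u32);

/// Mint the WireGuard session index for the next session of a connection:
/// the top 24 bits identify the connection, the bottom 8 bits are a
/// round-robin slot (up to 8 parallel sessions per peer, so it wraps).
fn new_session_index(connection: ConnectionId, slot: u8) -> u32 {
    (connection.0 << 8) | u32::from(slot)
}

/// De-multiplex an incoming packet: recover the connection from the
/// receiver index in the WireGuard header instead of the source tuple.
fn connection_for(receiver_index: u32) -> ConnectionId {
    ConnectionId(receiver_index >> 8)
}
```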
In environments with a limited number of ports outside the NAT, a
connection from a new Client may come from a source tuple of a previous
Client. In such a case, we'd dispatch the packets to the wrong
connection, causing the Client to not be able to handshake a tunnel.
Google Cloud Artifact Registry and Cloud Storage are a significant cost.
GitHub, on the other hand, is completely free because we are a public
repository. Hence, it makes sense to ditch GCP for GHCR.
To do this, we move all "staging" artifacts to GHCR. These will then be
used in the infra repo to push to GCP for deploys - we probably still
want pulls for our infra to hit GCP and not GitHub.
One big element of this is that we potentially lose sccache, so I'll be
checking the compile time of this PR and looking for alternatives that
don't involve such a massive cloud bill.
In Docker environments, applying iptables rules to filter
container-to-container traffic on the Docker bridge network is not
reliable, leading to direct connections being established in our relayed
tests. To fix this, we insert the rules directly from the client
container itself.
---------
Co-authored-by: Jamil Bou Kheir <jamilbk@users.noreply.github.com>
The tunnel service creates the Firezone ID upon start-up. With recent
changes, the GUI client now needs to read the ID file when it starts.
This exposes a race condition in our smoke-tests where we start them
both at roughly the same time.
To fix this, we sleep for 500ms after starting the tunnel process.
The `expires_at` column on the `flows` table was never used outside of
the context in which the flow was created in the Client Channel. This
ephemeral state, which is created in the `Domain.Flows.authorize_flow/4`
function, is never read from the DB in any meaningful capacity, so it
can be safely removed.
The `expire_flows_for` family of functions now simply reads the needed
fields from the flows table in order to broadcast `{:expire_flow,
flow_id, client_id, resource_id}` directly to the subscribed entities.
This PR is step 1 in removing the reliance on `Flows` to manage
ephemeral access state. In a subsequent PR we will actually change the
structure of what state is kept in the channel PIDs such that reliance
on this Flows table will no longer be necessary.
Additionally, in a few places, we were referencing a Flows.Show view
that was never available in production, so this dead code has been
removed.
Lastly, the `flows` table subscription and associated hook processing
has been completely removed as it is no longer needed. In #9667, we
implemented logic to remove publications from removed table
subscriptions, so we can expect a couple of ingest warnings when we
deploy this: the `Hooks.Flows` processor no longer exists, and the WAL
data may have lingering flows records in the queue. These can be safely
ignored.
[`actionlint`](https://github.com/rhysd/actionlint) is a static analysis
tool for GitHub workflows and actions. It detects various issues ahead
of time and runs shellcheck on all `run` blocks. It is worth noting that
this does **not** lint the contents of composite actions so we still
need to be vigilant when working with those.
- Adds a timeout to the required_checks workflow
- Expects all jobs to run, exiting the script early for main branch runs
- Adds `set -xe` so we catch script errors going forward
This CI run has been running for over an hour, and it's not clear which
job it's waiting on:
https://github.com/firezone/firezone/actions/runs/15565464294
When a CI job is running as part of a merge group, it's possible the
base ref is a few commits away if the merge queue has items in it. So we
update the fetch depth to 20.
We don't need to rebuild the Tauri clients every time we change Rust
code but we almost certainly want to rebuild them if we change any code
in the client itself so we can smoke test them.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>