Bumps [actions/cache](https://github.com/actions/cache) from 4.2.3 to
4.2.4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/cache/releases">actions/cache's
releases</a>.</em></p>
<blockquote>
<h2>v4.2.4</h2>
<h2>What's Changed</h2>
<ul>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@nebuk89</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1620">actions/cache#1620</a></li>
<li>Upgrade <code>@actions/cache</code> to <code>4.0.5</code> and move
<code>@protobuf-ts/plugin</code> to dev dependencies by <a
href="https://github.com/Link"><code>@Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1634">actions/cache#1634</a></li>
<li>Prepare release <code>4.2.4</code> by <a
href="https://github.com/Link"><code>@Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1636">actions/cache#1636</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/nebuk89"><code>@nebuk89</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1620">actions/cache#1620</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v4...v4.2.4">https://github.com/actions/cache/compare/v4...v4.2.4</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/cache/blob/main/RELEASES.md">actions/cache's
changelog</a>.</em></p>
<blockquote>
<h1>Releases</h1>
<h3>4.2.4</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.5</li>
</ul>
<h3>4.2.3</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.3 (obfuscates SAS token in
debug logs for cache entries)</li>
</ul>
<h3>4.2.2</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.2</li>
</ul>
<h3>4.2.1</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.1</li>
</ul>
<h3>4.2.0</h3>
<p>TLDR; The cache backend service has been rewritten from the ground up
for improved performance and reliability. <a
href="https://github.com/actions/cache">actions/cache</a> now integrates
with the new cache service (v2) APIs.</p>
<p>The new service will gradually roll out as of <strong>February 1st,
2025</strong>. The legacy service will also be sunset on the same date.
Changes in this release are <strong>fully backward
compatible</strong>.</p>
<p><strong>We are deprecating some versions of this action</strong>. We
recommend upgrading to version <code>v4</code> or <code>v3</code> as
soon as possible before <strong>February 1st, 2025.</strong> (Upgrade
instructions below).</p>
<p>If you are using pinned SHAs, please use the SHAs of versions
<code>v4.2.0</code> or <code>v3.4.0</code></p>
<p>If you do not upgrade, all workflow runs using any of the deprecated
<a href="https://github.com/actions/cache">actions/cache</a> will
fail.</p>
<p>Upgrading to the recommended versions will not break your
workflows.</p>
<h3>4.1.2</h3>
<ul>
<li>Add GitHub Enterprise Cloud instances hostname filters to inform API
endpoint choices - <a
href="https://redirect.github.com/actions/cache/pull/1474">#1474</a></li>
<li>Security fix: Bump braces from 3.0.2 to 3.0.3 - <a
href="https://redirect.github.com/actions/cache/pull/1475">#1475</a></li>
</ul>
<h3>4.1.1</h3>
<ul>
<li>Restore original behavior of <code>cache-hit</code> output - <a
href="https://redirect.github.com/actions/cache/pull/1467">#1467</a></li>
</ul>
<h3>4.1.0</h3>
<ul>
<li>Ensure <code>cache-hit</code> output is set when a cache is missed -
<a
href="https://redirect.github.com/actions/cache/pull/1404">#1404</a></li>
<li>Deprecate <code>save-always</code> input - <a
href="https://redirect.github.com/actions/cache/pull/1452">#1452</a></li>
</ul>
<h3>4.0.2</h3>
<ul>
<li>Fixed restore <code>fail-on-cache-miss</code> not working.</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="0400d5f644"><code>0400d5f</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1636">#1636</a>
from actions/Link-/release-4.2.4</li>
<li><a
href="374a27f269"><code>374a27f</code></a>
Prepare release 4.2.4</li>
<li><a
href="358a7306cd"><code>358a730</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1634">#1634</a>
from actions/Link-/optimise-deps</li>
<li><a
href="2ee706ef74"><code>2ee706e</code></a>
Fix with another approach</li>
<li><a
href="94f7b5d913"><code>94f7b5d</code></a>
Fix bundle exec</li>
<li><a
href="c36116c3f4"><code>c36116c</code></a>
Fix the workflow to use licensed from source</li>
<li><a
href="320fe7d56b"><code>320fe7d</code></a>
Update the licensed workflow to use the latest version</li>
<li><a
href="d81cc477d9"><code>d81cc47</code></a>
Add licensed output</li>
<li><a
href="de243982c5"><code>de24398</code></a>
Add licensed output</li>
<li><a
href="e7b6a9cc9d"><code>e7b6a9c</code></a>
<code>@protobuf-ts/plugin</code> to dev dependencies</li>
<li>Additional commits viewable in <a
href="5a3ec84eff...0400d5f644">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
fix(dependabot): Remove anchors from dependabot config
YAML anchors are not supported here.
Also:
- remove explicit major, minor, and patch version cooldown periods
- actually set it to 28 days (like the previous PR claimed)
Fixes #10378
The majority of the log levels stated in the docker-compose file are
stale because those crates have long been deleted or renamed.
Additionally, the `wire` logs have already been disabled in release
builds, meaning we no longer need to patch them out before the perf
tests.
In 0b09d9f2f5, the `--migrations-path` arguments were added to the `Seed
database` step. These only work for the `ecto.migrate` family of
commands, not with `ecto.seed`. This meant all of our CI was
essentially not running the manual migrations.
Then, in #10377, we fixed the env var when starting the services so that
manual migrations are run. That env var, however, only applies to
"releases", which the `elixir` service is not.
This caused the API service to try and run the missing manual migrations
upon startup, which failed because they would then be run out-of-order.
To fix all of the above, we fix the `Seed database` step of the
perf-tests job to ensure it runs all migrations. The other places where
we seed the database already handled this properly.
The default send and receive buffer sizes on Linux are too small (only
~200 KB). Checking `nstat` after an iperf run revealed that the number
of dropped packets in the first interval directly correlates with the
number of receive buffer errors reported by `nstat`.
We already try to increase the send and receive buffer sizes for our UDP
socket, but unfortunately we cannot increase them beyond the
system-imposed limits. To work around this, we try to set `rmem_max` and
`wmem_max` during startup of the Linux headless client and Gateway. This
behaviour can be disabled by setting `FIREZONE_NO_INC_BUF=true`.
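For illustration, a minimal sketch of what such a startup hook could look like (the constant, function name, and target value are assumptions, not the actual implementation):

```rust
use std::{env, fs, io};

/// Illustrative target only; the value Firezone actually uses is not shown here.
const WANTED_BUF_BYTES: u64 = 8 * 1024 * 1024; // 8 MiB

/// Raise the kernel's maximum socket buffer sizes so that a later
/// `setsockopt(SO_RCVBUF / SO_SNDBUF)` call is not clamped to ~200 KB.
fn maybe_increase_buffer_limits() -> io::Result<()> {
    // Opt-out switch described above.
    if env::var("FIREZONE_NO_INC_BUF").as_deref() == Ok("true") {
        return Ok(());
    }

    for path in ["/proc/sys/net/core/rmem_max", "/proc/sys/net/core/wmem_max"] {
        let current: u64 = fs::read_to_string(path)?.trim().parse().unwrap_or(0);

        if current < WANTED_BUF_BYTES {
            // Requires root; in Docker this path is read-only, hence the
            // manual sysctls in the CI perf tests.
            fs::write(path, WANTED_BUF_BYTES.to_string())?;
        }
    }

    Ok(())
}

fn main() -> io::Result<()> {
    maybe_increase_buffer_limits()
}
```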
Unfortunately, this doesn't work in Docker, so we set the values manually
in the CI perf tests and verify after the test that we didn't encounter
any send or receive buffer errors.
It is yet to be determined how we should deal with this problem for all
the GUI clients. See #10350 for the tracking issue.
Unfortunately, this doesn't fix all packet drops during the first iperf
interval. With this PR, we now see packet drops on the interface itself.
Configure Dependabot with a 28-day cooldown period across all package
ecosystems to protect against supply-chain attacks. This ensures newly
released packages undergo community vetting before adoption.
Key changes:
- Add 7-day cooldown for all dependency types (major, minor, patch)
- Switch from monthly to weekly checks to ensure timely updates after
cooldown expires
- Use YAML anchors to maintain DRY configuration (we can unfold them if
we need custom config)
Security rationale:
- Most supply-chain attacks are discovered within a few days of release
- Patch versions are particularly vulnerable as they're often
auto-merged with less scrutiny
- Weekly checks + 28-day cooldown = roughly matching previous elixir
dependency update cadence
Note: Security updates bypass the cooldown and are applied immediately,
ensuring critical CVEs are patched without delay
Now that we have a more realistic network setup in our compose file, we
can extend our router containers to apply the latency on the network
path. This means any use of the compose file has latency applied by
default, simplifying our CI setup. It also allows us to restart
containers without having to re-apply the latency, which is useful during
performance testing.
Currently, the eBPF module can translate from channel data messages to
UDP packets and vice versa. It can even do that across IP stacks, i.e.
translate from an IPv6 UDP packet to an IPv4 channel data message.
What it cannot do is handle packets to itself. This can happen if both
the Client and the Gateway pick the same relay to make an allocation. When
exchanging candidates, ICE will then form pairs between both relay
candidates, essentially requiring the relay to loop packets back to
itself.
In eBPF, we cannot do that. When we send a packet back out with
`XDP_TX`, it actually goes out on the wire without any additional check
of whether it is destined for our own IP.
Properly handling this in eBPF (by comparing the destination IP to our
public IP) adds more cases we need to handle. The current module
structure, where everything is in one file, makes this quite hard to
understand, which is why I opted to create four sub-modules:
- `from_ipv4_channel`
- `from_ipv4_udp`
- `from_ipv6_channel`
- `from_ipv6_udp`
For traffic arriving via a data-channel, it is possible that we also
need to send it back out via a data-channel if the peer address we are
sending to is the relay itself. Therefore, the `from_ipX_channel`
modules have four sub-modules:
- `to_ipv4_channel`
- `to_ipv4_udp`
- `to_ipv6_channel`
- `to_ipv6_udp`
For the traffic arriving on an allocation port (`from_ipX_udp`), we
always map to a data-channel and therefore can never get into a routing
loop, resulting in only two modules:
- `to_ipv4_channel`
- `to_ipv6_channel`
The actual implementation of the new code paths is rather simple and
mostly copied from the existing ones. For half of them, we don't need to
make any adjustments to the buffer size (i.e. IPv4 channel to IPv4
channel). For the other half, we need to adjust for the difference in
the IP header size.
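As a rough, non-eBPF illustration of the routing decision these modules implement (the type and function names are made up; the real code operates on raw XDP buffers):

```rust
use std::net::IpAddr;

/// Where a decoded channel-data payload should go next. Hypothetical names.
enum NextHop {
    /// Peer is an ordinary allocation peer: forward as a plain UDP packet.
    Udp(IpAddr),
    /// Peer is this relay itself: re-wrap as channel data so the packet stays
    /// within the relay instead of looping back out onto the wire via `XDP_TX`.
    Channel(IpAddr),
}

fn route_channel_payload(peer: IpAddr, our_public_ips: &[IpAddr]) -> NextHop {
    if our_public_ips.contains(&peer) {
        NextHop::Channel(peer)
    } else {
        NextHop::Udp(peer)
    }
}

fn main() {
    let ours: Vec<IpAddr> = vec!["203.0.113.10".parse().unwrap()];

    // A payload addressed to ourselves must go back out as channel data.
    assert!(matches!(
        route_channel_payload("203.0.113.10".parse().unwrap(), &ours),
        NextHop::Channel(_)
    ));
}
```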
To test these changes, we add a new integration test that makes use of
the new docker-compose setup added in #10301 and configures masquerading
for both Client and Gateway. To make this more useful, we also remove
the `direct-` prefix from all tests as the test script itself no longer
makes any decisions as to whether it is operating over a direct or
relayed connection.
Resolves: #7518
Currently, the setup we have in docker-compose does not reflect
real-world scenarios very well because most components share the same
subnet. In reality, Clients, Gateways, relays and the backend are all in
separate subnets, connected via multiple routers on the Internet.
The current setup makes it hard to properly test relayed connections. To
fix this, we move all components into their own subnet with a dedicated
router container that performs source and destination NAT and acts as a
firewall for the client and gateway containers, disallowing inbound
traffic.
This setup will allow us to more easily test #10286 which requires port
randomization for outgoing traffic on the Client and Gateway side.
Initially, we added the graceful shutdown functionality to the relay to
better deal with deploys and achieve as little downtime as possible.
With the split of app and infrastructure that we now have, this
functionality is no longer necessary, as portal deploys don't touch the
relay infra at all.
Thus, we can remove this functionality, which will actually speed up
deploys of the relays, as systemd no longer has to time out after sending
SIGTERM to the binary.
Ubuntu 22.04 is over 3 years old and therefore ships with quite an old
kernel. Our production VMs (for relays) all run Ubuntu 24.04, so it makes
sense to build and test them on the same kernel / OS release. For
consistency, we bump all runners to 24.04.
When we receive a DNS query for a DNS resource in Firezone, we take the
next available 4 IPs from the CG-NAT range and assign them to the domain
name. For example, if `example.com` is a DNS resource and it is the
first resource being queried in a Firezone session, we will assign the
IPs `100.96.0.1` through `100.96.0.4` to it. If the user now restarts Firezone
or signs out and back in, this state is lost and we assign those same
IPs to the next DNS query coming in.
This creates a problem for applications that re-query DNS only rarely or
never. They expect these IPs to not change. Restarting software
or signing out and back in is a common approach to fixing software
problems, yet in this specific case, doing so may create even more
problems for the user.
To mitigate this, `ClientState` introduces a new event,
`DnsRecordsChanged`, that gets emitted to the event-loop every time we
assign new records. The event-loop then caches this in memory and reuses
it in case a new session is initiated. The records are only stored
in-memory and not on disk. Most likely, the tunnel process will be alive
for the entire OS session.
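A minimal sketch of the caching idea, assuming hypothetical type and method names rather than the actual `ClientState` API:

```rust
use std::{collections::BTreeMap, net::IpAddr};

/// Hypothetical shape of the exported records; the real types differ.
type DnsRecords = BTreeMap<String, Vec<IpAddr>>;

enum Event {
    /// Emitted every time new IPs from the CG-NAT range are assigned.
    DnsRecordsChanged { records: DnsRecords },
}

struct ClientState {
    dns_records: DnsRecords,
}

impl ClientState {
    /// Re-seed a fresh state with records from the previous session so that
    /// previously handed-out IPs keep routing to the same resources.
    fn new(previous_records: DnsRecords) -> Self {
        Self { dns_records: previous_records }
    }
}

/// The event-loop keeps the records in memory only; nothing goes to disk.
struct EventLoop {
    cached_records: DnsRecords,
}

impl EventLoop {
    fn handle(&mut self, event: Event) {
        match event {
            Event::DnsRecordsChanged { records } => self.cached_records = records,
        }
    }

    fn restart_session(&self) -> ClientState {
        ClientState::new(self.cached_records.clone())
    }
}

fn main() {
    let mut event_loop = EventLoop { cached_records: DnsRecords::new() };

    event_loop.handle(Event::DnsRecordsChanged {
        records: DnsRecords::from([(
            "example.com".to_string(),
            vec!["100.96.0.1".parse().unwrap()],
        )]),
    });

    // A "restarted" client still knows the IPs handed out previously.
    assert_eq!(event_loop.restart_session().dns_records.len(), 1);
}
```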
To verify this behaviour, we add a new `RestartClient` transition to our
proptests. In the proptests, we already keep a mapping of all DNS names
we ever resolved, including DNS resources. When generating IP traffic,
we sample from this list of IPs and then expect the packet to be routed.
By replacing the `ClientState` as part of this transition and re-seeding
it with the previously exported DNS records, we can verify that packets
to IPs resolved from a previous session still get successfully routed to
the resource.
Related: #5498
To deploy the relays on Azure, we need to make sure the binaries are
copied there, similar to GCP. This adds a job step to do just that,
placing them into a storage account + container using new infra
provisioned in Azure.
In CI, eBPF in driver mode actually functions just fine with no changes
to our existing tests, given we apply a few workarounds and bugfixes:
- The interface learning mechanism had two flaws: (1) it only learned
per-CPU, which meant the risk for a missing entry grew as the core count
of the relay host grew, and (2) it did not filter for unicast IPs, so it
picked up broadcast and link-local addresses, causing cross-relay paths
to fail occasionally
- The `relay-relay` candidate where the two relays are the same relay
causes packet drops / loops in the Docker bridge setup, and possibly in
GCP too. I'm not sure this is a valid path that solves a real
connectivity issue in the wild. I can understand relay-relay paths where
two relays are different hosts, and the client and gateway both talk
over their TURN channel to each other (i.e. WireGuard is blocked in each
of their networks), but I can't think of an advantage for a relay-relay
candidate where the traffic simply hairpins (or is dropped) off the
nearest switch. This is now detected with a new `PacketLoop` error
that triggers whenever `source_ip == dest_ip` (see the sketch after this
list).
- The relays in CI need a common next-hop to talk to for the MAC address
swapping to work. A simple router service is added which functions as a
basic L3 router (no NAT) that allows the MAC swapping to work.
- The `veth` driver has some peculiar requirements to allow it to
function with XDP_TX. If you send a packet out of one interface of a
veth pair with XDP_TX, you need to either make sure both interfaces have
GRO enabled, or attach a dummy XDP program that simply does
XDP_PASS to the other interface so that the sk_buff is allocated before
going up the stack to the Docker bridge. The GRO method was unreliable
and didn't work in our case, causing massive packet delays and
unpredictable bursts that prevented ICE from working, so we use the
XDP_PASS method instead. A simple docker image is built and lives at
https://github.com/firezone/xdp-pass to handle this.
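For reference, a minimal sketch of the loop check mentioned above (the error and function names are illustrative; the real check lives in the eBPF/XDP path):

```rust
use std::net::IpAddr;

/// Hypothetical error type for the new loop detection.
#[derive(Debug)]
enum Error {
    /// Source and destination are the same host: forwarding would only
    /// hairpin (or drop) the packet off the nearest switch.
    PacketLoop,
}

fn check_for_loop(source_ip: IpAddr, dest_ip: IpAddr) -> Result<(), Error> {
    if source_ip == dest_ip {
        return Err(Error::PacketLoop);
    }

    Ok(())
}

fn main() {
    let ip: IpAddr = "198.51.100.7".parse().unwrap();
    assert!(matches!(check_for_loop(ip, ip), Err(Error::PacketLoop)));
}
```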
Related: #10138
Related: #10260
- Removes the Swift DerivedData cache. This was added in an attempt to
speed up the Swift builds in CI, but in reality those are already fast
and the cache did not speed them up.
- Removes the runner.os/arch specifier from the Webview installer cache
key. The binary download is hardcoded for a specific Windows version /
arch already, so the cache key just adds unneeded complexity.
These caches are getting saved on PR runs, which consumes excess GHA
cache storage.
To avoid burning Azure credits, we move the runners back down to the
free tier. Now that caching is properly set up, this should incur only a
minor increase in CI time.
The COS images we currently use to run our Relays ship with an older
Linux kernel that doesn't have some of the nice verifier improvements
for our eBPF relay.
To fix this, we need to use Ubuntu 24.04. To keep things simple there,
we would like to avoid installing Docker on that image and instead run
the Relay raw. To support that, we first need to push the built relay
binary to our staging cloud storage bucket.
Related: #10177
Related: https://github.com/firezone/infra/pull/116
---------
Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
We are _very much_ over our GHA cache limit of 10 GB, so in an effort to
keep evictions to a minimum, we update the Rust sccache to only write on
`main` and the Docker elixir and data plane image build steps to do the
same.
Fixes #10145
In the real world, it's entirely possible that the latency between
clients, gateways, and relays is much lower than the latency to the API
nodes. This added latency will test that we can handle such cases
reliably.
---------
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
Right now, `snownet` de-multiplexes WireGuard packets based on their
source tuple (IP + port) to the _first_ connection that would like to
handle this traffic. What appears to be happening based on observation
from customer logs is that we sometimes dispatch the traffic to the
wrong connection.
The WireGuard packet format uses session indices to declare, which
session a packet is for. The local session index is selected during the
handshake for a particular session.
By associating the different session indices (we can have up to 8 in
parallel per peer) with our Firezone-specific connection ID, we can
change our de-multiplexing scheme to use these indices instead of the
source tuple. This is especially important for Gateways as those talk to
multiple different clients.
The session index is a 32-bit integer where the top 24 bits identify the
connection and the bottom 8 bits are used in a round-robin fashion to
identify individual sessions within the connection. Thus, to find the
correct connection, we right-shift the session index of an incoming
packet to arrive back at the 24-bit connection identifier.
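A minimal sketch of that index arithmetic, with made-up type names rather than `snownet`'s actual API:

```rust
/// The bottom 8 bits of a session index cycle round-robin through the
/// (up to 8) parallel sessions of one connection.
const SESSION_BITS: u32 = 8;

/// Our Firezone-specific connection identifier, fitting in 24 bits.
/// Hypothetical type; the real identifier may differ.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct ConnectionId(u32);

/// Build the local session index advertised during the WireGuard handshake.
fn session_index(connection: ConnectionId, session_counter: u8) -> u32 {
    (connection.0 << SESSION_BITS) | u32::from(session_counter)
}

/// Recover the connection an incoming packet belongs to by dropping the
/// per-session bits again.
fn connection_of(receiver_index: u32) -> ConnectionId {
    ConnectionId(receiver_index >> SESSION_BITS)
}

fn main() {
    let connection = ConnectionId(0x00AB_CDEF); // 24-bit identifier
    let index = session_index(connection, 3);

    assert_eq!(connection_of(index), connection);
}
```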
In environments with a limited number of ports outside the NAT, a
connection from a new Client may come from a source tuple of a previous
Client. In such a case, we'd dispatch the packets to the wrong
connection, preventing the Client from completing a tunnel handshake.
Bumps
[taiki-e/install-action](https://github.com/taiki-e/install-action) from
2.55.3 to 2.57.5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/taiki-e/install-action/releases">taiki-e/install-action's
releases</a>.</em></p>
<blockquote>
<h2>2.57.5</h2>
<ul>
<li>Update <code>vacuum@latest</code> to 0.17.7.</li>
</ul>
<h2>2.57.4</h2>
<ul>
<li>Update <code>trivy@latest</code> to 0.65.0.</li>
</ul>
<h2>2.57.3</h2>
<ul>
<li>Update <code>syft@latest</code> to 1.29.1.</li>
</ul>
<h2>2.57.2</h2>
<ul>
<li>
<p>Update <code>grcov@latest</code> to 0.10.3.</p>
</li>
<li>
<p>Update <code>cargo-shear@latest</code> to 1.4.1.</p>
</li>
</ul>
<h2>2.57.1</h2>
<ul>
<li>
<p>Update <code>git-cliff@latest</code> to 2.10.0.</p>
</li>
<li>
<p>Update <code>cargo-binstall@latest</code> to 1.14.2.</p>
</li>
</ul>
<h2>2.57.0</h2>
<ul>
<li>Support <code>mdbook-alerts</code>. (<a
href="https://redirect.github.com/taiki-e/install-action/pull/1060">#1060</a>,
thanks <a
href="https://github.com/CommanderStorm"><code>@CommanderStorm</code></a>)</li>
</ul>
<h2>2.56.24</h2>
<ul>
<li>Update <code>just@latest</code> to 1.42.4.</li>
</ul>
<h2>2.56.23</h2>
<ul>
<li>Update <code>release-plz@latest</code> to 0.3.139.</li>
</ul>
<h2>2.56.22</h2>
<ul>
<li>Update <code>wasmtime@latest</code> to 35.0.0.</li>
</ul>
<h2>2.56.21</h2>
<ul>
<li>Improve error message for unsupported host architectures.</li>
</ul>
<h2>2.56.20</h2>
<ul>
<li>Update <code>syft@latest</code> to 1.29.0.</li>
</ul>
<h2>2.56.19</h2>
<ul>
<li>Update <code>cargo-llvm-cov@latest</code> to 0.6.18.</li>
</ul>
<h2>2.56.18</h2>
<ul>
<li>Update <code>just@latest</code> to 1.42.3.</li>
</ul>
<h2>2.56.17</h2>
<ul>
<li>Update <code>wasmtime@latest</code> to 34.0.2.</li>
</ul>
<h2>2.56.16</h2>
<ul>
<li>
<p>Update <code>cargo-zigbuild@latest</code> to 0.20.1.</p>
</li>
<li>
<p>Update <code>cargo-lambda@latest</code> to 1.8.6.</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/taiki-e/install-action/blob/main/CHANGELOG.md">taiki-e/install-action's
changelog</a>.</em></p>
<blockquote>
<h1>Changelog</h1>
<p>All notable changes to this project will be documented in this
file.</p>
<p>This project adheres to <a href="https://semver.org">Semantic
Versioning</a>.</p>
<!-- raw HTML omitted -->
<h2>[Unreleased]</h2>
<h2>[2.57.5] - 2025-08-01</h2>
<ul>
<li>Update <code>vacuum@latest</code> to 0.17.7.</li>
</ul>
<h2>[2.57.4] - 2025-07-31</h2>
<ul>
<li>Update <code>trivy@latest</code> to 0.65.0.</li>
</ul>
<h2>[2.57.3] - 2025-07-31</h2>
<ul>
<li>Update <code>syft@latest</code> to 1.29.1.</li>
</ul>
<h2>[2.57.2] - 2025-07-29</h2>
<ul>
<li>
<p>Update <code>grcov@latest</code> to 0.10.3.</p>
</li>
<li>
<p>Update <code>cargo-shear@latest</code> to 1.4.1.</p>
</li>
</ul>
<h2>[2.57.1] - 2025-07-27</h2>
<ul>
<li>
<p>Update <code>git-cliff@latest</code> to 2.10.0.</p>
</li>
<li>
<p>Update <code>cargo-binstall@latest</code> to 1.14.2.</p>
</li>
</ul>
<h2>[2.57.0] - 2025-07-26</h2>
<ul>
<li>Support <code>mdbook-alerts</code>. (<a
href="https://redirect.github.com/taiki-e/install-action/pull/1060">#1060</a>,
thanks <a
href="https://github.com/CommanderStorm"><code>@CommanderStorm</code></a>)</li>
</ul>
<h2>[2.56.24] - 2025-07-25</h2>
<ul>
<li>Update <code>just@latest</code> to 1.42.4.</li>
</ul>
<h2>[2.56.23] - 2025-07-24</h2>
<ul>
<li>Update <code>release-plz@latest</code> to 0.3.139.</li>
</ul>
<h2>[2.56.22] - 2025-07-24</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d31232495a"><code>d312324</code></a>
Release 2.57.5</li>
<li><a
href="00ad1b8748"><code>00ad1b8</code></a>
Update <code>vacuum@latest</code> to 0.17.7</li>
<li><a
href="e8c1cf74a6"><code>e8c1cf7</code></a>
Release 2.57.4</li>
<li><a
href="f5b10fbf06"><code>f5b10fb</code></a>
Update <code>trivy@latest</code> to 0.65.0</li>
<li><a
href="17ad3887d7"><code>17ad388</code></a>
Release 2.57.3</li>
<li><a
href="450b647d5c"><code>450b647</code></a>
Update <code>syft@latest</code> to 1.29.1</li>
<li><a
href="bbdef1c33c"><code>bbdef1c</code></a>
Release 2.57.2</li>
<li><a
href="c01bd8006a"><code>c01bd80</code></a>
Update <code>grcov@latest</code> to 0.10.3</li>
<li><a
href="658daa5fc2"><code>658daa5</code></a>
Update <code>cargo-shear@latest</code> to 1.4.1</li>
<li><a
href="a416ddeedb"><code>a416dde</code></a>
Release 2.57.1</li>
<li>Additional commits viewable in <a
href="https://github.com/taiki-e/install-action/compare/v2.55.3...d31232495ad76f47aad66e3501e47780b49f0f3e">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps
[dtolnay/rust-toolchain](https://github.com/dtolnay/rust-toolchain) from
a54c7afa936fefeb4456b2dd8068152669aa8203 to
b3b07ba8b418998c39fb20f53e8b695cdcc8de1b.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="b3b07ba8b4"><code>b3b07ba</code></a>
Merge pull request <a
href="https://redirect.github.com/dtolnay/rust-toolchain/issues/152">#152</a>
from dtolnay/trailingwhitespace</li>
<li><a
href="6ff96e92a9"><code>6ff96e9</code></a>
Clean up trailing whitespace from PR 145</li>
<li><a
href="3038d437c0"><code>3038d43</code></a>
Merge pull request <a
href="https://redirect.github.com/dtolnay/rust-toolchain/issues/151">#151</a>
from dtolnay/winrustup</li>
<li><a
href="d69c8f6cd5"><code>d69c8f6</code></a>
Use rustup.rs advertised download URLs</li>
<li><a
href="c9b8f05fe9"><code>c9b8f05</code></a>
Merge pull request <a
href="https://redirect.github.com/dtolnay/rust-toolchain/issues/149">#149</a>
from dtolnay/wincargohome</li>
<li><a
href="eceb16e78c"><code>eceb16e</code></a>
Respect pre-existing CARGO_HOME on Windows</li>
<li><a
href="449259c7e2"><code>449259c</code></a>
Merge pull request <a
href="https://redirect.github.com/dtolnay/rust-toolchain/issues/150">#150</a>
from dtolnay/githubpath</li>
<li><a
href="f36efbae07"><code>f36efba</code></a>
Fix GITHUB_PATH</li>
<li><a
href="3d21cbbc39"><code>3d21cbb</code></a>
Merge pull request <a
href="https://redirect.github.com/dtolnay/rust-toolchain/issues/148">#148</a>
from dtolnay/backslash</li>
<li><a
href="802126c77d"><code>802126c</code></a>
Consistently use backslash directories on Windows</li>
<li>Additional commits viewable in <a
href="a54c7afa93...b3b07ba8b4">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This fixes an issue introduced when we moved to GHCR hosting, where the
`latest` tag was being applied to each `main` build of the `client` and
`gateway` instead of only to publish builds.
With the removal of the NAT64/46 modules, we can now simplify the
internals of our `IpPacket` struct. The requirements for our `IpPacket`
struct are somewhat delicate.
On the one hand, we don't want to be overly restrictive in our parsing /
validation code because there is a lot of broken software out there that
doesn't necessarily follow RFCs. Hence, we want to be as lenient as
possible in what we accept.
On the other hand, we do need to verify certain aspects of the packet,
like the payload lengths. At the moment, we are somewhat too lenient
there, which causes errors on the Gateway, where we have to NAT or
otherwise manipulate the packets. See #9567 or #9552 for example.
To fix this, we make the parsing in the `IpPacket` constructor more
restrictive. If it is a UDP, TCP or ICMP packet, we attempt to fully
parse its headers and validate the payload lengths.
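As a rough illustration of the kind of length check this implies for UDP (the names and exact policy are assumptions, not the real `IpPacket` constructor):

```rust
/// Check that the UDP header's own length field agrees with the number of
/// bytes the IP header says are present. Illustrative only.
fn validate_udp(ip_payload: &[u8]) -> Result<(), &'static str> {
    // A UDP header is 8 bytes: source port, destination port, length, checksum.
    let header = ip_payload.get(..8).ok_or("truncated UDP header")?;

    // The UDP `length` field counts header + payload and must agree with what
    // the IP header says is actually there; otherwise later NAT / checksum
    // code on the Gateway would operate on inconsistent data.
    let declared = u16::from_be_bytes([header[4], header[5]]) as usize;

    if declared != ip_payload.len() {
        return Err("UDP length does not match IP payload length");
    }

    Ok(())
}

fn main() {
    // 8-byte header, declared length 8, no payload: consistent.
    let ok = [0, 53, 0, 53, 0, 8, 0, 0];
    assert!(validate_udp(&ok).is_ok());

    // Declared length 12 but only 8 bytes present: rejected.
    let broken = [0, 53, 0, 53, 0, 12, 0, 0];
    assert!(validate_udp(&broken).is_err());
}
```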
This parsing allows us to then rely on the integrity of the packet as
part of the implementation. This does create several code paths that can
in theory panic but should, in practice, be impossible to hit. To ensure
that this does in fact not happen, we also tackle an issue that is long
overdue: Fuzzing.
Resolves: #6667
Resolves: #9567
Resolves: #9552