Currently, tests only send ICMP packets back and forth. To expand our
coverage and later allow us to cover filters and resource picking, this
PR implements sending UDP and TCP packets as part of that logic too.
To keep this PR simpler at this stage, TCP packets don't track an actual
TCP connection, only that they are forwarded back and forth; this will
be fixed in a future PR by emulating TCP sockets.
We also unify how we handle CIDR/DNS/non-Resources to reduce the number
of transitions.
Fixes #7003
---------
Signed-off-by: Gabi <gabrielalejandro7@gmail.com>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
At present, `connlib` only supports DNS over UDP on port 53. Responses
over UDP are size-constrained by the IP MTU and thus, not all DNS
responses fit into a UDP packet. RFC 9210 therefore mandates that all
DNS resolvers must also support DNS over TCP to overcome this limitation
[0].
Handling UDP packets is easy; handling TCP streams is more difficult
because we effectively need to implement a valid TCP state machine.
Building on top of a lot of earlier work (linked in the issue), this is
relatively easy because we can now simply import
`dns_over_tcp::{Client,Server}`, which do the heavy lifting of sending
and receiving the correct packets for us.
The main aspects of the integration that are worth pointing out are:
- We can handle at most 10 concurrent DNS TCP connections _per defined
resolver_ (see the sketch below). The assumption here is that most
applications will first query for DNS records over UDP and only fall
back to TCP if the response is truncated. Additionally, we assume that
clients will close the TCP connections once they no longer need them.
- Errors on the TCP stream to an upstream resolver result in `SERVFAIL`
responses to the client.
- All TCP connections to upstream resolvers get reset when we roam; all
currently ongoing queries will be answered with `SERVFAIL`.
- Upon network reset (i.e. roaming), we also re-allocate new local ports
for all TCP sockets, similar to our UDP sockets.
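As a rough illustration of the per-resolver cap, the bookkeeping might
look something like this (all names are hypothetical and not taken from
the actual `dns_over_tcp` integration):

```rust
// Hypothetical sketch of capping concurrent DNS-over-TCP connections per
// upstream resolver; none of these names come from the actual crate.
use std::collections::HashMap;
use std::net::SocketAddr;

const MAX_CONCURRENT_TCP_DNS_CONNECTIONS: usize = 10;

#[derive(Default)]
struct TcpDnsConnections {
    per_resolver: HashMap<SocketAddr, usize>,
}

impl TcpDnsConnections {
    /// Returns `true` if a new connection to `resolver` may be opened.
    fn try_acquire(&mut self, resolver: SocketAddr) -> bool {
        let count = self.per_resolver.entry(resolver).or_insert(0);

        if *count >= MAX_CONCURRENT_TCP_DNS_CONNECTIONS {
            return false; // At capacity; reject the new connection.
        }

        *count += 1;
        true
    }

    /// Called once the client closes its TCP connection.
    fn release(&mut self, resolver: SocketAddr) {
        if let Some(count) = self.per_resolver.get_mut(&resolver) {
            *count = count.saturating_sub(1);
        }
    }
}
```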
Resolves: #6140.
[0]: https://www.ietf.org/rfc/rfc9210.html#section-3-5
Since we've added these tests, `connlib`'s test coverage has increased
significantly, to the point where we no longer need all of them. In
particular, pretty much everything related to relays no longer needs to
be tested using Docker.
These integration tests are sometimes flaky due to Docker not starting
or images failing to pull, so having fewer of them improves CI
reliability. Also, GitHub will only execute so many jobs in parallel, so
having fewer jobs helps there too.
Resolves: #6451.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Refs #6927
This PR creates a GTK+ event loop, a blank window, and the tray menu. It
connects to the IPC service and you can sign in and everything, but the
About window, Settings window, and Welcome window aren't implemented (a
minimal sketch of the event-loop setup is below).
We build a deb package in CI, but it isn't pushed to the draft releases
in CD yet.

Pros over Iced:
- More mature
- Easy integration with `tray-icon`
- Small binaries (< 1 MB for this example)
Cons:
- GTK 3.x is abandoned as of March. GTK 4 isn't packaged for Ubuntu
20.04.
- Widgets might be hard to use
- Hard to set up on Windows, so we're only using this for Linux for now
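For reference, a minimal sketch of the GTK 3 event-loop setup using the
`gtk` (gtk-rs) bindings; the application ID and window contents are
placeholders, not the actual code in this PR:

```rust
// Minimal GTK 3 sketch: event loop plus a blank window. The real client
// also builds the tray menu and connects to the IPC service from here.
use gtk::prelude::*;

fn main() {
    let app = gtk::Application::builder()
        .application_id("dev.firezone.example") // placeholder ID
        .build();

    app.connect_activate(|app| {
        let window = gtk::ApplicationWindow::new(app);
        window.set_title("Firezone");
        window.set_default_size(400, 300);
        window.show_all();
    });

    // Runs the GTK main loop until the last window is closed.
    app.run();
}
```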
---------
Signed-off-by: Reactor Scram <ReactorScram@users.noreply.github.com>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
`SENTRY_ENVIRONMENT` is only read at run time, not at build time, and we
override the environment in our code anyway, so this env var was doing
nothing.
Our tests are pretty fast now, meaning we can afford to run more
permutations. This makes it less likely to encounter flakes in the
"coverage" tests, where we grep for certain log lines to ensure that the
tests hit certain code paths.
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Closes #6854
- Sets the release version from the GUI Client / Headless Client version
instead of the `firezone-telemetry` version
- Sets the environment to "production" and "staging" for well-known API
URLs, and "self-hosted" for others, since environments in Sentry can't
have slashes in them
- Sets the API URL as a tag
- Sets the release to `unit test` when unit-testing `firezone-telemetry`
itself, since it has no good version number
<img width="398" alt="image"
src="https://github.com/user-attachments/assets/86f71193-2511-45c1-8304-413db8e5ef90">
Refs #6138
Sentry is always enabled for now. In the near future we'll make it
opt-out per device and opt-in per org (see #6138 for details).
- Replaces the `crash_handling` module
- Catches panics in the GUI process, tunnel daemon, and Headless Client
- Adds a couple of "breadcrumbs" to play with that feature
- User ID is not set yet
- Environment is set to the API URL, e.g. `wss://api.firezone.dev`
- Reports panics from the connlib async task
- Release should be automatically pulled from the Cargo version, which
we set in the version Makefile (see the sketch below)
Example screenshot of sentry.io with a caught panic:
<img width="861" alt="image"
src="https://github.com/user-attachments/assets/c5188d86-10d0-4d94-b503-3fba51a21a90">
In the past, we struggled a lot with the reproducibility of
`tunnel_test` failures because our input state and transition strategies
were not deterministic. In the end, we found out that it was due to the
iteration order of `HashMap`s.
To make sure this doesn't regress, we added a check to CI at the time
that compares the debug output of all regression seeds against a 2nd run
and ensures they are the same. That is overall a bit wonky.
We can do better by simply sampling a value from the strategy twice from
a test runner with the same seed. If the strategy is deterministic,
those need to be the same (see the sketch below). We still rely on the
debug output being identical because:
a. Deriving `PartialEq` on everything is somewhat cumbersome
b. We actually care about the iteration order, which a fancy `PartialEq`
implementation might ignore
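A sketch of what such a determinism check could look like with
`proptest` (the strategy and names are stand-ins, not the actual
`tunnel_test` code):

```rust
// Sample the same strategy from two identically-seeded runners and
// compare the debug output of the generated values.
use proptest::prelude::*;
use proptest::strategy::ValueTree;
use proptest::test_runner::{Config, RngAlgorithm, TestRng, TestRunner};

fn sample_with_seed<S: Strategy>(strategy: &S, seed: &[u8; 32]) -> String {
    let rng = TestRng::from_seed(RngAlgorithm::ChaCha, seed);
    let mut runner = TestRunner::new_with_rng(Config::default(), rng);
    let value = strategy.new_tree(&mut runner).unwrap().current();

    format!("{value:?}")
}

#[test]
fn strategy_is_deterministic() {
    let seed = [42u8; 32];
    let strategy = any::<Vec<u8>>(); // stand-in for the actual state strategy

    assert_eq!(
        sample_with_seed(&strategy, &seed),
        sample_with_seed(&strategy, &seed)
    );
}
```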
This happens because the smoke test is stubbed out for release builds,
so any `use` statements that are only used in the smoke tests will cause
a warning in `--release` builds, including when we make release bundles.
Currently, we have two structs for representing IP packets: `IpPacket`
and `MutableIpPacket`. As the name suggests, they mostly differ in
mutability. This design was originally inspired by the `pnet_packet`
crate which we based our `IpPacket` on. With subsequent iterations, we
added more and more functionality onto our `IpPacket`, like NAT64 &
NAT46 translation. As a result of that, the `MutableIpPacket` is no
longer directly based on `pnet_packet` but instead just keeps an
internal buffer.
This duplication can be resolved by merging the two structs into a
single `IpPacket`. We do this by first replacing all usages of
`IpPacket` with `MutableIpPacket`, then deleting `IpPacket` and renaming
`MutableIpPacket` to `IpPacket`. The final design now has different
`self`-receivers: some functions take `&self`, some `&mut self` and some
consume the packet using `self`.
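Sketched out, the merged API shape looks roughly like this (method
names are illustrative, not the exact `IpPacket` interface):

```rust
// Illustrative sketch of the different `self`-receivers on the merged type.
pub struct IpPacket {
    buf: Vec<u8>, // single internal buffer owning the packet bytes
}

impl IpPacket {
    /// Read-only access, e.g. for routing decisions.
    pub fn destination(&self) -> std::net::IpAddr {
        unimplemented!("parse the destination address out of `self.buf`")
    }

    /// In-place mutation, e.g. rewriting addresses as part of NAT.
    pub fn set_source(&mut self, _src: std::net::IpAddr) {
        unimplemented!("write into `self.buf` and fix up checksums")
    }

    /// Consuming transformation, e.g. NAT64-translating into a new packet.
    pub fn translate_to_ipv4(self) -> Option<IpPacket> {
        unimplemented!("rebuild the packet from `self.buf`")
    }
}
```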
This results in more ergonomic usage of `IpPacket` across the codebase
and deletes a fair bit of code. It also takes us one step closer towards
using `etherparse` for all our IP packet interaction needs. Lastly, I am
currently exploring a performance-optimisation idea that stack-allocates
all IP packets, and for that, the current split between `IpPacket` and
`MutableIpPacket` does not really work.
Related: #6366.
This moves about two-thirds of the code from `firezone-gui-client` to
`firezone-gui-client-common`.
I tested it on aarch64 Windows and cycled through sign-in and sign-out
and closing and re-opening the GUI process while the IPC service stays
running. IPC and updates each get their own MPSC channel in this change,
so I wanted to be sure it didn't break.
---------
Signed-off-by: Reactor Scram <ReactorScram@users.noreply.github.com>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
This PR introduces the `etherparse` dependency for parsing and
generating IP packets.
Using `etherparse`, we can implement NAT46 & NAT64 for the gateway more
elegantly because it allows us to parse the IP and protocol headers into
a static and much richer representation. The conversion to the IPv4/IPv6
equivalent is then just a question of transforming one data structure
into another and writing it to the correct place in the buffer.
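For example, building and then re-parsing an IPv4/UDP packet with
`etherparse` looks roughly like this (addresses and ports are
placeholders):

```rust
// Build an IPv4/UDP packet with `PacketBuilder` and parse the IP header
// back into a typed representation with `Ipv4HeaderSlice`.
use etherparse::{Ipv4HeaderSlice, PacketBuilder};

fn demo() -> Result<(), Box<dyn std::error::Error>> {
    let builder = PacketBuilder::ipv4([10, 0, 0, 1], [10, 0, 0, 2], 64).udp(5353, 53);
    let payload = b"hello";

    let mut packet = Vec::with_capacity(builder.size(payload.len()));
    builder.write(&mut packet, payload)?;

    let ip = Ipv4HeaderSlice::from_slice(&packet)?;
    assert_eq!(ip.destination(), [10, 0, 0, 2]);

    Ok(())
}
```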
We extract this functionality into dedicated `nat64` and `nat46`
modules.
Furthermore, we implement the various functions in `ip_packet::make`
using `etherparse` too. Following that, we also overhaul the NAT
translation tests that we have in `ip_packet::proptests`. Those now use
the more low-level `consume_to_ipX` APIs which makes the tests more
ergonomic to write.
In the future, we should upstream `Ipv4HeaderSliceMut` and
`Ipv6HeaderSliceMut` to `etherparse`.
Moving all of this functionality to `etherparse` will make it easier to
write tests that involve more IP packets as well as customise the
behaviour of our NAT.
Related: #5614.
Related: #6371.
Related: #6353.
Bumps [gradle/actions](https://github.com/gradle/actions) from 3 to 4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/gradle/actions/releases">gradle/actions's
releases</a>.</em></p>
<blockquote>
<h2>v4.0.0</h2>
<p>Final release of <code>v4.0.0</code> of the
<code>setup-gradle</code>, <code>dependency-submission</code> and
<code>wrapper-validation</code> actions provided under
<code>gradle/actions</code>.
This release is available under the <code>v4</code> tag.</p>
<h2>Major changes from the <code>v3</code> release</h2>
<h3>The <code>arguments</code> parameter has been removed</h3>
<p>Using the action to execute Gradle via the <code>arguments
</code>parameter was deprecated in <code>v3</code> and this parameter
has been removed.
<a
href="https://github.com/gradle/actions/blob/v4.0.0-rc.1/docs/deprecation-upgrade-guide.md#using-the-action-to-execute-gradle-via-the-arguments-parameter-is-deprecated">See
here for more details</a>.</p>
<h3>Cache cleanup enabled by default</h3>
<p>After a number of fixes and improvements, this release enables <a
href="https://github.com/gradle/actions/blob/v4.0.0-rc.1/docs/setup-gradle.md#configuring-cache-cleanup">cache-cleanup</a>
by default for all Jobs using the <code>setup-gradle</code> and
<code>dependency-submission</code> actions.</p>
<p>Improvements and bugfixes related cache cleanup:</p>
<ul>
<li>By default, cache cleanup is not run if any Gradle build fails (<a
href="https://redirect.github.com/gradle/actions/issues/71">#71</a>)</li>
<li>Cache cleanup is not run after configuration-cache reuse (<a
href="https://redirect.github.com/gradle/actions/issues/19">#19</a>)</li>
</ul>
<p>This feature should help to minimize the size of entries written to
the GitHub Actions cache, speeding up builds and reducing cache
usage.</p>
<h3>Wrapper validation enabled by default</h3>
<p>In <code>v3</code>, the <code>setup-gradle</code> action was enhanced
to support Gradle wrapper validation, removing the need to use a
separate workflow
file with the <code>gradle/actions/wrapper-validation</code> action.</p>
<p>With this release, wrapper validation has been significantly
improved, and is now enabled by default (<a
href="https://redirect.github.com/gradle/actions/issues/12">#12</a>):</p>
<ul>
<li>The <code>allow-snapshot-wrappers</code> makes it possible to
validate snapshot wrapper jars using <code>setup-gradle</code>.</li>
<li>Checksums for <a
href="https://services.gradle.org/distributions-snapshots/">nightly and
snapshot Gradle versions</a> are now validated (<a
href="https://redirect.github.com/gradle/actions/issues/281">#281</a>).</li>
<li>Valid wrapper checksums are cached in Gradle User Home, reducing the
need to retrieve checksum values remotely (<a
href="https://redirect.github.com/gradle/actions/issues/172">#172</a>).</li>
<li>Reduce network calls in <code>wrapper-validation</code> for new
Gradle versions: By only fetching wrapper checksums for Gradle versions
that were not known when this action was released, this release reduces
the likelihood that a network failure could cause failure in wrapper
validation (<a
href="https://redirect.github.com/gradle/actions/issues/171">#171</a>)</li>
<li>Improved error message when <code>wrapper-validation</code> finds no
wrapper jars (<a
href="https://redirect.github.com/gradle/actions/issues/284">#284</a>)</li>
</ul>
<p>Wrapper validation is important for supply-chain integrity. Enabling
this feature by default will increase the coverage of wrapper
validation on projects using GitHub Actions.</p>
<h3>New input parameters for Dependency Graph generation</h3>
<p>Some dependency-graph inputs that could previously only be configured
via environment variables now have dedicated action inputs:</p>
<ul>
<li><code>dependency-graph-report-dir</code>: sets the location where
dependency-graph reports will be generated</li>
<li><code>dependency-graph-exclude-projects</code> and
<code>dependency-graph-include-projects</code>: <a
href="https://github.com/gradle/actions/blob/v4.0.0-rc.1/docs/dependency-submission.md#selecting-gradle-projects-that-will-contribute-to-the-dependency-graph">select
which Gradle projects will contribute to the generated dependency
graph</a>.</li>
<li><code>dependency-graph-exclude-configurations</code> and
<code>dependency-graph-include-configurations</code>: <a
href="https://github.com/gradle/actions/blob/v4.0.0-rc.1/docs/dependency-submission.md#selecting-gradle-configurations-that-will-contribute-to-the-dependency-graph">select
which Gradle configurations will contribute to the generated dependency
graph</a>.</li>
</ul>
<h3>Other improvements</h3>
<ul>
<li>In Job summary, the action now provides an explanation when cache is
set to <code>read-only</code> or <code>disabled</code> (<a
href="https://redirect.github.com/gradle/actions/issues/255">#255</a>)</li>
<li>When <code>setup-gradle</code> requests a specific Gradle version,
the action will no longer download and install that version if it is
already available on the <code>PATH</code> of the runner (<a
href="https://redirect.github.com/gradle/actions/issues/270">#270</a>)</li>
<li>To attempt to speed up builds, the <code>setup-gradle</code> and
<code>dependency-submission</code> actions now attempt to use the
<code>D:</code> drive for Gradle User Home if it is available (<a
href="https://redirect.github.com/gradle/actions/issues/290">#290</a>)</li>
</ul>
<h2>Deprecations and breaking changes</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="16bf8bc8fe"><code>16bf8bc</code></a>
Rework docs for Develocity support</li>
<li><a
href="faf4eeacd5"><code>faf4eea</code></a>
[bot] Update dist directory</li>
<li><a
href="4b7cc6e174"><code>4b7cc6e</code></a>
Differentiate Gradle 8.1 from 8.10 when checking version (<a
href="https://redirect.github.com/gradle/actions/issues/358">#358</a>)</li>
<li><a
href="0873530e60"><code>0873530</code></a>
Increase Gradle version coverage for init-scripts</li>
<li><a
href="f67327f0c8"><code>f67327f</code></a>
[bot] Update dist directory</li>
<li><a
href="d32a10b3ae"><code>d32a10b</code></a>
Dependency updates (<a
href="https://redirect.github.com/gradle/actions/issues/356">#356</a>)</li>
<li><a
href="e598a32529"><code>e598a32</code></a>
Quote version 8.10 in integ test</li>
<li><a
href="d6c8cf816c"><code>d6c8cf8</code></a>
Bump unzip-stream from 0.3.1 to 0.3.4 in /sources</li>
<li><a
href="79ea5b8f3e"><code>79ea5b8</code></a>
Bump org.junit.jupiter:junit-jupiter</li>
<li><a
href="d77a030aaf"><code>d77a030</code></a>
Bump com.google.guava:guava in /.github/workflow-samples/kotlin-dsl</li>
<li>Additional commits viewable in <a
href="https://github.com/gradle/actions/compare/v3...v4">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This publishes the 1.3.0 clients and gateways so that Internet Resources
will work.
The feature is still disabled for the Stripe plans until we publish the
launch post. Select customers have the feature enabled.
Closes #2667
Closes #5811
<img width="206" alt="image"
src="https://github.com/user-attachments/assets/a2c46bb6-c76a-49ca-a933-4363597d4029">
- Waits a random amount of time, up to 24 hours, before the first
network check, to avoid the thundering herd problem (sketched below)
- Polls every 24 hours (86,400 seconds) after that
- Saves the network response to disk so we ~~can show "Update ready"
immediately at startup~~ won't notify twice about the same version
- Not clear whether suspending the computer suspends the timer - "it is
also not specified whether system suspends count as elapsed time or not.
The behavior varies across platforms and Rust versions."
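The polling schedule, sketched with `tokio` and `rand` (illustrative
only, not the actual client code):

```rust
// Random initial delay of up to 24 hours, then a fixed 24-hour poll interval.
use rand::Rng;
use std::time::Duration;

const DAY: Duration = Duration::from_secs(86_400);

async fn update_check_loop() {
    // Spread the first check over a full day to avoid a thundering herd.
    let jitter = rand::thread_rng().gen_range(Duration::ZERO..DAY);
    tokio::time::sleep(jitter).await;

    let mut interval = tokio::time::interval(DAY);

    loop {
        interval.tick().await;
        // check_for_update().await; // hypothetical helper
    }
}
```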
---------
Signed-off-by: Reactor Scram <ReactorScram@users.noreply.github.com>
- No known issues from the knowledge base were fixed
- I confirmed on the Windows laptop that the fix for #6469 is in this
MSI.
- The changelog looks good in the Vercel preview
---------
Signed-off-by: Reactor Scram <ReactorScram@users.noreply.github.com>
This will always build images we can use for last-minute compatibility
tests, even if the merge group was bypassed.
---------
Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
Currently, the gateway requires a strict ordering of first receiving a
`request_connection` message, followed by multiple `allow_access`
messages. Additionally, access can be granted as part of the initial
`request_connection` message too.
This isn't an ideal design. Setting up a new connection is infallible;
all we need to do is send our ICE credentials back to the client.
However, untangling that will require a bit more effort.
Starting with #6335, following this strict order on the client is more
difficult. Whilst we can send the messages in order, it is harder to
maintain those ordering guarantees across all our systems.
To avoid this, we change the gateway to perform an upsert of its local
ACLs for a client. In case an `allow_access` message somehow reaches the
gateway earlier, we can simply create the `Peer` right away and only set
up the actual connection later.
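Conceptually, the upsert boils down to something like this (types and
names are illustrative, not the actual gateway code):

```rust
// Upsert the client's ACLs: create the `Peer` if it doesn't exist yet and
// record the allowed resource, regardless of message ordering.
use std::collections::{HashMap, HashSet};

type ClientId = u64;
type ResourceId = u64;

struct Peer {
    allowed_resources: HashSet<ResourceId>,
}

struct Gateway {
    peers: HashMap<ClientId, Peer>,
}

impl Gateway {
    fn allow_access(&mut self, client: ClientId, resource: ResourceId) {
        self.peers
            .entry(client)
            .or_insert_with(|| Peer {
                allowed_resources: HashSet::new(),
            })
            .allowed_resources
            .insert(resource);
    }
}
```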
---------
Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>