`sentry-cli debug-files upload` offers no option to exclude certain
files or directories when recursively searching the given path. Thus, we
need to remove this staging directory so that `sentry-cli` doesn't walk
into it and inevitably error out when it hits a path it doesn't have
access to.
The application split itself doesn't really warrant having two different
Sentry projects.
1. The location of the panic / log already tells us which component is
failing.
2. Both projects are built with Rust, so the same "platform" setting
applies.
3. Reducing the number of Sentry projects makes things easier to manage.
4. The binaries are started as independent processes, so the two Sentry
contexts don't interfere.
What we should keep in mind is that one instance of an application will
now log into Sentry twice using the same DSN. I _think_ this means that
the number of sessions listed in Sentry will be double the number of
actual client runs. The same is already true for the Apple client,
though, and once we integrate Sentry for Android the same will apply
there, so relative to each other those numbers still make sense.
- Refactor the way we build download links on the Changelog page to make
them more flexible
- Add Android download redirects
- Update user-facing docs to mention new download options
Bumps
[hashicorp/tfc-workflows-github](https://github.com/hashicorp/tfc-workflows-github)
from 1.3.1 to 1.3.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/hashicorp/tfc-workflows-github/releases">hashicorp/tfc-workflows-github's
releases</a>.</em></p>
<blockquote>
<h2>v1.3.2</h2>
<ul>
<li>Bug fixes and enhancements from <a
href="https://github.com/hashicorp/tfc-workflows-tooling/releases/tag/v1.3.2">tfc-workflows-tooling@v1.3.2</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/hashicorp/tfc-workflows-github/blob/main/CHANGELOG.md">hashicorp/tfc-workflows-github's
changelog</a>.</em></p>
<blockquote>
<h1>v1.3.2</h1>
<ul>
<li>Bug fixes and enhancements from <a
href="https://github.com/hashicorp/tfc-workflows-tooling/releases/tag/v1.3.2">tfc-workflows-tooling@v1.3.2</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="8e08d1ba95"><code>8e08d1b</code></a>
Prepare v1.3.2 release (<a
href="https://redirect.github.com/hashicorp/tfc-workflows-github/issues/2981">#2981</a>)</li>
<li><a
href="2a0a556cba"><code>2a0a556</code></a>
[COMPLIANCE] Update MPL-2.0 LICENSE (<a
href="https://redirect.github.com/hashicorp/tfc-workflows-github/issues/2980">#2980</a>)</li>
<li><a
href="b15578fa52"><code>b15578f</code></a>
Merge pull request <a
href="https://redirect.github.com/hashicorp/tfc-workflows-github/issues/2976">#2976</a>
from salilsub/main</li>
<li><a
href="030a2307e5"><code>030a230</code></a>
Adding GITHUB_TOKEN link to README</li>
<li><a
href="833d60e689"><code>833d60e</code></a>
Adding information about setting the GITHUB_TOKEN permissions</li>
<li>See full diff in <a
href="https://github.com/hashicorp/tfc-workflows-github/compare/v1.3.1...v1.3.2">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Unfortunately Apple's API doesn't expect to be hit this frequently and
also doesn't respond with obvious errors when we ask too much of it.
Because of this, we move the App Store Connect upload back to manual
trigger only, and switch the standalone upload to GitHub releases to
manual trigger as well, because it needs to hit Apple's notary service
API.
- Attaching the standalone client needs to happen on `main` runs, like
the other clients
- GitHub can't seem to find the release. I suspect the
`GITHUB_REPOSITORY` var is unneeded.
The CI swift workflow needs to be updated to accommodate the macOS
standalone build. This required a decent amount of refactoring to make
the Apple build process more maintainable.
Unfortunately, this PR ended up being a giant ball of yarn where pulling
on one thread tended to unravel things elsewhere, since building the
Apple artifacts involves multiple interconnected systems. Combined with
the slow iteration of running in CI, I wasn't able to split this PR into
easier-to-digest commits, so I've annotated the PR as much as I can to
explain what's changed.
The good news is that Apple release artifacts can now be easily built
from a developer's machine simply by running
`scripts/build/macos-standalone.sh`. The only prerequisite is having the
proper provisioning profiles and signing certs installed.
Since this PR is so big already, I'll save the swift/apple/README.md
updates for another PR.
Standalone distribution requires a different signing identity
(certificate), a different set of provisioning profiles, and (annoyingly)
the `-systemextension` suffix for our network extension capabilities.
This PR prepares the Xcode environment for building a Standalone app in
CI that will be notarized by matching certificates and provisioning
profiles in our Apple Developer account.
Currently, the Gateway logs at ERROR level all errors that occur when
the event loop exits. This creates Sentry alerts for things like
"Unauthorized" errors or "404 Not found".
That isn't useful to us. To mitigate this, we polish the code a bit to
only log an ERROR when we actually fail to set something up during
startup (like the TUN device). In all other cases, we now log a more
user-friendly message on INFO but still exit with the appropriate exit
code (0 on CTRL+C, 1 on any other error).
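As a rough sketch of what this looks like (the `setup_tun_device` /
`run_event_loop` split and all names below are invented for
illustration, not the actual Gateway code):

```rust
use anyhow::Result;
use std::process::ExitCode;

fn main() -> ExitCode {
    // Logging / Sentry initialisation elided.

    // Failing during startup (e.g. creating the TUN device) is a genuine
    // error and stays on ERROR level.
    let tun_device = match setup_tun_device() {
        Ok(dev) => dev,
        Err(e) => {
            tracing::error!("Failed to set up TUN device: {:#}", e);
            return ExitCode::FAILURE;
        }
    };

    // Everything after startup exits with a user-friendly INFO message and
    // the appropriate exit code instead of an ERROR-level Sentry alert.
    match run_event_loop(tun_device) {
        Ok(()) => {
            tracing::info!("Received CTRL+C, shutting down"); // exit code 0
            ExitCode::SUCCESS
        }
        Err(e) => {
            tracing::info!("Gateway exited: {:#}", e); // e.g. "Unauthorized"; exit code 1
            ExitCode::FAILURE
        }
    }
}

// Placeholders so the sketch is self-contained.
struct TunDevice;

fn setup_tun_device() -> Result<TunDevice> {
    Ok(TunDevice)
}

fn run_event_loop(_dev: TunDevice) -> Result<()> {
    Ok(())
}
```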
In order for Sentry to parse our releases as semver, they need to be in
the form of `package@version` [0]. Without this, the feature of "Mark
this issue as resolved in the _next_ version" doesn't work properly
because Sentry then compares versions by when it first saw them instead
of parsing the semver string itself. We test versions prior to releasing
them, meaning Sentry learns about a 1.4.0 version before it is actually
released. This causes false-positive "regressions" for issues that are
in fact fixed in a later (as per semver) release.
This creates some redundancy with the different DSNs that we are already
using. I think it would make sense to consider merging the two projects
we have for the GUI client for example. That is really just one project
that happens to run as two binaries.
For all other projects, I think the separation still makes sense because
we e.g. may add Sentry to the "host" applications of Android and
macOS/iOS as well. For those, we would reuse the DSN and thus funnel the
issues into the same Sentry project.
As per Sentry's docs, releases are organisation-wide and therefore need
a package identifier to be grouped correctly.
[0]:
https://docs.sentry.io/platforms/javascript/configuration/releases/#bind-the-version
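As an illustration of what this means for the Rust SDK setup (the
`init_sentry` helper and the "gateway" package name below are made up
for this example, not our actual code):

```rust
/// Hypothetical sketch: embed the package name in the release string so that
/// Sentry parses it as semver, e.g. "gateway@1.4.0" instead of just "1.4.0".
fn init_sentry(dsn: &str) -> sentry::ClientInitGuard {
    sentry::init((
        dsn,
        sentry::ClientOptions {
            // `CARGO_PKG_VERSION` is substituted at compile time.
            release: Some(concat!("gateway@", env!("CARGO_PKG_VERSION")).into()),
            ..Default::default()
        },
    ))
}
```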
In order to release the new control protocol to users, we need to bump
the versions of the clients to 1.4.0. The portal has a version gate to
only select gateways with version >= 1.4.0 for clients >= 1.4.0. Thus,
bumping these versions can only happen once testing has completed and
the gateway has actually been released as 1.4.0.
Co-authored-by: Jamil Bou Kheir <jamilbk@users.noreply.github.com>
## Context
The Gateway implements a stateful NAT that translates the destination IP
and source protocol of every packet that targets a DNS resource IP. This
is necessary because the IPs for DNS resources are generated on the
client without actually performing a DNS lookup; instead, the client
always generates 4 IPv4 and 4 IPv6 addresses. On the Gateway, these IPs
are then assigned in a round-robin fashion to the actual IPs that the
domain resolves to, necessitating a NAT64/46 translation in case a
domain only resolves to IPs of one family.
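A simplified sketch of that round-robin assignment (illustrative only,
not the Gateway's actual data structures):

```rust
use std::net::IpAddr;

/// Assign the client-generated proxy IPs to the IPs the domain actually
/// resolved to, cycling through the resolved IPs. If the domain only resolved
/// to one address family, some proxy IPs end up mapped to an IP of the other
/// family, which is what makes NAT64/46 necessary.
fn assign_round_robin(proxy_ips: &[IpAddr], resolved_ips: &[IpAddr]) -> Vec<(IpAddr, IpAddr)> {
    proxy_ips
        .iter()
        .copied()
        .zip(resolved_ips.iter().copied().cycle())
        .collect()
}
```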
A domain may resolve to a set of IPs, but not all of these IPs may be
routable. Whilst arguably poor practice on the domain administrator's
part, routing problems can occur for all kinds of reasons and are well
handled on the wider Internet.
When an IP packet cannot be routed further, the current routing node
generates an ICMP error describing the routing failure and sends it back
to the original sender. ICMP is a layer 4 protocol itself, same as TCP
and UDP. As such, sending out a UDP packet may result in receiving an
ICMP response. In order to allow the sender to learn which packet failed
to route, the ICMP error embeds parts of the original packet in its
payload [0] [1].
The Gateway's NAT table uses parts of the layer 4 protocol as part of
its key; the UDP and TCP source port and the ICMP echo request
identifier (further referred to as "source protocol"). An ICMP error
message doesn't have any of these, meaning the lookup in the NAT table
currently fails and the ICMP error is silently dropped.
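To make that concrete, here is an illustrative sketch of such a NAT key
(made-up types, not the actual Gateway code). An ICMP error message
carries none of the `SourceProtocol` variants below:

```rust
use std::net::IpAddr;

/// The layer-4 part of the key: UDP/TCP source port or ICMP echo identifier.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum SourceProtocol {
    Udp { src_port: u16 },
    Tcp { src_port: u16 },
    IcmpEcho { identifier: u16 },
}

/// Key into the Gateway's NAT table (illustrative).
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct NatKey {
    /// Client-assigned proxy IP of the DNS resource.
    dst: IpAddr,
    /// Layer-4 "source protocol" of the packet.
    proto: SourceProtocol,
}
```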
A lot of software implements a happy-eyeballs approach and probes for
IPv6 and IPv4 connectivity simultaneously. The absence of the ICMP
errors confuses that algorithm as it detects the packet loss and starts
retransmitting instead of giving up.
## Solution
Upon receiving an ICMP error on the Gateway, we now extract the
partially embedded packet in the ICMP error payload. We use the
destination IP and source protocol of _that_ packet for the lookup in
the NAT table. This gives us back the original (client-assigned)
destination IP and source protocol. In order for the Gateway's NAT to be
transparent, we need to patch the packet embedded in the ICMP error to
use the original destination and source protocol. We also have to
account for the fact that the original packet may have been translated
with NAT64/46 and translate it back. Finally, we generate an ICMP error
with the appropriate code and embed the patched packet in its payload.
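In rough pseudo-Rust, the flow looks something like this (all types and
helpers are made-up stand-ins for the actual implementation):

```rust
// Made-up stand-ins so the sketch is self-contained.
struct IpPacket;
struct NatTable;
struct NatEntry;

impl IpPacket {
    /// The ICMP error payload embeds (a prefix of) the packet that failed to route.
    fn icmp_error_payload(&self) -> Option<IpPacket> {
        None
    }
}

impl NatTable {
    /// Look up the NAT entry by the embedded packet's destination IP and
    /// source protocol (UDP/TCP source port or ICMP echo identifier).
    fn lookup_by_inner(&self, _inner: &IpPacket) -> Option<NatEntry> {
        None
    }
}

/// Handle an ICMP error arriving at the Gateway from the upstream network.
fn handle_icmp_error(error: IpPacket, nat: &NatTable) -> Option<IpPacket> {
    // 1. Extract the partially embedded packet from the ICMP error payload.
    let mut inner = error.icmp_error_payload()?;

    // 2. Use *that* packet's destination IP and source protocol for the NAT
    //    lookup; this yields the client-assigned destination and source
    //    protocol of the original packet.
    let entry = nat.lookup_by_inner(&inner)?;

    // 3. Patch the embedded packet so the NAT stays transparent: restore the
    //    original destination / source protocol and undo any NAT64/46
    //    translation that was applied on the way out.
    patch_embedded_packet(&mut inner, &entry);

    // 4. Generate an ICMP error with the appropriate type/code, embed the
    //    patched packet in its payload and forward it to the client.
    Some(build_icmp_error(inner, &entry))
}

fn patch_embedded_packet(_inner: &mut IpPacket, _entry: &NatEntry) {}

fn build_icmp_error(inner: IpPacket, _entry: &NatEntry) -> IpPacket {
    inner
}
```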
## Test implementation
To test that this works for all kinds of combinations, we extend
`tunnel_test` to sample a list of unreachable IPs from all IPs sampled
for DNS resources. Upon receiving a packet for one of these IPs, the
Gateway will send an ICMP error back instead of invoking its regular
echo reply logic. On the client-side, upon receiving an ICMP error, we
extract the originally failed packet from the body and treat it as a
successful response.
This may seem a bit hacky at first but is actually how operating systems
would treat ICMP errors as well. For example, a `TcpSocket::connect`
call (triggering a TCP SYN packet) may fail with an IO error if we
receive an ICMP error packet. Thus, in a way, the original packet got
answered, just not with what we expected.
In addition, by treating these ICMP errors as responses to the original
packet, we automatically perform other assertions on them, like ensuring
that they come from the right IP address, that there are no unexpected
packets, etc.
## Test alternatives
It is tricky to solve this in other ways in the test suite because, at
the time of generating a packet for a DNS resource, we don't know the
actual IP that is being targeted by a certain proxy IP unless we were to
reimplement the round-robin algorithm employed by the Gateway. To
"test" the transparency of the NAT, we'd like to avoid knowing about
these implementation details in the test.
## Future work
In this PR, we currently only deal with "Destination Unreachable" ICMP
errors. There are other ICMP messages such as ICMPv6's `PacketTooBig` or
`ParameterProblem`. We should eventually handle these as well. They are
being deferred because translating those between the different IP
versions is only partially implemented and would thus require more work.
The most pressing need is to translate destination unreachable errors to
enable happy-eyeballs algorithms to work correctly.
Resolves: #5614.
Resolves: #6371.
[0]: https://www.rfc-editor.org/rfc/rfc792
[1]: https://www.rfc-editor.org/rfc/rfc4443#section-3.1
One of Rust's promises is "if it compiles, it works". However, there are
certain situations in which this isn't true. In particular, when using
dynamic typing patterns where trait objects are downcast to concrete
types, having two versions of the same dependency can silently break
things.
This happened in #7379 where I forgot to patch a certain Sentry
dependency. A similar problem exists with our `tracing-stackdriver`
dependency (see #7241).
Lastly, duplicate dependencies increase the compile-times of a project,
so we should aim for having as few duplicate versions of a particular
dependency as possible in our dependency graph.
This PR introduces `cargo deny`, a linter for Rust dependencies. In
addition to linting for duplicate dependencies, it also enforces that
all dependencies are compatible with an allow-list of licenses and it
warns when a dependency is referred to from multiple crates without
introducing a workspace dependency. Thanks to existing tooling
(https://github.com/mainmatter/cargo-autoinherit), transitioning all
dependencies to workspace dependencies was quite easy.
Resolves: #7241.
In #7389, we started tagging the built images with the branch name in
addition to other tags such as `latest` and the version number. That
wasn't enough, unfortunately, because we also need to tag the merged
manifest image that bundles the different architectures together, as
only that one actually gets pushed to the registry.
Our `docker-compose` file references the images using the `main` tag.
However, doing a `docker compose pull` fails because the `main` tag
doesn't exist for the client, gateway and relay images.
In order for this tag to exist, we need to instruct the
`docker/metadata-action` to generate a tag for the current branch.
Explicitly creating the Sentry release allows us to associate the
commits since the last release with the new one. This might help us to
identify potential sources of regressions. For the current releases,
I've set them manually to ensure that this automation has something to
pick up on for the next release.
The releases will already exist prior to this because they are
automatically created when a client / gateway first logs in with a
certain version.
What this does is mark them as "finalized" and set the commit range
accordingly.
Resolves: #7358.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
In order to display better stacktraces when Firezone crashes, Sentry
needs debug symbols for our binaries. Debug symbols on Sentry are
retained for 90 days after they were last used [0]. We can thus
simply upload them every time we build a binary on `main`.
For the moment, this only uploads them for the GUI client. Debug symbols
for the Android and Apple clients will be done in separate PRs.
[0]:
https://docs.sentry.io/platforms/native/data-management/debug-files/#retention-policy
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
This ensures that we run prettier across all supported filetypes to
check for any formatting / style inconsistencies. Previously, it was
only run for files in the website/ directory using a deprecated
pre-commit plugin.
The benefit of keeping this in our pre-commit config is that devs can
optionally run these checks locally with `pre-commit run --config
.github/pre-commit-config.yaml`.
---------
Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>