Bumps the npm_and_yarn group in /rust/gui-client with 1 update:
[braces](https://github.com/micromatch/braces).
Updates `braces` from 3.0.2 to 3.0.3
<details>
<summary>Commits</summary>
<ul>
<li><a
href="74b2db2938"><code>74b2db2</code></a>
3.0.3</li>
<li><a
href="88f1429a0f"><code>88f1429</code></a>
update eslint. lint, fix unit tests.</li>
<li><a
href="415d660c30"><code>415d660</code></a>
Snyk js braces 6838727 (<a
href="https://redirect.github.com/micromatch/braces/issues/40">#40</a>)</li>
<li><a
href="190510f79d"><code>190510f</code></a>
fix tests, skip 1 test in test/braces.expand</li>
<li><a
href="716eb9f12d"><code>716eb9f</code></a>
readme bump</li>
<li><a
href="a5851e57f4"><code>a5851e5</code></a>
Merge pull request <a
href="https://redirect.github.com/micromatch/braces/issues/37">#37</a>
from coderaiser/fix/vulnerability</li>
<li><a
href="2092bd1fb1"><code>2092bd1</code></a>
feature: braces: add maxSymbols (<a
href="https://github.com/micromatch/braces/issues/">https://github.com/micromatch/braces/issues/</a>...</li>
<li><a
href="9f5b4cf473"><code>9f5b4cf</code></a>
fix: vulnerability (<a
href="https://security.snyk.io/vuln/SNYK-JS-BRACES-6838727">https://security.snyk.io/vuln/SNYK-JS-BRACES-6838727</a>)</li>
<li><a
href="98414f9f1f"><code>98414f9</code></a>
remove funding file</li>
<li><a
href="665ab5d561"><code>665ab5d</code></a>
update keepEscaping doc (<a
href="https://redirect.github.com/micromatch/braces/issues/27">#27</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/micromatch/braces/compare/3.0.2...3.0.3">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/firezone/firezone/network/alerts).
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
The implementation of `tunnel_test` has grown substantially in the last
couple of weeks (> 2500 LoC). To make things easier to manage, we split
it up into multiple modules:
- `assertions`: Houses the actual assertions of the test.
- `reference`: The reference implementation of connlib, used as the
"expectation" for the assertions.
- `sut`: A wrapper around connlib itself, acting as the
system-under-test (SUT).
- `transition`: All state transitions that the test might go through.
- `strategies`: Auxiliary strategies used in multiple places.
- `sim_*`: Wrappers for simulating various parts in the code: Clients,
relays, gateways & the portal.
I chose to place strategies in the same modules where the corresponding
things are defined. For example, the `sim_node_prototype` strategy is defined in
the `sim_node` module. Similarly, the strategies for the individual
transitions are also defined in the `transition` module.
Currently, there is a bug in `snownet` where we accidentally invalidate
a srflx candidate because we try to look up the nominated candidate
based on the nominated address. The nominated address represents the
socket that the application should send from:
- For host candidates, this is the host candidate's address itself.
- For server-reflexive candidates, it is their base (which is equivalent
to the host candidate).
- For relay candidates, it is the address of the allocation.
Because of this ambiguity between host and server-reflexive candidates,
we invalidate the server-reflexive candidate locally and signal that to
the remote, which then kills the connection because it thinks it should
no longer talk to this address.
To fix this, we don't add server-reflexive candidates to the local agent
anymore. Only the remote peer needs to know about the server-reflexive
address in order to send packets _to_ it. By sending from the host
candidate, we automatically send "from" the server-reflexive address.
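The mapping described above can be sketched as follows; the types are hypothetical stand-ins for illustration, not snownet's actual API:

```rust
use std::net::SocketAddr;

/// Simplified candidate model (hypothetical; snownet's real types differ).
#[derive(Debug, Clone, PartialEq)]
enum Candidate {
    /// Address the agent is directly bound to.
    Host { addr: SocketAddr },
    /// Public address observed via STUN; `base` is the local host socket
    /// that traffic is actually sent from.
    ServerReflexive { addr: SocketAddr, base: SocketAddr },
    /// Address of an allocation on a TURN relay.
    Relay { allocation: SocketAddr },
}

/// The socket the application should send from for a nominated candidate.
/// Note the ambiguity: for a server-reflexive candidate this returns its
/// *base*, which is identical to the host candidate's address, so looking
/// up "the nominated candidate" by this address alone cannot distinguish
/// the two.
fn send_from_socket(candidate: &Candidate) -> SocketAddr {
    match candidate {
        Candidate::Host { addr } => *addr,
        Candidate::ServerReflexive { base, .. } => *base,
        Candidate::Relay { allocation } => *allocation,
    }
}
```

Because the host and server-reflexive candidates share the same send-from socket, resolving "nominated candidate" by that address can pick the wrong one, which is exactly the bug described above.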
Not adding these server-reflexive candidates has an additional impact.
To make the tests pass reliably, I am entirely removing the invalidation
of candidates after connection setup, as keeping it causes connections
to fail early in the roaming test. This will increase background
traffic a bit, but that seems like an okay trade-off for more
resilient connections (the current bug is only caused by us trying to be
clever in how many candidate pairs we keep alive). We still use the
messages for invalidating candidates on the remote to make roaming work
reasonably smoothly.
Resolves: #5276.
In production, the portal will signal disconnected relays to both the
client and the gateway. We should mimic this in the tests.
In #5283, we remove the invalidation of candidates during connection
setup, which breaks this roaming test due to "unhandled messages". We
could ignore those, but I'd prefer to set up the test such that we panic
on unhandled messages, so this seems to be the better fix.
- Removes version numbers from infra components (elixir/relay)
- Removes version bumping from Rust workspace members that don't get
published
- Splits release publishing into `gateway-`, `headless-client-`, and
`gui-client-`
- Removes auto-deploying new infrastructure when a release is published.
Use the Deploy Production workflow instead.
Fixes #4397
Currently, we simply drop a DNS query if we can't fulfill it. Because
DNS is based on UDP, which is unreliable, a downstream system will
re-send a DNS query if it doesn't receive an answer within a certain
timeout window.
Instead of dropping queries, we now reply with `SERVFAIL`, indicating to
the client that we can't fulfill that DNS query. The intent is that this
will stop any kind of automated retry-loop and surface an error to the
user.
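A minimal sketch of such a reply, assuming we only build the 12-byte DNS header (a real implementation would also echo the question section; the function name is illustrative):

```rust
/// Build a minimal 12-byte DNS response header signalling SERVFAIL
/// (RCODE 2) for the given query ID. Sketch only: a complete reply
/// would also copy the question section of the original query.
fn servfail_header(query_id: u16, recursion_desired: bool) -> [u8; 12] {
    let mut header = [0u8; 12];

    // Bytes 0-1: the query ID, echoed back so the client can match
    // the response to its outstanding query.
    header[0..2].copy_from_slice(&query_id.to_be_bytes());

    // Bytes 2-3: flags. QR=1 (this is a response), RD copied from the
    // query, RCODE=2 (SERVFAIL).
    let flags: u16 = 0x8000
        | if recursion_desired { 0x0100 } else { 0 }
        | 0x0002;
    header[2..4].copy_from_slice(&flags.to_be_bytes());

    // Bytes 4-11: QD/AN/NS/AR counts, all zero in this minimal sketch.
    header
}
```

Signalling SERVFAIL (rather than silence) lets a stub resolver fail fast instead of retrying until its timeout expires.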
Related: #4800.
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Reactor Scram <ReactorScram@users.noreply.github.com>
Currently, the same proxy IP can only ever point to one DNS record.
Proxy IPs are given out on a per-connection basis. As a result, if two
or more domains resolve to the same IP on the same gateway, previous
entries for those domains are lost and subsequent queries return an
empty record.
To fix this, we now store the set of resources that resolve to a given
proxy IP instead of just a single resource. An invariant we have to
maintain here is that all of these resources must point to the same
gateway. This should always hold because proxy IPs are assigned
sequentially across all connections, so a given proxy IP always maps
back to the same gateway.
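The bookkeeping described above might look roughly like this; `ResourceId`, `GatewayId`, and the struct shape are assumptions for illustration, not connlib's actual types:

```rust
use std::collections::{BTreeSet, HashMap};
use std::net::IpAddr;

/// Hypothetical ID types for illustration.
type ResourceId = u64;
type GatewayId = u64;

/// Maps each proxy IP to the set of resources that resolve to it.
/// Invariant: all resources behind one proxy IP live on the same gateway.
#[derive(Default)]
struct ProxyIpMap {
    entries: HashMap<IpAddr, (GatewayId, BTreeSet<ResourceId>)>,
}

impl ProxyIpMap {
    /// Record that `resource` (served by `gateway`) resolves to `proxy_ip`.
    /// Returns `false` if the invariant would be violated.
    fn insert(&mut self, proxy_ip: IpAddr, gateway: GatewayId, resource: ResourceId) -> bool {
        let (gw, set) = self
            .entries
            .entry(proxy_ip)
            .or_insert_with(|| (gateway, BTreeSet::new()));
        if *gw != gateway {
            return false; // same proxy IP must always map to the same gateway
        }
        set.insert(resource);
        true
    }

    /// All resources known to resolve to this proxy IP.
    fn resources(&self, proxy_ip: &IpAddr) -> Option<&BTreeSet<ResourceId>> {
        self.entries.get(proxy_ip).map(|(_, set)| set)
    }
}
```

With a set instead of a single entry, a second domain resolving to the same proxy IP no longer clobbers the first one.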
Fixes: #5259.
---------
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
Closes #3567 (again)
Closes #5214
Ready for review
```[tasklist]
### Before merging
- [x] The IPC service should report system uptime when it starts. This will tell us whether the computer was rebooted or just the IPC service itself was upgraded / rebooted.
- [x] The IPC service should report the PID of itself and the GUI if possible
- [x] The GUI should report the PID of the IPC service if possible
- [x] Extra logging between `GIT_VERSION = ` and the token loading log line, especially right before and right after the critical Tauri launching step
- [x] If a 2nd GUI or IPC service runs and exits due to single-instance, it must log that
- [x] Remove redundant DNS deactivation when IPC service starts (I think conectado noticed this in another PR)
- [x] Manually test that the GUI logs something on clean shutdown
- [x] Logarithmic heartbeat?
- [x] If possible, log monotonic time somewhere so NTP syncs don't make the logs unreadable (uptime in the heartbeat should be monotonic, mostly)
- [x] Apply the same logging fix to the IPC service
- [x] Ensure log zips include GUI crash dumps
- [x] ~~Fix #5042~~ (that's a separate issue, I don't want to drag this PR out)
- [x] Test IPC service restart (logs as a stop event)
- [x] Test IPC service stop
- [x] Test IPC service logs during system suspend (Not logged, maybe because we aren't subscribed to power events)
- [x] Test IPC service logs during system reboot (Logged as shutdown, we exit gracefully)
- [x] Test IPC service logs during system shut down (Logged as a suspend)
- [x] Test IPC service upgrade (Logged as a stop)
- [x] Log unhandled events from the Windows service controller (Power events like suspend and resume are logged and not handled)
```
---------
Signed-off-by: Reactor Scram <ReactorScram@users.noreply.github.com>
In case a configured DNS server is also a CIDR resource, DNS queries
will be routed through the tunnel to the gateway. For this to work
correctly, the destination of the request and the source of the response
need to be mangled back to the originally configured DNS server.
Currently, this mangling happens in the connection-specific
`GatewayOnClient` state. More specifically, the state we need to
track is the set of IDs of the DNS queries that we actually mangled. This
state isn't connection-specific and can thus be moved out of
`GatewayOnClient` into `ClientState`.
Moving this state out of the connection is important because we will soon (#5080) implement
roaming the client by simply dropping all connections and establishing
new connections as the packets are flowing in. For this, we must store
as little state as possible associated with each connection.
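The idea of tracking mangled query IDs independently of any connection can be sketched as follows (hypothetical names; connlib's real `ClientState` is shaped differently):

```rust
use std::collections::HashSet;

/// Connection-independent tracking of mangled DNS queries. A sketch of
/// the idea, not connlib's actual implementation.
#[derive(Default)]
struct MangledDnsQueries {
    ids: HashSet<u16>,
}

impl MangledDnsQueries {
    /// Outbound: we rewrote the destination from the configured DNS
    /// server to the real resolver; remember the query ID so we can
    /// recognise the response.
    fn on_query_mangled(&mut self, query_id: u16) {
        self.ids.insert(query_id);
    }

    /// Inbound: if we mangled the matching query, the response's source
    /// must be rewritten back to the originally configured DNS server.
    /// Consumes the entry so each query is unmangled at most once.
    fn should_unmangle_response(&mut self, query_id: u16) -> bool {
        self.ids.remove(&query_id)
    }
}
```

Because nothing here is tied to a particular connection, this state survives dropping and re-establishing connections, which is exactly what roaming requires.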
Resolves: #5079.
In #5207, I already added logs for which assertions we are performing on
ICMP packets. This PR does the same thing for the DNS queries that are
being sent to connlib. It also adds spans that add some more context to the
messages.
Here is an excerpt of what this looks like:
```
Applying transition 19/19: SendICMPPacketToResource { idx: Index(3210705382108961150), seq: 57053, identifier: 28234, src: TunnelIp6 }
2024-06-05T07:06:30.742455Z INFO assertions: ✅ Performed the expected 2 ICMP handshakes
2024-06-05T07:06:30.742459Z INFO icmp{seq=15543 identifier=63125}: assertions: ✅ dst IP of request matches src IP of response: 3fb8:a7b0:c912:a648:6c9:7910:92dc:8db
2024-06-05T07:06:30.742461Z INFO icmp{seq=15543 identifier=63125}: assertions: ✅ src IP of request matches dst IP of response: fd00:2021:1111::a:3531
2024-06-05T07:06:30.742464Z INFO icmp{seq=15543 identifier=63125}: assertions: ✅ 3fb8:a7b0:c912:a648:6c9:7910:92dc:8db is the correct resource
2024-06-05T07:06:30.742467Z INFO icmp{seq=57053 identifier=28234}: assertions: ✅ dst IP of request matches src IP of response: 3fb8:a7b0:c912:a648:6c9:7910:92dc:8d8
2024-06-05T07:06:30.742470Z INFO icmp{seq=57053 identifier=28234}: assertions: ✅ src IP of request matches dst IP of response: fd00:2021:1111::a:3531
2024-06-05T07:06:30.742473Z INFO icmp{seq=57053 identifier=28234}: assertions: ✅ 3fb8:a7b0:c912:a648:6c9:7910:92dc:8d8 is the correct resource
2024-06-05T07:06:30.742477Z INFO dns{query_id=58256}: assertions: ✅ dst IP of request matches src IP of response: fd00:2021:1111:8000:100:100:111:0
2024-06-05T07:06:30.742480Z INFO dns{query_id=58256}: assertions: ✅ src IP of request matches dst IP of response: fd00:2021:1111::a:3531
2024-06-05T07:06:30.742483Z INFO dns{query_id=58256}: assertions: ✅ dst port of request matches src port of response: 53
2024-06-05T07:06:30.742485Z INFO dns{query_id=58256}: assertions: ✅ src port of request matches dst port of response: 9999
2024-06-05T07:06:30.742488Z INFO dns{query_id=22568}: assertions: ✅ dst IP of request matches src IP of response: 100.100.111.1
2024-06-05T07:06:30.742491Z INFO dns{query_id=22568}: assertions: ✅ src IP of request matches dst IP of response: 100.75.34.66
2024-06-05T07:06:30.742494Z INFO dns{query_id=22568}: assertions: ✅ dst port of request matches src port of response: 53
2024-06-05T07:06:30.742497Z INFO dns{query_id=22568}: assertions: ✅ src port of request matches dst port of response: 9999
2024-06-05T07:06:30.742500Z INFO dns{query_id=58735}: assertions: ✅ dst IP of request matches src IP of response: fd00:2021:1111:8000:100:100:111:2
2024-06-05T07:06:30.742502Z INFO dns{query_id=58735}: assertions: ✅ src IP of request matches dst IP of response: fd00:2021:1111::a:3531
2024-06-05T07:06:30.742505Z INFO dns{query_id=58735}: assertions: ✅ dst port of request matches src port of response: 53
2024-06-05T07:06:30.742507Z INFO dns{query_id=58735}: assertions: ✅ src port of request matches dst port of response: 9999
2024-06-05T07:06:30.742512Z INFO dns{query_id=59096}: assertions: ✅ dst IP of request matches src IP of response: fd00:2021:1111:8000:100:100:111:1
2024-06-05T07:06:30.742514Z INFO dns{query_id=59096}: assertions: ✅ src IP of request matches dst IP of response: fd00:2021:1111::a:3531
2024-06-05T07:06:30.742517Z INFO dns{query_id=59096}: assertions: ✅ dst port of request matches src port of response: 53
2024-06-05T07:06:30.742519Z INFO dns{query_id=59096}: assertions: ✅ src port of request matches dst port of response: 9999
2024-06-05T07:06:30.742522Z INFO dns{query_id=41570}: assertions: ✅ dst IP of request matches src IP of response: fd00:2021:1111:8000:100:100:111:1
2024-06-05T07:06:30.742525Z INFO dns{query_id=41570}: assertions: ✅ src IP of request matches dst IP of response: fd00:2021:1111::a:3531
2024-06-05T07:06:30.742527Z INFO dns{query_id=41570}: assertions: ✅ dst port of request matches src port of response: 53
2024-06-05T07:06:30.742530Z INFO dns{query_id=41570}: assertions: ✅ src port of request matches dst port of response: 9999
2024-06-05T07:06:30.742533Z INFO dns{query_id=15028}: assertions: ✅ dst IP of request matches src IP of response: fd00:2021:1111:8000:100:100:111:1
2024-06-05T07:06:30.742536Z INFO dns{query_id=15028}: assertions: ✅ src IP of request matches dst IP of response: fd00:2021:1111::a:3531
2024-06-05T07:06:30.742538Z INFO dns{query_id=15028}: assertions: ✅ dst port of request matches src port of response: 53
2024-06-05T07:06:30.742541Z INFO dns{query_id=15028}: assertions: ✅ src port of request matches dst port of response: 9999
```
It is a bit repetitive because all assertions run on every state
transition. Nevertheless, I've found it useful to be able to look at the
assertions and visually verify that they make sense.
To make proptests efficient, it is important to generate the set of
possible test cases algorithmically instead of filtering through
randomly generated values.
This PR makes the strategies for upstream DNS servers and IP networks
more efficient by removing the filtering.
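The principle of generating valid values by construction instead of rejection sampling can be illustrated with a plain function (std-only sketch; the names and shapes are illustrative, not the PR's actual proptest strategies):

```rust
use std::net::Ipv4Addr;

/// Build a valid IPv4 network directly from two "random" inputs instead
/// of generating addresses and filtering out the invalid ones (the
/// rejection-sampling pattern that `prop_filter` encourages).
fn ipv4_network(raw_addr: u32, raw_prefix_len: u8) -> (Ipv4Addr, u8) {
    // Map the raw byte into the valid range 0..=32 rather than rejecting
    // out-of-range values.
    let prefix_len = raw_prefix_len % 33;

    // Zero the host bits so the address is always a valid network address.
    let mask = if prefix_len == 0 {
        0
    } else {
        u32::MAX << (32 - prefix_len)
    };

    (Ipv4Addr::from(raw_addr & mask), prefix_len)
}
```

Every pair of raw inputs maps to a valid network, so the generator never discards a sample, which keeps proptest runs fast and shrinking well-behaved.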
With #5049, we will allocate a fixed set of 4 IPs per DNS resource on
the client. In order to ensure that this always works correctly, we
increase the number of resolved IPs to at most 6.
This may have been needed when the logger rolled files and uploaded, but
now it compiles fine without it.
I tested it once manually on Windows. I don't think the logging is
covered by automated tests.
In case an upstream DNS server is a resource, we need to send not only
ICMP packets through the tunnel but also DNS queries. These can be
larger than 200 bytes which currently breaks the test because we only
give it a buffer of 200 bytes.
Closes #5143
The initial half-second backoff should typically be enough, and if the
user is manually re-opening the GUI after a GUI crash, I don't think
they'll notice. If they do, they can open the GUI again and it should
all work.
Most of these were in `known_dirs.rs` because it's platform-specific and
`cargo-mutants` wasn't ignoring other platforms correctly.
Using `cargo mutants -p firezone-gui-client -p firezone-headless-client`
176 / 236 mutants missed before
155 / 206 mutants missed after
Currently, `tunnel_test` only tests DNS resources with fully-qualified
domain names. Firezone also supports wildcard domains in the forms of
`*.example.com` and `?.example.com`.
To include these in the tests, we generate a bunch of DNS records that
include various subdomains for such wildcard DNS resources.
When sampling DNS queries, we already take them from the pool of global
DNS records which now also includes these subdomains, thus nothing else
needed to be changed to support testing these resources.
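A simplified matcher for these two wildcard forms, as an illustration only (connlib's real matching logic may handle more cases):

```rust
/// Match a domain against a wildcard pattern of the two forms above:
/// `*.example.com` matches any subdomain depth, `?.example.com` matches
/// exactly one label. A simplified sketch, not connlib's real matcher.
fn matches_wildcard(pattern: &str, domain: &str) -> bool {
    match pattern.split_once('.') {
        // `*`: anything before the suffix, as long as it is non-empty.
        Some(("*", suffix)) => domain
            .strip_suffix(suffix)
            .and_then(|rest| rest.strip_suffix('.'))
            .map_or(false, |labels| !labels.is_empty()),
        // `?`: exactly one non-empty label before the suffix.
        Some(("?", suffix)) => domain
            .strip_suffix(suffix)
            .and_then(|rest| rest.strip_suffix('.'))
            .map_or(false, |label| !label.is_empty() && !label.contains('.')),
        // No wildcard: require an exact match.
        _ => pattern == domain,
    }
}
```

Stripping the literal `.` separately guards against partial-label matches such as `fooexample.com` matching `*.example.com`.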
With #5049, connlib will start mangling and translating ICMP requests
which means we can no longer rely on the ICMP request emitted by the
gateway to have the same sequence number and identifier as originally
generated by the client. In the end, that is actually also not a
property we care about.
What we do care about is that an ICMP request results in an ICMP reply
and that _those_ have a matching sequence number and identifier.
Additionally, every ICMP request arriving at the gateway should target
the correct resource. For CIDR resources, we already know which IP that
should be. For DNS resources, it has to be one of the resolved IPs for
the domain.
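The property being asserted can be stated compactly (a sketch with simplified types; the test suite's real types differ):

```rust
use std::collections::HashSet;

/// The property the test cares about: every ICMP echo request is matched
/// by a reply carrying the same (sequence number, identifier) pair.
/// Returns the requests that went unanswered.
fn unanswered_requests(
    requests: &[(u16, u16)],
    replies: &[(u16, u16)],
) -> Vec<(u16, u16)> {
    let answered: HashSet<_> = replies.iter().copied().collect();
    requests
        .iter()
        .copied()
        .filter(|req| !answered.contains(req))
        .collect()
}
```

Note that this pairs each request with its reply; it deliberately says nothing about the identifiers the gateway used on the wire, which is the property we stop relying on.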
Currently, the transition for sending ICMP packets does not explicitly
state whether we want to send an IPv4 or an IPv6 packet. Being explicit
about this makes things a bit easier to understand.
It may also simplify the adaptation of the tests for #5049.
Bumps [redis](https://github.com/redis-rs/redis-rs) from 0.25.3 to
0.25.4.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="337dd81553"><code>337dd81</code></a>
Prepare release 0.25.4</li>
<li><a
href="5c6db272c9"><code>5c6db27</code></a>
Fix clippy warnings (<a
href="https://redirect.github.com/redis-rs/redis-rs/issues/1180">#1180</a>)</li>
<li><a
href="c6da2ee262"><code>c6da2ee</code></a>
Fix explicit IoError not being recognized</li>
<li>See full diff in <a
href="https://github.com/redis-rs/redis-rs/compare/redis-0.25.3...redis-0.25.4">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Currently, `tunnel_test` only sends DNS queries to a client's configured
DNS resources. However, connlib receives _all_ DNS requests made on a
system and forwards them to the originally set upstream resolvers in
case they are for non-resources.
To capture the code paths for forwarding these DNS queries, we introduce
a `global_dns_records` strategy that pre-fills the `ReferenceState`
with DNS records that are not DNS resources. Thus, when sampling for a
domain to query, we might pick one that is not a DNS resource.
The expectation here is that this query still resolves (we assert that
we don't have any unanswered DNS queries). In addition, we introduce a
`Transition` to send an ICMP packet to such a resolved address. In a
real system, these wouldn't get routed to connlib, but if they are, we
still want to assert that connlib doesn't route them into a tunnel.
There is a special case where the chosen DNS server is actually a CIDR
resource. In that case, the DNS packet gets lost and we use it to
initiate a connection to the corresponding gateway. A repeated query to
such a DNS server then actually gets sent via the tunnel to the gateway.
As such, we need to generate a DNS response, similarly to how we need to
send an ICMP reply.
This allows us to add a few more useful assertions to the test: Correct
mangling of source and destination port of UDP packets.
Currently, the field `expected_icmp_handshakes` tracks the resource
destination as an `IpAddr`, with a separate `ResourceKind` enum to
correctly interpret the address.
When generating the transition, we already use a `ResourceDst` enum to
differentiate between the two kinds of resources.
We can reuse that same enum to make the assertion clearer.
Currently, in order to assert whether we have any unexpected packets, we
remove the ones we did expect from the state and later compare that
`HashMap` with an empty one. This isn't very clean because it modifies
the state purely for an easier assertion.
Instead of removing entries, this PR introduces a utility function
`find_unexpected_entries` that computes the unexpected ones on-the-fly
which allows us to achieve the same functionality without mutating
state.
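A possible shape for such a helper (the PR's actual signature may differ):

```rust
use std::collections::HashMap;

/// Compute the entries of `actual` that were not expected, without
/// mutating either map. Illustrative version of the PR's
/// `find_unexpected_entries` helper.
fn find_unexpected_entries<'a, K, V>(
    actual: &'a HashMap<K, V>,
    expected: &HashMap<K, V>,
) -> Vec<(&'a K, &'a V)>
where
    K: std::hash::Hash + Eq,
    V: PartialEq,
{
    actual
        .iter()
        // Keep entries that are either missing from `expected` or
        // present with a different value.
        .filter(|&(k, v)| expected.get(k) != Some(v))
        .collect()
}
```

Because the function only borrows both maps, the assertion can run at any point without perturbing the state later assertions depend on.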
Extracted out of #5168.
Bumps
[@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node)
from 20.12.7 to 20.14.0.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [@tauri-apps/cli](https://github.com/tauri-apps/tauri) from 1.5.12
to 1.5.14.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/tauri-apps/tauri/releases"><code>@tauri-apps/cli</code>'s
releases</a>.</em></p>
<blockquote>
<h2><code>@tauri-apps/cli</code> v1.5.14</h2>
<h2>[1.5.14]</h2>
<h3>Dependencies</h3>
<ul>
<li>Upgraded to <code>tauri-cli@1.5.14</code></li>
</ul>
<h2><code>@tauri-apps/cli</code> v1.5.13</h2>
<h2>[1.5.13]</h2>
<h3>Dependencies</h3>
<ul>
<li>Upgraded to <code>tauri-cli@1.5.13</code></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="15c62b5d99"><code>15c62b5</code></a>
Apply Version Updates From Current Changes (v1) (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9721">#9721</a>)</li>
<li><a
href="9b90b67ed2"><code>9b90b67</code></a>
Revert "Apply Version Updates From Current Changes (v1)" (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9724">#9724</a>)</li>
<li><a
href="f1b0b00159"><code>f1b0b00</code></a>
docs: added example of tauri.allowlist.protocol (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9726">#9726</a>)</li>
<li><a
href="6bb721cd3d"><code>6bb721c</code></a>
Apply Version Updates From Current Changes (v1) (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9709">#9709</a>)</li>
<li><a
href="7f885bd5ed"><code>7f885bd</code></a>
fix(core/shell): speedup <code>Command.execute</code> & fix extra
new lines (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9706">#9706</a>)</li>
<li><a
href="2eb21378a6"><code>2eb2137</code></a>
fix(tauri-runtime-wry): window draw span not closing (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9718">#9718</a>)</li>
<li><a
href="ab9ec42c10"><code>ab9ec42</code></a>
fix(windows): nsis failed to resolve resources with <code>$</code> in
their name, closes...</li>
<li><a
href="07b6f9fa83"><code>07b6f9f</code></a>
apply version updates (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9683">#9683</a>)</li>
<li><a
href="db9ec4e79c"><code>db9ec4e</code></a>
ci: fix msrv check (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9682">#9682</a>)</li>
<li><a
href="2a9a28044b"><code>2a9a280</code></a>
ci: fix msrv check (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9681">#9681</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/tauri-apps/tauri/compare/@tauri-apps/cli-v1.5.12...@tauri-apps/cli-v1.5.14">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [@tauri-apps/api](https://github.com/tauri-apps/tauri) from 1.5.4
to 1.5.6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/tauri-apps/tauri/releases"><code>@tauri-apps/api</code>'s
releases</a>.</em></p>
<blockquote>
<h2><code>@tauri-apps/api</code> v1.5.6</h2>
<!-- raw HTML omitted -->
<pre><code>yarn audit v1.22.22
info No lockfile found.
0 vulnerabilities found - Packages audited: 146
Done in 1.64s.
</code></pre>
<!-- raw HTML omitted -->
<h2>[1.5.6]</h2>
<h3>Bug Fixes</h3>
<ul>
<li><a
href="3b69c1384b"><code>3b69c1384</code></a>(<a
href="https://redirect.github.com/tauri-apps/tauri/pull/9792">#9792</a>)
Revert <a
href="https://redirect.github.com/tauri-apps/tauri/pull/9706">#9706</a>
which broke compatability between <code>tauri</code> crate and the JS
<code>@tauri-apps/api</code> npm package in a patch release where it
should've been in a minor release.</li>
</ul>
<!-- raw HTML omitted -->
<pre><code>yarn run v1.22.22
$ yarn build && cd ./dist && yarn publish --access
public --loglevel silly
$ rollup -c --configPlugin typescript
./src/app.ts, ./src/cli.ts, ./src/clipboard.ts, ./src/dialog.ts,
./src/event.ts, ./src/fs.ts, ./src/globalShortcut.ts, ./src/http.ts,
./src/index.ts, ./src/mocks.ts, ./src/notification.ts, ./src/os.ts,
./src/path.ts, ./src/process.ts, ./src/shell.ts, ./src/tauri.ts,
./src/updater.ts, ./src/window.ts → ./dist, ./dist...
created ./dist, ./dist in 1.4s
src/index.ts → ../../core/tauri/scripts/bundle.global.js...
created ../../core/tauri/scripts/bundle.global.js in 1.7s
[1/4] Bumping version...
info Current version: 1.5.6
[2/4] Logging in...
[3/4] Publishing...
success Published.
[4/4] Revoking token...
info Not revoking login token, specified via config file.
Done in 8.11s.
</code></pre>
<!-- raw HTML omitted -->
<h2><code>@tauri-apps/api</code> v1.5.5</h2>
<!-- raw HTML omitted -->
<pre><code>yarn audit v1.22.22
</code></pre>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d78fa20d86"><code>d78fa20</code></a>
Apply Version Updates From Current Changes (v1) (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9793">#9793</a>)</li>
<li><a
href="3b69c1384b"><code>3b69c13</code></a>
Revert "fix(core/shell): speedup <code>Command.execute</code> &
fix extra new lines (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/97">#97</a>...</li>
<li><a
href="704260bb3c"><code>704260b</code></a>
fix(macos/dialog): avoid setting empty <code>default_path</code> (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9784">#9784</a>)</li>
<li><a
href="36b082a9c8"><code>36b082a</code></a>
ci: pull <code>.crate</code> file from workspace <code>target</code>
directory (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9732">#9732</a>)</li>
<li><a
href="f45d35cf06"><code>f45d35c</code></a>
Apply Version Updates From Current Changes (v1) (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9730">#9730</a>)</li>
<li><a
href="ef35a793c5"><code>ef35a79</code></a>
fix(core): fix compilation when <code>shell-execute</code> or
<code>shell-sidecar</code> (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9729">#9729</a>)</li>
<li><a
href="15c62b5d99"><code>15c62b5</code></a>
Apply Version Updates From Current Changes (v1) (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9721">#9721</a>)</li>
<li><a
href="9b90b67ed2"><code>9b90b67</code></a>
Revert "Apply Version Updates From Current Changes (v1)" (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9724">#9724</a>)</li>
<li><a
href="f1b0b00159"><code>f1b0b00</code></a>
docs: added example of tauri.allowlist.protocol (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9726">#9726</a>)</li>
<li><a
href="6bb721cd3d"><code>6bb721c</code></a>
Apply Version Updates From Current Changes (v1) (<a
href="https://redirect.github.com/tauri-apps/tauri/issues/9709">#9709</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/tauri-apps/tauri/compare/@tauri-apps/api-v1.5.4...@tauri-apps/api-v1.5.6">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [keyring](https://github.com/hwchen/keyring-rs) from 2.3.2 to
2.3.3.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/hwchen/keyring-rs/commits/v2.3.3">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Refs #3636 (This pays down some of the technical debt from Linux DNS)
Refs #4473 (This partially fulfills it)
Refs #5068 (This is needed to make `FIREZONE_DNS_CONTROL` mandatory)
As of dd6421:
- On both Linux and Windows, DNS control and IP setting (i.e.
`on_set_interface_config`) both move to the Client
- On Windows, route setting stays in `tun_windows.rs`. Route setting in
Windows requires us to know the interface index, which we don't know in
the Client code. If we could pass opaque platform-specific data between
the tunnel and the Client it would be easy.
- On Linux, route setting moves to the Client and Gateway, which
completely removes the `worker` task in `tun_linux.rs`
- Notifying systemd that we're ready moves up to the headless Client /
IPC service
```[tasklist]
### Before merging / notes
- [x] Does DNS roaming work on Linux on `main`? I don't see where it hooks up. I think I only set up DNS in `Tun::new` (Yes, the `Tun` gets recreated every time we reconfigure the device)
- [x] Fix Windows Clients
- [x] Fix Gateway
- [x] Make sure connlib doesn't get the DNS control method from the env var (will be fixed in #5068)
- [x] De-dupe consts
- [ ] ~~Add DNS control test~~ (failed)
- [ ] Smoke test Linux
- [ ] Smoke test Windows
```
When creating an echo request or reply packet using pnet, it operates on
the whole packet, since the identifier and sequence number are part of
the ICMP header, not the payload.
Those fields aren't accessible unless the packet is converted to an echo
request or reply, because the interpretation of that part of the header
depends on the specific type of packet.
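The header layout makes this concrete. Here is a minimal, std-only sketch (the helper `parse_echo` is hypothetical and not taken from the pnet API; byte offsets are per RFC 792):

```rust
/// Extract the identifier and sequence number from a raw ICMPv4 message.
/// Per RFC 792, echo reply (type 0) and echo request (type 8) place the
/// identifier at bytes 4..6 and the sequence number at bytes 6..8 of the
/// ICMP header; for any other type, those bytes mean something else.
fn parse_echo(icmp: &[u8]) -> Option<(u16, u16)> {
    match icmp.first()? {
        0 | 8 => Some((
            u16::from_be_bytes([*icmp.get(4)?, *icmp.get(5)?]),
            u16::from_be_bytes([*icmp.get(6)?, *icmp.get(7)?]),
        )),
        _ => None, // bytes 4..8 have a different meaning for other types
    }
}
```

A view that starts at the ICMP payload (byte 8 onwards) can never reach these fields, which is why the whole packet has to be passed around.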
Currently, `tunnel_test` only sends ICMPs to CIDR resources. We also
want to test certain properties with regard to DNS resources. In
particular, we want to test:
- Given a DNS resource, can we query it for an IP?
- Can we send an ICMP packet to the resolved IP?
- Is the mapping of proxy IP to upstream IP stable?
To achieve this, we sample a list of `IpAddr` whenever we add a DNS
resource to the state. We also add the transition
`SendQueryToDnsResource`. As the name suggests, this one simulates a DNS
query coming from the system for one of our resources. We simulate A and
AAAA queries and take note of the addresses that connlib returns to us
for the queries.
Lastly, as part of `SendICMPPacketToResource`, we may now also sample
from the list of IPs that connlib gave us for a domain and send an ICMP
packet to one of them.
There is one caveat in this test that I'd like to point out: At the
moment, the exact mapping of proxy IP to real IP is an implementation
detail of connlib. As a result, I don't know which proxy IP I need to
use in order to ping a particular "real" IP. This presents an issue in
the assertions: Upon the first ICMP packet, I cannot assert what the
expected destination is. Instead, I need to "remember" it. In case we
send another ICMP packet to the same resource and happen to sample the
same proxy IP, we can then assert that the mapping did not change.
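The "remember on first sight, assert thereafter" idea can be sketched with a plain map (type and method names here are hypothetical, not taken from the actual test):

```rust
use std::collections::HashMap;
use std::net::IpAddr;

/// Reference-state sketch: the first ICMP packet through a given proxy IP
/// pins down the mapping; every later packet must agree with it.
#[derive(Default)]
struct ProxyIpMappings {
    observed: HashMap<IpAddr, IpAddr>,
}

impl ProxyIpMappings {
    /// Record the mapping the first time we see a proxy IP, and assert
    /// that it never changes afterwards.
    fn observe(&mut self, proxy_ip: IpAddr, real_ip: IpAddr) {
        let expected = self.observed.entry(proxy_ip).or_insert(real_ip);
        assert_eq!(*expected, real_ip, "proxy IP mapping changed");
    }
}
```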
Closes #5155
I keep seeing these in my debug Clients and I just want to make sure the
URL it's using is correct.
e.g.
```
2024-05-29T18:10:14.131542Z ERROR firezone_gui_client::client::gui: Error in check_for_updates error=Error in client::updates::check
Caused by:
HTTP status: 404 Not Found from update URL `https://www.firezone.dev/dl/firezone-client-gui-windows/latest/aarch64`
```
To encode into the type that clients always have both an IPv4 and an
IPv6 address, and that these are the only allowed source IPs for any
given client, we split them into dedicated fields in the
`ClientOnGateway` struct and update the tests accordingly.
Furthermore, these fields will be used in the DNS refactor for
IPv6-in-IPv4 and IPv4-in-IPv6 to set the source IP of outgoing packets
without having to do additional routing or mappings. There will be more
notes on this in the corresponding PR #5049.
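A minimal sketch of the type-level encoding (field and method names are illustrative, not the actual struct definition):

```rust
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};

/// By storing the two addresses as `Ipv4Addr` and `Ipv6Addr` rather than,
/// say, a `Vec<IpAddr>`, the type itself guarantees that every client has
/// exactly one address of each family.
struct ClientOnGateway {
    ipv4: Ipv4Addr,
    ipv6: Ipv6Addr,
}

impl ClientOnGateway {
    /// A packet source is valid iff it matches one of the two assigned IPs.
    fn allowed_source(&self, src: IpAddr) -> bool {
        match src {
            IpAddr::V4(v4) => v4 == self.ipv4,
            IpAddr::V6(v6) => v6 == self.ipv6,
        }
    }
}
```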
---------
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
This doesn't really matter for the functionality of the test because,
in connlib, we don't expect the IPs to adhere to a certain range.
Nevertheless, to make output more readable, it is nicer if these IPs
match what we also see in production logs.
Currently, the gateway has a piece of functionality to ensure we only
ever route packets that actually originate from the client. This is
important because a gateway connects to multiple clients, and thus -
without this check - client A could, by mangling the source IP of its
packets, send a packet through the tunnel that gets interpreted as
traffic from client B.
The portal assigns these source IPs when the clients sign in and passes
them to the gateway whenever a client connects. We can thus drop all
traffic on the gateway side from IPs that we don't recognise.
Currently, a client will still trigger a connection intent for an IP
packet, even if it doesn't have the tunnel's source IP set. We may want
to consider changing this behaviour in the future.
Since we expect a fixed MTU, we can encode this in the size of the
buffers for the device: we will never read or write more than the
1280-byte MTU we expect.
Note that the `write_buf` needs an extra 16 bytes for the aead tag that
boringtun will copy over.
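A sketch of the buffer sizing this implies (constant and struct names are illustrative, not taken from the actual code):

```rust
/// With a fixed MTU we can use fixed-size buffers: reads from the TUN
/// device never exceed the MTU, while writes need room for the 16-byte
/// AEAD authentication tag that boringtun appends when encapsulating.
const MTU: usize = 1280;
const AEAD_TAG_SIZE: usize = 16;

struct DeviceBuffers {
    read_buf: [u8; MTU],
    write_buf: [u8; MTU + AEAD_TAG_SIZE],
}

impl DeviceBuffers {
    fn new() -> Self {
        Self {
            read_buf: [0; MTU],
            write_buf: [0; MTU + AEAD_TAG_SIZE],
        }
    }
}
```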
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.116 to
1.0.117.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/serde-rs/json/releases">serde_json's
releases</a>.</em></p>
<blockquote>
<h2>v1.0.117</h2>
<ul>
<li>Resolve unexpected_cfgs warning (<a
href="https://redirect.github.com/serde-rs/json/issues/1130">#1130</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="0ae247ca63"><code>0ae247c</code></a>
Release 1.0.117</li>
<li><a
href="4517c7a2d9"><code>4517c7a</code></a>
PartialEq is not implemented between Value and 128-bit ints</li>
<li><a
href="fdf99c7c38"><code>fdf99c7</code></a>
Combine number PartialEq tests</li>
<li><a
href="b4fc2451d7"><code>b4fc245</code></a>
Merge pull request <a
href="https://redirect.github.com/serde-rs/json/issues/1130">#1130</a>
from serde-rs/checkcfg</li>
<li><a
href="98f1a247de"><code>98f1a24</code></a>
Resolve unexpected_cfgs warning</li>
<li>See full diff in <a
href="https://github.com/serde-rs/json/compare/v1.0.116...v1.0.117">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Currently, we assert on the actual IP packet that gets sent between
client and gateway in the tunnel test. This does not work with DNS
resources because - unless we model _how_ connlib assigns IPs for DNS
resources - we don't know what the destination IP of the resource is
that we are about to ping.
From an application's PoV, it doesn't matter what the IP is. Thus, it is
better to write an assertion closer to what the application expects:
- A received ICMP reply should come from the IP that we pinged.
- The ICMP packet emitted on the gateway should target the actual IP of
the DNS resource.
Extracted out of #5083.
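The two assertions can be sketched as follows (parameter names are hypothetical; this is not the actual test code):

```rust
use std::net::IpAddr;

/// We don't model how connlib assigns proxy IPs, so we assert on what the
/// application can observe instead of on exact packet bytes.
fn assert_icmp_roundtrip(
    pinged_dst: IpAddr,       // destination the client pinged
    reply_src: IpAddr,        // source of the reply the client received
    gateway_dst: IpAddr,      // destination of the packet leaving the gateway
    resource_real_ip: IpAddr, // actual IP of the DNS resource
) {
    // A received ICMP reply should come from the IP that we pinged.
    assert_eq!(reply_src, pinged_dst);
    // The ICMP packet emitted on the gateway should target the actual IP
    // of the DNS resource.
    assert_eq!(gateway_dst, resource_real_ip);
}
```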
An `IpPacket` may contain an ICMP or ICMPv6 packet. To extract metadata
like the sequence number or identifier from it, we need to be able to
parse an `IpPacket`'s payload into the appropriate packet type.
Extracted out of #5083.
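As a rough, std-only illustration of what this parsing involves (the function name and the manual byte handling are hypothetical; the actual implementation uses proper packet types), metadata extraction has to dispatch on the IP version first and only then interpret the ICMP header:

```rust
/// Sketch: locate the ICMP/ICMPv6 payload inside a raw IP packet and pull
/// out the echo identifier and sequence number. Returns `None` for
/// non-echo traffic or malformed packets.
fn echo_metadata(ip_packet: &[u8]) -> Option<(u16, u16)> {
    let (proto, icmp) = match ip_packet.first()? >> 4 {
        4 => {
            // IPv4: header length is in the low nibble, in 32-bit words.
            let ihl = (ip_packet[0] & 0x0f) as usize * 4;
            (*ip_packet.get(9)?, ip_packet.get(ihl..)?)
        }
        6 => (*ip_packet.get(6)?, ip_packet.get(40..)?), // fixed 40-byte header
        _ => return None,
    };
    // Only echo messages put identifier/sequence at bytes 4..8:
    // ICMPv4 types 0/8, ICMPv6 types 128/129.
    if !matches!((proto, *icmp.first()?), (1, 0) | (1, 8) | (58, 128) | (58, 129)) {
        return None;
    }
    Some((
        u16::from_be_bytes([*icmp.get(4)?, *icmp.get(5)?]),
        u16::from_be_bytes([*icmp.get(6)?, *icmp.get(7)?]),
    ))
}
```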
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: Reactor Scram <ReactorScram@users.noreply.github.com>
Whilst initially convenient, the `prop_oneof` macro becomes a problem
when it comes to conditionally including strategies. So far, we have
used conditional weights, but this breaks once we get past 10 strategies
(which happens in #5083). That is because `prop_oneof` calls
`Union::new_weighted` underneath for anything more than 10 strategies,
and this constructor panics on weights of 0.
Fortunately, we can simply get rid of the macro and construct a list
into which we conditionally push all valid strategies. This approach
scales to any number of strategies and doesn't involve any macros.