Mirror of https://github.com/outbackdingo/firezone.git
Synced: 2026-01-27 18:18:55 +00:00
HEAD: 8700a680d5520054566a562f730b92d78108c627
1057 commits
- `8700a680d5` **chore: Bump versions to point to new artifacts (#5337)**
  Currently, download links are broken due to the updated artifact format.
- `f0c1f9556a` **refactor(connlib): use selectors to randomly pick values (#5310)**
  Reading through more of the `proptest` library, I came across the `Selector` concept. It is more generic than `sample::Index` and allows us to pick directly from anything that is an `IntoIterator`, which greatly simplifies a lot of the code in `tunnel_test`. In order (pun intended) to make things deterministic, we migrate all maps and sets to `BTreeMap`s and `BTreeSet`s, which have a deterministic ordering of their contents and thus avoid additional sorting.
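The determinism argument above can be sketched with the standard library alone. This is a hypothetical illustration (the function name `pick_nth` is mine, not connlib's): a `BTreeMap` iterates its entries in sorted key order regardless of insertion order, so selecting "the n-th entry" yields the same result on every run, which a `HashMap` does not guarantee.

```rust
use std::collections::BTreeMap;

// Illustrative sketch, not connlib's actual code: "pick the n-th entry"
// is deterministic on a BTreeMap because iteration order is sorted key
// order, independent of insertion order.
fn pick_nth<K: Ord + Clone, V: Clone>(map: &BTreeMap<K, V>, n: usize) -> Option<(K, V)> {
    // `iter()` walks keys in ascending order, so the same `n` always
    // selects the same entry for the same map contents.
    map.iter().nth(n % map.len().max(1)).map(|(k, v)| (k.clone(), v.clone()))
}

fn main() {
    let mut map = BTreeMap::new();
    // Insertion order is deliberately scrambled...
    map.insert("relay", 3);
    map.insert("client", 1);
    map.insert("gateway", 2);
    // ...but iteration order is the sorted key order: client, gateway, relay.
    assert_eq!(pick_nth(&map, 0), Some(("client", 1)));
    assert_eq!(pick_nth(&map, 2), Some(("relay", 3)));
    println!("deterministic selection ok");
}
```

A `Selector` strategy in `proptest` plays the role of `n` here: shrinking and replaying a failing case re-picks the same entries only if the underlying collection iterates deterministically.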
- `28cddc8304` **chore(snownet): improve logs on blocked STUN traffic (#5305)**
  Detecting blocked STUN traffic is somewhat tricky. What we can observe is not receiving any responses from a relay (neither on IPv4 nor IPv6). Once an `Allocation` gives up retrying requests with a relay (after 60s), we now de-allocate the `Allocation` and print the following message:

  > INFO snownet::node: Disconnecting from relay; no response received. Is STUN blocked? id=613f68ac-483e-4e9d-bf87-457fd7223bf6

  I chose the wording "disconnecting from relay" because sysadmins likely don't have any clue what an "Allocation" is. The error message is specific to a relay, though, so it could also be emitted if a relay is down for more than 60s or not responding for whatever reason. Resolves: #5281.
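The give-up rule described above can be sketched as a small timeout check. This is a hypothetical illustration (the type and field names are mine, not snownet's real API): an allocation remembers when it last heard from its relay and is considered dead once nothing has arrived within the timeout window.

```rust
use std::time::{Duration, Instant};

// Illustrative constant matching the 60s window described in the commit.
const NO_RESPONSE_TIMEOUT: Duration = Duration::from_secs(60);

// Hypothetical sketch, not snownet's real Allocation type.
struct Allocation {
    last_response: Option<Instant>,
    created_at: Instant,
}

impl Allocation {
    // Decide whether to give up on this relay entirely.
    fn should_disconnect(&self, now: Instant) -> bool {
        // Measure from the last response if we ever got one, else from creation.
        let reference = self.last_response.unwrap_or(self.created_at);
        now.duration_since(reference) >= NO_RESPONSE_TIMEOUT
    }
}

fn main() {
    let start = Instant::now();
    let alloc = Allocation { last_response: None, created_at: start };
    assert!(!alloc.should_disconnect(start + Duration::from_secs(30)));
    assert!(alloc.should_disconnect(start + Duration::from_secs(61)));
    println!("give-up logic ok");
}
```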
- `98b37f56ed` **build(deps): Bump crash-handler from 0.6.1 to 0.6.2 in /rust (#5326)**
  Bumps [crash-handler](https://github.com/EmbarkStudios/crash-handling) from 0.6.1 to 0.6.2. The 0.6.2 release adds support for vectored exception handlers on Windows, which can catch heap-corruption exceptions that the vanilla exception handler cannot.
- `bd9ab8d88c` **build(deps-dev): Bump tailwindcss from 3.4.3 to 3.4.4 in /rust/gui-client (#5321)**
  Bumps [tailwindcss](https://github.com/tailwindlabs/tailwindcss) from 3.4.3 to 3.4.4. Per the 3.4.4 release notes, it fixes using multiple `<alpha-value>` placeholders in a single color definition, stops prefixing classes in arbitrary values of `has-*`, `group-has-*`, and `peer-has-*` variants, supports negative values for `{col,row}-{start,end}` utilities, and updates the embedded browserslist database.
- `52ed9d5cce` **build(deps-dev): Bump braces from 3.0.2 to 3.0.3 in /rust/gui-client in the npm_and_yarn group (#5314)**
  Bumps the npm_and_yarn group in /rust/gui-client with 1 update: [braces](https://github.com/micromatch/braces), from 3.0.2 to 3.0.3.
- `9a01745a1d` **build(deps): Bump the windows group in /rust with 2 updates (#5288)**
  Bumps the windows group in /rust with 2 updates: [windows](https://github.com/microsoft/windows-rs) and [windows-implement](https://github.com/microsoft/windows-rs). Updates `windows` from 0.56.0 to 0.57.0.
- `c35a7579a8` **refactor(connlib): split tunnel_test into multiple modules (#5266)**
  The implementation of `tunnel_test` has grown substantially over the last couple of weeks (> 2500 LoC). To make it easier to manage, we split it into multiple modules:
  - `assertions`: houses the actual assertions of the test.
  - `reference`: the reference implementation of connlib, used as the "expectation" for the assertions.
  - `sut`: a wrapper around connlib itself, acting as the system under test (SUT).
  - `transition`: all state transitions that the test might go through.
  - `strategies`: auxiliary strategies used in multiple places.
  - `sim_*`: wrappers for simulating various parts of the system: clients, relays, gateways, and the portal.

  Strategies are placed in the same modules as the things they generate. For example, the `sim_node_prototype` strategy is defined in the `sim_node` module; likewise, the strategies for the individual transitions live in the `transition` module.
- `e1877bc250` **fix(snownet): don't invalidate candidates after nomination (#5283)**
  Currently, there is a bug in `snownet` where we accidentally invalidate a srflx candidate because we look for the nominated candidate based on the nominated address. The nominated address represents the socket that the application should send from:
  - For host candidates, this is the host candidate's address itself.
  - For server-reflexive candidates, it is their base (which is equivalent to the host candidate).
  - For relay candidates, it is the address of the allocation.

  Because of the ambiguity between host and server-reflexive candidates, we invalidate the server-reflexive candidate locally and send that to the remote; the remote, as a result, kills the connection because it thinks it should no longer talk to this address. To fix this, we no longer add server-reflexive candidates to the local agent. Only the remote peer needs to know about the server-reflexive address in order to send packets _to_ it. By sending from the host candidate, we automatically send "from" the server-reflexive address.

  Not adding these server-reflexive candidates has an additional impact. To make the tests pass reliably, I am entirely removing the invalidation of candidates after the connection setup, as keeping it fails connections early in the roaming test. This will increase background traffic a bit, but that seems like an okay trade-off for more resilient connections (the current bug is only caused by us trying to be clever about how many candidate pairs we keep alive). We still use the messages for invalidating candidates on the remote to make roaming work reasonably smoothly. Resolves: #5276.
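The "send from" rule in the bullet list above can be sketched as a small enum. This is a hypothetical illustration (the types are mine, not snownet's real ones): the send socket for a server-reflexive candidate is its *base*, i.e. the underlying host socket, which is exactly the ambiguity that caused the bug.

```rust
use std::net::SocketAddr;

// Illustrative ICE candidate kinds, not snownet's real types.
enum Candidate {
    Host { addr: SocketAddr },
    ServerReflexive { addr: SocketAddr, base: SocketAddr },
    Relay { allocation: SocketAddr },
}

impl Candidate {
    // The socket the application should send from, per the rules above.
    fn send_from(&self) -> SocketAddr {
        match self {
            Candidate::Host { addr } => *addr,
            // Sending from the host socket automatically means sending
            // "from" the server-reflexive address as seen by the peer.
            Candidate::ServerReflexive { base, .. } => *base,
            Candidate::Relay { allocation } => *allocation,
        }
    }
}

fn main() {
    let host: SocketAddr = "10.0.0.2:52625".parse().unwrap();
    let srflx: SocketAddr = "203.0.113.7:61000".parse().unwrap();
    let candidate = Candidate::ServerReflexive { addr: srflx, base: host };
    // Host and server-reflexive candidates map to the *same* send socket,
    // so looking up "the nominated candidate" by address is ambiguous.
    assert_eq!(candidate.send_from(), host);
    println!("send_from ok");
}
```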
- `96ced27e5a` **fix(snownet): notify remote of invalidated relay candidate (#5303)**
  When migrating to new relays, we need to notify the remote of our invalidated relay candidates and not just invalidate them locally. Related: #5283.
- `5b065d3e4c` **test(snownet): migrate relays for both parties (#5302)**
  In production, the portal will signal disconnected relays to both the client and the gateway. We should mimic this in the tests. In #5283, we remove invalidation of candidates during connection setup, which breaks this roaming test with "unhandled messages". We could ignore those, but I'd prefer to set up the test such that we panic on unhandled messages instead; thus, this seems to be the better fix.
- `7e533c42f8` **refactor: Split releases for Clients and Gateways (#5287)**
  - Removes version numbers from infra components (elixir/relay)
  - Removes version bumping from Rust workspace members that don't get published
  - Splits release publishing into `gateway-`, `headless-client-`, and `gui-client-`
  - Removes auto-deploying new infrastructure when a release is published. Use the Deploy Production workflow instead.

  Fixes #4397
- `4117639cf4` **fix(connlib): reply with SERVFAIL on DNS query errors (#5263)**
  Currently, we simply drop a DNS query if we can't fulfill it. Because DNS is based on UDP, which is unreliable, a downstream system will re-send a DNS query if it doesn't receive an answer within a certain timeout window. Instead of dropping queries, we now reply with `SERVFAIL`, indicating to the client that we can't fulfill that DNS query. The intent is that this will stop any kind of automated retry loop and surface an error to the user. Related: #4800.
  Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
  Co-authored-by: Reactor Scram <ReactorScram@users.noreply.github.com>
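At the wire level, a `SERVFAIL` reply is cheap to construct: echo the query ID, set the QR bit (this is a response), and set RCODE to 2. The sketch below hand-rolls only the 12-byte DNS header for illustration; connlib's actual implementation uses a DNS library, and a real response would also echo the question section.

```rust
// Hypothetical sketch: build a SERVFAIL response header for a DNS query.
// Header layout per RFC 1035: bytes 0-1 = ID, byte 2 = QR/opcode/AA/TC/RD,
// byte 3 = RA/Z/RCODE, bytes 4-11 = QD/AN/NS/AR counts.
fn servfail_header(query: &[u8]) -> Option<[u8; 12]> {
    if query.len() < 12 {
        return None; // too short to be a DNS message
    }
    let mut header = [0u8; 12];
    header[0] = query[0]; // transaction ID, high byte
    header[1] = query[1]; // transaction ID, low byte
    header[2] = query[2] | 0x80; // set QR = 1 (response), keep opcode/RD
    header[3] = 0x02; // RCODE = 2 (SERVFAIL)
    header[4..6].copy_from_slice(&query[4..6]); // echo QDCOUNT
    // ANCOUNT, NSCOUNT, ARCOUNT stay 0: we carry no answer records.
    Some(header)
}

fn main() {
    // Minimal fake query header: ID = 0xabcd, RD = 1, QDCOUNT = 1.
    let query = [0xab, 0xcd, 0x01, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00];
    let resp = servfail_header(&query).unwrap();
    assert_eq!(&resp[0..2], &[0xab, 0xcd]); // same transaction ID
    assert_eq!(resp[2] & 0x80, 0x80); // QR bit set: this is a response
    assert_eq!(resp[3] & 0x0f, 0x02); // RCODE = SERVFAIL
    println!("servfail header ok");
}
```

Because the client receives a definite error instead of silence, its resolver can fail fast rather than retrying into a timeout.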
- `7d76774ae0` **fix(connlib): domains can resolve to same IPs on same gateway (#5272)**
  Currently, the same proxy IP can only ever point to one DNS record. Proxy IPs are given out on a per-connection basis. As a result, if two or more domains resolve to the same IP on the same gateway, previous entries for this domain are lost and an empty record is returned. To fix this, we now store the set of resources that resolve to a proxy IP instead of just a single resource. An invariant we have to maintain is that all of these resources must point to the same gateway. This should always hold because proxy IPs are assigned sequentially across all connections, and thus the same IP always maps back to the same gateway. Fixes: #5259.
  Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
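The fix amounts to changing the value type of the mapping from a single resource to a set. This is a hypothetical sketch (the names `DnsState`, `record`, and the string stand-in for `IpAddr` are mine, not connlib's): inserting into a set preserves earlier entries for the same proxy IP instead of clobbering them.

```rust
use std::collections::{BTreeMap, BTreeSet};

type ProxyIp = &'static str; // stand-in for IpAddr to keep the sketch short
type ResourceId = u32;

// Illustrative state, not connlib's real type: one proxy IP can now be
// claimed by several resources, as long as they share a gateway.
#[derive(Default)]
struct DnsState {
    resources_by_proxy_ip: BTreeMap<ProxyIp, BTreeSet<ResourceId>>,
}

impl DnsState {
    fn record(&mut self, ip: ProxyIp, resource: ResourceId) {
        // Inserting into a set preserves earlier entries for the same IP.
        self.resources_by_proxy_ip.entry(ip).or_default().insert(resource);
    }
}

fn main() {
    let mut state = DnsState::default();
    state.record("100.96.0.5", 1); // e.g. example.com
    state.record("100.96.0.5", 2); // e.g. other.example.com, same resolved IP
    let resources = &state.resources_by_proxy_ip["100.96.0.5"];
    // Both resources survive; with a single-value map the first would be lost.
    assert_eq!(resources.len(), 2);
    println!("proxy ip mapping ok");
}
```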
- `609ba73f84` **chore(gui-client): improve logging around Client startup and IPC connections (#5216)**
  Closes #3567 (again). Closes #5214. Ready for review.

  ```[tasklist]
  ### Before merging
  - [x] The IPC service should report system uptime when it starts. This will tell us whether the computer was rebooted or just the IPC service itself was upgraded / rebooted.
  - [x] The IPC service should report the PID of itself and the GUI if possible
  - [x] The GUI should report the PID of the IPC service if possible
  - [x] Extra logging between `GIT_VERSION = ` and the token loading log line, especially right before and right after the critical Tauri launching step
  - [x] If a 2nd GUI or IPC service runs and exits due to single-instance, it must log that
  - [x] Remove redundant DNS deactivation when IPC service starts (I think conectado noticed this in another PR)
  - [x] Manually test that the GUI logs something on clean shutdown
  - [x] Logarithmic heartbeat?
  - [x] If possible, log monotonic time somewhere so NTP syncs don't make the logs unreadable (uptime in the heartbeat should be monotonic, mostly)
  - [x] Apply the same logging fix to the IPC service
  - [x] Ensure log zips include GUI crash dumps
  - [x] ~~Fix #5042~~ (that's a separate issue, I don't want to drag this PR out)
  - [x] Test IPC service restart (logs as a stop event)
  - [x] Test IPC service stop
  - [x] Test IPC service logs during system suspend (Not logged, maybe because we aren't subscribed to power events)
  - [x] Test IPC service logs during system reboot (Logged as shutdown, we exit gracefully)
  - [x] Test IPC service logs during system shut down (Logged as a suspend)
  - [x] Test IPC service upgrade (Logged as a stop)
  - [x] Log unhandled events from the Windows service controller (Power events like suspend and resume are logged and not handled)
  ```

  Signed-off-by: Reactor Scram <ReactorScram@users.noreply.github.com>
- `9c0c1c141c` **refactor(connlib): don't stringify domain name early (#5264)**
  Turning a query's `name` into a `String` as late as possible avoids reparsing it in the tests.
- `a0b2ea4073` **refactor(connlib): remove DNS mangling from connection state (#5222)**
  In case a configured DNS server is also a CIDR resource, DNS queries will be routed through the tunnel to the gateway. For this to work correctly, the destination of the request and the source of the response need to be mangled back to the originally configured DNS server. Currently, this mangling happens in the connection-specific `GatewayOnClient` state. More specifically, the state we need to track is the set of IDs of the DNS queries we actually mangled. This state isn't connection-specific and can thus be moved out of `GatewayOnClient` into `ClientState`. Removing this state is important because we will soon (#5080) implement roaming by simply dropping all connections and establishing new ones as packets flow in. For that, we must store as little state as possible with each connection. Resolves: #5079.
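The per-query state described above boils down to a set of mangled query IDs living in the client-wide state. This is a hypothetical sketch (the method names and the simplified `ClientState` are mine, not connlib's real API): a response's source is rewritten back to the original DNS server only if its query ID is in the set.

```rust
use std::collections::HashSet;
use std::net::IpAddr;

// Illustrative client-wide state, not connlib's real ClientState.
#[derive(Default)]
struct ClientState {
    mangled_dns_queries: HashSet<u16>,
}

impl ClientState {
    // Called when we rewrite a query's destination to a CIDR-resource DNS server.
    fn on_query_mangled(&mut self, query_id: u16) {
        self.mangled_dns_queries.insert(query_id);
    }

    /// Returns the original DNS server to restore as the response's source,
    /// if this response belongs to a query we mangled. Consumes the entry.
    fn unmangle_source(&mut self, query_id: u16, original_server: IpAddr) -> Option<IpAddr> {
        self.mangled_dns_queries.remove(&query_id).then_some(original_server)
    }
}

fn main() {
    let server: IpAddr = "10.0.0.53".parse().unwrap();
    let mut state = ClientState::default();
    state.on_query_mangled(0x1234);
    assert_eq!(state.unmangle_source(0x1234, server), Some(server));
    assert_eq!(state.unmangle_source(0x1234, server), None); // already consumed
    println!("unmangle ok");
}
```

Because nothing here references a particular connection, dropping and re-establishing connections (as the roaming work in #5080 intends) leaves this state intact.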
- `492d5c976e` **test(connlib): improve assertion logs (#5223)**
  In #5207, I already added logs for the assertions we perform on ICMP packets. This PR does the same for the DNS queries that are being sent to connlib. It also adds spans that give the messages more context. Here is an excerpt of what this looks like:

  ```
  Applying transition 19/19: SendICMPPacketToResource { idx: Index(3210705382108961150), seq: 57053, identifier: 28234, src: TunnelIp6 }
  2024-06-05T07:06:30.742455Z INFO assertions: ✅ Performed the expected 2 ICMP handshakes
  2024-06-05T07:06:30.742459Z INFO icmp{seq=15543 identifier=63125}: assertions: ✅ dst IP of request matches src IP of response: 3fb8:a7b0:c912:a648:6c9:7910:92dc:8db
  2024-06-05T07:06:30.742461Z INFO icmp{seq=15543 identifier=63125}: assertions: ✅ src IP of request matches dst IP of response: fd00:2021:1111::a:3531
  2024-06-05T07:06:30.742464Z INFO icmp{seq=15543 identifier=63125}: assertions: ✅ 3fb8:a7b0:c912:a648:6c9:7910:92dc:8db is the correct resource
  2024-06-05T07:06:30.742467Z INFO icmp{seq=57053 identifier=28234}: assertions: ✅ dst IP of request matches src IP of response: 3fb8:a7b0:c912:a648:6c9:7910:92dc:8d8
  2024-06-05T07:06:30.742470Z INFO icmp{seq=57053 identifier=28234}: assertions: ✅ src IP of request matches dst IP of response: fd00:2021:1111::a:3531
  2024-06-05T07:06:30.742473Z INFO icmp{seq=57053 identifier=28234}: assertions: ✅ 3fb8:a7b0:c912:a648:6c9:7910:92dc:8d8 is the correct resource
  2024-06-05T07:06:30.742477Z INFO dns{query_id=58256}: assertions: ✅ dst IP of request matches src IP of response: fd00:2021:1111:8000:100:100:111:0
  2024-06-05T07:06:30.742480Z INFO dns{query_id=58256}: assertions: ✅ src IP of request matches dst IP of response: fd00:2021:1111::a:3531
  2024-06-05T07:06:30.742483Z INFO dns{query_id=58256}: assertions: ✅ dst port of request matches src port of response: 53
  2024-06-05T07:06:30.742485Z INFO dns{query_id=58256}: assertions: ✅ src port of request matches dst port of response: 9999
  2024-06-05T07:06:30.742488Z INFO dns{query_id=22568}: assertions: ✅ dst IP of request matches src IP of response: 100.100.111.1
  2024-06-05T07:06:30.742491Z INFO dns{query_id=22568}: assertions: ✅ src IP of request matches dst IP of response: 100.75.34.66
  2024-06-05T07:06:30.742494Z INFO dns{query_id=22568}: assertions: ✅ dst port of request matches src port of response: 53
  2024-06-05T07:06:30.742497Z INFO dns{query_id=22568}: assertions: ✅ src port of request matches dst port of response: 9999
  2024-06-05T07:06:30.742500Z INFO dns{query_id=58735}: assertions: ✅ dst IP of request matches src IP of response: fd00:2021:1111:8000:100:100:111:2
  2024-06-05T07:06:30.742502Z INFO dns{query_id=58735}: assertions: ✅ src IP of request matches dst IP of response: fd00:2021:1111::a:3531
  2024-06-05T07:06:30.742505Z INFO dns{query_id=58735}: assertions: ✅ dst port of request matches src port of response: 53
  2024-06-05T07:06:30.742507Z INFO dns{query_id=58735}: assertions: ✅ src port of request matches dst port of response: 9999
  2024-06-05T07:06:30.742512Z INFO dns{query_id=59096}: assertions: ✅ dst IP of request matches src IP of response: fd00:2021:1111:8000:100:100:111:1
  2024-06-05T07:06:30.742514Z INFO dns{query_id=59096}: assertions: ✅ src IP of request matches dst IP of response: fd00:2021:1111::a:3531
  2024-06-05T07:06:30.742517Z INFO dns{query_id=59096}: assertions: ✅ dst port of request matches src port of response: 53
  2024-06-05T07:06:30.742519Z INFO dns{query_id=59096}: assertions: ✅ src port of request matches dst port of response: 9999
  2024-06-05T07:06:30.742522Z INFO dns{query_id=41570}: assertions: ✅ dst IP of request matches src IP of response: fd00:2021:1111:8000:100:100:111:1
  2024-06-05T07:06:30.742525Z INFO dns{query_id=41570}: assertions: ✅ src IP of request matches dst IP of response: fd00:2021:1111::a:3531
  2024-06-05T07:06:30.742527Z INFO dns{query_id=41570}: assertions: ✅ dst port of request matches src port of response: 53
  2024-06-05T07:06:30.742530Z INFO dns{query_id=41570}: assertions: ✅ src port of request matches dst port of response: 9999
  2024-06-05T07:06:30.742533Z INFO dns{query_id=15028}: assertions: ✅ dst IP of request matches src IP of response: fd00:2021:1111:8000:100:100:111:1
  2024-06-05T07:06:30.742536Z INFO dns{query_id=15028}: assertions: ✅ src IP of request matches dst IP of response: fd00:2021:1111::a:3531
  2024-06-05T07:06:30.742538Z INFO dns{query_id=15028}: assertions: ✅ dst port of request matches src port of response: 53
  2024-06-05T07:06:30.742541Z INFO dns{query_id=15028}: assertions: ✅ src port of request matches dst port of response: 9999
  ```

  It is a bit repetitive because all assertions always run on all state transitions. Nevertheless, I've found it useful to be able to look at the assertions and visually verify that they make sense.
- `d0efc55918` **test(connlib): reduce number of local rejections (#5221)**
  To make proptests efficient, it is important to generate the set of possible test cases algorithmically instead of filtering through randomly generated values. This PR makes the strategies for upstream DNS servers and IP networks more efficient by removing the filtering.
- `fb97e3c5a3` **test(connlib): generate up to 6 resolved IPs (#5218)**
  With #5049, we will allocate a fixed set of 4 IPs per DNS resource on the client. To ensure that this always works correctly, increase the number of resolved IPs to at most 6.
- `f644d734fc` **refactor(connlib-client-shared): remove unnecessary `Arc<Mutex>` from logger (#5224)**
  This may have been needed when the logger rolled files and uploaded, but now it compiles fine without it. I tested it once manually on Windows. I don't think the logging is covered by automated tests.
- `ff97c56b92` **test(connlib): increase buffer sizes (#5220)**
  In case an upstream DNS server is a resource, we need to send not only ICMP packets through the tunnel but also DNS queries. These can be larger than 200 bytes, which currently breaks the test because we only give it a buffer of 200 bytes.
- `f161fd290e` **fix(tauri_client/windows): close and re-open the named pipe properly, and back off if needed (#5156)**
  Closes #5143. The initial half-second backoff should typically be enough, and if the user is manually re-opening the GUI after a GUI crash, I don't think they'll notice. If they do, they can open the GUI again and it should all work.
- `d561e0ee0d` **test: fix 21 mutants from cargo-mutants (#5170)**
  Most of these were in `known_dirs.rs` because it's platform-specific and `cargo-mutants` wasn't ignoring other platforms correctly. Using `cargo mutants -p firezone-gui-client -p firezone-headless-client`: 176/236 mutants missed before, 155/206 mutants missed after.
- `dfbfbbe8c9` **build(deps): Bump tokio from 1.37.0 to 1.38.0 in /rust (#5193)**
  Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.37.0 to 1.38.0. This release marks the beginning of stabilization for runtime metrics, stabilizing `RuntimeMetrics::worker_count`. Other highlights from the release notes: `File::create_new`, `copy_bidirectional_with_sizes`, `Notify::notify_last`, `mpsc::Receiver::{capacity,max_capacity}`, Apple visionOS and QNX support, a sharded timer implementation, an overflow check in `Interval::poll_tick`, and a fix for incorrect `is_empty` on mpsc block boundaries.
- `3f3ea96ca7` **test(connlib): generate resources with wildcard and ? addresses (#5209)**
  Currently, `tunnel_test` only tests DNS resources with fully-qualified domain names. Firezone also supports wildcard domains of the forms `*.example.com` and `?.example.com`. To include these in the tests, we generate a bunch of DNS records with various subdomains for such wildcard DNS resources. When sampling DNS queries, we already take them from the pool of global DNS records, which now also includes these subdomains; thus, nothing else needed to change to support testing these resources.
- `b7077bfdee` **test(connlib): refactor checked ICMP properties (#5207)**
  With #5049, connlib will start mangling and translating ICMP requests, which means we can no longer rely on the ICMP request emitted by the gateway having the same sequence number and identifier as originally generated by the client. In the end, that is not a property we care about anyway. What we do care about is that an ICMP request results in an ICMP reply and that _those_ have a matching sequence number and identifier. Additionally, every ICMP request arriving at the gateway should target the correct resource. For CIDR resources, we already know which IP that should be. For DNS resources, it has to be one of the resolved IPs for the domain.
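The request/reply matching property above can be sketched as a tiny tracker. This is a hypothetical illustration (the `IcmpTracker` type is mine, not the test suite's real code): requests are indexed by `(sequence number, identifier)`, and a reply counts as answering a request only if both values match.

```rust
use std::collections::BTreeMap;

// Illustrative tracker, not the real tunnel_test code.
#[derive(Default)]
struct IcmpTracker {
    // (seq, identifier) -> destination the request was sent to
    pending: BTreeMap<(u16, u16), String>,
}

impl IcmpTracker {
    fn on_request(&mut self, seq: u16, identifier: u16, dst: &str) {
        self.pending.insert((seq, identifier), dst.to_string());
    }

    /// A reply resolves the pending request with the same (seq, identifier)
    /// and tells us which resource destination it belongs to.
    fn on_reply(&mut self, seq: u16, identifier: u16) -> Option<String> {
        self.pending.remove(&(seq, identifier))
    }
}

fn main() {
    let mut tracker = IcmpTracker::default();
    tracker.on_request(57053, 28234, "172.20.0.1");
    assert_eq!(tracker.on_reply(57053, 28234).as_deref(), Some("172.20.0.1"));
    assert_eq!(tracker.on_reply(57053, 28234), None); // already answered
    println!("icmp matching ok");
}
```

Note that nothing here compares the gateway-side packet to the client's original values, which is exactly the property the commit stops asserting.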
- `91a7e00b85` **test(connlib): make the packet source an explicit input (#5217)**
  Currently, the transition for sending ICMP packets does not explicitly state whether we want to send an IPv4 or an IPv6 packet. Being explicit about this makes things a bit easier to understand. It may also simplify adapting the tests for #5049.
- `d27a7a3083` **feat(relay): support custom turn port (#5208)**
  Original PR: #5130.
  Co-authored-by: Antoine <antoinelabarussias@gmail.com>
- `4ce2913ab9` **build(deps): Bump redis from 0.25.3 to 0.25.4 in /rust (#5196)**
  Bumps [redis](https://github.com/redis-rs/redis-rs) from 0.25.3 to 0.25.4.
- `63d7c35717` **test(connlib): send DNS queries to non-resources (#5168)**
  Currently, `tunnel_test` only sends DNS queries for a client's configured DNS resources. However, connlib receives _all_ DNS requests made on a system and forwards them to the originally configured upstream resolvers in case they are for non-resources. To capture the code paths for forwarding these DNS queries, we introduce a `global_dns_records` strategy that pre-fills the `ReferenceState` with DNS records that are not DNS resources. Thus, when sampling a domain to query, we might pick one that is not a DNS resource. The expectation is that this query still resolves (we assert that we don't have any unanswered DNS queries). In addition, we introduce a `Transition` to send an ICMP packet to such a resolved address. In a real system, these wouldn't get routed to connlib, but if they were, we still want to assert that they don't get routed. There is a special case where the chosen DNS server is actually a CIDR resource. In that case, the DNS packet gets lost, and we use it to trigger initiating a connection to the corresponding gateway. A repeated query to such a DNS server then actually gets sent via the tunnel to the gateway, so we need to generate a DNS response, similar to how we need to send an ICMP reply. This allows us to add a few more useful assertions to the test: correct mangling of source and destination ports of UDP packets.
- `708375bbb4` **refactor(connlib): reuse ResourceDst for assertion (#5206)**
  Currently, the field `expected_icmp_handshakes` tracks the resource destination as an `IpAddr`, with a separate `ResourceKind` enum to correctly interpret the address. When generating the transition, we already use a `ResourceDst` enum to differentiate between the two kinds of resources. We can reuse that same enum to make the assertion clearer.
- `40cba442a4` **test(connlib): don't modify asserted state (#5192)**
  Currently, in order to assert whether we have any unexpected packets, we remove the ones we did expect from the state and later compare that `HashMap` with an empty one. This isn't very clean because it modifies the state purely for an easier assertion. Instead of removing, this PR introduces a utility function `find_unexpected_entries` that computes the unexpected entries on the fly, which achieves the same functionality without mutating state. Extracted out of #5168.
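A function in the spirit of `find_unexpected_entries` can be sketched as follows (the exact signature in the test suite may differ; this is an illustrative version): instead of removing expected entries and comparing against an empty map, it borrows both maps and collects the keys that were not expected.

```rust
use std::collections::BTreeMap;

// Illustrative version of the utility described above: compute the
// unexpected entries on the fly without mutating the asserted state.
fn find_unexpected_entries<'a, K: Ord, V>(
    actual: &'a BTreeMap<K, V>,
    expected: &BTreeMap<K, V>,
) -> Vec<&'a K> {
    actual
        .keys()
        .filter(|key| !expected.contains_key(*key)) // borrow-only, no removal
        .collect()
}

fn main() {
    let actual = BTreeMap::from([(1, "a"), (2, "b"), (3, "c")]);
    let expected = BTreeMap::from([(1, "a"), (3, "c")]);
    let unexpected = find_unexpected_entries(&actual, &expected);
    // Only key 2 was not expected; `actual` is untouched and can keep
    // serving further assertions.
    assert_eq!(unexpected, vec![&2]);
    assert_eq!(actual.len(), 3);
    println!("unexpected entries ok");
}
```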
- `879fdeea19` **build(deps-dev): Bump @types/node from 20.12.7 to 20.14.0 in /rust/gui-client (#5204)**
  Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 20.12.7 to 20.14.0.
  Signed-off-by: dependabot[bot] <support@github.com>
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
|
|
7ef06e4c22 |
build(deps): Bump @tauri-apps/cli from 1.5.12 to 1.5.14 in /rust/gui-client (#5189)
Bumps [@tauri-apps/cli](https://github.com/tauri-apps/tauri) from 1.5.12 to 1.5.14. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/tauri-apps/tauri/releases"><code>@tauri-apps/cli</code>'s releases</a>.</em></p> <blockquote> <h2><code>@tauri-apps/cli</code> v1.5.14</h2> <h2>[1.5.14]</h2> <h3>Dependencies</h3> <ul> <li>Upgraded to <code>tauri-cli@1.5.14</code></li> </ul> <h2><code>@tauri-apps/cli</code> v1.5.13</h2> <h2>[1.5.13]</h2> <h3>Dependencies</h3> <ul> <li>Upgraded to <code>tauri-cli@1.5.13</code></li> </ul> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href=" |
||
|
|
80f216b7aa |
build(deps): Bump @tauri-apps/api from 1.5.4 to 1.5.6 in /rust/gui-client (#5191)
Bumps [@tauri-apps/api](https://github.com/tauri-apps/tauri) from 1.5.4 to 1.5.6. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/tauri-apps/tauri/releases"><code>@tauri-apps/api</code>'s releases</a>.</em></p> <blockquote> <h2><code>@tauri-apps/api</code> v1.5.6</h2> <!-- raw HTML omitted --> <pre><code>yarn audit v1.22.22 info No lockfile found. 0 vulnerabilities found - Packages audited: 146 Done in 1.64s. </code></pre> <!-- raw HTML omitted --> <h2>[1.5.6]</h2> <h3>Bug Fixes</h3> <ul> <li><a href=" |
||
|
|
2a1187bd9c |
build(deps): Bump keyring from 2.3.2 to 2.3.3 in /rust (#5195)
Bumps [keyring](https://github.com/hwchen/keyring-rs) from 2.3.2 to 2.3.3. <details> <summary>Commits</summary> <ul> <li>See full diff in <a href="https://github.com/hwchen/keyring-rs/commits/v2.3.3">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=keyring&package-manager=cargo&previous-version=2.3.2&new-version=2.3.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |
||
|
|
deefabd8f8 |
refactor(firezone-tunnel): move routes and DNS control out of connlib and up to the Client (#5111)
Refs #3636 (This pays down some of the technical debt from Linux DNS) Refs #4473 (This partially fulfills it) Refs #5068 (This is needed to make `FIREZONE_DNS_CONTROL` mandatory) As of dd6421: - On both Linux and Windows, DNS control and IP setting (i.e. `on_set_interface_config`) both move to the Client - On Windows, route setting stays in `tun_windows.rs`. Route setting in Windows requires us to know the interface index, which we don't know in the Client code. If we could pass opaque platform-specific data between the tunnel and the Client it would be easy. - On Linux, route setting moves to the Client and Gateway, which completely removes the `worker` task in `tun_linux.rs` - Notifying systemd that we're ready moves up to the headless Client / IPC service ```[tasklist] ### Before merging / notes - [x] Does DNS roaming work on Linux on `main`? I don't see where it hooks up. I think I only set up DNS in `Tun::new` (Yes, the `Tun` gets recreated every time we reconfigure the device) - [x] Fix Windows Clients - [x] Fix Gateway - [x] Make sure connlib doesn't get the DNS control method from the env var (will be fixed in #5068) - [x] De-dupe consts - [ ] ~~Add DNS control test~~ (failed) - [ ] Smoke test Linux - [ ] Smoke test Windows ``` |
||
|
|
94cb494e0a |
refactor(gui-client): finish refactors from #4978 (#5158)
```[tasklist] ### Before opening for review - [ ] ~~Wait for some other refactors to merge~~ - [x] Test Windows - [x] Test Linux ``` |
||
|
|
499edd2dd4 |
chore(connlib): fix echo request and reply packets (#5169)
When creating an echo request or reply packet with pnet, the whole packet must be used, since the identifier and sequence number are part of the ICMP header, not the payload. Those fields aren't accessible unless the packet is converted to an echo request or reply, because the interpretation of that header field depends on the specific type of packet. |
||
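The header layout the commit above relies on can be illustrated with a std-only sketch (hypothetical function name, not the pnet API the commit uses). Per RFC 792, the ICMPv4 echo header is: type (1 byte), code (1), checksum (2), identifier (2), sequence number (2), followed by the payload. Bytes 4..8 only mean "identifier/sequence" for echo messages; other ICMP types interpret them differently, which is why the packet must be converted first.

```rust
// Sketch: extract identifier and sequence number from raw ICMPv4 bytes,
// but only for echo request (type 8) or echo reply (type 0) messages.
fn echo_id_and_seq(icmp: &[u8]) -> Option<(u16, u16)> {
    if icmp.len() < 8 {
        return None; // too short to contain an echo header
    }
    if !matches!(icmp[0], 0 | 8) {
        return None; // not an echo message; bytes 4..8 mean something else
    }
    let id = u16::from_be_bytes([icmp[4], icmp[5]]);
    let seq = u16::from_be_bytes([icmp[6], icmp[7]]);
    Some((id, seq))
}

fn main() {
    // Echo request, id = 0x1234, seq = 1; checksum zeroed for the example.
    let packet = [8, 0, 0, 0, 0x12, 0x34, 0x00, 0x01];
    assert_eq!(echo_id_and_seq(&packet), Some((0x1234, 1)));

    // Destination-unreachable (type 3): the same bytes are NOT an id/seq.
    assert_eq!(echo_id_and_seq(&[3, 0, 0, 0, 0x12, 0x34, 0x00, 0x01]), None);
}
```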
|
|
ce929e1204 |
test(connlib): resolve DNS resources in tunnel_test (#5083)
Currently, `tunnel_test` only sends ICMPs to CIDR resources. We also want to test certain properties with regard to DNS resources. In particular, we want to test: - Given a DNS resource, can we query it for an IP? - Can we send an ICMP packet to the resolved IP? - Is the mapping of proxy IP to upstream IP stable? To achieve this, we sample a list of `IpAddr`s whenever we add a DNS resource to the state. We also add the transition `SendQueryToDnsResource`. As the name suggests, this one simulates a DNS query coming from the system for one of our resources. We simulate A and AAAA queries and take note of the addresses that connlib returns for the queries. Lastly, as part of `SendICMPPacketToResource`, we may now also sample from the list of IPs that connlib gave us for a domain and send an ICMP packet to that one. There is one caveat in this test that I'd like to point out: at the moment, the exact mapping of proxy IP to real IP is an implementation detail of connlib. As a result, I don't know which proxy IP I need to use in order to ping a particular "real" IP. This presents an issue in the assertions: upon the first ICMP packet, I cannot assert what the expected destination is. Instead, I need to "remember" it. In case we send another ICMP packet to the same resource and happen to sample the same proxy IP, we can then assert that the mapping did not change. |
||
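The "remember, then assert" strategy described in the commit above can be sketched like this. The type and method names are hypothetical, not the actual test harness code: on the first observation we record which proxy IP connlib picked for a real IP, and on every later observation we assert the mapping hasn't changed.

```rust
use std::collections::BTreeMap;
use std::net::IpAddr;

// Hypothetical sketch: track the proxy-IP-to-real-IP mapping observed so
// far and check the stability property on repeat observations.
#[derive(Default)]
struct ProxyIpMapping {
    by_real_ip: BTreeMap<IpAddr, IpAddr>,
}

impl ProxyIpMapping {
    /// Returns `false` if the mapping changed, i.e. the property is violated.
    fn observe(&mut self, real: IpAddr, proxy: IpAddr) -> bool {
        match self.by_real_ip.get(&real) {
            None => {
                self.by_real_ip.insert(real, proxy); // first sighting: remember it
                true
            }
            Some(known) => *known == proxy, // later sightings must match
        }
    }
}

fn main() {
    let mut mapping = ProxyIpMapping::default();
    let real: IpAddr = "93.184.216.34".parse().unwrap();
    let proxy: IpAddr = "100.96.0.1".parse().unwrap();

    assert!(mapping.observe(real, proxy)); // first observation: remembered
    assert!(mapping.observe(real, proxy)); // same mapping: stable
    assert!(!mapping.observe(real, "100.96.0.2".parse().unwrap())); // changed
}
```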
|
|
c6adba23de |
chore(gui-client): log update URL if fetching the version number fails (#5157)
Closes #5155 I keep seeing these in my debug Clients and I just want to make sure the URL it's using is correct, e.g.: ``` 2024-05-29T18:10:14.131542Z ERROR firezone_gui_client::client::gui: Error in check_for_updates error=Error in client::updates::check Caused by: HTTP status: 404 Not Found from update URL `https://www.firezone.dev/dl/firezone-client-gui-windows/latest/aarch64` ``` |
||
|
|
b3d2059cad |
chore(connlib): split allowed_ips into ipv4 and ipv6 in ClientOnGateway (#5160)
To encode in the type that clients always have both IPv4 and IPv6 addresses, and that these are the only allowed source IPs for any given client, we split them into dedicated fields in the `ClientOnGateway` struct and update the tests accordingly. Furthermore, these fields will be used in the DNS refactor for IPv6-in-IPv4 and IPv4-in-IPv6 to set the source IP of outgoing packets, without having to do additional routing or mappings. There will be more notes on this in the corresponding PR #5049. --------- Co-authored-by: Thomas Eizinger <thomas@eizinger.io> |
||
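The shape described in the commit above can be sketched as follows. This is a hypothetical illustration of the idea, not the actual `ClientOnGateway` definition: holding one `Ipv4Addr` and one `Ipv6Addr` field makes "exactly one address of each family" unrepresentable-to-violate, unlike a generic `Vec<IpAddr>`.

```rust
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};

// Hypothetical sketch: the client's tunnel addresses as two dedicated
// fields rather than a single list of allowed IPs.
struct ClientOnGateway {
    ipv4: Ipv4Addr,
    ipv6: Ipv6Addr,
}

impl ClientOnGateway {
    /// A packet's source is valid only if it matches the client's address
    /// of the corresponding family.
    fn allows_source(&self, src: IpAddr) -> bool {
        match src {
            IpAddr::V4(v4) => v4 == self.ipv4,
            IpAddr::V6(v6) => v6 == self.ipv6,
        }
    }
}

fn main() {
    let client = ClientOnGateway {
        ipv4: "100.64.0.2".parse().unwrap(),
        ipv6: "fd00:2021:1111::2".parse().unwrap(),
    };
    assert!(client.allows_source("100.64.0.2".parse().unwrap()));
    assert!(!client.allows_source("100.64.0.99".parse().unwrap()));
}
```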
|
|
73085f2f00 |
test(connlib): use same tunnel IP subnets as real code (#5162)
This doesn't really matter for the functionality of the test because, in connlib, we don't expect the IPs to adhere to a certain range. Nevertheless, to make the output more readable, it is nicer if these IPs match what we also see in production logs. |
||
|
|
20cfcac7da |
test(connlib): don't route packets from IPs other than the client's (#5161)
Currently, the gateway has a piece of functionality to ensure we only ever route packets that actually originate from the client. This is important because a gateway connects to multiple clients; without this check, client A could send a packet through the tunnel that gets interpreted as traffic from client B by mangling the source IP of their packets. The portal assigns these source IPs when the clients sign in and passes them to the gateway whenever a client connects. We can thus drop all traffic on the gateway side from IPs that we don't recognise. Currently, a client will still trigger a connection intent for an IP packet even if it doesn't have the tunnel's source IP set. We may want to consider changing this behaviour in the future. |
||
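The gateway-side check described in the commit above can be sketched like this. The names are hypothetical, not the actual gateway code: the portal supplies the set of source IPs belonging to signed-in clients, and packets from any other source are dropped before routing.

```rust
use std::collections::BTreeSet;
use std::net::IpAddr;

// Hypothetical sketch: filter packets by whether their source IP belongs
// to a known, signed-in client.
struct Gateway {
    known_client_ips: BTreeSet<IpAddr>,
}

impl Gateway {
    /// Returns the packet unchanged if its source is a known client,
    /// otherwise drops it (returns `None`).
    fn filter_source<'p>(&self, src: IpAddr, packet: &'p [u8]) -> Option<&'p [u8]> {
        self.known_client_ips.contains(&src).then_some(packet)
    }
}

fn main() {
    let gateway = Gateway {
        known_client_ips: BTreeSet::from(["100.64.0.2".parse().unwrap()]),
    };
    let packet = [0u8; 20];

    // Traffic from the assigned source IP is routed.
    assert!(gateway.filter_source("100.64.0.2".parse().unwrap(), &packet).is_some());
    // A mangled / spoofed source IP is dropped.
    assert!(gateway.filter_source("100.64.0.66".parse().unwrap(), &packet).is_none());
}
```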
|
|
cb9fe34437 |
chore(connlib): make device buffers smaller (#5145)
Since we expect a fixed MTU, we can encode this in the size of the device buffers: we will never read or write more than the 1280-byte MTU we expect. Note that the `write_buf` needs an extra 16 bytes for the AEAD tag that boringtun will copy over. |
||
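The buffer sizing described in the commit above can be sketched as follows. The constant and struct names are hypothetical; 1280 is the MTU the commit mentions and 16 is the AEAD tag size it attributes to boringtun's encryption.

```rust
// Hypothetical sketch: fixed-size device buffers derived from the MTU.
const MTU: usize = 1280;
const AEAD_TAG_SIZE: usize = 16; // appended by the WireGuard AEAD on encrypt

struct DeviceBuffers {
    // Reads from the TUN device never exceed the MTU.
    read_buf: [u8; MTU],
    // Writes need headroom for the authentication tag on top of the MTU.
    write_buf: [u8; MTU + AEAD_TAG_SIZE],
}

impl DeviceBuffers {
    fn new() -> Self {
        Self {
            read_buf: [0; MTU],
            write_buf: [0; MTU + AEAD_TAG_SIZE],
        }
    }
}

fn main() {
    let buffers = DeviceBuffers::new();
    assert_eq!(buffers.read_buf.len(), 1280);
    assert_eq!(buffers.write_buf.len(), 1296); // room for the 16-byte tag
}
```

Fixed-size arrays also make the invariant visible at the type level: any attempt to write past `MTU + AEAD_TAG_SIZE` is a bounds error rather than silent growth.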
|
|
d52d519e7d |
build(deps): Bump serde_json from 1.0.116 to 1.0.117 in /rust (#5136)
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.116 to 1.0.117. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/serde-rs/json/releases">serde_json's releases</a>.</em></p> <blockquote> <h2>v1.0.117</h2> <ul> <li>Resolve unexpected_cfgs warning (<a href="https://redirect.github.com/serde-rs/json/issues/1130">#1130</a>)</li> </ul> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href=" |
||
|
|
adb00af3d4 |
test(connlib): assert on expected ICMP handshakes (#5150)
Currently, we assert on the actual IP packet that gets sent between client and gateway in the tunnel test. This does not work with DNS resources because, unless we model _how_ connlib assigns IPs for DNS resources, we don't know what the destination IP of the resource is that we are about to ping. From an application's PoV, it doesn't matter what the IP is. Thus, it is better to write an assertion closer to what the application expects: - A received ICMP reply should come from the IP that we pinged. - The ICMP packet emitted on the gateway should target the actual IP of the DNS resource. Extracted out of #5083. |
||
|
|
974eb95dc5 |
test(connlib): reduce number of sites to 3 (#5152)
Generating up to 10 sites can be quite verbose in the output. I think 3 should also be enough to hit all codepaths that need to deal with more than 1. |
||
|
|
9c1af37c85 |
chore(ip-packet): model ICMP packets (#5147)
An `IpPacket` may contain an ICMP or ICMPv6 packet. To extract metadata like the sequence number or identifier from it, we need to be able to parse an `IpPacket`'s payload into the appropriate packets. Extracted out of #5083. --------- Signed-off-by: Thomas Eizinger <thomas@eizinger.io> Co-authored-by: Reactor Scram <ReactorScram@users.noreply.github.com> |