mirror of
https://github.com/outbackdingo/firezone.git
synced 2026-04-05 15:06:26 +00:00
abfd378fe9500a97e5fdc28450735440a2ce46ff
177 Commits
| Author | SHA1 | Message | Date | |
|---|---|---|---|---|
|
|
64d2d89542 |
test(connlib): add coverage for the Internet Resource (#6089)
With the upcoming feature of full-route tunneling aka an "Internet Resource", we need to expand the reference state machine in `tunnel_test`. In particular, packets to non-resources will now be routed to the gateway if we have previously activated the Internet resource. This is reasonably easy to model, as we can see from the small diff. Because `connlib` doesn't actually support the Internet resource yet, the code snippet for where it is added to the list of all possible resources to sample from is commented out. |
||
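The routing rule described above ("non-resource packets go to the gateway iff the Internet resource is active") can be sketched as a tiny reference-model predicate. This is a minimal, hypothetical stand-in for the `tunnel_test` state machine, not connlib's actual code, and it handles IPv4 only:

```rust
use std::net::Ipv4Addr;

/// True if `ip` falls inside `net/prefix`.
fn in_prefix(ip: Ipv4Addr, net: Ipv4Addr, prefix: u8) -> bool {
    if prefix == 0 {
        return true;
    }
    let mask = u32::MAX << (32 - u32::from(prefix));
    (u32::from(ip) & mask) == (u32::from(net) & mask)
}

/// Reference-model decision: a packet is routed to the gateway if its
/// destination matches a CIDR resource, or unconditionally once the
/// Internet resource has been activated.
fn routes_to_gateway(
    dst: Ipv4Addr,
    cidr_resources: &[(Ipv4Addr, u8)],
    internet_resource_active: bool,
) -> bool {
    cidr_resources
        .iter()
        .any(|&(net, prefix)| in_prefix(dst, net, prefix))
        || internet_resource_active
}
```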
|
|
bd49298240 |
build(deps): Bump tokio from 1.38.0 to 1.39.2 in /rust (#6082)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.38.0 to 1.39.2. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/tokio-rs/tokio/releases">tokio's releases</a>.</em></p> <blockquote> <h2>Tokio v1.39.2</h2> <h1>1.39.2 (July 27th, 2024)</h1> <p>This release fixes a regression where the <code>select!</code> macro stopped accepting expressions that make use of temporary lifetime extension. (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6722">#6722</a>)</p> <p><a href="https://redirect.github.com/tokio-rs/tokio/issues/6722">#6722</a>: <a href="https://redirect.github.com/tokio-rs/tokio/pull/6722">tokio-rs/tokio#6722</a></p> <h2>Tokio v1.39.1</h2> <h1>1.39.1 (July 23rd, 2024)</h1> <p>This release reverts "time: avoid traversing entries in the time wheel twice" because it contains a bug. (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6715">#6715</a>)</p> <p><a href="https://redirect.github.com/tokio-rs/tokio/issues/6715">#6715</a>: <a href="https://redirect.github.com/tokio-rs/tokio/pull/6715">tokio-rs/tokio#6715</a></p> <h2>Tokio v1.39.0</h2> <h1>1.39.0 (July 23rd, 2024)</h1> <ul> <li>This release bumps the MSRV to 1.70. (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6645">#6645</a>)</li> <li>This release upgrades to mio v1. 
(<a href="https://redirect.github.com/tokio-rs/tokio/issues/6635">#6635</a>)</li> <li>This release upgrades to windows-sys v0.52 (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6154">#6154</a>)</li> </ul> <h3>Added</h3> <ul> <li>io: implement <code>AsyncSeek</code> for <code>Empty</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6663">#6663</a>)</li> <li>metrics: stabilize <code>num_alive_tasks</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6619">#6619</a>, <a href="https://redirect.github.com/tokio-rs/tokio/issues/6667">#6667</a>)</li> <li>process: add <code>Command::as_std_mut</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6608">#6608</a>)</li> <li>sync: add <code>watch::Sender::same_channel</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6637">#6637</a>)</li> <li>sync: add <code>{Receiver,UnboundedReceiver}::{sender_strong_count,sender_weak_count}</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6661">#6661</a>)</li> <li>sync: implement <code>Default</code> for <code>watch::Sender</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6626">#6626</a>)</li> <li>task: implement <code>Clone</code> for <code>AbortHandle</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6621">#6621</a>)</li> <li>task: stabilize <code>consume_budget</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6622">#6622</a>)</li> </ul> <h3>Changed</h3> <ul> <li>io: improve panic message of <code>ReadBuf::put_slice()</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6629">#6629</a>)</li> <li>io: read during write in <code>copy_bidirectional</code> and <code>copy</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6532">#6532</a>)</li> <li>runtime: replace <code>num_cpus</code> with <code>available_parallelism</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6709">#6709</a>)</li> <li>task: avoid 
stack overflow when passing large future to <code>block_on</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6692">#6692</a>)</li> <li>time: avoid traversing entries in the time wheel twice (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6584">#6584</a>)</li> <li>time: support <code>IntoFuture</code> with <code>timeout</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6666">#6666</a>)</li> <li>macros: support <code>IntoFuture</code> with <code>join!</code> and <code>select!</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6710">#6710</a>)</li> </ul> <h3>Fixed</h3> <ul> <li>docs: fix docsrs builds with the fs feature enabled (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6585">#6585</a>)</li> <li>io: only use short-read optimization on known-to-be-compatible platforms (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6668">#6668</a>)</li> <li>time: fix overflow panic when using large durations with <code>Interval</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6612">#6612</a>)</li> </ul> <h3>Added (unstable)</h3> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href=" |
||
|
|
c3a45f53df |
fix(connlib): prevent routing loops on windows (#6032)
In `connlib`, traffic is sent through sockets via one of three ways: 1. Direct p2p traffic between clients and gateways: For these, we always explicitly set the source IP (and thus interface). 2. UDP traffic to the relays: For these, we let the OS pick an appropriate source interface. 3. WebSocket traffic over TCP to the portal: For this too, we let the OS pick the source interface. For (2) and (3), it is possible to run into routing loops, depending on the routes that we have configured on the TUN device. On Linux, we can prevent routing loops by marking a socket [0] and repeating the mark when we add routes [1]. Packets sent via a marked socket won't be routed by a rule that contains this mark. On Android, we can do something similar by "protecting" a socket via a syscall on the Java side [2]. On Windows, routing works slightly differently. There, the source interface is determined based on a computed metric [3] [4]. To prevent routing loops on Windows, we thus need to find the "next best" interface after our TUN interface. We can achieve this with a combination of several syscalls: 1. List all interfaces on the machine. 2. Ask Windows for the best route on each interface, except our TUN interface. 3. Sort by Windows' routing metric and pick the lowest one (lower is better). Thanks to the `SocketFactory` abstraction that we previously introduced, integrating this into `connlib` isn't too difficult: 1. For TCP sockets, we simply resolve the best route after creating the socket and then bind it to that local interface. That way, all packets will always go via that interface, regardless of which routes are present on our TUN interface. 2. UDP is connection-less, so we need to decide per packet which interface to use. "Pick the best interface for me" is modelled in `connlib` via the `DatagramOut::src` field being `None`. - To ensure those packets don't cause a routing loop, we introduce a "source IP resolver" for our `UdpSocket`. This function gets called every time we need to send a packet without a source IP. - For improved performance, we cache these results. The Windows client uses this source IP resolver to apply the above strategy and find a suitable source IP. - In case the source IP resolution fails, we don't send the packet. This is important; otherwise, the kernel might choose our TUN interface again and trigger a routing loop. The last remark to make here is that this also works for connection roaming. The TCP socket gets thrown away when we reconnect to the portal. Thus, the new socket will pick the new best interface as it is re-created. The UDP sockets also get thrown away as part of roaming. That clears the above cache, which is what we want: upon roaming, the best interface for a given destination IP will likely have changed. [0]: |
||
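The "source IP resolver" with caching described above can be sketched in a few lines. The names and signatures here are illustrative, not connlib's API; the injected closure stands in for the platform lookup (on Windows: enumerate interfaces, query the best route on each except the TUN interface, pick the lowest metric):

```rust
use std::collections::HashMap;
use std::net::IpAddr;

/// Caching resolver for "which source IP should this destination use?".
struct SourceIpResolver<F: Fn(IpAddr) -> Option<IpAddr>> {
    resolve: F,
    cache: HashMap<IpAddr, IpAddr>,
}

impl<F: Fn(IpAddr) -> Option<IpAddr>> SourceIpResolver<F> {
    fn new(resolve: F) -> Self {
        Self { resolve, cache: HashMap::new() }
    }

    /// Returns `None` if no source can be resolved; the caller must then
    /// drop the packet instead of letting the kernel pick the TUN device
    /// (which would re-create the routing loop).
    fn source_for(&mut self, dst: IpAddr) -> Option<IpAddr> {
        if let Some(src) = self.cache.get(&dst) {
            return Some(*src);
        }
        let src = (self.resolve)(dst)?;
        self.cache.insert(dst, src);
        Some(src)
    }

    /// Called when sockets are re-created on roaming: the best interface
    /// for a given destination has likely changed, so forget everything.
    fn reset(&mut self) {
        self.cache.clear();
    }
}
```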
|
|
6862213cc2 |
fix(headless-client/linux): only notify systemd that we're up after Resources are available (#6026)
Closes #5912 Before this, I had the `--exit` CLI flag and the `sd_notify` call hanging off the wrong callback. |
||
|
|
82b8de4c9c |
refactor(client/windows): de-dupe wintun.dll (#6020)
Closes #5977 Refactored some other stuff to make this work. Also removed a redundant impl of `ensure_dll` in a benchmark. |
||
|
|
7be47f2c6e |
build(deps): Bump url from 2.5.0 to 2.5.2 in /rust (#6002)
Bumps [url](https://github.com/servo/rust-url) from 2.5.0 to 2.5.2. <details> <summary>Commits</summary> <ul> <li><a href=" |
||
|
|
6d09344521 |
build(deps): Bump uuid from 1.8.0 to 1.10.0 in /rust (#6005)
Bumps [uuid](https://github.com/uuid-rs/uuid) from 1.8.0 to 1.10.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/uuid-rs/uuid/releases">uuid's releases</a>.</em></p> <blockquote> <h2>1.10.0</h2> <h2>Deprecations</h2> <p>This release deprecates and renames the following functions:</p> <ul> <li><code>Builder::from_rfc4122_timestamp</code> -> <code>Builder::from_gregorian_timestamp</code></li> <li><code>Builder::from_sorted_rfc4122_timestamp</code> -> <code>Builder::from_sorted_gregorian_timestamp</code></li> <li><code>Timestamp::from_rfc4122</code> -> <code>Timestamp::from_gregorian</code></li> <li><code>Timestamp::to_rfc4122</code> -> <code>Timestamp::to_gregorian</code></li> </ul> <h2>What's Changed</h2> <ul> <li>Use const identifier in uuid macro by <a href="https://github.com/Vrajs16"><code>@Vrajs16</code></a> in <a href="https://redirect.github.com/uuid-rs/uuid/pull/764">uuid-rs/uuid#764</a></li> <li>Rename most methods referring to RFC4122 by <a href="https://github.com/Mikopet"><code>@Mikopet</code></a> / <a href="https://github.com/KodrAus"><code>@KodrAus</code></a> in <a href="https://redirect.github.com/uuid-rs/uuid/pull/765">uuid-rs/uuid#765</a></li> <li>prepare for 1.10.0 release by <a href="https://github.com/KodrAus"><code>@KodrAus</code></a> in <a href="https://redirect.github.com/uuid-rs/uuid/pull/766">uuid-rs/uuid#766</a></li> </ul> <h2>New Contributors</h2> <ul> <li><a href="https://github.com/Vrajs16"><code>@Vrajs16</code></a> made their first contribution in <a href="https://redirect.github.com/uuid-rs/uuid/pull/764">uuid-rs/uuid#764</a></li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/uuid-rs/uuid/compare/1.9.1...1.10.0">https://github.com/uuid-rs/uuid/compare/1.9.1...1.10.0</a></p> <h2>1.9.1</h2> <h2>What's Changed</h2> <ul> <li>Add an example of generating bulk v7 UUIDs by <a href="https://github.com/KodrAus"><code>@KodrAus</code></a> in <a 
href="https://redirect.github.com/uuid-rs/uuid/pull/761">uuid-rs/uuid#761</a></li> <li>Avoid taking the shared lock when getting usable bits in Uuid::now_v7 by <a href="https://github.com/KodrAus"><code>@KodrAus</code></a> in <a href="https://redirect.github.com/uuid-rs/uuid/pull/762">uuid-rs/uuid#762</a></li> <li>Prepare for 1.9.1 release by <a href="https://github.com/KodrAus"><code>@KodrAus</code></a> in <a href="https://redirect.github.com/uuid-rs/uuid/pull/763">uuid-rs/uuid#763</a></li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/uuid-rs/uuid/compare/1.9.0...1.9.1">https://github.com/uuid-rs/uuid/compare/1.9.0...1.9.1</a></p> <h2>1.9.0</h2> <h2><code>Uuid::now_v7()</code> is guaranteed to be monotonic</h2> <p>Before this release, <code>Uuid::now_v7()</code> would only use the millisecond-precision timestamp for ordering. It now also uses a global 42-bit counter that's re-initialized each millisecond so that the following will always pass:</p> <pre lang="rust"><code>let a = Uuid::now_v7(); let b = Uuid::now_v7(); <p>assert!(a < b);<br /> </code></pre></p> <h2>What's Changed</h2> <ul> <li>Add a get_node_id method for v1 and v6 UUIDs by <a href="https://github.com/KodrAus"><code>@KodrAus</code></a> in <a href="https://redirect.github.com/uuid-rs/uuid/pull/748">uuid-rs/uuid#748</a></li> <li>Update atomic and zerocopy to latest by <a href="https://github.com/KodrAus"><code>@KodrAus</code></a> in <a href="https://redirect.github.com/uuid-rs/uuid/pull/750">uuid-rs/uuid#750</a></li> <li>Add repository field to uuid-macro-internal crate by <a href="https://github.com/paolobarbolini"><code>@paolobarbolini</code></a> in <a href="https://redirect.github.com/uuid-rs/uuid/pull/752">uuid-rs/uuid#752</a></li> <li>update docs to updated RFC (from 4122 to 9562) by <a href="https://github.com/Mikopet"><code>@Mikopet</code></a> in <a href="https://redirect.github.com/uuid-rs/uuid/pull/753">uuid-rs/uuid#753</a></li> <li>Support counters in v7 UUIDs by <a 
href="https://github.com/KodrAus"><code>@KodrAus</code></a> in <a href="https://redirect.github.com/uuid-rs/uuid/pull/755">uuid-rs/uuid#755</a></li> </ul> <h2>New Contributors</h2> <ul> <li><a href="https://github.com/paolobarbolini"><code>@paolobarbolini</code></a> made their first contribution in <a href="https://redirect.github.com/uuid-rs/uuid/pull/752">uuid-rs/uuid#752</a></li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href=" |
||
|
|
50d6b865a1 |
refactor(connlib): move Tun implementations out of firezone-tunnel (#5903)
The different implementations of `Tun` are the last platform-specific code within `firezone-tunnel`. By introducing a dedicated crate and a `Tun` trait, we can move this code into (platform-specific) leaf crates: - `connlib-client-android` - `connlib-client-apple` - `firezone-bin-shared` Related: #4473. --------- Co-authored-by: Not Applicable <ReactorScram@users.noreply.github.com> |
||
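The commit above argues that all `firezone-tunnel` needs from a TUN device is reading and writing, so the platform-specific constructors can live in leaf crates behind a trait. A minimal sketch of such a trait, with an in-memory implementation of the kind a leaf crate could provide (hypothetical signatures, not the repo's actual trait):

```rust
use std::collections::VecDeque;
use std::io;

/// Platform-abstract TUN device: the tunnel only reads and writes IP packets.
trait Tun {
    /// Writes a single IP packet to the device.
    fn write(&mut self, packet: &[u8]) -> io::Result<usize>;
    /// Reads the next IP packet into `buf`, returning its length.
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize>;
    fn name(&self) -> &str;
}

/// In-memory implementation, useful for tests; real impls would wrap a
/// file descriptor (Linux/Android/Apple) or a Wintun session (Windows).
struct LoopbackTun {
    queue: VecDeque<Vec<u8>>,
}

impl Tun for LoopbackTun {
    fn write(&mut self, packet: &[u8]) -> io::Result<usize> {
        self.queue.push_back(packet.to_vec());
        Ok(packet.len())
    }
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        match self.queue.pop_front() {
            Some(p) => {
                let n = p.len().min(buf.len());
                buf[..n].copy_from_slice(&p[..n]);
                Ok(n)
            }
            None => Err(io::Error::new(io::ErrorKind::WouldBlock, "no packet queued")),
        }
    }
    fn name(&self) -> &str {
        "tun-loopback"
    }
}
```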
|
|
45879ba481 |
chore(connlib): shorter formatting for Debug impls of IDs (#5946)
We almost never `Debug`-print our IDs, except in the proptests where the test runner prints them. To allow for better use of full-text search, apply the same formatting that we have for the `Display` output to the `Debug` output as well. |
||
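Forwarding `Debug` to `Display` is a standard Rust pattern; a sketch with a hypothetical ID newtype (the concrete format string is illustrative, not the one connlib uses):

```rust
use std::fmt;

/// Stand-in for one of connlib's ID newtypes.
struct ClientId(u64);

impl fmt::Display for ClientId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "C-{:X}", self.0)
    }
}

// Forward Debug to Display so `{:?}` output (e.g. from a proptest runner)
// matches what full-text search would find in regular logs.
impl fmt::Debug for ClientId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Display::fmt(self, f)
    }
}
```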
|
|
5268756b60 |
feat(connlib): add placeholder for Internet Resource (#5900)
In preparation for #2667, we add an `internet` variant to our list of possible resource types. This is backwards-compatible with existing clients and ensures that, once the portal starts sending Internet resources to clients, they won't fail to deserialise these messages. The portal will have a version check to not send this to older clients anyway but the sooner we can land this, the better. It simplifies the initial development as we start preparing for the next client release. Adding new fields to a JSON message is always backwards-compatible so we can extend this later with whatever we need. |
||
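The backwards-compatibility argument can be illustrated without serde: adding a new tag to a tagged-enum parser changes nothing for messages carrying the existing tags, while clients that know the new tag stop rejecting it. This hand-rolled tag match is a hypothetical stand-in for connlib's serde-tagged JSON messages:

```rust
#[derive(Debug, PartialEq, Eq)]
enum ResourceKind {
    Dns,
    Cidr,
    /// Placeholder variant; old messages never carry this tag, so adding
    /// it cannot break their deserialisation.
    Internet,
}

fn parse_kind(tag: &str) -> Option<ResourceKind> {
    match tag {
        "dns" => Some(ResourceKind::Dns),
        "cidr" => Some(ResourceKind::Cidr),
        "internet" => Some(ResourceKind::Internet),
        // An unknown tag fails to parse, which is why landing the
        // placeholder before the portal sends it matters.
        _ => None,
    }
}
```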
|
|
4f4134b000 |
test(connlib): model gateway <> site <> resource relationship (#5871)
Currently, the relationship between gateways, sites and resources is modeled in an ad-hoc fashion within `tunnel_test`. The correct relationship is: - The portal knows about all sites. - A resource can only be added for an existing site. - One or more gateways belong to a single site. To express this relationship in `tunnel_test`, we first sample between 1 and 3 sites. Then we sample between 1 and 3 gateways and assign them a site each. When adding new resources, we sample a site that the resource belongs to. Upon a connection intent, we sample a gateway from all gateways that belong to the site that the resource is defined in. In addition, this patch-set removes multi-site resources from the `tunnel_test`. As far as connlib's routing logic is concerned, we route packets to a resource on a selected gateway. How the portal selected the site of the gateway doesn't matter to connlib and thus doesn't need to be covered in these tests. |
||
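The sampled relationship (sites own gateways; each resource belongs to one site; connection intents pick among that site's gateways) can be sketched with plain types. These are hypothetical stand-ins for the `tunnel_test` model, not the real code:

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct SiteId(u8);
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct GatewayId(u8);

/// One or more gateways belong to a single site.
struct Gateway {
    id: GatewayId,
    site: SiteId,
}

/// A resource can only be added for an existing site.
struct Resource {
    site: SiteId,
}

/// Upon a connection intent, only gateways in the resource's site are
/// eligible to be sampled.
fn eligible_gateways(gateways: &[Gateway], resource: &Resource) -> Vec<GatewayId> {
    gateways
        .iter()
        .filter(|g| g.site == resource.site)
        .map(|g| g.id)
        .collect()
}
```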
|
|
7e963f74ca |
chore(connlib): performance improvement for picking cidr resources (#5891)
Extracted from #5840. Some cleanup on generating IPs, and improved performance of picking a host within an IP range by doing some math instead of iterating through the IP range. |
||
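Picking the n-th host of a network arithmetically means one integer addition instead of advancing an iterator n times. A minimal IPv4 sketch of the idea (illustrative, not the PR's code):

```rust
use std::net::Ipv4Addr;

/// Returns the `idx`-th address inside `network/prefix` in O(1),
/// or `None` if `idx` is outside the range.
fn nth_host(network: Ipv4Addr, prefix: u8, idx: u32) -> Option<Ipv4Addr> {
    let host_bits = 32u32 - u32::from(prefix);
    // Number of addresses in the range is 2^host_bits; compute in u64
    // so a /0 network doesn't overflow.
    if u64::from(idx) >= 1u64 << host_bits {
        return None;
    }
    Some(Ipv4Addr::from(u32::from(network) + idx))
}
```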
|
|
14abda01fd |
refactor(connlib): polish DNS resource matching (#5866)
In preparation for implementing #5056, I familiarized myself with the current code and ended up implementing a couple of refactorings. |
||
|
|
a4a8221b8b |
refactor(connlib): explicitly initialise Tun (#5839)
Connlib's routing logic and networking code is entirely platform-agnostic. The only platform-specific bit is how we interact with the TUN device. From connlib's perspective though, all it needs is an interface for reading and writing. How the device gets initialised and updated is the client's business. For the most part, this is the same on all platforms: We call callbacks and the client updates the state accordingly. The only annoying bit here is that Android recreates the TUN interface on every update and thus our old file descriptor is invalid. The current design works around this by returning the new file descriptor on Android. This is a problematic design for several reasons: - It forces the callback handler to finish synchronously, halting connlib until it completes. - The synchronous nature also means we cannot replace the callbacks with events as events don't have a return value. To fix this, we introduce a new `set_tun` method on `Tunnel`. This moves the business of how the `Tun` device is created up to the client. The clients are already platform-specific so this makes sense. In a future iteration, we can move all the various `Tun` implementations all the way up to the client-specific crates, thus co-locating the platform-specific code. Initialising `Tun` from the outside surfaces another issue: The routes are still set via the `Tun` handle on Windows. To fix this, we introduce a `make_tun` function on `TunDeviceManager` so that it can remember the interface index on Windows and the setting of routes can move to `TunDeviceManager`. This simplifies several of connlib's APIs which are now infallible. Resolves: #4473. --------- Co-authored-by: Reactor Scram <ReactorScram@users.noreply.github.com> Co-authored-by: conectado <gabrielalejandro7@gmail.com> |
||
|
|
c92dd559f7 |
chore(rust): format Cargo.toml using cargo-sort (#5851)
|
||
|
|
d95193be7d |
test(connlib): introduce dynamic number of gateways to tunnel_test (#5823)
Currently, `tunnel_test` exercises a lot of code paths within connlib already by adding & removing resources, roaming the client and sending ICMP packets. Yet, it does all of this with just a single gateway whereas in production, we are very likely using more than one gateway. To capture these other code paths, we now sample between 1 and 3 gateways and randomly assign the added resources to one of them, which makes us hit the code paths that select between different gateways. Most importantly, the reference implementation has barely any knowledge about those individual connections. Instead, it is implemented in terms of connectivity to resources. |
||
|
|
960ce80680 |
refactor(connlib): move TunDeviceManager into firezone-bin-shared (#5843)
The `TunDeviceManager` is a component that the leaf-nodes of our dependency tree need: the binaries. Thus, it is misplaced in the `connlib-shared` crate which is at the very bottom of the dependency tree. This is necessary to allow the `TunDeviceManager` to actually construct a `Tun` (which currently lives in `firezone-tunnel`). Related: #5839. --------- Signed-off-by: Thomas Eizinger <thomas@eizinger.io> Co-authored-by: Reactor Scram <ReactorScram@users.noreply.github.com> |
||
|
|
2013d6a2bf |
chore(connlib): improve logging (#5836)
Currently, the logging of fields in spans for encapsulate and decapsulate operations is a bit inconsistent between client and gateway. Logging the `from` field for every message is actually quite redundant because most of these logs are emitted within `snownet`'s `Allocation` which can add its own span to indicate which relay we are talking to. For most other operations, it is much more useful to log the connection ID instead of IPs. This should make the logs a bit more succinct. |
||
|
|
08182913a5 |
refactor(connlib): remove CidrV4 and CidrV6 types from callbacks (#5842)
These are only necessary for the Android and Apple client. Other clients should not need to bother with these custom types. Required-for: #5843. |
||
|
|
f39a57fa50 |
refactor(connlib): remove cyclic From impls (#5837)
We have several representations of `ResourceDescription` within connlib. The ones within the `callbacks` module are meant for _presentation_ to the clients and thus contain additional information like the site status. The `From` impls deleted within the PR are only used within tests. We can rewrite those tests by asserting on the presented data instead. This is better because it means information about resources only flows in one direction: From connlib to the clients. |
||
|
|
00a3940717 |
chore(rust): introduce tokio workspace dependency (#5821)
We are referencing the `tokio` dependency a lot and it makes sense to ensure that version is tracked only once across the whole workspace. Extracted out of #5797. --------- Co-authored-by: Not Applicable <ReactorScram@users.noreply.github.com> |
||
|
|
78f1c7c519 |
test(firezone-tunnel/windows): Test Windows upload speed in CI (#5607)
Closes #5601 It looks like we can hit 100+ Mbps in theory. This covers Wintun, Tokio, and Windows OS overhead. It doesn't cover the cryptography or anything in connlib itself. The code is kinda messy but I'm not sure how to clean it up so I'll just leave it for review. This test should fail if there are any regressions in #5598. It fails if any packet is dropped or if the speed is under 100 Mbps ```[tasklist] ### Tasks - [x] Use `ip_packet::make` - [x] Switch to `cargo bench` - [x] Extract windows ARM PR - [x] Clean up wintun.dll install code - [x] Re-request review ``` |
||
|
|
0e6ac2040c |
test(connlib): use two relays in tunnel_test (#5804)
With the introduction of a routing table in #5786, we can very easily introduce an additional relay to `tunnel_test`. In production, we are always given two relays and thus, this mimics the production setup more closely. |
||
|
|
d15c43b6f2 |
test(connlib): render IDs as hex u128 (#5803)
This is a bit of a hack because features should never change behaviour. Unfortunately, we can't use `cfg(test)` here because the proptests live in a different crate and thus for the tests, we import the crate using `cfg(not(test))`. Our `proptest` feature is really only meant to be activated during testing so I think this is fine for now. The benefit is that the test logs are much more terse because proptest will shrink the IDs to `0`, `1` etc. With the upcoming addition of multiple gateways and multiple relays, we will have a lot more IDs in the logs. Thus, it is important that they stay legible. |
||
|
|
9caca475dc |
test(connlib): introduce routing table to tunnel_test (#5786)
Currently, `tunnel_test` uses a rather naive approach when dispatching `Transmit`s. In particular, it checks client, gateway and relay separately whether they "want" a certain packet. In a real network, these packets are routed based on their IP. To mimic something similar, we introduce a `Host` abstraction that wraps each component: client, gateway and relay. Additionally, we introduce a `RoutingTable` where we can add and remove hosts. With these things in place, routing a `Transmit` is as easy as looking up the destination IP in the routing table and dispatching to the corresponding host. Our hosts are type-safe: client, gateway and relay have different types. Thus, we abstract over them using a `HostId` in order to know which host a certain message is for. Following these patches, we can easily introduce multiple gateways and relays to this test by simply making more entries in this routing table. This will increase the test coverage of connlib. Lastly, this patch massively increases the performance of `tunnel_test`. It turns out that previously, we spent a lot of CPU cycles accessing "random" IPs from very large iterators. With this patch, we take a limited range of 100 IPs that we sample from, thus drastically increasing performance of this test. The configured 1000 testcases execute in 3s on my machine now (with opt-level 1 which is what we use in CI). --------- Signed-off-by: Thomas Eizinger <thomas@eizinger.io> |
||
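The `RoutingTable`/`HostId` idea above is easy to sketch: hosts register their IPs, and dispatching a `Transmit` is a plain destination lookup. Types and names here are hypothetical stand-ins for the test harness:

```rust
use std::collections::HashMap;
use std::net::IpAddr;

/// Type-safe handle for the three kinds of hosts in the test network.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum HostId {
    Client(u8),
    Gateway(u8),
    Relay(u8),
}

#[derive(Default)]
struct RoutingTable {
    routes: HashMap<IpAddr, HostId>,
}

impl RoutingTable {
    fn add_host(&mut self, id: HostId, ips: &[IpAddr]) {
        for ip in ips {
            self.routes.insert(*ip, id);
        }
    }

    fn remove_host(&mut self, id: HostId) {
        self.routes.retain(|_, h| *h != id);
    }

    /// Routing a `Transmit` is just a lookup on its destination IP.
    fn host_for(&self, dst: IpAddr) -> Option<HostId> {
        self.routes.get(&dst).copied()
    }
}
```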
|
|
f6e99752ec |
fix(client): flush the OS' DNS cache whenever resources change (#5700)
Closes #5052 On my dev VMs: - systemd-resolved = 15 ms to flush - Windows = 600 ms to flush I tested with the headless Clients on Linux and Windows and it fixes the issue. On Windows I didn't replicate the issue with the GUI Client, on Linux this patch also fixes it for the GUI Client. |
||
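For reference, the usual cache-flush commands differ per platform; connlib's actual mechanism may differ (e.g. talking to systemd-resolved directly rather than shelling out). This mapping is a sketch of well-known commands, not the client's implementation:

```rust
/// Maps an OS name to a known DNS-cache-flush command and its argument.
fn dns_flush_command(os: &str) -> Option<(&'static str, &'static str)> {
    match os {
        "linux" => Some(("resolvectl", "flush-caches")), // systemd-resolved
        "windows" => Some(("ipconfig", "/flushdns")),
        "macos" => Some(("dscacheutil", "-flushcache")),
        _ => None,
    }
}
```

A caller could run the result via `std::process::Command::new(cmd).arg(arg).status()`.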
|
|
8973cc5785 |
refactor(android): use fmt::Layer with custom writer (#5558)
Currently, the logs that go to logcat on Android are pretty badly
formatted because we use `tracing-android` and it formats the span
fields and message fields itself. There is actually no reason for doing
the formatting ourselves. Instead, we can use the `MakeWriter`
abstraction from `tracing_subscriber` to plug in a custom writer that
writes to Android's logcat.
This results in logs like this:
```
[nix-shell:~/src/github.com/firezone/firezone/rust]$ adb logcat -s connlib
--------- beginning of main
06-28 19:41:20.057 19955 20213 D connlib : phoenix_channel: Connecting to portal host=api.firez.one user_agent=Android/14 5.15.137-android14-11-gbf4f9bc41c3b-ab11664771 connlib/1.1.1
06-28 19:41:20.058 19955 20213 I connlib : firezone_tunnel::client: Network change detected
06-28 19:41:20.061 19955 20213 D connlib : snownet::node: Closed all connections as part of reconnecting num_connections=0
06-28 19:41:20.365 19955 20213 I connlib : phoenix_channel: Connected to portal host=api.firez.one
06-28 19:41:20.601 19955 20213 I connlib : firezone_tunnel::io: Setting new DNS resolvers
06-28 19:41:21.031 19955 20213 D connlib : firezone_tunnel::client: TUN device initialized ip4=100.66.86.233 ip6=fd00:2021:1111::f:d9c1 name=tun1
06-28 19:41:21.031 19955 20213 I connlib : connlib_client_shared::eventloop: Firezone Started!
06-28 19:41:21.031 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=*.slackb.com
06-28 19:41:21.031 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=*.test-ipv6.com
06-28 19:41:21.032 19955 20213 I connlib : firezone_tunnel::client: Activating CIDR resource address=5.4.6.7/32 name=5.4.6.7
06-28 19:41:21.032 19955 20213 I connlib : firezone_tunnel::client: Activating CIDR resource address=10.0.32.101/32 name=IPerf3
06-28 19:41:21.032 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=ifconfig.net
06-28 19:41:21.032 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=*.slack-imgs.com
06-28 19:41:21.032 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=*.google.com
06-28 19:41:21.032 19955 20213 I connlib : firezone_tunnel::client: Activating CIDR resource address=10.0.0.5/32 name=10.0.0.5
06-28 19:41:21.032 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=*.githubassets.com
06-28 19:41:21.032 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=dnsleaktest.com
06-28 19:41:21.033 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=*.slack-edge.com
06-28 19:41:21.033 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=*.github.com
06-28 19:41:21.033 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=speed.cloudflare.com
06-28 19:41:21.033 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=*.githubusercontent.com
06-28 19:41:21.033 19955 20213 I connlib : firezone_tunnel::client: Activating CIDR resource address=10.0.14.11/32 name=Staging resource performance
06-28 19:41:21.033 19955 20213 I connlib : firezone_tunnel::dns: Activating DNS resource address=*.whatismyip.com
06-28 19:41:21.033 19955 20213 I connlib : firezone_tunnel::client: Activating CIDR resource address=10.0.0.8/32 name=10.0.0.8
06-28 19:41:21.033 19955 20213 I connlib : firezone_tunnel::client: Activating CIDR resource address=9.9.9.9/32 name=Quad9 DNS
06-28 19:41:21.034 19955 20213 I connlib : firezone_tunnel::client: Activating CIDR resource address=10.0.32.10/32 name=CoreDNS
06-28 19:41:21.216 19955 20213 I connlib : snownet::node: Added new TURN server id=bd6e9d1a-4696-4f8b-8337-aab5d5cea810 address=Dual { v4: 35.197.171.113:3478, v6: [2600:1900:40b0:1504:0:27::]:3478 }
```
---------
Signed-off-by: Thomas Eizinger <thomas@eizinger.io>
|
||
|
|
8655b711db |
fix(connlib): Don't use operatingSystemVersionString on Apple OSes (#5628)
The [HTTP 1.1 RFC](https://datatracker.ietf.org/doc/html/rfc2616) states that HTTP headers should be US-ASCII. This is not the case when the macOS Client is run from a host that has a non-English language selected as its system default due to the way we build the user agent. This PR fixes that by normalizing how we build the user agent by more granularly selecting which fields compose it, and not just relying on OS-provided version strings that may contain non-ASCII characters. fixes https://github.com/firezone/firezone/issues/5467 --------- Signed-off-by: Jamil <jamilbk@users.noreply.github.com> |
||
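Building the user agent from individually selected fields and filtering to US-ASCII can be sketched as below. The field names and format are illustrative assumptions, not the Apple client's actual code:

```rust
/// HTTP header values must be US-ASCII; compose the user agent from known
/// fields and strip any non-ASCII or control characters that an
/// OS-provided version string might contain.
fn user_agent(os: &str, os_version: &str, app_version: &str) -> String {
    let ua = format!("{os}/{os_version} connlib/{app_version}");
    ua.chars()
        .filter(|c| c.is_ascii() && !c.is_ascii_control())
        .collect()
}
```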
|
|
6c842de83c |
refactor(connlib): don't re-initialise Tun on config updates (#5392)
Currently, connlib re-initialises the TUN device on Linux every time its configuration gets updated such as when roaming from one network to another. This is unnecessary. Instead, we can adopt the same approach as already used on macOS, iOS and Windows and only initialise it if it doesn't exist yet. Doing so surfaces an interesting bug. Currently, attempting to re-initialise the TUN device fails with a warning: > connlib_client_shared::eventloop: Failed to set interface on tunnel: Resource busy (os error 16) See https://github.com/firezone/firezone/actions/runs/9656570163/job/26634409346#step:7:103 for an example. As a consequence, we never actually trigger the `on_set_interface_config` callback and thus never actually set the new IPs on the TUN device. Now that we _are_ calling this callback, we execute `TunDeviceManager::set_ips` which first clears all IPs from the device and then attaches the new ones. A consequence of this is that the Linux kernel will clear all routes associated with the device. This clashes with an optimisation we have in `TunDeviceManager` where we remember the previously set routes and don't set new ones if they are the same. This `HashSet` needs to be cleared upon setting new IPs in order to actually set the new routes correctly afterwards. Without that, we stop receiving traffic on the TUN device. |
||
|
|
409039afde |
chore(connlib): improve error messages in TunDeviceManager (#5530)
|
||
|
|
bd989d4416 |
chore(connlib): improve logging for set_routes on Linux (#5529)
Logging the routes in the span and in an event creates duplicate information so we remove the former. Additionally, we add a debug log in case we short-circuit the function. |
||
|
|
eec0652abe |
chore(connlib): shrink "packet not allowed" log (#5476)
All allowed IPs can be a fair few which clutters the log. Remove the `HashSet` from the error and also remove the stuttering; the error already says "Packet not allowed". |
||
|
|
aea03a490c |
feat(connlib): clients make use of DNS mangling on gateways (#5049)
This PR is the "client-side" of things for #4994. Up until now, when a user wanted to connect to a DNS resource, we would establish a connection to the gateway and pass along the domain we are trying to access. The gateway would resolve that domain and send the response back to the client, allowing them to finally send a DNS response. Now, we instantly assign and respond with 4x A and 4x AAAA records to any query for one of our DNS resources. Upon the first IP packet for one of these "proxy IPs", we select a gateway, establish a connection and send our proxy IPs along. The gateway then performs the necessary mangling and NATing of all packets. See #5354 for details. Resolves: #4994. Resolves: #5491. --------- Co-authored-by: Thomas Eizinger <thomas@eizinger.io> |
||
|
|
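The "instantly respond with proxy IPs" behaviour above can be sketched as a small allocator. This is an illustrative model only; the type name, the 100.96.0.0 range, and answering with exactly four records are assumptions for the sketch, not connlib's real implementation.

```rust
use std::collections::HashMap;
use std::net::Ipv4Addr;

// Illustrative sketch: every queried domain of a DNS resource instantly
// gets four A records from an assumed reserved range, without waiting for
// a gateway connection.
struct ProxyIpAllocator {
    next: u32,
    by_domain: HashMap<String, Vec<Ipv4Addr>>,
}

impl ProxyIpAllocator {
    fn new() -> Self {
        Self { next: 0, by_domain: HashMap::new() }
    }

    /// Answers an A query for `domain` with four stable proxy IPs.
    fn a_records(&mut self, domain: &str) -> Vec<Ipv4Addr> {
        if !self.by_domain.contains_key(domain) {
            let base = u32::from(Ipv4Addr::new(100, 96, 0, 0)); // assumed proxy range
            let ips = (0..4)
                .map(|_| {
                    self.next += 1;
                    Ipv4Addr::from(base + self.next)
                })
                .collect();
            self.by_domain.insert(domain.to_string(), ips);
        }
        self.by_domain[domain].clone()
    }
}

fn main() {
    let mut alloc = ProxyIpAllocator::new();

    let first = alloc.a_records("app.example.com");
    assert_eq!(first.len(), 4);
    // Repeated queries see the same answers, so the application keeps
    // talking to stable addresses.
    assert_eq!(alloc.a_records("app.example.com"), first);
    // A different domain gets its own, distinct proxy IPs.
    assert!(alloc.a_records("db.example.com").iter().all(|ip| !first.contains(ip)));
}
```

On the first real IP packet to one of these proxy IPs, the client then picks a gateway and sends the assigned proxy IPs along, as described above.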
28378fe24e |
refactor(headless-client): remove FIREZONE_PACKAGE_VERSION (#5487)
Closes #5481 With this, I can connect to the staging portal without a build.rs or any extra env var setup <img width="387" alt="image" src="https://github.com/firezone/firezone/assets/13400041/9c080b36-3a76-49c7-b706-20723697edc7"> ```[tasklist] ### Next steps - [x] Split out a refactor PR for `ConnectArgs` (#5488) - [x] Try doing this for other Clients - [x] Check Gateway - [x] Check Tauri Client - [x] Change to `app_version` - [x] Open for review - [ ] Use `option_env` so that `FIREZONE_PACKAGE_VERSION` can still override the Cargo.toml version for local testing - [ ] Check Android Client - [ ] Check Apple Client ``` --------- Signed-off-by: Reactor Scram <ReactorScram@users.noreply.github.com> |
||
|
|
14785eba9f |
chore(connlib): tune logs around proxy IPs and DNS resources (#5439)
Adds and tunes some logs around creating, using, and disassociating proxy IPs for DNS resources. |
||
|
|
95f13c89c6 |
fix(connlib): don't treat pending connections as errors (#5433)
When a user sends the first packet to a resource, we generate a "connection intent" and consult the portal about which gateway to use for this resource. This process is throttled to only generate a new intent every 2s. Once we know which gateway to use for a certain resource, we initiate a connection via snownet. This involves an OFFER-ANSWER handshake with the gateway. A connection for which we have sent an offer and have not yet received an answer is what we call a "pending connection". In case the connection setup takes longer than 2s, we will generate another connection intent which can point to the same gateway that we are currently setting up a connection with. Currently, encountering a "pending connection" during another connection setup is treated as an error which results in some state being cleaned up / removed. This is where the bug surfaces: If we remove the state for a resource as a result of a 2nd connection intent and then receive the response of the first one, we will be left with no state that knows about this resource. We fix this by refactoring `create_or_reuse_connection` to be atomic with regard to its state changes: All checks that fail the function are moved to the top which means there is no state to clean up in case of an error. Additionally, we model the case of a "pending connection" using an `Option` to not flood the logs with "pending connection" warnings as those are expected during normal operation. Fixes: #5385 |
||
|
|
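The 2-second intent throttle described above can be sketched as a tiny state machine. The type and method names here are illustrative, not connlib's actual API:

```rust
use std::time::{Duration, Instant};

// Rough sketch of throttling connection intents to one per resource every
// 2 seconds. Names are assumptions for illustration.
struct IntentThrottle {
    last_intent: Option<Instant>,
    interval: Duration,
}

impl IntentThrottle {
    fn new() -> Self {
        Self { last_intent: None, interval: Duration::from_secs(2) }
    }

    /// Returns true if a new connection intent should be emitted now.
    fn should_send(&mut self, now: Instant) -> bool {
        match self.last_intent {
            // An intent went out recently: suppress this one.
            Some(t) if now.duration_since(t) < self.interval => false,
            // First intent, or the interval elapsed: send and remember.
            _ => {
                self.last_intent = Some(now);
                true
            }
        }
    }
}

fn main() {
    let mut throttle = IntentThrottle::new();
    let start = Instant::now();

    assert!(throttle.should_send(start)); // first intent goes out
    assert!(!throttle.should_send(start + Duration::from_secs(1))); // suppressed
    assert!(throttle.should_send(start + Duration::from_secs(3))); // interval elapsed
}
```

Note that even with the throttle, a slow handshake can outlast the interval, which is exactly how the duplicate intent for an already-pending connection arises.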
2ea6a5d07e |
feat(gateway): NAT & mangling for DNS resources (#5354)
As part of #4994, the IP translation and mangling of packets to and from DNS resources is moved to the gateway. This PR represents the "gateway-half" of the required changes. Eventually, the client will send a list of proxy IPs that it assigned for a certain DNS resource. The gateway assigns each proxy IP to a real IP and mangles outgoing and incoming traffic accordingly. There are a number of things that we need to take care of as part of that: - We need to implement NAT to correctly route traffic. Our NAT table maps from source port* and destination IP to an assigned port* and real IP. We say port* because that is only true for UDP and TCP. For ICMP, we use the identifier. - We need to translate between IPv4 and IPv6 in case a DNS resource e.g. only resolves to IPv6 addresses but the client gave out an IPv4 proxy address to the application. This translation was added in #5364 and is now being used here. This PR is backwards-compatible because currently, clients don't send any IPs to the gateway. No proxy IPs means we cannot do any translation and thus, packets are simply routed through as is which is what the current clients expect. --------- Co-authored-by: Thomas Eizinger <thomas@eizinger.io> |
||
|
|
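The NAT table described in the commit above, keyed by (source port, proxy destination IP) and yielding an (assigned port, real IP), can be sketched roughly as follows. This is a minimal illustrative model, not the gateway's actual data structure, and it ignores the ICMP-identifier case:

```rust
use std::collections::HashMap;
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};

// Minimal sketch of the NAT idea: outbound flows are assigned an outside
// port and the real resolved IP; inbound traffic reverses the mapping.
#[derive(Default)]
struct NatTable {
    outbound: HashMap<(u16, IpAddr), (u16, IpAddr)>, // (src port, proxy dst) -> (assigned port, real dst)
    inbound: HashMap<(u16, IpAddr), (u16, IpAddr)>,  // (assigned port, real src) -> (src port, proxy src)
    next_port: u16,
}

impl NatTable {
    fn translate_outgoing(&mut self, src_port: u16, proxy_dst: IpAddr, real_dst: IpAddr) -> (u16, IpAddr) {
        if let Some(mapped) = self.outbound.get(&(src_port, proxy_dst)) {
            return *mapped; // existing flow: reuse the mapping
        }
        self.next_port += 1;
        let assigned = (self.next_port, real_dst);
        self.outbound.insert((src_port, proxy_dst), assigned);
        self.inbound.insert((assigned.0, real_dst), (src_port, proxy_dst));
        assigned
    }

    fn translate_incoming(&self, dst_port: u16, real_src: IpAddr) -> Option<(u16, IpAddr)> {
        self.inbound.get(&(dst_port, real_src)).copied()
    }
}

fn main() {
    let mut nat = NatTable::default();
    let proxy: IpAddr = Ipv4Addr::new(100, 96, 0, 1).into(); // IPv4 proxy IP handed to the app
    let real: IpAddr = Ipv6Addr::new(0x2001, 0xdb8, 0, 0, 0, 0, 0, 1).into(); // resource is IPv6-only

    let (port, dst) = nat.translate_outgoing(54321, proxy, real);
    assert_eq!(dst, real); // 4-to-6 case: the packet leaves towards the real IPv6 address
    assert_eq!(nat.translate_incoming(port, real), Some((54321, proxy)));
    // Repeated packets of the same flow reuse the same mapping.
    assert_eq!(nat.translate_outgoing(54321, proxy, real), (port, dst));
}
```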
75faf25050 |
fix(connlib): accept null address_descriptions (#5366)
Co-authored-by: Jamil Bou Kheir <jamilbk@users.noreply.github.com> |
||
|
|
489a14a0ed |
test(connlib): directly sample from state instead of indexing (#5332)
Currently, we use `sample::Index` and `sample::Selector` to deterministically select parts of our state. Originally, this was done because I did not yet fully understand how `proptest-state-machine` works. The available transitions are always sampled from the current state, meaning we can directly use `sample::select` to pick an element like an IP address from a list. This has several advantages: - The transitions are more readable when debug-printed because they now contain the actual data that is being used. - I _think_ this results in better shrinking because `sample::select` will perform a binary search for the problematic value. - We can more easily implement transitions that _remove_ state. Currently, we cannot remove things from the `ReferenceState` because the system-under-test would also have to index into the `ReferenceState` as part of executing its transition. By directly embedding all necessary information in the transition, this is much simpler. |
||
|
|
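The readability argument above can be seen without any proptest machinery: an index-based transition is opaque on its own, while a value-based transition embeds the sampled data. All names in this sketch are illustrative, not the actual `tunnel_test` types:

```rust
use std::net::{IpAddr, Ipv4Addr};

// Index-based transition: only meaningful next to the reference state.
#[derive(Debug)]
enum TransitionByIndex {
    SendPacketTo { resource_index: usize },
}

// Value-based transition: self-describing when debug-printed, and removing
// an element from the reference state no longer invalidates it.
#[derive(Debug)]
enum TransitionByValue {
    SendPacketTo { dst: IpAddr },
}

fn main() {
    let resources = [
        IpAddr::V4(Ipv4Addr::new(10, 0, 0, 1)),
        IpAddr::V4(Ipv4Addr::new(10, 0, 0, 2)),
    ];

    let by_index = TransitionByIndex::SendPacketTo { resource_index: 1 };
    let by_value = TransitionByValue::SendPacketTo { dst: resources[1] };

    // The value-based transition's debug output shows the actual address...
    assert!(format!("{:?}", by_value).contains("10.0.0.2"));
    // ...while the index-based one requires cross-referencing the state.
    assert!(!format!("{:?}", by_index).contains("10.0.0.2"));
}
```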
a3f15ebf60 |
build(deps): Bump itertools from 0.12.1 to 0.13.0 in /rust (#5289)
Bumps [itertools](https://github.com/rust-itertools/itertools) from 0.12.1 to 0.13.0. <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/rust-itertools/itertools/blob/master/CHANGELOG.md">itertools's changelog</a>.</em></p> <blockquote> <h2>0.13.0</h2> <h3>Breaking</h3> <ul> <li>Removed implementation of <code>DoubleEndedIterator</code> for <code>ConsTuples</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/853">#853</a>)</li> <li>Made <code>MultiProduct</code> fused and fixed on an empty iterator (<a href="https://redirect.github.com/rust-itertools/itertools/issues/835">#835</a>, <a href="https://redirect.github.com/rust-itertools/itertools/issues/834">#834</a>)</li> <li>Changed <code>iproduct!</code> to return tuples for maxi one iterator too (<a href="https://redirect.github.com/rust-itertools/itertools/issues/870">#870</a>)</li> <li>Changed <code>PutBack::put_back</code> to return the old value (<a href="https://redirect.github.com/rust-itertools/itertools/issues/880">#880</a>)</li> <li>Removed deprecated <code>repeat_call, Itertools::{foreach, step, map_results, fold_results}</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/878">#878</a>)</li> <li>Removed <code>TakeWhileInclusive::new</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/912">#912</a>)</li> </ul> <h3>Added</h3> <ul> <li>Added <code>Itertools::{smallest_by, smallest_by_key, largest, largest_by, largest_by_key}</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/654">#654</a>, <a href="https://redirect.github.com/rust-itertools/itertools/issues/885">#885</a>)</li> <li>Added <code>Itertools::tail</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/899">#899</a>)</li> <li>Implemented <code>DoubleEndedIterator</code> for <code>ProcessResults</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/910">#910</a>)</li> 
<li>Implemented <code>Debug</code> for <code>FormatWith</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/931">#931</a>)</li> <li>Added <code>Itertools::get</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/891">#891</a>)</li> </ul> <h3>Changed</h3> <ul> <li>Deprecated <code>Itertools::group_by</code> (renamed <code>chunk_by</code>) (<a href="https://redirect.github.com/rust-itertools/itertools/issues/866">#866</a>, <a href="https://redirect.github.com/rust-itertools/itertools/issues/879">#879</a>)</li> <li>Deprecated <code>unfold</code> (use <code>std::iter::from_fn</code> instead) (<a href="https://redirect.github.com/rust-itertools/itertools/issues/871">#871</a>)</li> <li>Optimized <code>GroupingMapBy</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/873">#873</a>, <a href="https://redirect.github.com/rust-itertools/itertools/issues/876">#876</a>)</li> <li>Relaxed <code>Fn</code> bounds to <code>FnMut</code> in <code>diff_with, Itertools::into_group_map_by</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/886">#886</a>)</li> <li>Relaxed <code>Debug/Clone</code> bounds for <code>MapInto</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/889">#889</a>)</li> <li>Documented the <code>use_alloc</code> feature (<a href="https://redirect.github.com/rust-itertools/itertools/issues/887">#887</a>)</li> <li>Optimized <code>Itertools::set_from</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/888">#888</a>)</li> <li>Removed badges in <code>README.md</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/890">#890</a>)</li> <li>Added "no-std" categories in <code>Cargo.toml</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/894">#894</a>)</li> <li>Fixed <code>Itertools::k_smallest</code> on short unfused iterators (<a 
href="https://redirect.github.com/rust-itertools/itertools/issues/900">#900</a>)</li> <li>Deprecated <code>Itertools::tree_fold1</code> (renamed <code>tree_reduce</code>) (<a href="https://redirect.github.com/rust-itertools/itertools/issues/895">#895</a>)</li> <li>Deprecated <code>GroupingMap::fold_first</code> (renamed <code>reduce</code>) (<a href="https://redirect.github.com/rust-itertools/itertools/issues/902">#902</a>)</li> <li>Fixed <code>Itertools::k_smallest(0)</code> to consume the iterator, optimized <code>Itertools::k_smallest(1)</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/909">#909</a>)</li> <li>Specialized <code>Combinations::nth</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/914">#914</a>)</li> <li>Specialized <code>MergeBy::fold</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/920">#920</a>)</li> <li>Specialized <code>CombinationsWithReplacement::nth</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/923">#923</a>)</li> <li>Specialized <code>FlattenOk::{fold, rfold}</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/927">#927</a>)</li> <li>Specialized <code>Powerset::nth</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/924">#924</a>)</li> <li>Documentation fixes (<a href="https://redirect.github.com/rust-itertools/itertools/issues/882">#882</a>, <a href="https://redirect.github.com/rust-itertools/itertools/issues/936">#936</a>)</li> <li>Fixed <code>assert_equal</code> for iterators longer than <code>i32::MAX</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/932">#932</a>)</li> <li>Updated the <code>must_use</code> message of non-lazy <code>KMergeBy</code> and <code>TupleCombinations</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/939">#939</a>)</li> </ul> <h3>Notable Internal Changes</h3> <ul> <li>Tested iterator laziness (<a 
href="https://redirect.github.com/rust-itertools/itertools/issues/792">#792</a>)</li> <li>Created <code>CONTRIBUTING.md</code> (<a href="https://redirect.github.com/rust-itertools/itertools/issues/767">#767</a>)</li> </ul> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href=" |
||
|
|
7c5c7a856a |
fix: Use correct component versions by overriding from FIREZONE_PACKAGE_VERSION (#5344)
Now that #4397 is complete, we need a way to bake in the desired component version so that it's reported properly to the portal. This PR adds a global override, "FIREZONE_PACKAGE_VERSION", that can optionally be set to bake the version in. If left blank, the behavior is unchanged: "CARGO_PKG_VERSION" is used instead, which is populated from `connlib-shared`'s Cargo.toml. ## Problem <img width="520" alt="Screenshot 2024-06-12 at 11 34 45 AM" src="https://github.com/firezone/firezone/assets/167144/b04fcbe5-dcba-4a0d-b93f-7abd923b4f04"> <img width="439" alt="Screenshot 2024-06-12 at 11 34 36 AM" src="https://github.com/firezone/firezone/assets/167144/7b1828fe-4073-4a1f-8cbd-5e55ba241745"> |
||
|
|
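The compile-time override pattern described above can be sketched with `option_env!`. In this sketch, the fallback string literal stands in for `env!("CARGO_PKG_VERSION")`, which is only available when building under Cargo:

```rust
// Sketch of a build-time version override: FIREZONE_PACKAGE_VERSION takes
// precedence if set when compiling, otherwise a fallback is used.
const CARGO_FALLBACK: &str = "1.1.0"; // stand-in for env!("CARGO_PKG_VERSION")

fn version() -> &'static str {
    // option_env! is evaluated at compile time; it yields None when the
    // variable was not set in the build environment.
    option_env!("FIREZONE_PACKAGE_VERSION").unwrap_or(CARGO_FALLBACK)
}

fn main() {
    // Without the override set at build time, the fallback version is reported.
    assert!(!version().is_empty());
    println!("reporting version {}", version());
}
```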
9a01745a1d |
build(deps): Bump the windows group in /rust with 2 updates (#5288)
Bumps the windows group in /rust with 2 updates: [windows](https://github.com/microsoft/windows-rs) and [windows-implement](https://github.com/microsoft/windows-rs). Updates `windows` from 0.56.0 to 0.57.0 <details> <summary>Commits</summary> <ul> <li><a href=" |
||
|
|
7e533c42f8 |
refactor: Split releases for Clients and Gateways (#5287)
- Removes version numbers from infra components (elixir/relay) - Removes version bumping from Rust workspace members that don't get published - Splits release publishing into `gateway-`, `headless-client-`, and `gui-client-` - Removes auto-deploying new infrastructure when a release is published. Use the Deploy Production workflow instead. Fixes #4397 |
||
|
|
d0efc55918 |
test(connlib): reduce number of local rejections (#5221)
To make proptests efficient, it is important to generate the set of possible test cases algorithmically instead of filtering through randomly generated values. This PR makes the strategies for upstream DNS servers and IP networks more efficient by removing the filtering. |
||
|
|
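The "construct, don't filter" principle above can be illustrated for IP networks: instead of generating random (address, prefix) pairs and rejecting those with host bits set, mask the host bits so every generated value is already a valid network address. A minimal sketch, independent of the actual proptest strategies:

```rust
use std::net::Ipv4Addr;

/// Masks the host bits of `addr` so the result is always a valid
/// network address for the given prefix length.
fn network_address(addr: Ipv4Addr, prefix_len: u8) -> Ipv4Addr {
    assert!(prefix_len <= 32);
    let mask: u32 = if prefix_len == 0 {
        0
    } else {
        u32::MAX << (32 - prefix_len)
    };
    Ipv4Addr::from(u32::from(addr) & mask)
}

fn main() {
    // 10.1.2.3/24 has host bits set and would be rejected by a filtering
    // strategy; masking fixes it up instead, so no sample is wasted.
    assert_eq!(network_address(Ipv4Addr::new(10, 1, 2, 3), 24), Ipv4Addr::new(10, 1, 2, 0));
    assert_eq!(network_address(Ipv4Addr::new(192, 168, 255, 255), 16), Ipv4Addr::new(192, 168, 0, 0));
    assert_eq!(network_address(Ipv4Addr::new(8, 8, 8, 8), 32), Ipv4Addr::new(8, 8, 8, 8));
}
```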
dfbfbbe8c9 |
build(deps): Bump tokio from 1.37.0 to 1.38.0 in /rust (#5193)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.37.0 to 1.38.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/tokio-rs/tokio/releases">tokio's releases</a>.</em></p> <blockquote> <h2>Tokio v1.38.0</h2> <p>This release marks the beginning of stabilization for runtime metrics. It stabilizes <code>RuntimeMetrics::worker_count</code>. Future releases will continue to stabilize more metrics.</p> <h3>Added</h3> <ul> <li>fs: add <code>File::create_new</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6573">#6573</a>)</li> <li>io: add <code>copy_bidirectional_with_sizes</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6500">#6500</a>)</li> <li>io: implement <code>AsyncBufRead</code> for <code>Join</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6449">#6449</a>)</li> <li>net: add Apple visionOS support (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6465">#6465</a>)</li> <li>net: implement <code>Clone</code> for <code>NamedPipeInfo</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6586">#6586</a>)</li> <li>net: support QNX OS (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6421">#6421</a>)</li> <li>sync: add <code>Notify::notify_last</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6520">#6520</a>)</li> <li>sync: add <code>mpsc::Receiver::{capacity,max_capacity}</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6511">#6511</a>)</li> <li>sync: add <code>split</code> method to the semaphore permit (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6472">#6472</a>, <a href="https://redirect.github.com/tokio-rs/tokio/issues/6478">#6478</a>)</li> <li>task: add <code>tokio::task::join_set::Builder::spawn_blocking</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6578">#6578</a>)</li> <li>wasm: support rt-multi-thread with wasm32-wasi-preview1-threads (<a 
href="https://redirect.github.com/tokio-rs/tokio/issues/6510">#6510</a>)</li> </ul> <h3>Changed</h3> <ul> <li>macros: make <code>#[tokio::test]</code> append <code>#[test]</code> at the end of the attribute list (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6497">#6497</a>)</li> <li>metrics: fix <code>blocking_threads</code> count (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6551">#6551</a>)</li> <li>metrics: stabilize <code>RuntimeMetrics::worker_count</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6556">#6556</a>)</li> <li>runtime: move task out of the <code>lifo_slot</code> in <code>block_in_place</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6596">#6596</a>)</li> <li>runtime: panic if <code>global_queue_interval</code> is zero (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6445">#6445</a>)</li> <li>sync: always drop message in destructor for oneshot receiver (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6558">#6558</a>)</li> <li>sync: instrument <code>Semaphore</code> for task dumps (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6499">#6499</a>)</li> <li>sync: use FIFO ordering when waking batches of wakers (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6521">#6521</a>)</li> <li>task: make <code>LocalKey::get</code> work with Clone types (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6433">#6433</a>)</li> <li>tests: update nix and mio-aio dev-dependencies (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6552">#6552</a>)</li> <li>time: clean up implementation (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6517">#6517</a>)</li> <li>time: lazily init timers on first poll (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6512">#6512</a>)</li> <li>time: remove the <code>true_when</code> field in <code>TimerShared</code> (<a 
href="https://redirect.github.com/tokio-rs/tokio/issues/6563">#6563</a>)</li> <li>time: use sharding for timer implementation (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6534">#6534</a>)</li> </ul> <h3>Fixed</h3> <ul> <li>taskdump: allow building taskdump docs on non-unix machines (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6564">#6564</a>)</li> <li>time: check for overflow in <code>Interval::poll_tick</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6487">#6487</a>)</li> <li>sync: fix incorrect <code>is_empty</code> on mpsc block boundaries (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6603">#6603</a>)</li> </ul> <h3>Documented</h3> <ul> <li>fs: rewrite file system docs (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6467">#6467</a>)</li> <li>io: fix <code>stdin</code> documentation (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6581">#6581</a>)</li> <li>io: fix obsolete reference in <code>ReadHalf::unsplit()</code> documentation (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6498">#6498</a>)</li> <li>macros: render more comprehensible documentation for <code>select!</code> (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6468">#6468</a>)</li> <li>net: add missing types to module docs (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6482">#6482</a>)</li> <li>net: fix misleading <code>NamedPipeServer</code> example (<a href="https://redirect.github.com/tokio-rs/tokio/issues/6590">#6590</a>)</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href=" |
||
|
|
3f3ea96ca7 |
test(connlib): generate resources with wildcard and ? addresses (#5209)
Currently, `tunnel_test` only tests DNS resources with fully-qualified domain names. Firezone also supports wildcard domains in the forms of `*.example.com` and `?.example.com`. To include these in the tests, we generate a bunch of DNS records that include various subdomains for such wildcard DNS resources. When sampling DNS queries, we already take them from the pool of global DNS records which now also includes these subdomains, thus nothing else needed to be changed to support testing these resources. |
||
|
|
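A toy matcher for the two wildcard forms mentioned above, assuming for this sketch that `*.example.com` matches one or more leading labels and `?.example.com` exactly one (the real semantics live in connlib, so treat this as illustrative only):

```rust
/// Matches `domain` against a pattern that is either a plain FQDN,
/// `*.suffix` (one or more leading labels, assumed semantics) or
/// `?.suffix` (exactly one leading label, assumed semantics).
fn matches(pattern: &str, domain: &str) -> bool {
    let (head, suffix) = match pattern.split_once('.') {
        Some(p) if p.0 == "*" || p.0 == "?" => p,
        _ => return pattern == domain, // plain FQDN: exact match
    };
    // The domain must end with ".<suffix>"; whatever precedes it is the
    // wildcard-matched part.
    let rest = match domain.strip_suffix(suffix).and_then(|r| r.strip_suffix('.')) {
        Some(r) => r,
        None => return false,
    };
    let n_labels = if rest.is_empty() { 0 } else { rest.split('.').count() };
    if head == "*" { n_labels >= 1 } else { n_labels == 1 }
}

fn main() {
    assert!(matches("*.example.com", "app.example.com"));
    assert!(matches("*.example.com", "a.b.example.com")); // any depth
    assert!(!matches("*.example.com", "example.com"));    // apex not matched
    assert!(matches("?.example.com", "app.example.com"));
    assert!(!matches("?.example.com", "a.b.example.com")); // exactly one label
    assert!(matches("app.example.com", "app.example.com")); // FQDN case
}
```

Generating DNS records for various subdomains of such patterns, as the commit does, then exercises all of these branches.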
deefabd8f8 |
refactor(firezone-tunnel): move routes and DNS control out of connlib and up to the Client (#5111)
Refs #3636 (This pays down some of the technical debt from Linux DNS) Refs #4473 (This partially fulfills it) Refs #5068 (This is needed to make `FIREZONE_DNS_CONTROL` mandatory) As of dd6421: - On both Linux and Windows, DNS control and IP setting (i.e. `on_set_interface_config`) both move to the Client - On Windows, route setting stays in `tun_windows.rs`. Route setting in Windows requires us to know the interface index, which we don't know in the Client code. If we could pass opaque platform-specific data between the tunnel and the Client it would be easy. - On Linux, route setting moves to the Client and Gateway, which completely removes the `worker` task in `tun_linux.rs` - Notifying systemd that we're ready moves up to the headless Client / IPC service ```[tasklist] ### Before merging / notes - [x] Does DNS roaming work on Linux on `main`? I don't see where it hooks up. I think I only set up DNS in `Tun::new` (Yes, the `Tun` gets recreated every time we reconfigure the device) - [x] Fix Windows Clients - [x] Fix Gateway - [x] Make sure connlib doesn't get the DNS control method from the env var (will be fixed in #5068) - [x] De-dupe consts - [ ] ~~Add DNS control test~~ (failed) - [ ] Smoke test Linux - [ ] Smoke test Windows ``` |
||
|
|
ce929e1204 |
test(connlib): resolve DNS resources in tunnel_test (#5083)
Currently, `tunnel_test` only sends ICMPs to CIDR resources. We also want to test certain properties with regard to DNS resources. In particular, we want to test: - Given a DNS resource, can we query it for an IP? - Can we send an ICMP packet to the resolved IP? - Is the mapping of proxy IP to upstream IP stable? To achieve this, we sample a list of `IpAddr` whenever we add a DNS resource to the state. We also add the transition `SendQueryToDnsResource`. As the name suggests, this one simulates a DNS query coming from the system for one of our resources. We simulate A and AAAA queries and take note of the addresses that connlib returns to us for the queries. Lastly, as part of `SendICMPPacketToResource`, we may now also sample from a list of IPs that connlib gave us for a domain and send an ICMP packet to that one. There is one caveat in this test that I'd like to point out: At the moment, the exact mapping of proxy IP to real IP is an implementation detail of connlib. As a result, I don't know which proxy IP I need to use in order to ping a particular "real" IP. This presents an issue in the assertions: Upon the first ICMP packet, I cannot assert what the expected destination is. Instead, I need to "remember" it. In case we send another ICMP packet to the same resource and happen to sample the same proxy IP, we can then assert that the mapping did not change. |
||
|
|
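The "remember on first use, assert stability afterwards" strategy from the commit above can be sketched as follows. The type and method names are illustrative stand-ins for the test's reference state, not the actual `tunnel_test` code:

```rust
use std::collections::HashMap;
use std::net::{IpAddr, Ipv4Addr};

// Sketch: since the proxy-IP -> real-IP mapping is an implementation
// detail of connlib, the reference state records it on first observation
// and only asserts stability on later packets.
#[derive(Default)]
struct ReferenceState {
    observed: HashMap<IpAddr, IpAddr>, // proxy IP -> real IP it first mapped to
}

impl ReferenceState {
    /// Returns false if an already-established mapping changed.
    fn check_mapping(&mut self, proxy: IpAddr, real: IpAddr) -> bool {
        *self.observed.entry(proxy).or_insert(real) == real
    }
}

fn main() {
    let proxy = IpAddr::V4(Ipv4Addr::new(100, 96, 0, 1));
    let real_a = IpAddr::V4(Ipv4Addr::new(93, 184, 216, 34));
    let real_b = IpAddr::V4(Ipv4Addr::new(93, 184, 216, 35));

    let mut state = ReferenceState::default();
    assert!(state.check_mapping(proxy, real_a));  // first packet: remember
    assert!(state.check_mapping(proxy, real_a));  // same mapping: ok
    assert!(!state.check_mapping(proxy, real_b)); // mapping changed: failure
}
```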
974eb95dc5 |
test(connlib): reduce number of sites to 3 (#5152)
Generating up to 10 sites can be quite verbose in the output. I think 3 should be enough to hit all codepaths that need to deal with more than 1. |
||
|
|
fbc13f6946 |
test(connlib): generate actual domain names as inputs (#5146)
Extracted out of #5083. |