p2p control protocol

At present, `connlib` utilises the portal as a signalling layer for any kind of control message that needs to be exchanged between clients and gateways. For anything regarding connectivity, this is crucial: before we have a direct connection to the gateway, we have no choice but to use the portal as a "relay" to e.g. exchange address candidates for ICE. However, once a direct connection has been established, exchanging information directly with the gateway is faster and removes the portal as a potential point of failure for the data plane.

For DNS resources, `connlib` intercepts all DNS requests on the client and assigns its own IPs within the CG-NAT range to all domains that are configured as resources. Thus, all packets targeting DNS resources will have one of these IPs set as their destination. The gateway needs to learn about all the IPs that the client has assigned to a certain domain and perform NAT. We call this concept "DNS resource NAT". Currently, the domain + the assigned IPs are sent together with the `allow_access` or `request_connection` message via the portal. The new control protocol defined in #6732 purposely excludes this information and only authorises traffic to the entire resource, which could also be a wildcard-DNS resource.

To exchange the assigned IPs for a certain domain with the gateway, we introduce our own p2p control protocol built on top of IP. All control protocol messages are sent through the tunnel and are thus encrypted at all times. They are differentiated from regular application traffic as follows:

- IP src is set to the unspecified IPv6 address (`::`)
- IP dst is set to the unspecified IPv6 address (`::`)
- IP protocol is set to reserved (`0xFF`)

The combination of all three should never appear in regular traffic.

To ensure forwards-compatibility, the control protocol utilises a fixed 8-byte header where the first byte denotes the message kind. In the current design, there is no concept of a request or response in the wire format. Each message is unidirectional, and the fact that the two messages we define here appear in tandem is purely by convention. We use the IPv6 payload length to determine the total length of the packet. The payloads of the messages defined here are JSON-encoded; message types are otherwise free to choose whichever encoding they'd like. A sketch of how such packets can be recognised and framed follows at the end of this section.

This protocol is sent through the WireGuard tunnel, meaning we are effectively limited by our device MTU of 1280; going above that would require implementing fragmentation. For the messages that set up the DNS resource NAT, we are below this limit:

- UUIDs are 16 bytes
- Domain names are at most 255 bytes
- IPv6 addresses are 16 bytes * 4
- IPv4 addresses are 4 bytes * 4

That is 16 + 255 + 64 + 16 = 351 bytes of raw data; including the JSON serialisation overhead, this results in a total maximum payload size of 402 bytes, which is well below our MTU.

Finally, another thing to consider here is that IP is unreliable, meaning each use of this protocol needs to make sure that:

- It is resilient against message re-ordering
- It is resilient against packet loss

The details of how this is ensured for setting up the DNS resource NAT are left to #6732.
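To make the framing concrete, here is a minimal Rust sketch of how a control-protocol packet could be recognised and how a message could be framed. This is an illustration, not `connlib`'s actual API: the names `is_control_packet` and `frame_control_message` are our own, and because the use of the 7 header bytes after the message kind is not specified above, we simply zero them.

```rust
use std::net::Ipv6Addr;

/// IP protocol number "Reserved" (`0xFF`), used to mark p2p control messages.
const CONTROL_PROTOCOL: u8 = 0xFF;

/// Returns whether `packet` is a p2p control-protocol message, i.e. an IPv6
/// packet with src `::`, dst `::` and the reserved protocol number.
fn is_control_packet(packet: &[u8]) -> bool {
    if packet.len() < 40 || packet[0] >> 4 != 6 {
        return false; // Too short for an IPv6 header / not IPv6.
    }

    let next_header = packet[6]; // The IPv6 "next header" field carries the protocol.
    let src = Ipv6Addr::from(<[u8; 16]>::try_from(&packet[8..24]).unwrap());
    let dst = Ipv6Addr::from(<[u8; 16]>::try_from(&packet[24..40]).unwrap());

    next_header == CONTROL_PROTOCOL && src.is_unspecified() && dst.is_unspecified()
}

/// Frames a control message: a fixed 8-byte header whose first byte is the
/// message kind, followed by the (e.g. JSON-encoded) body. The remaining
/// 7 header bytes are zeroed here; they exist for forwards-compatibility.
fn frame_control_message(kind: u8, body: &[u8]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(8 + body.len());
    buf.push(kind);
    buf.extend_from_slice(&[0u8; 7]);
    buf.extend_from_slice(body);
    buf
}

fn main() {
    // Hypothetical message kind `1` with a tiny JSON body.
    let framed = frame_control_message(1, br#"{"domain":"example.com"}"#);
    assert_eq!(framed[0], 1);
    assert_eq!(framed.len(), 8 + 24);

    // `is_control_packet` would be called on the full IPv6 packet that
    // carries `framed` as its payload.
    let _ = is_control_packet;
}
```

On receipt, the total message length comes from the IPv6 payload-length field, so the body is simply everything after the first 8 bytes of the payload.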
Rust development guide
Firezone uses Rust for all data plane components. This directory contains the Linux and Windows clients, and low-level networking implementations related to STUN/TURN.
We target the last stable release of Rust using `rust-toolchain.toml`.
If you are using `rustup`, that is automatically handled for you.
Otherwise, ensure you have the latest stable version of Rust installed.
Reading Client logs
The Client logs are written as JSONL for machine-readability.
To make them more human-friendly, pipe them through `jq` like this:
```sh
cd path/to/logs # e.g. $HOME/.cache/dev.firezone.client/data/logs on Linux
cat *.log | jq -r '"\(.time) \(.severity) \(.message)"'
```
Resulting in, e.g.:

```text
2024-04-01T18:25:47.237661392Z INFO started log
2024-04-01T18:25:47.238193266Z INFO GIT_VERSION = 1.0.0-pre.11-35-gcc0d43531
2024-04-01T18:25:48.295243016Z INFO No token / actor_name on disk, starting in signed-out state
2024-04-01T18:25:48.295360641Z INFO null
```
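Logs in this shape are typically produced with the `tracing` ecosystem. As a rough, hypothetical sketch, assuming `tracing-subscriber` with its `json` feature enabled (the exact field names the Client emits may be mapped differently than what `jq` shows above):

```rust
// Assumed dependencies (not taken from this repo):
// tracing = "0.1"
// tracing-subscriber = { version = "0.3", features = ["json"] }

fn main() {
    // Write one JSON object per line (JSONL) instead of human-readable text.
    tracing_subscriber::fmt().json().init();

    tracing::info!("started log");
}
```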
Benchmarking on Linux
The recommended way to benchmark any of the Rust components is Linux's `perf` utility.
For example, to attach to a running application, do:
- Ensure the binary you are profiling is compiled with the `bench` profile.
- `sudo perf record -g --freq 10000 --pid $(pgrep <your-binary>)`
- Run the speed test or whatever load-inducing task you want to measure.
- `sudo perf script > profile.perf`
- Open profiler.firefox.com and load `profile.perf`.
Instead of attaching to a process with `--pid`, you can also pass the path to the executable directly, e.g. `sudo perf record -g --freq 10000 path/to/executable`.
That is useful if you want to capture perf data for a test or a micro-benchmark, as sketched below.
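If you just need a standalone binary to practice such a capture on, any hot loop will do. Here is a hypothetical example (not part of this repo) that gives `perf` something to sample; `std::hint::black_box` keeps the optimiser from deleting the work:

```rust
use std::hint::black_box;
use std::time::Instant;

fn main() {
    let start = Instant::now();

    // A deliberately hot loop for `perf` to sample.
    let mut acc: u64 = 0;
    for i in 0..500_000_000u64 {
        acc = black_box(acc.wrapping_add(i.rotate_left(7)));
    }

    println!("acc = {acc}, elapsed = {:?}", start.elapsed());
}
```

Build it with optimisations and debug info (e.g. the `bench` profile mentioned above) so the recorded stacks resolve to readable symbols.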