Currently, the eBPF module can translate from channel data messages to UDP packets and vice versa. It can even do that across IP stacks, i.e. translate from an IPv6 UDP packet to an IPv4 channel data message. What it cannot do is handle packets addressed to itself. This can happen if both the Client and the Gateway pick the same relay for their allocations. When exchanging candidates, ICE will then form pairs between the two relay candidates, essentially requiring the relay to loop packets back to itself. In eBPF, we cannot do that: when a packet is sent back out with `XDP_TX`, it goes out on the wire without any additional check of whether it is addressed to our own IP.

Properly handling this in eBPF (by comparing the destination IP to our public IP) adds more cases we need to handle. The current module structure, where everything lives in one file, makes this quite hard to understand, which is why I opted to create four sub-modules:

- `from_ipv4_channel`
- `from_ipv4_udp`
- `from_ipv6_channel`
- `from_ipv6_udp`

For traffic arriving via a data channel, we may also need to send it back out via a data channel if the peer address we are sending to is the relay itself. Therefore, the `from_ipX_channel` modules each have four sub-modules:

- `to_ipv4_channel`
- `to_ipv4_udp`
- `to_ipv6_channel`
- `to_ipv6_udp`

Traffic arriving on an allocation port (`from_ipX_udp`) always maps to a data channel and can therefore never get into a routing loop, resulting in only two sub-modules:

- `to_ipv4_channel`
- `to_ipv6_channel`

The actual implementation of the new code paths is rather simple and mostly copied from the existing ones. For half of them, we don't need to make any adjustments to the buffer size (e.g. IPv4 channel to IPv4 channel). For the other half, we need to adjust for the difference in the IP header size (see the sketch below).

To test these changes, we add a new integration test that uses the docker-compose setup added in #10301 and configures masquerading for both Client and Gateway. To make this more useful, we also remove the `direct-` prefix from all tests, as the test script itself no longer makes any decisions about whether it is operating over a direct or relayed connection.

Resolves: #7518
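To make the two decisions above concrete, here is a rough userspace sketch in plain Rust. It is illustrative only: the function and type names are made up for this example and the actual eBPF code is structured differently. It shows (a) comparing the destination IP against the relay's own public IPs to decide whether a packet must be looped back as a channel data message instead of leaving via UDP, and (b) that the packet buffer only needs resizing when the translation crosses IP stacks, by the 20-byte difference between IPv4 and IPv6 headers.

```rust
use std::net::IpAddr;

/// Where a decapsulated packet should go next (illustrative, not the eBPF types).
enum NextHop {
    /// Destination is a remote peer: emit a plain UDP packet.
    Udp,
    /// Destination is this relay itself: re-wrap the payload in a channel data
    /// message instead of putting it on the wire with `XDP_TX`.
    Channel,
}

/// Decide the outgoing encoding for a packet arriving on a data channel.
fn next_hop(destination: IpAddr, our_public_ips: &[IpAddr]) -> NextHop {
    if our_public_ips.contains(&destination) {
        NextHop::Channel
    } else {
        NextHop::Udp
    }
}

/// How much the buffer needs to grow (positive) or shrink (negative) when
/// translating between IP stacks: IPv6 headers are 40 bytes, IPv4 headers 20.
fn header_delta(from_v4: bool, to_v4: bool) -> i32 {
    const IPV4_HDR: i32 = 20;
    const IPV6_HDR: i32 = 40;

    let from = if from_v4 { IPV4_HDR } else { IPV6_HDR };
    let to = if to_v4 { IPV4_HDR } else { IPV6_HDR };

    to - from
}

fn main() {
    assert_eq!(header_delta(true, true), 0); // IPv4 -> IPv4: no adjustment
    assert_eq!(header_delta(true, false), 20); // IPv4 -> IPv6: grow by 20 bytes
    assert_eq!(header_delta(false, true), -20); // IPv6 -> IPv4: shrink by 20 bytes

    let _ = next_hop("203.0.113.1".parse().unwrap(), &[]);
}
```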
relay
This crate houses a minimalistic STUN & TURN server.
Features
We aim to support the following feature set:
- STUN binding requests
- TURN allocate requests
- TURN refresh requests
- TURN channel bind requests
- TURN channel data requests
Relaying of data through other means such as DATA frames is not supported.
Building
You can build the relay using: `cargo build --release --bin firezone-relay`
You should then find a binary in `target/release/firezone-relay`.
Running
The Firezone Relay supports Linux only. To run the Relay binary on your Linux host:
- Generate a new Relay token from the "Relays" section of the admin portal and save it in your secrets manager.
- Ensure the `FIREZONE_TOKEN=<relay_token>` environment variable is set securely in your Relay's shell environment. The Relay expects this variable at startup.
- Now, you can start the Firezone Relay with: `firezone-relay`

To view more advanced configuration options, pass the `--help` flag: `firezone-relay --help`
Ports
By default, the relay listens on port `udp/3478`. This is the standard port for
STUN/TURN. Additionally, the relay needs to have access to the port range
`49152-65535` for the allocations.
Portal Connection
When given a token, the relay will connect to the Firezone portal and wait for
an init message before commencing relay operations.
Metrics
The relay parses the `OTLP_GRPC_ENDPOINT` env variable.
Traces and metrics will be sent to an OTLP collector listening on that endpoint.
It is recommended to set additional environment variables to scope your metrics:

- `OTEL_SERVICE_NAME`: Translates to the `service.name` attribute.
- `OTEL_RESOURCE_ATTRIBUTES`: Additional, comma-separated `key=value` attributes.

By default, we set the following OTEL attributes:

- `service.name=relay`
- `service.namespace=firezone`
The `docker-init-relay.sh` script integrates with GCE.
When `OTEL_METADATA_DISCOVERY_METHOD=gce_metadata`, the `service.instance.id`
attribute is set to the instance ID of the VM.
Design
The relay is designed in a sans-IO fashion, meaning the core components do not cause side effects but operate as pure, synchronous state machines. They take in data and emit commands: wake me at this point in time, send these bytes to this peer, etc.
This allows us to very easily unit-test all kinds of scenarios because all inputs are simple values.
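As a rough illustration of this style, a component could look like the following sketch. The type and method names here are hypothetical and chosen for the example; they are not the crate's actual API.

```rust
use std::net::SocketAddr;
use std::time::{Duration, Instant};

/// Commands the pure state machine asks the caller to carry out.
enum Command {
    /// Send these bytes to this peer.
    SendTo { peer: SocketAddr, payload: Vec<u8> },
    /// Wake the state machine again at this point in time.
    WakeAt(Instant),
}

/// A sans-IO state machine: it never touches sockets or timers itself.
struct Server {
    pending: Vec<Command>,
}

impl Server {
    /// Feed a received datagram into the state machine.
    fn handle_input(&mut self, from: SocketAddr, packet: &[u8], now: Instant) {
        // Purely compute what should happen next; here we simply echo the packet
        // and ask to be woken again in one second.
        self.pending.push(Command::SendTo {
            peer: from,
            payload: packet.to_vec(),
        });
        self.pending.push(Command::WakeAt(now + Duration::from_secs(1)));
    }

    /// Hand the next pending command to the caller, which performs the actual IO.
    fn next_command(&mut self) -> Option<Command> {
        self.pending.pop()
    }
}

fn main() {
    let mut server = Server { pending: Vec::new() };
    let now = Instant::now();
    server.handle_input("127.0.0.1:3478".parse().unwrap(), b"ping", now);

    // A unit test can drive the state machine with fabricated packets and
    // timestamps and assert on the emitted commands, without sockets or clocks.
    while let Some(cmd) = server.next_command() {
        match cmd {
            Command::SendTo { peer, payload } => println!("send {} bytes to {peer}", payload.len()),
            Command::WakeAt(at) => println!("wake at {at:?}"),
        }
    }
}
```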
The main server runs in a single task and spawns one additional task for each allocation. Incoming data that needs to be relayed is forwarded to the main task where it gets authenticated and relayed on success.
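One way to picture that task layout is the simplified tokio sketch below. It is an assumption-laden illustration, not the crate's actual code: the `RelayedPacket` type, the channel capacity, and the port are made up, and the authentication step is only hinted at in a comment.

```rust
use std::net::SocketAddr;
use tokio::net::UdpSocket;
use tokio::sync::mpsc;

/// A datagram received on an allocation port, forwarded to the main task.
struct RelayedPacket {
    allocation_port: u16,
    sender: SocketAddr,
    payload: Vec<u8>,
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let (tx, mut rx) = mpsc::channel::<RelayedPacket>(1024);

    // One task per allocation: it only reads from its socket and forwards.
    let allocation_port = 50_000;
    let socket = UdpSocket::bind(("0.0.0.0", allocation_port)).await?;
    let tx_clone = tx.clone();
    tokio::spawn(async move {
        let mut buf = vec![0u8; 65_536];
        loop {
            let Ok((n, sender)) = socket.recv_from(&mut buf).await else { break };
            let packet = RelayedPacket {
                allocation_port,
                sender,
                payload: buf[..n].to_vec(),
            };
            if tx_clone.send(packet).await.is_err() {
                break; // main task has shut down
            }
        }
    });
    drop(tx); // main loop below ends once all allocation tasks are gone

    // The main task authenticates each packet and relays it on success.
    while let Some(packet) = rx.recv().await {
        println!(
            "packet of {} bytes from {} on port {}",
            packet.payload.len(),
            packet.sender,
            packet.allocation_port
        );
        // ... look up the allocation, check permissions/channels, then relay ...
    }

    Ok(())
}
```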