This PR fixes `docker compose up`. It doesn't have the test client ->
resource flow working yet, but it prevents anything from erroring at startup.
This fixes:
* tokens (use the correct token for the client user agent we are using)
* randomize `name_suffix` at startup for connlib (we will eventually
allow an option to set it manually; see the sketch after this list)
* remove port ranges for relay (see firezone/product#613)
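For the `name_suffix` bullet, a minimal sketch of what randomizing a suffix at startup could look like, assuming the `rand` crate; the actual field name and format in connlib may differ:

```rust
use rand::Rng;

// Hypothetical helper: pick a short random suffix at startup so that
// concurrently running clients don't end up with identical names.
fn random_name_suffix() -> String {
    let n: u32 = rand::thread_rng().gen_range(0..100_000);
    format!("-{n:05}")
}

fn main() {
    println!("name_suffix = {}", random_name_suffix());
}
```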
This makes it possible to build the Apple/Android FFI bridges and
integrate them with their respective client apps.
---------
Signed-off-by: Francesca Lovebloom <franlovebloom@gmail.com>
Co-authored-by: Roopesh Chander <roop@roopc.net>
There are problems building the Docker images on macOS with musl due to
issues in `ring`, so we switched to Debian slim with glibc for
development.
When using `docker compose build`, or any other way of building Docker
images in parallel, the way the cache mounts in the Rust Dockerfile were
set up made the caches of different images overlap and corrupt each
other. To fix this, we set the cache mounts' sharing mode to `locked`,
which prevents multiple writers to the same cache.
This brings connlib from its own separate repo into Firezone's monorepo.
On top of bringing connlib, we also add a unified Dockerfile for all
Rust binaries and a docker-compose file that can run a headless client, a
relay and a gateway, which will eventually test the whole flow between a
client and a resource. For this to work, we also incorporated some Elixir
scripts to generate portal tokens for those components.
With this PR, the relay can be configured with a WebSocket URL on startup. If given, it will attempt to connect to it and join the `relay` room with its `stamp_secret`. Once the `init` message is received, regular relay operation will begin.
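A rough sketch of that startup handshake, assuming a Phoenix-channel-style join over `tokio-tungstenite`; the message shape and environment variables here are illustrative, not the relay's actual protocol:

```rust
use futures_util::{SinkExt, StreamExt};
use tokio_tungstenite::{connect_async, tungstenite::Message};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Hypothetical configuration; the real relay reads these from its own CLI/env setup.
    let portal_url = std::env::var("PORTAL_WS_URL")?;
    let stamp_secret = std::env::var("STAMP_SECRET")?;

    let (mut ws, _) = connect_async(portal_url.as_str()).await?;

    // Join the `relay` room, announcing our stamp secret (illustrative message shape).
    let join = serde_json::json!({
        "topic": "relay",
        "event": "phx_join",
        "payload": { "stamp_secret": stamp_secret },
        "ref": 1,
    });
    ws.send(Message::Text(join.to_string().into())).await?;

    // Only once the portal sends `init` do we begin regular relay operation.
    while let Some(msg) = ws.next().await {
        if let Message::Text(text) = msg? {
            let event: serde_json::Value = serde_json::from_str(&text)?;
            if event["event"] == "init" {
                println!("received init; starting relay");
                break;
            }
        }
    }

    Ok(())
}
```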
Targets specified in the `rust-toolchain.toml` file are automatically installed by `rustup`. This avoids setup steps for other devs and also simplifies the CI setup.
To be able to compile native code for musl targets, we need `musl-gcc`, which comes with the `musl-tools` package on Ubuntu.
Previously, the relay treated the `stamp_secret` internally as bytes and shared it with the outside world as a hex-string. The portal, however, treats it as an opaque string and uses its UTF-8 bytes to create the username and password.
This patch aligns the relay's functionality with the portal and stores the `stamp_secret` internally as a string.
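As a hedged illustration of "uses the UTF-8 bytes", here is a sketch in the style of the common TURN auth-secret scheme (an HMAC over a username, keyed with the secret's raw bytes); the exact MAC and encoding the relay and portal agree on are not specified here:

```rust
use base64::Engine;
use hmac::{Hmac, Mac};
use sha2::Sha256;

// Illustrative only: derive TURN-style credentials from the stamp secret,
// treating the secret as an opaque UTF-8 string rather than decoding hex.
fn credentials(stamp_secret: &str, expiry_unix: u64) -> (String, String) {
    let username = expiry_unix.to_string();
    let mut mac = Hmac::<Sha256>::new_from_slice(stamp_secret.as_bytes())
        .expect("HMAC accepts keys of any length");
    mac.update(username.as_bytes());
    let password =
        base64::engine::general_purpose::STANDARD.encode(mac.finalize().into_bytes());
    (username, password)
}
```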
This saves us several lines of code and allows configuring the relay via
command-line arguments as well. Note that because of `#[arg(env)]`, all
of these can still be set via environment variables too.
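A minimal sketch of the pattern this refers to, using `clap`'s derive API with its `env` feature; the field names below are assumptions, not the relay's actual arguments:

```rust
use clap::Parser;

/// Relay command-line arguments (field names are illustrative).
#[derive(Parser, Debug)]
struct Args {
    /// Address to listen on; can also be set via the LISTEN_ADDR env variable.
    #[arg(long, env = "LISTEN_ADDR", default_value = "0.0.0.0:3478")]
    listen_addr: std::net::SocketAddr,

    /// Portal WebSocket URL; can also be set via PORTAL_WS_URL.
    #[arg(long, env = "PORTAL_WS_URL")]
    portal_ws_url: Option<String>,
}

fn main() {
    let args = Args::parse();
    println!("{args:?}");
}
```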
To complete the authentication scheme for the relay, we need to prompt
the client with a nonce when they send an unauthenticated request. The
semantic meaning of a nonce is opaque to the client. As a starting
point, we implement a count-based scheme. Each nonce is valid for 10
requests. After that, a request will be rejected with a 401 and the
client has to authenticate with a new nonce.
This scheme provides a basic form of replay-protection.
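A minimal sketch of such a count-based nonce registry, with hypothetical names; the real bookkeeping lives in the relay's auth handling:

```rust
use std::collections::HashMap;

const MAX_USES_PER_NONCE: u64 = 10;

/// Tracks how many requests each issued nonce may still be used for (names are illustrative).
#[derive(Default)]
struct Nonces {
    remaining_uses: HashMap<String, u64>,
}

impl Nonces {
    /// Issue a new nonce that is valid for the next 10 authenticated requests.
    fn issue(&mut self, nonce: String) {
        self.remaining_uses.insert(nonce, MAX_USES_PER_NONCE);
    }

    /// Returns true if the nonce is still valid; once exhausted (or unknown),
    /// the request must be rejected with a 401 and a fresh nonce.
    fn use_once(&mut self, nonce: &str) -> bool {
        match self.remaining_uses.get_mut(nonce) {
            Some(remaining) if *remaining > 0 => {
                *remaining -= 1;
                true
            }
            _ => false,
        }
    }
}
```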
We introduce dedicated types for each message that the `Server` can
handle. This allows us to make the functions public because the
type system now guarantees that these messages are either parsed from
bytes or constructed with the correct data.
The latter will be useful for writing tests against a richer API.
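Roughly, the idea looks like this (type and field names are illustrative): each message gets a type that can only be obtained by parsing bytes or by constructing it with valid data, so the public handler functions cannot be fed garbage.

```rust
/// Illustrative only: each request the `Server` understands gets its own type.
pub struct Binding {
    pub transaction_id: [u8; 12],
}

pub struct Allocate {
    pub transaction_id: [u8; 12],
    pub requested_lifetime_secs: u32,
}

pub struct ParseError;

impl Allocate {
    /// Parse from raw bytes; only succeeds for a well-formed Allocate request.
    pub fn parse(_bytes: &[u8]) -> Result<Self, ParseError> {
        // ... actual decoding omitted in this sketch ...
        Err(ParseError)
    }

    /// Construct directly from known-good data, e.g. in tests.
    pub fn new(transaction_id: [u8; 12], requested_lifetime_secs: u32) -> Self {
        Self {
            transaction_id,
            requested_lifetime_secs,
        }
    }
}

// With these types in place, handlers such as
// `pub fn handle_allocate(&mut self, request: Allocate)` can safely be public.
```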
With this patch, the relay can parse and respond to allocation requests. I
ran some basic tests against https://icetest.info/ and implemented a
regression test based on the logged data.
In writing this, I also had to slightly change the design of `Server`
(as expected). Event handlers for incoming data no longer return a
message directly. Instead, the caller is responsible for draining
`Command`s from the `Server`.
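In sans-IO terms, the `Server` now looks roughly like this sketch (names are assumptions, not the exact API): handlers take bytes in and queue side effects, and the caller polls `Command`s out.

```rust
use std::collections::VecDeque;
use std::net::SocketAddr;

/// Side effects the caller must perform on behalf of the `Server` (illustrative names).
pub enum Command {
    SendMessage { payload: Vec<u8>, recipient: SocketAddr },
    AllocateAddresses { port: u16 },
}

#[derive(Default)]
pub struct Server {
    pending_commands: VecDeque<Command>,
}

impl Server {
    /// Handle a datagram received from a client. Note that nothing is returned;
    /// responses and allocations are queued as `Command`s instead.
    pub fn handle_client_input(&mut self, _bytes: &[u8], _sender: SocketAddr) {
        // ... parse the message, then e.g.:
        // self.pending_commands.push_back(Command::AllocateAddresses { port: 50_000 });
    }

    /// The event loop calls this after each input until it returns `None`.
    pub fn next_command(&mut self) -> Option<Command> {
        self.pending_commands.pop_front()
    }
}
```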
When creating an allocation, we need to start listening on a new port.
This needs to happen outside the `Server` as I am going for a sans-IO
style. We emit a `Command` that instructs the main event loop to listen
on a new port. Any incoming data on that port will be forwarded to the
`Server`.
At the moment, this incoming data is just dropped. This is actually
standards-compliant: we cannot handle binding requests yet, which is
what would allow this data to be forwarded to the client.
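When the event loop drains an allocate command, it could open the new port roughly like this (a sketch assuming tokio's `UdpSocket`; the function name and shape are assumptions):

```rust
use tokio::net::UdpSocket;

/// Sketch of what the event loop does when the `Server` asks it to listen on a
/// new port for an allocation.
async fn open_allocation_port(port: u16) -> anyhow::Result<()> {
    let socket = UdpSocket::bind(("0.0.0.0", port)).await?;

    tokio::spawn(async move {
        let mut buf = vec![0u8; 65_536];
        loop {
            match socket.recv_from(&mut buf).await {
                // Until binding requests are handled, data arriving on the
                // allocated port is read and then dropped.
                Ok((_len, _peer)) => {}
                Err(_) => break,
            }
        }
    });

    Ok(())
}
```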
In some areas, the code is still a bit rough but I expect to iron those
things out as we go along.
This is an alternative to https://github.com/firezone/firezone/pull/1602
that implements the server using a library I've found called
`stun_codec`.
It already has support for parsing a variety of attributes.
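For a feel of the library, a small sketch that decodes an incoming STUN message with `stun_codec` (plus `bytecodec`, which it builds on) and answers a Binding request with an XOR-MAPPED-ADDRESS; error handling is simplified:

```rust
use bytecodec::{DecodeExt, EncodeExt};
use std::net::SocketAddr;
use stun_codec::rfc5389::attributes::XorMappedAddress;
use stun_codec::rfc5389::{methods::BINDING, Attribute};
use stun_codec::{Message, MessageClass, MessageDecoder, MessageEncoder};

fn answer_binding(input: &[u8], sender: SocketAddr) -> Option<Vec<u8>> {
    let mut decoder = MessageDecoder::<Attribute>::new();
    let request = decoder.decode_from_bytes(input).ok()?.ok()?;

    // Only answer Binding requests in this sketch.
    if request.method() != BINDING || request.class() != MessageClass::Request {
        return None;
    }

    let mut response = Message::<Attribute>::new(
        MessageClass::SuccessResponse,
        BINDING,
        request.transaction_id(),
    );
    response.add_attribute(Attribute::XorMappedAddress(XorMappedAddress::new(sender)));

    let mut encoder = MessageEncoder::new();
    encoder.encode_into_bytes(response).ok()
}
```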
The following is a nice website to test some of the functionality:
https://icetest.info/
The server is still listening on:
`ec2-3-89-112-240.compute-1.amazonaws.com:3478`.