Looks like this broke the staging WebSocket connections. LiveView socket
connections on `app.firez.one` are failing:
```
insertId: 1o7nymzg12jh1k5
jsonPayload:
cos.googleapis.com/container_id: 89b4633e81432e43dfbaa3957324fd5ead3f2362737bac84648a8f839b6eb16c
cos.googleapis.com/container_name: klt-web-cpap
cos.googleapis.com/stream: stdout
message:
domain:
- elixir
erl_level: error
logging.googleapis.com/sourceLocation:
file: lib/phoenix/socket/transport.ex
function: Elixir.Phoenix.Socket.Transport.check_origin/5
line: 344
message: |+
Could not check origin for Phoenix.Socket transport.
Origin of the request: https://app.firez.one
This happens when you are attempting a socket connection to
a different host than the one configured in your config/
files. For example, in development the host is configured
to "localhost" but you may be trying to access it from
"127.0.0.1". To fix this issue, you may either:
1. update [url: [host: ...]] to your actual host in the
config file for your current environment (recommended)
2. pass the :check_origin option when configuring your
endpoint or when configuring the transport in your
UserSocket module, explicitly outlining which origins
are allowed:
check_origin: ["https://example.com",
"//another.com:888", "//other.com"]
severity: ERROR
time: '2023-08-26T21:24:36.002Z'
time: '2023-08-26T21:24:36.002628434Z'
logName: projects/firezone-staging/logs/cos_containers
receiveTimestamp: '2023-08-26T21:24:36.402398476Z'
resource:
labels:
instance_id: '8218473336234347240'
project_id: firezone-staging
zone: us-east1-d
type: gce_instance
timestamp: '2023-08-26T21:24:36.002628434Z'
```
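For reference, the remedy is to make the endpoint's configured host (or
allowed origins) match what clients actually connect to. A minimal
sketch, assuming the endpoint is configured in `config/runtime.exs` and
named `Web.Endpoint` (both assumptions):
```elixir
# config/runtime.exs -- sketch; the OTP app and module names are assumptions.
config :web, Web.Endpoint,
  # Option 1 (recommended by the error message): set the real host.
  url: [host: "app.firez.one", port: 443, scheme: "https"],
  # Option 2: explicitly allow the origins that may open a socket.
  check_origin: ["https://app.firez.one"]
```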
The Sign Up page will allow users to create new organization accounts.
During sign-up, a randomly generated slug will be created for the
account, and "magic link" will be set as the first identity provider so
the user can log in to the newly created account.
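Roughly, the flow looks like the sketch below; the `Domain.Accounts` /
`Domain.Auth` function names and the slug format are illustrative
assumptions, not the actual API:
```elixir
# Sketch of the sign-up flow; module and function names are illustrative.
def sign_up(attrs) do
  slug = generate_slug()

  with {:ok, account} <- Domain.Accounts.create_account(Map.put(attrs, :slug, slug)),
       # "Magic link" (email) becomes the account's first identity
       # provider, so the creating user can sign in right away.
       {:ok, _provider} <- Domain.Auth.create_provider(account, %{adapter: :email}) do
    {:ok, account}
  end
end

defp generate_slug do
  # Short, URL-safe, random slug for the new account.
  :crypto.strong_rand_bytes(8)
  |> Base.url_encode64(padding: false)
  |> String.downcase()
end
```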
---------
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
* Refactor sharedPreferences to only save the AccountId
* Update TeamId -> AccountId to match naming elsewhere
* Update JWT -> Token to avoid confusion; this token is **not** a valid
JWT and should be treated as an opaque token
* Update FFI `connect` to accept an optional file descriptor (int32) as
a first argument. This seemed to be the most straightforward way to pass
it to the tunnel stack. Retrieving it via callback is another option,
but retrieving return values with the `jni` crate was more complex. We
could have used a similar approach to the one we used in the Apple
client (enumerating all fds in the `new()` function until we found
ours), but passing the fd returned by `establish()` is [explicitly
documented/recommended](https://developer.android.com/reference/android/net/VpnService.Builder#establish())
by the Android docs, so I figured it's not likely to break.
Additionally, there was a thread-safety bug in the recent JNI callback
implementation that consistently crashed the VM with `JNI DETECTED ERROR
IN APPLICATION: use of invalid jobject...`. The fix was to use
`GlobalRef`, which has the explicit purpose of outliving the `JNIEnv`
lifetime, so that no `static` lifetimes need to be used.
---------
Signed-off-by: Jamil <jamilbk@users.noreply.github.com>
Co-authored-by: Pratik Velani <pratikvelani@gmail.com>
Co-authored-by: Gabi <gabrielalejandro7@gmail.com>
This PR fixes issues with the iOS client connecting to the portal and
setting up the tunnel.
- portal IPv6 unique-local prefix typo
- Use `rustls-webpki-roots` instead of `rustls-native-roots` for
tokio-tungstenite, since the latter [only supports macOS, Linux, and
Windows](https://github.com/rustls/rustls-native-certs) while the former
seems to work on all platforms(?)
- Remove Multipath TCP entitlement for iOS since it's not relevant for
us.
@conectado After this is merged, we _almost_ have a working tunnel on
iOS. I believe the error we're hitting now is the 4-byte address family
header that we need to add to and strip from each packet written to /
read from the tunnel. See the log below for sample output when
attempting to connect to the `HTTPbin` resource:
```
dev.firezone.firezone.network-extension packet-tunnel debug 16:10:13.401705-0700 FirezoneNetworkExtensioniOS Adapter state changed to: tunnelReady
dev.firezone.firezone.network-extension packet-tunnel debug 16:10:13.401731-0700 FirezoneNetworkExtensioniOS Beginning path monitoring
com.apple.network path default 16:10:13.402211-0700 FirezoneNetworkExtensioniOS nw_path_evaluator_start [1ACDE975-615B-4557-BF7C-678F3594452E <NULL> generic, multipath service: 1, attribution: developer]
path: satisfied (Path is satisfied), interface: en0[802.11], scoped, ipv4, ipv6, dns
com.apple.network path info 16:10:13.402235-0700 FirezoneNetworkExtensioniOS nw_path_evaluator_call_update_handler [1ACDE975-615B-4557-BF7C-678F3594452E] scheduling update
com.apple.network path info 16:10:13.402261-0700 FirezoneNetworkExtensioniOS nw_path_evaluator_call_update_handler_block_invoke [1ACDE975-615B-4557-BF7C-678F3594452E] delivering update
com.apple.network debug 16:10:13.402286-0700 FirezoneNetworkExtensioniOS nw_path_copy_interface_with_generation Cache miss for interface for index 3 (generation 4574)
com.apple.network debug 16:10:13.402312-0700 FirezoneNetworkExtensioniOS nw_path_copy_interface_with_generation Cache miss for interface for index 31 (generation 141)
dev.firezone.firezone.network-extension packet-tunnel debug 16:10:13.402363-0700 FirezoneNetworkExtensioniOS Suppressing calls to disableSomeRoamingForBrokenMobileSemantics() and bumpSockets()
dev.firezone.firezone connlib debug 16:10:14.368105-0700 FirezoneNetworkExtensioniOS Reading from iface 76 bytes
dev.firezone.firezone connlib debug 16:10:15.369018-0700 FirezoneNetworkExtensioniOS Reading from iface 76 bytes
dev.firezone.firezone connlib debug 16:10:16.095618-0700 FirezoneNetworkExtensioniOS Reading from iface 76 bytes
dev.firezone.firezone connlib debug 16:10:16.370908-0700 FirezoneNetworkExtensioniOS Reading from iface 76 bytes
dev.firezone.firezone connlib debug 16:10:17.372035-0700 FirezoneNetworkExtensioniOS Reading from iface 76 bytes
dev.firezone.firezone connlib debug 16:10:18.373423-0700 FirezoneNetworkExtensioniOS Reading from iface 76 bytes
dev.firezone.firezone connlib debug 16:10:20.402863-0700 FirezoneNetworkExtensioniOS Reading from iface 76 bytes
dev.firezone.firezone connlib debug 16:10:24.381581-0700 FirezoneNetworkExtensioniOS Reading from iface 76 bytes
dev.firezone.firezone connlib debug 16:10:32.374566-0700 FirezoneNetworkExtensioniOS Reading from iface 76 bytes
dev.firezone.firezone connlib debug 16:10:38.137437-0700 FirezoneNetworkExtensioniOS Text("{\"ref\":null,\"topic\":\"phoenix\",\"event\":\"phx_reply\",\"payload\":{\"status\":\"ok\",\"response\":{}}}")
dev.firezone.firezone connlib debug 16:10:38.137757-0700 FirezoneNetworkExtensioniOS Phoenix status message
dev.firezone.firezone connlib debug 16:10:48.376339-0700 FirezoneNetworkExtensioniOS Reading from iface 76 bytes
dev.firezone.firezone connlib debug 16:11:08.148369-0700 FirezoneNetworkExtensioniOS Text("{\"ref\":null,\"topic\":\"phoenix\",\"event\":\"phx_reply\",\"payload\":{\"status\":\"ok\",\"response\":{}}}")
dev.firezone.firezone connlib debug 16:11:08.148654-0700 FirezoneNetworkExtensioniOS Phoenix status message
```
Fixes a small bug where `client_platform` wasn't being added to the
`redirect_params` in the magic link auth flow, so the token form input
was never shown.
Also adds a `hidden` input type that omits the `class=` attribute and
`div` wrapper.
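For the hidden input, the form component gains a clause along these
lines (a sketch; the `input/1` component shape follows common Phoenix
core-component conventions and is an assumption here):
```elixir
# Sketch: type="hidden" renders a bare <input>, with no class attribute
# and no wrapping div.
def input(%{type: "hidden"} = assigns) do
  ~H"""
  <input type="hidden" name={@name} id={@id} value={@value} />
  """
end
```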
Feel free to build off this or close and open a more thorough fix if
this is not the desired approach.
As a result of our discussion with @conectado, this PR adds a new
message type that allows reusing existing connections to a gateway when
accessing a new resource. We also change the LB strategy to be aware of
the device's current connections, so that we don't pick a different
gateway if an already-connected one can serve the new resource.
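Conceptually, gateway selection becomes something like the sketch below
(function and field names are illustrative, not the actual API):
```elixir
# Prefer a gateway this device already has a connection to, provided it
# is eligible to serve the resource; otherwise fall back to the usual
# load-balancing pick.
def select_gateway(eligible_gateways, connected_gateway_ids) do
  case Enum.find(eligible_gateways, &(&1.id in connected_gateway_ids)) do
    nil -> Enum.random(eligible_gateways)
    gateway -> gateway
  end
end
```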
---------
Co-authored-by: conectado <gabrielalejandro7@gmail.com>
Why:
* Policies are needed to make sure devices are allowed to connect to a
given resource.
---------
Signed-off-by: bmanifold <bmanifold@users.noreply.github.com>
Co-authored-by: Andrew Dryga <andrew@dryga.com>
Why:
* The previous Actor and Device LiveViews used static views and data as
a starting point for fleshing out the web UI. This commit builds on that
and replaces most of the static data with data from the database, as
well as updating the static LiveView templates to use components where
possible.
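The pattern, repeated in the Resource and Gateway commits below, is
roughly the following (a sketch; the context module and function names
are illustrative):
```elixir
# Sketch: mount/3 now loads rows from the database instead of rendering
# hardcoded fixtures.
def mount(_params, _session, socket) do
  {:ok, assign(socket, actors: Domain.Actors.list_actors())}
end
```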
Why:
* The previous Resource LiveViews used static views and data as a
starting point for fleshing out the web UI. This commit builds on that
and replaces most of the static data with data from the database, as
well as updating the static LiveView templates to use components where
possible.
Note: These changes are only meant to involve the Resource views
(index/new/show/edit). More changes to other resources will follow (e.g.
Users, Devices, etc.).
This updates the license for the admin portal (`elixir/`) to the Elastic
License v2, keeping other components Apache 2.0 licensed.
What does this mean for 1.0 going forward?
[Elastic's FAQ](https://www.elastic.co/licensing/elastic-license/faq) is
broadly applicable to Firezone as well. Most notably, MSPs may still use
Firezone to provide general remote access services for third-party
users; they just may not offer the Firezone admin portal (or REST API)
itself as a service.
### Why?
We would lose a little bit of business, though one could argue that the
tradeoff is worth it due to increased market exposure/distribution.
The main, tangible reasons for us today involve the negative impact this
has on our ability to reach product-market fit:
1. We lose the direct feedback channel with paying customers, isolating
them (and us) from our roadmap.
2. Reseller licenses should be offered as part of a proper partner
alliance / reseller program when we have the resources to support it,
which will result in a much better experience for all parties involved
(and restore the lost feedback channel).
3. Having outdated, unpatched, and potentially buggy Firezone instances
running in the wild that we have no visibility or insight into is a
major liability to our brand and reputation and may even result in a
legal liability depending on the jurisdiction and severity of the issue.
See [this
example](https://aws.amazon.com/marketplace/pp/prodview-xgj7kkar35gus)
and [this
one](https://aws.amazon.com/marketplace/pp/prodview-jyd73dot3zrnw).
Why:
* The `show` pages for all of the Firezone resources (i.e. Gateways,
Resources, Devices, etc.) were all very similar, but each one explicitly
defined its own table markup and styling. This commit creates a
`vertical_table` component and a `vertical_table_row` component so the
styling is defined once and then consistently applied to each `show`
page.
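A rough shape of the two components (a sketch; the attr/slot names and
Tailwind classes are illustrative):
```elixir
# Sketch of the vertical table components; names and classes are
# illustrative. Assumes `use Phoenix.Component` for ~H and render_slot/1.
def vertical_table(assigns) do
  ~H"""
  <table class="w-full text-sm text-left text-gray-500">
    <tbody><%= render_slot(@inner_block) %></tbody>
  </table>
  """
end

def vertical_table_row(assigns) do
  ~H"""
  <tr class="border-b border-gray-200">
    <th scope="row" class="px-6 py-4 font-medium text-gray-900"><%= @label %></th>
    <td class="px-6 py-4"><%= render_slot(@inner_block) %></td>
  </tr>
  """
end
```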
Why:
* The previous Gateway LiveViews used static views and data as a
starting point for fleshing out the web UI. This commit builds on that
and replaces most of the static data with data from the database, as
well as updating the static LiveView templates to use components where
possible.
Note: These changes are only meant to involve the Gateway views
(index/show/edit). More changes to other resources will follow (e.g.
Resources, Users, Devices, etc.).
---------
Signed-off-by: bmanifold <bmanifold@users.noreply.github.com>
Co-authored-by: Andrew Dryga <andrew@dryga.com>
~~This is an attempt to fix the CI bug
[here](https://github.com/firezone/firezone/actions/runs/5491388141/jobs/10007864417#step:4:1638)
possibly introduced in
[d9eb2d18](https://github.com/firezone/firezone/commit/d9eb2d18#diff-88bd94db0d5cfd5f0617b7c4ed48c0212597378ed7e28714c5d86c95999b4c7dR29)
and uncovered / exacerbated in Elixir 1.15~~
Edit: looks like this ended up being a couple of cache issues with
GitHub Actions:
1. The `elixir_api-container-build` cache would always overwrite the
`elixir_web-container-build` cache on subsequent builds of the same
`github.ref_name` (the cache is scoped to the branch name by default),
leading to the consistent error `Elixir.Web.Mailer.NoopAdapter does not
exist` whenever a branch was pushed to more than once.
2. The same thing happens in the `integration_test-basic-flow` job,
because the `api` service gets built after the `web` service in
docker-compose.yml, overwriting its cache.
For some reason the `APPLICATION_NAME` ARG is not busting the Docker
cache properly on GitHub Actions for the Elixir container builds, so the
fix here was to [use
`scope=`](https://docs.docker.com/build/cache/backends/gha/#scope) to
segregate the cache layers between builds of the same branch, as
sketched below.
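Something along these lines in the build step (a sketch; the action
version, image names, and scope values are illustrative):
```yaml
# Sketch: give each image its own GHA cache scope so api and web builds
# on the same branch stop evicting each other's layers.
- uses: docker/build-push-action@v4
  with:
    context: elixir
    build-args: |
      APPLICATION_NAME=web
    cache-from: type=gha,scope=${{ github.ref_name }}-web
    cache-to: type=gha,mode=max,scope=${{ github.ref_name }}-web
```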
Looks like for some reason the `id/1` callback doesn't subscribe the
channel process any more (only the socket process itself), so we are
doing that explicitly now.
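I.e., something along these lines in the channel's `join/3` (a sketch;
the PubSub server, topic, and assign names are illustrative):
```elixir
# Subscribe the channel process itself; the socket's id/1 subscription
# no longer covers it.
def join("client", _payload, socket) do
  :ok = Phoenix.PubSub.subscribe(Domain.PubSub, "sessions:#{socket.assigns.token_id}")
  {:ok, socket}
end
```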
This PR fixes `docker compose up`. It doesn't get the test client ->
resource flow working yet, but it prevents anything from erroring at
startup.
This fixes:
* tokens (use the correct token for the client user agent we are using)
* randomize `name_suffix` at startup for connlib (we will eventually
allow options to set it manually)
* remove port ranges for the relay (see firezone/product#613)
**Update CONTRIBUTING.md**
Why:
* The CONTRIBUTING.md doc seems to have fallen slightly out of date with
how Firezone now works. This commit updates the doc to provide a
quick start guide for getting all of the various Firezone components
up and running as quickly as possible. The doc then links to the more
specific `Elixir` and `Rust` README.md files in the respective
directories to help developers who would like to contribute.
**Update docker-compose vault health check**
Why:
* The current Vault health check listed in the docker-compose file does
not seem to work when using `localhost` in the `wget` command, likely
because `localhost` can resolve to IPv6 `::1` inside the container while
Vault listens on IPv4. Updating the URL to use `127.0.0.1` seems to have
fixed it.
---------
Signed-off-by: bmanifold <bmanifold@users.noreply.github.com>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
Did some research on status page providers to manage incidents.
statuspage.io seems easy to use and cost-effective, is fairly popular,
and provides a good amount of flexibility to customize emails,
notifications, etc.
It's super easy to set up and use, but I'm not married to it if anyone
feels strongly about using another incident management service.
https://firezone.statuspage.io
## Demo:
<img width="235" alt="Screenshot 2023-06-27 at 8 07 29 AM"
src="https://github.com/firezone/firezone/assets/167144/8ad12b9b-7345-4a5d-bf43-c8af798d85f9">