This updates the license for the admin portal (`elixir/`) to the Elastic
License v2, keeping other components Apache 2.0 licensed.
What does this mean for 1.0 going forward?
[Elastic's FAQ](https://www.elastic.co/licensing/elastic-license/faq) is
broadly applicable to Firezone as well. Most notably, MSPs may still use
Firezone to provide general remote access services to third-party users;
they just may not give those users access to the Firezone admin portal
itself (or its REST API).
### Why?
Keeping everything Apache 2.0 licensed would lose us a little bit of
business, though one could argue that tradeoff is worth it due to
increased market exposure/distribution. The main, tangible reasons for
the change today involve the negative impact the status quo has on our
ability to reach product-market fit:
1. We lose the direct feedback channel with paying customers, isolating
them (and us) from our roadmap.
2. Reseller licenses should be offered as part of a proper partner
alliance / reseller program when we have the resources to support it,
which will result in a much better experience for all parties involved
(and restore the lost feedback channel).
3. Having outdated, unpatched, and potentially buggy Firezone instances
running in the wild that we have no visibility or insight into is a
major liability to our brand and reputation, and may even result in
legal liability depending on the jurisdiction and severity of the issue.
See [this
example](https://aws.amazon.com/marketplace/pp/prodview-xgj7kkar35gus)
and [this
one](https://aws.amazon.com/marketplace/pp/prodview-jyd73dot3zrnw).
Why:
* The `show` pages for all of the Firezone resources (e.g. Gateways,
Resources, Devices) were all very similar, but each was defined as its
own table with its styling duplicated inline. This commit creates a
`vertical_table` component and a `vertical_table_row` component so the
styling is defined once and applied consistently to each `show` page.
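For illustration, a minimal sketch of what such components could look
like (the module name, attributes, and utility classes here are
assumptions, not the actual implementation):

```elixir
defmodule Web.Components.VerticalTable do
  use Phoenix.Component

  @doc "Two-column layout shared by the resource `show` pages."
  slot :inner_block, required: true

  def vertical_table(assigns) do
    ~H"""
    <table class="w-full text-sm text-left text-gray-500">
      <tbody><%= render_slot(@inner_block) %></tbody>
    </table>
    """
  end

  attr :label, :string, required: true
  slot :inner_block, required: true

  def vertical_table_row(assigns) do
    ~H"""
    <tr class="border-b">
      <th scope="row" class="px-6 py-4 font-medium text-gray-900"><%= @label %></th>
      <td class="px-6 py-4"><%= render_slot(@inner_block) %></td>
    </tr>
    """
  end
end
```

A `show` page then renders `<.vertical_table_row label="...">` rows
inside a single `<.vertical_table>`, with no per-page styling.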
Why:
* The previous Gateway LiveViews used static views and data as a
starting point for fleshing out the web UI. This commit builds on that,
replacing most of the static data with data from the database and
updating the static LiveView templates to use components where
possible; a rough sketch of the pattern is shown below.
Note: These changes are only meant to involve the Gateway views
(index/show/edit). More changes to other resources will follow (e.g.
Resources, Users, Devices).
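As an illustration only (the context module and function names here are
assumptions, not the actual code):

```elixir
defmodule Web.Live.Gateways.Index do
  use Web, :live_view

  # Hypothetical data-access context; the real API may differ.
  alias Domain.Gateways

  @impl true
  def mount(_params, _session, socket) do
    # Load rows from the database instead of the previous hard-coded fixtures.
    {:ok, assign(socket, :gateways, Gateways.list_gateways())}
  end
end
```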
---------
Signed-off-by: bmanifold <bmanifold@users.noreply.github.com>
Co-authored-by: Andrew Dryga <andrew@dryga.com>
~~This is an attempt to fix the CI bug
[here](https://github.com/firezone/firezone/actions/runs/5491388141/jobs/10007864417#step:4:1638)
possibly introduced in
[d9eb2d18](https://github.com/firezone/firezone/commit/d9eb2d18#diff-88bd94db0d5cfd5f0617b7c4ed48c0212597378ed7e28714c5d86c95999b4c7dR29)
and uncovered / exacerbated in Elixir 1.15~~
Edit: looks like this ended up being a couple of cache issues with
GitHub Actions:
1. The `elixir_api-container-build` cache would always overwrite the
`elixir_web-container-build` cache on subsequent builds of the same
`github.ref_name` (the cache is scoped to the branch name by default),
leading to the consistent error `Elixir.Web.Mailer.NoopAdapter does not
exist` whenever a branch was pushed to more than once.
2. The same thing happened with the `integration_test-basic-flow` job,
because the `api` service gets built after the `web` service in
docker-compose.yml, overwriting its cache.
For some reason the `APPLICATION_NAME` ARG is not busting the Docker
cache properly on GitHub Actions for the Elixir container builds, so
the fix here was to [use
`scope=`](https://docs.docker.com/build/cache/backends/gha/#scope) to
segregate the cache layers between builds of the same branch.
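In the workflow, the fix looks roughly like this (a sketch; the actual
step, context, and scope names may differ):

```yaml
- uses: docker/build-push-action@v4
  with:
    context: elixir
    build-args: |
      APPLICATION_NAME=api
    # Scope the GHA cache per application so builds on the same branch
    # no longer overwrite each other's layers.
    cache-from: type=gha,scope=elixir_api-container-build
    cache-to: type=gha,scope=elixir_api-container-build,mode=max
```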
Looks like for some reason the `id/1` callback no longer subscribes the
channel process (only the socket itself), so we are doing that
explicitly now.
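A minimal sketch of the explicit subscription in the channel's `join/3`
(the module, topic, and PubSub server names are assumptions):

```elixir
defmodule API.ClientChannel do
  use Phoenix.Channel

  @impl true
  def join(_topic, _payload, socket) do
    # The socket's id/1 no longer implies a PubSub subscription for this
    # process, so subscribe the channel process explicitly.
    :ok = Phoenix.PubSub.subscribe(API.PubSub, "client:#{socket.assigns.client_id}")
    {:ok, socket}
  end
end
```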
This PR fixes `docker compose up`. It doesn't get the test client ->
resource flow working yet, but it does prevent anything from erroring
at startup.
This fixes:
* tokens (use the correct token for the client user agent we are using)
* randomize `name_suffix` at startup for connlib (we will eventually
allow an option to set it manually)
* remove port ranges for the relay (see firezone/product#613)
**Update CONTRIBUTING.md**
Why:
* The CONTRIBUTING.md doc seems to have fallen slightly out of date with
how Firezone now works. This commit updates the doc to provide a
quick start guide for getting all of the various Firezone components
up and running as quickly as possible. The doc then links to the more
specific `Elixir` and `Rust` README.md files in the respective
directories to help developers who would like to contribute.
**Update docker-compose vault health check**
Why:
* The current Vault health check listed in the docker-compose file does
not seem to be working when using `localhost` in the `wget` command,
possibly because `localhost` resolves to an IPv6 address the listener
isn't bound to. Updating the URL to use `127.0.0.1` seems to have fixed
it.
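The updated check looks roughly like this (a sketch; the image tag,
port, and endpoint are assumptions):

```yaml
vault:
  image: vault:1.13.3
  healthcheck:
    # 127.0.0.1 forces IPv4; `localhost` may resolve to ::1, which the
    # Vault listener isn't bound to.
    test: ["CMD", "wget", "--spider", "http://127.0.0.1:8200/v1/sys/health"]
    interval: 10s
    retries: 5
```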
---------
Signed-off-by: bmanifold <bmanifold@users.noreply.github.com>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>
Did some research on status page providers to manage incidents.
statuspage.io seems easy to use and cost-effective, is fairly popular,
and provides a good amount of flexibility to customize emails,
notifications, etc.
Super easy to set up and use, but I'm not married to it if anyone feels
strongly about using another incident management service.
https://firezone.statuspage.io
## Demo:
<img width="235" alt="Screenshot 2023-06-27 at 8 07 29 AM"
src="https://github.com/firezone/firezone/assets/167144/8ad12b9b-7345-4a5d-bf43-c8af798d85f9">
@AndrewDryga ~~Was still hitting some redirect issues so I'll wait for
those to be resolved before continuing on building more views.~~ Edit:
After some sleep and coffee, I figured it out. Nice work on the sign-in
form!
I went ahead and scoped existing dashboard links with `@account` and
fixed a dark mode issue -- you may want to cherry-pick those commits.
I'll add these to authenticated routes and integrate into what you have
so far.
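For context, the `@account` scoping means building dashboard paths
under the current account, roughly like this (a hypothetical route
using verified routes; the real paths may differ):

```heex
<%!-- Before: a global path --%>
<.link navigate={~p"/gateways"}>Gateways</.link>
<%!-- After: scoped to the current account --%>
<.link navigate={~p"/#{@account}/gateways"}>Gateways</.link>
```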
As I was going through last night exploring your route approach I
thought of some edge cases; we can discuss next week. I think the main
one that came to mind was that we probably want to differentiate between
login flows initiated directly in the browser (this is an admin logging
into the dashboard) vs. login flows initiated from a client app (these
will terminate with a final redirect to the respective whitelisted
`dest` URL). Maybe it makes sense to segregate these flows?
If a regular user tries to log in directly from the browser, maybe we
want to show them something like "Please log in from your Firezone
application instead", since they should only be able to initiate logins
from a client application. Or maybe there's simply no way to end up at
the final Android App Link or `firezone://` URI with a login initiated
directly from the browser?
This brings connlib from its own separate repo into firezone's monorepo.
On top of bringing in connlib, we also add and unify the Dockerfile for
all Rust binaries and add a docker-compose file that can run a headless
client, a relay, and a gateway, which eventually will test the whole
flow between a client and a resource. For this to work we also
incorporated some Elixir scripts to generate portal tokens for those
components. A sketch of the topology is shown below.
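Illustrative only (the service names, build args, and env vars here are
assumptions, not the actual compose file):

```yaml
services:
  relay:
    build: { context: rust, args: { PACKAGE: relay } }
    environment:
      PORTAL_TOKEN: ${RELAY_TOKEN} # generated by the Elixir scripts
  gateway:
    build: { context: rust, args: { PACKAGE: gateway } }
    environment:
      PORTAL_TOKEN: ${GATEWAY_TOKEN}
  client: # headless client that will eventually reach a resource
    build: { context: rust, args: { PACKAGE: client } }
    environment:
      PORTAL_TOKEN: ${CLIENT_TOKEN}
```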
Did some research when picking a package manager for the website and
settled on `pnpm` for the following reasons:
- CLI-compatible with `npm`
- Typically faster than even `yarn`, especially on Apple silicon
- Security: pnpm uses a different dependency-resolution algorithm and a
different `node_modules` folder structure that prevents illegal access
to packages by other packages.
I think I caught all the places, but I may be missing something, so if
this isn't a good idea we can revert.
This PR also cleans up the actions workflows to remove dead code.
TODO:
- [x] Cluster formation for all API and web nodes
- [x] Ingest Docker logs to Stackdriver
- [x] Fix assets building for prod
To finish later:
- [ ] Structured logging:
https://issuetracker.google.com/issues/285950891
- [ ] Better networking policy (e.g. use public Postmark ranges and
deny all unwanted egress)
- [ ] OpenTelemetry collector for Google Stackdriver
- [ ] LoggerJSON.Plug integration
---------
Signed-off-by: Andrew Dryga <andrew@dryga.com>
Co-authored-by: Jamil <jamilbk@users.noreply.github.com>