- If we encounter a deadlock or long-running test it is better to have
`go test` itself time out. As we've noticed, when we hit the GitHub step
timeout we lose all information about what was running at the time,
which makes the failure much harder to diagnose.
- Having the timeout come from `go test` itself means that for a
long-running test it prints which test was running along with a full
panic output in the logs, which is quite useful for diagnosis.
In order for our enterprise nightlies to run the same test-go job
across a matrix of different base references, we need to incorporate the
checkout ref into our failure and summary uploads to prevent an upload
race.
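One way to do that (a sketch only; the step and variable names are
hypothetical) is to fold the ref into the artifact name:

```sh
# Hypothetical step: derive a per-ref artifact name so matrix entries
# checking out different base refs don't race on the same upload.
# CHECKOUT_REF is assumed to come from the workflow matrix.
safe_ref="${CHECKOUT_REF//\//-}"   # e.g. release/1.16.x -> release-1.16.x
echo "artifact-name=test-go-results-${safe_ref}" >> "$GITHUB_OUTPUT"
```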
We also configure Git with our token before setting up Go so that
enterprise CI workflows can execute without downloading a module cache.
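A minimal sketch of that ordering, assuming a token with access to the
private modules is available as GITHUB_TOKEN:

```sh
# Hypothetical bootstrap: configure Git credentials before any
# `go mod download` so private hashicorp modules resolve even when no
# module cache was restored.
git config --global \
  url."https://${GITHUB_TOKEN}@github.com/hashicorp/".insteadOf \
  "https://github.com/hashicorp/"
export GOPRIVATE='github.com/hashicorp/*'
```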
Signed-off-by: Ryan Cragun <me@ryan.ec>
Update hashicorp/actions-packaging-linux to our rewritten version
that no longer requires building a Docker container or relying on code
hosted in a non-hashicorp repo for packaging.
As internal actions are not managed in the same manner as external
actions via the tsccr trusted components db, the tsccr helper is
unable to easily re-pin hashicorp/* actions. As such, we unpin some
pinned hashicorp/* actions so that they automatically pull in
compatible updates.
Signed-off-by: Ryan Cragun <me@ryan.ec>
Pin to the latest actions in preparation for the migration to
`actions/upload-artifact@v4`, `actions/download-artifact@v4`, and
`hashicorp/actions-docker-build@v2` on May 6 or 7.
Signed-off-by: Ryan Cragun <me@ryan.ec>
Context
-------
Building and testing Vault artifacts on pull requests and merges is
responsible for about a third of our overall spend on Vault CI. Of the
artifacts that we ship as part of a release, we run Enos test scenarios
against the `linux/amd64` and `linux/arm64` binaries and their derivative
artifacts. The extended build artifacts for non-Linux platforms or less
common machine architectures are not tested at this time, yet they are
built, notarized, and signed as part of every pull request update and
merge. As we don't actually test these artifacts, the only gain from this
rather expensive behavior is that we won't merge a change that prevents
Vault from building on one of the extended targets. Extended platform or
architecture changes are quite rare, so performing this work as frequently
as we do is costly in both money and developer time for little relative
safety benefit.
Goals
-----
Rethink and implement how and when we build binaries and artifacts of Vault
so that we spend less money on repetitive work while also reducing the time
it takes for the build and test pipelines to complete.
Solution
--------
Instead of building all release artifacts on every push, we'll opt to build
only our testable (core) artifacts. With this change we are introducing a
bit of risk. We could merge a change that breaks an extended platform and
only find out after the fact when we trigger a complete build for a release.
We'll hedge against that risk by building all of the release targets on a
scheduled cadence to ensure that they are still buildable.
We'll make building all of the targets optional on any pull request by
applying a `build/all` label to the pull request.
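For illustration, the gate could be as simple as a label check over the
event payload (a hypothetical shape; the real condition lives in the
workflow definitions):

```sh
# Hypothetical gate: only build the extended targets when the PR has
# the build/all label. GITHUB_EVENT_PATH is provided by GitHub Actions.
if jq -e '[.pull_request.labels[]?.name] | index("build/all")' \
    "$GITHUB_EVENT_PATH" > /dev/null; then
  echo "build-all=true" >> "$GITHUB_OUTPUT"
else
  echo "build-all=false" >> "$GITHUB_OUTPUT"
fi
```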
Further considerations
----------------------
* We want to reduce the total number of workflows and runners for all of our
pipelines if possible. As each workflow runner has infrastructure cost and
runner time penalties, using a single runner over many is often preferred.
* Many of our job runners have been optimized for cost and performance. We
should simplify the choice of which runners to use.
* CRT requires us to use the same build workflow in both CE and Ent.
Historically that meant that modifying `build.yml` in CE would result in a
merge conflict with `build.yml` in Ent, and break our merge workflows.
* Workflow flow control in both `build.yml` and `ci.yml` can be quite
complicated, as each needs to maintain compatibility whether executed as CE
or Ent, and when triggered by various GitHub events like pull_request,
push, and workflow_call, each with their own requirements.
* Many jobs utilize similar patterns of flow control and metadata but are not
reusable.
* Workflow call depth has a maximum of four, so we need to be quite
considerate when calling other workflows.
* Called workflows can only have 10 inputs.
Implementation
--------------
* Refactor the `build.yml` workflow to be agnostic to whether or not it is
executing in CE or Ent. That makes future updates to the build much easier
as we won't have to worry about merge conflicts when the change is merged
downstream.
* Extract common steps in workflows into composite actions that we can reuse.
* Fix bugs where some but not all workflows would use different Git
references when building and testing a pull request.
* Rewrite the application, docs, and UI change helpers as a composite
action. This allows us to reuse this logic to make consistent behavior
choices across build and CI.
* Combine several `build.yml` and `ci.yml` jobs into our final job.
This reduces the number of workflows required for the same behavior while
saving time overall.
* Update most of our action pins.
Results
-------
| Metric | Before | After | Diff |
|-------------------|----------|---------|-------|
| Duration: | ~14-18m | ~15-18m | ~ = |
| Workflows: | 43 | 18 | - 58% |
| Billable time: | ~1h15m | 16m | - 79% |
| Saved artifacts: | 34 | 12 | - 65% |
Infra costs should map closely to billable time.
Network I/O costs should map closely to the workflow count.
Storage costs should map directly with saved artifacts.
We could probably get parity with duration by getting more clever with
our UBI container build, as that's where we're seeing the increase. I'm
not yet concerned as it takes roughly the same time for this job to
complete as it did before.
While the CI workflow was not the focus of this PR, some shared
refactoring does show marginal improvements there.
| Metric | Before | After | Diff |
|-------------------|----------|----------|--------|
| Duration: | ~24m | ~12.75m | - 15% |
| Workflows: | 55 | 47 | - 8% |
| Billable time: | ~4h20m | ~3h36m | - 7% |
Further focus on streamlining the CI workflows would likely result in a
few more marginal improvements, but nothing on the order of what we've
seen with the build workflow.
Signed-off-by: Ryan Cragun <me@ryan.ec>
This PR introduces a new testonly endpoint for introspecting the
RequestLimiter state. It makes use of the endpoint to verify that changes to
the request_limiter config are honored across reload.
In the future, we may choose to make the sys/internal/request-limiter/status
endpoint available in normal binaries, but this is an expedient way to expose
the status for testing without having to rush the design.
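With a testonly build, verifying the reload boils down to reading that path
before and after the config change, along these lines (illustrative only;
the response fields are whatever the new endpoint returns):

```sh
# Illustrative read of the testonly endpoint; assumes a test cluster
# address in VAULT_ADDR and a valid token in VAULT_TOKEN.
curl --silent \
  --header "X-Vault-Token: ${VAULT_TOKEN}" \
  "${VAULT_ADDR}/v1/sys/internal/request-limiter/status"
```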
In order to re-use as much of the existing command package utility
functionality as possible without introducing sprawling code changes, I
introduced a new server_util.go and exported some fields via accessors.
The tests shook out a couple of bugs (including a deadlock and lack of
locking around the core limiterRegistry state).
The actions/upload-artifact action does not support filenames with
special characters as it needs to maintain restore compatibility with
NTFS filesystems. Instead of uploading raw log files, which can inherit
names with special characters and break the upload, we tar them all
together to preserve their names and upload the resulting tarball.
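The packaging step amounts to something like this (paths here are
illustrative):

```sh
# Hypothetical packaging step: the original log file names, special
# characters and all, survive inside the tarball, and only the tarball
# name is ever seen by actions/upload-artifact.
tar -czf test-logs.tar.gz -C "${LOG_DIR:-./test-logs}" .
```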
Signed-off-by: Ryan Cragun <me@ryan.ec>
We're on a quest to reduce our pipeline execution time, both to enhance
our developer productivity and to reduce the overall cost of the CI
pipeline. The strategy we use here reduces workflow execution time and
network I/O cost by reducing our module cache size and using pre-built
binaries for external tools when possible. We no longer download modules
and build many of the external tools thousands of times a day.
Our previous process of installing internal and external developer tools
was scattered and inconsistent. Some tools were installed via
`go generate -tags tools ./tools/...`, others via various `make` targets,
and some only in GitHub Actions workflows. This process led to some
undesirable side effects:
* The modules of some dev and test tools were included with those
of the Vault project. This meant managing our own Go module
dependencies alongside those of external tools. Prior to Go 1.16 this
was the recommended way to handle external tools, but now
`go install tool@version` is the recommended way to install external
tools that need to be built from source, as it supports pinning
specific versions without modifying go.mod (see the sketch after this
list).
* Due to GitHub cache constraints we combine our build and test Go
module caches together, but having our developer tools as deps in
our module results in a larger cache which is downloaded on every
build and test workflow runner. Removing the external tools that were
included in our go.mod reduced the expanded module cache size by
~300MB, thus saving time and network I/O costs when downloading
the module cache.
* Not all of our developer tools were included in our modules. Some were
being installed with `go install` or `go run`, so they didn't take
advantage of a single module cache. This resulted in us downloading
Go modules on every CI and Build runner in order to build our
external tools.
* Building our developer tools from source in CI is slow. Where possible
we now prefer pre-built binaries in CI workflows: no more module
downloads or tool compiles if we can avoid them.
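As a concrete example of the `go install tool@version` pattern referenced
above (the tool and version shown are placeholders, not our actual pins):

```sh
# Installs a specific tool version into GOBIN without touching the
# project's go.mod; the module path and version are placeholders.
go install gotest.tools/gotestsum@v1.11.0
```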
I've refactored how we define internal and external build tools
in our Makefile and added several new targets to handle both building
the developer tools locally for development and verifying that they are
available. This allows for an easy developer bootstrap while also
supporting installation of many of the external developer tools from
pre-built binaries in CI. This reduces our network I/O and run time
across nearly all of our actions runners.
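A rough sketch of the prefer-a-binary idea behind the tools.sh helper
(function, variable, and path names here are hypothetical, not the actual
script contents):

```sh
#!/usr/bin/env bash
# Hypothetical helper: use an already-installed or pre-built binary in
# CI and only fall back to building the tool from source.
install_tool() {
  local name="$1" version="$2" module="$3" release_url="$4"
  local bin_dir="${GOBIN:-$HOME/go/bin}"

  if command -v "$name" > /dev/null; then
    return 0 # already present, e.g. baked into the runner image
  fi

  mkdir -p "$bin_dir"
  if [ -n "${CI:-}" ] && curl --fail --silent --location \
      "$release_url" --output "${bin_dir}/${name}"; then
    chmod +x "${bin_dir}/${name}"     # pre-built binary, no compile
  else
    go install "${module}@${version}" # fall back to a source build
  fi
}
```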
While working on this I caught and resolved a few unrelated issues:
* Both our Go and Proto format checks were being run incorrectly. In
CI they were writing changes but not failing if changes were
detected. The Go check was less of a problem as we have git hooks that
are intended to enforce formatting, however we had drifted over time.
* Our Git hooks couldn't handle removing a Go file without failing. I
moved the diff check into the new Go helper and updated it to handle
removed files.
* I combined a few separate scripts into helpers and added a few
new capabilities.
* I refactored how we install Go modules to make it easier to download
and tidy all of the project's go.mod files.
* Refactor our internal and external tool installation and verification
into a tools.sh helper.
* Combined more complex Go verification into `scripts/go-helper.sh` and
utilize it in the `Makefile` and git commit hooks.
* Add `Makefile` targets for executing our various tools.sh helpers.
* Update our existing `make` targets to use new tool targets.
* Normalize our various scripts and targets output to have a consistent
output format.
* In CI, install many of our external dependencies as binaries wherever
possible. When not possible we'll build them from scratch but not mess
with the shared module cache.
* [QT-641] Remove our external build tools from our project Go modules.
* [QT-641] Remove extraneous `go list` calls from our `set-up-to` composite
action.
* Fix formatting and regen our protos
Signed-off-by: Ryan Cragun <me@ryan.ec>
* Pulls in github.com/go-secure-stdlib/plugincontainer@v0.3.0 which exposes a new `Config.Rootless` option to opt in to extra container configuration options that allow establishing communication with a non-root plugin within a rootless container runtime.
* Adds a new "rootless" option for plugin runtimes, so Vault needs to be explicitly told whether the container runtime on the machine is rootless or not. It defaults to false as rootless installs are not the default.
* Updates `run_config.go` to use the new option when the plugin runtime is rootless.
* Adds a new `-rootless` flag to `vault plugin runtime register`, and a `rootless` API option to the register API (see the example after this list).
* Adds rootless Docker installation to CI to support tests for the new functionality.
* Minor test refactor to minimise the number of test Vault cores that need to be made for the external plugin container tests.
* Documentation for the new rootless configuration and the new (reduced) set of restrictions for plugin containers.
* As well as adding rootless support, we've decided to drop explicit support for podman for now; there's no barrier other than support burden to adding it back in future, so it will depend on demand.
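Registering a rootless container runtime then looks roughly like this (only
`-rootless` is the new flag; the runtime name and other flag values are
illustrative):

```sh
# Illustrative registration on a host whose container engine runs
# rootless; values other than -rootless=true are placeholders.
vault plugin runtime register \
  -type=container \
  -oci_runtime=runsc \
  -rootless=true \
  runsc
```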
Fix missing log files: we need to use an absolute path, since go test chdirs into the test package dir before running tests. Move the cleanup-on-success behaviour from NewTestCluster into NewTestLogger so it applies more broadly.
We can't use `sudo` on our self-hosted runners at the moment to do
the install and Docker reload.
So, we'll disable this for now, which should automatically cause
the gVisor-related tests to be skipped.
* Also makes plugin directory optional when registering container plugins
* And threads plugin runtime settings through to plugin execution config
* Add runsc to the GitHub runner for plugin container tests
* adding testonly CI test job
* small instance for testonly tests
* feedback
* shopt
* disable glob expansion
* revert back to a large instance
* fix a mistake