As the Vault pipeline and release processes evolve over time, so too must the tooling that drives them. Historically we've relied on a combination of CI features and shell scripts wrapped in make targets. While this
approach has worked, it requires careful consideration of which features to use (bash in CI rarely matches bash on developer machines, etc.) and often requires a deep understanding of several CLI tools (jq, etc.). `make` itself also has limitations in user experience, e.g. passing flags.
As we're all in on Github Actions as our pipeline coordinator, continuing to utilize and build CLI tools to perform our pipeline tasks makes sense. This PR adds a new CLI tool called `pipeline` which we can use to build new isolated tasks that we can string together in Github Actions. We intend to use this utility as the interface for future release automation work, see VAULT-27514.
As the first tasks in this new `pipeline` tool, I've built two small sub-commands (example invocations are sketched below):
* `pipeline releases list-versions` - Lists Vault versions within a range. The range is configurable either by setting `--upper` and/or `--lower` bounds, or by using `--nminus` to go back N versions from the current branch's version. As CE and ENT do not have version parity we also consider the `--edition`, along with zero or more `--skip` flags to exclude specific versions.
* `pipeline generate enos-dynamic-config` - Creates dynamic enos configuration based on the branch and the current list of release versions. It takes largely the same flags as `releases list-versions`, but also expects a `--dir` for the enos directory and a `--file` where the dynamic configuration will be written. This lets us dynamically feed the latest versions into our sampling algorithm to get coverage over all supported prior versions.
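Rough example invocations (the `go run ./tools/pipeline` entry point and the concrete versions are assumptions for illustration; the flags are those described above):
```
# List CE versions between a lower bound and the current branch's version,
# skipping a known-bad release (values are illustrative only).
go run ./tools/pipeline releases list-versions \
  --edition ce --lower 1.13.0 --skip 1.13.1

# Regenerate the dynamic enos config, going back three minor versions.
go run ./tools/pipeline generate enos-dynamic-config \
  --edition ce --nminus 3 --dir ./enos --file dynamic_config.hcl
```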
We then integrate these new tools into the pipeline and cache the dynamic config on a weekly basis. We also cache the `pipeline` tool itself, as it will likely become a home for more pipeline-specific tooling; caching it keeps most workflows that require it fast.
Signed-off-by: Ryan Cragun <me@ryan.ec>
* add gosimport to make fmt and run it
* move installation to tools.sh
* correct weird spacing issue
* Update Makefile
Co-authored-by: Nick Cabatoff <ncabatoff@hashicorp.com>
* fix a weird issue
---------
Co-authored-by: Nick Cabatoff <ncabatoff@hashicorp.com>
Git doesn’t allow hooks to live in-repo, which prevents branch-specific hooks.
To get around this we’ve historically copied our hooks from .hooks into
.git/hooks when running `make prep` in vault and vault-enterprise.
That sort of works but has the following issues:
* If your hooks call into files in-repo and those files are modified between
branches, you have to re-sync the hooks to resolve it
* Remembering to sync the hooks is cumbersome
We can’t entirely avoid the first issue: if you change branches and don’t
update your hooks, you can still run into this problem the next time you
try to commit. But we can make it less likely to fail by:
* Always syncing the hooks whenever `make` is called (a sketch of such a sync follows this list)
* Updating the hook files on all maintained branches to be consistent
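A minimal sketch of the sync itself, assuming only what's described above (copying .hooks into .git/hooks):
```
#!/usr/bin/env bash
# Sketch: copy the tracked hooks into .git/hooks so they always match the branch.
set -euo pipefail

repo_root="$(git rev-parse --show-toplevel)"
for hook in "$repo_root"/.hooks/*; do
  install -m 0755 "$hook" "$repo_root/.git/hooks/$(basename "$hook")"
done
```
Invoking something like this from a prerequisite that every `make` target depends on gives the "always sync" behavior described above.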
Signed-off-by: Ryan Cragun <me@ryan.ec>
We're on a quest to reduce our pipeline execution time, both to enhance
developer productivity and to reduce the overall cost of the CI
pipeline. The strategy we use here reduces workflow execution time and
network I/O cost by shrinking our module cache and using pre-built
binaries for external tools when possible. We no longer download modules
and build many of the external tools thousands of times a day.
Our previous process of installing internal and external developer tools
was scattered and inconsistent. Some tools were installed via
`go generate -tags tools ./tools/...`, others via various `make` targets,
and some only in Github Actions workflows. This process led to some
undesirable side effects:
* The modules of some dev and test tools were included with those
of the Vault project. This meant managing our own Go modules
alongside those of external tools. Prior to Go 1.16 this was the
recommended way to handle external tools, but now
`go install tool@version` is the recommended way to install
external tools that need to be built from source, as it supports
pinning specific versions without modifying go.mod (see the sketch
after this list).
* Due to Github cache constraints we combine our build and test Go
module caches together, but having our developer tools as deps in
our module results in a larger cache which is downloaded on every
build and test workflow runner. Removing the external tools that were
included in our go.mod reduced the expanded module cache size
by ~300MB, thus saving time and network I/O costs when downloading
the module cache.
* Not all of our developer tools were included in our modules. Some were
being installed with `go install` or `go run`, so they didn't take
advantage of a single module cache. This resulted in us downloading
Go modules on every CI and Build runner in order to build our
external tools.
* Building our developer tools from source in CI is slow. Where possible
we now prefer pre-built binaries in CI workflows: no module downloads
or tool compiles if we can avoid them.
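For reference, the `go install tool@version` pattern mentioned above looks roughly like this (the tools and versions shown are only illustrative):
```
# Installs pinned tool versions into GOBIN without touching the project's go.mod.
go install mvdan.cc/gofumpt@v0.6.0
go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.33.0
```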
I've refactored how we define internal and external build tools
in our Makefile and added several new targets to handle both building
the developer tools locally for development and verifying that they are
available. This allows for easy developer bootstrapping while also
supporting installation of many of the external developer tools from
pre-built binaries in CI. This reduces our network I/O and run time
across nearly all of our Actions runners.
While working on this I caught and resolved a few unrelated issues:
* Both our Go and Proto format checks were being run incorrectly. In
CI they were writing changes but not failing if changes were
detected. The Go check was less of a problem, as we have git hooks
that are intended to enforce formatting, but we had drifted over time.
* Our Git hooks couldn't handle removing a Go file without failing. I
moved the diff check into the new Go helper and updated it to handle
removed files.
* I combined a few separate scripts into helpers and added a few
new capabilities.
* I refactored how we install Go modules to make it easier to download
and tidy all of the project's go.mod files.
* Refactor our internal and external tool installation and verification
into a tools.sh helper.
* Combined more complex Go verification into `scripts/go-helper.sh` and
utilize it in the `Makefile` and git commit hooks.
* Add `Makefile` targets for executing our various tools.sh helpers.
* Update our existing `make` targets to use new tool targets.
* Normalize the output of our various scripts and targets into a
consistent format.
* In CI, install many of our external dependencies as binaries wherever
possible. When not possible we'll build them from scratch but not mess
with the shared module cache.
* [QT-641] Remove our external build tools from our project Go modules.
* [QT-641] Remove extraneous `go list`'s from our `set-up-to` composite
action.
* Fix formatting and regen our protos
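The install-or-verify pattern behind the tools.sh helper is roughly the following sketch; the function and tool names here are hypothetical, not the helper's actual API:
```
# Install a Go-based tool only if it isn't already on the PATH.
install_if_missing() {
  local binary="$1" module="$2"
  if ! command -v "$binary" >/dev/null 2>&1; then
    echo "==> installing $binary"
    go install "$module"
  fi
}

install_if_missing gofumpt mvdan.cc/gofumpt@latest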
Signed-off-by: Ryan Cragun <me@ryan.ec>
Replace our prior implementation of Enos test groups with the new Enos
sampling feature. With this feature we're able to describe which
scenarios and variant combinations are valid for a given artifact and
allow enos to create a valid sample field (a matrix of all compatible
scenarios) and take an observation (select some to run) for us. This
ensures that every valid scenario and variant combination will
now be a candidate for testing in the pipeline. See QT-504[0] for further
details on the Enos sampling capabilities.
Our prior implementation only tested the amd64 and arm64 zip artifacts,
as well as the Docker container. We now include the following new artifacts
in the test matrix:
* CE Amd64 Debian package
* CE Amd64 RPM package
* CE Arm64 Debian package
* CE Arm64 RPM package
Each artifact includes a sample definition for both pre-merge/post-merge
(build) and release testing.
Changes:
* Remove the hand-crafted `enos-run-matrices` CI matrix targets and replace
them with per-artifact samples.
* Use enos sampling to generate different sample groups on all pull
requests.
* Update the enos scenario matrices to handle HSM and FIPS packages.
* Simplify enos scenarios by using shared globals instead of
cargo-culted locals.
Note: This will require coordination with vault-enterprise to ensure a
smooth migration to the new system. Integrating new scenarios or
modifying existing scenarios/variants should be much smoother after this
initial migration.
[0] https://github.com/hashicorp/enos/pull/102
Signed-off-by: Ryan Cragun <me@ryan.ec>
* adding new version bump refactoring
* address comments
* remove changes used for testing
* add the version bump event!
* fix local enos scenarios
* remove unnecessary local get_local_metadata steps from scenarios
* add version base, pre, and meta to the get_local_metadata module
* use the get_local_metadata module in the local builder for version
metadata
* update the version verifier to always require a build date
Signed-off-by: Ryan Cragun <me@ryan.ec>
* Update to embed the base version from the VERSION file directly into version.go.
This ensures that Go tests, local builds, and local enos runs all use the same (valid) version as CI.
We still want to be able to set a default metadata value in version_base.go, as this is not something we set in the VERSION file - we pass it in as an ldflag in CI (this matters more for ENT, but we want to keep these files in sync across repos; a sketch of the ldflag mechanism follows this list).
* update comment
* fixing bad merge
* removing actions-go-build as it won't work with the latest go caching changes
* fix logic for getting version in enos-lint.yml
* fix version number
* removing unneeded module
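For reference, injecting the metadata as an ldflag looks roughly like this; the exact package path and variable name are assumptions and may differ between CE and ENT:
```
# Set version metadata at build time instead of committing it to version_base.go.
go build -ldflags "-X github.com/hashicorp/vault/version.VersionMetadata=ent" -o bin/vault
```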
---------
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Claire <claire@hashicorp.com>
Co-authored-by: Ryan Cragun <me@ryan.ec>
* Migrate protobuf generation to Buf
Buf simplifies the generation story and allows us to lean
into other features in the Buf ecosystem, such as dependency
management, linting, breaking change detection, formatting,
and remote plugins (the relevant commands are sketched after this list).
* Format all protobuf files with buf
Also add a CI job to ensure formatting remains consistent
* Add CI job to warn on proto generate diffs
Some files were not regenerated with the latest version
of the protobuf binary. This CI job will ensure we always
detect when the protobuf files need regenerating.
* Add CI job for linting protobuf files
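The Buf workflow referenced above boils down to a handful of commands, roughly as follows (the repository's buf.yaml/buf.gen.yaml configuration is what actually drives them):
```
buf format -w               # rewrite .proto files in place with canonical formatting
buf lint                    # enforce the configured lint rules
buf generate                # regenerate code via the configured plugins
buf format -d --exit-code   # CI check: fail if any file is not formatted
```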
* fix multiline
* shellcheck, and success message for builds
* add full path
* cat the summary
* fix and faster
* fix if condition
* base64 in a separate step
* echo
* check against empty string
* add echo
* only use matrix ids
* only id
* echo matrix
* remove wrapping array
* tojson
* try echo again
* use jq to get packages
* don't quote
* only run binary tests once
* only run binary tests once
* test what's wrong with the binary
* separate file
* use matrix file
* failed test
* update comment on success
* correct variable name
* base64 fix
* output to file
* use multiline
* fix
* fix formatting
* fix newline
* fix whitespace
* correct body, remove comma
* small fixes
* shellcheck
* another shellcheck fix
* fix deprecation checker
* only run comments for prs
* Update .github/workflows/test-go.yml
Co-authored-by: Mike Palmiotto <mike.palmiotto@hashicorp.com>
* Update .github/workflows/test-go.yml
Co-authored-by: Mike Palmiotto <mike.palmiotto@hashicorp.com>
* fixes
---------
Co-authored-by: Mike Palmiotto <mike.palmiotto@hashicorp.com>
* Add backend format linting to pre-commit hook
By taking a slight penalty with each commit, we can ensure that
contributors follow the formatting rules by default (if they run hooks),
making accidental PRs without proper formatting less likely (a sketch
of such a check follows).
Additionally, fix gofmtcheck to align with the Makefile, fixing the
corresponding fmtcheck target for use with the hook.
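A minimal sketch of the kind of format check such a hook runs, assuming nothing beyond gofmt and staged files (the real hook differs in detail):
```
#!/usr/bin/env bash
# Fail the commit if any staged Go file is not gofmt-clean.
set -euo pipefail

staged="$(git diff --cached --name-only --diff-filter=ACM -- '*.go')"
[ -z "$staged" ] && exit 0

unformatted="$(echo "$staged" | xargs gofmt -l)"
if [ -n "$unformatted" ]; then
  echo "The following files need gofmt:" >&2
  echo "$unformatted" >&2
  exit 1
fi
```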
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Fix formatting errors
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
---------
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
Add a go:generate helper called stubmaker, which generates appropriate stubs on ent based on oss stubs, but only when needed (i.e., real ent funcs haven't been added yet).
* combine into one checker
* combine and simplify ci checks
* add to test package list
* remove testing test
* only run deprecations check
* only run deprecations check
* remove unneeded repo check
* fix bash options
* VAULT-15385 Add GHA that checks for nil, nil returns on functions that return an error
* VAULT-15385 add failing function, for sanity
* VAULT-15385 fix makefile
* VAULT-15385 remove test dir
* VAULT-15385 Fix typo
* VAULT-15385 fix job name
* VAULT-15385 Add test to packages
* VAULT-15835 add opt-out
* VAULT-15835 Wrong file for comment
* VAULT-15835 remove failing function
* VAULT-15835 return not nil-nil :)
* VAULT-15835 Restrict to two-result functions
* Add Helios Design System Components (#19278)
* adds hds dependency
* updates reset import path
* sets minifyCSS advanced option to false
* Remove node-sass (#19376)
* removes node-sass and fixes sass compilation
* fixes active tab li class
* Sidebar Navigation Components (#19446)
* links ember-shared-components addon and imports styles
* adds sidebar frame and nav components
* updates HcNav component name to HcAppFrame and adds sidebar UserMenu component
* adds tests for sidebar components
* fixes tests
* updates user menu styling
* fixes typos in nav cluster component
* changes padding value in sidebar stylesheet to use variable
* Replace and remove old nav components with new ones (#19447)
* links ember-shared-components addon and imports styles
* adds sidebar frame and nav components
* updates activeCluster on auth service and adds activeSession prop for sidebar visibility
* replaces old nav components with new ones in templates
* fixes sidebar visibility issue and updates user menu label class
* removes NavHeader usage
* adds clients index route to redirect to dashboard
* removes unused HcAppFrame footer block and reduces page header top margin
* Nav component cleanup (#19681)
* removes nav-header components
* removes navbar styling
* removes status-menu component and styles
* removes cluster and auth info components
* removes menu-sidebar component and styling
* fixes tests
* Console Panel Updates (#19741)
* updates console panel styling
* adds test for opening and closing the console panel
* updates console panel background color to use hds token
* adds right margin to console panel input
* updates link-status banner styling
* updates hc nav components to new API
* Namespace Picker Updates (#19753)
* updates namespace-picker
* updates namespace picker menu styling
* adds bottom margin to env banner
* updates class order on namespace picker link
* restores manage namespaces refresh icon
* removes manage namespaces nav icon
* removes home link component (#20027)
* Auth and Error View Updates (#19749)
* adds vault logo to auth page
* updates top level error template
* updates loading substate handling and moves policies link from access to cluster nav (#20033)
* moves console panel to bottom of viewport (#20183)
* HDS Sidebar Nav Components (#20197)
* updates nav components to hds
* upgrades project yarn version to 3.5
* fixes issues in app frame component
* updates sidenav actions to use icon button component
* Sidebar navigation acceptance tests (#20270)
* adds sidebar navigation acceptance tests and fixes other test failures
* console panel styling tweaks
* bumps addon version
* remove and ignore yarn install-state file
* fixes auth service and console tests
* moves classes from deleted files after bulma merge
* fixes sass syntax errors blocking build
* cleans up dart sass deprecation warnings
* adds changelog entry
* hides namespace picker when sidebar nav panel is minimized
* style tweaks
* fixes sidebar nav tests
* bumps hds addon to latest version and removes style override
* updates modify-passthrough-response helper
* updates sidebar nav tests
* mfa-setup test fix attempt
* fixes cluster mfa setup test
* remove deprecated yarn ignore-optional flag from makefile
* removes another instance of yarn ignore-optional and updates ui readme
* removes unsupported yarn verbose flag from ci-helper
* hides nav headings when user does not have access to any sub links
* removes unused optional deps and moves lint-staged to dev deps
* updates has-permission helper and permissions service tests
* fixes issue with console panel not filling container width
* deprecation check
* adding script
* add execute permission to script
* revert changes
* adding the script back
* added working script for local and GHA
* give execute permissions
* updating revgrep
* adding changes to script, tools
* run go mod tidy
* removing default ref
* make bootstrap
* adding to makefile
* Add a GHA job running Go tests with race detection enabled to the CI workflow
* Incorporate logic from test-go-race into the test-go testing matrix
* Make test-go testing matrix job names more meaningful
* Fix a bug in the script's logic
* Experiment: bump wait time in the failing TestLoginMFASinglePhase test to see if that makes a difference
* Lower the wait time in TestLoginMFASinglePhase
* Change the wait time in TestLoginMFASinglePhase to 15
* Add more detail to test-go testing matrix job names
* Test whether we already have access to larger runners
* Run Go tests with enabled data race detection from a separate job than the standard suite of tests
* Tweak runner sizes for OSS
* Try rebalancing test buckets
* Change instance type for larger ENT runners
* Undo rebalancing of test buckets as it changed nothing
* Change instance type for larger OSS runners
* Change the way we generate names for matrix jobs
* Consolidate the Go build tags variables, update them to use comma as a separator and fix the if statement in test-go
* Fix a typo
* replace use of os.Unsetenv in tests with t.Setenv and remove t.Parallel from tests that rely on env being modified.
* experiment with using fromJSON function
* revert previous experiment
* including double quotes in the output value for the string ubuntu-latest
* use go run to launch gofumpt
* example for checking go doc tests
* add analyzer test and action
* get metadata step
* install revgrep
* fix for ci
* add revgrep to go.mod
* clarify how analysistest works
Introducing a new approach to testing Vault artifacts before merge
and after merge/notarization/signing. Rather than run a few static
scenarios across the artifacts, we now have the ability to run a
pseudo-random sample of scenarios across many different build artifacts.
We've added 20 possible scenarios for the AMD64 and ARM64 binary
bundles, which we've broken into five test groups. On any given push to
a pull request branch, we will now choose a random test group and
execute its corresponding scenarios against the resulting build
artifacts. This gives us greater test coverage but lets us split the
verification across many different pull requests.
The post-merge release testing pipeline behaves in a similar fashion,
however, the artifacts that we use for testing have been notarized and
signed prior to testing. We've also reduced the number of groups so that
we run more scenarios after merge to a release branch.
We intend to take what we've learned building this in Github Actions and
roll it into an easier-to-use feature that is native to Enos. Until then,
we'll have to manually add scenarios to each matrix file and manually
number the test group. It's important to note that Github requires every
matrix to include at least one vector, so every artifact that is being
tested must include at least one scenario in order for all workflows to pass
and thus satisfy branch merge requirements.
* Add support for different artifact types to enos-run
* Add support for different runner type to enos-run
* Add arm64 scenarios to build matrix
* Expand build matrices to include different variants
* Update Consul versions in Enos scenarios and matrices
* Refactor enos-run environment
* Add minimum version filtering support to enos-run. This allows us to
automatically exclude scenarios that require a more recent version of
Vault
* Add maximum version filtering support to enos-run. This allows us to
automatically exclude scenarios that require an older version of
Vault
* Fix Node 12 deprecation warnings
* Rename enos-verify-stable to enos-release-testing-oss
* Convert artifactory matrix into enos-release-testing-oss matrices
* Add all Vault editions to Enos scenario matrices
* Fix verify version with complex Vault edition metadata
* Rename the crt-builder to ci-helper
* Add more version helpers to ci-helper and Makefile
* Update CODEOWNERS for quality team
* Add support for filtering matrices by group and version constraints
* Add support for pseudo random test scenario execution
Signed-off-by: Ryan Cragun <me@ryan.ec>
* add Link config, init, and capabilities
* add node status proto
* bump protoc version to 3.21.9
* make proto
* adding link tests
* remove wrapped link
* add changelog entry
* update changelog entry
Here we make the following major changes:
* Centralize CRT builder logic into a script utility so that we can share the
logic for building artifacts in CI or locally.
* Simplify the build workflow by calling a reusable workflow many times
instead of repeating the contents.
* Create a workflow that validates whether or not the build workflow and all
child workflows have succeeded to allow for merge protection.
Motivation
* We need branch requirements for the build workflow and all subsequent
integration tests (QT-353)
* We need to ensure that the Enos local builder works (QT-558)
* Debugging build failures can be difficult because one has to hand-craft the
steps to recreate the build
* Merge conflicts between Vault OSS and Vault ENT build workflows are quite
painful. As the build workflow must keep the same file name in both repos,
we'll reduce the unique content contained in each. Implementations of building
will be unique per edition so we don't have to worry about conflict
resolution.
* Since we're going to be touching the build workflow to do the first two
items we might as well try and improve those other issues at the same time
to reduce the overhead of backports and conflicts.
Considerations
* Build logic for Vault OSS and Vault ENT differs
* The Enos local builder was duplicating a lot of what we did in the CRT build
workflow
* Version and other artifact metadata has been an issue before. Debugging it
has been tedious and error prone.
* The build workflow is full of brittle copy and paste that is hard to
understand, especially for all of the release editions in Vault Enterprise
* Branch check requirements for workflows are incredibly painful to use for
workflows that are dynamic or change often. The required workflows have to be
configured in Github settings by administrators. They would also prevent us
from having simple docs PRs since required integration workflows always have
to run to satisfy branch requirements.
* Doormat credentials requirements that are coming will require us to modify
which event types trigger workflows. This changes those ahead of time, since
we're already making significant changes to the build workflow. The only
noticeable impact will be that the build workflow no longer runs on pushes
to non-main or release branches. To test other branches, a workflow_dispatch
from the Actions tab or a pull request is required.
Solutions
* Centralize the logic that determines build metadata and creates releasable
Vault artifacts. Instead of cargo-culting logic multiple times in the build
workflow and the Enos local modules, we now have a crt-builder script which
determines build metadata and also handles building the UI, Vault, and the
package bundle. There are make targets for all of the available sub-commands.
Now what we use in the pipeline is the same thing as the local builder, and
it can be executed locally by developers. The crt-builder script works in OSS
and Enterprise so we will never have to deal with them being divergent or with
special casing things in the build workflow.
* Refactor the bulk of the Vault building into a reusable workflow that we can
call multiple times. This allows us to define Vault builds in a much simpler
manner and makes resolving merge conflicts much easier.
* Rather than trying to maintain a list and manually configure the branch check
requirements for build, we'll trigger a single workflow that uses the github
event system to determine if the build workflow (all of the sub-workflows
included) have passed. We'll then create branch restrictions on that single
workflow down the line.
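To illustrate the "one tool, used locally and in CI" idea, an invocation might look like the sketch below; the `scripts/crt-builder.sh` path and the sub-command names are assumptions for illustration, not the script's actual interface:
```
# Hypothetical sub-commands -- the real script defines its own set.
./scripts/crt-builder.sh build-ui    # build the web UI assets
./scripts/crt-builder.sh build       # build the Vault binary with release metadata
./scripts/crt-builder.sh bundle      # produce the package bundle
```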
Signed-off-by: Ryan Cragun <me@ryan.ec>
By adding the linker flags `-s -w` we can reduce the Vault binary size
from 204 MB to 167 MB (about an 18% reduction in size).
This removes the DWARF sections of the binary.
i.e., before:
```
$ objdump --section-headers vault-debug
vault-debug: file format mach-o arm64
Sections:
Idx Name Size VMA Type
0 __text 03a00340 0000000100001000 TEXT
1 __symbol_stub1 00000618 0000000103a01340 TEXT
2 __rodata 00c18088 0000000103a01960 DATA
3 __rodata 015aee18 000000010461c000 DATA
4 __typelink 0004616c 0000000105bcae20 DATA
5 __itablink 0000eb68 0000000105c10fa0 DATA
6 __gosymtab 00000000 0000000105c1fb08 DATA
7 __gopclntab 02a5b8e0 0000000105c1fb20 DATA
8 __go_buildinfo 00008c10 000000010867c000 DATA
9 __nl_symbol_ptr 00000410 0000000108684c10 DATA
10 __noptrdata 000fed00 0000000108685020 DATA
11 __data 0004e1f0 0000000108783d20 DATA
12 __bss 00052520 00000001087d1f20 BSS
13 __noptrbss 000151b0 0000000108824440 BSS
14 __zdebug_abbrev 00000129 000000010883c000 DATA, DEBUG
15 __zdebug_line 00651374 000000010883c129 DATA, DEBUG
16 __zdebug_frame 001e1de9 0000000108e8d49d DATA, DEBUG
17 __debug_gdb_scri 00000043 000000010906f286 DATA, DEBUG
18 __zdebug_info 00de2c09 000000010906f2c9 DATA, DEBUG
19 __zdebug_loc 00a619ea 0000000109e51ed2 DATA, DEBUG
20 __zdebug_ranges 001e94a6 000000010a8b38bc DATA, DEBUG
```
And after:
```
$ objdump --section-headers vault-no-debug
vault-no-debug: file format mach-o arm64
Sections:
Idx Name Size VMA Type
0 __text 03a00340 0000000100001000 TEXT
1 __symbol_stub1 00000618 0000000103a01340 TEXT
2 __rodata 00c18088 0000000103a01960 DATA
3 __rodata 015aee18 000000010461c000 DATA
4 __typelink 0004616c 0000000105bcae20 DATA
5 __itablink 0000eb68 0000000105c10fa0 DATA
6 __gosymtab 00000000 0000000105c1fb08 DATA
7 __gopclntab 02a5b8e0 0000000105c1fb20 DATA
8 __go_buildinfo 00008c20 000000010867c000 DATA
9 __nl_symbol_ptr 00000410 0000000108684c20 DATA
10 __noptrdata 000fed00 0000000108685040 DATA
11 __data 0004e1f0 0000000108783d40 DATA
12 __bss 00052520 00000001087d1f40 BSS
13 __noptrbss 000151b0 0000000108824460 BSS
```
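For reference, the two builds above can be produced roughly as follows (output names are illustrative):
```
# Debug build: keeps the DWARF sections shown above.
go build -o vault-debug .

# Stripped build: the -s -w linker flags drop the DWARF sections.
go build -ldflags="-s -w" -o vault-no-debug .

ls -lh vault-debug vault-no-debug
```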
The only side effect I have been able to find is that it is no longer
possible to use [delve](https://github.com/go-delve/delve) to run the
Vault binary.
Note, however, that running delve and other debuggers requires access
to the full source code, which isn't provided for the Enterprise, HSM,
etc. binaries, so it isn't possible to debug those anyway outside of
people who have the full source.
Notably, these flags do not affect:
* panic traces
* `vault debug`
* error messages
* Despite what the documentation says, these flags do *not* delete the
function symbol table (so it is not the same as having a `strip`ped
binary).
The DWARF section contains mappings between the compiled binary and the
functions, parameters, and variables in the source code.
Using `llvm-dwarfdump`, it looks like:
```
0x011a6d85: DW_TAG_subprogram
DW_AT_name ("github.com/hashicorp/vault/api.(*replicationStateStore).recordState")
DW_AT_low_pc (0x0000000000a99300)
DW_AT_high_pc (0x0000000000a99419)
DW_AT_frame_base (DW_OP_call_frame_cfa)
DW_AT_decl_file ("/home/swenson/vault/api/client.go")
DW_AT_external (0x01)
0x011a6de1: DW_TAG_formal_parameter
DW_AT_name ("w")
DW_AT_variable_parameter (0x00)
DW_AT_decl_line (1735)
DW_AT_type (0x00000000001e834a "github.com/hashicorp/vault/api.replicationStateStore *")
DW_AT_location (0x009e832a:
[0x0000000000a99300, 0x0000000000a9933a): DW_OP_reg0 RAX
[0x0000000000a9933a, 0x0000000000a99419): DW_OP_call_frame_cfa)
0x011a6def: DW_TAG_formal_parameter
DW_AT_name ("resp")
DW_AT_variable_parameter (0x00)
DW_AT_decl_line (1735)
DW_AT_type (0x00000000001e82a2 "github.com/hashicorp/vault/api.Response *")
DW_AT_location (0x009e8370:
[0x0000000000a99300, 0x0000000000a9933a): DW_OP_reg3 RBX
[0x0000000000a9933a, 0x0000000000a99419): DW_OP_fbreg +8)
0x011a6e00: DW_TAG_variable
DW_AT_name ("newState")
DW_AT_decl_line (1738)
DW_AT_type (0x0000000000119f32 "string")
DW_AT_location (0x009e83b7:
[0x0000000000a99385, 0x0000000000a99385): DW_OP_reg0 RAX, DW_OP_piece 0x8, DW_OP_piece 0x8
[0x0000000000a99385, 0x0000000000a993a4): DW_OP_reg0 RAX, DW_OP_piece 0x8, DW_OP_reg3 RBX, DW_OP_piece 0x8
[0x0000000000a993a4, 0x0000000000a993a7): DW_OP_piece 0x8, DW_OP_reg3 RBX, DW_OP_piece 0x8)
```
This says that the particular binary section is the function
`github.com/hashicorp/vault/api.(*replicationStateStore).recordState`,
from the file `/home/swenson/vault/api/client.go`, containing
the `w` parameter on line 1735 mapped to certain registers and memory,
the `resp` parameter on line 1735 mapped to certain registers and memory,
and the `newState` variable on line 1738, mapped to certain registers,
and memory.
It's really only useful for a debugger.
Anyone running the code in a debugger will need full access to the source
code anyway, so presumably they will be able to run `make dev` to build
the version with the DWARF sections intact, and then run their debugger.
* Update go version to 1.19.2
This commit updates the default version of go to 1.19.2. This update
includes minor security fixes for archive/tar, net/http/httputil, and
regexp packages.
For more information on the release, see: https://go.dev/doc/devel/release#go1.19.2
* Update Docker versions in CI to 20.10.17
After updating Vault to go version 1.19.2, there were several SIGABRTs
in the vault tests. These were related to a missing `pthread_create`
syscall in Docker. Since CI was using a much older version of Docker,
the fix was to bump it to latest-1 (20.10.17).
While we're at it, add a note in the developer docs encouraging the use
of the latest Docker version.
Add plugin version to GRPC interface
Added a version interface in sdk/logical so that it can be shared between all plugin types, then wired it up to RunningVersion in the mounts, auth list, and database systems.
I've tested that this works with auth, database, and secrets plugin types, with the following logic to populate RunningVersion:
* If a plugin has a PluginVersion() method implemented, that version is used.
* If not, and the plugin is built into the Vault binary, the go.mod version is used.
* Otherwise, it will be the empty string.
My apologies for the length of this PR.
* Placeholder backend should be external
We use a placeholder backend (previously a framework.Backend) before a
GRPC plugin is lazy-loaded. This caused us to later treat the plugin as a
builtin plugin.
So we added a `placeholderBackend` type that overrides the
`IsExternal()` method so that we later know the plugin is external
and don't give it a default builtin version.