Replace our prior implementation of Enos test groups with the new Enos
sampling feature. With this feature we're able to describe which
scenarios and variant combinations are valid for a given artifact and
allow Enos to create a valid sample field (a matrix of all compatible
scenarios) and take an observation (select some to run) for us. This
ensures that every valid scenario and variant combination will
now be a candidate for testing in the pipeline. See QT-504[0] for further
details on the Enos sampling capabilities.
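To make the sample/observation terminology concrete, here is an
illustrative Go sketch of the idea (this is not Enos's implementation;
the names and selection strategy are assumptions):
```go
package main

import (
	"fmt"
	"math/rand"
)

// Scenario is one scenario/variant combination in the sample field.
type Scenario struct {
	Name    string
	Variant string
}

// observe takes a pseudo-random observation of size n from the sample
// field (the matrix of all combinations that are valid for an artifact).
func observe(field []Scenario, n int, seed int64) []Scenario {
	r := rand.New(rand.NewSource(seed))
	r.Shuffle(len(field), func(i, j int) { field[i], field[j] = field[j], field[i] })
	if n > len(field) {
		n = len(field)
	}
	return field[:n]
}

func main() {
	field := []Scenario{
		{"smoke", "backend:consul"}, {"smoke", "backend:raft"},
		{"upgrade", "backend:consul"}, {"upgrade", "backend:raft"},
	}
	// Every element of the field is a candidate; only the observation runs.
	for _, s := range observe(field, 2, 42) {
		fmt.Println(s.Name, s.Variant)
	}
}
```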
Our prior implementation only tested the amd64 and arm64 zip artifacts,
as well as the Docker container. We now include the following new artifacts
in the test matrix:
* CE Amd64 Debian package
* CE Amd64 RPM package
* CE Arm64 Debian package
* CE Arm64 RPM package
Each artifact includes a sample definition for both pre-merge/post-merge
(build) and release testing.
Changes:
* Remove the hand-crafted `enos-run-matrices` CI matrix targets and replace
them with per-artifact samples.
* Use Enos sampling to generate different sample groups on all pull
requests.
* Update the Enos scenario matrices to handle HSM and FIPS packages.
* Simplify Enos scenarios by using shared globals instead of
cargo-culted locals.
Note: This will require coordination with vault-enterprise to ensure a
smooth migration to the new system. Integrating new scenarios or
modifying existing scenarios/variants should be much smoother after this
initial migration.
[0] https://github.com/hashicorp/enos/pull/102
Signed-off-by: Ryan Cragun <me@ryan.ec>
We further optimize the CI workflow for cost and speed.
We tested the Go CI workflows across several instance classes
and updated our compute choices accordingly. We achieve an average
execution speed improvement of 2-2.5 minutes per test workflow while
reducing the infrastructure cost by about 20%. We also save
another ~2 minutes by installing `gotestsum` from its GitHub release
instead of downloading the Go modules and compiling it every time.
In addition to the speed improvements, we also further reduced our cache
usage by updating the `security-scan` workflow to not cache Go modules.
We also use the `cache/save` and `cache/restore` actions for timing
caches, which results in saving half as many cache entries for timing
data.
*UI test results*
Results for 2 runs each:
* c6a.2xlarge (12m54s, 11m55s)
* c6a.4xlarge (10m47s, 11m6s)
* c6a.8xlarge (11m32s, 10m51s)
* m5.2xlarge (15m23s, 14m16s)
* m5.4xlarge (14m48s, 12m54s)
* m5.8xlarge (12m27s, 12m24s)
* m6a.2xlarge (11m55s, 12m20s)
* m6a.4xlarge (10m54s, 10m43s)
* m6a.8xlarge (10m33s, 10m51s)
Current runner:
m5.2xlarge (15m23s, 14m16s, avg 14m50s) @ 0.448/hr = $0.11
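(Cost figures are simply average runtime × hourly rate, e.g. 14m50s ≈
0.247 hr, and 0.247 hr × $0.448/hr ≈ $0.11.)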
Faster candidates:
* c6a.2xlarge (12m54s, 11m55s, avg 12m24s) @ 0.3816/hr = $0.078
* m6a.2xlarge (11m55s, 12m20s, avg 12m8s) @ 0.4032/hr = $0.081
* c6a.4xlarge (10m47s, 11m6s, avg 10m56s) @ 0.7632/hr = $0.139
* m6a.4xlarge (10m54s, 10m43s, avg 10m48s) @ 0.8064/hr = $0.140
Best bang for the buck for test-ui:
m6a.2xlarge: >25% cost savings over the current runner, and we save ~2.5 minutes.
*Go test results*
During testing, the external replication tests, when not broken up,
always take the longest. Our original analysis focuses on this job.
Most other test groups finish ~3m faster, so we subtract that time
when estimating the cost for the whole job.
External replication job results:
* c6a.2xlarge (20m49s, 19m20s, avg 20m5s)
* c6a.4xlarge (19m1s, 19m38s, avg 19m20s)
* c6a.8xlarge (19m51s, 18m54s, avg 19m23s)
* m5.2xlarge (22m12s, 20m29s, avg 21m20s)
* m5.4xlarge (20m7s, 19m3s, avg 19m35s)
* m5.8xlarge (20m24s, 19m42s, avg 20m3s)
* m6a.2xlarge (21m10s, 19m37s, avg 20m23s)
* m6a.4xlarge (18m58s, 19m51s, avg 19m24s)
* m6a.8xlarge (19m27s, 18m47s, avg 19m7s)
There is little separation in time as we increase the class size. In the
best case a class size increase yields about a 5% performance increase
while doubling the cost. For test-go our best bang for the buck is
certainly going to be in the 2xlarge class.
Current runner:
m5.2xlarge (22m12s, 20m29s, avg 21m20s) @ 0.448/hr (16@avg-3m + 1@avg) = $2.35
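(Here `16@avg-3m + 1@avg` means 16 test groups finishing ~3 minutes under
the average plus one group at the full average: 16 × 18m20s + 21m20s ≈
314.7 min ≈ 5.24 hr, and 5.24 hr × $0.448/hr ≈ $2.35.)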
Candidates in the same class:
* c6a.2xlarge (20m49s, 19m20s, avg 20m5s) @ 0.3816/hr (16@avg-3m + 1@avg) = $1.86
* m6a.2xlarge (21m10s, 19m37s, avg 20m23s) @ 0.4032/hr (16@avg-3m + 1@avg) = $2.00
Best bang for the buck for test-go:
c6a.2xlarge: ~20% cost savings, and we save about 2.25 minutes.
We ran the Go tests with data race detection on similar instances and
saw execution times similar to test-go, so we can use the same
recommended instance sizes.
After breaking up test-go's external replication tests, the longest group
was shorter on average. I chose to look at group 3 as it was usually the
longest grouping:
* c6a.2xlarge: (14m51s, 14m48s)
* c6a.4xlarge: (14m14s, 14m15s)
* c6a.8xlarge: (14m0s, 13m54s)
* m5.2xlarge: (15m36s, 15m35s)
* m5.4xlarge: (14m46s, 14m49s)
* m5.8xlarge: (14m25s, 14m25s)
* m6a.2xlarge: (14m51s, 14m53s)
* m6a.4xlarge: (14m16s, 14m16s)
* m6a.8xlarge: (14m2s, 13m57s)
Again, we see ~5% performance gains between the 2x and 8x instance classes
at quadruple the cost. The c6a and m6a families are almost identical, with
the c6a class being cheaper.
*Notes*
* UI and Go Test timing results: https://github.com/hashicorp/vault-enterprise/actions/runs/5556957460/jobs/10150759959
* Go Test with data race detection timing results: https://github.com/hashicorp/vault-enterprise/actions/runs/5558013192
* Go Test with replication broken up: https://github.com/hashicorp/vault-enterprise/actions/runs/5558490899
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Ryan Cragun <me@ryan.ec>
* backport of commit dc104898f7 (#21853)
* fix multiline
* shellcheck, and success message for builds
* add full path
* cat the summary
* fix and faster
* fix if condition
* base64 in a separate step
* echo
* check against empty string
* add echo
* only use matrix ids
* only id
* echo matrix
* remove wrapping array
* tojson
* try echo again
* use jq to get packages
* don't quote
* only run binary tests once
* only run binary tests once
* test what's wrong with the binary
* separate file
* use matrix file
* failed test
* update comment on success
* correct variable name
* base64 fix
* output to file
* use multiline
* fix
* fix formatting
* fix newline
* fix whitespace
* correct body, remove comma
* small fixes
* shellcheck
* another shellcheck fix
* fix deprecation checker
* only run comments for prs
* Update .github/workflows/test-go.yml
Co-authored-by: Mike Palmiotto <mike.palmiotto@hashicorp.com>
* Update .github/workflows/test-go.yml
Co-authored-by: Mike Palmiotto <mike.palmiotto@hashicorp.com>
* fixes
---------
Co-authored-by: Mike Palmiotto <mike.palmiotto@hashicorp.com>
* backport of commit 3b00dde1ba (#21936)
* limit test comments
* remove unnecessary tee
* fix go test condition
* fix
* fail test
* remove always entirely
* fix columns
* make a bunch of tests fail
* separate line
* include Failures:
* remove test fails
* fix whitespace
* backport of commit 245430215c (#21973)
* only add binary tests if they exist
* shellcheck
---------
Co-authored-by: miagilepner <mia.epner@hashicorp.com>
Co-authored-by: Mike Palmiotto <mike.palmiotto@hashicorp.com>
* combine into one checker
* combine and simplify ci checks
* add to test package list
* remove testing test
* only run deprecations check
* only run deprecations check
* remove unneeded repo check
* fix bash options
Co-authored-by: miagilepner <mia.epner@hashicorp.com>
* backport all gha migration changes to release/1.13.x
* remove the .circleci directory
* remove references to circleci configuration from pre-commit hook
* remove reference to .circleci in Makefile
* port change to how gofumpt is executed in Makefile
* add gotestsum to tools/tools.go
* remove postgresql/scram package from generate-test-package-lists.sh since it didn't exist in release 1.13 or earlier
* blank out environment variables to allow test to properly function
* use go:embed to load files into test
---------
Co-authored-by: Kuba Wieczorek <kuba.wieczorek@hashicorp.com>
* example for checking go doc tests
* add analyzer test and action
* get metadata step
* install revgrep
* fix for ci
* add revgrep to go.mod
* clarify how analysistest works (see the sketch below)
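For reference, a minimal sketch of the analysistest pattern (the analyzer
package name and testdata layout here are assumptions, not the actual
code):
```go
package mycheck_test

import (
	"testing"

	"golang.org/x/tools/go/analysis/analysistest"

	"github.com/hashicorp/vault/tools/mycheck" // hypothetical analyzer package
)

// analysistest compiles the packages under testdata/src and checks that the
// analyzer reports exactly the diagnostics marked by `// want` comments.
func TestAnalyzer(t *testing.T) {
	analysistest.Run(t, analysistest.TestData(), mycheck.Analyzer, "example")
}
```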
Introducing a new approach to testing Vault artifacts before merge
and after merge/notarization/signing. Rather than running a few static
scenarios across the artifacts, we now have the ability to run a
pseudo-random sample of scenarios across many different build artifacts.
We've added 20 possible scenarios for the AMD64 and ARM64 binary
bundles, which we've broken into five test groups. On any given push to
a pull request branch, we will now choose a random test group and
execute its corresponding scenarios against the resulting build
artifacts. This gives us greater test coverage while letting us split the
verification across many different pull requests.
The post-merge release testing pipeline behaves in a similar fashion;
however, the artifacts that we use for testing have been notarized and
signed prior to testing. We've also reduced the number of groups so that
we run more scenarios after merge to a release branch.
We intend to take what we've learned building this in GitHub Actions and
roll it into an easier-to-use feature that is native to Enos. Until then,
we'll have to manually add scenarios to each matrix file and manually
number the test groups. It's important to note that GitHub requires every
matrix to include at least one vector, so every artifact being tested
must include at least one scenario in order for all workflows to pass
and thus satisfy branch merge requirements.
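As a hypothetical sketch of how a test group could be chosen
pseudo-randomly yet deterministically per push (the hashing scheme and
names are assumptions, not the actual CI helper):
```go
package main

import (
	"fmt"
	"hash/fnv"
)

// pickGroup maps a commit SHA onto one of n test groups, so a given push
// always selects the same group while different pushes spread across groups.
func pickGroup(sha string, n uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(sha))
	return h.Sum32()%n + 1
}

func main() {
	fmt.Println(pickGroup("dc104898f7", 5)) // prints a group number in [1, 5]
}
```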
* Add support for different artifact types to enos-run
* Add support for different runner types to enos-run
* Add arm64 scenarios to build matrix
* Expand build matrices to include different variants
* Update Consul versions in Enos scenarios and matrices
* Refactor enos-run environment
* Add minimum version filtering support to enos-run. This allows us to
automatically exclude scenarios that require a more recent version of
Vault
* Add maximum version filtering support to enos-run. This allows us to
automatically exclude scenarios that require an older version of
Vault
* Fix Node 12 deprecation warnings
* Rename enos-verify-stable to enos-release-testing-oss
* Convert artifactory matrix into enos-release-testing-oss matrices
* Add all Vault editions to Enos scenario matrices
* Fix verify version with complex Vault edition metadata
* Rename the crt-builder to ci-helper
* Add more version helpers to ci-helper and Makefile
* Update CODEOWNERS for quality team
* Add support for filtering matrices by group and version constraints
* Add support for pseudo random test scenario execution
Signed-off-by: Ryan Cragun <me@ryan.ec>
* add Link config, init, and capabilities
* add node status proto
* bump protoc version to 3.21.9
* make proto
* adding link tests
* remove wrapped link
* add changelog entry
* update changelog entry
Here we make the following major changes:
* Centralize CRT builder logic into a script utility so that we can share the
logic for building artifacts in CI or locally.
* Simplify the build workflow by calling a reusable workflow many times
instead of repeating the contents.
* Create a workflow that validates whether or not the build workflow and all
child workflows have succeeded to allow for merge protection.
Motivation
* We need branch requirements for the build workflow and all subsequent
integration tests (QT-353)
* We need to ensure that the Enos local builder works (QT-558)
* Debugging build failures can be difficult because one has to hand-craft
the steps to recreate the build
* Merge conflicts between Vault OSS and Vault ENT build workflows are quite
painful. Since the build workflow must keep the same file and name in both
repos, we'll reduce the edition-specific content each contains.
Implementations of building will be unique per edition so we don't have to
worry about conflict resolution.
* Since we're going to be touching the build workflow to do the first two
items, we might as well try to improve those other issues at the same time
to reduce the overhead of backports and conflicts.
Considerations
* Build logic for Vault OSS and Vault ENT differs
* The Enos local builder was duplicating a lot of what we did in the CRT build
workflow
* Version and other artifact metadata has been an issue before. Debugging it
has been tedious and error-prone.
* The build workflow is full of brittle copy and paste that is hard to
understand, especially for all of the release editions in Vault Enterprise
* Branch check requirements for workflows are incredibly painful to use for
workflows that are dynamic or change often. The required workflows have to be
configured in GitHub settings by administrators. They would also prevent us
from having simple docs PRs since required integration workflows always have
to run to satisfy branch requirements.
* Doormat credentials requirements that are coming will require us to modify
which event types trigger workflows. This changes those ahead of time since
we're doing so much to build workflow. The only noticeable impact will be
that the build workflow no longer runs on pushes to non-main or release
branches. In order to test other branches it requires a workflow_dispatch
from the Actions tab or a pull request.
Solutions
* Centralize the logic that determines build metadata and creates releasable
Vault artifacts. Instead of cargo-culting logic multiple times in the build
workflow and the Enos local modules, we now have a crt-builder script which
determines build metadata and also handles building the UI, Vault, and the
package bundle. There are make targets for all of the available sub-commands.
Now what we use in the pipeline is the same thing as the local builder, and
it can be executed locally by developers. The crt-builder script works in OSS
and Enterprise so we will never have to deal with them being divergent or with
special casing things in the build workflow.
* Refactor the bulk of the Vault building into a reusable workflow that we can
call multiple times. This allows us to define Vault builds in a much simpler
manner and makes resolving merge conflicts much easier.
* Rather than trying to maintain a list and manually configure the branch check
requirements for build, we'll trigger a single workflow that uses the GitHub
event system to determine if the build workflow (all of the sub-workflows
included) have passed. We'll then create branch restrictions on that single
workflow down the line.
Signed-off-by: Ryan Cragun <me@ryan.ec>
By adding the linker flags `-s -w` (i.e. building with
`go build -ldflags "-s -w"`) we can reduce the Vault binary size
from 204 MB to 167 MB (about an 18% reduction in size).
This removes the DWARF sections of the binary.
i.e., before:
```
$ objdump --section-headers vault-debug
vault-debug: file format mach-o arm64
Sections:
Idx Name Size VMA Type
0 __text 03a00340 0000000100001000 TEXT
1 __symbol_stub1 00000618 0000000103a01340 TEXT
2 __rodata 00c18088 0000000103a01960 DATA
3 __rodata 015aee18 000000010461c000 DATA
4 __typelink 0004616c 0000000105bcae20 DATA
5 __itablink 0000eb68 0000000105c10fa0 DATA
6 __gosymtab 00000000 0000000105c1fb08 DATA
7 __gopclntab 02a5b8e0 0000000105c1fb20 DATA
8 __go_buildinfo 00008c10 000000010867c000 DATA
9 __nl_symbol_ptr 00000410 0000000108684c10 DATA
10 __noptrdata 000fed00 0000000108685020 DATA
11 __data 0004e1f0 0000000108783d20 DATA
12 __bss 00052520 00000001087d1f20 BSS
13 __noptrbss 000151b0 0000000108824440 BSS
14 __zdebug_abbrev 00000129 000000010883c000 DATA, DEBUG
15 __zdebug_line 00651374 000000010883c129 DATA, DEBUG
16 __zdebug_frame 001e1de9 0000000108e8d49d DATA, DEBUG
17 __debug_gdb_scri 00000043 000000010906f286 DATA, DEBUG
18 __zdebug_info 00de2c09 000000010906f2c9 DATA, DEBUG
19 __zdebug_loc 00a619ea 0000000109e51ed2 DATA, DEBUG
20 __zdebug_ranges 001e94a6 000000010a8b38bc DATA, DEBUG
```
And after:
```
$ objdump --section-headers vault-no-debug
vault-no-debug: file format mach-o arm64
Sections:
Idx Name Size VMA Type
0 __text 03a00340 0000000100001000 TEXT
1 __symbol_stub1 00000618 0000000103a01340 TEXT
2 __rodata 00c18088 0000000103a01960 DATA
3 __rodata 015aee18 000000010461c000 DATA
4 __typelink 0004616c 0000000105bcae20 DATA
5 __itablink 0000eb68 0000000105c10fa0 DATA
6 __gosymtab 00000000 0000000105c1fb08 DATA
7 __gopclntab 02a5b8e0 0000000105c1fb20 DATA
8 __go_buildinfo 00008c20 000000010867c000 DATA
9 __nl_symbol_ptr 00000410 0000000108684c20 DATA
10 __noptrdata 000fed00 0000000108685040 DATA
11 __data 0004e1f0 0000000108783d40 DATA
12 __bss 00052520 00000001087d1f40 BSS
13 __noptrbss 000151b0 0000000108824460 BSS
```
The only side effect I have been able to find is that it is no longer
possible to use [delve](https://github.com/go-delve/delve) to run the
Vault binary.
Note, however, that running delve and other debuggers requires access
to the full source code, which isn't provided for the Enterprise, HSM,
etc. binaries, so it isn't possible to debug those anyway outside of
people who have the full source.
Notably, the following still work:
* panic traces
* `vault debug`
* error messages
* Despite what the documentation says, these flags do *not* delete the
function symbol table (so it is not the same as having a `strip`ped
binary).
The DWARF data contains mappings between the compiled binary and the
functions, parameters, and variables in the source code.
Using `llvm-dwarfdump`, it looks like:
```
0x011a6d85: DW_TAG_subprogram
DW_AT_name ("github.com/hashicorp/vault/api.(*replicationStateStore).recordState")
DW_AT_low_pc (0x0000000000a99300)
DW_AT_high_pc (0x0000000000a99419)
DW_AT_frame_base (DW_OP_call_frame_cfa)
DW_AT_decl_file ("/home/swenson/vault/api/client.go")
DW_AT_external (0x01)
0x011a6de1: DW_TAG_formal_parameter
DW_AT_name ("w")
DW_AT_variable_parameter (0x00)
DW_AT_decl_line (1735)
DW_AT_type (0x00000000001e834a "github.com/hashicorp/vault/api.replicationStateStore *")
DW_AT_location (0x009e832a:
[0x0000000000a99300, 0x0000000000a9933a): DW_OP_reg0 RAX
[0x0000000000a9933a, 0x0000000000a99419): DW_OP_call_frame_cfa)
0x011a6def: DW_TAG_formal_parameter
DW_AT_name ("resp")
DW_AT_variable_parameter (0x00)
DW_AT_decl_line (1735)
DW_AT_type (0x00000000001e82a2 "github.com/hashicorp/vault/api.Response *")
DW_AT_location (0x009e8370:
[0x0000000000a99300, 0x0000000000a9933a): DW_OP_reg3 RBX
[0x0000000000a9933a, 0x0000000000a99419): DW_OP_fbreg +8)
0x011a6e00: DW_TAG_variable
DW_AT_name ("newState")
DW_AT_decl_line (1738)
DW_AT_type (0x0000000000119f32 "string")
DW_AT_location (0x009e83b7:
[0x0000000000a99385, 0x0000000000a99385): DW_OP_reg0 RAX, DW_OP_piece 0x8, DW_OP_piece 0x8
[0x0000000000a99385, 0x0000000000a993a4): DW_OP_reg0 RAX, DW_OP_piece 0x8, DW_OP_reg3 RBX, DW_OP_piece 0x8
[0x0000000000a993a4, 0x0000000000a993a7): DW_OP_piece 0x8, DW_OP_reg3 RBX, DW_OP_piece 0x8)
```
This says that the particular binary section is the function
`github.com/hashicorp/vault/api.(*replicationStateStore).recordState`,
from the file `/home/swenson/vault/api/client.go`, containing
the `w` parameter on line 1735 mapped to certain registers and memory,
the `resp` parameter on line 1735 mapped to certain registers and memory,
and the `newState` variable on line 1738 mapped to certain registers
and memory.
It's really only useful for a debugger.
Anyone running the code in a debugger will need full access to the source
code anyway, so presumably they will be able to run `make dev` and build
the version with the DWARF sections intact, and then run their debugger.
* Update go version to 1.19.2
This commit updates the default version of Go to 1.19.2. This update
includes minor security fixes for archive/tar, net/http/httputil, and
regexp packages.
For more information on the release, see: https://go.dev/doc/devel/release#go1.19.2
* Update Docker versions in CI to 20.10.17
After updating Vault to Go version 1.19.2, there were several SIGABRTs
in the Vault tests. These were related to a missing `pthread_create`
syscall in Docker. Since CI was using a much older version of Docker,
the fix was to bump it to latest-1 (20.10.17).
While we're at it, add a note in the developer docs encouraging the use
of the latest Docker version.
Add plugin version to gRPC interface
Added a version interface in sdk/logical so that it can be shared between
all plugin types, and then wired it up to RunningVersion in the mounts,
auth list, and database systems.
I've tested that this works with auth, database, and secrets plugin types,
with the following logic to populate RunningVersion:
* If a plugin has a PluginVersion() method implemented, then that is used.
* If not, and the plugin is built into the Vault binary, then the go.mod
version is used.
* Otherwise, it will be the empty string.
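A minimal sketch of the shape of that interface, following the
description above (exact names in sdk/logical may differ):
```go
package logical

// PluginVersion carries a plugin's self-reported version.
type PluginVersion struct {
	Version string
}

// PluginVersioner is an optional interface; plugins that implement it
// report their own version, which Vault surfaces as RunningVersion.
type PluginVersioner interface {
	PluginVersion() PluginVersion
}
```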
My apologies for the length of this PR.
* Placeholder backend should be external
We use a placeholder backend (previously a framework.Backend) before a
gRPC plugin is lazy-loaded. This later makes us think the plugin is a
builtin plugin.
So we added a `placeholderBackend` type that overrides the
`IsExternal()` method so that later we know that the plugin is external,
and don't give it a default builtin version.
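A hedged sketch of that placeholder type, assuming a framework.Backend
embed and an IsExternal marker method as described above:
```go
package vault

import "github.com/hashicorp/vault/sdk/framework"

// placeholderBackend stands in before the gRPC plugin is lazy-loaded.
type placeholderBackend struct {
	framework.Backend
}

// IsExternal overrides the default so the lazily-loaded plugin is not
// mistaken for a builtin and given a default builtin version.
func (p *placeholderBackend) IsExternal() bool {
	return true
}
```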
Update Go to 1.18
From 1.17.12
1.18.5 was just released, but not all packages have been updated, so I
went with 1.18.4
Co-authored-by: Steven Clark <steven.clark@hashicorp.com>
Remove gox in favor of go build.
`gox` hasn't had a release in many years, so it is missing
support for many modern systems, like `darwin/arm64`.
In any case, we only use it for dev builds, where we don't even use
its ability to build for multiple platforms. Release builds use
`go build` now.
So, this switches to `go build` everywhere.
I pulled this down and tested it in Windows as well. (Side note: I
couldn't get `gox` to work in Windows, so couldn't build before this
change.)
* add BuildDate to version base
* populate BuildDate with ldflags (see the sketch after this list)
* include BuildDate in FullVersionNumber
* add BuildDate to seal-status and associated status cmd
* extend core/versions entries to include BuildDate
* include BuildDate in version-history API and CLI
* fix version history tests
* fix sys status tests
* fix TestStatusFormat
* remove extraneous LD_FLAGS from build.sh
* add BuildDate to build.bat
* fix TestSysUnseal_Reset
* attempt to add build-date to release builds
* add branch to github build workflow
* add get-build-date to build-* job needs
* fix release build command vars
* add missing quote in release build command
* Revert "add branch to github build workflow"
This reverts commit b835699ecb7c2c632757fa5fe64b3d5f60d2a886.
* add changelog entry
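A sketch of the ldflags pattern referenced above (the package path and
date format here are assumptions):
```go
package version

// BuildDate is empty by default and stamped at link time, e.g.:
//
//	go build -ldflags "-X github.com/hashicorp/vault/sdk/version.BuildDate=$(date -u +%Y-%m-%dT%H:%M:%SZ)"
var BuildDate string
```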
* port SSCT OSS
* port header hmac key to ent and generate token proto without make command
* remove extra nil check in request handling
* add changelog
* add comment to router.go
* change test var to use length constants
* remove local index is 0 check and extra defer which can be removed after use of ExternalID
* feat: DB plugin multiplexing (#13734)
* WIP: start from main and get a plugin runner from core
* move MultiplexedClient map to plugin catalog
- call sys.NewPluginClient from PluginFactory
- updates to getPluginClient
- thread through isMetadataMode
* use go-plugin ClientProtocol interface
- call sys.NewPluginClient from dbplugin.NewPluginClient
* move PluginSets to dbplugin package
- export dbplugin HandshakeConfig
- small refactor of PluginCatalog.getPluginClient
* add removeMultiplexedClient; clean up on Close()
- call client.Kill from plugin catalog
- set rpcClient when muxed client exists
* add ID to dbplugin.DatabasePluginClient struct
* only create one plugin process per plugin type
* update NewPluginClient to return connection ID to sdk
- wrap grpc.ClientConn so we can inject the ID into context (see the
sketch after this change list)
- get ID from context on grpc server
* add v6 multiplexing protocol version
* WIP: backwards compat for db plugins
* Ensure locking on plugin catalog access
- Create public GetPluginClient method for plugin catalog
- rename postgres db plugin
* use the New constructor for db plugins
* grpc server: use write lock for Close and rlock for CRUD
* cleanup MultiplexedClients on Close
* remove TODO
* fix multiplexing regression with grpc server connection
* cleanup grpc server instances on close
* embed ClientProtocol in Multiplexer interface
* use PluginClientConfig arg to make NewPluginClient plugin type agnostic
* create a new plugin process for non-muxed plugins
* feat: plugin multiplexing: handle plugin client cleanup (#13896)
* use closure for plugin client cleanup
* log and return errors; add comments
* move rpcClient wrapping to core for ID injection
* refactor core plugin client and sdk
* remove unused ID method
* refactor and only wrap clientConn on multiplexed plugins
* rename structs and do not export types
* Slight refactor of system view interface
* Revert "Slight refactor of system view interface"
This reverts commit 73d420e5cd.
* Revert "Revert "Slight refactor of system view interface""
This reverts commit f75527008a1db06d04a23e04c3059674be8adb5f.
* only provide pluginRunner arg to the internal newPluginClient method
* embed ClientProtocol in pluginClient and name logger
* Add back MLock support
* remove enableMlock arg from setupPluginCatalog
* rename plugin util interface to PluginClient
Co-authored-by: Brian Kassouf <bkassouf@hashicorp.com>
* feature: multiplexing: fix unit tests (#14007)
* fix grpc_server tests and add coverage
* update run_config tests
* add happy path test case for grpc_server ID from context
* update test helpers
* feat: multiplexing: handle v5 plugin compiled with new sdk
* add mux supported flag and increase test coverage
* set multiplexingSupport field in plugin server
* remove multiplexingSupport field in sdk
* revert postgres to non-multiplexed
* add comments on grpc server fields
* use pointer receiver on grpc server methods
* add changelog
* use pointer for grpcserver instance
* Use a gRPC server to determine if a plugin should be multiplexed
* Apply suggestions from code review
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* add lock to removePluginClient
* add multiplexingSupport field to externalPlugin struct
* do not send nil to grpc MultiplexingSupport
* check err before logging
* handle locking scenario for cleanupFunc
* allow ServeConfigMultiplex to dispense v5 plugin
* reposition structs, add err check and comments
* add comment on locking for cleanupExternalPlugin
Co-authored-by: Brian Kassouf <bkassouf@hashicorp.com>
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
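A hedged sketch of the ID-injection idea from the change list above (the
metadata key and type names are assumptions, not the actual sdk code):
```go
package pluginutil

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/metadata"
)

// idClientConn wraps the shared grpc.ClientConn and stamps every call with
// a logical connection ID so the multiplexed plugin server can route the
// request to the right plugin instance.
type idClientConn struct {
	*grpc.ClientConn
	id string
}

func (c *idClientConn) Invoke(ctx context.Context, method string, args, reply interface{}, opts ...grpc.CallOption) error {
	ctx = metadata.AppendToOutgoingContext(ctx, "multiplex-id", c.id)
	return c.ClientConn.Invoke(ctx, method, args, reply, opts...)
}

func (c *idClientConn) NewStream(ctx context.Context, desc *grpc.StreamDesc, method string, opts ...grpc.CallOption) (grpc.ClientStream, error) {
	ctx = metadata.AppendToOutgoingContext(ctx, "multiplex-id", c.id)
	return c.ClientConn.NewStream(ctx, desc, method, opts...)
}

// On the server side, the handler reads the same metadata key from the
// incoming context to look up the plugin instance, e.g.:
//
//	md, _ := metadata.FromIncomingContext(ctx)
//	id := md.Get("multiplex-id")
```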
* adding CRT to main branch
* cleanup
* um, I don't know how that got removed, but here's the fix
* add vault.service
Co-authored-by: Kyle Penfound <kpenfound11@gmail.com>
* copy over the webui
move web_ui to http
remove web ui files, add .gitkeep
updates, messing with gitkeep and ignoring web_ui
update ui scripts
gitkeep
ignore http/web_ui
Remove debugging
remove the jwt reference, that was from something else
restore old jwt plugin
move things around
Revert "move things around"
This reverts commit 2a35121850f5b6b82064ecf78ebee5246601c04f.
Update ui path handling to not need the web_ui name part
add desc
move the http.FS conversion internal to assetFS (see the sketch after
this list)
update gitignore
remove bindata dep
clean up some comments
remove asset check script that's no longer needed
Update readme
remove more bindata things
restore asset check
update packagespec
update stub
stub the assetFS method and set uiBuiltIn to false for non-ui builds
update packagespec to build ui
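A hedged sketch of the go:embed approach described above (the build tag
and identifier names are assumptions):
```go
//go:build ui

package http

import (
	"embed"
	"io/fs"
	"net/http"
)

// content embeds the compiled web UI at build time, replacing bindata.
//
//go:embed web_ui
var content embed.FS

// assetFS exposes the embedded UI as an http.FileSystem, stripping the
// web_ui prefix so path handling doesn't need the directory name part.
func assetFS() http.FileSystem {
	sub, err := fs.Sub(content, "web_ui")
	if err != nil {
		panic(err)
	}
	return http.FS(sub)
}
```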
* fail if assets aren't found
* tidy up vendor
* go mod tidy
* updating .circleci
* restore tools.go
* re-re-re-run make packages
* re-enable arm64
* Adding change log
* Removing a file
Co-authored-by: hamid ghaf <hamid@hashicorp.com>