mirror of
https://github.com/optim-enterprises-bv/vault.git
synced 2025-10-30 18:17:55 +00:00
[VAULT-26888] Create developer scenarios (#27028)
* [VAULT-26888] Create developer scenarios

  Create developer scenarios that have simplified inputs designed for provisioning clusters and limited verification.

* Migrate the Artifactory installation module from the support-team-focused scenarios to the vault repository.
* Migrate the support-focused scenarios to the repo and update them to use the latest in-repo modules.
* Fully document and comment the scenarios to help users outline, configure, and use them.
* Remove outdated references to the private registry, which is no longer needed.
* Automatically configure the login shell profile to include the path to the vault binary and the VAULT_ADDR/VAULT_TOKEN environment variables.

Signed-off-by: Ryan Cragun <me@ryan.ec>
@@ -18,34 +18,35 @@ is going to give you faster feedback and execution time, whereas Enos is going
to give you a real-world execution and validation of the requirement. Consider
the following cases as examples of when one might opt for an Enos scenario:

- The feature requires third-party integrations. Whether that be networked
  dependencies like a real Consul backend, a real KMS key to test awskms
  auto-unseal, auto-join discovery using AWS tags, or cloud hardware KMSes.
- The feature might behave differently under multiple configuration variants
  and therefore should be tested with both combinations, e.g. auto-unseal and
  manual Shamir unseal, or replication in HA mode with integrated storage or
  Consul storage.
- The scenario requires coordination between multiple targets. For example,
  consider the complex lifecycle event of migrating the seal type or storage,
  or manually triggering a Raft disaster scenario by partitioning the network
  between the leader and follower nodes. Or perhaps an autopilot upgrade between
  a stable version of Vault and our candidate version.
- The scenario has specific deployment strategy requirements. For example,
  if we want to add a regression test for an issue that only arises when the
  software is deployed in a certain manner.
- The scenario needs to use actual build artifacts that will be promoted
  through the pipeline.

## Requirements

- AWS access. HashiCorp Vault developers should use Doormat.
- Terraform >= 1.7
- Enos >= v0.0.28. You can [download a release](https://github.com/hashicorp/enos/releases/) or
  install it with Homebrew:

  ```shell
  brew tap hashicorp/tap && brew update && brew install hashicorp/tap/enos
  ```

- An SSH keypair in the AWS region you wish to run the scenario. You can use
  Doormat to log in to the AWS console to create or upload an existing keypair.
- A Vault artifact is downloaded from the GHA artifacts when using the `artifact_source:crt` variants, from Artifactory when using `artifact_source:artifactory`, and is built locally from the current branch when using the `artifact_source:local` variant.

## Scenario Variables

In CI, each scenario is executed via GitHub Actions and has been configured using

@@ -57,7 +58,6 @@ variables, or you can update `enos.vars.hcl` with values and uncomment the lines
Variables that are required:

- `aws_ssh_keypair_name`
- `aws_ssh_private_key_path`
- `tfc_api_token`
- `vault_bundle_path`
- `vault_license_path` (only required for non-OSS editions)
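The required variables above can be provided through the vars file mentioned earlier. A minimal sketch with placeholder values (every value below is hypothetical and must be replaced with your own):

```hcl
# Sketch of a vars file -- all values below are placeholders, not real defaults.
aws_ssh_keypair_name     = "my-keypair-us-west-2"    # keypair name as it appears in the AWS console
aws_ssh_private_key_path = "/path/to/my-keypair.pem" # local path to the matching private key
tfc_api_token            = "<TFC_API_TOKEN>"         # Terraform Cloud API token
vault_bundle_path        = "/tmp/vault.zip"          # path to the Vault artifact bundle
vault_license_path       = "./support/vault.hclic"   # only required for non-OSS editions
```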
@@ -206,7 +206,6 @@ This variant is for running the Enos scenario to test an artifact from Artifacto
- `artifactory_token`
- `aws_ssh_keypair_name`
- `aws_ssh_private_key_path`
- `tfc_api_token`
- `vault_product_version`
- `vault_revision`
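For this Artifactory variant the same vars file can carry these values; again, a sketch with placeholder values only:

```hcl
# Hypothetical values for the Artifactory variant -- replace before use.
artifactory_token        = "<ARTIFACTORY_TOKEN>"     # token created by logging in to Artifactory via Okta
aws_ssh_keypair_name     = "my-keypair-us-west-2"
aws_ssh_private_key_path = "/path/to/my-keypair.pem"
tfc_api_token            = "<TFC_API_TOKEN>"
vault_product_version    = "1.16.2"                  # version to locate in Artifactory
vault_revision           = "<GIT_COMMIT_SHA>"        # revision the artifact was built from
```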
@@ -234,7 +233,6 @@ and destroyed each time a scenario is run, the Terraform state will be managed b
Here are the steps to configure the GitHub Actions service user:

#### Prerequisites

- Access to the `hashicorp-qti` organization in Terraform Cloud.
- Full access to the CI AWS account is required.

**Notes:**

914 enos/enos-dev-scenario-pr-replication.hcl Normal file

@@ -0,0 +1,914 @@
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: BUSL-1.1

scenario "dev_pr_replication" {
  description = <<-EOF
    This scenario spins up two Vault clusters with either an external Consul cluster or
    integrated Raft for storage. The secondary cluster is configured with performance replication
    from the primary cluster. None of our test verification is included in this scenario in order
    to improve end-to-end speed. If you wish to perform such verification you'll need to use a
    non-dev scenario instead.

    The scenario supports finding and installing any released 'linux/amd64' or 'linux/arm64' Vault
    artifact as long as its version is >= 1.8. You can also use the 'artifact:local' variant to
    build and deploy the current branch!

    In order to execute this scenario you'll need to install the enos CLI:
      brew tap hashicorp/tap && brew update && brew install hashicorp/tap/enos

    You'll also need access to an AWS account with an SSH keypair.
    Perform the steps here to get AWS access with Doormat: https://eng-handbook.hashicorp.services/internal-tools/enos/common-setup-steps/#authenticate-with-doormat
    Perform the steps here to get an AWS keypair set up: https://eng-handbook.hashicorp.services/internal-tools/enos/common-setup-steps/#set-your-aws-key-pair-name-and-private-key

    Please note that this scenario requires several input variables to be set in order to function
    properly. While not all variants will require all variables, it's suggested that you look over
    the scenario outline to determine which variables affect which steps and which have inputs that
    you should set. You can use the following command to get a textual outline of the entire
    scenario:
      enos scenario outline dev_pr_replication

    You can also create an HTML version that is suitable for viewing in web browsers:
      enos scenario outline dev_pr_replication --format html > index.html
      open index.html

    To configure the required variables you have a couple of choices. You can create an
    'enos-local.vars' file in the same 'enos' directory where this scenario is defined. In it you
    declare your desired variable values. For example, you could copy the following content and
    then set the values as necessary:

      artifactory_username     = "username@hashicorp.com"
      artifactory_token        = "<ARTIFACTORY TOKEN VALUE>"
      aws_region               = "us-west-2"
      aws_ssh_keypair_name     = "<YOUR REGION SPECIFIC KEYPAIR NAME>"
      aws_ssh_keypair_key_path = "/path/to/your/private/key.pem"
      dev_consul_version       = "1.18.1"
      vault_license_path       = "./support/vault.hclic"
      vault_product_version    = "1.16.2"

    Alternatively, you can set them in your environment:
      export ENOS_VAR_aws_region="us-west-2"
      export ENOS_VAR_vault_license_path="./support/vault.hclic"

    After you've configured your inputs you can list and filter the available scenarios and then
    subsequently launch and destroy them.
      enos scenario list --help
      enos scenario launch --help
      enos scenario list dev_pr_replication
      enos scenario launch dev_pr_replication arch:amd64 artifact:deb distro:ubuntu edition:ent.hsm primary_backend:raft primary_seal:awskms secondary_backend:raft secondary_seal:pkcs11

    When the scenario is finished launching you can refer to the scenario outputs to see
    information related to your cluster. You can use this information to SSH into nodes and/or to
    interact with vault.
      enos scenario output dev_pr_replication arch:amd64 artifact:deb distro:ubuntu edition:ent.hsm primary_backend:raft primary_seal:awskms secondary_backend:raft secondary_seal:pkcs11
      ssh -i /path/to/your/private/key.pem <PUBLIC_IP>
      vault status

    After you've finished you can tear down the cluster:
      enos scenario destroy dev_pr_replication arch:amd64 artifact:deb distro:ubuntu edition:ent.hsm primary_backend:raft primary_seal:awskms secondary_backend:raft secondary_seal:pkcs11
  EOF

  // The matrix is where we define all the baseline combinations that enos can utilize to customize
  // your scenario. By default enos attempts to perform your command on the entire product! Most
  // of the time you'll want to reduce that by passing in a filter.
  // Run 'enos scenario list --help' to see more about how filtering scenarios works in enos.
  matrix {
    arch              = ["amd64", "arm64"]
    artifact          = ["local", "deb", "rpm", "zip"]
    distro            = ["ubuntu", "rhel"]
    edition           = ["ent", "ent.fips1402", "ent.hsm", "ent.hsm.fips1402"]
    primary_backend   = ["consul", "raft"]
    primary_seal      = ["awskms", "pkcs11", "shamir"]
    secondary_backend = ["consul", "raft"]
    secondary_seal    = ["awskms", "pkcs11", "shamir"]

    exclude {
      edition = ["ent.hsm", "ent.fips1402", "ent.hsm.fips1402"]
      arch    = ["arm64"]
    }

    exclude {
      artifact = ["rpm"]
      distro   = ["ubuntu"]
    }

    exclude {
      artifact = ["deb"]
      distro   = ["rhel"]
    }

    exclude {
      primary_seal = ["pkcs11"]
      edition      = ["ce", "ent", "ent.fips1402"]
    }

    exclude {
      secondary_seal = ["pkcs11"]
      edition        = ["ce", "ent", "ent.fips1402"]
    }
  }

  // Specify which Terraform configs and providers to use in this scenario. Most of the time you'll
  // never need to change this! If you wanted to test with different terraform or terraform CLI
  // settings you can define them and assign them here.
  terraform_cli = terraform_cli.default
  terraform     = terraform.default

  // Here we declare all of the providers that we might need for our scenario.
  providers = [
    provider.aws.default,
    provider.enos.ubuntu,
    provider.enos.rhel
  ]

  // These are variable values that are local to our scenario. They are evaluated after external
  // variables and scenario matrices but before any of our steps.
  locals {
    // The enos provider uses different ssh transport configs for different distros (as
    // specified in enos-providers.hcl), and we need to be able to access both of those here.
    enos_provider = {
      rhel   = provider.enos.rhel
      ubuntu = provider.enos.ubuntu
    }
    // We install vault packages from artifactory. If you wish to use one of these variants you'll
    // need to configure your artifactory credentials.
    use_artifactory = matrix.artifact == "deb" || matrix.artifact == "rpm"
    // Zip bundles and local builds don't come with systemd units or any associated configuration.
    // When this is true we'll let enos handle this for us.
    manage_service = matrix.artifact == "zip" || matrix.artifact == "local"
    // If you are using an ent edition, you will need a Vault license. Common convention
    // is to store it at ./support/vault.hclic, but you may change this path according
    // to your own preference.
    vault_install_dir = matrix.artifact == "zip" ? var.vault_install_dir : global.vault_install_dir_packages[matrix.distro]
  }

  // Begin scenario steps. These are the steps we'll perform to get your cluster up and running.

  step "maybe_build_or_find_artifact" {
    description = <<-EOF
      Depending on how we intend to get our Vault artifact, this step either builds vault from our
      current branch or finds debian or redhat packages in Artifactory. If we're using a zip bundle
      we'll get it from releases.hashicorp.com and skip this step entirely. Please note that if you
      wish to use a deb or rpm artifact you'll have to configure your artifactory credentials!

      Variables that are used in this step:

        artifactory_host:
          The artifactory host to search. It's very unlikely that you'll want to change this. The
          default value is the HashiCorp Artifactory instance.
        artifactory_repo:
          The artifactory repository to search. It's very unlikely that you'll want to change this.
          The default value is where CRT will publish packages.
        artifactory_username:
          The artifactory username associated with your token. You'll need this if you wish to use
          deb or rpm artifacts! You can request access via Okta.
        artifactory_token:
          The artifactory token associated with your username. You'll need this if you wish to use
          deb or rpm artifacts! You can create a token by logging into Artifactory via Okta.
        vault_product_version:
          When using the artifact:rpm or artifact:deb variants we'll use this variable to determine
          which version of the Vault package we should fetch from Artifactory.
        vault_artifact_path:
          When using the artifact:local variant we'll utilize this variable to determine where
          to create the vault.zip archive from the local branch. Default: /tmp/vault.zip.
        vault_local_tags:
          When using the artifact:local variant we'll use this variable to inject custom build
          tags. If left unset we'll automatically use the build tags that correspond to the edition
          variant.
    EOF
    module    = matrix.artifact == "local" ? "build_local" : local.use_artifactory ? "build_artifactory_package" : null
    skip_step = matrix.artifact == "zip"

    variables {
      // Used for all modules
      arch            = matrix.arch
      edition         = matrix.edition
      product_version = var.vault_product_version
      // Required for the local build which will always result in using a local zip bundle
      artifact_path = var.vault_artifact_path
      build_tags    = var.vault_local_build_tags != null ? var.vault_local_build_tags : global.build_tags[matrix.edition]
      goarch        = matrix.arch
      goos          = "linux"
      // Required when using an RPM or Deb package
      // Some of these variables don't have default values so we'll only set them if they are
      // required.
      artifactory_host     = local.use_artifactory ? var.artifactory_host : null
      artifactory_repo     = local.use_artifactory ? var.artifactory_repo : null
      artifactory_username = local.use_artifactory ? var.artifactory_username : null
      artifactory_token    = local.use_artifactory ? var.artifactory_token : null
      distro               = matrix.distro
    }
  }

  step "ec2_info" {
    description = "This discovers useful metadata in EC2, like the AWS AMI IDs that we use in later modules."
    module      = module.ec2_info
  }

  step "create_vpc" {
    description = <<-EOF
      Create the VPC resources required for our scenario.

      Variables that are used in this step:
        tags:
          If you wish to add custom tags to taggable resources in AWS you can set the 'tags' variable
          and they'll be added to resources when possible.
    EOF
    module     = module.create_vpc
    depends_on = [step.ec2_info]

    variables {
      common_tags = global.tags
    }
  }

  step "read_backend_license" {
    description = <<-EOF
      Read the contents of the backend license if we're using a Consul backend for either cluster
      and the backend_edition variable is set to "ent".

      Variables that are used in this step:
        backend_edition:
          The edition of Consul to use. If left unset it will default to CE.
        backend_license_path:
          If this variable is set we'll use it to determine the local path on disk that contains a
          Consul Enterprise license. If it is not set we'll attempt to load it from
          ./support/consul.hclic.
    EOF
    skip_step = (var.backend_edition == "ce" || var.backend_edition == "oss") || (matrix.primary_backend == "raft" && matrix.secondary_backend == "raft")
    module    = module.read_license

    variables {
      file_name = global.backend_license_path
    }
  }

  step "read_vault_license" {
    description = <<-EOF
      Validates and reads into memory the contents of a local Vault Enterprise license if we're
      using an Enterprise edition. This step does not run when using a community edition of Vault.

      Variables that are used in this step:
        vault_license_path:
          If this variable is set we'll use it to determine the local path on disk that contains a
          Vault Enterprise license. If it is not set we'll attempt to load it from
          ./support/vault.hclic.
    EOF
    module = module.read_license

    variables {
      file_name = global.vault_license_path
    }
  }

  step "create_primary_seal_key" {
    description = <<-EOF
      Create the necessary seal keys depending on our configured seal.

      Variables that are used in this step:
        tags:
          If you wish to add custom tags to taggable resources in AWS you can set the 'tags' variable
          and they'll be added to resources when possible.
    EOF
    module     = "seal_${matrix.primary_seal}"
    depends_on = [step.create_vpc]

    providers = {
      enos = provider.enos.ubuntu
    }

    variables {
      cluster_id   = step.create_vpc.id
      cluster_meta = "primary"
      common_tags  = global.tags
    }
  }

  step "create_secondary_seal_key" {
    description = <<-EOF
      Create the necessary seal keys depending on our configured seal.

      Variables that are used in this step:
        tags:
          If you wish to add custom tags to taggable resources in AWS you can set the 'tags' variable
          and they'll be added to resources when possible.
    EOF
    module     = "seal_${matrix.secondary_seal}"
    depends_on = [step.create_vpc]

    providers = {
      enos = provider.enos.ubuntu
    }

    variables {
      cluster_id      = step.create_vpc.id
      cluster_meta    = "secondary"
      common_tags     = global.tags
      other_resources = step.create_primary_seal_key.resource_names
    }
  }

  step "create_primary_cluster_targets" {
    description = <<-EOF
      Creates the necessary machine infrastructure targets for the Vault cluster. We also ensure
      that the firewall is configured to allow the necessary Vault and Consul traffic and SSH
      from the machine executing the Enos scenario.

      Variables that are used in this step:
        aws_ssh_keypair_name:
          The AWS SSH keypair name to use for target machines.
        project_name:
          The project name is used for additional tag metadata on resources.
        tags:
          If you wish to add custom tags to taggable resources in AWS you can set the 'tags' variable
          and they'll be added to resources when possible.
        vault_instance_count:
          How many instances to provision for the Vault cluster. If left unset it will use a default
          of three.
    EOF
    module     = module.target_ec2_instances
    depends_on = [step.create_vpc]

    providers = {
      enos = local.enos_provider[matrix.distro]
    }

    variables {
      ami_id          = step.ec2_info.ami_ids[matrix.arch][matrix.distro][global.distro_version[matrix.distro]]
      cluster_tag_key = global.vault_tag_key
      common_tags     = global.tags
      seal_key_names  = step.create_primary_seal_key.resource_names
      vpc_id          = step.create_vpc.id
    }
  }

  step "create_primary_cluster_backend_targets" {
    description = <<-EOF
      Creates the necessary machine infrastructure targets for the backend Consul storage cluster.
      We also ensure that the firewall is configured to allow the necessary Consul traffic and SSH
      from the machine executing the Enos scenario. When using integrated storage this step is a
      no-op.

      Variables that are used in this step:
        tags:
          If you wish to add custom tags to taggable resources in AWS you can set the 'tags' variable
          and they'll be added to resources when possible.
        project_name:
          The project name is used for additional tag metadata on resources.
        aws_ssh_keypair_name:
          The AWS SSH keypair name to use for target machines.
    EOF
    module     = matrix.primary_backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
    depends_on = [step.create_vpc]

    providers = {
      enos = provider.enos.ubuntu
    }

    variables {
      ami_id          = step.ec2_info.ami_ids["arm64"]["ubuntu"]["22.04"]
      cluster_tag_key = global.backend_tag_key
      common_tags     = global.tags
      seal_key_names  = step.create_primary_seal_key.resource_names
      vpc_id          = step.create_vpc.id
    }
  }

  step "create_secondary_cluster_targets" {
    description = <<-EOF
      Creates the necessary machine infrastructure targets for the Vault cluster. We also ensure
      that the firewall is configured to allow the necessary Vault and Consul traffic and SSH
      from the machine executing the Enos scenario.
    EOF
    module     = module.target_ec2_instances
    depends_on = [step.create_vpc]

    providers = {
      enos = local.enos_provider[matrix.distro]
    }

    variables {
      ami_id          = step.ec2_info.ami_ids[matrix.arch][matrix.distro][global.distro_version[matrix.distro]]
      cluster_tag_key = global.vault_tag_key
      common_tags     = global.tags
      seal_key_names  = step.create_secondary_seal_key.resource_names
      vpc_id          = step.create_vpc.id
    }
  }

  step "create_secondary_cluster_backend_targets" {
    description = <<-EOF
      Creates the necessary machine infrastructure targets for the backend Consul storage cluster.
      We also ensure that the firewall is configured to allow the necessary Consul traffic and SSH
      from the machine executing the Enos scenario. When using integrated storage this step is a
      no-op.
    EOF
    module     = matrix.secondary_backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
    depends_on = [step.create_vpc]

    providers = {
      enos = provider.enos.ubuntu
    }

    variables {
      ami_id          = step.ec2_info.ami_ids["arm64"]["ubuntu"]["22.04"]
      cluster_tag_key = global.backend_tag_key
      common_tags     = global.tags
      seal_key_names  = step.create_secondary_seal_key.resource_names
      vpc_id          = step.create_vpc.id
    }
  }

  step "create_primary_backend_cluster" {
    description = <<-EOF
      Install, configure, and start the backend Consul storage cluster for the primary Vault cluster.
      When we are using the raft storage variant this step is a no-op.

      Variables that are used in this step:
        backend_edition:
          When configured with the backend:consul variant we'll utilize this variable to determine
          the edition of Consul to use for the cluster. Note that if you set it to 'ent' you will
          also need a valid license configured for the read_backend_license step. Default: ce.
        dev_consul_version:
          When configured with the backend:consul variant we'll utilize this variable to determine
          the version of Consul to use for the cluster.
    EOF
    module = "backend_${matrix.primary_backend}"
    depends_on = [
      step.create_primary_cluster_backend_targets
    ]

    providers = {
      enos = provider.enos.ubuntu
    }

    variables {
      cluster_name    = step.create_primary_cluster_backend_targets.cluster_name
      cluster_tag_key = global.backend_tag_key
      license         = matrix.primary_backend == "consul" ? step.read_backend_license.license : null
      release = {
        edition = var.backend_edition
        version = var.dev_consul_version
      }
      target_hosts = step.create_primary_cluster_backend_targets.hosts
    }
  }

  step "create_primary_cluster" {
    description = <<-EOF
      Install, configure, start, initialize and unseal the primary Vault cluster on the specified
      target instances.

      Variables that are used in this step:
        backend_edition:
          When configured with the backend:consul variant we'll utilize this variable to determine
          which version of the consul client to install on each node for Consul storage. Note that
          if you set it to 'ent' you will also need a valid license configured for the
          read_backend_license step. If left unset we'll use an unlicensed CE version.
        dev_config_mode:
          You can set this variable to instruct enos on how to primarily configure Vault when starting
          the service. Options are 'file' and 'env' for configuration file or environment variables.
          If left unset we'll use the default value.
        dev_consul_version:
          When configured with the backend:consul variant we'll utilize this variable to determine
          which version of Consul to install. If left unset we'll utilize the default value.
        vault_artifact_path:
          When using the artifact:local variant this variable is utilized to specify where on
          the local disk the vault.zip file we've built is located. It can be left unset to use
          the default value.
        vault_enable_audit_devices:
          Whether or not to enable various audit devices after unsealing the Vault cluster. By default
          we'll configure syslog, socket, and file auditing.
        vault_product_version:
          When using the artifact:zip variant this variable is utilized to specify the version of
          Vault to download from releases.hashicorp.com.
    EOF
    module = module.vault_cluster
    depends_on = [
      step.create_primary_backend_cluster,
      step.create_primary_cluster_targets,
    ]

    providers = {
      enos = local.enos_provider[matrix.distro]
    }

    variables {
      // We set artifactory_release when we want to get a .deb or .rpm package from Artifactory.
      // We set release when we want to get a .zip bundle from releases.hashicorp.com.
      // We only set one or the other, never both.
      artifactory_release     = local.use_artifactory ? step.maybe_build_or_find_artifact.release : null
      backend_cluster_name    = step.create_primary_cluster_backend_targets.cluster_name
      backend_cluster_tag_key = global.backend_tag_key
      cluster_name            = step.create_primary_cluster_targets.cluster_name
      config_mode             = var.dev_config_mode
      consul_license          = matrix.primary_backend == "consul" ? step.read_backend_license.license : null
      consul_release = matrix.primary_backend == "consul" ? {
        edition = var.backend_edition
        version = var.dev_consul_version
      } : null
      enable_audit_devices = var.vault_enable_audit_devices
      install_dir          = local.vault_install_dir
      license              = step.read_vault_license.license
      local_artifact_path  = matrix.artifact == "local" ? abspath(var.vault_artifact_path) : null
      manage_service       = local.manage_service
      packages             = concat(global.packages, global.distro_packages[matrix.distro])
      release              = matrix.artifact == "zip" ? { version = var.vault_product_version, edition = matrix.edition } : null
      seal_attributes      = step.create_primary_seal_key.attributes
      seal_type            = matrix.primary_seal
      storage_backend      = matrix.primary_backend
      target_hosts         = step.create_primary_cluster_targets.hosts
    }
  }

step "create_secondary_backend_cluster" {
|
||||
description = <<-EOF
|
||||
Install, configure, and start the backend Consul storage cluster for the primary Vault Cluster.
|
||||
When we are using the raft storage variant this step is a no-op.
|
||||
|
||||
Variables that are used in this step:
|
||||
backend_edition:
|
||||
When configured with the backend:consul variant we'll utilize this variable to determine
|
||||
the edition of Consul to use for the cluster. Note that if you set it to 'ent' you will
|
||||
also need a valid license configured for the read_backend_license step. Default: ce.
|
||||
dev_consul_version:
|
||||
When configured with the backend:consul variant we'll utilize this variable to determine
|
||||
the version of Consul to use for the cluster.
|
||||
EOF
|
||||
module = "backend_${matrix.secondary_backend}"
|
||||
depends_on = [
|
||||
step.create_secondary_cluster_backend_targets
|
||||
]
|
||||
|
||||
providers = {
|
||||
enos = provider.enos.ubuntu
|
||||
}
|
||||
|
||||
variables {
|
||||
cluster_name = step.create_secondary_cluster_backend_targets.cluster_name
|
||||
cluster_tag_key = global.backend_tag_key
|
||||
license = matrix.secondary_backend == "consul" ? step.read_backend_license.license : null
|
||||
release = {
|
||||
edition = var.backend_edition
|
||||
version = var.dev_consul_version
|
||||
}
|
||||
target_hosts = step.create_secondary_cluster_backend_targets.hosts
|
||||
}
|
||||
}
|
||||
|
||||
  step "create_secondary_cluster" {
    description = <<-EOF
      Install, configure, start, initialize and unseal the secondary Vault cluster on the specified
      target instances.

      Variables that are used in this step:
        backend_edition:
          When configured with the backend:consul variant we'll utilize this variable to determine
          which version of the consul client to install on each node for Consul storage. Note that
          if you set it to 'ent' you will also need a valid license configured for the
          read_backend_license step. If left unset we'll use an unlicensed CE version.
        dev_config_mode:
          You can set this variable to instruct enos on how to primarily configure Vault when starting
          the service. Options are 'file' and 'env' for configuration file or environment variables.
          If left unset we'll use the default value.
        dev_consul_version:
          When configured with the backend:consul variant we'll utilize this variable to determine
          which version of Consul to install. If left unset we'll utilize the default value.
        vault_artifact_path:
          When using the artifact:local variant this variable is utilized to specify where on
          the local disk the vault.zip file we've built is located. It can be left unset to use
          the default value.
        vault_enable_audit_devices:
          Whether or not to enable various audit devices after unsealing the Vault cluster. By default
          we'll configure syslog, socket, and file auditing.
        vault_product_version:
          When using the artifact:zip variant this variable is utilized to specify the version of
          Vault to download from releases.hashicorp.com.
    EOF
    module     = module.vault_cluster
    depends_on = [
      step.create_secondary_backend_cluster,
      step.create_secondary_cluster_targets
    ]

    providers = {
      enos = local.enos_provider[matrix.distro]
    }

    variables {
      // We set vault_artifactory_release when we want to get a .deb or .rpm package from Artifactory.
      // We set vault_release when we want to get a .zip bundle from releases.hashicorp.com
      // We only set one or the other, never both.
      artifactory_release     = local.use_artifactory ? step.maybe_build_or_find_artifact.release : null
      backend_cluster_name    = step.create_secondary_cluster_backend_targets.cluster_name
      backend_cluster_tag_key = global.backend_tag_key
      cluster_name            = step.create_secondary_cluster_targets.cluster_name
      config_mode             = var.dev_config_mode
      consul_license          = matrix.secondary_backend == "consul" ? step.read_backend_license.license : null
      consul_release = matrix.secondary_backend == "consul" ? {
        edition = var.backend_edition
        version = var.dev_consul_version
      } : null
      enable_audit_devices = var.vault_enable_audit_devices
      install_dir          = local.vault_install_dir
      license              = step.read_vault_license.license
      local_artifact_path  = matrix.artifact == "local" ? abspath(var.vault_artifact_path) : null
      manage_service       = local.manage_service
      packages             = concat(global.packages, global.distro_packages[matrix.distro])
      release              = matrix.artifact == "zip" ? { version = var.vault_product_version, edition = matrix.edition } : null
      seal_attributes      = step.create_secondary_seal_key.attributes
      seal_type            = matrix.secondary_seal
      storage_backend      = matrix.secondary_backend
      target_hosts         = step.create_secondary_cluster_targets.hosts
    }
  }
  step "verify_that_vault_primary_cluster_is_unsealed" {
    description = <<-EOF
      Wait for the primary cluster to unseal and reach a healthy state.
    EOF
    module     = module.vault_verify_unsealed
    depends_on = [
      step.create_primary_cluster
    ]

    providers = {
      enos = local.enos_provider[matrix.distro]
    }

    variables {
      vault_instances   = step.create_primary_cluster_targets.hosts
      vault_install_dir = local.vault_install_dir
    }
  }
  step "verify_that_vault_secondary_cluster_is_unsealed" {
    description = <<-EOF
      Wait for the secondary cluster to unseal and reach a healthy state.
    EOF
    module     = module.vault_verify_unsealed
    depends_on = [
      step.create_secondary_cluster
    ]

    providers = {
      enos = local.enos_provider[matrix.distro]
    }

    variables {
      vault_instances   = step.create_secondary_cluster_targets.hosts
      vault_install_dir = local.vault_install_dir
    }
  }
  step "get_primary_cluster_ips" {
    description = <<-EOF
      Determine which node is the primary and which are followers and map their private IP address
      to their public IP address. We'll use this information so that we can enable performance
      replication on the leader.
    EOF
    module     = module.vault_get_cluster_ips
    depends_on = [step.verify_that_vault_primary_cluster_is_unsealed]

    providers = {
      enos = local.enos_provider[matrix.distro]
    }

    variables {
      vault_hosts       = step.create_primary_cluster_targets.hosts
      vault_install_dir = local.vault_install_dir
      vault_root_token  = step.create_primary_cluster.root_token
    }
  }
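As an illustrative sketch only (not the `vault_get_cluster_ips` module itself, which runs against live nodes), the leader/follower split this step produces can be modeled as: given the target hosts and the leader's private address as reported by Vault, partition the hosts and keep the private-to-public IP mapping. All host data below is made up.

```python
def split_leader_followers(hosts, leader_private_ip):
    """hosts: dict of index -> {"private_ip": ..., "public_ip": ...}."""
    leader = None
    followers = []
    for host in hosts.values():
        if host["private_ip"] == leader_private_ip:
            leader = host
        else:
            followers.append(host)
    if leader is None:
        raise ValueError("leader private IP not found among target hosts")
    return leader, followers

# Fake host data for illustration.
hosts = {
    0: {"private_ip": "10.13.10.1", "public_ip": "203.0.113.1"},
    1: {"private_ip": "10.13.10.2", "public_ip": "203.0.113.2"},
    2: {"private_ip": "10.13.10.3", "public_ip": "203.0.113.3"},
}
leader, followers = split_leader_followers(hosts, "10.13.10.2")
print(leader["public_ip"])                  # 203.0.113.2
print([h["public_ip"] for h in followers])  # ['203.0.113.1', '203.0.113.3']
```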

  step "get_secondary_cluster_ips" {
    description = <<-EOF
      Determine which node is the primary and which are followers and map their private IP address
      to their public IP address. We'll use this information so that we can enable performance
      replication on the leader.
    EOF
    module     = module.vault_get_cluster_ips
    depends_on = [step.verify_that_vault_secondary_cluster_is_unsealed]

    providers = {
      enos = local.enos_provider[matrix.distro]
    }

    variables {
      vault_hosts       = step.create_secondary_cluster_targets.hosts
      vault_install_dir = local.vault_install_dir
      vault_root_token  = step.create_secondary_cluster.root_token
    }
  }

  step "setup_userpass_for_replication_auth" {
    description = <<-EOF
      Enable the auth userpass method and create a new user.
    EOF
    module     = module.vault_verify_write_data
    depends_on = [step.get_primary_cluster_ips]

    providers = {
      enos = local.enos_provider[matrix.distro]
    }

    variables {
      leader_public_ip  = step.get_primary_cluster_ips.leader_public_ip
      leader_private_ip = step.get_primary_cluster_ips.leader_private_ip
      vault_instances   = step.create_primary_cluster_targets.hosts
      vault_install_dir = local.vault_install_dir
      vault_root_token  = step.create_primary_cluster.root_token
    }
  }

  step "configure_performance_replication_primary" {
    description = <<-EOF
      Create a superuser policy and write it for our new user. Activate performance replication on
      the primary.
    EOF
    module     = module.vault_setup_perf_primary
    depends_on = [
      step.get_primary_cluster_ips,
      step.get_secondary_cluster_ips,
      step.setup_userpass_for_replication_auth,
    ]

    providers = {
      enos = local.enos_provider[matrix.distro]
    }

    variables {
      primary_leader_public_ip  = step.get_primary_cluster_ips.leader_public_ip
      primary_leader_private_ip = step.get_primary_cluster_ips.leader_private_ip
      vault_install_dir         = local.vault_install_dir
      vault_root_token          = step.create_primary_cluster.root_token
    }
  }

  step "generate_secondary_token" {
    description = <<-EOF
      Create a random token and write it to sys/replication/performance/primary/secondary-token on
      the primary.
    EOF
    module     = module.generate_secondary_token
    depends_on = [step.configure_performance_replication_primary]

    providers = {
      enos = local.enos_provider[matrix.distro]
    }

    variables {
      primary_leader_public_ip = step.get_primary_cluster_ips.leader_public_ip
      vault_install_dir        = local.vault_install_dir
      vault_root_token         = step.create_primary_cluster.root_token
    }
  }

  step "configure_performance_replication_secondary" {
    description = <<-EOF
      Enable performance replication on the secondary using the new shared token.
    EOF
    module     = module.vault_setup_perf_secondary
    depends_on = [step.generate_secondary_token]

    providers = {
      enos = local.enos_provider[matrix.distro]
    }

    variables {
      secondary_leader_public_ip  = step.get_secondary_cluster_ips.leader_public_ip
      secondary_leader_private_ip = step.get_secondary_cluster_ips.leader_private_ip
      vault_install_dir           = local.vault_install_dir
      vault_root_token            = step.create_secondary_cluster.root_token
      wrapping_token              = step.generate_secondary_token.secondary_token
    }
  }

  step "unseal_secondary_followers" {
    description = <<-EOF
      After replication is enabled we need to unseal the followers on the secondary cluster.
      Depending on how we're configured we'll pass the unseal keys according to this guide:
      https://developer.hashicorp.com/vault/docs/enterprise/replication#seals
    EOF
    module     = module.vault_unseal_nodes
    depends_on = [
      step.create_primary_cluster,
      step.create_secondary_cluster,
      step.get_secondary_cluster_ips,
      step.configure_performance_replication_secondary
    ]

    providers = {
      enos = local.enos_provider[matrix.distro]
    }

    variables {
      follower_public_ips = step.get_secondary_cluster_ips.follower_public_ips
      vault_install_dir   = local.vault_install_dir
      vault_unseal_keys   = matrix.primary_seal == "shamir" ? step.create_primary_cluster.unseal_keys_hex : step.create_primary_cluster.recovery_keys_hex
      vault_seal_type     = matrix.primary_seal == "shamir" ? matrix.primary_seal : matrix.secondary_seal
    }
  }
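The two conditionals in the variables above encode the rule from the replication seals guide: when the primary is shamir sealed, its unseal keys (and seal type) apply to the secondary's followers; with an auto-seal, the recovery keys and the secondary's own seal type are used. A sketch of that selection logic, with fake key material:

```python
def select_unseal_material(primary_seal, secondary_seal, unseal_keys, recovery_keys):
    # Mirrors the HCL ternaries: shamir primaries hand their unseal keys to the
    # secondary; auto-sealed primaries hand over recovery keys instead.
    keys = unseal_keys if primary_seal == "shamir" else recovery_keys
    seal_type = primary_seal if primary_seal == "shamir" else secondary_seal
    return keys, seal_type

keys, seal_type = select_unseal_material("shamir", "awskms", ["k1", "k2"], ["r1", "r2"])
print(keys, seal_type)  # ['k1', 'k2'] shamir
```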

  step "verify_secondary_cluster_is_unsealed_after_enabling_replication" {
    description = <<-EOF
      Verify that the secondary cluster is unsealed after we enable performance replication.
    EOF
    module     = module.vault_verify_unsealed
    depends_on = [
      step.unseal_secondary_followers
    ]

    providers = {
      enos = local.enos_provider[matrix.distro]
    }

    variables {
      vault_instances   = step.create_secondary_cluster_targets.hosts
      vault_install_dir = local.vault_install_dir
    }
  }

  step "verify_performance_replication" {
    description = <<-EOF
      Check sys/replication/performance/status and ensure that all nodes are in the correct state
      after enabling performance replication.
    EOF
    module     = module.vault_verify_performance_replication
    depends_on = [step.verify_secondary_cluster_is_unsealed_after_enabling_replication]

    providers = {
      enos = local.enos_provider[matrix.distro]
    }

    variables {
      primary_leader_public_ip    = step.get_primary_cluster_ips.leader_public_ip
      primary_leader_private_ip   = step.get_primary_cluster_ips.leader_private_ip
      secondary_leader_public_ip  = step.get_secondary_cluster_ips.leader_public_ip
      secondary_leader_private_ip = step.get_secondary_cluster_ips.leader_private_ip
      vault_install_dir           = local.vault_install_dir
    }
  }
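A rough sketch of the kind of assertions this verification makes against sys/replication/performance/status on each leader. The response shapes here are trimmed-down fakes for illustration, not real Vault output:

```python
def check_replication(primary_status, secondary_status):
    # The primary should report itself as the replication primary, and the
    # secondary should be streaming WALs from it. Field names follow Vault's
    # status endpoint, but these dicts are hand-made stand-ins.
    assert primary_status["mode"] == "primary"
    assert secondary_status["mode"] == "secondary"
    assert secondary_status["state"] == "stream-wals"
    return "healthy"

primary = {"mode": "primary", "known_secondaries": ["secondary-1"]}
secondary = {"mode": "secondary", "state": "stream-wals"}
print(check_replication(primary, secondary))  # healthy
```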

  // When using a Consul backend, these output values will be for the Consul backend.
  // When using a Raft backend, these output values will be null.
  output "audit_device_file_path" {
    description = "The file path for the file audit device, if enabled"
    value       = step.create_primary_cluster.audit_device_file_path
  }

  output "primary_cluster_hosts" {
    description = "The Vault primary cluster target hosts"
    value       = step.create_primary_cluster_targets.hosts
  }

  output "primary_cluster_root_token" {
    description = "The Vault primary cluster root token"
    value       = step.create_primary_cluster.root_token
  }

  output "primary_cluster_unseal_keys_b64" {
    description = "The Vault primary cluster unseal keys"
    value       = step.create_primary_cluster.unseal_keys_b64
  }

  output "primary_cluster_unseal_keys_hex" {
    description = "The Vault primary cluster unseal keys hex"
    value       = step.create_primary_cluster.unseal_keys_hex
  }

  output "primary_cluster_recovery_key_shares" {
    description = "The Vault primary cluster recovery key shares"
    value       = step.create_primary_cluster.recovery_key_shares
  }

  output "primary_cluster_recovery_keys_b64" {
    description = "The Vault primary cluster recovery keys b64"
    value       = step.create_primary_cluster.recovery_keys_b64
  }

  output "primary_cluster_recovery_keys_hex" {
    description = "The Vault primary cluster recovery keys hex"
    value       = step.create_primary_cluster.recovery_keys_hex
  }

  output "secondary_cluster_hosts" {
    description = "The Vault secondary cluster public IPs"
    value       = step.create_secondary_cluster_targets.hosts
  }

  output "secondary_cluster_root_token" {
    description = "The Vault secondary cluster root token"
    value       = step.create_secondary_cluster.root_token
  }

  output "performance_secondary_token" {
    description = "The performance secondary replication token"
    value       = step.generate_secondary_token.secondary_token
  }
}
507 enos/enos-dev-scenario-single-cluster.hcl Normal file
@@ -0,0 +1,507 @@
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: BUSL-1.1

scenario "dev_single_cluster" {
  description = <<-EOF
    This scenario spins up a single Vault cluster with either an external Consul cluster or
    integrated Raft for storage. None of our test verification is included in this scenario in order
    to improve end-to-end speed. If you wish to perform such verification you'll need to use a
    non-dev scenario instead.

    The scenario supports finding and installing any released 'linux/amd64' or 'linux/arm64' Vault
    artifact as long as its version is >= 1.8. You can also use the 'artifact:local' variant to
    build and deploy the current branch!

    In order to execute this scenario you'll need to install the enos CLI:
      brew tap hashicorp/tap && brew update && brew install hashicorp/tap/enos

    You'll also need access to an AWS account with an SSH keypair.
    Perform the steps here to get AWS access with Doormat: https://eng-handbook.hashicorp.services/internal-tools/enos/common-setup-steps/#authenticate-with-doormat
    Perform the steps here to get an AWS keypair set up: https://eng-handbook.hashicorp.services/internal-tools/enos/common-setup-steps/#set-your-aws-key-pair-name-and-private-key

    Please note that this scenario requires several input variables to be set in order to function
    properly. While not all variants will require all variables, it's suggested that you look over
    the scenario outline to determine which variables affect which steps and which have inputs that
    you should set. You can use the following command to get a textual outline of the entire
    scenario:
      enos scenario outline dev_single_cluster

    You can also create an HTML version that is suitable for viewing in web browsers:
      enos scenario outline dev_single_cluster --format html > index.html
      open index.html

    To configure the required variables you have a couple of choices. You can create an
    'enos-local.vars' file in the same 'enos' directory where this scenario is defined. In it you
    declare your desired variable values. For example, you could copy the following content and
    then set the values as necessary:

      artifactory_username     = "username@hashicorp.com"
      artifactory_token        = "<ARTIFACTORY TOKEN VALUE>"
      aws_region               = "us-west-2"
      aws_ssh_keypair_name     = "<YOUR REGION SPECIFIC KEYPAIR NAME>"
      aws_ssh_keypair_key_path = "/path/to/your/private/key.pem"
      dev_consul_version       = "1.18.1"
      vault_license_path       = "./support/vault.hclic"
      vault_product_version    = "1.16.2"

    Alternatively, you can set them in your environment:
      export ENOS_VAR_aws_region="us-west-2"
      export ENOS_VAR_vault_license_path="./support/vault.hclic"

    After you've configured your inputs you can list and filter the available scenarios and then
    subsequently launch and destroy them.
      enos scenario list --help
      enos scenario launch --help
      enos scenario list dev_single_cluster
      enos scenario launch dev_single_cluster arch:arm64 artifact:local backend:raft distro:ubuntu edition:ce seal:awskms

    When the scenario is finished launching you can refer to the scenario outputs to see information
    related to your cluster. You can use this information to SSH into nodes and/or to interact
    with vault.
      enos scenario output dev_single_cluster arch:arm64 artifact:local backend:raft distro:ubuntu edition:ce seal:awskms
      ssh -i /path/to/your/private/key.pem <PUBLIC_IP>
      vault status

    After you've finished you can tear down the cluster:
      enos scenario destroy dev_single_cluster arch:arm64 artifact:local backend:raft distro:ubuntu edition:ce seal:awskms
  EOF

  // The matrix is where we define all the baseline combinations that enos can utilize to customize
  // your scenario. By default enos attempts to perform your command on the entire product matrix!
  // Most of the time you'll want to reduce that by passing in a filter.
  // Run 'enos scenario list --help' to see more about how filtering scenarios works in enos.
  matrix {
    arch     = ["amd64", "arm64"]
    artifact = ["local", "deb", "rpm", "zip"]
    backend  = ["consul", "raft"]
    distro   = ["ubuntu", "rhel"]
    edition  = ["ce", "ent", "ent.fips1402", "ent.hsm", "ent.hsm.fips1402"]
    seal     = ["awskms", "pkcs11", "shamir"]

    exclude {
      edition = ["ent.hsm", "ent.fips1402", "ent.hsm.fips1402"]
      arch    = ["arm64"]
    }

    exclude {
      artifact = ["rpm"]
      distro   = ["ubuntu"]
    }

    exclude {
      artifact = ["deb"]
      distro   = ["rhel"]
    }

    exclude {
      seal    = ["pkcs11"]
      edition = ["ce", "ent", "ent.fips1402"]
    }
  }
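Conceptually, a matrix is a cross-product of its vectors, and each exclude block prunes any variant matching all of its values. A sketch of that pruning with a reduced two-vector version of the matrix above (this is a model of the behavior, not enos's actual implementation):

```python
import itertools

# Reduced matrix: 4 artifacts x 2 distros = 8 raw variants.
matrix = {
    "artifact": ["local", "deb", "rpm", "zip"],
    "distro":   ["ubuntu", "rhel"],
}
# rpm is only for rhel, deb only for ubuntu, as in the excludes above.
excludes = [
    {"artifact": ["rpm"], "distro": ["ubuntu"]},
    {"artifact": ["deb"], "distro": ["rhel"]},
]

def variants(matrix, excludes):
    keys = sorted(matrix)
    for combo in itertools.product(*(matrix[k] for k in keys)):
        v = dict(zip(keys, combo))
        # A variant is dropped when every key of an exclude block matches it.
        if any(all(v[k] in ex[k] for k in ex) for ex in excludes):
            continue
        yield v

print(len(list(variants(matrix, excludes))))  # 6
```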

  // Specify which Terraform configs and providers to use in this scenario. Most of the time you'll
  // never need to change this! If you wanted to test with different terraform or terraform CLI
  // settings you can define them and assign them here.
  terraform_cli = terraform_cli.default
  terraform     = terraform.default

  // Here we declare all of the providers that we might need for our scenario.
  providers = [
    provider.aws.default,
    provider.enos.ubuntu,
    provider.enos.rhel
  ]

  // These are variable values that are local to our scenario. They are evaluated after external
  // variables and scenario matrices but before any of our steps.
  locals {
    // The enos provider uses different ssh transport configs for different distros (as
    // specified in enos-providers.hcl), and we need to be able to access both of those here.
    enos_provider = {
      rhel   = provider.enos.rhel
      ubuntu = provider.enos.ubuntu
    }
    // We install vault packages from artifactory. If you wish to use one of these variants you'll
    // need to configure your artifactory credentials.
    use_artifactory = matrix.artifact == "deb" || matrix.artifact == "rpm"
    // Zip bundles and local builds don't come with systemd units or any associated configuration.
    // When this is true we'll let enos handle this for us.
    manage_service = matrix.artifact == "zip" || matrix.artifact == "local"
    // If you are using an ent edition, you will need a Vault license. Common convention
    // is to store it at ./support/vault.hclic, but you may change this path according
    // to your own preference.
    vault_install_dir = matrix.artifact == "zip" ? var.vault_install_dir : global.vault_install_dir_packages[matrix.distro]
  }
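The two boolean locals above are pure functions of the artifact variant; a minimal sketch of the same derivations, useful for sanity-checking which code paths a given filter will take:

```python
def derive_locals(artifact):
    # deb/rpm packages come from Artifactory and ship their own systemd units;
    # zip bundles and local builds need enos to manage the service itself.
    use_artifactory = artifact in ("deb", "rpm")
    manage_service = artifact in ("zip", "local")
    return use_artifactory, manage_service

print(derive_locals("deb"))    # (True, False)
print(derive_locals("local"))  # (False, True)
```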

  // Begin scenario steps. These are the steps we'll perform to get your cluster up and running.
  step "maybe_build_or_find_artifact" {
    description = <<-EOF
      Depending on how we intend to get our Vault artifact, this step either builds vault from our
      current branch or finds debian or redhat packages in Artifactory. If we're using a zip bundle
      we'll get it from releases.hashicorp.com and skip this step entirely. Please note that if you
      wish to use a deb or rpm artifact you'll have to configure your artifactory credentials!

      Variables that are used in this step:

        artifactory_host:
          The artifactory host to search. It's very unlikely that you'll want to change this. The
          default value is the HashiCorp Artifactory instance.
        artifactory_repo:
          The artifactory repository to search. It's very unlikely that you'll want to change this.
          The default value is where CRT will publish packages.
        artifactory_username:
          The artifactory username associated with your token. You'll need this if you wish to use
          deb or rpm artifacts! You can request access via Okta.
        artifactory_token:
          The artifactory token associated with your username. You'll need this if you wish to use
          deb or rpm artifacts! You can create a token by logging into Artifactory via Okta.
        vault_product_version:
          When using the artifact:rpm or artifact:deb variants we'll use this variable to determine
          which version of the Vault package we should fetch from Artifactory.
        vault_artifact_path:
          When using the artifact:local variant we'll utilize this variable to determine where
          to create the vault.zip archive from the local branch. Default: /tmp/vault.zip.
        vault_local_build_tags:
          When using the artifact:local variant we'll use this variable to inject custom build
          tags. If left unset we'll automatically use the build tags that correspond to the edition
          variant.
    EOF
    module    = matrix.artifact == "local" ? "build_local" : local.use_artifactory ? "build_artifactory_package" : null
    skip_step = matrix.artifact == "zip"

    variables {
      // Used for all modules
      arch            = matrix.arch
      edition         = matrix.edition
      product_version = var.vault_product_version
      // Required for the local build which will always result in using a local zip bundle
      artifact_path = var.vault_artifact_path
      build_tags    = var.vault_local_build_tags != null ? var.vault_local_build_tags : global.build_tags[matrix.edition]
      goarch        = matrix.arch
      goos          = "linux"
      // Required when using a RPM or Deb package
      // Some of these variables don't have default values so we'll only set them if they are
      // required.
      artifactory_host     = local.use_artifactory ? var.artifactory_host : null
      artifactory_repo     = local.use_artifactory ? var.artifactory_repo : null
      artifactory_username = local.use_artifactory ? var.artifactory_username : null
      artifactory_token    = local.use_artifactory ? var.artifactory_token : null
      distro               = matrix.distro
    }
  }

  step "ec2_info" {
    description = "This discovers useful metadata in EC2, like the AWS AMI IDs that we use in later modules."
    module      = module.ec2_info
  }

  step "create_vpc" {
    description = <<-EOF
      Create the VPC resources required for our scenario.

      Variables that are used in this step:
        tags:
          If you wish to add custom tags to taggable resources in AWS you can set the 'tags' variable
          and they'll be added to resources when possible.
    EOF
    module     = module.create_vpc
    depends_on = [step.ec2_info]

    variables {
      common_tags = global.tags
    }
  }

  step "read_backend_license" {
    description = <<-EOF
      Read the contents of the backend license if we're using a Consul backend and the edition is "ent".

      Variables that are used in this step:
        backend_edition:
          The edition of Consul to use. If left unset it will default to CE.
        backend_license_path:
          If this variable is set we'll use it to determine the local path on disk that contains a
          Consul Enterprise license. If it is not set we'll attempt to load it from
          ./support/consul.hclic.
    EOF
    skip_step = matrix.backend == "raft" || var.backend_edition == "oss" || var.backend_edition == "ce"
    module    = module.read_license

    variables {
      file_name = global.backend_license_path
    }
  }

  step "read_vault_license" {
    description = <<-EOF
      Validates and reads into memory the contents of a local Vault Enterprise license if we're
      using an Enterprise edition. This step does not run when using a community edition of Vault.

      Variables that are used in this step:
        vault_license_path:
          If this variable is set we'll use it to determine the local path on disk that contains a
          Vault Enterprise license. If it is not set we'll attempt to load it from
          ./support/vault.hclic.
    EOF
    skip_step = matrix.edition == "ce"
    module    = module.read_license

    variables {
      file_name = global.vault_license_path
    }
  }

  step "create_seal_key" {
    description = <<-EOF
      Create the necessary seal keys depending on our configured seal.

      Variables that are used in this step:
        tags:
          If you wish to add custom tags to taggable resources in AWS you can set the 'tags' variable
          and they'll be added to resources when possible.
    EOF
    module     = "seal_${matrix.seal}"
    depends_on = [step.create_vpc]

    providers = {
      enos = provider.enos.ubuntu
    }

    variables {
      cluster_id  = step.create_vpc.id
      common_tags = global.tags
    }
  }

  step "create_vault_cluster_targets" {
    description = <<-EOF
      Creates the necessary machine infrastructure targets for the Vault cluster. We also ensure
      that the firewall is configured to allow the necessary Vault and Consul traffic and SSH
      from the machine executing the Enos scenario.

      Variables that are used in this step:
        aws_ssh_keypair_name:
          The AWS SSH Keypair name to use for target machines.
        project_name:
          The project name is used for additional tag metadata on resources.
        tags:
          If you wish to add custom tags to taggable resources in AWS you can set the 'tags' variable
          and they'll be added to resources when possible.
        vault_instance_count:
          How many instances to provision for the Vault cluster. If left unset it will use a default
          of three.
    EOF
    module     = module.target_ec2_instances
    depends_on = [step.create_vpc]

    providers = {
      enos = local.enos_provider[matrix.distro]
    }

    variables {
      ami_id          = step.ec2_info.ami_ids[matrix.arch][matrix.distro][global.distro_version[matrix.distro]]
      instance_count  = try(var.vault_instance_count, 3)
      cluster_tag_key = global.vault_tag_key
      common_tags     = global.tags
      seal_key_names  = step.create_seal_key.resource_names
      vpc_id          = step.create_vpc.id
    }
  }

  step "create_vault_cluster_backend_targets" {
    description = <<-EOF
      Creates the necessary machine infrastructure targets for the backend Consul storage cluster.
      We also ensure that the firewall is configured to allow the necessary Consul traffic and SSH
      from the machine executing the Enos scenario. When using integrated storage this step is a
      no-op that does nothing.

      Variables that are used in this step:
        tags:
          If you wish to add custom tags to taggable resources in AWS you can set the 'tags' variable
          and they'll be added to resources when possible.
        project_name:
          The project name is used for additional tag metadata on resources.
        aws_ssh_keypair_name:
          The AWS SSH Keypair name to use for target machines.
    EOF
    module     = matrix.backend == "consul" ? module.target_ec2_instances : module.target_ec2_shim
    depends_on = [step.create_vpc]

    providers = {
      enos = provider.enos.ubuntu
    }

    variables {
      ami_id          = step.ec2_info.ami_ids["arm64"]["ubuntu"]["22.04"]
      cluster_tag_key = global.backend_tag_key
      common_tags     = global.tags
      seal_key_names  = step.create_seal_key.resource_names
      vpc_id          = step.create_vpc.id
    }
  }

  step "create_backend_cluster" {
    description = <<-EOF
      Install, configure, and start the backend Consul storage cluster. When we are using the raft
      storage variant this step is a no-op.

      Variables that are used in this step:
        backend_edition:
          When configured with the backend:consul variant we'll utilize this variable to determine
          the edition of Consul to use for the cluster. Note that if you set it to 'ent' you will
          also need a valid license configured for the read_backend_license step. Default: ce.
        dev_consul_version:
          When configured with the backend:consul variant we'll utilize this variable to determine
          the version of Consul to use for the cluster.
    EOF
    module     = "backend_${matrix.backend}"
    depends_on = [
      step.create_vault_cluster_backend_targets
    ]

    providers = {
      enos = provider.enos.ubuntu
    }

    variables {
      cluster_name    = step.create_vault_cluster_backend_targets.cluster_name
      cluster_tag_key = global.backend_tag_key
      license         = (matrix.backend == "consul" && var.backend_edition == "ent") ? step.read_backend_license.license : null
      release = {
        edition = var.backend_edition
        version = var.dev_consul_version
      }
      target_hosts = step.create_vault_cluster_backend_targets.hosts
    }
  }

  step "create_vault_cluster" {
    description = <<-EOF
      Install, configure, start, initialize and unseal the Vault cluster on the specified target
      instances.

      Variables that are used in this step:
        backend_edition:
          When configured with the backend:consul variant we'll utilize this variable to determine
          which version of the consul client to install on each node for Consul storage. Note that
          if you set it to 'ent' you will also need a valid license configured for the
          read_backend_license step. If left unset we'll use an unlicensed CE version.
        dev_config_mode:
          You can set this variable to instruct enos on how to primarily configure Vault when starting
          the service. Options are 'file' and 'env' for configuration file or environment variables.
          If left unset we'll use the default value.
        dev_consul_version:
          When configured with the backend:consul variant we'll utilize this variable to determine
          which version of Consul to install. If left unset we'll utilize the default value.
        vault_artifact_path:
          When using the artifact:local variant this variable is utilized to specify where on
          the local disk the vault.zip file we've built is located. It can be left unset to use
          the default value.
        vault_enable_audit_devices:
          Whether or not to enable various audit devices after unsealing the Vault cluster. By default
          we'll configure syslog, socket, and file auditing.
        vault_product_version:
          When using the artifact:zip variant this variable is utilized to specify the version of
          Vault to download from releases.hashicorp.com.
    EOF
    module     = module.vault_cluster
    depends_on = [
      step.create_backend_cluster,
      step.create_vault_cluster_targets,
    ]

    providers = {
      enos = local.enos_provider[matrix.distro]
    }

    variables {
      // We set vault_artifactory_release when we want to get a .deb or .rpm package from Artifactory.
      // We set vault_release when we want to get a .zip bundle from releases.hashicorp.com
      // We only set one or the other, never both.
      artifactory_release     = local.use_artifactory ? step.maybe_build_or_find_artifact.release : null
      backend_cluster_name    = step.create_vault_cluster_backend_targets.cluster_name
      backend_cluster_tag_key = global.backend_tag_key
      cluster_name            = step.create_vault_cluster_targets.cluster_name
      config_mode             = var.dev_config_mode
      consul_license          = (matrix.backend == "consul" && var.backend_edition == "ent") ? step.read_backend_license.license : null
      consul_release = matrix.backend == "consul" ? {
        edition = var.backend_edition
        version = var.dev_consul_version
      } : null
      enable_audit_devices = var.vault_enable_audit_devices
      install_dir          = local.vault_install_dir
      license              = matrix.edition != "ce" ? step.read_vault_license.license : null
      local_artifact_path  = matrix.artifact == "local" ? abspath(var.vault_artifact_path) : null
      manage_service       = local.manage_service
      packages             = concat(global.packages, global.distro_packages[matrix.distro])
      release              = matrix.artifact == "zip" ? { version = var.vault_product_version, edition = matrix.edition } : null
|
||||
seal_attributes = step.create_seal_key.attributes
|
||||
seal_type = matrix.seal
|
||||
storage_backend = matrix.backend
|
||||
target_hosts = step.create_vault_cluster_targets.hosts
|
||||
}
|
||||
}
|
||||
|
||||
// When using a Consul backend, these output values will be for the Consul backend.
|
||||
// When using a Raft backend, these output values will be null.
|
||||
output "audit_device_file_path" {
|
||||
description = "The file path for the file audit device, if enabled"
|
||||
value = step.create_vault_cluster.audit_device_file_path
|
||||
}
|
||||
|
||||
output "cluster_name" {
|
||||
description = "The Vault cluster name"
|
||||
value = step.create_vault_cluster.cluster_name
|
||||
}
|
||||
|
||||
output "hosts" {
|
||||
description = "The Vault cluster target hosts"
|
||||
value = step.create_vault_cluster.target_hosts
|
||||
}
|
||||
|
||||
output "private_ips" {
|
||||
description = "The Vault cluster private IPs"
|
||||
value = step.create_vault_cluster.private_ips
|
||||
}
|
||||
|
||||
output "public_ips" {
|
||||
description = "The Vault cluster public IPs"
|
||||
value = step.create_vault_cluster.public_ips
|
||||
}
|
||||
|
||||
output "root_token" {
|
||||
description = "The Vault cluster root token"
|
||||
value = step.create_vault_cluster.root_token
|
||||
}
|
||||
|
||||
output "recovery_key_shares" {
|
||||
description = "The Vault cluster recovery key shares"
|
||||
value = step.create_vault_cluster.recovery_key_shares
|
||||
}
|
||||
|
||||
output "recovery_keys_b64" {
|
||||
description = "The Vault cluster recovery keys b64"
|
||||
value = step.create_vault_cluster.recovery_keys_b64
|
||||
}
|
||||
|
||||
output "recovery_keys_hex" {
|
||||
description = "The Vault cluster recovery keys hex"
|
||||
value = step.create_vault_cluster.recovery_keys_hex
|
||||
}
|
||||
|
||||
output "seal_key_attributes" {
|
||||
description = "The Vault cluster seal attributes"
|
||||
value = step.create_seal_key.attributes
|
||||
}
|
||||
|
||||
output "unseal_keys_b64" {
|
||||
description = "The Vault cluster unseal keys"
|
||||
value = step.create_vault_cluster.unseal_keys_b64
|
||||
}
|
||||
|
||||
output "unseal_keys_hex" {
|
||||
description = "The Vault cluster unseal keys hex"
|
||||
value = step.create_vault_cluster.unseal_keys_hex
|
||||
}
|
||||
}

enos/enos-dev-variables.hcl (new file)
@@ -0,0 +1,15 @@
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: BUSL-1.1

variable "dev_config_mode" {
  type        = string
  description = "The method to use when configuring Vault. When set to 'env' we will configure Vault using VAULT_ style environment variables if possible. When set to 'file' we'll use the HCL configuration file for all configuration options."
  default     = "file" // or "env"
}

variable "dev_consul_version" {
  type        = string
  description = "The version of Consul to use when using Consul for storage."
  default     = "1.18.1"
  // NOTE: You can also set "backend_edition" if you want to use Consul Enterprise.
}
@@ -2,11 +2,12 @@
 # SPDX-License-Identifier: BUSL-1.1

 globals {
-  archs            = ["amd64", "arm64"]
-  artifact_sources = ["local", "crt", "artifactory"]
-  artifact_types   = ["bundle", "package"]
-  backends         = ["consul", "raft"]
-  backend_tag_key  = "VaultStorage"
+  archs                = ["amd64", "arm64"]
+  artifact_sources     = ["local", "crt", "artifactory"]
+  artifact_types       = ["bundle", "package"]
+  backends             = ["consul", "raft"]
+  backend_license_path = abspath(var.backend_license_path != null ? var.backend_license_path : joinpath(path.root, "./support/consul.hclic"))
+  backend_tag_key      = "VaultStorage"
   build_tags = {
     "ce"  = ["ui"]
     "ent" = ["ui", "enterprise", "ent"]
@@ -16,6 +16,10 @@ module "backend_raft" {
   source = "./modules/backend_raft"
 }

+module "build_artifactory_package" {
+  source = "./modules/build_artifactory_package"
+}
+
 module "build_crt" {
   source = "./modules/build_crt"
 }
@@ -4,10 +4,6 @@
 terraform_cli "default" {
   plugin_cache_dir = var.terraform_plugin_cache_dir != null ? abspath(var.terraform_plugin_cache_dir) : null

-  credentials "app.terraform.io" {
-    token = var.tfc_api_token
-  }
-
 /*
 provider_installation {
   dev_overrides = {
@@ -93,12 +93,6 @@ variable "terraform_plugin_cache_dir" {
   default = null
 }

-variable "tfc_api_token" {
-  description = "The Terraform Cloud QTI Organization API token. This is used to download the enos Terraform provider."
-  type        = string
-  sensitive   = true
-}
-
 variable "ubuntu_distro_version" {
   description = "The version of ubuntu to use"
   type        = string
@@ -51,10 +51,6 @@
 # It must exist.
 # terraform_plugin_cache_dir = "/Users/<user>/.terraform/plugin-cache-dir

-# tfc_api_token is the Terraform Cloud QTI Organization API token. We need this
-# to download the enos Terraform provider and the enos Terraform modules.
-# tfc_api_token = "XXXXX.atlasv1.XXXXX..."
-
 # ui_test_filter is the test filter to limit the ui tests to execute for the ui scenario. It will
 # be appended to the ember test command as '-f=\"<filter>\"'.
 # ui_test_filter = "sometest"
@@ -17,8 +17,4 @@ terraform "k8s" {

 terraform_cli "default" {
   plugin_cache_dir = var.terraform_plugin_cache_dir != null ? abspath(var.terraform_plugin_cache_dir) : null
-
-  credentials "app.terraform.io" {
-    token = var.tfc_api_token
-  }
 }
@@ -43,11 +43,6 @@ variable "terraform_plugin_cache_dir" {
   default = null
 }

-variable "tfc_api_token" {
-  description = "The Terraform Cloud QTI Organization API token."
-  type        = string
-}
-
 variable "vault_build_date" {
   description = "The build date for the vault docker image"
   type        = string
enos/modules/build_artifactory_package/main.tf (new file)
@@ -0,0 +1,159 @@
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: BUSL-1.1

terraform {
  required_providers {
    enos = {
      source = "registry.terraform.io/hashicorp-forge/enos"
    }
  }
}

variable "arch" {
  type        = string
  description = "The architecture for the desired artifact"
}

variable "artifactory_username" {
  type        = string
  description = "The username to use when connecting to Artifactory"
}

variable "artifactory_token" {
  type        = string
  description = "The token to use when connecting to Artifactory"
  sensitive   = true
}

variable "artifactory_host" {
  type        = string
  description = "The Artifactory host to search for Vault artifacts"
  default     = "https://artifactory.hashicorp.engineering/artifactory"
}

variable "distro" {
  type        = string
  description = "The distro for the desired artifact (ubuntu or rhel)"
}

variable "distro_version" {
  type        = string
  description = "The RHEL version for .rpm packages"
  default     = "9"
}

variable "edition" {
  type        = string
  description = "The edition of Vault to use"
}

variable "product_version" {
  type        = string
  description = "The version of Vault to use"
}

// Shim variables that we don't use but include to satisfy the build module "interface"
variable "artifact_path" { default = null }
variable "artifact_type" { default = null }
variable "artifactory_repo" { default = null }
variable "build_tags" { default = null }
variable "bundle_path" { default = null }
variable "goarch" { default = null }
variable "goos" { default = null }
variable "revision" { default = null }

locals {
  // File name prefixes for the various distributions and editions
  artifact_prefix = {
    ubuntu = {
      "ce"               = "vault_"
      "ent"              = "vault-enterprise_"
      "ent.hsm"          = "vault-enterprise-hsm_"
      "ent.hsm.fips1402" = "vault-enterprise-hsm-fips1402_"
      "oss"              = "vault_"
    },
    rhel = {
      "ce"               = "vault-"
      "ent"              = "vault-enterprise-"
      "ent.hsm"          = "vault-enterprise-hsm-"
      "ent.hsm.fips1402" = "vault-enterprise-hsm-fips1402-"
      "oss"              = "vault-"
    }
  }

  // Format the version and edition to use in the artifact name
  artifact_version = {
    "ce"               = var.product_version
    "ent"              = "${var.product_version}+ent"
    "ent.hsm"          = "${var.product_version}+ent"
    "ent.hsm.fips1402" = "${var.product_version}+ent"
    "oss"              = var.product_version
  }

  // File name extensions for the various architectures and distributions
  artifact_extension = {
    amd64 = {
      ubuntu = "-1_amd64.deb"
      rhel   = "-1.x86_64.rpm"
    }
    arm64 = {
      ubuntu = "-1_arm64.deb"
      rhel   = "-1.aarch64.rpm"
    }
  }

  // Use the above variables to construct the artifact name to look up in Artifactory.
  // Will look something like:
  //   vault_1.12.2-1_arm64.deb
  //   vault-enterprise_1.12.2+ent-1_amd64.deb
  //   vault-enterprise-hsm-1.12.2+ent-1.x86_64.rpm
  artifact_name = "${local.artifact_prefix[var.distro][var.edition]}${local.artifact_version[var.edition]}${local.artifact_extension[var.arch][var.distro]}"

  // The path within the Artifactory repo that corresponds to the appropriate architecture
  artifactory_repo_path_dir = {
    "amd64" = "x86_64"
    "arm64" = "aarch64"
  }
}
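The name-assembly logic in those locals can be hard to eyeball. Here is a minimal shell sketch, not part of the module, that mirrors the same prefix + version + extension concatenation; the lookup tables are abbreviated copies of the HCL maps above and cover only the ce and ent editions:

```shell
#!/usr/bin/env bash
# Sketch of the artifact_name construction: prefix(distro, edition) +
# version(edition) + extension(arch, distro). Tables abbreviated to ce/ent.

artifact_name() {
  local distro="$1" edition="$2" version="$3" arch="$4"
  local prefix ver ext

  # artifact_prefix lookup
  case "$distro/$edition" in
    ubuntu/ce)  prefix="vault_" ;;
    ubuntu/ent) prefix="vault-enterprise_" ;;
    rhel/ce)    prefix="vault-" ;;
    rhel/ent)   prefix="vault-enterprise-" ;;
    *) return 1 ;;
  esac

  # artifact_version lookup: enterprise editions get a +ent metadata suffix
  case "$edition" in
    ce) ver="$version" ;;
    *)  ver="${version}+ent" ;;
  esac

  # artifact_extension lookup
  case "$arch/$distro" in
    amd64/ubuntu) ext="-1_amd64.deb" ;;
    arm64/ubuntu) ext="-1_arm64.deb" ;;
    amd64/rhel)   ext="-1.x86_64.rpm" ;;
    arm64/rhel)   ext="-1.aarch64.rpm" ;;
    *) return 1 ;;
  esac

  printf '%s%s%s\n' "$prefix" "$ver" "$ext"
}

artifact_name ubuntu ent 1.12.2 amd64 # vault-enterprise_1.12.2+ent-1_amd64.deb
artifact_name rhel ce 1.12.2 arm64    # vault-1.12.2-1.aarch64.rpm
```

The two sample invocations reproduce the file names shown in the module's own comment above.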

data "enos_artifactory_item" "vault_package" {
  username = var.artifactory_username
  token    = var.artifactory_token
  name     = local.artifact_name
  host     = var.artifactory_host
  repo     = var.distro == "rhel" ? "hashicorp-rpm-release-local*" : "hashicorp-apt-release-local*"
  path     = var.distro == "rhel" ? "RHEL/${var.distro_version}/${local.artifactory_repo_path_dir[var.arch]}/stable" : "pool/${var.arch}/main"
}
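The repo and path ternaries resolve to concrete Artifactory search locations. This small shell function is a hypothetical illustration, not part of the module, showing what each branch produces:

```shell
#!/usr/bin/env bash
# Sketch of the repo/path selection in the enos_artifactory_item lookup:
# RPMs live under a RHEL/<version>/<arch dir>/stable path in the rpm repo,
# while debs live under pool/<arch>/main in the apt repo.

repo_and_path() {
  local distro="$1" distro_version="$2" arch="$3"
  if [ "$distro" = "rhel" ]; then
    local dir
    case "$arch" in
      amd64) dir="x86_64" ;;
      arm64) dir="aarch64" ;;
      *) return 1 ;;
    esac
    echo "hashicorp-rpm-release-local* RHEL/$distro_version/$dir/stable"
  else
    echo "hashicorp-apt-release-local* pool/$arch/main"
  fi
}

repo_and_path rhel 9 amd64    # hashicorp-rpm-release-local* RHEL/9/x86_64/stable
repo_and_path ubuntu "" arm64 # hashicorp-apt-release-local* pool/arm64/main
```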

output "results" {
  value = data.enos_artifactory_item.vault_package.results
}

output "url" {
  value       = data.enos_artifactory_item.vault_package.results[0].url
  description = "The artifactory download url for the artifact"
}

output "sha256" {
  value       = data.enos_artifactory_item.vault_package.results[0].sha256
  description = "The sha256 checksum for the artifact"
}

output "size" {
  value       = data.enos_artifactory_item.vault_package.results[0].size
  description = "The size in bytes of the artifact"
}

output "name" {
  value       = data.enos_artifactory_item.vault_package.results[0].name
  description = "The name of the artifact"
}

output "release" {
  value = {
    url      = data.enos_artifactory_item.vault_package.results[0].url
    sha256   = data.enos_artifactory_item.vault_package.results[0].sha256
    username = var.artifactory_username
    token    = var.artifactory_token
  }
}
@@ -26,24 +26,11 @@ variable "artifactory_host" { default = null }
 variable "artifactory_repo" { default = null }
 variable "artifactory_username" { default = null }
 variable "artifactory_token" { default = null }
-variable "arch" {
-  default = null
-}
-variable "artifact_path" {
-  default = null
-}
-variable "artifact_type" {
-  default = null
-}
-variable "distro" {
-  default = null
-}
-variable "edition" {
-  default = null
-}
-variable "revision" {
-  default = null
-}
-variable "product_version" {
-  default = null
-}
+variable "arch" { default = null }
+variable "artifact_path" { default = null }
+variable "artifact_type" { default = null }
+variable "distro" { default = null }
+variable "distro_version" { default = null }
+variable "edition" { default = null }
+variable "revision" { default = null }
+variable "product_version" { default = null }
@@ -37,6 +37,7 @@ variable "artifact_path" {
 }
 variable "artifact_type" { default = null }
 variable "distro" { default = null }
+variable "distro_version" { default = null }
 variable "edition" { default = null }
 variable "revision" { default = null }
 variable "product_version" { default = null }
@@ -239,6 +239,30 @@ resource "enos_vault_unseal" "maybe_force_unseal" {
   }
 }

+# Add the vault install location to the PATH and set up VAULT_ADDR and VAULT_TOKEN environment
+# variables in the login shell so we don't have to do it if/when we log in to a cluster node.
+resource "enos_remote_exec" "configure_login_shell_profile" {
+  depends_on = [
+    enos_vault_init.leader,
+    enos_vault_unseal.leader,
+  ]
+  for_each = var.target_hosts
+
+  environment = {
+    VAULT_ADDR        = "http://127.0.0.1:8200"
+    VAULT_TOKEN       = enos_vault_init.leader[0].root_token
+    VAULT_INSTALL_DIR = var.install_dir
+  }
+
+  scripts = [abspath("${path.module}/scripts/set-up-login-shell-profile.sh")]
+
+  transport = {
+    ssh = {
+      host = each.value.public_ip
+    }
+  }
+}
+
 # We need to ensure that the directory used for audit logs is present and accessible to the vault
 # user on all nodes, since logging will only happen on the leader.
 resource "enos_remote_exec" "create_audit_log_dir" {
@@ -0,0 +1,51 @@
#!/usr/bin/env bash
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: BUSL-1.1

set -e

fail() {
  echo "$1" 1>&2
  exit 1
}

[[ -z "$VAULT_ADDR" ]] && fail "VAULT_ADDR env variable has not been set"
[[ -z "$VAULT_INSTALL_DIR" ]] && fail "VAULT_INSTALL_DIR env variable has not been set"
[[ -z "$VAULT_TOKEN" ]] && fail "VAULT_TOKEN env variable has not been set"

# Determine the profile file we should write to. We only want to affect login shells and bash will
# only read one of these, in order of precedence.
determineProfileFile() {
  if [ -f "$HOME/.bash_profile" ]; then
    printf "%s/.bash_profile\n" "$HOME"
    return 0
  fi

  if [ -f "$HOME/.bash_login" ]; then
    printf "%s/.bash_login\n" "$HOME"
    return 0
  fi

  printf "%s/.profile\n" "$HOME"
}

appendVaultProfileInformation() {
  tee -a "$1" <<< "export PATH=$PATH:$VAULT_INSTALL_DIR
export VAULT_ADDR=$VAULT_ADDR
export VAULT_TOKEN=$VAULT_TOKEN"
}

main() {
  local profile_file
  if ! profile_file=$(determineProfileFile); then
    fail "failed to determine login shell profile file location"
  fi

  if ! appendVaultProfileInformation "$profile_file"; then
    fail "failed to write vault configuration to login shell profile"
  fi

  exit 0
}

main
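To see what this script does without touching a real cluster node, here is a sandboxed sketch. It reproduces the profile-file selection and the appended exports against a throwaway HOME; the token and install dir are placeholder values, and bash plus mktemp are assumed to be available:

```shell
#!/usr/bin/env bash
# Sandboxed demo of set-up-login-shell-profile.sh: pick the login profile file
# bash will actually read, then append the Vault exports to it. Runs against a
# temp HOME so it never modifies your real shell profile.
set -e

export HOME="$(mktemp -d)"
VAULT_ADDR="http://127.0.0.1:8200"
VAULT_TOKEN="hvs.example"          # placeholder token for the demo
VAULT_INSTALL_DIR="/opt/vault/bin" # assumed install dir for the demo

determineProfileFile() {
  # bash reads only the first of these, in order, for login shells
  if [ -f "$HOME/.bash_profile" ]; then
    printf "%s/.bash_profile\n" "$HOME"
    return 0
  fi
  if [ -f "$HOME/.bash_login" ]; then
    printf "%s/.bash_login\n" "$HOME"
    return 0
  fi
  printf "%s/.profile\n" "$HOME"
}

# No profile files exist in the fresh HOME, so ~/.profile is chosen
profile_file="$(determineProfileFile)"

# Append the same three exports the real script writes
tee -a "$profile_file" > /dev/null <<< "export PATH=$PATH:$VAULT_INSTALL_DIR
export VAULT_ADDR=$VAULT_ADDR
export VAULT_TOKEN=$VAULT_TOKEN"

grep VAULT_ADDR "$profile_file" # export VAULT_ADDR=http://127.0.0.1:8200
```

If a `~/.bash_profile` had existed, `determineProfileFile` would have returned it instead, since bash ignores `~/.profile` whenever `~/.bash_profile` is present.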