* [QT-602] Run `proxy` and `agent` test scenarios (#23176)
Update our `proxy` and `agent` scenarios to support new variants and to
perform both baseline verification and their scenario-specific verification.
We integrate these updated scenarios into the pipeline by adding them
to artifact samples.
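For reference, here's a minimal sketch of how one of these scenarios might be driven locally with the Enos CLI. The variant names and values shown (`arch`, `backend`, `distro`) are illustrative and depend on the scenario matrix defined in the repo:
```shell
# Illustrative only: inspect the agent scenario matrix, then run a single
# variant combination. Variant names/values are examples, not authoritative.
enos scenario list agent
enos scenario run agent arch:amd64 backend:raft distro:ubuntu
```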
We've also improved the reliability of the `autopilot` and `replication`
scenarios by refactoring our IP address gathering. Previously, we'd ask
vault for the primary IP address and use some Terraform logic to determine
followers. The leader IP address gathering script was also implicitly
responsible for ensuring that the found leader was within a given group of
hosts (and therefore for waiting until a given cluster had elected a leader),
as well as for doing some arithmetic and emitting `replication`-specific
output data.
We've broken these responsibilities into individual modules, improved their
error messages, and fixed various races and bugs, including:
* Fix a race between creating the file audit device and installing and starting
vault in the `replication` scenario.
* Fix how we determine our leader and follower IP addresses. We now query
vault directly rather than relying on a prior implementation that inferred
the followers and sometimes did not allow every node to be an expected
leader. A sketch of this query-based approach follows this list.
* Fix a bug where we'd always fail on the first wrong condition
in the `vault_verify_performance_replication` module.
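As an illustration of the query-based approach, here's a minimal sketch (not the actual module script) that asks each node whether it is the active node via the unauthenticated `sys/leader` endpoint; the `HOSTS` variable, port, and output format are assumptions:
```shell
#!/usr/bin/env bash
# Sketch: classify hosts as leader or followers by querying Vault itself
# instead of inferring the topology with Terraform logic.
set -euo pipefail

# HOSTS is an assumption for this sketch: space-separated private IPs.
HOSTS=${HOSTS:-"10.0.1.10 10.0.1.11 10.0.1.12"}

leader=""
followers=()
for host in $HOSTS; do
  # sys/leader is unauthenticated and reports whether this node is active.
  if curl -sf "http://${host}:8200/v1/sys/leader" | jq -e '.is_self' > /dev/null; then
    leader="$host"
  else
    followers+=("$host")
  fi
done

[ -n "$leader" ] || { echo "no leader found among hosts" >&2; exit 1; }
echo "leader: $leader"
echo "followers: ${followers[*]:-}"
```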
We also performed some maintenance tasks on Enos scenarios by updating our
references from `oss` to `ce` to handle the naming and license changes. We
also enabled `shellcheck` linting for enos module scripts.
* Rename `oss` to `ce` for license and naming changes.
* Convert templated enos scripts to scripts that take environment
variables (see the sketch after this list).
* Add `shellcheck` linting for enos module scripts.
* Add additional `backend` and `seal` support to `proxy` and `agent`
scenarios.
* Update scenarios to include all baseline verification.
* Add `proxy` and `agent` scenarios to artifact samples.
* Remove IP address verification from the `vault_get_cluster_ips`
modules and implement a new `vault_wait_for_leader` module.
* Determine follower IP addresses by querying vault in the
`vault_get_cluster_ips` module.
* Move replication-specific behavior out of the `vault_get_cluster_ips`
module and into its own `replication_data` module.
* Extend initial version support for the `upgrade` and `autopilot`
scenarios.
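To illustrate the template-to-environment-variable conversion, here's a hypothetical shellcheck-clean module script that reads its inputs from environment variables and fails fast when they're missing; the variable names and the verification it performs are examples, not the actual module scripts:
```shell
#!/usr/bin/env bash
# Hypothetical example of an enos module script that takes its inputs from
# environment variables (set by the Enos/Terraform caller) instead of being
# rendered from a template.
set -euo pipefail

fail() {
  echo "$1" >&2
  exit 1
}

[[ -n "${VAULT_ADDR:-}" ]] || fail "VAULT_ADDR env variable has not been set"
[[ -n "${VAULT_TOKEN:-}" ]] || fail "VAULT_TOKEN env variable has not been set"

# Example check: make sure the node is initialized and unsealed.
vault status -format=json | jq -e '.initialized == true and .sealed == false' > /dev/null \
  || fail "vault at ${VAULT_ADDR} is not initialized and unsealed"
```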
We also discovered an issue with undo_logs that is described in
VAULT-20259. As such, we've disabled the undo_logs check until
it has been fixed.
* actions: fix actionlint error and linting logic (#23305)
* enos: don't attempt to use the vault proxy command before 1.14
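A hedged sketch of the kind of version gate used to skip the proxy verification on older artifacts; the `VAULT_VERSION` variable and config path are illustrative:
```shell
#!/usr/bin/env bash
# Skip `vault proxy` verification when the artifact under test predates 1.14,
# since the proxy subcommand does not exist before that release.
set -euo pipefail

# VAULT_VERSION is an assumption for this sketch, e.g. "1.13.5".
version=${VAULT_VERSION:?VAULT_VERSION env variable has not been set}

# If 1.14.0 sorts first (or equal), then $version is >= 1.14.0.
if [[ "$(printf '%s\n' "1.14.0" "$version" | sort -V | head -n1)" != "1.14.0" ]]; then
  echo "skipping vault proxy verification: ${version} is older than 1.14.0"
  exit 0
fi

# Config path is illustrative only.
vault proxy -config=/etc/vault.d/proxy.hcl &
```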
---------
Signed-off-by: Ryan Cragun <me@ryan.ec>
Rather than assuming a short sleep will work, we instead wait until netcat is listening on the socket. We've also configured the netcat listener to persist after the first connection, which allows both Vault and us to check the connection without the process closing.
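A minimal sketch of that wait-for-listener pattern, assuming an OpenBSD-style netcat and a TCP listener (the address and port are illustrative; the real scenario may differ):
```shell
#!/usr/bin/env bash
# Start a persistent netcat listener and wait until it is actually accepting
# connections instead of sleeping for an arbitrary amount of time.
set -euo pipefail

port=${LISTENER_PORT:-9090} # illustrative port, not the scenario's real value

# -l listens, -k keeps the listener alive after the first connection so both
# this script and Vault can connect without the process exiting.
nc -lk 127.0.0.1 "$port" &

# Poll with -z (connect without sending data) until the listener is up.
until nc -z 127.0.0.1 "$port"; do
  sleep 1
done
```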
As we implemented this we also ran into AWS issues in us-east-1 and us-west-2, so we've changed our deploy regions until those issues are resolved.
Signed-off-by: Ryan Cragun <me@ryan.ec>
Begin the process of migrating away from the "strongly encouraged not to
use"[0] EC2 spot fleet API to the more modern `ec2:CreateFleet`.
Unfortunately, the `instant` type fleet does not guarantee fulfillment
with either on-demand or spot types. We'll need to add a feature similar
to `wait_for_fulfillment` on the `spot_fleet_request` resource[1] to
`ec2_fleet` before we can rely on it.
We also update the existing target fleets to support provisioning generic
targets. This has allowed us to remove our usage of `terraform-enos-aws-consul`
and replace it with a smaller `backend_consul` module in-repo.
We also remove `terraform-enos-aws-infra` and replace it with two smaller
in-repo modules, `ec2_info` and `create_vpc`. This has allowed us to simplify
the vpc resources we use for each scenario, which in turn lets us avoid
relying on flaky resources.
As part of this refactor we've also made it possible to provision
targets using different distro versions.
[0] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-best-practices.html#which-spot-request-method-to-use
[1] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/spot_fleet_request#wait_for_fulfillment
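To make that fulfillment gap concrete, here's a hedged sketch of what a `wait_for_fulfillment`-style check could look like against a fleet created with `ec2:CreateFleet`, using the AWS CLI rather than the Terraform resource; the fleet id handling, timeout, and target count are placeholders:
```shell
#!/usr/bin/env bash
# Sketch: poll an EC2 fleet until the requested capacity is fulfilled, which
# is roughly what a `wait_for_fulfillment` option on `ec2_fleet` would need
# to do. The fleet id and target count are placeholders.
set -euo pipefail

fleet_id=${FLEET_ID:?FLEET_ID env variable has not been set}
target_count=${TARGET_COUNT:-3}

for _ in $(seq 1 60); do
  fulfilled=$(aws ec2 describe-fleet-instances --fleet-id "$fleet_id" \
    --query 'length(ActiveInstances)' --output text)
  if [[ "$fulfilled" -ge "$target_count" ]]; then
    echo "fleet ${fleet_id} fulfilled with ${fulfilled} instances"
    exit 0
  fi
  sleep 5
done

echo "timed out waiting for fleet ${fleet_id} to be fulfilled" >&2
exit 1
```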
* enos/consul: add `backend_consul` module that accepts target hosts.
* enos/target_ec2_spot_fleet: add support for consul networking.
* enos/target_ec2_spot_fleet: add support for customizing cluster tag
key.
* enos/scenarios: create `target_ec2_fleet` which uses a more modern
`ec2_fleet` API.
* enos/create_vpc: replace `terraform-enos-aws-infra` with smaller and
simplified version. Flatten the networking to a single route on the
default route table and a single subnet.
* enos/ec2_info: add a new module to give us useful ec2 information
including AMI IDs for various arch/distro/version combinations.
* enos/ci: update service user role to allow for managing ec2 fleets.
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Ryan Cragun <me@ryan.ec>
* [QT-525] enos: use spot instances for Vault targets (#20037)
The previous strategy for provisioning infrastructure targets was to use
the cheapest instances that could reliably perform as Vault cluster
nodes. With this change we introduce a new model for target node
infrastructure. We've replaced on-demand instances with a spot fleet.
While the spot price fluctuates based on dynamic pricing, capacity,
region, instance type, and platform, cost savings for our most common
combinations range from 20% to 70%.
This change only includes spot fleet targets for Vault clusters.
We'll be updating our Consul backend bidding in another PR.
* Create a new `vault_cluster` module that handles installing,
configuring, initializing, and unsealing Vault clusters (a sketch of the
init/unseal flow follows this list).
* Create a `target_ec2_instances` module that can provision a group of
instances on-demand.
* Create a `target_ec2_spot_fleet` module that can bid on a fleet of
spot instances.
* Extend every Enos scenario to utilize the spot fleet target acquisition
strategy and the `vault_cluster` module.
* Update our Enos CI modules to handle both the `aws-nuke` permissions
and the privileges required to provision spot fleets.
* Only use us-east-1 and us-west-2 in our scenario matrices, as costs are
lower there than in us-west-1.
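As noted above, here's a minimal sketch of the initialize-and-unseal flow the `vault_cluster` module is responsible for, using a single key share for brevity; real scenarios may use more shares or auto-unseal:
```shell
#!/usr/bin/env bash
# Sketch of the init/unseal steps a cluster module has to orchestrate.
# A single key share/threshold is used for brevity; auto-unseal paths differ.
set -euo pipefail

export VAULT_ADDR=${VAULT_ADDR:-http://127.0.0.1:8200}

# Initialize the first node and capture the unseal key and root token.
init_json=$(vault operator init -key-shares=1 -key-threshold=1 -format=json)
unseal_key=$(jq -r '.unseal_keys_b64[0]' <<< "$init_json")
VAULT_TOKEN=$(jq -r '.root_token' <<< "$init_json")
export VAULT_TOKEN

vault operator unseal "$unseal_key" > /dev/null

# Confirm the node is unsealed before the scenario continues.
vault status -format=json | jq -e '.sealed == false' > /dev/null
echo "vault is initialized and unsealed"
```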
Signed-off-by: Ryan Cragun <me@ryan.ec>
* [QT-530] enos: allow-list all public IP addresses (#20304)
The security groups that allow access to remote machines in Enos
scenarios have been configured to only allow port 22 (SSH) from the
public IP address of the machine executing the Enos scenario. To achieve
this we previously utilized the `enos_environment.public_ip_address`
attribute. Sometime in mid-March we started seeing sporadic SSH i/o
timeout errors when attempting to execute Enos resources against SSH
transport targets. We've only ever seen this when communicating from
Azure hosted runners to AWS hosted machines.
While testing we were able to confirm that in some cases the public IP
address resolved using DNS over UDP4 to Google and OpenDNS name servers
did not match what was resolved when using the HTTPS/TCP IP address
service hosted by AWS. The Enos data source was implemented in a way
that we'd attempt resolution against a single name server and only fall
back to the next one if the previous name server could not get a result.
We'd then allow-list that single IP address. That's a problem if we can
resolve two different public IP addresses depending on our endpoint address.
This change utilizes the new `enos_environment.public_ip_addresses`
attribute and subsequent behavior change. Now the data source will
attempt to resolve our public IP address via name servers hosted by
Google, OpenDNS, Cloudflare, and AWS. We then return a unique set of
these IP addresses and allow-list all of them in our security group. It
is our hope that this resolves the i/o timeout errors, which appear to be
caused by the security group black-holing our attempted access because
the IP address we resolved does not match the one we're actually exiting
with.
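To illustrate the multi-resolver approach in shell terms (the Enos data source implements this in Go; the resolver endpoints below are well-known services, but the exact set is illustrative):
```shell
#!/usr/bin/env bash
# Resolve our public IP address from several independent services and emit
# the unique set, mirroring enos_environment.public_ip_addresses behavior.
set -euo pipefail

ips=$(
  dig +short myip.opendns.com @resolver1.opendns.com || true                 # OpenDNS (A)
  dig +short TXT o-o.myaddr.l.google.com @ns1.google.com | tr -d '"' || true # Google (TXT)
  dig +short CH TXT whoami.cloudflare @1.1.1.1 | tr -d '"' || true           # Cloudflare (TXT)
  curl -sf https://checkip.amazonaws.com || true                             # AWS (HTTPS)
)

# Allow-list every unique address we resolved, not just the first one.
echo "$ips" | sort -u
```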
Signed-off-by: Ryan Cragun <me@ryan.ec>
---------
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Ryan Cragun <me@ryan.ec>