Allow Multiple Issuers in PKI Secret Engine Mounts - PKI Pod (#15277)

* Starter PKI CA Storage API (#14796)

* Simple starting PKI storage api for CA rotation
* Add key and issuer storage apis
* Add listKeys and listIssuers storage implementations
* Add simple keys and issuers configuration storage api methods

* Handle resolving key, issuer references

The API context will usually have a user-specified reference to the key.
This is either the literal string "default" to select the default key,
an identifier of the key, or a slug name for the key. Here, we wish to
resolve this reference to an actual identifier that can be understood by
storage.
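
As a rough sketch of that resolution, assuming a stand-in keyInfo type and a
plain-string identifier (the PR's actual key entry type and storage helpers
differ):

    package pki

    import (
    	"errors"
    	"fmt"
    )

    // keyInfo is a local stand-in for the PR's key entry type.
    type keyInfo struct {
    	ID   string
    	Name string
    }

    // resolveKeyReference turns a user-supplied reference -- the literal
    // "default", a key ID, or a key name -- into a concrete key ID.
    // defaultKeyId is whatever the keys configuration entry points at.
    func resolveKeyReference(reference, defaultKeyId string, keys []keyInfo) (string, error) {
    	if reference == "default" {
    		if defaultKeyId == "" {
    			return "", errors.New("no default key is currently configured")
    		}
    		return defaultKeyId, nil
    	}
    	for _, k := range keys {
    		if k.ID == reference || k.Name == reference {
    			return k.ID, nil
    		}
    	}
    	return "", fmt.Errorf("unable to resolve key reference: %v", reference)
    }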

Also adds the missing Name field to keys.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add method to fetch an issuer's cert bundle

This adds a method to construct a certutil.CertBundle from the specified
issuer identifier, optionally loading its corresponding key for signing.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Refactor certutil PrivateKey PEM handling

This refactors the parsing of PrivateKeys from PEM blobs into shared
methods (ParsePEMKey, ParseDERKey) that can be reused by the existing
Bundle parsing logic (ParsePEMBundle) or independently in the new
issuers/key-based PKI storage code.
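
A hedged sketch of that split, with simplified signatures (the real certutil
helpers return richer types and cover more encodings):

    package certutil

    import (
    	"crypto"
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    )

    // ParsePEMKey strips the PEM wrapper and defers to ParseDERKey.
    func ParsePEMKey(keyPem string) (crypto.Signer, error) {
    	block, _ := pem.Decode([]byte(keyPem))
    	if block == nil {
    		return nil, errors.New("no PEM block found in private key data")
    	}
    	return ParseDERKey(block.Bytes)
    }

    // ParseDERKey tries the common DER encodings in turn.
    func ParseDERKey(der []byte) (crypto.Signer, error) {
    	if key, err := x509.ParseECPrivateKey(der); err == nil {
    		return key, nil
    	}
    	if key, err := x509.ParsePKCS1PrivateKey(der); err == nil {
    		return key, nil
    	}
    	if key, err := x509.ParsePKCS8PrivateKey(der); err == nil {
    		if signer, ok := key.(crypto.Signer); ok {
    			return signer, nil
    		}
    	}
    	return nil, errors.New("unsupported private key encoding")
    }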

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add importKey, importCert to PKI storage

importKey is generally preferable to the low-level writeKey for adding
new entries. This takes only the contents of the private key (as a
string -- so a PEM bundle or a managed key handle) and checks if it
already exists in the storage.

If it does, it returns the existing key instance.

Otherwise, we create a new one. In the process, we detect any issuers
using this key and link them back to the new key entry.

The same preference holds for importCert (over the low-level issuer
write), with the note that keys are not modified when importing
certificates.
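
An illustrative outline of the import flow, operating on an in-memory slice
instead of the storage layer; the real code compares parsed public keys (or
managed key handles) rather than raw strings, and also re-links issuers:

    package pki

    import (
    	"crypto/rand"
    	"encoding/hex"
    )

    // storedKey is a local stand-in for the PR's key entry type.
    type storedKey struct {
    	ID         string
    	Name       string
    	PrivateKey string // PEM bundle or managed key handle
    }

    // importKeySketch returns the existing entry when the key material is
    // already known (existing == true), otherwise it creates a new entry. The
    // real import code additionally scans known issuers and links any that use
    // this key back to the freshly created entry.
    func importKeySketch(keys []storedKey, keyValue, name string) (updated []storedKey, entry storedKey, existing bool) {
    	for _, k := range keys {
    		if k.PrivateKey == keyValue { // real code compares public keys
    			return keys, k, true
    		}
    	}

    	idBytes := make([]byte, 16)
    	rand.Read(idBytes)
    	entry = storedKey{ID: hex.EncodeToString(idBytes), Name: name, PrivateKey: keyValue}
    	return append(keys, entry), entry, false
    }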

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add tests for importing issuers, keys

This adds tests for importing keys and issuers into the new storage
layout, ensuring that identifiers are correctly inferred and linked.

Note that directly writing entries to storage (writeKey/writeIssuer)
will take KeyID links from the parent entry and should not be used for
import; only existing entries should be updated with this info.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Implement PKI storage migration.

 - Hook into the backend's initialize function, calling the migration on a primary only.
 - Migrate an existing certificate bundle to the new issuers and key layout

* Make fetchCAInfo aware of new storage layout

This allows fetchCAInfo to fetch a specified issuer, via a reference
parameter provided by the user. We pass that into the storage layer and
have it return a cert bundle for us. Finally, we need to validate that
it truly has the key desired.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Begin /issuers API endpoints

This implements the fetch operations around issuers in the PKI Secrets
Engine. We implement the following operations:

 - LIST /issuers - returns a list of known issuers' IDs and names.
 - GET /issuer/:ref - returns a JSON blob with information about this
   issuer.
 - POST /issuer/:ref - allows configuring information about issuers,
   presently just its name.
 - DELETE /issuer/:ref - allows deleting the specified issuer.
 - GET /issuer/:ref/{der,pem} - returns a raw API response with just
   the DER (or PEM) of the issuer's certificate.
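
As a quick orientation, a hedged usage sketch with the Vault Go client; the
mount path "pki", the issuer name "my-root", and the renamed value are
examples only, and issuer_name follows the argument naming settled on later
in this PR:

    package main

    import (
    	"fmt"
    	"log"

    	"github.com/hashicorp/vault/api"
    )

    func main() {
    	client, err := api.NewClient(api.DefaultConfig())
    	if err != nil {
    		log.Fatal(err)
    	}

    	// LIST /issuers: IDs and names of all known issuers.
    	issuers, err := client.Logical().List("pki/issuers")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(issuers)

    	// GET /issuer/:ref: details for a single issuer.
    	issuer, err := client.Logical().Read("pki/issuer/my-root")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(issuer)

    	// POST /issuer/:ref: presently only the name is configurable.
    	if _, err := client.Logical().Write("pki/issuer/my-root", map[string]interface{}{
    		"issuer_name": "my-renamed-root",
    	}); err != nil {
    		log.Fatal(err)
    	}
    }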

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add import to PKI Issuers API

This adds the two core import code paths to the API:
/issuers/import/cert and /issuers/import/bundle. The latter additionally
allows the import of keys. This lets operators restrict key imports to
privileged roles, while granting more operators permission to import
additional certificates (not used for signing, but instead for
path/chain building).

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add /issuer/:ref/sign-intermediate endpoint

This endpoint allows existing issuers to be used to sign intermediate
CA certificates. In the process, we've updated the existing
/root/sign-intermediate endpoint to be equivalent to a call to
/issuer/default/sign-intermediate.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add /issuer/:ref/sign-self-issued endpoint

This endpoint allows existing issuers to be used to sign self-signed
certificates. In the process, we've updated the existing
/root/sign-self-issued endpoint to be equivalent to a call to
/issuer/default/sign-self-issued.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add /issuer/:ref/sign-verbatim endpoint

This endpoint allows existing issuers to be used to directly sign CSRs.
In the process, we've updated the existing /sign-verbatim endpoint to be
equivalent to a call to /issuer/:ref/sign-verbatim.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Allow configuration of default issuers

Using the new updateDefaultIssuerId(...) from the storage migration PR
allows for easy implementation of configuring the default issuer. We
restrict callers from setting blank defaults and setting default to
default.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Fix fetching default issuers

After setting a default issuer, one should be able to use the old /ca,
/ca_chain, and /cert/{ca,ca_chain} endpoints to fetch the default issuer
(and its chain). Update the fetchCertBySerial helper to no longer
support fetching the ca and prefer fetchCAInfo for that instead (as
we've already updated that to support fetching the new issuer location).

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add /issuer/:ref/{sign,issue}/:role

This updates the /sign and /issue endpoints, allowing them to take the
default issuer (if none is provided by a role) and adding
issuer-specific versions of them.

Note that at this point in time, the behavior isn't yet ideal (as
/sign/:role allows adding the ref=... parameter to override the default
issuer); a later change adding role-based issuer specification will fix
this incorrect behavior.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add support for root issuer generation

* Add support for the issuer generate-intermediate endpoint

* Update issuer and key arguments to consistent values

 - Update all new API endpoints to use the newly agreed-upon argument names.
   - issuer_ref & key_ref to refer to existing entries
   - issuer_name & key_name for new definitions
 - Update returned values to always use issuer_id and key_id

* Add utility methods to fetch common ref and name arguments

 - Add utility methods to fetch the issuer_name, issuer_ref, key_name and key_ref arguments from data fields.
 - Centralize the logic to clean up these inputs and apply various validations to all of them.

* Rename common PKI backend handlers

 - Use the buildPath convention for the function name instead of common...

* Move setting PKI defaults from writeCaBundle to proper import{keys,issuer} methods

 - PR feedback, move setting up the default configuration references within
   the import methods instead of within the writeCaBundle method. This should
   now cover all use cases of us setting up the defaults properly.

* Introduce constants for issuer_ref, rename isKeyDefaultSet...

* Fix legacy PKI sign-verbatim api path

 - Addresses some test failures due to an incorrect refactoring of a legacy api
   path /sign-verbatim within PKI

* Use import code to handle intermediate, config/ca

The existing bundle import code will satisfy the intermediate import;
use it instead of the old ca_bundle import logic. Additionally, update
/config/ca to use the new import code as well.

While testing, a panic was discovered:

> reflect.Value.SetMapIndex: value of type string is not assignable to type pki.keyId

This was caused by returning a map with type issuerId->keyId; instead
switch to returning string->string maps so the audit log can properly
HMAC them.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Clarify error message on missing defaults

When the default issuer and key are missing (and haven't yet been
specified), we should clarify that error message.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Update test semantics for new changes

This makes two minor changes to the existing test suite:

 1. Importing partial bundles should now succeed, where they'd
    previously error.
 2. fetchCertBySerial no longer handles CA certificates.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add support for deleting all keys, issuers

The old DELETE /root code must now delete all keys and issuers for
backwards compatibility. We strongly suggest calling individual delete
methods (DELETE /key/:key_ref or DELETE /issuer/:issuer_ref) instead,
for finer control.

In the process, we detect whether the deleted key/issuer was set as the
default. This will allow us to warn (from the single key/issuer deletion
code) whether or not the default was deleted (while allowing the
operation to succeed).

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Introduce defaultRef constant within PKI

 - Replace hardcoded "default" references with a constant to easily identify various usages.
 - Use the addIssuerRefField function instead of redefining the field in various locations.

* Rework PKI test TestBackend_Root_Idempotency

 - Validate that generate/root calls are no longer idempotent, but that bundle importing
   does not generate new keys/issuers
 - As before, make sure that the delete root API resets everything
 - Address a bug within the storage layer where we errored out when multiple different
   key types were present within storage.

* Assign Name=current to migrated key and issuer

 - A detail missed from the RFC: assign the Name field as "current" to the migrated key and issuer.

* Build CRL upon PKI intermediate set-signed API call

 - Add a call to buildCRL if we created an issuer within pathImportIssuers
 - Augment the existing FullCAChain test to verify we have a proper CRL after the set-signed API call
 - Remove a code block writing out the "ca" storage entry that is no longer used.

* Identify which certificate or key failed

When importing complex chains, we should identify in which certificate
or key the failure occurred.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* PKI migration writes out empty migration log entry

 - Since the struct's fields were not exported, we serialized an empty
   migration log to disk and would re-run the migration

* Add chain-building logic to PKI issuers path

With the one-entry-per-issuer approach, CA Chains become implicitly
constructed from the pool of issuers. This roughly matches the existing
expectations from /config/ca (wherein a chain could be provided) and
/intermediate/set-signed (where a chain may be provided). However, in
both of those cases, we simply accepted a chain. Here, we need to be
able to reconstruct the chain from parts on disk.

However, with potential rotation of roots, we need to be aware of
disparate chains. Simply concatenating together all issuers isn't
sufficient. Thus we need to be able to parse a certificate's Issuer and
Subject fields and reconstruct valid (and potentially parallel)
parent<->child mappings.

This attempts to handle roots, intermediates, cross-signed
intermediates, cross-signed roots, and rotated keys (wherein one might
not have a valid signature due to changed key material with the same
subject).
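
A simplified sketch of that reconstruction step; cross-signing fallbacks and
rotated-key handling (same subject, different key material), which the real
code attempts to handle, are elided here:

    package pki

    import (
    	"bytes"
    	"crypto/x509"
    )

    // buildParentLinks returns, for each issuer index, the indexes of issuers
    // that could be its parent. Multiple parents are possible (cross-signing),
    // and a certificate with no verified parent is treated as a root.
    func buildParentLinks(issuers []*x509.Certificate) map[int][]int {
    	parents := make(map[int][]int)
    	for ci, child := range issuers {
    		for pi, parent := range issuers {
    			if ci == pi {
    				continue
    			}
    			if !bytes.Equal(child.RawIssuer, parent.RawSubject) {
    				continue
    			}
    			// Signature verification filters out same-subject parents
    			// whose key material has rotated; the PR handles those cases
    			// separately.
    			if err := child.CheckSignatureFrom(parent); err == nil {
    				parents[ci] = append(parents[ci], pi)
    			}
    		}
    	}
    	return parents
    }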

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Return CA Chain when fetching issuers

This returns the CA Chain attribute of an issuer, showing its computed
chain based on other issuers in the database, when fetching a specific
issuer.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add testing for chain building

Using the issuance infrastructure, we generate new certificates (either
roots or intermediates), positing that this is roughly equivalent to
importing an external bundle (minus error handling during partial
imports). This allows us to incrementally construct complex chains,
creating reissuance cliques and cross-signing cycles.

By using ECDSA certificates, we avoid high signature verification and
key generation times.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Allow manual construction of issuer chain

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Fix handling of duplicate names

With the new issuer field (manual_chain), we can no longer err when a
name already exists: we might be updating the existing issuer (with the
same name), but changing its manual_chain field. Detect this error and
correctly handle it.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add tests for manual chain building

We break the clique, instead building these chains manually, ensuring
that the remaining chains do not change and only the modified certs
change. We then reset them (back to implicit chain building) and ensure
we get the same results as earlier.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add stricter verification of issuers PEM format

This ensures each issuer is only a single certificate entry (as
validated by count and parsing) without any trailing data.

We further ensure that each certificate PEM has leading and trailing
spaces removed with only a single trailing new line remaining.
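
A sketch of that per-issuer validation, with an illustrative helper name:

    package pki

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"strings"
    )

    // normalizeIssuerPem enforces exactly one parseable CERTIFICATE block with
    // no trailing data, returning the trimmed PEM with a single trailing newline.
    func normalizeIssuerPem(certPem string) (string, error) {
    	trimmed := strings.TrimSpace(certPem)

    	block, rest := pem.Decode([]byte(trimmed))
    	if block == nil || block.Type != "CERTIFICATE" {
    		return "", errors.New("issuer PEM did not contain a certificate block")
    	}
    	if len(strings.TrimSpace(string(rest))) > 0 {
    		return "", errors.New("issuer PEM contained trailing data after the certificate")
    	}
    	if _, err := x509.ParseCertificate(block.Bytes); err != nil {
    		return "", fmt.Errorf("issuer certificate failed to parse: %w", err)
    	}

    	return trimmed + "\n", nil
    }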

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Fix full chain building

Don't set the legacy IssuingCA field on the certificate bundle, as we
prefer the CAChain field over it.

Additionally, building the full chain could result in duplicate
certificates when the CAChain included the leaf certificate itself. When
building the full chain, ensure we don't include the bundle's
certificate twice.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add stricter tests for full chain construction

We wish to ensure that each desired certificate in the chain is only
present once.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Rename PKI types to avoid constant variable name collisions

 keyId -> keyID
 issuerId -> issuerID
 key -> keyEntry
 issuer -> issuerEntry
 keyConfig -> keyConfigEntry
 issuerConfig -> issuerConfigEntry

* Update CRL handling for multiple issuers

When building CRLs, we have to make sure certs issued by an issuer
end up on that issuer's CRL and not some other CRL. If no issuer is
found matching a cert, we'll place it on the default issuer's CRL.
However, in the event of equivalent issuers (those with the same subject
AND the same key material) -- perhaps due to reissuance -- we'll only
create a single (unified) CRL for them.
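
A minimal sketch of that mapping step, with the folding of equivalent issuers
into one unified CRL elided:

    package pki

    import "crypto/x509"

    // assignRevokedToIssuers groups revoked certificates under the issuer that
    // signed them; anything that matches no known issuer lands on the default
    // issuer's CRL. Issuer IDs are plain strings for this sketch.
    func assignRevokedToIssuers(revoked []*x509.Certificate, issuers map[string]*x509.Certificate, defaultIssuer string) map[string][]*x509.Certificate {
    	crlSets := make(map[string][]*x509.Certificate)
    	for _, cert := range revoked {
    		target := defaultIssuer
    		for id, issuerCert := range issuers {
    			if err := cert.CheckSignatureFrom(issuerCert); err == nil {
    				target = id
    				break
    			}
    		}
    		crlSets[target] = append(crlSets[target], cert)
    	}
    	return crlSets
    }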

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Allow fetching updated CRL locations

This updates fetchCertBySerial to support querying the default issuer's
CRL.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Remove legacy CRL storage location test case

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Update to CRLv2 Format to copy RawIssuer

When using the older Certificate.CreateCRL(...) call, Go's x509 library
copies the parsed pkix.Name version of the CRL Issuer's Subject field.
For certain constructed CAs, this fails since pkix.Name is not suitable
for round-tripping. That older call also only builds a CRL in the v1
format (per RFC 5280).

In updating to the newer x509.CreateRevocationList(...) call, we can
construct the CRL in the CRLv2 format and correctly copy the issuer's
name. However, this requires holding an additional field per-CRL, the
CRLNumber field, which is required in Go's implementation of CRLv2
(though OPTIONAL in the spec). We store this on the new
LocalCRLConfigEntry object, per-CRL.
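
A hedged sketch of the newer call (the surrounding buildCRL plumbing and the
LocalCRLConfigEntry handling are the PR's and not shown here):

    package pki

    import (
    	"crypto"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"time"
    )

    // buildCRLv2Sketch shows the x509.CreateRevocationList call that replaces
    // Certificate.CreateCRL. The issuer's RawSubject is copied verbatim, and
    // the Number field (stored per-CRL by this PR) is mandatory in Go's CRLv2
    // implementation.
    func buildCRLv2Sketch(issuerCert *x509.Certificate, issuerKey crypto.Signer,
    	revoked []pkix.RevokedCertificate, crlNumber *big.Int, lifetime time.Duration) ([]byte, error) {
    	template := &x509.RevocationList{
    		RevokedCertificates: revoked,
    		Number:              crlNumber,
    		ThisUpdate:          time.Now(),
    		NextUpdate:          time.Now().Add(lifetime),
    	}
    	return x509.CreateRevocationList(rand.Reader, template, issuerCert, issuerKey)
    }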

Co-authored-by: Alexander Scheel <alex.scheel@hashicorp.com>
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add comment regarding CRL non-assignment in GOTO

In previous versions of Vault, it was possible to sign an empty CRL
(when the CRL was disabled and a force-rebuild was requested). Add a
comment about this case.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Allow fetching the specified issuer's CRL

We add a new API endpoint to fetch the specified issuer's CRL directly
(rather than the default issuer's CRL at /crl and /certs/crl). We also
add a new test to validate the CRL in a multi-root scenario and ensure
it is signed with the correct keys.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add new PKI key prefix to seal wrapped storage (#15126)

* Refactor common backend initialization within backend_test

 - Leverage an existing helper method within the PKI backend tests to setup a PKI backend with storage.

* Add ability to read legacy cert bundle if the migration has not occurred on secondaries.

 - Track the migration state, forbidding issuer/key write API calls if we have not migrated
 - For operations that just need to read the CA bundle, use the same tracking variable to
   switch between reading the legacy bundle and the new key/issuer storage.
 - Add an invalidation function that will listen for updates to our log path to refresh the state
   on secondary clusters.

* Always write migration entry to trigger secondary clusters to wake up

 - Some PR feedback and handle a case in which the primary cluster does
   not have a CA bundle within storage but somehow a secondary does.

* Update CA Chain to report entire chain

This merges the ca_chain JSON field (of the /certs/ca_chain path) with
the regular certificate field, always returning the root of trust. This
affects the non-JSON (raw) endpoints as well.

We return the default issuer's chain here, rather than all known issuers
(as that may not form a strict chain).

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Allow explicit issuer override on roles

When a role is used to generate a certificate (such as with the sign/
and issue/ legacy paths or the legacy sign-verbatim/ paths), we prefer
that issuer to the one on the request. This allows operators to set an
issuer (other than default) for requests to be issued against,
effectively making the change no different from the users' perspective
as it is "just" a different role name.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add tests for role-based issuer selection

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Expand NotAfter limit enforcement behavior

Vault previously strictly enforced NotAfter/ttl values on certificate
requests, erring if the requested TTL extended past the NotAfter date of
the issuer. In the event of issuing an intermediate, this behavior was
ignored, instead permitting the issuance.

Users generally do not think to check their issuer's NotAfter date when
requesting a certificate; thus this behavior was generally surprising.

Per RFC 5280 however, issuers need to maintain status information
throughout the life cycle of the issued cert. If this leaf cert were to
be issued for a longer duration than the parent issuer, the CA must
still maintain revocation information past its expiration.

Thus, we add an option to the issuer to change the desired behavior:

 - err, to err out,
 - permit, to permit the longer NotAfter date, or
 - truncate, to silently truncate the expiration to the issuer's
   NotAfter date.

Since the expiration of certificates in the system's trust store is not
generally validated (when validating an arbitrary leaf, e.g., during TLS
validation), permit should generally only be used in that case. However,
browsers usually validate intermediates' validity periods, and thus
truncate should likely be used otherwise (as with permit, the leaf's
chain will not validate towards the end of the issuance period).
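
A self-contained sketch of the three behaviors; the constant and function
names are local to the example, not the PR's:

    package pki

    import (
    	"fmt"
    	"time"
    )

    type notAfterBehavior int

    const (
    	errBehavior      notAfterBehavior = iota // refuse the request
    	permitBehavior                           // allow the longer NotAfter
    	truncateBehavior                         // clamp to the issuer's NotAfter
    )

    // applyIssuerNotAfter enforces the configured behavior when the requested
    // NotAfter extends past the issuer's own NotAfter.
    func applyIssuerNotAfter(behavior notAfterBehavior, requested, issuerNotAfter time.Time) (time.Time, error) {
    	if !requested.After(issuerNotAfter) {
    		return requested, nil
    	}
    	switch behavior {
    	case errBehavior:
    		return time.Time{}, fmt.Errorf("cannot satisfy request: NotAfter %v is past the issuer's NotAfter %v", requested, issuerNotAfter)
    	case permitBehavior:
    		// The CA must keep revocation information past its own expiration.
    		return requested, nil
    	case truncateBehavior:
    		return issuerNotAfter, nil
    	}
    	return time.Time{}, fmt.Errorf("unknown behavior: %d", behavior)
    }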

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add tests for expanded issuance behaviors

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add warning on keyless default issuer (#15178)

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Update PKI to new Operations framework (#15180)

The backend Framework has updated Callbacks (used extensively in PKI) to
become deprecated; Operations takes their place and clarifies forwarding
of requests.

We switch to the new format everywhere, updating some bad assumptions
about forwarding along the way. Anywhere writes are handled (that should
be propagated to all nodes in all clusters), we choose to forward the
request all the way up to the performance primary cluster's primary
node. This holds for issuers/keys, roles, and configs (such as CRL
config, which is globally set for all clusters despite all clusters
having their own separate CRL).

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Kitography/vault 5474 rebase (#15150)

* These parts work (put in signature so that backend wouldn't break, but missing fields, desc, etc.)

* Import and Generate API calls w/ needed additions to SDK.

* make fmt

* Add Help/Sync Text, fix some of internal/exported/kms code.

* Fix PEM/DER Encoding issue.

* make fmt

* Standardize keyIdParam, keyNameParam, keyTypeParam

* Add error response if key to be deleted is in use.

* replaces all instances of "default" in code with defaultRef

* Updates from Callbacks to Operations Function with explicit forwarding.

* Fixes a panic with names not being updated everywhere.

* add a logged error in addition to warning on deleting default key.

* Normalize whitespace upon importing keys.

Authored-by: Alexander Scheel <alexander.m.scheel@gmail.com>

* Fix isKeyInUse functionality.

* Fixes tests associated with newline at end of key pem.

* Add alternative proposal PKI aliased paths (#15211)

* Add aliased path for root/rotate/:exported

This adds a user-friendly path name for generating a rotated root. We
automatically choose the name "next" for the newly generated root at
this path if it doesn't already exist.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add aliased path for intermediate/cross-sign

This allows cross-signatures to work.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add path for replacing the current root

This updates default to point to the value of the issuer with name
"next" rather than its current value.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Remove plural issuers/ in signing paths

These paths use a single issuer and thus shouldn't include the plural
issuers/ as a path prefix, instead using the singular issuer/ path
prefix.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Only warn if default issuer was imported

When the default issuer was not (re-)imported, we'd fail to find it,
causing an extraneous warning about missing keys, even though this
issuer indeed had a key.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add missing issuer sign/issue paths

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Clean up various warnings within the PKI package (#15230)

* Rebuild CRLs on performance secondary clusters post migration and on new/updated issuers

 - Hook into the backend invalidation function so that secondaries are notified of
   new/updated issuers or of migrations occurring on the primary cluster. Upon notification,
   schedule a CRL rebuild to take place upon the next request to read/update the CRL,
   or within the periodic function if no request comes in.

* Schedule rebuilding PKI CRLs on active nodes only

 - Address an issue where we were scheduling the rebuilding of a CRL on standby
   nodes, which would not be able to write to storage.
 - Fix an issue with standby nodes not correctly determining that a migration previously
   occurred.

* Return legacy CRL storage path when no migration has occurred.

* Handle issuer, keys locking (#15227)

* Handle locking of issuers during writes

We need a write lock around writes to ensure serialization of
modifications. We use a single lock for both issuer and key
updates, in part because certain operations (like deletion) will
potentially affect both.
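
A minimal illustration of the pattern, using a stand-in type rather than the
backend struct itself:

    package pki

    import "sync"

    // issuerKeyLock is an illustrative stand-in for the backend's issuersLock
    // field: every issuer or key mutation runs while holding the write lock,
    // so concurrent modifications are serialized.
    type issuerKeyLock struct {
    	mu sync.RWMutex
    }

    func (l *issuerKeyLock) withWriteLock(fn func() error) error {
    	l.mu.Lock()
    	defer l.mu.Unlock()
    	return fn()
    }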

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add missing b.useLegacyBundleCaStorage guards

Several locations needed to guard against early usage of the new issuers
endpoint pre-migration.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Address PKI to properly support managed keys (#15256)

* Address codebase for managed key fixes
* Add proper public key comparison for better managed key support to importKeys
* Remove redundant public key fetching within PKI importKeys

* Correctly handle rebuilding remaining chains

When deleting a specific issuer, we might impact the chains. From a
consistency perspective, we need to ensure the remaining chains are
correct and don't refer to the since-deleted issuer, so trigger a full
rebuild here.

We don't need to call this in the delete-the-world (DELETE /root) code
path, as there shouldn't be any remaining issuers or chains to build.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Remove legacy CRL bundle on world deletion

When calling DELETE /root, we should remove the legacy CRL bundle, since
we're deleting the legacy CA issuer bundle as well.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Remove deleted issuers' CRL entries

Since CRLs are no longer resolvable after deletion (due to missing
issuer ID, which will cause resolution to fail regardless of if an ID or
a name/default reference was used), we should delete these CRLs from
storage to avoid leaking them.

In the event that this issuer comes back (with key material), we can
simply rebuild the CRL at that time (from the remaining revoked storage
entries).

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add unauthed JSON fetching of CRLs, Issuers (#15253)

Default to fetching JSON CRL for consistency

This makes the bare issuer-specific CRL fetching endpoint return the
JSON-wrapped CRL by default, moving the DER CRL to a specific endpoint.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

Add JSON-specific endpoint for fetching issuers

Unlike the unqualified /issuer/:ref endpoint (which also returns JSON),
we have a separate /issuer/:ref/json endpoint to return _only_ the
PEM-encoded certificate and the chain, mirroring the existing /cert/ca
endpoint but for a specific issuer. This allows us to make the endpoint
unauthenticated, whereas the bare endpoint would remain authenticated
and usually privileged.

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

Add tests for raw JSON endpoints

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add unauthenticated issuers endpoints to PKI table

This adds the following unauthenticated issuers endpoints:

 - LIST /issuers,
 - Fetching _just_ the issuer certificates (in JSON/DER/PEM form), and
 - Fetching the CRL of this issuer (in JSON/DER/PEM form).

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* Add issuer usage restrictions bitset

This allows issuers to have usage restrictions, limiting whether they
can be used to issue certificates or to generate CRLs. Certain issuers
can thus skip generating a CRL (even if the global config has the CRL
enabled), or can stop issuing new certificates (while potentially
letting CRL generation continue).

Setting both fields to false effectively forms a soft delete capability.
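
A minimal sketch of such a bitset; ReadOnlyUsage does appear in the diff
below, but the exact constants and layout here are an assumption:

    package pki

    type issuerUsage uint

    const (
    	ReadOnlyUsage   issuerUsage = iota      // fetch-only: no issuance, no CRL signing
    	IssuanceUsage   issuerUsage = 1 << iota // may sign leaf/intermediate certificates
    	CRLSigningUsage issuerUsage = 1 << iota // may sign this mount's CRLs
    	AllIssuerUsages             = ReadOnlyUsage | IssuanceUsage | CRLSigningUsage
    )

    // HasUsage reports whether every bit in usage is set on i; clearing both
    // IssuanceUsage and CRLSigningUsage acts as a soft delete.
    func (i issuerUsage) HasUsage(usage issuerUsage) bool {
    	return i&usage == usage
    }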

Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>

* PKI Pod rotation Add Base Changelog (#15283)

* PKI Pod rotation changelog.
* Use feature release-note formatting of changelog.

Co-authored-by: Steven Clark <steven.clark@hashicorp.com>
Co-authored-by: Kit Haines <kit.haines@hashicorp.com>
Co-authored-by: kitography <khaines@mit.edu>
Commit b42cdf3040 (parent 91e0bbe95b), authored by Alexander Scheel, committed by GitHub on 2022-05-11 12:42:28 -04:00
41 changed files with 7780 additions and 904 deletions


@@ -5,8 +5,11 @@ import (
"fmt"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/hashicorp/vault/sdk/helper/consts"
"github.com/armon/go-metrics"
"github.com/hashicorp/vault/helper/metricsutil"
"github.com/hashicorp/vault/helper/namespace"
@@ -22,22 +25,27 @@ const (
/*
* PKI requests are a bit special to keep up with the various failure and load issues.
* The main ca and intermediate requests are always forwarded to the Primary cluster's active
* node to write and send the key material/config globally across all clusters.
*
* CRL/Revocation and Issued certificate apis are handled by the active node within the cluster
* they originate. Which means if a request comes into a performance secondary cluster the writes
* Any requests to write/delete shared data (such as roles, issuers, keys, and configuration)
* are always forwarded to the Primary cluster's active node to write and send the key
* material/config globally across all clusters. Reads should be handled locally, to give a
* sense of where this cluster's replication state is at.
*
* CRL/Revocation and Fetch Certificate APIs are handled by the active node within the cluster
* they originate. This means, if a request comes into a performance secondary cluster, the writes
* will be forwarded to that cluster's active node and not go all the way up to the performance primary's
* active node.
*
* If a certificate issue request has a role in which no_store is set to true that node itself
* will issue the certificate and not forward the request to the active node.
* If a certificate issue request has a role in which no_store is set to true, that node itself
* will issue the certificate and not forward the request to the active node, as this does not
* need to write to storage.
*
* Following the same pattern if a managed key is involved to sign an issued certificate request
* Following the same pattern, if a managed key is involved to sign an issued certificate request
* and the local node does not have access for some reason to it, the request will be forwarded to
* the active node within the cluster only.
*
* To make sense of what goes where the following bits need to be analyzed within the codebase.
*
* 1. The backend LocalStorage paths determine what storage paths will remain within a
* cluster and not be forwarded to a performance primary
* 2. Within each path's OperationHandler definition, check to see if ForwardPerformanceStandby &
@@ -69,11 +77,19 @@ func Backend(conf *logical.BackendConfig) *backend {
"ca",
"crl/pem",
"crl",
"issuer/+/crl/der",
"issuer/+/crl/pem",
"issuer/+/crl",
"issuer/+/pem",
"issuer/+/der",
"issuer/+/json",
"issuers",
},
LocalStorage: []string{
"revoked/",
"crl",
legacyCRLPath,
"crls/",
"certs/",
},
@@ -83,7 +99,8 @@ func Backend(conf *logical.BackendConfig) *backend {
},
SealWrapStorage: []string{
"config/ca_bundle",
legacyCertBundlePath,
keyPrefix,
},
},
@@ -103,6 +120,35 @@ func Backend(conf *logical.BackendConfig) *backend {
pathSign(&b),
pathIssue(&b),
pathRotateCRL(&b),
pathRevoke(&b),
pathTidy(&b),
pathTidyStatus(&b),
// Issuer APIs
pathListIssuers(&b),
pathGetIssuer(&b),
pathGetIssuerCRL(&b),
pathImportIssuer(&b),
pathIssuerIssue(&b),
pathIssuerSign(&b),
pathIssuerSignIntermediate(&b),
pathIssuerSignSelfIssued(&b),
pathIssuerSignVerbatim(&b),
pathIssuerGenerateRoot(&b),
pathRotateRoot(&b),
pathIssuerGenerateIntermediate(&b),
pathCrossSignIntermediate(&b),
pathConfigIssuers(&b),
pathReplaceRoot(&b),
// Key APIs
pathListKeys(&b),
pathKey(&b),
pathGenerateKey(&b),
pathImportKey(&b),
pathConfigKeys(&b),
// Fetch APIs have been lowered to favor the newer issuer API endpoints
pathFetchCA(&b),
pathFetchCAChain(&b),
pathFetchCRL(&b),
@@ -110,29 +156,34 @@ func Backend(conf *logical.BackendConfig) *backend {
pathFetchValidRaw(&b),
pathFetchValid(&b),
pathFetchListCerts(&b),
pathRevoke(&b),
pathTidy(&b),
pathTidyStatus(&b),
},
Secrets: []*framework.Secret{
secretCerts(&b),
},
BackendType: logical.TypeLogical,
BackendType: logical.TypeLogical,
InitializeFunc: b.initialize,
Invalidate: b.invalidate,
PeriodicFunc: b.periodicFunc,
}
b.crlLifetime = time.Hour * 72
b.tidyCASGuard = new(uint32)
b.tidyStatus = &tidyStatus{state: tidyStatusInactive}
b.storage = conf.StorageView
b.backendUuid = conf.BackendUUID
b.pkiStorageVersion.Store(0)
b.crlBuilder = &crlBuilder{}
return &b
}
type backend struct {
*framework.Backend
backendUuid string
storage logical.Storage
crlLifetime time.Duration
revokeStorageLock sync.RWMutex
@@ -140,6 +191,12 @@ type backend struct {
tidyStatusLock sync.RWMutex
tidyStatus *tidyStatus
pkiStorageVersion atomic.Value
crlBuilder *crlBuilder
// Write lock around issuers and keys.
issuersLock sync.RWMutex
}
type (
@@ -233,3 +290,72 @@ func (b *backend) metricsWrap(callType string, roleMode int, ofunc roleOperation
return resp, err
}
}
// initialize is used to perform a possible PKI storage migration if needed
func (b *backend) initialize(ctx context.Context, _ *logical.InitializationRequest) error {
// Load up our current pki storage state, no matter the host type we are on.
b.updatePkiStorageVersion(ctx)
// Early exit if not a primary cluster or performance secondary with a local mount.
if b.System().ReplicationState().HasState(consts.ReplicationDRSecondary|consts.ReplicationPerformanceStandby) ||
(!b.System().LocalMount() && b.System().ReplicationState().HasState(consts.ReplicationPerformanceSecondary)) {
b.Logger().Debug("skipping PKI migration as we are not on primary or secondary with a local mount")
return nil
}
b.issuersLock.Lock()
defer b.issuersLock.Unlock()
if err := migrateStorage(ctx, b, b.storage); err != nil {
b.Logger().Error("Error during migration of PKI mount: " + err.Error())
return err
}
b.updatePkiStorageVersion(ctx)
return nil
}
func (b *backend) useLegacyBundleCaStorage() bool {
version := b.pkiStorageVersion.Load()
return version == nil || version == 0
}
func (b *backend) updatePkiStorageVersion(ctx context.Context) {
info, err := getMigrationInfo(ctx, b.storage)
if err != nil {
b.Logger().Error(fmt.Sprintf("Failed loading PKI migration status, staying in legacy mode: %v", err))
return
}
if info.isRequired {
b.Logger().Info("PKI migration is required, reading cert bundle from legacy ca location")
b.pkiStorageVersion.Store(0)
} else {
b.Logger().Debug("PKI migration completed, reading cert bundle from key/issuer storage")
b.pkiStorageVersion.Store(1)
}
}
func (b *backend) invalidate(ctx context.Context, key string) {
switch {
case strings.HasPrefix(key, legacyMigrationBundleLogKey):
// This is for a secondary cluster to pick up that the migration has completed
// and reset its compatibility mode and rebuild the CRL locally.
b.updatePkiStorageVersion(ctx)
b.crlBuilder.requestRebuildIfActiveNode(b)
case strings.HasPrefix(key, issuerPrefix):
// If an issuer has changed on the primary, we need to schedule an update of our CRL,
// the primary cluster would have done it already, but the CRL is cluster specific so
// force a rebuild of ours.
if !b.useLegacyBundleCaStorage() {
b.crlBuilder.requestRebuildIfActiveNode(b)
} else {
b.Logger().Debug("Ignoring invalidation updates for issuer as the PKI migration has yet to complete.")
}
}
}
func (b *backend) periodicFunc(ctx context.Context, request *logical.Request) error {
return b.crlBuilder.rebuildIfForced(ctx, b, request)
}


@@ -281,18 +281,7 @@ func TestBackend_InvalidParameter(t *testing.T) {
func TestBackend_CSRValues(t *testing.T) {
initTest.Do(setCerts)
defaultLeaseTTLVal := time.Hour * 24
maxLeaseTTLVal := time.Hour * 24 * 32
b, err := Factory(context.Background(), &logical.BackendConfig{
Logger: nil,
System: &logical.StaticSystemView{
DefaultLeaseTTLVal: defaultLeaseTTLVal,
MaxLeaseTTLVal: maxLeaseTTLVal,
},
})
if err != nil {
t.Fatalf("Unable to create backend: %s", err)
}
b, _ := createBackendWithStorage(t)
testCase := logicaltest.TestCase{
LogicalBackend: b,
@@ -308,18 +297,7 @@ func TestBackend_CSRValues(t *testing.T) {
func TestBackend_URLsCRUD(t *testing.T) {
initTest.Do(setCerts)
defaultLeaseTTLVal := time.Hour * 24
maxLeaseTTLVal := time.Hour * 24 * 32
b, err := Factory(context.Background(), &logical.BackendConfig{
Logger: nil,
System: &logical.StaticSystemView{
DefaultLeaseTTLVal: defaultLeaseTTLVal,
MaxLeaseTTLVal: maxLeaseTTLVal,
},
})
if err != nil {
t.Fatalf("Unable to create backend: %s", err)
}
b, _ := createBackendWithStorage(t)
testCase := logicaltest.TestCase{
LogicalBackend: b,
@@ -354,18 +332,8 @@ func TestBackend_Roles(t *testing.T) {
t.Run(tc.name, func(t *testing.T) {
initTest.Do(setCerts)
defaultLeaseTTLVal := time.Hour * 24
maxLeaseTTLVal := time.Hour * 24 * 32
b, err := Factory(context.Background(), &logical.BackendConfig{
Logger: nil,
System: &logical.StaticSystemView{
DefaultLeaseTTLVal: defaultLeaseTTLVal,
MaxLeaseTTLVal: maxLeaseTTLVal,
},
})
if err != nil {
t.Fatalf("Unable to create backend: %s", err)
}
b, _ := createBackendWithStorage(t)
testCase := logicaltest.TestCase{
LogicalBackend: b,
Steps: []logicaltest.TestStep{
@@ -1748,14 +1716,127 @@ func generateRoleSteps(t *testing.T, useCSRs bool) []logicaltest.TestStep {
return ret
}
func TestBackend_PathFetchValidRaw(t *testing.T) {
config := logical.TestBackendConfig()
storage := &logical.InmemStorage{}
config.StorageView = storage
func TestRolesAltIssuer(t *testing.T) {
coreConfig := &vault.CoreConfig{
LogicalBackends: map[string]logical.Factory{
"pki": Factory,
},
}
cluster := vault.NewTestCluster(t, coreConfig, &vault.TestClusterOptions{
HandlerFunc: vaulthttp.Handler,
})
cluster.Start()
defer cluster.Cleanup()
b := Backend(config)
err := b.Setup(context.Background(), config)
client := cluster.Cores[0].Client
var err error
err = client.Sys().Mount("pki", &api.MountInput{
Type: "pki",
Config: api.MountConfigInput{
DefaultLeaseTTL: "16h",
MaxLeaseTTL: "60h",
},
})
if err != nil {
t.Fatal(err)
}
// Create two issuers.
resp, err := client.Logical().Write("pki/root/generate/internal", map[string]interface{}{
"common_name": "root a - example.com",
"issuer_name": "root-a",
"key_type": "ec",
})
require.NoError(t, err)
require.NotNil(t, resp)
rootAPem := resp.Data["certificate"].(string)
rootACert := parseCert(t, rootAPem)
resp, err = client.Logical().Write("pki/root/generate/internal", map[string]interface{}{
"common_name": "root b - example.com",
"issuer_name": "root-b",
"key_type": "ec",
})
require.NoError(t, err)
require.NotNil(t, resp)
rootBPem := resp.Data["certificate"].(string)
rootBCert := parseCert(t, rootBPem)
// Create three roles: one with no assignment, one with explicit root-a,
// one with explicit root-b.
_, err = client.Logical().Write("pki/roles/use-default", map[string]interface{}{
"allow_any_name": true,
"enforce_hostnames": false,
"key_type": "ec",
})
require.NoError(t, err)
_, err = client.Logical().Write("pki/roles/use-root-a", map[string]interface{}{
"allow_any_name": true,
"enforce_hostnames": false,
"key_type": "ec",
"issuer_ref": "root-a",
})
require.NoError(t, err)
_, err = client.Logical().Write("pki/roles/use-root-b", map[string]interface{}{
"allow_any_name": true,
"enforce_hostnames": false,
"issuer_ref": "root-b",
})
require.NoError(t, err)
// Now issue certs against these roles.
resp, err = client.Logical().Write("pki/issue/use-default", map[string]interface{}{
"common_name": "testing",
"ttl": "5s",
})
require.NoError(t, err)
leafPem := resp.Data["certificate"].(string)
leafCert := parseCert(t, leafPem)
err = leafCert.CheckSignatureFrom(rootACert)
require.NoError(t, err, "should be signed by root-a but wasn't")
resp, err = client.Logical().Write("pki/issue/use-root-a", map[string]interface{}{
"common_name": "testing",
"ttl": "5s",
})
require.NoError(t, err)
leafPem = resp.Data["certificate"].(string)
leafCert = parseCert(t, leafPem)
err = leafCert.CheckSignatureFrom(rootACert)
require.NoError(t, err, "should be signed by root-a but wasn't")
resp, err = client.Logical().Write("pki/issue/use-root-b", map[string]interface{}{
"common_name": "testing",
"ttl": "5s",
})
require.NoError(t, err)
leafPem = resp.Data["certificate"].(string)
leafCert = parseCert(t, leafPem)
err = leafCert.CheckSignatureFrom(rootBCert)
require.NoError(t, err, "should be signed by root-b but wasn't")
// Update the default issuer to be root B and make sure that the
// use-default role updates.
_, err = client.Logical().Write("pki/config/issuers", map[string]interface{}{
"default": "root-b",
})
require.NoError(t, err)
resp, err = client.Logical().Write("pki/issue/use-default", map[string]interface{}{
"common_name": "testing",
"ttl": "5s",
})
require.NoError(t, err)
leafPem = resp.Data["certificate"].(string)
leafCert = parseCert(t, leafPem)
err = leafCert.CheckSignatureFrom(rootBCert)
require.NoError(t, err, "should be signed by root-b but wasn't")
}
func TestBackend_PathFetchValidRaw(t *testing.T) {
b, storage := createBackendWithStorage(t)
resp, err := b.HandleRequest(context.Background(), &logical.Request{
Operation: logical.UpdateOperation,
@@ -1773,7 +1854,7 @@ func TestBackend_PathFetchValidRaw(t *testing.T) {
}
rootCaAsPem := resp.Data["certificate"].(string)
// The ca_chain call at least for now does not return the root CA authority
// Chain should contain the root.
resp, err = b.HandleRequest(context.Background(), &logical.Request{
Operation: logical.ReadOperation,
Path: "ca_chain",
@@ -1785,7 +1866,9 @@ func TestBackend_PathFetchValidRaw(t *testing.T) {
if resp != nil && resp.IsError() {
t.Fatalf("failed read ca_chain, %#v", resp)
}
require.Equal(t, []byte{}, resp.Data[logical.HTTPRawBody], "ca_chain response should have been empty")
if strings.Count(string(resp.Data[logical.HTTPRawBody].([]byte)), rootCaAsPem) != 1 {
t.Fatalf("expected raw chain to contain the root cert")
}
// The ca/pem should return us the actual CA...
resp, err = b.HandleRequest(context.Background(), &logical.Request{
@@ -1884,15 +1967,7 @@ func TestBackend_PathFetchValidRaw(t *testing.T) {
func TestBackend_PathFetchCertList(t *testing.T) {
// create the backend
config := logical.TestBackendConfig()
storage := &logical.InmemStorage{}
config.StorageView = storage
b := Backend(config)
err := b.Setup(context.Background(), config)
if err != nil {
t.Fatal(err)
}
b, storage := createBackendWithStorage(t)
// generate root
rootData := map[string]interface{}{
@@ -2034,15 +2109,7 @@ func TestBackend_SignVerbatim(t *testing.T) {
func runTestSignVerbatim(t *testing.T, keyType string) {
// create the backend
config := logical.TestBackendConfig()
storage := &logical.InmemStorage{}
config.StorageView = storage
b := Backend(config)
err := b.Setup(context.Background(), config)
if err != nil {
t.Fatal(err)
}
b, storage := createBackendWithStorage(t)
// generate root
rootData := map[string]interface{}{
@@ -2275,92 +2342,108 @@ func TestBackend_Root_Idempotency(t *testing.T) {
})
cluster.Start()
defer cluster.Cleanup()
client := cluster.Cores[0].Client
var err error
err = client.Sys().Mount("pki", &api.MountInput{
Type: "pki",
Config: api.MountConfigInput{
DefaultLeaseTTL: "16h",
MaxLeaseTTL: "32h",
},
})
if err != nil {
t.Fatal(err)
}
mountPKIEndpoint(t, client, "pki")
// This is a change within 1.11, we are no longer idempotent across generate/internal calls.
resp, err := client.Logical().Write("pki/root/generate/internal", map[string]interface{}{
"common_name": "myvault.com",
})
if err != nil {
t.Fatal(err)
}
if resp == nil {
t.Fatal("expected ca info")
}
require.NoError(t, err)
require.NotNil(t, resp, "expected ca info")
keyId1 := resp.Data["key_id"]
issuerId1 := resp.Data["issuer_id"]
resp, err = client.Logical().Read("pki/cert/ca_chain")
if err != nil {
t.Fatalf("error reading ca_chain: %v", err)
}
require.NoError(t, err, "error reading ca_chain: %v", err)
r1Data := resp.Data
// Try again, make sure it's a 204 and same CA
// Calling generate/internal should generate a new CA as well.
resp, err = client.Logical().Write("pki/root/generate/internal", map[string]interface{}{
"common_name": "myvault.com",
})
if err != nil {
t.Fatal(err)
}
if resp == nil {
t.Fatal("expected a warning")
}
if resp.Data != nil || len(resp.Warnings) == 0 {
t.Fatalf("bad response: %#v", *resp)
}
require.NoError(t, err)
require.NotNil(t, resp, "expected ca info")
keyId2 := resp.Data["key_id"]
issuerId2 := resp.Data["issuer_id"]
// Make sure that we actually generated different issuer and key values
require.NotEqual(t, keyId1, keyId2)
require.NotEqual(t, issuerId1, issuerId2)
// Now because the issued CA's have no links, the call to ca_chain should return the same data (ca chain from default)
resp, err = client.Logical().Read("pki/cert/ca_chain")
if err != nil {
t.Fatalf("error reading ca_chain: %v", err)
}
require.NoError(t, err, "error reading ca_chain: %v", err)
r2Data := resp.Data
if !reflect.DeepEqual(r1Data, r2Data) {
t.Fatal("got different ca certs")
}
resp, err = client.Logical().Delete("pki/root")
if err != nil {
t.Fatal(err)
}
if resp != nil {
t.Fatal("expected nil response")
}
// Make sure it behaves the same
resp, err = client.Logical().Delete("pki/root")
if err != nil {
t.Fatal(err)
}
if resp != nil {
t.Fatal("expected nil response")
}
_, err = client.Logical().Read("pki/cert/ca_chain")
if err == nil {
t.Fatal("expected error")
}
resp, err = client.Logical().Write("pki/root/generate/internal", map[string]interface{}{
"common_name": "myvault.com",
// Now let's validate that the import bundle is idempotent.
pemBundleRootCA := string(cluster.CACertPEM) + string(cluster.CAKeyPEM)
resp, err = client.Logical().Write("pki/config/ca", map[string]interface{}{
"pem_bundle": pemBundleRootCA,
})
if err != nil {
t.Fatal(err)
}
if resp == nil {
t.Fatal("expected ca info")
}
require.NoError(t, err)
require.NotNil(t, resp, "expected ca info")
firstImportedKeys := resp.Data["imported_keys"].([]interface{})
firstImportedIssuers := resp.Data["imported_issuers"].([]interface{})
require.NotContains(t, firstImportedKeys, keyId1)
require.NotContains(t, firstImportedKeys, keyId2)
require.NotContains(t, firstImportedIssuers, issuerId1)
require.NotContains(t, firstImportedIssuers, issuerId2)
// Performing this again should result in no key/issuer ids being imported/generated.
resp, err = client.Logical().Write("pki/config/ca", map[string]interface{}{
"pem_bundle": pemBundleRootCA,
})
require.NoError(t, err)
require.NotNil(t, resp, "expected ca info")
secondImportedKeys := resp.Data["imported_keys"]
secondImportedIssuers := resp.Data["imported_issuers"]
require.Nil(t, secondImportedKeys)
require.Nil(t, secondImportedIssuers)
resp, err = client.Logical().Delete("pki/root")
require.NoError(t, err)
require.NotNil(t, resp)
require.Equal(t, 1, len(resp.Warnings))
// Make sure we can delete twice...
resp, err = client.Logical().Delete("pki/root")
require.NoError(t, err)
require.NotNil(t, resp)
require.Equal(t, 1, len(resp.Warnings))
_, err = client.Logical().Read("pki/cert/ca_chain")
if err != nil {
t.Fatal(err)
require.Error(t, err, "expected an error fetching deleted ca_chain")
// We should be able to import the same ca bundle as before and get a different key/issuer ids
resp, err = client.Logical().Write("pki/config/ca", map[string]interface{}{
"pem_bundle": pemBundleRootCA,
})
require.NoError(t, err)
require.NotNil(t, resp, "expected ca info")
postDeleteImportedKeys := resp.Data["imported_keys"]
postDeleteImportedIssuers := resp.Data["imported_issuers"]
// Make sure that we actually generated different issuer and key values, then the previous import
require.NotNil(t, postDeleteImportedKeys)
require.NotNil(t, postDeleteImportedIssuers)
require.NotEqual(t, postDeleteImportedKeys, firstImportedKeys)
require.NotEqual(t, postDeleteImportedIssuers, firstImportedIssuers)
resp, err = client.Logical().Read("pki/cert/ca_chain")
require.NoError(t, err)
caChainPostDelete := resp.Data
if reflect.DeepEqual(r1Data, caChainPostDelete) {
t.Fatal("ca certs from ca_chain were the same post delete, should have changed.")
}
}
@@ -2463,15 +2546,7 @@ func TestBackend_SignIntermediate_AllowedPastCA(t *testing.T) {
func TestBackend_SignSelfIssued(t *testing.T) {
// create the backend
config := logical.TestBackendConfig()
storage := &logical.InmemStorage{}
config.StorageView = storage
b := Backend(config)
err := b.Setup(context.Background(), config)
if err != nil {
t.Fatal(err)
}
b, storage := createBackendWithStorage(t)
// generate root
rootData := map[string]interface{}{
@@ -2585,7 +2660,7 @@ func TestBackend_SignSelfIssued(t *testing.T) {
t.Fatal(err)
}
signingBundle, err := fetchCAInfo(context.Background(), b, &logical.Request{Storage: storage})
signingBundle, err := fetchCAInfo(context.Background(), b, &logical.Request{Storage: storage}, defaultRef, ReadOnlyUsage)
if err != nil {
t.Fatal(err)
}
@@ -2610,15 +2685,7 @@ func TestBackend_SignSelfIssued(t *testing.T) {
// require_matching_certificate_algorithms flag.
func TestBackend_SignSelfIssued_DifferentTypes(t *testing.T) {
// create the backend
config := logical.TestBackendConfig()
storage := &logical.InmemStorage{}
config.StorageView = storage
b := Backend(config)
err := b.Setup(context.Background(), config)
if err != nil {
t.Fatal(err)
}
b, storage := createBackendWithStorage(t)
// generate root
rootData := map[string]interface{}{
@@ -3595,6 +3662,7 @@ func TestReadWriteDeleteRoles(t *testing.T) {
"province": []interface{}{},
"street_address": []interface{}{},
"code_signing_flag": false,
"issuer_ref": "default",
}
if diff := deep.Equal(expectedData, resp.Data); len(diff) > 0 {
@@ -3839,25 +3907,7 @@ func TestBackend_RevokePlusTidy_Intermediate(t *testing.T) {
// Get CRL and ensure the tidied cert is still in the list after the tidy
// operation since it's not past the NotAfter (ttl) value yet.
req := client.NewRequest("GET", "/v1/pki/crl")
resp, err := client.RawRequest(req)
if err != nil {
t.Fatal(err)
}
defer resp.Body.Close()
crlBytes, err := ioutil.ReadAll(resp.Body)
if err != nil {
t.Fatalf("err: %s", err)
}
if len(crlBytes) == 0 {
t.Fatalf("expected CRL in response body")
}
crl, err := x509.ParseDERCRL(crlBytes)
if err != nil {
t.Fatal(err)
}
crl := getParsedCrl(t, client, "pki")
revokedCerts := crl.TBSCertList.RevokedCertificates
if len(revokedCerts) == 0 {
@@ -3970,14 +4020,64 @@ func TestBackend_RevokePlusTidy_Intermediate(t *testing.T) {
}
}
req = client.NewRequest("GET", "/v1/pki/crl")
resp, err = client.RawRequest(req)
crl = getParsedCrl(t, client, "pki")
revokedCerts = crl.TBSCertList.RevokedCertificates
if len(revokedCerts) != 0 {
t.Fatal("expected CRL to be empty")
}
}
func getParsedCrl(t *testing.T, client *api.Client, mountPoint string) *pkix.CertificateList {
path := fmt.Sprintf("/v1/%s/crl", mountPoint)
return getParsedCrlAtPath(t, client, path)
}
func getParsedCrlForIssuer(t *testing.T, client *api.Client, mountPoint string, issuer string) *pkix.CertificateList {
path := fmt.Sprintf("/v1/%v/issuer/%v/crl/der", mountPoint, issuer)
crl := getParsedCrlAtPath(t, client, path)
// Now fetch the issuer as well and verify the certificate
path = fmt.Sprintf("/v1/%v/issuer/%v/der", mountPoint, issuer)
req := client.NewRequest("GET", path)
resp, err := client.RawRequest(req)
if err != nil {
t.Fatal(err)
}
defer resp.Body.Close()
crlBytes, err = ioutil.ReadAll(resp.Body)
certBytes, err := ioutil.ReadAll(resp.Body)
if err != nil {
t.Fatalf("err: %s", err)
}
if len(certBytes) == 0 {
t.Fatalf("expected certificate in response body")
}
cert, err := x509.ParseCertificate(certBytes)
if err != nil {
t.Fatal(err)
}
if cert == nil {
t.Fatalf("expected parsed certificate")
}
if err := cert.CheckCRLSignature(crl); err != nil {
t.Fatalf("expected valid signature on CRL for issuer %v: %v", issuer, crl)
}
return crl
}
func getParsedCrlAtPath(t *testing.T, client *api.Client, path string) *pkix.CertificateList {
req := client.NewRequest("GET", path)
resp, err := client.RawRequest(req)
if err != nil {
t.Fatal(err)
}
defer resp.Body.Close()
crlBytes, err := ioutil.ReadAll(resp.Body)
if err != nil {
t.Fatalf("err: %s", err)
}
@@ -3985,15 +4085,11 @@ func TestBackend_RevokePlusTidy_Intermediate(t *testing.T) {
t.Fatalf("expected CRL in response body")
}
crl, err = x509.ParseDERCRL(crlBytes)
crl, err := x509.ParseDERCRL(crlBytes)
if err != nil {
t.Fatal(err)
}
revokedCerts = crl.TBSCertList.RevokedCertificates
if len(revokedCerts) != 0 {
t.Fatal("expected CRL to be empty")
}
return crl
}
func TestBackend_Root_FullCAChain(t *testing.T) {
@@ -4062,8 +4158,8 @@ func runFullCAChainTest(t *testing.T, keyType string) {
}
fullChain := resp.Data["ca_chain"].(string)
if !strings.Contains(fullChain, rootCert) {
t.Fatal("expected full chain to contain root certificate")
if strings.Count(fullChain, rootCert) != 1 {
t.Fatalf("expected full chain to contain root certificate; got %v occurrences", strings.Count(fullChain, rootCert))
}
// Now generate an intermediate at /pki-intermediate, signed by the root.
@@ -4125,12 +4221,16 @@ func runFullCAChainTest(t *testing.T, keyType string) {
t.Fatal("expected intermediate chain information")
}
// Verify we have a proper CRL now
crl := getParsedCrl(t, client, "pki-intermediate")
require.Equal(t, 0, len(crl.TBSCertList.RevokedCertificates))
fullChain = resp.Data["ca_chain"].(string)
if !strings.Contains(fullChain, intermediateCert) {
t.Fatal("expected full chain to contain intermediate certificate")
if strings.Count(fullChain, intermediateCert) != 1 {
t.Fatalf("expected full chain to contain intermediate certificate; got %v occurrences", strings.Count(fullChain, intermediateCert))
}
if !strings.Contains(fullChain, rootCert) {
t.Fatal("expected full chain to contain root certificate")
if strings.Count(fullChain, rootCert) != 1 {
t.Fatalf("expected full chain to contain root certificate; got %v occurrences", strings.Count(fullChain, rootCert))
}
// Finally, import this signing cert chain into a new mount to ensure
@@ -4163,11 +4263,11 @@ func runFullCAChainTest(t *testing.T, keyType string) {
}
fullChain = resp.Data["ca_chain"].(string)
if !strings.Contains(fullChain, intermediateCert) {
t.Fatal("expected full chain to contain intermediate certificate")
if strings.Count(fullChain, intermediateCert) != 1 {
t.Fatalf("expected full chain to contain intermediate certificate; got %v occurrences", strings.Count(fullChain, intermediateCert))
}
if !strings.Contains(fullChain, rootCert) {
t.Fatal("expected full chain to contain root certificate")
if strings.Count(fullChain, rootCert) != 1 {
t.Fatalf("expected full chain to contain root certificate; got %v occurrences", strings.Count(fullChain, rootCert))
}
// Now issue a short-lived certificate from our pki-external.
@@ -4637,6 +4737,334 @@ func TestBackend_Roles_KeySizeRegression(t *testing.T) {
t.Log(fmt.Sprintf("Key size regression expanded matrix test scenarios: %d", tested))
}
func TestRootWithExistingKey(t *testing.T) {
coreConfig := &vault.CoreConfig{
LogicalBackends: map[string]logical.Factory{
"pki": Factory,
},
}
cluster := vault.NewTestCluster(t, coreConfig, &vault.TestClusterOptions{
HandlerFunc: vaulthttp.Handler,
})
cluster.Start()
defer cluster.Cleanup()
client := cluster.Cores[0].Client
var err error
mountPKIEndpoint(t, client, "pki-root")
// Fail requests if type is existing, and we specify the key_type param
ctx := context.Background()
_, err = client.Logical().WriteWithContext(ctx, "pki-root/root/generate/existing", map[string]interface{}{
"common_name": "root myvault.com",
"key_type": "rsa",
})
require.Error(t, err)
require.Contains(t, err.Error(), "key_type nor key_bits arguments can be set in this mode")
// Fail requests if type is existing, and we specify the key_bits param
_, err = client.Logical().WriteWithContext(ctx, "pki-root/root/generate/existing", map[string]interface{}{
"common_name": "root myvault.com",
"key_bits": "2048",
})
require.Error(t, err)
require.Contains(t, err.Error(), "key_type nor key_bits arguments can be set in this mode")
// Fail if the specified key does not exist.
_, err = client.Logical().WriteWithContext(ctx, "pki-root/issuers/generate/root/existing", map[string]interface{}{
"common_name": "root myvault.com",
"issuer_name": "my-issuer1",
"key_ref": "my-key1",
})
require.Error(t, err)
require.Contains(t, err.Error(), "unable to find PKI key for reference: my-key1")
// Fail if the specified key name is default.
_, err = client.Logical().WriteWithContext(ctx, "pki-root/issuers/generate/root/internal", map[string]interface{}{
"common_name": "root myvault.com",
"issuer_name": "my-issuer1",
"key_name": "Default",
})
require.Error(t, err)
require.Contains(t, err.Error(), "reserved keyword 'default' can not be used as key name")
// Fail if the specified issuer name is default.
_, err = client.Logical().WriteWithContext(ctx, "pki-root/issuers/generate/root/internal", map[string]interface{}{
"common_name": "root myvault.com",
"issuer_name": "DEFAULT",
})
require.Error(t, err)
require.Contains(t, err.Error(), "reserved keyword 'default' can not be used as issuer name")
// Create the first CA
resp, err := client.Logical().WriteWithContext(ctx, "pki-root/issuers/generate/root/internal", map[string]interface{}{
"common_name": "root myvault.com",
"key_type": "rsa",
"issuer_name": "my-issuer1",
})
require.NoError(t, err)
require.NotNil(t, resp.Data["certificate"])
myIssuerId1 := resp.Data["issuer_id"]
myKeyId1 := resp.Data["key_id"]
require.NotEmpty(t, myIssuerId1)
require.NotEmpty(t, myKeyId1)
// Fetch the parsed CRL; it should be empty as we've not revoked anything
parsedCrl := getParsedCrlForIssuer(t, client, "pki-root", "my-issuer1")
require.Equal(t, len(parsedCrl.TBSCertList.RevokedCertificates), 0, "should have no revoked certificates")
// Fail if the specified issuer name is re-used.
_, err = client.Logical().WriteWithContext(ctx, "pki-root/issuers/generate/root/internal", map[string]interface{}{
"common_name": "root myvault.com",
"issuer_name": "my-issuer1",
})
require.Error(t, err)
require.Contains(t, err.Error(), "issuer name already in use")
// Create the second CA
resp, err = client.Logical().WriteWithContext(ctx, "pki-root/issuers/generate/root/internal", map[string]interface{}{
"common_name": "root myvault.com",
"key_type": "rsa",
"issuer_name": "my-issuer2",
"key_name": "root-key2",
})
require.NoError(t, err)
require.NotNil(t, resp.Data["certificate"])
myIssuerId2 := resp.Data["issuer_id"]
myKeyId2 := resp.Data["key_id"]
require.NotEmpty(t, myIssuerId2)
require.NotEmpty(t, myKeyId2)
// Fetch the parsed CRL; it should be empty as we've not revoked anything
parsedCrl = getParsedCrlForIssuer(t, client, "pki-root", "my-issuer2")
require.Equal(t, len(parsedCrl.TBSCertList.RevokedCertificates), 0, "should have no revoked certificates")
// Fail if the specified key name is re-used.
_, err = client.Logical().WriteWithContext(ctx, "pki-root/issuers/generate/root/internal", map[string]interface{}{
"common_name": "root myvault.com",
"issuer_name": "my-issuer3",
"key_name": "root-key2",
})
require.Error(t, err)
require.Contains(t, err.Error(), "key name already in use")
// Create a third CA re-using key from CA 1
resp, err = client.Logical().WriteWithContext(ctx, "pki-root/issuers/generate/root/existing", map[string]interface{}{
"common_name": "root myvault.com",
"issuer_name": "my-issuer3",
"key_ref": myKeyId1,
})
require.NoError(t, err)
require.NotNil(t, resp.Data["certificate"])
myIssuerId3 := resp.Data["issuer_id"]
myKeyId3 := resp.Data["key_id"]
require.NotEmpty(t, myIssuerId3)
require.NotEmpty(t, myKeyId3)
// Fetch the parsed CRL; it should be empty as we've not revoked anything.
parsedCrl = getParsedCrlForIssuer(t, client, "pki-root", "my-issuer3")
require.Equal(t, len(parsedCrl.TBSCertList.RevokedCertificates), 0, "should have no revoked certificates")
// Signatures should be the same since this is just a reissued cert. We
// use signature as a proxy for "these two CRLs are equal".
firstCrl := getParsedCrlForIssuer(t, client, "pki-root", "my-issuer1")
require.Equal(t, parsedCrl.SignatureValue, firstCrl.SignatureValue)
require.NotEqual(t, myIssuerId1, myIssuerId2)
require.NotEqual(t, myIssuerId1, myIssuerId3)
require.NotEqual(t, myKeyId1, myKeyId2)
require.Equal(t, myKeyId1, myKeyId3)
resp, err = client.Logical().ListWithContext(ctx, "pki-root/issuers")
require.NoError(t, err)
require.Equal(t, 3, len(resp.Data["keys"].([]interface{})))
require.Contains(t, resp.Data["keys"], myIssuerId1)
require.Contains(t, resp.Data["keys"], myIssuerId2)
require.Contains(t, resp.Data["keys"], myIssuerId3)
}
func TestIntermediateWithExistingKey(t *testing.T) {
coreConfig := &vault.CoreConfig{
LogicalBackends: map[string]logical.Factory{
"pki": Factory,
},
}
cluster := vault.NewTestCluster(t, coreConfig, &vault.TestClusterOptions{
HandlerFunc: vaulthttp.Handler,
})
cluster.Start()
defer cluster.Cleanup()
client := cluster.Cores[0].Client
var err error
mountPKIEndpoint(t, client, "pki-root")
// Fail requests if type is existing, and we specify the key_type param
ctx := context.Background()
_, err = client.Logical().WriteWithContext(ctx, "pki-root/intermediate/generate/existing", map[string]interface{}{
"common_name": "root myvault.com",
"key_type": "rsa",
})
require.Error(t, err)
require.Contains(t, err.Error(), "key_type nor key_bits arguments can be set in this mode")
// Fail requests if type is existing, and we specify the key_bits param
_, err = client.Logical().WriteWithContext(ctx, "pki-root/intermediate/generate/existing", map[string]interface{}{
"common_name": "root myvault.com",
"key_bits": "2048",
})
require.Error(t, err)
require.Contains(t, err.Error(), "key_type nor key_bits arguments can be set in this mode")
// Fail if the specified key does not exist.
_, err = client.Logical().WriteWithContext(ctx, "pki-root/issuers/generate/intermediate/existing", map[string]interface{}{
"common_name": "root myvault.com",
"key_ref": "my-key1",
})
require.Error(t, err)
require.Contains(t, err.Error(), "unable to find PKI key for reference: my-key1")
// Create the first intermediate CA
resp, err := client.Logical().WriteWithContext(ctx, "pki-root/issuers/generate/intermediate/internal", map[string]interface{}{
"common_name": "root myvault.com",
"key_type": "rsa",
})
require.NoError(t, err)
// csr1 := resp.Data["csr"]
myKeyId1 := resp.Data["key_id"]
require.NotEmpty(t, myKeyId1)
// Create the second intermediate CA
resp, err = client.Logical().WriteWithContext(ctx, "pki-root/issuers/generate/intermediate/internal", map[string]interface{}{
"common_name": "root myvault.com",
"key_type": "rsa",
"key_name": "interkey1",
})
require.NoError(t, err)
// csr2 := resp.Data["csr"]
myKeyId2 := resp.Data["key_id"]
require.NotEmpty(t, myKeyId2)
// Create a third intermediate CA re-using key from intermediate CA 1
resp, err = client.Logical().WriteWithContext(ctx, "pki-root/issuers/generate/intermediate/existing", map[string]interface{}{
"common_name": "root myvault.com",
"key_ref": myKeyId1,
})
require.NoError(t, err)
// csr3 := resp.Data["csr"]
myKeyId3 := resp.Data["key_id"]
require.NotEmpty(t, myKeyId3)
require.NotEqual(t, myKeyId1, myKeyId2)
require.Equal(t, myKeyId1, myKeyId3, "our new ca did not seem to reuse the key as we expected.")
}
func TestIssuanceTTLs(t *testing.T) {
coreConfig := &vault.CoreConfig{
LogicalBackends: map[string]logical.Factory{
"pki": Factory,
},
}
cluster := vault.NewTestCluster(t, coreConfig, &vault.TestClusterOptions{
HandlerFunc: vaulthttp.Handler,
})
cluster.Start()
defer cluster.Cleanup()
client := cluster.Cores[0].Client
var err error
err = client.Sys().Mount("pki", &api.MountInput{
Type: "pki",
Config: api.MountConfigInput{
DefaultLeaseTTL: "16h",
MaxLeaseTTL: "60h",
},
})
if err != nil {
t.Fatal(err)
}
resp, err := client.Logical().Write("pki/root/generate/internal", map[string]interface{}{
"common_name": "root example.com",
"issuer_name": "root",
"ttl": "15s",
"key_type": "ec",
})
require.NoError(t, err)
require.NotNil(t, resp)
_, err = client.Logical().Write("pki/roles/local-testing", map[string]interface{}{
"allow_any_name": true,
"enforce_hostnames": false,
"key_type": "ec",
})
require.NoError(t, err)
_, err = client.Logical().Write("pki/issue/local-testing", map[string]interface{}{
"common_name": "testing",
"ttl": "1s",
})
require.NoError(t, err, "expected issuance to succeed due to shorter ttl than cert ttl")
_, err = client.Logical().Write("pki/issue/local-testing", map[string]interface{}{
"common_name": "testing",
})
require.Error(t, err, "expected issuance to fail due to longer default ttl than cert ttl")
resp, err = client.Logical().Write("pki/issuer/root", map[string]interface{}{
"issuer_name": "root",
"leaf_not_after_behavior": "permit",
})
require.NoError(t, err)
require.NotNil(t, resp)
_, err = client.Logical().Write("pki/issue/local-testing", map[string]interface{}{
"common_name": "testing",
})
require.NoError(t, err, "expected issuance to succeed due to permitted longer TTL")
resp, err = client.Logical().Write("pki/issuer/root", map[string]interface{}{
"issuer_name": "root",
"leaf_not_after_behavior": "truncate",
})
require.NoError(t, err)
require.NotNil(t, resp)
_, err = client.Logical().Write("pki/issue/local-testing", map[string]interface{}{
"common_name": "testing",
})
require.NoError(t, err, "expected issuance to succeed due to truncated ttl")
// Sleep until the parent cert expires.
time.Sleep(16 * time.Second)
resp, err = client.Logical().Write("pki/issuer/root", map[string]interface{}{
"issuer_name": "root",
"leaf_not_after_behavior": "err",
})
require.NoError(t, err)
require.NotNil(t, resp)
// Even 1s ttl should now fail.
_, err = client.Logical().Write("pki/issue/local-testing", map[string]interface{}{
"common_name": "testing",
"ttl": "1s",
})
require.Error(t, err, "expected issuance to fail as the parent CA certificate has expired")
}
func TestSealWrappedStorageConfigured(t *testing.T) {
b, _ := createBackendWithStorage(t)
wrappedEntries := b.Backend.PathsSpecial.SealWrapStorage
// Make sure our legacy bundle is within the list
// NOTE: do not convert these test values to constants, we should always have these paths within seal wrap config
require.Contains(t, wrappedEntries, "config/ca_bundle", "Legacy bundle missing from seal wrap")
// The trailing / is important as it treats the entire folder as requiring seal wrapping, not just config/key
require.Contains(t, wrappedEntries, "config/key/", "key prefix with trailing / missing from seal wrap.")
}
var (
initTest sync.Once
rsaCAKey string
@@ -4659,7 +5087,7 @@ func mountPKIEndpoint(t *testing.T, client *api.Client, path string) {
require.NoError(t, err, "failed mounting pki endpoint")
}
func requireSignedBy(t *testing.T, cert *x509.Certificate, key crypto.PublicKey) {
switch key.(type) {
case *rsa.PublicKey:
requireRSASignedBy(t, cert, key.(*rsa.PublicKey))
@@ -4672,7 +5100,7 @@ func requireSignedBy(t *testing.T, cert x509.Certificate, key crypto.PublicKey)
}
}
func requireRSASignedBy(t *testing.T, cert *x509.Certificate, key *rsa.PublicKey) {
require.Contains(t, []x509.SignatureAlgorithm{x509.SHA256WithRSA, x509.SHA512WithRSA},
cert.SignatureAlgorithm, "only sha256 signatures supported")
@@ -4695,7 +5123,7 @@ func requireRSASignedBy(t *testing.T, cert x509.Certificate, key *rsa.PublicKey)
require.NoError(t, err, "the certificate was not signed by the expected public rsa key.")
}
func requireECDSASignedBy(t *testing.T, cert *x509.Certificate, key *ecdsa.PublicKey) {
require.Contains(t, []x509.SignatureAlgorithm{x509.ECDSAWithSHA256, x509.ECDSAWithSHA512},
cert.SignatureAlgorithm, "only ecdsa signatures supported")
@@ -4714,21 +5142,21 @@ func requireECDSASignedBy(t *testing.T, cert x509.Certificate, key *ecdsa.Public
require.True(t, verify, "the certificate was not signed by the expected public ecdsa key.")
}
func requireED25519SignedBy(t *testing.T, cert *x509.Certificate, key ed25519.PublicKey) {
require.Equal(t, x509.PureEd25519, cert.SignatureAlgorithm)
require.True(t, ed25519.Verify(key, cert.RawTBSCertificate, cert.Signature), "the certificate was not signed by the expected public ed25519 key.")
}
func parseCert(t *testing.T, pemCert string) *x509.Certificate {
block, _ := pem.Decode([]byte(pemCert))
require.NotNil(t, block, "failed to decode PEM block")
cert, err := x509.ParseCertificate(block.Bytes)
require.NoError(t, err)
return cert
}
func requireMatchingPublicKeys(t *testing.T, cert *x509.Certificate, key crypto.PublicKey) {
certPubKey := cert.PublicKey
require.True(t, reflect.DeepEqual(certPubKey, key),
"public keys mismatched: got: %v, expected: %v", certPubKey, key)


@@ -257,13 +257,13 @@ func runSteps(t *testing.T, rootB, intB *backend, client *api.Client, rootName,
// Load CA cert/key in and ensure we can fetch it back in various formats,
// unauthenticated
{
// Attempt import but only provide the cert; this should work.
{
_, err := client.Logical().Write(rootName+"config/ca", map[string]interface{}{
"pem_bundle": caCert,
})
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
}
@@ -272,41 +272,47 @@ func runSteps(t *testing.T, rootB, intB *backend, client *api.Client, rootName,
_, err := client.Logical().Write(rootName+"config/ca", map[string]interface{}{
"pem_bundle": caKey,
})
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
}
// Import entire CA bundle; this should work as well
{
_, err := client.Logical().Write(rootName+"config/ca", map[string]interface{}{
"pem_bundle": strings.Join([]string{caKey, caCert}, "\n"),
})
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
}
prevToken := client.Token()
client.SetToken("")
// cert/ca and issuer/default/json path
for _, path := range []string{"cert/ca", "issuer/default/json"} {
resp, err := client.Logical().Read(rootName + path)
if err != nil {
t.Fatal(err)
}
if resp == nil {
t.Fatal("nil response")
}
expected := caCert
if path == "issuer/default/json" {
// Preserves the new line.
expected += "\n"
}
if diff := deep.Equal(resp.Data["certificate"].(string), expected); diff != nil {
t.Fatal(diff)
}
}
// ca/pem and issuer/default/pem path (raw string)
for _, path := range []string{"ca/pem", "issuer/default/pem"} {
req := &logical.Request{
Path: path,
Operation: logical.ReadOperation,
Storage: rootB.storage,
}
@@ -317,7 +323,12 @@ func runSteps(t *testing.T, rootB, intB *backend, client *api.Client, rootName,
if resp == nil {
t.Fatal("nil response")
}
expected := []byte(caCert)
if path == "issuer/default/pem" {
// Preserves the new line.
expected = []byte(caCert + "\n")
}
if diff := deep.Equal(resp.Data["http_raw_body"].([]byte), expected); diff != nil {
t.Fatal(diff)
}
if resp.Data["http_content_type"].(string) != "application/pem-certificate-chain" {
@@ -325,10 +336,10 @@ func runSteps(t *testing.T, rootB, intB *backend, client *api.Client, rootName,
}
}
// ca and issuer/default/der (raw DER bytes)
for _, path := range []string{"ca", "issuer/default/der"} {
req := &logical.Request{
Path: path,
Operation: logical.ReadOperation,
Storage: rootB.storage,
}
@@ -464,8 +475,8 @@ func runSteps(t *testing.T, rootB, intB *backend, client *api.Client, rootName,
if err != nil {
t.Fatal(err)
}
if resp == nil {
t.Fatal("nil response")
}
}
@@ -521,9 +532,16 @@ func runSteps(t *testing.T, rootB, intB *backend, client *api.Client, rootName,
}
// Fetch the CRL and make sure it shows up
for path, derPemOrJSON := range map[string]int{
"crl": 0,
"issuer/default/crl/der": 0,
"crl/pem": 1,
"issuer/default/crl/pem": 1,
"cert/crl": 2,
"issuer/default/crl": 3,
} {
req := &logical.Request{
Path: path,
Operation: logical.ReadOperation,
Storage: rootB.storage,
}
@@ -534,7 +552,25 @@ func runSteps(t *testing.T, rootB, intB *backend, client *api.Client, rootName,
if resp == nil {
t.Fatal("nil response")
}
var crlBytes []byte
if derPemOrJSON == 2 {
// Old endpoint
crlBytes = []byte(resp.Data["certificate"].(string))
} else if derPemOrJSON == 3 {
// New endpoint
crlBytes = []byte(resp.Data["crl"].(string))
} else {
// DER or PEM
crlBytes = resp.Data["http_raw_body"].([]byte)
}
if derPemOrJSON >= 1 {
// Do for both PEM and JSON endpoints
pemBlock, _ := pem.Decode(crlBytes)
crlBytes = pemBlock.Bytes
}
certList, err := x509.ParseCRL(crlBytes)
if err != nil {
t.Fatal(err)


@@ -2,8 +2,10 @@ package pki
import (
"context"
"crypto"
"crypto/ecdsa"
"crypto/rsa"
"errors"
"fmt"
"io"
"time"
@@ -15,18 +17,17 @@ import (
"github.com/hashicorp/vault/sdk/logical"
)
func (b *backend) getGenerationParams(ctx context.Context, storage logical.Storage, data *framework.FieldData, mountPoint string) (exported bool, format string, role *roleEntry, errorResp *logical.Response) {
exportedStr := data.Get("exported").(string)
switch exportedStr {
case "exported":
exported = true
case "internal":
case "existing":
case "kms":
default:
errorResp = logical.ErrorResponse(
`the "exported" path parameter must be "internal", "existing", "exported" or "kms"`)
return
}
@@ -36,47 +37,11 @@ func (b *backend) getGenerationParams(ctx context.Context,
`the "format" path parameter must be "pem", "der", or "pem_bundle"`)
return
}
mkc := newManagedKeyContext(ctx, b, mountPoint)
keyType, keyBits, err := getKeyTypeAndBitsForRole(mkc, storage, data)
if err != nil {
errorResp = logical.ErrorResponse(err.Error())
return
}
role = &roleEntry{
@@ -102,7 +67,6 @@ func (b *backend) getGenerationParams(ctx context.Context,
}
*role.AllowWildcardCertificates = true
if role.KeyBits, role.SignatureBits, err = certutil.ValidateDefaultOrValueKeyTypeSignatureLength(role.KeyType, role.KeyBits, role.SignatureBits); err != nil {
errorResp = logical.ErrorResponse(err.Error())
}
@@ -112,7 +76,33 @@ func (b *backend) getGenerationParams(ctx context.Context,
func generateCABundle(ctx context.Context, b *backend, input *inputBundle, data *certutil.CreationBundle, randomSource io.Reader) (*certutil.ParsedCertBundle, error) {
if kmsRequested(input) {
keyId, err := getManagedKeyId(input.apiData)
if err != nil {
return nil, err
}
return generateManagedKeyCABundle(ctx, b, input, keyId, data, randomSource)
}
if existingKeyRequested(input) {
keyRef, err := getKeyRefWithErr(input.apiData)
if err != nil {
return nil, err
}
keyEntry, err := getExistingKeyFromRef(ctx, input.req.Storage, keyRef)
if err != nil {
return nil, err
}
if keyEntry.isManagedPrivateKey() {
keyId, err := keyEntry.getManagedKeyUUID()
if err != nil {
return nil, err
}
return generateManagedKeyCABundle(ctx, b, input, keyId, data, randomSource)
}
return certutil.CreateCertificateWithKeyGenerator(data, randomSource, existingKeyGeneratorFromBytes(keyEntry))
}
return certutil.CreateCertificateWithRandomSource(data, randomSource)
@@ -120,7 +110,34 @@ func generateCABundle(ctx context.Context, b *backend, input *inputBundle, data
func generateCSRBundle(ctx context.Context, b *backend, input *inputBundle, data *certutil.CreationBundle, addBasicConstraints bool, randomSource io.Reader) (*certutil.ParsedCSRBundle, error) {
if kmsRequested(input) {
keyId, err := getManagedKeyId(input.apiData)
if err != nil {
return nil, err
}
return generateManagedKeyCSRBundle(ctx, b, input, keyId, data, addBasicConstraints, randomSource)
}
if existingKeyRequested(input) {
keyRef, err := getKeyRefWithErr(input.apiData)
if err != nil {
return nil, err
}
key, err := getExistingKeyFromRef(ctx, input.req.Storage, keyRef)
if err != nil {
return nil, err
}
if key.isManagedPrivateKey() {
keyId, err := key.getManagedKeyUUID()
if err != nil {
return nil, err
}
return generateManagedKeyCSRBundle(ctx, b, input, keyId, data, addBasicConstraints, randomSource)
}
return certutil.CreateCSRWithKeyGenerator(data, addBasicConstraints, randomSource, existingKeyGeneratorFromBytes(key))
}
return certutil.CreateCSRWithRandomSource(data, addBasicConstraints, randomSource)
@@ -132,3 +149,105 @@ func parseCABundle(ctx context.Context, b *backend, req *logical.Request, bundle
}
return bundle.ToParsedCertBundle()
}
func getKeyTypeAndBitsForRole(mkc managedKeyContext, storage logical.Storage, data *framework.FieldData) (string, int, error) {
exportedStr := data.Get("exported").(string)
var keyType string
var keyBits int
switch exportedStr {
case "internal":
fallthrough
case "exported":
keyType = data.Get("key_type").(string)
keyBits = data.Get("key_bits").(int)
return keyType, keyBits, nil
}
// existing and kms types don't support providing the key_type and key_bits args.
_, okKeyType := data.Raw["key_type"]
_, okKeyBits := data.Raw["key_bits"]
if okKeyType || okKeyBits {
return "", 0, errors.New("invalid parameter for the kms/existing path parameter, key_type nor key_bits arguments can be set in this mode")
}
var pubKey crypto.PublicKey
if kmsRequestedFromFieldData(data) {
keyId, err := getManagedKeyId(data)
if err != nil {
return "", 0, errors.New("unable to determine managed key id: " + err.Error())
}
pubKeyManagedKey, err := getManagedKeyPublicKey(mkc, keyId)
if err != nil {
return "", 0, errors.New("failed to lookup public key from managed key: " + err.Error())
}
pubKey = pubKeyManagedKey
}
if existingKeyRequestedFromFieldData(data) {
existingPubKey, err := getExistingPublicKey(mkc, storage, data)
if err != nil {
return "", 0, errors.New("failed to lookup public key from existing key: " + err.Error())
}
pubKey = existingPubKey
}
privateKeyType, keyBits, err := getKeyTypeAndBitsFromPublicKeyForRole(pubKey)
return string(privateKeyType), keyBits, err
}
func getExistingPublicKey(mkc managedKeyContext, s logical.Storage, data *framework.FieldData) (crypto.PublicKey, error) {
keyRef, err := getKeyRefWithErr(data)
if err != nil {
return nil, err
}
id, err := resolveKeyReference(mkc.ctx, s, keyRef)
if err != nil {
return nil, err
}
key, err := fetchKeyById(mkc.ctx, s, id)
if err != nil {
return nil, err
}
return getPublicKey(mkc, key)
}
func getKeyTypeAndBitsFromPublicKeyForRole(pubKey crypto.PublicKey) (certutil.PrivateKeyType, int, error) {
var keyType certutil.PrivateKeyType
var keyBits int
switch pubKey.(type) {
case *rsa.PublicKey:
keyType = certutil.RSAPrivateKey
keyBits = certutil.GetPublicKeySize(pubKey)
case *ecdsa.PublicKey:
keyType = certutil.ECPrivateKey
case *ed25519.PublicKey:
keyType = certutil.Ed25519PrivateKey
default:
return certutil.UnknownPrivateKey, 0, fmt.Errorf("unsupported public key: %#v", pubKey)
}
return keyType, keyBits, nil
}
func getExistingKeyFromRef(ctx context.Context, s logical.Storage, keyRef string) (*keyEntry, error) {
keyId, err := resolveKeyReference(ctx, s, keyRef)
if err != nil {
return nil, err
}
return fetchKeyById(ctx, s, keyId)
}
func existingKeyGeneratorFromBytes(key *keyEntry) certutil.KeyGenerator {
return func(_ string, _ int, container certutil.ParsedPrivateKeyContainer, _ io.Reader) error {
signer, _, pemBytes, err := getSignerFromKeyEntryBytes(key)
if err != nil {
return err
}
container.SetParsedPrivateKey(signer, key.PrivateKeyType, pemBytes.Bytes)
return nil
}
}
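Illustrative sketch (an editorial addition, not part of this changeset): inside the pki package, the helpers above infer a role's key parameters from an existing key's public half; for an EC key the inference behaves roughly as follows. The fragment and its imports are assumptions for illustration only.
// Hypothetical fragment; assumed imports: crypto/ecdsa, crypto/elliptic, crypto/rand.
priv, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
keyType, keyBits, err := getKeyTypeAndBitsFromPublicKeyForRole(priv.Public())
// keyType == certutil.ECPrivateKey and err == nil; keyBits stays 0 for EC keys in
// this helper and is only populated (via certutil.GetPublicKeySize) for RSA keys.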


@@ -64,19 +64,9 @@ var (
leftWildLabelRegex = regexp.MustCompile(`^(` + allWildRegex + `|` + startWildRegex + `|` + endWildRegex + `|` + middleWildRegex + `)$`)
// OIDs for X.509 certificate extensions used below.
oidExtensionBasicConstraints = []int{2, 5, 29, 19}
oidExtensionSubjectAltName = []int{2, 5, 29, 17}
)
func getFormat(data *framework.FieldData) string {
format := data.Get("format").(string)
switch format {
@@ -89,23 +79,29 @@ func getFormat(data *framework.FieldData) string {
return format
}
// fetchCAInfo will fetch the CA info and return an error if no CA info exists.
func fetchCAInfo(ctx context.Context, b *backend, req *logical.Request, issuerRef string, usage issuerUsage) (*certutil.CAInfoBundle, error) {
entry, bundle, err := fetchCertBundle(ctx, b, req.Storage, issuerRef)
if err != nil {
switch err.(type) {
case errutil.UserError:
return nil, err
case errutil.InternalError:
return nil, err
default:
return nil, errutil.InternalError{Err: fmt.Sprintf("error fetching CA info: %v", err)}
}
}
if err := entry.EnsureUsage(usage); err != nil {
return nil, errutil.InternalError{Err: fmt.Sprintf("error while attempting to use issuer %v: %v", issuerRef, err)}
}
if bundle == nil {
return nil, errutil.UserError{Err: "no CA information is present"}
}
parsedBundle, err := parseCABundle(ctx, b, req, bundle)
if err != nil {
return nil, errutil.InternalError{Err: err.Error()}
}
@@ -113,8 +109,15 @@ func fetchCAInfo(ctx context.Context, b *backend, req *logical.Request) (*certut
if parsedBundle.Certificate == nil {
return nil, errutil.InternalError{Err: "stored CA information not able to be parsed"}
}
if parsedBundle.PrivateKey == nil {
return nil, errutil.UserError{Err: fmt.Sprintf("unable to fetch corresponding key for issuer %v; unable to use this issuer for signing", issuerRef)}
}
caInfo := &certutil.CAInfoBundle{
ParsedCertBundle: *parsedBundle,
URLs: nil,
LeafNotAfterBehavior: entry.LeafNotAfterBehavior,
}
entries, err := getURLs(ctx, req)
if err != nil {
@@ -132,9 +135,33 @@ func fetchCAInfo(ctx context.Context, b *backend, req *logical.Request) (*certut
return caInfo, nil
}
// fetchCertBundle is our flex point: it loads the legacy CA bundle if the
// migration has not yet been performed, and otherwise loads the bundle from
// the new key/issuer storage. Any function that needs a bundle should load it
// through this method to maintain compatibility on secondary nodes whose
// primary has not yet upgraded.
// NOTE: This function can return a nil entry and nil bundle without an error.
func fetchCertBundle(ctx context.Context, b *backend, s logical.Storage, issuerRef string) (*issuerEntry, *certutil.CertBundle, error) {
if b.useLegacyBundleCaStorage() {
// We have not completed the migration so attempt to load the bundle from the legacy location
b.Logger().Info("Using legacy CA bundle as PKI migration has not completed.")
return getLegacyCertBundle(ctx, s)
}
id, err := resolveIssuerReference(ctx, s, issuerRef)
if err != nil {
// Usually a bad label from the user or misconfigured default.
return nil, nil, errutil.UserError{Err: err.Error()}
}
return fetchCertBundleByIssuerId(ctx, s, id, true)
}
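// Illustrative caller sketch (an editorial addition, not part of this diff):
// because fetchCertBundle can return a nil entry and nil bundle without an
// error, callers such as fetchCAInfo treat a nil bundle as "no CA information
// is present" rather than as a failure, roughly:
//
//	entry, bundle, err := fetchCertBundle(ctx, b, req.Storage, defaultRef)
//	if err != nil {
//		return nil, err
//	}
//	if bundle == nil {
//		return nil, errutil.UserError{Err: "no CA information is present"}
//	}
//	_ = entry // the entry carries issuer metadata such as LeafNotAfterBehavior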
// Allows fetching certificates from the backend; it handles the slightly
// separate pathing for CRL and revoked certificates.
//
// Support for fetching CA certificates was removed due to the new issuers
// changes.
func fetchCertBySerial(ctx context.Context, b *backend, req *logical.Request, prefix, serial string) (*logical.StorageEntry, error) {
var path, legacyPath string
var err error
var certEntry *logical.StorageEntry
@@ -143,15 +170,19 @@ func fetchCertBySerial(ctx context.Context, req *logical.Request, prefix, serial
colonSerial := strings.Replace(strings.ToLower(serial), "-", ":", -1)
switch {
// Revoked goes first as otherwise crl gets a hardcoded path which fails if
// we actually want revocation info
case strings.HasPrefix(prefix, "revoked/"):
legacyPath = "revoked/" + colonSerial
path = "revoked/" + hyphenSerial
case serial == legacyCRLPath:
if err = b.crlBuilder.rebuildIfForced(ctx, b, req); err != nil {
return nil, err
}
path, err = resolveIssuerCRLPath(ctx, b, req.Storage, defaultRef)
if err != nil {
return nil, err
}
default:
legacyPath = "certs/" + colonSerial
path = "certs/" + hyphenSerial
@@ -1250,13 +1281,22 @@ func generateCreationBundle(b *backend, data *inputBundle, caSign *certutil.CAIn
} else {
notAfter = time.Now().Add(ttl)
}
if caSign != nil && notAfter.After(caSign.Certificate.NotAfter) {
// If it's not self-signed, verify that the issued certificate
// won't be valid past the lifetime of the CA certificate, and
// act accordingly. This depends on the issuer's
// LeafNotAfterBehavior setting.
switch caSign.LeafNotAfterBehavior {
case certutil.PermitNotAfterBehavior:
// Explicitly do nothing.
case certutil.TruncateNotAfterBehavior:
notAfter = caSign.Certificate.NotAfter
case certutil.ErrNotAfterBehavior:
fallthrough
default:
return nil, errutil.UserError{Err: fmt.Sprintf(
"cannot satisfy request, as TTL would result in notAfter %s that is beyond the expiration of the CA certificate at %s", notAfter.Format(time.RFC3339Nano), caSign.Certificate.NotAfter.Format(time.RFC3339Nano))}
}
}
}
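For illustration (an editorial sketch, not part of this changeset): the branch above is driven by the per-issuer leaf_not_after_behavior setting exercised in TestIssuanceTTLs; with the Go API client, and assuming a mount named "pki" with an issuer named "root", enabling truncation looks roughly like:
_, err := client.Logical().Write("pki/issuer/root", map[string]interface{}{
"leaf_not_after_behavior": "truncate", // the tests above also exercise "err" and "permit"
})
// error handling omitted in this fragment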


@@ -12,7 +12,7 @@ import (
)
func TestPki_FetchCertBySerial(t *testing.T) {
b, storage := createBackendWithStorage(t)
cases := map[string]struct {
Req *logical.Request
@@ -46,7 +46,7 @@ func TestPki_FetchCertBySerial(t *testing.T) {
t.Fatalf("error writing to storage on %s colon-based storage path: %s", name, err)
}
certEntry, err := fetchCertBySerial(context.Background(), b, tc.Req, tc.Prefix, tc.Serial)
if err != nil {
t.Fatalf("error on %s for colon-based storage path: %s", name, err)
}
@@ -81,48 +81,11 @@ func TestPki_FetchCertBySerial(t *testing.T) {
t.Fatalf("error writing to storage on %s hyphen-based storage path: %s", name, err)
}
certEntry, err := fetchCertBySerial(context.Background(), b, tc.Req, tc.Prefix, tc.Serial)
if err != nil || certEntry == nil {
t.Fatalf("error on %s for hyphen-based storage path: err: %v, entry: %v", name, err, certEntry)
}
}
}
// Demonstrate that multiple OUs in the name are handled in an


@@ -0,0 +1,983 @@
package pki
import (
"crypto/x509"
"encoding/pem"
"fmt"
"strings"
"testing"
"github.com/hashicorp/vault/api"
vaulthttp "github.com/hashicorp/vault/http"
"github.com/hashicorp/vault/sdk/logical"
"github.com/hashicorp/vault/vault"
)
// For speed, all keys are ECDSA.
type CBGenerateKey struct {
Name string
}
func (c CBGenerateKey) Run(t *testing.T, client *api.Client, mount string, knownKeys map[string]string, knownCerts map[string]string) {
resp, err := client.Logical().Write(mount+"/keys/generate/exported", map[string]interface{}{
"name": c.Name,
"algo": "ec",
"bits": 256,
})
if err != nil {
t.Fatalf("failed to provision key (%v): %v", c.Name, err)
}
knownKeys[c.Name] = resp.Data["private"].(string)
}
// Generate a root.
type CBGenerateRoot struct {
Key string
Existing bool
Name string
CommonName string
ErrorMessage string
}
func (c CBGenerateRoot) Run(t *testing.T, client *api.Client, mount string, knownKeys map[string]string, knownCerts map[string]string) {
url := mount + "/issuers/generate/root/"
data := make(map[string]interface{})
if c.Existing {
url += "existing"
data["key_ref"] = c.Key
} else {
url += "exported"
data["key_type"] = "ec"
data["key_bits"] = 256
data["key_name"] = c.Key
}
data["issuer_name"] = c.Name
data["common_name"] = c.Name
if len(c.CommonName) > 0 {
data["common_name"] = c.CommonName
}
resp, err := client.Logical().Write(url, data)
if err != nil {
if len(c.ErrorMessage) > 0 {
if !strings.Contains(err.Error(), c.ErrorMessage) {
t.Fatalf("failed to generate root cert for issuer (%v): expected (%v) in error message but got %v", c.Name, c.ErrorMessage, err)
}
return
}
t.Fatalf("failed to provision issuer (%v): %v / body: %v", c.Name, err, data)
} else if len(c.ErrorMessage) > 0 {
t.Fatalf("expected to fail generation of issuer (%v) with error message containing (%v)", c.Name, c.ErrorMessage)
}
if !c.Existing {
knownKeys[c.Key] = resp.Data["private_key"].(string)
}
knownCerts[c.Name] = resp.Data["certificate"].(string)
}
// Generate an intermediate. Might not really be an intermediate; might be
// a cross-signed cert.
type CBGenerateIntermediate struct {
Key string
Existing bool
Name string
CommonName string
Parent string
ImportErrorMessage string
}
func (c CBGenerateIntermediate) Run(t *testing.T, client *api.Client, mount string, knownKeys map[string]string, knownCerts map[string]string) {
// Build CSR
url := mount + "/issuers/generate/intermediate/"
data := make(map[string]interface{})
if c.Existing {
url += "existing"
data["key_ref"] = c.Key
} else {
url += "exported"
data["key_type"] = "ec"
data["key_bits"] = 256
data["key_name"] = c.Key
}
resp, err := client.Logical().Write(url, data)
if err != nil {
t.Fatalf("failed to generate CSR for issuer (%v): %v / body: %v", c.Name, err, data)
}
if !c.Existing {
knownKeys[c.Key] = resp.Data["private_key"].(string)
}
csr := resp.Data["csr"].(string)
// Sign CSR
url = fmt.Sprintf(mount+"/issuer/%s/sign-intermediate", c.Parent)
data = make(map[string]interface{})
data["csr"] = csr
data["common_name"] = c.Name
if len(c.CommonName) > 0 {
data["common_name"] = c.CommonName
}
resp, err = client.Logical().Write(url, data)
if err != nil {
t.Fatalf("failed to sign CSR for issuer (%v): %v / body: %v", c.Name, err, data)
}
knownCerts[c.Name] = strings.TrimSpace(resp.Data["certificate"].(string))
// Set the signed intermediate
url = mount + "/intermediate/set-signed"
data = make(map[string]interface{})
data["certificate"] = knownCerts[c.Name]
data["issuer_name"] = c.Name
resp, err = client.Logical().Write(url, data)
if err != nil {
if len(c.ImportErrorMessage) > 0 {
if !strings.Contains(err.Error(), c.ImportErrorMessage) {
t.Fatalf("failed to import signed cert for issuer (%v): expected (%v) in error message but got %v", c.Name, c.ImportErrorMessage, err)
}
return
}
t.Fatalf("failed to import signed cert for issuer (%v): %v / body: %v", c.Name, err, data)
} else if len(c.ImportErrorMessage) > 0 {
t.Fatalf("expected to fail import (with error %v) of cert for issuer (%v) but was success: response: %v", c.ImportErrorMessage, c.Name, resp)
}
// Update the name since set-signed doesn't actually take an issuer name
// parameter.
rawNewCerts := resp.Data["imported_issuers"].([]interface{})
if len(rawNewCerts) != 1 {
t.Fatalf("Expected a single new certificate during import of signed cert for %v: got %v\nresp: %v", c.Name, len(rawNewCerts), resp)
}
newCertId := rawNewCerts[0].(string)
_, err = client.Logical().Write(mount+"/issuer/"+newCertId, map[string]interface{}{
"issuer_name": c.Name,
})
if err != nil {
t.Fatalf("failed to update name for issuer (%v/%v): %v", c.Name, newCertId, err)
}
}
// Delete an issuer; breaks chains.
type CBDeleteIssuer struct {
Issuer string
}
func (c CBDeleteIssuer) Run(t *testing.T, client *api.Client, mount string, knownKeys map[string]string, knownCerts map[string]string) {
url := fmt.Sprintf(mount+"/issuer/%v", c.Issuer)
_, err := client.Logical().Delete(url)
if err != nil {
t.Fatalf("failed to delete issuer (%v): %v", c.Issuer, err)
}
delete(knownCerts, c.Issuer)
}
// Validate the specified chain exists, by name.
type CBValidateChain struct {
Chains map[string][]string
Aliases map[string]string
}
func (c CBValidateChain) ChainToPEMs(t *testing.T, parent string, chain []string, knownCerts map[string]string) []string {
var result []string
for entryIndex, entry := range chain {
var chainEntry string
modifiedEntry := entry
if entryIndex == 0 && entry == "self" {
modifiedEntry = parent
}
for pattern, replacement := range c.Aliases {
modifiedEntry = strings.ReplaceAll(modifiedEntry, pattern, replacement)
}
for _, issuer := range strings.Split(modifiedEntry, ",") {
cert, ok := knownCerts[issuer]
if !ok {
t.Fatalf("Unknown issuer %v in chain for %v: %v", issuer, parent, chain)
}
chainEntry += cert
}
result = append(result, chainEntry)
}
return result
}
func (c CBValidateChain) FindNameForCert(t *testing.T, cert string, knownCerts map[string]string) string {
for issuer, known := range knownCerts {
if strings.TrimSpace(known) == strings.TrimSpace(cert) {
return issuer
}
}
t.Fatalf("Unable to find cert:\n[%v]\nin known map:\n%v\n", cert, knownCerts)
return ""
}
func (c CBValidateChain) PrettyChain(t *testing.T, chain []string, knownCerts map[string]string) []string {
var prettyChain []string
for _, cert := range chain {
prettyChain = append(prettyChain, c.FindNameForCert(t, cert, knownCerts))
}
return prettyChain
}
func (c CBValidateChain) ToCertificate(t *testing.T, cert string) *x509.Certificate {
block, _ := pem.Decode([]byte(cert))
if block == nil {
t.Fatalf("Unable to parse certificate: nil PEM block\n[%v]\n", cert)
}
ret, err := x509.ParseCertificate(block.Bytes)
if err != nil {
t.Fatalf("Unable to parse certificate: %v\n[%v]\n", err, cert)
}
return ret
}
func (c CBValidateChain) Run(t *testing.T, client *api.Client, mount string, knownKeys map[string]string, knownCerts map[string]string) {
for issuer, chain := range c.Chains {
resp, err := client.Logical().Read(mount + "/issuer/" + issuer)
if err != nil {
t.Fatalf("failed to get chain for issuer (%v): %v", issuer, err)
}
rawCurrentChain := resp.Data["ca_chain"].([]interface{})
var currentChain []string
for _, entry := range rawCurrentChain {
currentChain = append(currentChain, strings.TrimSpace(entry.(string)))
}
// Ensure the issuer cert is always first.
if currentChain[0] != knownCerts[issuer] {
pretty := c.FindNameForCert(t, currentChain[0], knownCerts)
t.Fatalf("expected certificate at index 0 to be self:\n[%v]\n[pretty: %v]\nis not the issuer's cert:\n[%v]\n[pretty: %v]", currentChain[0], pretty, knownCerts[issuer], issuer)
}
// Validate it against the expected chain.
expectedChain := c.ChainToPEMs(t, issuer, chain, knownCerts)
if len(currentChain) != len(expectedChain) {
prettyCurrentChain := c.PrettyChain(t, currentChain, knownCerts)
t.Fatalf("Lengths of chains for issuer %v mismatched: got %v vs expected %v:\n[%v]\n[pretty: %v]\n[%v]\n[pretty: %v]", issuer, len(currentChain), len(expectedChain), currentChain, prettyCurrentChain, expectedChain, chain)
}
for currentIndex, currentCert := range currentChain {
// Chains might be forked so we may not be able to strictly validate
// the chain against a single value. Instead, use strings.Contains
// to validate the current cert is in the list of allowed
// possibilities.
if !strings.Contains(expectedChain[currentIndex], currentCert) {
pretty := c.FindNameForCert(t, currentCert, knownCerts)
t.Fatalf("chain mismatch at index %v for issuer %v: got cert:\n[%v]\n[pretty: %v]\nbut expected one of\n[%v]\n[pretty: %v]\n", currentIndex, issuer, currentCert, pretty, expectedChain[currentIndex], chain[currentIndex])
}
}
// Due to alternate paths, the above doesn't ensure each cert
// in the chain is only used once. Validate that now.
for thisIndex, thisCert := range currentChain {
for otherIndex, otherCert := range currentChain[thisIndex+1:] {
if thisCert == otherCert {
thisPretty := c.FindNameForCert(t, thisCert, knownCerts)
otherPretty := c.FindNameForCert(t, otherCert, knownCerts)
otherIndex += thisIndex + 1
t.Fatalf("cert reused in chain for %v:\n[%v]\n[pretty: %v / index: %v]\n[%v]\n[pretty: %v / index: %v]\n", issuer, thisCert, thisPretty, thisIndex, otherCert, otherPretty, otherIndex)
}
}
}
// Finally, validate that each cert verifies something that came before
// it. In the linear chain sense, this should strictly mean that the
// parent comes before the child.
for thisIndex, thisCertPem := range currentChain[1:] {
thisIndex += 1 // Absolute index.
parentCert := c.ToCertificate(t, thisCertPem)
// Iterate backwards; prefer the most recent cert to the older
// certs.
foundCert := false
for otherIndex := thisIndex - 1; otherIndex >= 0; otherIndex-- {
otherCertPem := currentChain[otherIndex]
childCert := c.ToCertificate(t, otherCertPem)
if err := childCert.CheckSignatureFrom(parentCert); err == nil {
foundCert = true
}
}
if !foundCert {
pretty := c.FindNameForCert(t, thisCertPem, knownCerts)
t.Fatalf("malformed test scenario: certificate at chain index %v when validating %v does not validate any previous certificates:\n[%v]\n[pretty: %v]\n", thisIndex, issuer, thisCertPem, pretty)
}
}
}
}
// Update an issuer
type CBUpdateIssuer struct {
Name string
CAChain []string
}
func (c CBUpdateIssuer) Run(t *testing.T, client *api.Client, mount string, knownKeys map[string]string, knownCerts map[string]string) {
url := mount + "/issuer/" + c.Name
data := make(map[string]interface{})
data["issuer_name"] = c.Name
data["manual_chain"] = c.CAChain
_, err := client.Logical().Write(url, data)
if err != nil {
t.Fatalf("failed to update issuer (%v): %v / body: %v", c.Name, err, data)
}
}
type CBTestStep interface {
Run(t *testing.T, client *api.Client, mount string, knownKeys map[string]string, knownCerts map[string]string)
}
type CBTestScenario struct {
Steps []CBTestStep
}
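// An illustrative runner sketch (an editorial addition; the actual loop lives
// later in this file and may differ): each scenario gets its own fresh mount,
// and the shared maps let later steps reference keys and certs created by
// earlier steps.
//
//	for testIndex, testCase := range testCases {
//		mount := fmt.Sprintf("pki-test-%d", testIndex)
//		mountPKIEndpoint(t, client, mount)
//		knownKeys := make(map[string]string)
//		knownCerts := make(map[string]string)
//		for _, step := range testCase.Steps {
//			step.Run(t, client, mount, knownKeys, knownCerts)
//		}
//	}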
func Test_CAChainBuilding(t *testing.T) {
coreConfig := &vault.CoreConfig{
LogicalBackends: map[string]logical.Factory{
"pki": Factory,
},
}
cluster := vault.NewTestCluster(t, coreConfig, &vault.TestClusterOptions{
HandlerFunc: vaulthttp.Handler,
})
cluster.Start()
defer cluster.Cleanup()
client := cluster.Cores[0].Client
testCases := []CBTestScenario{
{
// This test builds up two cliques linked by a cycle, dropping into
// a single intermediate.
Steps: []CBTestStep{
// Create a reissued certificate using the same key. These
// should validate themselves.
CBGenerateRoot{
Key: "key-root-old",
Name: "root-old-a",
CommonName: "root-old",
},
CBValidateChain{
Chains: map[string][]string{
"root-old-a": {"self"},
},
},
// After adding the second root using the same key and common
// name, there should now be two certs in each chain.
CBGenerateRoot{
Key: "key-root-old",
Existing: true,
Name: "root-old-b",
CommonName: "root-old",
},
CBValidateChain{
Chains: map[string][]string{
"root-old-a": {"self", "root-old-b"},
"root-old-b": {"self", "root-old-a"},
},
},
// After adding a third root, there are now two possibilities for
// each later chain entry.
CBGenerateRoot{
Key: "key-root-old",
Existing: true,
Name: "root-old-c",
CommonName: "root-old",
},
CBValidateChain{
Chains: map[string][]string{
"root-old-a": {"self", "root-old-bc", "root-old-bc"},
"root-old-b": {"self", "root-old-ac", "root-old-ac"},
"root-old-c": {"self", "root-old-ab", "root-old-ab"},
},
Aliases: map[string]string{
"root-old-ac": "root-old-a,root-old-c",
"root-old-ab": "root-old-a,root-old-b",
"root-old-bc": "root-old-b,root-old-c",
},
},
// If we generate an unrelated issuer, it shouldn't affect either
// chain.
CBGenerateRoot{
Key: "key-root-new",
Name: "root-new-a",
CommonName: "root-new",
},
CBValidateChain{
Chains: map[string][]string{
"root-old-a": {"self", "root-old-bc", "root-old-bc"},
"root-old-b": {"self", "root-old-ac", "root-old-ac"},
"root-old-c": {"self", "root-old-ab", "root-old-ab"},
"root-new-a": {"self"},
},
Aliases: map[string]string{
"root-old-ac": "root-old-a,root-old-c",
"root-old-ab": "root-old-a,root-old-b",
"root-old-bc": "root-old-b,root-old-c",
},
},
// Reissuing this new root should form another clique.
CBGenerateRoot{
Key: "key-root-new",
Existing: true,
Name: "root-new-b",
CommonName: "root-new",
},
CBValidateChain{
Chains: map[string][]string{
"root-old-a": {"self", "root-old-bc", "root-old-bc"},
"root-old-b": {"self", "root-old-ac", "root-old-ac"},
"root-old-c": {"self", "root-old-ab", "root-old-ab"},
"root-new-a": {"self", "root-new-b"},
"root-new-b": {"self", "root-new-a"},
},
Aliases: map[string]string{
"root-old-ac": "root-old-a,root-old-c",
"root-old-ab": "root-old-a,root-old-b",
"root-old-bc": "root-old-b,root-old-c",
},
},
// Generating a cross-signed cert from old->new should result
// in all old clique certs showing up in the new root's paths.
// This does not form a cycle.
CBGenerateIntermediate{
// In order to validate the existing root-new clique, we
// have to reuse the key and common name here for
// cross-signing.
Key: "key-root-new",
Existing: true,
Name: "cross-old-new",
CommonName: "root-new",
// Which old issuer is used here doesn't matter as they have
// the same CN and key.
Parent: "root-old-a",
},
CBValidateChain{
Chains: map[string][]string{
"root-old-a": {"self", "root-old-bc", "root-old-bc"},
"root-old-b": {"self", "root-old-ac", "root-old-ac"},
"root-old-c": {"self", "root-old-ab", "root-old-ab"},
"cross-old-new": {"self", "root-old-abc", "root-old-abc", "root-old-abc"},
"root-new-a": {"self", "root-new-b", "cross-old-new", "root-old-abc", "root-old-abc", "root-old-abc"},
"root-new-b": {"self", "root-new-a", "cross-old-new", "root-old-abc", "root-old-abc", "root-old-abc"},
},
Aliases: map[string]string{
"root-old-ac": "root-old-a,root-old-c",
"root-old-ab": "root-old-a,root-old-b",
"root-old-bc": "root-old-b,root-old-c",
"root-old-abc": "root-old-a,root-old-b,root-old-c",
},
},
// If we create a new intermediate off of the root-new, we should
// simply add to the existing chain.
CBGenerateIntermediate{
Key: "key-inter-a-root-new",
Name: "inter-a-root-new",
Parent: "root-new-a",
},
CBValidateChain{
Chains: map[string][]string{
"root-old-a": {"self", "root-old-bc", "root-old-bc"},
"root-old-b": {"self", "root-old-ac", "root-old-ac"},
"root-old-c": {"self", "root-old-ab", "root-old-ab"},
"cross-old-new": {"self", "root-old-abc", "root-old-abc", "root-old-abc"},
"root-new-a": {"self", "root-new-b", "cross-old-new", "root-old-abc", "root-old-abc", "root-old-abc"},
"root-new-b": {"self", "root-new-a", "cross-old-new", "root-old-abc", "root-old-abc", "root-old-abc"},
// If we find cross-old-new first, the old clique will be ahead
// of the new clique; otherwise the new clique will appear first.
"inter-a-root-new": {"self", "full-cycle", "full-cycle", "full-cycle", "full-cycle", "full-cycle", "full-cycle"},
},
Aliases: map[string]string{
"root-old-ac": "root-old-a,root-old-c",
"root-old-ab": "root-old-a,root-old-b",
"root-old-bc": "root-old-b,root-old-c",
"root-old-abc": "root-old-a,root-old-b,root-old-c",
"full-cycle": "root-old-a,root-old-b,root-old-c,cross-old-new,root-new-a,root-new-b",
},
},
// Now, if we cross-sign back from new to old, we should
// form cycle with multiple reissued cliques. This means
// all nodes will have the same chain.
CBGenerateIntermediate{
// In order to validate the existing root-old clique, we
// have to reuse the key and common name here for
// cross-signing.
Key: "key-root-old",
Existing: true,
Name: "cross-new-old",
CommonName: "root-old",
// Which new issuer is used here doesn't matter as they have
// the same CN and key.
Parent: "root-new-a",
},
CBValidateChain{
Chains: map[string][]string{
"root-old-a": {"self", "root-old-bc", "root-old-bc", "both-cross-old-new", "both-cross-old-new", "root-new-ab", "root-new-ab"},
"root-old-b": {"self", "root-old-ac", "root-old-ac", "both-cross-old-new", "both-cross-old-new", "root-new-ab", "root-new-ab"},
"root-old-c": {"self", "root-old-ab", "root-old-ab", "both-cross-old-new", "both-cross-old-new", "root-new-ab", "root-new-ab"},
"cross-old-new": {"self", "cross-new-old", "both-cliques", "both-cliques", "both-cliques", "both-cliques", "both-cliques"},
"cross-new-old": {"self", "cross-old-new", "both-cliques", "both-cliques", "both-cliques", "both-cliques", "both-cliques"},
"root-new-a": {"self", "root-new-b", "both-cross-old-new", "both-cross-old-new", "root-old-abc", "root-old-abc", "root-old-abc"},
"root-new-b": {"self", "root-new-a", "both-cross-old-new", "both-cross-old-new", "root-old-abc", "root-old-abc", "root-old-abc"},
"inter-a-root-new": {"self", "full-cycle", "full-cycle", "full-cycle", "full-cycle", "full-cycle", "full-cycle", "full-cycle"},
},
Aliases: map[string]string{
"root-old-ac": "root-old-a,root-old-c",
"root-old-ab": "root-old-a,root-old-b",
"root-old-bc": "root-old-b,root-old-c",
"root-old-abc": "root-old-a,root-old-b,root-old-c",
"root-new-ab": "root-new-a,root-new-b",
"both-cross-old-new": "cross-old-new,cross-new-old",
"both-cliques": "root-old-a,root-old-b,root-old-c,root-new-a,root-new-b",
"full-cycle": "root-old-a,root-old-b,root-old-c,cross-old-new,cross-new-old,root-new-a,root-new-b",
},
},
// Update each old root to only include itself.
CBUpdateIssuer{
Name: "root-old-a",
CAChain: []string{"root-old-a"},
},
CBUpdateIssuer{
Name: "root-old-b",
CAChain: []string{"root-old-b"},
},
CBUpdateIssuer{
Name: "root-old-c",
CAChain: []string{"root-old-c"},
},
// Step 19
CBValidateChain{
Chains: map[string][]string{
"root-old-a": {"self"},
"root-old-b": {"self"},
"root-old-c": {"self"},
"cross-old-new": {"self", "cross-new-old", "both-cliques", "both-cliques", "both-cliques", "both-cliques", "both-cliques"},
"cross-new-old": {"self", "cross-old-new", "both-cliques", "both-cliques", "both-cliques", "both-cliques", "both-cliques"},
"root-new-a": {"self", "root-new-b", "both-cross-old-new", "both-cross-old-new", "root-old-abc", "root-old-abc", "root-old-abc"},
"root-new-b": {"self", "root-new-a", "both-cross-old-new", "both-cross-old-new", "root-old-abc", "root-old-abc", "root-old-abc"},
"inter-a-root-new": {"self", "full-cycle", "full-cycle", "full-cycle", "full-cycle", "full-cycle", "full-cycle", "full-cycle"},
},
Aliases: map[string]string{
"root-old-ac": "root-old-a,root-old-c",
"root-old-ab": "root-old-a,root-old-b",
"root-old-bc": "root-old-b,root-old-c",
"root-old-abc": "root-old-a,root-old-b,root-old-c",
"root-new-ab": "root-new-a,root-new-b",
"both-cross-old-new": "cross-old-new,cross-new-old",
"both-cliques": "root-old-a,root-old-b,root-old-c,root-new-a,root-new-b",
"full-cycle": "root-old-a,root-old-b,root-old-c,cross-old-new,cross-new-old,root-new-a,root-new-b",
},
},
// Reset the old roots; should get the original chains back.
CBUpdateIssuer{
Name: "root-old-a",
},
CBUpdateIssuer{
Name: "root-old-b",
},
CBUpdateIssuer{
Name: "root-old-c",
},
CBValidateChain{
Chains: map[string][]string{
"root-old-a": {"self", "root-old-bc", "root-old-bc", "both-cross-old-new", "both-cross-old-new", "root-new-ab", "root-new-ab"},
"root-old-b": {"self", "root-old-ac", "root-old-ac", "both-cross-old-new", "both-cross-old-new", "root-new-ab", "root-new-ab"},
"root-old-c": {"self", "root-old-ab", "root-old-ab", "both-cross-old-new", "both-cross-old-new", "root-new-ab", "root-new-ab"},
"cross-old-new": {"self", "cross-new-old", "both-cliques", "both-cliques", "both-cliques", "both-cliques", "both-cliques"},
"cross-new-old": {"self", "cross-old-new", "both-cliques", "both-cliques", "both-cliques", "both-cliques", "both-cliques"},
"root-new-a": {"self", "root-new-b", "both-cross-old-new", "both-cross-old-new", "root-old-abc", "root-old-abc", "root-old-abc"},
"root-new-b": {"self", "root-new-a", "both-cross-old-new", "both-cross-old-new", "root-old-abc", "root-old-abc", "root-old-abc"},
"inter-a-root-new": {"self", "full-cycle", "full-cycle", "full-cycle", "full-cycle", "full-cycle", "full-cycle", "full-cycle"},
},
Aliases: map[string]string{
"root-old-ac": "root-old-a,root-old-c",
"root-old-ab": "root-old-a,root-old-b",
"root-old-bc": "root-old-b,root-old-c",
"root-old-abc": "root-old-a,root-old-b,root-old-c",
"root-new-ab": "root-new-a,root-new-b",
"both-cross-old-new": "cross-old-new,cross-new-old",
"both-cliques": "root-old-a,root-old-b,root-old-c,root-new-a,root-new-b",
"full-cycle": "root-old-a,root-old-b,root-old-c,cross-old-new,cross-new-old,root-new-a,root-new-b",
},
},
},
},
{
// Here we're testing our chain capacity. First we'll create a
// bunch of unique roots to form a cycle of length 10.
Steps: []CBTestStep{
CBGenerateRoot{
Key: "key-root-a",
Name: "root-a",
CommonName: "root-a",
},
CBGenerateRoot{
Key: "key-root-b",
Name: "root-b",
CommonName: "root-b",
},
CBGenerateRoot{
Key: "key-root-c",
Name: "root-c",
CommonName: "root-c",
},
CBGenerateRoot{
Key: "key-root-d",
Name: "root-d",
CommonName: "root-d",
},
CBGenerateRoot{
Key: "key-root-e",
Name: "root-e",
CommonName: "root-e",
},
// They should all be disjoint to start.
CBValidateChain{
Chains: map[string][]string{
"root-a": {"self"},
"root-b": {"self"},
"root-c": {"self"},
"root-d": {"self"},
"root-e": {"self"},
},
},
// Start the cross-signing chains. These are all linear, so there's
// no error expected; they're just long.
CBGenerateIntermediate{
Key: "key-root-b",
Existing: true,
Name: "cross-a-b",
CommonName: "root-b",
Parent: "root-a",
},
CBValidateChain{
Chains: map[string][]string{
"root-a": {"self"},
"cross-a-b": {"self", "root-a"},
"root-b": {"self", "cross-a-b", "root-a"},
"root-c": {"self"},
"root-d": {"self"},
"root-e": {"self"},
},
},
CBGenerateIntermediate{
Key: "key-root-c",
Existing: true,
Name: "cross-b-c",
CommonName: "root-c",
Parent: "root-b",
},
CBValidateChain{
Chains: map[string][]string{
"root-a": {"self"},
"cross-a-b": {"self", "root-a"},
"root-b": {"self", "cross-a-b", "root-a"},
"cross-b-c": {"self", "b-or-cross", "b-chained-cross", "b-chained-cross"},
"root-c": {"self", "cross-b-c", "b-or-cross", "b-chained-cross", "b-chained-cross"},
"root-d": {"self"},
"root-e": {"self"},
},
Aliases: map[string]string{
"b-or-cross": "root-b,cross-a-b",
"b-chained-cross": "root-b,cross-a-b,root-a",
},
},
CBGenerateIntermediate{
Key: "key-root-d",
Existing: true,
Name: "cross-c-d",
CommonName: "root-d",
Parent: "root-c",
},
CBValidateChain{
Chains: map[string][]string{
"root-a": {"self"},
"cross-a-b": {"self", "root-a"},
"root-b": {"self", "cross-a-b", "root-a"},
"cross-b-c": {"self", "b-or-cross", "b-chained-cross", "b-chained-cross"},
"root-c": {"self", "cross-b-c", "b-or-cross", "b-chained-cross", "b-chained-cross"},
"cross-c-d": {"self", "c-or-cross", "c-chained-cross", "c-chained-cross", "c-chained-cross", "c-chained-cross"},
"root-d": {"self", "cross-c-d", "c-or-cross", "c-chained-cross", "c-chained-cross", "c-chained-cross", "c-chained-cross"},
"root-e": {"self"},
},
Aliases: map[string]string{
"b-or-cross": "root-b,cross-a-b",
"b-chained-cross": "root-b,cross-a-b,root-a",
"c-or-cross": "root-c,cross-b-c",
"c-chained-cross": "root-c,cross-b-c,root-b,cross-a-b,root-a",
},
},
CBGenerateIntermediate{
Key: "key-root-e",
Existing: true,
Name: "cross-d-e",
CommonName: "root-e",
Parent: "root-d",
},
CBValidateChain{
Chains: map[string][]string{
"root-a": {"self"},
"cross-a-b": {"self", "root-a"},
"root-b": {"self", "cross-a-b", "root-a"},
"cross-b-c": {"self", "b-or-cross", "b-chained-cross", "b-chained-cross"},
"root-c": {"self", "cross-b-c", "b-or-cross", "b-chained-cross", "b-chained-cross"},
"cross-c-d": {"self", "c-or-cross", "c-chained-cross", "c-chained-cross", "c-chained-cross", "c-chained-cross"},
"root-d": {"self", "cross-c-d", "c-or-cross", "c-chained-cross", "c-chained-cross", "c-chained-cross", "c-chained-cross"},
"cross-d-e": {"self", "d-or-cross", "d-chained-cross", "d-chained-cross", "d-chained-cross", "d-chained-cross", "d-chained-cross", "d-chained-cross"},
"root-e": {"self", "cross-d-e", "d-or-cross", "d-chained-cross", "d-chained-cross", "d-chained-cross", "d-chained-cross", "d-chained-cross", "d-chained-cross"},
},
Aliases: map[string]string{
"b-or-cross": "root-b,cross-a-b",
"b-chained-cross": "root-b,cross-a-b,root-a",
"c-or-cross": "root-c,cross-b-c",
"c-chained-cross": "root-c,cross-b-c,root-b,cross-a-b,root-a",
"d-or-cross": "root-d,cross-c-d",
"d-chained-cross": "root-d,cross-c-d,root-c,cross-b-c,root-b,cross-a-b,root-a",
},
},
// Importing the new e->a cross fails because the cycle
// it builds is too long.
CBGenerateIntermediate{
Key: "key-root-a",
Existing: true,
Name: "cross-e-a",
CommonName: "root-a",
Parent: "root-e",
ImportErrorMessage: "exceeds max size",
},
// Deleting any root and one of its crosses (either a->b or b->c)
// should fix this.
CBDeleteIssuer{"root-b"},
CBDeleteIssuer{"cross-b-c"},
// With the shorter cycle, importing the new e->a cross should
// now succeed.
CBGenerateIntermediate{
Key: "key-root-a",
Existing: true,
Name: "cross-e-a",
CommonName: "root-a",
Parent: "root-e",
},
},
},
{
// Here we're testing our clique capacity. We reissue the same
// root (same key and subject) enough times to hit the limit on
// clique size.
Steps: []CBTestStep{
CBGenerateRoot{
Key: "key-root",
Name: "root-a",
CommonName: "root",
},
CBGenerateRoot{
Key: "key-root",
Existing: true,
Name: "root-b",
CommonName: "root",
},
CBGenerateRoot{
Key: "key-root",
Existing: true,
Name: "root-c",
CommonName: "root",
},
CBGenerateRoot{
Key: "key-root",
Existing: true,
Name: "root-d",
CommonName: "root",
},
CBGenerateRoot{
Key: "key-root",
Existing: true,
Name: "root-e",
CommonName: "root",
},
CBGenerateRoot{
Key: "key-root",
Existing: true,
Name: "root-f",
CommonName: "root",
},
// Seventh reissuance fails.
CBGenerateRoot{
Key: "key-root",
Existing: true,
Name: "root-g",
CommonName: "root",
ErrorMessage: "excessively reissued certificate",
},
// Deleting one and trying again should succeed.
CBDeleteIssuer{"root-a"},
CBGenerateRoot{
Key: "key-root",
Existing: true,
Name: "root-g",
CommonName: "root",
},
},
},
{
// There's one more pathological case here: a cycle which validates
// a clique/cycle via cross-signing. We call the parent cycle the new
// roots and the child cycle/clique the old roots.
Steps: []CBTestStep{
// New Cycle
CBGenerateRoot{
Key: "key-root-new-a",
Name: "root-new-a",
},
CBGenerateRoot{
Key: "key-root-new-b",
Name: "root-new-b",
},
CBGenerateIntermediate{
Key: "key-root-new-b",
Existing: true,
Name: "cross-root-new-b-sig-a",
CommonName: "root-new-b",
Parent: "root-new-a",
},
CBGenerateIntermediate{
Key: "key-root-new-a",
Existing: true,
Name: "cross-root-new-a-sig-b",
CommonName: "root-new-a",
Parent: "root-new-b",
},
// Old Cycle + Clique
CBGenerateRoot{
Key: "key-root-old-a",
Name: "root-old-a",
},
CBGenerateRoot{
Key: "key-root-old-a",
Existing: true,
Name: "root-old-a-reissued",
CommonName: "root-old-a",
},
CBGenerateRoot{
Key: "key-root-old-b",
Name: "root-old-b",
},
CBGenerateRoot{
Key: "key-root-old-b",
Existing: true,
Name: "root-old-b-reissued",
CommonName: "root-old-b",
},
CBGenerateIntermediate{
Key: "key-root-old-b",
Existing: true,
Name: "cross-root-old-b-sig-a",
CommonName: "root-old-b",
Parent: "root-old-a",
},
CBGenerateIntermediate{
Key: "key-root-old-a",
Existing: true,
Name: "cross-root-old-a-sig-b",
CommonName: "root-old-a",
Parent: "root-old-b",
},
// Validate the chains are separate before linking them.
CBValidateChain{
Chains: map[string][]string{
// New stuff
"root-new-a": {"self", "cross-root-new-a-sig-b", "root-new-b-or-cross", "root-new-b-or-cross"},
"root-new-b": {"self", "cross-root-new-b-sig-a", "root-new-a-or-cross", "root-new-a-or-cross"},
"cross-root-new-b-sig-a": {"self", "any-root-new", "any-root-new", "any-root-new"},
"cross-root-new-a-sig-b": {"self", "any-root-new", "any-root-new", "any-root-new"},
// Old stuff
"root-old-a": {"self", "root-old-a-reissued", "cross-root-old-a-sig-b", "cross-root-old-b-sig-a", "both-root-old-b", "both-root-old-b"},
"root-old-a-reissued": {"self", "root-old-a", "cross-root-old-a-sig-b", "cross-root-old-b-sig-a", "both-root-old-b", "both-root-old-b"},
"root-old-b": {"self", "root-old-b-reissued", "cross-root-old-b-sig-a", "cross-root-old-a-sig-b", "both-root-old-a", "both-root-old-a"},
"root-old-b-reissued": {"self", "root-old-b", "cross-root-old-b-sig-a", "cross-root-old-a-sig-b", "both-root-old-a", "both-root-old-a"},
"cross-root-old-b-sig-a": {"self", "all-root-old", "all-root-old", "all-root-old", "all-root-old", "all-root-old"},
"cross-root-old-a-sig-b": {"self", "all-root-old", "all-root-old", "all-root-old", "all-root-old", "all-root-old"},
},
Aliases: map[string]string{
"root-new-a-or-cross": "root-new-a,cross-root-new-a-sig-b",
"root-new-b-or-cross": "root-new-b,cross-root-new-b-sig-a",
"both-root-new": "root-new-a,root-new-b",
"any-root-new": "root-new-a,cross-root-new-a-sig-b,root-new-b,cross-root-new-b-sig-a",
"both-root-old-a": "root-old-a,root-old-a-reissued",
"both-root-old-b": "root-old-b,root-old-b-reissued",
"all-root-old": "root-old-a,root-old-a-reissued,root-old-b,root-old-b-reissued,cross-root-old-b-sig-a,cross-root-old-a-sig-b",
},
},
// Finally, generate an intermediate to link new->old. We
// link root-new-a into root-old-a.
CBGenerateIntermediate{
Key: "key-root-old-a",
Existing: true,
Name: "cross-root-old-a-sig-root-new-a",
CommonName: "root-old-a",
Parent: "root-new-a",
},
CBValidateChain{
Chains: map[string][]string{
// New stuff should be unchanged.
"root-new-a": {"self", "cross-root-new-a-sig-b", "root-new-b-or-cross", "root-new-b-or-cross"},
"root-new-b": {"self", "cross-root-new-b-sig-a", "root-new-a-or-cross", "root-new-a-or-cross"},
"cross-root-new-b-sig-a": {"self", "any-root-new", "any-root-new", "any-root-new"},
"cross-root-new-a-sig-b": {"self", "any-root-new", "any-root-new", "any-root-new"},
// Old stuff
"root-old-a": {"self", "root-old-a-reissued", "cross-root-old-a-sig-b", "cross-root-old-b-sig-a", "both-root-old-b", "both-root-old-b", "cross-root-old-a-sig-root-new-a", "any-root-new", "any-root-new", "any-root-new", "any-root-new"},
"root-old-a-reissued": {"self", "root-old-a", "cross-root-old-a-sig-b", "cross-root-old-b-sig-a", "both-root-old-b", "both-root-old-b", "cross-root-old-a-sig-root-new-a", "any-root-new", "any-root-new", "any-root-new", "any-root-new"},
"root-old-b": {"self", "root-old-b-reissued", "cross-root-old-b-sig-a", "cross-root-old-a-sig-b", "both-root-old-a", "both-root-old-a", "cross-root-old-a-sig-root-new-a", "any-root-new", "any-root-new", "any-root-new", "any-root-new"},
"root-old-b-reissued": {"self", "root-old-b", "cross-root-old-b-sig-a", "cross-root-old-a-sig-b", "both-root-old-a", "both-root-old-a", "cross-root-old-a-sig-root-new-a", "any-root-new", "any-root-new", "any-root-new", "any-root-new"},
"cross-root-old-b-sig-a": {"self", "all-root-old", "all-root-old", "all-root-old", "all-root-old", "all-root-old", "cross-root-old-a-sig-root-new-a", "any-root-new", "any-root-new", "any-root-new", "any-root-new"},
"cross-root-old-a-sig-b": {"self", "all-root-old", "all-root-old", "all-root-old", "all-root-old", "all-root-old", "cross-root-old-a-sig-root-new-a", "any-root-new", "any-root-new", "any-root-new", "any-root-new"},
// Link
"cross-root-old-a-sig-root-new-a": {"self", "root-new-a-or-cross", "any-root-new", "any-root-new", "any-root-new"},
},
Aliases: map[string]string{
"root-new-a-or-cross": "root-new-a,cross-root-new-a-sig-b",
"root-new-b-or-cross": "root-new-b,cross-root-new-b-sig-a",
"both-root-new": "root-new-a,root-new-b",
"any-root-new": "root-new-a,cross-root-new-a-sig-b,root-new-b,cross-root-new-b-sig-a",
"both-root-old-a": "root-old-a,root-old-a-reissued",
"both-root-old-b": "root-old-b,root-old-b-reissued",
"all-root-old": "root-old-a,root-old-a-reissued,root-old-b,root-old-b-reissued,cross-root-old-b-sig-a,cross-root-old-a-sig-b",
},
},
},
},
}
for testIndex, testCase := range testCases {
mount := fmt.Sprintf("pki-test-%v", testIndex)
mountPKIEndpoint(t, client, mount)
knownKeys := make(map[string]string)
knownCerts := make(map[string]string)
for stepIndex, testStep := range testCase.Steps {
t.Logf("Running %v / %v", testIndex, stepIndex)
testStep.Run(t, client, mount, knownKeys, knownCerts)
}
}
}
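
For readers skimming the harness above: each step type (CBGenerateRoot, CBGenerateIntermediate, CBDeleteIssuer, CBValidateChain) presumably satisfies a small step interface matching the Run call in the loop, and CBValidateChain treats each expected chain position as either a literal issuer name, the string "self", or an alias expanding to the set of names acceptable at that position (equivalent certificates may appear in any relative order). A minimal sketch of those assumptions follows; the real definitions live elsewhere in the chain-building test file, and expandExpected is purely illustrative.

// Sketch only; assumes "strings", "testing" and the Vault "api" client imports.
type CBTestStep interface {
	Run(t *testing.T, client *api.Client, mount string, knownKeys map[string]string, knownCerts map[string]string)
}

// expandExpected is a hypothetical helper showing how an alias such as
// "both-cliques" resolves to the set of issuer names allowed at one position.
func expandExpected(position string, aliases map[string]string) map[string]bool {
	names := position
	if expansion, ok := aliases[position]; ok {
		names = expansion
	}
	accepted := make(map[string]bool)
	for _, name := range strings.Split(names, ",") {
		accepted[name] = true
	}
	return accepted
}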

File diff suppressed because it is too large


@@ -0,0 +1,56 @@
package pki
import (
"context"
"strings"
"github.com/hashicorp/vault/sdk/logical"
)
func isDefaultKeySet(ctx context.Context, s logical.Storage) (bool, error) {
config, err := getKeysConfig(ctx, s)
if err != nil {
return false, err
}
return strings.TrimSpace(config.DefaultKeyId.String()) != "", nil
}
func isDefaultIssuerSet(ctx context.Context, s logical.Storage) (bool, error) {
config, err := getIssuersConfig(ctx, s)
if err != nil {
return false, err
}
return strings.TrimSpace(config.DefaultIssuerId.String()) != "", nil
}
func updateDefaultKeyId(ctx context.Context, s logical.Storage, id keyID) error {
config, err := getKeysConfig(ctx, s)
if err != nil {
return err
}
if config.DefaultKeyId != id {
return setKeysConfig(ctx, s, &keyConfigEntry{
DefaultKeyId: id,
})
}
return nil
}
func updateDefaultIssuerId(ctx context.Context, s logical.Storage, id issuerID) error {
config, err := getIssuersConfig(ctx, s)
if err != nil {
return err
}
if config.DefaultIssuerId != id {
return setIssuersConfig(ctx, s, &issuerConfigEntry{
DefaultIssuerId: id,
})
}
return nil
}
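
A plausible call pattern for these helpers, for example from import or migration code that should only claim the default slot when nothing is set yet. This is a sketch under that assumption; ctx, s, and the freshly written issuer entry come from the surrounding caller.

// Sketch: mark the imported issuer as default only if no default exists yet.
haveDefault, err := isDefaultIssuerSet(ctx, s)
if err != nil {
	return err
}
if !haveDefault {
	if err := updateDefaultIssuerId(ctx, s, issuer.ID); err != nil {
		return err
	}
}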


@@ -128,12 +128,58 @@ func TestBackend_CRL_EnableDisable(t *testing.T) {
require.NotEqual(t, crlCreationTime1, crlCreationTime2)
}
func getCrlCertificateList(t *testing.T, client *api.Client) pkix.TBSCertificateList {
resp, err := client.Logical().ReadWithContext(context.Background(), "pki/cert/crl")
func TestBackend_Secondary_CRL_Rebuilding(t *testing.T) {
ctx := context.Background()
b, s := createBackendWithStorage(t)
mkc := newManagedKeyContext(ctx, b, "test")
// Write out the issuer/key to storage without going through the api call as replication would.
bundle := genCertBundle(t, b, s)
issuer, _, err := writeCaBundle(mkc, s, bundle, "", "")
require.NoError(t, err)
// Just to validate, before we call the invalidate function, make sure our CRL has not been generated
// and we get a nil response
resp := requestCrlFromBackend(t, s, b)
require.Nil(t, resp.Data["http_raw_body"])
// This should force any calls from now on, even a read, to rebuild our CRL
b.invalidate(ctx, issuerPrefix+issuer.ID.String())
// Perform the read operation again, we should have a valid CRL now...
resp = requestCrlFromBackend(t, s, b)
crl := parseCrlPemBytes(t, resp.Data["http_raw_body"].([]byte))
require.Equal(t, 0, len(crl.RevokedCertificates))
}
func requestCrlFromBackend(t *testing.T, s logical.Storage, b *backend) *logical.Response {
crlReq := &logical.Request{
Operation: logical.ReadOperation,
Path: "crl/pem",
Storage: s,
}
resp, err := b.HandleRequest(context.Background(), crlReq)
require.NoError(t, err, "crl req failed with an error")
require.NotNil(t, resp, "crl response was nil with no error")
require.False(t, resp.IsError(), "crl error response: %v", resp)
return resp
}
func getCrlCertificateList(t *testing.T, client *api.Client) pkix.TBSCertificateList {
resp, err := client.Logical().ReadWithContext(context.Background(), "pki/cert/crl")
require.NoError(t, err, "crl req failed with an error")
require.NotNil(t, resp, "crl response was nil with no error")
crlPem := resp.Data["certificate"].(string)
certList, err := x509.ParseCRL([]byte(crlPem))
return parseCrlPemString(t, crlPem)
}
func parseCrlPemString(t *testing.T, crlPem string) pkix.TBSCertificateList {
return parseCrlPemBytes(t, []byte(crlPem))
}
func parseCrlPemBytes(t *testing.T, crlPem []byte) pkix.TBSCertificateList {
certList, err := x509.ParseCRL(crlPem)
require.NoError(t, err)
return certList.TBSCertList
}


@@ -1,24 +1,92 @@
package pki
import (
"bytes"
"context"
"crypto/rand"
"crypto/x509"
"crypto/x509/pkix"
"errors"
"fmt"
"math/big"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/hashicorp/vault/sdk/helper/consts"
"github.com/hashicorp/vault/sdk/helper/certutil"
"github.com/hashicorp/vault/sdk/helper/errutil"
"github.com/hashicorp/vault/sdk/logical"
)
const revokedPath = "revoked/"
type revocationInfo struct {
CertificateBytes []byte `json:"certificate_bytes"`
RevocationTime int64 `json:"revocation_time"`
RevocationTimeUTC time.Time `json:"revocation_time_utc"`
CertificateIssuer issuerID `json:"issuer_id"`
}
// crlBuilder is the gatekeeper controlling read/write operations against the CRL's storage.
// The extra complexity arises from secondary performance clusters seeing various writes to their storage
// without the corresponding API calls. During the storage invalidation process, we do not have the required
// state to actually rebuild the CRLs, so we schedule the rebuild in a deferred fashion. This allows either
// read or write calls to perform the operation if required, or to have the flag reset by a write operation.
type crlBuilder struct {
m sync.Mutex
forceRebuild uint32
}
const (
_ignoreForceFlag = true
_enforceForceFlag = false
)
// rebuildIfForced is to be called by readers or periodic functions that might need to trigger
// a refresh of the CRL before the read occurs.
func (cb *crlBuilder) rebuildIfForced(ctx context.Context, b *backend, request *logical.Request) error {
if atomic.LoadUint32(&cb.forceRebuild) == 1 {
return cb._doRebuild(ctx, b, request, true, _enforceForceFlag)
}
return nil
}
// rebuild is to be called by various write APIs that know the CRL needs to be updated and can do so now.
func (cb *crlBuilder) rebuild(ctx context.Context, b *backend, request *logical.Request, forceNew bool) error {
return cb._doRebuild(ctx, b, request, forceNew, _ignoreForceFlag)
}
// requestRebuildIfActiveNode will schedule a rebuild of the CRL on the next read or write API call, assuming we are the active node of the cluster
func (cb *crlBuilder) requestRebuildIfActiveNode(b *backend) {
// Only schedule on active nodes; secondary nodes are ignored since the active node can/should rebuild the CRL.
if b.System().ReplicationState().HasState(consts.ReplicationPerformanceStandby) ||
b.System().ReplicationState().HasState(consts.ReplicationDRSecondary) {
b.Logger().Debug("Ignoring request to schedule a CRL rebuild, not on active node.")
return
}
b.Logger().Info("Scheduling PKI CRL rebuild.")
cb.m.Lock()
defer cb.m.Unlock()
atomic.StoreUint32(&cb.forceRebuild, 1)
}
func (cb *crlBuilder) _doRebuild(ctx context.Context, b *backend, request *logical.Request, forceNew bool, ignoreForceFlag bool) error {
cb.m.Lock()
defer cb.m.Unlock()
if cb.forceRebuild == 1 || ignoreForceFlag {
defer atomic.StoreUint32(&cb.forceRebuild, 0)
// If forceRebuild was requested, force a complete rebuild even if forceNew did not ask for one.
myForceNew := cb.forceRebuild == 1 || forceNew
return buildCRLs(ctx, b, request, myForceNew)
}
return nil
}
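
The intended interaction, roughly: a replication-driven invalidation only flags the rebuild (it has no request context), and the next read or write actually performs it. A sketch of that call pattern; the real call sites are the backend's invalidate handler and the CRL read/write paths elsewhere in this PR.

// From the invalidate handler (no logical.Request available, so only schedule):
b.crlBuilder.requestRebuildIfActiveNode(b)

// On a CRL read path, before serving the CRL:
if err := b.crlBuilder.rebuildIfForced(ctx, b, req); err != nil {
	return nil, err
}

// On a write path that knows the CRL changed (e.g. after a revocation):
if err := b.crlBuilder.rebuild(ctx, b, req, false); err != nil {
	return nil, err
}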
// Revokes a cert, and tries to be smart about error recovery
@@ -32,7 +100,7 @@ func revokeCert(ctx context.Context, b *backend, req *logical.Request, serial st
return nil, nil
}
signingBundle, caErr := fetchCAInfo(ctx, b, req)
signingBundle, caErr := fetchCAInfo(ctx, b, req, defaultRef, ReadOnlyUsage)
if caErr != nil {
switch caErr.(type) {
case errutil.UserError:
@@ -53,7 +121,7 @@ func revokeCert(ctx context.Context, b *backend, req *logical.Request, serial st
alreadyRevoked := false
var revInfo revocationInfo
revEntry, err := fetchCertBySerial(ctx, req, "revoked/", serial)
revEntry, err := fetchCertBySerial(ctx, b, req, revokedPath, serial)
if err != nil {
switch err.(type) {
case errutil.UserError:
@@ -72,7 +140,7 @@ func revokeCert(ctx context.Context, b *backend, req *logical.Request, serial st
}
if !alreadyRevoked {
certEntry, err := fetchCertBySerial(ctx, req, "certs/", serial)
certEntry, err := fetchCertBySerial(ctx, b, req, "certs/", serial)
if err != nil {
switch err.(type) {
case errutil.UserError:
@@ -117,7 +185,7 @@ func revokeCert(ctx context.Context, b *backend, req *logical.Request, serial st
revInfo.RevocationTime = currTime.Unix()
revInfo.RevocationTimeUTC = currTime.UTC()
revEntry, err = logical.StorageEntryJSON("revoked/"+normalizeSerial(serial), revInfo)
revEntry, err = logical.StorageEntryJSON(revokedPath+normalizeSerial(serial), revInfo)
if err != nil {
return nil, fmt.Errorf("error creating revocation entry")
}
@@ -128,7 +196,7 @@ func revokeCert(ctx context.Context, b *backend, req *logical.Request, serial st
}
}
crlErr := buildCRL(ctx, b, req, false)
crlErr := b.crlBuilder.rebuild(ctx, b, req, false)
if crlErr != nil {
switch crlErr.(type) {
case errutil.UserError:
@@ -149,9 +217,302 @@ func revokeCert(ctx context.Context, b *backend, req *logical.Request, serial st
return resp, nil
}
func buildCRLs(ctx context.Context, b *backend, req *logical.Request, forceNew bool) error {
// In order to build all CRLs, we need knowledge of all issuers. Any two
// issuers with the same keys _and_ subject should have the same CRL since
// they're functionally equivalent.
//
// When building CRLs, there are two types of CRLs: an "internal" CRL for
// just certificates issued by this issuer, and a "default" CRL, which
// not only contains certificates by this issuer, but also ones issued
// by "unknown" or past issuers. This means we need knowledge of not
// only all issuers (to tell whether or not to include these orphaned
// certs) but whether the present issuer is the configured default.
//
// If a configured default is lacking, we won't provision these
// certificates on any CRL.
//
// In order to know which CRL a given cert belongs on, we have to read
// it into memory, identify the corresponding issuer, and update its
// map with the revoked cert instance. If no such issuer is found, we'll
// place it in the default issuer's CRL.
//
// By not updating storage, we allow issuers to come and go (either by
// direct deletion or by having their keys deleted, preventing CRLs from
// being signed) -- and when they return, we'll correctly place certs
// on their CRLs.
issuers, err := listIssuers(ctx, req.Storage)
if err != nil {
return fmt.Errorf("error building CRL: while listing issuers: %v", err)
}
config, err := getIssuersConfig(ctx, req.Storage)
if err != nil {
return fmt.Errorf("error building CRLs: while getting the default config: %v", err)
}
// We map issuerID->entry for fast lookup and also issuerID->Cert for
// signature verification and correlation of revoked certs.
issuerIDEntryMap := make(map[issuerID]*issuerEntry, len(issuers))
issuerIDCertMap := make(map[issuerID]*x509.Certificate, len(issuers))
// We use a double map (keyID->subject->issuerID) to store whether or not this
// key+subject pairing has been seen before. We can then iterate over each
// key/subject and choose any representative issuer for that combination.
keySubjectIssuersMap := make(map[keyID]map[string][]issuerID)
for _, issuer := range issuers {
thisEntry, err := fetchIssuerById(ctx, req.Storage, issuer)
if err != nil {
return fmt.Errorf("error building CRLs: unable to fetch specified issuer (%v): %v", issuer, err)
}
if len(thisEntry.KeyID) == 0 {
continue
}
// Skip entries which aren't enabled for CRL signing.
if err := thisEntry.EnsureUsage(CRLSigningUsage); err != nil {
continue
}
issuerIDEntryMap[issuer] = thisEntry
thisCert, err := thisEntry.GetCertificate()
if err != nil {
return fmt.Errorf("error building CRLs: unable to parse issuer (%v)'s certificate: %v", issuer, err)
}
issuerIDCertMap[issuer] = thisCert
subject := string(thisCert.RawIssuer)
if _, ok := keySubjectIssuersMap[thisEntry.KeyID]; !ok {
keySubjectIssuersMap[thisEntry.KeyID] = make(map[string][]issuerID)
}
keySubjectIssuersMap[thisEntry.KeyID][subject] = append(keySubjectIssuersMap[thisEntry.KeyID][subject], issuer)
}
// Fetch the cluster-local CRL mapping so we know where to write the
// CRLs.
crlConfig, err := getLocalCRLConfig(ctx, req.Storage)
if err != nil {
return fmt.Errorf("error building CRLs: unable to fetch cluster-local CRL configuration: %v", err)
}
// Next, we load and parse all revoked certificates. We need to assign
// these certificates to an issuer. Some certificates will not be
// assignable (if they were issued by a since-deleted issuer), so we need
// a separate pool for those.
unassignedCerts, revokedCertsMap, err := getRevokedCertEntries(ctx, req, issuerIDCertMap)
if err != nil {
return fmt.Errorf("error building CRLs: unable to get revoked certificate entries: %v", err)
}
// Now we can call buildCRL once, on an arbitrary/representative issuer
// from each of these (keyID, subject) sets.
for _, subjectIssuersMap := range keySubjectIssuersMap {
for _, issuersSet := range subjectIssuersMap {
if len(issuersSet) == 0 {
continue
}
var revokedCerts []pkix.RevokedCertificate
representative := issuersSet[0]
var crlIdentifier crlID
var crlIdIssuer issuerID
for _, issuerId := range issuersSet {
if issuerId == config.DefaultIssuerId {
if len(unassignedCerts) > 0 {
revokedCerts = append(revokedCerts, unassignedCerts...)
}
representative = issuerId
}
if thisRevoked, ok := revokedCertsMap[issuerId]; ok && len(thisRevoked) > 0 {
revokedCerts = append(revokedCerts, thisRevoked...)
}
if thisCRLId, ok := crlConfig.IssuerIDCRLMap[issuerId]; ok && len(thisCRLId) > 0 {
if len(crlIdentifier) > 0 && crlIdentifier != thisCRLId {
return fmt.Errorf("error building CRLs: two issuers with same keys/subjects (%v vs %v) have different internal CRL IDs: %v vs %v", issuerId, crlIdIssuer, thisCRLId, crlIdentifier)
}
crlIdentifier = thisCRLId
crlIdIssuer = issuerId
}
}
if len(crlIdentifier) == 0 {
// Create a new random UUID for this CRL if none exists.
crlIdentifier = genCRLId()
crlConfig.CRLNumberMap[crlIdentifier] = 1
}
// Update all issuers in this group to set the CRL Issuer
for _, issuerId := range issuersSet {
crlConfig.IssuerIDCRLMap[issuerId] = crlIdentifier
}
// We always update the CRL Number since we never want to
// duplicate numbers and missing numbers are fine.
crlNumber := crlConfig.CRLNumberMap[crlIdentifier]
crlConfig.CRLNumberMap[crlIdentifier] += 1
// Lastly, build the CRL.
if err := buildCRL(ctx, b, req, forceNew, representative, revokedCerts, crlIdentifier, crlNumber); err != nil {
return fmt.Errorf("error building CRLs: unable to build CRL for issuer (%v): %v", representative, err)
}
}
}
// Before persisting our updated CRL config, check to see if we have
// any dangling references. If we have any issuers that don't exist,
// remove them, remembering their CRL IDs. If we've completely removed
// all issuers pointing to that CRL number, we can remove it from the
// number map and from storage.
//
// Note that we persist the last generated CRL for a specified issuer
// if it is later disabled for CRL generation. This mirrors the old
// root deletion behavior, but using soft issuer deletes. If there is an
// alternate, equivalent issuer, however, we'll keep updating the shared
// CRL; to stop updates, all equivalent issuers must have CRL generation disabled.
for mapIssuerId := range crlConfig.IssuerIDCRLMap {
stillHaveIssuer := false
for _, listedIssuerId := range issuers {
if mapIssuerId == listedIssuerId {
stillHaveIssuer = true
break
}
}
if !stillHaveIssuer {
delete(crlConfig.IssuerIDCRLMap, mapIssuerId)
}
}
for crlId := range crlConfig.CRLNumberMap {
stillHaveIssuerForID := false
for _, remainingCRL := range crlConfig.IssuerIDCRLMap {
if remainingCRL == crlId {
stillHaveIssuerForID = true
break
}
}
if !stillHaveIssuerForID {
if err := req.Storage.Delete(ctx, "crls/"+crlId.String()); err != nil {
return fmt.Errorf("error building CRLs: unable to clean up deleted issuers' CRL: %v", err)
}
}
}
// Finally, persist our potentially updated local CRL config
if err := setLocalCRLConfig(ctx, req.Storage, crlConfig); err != nil {
return fmt.Errorf("error building CRLs: unable to persist updated cluster-local CRL config: %v", err)
}
// All good :-)
return nil
}
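
A toy illustration of the grouping rule above, with plain strings standing in for the keyID/issuerID types: issuers sharing a key and subject land in one group, and any member of that group can act as the representative signer for the group's single CRL.

// Illustration only; assumes the standard "fmt" package.
groups := map[string]map[string][]string{
	"key-1": {"CN=example-root": {"issuer-a", "issuer-a-reissued"}},
	"key-2": {"CN=example-root": {"issuer-b"}},
}
for key, bySubject := range groups {
	for subject, members := range bySubject {
		// One CRL per (key, subject) pair; members[0] could sign it.
		fmt.Printf("key=%s subject=%s -> shared CRL for %v\n", key, subject, members)
	}
}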
func getRevokedCertEntries(ctx context.Context, req *logical.Request, issuerIDCertMap map[issuerID]*x509.Certificate) ([]pkix.RevokedCertificate, map[issuerID][]pkix.RevokedCertificate, error) {
var unassignedCerts []pkix.RevokedCertificate
revokedCertsMap := make(map[issuerID][]pkix.RevokedCertificate)
revokedSerials, err := req.Storage.List(ctx, revokedPath)
if err != nil {
return nil, nil, errutil.InternalError{Err: fmt.Sprintf("error fetching list of revoked certs: %s", err)}
}
for _, serial := range revokedSerials {
var revInfo revocationInfo
revokedEntry, err := req.Storage.Get(ctx, revokedPath+serial)
if err != nil {
return nil, nil, errutil.InternalError{Err: fmt.Sprintf("unable to fetch revoked cert with serial %s: %s", serial, err)}
}
if revokedEntry == nil {
return nil, nil, errutil.InternalError{Err: fmt.Sprintf("revoked certificate entry for serial %s is nil", serial)}
}
if revokedEntry.Value == nil || len(revokedEntry.Value) == 0 {
// TODO: In this case, remove it and continue? How likely is this to
// happen? Alternately, could skip it entirely, or could implement a
// delete function so that there is a way to remove these
return nil, nil, errutil.InternalError{Err: fmt.Sprintf("found revoked serial but actual certificate is empty")}
}
err = revokedEntry.DecodeJSON(&revInfo)
if err != nil {
return nil, nil, errutil.InternalError{Err: fmt.Sprintf("error decoding revocation entry for serial %s: %s", serial, err)}
}
revokedCert, err := x509.ParseCertificate(revInfo.CertificateBytes)
if err != nil {
return nil, nil, errutil.InternalError{Err: fmt.Sprintf("unable to parse stored revoked certificate with serial %s: %s", serial, err)}
}
// NOTE: We have to change this to UTC time because the CRL standard
// mandates it but Go will happily encode the CRL without this.
newRevCert := pkix.RevokedCertificate{
SerialNumber: revokedCert.SerialNumber,
}
if !revInfo.RevocationTimeUTC.IsZero() {
newRevCert.RevocationTime = revInfo.RevocationTimeUTC
} else {
newRevCert.RevocationTime = time.Unix(revInfo.RevocationTime, 0).UTC()
}
// If we have a CertificateIssuer field on the revocation entry,
// prefer it to manually checking each issuer signature, assuming it
// appears valid. It's highly unlikely for two different issuers
// to have the same id (after the first was deleted).
if len(revInfo.CertificateIssuer) > 0 {
issuerId := revInfo.CertificateIssuer
if _, issuerExists := issuerIDCertMap[issuerId]; issuerExists {
revokedCertsMap[issuerId] = append(revokedCertsMap[issuerId], newRevCert)
continue
}
// Otherwise, fall through and update the entry.
}
// Now we need to assign the revoked certificate to an issuer.
foundParent := false
for issuerId, issuerCert := range issuerIDCertMap {
if bytes.Equal(revokedCert.RawIssuer, issuerCert.RawSubject) {
if err := revokedCert.CheckSignatureFrom(issuerCert); err == nil {
// Valid mapping. Add it to the specified entry.
revokedCertsMap[issuerId] = append(revokedCertsMap[issuerId], newRevCert)
revInfo.CertificateIssuer = issuerId
foundParent = true
break
}
}
}
if !foundParent {
// If the parent isn't found, add it to the unassigned bucket.
unassignedCerts = append(unassignedCerts, newRevCert)
} else {
// When the CertificateIssuer field wasn't found on the existing
// entry (or was invalid), and we've found a new value for it,
// we should update the entry to make future CRL builds faster.
revokedEntry, err = logical.StorageEntryJSON(revokedPath+serial, revInfo)
if err != nil {
return nil, nil, fmt.Errorf("error creating revocation entry for existing cert: %v", serial)
}
err = req.Storage.Put(ctx, revokedEntry)
if err != nil {
return nil, nil, fmt.Errorf("error updating revoked certificate at existing location: %v", serial)
}
}
}
return unassignedCerts, revokedCertsMap, nil
}
// Builds a CRL by going through the list of revoked certificates and building
// a new CRL with the stored revocation times and serial numbers.
func buildCRL(ctx context.Context, b *backend, req *logical.Request, forceNew bool) error {
func buildCRL(ctx context.Context, b *backend, req *logical.Request, forceNew bool, thisIssuerId issuerID, revoked []pkix.RevokedCertificate, identifier crlID, crlNumber int64) error {
crlInfo, err := b.CRL(ctx, req.Storage)
if err != nil {
return errutil.InternalError{Err: fmt.Sprintf("error fetching CRL config information: %s", err)}
@@ -159,8 +520,6 @@ func buildCRL(ctx context.Context, b *backend, req *logical.Request, forceNew bo
crlLifetime := b.crlLifetime
var revokedCerts []pkix.RevokedCertificate
var revInfo revocationInfo
var revokedSerials []string
if crlInfo != nil {
if crlInfo.Expiry != "" {
@@ -175,55 +534,21 @@ func buildCRL(ctx context.Context, b *backend, req *logical.Request, forceNew bo
if !forceNew {
return nil
}
// NOTE: in this case, the passed argument (revoked) is not added
// to the revokedCerts list. This is because we want to sign an
// **empty** CRL (as the CRL was disabled but we've specified the
// forceNew option). In previous versions of Vault (1.10 series and
// earlier), we'd have queried the certs below, whereas we now have
// an assignment from a pre-queried list.
goto WRITE
}
}
revokedSerials, err = req.Storage.List(ctx, "revoked/")
if err != nil {
return errutil.InternalError{Err: fmt.Sprintf("error fetching list of revoked certs: %s", err)}
}
for _, serial := range revokedSerials {
revokedEntry, err := req.Storage.Get(ctx, "revoked/"+serial)
if err != nil {
return errutil.InternalError{Err: fmt.Sprintf("unable to fetch revoked cert with serial %s: %s", serial, err)}
}
if revokedEntry == nil {
return errutil.InternalError{Err: fmt.Sprintf("revoked certificate entry for serial %s is nil", serial)}
}
if revokedEntry.Value == nil || len(revokedEntry.Value) == 0 {
// TODO: In this case, remove it and continue? How likely is this to
// happen? Alternately, could skip it entirely, or could implement a
// delete function so that there is a way to remove these
return errutil.InternalError{Err: fmt.Sprintf("found revoked serial but actual certificate is empty")}
}
err = revokedEntry.DecodeJSON(&revInfo)
if err != nil {
return errutil.InternalError{Err: fmt.Sprintf("error decoding revocation entry for serial %s: %s", serial, err)}
}
revokedCert, err := x509.ParseCertificate(revInfo.CertificateBytes)
if err != nil {
return errutil.InternalError{Err: fmt.Sprintf("unable to parse stored revoked certificate with serial %s: %s", serial, err)}
}
// NOTE: We have to change this to UTC time because the CRL standard
// mandates it but Go will happily encode the CRL without this.
newRevCert := pkix.RevokedCertificate{
SerialNumber: revokedCert.SerialNumber,
}
if !revInfo.RevocationTimeUTC.IsZero() {
newRevCert.RevocationTime = revInfo.RevocationTimeUTC
} else {
newRevCert.RevocationTime = time.Unix(revInfo.RevocationTime, 0).UTC()
}
revokedCerts = append(revokedCerts, newRevCert)
}
revokedCerts = revoked
WRITE:
signingBundle, caErr := fetchCAInfo(ctx, b, req)
_, bundle, caErr := fetchCertBundleByIssuerId(ctx, req.Storage, thisIssuerId, true /* need the signing key */)
if caErr != nil {
switch caErr.(type) {
case errutil.UserError:
@@ -233,13 +558,30 @@ WRITE:
}
}
crlBytes, err := signingBundle.Certificate.CreateCRL(rand.Reader, signingBundle.PrivateKey, revokedCerts, time.Now(), time.Now().Add(crlLifetime))
signingBundle, caErr := parseCABundle(ctx, b, req, bundle)
if caErr != nil {
switch caErr.(type) {
case errutil.UserError:
return errutil.UserError{Err: fmt.Sprintf("could not fetch the CA certificate: %s", caErr)}
default:
return errutil.InternalError{Err: fmt.Sprintf("error fetching CA certificate: %s", caErr)}
}
}
revocationListTemplate := &x509.RevocationList{
RevokedCertificates: revokedCerts,
Number: big.NewInt(crlNumber),
ThisUpdate: time.Now(),
NextUpdate: time.Now().Add(crlLifetime),
}
crlBytes, err := x509.CreateRevocationList(rand.Reader, revocationListTemplate, signingBundle.Certificate, signingBundle.PrivateKey)
if err != nil {
return errutil.InternalError{Err: fmt.Sprintf("error creating new CRL: %s", err)}
}
err = req.Storage.Put(ctx, &logical.StorageEntry{
Key: "crl",
Key: "crls/" + identifier.String(),
Value: crlBytes,
})
if err != nil {


@@ -2,6 +2,15 @@ package pki
import "github.com/hashicorp/vault/sdk/framework"
const (
issuerRefParam = "issuer_ref"
keyNameParam = "key_name"
keyRefParam = "key_ref"
keyIdParam = "key_id"
keyTypeParam = "key_type"
keyBitsParam = "key_bits"
)
// addIssueAndSignCommonFields adds fields common to both CA and non-CA issuing
// and signing
func addIssueAndSignCommonFields(fields map[string]*framework.FieldSchema) map[string]*framework.FieldSchema {
@@ -132,6 +141,8 @@ be larger than the role max TTL.`,
The value format should be given in UTC format YYYY-MM-ddTHH:MM:SSZ`,
}
fields = addIssuerRefField(fields)
return fields
}
@@ -308,6 +319,9 @@ SHA-2-512. Defaults to 0 to automatically detect based on key length
Value: "rsa",
},
}
fields = addKeyRefNameFields(fields)
return fields
}
@@ -328,5 +342,61 @@ func addCAIssueFields(fields map[string]*framework.FieldSchema) map[string]*fram
},
}
fields = addIssuerNameField(fields)
return fields
}
func addIssuerRefNameFields(fields map[string]*framework.FieldSchema) map[string]*framework.FieldSchema {
fields = addIssuerNameField(fields)
fields = addIssuerRefField(fields)
return fields
}
func addIssuerNameField(fields map[string]*framework.FieldSchema) map[string]*framework.FieldSchema {
fields["issuer_name"] = &framework.FieldSchema{
Type: framework.TypeString,
Description: `Provide a name for the generated issuer; the name
must be unique across all issuers and must not be the reserved value 'default'.`,
}
return fields
}
func addIssuerRefField(fields map[string]*framework.FieldSchema) map[string]*framework.FieldSchema {
fields[issuerRefParam] = &framework.FieldSchema{
Type: framework.TypeString,
Description: `Reference to an existing issuer; either "default"
for the configured default issuer, an identifier or the name assigned
to the issuer.`,
Default: defaultRef,
}
return fields
}
func addKeyRefNameFields(fields map[string]*framework.FieldSchema) map[string]*framework.FieldSchema {
fields = addKeyNameField(fields)
fields = addKeyRefField(fields)
return fields
}
func addKeyNameField(fields map[string]*framework.FieldSchema) map[string]*framework.FieldSchema {
fields[keyNameParam] = &framework.FieldSchema{
Type: framework.TypeString,
Description: `Provide a name for the key that will be generated;
the name must be unique across all keys and must not be the reserved value
'default'.`,
}
return fields
}
func addKeyRefField(fields map[string]*framework.FieldSchema) map[string]*framework.FieldSchema {
fields[keyRefParam] = &framework.FieldSchema{
Type: framework.TypeString,
Description: `Reference to an existing key; either "default"
for the configured default key, an identifier or the name assigned
to the key.`,
Default: defaultRef,
}
return fields
}
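
These helpers are meant to be chained while declaring a path's field schema; a minimal sketch of that composition (the path itself is hypothetical, only the helper names come from this file and its siblings):

fields := map[string]*framework.FieldSchema{}
fields = addIssueAndSignCommonFields(fields) // ttl, common_name, etc.
fields = addIssuerRefField(fields)           // issuer_ref: "default", an ID, or a name
fields = addKeyRefNameFields(fields)         // key_ref plus key_name for generation paths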


@@ -0,0 +1,126 @@
package pki
import (
"context"
"crypto"
"encoding/pem"
"errors"
"fmt"
"github.com/hashicorp/vault/sdk/helper/certutil"
"github.com/hashicorp/vault/sdk/helper/errutil"
"github.com/hashicorp/vault/sdk/logical"
)
type managedKeyContext struct {
ctx context.Context
b *backend
mountPoint string
}
func newManagedKeyContext(ctx context.Context, b *backend, mountPoint string) managedKeyContext {
return managedKeyContext{
ctx: ctx,
b: b,
mountPoint: mountPoint,
}
}
func comparePublicKey(ctx managedKeyContext, key *keyEntry, publicKey crypto.PublicKey) (bool, error) {
publicKeyForKeyEntry, err := getPublicKey(ctx, key)
if err != nil {
return false, err
}
return certutil.ComparePublicKeysAndType(publicKeyForKeyEntry, publicKey)
}
func getPublicKey(mkc managedKeyContext, key *keyEntry) (crypto.PublicKey, error) {
if key.PrivateKeyType == certutil.ManagedPrivateKey {
keyId, err := extractManagedKeyId([]byte(key.PrivateKey))
if err != nil {
return nil, err
}
return getManagedKeyPublicKey(mkc, keyId)
}
signer, _, _, err := getSignerFromKeyEntryBytes(key)
if err != nil {
return nil, err
}
return signer.Public(), nil
}
func getSignerFromKeyEntryBytes(key *keyEntry) (crypto.Signer, certutil.BlockType, *pem.Block, error) {
if key.PrivateKeyType == certutil.UnknownPrivateKey {
return nil, certutil.UnknownBlock, nil, errutil.InternalError{Err: fmt.Sprintf("unsupported unknown private key type for key: %s (%s)", key.ID, key.Name)}
}
if key.PrivateKeyType == certutil.ManagedPrivateKey {
return nil, certutil.UnknownBlock, nil, errutil.InternalError{Err: fmt.Sprintf("can not get a signer from a managed key: %s (%s)", key.ID, key.Name)}
}
bytes, blockType, blk, err := getSignerFromBytes([]byte(key.PrivateKey))
if err != nil {
return nil, certutil.UnknownBlock, nil, errutil.InternalError{Err: fmt.Sprintf("failed parsing key entry bytes for key id: %s (%s): %s", key.ID, key.Name, err.Error())}
}
return bytes, blockType, blk, nil
}
func getSignerFromBytes(keyBytes []byte) (crypto.Signer, certutil.BlockType, *pem.Block, error) {
pemBlock, _ := pem.Decode(keyBytes)
if pemBlock == nil {
return nil, certutil.UnknownBlock, pemBlock, errutil.InternalError{Err: "no data found in PEM block"}
}
signer, blk, err := certutil.ParseDERKey(pemBlock.Bytes)
if err != nil {
return nil, certutil.UnknownBlock, pemBlock, errutil.InternalError{Err: fmt.Sprintf("failed to parse PEM block: %s", err.Error())}
}
return signer, blk, pemBlock, nil
}
func getManagedKeyPublicKey(mkc managedKeyContext, keyId managedKeyId) (crypto.PublicKey, error) {
// Determine key type and key bits from the managed public key
var pubKey crypto.PublicKey
err := withManagedPKIKey(mkc.ctx, mkc.b, keyId, mkc.mountPoint, func(ctx context.Context, key logical.ManagedSigningKey) error {
var myErr error
pubKey, myErr = key.GetPublicKey(ctx)
if myErr != nil {
return myErr
}
return nil
})
if err != nil {
return nil, errors.New("failed to lookup public key from managed key: " + err.Error())
}
return pubKey, nil
}
func getPublicKeyFromBytes(keyBytes []byte) (crypto.PublicKey, error) {
signer, _, _, err := getSignerFromBytes(keyBytes)
if err != nil {
return nil, errutil.InternalError{Err: fmt.Sprintf("failed parsing key bytes: %s", err.Error())}
}
return signer.Public(), nil
}
func importKeyFromBytes(mkc managedKeyContext, s logical.Storage, keyValue string, keyName string) (*keyEntry, bool, error) {
signer, _, _, err := getSignerFromBytes([]byte(keyValue))
if err != nil {
return nil, false, err
}
privateKeyType := certutil.GetPrivateKeyTypeFromSigner(signer)
if privateKeyType == certutil.UnknownPrivateKey {
return nil, false, errors.New("unsupported private key type within pem bundle")
}
key, existed, err := importKey(mkc, s, keyValue, keyName, privateKeyType)
if err != nil {
return nil, false, err
}
return key, existed, nil
}
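
One way these helpers plausibly fit together during key import: parse the incoming PEM once, then compare its public key against an already-stored entry to detect duplicates. This is a sketch; candidatePEM and existing are illustrative names, and mkc is a managedKeyContext as above.

// Sketch: does candidatePEM refer to the same key as the stored entry?
pub, err := getPublicKeyFromBytes([]byte(candidatePEM))
if err != nil {
	return false, err
}
same, err := comparePublicKey(mkc, existing, pub)
if err != nil {
	return false, err
}
return same, nil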


@@ -13,18 +13,26 @@ import (
var errEntOnly = errors.New("managed keys are supported within enterprise edition only")
func generateManagedKeyCABundle(_ context.Context, _ *backend, _ *inputBundle, _ *certutil.CreationBundle, _ io.Reader) (*certutil.ParsedCertBundle, error) {
func generateManagedKeyCABundle(ctx context.Context, b *backend, input *inputBundle, keyId managedKeyId, data *certutil.CreationBundle, randomSource io.Reader) (bundle *certutil.ParsedCertBundle, err error) {
return nil, errEntOnly
}
func generateManagedKeyCSRBundle(_ context.Context, _ *backend, _ *inputBundle, _ *certutil.CreationBundle, _ bool, _ io.Reader) (*certutil.ParsedCSRBundle, error) {
func generateManagedKeyCSRBundle(ctx context.Context, b *backend, input *inputBundle, keyId managedKeyId, data *certutil.CreationBundle, addBasicConstraints bool, randomSource io.Reader) (bundle *certutil.ParsedCSRBundle, err error) {
return nil, errEntOnly
}
func parseManagedKeyCABundle(_ context.Context, _ *backend, _ *logical.Request, _ *certutil.CertBundle) (*certutil.ParsedCertBundle, error) {
func parseManagedKeyCABundle(ctx context.Context, b *backend, req *logical.Request, bundle *certutil.CertBundle) (*certutil.ParsedCertBundle, error) {
return nil, errEntOnly
}
func withManagedPKIKey(_ context.Context, _ *backend, _ managedKeyId, _ string, _ logical.ManagedSigningKeyConsumer) error {
func withManagedPKIKey(ctx context.Context, b *backend, keyId managedKeyId, mountPoint string, f logical.ManagedSigningKeyConsumer) error {
return errEntOnly
}
func extractManagedKeyId(privateKeyBytes []byte) (UUIDKey, error) {
return "", errEntOnly
}
func createKmsKeyBundle(mkc managedKeyContext, keyId managedKeyId) (certutil.KeyBundle, certutil.PrivateKeyType, error) {
return certutil.KeyBundle{}, certutil.UnknownPrivateKey, errEntOnly
}


@@ -2,11 +2,8 @@ package pki
import (
"context"
"fmt"
"github.com/hashicorp/vault/sdk/framework"
"github.com/hashicorp/vault/sdk/helper/certutil"
"github.com/hashicorp/vault/sdk/helper/errutil"
"github.com/hashicorp/vault/sdk/logical"
)
@@ -21,8 +18,13 @@ secret key and certificate.`,
},
},
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.UpdateOperation: b.pathCAWrite,
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathImportIssuers,
// Read more about why these flags are set in backend.go.
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
},
},
HelpSynopsis: pathConfigCAHelpSyn,
@@ -30,67 +32,6 @@ secret key and certificate.`,
}
}
func (b *backend) pathCAWrite(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
pemBundle := data.Get("pem_bundle").(string)
if pemBundle == "" {
return logical.ErrorResponse("'pem_bundle' was empty"), nil
}
parsedBundle, err := certutil.ParsePEMBundle(pemBundle)
if err != nil {
switch err.(type) {
case errutil.InternalError:
return nil, err
default:
return logical.ErrorResponse(err.Error()), nil
}
}
if parsedBundle.PrivateKey == nil {
return logical.ErrorResponse("private key not found in the PEM bundle"), nil
}
if parsedBundle.PrivateKeyType == certutil.UnknownPrivateKey {
return logical.ErrorResponse("unknown private key found in the PEM bundle"), nil
}
if parsedBundle.Certificate == nil {
return logical.ErrorResponse("no certificate found in the PEM bundle"), nil
}
if !parsedBundle.Certificate.IsCA {
return logical.ErrorResponse("the given certificate is not marked for CA use and cannot be used with this backend"), nil
}
cb, err := parsedBundle.ToCertBundle()
if err != nil {
return nil, fmt.Errorf("error converting raw values into cert bundle: %w", err)
}
entry, err := logical.StorageEntryJSON("config/ca_bundle", cb)
if err != nil {
return nil, err
}
err = req.Storage.Put(ctx, entry)
if err != nil {
return nil, err
}
// For ease of later use, also store just the certificate at a known
// location, plus a fresh CRL
entry.Key = "ca"
entry.Value = parsedBundle.CertificateBytes
err = req.Storage.Put(ctx, entry)
if err != nil {
return nil, err
}
err = buildCRL(ctx, b, req, true)
return nil, err
}
const pathConfigCAHelpSyn = `
Set the CA certificate and private key used for generated credentials.
`
@@ -103,36 +44,198 @@ secret key and certificate.
For security reasons, the secret key cannot be retrieved later.
`
const pathConfigCAGenerateHelpSyn = `
Generate a new CA certificate and private key used for signing.
func pathConfigIssuers(b *backend) *framework.Path {
return &framework.Path{
Pattern: "config/issuers",
Fields: map[string]*framework.FieldSchema{
defaultRef: {
Type: framework.TypeString,
Description: `Reference (name or identifier) to the default issuer.`,
},
},
Operations: map[logical.Operation]framework.OperationHandler{
logical.ReadOperation: &framework.PathOperation{
Callback: b.pathCAIssuersRead,
},
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathCAIssuersWrite,
// Read more about why these flags are set in backend.go.
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
},
},
HelpSynopsis: pathConfigIssuersHelpSyn,
HelpDescription: pathConfigIssuersHelpDesc,
}
}
func pathReplaceRoot(b *backend) *framework.Path {
return &framework.Path{
Pattern: "root/replace",
Fields: map[string]*framework.FieldSchema{
"default": {
Type: framework.TypeString,
Description: `Reference (name or identifier) to the default issuer.`,
Default: "next",
},
},
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathCAIssuersWrite,
// Read more about why these flags are set in backend.go.
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
},
},
HelpSynopsis: pathConfigIssuersHelpSyn,
HelpDescription: pathConfigIssuersHelpDesc,
}
}
func (b *backend) pathCAIssuersRead(ctx context.Context, req *logical.Request, _ *framework.FieldData) (*logical.Response, error) {
config, err := getIssuersConfig(ctx, req.Storage)
if err != nil {
return logical.ErrorResponse("Error loading issuers configuration: " + err.Error()), nil
}
return &logical.Response{
Data: map[string]interface{}{
defaultRef: config.DefaultIssuerId,
},
}, nil
}
func (b *backend) pathCAIssuersWrite(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
// Since we're planning on updating issuers here, grab the lock so we've
// got a consistent view.
b.issuersLock.Lock()
defer b.issuersLock.Unlock()
newDefault := data.Get(defaultRef).(string)
if len(newDefault) == 0 || newDefault == defaultRef {
return logical.ErrorResponse("Invalid issuer specification; must be non-empty and can't be 'default'."), nil
}
parsedIssuer, err := resolveIssuerReference(ctx, req.Storage, newDefault)
if err != nil {
return logical.ErrorResponse("Error resolving issuer reference: " + err.Error()), nil
}
response := &logical.Response{
Data: map[string]interface{}{
"default": parsedIssuer,
},
}
entry, err := fetchIssuerById(ctx, req.Storage, parsedIssuer)
if err != nil {
return logical.ErrorResponse("Unable to fetch issuer: " + err.Error()), nil
}
if len(entry.KeyID) == 0 {
msg := "This selected default issuer has no key associated with it. Some operations like issuing certificates and signing CRLs will be unavailable with the requested default issuer until a key is imported or the default issuer is changed."
response.AddWarning(msg)
b.Logger().Error(msg)
}
err = updateDefaultIssuerId(ctx, req.Storage, parsedIssuer)
if err != nil {
return logical.ErrorResponse("Error updating issuer configuration: " + err.Error()), nil
}
return response, nil
}
const pathConfigIssuersHelpSyn = `Read and set the default issuer certificate for signing.`
const pathConfigIssuersHelpDesc = `
This path allows configuration of issuer parameters.
Presently, the "default" parameter controls which issuer is the default,
accessible by the existing signing paths (/root/sign-intermediate,
/root/sign-self-issued, /sign-verbatim, /sign/:role, and /issue/:role).
The /root/replace path is aliased to this path, with default taking the
value of the issuer with the name "next", if it exists.
`
const pathConfigCAGenerateHelpDesc = `
This path generates a CA certificate and private key to be used for
credentials generated by this mount. The path can either
end in "internal" or "exported"; this controls whether the
unencrypted private key is exported after generation. This will
be your only chance to export the private key; for security reasons
it cannot be read or exported later.
func pathConfigKeys(b *backend) *framework.Path {
return &framework.Path{
Pattern: "config/keys",
Fields: map[string]*framework.FieldSchema{
defaultRef: {
Type: framework.TypeString,
Description: `Reference (name or identifier) of the default key.`,
},
},
If the "type" option is set to "self-signed", the generated
certificate will be a self-signed root CA. Otherwise, this mount
will act as an intermediate CA; a CSR will be returned, to be signed
by your chosen CA (which could be another mount of this backend).
Note that the CRL path will be set to this mount's CRL path; if you
need further customization it is recommended that you create a CSR
separately and get it signed. Either way, use the "config/ca/set"
endpoint to load the signed certificate into Vault.
`
const pathConfigCASignHelpSyn = `
Generate a signed CA certificate from a CSR.
`
const pathConfigCASignHelpDesc = `
This path generates a CA certificate to be used for credentials
generated by the certificate's destination mount.
Use the "config/ca/set" endpoint to load the signed certificate
into another Vault mount.
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathKeyDefaultWrite,
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
},
logical.ReadOperation: &framework.PathOperation{
Callback: b.pathKeyDefaultRead,
ForwardPerformanceStandby: false,
ForwardPerformanceSecondary: false,
},
},
HelpSynopsis: pathConfigKeysHelpSyn,
HelpDescription: pathConfigKeysHelpDesc,
}
}
func (b *backend) pathKeyDefaultRead(ctx context.Context, req *logical.Request, _ *framework.FieldData) (*logical.Response, error) {
config, err := getKeysConfig(ctx, req.Storage)
if err != nil {
return logical.ErrorResponse("Error loading keys configuration: " + err.Error()), nil
}
return &logical.Response{
Data: map[string]interface{}{
defaultRef: config.DefaultKeyId,
},
}, nil
}
func (b *backend) pathKeyDefaultWrite(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
// Since we're planning on updating keys here, grab the lock so we've
// got a consistent view.
b.issuersLock.Lock()
defer b.issuersLock.Unlock()
newDefault := data.Get(defaultRef).(string)
if len(newDefault) == 0 || newDefault == defaultRef {
return logical.ErrorResponse("Invalid key specification; must be non-empty and can't be 'default'."), nil
}
parsedKey, err := resolveKeyReference(ctx, req.Storage, newDefault)
if err != nil {
return logical.ErrorResponse("Error resolving issuer reference: " + err.Error()), nil
}
err = updateDefaultKeyId(ctx, req.Storage, parsedKey)
if err != nil {
return logical.ErrorResponse("Error updating issuer configuration: " + err.Error()), nil
}
return &logical.Response{
Data: map[string]interface{}{
defaultRef: parsedKey,
},
}, nil
}
const pathConfigKeysHelpSyn = `Read and set the default key used for signing`
const pathConfigKeysHelpDesc = `
This path allows configuration of key parameters.
The "default" parameter controls which key is the default used by signing paths.
`
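
Assuming a mount at "pki", the new configuration endpoints behave like any other logical path. A hedged example using the Go API client; the issuer name is illustrative.

// Point the default issuer at an existing issuer by name, then read it back.
if _, err := client.Logical().Write("pki/config/issuers", map[string]interface{}{
	"default": "root-new-a",
}); err != nil {
	return err
}
resp, err := client.Logical().Read("pki/config/issuers")
if err != nil {
	return err
}
fmt.Println("default issuer:", resp.Data["default"])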


@@ -32,9 +32,16 @@ valid; defaults to 72 hours`,
},
},
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.ReadOperation: b.pathCRLRead,
logical.UpdateOperation: b.pathCRLWrite,
Operations: map[logical.Operation]framework.OperationHandler{
logical.ReadOperation: &framework.PathOperation{
Callback: b.pathCRLRead,
},
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathCRLWrite,
// Read more about why these flags are set in backend.go.
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
},
},
HelpSynopsis: pathConfigCRLHelpSyn,
@@ -59,7 +66,7 @@ func (b *backend) CRL(ctx context.Context, s logical.Storage) (*crlConfig, error
return &result, nil
}
func (b *backend) pathCRLRead(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
func (b *backend) pathCRLRead(ctx context.Context, req *logical.Request, _ *framework.FieldData) (*logical.Response, error) {
config, err := b.CRL(ctx, req.Storage)
if err != nil {
return nil, err
@@ -111,7 +118,7 @@ func (b *backend) pathCRLWrite(ctx context.Context, req *logical.Request, d *fra
if oldDisable != config.Disable {
// It wasn't disabled but now it is, rotate
crlErr := buildCRL(ctx, b, req, true)
crlErr := b.crlBuilder.rebuild(ctx, b, req, true)
if crlErr != nil {
switch crlErr.(type) {
case errutil.UserError:


@@ -34,9 +34,13 @@ for the OCSP servers attribute. See also RFC 5280 Section 4.2.2.1.`,
},
},
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.UpdateOperation: b.pathWriteURL,
logical.ReadOperation: b.pathReadURL,
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathWriteURL,
},
logical.ReadOperation: &framework.PathOperation{
Callback: b.pathReadURL,
},
},
HelpSynopsis: pathConfigURLsHelpSyn,
@@ -88,7 +92,7 @@ func writeURLs(ctx context.Context, req *logical.Request, entries *certutil.URLE
return nil
}
func (b *backend) pathReadURL(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
func (b *backend) pathReadURL(ctx context.Context, req *logical.Request, _ *framework.FieldData) (*logical.Response, error) {
entries, err := getURLs(ctx, req)
if err != nil {
return nil, err


@@ -16,8 +16,10 @@ func pathFetchCA(b *backend) *framework.Path {
return &framework.Path{
Pattern: `ca(/pem)?`,
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.ReadOperation: b.pathFetchRead,
Operations: map[logical.Operation]framework.OperationHandler{
logical.ReadOperation: &framework.PathOperation{
Callback: b.pathFetchRead,
},
},
HelpSynopsis: pathFetchHelpSyn,
@@ -30,8 +32,10 @@ func pathFetchCAChain(b *backend) *framework.Path {
return &framework.Path{
Pattern: `(cert/)?ca_chain`,
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.ReadOperation: b.pathFetchRead,
Operations: map[logical.Operation]framework.OperationHandler{
logical.ReadOperation: &framework.PathOperation{
Callback: b.pathFetchRead,
},
},
HelpSynopsis: pathFetchHelpSyn,
@@ -44,8 +48,10 @@ func pathFetchCRL(b *backend) *framework.Path {
return &framework.Path{
Pattern: `crl(/pem)?`,
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.ReadOperation: b.pathFetchRead,
Operations: map[logical.Operation]framework.OperationHandler{
logical.ReadOperation: &framework.PathOperation{
Callback: b.pathFetchRead,
},
},
HelpSynopsis: pathFetchHelpSyn,
@@ -65,8 +71,10 @@ hyphen-separated octal`,
},
},
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.ReadOperation: b.pathFetchRead,
Operations: map[logical.Operation]framework.OperationHandler{
logical.ReadOperation: &framework.PathOperation{
Callback: b.pathFetchRead,
},
},
HelpSynopsis: pathFetchHelpSyn,
@@ -87,8 +95,10 @@ hyphen-separated octal`,
},
},
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.ReadOperation: b.pathFetchRead,
Operations: map[logical.Operation]framework.OperationHandler{
logical.ReadOperation: &framework.PathOperation{
Callback: b.pathFetchRead,
},
},
HelpSynopsis: pathFetchHelpSyn,
@@ -101,8 +111,10 @@ func pathFetchCRLViaCertPath(b *backend) *framework.Path {
return &framework.Path{
Pattern: `cert/crl`,
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.ReadOperation: b.pathFetchRead,
Operations: map[logical.Operation]framework.OperationHandler{
logical.ReadOperation: &framework.PathOperation{
Callback: b.pathFetchRead,
},
},
HelpSynopsis: pathFetchHelpSyn,
@@ -115,8 +127,10 @@ func pathFetchListCerts(b *backend) *framework.Path {
return &framework.Path{
Pattern: "certs/?$",
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.ListOperation: b.pathFetchCertList,
Operations: map[logical.Operation]framework.OperationHandler{
logical.ListOperation: &framework.PathOperation{
Callback: b.pathFetchCertList,
},
},
HelpSynopsis: pathFetchHelpSyn,
@@ -124,7 +138,7 @@ func pathFetchListCerts(b *backend) *framework.Path {
}
}
func (b *backend) pathFetchCertList(ctx context.Context, req *logical.Request, data *framework.FieldData) (response *logical.Response, retErr error) {
func (b *backend) pathFetchCertList(ctx context.Context, req *logical.Request, _ *framework.FieldData) (response *logical.Response, retErr error) {
entries, err := req.Storage.List(ctx, "certs/")
if err != nil {
return nil, err
@@ -165,14 +179,14 @@ func (b *backend) pathFetchRead(ctx context.Context, req *logical.Request, data
contentType = "application/pkix-cert"
}
case req.Path == "crl" || req.Path == "crl/pem":
serial = "crl"
serial = legacyCRLPath
contentType = "application/pkix-crl"
if req.Path == "crl/pem" {
pemType = "X509 CRL"
contentType = "application/x-pem-file"
}
case req.Path == "cert/crl":
serial = "crl"
serial = legacyCRLPath
pemType = "X509 CRL"
case strings.HasSuffix(req.Path, "/pem") || strings.HasSuffix(req.Path, "/raw"):
serial = data.Get("serial").(string)
@@ -190,8 +204,9 @@ func (b *backend) pathFetchRead(ctx context.Context, req *logical.Request, data
goto reply
}
if serial == "ca_chain" {
caInfo, err := fetchCAInfo(ctx, b, req)
// Prefer fetchCAInfo to fetchCertBySerial for CA certificates.
if serial == "ca_chain" || serial == "ca" {
caInfo, err := fetchCAInfo(ctx, b, req, defaultRef, ReadOnlyUsage)
if err != nil {
switch err.(type) {
case errutil.UserError:
@@ -203,32 +218,37 @@ func (b *backend) pathFetchRead(ctx context.Context, req *logical.Request, data
}
}
caChain := caInfo.GetCAChain()
var certStr string
for _, ca := range caChain {
block := pem.Block{
Type: "CERTIFICATE",
Bytes: ca.Bytes,
if serial == "ca_chain" {
rawChain := caInfo.GetFullChain()
var chainStr string
for _, ca := range rawChain {
block := pem.Block{
Type: "CERTIFICATE",
Bytes: ca.Bytes,
}
chainStr = strings.Join([]string{chainStr, strings.TrimSpace(string(pem.EncodeToMemory(&block)))}, "\n")
}
certStr = strings.Join([]string{certStr, strings.TrimSpace(string(pem.EncodeToMemory(&block)))}, "\n")
}
certificate = []byte(strings.TrimSpace(certStr))
fullChain = []byte(strings.TrimSpace(chainStr))
certificate = fullChain
} else if serial == "ca" {
certificate = caInfo.Certificate.Raw
rawChain := caInfo.GetFullChain()
var chainStr string
for _, ca := range rawChain {
block := pem.Block{
Type: "CERTIFICATE",
Bytes: ca.Bytes,
if len(pemType) != 0 {
block := pem.Block{
Type: pemType,
Bytes: certificate,
}
// This is convoluted on purpose to ensure that we don't have trailing
// newlines via various paths
certificate = []byte(strings.TrimSpace(string(pem.EncodeToMemory(&block))))
}
chainStr = strings.Join([]string{chainStr, strings.TrimSpace(string(pem.EncodeToMemory(&block)))}, "\n")
}
fullChain = []byte(strings.TrimSpace(chainStr))
goto reply
}
certEntry, funcErr = fetchCertBySerial(ctx, req, req.Path, serial)
certEntry, funcErr = fetchCertBySerial(ctx, b, req, req.Path, serial)
if funcErr != nil {
switch funcErr.(type) {
case errutil.UserError:
@@ -256,7 +276,7 @@ func (b *backend) pathFetchRead(ctx context.Context, req *logical.Request, data
certificate = []byte(strings.TrimSpace(string(pem.EncodeToMemory(&block))))
}
revokedEntry, funcErr = fetchCertBySerial(ctx, req, "revoked/", serial)
revokedEntry, funcErr = fetchCertBySerial(ctx, b, req, "revoked/", serial)
if funcErr != nil {
switch funcErr.(type) {
case errutil.UserError:

View File

@@ -0,0 +1,563 @@
package pki
import (
"context"
"encoding/pem"
"fmt"
"strings"
"github.com/hashicorp/vault/sdk/framework"
"github.com/hashicorp/vault/sdk/helper/certutil"
"github.com/hashicorp/vault/sdk/logical"
)
func pathListIssuers(b *backend) *framework.Path {
return &framework.Path{
Pattern: "issuers/?$",
Operations: map[logical.Operation]framework.OperationHandler{
logical.ListOperation: &framework.PathOperation{
Callback: b.pathListIssuersHandler,
},
},
HelpSynopsis: pathListIssuersHelpSyn,
HelpDescription: pathListIssuersHelpDesc,
}
}
func (b *backend) pathListIssuersHandler(ctx context.Context, req *logical.Request, _ *framework.FieldData) (*logical.Response, error) {
if b.useLegacyBundleCaStorage() {
return logical.ErrorResponse("Can not list issuers until migration has completed"), nil
}
var responseKeys []string
responseInfo := make(map[string]interface{})
entries, err := listIssuers(ctx, req.Storage)
if err != nil {
return nil, err
}
// For each issuer, we need not only the identifier (as returned by
// listIssuers), but also the name of the issuer. This means we have to
// fetch the actual issuer object as well.
for _, identifier := range entries {
issuer, err := fetchIssuerById(ctx, req.Storage, identifier)
if err != nil {
return nil, err
}
responseKeys = append(responseKeys, string(identifier))
responseInfo[string(identifier)] = map[string]interface{}{
"issuer_name": issuer.Name,
}
}
return logical.ListResponseWithInfo(responseKeys, responseInfo), nil
}
const (
pathListIssuersHelpSyn = `Fetch a list of CA certificates.`
pathListIssuersHelpDesc = `
This endpoint allows listing of known issuing certificates, returning
their identifier and their name (if set).
`
)
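
As a rough illustration of how the list endpoint above is meant to be consumed, the sketch below uses the Vault Go API client to enumerate issuers. The "pki" mount path and the environment-based client configuration are assumptions for the example only and are not part of this change.

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/vault/api"
)

func main() {
	// Assumes VAULT_ADDR and VAULT_TOKEN are set in the environment and the
	// PKI engine is mounted at "pki/".
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// LIST pki/issuers returns the issuer IDs under "keys" and the
	// per-issuer metadata (issuer_name) under "key_info".
	secret, err := client.Logical().List("pki/issuers")
	if err != nil {
		log.Fatal(err)
	}
	if secret == nil {
		fmt.Println("no issuers configured")
		return
	}
	fmt.Println("issuer ids: ", secret.Data["keys"])
	fmt.Println("issuer info:", secret.Data["key_info"])
}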
func pathGetIssuer(b *backend) *framework.Path {
pattern := "issuer/" + framework.GenericNameRegex(issuerRefParam) + "(/der|/pem|/json)?"
return buildPathGetIssuer(b, pattern)
}
func buildPathGetIssuer(b *backend, pattern string) *framework.Path {
fields := map[string]*framework.FieldSchema{}
fields = addIssuerRefNameFields(fields)
// Fields for updating issuer.
fields["manual_chain"] = &framework.FieldSchema{
Type: framework.TypeCommaStringSlice,
Description: `Chain of issuer references to use to build this
issuer's computed CAChain field, when non-empty.`,
}
fields["leaf_not_after_behavior"] = &framework.FieldSchema{
Type: framework.TypeString,
Description: `Behavior of leaf's NotAfter fields: "err" to error
if the computed NotAfter date exceeds that of this issuer; "truncate" to
silently truncate to that of this issuer; or "permit" to allow this
issuance to succeed (with NotAfter exceeding that of an issuer). Note that
not all values will result in certificates that can be validated through
the entire validity period. It is suggested to use "truncate" for
intermediate CAs and "permit" only for root CAs.`,
Default: "err",
}
fields["usage"] = &framework.FieldSchema{
Type: framework.TypeCommaStringSlice,
Description: `Comma-separated list (or string slice) of usages for
this issuer; valid values are "read-only", "issuing-certificates", and
"crl-signing". Multiple values may be specified. Read-only is implicit
and always set.`,
Default: []string{"read-only", "issuing-certificates", "crl-signing"},
}
return &framework.Path{
// Returns a JSON entry.
Pattern: pattern,
Fields: fields,
Operations: map[logical.Operation]framework.OperationHandler{
logical.ReadOperation: &framework.PathOperation{
Callback: b.pathGetIssuer,
},
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathUpdateIssuer,
// Read more about why these flags are set in backend.go.
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
},
logical.DeleteOperation: &framework.PathOperation{
Callback: b.pathDeleteIssuer,
// Read more about why these flags are set in backend.go.
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
},
},
HelpSynopsis: pathGetIssuerHelpSyn,
HelpDescription: pathGetIssuerHelpDesc,
}
}
func (b *backend) pathGetIssuer(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
// Handle raw issuers first.
if strings.HasSuffix(req.Path, "/der") || strings.HasSuffix(req.Path, "/pem") || strings.HasSuffix(req.Path, "/json") {
return b.pathGetRawIssuer(ctx, req, data)
}
if b.useLegacyBundleCaStorage() {
return logical.ErrorResponse("Can not get issuer until migration has completed"), nil
}
issuerName := getIssuerRef(data)
if len(issuerName) == 0 {
return logical.ErrorResponse("missing issuer reference"), nil
}
ref, err := resolveIssuerReference(ctx, req.Storage, issuerName)
if err != nil {
return nil, err
}
if ref == "" {
return logical.ErrorResponse("unable to resolve issuer id for reference: " + issuerName), nil
}
issuer, err := fetchIssuerById(ctx, req.Storage, ref)
if err != nil {
return nil, err
}
var respManualChain []string
for _, entity := range issuer.ManualChain {
respManualChain = append(respManualChain, string(entity))
}
return &logical.Response{
Data: map[string]interface{}{
"issuer_id": issuer.ID,
"issuer_name": issuer.Name,
"key_id": issuer.KeyID,
"certificate": issuer.Certificate,
"manual_chain": respManualChain,
"ca_chain": issuer.CAChain,
"leaf_not_after_behavior": issuer.LeafNotAfterBehavior,
"usage": issuer.Usage.Names(),
},
}, nil
}
func (b *backend) pathUpdateIssuer(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
// Since we're planning on updating issuers here, grab the lock so we've
// got a consistent view.
b.issuersLock.Lock()
defer b.issuersLock.Unlock()
if b.useLegacyBundleCaStorage() {
return logical.ErrorResponse("Can not update issuer until migration has completed"), nil
}
issuerName := getIssuerRef(data)
if len(issuerName) == 0 {
return logical.ErrorResponse("missing issuer reference"), nil
}
ref, err := resolveIssuerReference(ctx, req.Storage, issuerName)
if err != nil {
return nil, err
}
if ref == "" {
return logical.ErrorResponse("unable to resolve issuer id for reference: " + issuerName), nil
}
issuer, err := fetchIssuerById(ctx, req.Storage, ref)
if err != nil {
return nil, err
}
newName, err := getIssuerName(ctx, req.Storage, data)
if err != nil && err != errIssuerNameInUse {
// If the error is name already in use, and the new name is the
// old name for this issuer, we're not actually updating the
// issuer name (or causing a conflict) -- so don't err out. Other
// errs should still be surfaced, however.
return logical.ErrorResponse(err.Error()), nil
}
if err == errIssuerNameInUse && issuer.Name != newName {
// When the new name is in use but isn't this name, throw an error.
return logical.ErrorResponse(err.Error()), nil
}
newPath := data.Get("manual_chain").([]string)
rawLeafBehavior := data.Get("leaf_not_after_behavior").(string)
var newLeafBehavior certutil.NotAfterBehavior
switch rawLeafBehavior {
case "err":
newLeafBehavior = certutil.ErrNotAfterBehavior
case "truncate":
newLeafBehavior = certutil.TruncateNotAfterBehavior
case "permit":
newLeafBehavior = certutil.PermitNotAfterBehavior
default:
return logical.ErrorResponse("Unknown value for field `leaf_not_after_behavior`. Possible values are `err`, `truncate`, and `permit`."), nil
}
rawUsage := data.Get("usage").([]string)
newUsage, err := NewIssuerUsageFromNames(rawUsage)
if err != nil {
return logical.ErrorResponse(fmt.Sprintf("Unable to parse specified usages: %v - valid values are %v", rawUsage, AllIssuerUsages.Names())), nil
}
modified := false
if newName != issuer.Name {
issuer.Name = newName
modified = true
}
if newLeafBehavior != issuer.LeafNotAfterBehavior {
issuer.LeafNotAfterBehavior = newLeafBehavior
modified = true
}
if newUsage != issuer.Usage {
issuer.Usage = newUsage
modified = true
}
var updateChain bool
var constructedChain []issuerID
for index, newPathRef := range newPath {
// Allow self for the first entry.
if index == 0 && newPathRef == "self" {
newPathRef = string(ref)
}
resolvedId, err := resolveIssuerReference(ctx, req.Storage, newPathRef)
if err != nil {
return nil, err
}
if index == 0 && resolvedId != ref {
return logical.ErrorResponse(fmt.Sprintf("expected first cert in chain to be a self-reference, but was: %v/%v", newPathRef, resolvedId)), nil
}
constructedChain = append(constructedChain, resolvedId)
if len(issuer.ManualChain) < len(constructedChain) || constructedChain[index] != issuer.ManualChain[index] {
updateChain = true
}
}
if len(issuer.ManualChain) != len(constructedChain) {
updateChain = true
}
if updateChain {
issuer.ManualChain = constructedChain
// Building the chain will write the issuer to disk; no need to do it
// twice.
modified = false
err := rebuildIssuersChains(ctx, req.Storage, issuer)
if err != nil {
return nil, err
}
}
if modified {
err := writeIssuer(ctx, req.Storage, issuer)
if err != nil {
return nil, err
}
}
var respManualChain []string
for _, entity := range issuer.ManualChain {
respManualChain = append(respManualChain, string(entity))
}
return &logical.Response{
Data: map[string]interface{}{
"issuer_id": issuer.ID,
"issuer_name": issuer.Name,
"key_id": issuer.KeyID,
"certificate": issuer.Certificate,
"manual_chain": respManualChain,
"ca_chain": issuer.CAChain,
"leaf_not_after_behavior": issuer.LeafNotAfterBehavior,
"usage": issuer.Usage.Names(),
},
}, nil
}
func (b *backend) pathGetRawIssuer(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
if b.useLegacyBundleCaStorage() {
return logical.ErrorResponse("Can not get issuer until migration has completed"), nil
}
issuerName := getIssuerRef(data)
if len(issuerName) == 0 {
return logical.ErrorResponse("missing issuer reference"), nil
}
ref, err := resolveIssuerReference(ctx, req.Storage, issuerName)
if err != nil {
return nil, err
}
if ref == "" {
return logical.ErrorResponse("unable to resolve issuer id for reference: " + issuerName), nil
}
issuer, err := fetchIssuerById(ctx, req.Storage, ref)
if err != nil {
return nil, err
}
certificate := []byte(issuer.Certificate)
var contentType string
if strings.HasSuffix(req.Path, "/pem") {
contentType = "application/pem-certificate-chain"
} else if strings.HasSuffix(req.Path, "/der") {
contentType = "application/pkix-cert"
}
if strings.HasSuffix(req.Path, "/der") {
pemBlock, _ := pem.Decode(certificate)
if pemBlock == nil {
return nil, fmt.Errorf("unable to parse stored issuer certificate as PEM")
}
certificate = pemBlock.Bytes
}
statusCode := 200
if len(certificate) == 0 {
statusCode = 204
}
if strings.HasSuffix(req.Path, "/pem") || strings.HasSuffix(req.Path, "/der") {
return &logical.Response{
Data: map[string]interface{}{
logical.HTTPContentType: contentType,
logical.HTTPRawBody: certificate,
logical.HTTPStatusCode: statusCode,
},
}, nil
} else {
return &logical.Response{
Data: map[string]interface{}{
"certificate": string(certificate),
"ca_chain": issuer.CAChain,
},
}, nil
}
}
func (b *backend) pathDeleteIssuer(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
// Since we're planning on updating issuers here, grab the lock so we've
// got a consistent view.
b.issuersLock.Lock()
defer b.issuersLock.Unlock()
if b.useLegacyBundleCaStorage() {
return logical.ErrorResponse("Can not delete issuer until migration has completed"), nil
}
issuerName := getIssuerRef(data)
if len(issuerName) == 0 {
return logical.ErrorResponse("missing issuer reference"), nil
}
ref, err := resolveIssuerReference(ctx, req.Storage, issuerName)
if err != nil {
return nil, err
}
if ref == "" {
return logical.ErrorResponse("unable to resolve issuer id for reference: " + issuerName), nil
}
wasDefault, err := deleteIssuer(ctx, req.Storage, ref)
if err != nil {
return nil, err
}
var response *logical.Response
if wasDefault {
response = &logical.Response{}
response.AddWarning(fmt.Sprintf("Deleted issuer %v (via issuer_ref %v); this was configured as the default issuer. Operations without an explicit issuer will not work until a new default is configured.", ref, issuerName))
}
// Since we've deleted an issuer, the chains might've changed. Call the
// rebuild code. We shouldn't technically err (as the issuer was deleted
// successfully), but we log a warning (and add one to the response) if this fails.
if err := rebuildIssuersChains(ctx, req.Storage, nil); err != nil {
msg := fmt.Sprintf("Failed to rebuild remaining issuers' chains: %v", err)
b.Logger().Error(msg)
if response == nil {
response = &logical.Response{}
}
response.AddWarning(msg)
}
return response, nil
}
const (
pathGetIssuerHelpSyn = `Fetch a single issuer certificate.`
pathGetIssuerHelpDesc = `
This allows fetching information associated with the underlying issuer
certificate.
:ref can be either the literal value "default", in which case /config/issuers
will be consulted for the present default issuer, an identifier of an issuer,
or its assigned name value.
Use /issuer/:ref/der or /issuer/:ref/pem to return just the certificate in
raw DER or PEM form, without the JSON structure of /issuer/:ref.
Writing to /issuer/:ref allows updating of the name field associated with
the certificate.
`
)
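
A minimal sketch of reading and renaming an issuer through these endpoints, assuming the engine is mounted at "pki/" and that the update handler accepts an issuer_name parameter (inferred from the response fields above); illustrative only:

package main

import (
	"fmt"

	"github.com/hashicorp/vault/api"
)

// renameIssuer reads the default issuer and assigns it a human-readable name.
func renameIssuer(client *api.Client, newName string) error {
	// GET pki/issuer/default resolves "default" via /config/issuers.
	secret, err := client.Logical().Read("pki/issuer/default")
	if err != nil {
		return err
	}
	if secret == nil {
		return fmt.Errorf("no default issuer configured")
	}
	fmt.Println("issuer_id:", secret.Data["issuer_id"])

	// POST pki/issuer/default updates mutable fields such as the name.
	_, err = client.Logical().Write("pki/issuer/default", map[string]interface{}{
		"issuer_name": newName,
	})
	return err
}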
func pathGetIssuerCRL(b *backend) *framework.Path {
pattern := "issuer/" + framework.GenericNameRegex(issuerRefParam) + "/crl(/pem|/der)?"
return buildPathGetIssuerCRL(b, pattern)
}
func buildPathGetIssuerCRL(b *backend, pattern string) *framework.Path {
fields := map[string]*framework.FieldSchema{}
fields = addIssuerRefNameFields(fields)
return &framework.Path{
// Returns raw values.
Pattern: pattern,
Fields: fields,
Operations: map[logical.Operation]framework.OperationHandler{
logical.ReadOperation: &framework.PathOperation{
Callback: b.pathGetIssuerCRL,
},
},
HelpSynopsis: pathGetIssuerCRLHelpSyn,
HelpDescription: pathGetIssuerCRLHelpDesc,
}
}
func (b *backend) pathGetIssuerCRL(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
if b.useLegacyBundleCaStorage() {
return logical.ErrorResponse("Can not get issuer's CRL until migration has completed"), nil
}
issuerName := getIssuerRef(data)
if len(issuerName) == 0 {
return logical.ErrorResponse("missing issuer reference"), nil
}
if err := b.crlBuilder.rebuildIfForced(ctx, b, req); err != nil {
return nil, err
}
crlPath, err := resolveIssuerCRLPath(ctx, b, req.Storage, issuerName)
if err != nil {
return nil, err
}
crlEntry, err := req.Storage.Get(ctx, crlPath)
if err != nil {
return nil, err
}
var certificate []byte
if crlEntry != nil && len(crlEntry.Value) > 0 {
certificate = []byte(crlEntry.Value)
}
var contentType string
if strings.HasSuffix(req.Path, "/der") {
contentType = "application/pkix-crl"
} else if strings.HasSuffix(req.Path, "/pem") {
contentType = "application/x-pem-file"
}
if !strings.HasSuffix(req.Path, "/der") {
// We'd rather return an empty response than an empty PEM blob.
// We build this PEM block for both the JSON and PEM endpoints.
if len(certificate) > 0 {
pemBlock := pem.Block{
Type: "X509 CRL",
Bytes: certificate,
}
certificate = pem.EncodeToMemory(&pemBlock)
}
}
statusCode := 200
if len(certificate) == 0 {
statusCode = 204
}
if strings.HasSuffix(req.Path, "/der") || strings.HasSuffix(req.Path, "/pem") {
return &logical.Response{
Data: map[string]interface{}{
logical.HTTPContentType: contentType,
logical.HTTPRawBody: certificate,
logical.HTTPStatusCode: statusCode,
},
}, nil
}
return &logical.Response{
Data: map[string]interface{}{
"crl": string(certificate),
},
}, nil
}
const (
pathGetIssuerCRLHelpSyn = `Fetch an issuer's Certificate Revocation List (CRL).`
pathGetIssuerCRLHelpDesc = `
This allows fetching the specified issuer's CRL. Note that this is different
than the legacy path (/crl and /certs/crl) in that this is per-issuer and not
just the default issuer's CRL.
Two issuers will have the same CRL if they have the same key material and if
they have the same Subject value.
:ref can be either the literal value "default", in which case /config/issuers
will be consulted for the present default issuer, an identifier of an issuer,
or its assigned name value.
- /issuer/:ref/crl is JSON-encoded and contains a PEM-encoded CRL,
- /issuer/:ref/crl/pem contains the PEM-encoded CRL,
- /issuer/:ref/crl/der contains the raw DER-encoded (binary) CRL.
`
)
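
A small sketch of fetching a per-issuer CRL via the JSON-flavored endpoint, assuming a "pki" mount; the /pem and /der variants return raw bodies instead of JSON:

package main

import (
	"fmt"

	"github.com/hashicorp/vault/api"
)

// fetchDefaultCRL returns the PEM-encoded CRL of the default issuer.
func fetchDefaultCRL(client *api.Client) (string, error) {
	// GET pki/issuer/default/crl returns {"crl": "<PEM>"}; the /pem and /der
	// suffixed paths return the raw body with a matching Content-Type instead.
	secret, err := client.Logical().Read("pki/issuer/default/crl")
	if err != nil {
		return "", err
	}
	if secret == nil {
		return "", fmt.Errorf("no CRL data returned")
	}
	crl, _ := secret.Data["crl"].(string)
	return crl, nil
}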

View File

@@ -0,0 +1,253 @@
package pki
import (
"context"
"fmt"
"github.com/hashicorp/vault/sdk/framework"
"github.com/hashicorp/vault/sdk/logical"
)
func pathListKeys(b *backend) *framework.Path {
return &framework.Path{
Pattern: "keys/?$",
Operations: map[logical.Operation]framework.OperationHandler{
logical.ListOperation: &framework.PathOperation{
Callback: b.pathListKeysHandler,
ForwardPerformanceStandby: false,
ForwardPerformanceSecondary: false,
},
},
HelpSynopsis: pathListKeysHelpSyn,
HelpDescription: pathListKeysHelpDesc,
}
}
const (
pathListKeysHelpSyn = `Fetch a list of all issuer keys`
pathListKeysHelpDesc = `This endpoint allows listing of known backing keys, returning
their identifier and their name (if set).`
)
func (b *backend) pathListKeysHandler(ctx context.Context, req *logical.Request, _ *framework.FieldData) (*logical.Response, error) {
if b.useLegacyBundleCaStorage() {
return logical.ErrorResponse("Can not list keys until migration has completed"), nil
}
var responseKeys []string
responseInfo := make(map[string]interface{})
entries, err := listKeys(ctx, req.Storage)
if err != nil {
return nil, err
}
for _, identifier := range entries {
key, err := fetchKeyById(ctx, req.Storage, identifier)
if err != nil {
return nil, err
}
responseKeys = append(responseKeys, string(identifier))
responseInfo[string(identifier)] = map[string]interface{}{
keyNameParam: key.Name,
}
}
return logical.ListResponseWithInfo(responseKeys, responseInfo), nil
}
func pathKey(b *backend) *framework.Path {
pattern := "key/" + framework.GenericNameRegex(keyRefParam)
return buildPathKey(b, pattern)
}
func buildPathKey(b *backend, pattern string) *framework.Path {
return &framework.Path{
Pattern: pattern,
Fields: map[string]*framework.FieldSchema{
keyRefParam: {
Type: framework.TypeString,
Description: `Reference to key; either "default" for the configured default key, an identifier of a key, or the name assigned to the key.`,
Default: defaultRef,
},
keyNameParam: {
Type: framework.TypeString,
Description: `Human-readable name for this key.`,
},
},
Operations: map[logical.Operation]framework.OperationHandler{
logical.ReadOperation: &framework.PathOperation{
Callback: b.pathGetKeyHandler,
ForwardPerformanceStandby: false,
ForwardPerformanceSecondary: false,
},
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathUpdateKeyHandler,
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
},
logical.DeleteOperation: &framework.PathOperation{
Callback: b.pathDeleteKeyHandler,
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
},
},
HelpSynopsis: pathKeysHelpSyn,
HelpDescription: pathKeysHelpDesc,
}
}
const (
pathKeysHelpSyn = `Fetch a single issuer key`
pathKeysHelpDesc = `This allows fetching information associated with the underlying key.
:ref can be either the literal value "default", in which case /config/keys
will be consulted for the present default key, an identifier of a key,
or its assigned name value.
Writing to /key/:ref allows updating of the name field associated with
the key.
`
)
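
A minimal sketch of reading and renaming a backing key via /key/:ref, assuming a "pki" mount and that keyNameParam/keyIdParam resolve to "key_name"/"key_id"; illustrative only:

package main

import (
	"fmt"

	"github.com/hashicorp/vault/api"
)

// renameKey reads a key by reference and assigns it a new name.
func renameKey(client *api.Client, ref, newName string) error {
	secret, err := client.Logical().Read("pki/key/" + ref)
	if err != nil {
		return err
	}
	if secret == nil {
		return fmt.Errorf("key %q not found", ref)
	}
	fmt.Println("key_id:", secret.Data["key_id"])

	// POST pki/key/:ref only updates the name; key material is immutable here.
	_, err = client.Logical().Write("pki/key/"+ref, map[string]interface{}{
		"key_name": newName,
	})
	return err
}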
func (b *backend) pathGetKeyHandler(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
if b.useLegacyBundleCaStorage() {
return logical.ErrorResponse("Can not get keys until migration has completed"), nil
}
keyRef := data.Get(keyRefParam).(string)
if len(keyRef) == 0 {
return logical.ErrorResponse("missing key reference"), nil
}
keyId, err := resolveKeyReference(ctx, req.Storage, keyRef)
if err != nil {
return nil, err
}
if keyId == "" {
return logical.ErrorResponse("unable to resolve key id for reference" + keyRef), nil
}
key, err := fetchKeyById(ctx, req.Storage, keyId)
if err != nil {
return nil, err
}
return &logical.Response{
Data: map[string]interface{}{
keyIdParam: key.ID,
keyNameParam: key.Name,
keyTypeParam: key.PrivateKeyType,
},
}, nil
}
func (b *backend) pathUpdateKeyHandler(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
// Since we're planning on updating keys here, grab the lock so we've
// got a consistent view.
b.issuersLock.Lock()
defer b.issuersLock.Unlock()
if b.useLegacyBundleCaStorage() {
return logical.ErrorResponse("Can not update keys until migration has completed"), nil
}
keyRef := data.Get(keyRefParam).(string)
if len(keyRef) == 0 {
return logical.ErrorResponse("missing key reference"), nil
}
keyId, err := resolveKeyReference(ctx, req.Storage, keyRef)
if err != nil {
return nil, err
}
if keyId == "" {
return logical.ErrorResponse("unable to resolve key id for reference" + keyRef), nil
}
key, err := fetchKeyById(ctx, req.Storage, keyId)
if err != nil {
return nil, err
}
newName := data.Get(keyNameParam).(string)
if len(newName) > 0 && !nameMatcher.MatchString(newName) {
return logical.ErrorResponse("new key name outside of valid character limits"), nil
}
if newName != key.Name {
key.Name = newName
err := writeKey(ctx, req.Storage, *key)
if err != nil {
return nil, err
}
}
resp := &logical.Response{
Data: map[string]interface{}{
keyIdParam: key.ID,
keyNameParam: key.Name,
keyTypeParam: key.PrivateKeyType,
},
}
if len(newName) == 0 {
resp.AddWarning("Name successfully deleted, you will now need to reference this key by it's Id: " + string(key.ID))
}
return resp, nil
}
func (b *backend) pathDeleteKeyHandler(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
// Since we're planning on updating issuers here, grab the lock so we've
// got a consistent view.
b.issuersLock.Lock()
defer b.issuersLock.Unlock()
if b.useLegacyBundleCaStorage() {
return logical.ErrorResponse("Can not delete keys until migration has completed"), nil
}
keyRef := data.Get(keyRefParam).(string)
if len(keyRef) == 0 {
return logical.ErrorResponse("missing key reference"), nil
}
keyId, err := resolveKeyReference(ctx, req.Storage, keyRef)
if err != nil {
return nil, err
}
if keyId == "" {
return logical.ErrorResponse("unable to resolve key id for reference" + keyRef), nil
}
keyInUse, issuerId, err := isKeyInUse(keyId.String(), ctx, req.Storage)
if err != nil {
return nil, err
}
if keyInUse {
return logical.ErrorResponse(fmt.Sprintf("Failed to Delete Key. Key in Use by Issuer: %s", issuerId)), nil
}
wasDefault, err := deleteKey(ctx, req.Storage, keyId)
if err != nil {
return nil, err
}
var response *logical.Response
if wasDefault {
msg := fmt.Sprintf("Deleted key %v (via key_ref %v); this was configured as the default key. Operations without an explicit key will not work until a new default is configured.", string(keyId), keyRef)
b.Logger().Error(msg)
response = &logical.Response{}
response.AddWarning(msg)
}
return response, nil
}

View File

@@ -6,38 +6,12 @@ import (
"fmt"
"github.com/hashicorp/vault/sdk/framework"
"github.com/hashicorp/vault/sdk/helper/certutil"
"github.com/hashicorp/vault/sdk/helper/errutil"
"github.com/hashicorp/vault/sdk/logical"
)
func pathGenerateIntermediate(b *backend) *framework.Path {
ret := &framework.Path{
Pattern: "intermediate/generate/" + framework.GenericNameRegex("exported"),
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathGenerateIntermediate,
// Read more about why these flags are set in backend.go
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
},
},
HelpSynopsis: pathGenerateIntermediateHelpSyn,
HelpDescription: pathGenerateIntermediateHelpDesc,
}
ret.Fields = addCACommonFields(map[string]*framework.FieldSchema{})
ret.Fields = addCAKeyGenerationFields(ret.Fields)
ret.Fields["add_basic_constraints"] = &framework.FieldSchema{
Type: framework.TypeBool,
Description: `Whether to add a Basic Constraints
extension with CA: true. Only needed as a
workaround in some compatibility scenarios
with Active Directory Certificate Services.`,
}
return ret
return buildPathGenerateIntermediate(b, "intermediate/generate/"+framework.GenericNameRegex("exported"))
}
func pathSetSignedIntermediate(b *backend) *framework.Path {
@@ -50,12 +24,13 @@ func pathSetSignedIntermediate(b *backend) *framework.Path {
Description: `PEM-format certificate. This must be a CA
certificate with a public key matching the
previously-generated key from the generation
endpoint.`,
endpoint. Additional parent CAs may be optionally
appended to the bundle.`,
},
},
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathSetSignedIntermediate,
Callback: b.pathImportIssuers,
// Read more about why these flags are set in backend.go
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
@@ -70,13 +45,33 @@ endpoint.`,
}
func (b *backend) pathGenerateIntermediate(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
// Since we're planning on updating issuers here, grab the lock so we've
// got a consistent view.
b.issuersLock.Lock()
defer b.issuersLock.Unlock()
var err error
exported, format, role, errorResp := b.getGenerationParams(ctx, data, req.MountPoint)
if b.useLegacyBundleCaStorage() {
return logical.ErrorResponse("Can not create intermediate until migration has completed"), nil
}
// Nasty hack :-) For cross-signing, we want to use the existing key, but
// this isn't _actually_ part of the path. Put it into the request
// parameters as if it was.
if req.Path == "intermediate/cross-sign" {
data.Raw["exported"] = "existing"
}
exported, format, role, errorResp := b.getGenerationParams(ctx, req.Storage, data, req.MountPoint)
if errorResp != nil {
return errorResp, nil
}
keyName, err := getKeyName(ctx, req.Storage, data)
if err != nil {
return logical.ErrorResponse(err.Error()), nil
}
var resp *logical.Response
input := &inputBundle{
role: role,
@@ -135,117 +130,15 @@ func (b *backend) pathGenerateIntermediate(ctx context.Context, req *logical.Req
}
}
cb := &certutil.CertBundle{}
cb.PrivateKey = csrb.PrivateKey
cb.PrivateKeyType = csrb.PrivateKeyType
entry, err := logical.StorageEntryJSON("config/ca_bundle", cb)
if err != nil {
return nil, err
}
err = req.Storage.Put(ctx, entry)
myKey, _, err := importKey(newManagedKeyContext(ctx, b, req.MountPoint), req.Storage, csrb.PrivateKey, keyName, csrb.PrivateKeyType)
if err != nil {
return nil, err
}
resp.Data["key_id"] = myKey.ID
return resp, nil
}
func (b *backend) pathSetSignedIntermediate(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
cert := data.Get("certificate").(string)
if cert == "" {
return logical.ErrorResponse("no certificate provided in the \"certificate\" parameter"), nil
}
inputBundle, err := certutil.ParsePEMBundle(cert)
if err != nil {
switch err.(type) {
case errutil.InternalError:
return nil, err
default:
return logical.ErrorResponse(err.Error()), nil
}
}
if inputBundle.Certificate == nil {
return logical.ErrorResponse("supplied certificate could not be successfully parsed"), nil
}
cb := &certutil.CertBundle{}
entry, err := req.Storage.Get(ctx, "config/ca_bundle")
if err != nil {
return nil, err
}
if entry == nil {
return logical.ErrorResponse("could not find any existing entry with a private key"), nil
}
err = entry.DecodeJSON(cb)
if err != nil {
return nil, err
}
if len(cb.PrivateKey) == 0 || cb.PrivateKeyType == "" {
return logical.ErrorResponse("could not find an existing private key"), nil
}
parsedCB, err := parseCABundle(ctx, b, req, cb)
if err != nil {
return nil, err
}
if parsedCB.PrivateKey == nil {
return nil, fmt.Errorf("saved key could not be parsed successfully")
}
inputBundle.PrivateKey = parsedCB.PrivateKey
inputBundle.PrivateKeyType = parsedCB.PrivateKeyType
inputBundle.PrivateKeyBytes = parsedCB.PrivateKeyBytes
if !inputBundle.Certificate.IsCA {
return logical.ErrorResponse("the given certificate is not marked for CA use and cannot be used with this backend"), nil
}
if err := inputBundle.Verify(); err != nil {
return nil, fmt.Errorf("verification of parsed bundle failed: %w", err)
}
cb, err = inputBundle.ToCertBundle()
if err != nil {
return nil, fmt.Errorf("error converting raw values into cert bundle: %w", err)
}
entry, err = logical.StorageEntryJSON("config/ca_bundle", cb)
if err != nil {
return nil, err
}
err = req.Storage.Put(ctx, entry)
if err != nil {
return nil, err
}
entry.Key = "certs/" + normalizeSerial(cb.SerialNumber)
entry.Value = inputBundle.CertificateBytes
err = req.Storage.Put(ctx, entry)
if err != nil {
return nil, err
}
// For ease of later use, also store just the certificate at a known
// location
entry.Key = "ca"
entry.Value = inputBundle.CertificateBytes
err = req.Storage.Put(ctx, entry)
if err != nil {
return nil, err
}
// Build a fresh CRL
err = buildCRL(ctx, b, req, true)
return nil, err
}
const pathGenerateIntermediateHelpSyn = `
Generate a new CSR and private key used for signing.
`

View File

@@ -5,6 +5,7 @@ import (
"crypto/rand"
"encoding/base64"
"fmt"
"strings"
"time"
"github.com/hashicorp/vault/sdk/framework"
@@ -15,11 +16,23 @@ import (
)
func pathIssue(b *backend) *framework.Path {
ret := &framework.Path{
Pattern: "issue/" + framework.GenericNameRegex("role"),
pattern := "issue/" + framework.GenericNameRegex("role")
return buildPathIssue(b, pattern)
}
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.UpdateOperation: b.metricsWrap("issue", roleRequired, b.pathIssue),
func pathIssuerIssue(b *backend) *framework.Path {
pattern := "issuer/" + framework.GenericNameRegex(issuerRefParam) + "/issue/" + framework.GenericNameRegex("role")
return buildPathIssue(b, pattern)
}
func buildPathIssue(b *backend, pattern string) *framework.Path {
ret := &framework.Path{
Pattern: pattern,
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.metricsWrap("issue", roleRequired, b.pathIssue),
},
},
HelpSynopsis: pathIssueHelpSyn,
@@ -31,11 +44,23 @@ func pathIssue(b *backend) *framework.Path {
}
func pathSign(b *backend) *framework.Path {
ret := &framework.Path{
Pattern: "sign/" + framework.GenericNameRegex("role"),
pattern := "sign/" + framework.GenericNameRegex("role")
return buildPathSign(b, pattern)
}
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.UpdateOperation: b.metricsWrap("sign", roleRequired, b.pathSign),
func pathIssuerSign(b *backend) *framework.Path {
pattern := "issuer/" + framework.GenericNameRegex(issuerRefParam) + "/sign/" + framework.GenericNameRegex("role")
return buildPathSign(b, pattern)
}
func buildPathSign(b *backend, pattern string) *framework.Path {
ret := &framework.Path{
Pattern: pattern,
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.metricsWrap("sign", roleRequired, b.pathSign),
},
},
HelpSynopsis: pathSignHelpSyn,
@@ -53,19 +78,32 @@ func pathSign(b *backend) *framework.Path {
return ret
}
func pathSignVerbatim(b *backend) *framework.Path {
ret := &framework.Path{
Pattern: "sign-verbatim" + framework.OptionalParamRegex("role"),
func pathIssuerSignVerbatim(b *backend) *framework.Path {
pattern := "issuer/" + framework.GenericNameRegex(issuerRefParam) + "/sign-verbatim"
return buildPathIssuerSignVerbatim(b, pattern)
}
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.UpdateOperation: b.metricsWrap("sign-verbatim", roleOptional, b.pathSignVerbatim),
func pathSignVerbatim(b *backend) *framework.Path {
pattern := "sign-verbatim" + framework.OptionalParamRegex("role")
return buildPathIssuerSignVerbatim(b, pattern)
}
func buildPathIssuerSignVerbatim(b *backend, pattern string) *framework.Path {
ret := &framework.Path{
Pattern: pattern,
Fields: map[string]*framework.FieldSchema{},
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.metricsWrap("sign-verbatim", roleOptional, b.pathSignVerbatim),
},
},
HelpSynopsis: pathSignHelpSyn,
HelpDescription: pathSignHelpDesc,
HelpSynopsis: pathIssuerSignVerbatimHelpSyn,
HelpDescription: pathIssuerSignVerbatimHelpDesc,
}
ret.Fields = addNonCACommonFields(map[string]*framework.FieldSchema{})
ret.Fields = addNonCACommonFields(ret.Fields)
ret.Fields["csr"] = &framework.FieldSchema{
Type: framework.TypeString,
@@ -104,6 +142,26 @@ this value to an empty list.`,
return ret
}
const (
pathIssuerSignVerbatimHelpSyn = `Issue a certificate directly based on the provided CSR.`
pathIssuerSignVerbatimHelpDesc = `
This API endpoint allows for directly signing the specified certificate
signing request (CSR) without the typical role-based validation. This
allows for attributes from the CSR to be directly copied to the resulting
certificate.
Usually the role-based sign operations (/sign and /issue) are preferred to
this operation.
Note that this is a very privileged operation and should be extremely
restricted in terms of who is allowed to use it. All values will be taken
directly from the incoming CSR. No further verification of attributes is
performed, except as permitted by this endpoint's parameters.
See the API documentation for more information about required parameters.
`
)
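
A short sketch of signing a CSR verbatim against an explicitly chosen issuer, assuming a "pki" mount and a PEM-encoded CSR supplied by the caller; the legacy sign-verbatim path would use the role-pinned or default issuer instead:

package main

import "github.com/hashicorp/vault/api"

// signVerbatim signs the provided PEM-encoded CSR against a chosen issuer.
func signVerbatim(client *api.Client, issuerRef, csrPEM string) (*api.Secret, error) {
	// The legacy "pki/sign-verbatim" path uses the role-pinned or default
	// issuer; the issuer-scoped path below selects one explicitly.
	return client.Logical().Write("pki/issuer/"+issuerRef+"/sign-verbatim", map[string]interface{}{
		"csr": csrPEM,
	})
}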
// pathIssue issues a certificate and private key from given parameters,
// subject to role restrictions
func (b *backend) pathIssue(ctx context.Context, req *logical.Request, data *framework.FieldData, role *roleEntry) (*logical.Response, error) {
@@ -155,6 +213,11 @@ func (b *backend) pathSignVerbatim(ctx context.Context, req *logical.Request, da
*entry.GenerateLease = *role.GenerateLease
}
entry.NoStore = role.NoStore
entry.Issuer = role.Issuer
}
if len(entry.Issuer) == 0 {
entry.Issuer = defaultRef
}
return b.pathIssueSignCert(ctx, req, data, entry, true, true)
@@ -167,6 +230,31 @@ func (b *backend) pathIssueSignCert(ctx context.Context, req *logical.Request, d
return nil, logical.ErrReadOnly
}
// We prefer the issuer from the role in two cases:
//
// 1. On the legacy sign-verbatim paths, as we always provision an issuer
// in both the role and role-less cases, and
// 2. On the legacy sign/:role or issue/:role paths, as the issuer was
// set on the role directly (either via upgrade or not). Note that
// the updated issuer/:ref/{sign,issue}/:role path is not affected,
// and we instead pull the issuer out of the path instead (which
// allows users with access to those paths to manually choose their
// issuer in desired scenarios).
var issuerName string
if strings.HasPrefix(req.Path, "sign-verbatim/") || strings.HasPrefix(req.Path, "sign/") || strings.HasPrefix(req.Path, "issue/") {
issuerName = role.Issuer
if len(issuerName) == 0 {
issuerName = defaultRef
}
} else {
// Otherwise, we must have a newer API which requires an issuer
// reference. Fetch it in this case.
issuerName = getIssuerRef(data)
if len(issuerName) == 0 {
return logical.ErrorResponse("missing issuer reference"), nil
}
}
format := getFormat(data)
if format == "" {
return logical.ErrorResponse(
@@ -174,7 +262,7 @@ func (b *backend) pathIssueSignCert(ctx context.Context, req *logical.Request, d
}
var caErr error
signingBundle, caErr := fetchCAInfo(ctx, b, req)
signingBundle, caErr := fetchCAInfo(ctx, b, req, issuerName, IssuanceUsage)
if caErr != nil {
switch caErr.(type) {
case errutil.UserError:

View File

@@ -0,0 +1,254 @@
package pki
import (
"bytes"
"context"
"encoding/pem"
"fmt"
"strings"
"github.com/hashicorp/vault/sdk/framework"
"github.com/hashicorp/vault/sdk/helper/errutil"
"github.com/hashicorp/vault/sdk/logical"
)
func pathIssuerGenerateRoot(b *backend) *framework.Path {
return buildPathGenerateRoot(b, "issuers/generate/root/"+framework.GenericNameRegex("exported"))
}
func pathRotateRoot(b *backend) *framework.Path {
return buildPathGenerateRoot(b, "root/rotate/"+framework.GenericNameRegex("exported"))
}
func buildPathGenerateRoot(b *backend, pattern string) *framework.Path {
ret := &framework.Path{
Pattern: pattern,
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathCAGenerateRoot,
// Read more about why these flags are set in backend.go
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
},
},
HelpSynopsis: pathGenerateRootHelpSyn,
HelpDescription: pathGenerateRootHelpDesc,
}
ret.Fields = addCACommonFields(map[string]*framework.FieldSchema{})
ret.Fields = addCAKeyGenerationFields(ret.Fields)
ret.Fields = addCAIssueFields(ret.Fields)
return ret
}
func pathIssuerGenerateIntermediate(b *backend) *framework.Path {
return buildPathGenerateIntermediate(b,
"issuers/generate/intermediate/"+framework.GenericNameRegex("exported"))
}
func pathCrossSignIntermediate(b *backend) *framework.Path {
return buildPathGenerateIntermediate(b, "intermediate/cross-sign")
}
func buildPathGenerateIntermediate(b *backend, pattern string) *framework.Path {
ret := &framework.Path{
Pattern: pattern,
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathGenerateIntermediate,
// Read more about why these flags are set in backend.go
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
},
},
HelpSynopsis: pathGenerateIntermediateHelpSyn,
HelpDescription: pathGenerateIntermediateHelpDesc,
}
ret.Fields = addCACommonFields(map[string]*framework.FieldSchema{})
ret.Fields = addCAKeyGenerationFields(ret.Fields)
ret.Fields["add_basic_constraints"] = &framework.FieldSchema{
Type: framework.TypeBool,
Description: `Whether to add a Basic Constraints
extension with CA: true. Only needed as a
workaround in some compatibility scenarios
with Active Directory Certificate Services.`,
}
return ret
}
func pathImportIssuer(b *backend) *framework.Path {
return &framework.Path{
Pattern: "issuers/import/(cert|bundle)",
Fields: map[string]*framework.FieldSchema{
"pem_bundle": {
Type: framework.TypeString,
Description: `PEM-format, concatenated unencrypted
secret-key (optional) and certificates.`,
},
},
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathImportIssuers,
// Read more about why these flags are set in backend.go
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
},
},
HelpSynopsis: pathImportIssuersHelpSyn,
HelpDescription: pathImportIssuersHelpDesc,
}
}
func (b *backend) pathImportIssuers(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
// Since we're planning on updating issuers here, grab the lock so we've
// got a consistent view.
b.issuersLock.Lock()
defer b.issuersLock.Unlock()
keysAllowed := strings.HasSuffix(req.Path, "bundle") || req.Path == "config/ca"
if b.useLegacyBundleCaStorage() {
return logical.ErrorResponse("Can not import issuers until migration has completed"), nil
}
var pemBundle string
var certificate string
rawPemBundle, bundleOk := data.GetOk("pem_bundle")
rawCertificate, certOk := data.GetOk("certificate")
if bundleOk {
pemBundle = rawPemBundle.(string)
}
if certOk {
certificate = rawCertificate.(string)
}
if len(pemBundle) == 0 && len(certificate) == 0 {
return logical.ErrorResponse("'pem_bundle' and 'certificate' parameters were empty"), nil
}
if len(pemBundle) > 0 && len(certificate) > 0 {
return logical.ErrorResponse("'pem_bundle' and 'certificate' parameters were both provided"), nil
}
if len(certificate) > 0 {
keysAllowed = false
pemBundle = certificate
}
var createdKeys []string
var createdIssuers []string
issuerKeyMap := make(map[string]string)
// Rather than using certutil.ParsePEMBundle (which restricts the
// construction of the PEM bundle), we manually parse the bundle instead.
pemBytes := []byte(pemBundle)
var pemBlock *pem.Block
var issuers []string
var keys []string
// By decoding and re-encoding PEM blobs, we can pass strict PEM blobs
// to the import functionality (importKeys, importIssuers). This allows
// them to validate no duplicate issuers exist (and place greater
// restrictions during parsing) but allows this code to accept OpenSSL
// parsed chains (with full textual output between PEM entries).
for len(bytes.TrimSpace(pemBytes)) > 0 {
pemBlock, pemBytes = pem.Decode(pemBytes)
if pemBlock == nil {
return nil, errutil.UserError{Err: "no data found in PEM block"}
}
pemBlockString := string(pem.EncodeToMemory(pemBlock))
switch pemBlock.Type {
case "CERTIFICATE", "X509 CERTIFICATE":
// Must be a certificate
issuers = append(issuers, pemBlockString)
case "CRL", "X509 CRL":
// Ignore any CRL entries.
default:
// Otherwise, treat them as keys.
keys = append(keys, pemBlockString)
}
}
if len(keys) > 0 && !keysAllowed {
return logical.ErrorResponse("private keys found in the PEM bundle but not allowed by the path; use /issuers/import/bundle"), nil
}
mkc := newManagedKeyContext(ctx, b, req.MountPoint)
for keyIndex, keyPem := range keys {
// Handle import of private key.
key, existing, err := importKeyFromBytes(mkc, req.Storage, keyPem, "")
if err != nil {
return logical.ErrorResponse(fmt.Sprintf("Error parsing key %v: %v", keyIndex, err)), nil
}
if !existing {
createdKeys = append(createdKeys, key.ID.String())
}
}
for certIndex, certPem := range issuers {
cert, existing, err := importIssuer(mkc, req.Storage, certPem, "")
if err != nil {
return logical.ErrorResponse(fmt.Sprintf("Error parsing issuer %v: %v\n%v", certIndex, err, certPem)), nil
}
issuerKeyMap[cert.ID.String()] = cert.KeyID.String()
if !existing {
createdIssuers = append(createdIssuers, cert.ID.String())
}
}
response := &logical.Response{
Data: map[string]interface{}{
"mapping": issuerKeyMap,
"imported_keys": createdKeys,
"imported_issuers": createdIssuers,
},
}
if len(createdIssuers) > 0 {
err := b.crlBuilder.rebuild(ctx, b, req, true)
if err != nil {
return nil, err
}
}
// While we're here, check if we should warn about a bad default key. We
// do this unconditionally if the issuer or key was modified, so the admin
// is always warned. But if unrelated key material was imported, we do
// not warn.
config, err := getIssuersConfig(ctx, req.Storage)
if err == nil && len(config.DefaultIssuerId) > 0 {
// We can use the mapping above to check the issuer mapping.
if keyId, ok := issuerKeyMap[string(config.DefaultIssuerId)]; ok && len(keyId) == 0 {
msg := "The default issuer has no key associated with it. Some operations like issuing certificates and signing CRLs will be unavailable with the requested default issuer until a key is imported or the default issuer is changed."
response.AddWarning(msg)
b.Logger().Error(msg)
}
}
return response, nil
}
const (
pathImportIssuersHelpSyn = `Import the specified issuing certificates.`
pathImportIssuersHelpDesc = `
This endpoint allows importing the specified issuer certificates.
:type is either the literal value "cert", to only allow importing
certificates, else "bundle" to allow importing keys as well as
certificates.
Depending on the value of :type, the pem_bundle request parameter can
contain PEM-formatted certificates and, if :type="bundle", unencrypted
secret keys as well.
`
)
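
A minimal sketch of importing a CA bundle through this endpoint with the Vault Go API client; the "pki" mount and the on-disk bundle path are assumptions for the example:

package main

import (
	"os"

	"github.com/hashicorp/vault/api"
)

// importBundle uploads a PEM bundle (certificates plus optional keys).
func importBundle(client *api.Client, bundlePath string) (*api.Secret, error) {
	pemBundle, err := os.ReadFile(bundlePath)
	if err != nil {
		return nil, err
	}
	// Use pki/issuers/import/cert instead when keys must be rejected.
	return client.Logical().Write("pki/issuers/import/bundle", map[string]interface{}{
		"pem_bundle": string(pemBundle),
	})
}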

View File

@@ -0,0 +1,195 @@
package pki
import (
"context"
"strings"
"github.com/hashicorp/vault/sdk/framework"
"github.com/hashicorp/vault/sdk/helper/certutil"
"github.com/hashicorp/vault/sdk/logical"
)
func pathGenerateKey(b *backend) *framework.Path {
return &framework.Path{
Pattern: "keys/generate/(internal|exported|kms)",
Fields: map[string]*framework.FieldSchema{
keyNameParam: {
Type: framework.TypeString,
Description: "Optional name to be used for this key",
},
keyTypeParam: {
Type: framework.TypeString,
Default: "rsa",
Description: `Type of the secret key to generate`,
},
keyBitsParam: {
Type: framework.TypeInt,
Default: 2048,
Description: `Number of bits to use for the generated secret key`,
},
"managed_key_name": {
Type: framework.TypeString,
Description: `The name of the managed key to use when the exported
type is kms. When kms type is the key type, this field or managed_key_id
is required. Ignored for other types.`,
},
"managed_key_id": {
Type: framework.TypeString,
Description: `The id of the managed key to use when the exported
type is kms. When kms type is the key type, this field or managed_key_name
is required. Ignored for other types.`,
},
},
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathGenerateKeyHandler,
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
},
},
HelpSynopsis: pathGenerateKeyHelpSyn,
HelpDescription: pathGenerateKeyHelpDesc,
}
}
const (
pathGenerateKeyHelpSyn = `Generate a new private key used for signing.`
pathGenerateKeyHelpDesc = `This endpoint will generate a new key pair of the specified type (internal, exported, or kms).`
)
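
A small sketch of generating an internal backing key, assuming a "pki" mount and that keyNameParam/keyTypeParam/keyBitsParam resolve to "key_name"/"key_type"/"key_bits"; illustrative only:

package main

import (
	"fmt"

	"github.com/hashicorp/vault/api"
)

// generateInternalKey creates a new RSA key held inside Vault.
func generateInternalKey(client *api.Client, name string) error {
	// keys/generate/exported would additionally return "private_key";
	// keys/generate/kms expects managed_key_name or managed_key_id.
	secret, err := client.Logical().Write("pki/keys/generate/internal", map[string]interface{}{
		"key_name": name,
		"key_type": "rsa",
		"key_bits": 2048,
	})
	if err != nil {
		return err
	}
	fmt.Println("key_id:", secret.Data["key_id"])
	return nil
}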
func (b *backend) pathGenerateKeyHandler(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
// Since we're planning on updating issuers here, grab the lock so we've
// got a consistent view.
b.issuersLock.Lock()
defer b.issuersLock.Unlock()
keyName, err := getKeyName(ctx, req.Storage, data)
if err != nil { // Fail Immediately if Key Name is in Use, etc...
return nil, err
}
mkc := newManagedKeyContext(ctx, b, req.MountPoint)
exportPrivateKey := false
var keyBundle certutil.KeyBundle
var actualPrivateKeyType certutil.PrivateKeyType
switch {
case strings.HasSuffix(req.Path, "/exported"):
exportPrivateKey = true
fallthrough
case strings.HasSuffix(req.Path, "/internal"):
keyType := data.Get(keyTypeParam).(string)
keyBits := data.Get(keyBitsParam).(int)
// Internal key generation, stored in storage
keyBundle, err = certutil.CreateKeyBundle(keyType, keyBits, b.GetRandomReader())
if err != nil {
return nil, err
}
actualPrivateKeyType = keyBundle.PrivateKeyType
case strings.HasSuffix(req.Path, "/kms"):
keyId, err := getManagedKeyId(data)
if err != nil {
return nil, err
}
keyBundle, actualPrivateKeyType, err = createKmsKeyBundle(mkc, keyId)
if err != nil {
return nil, err
}
default:
return logical.ErrorResponse("Unknown type of key to generate"), nil
}
privateKeyPemString, err := keyBundle.ToPrivateKeyPemString()
if err != nil {
return nil, err
}
key, _, err := importKey(mkc, req.Storage, privateKeyPemString, keyName, keyBundle.PrivateKeyType)
if err != nil {
return nil, err
}
responseData := map[string]interface{}{
keyIdParam: key.ID,
keyNameParam: key.Name,
keyTypeParam: string(actualPrivateKeyType),
}
if exportPrivateKey {
responseData["private_key"] = privateKeyPemString
}
return &logical.Response{
Data: responseData,
}, nil
}
func pathImportKey(b *backend) *framework.Path {
return &framework.Path{
Pattern: "keys/import",
Fields: map[string]*framework.FieldSchema{
keyNameParam: {
Type: framework.TypeString,
Description: "Optional name to be used for this key",
},
"pem_bundle": {
Type: framework.TypeString,
Description: `PEM-format, unencrypted secret key`,
},
},
Operations: map[logical.Operation]framework.OperationHandler{
logical.CreateOperation: &framework.PathOperation{
Callback: b.pathImportKeyHandler,
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
},
},
HelpSynopsis: pathImportKeyHelpSyn,
HelpDescription: pathImportKeyHelpDesc,
}
}
const (
pathImportKeyHelpSyn = `Import the specified key.`
pathImportKeyHelpDesc = `This endpoint allows importing a specified issuer key from a pem bundle.
If name is set, that will be set on the key.`
)
func (b *backend) pathImportKeyHandler(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
// Since we're planning on updating issuers here, grab the lock so we've
// got a consistent view.
b.issuersLock.Lock()
defer b.issuersLock.Unlock()
keyValueInterface, isOk := data.GetOk("pem_bundle")
if !isOk {
return logical.ErrorResponse("keyValue must be set"), nil
}
keyValue := keyValueInterface.(string)
keyName := data.Get(keyNameParam).(string)
mkc := newManagedKeyContext(ctx, b, req.MountPoint)
key, existed, err := importKeyFromBytes(mkc, req.Storage, keyValue, keyName)
if err != nil {
return logical.ErrorResponse(err.Error()), nil
}
resp := logical.Response{
Data: map[string]interface{}{
keyIdParam: key.ID,
keyNameParam: key.Name,
keyTypeParam: key.PrivateKeyType,
},
}
if existed {
resp.AddWarning("Key already imported, use key/ endpoint to update name.")
}
return &resp, nil
}

View File

@@ -22,8 +22,14 @@ hyphen-separated octal`,
},
},
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.UpdateOperation: b.metricsWrap("revoke", noRole, b.pathRevokeWrite),
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.metricsWrap("revoke", noRole, b.pathRevokeWrite),
// This should never be forwarded. See backend.go for more information.
// If this needs to write, the entire request will be forwarded to the
// active node of the current performance cluster, but we don't want to
// forward invalid revoke requests there.
},
},
HelpSynopsis: pathRevokeHelpSyn,
@@ -35,8 +41,14 @@ func pathRotateCRL(b *backend) *framework.Path {
return &framework.Path{
Pattern: `crl/rotate`,
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.ReadOperation: b.pathRotateCRLRead,
Operations: map[logical.Operation]framework.OperationHandler{
logical.ReadOperation: &framework.PathOperation{
Callback: b.pathRotateCRLRead,
// See backend.go; we will read a lot of data prior to calling write,
// so this request should be forwarded when it is first seen, not
// when it is ready to write.
ForwardPerformanceStandby: true,
},
},
HelpSynopsis: pathRotateCRLHelpSyn,
@@ -64,11 +76,11 @@ func (b *backend) pathRevokeWrite(ctx context.Context, req *logical.Request, dat
return revokeCert(ctx, b, req, serial, false)
}
func (b *backend) pathRotateCRLRead(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
func (b *backend) pathRotateCRLRead(ctx context.Context, req *logical.Request, _ *framework.FieldData) (*logical.Response, error) {
b.revokeStorageLock.RLock()
defer b.revokeStorageLock.RUnlock()
crlErr := buildCRL(ctx, b, req, false)
crlErr := b.crlBuilder.rebuild(ctx, b, req, false)
if crlErr != nil {
switch crlErr.(type) {
case errutil.UserError:

View File

@@ -18,8 +18,10 @@ func pathListRoles(b *backend) *framework.Path {
return &framework.Path{
Pattern: "roles/?$",
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.ListOperation: b.pathRoleList,
Operations: map[logical.Operation]framework.OperationHandler{
logical.ListOperation: &framework.PathOperation{
Callback: b.pathRoleList,
},
},
HelpSynopsis: pathListRolesHelpSyn,
@@ -405,12 +407,30 @@ for "generate_lease".`,
Description: `Set the not after field of the certificate with specified date value.
The value format should be given in UTC format YYYY-MM-ddTHH:MM:SSZ.`,
},
"issuer_ref": {
Type: framework.TypeString,
Description: `Reference to the issuer used to sign requests
serviced by this role.`,
Default: defaultRef,
},
},
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.ReadOperation: b.pathRoleRead,
logical.UpdateOperation: b.pathRoleCreate,
logical.DeleteOperation: b.pathRoleDelete,
Operations: map[logical.Operation]framework.OperationHandler{
logical.ReadOperation: &framework.PathOperation{
Callback: b.pathRoleRead,
},
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathRoleCreate,
// Read more about why these flags are set in backend.go.
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
},
logical.DeleteOperation: &framework.PathOperation{
Callback: b.pathRoleDelete,
// Read more about why these flags are set in backend.go.
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
},
},
HelpSynopsis: pathRoleHelpSyn,
@@ -527,6 +547,14 @@ func (b *backend) getRole(ctx context.Context, s logical.Storage, n string) (*ro
modified = true
}
// Set the issuer field to default if not set. We want to do this
// unconditionally as we should probably never have an empty issuer
// on a stored role.
if len(result.Issuer) == 0 {
result.Issuer = defaultRef
modified = true
}
if modified && (b.System().LocalMount() || !b.System().ReplicationState().HasState(consts.ReplicationPerformanceSecondary)) {
jsonEntry, err := logical.StorageEntryJSON("role/"+n, &result)
if err != nil {
@@ -572,7 +600,7 @@ func (b *backend) pathRoleRead(ctx context.Context, req *logical.Request, data *
return resp, nil
}
func (b *backend) pathRoleList(ctx context.Context, req *logical.Request, d *framework.FieldData) (*logical.Response, error) {
func (b *backend) pathRoleList(ctx context.Context, req *logical.Request, _ *framework.FieldData) (*logical.Response, error) {
entries, err := req.Storage.List(ctx, "role/")
if err != nil {
return nil, err
@@ -628,6 +656,7 @@ func (b *backend) pathRoleCreate(ctx context.Context, req *logical.Request, data
BasicConstraintsValidForNonCA: data.Get("basic_constraints_valid_for_non_ca").(bool),
NotBeforeDuration: time.Duration(data.Get("not_before_duration").(int)) * time.Second,
NotAfter: data.Get("not_after").(string),
Issuer: data.Get("issuer_ref").(string),
}
allowedOtherSANs := data.Get("allowed_other_sans").([]string)
@@ -681,14 +710,22 @@ func (b *backend) pathRoleCreate(ctx context.Context, req *logical.Request, data
}
}
allow_wildcard_certificates, present := data.GetOk("allow_wildcard_certificates")
allowWildcardCertificates, present := data.GetOk("allow_wildcard_certificates")
if !present {
// While not the most secure default, when AllowWildcardCertificates isn't
// explicitly specified in the request, we automatically set it to true to
// preserve compatibility with previous versions of Vault.
allow_wildcard_certificates = true
allowWildcardCertificates = true
}
*entry.AllowWildcardCertificates = allowWildcardCertificates.(bool)
// Ensure issuer ref is set to a non-empty value. Note that we never
// resolve the reference (to an issuerId) at role creation time; instead,
// resolve it at use time. This allows values such as `default` or other
// user-assigned names to "float" and change over time.
if len(entry.Issuer) == 0 {
entry.Issuer = defaultRef
}
*entry.AllowWildcardCertificates = allow_wildcard_certificates.(bool)
// Store it
jsonEntry, err := logical.StorageEntryJSON("role/"+name, entry)
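To make the floating `issuer_ref` behavior concrete, here is an illustrative client-side sketch (not part of this diff). It assumes the Go API client from `github.com/hashicorp/vault/api`, a PKI engine mounted at `pki`, and a role named `example`; omitting `issuer_ref` stores the literal `default`, which is only resolved to a concrete issuer when the role is used.

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/vault/api"
)

func main() {
	// Assumes VAULT_ADDR and VAULT_TOKEN are set in the environment.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Create a role without specifying issuer_ref; the backend stores the
	// literal "default" reference rather than resolving it now.
	if _, err := client.Logical().Write("pki/roles/example", map[string]interface{}{
		"allowed_domains":  "example.com",
		"allow_subdomains": true,
	}); err != nil {
		log.Fatal(err)
	}

	// Reading the role back shows issuer_ref=default; which issuer that
	// resolves to is decided at issuance time, so changing the mount's
	// default issuer later changes what this role signs with.
	role, err := client.Logical().Read("pki/roles/example")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(role.Data["issuer_ref"])
}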
@@ -836,8 +873,7 @@ type roleEntry struct {
BasicConstraintsValidForNonCA bool `json:"basic_constraints_valid_for_non_ca" mapstructure:"basic_constraints_valid_for_non_ca"`
NotBeforeDuration time.Duration `json:"not_before_duration" mapstructure:"not_before_duration"`
NotAfter string `json:"not_after" mapstructure:"not_after"`
// Used internally for signing intermediates
AllowExpirationPastCA bool
Issuer string `json:"issuer" mapstructure:"issuer"`
}
func (r *roleEntry) ToResponseData() map[string]interface{} {
@@ -884,6 +920,7 @@ func (r *roleEntry) ToResponseData() map[string]interface{} {
"basic_constraints_valid_for_non_ca": r.BasicConstraintsValidForNonCA,
"not_before_duration": int64(r.NotBeforeDuration.Seconds()),
"not_after": r.NotAfter,
"issuer_ref": r.Issuer,
}
if r.MaxPathLength != nil {
responseData["max_path_length"] = r.MaxPathLength


@@ -20,6 +20,8 @@ func createBackendWithStorage(t *testing.T) (*backend, logical.Storage) {
if err != nil {
t.Fatal(err)
}
// Assume for our tests we have performed the migration already.
b.pkiStorageVersion.Store(1)
return b, config.StorageView
}


@@ -26,27 +26,7 @@ import (
)
func pathGenerateRoot(b *backend) *framework.Path {
ret := &framework.Path{
Pattern: "root/generate/" + framework.GenericNameRegex("exported"),
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathCAGenerateRoot,
// Read more about why these flags are set in backend.go
ForwardPerformanceStandby: true,
ForwardPerformanceSecondary: true,
},
},
HelpSynopsis: pathGenerateRootHelpSyn,
HelpDescription: pathGenerateRootHelpDesc,
}
ret.Fields = addCACommonFields(map[string]*framework.FieldSchema{})
ret.Fields = addCAKeyGenerationFields(ret.Fields)
ret.Fields = addCAIssueFields(ret.Fields)
return ret
return buildPathGenerateRoot(b, "root/generate/"+framework.GenericNameRegex("exported"))
}
func pathDeleteRoot(b *backend) *framework.Path {
@@ -68,94 +48,67 @@ func pathDeleteRoot(b *backend) *framework.Path {
return ret
}
func pathSignIntermediate(b *backend) *framework.Path {
ret := &framework.Path{
Pattern: "root/sign-intermediate",
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathCASignIntermediate,
},
},
func (b *backend) pathCADeleteRoot(ctx context.Context, req *logical.Request, _ *framework.FieldData) (*logical.Response, error) {
// Since we're planning on updating issuers here, grab the lock so we've
// got a consistent view.
b.issuersLock.Lock()
defer b.issuersLock.Unlock()
HelpSynopsis: pathSignIntermediateHelpSyn,
HelpDescription: pathSignIntermediateHelpDesc,
if !b.useLegacyBundleCaStorage() {
issuers, err := listIssuers(ctx, req.Storage)
if err != nil {
return nil, err
}
keys, err := listKeys(ctx, req.Storage)
if err != nil {
return nil, err
}
// Delete all issuers and keys. Ignore deleting the default since we're
// explicitly deleting everything.
for _, issuer := range issuers {
if _, err = deleteIssuer(ctx, req.Storage, issuer); err != nil {
return nil, err
}
}
for _, key := range keys {
if _, err = deleteKey(ctx, req.Storage, key); err != nil {
return nil, err
}
}
}
ret.Fields = addCACommonFields(map[string]*framework.FieldSchema{})
ret.Fields = addCAIssueFields(ret.Fields)
ret.Fields["csr"] = &framework.FieldSchema{
Type: framework.TypeString,
Default: "",
Description: `PEM-format CSR to be signed.`,
// Delete legacy CA bundle.
if err := req.Storage.Delete(ctx, legacyCertBundlePath); err != nil {
return nil, err
}
ret.Fields["use_csr_values"] = &framework.FieldSchema{
Type: framework.TypeBool,
Default: false,
Description: `If true, then:
1) Subject information, including names and alternate
names, will be preserved from the CSR rather than
using values provided in the other parameters to
this path;
2) Any key usages requested in the CSR will be
added to the basic set of key usages used for CA
certs signed by this path; for instance,
the non-repudiation flag;
3) Extensions requested in the CSR will be copied
into the issued certificate.`,
// Delete legacy CRL bundle.
if err := req.Storage.Delete(ctx, legacyCRLPath); err != nil {
return nil, err
}
return ret
}
func pathSignSelfIssued(b *backend) *framework.Path {
ret := &framework.Path{
Pattern: "root/sign-self-issued",
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathCASignSelfIssued,
},
},
Fields: map[string]*framework.FieldSchema{
"certificate": {
Type: framework.TypeString,
Description: `PEM-format self-issued certificate to be signed.`,
},
"require_matching_certificate_algorithms": {
Type: framework.TypeBool,
Default: false,
Description: `If true, require the public key algorithm of the signer to match that of the self issued certificate.`,
},
},
HelpSynopsis: pathSignSelfIssuedHelpSyn,
HelpDescription: pathSignSelfIssuedHelpDesc,
}
return ret
}
func (b *backend) pathCADeleteRoot(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
return nil, req.Storage.Delete(ctx, "config/ca_bundle")
// Return a warning about preferring to delete issuers and keys
// explicitly versus deleting everything.
resp := &logical.Response{}
resp.AddWarning("DELETE /root deletes all keys and issuers; prefer the new DELETE /key/:key_ref and DELETE /issuer/:issuer_ref for finer granularity, unless removal of all keys and issuers is desired.")
return resp, nil
}
func (b *backend) pathCAGenerateRoot(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
// Since we're planning on updating issuers here, grab the lock so we've
// got a consistent view.
b.issuersLock.Lock()
defer b.issuersLock.Unlock()
var err error
entry, err := req.Storage.Get(ctx, "config/ca_bundle")
if err != nil {
return nil, err
}
if entry != nil {
resp := &logical.Response{}
resp.AddWarning(fmt.Sprintf("Refusing to generate a root certificate over an existing root certificate. "+
"If you really want to destroy the original root certificate, please issue a delete against %s root.", req.MountPoint))
return resp, nil
if b.useLegacyBundleCaStorage() {
return logical.ErrorResponse("Can not create root CA until migration has completed"), nil
}
exported, format, role, errorResp := b.getGenerationParams(ctx, data, req.MountPoint)
exported, format, role, errorResp := b.getGenerationParams(ctx, req.Storage, data, req.MountPoint)
if errorResp != nil {
return errorResp, nil
}
@@ -166,6 +119,25 @@ func (b *backend) pathCAGenerateRoot(ctx context.Context, req *logical.Request,
role.MaxPathLength = &maxPathLength
}
issuerName, err := getIssuerName(ctx, req.Storage, data)
if err != nil {
return logical.ErrorResponse(err.Error()), nil
}
// Handle the aliased path specifying the new issuer name as "next", but
// only do it if it's not in use.
if strings.HasPrefix(req.Path, "root/rotate/") && len(issuerName) == 0 {
// err is nil when the issuer name is in use.
_, err = resolveIssuerReference(ctx, req.Storage, "next")
if err != nil {
issuerName = "next"
}
}
keyName, err := getKeyName(ctx, req.Storage, data)
if err != nil {
return logical.ErrorResponse(err.Error()), nil
}
input := &inputBundle{
req: req,
apiData: data,
@@ -232,14 +204,12 @@ func (b *backend) pathCAGenerateRoot(ctx context.Context, req *logical.Request,
}
// Store it as the CA bundle
entry, err = logical.StorageEntryJSON("config/ca_bundle", cb)
if err != nil {
return nil, err
}
err = req.Storage.Put(ctx, entry)
myIssuer, myKey, err := writeCaBundle(newManagedKeyContext(ctx, b, req.MountPoint), req.Storage, cb, issuerName, keyName)
if err != nil {
return nil, err
}
resp.Data["issuer_id"] = myIssuer.ID
resp.Data["key_id"] = myKey.ID
// Also store it as just the certificate identified by serial number, so it
// can be revoked
@@ -251,17 +221,8 @@ func (b *backend) pathCAGenerateRoot(ctx context.Context, req *logical.Request,
return nil, fmt.Errorf("unable to store certificate locally: %w", err)
}
// For ease of later use, also store just the certificate at a known
// location
entry.Key = "ca"
entry.Value = parsedBundle.CertificateBytes
err = req.Storage.Put(ctx, entry)
if err != nil {
return nil, err
}
// Build a fresh CRL
err = buildCRL(ctx, b, req, true)
err = b.crlBuilder.rebuild(ctx, b, req, true)
if err != nil {
return nil, err
}
@@ -273,9 +234,14 @@ func (b *backend) pathCAGenerateRoot(ctx context.Context, req *logical.Request,
return resp, nil
}
func (b *backend) pathCASignIntermediate(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
func (b *backend) pathIssuerSignIntermediate(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
var err error
issuerName := getIssuerRef(data)
if len(issuerName) == 0 {
return logical.ErrorResponse("missing issuer reference"), nil
}
format := getFormat(data)
if format == "" {
return logical.ErrorResponse(
@@ -301,7 +267,6 @@ func (b *backend) pathCASignIntermediate(ctx context.Context, req *logical.Reque
AllowedOtherSANs: []string{"*"},
AllowedSerialNumbers: []string{"*"},
AllowedURISANs: []string{"*"},
AllowExpirationPastCA: true,
NotAfter: data.Get("not_after").(string),
}
*role.AllowWildcardCertificates = true
@@ -311,7 +276,7 @@ func (b *backend) pathCASignIntermediate(ctx context.Context, req *logical.Reque
}
var caErr error
signingBundle, caErr := fetchCAInfo(ctx, b, req)
signingBundle, caErr := fetchCAInfo(ctx, b, req, issuerName, IssuanceUsage)
if caErr != nil {
switch caErr.(type) {
case errutil.UserError:
@@ -323,6 +288,11 @@ func (b *backend) pathCASignIntermediate(ctx context.Context, req *logical.Reque
}
}
// Since we are signing an intermediate, we explicitly want to override
// the leaf NotAfterBehavior to permit issuing intermediates longer than
// the life of this issuer.
signingBundle.LeafNotAfterBehavior = certutil.PermitNotAfterBehavior
useCSRValues := data.Get("use_csr_values").(bool)
maxPathLengthIface, ok := data.GetOk("max_path_length")
@@ -417,9 +387,14 @@ func (b *backend) pathCASignIntermediate(ctx context.Context, req *logical.Reque
return resp, nil
}
func (b *backend) pathCASignSelfIssued(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
func (b *backend) pathIssuerSignSelfIssued(ctx context.Context, req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
var err error
issuerName := getIssuerRef(data)
if len(issuerName) == 0 {
return logical.ErrorResponse("missing issuer reference"), nil
}
certPem := data.Get("certificate").(string)
block, _ := pem.Decode([]byte(certPem))
if block == nil || len(block.Bytes) == 0 {
@@ -442,7 +417,7 @@ func (b *backend) pathCASignSelfIssued(ctx context.Context, req *logical.Request
}
var caErr error
signingBundle, caErr := fetchCAInfo(ctx, b, req)
signingBundle, caErr := fetchCAInfo(ctx, b, req, issuerName, IssuanceUsage)
if caErr != nil {
switch caErr.(type) {
case errutil.UserError:
@@ -551,23 +526,3 @@ Deletes the root CA key to allow a new one to be generated.
const pathDeleteRootHelpDesc = `
See the API documentation for more information.
`
const pathSignIntermediateHelpSyn = `
Issue an intermediate CA certificate based on the provided CSR.
`
const pathSignIntermediateHelpDesc = `
see the API documentation for more information.
`
const pathSignSelfIssuedHelpSyn = `
Signs another CA's self-issued certificate.
`
const pathSignSelfIssuedHelpDesc = `
Signs another CA's self-issued certificate. This is most often used for rolling roots; unless you know you need this you probably want to use sign-intermediate instead.
Note that this is a very privileged operation and should be extremely restricted in terms of who is allowed to use it. All values will be taken directly from the incoming certificate and only verification that it is self-issued will be performed.
Configured URLs for CRLs/OCSP/etc. will be copied over and the issuer will be this mount's CA cert. Other than that, all other values will be used verbatim.
`


@@ -0,0 +1,138 @@
package pki
import (
"github.com/hashicorp/vault/sdk/framework"
"github.com/hashicorp/vault/sdk/logical"
)
func pathIssuerSignIntermediate(b *backend) *framework.Path {
pattern := "issuer/" + framework.GenericNameRegex(issuerRefParam) + "/sign-intermediate"
return pathIssuerSignIntermediateRaw(b, pattern)
}
func pathSignIntermediate(b *backend) *framework.Path {
pattern := "root/sign-intermediate"
return pathIssuerSignIntermediateRaw(b, pattern)
}
func pathIssuerSignIntermediateRaw(b *backend, pattern string) *framework.Path {
fields := addIssuerRefField(map[string]*framework.FieldSchema{})
path := &framework.Path{
Pattern: pattern,
Fields: fields,
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathIssuerSignIntermediate,
},
},
HelpSynopsis: pathIssuerSignIntermediateHelpSyn,
HelpDescription: pathIssuerSignIntermediateHelpDesc,
}
path.Fields = addCACommonFields(path.Fields)
path.Fields = addCAIssueFields(path.Fields)
path.Fields["csr"] = &framework.FieldSchema{
Type: framework.TypeString,
Default: "",
Description: `PEM-format CSR to be signed.`,
}
path.Fields["use_csr_values"] = &framework.FieldSchema{
Type: framework.TypeBool,
Default: false,
Description: `If true, then:
1) Subject information, including names and alternate
names, will be preserved from the CSR rather than
using values provided in the other parameters to
this path;
2) Any key usages requested in the CSR will be
added to the basic set of key usages used for CA
certs signed by this path; for instance,
the non-repudiation flag;
3) Extensions requested in the CSR will be copied
into the issued certificate.`,
}
return path
}
const (
pathIssuerSignIntermediateHelpSyn = `Issue an intermediate CA certificate based on the provided CSR.`
pathIssuerSignIntermediateHelpDesc = `
This API endpoint allows for signing the specified CSR, adding to it a basic
constraint for IsCA=True. This allows the issued certificate to issue its own
leaf certificates.
Note that the resulting certificate is not imported as an issuer in this PKI
mount. This means that you can use the resulting certificate in another Vault
PKI mount point or to issue an external intermediate (e.g., for use with
another X.509 CA).
See the API documentation for more information about required parameters.
`
)
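For illustration, a hypothetical client-side call to the new issuer-scoped path, reusing a configured *api.Client as in the earlier sketch; the `pki` mount, the `my-root` issuer name, and the specific request/response fields shown are assumptions based on the fields defined above.

// csrPEM holds a PEM-encoded CSR generated elsewhere (e.g., by another PKI mount).
secret, err := client.Logical().Write("pki/issuer/my-root/sign-intermediate", map[string]interface{}{
	"csr":         csrPEM,
	"common_name": "Example Intermediate CA",
	"ttl":         "8760h",
})
if err != nil {
	log.Fatal(err)
}
// The signed intermediate is returned to the caller; per the help text above,
// it is not imported as an issuer into this mount.
fmt.Println(secret.Data["certificate"])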
func pathIssuerSignSelfIssued(b *backend) *framework.Path {
pattern := "issuer/" + framework.GenericNameRegex(issuerRefParam) + "/sign-self-issued"
return buildPathIssuerSignSelfIssued(b, pattern)
}
func pathSignSelfIssued(b *backend) *framework.Path {
pattern := "root/sign-self-issued"
return buildPathIssuerSignSelfIssued(b, pattern)
}
func buildPathIssuerSignSelfIssued(b *backend, pattern string) *framework.Path {
fields := map[string]*framework.FieldSchema{
"certificate": {
Type: framework.TypeString,
Description: `PEM-format self-issued certificate to be signed.`,
},
"require_matching_certificate_algorithms": {
Type: framework.TypeBool,
Default: false,
Description: `If true, require the public key algorithm of the signer to match that of the self issued certificate.`,
},
}
fields = addIssuerRefField(fields)
path := &framework.Path{
Pattern: pattern,
Fields: fields,
Operations: map[logical.Operation]framework.OperationHandler{
logical.UpdateOperation: &framework.PathOperation{
Callback: b.pathIssuerSignSelfIssued,
},
},
HelpSynopsis: pathIssuerSignSelfIssuedHelpSyn,
HelpDescription: pathIssuerSignSelfIssuedHelpDesc,
}
return path
}
const (
pathIssuerSignSelfIssuedHelpSyn = `Re-issue a self-signed certificate based on the provided certificate.`
pathIssuerSignSelfIssuedHelpDesc = `
This API endpoint allows for signing the specified self-signed certificate,
effectively allowing cross-signing of external root CAs. This allows for an
alternative validation path, chaining back through this PKI mount. This
endpoint is also useful in a rolling-root scenario, allowing devices to trust
and validate later (or earlier) root certificates and their issued leaves.
Usually the sign-intermediate operation is preferred to this operation.
Note that this is a very privileged operation and should be extremely
restricted in terms of who is allowed to use it. All values will be taken
directly from the incoming certificate and only verification that it is
self-issued will be performed.
Configured URLs for CRLs/OCSP/etc. will be copied over and the issuer will
be this mount's CA cert. Other than that, all other values will be used
verbatim from the given certificate.
See the API documentation for more information about required parameters.
`
)
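Similarly, a hypothetical sketch of cross-signing another root via the new path, with the same assumed mount, issuer name, and client as above.

// otherRootPEM is the PEM of a different, self-issued root CA certificate.
secret, err := client.Logical().Write("pki/issuer/my-root/sign-self-issued", map[string]interface{}{
	"certificate": otherRootPEM,
	"require_matching_certificate_algorithms": false,
})
if err != nil {
	log.Fatal(err)
}
fmt.Println(secret.Data["certificate"])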


@@ -225,7 +225,7 @@ func (b *backend) pathTidyWrite(ctx context.Context, req *logical.Request, d *fr
}
if rebuildCRL {
if err := buildCRL(ctx, b, req, false); err != nil {
if err := b.crlBuilder.rebuild(ctx, b, req, false); err != nil {
return err
}
}
@@ -247,7 +247,7 @@ func (b *backend) pathTidyWrite(ctx context.Context, req *logical.Request, d *fr
return logical.RespondWithStatusCode(resp, req, http.StatusAccepted)
}
func (b *backend) pathTidyStatusRead(ctx context.Context, req *logical.Request, d *framework.FieldData) (*logical.Response, error) {
func (b *backend) pathTidyStatusRead(_ context.Context, _ *logical.Request, _ *framework.FieldData) (*logical.Response, error) {
// If this node is a performance secondary return an ErrReadOnly so that the request gets forwarded,
// but only if the PKI backend is not a local mount.
if b.System().ReplicationState().HasState(consts.ReplicationPerformanceSecondary) && !b.System().LocalMount() {


@@ -35,7 +35,7 @@ reference`,
}
}
func (b *backend) secretCredsRevoke(ctx context.Context, req *logical.Request, d *framework.FieldData) (*logical.Response, error) {
func (b *backend) secretCredsRevoke(ctx context.Context, req *logical.Request, _ *framework.FieldData) (*logical.Response, error) {
if req.Secret == nil {
return nil, fmt.Errorf("secret is nil in request")
}


@@ -0,0 +1,857 @@
package pki
import (
"context"
"crypto"
"crypto/x509"
"encoding/pem"
"fmt"
"strings"
"github.com/hashicorp/go-uuid"
"github.com/hashicorp/vault/sdk/helper/certutil"
"github.com/hashicorp/vault/sdk/helper/errutil"
"github.com/hashicorp/vault/sdk/logical"
)
const (
storageKeyConfig = "config/keys"
storageIssuerConfig = "config/issuers"
keyPrefix = "config/key/"
issuerPrefix = "config/issuer/"
storageLocalCRLConfig = "crls/config"
legacyMigrationBundleLogKey = "config/legacyMigrationBundleLog"
legacyCertBundlePath = "config/ca_bundle"
legacyCRLPath = "crl"
)
type keyID string
func (p keyID) String() string {
return string(p)
}
type issuerID string
func (p issuerID) String() string {
return string(p)
}
type crlID string
func (p crlID) String() string {
return string(p)
}
const (
IssuerRefNotFound = issuerID("not-found")
KeyRefNotFound = keyID("not-found")
)
type keyEntry struct {
ID keyID `json:"id" structs:"id" mapstructure:"id"`
Name string `json:"name" structs:"name" mapstructure:"name"`
PrivateKeyType certutil.PrivateKeyType `json:"private_key_type" structs:"private_key_type" mapstructure:"private_key_type"`
PrivateKey string `json:"private_key" structs:"private_key" mapstructure:"private_key"`
}
func (e keyEntry) getManagedKeyUUID() (UUIDKey, error) {
if !e.isManagedPrivateKey() {
return "", errutil.InternalError{Err: "getManagedKeyId called on a key id %s (%s) "}
}
return extractManagedKeyId([]byte(e.PrivateKey))
}
func (e keyEntry) isManagedPrivateKey() bool {
return e.PrivateKeyType == certutil.ManagedPrivateKey
}
type issuerUsage uint
const (
ReadOnlyUsage issuerUsage = iota
IssuanceUsage issuerUsage = 1 << iota
CRLSigningUsage issuerUsage = 1 << iota
// When adding a new usage in the future, we'll need to create a usage
// mask field on the IssuerEntry and handle migrations to a newer mask,
// inferring a value for the new bits.
AllIssuerUsages issuerUsage = ReadOnlyUsage | IssuanceUsage | CRLSigningUsage
)
var namedIssuerUsages = map[string]issuerUsage{
"read-only": ReadOnlyUsage,
"issuing-certificates": IssuanceUsage,
"crl-signing": CRLSigningUsage,
}
func (i *issuerUsage) ToggleUsage(usages ...issuerUsage) {
for _, usage := range usages {
*i ^= usage
}
}
func (i issuerUsage) HasUsage(usage issuerUsage) bool {
return (i & usage) == usage
}
func (i issuerUsage) Names() string {
var names []string
var builtUsage issuerUsage
for name, usage := range namedIssuerUsages {
if i.HasUsage(usage) {
names = append(names, name)
builtUsage.ToggleUsage(usage)
}
}
if i != builtUsage {
// Found some unknown usage, we should indicate this in the names.
names = append(names, fmt.Sprintf("unknown:%v", i^builtUsage))
}
return strings.Join(names, ",")
}
func NewIssuerUsageFromNames(names []string) (issuerUsage, error) {
var result issuerUsage
for index, name := range names {
usage, ok := namedIssuerUsages[name]
if !ok {
return ReadOnlyUsage, fmt.Errorf("unknown name for usage at index %v: %v", index, name)
}
result.ToggleUsage(usage)
}
return result, nil
}
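As a small illustration of how the usage bitmask composes (a sketch only, assuming it lives in the pki package alongside the types above; exampleUsageCheck is a hypothetical helper, not part of this change):

func exampleUsageCheck() error {
	// Parse user-supplied usage names into the bitmask, then check one bit.
	usage, err := NewIssuerUsageFromNames([]string{"issuing-certificates", "crl-signing"})
	if err != nil {
		return err
	}
	if !usage.HasUsage(CRLSigningUsage) {
		return fmt.Errorf("expected crl-signing to be enabled, got: %v", usage.Names())
	}
	// ReadOnlyUsage is the zero value, so every issuer trivially "has" it.
	_ = usage.HasUsage(ReadOnlyUsage) // always true
	return nil
}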
type issuerEntry struct {
ID issuerID `json:"id" structs:"id" mapstructure:"id"`
Name string `json:"name" structs:"name" mapstructure:"name"`
KeyID keyID `json:"key_id" structs:"key_id" mapstructure:"key_id"`
Certificate string `json:"certificate" structs:"certificate" mapstructure:"certificate"`
CAChain []string `json:"ca_chain" structs:"ca_chain" mapstructure:"ca_chain"`
ManualChain []issuerID `json:"manual_chain" structs:"manual_chain" mapstructure:"manual_chain"`
SerialNumber string `json:"serial_number" structs:"serial_number" mapstructure:"serial_number"`
LeafNotAfterBehavior certutil.NotAfterBehavior `json:"not_after_behavior" structs:"not_after_behavior" mapstructure:"not_after_behavior"`
Usage issuerUsage `json:"usage" structs:"usage" mapstructure:"usage"`
}
type localCRLConfigEntry struct {
IssuerIDCRLMap map[issuerID]crlID `json:"issuer_id_crl_map" structs:"issuer_id_crl_map" mapstructure:"issuer_id_crl_map"`
CRLNumberMap map[crlID]int64 `json:"crl_number_map" structs:"crl_number_map" mapstructure:"crl_number_map"`
}
type keyConfigEntry struct {
DefaultKeyId keyID `json:"default" structs:"default" mapstructure:"default"`
}
type issuerConfigEntry struct {
DefaultIssuerId issuerID `json:"default" structs:"default" mapstructure:"default"`
}
func listKeys(ctx context.Context, s logical.Storage) ([]keyID, error) {
strList, err := s.List(ctx, keyPrefix)
if err != nil {
return nil, err
}
keyIds := make([]keyID, 0, len(strList))
for _, entry := range strList {
keyIds = append(keyIds, keyID(entry))
}
return keyIds, nil
}
func fetchKeyById(ctx context.Context, s logical.Storage, keyId keyID) (*keyEntry, error) {
if len(keyId) == 0 {
return nil, errutil.InternalError{Err: fmt.Sprintf("unable to fetch pki key: empty key identifier")}
}
entry, err := s.Get(ctx, keyPrefix+keyId.String())
if err != nil {
return nil, errutil.InternalError{Err: fmt.Sprintf("unable to fetch pki key: %v", err)}
}
if entry == nil {
// FIXME: Dedicated/specific error for this?
return nil, errutil.UserError{Err: fmt.Sprintf("pki key id %s does not exist", keyId.String())}
}
var key keyEntry
if err := entry.DecodeJSON(&key); err != nil {
return nil, errutil.InternalError{Err: fmt.Sprintf("unable to decode pki key with id %s: %v", keyId.String(), err)}
}
return &key, nil
}
func writeKey(ctx context.Context, s logical.Storage, key keyEntry) error {
keyId := key.ID
json, err := logical.StorageEntryJSON(keyPrefix+keyId.String(), key)
if err != nil {
return err
}
return s.Put(ctx, json)
}
func deleteKey(ctx context.Context, s logical.Storage, id keyID) (bool, error) {
config, err := getKeysConfig(ctx, s)
if err != nil {
return false, err
}
wasDefault := false
if config.DefaultKeyId == id {
wasDefault = true
config.DefaultKeyId = keyID("")
if err := setKeysConfig(ctx, s, config); err != nil {
return wasDefault, err
}
}
return wasDefault, s.Delete(ctx, keyPrefix+id.String())
}
func importKey(mkc managedKeyContext, s logical.Storage, keyValue string, keyName string, keyType certutil.PrivateKeyType) (*keyEntry, bool, error) {
// importKey imports the specified PEM-format key (from keyValue) into
// the new PKI storage format. The first return field is a reference to
// the new key; the second is whether or not the key already existed
// during import (in which case, *key points to the existing key reference
// and identifier); the last return field is whether or not an error
// occurred.
//
// Normalize whitespace before beginning. See note in importIssuer as to
// why we do this.
keyValue = strings.TrimSpace(keyValue) + "\n"
//
// Before we can import a known key, we first need to know if the key
// exists in storage already. This means iterating through all known
// keys and comparing their private value against this value.
knownKeys, err := listKeys(mkc.ctx, s)
if err != nil {
return nil, false, err
}
// Get our public key from the current inbound key, to compare against all the other keys.
var pkForImportingKey crypto.PublicKey
if keyType == certutil.ManagedPrivateKey {
managedKeyUUID, err := extractManagedKeyId([]byte(keyValue))
if err != nil {
return nil, false, errutil.InternalError{Err: fmt.Sprintf("failed extracting managed key uuid from key: %v", err)}
}
pkForImportingKey, err = getManagedKeyPublicKey(mkc, managedKeyUUID)
if err != nil {
return nil, false, err
}
} else {
pkForImportingKey, err = getPublicKeyFromBytes([]byte(keyValue))
if err != nil {
return nil, false, err
}
}
for _, identifier := range knownKeys {
existingKey, err := fetchKeyById(mkc.ctx, s, identifier)
if err != nil {
return nil, false, err
}
areEqual, err := comparePublicKey(mkc, existingKey, pkForImportingKey)
if err != nil {
return nil, false, err
}
if areEqual {
// Here, we don't need to stitch together the issuer entries,
// because the last run should've done that for us (or, when
// importing an issuer).
return existingKey, true, nil
}
}
// Haven't found a key, so we've gotta create it and write it into storage.
var result keyEntry
result.ID = genKeyId()
result.Name = keyName
result.PrivateKey = keyValue
result.PrivateKeyType = keyType
// Finally, we can write the key to storage.
if err := writeKey(mkc.ctx, s, result); err != nil {
return nil, false, err
}
// Before we return below, we need to iterate over _all_ issuers and see if
// one of them has a missing KeyId link, and if so, point it back to
// ourselves. We fetch the list of issuers up front, even when we don't need
// it, to give ourselves a better chance of succeeding below.
knownIssuers, err := listIssuers(mkc.ctx, s)
if err != nil {
return nil, false, err
}
// Now, for each issuer, try and compute the issuer<->key link if missing.
for _, identifier := range knownIssuers {
existingIssuer, err := fetchIssuerById(mkc.ctx, s, identifier)
if err != nil {
return nil, false, err
}
// If the KeyID value is already present, we can skip it.
if len(existingIssuer.KeyID) > 0 {
continue
}
// Otherwise, compare public values. Note that there might be multiple
// certificates (e.g., cross-signed) with the same key.
cert, err := existingIssuer.GetCertificate()
if err != nil {
// Malformed issuer.
return nil, false, err
}
equal, err := certutil.ComparePublicKeysAndType(cert.PublicKey, pkForImportingKey)
if err != nil {
return nil, false, err
}
if equal {
// These public keys are equal, so this key entry must be the
// corresponding private key to this issuer; update it accordingly.
existingIssuer.KeyID = result.ID
if err := writeIssuer(mkc.ctx, s, existingIssuer); err != nil {
return nil, false, err
}
}
}
// If there was no prior default value set and/or we had no known
// keys when we started, set this key as default.
keyDefaultSet, err := isDefaultKeySet(mkc.ctx, s)
if err != nil {
return nil, false, err
}
if len(knownKeys) == 0 || !keyDefaultSet {
if err = updateDefaultKeyId(mkc.ctx, s, result.ID); err != nil {
return nil, false, err
}
}
// All done; return our new key reference.
return &result, false, nil
}
func (i issuerEntry) GetCertificate() (*x509.Certificate, error) {
block, extra := pem.Decode([]byte(i.Certificate))
if block == nil {
return nil, errutil.InternalError{Err: fmt.Sprintf("unable to parse certificate from issuer: invalid PEM: %v", i.ID)}
}
if len(strings.TrimSpace(string(extra))) > 0 {
return nil, errutil.InternalError{Err: fmt.Sprintf("unable to parse certificate for issuer (%v): trailing PEM data: %v", i.ID, string(extra))}
}
return x509.ParseCertificate(block.Bytes)
}
func (i issuerEntry) EnsureUsage(usage issuerUsage) error {
// We want to spit out a nice error message about missing usages.
if i.Usage.HasUsage(usage) {
return nil
}
issuerRef := fmt.Sprintf("id:%v", i.ID)
if len(i.Name) > 0 {
issuerRef = fmt.Sprintf("%v / name:%v", issuerRef, i.Name)
}
// These usages differ in at least one bit. We've gotta find the first
// usage that differs and return a logical-sounding error message around
// that difference.
for name, candidate := range namedIssuerUsages {
if usage.HasUsage(candidate) && !i.Usage.HasUsage(candidate) {
return fmt.Errorf("requested usage %v for issuer [%v] but only had usage %v", name, issuerRef, i.Usage.Names())
}
}
// Maybe we have an unnamed usage that's requested.
return fmt.Errorf("unknown delta between usages: %v -> %v / for issuer [%v]", usage.Names(), i.Usage.Names(), issuerRef)
}
func listIssuers(ctx context.Context, s logical.Storage) ([]issuerID, error) {
strList, err := s.List(ctx, issuerPrefix)
if err != nil {
return nil, err
}
issuerIds := make([]issuerID, 0, len(strList))
for _, entry := range strList {
issuerIds = append(issuerIds, issuerID(entry))
}
return issuerIds, nil
}
func resolveKeyReference(ctx context.Context, s logical.Storage, reference string) (keyID, error) {
if reference == defaultRef {
// Handle fetching the default key.
config, err := getKeysConfig(ctx, s)
if err != nil {
return keyID("config-error"), err
}
if len(config.DefaultKeyId) == 0 {
return KeyRefNotFound, fmt.Errorf("no default key currently configured")
}
return config.DefaultKeyId, nil
}
keys, err := listKeys(ctx, s)
if err != nil {
return keyID("list-error"), err
}
// Cheaper to list keys and check if an id is a match...
for _, keyId := range keys {
if keyId == keyID(reference) {
return keyId, nil
}
}
// ... than to pull all keys from storage.
for _, keyId := range keys {
key, err := fetchKeyById(ctx, s, keyId)
if err != nil {
return keyID("key-read"), err
}
if key.Name == reference {
return key.ID, nil
}
}
// Otherwise, we must not have found the key.
return KeyRefNotFound, errutil.UserError{Err: fmt.Sprintf("unable to find PKI key for reference: %v", reference)}
}
func fetchIssuerById(ctx context.Context, s logical.Storage, issuerId issuerID) (*issuerEntry, error) {
if len(issuerId) == 0 {
return nil, errutil.InternalError{Err: fmt.Sprintf("unable to fetch pki issuer: empty issuer identifier")}
}
entry, err := s.Get(ctx, issuerPrefix+issuerId.String())
if err != nil {
return nil, errutil.InternalError{Err: fmt.Sprintf("unable to fetch pki issuer: %v", err)}
}
if entry == nil {
// FIXME: Dedicated/specific error for this?
return nil, errutil.UserError{Err: fmt.Sprintf("pki issuer id %s does not exist", issuerId.String())}
}
var issuer issuerEntry
if err := entry.DecodeJSON(&issuer); err != nil {
return nil, errutil.InternalError{Err: fmt.Sprintf("unable to decode pki issuer with id %s: %v", issuerId.String(), err)}
}
return &issuer, nil
}
func writeIssuer(ctx context.Context, s logical.Storage, issuer *issuerEntry) error {
issuerId := issuer.ID
json, err := logical.StorageEntryJSON(issuerPrefix+issuerId.String(), issuer)
if err != nil {
return err
}
return s.Put(ctx, json)
}
func deleteIssuer(ctx context.Context, s logical.Storage, id issuerID) (bool, error) {
config, err := getIssuersConfig(ctx, s)
if err != nil {
return false, err
}
wasDefault := false
if config.DefaultIssuerId == id {
wasDefault = true
config.DefaultIssuerId = issuerID("")
if err := setIssuersConfig(ctx, s, config); err != nil {
return wasDefault, err
}
}
return wasDefault, s.Delete(ctx, issuerPrefix+id.String())
}
func importIssuer(ctx managedKeyContext, s logical.Storage, certValue string, issuerName string) (*issuerEntry, bool, error) {
// importIssuer imports the specified PEM-format certificate (from
// certValue) into the new PKI storage format. The first return field is a
// reference to the new issuer; the second is whether or not the issuer
// already existed during import (in which case, *issuer points to the
// existing issuer reference and identifier); the last return field is
// whether or not an error occurred.
// Before we begin, we need to ensure the PEM formatted certificate looks
// good. Restricting to "just" `CERTIFICATE` entries is a little
// restrictive, as it could be a `X509 CERTIFICATE` entry or a custom
// value wrapping an actual DER cert. So validating the contents of the
// PEM header is out of the question (and validating the contents of the
// PEM block is left to our GetCertificate call below).
//
// However, we should trim all leading and trailing spaces and add a
// single new line. This allows callers to blindly concatenate PEM
// blobs from the API and get roughly what they'd expect.
//
// Discussed further in #11960 and RFC 7468.
certValue = strings.TrimSpace(certValue) + "\n"
// Before we can import a known issuer, we first need to know if the issuer
// exists in storage already. This means iterating through all known
// issuers and comparing their private value against this value.
knownIssuers, err := listIssuers(ctx.ctx, s)
if err != nil {
return nil, false, err
}
// Before we return below, we need to iterate over _all_ keys and see if
// one of them has a public key matching this certificate, and if so, update our
// link accordingly. We fetch the list of keys up front, even though we may not need
// it, to give ourselves a better chance of succeeding below.
knownKeys, err := listKeys(ctx.ctx, s)
if err != nil {
return nil, false, err
}
for _, identifier := range knownIssuers {
existingIssuer, err := fetchIssuerById(ctx.ctx, s, identifier)
if err != nil {
return nil, false, err
}
if existingIssuer.Certificate == certValue {
// Here, we don't need to stitch together the key entries,
// because the last run should've done that for us (or, when
// importing a key).
return existingIssuer, true, nil
}
}
// Haven't found an issuer, so we've gotta create it and write it into
// storage.
var result issuerEntry
result.ID = genIssuerId()
result.Name = issuerName
result.Certificate = certValue
result.LeafNotAfterBehavior = certutil.ErrNotAfterBehavior
result.Usage.ToggleUsage(IssuanceUsage, CRLSigningUsage)
// We shouldn't add CSRs or multiple certificates in this entry.
countCertificates := strings.Count(result.Certificate, "-BEGIN ")
if countCertificates != 1 {
return nil, false, fmt.Errorf("bad issuer: potentially multiple PEM blobs in one certificate storage entry:\n%v", result.Certificate)
}
// Extracting the certificate is necessary for two reasons: first, it lets
// us fetch the serial number; second, for the public key comparison with
// known keys.
issuerCert, err := result.GetCertificate()
if err != nil {
return nil, false, err
}
// Ensure this certificate is usable as a CA certificate.
if !issuerCert.BasicConstraintsValid || !issuerCert.IsCA {
return nil, false, errutil.UserError{Err: "Refusing to import non-CA certificate"}
}
result.SerialNumber = strings.TrimSpace(certutil.GetHexFormatted(issuerCert.SerialNumber.Bytes(), ":"))
// Now, for each key, try and compute the issuer<->key link. We delay
// writing issuer to storage as we won't need to update the key, only
// the issuer.
for _, identifier := range knownKeys {
existingKey, err := fetchKeyById(ctx.ctx, s, identifier)
if err != nil {
return nil, false, err
}
equal, err := comparePublicKey(ctx, existingKey, issuerCert.PublicKey)
if err != nil {
return nil, false, err
}
if equal {
result.KeyID = existingKey.ID
// Here, there's exactly one stored key with the same public key
// as us, per guarantees in importKey; as we're importing an
// issuer, there's no other keys or issuers we'd need to read or
// update, so exit.
break
}
}
// Finally, rebuild the chains. In this process, because the provided
// reference issuer is non-nil, we'll save this issuer to storage.
if err := rebuildIssuersChains(ctx.ctx, s, &result); err != nil {
return nil, false, err
}
// If there was no prior default value set and/or we had no known
// issuers when we started, set this issuer as default.
issuerDefaultSet, err := isDefaultIssuerSet(ctx.ctx, s)
if err != nil {
return nil, false, err
}
if len(knownIssuers) == 0 || !issuerDefaultSet {
if err = updateDefaultIssuerId(ctx.ctx, s, result.ID); err != nil {
return nil, false, err
}
}
// All done; return our new issuer reference.
return &result, false, nil
}
func setLocalCRLConfig(ctx context.Context, s logical.Storage, mapping *localCRLConfigEntry) error {
json, err := logical.StorageEntryJSON(storageLocalCRLConfig, mapping)
if err != nil {
return err
}
return s.Put(ctx, json)
}
func getLocalCRLConfig(ctx context.Context, s logical.Storage) (*localCRLConfigEntry, error) {
entry, err := s.Get(ctx, storageLocalCRLConfig)
if err != nil {
return nil, err
}
mapping := &localCRLConfigEntry{}
if entry != nil {
if err := entry.DecodeJSON(mapping); err != nil {
return nil, errutil.InternalError{Err: fmt.Sprintf("unable to decode cluster-local CRL configuration: %v", err)}
}
}
if len(mapping.IssuerIDCRLMap) == 0 {
mapping.IssuerIDCRLMap = make(map[issuerID]crlID)
}
if len(mapping.CRLNumberMap) == 0 {
mapping.CRLNumberMap = make(map[crlID]int64)
}
return mapping, nil
}
func setKeysConfig(ctx context.Context, s logical.Storage, config *keyConfigEntry) error {
json, err := logical.StorageEntryJSON(storageKeyConfig, config)
if err != nil {
return err
}
return s.Put(ctx, json)
}
func getKeysConfig(ctx context.Context, s logical.Storage) (*keyConfigEntry, error) {
entry, err := s.Get(ctx, storageKeyConfig)
if err != nil {
return nil, err
}
keyConfig := &keyConfigEntry{}
if entry != nil {
if err := entry.DecodeJSON(keyConfig); err != nil {
return nil, errutil.InternalError{Err: fmt.Sprintf("unable to decode key configuration: %v", err)}
}
}
return keyConfig, nil
}
func setIssuersConfig(ctx context.Context, s logical.Storage, config *issuerConfigEntry) error {
json, err := logical.StorageEntryJSON(storageIssuerConfig, config)
if err != nil {
return err
}
return s.Put(ctx, json)
}
func getIssuersConfig(ctx context.Context, s logical.Storage) (*issuerConfigEntry, error) {
entry, err := s.Get(ctx, storageIssuerConfig)
if err != nil {
return nil, err
}
issuerConfig := &issuerConfigEntry{}
if entry != nil {
if err := entry.DecodeJSON(issuerConfig); err != nil {
return nil, errutil.InternalError{Err: fmt.Sprintf("unable to decode issuer configuration: %v", err)}
}
}
return issuerConfig, nil
}
func resolveIssuerReference(ctx context.Context, s logical.Storage, reference string) (issuerID, error) {
if reference == defaultRef {
// Handle fetching the default issuer.
config, err := getIssuersConfig(ctx, s)
if err != nil {
return issuerID("config-error"), err
}
if len(config.DefaultIssuerId) == 0 {
return IssuerRefNotFound, fmt.Errorf("no default issuer currently configured")
}
return config.DefaultIssuerId, nil
}
issuers, err := listIssuers(ctx, s)
if err != nil {
return issuerID("list-error"), err
}
// Cheaper to list issuers and check if an id is a match...
for _, issuerId := range issuers {
if issuerId == issuerID(reference) {
return issuerId, nil
}
}
// ... than to pull all issuers from storage.
for _, issuerId := range issuers {
issuer, err := fetchIssuerById(ctx, s, issuerId)
if err != nil {
return issuerID("issuer-read"), err
}
if issuer.Name == reference {
return issuer.ID, nil
}
}
// Otherwise, we must not have found the issuer.
return IssuerRefNotFound, errutil.UserError{Err: fmt.Sprintf("unable to find PKI issuer for reference: %v", reference)}
}
func resolveIssuerCRLPath(ctx context.Context, b *backend, s logical.Storage, reference string) (string, error) {
if b.useLegacyBundleCaStorage() {
return "crl", nil
}
issuer, err := resolveIssuerReference(ctx, s, reference)
if err != nil {
return legacyCRLPath, err
}
crlConfig, err := getLocalCRLConfig(ctx, s)
if err != nil {
return legacyCRLPath, err
}
if crlId, ok := crlConfig.IssuerIDCRLMap[issuer]; ok && len(crlId) > 0 {
return fmt.Sprintf("crls/%v", crlId), nil
}
return legacyCRLPath, fmt.Errorf("unable to find CRL for issuer: id:%v/ref:%v", issuer, reference)
}
// Builds a certutil.CertBundle from the specified issuer identifier,
// optionally loading the key or not.
func fetchCertBundleByIssuerId(ctx context.Context, s logical.Storage, id issuerID, loadKey bool) (*issuerEntry, *certutil.CertBundle, error) {
issuer, err := fetchIssuerById(ctx, s, id)
if err != nil {
return nil, nil, err
}
var bundle certutil.CertBundle
bundle.Certificate = issuer.Certificate
bundle.CAChain = issuer.CAChain
bundle.SerialNumber = issuer.SerialNumber
// Fetch the key if it exists. Sometimes we don't need the key immediately.
if loadKey && issuer.KeyID != keyID("") {
key, err := fetchKeyById(ctx, s, issuer.KeyID)
if err != nil {
return nil, nil, err
}
bundle.PrivateKeyType = key.PrivateKeyType
bundle.PrivateKey = key.PrivateKey
}
return issuer, &bundle, nil
}
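The resolve/fetch helpers above are designed to compose; the following is a minimal sketch of that flow (exampleLoadSigningBundle is a hypothetical helper in the same package, not part of this change):

func exampleLoadSigningBundle(ctx context.Context, s logical.Storage, ref string) (*certutil.CertBundle, error) {
	// Turn the user-supplied reference ("default", an ID, or a name) into an ID.
	id, err := resolveIssuerReference(ctx, s, ref)
	if err != nil {
		return nil, err
	}
	// Load the issuer and its cert bundle, including the private key.
	issuer, bundle, err := fetchCertBundleByIssuerId(ctx, s, id, true)
	if err != nil {
		return nil, err
	}
	// Refuse to hand back a bundle whose issuer is not allowed to issue.
	if err := issuer.EnsureUsage(IssuanceUsage); err != nil {
		return nil, err
	}
	return bundle, nil
}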
func writeCaBundle(mkc managedKeyContext, s logical.Storage, caBundle *certutil.CertBundle, issuerName string, keyName string) (*issuerEntry, *keyEntry, error) {
myKey, _, err := importKey(mkc, s, caBundle.PrivateKey, keyName, caBundle.PrivateKeyType)
if err != nil {
return nil, nil, err
}
myIssuer, _, err := importIssuer(mkc, s, caBundle.Certificate, issuerName)
if err != nil {
return nil, nil, err
}
for _, cert := range caBundle.CAChain {
if _, _, err = importIssuer(mkc, s, cert, ""); err != nil {
return nil, nil, err
}
}
return myIssuer, myKey, nil
}
func genIssuerId() issuerID {
return issuerID(genUuid())
}
func genKeyId() keyID {
return keyID(genUuid())
}
func genCRLId() crlID {
return crlID(genUuid())
}
func genUuid() string {
aUuid, err := uuid.GenerateUUID()
if err != nil {
panic(err)
}
return aUuid
}
func isKeyInUse(keyId string, ctx context.Context, s logical.Storage) (inUse bool, issuerId string, err error) {
knownIssuers, err := listIssuers(ctx, s)
if err != nil {
return true, "", err
}
for _, issuerId := range knownIssuers {
issuerEntry, err := fetchIssuerById(ctx, s, issuerId)
if err != nil {
return true, issuerId.String(), errutil.InternalError{Err: fmt.Sprintf("unable to fetch pki issuer: %v", err)}
}
if issuerEntry == nil {
return true, issuerId.String(), errutil.InternalError{Err: fmt.Sprintf("Issuer listed: %s does not exist", issuerId.String())}
}
if issuerEntry.KeyID.String() == keyId {
return true, issuerId.String(), nil
}
}
return false, "", nil
}
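isKeyInUse reads as a guard for key deletion; a hedged sketch of that pattern follows (exampleSafeDeleteKey is hypothetical, same package assumed):

func exampleSafeDeleteKey(ctx context.Context, s logical.Storage, id keyID) error {
	// Refuse to delete a key that is still referenced by an issuer.
	inUse, issuerId, err := isKeyInUse(id.String(), ctx, s)
	if err != nil {
		return err
	}
	if inUse {
		return fmt.Errorf("refusing to delete key %v: still referenced by issuer %v", id, issuerId)
	}
	_, err = deleteKey(ctx, s, id)
	return err
}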


@@ -0,0 +1,186 @@
package pki
import (
"context"
"crypto/sha256"
"encoding/hex"
"time"
"github.com/hashicorp/vault/sdk/helper/certutil"
"github.com/hashicorp/vault/sdk/logical"
)
// This allows us to record the version of the migration code within the log entry
// in case we find out in the future that something was horribly wrong with the migration,
// and we need to perform it again...
const (
latestMigrationVersion = 1
legacyBundleShimID = issuerID("legacy-entry-shim-id")
)
type legacyBundleMigrationLog struct {
Hash string `json:"hash" structs:"hash" mapstructure:"hash"`
Created time.Time `json:"created" structs:"created" mapstructure:"created"`
MigrationVersion int `json:"migrationVersion" structs:"migrationVersion" mapstructure:"migrationVersion"`
}
type migrationInfo struct {
isRequired bool
legacyBundle *certutil.CertBundle
legacyBundleHash string
migrationLog *legacyBundleMigrationLog
}
func getMigrationInfo(ctx context.Context, s logical.Storage) (migrationInfo, error) {
migrationInfo := migrationInfo{
isRequired: false,
legacyBundle: nil,
legacyBundleHash: "",
migrationLog: nil,
}
var err error
_, migrationInfo.legacyBundle, err = getLegacyCertBundle(ctx, s)
if err != nil {
return migrationInfo, err
}
migrationInfo.migrationLog, err = getLegacyBundleMigrationLog(ctx, s)
if err != nil {
return migrationInfo, err
}
migrationInfo.legacyBundleHash, err = computeHashOfLegacyBundle(migrationInfo.legacyBundle)
if err != nil {
return migrationInfo, err
}
// Even if there isn't anything to migrate, we always want to write out the log entry
// as that will trigger the secondary clusters to toggle/wake up
if (migrationInfo.migrationLog == nil) ||
(migrationInfo.migrationLog.Hash != migrationInfo.legacyBundleHash) ||
(migrationInfo.migrationLog.MigrationVersion != latestMigrationVersion) {
migrationInfo.isRequired = true
}
return migrationInfo, nil
}
func migrateStorage(ctx context.Context, b *backend, s logical.Storage) error {
migrationInfo, err := getMigrationInfo(ctx, s)
if err != nil {
return err
}
if !migrationInfo.isRequired {
// No migration was deemed to be required.
b.Logger().Debug("existing migration found and was considered valid, skipping migration.")
return nil
}
b.Logger().Info("performing PKI migration to new keys/issuers layout")
if migrationInfo.legacyBundle != nil {
mkc := newManagedKeyContext(ctx, b, b.backendUuid)
anIssuer, aKey, err := writeCaBundle(mkc, s, migrationInfo.legacyBundle, "current", "current")
if err != nil {
return err
}
b.Logger().Debug("Migration generated the following ids and set them as defaults",
"issuer id", anIssuer.ID, "key id", aKey.ID)
} else {
b.Logger().Debug("No legacy CA certs found, no migration required.")
}
// Since we do not have all the mount information available we must schedule
// the CRL to be rebuilt at a later time.
b.crlBuilder.requestRebuildIfActiveNode(b)
// We always want to write out this log entry as the secondary clusters leverage this path to wake up
// if they were upgraded before the primary cluster's migration occurred.
err = setLegacyBundleMigrationLog(ctx, s, &legacyBundleMigrationLog{
Hash: migrationInfo.legacyBundleHash,
Created: time.Now(),
MigrationVersion: latestMigrationVersion,
})
if err != nil {
return err
}
b.Logger().Info("successfully completed migration to new keys/issuers layout")
return nil
}
func computeHashOfLegacyBundle(bundle *certutil.CertBundle) (string, error) {
hasher := sha256.New()
// Generate an empty hash if the bundle does not exist.
if bundle != nil {
// We only hash the main certificate and the certs within the CAChain,
// assuming that any sort of change that occurred would have influenced one of those two fields.
if _, err := hasher.Write([]byte(bundle.Certificate)); err != nil {
return "", err
}
for _, cert := range bundle.CAChain {
if _, err := hasher.Write([]byte(cert)); err != nil {
return "", err
}
}
}
return hex.EncodeToString(hasher.Sum(nil)), nil
}
func getLegacyBundleMigrationLog(ctx context.Context, s logical.Storage) (*legacyBundleMigrationLog, error) {
entry, err := s.Get(ctx, legacyMigrationBundleLogKey)
if err != nil {
return nil, err
}
if entry == nil {
return nil, nil
}
lbm := &legacyBundleMigrationLog{}
err = entry.DecodeJSON(lbm)
if err != nil {
// If we can't decode our bundle, let's scrap it and assume a blank value;
// re-running the migration will at most bring back an older certificate/private key.
return nil, nil
}
return lbm, nil
}
func setLegacyBundleMigrationLog(ctx context.Context, s logical.Storage, lbm *legacyBundleMigrationLog) error {
json, err := logical.StorageEntryJSON(legacyMigrationBundleLogKey, lbm)
if err != nil {
return err
}
return s.Put(ctx, json)
}
func getLegacyCertBundle(ctx context.Context, s logical.Storage) (*issuerEntry, *certutil.CertBundle, error) {
entry, err := s.Get(ctx, legacyCertBundlePath)
if err != nil {
return nil, nil, err
}
if entry == nil {
return nil, nil, nil
}
cb := &certutil.CertBundle{}
err = entry.DecodeJSON(cb)
if err != nil {
return nil, nil, err
}
// Fake a storage entry with backwards compatibility in mind. We only need
// the fields in the CAInfoBundle; everything else doesn't matter.
issuer := &issuerEntry{
ID: legacyBundleShimID,
Name: "legacy-entry-shim",
LeafNotAfterBehavior: certutil.ErrNotAfterBehavior,
}
issuer.Usage.ToggleUsage(IssuanceUsage, CRLSigningUsage)
return issuer, cb, nil
}


@@ -0,0 +1,141 @@
package pki
import (
"context"
"strings"
"testing"
"time"
"github.com/hashicorp/vault/sdk/helper/certutil"
"github.com/hashicorp/vault/sdk/logical"
"github.com/stretchr/testify/require"
)
func Test_migrateStorageEmptyStorage(t *testing.T) {
startTime := time.Now()
ctx := context.Background()
b, s := createBackendWithStorage(t)
// Reset the version the helper above set to 1.
b.pkiStorageVersion.Store(0)
require.True(t, b.useLegacyBundleCaStorage(), "pre migration we should have been told to use legacy storage.")
request := &logical.InitializationRequest{Storage: s}
err := b.initialize(ctx, request)
require.NoError(t, err)
issuerIds, err := listIssuers(ctx, s)
require.NoError(t, err)
require.Empty(t, issuerIds)
keyIds, err := listKeys(ctx, s)
require.NoError(t, err)
require.Empty(t, keyIds)
logEntry, err := getLegacyBundleMigrationLog(ctx, s)
require.NoError(t, err)
require.NotNil(t, logEntry)
require.Equal(t, latestMigrationVersion, logEntry.MigrationVersion)
require.True(t, len(strings.TrimSpace(logEntry.Hash)) > 0,
"Hash value (%s) should not have been empty", logEntry.Hash)
require.True(t, startTime.Before(logEntry.Created),
"created log entry time (%v) was before our start time(%v)?", logEntry.Created, startTime)
require.False(t, b.useLegacyBundleCaStorage(), "post migration we are still told to use legacy storage")
// Make sure we can re-run the migration without issues
request = &logical.InitializationRequest{Storage: s}
err = b.initialize(ctx, request)
require.NoError(t, err)
logEntry2, err := getLegacyBundleMigrationLog(ctx, s)
require.NoError(t, err)
require.NotNil(t, logEntry2)
// Make sure the hash and created times have not changed.
require.Equal(t, logEntry.Created, logEntry2.Created)
require.Equal(t, logEntry.Hash, logEntry2.Hash)
}
func Test_migrateStorageSimpleBundle(t *testing.T) {
startTime := time.Now()
ctx := context.Background()
b, s := createBackendWithStorage(t)
// Reset the version the helper above set to 1.
b.pkiStorageVersion.Store(0)
require.True(t, b.useLegacyBundleCaStorage(), "pre migration we should have been told to use legacy storage.")
bundle := genCertBundle(t, b, s)
json, err := logical.StorageEntryJSON(legacyCertBundlePath, bundle)
require.NoError(t, err)
err = s.Put(ctx, json)
require.NoError(t, err)
request := &logical.InitializationRequest{Storage: s}
err = b.initialize(ctx, request)
require.NoError(t, err)
require.NoError(t, err)
issuerIds, err := listIssuers(ctx, s)
require.NoError(t, err)
require.Equal(t, 1, len(issuerIds))
keyIds, err := listKeys(ctx, s)
require.NoError(t, err)
require.Equal(t, 1, len(keyIds))
logEntry, err := getLegacyBundleMigrationLog(ctx, s)
require.NoError(t, err)
require.NotNil(t, logEntry)
require.Equal(t, latestMigrationVersion, logEntry.MigrationVersion)
require.True(t, len(strings.TrimSpace(logEntry.Hash)) > 0,
"Hash value (%s) should not have been empty", logEntry.Hash)
require.True(t, startTime.Before(logEntry.Created),
"created log entry time (%v) was before our start time(%v)?", logEntry.Created, startTime)
issuerId := issuerIds[0]
keyId := keyIds[0]
issuer, err := fetchIssuerById(ctx, s, issuerId)
require.NoError(t, err)
require.Equal(t, "current", issuer.Name) // RFC says we should import with Name=current
require.Equal(t, certutil.ErrNotAfterBehavior, issuer.LeafNotAfterBehavior)
key, err := fetchKeyById(ctx, s, keyId)
require.NoError(t, err)
require.Equal(t, "current", key.Name) // RFC says we should import with Name=current
require.Equal(t, issuerId, issuer.ID)
require.Equal(t, bundle.SerialNumber, issuer.SerialNumber)
require.Equal(t, strings.TrimSpace(bundle.Certificate), strings.TrimSpace(issuer.Certificate))
require.Equal(t, keyId, issuer.KeyID)
// FIXME: Add tests for CAChain...
require.Equal(t, keyId, key.ID)
require.Equal(t, strings.TrimSpace(bundle.PrivateKey), strings.TrimSpace(key.PrivateKey))
require.Equal(t, bundle.PrivateKeyType, key.PrivateKeyType)
// Make sure we kept the old bundle
_, certBundle, err := getLegacyCertBundle(ctx, s)
require.NoError(t, err)
require.Equal(t, bundle, certBundle)
// Make sure we setup the default values
keysConfig, err := getKeysConfig(ctx, s)
require.NoError(t, err)
require.Equal(t, &keyConfigEntry{DefaultKeyId: keyId}, keysConfig)
issuersConfig, err := getIssuersConfig(ctx, s)
require.NoError(t, err)
require.Equal(t, &issuerConfigEntry{DefaultIssuerId: issuerId}, issuersConfig)
// Make sure if we attempt to re-run the migration nothing happens...
err = migrateStorage(ctx, b, s)
require.NoError(t, err)
logEntry2, err := getLegacyBundleMigrationLog(ctx, s)
require.NoError(t, err)
require.NotNil(t, logEntry2)
require.Equal(t, logEntry.Created, logEntry2.Created)
require.Equal(t, logEntry.Hash, logEntry2.Hash)
require.False(t, b.useLegacyBundleCaStorage(), "post migration we are still told to use legacy storage")
}


@@ -0,0 +1,217 @@
package pki
import (
"context"
"strings"
"testing"
"github.com/hashicorp/vault/sdk/framework"
"github.com/hashicorp/vault/sdk/helper/certutil"
"github.com/hashicorp/vault/sdk/logical"
"github.com/stretchr/testify/require"
)
var ctx = context.Background()
func Test_ConfigsRoundTrip(t *testing.T) {
_, s := createBackendWithStorage(t)
// Verify we handle nothing stored properly
keyConfigEmpty, err := getKeysConfig(ctx, s)
require.NoError(t, err)
require.Equal(t, &keyConfigEntry{}, keyConfigEmpty)
issuerConfigEmpty, err := getIssuersConfig(ctx, s)
require.NoError(t, err)
require.Equal(t, &issuerConfigEntry{}, issuerConfigEmpty)
// Now attempt to store and reload properly
origKeyConfig := &keyConfigEntry{
DefaultKeyId: genKeyId(),
}
origIssuerConfig := &issuerConfigEntry{
DefaultIssuerId: genIssuerId(),
}
err = setKeysConfig(ctx, s, origKeyConfig)
require.NoError(t, err)
err = setIssuersConfig(ctx, s, origIssuerConfig)
require.NoError(t, err)
keyConfig, err := getKeysConfig(ctx, s)
require.NoError(t, err)
require.Equal(t, origKeyConfig, keyConfig)
issuerConfig, err := getIssuersConfig(ctx, s)
require.NoError(t, err)
require.Equal(t, origIssuerConfig, issuerConfig)
}
func Test_IssuerRoundTrip(t *testing.T) {
b, s := createBackendWithStorage(t)
issuer1, key1 := genIssuerAndKey(t, b, s)
issuer2, key2 := genIssuerAndKey(t, b, s)
// We get an error when issuer id not found
_, err := fetchIssuerById(ctx, s, issuer1.ID)
require.Error(t, err)
// We get an error when key id not found
_, err = fetchKeyById(ctx, s, key1.ID)
require.Error(t, err)
// Now write out our issuers and keys
err = writeKey(ctx, s, key1)
require.NoError(t, err)
err = writeIssuer(ctx, s, &issuer1)
require.NoError(t, err)
err = writeKey(ctx, s, key2)
require.NoError(t, err)
err = writeIssuer(ctx, s, &issuer2)
require.NoError(t, err)
fetchedKey1, err := fetchKeyById(ctx, s, key1.ID)
require.NoError(t, err)
fetchedIssuer1, err := fetchIssuerById(ctx, s, issuer1.ID)
require.NoError(t, err)
require.Equal(t, &key1, fetchedKey1)
require.Equal(t, &issuer1, fetchedIssuer1)
keys, err := listKeys(ctx, s)
require.NoError(t, err)
require.ElementsMatch(t, []keyID{key1.ID, key2.ID}, keys)
issuers, err := listIssuers(ctx, s)
require.NoError(t, err)
require.ElementsMatch(t, []issuerID{issuer1.ID, issuer2.ID}, issuers)
}
func Test_KeysIssuerImport(t *testing.T) {
b, s := createBackendWithStorage(t)
mkc := newManagedKeyContext(ctx, b, "test")
issuer1, key1 := genIssuerAndKey(t, b, s)
issuer2, key2 := genIssuerAndKey(t, b, s)
// Key 1 before Issuer 1; Issuer 2 before Key 2.
// Remove KeyIDs from non-written entities before beginning.
key1.ID = ""
issuer1.ID = ""
issuer1.KeyID = ""
key1Ref1, existing, err := importKey(mkc, s, key1.PrivateKey, "key1", key1.PrivateKeyType)
require.NoError(t, err)
require.False(t, existing)
require.Equal(t, strings.TrimSpace(key1.PrivateKey), strings.TrimSpace(key1Ref1.PrivateKey))
// Make sure if we attempt to re-import the same private key, no import/updates occur.
// So the existing flag should be set to true, and we do not update the existing Name field.
key1Ref2, existing, err := importKey(mkc, s, key1.PrivateKey, "ignore-me", key1.PrivateKeyType)
require.NoError(t, err)
require.True(t, existing)
require.Equal(t, key1.PrivateKey, key1Ref1.PrivateKey)
require.Equal(t, key1Ref1.ID, key1Ref2.ID)
require.Equal(t, key1Ref1.Name, key1Ref2.Name)
issuer1Ref1, existing, err := importIssuer(mkc, s, issuer1.Certificate, "issuer1")
require.NoError(t, err)
require.False(t, existing)
require.Equal(t, strings.TrimSpace(issuer1.Certificate), strings.TrimSpace(issuer1Ref1.Certificate))
require.Equal(t, key1Ref1.ID, issuer1Ref1.KeyID)
require.Equal(t, "issuer1", issuer1Ref1.Name)
// Make sure if we attempt to re-import the same issuer, no import/updates occur.
// So the existing flag should be set to true, and we do not update the existing Name field.
issuer1Ref2, existing, err := importIssuer(mkc, s, issuer1.Certificate, "ignore-me")
require.NoError(t, err)
require.True(t, existing)
require.Equal(t, strings.TrimSpace(issuer1.Certificate), strings.TrimSpace(issuer1Ref1.Certificate))
require.Equal(t, issuer1Ref1.ID, issuer1Ref2.ID)
require.Equal(t, key1Ref1.ID, issuer1Ref2.KeyID)
require.Equal(t, issuer1Ref1.Name, issuer1Ref2.Name)
err = writeIssuer(ctx, s, &issuer2)
require.NoError(t, err)
err = writeKey(ctx, s, key2)
require.NoError(t, err)
// Same double-import tests as above, but this time the existing entry was created through writeIssuer, not importIssuer.
issuer2Ref, existing, err := importIssuer(mkc, s, issuer2.Certificate, "ignore-me")
require.NoError(t, err)
require.True(t, existing)
require.Equal(t, strings.TrimSpace(issuer2.Certificate), strings.TrimSpace(issuer2Ref.Certificate))
require.Equal(t, issuer2.ID, issuer2Ref.ID)
require.Equal(t, "", issuer2Ref.Name)
require.Equal(t, issuer2.KeyID, issuer2Ref.KeyID)
// Same double-import tests as above, but this time the existing entry was created through writeKey, not importKey.
key2Ref, existing, err := importKey(mkc, s, key2.PrivateKey, "ignore-me", key2.PrivateKeyType)
require.NoError(t, err)
require.True(t, existing)
require.Equal(t, key2.PrivateKey, key2Ref.PrivateKey)
require.Equal(t, key2.ID, key2Ref.ID)
require.Equal(t, "", key2Ref.Name)
}
func genIssuerAndKey(t *testing.T, b *backend, s logical.Storage) (issuerEntry, keyEntry) {
certBundle := genCertBundle(t, b, s)
keyId := genKeyId()
pkiKey := keyEntry{
ID: keyId,
PrivateKeyType: certBundle.PrivateKeyType,
PrivateKey: strings.TrimSpace(certBundle.PrivateKey) + "\n",
}
issuerId := genIssuerId()
pkiIssuer := issuerEntry{
ID: issuerId,
KeyID: keyId,
Certificate: strings.TrimSpace(certBundle.Certificate) + "\n",
CAChain: certBundle.CAChain,
SerialNumber: certBundle.SerialNumber,
}
return pkiIssuer, pkiKey
}
func genCertBundle(t *testing.T, b *backend, s logical.Storage) *certutil.CertBundle {
// Pretty gross just to generate a cert bundle, but it exercises the real CA generation code path.
fields := addCACommonFields(map[string]*framework.FieldSchema{})
fields = addCAKeyGenerationFields(fields)
fields = addCAIssueFields(fields)
apiData := &framework.FieldData{
Schema: fields,
Raw: map[string]interface{}{
"exported": "internal",
"cn": "example.com",
"ttl": 3600,
},
}
_, _, role, respErr := b.getGenerationParams(ctx, s, apiData, "/pki")
require.Nil(t, respErr)
input := &inputBundle{
req: &logical.Request{
Operation: logical.UpdateOperation,
Path: "issue/testrole",
Storage: s,
},
apiData: apiData,
role: role,
}
parsedCertBundle, err := generateCert(ctx, b, input, nil, true, b.GetRandomReader())
require.NoError(t, err)
certBundle, err := parsedCertBundle.ToCertBundle()
require.NoError(t, err)
return certBundle
}

View File

@@ -1,9 +1,13 @@
package pki
import (
"context"
"fmt"
"regexp"
"strings"
"github.com/hashicorp/vault/sdk/logical"
"github.com/hashicorp/vault/sdk/framework"
"github.com/hashicorp/vault/sdk/helper/errutil"
@@ -12,6 +16,13 @@ import (
const (
managedKeyNameArg = "managed_key_name"
managedKeyIdArg = "managed_key_id"
defaultRef = "default"
)
var (
nameMatcher = regexp.MustCompile("^" + framework.GenericNameRegex(issuerRefParam) + "$")
errIssuerNameInUse = errutil.UserError{Err: "issuer name already in use"}
errKeyNameInUse = errutil.UserError{Err: "key name already in use"}
)
func normalizeSerial(serial string) string {
@@ -23,13 +34,29 @@ func denormalizeSerial(serial string) string {
}
func kmsRequested(input *inputBundle) bool {
return kmsRequestedFromFieldData(input.apiData)
}
func kmsRequestedFromFieldData(data *framework.FieldData) bool {
exportedStr, ok := data.GetOk("exported")
if !ok {
return false
}
return exportedStr.(string) == "kms"
}
func existingKeyRequested(input *inputBundle) bool {
return existingKeyRequestedFromFieldData(input.apiData)
}
func existingKeyRequestedFromFieldData(data *framework.FieldData) bool {
exportedStr, ok := data.GetOk("exported")
if !ok {
return false
}
return exportedStr.(string) == "existing"
}
type managedKeyId interface {
String() string
}
@@ -63,6 +90,16 @@ func getManagedKeyId(data *framework.FieldData) (managedKeyId, error) {
return keyId, nil
}
func getKeyRefWithErr(data *framework.FieldData) (string, error) {
keyRef := getKeyRef(data)
if len(keyRef) == 0 {
return "", errutil.UserError{Err: fmt.Sprintf("missing argument key_ref for existing type")}
}
return keyRef, nil
}
func getManagedKeyNameOrUUID(data *framework.FieldData) (name string, UUID string, err error) {
getApiData := func(argName string) (string, error) {
arg, ok := data.GetOk(argName)
@@ -93,3 +130,69 @@ func getManagedKeyNameOrUUID(data *framework.FieldData) (name string, UUID strin
return keyName, keyUUID, nil
}
func getIssuerName(ctx context.Context, s logical.Storage, data *framework.FieldData) (string, error) {
issuerName := ""
issuerNameIface, ok := data.GetOk("issuer_name")
if ok {
issuerName = strings.TrimSpace(issuerNameIface.(string))
if strings.ToLower(issuerName) == defaultRef {
return issuerName, errutil.UserError{Err: "reserved keyword 'default' can not be used as issuer name"}
}
if !nameMatcher.MatchString(issuerName) {
return issuerName, errutil.UserError{Err: "issuer name contained invalid characters"}
}
issuerId, err := resolveIssuerReference(ctx, s, issuerName)
if err == nil {
return issuerName, errIssuerNameInUse
}
if err != nil && issuerId != IssuerRefNotFound {
return issuerName, errutil.InternalError{Err: err.Error()}
}
}
return issuerName, nil
}
func getKeyName(ctx context.Context, s logical.Storage, data *framework.FieldData) (string, error) {
keyName := ""
keyNameIface, ok := data.GetOk(keyNameParam)
if ok {
keyName = strings.TrimSpace(keyNameIface.(string))
if strings.ToLower(keyName) == defaultRef {
return "", errutil.UserError{Err: "reserved keyword 'default' can not be used as key name"}
}
if !nameMatcher.MatchString(keyName) {
return "", errutil.UserError{Err: "key name contained invalid characters"}
}
keyId, err := resolveKeyReference(ctx, s, keyName)
if err == nil {
return "", errKeyNameInUse
}
if err != nil && keyId != KeyRefNotFound {
return "", errutil.InternalError{Err: err.Error()}
}
}
return keyName, nil
}
func getIssuerRef(data *framework.FieldData) string {
return extractRef(data, issuerRefParam)
}
func getKeyRef(data *framework.FieldData) string {
return extractRef(data, keyRefParam)
}
func extractRef(data *framework.FieldData, paramName string) string {
value := strings.TrimSpace(data.Get(paramName).(string))
if strings.EqualFold(value, defaultRef) {
return defaultRef
}
return value
}
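
As an aside (not part of this diff), a minimal sketch of how the ref normalization above behaves. It assumes the pki test helpers and imports used by the storage tests earlier in this change, and it assumes that issuerRefParam resolves to the field name "issuer_ref", which is not shown in this hunk.

```go
// Editorial sketch, not in the diff: exercises extractRef via getIssuerRef.
// Assumes issuerRefParam == "issuer_ref" (the constant's value is not shown above).
func TestExtractRefNormalizesDefaultSketch(t *testing.T) {
	data := &framework.FieldData{
		Schema: map[string]*framework.FieldSchema{
			"issuer_ref": {Type: framework.TypeString, Default: defaultRef},
		},
		Raw: map[string]interface{}{"issuer_ref": "DeFaUlT"},
	}
	// Mixed-case input should collapse to the canonical "default" keyword.
	require.Equal(t, defaultRef, getIssuerRef(data))
}
```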

changelog/15277.txt
View File

@@ -0,0 +1,11 @@
```release-note:feature
**Allows Multiple Issuer Certificates to enable Non-Disruptive
Intermediate/Root Certificate Rotation**: This introduces /keys and /issuers
endpoints to allow import, generation and configuration of any number of keys
or issuers that can be used to issue and revoke certificates. Keys and Issuers
can be referred to by (a) a unique UUID; (b) a name; (c) “default”. If an
issuer existed prior to this feature, that issuer will be tagged by a migration
as “default” to allow backwards-compatible calls which don't specify an issuer.
Creation of new roles will assume an issuer of “default” unless otherwise
specified. This default can be configured at /config/issuers and /config/keys.
```
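
To make the by-reference addressing concrete, here is an editorial sketch of a pki package test (not part of this diff) that generates a root and then reads it back through the new issuer endpoint via the reserved "default" reference. It assumes the createBackendWithStorage helper and the imports used by the storage tests above.

```go
// Sketch only: assumes the pki package's existing createBackendWithStorage
// helper and the standard sdk/test imports shown in the test files above.
func TestReadDefaultIssuerSketch(t *testing.T) {
	b, s := createBackendWithStorage(t)

	// Generate a root CA so that a "default" issuer exists after this call.
	_, err := b.HandleRequest(context.Background(), &logical.Request{
		Operation: logical.UpdateOperation,
		Path:      "root/generate/internal",
		Storage:   s,
		Data:      map[string]interface{}{"common_name": "example.com"},
	})
	require.NoError(t, err)

	// Read the same issuer back via the reserved "default" reference.
	resp, err := b.HandleRequest(context.Background(), &logical.Request{
		Operation: logical.ReadOperation,
		Path:      "issuer/default",
		Storage:   s,
	})
	require.NoError(t, err)
	require.NotNil(t, resp)
}
```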

View File

@@ -10,6 +10,7 @@ import (
"time"
"github.com/hashicorp/go-secure-stdlib/strutil"
"github.com/hashicorp/vault/sdk/helper/consts"
"github.com/hashicorp/vault/sdk/logical"
"github.com/stretchr/testify/require"

View File

@@ -2,6 +2,7 @@ package certutil
import (
"bytes"
"crypto"
"crypto/ecdsa"
"crypto/ed25519"
"crypto/elliptic"
@@ -853,6 +854,86 @@ func setCerts() {
issuingCaChainPem = []string{intCertPEM, caCertPEM}
}
func TestComparePublicKeysAndType(t *testing.T) {
rsa1 := genRsaKey(t).Public()
rsa2 := genRsaKey(t).Public()
eddsa1 := genEdDSA(t).Public()
eddsa2 := genEdDSA(t).Public()
ed25519_1, _ := genEd25519Key(t)
ed25519_2, _ := genEd25519Key(t)
type args struct {
key1Iface crypto.PublicKey
key2Iface crypto.PublicKey
}
tests := []struct {
name string
args args
want bool
wantErr bool
}{
{name: "RSA_Equal", args: args{key1Iface: rsa1, key2Iface: rsa1}, want: true, wantErr: false},
{name: "RSA_NotEqual", args: args{key1Iface: rsa1, key2Iface: rsa2}, want: false, wantErr: false},
{name: "EDDSA_Equal", args: args{key1Iface: eddsa1, key2Iface: eddsa1}, want: true, wantErr: false},
{name: "EDDSA_NotEqual", args: args{key1Iface: eddsa1, key2Iface: eddsa2}, want: false, wantErr: false},
{name: "ED25519_Equal", args: args{key1Iface: ed25519_1, key2Iface: ed25519_1}, want: true, wantErr: false},
{name: "ED25519_NotEqual", args: args{key1Iface: ed25519_1, key2Iface: ed25519_2}, want: false, wantErr: false},
{name: "Mismatched_RSA", args: args{key1Iface: rsa1, key2Iface: ed25519_2}, want: false, wantErr: false},
{name: "Mismatched_EDDSA", args: args{key1Iface: ed25519_1, key2Iface: rsa1}, want: false, wantErr: false},
{name: "Mismatched_ED25519", args: args{key1Iface: ed25519_1, key2Iface: rsa1}, want: false, wantErr: false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := ComparePublicKeysAndType(tt.args.key1Iface, tt.args.key2Iface)
if (err != nil) != tt.wantErr {
t.Errorf("ComparePublicKeysAndType() error = %v, wantErr %v", err, tt.wantErr)
return
}
if got != tt.want {
t.Errorf("ComparePublicKeysAndType() got = %v, want %v", got, tt.want)
}
})
}
}
func TestNotAfterValues(t *testing.T) {
if ErrNotAfterBehavior != 0 {
t.Fatalf("Expected ErrNotAfterBehavior=%v to have value 0", ErrNotAfterBehavior)
}
if TruncateNotAfterBehavior != 1 {
t.Fatalf("Expected TruncateNotAfterBehavior=%v to have value 1", TruncateNotAfterBehavior)
}
if PermitNotAfterBehavior != 2 {
t.Fatalf("Expected PermitNotAfterBehavior=%v to have value 2", PermitNotAfterBehavior)
}
}
func genRsaKey(t *testing.T) *rsa.PrivateKey {
key, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
t.Fatal(err)
}
return key
}
func genEdDSA(t *testing.T) *ecdsa.PrivateKey {
key, err := ecdsa.GenerateKey(elliptic.P384(), rand.Reader)
if err != nil {
t.Fatal(err)
}
return key
}
func genEd25519Key(t *testing.T) (ed25519.PublicKey, ed25519.PrivateKey) {
key, priv, err := ed25519.GenerateKey(rand.Reader)
if err != nil {
t.Fatal(err)
}
return key, priv
}
var (
initTest sync.Once
privRSA8KeyPem string

View File

@@ -150,6 +150,46 @@ func ParsePKIJSON(input []byte) (*ParsedCertBundle, error) {
return nil, errutil.UserError{Err: "unable to parse out of either secret data or a secret object"}
}
func ParseDERKey(privateKeyBytes []byte) (signer crypto.Signer, format BlockType, err error) {
if signer, err = x509.ParseECPrivateKey(privateKeyBytes); err == nil {
format = ECBlock
return
}
if signer, err = x509.ParsePKCS1PrivateKey(privateKeyBytes); err == nil {
format = PKCS1Block
return
}
var rawKey interface{}
if rawKey, err = x509.ParsePKCS8PrivateKey(privateKeyBytes); err == nil {
switch rawSigner := rawKey.(type) {
case *rsa.PrivateKey:
signer = rawSigner
case *ecdsa.PrivateKey:
signer = rawSigner
case ed25519.PrivateKey:
signer = rawSigner
default:
return nil, UnknownBlock, errutil.InternalError{Err: "unknown type for parsed PKCS8 Private Key"}
}
format = PKCS8Block
return
}
return nil, UnknownBlock, err
}
func ParsePEMKey(keyPem string) (crypto.Signer, BlockType, error) {
pemBlock, _ := pem.Decode([]byte(keyPem))
if pemBlock == nil {
return nil, UnknownBlock, errutil.UserError{Err: "no data found in PEM block"}
}
return ParseDERKey(pemBlock.Bytes)
}
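
As a rough, standalone illustration (not part of this diff) of the two helpers above: generate an EC key, PEM-encode it, and round-trip it through ParsePEMKey, classifying the result with GetPrivateKeyTypeFromSigner, which is added later in this change.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"encoding/pem"
	"fmt"

	"github.com/hashicorp/vault/sdk/helper/certutil"
)

func main() {
	// Build a SEC1 ("EC PRIVATE KEY") PEM blob to feed into ParsePEMKey.
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	der, _ := x509.MarshalECPrivateKey(key)
	keyPem := string(pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: der}))

	signer, format, err := certutil.ParsePEMKey(keyPem)
	if err != nil {
		panic(err)
	}
	// Expected: format == certutil.ECBlock, key type == certutil.ECPrivateKey.
	fmt.Println(format, certutil.GetPrivateKeyTypeFromSigner(signer))
}
```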
// ParsePEMBundle takes a string of concatenated PEM-format certificate
// and private key values and decodes/parses them, checking validity along
// the way. The first certificate must be the subject certificate and issuing
@@ -170,43 +210,19 @@ func ParsePEMBundle(pemBundle string) (*ParsedCertBundle, error) {
return nil, errutil.UserError{Err: "no data found in PEM block"}
}
if signer, format, err := ParseDERKey(pemBlock.Bytes); err == nil {
if parsedBundle.PrivateKeyType != UnknownPrivateKey {
return nil, errutil.UserError{Err: "more than one private key given; provide only one private key in the bundle"}
}
parsedBundle.PrivateKeyFormat = format
parsedBundle.PrivateKeyType = GetPrivateKeyTypeFromSigner(signer)
if parsedBundle.PrivateKeyType == UnknownPrivateKey {
return nil, errutil.UserError{Err: "Unknown type of private key included in the bundle: %v"}
}
parsedBundle.PrivateKeyBytes = pemBlock.Bytes
parsedBundle.PrivateKey = signer
} else if certificates, err := x509.ParseCertificates(pemBlock.Bytes); err == nil {
certPath = append(certPath, &CertBlock{
Certificate: certificates[0],
@@ -336,7 +352,21 @@ func generateSerialNumber(randReader io.Reader) (*big.Int, error) {
return serial, nil
}
// ComparePublicKeysAndType compares two public keys and returns true if they match,
// false if their types or contents differ, and an error on unsupported key types.
func ComparePublicKeysAndType(key1Iface, key2Iface crypto.PublicKey) (bool, error) {
equal, err := ComparePublicKeys(key1Iface, key2Iface)
if err != nil {
if strings.Contains(err.Error(), "key types do not match:") {
return false, nil
}
}
return equal, err
}
// ComparePublicKeys compares two public keys and returns true if they match,
// returns an error if public key types are mismatched, or they are an unsupported key type.
func ComparePublicKeys(key1Iface, key2Iface crypto.PublicKey) (bool, error) {
switch key1Iface.(type) {
case *rsa.PublicKey:
@@ -1198,3 +1228,20 @@ func GetPublicKeySize(key crypto.PublicKey) int {
return -1
}
// CreateKeyBundle create a KeyBundle struct object which includes a generated key
// of keyType with keyBits leveraging the randomness from randReader.
func CreateKeyBundle(keyType string, keyBits int, randReader io.Reader) (KeyBundle, error) {
return CreateKeyBundleWithKeyGenerator(keyType, keyBits, randReader, generatePrivateKey)
}
// CreateKeyBundleWithKeyGenerator create a KeyBundle struct object which includes
// a generated key of keyType with keyBits leveraging the randomness from randReader and
// delegates the actual key generation to keyGenerator
func CreateKeyBundleWithKeyGenerator(keyType string, keyBits int, randReader io.Reader, keyGenerator KeyGenerator) (KeyBundle, error) {
result := KeyBundle{}
if err := keyGenerator(keyType, keyBits, &result, randReader); err != nil {
return result, err
}
return result, nil
}
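
A short usage sketch (editorial, assuming the certutil package as modified in this PR): build a KeyBundle with the default generator and render it back out as PEM via ToPrivateKeyPemString from the types change below.

```go
package main

import (
	"crypto/rand"
	"fmt"

	"github.com/hashicorp/vault/sdk/helper/certutil"
)

func main() {
	// Generate a P-256 EC key into a KeyBundle via the default key generator.
	bundle, err := certutil.CreateKeyBundle("ec", 256, rand.Reader)
	if err != nil {
		panic(err)
	}

	// Render the raw key bytes back out as a PEM string.
	pemKey, err := bundle.ToPrivateKeyPemString()
	if err != nil {
		panic(err)
	}
	fmt.Println(bundle.PrivateKeyType) // "ec"
	fmt.Println(pemKey)
}
```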

View File

@@ -78,9 +78,10 @@ type BlockType string
// Well-known formats
const (
UnknownBlock BlockType = ""
PKCS1Block BlockType = "RSA PRIVATE KEY"
PKCS8Block BlockType = "PRIVATE KEY"
ECBlock BlockType = "EC PRIVATE KEY"
)
// ParsedPrivateKeyContainer allows common key setting for certs and CSRs
@@ -137,6 +138,25 @@ type ParsedCSRBundle struct {
CSR *x509.CertificateRequest
}
type KeyBundle struct {
PrivateKeyType PrivateKeyType
PrivateKeyBytes []byte
PrivateKey crypto.Signer
}
func GetPrivateKeyTypeFromSigner(signer crypto.Signer) PrivateKeyType {
switch signer.(type) {
case *rsa.PrivateKey:
return RSAPrivateKey
case *ecdsa.PrivateKey:
return ECPrivateKey
case ed25519.PrivateKey:
return Ed25519PrivateKey
default:
return UnknownPrivateKey
}
}
// ToPEMBundle converts a string-based certificate bundle
// to a PEM-based string certificate bundle in trust path
// order, leaf certificate first
@@ -661,9 +681,18 @@ type URLEntries struct {
OCSPServers []string `json:"ocsp_servers" structs:"ocsp_servers" mapstructure:"ocsp_servers"`
}
type NotAfterBehavior int
const (
ErrNotAfterBehavior NotAfterBehavior = iota
TruncateNotAfterBehavior
PermitNotAfterBehavior
)
type CAInfoBundle struct {
ParsedCertBundle
URLs *URLEntries
LeafNotAfterBehavior NotAfterBehavior
}
func (b *CAInfoBundle) GetCAChain() []*CertBlock {
@@ -690,10 +719,14 @@ func (b *CAInfoBundle) GetCAChain() []*CertBlock {
func (b *CAInfoBundle) GetFullChain() []*CertBlock {
var chain []*CertBlock
// Some bundles already include the root in the chain,
// so don't include it twice.
if len(b.CAChain) == 0 || !bytes.Equal(b.CAChain[0].Bytes, b.CertificateBytes) {
chain = append(chain, &CertBlock{
Certificate: b.Certificate,
Bytes: b.CertificateBytes,
})
}
if len(b.CAChain) > 0 {
chain = append(chain, b.CAChain...)
@@ -825,3 +858,30 @@ func AddKeyUsages(data *CreationBundle, certTemplate *x509.Certificate) {
certTemplate.ExtKeyUsage = append(certTemplate.ExtKeyUsage, x509.ExtKeyUsageMicrosoftKernelCodeSigning)
}
}
// SetParsedPrivateKey sets the private key parameters on the bundle
func (p *KeyBundle) SetParsedPrivateKey(privateKey crypto.Signer, privateKeyType PrivateKeyType, privateKeyBytes []byte) {
p.PrivateKey = privateKey
p.PrivateKeyType = privateKeyType
p.PrivateKeyBytes = privateKeyBytes
}
func (p *KeyBundle) ToPrivateKeyPemString() (string, error) {
block := pem.Block{}
if p.PrivateKeyBytes != nil && len(p.PrivateKeyBytes) > 0 {
block.Bytes = p.PrivateKeyBytes
switch p.PrivateKeyType {
case RSAPrivateKey:
block.Type = "RSA PRIVATE KEY"
case ECPrivateKey:
block.Type = "EC PRIVATE KEY"
default:
block.Type = "PRIVATE KEY"
}
privateKeyPemString := strings.TrimSpace(string(pem.EncodeToMemory(&block)))
return privateKeyPemString, nil
}
return "", errutil.InternalError{Err: "No Private Key Bytes to Wrap"}
}