
travelaudience / kubernetes-vault

Licence: Apache-2.0 license
Run Hashicorp Vault on top of Kubernetes (GKE). Includes instructions for automated backups (GCS) and day-to-day usage.

Programming Languages

shell

Projects that are alternatives of or similar to kubernetes-vault

vault-terraform-demo
Deploy HashiCorp Vault with Terraform in GKE.
Stars: ✭ 47 (+213.33%)
Mutual labels:  vault, gke
kubernetes-vault-example
Placeholder for training material related to TA usage of Vault for securing Kubernetes apps.
Stars: ✭ 16 (+6.67%)
Mutual labels:  vault, gke
gke-vault-demo
This demo builds two GKE Clusters and guides you through using secrets in Vault, using Kubernetes authentication from within a pod to login to Vault, and fetching short-lived Google Service Account credentials on-demand from Vault within a pod.
Stars: ✭ 63 (+320%)
Mutual labels:  vault, gke
artifactory-secrets-plugin
HashiCorp Vault Artifactory Secrets Plugin
Stars: ✭ 17 (+13.33%)
Mutual labels:  vault
vaultlib
Lightweight Go client library for reading Vault kv secrets
Stars: ✭ 21 (+40%)
Mutual labels:  vault
gke-rbac-demo
This project covers two use cases for RBAC within a Kubernetes Engine cluster. First, assigning different permissions to user personas. Second, granting limited API access to an application running within your cluster. Since RBAC's flexibility can occasionally result in complex rules, you will also perform common steps for troubleshooting RBAC a…
Stars: ✭ 138 (+820%)
Mutual labels:  gke
vault-load-testing
Automated load tests for Vault and Consul using the locust.io Python framework
Stars: ✭ 44 (+193.33%)
Mutual labels:  vault
rundeck-vault-plugin
Development continues here:
Stars: ✭ 17 (+13.33%)
Mutual labels:  vault
croc-hunter-jenkinsx
Croc Hunter demo, deployed with Jenkins X
Stars: ✭ 19 (+26.67%)
Mutual labels:  gke
cryptorious
CLI Password Manager
Stars: ✭ 15 (+0%)
Mutual labels:  vault
gke-demo
Demonstration of complete, fully-featured CI/CD and cloud automation for microservices, done with GCP/GKE
Stars: ✭ 47 (+213.33%)
Mutual labels:  gke
gke-ip-address-management
An application to help with IP Address Management (IPAM) for Google Kubernetes Engine (GKE) clusters. Easily allows the calculation of the subnets required to spin up GKE clusters in VPC-native mode. See it at: https://googlecloudplatform.github.io/gke-ip-address-management/
Stars: ✭ 45 (+200%)
Mutual labels:  gke
vault-consul-swarm
Deploy Vault and Consul with Docker Swarm
Stars: ✭ 20 (+33.33%)
Mutual labels:  vault
continuous-delivery-spinnaker-gke
Tutorial for deploying, configuring and running Spinnaker on GKE for continuous delivery
Stars: ✭ 34 (+126.67%)
Mutual labels:  gke
ansible-vault-editor-idea-plugin
Ansible Vault Editor IntelliJ Plugin with auto encryption/decryption
Stars: ✭ 29 (+93.33%)
Mutual labels:  vault
vault-token-helper
@hashicorp Vault Token Helper for macOS, Linux and Windows with support for secure token storage and multiple Vault servers 🔐
Stars: ✭ 74 (+393.33%)
Mutual labels:  vault
terraform-gke
A set of terraform modules for building GKE clusters.
Stars: ✭ 17 (+13.33%)
Mutual labels:  gke
offensive-infrastructure
Offensive Infrastructure with Modern Technologies
Stars: ✭ 88 (+486.67%)
Mutual labels:  vault
nanvault
A standalone CLI tool to encrypt and decrypt files in the Ansible Vault format
Stars: ✭ 33 (+120%)
Mutual labels:  vault
hcat
Hashicorp Configuration and Templating library (hcat, pronounced hashicat)
Stars: ✭ 89 (+493.33%)
Mutual labels:  vault

kubernetes-vault

Pre-Requisites

  • A GCP project with the Google Cloud Key Management Service (KMS) API enabled.

  • Having the following Google Cloud IAM roles in this project:

    • Cloud KMS > Cloud KMS Admin

    • Cloud KMS > Cloud KMS CryptoKey Encrypter/Decrypter

    • Container > Container Engine Admin

    • Compute Engine > Compute Network Admin

  • A working GKE cluster running Kubernetes v1.8.3 with legacy authorization disabled and Read/Write permissions for Storage.

  • A working installation of gcloud configured to access the target Google Cloud Platform project.

  • A working installation of kubectl configured to access the target cluster.

  • A working installation of cfssl (more details below).

  • A working installation of jq (more details below).

  • The Vault binary (more details below).

Before proceeding

The steps described below are mandatory.

All commands should be run from the root of this repository.

Disable GKE legacy authorization mode

Disabling legacy authorization means that Role-Based Access Control (RBAC) is enabled. RBAC is the new way to manage permissions in Kubernetes 1.6+. Reading the RBAC documentation and understanding the differences between RBAC and legacy authorization is strongly encouraged.
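If legacy authorization is still enabled on the cluster, it can be turned off with a command along the following lines; the cluster name and zone are placeholders to be replaced with one’s own values:

$ gcloud container clusters update <cluster-name> \
    --zone <cluster-zone> \
    --no-enable-legacy-authorization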

Setup RBAC

Due to a known issue with RBAC on GKE, one must manually grant oneself the cluster-admin role before proceeding. In order to do so, one must first retrieve one’s identity:

$ MY_GCLOUD_USER=$(gcloud info \
  | grep Account \
  | awk -F'[][]' '{print $2}')

and then create a ClusterRoleBinding:

$ kubectl create clusterrolebinding \
  <cluster-role-binding-name> \
  --clusterrole=cluster-admin \
  --user=${MY_GCLOUD_USER}

One must replace <cluster-role-binding-name> above with a unique, meaningful name. It is a good idea to include one’s identity in the name (e.g., j.doe-cluster-admin).

Setup Google Cloud KMS

As the setup process generates and deals with sensitive information, one will use Google Cloud KMS to add an extra layer of security to the process. To do that, one should first create a Google Cloud KMS keyring named vault:

$ gcloud kms keyrings create vault --location global

Then, one should create two Google Cloud KMS keys, etcd and init, which will be used to encrypt and decrypt data:

$ gcloud kms keys create etcd --location global --keyring vault --purpose encryption
$ gcloud kms keys create init --location global --keyring vault --purpose encryption
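To confirm that both keys were created as expected, one can list the keys in the vault keyring:

$ gcloud kms keys list --keyring vault --location global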

Install cfssl and cfssljson

cfssl can be installed either

  • from source — this requires a Go 1.6+ environment setup on one’s workstation; or

  • by downloading the binaries for one’s architecture and dropping them on ${PATH}.

If downloading the binaries one needs only to download cfssl and cfssljson.
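As an example, on a Linux amd64 workstation the two binaries could be fetched roughly as follows; the URLs reflect the cfssl download layout at the time of writing and may change, so one should check the project page for the current links:

$ curl -sSL -o cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
$ curl -sSL -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$ chmod +x cfssl cfssljson
$ sudo mv cfssl cfssljson /usr/local/bin/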

Install jq

jq can be installed either

  • by downloading the appropriate binary from the project’s page and dropping it on ${PATH}; or

  • by executing brew install jq if on macOS.

Install Vault client

Vault client can be installed by downloading the binary for one’s architecture and dropping it on ${PATH}. The binary can act both as server and client, and will be needed later on to configure the Vault deployment.
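As an example, on a Linux amd64 workstation, and assuming the v0.9.0 release that appears in the sample output later in this document:

$ curl -sSLO https://releases.hashicorp.com/vault/0.9.0/vault_0.9.0_linux_amd64.zip
$ unzip vault_0.9.0_linux_amd64.zip
$ sudo mv vault /usr/local/bin/
$ vault version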

Creating the vault namespace

To better group and manage the components of the solution it is recommended to create a dedicated namespace in the cluster:

$ kubectl create namespace vault
namespace "vault" created

If one does not follow this approach, or if one chooses a different name for the namespace, one must update the scripts and Kubernetes descriptors in this repository accordingly.
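Optionally, to avoid passing --namespace vault to every command, one may set the namespace on the current kubectl context; the commands in this document still pass the flag explicitly, so this is purely a convenience:

$ kubectl config set-context "$(kubectl config current-context)" --namespace vault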

Deploying etcd

Deploying etcd-operator

etcd-operator will be responsible for managing the etcd cluster that Vault will use as storage backend. etcd-backup-operator and etcd-restore-operator will, in turn, handle tasks such as periodic backups and disaster recovery. These three components and the cluster itself will live in the vault namespace.

Since RBAC is active on the cluster, one also needs to set up adequate permissions. To do this one needs to:

  • create a ClusterRole specifying a list of permissions;

  • create a dedicated ServiceAccount for etcd-operator;

  • create a ClusterRoleBinding that grants these permissions to the service account.

Below is the command needed to perform the tasks described above and deploy the three components of etcd-operator:

$ kubectl create -f ./etcd-operator/etcd-operator-bundle.yaml
clusterrole "etcd-operator" created
serviceaccount "etcd-operator" created
clusterrolebinding "etcd-operator" created
deployment "etcd-operator" created
deployment "etcd-backup-operator" created
deployment "etcd-restore-operator" created
service "etcd-restore-operator" created

At this point it is a good idea to check whether the deployments succeeded. One should wait for a few seconds and then run:

$ ETCD_OPERATOR_POD_NAME=$(kubectl get pod --namespace vault \
  | grep etcd-operator \
  | awk 'NR==1' \
  | awk '{print $1}')
$ ETCD_BACKUP_OPERATOR_POD_NAME=$(kubectl get pod --namespace vault \
  | grep etcd-backup-operator \
  | awk 'NR==1' \
  | awk '{print $1}')
$ ETCD_RESTORE_OPERATOR_POD_NAME=$(kubectl get pod --namespace vault \
  | grep etcd-restore-operator \
  | awk 'NR==1' \
  | awk '{print $1}')
$ kubectl logs --follow --namespace vault "${ETCD_OPERATOR_POD_NAME}"
time="2017-11-25T10:03:26Z" level=info msg="etcd-operator Version: 0.7.0"
time="2017-11-25T10:03:26Z" level=info msg="Git SHA: 3bcbdb1"
time="2017-11-25T10:03:26Z" level=info msg="Go Version: go1.9.1"
time="2017-11-25T10:03:26Z" level=info msg="Go OS/Arch: linux/amd64"
time="2017-11-25T10:03:26Z" level=info msg="Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"vault", Name:"etcd-operator", UID:"e1047470-d1c7-11e7-800e-42010a9a0046", APIVersion:"v1", ResourceVersion:"698376", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' etcd-operator-5876bdb586-dhn95 became leader"
$ kubectl logs --follow --namespace vault "${ETCD_BACKUP_OPERATOR_POD_NAME}"
time="2017-11-25T10:03:26Z" level=info msg="Go Version: go1.9.2"
time="2017-11-25T10:03:26Z" level=info msg="Go OS/Arch: linux/amd64"
time="2017-11-25T10:03:26Z" level=info msg="etcd-backup-operator Version: 0.0.1+git"
time="2017-11-25T10:03:26Z" level=info msg="Git SHA: b970de9d"
ERROR: logging before flag.Parse: I1127 18:30:36.257589       1 leaderelection.go:174] attempting to acquire leader lease...
ERROR: logging before flag.Parse: I1127 18:30:36.295994       1 leaderelection.go:184] successfully acquired lease vault/etcd-backup-operator
time="2017-11-25T10:03:26Z" level=info msg="Event(v1.ObjectReference{Kind:\"Endpoints\", Namespace:\"vault\", Name:\"etcd-backup-operator\", UID:\"0f990a0b-d3a1-11e7-800e-42010a9a0046\", APIVersion:\"v1\", ResourceVersion:\"1083817\", FieldPath:\"\"}): type: 'Normal' reason: 'LeaderElection' etcd-backup-operator-65ff78d94d-t2jjf became leader"
time="2017-11-25T10:03:26Z" level=info msg="starting backup controller" pkg=controller
$ kubectl logs --follow --namespace vault "${ETCD_RESTORE_OPERATOR_POD_NAME}"
time="2017-11-25T10:03:26Z" level=info msg="Go Version: go1.9.2"
time="2017-11-25T10:03:26Z" level=info msg="Go OS/Arch: linux/amd64"
time="2017-11-25T10:03:26Z" level=info msg="etcd-backup-operator Version: 0.0.1+git"
time="2017-11-25T10:03:26Z" level=info msg="Git SHA: b970de9d"
ERROR: logging before flag.Parse: I1127 18:30:36.257589       1 leaderelection.go:174] attempting to acquire leader lease...
ERROR: logging before flag.Parse: I1127 18:30:36.295994       1 leaderelection.go:184] successfully acquired lease vault/etcd-backup-operator
time="2017-11-25T10:03:26Z" level=info msg="Event(v1.ObjectReference{Kind:\"Endpoints\", Namespace:\"vault\", Name:\"etcd-restore-operator\", UID:\"8663f9bb-d3a0-11e7-800e-42010a9a0046\", APIVersion:\"v1\", ResourceVersion:\"1083818\", FieldPath:\"\"}): type: 'Normal' reason: 'LeaderElection' etcd-restore-operator-86f7f67cdc-dk455 became leader"
time="2017-11-25T10:03:26Z" level=info msg="listening on 0.0.0.0:19999"
time="2017-11-25T10:03:26Z" level=info msg="starting restore controller" pkg=controller

If the output doesn’t differ much from the example it is safe to proceed.

Securing etcd cluster

One is now almost ready to create the etcd cluster that will back the Vault deployment. However, before proceeding, one needs to generate TLS certificates to secure communications within and to the etcd cluster.

ℹ️

Even though the etcd cluster won’t be exposed outside the Kubernetes cluster, and even though Vault encrypts all data before it reaches the network, it is highly recommended to adopt additional security measures, such as enabling TLS authentication and encryption both within the cluster (i.e., between cluster members) and with clients of the cluster.

One will need different types of certificates for establishing TLS:

  • A server certificate which etcd will use for serving client-to-server requests (such as a request for a key).

  • A server certificate which etcd will use for serving server-to-server (i.e., peer-to-peer) requests, such as clustering operations.

  • A client certificate to authenticate requests from etcd-operator.

  • A client certificate to authenticate requests from Vault.

One will also need a Certificate Authority (CA) to sign these certificates. Since one will be securing communications in cluster-internal domains (such as etcd-0000.etcd.vault.svc.cluster.local) one cannot rely on an external CA to provide these certificates. Therefore, one must bootstrap their own CA and use it to sign these certificates.

ℹ️

Since etcd-operator has some strict requirements on the format of the input for TLS configuration, and due to the amount of certificates one needs to generate, a helper script is provided at tls/create-etcd-certs.sh. Running it will bootstrap the CA and sign all the necessary certificates.
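For reference, the script follows the usual cfssl workflow: bootstrap a CA, then sign certificates against it. Below is a minimal sketch of that workflow; the CSR file names, common names and hostnames are illustrative only, and the helper script remains the authoritative source for the exact subject names and SANs used in this setup:

$ cat > ca-csr.json <<'EOF'
{
  "CN": "etcd-ca",
  "key": { "algo": "rsa", "size": 2048 }
}
EOF
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ cat > server-csr.json <<'EOF'
{
  "CN": "etcd-server",
  "hosts": ["*.etcd.vault.svc.cluster.local", "etcd-client.vault.svc.cluster.local"],
  "key": { "algo": "rsa", "size": 2048 }
}
EOF
$ cfssl gencert -ca ca.pem -ca-key ca-key.pem server-csr.json | cfssljson -bare server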

To generate the certificates run:

$ ./tls/create-etcd-certs.sh
2017/11/25 10:08:12 [INFO] generating a new CA key and certificate from CSR
(...)

This will generate some *-crt.pem.kms and *-key.pem.kms files that will be placed in the tls/certs folder. These files are encrypted using Google Cloud KMS and may only be decrypted by an individual with the Cloud KMS CryptoKey Encrypter/Decrypter permissions on the current GCP project. Nonetheless, one should make sure that these files are distributed only among trusted individuals.

ℹ️

The Certificate Authority generated in this step is not the same thing as the Certificate Authority one is seeking to establish as a result of deploying this project. Its only purpose is to establish trust in this particular setup of etcd and Vault, and it must not be used for anything else.

As mentioned above, etcd-operator has strict requirements regarding the names of the certificate files used to establish TLS communications. In particular, etcd-operator expects three Kubernetes secrets to be provided when creating a new etcd cluster:

  • etcd-peer-tls: a secret containing a certificate bundle for server-to-server (peer) communication.

  • etcd-server-tls: a secret containing a certificate bundle for client-to-server communication.

  • etcd-operator-tls: a secret containing a certificate bundle for authenticating etcd-operator requests.

ℹ️

The structure of each secret is discussed in detail in the etcd-operator docs. In order to ease the creation of these secrets, a helper script is provided at tls/create-etcd-secrets.sh. Running it will create all the necessary secrets in the Kubernetes cluster.
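As an illustration of what the script does for one of these secrets, creating etcd-server-tls by hand might look roughly like the following. The key names (server-ca.crt, server.crt, server.key) follow the etcd-operator TLS guide, while the file paths are hypothetical, since the certificates in this repository are stored KMS-encrypted and would have to be decrypted first:

$ kubectl create secret generic etcd-server-tls \
    --namespace vault \
    --from-file=server-ca.crt=ca-crt.pem \
    --from-file=server.crt=server-crt.pem \
    --from-file=server.key=server-key.pem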

To create the aforementioned secrets, one must run:

$ ./tls/create-etcd-secrets.sh
secret "etcd-peer-tls" created
secret "etcd-server-tls" created
secret "etcd-operator-tls" created
secret "vault-etcd-tls" created

The vault-etcd-tls secret will be needed later on.

ℹ️

At this point one should give this note a second read and decide what to do with the files in tls/certs, as they won’t be needed for the remainder of the procedure.

Deploying the etcd cluster

Now that etcd-operator and the necessary Kubernetes secrets are adequately setup, it is time to create the etcd cluster. To do that, one must run:

$ kubectl create -f etcd/etcd-etcdcluster.yaml
etcdcluster "etcd" created

By default, the cluster specification in ./etcd/etcd-etcdcluster.yaml is as follows:

  • The cluster name is etcd.

  • The cluster uses etcd v3.1.10.

  • The cluster has three nodes.
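One can double-check these values against the live object by inspecting the custom resource that was just created (etcdcluster being the resource kind registered by etcd-operator):

$ kubectl get etcdcluster etcd --namespace vault -o yaml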

Before proceeding any further, one must check whether the etcd cluster deployment succeeded by inspecting pods in the vault namespace:

$ kubectl get pod --namespace vault
NAME                                     READY     STATUS    RESTARTS   AGE
etcd-0000                                1/1       Running   0          6m
etcd-0001                                1/1       Running   0          6m
etcd-0002                                1/1       Running   0          6m
etcd-operator-5876bdb586-dj6dc           1/1       Running   0          7m
etcd-backup-operator-5876bdb586-t2jjf    1/1       Running   0          7m
etcd-restore-operator-5876bdb586-dk455   1/1       Running   0          7m

If one’s output is similar to this it is safe to proceed.

Deploying Vault

Vault’s deployment has to be split into three parts:

  • One first creates the Vault StatefulSet itself, which creates two Vault instances that are uninitialized and sealed. This means they will not accept any requests except for the ones required for the initial configuration process.

  • One then proceeds to initialize the Vault storage backend and unseal the two Vault instances. This will leave Vault in a state in which it can accept requests.

  • One finally exposes the Vault deployment to outside the Kubernetes cluster and secures the deployment.

Initial deployment

Vault’s deployment is composed of seven files:

  • nginx-configmap.yaml: contains Nginx’s configuration file.

  • vault-configmap.yaml: contains Vault’s configuration file.

  • vault-serviceaccount.yaml: creates a service account for Vault.

  • vault-service.yaml: exposes Vault as a service inside the Kubernetes cluster (both for API requests and clustering).

  • vault-statefulset.yaml: describes the deployment of Vault itself.

  • vault-api-service.yaml: creates a NodePort service that exposes the Vault API.

  • vault-api-ingress.yaml: exposes the Vault API to outside the Kubernetes cluster.

ℹ️

Creating a dedicated service account for Vault doesn’t bring any immediate benefit. However, it allows one to follow the principle of least privilege from an early stage and to prevent some known issues with default service accounts.

ℹ️

The headless service defined in vault-service.yaml supports both the StatefulSet defined in vault-statefulset.yaml and the clustering and high-availability features of the Vault deployment.

ℹ️

One must create the vault-api-service.yaml service to support the ingress resource in GCP, since the GCE ingress controller requires a service of type NodePort to be created.

In this first part one will be creating the first five resources, leaving the second service and the ingress resources for later. In order to start the deployment one needs to run the following commands:

Before running the following commands one should update the vault/vault-configmap.yaml file with the address where Vault will be made publicly accessible (check below).
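As an illustration, assuming the configuration file ships with vault.example.com as a placeholder (the same domain used in the sample output further below), one could locate and replace it as follows, substituting a real domain such as the hypothetical vault.mycompany.example (note that on macOS, sed -i requires an empty-string suffix argument):

$ grep -n 'vault.example.com' vault/vault-configmap.yaml
$ sed -i 's/vault\.example\.com/vault.mycompany.example/g' vault/vault-configmap.yaml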

$ kubectl create -f vault/nginx-configmap.yaml
configmap "vault" created
$ kubectl create -f vault/vault-configmap.yaml
configmap "vault" created
$ kubectl create -f vault/vault-serviceaccount.yaml
serviceaccount "vault" created
$ kubectl create -f vault/vault-service.yaml
service "vault" created
$ kubectl create -f vault/vault-statefulset.yaml
statefulset "vault" created

As mentioned above, this will create two Vault instances that are uninitialized and sealed. This means that they will not accept requests except for the ones required for the initial configuration process.

Before proceeding any further, one must check whether the Vault deployment succeeded by inspecting pods in the vault namespace:

$ kubectl get pod --namespace vault
NAME                                     READY     STATUS    RESTARTS   AGE
etcd-0000                                1/1       Running   0          1h
etcd-0001                                1/1       Running   0          1h
etcd-0002                                1/1       Running   0          1h
etcd-operator-5876bdb586-dj6dc           1/1       Running   0          1h
etcd-backup-operator-5876bdb586-t2jjf    1/1       Running   0          1h
etcd-restore-operator-5876bdb586-dk455   1/1       Running   0          1h
vault-0                                  1/2       Running   0          19s
vault-1                                  1/2       Running   0          19s

If one’s output is similar to this it is safe to proceed.

At this point, Vault has not yet been initialized and unsealed. Only after that happens will Kubernetes mark the Vault pods as ready to serve requests.

If one inspects the logs of a Vault container, say vault-0, one will find the following output:

$ kubectl logs --namespace vault --container vault vault-0
==> Vault server configuration:

                     Cgo: disabled
         Cluster Address: https://vault:8201
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: true
        Redirect Address: https://vault.example.com
                 Storage: etcd (HA available)
                 Version: Vault v0.9.0
             Version Sha: bdac1854478538052ba5b7ec9a9ec688d35a3335

==> Vault server started! Log data will stream in below:

2017/11/25 13:22:16.424742 [INFO ] core: security barrier not initialized
2017/11/25 13:22:21.423651 [INFO ] core: security barrier not initialized
2017/11/25 13:22:26.424083 [INFO ] core: security barrier not initialized
(...)

These INFO level messages indicate that Vault hasn’t been initialized yet. Vault will keep repeating these until one takes action.

Initializing and unsealing Vault

This procedure must be executed by a trusted individual. One will be handling information that, if leaked, can compromise the security of the data stored by Vault.

Vault must now be initialized, and both instances must be unsealed. As the Vault pods are not accessible from outside the cluster at this time, one needs to establish port-forwarding to one’s local workstation. To do that, one should run the following:

$ kubectl port-forward --namespace vault vault-0 18200:8200 // (1)
Forwarding from 127.0.0.1:18200 -> 8200
Forwarding from [::1]:18200 -> 8200
  1. Forwards port 8200 of the first Vault pod to the local 18200 port.

Now, one should leave this command running, open a second terminal window and:

Set the value of the VAULT_ADDR environment variable to the address where the first Vault pod is exposed locally.

$ export VAULT_ADDR="http://127.0.0.1:18200" // (1)

Initialize Vault, encrypting the resulting information using the init key created above:

$ vault init | gcloud kms encrypt \
    --plaintext-file - \
    --ciphertext-file vault-init.kms \
    --keyring vault \
    --key init \
    --location global

Before proceeding, one may want to check that the initialization and encryption were successful. To do that one must run:

$ gcloud kms decrypt \
    --plaintext-file - \
    --ciphertext-file vault-init.kms \
    --keyring vault \
    --key init \
    --location global
Unseal Key 1: +G8hVWrVaOnEQquasRyWdE2RAFuCQumodY6YgzfJzGOD
Unseal Key 2: XpfepkWVkMWLMJRyranNQDSofE1TjXTJho+ImaozyQ6X
Unseal Key 3: wfFvslot+7s0ainbE40iIhfSk7L6rs+4prc0pjQzvxtJ
Unseal Key 4: BhWFOwkg2QTW5DkBfzZWTygWAQ3IA6pMGtUF1i+wUxOr
Unseal Key 5: iLGQSSJhBqe65zpkliOATGcCe+7d2L0wn5Nl3KO3PZW9
Initial Root Token: c689c370-22ec-8268-0ea8-4cbb50c2e00c

Vault initialized with 5 keys and a key threshold of 3. Please
securely distribute the above keys. When the vault is re-sealed,
restarted, or stopped, you must provide at least 3 of these keys
to unseal it again.

Vault does not store the master key. Without at least 3 keys,
your vault will remain permanently sealed.

As one may see, this outputs the five unseal keys and the initial root token for the Vault instance. At this point it is of extreme importance to:

  • establish adequate access to Google Cloud KMS in the project so that only trusted individuals are able to decrypt vault-init.kms.

  • distribute vault-init.kms among these trusted individuals.

ℹ️

Every individual with Cloud KMS CryptoKey Encrypter/Decrypter permissions on the project and access to vault-init.kms is able to unseal Vault and perform operations as root.

Now that Vault is initialized it is time to unseal it so that it can be used. With the default configuration, one will need to perform the unseal operation three times, each time using a different one of the unseal keys generated above.

Using the same terminal window where one ran vault init, one must run:

$ for i in {1..3}; do \
    vault unseal "$(
      gcloud kms decrypt \
        --plaintext-file - \
        --ciphertext-file vault-init.kms \
        --keyring vault \
        --key init \
        --location global \
        | awk "NR==${i}" \
        | awk -F ": " '{print $2}'
      )"
done

This will decrypt vault-init.kms in-memory, pick the first three unseal keys and perform an unseal step with each one.

The first Vault pod is now unsealed and ready to serve requests.

Inspecting pods in the vault namespace should now output something similar to:

$ kubectl get --namespace vault pod
NAME                                     READY     STATUS    RESTARTS   AGE
etcd-0000                                1/1       Running   0          1h
etcd-0001                                1/1       Running   0          1h
etcd-0002                                1/1       Running   0          1h
etcd-operator-5876bdb586-dj6dc           1/1       Running   0          1h
etcd-backup-operator-5876bdb586-t2jjf    1/1       Running   0          1h
etcd-restore-operator-5876bdb586-dk455   1/1       Running   0          1h
vault-0                                  2/2       Running   0          3m
vault-1                                  2/2       Running   0          3m

Now, one must also unseal the second Vault instance. One should get back to the first terminal window — where kubectl port-forward is running — and stop the running process (using Ctrl-C). Then, one should run

$ kubectl port-forward --namespace vault vault-1 28200:8200 // (1)
Forwarding from 127.0.0.1:28200 -> 8200
Forwarding from [::1]:28200 -> 8200
  1. Forwards port 8200 of the second Vault pod to the local 28200 port.

Now one should get back to the second terminal window — where vault init and vault unseal were run before — and repeat the instructions followed to unseal the first Vault pod.
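Concretely, that means pointing VAULT_ADDR at the new local port and re-running the same unseal loop as before:

$ export VAULT_ADDR="http://127.0.0.1:28200"
$ for i in {1..3}; do \
    vault unseal "$(
      gcloud kms decrypt \
        --plaintext-file - \
        --ciphertext-file vault-init.kms \
        --keyring vault \
        --key init \
        --location global \
        | awk "NR==${i}" \
        | awk -F ": " '{print $2}'
      )"
done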

If one inspects the logs of the vault-1 pod one will see that it is now unsealed:

$ kubectl logs --container vault --namespace vault vault-1
(...)
2017/11/25 13:25:13.459271 [INFO ] core: vault is unsealed
2017/11/25 13:25:13.459401 [INFO ] core: entering standby mode

However, inspecting pods in the vault namespace will still show the pod as not ready:

$ kubectl get --namespace vault pod
NAME      READY     STATUS    RESTARTS   AGE
vault-0   2/2       Running   0          9m
vault-1   1/2       Running   0          9m

This is because the second Vault pod operates as a standby instance, meaning that it is ready to serve requests in case the first pod fails. Kubernetes marks the standby instance as not ready so that incoming requests are directed only to the active instance. If the vault-0 pod fails for any reason, vault-1 will take over and Kubernetes will direct all incoming requests to it. In that scenario, vault-0 will assume the standby role once it recovers from the failure.

To learn more about clustering and high-availability in Vault one should head over to Vault’s HA documentation.

Revoke root token

There is one last step to perform before proceeding: revoking the initial root token. While this may seem counter-intuitive, it is in fact a recommended practice. In the same terminal window where one ran the last vault unseal command, one should run:

$ vault auth "$(
    gcloud kms decrypt \
      --plaintext-file - \
      --ciphertext-file vault-init.kms \
      --keyring vault \
      --key init \
      --location global \
      | awk "NR==6" \
      | awk -F ": " '{print $2}'
    )"
Successfully authenticated! You are now logged in.
token: (1)
token_duration: 0
token_policies: [root]
  1. This corresponds to the initial root token.

This will decrypt vault-init.kms in-memory, pick the initial root token and perform an authentication step. One can then revoke the token:

$ vault token-revoke -self
Success! Token revoked if it existed.

The Vault deployment is now initialized, both instances are unsealed, and the initial root token has been revoked. It is now time to continue the deployment by exposing the Vault deployment to outside the Kubernetes cluster.

💡

One may now return to the terminal window where kubectl port-forward is running and terminate the process using Ctrl-C.

The unseal procedure must be performed for every new Vault pod, e.g. whenever a pod crashes or is restarted.

Exposing Vault

One will now expose Vault to outside the cluster, so that applications running in other clusters can access it. To do this one needs to create a global static IP in GCP:

$ gcloud compute addresses create vault --global
Created [https://www.googleapis.com/compute/v1/projects/<project-name>/global/addresses/vault].
$ gcloud compute addresses describe vault --global
address: (1)
creationTimestamp: '2017-11-25T06:25:39.628-07:00'
description: ''
id: '7579662126224115422'
ipVersion: IPV4
kind: compute#address
name: vault
selfLink: https://www.googleapis.com/compute/v1/projects/<project-name>/global/addresses/vault
status: RESERVED
  1. The IP address one will use to expose Vault.

If one creates the IP address with a different name one must update the vault/vault-api-ingress.yaml file accordingly.

After the vault IP address is created, one must configure the DNS of the domain one is going to use to expose Vault. For instance, if one wants to expose Vault at https://vault.example.com one has to create a DNS record with type A and name vault pointing to the abovementioned IP address at the DNS provider for the example.com domain. The steps to do this are highly dependent on the DNS provider for the domain and cannot be detailed here.
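As one concrete example, if the example.com zone happened to be managed by Google Cloud DNS, the record could be created along these lines; the managed zone name example-zone is hypothetical, and <the-reserved-ip> stands for the address reserved in the previous step:

$ gcloud dns record-sets transaction start --zone example-zone
$ gcloud dns record-sets transaction add --zone example-zone \
    --name vault.example.com. --ttl 300 --type A "<the-reserved-ip>"
$ gcloud dns record-sets transaction execute --zone example-zone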

From this point on, it is assumed that DNS has been properly configured and that changes have propagated. One can test whether changes have propagated by using dig:

$ dig @8.8.8.8 vault.example.com A

; <<>> DiG 9.8.3-P1 <<>> @8.8.8.8 vault.example.com A
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43874
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;vault.example.com.		IN	A

;; ANSWER SECTION:
vault.example.com.	299	IN	A	(2)

;; Query time: 61 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Mon Sep 18 13:13:49 2017
;; MSG SIZE  rcvd: 53
  1. Looks-up A records for vault.example.com at Google Public DNS (8.8.8.8).

  2. This must match the global IP address created above.

It is highly recommended to wait for changes to propagate before proceeding.

Before running the following commands, one should update the vault/vault-api-ingress.yaml file with the actual domain name used to expose Vault.

Once the vault IP address is created, one must create the service and ingress resources:

$ kubectl create -f vault/vault-api-service.yaml
service "vault" created
$ kubectl create -f vault/vault-api-ingress.yaml
ingress "vault" created

The above will create a global external load-balancer pointing to the Vault deployment.
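Provisioning the load-balancer can take several minutes. One can follow its progress and confirm that the assigned address matches the reserved vault IP with:

$ kubectl get ingress vault --namespace vault
$ kubectl describe ingress vault --namespace vault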

In order to secure external access to Vault one must now configure HTTPS. The easiest and cheapest way to obtain a trusted TLS certificate is to use Let’s Encrypt, and the easiest way to automate the process of obtaining and renewing certificates from Let’s Encrypt is to use kube-lego:

Before running the following commands one should update the kube-lego/kube-lego.yaml file with one’s information, i.e. email.

$ kubectl create -f ./kube-lego/kube-lego-bundle.yaml
namespace "kube-lego" created
configmap "kube-lego" created
deployment "kube-lego" created

As soon as it starts, kube-lego will start monitoring ingress resources and requesting certificates from Let’s Encrypt. One can check that the deployment succeeded by running the following:

$ KUBE_LEGO_POD_NAME=$(kubectl get --namespace kube-lego pod \
  | grep kube-lego \
  | awk 'NR==1' \
  | awk '{print $1}')
$ kubectl logs --namespace kube-lego "${KUBE_LEGO_POD_NAME}"
time="2017-11-25T13:29:38Z" level=info msg="kube-lego 0.1.5-a9592932 starting" context=kubelego
time="2017-11-25T13:29:38Z" level=info msg="connecting to kubernetes api: https://10.15.240.1:443" context=kubelego
time="2017-11-25T13:29:38Z" level=info msg="successfully connected to kubernetes api v1.8.3-gke.0" context=kubelego
time="2017-11-25T13:29:38Z" level=info msg="server listening on http://:8080/" context=acme
(...)

Let’s Encrypt must be able to reach TCP port 80 on the domains for which certificates are requested, so one must use the

kubernetes.io/ingress.allow-http: "true"

annotation in vault/vault-api-ingress.yaml. Please note that it is safe to set the abovementioned annotation, since the NGINX instance that is deployed alongside Vault makes sure that Vault only communicates over HTTPS. Any request to Vault via plain HTTP will be rejected.

If everything goes well, after a short while one will be able to access https://vault.example.com/v1/sys/health securely. On the other hand, one will not be able to access http://vault.example.com/v1/sys/health.
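A quick way to verify this from one’s workstation is to query Vault’s health endpoint over HTTPS, replacing vault.example.com with the real domain; per Vault’s documentation, this endpoint returns HTTP 200 for the initialized, unsealed, active instance and 429 for an unsealed standby:

$ curl -sS -o /dev/null -w '%{http_code}\n' https://vault.example.com/v1/sys/health
$ curl -sS https://vault.example.com/v1/sys/health | jq .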
