
GoogleCloudPlatform / gke-vault-demo

License: Apache-2.0
This demo builds two GKE Clusters and guides you through using secrets in Vault, using Kubernetes authentication from within a pod to login to Vault, and fetching short-lived Google Service Account credentials on-demand from Vault within a pod.

Programming Languages

  • Shell
  • HCL
  • Python
  • Makefile

Projects that are alternatives of or similar to gke-vault-demo

gke-ip-address-management
An application to help with IP Address Management (IPAM) for Google Kubernetes Engine (GKE) clusters. Easily allows the calculation of the subnets required to spin up GKE clusters in VPC-native mode. See it at: https://googlecloudplatform.github.io/gke-ip-address-management/
Stars: ✭ 45 (-28.57%)
Mutual labels:  gcp, gke, kubernetes-engine, gke-helmsman
gke-logging-sinks-demo
This project describes the steps required to deploy a sample application to Kubernetes Engine that forwards log events to Stackdriver Logging. As a part of the exercise, you will create a Cloud Storage bucket and a BigQuery dataset for exporting log data.
Stars: ✭ 45 (-28.57%)
Mutual labels:  gke, kubernetes-engine, gke-helmsman
gke-istio-telemetry-demo
This project demonstrates how to use an Istio service mesh in a single Kubernetes Engine cluster alongside Prometheus, Jaeger, and Grafana, to monitor cluster and workload performance metrics. You will first deploy the Istio control plane, data plane, and additional visibility tools using the provided scripts, then explore the collected metrics …
Stars: ✭ 55 (-12.7%)
Mutual labels:  gke, kubernetes-engine, gke-helmsman
gke-anthos-holistic-demo
This repository guides you through deploying a private GKE cluster and provides a base platform for hands-on exploration of several GKE related topics which leverage or integrate with that infrastructure. After completing the exercises in all topic areas, you will have a deeper understanding of several core components of GKE and GCP as configure…
Stars: ✭ 55 (-12.7%)
Mutual labels:  gcp, gke, gke-helmsman
gke-datadog-demo
This project demonstrates how a third party solution, like Datadog, can be used to monitor a Kubernetes Engine cluster and its workloads. Using the provided manifest, you will install Datadog and a simple nginx workload into your cluster. The Datadog agents will be configured to monitor the nginx workload, and ship metrics to your own Datadog ac…
Stars: ✭ 21 (-66.67%)
Mutual labels:  gke, kubernetes-engine, gke-helmsman
gke-istio-gce-demo
In this project, you will leverage Kubernetes Engine and Google Compute Engine to explore how Istio can manage services that reside outside of the Kubernetes Engine environment. You will deploy a typical Istio service mesh in Kubernetes Engine, then configure an externally deployed microservice to join the mesh.
Stars: ✭ 53 (-15.87%)
Mutual labels:  gke, kubernetes-engine, gke-helmsman
gke-rbac-demo
This project covers two use cases for RBAC within a Kubernetes Engine cluster. First, assigning different permissions to user personas. Second, granting limited API access to an application running within your cluster. Since RBAC's flexibility can occasionally result in complex rules, you will also perform common steps for troubleshooting RBAC a…
Stars: ✭ 138 (+119.05%)
Mutual labels:  gke, kubernetes-engine, gke-helmsman
gke-managed-certificates-demo
GKE ingress with GCP managed certificates
Stars: ✭ 21 (-66.67%)
Mutual labels:  gcp, gke, gke-helmsman
gtoken
Securely access AWS services from GKE cluster
Stars: ✭ 43 (-31.75%)
Mutual labels:  gcp, gke
vault-terraform-demo
Deploy HashiCorp Vault with Terraform in GKE.
Stars: ✭ 47 (-25.4%)
Mutual labels:  vault, gke
secrets cli
CLI for storing and reading your secrets via vault
Stars: ✭ 24 (-61.9%)
Mutual labels:  vault, hashicorp-vault
google-managed-certs-gke
DEPRECATED: How to use Google Managed SSL Certificates on GKE
Stars: ✭ 16 (-74.6%)
Mutual labels:  gcp, gke
kubernetes-vault
Run Hashicorp Vault on top of Kubernetes (GKE). Includes instructions for automated backups (GCS) and day-to-day usage.
Stars: ✭ 15 (-76.19%)
Mutual labels:  vault, gke
vault-puppet
Using @hashicorp Vault with Puppet
Stars: ✭ 36 (-42.86%)
Mutual labels:  vault, hashicorp-vault
vault-consul-swarm
Deploy Vault and Consul with Docker Swarm
Stars: ✭ 20 (-68.25%)
Mutual labels:  vault, hashicorp-vault
vault-demo
Walkthroughs and scripts for my @hashicorp Vault talks
Stars: ✭ 67 (+6.35%)
Mutual labels:  vault, hashicorp-vault
teamcity-hashicorp-vault-plugin
TeamCity plugin to support HashiCorp Vault
Stars: ✭ 23 (-63.49%)
Mutual labels:  vault, hashicorp-vault
gke-enterprise-mt
This repository hosts the terraform module that helps setup a GKE cluster and environment based on the Enterprise Multi-Tenancy Best Practices Guide.
Stars: ✭ 20 (-68.25%)
Mutual labels:  gcp, gke-helmsman
inspec-gke-cis-benchmark
GKE CIS 1.1.0 Benchmark InSpec Profile
Stars: ✭ 27 (-57.14%)
Mutual labels:  gcp, gke
breakglass
A command line tool to provide login credentials from Hashicorp Vault
Stars: ✭ 33 (-47.62%)
Mutual labels:  vault, hashicorp-vault

Vault on GKE

Open in Cloud Shell

Sign up for a free Google Cloud account

Table of Contents

  • Introduction
  • Architecture
  • Prerequisites
  • Deployment
  • Configure Static Key-Value Secrets in Vault
  • Configure Kubernetes Pod Authentication to Vault
  • Manually Retrieve Secrets from a Pod
  • Configure an Auto-Init Example Application
  • Configure Dynamic GCP Service Account Credentials
  • Validation
  • Teardown
  • Troubleshooting
  • Relevant Material

Introduction

HashiCorp Vault secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets. In addition, Vault offers unique capabilities for centrally managing secrets used by application pods inside a Google Kubernetes Engine cluster. For example, Vault supports authenticating application pods via their Kubernetes Service Accounts, audit logging of clients accessing and using secrets, automatic credential expiration, credential rotation, and more.

Many users new to Kubernetes leverage the built-in secrets object to store sensitive data used by their application pods. However, storing secret data in YAML files checked into source control is not a recommended approach, for several security reasons. The secret data is statically defined, difficult to change, difficult to control access to, and difficult to keep off developer filesystems and CI/CD systems. As a best practice, secrets should not be kept alongside the application in the same YAML manifests. They should be stored in a central secrets management system such as Vault and fetched at runtime only by the application or process that needs them. Should those secrets ever become compromised, the process of revoking, auditing, and rotating them is simple since they are centrally controlled and managed with Vault.

Building and running a highly available Vault cluster on a dedicated GKE cluster is outside the scope of this demo, so this codebase leverages Seth Vargo's Vault-on-GKE repository as a Terraform module. Seth's repository stands up a separate, highly available GKE cluster running the Vault cluster components, with Google Cloud Storage as a highly durable secrets storage backend.

This demo deploys two private Kubernetes Engine Clusters into separate GCP projects. One cluster is dedicated to running Vault and is built using Seth Vargo's Vault-on-GKE Terraform repository. The second cluster holds the applications that will fetch and use secrets from the Vault cluster. The walkthrough covers creating and storing secrets in Vault, using Kubernetes authentication from within a pod to login to Vault, and fetching short-lived Google Service Account credentials on-demand from Vault within a pod. These examples demonstrate the most common usage patterns of Vault from pods within another Kubernetes cluster.

Architecture

The demonstration code will deploy a dedicated project (pictured left) to house the Vault cluster in its own GKE Cluster and expose the TLS-protected Vault endpoint URL behind a Regional Load Balancer. It will also create a separate GKE Cluster (pictured right) to hold the sample applications that will interact with the Vault endpoint to retrieve secrets in several ways.


Important Notes:

This demo codebase is NOT production-ready in the default state.

  • The Vault URL is exposed via a public load balancer, which is not typically suitable for production environments. Refer to Vault on GKE for more information on production-hardening this Vault cluster.

  • The GKE Clusters are configured as private clusters, which removes the public IP from GKE worker nodes. However, the Terraform master_authorized_networks_config setting is configured by default with the CIDR block 0.0.0.0/0, which allows any IP to reach the GKE API servers. Production configurations should set specific IPs/subnets to restrict access to the API servers to only approved source locations. To implement this hardening measure, modify the kubernetes_master_authorized_networks list variable in scripts/generate-tfvars.sh before proceeding, as sketched below. Be sure that the subnets include the IP address your workstation is originating from, or the provisioning steps will fail.
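
A minimal sketch of that change, assuming the variable accepts a list of display_name/cidr_block objects; check scripts/generate-tfvars.sh for the authoritative format, since both the object shape and the IP lookup below are illustrative:

# Discover your workstation's public IP so it can replace 0.0.0.0/0
MY_IP="$(curl -s https://ifconfig.me)"
# Print an HCL value to paste into scripts/generate-tfvars.sh in place of
# the default 0.0.0.0/0 entry (object shape is an assumption)
cat <<EOF
kubernetes_master_authorized_networks = [
  {
    display_name = "workstation"
    cidr_block   = "${MY_IP}/32"
  },
]
EOF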

Prerequisites

The steps described in this document require the installation of several tools and the proper configuration of authentication to allow them to access your GCP resources.

Cloud Project

You'll need access to a Google Cloud Project with billing enabled. See Creating and Managing Projects for creating a new project. To make cleanup easier, it's recommended to create a new project.

Install Cloud SDK

If you are not running on Google Cloud Shell, you will need to install the Google Cloud SDK. The Google Cloud SDK is used to interact with your GCP resources. Installation instructions for multiple platforms are available online.

Install Kubectl

If you are not running on Google Cloud Shell, you will need to install kubectl. The kubectl CLI is used to interact with both Kubernetes Engine and Kubernetes in general. Installation instructions for multiple platforms are available online.

Install Terraform

Terraform is used to automate the manipulation of cloud infrastructure. Its installation instructions are also available online.

Install Vault CLI

The Vault CLI binary is used to connect to the Vault cluster to set configuration and retrieve secrets. Follow the installation instructions to install the binary for your platform.

Configure Authentication

The Terraform configuration will execute against your GCP environment and create the Kubernetes Engine clusters and supporting resources. The configuration will use your personal account to build out these resources. To set up the default account the configuration will use, run the following command to select the appropriate account:

gcloud auth application-default login

Deployment

Create the clusters

The infrastructure required by this project can be deployed by executing:

make create

This will:

  1. Enable any APIs we need and verify our prerequisites are met.
  2. Read your project & zone configuration to generate the following config file:
    • ./terraform/terraform.tfvars for Terraform variables
  3. Run terraform init to prepare Terraform to create the infrastructure.
  4. Run terraform apply to create the GKE Clusters and supporting resources.

If no errors are displayed, then after a few minutes you should see your Kubernetes Engine clusters in the GCP Console. Note that the dynamically generated Vault Cluster project name will be displayed in the Terraform output.

Configure Static Key-Value Secrets in Vault

The simplest example of storing and retrieving a secret with Vault is by using the "Key Value" storage method. Abbreviated kv, this is a static secret storage mechanism that requires only a small amount of configuration to use.

To begin, set the VAULT_ADDR, VAULT_TOKEN, and VAULT_CAPATH environment variables using information generated during the make create step:

export VAULT_ADDR="https://$(terraform output -state=terraform/terraform.tfstate vault-address)"
export VAULT_TOKEN="$(terraform output -state=terraform/terraform.tfstate vault-root-token)"
export VAULT_CAPATH="$(pwd)/tls/ca.pem"

With the above configured, your terminal should now be able to authenticate to Vault with the "root" token. Validate by running vault status:

vault status

Key                      Value
---                      -----
Recovery Seal Type       shamir
Sealed                   false
Total Recovery Shares    1
Threshold                1
Version                  1.2.0
Cluster Name             vault-cluster-be7094aa
Cluster ID               ac0d2d33-61db-a06a-77d0-eb9c1e87b236
HA Enabled               true
HA Cluster               https://10.24.1.3:8201
HA Mode                  active

Enable the kv store inside Vault:

vault secrets enable -path=secret/ kv

Create a sample secret in Vault inside the custom kv path:

vault kv put secret/myapp/config \
  ttl="30s" \
  apikey='MYAPIKEYHERE'

To validate it was stored correctly, retrieve the secret:

vault kv get secret/myapp/config

===== Data =====
Key       Value
---       -----
apikey    MYAPIKEYHERE
ttl       30s

You are now ready to proceed with fetching this secret from within Kubernetes pods in the next section.

Configure Kubernetes Pod Authentication to Vault

In this next step, several tasks have been combined into a script to ease the configuration process. The following high-level tasks are performed by scripts/auth-to-vault.sh (a condensed sketch of the Vault-side commands follows the list):

  • Configure a dedicated Service Account for Vault to use to communicate with this GKE API server.
  • Configure RBAC permissions for the dedicated service account to allow it to validate Service Account tokens sent by calling applications.
  • Extract several key items from the dedicated Service Account object.
  • Configure Vault's Kubernetes authentication configuration using those key items.
  • Define a policy for granting permissions to the kv storage location.
  • Define a role mapping that grants the default service account in the default namespace of a Kubernetes cluster the ability to use the policy which grants access to the kv storage location.
  • Define a configmap and secret in the default namespace that holds the Vault URL endpoint information and certificate authority information. This can be mounted into pods that need to know how to reach Vault.
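
For orientation, here is a hedged sketch of the Vault commands such a script typically runs. The policy and role names match the script's output below, but the exact flags, the TTL, and the environment variables holding the extracted Service Account items (K8S_API_SERVER, VAULT_SA_JWT, VAULT_SA_CA_CRT) are assumptions:

# Enable the Kubernetes auth method (mirrors the script's output below)
vault auth enable kubernetes
# Point Vault at the application cluster's API server, using the dedicated
# Service Account's JWT and CA certificate (hypothetical variable names)
vault write auth/kubernetes/config \
  kubernetes_host="https://${K8S_API_SERVER}" \
  token_reviewer_jwt="${VAULT_SA_JWT}" \
  kubernetes_ca_cert="${VAULT_SA_CA_CRT}"
# Policy granting read/write on the kv location used in this demo
vault policy write myapp-kv-rw - <<'EOF'
path "secret/myapp/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
EOF
# Map the default service account in the default namespace to that policy
vault write auth/kubernetes/role/myapp-role \
  bound_service_account_names=default \
  bound_service_account_namespaces=default \
  policies=myapp-kv-rw \
  ttl=15m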

Run the scripts/auth-to-vault.sh script. Note that the Cluster Name, Cluster ID, and HA Cluster values will differ for your environment:

./scripts/auth-to-vault.sh

Key                      Value
---                      -----
Recovery Seal Type       shamir
Sealed                   false
Total Recovery Shares    1
Threshold                1
Version                  1.2.0
Cluster Name             vault-cluster-be7094aa
Cluster ID               ac0d2d33-61db-a06a-77d0-eb9c1e87b236
HA Enabled               true
HA Cluster               https://10.24.1.3:8201
HA Mode                  active
Fetching cluster endpoint and auth data.
kubeconfig entry generated for app.
serviceaccount/vault-auth created
clusterrolebinding.rbac.authorization.k8s.io/role-tokenreview-binding created
Success! Enabled kubernetes auth method at: kubernetes/
Success! Data written to: auth/kubernetes/config
Success! Uploaded policy: myapp-kv-rw
Success! Data written to: auth/kubernetes/role/myapp-role
configmap/vault created
secret/vault-tls created

Manually Retrieve Secrets from a Pod

For the first exercise, you will create a pod, kubectl exec into it, and manually retrieve a secret from Vault using a few curl commands. The purpose of doing this by hand is to give you a full understanding of the mechanics for authenticating to a Vault server and fetching secret information programmatically.

Review the pod specification. Notice that the pod mounts the Vault-specific configmap and secret to assist in locating the Vault URL:

cat k8s-manifests/sample.yaml

Now, create the deployment which starts a sample pod:

kubectl apply -f k8s-manifests/sample.yaml

deployment.apps/samplepod created

Enter the newly created samplepod using kubectl exec, specifying its label:

kubectl exec -it $(kubectl get pod -l "app=samplepod" -o jsonpath="{.items[0].metadata.name}") -- bash

Now that you are inside a shell on the pod, run the following commands to simulate what an application would do to login to Vault and fetch a secret:

# Install curl and jq
apk add --no-cache curl jq
# Fetch the pod's service account token
KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Use curl to login to vault and obtain a client access token
VAULT_K8S_LOGIN=$(curl --cacert /etc/vault/tls/ca.pem -s --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "myapp-role"}' ${VAULT_ADDR}/v1/auth/kubernetes/login)
# View the login response which includes the vault client access token
echo $VAULT_K8S_LOGIN | jq
# Extract just the client access token
X_VAULT_TOKEN=$(echo $VAULT_K8S_LOGIN | jq -r '.auth.client_token')
# Use the client access token to retrieve the contents of the secret
curl --cacert /etc/vault/tls/ca.pem -s --header "X-Vault-Token: $X_VAULT_TOKEN" --request GET ${VAULT_ADDR}/v1/secret/myapp/config | jq

The last command should output the contents of the secret created earlier in the kv secret location secret/myapp/config. Congratulations! You have just retrieved a secret from Vault the "hard way".

Now, exit from the pod and delete the deployment:

exit
kubectl delete -f k8s-manifests/sample.yaml

Configure an Auto-Init Example Application

In the previous section, the exercise was to log in to Vault with curl and retrieve a secret manually. However, there are some subtle issues with using that approach in a real environment. Namely, it requires the application to explicitly understand the Vault authentication and retrieval APIs, and it has no logic for refreshing the secret locally if it changes in Vault. In this step, you'll leverage what's known as the "sidecar" pattern to add two containers to the pod that automatically handle the tasks of logging in to Vault, obtaining a client token, and continuously fetching a secret's contents into a local file. This allows the application to read secrets from a file inside the pod normally, without needing to be modified to interact with Vault directly.

Review k8s-manifests/sidecar.yaml before proceeding. Notice that the init container and the consul-template "sidecar" container are now present.

cat k8s-manifests/sidecar.yaml
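
Conceptually, the consul-template sidecar renders a template into a shared volume and rewrites the file whenever the secret changes in Vault. A hedged, standalone sketch of that behavior follows; the template and output paths are illustrative rather than the manifest's actual values, and it assumes a Vault token is already in place (the init container's job in the manifest):

# Illustrative template rendering the kv secret as a small YAML file
cat > /tmp/config.ctmpl <<'EOF'
---
apikey: {{ with secret "secret/myapp/config" }}{{ .Data.apikey }}{{ end }}
EOF
# consul-template watches the secret in Vault and rewrites the output
# file whenever its contents change
consul-template \
  -vault-addr="${VAULT_ADDR}" \
  -template="/tmp/config.ctmpl:/etc/secrets/config"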

Deploy the sidecar application:

kubectl apply -f k8s-manifests/sidecar.yaml

This command finds and execs into the sidecar deployment's pod, showing the contents of /etc/secrets/config from its local disk. If the pod is healthy and running, this file should contain the rendered contents of the Vault secret. If this output succeeds, the init and sidecar containers have performed their functions correctly.

kubectl exec -it $(kubectl get pod -l "app=kv-sidecar" -o jsonpath="{.items[0].metadata.name}") -c app -- cat /etc/secrets/config

---
apikey: MYAPIKEYHERE

To validate that the sidecar continuously retrieves the updated secret contents into the pod, make a change to the secret's contents inside Vault. Notice the number "2" added to the end of the apikey:

vault kv put secret/myapp/config \
  ttl="30s" \
  apikey='MYAPIKEYHERE2'

Success! Data written to: secret/myapp/config

After a few seconds, re-run the following command. (You may have to wait up to 10 seconds). Your command output should now be the updated secret contents:

kubectl exec -it $(kubectl get pod -l "app=kv-sidecar" -o jsonpath="{.items[0].metadata.name}") -c app -- cat /etc/secrets/config

---
apikey: MYAPIKEYHERE2

If you see the updated apikey value, the consul-template "sidecar" has successfully communicated with Vault and updated the file /etc/secrets/config on disk inside the pod automatically.

Delete the sidecar application:

kubectl delete -f k8s-manifests/sidecar.yaml

deployment.apps "kv-sidecar" deleted

Configure Dynamic GCP Service Account Credentials

Another feature Vault offers via its GCP Secrets Engine is the ability to dynamically create and automatically manage Google Cloud Platform Service Accounts and the corresponding Service Account Keys. This means you no longer have to manually generate, export, and embed service account JSON files containing static private keys and hardcoded expiration dates from the Console UI. Instead, the application can authenticate to Vault, and Vault can return a valid service account key JSON every time it's asked. These short-lived service account credentials offer convenience and security for applications that need to authenticate to GCP services such as Google Cloud Storage (GCS).

The Vault GCP Secrets Engine can provide dynamic service account credentials or OAuth2 tokens. In this example, we'll configure and use a dynamic service account credential to access a GCS bucket.
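
For orientation, a hedged sketch of the kind of configuration the script applies. The roleset name matches the script's output below, but the bucket binding and IAM role are assumptions, and PROJECT is assumed to hold your GCP project ID:

# Enable the GCP secrets engine (mirrors the script's output below)
vault secrets enable gcp
# Roleset that mints service account keys bound to the demo GCS bucket
# (bucket name and roles/storage.objectAdmin are assumptions)
vault write gcp/roleset/gcs-sa-role-set \
  project="${PROJECT}" \
  secret_type="service_account_key" \
  bindings=-<<EOF
resource "buckets/${PROJECT}-gcs" {
  roles = ["roles/storage.objectAdmin"]
}
EOF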

Run the scripts/gcp-secrets-engine.sh script to configure Vault to use GCP's Secrets Engine:

./scripts/gcp-secrets-engine.sh

Key                      Value
---                      -----
Recovery Seal Type       shamir
Sealed                   false
Total Recovery Shares    1
Threshold                1
Version                  1.2.0
Cluster Name             vault-cluster-be7094aa
Cluster ID               ac0d2d33-61db-a06a-77d0-eb9c1e87b236
HA Enabled               true
HA Cluster               https://10.24.1.3:8201
HA Mode                  standby
Active Node Address      https://35.245.173.48
Success! Enabled the gcp secrets engine at: gcp/
Success! Data written to: gcp/config
Success! Data written to: gcp/roleset/gcs-sa-role-set
Success! Uploaded policy: myapp-gcs-rw
Success! Data written to: auth/kubernetes/role/my-gcs-role

Next, create the sample application:

kubectl apply -f k8s-manifests/sample.yaml

deployment.apps/samplepod created

Obtain the current project name:

gcloud config get-value core/project

Exec a shell inside the pod:

kubectl exec -it $(kubectl get pod -l "app=samplepod" -o jsonpath="{.items[0].metadata.name}") -- bash

bash-4.4#

Install curl and jq:

apk add --no-cache curl jq

Set the environment variables. Be sure to make PROJECT equal to the output of the gcloud config get-value core/project command above:

PROJECT="YOUR_ACTUAL_PROJECT_NAME"
BUCKET_NAME="${PROJECT}-gcs"
FILENAME=helloworld.txt

As in the prior exercises, use curl to authenticate to Vault and then extract the dynamic service account credentials (.data.private_key_data) to a local file named sa.json:

# Fetch the pod's service account token
KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Use curl to login to vault and obtain a client access token
VAULT_K8S_LOGIN=$(curl --cacert /etc/vault/tls/ca.pem -s --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "my-gcs-role"}' ${VAULT_ADDR}/v1/auth/kubernetes/login)
# Extract just the client access token
X_VAULT_TOKEN=$(echo $VAULT_K8S_LOGIN | jq -r '.auth.client_token')
# Use the client access token to retrieve the contents of the service account credential
curl --cacert /etc/vault/tls/ca.pem -s --header "X-Vault-Token: $X_VAULT_TOKEN" --request GET ${VAULT_ADDR}/v1/gcp/key/gcs-sa-role-set | jq -r '.data.private_key_data' | base64 -d > sa.json

Configure the installed gcloud SDK to use sa.json for authentication:

gcloud auth activate-service-account --key-file=sa.json

Activated service account credentials for: [vaultgcs-sa-role-se-1548887675@MY_ACTUAL_PROJECT_NAME.iam.gserviceaccount.com]

Finally, create a sample file, list the empty bucket, upload the file, list the bucket with the new file, and then remove the file. These actions are granted by the roles/* block in the Vault GCP roleset.

echo "Hello world" > "${FILENAME}"
gsutil ls "gs://$BUCKET_NAME/"
gsutil cp helloworld.txt "gs://$BUCKET_NAME/helloworld.txt"
gsutil ls "gs://$BUCKET_NAME/"
gsutil rm "gs://$BUCKET_NAME/helloworld.txt"
exit
kubectl delete -f k8s-manifests/sample.yaml

If your application uses OAuth2 tokens to authenticate to Google Cloud Platform APIs instead of service account credentials, the configuration is very similar. The GCP limitation of 10 keys per service account doesn't apply to OAuth2 tokens, so they are a more scalable method if the desired GCP API accepts OAuth2 tokens for authentication.
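
A hedged sketch of that OAuth2 variant, assuming a hypothetical roleset named gcs-token-role-set with the same bucket binding as before:

# Roleset that issues OAuth2 access tokens instead of service account keys
vault write gcp/roleset/gcs-token-role-set \
  project="${PROJECT}" \
  secret_type="access_token" \
  token_scopes="https://www.googleapis.com/auth/cloud-platform" \
  bindings=-<<EOF
resource "buckets/${PROJECT}-gcs" {
  roles = ["roles/storage.objectAdmin"]
}
EOF
# Tokens are then read from the token endpoint rather than the key endpoint
vault read gcp/token/gcs-token-role-set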

Validation

Run make validate to verify that the clusters were fully deployed, a pod can authenticate to Vault, and the pod can retrieve a secret successfully.

Teardown

When you are ready to clean up the resources that were created and avoid accruing further charges, run the following command to remove all resources on GCP and any configurations that were added or updated in your local environment:

make teardown

Troubleshooting

The scripts/auth-to-vault.sh script exits with an error requiring vault to be installed.

Follow the installation instructions to install the binary for your platform.

The provisioning steps performed by Terraform in the make create step fail with kubectl connection timeout errors.

If you've modified the kubernetes_master_authorized_networks variable in scripts/generate-tfvars.sh, ensure your workstation's source IP is included in the list of allowed subnets. Run make teardown, modify scripts/generate-tfvars.sh to include the correct subnets, and re-run make create.

Relevant Material

This is not an officially supported Google product.
