terraform-google-modules / Terraform Example Foundation

Licence: apache-2.0
Example repo showing how the CFT modules can be composed to build a secure cloud foundation.

Projects that are alternatives of or similar to Terraform Example Foundation

Aws Incident Response
Stars: ✭ 167 (-12.57%)
Mutual labels:  hcl
Terraform Aws Foundation
Establish a solid Foundation on AWS with these modules for Terraform
Stars: ✭ 173 (-9.42%)
Mutual labels:  hcl
Hcl Picker
🎨 Colorpicker for data
Stars: ✭ 178 (-6.81%)
Mutual labels:  hcl
Terraform Aws Cloudtrail Cloudwatch Alarms
Terraform module for creating alarms for tracking important changes and occurrences from cloudtrail.
Stars: ✭ 170 (-10.99%)
Mutual labels:  hcl
Tfk8s
A tool for converting Kubernetes YAML manifests to Terraform HCL
Stars: ✭ 167 (-12.57%)
Mutual labels:  hcl
Stack
A set of Terraform modules for configuring production infrastructure with AWS
Stars: ✭ 2,080 (+989.01%)
Mutual labels:  hcl
Terraform Aws Openshift
Create infrastructure with Terraform and AWS, install OpenShift. Party!
Stars: ✭ 165 (-13.61%)
Mutual labels:  hcl
Terraform Aws Nomad
A Terraform Module for how to run Nomad on AWS using Terraform and Packer
Stars: ✭ 189 (-1.05%)
Mutual labels:  hcl
Heroku
GitHub Action for interacting with Heroku
Stars: ✭ 172 (-9.95%)
Mutual labels:  hcl
Tf aws bastion s3 keys
A Terraform module for creating bastion host on AWS EC2 and populate its ~/.ssh/authorized_keys with public keys from bucket
Stars: ✭ 178 (-6.81%)
Mutual labels:  hcl
Terraform Aws Components
Opinionated, self-contained Terraform root modules that each solve one, specific problem
Stars: ✭ 168 (-12.04%)
Mutual labels:  hcl
Terraform Amazon Ecs
Terraform files for deploying and running Amazon ECS (+ Private Docker Registry)
Stars: ✭ 171 (-10.47%)
Mutual labels:  hcl
Terraform Gke Kubeflow Cluster
Terraform module for creating GKE clusters to run Kubeflow
Stars: ✭ 177 (-7.33%)
Mutual labels:  hcl
C1m
Nomad, Terraform, and Packer configurations for the Million Container Challenge (C1M)
Stars: ✭ 167 (-12.57%)
Mutual labels:  hcl
Terraform Shell Resource
Run (exec) a command in shell and capture the output (stdout, stderr) and status code (exit status)
Stars: ✭ 181 (-5.24%)
Mutual labels:  hcl
Terraform Aws Autoscaling
Terraform module which creates Auto Scaling resources on AWS
Stars: ✭ 166 (-13.09%)
Mutual labels:  hcl
K8s Scw Baremetal
Kubernetes installer for Scaleway bare-metal AMD64 and ARMv7
Stars: ✭ 176 (-7.85%)
Mutual labels:  hcl
Terraform Aws Lambda
Terraform module, which takes care of a lot of AWS Lambda/serverless tasks (build dependencies, packages, updates, deployments) in countless combinations
Stars: ✭ 190 (-0.52%)
Mutual labels:  hcl
Vault Infra
Terraform to create Vault infrastructure
Stars: ✭ 186 (-2.62%)
Mutual labels:  hcl
Nomad Guides
Example usage of HashiCorp Nomad
Stars: ✭ 178 (-6.81%)
Mutual labels:  hcl

terraform-example-foundation

This is an example repo showing how the CFT Terraform modules can be composed to build a secure GCP foundation, following the Google Cloud security foundations guide. The supplied structure and code are intended as a starting point for building your own foundation, with pragmatic defaults you can customize to meet your own requirements. Currently, step 0 is executed manually. From step 1 onwards, the Terraform code is deployed using either Google Cloud Build (the default) or Jenkins. Cloud Build was chosen as the default so that teams can get started quickly without needing to deploy a CI/CD tool, although the code can easily be executed by your preferred tool.

Overview

This repo contains several distinct Terraform projects, each within its own directory, that must be applied separately but in sequence. Each of these Terraform projects is layered on top of the previous one, running in the following order.

0. bootstrap

This stage executes the CFT Bootstrap module, which bootstraps an existing GCP organization, creating all the required GCP resources and permissions to start using the Cloud Foundation Toolkit (CFT). For CI/CD pipelines, you can use either Cloud Build (the default) or Jenkins. If you want to use Jenkins instead of Cloud Build, see README-Jenkins for how to use the included Jenkins sub-module.

The bootstrap step includes:

  • The cft-seed project, which contains:
    • Terraform state bucket
    • Custom Service Account used by Terraform to create new resources in GCP
  • The cft-cloudbuild project (prj-cicd if using Jenkins), which contains:
    • A CI/CD pipeline implemented with either Cloud Build or Jenkins
    • If using Cloud Build:
      • Cloud Source Repository
    • If using Jenkins:
      • A GCE Instance configured as a Jenkins Agent
      • Custom Service Account to run Jenkins Agents GCE instances
      • VPN connection with on-prem (or wherever your Jenkins Master is located)
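
As a rough sketch, invoking the CFT Bootstrap module directly might look like the following. The version pin, group names, and region are illustrative assumptions, not taken from this repo's actual 0-bootstrap configuration:

```hcl
# Illustrative sketch of calling the CFT Bootstrap module.
# All values below are placeholders, not this repo's real configuration.
module "seed_bootstrap" {
  source  = "terraform-google-modules/bootstrap/google"
  version = "~> 2.0"                                  # assumed version pin

  org_id               = "123456789012"               # placeholder org ID
  billing_account      = "000000-000000-000000"       # placeholder billing account
  group_org_admins     = "gcp-organization-admins@example.com"
  group_billing_admins = "gcp-billing-admins@example.com"
  default_region       = "us-central1"
}
```

The module creates the seed project, state bucket, and Terraform service account described above.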

It is a best practice to separate concerns by having two projects here: one for the CFT resources and one for the CI/CD tool. The cft-seed project stores Terraform state and holds the Service Account able to create and modify infrastructure. The deployment of that infrastructure, in turn, is coordinated by a CI/CD tool of your choice housed in a second project (named cft-cloudbuild if using Google Cloud Build and prj-cicd if using Jenkins).

To further separate the concerns at the IAM level as well, the service account of the CI/CD tool is given different permissions than the Terraform account. The CI/CD tool's account (@cloudbuild.gserviceaccount.com if using Cloud Build and [email protected] if using Jenkins) is only granted access to generate tokens for the Terraform custom service account. In this configuration, the baseline permissions of the CI/CD tool are limited, while the Terraform custom Service Account holds the IAM permissions required to build the foundation.
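
In Terraform, this kind of token-generation grant is typically expressed with the roles/iam.serviceAccountTokenCreator role. A hedged sketch, with placeholder project and account names:

```hcl
# Sketch only: allow the Cloud Build service account to generate tokens
# for (impersonate) the Terraform custom service account.
# The project ID and account emails are placeholders.
resource "google_service_account_iam_member" "cicd_impersonate_terraform" {
  service_account_id = "projects/prj-cft-seed/serviceAccounts/terraform@prj-cft-seed.iam.gserviceaccount.com"
  role               = "roles/iam.serviceAccountTokenCreator"
  member             = "serviceAccount:000000000000@cloudbuild.gserviceaccount.com"
}
```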

After executing this step, you will have the following structure:

example-organization/
└── fldr-bootstrap
    ├── cft-cloudbuild (prj-cicd if using Jenkins)
    └── cft-seed

When this step uses the Cloud Build submodule, it sets up Cloud Build and Cloud Source Repositories for each of the stages below. Triggers are configured to run a terraform plan for any non-environment branch and terraform apply when changes are merged to an environment branch (development, non-production & production). Usage instructions are available in the 0-bootstrap README.
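
A trigger of that shape could be expressed roughly as follows. The repo name, branch regex, and build config filename are assumptions for illustration, not the submodule's exact resources:

```hcl
# Illustrative: run the apply pipeline when an environment branch is updated.
# Repo name, project ID, and filename are placeholders.
resource "google_cloudbuild_trigger" "tf_apply" {
  project = "prj-cft-cloudbuild"                     # placeholder CI/CD project

  trigger_template {
    repo_name   = "gcp-environments"                 # placeholder CSR repo
    branch_name = "^(development|non-production|production)$"
  }

  filename = "cloudbuild-tf-apply.yaml"              # assumed build config file
}
```

A matching trigger with an inverted branch filter would run the plan-only pipeline for feature branches.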

1. org

The purpose of this stage is to set up the common folder used to house projects which contain shared resources such as DNS Hub, Interconnect, SCC Notification, org-level secrets and org-level logging. This will create the following folder & project structure:

example-organization
└── fldr-common
    ├── prj-c-logging
    ├── prj-c-billing-logs
    ├── prj-c-dns-hub
    ├── prj-c-interconnect
    ├── prj-c-scc
    └── prj-c-secrets

Logs

Among the six projects created under the common folder, two (prj-c-logging and prj-c-billing-logs) are used for logging: the former for organization-wide audit logs and the latter for billing logs. In both cases the logs are collected into BigQuery datasets, which can then be used for general querying, dashboarding & reporting. Logs are also exported to Pub/Sub and a GCS bucket. The various audit log types captured in BigQuery are retained for 30 days.
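
The audit-log export can be sketched as an organization-level log sink routing to BigQuery. The sink name, dataset, and filter below are placeholders, not the repo's actual values:

```hcl
# Sketch: route organization-wide audit logs into a BigQuery dataset.
# Names, project, dataset, and filter are placeholders.
resource "google_logging_organization_sink" "audit_to_bq" {
  name             = "sk-c-logging-bq"               # placeholder sink name
  org_id           = "123456789012"                  # placeholder org ID
  include_children = true                            # capture all child projects
  destination      = "bigquery.googleapis.com/projects/prj-c-logging/datasets/audit_logs"
  filter           = "logName:\"/logs/cloudaudit.googleapis.com\""
}
```

The sink's writer identity would then need BigQuery Data Editor on the destination dataset.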

For billing data, a BigQuery dataset is created with permissions attached; however, you will need to configure a billing export manually, as there is no easy way to automate this at the moment.

DNS Hub

Under the common folder, one project is created. This project will host the DNS Hub for the organization.

Interconnect

Under the common folder, one project is created. This project will host the Interconnect infrastructure for the organization.

SCC Notification

Under the common folder, one project is created. This project will host the SCC Notification resources at the organization level. It contains a Pub/Sub topic and subscription, and an SCC Notification configured to send all new Findings to that topic. You can adjust the filter when deploying this step.
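
A minimal sketch of that wiring, with placeholder names and a filter you would adjust to your needs:

```hcl
# Sketch: a Pub/Sub topic plus an SCC notification that streams all
# active findings to it. Project, names, and filter are placeholders.
resource "google_pubsub_topic" "scc" {
  project = "prj-c-scc"                      # placeholder project ID
  name    = "top-scc-notifications"
}

resource "google_scc_notification_config" "all_active_findings" {
  config_id    = "scc-notify"                # placeholder config ID
  organization = "123456789012"              # placeholder org ID
  description  = "Send all active findings to Pub/Sub"
  pubsub_topic = google_pubsub_topic.scc.id

  streaming_config {
    filter = "state = \"ACTIVE\""            # adjust this filter at deploy time
  }
}
```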

Secrets

Under the common folder, one project is created. This project is allocated for GCP Secret Manager for secrets shared by the organization.
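
Shared secrets in that project would be plain Secret Manager resources. A sketch, assuming a recent google provider (5.x replication syntax) and a placeholder project ID:

```hcl
# Sketch: an org-shared secret with automatic replication.
# Project and secret IDs are placeholders.
resource "google_secret_manager_secret" "shared" {
  project   = "prj-c-secrets"                # placeholder project ID
  secret_id = "example-shared-secret"

  replication {
    auto {}                                  # provider 5.x syntax; older
  }                                          # providers use automatic = true
}
```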

Usage instructions are available for the org step in the README.

2. environments

The purpose of this stage is to set up the environment folders used to house projects containing monitoring, secrets and networking resources. This will create the following folder & project structure:

example-organization
├── fldr-development
│   ├── prj-d-monitoring
│   ├── prj-d-secrets
│   ├── prj-d-shared-base
│   └── prj-d-shared-restricted
├── fldr-non-production
│   ├── prj-n-monitoring
│   ├── prj-n-secrets
│   ├── prj-n-shared-base
│   └── prj-n-shared-restricted
└── fldr-production
    ├── prj-p-monitoring
    ├── prj-p-secrets
    ├── prj-p-shared-base
    └── prj-p-shared-restricted

Monitoring

Under the environment folder, a project is created per environment (development, non-production & production), which is intended to be used as a Cloud Monitoring workspace for all projects in that environment. Please note that creating the workspace and linking projects can currently only be completed through the Cloud Console. If you have strong IAM requirements for these monitoring workspaces, it is worth considering creating these at a more granular level, such as per business unit or per application.

Networking

Under each environment folder, two projects are created per environment (development, non-production & production): one for the base network and one for the restricted network. Each is intended to be used as a Shared VPC host project for all projects in that environment. This stage only creates the projects and enables the correct APIs; the following networks stage creates the actual Shared VPC networks.
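
The Shared VPC host/service relationship set up across these stages can be sketched with the two dedicated Terraform resources. Project IDs below are placeholders taken from the example naming scheme:

```hcl
# Sketch: enable Shared VPC on the environment host project and attach
# a service project to it. Project IDs are placeholders.
resource "google_compute_shared_vpc_host_project" "host" {
  project = "prj-d-shared-base"
}

resource "google_compute_shared_vpc_service_project" "service" {
  host_project    = google_compute_shared_vpc_host_project.host.project
  service_project = "prj-bu1-d-sample-base"   # attached in the projects step
}
```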

Secrets

Under the environment folder, one project is created. This is allocated for GCP Secret Manager for secrets shared by the environment.

Usage instructions are available for the environments step in the README.

3. networks

This step focuses on creating a Shared VPC per environment (development, non-production & production) in a standard configuration with a reasonable security baseline. Currently this includes:

  • Optional - Example subnets for development, non-production & production inclusive of secondary ranges for those that want to use GKE.
  • Optional - Default firewall rules created to allow remote access to VMs through IAP, without needing public IPs.
    • allow-iap-ssh and allow-iap-rdp network tags respectively
  • Optional - Default firewall rule created to allow for load balancing using allow-lb tag.
  • Private service networking configured to enable workload-dependent resources like Cloud SQL.
  • Base Shared VPC with private.googleapis.com configured for base access to googleapis.com and gcr.io. Route added for VIP so no internet access is required to access APIs.
  • Restricted Shared VPC with restricted.googleapis.com configured for restricted access to googleapis.com and gcr.io. Route added for VIP so no internet access is required to access APIs.
  • Default routes to the internet removed, with a tag-based route (egress-internet) required on VMs in order to reach the internet.
  • Optional - Cloud NAT configured for all subnets with logging and static outbound IPs.
  • Default Cloud DNS policy applied, with DNS logging and inbound query forwarding turned on.
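
Two of the items above can be sketched concretely: the IAP SSH ingress rule and the tag-gated default route. Network and resource names are placeholders; 35.235.240.0/20 is Google's published IAP TCP forwarding range:

```hcl
# Sketch: allow SSH only from IAP's forwarding range, to tagged VMs.
# Network and rule names are placeholders.
resource "google_compute_firewall" "allow_iap_ssh" {
  name          = "fw-allow-iap-ssh"
  network       = "vpc-d-shared-base"          # placeholder network
  direction     = "INGRESS"
  source_ranges = ["35.235.240.0/20"]          # IAP TCP forwarding range

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  target_tags = ["allow-iap-ssh"]
}

# Sketch: default route to the internet, applied only to VMs carrying
# the egress-internet network tag.
resource "google_compute_route" "egress_internet" {
  name             = "rt-egress-internet"
  network          = "vpc-d-shared-base"       # placeholder network
  dest_range       = "0.0.0.0/0"
  next_hop_gateway = "default-internet-gateway"
  tags             = ["egress-internet"]
}
```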

Usage instructions are available for the networks step in the README.

4. projects

This step is focused on creating service projects in a standard configuration, attached to the Shared VPC created in the previous step. Running this code as-is should generate a structure as shown below:

example-organization/
├── fldr-development
│   ├── prj-bu1-d-sample-floating
│   ├── prj-bu1-d-sample-base
│   ├── prj-bu1-d-sample-restrict
│   ├── prj-bu2-d-sample-floating
│   ├── prj-bu2-d-sample-base
│   └── prj-bu2-d-sample-restrict
├── fldr-non-production
│   ├── prj-bu1-n-sample-floating
│   ├── prj-bu1-n-sample-base
│   ├── prj-bu1-n-sample-restrict
│   ├── prj-bu2-n-sample-floating
│   ├── prj-bu2-n-sample-base
│   └── prj-bu2-n-sample-restrict
└── fldr-production
    ├── prj-bu1-p-sample-floating
    ├── prj-bu1-p-sample-base
    ├── prj-bu1-p-sample-restrict
    ├── prj-bu2-p-sample-floating
    ├── prj-bu2-p-sample-base
    └── prj-bu2-p-sample-restrict

The code in this step includes two options for creating projects: the standard projects module, which creates a project per environment, and a second option that creates a standalone project for one environment. If relevant for your use case, there are also two optional submodules which can be used to create a subnet per project and a dedicated private DNS zone per project.
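
Service projects of this kind are usually built on the CFT project-factory module. A hedged sketch with a subset of its inputs; the version pin, folder ID, and project names are placeholders:

```hcl
# Sketch: create a service project attached to the environment's Shared VPC
# host via the CFT project-factory module. Values are placeholders.
module "sample_base_project" {
  source  = "terraform-google-modules/project-factory/google"
  version = "~> 14.0"                               # assumed version pin

  name                 = "prj-bu1-d-sample-base"
  org_id               = "123456789012"             # placeholder org ID
  billing_account      = "000000-000000-000000"     # placeholder billing account
  folder_id            = "folders/000000000000"     # placeholder dev folder
  svpc_host_project_id = "prj-d-shared-base"        # Shared VPC host project
}
```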

Usage instructions are available for the projects step in the README.

Final View

Once all the steps above have been executed, your GCP organization should represent the structure shown below, with projects being the lowest nodes in the tree.

example-organization
├── fldr-common
│   ├── prj-c-logging
│   ├── prj-c-billing-logs
│   ├── prj-c-dns-hub
│   ├── prj-c-interconnect
│   ├── prj-c-scc
│   └── prj-c-secrets
├── fldr-development
│   ├── prj-bu1-d-sample-floating
│   ├── prj-bu1-d-sample-base
│   ├── prj-bu1-d-sample-restrict
│   ├── prj-bu2-d-sample-floating
│   ├── prj-bu2-d-sample-base
│   ├── prj-bu2-d-sample-restrict
│   ├── prj-d-monitoring
│   ├── prj-d-secrets
│   ├── prj-d-shared-base
│   └── prj-d-shared-restricted
├── fldr-non-production
│   ├── prj-bu1-n-sample-floating
│   ├── prj-bu1-n-sample-base
│   ├── prj-bu1-n-sample-restrict
│   ├── prj-bu2-n-sample-floating
│   ├── prj-bu2-n-sample-base
│   ├── prj-bu2-n-sample-restrict
│   ├── prj-n-monitoring
│   ├── prj-n-secrets
│   ├── prj-n-shared-base
│   └── prj-n-shared-restricted
├── fldr-production
│   ├── prj-bu1-p-sample-floating
│   ├── prj-bu1-p-sample-base
│   ├── prj-bu1-p-sample-restrict
│   ├── prj-bu2-p-sample-floating
│   ├── prj-bu2-p-sample-base
│   ├── prj-bu2-p-sample-restrict
│   ├── prj-p-monitoring
│   ├── prj-p-secrets
│   ├── prj-p-shared-base
│   └── prj-p-shared-restricted
└── fldr-bootstrap
    ├── cft-cloudbuild (prj-cicd if using Jenkins)
    └── cft-seed

Branching strategy

There are three main named branches: development, non-production and production, reflecting the corresponding environments. These branches should be protected. When the CI/CD pipeline (Jenkins or Cloud Build) runs on a particular named branch (for instance, development), only the corresponding environment (development) is applied. The exception is the shared environment, which is only applied when triggered on the production branch, because changes to the shared environment may affect resources in other environments and can have adverse effects if not validated correctly.

Development happens on feature/bugfix branches (which can be named feature/new-foo, bugfix/fix-bar, etc.) and when complete, a pull request (PR) or merge request (MR) can be opened targeting the development branch. This will trigger the CI pipeline to perform a plan and validate against all environments (development, non-production, shared and production). Once code review is complete and changes are validated, this branch can be merged into development. This will trigger a CI pipeline that applies the latest changes in the development branch on the development environment.

Once validated in development, changes can be promoted to non-production by opening a PR/MR targeting the non-production branch and merging them. Similarly changes can be promoted from non-production to production.

Optional Variables

Some of the variables used to deploy the steps have default values; check these before deployment to ensure they match your requirements. For more information, each Terraform module's README includes tables of inputs and outputs with detailed descriptions of its variables. Look for variables marked as not required in the Inputs section of these READMEs.

Errata Summary

Refer to the Errata Summary for an overview of the delta between the example foundation repository and the Google Cloud security foundations guide.

Contributing

Refer to the contribution guidelines for information on contributing to this module.
