
gruntwork-io / Terragrunt Infrastructure Live Example

A repo used to show example file/folder structures you can use with Terragrunt and Terraform

Projects that are alternatives to or similar to Terragrunt Infrastructure Live Example

Terraform Aws Components
Opinionated, self-contained Terraform root modules that each solve one, specific problem
Stars: ✭ 168 (-41.26%)
Mutual labels:  terraform, hcl, examples
Terragrunt Infrastructure Modules Example
A repo used to show example file/folder structures you can use with Terragrunt and Terraform
Stars: ✭ 135 (-52.8%)
Mutual labels:  terraform, hcl, examples
Terraform Aws Elastic Beanstalk Environment
Terraform module to provision an AWS Elastic Beanstalk Environment
Stars: ✭ 211 (-26.22%)
Mutual labels:  terraform, hcl
Terraform Aws Ecs Container Definition
Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource
Stars: ✭ 217 (-24.13%)
Mutual labels:  terraform, hcl
Iam Policy Json To Terraform
Small tool to convert an IAM Policy in JSON format into a Terraform aws_iam_policy_document
Stars: ✭ 282 (-1.4%)
Mutual labels:  terraform, hcl
Intellij Hcl
HCL language support for IntelliJ platform based IDEs
Stars: ✭ 207 (-27.62%)
Mutual labels:  terraform, hcl
Terraform Ecs Fargate
Source code for a tutorial on Medium I published - "Deploying Containers on Amazon’s ECS using Fargate and Terraform: Part 2"
Stars: ✭ 208 (-27.27%)
Mutual labels:  terraform, hcl
Terraform Aws Tfstate Backend
Terraform module that provisions an S3 bucket to store the `terraform.tfstate` file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption.
Stars: ✭ 229 (-19.93%)
Mutual labels:  terraform, hcl
Go Lambda Ping
Deploy a Lambda to Ping a Site in 20 Seconds!
Stars: ✭ 195 (-31.82%)
Mutual labels:  terraform, hcl
Cloudblock
Cloudblock automates deployment of secure ad-blocking for all of your devices - even when mobile. Step-by-step text and video guides included! Compatible clouds include AWS, Azure, Google Cloud, and Oracle Cloud. Cloudblock deploys Wireguard VPN, Pi-Hole DNS Ad-blocking, and DNS over HTTPS in a cloud provider - or locally - using Terraform and Ansible.
Stars: ✭ 257 (-10.14%)
Mutual labels:  terraform, hcl
Terraform Aws Eks Cluster
Terraform module for provisioning an EKS cluster
Stars: ✭ 256 (-10.49%)
Mutual labels:  terraform, hcl
Vim Terraform Completion
A (Neo)Vim Autocompletion and linter for Terraform, a HashiCorp tool
Stars: ✭ 280 (-2.1%)
Mutual labels:  terraform, hcl
Terraform Fargate Example
Example repository to run an ECS cluster on Fargate
Stars: ✭ 206 (-27.97%)
Mutual labels:  terraform, hcl
Terragrunt Reference Architecture
Terragrunt Reference Architecture (upd: May 2020)
Stars: ✭ 204 (-28.67%)
Mutual labels:  terraform, hcl
Terraform Website S3 Cloudfront Route53
Terraform scripts to set up an S3-based static website, with a CloudFront distribution and the required Route53 entries.
Stars: ✭ 210 (-26.57%)
Mutual labels:  terraform, hcl
Terraform Aws Jenkins
Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack
Stars: ✭ 197 (-31.12%)
Mutual labels:  terraform, hcl
Azure arc
Automated Azure Arc environments
Stars: ✭ 224 (-21.68%)
Mutual labels:  terraform, hcl
Kubenow
Deploy Kubernetes. Now!
Stars: ✭ 285 (-0.35%)
Mutual labels:  terraform, hcl
K8s Scw Baremetal
Kubernetes installer for Scaleway bare-metal AMD64 and ARMv7
Stars: ✭ 176 (-38.46%)
Mutual labels:  terraform, hcl
Tf aws bastion s3 keys
A Terraform module for creating a bastion host on AWS EC2 and populating its ~/.ssh/authorized_keys with public keys from an S3 bucket
Stars: ✭ 178 (-37.76%)
Mutual labels:  terraform, hcl

Maintained by Gruntwork.io

Example infrastructure-live for Terragrunt

This repo, along with the terragrunt-infrastructure-modules-example repo, shows an example file/folder structure you can use with Terragrunt to keep your Terraform code DRY. For background information, check out the Keep your Terraform code DRY section of the Terragrunt documentation.

This repo shows an example of how to use the modules from the terragrunt-infrastructure-modules-example repo to deploy an Auto Scaling Group (ASG) and a MySQL DB across three environments (qa, stage, prod) and two AWS accounts (non-prod, prod), all without duplicating any of the Terraform code. That's because there is just a single copy of the Terraform code, defined in the terragrunt-infrastructure-modules-example repo; in this repo, we define only terragrunt.hcl files that reference that code (at a specific version, too!) and fill in the variables specific to each environment.
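For example, each child terragrunt.hcl in this repo looks roughly like the following (a simplified sketch; the module path, version ref, and input names here are illustrative rather than copied verbatim from the repo):

# Pull the Terraform code for this module from the modules repo, pinned to a specific tag
# so every environment deploys a known version of the code.
terraform {
  source = "git::git@github.com:gruntwork-io/terragrunt-infrastructure-modules-example.git//mysql?ref=v0.8.0"
}

# Inherit the remote state configuration from the root terragrunt.hcl.
include {
  path = find_in_parent_folders()
}

# Fill in only the variables that are specific to this environment.
inputs = {
  instance_class    = "db.t2.micro"
  allocated_storage = 20
}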

Note: This code is solely for demonstration purposes. This is not production-ready code, so use at your own risk. If you are interested in battle-tested, production-ready Terraform code, check out Gruntwork.

How do you deploy the infrastructure in this repo?

Pre-requisites

  1. Install Terraform version 0.13.0 or newer and Terragrunt version v0.25.1 or newer.
  2. Update the bucket parameter in the root terragrunt.hcl. We use S3 as a Terraform backend to store your Terraform state, and S3 bucket names must be globally unique. The name currently in the file is already taken, so you'll have to specify your own. Alternatively, you can set the environment variable TG_BUCKET_PREFIX to set a custom prefix (see the sketch after this list).
  3. Configure your AWS credentials using one of the supported authentication mechanisms.
  4. Fill in your AWS Account IDs in prod/account.hcl and non-prod/account.hcl.
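
For reference, the pieces of configuration these steps refer to look roughly like this (a simplified sketch, not copied verbatim from the repo; the bucket name, lock table name, and account ID are placeholders you would replace with your own values):

# Root terragrunt.hcl: configure the S3 backend once, for every module in the repo.
remote_state {
  backend = "s3"
  config = {
    # Prepend TG_BUCKET_PREFIX (if set) so the bucket name can be made globally unique.
    bucket         = "${get_env("TG_BUCKET_PREFIX", "")}your-terraform-state-bucket"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "your-terraform-lock-table"
  }
}

# prod/account.hcl (and similarly non-prod/account.hcl): fill in your own AWS Account ID.
locals {
  account_name   = "prod"
  aws_account_id = "111111111111"
}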

Deploying a single module

  1. cd into the module's folder (e.g. cd non-prod/us-east-1/qa/mysql).
  2. Note: if you're deploying the MySQL DB, you'll need to configure your DB password as an environment variable: export TF_VAR_master_password=(...).
  3. Run terragrunt plan to see the changes you're about to apply.
  4. If the plan looks good, run terragrunt apply.

Deploying all modules in a region

  1. cd into the region folder (e.g. cd non-prod/us-east-1).
  2. Configure the password for the MySQL DB as an environment variable: export TF_VAR_master_password=(...).
  3. Run terragrunt plan-all to see all the changes you're about to apply.
  4. If the plan looks good, run terragrunt apply-all.

Testing the infrastructure after it's deployed

After each module is finished deploying, it will write a bunch of outputs to the screen. For example, the ASG will output something like the following:

Outputs:

asg_name = tf-asg-00343cdb2415e9d5f20cda6620
asg_security_group_id = sg-d27df1a3
elb_dns_name = webserver-example-prod-1234567890.us-east-1.elb.amazonaws.com
elb_security_group_id = sg-fe62ee8f
url = http://webserver-example-prod-1234567890.us-east-1.elb.amazonaws.com:80

A minute or two after the deployment finishes, once the servers in the ASG have passed their health checks, you should be able to test the url output in your browser or with curl:

curl http://webserver-example-prod-1234567890.us-east-1.elb.amazonaws.com:80

Hello, World

Similarly, the MySQL module produces outputs that will look something like this:

Outputs:

arn = arn:aws:rds:us-east-1:1234567890:db:terraform-00d7a11c1e02cf617f80bbe301
db_name = mysql_prod
endpoint = terraform-1234567890.abcdefghijklmonp.us-east-1.rds.amazonaws.com:3306

You can use the endpoint and db_name outputs with any MySQL client:

mysql --host=terraform-1234567890.abcdefghijklmonp.us-east-1.rds.amazonaws.com --port=3306 --user=admin --password mysql_prod

How is the code in this repo organized?

The code in this repo uses the following folder hierarchy:

account
 └ _global
 └ region
    └ _global
    └ environment
       └ resource

Where:

  • Account: At the top level, there is one folder for each of your AWS accounts, such as stage-account, prod-account, mgmt-account, etc. If you have everything deployed in a single AWS account, there will just be a single folder at the root (e.g. main-account).

  • Region: Within each account, there will be one or more AWS regions, such as us-east-1, eu-west-1, and ap-southeast-2, where you've deployed resources. There may also be a _global folder that defines resources that are available across all the AWS regions in this account, such as IAM users, Route 53 hosted zones, and CloudTrail.

  • Environment: Within each region, there will be one or more "environments", such as qa, stage, etc. Typically, an environment will correspond to a single AWS Virtual Private Cloud (VPC), which isolates that environment from everything else in that AWS account. There may also be a _global folder that defines resources that are available across all the environments in this AWS region, such as Route 53 A records, SNS topics, and ECR repos.

  • Resource: Within each environment, you deploy all the resources for that environment, such as EC2 Instances, Auto Scaling Groups, ECS Clusters, Databases, Load Balancers, and so on. Note that the Terraform code for most of these resources lives in the terragrunt-infrastructure-modules-example repo.
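
Putting that together, a concrete layout for this example repo looks something like this (abbreviated; the asg folder name is illustrative, since only the mysql path is spelled out above):

terragrunt.hcl          (root configuration)
non-prod
 └ account.hcl
 └ us-east-1
    └ qa
       └ mysql
          └ terragrunt.hcl
       └ asg
          └ terragrunt.hcl
    └ stage
       └ ...
prod
 └ account.hcl
 └ us-east-1
    └ prod
       └ ...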

Creating and using root (account) level variables

In the situation where you have multiple AWS accounts or regions, you often have to pass common variables down to each of your modules. Rather than copy/pasting the same variables into each terragrunt.hcl file, in every region, and in every environment, you can inherit them from the inputs defined in the root terragrunt.hcl file.
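
A minimal sketch of how that inheritance works (the variable names here are illustrative):

# Root terragrunt.hcl: inputs defined here are passed to every module that includes this file.
inputs = {
  aws_region = "us-east-1"
  project    = "terragrunt-example"
}

# Child terragrunt.hcl (e.g. non-prod/us-east-1/qa/mysql/terragrunt.hcl):
include {
  path = find_in_parent_folders()
}

# The child only defines what differs for this environment; the root inputs are merged in,
# and any key defined in both places takes the child's value.
inputs = {
  instance_class = "db.t2.micro"
}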
