
WesleyCharlesBlake / Terraform Aws Eks

License: MIT
Deploy a full EKS cluster with Terraform

Projects that are alternatives of or similar to Terraform Aws Eks

Terraform Aws Secure Baseline
Terraform module to set up your AWS account with the secure baseline configuration based on CIS Amazon Web Services Foundations and AWS Foundational Security Best Practices.
Stars: ✭ 596 (+376.8%)
Mutual labels:  terraform, hcl, devops
K8s Digitalocean Terraform
Deploy latest Kubernetes cluster on DigitalOcean using Terraform
Stars: ✭ 33 (-73.6%)
Mutual labels:  terraform, hcl, devops
Terraform Modules
Terraform Modules
Stars: ✭ 25 (-80%)
Mutual labels:  terraform, hcl, devops
Tecli
In a world where everything is Terraform, teams use Terraform Cloud API to manage their workloads. TECLI increases teams productivity by facilitating such interaction and by providing easy commands that can be executed on a terminal or on CI/CD systems.
Stars: ✭ 158 (+26.4%)
Mutual labels:  amazon-web-services, terraform, devops
Terraform Aws Couchbase
Reusable infrastructure modules for running Couchbase on AWS
Stars: ✭ 73 (-41.6%)
Mutual labels:  terraform, hcl, devops
Terratag
Terratag is a CLI tool that enables users of Terraform to automatically create and maintain tags across their entire set of AWS, Azure, and GCP resources
Stars: ✭ 385 (+208%)
Mutual labels:  terraform, hcl, devops
Ebs bckup
Stars: ✭ 32 (-74.4%)
Mutual labels:  terraform, hcl, devops
Terraform Aws Cross Account Role
A Terraform module to create an IAM Role for Cross Account delegation.
Stars: ✭ 30 (-76%)
Mutual labels:  amazon-web-services, terraform, hcl
Terraform
Terraform automation for Cloud
Stars: ✭ 121 (-3.2%)
Mutual labels:  terraform, hcl, devops
Terraform Modules
Reusable Terraform modules
Stars: ✭ 63 (-49.6%)
Mutual labels:  terraform, hcl, devops
Terraform Aws Landing Zone
Terraform Module for AWS Landing Zone
Stars: ✭ 142 (+13.6%)
Mutual labels:  amazon-web-services, terraform, hcl
Config Lint
Command line tool to validate configuration files
Stars: ✭ 118 (-5.6%)
Mutual labels:  terraform, hcl, devops
Terraform Aws Cloudtrail Cloudwatch Alarms
Terraform module for creating alarms for tracking important changes and occurrences from cloudtrail.
Stars: ✭ 170 (+36%)
Mutual labels:  terraform, hcl, devops
Intro To Terraform
Sample code for the blog post series "A Comprehensive Guide to Terraform."
Stars: ✭ 550 (+340%)
Mutual labels:  terraform, hcl, devops
Vishwakarma
Terraform modules to create a self-hosting Kubernetes cluster on opinionated Cloud Platform.
Stars: ✭ 127 (+1.6%)
Mutual labels:  terraform, hcl, devops
Doact
A Terraform module for hosting your own runner for CI/CD on Digital Ocean to run jobs in your GitHub Actions workflows. 🚀
Stars: ✭ 42 (-66.4%)
Mutual labels:  terraform, hcl, devops
Terraform Eks
Terraform for AWS EKS
Stars: ✭ 82 (-34.4%)
Mutual labels:  terraform, hcl, devops
Ecs Pipeline
☁️ 🐳 ⚡️ 🚀 Create environment and deployment pipelines to ECS Fargate with CodePipeline, CodeBuild and Github using Terraform
Stars: ✭ 85 (-32%)
Mutual labels:  terraform, hcl, devops
Terraform Aws Dynamic Subnets
Terraform module for public and private subnets provisioning in existing VPC
Stars: ✭ 106 (-15.2%)
Mutual labels:  terraform, hcl
Typhoon
Minimal and free Kubernetes distribution with Terraform
Stars: ✭ 1,397 (+1017.6%)
Mutual labels:  terraform, hcl

terraform-aws-eks

(Badges: CircleCI build status | Terraform Registry)

Deploy a full AWS EKS cluster with Terraform

What resources are created

  1. VPC
  2. Internet Gateway (IGW)
  3. Public and Private Subnets
  4. Security Groups, Route Tables and Route Table Associations
  5. IAM roles, instance profiles and policies
  6. An EKS Cluster
  7. EKS Managed Node group
  8. Autoscaling group and Launch Configuration
  9. Worker Nodes in a private Subnet
  10. Bastion host for SSH access to the VPC
  11. The ConfigMap required to register Nodes with EKS
  12. KUBECONFIG file to authenticate kubectl using the aws eks get-token command (requires awscli version 1.16.156 or later)

Configuration

You can configure your deployment with the following input variables:

Name                | Description                              | Default
cluster-name        | The name of your EKS Cluster             | eks-cluster
aws-region          | The AWS Region to deploy EKS             | us-east-1
availability-zones  | AWS Availability Zones                   | ["us-east-1a", "us-east-1b", "us-east-1c"]
k8s-version         | The desired K8s version to launch        | 1.13
node-instance-type  | Worker Node EC2 instance type            | m4.large
root-block-size     | Size of the root EBS block device (GiB)  | 20
desired-capacity    | Autoscaling Desired node capacity        | 2
max-size            | Autoscaling Maximum node capacity        | 5
min-size            | Autoscaling Minimum node capacity        | 1
vpc-subnet-cidr     | Subnet CIDR                              | 10.0.0.0/16
private-subnet-cidr | Private Subnet CIDR                      | ["10.0.0.0/19", "10.0.32.0/19", "10.0.64.0/19"]
public-subnet-cidr  | Public Subnet CIDR                       | ["10.0.128.0/20", "10.0.144.0/20", "10.0.160.0/20"]
db-subnet-cidr      | DB/Spare Subnet CIDR                     | ["10.0.192.0/21", "10.0.200.0/21", "10.0.208.0/21"]
eks-cw-logging      | EKS Logging Components                   | ["api", "audit", "authenticator", "controllerManager", "scheduler"]
ec2-key-public-key  | EC2 Key Pair for bastion and nodes       | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41 [email protected]

You can create a file called terraform.tfvars, or copy variables.tf into the project root, if you would like to override the defaults.
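
As a rough sketch, a minimal terraform.tfvars overriding a few of the defaults might look like this (the values below are placeholders, not recommendations):

# terraform.tfvars -- placeholder overrides, adjust for your environment
cluster-name       = "my-cluster"
aws-region         = "us-east-1"
node-instance-type = "t3.medium"
desired-capacity   = "3"
root-block-size    = "40"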

How to use this example

NOTE on versions: the versions of this module are compatible with the following Terraform releases; please use the correct version for your use case:

  • version 3.0.0 and above with Terraform 0.13.x and above
  • version 2.0.0 with Terraform 0.12.x and earlier
  • version 1.0.4 with Terraform 0.11.x and earlier

Have a look at the examples for complete references. You can use this module from the Terraform Registry as a remote source:

module "eks" {
  source  = "WesleyCharlesBlake/eks/aws"

  aws-region          = "us-east-1"
  availability-zones  = ["us-east-1a", "us-east-1b", "us-east-1c"]
  cluster-name        = "my-cluster"
  k8s-version         = "1.17"
  node-instance-type  = "t3.medium"
  root-block-size     = "40"
  desired-capacity    = "3"
  max-size            = "5"
  min-size            = "1"
  vpc-subnet-cidr     = "10.0.0.0/16"
  private-subnet-cidr = ["10.0.0.0/19", "10.0.32.0/19", "10.0.64.0/19"]
  public-subnet-cidr  = ["10.0.128.0/20", "10.0.144.0/20", "10.0.160.0/20"]
  db-subnet-cidr      = ["10.0.192.0/21", "10.0.200.0/21", "10.0.208.0/21"]
  eks-cw-logging      = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  ec2-key-public-key  = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41 [email protected]"
}

output "kubeconfig" {
  value = module.eks.kubeconfig
}

output "config-map" {
  value = module.eks.config-map-aws-auth
}

Or by using variables.tf or a tfvars file:

module "eks" {
  source  = "WesleyCharlesBlake/eks/aws"

  aws-region          = var.aws-region
  availability-zones  = var.availability-zones
  cluster-name        = var.cluster-name
  k8s-version         = var.k8s-version
  node-instance-type  = var.node-instance-type
  root-block-size     = var.root-block-size
  desired-capacity    = var.desired-capacity
  max-size            = var.max-size
  min-size            = var.min-size
  vpc-subnet-cidr     = var.vpc-subnet-cidr
  private-subnet-cidr = var.private-subnet-cidr
  public-subnet-cidr  = var.public-subnet-cidr
  db-subnet-cidr      = var.db-subnet-cidr
  eks-cw-logging      = var.eks-cw-logging
  ec2-key-public-key  = var.ec2-key-public-key
}
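
If you take the variables.tf route, the declarations simply mirror the inputs in the table above. A sketch of two of them is shown below; the types are assumptions based on the string values used in the examples:

variable "cluster-name" {
  description = "The name of your EKS Cluster"
  type        = string
  default     = "eks-cluster"
}

variable "desired-capacity" {
  description = "Autoscaling Desired node capacity"
  type        = string
  default     = "2"
}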

IAM

The AWS credentials must be associated with a user that has at least the following AWS managed IAM policies:

  • IAMFullAccess
  • AutoScalingFullAccess
  • AmazonEKSClusterPolicy
  • AmazonEKSWorkerNodePolicy
  • AmazonVPCFullAccess
  • AmazonEKSServicePolicy
  • AmazonEKS_CNI_Policy
  • AmazonEC2FullAccess

In addition, you will need to create the following customer managed policy:

EKS

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "eks:*"
            ],
            "Resource": "*"
        }
    ]
}
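
If you would rather manage that policy with Terraform as well, a hedged sketch using the AWS provider is shown below; the policy name and the attached user are assumptions for illustration, not part of this module:

resource "aws_iam_policy" "eks_full_access" {
  name = "EKSFullAccess" # assumed name, pick your own
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["eks:*"]
      Resource = "*"
    }]
  })
}

resource "aws_iam_user_policy_attachment" "eks_full_access" {
  user       = "terraform-deployer" # assumed IAM user that runs Terraform
  policy_arn = aws_iam_policy.eks_full_access.arn
}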

Terraform

You need to run the following commands to create the resources with Terraform:

terraform init
terraform plan
terraform apply

TIP: you should save the plan state (terraform plan -out eks-state) or, better yet, set up remote storage for the Terraform state. For example, you can store the state in an S3 backend with locking via DynamoDB.
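
A hedged sketch of such a backend configuration follows; the bucket, key, and DynamoDB table names are placeholders you would create beforehand:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"   # pre-existing S3 bucket (placeholder)
    key            = "eks/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"      # pre-existing lock table with a LockID hash key (placeholder)
    encrypt        = true
  }
}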

Setup kubectl

Setup your KUBECONFIG

terraform output kubeconfig > ~/.kube/eks-cluster
export KUBECONFIG=~/.kube/eks-cluster

Authorize users to access the cluster

Initially, only the identity that deployed the cluster will be able to access it. To authorize other users, the aws-auth ConfigMap needs to be modified using the steps below:

  • Open the aws-auth ConfigMap for editing on the machine that was used to deploy the EKS cluster:
sudo kubectl edit -n kube-system configmap/aws-auth
  • Add the following configuration to that file, replacing the placeholders:
mapUsers: |
  - userarn: arn:aws:iam::111122223333:user/<username>
    username: <username>
    groups:
      - system:masters

So, the final configuration would look like this:

apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/<username>
      username: <username>
      groups:
        - system:masters
  • Once the user mapping has been added to the configuration, create a cluster role binding for that user:
kubectl create clusterrolebinding ops-user-cluster-admin-binding-<username> --clusterrole=cluster-admin --user=<username>

Replace the placeholders with the proper values.

Cleaning up

You can destroy this cluster entirely by running:

terraform plan -destroy
terraform destroy --force