
hashicorp / C1m

Licence: mpl-2.0
Nomad, Terraform, and Packer configurations for the Million Container Challenge (C1M)

Projects that are alternatives of or similar to C1m

Terraform.tmlanguage
Terraform (HCL) configuration file syntax highlighting for Sublime Text 2 and 3
Stars: ✭ 148 (-11.38%)
Mutual labels:  hcl
Terraform Google Nat Gateway
Modular NAT Gateway on Google Compute Engine for Terraform.
Stars: ✭ 155 (-7.19%)
Mutual labels:  hcl
Terraform Aws Cloudfront S3 Cdn
Terraform module to easily provision CloudFront CDN backed by an S3 origin
Stars: ✭ 162 (-2.99%)
Mutual labels:  hcl
Terraform Aws Eks
Terraform module to create an Elastic Kubernetes (EKS) cluster and associated worker instances on AWS
Stars: ✭ 2,464 (+1375.45%)
Mutual labels:  hcl
Cka Practice Exercises
This is a guide for passing the CNCF Certified Kubernetes Administrator (CKA) with practice exercises. Good luck!
Stars: ✭ 151 (-9.58%)
Mutual labels:  hcl
Terraform Aws Kubernetes
Terraform module for Kubernetes setup on AWS
Stars: ✭ 159 (-4.79%)
Mutual labels:  hcl
Terraform Google Vault
Modular deployment of Vault on Google Compute Engine with Terraform
Stars: ✭ 147 (-11.98%)
Mutual labels:  hcl
Terraform Aws Autoscaling
Terraform module which creates Auto Scaling resources on AWS
Stars: ✭ 166 (-0.6%)
Mutual labels:  hcl
Terraform Aws Ssh Bastion Service
Terraform plan to deploy ssh bastion as a containerised, stateless service on AWS with IAM based authentication
Stars: ✭ 154 (-7.78%)
Mutual labels:  hcl
Terraform Kubernetes Installer
Terraform Installer for Kubernetes on Oracle Cloud Infrastructure
Stars: ✭ 162 (-2.99%)
Mutual labels:  hcl
Terraform Learn
A best practice baseline Terraform repository containing Terraform scripts with the ability to deploy both compute and networking infrastructure into AWS, Microsoft Azure and Google Cloud Platform.
Stars: ✭ 150 (-10.18%)
Mutual labels:  hcl
Aws Labs
Step-by-step guide for AWS mini labs. Currently maintained at: https://github.com/Cloud-Yeti/aws-labs. YouTube playlist for labs:
Stars: ✭ 153 (-8.38%)
Mutual labels:  hcl
Zeit Now
GitHub Action for interacting with Zeit Now
Stars: ✭ 160 (-4.19%)
Mutual labels:  hcl
Multiregion Terraform
Example multi-region AWS Terraform application
Stars: ✭ 149 (-10.78%)
Mutual labels:  hcl
Terraform Aws Rds Aurora
Terraform module which creates RDS Aurora resources on AWS
Stars: ✭ 165 (-1.2%)
Mutual labels:  hcl
Terraform Aws Lambda
Terraform module for AWS Lambda functions
Stars: ✭ 148 (-11.38%)
Mutual labels:  hcl
Apn Blog
APN Blog article code and configurations.
Stars: ✭ 156 (-6.59%)
Mutual labels:  hcl
Aws Incident Response
Stars: ✭ 167 (+0%)
Mutual labels:  hcl
Terraform Aws Openshift
Create infrastructure with Terraform and AWS, install OpenShift. Party!
Stars: ✭ 165 (-1.2%)
Mutual labels:  hcl
Dcos Kubernetes Quickstart
Quickstart guide for Kubernetes on DC/OS
Stars: ✭ 161 (-3.59%)
Mutual labels:  hcl

Nomad C1M Challenge

This repository contains the infrastructure code necessary to run the Million Container Challenge (C1M) using HashiCorp's Nomad on Google Compute Engine or Amazon Web Services.

We use Packer to build the machine images and Terraform to provision the infrastructure. The instructions for both are below.

Build Artifacts with Packer

Machine images must be built before Terraform can provision the infrastructure. You can build them in Atlas with packer push, as shown below, or locally by substituting packer build for packer push (a local-build sketch follows the command listings).

From the root directory of this repository, run the following commands to build your images with Packer.

GCE

If you're using GCE, you will need a GCE service account credentials file (account.json); place it in the root of this repository.

cd packer

export ATLAS_USERNAME=YOUR_ATLAS_USERNAME
export GCE_PROJECT_ID=YOUR_GOOGLE_PROJECT_ID
export GCE_DEFAULT_ZONE=us-central1-a
export GCE_SOURCE_IMAGE=ubuntu-1404-trusty-v20160114e

packer push gce_utility.json
packer push gce_consul_server.json
packer push gce_nomad_server.json
packer push gce_nomad_client.json
AWS
cd packer

export ATLAS_USERNAME=YOUR_ATLAS_USERNAME
export AWS_ACCESS_KEY_ID=YOUR_AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION=us-east-1
export AWS_SOURCE_AMI=ami-9a562df2

packer push aws_utility.json
packer push aws_consul_server.json
packer push aws_nomad_server.json
packer push aws_nomad_client.json
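If you prefer to build locally instead of pushing the builds to Atlas, the same templates can be run with packer build. This is a sketch and assumes the templates read the exported environment variables through their variables blocks; pass them explicitly with -var if they do not. The same pattern applies to the gce_*.json templates.

# Local build of a single image (run from the packer/ directory)
packer validate aws_nomad_client.json
packer build aws_nomad_client.json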

Provision Infrastructure with Terraform

To provision the infrastructure necessary for C1M, run the Terraform commands below. If you want to provision locally rather than in Atlas, use terraform apply instead of terraform push (see the sketch after the command listings).

From the root directory of this repository, run the following commands to provision your infrastructure with Terraform.

You only need to run the terraform remote config command once.

GCE

If you're using GCE, you will need the same GCE account.json credentials file, placed in the directory you're running Terraform commands from (terraform/_env/gce).

cd terraform/_env/gce

export ATLAS_USERNAME=YOUR_ATLAS_USERNAME
export ATLAS_TOKEN=YOUR_ATLAS_TOKEN
export ATLAS_ENVIRONMENT=c1m-gce

terraform remote config -backend-config name=$ATLAS_USERNAME/$ATLAS_ENVIRONMENT # Only need to run this command once
terraform get
terraform push -name $ATLAS_USERNAME/$ATLAS_ENVIRONMENT -var "atlas_token=$ATLAS_TOKEN" -var "atlas_username=$ATLAS_USERNAME"
AWS
cd terraform/_env/aws

export ATLAS_USERNAME=YOUR_ATLAS_USERNAME
export ATLAS_TOKEN=YOUR_ATLAS_TOKEN
export ATLAS_ENVIRONMENT=c1m-aws

terraform remote config -backend-config name=$ATLAS_USERNAME/$ATLAS_ENVIRONMENT # Only need to run this command once
terraform get
terraform push -name $ATLAS_USERNAME/$ATLAS_ENVIRONMENT -var "atlas_token=$ATLAS_TOKEN" -var "atlas_username=$ATLAS_USERNAME"
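To provision locally rather than in Atlas, as mentioned above, plan and apply from the same environment directory. This is a sketch; the variables mirror those passed to terraform push above, and any other variables required by the environment can be supplied the same way.

# Local alternative to terraform push (run from terraform/_env/gce or terraform/_env/aws)
terraform get
terraform plan -var "atlas_token=$ATLAS_TOKEN" -var "atlas_username=$ATLAS_USERNAME"
terraform apply -var "atlas_token=$ATLAS_TOKEN" -var "atlas_username=$ATLAS_USERNAME"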

To tweak the infrastructure size, update the Terraform variables for the environment(s) you're provisioning, for example as sketched below.
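Node counts can also be overridden on the command line instead of editing the variable files. The variable name below is illustrative; check the variables file in terraform/_env/gce or terraform/_env/aws for the real names.

# Hypothetical example: override a client count variable at push time
terraform push -name $ATLAS_USERNAME/$ATLAS_ENVIRONMENT \
  -var "client_nodes=1000" \
  -var "atlas_token=$ATLAS_TOKEN" -var "atlas_username=$ATLAS_USERNAME"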

Scheduling with Nomad

Once your infrastructure is provisioned, pick a Nomad server to SSH into from the output of terraform apply or from the web console. Jot down the public IP of the Nomad server you're going to run these jobs from; you will need it later to gather your results.

Run the commands below to schedule your first job, which places 5 Docker containers on 5 different nodes using the node_class constraint. Each job contains a different number of tasks for Nomad to schedule; the job types are defined below.

  • Docker Driver (classlogger_n_docker.nomad)
    • Schedules n Docker containers
  • Docker Driver with Consul (classlogger_n_consul_docker.nomad)
    • Schedules n Docker containers, registering each container as a service with Consul
  • Raw Fork/Exec Driver (classlogger_n_raw_exec.nomad)
    • Schedules n tasks
  • Raw Fork/Exec Driver with Consul (classlogger_n_consul_raw_exec.nomad)
    • Schedules n tasks, registering each task as a service with Consul

You can change the job being run by modifying the JOBSPEC environment variable, and the number of jobs being run by modifying the JOBS environment variable.

ssh ubuntu@NOMAD_SERVER_IP

cd /opt/nomad/jobs
sudo JOBSPEC=docker-classlogger-1.nomad JOBS=1 bench-runner /usr/local/bin/bench-nomad
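For example, to run one of the Consul-registered Docker variants instead and then watch scheduling progress from the same Nomad server, something like the following works. The jobspec filename is illustrative; check /opt/nomad/jobs for the exact names on your cluster.

# Hypothetical example: run a different jobspec, then check scheduling progress
sudo JOBSPEC=classlogger_1_consul_docker.nomad JOBS=1 bench-runner /usr/local/bin/bench-nomad
nomad status        # list registered jobs and their status
nomad node-status   # confirm client nodes are ready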

To gather results, complete the C1M Results steps after running each job. Before adding more nodes to your infrastructure, be sure to pull down the Spawn Results locally so you can see how quickly Terraform and your cloud provider spun up each infrastructure size.

Gather Results

To gather the results of the C1M challenge, follow the instructions below.

Spawn Results

Run the commands below from the Utility box to gather the C1M spawn results.

consul exec 'scp -C -q -o StrictHostKeyChecking=no -i /home/ubuntu/c1m/site.pem /home/ubuntu/c1m/spawn/spawn.csv ubuntu@UTILITY_IP:/home/ubuntu/c1m/spawn/$(hostname).file'

FILE=/home/ubuntu/c1m/spawn/$(date '+%s').csv && find /home/ubuntu/c1m/spawn/. -type f -name '*.file' -exec cat {} + >> $FILE && sed -i '1s/^/type,name,time\n/' $FILE

Make sure the consul exec command pulled in all of the spawn files by running ls -1 /home/ubuntu/c1m/spawn | grep '.file' | wc -l and confirming that the count matches your number of nodes. The consul exec command is idempotent and can be run multiple times. Below are some examples of how to get live updates on the Nomad and Consul agents, as well as how to rerun consul exec until all of the spawn files have arrived.

# Live updates on Nomad agents
EXPECTED=5000 && CURRENT=0 && while [ $CURRENT -lt $EXPECTED ]; do CURRENT=$(nomad node-status | grep 'ready' |  wc -l); echo "Nomad nodes ready: $CURRENT/$EXPECTED"; sleep 10; done

# Live updates on Consul agents
EXPECTED=5009 && CURRENT=0 && while [ $CURRENT -lt $EXPECTED ]; do CURRENT=$(consul members | grep 'alive' |  wc -l); echo "Consul members alive: $CURRENT/$EXPECTED"; sleep 10; done

# Run consul exec until you have the number of spawn files you need
EXPECTED=5009 && CURRENT=0 && while [ $CURRENT -lt $EXPECTED ]; do consul exec 'scp -C -q -o StrictHostKeyChecking=no -i /home/ubuntu/c1m/site.pem /home/ubuntu/c1m/spawn/spawn.csv ubuntu@UTILITY_IP:/home/ubuntu/c1m/spawn/$(hostname).file'; CURRENT=$(ls -1 /home/ubuntu/c1m/spawn | grep '.file' | wc -l); echo "Spawn files: $CURRENT/$EXPECTED"; sleep 60; done

Run spawn_results.sh locally after running all jobs for each node count to gather all C1M spawn results.

sh spawn_results.sh UTILITY_IP NODE_COUNT
C1M Results

Run c1m_results.sh locally after running each job for each node count to gather all C1M performance results.

sh c1m_results.sh NOMAD_SERVER_IP UTILITY_IP NODE_COUNT JOB_NAME

Nomad Join

Use the consul exec commands below to run a Nomad join operation on any subset of Nomad agents. This can be used to join servers to each other, join clients to servers, or change the Nomad server cluster that clients point to.

Join Nomad Servers
consul exec -datacenter gce-us-central1 -service nomad-server 'sudo /opt/nomad/nomad_join.sh "nomad-server?dc=gce-us-central1&passing" "server"'
Join Nomad Clients to Nomad Servers
consul exec -datacenter gce-us-central1 -service nomad-client 'sudo /opt/nomad/nomad_join.sh "nomad-server?dc=gce-us-central1&passing"'
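The same pattern applies to the AWS environment by pointing -datacenter (and the dc query parameter) at the AWS Consul datacenter. The datacenter name below is illustrative; run consul members to see the real value for your cluster.

# Hypothetical AWS example: join Nomad clients to the Nomad servers
consul exec -datacenter aws-us-east-1 -service nomad-client 'sudo /opt/nomad/nomad_join.sh "nomad-server?dc=aws-us-east-1&passing"'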