
rms1000watt / local-hashicorp-stack

License: other
Local Hashicorp Stack for DevOps Development without Hypervisor or Cloud


Local HashiCorp Stack

Introduction

This project lets you run a 3-server + 3-client Nomad/Consul cluster in 6 VirtualBox VMs on OS X, using Packer & Terraform.

Contents

  • Motivation
  • Prerequisites
  • Build
  • Deploy
  • Jobs
  • UI
  • HDFS
  • Spark
  • Vault

Motivation

HashiCorp tools enable you to build/maintain multi-datacenter systems with ease. However, you usually don't have datacenters to play with. This project builds VirtualBox VMs that you can run Terraform against to play with Nomad, Consul, etc.

The workflow is:

  • Build ISOs (Packer)
  • Deploy VMs to your local machine (Terraform + 3rd Party Provider)
  • Play with Nomad, Consul, etc.

(Packer is used directly instead of Vagrant so the pipeline is the same when you build & deploy against hypervisors and clouds)
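The template itself lives at packer/packer.json and is not reproduced in this README. For orientation, a Packer VirtualBox ISO builder template generally has this shape (a sketch only; the vm_name, ISO URL, checksum placeholder, and provisioner script path are illustrative assumptions, not the project's actual values):

```json
{
  "builders": [
    {
      "type": "virtualbox-iso",
      "vm_name": "ubuntu-16.04-docker",
      "guest_os_type": "Ubuntu_64",
      "iso_url": "http://releases.ubuntu.com/16.04/ubuntu-16.04.7-server-amd64.iso",
      "iso_checksum_type": "sha256",
      "iso_checksum": "<sha256 of the ISO>",
      "ssh_username": "packer",
      "ssh_password": "packer"
    }
  ],
  "provisioners": [
    { "type": "shell", "script": "scripts/install-docker.sh" }
  ]
}
```

The `virtualbox-iso` builder boots the ISO in VirtualBox, provisions it over SSH, and exports the OVF/VMDK pair that the Build step below packages into a .box file.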

Prerequisites

  • OS X
  • Homebrew
  • brew install packer terraform nomad
  • brew cask install virtualbox
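Before building, it can save a failed run to confirm the CLIs are actually on PATH. A small sketch (VBoxManage is the command-line tool installed by the virtualbox cask):

```shell
# Sanity-check that the prerequisite CLIs from the list above are installed.
required_tools="packer terraform nomad VBoxManage"
missing=""
for tool in $required_tools; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "missing:$missing"
else
  echo "all prerequisites found"
fi
```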

Build

cd packer
packer build -on-error=abort -force packer.json
cd output-virtualbox-iso
tar -zcvf ubuntu-16.04-docker.box *.ovf *.vmdk
cd ../..
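The tar step above is worth demystifying: a .box file is nothing more than a gzipped tar of the OVF and VMDK that `packer build` writes into output-virtualbox-iso/. The sketch below reproduces the packaging with empty stand-in files (the real artifacts come from Packer; the disk filename here is an assumption):

```shell
# Recreate the packaging step with dummy artifacts in a temp directory.
workdir=$(mktemp -d)
mkdir "$workdir/output-virtualbox-iso"
cd "$workdir/output-virtualbox-iso"
: > ubuntu-16.04-docker.ovf           # stand-in for the exported VM definition
: > ubuntu-16.04-docker-disk001.vmdk  # stand-in for the exported disk image
tar -zcf ubuntu-16.04-docker.box *.ovf *.vmdk
tar -tzf ubuntu-16.04-docker.box      # lists the two files packed into the box
cd - >/dev/null
```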

Deploy

cd terraform
# Remove any cached golden images before redeploying
rm -rf ~/.terraform/virtualbox/gold/ubuntu-16.04-docker
terraform init
terraform apply
cd ..

You can SSH onto a host by running:

ssh -o 'IdentitiesOnly yes' [email protected]
# password: packer

Jobs

Take the IP address of the server deployment and run the Nomad jobs:

cd jobs
nomad run -address=http://192.168.0.118:4646 redis-job.nomad
nomad run -address=http://192.168.0.118:4646 echo-job.nomad
nomad run -address=http://192.168.0.118:4646 golang-redis-pg.nomad
nomad run -address=http://192.168.0.118:4646 raw.nomad
cd ..
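The .nomad job files ship with the repo and are not reproduced in this README. As a rough sketch, a minimal Docker-driver job like redis-job.nomad might look like the following (illustrative only; the image tag, group layout, and resource values are assumptions, and dc-1 mirrors the datacenter name used in the Spark section):

```hcl
# Illustrative sketch, not the repo's actual redis-job.nomad.
job "redis-job" {
  datacenters = ["dc-1"]

  group "cache" {
    count = 1

    task "redis" {
      driver = "docker"

      config {
        image = "redis:3.2"
      }

      resources {
        cpu    = 500  # MHz
        memory = 256  # MB
      }
    }
  }
}
```

Nomad schedules the task group onto one of the client VMs, and the Docker driver pulls and runs the container there.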

You can view the logs of an allocation:

nomad logs -address=http://192.168.0.118:4646 bf90d9cb

At a later time, you can stop the Nomad jobs (but first take a look at the UI):

cd jobs
nomad stop -address=http://192.168.0.118:4646 Echo-Job
nomad stop -address=http://192.168.0.118:4646 Redis-Job
nomad stop -address=http://192.168.0.118:4646 Golang-Redis-PG
nomad stop -address=http://192.168.0.118:4646 view_files
cd ..

UI

Using the IP address of the server deployment, you can browse the Nomad UI at http://192.168.0.118:4646/ui and the Consul UI at http://192.168.0.118:8500/ui.

HDFS

You can deploy HDFS by running:

cd jobs
nomad run -address=http://192.168.0.118:4646 hdfs.nomad
cd ..

(Give it a minute to download the Docker image.)

Then you can view the UI at: http://192.168.0.118:50070/

Spark

SSH into a server node then start PySpark:

pyspark \
--master nomad \
--conf spark.executor.instances=2 \
--conf spark.nomad.datacenters=dc-1 \
--conf spark.nomad.sparkDistribution=local:///usr/local/bin/spark

Then run some PySpark commands:

df = spark.read.json("/usr/local/bin/spark/examples/src/main/resources/people.json")
df.show()
df.printSchema()
df.createOrReplaceTempView("people")
sqlDF = spark.sql("SELECT * FROM people")
sqlDF.show()

Vault

Init the Vault system and go through the init/unseal/auth process on one of the Vault servers:

vault init   -address=http://192.168.0.118:8200
vault unseal -address=http://192.168.0.118:8200
vault auth   -address=http://192.168.0.118:8200 66344296-222d-5be6-e052-15679209e0e7
vault write  -address=http://192.168.0.118:8200 secret/names name=ryan
vault read   -address=http://192.168.0.118:8200 secret/names

Then unseal the other Vault servers for HA:

vault unseal -address=http://192.168.0.125:8200
vault unseal -address=http://192.168.0.161:8200

Then check Consul to verify that the health checks show all the Vault servers as unsealed.
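Instead of eyeballing the Consul UI, you can also poll each Vault server directly over Vault's HTTP API, which exposes seal state at /v1/sys/seal-status. A sketch (the IPs are the example cluster addresses used throughout this README):

```shell
# Build the seal-status URL for a Vault server and check each node.
vault_seal_url() {
  echo "http://$1:8200/v1/sys/seal-status"
}

for ip in 192.168.0.118 192.168.0.125 192.168.0.161; do
  echo "would check: $(vault_seal_url "$ip")"
  # curl -s "$(vault_seal_url "$ip")"   # an unsealed server reports "sealed":false
done
```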
