
mrlesmithjr / vagrant-vault-consul-docker-monitoring

Licence: other
No description or website provided.

Programming Languages

  • Shell
  • Batchfile

Projects that are alternatives of or similar to vagrant-vault-consul-docker-monitoring

Docker monitoring logging alerting
Docker host and container monitoring, logging and alerting out of the box using cAdvisor, Prometheus, Grafana for monitoring, Elasticsearch, Kibana and Logstash for logging and elastalert and Alertmanager for alerting.
Stars: ✭ 479 (+2295%)
Mutual labels:  kibana, grafana
Microservice Scaffold
A microservice scaffold built on Spring Cloud (Greenwich.SR2), suitable for online systems. Integrates a service registry (Nacos Discovery), configuration center (Nacos Config), authentication/authorization (OAuth 2 + JWT), log processing (ELK + Kafka), rate limiting and circuit breaking (Alibaba Sentinel), application metrics monitoring (Prometheus + Grafana), distributed tracing (Pinpoint), and Spring Boot Admin.
Stars: ✭ 211 (+955%)
Mutual labels:  kibana, grafana
Jmeter Elasticsearch Backend Listener
JMeter plugin that lets you send sample results to an ElasticSearch engine to enable live monitoring of load tests.
Stars: ✭ 72 (+260%)
Mutual labels:  kibana, grafana
skalogs-bundle
Open Source data and event driven real time Monitoring and Analytics Platform
Stars: ✭ 16 (-20%)
Mutual labels:  kibana, grafana
jmx-monitoring-stacks
No description or website provided.
Stars: ✭ 170 (+750%)
Mutual labels:  kibana, grafana
K8s Tew
Kubernetes - The Easier Way
Stars: ✭ 269 (+1245%)
Mutual labels:  kibana, grafana
Microservices Sample
Sample project to create an application using microservices architecture
Stars: ✭ 167 (+735%)
Mutual labels:  kibana, consul
Cault
docker compose for consul and vault official images
Stars: ✭ 157 (+685%)
Mutual labels:  consul, vault
hashicorp-labs
Deploy locally on VMs a HashiCorp cluster formed by Vault, Consul, and Nomad. Ready for deploying and testing your apps.
Stars: ✭ 32 (+60%)
Mutual labels:  consul, vault
docker grafana statsd elk
Docker repo for a general purpose graphing and logging container - includes graphite+carbon, grafana, statsd, elasticsearch, kibana, nginx, logstash indexer (currently using redis as an intermediary)
Stars: ✭ 19 (-5%)
Mutual labels:  kibana, grafana
docker-case
This project is mainly for quickly spinning up Docker services.
Stars: ✭ 31 (+55%)
Mutual labels:  kibana, grafana
netdata-influx
Netdata ➡️ InfluxDB metrics exporter & Grafana dashboard
Stars: ✭ 29 (+45%)
Mutual labels:  grafana, netdata
docker-elk-stack
The ELK stack Docker containerization (Elasticsearch, Logstash and Kibana)
Stars: ✭ 20 (+0%)
Mutual labels:  kibana, cadvisor
Awesome Monitoring
Infrastructure, operating system, and application monitoring tools for operations.
Stars: ✭ 356 (+1680%)
Mutual labels:  kibana, grafana
Ansible Vault
🔑 Ansible role for Hashicorp Vault
Stars: ✭ 189 (+845%)
Mutual labels:  consul, vault
Stagemonitor
an open source solution to application performance monitoring for java server applications
Stars: ✭ 1,664 (+8220%)
Mutual labels:  kibana, grafana
Docker Compose Ha Consul Vault Ui
A docker-compose example of HA Consul + Vault + Vault UI
Stars: ✭ 136 (+580%)
Mutual labels:  consul, vault
Hashi Helper
Disaster Recovery and Configuration Management for Consul and Vault
Stars: ✭ 155 (+675%)
Mutual labels:  consul, vault
vault-consul-kubernetes
vault + consul on kubernetes
Stars: ✭ 60 (+200%)
Mutual labels:  consul, vault
hubble
hubbling the universe nebula by nebula
Stars: ✭ 18 (-10%)
Mutual labels:  consul, vault

Repo Info

Spin up a multi-node Vagrant environment for learning and testing monitoring tools in a microservices world. All provisioning is automated using Ansible.

Cloning Repo

All Ansible roles are included as Git submodules, so the repo must be cloned recursively:

git clone https://github.com/mrlesmithjr/vagrant-vault-consul-docker-monitoring.git --recursive
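If the repo was already cloned without `--recursive`, the submodule roles can be pulled in afterwards with standard Git commands:

```shell
# Fetch all submodule roles into an existing clone
git submodule update --init --recursive
```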

Requirements

Environment

IP address assignments

  • node0 (192.168.250.10)
  • node1 (192.168.250.11)
  • node2 (192.168.250.12)
  • node3 (192.168.250.13)
  • node4 (192.168.250.14)
  • node5 (192.168.250.15)
  • node6 (192.168.250.16)
  • node7 (192.168.250.17)
  • node8 (192.168.250.18)

Usage

Spin up Vagrant environment

vagrant up

cAdvisor

Each Docker host exposes container metrics (via cAdvisor) for Prometheus to scrape.
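cAdvisor publishes on port 8080 (per the provisioning script in this repo), so a quick spot-check from the workstation might look like this; node4's IP is one of the Swarm members listed above, and the pipeline simply prints nothing if the VM is not up:

```shell
# Pull the first few Prometheus-format metric lines from cAdvisor on node4
curl -s --connect-timeout 2 http://192.168.250.14:8080/metrics | head -n 5
```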

Consul

Checking Consul member status:

vagrant ssh node0

vagrant@node0:~$ sudo consul members
Node   Address              Status  Type    Build  Protocol  DC
node0  192.168.250.10:8301  alive   server  0.8.1  2         dc1
node1  192.168.250.11:8301  alive   server  0.8.1  2         dc1
node2  192.168.250.12:8301  alive   server  0.8.1  2         dc1
node7  192.168.250.17:8301  alive   client  0.8.1  2         dc1
node8  192.168.250.18:8301  alive   client  0.8.1  2         dc1
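The same membership data is available from Consul's standard HTTP API. Note that the agent's HTTP API listens on localhost:8500 by default, so this is best run from inside node0 unless the API has been bound to the node address:

```shell
# Ask the local Consul agent for its member list as JSON
curl -s --connect-timeout 2 http://127.0.0.1:8500/v1/agent/members | head -c 300
```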

Docker

Checking Docker swarm node status:

vagrant ssh node5

vagrant@node5:~$ sudo docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
41oybdyk9njn7trhplpohk4tn *  node5     Ready   Active        Leader
4zc9ndv7rurfbgrhfzxs68sux    node4     Ready   Active        Reachable
8c3y3ta5ad56hlhmfzx2wmdgr    node8     Ready   Active
vmmpixn2i401cyhgd5g4l3cfd    node7     Ready   Active
x50d9z0zkloixvijxht1l36we    node6     Ready   Active        Reachable

Elasticsearch

Elasticsearch runs as a Docker Swarm service and stores the Docker container logs.

To validate cluster functionality:

curl http://192.168.250.14:9200/_cluster/health\?pretty\=true

{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 5,
  "active_primary_shards" : 5,
  "active_shards" : 10,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

The same check can be run against any of the other Docker Swarm hosts.
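Looping over the Swarm node IPs listed above gives a quick health summary for the whole cluster (a sketch; nodes that are down simply report as unreachable):

```shell
# Report Elasticsearch cluster status as seen from each Swarm node (node4-node8)
for ip in 192.168.250.14 192.168.250.15 192.168.250.16 192.168.250.17 192.168.250.18; do
  status=$(curl -s --connect-timeout 2 "http://$ip:9200/_cluster/health" \
    | grep -oE '"status" ?: ?"[a-z]+"')
  echo "$ip -> ${status:-unreachable}"
done
```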

Filebeat

Filebeat ships each host's Docker logs to Elasticsearch.

Grafana

Log into the Grafana web UI.

username/password: admin/admin

Add the Prometheus data source:

  • Click Add data source
  • Name: prometheus
  • Type: Prometheus
  • URL: http://192.168.250.10:9090
  • Click Add
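Alternatively, the data source can be created non-interactively through Grafana's HTTP API (the standard `/api/datasources` endpoint; the default Grafana port 3000 and the default admin credentials are assumptions here):

```shell
# Register the Prometheus data source via the Grafana API
curl -s --connect-timeout 2 -u admin:admin \
  -H 'Content-Type: application/json' \
  -d '{"name":"prometheus","type":"prometheus","url":"http://192.168.250.10:9090","access":"proxy"}' \
  http://192.168.250.10:3000/api/datasources || true  # ignore failure if the VM is not up
```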

Kibana

Provides a dashboard for viewing the Docker container logs stored in Elasticsearch.

Netdata

node0 is configured as a Netdata registry; all other nodes, which also run Netdata, announce themselves to it.
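On the announcing nodes, the relevant part of netdata.conf looks roughly like this (a sketch using Netdata's standard registry settings and its default port 19999; the actual values are laid down by the Ansible roles):

```
[registry]
    enabled = no
    registry to announce = http://192.168.250.10:19999
```

On node0 itself, `enabled = yes` turns the registry on.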

Prometheus

Vault

Monitoring Docker Services

As part of provisioning this environment, the following script spins up the monitoring stack as Docker Swarm services:


#!/usr/bin/env bash

# Larry Smith Jr.
# @mrlesmithjr
# http://everythingshouldbevirtual.com

# Turn on verbose execution
set -x

BACKEND_NET="monitoring"
CADVISOR_IMAGE="google/cadvisor:v0.24.1"
ELASTICSEARCH_IMAGE="elasticsearch:2.4"
ELK_ES_SERVER_PORT="9200"
ELK_ES_SERVER="escluster"
ELK_REDIS_SERVER="redis"
FRONTEND_NET="elasticsearch-frontend"
KIBANA_IMAGE="kibana:4.6.3"
LABEL_GROUP="monitoring"

# Check/create Backend Network if missing
docker network ls | grep $BACKEND_NET
RC=$?
if [ $RC != 0 ]; then
  docker network create -d overlay $BACKEND_NET
fi

# Check for running cAdvisor and spin it up if not running
docker service ls | grep cadvisor
RC=$?
if [ $RC != 0 ]; then
  docker service create --name cadvisor \
    --mount type=bind,source=/var/lib/docker/,destination=/var/lib/docker:ro \
    --mount type=bind,source=/var/run,destination=/var/run:rw \
    --mount type=bind,source=/sys,destination=/sys:ro \
    --mount type=bind,source=/,destination=/rootfs:ro \
    --label org.label-schema.group="$LABEL_GROUP" \
    --network $BACKEND_NET \
    --mode global \
    --publish 8080:8080 \
    $CADVISOR_IMAGE
fi

# Spin up official Elasticsearch Docker image
docker service ls | grep $ELK_ES_SERVER
RC=$?
if [ $RC != 0 ]; then
  docker service create \
    --endpoint-mode dnsrr \
    --mode global \
    --name $ELK_ES_SERVER \
    --network $BACKEND_NET \
    --update-delay 60s \
    --update-parallelism 1 \
    $ELASTICSEARCH_IMAGE \
    elasticsearch \
    -Des.discovery.zen.ping.multicast.enabled=false \
    -Des.discovery.zen.ping.unicast.hosts=$ELK_ES_SERVER \
    -Des.gateway.expected_nodes=3 \
    -Des.discovery.zen.minimum_master_nodes=2 \
    -Des.gateway.recover_after_nodes=2 \
    -Des.network.bind=_eth0:ipv4_
fi

docker service ls | grep "es-lb"
RC=$?
if [ $RC != 0 ]; then
  # Give ES time to come up and create cluster
  sleep 5m
  docker service create \
    --name "es-lb" \
    --network $BACKEND_NET \
    --publish 9200:9200 \
    -e BACKEND_SERVICE_NAME=$ELK_ES_SERVER \
    -e BACKEND_SERVICE_PORT="9200" \
    -e FRONTEND_SERVICE_PORT="9200" \
    mrlesmithjr/nginx-lb:ubuntu-tcp-lb
fi

# Spin up official Kibana Docker image
docker service ls | grep kibana
RC=$?
if [ $RC != 0 ]; then
  docker service create \
    --mode global \
    --name kibana \
    --network $BACKEND_NET \
    --publish 5601:5601 \
    -e ELASTICSEARCH_URL=http://$ELK_ES_SERVER:$ELK_ES_SERVER_PORT \
    $KIBANA_IMAGE
fi

License

MIT

Author Information

Larry Smith Jr.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].