shazChaudhry / Docker Elastic

Deploy the Elastic Stack in a Docker Swarm cluster. Ship application logs and metrics to Elasticsearch using Beats and the GELF log driver.

Build Status on Travis

User story

As a DevOps team member, I want to install the Elastic Stack (v7.9.1 by default) so that all application and system logs are collected centrally for searching, visualizing, analyzing, and reporting purposes.

Elastic products

Assumptions

  • Infrastructure is set up in Docker Swarm mode
  • All containerized custom applications are designed to start with the GELF log driver in order to send logs to the Elastic Stack
  • NOTE: when Filebeat is to be run in "Docker for AWS", you will need to turn off the auditd module in the Filebeat config; otherwise, the Filebeat service will fail to run (a hedged sketch follows this list)
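
For the auditd note above, here is a minimal sketch of turning the module off before deploying, assuming the module is enabled somewhere in the Filebeat configuration shipped with this repo (the file name and layout are assumptions):

  # Comment out any auditd module line in the Filebeat config before deploying
  # on "Docker for AWS"; adjust the path to wherever the config actually lives
  sed -i.bak '/module: auditd/s/^/#/' filebeat.yml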

Architecture

The architecture used is shown in the table below.

High level design | In scope | Not in scope
Elastic Stack | Only Beats for log files and metrics are used. All logs and metrics are shipped directly to Elasticsearch in this repo. 2x Elasticsearch, 1x apm-server, and 1x Kibana are used. | Ingest nodes are not used
Elastic Stack | All containerized custom applications are designed to start with the GELF log driver in order to send logs to the Elastic Stack | -

For the full list of free features that are included in the basic license, see: https://www.elastic.co/subscriptions

Prerequisite

  • One Docker Swarm mode cluster allocated to running the Elastic Stack. This cluster must have at least two nodes: 1x master and 1x worker. On each Elasticsearch cluster node, the maximum map count must be raised as follows, which is required to run Elasticsearch (a quick verification check follows this list):
    • sudo sysctl -w vm.max_map_count=262144
    • echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf (to persist across reboots; note that sudo echo ... >> /etc/sysctl.conf would fail because the redirection runs in the unprivileged shell)
  • One Docker Swarm mode cluster allocated to running containerized custom applications. This cluster must have at least one node: 1x master
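
After applying both commands, a quick check on each Elasticsearch node confirms the setting took effect:

  # Should print: vm.max_map_count = 262144
  sysctl vm.max_map_count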

Get docker compose files

You will need these files to deploy Elasticsearch, Logstash, Kibana, and Beats. First, SSH into the master node of the Docker Swarm cluster allocated to running the Elastic Stack and clone this repo with the following commands:

  • alias git='docker run -it --rm --name git -u $(id -u ${USER}):$(id -g ${USER}) -v $PWD:/git -w /git alpine/git' (This alias is only required if git is not already installed on your machine; it lets you clone the repo using a git container)
  • git version
  • git clone https://github.com/shazChaudhry/docker-elastic.git
  • cd docker-elastic
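
If the clone succeeded, the compose files referenced in the following sections should now be present:

  # List the stack definitions used below
  ls -1 docker-compose.yml filebeat-docker-compose.yml metricbeat-docker-compose.yml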

Deploy Elastic Stack

  • SSH into the master node of the Docker Swarm cluster allocated to running the Elastic Stack. Deploy the Elastic Stack by running the following commands:
    • export ELASTIC_VERSION=7.9.1
    • export ELASTICSEARCH_USERNAME=elastic
    • export ELASTICSEARCH_PASSWORD=changeme
    • export INITIAL_MASTER_NODES=node1 (See Important discovery and cluster formation settings)
    • export ELASTICSEARCH_HOST=node1 (node1 is the default value if you created the VMs with the provided Vagrantfile; otherwise, change this value to one of the VMs in your swarm cluster)
    • docker network create --driver overlay --attachable elastic
    • docker stack deploy --compose-file docker-compose.yml elastic
      • You will need to be a little patient and wait about 5 minutes for the stack to become ready
      • Assuming you have only two VMs, this will deploy a reverse proxy, Logstash, Kibana, and 2x Elasticsearch instances in a master / data node configuration. Please note that Elasticsearch is configured to start as a global service, which means Elasticsearch data nodes will be scaled out automatically as soon as new VMs are added to the Swarm cluster. Here is an explanation of the various Elasticsearch cluster node roles
  • Check status of the stack services by running the following commands:
    • docker stack services elastic
    • docker stack ps --no-trunc elastic (address any error reported at this point)
    • curl -XGET -u ${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD} ${ELASTICSEARCH_HOST}':9200/_cat/health?v&pretty' (Inspect the cluster health status, which should be green. It should also show 2x nodes in total, assuming you only have two VMs in the cluster; a small polling helper is sketched after this list)
  • If Beats are also to be installed in this same Docker Swarm cluster, use the instructions provided in the next section
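
Rather than re-running the health check by hand, a small polling helper (a sketch, assuming the environment variables exported above are still set in the current shell) can wait for the cluster to go green:

  # Poll cluster health every 10 seconds until it reports green
  until curl -s -u ${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD} \
      "http://${ELASTICSEARCH_HOST}:9200/_cluster/health" | grep -q '"status":"green"'; do
    echo "Waiting for the Elasticsearch cluster to become green..."
    sleep 10
  done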

Deploy Beats

SSH into the master node of the Docker Swarm cluster allocated to running containerized custom applications and Beats. Clone this repo and change directory as per the instructions above.

Execute the following commands to deploy filebeat and metricbeat:

  • export ELASTIC_VERSION=7.9.1
  • export ELASTICSEARCH_USERNAME=elastic
  • export ELASTICSEARCH_PASSWORD=changeme
  • export ELASTICSEARCH_HOST=node1 (node1 is the default value if you created the VMs with the provided Vagrantfile; otherwise, change this value to your Elasticsearch host)
  • export KIBANA_HOST=node1 (node1 is the default value if you created the VMs with the provided Vagrantfile; otherwise, change this value to your Kibana host)
  • docker network create --driver overlay --attachable elastic
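
Before deploying either Beats stack, you can confirm that the attachable overlay network exists:

  # Should list the 'elastic' overlay network created above
  docker network ls --filter name=elastic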

Filebeat

  • docker stack deploy --compose-file filebeat-docker-compose.yml filebeat (Filebeat starts as a global service on all Docker Swarm nodes. It is only configured to pick up container logs for all services at '/var/lib/docker/containers/*/*.log' (container stdout and stderr logs) and forward them to Elasticsearch. These logs will then be available under the filebeat index in Kibana. You will need to add additional configuration for other log locations; a hedged fragment follows this list. You may wish to read Docker Reference Architecture: Docker Logging Design and Best Practices)
  • Running the following command should print the Elasticsearch indices, and one of the rows should show filebeat-*
    • curl -XGET -u ${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD} ${ELASTICSEARCH_HOST}':9200/_cat/indices?v&pretty'
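
As a sketch of the additional configuration mentioned above, another log location could be shipped by adding a Filebeat input. The fragment below assumes the filebeat.yml used by the compose file does not already define a filebeat.inputs section; the path is illustrative:

  # filebeat.yml fragment (hypothetical): ship an extra host log location
  filebeat.inputs:
    - type: log
      paths:
        - /var/log/myapp/*.log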

Metricbeat

  • docker stack deploy --compose-file metricbeat-docker-compose.yml metricbeat (Metricbeat starts as a global service on all Docker Swarm nodes. It sends system and Docker stats from each node to Elasticsearch. These stats will then be available under the metricbeat index in Kibana)
  • Running the following command should print the Elasticsearch indices, and one of the rows should show metricbeat-*
    • curl -XGET -u ${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD} ${ELASTICSEARCH_HOST}':9200/_cat/indices?v&pretty'
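
A quick count against the new index confirms that metrics are flowing:

  # Number of metricbeat documents indexed so far; the count should grow over time
  curl -XGET -u ${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD} ${ELASTICSEARCH_HOST}':9200/metricbeat-*/_count?pretty'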

Testing

Wait until all of the stacks above are up and running, then start a Jenkins container on a node where Filebeat is running:

  • docker container run -d --rm --name jenkins -p 8080:8080 jenkinsci/blueocean
  • Log in at http://[KIBANA_HOST], which should show the Management tab
    • username = elastic
    • password = changeme
  • On the Kibana Management tab, configure an index pattern (if not already done automatically)
    • Index name or pattern = filebeat-*
    • Time-field name = @timestamp
  • Click on the Kibana Discover tab to view containers' console logs (including Jenkins) under the filebeat-* index (a curl sanity check is sketched after the screenshot). Here is a screenshot showing Jenkins container logs:

Jenkins Container logs
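
The same check can be made from the shell; a query against the filebeat indices (reusing the variables exported earlier) should return Jenkins log lines:

  # Fetch one matching document from the filebeat indices
  curl -XGET -u ${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD} ${ELASTICSEARCH_HOST}':9200/filebeat-*/_search?q=jenkins&size=1&pretty'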

Sending messages to Logstash over gelf

The Logstash pipeline is configured to accept messages sent with the GELF log driver. GELF is one of the plugins mentioned in Docker Reference Architecture: Docker Logging Design and Best Practices. Start an application which sends messages with GELF; an example follows:

  • Stop the Jenkins container started earlier:
    • docker container stop jenkins
  • Start the Jenkins container again, but with the GELF log driver this time:
    • export LOGSTASH_HOST=node1
    • docker container run -d --rm --name jenkins -p 8080:8080 --log-driver=gelf --log-opt gelf-address=udp://${LOGSTASH_HOST}:12201 jenkinsci/blueocean
    • Note that --log-driver=gelf --log-opt gelf-address=udp://${LOGSTASH_HOST}:12201 sends the container's console logs to the Elastic Stack
  • On the Kibana Management tab, configure an index pattern
    • Index name or pattern = logstash-*
    • Time-field name = @timestamp
  • Click on the Discover tab and select the logstash-* index in order to see logs sent to Elasticsearch via Logstash. Here is a screenshot showing Jenkins container logs:

Jenkins Container logs of Gelf plugin

Here is another example:

  • docker container run --rm -it --log-driver=gelf --log-opt gelf-address=udp://${LOGSTASH_HOST}:12201 alpine ping 8.8.8.8
  • Log in to Kibana and you should see traffic coming into Elasticsearch under the logstash-* index
  • You can use syslog as well, and TLS if you wish to add your own certs
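
The GELF entry point used in both examples is just a Logstash input. A minimal pipeline sketch is shown below; this repo ships its own pipeline configuration, so the exact settings here are illustrative:

  input {
    gelf {
      port => 12201    # matches the gelf-address used by the containers above
    }
  }
  output {
    elasticsearch {
      hosts => ["elasticsearch:9200"]   # service name on the 'elastic' overlay network
    }
  }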

Testing with APM Java Agent

Follow these instructions to build a Java app that we will use for APM:

WIP
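
Until those instructions land, here is a hedged sketch of attaching the Elastic APM Java agent to any JVM application and pointing it at the apm-server deployed with the stack. The agent jar path, service name, and the assumption that apm-server listens on port 8200 of ELASTICSEARCH_HOST are all illustrative:

  # Attach the APM agent at JVM startup; the -Delastic.apm.* properties are
  # standard agent settings
  java -javaagent:/path/to/elastic-apm-agent.jar \
       -Delastic.apm.service_name=my-java-app \
       -Delastic.apm.server_urls=http://${ELASTICSEARCH_HOST}:8200 \
       -jar my-app.jar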
