sermilrod / kafka-elk-docker-compose

License: MIT
Deploy ELK stack and kafka with docker-compose

Projects that are alternatives of or similar to kafka-elk-docker-compose

elk-stack
ELK Stack (Elasticsearch, Logstash & Kibana)
Stars: ✭ 13 (-83.33%)
Mutual labels:  logstash, filebeat
Elkstack
The config files and docker-compose.yml files of Dockerized ELK Stack
Stars: ✭ 96 (+23.08%)
Mutual labels:  logstash, filebeat
k8s-log
Container log collection suite.
Stars: ✭ 15 (-80.77%)
Mutual labels:  logstash, filebeat
filebeat.py
A Python version of Filebeat
Stars: ✭ 48 (-38.46%)
Mutual labels:  logstash, filebeat
Filebeat Kubernetes
Filebeat container, alternative to fluentd used to ship kubernetes cluster and pod logs
Stars: ✭ 147 (+88.46%)
Mutual labels:  logstash, filebeat
skalogs-bundle
Open Source data and event driven real time Monitoring and Analytics Platform
Stars: ✭ 16 (-79.49%)
Mutual labels:  logstash, zookeeper
Elk
Setting up an ELK log analysis platform.
Stars: ✭ 688 (+782.05%)
Mutual labels:  logstash, filebeat
elastic-stack
A complete documentation on how to install Elastic Stack on Ubuntu 16.04 Server ASAP 😎
Stars: ✭ 12 (-84.62%)
Mutual labels:  logstash, filebeat
tutorials
Tutorials
Stars: ✭ 80 (+2.56%)
Mutual labels:  logstash, filebeat
Elk Hole
Elasticsearch, Logstash and Kibana configuration for Pi-hole visualization
Stars: ✭ 136 (+74.36%)
Mutual labels:  logstash, filebeat
docker-elk-stack
The ELK stack Docker containerization (Elasticsearch, Logstash and Kibana)
Stars: ✭ 20 (-74.36%)
Mutual labels:  logstash, filebeat
Synesis lite suricata
Suricata IDS/IPS log analytics using the Elastic Stack.
Stars: ✭ 167 (+114.1%)
Mutual labels:  logstash, filebeat
MeetU
Application built on Elasticsearch and Spring Boot microservices (synchronous service)
Stars: ✭ 22 (-71.79%)
Mutual labels:  logstash, filebeat
seahorse
ELKFH - Elastic, Logstash, Kibana, Filebeat and Honeypot (HTTP, HTTPS, SSH, RDP, VNC, Redis, MySQL, MONGO, SMB, LDAP)
Stars: ✭ 31 (-60.26%)
Mutual labels:  logstash, filebeat
dissect-tester
Simple API/UI for testing filebeat dissect patterns against a collection of sample log lines.
Stars: ✭ 58 (-25.64%)
Mutual labels:  logstash, filebeat
Aliware Kafka Demos
Demo projects for connecting various clients to Alibaba Cloud Message Queue for Kafka
Stars: ✭ 279 (+257.69%)
Mutual labels:  logstash, filebeat
ELK-Hunting
Threat Hunting with ELK Workshop (InfoSecWorld 2017)
Stars: ✭ 58 (-25.64%)
Mutual labels:  logstash, filebeat
S1EM
This project is a SIEM with SIRP and Threat Intel, all in one.
Stars: ✭ 270 (+246.15%)
Mutual labels:  logstash, filebeat
Vagrant Elastic Stack
Giving the Elastic Stack a try in Vagrant
Stars: ✭ 131 (+67.95%)
Mutual labels:  logstash, filebeat
Dockerfile
Some personally made Dockerfiles
Stars: ✭ 2,021 (+2491.03%)
Mutual labels:  logstash, filebeat

kafka-elk-docker-compose

This repository deploys an ELK stack with docker-compose, with a Kafka cluster buffering the log collection process. It aims to make your life easier when testing a similar architecture. Using this repository as a production-ready solution for this stack is highly discouraged.

Setup

  1. Install Docker engine
  2. Install Docker compose
  3. Clone this repository:
    git clone [email protected]:sermilrod/kafka-elk-docker-compose.git
    
  4. Configure File Descriptors and MMap. To do so, type the following command:
    sysctl -w vm.max_map_count=262144
    
    Be aware that this sysctl setting vanishes when your machine restarts. If you want to make it permanent, place the vm.max_map_count setting in your /etc/sysctl.conf.
  5. Create the elasticsearch volume:
    $ cd kafka-elk-docker-compose
    $ mkdir esdata
    By default the docker-compose.yml uses esdata as the host volume path. If you want to use another name, you have to edit the docker-compose.yml file and create your own structure.
  6. Create the apache-logs folder:
    $ cd kafka-elk-docker-compose
    $ mkdir apache-logs
    This repository uses a default Apache container to generate logs, and Filebeat requires it to be present. If you do not want to use this Apache container, or you want to add new components to the system, just use the docker-compose.yml as a base for your use case.
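Steps 4 to 6 above can be sketched as a single shell session. This is a sketch, not part of the repository: it assumes the repository is already cloned, and the sysctl call needs root, so it falls back to a reminder when running unprivileged.

```shell
# Sketch of setup steps 4-6; assumes the repository is already cloned.
set -e
cd kafka-elk-docker-compose 2>/dev/null || true  # stay put if already inside

# Elasticsearch needs a higher mmap count; this requires root, so print a
# reminder instead of failing when running unprivileged.
sysctl -w vm.max_map_count=262144 2>/dev/null \
  || echo "note: rerun as root to set vm.max_map_count"

# Host paths referenced by the docker-compose.yml volumes.
mkdir -p esdata apache-logs
echo "setup done"
```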

Usage

Deploy your Kafka+ELK Stack using docker-compose:

$ docker-compose up -d

By default the Apache container generating the logs is exposed on port 8888. You can perform some requests to generate a few log entries for later visualization in Kibana:

$ curl http://localhost:8888/

The full stack takes around a minute to become fully functional, as there are dependencies between services. After that you should be able to reach Kibana at http://localhost:5601

Before you can see the previously generated log entries, you have to configure an index pattern in Kibana. Make sure you configure it with these two options:

  • Index name or pattern: logstash-*
  • Time-field name: @timestamp
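The logstash-* pattern is deliberately broad: it covers both daily index families that the output section of logstash.conf creates (logstash-apache-access-* and logstash-apache-error-*). A quick shell illustration of the match, using made-up dates:

```shell
# Example daily index names as produced by the logstash.conf output section;
# the dates are invented for illustration.
for idx in logstash-apache-access-2020.01.15 logstash-apache-error-2020.01.15; do
  case "$idx" in
    logstash-*) echo "$idx: matched by logstash-*" ;;
    *)          echo "$idx: NOT matched" ;;
  esac
done
```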

Configuration

The docker-compose.yml deploys an ELK solution using Kafka as a buffer for log collection. This repository ships with the minimal amount of configuration needed to make the stack work. The default config files are:

filebeat.yml:

filebeat.prospectors:
- paths:
    - /apache-logs/access.log
  tags:
    - testenv
    - apache_access
  input_type: log
  document_type: apache_access
  fields_under_root: true

- paths:
    - /apache-logs/error.log
  tags:
    - testenv
    - apache_error
  input_type: log
  document_type: apache_error
  fields_under_root: true

output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]
  topic: 'log'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000

As you can see, Filebeat is configured to read the default Apache logs and push them to Kafka. Any addition or change to the Filebeat agent should be performed in this config file.
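With document_type and fields_under_root: true set as above, each Kafka message carries type and tags at the top level of the JSON event, which is exactly what the logstash.conf filters key on. A hand-written illustration of the shape (field values are invented; real Filebeat events include additional beat metadata):

```shell
# Illustrative shape of a Filebeat event on the 'log' topic.
# Values are invented; real events carry extra beat metadata fields.
cat <<'EOF'
{
  "@timestamp": "2020-01-15T12:00:00.000Z",
  "type": "apache_access",
  "tags": ["testenv", "apache_access"],
  "message": "127.0.0.1 - - [15/Jan/2020:12:00:00 +0000] \"GET / HTTP/1.1\" 200 45"
}
EOF
```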

logstash.conf:

input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
    client_id => "logstash"
    group_id => "logstash"
    consumer_threads => 3
    topics => ["log"]
    codec => "json"
    tags => ["log", "kafka_source"]
    type => "log"
  }
}

filter {
  if [type] == "apache_access" {
    grok {
      match => { "message" => "%{COMMONAPACHELOG}" }
    }
    date {
      match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
      remove_field => ["timestamp"]
    }
  }
  if [type] == "apache_error" {
    grok {
      match => { "message" => "%{COMMONAPACHELOG}" }
    }
    date {
      match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
      remove_field => ["timestamp"]
    }
  }
}

output {
  if [type] == "apache_access" {
    elasticsearch {
         hosts => ["elasticsearch:9200"]
         index => "logstash-apache-access-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "apache_error" {
    elasticsearch {
         hosts => ["elasticsearch:9200"]
         index => "logstash-apache-error-%{+YYYY.MM.dd}"
    }
  }
}

As you can see, Logstash is configured as a Kafka consumer to parse the Apache logs and insert them into Elasticsearch. Any addition or change to the Logstash behaviour should be performed in this config file.
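As a rough illustration of what the COMMONAPACHELOG grok pattern extracts, here is a shell sketch over a sample access-log line. The regexes below are simplified stand-ins, not the real grok pattern:

```shell
# Sample Apache common-log line; the fields pulled out loosely mirror what
# grok's COMMONAPACHELOG extracts (clientip, timestamp, response).
line='127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326'

clientip=$(printf '%s\n' "$line" | grep -oE '^[0-9.]+')
timestamp=$(printf '%s\n' "$line" | grep -oE '\[[^]]+\]' | tr -d '[]')
response=$(printf '%s\n' "$line" | grep -oE '" [0-9]{3} ' | grep -oE '[0-9]{3}')

echo "clientip=$clientip"
echo "timestamp=$timestamp"   # this is the format the date filter above parses
echo "response=$response"
```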

kibana.yml:

server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
xpack.monitoring.ui.container.elasticsearch.enabled: false

Note that both the Kibana and Elasticsearch Docker images enable the X-Pack plugin by default, a feature you have to pay for after the trial period. This repository disables this paid feature by default. Any addition or change to the Kibana behaviour should be performed in this config file.

Other configuration:

Each component of the stack can be configured in far more depth. It is up to you and your use case to extend the configuration files and adapt the docker-compose.yml accordingly.
