
Telefonica / Prometheus Kafka Adapter

Licence: apache-2.0
Use Kafka as a remote storage database for Prometheus (remote write only)

Programming Languages

go

Projects that are alternatives of or similar to Prometheus Kafka Adapter

Librdkafka
The Apache Kafka C/C++ library
Stars: ✭ 5,617 (+3283.73%)
Mutual labels:  kafka, kafka-producer
Rafka
Kafka proxy with a simple API, speaking the Redis protocol
Stars: ✭ 49 (-70.48%)
Mutual labels:  kafka, kafka-producer
Quarkus Microservices Poc
Very simplified shop sales system made in a microservices architecture using quarkus
Stars: ✭ 16 (-90.36%)
Mutual labels:  kafka, prometheus
Debezium
Change data capture for a variety of databases. Please log issues at https://issues.redhat.com/browse/DBZ.
Stars: ✭ 5,937 (+3476.51%)
Mutual labels:  kafka, kafka-producer
Filodb
Distributed Prometheus time series database
Stars: ✭ 1,286 (+674.7%)
Mutual labels:  kafka, prometheus
Kq
Kafka-based Job Queue for Python
Stars: ✭ 530 (+219.28%)
Mutual labels:  kafka, kafka-producer
Kafka exporter
Kafka exporter for Prometheus
Stars: ✭ 996 (+500%)
Mutual labels:  kafka, prometheus
Qbusbridge
The Apache Kafka Client SDK
Stars: ✭ 272 (+63.86%)
Mutual labels:  kafka, kafka-producer
Karafka
Framework for Apache Kafka based Ruby and Rails applications development.
Stars: ✭ 1,223 (+636.75%)
Mutual labels:  kafka, kafka-producer
Kattlo Cli
Kattlo CLI Project
Stars: ✭ 58 (-65.06%)
Mutual labels:  kafka, kafka-producer
Zenko
Zenko is the open source multi-cloud data controller: own and keep control of your data on any cloud.
Stars: ✭ 353 (+112.65%)
Mutual labels:  kafka, prometheus
Neo4j Streams
Neo4j Kafka Integrations, Docs =>
Stars: ✭ 126 (-24.1%)
Mutual labels:  kafka, kafka-producer
Trubka
A CLI tool for Kafka
Stars: ✭ 296 (+78.31%)
Mutual labels:  kafka, kafka-producer
Books Recommendation
Books and videos for programmers to advance their skills, continuously updated (Programmer Books)
Stars: ✭ 558 (+236.14%)
Mutual labels:  kafka, prometheus
Kminion
KMinion is a feature-rich Prometheus exporter for Apache Kafka written in Go. It is lightweight and highly configurable so that it will meet your requirements.
Stars: ✭ 274 (+65.06%)
Mutual labels:  kafka, prometheus
Anotherkafkamonitor Akm
Another app which used to monitor the progress of Kafka Producer and Consumer
Stars: ✭ 36 (-78.31%)
Mutual labels:  kafka, kafka-producer
Bcmall
An e-commerce system built for teaching purposes. It covers complex B2B business, high-concurrency consumer-internet business and caching, with guidance on DDD and microservices; model-driven and data-driven design; the evolution path of large services, coding techniques, Linux and performance tuning; Docker/k8s, monitoring, log collection and middleware; frontend technologies and backend practice. Main stack: SpringBoot + JPA + Mybatis-plus + Antd + Vue3.
Stars: ✭ 188 (+13.25%)
Mutual labels:  kafka, prometheus
Kafka Ui
Open-Source Web GUI for Apache Kafka Management
Stars: ✭ 230 (+38.55%)
Mutual labels:  kafka, kafka-producer
Pretendyourexyzzy
A web clone of the card game Cards Against Humanity.
Stars: ✭ 1,069 (+543.98%)
Mutual labels:  kafka, kafka-producer
Kukulcan
A REPL for Apache Kafka
Stars: ✭ 103 (-37.95%)
Mutual labels:  kafka, kafka-producer

prometheus-kafka-adapter


Prometheus-kafka-adapter is a service which receives Prometheus metrics through remote_write, marshals them into JSON and sends them to Kafka.

output

It can write JSON or Avro-JSON messages to a Kafka topic, depending on the SERIALIZATION_FORMAT configuration variable.

JSON

{
  "timestamp": "1970-01-01T00:00:00Z",
  "value": "9876543210",
  "name": "up",

  "labels": {
    "__name__": "up",
    "label1": "value1",
    "label2": "value2"
  }
}

timestamp and value are reserved names and can't be used as label names. __name__ is a special label that holds the name of the metric; it is copied to the top level as name for convenience.

Avro JSON

The Avro-JSON serialization is the same. See the Avro schema.
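For reference, an Avro schema compatible with the JSON example above might look like the following. This is an illustrative sketch derived from the message shape, not a verbatim copy of the adapter's schema file:

```json
{
  "type": "record",
  "name": "Metric",
  "fields": [
    {"name": "timestamp", "type": "string"},
    {"name": "value", "type": "string"},
    {"name": "name", "type": "string"},
    {"name": "labels", "type": {"type": "map", "values": "string"}}
  ]
}
```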

configuration

prometheus-kafka-adapter

There is a docker image telefonica/prometheus-kafka-adapter:1.7.0 available on Docker Hub.

Prometheus-kafka-adapter listens for metrics coming from Prometheus and sends them to Kafka. This behaviour can be configured with the following environment variables:

  • KAFKA_BROKER_LIST: defines kafka endpoint and port, defaults to kafka:9092.
  • KAFKA_TOPIC: defines the kafka topic to be used, defaults to metrics. It can be a Go template; the metric labels are passed (as a map) to the template, e.g. metrics.{{ index . "__name__" }} to use a per-metric topic. Two template functions are available: replace ({{ index . "__name__" | replace "message" "msg" }}) and substring ({{ index . "__name__" | substring 0 5 }}).
  • KAFKA_COMPRESSION: defines the compression type to be used, defaults to none.
  • KAFKA_BATCH_NUM_MESSAGES: defines the number of messages to batch write, defaults to 10000.
  • SERIALIZATION_FORMAT: defines the serialization format, can be json, avro-json, defaults to json.
  • PORT: defines http port to listen, defaults to 8080, used directly by gin.
  • BASIC_AUTH_USERNAME: basic auth username for the receive endpoint; default is no basic auth.
  • BASIC_AUTH_PASSWORD: basic auth password for the receive endpoint; default is no basic auth.
  • LOG_LEVEL: defines log level for logrus, can be debug, info, warn, error, fatal or panic, defaults to info.
  • GIN_MODE: manage gin debug logging, can be debug or release.
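To illustrate the KAFKA_TOPIC templating, here is a self-contained Go sketch of how such a template could be evaluated with Go's text/template package. The helper implementations (replaceFn, substringFn, renderTopic) are our own assumptions for illustration; the adapter's actual helpers may differ in edge cases:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// replaceFn and substringFn sketch the two template functions the README
// mentions. In a Go template pipeline the piped value arrives as the LAST
// argument, hence the argument order (old, new, s) and (start, end, s).
func replaceFn(old, new, s string) string { return strings.ReplaceAll(s, old, new) }

func substringFn(start, end int, s string) string {
	if end > len(s) {
		end = len(s)
	}
	return s[start:end]
}

// renderTopic evaluates a topic template against the metric's label map.
func renderTopic(tmpl string, labels map[string]string) string {
	t := template.Must(template.New("topic").Funcs(template.FuncMap{
		"replace":   replaceFn,
		"substring": substringFn,
	}).Parse(tmpl))
	var b strings.Builder
	if err := t.Execute(&b, labels); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	labels := map[string]string{"__name__": "http_requests_total"}
	fmt.Println(renderTopic(`metrics.{{ index . "__name__" }}`, labels))
	// metrics.http_requests_total
	fmt.Println(renderTopic(`metrics.{{ index . "__name__" | substring 0 4 }}`, labels))
	// metrics.http
}
```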

To connect to Kafka over SSL, define the following additional environment variables:

  • KAFKA_SSL_CLIENT_CERT_FILE: Kafka SSL client certificate file, defaults to ""
  • KAFKA_SSL_CLIENT_KEY_FILE: Kafka SSL client certificate key file, defaults to ""
  • KAFKA_SSL_CLIENT_KEY_PASS: Kafka SSL client certificate key password (optional), defaults to ""
  • KAFKA_SSL_CA_CERT_FILE: Kafka SSL broker CA certificate file, defaults to ""

To connect to Kafka with SASL/SCRAM authentication, define the following additional environment variables:

  • KAFKA_SECURITY_PROTOCOL: protocol used by the Kafka client to communicate with the brokers; must be set if SASL is used, either plain or over SSL
  • KAFKA_SASL_MECHANISM: SASL mechanism to use for authentication, defaults to ""
  • KAFKA_SASL_USERNAME: SASL username for use with the PLAIN and SASL-SCRAM-.. mechanisms, defaults to ""
  • KAFKA_SASL_PASSWORD: SASL password for use with the PLAIN and SASL-SCRAM-.. mechanisms, defaults to ""
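Putting the SSL and SASL settings together, a deployment might look like the following docker-compose fragment. All values here are placeholders, and the security-protocol and mechanism names should be checked against your broker setup:

```yaml
# Illustrative docker-compose fragment; adjust broker address,
# credentials and certificate paths to your environment.
services:
  prometheus-kafka-adapter:
    image: telefonica/prometheus-kafka-adapter:1.7.0
    environment:
      KAFKA_BROKER_LIST: "kafka:9093"
      KAFKA_TOPIC: "metrics"
      KAFKA_SECURITY_PROTOCOL: "SASL_SSL"
      KAFKA_SASL_MECHANISM: "SCRAM-SHA-256"
      KAFKA_SASL_USERNAME: "adapter"
      KAFKA_SASL_PASSWORD: "changeme"
      KAFKA_SSL_CA_CERT_FILE: "/certs/ca.pem"
    ports:
      - "8080:8080"
```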

When deployed in a Kubernetes cluster using Helm with a Kafka external to the cluster, it might be necessary to resolve the Kafka hostnames locally (this fills the /etc/hosts file of the container). Use a custom values.yaml file with a hostAliases section (as mentioned in the default values.yaml).
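A values.yaml fragment for this could look like the following; the IP and hostname are placeholders:

```yaml
hostAliases:
  - ip: "10.0.0.10"
    hostnames:
      - "kafka-broker-1.internal"
```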

prometheus

Prometheus needs a remote_write URL configured, pointing to the /receive endpoint on the host and port where the prometheus-kafka-adapter service is running. For example:

remote_write:
  - url: "http://prometheus-kafka-adapter:8080/receive"

When deployed in a Kubernetes cluster using Helm with an external Prometheus, it might be necessary to expose the prometheus-kafka-adapter input port as a node port. Use a custom values.yaml file to set service.type: NodePort and service.nodeport: <PortNumber> (see comments in the default values.yaml).
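For example, such a values.yaml override might look like this; the key names follow the service.type and service.nodeport settings mentioned above, and the port number is a placeholder:

```yaml
service:
  type: NodePort
  nodeport: 30080
```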

development

Building requires librdkafka to be available on the build host. It is typically available from distribution package archives, but can also be downloaded and built from https://github.com/edenhill/librdkafka.git

go test
go build

contributing

With issues:

  • Use the search tool before opening a new issue.
  • Please provide source code and commit sha if you find a bug.
  • Review existing issues and provide feedback or react to them.

With pull requests:

  • Open your pull request against master.
  • It should pass all tests in the continuous integration pipeline (TravisCI).
  • You should add/modify tests to cover your proposed code changes.
  • If your pull request contains a new feature, please document it in this README.

license

Copyright 2018 Telefónica

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].