
lomik / Carbon Clickhouse

License: MIT
Graphite metrics receiver with ClickHouse as storage

Programming Languages

go
31211 projects - #10 most used programming language

Projects that are alternatives of or similar to Carbon Clickhouse

Carbonapi
Implementation of graphite API (graphite-web) in golang
Stars: ✭ 243 (+74.82%)
Mutual labels:  timeseries, graphite, carbon
Go Carbon
Golang implementation of Graphite/Carbon server with classic architecture: Agent -> Cache -> Persister
Stars: ✭ 713 (+412.95%)
Mutual labels:  timeseries, graphite, carbon
Hastic Server
Hastic data management server for analyzing patterns and anomalies from Grafana
Stars: ✭ 292 (+110.07%)
Mutual labels:  timeseries, graphite
Graphouse
Graphouse allows you to use ClickHouse as a Graphite storage.
Stars: ✭ 241 (+73.38%)
Mutual labels:  clickhouse, graphite
puppet-graphite
Puppet module for graphite monitoring tools
Stars: ✭ 67 (-51.8%)
Mutual labels:  graphite, carbon
Influxgraph
Graphite InfluxDB backend. InfluxDB storage finder / plugin for Graphite API.
Stars: ✭ 87 (-37.41%)
Mutual labels:  timeseries, graphite
inspector-metrics
Typescript metrics / monitoring library
Stars: ✭ 19 (-86.33%)
Mutual labels:  graphite, carbon
Carbon Relay Ng
Fast carbon relay+aggregator with admin interfaces for making changes online - production ready
Stars: ✭ 429 (+208.63%)
Mutual labels:  graphite, carbon
Graphyte
Python 3 compatible library to send data to a Graphite metrics server (Carbon)
Stars: ✭ 59 (-57.55%)
Mutual labels:  graphite, carbon
Facette
Time series data visualization software
Stars: ✭ 1,115 (+702.16%)
Mutual labels:  timeseries, graphite
Carbon
Carbon is one of the components of Graphite, and is responsible for receiving metrics over the network and writing them down to disk using a storage backend.
Stars: ✭ 1,435 (+932.37%)
Mutual labels:  graphite, carbon
Kaggle Web Traffic
1st place solution to the Kaggle Web Traffic Time Series Forecasting competition
Stars: ✭ 1,641 (+1080.58%)
Mutual labels:  timeseries
Lstm Autoencoders
Anomaly detection for streaming data using autoencoders
Stars: ✭ 113 (-18.71%)
Mutual labels:  timeseries
Clickhouse Rs
Asynchronous ClickHouse client library for Rust programming language.
Stars: ✭ 113 (-18.71%)
Mutual labels:  clickhouse
Tsmoothie
A python library for time-series smoothing and outlier detection in a vectorized way.
Stars: ✭ 109 (-21.58%)
Mutual labels:  timeseries
Influxdb Client Csharp
InfluxDB 2.0 C# Client
Stars: ✭ 130 (-6.47%)
Mutual labels:  timeseries
Icinga2
Icinga is a monitoring system which checks the availability of your network resources, notifies users of outages, and generates performance data for reporting.
Stars: ✭ 1,670 (+1101.44%)
Mutual labels:  graphite
Flink Learning
Flink learning blog. http://www.54tianzhisheng.cn/ Covers Flink fundamentals, concepts, internals, hands-on practice, performance tuning, and source-code analysis, with study examples for Flink Connectors, Metrics, Libraries, the DataStream API, and Table API & SQL, plus large production case studies (PV/UV stats, log storage, real-time deduplication of tens of billions of records, monitoring and alerting). The author also maintains the column "Big Data Real-Time Computing Engine Flink in Practice and Performance Optimization".
Stars: ✭ 11,378 (+8085.61%)
Mutual labels:  clickhouse
Sqli
ORM SQL interface: Criteria, CriteriaBuilder, ResultMapBuilder
Stars: ✭ 1,644 (+1082.73%)
Mutual labels:  clickhouse
Timeseriesadmin
Administration panel and querying interface for InfluxDB databases. (Electron app / Docker container)
Stars: ✭ 107 (-23.02%)
Mutual labels:  timeseries


carbon-clickhouse

Graphite metrics receiver with ClickHouse as storage

Production status

The latest releases are stable and ready for production use

TL;DR

A preconfigured docker-compose setup is available

Build

# build binary
git clone https://github.com/lomik/carbon-clickhouse.git
cd carbon-clickhouse
make
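
Once built, you can dump the default config, adjust it, and start the daemon; the config path below is illustrative, not a project default:

# generate a config, edit it, then validate and run
./carbon-clickhouse -config-print-default > /etc/carbon-clickhouse/carbon-clickhouse.conf
./carbon-clickhouse -check-config -config /etc/carbon-clickhouse/carbon-clickhouse.conf
./carbon-clickhouse -config /etc/carbon-clickhouse/carbon-clickhouse.conf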

ClickHouse configuration

  1. Add a graphite_rollup section to ClickHouse's config.xml (a minimal sample is sketched after the table definitions below). You can use carbon-schema-to-clickhouse to generate the rollup XML from your Graphite storage-schemas.conf.

  2. Create the tables:

CREATE TABLE graphite ( 
  Path String,  
  Value Float64,  
  Time UInt32,  
  Date Date,  
  Timestamp UInt32
) ENGINE = GraphiteMergeTree('graphite_rollup')
PARTITION BY toYYYYMM(Date)
ORDER BY (Path, Time);

-- optional table for faster metric search
CREATE TABLE graphite_index (
  Date Date,
  Level UInt32,
  Path String,
  Version UInt32
) ENGINE = ReplacingMergeTree(Version)
PARTITION BY toYYYYMM(Date)
ORDER BY (Level, Path, Date);

-- optional table for storing Graphite tags
CREATE TABLE graphite_tagged (
  Date Date,
  Tag1 String,
  Path String,
  Tags Array(String),
  Version UInt32
) ENGINE = ReplacingMergeTree(Version)
PARTITION BY toYYYYMM(Date)
ORDER BY (Tag1, Path, Date);
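
A minimal graphite_rollup sketch for config.xml, assuming the default GraphiteMergeTree column names used above (Path/Time/Value/Timestamp); the retention values are illustrative, not project defaults:

<graphite_rollup>
    <default>
        <function>avg</function>
        <retention>
            <age>0</age>            <!-- recent data: -->
            <precision>60</precision>  <!-- keep 60s resolution -->
        </retention>
        <retention>
            <age>2592000</age>      <!-- after 30 days: -->
            <precision>3600</precision>  <!-- roll up to 1h -->
        </retention>
    </default>
</graphite_rollup>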

See the GraphiteMergeTree documentation for details.

You can also create replicated tables; see the ClickHouse documentation on data replication.
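
A hedged sketch of a replicated variant of the points table (the ZooKeeper path and the {shard}/{replica} macros are illustrative placeholders for your cluster layout):

CREATE TABLE graphite (
  Path String,
  Value Float64,
  Time UInt32,
  Date Date,
  Timestamp UInt32
) ENGINE = ReplicatedGraphiteMergeTree('/clickhouse/tables/{shard}/graphite', '{replica}', 'graphite_rollup')
PARTITION BY toYYYYMM(Date)
ORDER BY (Path, Time);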

Configuration

$ carbon-clickhouse -help
Usage of carbon-clickhouse:
  -check-config=false: Check config and exit
  -config="": Filename of config
  -config-print-default=false: Print default config
  -version=false: Print version

Default configuration:

[common]
# Prefix for storing all internal carbon-clickhouse graphs. Supported macros: {host}
metric-prefix = "carbon.agents.{host}"
# Endpoint for storing internal carbon metrics. Valid values: "" or "local", "tcp://host:port", "udp://host:port"
metric-endpoint = "local"
# Interval for storing internal metrics. Like CARBON_METRIC_INTERVAL
metric-interval = "1m0s"
# GOMAXPROCS
max-cpu = 1

[logging]
# "stderr", "stdout" can be used as file name
file = "/var/log/carbon-clickhouse/carbon-clickhouse.log"
# Logging level. Valid values: "debug", "info", "warn", "error"
level = "info"

[data]
# Folder for buffering received data
path = "/data/carbon-clickhouse/"
# Rotation (and upload) of the buffer file is triggered by size and by interval
# Rotate (and upload) when the file reaches this size (in bytes; k, m and g units can also be used)
# chunk-max-size = '512m'
chunk-max-size = 0
# Rotate (and upload) the file at this interval
# Decrease chunk-interval to minimize the lag between receiving a point and storing it
chunk-interval = "1s"
# Auto-increase the chunk interval if the number of unprocessed files grows
# Example: set the chunk interval to 10s if the unhandled file count >= 5, and to 60s if it is >= 20:
# chunk-auto-interval = "5:10s,20:60s"
chunk-auto-interval = ""

# Compression algorithm to use when storing temporary files.
# Might be useful to reduce space usage when ClickHouse is unavailable for an extended period of time.
# Currently supported: none, lz4
compression = "none"

# Compression level to use.
# For "lz4" 0 means use normal LZ4, >=1 use LZ4HC with this depth (the higher - the better compression, but slower)
compression-level = 0

[upload.graphite]
type = "points"
table = "graphite"
threads = 1
url = "http://localhost:8123/"
# compress-data enables gzip compression while sending to clickhouse
compress-data = true
timeout = "1m0s"
# save a zero value to the Timestamp column (for points and points-reverse tables)
zero-timestamp = false

[upload.graphite_index]
type = "index"
table = "graphite_index"
threads = 1
url = "http://localhost:8123/"
timeout = "1m0s"
cache-ttl = "12h0m0s"
# Store a hash of the metric in memory instead of the full metric name
# Allowed values: "" (disabled) or "city64"
hash = ""
# Set to true to disable the daily index (default is false)
disable-daily-index = false

# # You can define additional upload destinations of any supported type:
# # - points
# # - index
# # - tagged (is described below)
# # - points-reverse (same scheme as points, but path 'a1.b2.c3' stored as 'c3.b2.a1')

# # For uploaders with types "points" and "points-reverse", data can be ignored using patterns. E.g.
# [upload.graphite]
# type = "graphite"
# table = "graphite.points"
# threads = 1
# url = "http://localhost:8123/"
# timeout = "30s"
# ignored-patterns = [
#     "a1.b2.*.c3",
# ]

# # Extra table which can be used as an index for tagged series.
# # You can also avoid writing tags for some metrics
# # via ignored-tagged-metrics, as in the example below.
# [upload.graphite_tagged]
# type = "tagged"
# table = "graphite_tagged"
# threads = 1
# url = "http://localhost:8123/"
# timeout = "1m0s"
# cache-ttl = "12h0m0s"
# ignored-tagged-metrics = [
#     "a.b.c.d",  # all tags (but __name__) will be ignored for metrics like a.b.c.d?tagName1=tagValue1&tagName2=tagValue2...
#     "*",  # all tags (but __name__) will be ignored for all metrics; this is the only special case with wildcards
# ]

[udp]
listen = ":2003"
enabled = true
# drop received point if timestamp > now + value. 0 - don't drop anything
drop-future = "0s"
# drop received point if timestamp < now - value. 0 - don't drop anything
drop-past = "0s"
# drop metrics with names longer than this value. 0 - don't drop anything
drop-longer-than = 0

[tcp]
listen = ":2003"
enabled = true
drop-future = "0s"
drop-past = "0s"
drop-longer-than = 0

[pickle]
listen = ":2004"
enabled = true
drop-future = "0s"
drop-past = "0s"
drop-longer-than = 0

# https://github.com/lomik/carbon-clickhouse/blob/master/grpc/carbon.proto
[grpc]
listen = ":2005"
enabled = false
drop-future = "0s"
drop-past = "0s"
drop-longer-than = 0

[prometheus]
listen = ":2006"
enabled = false
drop-future = "0s"
drop-past = "0s"
drop-longer-than = 0

[telegraf_http_json]
listen = ":2007"
enabled = false
drop-future = "0s"
drop-past = "0s"
drop-longer-than = 0
# the character used to join a telegraf metric and field (the default is "_" for historical reasons and Prometheus compatibility)
concat = "."

# Golang pprof + some extra locations
#
# Last 1000 points dropped by "drop-future", "drop-past" and "drop-longer-than" rules:
# /debug/receive/tcp/dropped/
# /debug/receive/udp/dropped/
# /debug/receive/pickle/dropped/
# /debug/receive/grpc/dropped/
# /debug/receive/prometheus/dropped/
# /debug/receive/telegraf_http_json/dropped/
[pprof] 
listen = "localhost:7007"
enabled = false

# You can use tag matching like in InfluxDB. Format is exactly the same.
# It will parse all metrics that don't have tags yet.
# For more information see https://docs.influxdata.com/influxdb/v1.7/supported_protocols/graphite/
# Example:
# [convert_to_tagged]
# enabled = true 
# separator = "_"
# tags = ["region=us-east", "zone=1c"]
# templates = [
#     "generated.* .measurement.cpu  metric=idle",
#     "* host.measurement* template_match=none",
# ] 
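
As a quick smoke test, assuming the default TCP listener on :2003 and ClickHouse on localhost:8123 (nc flags vary by platform), you can send one untagged and one tagged point and then query the points table:

# send points using the Graphite plaintext protocol (tagged form uses ;tag=value)
echo "test.metric 42 $(date +%s)" | nc localhost 2003
echo "test.metric;env=dev 42 $(date +%s)" | nc localhost 2003

# once chunk-interval has passed, verify the points landed in ClickHouse
echo "SELECT Path, Value, Time FROM graphite WHERE Path LIKE 'test.metric%'" | curl 'http://localhost:8123/' --data-binary @-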