Prometheus SQL Exporter

Database agnostic SQL exporter for Prometheus.

Overview

SQL Exporter is a configuration-driven exporter that exposes metrics gathered from DBMSs, for use by the Prometheus monitoring system. Out of the box, it provides support for MySQL, PostgreSQL, Microsoft SQL Server and Clickhouse, but any DBMS for which a Go driver is available may be monitored after rebuilding the binary with the DBMS driver included.
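
As an illustration of what such a rebuild involves, the sketch below blank-imports an extra database/sql driver. It is only a sketch: the sqlite3 driver and the file the import lives in are assumptions, not part of the SQL Exporter source tree.

// Sketch only: registering an additional database/sql driver before rebuilding.
// github.com/mattn/go-sqlite3 is an example driver, not one bundled with SQL Exporter.
package main

import (
    "database/sql"
    "fmt"

    _ "github.com/mattn/go-sqlite3" // blank import registers the "sqlite3" driver
)

func main() {
    // database/sql only knows about drivers imported into this binary;
    // sql.Drivers() lists every registered driver name.
    fmt.Println(sql.Drivers())
}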

The collected metrics and the queries that produce them are entirely configuration-defined. SQL queries are grouped into collectors -- logical groups of queries, e.g. query stats or I/O stats, mapped to the metrics they populate. Collectors may be DBMS-specific (e.g. MySQL InnoDB stats) or custom, deployment-specific (e.g. pricing data freshness). This means you can quickly and easily set up custom collectors to measure data quality, whatever that might mean in your specific case.

Per the Prometheus philosophy, scrapes are synchronous (metrics are collected on every /metrics poll) but, in order to keep load at reasonable levels, minimum collection intervals may optionally be set per collector, producing cached metrics when queried more frequently than the configured interval.
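
In rough, illustrative Go (this is not the exporter's actual code, just the caching behaviour described above):

// Illustrative sketch: serve cached samples while min_interval has not elapsed.
package main

import (
    "fmt"
    "sync"
    "time"
)

type cachedCollector struct {
    mu          sync.Mutex
    minInterval time.Duration
    lastRun     time.Time
    cached      []string
}

func (c *cachedCollector) collect(run func() []string) []string {
    c.mu.Lock()
    defer c.mu.Unlock()
    if c.cached != nil && time.Since(c.lastRun) < c.minInterval {
        return c.cached // scraped again too soon: return the cached metrics
    }
    c.cached = run()
    c.lastRun = time.Now()
    return c.cached
}

func main() {
    c := &cachedCollector{minInterval: 30 * time.Second}
    queries := func() []string { return []string{`pricing_update_time{market="US"} 1.62e+09`} }
    fmt.Println(c.collect(queries)) // first scrape: runs the queries
    fmt.Println(c.collect(queries)) // within 30s: served from the cache
}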

Usage

Get Prometheus SQL Exporter as a packaged release or a Docker image, or build it yourself:

$ go install github.com/free/sql_exporter/cmd/sql_exporter

then run it from the command line:

$ sql_exporter

Use the -help flag to get help information.

$ ./sql_exporter -help
Usage of ./sql_exporter:
  -config.file string
      SQL Exporter configuration file name. (default "sql_exporter.yml")
  -web.listen-address string
      Address to listen on for web interface and telemetry. (default ":9399")
  -web.metrics-path string
      Path under which to expose metrics. (default "/metrics")
  [...]
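
For example, to point the exporter at a specific configuration file and check the output (the file name and host are placeholders; the flags and default port are the ones listed above):

$ ./sql_exporter -config.file=./sql_exporter.yml -web.listen-address=:9399
$ curl -s http://localhost:9399/metrics | head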

Configuration

SQL Exporter is deployed alongside the DB server it collects metrics from. If both the exporter and the DB server are on the same host, they will share the same failure domain: they will usually be either both up and running or both down. When the database is unreachable, /metrics responds with HTTP code 500 Internal Server Error, causing Prometheus to record up=0 for that scrape. Only metrics defined by collectors are exported on the /metrics endpoint. SQL Exporter process metrics are exported at /sql_exporter_metrics.
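
On the Prometheus side, a minimal scrape configuration for such a deployment might look like the following (the host name is a placeholder; 9399 is the exporter's default listen port):

scrape_configs:
  - job_name: 'sql_exporter'
    static_configs:
      - targets: ['dbserver1.example.com:9399']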

The configuration examples listed here only cover the core elements. For a comprehensive and fully documented configuration file, check out documentation/sql_exporter.yml. You will find ready-to-use "standard" DBMS-specific collector definitions in the examples directory. You may contribute your own collector definitions and metric additions if you think they could be more widely useful, even if they are merely different takes on already covered DBMSs.

./sql_exporter.yml

# Global settings and defaults.
global:
  # Subtracted from Prometheus' scrape_timeout to give us some headroom and prevent Prometheus from
  # timing out first.
  scrape_timeout_offset: 500ms
  # Minimum interval between collector runs: by default (0s) collectors are executed on every scrape.
  min_interval: 0s
  # Maximum number of open connections to any one target. Metric queries will run concurrently on
  # multiple connections.
  max_connections: 3
  # Maximum number of idle connections to any one target.
  max_idle_connections: 3

# The target to monitor and the list of collectors to execute on it.
target:
  # Data source name always has a URI schema that matches the driver name. In some cases (e.g. MySQL)
  # the schema gets dropped or replaced to match the driver's expected DSN format.
  data_source_name: 'sqlserver://prom_user:prom_password@dbserver1.example.com:1433'

  # Collectors (referenced by name) to execute on the target.
  collectors: [pricing_data_freshness]

# Collector definition files.
collector_files: 
  - "*.collector.yml"

Collectors

Collectors may be defined inline, in the exporter configuration file, under collectors, or they may be defined in separate files and referenced in the exporter configuration by name, making them easy to share and reuse.

The collector definition below generates gauge metrics of the form pricing_update_time{market="US"}.

./pricing_data_freshness.collector.yml

# This collector will be referenced in the exporter configuration as `pricing_data_freshness`.
collector_name: pricing_data_freshness

# A Prometheus metric with (optional) additional labels, its value and labels populated from one query.
metrics:
  - metric_name: pricing_update_time
    type: gauge
    help: 'Time when prices for a market were last updated.'
    key_labels:
      # Populated from the `market` column of each row.
      - Market
    static_labels:
      # Arbitrary key/value pair
      portfolio: income
    values: [LastUpdateTime]
    query: |
      SELECT Market, max(UpdateTime) AS LastUpdateTime
      FROM MarketPrices
      GROUP BY Market
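
When scraped, that definition yields exposition output roughly like the following (market values, label casing and the sample values are purely illustrative):

# HELP pricing_update_time Time when prices for a market were last updated.
# TYPE pricing_update_time gauge
pricing_update_time{market="US",portfolio="income"} 1.624e+09
pricing_update_time{market="UK",portfolio="income"} 1.624e+09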

Data Source Names

To keep things simple and yet allow fully configurable database connections to be set up, SQL Exporter uses DSNs (e.g. sqlserver://prom_user:prom_password@dbserver1.example.com:1433) to refer to database instances. However, because the Go sql library does not allow for automatic driver selection based on the DSN (i.e. an explicit driver name must be specified), SQL Exporter uses the schema part of the DSN (the part before the ://) to determine which driver to use.

Unfortunately, while this works out of the box with the MS SQL Server and PostgreSQL drivers, the MySQL driver's DSN format does not include a schema and the Clickhouse one uses tcp://. So SQL Exporter does a bit of massaging of DSNs for the latter two drivers in order for this to work:

DB         | SQL Exporter expected DSN                                            | Driver sees
MySQL      | mysql://user:passw@tcp(host:port)/dbname                             | user:passw@tcp(host:port)/dbname
PostgreSQL | postgres://user:passw@host:port/dbname                               | unchanged
SQL Server | sqlserver://user:passw@host:port/instance                            | unchanged
Clickhouse | clickhouse://host:port?username=user&password=passw&database=dbname  | tcp://host:port?username=user&password=passw&database=dbname
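
For instance, a MySQL target would be configured with the mysql:// form, and the exporter strips the schema before handing the DSN to the driver (credentials, host and database name are placeholders):

target:
  data_source_name: 'mysql://prom_user:prom_password@tcp(dbserver1.example.com:3306)/prices'
  collectors: [pricing_data_freshness]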

Why It Exists

SQL Exporter started off as an exporter for Microsoft SQL Server, for which no reliable exporters exist. But what is the point of a configuration-driven SQL exporter, if you're going to use it along with two more exporters with wholly different world views and configurations, because you also have MySQL and PostgreSQL instances to monitor?

A couple of alternative database-agnostic exporters are available -- https://github.com/justwatchcom/sql_exporter and https://github.com/chop-dbhi/prometheus-sql -- but they both do the collection at fixed intervals, independent of Prometheus scrapes. This is partly a philosophical issue, but practical issues are not all that difficult to imagine: jitter; duplicate data points; or collected but not scraped data points. The control they provide over which labels get applied is limited, and the base label set is spammy. And finally, configurations are not easily reused without copy-pasting and editing across jobs and instances.
