
infinityworks / Prometheus Example Queries

Licence: MIT
Simple place for people to provide examples of queries they've found useful.

Projects that are alternatives to or similar to Prometheus Example Queries

Promgen
Promgen is a configuration file generator for Prometheus
Stars: ✭ 754 (-1.44%)
Mutual labels:  monitoring, prometheus
Promxy
An aggregating proxy to enable HA prometheus
Stars: ✭ 562 (-26.54%)
Mutual labels:  monitoring, prometheus
Alertmanager Bot
Bot for Prometheus' Alertmanager
Stars: ✭ 473 (-38.17%)
Mutual labels:  monitoring, prometheus
Dockprom
Docker hosts and containers monitoring with Prometheus, Grafana, cAdvisor, NodeExporter and AlertManager
Stars: ✭ 4,489 (+486.8%)
Mutual labels:  monitoring, prometheus
Snmp exporter
SNMP Exporter for Prometheus
Stars: ✭ 705 (-7.84%)
Mutual labels:  monitoring, prometheus
Cluster Monitoring
Cluster monitoring stack for clusters based on Prometheus Operator
Stars: ✭ 453 (-40.78%)
Mutual labels:  monitoring, prometheus
Nexclipper
Metrics Pipeline for interoperability and Enterprise Prometheus
Stars: ✭ 533 (-30.33%)
Mutual labels:  monitoring, prometheus
Squzy
Squzy is a high-performance, open-source monitoring, incident and alert system written in Golang with Bazel and love.
Stars: ✭ 359 (-53.07%)
Mutual labels:  monitoring, prometheus
Opencensus Java
A stats collection and distributed tracing framework
Stars: ✭ 640 (-16.34%)
Mutual labels:  monitoring, prometheus
Conprof
Continuous profiling for performance analysis of CPU and memory over time.
Stars: ✭ 571 (-25.36%)
Mutual labels:  monitoring, prometheus
Cortex
A horizontally scalable, highly available, multi-tenant, long term Prometheus.
Stars: ✭ 4,491 (+487.06%)
Mutual labels:  monitoring, prometheus
Ansible Prometheus
Deploy Prometheus monitoring system
Stars: ✭ 758 (-0.92%)
Mutual labels:  monitoring, prometheus
Prometheus For Developers
Practical introduction to Prometheus for developers.
Stars: ✭ 382 (-50.07%)
Mutual labels:  monitoring, prometheus
Urlooker
Enterprise-level website monitoring system
Stars: ✭ 469 (-38.69%)
Mutual labels:  monitoring, prometheus
Dogvscat
Sample Docker Swarm cluster stack of tools
Stars: ✭ 377 (-50.72%)
Mutual labels:  monitoring, prometheus
Statping
Status Page for monitoring your websites and applications with beautiful graphs, analytics, and plugins. Run on any type of environment.
Stars: ✭ 5,806 (+658.95%)
Mutual labels:  monitoring, prometheus
Kubegraf
Grafana plugin for Kubernetes monitoring
Stars: ✭ 345 (-54.9%)
Mutual labels:  monitoring, prometheus
Awesome Monitoring
INFRASTRUCTURE, OPERATING SYSTEM and APPLICATION monitoring tools for Operations.
Stars: ✭ 356 (-53.46%)
Mutual labels:  monitoring, prometheus
Swagger Stats
API Observability. Trace API calls and Monitor API performance, health and usage statistics in Node.js Microservices.
Stars: ✭ 559 (-26.93%)
Mutual labels:  monitoring, prometheus
Prometheus Operator
Prometheus Operator creates/configures/manages Prometheus clusters atop Kubernetes
Stars: ✭ 6,451 (+743.27%)
Mutual labels:  monitoring, prometheus

Purpose

Prometheus is awesome, but the human mind doesn't work in PromQL. The intention of this repository is to become a simple place for people to provide examples of queries they've found useful. We encourage all to contribute so that this can become something valuable to the community.

Simple or complex, all input is welcome.

Further Reading

PromQL Examples

These examples are formatted as recording rules, but can be used as normal expressions.

Please ensure all examples are submitted in the same format; we'd like to keep this easy to read and maintain. The examples may contain metric names and labels that aren't present on your system, so if you're looking to re-use them, make sure the labels and metric names match your own.
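
For context, every "- record:" / "expr:" entry below lives inside a rule group in a rules file that Prometheus loads via rule_files in prometheus.yml. A minimal sketch, with an illustrative file path and group name that are not part of the original examples:

# rules/examples.rules.yml (illustrative path)
groups:
  - name: example-recording-rules
    rules:
      - record: instance:node_cpu_utilization_percent:rate5m
        expr: 100 * (1 - avg by(instance)(irate(node_cpu{mode='idle'}[5m])))

Point Prometheus at the file with rule_files: ['rules/*.rules.yml'] and the rules are evaluated at the configured evaluation interval.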


Show Overall CPU usage for a server

- record: instance:node_cpu_utilization_percent:rate5m
  expr: 100 * (1 - avg by(instance)(irate(node_cpu{mode='idle'}[5m])))

Summary: Often useful to newcomers to Prometheus looking to replicate common host CPU checks. This query provides an overall CPU usage figure per instance. It takes the instant per-second rate (irate) of the idle CPU counter, using up to 5 minutes of lookback, averages it across all CPUs on each instance, and subtracts the result from 100% to give the percentage of time spent in all non-idle states.
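
If you are running a newer node_exporter (0.16.0 or later), the metric was renamed from node_cpu to node_cpu_seconds_total; a sketch of the same rule under that assumption:

- record: instance:node_cpu_utilization_percent:rate5m
  expr: 100 * (1 - avg by(instance)(irate(node_cpu_seconds_total{mode='idle'}[5m])))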


Track http error rates as a proportion of total traffic

- record: job_instance_method_path:demo_api_request_errors_50x_requests:rate5m
  expr: >
    rate(demo_api_request_duration_seconds_count{status="500",job="demo"}[5m]) * 50
      > on(job, instance, method, path)
    rate(demo_api_request_duration_seconds_count{status="200",job="demo"}[5m])

Summary: This query selects the 500-status rate for any job, instance, method, and path combination for which the 200-status rate is not at least 50 times higher than the 500-status rate. The rate() function is used here because the underlying metric is a counter.

Link: Julius Volz - Tutorial
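
A related query is the error rate expressed as a ratio of total traffic rather than a comparison. A sketch using the same demo metrics; the record name and the status=~"5.." matcher (covering all 5xx codes) are illustrative:

- record: job:demo_api_request_errors:ratio_rate5m
  expr: >
    sum by(job) (rate(demo_api_request_duration_seconds_count{status=~"5..",job="demo"}[5m]))
      /
    sum by(job) (rate(demo_api_request_duration_seconds_count{job="demo"}[5m]))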


90th Percentile latency

- record: instance:demo_api_90th_over_50ms_and_requests_over_1:rate5m
  expr: >
    histogram_quantile(0.9, rate(demo_api_request_duration_seconds_bucket{job="demo"}[5m])) > 0.05
      and
    rate(demo_api_request_duration_seconds_count{job="demo"}[5m]) > 1

Summary: Select any HTTP endpoints that have a 90th percentile latency higher than 50ms (0.05s), but only for the dimensional combinations that receive more than one request per second. histogram_quantile() performs the percentile calculation, producing a 90th percentile latency for each sub-dimension; the and operator then filters those latencies, retaining only the series that also receive more than one request per second. histogram_quantile() is only suitable for use with a Histogram metric.

Link: Julius Volz - Tutorial
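
To produce a single 90th percentile across all instances of a method and path, rather than one per series, the buckets are usually summed by le before applying histogram_quantile(). A sketch with an illustrative record name:

- record: method_path:demo_api_request_duration_seconds:p90_rate5m
  expr: >
    histogram_quantile(0.9,
      sum by(le, method, path) (rate(demo_api_request_duration_seconds_bucket{job="demo"}[5m])))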


HTTP request rate, per second... an hour ago

- record: instance:api_http_requests_total:offset_1h_rate5m
  expr: rate(api_http_requests_total{status="500"}[5m] offset 1h)

Summary: The rate() function calculates the per-second average rate of increase of the time series in a range vector, and the offset modifier shifts the evaluation window back in time. Combined, this query calculates the per-second rate of HTTP requests with a 500 status over the 5-minute window that ended an hour ago. Suitable for use on a counter metric.

Link: Tom Verelst - Ordina
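
The offset modifier also makes it easy to compare current traffic against the same window an hour ago, for example as a ratio. A sketch with an illustrative record name:

- record: job:api_http_requests:ratio_vs_1h_ago_rate5m
  expr: >
    sum by(job) (rate(api_http_requests_total[5m]))
      /
    sum by(job) (rate(api_http_requests_total[5m] offset 1h))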


Kubernetes Container Memory Usage

- record: kubernetes_pod_name:container_memory_usage_bytes:sum
  expr: sum by(kubernetes_pod_name) (container_memory_usage_bytes{kubernetes_namespace="kube-system"})

Summary: How much memory are the components in the kube-system namespace using? This breaks it down per Pod within that namespace.

Link: Joe Bowers - CoreOS
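
To see the same breakdown across every namespace, drop the namespace selector and add the namespace label to the grouping. A sketch, assuming the same relabelled cAdvisor label names as above:

- record: kubernetes_namespace_pod_name:container_memory_usage_bytes:sum
  expr: sum by(kubernetes_namespace, kubernetes_pod_name) (container_memory_usage_bytes)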


Most expensive time series

- record: metric_name:metrics:top_ten_count
  expr: topk(10, count by (__name__)({__name__=~".+"}))

Summary: Which are your most expensive time series to store? When tuning Prometheus, queries like this help you find your most expensive metrics. Be cautious: this query is itself expensive to run.

Link: Brian Brazil - Robust Perception


Most expensive time series, by job

- record: job:metrics:top_ten_count
  expr: topk(10, count by (job)({__name__=~".+"}))

Summary: Which of your jobs have the most time series? Be cautious: this query is expensive to run.

Link: Brian Brazil - Robust Perception


Which Alerts have been firing?

- record: alerts_fired:24h
  expr: sort_desc(sum(sum_over_time(ALERTS{alertstate="firing"}[24h])) by (alertname))

Summary: Which of your Alerts have been firing the most? Useful to track alert trends.
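
A companion query is a count of what is firing right now, broken down by severity. A sketch, assuming your alerting rules attach a severity label:

- record: severity:alerts_firing:count
  expr: count by(severity) (ALERTS{alertstate="firing"})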


Alert Rules Examples

These are examples of rules you can use with Prometheus to trigger alerts, which are usually routed to the Prometheus Alertmanager. You can refer to the official documentation for more information.

- alert: <alert name>
  expr: <expression>
  for: <duration>
  labels:
    label_name: <label value>
  annotations:
    annotation_name: <annotation value>

Disk Will Fill in 4 Hours

- alert: PredictiveHostDiskSpace
  expr: predict_linear(node_filesystem_free{mountpoint="/"}[4h], 4 * 3600) < 0
  for: 30m
  labels:
    severity: warning
  annotations:
    description: 'Based on recent sampling, the disk is likely to fill on volume
      {{ $labels.mountpoint }} within the next 4 hours for instance: {{ $labels.instance_id
      }} tagged as: {{ $labels.instance_name_tag }}'
    summary: Predictive Disk Space Utilisation Alert

Summary: Asks Prometheus to predict whether the host's disk will fill within the next four hours, based upon the last four hours of sampled data. In this example, we are returning AWS EC2 specific labels to make the alert more readable.
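
On node_exporter 0.16.0 and later the filesystem metrics carry a _bytes suffix (node_filesystem_free_bytes, or node_filesystem_avail_bytes if you want to exclude space reserved for root). A sketch of the same expression under that assumption:

- expr: predict_linear(node_filesystem_free_bytes{mountpoint="/"}[4h], 4 * 3600) < 0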


Alert on High Memory Load

- expr: (sum(node_memory_MemTotal) - sum(node_memory_MemFree + node_memory_Buffers + node_memory_Cached) ) / sum(node_memory_MemTotal) * 100 > 85

Summary: Trigger an alert if the memory of a host is almost full. This is done by subtracting the free, buffered and cached memory from the total memory, then dividing by the total again to obtain a percentage. The > 85 comparison means the expression only returns a value when the result is above 85.

Link: Stefan Prodan - Blog
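
Wired into a complete alerting rule, this might look like the following sketch; the alert name, duration and annotation text are illustrative rather than taken from the original post:

- alert: HighMemoryLoad
  expr: (sum(node_memory_MemTotal) - sum(node_memory_MemFree + node_memory_Buffers + node_memory_Cached) ) / sum(node_memory_MemTotal) * 100 > 85
  for: 15m
  labels:
    severity: warning
  annotations:
    description: 'Memory utilisation is {{ $value }}% and has stayed above 85% for 15 minutes'
    summary: High memory utilisation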


Alert on High CPU utilisation

- alert: HostCPUUtilisation
  expr: 100 - (avg by(instance) (irate(node_cpu{mode="idle"}[5m])) * 100) > 70
  for: 20m
  labels:
    severity: warning
  annotations:
    description: 'High CPU utilisation detected for instance {{ $labels.instance_id
      }} tagged as: {{ $labels.instance_name_tag }}, the utilisation is currently:
      {{ $value }}%'
    summary: CPU Utilisation Alert

Summary: Trigger an alert if a host's CPU becomes over 70% utilised for 20 minutes or more.


Alert if Prometheus is throttling

- alert: PrometheusIngestionThrottling
  expr: prometheus_local_storage_persistence_urgency_score > 0.95
  for: 1m
  labels:
    severity: warning
  annotations:
    description: Prometheus cannot persist chunks to disk fast enough. Its urgency
      score is {{$value}}.
    summary: Prometheus is throttling (or is close to throttling) ingestion of metrics

Summary: Trigger an alert if Prometheus begins to throttle its ingestion. If you see this, some TLC is required.

