
kayrus / Prometheus Kubernetes

Licence: gpl-2.0
Most common Prometheus deployment example with alerts for Kubernetes cluster

Programming Languages

shell
77523 projects

Projects that are alternatives of or similar to Prometheus Kubernetes

Nexclipper
Metrics Pipeline for interoperability and Enterprise Prometheus
Stars: ✭ 533 (+24.53%)
Mutual labels:  prometheus, kubernetes-monitoring
Goldpinger
Debugging tool for Kubernetes which tests and displays connectivity between nodes in the cluster.
Stars: ✭ 2,015 (+370.79%)
Mutual labels:  prometheus, kubernetes-monitoring
Kube State Metrics
Add-on agent to generate and expose cluster-level metrics.
Stars: ✭ 3,433 (+702.1%)
Mutual labels:  prometheus, kubernetes-monitoring
Kubegraf
Grafana-plugin for k8s' monitoring
Stars: ✭ 345 (-19.39%)
Mutual labels:  prometheus
Devops Guide
DevOps Guide - Development to Production all configurations with basic notes to debug efficiently.
Stars: ✭ 4,119 (+862.38%)
Mutual labels:  kubernetes-monitoring
Faas
OpenFaaS - Serverless Functions Made Simple
Stars: ✭ 20,820 (+4764.49%)
Mutual labels:  prometheus
Docs
Prometheus documentation: content and static site generator
Stars: ✭ 411 (-3.97%)
Mutual labels:  prometheus
Prometheus.ex
Prometheus.io Elixir client
Stars: ✭ 343 (-19.86%)
Mutual labels:  prometheus
Dockprom
Docker hosts and containers monitoring with Prometheus, Grafana, cAdvisor, NodeExporter and AlertManager
Stars: ✭ 4,489 (+948.83%)
Mutual labels:  prometheus
Version Checker
Kubernetes utility for exposing image versions in use, compared to latest available upstream, as metrics.
Stars: ✭ 371 (-13.32%)
Mutual labels:  prometheus
Client ruby
Prometheus instrumentation library for Ruby applications
Stars: ✭ 369 (-13.79%)
Mutual labels:  prometheus
Zenko
Zenko is the open source multi-cloud data controller: own and keep control of your data on any cloud.
Stars: ✭ 353 (-17.52%)
Mutual labels:  prometheus
Prometheus For Developers
Practical introduction to Prometheus for developers.
Stars: ✭ 382 (-10.75%)
Mutual labels:  prometheus
M3
M3 monorepo - Distributed TSDB, Aggregator and Query Engine, Prometheus Sidecar, Graphite Compatible, Metrics Platform
Stars: ✭ 3,898 (+810.75%)
Mutual labels:  prometheus
Elasticsearch Prometheus Exporter
Prometheus exporter plugin for Elasticsearch
Stars: ✭ 409 (-4.44%)
Mutual labels:  prometheus
Go Project Sample
Introduce the best practice experience of Go project with a complete project example.通过一个完整的项目示例介绍Go语言项目的最佳实践经验.
Stars: ✭ 344 (-19.63%)
Mutual labels:  prometheus
Kubernetes App
A set of dashboards and panels for kubernetes.
Stars: ✭ 398 (-7.01%)
Mutual labels:  prometheus
Pihole Exporter
A Prometheus exporter for PI-Hole's Raspberry PI ad blocker
Stars: ✭ 352 (-17.76%)
Mutual labels:  prometheus
Squzy
Squzy - is a high-performance open-source monitoring, incident and alert system written in Golang with Bazel and love.
Stars: ✭ 359 (-16.12%)
Mutual labels:  prometheus
Dogvscat
Sample Docker Swarm cluster stack of tools
Stars: ✭ 377 (-11.92%)
Mutual labels:  prometheus

See also Elasticsearch+Kibana Kubernetes complete example

Prerequisites

Kubectl

kubectl should be configured.
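A quick way to verify this is to check the current context and cluster connectivity:

kubectl config current-context
kubectl cluster-info
kubectl get nodes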

Namespace

This example uses the monitoring namespace. If you wish to use your own namespace, just export the NAMESPACE=mynamespace environment variable.
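If the namespace does not exist yet, create it first (mynamespace below is just a placeholder):

kubectl create namespace monitoring
# or, for a custom namespace
export NAMESPACE=mynamespace
kubectl create namespace "$NAMESPACE"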

Upload etcd TLS keypair

If you use a TLS keypair and TLS authentication for your etcd cluster, put the corresponding TLS keypair into the etcd-tls-client-certs secret:

kubectl --namespace=monitoring create secret generic --from-file=ca.pem=/path/to/ca.pem --from-file=client.pem=/path/to/client.pem --from-file=client-key.pem=/path/to/client-key.pem etcd-tls-client-certs

Otherwise, create a dummy secret:

kubectl --namespace=monitoring create secret generic --from-literal=ca.pem=123 --from-literal=client.pem=123 --from-literal=client-key.pem=123 etcd-tls-client-certs
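Either way, you can check that the secret exists and contains the three expected keys (describe prints key names and sizes, not the key material):

kubectl --namespace=monitoring describe secret etcd-tls-client-certs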

Upload Ingress controller server TLS keypairs

In order to provide a secure endpoint reachable over the Internet, you have to create the example-tls secret inside the monitoring Kubernetes namespace.

kubectl create --namespace=monitoring secret tls example-tls --cert=cert.crt --key=key.key

Detailed information is available here. Ingress manifest example.
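For test purposes, a self-signed keypair can be generated with openssl before creating the secret above (the CN is an example value):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout key.key -out cert.crt -subj "/CN=prometheus.example.com"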

Create Ingress basic auth entry

Create it with the internal-services-auth name. More info is here. Ingress manifest example.
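One possible way to create it, assuming an NGINX-based Ingress controller that reads htpasswd-formatted secrets (myuser is a placeholder):

htpasswd -c auth myuser
kubectl --namespace=monitoring create secret generic internal-services-auth --from-file=auth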

Set proper external URLs to have correct links in notifications

Run EXTERNAL_URL=https://my-external-prometheus.example.com ./deploy.sh to deploy Prometheus monitoring configured to use the https://my-external-prometheus.example.com base URL. Otherwise it will use the default value: https://prometheus.example.com.
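Both variables can be combined in a single invocation (the values below are placeholders):

NAMESPACE=mynamespace EXTERNAL_URL=https://my-external-prometheus.example.com ./deploy.sh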

Assumptions

Disk mount points

This repo assumes that your Kubernetes worker nodes contain two observable mount points:

  • root mount point / which is mounted read-only as /root-disk inside the node-exporter pod
  • data mount point /localdata which is mounted read-only as /data-disk inside the node-exporter pod

If you wish to change these values, you have to modify node-exporter-ds.yaml, prometheus-rules/low-disk-space.rules, grafana-import-dashboards-configmap and then rebuild the configmap manifests before you run the ./deploy.sh script.
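As a sketch, a rules configmap manifest could be regenerated from the prometheus-rules directory with kubectl's client-side dry-run; the configmap and output file names here are assumptions, so check the actual names used in this repo:

kubectl --namespace=monitoring create configmap prometheus-rules --from-file=prometheus-rules/ --dry-run=client -o yaml > prometheus-rules-configmap.yaml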

Data storage

This repo uses emptyDir data storage, which means that every pod restart will cause data loss. If you wish to use persistent storage, please modify the following manifests correspondingly:

Grafana dashboards

Initial Grafana dashboards were taken from this repo and adjusted.

Ingress controller

Example of an Ingress manifest to provide access from outside:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/auth-realm: Authentication Required
    ingress.kubernetes.io/auth-secret: internal-services-auth
    ingress.kubernetes.io/auth-type: basic
    kubernetes.io/ingress.allow-http: "false"
  name: ingress-monitoring
  namespace: monitoring
spec:
  tls:
  - hosts:
    - prometheus.example.com
    - grafana.example.com
    secretName: example-tls
  rules:
  - host: prometheus.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus-svc
          servicePort: 9090
      - path: /alertmanager
        backend:
          serviceName: alertmanager
          servicePort: 9093
  - host: grafana.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000

If you still don't have an Ingress controller installed, you can use manifests from the test_ingress directory for test purposes.
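A minimal sketch of that approach, assuming the test_ingress manifests are self-contained:

kubectl apply -f test_ingress/
kubectl --namespace=monitoring get ingress ingress-monitoring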

Alerting

Included alert rules

The following Prometheus alert rules are already included in this repo:

  • NodeCPUUsage > 50%
  • NodeLowRootDisk > 80% (relates to /root-disk mount point inside node-exporter pod)
  • NodeLowDataDisk > 80% (relates to /data-disk mount point inside node-exporter pod)
  • NodeSwapUsage > 10%
  • NodeMemoryUsage > 75%
  • ESLogsStatus (alerts when Elasticsearch cluster status goes yellow or red)
  • NodeLoadAverage (alerts when the node's load average divided by the number of CPUs exceeds 1)
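Once Prometheus is running, the loaded rules can be inspected in the web UI or, on Prometheus 2.x, via the HTTP API (using the basic auth credentials created for the Ingress):

curl --user "%username%:%password%" https://prometheus.example.com/api/v1/rules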

Notifications

alertmanager-configmap.yaml contains smtp_* and slack_* settings inside the global section. Adjust them to meet your needs.
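To verify the receivers end to end, a synthetic alert can be pushed to Alertmanager (the alert name is an arbitrary example; newer Alertmanager releases expose this endpoint under /api/v2/alerts instead of /api/v1/alerts):

curl -XPOST --user "%username%:%password%" -d '[{"labels": {"alertname": "TestAlert", "severity": "warning"}}]' https://prometheus.example.com/alertmanager/api/v1/alerts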

Updating configuration

Prometheus configuration

Update command line parameters

Modify prometheus-deployment.yaml and apply the manifest:

kubectl --namespace=monitoring apply -f prometheus-deployment.yaml

If the deployment manifest was changed, all Prometheus pods will be restarted and their data will be lost.
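The restart can be followed with kubectl (the deployment name prometheus is an assumption based on the manifest file name):

kubectl --namespace=monitoring rollout status deployment prometheus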

Update configfile

Update prometheus-configmap.yaml or prometheus-rules directory contents and apply them:

./update_prometheus_config.sh
# or
./update_prometheus_rules.sh

These scripts will update the configmaps, wait until the changes are delivered into the pod volume (if the configmap was not changed, the script will wait forever) and reload the configs. You can also reload the configs manually using the commands below:

curl -XPOST --user "%username%:%password%" https://prometheus.example.com/-/reload
# or
kubectl --namespace=monitoring exec $(kubectl --namespace=monitoring get pods -l app=prometheus -o jsonpath={.items..metadata.name}) -- killall -HUP prometheus

Alertmanager configuration

Update command line parameters

Modify alertmanager-deployment.yaml and apply the manifest:

kubectl --namespace=monitoring apply -f alertmanager-deployment.yaml

If the deployment manifest was changed, all Alertmanager pods will be restarted and their data will be lost.

Update configfile

Update alertmanager-configmap.yaml or alertmanager-templates directory contents and apply them:

./update_alertmanager_config.sh
# or
./update_alertmanager_templates.sh

These scripts will update the configmaps, wait until the changes are delivered into the pod volume (if the configmap was not changed, the script will wait forever) and reload the configs. You can also reload the configs manually using the commands below:

curl -XPOST --user "%username%:%password%" https://prometheus.example.com/alertmanager/-/reload
# or
kubectl --namespace=monitoring exec $(kubectl --namespace=monitoring get pods -l app=alertmanager -o jsonpath={.items..metadata.name}) -- killall -HUP alertmanager

Pictures

(Grafana dashboard screenshot)
