
rchakode / Kube Opex Analytics

Licence: apache-2.0
🎨 Kubernetes Cost Allocation and Capacity Planning Analytics Tool. Hourly, daily, monthly reports - Prometheus exporter - Built-in & Grafana dashboard.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Kube Opex Analytics

Vudash
Powerful, Flexible, Open Source dashboards for anything
Stars: ✭ 363 (+56.47%)
Mutual labels:  analytics, monitoring, dashboard
Grafana Dashboards
Grafana Dashboards
Stars: ✭ 228 (-1.72%)
Mutual labels:  monitoring, grafana-dashboard, dashboard
Netdata
Real-time performance monitoring, done right! https://www.netdata.cloud
Stars: ✭ 57,056 (+24493.1%)
Mutual labels:  analytics, monitoring, dashboard
Flask Profiler
a flask profiler which watches endpoint calls and tries to make some analysis.
Stars: ✭ 622 (+168.1%)
Mutual labels:  analytics, monitoring, dashboard
Goaccess
GoAccess is a real-time web log analyzer and interactive viewer that runs in a terminal in *nix systems or through your browser.
Stars: ✭ 14,096 (+5975.86%)
Mutual labels:  analytics, monitoring, dashboard
Unifi Poller
Application: Collect ALL UniFi Controller, Site, Device & Client Data - Export to InfluxDB or Prometheus
Stars: ✭ 1,050 (+352.59%)
Mutual labels:  prometheus-exporter, grafana-dashboard, dashboard
X509 Certificate Exporter
A Prometheus exporter to monitor x509 certificates expiration in Kubernetes clusters or standalone
Stars: ✭ 40 (-82.76%)
Mutual labels:  prometheus-exporter, grafana-dashboard, dashboard
Grafana
The open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.
Stars: ✭ 45,930 (+19697.41%)
Mutual labels:  analytics, monitoring, dashboard
Github Monitoring
Monitor your GitHub Repos with Docker & Prometheus
Stars: ✭ 163 (-29.74%)
Mutual labels:  monitoring, dashboard
Swiv
For the open source UI formerly known as Pivot
Stars: ✭ 165 (-28.88%)
Mutual labels:  analytics, dashboard
Hastic Grafana App
Hastic data management server for labeling patterns and anomalies in Grafana
Stars: ✭ 166 (-28.45%)
Mutual labels:  monitoring, dashboard
Dashbuilder
Dashboard composition tooling based on the Uberfire framework
Stars: ✭ 163 (-29.74%)
Mutual labels:  monitoring, dashboard
Gitlab Ci Monitor
A simple dashboard for monitoring GitLab CI builds. Alpha version.
Stars: ✭ 152 (-34.48%)
Mutual labels:  monitoring, dashboard
Pmm Server
PMM Server
Stars: ✭ 165 (-28.88%)
Mutual labels:  monitoring, grafana-dashboard
Appmetrics
App Metrics is an open-source and cross-platform .NET library used to record and report metrics within an application.
Stars: ✭ 1,986 (+756.03%)
Mutual labels:  monitoring, grafana-dashboard
Jmx exporter
A process for exposing JMX Beans via HTTP for Prometheus consumption
Stars: ✭ 2,134 (+819.83%)
Mutual labels:  monitoring, prometheus-exporter
Alertmanager2es
Receives HTTP webhook notifications from AlertManager and inserts them into an Elasticsearch index for searching and analysis
Stars: ✭ 173 (-25.43%)
Mutual labels:  analytics, monitoring
Ktop
top for k8s
Stars: ✭ 178 (-23.28%)
Mutual labels:  monitoring, dashboard
Prometheus Nats Exporter
A Prometheus exporter for NATS metrics
Stars: ✭ 179 (-22.84%)
Mutual labels:  monitoring, grafana-dashboard
Dark
(grafana) Dashboards As Resources in Kubernetes
Stars: ✭ 190 (-18.1%)
Mutual labels:  grafana-dashboard, dashboard



Overview

In a nutshell, kube-opex-analytics (literally Kubernetes Opex Analytics) is a tool that helps organizations track the resources consumed by their Kubernetes clusters in order to prevent overpaying. To this end, it generates short-, mid- and long-term usage reports showing relevant insights into the amount of resources each project consumes over time. The final goal is to ease cost allocation and capacity planning decisions with factual analytics.

Multi-cluster analytics: kube-opex-analytics tracks the usage of a single Kubernetes cluster. For centralized usage analytics across multiple Kubernetes clusters, consider our Krossboard project. Watch a demo video here.


Concepts

kube-opex-analytics periodically collects CPU and memory usage metrics from the Kubernetes API, then processes and consolidates them over various time-aggregation perspectives (hourly, daily, monthly) to produce resource usage reports covering up to a year. The reports focus on the namespace level, while special care is taken to also account for and highlight the share of non-allocatable capacity.

Fundamental Principles

kube-opex-analytics is designed atop the following core concepts and features:

  • Namespace-focused: Consolidated resource usage metrics consider individual namespaces as the fundamental units of resource sharing. Special care is taken to also account for and highlight non-allocatable resources.
  • Hourly Usage & Trends: As on public clouds, resource consumption for each namespace is consolidated on an hourly basis. This corresponds to the ratio (%) of resources used per namespace during each hour. It is the foundation for cost allocation and also reveals trends over time in the resources consumed per namespace and at the Kubernetes cluster scale.
  • Daily and Monthly Usage Costs: Provides, for each period (daily/monthly), namespace, and resource type (CPU/memory), a consolidated cost computed in one of the following ways: (i) accumulated hourly usage over the period; (ii) actual costs computed from the resource usage and a given hourly billing rate; (iii) normalized ratio of usage per namespace compared against the global cluster usage.
  • Occupation of Nodes by Namespaced Pods: Highlights, for each node, the share of resources used by active pods, labelled by their namespace.
  • Efficient Visualization: For the generated metrics, kube-opex-analytics provides dashboards with relevant charts covering both the last couple of hours and the last 12 months (i.e. a year). For this there are built-in charts, a Prometheus exporter, and a Grafana dashboard that all work out of the box.

Cost Models

Cost allocation models can be set through the startup configuration variable KOA_COST_MODEL. Possible values are:

  • CUMULATIVE_RATIO (default): computes costs as the cumulative resource usage for each period of time (daily, monthly).
  • RATIO: computes costs as normalized ratios (%) of resource usage during each period of time.
  • CHARGE_BACK: computes actual costs using a given cluster hourly rate and the cumulative resource usage during each period of time.

Read the Configuration section for more details.
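For illustration, the CHARGE_BACK model could be selected by exporting the two related variables before starting kube-opex-analytics (a minimal sketch; the hourly rate below is the example value used in the Configuration Variables section):

$ export KOA_COST_MODEL=CHARGE_BACK
$ export KOA_BILLING_HOURLY_RATE=6.95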

Screenshots

The screenshots below illustrate some of the reports available via kube-opex-analytics's built-in charts or via Grafana backed by the kube-opex-analytics Prometheus exporter.

Last Week Hourly Resource Usage Trends

Two-weeks Daily CPU and Memory Usage

One-year Monthly CPU and Memory Usage

Nodes' Occupation by Pods

Grafana Dashboard

This is a screenshot of our official Grafana dashboard, backed by the kube-opex-analytics built-in Prometheus exporter.

Getting Started

Kubernetes API Access

kube-opex-analytics needs read-only access to the following Kubernetes APIs.

  • /api/v1
  • /apis/metrics.k8s.io/v1beta1 (provided by the Kubernetes Metrics Server, which must be installed on the cluster if it is not already).

You need to provide the base URL of the Kubernetes API when starting the program.

  • For a typical deployment inside the Kubernetes cluster, provide the local cluster API endpoint (i.e. https://kubernetes.default).
  • Otherwise, for an installation outside the Kubernetes cluster, you can provide either the URL of the Kubernetes API (e.g. https://1.2.3.4:6443) or a proxied API endpoint (the command kubectl proxy opens a proxied access to the Kubernetes API, by default at http://127.0.0.1:8001).

When deployed outside the cluster without proxied access, it will likely be required to provide credentials to authenticate against the Kubernetes API. The credentials can be a Bearer token, a Basic auth token, or X509 client certificate credentials. See Configuration Variables for more details.
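As a minimal sketch of the proxied-access option, assuming kubectl is already configured for the target cluster:

$ # Open a proxied access to the Kubernetes API (default endpoint: http://127.0.0.1:8001)
$ kubectl proxy &
$ # Quick check that the two APIs required by kube-opex-analytics are reachable
$ curl -sf http://127.0.0.1:8001/api/v1 > /dev/null && echo "core API reachable"
$ curl -sf http://127.0.0.1:8001/apis/metrics.k8s.io/v1beta1 > /dev/null && echo "metrics API reachable"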

Configuration Variables

When needed, these configuration environment variables shall be set before starting kube-opex-analytics:

  • KOA_DB_LOCATION sets the path used to store internal data. Typically, when you set a volume to store those data, make sure this path is located under the volume's mount point.
  • KOA_K8S_API_ENDPOINT sets the endpoint to the Kubernetes API.
  • KOA_K8S_CACERT sets the path to the CA file when the Kubernetes API uses a self-signed certificate.
  • KOA_K8S_AUTH_TOKEN sets a Bearer token to authenticate against the Kubernetes API.
  • KOA_K8S_AUTH_CLIENT_CERT sets the path to the X509 client certificate to authenticate against the Kubernetes API.
  • KOA_K8S_AUTH_CLIENT_CERT_KEY sets the path to the X509 client certificate key.
  • KOA_K8S_AUTH_USERNAME sets the username to authenticate against the Kubernetes API using Basic Authentication.
  • KOA_K8S_AUTH_PASSWORD sets the password for Basic Authentication.
  • KOA_COST_MODEL (version >= 0.2.0): sets the model of cost allocation to use. Possible values are: CUMULATIVE_RATIO (default) indicates to compute cost as cumulative resource usage for each period of time (daily, monthly); CHARGE_BACK calculates cost based on a given cluster hourly rate (see KOA_BILLING_HOURLY_RATE); RATIO indicates to compute cost as a normalized percentage of resource usage during each period of time.
  • KOA_BILLING_HOURLY_RATE (required if the cost model is CHARGE_BACK): defines a positive floating-point number corresponding to an estimated hourly rate for the Kubernetes cluster. For example, if your cluster costs $5,000 a month (i.e. ~30*24 hours), its estimated hourly cost would be 5000/(30*24) ≈ 6.95.
  • KOA_BILLING_CURRENCY_SYMBOL (optional, default is '$'): sets a currency string to use to annotate costs on reports.
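For illustration, here is a hypothetical set of variables for an out-of-cluster deployment authenticated with a Bearer token (all paths and values below are placeholders):

$ export KOA_DB_LOCATION=/var/lib/kube-opex-analytics/db
$ export KOA_K8S_API_ENDPOINT=https://1.2.3.4:6443
$ export KOA_K8S_CACERT=/path/to/ca.crt         # placeholder path
$ export KOA_K8S_AUTH_TOKEN=<bearer-token>      # placeholder token
$ export KOA_BILLING_CURRENCY_SYMBOL='$'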

Deployment on Docker

kube-opex-analytics is released as a Docker image, so you can quickly start an instance of the service by running the following command:

$ docker run -d \
        --net="host" \
        --name 'kube-opex-analytics' \
        -v /var/lib/kube-opex-analytics:/data \
        -e KOA_DB_LOCATION=/data/db \
        -e KOA_K8S_API_ENDPOINT=http://127.0.0.1:8001 \
        rchakode/kube-opex-analytics

In this command:

  • We provide the local path /var/lib/kube-opex-analytics as the data volume for the container. That's where kube-opex-analytics will store its internal analytics data. You can change this local path to another location, but keep the container volume /data as is.
  • The environment variable KOA_DB_LOCATION points to the path inside the container used to store data. Note that this directory belongs to the data volume attached to the container.
  • The environment variable KOA_K8S_API_ENDPOINT sets the address of the Kubernetes API endpoint.

Get Access to the User Interface

Once the container is started, you can access the kube-opex-analytics web interface at http://<DOCKER_HOST>:5483/, where <DOCKER_HOST> should be replaced by the IP address or the hostname of the Docker server.

For instance, if you're running Docker on your local machine the interface will be available at: http://127.0.0.1:5483/

You typically need to wait about an hour for all charts to be filled. This is normal operation for kube-opex-analytics, which is an hourly-based analytics tool.

Deployment on a Kubernetes cluster

There is a Helm chart to ease the deployment on Kubernetes using either Helm or kubectl.

First review the values.yaml file to customize the configuration options according to your specific environment.

In particular, you may need to customize the default settings used for the persistent data volume, the Prometheus Operator and its ServiceMonitor, the security context, among others.

Security Context: The kube-opex-analytics pod is deployed with an unprivileged security context by default. However, if needed, it's possible to launch the pod in privileged mode by setting the Helm configuration value securityContext.enabled to false.
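For instance, that override can be passed at install time without editing values.yaml (a sketch based on the Helm install command shown later in this guide):

$ helm upgrade \
  --namespace kube-opex-analytics \
  --install kube-opex-analytics \
  --set securityContext.enabled=false \
  helm/kube-opex-analytics/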

In the next deployment commands, it's assumed that the target namespace kube-opex-analytics exists; you thus need to create it first (as shown below) or, alternatively, adapt the commands to use any other namespace of your choice.
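The namespace can be created with the standard kubectl command:

$ kubectl create namespace kube-opex-analytics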

Installation using Helm

The deployment, which is validated with Helm 2 and 3, can be performed as follows.

$ helm upgrade \
  --namespace kube-opex-analytics \
  --install kube-opex-analytics \
  helm/kube-opex-analytics/

Installation using Kubectl

This approach requires the Helm client (version 2 or 3) to be installed in order to generate a raw template for kubectl.

$ helm template \
  kube-opex-analytics \
  --namespace kube-opex-analytics \
  helm/kube-opex-analytics/ | kubectl apply -f -

Get Access to UI Service

The Helm chart deploys an HTTP service named kube-opex-analytics on port 80 in the selected namespace, providing access to the built-in dashboard of kube-opex-analytics.
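If the service is not exposed outside the cluster, a simple way to reach the dashboard from a workstation is a standard port-forward (the local port 8080 below is arbitrary):

$ kubectl -n kube-opex-analytics port-forward svc/kube-opex-analytics 8080:80
$ # The built-in dashboard is then available at http://127.0.0.1:8080/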

Export Charts and Datasets (PNG, CSV, JSON)

Any chart provided by kube-opex-analytics can be exported, either as a PNG image or as a CSV or JSON data file.

The export steps are the following:

  • Get access to kube-opex-analytics's interface.

  • Go to the chart whose dataset you want to export.

  • Click on the tricolon (vertical three-dot) icon near the chart title, then select the desired export format.

  • You're done; the last step downloads the resulting file instantly.

Prometheus Exporter

Starting from version 0.3.0, kube-opex-analytics provides a Prometheus exporter through the endpoint /metrics.

The exporter exposes the following metrics:

  • koa_namespace_hourly_usage exposes for each namespace its current hourly resource usage for both CPU and memory.
  • koa_namespace_daily_usage exposes for each namespace and for the ongoing day, its current resource usage for both CPU and memory.
  • koa_namespace_monthly_usage exposes for each namespace and for the ongoing month, its current resource usage for both CPU and memory.
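As a quick sanity check, the exporter can be queried directly (assuming the service name and port used elsewhere in this guide):

$ curl -s http://kube-opex-analytics:5483/metrics | grep koa_namespace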

The Prometheus scraping job can be configured as below (adapt the target URL if needed). A scraping interval of less than 5 minutes (i.e. 300s) is unnecessary, as kube-opex-analytics does not generate new metrics in the meantime.

scrape_configs:
  - job_name: 'kube-opex-analytics'
    scrape_interval: 300s
    static_configs:
      - targets: ['kube-opex-analytics:5483']

When the option prometheusOperator is enabled during the deployment (see the Helm values.yaml file), there is nothing to do: scraping should be configured automatically by the deployed Prometheus ServiceMonitor.

Grafana Dashboards

You can either build your own Grafana dashboard or use our official one.

This official Grafana dashboard, shown below, is designed to work out of the box with the kube-opex-analytics Prometheus exporter. It requires a Grafana variable named KOA_DS_PROMETHEUS, which must point to your Prometheus server data source.

The dashboard currently provides the following reports:

  • Hourly resource usage over time.
  • Current day's ongoing resource usage.
  • Current month's ongoing resource usage.

Note that these reports are less rich than the ones provided by the built-in kube-opex-analytics dashboard. In particular, the daily and monthly usage for the different namespaces is not stacked, nor are there analytics for past days and months. These limitations are inherent to how Grafana handles time series and bar charts.

Multi-cluster analytics

Thanks to a partnership with the 2Alchemists SAS company, this feature is now implemented by Krossboard.

It's actively tested against Amazon EKS, Microsoft AKS, Google GKE, Red Hat OpenShift, Rancher RKE, and various vanilla deployments. [Learn more...]

License & Copyrights

kube-opex-analytics (code and documentation) is licensed under the terms of Apache License 2.0. Read the LICENSE file for more details on the license terms.

It includes and is bound to third-party libraries provided with their own licenses and copyrights. Read the NOTICE file for additional information.

Support & Contributions

We encourage feedback and always do our best to resolve any issues you may encounter when using the tool.

Here is the link to submit issues: https://github.com/rchakode/kube-opex-analytics/issues.

New ideas are welcome; if you have an idea to improve the tool, please open an issue to submit it.

Contributions are accepted provided that the code and documentation are released under the terms of the Apache 2.0 License.

To contribute bug patches or new features, please submit a Pull Request.
