
9corp / 9volt

License: MIT
A modern, distributed monitoring system written in Go

Programming Languages

  • go: 31,211 projects (#10 most used programming language)
  • golang: 3,204 projects

Projects that are alternatives to or similar to 9volt

Nebula
Nebula is a powerful framework for building highly concurrent, distributed, and resilient message-driven applications for C++.
Stars: ✭ 385 (+140.63%)
Mutual labels:  distributed, high-performance
Pcp
Performance Co-Pilot
Stars: ✭ 716 (+347.5%)
Mutual labels:  monitoring, distributed
Xxl Rpc
A high-performance, distributed RPC framework (XXL-RPC, a distributed service framework).
Stars: ✭ 493 (+208.13%)
Mutual labels:  distributed, high-performance
Joyrpc
A high-performance, highly extensible Java RPC framework.
Stars: ✭ 290 (+81.25%)
Mutual labels:  distributed, high-performance
Maze
Maze Applied Reinforcement Learning Framework
Stars: ✭ 85 (-46.87%)
Mutual labels:  monitoring, distributed
Tarsjava
Java language implementation of the Tars RPC framework.
Stars: ✭ 321 (+100.63%)
Mutual labels:  high-availability, high-performance
Promxy
An aggregating proxy to enable HA prometheus
Stars: ✭ 562 (+251.25%)
Mutual labels:  monitoring, high-availability
radondb-mysql-kubernetes
Open-source, high-availability cluster based on MySQL.
Stars: ✭ 146 (-8.75%)
Mutual labels:  high-performance, high-availability
Tns
tns provides distributed solutions for Thrift, supporting service discovery, high availability, load balancing, gray releases, horizontal scaling, and more.
Stars: ✭ 53 (-66.87%)
Mutual labels:  high-availability, distributed
Lizardfs
LizardFS is an Open Source Distributed File System licensed under GPLv3.
Stars: ✭ 793 (+395.63%)
Mutual labels:  high-availability, high-performance
Beeping
HTTP Monitoring via API - Measure the performance of your servers
Stars: ✭ 267 (+66.88%)
Mutual labels:  monitoring, distributed
Raft.net
Implementation of the Raft distributed consensus algorithm among TCP peers on .NET / .NET Standard / .NET Core / dotnet
Stars: ✭ 112 (-30%)
Mutual labels:  high-availability, distributed
K8s
Important production-grade Kubernetes Ops Services
Stars: ✭ 253 (+58.13%)
Mutual labels:  monitoring, high-availability
Linstor Server
High-performance software-defined block storage for containers, cloud and virtualisation. Fully integrated with Docker, Kubernetes, OpenStack, Proxmox, etc.
Stars: ✭ 374 (+133.75%)
Mutual labels:  high-availability, high-performance
leafserver
🍃 A high-performance distributed unique ID generation system
Stars: ✭ 31 (-80.62%)
Mutual labels:  high-performance, distributed
Haipproxy
💖 Highly available distributed IP proxy pool, powered by Scrapy and Redis
Stars: ✭ 4,993 (+3020.63%)
Mutual labels:  high-availability, distributed
cachegrand
cachegrand is an open-source, fast, scalable and secure key-value store, fully compatible with the Redis protocol. It is designed from the ground up to take advantage of the vertical scalability of modern hardware, providing better performance and a larger cache at lower cost, without losing focus on distributed systems.
Stars: ✭ 87 (-45.62%)
Mutual labels:  high-performance, distributed
k8s-lemp
LEMP stack in a Kubernetes cluster
Stars: ✭ 74 (-53.75%)
Mutual labels:  distributed, high-availability
Agola
Agola: CI/CD Redefined
Stars: ✭ 783 (+389.38%)
Mutual labels:  high-availability, distributed
Tars
Tars is a high-performance RPC framework based on a name service and the Tars protocol. It also provides an integrated administration platform and implements service hosting via flexible scheduling.
Stars: ✭ 9,277 (+5698.13%)
Mutual labels:  high-availability, high-performance

9volt

(badges: Build Status, Go Report Card)

A modern, distributed monitoring system written in Go.

Another monitoring system? Why?

While there are a bunch of solutions for monitoring and alerting using time series data, there aren't many (or any?) modern solutions for 'regular'/'old-skool' remote monitoring similar to Nagios and Icinga.

9volt offers the following things out of the box:

  • Single binary deploy
  • Fully distributed
  • Incredibly easy to scale to hundreds of thousands of checks
  • Uses etcd for all configuration
  • Real-time configuration pick-up (update etcd - 9volt immediately picks up the change)
  • Support for assigning checks to specific (groups of) nodes
    • Helpful for getting around network restrictions (or requiring certain checks to run from a specific region)
  • Interval-based monitoring (i.e. run check XYZ every 1s, 1y, 1d or even 1ms)
  • Natively supported monitors:
    • TCP
    • HTTP
    • Exec
    • DNS
  • Natively supported alerters:
    • Slack
    • PagerDuty
    • Email
  • RESTful API for querying current monitoring state and loaded configuration
  • Comes with a built-in, React-based UI that provides another way to view and manage the cluster
  • Comes with a built-in monitor and alerter config management util (that parses and syncs YAML-based configs to etcd)
    • ./9volt cfg --help

Usage

  • Install/setup etcd
  • Download latest 9volt release
  • Start server: ./9volt server -e http://etcd-server-1.example.com:2379 -e http://etcd-server-2.example.com:2379 -e http://etcd-server-3.example.com:2379
  • Optional: use 9volt cfg for managing configs
  • Optional: add 9volt to be managed by supervisord, upstart or some other process manager
  • Optional: several configuration params can be passed to 9volt via env vars (see the condensed quick-start sketch below)
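
For convenience, here are the steps above condensed into a single shell sketch for a local, single-node layout. The 9volt server and cfg invocations mirror the steps in this README; everything else (the local etcd setup, the release download) is an illustrative assumption, so substitute the real values for your environment.

# 1. Start a single-node etcd locally (its defaults listen on http://127.0.0.1:2379;
#    see the etcd docs for clustered/production setups)
etcd &

# 2. Download the latest 9volt release for your platform from the project's
#    releases page and unpack it into the current directory (URL omitted here)

# 3. Start the server, pointing it at your etcd member(s)
./9volt server -e http://127.0.0.1:2379

# 4. Optional: manage monitor/alerter configs with the built-in cfg utility
./9volt cfg --help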

... or, if you prefer to do things via Docker, check out these docs.

H/A and scaling

Scaling 9volt is incredibly simple. Launch another 9volt service on a separate host and point it to the same etcd hosts as the main 9volt service.
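
As a rough sketch (the hostnames and flags below are the same ones shown in the Usage section):

# On the new host, start another 9volt instance against the same etcd cluster
./9volt server \
  -e http://etcd-server-1.example.com:2379 \
  -e http://etcd-server-2.example.com:2379 \
  -e http://etcd-server-3.example.com:2379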

Your main 9volt node will produce output similar to this when it detects a node join:

(screenshot: node join log output)

Checks will automatically be divided among all 9volt instances.

If one of the nodes goes down, a new leader will be elected (if the node that went down was the previous leader) and its checks will be redistributed among the remaining nodes.

This will produce output similar to the following (which will also be available in the event stream via the API and UI):

(screenshot: node leave log output)

API

API documentation can be found here.
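
If it helps, here is a purely hypothetical curl example; the listen address (:8080) and the endpoint path are assumptions rather than documented routes, so rely on the API documentation for the real ones.

# Hypothetical request - the port and path are assumptions, not documented routes
curl -s http://localhost:8080/api/v1/state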

Minimum requirements (can handle ~1,000-3,000 checks at <10s intervals)

  • 1 x 9volt instance (1 core, 256MB RAM)
  • 1 x etcd node (1 core, 256MB RAM)

Note: In the minimum configuration, you could run both 9volt and etcd on the same node.

Recommended (production) requirements (can handle 10,000+ checks at <10s intervals)

  • 3 x 9volt instances (2+ cores, 512MB RAM)
  • 3 x etcd nodes (2+ cores, 1GB RAM)

Configuration

While you can manage 9volt alerter and monitor configs via the API, another approach to config management is to use the built-in config utility (9volt cfg <flags>).

This utility scans a given directory for any YAML files that resemble 9volt configs (a file must contain either a 'monitor' or an 'alerter' section) and automatically parses, validates and pushes them to your etcd server(s).

By default, the utility will keep your local configs in sync with your etcd server(s). In other words, if the utility comes across a config entry in etcd that does not exist in your local config files, it will remove that entry from etcd (and vice versa). This behavior can be turned off with the --nosync flag.

(screenshot: cfg run output)
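
A minimal sketch of that workflow follows. The top-level 'monitor'/'alerter' sections, the check and alerter types, and the --nosync flag come from this README; every other YAML field name, the example names and the way the config directory is passed to 9volt cfg are illustrative assumptions (run ./9volt cfg --help for the actual flags).

# Write a config that the cfg utility can pick up. Field names other than the
# top-level 'monitor'/'alerter' sections are assumptions for illustration only.
mkdir -p ./configs
cat > ./configs/example.yaml <<'EOF'
monitor:
  example-http-check:        # hypothetical check name
    type: http               # natively supported checks: tcp, http, exec, dns
    interval: 10s            # interval-based, e.g. anywhere from 1ms to 1y

alerter:
  example-slack:             # hypothetical alerter name
    type: slack              # natively supported alerters: slack, pagerduty, email
EOF

# Parse, validate and push the configs to etcd. By default the utility keeps
# etcd in sync with the local directory; pass --nosync to disable that.
# (The positional directory argument is an assumption - confirm with ./9volt cfg --help.)
./9volt cfg ./configs
./9volt cfg --nosync ./configs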

You can look at an example of a YAML-based config here.

Docs

Read through the docs dir.

Suggestions/ideas

Got a suggestion/idea? Something that is preventing you from using 9volt over another monitoring system because of a missing feature? Submit an issue and we'll see what we can do!
