
otoolep / Hraftd

Licence: MIT
A reference use of Hashicorp's Raft implementation


Projects that are alternatives of or similar to Hraftd

Verdi Raft
An implementation of the Raft distributed consensus protocol, verified in Coq using the Verdi framework
Stars: ✭ 143 (-80.46%)
Mutual labels:  key-value, consensus, raft, distributed-systems
Etcd
Distributed reliable key-value store for the most critical data of a distributed system
Stars: ✭ 38,238 (+5123.77%)
Mutual labels:  key-value, consensus, raft, distributed-systems
Dragonboat
Dragonboat is a high performance multi-group Raft consensus library in pure Go.
Stars: ✭ 3,983 (+444.13%)
Mutual labels:  consensus, raft, distributed-systems
Raft
Raft Consensus Algorithm
Stars: ✭ 370 (-49.45%)
Mutual labels:  consensus, raft, distributed-systems
Nuraft
C++ implementation of Raft core logic as a replication library
Stars: ✭ 428 (-41.53%)
Mutual labels:  consensus, raft, distributed-systems
Atomix
A reactive Java framework for building fault-tolerant distributed systems
Stars: ✭ 2,182 (+198.09%)
Mutual labels:  consensus, raft, distributed-systems
Tikv
Distributed transactional key-value database, originally created to complement TiDB
Stars: ✭ 10,403 (+1321.17%)
Mutual labels:  key-value, consensus, raft
raft-badger
Badger-based backend for Hashicorp's raft package
Stars: ✭ 27 (-96.31%)
Mutual labels:  key-value, raft, consensus
little-raft
The lightest distributed consensus library. Run your own replicated state machine! ❤️
Stars: ✭ 316 (-56.83%)
Mutual labels:  distributed-systems, raft, consensus
huffleraft
Replicated key-value store driven by the raft consensus protocol 🚵
Stars: ✭ 32 (-95.63%)
Mutual labels:  distributed-systems, key-value, raft
Elasticell
Elastic Key-Value Storage With Strong Consistency and Reliability
Stars: ✭ 453 (-38.11%)
Mutual labels:  key-value, raft, distributed-systems
Js
Gryadka is a minimalistic master-master replicated consistent key-value storage based on the CASPaxos protocol
Stars: ✭ 304 (-58.47%)
Mutual labels:  key-value, consensus, distributed-systems
Zatt
Python implementation of the Raft algorithm for distributed consensus
Stars: ✭ 119 (-83.74%)
Mutual labels:  consensus, raft, distributed-systems
Copycat
A novel implementation of the Raft consensus algorithm
Stars: ✭ 551 (-24.73%)
Mutual labels:  consensus, raft, distributed-systems
Bifrost
Pure rust building block for distributed systems
Stars: ✭ 118 (-83.88%)
Mutual labels:  consensus, raft, distributed-systems
Rqlite
The lightweight, distributed relational database built on SQLite
Stars: ✭ 9,147 (+1149.59%)
Mutual labels:  consensus, raft, distributed-systems
Zookeeper
Apache ZooKeeper
Stars: ✭ 10,061 (+1274.45%)
Mutual labels:  distributed-systems, key-value, consensus
raftor
Distributed chat system built with rust
Stars: ✭ 31 (-95.77%)
Mutual labels:  distributed-systems, raft, consensus
coolbeans
Coolbeans is a distributed work queue that implements the beanstalkd protocol.
Stars: ✭ 56 (-92.35%)
Mutual labels:  distributed-systems, raft, consensus
raft-rocks
A simple database based on raft and rocksdb
Stars: ✭ 38 (-94.81%)
Mutual labels:  key-value, raft, consensus

For background on this project check out this blog post.

hraftd

hraftd is a reference example use of the Hashicorp Raft implementation v1.0. Raft is a distributed consensus protocol, meaning its purpose is to ensure that a set of nodes -- a cluster -- agree on the state of some arbitrary state machine, even when nodes are vulnerable to failure and network partitions. Distributed consensus is a fundamental concept when it comes to building fault-tolerant systems.

A simple example system like hraftd makes it easy to study the Raft consensus protocol in general, and Hashicorp's Raft implementation in particular. It can be run on Linux, macOS, and Windows.
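
Concretely, a store built on Hashicorp's Raft implements the library's raft.FSM interface, which applies committed log entries to the state machine. The following is a minimal sketch of that shape (an in-memory map updated by JSON-encoded commands), not hraftd's actual code; the command encoding and field names are assumptions made for the example.

```go
package store

import (
	"encoding/json"
	"io"
	"sync"

	"github.com/hashicorp/raft"
)

// command is a hypothetical log-entry payload used by this sketch.
type command struct {
	Op    string `json:"op"`
	Key   string `json:"key"`
	Value string `json:"value"`
}

// fsm is a minimal in-memory key-value state machine.
type fsm struct {
	mu sync.Mutex
	m  map[string]string
}

// Apply is called by Raft once a log entry has been committed.
func (f *fsm) Apply(l *raft.Log) interface{} {
	var c command
	if err := json.Unmarshal(l.Data, &c); err != nil {
		return err
	}
	f.mu.Lock()
	defer f.mu.Unlock()
	switch c.Op {
	case "set":
		f.m[c.Key] = c.Value
	case "delete":
		delete(f.m, c.Key)
	}
	return nil
}

// Snapshot and Restore are also required by raft.FSM; a real store would
// serialise and reload the map here. They are stubbed for brevity.
func (f *fsm) Snapshot() (raft.FSMSnapshot, error) { return nil, nil }
func (f *fsm) Restore(rc io.ReadCloser) error      { return rc.Close() }
```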

Reading and writing keys

The reference implementation is a very simple in-memory key-value store. You can set a key by sending a request to the HTTP bind address (which defaults to localhost:11000):

curl -XPOST localhost:11000/key -d '{"foo": "bar"}'

You can read the value for a key like so:

curl -XGET localhost:11000/key/foo
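
The same operations can be driven from a Go program rather than curl. A minimal sketch, assuming the default HTTP bind address of localhost:11000:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// Set a key by POSTing a JSON object to /key.
	resp, err := http.Post("http://localhost:11000/key", "application/json",
		strings.NewReader(`{"foo": "bar"}`))
	if err != nil {
		panic(err)
	}
	resp.Body.Close()

	// Read the value back with a GET on /key/<key name>.
	resp, err = http.Get("http://localhost:11000/key/foo")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // expect something like {"foo":"bar"}
}
```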

Running hraftd

Building hraftd requires Go 1.13 or later. gvm is a great tool for installing and managing your versions of Go.

Starting and running a hraftd cluster is easy. Download hraftd like so:

mkdir hraftd
cd hraftd/
export GOPATH=$PWD
GO111MODULE=on go get github.com/otoolep/hraftd

Run your first hraftd node like so:

$GOPATH/bin/hraftd -id node0 ~/node0

You can now set a key and read its value back:

curl -XPOST localhost:11000/key -d '{"user1": "batman"}'
curl -XGET localhost:11000/key/user1

Bring up a cluster

A walkthrough of setting up a more realistic cluster is here.

Let's bring up 2 more nodes, so we have a 3-node cluster. That way we can tolerate the failure of 1 node:

$GOPATH/bin/hraftd -id node1 -haddr :11001 -raddr :12001 -join :11000 ~/node1
$GOPATH/bin/hraftd -id node2 -haddr :11002 -raddr :12002 -join :11000 ~/node2

This example shows each hraftd node running on the same host, so each node must listen on different ports. This would not be necessary if each node ran on a different host.

This tells each new node to join the existing node. Once joined, each node now knows about the key:

curl -XGET localhost:11000/key/user1
curl -XGET localhost:11001/key/user1
curl -XGET localhost:11002/key/user1
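
Under the hood, the -join flag causes each new node to announce its ID and Raft address to the existing node, which then asks Raft to add it as a voting member. The sketch below shows roughly what that server-side step looks like; the handler shape and JSON field names are assumptions for illustration, while AddVoter is the hashicorp/raft call used to grow a cluster.

```go
package httpd

import (
	"encoding/json"
	"net/http"

	"github.com/hashicorp/raft"
)

// joinHandler is an illustrative handler for a join request. r is this
// node's *raft.Raft instance.
func joinHandler(r *raft.Raft) http.HandlerFunc {
	return func(w http.ResponseWriter, req *http.Request) {
		var body struct {
			ID   string `json:"id"`   // e.g. "node1"
			Addr string `json:"addr"` // the new node's Raft address, e.g. ":12001"
		}
		if err := json.NewDecoder(req.Body).Decode(&body); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// AddVoter asks Raft to add the node as a voting member of the
		// cluster. Only the leader can do this; on a follower the
		// returned future resolves to an error.
		f := r.AddVoter(raft.ServerID(body.ID), raft.ServerAddress(body.Addr), 0, 0)
		if err := f.Error(); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
		}
	}
}
```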

Furthermore, you can add a second key:

curl -XPOST localhost:11000/key -d '{"user2": "robin"}'

Confirm that the new key has been set like so:

curl -XGET localhost:11000/key/user2
curl -XGET localhost:11001/key/user2
curl -XGET localhost:11002/key/user2

Stale reads

Because any node will answer a GET request, and nodes may "fall behind" updates, stale reads are possible. Again, hraftd is a simple program, for the purpose of demonstrating a distributed key-value store. If you are particularly interested in learning more about this issue, you should check out rqlite. rqlite allows the client to control read consistency, so the client can trade off read-responsiveness against correctness.

Read-consistency support could be ported to hraftd if necessary.
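
One way such support typically works is to have the serving node confirm it is still the leader before answering a read. A minimal sketch of that idea on top of hashicorp/raft, with invented helper and parameter names:

```go
package store

import (
	"fmt"

	"github.com/hashicorp/raft"
)

// consistentGet answers a read only after confirming this node is still the
// cluster leader, which rules out serving a value that a newer leader has
// already replaced. kv stands in for the store's in-memory map.
func consistentGet(r *raft.Raft, kv map[string]string, key string) (string, error) {
	// VerifyLeader round-trips to a quorum of nodes to confirm leadership.
	if err := r.VerifyLeader().Error(); err != nil {
		return "", fmt.Errorf("not leader: %w", err)
	}
	return kv[key], nil
}
```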

Tolerating failure

Kill the leader process and watch one of the other nodes be elected leader. The keys are still available for query on the other nodes, and you can set keys on the new leader. Furthermore, when the first node is restarted, it will rejoin the cluster and learn about any updates that occurred while it was down.

A 3-node cluster can tolerate the failure of a single node, and a 5-node cluster can tolerate the failure of two. In general, a cluster keeps operating as long as a majority of its nodes (a quorum) remains available, because every change must be acknowledged by a quorum before it is committed. The trade-off is that larger clusters require the leader to contact more nodes before any change, e.g. setting a key's value, can be considered committed.
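
A tiny worked example of the arithmetic behind those numbers, for a few odd cluster sizes:

```go
package main

import "fmt"

func main() {
	for _, n := range []int{3, 5, 7} {
		quorum := n/2 + 1       // nodes that must acknowledge a change
		tolerated := n - quorum // nodes that can fail without losing quorum
		fmt.Printf("%d-node cluster: quorum of %d, tolerates %d failure(s)\n",
			n, quorum, tolerated)
	}
}
```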

Leader-forwarding

Automatically forwarding requests to set keys to the current leader is not implemented. The client must send requests that change a key to the leader, or an error will be returned.
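
Until such forwarding exists, a write handler can only check its own Raft state and, when it is not the leader, report where the leader is so the client can retry. A hedged sketch (the wrapper itself is invented for illustration; State and Leader are hashicorp/raft calls):

```go
package httpd

import (
	"net/http"

	"github.com/hashicorp/raft"
)

// requireLeader is an illustrative wrapper that rejects writes on followers.
// It reports the leader's Raft address in the response body; mapping that to
// the leader's HTTP address is left to the application.
func requireLeader(r *raft.Raft, next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, req *http.Request) {
		if r.State() != raft.Leader {
			http.Error(w, "not leader; current leader Raft address: "+string(r.Leader()),
				http.StatusServiceUnavailable)
			return
		}
		next(w, req)
	}
}
```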

Production use of Raft

For a production-grade example of using Hashicorp's Raft implementation to replicate a SQLite database, check out rqlite.
