
uwplse / Verdi Raft

Licence: bsd-2-clause
An implementation of the Raft distributed consensus protocol, verified in Coq using the Verdi framework

Projects that are alternatives to or similar to Verdi Raft

Hraftd
A reference use of Hashicorp's Raft implementation
Stars: ✭ 732 (+411.89%)
Mutual labels:  key-value, consensus, raft, distributed-systems
Etcd
Distributed reliable key-value store for the most critical data of a distributed system
Stars: ✭ 38,238 (+26639.86%)
Mutual labels:  key-value, consensus, raft, distributed-systems
Disel
Distributed Separation Logic: a framework for compositional verification of distributed protocols and their implementations in Coq
Stars: ✭ 85 (-40.56%)
Mutual labels:  proof, coq, distributed-systems
Dragonboat
Dragonboat is a high performance multi-group Raft consensus library in pure Go.
Stars: ✭ 3,983 (+2685.31%)
Mutual labels:  consensus, raft, distributed-systems
Zookeeper
Apache ZooKeeper
Stars: ✭ 10,061 (+6935.66%)
Mutual labels:  distributed-systems, key-value, consensus
Rqlite
The lightweight, distributed relational database built on SQLite
Stars: ✭ 9,147 (+6296.5%)
Mutual labels:  consensus, raft, distributed-systems
Js
Gryadka is a minimalistic master-master replicated consistent key-value storage based on the CASPaxos protocol
Stars: ✭ 304 (+112.59%)
Mutual labels:  key-value, consensus, distributed-systems
Raft
Raft Consensus Algorithm
Stars: ✭ 370 (+158.74%)
Mutual labels:  consensus, raft, distributed-systems
huffleraft
Replicated key-value store driven by the raft consensus protocol 🚵
Stars: ✭ 32 (-77.62%)
Mutual labels:  distributed-systems, key-value, raft
Verdi
A framework for formally verifying distributed systems implementations in Coq
Stars: ✭ 496 (+246.85%)
Mutual labels:  proof, coq, distributed-systems
Elasticell
Elastic Key-Value Storage With Strong Consistency and Reliability
Stars: ✭ 453 (+216.78%)
Mutual labels:  key-value, raft, distributed-systems
Copycat
A novel implementation of the Raft consensus algorithm
Stars: ✭ 551 (+285.31%)
Mutual labels:  consensus, raft, distributed-systems
raft-rocks
A simple database based on raft and rocksdb
Stars: ✭ 38 (-73.43%)
Mutual labels:  key-value, raft, consensus
chapar
A framework for verification of causal consistency for distributed key-value stores and their clients in Coq [maintainer=@palmskog]
Stars: ✭ 29 (-79.72%)
Mutual labels:  distributed-systems, key-value, coq
Tikv
Distributed transactional key-value database, originally created to complement TiDB
Stars: ✭ 10,403 (+7174.83%)
Mutual labels:  key-value, consensus, raft
coolbeans
Coolbeans is a distributed work queue that implements the beanstalkd protocol.
Stars: ✭ 56 (-60.84%)
Mutual labels:  distributed-systems, raft, consensus
raftor
Distributed chat system built with rust
Stars: ✭ 31 (-78.32%)
Mutual labels:  distributed-systems, raft, consensus
little-raft
The lightest distributed consensus library. Run your own replicated state machine! ❤️
Stars: ✭ 316 (+120.98%)
Mutual labels:  distributed-systems, raft, consensus
Nuraft
C++ implementation of Raft core logic as a replication library
Stars: ✭ 428 (+199.3%)
Mutual labels:  consensus, raft, distributed-systems
Zatt
Python implementation of the Raft algorithm for distributed consensus
Stars: ✭ 119 (-16.78%)
Mutual labels:  consensus, raft, distributed-systems

Verdi Raft

An implementation of the Raft distributed consensus protocol, verified in Coq using the Verdi framework.

Requirements

Definitions and proofs:

  • Coq (the version checked by ./configure)
  • Verdi
  • StructTact
  • Cheerios

Executable vard key-value store:

  • OCaml and ocamlbuild
  • verdi-runtime
  • cheerios-runtime

Client for vard:

  • Python 2

Integration testing of vard:

  • Python 2

Unit testing of unverified vard code:

Building

We recommend installing the dependencies of Verdi Raft via OPAM:

opam repo add coq-extra-dev https://coq.inria.fr/opam/extra-dev
opam install coq-struct-tact coq-cheerios coq-verdi

Then, run ./configure in the Verdi Raft root directory. This will check for the appropriate version of Coq and ensure all necessary Coq dependencies can be located. By default, Verdi, StructTact, and Cheerios are assumed to be installed in Coq's user-contrib directory, but this can be overridden by setting the Verdi_PATH, StructTact_PATH, and Cheerios_PATH environment variables.
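
For example, if the dependencies were built from local checkouts rather than installed via OPAM, the paths can be supplied to ./configure directly; the sibling-directory layout below is only illustrative, so point the variables at your actual checkouts:

# example layout only: adjust the paths to wherever the checkouts live
Verdi_PATH=../verdi StructTact_PATH=../StructTact Cheerios_PATH=../cheerios ./configure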

Finally, run make in the root directory. This will compile the Raft implementation and proof interfaces, and check all the proofs. To speed up proof checking on multi-core machines, use make -jX, where X is at least the number of cores.
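
For example, on a machine with at least four cores:

make -j4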

To build the vard key-value store program in extraction/vard, you need to install additional dependencies:

opam repo add distributedcomponents-dev http://opam-dev.distributedcomponents.net
opam install verdi-runtime cheerios-runtime ocamlbuild

Then, run make vard in the root directory. If the Coq implementation has been compiled as above, this simply compiles the extracted OCaml code to a native executable; otherwise, the implementation is extracted to OCaml and compiled without checking any proofs.
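
Putting the steps together, a from-scratch vard build (assuming the OPAM dependencies above are already installed) looks like this:

make         # compile the Coq development and check the proofs (optional; see above)
make vard    # extract OCaml code and build extraction/vard/vard.native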

Files

The raft and raft-proofs subdirectories contain the implementation and verification of Raft. For each proof interface file in raft, there is a corresponding proof file in raft-proofs. The files in the raft subdirectory include:

  • Raft.v: an implementation of Raft in Verdi
  • RaftRefinementInterface.v: an application of the ghost-variable transformer to Raft, adding several ghost variables used in the verification
  • CommonTheorems.v: several useful theorems about functions used by the Raft implementation
  • OneLeaderPerTermInterface.v: a statement of Raft's election safety property. See also the corresponding proof file in raft-proofs.
    • CandidatesVoteForSelvesInterface.v, VotesCorrectInterface.v, and CroniesCorrectInterface.v: statements of properties used by the proof in OneLeaderPerTermProof.v
  • LogMatchingInterface.v: a statement of Raft's log matching property. See also LogMatchingProof.v in raft-proofs
    • LeaderSublogInterface.v, SortedInterface.v, and UniqueIndicesInterface.v: statements of properties used by LogMatchingProof.v

The file EndToEndLinearizability.v in raft-proofs uses the proofs of all proof interfaces to show Raft's linearizability property.

The vard Key-Value Store

vard is a simple key-value store implemented using Verdi. vard is specified and verified against Verdi's state-machine semantics in the VarD.v example system distributed with Verdi. When the Raft transformer is applied, vard can be run as a strongly-consistent, fault-tolerant key-value store along the lines of etcd.

After running make vard in the root directory, OCaml code for vard is extracted, compiled, and linked against a Verdi shim and some vard-specific serialization/debugging code, to produce a vard.native binary in extraction/vard.

Running make bench-vard in extraction/vard will produce some benchmark numbers, which are largely meaningless on localhost (multiple processes writing and fsync-ing to the same disk and communicating over loopback doesn't accurately model real-world use cases). Running make debug will get you a tmux session where you can play around with a vard cluster in debug mode; look in bench/vard.py for a simple Python vard client.
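
For a quick local run (with the caveat above that localhost numbers mean little), something along the following lines works; the localhost client ports here are an assumption rather than taken from the repository, so check the Makefile in extraction/vard and bench/vard.py for the actual defaults:

cd extraction/vard
make debug        # tmux session running a local vard cluster in debug mode
# in another terminal; client ports below are assumed, adjust to match the debug cluster
python2 bench/setup.py --service vard --keys 50 \
                       --cluster "localhost:8000,localhost:8001,localhost:8002"
python2 bench/bench.py --service vard --keys 50 \
                       --cluster "localhost:8000,localhost:8001,localhost:8002" \
                       --threads 8 --requests 100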

As the name suggests, vard is designed to be comparable to the etcd key-value store (although it currently supports far fewer features). To that end, we include a very simple etcd "client" that can be used for benchmarking. Running make bench-etcd will run the vard benchmarks against etcd (although, as noted above, these localhost results are not particularly meaningful). See below for instructions on running both stores on a cluster to get a more useful performance comparison.

Running vard on a cluster

vard accepts the following command-line options:

-me NAME             name for this node
-port PORT           port for client commands
-dbpath DIRECTORY    directory for storing database files
-node NAME,IP:PORT   node in the cluster
-debug               run in debug mode

Note that vard node names are integers starting from 0.

For example, to run vard on a cluster with IP addresses 192.168.0.1, 192.168.0.2, 192.168.0.3, client (input) port 8000, and port 9000 for inter-node communication, use the following:

# on 192.168.0.1
$ ./vard.native -dbpath /tmp/vard-8000 -port 8000 -me 0 -node 0,192.168.0.1:9000 \
                -node 1,192.168.0.2:9000 -node 2,192.168.0.3:9000

# on 192.168.0.2
$ ./vard.native -dbpath /tmp/vard-8000 -port 8000 -me 1 -node 0,192.168.0.1:9000 \
                -node 1,192.168.0.2:9000 -node 2,192.168.0.3:9000

# on 192.168.0.3
$ ./vard.native -dbpath /tmp/vard-8000 -port 8000 -me 2 -node 0,192.168.0.1:9000 \
                -node 1,192.168.0.2:9000 -node 2,192.168.0.3:9000

When the cluster is set up, a benchmark can be run as follows:

# on the client machine
$ python2 bench/setup.py --service vard --keys 50 \
                         --cluster "192.168.0.1:8000,192.168.0.2:8000,192.168.0.3:8000"
$ python2 bench/bench.py --service vard --keys 50 \
                         --cluster "192.168.0.1:8000,192.168.0.2:8000,192.168.0.3:8000" \
                         --threads 8 --requests 100

Running etcd on a cluster

We can compare numbers for vard and etcd running on the same cluster as follows:

# on 192.168.0.1
$ etcd --name=one \
 --listen-client-urls http://192.168.0.1:8000 \
 --advertise-client-urls http://192.168.0.1:8000 \
 --initial-advertise-peer-urls http://192.168.0.1:9000 \
 --listen-peer-urls http://192.168.0.1:9000 \
 --data-dir=/tmp/etcd \
 --initial-cluster "one=http://192.168.0.1:9000,two=http://192.168.0.2:9000,three=http://192.168.0.3:9000"

# on 192.168.0.2
$ etcd --name=two \
 --listen-client-urls http://192.168.0.2:8000 \
 --advertise-client-urls http://192.168.0.2:8000 \
 --initial-advertise-peer-urls http://192.168.0.2:9000 \
 --listen-peer-urls http://192.168.0.2:9000 \
 --data-dir=/tmp/etcd \
 --initial-cluster "one=http://192.168.0.1:9000,two=http://192.168.0.2:9000,three=http://192.168.0.3:9000"

# on 192.168.0.3
$ etcd --name=three \
 --listen-client-urls http://192.168.0.3:8000 \
 --advertise-client-urls http://192.168.0.3:8000 \
 --initial-advertise-peer-urls http://192.168.0.3:9000 \
 --listen-peer-urls http://192.168.0.3:9000 \
 --data-dir=/tmp/etcd \
 --initial-cluster "one=http://192.168.0.1:9000,two=http://192.168.0.2:9000,three=http://192.168.0.3:9000"

# on the client machine
$ python2 bench/setup.py --service etcd --keys 50 \
                         --cluster "192.168.0.1:8000,192.168.0.2:8000,192.168.0.3:8000"
$ python2 bench/bench.py --service etcd --keys 50 \
                         --cluster "192.168.0.1:8000,192.168.0.2:8000,192.168.0.3:8000" \
                         --threads 8 --requests 100