
PikaLabs / Floyd

License: GPL-3.0
A Raft consensus implementation that is simple and understandable

Projects that are alternatives of or similar to Floyd

Verdi Raft
An implementation of the Raft distributed consensus protocol, verified in Coq using the Verdi framework
Stars: ✭ 143 (-44.79%)
Mutual labels:  consensus, raft
raft-rocks
A simple database based on raft and rocksdb
Stars: ✭ 38 (-85.33%)
Mutual labels:  raft, consensus
Atomix
A reactive Java framework for building fault-tolerant distributed systems
Stars: ✭ 2,182 (+742.47%)
Mutual labels:  consensus, raft
X0
Xzero HTTP Application Server
Stars: ✭ 111 (-57.14%)
Mutual labels:  consensus, raft
raft
raft is a golang library that provides a simple, clean, and idiomatic implementation of the Raft consensus protocol
Stars: ✭ 35 (-86.49%)
Mutual labels:  raft, consensus
Bifrost
Pure rust building block for distributed systems
Stars: ✭ 118 (-54.44%)
Mutual labels:  consensus, raft
raft-badger
Badger-based backend for Hashicorp's raft package
Stars: ✭ 27 (-89.58%)
Mutual labels:  raft, consensus
Tikv
Distributed transactional key-value database, originally created to complement TiDB
Stars: ✭ 10,403 (+3916.6%)
Mutual labels:  consensus, raft
FISCO-BCOS
FISCO BCOS is a secure, controllable, enterprise-grade financial blockchain platform, open-sourced and developed under the leadership of the FISCO (Golden Chain Alliance) consortium headed by WeBank. In a single-chain configuration it reaches a TPS in the tens of thousands. It provides group architecture, parallel computing, distributed storage, pluggable consensus mechanisms, privacy-protection algorithms, end-to-end support for Chinese national cryptographic algorithms, and many other features. It has been validated over a long period in production by multiple institutions and applications, offering financial-grade high performance, high availability, and high security.
Stars: ✭ 1,603 (+518.92%)
Mutual labels:  raft, consensus
Raft-Paxos-Sample
An MIT 6.824 implementation of distributed consensus algorithms: Raft & Paxos
Stars: ✭ 37 (-85.71%)
Mutual labels:  raft, consensus
Trepang
Trepang is an implementation of Raft Algorithm in Go
Stars: ✭ 111 (-57.14%)
Mutual labels:  consensus, raft
little-raft
The lightest distributed consensus library. Run your own replicated state machine! ❤️
Stars: ✭ 316 (+22.01%)
Mutual labels:  raft, consensus
Yaraft
Yet Another RAFT implementation
Stars: ✭ 109 (-57.92%)
Mutual labels:  consensus, raft
Zatt
Python implementation of the Raft algorithm for distributed consensus
Stars: ✭ 119 (-54.05%)
Mutual labels:  consensus, raft
Etcd
Distributed reliable key-value store for the most critical data of a distributed system
Stars: ✭ 38,238 (+14663.71%)
Mutual labels:  consensus, raft
Sofa Jraft
A production-grade java implementation of RAFT consensus algorithm.
Stars: ✭ 2,618 (+910.81%)
Mutual labels:  consensus, raft
Consensus Yaraft
consensus-yaraft is a library for distributed, strong consistent, highly replicated log storage. It's based on yaraft, which is an implementation of the Raft protocol.
Stars: ✭ 30 (-88.42%)
Mutual labels:  consensus, raft
Rqlite
The lightweight, distributed relational database built on SQLite
Stars: ✭ 9,147 (+3431.66%)
Mutual labels:  consensus, raft
openraft
rust raft with improvements
Stars: ✭ 826 (+218.92%)
Mutual labels:  raft, consensus
raftor
Distributed chat system built with rust
Stars: ✭ 31 (-88.03%)
Mutual labels:  raft, consensus

Floyd (also available in Chinese / 中文)


Floyd is a C++ library based on the Raft consensus protocol.

  • Raft is a consensus algorithm that is easy to understand;
  • Floyd is a library that can be easily embedded into a user's application;
  • Floyd provides consistency across cluster nodes through APIs such as Read/Write/Delete (a minimal usage sketch follows this list);
  • It also offers query and debug management APIs: GetLeader/GetServerStatus/set_log_level;
  • Floyd supports lock operations on top of the Raft consensus protocol.
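For a feel of how the library is embedded, here is a minimal sketch of starting one node and issuing a consensus write and read. The header path, the Options constructor arguments (cluster members, local ip, local port, data path), and the exact method signatures are assumptions based on Floyd's public header and the examples further below; check the header in your checkout before relying on them.

// Minimal embedding sketch (assumed API; verify against floyd/include/floyd.h).
#include <iostream>
#include <string>

#include "floyd/include/floyd.h"  // assumed header path inside the floyd repo

int main() {
  // All cluster members, this node's ip and port, and a local data directory.
  floyd::Options options(
      "127.0.0.1:8901,127.0.0.1:8902,127.0.0.1:8903,127.0.0.1:8904,127.0.0.1:8905",
      "127.0.0.1", 8901, "./data1/");

  floyd::Floyd* f = nullptr;
  slash::Status s = floyd::Floyd::Open(options, &f);  // starts the raft threads
  if (!s.ok()) {
    std::cerr << "open failed: " << s.ToString() << std::endl;
    return 1;
  }

  // Consensus Write/Read: the write is replicated through the raft log
  // before the call returns.
  s = f->Write("hello", "world");
  std::string value;
  s = f->Read("hello", &value);
  std::cout << "hello => " << value << std::endl;

  delete f;
  return 0;
}

The member string and data path above mirror the five-node layout used by run.sh in example/redis/ below.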

Users

  • Floyd provides a highly available store for the meta cluster of Zeppelin, a large distributed key-value storage system.
  • Floyd's lock interface is used in our production service pika_hub.
  • The list goes on.

Why do we prefer a library to a service?

When we want to coordinate services, ZooKeeper is always a good choice.

  • But we have to maintain another service.
  • We must use its SDK at the same time.

In our opinion, a single service is much simpler than two services. As a result, an embedded library can be a better choice.

Floyd's Features and APIs

Type        API                Status
Consensus   Read               supported
Consensus   Write              supported
Consensus   Delete             supported
Local       DirtyRead          supported
Local       DirtyWrite         supported
Query       GetLeader          supported
Query       GetServerStatus    supported
Debug       set_log_level      supported

  • Raft features

Language    Leader Election + Log Replication    Membership Changes    Log Compaction
C++         Yes                                  No                    No
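The Local, Query, and Debug rows of the table can be exercised in the same way. The calls below are a hedged sketch; the exact signatures (pointer vs. reference arguments in particular) are assumptions taken from the public header rather than a verified listing.

// Hedged sketch of the Local/Query/Debug APIs (assumed signatures).
#include <iostream>
#include <string>

#include "floyd/include/floyd.h"  // assumed header path

void InspectNode(floyd::Floyd* f) {
  // Local DirtyRead: served from this node's store without a raft round,
  // so it may return stale data.
  std::string value;
  slash::Status s = f->DirtyRead("hello", &value);
  if (!s.ok()) {
    std::cerr << "dirty read failed: " << s.ToString() << std::endl;
  }

  // Query: current leader and a human-readable server status dump.
  std::string ip;
  int port = 0;
  if (f->GetLeader(&ip, &port)) {
    std::cout << "leader is " << ip << ":" << port << std::endl;
  }
  std::string status;
  f->GetServerStatus(&status);
  std::cout << status << std::endl;

  // Debug: adjust the log verbosity at runtime.
  f->set_log_level(0);
}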

Compile and Have a Try

  • Dependencies

    • gcc version 4.8+ to support C++11.
    • protobuf-devel
    • snappy-devel
    • bzip2-devel
    • zlib-devel
    • bzip2
    • submodules:
      • Pink
      • Slash
  • Get the source code and submodules recursively:

git clone --recursive https://github.com/Qihoo360/floyd.git
  • Compile and build the libfloyd.a library:
make

Example

There are currently three examples in the example directory; go into the corresponding directory and compile each one.

example/simple/

contains a number of small example programs that wrap Floyd.

Build all of the simple examples with the make command; each example starts Floyd with five nodes.

make
  1. t is a single-threaded write tool used to measure performance
  2. t1 is a multi-threaded program used to measure performance; in this case, all writes go through the leader node
  3. t2 is an example that tests node join and leave
  4. t4 is an example used to observe the messages passed between nodes in a stable cluster
  5. t5 is used to test single-node Floyd, including starting a node and writing data
  6. t6 is the same as t1 except that all writes go through a follower node
  7. t7 tests writing to a 3-node cluster and then joining the other 2 nodes

example/redis/

raftis is a consensus server with 5 nodes that supports the redis protocol (get/set commands). raftis is an example of building a consensus cluster with Floyd (a simple implementation of the Raft protocol). It is very simple and intuitive. You can test raftis with redis-cli and benchmark it with the redis-benchmark tool.

Compile raftis with make and then start it with run.sh:

make && sh run.sh
#!/bin/sh
# start with five node
./output/bin/raftis "127.0.0.1:8901,127.0.0.1:8902,127.0.0.1:8903,127.0.0.1:8904,127.0.0.1:8905" "127.0.0.1" 8901 "./data1/" 6379 &
./output/bin/raftis "127.0.0.1:8901,127.0.0.1:8902,127.0.0.1:8903,127.0.0.1:8904,127.0.0.1:8905" "127.0.0.1" 8902 "./data2/" 6479 &
./output/bin/raftis "127.0.0.1:8901,127.0.0.1:8902,127.0.0.1:8903,127.0.0.1:8904,127.0.0.1:8905" "127.0.0.1" 8903 "./data3/" 6579 &
./output/bin/raftis "127.0.0.1:8901,127.0.0.1:8902,127.0.0.1:8903,127.0.0.1:8904,127.0.0.1:8905" "127.0.0.1" 8904 "./data4/" 6679 &
./output/bin/raftis "127.0.0.1:8901,127.0.0.1:8902,127.0.0.1:8903,127.0.0.1:8904,127.0.0.1:8905" "127.0.0.1" 8905 "./data5/" 6779 &
$ ./src/redis-benchmark -t set -n 1000000 -r 100000000 -c 20
====== SET ======
  1000000 requests completed in 219.76 seconds
  20 parallel clients
  3 bytes payload
  keep alive: 1

0.00% <= 2 milliseconds
0.00% <= 3 milliseconds
8.72% <= 4 milliseconds
95.39% <= 5 milliseconds
95.96% <= 6 milliseconds
99.21% <= 7 milliseconds
99.66% <= 8 milliseconds
99.97% <= 9 milliseconds
99.97% <= 11 milliseconds
99.97% <= 12 milliseconds
99.97% <= 14 milliseconds
99.97% <= 15 milliseconds
99.99% <= 16 milliseconds
99.99% <= 17 milliseconds
99.99% <= 18 milliseconds
99.99% <= 19 milliseconds
99.99% <= 26 milliseconds
99.99% <= 27 milliseconds
100.00% <= 28 milliseconds
100.00% <= 29 milliseconds
100.00% <= 30 milliseconds
100.00% <= 61 milliseconds
100.00% <= 62 milliseconds
100.00% <= 63 milliseconds
100.00% <= 63 milliseconds
4550.31 requests per second

example/kv/

A simple consensus KV example containing a server and a client built with Floyd.

Test

Floyd has passed the Jepsen test; you can find the test cases here: jepsen

Documents

Contact us

If you are interested in the Raft protocol, have used Floyd in production, or have written an article about Floyd's source code, please contact us; we maintain an article list.
