komuw / kshaka

Licence: MIT
Kshaka is a Go implementation of the CASPaxos consensus protocol.


Kshaka


Kshaka is a Go implementation of the CASPaxos consensus protocol.
Its name is derived from the Kenyan hip hop group, Kalamashaka.

"CASPaxos is a replicated state machine (RSM) protocol. Unlike Raft and Multi-Paxos, it doesn’t use leader election and log replication, thus avoiding associated complexity.
Its symmetric peer-to-peer approach achieves optimal commit latency in wide-area networks and doesn’t cause transient unavailability when any [N−1] of N nodes crash." - The CASPaxos whitepaper

This is a work in progress; do not use it anywhere you would regret it. The API will change over time.

Installation

  • todo

Usage

package main

import (
	"fmt"

	"github.com/hashicorp/raft-boltdb"
	"github.com/komuw/kshaka"
)

func main() {
	// The store should, ideally, be disk-persisted.
	// Anything that implements the hashicorp/raft StableStore interface will suffice.
	boltStore, err := raftboltdb.NewBoltStore("/tmp/bolt.db")
	if err != nil {
		panic(err)
	}

	// The function that will be applied by CASPaxos.
	// This will be applied to the current value stored
	// under the key passed into the Propose method of the proposer.
	var setFunc = func(val []byte) kshaka.ChangeFunction {
		return func(current []byte) ([]byte, error) {
			return val, nil
		}
	}

	// Note that, in practice, each node should ideally be
	// on a different machine, each with its own store.
	node1 := kshaka.NewNode(1, boltStore)
	node2 := kshaka.NewNode(2, boltStore)
	node3 := kshaka.NewNode(3, boltStore)

	transport1 := &kshaka.InmemTransport{Node: node1}
	transport2 := &kshaka.InmemTransport{Node: node2}
	transport3 := &kshaka.InmemTransport{Node: node3}
	node1.AddTransport(transport1)
	node2.AddTransport(transport2)
	node3.AddTransport(transport3)

	kshaka.MingleNodes(node1, node2, node3)

	key := []byte("name")
	val := []byte("Masta-Ace")

	// make a proposition; consensus via CASPaxos will happen
	newstate, err := node2.Propose(key, setFunc(val))
	if err != nil {
		fmt.Printf("err: %v", err)
	}
	fmt.Printf("\n newstate: %v \n", newstate)
}

System design

1. Intro:

  • Clients initiate a request by communicating with a proposer; clients may be stateless, and the system may have an arbitrary number of clients.

  • Proposers perform the initialization by communicating with acceptors. Proposers keep the minimal state needed to generate unique, increasing update IDs (Ballot numbers); the system may have an arbitrary number of proposers.

  • Acceptors store the accepted value; the system should have 2F+1 acceptors to tolerate F failures.

  • It’s convenient to use tuples as Ballot numbers. To generate one, a proposer combines its numerical ID with a local increasing counter: (counter, ID). To compare Ballot tuples, we compare the first components and use the IDs only as a tiebreaker.

  • When a proposer receives a conflicting message from an acceptor, it should fast-forward its counter to avoid future conflicts. If an acceptor returns a conflict because it already saw a greater Ballot number during the prepare phase, does the proposer retry with a higher Ballot number or does it just stop? Answer: it doesn't matter from the protocol's point of view, and different implementations may handle it in different ways. - https://twitter.com/rystsov/status/971797758284677120
    Proposers in Kshaka will, for the time being, not retry after conflicts.

  • Clients change the value by submitting side-effect-free functions which take the current state as an argument and yield a new state as a result. Of the concurrent requests, only one can succeed; we should acquire a lock:: https://github.com/gryadka/js/blob/dfc6ed6f7580c895a9db44d06756a3dd637e47f6/core/src/Proposer.js#L47-L48
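The Ballot-tuple scheme described above can be sketched as follows. Note that `Ballot` and the method names here are illustrative, not Kshaka's actual API:

```go
package main

import "fmt"

// Ballot is a (counter, proposerID) tuple used to order proposals.
type Ballot struct {
	Counter uint64
	ID      uint64 // the proposer's numerical ID, used only as a tiebreaker
}

// GreaterThan compares two Ballots: counters first, IDs break ties.
func (b Ballot) GreaterThan(other Ballot) bool {
	if b.Counter != other.Counter {
		return b.Counter > other.Counter
	}
	return b.ID > other.ID
}

// FastForward bumps the local counter past a conflicting Ballot, so the
// next Ballot this proposer generates will be greater.
func (b *Ballot) FastForward(conflicting Ballot) {
	if conflicting.Counter >= b.Counter {
		b.Counter = conflicting.Counter + 1
	}
}

func main() {
	a := Ballot{Counter: 3, ID: 1}
	b := Ballot{Counter: 3, ID: 2}
	fmt.Println(b.GreaterThan(a)) // equal counters, so the ID breaks the tie: true

	a.FastForward(b)
	fmt.Println(a.GreaterThan(b)) // after fast-forwarding, a wins: true
}
```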

2. Algo:

A. Prepare phase

  • A client submits the change function f to a proposer.
  • The proposer generates a Ballot number, B, and sends ”prepare” messages containing that number (and its ID) to the acceptors.
  • An acceptor returns a conflict if it has already seen a greater Ballot number, also submitting the Ballot number and accepted value it has. Otherwise it persists the Ballot number as a promise and returns a confirmation, either with an empty value (if it hasn’t accepted any value yet) or with a tuple of an accepted value and its Ballot number.
  • The proposer waits for F + 1 confirmations.

B. Accept phase

  • If they (the prepare replies from acceptors) all contain the empty value, then the proposer defines the current state as ∅; otherwise it picks the value of the tuple with the highest Ballot number.
  • The proposer applies the f function to the current state and sends the result, the new state, along with the generated Ballot number B (an ”accept” message) to the acceptors.
  • An acceptor returns a conflict if it has already seen a greater Ballot number, also submitting the Ballot number and accepted value it has. Otherwise it erases the promise, marks the received tuple (Ballot number, value) as the accepted value and returns a confirmation.

C. End

  • The proposer waits for F + 1 confirmations.
  • The proposer returns the new state to the client.
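Steps A through C above can be sketched end-to-end as a minimal, single-process model. The types and helper names below are illustrative assumptions, not Kshaka's actual API, and the "network" is just direct method calls:

```go
package main

import "fmt"

// Ballot is a (counter, proposerID) tuple; counters compare first, IDs break ties.
type Ballot struct{ Counter, ID uint64 }

func (b Ballot) Less(o Ballot) bool {
	if b.Counter != o.Counter {
		return b.Counter < o.Counter
	}
	return b.ID < o.ID
}

// Acceptor stores a promise and the last accepted (Ballot, value) tuple.
type Acceptor struct {
	promise  Ballot
	accepted Ballot
	value    []byte
}

// Prepare persists the Ballot as a promise, or reports a conflict if a
// greater Ballot was already seen.
func (a *Acceptor) Prepare(b Ballot) (accepted Ballot, value []byte, ok bool) {
	if b.Less(a.promise) || b.Less(a.accepted) {
		return a.accepted, a.value, false // conflict
	}
	a.promise = b
	return a.accepted, a.value, true
}

// Accept erases the promise and marks (Ballot, value) as accepted, or
// reports a conflict if a greater Ballot was already seen.
func (a *Acceptor) Accept(b Ballot, val []byte) bool {
	if b.Less(a.promise) || b.Less(a.accepted) {
		return false // conflict
	}
	a.promise = Ballot{}
	a.accepted, a.value = b, val
	return true
}

// ChangeFunction is a side-effect-free function over the current state.
type ChangeFunction func(current []byte) ([]byte, error)

// Propose runs one CASPaxos round against 2F+1 acceptors, waiting for
// F+1 confirmations in each phase.
func Propose(acceptors []*Acceptor, b Ballot, f ChangeFunction) ([]byte, error) {
	quorum := len(acceptors)/2 + 1

	// Prepare phase: collect confirmations and pick the value carrying
	// the highest accepted Ballot (∅ if none accepted anything yet).
	var current []byte
	var highest Ballot
	confirmations := 0
	for _, a := range acceptors {
		acc, val, ok := a.Prepare(b)
		if !ok {
			continue
		}
		confirmations++
		if highest.Less(acc) {
			highest, current = acc, val
		}
	}
	if confirmations < quorum {
		return nil, fmt.Errorf("prepare: only %d of %d needed confirmations", confirmations, quorum)
	}

	// Accept phase: apply f and broadcast the new state with the same Ballot.
	newState, err := f(current)
	if err != nil {
		return nil, err
	}
	confirmations = 0
	for _, a := range acceptors {
		if a.Accept(b, newState) {
			confirmations++
		}
	}
	if confirmations < quorum {
		return nil, fmt.Errorf("accept: only %d of %d needed confirmations", confirmations, quorum)
	}
	return newState, nil
}

func main() {
	acceptors := []*Acceptor{{}, {}, {}} // 2F+1 = 3 acceptors, tolerating F = 1 failure
	set := func(val []byte) ChangeFunction {
		return func(current []byte) ([]byte, error) { return val, nil }
	}
	state, err := Propose(acceptors, Ballot{Counter: 1, ID: 1}, set([]byte("Masta-Ace")))
	fmt.Println(string(state), err)
}
```

A real deployment would replace the direct method calls with a transport, and a conflicting acceptor reply would carry the greater Ballot so the proposer can fast-forward its counter.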

3. Cluster membership change

  • todo

4. Deleting record/s

  • todo

5. Optimizations

  • todo

dev

To debug one test:

dlv test -- -test.v -test.run ^Test_proposer_Propose