
etcd-io / Etcd Play

Licence: apache-2.0
etcd playground

Programming Languages

go
31211 projects - #10 most used programming language

Labels

Projects that are alternatives of or similar to Etcd Play

Etcd3
🔖 Node.js client for etcd3
Stars: ✭ 336 (+585.71%)
Mutual labels:  etcd
Etcdkeeper
web ui client for etcd
Stars: ✭ 612 (+1148.98%)
Mutual labels:  etcd
Traefik
The Cloud Native Application Proxy
Stars: ✭ 36,089 (+73551.02%)
Mutual labels:  etcd
Patroni
A template for PostgreSQL High Availability with Etcd, Consul, ZooKeeper, or Kubernetes
Stars: ✭ 4,434 (+8948.98%)
Mutual labels:  etcd
Konfig
Composable, observable and performant config handling for Go for the distributed processing era
Stars: ✭ 597 (+1118.37%)
Mutual labels:  etcd
Gonet
A distributed game server in Go, memory-based MMO
Stars: ✭ 804 (+1540.82%)
Mutual labels:  etcd
Gokv
Simple key-value store abstraction and implementations for Go (Redis, Consul, etcd, bbolt, BadgerDB, LevelDB, Memcached, DynamoDB, S3, PostgreSQL, MongoDB, CockroachDB and many more)
Stars: ✭ 314 (+540.82%)
Mutual labels:  etcd
Dister
dister (Distribution Cluster) is a lightweight, high-performance distributed cluster management tool. It implements the core components commonly needed in distributed architectures: a service configuration center, service registration and discovery, service health checks, and service load balancing. dister is inspired by ZooKeeper, Consul, and Etcd, which implement similar distributed components, but it aims to be lighter, cheaper to run, easier to maintain, architecturally cleaner, simpler to use, and more performant, which is the motivation behind its design.
Stars: ✭ 41 (-16.33%)
Mutual labels:  etcd
Tectonic Installer
Install a Kubernetes cluster the CoreOS Tectonic Way: HA, self-hosted, RBAC, etcd Operator, and more
Stars: ✭ 599 (+1122.45%)
Mutual labels:  etcd
Pyetcdlock
a mutex network lock based on etcd
Stars: ✭ 9 (-81.63%)
Mutual labels:  etcd
Etcdadm
Stars: ✭ 428 (+773.47%)
Mutual labels:  etcd
E3w
etcd v3 Web UI
Stars: ✭ 439 (+795.92%)
Mutual labels:  etcd
Kubeasz
Install a K8S cluster with Ansible scripts; explains how the components interact; straightforward to use and unaffected by network restrictions in mainland China
Stars: ✭ 7,629 (+15469.39%)
Mutual labels:  etcd
Hyperf
🚀 A coroutine framework that focuses on hyperspeed and flexibility. Building microservice or middleware with ease.
Stars: ✭ 4,206 (+8483.67%)
Mutual labels:  etcd
Example Api
A base API project to bootstrap and prototype quickly.
Stars: ✭ 27 (-44.9%)
Mutual labels:  etcd
Containerdns
a fast DNS for Kubernetes clusters
Stars: ✭ 321 (+555.1%)
Mutual labels:  etcd
Follow Me Install Kubernetes Cluster
Deploy a kubernetes cluster with me, step by step
Stars: ✭ 6,662 (+13495.92%)
Mutual labels:  etcd
Zetcd
Serve the Apache Zookeeper API but back it with an etcd cluster
Stars: ✭ 1,025 (+1991.84%)
Mutual labels:  etcd
Etcd Manage Server
Server side of etcd-manage
Stars: ✭ 30 (-38.78%)
Mutual labels:  etcd
Blog
my blog, using markdown
Stars: ✭ 25 (-48.98%)
Mutual labels:  etcd

Note: This project has been replaced by https://github.com/coreos/etcdlabs

etcd-play


etcd-play is a playground for exploring the etcd distributed key-value database. Try it out live at play.etcd.io.

Play with etcd in a web browser

etcd uses the Raft consensus algorithm to replicate data on distributed machines in order to gracefully handle network partitions, node failures, and even leader failures. The etcd team extensively tests failure scenarios in the etcd functional test suite. Real-time results from this testing are available at the etcd test dashboard.

In Raft, followers are passive, only responding to incoming RPCs. Clients can make requests to any node, follower or leader; followers forward those requests to their leader. The leader then appends the requests (commands) to its log and sends AppendEntries RPCs to all of its followers.
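Because followers forward requests to the leader internally, a client can connect to any member. The following is a minimal sketch of that flow using the official Go client; it assumes the go.etcd.io/etcd/client/v3 package and local placeholder endpoints, neither of which comes from this project. A Put issued against any endpoint ends up in the leader's log and is replicated to a quorum of followers before the response returns.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Placeholder endpoints for a local 3-node cluster; any member works,
	// follower or leader.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://localhost:2379", "http://localhost:22379", "http://localhost:32379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	// The write is routed to the leader, appended to its log, and replicated
	// via AppendEntries to a quorum before the call returns.
	if _, err := cli.Put(ctx, "foo", "bar"); err != nil {
		log.Fatal(err)
	}

	resp, err := cli.Get(ctx, "foo")
	if err != nil {
		log.Fatal(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s -> %s\n", kv.Key, kv.Value)
	}
}
```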

Follower failures

What if followers fail?

The leader retries RPCs until they succeed. As soon as a follower recovers, it will catch up with the leader.

follower-failures

In the animation above, notice the increased load on the remaining nodes while two of the followers (etcd1 and etcd2) are down. Even so, all data stays replicated across the cluster, except on the two failed nodes. As soon as the failed nodes recover, the followers sync their data from the leader, repeating this process until all hashes match.
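One way to watch the catch-up happen is to poll each member for its KV hash until they all agree, which is roughly what the hash column in the playground reflects. The sketch below assumes the go.etcd.io/etcd/client/v3 Maintenance API and local placeholder endpoints; it is not code from this project.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Placeholder endpoints for a local 3-node cluster.
	endpoints := []string{"http://localhost:2379", "http://localhost:22379", "http://localhost:32379"}

	cli, err := clientv3.New(clientv3.Config{Endpoints: endpoints, DialTimeout: 5 * time.Second})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	for {
		hashes := make(map[uint32]bool)
		reachable := 0
		for _, ep := range endpoints {
			ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
			resp, err := cli.HashKV(ctx, ep, 0) // rev=0 hashes the latest revision
			cancel()
			if err != nil {
				log.Printf("%s not reachable yet: %v", ep, err)
				continue
			}
			reachable++
			hashes[resp.Hash] = true
			fmt.Printf("%s hash=%d\n", ep, resp.Hash)
		}
		if reachable == len(endpoints) && len(hashes) == 1 {
			fmt.Println("all members report the same hash; followers have caught up")
			return
		}
		time.Sleep(time.Second)
	}
}
```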

Leader failure

What if a leader fails?

A leader sends periodic heartbeat messages to its followers to maintain its authority. If a follower does not receive a heartbeat from a valid leader within the election timeout, it assumes there is no current leader in the cluster and becomes a candidate to start a new election. Each node includes its last term and last log index in its RequestVote RPC, so that Raft can choose the candidate most likely to contain all committed entries. When the old leader recovers, it will try again to commit its own log entries. The Raft term is used to detect these stale leaders: followers reject RPCs whose sender's term is older, and the sender (often the old leader) then reverts to follower state and updates its term to the latest cluster term.
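A re-election can also be observed from a client by asking each member who it currently believes the leader is. The rough sketch below uses the go.etcd.io/etcd/client/v3 Status call with placeholder endpoints; once a new leader wins the election, the member ID reported in Leader changes and RaftTerm increases.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Placeholder endpoints for a local 3-node cluster.
	endpoints := []string{"http://localhost:2379", "http://localhost:22379", "http://localhost:32379"}

	cli, err := clientv3.New(clientv3.Config{Endpoints: endpoints, DialTimeout: 5 * time.Second})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	for _, ep := range endpoints {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		st, err := cli.Status(ctx, ep)
		cancel()
		if err != nil {
			log.Printf("%s unreachable (possibly the killed leader): %v", ep, err)
			continue
		}
		// st.Leader is the member ID this endpoint believes is the leader;
		// st.RaftTerm increases each time a new election succeeds.
		fmt.Printf("%s: leader=%x term=%d\n", ep, st.Leader, st.RaftTerm)
	}
}
```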

leader-failure

The animation above shows the leader going down and, shortly afterward, a new leader being elected.

All nodes failure

etcd is highly available as long as a quorum of cluster members is operational and can communicate with each other and with clients. A 5-node cluster can tolerate the failure of any two members. Data loss is still possible in catastrophic events, such as all nodes failing. etcd persists enough information on stable storage for members to recover safely from disk and rejoin the cluster. In particular, etcd writes new log entries to disk before committing them, so that committed entries are not lost on an unexpected restart.

all-node-failures

The animation above shows all nodes being terminated with the Kill button. etcd recovers the data from stable storage: the number of keys and the hash values match before and after. The cluster can serve client requests immediately after recovery, once a new leader is elected.
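The before/after comparison can be reproduced with a couple of client calls: count the keys without fetching their values, and take the KV hash at the latest revision. This is a sketch under the same assumptions as above (go.etcd.io/etcd/client/v3, placeholder endpoint), not code from the playground itself.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	endpoint := "http://localhost:2379" // placeholder endpoint

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{endpoint},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	// Count all keys without transferring any values.
	resp, err := cli.Get(ctx, "", clientv3.WithPrefix(), clientv3.WithCountOnly())
	if err != nil {
		log.Fatal(err)
	}

	// Hash of the whole KV store at the latest revision.
	hash, err := cli.HashKV(ctx, endpoint, 0)
	if err != nil {
		log.Fatal(err)
	}

	// Run this before killing the nodes and again after recovery; the two
	// outputs should match.
	fmt.Printf("keys=%d hash=%d\n", resp.Count, hash.Hash)
}
```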
