
hendrikmaus / kube-leader-election

License: MIT
A crate to implement leader election for Kubernetes workloads in Rust.


Projects that are alternatives of or similar to kube-leader-election

Distributed-System-Algorithms-Implementation
Algorithms for implementation of Clock Synchronization, Consistency, Mutual Exclusion, Leader Election
Stars: ✭ 39 (+56%)
Mutual labels:  leader-election
evel
An Eventual Leader Election Library for Erlang
Stars: ✭ 35 (+40%)
Mutual labels:  leader-election
sidecloq
Recurring / Periodic / Scheduled / Cron job extension for Sidekiq
Stars: ✭ 81 (+224%)
Mutual labels:  leader-election
Atomix
A reactive Java framework for building fault-tolerant distributed systems
Stars: ✭ 2,182 (+8628%)
Mutual labels:  leader-election
ring-election
A node js library with a distributed leader/follower algorithm ready to be used
Stars: ✭ 92 (+268%)
Mutual labels:  leader-election

Kubernetes Leader Election in Rust


This library provides simple leader election for Kubernetes workloads.

Add the crate to your project's Cargo.toml:

[dependencies]
kube-leader-election = "0.10.2"

Example

Acquire leadership on a Kubernetes Lease called some-operator-lock in the default namespace, with a lease TTL of 15 seconds (the lock must be renewed within that window, otherwise another candidate may take over):

use kube_leader_election::{LeaseLock, LeaseLockParams};
use std::time::Duration;

let leadership = LeaseLock::new(
    kube::Client::try_default().await?,
    "default",
    LeaseLockParams {
        holder_id: "some-operator".into(),
        lease_name: "some-operator-lock".into(),
        lease_ttl: Duration::from_secs(15),
    },
);

// Run this in a background task every 5 seconds
// Share the result with the rest of your application; for example using Arc<AtomicBool>
// See https://github.com/hendrikmaus/kube-leader-election/blob/master/examples/shared-lease.rs
let lease = leadership.try_acquire_or_renew().await?;

log::info!("currently leading: {}", lease.acquired_lease);
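
To run the renewal as a background task and share its outcome, a rough sketch could look like the following. It assumes a tokio runtime, anyhow for error handling, and an Arc<AtomicBool> named is_leader; these names are illustrative, and the repository's shared-lease.rs example remains the authoritative version.

use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc,
};
use std::time::Duration;

use kube_leader_election::{LeaseLock, LeaseLockParams};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let leadership = LeaseLock::new(
        kube::Client::try_default().await?,
        "default",
        LeaseLockParams {
            holder_id: "some-operator".into(),
            lease_name: "some-operator-lock".into(),
            lease_ttl: Duration::from_secs(15),
        },
    );

    // Flag shared between the election task and the rest of the application.
    let is_leader = Arc::new(AtomicBool::new(false));

    let flag = is_leader.clone();
    tokio::spawn(async move {
        loop {
            // Try to acquire or renew the lease and publish the result.
            match leadership.try_acquire_or_renew().await {
                Ok(lease) => flag.store(lease.acquired_lease, Ordering::Relaxed),
                Err(err) => {
                    log::error!("lease error: {err}");
                    // Treat errors as "not leading" to stay on the safe side.
                    flag.store(false, Ordering::Relaxed);
                }
            }
            tokio::time::sleep(Duration::from_secs(5)).await;
        }
    });

    // Elsewhere in the application, consult the flag before doing leader-only work.
    loop {
        if is_leader.load(Ordering::Relaxed) {
            // perform work that only the leader should do
        }
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
}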

Please refer to the examples for runnable usage demonstrations.

Features

Kubernetes Lease Locking

A very basic form of leader election without fencing; only use this if your application can tolerate multiple replicas acting as the leader for a short period of time.

This implementation uses a Kubernetes Lease resource from the API group coordination.k8s.io, which is locked and continuously renewed by the leading replica. Both the leaseholder and all candidates use timestamps to determine whether the lease can be acquired, so the implementation is vulnerable to clock skew within the cluster.
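
To make that mechanism concrete, the following sketch inspects the underlying Lease object directly using the k8s-openapi types; the lease name and namespace match the example above, and this is not part of this crate's API, just a way to look at the fields the election relies on:

use k8s_openapi::api::coordination::v1::Lease;
use kube::{Api, Client};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = Client::try_default().await?;

    // Lease objects live in the coordination.k8s.io API group.
    let leases: Api<Lease> = Api::namespaced(client, "default");
    let lease = leases.get("some-operator-lock").await?;

    if let Some(spec) = lease.spec {
        // holder_identity names the current leader; renew_time and
        // lease_duration_seconds are the timestamps candidates compare
        // against their own clock to decide whether the lock has expired.
        println!("holder:       {:?}", spec.holder_identity);
        println!("renewed at:   {:?}", spec.renew_time);
        println!("duration (s): {:?}", spec.lease_duration_seconds);
    }

    Ok(())
}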

Only use this implementation if you are aware of its downsides, and your workload can tolerate them.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

License

MIT
