sanderploegsma / Redis Cluster

License: MIT
Redis Cluster setup running on Kubernetes

Projects that are alternatives to or similar to Redis Cluster

Phpredis
A PHP extension for Redis
Stars: ✭ 9,203 (+3901.3%)
Mutual labels:  redis, redis-cluster, cluster
Docker Redis Cluster
Dockerfile for Redis Cluster (redis 3.0+)
Stars: ✭ 1,035 (+350%)
Mutual labels:  redis, redis-cluster, cluster
Reading And Comprehense Redis Cluster
Distributed NoSQL: a Chinese-annotated reading of the Redis source code, with detailed comments on the code and its call flows plus suggested improvements. Includes a complete analysis of Redis Cluster functionality, node scaling, slot migration, failover and consistency-based leader election. Very helpful for understanding the Redis source; also fixes garbled Chinese comments in Source Insight. Fully updated (Redis source study QQ group: 568892619)
Stars: ✭ 224 (-2.61%)
Mutual labels:  redis, cluster
E3 Springboot
The Yilifang e-commerce mall (宜立方商城) rebuilt with SpringBoot and Docker
Stars: ✭ 139 (-39.57%)
Mutual labels:  redis, redis-cluster
Redis
Type-safe Redis client for Golang
Stars: ✭ 13,117 (+5603.04%)
Mutual labels:  redis, redis-cluster
Redis Game Transaction
Large games often run on distributed architectures, and their game logic frequently needs transactions; using Redis features we can implement distributed locks and distributed transactions. Many Redis cluster setups do not support Redis transactions, so this framework uses distributed locks to provide distributed transactions when Redis Cluster transactions are unavailable. It supports exclusive locks, shared locks and read-write locks, as well as rollback when a transaction commit fails, letting developers spend more time on game logic.
Stars: ✭ 124 (-46.09%)
Mutual labels:  redis, redis-cluster
Csredis
.NET Core or .NET Framework 4.0+ client for Redis and Redis Sentinel (2.8) and Cluster. Includes both synchronous and asynchronous clients.
Stars: ✭ 1,714 (+645.22%)
Mutual labels:  redis, redis-cluster
Redis Operator
Redis Operator creates/configures/manages Redis clusters atop Kubernetes
Stars: ✭ 142 (-38.26%)
Mutual labels:  redis, redis-cluster
Codis
Proxy-based Redis cluster solution supporting pipelining and dynamic scaling
Stars: ✭ 12,285 (+5241.3%)
Mutual labels:  redis, redis-cluster
Redex
Cloud-native Redis server implemented in Elixir
Stars: ✭ 160 (-30.43%)
Mutual labels:  redis, redis-cluster
Undermoon
Modern Redis Cluster solution for easy operation.
Stars: ✭ 166 (-27.83%)
Mutual labels:  redis, redis-cluster
Php Redis Client
RedisClient is a fast, fully-functional and user-friendly client for Redis, optimized for performance. RedisClient supports Redis versions from 2.6 through 6.0.
Stars: ✭ 112 (-51.3%)
Mutual labels:  redis, redis-cluster
Nginx Lua Redis Rate Measuring
A Lua library that provides distributed rate measurement using nginx + redis; you can use it to build a throttling system across many nodes.
Stars: ✭ 109 (-52.61%)
Mutual labels:  redis, redis-cluster
Overlord
Overlord is Bilibili's Go-based proxy and cluster manager for memcache and redis/redis-cluster, aiming to provide an automated, highly available caching service solution.
Stars: ✭ 1,884 (+719.13%)
Mutual labels:  redis, redis-cluster
Redis Tools
My tools for working with Redis
Stars: ✭ 104 (-54.78%)
Mutual labels:  redis, cluster
Redis exporter
Prometheus Exporter for Redis Metrics. Supports Redis 2.x, 3.x, 4.x, 5.x and 6.x
Stars: ✭ 2,092 (+809.57%)
Mutual labels:  redis, redis-cluster
Redis Manager
A one-stop Redis management platform that supports cluster monitoring, installation, management, alerting and basic data operations
Stars: ✭ 2,646 (+1050.43%)
Mutual labels:  redis, redis-cluster
Memento
Fairly basic redis-like hashmap implementation on top of an epoll TCP server.
Stars: ✭ 74 (-67.83%)
Mutual labels:  redis, cluster
Ioredis
🚀 A robust, performance-focused, and full-featured Redis client for Node.js.
Stars: ✭ 9,754 (+4140.87%)
Mutual labels:  redis, redis-cluster
Camellia
Camellia framework by netease-im. Provides: 1) redis-client; 2) redis-proxy (redis-sentinel/redis-cluster); 3) hbase-client; 4) others
Stars: ✭ 146 (-36.52%)
Mutual labels:  redis, redis-cluster

Redis cluster

A redis cluster running in Kubernetes.

⚠️ Note: this repository is no longer actively maintained. While it served as a nice example to run Redis Cluster in Kubernetes when I wrote it, there are currently more stable solutions to spin up a cluster. I recommend looking at community-built Kubernetes Operators for Redis, or an actively maintained Helm chart.

If the cluster configuration of a redis node is lost in some way, it will come back with a different ID, which upsets the balance in the cluster (and probably in the Force). To prevent this, the setup uses a combination of Kubernetes StatefulSets and PersistentVolumeClaims to make sure the state of the cluster is maintained after rescheduling or failures.
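
As a quick sanity check, you can list the pods alongside their claims; the cleanup command at the bottom of this README suggests both carry the app=redis-cluster label, so something like the following should show one claim per pod, with each redis-cluster-N pod rebinding to the same claim after rescheduling:

# assumes the claims are labelled app=redis-cluster, like the pods (see Cleaning up below)
kubectl get pods,pvc -l app=redis-cluster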

Setup

kubectl apply -f redis-cluster.yml

This will spin up 6 redis-cluster pods one by one, which may take a while. After all pods are in a running state, you can initialize the cluster using the redis-cli in any of the pods. After the initialization, you will end up with 3 master and 3 slave nodes.

kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 \
$(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')
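
Optionally, verify that the cluster converged by asking any node for its status; it should report cluster_state:ok and cluster_size:3:

kubectl exec redis-cluster-0 -- redis-cli cluster info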

Adding nodes

Adding nodes to the cluster involves a few manual steps. First, let's add two nodes:

kubectl scale statefulset redis-cluster --replicas=8
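
The new pods must be running before they can join the cluster; one way to wait for them is to watch the StatefulSet rollout:

kubectl rollout status statefulset redis-cluster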

Have the first new node join the cluster as master:

kubectl exec redis-cluster-0 -- redis-cli --cluster add-node \
$(kubectl get pod redis-cluster-6 -o jsonpath='{.status.podIP}'):6379 \
$(kubectl get pod redis-cluster-0 -o jsonpath='{.status.podIP}'):6379

The second new node should join the cluster as a slave. It will automatically bind to the master with the fewest slaves (in this case, redis-cluster-6):

kubectl exec redis-cluster-0 -- redis-cli --cluster add-node --cluster-slave \
$(kubectl get pod redis-cluster-7 -o jsonpath='{.status.podIP}'):6379 \
$(kubectl get pod redis-cluster-0 -o jsonpath='{.status.podIP}'):6379
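
Optionally, confirm which master the new replica attached to by checking its own view of the cluster; the myself,slave line lists the ID of its master, which should belong to redis-cluster-6:

kubectl exec redis-cluster-7 -- redis-cli cluster nodes | grep myself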

Finally, automatically rebalance the masters:

kubectl exec redis-cluster-0 -- redis-cli --cluster rebalance --cluster-use-empty-masters \
$(kubectl get pod redis-cluster-0 -o jsonpath='{.status.podIP}'):6379

Removing nodes

Removing slaves

Slaves can be deleted safely. First, let's get the id of the slave:

$ kubectl exec redis-cluster-7 -- redis-cli cluster nodes | grep myself
3f7cbc0a7e0720e37fcb63a81dc6e2bf738c3acf 172.17.0.11:6379 myself,slave 32f250e02451352e561919674b8b705aef4dbdc6 0 0 0 connected

Then delete it:

kubectl exec redis-cluster-0 -- redis-cli --cluster del-node \
$(kubectl get pod redis-cluster-0 -o jsonpath='{.status.podIP}'):6379 \
3f7cbc0a7e0720e37fcb63a81dc6e2bf738c3acf

Removing a master

To remove a master node from the cluster, we first have to move its slots to the rest of the cluster to avoid data loss.

First, take note of the id of the master node we are removing:

$ kubectl exec redis-cluster-6 -- redis-cli cluster nodes | grep myself
27259a4ae75c616bbde2f8e8c6dfab2c173f2a1d 172.17.0.10:6379 myself,master - 0 0 9 connected 0-1364 5461-6826 10923-12287

Also note the id of any other master node:

$ kubectl exec redis-cluster-6 -- redis-cli cluster nodes | grep master | grep -v myself
32f250e02451352e561919674b8b705aef4dbdc6 172.17.0.4:6379 master - 0 1495120400893 2 connected 6827-10922
2a42aec405aca15ec94a2470eadf1fbdd18e56c9 172.17.0.6:6379 master - 0 1495120398342 8 connected 12288-16383
0990136c9a9d2e48ac7b36cfadcd9dbe657b2a72 172.17.0.2:6379 master - 0 1495120401395 1 connected 1365-5460

Then, use the reshard command to move all slots from redis-cluster-6:

kubectl exec redis-cluster-0 -- redis-cli --cluster reshard --cluster-yes \
--cluster-from 27259a4ae75c616bbde2f8e8c6dfab2c173f2a1d \
--cluster-to 32f250e02451352e561919674b8b705aef4dbdc6 \
--cluster-slots 16384 \
$(kubectl get pod redis-cluster-0 -o jsonpath='{.status.podIP}'):6379
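
Before deleting the node, it doesn't hurt to double-check that redis-cluster-6 no longer owns any slots, for example with redis-cli's cluster check; the master being removed should be listed with 0 keys and 0 slots:

kubectl exec redis-cluster-0 -- redis-cli --cluster check \
$(kubectl get pod redis-cluster-0 -o jsonpath='{.status.podIP}'):6379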

After resharding, it is safe to delete the redis-cluster-6 master node:

kubectl exec redis-cluster-0 -- redis-cli --cluster del-node \
$(kubectl get pod redis-cluster-0 -o jsonpath='{.status.podIP}'):6379 \
27259a4ae75c616bbde2f8e8c6dfab2c173f2a1d

Finally, we can rebalance the remaining masters to evenly distribute slots:

kubectl exec redis-cluster-0 -- redis-cli --cluster rebalance --cluster-use-empty-masters \
$(kubectl get pod redis-cluster-0 -o jsonpath='{.status.podIP}'):6379

Scaling down

After the master has been resharded and both nodes are removed from the cluster, it is safe to scale down the statefulset:

kubectl scale statefulset redis-cluster --replicas=6
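
Note that scaling down the StatefulSet leaves the PersistentVolumeClaims of redis-cluster-6 and redis-cluster-7 behind. If you want to discard their old cluster state (for example before scaling up again later), delete the claims as well; the names below assume the volume claim template in redis-cluster.yml is called data, so adjust them to match:

# claim names assume a volumeClaimTemplate named "data"
kubectl delete pvc data-redis-cluster-6 data-redis-cluster-7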

Cleaning up

kubectl delete statefulset,svc,configmap,pvc -l app=redis-cluster