
License: MIT
Vshard wrapper with automatic master election, failover and centralized configuration storage in Consul.



Autovshard

A wrapper around Tarantool Vshard with automatic master election, failover and centralized configuration storage in Consul.

Sponsored by Avito

Features

  • Centralized config storage with Consul.
  • Automatic Vshard reconfiguration (both storage and router) when the config changes in Consul.
  • Automatic master election for each replicaset using a distributed lock in Consul.
  • Automatic failover when a master instance becomes unavailable.
  • Master weight to set the preferred master instance.
  • Switchover delay.

Status

Usage

  1. Put the Autovshard config into Consul KV under <consul_kv_prefix>/<vshard_cluster_name>/autovshard_cfg_yaml.

    # autovshard_cfg.yaml
    rebalancer_max_receiving: 10
    bucket_count: 100
    rebalancer_disbalance_threshold: 10
    sharding:
        aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:
            weight: 10
            replicas:
                aaaaaaaa-aaaa-aaaa-aaaa-000000000001:
                    master_weight: 99
                    switchover_delay: 10
                    address: a1:3301
                    name: a1
                    master: false
                aaaaaaaa-aaaa-aaaa-aaaa-000000000002:
                    master_weight: 20
                    switchover_delay: 10
                    address: a2:3301
                    name: a2
                    master: false
        bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb:
            weight: 10
            replicas:
                bbbbbbbb-bbbb-bbbb-bbbb-000000000001:
                    master_weight: 10
                    switchover_delay: 10
                    address: b1:3301
                    name: b1
                    master: false
                bbbbbbbb-bbbb-bbbb-bbbb-000000000002:
                    master_weight: 55
                    switchover_delay: 10
                    address: b2:3301
                    name: b2
                    master: false

    Then upload the config to Consul KV:

    #!/usr/bin/env sh
    
    cat autovshard_cfg.yaml | consul kv put "autovshard/mycluster/autovshard_cfg_yaml" -

    Autovshard Consul config parameters

    The config is similar to the Vshard config, but it has some extra fields and an address field instead of uri, so that passwords are kept out of the shared config.

    • master_weight - the instance with the highest master_weight in a replica set eventually gets the master role. This parameter is dynamic and can be changed by the administrator at any time. The number is used only for comparison with the master_weight of the other members of the replica set.
    • switchover_delay - a delay in seconds to wait before taking the master role away from another running instance with a lower master_weight. This parameter is dynamic and can be changed by the administrator at any time. It is useful when the instance with the highest master_weight is restarted several times within a short period: as long as that instance has been up for less time than switchover_delay, no master switch (switchover) happens on each restart. Once the instance stays up longer than switchover_delay, it is finally promoted to the master role.
    • address - TCP address of the Tarantool instance in this format: <host>:<port>. It is passed through to Vshard as part of the uri parameter.
    • name - same as name in Vshard.
    • master - same as master in Vshard. The role of the instance. DO NOT set master=true for multiple instances in one replica set. This parameter is changed dynamically during the lifecycle of Autovshard. It can also be changed by the administrator at any time. It is safe to set master=false for all instances.
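    The interaction of master_weight and switchover_delay described above can be sketched as a small decision function. This is a hypothetical illustration, not Autovshard's actual code; the function name and signature are made up:

    ```python
    # Hypothetical sketch of the promotion rule: a candidate preempts a
    # running master only if it has a strictly higher master_weight AND
    # has been up for at least switchover_delay seconds (which prevents
    # flapping when a high-weight instance is restarted repeatedly).

    def should_switch_over(candidate_weight, master_weight,
                           candidate_uptime, switchover_delay):
        if candidate_weight <= master_weight:
            return False  # never preempt an equal- or higher-weight master
        # higher weight, but wait out the delay to avoid restart flapping
        return candidate_uptime >= switchover_delay
    ```

    A high-weight instance that restarted 3 seconds ago with switchover_delay=10 does not trigger a switchover yet; once it stays up past the delay, it does.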
  2. Put this into your Tarantool init.lua.

    local box_cfg = {
        listen = 3301,  -- required
        instance_uuid = "aaaaaaaa-aaaa-aaaa-aaaa-000000000001",  -- required for storage instances, prefer lowercase
        replicaset_uuid = "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",  -- required for storage instances, prefer lowercase
        replication_connect_quorum = 0,  -- recommended, search Tarantool issue tracker for "quorum" and "bootstrap"
        replication_connect_timeout=5,  -- to start faster when some replicas are unavailable
        -- ! DO NOT set `replication` parameter, Vshard will take care of it
        -- specify any other_box_cfg options
    }
    
    autovshard = require("autovshard").Autovshard.new{
        box_cfg = box_cfg,  -- Tarantool instance config
        cluster_name = "mycluster",  -- the name of your sharding cluster
        login = "storage",  -- login for Vshard
        password = "storage",  -- password for Vshard
        consul_http_address = "http://127.0.0.1:8500",  -- assuming Consul agent is running on localhost
        consul_token = nil,
        consul_kv_prefix = "autovshard",
        -- consul_session_ttl = 60 -- optional, not recommended to change, default is 15 seconds
        router = true,  -- true for Vshard router instance
        storage = true,  -- true for Vshard storage instance
        automaster = true,  -- enables automatic master election and auto-failover
    }
    
    autovshard:start()  -- autovshard will run in the background
    -- to stop it call autovshard:stop()
    
    -- This might be helpful (Tarantool >= 2.0)
    -- box.ctl.on_shutdown(function() autovshard:stop(); require("fiber").sleep(2) end)
    
    -- If you use package.reload (https://github.com/moonlibs/package-reload)
    -- package.reload:register(autovshard, autovshard.stop)
    

    Important: If Consul is unreachable the Tarantool instance is set to read-only mode.

    Autovshard Tarantool config parameters

    • box_cfg - table, parameters for box.cfg call
    • cluster_name - string, the name of your sharding cluster
    • login - string, login for Vshard
    • password - string, password for Vshard
    • consul_http_address - a string with a Consul address, or a table of multiple Consul addresses. Examples: "http://127.0.0.1:8500", {"https://consul1.example.com:8501", "https://consul2.example.com:8501"}. If multiple Consul addresses are set and Consul is unreachable at the current address, Autovshard will use the next address from the array for subsequent requests to Consul. Note: all addresses must point to instances of the same Consul cluster in the same Consul datacenter.
    • consul_token - optional string, Consul token (if you use ACLs)
    • consul_kv_prefix - string, a prefix in Consul KV storage. Must be the same on all instances in a Tarantool cluster.
    • consul_session_ttl - optional number, Consul session TTL. Not recommended to change, default is 15 seconds. Must be between 10 and 86400.
    • router - boolean, true for Vshard router instances
    • storage - boolean, true for Vshard storage instances
    • automaster - boolean, enables automatic master election and auto-failover
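    The multi-address failover behavior of consul_http_address can be illustrated with a simple rotation sketch. The class below is hypothetical, not part of the library's API:

    ```python
    # Hypothetical sketch of rotating through several Consul addresses:
    # after a failed request, subsequent requests go to the next address
    # in the list, wrapping around. All addresses are assumed to point
    # to the same Consul cluster.

    class ConsulAddressPool:
        def __init__(self, addresses):
            if not addresses:
                raise ValueError("at least one Consul address is required")
            self._addresses = list(addresses)
            self._index = 0

        @property
        def current(self):
            return self._addresses[self._index]

        def mark_failed(self):
            # Called after a request to the current address fails.
            self._index = (self._index + 1) % len(self._addresses)
            return self.current
    ```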

See also

Installation

Luarocks sucks at pinning dependencies, and Vshard does not support (as of 2019-07-01) painless installation without Tarantool sources. Therefore Vshard is not mentioned in the rockspec.

  1. Install Vshard first.
  2. Install Autovshard. Autovshard depends only on Vshard. Replace <version> with the version you want to install:
    luarocks install "https://raw.githubusercontent.com/bofm/tarantool-autovshard/master/rockspecs/autovshard-<version>-1.rockspec"
    
    or
    tarantoolctl rocks install "https://raw.githubusercontent.com/bofm/tarantool-autovshard/master/rockspecs/autovshard-<version>-1.rockspec"
    

How it works

Internally Autovshard does two things, which are almost independent of each other:

  • Watch the config in Consul and apply it as soon as it changes. Whatever the config is, it is converted to a Vshard config and passed to vshard.storage.cfg() and vshard.router.cfg() according to the parameters of the Autovshard Tarantool config. If Consul is unreachable, Autovshard sets the Tarantool instance to read-only mode to avoid having multiple master instances in a replicaset (this feature is called fencing).
  • Maintain master election with a distributed lock and change the config in Consul when the lock is acquired. This is done only on Vshard storage instances when automaster is enabled. Autovshard only changes the master field of the Autovshard Consul config.
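The fencing rule above boils down to a small decision. The sketch below is illustrative, with hypothetical names, not Autovshard internals:

```python
# Illustrative sketch of fencing: the instance accepts writes only
# when Consul is reachable AND the current Consul config marks it as
# master. If Consul is unreachable, the config cannot be trusted, so
# the instance goes read-only to avoid two writable masters.

def instance_read_only(consul_reachable, is_master_in_config):
    if not consul_reachable:
        return True  # fencing: never write without a reachable Consul
    return not is_master_in_config
```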

You can check out the CI e2e test logs to get familiar with what Autovshard prints to the Tarantool log in different situations.

Notes on Consul

It is recommended to run Consul agent on each server with Tarantool instances and set consul_http_address to the address of the agent on localhost.

TODO

  • More testing
  • Integration testing and CI
  • e2e tests with Gherkin and BDD
  • Improve logging
  • See todo's in the sources