
seocahill / ha-postgres-docker-stack

License: MIT
Postgres + Patroni + WAL-E + HAProxy + etcd

Programming Languages

PLpgSQL, Shell, Ruby, Python

Projects that are alternatives to or similar to ha-postgres-docker-stack

Longhorn
Cloud-Native distributed storage built on and for Kubernetes
Stars: ✭ 3,384 (+7100%)
Mutual labels:  high-availability
nest-convoy
[WIP] An opinionated framework for building distributed domain driven systems using microservices architecture
Stars: ✭ 20 (-57.45%)
Mutual labels:  high-availability
stackgres
StackGres Operator, Full Stack PostgreSQL on Kubernetes // !! Mirror repository of https://gitlab.com/ongresinc/stackgres, only accept Merge Requests there.
Stars: ✭ 479 (+919.15%)
Mutual labels:  high-availability
Mosquitto Cluster
a built-in, autonomous Mosquitto Cluster implementation (MQTT cluster).
Stars: ✭ 238 (+406.38%)
Mutual labels:  high-availability
k0s-ansible
Create a Kubernetes Cluster using Ansible and the vanilla upstream Kubernetes distro k0s.
Stars: ✭ 56 (+19.15%)
Mutual labels:  high-availability
syncflux
SyncFlux is an Open Source InfluxDB Data synchronization and replication tool for migration purposes or HA clusters
Stars: ✭ 145 (+208.51%)
Mutual labels:  high-availability
Walrus
🔥 Fast, Secure and Reliable System Backup, Set up in Minutes.
Stars: ✭ 197 (+319.15%)
Mutual labels:  high-availability
ansible-role-pacemaker
Ansible role to deploy Pacemaker HA clusters
Stars: ✭ 19 (-59.57%)
Mutual labels:  high-availability
docker-redis-haproxy-cluster
A Redis Replication Cluster accessible through HAProxy running across a Docker Composed-Swarm with Supervisor and Sentinel
Stars: ✭ 44 (-6.38%)
Mutual labels:  high-availability
Recon
HA LDAP based key/value solution for projects configuration storing with multi master replication support
Stars: ✭ 12 (-74.47%)
Mutual labels:  high-availability
Advanced Java
😮 Core Interview Questions & Answers For Experienced Java (Backend) Developers | A complete primer on advanced topics for Internet Java engineers: high concurrency, distributed systems, high availability, microservices, and large-scale data processing.
Stars: ✭ 59,142 (+125734.04%)
Mutual labels:  high-availability
ha cluster exporter
Prometheus exporter for Pacemaker based Linux HA clusters
Stars: ✭ 63 (+34.04%)
Mutual labels:  high-availability
trento
An open cloud-native web console improving on the work day of SAP Applications administrators.
Stars: ✭ 35 (-25.53%)
Mutual labels:  high-availability
Keepalived
Keepalived
Stars: ✭ 2,877 (+6021.28%)
Mutual labels:  high-availability
blogr-pve
Puppet provisioning of HA failover/cluster environment implemented in Proxmox Virtual Environment and Linux boxes.
Stars: ✭ 28 (-40.43%)
Mutual labels:  high-availability
Airflow Scheduler Failover Controller
A process that runs in unison with Apache Airflow to control the Scheduler process to ensure High Availability
Stars: ✭ 204 (+334.04%)
Mutual labels:  high-availability
WatsonCluster
A simple C# class using Watson TCP to enable a one-to-one high availability cluster.
Stars: ✭ 18 (-61.7%)
Mutual labels:  high-availability
helm-openldap
Helm chart of Openldap in High availability with multi-master replication and PhpLdapAdmin and Ltb-Passwd
Stars: ✭ 101 (+114.89%)
Mutual labels:  high-availability
ansible-role-etcd
Ansible role for installing etcd cluster
Stars: ✭ 38 (-19.15%)
Mutual labels:  high-availability
MySQL-InnoDB-Cluster-3VM-Setup
Installing and testing InnoDB cluster on 3 servers
Stars: ✭ 19 (-59.57%)
Mutual labels:  high-availability

HA PostgreSQL cluster on Docker

This is a Docker Compose file and some helper scripts that demonstrate how to deploy a highly available Postgres cluster with automatic failover using Docker Swarm.

I've written some blog posts that explain what's happening here in more depth; you can find them here:

The complete stack is:

  • docker swarm mode (orchestration)
  • haproxy (single endpoint for db writes/reads)
  • etcd (configuration store, leader election)
  • patroni (governs db replication and high availability)
  • postgres

Not implemented by default but present:

  • wal-e log shipping to s3
  • cron:
    • logical backups, plus a job that tests them
    • physical backups, plus a job that tests them
  • sample callback scripts for Patroni events, e.g. emailing the admin on failover (a sketch follows below)
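
For illustration, a Patroni callback is simply an executable that Patroni invokes with the action, the role and the cluster name as arguments. A minimal sketch, assuming a mail command is available in the container and that the script is wired up under postgresql.callbacks.on_role_change (the real scripts in this repo may differ):

#!/bin/bash
# Hypothetical callback: Patroni calls it as <script> <action> <role> <cluster-name>
action="$1"
role="$2"
cluster="$3"

# Older Patroni versions report the promoted role as "master"
if [ "$action" = "on_role_change" ] && [ "$role" = "master" ]; then
  echo "$(hostname) was promoted in cluster $cluster" \
    | mail -s "Patroni failover: $cluster" admin@example.com   # recipient is a placeholder
fi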

Documentation:

asciicast

Test / Development

Use docker-stack.test.yml when running the test suite or when trying the stack out locally, for example on Docker for Mac.
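
If you want to run it by hand rather than via the helper scripts, the deploy command looks like this (the stack name is a placeholder; the aliases described under Development wrap this up, including the environment from test.env):

docker stack deploy -c docker-stack.test.yml pg_cluster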

Prerequisites

Tested on Docker 17.09.0-ce.

If you are using the deploy and test scripts, you will also need to install curl, wget, awscli, and jq.

There is also a .alias file included with useful shortcut commands. Installation instructions are here.
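
The linked instructions aren't reproduced here; loading the aliases usually just means sourcing the file in your shell (an assumption about the intended usage):

source .alias
# or load it in every new shell:
echo "source $(pwd)/.alias" >> ~/.bashrc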

Once you have the Docker daemon installed and running on your dev machine, initialise swarm mode with the following command:

docker swarm init
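
A single-node swarm is enough for the test setup; you can confirm swarm mode is active with:

docker node ls   # the current machine should be listed as Leader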

Test setup

A basic test suite is included that covers cluster initialisation, replication and failover.

First copy the test env template to test.env

cp test.env.tmpl test.env

Patroni will not boot without these environment variables present.
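
The variable names below are Patroni's standard environment settings and are shown purely as an illustration; test.env.tmpl is the authoritative template:

# illustrative values only
PATRONI_SCOPE=pg-test-cluster
PATRONI_SUPERUSER_USERNAME=postgres
PATRONI_SUPERUSER_PASSWORD=supersecret
PATRONI_REPLICATION_USERNAME=replicator
PATRONI_REPLICATION_PASSWORD=alsosecret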

Run the test suite with

scripts/run_tests.sh [-a to keep the stack up]

The test setup also includes the pagila test dataset. See the steps in the test script for more details on how to load it.
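
The test script is the authoritative reference, but loading Pagila by hand would look roughly like this (the file names follow the standard Pagila distribution and the connection details assume the haproxy master endpoint described below):

psql -h localhost -p 5000 -U postgres -f pagila-schema.sql
psql -h localhost -p 5000 -U postgres -f pagila-data.sql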

Development

Assuming you have loaded the aliases into your shell environment you can bring up the test stack by running

tsu

If you want to use docker-stack.yml instead, you'll need to remove the deploy placement constraints or else the services will never start.

You can use Patroni's CLI to check the cluster's status:

pcli list pg-test-cluster

You can also access cluster information via HTTP requests to the REST API:

curl localhost:8008/patroni | jq .

The master db is accessible on localhost:5000 and the replicas on localhost:5001:

psql -p 5000
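
Read-only queries can be pointed at the replica endpoint in the same way; pg_is_in_recovery() is a quick check that you really landed on a replica (the user name is an assumption):

psql -h localhost -p 5001 -U postgres -c 'select pg_is_in_recovery();'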

Staging

The staging setup has been tested with docker-machine on AWS and DigitalOcean. You will find a sample deploy script for AWS included in the scripts folder.

Environment variables

The staging setup is slightly different to the test stack in that each db node expects its own env file to be present at the root of the repo, e.g. db-1.env.

The absolute minimum setup required to get the stack up is a combination of the inline environment variables from docker-stack.test.yml and those in test.env.tmpl.
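
One way to bootstrap those per-node files from the template (the node names mirror the ones used in the cleanup section):

for node in db-1 db-2 db-3; do
  cp test.env.tmpl "$node.env"   # then edit each file with node-specific values
done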

For full configuration options consult the patroni documentation.

AWS

For deploying on AWS you will need the AWS CLI installed, jq for JSON parsing, and your AWS credentials set. Docker Machine will pick up the standard AWS environment variables.

scripts/deploy_aws.sh

With AWS there are a lot of user-specific variables which may prevent the script from working out of the box. Please consult the docker-machine AWS driver docs for more on how to configure your local environment.
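
For reference, the standard credential variables docker-machine reads look like this (values are placeholders):

export AWS_ACCESS_KEY_ID=AKIA...            # placeholder
export AWS_SECRET_ACCESS_KEY=...            # placeholder
export AWS_DEFAULT_REGION=eu-west-1         # pick your region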

Another option is to use Docker's CloudFormation template to provision a Docker-ready environment on AWS from scratch.

DigitalOcean

You will need to retrieve your credentials as described here before proceeding.

For each node run:

docker-machine create \
    --driver digitalocean \
    --digitalocean-access-token=your-secret-token \
    --digitalocean-image=ubuntu-16-04-x64 \
    --digitalocean-region=ams3 \
    --digitalocean-size=512mb \
    --digitalocean-ipv6=false \
    --digitalocean-private-networking=false \
    --digitalocean-backups=false \
    --digitalocean-ssh-user=root \
    --digitalocean-ssh-port=22 \
    node-name

The rest of the stack setup is identical to the docker commands run in the AWS deploy script (sketched below).
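
As a sketch of those steps, assuming three machines named db-1, db-2 and db-3 that all join as managers (adjust roles as you see fit):

eval $(docker-machine env db-1)
docker swarm init --advertise-addr $(docker-machine ip db-1)
TOKEN=$(docker swarm join-token -q manager)
for node in db-2 db-3; do
  eval $(docker-machine env $node)
  docker swarm join --token $TOKEN $(docker-machine ip db-1):2377
done
eval $(docker-machine env db-1)   # back to the first node before deploying the stack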

VirtualBox

Make sure you have allocated enough RAM to run three nodes locally and then run

docker-machine create --driver virtualbox node-name

for each node.
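
For example, to create the three nodes used elsewhere in these docs in one go:

for node in db-1 db-2 db-3; do
  docker-machine create --driver virtualbox "$node"
done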

Deploying the stack

Make sure you are executing these commands in the context of your swarm manager node:

eval $(docker-machine env db-1)

To deploy your stack run

docker stack deploy -c docker-stack.yml pg_cluster

To check the state of your services run

docker service ls

For logs run:

docker service logs pg_cluster_haproxy 
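
To see where each task has been scheduled, or to follow the logs continuously, the usual docker service flags apply (service names follow the <stack>_<service> pattern):

docker service ps pg_cluster_haproxy
docker service logs -f --tail 100 pg_cluster_haproxy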

Cleanup

Remove the hosts:

docker-machine rm db-1 db-2 db-3

Reset your docker environment:

eval $(docker-machine env --unset)