vitabaks / Postgresql_cluster

License: MIT
PostgreSQL High-Availability Cluster (based on "Patroni" and "DCS(etcd)"). Automating deployment with Ansible.

Projects that are alternatives to or similar to Postgresql_cluster

Ansible Role Patroni
🐘 Ansible Role for Patroni
Stars: ✭ 40 (-86.39%)
Mutual labels:  ansible, postgresql, high-availability, failover, cluster
Repmgr
A lightweight replication manager for PostgreSQL (Postgres) - latest version 5.2.1 (2020-12-07)
Stars: ✭ 1,207 (+310.54%)
Mutual labels:  postgresql, postgres, replication, failover, cluster
Patroni
A template for PostgreSQL High Availability with Etcd, Consul, ZooKeeper, or Kubernetes
Stars: ✭ 4,434 (+1408.16%)
Mutual labels:  postgresql, etcd, high-availability, failover
Paf
PostgreSQL Automatic Failover: High-Availability for Postgres, based on Pacemaker and Corosync.
Stars: ✭ 288 (-2.04%)
Mutual labels:  postgresql, postgres, high-availability, failover
pg keeper
Simplified clustering module for PostgreSQL
Stars: ✭ 32 (-89.12%)
Mutual labels:  replication, failover, high-availability
Vip Manager
Manages a virtual IP based on state kept in etcd or Consul
Stars: ✭ 75 (-74.49%)
Mutual labels:  postgresql, postgres, etcd
pg-dock
pg-dock cluster management
Stars: ✭ 19 (-93.54%)
Mutual labels:  cluster, failover, high-availability
Tunnel
PostgreSQL data synchronization tool (implemented in Java)
Stars: ✭ 122 (-58.5%)
Mutual labels:  postgresql, postgres, replication
Wal E
Continuous Archiving for Postgres
Stars: ✭ 3,313 (+1026.87%)
Mutual labels:  postgresql, postgres, replication
Testgres
Testing framework for PostgreSQL and its extensions
Stars: ✭ 85 (-71.09%)
Mutual labels:  postgresql, postgres, replication
Postgres Operator
Production PostgreSQL for Kubernetes, from high availability Postgres clusters to full-scale database-as-a-service.
Stars: ✭ 2,166 (+636.73%)
Mutual labels:  postgresql, postgres, high-availability
Postdock
PostDock - Postgres & Docker - Postgres streaming replication cluster for any docker environment
Stars: ✭ 985 (+235.03%)
Mutual labels:  postgresql, failover, cluster
Pg auto failover
Postgres extension and service for automated failover and high-availability
Stars: ✭ 564 (+91.84%)
Mutual labels:  postgresql, postgres, high-availability
Ansible Role Postgresql
Ansible Role - PostgreSQL
Stars: ✭ 310 (+5.44%)
Mutual labels:  ansible, postgresql, postgres
Amazonriver
amazonriver is a service that syncs PostgreSQL data in real time to Elasticsearch or Kafka
Stars: ✭ 198 (-32.65%)
Mutual labels:  postgresql, postgres, replication
Pg chameleon
MySQL to PostgreSQL replica system
Stars: ✭ 274 (-6.8%)
Mutual labels:  postgresql, postgres, replication
Moha
MoHA(Mobike High Availability): A MySQL/Postgres high availability supervisor
Stars: ✭ 117 (-60.2%)
Mutual labels:  postgresql, etcd, high-availability
Stolon
PostgreSQL cloud native High Availability and more.
Stars: ✭ 3,481 (+1084.01%)
Mutual labels:  postgresql, etcd, high-availability
Postgres Operator
Postgres operator creates and manages PostgreSQL clusters running in Kubernetes
Stars: ✭ 2,194 (+646.26%)
Mutual labels:  postgresql, postgres, cluster
Awx Ha Instancegroup
Build AWX clustering on Docker Standalone Installation
Stars: ✭ 106 (-63.95%)
Mutual labels:  ansible, high-availability, cluster

PostgreSQL High-Availability Cluster 🐘 💖


Deploy a production-ready PostgreSQL High-Availability Cluster (based on "Patroni" and "DCS (etcd)"). Automated with Ansible.

This Ansible playbook is designed for deploying a PostgreSQL high-availability cluster on dedicated physical servers for a production environment. The cluster can also be deployed on virtual machines for test environments and small projects.

This playbook supports deploying the cluster on top of an already existing and running PostgreSQL installation. To do so, specify the variable postgresql_exists='true' in the inventory file. Attention! Your PostgreSQL will be stopped before it is started in cluster mode, so plan downtime for the existing databases.

❗️ Please test it in your test environment before using it in production.

You have two options available for deployment ("Type A" and "Type B"):

[Type A] PostgreSQL High-Availability with Load Balancing

(diagram: Type A architecture)

To use this scheme, specify with_haproxy_load_balancing: true in the variable file vars/main.yml.

This scheme makes it possible to distribute the read load, and it also allows the cluster to be scaled out with read-only replicas.

  • port 5000 (read/write) master
  • port 5001 (read only) all replicas
if the variable "synchronous_mode" is 'true' (vars/main.yml):
  • port 5002 (read only) synchronous replica only
  • port 5003 (read only) asynchronous replicas only

❗️ Your application must be able to send read requests to a custom port (e.g. 5001) and write requests to another (e.g. 5000).
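
For example, the routing can be verified from a client with psql (a sketch; the VIP address 10.0.0.10 and the postgres user are illustrative):

# write connections (port 5000) are always routed to the master
psql -h 10.0.0.10 -p 5000 -U postgres -c 'SELECT pg_is_in_recovery()'   # expected: f
# read-only connections (port 5001) are routed to the replicas
psql -h 10.0.0.10 -p 5001 -U postgres -c 'SELECT pg_is_in_recovery()'   # expected: t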

Components of high availability:

Patroni is a template for creating your own customized high-availability solution using Python and, for maximum accessibility, a distributed configuration store like ZooKeeper, etcd, Consul or Kubernetes. It is used here to automate the management of PostgreSQL instances and to perform automatic failover.

etcd is a distributed reliable key-value store for the most critical data of a distributed system. etcd is written in Go and uses the Raft consensus algorithm to manage a highly-available replicated log. It is used by Patroni to store information about the status of the cluster and PostgreSQL configuration parameters.

What is Distributed Consensus?

Components of load balancing:

HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications.

confd manages local application configuration files using templates and data from etcd or Consul. It is used here to automate HAProxy configuration file management.

Keepalived provides a virtual highly available IP address (VIP) and a single entry point for database access. It implements VRRP (Virtual Router Redundancy Protocol) for Linux. In our configuration, Keepalived checks the status of the HAProxy service and, in case of a failure, delegates the VIP to another server in the cluster.

PgBouncer is a connection pooler for PostgreSQL.

[Type B] PostgreSQL High-Availability only

(diagram: Type B architecture)

This is a simple scheme without load balancing. It is used by default.

A single entry point (VIP) for database access is provided by "vip-manager".

vip-manager is a service that gets started on all cluster nodes and connects to the DCS. If the local node owns the leader-key, vip-manager starts the configured VIP. In case of a failover, vip-manager removes the VIP on the old leader and the corresponding service on the new leader starts it there.
Written in Go by Cybertec Schönig & Schönig GmbH (https://www.cybertec-postgresql.com).


Compatibility

RedHat and Debian based distros (x86_64)

Minimum OS versions:
  • CentOS: 7
  • Ubuntu: 16.04
  • Debian: 9

✅ tested, works fine: Debian 9/10, Ubuntu 18.04/20.04, CentOS 7.x/8.x

PostgreSQL versions:

all supported PostgreSQL versions

✅ tested, works fine: PostgreSQL 9.6, 10, 11, 12

Table of results of daily automated testing of cluster deployment:

| Distribution | Test result |
|--------------|:-----------:|
| CentOS 7 | GitHub Workflow Status |
| CentOS 8 | GitHub Workflow Status |
| Debian 9 | GitHub Workflow Status |
| Debian 10 | GitHub Workflow Status |
| Ubuntu 18.04 | GitHub Workflow Status |
| Ubuntu 20.04 | GitHub Workflow Status |

Ansible version

This has been tested on Ansible 2.7.x, 2.8.x, 2.9.x

Requirements

This playbook requires root privileges or sudo.

Ansible (What is Ansible?)

Recommendations

  • Linux (operating system):

Update your operating system on your target servers before deploying;

Make sure time synchronization (NTP) is configured. Specify ntp_enabled: 'true' and ntp_servers if you want to install and configure the NTP service, for example:
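
A minimal sketch for vars/main.yml (the server addresses are illustrative, and the exact format of ntp_servers may differ; check vars/main.yml):

ntp_enabled: 'true'
ntp_servers:
  - "0.pool.ntp.org"   # illustrative; use your own NTP servers
  - "1.pool.ntp.org"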

  • DCS (Distributed Configuration Store):

Fast drives and a reliable network are the most important factors for the performance and stability of an etcd cluster.

Avoid storing etcd data on the same drive as other processes (such as the database) that make intensive use of the disk subsystem! Store the etcd and PostgreSQL data on different disks (see the etcd_data_dir variable), and use SSD drives if possible. See the hardware recommendations and tuning guides.

Heavily loaded database clusters may require installing the etcd cluster on dedicated servers, separate from the database servers.
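
For example, to keep etcd data on a dedicated SSD, point etcd_data_dir at a mount on that drive (a sketch; the mount point is illustrative):

etcd_data_dir: "/ssd-etcd/etcd"   # dedicated SSD mount, separate from the PostgreSQL data disk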

  • Placement of cluster members in different data centers:

If you’d prefer a cross-data center setup, where the replicating databases are located in different data centers, etcd member placement becomes critical.

There are quite a lot of things to consider if you want to create a really robust etcd cluster, but there is one rule: do not place all etcd members in your primary data center. See some examples.

  • How to prevent data loss in case of autofailover (synchronous_mode and pg_rewind):

For performance reasons, synchronous replication is disabled by default.

To minimize the risk of losing data on autofailover, you can configure the settings as follows (see the vars/main.yml snippet after this list):

  • synchronous_mode: 'true'
  • synchronous_mode_strict: 'true'
  • synchronous_commit: 'on' (or 'remote_write'/'remote_apply')
  • use_pg_rewind: 'false' (it is enabled by default)
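
The same settings as a vars/main.yml snippet (a sketch of the values listed above):

synchronous_mode: 'true'          # enable synchronous replication
synchronous_mode_strict: 'true'   # refuse writes if no synchronous replica is available
synchronous_commit: 'on'          # or 'remote_write' / 'remote_apply'
use_pg_rewind: 'false'            # pg_rewind is enabled by default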

Deployment: quick start

  1. Install Ansible on the machine from which you will run the playbook
Example: install the latest release using pip

sudo apt install python3-pip sshpass git -y
sudo pip3 install ansible

  2. Download or clone this repository

git clone https://github.com/vitabaks/postgresql_cluster.git

  3. Go to the playbook directory

cd postgresql_cluster/

  4. Edit the inventory file
Specify the IP addresses and connection settings (ansible_user, ansible_ssh_pass, ...) for your environment; a sketch of a possible layout is shown below the command.

vim inventory
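
A sketch of what the inventory might look like (all IP addresses and connection settings are illustrative; the group names are the ones referenced elsewhere in this README):

[etcd_cluster]
10.128.64.140
10.128.64.142
10.128.64.143

[balancers]
10.128.64.140
10.128.64.142
10.128.64.143

[master]
10.128.64.140

[replica]
10.128.64.142
10.128.64.143

[all:vars]
ansible_connection='ssh'
ansible_ssh_port='22'
ansible_user='root'
ansible_ssh_pass='secretpassword'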

  5. Edit the variable file vars/main.yml

vim vars/main.yml

Minimum set of variables:
  • proxy_env # if required (for downloading packages)

example:

proxy_env:
  http_proxy: http://proxy_server_ip:port
  https_proxy: http://proxy_server_ip:port
  • cluster_vip # for client access to databases in the cluster (optional)
  • patroni_cluster_name
  • with_haproxy_load_balancing: 'true' (Type A) or 'false'/default (Type B)
  • postgresql_version
  • postgresql_data_dir
A combined example of these variables is shown below.
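
A minimal sketch of these variables in vars/main.yml (all values are illustrative):

cluster_vip: "10.128.64.145"                          # optional virtual IP for client access
patroni_cluster_name: "postgres-cluster"
with_haproxy_load_balancing: false                    # 'true' for Type A, 'false' (default) for Type B
postgresql_version: "12"
postgresql_data_dir: "/var/lib/postgresql/12/main"    # path depends on your distro and PostgreSQL version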
  6. Run playbook:

ansible-playbook deploy_pgcluster.yml

(asciinema demo: cluster deployment recording)
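
Once the playbook finishes, you can check the state of the cluster with patronictl (a sketch; the path to the Patroni configuration file may differ in your setup):

patronictl -c /etc/patroni/patroni.yml list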


Variables

See the vars/main.yml, system.yml and (Debian.yml or RedHat.yml) files for more details.

Cluster Scaling

Add a new postgresql node to an existing cluster


After you successfully deployed your PostgreSQL HA cluster, you may need to scale it further.
Use the add_pgnode.yml playbook for this.

❕ This playbook does not scale the etcd cluster or the haproxy balancers.

While this playbook runs, the new nodes are prepared in the same way as during the initial deployment of the cluster. But unlike the initial deployment, all the necessary configuration files will be copied from the master server.

Preparation:
  1. Add the new node (or its subnet) to the pg_hba.conf file on all nodes in your cluster
  2. Apply pg_hba.conf for all PostgreSQL instances (see patronictl reload --help); a sketch of both steps is shown below
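
For example (a sketch; the subnet, replication user, cluster name, and configuration path are illustrative):

# line to add to pg_hba.conf on every cluster node:
#   host  replication  replicator  10.128.64.0/24  md5

# then reload the configuration through Patroni on any node:
patronictl -c /etc/patroni/patroni.yml reload postgres-cluster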
Steps to add a new node:
  1. Go to the playbook directory
  2. Edit the inventory file

Specify the IP address of one of the existing cluster nodes in the [master] group, and the new node (which you want to add) in the [replica] group.

  3. Edit the variable files

Variables that should be the same on all cluster nodes:
with_haproxy_load_balancing, postgresql_version, postgresql_data_dir, postgresql_conf_dir.

  4. Run playbook:

ansible-playbook add_pgnode.yml

Add a new haproxy balancer node


Use the add_balancer.yml playbook for this.

While this playbook runs, the new balancer node is prepared in the same way as during the initial deployment of the cluster. But unlike the initial deployment, all necessary configuration files will be copied from the server specified in the [master] group.

❗️ Please test it in your test environment before using it in production.

Steps to add a new balancer node:
  1. Go to the playbook directory

  2. Edit the inventory file

Specify the IP address of one of the existing balancer nodes in the [master] group, and the new balancer node (which you want to add) in the [balancers] group.

❗️ Attention! The list of firewall ports is determined dynamically based on the group in which the host is specified.
If you add a new haproxy balancer node on one of the existing nodes from the [etcd_cluster] or [master]/[replica] groups, the iptables rules may be rewritten!
See the firewall_allowed_tcp_ports_for.balancers variable in the system.yml file.

  3. Edit the main.yml variable file

Specify with_haproxy_load_balancing: true

  4. Run playbook:

ansible-playbook add_balancer.yml

Restore and Cloning

Create new clusters from your existing backups with pgBackRest or WAL-G
Point-In-Time-Recovery


Create cluster with pgBackRest:
  1. Edit the main.yml variable file
patroni_cluster_bootstrap_method: "pgbackrest"

patroni_create_replica_methods:
  - pgbackrest
  - basebackup

postgresql_restore_command: "pgbackrest --stanza={{ pgbackrest_stanza }} archive-get %f %p"

pgbackrest_install: true
pgbackrest_stanza: "stanza_name"  # specify your --stanza
pgbackrest_repo_type: "posix"  # or "s3"
pgbackrest_repo_host: "ip-address"  # dedicated repository host (if repo_type: "posix")
pgbackrest_repo_user: "postgres"  # if "repo_host" is set
pgbackrest_conf:  # see more options https://pgbackrest.org/configuration.html
  global:  # [global] section
    - {option: "xxxxxxx", value: "xxxxxxx"}
    ...
  stanza:  # [stanza_name] section
    - {option: "xxxxxxx", value: "xxxxxxx"}
    ...
    
pgbackrest_patroni_cluster_restore_command:
  '/usr/bin/pgbackrest --stanza={{ pgbackrest_stanza }} --type=time "--target=2020-06-01 11:00:00+03" --delta restore'

example for S3 https://github.com/vitabaks/postgresql_cluster/pull/40#issuecomment-647146432

  2. Run playbook:

ansible-playbook deploy_pgcluster.yml

Create cluster with WAL-G:
  1. Edit the main.yml variable file
patroni_cluster_bootstrap_method: "wal-g"

patroni_create_replica_methods:
  - wal_g
  - basebackup

postgresql_restore_command: "wal-g wal-fetch %f %p"

wal_g_install: true
wal_g_ver: "v0.2.15"  # version to install
wal_g_json:  # see more options https://github.com/wal-g/wal-g#configuration
  - {option: "xxxxxxx", value: "xxxxxxx"}
  - {option: "xxxxxxx", value: "xxxxxxx"}
  ...
  2. Run playbook:

ansible-playbook deploy_pgcluster.yml

Point-In-Time-Recovery:

You can run an automatic restore of your existing Patroni cluster.
For PITR, specify the required parameters in the main.yml variable file and run the playbook with the tag:

ansible-playbook deploy_pgcluster.yml --tags point_in_time_recovery

Recovery steps with pgBackRest:

1. Stop patroni service on the Replica servers (if running);
2. Stop patroni service on the Master server;
3. Remove patroni cluster "xxxxxxx" from DCS (if it exists);
4. Run "/usr/bin/pgbackrest --stanza=xxxxxxx --delta restore" on the Master;
5. Run "/usr/bin/pgbackrest --stanza=xxxxxxx --delta restore" on the Replicas (if patroni_create_replica_methods: "pgbackrest");
6. Wait for the restore from backup to finish (timeout 24 hours);
7. Start PostgreSQL for recovery (master and replicas);
8. Wait for PostgreSQL recovery to complete (WAL apply);
9. Stop the PostgreSQL instance (if running);
10. Disable the PostgreSQL archive_command (if enabled);
11. Start patroni service on the Master server;
12. Check that PostgreSQL is started and accepting connections on the Master;
13. Make sure the postgresql users (superuser and replication) are present and that their passwords do not differ from those specified in vars/main.yml;
14. Update the postgresql authentication parameters in patroni.yml (if the superuser or replication user has changed);
15. Reload patroni service (if patroni.yml is updated);
16. Start patroni service on the Replica servers;
17. Check that patroni is healthy on the replica servers (timeout 10 hours);
18. Check the postgresql cluster health (finish).

Why disable archive_command?

This is necessary to avoid conflicts in the archive storage when multiple clusters try to send WALs to the same storage, for example when you make multiple clones of a cluster from one backup.

You can change this parameter using patronictl edit-config after the restore.
Alternatively, set disable_archive_command: false so that archive_command is not disabled after the restore.
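
For example, to re-enable archiving after the restore, open the cluster's dynamic configuration in an editor (a sketch; the configuration path is illustrative):

# opens the dynamic configuration in $EDITOR;
# re-enable archiving by adjusting postgresql.parameters.archive_command there
patronictl -c /etc/patroni/patroni.yml edit-config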

Maintenance

Please note that the original design goal of this playbook was the initial deployment of a PostgreSQL HA cluster; it does not currently concern itself with performing ongoing maintenance of a cluster.

You should become familiar with each component of the cluster in order to maintain it.

Disaster Recovery

A high-availability cluster provides an automatic failover mechanism but does not cover all disaster recovery scenarios. You must take care of backing up your data yourself.

etcd

Patroni nodes dump the state of the DCS options to disk on every configuration change, into the file patroni.dynamic.json located in the Postgres data directory. The master (Patroni leader) is allowed to restore these options from the on-disk dump if they are completely absent from the DCS or if they are invalid.

However, I recommend that you read the disaster recovery guide for the etcd cluster:

PostgreSQL (databases)

I can recommend the following backup and restore tools:

Do not forget to validate your backups (for example pgbackrest auto).


License

Licensed under the MIT License. See the LICENSE file for details.

Author

Vitaliy Kukharik (PostgreSQL DBA) [email protected]

Feedback, bug-reports, requests, ...

Are welcome!
