HAHA Logo

HAHA - Highly Available Home Assistant

A Docker swarm solution for running Home Assistant in a highly available, failover configuration.

Uses Ansible playbooks to set up and remove the cluster on the target devices, and GlusterFS for real-time file synchronization between the nodes.

Deploys a fully redundant Home Assistant stack with a pre-configured MariaDB Galera Cluster and a Mosquitto MQTT broker.

How it works

This project provides a docker-compose file which deploys a stack with Home Assistant, MariaDB, Mosquitto and optionally Portainer containers.
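As a rough sketch of how the stack fits together (service names match the ones referenced later in this README; images, paths and other details are illustrative and may differ from the actual docker-compose.yml):

version: "3.7"
services:
  homeassistant:
    image: homeassistant/home-assistant     # illustrative; an arm32v7-compatible image is used in practice
    ports:
      - "8123:8123"
    volumes:
      - /mnt/gluster/homeassistant:/config  # configuration lives on the shared GlusterFS mount
    deploy:
      replicas: 1
  mariadb-seed:
    image: mariadb-galera                   # placeholder; bootstraps the Galera cluster
  mariadb-node:
    image: mariadb-galera                   # placeholder; the regular Galera cluster members
    deploy:
      replicas: 0                           # raised manually after the seed is healthy (see below)
  mosquitto:
    image: eclipse-mosquitto                # illustrative
  portainer:
    image: portainer/portainer              # optional management UI on port 9000
    ports:
      - "9000:9000"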

Even though Docker by itself provides failover by rescheduling failed containers, it doesn't transfer any state between them. This matters because, without state transfer, every replacement container would simply start from scratch.

To transfer state for a stateful application such as this one, this project sets up real-time, file-based synchronization between the nodes for the Home Assistant configuration directory and the Mosquitto retained-message files, using the GlusterFS network filesystem.
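For reference, the GlusterFS setup this corresponds to looks roughly like the following when done by hand (volume name, hostnames and brick paths are illustrative; the Ansible playbook performs the equivalent steps automatically):

$ sudo gluster peer probe node2.local
$ sudo gluster peer probe node3.local
$ sudo gluster volume create haha-vol replica 3 node1.local:/gluster/brick node2.local:/gluster/brick node3.local:/gluster/brick
$ sudo gluster volume start haha-vol
$ sudo mount -t glusterfs localhost:/haha-vol /mnt/haha-vol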

As for the MariaDB database used to record Home Assistant's history, a Galera Cluster is set up in a master/master (multi-primary) configuration, keeping the databases synchronized across the cluster.
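For context, each node in such a multi-primary Galera setup carries wsrep settings along these lines (values are illustrative; the images used by the stack generate this configuration automatically):

[galera]
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name=haha                      # illustrative cluster name
wsrep_cluster_address=gcomm://mariadb-node   # peers are resolved through the swarm overlay network
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2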

Connecting sensors and devices to the cluster using MQTT

This cluster includes the Mosquitto broker for communicating with devices through MQTT. Even though the broker is accessible from any node in the Docker swarm through the swarm's overlay network, a sensor still needs to connect to one of the nodes in the first place.

In the sensor's configuration, we could set the MQTT broker address to one of the nodes' IP addresses. But if that specific node fails, the sensor can no longer reach the swarm at all, and thus cannot connect to the broker.

A solution would be to include all of the nodes' addresses in the sensor's firmware and try them round-robin until one connects. However, such a list of addresses becomes hard to manage with large numbers of sensors, and some systems, like ESPHome, do not support multiple MQTT broker addresses.

Short of running your own DNS server, a working solution is to give all of the nodes the same hostname. This way, a sensor resolving that name via mDNS will always get the IP address of one of the nodes.
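With ESPHome, for example, the sensor then just points at that shared hostname (excerpt only; the hostname shown is illustrative):

mqtt:
  broker: haha.local   # shared mDNS hostname; resolves to whichever node answers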

Experimentally, when the node that an ESPHome-flashed sensor was connected to was turned off, the sensor got stuck trying to resolve a different IP address. Restarting the Avahi service on any of the remaining nodes makes the sensor reconnect. Because of this, Ansible sets up a cron job that restarts the Avahi service every minute.
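As an Ansible task, that cron job could look roughly like this (a sketch; the actual playbook may phrase it differently):

- name: Restart Avahi every minute so mDNS resolution recovers after a node failure
  become: true
  cron:
    name: restart-avahi
    minute: "*"
    job: systemctl restart avahi-daemon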

Limitations

To have a fully, autonomously recoverable setup, a minimum of three devices need to be part of the cluster. This is due to how both Docker swarm and GlusterFS work: simply put, every decision in both systems requires a majority of the nodes to agree. With only two nodes, a single failure already loses that majority. More info on the algorithm can be found here and here.

At this point, the project is built for, and has only been tested on, the Raspberry Pi; the images it uses target the Raspberry Pi's arm32v7 architecture. Support for other devices and architectures should hopefully be easy enough to add, as it is mostly a matter of finding the right Docker images and testing them, and I would like to add it in the future.

Running the cluster

Although effort has been put into making the setup easy to execute with Ansible, a few manual steps are still required.

Cluster options

You can change the MariaDB passwords in the files located in the .secrets directory; all passwords are pass123 by default.

Change the username of the MariaDB database in the docker-compose.yml file; the default username is user123. Be careful: there are two services in the file, mariadb-seed and mariadb-node.
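Purely as an illustration of where to look (the actual variable names in docker-compose.yml may differ), the username typically appears as an environment variable on both MariaDB services:

  mariadb-seed:
    environment:
      MYSQL_USER: user123   # change here...
  mariadb-node:
    environment:
      MYSQL_USER: user123   # ...and here, so both services agree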

Preparing the nodes

Make sure the nodes have the Docker Engine installed and ready. Also set up SSH if you plan on connecting to them from Ansible over it.
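On Raspberry Pi OS, one common way to install the Docker Engine is the official convenience script:

$ curl -fsSL https://get.docker.com | sh
$ sudo usermod -aG docker pi   # optional: lets the pi user run docker without sudo (log out and back in afterwards)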

Ansible inventory setup

Create a hacluster group in the /etc/ansible/hosts file and add your cluster nodes to it. Instructions on how to do so can be found here.

# Example hacluster group definition. Append this to your /etc/ansible/hosts inventory file.

[hacluster]
192.168.1.100  ansible_connection=ssh  ansible_ssh_user=pi  ansible_ssh_pass=raspberry
192.168.1.101  ansible_connection=ssh  ansible_ssh_user=pi  ansible_ssh_pass=raspberry
192.168.1.102  ansible_connection=ssh  ansible_ssh_user=pi  ansible_ssh_pass=raspberry

Setup HAHA cluster playbook

This playbook sets up everything: it adds the nodes to a trusted Gluster pool, creates a Gluster volume and mounts it on all nodes, then copies the default configuration files for Home Assistant and Mosquitto. It also sets up a cron job on all nodes to periodically restart the Avahi service. Finally, it initializes a Docker swarm, joins all nodes to it, and starts the HAHA stack. Run the playbook using the command below.

$ ansible-playbook setup-hacluster.yml

Initializing the Galera Cluster

After running the Ansible playbook, the cluster is not ready for use yet; the Galera Cluster requires some manual initialization. Follow the steps below. For these, you can use the provided Portainer container manager included in docker-compose.yml (it runs on port 9000), or the docker CLI shown after the list.

  1. Wait until the initialization of the mariadb-seed container succeeds (the container is marked as healthy).
  2. Raise the number of replicas of the mariadb-node service from zero to the number of nodes in the cluster.
  3. After all of the mariadb-node containers are done initializing (marked as healthy), remove the mariadb-seed service (set replicas to zero).
  4. Set the number of replicas of the homeassistant service to one.

After this, the Home Assistant container should initialize and be ready to use on port 8123 on any node in the swarm.
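If you prefer the command line over Portainer, the same scaling steps can be performed with docker service scale (assuming the stack was deployed under the name haha, so the services are prefixed haha_; adjust to your actual stack name):

$ docker service scale haha_mariadb-node=3    # step 2: one replica per node (here: three nodes)
$ docker service scale haha_mariadb-seed=0    # step 3: retire the seed once all nodes are healthy
$ docker service scale haha_homeassistant=1   # step 4: start Home Assistant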

Remove HAHA cluster playbook

This playbook stops the HAHA stack, unmounts the Gluster volume and removes it. It also removes the cron job that restarts the Avahi service. Run the following command:

$ ansible-playbook remove-hacluster.yml

Extremely high availability mode

When reacting to a failure inside the cluster, the Docker swarm scheduler might take a while to start and initialize a new Home Assistant or Mosquitto container, and even longer if the new node doesn't already have the required Docker image and has to download it first.

A solution is to speculatively run multiple instances of Home Assistant at the same time, so that a backup is always running. For that, raise the replicas of the homeassistant service to two. In the background, both instances will write, in real time, to the same storage backend on the GlusterFS volume. This might seem troubling at first, but in practice there weren't any apparent problems and GlusterFS handles it well.
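Assuming the same haha stack name as in the initialization section above, raising the replica count is a single command:

$ docker service scale haha_homeassistant=2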

Running two instances of the Mosquitto broker simultaneously would require bridging them, which is not part of this project. An alternative could be to use ESPHome's native API as a replacement for MQTT.

Running with 2 devices

Although unsupported by both Docker swarm and GlusterFS for the reasons explained in the limitations section, the cluster can technically run on just two nodes.

However, you lose the automatic recovery that 3+ devices provide. Docker swarm will fail to start replacement containers after a node failure (it basically freezes), so you need to apply the principles from the extremely high availability section to make sure at least one container of each service keeps running. In practice, this means running two Home Assistant containers simultaneously by setting the replica count to 2 instead of 1 during initialization.

Moreover, a split-brain scenario may occur in the GlusterFS volume if only one of the two nodes is online. The volume will keep working, but manual recovery will be needed when re-adding the lost node.
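GlusterFS provides tooling for inspecting and resolving such split-brain states, for example (volume name is illustrative):

$ sudo gluster volume heal haha-vol info split-brain                          # list files currently in split-brain
$ sudo gluster volume heal haha-vol split-brain latest-mtime <path-to-file>   # keep the most recently modified copy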

Credits

Colin Mollenhour - For his amazing solution for managing a MariaDB Galera Cluster in auto-scheduling systems, such as Docker swarm.

User quasar66 - For describing his own HA setup which served as an inspiration for this project.
