
Comcast / Bynar

License: Apache-2.0
Server remediation as a service

Programming Languages

rust, PLpgSQL, shell

Projects that are alternatives of or similar to Bynar

Bluezero
Middleware for distributed applications
Stars: ✭ 17 (-67.92%)
Mutual labels:  protobuf, zeromq
Experiments
Personal code, scripts and config files for experiments
Stars: ✭ 457 (+762.26%)
Mutual labels:  protobuf, ceph
J1939-Framework
Framework to work with J1939 Frames used in CAN bus in bus, car and trucks industries
Stars: ✭ 123 (+132.08%)
Mutual labels:  protobuf
MetaTrader4-Bridge
Communication layer between MetaTrader 4 and your project.
Stars: ✭ 66 (+24.53%)
Mutual labels:  zeromq
elm-protobuf
protobuf plugin for elm
Stars: ✭ 93 (+75.47%)
Mutual labels:  protobuf
ceph-open-terrarium
ceph-open-terrarium: deploy with terraform-libvirt ceph cluster.. Configure with saltstack or ansible.
Stars: ✭ 18 (-66.04%)
Mutual labels:  ceph
ZeroMQ
🚀 Client/Server & Pub/Sub Examples with ZeroMQ & Boost
Stars: ✭ 33 (-37.74%)
Mutual labels:  zeromq
pronto
Clojure support for protocol buffers
Stars: ✭ 66 (+24.53%)
Mutual labels:  protobuf
proto2gql
The project has been migrated to https://github.com/EGT-Ukraine/go2gql.
Stars: ✭ 21 (-60.38%)
Mutual labels:  protobuf
protobuf-maven-plugin
Maven Plugin that executes the Protocol Buffers (protoc) compiler
Stars: ✭ 204 (+284.91%)
Mutual labels:  protobuf
protobuf-ts
Protobuf and RPC for TypeScript
Stars: ✭ 527 (+894.34%)
Mutual labels:  protobuf
methanol
⚗️ Lightweight HTTP extensions for Java
Stars: ✭ 172 (+224.53%)
Mutual labels:  protobuf
faabric
Messaging and state layer for distributed serverless applications
Stars: ✭ 39 (-26.42%)
Mutual labels:  zeromq
protoactor-python
Proto Actor - Ultra fast distributed actors
Stars: ✭ 78 (+47.17%)
Mutual labels:  protobuf
ceph-salt
Ceph cluster deployment with SaltStack
Stars: ✭ 83 (+56.6%)
Mutual labels:  ceph
dynamic-queue
The dynamic queue
Stars: ✭ 17 (-67.92%)
Mutual labels:  zeromq
protobuf-decoder
JavaScript-based web UI to decode ad-hoc Protobuf data
Stars: ✭ 107 (+101.89%)
Mutual labels:  protobuf
fmutils
Golang protobuf FieldMask missing utils
Stars: ✭ 64 (+20.75%)
Mutual labels:  protobuf
scalapb-circe
Json/Protobuf convertors for ScalaPB use circe
Stars: ✭ 38 (-28.3%)
Mutual labels:  protobuf
ProtobufDecoder
A Google Protocol Buffers (Protobuf) payload decoder/analyzer
Stars: ✭ 33 (-37.74%)
Mutual labels:  protobuf

Bynar


Warehouse scale server repair, more benign than borg.


Bynar is an open source system for automating server maintenance across the datacenter. It builds upon many years of experience automating the drudgery of server repair; the goal is to have the datacenter maintain itself. Large clusters require constant maintenance: Cassandra, Ceph, Gluster, Hadoop, and others all need failed server parts replaced quickly, or the cluster can become degraded. As your cluster grows, you generally need more people to maintain it. Bynar aims to break this cycle and free up your time, so that your clusters can scale to ever greater sizes without requiring more people to maintain them.

The project is divided into different binaries that all communicate over protobuf:

  1. disk-manager: This program handles the addition and removal of disks from a server
  2. bynar: This program handles detection of failed hard drives, files a ticket for a datacenter technician to replace the drive, waits for the ticket to be resolved, and then makes an API call to disk-manager to add the new disk back into the server.
  3. bynar-client: Enables you to manually make API calls against disk-manager and bynar

To start using Bynar

Infrastructure:

Bynar requires a Postgres database to be set up. Setting up a production-ready Postgres is outside the scope of this document; for testing Bynar, a Docker Postgres container is quick to set up (see the sketch below). The database maintains information about hardware status and ongoing operations.
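A minimal sketch of a throwaway Postgres container for testing; the credentials and host port are placeholders and should match the database section of your bynar.json:

$ docker run -d --name bynar-postgres \
    -e POSTGRES_USER=postgres_user \
    -e POSTGRES_PASSWORD=postgres_passwd \
    -e POSTGRES_DB=database_name \
    -p 8888:5432 \
    postgres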

Configuration:

  1. Create your configuration file. The utility takes JSON config information; edit the /etc/bynar/bynar.json file to configure it. The slack_* fields are optional; they allow Bynar to send alerts to a channel while it's performing maintenance. The daemon_* fields are also optional; they let the user choose the output files when Bynar is run as a daemon. JIRA is the only currently supported back-end ticketing system, and a plugin system allows for more back-end support.
    An optional proxy field can be configured to send JIRA REST API requests through. For extra security, we highly recommend that you enable the vault integration: the disk-manager sits on a port, and an attacker who gains access to it can quickly wipe out your disks. If you don't wish to enable vault integration, set the disk-manager up to listen only on a loopback port. Fields for this file are listed below. A sample file can also be found under config/bynar.json.
{
 "proxy": "https://my.proxy",
 "manager_host": "localhost",
 "manager_port": 5555,
 "slack_webhook": "https://hooks.slack.com/services/ID",
 "slack_channel": "#my-channel",
 "slack_botname": "my-bot",
 "jira_user": "test_user",
 "jira_password": "user_password",
 "jira_host": "https://tickets.jira.com",
 "jira_issue_type": "3",
 "jira_priority": "4",
 "jira_project_id": "MyProject",
 "jira_ticket_assignee": "assignee_username",
 "vault_endpoint": "https://my_vault.com",
 "vault_token": "token_98706420",
 "database": {
     "username": "postgres_user",
     "password": "postgres_passwd",
     "port": "8888",
     "dbname": "database_name",
     "endpoint": "some.endpoint"
 },
 "daemon_output": "bynar_daemon.out",
 "daemon_error" : "bynar_daemon.err",
 "daemon_pid" : "bynar_daemon.pid"
}

Disk Manager

This binary handles adding and removing disks from a server. It uses protobuf serialization to allow RPC usage. Please check the api crate or the bynar-client for more information.

Configuration:

  1. Create your configuration file. The utility takes its JSON config from the /etc/bynar/disk-manager.json file, which should be deployed
    when the Bynar package is installed. The vault_* options are optional but recommended. When enabled, the disk-manager will, upon starting, save its generated public key to vault under /bynar/{hostname}.pem; any clients wanting to connect to it will need to contact vault first (a fetch sketch follows the sample below). If vault is not enabled, it will save the public key to /etc/bynar/.
{
  "backend": "ceph",
  "vault_endpoint": "https://my_vault:8888",
  "vault_token": "token_98706420"
}
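When vault integration is enabled, a client can fetch the disk-manager's public key from vault before connecting. A minimal sketch using the vault CLI; the mount and path layout shown here are assumptions, so adjust them to match your vault setup:

$ vault read bynar/$(hostname).pem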

A Bynar deployment that runs on Ceph should have a ceph.json file describing it. This tells Bynar where to look for the Ceph configuration, user details, etc. Example /etc/bynar/ceph.json file:

{
  "config_file": "/etc/ceph/ceph.conf",
  "user_id": "admin",
  "pool_name": "pool_name",
  "target_weight": 1.0,
  "system_disks": [
    {
      "device": "/dev/sdc"
    }
  ],
  "journal_devices": [
    {
      "device": "/dev/sda"
    },
    {
      "device": "/dev/sdb",
      "partition_id": 1
    }
  ],
  "osd_config": [
    {
      "is_lvm": false,
      "dev_path": "/dev/sdx",
      "journal_path": "/dev/sdxY",
      "rdb_path": "/dev/sdxZ"
    }
  ],
  "udev_rule_path": "/etc/udev/rules.d"
}

The pool_name is the name of the pool used to measure latency in the cluster; target_weight is the desired weight of OSDs in the cluster.

System disks must be specified for Ceph to filter out. This is a list of all disks that Ceph should not run on: any disk holding the root or boot partition, as well as the device paths of the root and boot (/boot, /boot/efi) partitions, must be provided for Bynar to filter out. Bynar needs to be able to distinguish these disks so it does not try to wipe a boot partition. If they are not provided, Ceph will attempt to add/remove the disk/partition as an OSD.

Optionally, latency_cap, backfill_cap, and increment can be specified for Ceph to use. Bynar will gradually weight in an OSD that is added to the cluster, so as not to introduce too much latency or cause issues with PGs stuck in backfill.
Bynar has its own defaults, but explicit parameters can be set. Please note that the latency_cap is in ms.
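A minimal sketch of these optional fields in /etc/bynar/ceph.json; the values below are illustrative assumptions, not tuned recommendations:

{
  "latency_cap": 20.0,
  "backfill_cap": 10,
  "increment": 0.1
}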

Journal devices can optionally be specified for Ceph to use. Bynar will attempt to balance the number of partitions across the devices given. If an explicit partition_id is also given, Bynar will make use of it. If no partition_id is given, Bynar will create new partitions when disks are added. The partition size will be equal to the ceph.conf osd journal size configuration setting, which is given in megabytes (see the sketch below).
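For reference, a minimal ceph.conf sketch setting that journal partition size; the value is an illustrative assumption, in megabytes:

[osd]
osd journal size = 10240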

OSD configs should be specified for Ceph to use, one for each OSD device on the server.
This lets Bynar know whether to add an OSD device manually or through LVM. When configuring a Bluestore device that will not be added as an LVM, you can also specify the journal path and the RocksDB path (the block.wal and block.db symlinks, respectively), though they should not point to the same location.
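For comparison with the sample above, a minimal sketch of an LVM-managed entry in osd_config; the device path is a placeholder:

{
  "is_lvm": true,
  "dev_path": "/dev/sdy"
}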

The udev_rule_path is needed when adding an OSD device manually, as the kernel needs to recognize that the device is owned by ceph:ceph (a hypothetical rule is sketched below).
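A hypothetical example of the kind of udev rule involved, chowning a matching block device to ceph:ceph; the KERNEL match is a placeholder:

SUBSYSTEM=="block", KERNEL=="sdx1", OWNER="ceph", GROUP="ceph", MODE="0660"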

Directory layout:

  1. Top level is the dead disk detector, aka bynar
  2. api is the protobuf api crate
  3. disk-manager is the service that handles the adding and removal of disks

Launch the program

  1. After building Bynar from source or downloading prebuilt packages, launch the disk-manager and bynar services on every server you want maintained (a sketch follows).
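A minimal sketch of launching both binaries from a source build; the paths assume cargo build --release was used, running as root is an assumption based on the disk operations involved, and any additional flags the binaries take are omitted here:

$ sudo ./target/release/disk-manager &
$ sudo ./target/release/bynar &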

To start developing Bynar

This community repository hosts all information about building Bynar from source, how to contribute code and documentation, who to contact about what, etc.

Dependencies for Ubuntu 18.04:

Ensure there is enough space on the root partition of your development system; the typical recommendation is that the root partition be at least 25GB. The following packages are required. Install each using:

sudo apt install <package_name> 
  1. libzmq3-dev 4.1 or higher
  2. libprotobuf-dev 2.5 or higher
  3. librados2 # ceph jewel or higher
  4. libatasmart-dev
  5. libssl-dev
  6. libblkid-dev
  7. libudev-dev # for building
  8. librados-dev # for building
  9. pkg-config # for building libudev
  10. libclang-dev
  11. libzmq5
  12. llvm
  13. libdevmapper-dev
  14. liblvm2-dev
  15. liblvm2app2.2
  16. gcc
  17. clang
  18. smartmontools
  19. parted

CLI command to install all the dependencies:

sudo apt install libzmq3-dev libprotobuf-dev librados2 libatasmart-dev libssl-dev libblkid-dev libudev-dev librados-dev pkg-config libclang-dev llvm libdevmapper-dev liblvm2-dev liblvm2app2.2 gcc clang smartmontools parted

Working Rust environment

Install Rust and point it to the nightly build. The stable version is not sufficient to run the test cases; they need a feature that is only available in the nightly build.

$ curl https://sh.rustup.rs -sSf | sh
$ rustup override set nightly

Retrieving source

Log in to your GitHub account and check out the latest source code from this repository; a clone sketch follows.
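A minimal sketch, assuming an HTTPS clone of this repository; adjust the URL if you work from a fork:

$ git clone https://github.com/Comcast/Bynar.git
$ cd Bynar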

Then, to create the executable binary, run:

$ cargo build --release

To check your code without building the binary:

$ cargo check

Bynar Workflow

Hardware issues crop up all the time as part of the regular life cycle of servers. Bynar can almost completely automate the maintenance of hard drive failures, except for the physical replacement of the drive itself. The typical human workflow looks something like this:

  1. Receive an alert about a drive failing
  2. SSH over to the server to investigate. Try to rule out obvious things
  3. Conclude drive is dead and file a support ticket with the datacenter tech to remove it
    • Or file a ticket with HP/Dell/Cisco/Etc to replace the drive
  4. Depending on the software running on top of the drive, you may have to:
    • Inform the cluster that the drive is dead
    • Rebalance the data in the cluster
  5. Wait for a replacement
  6. After the drive is replaced inform the clusters that the drive is now back in service and rebalance the data back onto the drive.

So how can Bynar help? Well, it can handle steps 1, 2, 3, 4, and 6. Nearly everything! While it is replacing your drives it can also keep you in the loop over Slack or other channels. The time saved here multiplies with each piece of hardware replaced, and you can focus your time and energy on other things. It's a positive snowball effect!

Testing

Note that root permissions are required for integration testing, because the test functions attempt to create loopback devices, mount them, check their filesystems, etc., and all of that requires root. The nightly compiler is also required for testing because mocktopus makes use of features that haven't landed in stable yet. To test, run:

$ sudo ~/.cargo/bin/cargo test -- --nocapture

Support and Contributions

If you need support, start by checking the issues page. If that doesn't answer your questions, or if you think you found a bug, please file an issue.

That said, if you have questions, reach out to us.

Want to contribute to Bynar? Awesome! Check out the contributing guide.
