
bytewatch / Dolphinbeat

Licence: apache-2.0
A server that pulls and parses MySQL binlog and pushes change data into different sinks, such as Kafka.

Programming Languages

go
31211 projects - #10 most used programming language

Projects that are alternatives of or similar to Dolphinbeat

Machine
Machine is a workflow/pipeline library for processing data
Stars: ✭ 78 (-52.44%)
Mutual labels:  pipeline, stream-processing
Prisma
Next-generation ORM for Node.js & TypeScript | PostgreSQL, MySQL, MariaDB, SQL Server, SQLite & MongoDB (Preview)
Stars: ✭ 18,168 (+10978.05%)
Mutual labels:  mysql, mariadb
Aiomysql
aiomysql is a library for accessing a MySQL database from the asyncio framework
Stars: ✭ 1,252 (+663.41%)
Mutual labels:  mysql, mariadb
Pomelo.entityframeworkcore.mysql
Entity Framework Core provider for MySQL and MariaDB built on top of MySqlConnector
Stars: ✭ 2,099 (+1179.88%)
Mutual labels:  mysql, mariadb
Nukeviet
NukeViet CMS is a multi-site Content Management System and the first open source CMS in Vietnam. NukeViet received the Vietnam Talent award in 2011, and the Ministry of Education and Training of Vietnam officially encourages its use.
Stars: ✭ 113 (-31.1%)
Mutual labels:  mysql, mariadb
Dbbench
🏋️ dbbench is a simple database benchmarking tool which supports several databases and custom scripts
Stars: ✭ 52 (-68.29%)
Mutual labels:  mysql, mariadb
Oneinstack
OneinStack - A PHP/JAVA Deployment Tool
Stars: ✭ 1,983 (+1109.15%)
Mutual labels:  mysql, mariadb
Tbls
tbls is a CI-friendly tool for documenting a database, written in Go.
Stars: ✭ 940 (+473.17%)
Mutual labels:  mysql, mariadb
Flink Learning
flink learning blog. http://www.54tianzhisheng.cn/ Covers Flink basics, concepts, principles, hands-on practice, performance tuning, and source-code analysis. Includes learning examples on Flink Connector, Metrics, Library, DataStream API, and Table API & SQL, as well as large real-world Flink project cases (PV/UV, log storage, real-time deduplication of tens of billions of records, monitoring and alerting). You are welcome to support my column "Big Data Real-Time Computing Engine Flink in Practice and Performance Optimization".
Stars: ✭ 11,378 (+6837.8%)
Mutual labels:  stream-processing, mysql
Mysqlconfigurer
Releem MySQL Configurer is a tool that will assist you with MySQL performance tuning. Releem is an online service for automatically optimizing MySQL configuration to improve performance and reduce costs.
Stars: ✭ 102 (-37.8%)
Mutual labels:  mysql, mariadb
Ensembl Hive
EnsEMBL Hive - a system for creating and running pipelines on a distributed compute resource
Stars: ✭ 44 (-73.17%)
Mutual labels:  pipeline, mysql
Mysql
Go MySQL Driver is a MySQL driver for Go's (golang) database/sql package
Stars: ✭ 11,735 (+7055.49%)
Mutual labels:  mysql, mariadb
Mysqldump Php
PHP version of mysqldump cli that comes with MySQL
Stars: ✭ 975 (+494.51%)
Mutual labels:  mysql, mariadb
Ebean
Ebean ORM
Stars: ✭ 1,172 (+614.63%)
Mutual labels:  mysql, mariadb
Wait4x
Wait4X is a CLI tool to wait for everything! It can wait for a port to open or to enter the requested state.
Stars: ✭ 30 (-81.71%)
Mutual labels:  mysql, mariadb
Dockerweb
A docker-powered bash script for shared web hosting management. The ultimate Docker LAMP/LEMP Stack.
Stars: ✭ 89 (-45.73%)
Mutual labels:  mysql, mariadb
Vector
A reliable, high-performance tool for building observability data pipelines.
Stars: ✭ 8,736 (+5226.83%)
Mutual labels:  pipeline, stream-processing
Skeema
Schema management CLI for MySQL
Stars: ✭ 859 (+423.78%)
Mutual labels:  mysql, mariadb
Pinba2
Pinba2: new implementation of https://github.com/tony2001/pinba_engine
Stars: ✭ 101 (-38.41%)
Mutual labels:  mysql, mariadb
Alpine Mariadb
MariaDB running on Alpine Linux [Docker]
Stars: ✭ 117 (-28.66%)
Mutual labels:  mysql, mariadb

DolphinBeat

Other languages: Chinese (中文)

This is a highly available server that pulls MySQL binlog, parses it, and pushes incremental update data into different sinks.

The sink types currently supported officially are Kafka and Stdout.

Features:

  • Supports MySQL and MariaDB.
  • Supports both GTID and non-GTID replication.
  • Supports MySQL failover: with GTID enabled, dolphinbeat keeps working smoothly across a MySQL failover.
  • Supports MySQL DDL: dolphinbeat can parse DDL statements and replay them against its own in-memory schema data.
  • Supports breakpoint resume: dolphinbeat persists its metadata, so it can resume work after crash recovery.
  • Supports standalone and election modes: with election enabled, a dolphinbeat follower takes over from a dead leader.
  • Supports filter rules based on database and table for each sink.
  • Supports an HTTP API to inspect dolphinbeat.
  • Supports metrics in Prometheus style.

The sink types are extensible: you can implement your own sink if needed, but I recommend using the Kafka sink and letting business applications consume the data from Kafka.

Quick start

Prepare your MySQL source, turn on binlog with ROW format, then type the following command and you will see JSON printed by dolphinbeat's Stdout sink.

docker run -e MYSQL_ADDR='8.8.8.8:3306' -e MYSQL_USER='root' -e MYSQL_PASSWORD='xxx' bytewatch/dolphinbeat
{
  "header": {
    "server_id": 66693,
    "type": "rotate",
    "timestamp": 0,
    "log_pos": 0
  },
  "next_log_name": "mysql-bin.000008",
  "next_log_pos": 4
}
...
...

The Docker image above is configured for MySQL with GTID and has only the Stdout sink enabled.

If your source database does not use GTID, please add the -e GTID_ENABLED='false' argument. If your source database is MariaDB, please add the -e FLAVOR='mariadb' argument.
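
For example, a MariaDB source without GTID could be started by combining those arguments with the command above (the address and credentials are placeholders, as before):

docker run -e MYSQL_ADDR='8.8.8.8:3306' -e MYSQL_USER='root' -e MYSQL_PASSWORD='xxx' -e FLAVOR='mariadb' -e GTID_ENABLED='false' bytewatch/dolphinbeat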

If you want to do a deeper test, type the following command and you will get a shell:

docker run -it -e MYSQL_ADDR='8.8.8.8:3306' -e MYSQL_USER='root' -e MYSQL_PASSWORD='xxx' bytewatch/dolphinbeat sh

In this shell, you can modify the configuration in the /data directory and then start dolphinbeat manually.

Configuration description is presented in Wiki.

Compile from source

Type the following commands and you will get the built binary distribution in the build/dolphinbeat directory:

go get github.com/bytewatch/dolphinbeat
make 

Documents

Sink

Kafka

This is the sink intended for production. Dolphinbeat writes data encoded with Protobuf into Kafka, and business applications consume the data from Kafka.

Business applications need to use the client library to decode the data in Kafka messages and do stream processing on the binlog stream.

The Protobuf protocol is presented in protocol.proto.

The Kafka sink has the following features:

  • Strong-ordered delivery: business applications receive events in the same order as the MySQL binlog.
  • Exactly-once delivery: the client library can deduplicate messages that share a sequence number, which can happen on producer retries or Kafka failover.
  • Unlimited event size: dolphinbeat uses a fragmentation algorithm, similar to IPv4, when a binlog event is bigger than Kafka's maximum message size.

A small example using the client library is presented in kafka-consumer.

kafka-consumer is a command-line tool that decodes the data in Kafka messages and prints it as JSON.
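
For orientation only, a minimal consumer loop in Go might look like the sketch below. It assumes the Shopify/sarama Kafka client and a placeholder topic name; decodeEvent and the event struct are hypothetical stand-ins for the real client library and the message layout defined in protocol.proto.

package main

import (
	"fmt"
	"log"

	"github.com/Shopify/sarama"
)

// event is a simplified, hypothetical view of a decoded binlog event; the
// real message layout is defined in protocol.proto.
type event struct {
	Seq  uint64
	Type string
}

// decodeEvent stands in for the client library's Protobuf decoding; it is
// not the real API.
func decodeEvent(b []byte) (*event, error) {
	return &event{}, nil
}

func main() {
	// Consume the topic dolphinbeat writes to ("dolphinbeat" is a placeholder).
	consumer, err := sarama.NewConsumer([]string{"localhost:9092"}, sarama.NewConfig())
	if err != nil {
		log.Fatal(err)
	}
	defer consumer.Close()

	pc, err := consumer.ConsumePartition("dolphinbeat", 0, sarama.OffsetOldest)
	if err != nil {
		log.Fatal(err)
	}
	defer pc.Close()

	var lastSeq uint64 // highest sequence number processed so far
	for msg := range pc.Messages() {
		ev, err := decodeEvent(msg.Value)
		if err != nil {
			log.Printf("decode error: %v", err)
			continue
		}
		// Deduplicate by sequence number: duplicates can appear after
		// producer retries or Kafka failover.
		if lastSeq != 0 && ev.Seq <= lastSeq {
			continue
		}
		lastSeq = ev.Seq
		fmt.Printf("event seq=%d type=%s\n", ev.Seq, ev.Type)
	}
}

A real consumer would also reassemble fragmented events and use the actual Protobuf definitions; see kafka-consumer for the working tool.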

Stdout

This is a sink used for demonstration. Dolphinbeat writes data encoded as JSON to Stdout.

The Stdout sink doesn't support breakpoint resume.

Special thanks

Thanks to siddontang for his popular and powerful go-mysql library!

License

Apache License 2.0
