Deenbe / blaster

Licence: Apache-2.0
Web hooks for message queues

Programming Language: Go
Projects that are alternatives of or similar to blaster

django-eb-sqs-worker
Django Background Tasks for Amazon Elastic Beanstalk
Stars: ✭ 27 (+92.86%)
Mutual labels:  sqs, sqs-consumer, sqs-client
sns-sqs-big-payload
Amazon SNS/SQS client library that enables sending and receiving messages with payload larger than 256KiB via Amazon S3.
Stars: ✭ 40 (+185.71%)
Mutual labels:  sqs, sqs-consumer
celery-connectors
Want to handle 100,000 messages in 90 seconds? Celery and Kombu are that awesome - Multiple publisher-subscriber demos for processing json or pickled messages from Redis, RabbitMQ or AWS SQS. Includes Kombu message processors using native Producer and Consumer classes as well as ConsumerProducerMixin workers for relay publish-hook or caching
Stars: ✭ 37 (+164.29%)
Mutual labels:  sqs, sqs-consumer
mq-go
SQS Consumer Server for Go
Stars: ✭ 28 (+100%)
Mutual labels:  sqs, sqs-consumer
discord-message-handler
Message and command handler for discord.js bots and applications
Stars: ✭ 19 (+35.71%)
Mutual labels:  messaging, message-handler
Parallel Consumer
Parallel Apache Kafka client wrapper with client side queueing, a simpler consumer/producer API with key concurrency and extendable non-blocking IO processing.
Stars: ✭ 154 (+1000%)
Mutual labels:  messaging, kafka-consumer
Kombu
Kombu is a messaging library for Python.
Stars: ✭ 2,263 (+16064.29%)
Mutual labels:  messaging, sqs
qpid-cpp
Mirror of Apache Qpid C++
Stars: ✭ 77 (+450%)
Mutual labels:  messaging
toiler
Toiler is an AWS SQS long-polling, thread-based message processor.
Stars: ✭ 15 (+7.14%)
Mutual labels:  sqs
dispersy
The elastic database system. A database designed for P2P-like scenarios, where potentially millions of computers send database updates around.
Stars: ✭ 81 (+478.57%)
Mutual labels:  messaging
azure-service-bus-java
☁️ Java client library for Azure Service Bus
Stars: ✭ 61 (+335.71%)
Mutual labels:  messaging
numspy
A python module for sending free sms as well as finding details of mobile number via website Way2sms.
Stars: ✭ 57 (+307.14%)
Mutual labels:  messaging
chatcola
chatcola.com messaging server - self-host your messages without multi-domain nightmare!
Stars: ✭ 25 (+78.57%)
Mutual labels:  messaging
super-mario-message
Display custom messages in a Super Mario Bros environment
Stars: ✭ 18 (+28.57%)
Mutual labels:  messaging
sqsiphon
No description or website provided.
Stars: ✭ 19 (+35.71%)
Mutual labels:  sqs
streamsx.kafka
Repository for integration with Apache Kafka
Stars: ✭ 13 (-7.14%)
Mutual labels:  messaging
mnm-hammer
mnm implements TMTP protocol. Let Internet sites message members directly, instead of unreliable, insecure email. Contributors welcome! (Client)
Stars: ✭ 66 (+371.43%)
Mutual labels:  messaging
ontopic
Display SNS messages on your terminal
Stars: ✭ 20 (+42.86%)
Mutual labels:  sqs
Muni
Chat with Cloud Firestore
Stars: ✭ 22 (+57.14%)
Mutual labels:  messaging
gosd
A library for scheduling when to dispatch a message to a channel
Stars: ✭ 21 (+50%)
Mutual labels:  messaging

Blaster

CLI based message pump


Blaster is a CLI utility to pump messages from various message queue services. Users can write custom message processing logic in their language of choice and rely on Blaster for optimal work scheduling and fault tolerance.


Introduction

A typical workflow to consume messages from a message queue is: fetch a message, process it, remove it, and repeat. This seemingly straightforward process, however, is often complicated by the logic required to deal with the subtleties of message processing. The following list summarises some of the common complexities, without being exhaustive.

  • Read messages in batches to reduce network round-trips.
  • Improve work distribution by intelligently filling read-ahead buffers.
  • Retry message handling when there are intermittent errors.
  • Reduce the stress on recovering downstream services with circuit breakers.
  • Process multiple messages simultaneously.
  • Prevent exhausting host resources by throttling the maximum number of messages processed at any given time.
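The basic fetch-process-remove loop itself is simple enough to sketch; it is the items listed above that make a production-grade version hard. A minimal sketch, assuming a hypothetical `queue` object with `receive` and `delete` methods (this is not blaster's API):

```javascript
// Naive consumer loop: fetch one message, process it, remove it, repeat.
// `queue` and `handle` are hypothetical; blaster replaces this loop and
// layers batching, retries, and throttling on top of it.
async function consumeLoop(queue, handle) {
  for (;;) {
    const msg = await queue.receive();   // fetch one message
    if (msg === null) break;             // stop when drained (sketch only)
    await handle(msg);                   // process
    await queue.delete(msg);             // remove, then repeat
  }
}
```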

Blaster simplifies the message handling code by providing a flexible message processing pipeline with built-in features to deal with the well-known complexities.

It's built with minimal overhead (i.e. CPU/memory) to ensure that handlers are cost effective when operated on pay-as-you-go infrastructure (e.g. AWS Fargate).
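As one example of the complexity blaster absorbs, hand-rolled retry handling (the behaviour configured by --retry-count and --retry-delay-seconds) might look like the following sketch; `handle` is a hypothetical message handling function, and this is not blaster's actual code:

```javascript
// Retry a delivery up to `retryCount` times, waiting `retryDelayMs`
// between attempts. Hypothetical sketch for illustration only.
async function deliverWithRetry(handle, message, retryCount, retryDelayMs) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await handle(message);            // success: stop retrying
    } catch (err) {
      if (attempt >= retryCount) throw err;    // retries exhausted
      await new Promise((resolve) => setTimeout(resolve, retryDelayMs));
    }
  }
}
```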

How it works

Blaster has an extensible pipeline to bind to a target queue and efficiently deliver messages to the user's message handler. The message handler is launched and managed as a child process of Blaster, and communication between the two processes occurs via a well-defined interface over HTTP.

Example

Step 1: Write a handler

A Blaster message handler is any executable that exposes the message handling logic as an HTTP endpoint. In this example, it is a script written in JavaScript.

#!/usr/bin/env node

const express = require('express');

const app = express();
app.use(express.json());

// Messages are submitted to the handler endpoint as HTTP POST requests
app.post('/', (req, res) => {
    console.log(req.body);
    res.send('ok');
});

// By default blaster forwards messages to http://localhost:8312/
app.listen(8312, () => { console.log('listening'); });

Step 2: Launch it with blaster

Now that we have a message handler, we can launch blaster to handle messages stored in a supported broker. For instance, to process messages in an AWS SQS queue called test with the script created in step 1, launch blaster with the following command (execute it in the directory containing the Node script):

chmod +x ./handler.js
AWS_REGION=ap-southeast-2 blaster sqs --queue-name "test" --handler-command ./handler.js

FAQ

Why doesn't it support forwarding messages to an external HTTP endpoint? Blaster abstracts the complexities of scheduling and fault tolerance in a message consumer. In order to make smart decisions about how work is scheduled, Blaster requires visibility into the resource utilisation of the user's code handling the message. Blaster does not support delivering messages to an external HTTP endpoint because this level of transparency is not easily achievable with externally managed processes (e.g. they could be executing on a remote host).

Usage

blaster <broker> [options]

Global Options

--handler-command

Command to launch the handler.

--handler-args

Comma-separated list of arguments for the handler command.

--max-handlers

Maximum number of concurrent handlers to execute. By default this is calculated by multiplying the number of CPUs available to the process by 256.

--startup-delay-seconds

Number of seconds to wait on start before delivering messages to the handler. The default is five seconds. Setting this to zero turns the static delay off and tells blaster to probe the handler's readiness endpoint instead. The readiness endpoint must listen for HTTP GET requests at the handler URL and respond with an HTTP 200 when the handler is ready to accept messages.

--handler-url

Endpoint the handler listens on. The default value is http://localhost:8312/

--retry-count

When the handler does not respond with an HTTP 200, blaster retries the delivery the number of times indicated by this option.

--retry-delay-seconds

Number of seconds to wait before retrying the delivery of a message.

--version

Show blaster version

Broker Options

SQS

Use the AWS_REGION, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables to specify the environment of the SQS queue.

--queue-name

Name of the queue. Blaster will resolve the queue URL using its name and the region set in AWS_REGION.

--max-number-of-messages

Maximum number of messages to receive in a single poll from SQS. Default setting is 1 and the maximum value supported is 10.

--wait-time-seconds

Blaster uses long polling when receiving messages from SQS. Use this option to control how long each receive call waits for messages to arrive. The default setting is 1.

Kafka

Blaster creates a consumer group with the specified name to receive messages from a Kafka topic. An instance of the handler executable is launched for each partition assigned to the current blaster instance. Since each handler process is isolated in its own address space, this alleviates the need to synchronise access to shared memory in handler code. As a result of this multi-process design, Kafka message handlers should listen on the designated port advertised via the BLASTER_HANDLER_PORT environment variable (as shown in the sample code snippet below).

The Kafka binding in blaster is also aware of partition re-balances that may occur when new members (i.e. new blaster instances) join the consumer group. During a re-balance event, blaster gracefully brings the current handler processes down and launches new ones as per the new partition assignment. This is a useful feature for auto-scaling the message processing nodes based on their resource consumption.

#!/usr/bin/env node

const express = require('express');

const app = express();
app.use(express.json());

app.post('/', (req, res) => {
    console.log(`pid: ${process.pid} partition: ${req.body.properties.partitionId} offset: ${req.body.properties.offset} messageId: ${req.body.messageId}: ${req.body.body}`);
    return res.send('ok');
});

// Bind to the port assigned by blaster or default port. Using default
// port would only work if the topic has a single partition.
const port = process.env['BLASTER_HANDLER_PORT'] || 8312;
app.listen(port, () => {
    console.log('listening on port ', port);
});

A complete example can be found here

--brokers

Comma-separated list of broker addresses.

--topic

Name of the topic to read messages from.

--group

Name of the consumer group. Blaster creates a handler instance for each partition assigned to a member of the consumer group. Messages are delivered to the handler sequentially, in the order they are received.

--start-from-oldest

Force blaster to start reading the partition from the oldest available offset.

--buffer-size

Number of messages to be read into the local buffer.

Message Schema

Since Blaster is designed to work with many different message brokers, it converts the message to a general purpose format before forwarding it to the handler.

{
    "$schema": "http://json-schema.org/schema#",
    "$id": "https://github.com/buddyspike/blaster/message-schema.json",
    "title": "Message",
    "type": "object",
    "properties": {
        "messageId": {
            "type": "string",
            "description": "Unique message id that is generally assigned by the broker"
        },
        "body": {
            "type": "string",
            "description": "Message body with the content"
        },
        "properties": {
            "type": "object",
            "description": "Additional information available in the message such as headers"
        }
    }
}
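For example, a Kafka message delivered to the handler would look something like this (the values are invented for illustration; the properties fields match those used in the Kafka handler sample above):

```json
{
    "messageId": "m-1",
    "body": "hello world",
    "properties": {
        "partitionId": 0,
        "offset": 1001
    }
}
```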

Deploy with Docker

To deploy a blaster handler in a Docker container, copy the Linux binary from Releases into the image's PATH and set the entry point with the desired options.

FROM node:10.15.3-alpine

RUN mkdir /usr/local/handler
WORKDIR /usr/local/handler
COPY .tmp/blaster /usr/local/bin/
COPY *.js *.json /usr/local/handler/

RUN npm install

ENTRYPOINT ["blaster", "sqs", "--handler-command", "./index.js", "--startup-delay-seconds", "0"]

A full example can be found here.

Contributing

git clone https://github.com/buddyspike/blaster
cd blaster

# Run tests
make test

# Build binary
make build

# Build binary and copy it to path
make install

# Build cross compiled binaries
./build.sh

Credits

Made in Australia with ❤
