
Jeffail / Tunny

Licence: MIT
A goroutine pool for Go

Programming Languages

go
31211 projects - #10 most used programming language

Projects that are alternatives of or similar to Tunny

Node Cluster Email
📨 Send an email if a Node cluster throws an exception
Stars: ✭ 60 (-97.82%)
Mutual labels:  workers
React Native Workers
Do heavy data process outside of your UI JS thread.
Stars: ✭ 114 (-95.86%)
Mutual labels:  workers
Meteor Service Worker
A universal service worker for Meteor apps
Stars: ✭ 132 (-95.21%)
Mutual labels:  workers
Rqueue
Rqueue aka Redis Queue [Task Queue, Message Broker] for Spring framework
Stars: ✭ 76 (-97.24%)
Mutual labels:  workers
Workers
Cloudflare Workers
Stars: ✭ 111 (-95.97%)
Mutual labels:  workers
Kafka Flow
KafkaFlow is a .NET framework to consume and produce Kafka messages with multi-threading support. It's very simple to use and very extendable. You just need to install, configure, start/stop the bus with your app and create a middleware/handler to process the messages.
Stars: ✭ 118 (-95.72%)
Mutual labels:  workers
Bootstrap
Tools to bootstrap micro computers
Stars: ✭ 55 (-98%)
Mutual labels:  workers
Cfworker
A collection of packages optimized for Cloudflare Workers and service workers.
Stars: ✭ 152 (-94.48%)
Mutual labels:  workers
Serverless Cloudflare Workers
Serverless provider plugin for Cloudflare Workers
Stars: ✭ 114 (-95.86%)
Mutual labels:  workers
Worker Typescript Template
ʕ •́؈•̀) TypeScript template for Cloudflare Workers
Stars: ✭ 129 (-95.32%)
Mutual labels:  workers
Ngx Papaparse
Papa Parse wrapper for Angular
Stars: ✭ 83 (-96.99%)
Mutual labels:  workers
Webworkify Webpack
launch a web worker at runtime that can require() in the browser with webpack
Stars: ✭ 105 (-96.19%)
Mutual labels:  workers
Workq
Job server in Go
Stars: ✭ 1,546 (-43.88%)
Mutual labels:  workers
Qutee
PHP Background Jobs (Tasks) Manager
Stars: ✭ 63 (-97.71%)
Mutual labels:  workers
Gores
👷 Redis-backed library for creating background jobs in Go. Place jobs in multiple queues and process them later asynchronously.
Stars: ✭ 137 (-95.03%)
Mutual labels:  workers
Muster
A universal data layer for components and services
Stars: ✭ 59 (-97.86%)
Mutual labels:  workers
Dtcqueuebundle
Symfony2/3/4/5 Queue Bundle (for background jobs) supporting Mongo (Doctrine ODM), Mysql (and any Doctrine ORM), RabbitMQ, Beanstalkd, Redis, and ... {write your own}
Stars: ✭ 115 (-95.83%)
Mutual labels:  workers
Php Resque
An implementation of Resque in PHP.
Stars: ✭ 157 (-94.3%)
Mutual labels:  workers
Worker Plugin
👩‍🏭 Adds native Web Worker bundling support to Webpack.
Stars: ✭ 1,840 (-33.21%)
Mutual labels:  workers
Simpleue
PHP queue worker and consumer - Ready for AWS SQS, Redis, Beanstalkd and others.
Stars: ✭ 124 (-95.5%)
Mutual labels:  workers

Tunny

[godoc for Jeffail/tunny] · [goreportcard for Jeffail/tunny]

Tunny is a Golang library for spawning and managing a goroutine pool, allowing you to limit work coming from any number of goroutines with a synchronous API.

A fixed goroutine pool is helpful when you have work coming from an arbitrary number of asynchronous sources, but a limited capacity for parallel processing. For example, when processing CPU-heavy jobs from HTTP requests, you can create a pool with a size that matches your CPU count.

Install

go get github.com/Jeffail/tunny

Or, using dep:

dep ensure -add github.com/Jeffail/tunny

Use

In most cases your heavy work can be expressed as a simple func(), in which case you can use NewFunc. Let's see how this looks using our example of CPU-heavy HTTP requests:

package main

import (
	"io/ioutil"
	"net/http"
	"runtime"

	"github.com/Jeffail/tunny"
)

func main() {
	numCPUs := runtime.NumCPU()

	pool := tunny.NewFunc(numCPUs, func(payload interface{}) interface{} {
		var result []byte

		// TODO: Something CPU heavy with payload

		return result
	})
	defer pool.Close()

	http.HandleFunc("/work", func(w http.ResponseWriter, r *http.Request) {
		input, err := ioutil.ReadAll(r.Body)
		if err != nil {
			http.Error(w, "Internal error", http.StatusInternalServerError)
			return
		}
		defer r.Body.Close()

		// Funnel this work into our pool. This call is synchronous and will
		// block until the job is completed.
		result := pool.Process(input)

		w.Write(result.([]byte))
	})

	http.ListenAndServe(":8080", nil)
}

Tunny also supports timeouts. You can replace the Process call above with the following:

result, err := pool.ProcessTimed(input, time.Second*5)
if err == tunny.ErrJobTimedOut {
	http.Error(w, "Request timed out", http.StatusRequestTimeout)
}

You can also use the context from the request (or any other context) to handle timeouts and deadlines. Simply replace the Process call with the following:

result, err := pool.ProcessCtx(r.Context(), input)
if err == context.DeadlineExceeded {
	http.Error(w, "Request timed out", http.StatusRequestTimeout)
}
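
If you want an explicit deadline rather than relying only on the request's own cancellation, one option (a minimal sketch, not part of the original README; the five-second limit is an arbitrary assumption) is to wrap the request context with context.WithTimeout before handing it to ProcessCtx:

ctx, cancel := context.WithTimeout(r.Context(), 5*time.Second)
defer cancel() // always release the context's resources

result, err := pool.ProcessCtx(ctx, input)
if err == context.DeadlineExceeded {
	http.Error(w, "Request timed out", http.StatusRequestTimeout)
}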

Changing Pool Size

The size of a Tunny pool can be changed at any time with SetSize(int):

pool.SetSize(10) // 10 goroutines
pool.SetSize(100) // 100 goroutines

This is safe to perform from any goroutine even if others are still processing.
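
As a purely illustrative sketch (not from the original README; the ticker interval and target size are arbitrary assumptions), the pool could even be resized periodically from a background goroutine while handlers keep submitting work:

go func() {
	ticker := time.NewTicker(time.Minute)
	defer ticker.Stop()
	for range ticker.C {
		// Safe to call even while other goroutines are processing jobs.
		pool.SetSize(runtime.NumCPU() * 2)
	}
}()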

Goroutines With State

Sometimes each goroutine within a Tunny pool will require its own managed state. In this case you should implement tunny.Worker, which includes calls for terminating, interrupting (in case a job times out and is no longer needed) and blocking the next job allocation until a condition is met.

When creating a pool using Worker types you will need to provide a constructor function for spawning your custom implementation:

pool := tunny.New(poolSize, func() tunny.Worker {
	// TODO: Any per-goroutine state allocation here.
	return newCustomWorker()
})

This allows Tunny to create and destroy Worker types cleanly when the pool size is changed.
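
For reference, a minimal custom worker might look like the sketch below. It assumes the tunny.Worker interface methods Process, BlockUntilReady, Interrupt and Terminate, and the scratch buffer is a hypothetical stand-in for whatever per-goroutine state you actually need:

// customWorker keeps state that is owned by a single pool goroutine.
type customWorker struct {
	scratch []byte // hypothetical per-goroutine scratch space, reused between jobs
}

// Process performs a job synchronously using the worker's own state.
func (w *customWorker) Process(payload interface{}) interface{} {
	// TODO: Something CPU heavy with payload, reusing w.scratch.
	return payload
}

// BlockUntilReady blocks until the worker is able to accept the next job.
func (w *customWorker) BlockUntilReady() {}

// Interrupt is called when a running job times out and is no longer needed.
func (w *customWorker) Interrupt() {}

// Terminate cleans up when the worker is removed from the pool.
func (w *customWorker) Terminate() {
	w.scratch = nil
}

func newCustomWorker() tunny.Worker {
	return &customWorker{scratch: make([]byte, 0, 1024)}
}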

Ordering

Backlogged jobs are not guaranteed to be processed in order. Due to the current implementation of channels and select blocks, a stack of backlogged jobs will be processed as a FIFO queue. However, this behaviour is not part of the spec and should not be relied upon.
