
brmmm3 / fastthreadpool

License: MIT
An efficient and lightweight thread pool

Programming Languages

python

Projects that are alternatives of or similar to fastthreadpool

python-PooledProcessMixIn
Fast Concurrent Pool of preforked-processes and threads MixIn for python's socket server
Stars: ✭ 31 (+14.81%)
Mutual labels:  thread, pool
Lite Pool
A lite fast object pool
Stars: ✭ 42 (+55.56%)
Mutual labels:  fast, pool
Oeasypool
c++11 thread pool
Stars: ✭ 18 (-33.33%)
Mutual labels:  thread, pool
HiFramework.Unity
Based on component to manage project's core logic and module used in unity3d
Stars: ✭ 22 (-18.52%)
Mutual labels:  thread, pool
koa-rest-router
Most powerful, flexible and composable router for building enterprise RESTful APIs easily!
Stars: ✭ 67 (+148.15%)
Mutual labels:  fast
Maat
Validation and transformation library powered by deductive ascending parser. Made to be extended for any kind of project.
Stars: ✭ 27 (+0%)
Mutual labels:  fast
fundamental-tools
Web applications with ABAP, done simple.
Stars: ✭ 42 (+55.56%)
Mutual labels:  fast
DynAdjust
Least squares adjustment software
Stars: ✭ 43 (+59.26%)
Mutual labels:  fast
wymlp
tiny fast portable real-time deep neural network for regression and classification within 50 LOC.
Stars: ✭ 36 (+33.33%)
Mutual labels:  fast
simple-pool
#NVJOB Simple Pool. Pool for optimizing object loading. Unity Asset.
Stars: ✭ 16 (-40.74%)
Mutual labels:  pool
CPU-MEM-monitor
A simple script to log Linux CPU and memory usage (using top or pidstat command) over time and output an Excel- or OpenOfficeCalc-friendly report
Stars: ✭ 41 (+51.85%)
Mutual labels:  thread
cryptonote-aeon-pool
AEON coin mining pool
Stars: ✭ 15 (-44.44%)
Mutual labels:  pool
dowels
🔨 a tiny but powerful javascript library that performs client-side routing, templating, and REST API communication to help you get your single-page web applications running in seconds
Stars: ✭ 13 (-51.85%)
Mutual labels:  fast
litchi
A distributed Java game server framework
Stars: ✭ 97 (+259.26%)
Mutual labels:  fast
base58
fast/simple Base58 encoding/decoding in golang.
Stars: ✭ 39 (+44.44%)
Mutual labels:  fast
tiket
TIKET is a ticketing/helpdesk system to support and help you deal with issues/incidents in your organization or from customers.
Stars: ✭ 59 (+118.52%)
Mutual labels:  thread
jobflow
runs stuff in parallel (like GNU parallel, but much faster and memory-efficient)
Stars: ✭ 67 (+148.15%)
Mutual labels:  fast
python-libmf
No description or website provided.
Stars: ✭ 24 (-11.11%)
Mutual labels:  fast
charnapool
High performance Node.js (with native C addons) mining pool for Cryptonote based coins, optimized for Charnacoin.
Stars: ✭ 25 (-7.41%)
Mutual labels:  pool
amber-router
A URL Routing shard.
Stars: ✭ 16 (-40.74%)
Mutual labels:  fast

An efficient and lightweight thread pool

Existing thread pool implementations have a relatively high overhead in certain situations. This applies especially to apply_async in multiprocessing.pool.ThreadPool and to concurrent.futures.ThreadPoolExecutor in general (see the benchmarks). With ThreadPoolExecutor, avoid wait: it can be extremely slow! If you have only a small number of jobs and each job has a relatively long processing time, this overhead does not matter. But with a large number of jobs with short processing times, the overhead of the above implementations noticeably slows down processing. The fastthreadpool module solves this issue because it has a very small overhead in all situations.

The API is described in the project documentation.

Examples

pool = fastthreadpool.Pool()
pool.map(worker, iterable)
pool.shutdown()

Results of successful executions are saved in the done queue, results of failed executions in the failed queue.
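
A minimal, self-contained sketch of this pattern (the square worker is made up for illustration, and it assumes the done and failed queues are accessible as pool.done and pool.failed after shutdown):

import fastthreadpool

def square(x):              # hypothetical worker
    return x * x

pool = fastthreadpool.Pool()
pool.map(square, range(10))
pool.shutdown()             # wait until all jobs have finished
print(list(pool.done))      # results of successful executions
print(list(pool.failed))    # exceptions of failed executions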

pool = fastthreadpool.Pool()
pool.map(worker, iterable, done_cb)
pool.shutdown()

For every successful execution of the worker the done_cb callback function is called. Results of failed executions are saved in the failed queue.
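
A hedged sketch of such a callback (square and results are made up; it assumes done_cb receives the worker's return value):

import fastthreadpool

results = []

def square(x):              # hypothetical worker
    return x * x

def done_cb(result):        # called for every successfully executed job
    results.append(result)

pool = fastthreadpool.Pool()
pool.map(square, range(10), done_cb)
pool.shutdown()
print(sorted(results))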

pool = fastthreadpool.Pool(result_id = True)
job_id1 = pool.submit(worker, foo1)
pool.shutdown()

Results of successful executions are saved in the done queue, results of failed executions in the failed queue. Each entry in these queues is a tuple with the job_id as the first element and the result as the second element.
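
A minimal sketch of reading such entries (the worker is hypothetical; it assumes the done queue can be iterated after shutdown):

import fastthreadpool

def worker(x):                      # hypothetical worker
    return x + 1

pool = fastthreadpool.Pool(result_id = True)
job_id1 = pool.submit(worker, 41)
pool.shutdown()
for job_id, result in pool.done:    # each entry is a (job_id, result) tuple
    print(job_id, result)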

pool = fastthreadpool.Pool(result_id = True)
for i in range(100):
    jobid = pool.submit(worker, foo1, i)        # submit 100 jobs with foo1 and a counter
pool.submit_first(worker, foo2)                 # submit a job to the beginning of the job queue
pool.cancel(jobid)                              # cancel the job with foo1 and i=99
pool.submit_later(0.1, delayed_worker, foo3)    # one-time execution in 0.1 seconds
pool.schedule(1.0, scheduled_worker, foo4)      # repeated execution in a 1 second interval
time.sleep(1.0)
pool.cancel(None, True)
pool.shutdown()

This more complex example shows several features of fastthreadpool. First, 100 jobs with foo1 and a counter are submitted. Then a job is submitted to the beginning of the job queue, and the job with foo1 and i=99 is cancelled. Next a job is scheduled for one-time execution in 0.1 seconds, and finally a job is scheduled for repeated execution at a 1-second interval.

The next example shows a use case for an initialization callback function:

from threading import current_thread

import fastthreadpool
import zstandard as zstd

def worker(compressed_data):
    # use the per-thread decompressor created in cbInit
    return current_thread().Z.decompress(compressed_data)

def cbInit(ctx):
    # initialization callback; ctx is the thread context
    ctx.Z = zstd.ZstdDecompressor()

pool = fastthreadpool.Pool(init_callback = cbInit)
for data in iterable:
    pool.submit(worker, data)
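
Because the worker reads the decompressor from its own thread context, every pool thread works with its own decompressor instance and no decompressor object has to be shared between threads.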

The next example shows a simple echo server. The echo server is extremely fast if the buffer size is big enough. Results on a Ryzen 7 under Linux have shown that this simple server can handle more than 400000 messages per second:

from socket import (socket, AF_INET, SOCK_STREAM, SOL_SOCKET, SO_REUSEADDR,
                    IPPROTO_TCP, TCP_NODELAY)

import fastthreadpool

def pool_echo_server(address, threads, size):
    sock = socket(AF_INET, SOCK_STREAM)
    sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
    sock.bind(address)
    sock.listen(threads)
    with sock:
        while True:
            # handle every accepted client connection in its own pool job
            client, addr = sock.accept()
            pool.submit(pool_echo_client, client, size)

def pool_echo_client(client, size):
    client.setsockopt(IPPROTO_TCP, TCP_NODELAY, 1)
    b = bytearray(size)
    bl = [ b ]
    with client:
        try:
            while True:
                client.recvmsg_into(bl)
                client.sendall(b)
        except:
            # client disconnected
            pass

addr = ("127.0.0.1", 25000)  # example listen address
pool = fastthreadpool.Pool(8)
pool.submit(pool_echo_server, addr, 8, 4096)
pool.join()
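
For completeness, a hedged client sketch that could exercise this server (the address, message size, and loop count are made up for illustration):

from socket import socket, AF_INET, SOCK_STREAM

def echo_client(address, size, count):
    with socket(AF_INET, SOCK_STREAM) as sock:
        sock.connect(address)
        msg = b"x" * size
        buf = bytearray(size)
        for _ in range(count):
            sock.sendall(msg)
            received = 0
            while received < size:      # read the full echoed buffer back
                n = sock.recv_into(memoryview(buf)[received:])
                if n == 0:              # server closed the connection
                    return
                received += n

echo_client(("127.0.0.1", 25000), 4096, 1000)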

Benchmarks

Example ex_semaphore.py results on a Celeron N3160 are:

1.8018 seconds for threading.Semaphore
0.083 seconds for fastthreadpool.Semaphore

fastthreadpool.Semaphore is 21.7 x faster.
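
An illustrative micro-benchmark sketch (not the actual ex_semaphore.py; it assumes fastthreadpool.Semaphore offers the same constructor and acquire/release interface as threading.Semaphore):

import time
import threading
import fastthreadpool

def bench(sem, n = 1000000):
    # acquire and release the semaphore n times and return the elapsed time
    start = time.perf_counter()
    for _ in range(n):
        sem.acquire()
        sem.release()
    return time.perf_counter() - start

print("threading.Semaphore:      %.4f s" % bench(threading.Semaphore(1)))
print("fastthreadpool.Semaphore: %.4f s" % bench(fastthreadpool.Semaphore(1)))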

Example ex_simple_sum.py results on a Celeron N3160 are:

0.019 seconds for simple for loop.
0.037 seconds for simple for loop. Result is saved in class variable.
0.048 seconds for fastthreadpool.map. Results are saved in the done queue.
0.494 seconds for fastthreadpool.submit. Results are saved in the done queue.
0.111 seconds for multiprocessing.pool.ThreadPool.map_async.
21.280 seconds for multiprocessing.pool.ThreadPool.apply_async.

fastthreadpool.map is 2.3 x faster than multiprocessing.pool.ThreadPool.map_async. fastthreadpool.submit is 43 x faster than multiprocessing.pool.ThreadPool.apply_async.
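
An illustrative timing sketch for the map comparison (not the actual ex_simple_sum.py; the worker and the job count are made up):

import time
from multiprocessing.pool import ThreadPool

import fastthreadpool

def worker(x):
    return x + 1

N = 100000

start = time.perf_counter()
pool = fastthreadpool.Pool()
pool.map(worker, range(N))
pool.shutdown()
print("fastthreadpool.map:   %.3f s" % (time.perf_counter() - start))

start = time.perf_counter()
with ThreadPool() as tp:
    tp.map_async(worker, range(N)).wait()
print("ThreadPool.map_async: %.3f s" % (time.perf_counter() - start))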
