
max0x7ba / Atomic_queue

Licence: MIT
C++ lockless queue.


Projects that are alternatives to or similar to Atomic queue

Object threadsafe
We make any object thread-safe and std::shared_mutex 10 times faster to achieve the speed of lock-free algorithms on >85% reads
Stars: ✭ 280 (-24.93%)
Mutual labels:  multithreading, lock-free, high-performance, low-latency
Data-structures
Data Structures in Java
Stars: ✭ 13 (-96.51%)
Mutual labels:  queue, datastructures, data-structures
Swiftcoroutine
Swift coroutines for iOS, macOS and Linux.
Stars: ✭ 690 (+84.99%)
Mutual labels:  multithreading, lock-free, atomic
Onering
A collection of concurrent ring buffers
Stars: ✭ 114 (-69.44%)
Mutual labels:  lock-free, atomic, queue
hatrack
Fast, multi-reader, multi-writer, lockless data structures for parallel programming
Stars: ✭ 55 (-85.25%)
Mutual labels:  queue, high-performance, lock-free
Liblist
Generic Linked list Management Library in C
Stars: ✭ 22 (-94.1%)
Mutual labels:  datastructures, multi-threading, queue
Interview Questions
List of all the Interview questions practiced from online resources and books
Stars: ✭ 187 (-49.87%)
Mutual labels:  data-structures, datastructures, queue
Containers
This library provides various containers. Each container has utility functions to manipulate the data it holds. This is an abstraction so as not to have to manually manage and reallocate memory.
Stars: ✭ 125 (-66.49%)
Mutual labels:  data-structures, datastructures, queue
C Macro Collections
Easy to use, header only, macro generated, generic and type-safe Data Structures in C
Stars: ✭ 192 (-48.53%)
Mutual labels:  data-structures, datastructures, queue
MemoryAllocator.KanameShiki
Fast multi-threaded memory allocator
Stars: ✭ 73 (-80.43%)
Mutual labels:  multi-threading, multithreading, lock-free
Data-Structures-and-Algorithms
Data Structures and Algorithms implementation in Python
Stars: ✭ 31 (-91.69%)
Mutual labels:  queue, data-structures
swift-algorithms-data-structs
📒 Algorithms and Data Structures in Swift. The used approach attempts to fully utilize the Swift Standard Library and Protocol-Oriented paradigm.
Stars: ✭ 42 (-88.74%)
Mutual labels:  datastructures, data-structures
MultiHttp
This is a high-performance, very useful multi-curl tool written in PHP. A super easy-to-use concurrent CURL tool!!! (httpful, restful, concurrency)
Stars: ✭ 79 (-78.82%)
Mutual labels:  high-performance, multithreading
NanoLogLite
A revised version of NanoLog which writes human readable log file, and is easier to use.
Stars: ✭ 18 (-95.17%)
Mutual labels:  high-performance, low-latency
gumble
Collection of high-performance, thread-safe, lock-free data structures for go
Stars: ✭ 12 (-96.78%)
Mutual labels:  high-performance, lock-free
Emitter
High performance, distributed and low latency publish-subscribe platform.
Stars: ✭ 3,130 (+739.14%)
Mutual labels:  high-performance, low-latency
Data Structures Algorithms
My implementation of 85+ popular data structures and algorithms and interview questions in Python 3 and C++
Stars: ✭ 273 (-26.81%)
Mutual labels:  data-structures, datastructures
Corium
Corium is a modern scripting language which combines simple, safe and efficient programming.
Stars: ✭ 18 (-95.17%)
Mutual labels:  high-performance, multithreading
Javascript Datastructures Algorithms
📚 collection of JavaScript and TypeScript data structures and algorithms for education purposes. Source code bundle of JavaScript algorithms and data structures book
Stars: ✭ 3,221 (+763.54%)
Mutual labels:  data-structures, queue
Crossbeam
Tools for concurrent programming in Rust
Stars: ✭ 4,180 (+1020.64%)
Mutual labels:  data-structures, lock-free


atomic_queue

C++14 multiple-producer-multiple-consumer lockless queues based on a circular buffer and std::atomic.

The main design principle these queues follow is minimalism: the bare minimum of atomic operations, a fixed-size buffer, value semantics.

These qualities are also limitations:

  • The maximum queue size must be set at compile time or construction time. The circular buffer side-steps the memory reclamation problem inherent in linked-list based queues at the price of a fixed buffer size. See Effective memory reclamation for lock-free data structures in C++ for more details. A fixed buffer size may not be much of a limitation: once the queue grows beyond the maximum expected size, that indicates that elements aren't being processed fast enough, and if the queue keeps growing it may eventually consume all available memory, affecting the entire system rather than just the problematic process. The only apparent inconvenience is that one has to do an upfront back-of-the-envelope calculation of the largest expected/acceptable queue size.
  • There are no OS-blocking push/pop functions. This queue is designed for ultra-low-latency scenarios, and using an OS blocking primitive would sacrifice push-to-pop latency. For the lowest possible latency one cannot afford to block in the OS kernel: the wake-up latency of a blocked thread is about 1-3 microseconds, whereas this queue's round-trip time can be as low as 150 nanoseconds.

Ultra-low-latency applications need just that and nothing more. The minimalism pays off; see the throughput and latency benchmarks.

Available containers are:

  • AtomicQueue - a fixed size ring-buffer for atomic elements.
  • OptimistAtomicQueue - a faster fixed size ring-buffer for atomic elements which busy-waits when empty or full.
  • AtomicQueue2 - a fixed size ring-buffer for non-atomic elements.
  • OptimistAtomicQueue2 - a faster fixed size ring-buffer for non-atomic elements which busy-waits when empty or full.

These containers have corresponding AtomicQueueB, OptimistAtomicQueueB, AtomicQueueB2, OptimistAtomicQueueB2 versions where the buffer size is specified as an argument to the constructor.
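
For example, a minimal sketch of declaring and using the compile-time-sized and constructor-sized variants (element type and capacity are illustrative; note that the atomic-element variants reserve one NIL element value, T{} by default, which must not be pushed):

#include <atomic_queue/atomic_queue.h>
#include <cstdint>

int main() {
    // Capacity fixed at compile time.
    atomic_queue::AtomicQueue<uint32_t, 1024> fixed;
    fixed.push(1);            // busy-waits while the queue is full
    uint32_t a = fixed.pop(); // busy-waits while the queue is empty

    // Capacity fixed at construction time (B suffix).
    atomic_queue::AtomicQueueB<uint32_t> sized(1024);
    sized.push(a);
    return sized.pop() == 1 ? 0 : 1;
}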

Totally ordered mode is supported. In this mode consumers receive messages in the same FIFO order the messages were posted. This mode is supported for the push and pop functions, but not for the try_ versions. On Intel x86 the totally ordered mode has zero cost, as of 2019.

Single-producer-single-consumer mode is supported. In this mode, no read-modify-write instructions are necessary, only the atomic loads and stores. That improves queue throughput significantly.
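
Both modes are selected with boolean template parameters. A sketch, assuming the trailing bool parameters of AtomicQueue2 are <T, SIZE, MINIMIZE_CONTENTION, MAXIMIZE_THROUGHPUT, TOTAL_ORDER, SPSC>; verify the exact parameter list and defaults in atomic_queue.h:

#include <atomic_queue/atomic_queue.h>

struct Message { long payload; };

// Consumers receive messages in the exact FIFO order they were pushed.
using TotalOrderQueue =
    atomic_queue::AtomicQueue2<Message, 4096, true, true, /*TOTAL_ORDER=*/true>;

// Single producer, single consumer: plain atomic loads and stores only,
// no read-modify-write instructions.
using SpscQueue =
    atomic_queue::AtomicQueue2<Message, 4096, true, true, false, /*SPSC=*/true>;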

A few other thread-safe containers are used for reference in the benchmarks:

  • std::mutex - a fixed size ring-buffer with std::mutex.
  • pthread_spinlock - a fixed size ring-buffer with pthread_spinlock_t.
  • boost::lockfree::spsc_queue - a wait-free single-producer-single-consumer queue from the Boost library.
  • boost::lockfree::queue - a lock-free multiple-producer-multiple-consumer queue from the Boost library.
  • moodycamel::ConcurrentQueue - a lock-free multiple-producer-multiple-consumer queue used in non-blocking mode.
  • moodycamel::ReaderWriterQueue - a lock-free single-producer-single-consumer queue used in non-blocking mode.
  • xenium::michael_scott_queue - a lock-free multi-producer-multi-consumer queue proposed by Michael and Scott (this queue is similar to boost::lockfree::queue which is also based on the same proposal).
  • xenium::ramalhete_queue - a lock-free multi-producer-multi-consumer queue proposed by Ramalhete and Correia.
  • xenium::vyukov_bounded_queue - a bounded multi-producer-multi-consumer queue based on the version proposed by Vyukov.
  • tbb::spin_mutex - a locked fixed size ring-buffer with tbb::spin_mutex from Intel Threading Building Blocks.
  • tbb::concurrent_bounded_queue - eponymous queue used in non-blocking mode from Intel Threading Building Blocks.

Using the library

The containers provided are header-only class templates; no building or installing is necessary.

  1. Clone the project:
git clone https://github.com/max0x7ba/atomic_queue.git
  2. Add the atomic_queue/include directory (use the full path) to the include paths of your build system.
  3. #include <atomic_queue/atomic_queue.h> in your C++ source.
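
No special compiler or linker flags are needed beyond C++14 (and -pthread if you use threads); assuming the clone location from step 1 and a hypothetical source file my_program.cc, something like:

g++ -std=c++14 -O2 -pthread -Iatomic_queue/include my_program.cc -o my_program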

Benchmark build and run instructions

The containers themselves are header-only and require no building; building is only necessary to run the tests and benchmarks.

git clone https://github.com/cameron314/concurrentqueue.git
git clone https://github.com/cameron314/readerwriterqueue.git
git clone https://github.com/mpoeter/xenium.git
git clone https://github.com/max0x7ba/atomic_queue.git
cd atomic_queue
make -r -j4 run_benchmarks

The benchmarks also require the Intel TBB library to be available. It is assumed to be installed in /usr/local/include and /usr/local/lib. If it is installed elsewhere, you may need to modify cppflags.tbb and ldlibs.tbb in the Makefile.
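
Alternatively, GNU make command-line variable assignments override assignments made inside the Makefile, so an invocation along these lines may work without editing anything (paths and flags here are illustrative):

make -r -j4 run_benchmarks cppflags.tbb=-I/opt/tbb/include "ldlibs.tbb=-L/opt/tbb/lib -ltbb"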

API

The containers support the following APIs:

  • try_push - Appends an element to the end of the queue. Returns false when the queue is full.
  • try_pop - Removes an element from the front of the queue. Returns false when the queue is empty.
  • push - Appends an element to the end of the queue. Busy waits when the queue is full. Faster than try_push when the queue is not full. Optional FIFO producer queuing and total order.
  • pop - Removes an element from the front of the queue. Busy waits when the queue is empty. Faster than try_pop when the queue is not empty. Optional FIFO consumer queuing and total order.
  • was_size - Returns the number of unconsumed elements during the call. The state may have changed by the time the return value is examined.
  • was_empty - Returns true if the container was empty during the call. The state may have changed by the time the return value is examined.
  • was_full - Returns true if the container was full during the call. The state may have changed by the time the return value is examined.
  • capacity - Returns the maximum number of elements the queue can possibly hold.
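
A sketch putting the blocking and non-blocking calls together, using the constructor-sized non-atomic-element variant from above (element type and sizes are illustrative):

#include <atomic_queue/atomic_queue.h>
#include <cstdio>
#include <thread>

int main() {
    atomic_queue::AtomicQueueB2<int> q(1024); // capacity set at construction

    std::thread consumer([&] {
        for (int n = 0; n != 100; ++n)
            std::printf("%d\n", q.pop()); // pop busy-waits while the queue is empty
    });

    for (int n = 0; n != 100; ++n)
        while (!q.try_push(n)) // try_push returns false when the queue is full
            ;                  // spin, or do other useful work

    consumer.join();
    std::printf("was_empty: %d\n", static_cast<int>(q.was_empty()));
    return 0;
}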

See example.cc for a usage example.

TODO: full API reference.

Implementation Notes

The available queues here use a ring-buffer array for storing elements. The size of the queue is fixed at compile time or construction time.

In a production multiple-producer-multiple-consumer scenario the ring-buffer size should be set to the maximum allowable queue size. Exhausting the buffer means that the consumers cannot consume elements fast enough; fixing that requires one of the following:

  • increasing the buffer size to be able to handle temporary spikes of produced elements, or
  • increasing the number of consumers to consume elements faster, or
  • decreasing the number of producers to produce fewer elements.

Using a power-of-2 ring-buffer array size allows a couple of important optimizations:

  • The writer and reader indexes get mapped into the ring-buffer array index using the remainder operator % SIZE. Using a power-of-2 size turns that remainder into a single bitwise and instruction, which is as fast as it gets.
  • The element index within the cache line gets swapped with the cache line index within the ring-buffer array, so that subsequent queue elements actually reside in different cache lines. This eliminates contention between producers and consumers on the ring-buffer cache lines: instead of N producers together with M consumers competing on the same ring-buffer array cache line in the worst case, it is only one producer competing with one consumer. This optimisation scales better with the number of producers and consumers, and with element size. With a low number of producers and consumers (up to about 2 of each in these benchmarks) disabling this optimisation may yield better throughput (but higher variance across runs). Both optimisations are illustrated in the sketch below.
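
The following sketch illustrates both optimisations; it is not the library's actual code and assumes a power-of-2 buffer size, 64-byte cache lines and 4-byte elements:

// Illustrative only. SIZE = 1024 = 2^10 elements, 16 = 2^4 elements per
// 64-byte cache line.
constexpr unsigned SIZE = 1024;
constexpr unsigned ELEMS_PER_LINE = 64 / 4;

// i % SIZE compiles to a single bitwise AND when SIZE is a power of 2.
constexpr unsigned ring_index(unsigned i) { return i & (SIZE - 1); }

// One way to spread neighbouring indexes across cache lines: rotate the
// low 4 bits (the element index within a cache line) to the top, so that
// consecutive counter values map to slots in different cache lines.
constexpr unsigned remap(unsigned i) {
    return ((i & (ELEMS_PER_LINE - 1)) << 6) | (i >> 4); // 6 = log2(1024 / 16)
}
// remap(0) == 0, remap(1) == 64, remap(2) == 128: consecutive elements land
// 4 cache lines apart instead of next to each other.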

The containers use the unsigned type for the size and internal indexes. On the x86-64 platform unsigned is 32 bits wide, whereas size_t is 64 bits wide. 64-bit instructions require an extra instruction prefix byte, resulting in slightly more pressure on the CPU instruction cache and the front-end. Hence, 32-bit unsigned indexes are used to maximise performance. That limits the queue size to 4,294,967,295 elements, which seems a reasonable hard limit for many applications.

While the atomic queues can be used with any movable element types (including std::unique_ptr), for best throughput and latency the queue elements should be cheap to copy and lock-free (e.g. unsigned or T*), so that push and pop operations complete fastest.
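
For example, a large payload can be passed by pointer so that the queue element itself stays cheap to copy and atomic (a sketch; ownership management is left to the application):

#include <atomic_queue/atomic_queue.h>

struct Order { char payload[256]; };

// A pointer is cheap to copy and can be stored atomically, so the
// atomic-element queue variants apply directly.
using OrderQueue = atomic_queue::AtomicQueueB<Order*>;

void produce(OrderQueue& q) {
    q.push(new Order{}); // ownership passes to whichever consumer pops it
}

void consume(OrderQueue& q) {
    Order* o = q.pop();
    // ... process *o ...
    delete o;            // the consumer releases the element
}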

push and pop both perform two atomic operations: incrementing the counter to claim the element slot and storing the element into the array. If a thread calling push or pop is pre-empted between the two atomic operations, another thread calling pop or push (correspondingly) on the same slot spins on loading the element until the element is stored; other threads calling push and pop are not affected. Using real-time SCHED_FIFO threads reduces the risk of pre-emption; however, a higher priority SCHED_FIFO thread or a kernel interrupt handler can still preempt your SCHED_FIFO thread. If the queues are used on isolated cores with real-time priority threads, so that no pre-emption or interrupts occur, the queue operations become lock-free.

So, ideally, you may want to run your critical low-latency code on isolated cores that no other processes can possibly use, and disable real-time thread throttling to prevent SCHED_FIFO real-time threads from being throttled.

Some people have proposed busy-waiting with a call to sched_yield/pthread_yield. However, sched_yield is the wrong tool for locking because it doesn't communicate to the OS kernel what the thread is waiting for, so the OS scheduler can never wake the calling thread up at the "right" time, unless there are no other threads that can run on this CPU. See more details about sched_yield and spinlocks from Linus Torvalds.

Benchmarks

View throughput and latency benchmarks charts.

Methodology

There are a few OS behaviours that complicate benchmarking:

  • The CPU scheduler can place threads on different CPU cores on each run. To avoid that, the threads are pinned to specific CPU cores.
  • The CPU scheduler can preempt threads. To avoid that, real-time SCHED_FIFO priority 50 is used to disable scheduler time quantum expiry and make the threads non-preemptable by lower priority processes/threads.
  • The scheduler throttles real-time threads by default. Real-time thread throttling is disabled (see below).
  • Adverse address space randomisation may cause extra CPU cache conflicts. To minimise its effects, the benchmark executable is run at least 33 times and the results with the highest throughput / lowest latency are selected.

I only have access to a few x86-64 machines. If you have access to different hardware, feel free to submit the output file of scripts/run-benchmarks.sh and I will include your results in the benchmarks page.

Huge pages

When huge pages are available, the benchmarks use 1x1GB or 16x2MB huge pages for the queues to minimise TLB misses. To enable huge pages, do one of:

sudo hugeadm --pool-pages-min 1GB:1 --pool-pages-max 1GB:1
sudo hugeadm --pool-pages-min 2MB:16 --pool-pages-max 2MB:16

Real-time thread throttling

By default, the Linux scheduler throttles real-time threads from consuming 100% of the CPU, which is detrimental to benchmarking. Full details can be found in Real-Time group scheduling. To disable real-time thread throttling, do:

echo -1 | sudo tee /proc/sys/kernel/sched_rt_runtime_us >/dev/null

Throughput and scalability benchmark

N producer threads push 4-byte integers into the same queue, and N consumer threads pop the integers from the queue. The producers post 1,000,000 messages in total. The total time to send and receive all the messages is measured. The benchmark is run from 1 producer and 1 consumer up to (total-number-of-CPUs / 2) producers/consumers to measure the scalability of the different queues.

Ping-pong benchmark

One thread posts an integer to another thread through one queue and waits for a reply from another queue (2 queues in total). The benchmark measures the total time of 100,000 ping-pongs, best of 10 runs. Contention is minimal here (1 producer, 1 consumer, 1 element in the queue) in order to achieve and measure the lowest latency. The benchmark reports the average round-trip time.

Contributing

The project uses .editorconfig and .clang-format to automate formatting. Pull requests are expected to be formatted using these settings.

Help needed

  • Submit pull requests with benchmarking code for other queues. The queues should be somewhat widely used or have exceptional performance, not my-first-mpmc-queue type projects.
  • Benchmarking results on different architectures or with many more cores, in particular on AMD Ryzen CPUs. Run scripts/run-benchmarks.sh and email me the results file, or put it under results/ and submit a pull request.

Copyright (c) 2019 Maxim Egorushkin. MIT License. See the full licence in the LICENSE file.
