germandiagogomez / The Cpp Abstraction Penalty

Licence: MIT
Modern C++ benchmarking

Programming Languages

cpp
1120 projects

Labels

Projects that are alternatives of or similar to The Cpp Abstraction Penalty

Php Framework Benchmark
PHP Framework Benchmark
Stars: ✭ 1,035 (+1400%)
Mutual labels:  benchmark
Rb
A thread-safe fixed-size circular buffer written in safe Rust.
Stars: ✭ 59 (-14.49%)
Mutual labels:  benchmark
Umesimd
UME::SIMD, a library for explicit SIMD vectorization.
Stars: ✭ 66 (-4.35%)
Mutual labels:  benchmark
Jsbench Me
jsbench.me - JavaScript performance benchmarking playground
Stars: ✭ 50 (-27.54%)
Mutual labels:  benchmark
Pplbench
Evaluation Framework for Probabilistic Programming Languages
Stars: ✭ 57 (-17.39%)
Mutual labels:  benchmark
Scalajs Benchmark
Benchmarks: write in Scala or JS, run in your browser. Live demo:
Stars: ✭ 63 (-8.7%)
Mutual labels:  benchmark
Java Sec Code
Common Java web vulnerabilities and security code, based on Spring Boot and Spring Security
Stars: ✭ 1,033 (+1397.1%)
Mutual labels:  benchmark
Http Benchmark Tornado
A high-performance HTTP benchmarking tool based on Python Tornado. Java Netty version: https://github.com/junneyang/http-benchmark-netty
Stars: ✭ 67 (-2.9%)
Mutual labels:  benchmark
Functional Components Benchmark
Directly calling functional components instead of mounting them is faster.
Stars: ✭ 59 (-14.49%)
Mutual labels:  benchmark
Http Benchmarks
Benchmarks for common embedded Java and Kotlin web frameworks
Stars: ✭ 65 (-5.8%)
Mutual labels:  benchmark
Dbbench
🏋️ dbbench is a simple database benchmarking tool that supports several databases and your own scripts
Stars: ✭ 52 (-24.64%)
Mutual labels:  benchmark
Pytest Django Queries
Generate performance reports from your django database performance tests.
Stars: ✭ 54 (-21.74%)
Mutual labels:  benchmark
Freqbench
Comprehensive CPU frequency performance/power benchmark
Stars: ✭ 65 (-5.8%)
Mutual labels:  benchmark
Awesome Semantic Segmentation
🤘 awesome-semantic-segmentation
Stars: ✭ 8,831 (+12698.55%)
Mutual labels:  benchmark
Evalne
Source code for EvalNE, a Python library for evaluating Network Embedding methods.
Stars: ✭ 67 (-2.9%)
Mutual labels:  benchmark
Dana
Test/benchmark regression and comparison system with dashboard
Stars: ✭ 46 (-33.33%)
Mutual labels:  benchmark
Benchmark Websocket
Websocket Client and Server for benchmarks with Millions of concurrent connections.
Stars: ✭ 62 (-10.14%)
Mutual labels:  benchmark
Web Components Benchmark
Web Components benchmark for various Web Components technologies
Stars: ✭ 69 (+0%)
Mutual labels:  benchmark
Crypto Bench
Benchmarks for crypto libraries (in Rust, or with Rust bindings)
Stars: ✭ 67 (-2.9%)
Mutual labels:  benchmark
Gl vs vk
Comparison of OpenGL and Vulkan API in terms of performance.
Stars: ✭ 65 (-5.8%)
Mutual labels:  benchmark
* Benchmarks

If you just want to see the benchmark results, go straight to the table [[#Benchmarks-results][here]].

* Purpose

This is a set of C++ benchmarks that compares "raw/C-ish" or old-style C++ implementations of some algorithms against "library-based, modern C++" implementations and measures their execution time.

For every benchmark, two implementations are introduced:

  • raw implementation.
  • modern C++ implementation.

The goal is to put them head to head, on a per-compiler basis, and see how they perform against each other.

Plots are generated that group, per compiler, the two versions side by side.

I am particularly interested in measuring the abstraction penalty incurred by modern C++ compared to plain C-ish approaches when compiling with optimizations enabled, since one of the goals of C++ is the zero-overhead principle.

My first experiment makes use of [[https://github.com/ericniebler/range-v3][Eric Niebler's ranges library]]. There is a C++ standardization proposal based on this work.
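
To make the pairing concrete, here is a minimal sketch of the kind of raw vs. modern pair a benchmark contains. It is a hypothetical example written against C++20 standard ranges rather than range-v3, and it is not one of the actual benchmarks in the suite:

#+BEGIN_src cpp
#include <cstddef>
#include <cstdint>
#include <ranges>
#include <vector>

// "Raw/C-ish" version: explicit index loop, no library abstractions.
std::uint64_t sum_even_squares_raw(const std::vector<int>& v) {
    std::uint64_t sum = 0;
    for (std::size_t i = 0; i < v.size(); ++i) {
        if (v[i] % 2 == 0)
            sum += static_cast<std::uint64_t>(v[i]) * v[i];
    }
    return sum;
}

// "Modern C++" version: the same computation expressed as a ranges pipeline.
std::uint64_t sum_even_squares_modern(const std::vector<int>& v) {
    auto even_squares = v
        | std::views::filter([](int x) { return x % 2 == 0; })
        | std::views::transform([](int x) {
              return static_cast<std::uint64_t>(x) * x;
          });
    std::uint64_t sum = 0;
    for (std::uint64_t s : even_squares)
        sum += s;
    return sum;
}
#+END_src

The question the plots then answer is whether, with optimizations enabled, the two versions compile down to comparably fast code.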

* Benchmark style and guidelines

The scope of this benchmark set is deliberately narrow: I want to show how typical C-ish or old-style C++ code performs against idiomatic modern C++ code.

I want to keep the benchmarks focused on older vs. newer styles. One benchmark should represent an older way of doing something, and its modern counterpart should represent the supposedly better way, that is, a safer or more idiomatic way of doing the same thing.

It is considered cheating to write unconventional, deeply hand-tuned code just so one benchmark beats the other. For example, using carefully crafted SSE intrinsics is not acceptable. Using std::myalgo(my_policy, beg, end) is not cheating, because it is easy to write even if it may internally use SSE or OpenMP.
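
As an illustration of where that dividing line falls (my own sketch, not code from the suite): a standard algorithm with an execution policy is fine, even though the library is free to vectorize or parallelize behind the scenes, whereas hand-rolling the same reduction with intrinsics would not be.

#+BEGIN_src cpp
#include <execution>
#include <numeric>
#include <vector>

// Acceptable "modern" style: trivially easy to write, yet the standard
// library may use SIMD or threads internally.
double sum(const std::vector<double>& v) {
    return std::reduce(std::execution::par_unseq, v.begin(), v.end(), 0.0);
}

// Writing this same reduction by hand with _mm256_* intrinsics would be
// considered cheating for the purposes of this benchmark set.
#+END_src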

Contributions are welcome.

Suggestions and ideas for new benchmarks are welcome as well.

I reserve the right to accept or reject benchmarks for the set, in the hope of keeping it focused. :)

* Compile and run the benchmarks

So you want to run the benchmarks yourself on your computer...

Prerequisites: git, Meson, Ninja, one or more C++ compilers, and gnuplot (for generating the plots).

#+BEGIN_src sh
git clone --recursive https://github.com/germandiagogomez/the-cpp-abstraction-penalty.git
cd the-cpp-abstraction-penalty

# Configure the superproject that will run the benchmarks suite
meson project-benchmarks-suite build-benchmarks-suite

# Run the benchmarks
ninja -C build-benchmarks-suite run_benchmarks_suite

# Generate the plots in directory build-all/plots (requires gnuplot)
ninja -C build-benchmarks-suite generate_plots
#+END_src

This will do the following:

  1. Build the binaries for your compilers.
  2. Run the benchmark binaries (see the timing sketch after this list).
  3. Put a PNG file for each benchmark in the build-all/plots directory, which you can open when done to see the chart.
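
For reference, the timing inside a benchmark binary boils down to something like the following generic sketch. It is illustrative only; the project's actual harness and its function names may differ.

#+BEGIN_src cpp
#include <chrono>
#include <limits>

// Measure the wall-clock time of a callable over `iterations` runs and
// report the best observed run, a common way of reducing noise.
template <typename F>
double best_time_ms(F&& f, int iterations = 10) {
    using clock = std::chrono::steady_clock;
    double best = std::numeric_limits<double>::infinity();
    for (int i = 0; i < iterations; ++i) {
        auto start = clock::now();
        f();
        auto stop = clock::now();
        std::chrono::duration<double, std::milli> elapsed = stop - start;
        if (elapsed.count() < best)
            best = elapsed.count();
    }
    return best;
}
#+END_src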
* How to contribute a new benchmark

TODO

** Getting your benchmark to work

TODO

* Benchmarks results

| Compiler files | Configuration details | Benchmark results |
|----------------+-----------------------+-------------------|
| [[native-files/gcc.txt][gcc]] [[native-files/clang.txt][clang]] | [[.benchmarks_results/config_details/gcc.md][gcc]] [[.benchmarks_results/config_details/clang.md][clang]] | [[.benchmarks_results/gcc%21clang/results.org][Results]] |
| [[native-files\msvc2019.txt][msvc2019]] [[native-files\mingw-w64.txt][mingw-w64]] | [[.benchmarks_results\config_details\msvc2019.md][msvc2019]] [[.benchmarks_results\config_details\mingw-w64.md][mingw-w64]] | [[.benchmarks_results\msvc2019%21mingw-w64\results.org][Results]] |
Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].