dyu / ffi-overhead
License: Apache-2.0
Comparing the C FFI (foreign function interface) overhead in various programming languages
Stars: ✭ 387
ffi-overhead
Comparing the C FFI overhead in various programming languages
Requirements:
- gcc
- tup
- zig
- nim
- java7
- java8
- go
- rust
- d (dmd and ldc2)
- haskell (ghc)
- ocaml
- csharp (mono)
- luajit
- julia
- node
- dart
- wren
- elixir
My environment:
- Intel i7-3630QM laptop (4 cores, HT) with 16 GB RAM
- Ubuntu 14.04 x64
- gcc/g++ 5.4.1
- tup 0.7.4
- zig 0.2.0
- nim 0.14.3
- java 1.7.0_72 and 1.8.0_91
- go 1.8.0
- rust 1.17.0-nightly (c0b7112ba 2017-03-02)
- dmd 2.071.1
- ldc2 1.9.0
- ghc 7.10.3 (at /opt/ghc)
- ocaml 4.06.1
- mono 5.12.0.226
# dynamic languages
- luajit 2.0.4
- julia 0.6.3
- node 6.9.0 (at /opt/node)
- dart 1.22.0 (at /usr/lib/dart)
- wren 0.1.0
- elixir 1.6.5 (Erlang/OTP 20)
Initialize
tup init
Compile
./compile-all.sh
Compile opts:
- -O2 (gcc; applies to c/jni/nim)
- -C opt-level=2 (rust)
Run
./run-all.sh 1000000
Measurement:
- call the C function "plusone" count times and print the elapsed time in milliseconds:

    int x = 0;
    while (x < count) x = plusone(x);

- 2 samples/runs
Results (500M calls)
./run-all.sh 500000000
The results are elapsed time in milliseconds
============================================
luajit:
891
905
julia:
894
889
c:
1182
1182
cpp:
1182
1183
zig:
1191
1190
nim:
1330
1330
rust:
1193
1196
d:
1330
1330
d ldc2:
1191
1189
haskell:
1197
1198
ocamlopt:
1634
1634
ocamlc:
4299
4302
csharp mono:
2697
2690
java7:
4469
4472
java8:
4505
4472
node:
9163
9194
node scoped:
15425
15409
go:
37975
37879
dart:
31265
31282
dart scoped:
61906
69043
wren:
14519
14514
elixir:
23852
23752