hclhkbu / Dlbench

License: MIT
Benchmarking State-of-the-Art Deep Learning Software Tools

Programming Languages

Python
139,335 projects; #7 most used programming language

Projects that are alternatives to or similar to Dlbench

Dbench
Benchmark Kubernetes persistent disk volumes with fio: Read/write IOPS, bandwidth MB/s and latency
Stars: ✭ 138 (-18.34%)
Mutual labels:  benchmark
Php Orm Benchmark
PHP ORM Benchmark
Stars: ✭ 147 (-13.02%)
Mutual labels:  benchmark
D Optimizer
Make Dota 2 fps great again
Stars: ✭ 161 (-4.73%)
Mutual labels:  benchmark
Benchmarks
Comparison tools
Stars: ✭ 139 (-17.75%)
Mutual labels:  benchmark
Metabench
A simple framework for compile-time benchmarks
Stars: ✭ 146 (-13.61%)
Mutual labels:  benchmark
Chineseblue
Chinese Biomedical Language Understanding Evaluation benchmark (ChineseBLUE)
Stars: ✭ 149 (-11.83%)
Mutual labels:  benchmark
Kotlinx Benchmark
Kotlin multiplatform benchmarking toolkit
Stars: ✭ 137 (-18.93%)
Mutual labels:  benchmark
Gapbs
GAP Benchmark Suite
Stars: ✭ 165 (-2.37%)
Mutual labels:  benchmark
Awesome Http Benchmark
HTTP(S) benchmark tools, testing/debugging, & restAPI (RESTful)
Stars: ✭ 2,236 (+1223.08%)
Mutual labels:  benchmark
Kubestone
Performance benchmarks for Kubernetes
Stars: ✭ 159 (-5.92%)
Mutual labels:  benchmark
Fast Crystal
💨 Writing Fast Crystal 😍 -- Collect Common Crystal idioms.
Stars: ✭ 140 (-17.16%)
Mutual labels:  benchmark
Clue
Chinese Language Understanding Evaluation Benchmark (CLUE): datasets, baselines, pre-trained models, corpus and leaderboard
Stars: ✭ 2,425 (+1334.91%)
Mutual labels:  benchmark
Sv Benchmarks
Collection of Verification Tasks
Stars: ✭ 158 (-6.51%)
Mutual labels:  benchmark
Local Feature Evaluation
Comparative Evaluation of Hand-Crafted and Learned Local Features
Stars: ✭ 138 (-18.34%)
Mutual labels:  benchmark
Are We Fast Yet
Are We Fast Yet? Comparing Language Implementations with Objects, Closures, and Arrays
Stars: ✭ 161 (-4.73%)
Mutual labels:  benchmark
Sltbench
C++ benchmark tool. Practical, stable and fast performance testing framework.
Stars: ✭ 137 (-18.93%)
Mutual labels:  benchmark
Leaky Repo
Benchmarking repo for secrets scanning
Stars: ✭ 149 (-11.83%)
Mutual labels:  benchmark
Pytorch Retraining
Transfer Learning Shootout for PyTorch's model zoo (torchvision)
Stars: ✭ 167 (-1.18%)
Mutual labels:  benchmark
Uibench
UI Benchmark
Stars: ✭ 163 (-3.55%)
Mutual labels:  benchmark
Blue benchmark
BLUE benchmark consists of five different biomedicine text-mining tasks with ten corpora.
Stars: ✭ 159 (-5.92%)
Mutual labels:  benchmark

Deep learning software tools benchmark

A benchmark framework for measuring the performance of different deep learning tools. Please refer to http://dlbench.comp.hkbu.edu.hk/ for our testing results and more details. The versions tested so far are listed below; benchmarking with newer versions of the frameworks is on the way:

Tool         Version
Caffe        1.0rc5 (39f28e4)
CNTK         2.0Beta10 (1ae666d)
MXNet        0.93 (32dc3a2)
TensorFlow   1.0 (4ac9c09)
Torch        7 (748f5e3)

This project is licensed under the MIT License.

Introduction

Overview of dlbench

Directory         Description
configs/          Configuration files for running the benchmark
network-configs/  Descriptions of our tested models
synthetic/        Our benchmark tests with synthetic (fake) data
tools/            Running scripts and network configurations for each deep learning tool
logs/             Generated by running benchmark.py; running logs are stored here

Run benchmark

Prepare Data

Prepare the data for the tools you want to run and put it under $HOME/data. Note that, for convenience, each data directory should be named after its tool.
You can download the data we used for our benchmark through the following links:

For synthetic data generation, please refer to the scripts at http://dlbench.comp.hkbu.edu.hk/s/html/v5/index.html.
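
As a minimal sketch (assuming the tool names from the version table above; benchmark.py defines the exact expectations), the per-tool data directories can be laid out like this:

    import os

    # Hypothetical helper: create one data directory per tool under $HOME/data,
    # following the convention that each directory is named after its tool.
    TOOLS = ["caffe", "cntk", "mxnet", "tensorflow", "torch"]  # assumed names

    data_root = os.path.join(os.path.expanduser("~"), "data")
    for tool in TOOLS:
        tool_dir = os.path.join(data_root, tool)
        os.makedirs(tool_dir, exist_ok=True)  # no-op if the directory exists
        print("prepared", tool_dir)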

Prepare .config file

There are some sample configuration files in configs/; you can take one of them as an example and change the value of each item according to your needs and environment.
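
For illustration only (the key names below are hypothetical; the sample files in configs/, such as test.config, are the authoritative reference), a config might look roughly like:

    # hypothetical .config sketch -- start from a real sample in configs/ instead
    tools: tensorflow
    device_id: 0        # -1 for CPU; 0,1,2,3 for a specific GPU
    device_count: 1     # number of cores/devices to use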

Run

To run the benchmark, execute:

python benchmark.py -config configs/<your config file>.config

Running logs will be written to the logs/ directory (see the table above).

Add new tools

Follow the instructions in tools/Readme.md to prepare the running scripts and network configurations. Note that training data should be put in $HOME/data/ so that we can test new tools on our machines and update the benchmarking results on our website.
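
As a rough sketch (the directory names here are hypothetical; tools/Readme.md is the authoritative guide), a new tool mirrors the existing per-tool layout:

    tools/
      newtool/          # hypothetical tool directory, one per tool
        ...             # running scripts and network configurations (see tools/Readme.md)
    $HOME/data/
      newtool/          # training data directory named after the tool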

Update log

May 24, 2017:

  • Updated the scripts for mxnet0.9.5; the previous scripts are preserved in tools/mxnet/mxnet0.7
  • Updated benchmark.py and the format of the config files. Now you only need to specify the device id (-1 for CPU; 0,1,2,3 for GPU) and the device count (number of cores to use) in the config file. An example called test.config can be found in configs/
  • yaroslavvb helped improve the performance of CNTK on CIFAR by keeping variables on the CPU. Thank you!
  • tfboyd optimized the TensorFlow scripts by moving image processing to the CPU, among other tweaks, making TensorFlow run faster than before. Thank you!
  • Updated testbm.sh in tools/common and added scripts for testing on the CPU.