dynverse / Dynbenchmark

Comparison of methods for trajectory inference on single-cell data πŸ₯‡


Benchmarking trajectory inference methods

This repo contains the scripts to reproduce the manuscript

A comparison of single-cell trajectory inference methods
Wouter Saelens*, Robrecht Cannoodt*, Helena Todorov, Yvan Saeys
doi:10.1038/s41587-019-0071-9

Dynverse

Under the hood, dynbenchmark makes use of most dynverse packages for running the methods, comparing them to a gold standard, and plotting the output. Check out dynverse.org for an overview!

Experiments

From start to finish, the repository is divided into several experiments, each with its own scripts and results. These are accompanied by documentation in GitHub READMEs and can thus be easily explored by going to the appropriate folders:

#    experiment                 scripts   results
1    Datasets                   πŸ“„        πŸ“Š
2    Metrics                    πŸ“„        πŸ“Š
3    Methods                    πŸ“„        πŸ“Š
4    Method testing             πŸ“„        πŸ“Š
5    Scaling                    πŸ“„        πŸ“Š
6    Benchmark                  πŸ“„        πŸ“Š
7    Stability                  πŸ“„        πŸ“Š
8    Summary                    πŸ“„        πŸ“Š
9    Guidelines                 πŸ“„        πŸ“Š
10   Benchmark interpretation   πŸ“„        πŸ“Š
11   Example predictions        πŸ“„        πŸ“Š
12   Manuscript                 πŸ“„        πŸ“Š
     Varia                      πŸ“„

We also have several additional subfolders:

  • Manuscript: Source files for producing the manuscript.
  • Package: An R package with several helper functions for organizing the benchmark and rendering the manuscript.
  • Raw: Files generated by hand, such as figures and spreadsheets.
  • Derived: Intermediate data files produced by the scripts. These files are not git committed.

Guidelines

Based on the results of the benchmark, we provide context-dependent user guidelines, available as a Shiny app. This app is integrated within the dyno pipeline, which also includes the wrappers used in the benchmarking and other packages for visualising and interpreting the results.

dynguidelines
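As a sketch of how this fits together (assuming the dynguidelines package is installed; `guidelines_shiny()` is its interactive entry point, and the exact return value may vary by version), the app can be launched locally from R:

```r
# Hedged usage sketch -- assumes dynguidelines is installed, e.g. via
# devtools::install_github("dynverse/dynguidelines")
library(dynguidelines)

# Opens the interactive questionnaire in the browser; after answering
# questions about your dataset, it returns a set of suggested TI methods.
guidelines <- guidelines_shiny()
```

The same app is also hosted online at guidelines.dynverse.org, so running it locally is optional.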

Datasets

The benchmarking pipeline generates (and uses) the following datasets:

  • Gold standard single-cell datasets, both real and synthetic, used to evaluated the trajectory inference methods DOI

datasets

  • The performance of methods used for the results overview figure and the dynguidelines app.

  • General information about trajectory inference methods, available as a data frame in dynmethods::methods

Methods

All methods are wrapped as both Docker and Singularity containers, which can easily be run using dynmethods.
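As a minimal sketch of running one such containerised method (assuming the dyno meta-package is installed and Docker is running; `expression` and `counts` stand in for your own matrices, and `ti_comp1()` is just one of the simpler wrappers):

```r
# Hedged sketch -- assumes dyno is installed, which loads dynwrap,
# dynmethods and friends, and that Docker (or Singularity) is available.
library(dyno)

# Wrap your own expression/count matrices into a dynwrap dataset;
# `expression` and `counts` are placeholders for your data here.
dataset <- wrap_expression(counts = counts, expression = expression)

# Run a wrapped method inside its container; any ti_*() function
# from dynmethods can be substituted for ti_comp1().
model <- infer_trajectory(dataset, ti_comp1())
```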

Installation

dynbenchmark has been tested using R version 3.5.1 on Linux. While running the methods also works on Windows and Mac (see dyno), running the benchmark itself is currently not supported on these operating systems, as many of the commands are Linux-specific.

In R, you can install the dependencies of dynbenchmark from github using:

# install.packages("devtools")
devtools::install_github("dynverse/dynbenchmark/package")

This will install several other dynverse packages. Depending on the number of R packages already installed, this installation should take approximately 5 to 30 minutes.

On Linux, you will need to install udunits and ImageMagick:

  • Debian / Ubuntu / Linux Mint: sudo apt-get install libudunits2-dev imagemagick
  • Fedora / CentOS / RHEL: sudo dnf install udunits2-devel ImageMagick-c++-devel

Docker or Singularity (version β‰₯ 3.0) has to be installed to run TI methods. We suggest Docker on Windows and macOS; on Linux, both Docker and Singularity work fine. Singularity is strongly recommended when running the methods on shared computing clusters.

On Windows 10 you can install Docker CE; older Windows installations require the Docker Toolbox.

You can test whether Docker is correctly installed by running:

dynwrap::test_docker_installation(detailed = TRUE)
## βœ” Docker is installed

## βœ” Docker daemon is running

## βœ” Docker is at correct version (>1.0): 1.39

## βœ” Docker is in linux mode

## βœ” Docker can pull images

## βœ” Docker can run image

## βœ” Docker can mount temporary volumes

## βœ” Docker test successful -----------------------------------------------------------------

## [1] TRUE

The same applies for Singularity:

dynwrap::test_singularity_installation(detailed = TRUE)
## βœ” Singularity is installed

## βœ” Singularity is at correct version (>=3.0): v3.0.0-13-g0273e90f is installed

## βœ” Singularity can pull and run a container from Dockerhub

## βœ” Singularity can mount temporary volumes

## βœ” Singularity test successful ------------------------------------------------------------

## [1] TRUE

These commands will give helpful tips if some parts of the installation are missing.
