
Shians / CellBench

Licence: GPL-3.0
R package for benchmarking single cell analysis methods


Projects that are alternatives of or similar to CellBench

link-too-big
Link Too Big? Make Link Short
Stars: ✭ 12 (-42.86%)
Mutual labels:  benchmark
benchdiff
No description or website provided.
Stars: ✭ 41 (+95.24%)
Mutual labels:  benchmark
Language-Arena
C++ vs D vs Go benchmark
Stars: ✭ 19 (-9.52%)
Mutual labels:  benchmark
CBLUE
CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark
Stars: ✭ 379 (+1704.76%)
Mutual labels:  benchmark
criterion
statistics-driven micro-benchmarking framework
Stars: ✭ 17 (-19.05%)
Mutual labels:  benchmark
eCommerceSearchBench
E-commerce search benchmark, the first end-to-end application benchmark for e-commerce search systems with personalized recommendations. This work is joint with the team of Prof. Jianfeng Zhan (http://www.benchcouncil.org/zjf.html), who is also the chair of the International Open Benchmark Council (BenchCouncil, http://www.benchcouncil.org/).
Stars: ✭ 29 (+38.1%)
Mutual labels:  benchmark
benchmark-kit
phpbenchmarks.com kit to add your benchmark.
Stars: ✭ 31 (+47.62%)
Mutual labels:  benchmark
yjit-bench
Set of benchmarks for the YJIT CRuby JIT compiler
Stars: ✭ 38 (+80.95%)
Mutual labels:  benchmark
touchstone
Smart benchmarking of pull requests with statistical confidence
Stars: ✭ 33 (+57.14%)
Mutual labels:  benchmark
sbt-jmh
"Trust no one, bench everything." - sbt plugin for JMH (Java Microbenchmark Harness)
Stars: ✭ 740 (+3423.81%)
Mutual labels:  benchmark
github-action-benchmark
GitHub Action for continuous benchmarking to keep performance
Stars: ✭ 592 (+2719.05%)
Mutual labels:  benchmark
NPB-CPP
NAS Parallel Benchmark Kernels in C/C++. The parallel versions are in FastFlow, TBB, and OpenMP.
Stars: ✭ 18 (-14.29%)
Mutual labels:  benchmark
Edge-Detection-project
Tiny Image in Javascript - Edge Detection Algorithms
Stars: ✭ 27 (+28.57%)
Mutual labels:  benchmark
LFattNet
Attention-based View Selection Networks for Light-field Disparity Estimation
Stars: ✭ 41 (+95.24%)
Mutual labels:  benchmark
bench
⏱️ Reliable performance measurement for Go programs. All in one design.
Stars: ✭ 33 (+57.14%)
Mutual labels:  benchmark
snowman
Welcome to Snowman App – a Data Matching Benchmark Platform.
Stars: ✭ 25 (+19.05%)
Mutual labels:  benchmark
scATAC-benchmarking
Benchmarking computational single cell ATAC-seq methods
Stars: ✭ 137 (+552.38%)
Mutual labels:  benchmark
gtestx
A C++ benchmark extension for gtest
Stars: ✭ 19 (-9.52%)
Mutual labels:  benchmark
Hetero-Mark
A Benchmark Suite for Heterogeneous System Computation
Stars: ✭ 41 (+95.24%)
Mutual labels:  benchmark
perf
Linux Perf subsystem bindings for Go
Stars: ✭ 19 (-9.52%)
Mutual labels:  benchmark

CellBench


R package for benchmarking single cell analysis methods, primarily inspired by the modelling structure used in DSC.

Installation

if (!require(remotes)) install.packages("remotes")
remotes::install_github("shians/CellBench", ref = "R-3.5", build_opts = c("--no-resave-data", "--no-manual"))
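Alternatively, on a current R release CellBench can be installed from Bioconductor (a sketch, assuming the package is available in your Bioconductor version):

```r
# install BiocManager if it is not already present
if (!require(BiocManager)) install.packages("BiocManager")

# install CellBench from the Bioconductor repositories
BiocManager::install("CellBench")
```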

Introduction

This package revolves around one object and one function: the benchmark_tbl (benchmark tibble) and apply_methods(x, methods).

Data is expected to be stored in lists, and functions stored in lists are applied to that data. This produces a benchmark_tbl in which the names of the list items are stored as columns and the final column contains the results of the computations.

library(CellBench)

sample1 <- data.frame(
    x = matrix(runif(25), nrow = 5, ncol = 5)
)

sample2 <- data.frame(
    x = matrix(runif(25), nrow = 5, ncol = 5)
)

datasets <- list(
    sample1 = sample1,
    sample2 = sample2
)

transform <- list(
    correlation = cor,
    covariance = cov
)

datasets %>% apply_methods(transform)

## # A tibble: 4 x 3
##   data    transform   result       
##   <fct>   <fct>       <list>       
## 1 sample1 correlation <dbl [5 × 5]>
## 2 sample1 covariance  <dbl [5 × 5]>
## 3 sample2 correlation <dbl [5 × 5]>
## 4 sample2 covariance  <dbl [5 × 5]>

We can additionally chain method applications; this combinatorially expands the benchmark_tbl so that all combinations of methods can easily be computed.

metric <- list(
    mean = mean,
    median = median
)

datasets %>%
    apply_methods(transform) %>%
    apply_methods(metric)

## # A tibble: 8 x 4
##   data    transform   metric   result
##   <fct>   <fct>       <fct>     <dbl>
## 1 sample1 correlation mean    0.0602 
## 2 sample1 correlation median -0.0520 
## 3 sample1 covariance  mean    0.00823
## 4 sample1 covariance  median -0.00219
## 5 sample2 correlation mean    0.303  
## 6 sample2 correlation median  0.482  
## 7 sample2 covariance  mean    0.0115 
## 8 sample2 covariance  median  0.0132 

The result table behaves like a regular tibble and works with standard tidyverse verbs.
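For example, the chained result can be filtered and summarised with dplyr like any other tibble (a sketch, continuing from the objects defined above):

```r
library(CellBench)
library(dplyr)

# keep only the correlation-based results, then average the two
# summary metrics within each dataset
datasets %>%
    apply_methods(transform) %>%
    apply_methods(metric) %>%
    filter(transform == "correlation") %>%
    group_by(data) %>%
    summarise(mean_result = mean(result))
```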

See

vignette("Introduction", package = "CellBench")

for a more detailed introduction and example with biological data.

Features

  • High compatibility with dplyr and the rest of the tidyverse; the fundamental data object can be used with dplyr verbs
  • Multithreading: methods can be applied in parallel
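Parallel execution is enabled globally before applying methods (a minimal sketch, assuming the set_cellbench_threads() helper exported by CellBench; see the package documentation for the exact API):

```r
library(CellBench)

# request 4 worker threads for subsequent apply_methods() calls
# (sketch; assumes set_cellbench_threads() as documented in CellBench)
set_cellbench_threads(4)

datasets %>% apply_methods(transform)
```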

License

This package is licensed under GNU General Public License v3.0 (GPL-3.0).

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].