
deckarep / corebench

License: MIT
corebench - run your benchmarks against high performance computing servers with many CPU cores

Programming Languages

go
31211 projects - #10 most used programming language

Projects that are alternatives of or similar to corebench

Swifter
A package which efficiently applies any function to a pandas dataframe or series in the fastest available manner
Stars: ✭ 1,844 (+6258.62%)
Mutual labels:  parallel-computing, parallelization
Future
🚀 R package: future: Unified Parallel and Distributed Processing in R for Everyone
Stars: ✭ 735 (+2434.48%)
Mutual labels:  parallel-computing, parallelization
future.callr
🚀 R package future.callr: A Future API for Parallel Processing using 'callr'
Stars: ✭ 52 (+79.31%)
Mutual labels:  parallel-computing, parallelization
Future.apply
🚀 R package: future.apply - Apply Function to Elements in Parallel using Futures
Stars: ✭ 159 (+448.28%)
Mutual labels:  parallel-computing, parallelization
vuo
A realtime visual programming language for interactive media.
Stars: ✭ 103 (+255.17%)
Mutual labels:  parallel-computing
opensbli
A framework for the automated derivation and parallel execution of finite difference solvers on a range of computer architectures.
Stars: ✭ 56 (+93.1%)
Mutual labels:  parallel-computing
raster
A micro server framework, support coroutine, and parallel-computing, used for building flatbuffers/thrift/protobuf/http protocol service.
Stars: ✭ 19 (-34.48%)
Mutual labels:  parallel-computing
Boost.simd
Boost SIMD
Stars: ✭ 238 (+720.69%)
Mutual labels:  parallel-computing
paradiseo
An evolutionary computation framework to (automatically) build fast parallel stochastic optimization solvers
Stars: ✭ 73 (+151.72%)
Mutual labels:  parallelization
JUDI.jl
Julia Devito inversion.
Stars: ✭ 71 (+144.83%)
Mutual labels:  parallel-computing
mutter-x11-scaling
Mutter build with Ubuntu patch for Xorg fractional scaling on Manjaro / Arch Linux
Stars: ✭ 77 (+165.52%)
Mutual labels:  scaling
Foundations of HPC 2021
This repository collects the materials from the course "Foundations of HPC", 2021, at the Data Science and Scientific Computing Department, University of Trieste
Stars: ✭ 22 (-24.14%)
Mutual labels:  parallel-computing
t8code
Parallel algorithms and data structures for tree-based AMR with arbitrary element shapes.
Stars: ✭ 37 (+27.59%)
Mutual labels:  parallel-computing
hp2p
Heavy Peer To Peer: a MPI based benchmark for network diagnostic
Stars: ✭ 17 (-41.38%)
Mutual labels:  parallel-computing
plazar-js
Modular framework built with enterprise in mind - http://www.plazarjs.com
Stars: ✭ 25 (-13.79%)
Mutual labels:  scaling
Amadeus
Harmonious distributed data analysis in Rust.
Stars: ✭ 240 (+727.59%)
Mutual labels:  parallel-computing
job stream
An MPI-based C++ or Python library for easy distributed pipeline processing
Stars: ✭ 32 (+10.34%)
Mutual labels:  parallel-computing
pestpp
tools for scalable and non-intrusive parameter estimation, uncertainty analysis and sensitivity analysis
Stars: ✭ 90 (+210.34%)
Mutual labels:  parallel-computing
react-native-scaling-utils
Simple scaling utilities for React Native
Stars: ✭ 12 (-58.62%)
Mutual labels:  scaling
pyabc
pyABC: distributed, likelihood-free inference
Stars: ✭ 13 (-55.17%)
Mutual labels:  parallel-computing

Build Status

corebench

A benchmark utility for exercising Go benchmarks and seeing how they scale across a large number of CPU cores.

TL;DR

How does your code scale and perform when running on high-core servers?

Demo

asciicast

Features

First provider: DigitalOcean, currently up to 48 cores.

  • --cpu flag supported: specify a comma-delimited list of CPU counts
  • --benchmem flag supported: capture allocations
  • --count flag supported: multiple iterations of each benchmark
  • --stat flag supported: executes benchstat analysis
  • --regex flag supported: limits which benchmarks are run
  • --leave-running flag supported: leaves the box running so the user can log in
  • sizes command: lists DigitalOcean instance sizes
  • term command: terminates instances created by corebench
  • list command: lists active corebench provisioned instances

Second provider: AWS; specify your preferred instance type (us-east-1a only, for now)

  • --instancetype (e.g. t2.micro)
  • --sizes - currently a TODO: the pricing API needs work to add real value beyond --list, and a mapping of instance type/region --> AMI is needed
  • all other flags supported

Usage

# Install corebench
go get github.com/deckarep/corebench

DigitalOcean:

  • Sign up for a DigitalOcean account if not already a member
  • Create a DigitalOcean Personal Access Token to be used for: --DO_PAT={token-here}
  • Add your SSH public key to DigitalOcean for SSH access: --ssh-fp={ssh-md5-signature}

Run corebench:

// Fetch instance sizes
./corebench do sizes --DO_PAT=$DO_PAT

// Run a benchmark
./corebench do bench github.com/{user}/{repo} [OPTIONS] --DO_PAT=$DO_PAT --ssh-fp=$SF

// List active instances
./corebench do list --DO_PAT=$DO_PAT

// Terminate instances created by corebench
./corebench do term --DO_PAT=$DO_PAT --all
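corebench drives the standard `go test -bench` tooling, so any benchmark it runs remotely can be sanity-checked locally first. Below is an illustrative sketch (the benchmark body and names are examples, not part of corebench) that uses the standard library's `testing.Benchmark` to dry-run a benchmark function before paying for a large instance:

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// An example benchmark body, written exactly as it would appear in a
// _test.go file that corebench clones and runs remotely.
func benchmarkJoin(b *testing.B) {
	parts := []string{"high", "core", "count"}
	for i := 0; i < b.N; i++ {
		_ = strings.Join(parts, "-")
	}
}

func main() {
	// testing.Benchmark executes the function the same way
	// `go test -bench` would, so a quick local run catches broken
	// benchmarks before any cloud instance is provisioned.
	res := testing.Benchmark(benchmarkJoin)
	fmt.Printf("%d iterations, %d ns/op\n", res.N, res.NsPerOp())
}
```

A quick local run like this catches a broken benchmark on your laptop, before a 48-core droplet is ever provisioned.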

AWS:

  • Sign up for AWS!
  • Make sure your instance type is available (request a limit increase with support if needed)

Run corebench:

// Run a benchmark
./corebench aws bench github.com/{user}/{repo} [OPTIONS]

// Run a benchmark on this repo with instancetype xxx and leave the resources running
./corebench aws bench github.com/deckarep/corebench --instancetype m3.medium --leave-running true

// List running instances
./corebench aws list

// Terminate/delete AWS resource stack 'corebench'
./corebench aws term

Here's what happens:

  • A command like the ones above provisions an on-demand high-performance computing server
  • Installs Go and clones your repository
  • Runs Go's benchmark tooling against your repo and generates a comprehensive report showing how well your code scales across a large number of cores
  • Immediately decommissions the server (unless the --leave-running flag is set) so you pay for only a fraction of the cost

Here's what you need:

  • API/Credential access to at least one provider - Both Digital Ocean & AWS are currently supported
  • The ability to pay for your own computing resources with whichever providers you choose
  • A source repo with comprehensive benchmarks for corebench to run

Why benchmark on a large set of cores?

  • If you work in Go, chances are you care about concurrent and parallel performance. If you don't, why are you using Go at all?
  • Developers often benchmark their code on developer workstations with a small number of cores
  • Benchmarks on a small number of cores often don't reflect the true nature of your application
  • Sometimes an algorithm looks great on a few cores, but performance drops off dramatically when the core count gets higher
  • A larger number of cores often exposes performance problems around:
    • Contention/locking bottlenecks
    • Cache coherence issues
    • Parallelization overhead or lack of parallelization at all
    • Multi-threading overhead: starvation, race conditions, live-locks and priority inversion
    • The list goes on...
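To make the contention point concrete, here is an illustrative Go sketch (not from this repo): the same counting workload written once against a single shared atomic and once with per-goroutine counters. On one core the two look similar; as `GOMAXPROCS` grows, cache-line contention on the shared counter increasingly dominates:

```go
package main

import (
	"fmt"
	"sync/atomic"
	"testing"
)

// shared: every iteration from every goroutine contends on the
// same cache line holding n.
func shared(b *testing.B) {
	var n int64
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			atomic.AddInt64(&n, 1)
		}
	})
}

// local: each goroutine counts privately and merges once at the
// end, so iterations never contend with each other.
func local(b *testing.B) {
	var total int64
	b.RunParallel(func(pb *testing.PB) {
		var n int64
		for pb.Next() {
			n++
		}
		atomic.AddInt64(&total, n)
	})
}

func main() {
	fmt.Printf("shared: %s\n", testing.Benchmark(shared))
	fmt.Printf("local:  %s\n", testing.Benchmark(local))
}
```

This is exactly the kind of divergence that only appears at high core counts, which is what corebench exists to measure.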

F.A.Q.

  • Q: Why is 48 the maximum number of cores this utility supports?

  • A: 48 cores is DigitalOcean's beefiest box. On AWS, the core count is determined by the instance type you specify.

  • Q: What happens to the server and the code after the benchmark completes?

  • A: By default, the server is destroyed along with the code and benchmark data. Pass the --leave-running flag if you'd like the server left up so you can log in and inspect the results.

  • Q: Why is DigitalOcean the first provider?

  • A: Easy: because their droplets fire up FAST, allowing a quick feedback loop during development of this project.

  • Q: When will you add Google Cloud, AWS, or {other-provider}?

  • A: AWS support has been added. Google Cloud is next because they offer per-minute billing, which is great for saving money. I'm hoping the community can help me build other providers, along with refactoring as necessary to align the API.

  • Q: Why did you build this tool?

  • A: Because I wanted a quick way to execute remote benchmarks on cloud servers that are beefy (large number of cores).

  • Q: Will you eventually support other languages?

  • A: Maybe. :) Did I mention this code is open source?

  • Q: Why is your code sloppy?

  • A: Because I'm currently in rapid prototype mode...don't worry it will get a lot better. Also through the power of open-source...yada, yada, yada.

  • Q: Doesn't this cost money every time you need to fire up a benchmark?

  • A: Yes, yes it does...you have been warned.

  • A: If you want to test-drive corebench, you can use a weak single-core instance, which costs about a penny an hour.

Caution:

  • This utility is in active development, API is in flux and is expected to change
  • This project and its maintainers are NOT responsible for any monetary charges, overages, or fees resulting from the auto-provisioning process, whether during proper usage of the script, due to bugs in the script, or because you decided to leave a cloud server running for months