
facebookresearch / Pplbench

License: MIT
Evaluation Framework for Probabilistic Programming Languages

Programming Languages

Python
139,335 projects - #7 most used programming language

Labels

Projects that are alternatives to or similar to Pplbench

Esnext Benchmarks
Benchmarks comparing ESNext features to their ES5 and various pre-processor equivalents
Stars: ✭ 28 (-50.88%)
Mutual labels:  benchmark
Grade
Track Go benchmark performance over time by storing results in InfluxDB
Stars: ✭ 41 (-28.07%)
Mutual labels:  benchmark
Awesome Semantic Segmentation
🤘 awesome-semantic-segmentation
Stars: ✭ 8,831 (+15392.98%)
Mutual labels:  benchmark
Segmentation Networks Benchmark
Evaluation framework for testing segmentation networks in Keras
Stars: ✭ 34 (-40.35%)
Mutual labels:  benchmark
Tarsbenchmark
Benchmark tool for Tars/HTTP services
Stars: ✭ 41 (-28.07%)
Mutual labels:  benchmark
Pibench
Benchmarking framework for index structures on persistent memory
Stars: ✭ 46 (-19.3%)
Mutual labels:  benchmark
Sysbench Docker Hpe
Sysbench Dockerfiles and scripts for benchmarking MySQL in VMs and containers
Stars: ✭ 14 (-75.44%)
Mutual labels:  benchmark
Hash Bench
Java Hashing, CRC and Checksum Benchmark (JMH)
Stars: ✭ 53 (-7.02%)
Mutual labels:  benchmark
Torch Scan
Useful information about PyTorch modules (FLOPs, MACs, receptive field, etc.)
Stars: ✭ 41 (-28.07%)
Mutual labels:  benchmark
Php Framework Benchmark
PHP Framework Benchmark
Stars: ✭ 1,035 (+1715.79%)
Mutual labels:  benchmark
Rtb
Benchmarking tool to stress real-time protocols
Stars: ✭ 35 (-38.6%)
Mutual labels:  benchmark
Okutama Action
Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection
Stars: ✭ 36 (-36.84%)
Mutual labels:  benchmark
Java Sec Code
Common Java web vulnerabilities and security code, based on Spring Boot and Spring Security
Stars: ✭ 1,033 (+1712.28%)
Mutual labels:  benchmark
Serverless Faas Workbench
FunctionBench
Stars: ✭ 32 (-43.86%)
Mutual labels:  benchmark
Jsbench Me
jsbench.me - JavaScript performance benchmarking playground
Stars: ✭ 50 (-12.28%)
Mutual labels:  benchmark
Go Benchmark App
Application for HTTP benchmarking via different rules and configs
Stars: ✭ 21 (-63.16%)
Mutual labels:  benchmark
Tbcf
Tracking Benchmark for Correlation Filters
Stars: ✭ 1,011 (+1673.68%)
Mutual labels:  benchmark
Pytest Django Queries
Generate performance reports from your Django database performance tests.
Stars: ✭ 54 (-5.26%)
Mutual labels:  benchmark
Dbbench
🏋️ dbbench is a simple database benchmarking tool that supports several databases and custom scripts
Stars: ✭ 52 (-8.77%)
Mutual labels:  benchmark
Dana
Test/benchmark regression and comparison system with dashboard
Stars: ✭ 46 (-19.3%)
Mutual labels:  benchmark


Getting Started with PPL Bench

What is PPL Bench?

PPL Bench is a new benchmark framework for evaluating probabilistic programming languages (PPLs).

Installation

  1. Enter a virtual (or conda) environment.
  2. Install the PPL Bench core via pip:
pip install pplbench
  3. Install the PPLs you wish to benchmark. For PPL-specific instructions, see Installing PPLs. You can also run the following command to install all PPLs currently supported by PPL Bench (except for JAGS):
pip install pplbench[ppls]

Alternatively, you can install PPL Bench from source. Please refer to Installing PPLs for instructions.
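
To confirm that the core install succeeded, a quick check such as the one below can be run. This is only a sketch: it assumes nothing beyond the package being importable as pplbench and registered with pip under that name.

from importlib.metadata import version

import pplbench  # raises ImportError if the core installation failed

# Report the installed version as recorded by pip.
print("PPL Bench version:", version("pplbench"))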

Getting Started

Let's dive right in with a benchmark run of Bayesian Logistic Regression. To run this, you'll need to install PyStan (if you haven't already):

pip install pystan

Then, run PPL Bench with example config:

pplbench examples/example.json

This will create a benchmark run with two trials of Stan on the Bayesian Logistic Regression model. The results of the run are saved in the outputs/ directory.
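
The same run can also be driven from a short Python script instead of the shell. The sketch below simply shells out to the pplbench command shown above and then lists what was written; it assumes examples/example.json is present (e.g. a source checkout), and the exact layout under outputs/ is treated as an implementation detail.

import subprocess
from pathlib import Path

# Launch the same CLI entry point used above on the example config.
subprocess.run(["pplbench", "examples/example.json"], check=True)

# Each run saves its results (plots, data, logs) under outputs/;
# print whatever files this particular run produced.
for path in sorted(Path("outputs").rglob("*")):
    print(path)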

This is what the Predictive Log Likelihood (PLL) plot should look like: [PLL plot of example run; PLL half plot of example run]

Please see the examples/example.json file to understand the schema for specifying benchmark runs. The schema is documented in pplbench/main.py and can be printed by running the help command:

pplbench -h

A number of models are available in the pplbench/models directory, and the PPL implementations are available in the pplbench/ppls directory.
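
To get a rough overview of what ships with a source checkout, those two directories can simply be listed. This is a sketch under the assumption that you are in the root of a cloned repository; it inspects filenames only and does not rely on any PPL Bench API.

from pathlib import Path

# List the bundled models and PPL implementations by directory contents
# (assumes a source checkout in the current working directory).
for subdir in ("pplbench/models", "pplbench/ppls"):
    entries = sorted(p.stem for p in Path(subdir).glob("*") if not p.name.startswith("_"))
    print(subdir, "->", ", ".join(entries))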

Please feel free to submit pull requests to modify an existing PPL implementation or to add a new PPL or model.

Join the PPL Bench community

For more information about PPL Bench, refer to

  1. Blog post: link
  2. Paper: link
  3. Website: link

See the CONTRIBUTING.md file for how to help out.

License

This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].