ChrisCummins / paper-synthesizing-benchmarks

License: GPL-3.0
📝 "Synthesizing Benchmarks for Predictive Modeling" (🥇 CGO'17 Best Paper)

Programming Languages

Jupyter Notebook, TeX, Python

Projects that are alternatives to, or similar to, paper-synthesizing-benchmarks

AdversarialAudioSeparation
Code accompanying the paper "Semi-supervised adversarial audio source separation applied to singing voice extraction"
Stars: ✭ 70 (+233.33%)
Mutual labels:  paper
influence boosting
Supporting code for the paper "Finding Influential Training Samples for Gradient Boosted Decision Trees"
Stars: ✭ 57 (+171.43%)
Mutual labels:  paper
ZSL-ADA
Code accompanying the paper "A Generative Framework for Zero Shot Learning with Adversarial Domain Adaptation"
Stars: ✭ 18 (-14.29%)
Mutual labels:  paper
LayeredSceneDecomposition
No description or website provided.
Stars: ✭ 22 (+4.76%)
Mutual labels:  paper
Curriculum-Learning-PaperList-Materials
Curriculum Learning related papers and materials
Stars: ✭ 50 (+138.1%)
Mutual labels:  paper
midi degradation toolkit
A toolkit for generating datasets of MIDI files which have been degraded to be 'un-musical'.
Stars: ✭ 29 (+38.1%)
Mutual labels:  paper
audioContextEncoder
A context encoder for audio inpainting
Stars: ✭ 18 (-14.29%)
Mutual labels:  paper
Object-Detection-Confidence-Bias
Code for "The Box Size Confidence Bias Harms Your Object Detector" (https://arxiv.org/abs/2112.01901)
Stars: ✭ 22 (+4.76%)
Mutual labels:  paper
cool-papers-in-pytorch
Reimplementing cool papers in PyTorch...
Stars: ✭ 21 (+0%)
Mutual labels:  paper
RTRT-Trans-Caustics
A reference implementation of "Rendering transparent objects with caustics using real-time ray tracing" using Unreal Engine 4.25.1.
Stars: ✭ 12 (-42.86%)
Mutual labels:  paper
paper-survey
Summary of machine learning papers
Stars: ✭ 26 (+23.81%)
Mutual labels:  paper
groove2groove
Code for "Groove2Groove: One-Shot Music Style Transfer with Supervision from Synthetic Data"
Stars: ✭ 88 (+319.05%)
Mutual labels:  paper
Awesome-Polarization
List of awesome papers on Polarization Imaging
Stars: ✭ 31 (+47.62%)
Mutual labels:  paper
msla2014
wherein I implement several substructural logics in Agda
Stars: ✭ 24 (+14.29%)
Mutual labels:  paper
adage
Data and code related to the paper "ADAGE-Based Integration of Publicly Available Pseudomonas aeruginosa..." (Jie Tan et al., mSystems, 2016)
Stars: ✭ 61 (+190.48%)
Mutual labels:  paper
Cross-View-Gait-Based-Human-Identification-with-Deep-CNNs
Code for the 2016 TPAMI (IEEE Transactions on Pattern Analysis and Machine Intelligence) paper "A Comprehensive Study on Cross-View Gait Based Human Identification with Deep CNNs"
Stars: ✭ 21 (+0%)
Mutual labels:  paper
neural network papers
Notes on papers I have read, with a personal rating and a brief summary of each paper's insight
Stars: ✭ 152 (+623.81%)
Mutual labels:  paper
gemnet pytorch
GemNet model in PyTorch, as proposed in "GemNet: Universal Directional Graph Neural Networks for Molecules" (NeurIPS 2021)
Stars: ✭ 80 (+280.95%)
Mutual labels:  paper
TAGCN
Tensorflow Implementation of the paper "Topology Adaptive Graph Convolutional Networks" (Du et al., 2017)
Stars: ✭ 17 (-19.05%)
Mutual labels:  paper
Paper Note
📚 Notes on papers I have read
Stars: ✭ 22 (+4.76%)
Mutual labels:  paper

Synthesizing Benchmarks for Predictive Modeling

Chris Cummins, Pavlos Petoumenos, Zheng Wang, Hugh Leather.

Winner of the CGO'17 Best Paper Award

Abstract

Predictive modeling using machine learning is an effective method for building compiler heuristics, but there is a shortage of benchmarks. Typical machine learning experiments outside of the compilation field train over thousands or millions of examples. In machine learning for compilers, however, there are typically only a few dozen common benchmarks available. This limits the quality of learned models, as they have very sparse training data for what are often high-dimensional feature spaces. What is needed is a way to generate an unbounded number of training programs that finely cover the feature space. At the same time the generated programs must be similar to the types of programs that human developers actually write, otherwise the learning will target the wrong parts of the feature space.

We mine open source repositories for program fragments and apply deep learning techniques to automatically construct models for how humans write programs. We then sample the models to generate an unbounded number of runnable training programs, covering the feature space ever more finely. The quality of the programs is such that even human developers struggle to distinguish our generated programs from hand-written code.
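
A minimal sketch of this pipeline, assuming PyTorch: train a character-level LSTM on a corpus of OpenCL kernels, then sample it one character at a time to emit a new program. This is an illustrative reimplementation under those assumptions, not the actual CLgen code; the two inline kernels stand in for the thousands of fragments mined from open source repositories, and all hyperparameters are placeholders.

import torch
import torch.nn as nn

# Stand-in corpus: the real system mines OpenCL kernels from GitHub.
corpus = "\n\n".join([
    "kernel void A(global float* a, global float* b) {\n"
    "  int i = get_global_id(0);\n"
    "  a[i] = a[i] + b[i];\n"
    "}",
    "kernel void B(global int* a, const int n) {\n"
    "  int i = get_global_id(0);\n"
    "  if (i < n) { a[i] = a[i] * 2; }\n"
    "}",
])

chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}

class CharLSTM(nn.Module):
    """Character-level language model: predicts the next character."""
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.tensor([stoi[c] for c in corpus]).unsqueeze(0)

for step in range(200):  # a real model needs a far larger corpus and budget
    logits, _ = model(data[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, len(chars)), data[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Sampling: seed the model with "kernel void " and draw characters.
seed = "kernel void "
out, state = model(torch.tensor([[stoi[c] for c in seed]]))
generated = seed
for _ in range(200):
    probs = torch.softmax(out[0, -1], dim=-1)
    nxt = torch.multinomial(probs, 1).item()
    generated += itos[nxt]
    out, state = model(torch.tensor([[nxt]]), state)
print(generated)

In the full system described in the paper, sampled kernels are additionally filtered for compilability and paired with automatically generated host code and input payloads, so that only runnable programs enter the training set.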

We use our generator for OpenCL programs, CLgen, to automatically synthesize thousands of programs and show that learning over these improves the performance of a state of the art predictive model by 1.27x. In addition, the fine covering of the feature space automatically exposes weaknesses in the feature design which are invisible with the sparse training examples from existing benchmark suites. Correcting these weaknesses further increases performance by 4.30x.
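
To make the augmentation step concrete, here is a minimal sketch assuming scikit-learn and entirely made-up features and labels. The paper's baseline (from Grewe et al.) is a decision-tree heuristic that maps OpenCL kernels to CPU or GPU from static code features; below, one tree is trained on a benchmark-sized set of 50 examples and another on that set augmented with thousands of synthesized examples. The feature names, the toy oracle, and any printed numbers are illustrative only, not the paper's dataset or results.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def make_features(n):
    # Features in the spirit of the Grewe et al. heuristic: data transfer,
    # compute intensity, memory coalescing. All values here are synthetic.
    transfer = rng.uniform(0, 1, n)    # normalized host<->device bytes
    compute = rng.uniform(0, 1, n)     # compute-to-memory ratio
    coalesced = rng.uniform(0, 1, n)   # fraction of coalesced accesses
    X = np.column_stack([transfer, compute, coalesced])
    # Toy oracle: the GPU wins when compute outweighs transfer cost.
    y = (compute + coalesced > transfer + 0.5).astype(int)
    return X, y

# A few dozen hand-written benchmarks vs. thousands of synthesized kernels.
X_bench, y_bench = make_features(50)
X_syn, y_syn = make_features(5000)
X_test, y_test = make_features(500)   # stands in for unseen real workloads

baseline = DecisionTreeClassifier(max_depth=5, random_state=0)
baseline.fit(X_bench, y_bench)

augmented = DecisionTreeClassifier(max_depth=5, random_state=0)
augmented.fit(np.vstack([X_bench, X_syn]), np.hstack([y_bench, y_syn]))

print("benchmarks only :", baseline.score(X_test, y_test))
print("with synthesized:", augmented.score(X_test, y_test))

The denser the training set covers the feature space, the better the tree's decision boundaries approximate the true optimum; the same dense coverage is what exposes weaknesses in the features themselves.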

Keywords: Synthetic program generation, OpenCL, Benchmarking, Deep Learning, GPUs

@inproceedings{cummins2017a,
  title={Synthesizing Benchmarks for Predictive Modeling},
  author={Cummins, Chris and Petoumenos, Pavlos and Wang, Zheng and Leather, Hugh},
  booktitle={CGO},
  year={2017},
  organization={IEEE}
}

License

The code for this paper (everything in the code directory) is released under the terms of the GPLv3 license; see LICENSE for details. Everything else (i.e. the LaTeX sources and data sets) is unlicensed; please contact Chris Cummins ([email protected]) before using it.
