
hendrycks / pre-training

License: Apache-2.0
Using Pre-Training Can Improve Model Robustness and Uncertainty (ICML 2019)

Programming Languages

python
shell

Projects that are alternatives to or similar to pre-training

adversarial-vision-challenge
NIPS Adversarial Vision Challenge
Stars: ✭ 39 (-56.67%)
Mutual labels:  robustness, adversarial-examples
UQ360
Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty in machine learning model predictions.
Stars: ✭ 211 (+134.44%)
Mutual labels:  uncertainty, calibration
anomaly-seg
The Combined Anomalous Object Segmentation (CAOS) Benchmark
Stars: ✭ 115 (+27.78%)
Mutual labels:  out-of-distribution-detection, ml-safety
spatial-smoothing
(ICML 2022) Official PyTorch implementation of “Blurs Behave Like Ensembles: Spatial Smoothings to Improve Accuracy, Uncertainty, and Robustness”.
Stars: ✭ 68 (-24.44%)
Mutual labels:  uncertainty, robustness
robust-local-lipschitz
A Closer Look at Accuracy vs. Robustness
Stars: ✭ 75 (-16.67%)
Mutual labels:  robustness, adversarial-examples
DUN
Code for "Depth Uncertainty in Neural Networks" (https://arxiv.org/abs/2006.08437)
Stars: ✭ 65 (-27.78%)
Mutual labels:  uncertainty, robustness
uapca
Uncertainty-aware principal component analysis.
Stars: ✭ 16 (-82.22%)
Mutual labels:  uncertainty
MonoRUn
[CVPR'21] MonoRUn: Monocular 3D Object Detection by Reconstruction and Uncertainty Propagation
Stars: ✭ 85 (-5.56%)
Mutual labels:  uncertainty
welleng
A collection of Wells/Drilling Engineering tools, focused on well trajectory planning for the time being.
Stars: ✭ 79 (-12.22%)
Mutual labels:  uncertainty
chemprop
Fast and scalable uncertainty quantification for neural molecular property prediction, accelerated optimization, and guided virtual screening.
Stars: ✭ 75 (-16.67%)
Mutual labels:  uncertainty
out-of-distribution-detection
The Ultimate Reference for Out of Distribution Detection with Deep Neural Networks
Stars: ✭ 117 (+30%)
Mutual labels:  out-of-distribution-detection
torchuq
A library for uncertainty quantification based on PyTorch
Stars: ✭ 88 (-2.22%)
Mutual labels:  uncertainty
survHE
Survival analysis in health economic evaluation. Contains a suite of functions to systematise the workflow involving survival analysis in health economic evaluation. survHE can fit a large range of survival models from both a frequentist approach (by calling the R package flexsurv) and a Bayesian perspective.
Stars: ✭ 32 (-64.44%)
Mutual labels:  uncertainty
ProSelfLC-2021
noisy labels; missing labels; semi-supervised learning; entropy; uncertainty; robustness and generalisation.
Stars: ✭ 45 (-50%)
Mutual labels:  uncertainty
awesome-conformal-prediction
A professionally curated list of awesome Conformal Prediction videos, tutorials, books, papers, PhD and MSc theses, articles and open-source libraries.
Stars: ✭ 998 (+1008.89%)
Mutual labels:  uncertainty
image-segmentation
Mask R-CNN, FPN, LinkNet, PSPNet and UNet with multiple backbone architectures support readily available
Stars: ✭ 62 (-31.11%)
Mutual labels:  pretrained
pytorch-ensembles
Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning, ICLR 2020
Stars: ✭ 196 (+117.78%)
Mutual labels:  uncertainty
DecisionAmbiguityRecognition
A deep learning model that recognizes when people are uncertain
Stars: ✭ 16 (-82.22%)
Mutual labels:  uncertainty
ACSC
Automatic Calibration for Non-repetitive Scanning Solid-State LiDAR and Camera Systems
Stars: ✭ 210 (+133.33%)
Mutual labels:  calibration
sandy
Sampling nuclear data and uncertainty
Stars: ✭ 30 (-66.67%)
Mutual labels:  uncertainty

Using Pre-Training Can Improve Model Robustness and Uncertainty

This repository contains the essential code for the paper Using Pre-Training Can Improve Model Robustness and Uncertainty, ICML 2019.

Requires Python 3+ and PyTorch 0.4.1+.

Abstract

Kaiming He et al. (2018) have called into question the utility of pre-training by showing that training from scratch can often match its performance, provided the model trains for long enough. We show that although pre-training may not improve performance on traditional classification metrics, it provides large benefits to model robustness and uncertainty estimation. With pre-training, we show approximately a 30% relative improvement in label noise robustness and a 10% absolute improvement in adversarial robustness on CIFAR-10 and CIFAR-100. Pre-training also improves model calibration. In some cases, using pre-training without task-specific methods surpasses the state-of-the-art, highlighting the importance of pre-training when evaluating future methods on robustness and uncertainty tasks.
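
To make the recipe concrete, below is a minimal sketch of the pre-train-then-fine-tune workflow in PyTorch. It is illustrative only, not this repository's training code: the pretrained ResNet-18, the CIFAR-10 target task, and all hyperparameters are assumptions for the example, not necessarily the architecture or pre-training corpus used in the paper.

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

device = "cuda" if torch.cuda.is_available() else "cpu"

# Start from ImageNet-pretrained weights instead of a random initialization,
# then replace the classification head for the 10 CIFAR-10 classes.
model = torchvision.models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 10)
model = model.to(device)

transform = T.Compose([
    T.Resize(224),  # ResNet-18 expects ImageNet-sized inputs
    T.ToTensor(),
    T.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])
train_set = torchvision.datasets.CIFAR10("./data", train=True,
                                         download=True, transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=5e-4)
criterion = nn.CrossEntropyLoss()

# Standard fine-tuning loop: all layers are updated, not just the new head.
model.train()
for epoch in range(10):
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()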
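The calibration claim can be quantified with expected calibration error (ECE), a standard metric for confidence calibration. The sketch below is a generic ECE implementation for illustration, not code from this repository:

import torch

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    # ECE = sum over bins of (fraction of samples in bin) * |accuracy - mean confidence|
    bin_edges = torch.linspace(0, 1, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        prop = in_bin.float().mean()
        if prop > 0:
            accuracy = (predictions[in_bin] == labels[in_bin]).float().mean()
            avg_conf = confidences[in_bin].mean()
            ece += (accuracy - avg_conf).abs() * prop
    return float(ece)

# Example usage with stand-in outputs (replace with real model predictions):
logits = torch.randn(1000, 10)          # hypothetical model logits
labels = torch.randint(0, 10, (1000,))  # hypothetical true labels
probs = torch.softmax(logits, dim=1)
confidences, predictions = probs.max(dim=1)
print(expected_calibration_error(confidences, predictions, labels))

A well-calibrated model has low ECE: its average confidence in each bin tracks its actual accuracy there.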

Citation

If you find this useful in your research, please consider citing:

@article{hendrycks2019pretraining,
  title={Using Pre-Training Can Improve Model Robustness and Uncertainty},
  author={Hendrycks, Dan and Lee, Kimin and Mazeika, Mantas},
  journal={Proceedings of the International Conference on Machine Learning},
  year={2019}
}