Winning Solution of the NeurIPS 2020 Competition on Predicting Generalization in Deep Learning

We present various complexity measures that may be predictive of generalization in deep learning. The intention is to create simple, intuitive measures that can be applied post hoc to any trained model to obtain a relative estimate of its generalization ability.
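As a minimal, hypothetical sketch of what "post hoc" means here: a complexity measure is just a function of a trained model's predictions and its training data that returns a single scalar. The names and the toy robustness measure below are illustrative only; in the PGDL starting kit the real entry point lives in complexity.py.

```python
# Hypothetical sketch of the post-hoc contract: a complexity measure maps
# (trained model, training data) -> one scalar. The measure below (error on
# noise-perturbed training inputs) is a crude robustness proxy for
# illustration, not one of the competition measures.
import numpy as np

def complexity(predict, x_train, y_train, noise=0.1, seed=0):
    """Return the error rate on noise-perturbed training inputs."""
    rng = np.random.default_rng(seed)
    x_noisy = x_train + rng.normal(0.0, noise, x_train.shape)
    return float(np.mean(predict(x_noisy) != y_train))

# Toy usage: a linearly separable problem with a threshold "model".
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, (50, 4)), rng.normal(2, 0.5, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
predict = lambda z: (z.mean(axis=1) > 0).astype(int)
c = complexity(predict, x, y)
```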

Our solutions, based on consistency, robustness, and separability of representations, achieved the highest (22.92) and second-highest (13.93) scores in the final phase of the NeurIPS Competition on Predicting Generalization in Deep Learning. We competed as Team Interpex on the leaderboard.

Detailed descriptions of our solution can be found in our paper:

@misc{natekar2020representation,
      title={Representation Based Complexity Measures for Predicting Generalization in Deep Learning}, 
      author={Parth Natekar and Manik Sharma},
      year={2020},
      eprint={2012.02775},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Our solution is based on the quality of internal representations of deep neural networks, inspired by neuroscientific theories on how the human visual system creates invariant object representations.

To run our solution on the public task of the PGDL Competition, clone this repository to the sample_code_submission folder of the PGDL directory, then run python3 ../ingestion_program/ingestion.py from this folder.

Available Measures

The following complexity measures are currently available:

  1. Davies-Bouldin Index
  2. Mixup Performance
  3. Perturbed Margin
  4. Manifold Mixup Performance
  5. Frobenius/Spectral Norm

The following measures are in the pipeline: FisherRegret, Silhouette Coefficient, Ablation Performance, PAC-Bayes, and Noise Attenuation.
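To make the first measure concrete, here is a minimal sketch of computing a Davies-Bouldin index over class-grouped features, using scikit-learn's davies_bouldin_score. The synthetic features stand in for intermediate-layer activations of a trained network; this is an illustration of the idea, not the competition implementation.

```python
# Sketch: Davies-Bouldin index as a post-hoc complexity measure.
# In the real setting, `features` would be intermediate-layer activations of
# a trained model and `labels` the class labels; well-generalizing models
# tend to produce tighter, better-separated class clusters (lower index).
import numpy as np
from sklearn.metrics import davies_bouldin_score

def davies_bouldin_complexity(features, labels):
    """Lower value = tighter, better-separated class clusters."""
    return davies_bouldin_score(features, labels)

rng = np.random.default_rng(0)
# Two well-separated synthetic "class clusters" in a 16-d feature space.
feats = np.concatenate([rng.normal(0, 1, (100, 16)),
                        rng.normal(8, 1, (100, 16))])
labels = np.array([0] * 100 + [1] * 100)
score = davies_bouldin_complexity(feats, labels)  # small: clusters separate
```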

Results

Scores of our final measures on various tasks of PGDL are as follows:

| Measure | CIFAR-10 | SVHN | CINIC-10 | CINIC-10 (No BatchNorm) | Oxford Flowers | Oxford Pets | Fashion-MNIST | CIFAR-10 (With Augmentations) |
|---|---|---|---|---|---|---|---|---|
| Davies-Bouldin * Label-Wise Mixup | 25.22 | 22.19 | 31.79 | 15.92 | 43.99 | 12.59 | 9.24 | 25.86 |
| Mixup Margin | 1.11 | 47.33 | 43.22 | 34.57 | 11.46 | 21.98 | 1.48 | 20.78 |
| Augment Margin | 15.66 | 48.34 | 47.22 | 22.82 | 8.67 | 11.97 | 1.28 | 15.25 |

Currently, the code works only within the PGDL starting-kit framework, available at https://competitions.codalab.org/competitions/25301#learn_the_details-get_starting_kit. You will need to manually select the desired measure in the complexity.py file and then run python3 ../ingestion_program/ingestion.py as described above.
