
abhiskk / Ladder

License: GPL-3.0
Implementation of Ladder Network in PyTorch.

Programming Languages

python

Projects that are alternatives of or similar to Ladder

SHOT-plus
code for our TPAMI 2021 paper "Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer"
Stars: ✭ 46 (+24.32%)
Mutual labels:  semi-supervised-learning
Mixmatch Pytorch
Code for "MixMatch - A Holistic Approach to Semi-Supervised Learning"
Stars: ✭ 378 (+921.62%)
Mutual labels:  semi-supervised-learning
Ganomaly
GANomaly: Semi-Supervised Anomaly Detection via Adversarial Training
Stars: ✭ 563 (+1421.62%)
Mutual labels:  semi-supervised-learning
L2c
Learning to Cluster. A deep clustering strategy.
Stars: ✭ 262 (+608.11%)
Mutual labels:  semi-supervised-learning
Ssl4mis
Semi Supervised Learning for Medical Image Segmentation, a collection of literature reviews and code implementations.
Stars: ✭ 336 (+808.11%)
Mutual labels:  semi-supervised-learning
Stn Ocr
Code for the paper STN-OCR: A single Neural Network for Text Detection and Text Recognition
Stars: ✭ 473 (+1178.38%)
Mutual labels:  semi-supervised-learning
DST-CBC
Implementation of our paper "DMT: Dynamic Mutual Training for Semi-Supervised Learning"
Stars: ✭ 98 (+164.86%)
Mutual labels:  semi-supervised-learning
Awesome Federated Learning
Federated Learning Library: https://fedml.ai
Stars: ✭ 624 (+1586.49%)
Mutual labels:  semi-supervised-learning
Imbalanced Semi Self
[NeurIPS 2020] Semi-Supervision (Unlabeled Data) & Self-Supervision Improve Class-Imbalanced / Long-Tailed Learning
Stars: ✭ 379 (+924.32%)
Mutual labels:  semi-supervised-learning
See
Code for the AAAI 2018 publication "SEE: Towards Semi-Supervised End-to-End Scene Text Recognition"
Stars: ✭ 545 (+1372.97%)
Mutual labels:  semi-supervised-learning
Fixmatch Pytorch
Unofficial PyTorch implementation of "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence"
Stars: ✭ 259 (+600%)
Mutual labels:  semi-supervised-learning
Tape
Tasks Assessing Protein Embeddings (TAPE), a set of five biologically relevant semi-supervised learning tasks spread across different domains of protein biology.
Stars: ✭ 295 (+697.3%)
Mutual labels:  semi-supervised-learning
Ssgan Tensorflow
A Tensorflow implementation of Semi-supervised Learning Generative Adversarial Networks (NIPS 2016: Improved Techniques for Training GANs).
Stars: ✭ 496 (+1240.54%)
Mutual labels:  semi-supervised-learning
HyperGBM
A full pipeline AutoML tool for tabular data
Stars: ✭ 172 (+364.86%)
Mutual labels:  semi-supervised-learning
Alibi Detect
Algorithms for outlier and adversarial instance detection, concept drift and metrics.
Stars: ✭ 604 (+1532.43%)
Mutual labels:  semi-supervised-learning
DiGCN
Implementation of DiGCN (NeurIPS 2020)
Stars: ✭ 25 (-32.43%)
Mutual labels:  semi-supervised-learning
Advsemiseg
Adversarial Learning for Semi-supervised Semantic Segmentation, BMVC 2018
Stars: ✭ 382 (+932.43%)
Mutual labels:  semi-supervised-learning
Gans In Action
Companion repository to GANs in Action: Deep learning with Generative Adversarial Networks
Stars: ✭ 748 (+1921.62%)
Mutual labels:  semi-supervised-learning
Semi Supervised Pytorch
Implementations of various VAE-based semi-supervised and generative models in PyTorch
Stars: ✭ 619 (+1572.97%)
Mutual labels:  semi-supervised-learning
Awesome Semi Supervised Learning
📜 An up-to-date & curated list of awesome semi-supervised learning papers, methods & resources.
Stars: ✭ 538 (+1354.05%)
Mutual labels:  semi-supervised-learning

ladder

Implementation of Ladder Network and Stacked Denoising Autoencoder in PyTorch.

Requirements
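The original README leaves this section empty; a plausible minimal dependency list (an assumption inferred from the PyTorch implementation and the MNIST data script, not taken from the repo) would be:

```
torch
torchvision
```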

Training the ladder network

  1. Run python utils/mnist_data.py to create the MNIST dataset.

  2. Run the following command to train the ladder network:

  • python ladder/ladder.py --batch 100 --epochs 20 --noise_std 0.2 --data_dir data

Status: The unsupervised loss starts at a high value, so the network overfits to the unsupervised objective and supervised performance suffers. Current best accuracy on the MNIST validation set, using 3,000 labelled and 47,000 unlabelled examples: 98.33%.
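The combined objective the status note refers to pairs a supervised cross-entropy term with a per-layer unsupervised denoising cost. A minimal sketch of that idea in PyTorch follows; the class name, layer sizes, noise injection scheme, and the 0.1 weight are all illustrative assumptions, not the repo's actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyLadder(nn.Module):
    """Illustrative ladder: noisy and clean encoder paths, plus a top-down
    decoder whose reconstructions are matched to the clean activations
    (the per-layer denoising cost)."""

    def __init__(self, sizes=(784, 256, 10), noise_std=0.2):
        super().__init__()
        self.noise_std = noise_std
        self.encoders = nn.ModuleList(
            nn.Linear(a, b) for a, b in zip(sizes[:-1], sizes[1:]))
        # One decoder per encoder, mapping each layer back down.
        self.decoders = nn.ModuleList(
            nn.Linear(b, a) for a, b in zip(sizes[:-1], sizes[1:]))

    def encode(self, x, noisy):
        h = x + self.noise_std * torch.randn_like(x) if noisy else x
        acts = [h]
        for i, enc in enumerate(self.encoders):
            h = enc(h)
            if i < len(self.encoders) - 1:
                h = F.relu(h)
            if noisy:
                h = h + self.noise_std * torch.randn_like(h)
            acts.append(h)
        return acts

    def forward(self, x):
        noisy_acts = self.encode(x, noisy=True)
        clean_acts = self.encode(x, noisy=False)
        # Top-down pass: denoise each noisy layer toward its clean counterpart.
        recon_loss = x.new_zeros(())
        h = noisy_acts[-1]
        for dec, target in zip(reversed(self.decoders),
                               reversed(clean_acts[:-1])):
            h = dec(h)
            recon_loss = recon_loss + F.mse_loss(h, target.detach())
        # Logits come from the noisy path; recon_loss is the unsupervised cost.
        return noisy_acts[-1], recon_loss

torch.manual_seed(0)
model = TinyLadder(noise_std=0.2)
x = torch.randn(4, 784)          # a batch of flattened "images"
y = torch.tensor([0, 1, 2, 3])   # labels for the small supervised subset
logits, unsup_loss = model(x)
# Total cost: supervised cross-entropy + weighted denoising cost.
loss = F.cross_entropy(logits, y) + 0.1 * unsup_loss
loss.backward()
```

If the unsupervised term dominates early (as the status note describes), a common mitigation is to anneal its weight over epochs rather than using a fixed coefficient.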
