guglielmocamporese / cvaecaposr

Licence: other
Code for the Paper: "Conditional Variational Capsule Network for Open Set Recognition", Y. Guo, G. Camporese, W. Yang, A. Sperduti, L. Ballan, arXiv:2104.09159, 2021.

Programming Languages

  • python
  • shell

Projects that are alternatives of or similar to cvaecaposr

Continual Learning Data Former
A PyTorch-compatible data loader to create sequences of tasks for Continual Learning
Stars: ✭ 32 (+10.34%)
Mutual labels:  incremental-learning
Generative Continual Learning
No description or website provided.
Stars: ✭ 51 (+75.86%)
Mutual labels:  incremental-learning
Kapsul-Aglari-ile-Isaret-Dili-Tanima
Recognition of Sign Language using Capsule Networks
Stars: ✭ 42 (+44.83%)
Mutual labels:  capsule-networks
River
🌊 Online machine learning in Python
Stars: ✭ 2,980 (+10175.86%)
Mutual labels:  incremental-learning
alphaGAN
A PyTorch implementation of alpha-GAN
Stars: ✭ 15 (-48.28%)
Mutual labels:  variational-autoencoders
Adam-NSCL
PyTorch implementation of our Adam-NSCL algorithm from our CVPR2021 (oral) paper "Training Networks in Null Space for Continual Learning"
Stars: ✭ 34 (+17.24%)
Mutual labels:  incremental-learning
Pytorch Vae
A Collection of Variational Autoencoders (VAE) in PyTorch.
Stars: ✭ 2,704 (+9224.14%)
Mutual labels:  variational-autoencoders
TensorMONK
A collection of deep learning models (PyTorch implementation)
Stars: ✭ 21 (-27.59%)
Mutual labels:  capsule-networks
incremental training
Repo that relates to the Medium blog 'Keeping your ML model in shape with Kafka, Airflow and MLFlow'
Stars: ✭ 110 (+279.31%)
Mutual labels:  incremental-learning
GDPP
Generator loss to reduce mode collapse and to improve the quality of generated samples.
Stars: ✭ 32 (+10.34%)
Mutual labels:  variational-autoencoders
FACIL
Framework for Analysis of Class-Incremental Learning with 12 state-of-the-art methods and 3 baselines.
Stars: ✭ 411 (+1317.24%)
Mutual labels:  incremental-learning
tornado
The Tornado 🌪️ framework, designed and implemented for adaptive online learning and data stream mining in Python.
Stars: ✭ 110 (+279.31%)
Mutual labels:  incremental-learning
glcapsnet
Global-Local Capsule Network (GLCapsNet) is a capsule-based architecture able to provide context-based eye fixation prediction for several autonomous driving scenarios, while offering interpretability both globally and locally.
Stars: ✭ 33 (+13.79%)
Mutual labels:  capsule-networks
Train plus plus
Repo and code of the IEEE UIC paper: Train++: An Incremental ML Model Training Algorithm to Create Self-Learning IoT Devices
Stars: ✭ 17 (-41.38%)
Mutual labels:  incremental-learning
ML-MCU
Code for the IoT Journal paper titled 'ML-MCU: A Framework to Train ML Classifiers on MCU-based IoT Edge Devices'
Stars: ✭ 28 (-3.45%)
Mutual labels:  incremental-learning
cvpr clvision challenge
CVPR 2020 Continual Learning Challenge - Submit your CL algorithm today!
Stars: ✭ 57 (+96.55%)
Mutual labels:  incremental-learning
FUSION
PyTorch code for NeurIPSW 2020 paper (4th Workshop on Meta-Learning) "Few-Shot Unsupervised Continual Learning through Meta-Examples"
Stars: ✭ 18 (-37.93%)
Mutual labels:  incremental-learning
CVPR21 PASS
PyTorch implementation of our CVPR2021 (oral) paper "Prototype Augmentation and Self-Supervision for Incremental Learning"
Stars: ✭ 55 (+89.66%)
Mutual labels:  incremental-learning
continuous Bernoulli
C programs for the simulator, transformation, and test statistic of the continuous Bernoulli distribution; it also covers the continuous Binomial and continuous Trinomial distributions.
Stars: ✭ 22 (-24.14%)
Mutual labels:  variational-autoencoders
SegCaps
A clone of the original SegCaps source code, with enhancements for the MS COCO dataset.
Stars: ✭ 62 (+113.79%)
Mutual labels:  capsule-networks

Conditional Variational Capsule Network for Open Set Recognition

This repository hosts the official code related to "Conditional Variational Capsule Network for Open Set Recognition", Y. Guo, G. Camporese, W. Yang, A. Sperduti, L. Ballan, arXiv:2104.09159, 2021. [Download]

If you use the code/models hosted in this repository, please cite the following paper and give a star to the repo:

@misc{guo2021conditional,
      title={Conditional Variational Capsule Network for Open Set Recognition}, 
      author={Yunrui Guo and Guglielmo Camporese and Wenjing Yang and Alessandro Sperduti and Lamberto Ballan},
      year={2021},
      eprint={2104.09159},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Updates

  • [2021/04/09] - The code is online.
  • [2021/07/22] - The paper has been accepted to ICCV-2021!

Install

First clone the repo; all the commands that follow should be run inside the main project folder, cvaecaposr:

# Clone the repo
$ git clone https://github.com/guglielmocamporese/cvaecaposr.git

# Go to the project directory
$ cd cvaecaposr

To run the code you need to have conda installed (version >= 4.9.2).

Furthermore, all the requirements for running the code are specified in the environment.yaml file and can be installed with:

# Install the conda env
$ conda env create --file environment.yaml

# Activate the conda env
$ conda activate cvaecaposr

Dataset Splits

You can find the dataset splits for all the datasets we used (MNIST, SVHN, CIFAR10, CIFAR+10, CIFAR+50, and TinyImageNet) in the splits.py file.

When you run the code, the datasets are automatically downloaded into the ./data folder, and the split used is selected with the --split_num argument passed to main.py (more on how to run the code in the Experiments section below).
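
For intuition, here is a minimal, purely illustrative sketch of how a dataset can be auto-downloaded into ./data and partitioned into known and unknown classes. The known/unknown class sets below are made up for illustration and are not the splits defined in splits.py:

from torchvision import datasets, transforms

# Illustrative only: the repository's actual data pipeline and splits
# are defined in main.py and splits.py.
data = datasets.MNIST(
    root="./data",        # datasets are auto-downloaded here
    train=False,
    download=True,
    transform=transforms.ToTensor(),
)

known = {0, 1, 2, 3, 4, 5}          # hypothetical "known" classes
unknown = set(range(10)) - known    # the rest are treated as open-set

known_idx = [i for i, y in enumerate(data.targets.tolist()) if y in known]
print(f"{len(known_idx)} known-class test samples")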

Model Checkpoints

You can download the model checkpoints using the download_checkpoints.sh script in the scripts folder by running:

# Extend script permissions
$ chmod +x ./scripts/download_checkpoints.sh

# Download model checkpoints
$ ./scripts/download_checkpoints.sh

After the download you will find the model checkpoints in the ./checkpoints folder:

  • ./checkpoints/mnist.ckpt
  • ./checkpoints/svhn.ckpt
  • ./checkpoints/cifar10.ckpt
  • ./checkpoints/cifar+10.ckpt
  • ./checkpoints/cifar+50.ckpt
  • ./checkpoints/tiny_imagenet.ckpt

The size of each checkpoint file is between ~370 MB and ~670 MB.
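
If you want to inspect a checkpoint before using it, here is a quick sketch, assuming the .ckpt files are standard PyTorch Lightning checkpoints (dictionaries holding a state_dict among other fields):

import torch

# Load on CPU just to inspect the contents; for training/testing, pass
# the path via the --checkpoint argument of main.py instead.
ckpt = torch.load("./checkpoints/mnist.ckpt", map_location="cpu")
print(list(ckpt.keys()))                       # e.g. ['state_dict', 'epoch', ...]
print(len(ckpt["state_dict"]), "tensors in the state dict")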

Experiments

All the experiments were run on a GeForce RTX 2080 Ti GPU (11 GB of memory).

Training requires ~7300 MiB of GPU memory, whereas testing requires ~5000 MiB.
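
Before launching a run, you can check that your GPU has enough memory with a few lines of PyTorch:

import torch

# Prints the total memory of the first visible CUDA device.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 2**20:.0f} MiB total")
else:
    print("No CUDA device available")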

Train

The CVAECapOSR model can be trained using the main.py program. Here we report an example of a training command for the mnist experiment:

# Train
$ python main.py \
      --data_base_path "./data" \
      --dataset "mnist" \
      --val_ratio 0.2 \
      --seed 1234 \
      --batch_size 32 \
      --split_num 0 \
      --z_dim 128 \
      --lr 5e-5 \
      --t_mu_shift 10.0 \
      --t_var_scale 0.1 \
      --alpha 1.0 \
      --beta 0.01 \
      --margin 10.0 \
      --in_dim_caps 16 \
      --out_dim_caps 32 \
      --checkpoint "" \
      --epochs 100 \
      --mode "train"
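
Flags such as --alpha, --beta, and --margin weight the different terms of the training objective (see the paper for the exact formulation). As a purely illustrative sketch, and not the repository's actual loss, a VAE-style objective with such weights looks like:

import torch
import torch.nn.functional as F

def toy_objective(x, x_rec, mu, logvar, alpha=1.0, beta=0.01):
    """Illustrative only: the actual CVAECapOSR objective (conditional
    priors controlled by --t_mu_shift/--t_var_scale, a margin term,
    capsule layers) is defined in the repository and the paper."""
    rec = F.mse_loss(x_rec, x)                                     # reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL to N(0, I)
    return alpha * rec + beta * kl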

For simplicity we provide all the training scripts for the different datasets in the scripts folder. Specifically, you will find:

  • train_mnist.sh
  • train_svhn.sh
  • train_cifar10.sh
  • train_cifar+10.sh
  • train_cifar+50.sh
  • train_tinyimagenet.sh

that you can run as follows:

# Extend script permissions
$ chmod +x ./scripts/train_{dataset}.sh # replace {dataset} with a dataset name

# Run training
$ ./scripts/train_{dataset}.sh # replace {dataset} with a dataset name

All the temporary files of the training stage (model checkpoints, tensorboard metrics, ...) are created under ./tmp/{dataset}/version_{version_number}/, where dataset is specified in the train_{dataset}.sh script and version_number is an integer that is tracked and incremented automatically so that training logs are never overwritten (each training run writes to its own version folder).
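
As a sketch of the versioning idea (the actual bookkeeping is handled internally, e.g. by PyTorch Lightning's loggers), a fresh version number can be computed like this:

import os
import re

def next_version(log_dir):
    """Hypothetical helper: pick the next free version number so that
    new runs never overwrite existing logs."""
    if not os.path.isdir(log_dir):
        return 0
    versions = []
    for name in os.listdir(log_dir):
        m = re.fullmatch(r"version_(\d+)", name)
        if m:
            versions.append(int(m.group(1)))
    return max(versions, default=-1) + 1

print(next_version("./tmp/mnist"))  # 0 on a fresh run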

Test

The CVAECapOSR model can be tested using the main.py program. Here we report an example of a test command for the mnist experiment:

# Test
$ python main.py \
      --data_base_path "./data" \
      --dataset "mnist" \
      --val_ratio 0.2 \
      --seed 1234 \
      --batch_size 32 \
      --split_num 0 \
      --z_dim 128 \
      --lr 5e-5 \
      --t_mu_shift 10.0 \
      --t_var_scale 0.1 \
      --alpha 1.0 \
      --beta 0.01 \
      --margin 10.0 \
      --in_dim_caps 16 \
      --out_dim_caps 32 \
      --checkpoint "checkpoints/mnist.ckpt" \
      --mode "test"
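
Open set recognition performance is commonly summarized with metrics such as AUROC over known-vs-unknown detection scores. The repository's evaluation runs inside main.py; the toy example below only illustrates the metric itself, on made-up scores, using scikit-learn:

import numpy as np
from sklearn.metrics import roc_auc_score

# Toy data: y_true marks whether a test sample is unknown (1) or known (0);
# scores are hypothetical per-sample "unknownness" scores from a model.
y_true = np.array([0, 0, 0, 1, 1, 1])
scores = np.array([0.10, 0.35, 0.20, 0.80, 0.55, 0.90])
print(f"AUROC: {roc_auc_score(y_true, scores):.3f}")  # 1.000 for this toy example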

For simplicity we provide all the test scripts for the different datasets in the scripts folder. Specifically, you will find:

  • test_mnist.sh
  • test_svhn.sh
  • test_cifar10.sh
  • test_cifar+10.sh
  • test_cifar+50.sh
  • test_tinyimagenet.sh

that you can run as follows:

# Extend script permissions
$ chmod +x ./scripts/test_{dataset}.sh # replace {dataset} with a dataset name

# Run test
$ ./scripts/test_{dataset}.sh # replace {dataset} with a dataset name

Model Reconstruction

Here we report the reconstructions of some test samples produced by the model after training.

MNIST
[reconstruction samples]
SVHN
[reconstruction samples]
CIFAR10
[reconstruction samples]
TinyImageNet
[reconstruction samples]