CortexNet

This repo contains the PyTorch implementation of CortexNet.
Check the project website for further information.

Project structure

The project consists of the following folders and files:

  • data/: contains Bash scripts and a Python class definition for video data loading;
  • image-pretraining/: hosts the code for pre-training TempoNet's discriminative branch;
  • model/: stores several network architectures, including PredNet, an additive feedback Model01, and a modulatory feedback Model02 (CortexNet);
  • notebook/: collection of Jupyter Notebooks for data exploration and results visualisation;
  • utils/: scripts for
    • (current or former) training error plotting,
    • experiments diff,
    • multi-node synchronisation,
    • generative predictions visualisation,
    • network architecture graphing;
  • results@: symbolic link to the location where experimental results are saved, within 3-digit folders;
  • new_experiment.sh*: creates a new experiment folder, updates last@, prints a memo about the last used settings;
  • last@: symbolic link pointing to the newest results sub-directory, created by new_experiment.sh;
  • main.py: training script for CortexNet in MatchNet or TempoNet configuration;

Dependencies

  • skvideo: pip install sk-video
  • tqdm (progress bar):

conda config --add channels conda-forge
conda update --all
conda install tqdm
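As a quick sanity check, the stdlib-only snippet below reports whether the two dependencies resolve in the current environment; it only probes for the modules instead of importing them, so a missing package is reported rather than raising an error:

```python
import importlib.util

# Probe for the two optional dependencies without importing them.
for name in ("skvideo", "tqdm"):
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'ok' if found else 'missing'}")
```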

IDE

This project has been realised with PyCharm by JetBrains and the Vim editor. Grip has also been fundamental for crafting decent documentation locally.

Initialise environment

Once you've determined where you'd like to save your experimental results — let's call this directory <my saving location> — run the following commands from the project's root directory:

ln -s <my saving location> results  # replace <my saving location>
mkdir results/000 && touch results/000/train.log  # init. placeholder
ln -s results/000 last  # create pointer to the most recent result
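For reference, the three shell commands above can be sketched in Python using only the standard library; my_saving_location is a hypothetical stand-in for <my saving location>, and everything happens inside a scratch directory:

```python
import os
import tempfile
from pathlib import Path

# Work in a scratch directory so the sketch has no side effects.
root = Path(tempfile.mkdtemp())
saving_location = root / "my_saving_location"  # hypothetical <my saving location>
saving_location.mkdir()
os.chdir(root)

os.symlink(saving_location, "results")  # ln -s <my saving location> results
Path("results/000").mkdir()             # mkdir results/000
Path("results/000/train.log").touch()   # touch results/000/train.log
os.symlink("results/000", "last")       # ln -s results/000 last

print(Path("last/train.log").exists())  # True
```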

Setup new experiment

Ready to run your first experiment? Type the following:

./new_experiment.sh

GPU selection

Say your machine has N GPUs. You can use any one of them by specifying its index n = 0, ..., N-1: prepend CUDA_VISIBLE_DEVICES=n to the python ... commands in the following sections.
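The same selection can be made from inside a script by setting the variable before any CUDA-aware library (such as PyTorch) is imported; once CUDA has been initialised, later changes are ignored:

```python
import os

# Expose only GPU index 1 to this process. This must happen *before*
# importing torch (or any other library that initialises CUDA).
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
# import torch  # torch.cuda.device_count() would now report at most 1
print(os.environ["CUDA_VISIBLE_DEVICES"])  # 1
```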

Train MatchNet

  • Download e-VDS35 (e.g. e-VDS35-May17.tar) from here.
  • Use data/resize_and_split.sh to prepare your (video) data for training. It resizes the videos found in a folder of folders (i.e. a directory of classes) and can split them into a training and a validation set. It may also skip short videos and trim long ones. Check data/README.md for more details.
  • Run the main.py script to start training. Use -h to print the command line interface (CLI) arguments help.
python -u main.py --mode MatchNet <CLI arguments> | tee last/train.log
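The trailing | tee last/train.log both prints the training output to the console and saves a copy to the log file. Its effect can be sketched with the standard library, where a stand-in child process replaces main.py:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

# Stand-in for `python -u main.py ...`; it just emits one line of output.
cmd = [sys.executable, "-c", "print('epoch 0: loss 1.234')"]
log_path = Path(tempfile.mkdtemp()) / "train.log"

# Emulate `| tee`: copy each stdout line to the console and to the log.
with open(log_path, "w") as log, subprocess.Popen(
    cmd, stdout=subprocess.PIPE, text=True
) as proc:
    for line in proc.stdout:
        sys.stdout.write(line)  # console copy
        log.write(line)         # persistent copy

print(log_path.read_text().strip())  # epoch 0: loss 1.234
```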

Train TempoNet

  • Download e-VDS35 (e.g. e-VDS35-May17.tar) from here.
  • Pre-train the forward branch (see image-pretraining/) on an image data set (e.g. 33-image-set.tar from here);
  • Use data/resize_and_sample.sh to prepare your (video) data for training. It resizes the videos found in a folder of folders (i.e. a directory of classes) and samples them; the videos are then distributed across a training and a validation set. It may also skip short videos and trim long ones. Check data/README.md for more details.
  • Run the main.py script to start training. Use -h to print the CLI arguments help.
python -u main.py --mode TempoNet --pre-trained <path> <CLI args> | tee last/train.log

