
erogol / resnet.torch

Licence: other
An updated version of fb.resnet.torch with many changes.

Programming Languages

Jupyter Notebook
11667 projects
Lua
6591 projects
Shell
77523 projects

Projects that are alternatives of or similar to resnet.torch

neuralBlack
A Multi-Class Brain Tumor Classifier using Convolutional Neural Network with 99% Accuracy achieved by applying the method of Transfer Learning using Python and Pytorch Deep Learning Framework
Stars: ✭ 36 (+2.86%)
Mutual labels:  torch, resnet
Vqa.pytorch
Visual Question Answering in Pytorch
Stars: ✭ 602 (+1620%)
Mutual labels:  torch, resnet
Look4Face
Demo of Face Recognition web service
Stars: ✭ 23 (-34.29%)
Mutual labels:  resnet
resnet-ensemble
Ensemble code for Resnet in Tensorflow slim
Stars: ✭ 14 (-60%)
Mutual labels:  resnet
resnet-cifar10
ResNet for Cifar10
Stars: ✭ 21 (-40%)
Mutual labels:  resnet
Cross-View-Gait-Based-Human-Identification-with-Deep-CNNs
Code for 2016 TPAMI(IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE) A Comprehensive Study on Cross-View Gait Based Human Identification with Deep CNNs
Stars: ✭ 21 (-40%)
Mutual labels:  torch
pytest-pytorch
pytest plugin for a better developer experience when working with the PyTorch test suite
Stars: ✭ 36 (+2.86%)
Mutual labels:  torch
WassersteinGAN.torch
Torch implementation of Wasserstein GAN https://arxiv.org/abs/1701.07875
Stars: ✭ 48 (+37.14%)
Mutual labels:  torch
CBAM-tensorflow-slim
CBAM implementation on TensorFlow Slim
Stars: ✭ 104 (+197.14%)
Mutual labels:  resnet
Deep-Compression.Pytorch
Unofficial Pytorch implementation of Deep Compression in CIFAR10
Stars: ✭ 29 (-17.14%)
Mutual labels:  resnet
neural-dream
PyTorch implementation of DeepDream algorithm
Stars: ✭ 110 (+214.29%)
Mutual labels:  resnet
DeepCD
[ICCV17] DeepCD: Learning Deep Complementary Descriptors for Patch Representations
Stars: ✭ 39 (+11.43%)
Mutual labels:  torch
TrackNet-Badminton-Tracking-tensorflow2
TrackNet for badminton tracking using tensorflow2
Stars: ✭ 37 (+5.71%)
Mutual labels:  resnet
ModelZoo.pytorch
Hands on Imagenet training. Unofficial ModelZoo project on Pytorch. MobileNetV3 Top1 75.64🌟 GhostNet1.3x 75.78🌟
Stars: ✭ 42 (+20%)
Mutual labels:  resnet
CoreML-samples
Sample code for Core ML using ResNet50 provided by Apple and a custom model generated by coremltools.
Stars: ✭ 38 (+8.57%)
Mutual labels:  resnet
neural-vqa-attention
❓ Attention-based Visual Question Answering in Torch
Stars: ✭ 96 (+174.29%)
Mutual labels:  torch
bittensor
Internet-scale Neural Networks
Stars: ✭ 97 (+177.14%)
Mutual labels:  torch
FlowNetTorch
Torch implementation of Fischer et al. FlowNet training code
Stars: ✭ 27 (-22.86%)
Mutual labels:  torch
general backbone
No description or website provided.
Stars: ✭ 37 (+5.71%)
Mutual labels:  resnet
torch-lrcn
An implementation of the LRCN in Torch
Stars: ✭ 85 (+142.86%)
Mutual labels:  torch

resnet.torch

This is a fork of https://github.com/facebook/fb.resnet.torch. Refer to that repository if you need the details of the underlying library.

This code has been heavily modified, with many additions made throughout my research. Most of the changes are optional and defined in "opts.lua". Here is a list of the additions, by no means complete.

  1. Class weighting to tackle class imbalance (-classWeighting)
  • It counts the number of instances for each category and uses the normalized inverse frequency to scale learning rates per category.
  2. Empirically verified way to freeze convolutional layers of the network.
  • I tried every suggested method to freeze a pretrained network; however, every one of them still updated the model. In the end, I modified nnlr in order to freeze the network without any such leak. nnlr is a library that lets you scale learning rates per layer. I changed the code to assign an exact value per layer instead of scaling the base learning rate. The idea is to set the learning rate and weight decay of every feature layer to 0, preventing the model from updating those parameters.
  3. Better bookkeeping of the trained models.
  • Any trained model is arranged in a folder named by the important model parameters, with a sub-folder named by the date of the execution.
  4. Plotting accuracy and loss values.
  • In the folder created for the training run, per-epoch loss and accuracy plots are generated with gnuplot.
  5. New models:
  • GoogleNet
  • ResNet with Stochastic Depth
  • SimpleNet (a small architecture that serves as a good baseline)
  • And some others
  6. Model initialization with a different learning rate (-model_init_LR)
  • It is good to stabilize a model before setting the learning rate to its base value. The given value is used for the initial 5 epochs.
  7. Saving the model's optimState so that you can continue training from any checkpoint with all history recovered.
  8. dataset/balanced.lua for balancing instance selection against imbalanced datasets.
  9. Setting the optimizer to adam or sgd (-optimizer (sgd)).
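The class-weighting addition (-classWeighting) boils down to normalized inverse class frequency. The repo implements this in Lua/Torch; the following is a minimal plain-Python sketch of the same computation, with a hypothetical helper name:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class weights proportional to inverse class frequency,
    normalized to sum to 1 (hypothetical helper, not from the repo)."""
    counts = Counter(labels)
    inv = {cls: 1.0 / n for cls, n in counts.items()}
    total = sum(inv.values())
    return {cls: w / total for cls, w in inv.items()}

# Rare classes get larger weights, so their per-category learning rate is scaled up.
weights = inverse_frequency_weights(["cat"] * 8 + ["dog"] * 2)
# e.g. "dog" (2 instances) ends up with a larger weight than "cat" (8 instances)
```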
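The layer-freezing trick amounts to giving frozen layers an exact learning rate (and weight decay) of 0 rather than scaling a base rate. A toy plain-Python sketch of an SGD step with per-layer rates, assuming an illustrative two-layer parameter list:

```python
def sgd_step(params, grads, layer_lrs):
    """One SGD update with an exact learning rate per layer.
    Layers with lr == 0 are effectively frozen: their parameters never change."""
    return [
        [w - lr * g for w, g in zip(layer_w, layer_g)]
        for layer_w, layer_g, lr in zip(params, grads, layer_lrs)
    ]

params = [[1.0, 2.0],   # frozen feature layer (lr = 0)
          [0.5, -0.5]]  # trainable classifier layer
grads  = [[0.3, 0.3],
          [0.1, 0.2]]
new_params = sgd_step(params, grads, layer_lrs=[0.0, 0.1])
# The first layer is unchanged; only the classifier layer moved.
```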
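Saving optimState alongside the model is what makes exact resumption possible: optimizers such as SGD with momentum or Adam keep running state that must be restored, not just the weights. A hedged plain-Python sketch of the checkpoint round trip (file name and dictionary keys are illustrative, not the repo's format):

```python
import os
import pickle
import tempfile

def save_checkpoint(path, model_params, optim_state, epoch):
    # Persist everything needed to resume: weights, optimizer history, epoch.
    with open(path, "wb") as f:
        pickle.dump({"model": model_params,
                     "optimState": optim_state,
                     "epoch": epoch}, f)

def load_checkpoint(path):
    with open(path, "rb") as f:
        return pickle.load(f)

path = os.path.join(tempfile.mkdtemp(), "checkpoint.pkl")
save_checkpoint(path,
                model_params=[0.1, 0.2],
                optim_state={"momentumBuffer": [0.01, 0.02]},
                epoch=7)
ckpt = load_checkpoint(path)
# Training can continue from ckpt["epoch"] with momentum history intact.
```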

NOTE: Check the other branches of the project. Each contains a particular model architecture.

WARNING: Use this repo with caution, since the code is only used for research purposes and there might be buggy details.
