minar09 / Vgg16 Pytorch

VGG16 network implementation for the ImageNet dataset, based on the PyTorch examples scripts

Projects that are alternatives to or similar to Vgg16 Pytorch

Cnn Models
ImageNet pre-trained models with batch normalization for the Caffe framework
Stars: ✭ 355 (+1265.38%)
Mutual labels:  resnet, vgg16, vgg
LibtorchTutorials
A code repository for a PyTorch C++ (LibTorch) tutorial.
Stars: ✭ 463 (+1680.77%)
Mutual labels:  vgg, resnet
Keras Idiomatic Programmer
Books, Presentations, Workshops, Notebook Labs, and Model Zoo for Software Engineers and Data Scientists wanting to learn the TF.Keras Machine Learning framework
Stars: ✭ 720 (+2669.23%)
Mutual labels:  resnet, vgg
Keras-CIFAR10
Practice on CIFAR10 with Keras
Stars: ✭ 25 (-3.85%)
Mutual labels:  vgg, resnet
Ai papers
AI Papers
Stars: ✭ 253 (+873.08%)
Mutual labels:  resnet, vgg
DeepNetModel
Notes on the characteristics of each commonly used deep model architecture (diagrams and code)
Stars: ✭ 25 (-3.85%)
Mutual labels:  vgg, resnet
neural-dream
PyTorch implementation of DeepDream algorithm
Stars: ✭ 110 (+323.08%)
Mutual labels:  vgg, resnet
Dsfd.pytorch
DSFD implement with pytorch
Stars: ✭ 153 (+488.46%)
Mutual labels:  resnet, vgg16
Faster-RCNN-TensorFlow
TensorFlow implementation of Faster RCNN for Object Detection
Stars: ✭ 13 (-50%)
Mutual labels:  vgg, resnet
Grad Cam Tensorflow
TensorFlow implementation of Grad-CAM (CNN visualization)
Stars: ✭ 261 (+903.85%)
Mutual labels:  resnet, vgg16
Pytorch Image Classification
Tutorials on how to implement a few key architectures for image classification using PyTorch and TorchVision.
Stars: ✭ 272 (+946.15%)
Mutual labels:  resnet, vgg
Deep Learning With Python
Deep learning codes and projects using Python
Stars: ✭ 195 (+650%)
Mutual labels:  resnet, vgg16
Sceneclassify
AI scene classification competition
Stars: ✭ 169 (+550%)
Mutual labels:  resnet, vgg
Food Recipe Cnn
Food image to recipe with deep convolutional neural networks.
Stars: ✭ 448 (+1623.08%)
Mutual labels:  vgg16, vgg
Resnet Cifar10 Caffe
ResNet-20/32/44/56/110 on CIFAR-10 with Caffe
Stars: ✭ 161 (+519.23%)
Mutual labels:  resnet, vgg16
python cv AI ML
Computer vision, artificial intelligence, machine learning, deep learning, and more in Python
Stars: ✭ 73 (+180.77%)
Mutual labels:  vgg, resnet
Chainer Cifar10
Various CNN models for CIFAR10 with Chainer
Stars: ✭ 134 (+415.38%)
Mutual labels:  resnet, vgg
Tensorrtx
Implementation of popular deep learning networks with TensorRT network definition API
Stars: ✭ 3,456 (+13192.31%)
Mutual labels:  resnet, vgg
RMNet
RM Operation can equivalently convert ResNet to VGG, which is better for pruning; and can help RepVGG perform better when the depth is large.
Stars: ✭ 129 (+396.15%)
Mutual labels:  vgg, resnet
Tianchi Medical Lungtumordetect
Tianchi Medical AI Competition [Season 1]: intelligent diagnosis of pulmonary nodules, with UNet/VGG/Inception/ResNet/DenseNet
Stars: ✭ 314 (+1107.69%)
Mutual labels:  resnet, vgg

Disclaimer

This repository is a modified version of PyTorch/examples/ImageNet. Please refer to the original repository for more details.

ImageNet training in PyTorch

This implements training of popular model architectures, such as ResNet, AlexNet, and VGG on the ImageNet dataset.

Requirements

Install PyTorch and torchvision (https://pytorch.org), and download the ImageNet dataset as described in the "Download the ImageNet dataset" section below.

Training

To train a model, run main.py with the desired model architecture and the path to the ImageNet dataset:

python main.py -a resnet18 [imagenet-folder with train and val folders]

The default learning rate schedule starts at 0.1 and decays by a factor of 10 every 30 epochs. This is appropriate for ResNet and models with batch normalization, but too high for AlexNet and VGG. Use 0.01 as the initial learning rate for AlexNet or VGG:

python main.py -a alexnet --lr 0.01 [imagenet-folder with train and val folders]
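
For reference, the step decay described above amounts to a small helper; this is a sketch of the schedule (lr × 0.1 per 30 epochs), and the example script implements it with a function along these lines:

def adjust_learning_rate(optimizer, epoch, initial_lr):
    """Decay the learning rate by a factor of 10 every 30 epochs."""
    lr = initial_lr * (0.1 ** (epoch // 30))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr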

Multi-processing Distributed Data Parallel Training

You should always use the NCCL backend for multi-processing distributed training since it currently provides the best distributed training performance.

Single node, multiple GPUs:

python main.py -a resnet50 --lr 0.01 --dist-url 'tcp://127.0.0.1:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed [imagenet-folder with train and val folders]
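
With --multiprocessing-distributed, the script launches one worker process per GPU on the node via torch.multiprocessing. A minimal sketch of the launch logic (main_worker and args stand in for the script's internals):

import torch
import torch.multiprocessing as mp

def main_worker(gpu, ngpus_per_node, args):
    # Each spawned process drives one GPU; gpu is the local index (0..N-1).
    print(f"worker {gpu} of {ngpus_per_node} started")

if __name__ == "__main__":
    args = None  # placeholder for the parsed command-line arguments
    ngpus_per_node = torch.cuda.device_count()
    # Spawn one process per GPU; each receives its index as the first argument.
    mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))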

Multiple nodes:

Node 0:

python main.py -a resnet50 --lr 0.01 --dist-url 'tcp://IP_OF_NODE0:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 2 --rank 0 [imagenet-folder with train and val folders]

Node 1:

python main.py -a resnet50 --lr 0.01 --dist-url 'tcp://IP_OF_NODE0:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 2 --rank 1 [imagenet-folder with train and val folders]
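
Here --world-size counts nodes and --rank is the node index; internally they are rescaled to a per-process world size and a global rank before the process group is initialized. A hedged sketch of that arithmetic (the concrete values are illustrative):

import torch.distributed as dist

# Illustrative values: 2 nodes (--world-size 2), 4 GPUs per node,
# this process drives GPU 1 on node 0 (--rank 0).
nnodes, ngpus_per_node, node_rank, gpu = 2, 4, 0, 1

world_size = nnodes * ngpus_per_node     # total number of processes: 8
rank = node_rank * ngpus_per_node + gpu  # global rank of this process: 1
dist.init_process_group(backend="nccl",
                        init_method="tcp://IP_OF_NODE0:FREEPORT",  # --dist-url
                        world_size=world_size,
                        rank=rank)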

Usage

usage: main.py [-h] [--arch ARCH] [-j N] [--epochs N] [--start-epoch N] [-b N]
               [--lr LR] [--momentum M] [--weight-decay W] [--print-freq N]
               [--resume PATH] [-e] [--pretrained] [--world-size WORLD_SIZE]
               [--rank RANK] [--dist-url DIST_URL]
               [--dist-backend DIST_BACKEND] [--seed SEED] [--gpu GPU]
               [--multiprocessing-distributed]
               DIR

PyTorch ImageNet Training

positional arguments:
  DIR                   path to dataset

optional arguments:
  -h, --help            show this help message and exit
  --arch ARCH, -a ARCH  model architecture: alexnet | densenet121 |
                        densenet161 | densenet169 | densenet201 |
                        resnet101 | resnet152 | resnet18 | resnet34 |
                        resnet50 | squeezenet1_0 | squeezenet1_1 | vgg11 |
                        vgg11_bn | vgg13 | vgg13_bn | vgg16 | vgg16_bn | vgg19
                        | vgg19_bn (default: resnet18)
  -j N, --workers N     number of data loading workers (default: 4)
  --epochs N            number of total epochs to run
  --start-epoch N       manual epoch number (useful on restarts)
  -b N, --batch-size N  mini-batch size (default: 256), this is the total
                        batch size of all GPUs on the current node when using
                        Data Parallel or Distributed Data Parallel
  --lr LR, --learning-rate LR
                        initial learning rate
  --momentum M          momentum
  --weight-decay W, --wd W
                        weight decay (default: 1e-4)
  --print-freq N, -p N  print frequency (default: 10)
  --resume PATH         path to latest checkpoint (default: none)
  -e, --evaluate        evaluate model on validation set
  --pretrained          use pre-trained model
  --world-size WORLD_SIZE
                        number of nodes for distributed training
  --rank RANK           node rank for distributed training
  --dist-url DIST_URL   url used to set up distributed training
  --dist-backend DIST_BACKEND
                        distributed backend
  --seed SEED           seed for initializing training.
  --gpu GPU             GPU id to use.
  --multiprocessing-distributed
                        Use multi-processing distributed training to launch N
                        processes per node, which has N GPUs. This is the
                        fastest way to use PyTorch for either single node or
                        multi node data parallel training
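
The names accepted by -a/--arch come from torchvision.models; the list in the help text can be reproduced with a snippet like the following, which mirrors how the example script builds it:

import torchvision.models as models

# Every lowercase callable in torchvision.models is a valid --arch value.
model_names = sorted(
    name for name in models.__dict__
    if name.islower() and not name.startswith("__")
    and callable(models.__dict__[name])
)
print(model_names)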

Our case:

python main.py -a vgg16 --lr 0.01 -b 32 D:\Dataset\Imagenet2012\Images

For resuming from a saved checkpoint (--resume takes the checkpoint path; the dataset directory remains the positional argument):

python main.py -a vgg16 --lr 0.01 -b 32 --resume checkpoint.pth.tar D:\Dataset\Imagenet2012\Images

With Batch Normalization:

python main.py -a vgg16_bn --lr 0.01 -b 16 D:\Dataset\Imagenet2012\Images
python main.py -a vgg16_bn --lr 0.01 -b 16 --resume checkpoint.pth.tar --epochs 50 --start-epoch 35 --multiprocessing-distributed D:\Dataset\Imagenet2012\Images
python main.py -a vgg16_bn --lr 0.001 -b 16 --resume model_best.pth.tar --epochs 40 --gpu 0  D:\Dataset\Imagenet2012\Images
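
For orientation, resuming restores the checkpoint dict that training saves; a minimal sketch, assuming the PyTorch example's checkpoint keys ('epoch', 'state_dict', 'optimizer'):

import torch
import torchvision.models as models

model = models.vgg16_bn()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                            momentum=0.9, weight_decay=1e-4)

checkpoint = torch.load("model_best.pth.tar")
# Note: checkpoints saved from a DataParallel/DistributedDataParallel model
# have keys prefixed with "module."; strip that prefix before loading if so.
model.load_state_dict(checkpoint["state_dict"])
optimizer.load_state_dict(checkpoint["optimizer"])
start_epoch = checkpoint["epoch"]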

Download the ImageNet dataset

The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset has 1000 categories and 1.2 million images. The images do not need to be preprocessed or packaged in any database, but the validation images need to be moved into appropriate subfolders.

  1. Download the images from http://image-net.org/download-images

  2. Extract the training data:

mkdir train && mv ILSVRC2012_img_train.tar train/ && cd train
tar -xvf ILSVRC2012_img_train.tar && rm -f ILSVRC2012_img_train.tar
find . -name "*.tar" | while read NAME ; do mkdir -p "${NAME%.tar}"; tar -xvf "${NAME}" -C "${NAME%.tar}"; rm -f "${NAME}"; done
cd ..

  3. Extract the validation data and move images to subfolders:
mkdir val && mv ILSVRC2012_img_val.tar val/ && cd val && tar -xvf ILSVRC2012_img_val.tar
wget -qO- https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh | bash
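
After these steps, train/ and val/ each contain one subfolder per class, which is the layout torchvision.datasets.ImageFolder expects; loading then looks roughly like this (standard ImageNet preprocessing assumed):

import torch
import torchvision.datasets as datasets
import torchvision.transforms as transforms

# Standard ImageNet normalization statistics.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
train_dataset = datasets.ImageFolder(
    "train",
    transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        normalize,
    ]))
train_loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=32, shuffle=True, num_workers=4)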