SRDenseNet-pytorch

Implementation of the paper "Image Super-Resolution Using Dense Skip Connections" (ICCV 2017) in PyTorch: http://openaccess.thecvf.com/content_ICCV_2017/papers/Tong_Image_Super-Resolution_Using_ICCV_2017_paper.pdf
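
For orientation, the building block behind the paper is dense connectivity: each 3x3 conv layer's output is concatenated with its input, so every later layer sees the features of all earlier ones. Below is a minimal PyTorch sketch of that pattern; the growth rate and layer count are the commonly cited settings from the paper, not necessarily the exact modules defined in this repository.

import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """A 3x3 conv whose output is concatenated onto its input (dense connectivity)."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return torch.cat([x, self.relu(self.conv(x))], dim=1)

class DenseBlock(nn.Module):
    """Stack of dense layers; the channel count grows by growth_rate per layer."""
    def __init__(self, in_channels, growth_rate=16, num_layers=8):
        super().__init__()
        layers, channels = [], in_channels
        for _ in range(num_layers):
            layers.append(DenseLayer(channels, growth_rate))
            channels += growth_rate
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)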

Usage

Training

usage: main.py [-h] [--batchSize BATCHSIZE] [--nEpochs NEPOCHS] [--lr LR]
               [--step STEP] [--cuda] [--resume RESUME]
               [--start-epoch START_EPOCH] [--threads THREADS]
               [--pretrained PRETRAINED]

Pytorch SRDenseNet train

optional arguments:
  -h, --help            show this help message and exit
  --batchSize BATCHSIZE
                        training batch size
  --nEpochs NEPOCHS     number of epochs to train for
  --lr LR               Learning Rate. Default=1e-4
  --step STEP           Sets the learning rate to the initial LR decayed by
                        10 every n epochs, Default: n=30
  --cuda                Use cuda?
  --resume RESUME       Path to checkpoint (default: none)
  --start-epoch START_EPOCH
                        Manual epoch number (useful on restarts)
  --threads THREADS     Number of threads for data loader to use, Default: 1
  --pretrained PRETRAINED
                        path to pretrained model (default: none)
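
For example, a GPU run with the default learning-rate schedule might be launched as below; the batch size, epoch count, and thread count are illustrative, and any flag left out falls back to the script's default:

python main.py --cuda --batchSize 32 --nEpochs 60 --lr 1e-4 --step 30 --threads 4

Per the --step help text, the learning rate presumably follows a step decay of roughly lr = 1e-4 * 0.1 ** (epoch // 30).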

Test

usage: test.py [-h] [--cuda] [--model MODEL] [--imageset IMAGESET] [--scale SCALE]

Pytorch SRDenseNet Test

optional arguments:
  -h, --help     show this help message and exit
  --cuda         use cuda?
  --model MODEL  model path
  --imageset IMAGESET  imageset name
  --scale SCALE  scale factor, Default: 4
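
A typical invocation might look like the following; the checkpoint filename and image set name are placeholders for whatever you trained or prepared:

python test.py --cuda --model model_epoch_60.pth --imageset Set5 --scale 4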

Prepare Training dataset

The training data is generated with Matlab bicubic interpolation; please refer to Code for Data Generation for creating the training files.

Prepare Test dataset

The test image set is generated with Matlab bicubic interpolation; please refer to Code for test for creating the test image set.
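
If Matlab is not available, a rough Python equivalent of the bicubic downsampling step is sketched below. Pillow and the directory layout are assumptions, and Pillow's bicubic resize does not match Matlab's imresize exactly, so results may differ slightly from the Matlab-generated data:

import os
from PIL import Image

def make_lr_images(hr_dir, lr_dir, scale=4):
    """Bicubically downsample every HR image in hr_dir by `scale` and save the LR result."""
    os.makedirs(lr_dir, exist_ok=True)
    for name in os.listdir(hr_dir):
        hr = Image.open(os.path.join(hr_dir, name)).convert("RGB")
        # Crop so the HR size is an exact multiple of the scale factor.
        w, h = hr.size
        hr = hr.crop((0, 0, w - w % scale, h - h % scale))
        lr = hr.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)
        lr.save(os.path.join(lr_dir, name))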

Performance

We provide a pretrained SRDenseNet ×4 model trained on DIV2K images from DIV2K_train_HR (http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_HR.zip). Because SR_DenseNet was used to train this model, the reported performance is measured with this code.

Non-overlapping sub-images of size 96 × 96 were cropped in HR space. Other settings are the same as in the original paper.
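
A minimal sketch of that cropping step, assuming each HR image is already loaded as an H x W (x C) array (the patch size and non-overlapping stride follow the setting described above):

def crop_patches(hr_image, patch_size=96):
    """Cut non-overlapping patch_size x patch_size sub-images from an HR image array."""
    h, w = hr_image.shape[0], hr_image.shape[1]
    patches = []
    for top in range(0, h - patch_size + 1, patch_size):
        for left in range(0, w - patch_size + 1, patch_size):
            patches.append(hr_image[top:top + patch_size, left:left + patch_size])
    return patches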

  • Performance in PSNR/SSIM on Set5, Set14, and BSD100 (×4)

Dataset    Paper (PSNR/SSIM)    PyTorch (PSNR/SSIM)
Set5       32.02 / 0.893        31.57 / 0.883
Set14      28.50 / 0.778        28.11 / 0.771
BSD100     27.53 / 0.733        27.32 / 0.729
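
For reference, SR benchmarks usually report PSNR on the luminance (Y) channel. A minimal sketch is below; the exact evaluation protocol of this repository (color conversion, border cropping) is an assumption here, so reproduced numbers may differ slightly:

import numpy as np

def psnr_y(sr_y, hr_y, shave=4):
    """PSNR between two uint8 Y-channel images, ignoring a `shave`-pixel border (often the scale factor)."""
    sr = sr_y.astype(np.float64)[shave:-shave, shave:-shave]
    hr = hr_y.astype(np.float64)[shave:-shave, shave:-shave]
    mse = np.mean((sr - hr) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)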