
yuanyuanli85 / Tf Hrnet

License: BSD-3-Clause
TensorFlow implementation of "High-Resolution Representations for Labeling Pixels and Regions"

Programming Languages

python

Projects that are alternatives to or similar to Tf Hrnet

Caffenet Benchmark
Evaluation of the CNN design choices performance on ImageNet-2012.
Stars: ✭ 700 (+1047.54%)
Mutual labels:  imagenet
Classification models
Classification models trained on ImageNet. Keras.
Stars: ✭ 938 (+1437.7%)
Mutual labels:  imagenet
Divide And Co Training
[Paper 2020] Towards Better Accuracy-efficiency Trade-offs: Divide and Co-training. Plus, an image classification toolbox includes ResNet, Wide-ResNet, ResNeXt, ResNeSt, ResNeXSt, SENet, Shake-Shake, DenseNet, PyramidNet, and EfficientNet.
Stars: ✭ 54 (-11.48%)
Mutual labels:  imagenet
Pytorch image classification
PyTorch implementation of image classification models for CIFAR-10/CIFAR-100/MNIST/FashionMNIST/Kuzushiji-MNIST/ImageNet
Stars: ✭ 795 (+1203.28%)
Mutual labels:  imagenet
Imagenetscraper
👁 Bulk-download all thumbnails from an ImageNet synset, with optional rescaling
Stars: ✭ 24 (-60.66%)
Mutual labels:  imagenet
Constrained attention filter
(ECCV 2020) Tensorflow implementation of A Generic Visualization Approach for Convolutional Neural Networks
Stars: ✭ 36 (-40.98%)
Mutual labels:  imagenet
Randwirenn
Implementation of: "Exploring Randomly Wired Neural Networks for Image Recognition"
Stars: ✭ 675 (+1006.56%)
Mutual labels:  imagenet
Big transfer
Official repository for the "Big Transfer (BiT): General Visual Representation Learning" paper.
Stars: ✭ 1,096 (+1696.72%)
Mutual labels:  imagenet
Orange3 Imageanalytics
🍊 🎑 Orange3 add-on for dealing with image related tasks
Stars: ✭ 24 (-60.66%)
Mutual labels:  imagenet
Pretrained Models.pytorch
Pretrained ConvNets for pytorch: NASNet, ResNeXt, ResNet, InceptionV4, InceptionResnetV2, Xception, DPN, etc.
Stars: ✭ 8,318 (+13536.07%)
Mutual labels:  imagenet
Switchable Normalization
Code for Switchable Normalization from "Differentiable Learning-to-Normalize via Switchable Normalization", https://arxiv.org/abs/1806.10779
Stars: ✭ 804 (+1218.03%)
Mutual labels:  imagenet
Mini Imagenet
Generate mini-ImageNet with ImageNet for fewshot learning
Stars: ✭ 22 (-63.93%)
Mutual labels:  imagenet
Imagenet resnet tensorflow2.0
Train ResNet on ImageNet in TensorFlow 2.0; complete ImageNet training code for ResNet
Stars: ✭ 42 (-31.15%)
Mutual labels:  imagenet
Addernet
Code for paper " AdderNet: Do We Really Need Multiplications in Deep Learning?"
Stars: ✭ 722 (+1083.61%)
Mutual labels:  imagenet
Imagenet
Trial on kaggle imagenet object localization by yolo v3 in google cloud
Stars: ✭ 56 (-8.2%)
Mutual labels:  imagenet
Pytorch2keras
PyTorch to Keras model converter
Stars: ✭ 676 (+1008.2%)
Mutual labels:  imagenet
Dcgan
Porting pytorch dcgan on FloydHub
Stars: ✭ 21 (-65.57%)
Mutual labels:  imagenet
One Pixel Attack Keras
Keras implementation of "One pixel attack for fooling deep neural networks" using differential evolution on Cifar10 and ImageNet
Stars: ✭ 1,097 (+1698.36%)
Mutual labels:  imagenet
Biglittlenet
Official repository for Big-Little Net
Stars: ✭ 57 (-6.56%)
Mutual labels:  imagenet
Segmentationcpp
A c++ trainable semantic segmentation library based on libtorch (pytorch c++). Backbone: ResNet, ResNext. Architecture: FPN, U-Net, PAN, LinkNet, PSPNet, DeepLab-V3, DeepLab-V3+ by now.
Stars: ✭ 49 (-19.67%)
Mutual labels:  imagenet

hrnet-tf

Overview

This is a TensorFlow implementation of high-resolution representations for ImageNet classification. The network structure and training hyperparameters are kept the same as in the official PyTorch implementation.

Features of this repo

  • Low-level TensorFlow implementation
  • Multi-GPU training via Horovod
  • Configurable HRNet structure via network config files
  • Reproduces accuracy close to that of the official PyTorch implementation (a sketch of the classification head follows below)

HRnet structure details

First, the four-resolution feature maps are fed into bottleneck blocks, and the numbers of output channels are increased to 128, 256, 512, and 1024, respectively. Then, the high-resolution representations are downsampled by a stride-2 3x3 convolution outputting 256 channels and added to the representations of the second-highest resolution. This process is repeated twice, ending with 1024 channels at the smallest resolution. Last, the 1024 channels are transformed into 2048 channels through a 1x1 convolution, followed by a global average pooling operation. The resulting 2048-dimensional representation is fed into the classifier.
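The snippet below is a minimal sketch of this classification head in TensorFlow 1.x tf.layers style, not the repo's actual code: the real network increases channels with residual bottleneck blocks, which the sketch simplifies to 1x1 convolutions, and all function and variable names are illustrative.

import tensorflow as tf

def classification_head(feature_maps, num_classes=1000):
    # feature_maps: list of four tensors, highest resolution first,
    # as produced by the final HRNet stage.
    head_channels = [128, 256, 512, 1024]
    # 1) Increase each branch's channel count (the real network uses
    #    residual bottleneck blocks; a 1x1 conv stands in for them here).
    outs = []
    for x, c in zip(feature_maps, head_channels):
        x = tf.layers.conv2d(x, c, 1, use_bias=False)
        x = tf.nn.relu(tf.layers.batch_normalization(x))
        outs.append(x)
    # 2) Downsample with a stride-2 3x3 conv and add to the next
    #    lower-resolution branch, repeated until 1024 channels remain
    #    at the smallest resolution.
    y = outs[0]
    for i in range(1, 4):
        y = tf.layers.conv2d(y, head_channels[i], 3, strides=2,
                             padding='same', use_bias=False)
        y = tf.nn.relu(tf.layers.batch_normalization(y) + outs[i])
    # 3) 1x1 conv to 2048 channels, global average pooling, classifier.
    y = tf.layers.conv2d(y, 2048, 1, use_bias=False)
    y = tf.nn.relu(tf.layers.batch_normalization(y))
    y = tf.reduce_mean(y, axis=[1, 2])      # global average pooling
    return tf.layers.dense(y, num_classes)  # logits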

Accuracy of pretrained models

model         #Params   GFLOPs   top-1 error   top-5 error   Link
HRNet-W18-C   21.3M     3.99     24.2%         7.3%          TF-HRNET-W18
HRNet-W30-C   37.7M     7.55     21.9%         6.0%          TF-HRNet-W30

Installation

This repo is built on TensorFlow 1.12 and Python 3.6.

  1. Install dependencies:
pip install -r requirements.txt
  2. [Optional] Follow the Horovod installation instructions to install Horovod for multi-GPU training.

Data preparation

Please follow the instructions to convert the ImageNet dataset from images to TFRecords; this accelerates training significantly. After conversion, you will have TFRecord files under data/tfrecords as below:

# training files
train-00000-of-01024
train-00001-of-01024
...

# validation files
validation-00000-of-00128
validation-00001-of-00128
...
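For reference, here is a minimal sketch, not the repo's actual input pipeline, of how these TFRecords can be read with tf.data in TensorFlow 1.x. The feature keys follow the standard tensorflow/models ImageNet conversion script, and the resize, shuffle, and batch parameters are illustrative.

import tensorflow as tf

def parse_example(serialized):
    # Feature keys assume the standard ImageNet-to-TFRecord conversion.
    features = tf.parse_single_example(serialized, {
        'image/encoded': tf.FixedLenFeature([], tf.string),
        'image/class/label': tf.FixedLenFeature([], tf.int64),
    })
    image = tf.image.decode_jpeg(features['image/encoded'], channels=3)
    image = tf.image.resize_images(image, [224, 224])  # illustrative size
    return image, features['image/class/label']

dataset = (tf.data.Dataset.list_files('data/tfrecords/train-*')
           .interleave(tf.data.TFRecordDataset, cycle_length=8)
           .map(parse_example, num_parallel_calls=8)
           .shuffle(1024)
           .batch(64)
           .prefetch(1))
images, labels = dataset.make_one_shot_iterator().get_next()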

How to train and evaluate the network

  1. Train the network on one GPU for HRNet-W30:
python top/train.py --net_cfg cfgs/w30_s4.cfg --data_path /path/to/tfrecords
  2. To resume training from a saved checkpoint, pass --resume_training:
python top/train.py --net_cfg cfgs/w30_s4.cfg --data_path /path/to/tfrecords --resume_training
  3. Evaluate the network. Make sure the checkpoint is saved under models/:
python top/train.py --net_cfg cfgs/w30_s4.cfg --data_path /path/to/tfrecords --eval_only
  4. Train with multiple GPUs. Specify the number of GPUs via nb_gpus and the network config via extra_args in ./scripts/run_horovod.sh. For example, to train HRNet-W30 on 4 GPUs, the script would look like this:
nb_gpus=4

extra_args='--net_cfg cfgs/w30_s4.cfg'

echo "multi-GPU training enabled"
mpirun -np ${nb_gpus} -bind-to none -map-by slot -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH \
    -mca pml ob1 -mca btl ^openib \
    python top/train.py --enbl_multi_gpu ${extra_args}
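After setting nb_gpus and extra_args, launch multi-GPU training by running the script; the exact invocation is assumed here:

sh ./scripts/run_horovod.sh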

Related Efforts

  1. Much of the dataset and training pipeline code is adapted from PocketFlow.

Citation

If you find this work or code helpful in your research, please cite:

@inproceedings{SunXLW19,
  title={Deep High-Resolution Representation Learning for Human Pose Estimation},
  author={Ke Sun and Bin Xiao and Dong Liu and Jingdong Wang},
  booktitle={CVPR},
  year={2019}
}

@article{SunZJCXLMWLW19,
  title={High-Resolution Representations for Labeling Pixels and Regions},
  author={Ke Sun and Yang Zhao and Borui Jiang and Tianheng Cheng and Bin Xiao
  and Dong Liu and Yadong Mu and Xinggang Wang and Wenyu Liu and Jingdong Wang},
  journal={CoRR},
  volume={abs/1904.04514},
  year={2019}
}