zhanghang1989 / Torch Encoding Layer

Deep Texture Encoding Network

Programming Languages

lua
6591 projects

Projects that are alternatives of or similar to Torch Encoding Layer

Bcnencoder.net
Cross-platform texture encoding library for .NET. With support for BC1-3/DXT, BC4-5/RGTC and BC7/BPTC compression. Outputs files in ktx or dds formats.
Stars: ✭ 28 (-65%)
Mutual labels:  encoding, texture
Anrl
ANRL: Attributed Network Representation Learning via Deep Neural Networks(IJCAI-2018)
Stars: ✭ 77 (-3.75%)
Mutual labels:  deep-neural-networks
Sarcasm Detection
Detecting Sarcasm on Twitter using both traditional machine learning and deep learning techniques.
Stars: ✭ 73 (-8.75%)
Mutual labels:  deep-neural-networks
Mlbox
MLBox is a powerful Automated Machine Learning python library.
Stars: ✭ 1,199 (+1398.75%)
Mutual labels:  encoding
Fake news detection deep learning
Fake News Detection using Deep Learning models in Tensorflow
Stars: ✭ 74 (-7.5%)
Mutual labels:  deep-neural-networks
Ldpc
C and MATLAB implementation for LDPC encoding and decoding
Stars: ✭ 76 (-5%)
Mutual labels:  encoding
Rnn Trajmodel
The source of the IJCAI2017 paper "Modeling Trajectory with Recurrent Neural Networks"
Stars: ✭ 72 (-10%)
Mutual labels:  deep-neural-networks
Sdtw pytorch
Implementation of soft dynamic time warping in pytorch
Stars: ✭ 79 (-1.25%)
Mutual labels:  deep-neural-networks
Cnn Paper2
🎨 🎨 Deep learning tutorials on convolutional neural networks: image recognition, object detection, semantic segmentation, instance segmentation, face recognition, neural style transfer, GANs, and more 🎨🎨 https://dataxujing.github.io/CNN-paper2/
Stars: ✭ 77 (-3.75%)
Mutual labels:  deep-neural-networks
Swae
Implementation of the Sliced Wasserstein Autoencoders
Stars: ✭ 75 (-6.25%)
Mutual labels:  deep-neural-networks
Dann
Deep Neural Network Sandbox for JavaScript.
Stars: ✭ 75 (-6.25%)
Mutual labels:  deep-neural-networks
Mit 6.s094
MIT-6.S094: Deep Learning for Self-Driving Cars Assignments solutions
Stars: ✭ 74 (-7.5%)
Mutual labels:  deep-neural-networks
Deepsequenceclassification
Deep neural network based model for sequence to sequence classification
Stars: ✭ 76 (-5%)
Mutual labels:  deep-neural-networks
Awesome System For Machine Learning
A curated list of research in machine learning system. I also summarize some papers if I think they are really interesting.
Stars: ✭ 1,185 (+1381.25%)
Mutual labels:  deep-neural-networks
Awesome Learning With Label Noise
A curated list of resources for Learning with Noisy Labels
Stars: ✭ 1,205 (+1406.25%)
Mutual labels:  deep-neural-networks
Channelnets
Tensorflow Implementation of ChannelNets (NeurIPS 18)
Stars: ✭ 73 (-8.75%)
Mutual labels:  deep-neural-networks
Tfjs Core
WebGL-accelerated ML // linear algebra // automatic differentiation for JavaScript.
Stars: ✭ 8,514 (+10542.5%)
Mutual labels:  deep-neural-networks
Godot Scraps
Experimental projects for learning the Godot engine
Stars: ✭ 76 (-5%)
Mutual labels:  texture
Deepdenoiser
Deep learning based denoiser for Cycles, Blender's physically-based production renderer.
Stars: ✭ 80 (+0%)
Mutual labels:  deep-neural-networks
Oiio
Reading, writing, and processing images in a wide variety of file formats, using a format-agnostic API, aimed at VFX applications.
Stars: ✭ 1,216 (+1420%)
Mutual labels:  texture

Deep Encoding

Created by Hang Zhang

Table of Contents

  1. Introduction
  2. Installation
  3. Experiments
  4. Benchmarks
  5. Acknowledgements

Introduction

  • Please check out our PyTorch implementation (recommended, memory efficient).

  • This repo is a Torch implementation of Encoding Layer as described in the paper:

Deep TEN: Texture Encoding Network [arXiv]
Hang Zhang, Jia Xue, Kristin Dana

@article{zhang2016deep,
  title={Deep TEN: Texture Encoding Network},
  author={Zhang, Hang and Xue, Jia and Dana, Kristin},
  journal={arXiv preprint arXiv:1612.02844},
  year={2016}
}

Traditional methods such as bag-of-words (BoW, left) have a structural similarity to more recent FV-CNN methods (center); in both, each component is optimized in a separate step. In our approach (right), the entire pipeline is learned in an integrated manner, tuning each component for the task at hand (end-to-end texture/material/pattern recognition).
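The core of the integrated pipeline is the Encoding Layer, which replaces the separate dictionary-learning and residual-encoding steps with a differentiable soft-assignment aggregation over learnable codewords. The following is a minimal NumPy sketch of that forward pass as described in the Deep TEN paper; the function name and variable names are mine, and the actual repo implements this in Lua/Torch (and PyTorch) with learnable parameters.

```python
import numpy as np

def encoding_layer(X, C, s):
    """Soft-assignment residual encoding (a sketch, not the repo's API).

    X: (N, D) descriptors, e.g. conv features at N spatial locations
    C: (K, D) learnable codewords (the "dictionary")
    s: (K,)  learnable smoothing factors
    Returns E: (K, D) aggregated residual encoding.
    """
    # residuals r_ik = x_i - c_k, shape (N, K, D)
    R = X[:, None, :] - C[None, :, :]
    # soft-assignment weights a_ik = softmax_k(-s_k * ||r_ik||^2)
    sq = (R ** 2).sum(axis=-1)                    # (N, K)
    logits = -s[None, :] * sq
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    A = np.exp(logits)
    A /= A.sum(axis=1, keepdims=True)
    # aggregate residuals: e_k = sum_i a_ik * r_ik
    E = (A[:, :, None] * R).sum(axis=0)           # (K, D)
    return E
```

Because the assignment is a softmax rather than a hard nearest-codeword choice, gradients flow to both the codewords and the smoothing factors, which is what lets the dictionary be trained jointly with the CNN features.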

Installation

On Linux

luarocks install https://raw.githubusercontent.com/zhanghang1989/Deep-Encoding/master/deep-encoding-scm-1.rockspec

On OSX

CC=clang CXX=clang++ luarocks install https://raw.githubusercontent.com/zhanghang1989/Deep-Encoding/master/deep-encoding-scm-1.rockspec

Experiments

  • The Joint Encoding experiment in Sec. 4.2 runs by default (tested using 1 Titan X GPU). It achieves 12.89% error on the STL-10 dataset, a 49.8% relative improvement over the previous state of the art of 25.67% (Zhao et al. 2015):

    git clone https://github.com/zhanghang1989/Deep-Encoding
    cd Deep-Encoding/experiments
    th main.lua
    
  • Training Deep-TEN on MINC-2500 in Sec. 4.1 using 4 GPUs.

    1. Please download the pre-trained ResNet-50 Torch model and the MINC-2500 dataset to the minc folder before executing the program (tested using 4 Titan X GPUs):
     th main.lua -retrain resnet-50.t7 -ft true \
     -netType encoding -nCodes 32 -dataset minc \
     -data minc/ -nClasses 23 -batchSize 64 \
     -nGPU 4 -multisize true

    2. To get comparable results using 2 GPUs, change the batch size and the corresponding learning rate:
      th main.lua -retrain resnet-50.t7 -ft true \
      -netType encoding -nCodes 32 -dataset minc \
      -data minc/ -nClasses 23 -batchSize 32 \
      -nGPU 2 -multisize true -LR 0.05
    

Benchmarks

Dataset                MINC-2500  FMD       GTOS      KTH       4D-Light
FV-SIFT                46.0       47.0      65.5      66.3      58.4
FV-CNN (VD)            61.8       75.0      77.1      71.0      70.4
FV-CNN (VD) multi      63.1       74.0      79.2      77.8      76.5
FV-CNN (ResNet) multi  69.3       78.2      77.1      78.3      77.6
Deep-TEN* (ours)       81.3       80.2±0.9  84.5±2.9  84.5±3.5  81.7±1.0
State-of-the-Art       76.0±0.2   82.4±1.4  81.4      81.1±1.5  77.0±1.1

Acknowledgements

We thank Wenhan Zhang from the Physics Department, Rutgers University, for discussions of mathematical models. This work was supported by National Science Foundation award IIS-1421134. A GPU used for this research was donated by the NVIDIA Corporation.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].