
ducksoup / autodial

Licence: other
AutoDIAL Caffe Implementation

Programming Languages

  • C++: 36643 projects (#6 most used programming language)
  • Python: 139335 projects (#7 most used programming language)
  • Cuda: 1817 projects
  • CMake: 9771 projects
  • Protocol Buffer: 295 projects
  • MATLAB: 3953 projects

Projects that are alternatives to or similar to autodial

vqa-soft
Accompanying code for "A Simple Loss Function for Improving the Convergence and Accuracy of Visual Question Answering Models" CVPR 2017 VQA workshop paper.
Stars: ✭ 14 (-50%)
Mutual labels:  caffe
XLearning-GPU
qihoo360 xlearning with GPU support; AI on Hadoop
Stars: ✭ 22 (-21.43%)
Mutual labels:  caffe
caffe-demo
Collection of deep learning demos based on networks from the Caffe Zoo
Stars: ✭ 15 (-46.43%)
Mutual labels:  caffe
Faster rcnn Cplusplus vs2013
faster-rcnn_VS2013 with C++
Stars: ✭ 77 (+175%)
Mutual labels:  caffe
fusion gan
Codes for the paper 'Learning to Fuse Music Genres with Generative Adversarial Dual Learning' ICDM 17
Stars: ✭ 18 (-35.71%)
Mutual labels:  domain-adaptation
domain adapt
Domain adaptation networks for digit recognition
Stars: ✭ 14 (-50%)
Mutual labels:  domain-adaptation
domain-adaptation-capls
Unsupervised Domain Adaptation via Structured Prediction Based Selective Pseudo-Labeling
Stars: ✭ 43 (+53.57%)
Mutual labels:  domain-adaptation
MobilenetSSD caffe
How to train and verify MobileNet using PASCAL VOC data in Caffe SSD
Stars: ✭ 25 (-10.71%)
Mutual labels:  caffe
weak-supervision-for-NER
Framework to learn Named Entity Recognition models without labelled data using weak supervision.
Stars: ✭ 114 (+307.14%)
Mutual labels:  domain-adaptation
all-classifiers-2019
A collection of computer vision projects for Acute Lymphoblastic Leukemia classification/early detection.
Stars: ✭ 24 (-14.29%)
Mutual labels:  caffe
traditional-domain-adaptation-methods
traditional domain adaptation methods (e.g., GFK, TCA, SA)
Stars: ✭ 47 (+67.86%)
Mutual labels:  domain-adaptation
VirtualCapsuleEndoscopy
VR-Caps: A Virtual Environment for Active Capsule Endoscopy
Stars: ✭ 59 (+110.71%)
Mutual labels:  domain-adaptation
faster-rcnn-pedestrian-detection
Faster R-CNN for pedestrian detection
Stars: ✭ 31 (+10.71%)
Mutual labels:  caffe
SSD Tracker
Counting people, dogs and bicycles using SSD detection and tracking.
Stars: ✭ 17 (-39.29%)
Mutual labels:  caffe
kernelized correlation filters gpu
Real-time visual object tracking using correlation filters and deep learning
Stars: ✭ 27 (-3.57%)
Mutual labels:  caffe
TensorRT-LPR
License plate recognition based on HyperLPR, with a modified model-invocation method, GPU acceleration via Caffe + TensorRT, and a modified plate-detection model
Stars: ✭ 14 (-50%)
Mutual labels:  caffe
FDCNN
The implementation of FDCNN in paper - A Feature Difference Convolutional Neural Network-Based Change Detection Method
Stars: ✭ 54 (+92.86%)
Mutual labels:  caffe
Similarity-Adaptive-Deep-Hashing
Unsupervised Deep Hashing with Similarity-Adaptive and Discrete Optimization (TPAMI2018)
Stars: ✭ 18 (-35.71%)
Mutual labels:  caffe
CAM-Python
Class Activation Mapping with Caffe using the Python wrapper pycaffe instead of matlab.
Stars: ✭ 66 (+135.71%)
Mutual labels:  caffe
Face-Attributes-MultiTask-Classification
Use Caffe to do face-attribute multi-task classification based on the CelebA dataset
Stars: ✭ 32 (+14.29%)
Mutual labels:  caffe

AutoDIAL Caffe Implementation

This is the official Caffe implementation of AutoDIAL: Automatic DomaIn Alignment Layers

This code is forked from BVLC/caffe. For any issue not directly related to our components (listed below), please refer to the upstream repository.

Contents

We provide two additional layers and some example configuration files to train AlexNet-DIAL on the Office-31 dataset:

  • DialLayer: implements the AutoDIAL layer described in the paper.
  • EntropyLossLayer: a simple entropy loss implementation with integrated softmax computation.
  • AlexNet-DIAL: model and train *.prototxt files to train AlexNet-DIAL on Office-31 are available under models/alexnet_dial. They assume the Office-31 images are formatted in a way that is compatible with Caffe's ImageDataLayer.
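EntropyLossLayer is described above as an entropy loss with an integrated softmax. As a rough illustration of the forward computation only, a minimal NumPy sketch might look like the following (the function name is ours, not part of the repository, and the actual layer also implements the backward pass in C++/CUDA):

```python
import numpy as np

def entropy_loss(logits):
    """Entropy of softmax(logits), averaged over the batch.

    logits: array of shape (N, C), one row of class scores per sample.
    """
    # Numerically stable softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)
    # Per-sample entropy, averaged over the batch; the small constant
    # guards against log(0).
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())
```

Minimizing this quantity on unlabeled target samples pushes the network toward confident (low-entropy) predictions, which is how such losses are typically used in unsupervised domain adaptation.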

DialLayer

At training time, DialLayer assumes that images / samples from the source and target sets are collected in the same batch, with the source data stored in the first n elements and the target data stored in the remaining N - n elements. The splitting point n can be freely configured by the user. Similarly to BatchNormLayer, DialLayer computes on-line estimates of the input's mean and standard deviation, but it does so separately for the source and target sets. At test time, DialLayer assumes that the batches contain samples from a single set, and uses the same (configurable) statistics to normalize all inputs.
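The per-domain statistics described above can be sketched in NumPy as follows. This is a simplified illustration of the training-time forward pass only: it omits the learnable alpha blending of statistics described in the paper, as well as the moving-average updates used at test time, and `dial_forward` is an illustrative name rather than repository code:

```python
import numpy as np

def dial_forward(x, slice_point, eps=1e-5):
    """Normalize a mixed source/target batch with per-domain statistics.

    x: array of shape (N, C); the first `slice_point` rows are source
    samples and the remaining rows are target samples, matching the
    batch layout DialLayer expects at training time.
    """
    out = np.empty_like(x, dtype=float)
    for sl in (slice(None, slice_point), slice(slice_point, None)):
        part = x[sl]
        mean = part.mean(axis=0)   # per-domain mean
        var = part.var(axis=0)     # per-domain (biased) variance
        out[sl] = (part - mean) / np.sqrt(var + eps)
    return out
```

After this step each domain's slice of the batch is normalized with its own mean and variance, so the source and target features are matched to a common reference distribution.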

DialLayer accepts all of BatchNormLayer's parameters (use_global_stats, moving_average_fraction, eps), with the addition of:

  • slice_point: the batch index n of the first target sample.
  • test_stats, one of SOURCE or TARGET: determines which of the stored statistics are used to normalize the input at test time.
  • weight_filler: a filler to initialize alpha (see the paper).
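Putting these parameters together, a DialLayer definition in a train prototxt might look roughly like the sketch below. The layer type string and the parameter message name are assumptions on our part; the files under models/alexnet_dial contain the authoritative syntax:

```
layer {
  name: "dial1"
  type: "DIAL"          # type string is an assumption; check the layer registration
  bottom: "conv1"
  top: "conv1"
  dial_param {          # parameter message name is an assumption
    slice_point: 128    # batch index of the first target sample
    test_stats: TARGET  # normalize with target statistics at test time
    moving_average_fraction: 0.999
    eps: 1e-5
    weight_filler { type: "constant" value: 1.0 }  # initializes alpha
  }
}
```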

Abstract and citation

Classifiers trained on given databases perform poorly when tested on data acquired in different settings. This is explained in domain adaptation through a shift among distributions of the source and target domains. Attempts to align them have traditionally resulted in works reducing the domain shift by introducing appropriate loss terms, measuring the discrepancies between source and target distributions, in the objective function. Here we take a different route, proposing to align the learned representations by embedding in any given network specific Domain Alignment Layers, designed to match the source and target feature distributions to a reference one. Opposite to previous works which define a priori in which layers adaptation should be performed, our method is able to automatically learn the degree of feature alignment required at different levels of the deep network. Thorough experiments on different public benchmarks, in the unsupervised setting, confirm the power of our approach.

@inproceedings{carlucci2017autodial,
  title={AutoDIAL: Automatic DomaIn Alignment Layers},
  author={Carlucci, Fabio Maria and Porzi, Lorenzo and Caputo, Barbara and Ricci, Elisa and Rota Bul{\`o}, Samuel},
  booktitle={International Conference on Computer Vision},
  year={2017}
}