Active Convolution

This repository contains the implementation of the paper "Active Convolution: Learning the Shape of Convolution for Image Classification" (CVPR 2017).

The code is based on Caffe and cuDNN (v5).

Abstract

In recent years, deep learning has achieved great success in many computer vision applications. Convolutional neural networks (CNNs) have lately emerged as a major approach to image classification. Most research on CNNs thus far has focused on developing architectures such as the Inception and residual networks. The convolution layer is the core of the CNN, but few studies have addressed the convolution unit itself. In this paper, we introduce a convolution unit called the active convolution unit (ACU). A new convolution has no fixed shape, because of which we can define any form of convolution. Its shape can be learned through backpropagation during training. Our proposed unit has a few advantages. First, the ACU is a generalization of convolution; it can define not only all conventional convolutions, but also convolutions with fractional pixel coordinates. We can freely change the shape of the convolution, which provides greater freedom to form CNN structures. Second, the shape of the convolution is learned while training and there is no need to tune it by hand. Third, the ACU can learn better than a conventional unit, where we obtained the improvement simply by changing the conventional convolution to an ACU. We tested our proposed method on plain and residual networks, and the results showed significant improvement using our method on various datasets and architectures in comparison with the baseline.
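The core idea above — a convolution whose synapse positions are continuous, learnable values read off the input by bilinear interpolation — can be sketched in a few lines. The following is a minimal, dependency-free Python sketch for a single output value, not the repository's Caffe/CUDA implementation; all function names are illustrative.

```python
from math import floor

def bilinear_sample(img, y, x):
    """Sample a 2D grid (list of lists) at a fractional (y, x) position via
    bilinear interpolation; out-of-range neighbors contribute zero."""
    y0, x0 = floor(y), floor(x)
    wy, wx = y - y0, x - x0
    h, w = len(img), len(img[0])

    def px(r, c):
        return img[r][c] if 0 <= r < h and 0 <= c < w else 0.0

    return ((1 - wy) * (1 - wx) * px(y0, x0)
            + (1 - wy) * wx * px(y0, x0 + 1)
            + wy * (1 - wx) * px(y0 + 1, x0)
            + wy * wx * px(y0 + 1, x0 + 1))

def acu_output(img, cy, cx, offsets, weights, bias=0.0):
    """One output value of an ACU centered at (cy, cx): a weighted sum of
    samples taken at learnable fractional (dy, dx) offsets."""
    return bias + sum(w * bilinear_sample(img, cy + dy, cx + dx)
                      for (dy, dx), w in zip(offsets, weights))

# A 4-synapse cross shape with fractional (non-lattice) offsets.
cross = [(-0.5, 0.0), (0.5, 0.0), (0.0, -0.5), (0.0, 0.5)]
img = [[1.0] * 5 for _ in range(5)]
print(acu_output(img, 2, 2, cross, [0.25] * 4))  # -> 1.0 on a constant image
```

Because both the offsets and the weights enter the output differentiably (away from integer lattice points), the offsets can be updated by backpropagation just like the weights.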

Testing Code

You can validate backpropagation using the test code. Because the ACU is not differentiable at lattice points, do not use integer positions in the tests. To run them:

  1. Define "TEST_ACONV_FAST_ENV" macro in aconv_fast_layer.hpp
  2. > make test
  3. > ./build/test/test_aconv_fast_layer.testbin

All tests should pass. Before using the layer for training, don't forget to undefine the TEST_ACONV_FAST_ENV macro and run make again.

Usage

The ACU has four parameters (weight, bias, and the x- and y-positions of the synapses). Even if you do not use the bias term, the parameter order does not change.

Please refer to the deploy file in models/ACU.

To define an arbitrary shape of convolution:

  1. use a non-SQUARE type in aconv_param
  2. define the number of synapses using the kernel_h and kernel_w parameters in convolution_param

For example, to define a cross-shaped convolution with four synapses, you can write the following.

...
aconv_param {
  type: CIRCLE
}
convolution_param {
  num_output: 48
  kernel_h: 1
  kernel_w: 4
  stride: 1
}
...

When you use a user-defined shape of convolution, it is best to edit aconv_fast_layer.cpp directly to set the initial positions of the synapses.
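As a rough illustration of what such an initialization might compute, here is a hedged Python sketch that generates initial synapse offsets for a couple of shapes. The function name, the shapes, and the small epsilon nudge (to keep positions off the integer lattice, where the ACU gradient is undefined) are all illustrative assumptions, not code from the repository.

```python
from math import cos, sin, pi

def initial_positions(shape="cross", n=4, radius=1.0):
    """Hypothetical helper: initial (x, y) synapse offsets for a custom
    ACU shape. An epsilon keeps positions off integer lattice points,
    where the ACU is not differentiable."""
    eps = 1e-2
    if shape == "cross":
        base = [(-radius, 0.0), (radius, 0.0), (0.0, -radius), (0.0, radius)]
        return [(x + eps, y + eps) for x, y in base[:n]]
    if shape == "circle":
        # n synapses spread evenly on a circle of the given radius
        return [(radius * cos(2 * pi * k / n), radius * sin(2 * pi * k / n))
                for k in range(n)]
    raise ValueError("unknown shape: " + shape)

print(initial_positions("cross"))
```

In the actual layer, offsets like these would be written into the x- and y-position parameter blobs before training begins.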

Example

This is the result of a plain ACU network; an example for CIFAR-10 is in models/ACU.

Network      CIFAR-10 (%)   CIFAR-100 (%)
baseline     8.01           27.85
ACU          7.33           27.11
Improvement  +0.68          +0.74

The synapse positions change gradually over the course of training iterations.

You can draw the learned positions using the provided IPython script.
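For instance, a minimal plotting sketch might look like the following. This assumes matplotlib is available and that the learned x/y positions have already been extracted from the trained model; the helper name and arguments are hypothetical, not the repository's script.

```python
import matplotlib
matplotlib.use("Agg")  # render to a file without a display
import matplotlib.pyplot as plt

def draw_positions(xs, ys, out_path="learned_positions.png"):
    """Scatter-plot learned synapse offsets around the kernel center."""
    fig, ax = plt.subplots(figsize=(4, 4))
    ax.scatter(xs, ys, marker="x")
    ax.axhline(0.0, color="gray", linewidth=0.5)
    ax.axvline(0.0, color="gray", linewidth=0.5)
    ax.set_xlabel("x offset")
    ax.set_ylabel("y offset")
    fig.savefig(out_path)
    plt.close(fig)
    return out_path

# Example: four synapses of a learned, roughly cross-shaped kernel.
draw_positions([-0.7, 0.8, 0.1, -0.05], [0.02, -0.1, -0.9, 0.85])
```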
