
wenwei202 / Caffe

License: other
Caffe for Sparse and Low-rank Deep Neural Networks

Projects that are alternatives to, or similar to, Caffe

Compression
Data compression in TensorFlow
Stars: ✭ 458 (+35.1%)
Mutual labels:  deep-neural-networks, compression
All Classifiers 2019
A collection of computer vision projects for Acute Lymphoblastic Leukemia classification/early detection.
Stars: ✭ 22 (-93.51%)
Mutual labels:  deep-neural-networks, caffe
Aimet
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
Stars: ✭ 453 (+33.63%)
Mutual labels:  deep-neural-networks, compression
Nideep
collection of utilities to use with deep learning libraries (e.g. caffe)
Stars: ✭ 25 (-92.63%)
Mutual labels:  deep-neural-networks, caffe
Caffe2 Ios
Caffe2 on iOS Real-time Demo. Test with Your Own Model and Photos.
Stars: ✭ 221 (-34.81%)
Mutual labels:  deep-neural-networks, caffe
Cascaded Fcn
Source code for the MICCAI 2016 paper "Automatic Liver and Lesion Segmentation in CT Using Cascaded Fully Convolutional Neural Networks and 3D Conditional Random Fields"
Stars: ✭ 296 (-12.68%)
Mutual labels:  deep-neural-networks, caffe
Ffdl
Fabric for Deep Learning (FfDL, pronounced fiddle) is a Deep Learning Platform offering TensorFlow, Caffe, PyTorch etc. as a Service on Kubernetes
Stars: ✭ 640 (+88.79%)
Mutual labels:  deep-neural-networks, caffe
Mobilnet ssd opencv
MobilNet-SSD object detection in opencv 3.4.1
Stars: ✭ 64 (-81.12%)
Mutual labels:  deep-neural-networks, caffe
Snn toolbox
Toolbox for converting analog to spiking neural networks (ANN to SNN), and running them in a spiking neuron simulator.
Stars: ✭ 187 (-44.84%)
Mutual labels:  deep-neural-networks, caffe
Compressai
A PyTorch library and evaluation platform for end-to-end compression research
Stars: ✭ 246 (-27.43%)
Mutual labels:  deep-neural-networks, compression
Tensorflow Open nsfw
Tensorflow Implementation of Yahoo's Open NSFW Model
Stars: ✭ 338 (-0.29%)
Mutual labels:  deep-neural-networks, caffe
Compress Images
Minify the size of your images. Image compression for the extensions jpg/jpeg, svg, png, and gif. Node.js
Stars: ✭ 331 (-2.36%)
Mutual labels:  compression
Textspotter
Stars: ✭ 323 (-4.72%)
Mutual labels:  caffe
Compress
Collection of compression related Go packages.
Stars: ✭ 319 (-5.9%)
Mutual labels:  compression
Mobilenet Ssd Realsense
[High Performance / MAX 30 FPS] RaspberryPi3(RaspberryPi/Raspbian Stretch) or Ubuntu + Multi Neural Compute Stick(NCS/NCS2) + RealSense D435(or USB Camera or PiCamera) + MobileNet-SSD(MobileNetSSD) + Background Multi-transparent(Simple multi-class segmentation) + FaceDetection + MultiGraph + MultiProcessing + MultiClustering
Stars: ✭ 322 (-5.01%)
Mutual labels:  caffe
Keras Mmoe
A Keras implementation of "Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts" (KDD 2018)
Stars: ✭ 332 (-2.06%)
Mutual labels:  deep-neural-networks
Simdcomp
A simple C library for compressing lists of integers using binary packing
Stars: ✭ 331 (-2.36%)
Mutual labels:  compression
Largemargin softmax loss
Implementation for <Large-Margin Softmax Loss for Convolutional Neural Networks> in ICML'16.
Stars: ✭ 319 (-5.9%)
Mutual labels:  caffe
Ai Deadlines
⏰ AI conference deadline countdowns
Stars: ✭ 3,852 (+1036.28%)
Mutual labels:  deep-neural-networks
Bytenet Tensorflow
ByteNet for character-level language modelling
Stars: ✭ 319 (-5.9%)
Mutual labels:  deep-neural-networks

ABOUT

Repo summary

Lower-rank deep neural networks (ICCV 2017)

Paper: Coordinating Filters for Faster Deep Neural Networks.

The poster is available.

Source code is in this master branch.

Sparse Deep Neural Networks (NIPS 2016)

See the source code in the scnn branch.

(NIPS 2017 Oral) Ternary Gradients to Reduce Communication in Distributed Deep Learning

A technique to accelerate training; see the code.

Direct sparse convolution and guided pruning (ICLR 2017)

Originally in the intel branch, but merged into IntelLabs/SkimCaffe, with contributions also by @jspark1105.

Caffe version

The master branch is based on caffe @ commit eb4ba30.

Lower-rank deep neural networks (ICCV 2017)

Tutorials on using Python to decompose DNNs into low-rank space are here.
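As a rough sketch of what the decomposition produces (the layer names, blob names, and retained rank here are hypothetical, not the tutorial's exact output), a rank-32 factorization of a 500-output InnerProduct layer replaces it with two thinner layers whose weight matrices multiply to approximate the original:

# Original layer (hypothetical):
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  inner_product_param { num_output: 500 }
}

# After rank-32 decomposition: a thin bottleneck followed by a projection
# back to the original 500 outputs.
layer {
  name: "ip1_lowrank"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1_lowrank"
  inner_product_param { num_output: 32 }  # the retained rank
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "ip1_lowrank"
  top: "ip1"
  inner_product_param { num_output: 500 }
}

The two factors are typically initialized from a truncated SVD of the trained weight matrix; the lower the rank that force regularization achieves, the thinner this bottleneck can be at the same approximation error.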

If you have any problems, bugs, or questions, you are welcome to open an issue and we will respond as soon as possible.

Details of Force Regularization are in the paper: Coordinating Filters for Faster Deep Neural Networks.

Training with Force Regularization for Lower-rank DNNs

It is easy to use the code to train DNNs toward lower-rank versions. Only three additional protobuf configurations are required:

  1. force_decay in SolverParameter: specified in the solver. The coefficient that trades off accuracy against ranks: the larger force_decay is, the smaller the ranks and, usually, the lower the accuracy.
  2. force_type in SolverParameter: specified in the solver. The type of force used to coordinate filters. Degradation - the strength of the pairwise attractive force decreases as the distance decreases; this is the L2-norm force in the paper. Constant - the strength of the pairwise attractive force stays constant regardless of the distance; this is the L1-norm force in the paper.
  3. force_mult in ParamSpec: specified for the weight param of each layer. The local multiplier of force_decay for the filters in a specific layer, i.e., force_mult * force_decay is the final coefficient for that layer. Set force_mult: 0.0 to disable force regularization in any layer; see the sketch just below.
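
For example, a minimal sketch of how a layer in the net prototxt could apply the force to its weights but not its biases (the layer is the stock LeNet conv1; the force_mult values are illustrative):

layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
    force_mult: 1.0  # final coefficient for the filters: 1.0 * force_decay
  }
  param {
    lr_mult: 2
    force_mult: 0.0  # no force regularization on the biases
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}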

See details and implementations in caffe.proto and SGDSolver.

Examples

An example of training LeNet with L1-norm force regularization:

##############################################################\
# The train/test net with local force decay multiplier       
net: "examples/mnist/lenet_train_test_force.prototxt"        
##############################################################/

test_iter: 100
test_interval: 500
# The base learning rate. For large-scale DNNs, you might try 0.1x the base_lr used to train the original DNN from scratch.
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005

##############################################################\
# The coefficient of force regularization.                   
# The hyper-parameter to tune to make trade-off              
force_decay: 0.001                                           
# The type of force - L1-norm force                          
force_type: "Constant"                                       
##############################################################/

# The learning rate policy
lr_policy: "multistep"
gamma: 0.9
stepvalue: 5000
stepvalue: 7000
stepvalue: 8000
stepvalue: 9000
stepvalue: 9500
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 10000
# snapshot intermediate results
snapshot: 5000
snapshot_prefix: "examples/mnist/lower_rank_lenet"
snapshot_format: HDF5
solver_mode: GPU

Retraining a trained DNN with force regularization may give better results than training it from scratch.
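
A minimal sketch of such a retraining run with the standard caffe tool (the solver and weights paths are placeholders):

# Initialize from trained weights instead of a random initialization.
./build/tools/caffe train \
    --solver=examples/mnist/lenet_solver_force.prototxt \
    --weights=examples/mnist/lenet_pretrained.caffemodel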

Hyperparameter

We included the hyperparameter "lambda_s" used for AlexNet in Figure 6.

Some open research topics

Force Regularization can squeeze/coordinate weight information into a much lower-rank space, but after low-rank decomposition with the same approximation precision, it is more challenging to recover accuracy in the resulting, much more lightweight DNNs.

License and Citation

Please cite our ICCV paper and Caffe if they are useful for your research:

@InProceedings{Wen_2017_ICCV,
  author = {Wen, Wei and Xu, Cong and Wu, Chunpeng and Wang, Yandan and Chen, Yiran and Li, Hai},
  title = {Coordinating Filters for Faster Deep Neural Networks},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month = {October},
  year = {2017}
}

Caffe is released under the BSD 2-Clause license. The BVLC reference models are released for unrestricted use.

Please cite Caffe in your publications if it helps your research:

@article{jia2014caffe,
  Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
  Journal = {arXiv preprint arXiv:1408.5093},
  Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
  Year = {2014}
}