dicecco1 / fpga_caffe

License: other
No description or website provided.

Programming Languages

C++: 36643 projects (#6 most used programming language)
Python: 139335 projects (#7 most used programming language)
CUDA: 1817 projects
CMake: 9771 projects
MATLAB: 3953 projects
Makefile: 30231 projects

Projects that are alternatives to or similar to fpga_caffe

Sdaccel examples
SDAccel Examples
Stars: ✭ 325 (+180.17%)
Mutual labels:  fpga, opencl
Tornadovm
TornadoVM: A practical and efficient heterogeneous programming framework for managed languages
Stars: ✭ 479 (+312.93%)
Mutual labels:  fpga, opencl
spector
Spector: An OpenCL FPGA Benchmark Suite
Stars: ✭ 38 (-67.24%)
Mutual labels:  fpga, opencl
dcurl
Hardware-accelerated Multi-threaded IOTA PoW, drop-in replacement for ccurl
Stars: ✭ 39 (-66.38%)
Mutual labels:  fpga, opencl
caffe-android-opencl-fp16
Optimised Caffe with OpenCL support for less powerful devices such as mobile phones
Stars: ✭ 17 (-85.34%)
Mutual labels:  caffe, opencl
John
John the Ripper jumbo - advanced offline password cracker, which supports hundreds of hash and cipher types, and runs on many operating systems, CPUs, GPUs, and even some FPGAs
Stars: ✭ 5,656 (+4775.86%)
Mutual labels:  fpga, opencl
Trisycl
Generic system-wide modern C++ for heterogeneous platforms with SYCL from Khronos Group
Stars: ✭ 354 (+205.17%)
Mutual labels:  fpga, opencl
Ck Caffe
Collective Knowledge workflow for Caffe to automate installation across diverse platforms and to collaboratively evaluate and optimize Caffe-based workloads across diverse hardware, software and data sets (compilers, libraries, tools, models, inputs):
Stars: ✭ 192 (+65.52%)
Mutual labels:  caffe, opencl
Tf2
An Open Source Deep Learning Inference Engine Based on FPGA
Stars: ✭ 113 (-2.59%)
Mutual labels:  fpga, opencl
Pipecnn
An OpenCL-based FPGA Accelerator for Convolutional Neural Networks
Stars: ✭ 775 (+568.1%)
Mutual labels:  fpga, opencl
Haddoc2
Caffe to VHDL
Stars: ✭ 57 (-50.86%)
Mutual labels:  caffe, fpga
opencl-hls-cnn-accelerator
OpenCL HLS based CNN Accelerator on Intel DE10 Nano FPGA.
Stars: ✭ 49 (-57.76%)
Mutual labels:  fpga, opencl
AutoSA
AutoSA: Polyhedral-Based Systolic Array Compiler
Stars: ✭ 120 (+3.45%)
Mutual labels:  fpga
caffe-wrn-generator
Caffe Wide-Residual-Network (WRN) Generator
Stars: ✭ 19 (-83.62%)
Mutual labels:  caffe
Yune
GPU based framework for writing Raytracers/Pathtracers. (Pronounced as "Yu-nay")
Stars: ✭ 64 (-44.83%)
Mutual labels:  opencl
docker-fpga
Dockerized FPGA toolchain experiments
Stars: ✭ 18 (-84.48%)
Mutual labels:  fpga
fpga-virtual-console
VT220-compatible console on Cyclone IV EP4CE55F23I7
Stars: ✭ 33 (-71.55%)
Mutual labels:  fpga
PothosZynq
DMA source and sink blocks for Xilinx Zynq FPGAs
Stars: ✭ 19 (-83.62%)
Mutual labels:  fpga
apollo
microcontroller-based FPGA / JTAG programmer
Stars: ✭ 32 (-72.41%)
Mutual labels:  fpga
pdp6
PDP-6 Emulator
Stars: ✭ 47 (-59.48%)
Mutual labels:  fpga

FPGA Caffe

This is a version of Caffe with FPGA kernels for the forward and backward passes of convolution, ReLU, max pooling, and inner product layers. These kernels target the Xilinx SDAccel OpenCL environment. They use custom-precision floating-point arithmetic to save area and improve throughput, while also allowing experimentation with different floating-point precisions and rounding modes for training and inference with CNNs.

Infrastructure has been added to facilitate the use of Xilinx SDAccel kernels within Caffe, while keeping the use of an FPGA essentially seamless to outside users (aside from some additional layers required to program the device).

Most of the custom-precision floating-point results were gathered with SDAccel 2016.3. Later versions of SDAccel should work too, though low-precision multipliers do not seem to map well to DSPs in 2017.1; to work around this, use the 3-input multiplier implementation of the crp layer.

License and Citation

The license for this project is the same as that of the original Caffe implementation. Our initial paper related to this work can be found at: http://ieeexplore.ieee.org/document/7929549/

Citation:

@INPROCEEDINGS{7929549,
  author={R. DiCecco and G. Lacey and J. Vasiljevic and P. Chow and G. Taylor and S. Areibi},
  booktitle={2016 International Conference on Field-Programmable Technology (FPT)},
  title={Caffeinated FPGAs: FPGA framework For Convolutional Neural Networks},
  year={2016},
  pages={265-268},
  keywords={Computational modeling;Convolution;Field programmable gate arrays;Graphics processing units;Kernel;Parallel processing;Pipelines},
  doi={10.1109/FPT.2016.7929549},
  month={Dec},
}

The work related to the Custom-Precision Floating-Point Training is set to appear at FPT2017. The repository for the Custom-Precision Floating-Point library is at: https://github.com/dicecco1/fpga_cpfp.

Build Instructions

In Makefile.config, set USE_OCL := 1 and CPU_ONLY := 1 (CPU_ONLY won't be necessary soon), then run make all.
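
As a concrete sketch, the relevant edits and build command look roughly like this. Only the two flags above come from this README; the rest of Makefile.config is assumed to follow the standard Caffe template, and the -j option is just an optional parallel build:

# Makefile.config (excerpt)
USE_OCL := 1
CPU_ONLY := 1   # currently required, as noted above

# then, from the repository root
make all -j8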

To build the standalone FPGA tests, run make testfpga. These tests may not always pass, depending on the specified precision level, because they compare against a single-precision reference.
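
For example (make testfpga is the only command this README specifies; the note on precision is repeated as a comment):

# build the standalone FPGA tests
make testfpga
# failures at low precision settings are expected, since outputs are
# checked against a single-precision reference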

To build FPGA layers (or add new layers), run make -f layer.mk KERNEL_NAME=YOUR_KERNEL_NAME in src/fpga_caffe/layers/. The kernel name currently has to match the .cpp file name (e.g. the crp_layer_hwcn_cpfp kernel has a .cpp file named crp_layer_hwcn_cpfp.cpp). After the xclbins have been generated, copy them to .build_release/opencl/src/caffe/layers/.
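
A minimal sketch of this flow, using the crp_layer_hwcn_cpfp kernel named above (the exact name and location of the generated xclbin depend on the SDAccel flow and are an assumption here):

# from the repository root: build one FPGA layer kernel
cd src/fpga_caffe/layers/
make -f layer.mk KERNEL_NAME=crp_layer_hwcn_cpfp

# copy the generated xclbin next to the Caffe layers
# (output filename assumed; adjust to whatever the SDAccel flow produces)
cp crp_layer_hwcn_cpfp.xclbin ../../../.build_release/opencl/src/caffe/layers/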

Caffe


Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR)/The Berkeley Vision and Learning Center (BVLC) and community contributors.

Check out the project site for all the details and step-by-step examples.

Community

Join the chat at https://gitter.im/BVLC/caffe

Please join the caffe-users group or gitter chat to ask questions and talk about methods and models. Framework development discussions and thorough bug reports are collected on Issues.

Happy brewing!

License and Citation

Caffe is released under the BSD 2-Clause license. The BAIR/BVLC reference models are released for unrestricted use.

Please cite Caffe in your publications if it helps your research:

@article{jia2014caffe,
  Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
  Journal = {arXiv preprint arXiv:1408.5093},
  Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
  Year = {2014}
}