Xilinx / Finn

License: BSD-3-Clause
Dataflow compiler for QNN inference on FPGAs

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Finn

Blueoil
Bring Deep Learning to small devices
Stars: ✭ 244 (-14.08%)
Mutual labels:  fpga, quantization
Brevitas
Brevitas: quantization-aware training in PyTorch
Stars: ✭ 343 (+20.77%)
Mutual labels:  fpga, quantization
DFiant
DFiant: A Dataflow Hardware Description Language
Stars: ✭ 21 (-92.61%)
Mutual labels:  fpga, dataflow
Tf2
An Open Source Deep Learning Inference Engine Based on FPGA
Stars: ✭ 113 (-60.21%)
Mutual labels:  fpga, quantization
Reduceron
FPGA Haskell machine with game-changing performance. Reduceron is Matthew Naylor, Colin Runciman and Jason Reich's high-performance FPGA softcore for running lazy functional programs, including hardware garbage collection. Reduceron has been implemented on various FPGAs with clock frequencies ranging from 60 to 150 MHz depending on the FPGA. A high degree of parallelism allows Reduceron to implement graph evaluation very efficiently. This fork aims to continue development on this, with a view to practical applications. Comments, questions, etc. are welcome.
Stars: ✭ 308 (+8.45%)
Mutual labels:  compiler, fpga
Orcc
Open RVC-CAL Compiler
Stars: ✭ 26 (-90.85%)
Mutual labels:  compiler, dataflow
Qkeras
QKeras: a quantization deep learning library for TensorFlow Keras
Stars: ✭ 254 (-10.56%)
Mutual labels:  fpga, quantization
Clang
Mirror kept for legacy. Moved to https://github.com/llvm/llvm-project
Stars: ✭ 2,880 (+914.08%)
Mutual labels:  compiler
Clangwarnings.com
A list of Clang warnings and their descriptions.
Stars: ✭ 276 (-2.82%)
Mutual labels:  compiler
Smlvm
Smallrepo Virtual Machine
Stars: ✭ 265 (-6.69%)
Mutual labels:  compiler
Gleam
⭐️ A friendly language for building type-safe, scalable systems!
Stars: ✭ 3,463 (+1119.37%)
Mutual labels:  compiler
Pyverilog
Python-based Hardware Design Processing Toolkit for Verilog HDL
Stars: ✭ 267 (-5.99%)
Mutual labels:  compiler
Write You A Haskell
Building a modern functional compiler from first principles. (http://dev.stephendiehl.com/fun/)
Stars: ✭ 3,064 (+978.87%)
Mutual labels:  compiler
Seq
A high-performance, Pythonic language for bioinformatics
Stars: ✭ 263 (-7.39%)
Mutual labels:  compiler
Fastor
A lightweight high performance tensor algebra framework for modern C++
Stars: ✭ 280 (-1.41%)
Mutual labels:  fpga
Openfpgaloader
Universal utility for programming FPGA
Stars: ✭ 264 (-7.04%)
Mutual labels:  fpga
C Compiler
A C-- compiler implementing LL(1)/LR(0)/SLR/LR(1) parsing, semantic analysis, and MIPS code generation
Stars: ✭ 286 (+0.7%)
Mutual labels:  compiler
Openpiton
The OpenPiton Platform
Stars: ✭ 282 (-0.7%)
Mutual labels:  fpga
Vult
Vult is a transcompiler well suited to writing high-performance DSP code
Stars: ✭ 272 (-4.23%)
Mutual labels:  compiler
Cores
Various HDL (Verilog) IP Cores
Stars: ✭ 271 (-4.58%)
Mutual labels:  fpga

Fast, Scalable Quantized Neural Network Inference on FPGAs


FINN is an experimental framework from Xilinx Research Labs to explore deep neural network inference on FPGAs. It specifically targets quantized neural networks, with an emphasis on generating dataflow-style architectures customized for each network. The resulting FPGA accelerators are highly efficient and can yield high throughput and low latency. The framework is fully open source to give users a high degree of flexibility, and is intended to enable neural network research spanning several layers of the software/hardware abstraction stack.
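
To give a concrete sense of how the compiler operates on a network, below is a minimal sketch that loads a quantized ONNX model and applies a few of FINN's graph cleanup transformations. The import paths follow the FINN 0.5b-era package layout and the file names are placeholders, so treat the details as assumptions and check them against the documentation for your release.

# Minimal sketch of FINN's transformation-based flow (0.5b-era import paths;
# verify against the installed version). "model.onnx" stands in for a
# quantized network exported e.g. from Brevitas.
from finn.core.modelwrapper import ModelWrapper
from finn.transformation.fold_constants import FoldConstants
from finn.transformation.general import GiveReadableTensorNames, GiveUniqueNodeNames
from finn.transformation.infer_datatypes import InferDataTypes
from finn.transformation.infer_shapes import InferShapes

model = ModelWrapper("model.onnx")

# Each transform() call returns a new ModelWrapper; chaining them tidies the
# graph before the hardware-oriented conversion and folding steps.
model = model.transform(InferShapes())
model = model.transform(FoldConstants())
model = model.transform(GiveUniqueNodeNames())
model = model.transform(GiveReadableTensorNames())
model = model.transform(InferDataTypes())

model.save("model_prepared.onnx")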

We have a separate repository finn-examples that houses pre-built examples for several neural networks. For more general information about FINN, please visit the project page and check out the publications.

Getting Started

Please see the Getting Started page for more information on requirements, installation, and how to run FINN in different modes. Due to the complex nature of the dependencies of the project, we only support Docker-based execution of the FINN compiler at this time.
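
As an illustration of what running the compiler from inside the Docker container can look like, here is a hedged sketch using the Python dataflow builder API. The module paths, configuration fields, and the tfc_w1a1.onnx input file are assumptions based on the 0.5b-era layout; consult the Getting Started page for the exact interface of your release.

# Hedged sketch: drive an end-to-end dataflow build from Python.
# Module paths, config fields and the input model name are assumptions.
import finn.builder.build_dataflow as build
import finn.builder.build_dataflow_config as build_cfg

cfg = build_cfg.DataflowBuildConfig(
    output_dir="output_tfc_w1a1",   # reports, stitched IP and bitfiles land here
    target_fps=100000,              # desired throughput, used to pick folding factors
    synth_clk_period_ns=10.0,       # 100 MHz target clock
    board="Pynq-Z1",                # example target platform
    shell_flow_type=build_cfg.ShellFlowType.VIVADO_ZYNQ,
    generate_outputs=[
        build_cfg.DataflowOutputType.ESTIMATE_REPORTS,
        build_cfg.DataflowOutputType.BITFILE,
    ],
)

build.build_dataflow_cfg("tfc_w1a1.onnx", cfg)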

What's New in FINN?

  • 2020-12-17: v0.5b (beta) is released, with a new examples repo including MobileNet-v1. Read more on the release blog post.
  • 2020-09-21: v0.4b (beta) is released. Read more on the release blog post.
  • 2020-05-08: v0.3b (beta) is released, with initial support for convolutions, parallel transformations, more flexible memory allocation for MVAUs, throughput testing and many other smaller improvements and bugfixes. Read more on the release blog post.
  • 2020-04-15: FINN v0.2.1b (beta) is released: it uses fixed commit versions for the dependency repos, but is otherwise identical to v0.2b.
  • 2020-02-28: FINN v0.2b (beta) is released, which is a clean-slate reimplementation of the framework. Currently only fully-connected networks are supported for the end-to-end flow. Please see the release blog post for a summary of the key features.

Documentation

You can view the documentation on readthedocs or build it locally using python setup.py doc from inside the Docker container. Additionally, there is a series of Jupyter notebook tutorials, which we recommend running from inside Docker for a better experience.

Community

We have a Gitter channel where you can ask questions. You can use the GitHub issue tracker to report bugs, but please don't file issues to ask questions; those are better handled in the Gitter channel.

We also heartily welcome contributions to the project; please check out the contribution guidelines and the list of open issues. Don't hesitate to get in touch over Gitter to discuss your ideas.

Citation

The current implementation of the framework is based on the following publications. Please consider citing them if you find FINN useful.

@article{blott2018finn,
  title={FINN-R: An end-to-end deep-learning framework for fast exploration of quantized neural networks},
  author={Blott, Michaela and Preu{\ss}er, Thomas B and Fraser, Nicholas J and Gambardella, Giulio and O'Brien, Kenneth and Umuroglu, Yaman and Leeser, Miriam and Vissers, Kees},
  journal={ACM Transactions on Reconfigurable Technology and Systems (TRETS)},
  volume={11},
  number={3},
  pages={1--23},
  year={2018},
  publisher={ACM New York, NY, USA}
}

@inproceedings{finn,
  author = {Umuroglu, Yaman and Fraser, Nicholas J. and Gambardella, Giulio and Blott, Michaela and Leong, Philip and Jahre, Magnus and Vissers, Kees},
  title = {FINN: A Framework for Fast, Scalable Binarized Neural Network Inference},
  booktitle = {Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays},
  series = {FPGA '17},
  year = {2017},
  pages = {65--74},
  publisher = {ACM}
}

Old version

We previously released an early-stage prototype of a toolflow that took in Caffe-HWGQ binarized network descriptions and produced dataflow architectures. You can find it in the v0.1 branch of this repository. Please be aware that this version is deprecated and unsupported, and that the master branch does not share history with it, so it should be treated as a separate repository for all purposes.
