
QPyTorch

License: MIT

News:

  • Updated to version 0.3.0:
    • Subnormal numbers are now supported (#43). Thanks to @danielholanda for the contribution!
  • Updated to version 0.2.0:
    • Bug fixed: previously, in our floating point quantization, numbers closer to 0 than the smallest representable positive number were always rounded up to that smallest positive number. We now round to 0 or to the smallest representable positive number, whichever is nearer.
    • Different behavior: to be consistent with PyTorch Issue #17443, we now round to nearest even.
    • We migrated to PyTorch 1.5.0. There are several changes in PyTorch's C++ API, so this version is not backward-compatible with older PyTorch releases.
    • Note: if you are using CUDA 10.1, please install CUDA 10.1 Update 1 (or later). The first release of CUDA 10.1 has a bug that leads to compilation errors.
    • Note: previous users should remove the cached builds in the PyTorch extension directory. For example, run rm -rf /tmp/torch_extensions/quant_cuda /tmp/torch_extensions/quant_cpu if you are using the default directory for PyTorch extensions.
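
The near-zero rounding change in 0.2.0 can be illustrated with a plain-Python sketch (not QPyTorch code); here `smallest` stands in for the smallest representable positive magnitude of a given format:

```python
def round_near_zero_old(x, smallest):
    """Pre-0.2.0 behavior: a positive value below `smallest` always snapped up to it."""
    if 0 < x < smallest:
        return smallest
    return x

def round_near_zero_new(x, smallest):
    """0.2.0 behavior: snap to 0 or `smallest`, whichever is nearer."""
    if 0 < x < smallest:
        return 0.0 if x < smallest / 2 else smallest
    return x

smallest = 2 ** -14  # e.g. the smallest positive normal magnitude of some format
tiny = smallest / 4  # much closer to 0 than to `smallest`

print(round_near_zero_old(tiny, smallest))  # 2**-14: always rounded up
print(round_near_zero_new(tiny, smallest))  # 0.0: zero is nearer
```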

Overview

QPyTorch is a low-precision arithmetic simulation package in PyTorch. It is designed to support research on low-precision machine learning, especially low-precision training. A more comprehensive write-up can be found here.

Notably, QPyTorch supports quantizing the different numbers in the training process (e.g., weights, activations, gradients) with customized low-precision formats. This eases the investigation of different precision settings and the development of new deep learning architectures. More concretely, QPyTorch implements fused kernels for quantization and integrates smoothly with existing PyTorch kernels (e.g., matrix multiplication, convolution).
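
As an illustration of what such a quantizer computes elementwise, here is a plain-Python sketch of signed fixed-point quantization with round-to-nearest (an illustrative model, not QPyTorch's actual API or its fused kernels):

```python
def fixed_point_quantize(x, wl, fl):
    """Snap x to a signed fixed-point grid with wl total bits and fl fractional bits."""
    step = 2.0 ** -fl                        # smallest representable increment
    max_val = (2.0 ** (wl - 1) - 1) * step   # largest representable value
    min_val = -(2.0 ** (wl - 1)) * step      # most negative representable value
    q = round(x / step) * step               # round to the nearest grid point
    return max(min_val, min(max_val, q))     # clamp to the representable range

# 8-bit word with 4 fractional bits: step = 1/16, range [-8, 7.9375]
print(fixed_point_quantize(0.30, wl=8, fl=4))   # 0.3125 (nearest multiple of 1/16)
print(fixed_point_quantize(100.0, wl=8, fl=4))  # 7.9375 (clamped to the maximum)
```

QPyTorch applies this kind of mapping to whole tensors inside fused kernels, so the quantization step composes with ordinary PyTorch operations.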

Recent research can be reimplemented easily through QPyTorch. We offer an example replication of WAGE in the downstream repo WAGE, and we provide a list of working examples under Examples.

Note: QPyTorch relies on PyTorch functions for the underlying computation, such as matrix multiplication. This means that the actual computation is done in single precision. Therefore, QPyTorch is not intended to be used to study the numerical behavior of different accumulation strategies.

Note: as of now, QPyTorch has a different rounding mode than PyTorch: QPyTorch rounds away from zero, while PyTorch rounds to nearest even. This creates a discrepancy between PyTorch half-precision tensors and QPyTorch's simulation of half-precision numbers.
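
The discrepancy shows up exactly at ties, i.e. values halfway between two representable numbers. A plain-Python sketch (not QPyTorch code) of the two tie-breaking rules on a uniform grid of spacing `step`:

```python
import math

def round_away_from_zero(x, step):
    """Round x to a multiple of step, breaking ties away from zero (QPyTorch's mode)."""
    q = x / step
    return (math.floor(q + 0.5) if x >= 0 else math.ceil(q - 0.5)) * step

def round_nearest_even(x, step):
    """Round x to a multiple of step, breaking ties to even (PyTorch's mode)."""
    return round(x / step) * step  # Python's built-in round() already ties to even

step = 2 ** -10       # spacing of half-precision values in [1, 2)
x = 1.0 + 2.5 * step  # exactly halfway between two grid points

print(round_away_from_zero(x, step))  # 1 + 3*step: tie broken upward
print(round_nearest_even(x, step))    # 1 + 2*step: tie broken to the even multiple
```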

If you find this repo useful, please cite:

@misc{zhang2019qpytorch,
    title={QPyTorch: A Low-Precision Arithmetic Simulation Framework},
    author={Tianyi Zhang and Zhiqiu Lin and Guandao Yang and Christopher De Sa},
    year={2019},
    eprint={1910.04540},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

Installation

Requirements:

  • Python >= 3.6
  • PyTorch >= 1.5.0
  • GCC >= 4.9 on Linux
  • CUDA >= 10.1 on Linux

Install other requirements by:

pip install -r requirements.txt

Install QPyTorch through pip:

pip install qtorch

For more details about compiler requirements, please refer to the PyTorch extension tutorial.

Documentation

See our readthedocs page.

Tutorials

Examples

  • Low-Precision VGGs and ResNets using fixed point, block floating point on CIFAR and ImageNet. lp_train
  • Reproduction of WAGE in QPyTorch. WAGE
  • Implementation (simulation) of 8-bit Floating Point Training in QPyTorch. IBM8

Team

Other Contributors
