upul / Aurora

Licence: apache-2.0
A minimal deep learning library written in Python/Cython/C++ with Numpy/CUDA/cuDNN.



Aurora: Minimal Deep Learning Library.

Aurora is a minimal deep learning library written in Python, Cython, and C++ with the help of Numpy, CUDA, and cuDNN. Though it is simple, Aurora comes with some advanced design concepts found in a typical deep learning library.

  • Automatic differentiation using static computational graphs.
  • Shape and type inference.
  • Static memory allocation for efficient training and inference.
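The design concepts above can be illustrated with a small sketch. The code below is not Aurora's actual API; it is a minimal, self-contained example of reverse-mode automatic differentiation over a static computational graph, the core idea that also makes shape inference and static memory allocation possible (the full graph is known before execution).

```python
# Illustrative sketch (not Aurora's actual API): reverse-mode automatic
# differentiation over a small static computational graph. Each Node records
# its inputs and a rule for propagating gradients, so the whole graph is
# known before execution.

class Node:
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = value          # forward value
        self.parents = parents      # upstream nodes
        self.grad_fns = grad_fns    # map upstream gradient -> parent gradient
        self.grad = 0.0

def add(a, b):
    return Node(a.value + b.value, (a, b), (lambda g: g, lambda g: g))

def mul(a, b):
    return Node(a.value * b.value, (a, b),
                (lambda g: g * b.value, lambda g: g * a.value))

def backward(out):
    """Propagate gradients from `out` back through the static graph."""
    out.grad = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        for parent, grad_fn in zip(node.parents, node.grad_fns):
            parent.grad += grad_fn(node.grad)
            stack.append(parent)

x = Node(3.0)
y = Node(4.0)
z = add(mul(x, y), x)   # z = x*y + x
backward(z)
print(x.grad, y.grad)   # dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
```

A real library builds the same kind of graph with tensor-valued nodes, and uses the statically known graph to pre-compute output shapes and reuse buffers.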

Installation

Aurora relies on several external libraries, including CUDA, cuDNN, and NumPy. For CUDA and cuDNN installation instructions, please refer to the official documentation. Python dependencies can be installed from the requirements.txt file (pip install -r requirements.txt).

Environment setup

To utilize the GPU capabilities of the Aurora library, you need an NVIDIA GPU. If the CUDA toolkit is not already installed, first install the latest version of the CUDA toolkit as well as the cuDNN library. Next, set the following environment variables.

export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda/bin:$PATH
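Before building the backend, it can help to confirm the variables above took effect. The helper below is an illustrative sketch (not part of Aurora) that reports common misconfigurations.

```python
# Sanity-check the CUDA environment before building Aurora's GPU backend.
# Illustrative helper only; not part of the Aurora library.
import os
import shutil

def check_cuda_env():
    """Return a list of problems with the CUDA environment (empty if OK)."""
    problems = []
    if shutil.which("nvcc") is None:
        problems.append("nvcc not found: is /usr/local/cuda/bin on PATH?")
    if "cuda" not in os.environ.get("LD_LIBRARY_PATH", ""):
        problems.append("LD_LIBRARY_PATH does not include a CUDA lib64 directory")
    return problems

for problem in check_cuda_env():
    print("WARNING:", problem)
```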
Cloning the Repository

You can clone the Aurora repository using the following command.

git clone https://github.com/upul/Aurora.git

Building the GPU Backend

Next, you need to build the GPU backend. Go to the cuda directory and run the make command as shown below.

  1. Go to cuda directory (cd cuda)
  2. Run make
Installing the Library

Go to Aurora directory and run:

  1. pip install -r requirements.txt
  2. pip install .

Examples

The following lists some notable examples. For the complete list of examples, please refer to the examples directory. For Jupyter notebooks, please refer to the examples/notebooks folder.

  1. mnist
  2. mnist_cnn
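To give a sense of what the mnist example computes, here is a plain-NumPy sketch of training a softmax classifier with cross-entropy loss on synthetic data. This is illustrative only; the real example uses Aurora's graph API and the actual MNIST dataset.

```python
# Plain-NumPy sketch of softmax-regression training (illustrative only;
# the mnist example does this through Aurora's computational graph).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 784))        # synthetic "images"
labels = rng.integers(0, 10, size=64)     # synthetic digit labels
Y = np.eye(10)[labels]                    # one-hot targets

W = np.zeros((784, 10))
lr = 0.1
for _ in range(100):
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    grad = X.T @ (probs - Y) / len(X)             # softmax cross-entropy gradient
    W -= lr * grad

accuracy = ((X @ W).argmax(axis=1) == labels).mean()
```

The mnist_cnn example replaces the single linear layer with convolutional layers, which is where the CUDA/cuDNN backend pays off.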

Future Work

The following features will be added in upcoming releases.

  • Dropout and Batch Normalization.
  • High-level API similar to Keras.
  • Ability to load pre-trained models.
  • Model checkpointing.

Acknowledgement

It all started with the CSE 599G1: Deep Learning System Design course. This course really helped me understand the fundamentals of deep learning system design. My answers to the two programming assignments of CSE 599G1 were the foundation of the Aurora library. So I would like to acknowledge with much appreciation the instructors and teaching assistants of the CSE 599G1 course.

References

  1. CSE 599G1: Deep Learning System Design
  2. MXNet Architecture
  3. Parallel Programming With CUDA | Udacity
  4. Programming Massively Parallel Processors, Third Edition: A Hands-on Approach 3rd Edition