
njm18 / Gmatrix

Licence: other
R package for unleashing the power of NVIDIA GPUs


Projects that are alternatives of or similar to Gmatrix

Nv Wavenet
Reference implementation of real-time autoregressive wavenet inference
Stars: ✭ 681 (+4156.25%)
Mutual labels:  cuda
Ethereum nvidia miner
💰 USB flash drive ISO image for Ethereum, Zcash and Monero mining with NVIDIA graphics cards and Ubuntu GNU/Linux (headless)
Stars: ✭ 772 (+4725%)
Mutual labels:  cuda
Scikit Cuda
Python interface to GPU-powered libraries
Stars: ✭ 803 (+4918.75%)
Mutual labels:  cuda
Cuda Convnet2
Automatically exported from code.google.com/p/cuda-convnet2
Stars: ✭ 690 (+4212.5%)
Mutual labels:  cuda
Juice
The Hacker's Machine Learning Engine
Stars: ✭ 743 (+4543.75%)
Mutual labels:  cuda
Numba
NumPy aware dynamic Python compiler using LLVM
Stars: ✭ 7,090 (+44212.5%)
Mutual labels:  cuda
Mc Cnn
Stereo Matching by Training a Convolutional Neural Network to Compare Image Patches
Stars: ✭ 638 (+3887.5%)
Mutual labels:  cuda
Cudadbclustering
Clustering via Graphics Processor, using NVIDIA CUDA sdk to preform database clustering on the massively parallel graphics card processor
Stars: ✭ 6 (-62.5%)
Mutual labels:  cuda
Accelerate
Embedded language for high-performance array computations
Stars: ✭ 751 (+4593.75%)
Mutual labels:  cuda
Blocksparse
Efficient GPU kernels for block-sparse matrix multiplication and convolution
Stars: ✭ 797 (+4881.25%)
Mutual labels:  cuda
Gunrock
High-Performance Graph Primitives on GPUs
Stars: ✭ 718 (+4387.5%)
Mutual labels:  cuda
Kintinuous
Real-time large scale dense visual SLAM system
Stars: ✭ 740 (+4525%)
Mutual labels:  cuda
Pyopencl
OpenCL integration for Python, plus shiny features
Stars: ✭ 790 (+4837.5%)
Mutual labels:  cuda
Warp Ctc
Pytorch Bindings for warp-ctc
Stars: ✭ 684 (+4175%)
Mutual labels:  cuda
Pytorch Loss
label-smooth, amsoftmax, focal-loss, triplet-loss, lovasz-softmax. Maybe useful
Stars: ✭ 812 (+4975%)
Mutual labels:  cuda
Chainer
A flexible framework of neural networks for deep learning
Stars: ✭ 5,656 (+35250%)
Mutual labels:  cuda
Marian
Fast Neural Machine Translation in C++
Stars: ✭ 777 (+4756.25%)
Mutual labels:  cuda
Ddsh Tip2018
source code for paper "Deep Discrete Supervised Hashing"
Stars: ✭ 16 (+0%)
Mutual labels:  cuda
Libcudarange
An interval arithmetic and affine arithmetic library for NVIDIA CUDA
Stars: ✭ 5 (-68.75%)
Mutual labels:  cuda
Arraymancer
A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends
Stars: ✭ 793 (+4856.25%)
Mutual labels:  cuda

The "gmatrix" Package

This package implements a general framework for using R to harness the power of NVIDIA GPUs. The "gmatrix" and "gvector" classes allow for easy management of the separate device and host memory spaces. Numerous numerical operations are implemented for these objects on the GPU, including matrix multiplication, addition, subtraction, the Kronecker product, the outer product, comparison operators, logical operators, trigonometric functions, indexing, sorting, random number generation, and many more. The "gmatrix" package has only been tested and compiled on Linux machines; Windows is not currently supported, and contributions to get it working there would be welcome. In addition, the package assumes a device of at least NVIDIA compute capability 2.0, so it may not work with older devices.
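As a sketch of the workflow described above, the following assumes a working installation and a CUDA-capable device; it uses the g() and h() transfer functions documented below, and assumes the package's arithmetic operators dispatch on "gmatrix" objects as described:

```r
library(gmatrix)

# Move two host matrices to the device as "gmatrix" objects.
A <- g(matrix(rnorm(16), 4, 4))
B <- g(matrix(rnorm(16), 4, 4))

# Matrix multiplication and addition are performed on the GPU.
C <- A %*% B + B

# Copy the result back into host memory as an ordinary R matrix.
h(C)
```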

Installation Instructions

  1. Install the CUDA Toolkit. The current version of 'gmatrix' has been tested with CUDA Toolkit 5.0 and 7.0.
  2. Install R. The current version of 'gmatrix' has been tested under R 3.0.2 and 3.2.2.
  3. Start R and then install the 'gmatrix' package with the following commands. Package compilation may take 5-10 minutes.
install.packages("gmatrix")

Alternatively, to install the development version, run the following from the Linux command line:

git clone https://github.com/njm18/gmatrix.git
rm ./gmatrix/.git -rf
export MAKE="make -j7" # make the compile process use 7 threads
R CMD INSTALL gmatrix

Installation Note

By default, when compiling, the build process assumes that

  • The nvcc compiler is in the PATH, and the CUDA library files may be located based on the location of nvcc.
  • R is located in the PATH, and:
    • The R home directory may be located using the command: R RHOME
    • The R include directory may be located using the command: R --slave --no-save -e "cat(R.home('include'))".
  • The compute capability of the target device is 2.0.

If these assumptions are incorrect, the user may set these values explicitly and install using the following R command as an example.

install.packages("gmatrix",
   configure.args = "
      --with-arch=sm_30
      --with-cuda-home=/opt/cuda
      --with-r-home=/opt/R
      --with-r-include=/opt/R/include/x64"
)

Alternatively, from the command line, use a command such as:

 R CMD INSTALL gmatrix  --configure-args="--with-arch=sm_35"
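To pick the right sm_XX value for --with-arch, you need the compute capability of your device. One way to query it (this assumes a reasonably recent NVIDIA driver; older drivers may not support the compute_cap query field):

```shell
# Prints the compute capability of each installed GPU, one per line.
nvidia-smi --query-gpu=compute_cap --format=csv,noheader
# An output of "3.5" would correspond to --with-arch=sm_35.
```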

Testing the Installation

We recommend that the user test the installation using the following commands:

library(gmatrix)
gtest()

Please report any errors to the package maintainer.

Getting Started

  • Load the library for each session using: library(gmatrix)
  • To list available gpu devices use: listDevices()
  • To set the device use: setDevice()
  • To move an object to the device use: g()
  • To move an object to the host use: h()
  • Objects on the device can be manipulated in much the same way as other R objects.
  • A list of help topics may be obtained using: help(package="gmatrix")
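The steps above can be combined into a short first session. This is a sketch; the device index passed to setDevice() is an assumption for illustration, and the functions used are those listed above:

```r
library(gmatrix)   # load the package for this session

listDevices()      # show the available GPU devices
setDevice(0)       # select a device by index (assumed here to be 0)

x <- g(1:10)       # move an R vector to the device as a "gvector"
y <- x * 2         # elementwise arithmetic runs on the GPU
h(y)               # copy the result back to the host
```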