
dnouri / Cuda Convnet
=====================

My fork of Alex Krizhevsky's cuda-convnet from 2013 where I added dropout, among other features.


This is my fork of the cuda-convnet convolutional neural network implementation written by Alex Krizhevsky.

cuda-convnet has quite extensive documentation itself. Find the `main documentation here <http://code.google.com/p/cuda-convnet/>`_.

**Update:** A newer version, `cuda-convnet 2 <https://code.google.com/p/cuda-convnet2/>`_, has been released by Alex. This fork is still based on the original cuda-convnet.

Additional features
===================

This document describes only the small differences between cuda-convnet as hosted on Google Code and this fork.

Dropout
-------

Dropout is a relatively new regularization technique for neural networks. See the papers `Improving neural networks by preventing co-adaptation of feature detectors <http://arxiv.org/abs/1207.0580>`_ and `Improving Neural Networks with Dropout <http://www.cs.toronto.edu/~nitish/msc_thesis.pdf>`_ for details.

To set a dropout rate for one of our layers, we use the ``dropout`` parameter in our model's ``layer-params`` configuration file. For example, we could use dropout for the last layer in the CIFAR example by modifying the section for the ``fc10`` layer to look like so::

  [fc10]
  epsW=0.001
  epsB=0.002
  ...
  dropout=0.5

In practice, you'll probably also want to double the number of outputs in that layer.
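As a sketch of what that combination might look like, here is a hypothetical hidden fully-connected layer (the section name ``fc64``, its inputs, and all parameter values are illustrative, not taken from the shipped CIFAR example) whose output count has been doubled in the layer-definition file to compensate for the ``dropout`` rate set in the layer-params file:

```ini
# layers.cfg (layer definitions) -- hypothetical hidden layer whose
# outputs were doubled from 64 to 128 to compensate for dropout
[fc64]
type=fc
inputs=pool3
outputs=128

# layer-params.cfg -- same section name, with the dropout rate added
[fc64]
epsW=0.001
epsB=0.002
momW=0.9
momB=0.9
wc=0.004
dropout=0.5
```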

CURAND random seeding
---------------------

If set, the environment variable ``CONVNET_RANDOM_SEED`` is used to seed the CURAND library's random number generator. This is important for getting reproducible results.
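A reproducible run might then be launched like this (a sketch: the seed value is arbitrary, and the ``convnet.py`` options shown are placeholders to be replaced with your own data and layer files):

```shell
# Fix the CURAND seed so GPU-side randomness (e.g. dropout masks)
# is the same on every run; 42 is an arbitrary example value.
export CONVNET_RANDOM_SEED=42

# Hypothetical invocation -- substitute your actual paths and options.
# python convnet.py --data-path=./cifar-10 --layer-def=layers.cfg \
#     --layer-params=layer-params.cfg
```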

Updated to work with CUDA via CMake
-----------------------------------

The build configuration and code have been updated to work with CUDA via CMake. Run ``cmake .`` and then ``make``. If you have an alternative BLAS library, point CMake at it, for example with ``cmake -DBLAS_LIBRARIES=/usr/lib/libcblas.so .``.
