wichtounet / Dll

License: MIT
Fast Deep Learning Library (DLL) for C++ (ANNs, CNNs, RBMs, DBNs...)

Projects that are alternatives of or similar to Dll

Onemkl
oneAPI Math Kernel Library (oneMKL) Interfaces
Stars: ✭ 122 (-79.83%)
Mutual labels:  gpu, cpu, performance
Etl
Blazing-fast Expression Templates Library (ETL) with GPU support, in C++
Stars: ✭ 190 (-68.6%)
Mutual labels:  gpu, cpu, performance
Arrayfire
ArrayFire: a general purpose GPU library.
Stars: ✭ 3,693 (+510.41%)
Mutual labels:  gpu, performance
Stats
macOS system monitor in your menu bar
Stars: ✭ 7,134 (+1079.17%)
Mutual labels:  gpu, cpu
Yappi
Yet Another Python Profiler, but this time thread&coroutine&greenlet aware.
Stars: ✭ 595 (-1.65%)
Mutual labels:  cpu, performance
monolish
monolish: MONOlithic LInear equation Solvers for Highly-parallel architecture
Stars: ✭ 166 (-72.56%)
Mutual labels:  cpu, gpu
SilentXMRMiner
A Silent (Hidden) Monero (XMR) Miner Builder
Stars: ✭ 417 (-31.07%)
Mutual labels:  cpu, gpu
Chillout
Reduce CPU usage by non-blocking async loop and psychologically speed up in JavaScript
Stars: ✭ 565 (-6.61%)
Mutual labels:  cpu, performance
Media Watermark
GPU/CPU-based iOS Watermark Library for Image and Video Overlay
Stars: ✭ 170 (-71.9%)
Mutual labels:  gpu, cpu
H2o4gpu
H2Oai GPU Edition
Stars: ✭ 416 (-31.24%)
Mutual labels:  gpu, cpu
Scalene
Scalene: a high-performance, high-precision CPU, GPU, and memory profiler for Python
Stars: ✭ 4,819 (+696.53%)
Mutual labels:  cpu, gpu
Halide
a language for fast, portable data-parallel computation
Stars: ✭ 4,722 (+680.5%)
Mutual labels:  gpu, performance
peakperf
Achieve peak performance on x86 CPUs and NVIDIA GPUs
Stars: ✭ 33 (-94.55%)
Mutual labels:  cpu, gpu
Hdltex
HDLTex: Hierarchical Deep Learning for Text Classification
Stars: ✭ 191 (-68.43%)
Mutual labels:  gpu, convolutional-neural-networks
Komputation
Komputation is a neural network framework for the Java Virtual Machine written in Kotlin and CUDA C.
Stars: ✭ 295 (-51.24%)
Mutual labels:  gpu, convolutional-neural-networks
First Steps Towards Deep Learning
This is an open sourced book on deep learning.
Stars: ✭ 376 (-37.85%)
Mutual labels:  convolutional-neural-networks, artificial-neural-networks
React Adaptive Hooks
Deliver experiences best suited to a user's device and network constraints
Stars: ✭ 4,750 (+685.12%)
Mutual labels:  cpu, performance
Remotery
Single C file, Realtime CPU/GPU Profiler with Remote Web Viewer
Stars: ✭ 1,908 (+215.37%)
Mutual labels:  gpu, cpu
Creepminer
Burstcoin C++ CPU and GPU Miner
Stars: ✭ 169 (-72.07%)
Mutual labels:  gpu, cpu
Ilgpu
ILGPU JIT Compiler for high-performance .Net GPU programs
Stars: ✭ 374 (-38.18%)
Mutual labels:  gpu, cpu

Deep Learning Library (DLL) 1.1

|logo| |coverage| |jenkins| |license|

.. |logo| image:: logo_small.png
.. |coverage| image:: https://img.shields.io/sonar/https/sonar.baptiste-wicht.ch/dll/coverage.svg
.. |jenkins| image:: https://img.shields.io/jenkins/s/https/jenkins.baptiste-wicht.ch/dll.svg
.. |license| image:: https://img.shields.io/github/license/mashape/apistatus.svg

DLL is a library that aims to provide a C++ implementation of Restricted Boltzmann Machines (RBM) and Deep Belief Networks (DBN), as well as their convolutional versions. It also supports some more standard neural networks.

Features

  • Restricted Boltzmann Machine

    • Various units: stochastic binary, Gaussian, softmax and noisy Rectified Linear (nRLU) units

    • Contrastive Divergence and Persistent Contrastive Divergence

      • CD-1 learning by default
    • Momentum

    • Weight decay

    • Sparsity target

    • Training as a denoising autoencoder

  • Convolutional Restricted Boltzmann Machine

    • Standard version
    • Version with Probabilistic Max Pooling (Honglak Lee)
    • Binary and Gaussian visible units
    • Binary and ReLU hidden units for the standard version
    • Binary hidden units for the Probabilistic Max Pooling version
    • Training with CD-k or PCD-k (only for standard version)
    • Momentum, Weight Decay, Sparsity Target
    • Training as a denoising autoencoder
  • Deep Belief Network

    • Pretraining with RBMs
    • Fine tuning with Conjugate Gradient
    • Fine tuning with Stochastic Gradient Descent
    • Classification with SVM (libsvm)
  • Convolutional Deep Belief Network

    • Pretraining with CRBMs
    • Classification with SVM (libsvm)
  • Input data

    • Input data can be either in containers or in iterators

      • Even though iterators are supported for the SVM classifier, libsvm will copy all the data into an in-memory structure.

Building

Note: When you clone the repository, you also need to clone the submodules, using the --recursive option.

The include folder must be added to the include path with the -I option, as well as the etl/include folder.
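Assuming the repository lives at its GitHub location (wichtounet/dll) and your program is in a file named my_program.cpp (both illustrative), a typical checkout and compile might look like this:

```shell
# Clone together with submodules (ETL is pulled in as a submodule)
git clone --recursive https://github.com/wichtounet/dll.git

# If you already cloned without --recursive, fetch the submodules afterwards
cd dll && git submodule update --init --recursive && cd ..

# Header-only: just point the compiler at both include folders
g++ -std=c++14 -Idll/include -Idll/etl/include -o my_program my_program.cpp
```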

This library is completely header-only; there is no need to build it.

However, this library makes extensive use of C++11 and C++14; therefore, a recent compiler is necessary to use it. Currently, this library is only tested with g++ 9.3.0.

If for some reason it does not work on one of the supported compilers, contact me and I'll fix it. It should also work fine on recent versions of clang.

This has never been tested on Windows. While it should compile with MinGW, I don't expect Visual Studio to be able to compile it for now, although VS 2017 sounds promising. If you have problems compiling this library, I'd be glad to help, but I cannot guarantee that it will work on other compilers.

If you want to use the GPU, you should use CUDA 8.0 or later and cuDNN 5.0.1 or later. I haven't tried other versions, but lower versions of CUDA, such as 7, should work, and higher versions as well. If you encounter issues with different versions of CUDA and cuDNN, please open an issue on GitHub.

License

This library is distributed under the terms of the MIT license, see LICENSE file for details.
