
etaler / Etaler

License: BSD-3-Clause
A flexible HTM (Hierarchical Temporal Memory) framework with full GPU support.

Programming Languages

C++
C
CMake
GDB
Dockerfile

Projects that are alternatives of or similar to Etaler

Arraymancer
A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends
Stars: ✭ 793 (+903.8%)
Mutual labels:  opencl, tensor, gpu-computing
Autodock Gpu
AutoDock for GPUs and other accelerators
Stars: ✭ 65 (-17.72%)
Mutual labels:  opencl, gpu-computing
Neanderthal
Fast Clojure Matrix Library
Stars: ✭ 927 (+1073.42%)
Mutual labels:  opencl, gpu-computing
Clvk
Experimental implementation of OpenCL on Vulkan
Stars: ✭ 158 (+100%)
Mutual labels:  opencl, gpu-computing
Luxcore
LuxCore source repository
Stars: ✭ 601 (+660.76%)
Mutual labels:  opencl, gpu-computing
Tvm
Open deep learning compiler stack for cpu, gpu and specialized accelerators
Stars: ✭ 7,494 (+9386.08%)
Mutual labels:  opencl, tensor
Openclga
A Python Library for Genetic Algorithm on OpenCL
Stars: ✭ 103 (+30.38%)
Mutual labels:  opencl, gpu-computing
Blendluxcore
Blender Integration for LuxCore
Stars: ✭ 287 (+263.29%)
Mutual labels:  opencl, gpu-computing
Compute.scala
Scientific computing with N-dimensional arrays
Stars: ✭ 191 (+141.77%)
Mutual labels:  opencl, tensor
gpuowl
GPU Mersenne primality test.
Stars: ✭ 77 (-2.53%)
Mutual labels:  opencl, gpu-computing
dlprimitives
Deep Learning Primitives and Mini-Framework for OpenCL
Stars: ✭ 65 (-17.72%)
Mutual labels:  opencl, gpu-computing
Hipsycl
Implementation of SYCL for CPUs, AMD GPUs, NVIDIA GPUs
Stars: ✭ 377 (+377.22%)
Mutual labels:  opencl, gpu-computing
Trisycl
Generic system-wide modern C++ for heterogeneous platforms with SYCL from Khronos Group
Stars: ✭ 354 (+348.1%)
Mutual labels:  opencl, gpu-computing
Bayadera
High-performance Bayesian Data Analysis on the GPU in Clojure
Stars: ✭ 342 (+332.91%)
Mutual labels:  opencl, gpu-computing
gardenia
GARDENIA: Graph Analytics Repository for Designing Efficient Next-generation Accelerators
Stars: ✭ 22 (-72.15%)
Mutual labels:  opencl, gpu-computing
Cekirdekler
Multi-device OpenCL kernel load balancer and pipeliner API for C#. Uses a shared-distributed memory model to keep GPUs updated fast while using the same kernel on all devices (for simplicity).
Stars: ✭ 76 (-3.8%)
Mutual labels:  opencl, gpu-computing
Deepnet
Deep.Net machine learning framework for F#
Stars: ✭ 99 (+25.32%)
Mutual labels:  tensor, gpu-computing
Clojurecl
ClojureCL is a Clojure library for parallel computations with OpenCL.
Stars: ✭ 266 (+236.71%)
Mutual labels:  opencl, gpu-computing
Fast
A framework for GPU based high-performance medical image processing and visualization
Stars: ✭ 179 (+126.58%)
Mutual labels:  opencl, gpu-computing
CUDAfy.NET
CUDAfy .NET allows easy development of high-performance GPGPU applications entirely from .NET. It's developed in C#.
Stars: ✭ 56 (-29.11%)
Mutual labels:  opencl, gpu-computing



Etaler is a library for machine intelligence based on HTM theory, providing two main features:

  • HTM algorithms with modern API
  • A minimal cross-platform (CPU, GPU, etc.) Tensor implementation

You can now explore HTM with a modern, easy-to-use API and enjoy the performance boost from the GPU.

More about Etaler

A GPU ready HTM library

Unlike most previous HTM implementations, Etaler is designed from the ground up to work with GPUs and allows nearly seamless data transfer between the CPU and GPUs.

Etaler provides HTM algorithms and a minimal Tensor implementation that operates on both CPU and GPU. You can choose what works best for you and switch between them with ease.

Total front-end/back-end separation

Etaler is written so that front-end concerns (Tensor operation calls, HTM APIs, layer save/load) are totally separated from the back end, where all the computation and memory management happen. This allows Etaler to be easily extended and optimized. Have multiple GPUs? Just spawn multiple GPU backends! Thou shalt not need any black magic.
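As a sketch, the multi-GPU idea could look like the following. This is an illustration only: the OpenCLBackend constructor arguments for selecting a device, the header paths, and the ones() helper are assumptions and may differ from the actual Etaler API.

```cpp
#include <memory>
#include <Etaler/Etaler.hpp>                 // assumed main header
#include <Etaler/Backends/OpenCLBackend.hpp> // assumed backend header
using namespace et;

int main()
{
	// Hypothetical: one backend object per GPU. The device-selection
	// parameters are assumptions; check the real OpenCLBackend constructor.
	auto gpu0 = std::make_shared<OpenCLBackend>();
	auto gpu1 = std::make_shared<OpenCLBackend>(/*platform=*/0, /*device=*/1);

	// The same front-end code runs unchanged on either backend;
	// only the .to() call decides where the computation happens.
	Tensor a = ones({128}).to(gpu0);
	Tensor b = ones({128}).to(gpu1);
}
```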

Why the name?

Etaler is named after the inverse of the word "relate" for no specific reason.

Examples

See the examples folder for more information and a quick feel for how Etaler works.

Creating and printing a tensor

float arr[] = {1, 2, 3, 4};
Tensor t = Tensor(/*shape=*/{4}, /*data=*/arr);

std::cout << t << std::endl;

Encode a scalar

Tensor t = encoder::scalar(0.1);

Using the GPU

auto gpu = std::make_shared<OpenCLBackend>();
Tensor t = encoder::scalar(0.1);

//Transfer data to GPU
Tensor q = t.to(gpu);

SpatialPooler sp = SpatialPooler(/*Spatial Pooler params here*/).to(gpu);
// Alternatively: SpatialPooler sp(/*Spatial Pooler params here*/, gpu.get());
Tensor r = sp.compute(q);

Saving layers

save(sp.states(), "sp.cereal");
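Loading works in reverse. A sketch, where the free function load() and the member loadState() are assumed names; consult the examples folder for the exact API:

```cpp
// Sketch: restore a previously saved Spatial Pooler.
// Assumes a free function load() and a member function loadState().
auto states = load("sp.cereal");
sp.loadState(states);
```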

Documentation

Documentation is available online on Read the Docs.

Building and platform support

OS/Backend   CPU   OpenCL
Linux        Yes   Yes
OS X         Yes   Yes
Windows      Yes   Yes
FreeBSD      Yes   Yes*
  • Build with GCC and libstdc++ on OS X 10.11. Clang should work on OS X 10.14 and later. See BuildOnOSX.md
  • Build with Visual Studio 2019 on Windows. See BuildOnMSVC.md
  • *OpenCL on FreeBSD is tested using POCL, which has known bugs that prevent Etaler from fully functioning on ARM.

Dependencies

  • Required

    • Intel TBB
    • cereal
  • OpenCL Backend

    • OpenCL and the OpenCL C++ wrapper
    • An OpenCL 1.2 capable GPU
  • Tests

    • Catch2 (the single-header catch.hpp)
Notes:

  1. Make sure to set up a TBBROOT environment variable pointing to the binary installation directory of TBB, and that the TBB tbbvars.sh file has been modified correctly and run, before running cmake.
  2. cereal can be git cloned into the Etaler/Etaler/3rdparty directory.
  3. Only the catch.hpp file is required from Catch2, and that file can be placed into the Etaler/tests directory.
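For note 1, the setup typically looks like the following (the install path is a placeholder; adjust it to your TBB location and architecture):

```shell
# Placeholder path: point TBBROOT at your TBB binary installation
export TBBROOT=/opt/intel/tbb
# tbbvars.sh ships with the TBB binary distribution
source "$TBBROOT/bin/tbbvars.sh" intel64
cmake ..
```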

Building from source

Clone the repository. Then, after fulfilling the dependencies, run cmake and then whatever build system you're using.

For example, on Linux:

mkdir build
cd build
cmake ..
make -j8

Some cmake options are available:

option                 description                                     default
CMAKE_BUILD_TYPE       Debug or Release build                          Release
ETALER_ENABLE_OPENCL   Enable the OpenCL backend                       OFF
ETALER_BUILD_EXAMPLES  Build the examples                              ON
ETALER_BUILD_TESTS     Build the tests                                 ON
ETALER_BUILD_DOCS      Build the documentation                         OFF
ETALER_ENABLE_SIMD     Enable SIMD for the CPU backend                 OFF
ETALER_NATIVE_BUILD    Enable compiler optimizations for the host CPU  OFF
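For example, a release build with the OpenCL backend and SIMD enabled can be configured like this, using the options above:

```shell
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release \
         -DETALER_ENABLE_OPENCL=ON \
         -DETALER_ENABLE_SIMD=ON
make -j8
```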


Building in Docker/VSC

Open the folder in VS Code with the Remote - Containers extension ( ext install ms-vscode-remote.remote-containers ); the Docker image and container will start automatically. If the CMake Tools extension is also installed, building is done automatically as well. Otherwise, follow the regular cmake procedure inside the Etaler directory.

LICENSE

Etaler is licensed under the BSD 3-Clause License. So use it freely!

Be aware that Numenta holds the rights to HTM-related patents and only allows free (as in "free beer") use of their patents for non-commercial purposes. If you are using Etaler commercially, please contact Numenta for licensing.
(tl;dr: Etaler is free for any purpose, but HTM is not free for commercial use.)

Contribution

HTM theory is still young and growing, and so are we. We'd love contributions from you to accelerate the development of Etaler! Just fork, make your changes, and open a PR!

See CONTRIBUTION.md

Notes

  • NVIDIA's OpenCL implementation might not report errors correctly. It can execute kernels with an invalid memory object without telling you, then crash something seemingly unrelated the next time around. If you encounter weird behavior, please try POCL with the CUDA backend or use an AMD card. Note, however, that the OpenCL kernels haven't been optimized for vector processors like AMD's; they should work, but you might see performance drops.

  • Due to the nature of HTM, the OpenCL backend uses local memory extensively, so you will see lower-than-expected performance on processors that use global memory to emulate local memory. This includes, but is not limited to (and none of them have been tested): ARM Mali GPUs, the VideoCore IV GPU, and any CPU.

  • By default Etaler saves to a portable binary file. If you want your data as JSON, Etaler automatically saves as JSON when you specify a .json file extension. Note, however, that JSON is bulky compared to the binary format and grows quickly, so make sure you know what you are doing.

  • FPGA-based OpenCL is not supported for now. FPGA platforms don't provide the online (API-callable) compilers that Etaler uses for code generation.

  • DSP/CPU/Xeon Phi-based OpenCL should work out of the box, but we haven't tested it.
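For the save-format note above, the file extension alone selects the serialization format with the save() call shown earlier:

```cpp
// The file extension selects the serialization format.
save(sp.states(), "sp.cereal"); // portable binary: compact, the default choice
save(sp.states(), "sp.json");   // JSON: human-readable, but much larger
```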

For NuPIC users

Although Etaler provides essentially the same features, it is very different from Numenta's NuPIC. Some noticeable differences:

  • Data-Oriented Design instead of Object-Oriented
  • No Network API (planned in the future, by another repo)
  • SDR is handled as a Tensor instead of a sparse matrix
  • Swarming is not supported nor planned

Testing

If you have built the tests, run tests/etaler_test.

We are still deciding whether CI is worth the trouble; C++ projects take too long to build on most CI services, which drags down development speed.

Cite us

We're happy that you can use the library and are having fun. Please attribute us by linking to Etaler at https://github.com/etaler/Etaler. For scientific publications, we suggest the following BibTeX citation.

@misc{etaler2019,
	abstract = "Implementation of Hierarchical Temporal Memory and related algorithms in C++ and OpenCL",
	author = "An-Pang Clang",
	commit = {0226cdac1f03a642a4849ad8b9d4574ef35c943c},
	howpublished = "\url{https://github.com/etaler/Etaler}",
	journal = "GitHub repository",
	keywords = "HTM; Hierarchical Temporal Memory; Numenta; NuPIC; cortical; sparse distributed representation; SDR; anomaly; prediction; bioinspired; neuromorphic",
	publisher = "Github",
	title = "{Etaler implementation of Hierarchical Temporal Memory}",
	year = "2019"
}

Note: the commit number and publication year shown above are from when we last updated the citation. You can update these fields to match the version you use.
