DiffSharp

Licence: BSD-2-Clause
DiffSharp: Differentiable Functional Programming

Projects that are alternatives to, or similar to, DiffSharp

Tvm
Open deep learning compiler stack for cpu, gpu and specialized accelerators
Stars: ✭ 7,494 (+1953.15%)
Mutual labels:  gpu, tensor
Nx
Multi-dimensional arrays (tensors) and numerical definitions for Elixir
Stars: ✭ 1,133 (+210.41%)
Mutual labels:  gpu, tensor
Cupy
NumPy & SciPy for GPU
Stars: ✭ 5,625 (+1441.1%)
Mutual labels:  gpu, tensor
Megengine
MegEngine is a fast, scalable, easy-to-use deep learning framework with automatic differentiation.
Stars: ✭ 4,081 (+1018.08%)
Mutual labels:  gpu, tensor
Pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Stars: ✭ 52,811 (+14368.77%)
Mutual labels:  gpu, tensor
Hyperlearn
50% faster, 50% less RAM Machine Learning. Numba rewritten Sklearn. SVD, NNMF, PCA, LinearReg, RidgeReg, Randomized, Truncated SVD/PCA, CSR Matrices all 50+% faster
Stars: ✭ 1,204 (+229.86%)
Mutual labels:  gpu, tensor
Drlkit
A High Level Python Deep Reinforcement Learning library. Great for beginners, prototyping and quickly comparing algorithms
Stars: ✭ 29 (-92.05%)
Mutual labels:  gpu, tensor
Norse
Deep learning with spiking neural networks (SNNs) in PyTorch.
Stars: ✭ 211 (-42.19%)
Mutual labels:  gpu, tensor
Tensorflow Gpu Macosx
Unofficial build of Google TensorFlow for Mac OS X with NVIDIA CUDA GPU support
Stars: ✭ 103 (-71.78%)
Mutual labels:  gpu, tensor
Deepnet
Deep.Net machine learning framework for F#
Stars: ✭ 99 (-72.88%)
Mutual labels:  gpu, tensor
Compute.scala
Scientific computing with N-dimensional arrays
Stars: ✭ 191 (-47.67%)
Mutual labels:  gpu, tensor
Ocaml Torch
OCaml bindings for PyTorch
Stars: ✭ 308 (-15.62%)
Mutual labels:  gpu, tensor
Arrayfire
ArrayFire: a general purpose GPU library.
Stars: ✭ 3,693 (+911.78%)
Mutual labels:  gpu
Aparapi
The New Official Aparapi: a framework for executing native Java and Scala code on the GPU.
Stars: ✭ 352 (-3.56%)
Mutual labels:  gpu
Agi
Android GPU Inspector
Stars: ✭ 327 (-10.41%)
Mutual labels:  gpu
Adanet
Fast and flexible AutoML with learning guarantees.
Stars: ✭ 3,340 (+815.07%)
Mutual labels:  gpu
Arrayfire Python
Python bindings for ArrayFire: A general purpose GPU library.
Stars: ✭ 358 (-1.92%)
Mutual labels:  gpu
Curl
CURL: Contrastive Unsupervised Representation Learning for Sample-Efficient Reinforcement Learning
Stars: ✭ 346 (-5.21%)
Mutual labels:  gpu
Thrust
The C++ parallel algorithms library.
Stars: ✭ 3,595 (+884.93%)
Mutual labels:  gpu
Ultralight
Next-generation HTML renderer for apps and games
Stars: ✭ 3,585 (+882.19%)
Mutual labels:  gpu


This is the development branch of DiffSharp 1.0.

NOTE: This branch is undergoing development. It has incomplete code, functionality, and design that are likely to change without notice.

Getting Started

DiffSharp is normally used from an F# Jupyter notebook, and the example notebooks can be opened directly in the browser.

To use it locally, install Jupyter and then run:

dotnet tool install -g --add-source "https://dotnet.myget.org/F/dotnet-try/api/v3/index.json" microsoft.dotnet-interactive
dotnet interactive jupyter install
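
Once the kernel is registered you can launch Jupyter as usual (assuming jupyter is on your PATH):

jupyter notebook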

When using .NET Interactive it is best to turn off the automatic HTML display of outputs entirely:

open System.IO

// Prefer plain text over the default HTML display for all outputs.
Formatter.SetPreferredMimeTypeFor(typeof<obj>, "text/plain")

// Render values with F#'s "%A" structured formatting, 120 columns wide.
Formatter.Register(fun (x: obj) (writer: TextWriter) -> fprintfn writer "%120A" x)

You can also use DiffSharp from an F# script or an application with the appropriate package references.
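
A minimal script sketch follows; the package name DiffSharp-cpu and the version shown are assumptions, so substitute whichever DiffSharp package and release you actually use:

// Minimal script sketch. The package name and version here are assumptions;
// substitute the package and release you actually use.
#r "nuget: DiffSharp-cpu, 1.0.0-preview"

open DiffSharp

dsharp.config(backend=Backend.Torch)

// Toy check: the derivative of f(x) = x * x at x = 3 should be 6.
let f (x: Tensor) = x * x
printfn "f'(3) = %A" (dsharp.diff f (dsharp.tensor 3.0))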

Available packages and backends

Now reference an appropriate NuGet package from https://nuget.org.

For all packages except DiffSharp-lite, add the following to your code:

dsharp.config(backend=Backend.Torch)
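
If you are using DiffSharp-lite without a local LibTorch, you can select the pure-F# reference backend instead, as in this sketch:

// Sketch: select the bundled pure-F# reference backend instead of Torch.
dsharp.config(backend=Backend.Reference)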

Using a pre-installed or self-built LibTorch 1.5.0

The Torch CPU and CUDA packages above are large. If you already have libtorch 1.5.0 available on your machine, you can:

  1. reference DiffSharp-lite

  2. set LD_LIBRARY_PATH to include a directory containing the relevant torch_cpu.so and torch_cuda.so.

  3. use dsharp.config(backend=Backend.Torch), as in the sketch after this list.
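
Put together, a script following these steps might look like this sketch (it assumes the libtorch 1.5.0 shared libraries are already on LD_LIBRARY_PATH before dotnet starts):

// Sketch of the DiffSharp-lite route. Assumes LD_LIBRARY_PATH already points
// at a directory containing torch_cpu.so / torch_cuda.so from libtorch 1.5.0.
#r "nuget: DiffSharp-lite"

open DiffSharp

// Bind the Torch backend against the pre-installed libtorch.
dsharp.config(backend=Backend.Torch)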

Developing DiffSharp Libraries

To develop libraries built on DiffSharp, do the following (a dotnet CLI sketch follows the list):

  1. reference DiffSharp.Core (and nothing else) in your library code.

  2. reference DiffSharp.Backends.Reference in your correctness testing code.

  3. reference DiffSharp.Backends.Torch and libtorch-cpu in your CPU testing code.

  4. reference DiffSharp.Backends.Torch and libtorch-cuda-linux or libtorch-cuda-windows in your (optional) GPU testing code.
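
Expressed with the dotnet CLI, the references might be added like this (the project paths are placeholders, not part of DiffSharp):

# Project paths below are placeholders; adjust them to your solution layout.
dotnet add src/MyLibrary.fsproj package DiffSharp.Core
dotnet add tests/Correctness.fsproj package DiffSharp.Backends.Reference
dotnet add tests/TorchCpu.fsproj package DiffSharp.Backends.Torch
dotnet add tests/TorchCpu.fsproj package libtorch-cpu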
