
raskr / Rust Autograd

License: MIT
Tensors and differentiable operations (like TensorFlow) in Rust

Programming Languages

Rust

Projects that are alternatives to or similar to Rust Autograd

Arraymancer
A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends
Stars: ✭ 793 (+185.25%)
Mutual labels:  neural-networks, tensor, automatic-differentiation
Grassmann.jl
⟨Leibniz-Grassmann-Clifford⟩ differential geometric algebra / multivector simplicial complex
Stars: ✭ 289 (+3.96%)
Mutual labels:  tensor, automatic-differentiation
Qualia2.0
Qualia is a deep learning framework deeply integrated with automatic differentiation and dynamic graphing with CUDA acceleration. Qualia was built from scratch.
Stars: ✭ 41 (-85.25%)
Mutual labels:  neural-networks, automatic-differentiation
Tullio.jl
Stars: ✭ 231 (-16.91%)
Mutual labels:  tensor, automatic-differentiation
Tensorial.jl
Statically sized tensors and related operations for Julia
Stars: ✭ 18 (-93.53%)
Mutual labels:  automatic-differentiation, tensor
Adcme.jl
Automatic Differentiation Library for Computational and Mathematical Engineering
Stars: ✭ 106 (-61.87%)
Mutual labels:  neural-networks, automatic-differentiation
Autograd.jl
Julia port of the Python autograd package.
Stars: ✭ 147 (-47.12%)
Mutual labels:  neural-networks, automatic-differentiation
Qml
Introductions to key concepts in quantum machine learning, as well as tutorials and implementations from cutting-edge QML research.
Stars: ✭ 174 (-37.41%)
Mutual labels:  neural-networks, automatic-differentiation
Tensors.jl
Efficient computations with symmetric and non-symmetric tensors with support for automatic differentiation.
Stars: ✭ 142 (-48.92%)
Mutual labels:  automatic-differentiation, tensor
TensorAlgDiff
Automatic Differentiation for Tensor Algebras
Stars: ✭ 26 (-90.65%)
Mutual labels:  automatic-differentiation, tensor
Carrot
🥕 Evolutionary Neural Networks in JavaScript
Stars: ✭ 261 (-6.12%)
Mutual labels:  neural-networks
Deeplearning.ai Notes
These are my notes, prepared while taking the deep learning specialization taught by Andrew Ng. I have used diagrams and code snippets where needed, following The Honor Code.
Stars: ✭ 262 (-5.76%)
Mutual labels:  neural-networks
Moniel
Interactive Notation for Computational Graphs
Stars: ✭ 272 (-2.16%)
Mutual labels:  neural-networks
Awesome Distributed Deep Learning
A curated list of awesome Distributed Deep Learning resources.
Stars: ✭ 277 (-0.36%)
Mutual labels:  neural-networks
Deepc
Vendor-independent deep learning library, compiler, and inference framework for microcomputers and microcontrollers
Stars: ✭ 260 (-6.47%)
Mutual labels:  tensor
Pycox
Survival analysis with PyTorch
Stars: ✭ 269 (-3.24%)
Mutual labels:  neural-networks
Blitz
Blitz++ Multi-Dimensional Array Library for C++
Stars: ✭ 257 (-7.55%)
Mutual labels:  tensor
Painters
🎨 Winning solution for the Painter by Numbers competition on Kaggle.
Stars: ✭ 257 (-7.55%)
Mutual labels:  neural-networks
Place Recognition Using Autoencoders And Nn
Place recognition with WiFi fingerprints using Autoencoders and Neural Networks
Stars: ✭ 256 (-7.91%)
Mutual labels:  neural-networks
Librec
LibRec: A Leading Java Library for Recommender Systems
Stars: ✭ 3,045 (+995.32%)
Mutual labels:  tensor

autograd


Differentiable operations and tensors backed by ndarray.

Motivation

Machine learning is one of the fields where Rust lags behind other languages. The aim of this crate is to show that Rust is capable of implementing an efficient, full-featured dataflow graph naturally. Moreover, the core of this crate is quite small compared to others (since it is implemented in pure Rust on top of ndarray), so it may also serve as a readable introduction for those who are not familiar with how this kind of library works.

Installation

[dependencies]
autograd = { version = "1.1.0", features = ["mkl"] }

The mkl feature is recommended: it speeds up linear-algebra operations using Intel MKL.
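
If you prefer not to enable MKL, the feature can simply be omitted; a plain dependency line (a minimal alternative using the crate's defaults) also works:

[dependencies]
autograd = "1.1.0"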

rustc version

Tested with rustc 1.38 ..= 1.42

Features

Lazy, lightweight tensor evaluation

Computation graphs are created on the fly (a.k.a. define-by-run), but are not evaluated until eval is called. This mechanism strikes a balance between performance and flexibility.

use autograd as ag;

ag::with(|g: &mut ag::Graph<_>| {
    let a: ag::Tensor<f32> = g.ones(&[60]);
    let b: ag::Tensor<f32> = g.ones(&[24]);
    let c: ag::Tensor<f32> = g.reshape(a, &[3, 4, 5]);
    let d: ag::Tensor<f32> = g.reshape(b, &[4, 3, 2]);
    let e: ag::Tensor<f32> = g.tensordot(c, d, &[1, 0], &[0, 1]);
    e.eval(&[]);  // Getting `ndarray::Array` here.
});
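
Because evaluation is deferred, several tensors can also be materialized in a single pass with Graph::eval, the same method used in the training example below. A minimal sketch, reusing only the ops shown above:

use autograd as ag;

ag::with(|g: &mut ag::Graph<f32>| {
    // Build lazily; nothing is computed yet.
    let a = g.ones(&[3, 4]);
    let b = g.ones(&[4, 3]);
    let c = g.matmul(a, b);      // shape [3, 3]
    let d = g.reshape(c, &[9]);  // flattened view of `c`
    // A single call evaluates both tensors in one graph run.
    println!("{:?}", g.eval(&[c, d], &[]));
});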

Reverse-mode automatic differentiation

Many built-in operations support higher-order derivatives, and you can also define your own differentiable ops backed by ndarrays; see the documentation for the op API.

Here we are just computing partial derivatives of z = 2x^2 + 3y + 1.

ag::with(|g: &mut ag::Graph<_>| {
   let x = g.placeholder(&[]);
   let y = g.placeholder(&[]);
   let z = 2.*x*x + 3.*y + 1.;

   // dz/dy
   let gy = &g.grad(&[z], &[y])[0];
   println!("{:?}", gy.eval(&[]));   // => Ok(3.)

   // dz/dx (requires the placeholder `x` to be filled)
   let gx = &g.grad(&[z], &[x])[0];
   let feed = ag::ndarray::arr0(2.);
   println!("{:?}", gx.eval(&[x.given(feed.view())]));  // => Ok(8.)

   // d2z/dx2 (differentiates `z` again)
   let ggx = &g.grad(&[gx], &[x])[0];
   println!("{:?}", ggx.eval(&[]));  // => Ok(4.)
});
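
As a sanity check, these match the analytic derivatives of z = 2x^2 + 3y + 1: dz/dy = 3, dz/dx = 4x (hence 8 at x = 2), and d2z/dx2 = 4.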

Neural networks

This crate has various low-level features inspired by TensorFlow/Theano for training neural networks. Since computation graphs require only a bare minimum of heap allocations, the overhead is small, even for complex networks.

// This is a softmax regression for MNIST digits classification with Adam.
// This achieves 0.918 test accuracy after 3 epochs (0.11 sec/epoch on 2.7GHz Intel Core i5).
use autograd::{self as ag, Graph, optimizers::adam, ndarray_ext as arr, tensor::Variable};

let rng = ag::ndarray_ext::ArrayRng::<f32>::default();
let w_arr = arr::into_shared(rng.glorot_uniform(&[28 * 28, 10]));
let b_arr = arr::into_shared(arr::zeros(&[1, 10]));
let adam_state = adam::AdamState::new(&[&w_arr, &b_arr]);

let max_epoch = 3;

for epoch in 0..max_epoch {
   ag::with(|g| {
       let w = g.variable(w_arr.clone());
       let b = g.variable(b_arr.clone());
       let x = g.placeholder(&[-1, 28*28]);
       let y = g.placeholder(&[-1]);
       let z = g.matmul(x, w) + b;
       let mean_loss = g.reduce_mean(g.sparse_softmax_cross_entropy(z, &y), &[0], false);
       let grads = &g.grad(&[&mean_loss], &[w, b]);
       let update_ops: &[ag::Tensor<f32>] =
           &adam::Adam::default().compute_updates(&[w, b], grads, &adam_state, g);

       // Mini-batch training loop (commented out): `x_train`, `y_train`, and
       // `get_permutation` come from the MNIST data-loading code omitted here;
       // see examples/ for the complete program.
       // let batch_size = 200isize;
       // let num_samples = x_train.shape()[0];
       // let num_batches = num_samples / batch_size as usize;
       // for i in get_permutation(num_batches) {
       //     let i = i as isize * batch_size;
       //     let x_batch = x_train.slice(s![i..i + batch_size, ..]).into_dyn();
       //     let y_batch = y_train.slice(s![i..i + batch_size, ..]).into_dyn();
       //     g.eval(update_ops, &[x.given(x_batch), y.given(y_batch)]);
       // }
   });
}
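
Note the pattern here: the shared parameter arrays (w_arr, b_arr) and the Adam state are created once, outside the per-epoch ag::with scope, so parameter updates persist across epochs while the graph itself is rebuilt cheaply on every iteration.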

ConvNet and LSTM examples can be found in the examples directory.

Hooks

You can register hooks on ag::Tensor objects for debugging.

use autograd as ag;

ag::with(|g| {
    let a: ag::Tensor<f32> = g.zeros(&[4, 2]).show();
    let b: ag::Tensor<f32> = g.ones(&[2, 3]).show_shape();
    let c = g.matmul(a, b).show_with("MatMul:");

    c.eval(&[]);
    // [[0.0, 0.0],
    // [0.0, 0.0],
    // [0.0, 0.0],
    // [0.0, 0.0]] shape=[4, 2], strides=[2, 1], layout=C (0x1)
    //
    // [2, 3]
    //
    // MatMul:
    //  [[0.0, 0.0, 0.0],
    //  [0.0, 0.0, 0.0],
    //  [0.0, 0.0, 0.0],
    //  [0.0, 0.0, 0.0]] shape=[4, 3], strides=[3, 1], layout=C (0x1), dynamic ndim=2
});
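
Hooks fire when the tensor is actually evaluated, which is why the single c.eval(&[]) call above prints the value of a, the shape of b, and the matmul result in one run.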

For more, see the documentation or the examples directory.
