
surban / TensorAlgDiff

License: Apache-2.0
Automatic Differentiation for Tensor Algebras

Programming Languages

F#

Projects that are alternatives to or similar to TensorAlgDiff

Arraymancer
A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends
Stars: ✭ 793 (+2950%)
Mutual labels:  automatic-differentiation, tensor
Rust Autograd
Tensors and differentiable operations (like TensorFlow) in Rust
Stars: ✭ 278 (+969.23%)
Mutual labels:  automatic-differentiation, tensor
BTAS
Basic Tensor Algebra Subroutines
Stars: ✭ 35 (+34.62%)
Mutual labels:  tensor, tensor-algebra
Grassmann.jl
⟨Leibniz-Grassmann-Clifford⟩ differential geometric algebra / multivector simplicial complex
Stars: ✭ 289 (+1011.54%)
Mutual labels:  automatic-differentiation, tensor
Tensorial.jl
Statically sized tensors and related operations for Julia
Stars: ✭ 18 (-30.77%)
Mutual labels:  automatic-differentiation, tensor
Tullio.jl
Stars: ✭ 231 (+788.46%)
Mutual labels:  automatic-differentiation, tensor
Tensors.jl
Efficient computations with symmetric and non-symmetric tensors with support for automatic differentiation.
Stars: ✭ 142 (+446.15%)
Mutual labels:  automatic-differentiation, tensor
tensority
Strongly typed multidimensional array library for OCaml
Stars: ✭ 44 (+69.23%)
Mutual labels:  tensor
systemds
An open source ML system for the end-to-end data science lifecycle
Stars: ✭ 38 (+46.15%)
Mutual labels:  tensor
Scientific-Programming-in-Julia
Repository for B0M36SPJ
Stars: ✭ 32 (+23.08%)
Mutual labels:  automatic-differentiation
DI-treetensor
Let DI-treetensor help you simplify the structure processing! (Tree-structured computation getting logically tangled? DI-treetensor sorts it out for you quickly.)
Stars: ✭ 134 (+415.38%)
Mutual labels:  tensor
scikit tt
Tensor Train Toolbox
Stars: ✭ 52 (+100%)
Mutual labels:  tensor
autodiffr
Automatic Differentiation for R
Stars: ✭ 21 (-19.23%)
Mutual labels:  automatic-differentiation
Tensor
A library and extension that provides objects for scientific computing in PHP.
Stars: ✭ 146 (+461.54%)
Mutual labels:  tensor
FwiFlow.jl
Elastic Full Waveform Inversion for Subsurface Flow Problems Using Intrusive Automatic Differentiation
Stars: ✭ 24 (-7.69%)
Mutual labels:  automatic-differentiation
ostd
Online Stochastic Tensor Decomposition for Background Subtraction in Multispectral Video Sequences
Stars: ✭ 21 (-19.23%)
Mutual labels:  tensor
ad-lens
Automatic Differentiation using Pseudo Lenses. Neat.
Stars: ✭ 16 (-38.46%)
Mutual labels:  automatic-differentiation
AbstractOperators.jl
Abstract operators for large scale optimization in Julia
Stars: ✭ 26 (+0%)
Mutual labels:  automatic-differentiation
Etaler
A flexible HTM (Hierarchical Temporal Memory) framework with full GPU support.
Stars: ✭ 79 (+203.85%)
Mutual labels:  tensor
YaoBlocks.jl
Standard basic quantum circuit simulator building blocks. (Archived; it has been moved into Yao.jl.)
Stars: ✭ 26 (+0%)
Mutual labels:  automatic-differentiation

Automatic Differentiation for Tensor Algebras


Read the corresponding technical report (arXiv:1711.01348) for more details and examples.

This code computes expressions for the derivatives of element-wise defined tensor-valued functions. The arguments inside such a function may be indexed by arbitrary linear combinations of the function's indices. Furthermore, the function may contain (nested) sums with arbitrary ranges, which may even depend linearly on other indices. An example is the matrix-valued function f(a,b,c,d), defined element-wise by the expression

f[i; j] = exp (-sum{k}_0^4 (((a[i; k] + b[j; k]) ** 2 * c[i; i] + d[i + k] ** 3)))
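
In conventional mathematical notation this reads

    f_{ij} = \exp\Big( -\sum_{k=0}^{4} \big( (a_{ik} + b_{jk})^2 \, c_{ii} + d_{i+k}^3 \big) \Big)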

Deriving derivative expressions when the mapping between function indices and argument indices is not 1:1 requires special attention. For example, for the function f_{ij}(x) = x_i^2, the derivative of a loss l = l(f(x)) w.r.t. x is (dx)_i = dl/dx_i = \sum_j (df)_{ij} 2 x_i, where (df)_{ij} = dl/df_{ij} is the incoming derivative of the loss w.r.t. f; the sum over j is necessary because the index j does not appear among the indices of x. Another example is f_i(x) = x_{ii}^2, where x is a matrix; here we have (dx)_{ij} = \delta_{ij} (df)_i 2 x_{ii}, and the Kronecker delta is necessary because the derivative is zero for off-diagonal elements. Yet another indexing scheme is used by f_{ij}(x) = \exp x_{i+j}; here the correct derivative is (dx)_k = \sum_i (df)_{i,k-i} \exp x_k, where the range of the sum must be chosen appropriately.
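
To spell out the diagonal case: with f_i(x) = x_{ii}^2, the chain rule gives

    (dx)_{jk} = \sum_i (df)_i \, \partial f_i / \partial x_{jk} = \sum_i (df)_i \, 2 x_{ii} \, \delta_{ij} \delta_{ik} = \delta_{jk} \, (df)_j \, 2 x_{jj}

so only the diagonal entries of dx are nonzero, as stated above.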

Our algorithm can handle any case in which the indices of an argument are an arbitrary linear combination of the indices of the function, thus all of the above examples can be handled. Sums (and their ranges) and Kronecker deltas are automatically inserted into the derivatives as necessary. Additionally, the indices are transformed, if required (as in the last example). The algorithm outputs a symbolic expression that can be subsequently fed into a tensor algebra compiler.
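
To see how the sum ranges arise, consider the argument d[i + k] from the example above, with i ranging over 0..2 and k over 0..4. A fixed entry d_p receives contributions from every pair (i, k) with i + k = p, so

    (dd)_p = \sum_{k = \max(0,\, p-2)}^{\min(4,\, p)} \sum_{j=0}^{3} (df)_{p-k,\, j} \, \partial f_{p-k,\, j} / \partial d_p

where the bounds on k enforce 0 <= p - k <= 2. This is exactly the sum range max [0; -2 + dd_0] to min [4; dd_0] that appears in the generated expression for dd below.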

For the above expression the algorithm outputs:

Derivative of f wrt. a: da[da_0; da_1] = sum{da_z0}_0^3 (((-(df[da_0; da_z0] * exp (-sum{k}_0^4 (((a[da_0; k] + b[da_z0; k]) ** 2 * c[da_0; da_0] + d[da_0 + k] ** 3))))) * c[da_0; da_0] * 2 * (a[da_0; da_1] + b[da_z0; da_1]) ** (2 - 1)))
Derivative of f wrt. b: db[db_0; db_1] = sum{db_z0}_0^2 (((-(df[db_z0; db_0] * exp (-sum{k}_0^4 (((a[db_z0; k] + b[db_0; k]) ** 2 * c[db_z0; db_z0] + d[db_z0 + k] ** 3))))) * c[db_z0; db_z0] * 2 * (a[db_z0; db_1] + b[db_0; db_1]) ** (2 - 1)))
Derivative of f wrt. c: dc[dc_0; dc_1] = if {dc_0 + -dc_1 = 0} then (sum{dc_z1}_0^4 (sum{dc_z0}_0^3 (((a[dc_1; dc_z1] + b[dc_z0; dc_z1]) ** 2 * (-(df[dc_1; dc_z0] * exp (-sum{k}_0^4 (((a[dc_1; k] + b[dc_z0; k]) ** 2 * c[dc_1; dc_1] + d[dc_1 + k] ** 3))))))))) else (0)
Derivative of f wrt. d: dd[dd_0] = sum{dd_z1}_(max [0; -2 + dd_0])^(min [4; dd_0]) (sum{dd_z0}_0^3 (((-(df[dd_0 + -dd_z1; dd_z0] * exp (-sum{k}_0^4 (((a[dd_0 + -dd_z1; k] + b[dd_z0; k]) ** 2 * c[dd_0 + -dd_z1; dd_0 + -dd_z1] + d[dd_0 + -dd_z1 + k] ** 3))))) * 3 * d[dd_0] ** (3 - 1))))
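
The generated expressions can be checked numerically against finite differences. The following F# sketch does so for the derivative w.r.t. a. The array shapes are assumptions read off the sum ranges above, the surrogate loss and all helper names are purely illustrative, and none of this is the library's API.

    // Illustrative check of da against central finite differences.
    // Assumed shapes, read off the sum ranges above:
    //   a: 3x5 (i,k), b: 4x5 (j,k), c: 3x3 (i,i), d: length 7 (i+k).
    let nI, nJ, nK = 3, 4, 5
    let rng = System.Random(0)
    let a  = Array2D.init nI nK (fun _ _ -> rng.NextDouble() - 0.5)
    let b  = Array2D.init nJ nK (fun _ _ -> rng.NextDouble() - 0.5)
    let c  = Array2D.init nI nI (fun _ _ -> rng.NextDouble() - 0.5)
    let d  = Array.init (nI + nK - 1) (fun _ -> rng.NextDouble() - 0.5)
    let df = Array2D.init nI nJ (fun _ _ -> rng.NextDouble() - 0.5)  // incoming dl/df

    // surrogate scalar loss l = sum_ij df[i;j] * f[i;j], so that dl/da equals da
    let loss (a: float[,]) =
        let mutable l = 0.0
        for i in 0 .. nI - 1 do
            for j in 0 .. nJ - 1 do
                let mutable s = 0.0
                for k in 0 .. nK - 1 do
                    s <- s + (a.[i, k] + b.[j, k]) ** 2.0 * c.[i, i] + d.[i + k] ** 3.0
                l <- l + df.[i, j] * exp (-s)
        l

    // da as given by the generated expression above
    let da =
        Array2D.init nI nK (fun i0 k0 ->
            let mutable acc = 0.0
            for j in 0 .. nJ - 1 do
                let mutable s = 0.0
                for k in 0 .. nK - 1 do
                    s <- s + (a.[i0, k] + b.[j, k]) ** 2.0 * c.[i0, i0] + d.[i0 + k] ** 3.0
                acc <- acc + (-(df.[i0, j] * exp (-s))) * c.[i0, i0] * 2.0 * (a.[i0, k0] + b.[j, k0])
            acc)

    // central finite differences on the surrogate loss
    let eps = 1e-6
    let daFD =
        Array2D.init nI nK (fun i k ->
            let ap = Array2D.copy a
            let am = Array2D.copy a
            ap.[i, k] <- a.[i, k] + eps
            am.[i, k] <- a.[i, k] - eps
            (loss ap - loss am) / (2.0 * eps))

    let mutable maxErr = 0.0
    for i in 0 .. nI - 1 do
        for k in 0 .. nK - 1 do
            maxErr <- max maxErr (abs (da.[i, k] - daFD.[i, k]))
    printfn "max |da - daFD| = %g" maxErr  // should be on the order of 1e-8 or smaller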

Internally, the derivatives are stored as computational trees to avoid the repeated computations, and thus the expression blowup, that otherwise occur in symbolic differentiation. This work can easily be employed in systems that generate C++ or CUDA code for expressions, or be combined with a tensor algebra compiler such as http://tensor-compiler.org.
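
The sharing can be pictured with a small expression type. This is a minimal sketch for illustration only, not the library's actual representation:

    // Minimal sketch (not the library's actual types): derivative expressions
    // for several arguments reference one shared node for the common
    // subexpression exp(-S) instead of duplicating its subtree.
    type Expr =
        | Leaf of string
        | Neg of Expr
        | Mul of Expr * Expr
        | Exp of Expr

    let s      = Leaf "S"         // the inner sum, built once
    let common = Exp (Neg s)      // exp(-S), shared by all derivatives
    let da     = Mul (common, Leaf "dS/da")
    let db     = Mul (common, Leaf "dS/db")
    // da and db reference the same 'common' node, so a code generator
    // walking the tree can emit the computation of exp(-S) once and reuse it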

Running

  1. Install .NET Core 2.0 from https://www.microsoft.com/net/learn/get-started/ (packages available for all operating systems). We tested our code on Ubuntu Linux.

  2. To build and run the demo, execute dotnet run.

  3. To run the numeric verification tests, execute dotnet test (this takes approx. 2 minutes).

Reference

When using this work or the provided code, please cite the following publication.

Sebastian Urban, Patrick van der Smagt. Automatic Differentiation for Tensor Algebras. arXiv:1711.01348 [cs.SC], 2017.

Note that we employ some algorithms implemented in our open-source Tensor library; its source is available at https://github.com/DeepMLNet/DeepNet/tree/master/Tensor and its documentation at https://www.deepml.net/Tensor.

License

Apache License 2.0
