
Jutho / TensorOperations.jl

Licence: other
Julia package for tensor contractions and related operations

Programming Languages

Julia

Projects that are alternatives to or similar to TensorOperations.jl

Nx
Multi-dimensional arrays (tensors) and numerical definitions for Elixir
Stars: ✭ 1,133 (+392.61%)
Mutual labels:  tensor
Mtensor
A C++ Cuda Tensor Lazy Computing Library
Stars: ✭ 115 (-50%)
Mutual labels:  tensor
Laser
The HPC toolbox: fused matrix multiplication, convolution, data-parallel strided tensor primitives, OpenMP facilities, SIMD, JIT Assembler, CPU detection, state-of-the-art vectorized BLAS for floats and integers
Stars: ✭ 191 (-16.96%)
Mutual labels:  tensor
Hyperlearn
50% faster, 50% less RAM Machine Learning. Numba rewritten Sklearn. SVD, NNMF, PCA, LinearReg, RidgeReg, Randomized, Truncated SVD/PCA, CSR Matrices all 50+% faster
Stars: ✭ 1,204 (+423.48%)
Mutual labels:  tensor
Tensorflow Gpu Macosx
Unofficial NVIDIA CUDA GPU support version of Google TensorFlow for macOS
Stars: ✭ 103 (-55.22%)
Mutual labels:  tensor
Hptt
High-Performance Tensor Transpose library
Stars: ✭ 141 (-38.7%)
Mutual labels:  tensor
Cloud Volume
Read and write Neuroglancer datasets programmatically.
Stars: ✭ 63 (-72.61%)
Mutual labels:  tensor
Norse
Deep learning with spiking neural networks (SNNs) in PyTorch.
Stars: ✭ 211 (-8.26%)
Mutual labels:  tensor
Pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Stars: ✭ 52,811 (+22861.3%)
Mutual labels:  tensor
Mars
Mars is a tensor-based unified framework for large-scale data computation which scales numpy, pandas, scikit-learn and Python functions.
Stars: ✭ 2,308 (+903.48%)
Mutual labels:  tensor
Pytorch2c
A Python module for compiling PyTorch graphs to C
Stars: ✭ 86 (-62.61%)
Mutual labels:  tensor
Deepnet
Deep.Net machine learning framework for F#
Stars: ✭ 99 (-56.96%)
Mutual labels:  tensor
Tensorflow Cheatsheet
My personal reference for Tensorflow
Stars: ✭ 147 (-36.09%)
Mutual labels:  tensor
Pytorch Book
PyTorch tutorials and fun projects including neural talk, neural style, poem writing, anime generation (《深度学习框架PyTorch:入门与实战》)
Stars: ✭ 9,546 (+4050.43%)
Mutual labels:  tensor
Compute.scala
Scientific computing with N-dimensional arrays
Stars: ✭ 191 (-16.96%)
Mutual labels:  tensor
Mtt
MATLAB Tensor Tools
Stars: ✭ 61 (-73.48%)
Mutual labels:  tensor
L2
l2 is a fast, Pytorch-style Tensor+Autograd library written in Rust
Stars: ✭ 126 (-45.22%)
Mutual labels:  tensor
Tensor
package tensor provides efficient and generic n-dimensional arrays in Go that are useful for machine learning and deep learning purposes
Stars: ✭ 222 (-3.48%)
Mutual labels:  tensor
Tenseal
A library for doing homomorphic encryption operations on tensors
Stars: ✭ 197 (-14.35%)
Mutual labels:  tensor
Tinytpu
Implementation of a Tensor Processing Unit for embedded systems and the IoT.
Stars: ✭ 153 (-33.48%)
Mutual labels:  tensor

TensorOperations.jl

Fast tensor operations using a convenient Einstein index notation.


What's new in v3

  • Switched to CUDA.jl instead of CuArrays.jl, which effectively restricts support to Julia 1.4 and higher.

  • The default cache size for intermediate results is now the minimum of 4GB and one quarter of your total memory (obtained via Sys.total_memory()). Furthermore, the structure (i.e. size) and eltype of the temporaries are now also used as lookup keys in the LRU cache, so that you can run the same code on objects with different sizes or element types without constantly having to reallocate the temporaries. Finally, the current task rather than the threadid is used as part of the key, which makes the cache compatible with concurrency at any level.

    As a consequence, different objects for the same temporary location can now be cached, so the cache can quickly grow beyond its maximum size. Once the cache can no longer hold all the temporary objects needed for your simulation, it may actually deteriorate performance, and you might be better off disabling it altogether with TensorOperations.disable_cache(), as sketched below.
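
A minimal sketch of toggling the cache follows; TensorOperations.disable_cache() is named above, while the enable_cache call and its maxrelsize keyword (a fraction of Sys.total_memory()) are assumptions based on the package documentation and may vary between versions.

using TensorOperations

# Switch off caching of temporaries entirely (function named above)
TensorOperations.disable_cache()

# ... run contractions without cached temporaries ...

# Re-enable the cache; maxrelsize is an assumed keyword argument specifying
# the maximum cache size as a fraction of total memory
TensorOperations.enable_cache(maxrelsize = 0.25)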

WARNING: TensorOperations 3.0 contains breaking changes if you had implemented support for custom array / tensor types by overloading checked_similar_from_indices and related functions.

Code example

TensorOperations.jl is mostly used through the @tensor macro, which allows one to express a given operation in index notation, a.k.a. Einstein notation (repeated indices are summed over, following Einstein's summation convention).

using TensorOperations
α = randn()
A = randn(5,5,5,5,5,5)
B = randn(5,5,5)
C = randn(5,5,5)
D = zeros(5,5,5)   # preallocated output array
@tensor begin
    D[a,b,c] = A[a,e,f,c,f,g]*B[g,b,e] + α*C[c,a,b]    # = writes into the existing D
    E[a,b,c] := A[a,e,f,c,f,g]*B[g,b,e] + α*C[c,a,b]   # := allocates a new array E
end

In the first line inside the @tensor block, the result of the operation is stored in the preallocated array D, whereas the second line uses the assignment operator := to define and allocate a new array E of the correct size. The contents of D and E will be equal. A few additional usage patterns are sketched below.
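
The same index notation also covers plain contractions, (partial) traces and permutations. The lines below are a minimal sketch reusing the arrays defined above; the result names H, F and G are hypothetical.

# Contract B and C over the shared indices b and c; H is a 5×5 matrix
@tensor H[a,d] := B[a,b,c] * C[c,b,d]
# Repeated indices within a single tensor are (partially) traced out
@tensor F[a,c] := A[a,e,f,c,f,e]
# Permute and rescale: the index order on the left-hand side fixes the permutation
@tensor G[c,b,a] := α * C[a,b,c]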

For more information, please see the documentation.
