
ctongfei / Nexus

License: MIT
Experimental tensor-typed deep learning

Programming Languages

Scala (5,932 projects)

Projects that are alternatives of or similar to Nexus

Pytorch2c
A Python module for compiling PyTorch graphs to C
Stars: ✭ 86 (-64.75%)
Mutual labels:  tensor
Tensorflow Cheatsheet
My personal reference for Tensorflow
Stars: ✭ 147 (-39.75%)
Mutual labels:  tensor
Norse
Deep learning with spiking neural networks (SNNs) in PyTorch.
Stars: ✭ 211 (-13.52%)
Mutual labels:  tensor
Deepnet
Deep.Net machine learning framework for F#
Stars: ✭ 99 (-59.43%)
Mutual labels:  tensor
L2
l2 is a fast, Pytorch-style Tensor+Autograd library written in Rust
Stars: ✭ 126 (-48.36%)
Mutual labels:  tensor
Mars
Mars is a tensor-based unified framework for large-scale data computation which scales numpy, pandas, scikit-learn and Python functions.
Stars: ✭ 2,308 (+845.9%)
Mutual labels:  tensor
Pytorch Book
PyTorch tutorials and fun projects including neural talk, neural style, poem writing, anime generation (《深度学习框架PyTorch:入门与实战》)
Stars: ✭ 9,546 (+3812.3%)
Mutual labels:  tensor
Tullio.jl
Stars: ✭ 231 (-5.33%)
Mutual labels:  tensor
Hptt
High-Performance Tensor Transpose library
Stars: ✭ 141 (-42.21%)
Mutual labels:  tensor
Tenseal
A library for doing homomorphic encryption operations on tensors
Stars: ✭ 197 (-19.26%)
Mutual labels:  tensor
Tensorflow Gpu Macosx
Unofficial NVIDIA CUDA GPU support version of Google TensorFlow for Mac OS X
Stars: ✭ 103 (-57.79%)
Mutual labels:  tensor
Mtensor
A C++ Cuda Tensor Lazy Computing Library
Stars: ✭ 115 (-52.87%)
Mutual labels:  tensor
Laser
The HPC toolbox: fused matrix multiplication, convolution, data-parallel strided tensor primitives, OpenMP facilities, SIMD, JIT Assembler, CPU detection, state-of-the-art vectorized BLAS for floats and integers
Stars: ✭ 191 (-21.72%)
Mutual labels:  tensor
Pytorch Wrapper
Provides a systematic and extensible way to build, train, evaluate, and tune deep learning models using PyTorch.
Stars: ✭ 92 (-62.3%)
Mutual labels:  tensor
Tensor
package tensor provides efficient and generic n-dimensional arrays in Go that are useful for machine learning and deep learning purposes
Stars: ✭ 222 (-9.02%)
Mutual labels:  tensor
Hyperlearn
50% faster, 50% less RAM Machine Learning. Numba rewritten Sklearn. SVD, NNMF, PCA, LinearReg, RidgeReg, Randomized, Truncated SVD/PCA, CSR Matrices all 50+% faster
Stars: ✭ 1,204 (+393.44%)
Mutual labels:  tensor
Tinytpu
Implementation of a Tensor Processing Unit for embedded systems and the IoT.
Stars: ✭ 153 (-37.3%)
Mutual labels:  tensor
Einops
Deep learning operations reinvented (for pytorch, tensorflow, jax and others)
Stars: ✭ 4,022 (+1548.36%)
Mutual labels:  tensor
Tensoroperations.jl
Julia package for tensor contractions and related operations
Stars: ✭ 230 (-5.74%)
Mutual labels:  tensor
Compute.scala
Scientific computing with N-dimensional arrays
Stars: ✭ 191 (-21.72%)
Mutual labels:  tensor

Nexus

🚧 Ongoing project 🚧 Status: Prototype 🚧

Nexus is a prototypical typesafe deep learning system in Scala.

Nexus is a departure from common deep learning libraries such as TensorFlow, PyTorch, MXNet, etc.

  • Ever been baffled by the axes of tensors? Which axis should I max out?
  • Ever gotten TypeErrors in Python at runtime?
  • Ever spent hours or days getting tensors' axes and dimensions right?

Nexus' answer to these problems is static types. By specifying the semantics of tensor axes in types, exploiting Scala's expressive type system, the compiler can validate a program at compile time, freeing developers from having to remember axes by heart and eliminating nearly all of the errors above before the program even runs.
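
For instance (an illustrative sketch only, reusing the In, Hidden and Out axis labels, the input x, the Affine layer and the |> operator from the XOR example below), wiring a layer that produces a Hidden-labelled tensor into a layer that expects an In-labelled tensor is rejected by the compiler instead of surfacing as a runtime shape error:

  // Illustrative sketch; In, Hidden, Out, x, Affine and |> are as in the XOR example below.
  val ok    = x |> Affine(In -> 2, Hidden -> 2)   // type: Symbolic[FloatTensor[Hidden]]
  val wrong = x |> Affine(In -> 2, Hidden -> 2) |>
                   Affine(In -> 2, Out -> 2)      // does not compile:
                                                  // the layer expects FloatTensor[In],
                                                  // but receives FloatTensor[Hidden]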

Nexus embraces declarative and functional programming: neural networks are built from small, composable components, making the code easy to follow, understand, and maintain.

A first glance

A simple neural network for learning the XOR function can be found here.

Building a typesafe XOR network:

  // Tensor axis labels are declared as both types and singleton values.
  class In extends Dim;     val In = new In
  class Hidden extends Dim; val Hidden = new Hidden
  class Out extends Dim;    val Out = new Out

  val x = Input[FloatTensor[In]]()     // input vectors
  val y = Input[FloatTensor[Out]]()    // gold labels

  val ŷ = x                       |>   // type: Symbolic[FloatTensor[In]]
    Affine(In -> 2, Hidden -> 2)  |>   // type: Symbolic[FloatTensor[Hidden]]
    Logistic                      |>   // type: Symbolic[FloatTensor[Hidden]]
    Affine(Hidden -> 2, Out -> 2) |>   // type: Symbolic[FloatTensor[Out]]
    Softmax                            // type: Symbolic[FloatTensor[Out]]
  val loss = CrossEntropy(y, ŷ)        // type: Symbolic[Float]

Design goals

  • Typeful. Each axis of a tensor is statically typed using tuples. For example, an image is typed as FloatTensor[(Width, Height, Channel)], whereas an embedded sentence is typed as FloatTensor[(Word, Embedding)]. This frees programmers from remembering what each axis stands for.
  • Typesafe. Very strong static type checking to eliminate most bugs at compile time.
  • Never, ever specify an axis index again. Instead of writing things like reduce_sum(x, axis=1), write x |> SumAlong(AxisName) (see the sketch after this list).
  • Automatic typeclass derivation: Differentiation through any case class (product type).
  • Versatile switching between eager and lazy evaluation.
  • [TODO] Typesafe tensor sizes using literal singleton types (Scala 2.13+).
  • [TODO] Automatic batching over sequences/trees (Neubig, Goldberg, Dyer, NIPS 2017). This frees programmers from the pain of manual batching.
  • [TODO] GPU acceleration. Reuse the Torch C++ core through SWIG bindings.
  • [TODO] Multiple backends. Torch / MXNet? / TensorFlow.js for Scala.js? / libtorch for ScalaNative?
  • [TODO] Automatic operator fusion for optimization.
  • [TODO] Typesafe higher-order gradients / Jacobians.
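
Below is a hedged sketch of the "typeful" and "never specify an axis index" bullets above. The axis names Width, Height and Channel, and the exact behavior of SumAlong on tuple-typed tensors, are assumptions made for illustration; the real API may differ in details:

  // Hypothetical sketch: axis labels declared as types, reductions selected by axis name.
  class Width extends Dim;   val Width = new Width
  class Height extends Dim;  val Height = new Height
  class Channel extends Dim; val Channel = new Channel

  val image = Input[FloatTensor[(Width, Height, Channel)]]()     // e.g. one RGB image
  val perChannel = image |> SumAlong(Width) |> SumAlong(Height)
  // intended type: Symbolic[FloatTensor[Channel]] -- no axis=0 / axis=1 indices anywhere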

Modules

Nexus is modularized. It contains the following modules:

Module             Description
nexus-tensor       Foundations for typesafe tensors
nexus-diff         Typesafe deep learning (differentiable programming)
nexus-prob         Typesafe probabilistic programming
nexus-ml           High-level machine learning abstractions / models
nexus-jvm-backend  JVM reference backend (slow)
nexus-torch        Torch native CPU backend
nexus-torch-cuda   Torch CUDA GPU backend

Citation

Please cite this in academic work as:

@inproceedings{chen2017typesafe,
 author = {Chen, Tongfei},
 title = {Typesafe Abstractions for Tensor Operations (Short Paper)},
 booktitle = {Proceedings of the 8th ACM SIGPLAN International Symposium on Scala},
 series = {SCALA 2017},
 year = {2017},
 pages = {45--50},
 url = {http://doi.acm.org/10.1145/3136000.3136001},
 doi = {10.1145/3136000.3136001}
}