
wangleiphy / tensorgrad

License: Apache-2.0
Differentiable Programming Tensor Networks

Programming Languages

python

Projects that are alternatives to or similar to tensorgrad

xcfun
XCFun: A library of exchange-correlation functionals with arbitrary-order derivatives
Stars: ✭ 50 (-50.98%)
Mutual labels:  automatic-differentiation
YaoBlocks.jl
Standard basic quantum circuit simulator building blocks. (Archived: moved into Yao.jl.)
Stars: ✭ 26 (-74.51%)
Mutual labels:  automatic-differentiation
autodiff
A .NET library that provides fast, accurate and automatic differentiation (computes derivative / gradient) of mathematical functions.
Stars: ✭ 69 (-32.35%)
Mutual labels:  automatic-differentiation
jet
Jet is a cross-platform library for simulating quantum circuits using tensor network contractions.
Stars: ✭ 34 (-66.67%)
Mutual labels:  tensor-networks
cMPO
Continuous-time matrix product operator for finite-temperature quantum states
Stars: ✭ 27 (-73.53%)
Mutual labels:  tensor-networks
FwiFlow.jl
Elastic Full Waveform Inversion for Subsurface Flow Problems Using Intrusive Automatic Differentiation
Stars: ✭ 24 (-76.47%)
Mutual labels:  automatic-differentiation
Fortran-Tools
Fortran compilers, preprocessors, static analyzers, transpilers, IDEs, build systems, etc.
Stars: ✭ 31 (-69.61%)
Mutual labels:  automatic-differentiation
HamiltonianSolver
Numerically solves equations of motion for a given Hamiltonian function
Stars: ✭ 51 (-50%)
Mutual labels:  automatic-differentiation
autodiffr
Automatic Differentiation for R
Stars: ✭ 21 (-79.41%)
Mutual labels:  automatic-differentiation
TensorAlgDiff
Automatic Differentiation for Tensor Algebras
Stars: ✭ 26 (-74.51%)
Mutual labels:  automatic-differentiation
Tensorial.jl
Statically sized tensors and related operations for Julia
Stars: ✭ 18 (-82.35%)
Mutual labels:  automatic-differentiation
NTFk.jl
Unsupervised Machine Learning: Nonnegative Tensor Factorization + k-means clustering
Stars: ✭ 36 (-64.71%)
Mutual labels:  tensor-networks
dopt
A numerical optimisation and deep learning framework for D.
Stars: ✭ 28 (-72.55%)
Mutual labels:  automatic-differentiation
Scientific-Programming-in-Julia
Repository for B0M36SPJ
Stars: ✭ 32 (-68.63%)
Mutual labels:  automatic-differentiation
torch
TensorLy-Torch: Deep Tensor Learning with TensorLy and PyTorch
Stars: ✭ 36 (-64.71%)
Mutual labels:  tensor-networks
MissionImpossible
A concise C++17 implementation of automatic differentiation (operator overloading)
Stars: ✭ 18 (-82.35%)
Mutual labels:  automatic-differentiation
AbstractOperators.jl
Abstract operators for large scale optimization in Julia
Stars: ✭ 26 (-74.51%)
Mutual labels:  automatic-differentiation
bayex
Bayesian Optimization in JAX
Stars: ✭ 24 (-76.47%)
Mutual labels:  automatic-differentiation
Birch
A probabilistic programming language that combines automatic differentiation, automatic marginalization, and automatic conditioning within Monte Carlo methods.
Stars: ✭ 80 (-21.57%)
Mutual labels:  automatic-differentiation
ad-lens
Automatic Differentiation using Pseudo Lenses. Neat.
Stars: ✭ 16 (-84.31%)
Mutual labels:  automatic-differentiation

Differentiable Programming Tensor Networks


Requirements

  • PyTorch 1.0+
  • A good GPU card if you are impatient or ambitious

Higher-order gradient of the free energy

Run this to compute the energy and specific heat of the 2D classical Ising model by automatic differentiation through the tensor renormalization group (TRG) contraction. Both observables are derivatives of the free energy: the energy follows from the first derivative of lnZ with respect to the inverse temperature, and the specific heat from the second.

$ cd 1_ising_TRG
$ python ising.py 

[figure: trg]
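
Under the hood, ising.py takes these derivatives with PyTorch's autograd. Here is a minimal sketch of the pattern, with a toy single-spin lnZ = log(2 cosh β) standing in for the TRG-contracted log partition function (the real contraction lives in the repo, not in this snippet):

import torch

# Toy stand-in for the TRG-contracted log partition function lnZ(beta).
def lnZ(beta):
    return torch.log(2 * torch.cosh(beta))

beta = torch.tensor(0.5, dtype=torch.float64, requires_grad=True)
logZ = lnZ(beta)

# Energy: U = -d lnZ / d beta (first-order gradient, kept differentiable).
dlogZ, = torch.autograd.grad(logZ, beta, create_graph=True)
energy = -dlogZ

# Specific heat: C = beta^2 * d^2 lnZ / d beta^2 (second-order gradient).
d2logZ, = torch.autograd.grad(dlogZ, beta)
specific_heat = beta.item() ** 2 * d2logZ.item()

print(f"U = {energy.item():.6f}, C = {specific_heat:.6f}")

The create_graph=True flag is what makes the second derivative possible: it records the computation of the first gradient so that autograd can differentiate through it again.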

Variational optimization of iPEPS

Run this to optimize an iPEPS wave function for the 2D quantum Heisenberg model. Here, we use the corner transfer matrix renormalization group (CTMRG) for contraction and L-BFGS for optimization.

$ cd 2_variational_iPEPS
$ python variational.py -D 3 -chi 30 

Supply the command-line argument -use_checkpoint to reduce memory usage. To make use of a GPU, add -cuda <GPUID>. You will reach state-of-the-art variational energy and staggered magnetization with this code. You can also supply your own Hamiltonian of interest. For the full list of options, run python variational.py -h.

[figure: heisenberg]
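
The optimization loop follows the standard PyTorch L-BFGS pattern. Below is a minimal sketch; the energy function is a toy Rayleigh quotient over a random symmetric matrix, standing in for the CTMRG contraction of ⟨ψ|H|ψ⟩/⟨ψ|ψ⟩ that variational.py actually performs:

import torch

torch.manual_seed(42)
D, d = 3, 2                          # bond and physical dimensions

# Toy stand-in for the Hamiltonian: a random symmetric matrix acting on
# the flattened site tensor. The real energy comes from a CTMRG contraction.
n = D ** 4 * d
H = torch.randn(n, n, dtype=torch.float64)
H = (H + H.t()) / 2

def energy(A):
    v = A.flatten()
    return (v @ H @ v) / (v @ v)     # Rayleigh quotient, mimicking <H>

A = torch.randn(D, D, D, D, d, dtype=torch.float64, requires_grad=True)
optimizer = torch.optim.LBFGS([A], max_iter=20, tolerance_grad=1e-10)

def closure():
    optimizer.zero_grad()
    loss = energy(A)                 # forward pass: the contraction goes here
    loss.backward()                  # backward pass: reverse-mode AD
    return loss

for step in range(10):
    loss = optimizer.step(closure)
    print(step, loss.item())

L-BFGS drives the toy energy down toward the lowest eigenvalue of H; the real script optimizes the iPEPS site tensor the same way, with gradients flowing through the entire CTMRG contraction.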

What is under the hood?

Reverse-mode AD computes gradients accurately and efficiently for you! Check the code in adlib for the backward functions that propagate gradients through tensor network contractions.
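
These backward functions are built on PyTorch's torch.autograd.Function, which pairs a forward computation with a hand-written gradient rule. As a minimal sketch of the pattern (the actual primitives in adlib handle harder steps than this), here is a custom backward for a single matrix contraction, checked against PyTorch's numerical gradient checker:

import torch

class Contract(torch.autograd.Function):
    """Computes y_ik = sum_j a_ij b_jk with a hand-written backward rule."""

    @staticmethod
    def forward(ctx, a, b):
        ctx.save_for_backward(a, b)
        return a @ b

    @staticmethod
    def backward(ctx, grad_y):
        a, b = ctx.saved_tensors
        # Reverse-mode rules for matrix multiplication:
        # dL/da = dL/dy @ b^T,  dL/db = a^T @ dL/dy
        return grad_y @ b.t(), a.t() @ grad_y

a = torch.rand(4, 5, dtype=torch.float64, requires_grad=True)
b = torch.rand(5, 3, dtype=torch.float64, requires_grad=True)
assert torch.autograd.gradcheck(Contract.apply, (a, b))

For plain matrix multiplication autograd already knows this rule; custom Functions earn their keep for steps such as truncated decompositions inside the contraction, where a stable hand-derived gradient is needed.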

To Cite

@article{PhysRevX.9.031041,
  title = {Differentiable Programming Tensor Networks},
  author = {Liao, Hai-Jun and Liu, Jin-Guo and Wang, Lei and Xiang, Tao},
  journal = {Phys. Rev. X},
  volume = {9},
  issue = {3},
  pages = {031041},
  numpages = {12},
  year = {2019},
  month = {Sep},
  publisher = {American Physical Society},
  doi = {10.1103/PhysRevX.9.031041},
  url = {https://link.aps.org/doi/10.1103/PhysRevX.9.031041}
}

Explore more

https://github.com/under-Peter/TensorNetworkAD.jl
