
SciML / NeuralPDE.jl

Licence: other
Physics-Informed Neural Networks (PINN) and Deep BSDE Solvers of Differential Equations for Scientific Machine Learning (SciML) accelerated simulation

Programming Languages

julia (2034 projects)

Projects that are alternatives to, or similar to, NeuralPDE.jl

Ergo
🧠 A tool that makes AI easier.
Stars: ✭ 264 (-10.51%)
Mutual labels:  neural-networks
Rlgraph
RLgraph: Modular computation graphs for deep reinforcement learning
Stars: ✭ 272 (-7.8%)
Mutual labels:  neural-networks
Uncertainty Baselines
High-quality implementations of standard and SOTA methods on a variety of tasks.
Stars: ✭ 278 (-5.76%)
Mutual labels:  neural-networks
Keras Vis
Neural network visualization toolkit for keras
Stars: ✭ 2,900 (+883.05%)
Mutual labels:  neural-networks
Moniel
Interactive Notation for Computational Graphs
Stars: ✭ 272 (-7.8%)
Mutual labels:  neural-networks
Awesome Distributed Deep Learning
A curated list of awesome Distributed Deep Learning resources.
Stars: ✭ 277 (-6.1%)
Mutual labels:  neural-networks
Deeplearning.ai Notes
These are my notes, prepared during the Deep Learning Specialization taught by AI guru Andrew Ng. I have used diagrams and code snippets from the course whenever needed, while following the Honor Code.
Stars: ✭ 262 (-11.19%)
Mutual labels:  neural-networks
Komputation
Komputation is a neural network framework for the Java Virtual Machine written in Kotlin and CUDA C.
Stars: ✭ 295 (+0%)
Mutual labels:  neural-networks
Flux.jl
Relax! Flux is the ML library that doesn't make you tensor
Stars: ✭ 3,358 (+1038.31%)
Mutual labels:  neural-networks
Rust Autograd
Tensors and differentiable operations (like TensorFlow) in Rust
Stars: ✭ 278 (-5.76%)
Mutual labels:  neural-networks
Awesome Ai Awesomeness
A curated list of awesome awesomeness about artificial intelligence
Stars: ✭ 268 (-9.15%)
Mutual labels:  neural-networks
Pycox
Survival analysis with PyTorch
Stars: ✭ 269 (-8.81%)
Mutual labels:  neural-networks
Sealion
The first machine learning framework that encourages learning ML concepts instead of memorizing class functions.
Stars: ✭ 278 (-5.76%)
Mutual labels:  neural-networks
Deeplearning Challenges
Codes for weekly challenges on Deep Learning by Siraj
Stars: ✭ 264 (-10.51%)
Mutual labels:  neural-networks
Deep Learning Papers
Papers about deep learning ordered by task, date. Current state-of-the-art papers are labelled.
Stars: ✭ 3,054 (+935.25%)
Mutual labels:  neural-networks
Graph Based Deep Learning Literature
links to conference publications in graph-based deep learning
Stars: ✭ 3,428 (+1062.03%)
Mutual labels:  neural-networks
Sharpneat
SharpNEAT - Evolution of Neural Networks. A C# .NET Framework.
Stars: ✭ 273 (-7.46%)
Mutual labels:  neural-networks
Mlpractical
Machine Learning Practical course repository
Stars: ✭ 295 (+0%)
Mutual labels:  neural-networks
Pix2depth
DEPRECATED: Depth Map Estimation from Monocular Images
Stars: ✭ 293 (-0.68%)
Mutual labels:  neural-networks
Pytorch Lesson Zh
A PyTorch tutorial: teaching guaranteed, mastery not guaranteed
Stars: ✭ 279 (-5.42%)
Mutual labels:  neural-networks

NeuralPDE

(Badges: Gitter chat at https://gitter.im/JuliaDiffEq/Lobby, CI build status, codecov.io coverage, and links to the stable and dev documentation.)

NeuralPDE.jl is a solver package consisting of neural network solvers for partial differential equations (PDEs), using scientific machine learning (SciML) techniques such as physics-informed neural networks (PINNs) and deep BSDE solvers. The package uses deep neural networks and neural stochastic differential equations to solve high-dimensional PDEs at a greatly reduced cost and with greatly increased generality compared to classical methods.
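
As a rough sketch of the PINN idea (generic notation, not the package's internal API): a neural network u_ΞΈ is substituted into the differential equation and its boundary/initial conditions, and the squared residuals at sampled collocation points form the training loss

\[
\mathcal{L}(\theta) = \frac{1}{N_r}\sum_{i=1}^{N_r}\left|\mathcal{N}[u_\theta](x_i)\right|^2 + \frac{1}{N_b}\sum_{j=1}^{N_b}\left|\mathcal{B}[u_\theta](x_j)\right|^2 ,
\]

where \mathcal{N} denotes the differential operator of the PDE and \mathcal{B} the boundary/initial conditions. Minimizing this loss with a gradient-based optimizer yields an approximate solution, as in the Poisson example below.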

Installation

Assuming that you already have Julia correctly installed, it suffices to install NeuralPDE.jl in the standard way, that is, by typing ] add NeuralPDE. Note: to exit the Pkg REPL-mode, just press Backspace or Ctrl + C.
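
Equivalently, here is a minimal non-interactive sketch using the standard Pkg API (handy in scripts or CI):

using Pkg
Pkg.add("NeuralPDE")   # install the registered NeuralPDE.jl package
using NeuralPDE        # then load it as usual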

Tutorials and Documentation

For information on using the package, see the stable documentation. Use the in-development documentation for the version that includes the unreleased features.

Features

  • Physics-Informed Neural Networks for automated PDE solving
  • Forward-Backwards Stochastic Differential Equation (FBSDE) methods for parabolic PDEs
  • Deep-learning-based solvers for optimal stopping time and Kolmogorov backwards equations

Example: Solving 2D Poisson Equation via Physics-Informed Neural Networks

using NeuralPDE, Flux, ModelingToolkit, GalacticOptim, Optim, DiffEqFlux

@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2

# 2D PDE
eq  = Dxx(u(x,y)) + Dyy(u(x,y)) ~ -sin(pi*x)*sin(pi*y)

# Boundary conditions
bcs = [u(0,y) ~ 0.0f0, u(1,y) ~ -sin(pi*1)*sin(pi*y),
       u(x,0) ~ 0.0f0, u(x,1) ~ -sin(pi*x)*sin(pi*1)]
# Space and time domains
domains = [x ∈ IntervalDomain(0.0,1.0),
           y ∈ IntervalDomain(0.0,1.0)]
# Discretization
dx = 0.1

# Neural network
dim = 2 # number of dimensions
chain = FastChain(FastDense(dim,16,Flux.Οƒ),FastDense(16,16,Flux.Οƒ),FastDense(16,1))

# GridTraining(dx) places training (collocation) points on a uniform grid with spacing dx
discretization = PhysicsInformedNN(chain, GridTraining(dx))

pde_system = PDESystem(eq,bcs,domains,[x,y],[u])
prob = discretize(pde_system,discretization)

# callback: print the current loss; returning false keeps the optimization running
cb = function (p,l)
    println("Current loss is: $l")
    return false
end

res = GalacticOptim.solve(prob, Optim.BFGS(); cb = cb, maxiters=1000)
phi = discretization.phi

And some analysis:

xs,ys = [domain.domain.lower:dx/10:domain.domain.upper for domain in domains]
analytic_sol_func(x,y) = (sin(pi*x)*sin(pi*y))/(2pi^2)

u_predict = reshape([first(phi([x,y],res.minimizer)) for x in xs for y in ys],(length(xs),length(ys)))
u_real = reshape([analytic_sol_func(x,y) for x in xs for y in ys], (length(xs),length(ys)))
diff_u = abs.(u_predict .- u_real)

using Plots
p1 = plot(xs, ys, u_real, linetype=:contourf,title = "analytic");
p2 = plot(xs, ys, u_predict, linetype=:contourf,title = "predict");
p3 = plot(xs, ys, diff_u,linetype=:contourf,title = "error");
plot(p1,p2,p3)

(Figure: contour plots of the analytic solution, the PINN prediction, and the pointwise error.)
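
The comparison function used above is indeed the exact solution: u(x,y) = sin(Ο€x)sin(Ο€y)/(2π²) satisfies u_xx + u_yy = -sin(Ο€x)sin(Ο€y) with the boundary values given in bcs. As a quick sanity check that is not part of the original example, one could also report the worst-case error over the evaluation grid:

println("maximum absolute error: ", maximum(diff_u))  # diff_u is computed in the analysis block above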

Example: Solving a 100-Dimensional Hamilton-Jacobi-Bellman Equation

using NeuralPDE
using Flux
using DifferentialEquations
using LinearAlgebra
d = 100 # number of dimensions
X0 = fill(0.0f0, d) # initial value of stochastic control process
tspan = (0.0f0, 1.0f0)
Ξ» = 1.0f0

g(X) = log(0.5f0 + 0.5f0 * sum(X.^2))            # terminal condition g(x) = log((1 + |x|^2)/2)
f(X,u,Οƒα΅€βˆ‡u,p,t) = -Ξ» * sum(Οƒα΅€βˆ‡u.^2)              # nonlinearity: quadratic penalty on Οƒα΅€βˆ‡u
ΞΌ_f(X,p,t) = zero(X)                              # drift: zero vector of length d
Οƒ_f(X,p,t) = Diagonal(sqrt(2.0f0) * ones(Float32, d)) # diffusion: constant √2Β·I (d x d matrix)
# terminal-value PDE problem assembled from g, f, the drift/diffusion, the initial state, and the time span
prob = TerminalPDEProblem(g, f, ΞΌ_f, Οƒ_f, X0, tspan)
hls = 10 + d # hidden layer size
opt = Flux.ADAM(0.01)  # optimizer
# sub-neural network approximating solutions at the desired point
u0 = Flux.Chain(Dense(d, hls, relu),
                Dense(hls, hls, relu),
                Dense(hls, 1))
# sub-neural network approximating Οƒα΅€βˆ‡u (the diffusion-scaled spatial gradient) at each time point
Οƒα΅€βˆ‡u = Flux.Chain(Dense(d + 1, hls, relu),
                  Dense(hls, hls, relu),
                  Dense(hls, hls, relu),
                  Dense(hls, d))
pdealg = NNPDENS(u0, Οƒα΅€βˆ‡u, opt=opt)
@time ans = solve(prob, pdealg, verbose=true, maxiters=100, trajectories=100,
                            alg=EM(), dt=1.2, pabstol=1f-2)
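
For context, this setup is the high-dimensional Hamilton-Jacobi-Bellman benchmark from the deep BSDE literature (a linear-quadratic-Gaussian control problem): the forward process has zero drift and constant diffusion √2Β·I, the nonlinearity f = -λ·sum(Οƒα΅€βˆ‡u.^2) penalizes the gradient, and the terminal condition is

\[
g(x) = \log\!\left(\frac{1 + \lVert x \rVert^2}{2}\right).
\]

The two networks play the roles described in the sketch under Features: u0 approximates the scalar quantity of interest u(0, X0), while Οƒα΅€βˆ‡u approximates the gradient process that drives the backward equation. The trajectories keyword controls how many simulated paths of the forward SDE are used per loss evaluation, and alg=EM() integrates them with the Euler-Maruyama scheme.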