
levimcclenny / SA-PINNs

Licence: other
Implementation of the paper "Self-Adaptive Physics-Informed Neural Networks using a Soft Attention Mechanism" [AAAI-MLPS 2021]

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to SA-PINNs

QuasiMonteCarlo.jl
Lightweight and easy generation of quasi-Monte Carlo sequences with a ton of different methods on one API for easy parameter exploration in scientific machine learning (SciML)
Stars: ✭ 47 (+46.88%)
Mutual labels:  sciml, physics-informed-learning
sciml.ai
The SciML Scientific Machine Learning Software Organization Website
Stars: ✭ 38 (+18.75%)
Mutual labels:  sciml, physics-informed-learning
ls1-mardyn
ls1-MarDyn is a massively parallel Molecular Dynamics (MD) code for large systems. Its main target is the simulation of thermodynamics and nanofluidics. ls1-MarDyn is designed with a focus on performance and easy extensibility.
Stars: ✭ 17 (-46.87%)
Mutual labels:  scientific-computing
dfogn
DFO-GN: Derivative-Free Optimization using Gauss-Newton
Stars: ✭ 20 (-37.5%)
Mutual labels:  scientific-computing
vtkbool
A new boolean operations filter for VTK
Stars: ✭ 77 (+140.63%)
Mutual labels:  scientific-computing
Cpp-Examples
Numerical C++ examples.
Stars: ✭ 38 (+18.75%)
Mutual labels:  scientific-computing
bitpit
Open source library for scientific HPC
Stars: ✭ 80 (+150%)
Mutual labels:  scientific-computing
conduit
Simplified Data Exchange for HPC Simulations
Stars: ✭ 114 (+256.25%)
Mutual labels:  scientific-computing
dishtiny
DISHTINY: A Platform for Studying Open-Ended Evolutionary Transitions in Individuality
Stars: ✭ 25 (-21.87%)
Mutual labels:  scientific-computing
scim
[wip]Speech recognition tool-box written by Nim. Based on Arraymancer.
Stars: ✭ 17 (-46.87%)
Mutual labels:  scientific-computing
reprozip-examples
Examples and demos for ReproZip
Stars: ✭ 13 (-59.37%)
Mutual labels:  scientific-computing
DataSciPy
Data Science with Python
Stars: ✭ 15 (-53.12%)
Mutual labels:  scientific-computing
PyMFEM
Python wrapper for MFEM
Stars: ✭ 91 (+184.38%)
Mutual labels:  scientific-computing
combi
Pythonic package for combinatorics
Stars: ✭ 51 (+59.38%)
Mutual labels:  scientific-computing
MACSio
A Multi-purpose, Application-Centric, Scalable I/O Proxy Application
Stars: ✭ 28 (-12.5%)
Mutual labels:  scientific-computing
python-data-science-project
Template repository for a Python 3-based (data) science project
Stars: ✭ 54 (+68.75%)
Mutual labels:  scientific-computing
monolish
monolish: MONOlithic LInear equation Solvers for Highly-parallel architecture
Stars: ✭ 166 (+418.75%)
Mutual labels:  scientific-computing
SciPyDiffEq.jl
Wrappers for the SciPy differential equation solvers for the SciML Scientific Machine Learning organization
Stars: ✭ 19 (-40.62%)
Mutual labels:  sciml
BoneJ2
Plugins for bone image analysis
Stars: ✭ 17 (-46.87%)
Mutual labels:  scientific-computing
owl ode
Owl's Differential Equation Solvers
Stars: ✭ 24 (-25%)
Mutual labels:  scientific-computing

Self-Adaptive PINN - Official Implementation

Self-Adaptive Physics-Informed Neural Networks using a Soft Attention Mechanism [AAAI-MLPS 2021]

Levi McClenny¹,², Ulisses Braga-Neto¹

Accepted to AAAI-MLPS 2021

Update: The self-adaptive implementations of the Allen-Cahn, Burgers, and Helmholtz PDE systems, shown here, are available in our new package TensorDiffEq

These examples in TensorDiffEq are available here

Paper: https://arxiv.org/pdf/2009.04544.pdf

Abstract: Physics-Informed Neural Networks (PINNs) have emerged recently as a promising application of deep neural networks to the numerical solution of nonlinear partial differential equations (PDEs). However, the original PINN algorithm is known to suffer from stability and accuracy problems in cases where the solution has sharp spatio-temporal transitions. These stiff PDEs require an unreasonably large number of collocation points to be solved accurately. It has been recognized that adaptive procedures are needed to force the neural network to fit accurately the stubborn spots in the solution of stiff PDEs. To accomplish this, previous approaches have used fixed weights hard-coded over regions of the solution deemed to be important. In this paper, we propose a fundamentally new method to train PINNs adaptively, where the adaptation weights are fully trainable, so the neural network learns by itself which regions of the solution are difficult and is forced to focus on them, which is reminiscent of the soft multiplicative-mask attention mechanisms used in computer vision. The basic idea behind these Self-Adaptive PINNs is to make the weights increase where the corresponding loss is higher, which is accomplished by training the network to simultaneously minimize the losses and maximize the weights, i.e., to find a saddle point in the cost surface. We show that this is formally equivalent to solving a PDE-constrained optimization problem using a penalty-based method, though in a way where the monotonically-nondecreasing penalty coefficients are trainable. In numerical experiments with an Allen-Cahn stiff PDE, the Self-Adaptive PINN outperformed other state-of-the-art PINN algorithms in L2 error by a wide margin, while using a smaller number of training epochs. An Appendix contains additional results with Burgers' and Helmholtz PDEs, which confirmed the trends observed in the Allen-Cahn experiments.

¹Texas A&M Dept. of Electrical Engineering, College Station, TX
²US Army CCDC Army Research Lab, Aberdeen Proving Ground/Adelphi, MD
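The saddle-point idea described in the abstract — simultaneous gradient *descent* on the network parameters and gradient *ascent* on the self-adaptation weights — can be illustrated with a small NumPy sketch. This is a schematic toy, not the repository's implementation: the "network" is a single scalar parameter, and all names and values are made up for illustration.

```python
import numpy as np

# Toy illustration of the min-max (saddle-point) update: the model
# parameter `theta` descends the weighted loss, while the self-adaptation
# weights `lam` ascend it, so the weight grows fastest where the
# pointwise loss is largest.

x = np.array([1.0, 5.0])   # two "collocation points"; the second starts farther off
theta = 0.0                # stand-in for the network parameters (a single scalar)
lam = np.ones(2)           # one trainable self-adaptation weight per point
lr = 0.01                  # shared learning rate for both updates

for _ in range(30):
    residual = theta - x              # pointwise residuals
    losses = residual ** 2            # pointwise losses L_i
    # Descent step in theta on J = sum_i lam_i * L_i:
    theta -= lr * np.sum(2.0 * lam * residual)
    # Ascent step in lam: dJ/dlam_i is simply L_i, so the weights only grow:
    lam += lr * losses

print("lam =", lam)    # the weight on the initially harder point ends up larger
print("theta =", theta)
```

In the actual SA-PINN, theta is the full network weight vector and the pointwise losses are PDE residual and boundary/initial-condition errors; one common way to realize the ascent step is to flip the sign of the adaptation weights' gradients before passing them to the optimizer.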

Requirements

The code was implemented in Python 3.7 with the following package versions:

tensorflow version = 2.3
keras version = 2.2.4

and matplotlib 3.1.1 was used for visualization. Any reasonably recent combination of numpy/matplotlib is expected to be sufficient; however, issues have been observed with TensorFlow versions below 2.3.0.
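If you want to guard against the TensorFlow version issue above at runtime, a minimal check along these lines can help (the helper name and threshold handling are illustrative, not part of this repository):

```python
def check_tf_version(version: str, minimum: tuple = (2, 3)) -> bool:
    """Return True if a version string like "2.3.0" meets `minimum` (major, minor)."""
    parts = tuple(int(p) for p in version.split(".")[:2])
    return parts >= minimum

# Typical use (sketch):
# import tensorflow as tf
# if not check_tf_version(tf.__version__):
#     raise RuntimeError(f"TensorFlow >= 2.3 required, found {tf.__version__}")
```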

Virtual Environment (Optional)

(Mac) To create a virtual environment for this code, download the repository (via git clone or the Download button at the top of the GitHub page), then navigate to the top-level folder in a terminal window and run the commands

python3 -m venv --system-site-packages ./venv
source ./venv/bin/activate

This will create a virtual environment named venv in that directory (first line) and drop you into it (second line). You can then install/uninstall package versions without affecting your system-wide environment. You can verify you're in the virtual environment if you see (venv) at the beginning of your terminal prompt. At this point you can install the exact versions of the packages used here with pip inside the venv:

pip install tensorflow==2.3 numpy==1.17.2 keras==2.2.4

Then run

python versiontest.py

And you should see the following output:

Using TensorFlow backend
tensorflow version = 2.3
keras version = 2.2.4
numpy version = 1.17.2

TeX Dependencies

(Debian) Some plots require TeX packages; you can install them with the following command:

sudo apt-get -qq install texlive-fonts-recommended texlive-fonts-extra dvipng

Data

The data used in this paper is publicly available in the Raissi implementation of Physics-Informed Neural Networks, found here. It has already been copied into the appropriate directories for use by the script files.

Usage

You can recreate the results of the paper by navigating to the folder for the desired system (e.g., the Burgers folder) and running the .py script it contains. For example, after opening the Burgers folder, simply run

python burgers.py

Training will then begin, followed by the plots.

Note

The results in the paper were calculated on a GPU. Running the full 10k/10k training iterations of Adam and L-BFGS will likely take a very long time on a CPU.

Citation

Cite using the BibTeX entry below:

@article{mcclenny2020self,
  title={Self-Adaptive Physics-Informed Neural Networks using a Soft Attention Mechanism},
  author={McClenny, Levi and Braga-Neto, Ulisses},
  journal={arXiv preprint arXiv:2009.04544},
  year={2020}
}
