
DonsetPG / fenics-DRL

Licence: other
Repository from the paper https://arxiv.org/abs/1908.04127, to train Deep Reinforcement Learning agents in fluid mechanics setups.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to fenics-DRL

omd
JAX code for the paper "Control-Oriented Model-Based Reinforcement Learning with Implicit Differentiation"
Stars: ✭ 43 (+7.5%)
Mutual labels:  gym
RBniCS
RBniCS - reduced order modelling in FEniCS
Stars: ✭ 5 (-87.5%)
Mutual labels:  fenics
ufl
UFL - Unified Form Language
Stars: ✭ 51 (+27.5%)
Mutual labels:  fenics
DQN
Deep-Q-Network reinforcement learning algorithm applied to a simple 2d-car-racing environment
Stars: ✭ 42 (+5%)
Mutual labels:  gym
reinforcement_learning_ppo_rnd
Deep Reinforcement Learning by using Proximal Policy Optimization and Random Network Distillation in Tensorflow 2 and Pytorch with some explanation
Stars: ✭ 33 (-17.5%)
Mutual labels:  gym
GoBigger
Come and try the decision-intelligence version of "Agar"! GoBigger can also help with multi-agent decision intelligence research.
Stars: ✭ 410 (+925%)
Mutual labels:  gym
multi_car_racing
An OpenAI Gym environment for multi-agent car racing based on Gym's original car racing environment.
Stars: ✭ 58 (+45%)
Mutual labels:  gym
CartPole
Run OpenAI Gym on a Server
Stars: ✭ 16 (-60%)
Mutual labels:  gym
pymor
pyMOR - Model Order Reduction with Python
Stars: ✭ 198 (+395%)
Mutual labels:  fenics
ecole
Extensible Combinatorial Optimization Learning Environments
Stars: ✭ 249 (+522.5%)
Mutual labels:  gym
multiphenics
multiphenics - easy prototyping of multiphysics problems in FEniCS
Stars: ✭ 33 (-17.5%)
Mutual labels:  fenics
COVID-19-Resources
Resources for Covid-19
Stars: ✭ 25 (-37.5%)
Mutual labels:  gym
ios-build-script
Shell scripts to build ipa
Stars: ✭ 52 (+30%)
Mutual labels:  gym
pytorch-distributed
Ape-X DQN & DDPG with pytorch & tensorboard
Stars: ✭ 98 (+145%)
Mutual labels:  drl
Pytorch-RL-CPP
A Repository with C++ implementations of Reinforcement Learning Algorithms (Pytorch)
Stars: ✭ 73 (+82.5%)
Mutual labels:  gym
mujoco-benchmark
Provides a full reinforcement learning benchmark on MuJoCo environments, including DDPG, SAC, TD3, PG, A2C, and PPO.
Stars: ✭ 101 (+152.5%)
Mutual labels:  drl
freqtrade-gym
A customized gym environment for developing and comparing reinforcement learning algorithms in crypto trading.
Stars: ✭ 192 (+380%)
Mutual labels:  gym
cashocs
computational adjoint-based shape optimization and optimal control software for python
Stars: ✭ 18 (-55%)
Mutual labels:  fenics
FenicsSolver
multiphysics FEM solver based on Fenics library
Stars: ✭ 52 (+30%)
Mutual labels:  fenics
safe-control-gym
PyBullet CartPole and Quadrotor environments—with CasADi symbolic a priori dynamics—for learning-based control and RL
Stars: ✭ 272 (+580%)
Mutual labels:  gym

fenics-DRL :

Repository from the paper A review on Deep Reinforcement Learning for Fluid Mechanics.

List of other repositories with CFD + DRL (if your code is not here, please feel free to make a pull request)

(Explanations on how to use the code are below)

Fenics + DRL https://github.com/DonsetPG/fenics-DRL
Flow Control of the 2D Kármán Vortex Street with Deep Reinforcement Learning https://github.com/jerabaul29/Cylinder2DFlowControlDRL
Accelerating Deep Reinforcement Learning strategies of Flow Control through a multi-environment approach https://github.com/jerabaul29/Cylinder2DFlowControlDRLParallel
Deep Reinforcement Learning control of the unstable falling liquid film https://github.com/vbelus/falling-liquid-film-drl
Direct shape optimization through deep reinforcement learning https://github.com/jviquerat/drl_shape_optimization
Fluid directed rigid ball balancing using Deep Reinforcement Learning https://github.com/sahilgupta2105/Deep-Reinforcement-Learning
Efficient collective swimming by harnessing vortices through deep reinforcement learning https://github.com/cselab/smarties
Training an RL agent to swim at low Reynolds Number https://github.com/RpDp-git/LearningToSwim-DQN

How to use the code :

Install everything :

CFD :

We used Fenics for this project. The easiest way to install it is by using Docker. Then :

docker run -ti -v $(pwd):/home/fenics/shared -w /home/fenics/shared quay.io/fenicsproject/stable:current

should install Fenics.
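
To check the installation from inside the container, you can try importing DOLFIN (the Python core that Fenics exposes); something like this should print the installed version :

python3 -c "import dolfin; print(dolfin.__version__)"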

DRL :

We are using both Gym - OpenAI and Stable-Baselines. They can both be installed with :

pip install gym 
pip install stable-baselines

More generally, everything you need can be installed with :

pip install --user tensorflow keras gym stable-baselines scikit-learn

Launch an experiment :

An experiment consists of an Environment (based on Gym - OpenAI & Fenics) and an Algorithm from Stable-Baselines. Experiments are launched with test_main.py; you only have to specify a few parameters (a sketch of the equivalent launch code follows the list below) :

  • nb_cpu : the number of CPUs you want to use (e.g. 16)
  • agents : an array of the algorithms you want to use (e.g. ['PPO2','A2C'])
  • name_env : the name of the environment (e.g. 'Control-cylinder-v0')
  • total_timesteps : the number of timesteps the training will run for (e.g. 100000)
  • text : any notes you want to attach to the experiment (e.g. '1_step_1_episode_2CPU')
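
For reference, here is a minimal sketch of what such a launch amounts to with Stable-Baselines directly; the loop below is an illustration built from the parameters above, not the exact contents of test_main.py :

import gym
from stable_baselines import PPO2, A2C
from stable_baselines.common.vec_env import SubprocVecEnv

nb_cpu = 16                                 # CPUs used by the vectorized environment
agents = {'PPO2': PPO2, 'A2C': A2C}         # algorithms to compare
name_env = 'Control-cylinder-v0'            # assumes the environment is registered with Gym
total_timesteps = 100000
text = '1_step_1_episode_2CPU'              # free-form tag for the experiment

for name, algo in agents.items():
    env = SubprocVecEnv([lambda: gym.make(name_env) for _ in range(nb_cpu)])
    model = algo('MlpPolicy', env, verbose=1)
    model.learn(total_timesteps=total_timesteps)
    model.save('{}-{}-{}'.format(name, name_env, text))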

Build your own environment :

The gym.Env environment :

You can find examples of such environments in example 1 : Control Cylinder or example 2 : Flow Control Cylinder. They always share the same architecture :

class FluidMechanicsEnv_(gym.Env):
    metadata = {'render.modes': ['human']}

    def __init__(self, **kwargs):
        ...
        self.problem = self._build_problem()
        self.reward_range = (-1, 1)
        # Bounds of both spaces are to be filled in for each environment
        self.observation_space = spaces.Box(low=np.array([]), high=np.array([]), dtype=np.float16)
        self.action_space = spaces.Box(low=np.array([]), high=np.array([]), dtype=np.float16)

    def _build_problem(self, main_drag):
        # The only Fenics-specific method: assemble and return a Problem object
        ...
        return problem

    def _next_observation(self):
        # Probe the simulation to build the agent's observation
        ...

    def step(self, action):
        # Apply the action, advance the solver, and compute the reward
        ...
        return obs, reward, done, {}

    def reset(self):
        # Return the simulation to its initial state
        ...

Here, most of these functions are DRL-related, and more information can be found in this paper (for applications of DRL to fluid mechanics) or here (for more general information about DRL). The only link with Fenics is made with the

def _build_problem(self,main_drag):
        ...
        return problem

function, where you will be using functions from Fenics.
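
For the name_env parameter used at launch to work with gym.make, the environment also has to be registered with Gym. A minimal sketch (the module path 'envs' and the episode length are assumptions, not the repository's exact layout) :

from gym.envs.registration import register

register(
    id='Control-cylinder-v0',                # the name passed as name_env at launch
    entry_point='envs:FluidMechanicsEnv_',   # 'module:class' implementing gym.Env
    max_episode_steps=200,                   # illustrative episode length
)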

Fenics functions :

We built several functions to help you use Fenics and build DRL environments with it. Three main classes exist :

  • class Channel
  • class Obstacles
  • class Problem

Channel :

Allows you to create the 'box' where your simulation will take place.

Obstacles :

Allows you to add shapes and obstacles (circles, squares, and polygons) to your environment.

Problem :

Builds the simulation from a Channel and Obstacles. It also gathers the parameters for the mesh and the solver. Finally, it is a Problem object that you return in the gym.Env class.
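
Putting the three classes together, a _build_problem implementation typically looks like the sketch below; every constructor argument is an illustrative assumption, since the exact signatures live in the repository :

def _build_problem(self):
    # The 'box' holding the simulation (bounds are illustrative)
    channel = Channel(min_bounds=(0.0, 0.0), max_bounds=(2.2, 0.41))
    # One circular cylinder placed in the flow (shape spec is illustrative)
    obstacles = Obstacles([('Circle', (0.2, 0.2), 0.05)])
    # Mesh and solver parameters are gathered by the Problem object
    problem = Problem(channel=channel,
                      obstacles=obstacles,
                      mesh_resolution=64,
                      dt=5e-4)
    return problem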

What's next :

We built this repository to make the code for fluid mechanics with DRL as clean as possible. However, Fenics is not the best solver, especially for very demanding problems. The goal is to keep the same philosophy in mind (DRL and fluid mechanics coupled easily) but with other, faster libraries. Since most of these libraries are C++ based and designed to run on powerful clusters, the architecture will be completely different. We are still working on it and doing our best to release an alpha version as soon as possible.

This repository will be updated when such a library finally comes out. Until then, we hope that, with this paper and this repository combined, some fluid mechanics researchers might want to try applying Deep Reinforcement Learning to their experiments.

The Team :
