
loliverhennigh / Steady State Flow With Neural Nets

License: Apache-2.0
A TensorFlow re-implementation of the paper Convolutional Neural Networks for Steady Flow Approximation

Programming Languages

python

Projects that are alternatives to or similar to Steady State Flow With Neural Nets

Pytorch Learners Tutorial
PyTorch tutorial for learners
Stars: ✭ 97 (-9.35%)
Mutual labels:  convolutional-neural-networks
Ynet
Y-Net: Joint Segmentation and Classification for Diagnosis of Breast Biopsy Images
Stars: ✭ 100 (-6.54%)
Mutual labels:  convolutional-neural-networks
Tiny Faces Pytorch
Finding Tiny Faces in PyTorch
Stars: ✭ 105 (-1.87%)
Mutual labels:  convolutional-neural-networks
Har Keras Cnn
Human Activity Recognition (HAR) with 1D Convolutional Neural Network in Python and Keras
Stars: ✭ 97 (-9.35%)
Mutual labels:  convolutional-neural-networks
Lsuvinit
Reference caffe implementation of LSUV initialization
Stars: ✭ 99 (-7.48%)
Mutual labels:  convolutional-neural-networks
Top Deep Learning
Top 200 deep learning Github repositories sorted by the number of stars.
Stars: ✭ 1,365 (+1175.7%)
Mutual labels:  convolutional-neural-networks
Grenade
Deep Learning in Haskell
Stars: ✭ 1,338 (+1150.47%)
Mutual labels:  convolutional-neural-networks
Mp Cnn Torch
Multi-Perspective Convolutional Neural Networks for modeling textual similarity (He et al., EMNLP 2015)
Stars: ✭ 106 (-0.93%)
Mutual labels:  convolutional-neural-networks
Antialiased Cnns
pip install antialiased-cnns to improve stability and accuracy
Stars: ✭ 1,363 (+1173.83%)
Mutual labels:  convolutional-neural-networks
Idn Caffe
Caffe implementation of "Fast and Accurate Single Image Super-Resolution via Information Distillation Network" (CVPR 2018)
Stars: ✭ 104 (-2.8%)
Mutual labels:  convolutional-neural-networks
Bayesian cnn
Bayes by Backprop implemented in a CNN in PyTorch
Stars: ✭ 98 (-8.41%)
Mutual labels:  convolutional-neural-networks
Cutmix
a Ready-to-use PyTorch Extension of Unofficial CutMix Implementations with more improved performance.
Stars: ✭ 99 (-7.48%)
Mutual labels:  convolutional-neural-networks
Sigmoidal ai
Tutoriais de Python, Data Science, Machine Learning e Deep Learning - Sigmoidal
Stars: ✭ 103 (-3.74%)
Mutual labels:  convolutional-neural-networks
Mongolian Speech Recognition
Mongolian speech recognition with PyTorch
Stars: ✭ 97 (-9.35%)
Mutual labels:  convolutional-neural-networks
Self Driving Car
Automated Driving in NFS using CNN.
Stars: ✭ 105 (-1.87%)
Mutual labels:  convolutional-neural-networks
Cnniqa
CVPR2014-Convolutional neural networks for no-reference image quality assessment
Stars: ✭ 96 (-10.28%)
Mutual labels:  convolutional-neural-networks
Keras Video Classifier
Keras implementation of video classifier
Stars: ✭ 100 (-6.54%)
Mutual labels:  convolutional-neural-networks
Sod
An Embedded Computer Vision & Machine Learning Library (CPU Optimized & IoT Capable)
Stars: ✭ 1,460 (+1264.49%)
Mutual labels:  convolutional-neural-networks
Ghostnet
CV backbones including GhostNet, TinyNet and TNT, developed by Huawei Noah's Ark Lab.
Stars: ✭ 1,744 (+1529.91%)
Mutual labels:  convolutional-neural-networks
Wav2letter.pytorch
A fully convolution-network for speech-to-text, built on pytorch.
Stars: ✭ 104 (-2.8%)
Mutual labels:  convolutional-neural-networks

This repository contains a re-implementation of the paper Convolutional Neural Networks for Steady Flow Approximation. The premise is to learn a mapping from boundary conditions to steady state fluid flow. There are a few differences and improvements between this work and the original paper, which are discussed below. This code and network architecture were later used in this paper on optimizing wing airfoils to maximize the lift-to-drag ratio.

Getting data and making TFRecords

This is the most difficult part of the project. MechSys was used to generate the fluid simulations needed for training; however, it can be difficult to set up and requires a fair number of packages. In light of this, I have made the training data set available here (about 700 MB). Place this file in the data directory to use it as the train set. The test car set can be found here; unzip it in the data directory as well.
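For reference, here is a minimal sketch of how one (boundary, flow) pair could be serialized into a TFRecord. The feature keys, shapes, and dtypes below are illustrative assumptions, not necessarily the exact schema this repository expects.

```python
import numpy as np
import tensorflow as tf

def _bytes_feature(value):
    # Wrap raw bytes in a tf.train.Feature.
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def write_flow_record(filename, boundary, flow):
    """Serialize one (boundary, flow) pair into a TFRecord file.

    boundary: binary geometry mask, e.g. shape (128, 256, 1)
    flow:     steady-state velocity field, e.g. shape (128, 256, 2)
    The feature keys below are illustrative, not the repo's exact schema.
    """
    with tf.io.TFRecordWriter(filename) as writer:
        example = tf.train.Example(features=tf.train.Features(feature={
            'boundary': _bytes_feature(boundary.astype(np.float32).tobytes()),
            'flow': _bytes_feature(flow.astype(np.float32).tobytes()),
        }))
        writer.write(example.SerializeToString())

# Example usage with placeholder arrays:
write_flow_record('data/example.tfrecord',
                  np.zeros((128, 256, 1)), np.zeros((128, 256, 2)))
```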

Training

To train, enter the train directory and run

python flow_train.py

Tensorboard

Training information such as the loss is recorded and can be viewed with TensorBoard. Checkpoints are saved in the checkpoint directory under a name corresponding to the parameters used.
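For example, assuming the summaries are written alongside the checkpoints in the checkpoint directory, TensorBoard can be started with

tensorboard --logdir=checkpoint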

Evaluation

Once the model is sufficiently trained, you can evaluate it by running

python flow_test.py

This will run through the provided car dataset and produce side-by-side comparisons. Here are a few cool images it generates. The left image is the true flow, the middle is the generated flow, and the right is the difference. As you can see, the model predicts the flow extremely well. Comparing with the images in the original paper, our method predicts much smoother flow along the boundaries.

[Example comparisons: true flow (left), generated flow (middle), difference (right)]
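If you want to render this kind of three-panel comparison yourself, here is a minimal matplotlib sketch; true_flow and predicted_flow are placeholder arrays standing in for the simulator output and the network prediction.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_comparison(true_flow, predicted_flow):
    # Three panels: ground truth, prediction, and their difference.
    diff = true_flow - predicted_flow
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    for ax, img, title in zip(axes,
                              [true_flow, predicted_flow, diff],
                              ['True', 'Generated', 'Difference']):
        ax.imshow(img)
        ax.set_title(title)
        ax.axis('off')
    plt.tight_layout()
    plt.show()

# Example usage with random data standing in for real flow fields:
plot_comparison(np.random.rand(128, 256), np.random.rand(128, 256))
```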

Learning Boundaries

While this isn't in this code base, here are some cool videos from the paper showing optimization of a wing airfoil and a heat sink.

[Video: wing airfoil optimization]

[Video: heat sink optimization]

Model details

As mentioned above, this work deviates from the original paper. Instead of using a signed distance function as input, we use a binary representation of the boundary conditions, which greatly simplifies the input. We also use a U-network approach with residual layers similar to that seen in PixelCNN++. This seems to make learning incredibly fast and reduces the need for a large dataset. Notably, our model is trained on only 3,000 flow images instead of the 100,000 listed in the paper and still produces comparable performance.
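As a rough illustration of this style of architecture (not the exact network in this repository), a small Keras U-network with residual blocks might look like the following; the filter counts, depth, and the two-channel velocity output are assumptions for the sketch.

```python
import tensorflow as tf
from tensorflow.keras import layers

def res_block(x, filters):
    # Two 3x3 convolutions with a projected skip connection (residual layer).
    skip = layers.Conv2D(filters, 1, padding='same')(x)
    h = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    h = layers.Conv2D(filters, 3, padding='same')(h)
    return layers.ReLU()(layers.Add()([skip, h]))

def build_unet(input_shape=(128, 256, 1)):
    # Input: binary boundary mask. Output: 2-channel steady-state velocity field.
    inp = tf.keras.Input(shape=input_shape)
    d1 = res_block(inp, 32)
    p1 = layers.MaxPooling2D()(d1)                      # downsample
    d2 = res_block(p1, 64)
    p2 = layers.MaxPooling2D()(d2)
    bottleneck = res_block(p2, 128)
    u2 = layers.UpSampling2D()(bottleneck)              # upsample
    u2 = res_block(layers.Concatenate()([u2, d2]), 64)  # encoder skip connection
    u1 = layers.UpSampling2D()(u2)
    u1 = res_block(layers.Concatenate()([u1, d1]), 32)
    out = layers.Conv2D(2, 1, padding='same')(u1)       # x and y velocity components
    return tf.keras.Model(inp, out)

model = build_unet()
model.compile(optimizer='adam', loss='mse')  # simple L2 loss on the flow field
```

Note that the output is produced by a 1x1 convolution rather than a fully connected layer, which keeps the network fully convolutional; this is relevant to the speed numbers below.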

Speed

The time per image at a batch size of 8 is 0.00287 seconds on a GTX 1080 GPU, roughly 3x faster than the 0.0085 seconds reported in the paper (0.0085 / 0.00287 ≈ 3.0). While our network is more complex, we achieve higher speed by not relying on any fully connected layers and keeping the network fully convolutional.
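A simple way to reproduce this kind of per-image measurement (a sketch, assuming a Keras-style model such as the one outlined in Model details) is to time repeated forward passes and divide by the batch size:

```python
import time
import numpy as np

def time_per_image(model, batch_size=8, runs=100, input_shape=(128, 256, 1)):
    batch = np.random.rand(batch_size, *input_shape).astype('float32')
    model.predict(batch)                  # warm-up pass (graph build, memory allocation)
    start = time.perf_counter()
    for _ in range(runs):
        model.predict(batch)
    elapsed = time.perf_counter() - start
    return elapsed / (runs * batch_size)  # seconds per image

# build_unet is the illustrative model from the Model details sketch above.
print(time_per_image(build_unet()))
```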
