
robotsorcerer / FARNN

License: MIT
Code that trains cancer soft-robot networks

Programming Languages: Lua, MATLAB


Nonlinear Systems Identification Using Deep Dynamic Neural Networks

Paper: https://arxiv.org/abs/1610.01439

Author: Olalekan Ogunmolu

This repo contains the code for reproducing the results introduced in the paper, Nonlinear Systems Identification Using Deep Dynamic Neural Networks.

Description

Nonlinear Systems Identification Using Deep Dynamic Neural Networks

Dependencies

This code is written in Lua/Torch and runs on Torch7. I recommend you follow the instructions on the torch website to get the torch7 package installed. A typical installation involves running the following commands in a terminal:

	curl -s https://raw.githubusercontent.com/torch/ezinstall/master/install-deps | bash
	git clone https://github.com/torch/distro.git ~/torch --recursive
	cd ~/torch; ./install.sh

Then add the installation to your path variable by doing:

	# On Linux
	source ~/.bashrc
	# On OSX
	source ~/.profile

By default, the code runs on GPU 0. To get things running on CUDA, you need a few more dependencies, namely cunn, cudnn, and cutorch.

First, you will have to install the CUDA Toolkit by downloading the Debian package from NVIDIA's website and installing it with dpkg. Otherwise, this bash script will fetch the source files for CUDA 7.0 and install them on your computer.

If you'd like to use the cudnn backend (this is enabled by default), you also have to install cudnn's torch wrapper. First follow the link to the NVIDIA website, register, and download the cudnn library. Then make sure your LD_LIBRARY_PATH points to the lib64 folder that contains the library (e.g. libcudnn.so.7.0.64). Then git clone the cudnn.torch repo, cd inside, and run luarocks make cudnn-scm-1.rockspec to build the Torch bindings.
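As a sketch, the steps above look like the following, assuming the cuDNN archive was unpacked to $HOME/cudnn (a hypothetical location; adjust to wherever yours lives). The clone-and-build lines are shown commented out since they require a working torch/luarocks install:

```shell
# Hypothetical install location for the cuDNN library; adjust as needed.
CUDNN_ROOT=$HOME/cudnn
export LD_LIBRARY_PATH=$CUDNN_ROOT/lib64:$LD_LIBRARY_PATH

# Build the Torch bindings (uncomment once torch and luarocks are installed):
# git clone https://github.com/soumith/cudnn.torch.git ~/cudnn.torch
# cd ~/cudnn.torch && luarocks make cudnn-scm-1.rockspec
```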

You will also need other Torch packages, including nn and matio. Running the following in a terminal lists the rocks you have installed:

luarocks list

If the above command does not list these dependencies, you can install them via the following commands:

  • NN
	luarocks install nn
  • CUNN
	luarocks install cunn
  • CUTORCH
	luarocks install cutorch
  • MATIO

On Ubuntu, you can simply install the matio development library from the Ubuntu repositories:

sudo apt-get install libmatio2

Then do

	luarocks install matio

To run the Hammerstein models described in the paper, also install the rnn package:

	luarocks install rnn

Test code

To test this code, make sure posemat7.mat is in the root directory of your project. Then run the farnn.lua script as

	th main.lua

Training Models

By default, this trains the Hammerstein LSTM architecture described in the paper. To use a different model such as mlp, fastlstm, rnn, or gru, use the commands below. To train on a specific dataset mentioned in the paper, pass the dataset name as a command-line argument (e.g., -data softRobot for the soft-robot dataset or -data glassfurnace for the glassfurnace dataset).

MLP Models

	th main.lua -model mlp

RNN Models

	th main.lua -model rnn

FastLSTM models

	th main.lua -fastlstm

GRU Models

	th main.lua -gru
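Model and dataset flags can be combined. For example, training the GRU variant on the soft-robot dataset might look like this; the invocation is guarded so the line is a no-op if torch is not installed:

```shell
# Guarded invocation: runs training only if torch's `th` binary is on the PATH.
CMD="th main.lua -gru -data softRobot"
if command -v th >/dev/null 2>&1; then
  $CMD
else
  echo "torch not found; would run: $CMD"
fi
```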

Options

  • -seed, 123, 'initial seed for random number generator'
  • -silent, true, 'false|true: 0 for false, 1 for true'
  • -dir, 'outputs', 'directory to log training data'

-- Model Order Determination Parameters

  • -data,'glassfurnace','path to -v7.3 Matlab data e.g. robotArm | glassfurnace | ballbeam | soft_robot'
  • -tau, 5, 'what is the delay in the data?'
  • -m_eps, 0.01, 'stopping criterion for output order determination'
  • -l_eps, 0.05, 'stopping criterion for input order determination'
  • -trainStop, 0.5, 'stopping criterion for neural net training'
  • -sigma, 0.01, 'initialize weights from a Gaussian distribution with this standard deviation'

--Gpu settings

  • -gpu, 0, 'which gpu to use. -1 = use CPU; >=0 use gpu'
  • -backend, 'cudnn', 'nn|cudnn'

-- Neural Network settings

  • -learningRate,1e-3, 'learning rate for the neural network'
  • -learningRateDecay,1e-3, 'learning rate decay to bring the network to the desired minimum'
  • -momentum, 0.9, 'momentum for sgd algorithm'
  • -model, 'lstm', 'mlp|lstm|linear|rnn'
  • -gru, false, 'use Gated Recurrent Units (nn.GRU instead of nn.Recurrent)'
  • -fastlstm, false, 'use LSTMS without peephole connections?'
  • -netdir, 'network', 'directory to save the network'
  • -optimizer, 'mse', 'mse|sgd'
  • -coefL1, 0.1, 'L1 penalty on the weights'
  • -coefL2, 0.2, 'L2 penalty on the weights'
  • -plot, true, 'true|false'
  • -maxIter, 10000, 'max. number of iterations; must be a multiple of batchSize'

-- RNN/LSTM Settings

  • -rho, 5, 'length of sequence to go back in time'
  • -dropout, true, 'apply dropout with this probability after each rnn layer. dropout <= 0 disables it.'
  • -dropoutProb, 0.35, 'probability of zeroing a neuron (dropout probability)'
  • -rnnlearningRate,1e-3, 'learning rate for the recurrent neural network'
  • -decay, 0, 'rnn learning rate decay for rnn'
  • -batchNorm, false, 'apply Szegedy and Ioffe's batch norm?'
  • -hiddenSize, {1, 10, 100}, 'number of hidden units used at the output of each recurrent layer. When more than one is specified, RNNs/LSTMs/GRUs are stacked'
  • -batchSize, 100, 'Batch Size for mini-batch training'
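Putting several of the options together, a CPU-only run might be launched as sketched below. The values are illustrative only; the flag names come from the option list above, and the sketch checks the constraint that -maxIter be a multiple of -batchSize before printing the command it would run:

```shell
# Illustrative values only; flag names are taken from the option list above.
BATCH=50
MAXITER=5000
# -maxIter must be a multiple of -batchSize.
if [ $((MAXITER % BATCH)) -ne 0 ]; then
  echo "maxIter must be a multiple of batchSize" >&2
  exit 1
fi
echo "th main.lua -model lstm -data softRobot -gpu -1 -backend nn -batchSize $BATCH -maxIter $MAXITER"
```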