sinAshish / NALU-Keras

Licence: other
A Keras implementation of [Neural Arithmetic Logic Units](https://arxiv.org/pdf/1808.00508.pdf) by Andrew Trask et al.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to NALU-Keras

minionn
Privacy-preserving Neural Networks
Stars: ✭ 58 (+314.29%)
Mutual labels:  research-paper
vmam
VLAN Mac-address Authentication Manager
Stars: ✭ 19 (+35.71%)
Mutual labels:  nac
best AI papers 2021
A curated list of the latest breakthroughs in AI (in 2021) by release date with a clear video explanation, link to a more in-depth article, and code.
Stars: ✭ 2,740 (+19471.43%)
Mutual labels:  research-paper
favorite-research-papers
Listing my favorite research papers 📝 from different fields as I read them.
Stars: ✭ 12 (-14.29%)
Mutual labels:  research-paper
CycleGAN
A simple code of CycleGAN which is easy to read is implemented by TensorFlow
Stars: ✭ 21 (+50%)
Mutual labels:  paper-implementations
saliency
Contextual Encoder-Decoder Network for Visual Saliency Prediction [Neural Networks 2020]
Stars: ✭ 126 (+800%)
Mutual labels:  paper-implementations
dgcnn
Clean & Documented TF2 implementation of "An end-to-end deep learning architecture for graph classification" (M. Zhang et al., 2018).
Stars: ✭ 21 (+50%)
Mutual labels:  paper-implementations
paper-survey
Summary of machine learning papers
Stars: ✭ 26 (+85.71%)
Mutual labels:  research-paper
logically
explorations in core.logic
Stars: ✭ 108 (+671.43%)
Mutual labels:  paper-implementations
UAV-Stereo-Vision
A program for controlling a micro-UAV for obstacle detection and collision avoidance using disparity mapping
Stars: ✭ 30 (+114.29%)
Mutual labels:  research-paper
CGvsPhoto
Computer Graphics vs Real Photographic Images : A Deep-learning approach
Stars: ✭ 24 (+71.43%)
Mutual labels:  research-paper
paper annotations
A place to keep track of all the annotated papers.
Stars: ✭ 96 (+585.71%)
Mutual labels:  research-paper
material-appearance-similarity
Code for the paper "A Similarity Measure for Material Appearance" presented in SIGGRAPH 2019 and published in ACM Transactions on Graphics (TOG).
Stars: ✭ 22 (+57.14%)
Mutual labels:  paper-implementations
sioyek
Sioyek is a PDF viewer designed for reading research papers and technical books.
Stars: ✭ 3,890 (+27685.71%)
Mutual labels:  research-paper
pghumor
Is This a Joke? Humor Detection in Spanish Tweets
Stars: ✭ 48 (+242.86%)
Mutual labels:  paper-implementations
Rough-Sketch-Simplification-Using-FCNN
This is a PyTorch implementation of the paper by Simo-Serra et al. on cleaning rough sketches using fully convolutional neural networks.
Stars: ✭ 31 (+121.43%)
Mutual labels:  research-paper
speaker-recognition-papers
Share some recent speaker recognition papers and their implementations.
Stars: ✭ 92 (+557.14%)
Mutual labels:  paper-implementations
PhD
My PhD Papers and Presentations
Stars: ✭ 24 (+71.43%)
Mutual labels:  research-paper
affiliate-marketing-disclosures
Code and data belonging to our CSCW 2018 paper: "Endorsements on Social Media: An Empirical Study of Affiliate Marketing Disclosures on YouTube and Pinterest".
Stars: ✭ 22 (+57.14%)
Mutual labels:  research-paper
Sequence-to-Sequence-Learning-of-Financial-Time-Series-in-Algorithmic-Trading
My bachelor's thesis—analyzing the application of LSTM-based RNNs on financial markets. 🤓
Stars: ✭ 64 (+357.14%)
Mutual labels:  research-paper

NALU-Keras

A Keras implementation of Neural Arithmetic Logic Units by Andrew Trask et al.

While neural networks can successfully represent and manipulate numerical quantities given an appropriate learning signal, the behavior they learn does not generally exhibit systematic generalization. Specifically, one frequently observes failures when quantities outside the numerical range used during training are encountered at test time, even when the target function is simple (e.g., counting or linear extrapolation). This failure pattern indicates that the learned behavior is better characterized by memorization than by systematic abstraction. Whether the input distribution shifts that trigger such extrapolation failures matter in practice depends on the environments where the trained models will operate.

In this paper, a new module is proposed that can be used in conjunction with standard neural network architectures (e.g., LSTMs or convnets) but which is biased to learn systematic numerical computation. The strategy is to represent numerical quantities as individual neurons without a nonlinearity. To these single-value neurons, operators capable of representing simple functions (e.g., +, −, ×, etc.) are applied. These operators are controlled by parameters which determine the inputs and operations used to create each output. Despite this combinatorial character, they are differentiable, making it possible to learn them with backpropagation.
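The bias toward exact arithmetic comes from how the NAC builds its effective weight matrix: the elementwise product tanh(Ŵ) · σ(M̂) saturates toward −1, 0, or 1, so each output tends to be a signed sum of inputs rather than an arbitrary linear blend. A minimal NumPy sketch of this construction (parameter values here are illustrative, chosen to show saturation, not learned):

```python
import numpy as np

def nac_weights(w_hat, m_hat):
    # Effective NAC weight: tanh(W_hat) * sigmoid(M_hat).
    # Because both factors saturate, each effective entry is pushed
    # toward -1, 0, or 1 -- i.e., subtract, ignore, or add an input.
    return np.tanh(w_hat) * (1.0 / (1.0 + np.exp(-m_hat)))

# Large-magnitude raw parameters saturate the effective weights:
w_hat = np.array([[ 10.0, -10.0],
                  [ 10.0,  10.0]])
m_hat = np.array([[ 10.0,  10.0],
                  [-10.0,  10.0]])
W = nac_weights(w_hat, m_hat)
print(np.round(W))  # entries land near [[1, -1], [0, 1]]
```

A gradient step on `w_hat`/`m_hat` moves the effective weight smoothly between these saturated values, which is what keeps the discrete-looking operator selection differentiable.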

NALU

The NALU consists of two NAC cells (the purple cells) interpolated by a learned sigmoidal gate g (the orange cell), such that when the add/subtract subcell’s output is weighted 1 (on), the multiply/divide subcell’s output is weighted 0 (off), and vice versa. The first NAC (the smaller purple subcell) computes the accumulation vector a, which stores the results of the NALU’s addition/subtraction operations; it is computed identically to the original NAC (i.e., a = Wx). The second NAC (the larger purple subcell) operates in log space and is therefore capable of learning to multiply and divide, storing its results in m:

NAC :  a = Wx                         W = tanh(Ŵ) * σ(M̂)
NALU:  y = g * a + (1 − g) * m        m = exp(W log(|x| + ε)),  g = σ(Gx)
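The equations above can be traced end to end with a plain NumPy forward pass. This is a hedged sketch, not the repository's Keras layer: for brevity the additive and multiplicative paths share one effective weight matrix W = tanh(Ŵ) · σ(M̂) (the paper's diagram draws them as two NAC cells), and the parameter values below are hand-picked to saturate, not learned:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nalu_forward(x, W_hat, M_hat, G, eps=1e-7):
    """Single-NALU forward pass. Shapes: x is (batch, in_dim);
    W_hat, M_hat, G are (in_dim, out_dim)."""
    W = np.tanh(W_hat) * sigmoid(M_hat)      # shared NAC weight matrix
    a = x @ W                                # additive path: a = Wx
    m = np.exp(np.log(np.abs(x) + eps) @ W)  # multiplicative path, in log space
    g = sigmoid(x @ G)                       # learned sigmoidal gate
    return g * a + (1.0 - g) * m             # y = g*a + (1-g)*m

# Example: saturated parameters that select both inputs, with the gate
# pushed toward the multiplicative path, so the cell computes x1 * x2.
x = np.array([[3.0, 4.0]])
W_hat = np.full((2, 1), 10.0)   # tanh(10) ~ 1
M_hat = np.full((2, 1), 10.0)   # sigmoid(10) ~ 1 -> effective weights ~ [1, 1]
G = np.full((2, 1), -10.0)      # gate ~ 0 -> multiplicative path
print(nalu_forward(x, W_hat, M_hat, G))  # ~ [[12.]]
```

Flipping the sign of G drives the gate toward 1, and the same cell instead computes the sum 3 + 4 through the additive path, which is the interpolation behavior the gate equation describes.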