
Octavian-ai / gqa-node-properties

License: Unlicense
Recalling node properties from a knowledge graph

Programming Languages

python: 139,335 projects (#7 most used programming language)
shell: 77,523 projects

Projects that are alternatives to, or similar to, gqa-node-properties

DeepLearningReading
Deep Learning and Machine Learning mini-projects. Current Project: Deepmind Attentive Reader (rc-data)
Stars: ✭ 78 (+310.53%)
Mutual labels:  attention
EBIM-NLI
Enhanced BiLSTM Inference Model for Natural Language Inference
Stars: ✭ 24 (+26.32%)
Mutual labels:  attention
Hierarchical-Word-Sense-Disambiguation-using-WordNet-Senses
Word Sense Disambiguation using Word Specific models, All word models and Hierarchical models in Tensorflow
Stars: ✭ 33 (+73.68%)
Mutual labels:  attention
Recurrent-Independent-Mechanisms
Implementation of the paper Recurrent Independent Mechanisms (https://arxiv.org/pdf/1909.10893.pdf)
Stars: ✭ 90 (+373.68%)
Mutual labels:  attention
External-Attention-pytorch
🍀 PyTorch implementation of various attention mechanisms, MLP, re-parameterization, and convolution modules, helpful for further understanding papers. ⭐⭐⭐
Stars: ✭ 7,344 (+38552.63%)
Mutual labels:  attention
free-lunch-saliency
Code for "Free-Lunch Saliency via Attention in Atari Agents"
Stars: ✭ 15 (-21.05%)
Mutual labels:  attention
reasoning attention
Unofficial implementations of attention-model algorithms on the SNLI dataset
Stars: ✭ 34 (+78.95%)
Mutual labels:  attention
jeelizGlanceTracker
JavaScript/WebGL lib: detects from the webcam video feed whether the user is looking at the screen. Lightweight and robust to all lighting conditions. Great for playing/pausing videos depending on whether the user is looking, or for person detection. Link to live demo.
Stars: ✭ 68 (+257.89%)
Mutual labels:  attention
h-transformer-1d
Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning
Stars: ✭ 121 (+536.84%)
Mutual labels:  attention
chatbot
A Chinese chatbot based on deep learning, with a detailed tutorial and code; every file is thoroughly commented, making it a great choice for learning.
Stars: ✭ 94 (+394.74%)
Mutual labels:  attention
RecycleNet
Attentional Learning of Trash Classification
Stars: ✭ 23 (+21.05%)
Mutual labels:  attention
TRAR-VQA
[ICCV 2021] TRAR: Routing the Attention Spans in Transformers for Visual Question Answering -- Official Implementation
Stars: ✭ 49 (+157.89%)
Mutual labels:  attention
Attentive-Neural-Process
A Pytorch Implementation of Attentive Neural Process
Stars: ✭ 60 (+215.79%)
Mutual labels:  attention
DeepMove
Codes for WWW'18 Paper-DeepMove: Predicting Human Mobility with Attentional Recurrent Network
Stars: ✭ 120 (+531.58%)
Mutual labels:  attention
Linear-Attention-Mechanism
Attention mechanism
Stars: ✭ 27 (+42.11%)
Mutual labels:  attention
flow1d
[ICCV 2021 Oral] High-Resolution Optical Flow from 1D Attention and Correlation
Stars: ✭ 91 (+378.95%)
Mutual labels:  attention
learningspoons
NLP lecture notes and source code
Stars: ✭ 29 (+52.63%)
Mutual labels:  attention
torch-multi-head-attention
Multi-head attention in PyTorch
Stars: ✭ 93 (+389.47%)
Mutual labels:  attention
classifier multi label seq2seq attention
multi-label, classifier, text classification, multi-label text classification, BERT, ALBERT, multi-label-classification, seq2seq, attention, beam search
Stars: ✭ 26 (+36.84%)
Mutual labels:  attention
CrabNet
Predict materials properties using only composition information!
Stars: ✭ 57 (+200%)
Mutual labels:  attention

Graph Question Answering: Node properties

This codebase performs a basic Graph-Question-Answer (GQA) task: recalling node properties.

The dataset is a synthetically generated set of GQA tuples, where each graph is an imaginary transit network and each question asks about a property of a particular station in that network. For simplicity, stations have been named with random integers. For example,

What type of music plays at 3?

Answer:

Pop
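
Concretely, a single GQA tuple pairs a graph with a question and its answer. Here is a minimal sketch in Python; the field names are illustrative assumptions, not the actual CLEVR-Graph YAML schema:

# Illustrative GQA tuple; field names are assumptions, not the real
# CLEVR-Graph YAML schema.
gqa_tuple = {
    "graph": {
        "nodes": [
            {"name": "3", "music": "Pop", "cleanliness": "clean"},
            {"name": "7", "music": "Classical", "cleanliness": "dirty"},
        ],
        "edges": [("3", "7")],  # a transit line between two stations
    },
    "question": "What type of music plays at 3?",
    "answer": "Pop",
}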

Whilst this sort of property recall is trivial to perform in a database query language, we introduce two challenges:

  • The questions are posed in English, not a query language
  • The recall system is a neural network (i.e. a differentiable function)

How the system works

The system is a pure (deep) neural network implemented in TensorFlow. It takes a tokenized natural-language question as input and returns a single answer token as output.

See our Medium article for an in-depth explanation of how this network works.

The system begins by transforming the input question into integer tokens, which are then embedded as vectors.
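
As a rough illustration, this step might look like the following TensorFlow 2-style sketch; the vocabulary and embedding width are placeholder assumptions, not the values MacGraph actually uses:

import tensorflow as tf

# Placeholder vocabulary and embedding width -- assumptions for illustration.
vocab = {"<unk>": 0, "what": 1, "type": 2, "of": 3, "music": 4,
         "plays": 5, "at": 6, "3": 7, "?": 8}
embed_width = 64

question = "what type of music plays at 3 ?"
token_ids = tf.constant([[vocab.get(w, 0) for w in question.split()]])  # [batch, seq]

embedding = tf.keras.layers.Embedding(len(vocab), embed_width)
token_vectors = embedding(token_ids)  # [batch, seq, embed_width]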

Next, the control cell performs attention over the token vectors. This produces the control signal that is used by the subsequent cells to guide their actions.
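
Continuing the sketch above, a minimal control cell could score each token vector against a learned query; the exact scoring scheme here is an assumption:

# A learned query attends over the token vectors to produce the control signal.
query = tf.Variable(tf.random.normal([1, embed_width]))
scores = tf.einsum("bse,qe->bsq", token_vectors, query)    # [batch, seq, 1]
weights = tf.nn.softmax(scores, axis=1)                    # attention over tokens
control = tf.reduce_sum(weights * token_vectors, axis=1)   # [batch, embed_width]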

Then the read cell uses the control signal to select a node from the graph's node list, and then extracts one property of that node. This cell will be explained in more detail later.
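
Continuing the sketch, a plausible read cell applies two stages of content-based attention, first over the nodes and then over the selected node's properties. The shapes and the keying on the name property are assumptions based on the description above:

# Random stand-in for the embedded node table: each node is a row of
# property embeddings; real data comes from the graph in the GQA tuple.
num_nodes, num_props = 10, 4
node_table = tf.random.normal([1, num_nodes, num_props, embed_width])

# Step 1: content-based attention over nodes, keyed on the name property.
node_keys = node_table[:, :, 0, :]                          # [batch, nodes, embed]
node_weights = tf.nn.softmax(
    tf.einsum("bne,be->bn", node_keys, control), axis=1)    # soft node selection
node = tf.einsum("bn,bnpe->bpe", node_weights, node_table)  # [batch, props, embed]

# Step 2: attention over the selected node's properties.
prop_weights = tf.nn.softmax(
    tf.einsum("bpe,be->bp", node, control), axis=1)
read_output = tf.einsum("bp,bpe->be", prop_weights, node)   # [batch, embed_width]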

Finally, the output cell transforms the output of the read cell into an answer token (e.g. an integer that maps to an English word in our dictionary).
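
Completing the sketch, a minimal output cell projects the read result onto the answer vocabulary and takes the most likely token; the single dense layer is an assumption, and the real network may be deeper:

# Project the read result onto the answer vocabulary and pick the top token.
# Reuses the question vocabulary as a stand-in for the answer dictionary.
output_layer = tf.keras.layers.Dense(len(vocab))
logits = output_layer(read_output)          # [batch, vocab_size]
answer_id = tf.argmax(logits, axis=-1)      # integer answer token
inv_vocab = {i: w for w, i in vocab.items()}
print(inv_vocab[int(answer_id[0])])         # maps back to an English word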

This code is a snapshot of MacGraph, simplified down to just this task. The network takes inspiration from the MACnet architecture.

Running

First, set up the prerequisites:

pipenv install
pipenv shell

Training

python -m macgraph.train

Building the data

You'll need to get a YAML file from CLEVR-Graph.

Either download our pre-built YAML or create your own:

clevr-graph$ ./generate-station-properties.sh

You can then compile that into TFRecords:

python -m macgraph.input.build --gqa-path gqa-sa-small-100k.yaml --input-dir ./input_data/my_build

We provide pre-compiled TFRecords; also, the train.py script will automatically download and extract this zip file if it doesn't find any training data.

Visualising the predictions

./predict.sh will run the latest trained model in prediction mode. Alternatively, you can run the Python script yourself on any model directory you wish:

python -m macgraph.predict --model-dir ./output/my_model
