
google / Neural Logic Machines

License: Apache-2.0
Implementation for the Neural Logic Machines (NLM).

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Neural Logic Machines

Simple rl
A simple framework for experimenting with Reinforcement Learning in Python.
Stars: ✭ 179 (-9.14%)
Mutual labels:  reinforcement-learning
Crypto trader
Q-Learning Based Cryptocurrency Trader and Portfolio Optimizer for the Poloniex Exchange
Stars: ✭ 184 (-6.6%)
Mutual labels:  reinforcement-learning
Reinforcementlearning.jl
A reinforcement learning package for Julia
Stars: ✭ 192 (-2.54%)
Mutual labels:  reinforcement-learning
Retro Learning Environment
The Retro Learning Environment (RLE) -- a learning framework for AI
Stars: ✭ 180 (-8.63%)
Mutual labels:  reinforcement-learning
Awesome Game Ai
Awesome Game AI materials of Multi-Agent Reinforcement Learning
Stars: ✭ 185 (-6.09%)
Mutual labels:  reinforcement-learning
Kbgan
Code for "KBGAN: Adversarial Learning for Knowledge Graph Embeddings" https://arxiv.org/abs/1711.04071
Stars: ✭ 186 (-5.58%)
Mutual labels:  reinforcement-learning
Reinforce.jl
Abstractions, algorithms, and utilities for reinforcement learning in Julia
Stars: ✭ 178 (-9.64%)
Mutual labels:  reinforcement-learning
Paac
Open source implementation of the PAAC algorithm presented in Efficient Parallel Methods for Deep Reinforcement Learning
Stars: ✭ 196 (-0.51%)
Mutual labels:  reinforcement-learning
Rlcycle
A library for ready-made reinforcement learning agents and reusable components for neat prototyping
Stars: ✭ 184 (-6.6%)
Mutual labels:  reinforcement-learning
Neural Localization
Train an RL agent to localize actively (PyTorch)
Stars: ✭ 193 (-2.03%)
Mutual labels:  reinforcement-learning
Promp
ProMP: Proximal Meta-Policy Search
Stars: ✭ 181 (-8.12%)
Mutual labels:  reinforcement-learning
Pomdpy
POMDPs in Python.
Stars: ✭ 183 (-7.11%)
Mutual labels:  reinforcement-learning
Free Ai Resources
🚀 FREE AI Resources - 🎓 Courses, 👷 Jobs, 📝 Blogs, 🔬 AI Research, and many more - for everyone!
Stars: ✭ 192 (-2.54%)
Mutual labels:  reinforcement-learning
Andrew Ng Notes
Handwritten notes for Andrew Ng's Coursera courses.
Stars: ✭ 180 (-8.63%)
Mutual labels:  reinforcement-learning
Dm control
DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.
Stars: ✭ 2,592 (+1215.74%)
Mutual labels:  reinforcement-learning
Gail Tf
Tensorflow implementation of generative adversarial imitation learning
Stars: ✭ 179 (-9.14%)
Mutual labels:  reinforcement-learning
Gym Sokoban
Sokoban environment for OpenAI Gym
Stars: ✭ 186 (-5.58%)
Mutual labels:  reinforcement-learning
Ailearnnotes
Artificial Intelligence Learning Notes.
Stars: ✭ 195 (-1.02%)
Mutual labels:  reinforcement-learning
Drl4recsys
Courses on Deep Reinforcement Learning (DRL) and DRL papers for recommender systems
Stars: ✭ 196 (-0.51%)
Mutual labels:  reinforcement-learning
Naf Tensorflow
"Continuous Deep Q-Learning with Model-based Acceleration" in TensorFlow
Stars: ✭ 192 (-2.54%)
Mutual labels:  reinforcement-learning

Neural Logic Machines

PyTorch implementation for the Neural Logic Machines (NLM). Please note that this is not an officially supported Google product.

The Neural Logic Machine (NLM) is a neural-symbolic architecture for both inductive learning and logic reasoning. NLMs use tensors to represent logic predicates: each predicate is grounded as True or False over a fixed set of objects. On top of this tensor representation, rules are implemented as neural operators that are applied to premise tensors and produce conclusion tensors.
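
As a rough illustration of this tensor view, here is a minimal sketch in PyTorch (not the repository's actual code; all names and sizes below are made up):

import torch
import torch.nn as nn

batch, n = 4, 10            # batch size; number of objects in the world
n_unary, n_binary = 6, 8    # number of unary/binary predicates (illustrative)

# Grounding: the truth value of every predicate over every object (or pair
# of objects) is stored as a tensor entry in [0, 1].
unary = torch.rand(batch, n, n_unary)        # groundings of P_i(x)
binary = torch.rand(batch, n, n, n_binary)   # groundings of Q_j(x, y)

# Quantification becomes a tensor reduction: "exists y. Q_j(x, y)" is a max
# over the y axis.
exists_q = binary.max(dim=2).values          # shape: [batch, n, n_binary]

# A rule as a neural operator: concatenate the premises about each object x
# and let a small MLP compute the groundings of new (conclusion) predicates.
premises = torch.cat([unary, exists_q], dim=-1)
rule = nn.Sequential(nn.Linear(n_unary + n_binary, 16), nn.Sigmoid())
conclusions = rule(premises)                 # 16 new unary predicates per object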

Neural Logic Machines
Honghua Dong*, Jiayuan Mao*, Tian Lin, Chong Wang, Lihong Li, and Denny Zhou
(*: indicates equal contribution.)
In International Conference on Learning Representations (ICLR) 2019
[Paper] [Project Page]

@inproceedings{dong2018neural,
  title     = {Neural Logic Machines},
  author    = {Honghua Dong and Jiayuan Mao and Tian Lin and Chong Wang and Lihong Li and Denny Zhou},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://openreview.net/forum?id=B1xY-hRctX},
}

Prerequisites

  • Python 3
  • PyTorch 0.4.0
  • Jacinle. This repo uses version ed90c3a.
  • Other required Python packages are specified in requirements.txt; see the Installation section below.

Installation

Clone this repository:

git clone https://github.com/google/neural-logic-machines --recursive

Install Jacinle, which is included as a submodule. You need to add its bin path to your global PATH environment variable:

export PATH=<path_to_neural_logic_machines>/third_party/Jacinle/bin:$PATH

Create a conda environment for NLM and install the requirements, including the required Python packages from both Jacinle and NLM. Most of them are covered by the anaconda meta-package:

conda create -n nlm anaconda
conda activate nlm  # activate the environment before installing into it
conda install pytorch torchvision -c pytorch
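
The remaining packages from requirements.txt can then be installed with pip in the same environment (a standard step; this assumes requirements.txt sits at the repository root):

pip install -r requirements.txt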

Usage

This repo contains 10 graph-related reasoning tasks (using supervised learning) and 3 decision-making tasks (using reinforcement learning).

We also provide pre-trained models for the 3 decision-making tasks in the models directory.

Take the Blocks World task as an example:

# To train the model:
$ jac-run scripts/blocksworld/learn_policy.py --task final
# To test the model:
$ jac-run scripts/blocksworld/learn_policy.py --task final --test-only --load models/blocksworld.pth
# Add [--test-epoch-size T] to control the number of testing cases.
# E.g., T=20 gives a quick test that usually takes ~2 minutes on CPUs.
# Sample output of testing for number=10 and number=50:
> Evaluation:
    length = 12.500000
    number = 10.000000
    score = 0.885000
    succ = 1.000000
> Evaluation:
    length = 85.800000
    number = 50.000000
    score = 0.152000
    succ = 1.000000

Please refer to the graph directory for training/inference details of other tasks.
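
For instance, training a graph task should follow the same pattern as the Blocks World command above. The script path and task name below are assumptions, so check the scripts/graph directory for the actual entry point:

$ jac-run scripts/graph/learn_graph_tasks.py --task connectivity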

Useful Command-Line Options

  • Use jac-crun GPU_ID FILE --use-gpu GPU_ID instead of jac-run FILE to run on the GPU with ID GPU_ID.
  • --model {nlm, memnet} [default: nlm]: choose memnet to use Memory Networks (https://arxiv.org/abs/1503.08895) as the baseline.
  • --runs N: repeat the experiment for N runs.
  • --dump-dir DUMP_DIR: place to dump logs/summaries/checkpoints/plays.
  • --dump-play: dump plays in JSON format for visualization; they can be viewed with our HTML visualizer (not applicable to graph tasks).
  • --test-number-begin B --test-number-step S --test-number-end E:
    defines the range of sizes of the test instances.
  • --test-epoch-size SIZE: number of test instances.

For the complete list of command-line options, see jac-run FILE -h (e.g., jac-run scripts/blocksworld/learn_policy.py -h).
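
Putting several of these options together, a GPU run with multiple repetitions and a custom dump directory might look like the following (illustrative values only):

$ jac-crun 0 scripts/blocksworld/learn_policy.py --task final --use-gpu 0 --runs 3 --dump-dir dumps/blocksworld --dump-play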
