
higgsfield / Imagination Augmented Agents

Building Agents with Imagination: a step-by-step PyTorch implementation

Projects that are alternatives to or similar to Imagination Augmented Agents

Lungcancerdetection
Use CNN to detect nodules in LIDC dataset.
Stars: ✭ 168 (-1.18%)
Mutual labels:  jupyter-notebook
Colab
Continual Learning tutorials and demo running on Google Colaboratory.
Stars: ✭ 168 (-1.18%)
Mutual labels:  jupyter-notebook
Structurenet
StructureNet: Hierarchical Graph Networks for 3D Shape Generation
Stars: ✭ 170 (+0%)
Mutual labels:  jupyter-notebook
Prottrans
ProtTrans provides state-of-the-art pretrained language models for proteins. ProtTrans was trained on thousands of GPUs from Summit and hundreds of Google TPUs using Transformer models.
Stars: ✭ 164 (-3.53%)
Mutual labels:  jupyter-notebook
Deep Reinforcement Learning Algorithms
31 projects implementing Deep Reinforcement Learning algorithms: Q-learning, DQN, PPO, DDPG, TD3, SAC, A2C and others. Each project is provided with a detailed training log.
Stars: ✭ 167 (-1.76%)
Mutual labels:  jupyter-notebook
2048 Deep Reinforcement Learning
Trained a convolutional neural network to play 2048 using deep reinforcement learning.
Stars: ✭ 169 (-0.59%)
Mutual labels:  jupyter-notebook
Quickdraw
Stars: ✭ 168 (-1.18%)
Mutual labels:  jupyter-notebook
Log Anomaly Detector
Log Anomaly Detection - machine learning to detect abnormal event logs
Stars: ✭ 169 (-0.59%)
Mutual labels:  jupyter-notebook
Tf Tutorials
Tutorials for deep learning course here:
Stars: ✭ 169 (-0.59%)
Mutual labels:  jupyter-notebook
Gate Decorator Pruning
Code for the NeurIPS'19 paper "Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks"
Stars: ✭ 170 (+0%)
Mutual labels:  jupyter-notebook
Deeplearning Az
Stars: ✭ 168 (-1.18%)
Mutual labels:  jupyter-notebook
Motion Ai
AI assisted motion detection for Home Assistant
Stars: ✭ 169 (-0.59%)
Mutual labels:  jupyter-notebook
Audiosignalprocessingforml
Code and slides of my YouTube series called "Audio Signal Processing for Machine Learning"
Stars: ✭ 169 (-0.59%)
Mutual labels:  jupyter-notebook
Tbbkanalysis
Scraping and analysis of data on Taobao "hot seller" (爆款) items. See the detailed analysis at —
Stars: ✭ 168 (-1.18%)
Mutual labels:  jupyter-notebook
Photorealistic Style Transfer
High-Resolution Network for Photorealistic Style Transfer
Stars: ✭ 170 (+0%)
Mutual labels:  jupyter-notebook
Deeplearning.ai Andrewng
deeplearning.ai, by Andrew Ng: all slides and notebooks + data + solutions and video links
Stars: ✭ 165 (-2.94%)
Mutual labels:  jupyter-notebook
Fastai old
OLD REPO - PLEASE USE fastai/fastai
Stars: ✭ 169 (-0.59%)
Mutual labels:  jupyter-notebook
Deep Learning Genomics Primer
Contains files for the deep learning in genomics primer.
Stars: ✭ 170 (+0%)
Mutual labels:  jupyter-notebook
Mimic extract
MIMIC-Extract: A Data Extraction, Preprocessing, and Representation Pipeline for MIMIC-III
Stars: ✭ 168 (-1.18%)
Mutual labels:  jupyter-notebook
Notebooks
Notebooks using the Hugging Face libraries 🤗
Stars: ✭ 168 (-1.18%)
Mutual labels:  jupyter-notebook

Building Agents with Imagination

Intelligent agents must have the capability to ‘imagine’ and reason about the future. Beyond that, they must be able to construct a plan using this knowledge [1]. This tutorial presents a new family of approaches for imagination-based planning:

  • Imagination-Augmented Agents for Deep Reinforcement Learning [arxiv]
  • Learning and Querying Fast Generative Models for Reinforcement Learning [arxiv]

The tutorial consists of 4 parts:

1. MiniPacman Environment

MiniPacman is played in a 15 × 19 grid-world. Characters, the ghosts and Pacman, move through a maze. The environment was written by @sracaniere from DeepMind.
[minipacman.ipynb]
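The notebook interacts with the environment through a gym-style `reset`/`step` interface. A minimal sketch of that interface under stated assumptions (the class name, channel count, and action count below are illustrative stand-ins, not the repo's actual code; only the 15 × 19 grid size comes from the text above):

```python
import numpy as np

class MiniPacmanLike:
    """Hypothetical stand-in for the MiniPacman environment's interface."""

    def __init__(self, height=15, width=19, num_actions=5):
        # Channels-first frame, as PyTorch convolutions expect.
        self.shape = (3, height, width)
        self.num_actions = num_actions

    def reset(self):
        # Return the initial observation (here just a blank frame).
        return np.zeros(self.shape, dtype=np.float32)

    def step(self, action):
        # Return (next observation, reward, done flag, info dict).
        assert 0 <= action < self.num_actions
        obs = np.random.rand(*self.shape).astype(np.float32)
        reward, done = 0.0, False
        return obs, reward, done, {}

env = MiniPacmanLike()
obs = env.reset()
obs, reward, done, info = env.step(0)
```

The agent and environment-model notebooks below only rely on this loop: observe a frame, pick an action, receive the next frame and a reward.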

2. Actor Critic

Training a standard model-free agent to play MiniPacman with advantage actor-critic (A2C).
[actor-critic.ipynb]
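A2C trains a shared network with two heads, a policy and a value estimate, and combines a policy-gradient term weighted by the advantage, a value-regression term, and an entropy bonus. A hedged sketch, assuming a 15 × 19 frame and 5 actions (the layer sizes and loss coefficients are illustrative, not the notebook's exact architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorCritic(nn.Module):
    """Shared conv encoder with policy-logit and state-value heads."""

    def __init__(self, in_channels=3, num_actions=5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2), nn.ReLU(),
            nn.Conv2d(16, 16, 3, stride=2), nn.ReLU(),
        )
        self.fc = nn.Linear(16 * 3 * 4, 64)  # 15x19 input -> 3x4 feature map
        self.policy = nn.Linear(64, num_actions)
        self.value = nn.Linear(64, 1)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        h = F.relu(self.fc(h))
        return self.policy(h), self.value(h)

def a2c_loss(logits, values, actions, returns,
             value_coef=0.5, entropy_coef=0.01):
    dist = torch.distributions.Categorical(logits=logits)
    advantages = returns - values.squeeze(-1)
    # Policy gradient: advantages are detached so they act as fixed weights.
    policy_loss = -(dist.log_prob(actions) * advantages.detach()).mean()
    value_loss = advantages.pow(2).mean()       # regress values toward returns
    entropy = dist.entropy().mean()             # bonus to keep exploration alive
    return policy_loss + value_coef * value_loss - entropy_coef * entropy
```

One optimizer step then simply backpropagates `a2c_loss` through both heads at once, which is what makes the "shared trunk" design cheap to train.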

3. Environment Model

The environment model is a recurrent neural network that can be trained in an unsupervised fashion from agent trajectories: given a past state and the current action, the environment model predicts the next state and reward.
[environment-model.ipynb]
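The core mapping is (state, action) → (next state, reward). A simplified one-step sketch of that mapping, with the action broadcast over the frame as one-hot planes (this is a feedforward stand-in with illustrative names and sizes; the tutorial's model is recurrent, and I2A-style models typically predict per-pixel classes rather than raw frames):

```python
import torch
import torch.nn as nn

class EnvModel(nn.Module):
    """Hypothetical one-step model: predicts next frame and scalar reward."""

    def __init__(self, in_channels=3, num_actions=5):
        super().__init__()
        self.num_actions = num_actions
        self.body = nn.Sequential(
            nn.Conv2d(in_channels + num_actions, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.next_frame = nn.Conv2d(32, in_channels, 1)
        self.reward = nn.Linear(32, 1)

    def forward(self, state, action):
        b, _, h, w = state.shape
        # Tile the chosen action over the frame as one-hot channel planes.
        onehot = torch.zeros(b, self.num_actions, h, w, device=state.device)
        onehot[torch.arange(b), action] = 1.0
        feat = self.body(torch.cat([state, onehot], dim=1))
        next_state = self.next_frame(feat)              # same spatial size
        reward = self.reward(feat.mean(dim=(2, 3)))     # pooled -> scalar
        return next_state, reward
```

Training is then ordinary supervised regression on (s, a, s', r) tuples collected from agent trajectories, which is why no reward signal beyond the logged one is needed.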

4. Imagination Augmented Agent [in progress]

The I2A learns to combine information from its model-free and imagination-augmented paths. The environment model is rolled out over multiple time steps into the future by initializing the imagined trajectory with the present-time real observation and subsequently feeding simulated observations into the model. A rollout encoder then processes the imagined trajectories as a whole and learns to interpret them, e.g. by extracting any information useful for the agent's decisions, or even ignoring them when necessary. This allows the agent to benefit from model-based imagination without the pitfalls of conventional model-based planning.
[imagination-augmented agent.ipynb]
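The rollout described above can be sketched in two pieces: step the learned environment model a few times starting from the real observation, then summarize the imagined frames with a recurrent encoder. All names and sizes here are illustrative stand-ins for the paper's rollout encoder, not the repo's code; `env_model` and `rollout_policy` are assumed callables:

```python
import torch
import torch.nn as nn

def imagine(env_model, rollout_policy, obs, depth=3):
    """Roll the environment model forward `depth` imagined steps."""
    frames, rewards = [], []
    state = obs
    for _ in range(depth):
        action = rollout_policy(state)            # (batch,) action indices
        state, reward = env_model(state, action)  # imagined next frame + reward
        frames.append(state)                      # feed simulation back in
        rewards.append(reward)
    # Stack along time: (depth, batch, ...) sequences for the encoder.
    return torch.stack(frames), torch.stack(rewards)

class RolloutEncoder(nn.Module):
    """Summarize one imagined trajectory into a fixed-size code."""

    def __init__(self, frame_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(frame_dim, hidden)

    def forward(self, frames):                    # (depth, batch, C, H, W)
        d, b = frames.shape[:2]
        seq = frames.reshape(d, b, -1)            # flatten each imagined frame
        _, (h, _) = self.lstm(seq)
        return h[-1]                              # (batch, hidden) summary
```

The I2A head would concatenate this summary (one per rollout, one rollout per action) with the model-free features before the policy and value layers, which is the "combining both paths" step the text describes.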

More materials on model-based + model-free RL

  • The Predictron: End-To-End Learning and Planning [arxiv]
  • Model-Based Planning in Discrete Action Spaces [arxiv]
  • Schema Networks: Zero-shot Transfer with a Generative Causal Model of Intuitive Physics [arxiv]
  • Model-Based Value Expansion for Efficient Model-Free Reinforcement Learning [arxiv]
  • Temporal Difference Models: Model-Free Deep RL for Model-Based Control [arxiv]
  • Universal Planning Networks [arxiv]
  • World Models [arxiv]
  • Recall Traces: Backtracking Models for Efficient Reinforcement Learning [arxiv]