TLESORT / State-Representation-Learning-An-Overview

Licence: other
Simplified version of the "State Representation Learning for Control: An Overview" bibliography

Projects that are alternatives of or similar to State-Representation-Learning-An-Overview

Autoregressive Predictive Coding
Autoregressive Predictive Coding: An unsupervised autoregressive model for speech representation learning
Stars: ✭ 138 (+331.25%)
Mutual labels:  representation-learning, unsupervised-learning
rl singing voice
Unsupervised Representation Learning for Singing Voice Separation
Stars: ✭ 18 (-43.75%)
Mutual labels:  representation-learning, unsupervised-learning
Simclr
SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners
Stars: ✭ 2,720 (+8400%)
Mutual labels:  representation-learning, unsupervised-learning
Bagofconcepts
Python implementation of bag-of-concepts
Stars: ✭ 18 (-43.75%)
Mutual labels:  representation-learning, unsupervised-learning
FUSION
PyTorch code for NeurIPSW 2020 paper (4th Workshop on Meta-Learning) "Few-Shot Unsupervised Continual Learning through Meta-Examples"
Stars: ✭ 18 (-43.75%)
Mutual labels:  representation-learning, unsupervised-learning
Self Supervised Learning Overview
📜 Self-Supervised Learning from Images: Up-to-date reading list.
Stars: ✭ 73 (+128.13%)
Mutual labels:  representation-learning, unsupervised-learning
Pytorch Byol
PyTorch implementation of Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning
Stars: ✭ 213 (+565.63%)
Mutual labels:  representation-learning, unsupervised-learning
Awesome Vaes
A curated list of awesome work on VAEs, disentanglement, representation learning, and generative models.
Stars: ✭ 418 (+1206.25%)
Mutual labels:  representation-learning, unsupervised-learning
VQ-APC
Vector Quantized Autoregressive Predictive Coding (VQ-APC)
Stars: ✭ 34 (+6.25%)
Mutual labels:  representation-learning, unsupervised-learning
Revisiting-Contrastive-SSL
Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations. [NeurIPS 2021]
Stars: ✭ 81 (+153.13%)
Mutual labels:  representation-learning, unsupervised-learning
Simclr
PyTorch implementation of SimCLR: A Simple Framework for Contrastive Learning of Visual Representations
Stars: ✭ 750 (+2243.75%)
Mutual labels:  representation-learning, unsupervised-learning
awesome-graph-self-supervised-learning
Awesome Graph Self-Supervised Learning
Stars: ✭ 805 (+2415.63%)
Mutual labels:  representation-learning, unsupervised-learning
Unsupervised Classification
SCAN: Learning to Classify Images without Labels (ECCV 2020), incl. SimCLR.
Stars: ✭ 605 (+1790.63%)
Mutual labels:  representation-learning, unsupervised-learning
Pointglr
Global-Local Bidirectional Reasoning for Unsupervised Representation Learning of 3D Point Clouds (CVPR 2020)
Stars: ✭ 86 (+168.75%)
Mutual labels:  representation-learning, unsupervised-learning
Lemniscate.pytorch
Unsupervised Feature Learning via Non-parametric Instance Discrimination
Stars: ✭ 532 (+1562.5%)
Mutual labels:  representation-learning, unsupervised-learning
Variational Ladder Autoencoder
Implementation of VLAE
Stars: ✭ 196 (+512.5%)
Mutual labels:  representation-learning, unsupervised-learning
Contrastive Predictive Coding
Keras implementation of Representation Learning with Contrastive Predictive Coding
Stars: ✭ 369 (+1053.13%)
Mutual labels:  representation-learning, unsupervised-learning
Disentangling Vae
Experiments for understanding disentanglement in VAE latent representations
Stars: ✭ 398 (+1143.75%)
Mutual labels:  representation-learning, unsupervised-learning
Contrastive Predictive Coding Pytorch
Contrastive Predictive Coding for Automatic Speaker Verification
Stars: ✭ 223 (+596.88%)
Mutual labels:  representation-learning, unsupervised-learning
M-NMF
An implementation of "Community Preserving Network Embedding" (AAAI 2017)
Stars: ✭ 119 (+271.88%)
Mutual labels:  representation-learning, unsupervised-learning

State Representation Learning for Control: An Overview arXiv

Abstract

Representation learning algorithms are designed to learn abstract features that characterize data. State representation learning (SRL) focuses on a particular kind of representation learning where learned features are in low dimension, evolve through time, and are influenced by actions of an agent. As the representation learned captures the variation in the environment generated by agents, this kind of representation is particularly suitable for robotics and control scenarios. In particular, the low dimension helps to overcome the curse of dimensionality, provides easier interpretation and utilization by humans and can help improve performance and speed in policy learning algorithms such as reinforcement learning. This survey aims at covering the state-of-the-art on state representation learning in the most recent years. It reviews different SRL methods that involve interaction with the environment, their implementations and their applications in robotics control tasks (simulated or real). In particular, it highlights how generic learning objectives are differently exploited in the reviewed algorithms. Finally, it discusses evaluation methods to assess the representation learned and summarizes current and future lines of research.
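
A minimal formalization of this setting (the notation below is a standard convention assumed here, not quoted verbatim from the survey): an encoder \phi maps each raw observation o_t (e.g., an image) to a compact state s_t, optionally together with a transition (forward) model f over the learned states,

    s_t = \phi(o_t), \qquad \hat{s}_{t+1} = f(s_t, a_t), \qquad \dim(s_t) \ll \dim(o_t)

where a_t denotes the agent's action at time t.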

Learning objectives for SRL (state representation learning)

1️⃣ Learning by reconstructing the observation
2️⃣ Learning a Forward model
3️⃣ Learning an Inverse Model
4️⃣ Using feature adversarial learning
5️⃣ Exploiting reward
6️⃣ Other objective functions (a code sketch showing how these objectives translate into training losses follows this list)
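
The sketch below is a minimal PyTorch illustration of how several of these objectives (1️⃣ reconstruction, 2️⃣ forward model, 3️⃣ inverse model, 5️⃣ reward prediction) are typically turned into losses and combined to train an encoder. The architecture, dimensions, module names (encoder, forward_model, inverse_model, reward_head) and equal loss weights are assumptions for illustration, not code from any of the papers listed here.

# Minimal, illustrative PyTorch sketch of common SRL training objectives.
# All module names, sizes and loss weights are illustrative assumptions.
import torch
import torch.nn as nn

OBS_DIM, STATE_DIM, ACTION_DIM = 64 * 64, 8, 2  # assumed dimensions

encoder = nn.Sequential(nn.Linear(OBS_DIM, 256), nn.ReLU(), nn.Linear(256, STATE_DIM))
decoder = nn.Sequential(nn.Linear(STATE_DIM, 256), nn.ReLU(), nn.Linear(256, OBS_DIM))
forward_model = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
                              nn.Linear(128, STATE_DIM))
inverse_model = nn.Sequential(nn.Linear(2 * STATE_DIM, 128), nn.ReLU(),
                              nn.Linear(128, ACTION_DIM))
reward_head = nn.Linear(STATE_DIM + ACTION_DIM, 1)

mse = nn.MSELoss()
modules = nn.ModuleList([encoder, decoder, forward_model, inverse_model, reward_head])
optimizer = torch.optim.Adam(modules.parameters(), lr=1e-3)

def srl_loss(obs_t, action_t, reward_t, obs_tp1):
    """Combined SRL loss on a batch of transitions (o_t, a_t, r_t, o_{t+1})."""
    s_t, s_tp1 = encoder(obs_t), encoder(obs_tp1)
    loss_recon = mse(decoder(s_t), obs_t)                                     # 1️⃣ reconstruct o_t
    loss_fwd = mse(forward_model(torch.cat([s_t, action_t], dim=-1)),
                   s_tp1.detach())                                            # 2️⃣ predict s_{t+1}
    loss_inv = mse(inverse_model(torch.cat([s_t, s_tp1], dim=-1)), action_t)  # 3️⃣ recover a_t
    loss_rew = mse(reward_head(torch.cat([s_t, action_t], dim=-1)).squeeze(-1),
                   reward_t)                                                  # 5️⃣ predict r_t
    return loss_recon + loss_fwd + loss_inv + loss_rew  # equal weights, chosen arbitrarily

# Toy usage with random transitions standing in for a real robotics dataset.
obs_t, obs_tp1 = torch.rand(32, OBS_DIM), torch.rand(32, OBS_DIM)
action_t, reward_t = torch.rand(32, ACTION_DIM), torch.rand(32)
loss = srl_loss(obs_t, action_t, reward_t, obs_tp1)
optimizer.zero_grad()
loss.backward()
optimizer.step()

In practice the surveyed methods usually use only a subset of these terms, weight them, and replace the toy MLPs with convolutional networks over image observations.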

  • Deep Spatial Autoencoders for Visuomotor Learning (2015) 1️⃣ 6️⃣
    Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, Pieter Abbeel arXiv pdf

  • Goal-Driven Dimensionality Reduction for Reinforcement Learning (rwPCA) (2017) 1️⃣ 5️⃣
    Simone Parisi, Simon Ramstedt and Jan Peters pdf

  • Disentangling the independently controllable factors of variation by interacting with the world (2017) 1️⃣ 6️⃣
    Valentin Thomas, Emmanuel Bengio, William Fedus, Jules Pondard, Philippe Beaudoin, Hugo Larochelle, Joelle Pineau, Doina Precup, Yoshua Bengio pdf

  • Independently Controllable Factors (2017) 1️⃣ 6️⃣
    Valentin Thomas, Jules Pondard, Emmanuel Bengio, Marc Sarfati, Philippe Beaudoin, Marie-Jean Meurs, Joelle Pineau, Doina Precup, Yoshua Bengio arXiv pdf

  • Learn to swing up and balance a real pole based on raw visual input data (2012) 1️⃣
    Jan Mattner, Sascha Lange, Martin Riedmiller pdf

  • Dimensionality Reduced Reinforcement Learning for Assistive Robots (2016) 1️⃣
    William Curran, Tim Brys, David Aha, Matthew Taylor, William D. Smart pdf

  • Using PCA to Efficiently Represent State Spaces (2015) 1️⃣
    William Curran et al. arXiv pdf

  • Deep Kalman Filters (2015) 1️⃣ 2️⃣
    Rahul G. Krishnan, Uri Shalit, David Sontag pdf arXiv

  • Learning to linearize under uncertainty (2015) 1️⃣ 2️⃣
    R. Goroshin, M. Mathieu, and Y. LeCun pdf arXiv

  • Embed to control: A locally linear latent dynamics model for control from raw images (2015) 1️⃣ 2️⃣
    Manuel Watter et al. pdf arXiv

  • Learning State Representation for Deep Actor-Critic Control (2016) 2️⃣ 5️⃣
    Jelle Munk, Jens Kober, Robert Babuška pdf

  • Stable reinforcement learning with autoencoders for tactile and visual data (2016) 1️⃣ 2️⃣
    Herke van Hoof, Nutan Chen, Maximilian Karl, Patrick van der Smagt, Jan Peters pdf

  • Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data (2017) 1️⃣ 2️⃣
    Maximilian Karl, Maximilian Soelch, Justin Bayer, Patrick van der Smagt pdf arXiv

  • Value Prediction Network (2017) 2️⃣ 5️⃣
    Junhyuk Oh, Satinder Singh, Honglak Lee arXiv pdf

  • Data-efficient learning of feedback policies from image pixels using deep dynamical model (2015) 1️⃣ 2️⃣
    J.-A. M. Assael, Niklas Wahlström, Thomas B. Schön, Marc Peter Deisenroth arXiv pdf

  • Learning deep dynamical models from image pixels (2014) 1️⃣ 2️⃣
    Niklas Wahlström, Thomas B. Schön, Marc Peter Deisenroth arXiv pdf

  • From pixels to torques: Policy learning with deep dynamical models (2015) 1️⃣ 2️⃣
    Niklas Wahlström, Thomas B. Schön, Marc Peter Deisenroth arXiv pdf

  • Loss is its own Reward: Self-Supervision for Reinforcement Learning (2016) 3️⃣
    Evan Shelhamer, Parsa Mahmoudieh, Max Argus, Trevor Darrell pdf arXiv

  • Curiosity-driven Exploration by Self-supervised Prediction (2017) 2️⃣ 3️⃣
    Deepak Pathak et al. pdf Self-supervised approach.

  • InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets (2016) 1️⃣ 4️⃣
    Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, Pieter Abbeel pdf

  • Adversarial Feature Learning (2016) 1️⃣ 4️⃣
    Jeff Donahue, Philipp Krähenbühl, Trevor Darrell arXiv pdf

  • PVEs: Position-Velocity Encoders for Unsupervised Learning of Structured State Representations (2017) 6️⃣
    Rico Jonschkowski, Roland Hafner, Jonathan Scholz, Martin Riedmiller pdf, arXiv

  • Learning State Representations with Robotic Priors (2015) 5️⃣ 6️⃣
    Rico Jonschkowski, Oliver Brock pdf

  • Unsupervised state representation learning with robotic priors: a robustness benchmark (2017) 5️⃣ 6️⃣
    Timothée Lesort, Mathieu Seurin, Xinrui Li, Natalia Díaz Rodríguez, David Filliat pdf arXiv

Related Survey

  • Autonomous learning of state representations for control (2015)
    Wendelin Böhmer, Jost Tobias Springenberg, Joschka Boedecker, Martin Riedmiller, Klaus Obermayer pdf

Citation

If you find this repo useful, please cite the relevant paper:

@article{Lesort2018StateRL,
  title={State representation learning for control: An overview},
  author={Timoth{\'e}e Lesort and Natalia D{\'i}az Rodr{\'i}guez and Jean-François Goudou and David Filliat},
  journal={Neural networks: the official journal of the International Neural Network Society},
  year={2018},
  volume={108},
  pages={379-392}
}