pranz24 / Pytorch Soft Actor Critic

License: MIT
PyTorch implementation of Soft Actor-Critic

Projects that are alternatives to or similar to Pytorch Soft Actor Critic

Deep Rl Trading
playing idealized trading games with deep reinforcement learning
Stars: ✭ 228 (-24%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Learning To Communicate Pytorch
Learning to Communicate with Deep Multi-Agent Reinforcement Learning in PyTorch
Stars: ✭ 236 (-21.33%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Machine Learning Uiuc
🖥️ CS446: Machine Learning in Spring 2018, University of Illinois at Urbana-Champaign
Stars: ✭ 233 (-22.33%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Tensorforce
Tensorforce: a TensorFlow library for applied reinforcement learning
Stars: ✭ 3,062 (+920.67%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Deep rl
PyTorch implementations of Deep Reinforcement Learning algorithms (DQN, DDQN, A2C, VPG, TRPO, PPO, DDPG, TD3, SAC, SAC-AEA)
Stars: ✭ 291 (-3%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Gam
A PyTorch implementation of "Graph Classification Using Structural Attention" (KDD 2018).
Stars: ✭ 227 (-24.33%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Rlgraph
RLgraph: Modular computation graphs for deep reinforcement learning
Stars: ✭ 272 (-9.33%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Naf Tensorflow
"Continuous Deep Q-Learning with Model-based Acceleration" in TensorFlow
Stars: ✭ 192 (-36%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Reinforcement Learning
Minimal and Clean Reinforcement Learning Examples
Stars: ✭ 2,863 (+854.33%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Learningx
Deep & Classical Reinforcement Learning + Machine Learning Examples in Python
Stars: ✭ 241 (-19.67%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Pytorch A2c Ppo Acktr Gail
PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL).
Stars: ✭ 2,632 (+777.33%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Rad
RAD: Reinforcement Learning with Augmented Data
Stars: ✭ 268 (-10.67%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Drl4recsys
Courses on Deep Reinforcement Learning (DRL) and DRL papers for recommender systems
Stars: ✭ 196 (-34.67%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Applied Reinforcement Learning
Reinforcement Learning and Decision Making tutorials explained at an intuitive level and with Jupyter Notebooks
Stars: ✭ 229 (-23.67%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Reinforcementlearning.jl
A reinforcement learning package for Julia
Stars: ✭ 192 (-36%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Pytorch Drl
PyTorch implementations of various Deep Reinforcement Learning (DRL) algorithms for both single agent and multi-agent.
Stars: ✭ 233 (-22.33%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
2048 Deep Reinforcement Learning
Trained A Convolutional Neural Network To Play 2048 using Deep-Reinforcement Learning
Stars: ✭ 169 (-43.67%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Pytorch sac
PyTorch implementation of Soft Actor-Critic (SAC)
Stars: ✭ 174 (-42%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Roboleague
A car soccer environment inspired by Rocket League for deep reinforcement learning experiments in an adversarial self-play setting.
Stars: ✭ 236 (-21.33%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Gym Gazebo2
gym-gazebo2 is a toolkit for developing and comparing reinforcement learning algorithms using ROS 2 and Gazebo
Stars: ✭ 257 (-14.33%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning

Description


Reimplementation of "Soft Actor-Critic Algorithms and Applications" and a deterministic variant of SAC from "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor".

A separate branch, SAC_V, implements "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor".
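
At its core, the algorithm trains two Q-functions against an entropy-regularized Bellman target and a policy against the minimum of the two critics. The following is a minimal sketch of one update step, assuming hypothetical policy, critic, and critic_target modules (illustrative only, not the repository's exact API):

import torch
import torch.nn.functional as F

def sac_losses(policy, critic, critic_target, batch, alpha=0.2, gamma=0.99):
    """One SAC update step (sketch). `critic` returns a pair of Q-values."""
    state, action, reward, next_state, mask = batch  # mask = 0 at terminal states

    # Soft Bellman target: bootstrap with the min of the target Q-heads
    # minus the entropy term alpha * log pi(a'|s').
    with torch.no_grad():
        next_action, next_log_prob = policy.sample(next_state)
        q1_t, q2_t = critic_target(next_state, next_action)
        target = reward + mask * gamma * (torch.min(q1_t, q2_t) - alpha * next_log_prob)

    # Both Q-heads regress onto the same target.
    q1, q2 = critic(state, action)
    critic_loss = F.mse_loss(q1, target) + F.mse_loss(q2, target)

    # Policy loss: maximize E[min Q - alpha * log pi] (minimize its negation).
    new_action, log_prob = policy.sample(state)
    q1_pi, q2_pi = critic(state, new_action)
    policy_loss = (alpha * log_prob - torch.min(q1_pi, q2_pi)).mean()

    return critic_loss, policy_loss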

Requirements


Default Arguments and Usage


Usage

usage: main.py [-h] [--env-name ENV_NAME] [--policy POLICY] [--eval EVAL]
               [--gamma G] [--tau G] [--lr G] [--alpha G]
               [--automatic_entropy_tuning G] [--seed N] [--batch_size N]
               [--num_steps N] [--hidden_size N] [--updates_per_step N]
               [--start_steps N] [--target_update_interval N]
               [--replay_size N] [--cuda]

(Note: there is no need to set the temperature (--alpha) if --automatic_entropy_tuning is True.)

For SAC

python main.py --env-name Humanoid-v2 --alpha 0.05

For SAC (Hard Update)

python main.py --env-name Humanoid-v2 --alpha 0.05 --tau 1 --target_update_interval 1000

For SAC (Deterministic, Hard Update)

python main.py --env-name Humanoid-v2 --policy Deterministic --tau 1 --target_update_interval 1000
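
To let the temperature adjust itself instead of fixing --alpha, the --automatic_entropy_tuning flag listed below can be enabled; a command of roughly this form should work (the exact boolean parsing depends on main.py):

python main.py --env-name HalfCheetah-v2 --automatic_entropy_tuning True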

Arguments


PyTorch Soft Actor-Critic Args

optional arguments:
  -h, --help            show this help message and exit
  --env-name ENV_NAME   Mujoco Gym environment (default: HalfCheetah-v2)
  --policy POLICY       Policy Type: Gaussian | Deterministic (default:
                        Gaussian)
  --eval EVAL           Evaluates the policy every 10 episodes (default:
                        True)
  --gamma G             discount factor for reward (default: 0.99)
  --tau G               target smoothing coefficient (τ) (default: 5e-3)
  --lr G                learning rate (default: 3e-4)
  --alpha G             Temperature parameter α determines the relative
                        importance of the entropy term against the reward
                        (default: 0.2)
  --automatic_entropy_tuning G
                        Automatically adjust α (default: False)
  --seed N              random seed (default: 123456)
  --batch_size N        batch size (default: 256)
  --num_steps N         maximum number of steps (default: 1e6)
  --hidden_size N       hidden size (default: 256)
  --updates_per_step N  model updates per simulator step (default: 1)
  --start_steps N       Steps sampling random actions (default: 1e4)
  --target_update_interval N
                        Number of updates between target network updates
                        (default: 1)
  --replay_size N       size of replay buffer (default: 1e6)
  --cuda                run on CUDA (default: False)
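
The --tau and --target_update_interval arguments control how the target critic tracks the online critic; the hard-update commands above set --tau 1 with --target_update_interval 1000. A minimal sketch of the distinction (illustrative, not the repository's exact code):

def update_target(target_net, source_net, tau):
    # tau < 1: Polyak averaging (soft update), applied after every update by default.
    # tau == 1: full copy (hard update), applied only every
    #           --target_update_interval updates (e.g. every 1000 updates).
    for t_param, s_param in zip(target_net.parameters(), source_net.parameters()):
        t_param.data.copy_(tau * s_param.data + (1.0 - tau) * t_param.data)
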
Environment (--env-name)    Temperature (--alpha)
HalfCheetah-v2              0.2
Hopper-v2                   0.2
Walker2d-v2                 0.2
Ant-v2                      0.2
Humanoid-v2                 0.05