NeuroEvolution-Flappy-Bird: A comparison between humans, neuroevolution, and multilayer perceptrons playing Flappy Bird, implemented in Python
Stars: ✭ 17 (-34.62%)
neat-openai-gym: NEAT for Reinforcement Learning on the OpenAI Gym
Stars: ✭ 19 (-26.92%)
NEATEST: Evolving Neural Networks Through Augmenting Topologies with Evolution Strategy Training
Stars: ✭ 13 (-50%)
Easy Rl: A reinforcement learning tutorial in Chinese; read online at https://datawhalechina.github.io/easy-rl/
Stars: ✭ 3,004 (+11453.85%)
evo-NEAT: A Java implementation of NEAT (NeuroEvolution of Augmenting Topologies) from scratch for generating evolving artificial neural networks. For educational purposes only.
Stars: ✭ 34 (+30.77%)
Deep traffic: MIT DeepTraffic top 2% solution (75.01 mph) 🚗.
Stars: ✭ 47 (+80.77%)
Deep Rl Trading: Playing idealized trading games with deep reinforcement learning
Stars: ✭ 228 (+776.92%)
Tensorflow-Neuroevolution: Neuroevolution framework for TensorFlow 2.x, focusing on modularity and high performance. Pre-implements NEAT, DeepNEAT, CoDeepNEAT, etc.
Stars: ✭ 109 (+319.23%)
DeepHyperNEAT: A public Python implementation of the DeepHyperNEAT system for evolving neural networks. Developed by Felix Sosa and Kenneth Stanley. See the paper here: https://eplex.cs.ucf.edu/papers/sosa_ugrad_report18.pdf
Stars: ✭ 42 (+61.54%)
neat-python: Python implementation of the NEAT neuroevolution algorithm
Stars: ✭ 32 (+23.08%)
neuro-evolution: A project on improving neural network performance by using genetic algorithms.
Stars: ✭ 25 (-3.85%)
Explorer: A PyTorch reinforcement learning framework for exploring new ideas.
Stars: ✭ 54 (+107.69%)
Gym Anytrading: The simplest, most flexible, and most comprehensive OpenAI Gym trading environment (approved by OpenAI Gym)
Stars: ✭ 627 (+2311.54%)
Ctc Executioner: Master's thesis: limit order placement with reinforcement learning
Stars: ✭ 112 (+330.77%)
Paddle-RLBooks: A reinforcement learning code study guide based on pure PaddlePaddle.
Stars: ✭ 113 (+334.62%)
king-pong: Deep reinforcement learning Pong agent, King Pong; he's the best
Stars: ✭ 23 (-11.54%)
Grid royale: A life simulation for exploring social dynamics
Stars: ✭ 252 (+869.23%)
rustneat: Rust implementation of NEAT (NeuroEvolution of Augmenting Topologies)
Stars: ✭ 63 (+142.31%)
Data Science Free: Free resources for data science, created by Shubham Kumar
Stars: ✭ 232 (+792.31%)
Rl trading: An environment for training high-frequency trading agents with reinforcement learning
Stars: ✭ 205 (+688.46%)
breakout-Deep-Q-Network: Reinforcement learning | TensorFlow implementations of DQN, Dueling DQN, and Double DQN evaluated on Atari Breakout
Stars: ✭ 69 (+165.38%)
Mathutilities: A collection of some of the neat math and physics tricks that I've collected over the last few years.
Stars: ✭ 2,815 (+10726.92%)
neat-components: A styled-components implementation of Thoughtbot's Neat
Stars: ✭ 32 (+23.08%)
Learningx: Deep & classical reinforcement learning + machine learning examples in Python
Stars: ✭ 241 (+826.92%)
Gym Fx: Forex trading simulator environment for OpenAI Gym. Observations contain the order status, performance, and time series loaded from a CSV file of rates and indicators. Work in progress.
Stars: ✭ 151 (+480.77%)
ArchI0: Automatic application installation script for Arch-based distros
Stars: ✭ 26 (+0%)
Deep Math Machine Learning.ai: A blog about machine learning, deep learning algorithms, and the math behind them, with machine learning algorithms written from scratch.
Stars: ✭ 173 (+565.38%)
apxr run: A topology- and parameter-evolving universal learning network.
Stars: ✭ 14 (-46.15%)
Accel Brain Code: Prototypes built as case studies for proof-of-concept (PoC) and research-and-development (R&D) work described on the author's website. The main research topics are auto-encoders in relation to representation learning, statistical machine learning for energy-based models, generative adversarial networks (GANs), deep reinforcement learning such as Deep Q-Networks, semi-supervised learning, and neural network language models for natural language processing.
Stars: ✭ 166 (+538.46%)
Hippocrates: No longer maintained; an actually usable implementation of NEAT
Stars: ✭ 59 (+126.92%)
Reinforcement learning in python: Implementations of reinforcement learning, namely the Q-learning and Sarsa algorithms, for global path planning of a mobile robot in an unknown environment with obstacles, with a comparative analysis of Q-learning and Sarsa
Stars: ✭ 134 (+415.38%)
guzuta: Custom repository manager for ArchLinux pacman
Stars: ✭ 27 (+3.85%)
dqn-obstacle-avoidance: Deep reinforcement learning for fixed-wing flight control with a Deep Q-Network
Stars: ✭ 57 (+119.23%)
gov-au-ui-kit: Moved to https://github.com/govau/uikit/
Stars: ✭ 19 (-26.92%)
Tetris Ai: A deep reinforcement learning bot that plays Tetris
Stars: ✭ 109 (+319.23%)
omd: JAX code for the paper "Control-Oriented Model-Based Reinforcement Learning with Implicit Differentiation"
Stars: ✭ 43 (+65.38%)
Micromlp: A micro neural network multilayer perceptron for MicroPython (used on ESP32 and Pycom modules)
Stars: ✭ 92 (+253.85%)
Rl ardrone: Autonomous navigation of a UAV using reinforcement learning algorithms.
Stars: ✭ 76 (+192.31%)
NEAT: NEAT implementation in Pharo
Stars: ✭ 16 (-38.46%)
Drivebot: TensorFlow deep RL for driving a rover around
Stars: ✭ 62 (+138.46%)
ncursesPac: ncurses pacman written in C++
Stars: ✭ 23 (-11.54%)
Dqn: Implementation of Q-learning using TensorFlow
Stars: ✭ 53 (+103.85%)
Notebooks: Some notebooks
Stars: ✭ 53 (+103.85%)
Async Deeprl: Playing Atari games with a TensorFlow implementation of asynchronous deep Q-learning
Stars: ✭ 44 (+69.23%)
Rubiks Cube: A reinforcement learning program that aims to quickly learn to solve a Rubik's Cube
Stars: ✭ 15 (-42.31%)
holo-build: Cross-distribution system package compiler
Stars: ✭ 43 (+65.38%)
pacNEM: A browser PacMan game built with NodeJS, Socket.io, Handlebars, and the NEM Blockchain
Stars: ✭ 20 (-23.08%)
Gym Alttp Gridworld: A gym environment for Stuart Armstrong's model of a treacherous turn.
Stars: ✭ 14 (-46.15%)
Basic reinforcement learning: An introductory series on reinforcement learning (RL) with comprehensive step-by-step tutorials.
Stars: ✭ 826 (+3076.92%)
Rainy: ☔ Deep RL agents with PyTorch ☔
Stars: ✭ 39 (+50%)