
isayev / ReLeaSE

License: MIT
Deep Reinforcement Learning for de-novo Drug Design

Projects that are alternatives to or similar to ReLeaSE

Basic reinforcement learning
An introductory series to Reinforcement Learning (RL) with comprehensive step-by-step tutorials.
Stars: ✭ 826 (+310.95%)
Mutual labels:  jupyter-notebook, reinforcement-learning, deeplearning
Text summurization abstractive methods
Multiple implementations of abstractive text summarization, using Google Colab
Stars: ✭ 359 (+78.61%)
Mutual labels:  jupyter-notebook, reinforcement-learning, deeplearning
Tensorwatch
Debugging, monitoring and visualization for Python Machine Learning and Data Science
Stars: ✭ 3,191 (+1487.56%)
Mutual labels:  jupyter-notebook, reinforcement-learning, deeplearning
Ngsim env
Learning human driver models from NGSIM data with imitation learning.
Stars: ✭ 96 (-52.24%)
Mutual labels:  jupyter-notebook, reinforcement-learning, deeplearning
Cutout Random Erasing
Cutout / Random Erasing implementation, especially for ImageDataGenerator in Keras
Stars: ✭ 142 (-29.35%)
Mutual labels:  jupyter-notebook, deeplearning
All4nlp
All for NLP, especially Chinese NLP.
Stars: ✭ 141 (-29.85%)
Mutual labels:  jupyter-notebook, deeplearning
Deep Learning With Tensorflow Book
An open-source introductory deep learning book with hands-on case studies, based on the TensorFlow 2.0 framework.
Stars: ✭ 12,105 (+5922.39%)
Mutual labels:  jupyter-notebook, deeplearning
Java Deep Learning Cookbook
Code for Java Deep Learning Cookbook
Stars: ✭ 156 (-22.39%)
Mutual labels:  reinforcement-learning, deeplearning
Rl Quadcopter
Teach a Quadcopter How to Fly!
Stars: ✭ 124 (-38.31%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Chess Alpha Zero
Chess reinforcement learning using AlphaGo Zero methods.
Stars: ✭ 1,868 (+829.35%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Motion Sense
MotionSense Dataset for Human Activity and Attribute Recognition (time-series data generated by smartphone sensors: accelerometer and gyroscope)
Stars: ✭ 159 (-20.9%)
Mutual labels:  jupyter-notebook, deeplearning
Py4chemoinformatics
Python for chemoinformatics
Stars: ✭ 140 (-30.35%)
Mutual labels:  jupyter-notebook, cheminformatics
Seq2seq tutorial
Code For Medium Article "How To Create Data Products That Are Magical Using Sequence-to-Sequence Models"
Stars: ✭ 132 (-34.33%)
Mutual labels:  jupyter-notebook, deeplearning
Data Science Question Answer
A repo for data-science-related questions and answers
Stars: ✭ 2,000 (+895.02%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Modular Rl
[ICML 2020] PyTorch Code for "One Policy to Control Them All: Shared Modular Policies for Agent-Agnostic Control"
Stars: ✭ 126 (-37.31%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Anomaly detection tuto
Anomaly detection tutorial on univariate time series with an auto-encoder
Stars: ✭ 144 (-28.36%)
Mutual labels:  jupyter-notebook, deeplearning
Pytorch sac
PyTorch implementation of Soft Actor-Critic (SAC)
Stars: ✭ 174 (-13.43%)
Mutual labels:  jupyter-notebook, reinforcement-learning
2048 Deep Reinforcement Learning
Trained a convolutional neural network to play 2048 using deep reinforcement learning
Stars: ✭ 169 (-15.92%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Machine Learning And Reinforcement Learning In Finance
Machine Learning and Reinforcement Learning in Finance, New York University Tandon School of Engineering
Stars: ✭ 173 (-13.93%)
Mutual labels:  jupyter-notebook, reinforcement-learning
Andrew Ng Notes
Handwritten notes from Andrew Ng's Coursera courses.
Stars: ✭ 180 (-10.45%)
Mutual labels:  jupyter-notebook, reinforcement-learning

ReLeaSE (Reinforcement Learning for Structural Evolution)

Deep Reinforcement Learning for de-novo Drug Design

Currently works only under Linux

This is the official PyTorch implementation of the Deep Reinforcement Learning for de novo Drug Design (ReLeaSE) method.
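
At a high level, the method alternates between sampling SMILES strings from a generative recurrent network and updating that network with policy gradients against rewards from a property predictor. The snippet below is a minimal conceptual sketch of such a REINFORCE-style update in plain PyTorch; the generator and predictor objects and their interfaces are hypothetical illustrations, not the API of this repository.

# Conceptual sketch of a REINFORCE-style update (hypothetical
# interfaces, not this repository's API). Assumes the generator can
# sample a SMILES string together with the log-probabilities of its
# tokens, and the predictor maps a SMILES string to a scalar reward.
import torch

def reinforce_step(generator, predictor, optimizer, n_samples=16):
    optimizer.zero_grad()
    loss = 0.0
    for _ in range(n_samples):
        smiles, log_probs = generator.sample()   # log_probs: 1-D tensor
        reward = predictor(smiles)               # scalar property score
        # Scale the sequence log-likelihood by the reward so that
        # high-reward molecules become more likely to be generated
        # (baselines and reward shaping omitted for brevity).
        loss = loss - reward * log_probs.sum()
    loss = loss / n_samples
    loss.backward()
    optimizer.step()
    return loss.item()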

Requirements

In order to get started you will need Python 3.6, PyTorch 0.4.1, RDKit, and the conda and pip dependencies listed in conda_requirements.txt and pip_requirements.txt (installed in the steps below).

Installation with Anaconda

If you installed Python with Anaconda, you can run the following commands to get started:

# Clone the repository to your desired directory
git clone https://github.com/isayev/ReLeaSE.git
cd ReLeaSE
# Create new conda environment with Python 3.6
conda create --name release python=3.6
# Activate the environment
conda activate release
# Install conda dependencies
conda install --yes --file conda_requirements.txt
conda install -c rdkit rdkit nox cairo
conda install pytorch=0.4.1 torchvision=0.2.1 -c pytorch
# Install pip dependencies
pip install -r pip_requirements.txt
# Add new kernel to the list of jupyter notebook kernels
python -m ipykernel install --user --name release --display-name ReLeaSE
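
If the installation succeeded, a quick sanity check is to import the key packages from inside the activated release environment. This is a minimal sketch, assuming the steps above completed without errors:

# Quick sanity check for the new environment; run with the
# "release" conda environment activated.
import torch
from rdkit import Chem, rdBase

print("PyTorch:", torch.__version__)              # expected: 0.4.1
print("RDKit:", rdBase.rdkitVersion)
print("CUDA available:", torch.cuda.is_available())
# Round-trip a simple molecule through RDKit to confirm it works.
print(Chem.MolToSmiles(Chem.MolFromSmiles("c1ccccc1")))  # benzene

After that, launching jupyter notebook and selecting the ReLeaSE kernel registered above should let you run the demo notebooks.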

Demos

We provide several demos in the form of Jupyter (IPython) notebooks:

  • JAK2_min_max_demo.ipynb -- JAK2 pIC50 minimization and maximization
  • LogP_optimization_demo.ipynb -- optimization of logP to lie in the drug-like range from 0 to 5 according to Lipinski's rule of five (see the reward sketch after this list).
  • RecurrentQSAR-example-logp.ipynb -- training a recurrent neural network to predict logP from SMILES using the OpenChem toolkit.
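
To make the logP objective concrete, the snippet below computes the Wildman-Crippen logP estimate with RDKit and turns the 0-5 drug-like window into a binary reward. It is a minimal sketch of the objective, not code taken from the notebook:

# Minimal sketch of a logP-based reward, assuming RDKit is installed.
from rdkit import Chem
from rdkit.Chem import Crippen

def logp_reward(smiles, low=0.0, high=5.0):
    """Return 1.0 if the molecule's logP falls in the drug-like window."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                  # invalid SMILES earns no reward
        return 0.0
    logp = Crippen.MolLogP(mol)      # Wildman-Crippen logP estimate
    return 1.0 if low <= logp <= high else 0.0

print(logp_reward("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin -> 1.0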

Disclaimer: the JAK2 demo uses a Random Forest predictor instead of a Recurrent Neural Network, since availability of the JAK2 activity dataset used in the "Deep Reinforcement Learning for de novo Drug Design" paper is restricted under a license agreement. Instead, we use JAK2 activity data downloaded from ChEMBL (CHEMBL2971) and curated. This dataset contains ~2000 data points, which is not enough to build a reliable deep neural network. If you want to see a demo with an RNN, please check out the logP optimization demo.
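
For reference, a fingerprint-based Random Forest predictor of the kind the JAK2 demo relies on can be assembled in a few lines. This is a hedged sketch: scikit-learn is assumed to be available, and the file name jak2_data.csv with smiles and pIC50 columns is a placeholder, not a file shipped with this repository.

# Sketch of a Morgan-fingerprint Random Forest pIC50 predictor.
# The data file and column names are placeholders.
import numpy as np
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def fingerprint(smiles, n_bits=2048):
    """Morgan (ECFP4-like) bit vector as a numpy array."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
    return np.array(fp)

data = pd.read_csv("jak2_data.csv")            # placeholder file name
X = np.stack([fingerprint(s) for s in data["smiles"]])
y = data["pIC50"].values

model = RandomForestRegressor(n_estimators=500, random_state=0)
print("5-fold CV R^2:", cross_val_score(model, X, y, cv=5).mean())
model.fit(X, y)                                # final predictor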

Citation

If you use this code or data, please cite:

ReLeaSE method paper:

Mariya Popova, Olexandr Isayev, Alexander Tropsha. Deep Reinforcement Learning for de-novo Drug Design. Science Advances, 2018, Vol. 4, no. 7, eaap7885. DOI: 10.1126/sciadv.aap7885
