
Ray Tutorial
============

See the new Anyscale Academy tutorials at https://github.com/anyscale/academy.

Try Ray on Google Colab
-----------------------

Try the Ray tutorials online using Google Colab:

- `Remote Functions`_
- `Remote Actors`_
- `In-Order Task Processing`_
- `Reinforcement Learning with RLlib`_

.. _Remote Functions: https://colab.research.google.com/github/ray-project/tutorial/blob/master/exercises/colab01-03.ipynb
.. _Remote Actors: https://colab.research.google.com/github/ray-project/tutorial/blob/master/exercises/colab04-05.ipynb
.. _In-Order Task Processing: https://colab.research.google.com/github/ray-project/tutorial/blob/master/exercises/colab06-07.ipynb
.. _Reinforcement Learning with RLlib: https://colab.research.google.com/github/ray-project/tutorial/blob/master/rllib_exercises/rllib_colab.ipynb

Try Tune on Google Colab
------------------------

Tuning hyperparameters is often the most expensive part of the machine learning workflow. `Ray Tune <http://tune.io>`_ is built to address this, providing an efficient and scalable solution for this pain point.

`Exercise 1 <https://github.com/ray-project/tutorial/tree/master/tune_exercises/exercise_1_basics.ipynb>`_ covers the basics of using Tune: creating your first training function and running Tune. This tutorial uses Keras.

.. raw:: html

   <a href="https://colab.research.google.com/github/ray-project/tutorial/blob/master/tune_exercises/exercise_1_basics.ipynb" target="_parent">
   <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Tune Tutorial"/>
   </a>

`Exercise 2 <https://github.com/ray-project/tutorial/tree/master/tune_exercises/exercise_2_optimize.ipynb>`_ covers search algorithms and trial schedulers. This tutorial uses PyTorch.

.. raw:: html

   <a href="https://colab.research.google.com/github/ray-project/tutorial/blob/master/tune_exercises/exercise_2_optimize.ipynb" target="_parent">
   <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Tune Tutorial"/>
   </a>

`Exercise 3 <https://github.com/ray-project/tutorial/tree/master/tune_exercises/exercise_3_pbt.ipynb>`_ covers Population-Based Training (PBT) and uses the advanced Trainable API with save and restore functions and checkpointing.

.. raw:: html

   <a href="https://colab.research.google.com/github/ray-project/tutorial/blob/master/tune_exercises/exercise_3_pbt.ipynb" target="_parent">
   <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Tune Tutorial"/>
   </a>

Try Ray on Binder
-----------------

Try the Ray tutorials online on Binder_. Note that Binder will use very small machines, so the degree of parallelism will be limited.

.. _Binder: https://mybinder.org/v2/gh/ray-project/tutorial/master?urlpath=lab

Local Setup
-----------

  1. Make sure you have Python installed (we recommend using the `Anaconda Python distribution`_). Ray works with both Python 2 and Python 3. If you are unsure which to use, then use Python 3.

    If not using conda, continue to step 2.

    If using conda, you can then run the following commands and skip the next 4 steps:

    .. code-block:: bash

       git clone https://github.com/ray-project/tutorial
       cd tutorial
       conda env create -f environment.yml
       conda activate ray-tutorial
    
  2. Install Jupyter with ``pip install jupyter``. Verify that you can start Jupyter Lab with the command ``jupyter-lab`` or ``jupyter-notebook``.

  3. Install Ray by running ``pip install -U ray``. Verify that you can run

    .. code-block:: python

       import ray
       ray.init()

    in a Python interpreter.

  4. Clone the tutorial repository with

    .. code-block:: bash

       git clone https://github.com/ray-project/tutorial.git

  5. Install the additional dependencies.

    Either install them from the given requirements.txt:

    .. code-block:: bash

       pip install -r requirements.txt

    Or install them manually:

    .. code-block:: bash

       pip install modin
       pip install tensorflow
       pip install gym
       pip install scipy
       pip install opencv-python
       pip install bokeh
       pip install ipywidgets==6.0.0
       pip install keras

    Verify that you can run ``import tensorflow`` and ``import gym`` in a Python interpreter.

    Note: If you have trouble installing these Python modules, almost all of the exercises can be done without them.

  6. If you want to run the pong exercise (in ``rl_exercises/rl_exercise05.ipynb``), you will need to run ``pip install utilities/pong_py``.

Exercises
---------

Each file ``exercises/exercise*.ipynb`` is a separate exercise. They can be opened in Jupyter Lab by running the following commands.

.. code-block:: bash

   cd tutorial/exercises
   jupyter-lab

If you don't have ``jupyter-lab``, try ``jupyter-notebook``. If it asks for a password, just hit enter.

Instructions are written in each file. To do each exercise, first run all of the cells in Jupyter Lab. Then modify the ones that need to be modified in order to prevent any exceptions from being raised. Throughout these exercises, you may find the `Ray documentation`_ helpful.

Exercise 1: Define a remote function, and execute multiple remote functions in parallel.

Exercise 2: Execute remote functions in parallel with some dependencies.

Exercise 3: Call remote functions from within remote functions.

Exercise 4: Use actors to share state between tasks. See the documentation on `using actors`_.

Exercise 5: Pass actor handles to tasks so that multiple tasks can invoke methods on the same actor.

Exercise 6: Use ``ray.wait`` to ignore stragglers. See the `documentation for wait`_.

Exercise 7: Use ``ray.wait`` to process tasks in the order that they finish. See the `documentation for wait`_.

Exercise 8: Use ``ray.put`` to avoid serializing and copying the same object into shared memory multiple times.

Exercise 9: Specify that an actor requires some GPUs. For a complete example that does something similar, you may want to see the `ResNet example`_.

Exercise 10: Specify that a remote function requires certain custom resources. See the documentation on `custom resources`_.

Exercise 11: Extract neural network weights from an actor on one process, and set them in another actor. You may want to read the documentation on `using Ray with TensorFlow`_.

Exercise 12: Pass object IDs into tasks to construct dependencies between tasks and perform a tree reduce.

.. _Anaconda Python distribution: https://www.continuum.io/downloads
.. _Ray documentation: https://ray.readthedocs.io/en/latest/?badge=latest
.. _documentation for wait: https://ray.readthedocs.io/en/latest/api.html#ray.wait
.. _using actors: https://ray.readthedocs.io/en/latest/actors.html
.. _using Ray with TensorFlow: https://ray.readthedocs.io/en/latest/using-ray-with-tensorflow.html
.. _ResNet example: https://ray.readthedocs.io/en/latest/example-resnet.html
.. _custom resources: https://ray.readthedocs.io/en/latest/resources.html#custom-resources

More In-Depth Examples
----------------------

Sharded Parameter Server: This exercise involves implementing a parameter server as a Ray actor, implementing a simple asynchronous distributed training algorithm, and sharding the parameter server to improve throughput.

Speed Up Pandas: This exercise involves using Modin_ to speed up your pandas workloads.

MapReduce: This exercise shows how to implement a toy version of the MapReduce system on top of Ray.

.. _Modin: https://modin.readthedocs.io/en/latest/

RL Exercises
------------

The exercises in ``rl_exercises/rl_exercise*.ipynb`` should be done in order. They can be opened in Jupyter Lab by running the following commands.

.. code-block:: bash

   cd tutorial/rl_exercises
   jupyter-lab

Exercise 1: Introduction to Markov Decision Processes.

Exercise 2: Derivative-free optimization.

Exercise 3: Introduction to proximal policy optimization (PPO).

Exercise 4: Introduction to asynchronous advantage actor-critic (A3C).

Exercise 5: Train a policy to play pong using RLlib. Deploy it using actors, and play against the trained policy.
