
wandb / Client

Licence: other
🔥 A tool for visualizing and tracking your machine learning experiments. This repo contains the CLI and Python API.

Programming Languages

Python
139335 projects - #7 most used programming language
Objective-C
16641 projects - #2 most used programming language

Projects that are alternatives of or similar to Client

Ray
An open source framework that provides a simple, universal API for building distributed applications. Ray is packaged with RLlib, a scalable reinforcement learning library, and Tune, a scalable hyperparameter tuning library.
Stars: ✭ 18,547 (+419.52%)
Mutual labels:  reinforcement-learning, hyperparameter-search
Polyaxon
Machine Learning Platform for Kubernetes (MLOps tools for experimentation and automation)
Stars: ✭ 2,966 (-16.92%)
Mutual labels:  reinforcement-learning
Awesome Carla
👉 CARLA resources such as tutorials, blog posts, and code: https://github.com/carla-simulator/carla
Stars: ✭ 246 (-93.11%)
Mutual labels:  reinforcement-learning
alchemy
Experiments logging & visualization
Stars: ✭ 49 (-98.63%)
Mutual labels:  experiment-track
Reinforcement Learning
Minimal and Clean Reinforcement Learning Examples
Stars: ✭ 2,863 (-19.8%)
Mutual labels:  reinforcement-learning
Tiny-Imagenet-200
🔬 Some personal research code on analyzing CNNs. Started with a thorough exploration of Stanford's Tiny-Imagenet-200 dataset.
Stars: ✭ 68 (-98.1%)
Mutual labels:  hyperparameter-search
Open spiel
OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.
Stars: ✭ 2,991 (-16.22%)
Mutual labels:  reinforcement-learning
Vel
Velocity in deep-learning research
Stars: ✭ 267 (-92.52%)
Mutual labels:  reinforcement-learning
Gym Gazebo2
gym-gazebo2 is a toolkit for developing and comparing reinforcement learning algorithms using ROS 2 and Gazebo
Stars: ✭ 257 (-92.8%)
Mutual labels:  reinforcement-learning
benderopt
Black-box optimization library
Stars: ✭ 84 (-97.65%)
Mutual labels:  hyperparameter-search
Awesome Tensorlayer
A curated list of dedicated resources and applications
Stars: ✭ 248 (-93.05%)
Mutual labels:  reinforcement-learning
Pytorch Ddpg Naf
Implementation of algorithms for continuous control (DDPG and NAF).
Stars: ✭ 254 (-92.89%)
Mutual labels:  reinforcement-learning
Auto-Surprise
An AutoRecSys library for Surprise. Automate algorithm selection and hyperparameter tuning 🚀
Stars: ✭ 19 (-99.47%)
Mutual labels:  hyperparameter-search
Ml Games
Machine learning games. Use a combination of genetic algorithms and neural networks to control the behaviour of in-game objects.
Stars: ✭ 247 (-93.08%)
Mutual labels:  reinforcement-learning
Matterport3dsimulator
AI Research Platform for Reinforcement Learning from Real Panoramic Images.
Stars: ✭ 260 (-92.72%)
Mutual labels:  reinforcement-learning
Football
Check out the new game server:
Stars: ✭ 2,843 (-20.36%)
Mutual labels:  reinforcement-learning
maggy
Distribution transparent Machine Learning experiments on Apache Spark
Stars: ✭ 83 (-97.68%)
Mutual labels:  hyperparameter-search
Dialog Generation Paper
A list of recent papers regarding dialogue generation
Stars: ✭ 265 (-92.58%)
Mutual labels:  reinforcement-learning
Pysc2 Agents
This is a simple implementation of DeepMind's PySC2 RL agents.
Stars: ✭ 262 (-92.66%)
Mutual labels:  reinforcement-learning
Crowdnav
[ICRA19] Crowd-aware Robot Navigation with Attention-based Deep Reinforcement Learning
Stars: ✭ 255 (-92.86%)
Mutual labels:  reinforcement-learning

Weights & Biases

Use W&B to build better models faster. Track and visualize all the pieces of your machine learning pipeline, from datasets to production models.

  • Quickly identify model regressions. Use W&B to visualize results in real time, all in a central dashboard.
  • Focus on the interesting ML. Spend less time manually tracking results in spreadsheets and text files.
  • Capture dataset versions with W&B Artifacts to identify how changing data affects your resulting models.
  • Reproduce any model, with saved code, hyperparameters, launch commands, input data, and resulting model weights.

Sign up for a free account →

Features

  • Store hyper-parameters used in a training run
  • Search, compare, and visualize training runs
  • Analyze system usage metrics alongside runs
  • Collaborate with team members
  • Replicate historic results
  • Run parameter sweeps
  • Keep records of experiments available forever

Documentation →

If you have any questions, please don't hesitate to ask in our user forum.

๐Ÿค Simple integration with any framework

Install the wandb library and log in:

pip install wandb
wandb login

Flexible integration for any Python script:

import wandb

# 1. Start a W&B run
wandb.init(project='gpt3')

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# Model training code here ...

# 3. Log metrics over time to visualize performance
for i in range(10):
    wandb.log({"loss": loss})  # `loss` comes from your training step

Try it in a Colab →

Explore a W&B dashboard

Academic Researchers

If you'd like a free academic account for your research group, reach out to us →

We make it easy to cite W&B in your published paper. Learn more →

📈 Track model and data pipeline hyperparameters

Set wandb.config once at the beginning of your script to save your hyperparameters, input settings (like the dataset name or model type), and any other independent variables for your experiments. This is useful for analyzing your experiments and reproducing your work in the future. Setting configs also lets you visualize the relationships between features of your model architecture or data pipeline and the model performance.

wandb.init()
wandb.config.epochs = 4
wandb.config.batch_size = 32
wandb.config.learning_rate = 0.001
wandb.config.architecture = "resnet"

๐Ÿ— Use your favorite framework

🥕 Keras

In Keras, you can use our callback to automatically save all the metrics tracked in model.fit. To get you started, here's a minimal example:

# Import W&B
import wandb
from wandb.keras import WandbCallback

# Step 1: Initialize W&B run
wandb.init(project='project_name')

# Step 2: Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# Model training code here ...

# Step 3: Add WandbCallback
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          callbacks=[WandbCallback()])

🔥 PyTorch

W&B provides first-class support for PyTorch. To automatically log gradients and store the network topology, call .watch and pass in your PyTorch model. Then use .log for anything else you want to track, like so:

import wandb

# 1. Start a new run
wandb.init(project="gpt-3")

# 2. Save model inputs and hyperparameters
config = wandb.config
config.dropout = 0.01

# 3. Log gradients and model parameters
wandb.watch(model)
for batch_idx, (data, target) in enumerate(train_loader):
  ...  
  if batch_idx % args.log_interval == 0:      
    # 4. Log metrics to visualize performance
    wandb.log({"loss": loss})

🌊 TensorFlow

The simplest way to log metrics in TensorFlow is by logging tf.summary with our TensorFlow logger:

import wandb

# 1. Start a W&B run
wandb.init(project='gpt3')

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# Model training here

# 3. Log metrics over time to visualize performance
with tf.Session() as sess:
  # ...
  wandb.tensorflow.log(tf.summary.merge_all())

💨 fastai

Visualize, compare, and iterate on fastai models using Weights & Biases with the WandbCallback.

import wandb
from fastai.callback.wandb import WandbCallback

# 1. Start a new run
wandb.init(project="gpt-3")

# 2. Automatically log model metrics
learn.fit(..., cbs=WandbCallback())

โšก๏ธ PyTorch Lightning

Build scalable, structured, high-performance PyTorch models with Lightning and log them with W&B.

from pytorch_lightning.loggers import WandbLogger
from pytorch_lightning import Trainer

wandb_logger = WandbLogger(project="gpt-3")
trainer = Trainer(logger=wandb_logger)
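
With the logger attached, anything you log through Lightning's own self.log is forwarded to W&B. As a minimal sketch (LitModel is a hypothetical module for illustration, not part of the W&B API):

import pytorch_lightning as pl
import torch

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        self.log("train_loss", loss)  # forwarded to W&B by WandbLogger
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())

# trainer.fit(LitModel(), your_dataloader) then trains and logs as usual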

🤗 HuggingFace

Just run a script using HuggingFace's Trainer in an environment where wandb is installed, and we'll automatically log losses, evaluation metrics, model topology, and gradients:

# 1. Install the wandb library
pip install wandb

# 2. Run a script that uses the Trainer; metrics, model topology, and gradients are logged automatically
python run_glue.py \
 --model_name_or_path bert-base-uncased \
 --task_name MRPC \
 --data_dir $GLUE_DIR/$TASK_NAME \
 --do_train \
 --evaluate_during_training \
 --max_seq_length 128 \
 --per_gpu_train_batch_size 32 \
 --learning_rate 2e-5 \
 --num_train_epochs 3 \
 --output_dir /tmp/$TASK_NAME/ \
 --overwrite_output_dir \
 --logging_steps 50

🧹 Optimize hyperparameters with Sweeps

Use Weights & Biases Sweeps to automate hyperparameter optimization and explore the space of possible models.

Get started in 5 mins →

Try Sweeps in PyTorch in a Colab →
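
As a minimal sketch of the workflow (the project name, metric, and trivial train() body here are placeholders; substitute your real training function), you define a search space, register the sweep, and launch an agent:

import wandb

# 1. Define the search strategy and space
sweep_config = {
    "method": "random",
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "batch_size": {"values": [16, 32, 64]},
    },
}

# 2. Wrap training in a function; wandb.config holds the sampled values
def train():
    run = wandb.init()
    loss = run.config.learning_rate * run.config.batch_size  # placeholder
    wandb.log({"loss": loss})

# 3. Register the sweep and run 10 trials with an agent
sweep_id = wandb.sweep(sweep_config, project="sweeps-demo")
wandb.agent(sweep_id, function=train, count=10)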

Benefits of using W&B Sweeps

  • Quick to set up: With just a few lines of code, you can run W&B Sweeps.
  • Transparent: We cite all the algorithms we're using, and our code is open source.
  • Powerful: Our sweeps are completely customizable and configurable. You can launch a sweep across dozens of machines, and it's just as easy as starting a sweep on your laptop.

Common use cases

  • Explore: Efficiently sample the space of hyperparameter combinations to discover promising regions and build an intuition about your model.
  • Optimize: Use sweeps to find a set of hyperparameters with optimal performance.
  • K-fold cross validation: Here's a brief code example of k-fold cross validation with W&B Sweeps; a simpler grouping-based sketch follows below.
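
The linked example drives the folds with a sweep; as a simpler, illustrative alternative (not the linked code; the data and metric below are placeholders), you can also launch one grouped run per fold by hand:

import numpy as np
import wandb
from sklearn.model_selection import KFold

X, y = np.random.rand(100, 10), np.random.rand(100)  # placeholder data

for fold, (train_idx, val_idx) in enumerate(KFold(n_splits=5).split(X)):
    # `group` ties the per-fold runs together in the W&B UI
    run = wandb.init(project="kfold-demo", group="cv-experiment",
                     name=f"fold-{fold}", reinit=True)
    # ... train on X[train_idx], validate on X[val_idx] ...
    run.log({"val_loss": float(np.mean(y[val_idx]))})  # placeholder metric
    run.finish()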

Visualize Sweeps results

The hyperparameter importance plot surfaces which hyperparameters were the best predictors of, and most strongly correlated with, desirable values of your metrics.

Parallel coordinates plots map hyperparameter values to model metrics. They're useful for homing in on combinations of hyperparameters that led to the best model performance.

📜 Share insights with Reports

Reports let you organize visualizations, describe your findings, and share updates with collaborators.

Common use cases

  • Notes: Add a graph with a quick note to yourself.
  • Collaboration: Share findings with your colleagues.
  • Work log: Track what you've tried and plan next steps.

Explore reports in The Gallery → | Read the Docs

Once you have experiments in W&B, you can visualize and document results in Reports with just a few clicks. Here's a quick demo video.

๐Ÿบ Version control datasets and models with Artifacts

Git and GitHub make code version control easy, but they're not optimized for tracking the other parts of the ML pipeline: datasets, models, and other large binary files.

W&B's Artifacts are. With just a few extra lines of code, you can start tracking your own and your team's outputs, all directly linked to a run.

Try Artifacts in a Colab →
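
A minimal sketch of the core Artifacts calls (the project, artifact, and file names here are hypothetical; substitute your own):

import wandb

run = wandb.init(project="artifacts-demo")

# Log a dataset version as an output of this run
artifact = wandb.Artifact("my-dataset", type="dataset")
artifact.add_file("data/train.csv")  # hypothetical local file
run.log_artifact(artifact)
run.finish()

# A later run declares the dataset as an input, linking the two runs
run = wandb.init(project="artifacts-demo")
dataset = run.use_artifact("my-dataset:latest")
data_dir = dataset.download()  # local path to the artifact contents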

Common use cases

  • Pipeline Management: Track and visualize the inputs and outputs of your runs as a graph
  • Don't Repeat Yourself™: Prevent the duplication of compute effort
  • Sharing Data in Teams: Collaborate on models and datasets without all the headaches

Learn about Artifacts here → | Read the Docs

Testing

To run the basic tests, use make test. More detailed information can be found in CONTRIBUTING.md.

We use CircleCI for CI.
