
deepmind / Learning To Learn

License: Apache-2.0
Learning to Learn in TensorFlow

Programming Languages

Python

Projects that are alternatives to or similar to Learning To Learn

Handtrack.js
A library for prototyping realtime hand detection (bounding box), directly in the browser.
Stars: ✭ 2,531 (-37.32%)
Mutual labels:  artificial-intelligence, neural-networks
Igel
A delightful machine learning tool that allows you to train, test, and use models without writing code
Stars: ✭ 2,956 (-26.8%)
Mutual labels:  artificial-intelligence, neural-networks
Mahjongai
Japanese Riichi Mahjong AI agent. (Feel free to extend this agent or develop your own agent)
Stars: ✭ 210 (-94.8%)
Mutual labels:  artificial-intelligence, neural-networks
Dm control
DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.
Stars: ✭ 2,592 (-35.81%)
Mutual labels:  artificial-intelligence, neural-networks
Gdrl
Grokking Deep Reinforcement Learning
Stars: ✭ 304 (-92.47%)
Mutual labels:  artificial-intelligence, neural-networks
Vectorai
Vector AI — A platform for building vector based applications. Encode, query and analyse data using vectors.
Stars: ✭ 195 (-95.17%)
Mutual labels:  artificial-intelligence, neural-networks
Echotorch
A Python toolkit for Reservoir Computing and Echo State Network experimentation based on PyTorch. EchoTorch is the only Python module available to easily create Deep Reservoir Computing models.
Stars: ✭ 231 (-94.28%)
Mutual labels:  artificial-intelligence, neural-networks
Wyrm
Autodifferentiation package in Rust.
Stars: ✭ 164 (-95.94%)
Mutual labels:  artificial-intelligence, neural-networks
Komputation
Komputation is a neural network framework for the Java Virtual Machine written in Kotlin and CUDA C.
Stars: ✭ 295 (-92.69%)
Mutual labels:  artificial-intelligence, neural-networks
Awesome Ai Awesomeness
A curated list of awesome awesomeness about artificial intelligence
Stars: ✭ 268 (-93.36%)
Mutual labels:  artificial-intelligence, neural-networks
Deep Learning Notes
My personal notes, presentations, and notebooks on everything Deep Learning.
Stars: ✭ 191 (-95.27%)
Mutual labels:  artificial-intelligence, neural-networks
Artificio
Deep Learning Computer Vision Algorithms for Real-World Use
Stars: ✭ 326 (-91.93%)
Mutual labels:  artificial-intelligence, neural-networks
Fixy
Our goal is to build an open-source spelling assistant/checker that solves many different problems in the Turkish NLP literature at once, proposes novel approaches, and addresses the shortcomings of existing work. It corrects spelling errors in users' texts with a deep learning approach, and also performs semantic analysis of the text to detect and fix errors that arise in that context.
Stars: ✭ 165 (-95.91%)
Mutual labels:  artificial-intelligence, neural-networks
Deep Learning With Python
Deep learning codes and projects using Python
Stars: ✭ 195 (-95.17%)
Mutual labels:  artificial-intelligence, neural-networks
Iresnet
Improved Residual Networks (https://arxiv.org/pdf/2004.04989.pdf)
Stars: ✭ 163 (-95.96%)
Mutual labels:  artificial-intelligence, neural-networks
Kaolin
A PyTorch Library for Accelerating 3D Deep Learning Research
Stars: ✭ 2,794 (-30.81%)
Mutual labels:  artificial-intelligence, neural-networks
Hands On Machine Learning With Scikit Learn Keras And Tensorflow
Notes & exercise solutions of Part I from the book: "Hands-On ML with Scikit-Learn, Keras & TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems" by Aurelien Geron
Stars: ✭ 151 (-96.26%)
Mutual labels:  artificial-intelligence, neural-networks
Avalanche
Avalanche: an End-to-End Library for Continual Learning.
Stars: ✭ 151 (-96.26%)
Mutual labels:  artificial-intelligence, neural-networks
Darwinexlabs
Datasets, tools and more from Darwinex Labs - Prop Investing Arm & Quant Team @ Darwinex
Stars: ✭ 248 (-93.86%)
Mutual labels:  artificial-intelligence, neural-networks
Lightnet
🌓 Bringing pjreddie's DarkNet out of the shadows #yolo
Stars: ✭ 322 (-92.03%)
Mutual labels:  artificial-intelligence, neural-networks

Learning to Learn in TensorFlow

Dependencies

  • TensorFlow >= 1.0
  • Sonnet >= 1.0
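
Assuming the standard PyPI package names (tensorflow for TensorFlow 1.x and dm-sonnet for Sonnet 1.x; the version pins here are illustrative), an install might look like:

pip install "tensorflow>=1.0" "dm-sonnet>=1.0"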

Training

python train.py --problem=mnist --save_path=./mnist

Command-line flags:

  • save_path: If present, the optimizer will be saved to the specified path every time the evaluation performance is improved.
  • num_epochs: Number of training epochs.
  • log_period: Number of epochs between reports of mean performance and time.
  • evaluation_period: Number of epochs between evaluations of the optimizer.
  • evaluation_epochs: Number of evaluation epochs.
  • problem: Problem to train on. See Problems section below.
  • num_steps: Number of optimization steps.
  • unroll_length: Number of unroll steps for the optimizer.
  • learning_rate: Learning rate.
  • second_derivatives: If true, the optimizer will try to compute second derivatives through the loss function specified by the problem.
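
For example, a longer training run on the quadratic problem, spelling out the flags above (the values are illustrative, not tuned defaults):

python train.py --problem=quadratic --save_path=./quadratic \
    --num_epochs=10000 --log_period=100 --evaluation_period=1000 \
    --evaluation_epochs=20 --num_steps=100 --unroll_length=20 \
    --learning_rate=0.001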

Evaluation

python evaluate.py --problem=mnist --optimizer=L2L --path=./mnist

Command-line flags:

  • optimizer: Adam or L2L.
  • path: Path to saved optimizer, only relevant if using the L2L optimizer.
  • learning_rate: Learning rate, only relevant if using the Adam optimizer.
  • num_epochs: Number of evaluation epochs.
  • seed: Seed for random number generation.
  • problem: Problem to evaluate on. See Problems section below.
  • num_steps: Number of optimization steps.
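
To compare the learned optimizer against a standard baseline on the same problem, one might run both evaluations back to back (flag values again illustrative):

python evaluate.py --problem=mnist --optimizer=L2L --path=./mnist --num_epochs=100 --seed=0
python evaluate.py --problem=mnist --optimizer=Adam --learning_rate=0.01 --num_epochs=100 --seed=0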

Problems

The training and evaluation scripts support the following problems (see util.py for more details):

  • simple: One-variable quadratic function.
  • simple-multi: Two-variable quadratic function, where one of the variables is optimized using a learned optimizer and the other one using Adam.
  • quadratic: Batched ten-variable quadratic function.
  • mnist: MNIST classification using a two-layer fully connected network.
  • cifar: CIFAR-10 classification using a convolutional neural network.
  • cifar-multi: CIFAR-10 classification using a convolutional neural network, where two independent learned optimizers are used: one optimizes the parameters of the convolutional layers, the other those of the fully connected layers.

New problems are easy to implement. As you can see in train.py, the meta_minimize method of the MetaOptimizer class is given a function that returns the TensorFlow operation generating the loss we want to minimize (see problems.py for an example, and the sketch below).

All operations with Python side effects (e.g. queue creation) must be performed outside of the function passed to meta_minimize. The cifar10 function in problems.py is a good example of a loss function that uses TensorFlow queues.
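
As a minimal sketch of what such a function can look like (hypothetical code, not taken from problems.py), the example below builds and returns the loss tensor using the TensorFlow 1.x API the project targets, and keeps its only Python side effect, random data generation, outside the function:

import numpy as np
import tensorflow as tf

# Problem data is generated once, at module load, so the loss function below
# has no Python side effects when meta_minimize re-invokes it.
W_DATA = np.random.randn(10, 10).astype(np.float32)
Y_DATA = np.random.randn(10, 1).astype(np.float32)

def my_quadratic():
  """Hypothetical problem: minimize ||Wx - y||^2 over x for fixed W, y."""
  w = tf.constant(W_DATA)
  y = tf.constant(Y_DATA)
  # Trainable parameters that the learned optimizer will update.
  x = tf.get_variable(
      "x", shape=[10, 1], initializer=tf.random_normal_initializer())
  return tf.reduce_sum(tf.square(tf.matmul(w, x) - y))

A function like this could then be passed to meta_minimize in place of a built-in problem; the definitions in problems.py show the exact conventions the project expects.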

Disclaimer: This is not an official Google product.
