
VITA-Group / Open-L2O

License: MIT License
Open-L2O: A Comprehensive and Reproducible Benchmark for Learning to Optimize Algorithms

Programming Languages

C++, Fortran, Python, CMake, C, CUDA

Projects that are alternatives of or similar to Open-L2O

Meta Learning Papers
Meta Learning / Learning to Learn / One Shot Learning / Few Shot Learning
Stars: ✭ 2,420 (+2140.74%)
Mutual labels:  meta-learning, learning-to-learn
PAML
Personalizing Dialogue Agents via Meta-Learning
Stars: ✭ 114 (+5.56%)
Mutual labels:  meta-learning, learning-to-learn
Joint-User-Association-and-In-band-Backhaul-Scheduling-and-in-5G-mmWave-Networks
Matlab Simulation for T. K. Vu, M. Bennis, S. Samarakoon, M. Debbah and M. Latva-aho, "Joint In-Band Backhauling and Interference Mitigation in 5G Heterogeneous Networks," European Wireless 2016; 22nd European Wireless Conference, Oulu, Finland, 2016, pp. 1-6. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7499273&isnumber=7499250
Stars: ✭ 36 (-66.67%)
Mutual labels:  convex-optimization
awesome-nn-optimization
Awesome list for Neural Network Optimization methods.
Stars: ✭ 39 (-63.89%)
Mutual labels:  convex-optimization
CS330-Stanford-Deep-Multi-Task-and-Meta-Learning
My notes and assignment solutions for Stanford CS330 (Fall 2019 & 2020) Deep Multi-Task and Meta Learning
Stars: ✭ 34 (-68.52%)
Mutual labels:  meta-learning
mliis
Code for meta-learning initializations for image segmentation
Stars: ✭ 21 (-80.56%)
Mutual labels:  meta-learning
meta-interpolation
Source code for CVPR 2020 paper "Scene-Adaptive Video Frame Interpolation via Meta-Learning"
Stars: ✭ 75 (-30.56%)
Mutual labels:  meta-learning
Optimization
A set of lightweight header-only template functions implementing commonly-used optimization methods on Riemannian manifolds and convex spaces.
Stars: ✭ 66 (-38.89%)
Mutual labels:  convex-optimization
MeTAL
Official PyTorch implementation of "Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning" (ICCV2021 Oral)
Stars: ✭ 24 (-77.78%)
Mutual labels:  meta-learning
metagenrl
MetaGenRL, a novel meta reinforcement learning algorithm. Unlike prior work, MetaGenRL can generalize to new environments that are entirely different from those used for meta-training.
Stars: ✭ 50 (-53.7%)
Mutual labels:  meta-learning
maml-tensorflow
This repository implements the paper Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.
Stars: ✭ 17 (-84.26%)
Mutual labels:  meta-learning
PyOptSamples
Optimization sample codes on Python
Stars: ✭ 20 (-81.48%)
Mutual labels:  convex-optimization
opfunu
A collection of Benchmark functions for numerical optimization problems (https://opfunu.readthedocs.io)
Stars: ✭ 31 (-71.3%)
Mutual labels:  convex-optimization
Learning2AdaptForStereo
Code for: "Learning To Adapt For Stereo" accepted at CVPR2019
Stars: ✭ 73 (-32.41%)
Mutual labels:  meta-learning
Meta-DETR
Meta-DETR: Official PyTorch Implementation
Stars: ✭ 205 (+89.81%)
Mutual labels:  meta-learning
Meta-TTS
Official repository of https://arxiv.org/abs/2111.04040v1
Stars: ✭ 69 (-36.11%)
Mutual labels:  meta-learning
ProxSDP.jl
Semidefinite programming optimization solver
Stars: ✭ 69 (-36.11%)
Mutual labels:  convex-optimization
cr-sparse
Functional models and algorithms for sparse signal processing
Stars: ✭ 38 (-64.81%)
Mutual labels:  convex-optimization
MetaLifelongLanguage
Repository containing code for the paper "Meta-Learning with Sparse Experience Replay for Lifelong Language Learning".
Stars: ✭ 21 (-80.56%)
Mutual labels:  meta-learning
maml-rl-tf2
Implementation of Model-Agnostic Meta-Learning (MAML) applied on Reinforcement Learning problems in TensorFlow 2.
Stars: ✭ 16 (-85.19%)
Mutual labels:  meta-learning

Open-L2O

This repository establishes the first comprehensive benchmark of existing learning-to-optimize (L2O) approaches across a range of problems and settings. We release our software implementation and data as the Open-L2O package, for reproducible research and fair benchmarking in the L2O field. [Paper]

License: MIT

Overview

What is learning to optimize (L2O)?

Learning to optimize (L2O) aims to replace manually designed analytic optimization algorithms (SGD, RMSProp, Adam, etc.) with learned update rules.

How does L2O work?

An L2O optimizer is a function that is fit from data: it gains experience by training on a collection of optimization tasks in a principled and automatic way.
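
As a concrete illustration, below is a minimal, hypothetical NumPy sketch (not code from this repository): it replaces a hand-designed step rule with a learned per-coordinate step size theta, and fits theta by unrolling the optimizer on quadratic tasks sampled from a target distribution. Real L2O methods parameterize the update rule with an RNN or MLP and backpropagate through the unrolled loop rather than using finite differences.

import numpy as np

def learned_update(x, grad, theta):
    # Learned update rule: here just a per-coordinate step size.
    return x - theta * grad

def sample_task(dim=2, rng=np.random.default_rng()):
    # A random convex quadratic f(x) = 0.5 * x^T A x from the task distribution.
    A = np.diag(rng.uniform(0.5, 2.0, size=dim))
    return (lambda x: 0.5 * x @ A @ x), (lambda x: A @ x)

theta = np.full(2, 0.1)
for _ in range(200):
    loss_fn, grad_fn = sample_task()
    def meta_loss(th):
        x = np.ones(2)
        for _ in range(10):            # unroll 10 optimizer steps
            x = learned_update(x, grad_fn(x), th)
        return loss_fn(x)              # how good is the final iterate?
    eps = 1e-4
    meta_grad = np.array([(meta_loss(theta + eps * e) - meta_loss(theta - eps * e)) / (2 * eps)
                          for e in np.eye(2)])
    theta = np.clip(theta - 0.05 * meta_grad, 0.01, 1.0)   # meta-update of theta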

What can L2O do for you?

L2O is particularly suitable when the same type of optimization problem must be solved repeatedly over a specific distribution of data. Compared with classic methods, L2O has been shown to find higher-quality solutions and/or converge much faster on many problems.

Open questions for research?

  • There are significant theoretical and practical gaps between manually designed optimizers and existing L2O models.

Main Results

  • Learning to optimize sparse recovery
  • Learning to optimize Lasso functions
  • Learning to optimize non-convex Rastrigin functions
  • Learning to optimize neural networks

Supported Model-based Learnable Optimizers

All code is available here; a sketch of the unrolled layer structure these methods share follows the list below.

  1. LISTA (feed-forward form) from Learning fast approximations of sparse coding [Paper]
  2. LISTA-CP from Theoretical Linear Convergence of Unfolded ISTA and its Practical Weights and Thresholds [Paper]
  3. LISTA-CPSS from Theoretical Linear Convergence of Unfolded ISTA and its Practical Weights and Thresholds [Paper]
  4. LFISTA from Understanding Trainable Sparse Coding via Matrix Factorization [Paper]
  5. LAMP from AMP-Inspired Deep Networks for Sparse Linear Inverse Problems [Paper]
  6. ALISTA from ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA [Paper]
  7. GLISTA from Sparse Coding with Gated Learned ISTA [Paper]

Supported Model-free Learnable Optimizers

  1. L2O-DM from Learning to learn by gradient descent by gradient descent [Paper] [Code]
  2. L2O-RNNProp from Learning Gradient Descent: Better Generalization and Longer Horizons [Paper] [Code]
  3. L2O-Scale from Learned Optimizers that Scale and Generalize [Paper] [Code]
  4. L2O-enhanced from Training Stronger Baselines for Learning to Optimize [Paper] [Code]
  5. L2O-Swarm from Learning to Optimize in Swarms [Paper] [Code]
  6. L2O-Jacobian from HALO: Hardware-Aware Learning to Optimize [Paper] [Code]
  7. L2O-Minmax from Learning A Minimax Optimizer: A Pilot Study [Paper] [Code]
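
Unlike the unrolled models above, these optimizers observe the optimizee's gradients with a small recurrent network and emit the parameter updates directly, coordinate-wise (as in L2O-DM). A hypothetical stand-in for that inner loop, with an untrained toy RNN in place of the meta-trained model:

import numpy as np

class TinyRNNOptimizer:
    # Toy stand-in for a learned coordinate-wise optimizer; in practice its
    # weights are meta-trained by backpropagating through the unrolled loop.
    def __init__(self, hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.Wg = rng.normal(scale=0.1, size=(hidden, 1))       # gradient -> hidden
        self.Wh = rng.normal(scale=0.1, size=(hidden, hidden))  # hidden -> hidden
        self.Wo = rng.normal(scale=0.1, size=(1, hidden))       # hidden -> update

    def step(self, grad, h):
        # grad: (n,) optimizee gradient; h: (n, hidden) per-coordinate state.
        h = np.tanh(grad[:, None] @ self.Wg.T + h @ self.Wh.T)
        return (h @ self.Wo.T).ravel(), h

def rosenbrock_grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                     200 * (x[1] - x[0] ** 2)])

opt = TinyRNNOptimizer()
x, h = np.zeros(2), np.zeros((2, 8))
for _ in range(100):
    update, h = opt.step(rosenbrock_grad(x), h)
    x = x + update   # the update rule is whatever the learned network outputs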

Supported Optimizees

Convex Functions:

  • Quadratic
  • Lasso

Non-convex Functions:

  • Rastrigin

Minmax Functions:

  • Saddle
  • Rotated Saddle
  • Seesaw
  • Matrix Game

Neural Networks:

  • MLPs on MNIST
  • ConvNets on MNIST and CIFAR-10
  • LeNet
  • NAS-searched architectures
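
The analytic optimizees above have simple closed forms. For reference, a sketch of a few of them using standard textbook definitions (our parameterization, which may differ in detail from the repository's):

import numpy as np

def quadratic(x, A, b):
    # Convex quadratic: f(x) = 0.5 * ||A x - b||^2
    return 0.5 * np.sum((A @ x - b) ** 2)

def lasso(x, A, b, lam=0.1):
    # Lasso: least squares plus an L1 penalty.
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

def rastrigin(x, A=10.0):
    # Non-convex, highly multimodal benchmark function.
    return A * x.size + np.sum(x ** 2 - A * np.cos(2 * np.pi * x))

def saddle(x, y):
    # Simplest minmax optimizee: min over x, max over y of x * y.
    return x * y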

Other Resources

  • This is a PyTorch implementation of L2O-DM. [Code]
  • This is the original L2O-Swarm repository. [Code]
  • This is the original L2O-Jacobian repository. [Code]

Future Works

  • A TensorFlow 2.0 implementation (toolbox v2) with a unified framework and library dependencies.

Cite

@misc{chen2021learning,
      title={Learning to Optimize: A Primer and A Benchmark}, 
      author={Tianlong Chen and Xiaohan Chen and Wuyang Chen and Howard Heaton and Jialin Liu and Zhangyang Wang and Wotao Yin},
      year={2021},
      eprint={2103.12828},
      archivePrefix={arXiv},
      primaryClass={math.OC}
}