
clovaai / Adamp

License: MIT
AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights (ICLR 2021)

Programming Languages

Python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Adamp

adamwr
Implements https://arxiv.org/abs/1711.05101 AdamW optimizer, cosine learning rate scheduler and "Cyclical Learning Rates for Training Neural Networks" https://arxiv.org/abs/1506.01186 for PyTorch framework
Stars: ✭ 130 (-57.52%)
Mutual labels:  optimizer
Windows11-Optimization
Community repository to improve the security and performance of Windows 10 and Windows 11 with tweaks, commands, scripts, registry keys, configuration, tutorials, and more
Stars: ✭ 17 (-94.44%)
Mutual labels:  optimizer
gfsopt
Convenient hyperparameter optimization
Stars: ✭ 12 (-96.08%)
Mutual labels:  optimizer
ada-hessian
Easy-to-use AdaHessian optimizer (PyTorch)
Stars: ✭ 59 (-80.72%)
Mutual labels:  optimizer
postcss-clean
PostCSS plugin to minify your CSS with clean-css
Stars: ✭ 41 (-86.6%)
Mutual labels:  optimizer
goga
Go evolutionary algorithm: a library for developing evolutionary and genetic algorithms to solve optimisation problems with (or without) many constraints and objectives. It also aims to handle mixed-type representations (reals and integers).
Stars: ✭ 39 (-87.25%)
Mutual labels:  optimizer
EAGO.jl
A development environment for robust and global optimization
Stars: ✭ 106 (-65.36%)
Mutual labels:  optimizer
nibbler
Runtime Python bytecode optimizer. ⚡️
Stars: ✭ 21 (-93.14%)
Mutual labels:  optimizer
madam
👩 PyTorch and JAX code for the Madam optimiser.
Stars: ✭ 46 (-84.97%)
Mutual labels:  optimizer
sam.pytorch
A PyTorch implementation of Sharpness-Aware Minimization for Efficiently Improving Generalization
Stars: ✭ 96 (-68.63%)
Mutual labels:  optimizer
ToyDB
A toy database for beginners, based on MIT 6.830 and CMU 15-445
Stars: ✭ 25 (-91.83%)
Mutual labels:  optimizer
lookahead tensorflow
Lookahead optimizer ("Lookahead Optimizer: k steps forward, 1 step back") for TensorFlow
Stars: ✭ 25 (-91.83%)
Mutual labels:  optimizer
pigosat
Go (golang) bindings for PicoSAT, the satisfiability solver
Stars: ✭ 15 (-95.1%)
Mutual labels:  optimizer
Post-Tweaks
A post-installation batch script for Windows
Stars: ✭ 136 (-55.56%)
Mutual labels:  optimizer
soar
SQL Optimizer And Rewriter
Stars: ✭ 7,786 (+2444.44%)
Mutual labels:  optimizer
portfolio-optimizer
A library for portfolio optimization algorithms with a Python interface.
Stars: ✭ 19 (-93.79%)
Mutual labels:  optimizer
falcon
A WordPress cleanup and performance optimization plugin.
Stars: ✭ 17 (-94.44%)
Mutual labels:  optimizer
Lookahead.pytorch
Lookahead optimizer ("Lookahead Optimizer: k steps forward, 1 step back") for PyTorch
Stars: ✭ 279 (-8.82%)
Mutual labels:  optimizer
simplu3D
A library to generate buildings from local urban regulations.
Stars: ✭ 18 (-94.12%)
Mutual labels:  optimizer
rethinking-bnn-optimization
Implementation for the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization"
Stars: ✭ 62 (-79.74%)
Mutual labels:  optimizer

AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights (ICLR 2021)

Official PyTorch implementation of AdamP and SGDP optimizers | Paper | Project page

Byeongho Heo*, Sanghyuk Chun*, Seong Joon Oh, Dongyoon Han, Sangdoo Yun, Gyuwan Kim, Youngjung Uh, Jung-Woo Ha.
* indicates equal contribution

NAVER AI LAB, NAVER CLOVA

Abstract

Normalization techniques are a boon for modern deep learning. They let weights converge more quickly with often better generalization performances. It has been argued that the normalization-induced scale invariance among the weights provides an advantageous ground for gradient descent (GD) optimizers: the effective step sizes are automatically reduced over time, stabilizing the overall training procedure. It is often overlooked, however, that the additional introduction of momentum in GD optimizers results in a far more rapid reduction in effective step sizes for scale-invariant weights, a phenomenon that has not yet been studied and may have caused unwanted side effects in the current practice. This is a crucial issue because arguably the vast majority of modern deep neural networks consist of (1) momentum-based GD (e.g. SGD or Adam) and (2) scale-invariant parameters. In this paper, we verify that the widely-adopted combination of the two ingredients leads to the premature decay of effective step sizes and sub-optimal model performances. We propose a simple and effective remedy, SGDP and AdamP: get rid of the radial component, or the norm-increasing direction, at each optimizer step. Because of the scale invariance, this modification only alters the effective step sizes without changing the effective update directions, thus enjoying the original convergence properties of GD optimizers. Given the ubiquity of momentum GD and scale invariance in machine learning, we have evaluated our methods against the baselines on 13 benchmarks. They range from vision tasks like classification (e.g. ImageNet), retrieval (e.g. CUB and SOP), and detection (e.g. COCO) to language modelling (e.g. WikiText) and audio classification (e.g. DCASE) tasks. We verify that our solution brings about uniform gains in those benchmarks.

How does it work?

Please visit our project page.
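
In short, the remedy described in the abstract removes the radial (norm-increasing) component of the update for scale-invariant weights at each optimizer step. Below is a minimal sketch of that projection in PyTorch; the function name project_radial and the eps constant are illustrative and not part of the adamp library's API.

import torch

def project_radial(weight, update, eps=1e-8):
    # Remove the component of `update` that points along `weight`
    # (the radial, norm-increasing direction), keeping only the
    # tangential part that changes the effective update direction.
    w = weight.view(-1)
    u = update.view(-1)
    w_hat = w / (w.norm() + eps)          # unit vector along the weight
    radial = torch.dot(w_hat, u) * w_hat  # radial component of the update
    return (u - radial).view_as(update)

Because scale-invariant weights are insensitive to their norm, dropping the radial part only alters the effective step size, not the effective update direction.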

Updates

  • Aug 27, 2020: Built-in cosine similarity and warning fix (v0.3.0)
  • Jun 19, 2020: Nesterov update (v0.2.0)
  • Jun 15, 2020: Initial upload (v0.1.0)

Getting Started

Installation

pip3 install adamp

Usage

Usage is exactly the same as the torch.optim library!

from adamp import AdamP

# params: an iterable of the parameters to optimize, e.g. model.parameters()
optimizer = AdamP(params, lr=0.001, betas=(0.9, 0.999), weight_decay=1e-2)

from adamp import SGDP

# params: an iterable of the parameters to optimize, e.g. model.parameters()
optimizer = SGDP(params, lr=0.1, weight_decay=1e-5, momentum=0.9, nesterov=True)
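
Since the interface matches torch.optim, the optimizers drop into a standard PyTorch training loop. Below is a minimal sketch; the model, dummy batch, and loss function are placeholders, not part of this repository.

import torch
import torch.nn as nn
from adamp import AdamP

model = nn.Linear(10, 2)          # any nn.Module
criterion = nn.CrossEntropyLoss()
optimizer = AdamP(model.parameters(), lr=0.001, betas=(0.9, 0.999), weight_decay=1e-2)

inputs = torch.randn(8, 10)       # dummy batch
targets = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()                  # AdamP update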

Arguments

SGDP and AdamP share arguments with torch.optim.SGD and torch.optim.Adam. There are two additional hyperparameters; we recommend using the default values.

  • delta : threshold that determines whether a set of parameters is scale-invariant or not (default: 0.1)
  • wd_ratio : relative weight decay applied on scale-invariant parameters compared to that applied on scale-variant parameters (default: 0.1)

Both SGDP and AdamP support Nesterov momentum; a sketch with every argument set explicitly follows the list below.

  • nesterov : enables Nesterov momentum (default: False)
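
For reference, here is a sketch with the above arguments set explicitly; model is assumed to be your own nn.Module, and delta and wd_ratio are shown at their default values.

from adamp import AdamP, SGDP

optimizer = AdamP(model.parameters(), lr=0.001, betas=(0.9, 0.999),
                  weight_decay=1e-2, delta=0.1, wd_ratio=0.1, nesterov=True)

optimizer = SGDP(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-5,
                 delta=0.1, wd_ratio=0.1, nesterov=True)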

License

This project is distributed under the MIT license.

Copyright (c) 2020-present NAVER Corp.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

How to cite

@inproceedings{heo2021adamp,
    title={AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights},
    author={Heo, Byeongho and Chun, Sanghyuk and Oh, Seong Joon and Han, Dongyoon and Yun, Sangdoo and Kim, Gyuwan and Uh, Youngjung and Ha, Jung-Woo},
    year={2021},
    booktitle={International Conference on Learning Representations (ICLR)},
}