
sagarvegad / Adam-optimizer

Licence: other
Implemented the Adam optimizer in Python

Programming Languages

Python
139,335 projects - #7 most used programming language

Projects that are alternatives of or similar to Adam-optimizer

Android-SGTextView
A TextView with stroke, gradient, and shadow effects applied at the same time
Stars: ✭ 18 (-58.14%)
Mutual labels:  gradient
GradientProgress
A gradient progress bar (UIProgressView).
Stars: ✭ 38 (-11.63%)
Mutual labels:  gradient
Learning-Lab-C-Library
This library provides a set of basic functions for different types of deep learning (and other) algorithms in C. This deep learning library will be constantly updated.
Stars: ✭ 20 (-53.49%)
Mutual labels:  adam-optimizer
sweetconfirm.js
👌 A useful zero-dependency, less-than-434-byte (gzipped), pure JavaScript & CSS solution for dropping annoying pop-ups that confirm form submission in your web apps.
Stars: ✭ 34 (-20.93%)
Mutual labels:  gradient
lookahead tensorflow
Lookahead optimizer ("Lookahead Optimizer: k steps forward, 1 step back") for TensorFlow
Stars: ✭ 25 (-41.86%)
Mutual labels:  adam-optimizer
CS231n
PyTorch/TensorFlow solutions for Stanford's CS231n: "CNNs for Visual Recognition"
Stars: ✭ 47 (+9.3%)
Mutual labels:  adam-optimizer
RMGradientView
A Custom Gradient View Control for iOS with inspectable properties.
Stars: ✭ 24 (-44.19%)
Mutual labels:  gradient
tensorflow-mle
Some examples on computing MLEs using TensorFlow
Stars: ✭ 14 (-67.44%)
Mutual labels:  gradient
SwiftUI-Color-Kit
SwiftUI Color Pickers, Gradient Pickers And All The Utilities Needed To Make Your Own!
Stars: ✭ 120 (+179.07%)
Mutual labels:  gradient
RainbowTaskbar
Customizable Windows taskbar effects.
Stars: ✭ 39 (-9.3%)
Mutual labels:  gradient
random-gradient
Generate beautiful random gradients
Stars: ✭ 63 (+46.51%)
Mutual labels:  gradient
LimitlessUI
Awesome C# UI library that greatly reduces the limits on your application's looks
Stars: ✭ 41 (-4.65%)
Mutual labels:  gradient
autodiff
A .NET library that provides fast, accurate and automatic differentiation (computes derivative / gradient) of mathematical functions.
Stars: ✭ 69 (+60.47%)
Mutual labels:  gradient
GradientProgressView
A simple progress bar control
Stars: ✭ 15 (-65.12%)
Mutual labels:  gradient
haskell-vae
Learning about Haskell with Variational Autoencoders
Stars: ✭ 18 (-58.14%)
Mutual labels:  adam-optimizer
PastelXamarinIos
🌒 Gradient animations on Xamarin-iOS
Stars: ✭ 17 (-60.47%)
Mutual labels:  gradient
Gradientable
Gradiention Protocol in iOS
Stars: ✭ 26 (-39.53%)
Mutual labels:  gradient
GradientBorderedLabelView
IBDesignable label with customizable gradient attributes
Stars: ✭ 70 (+62.79%)
Mutual labels:  gradient
ML-Optimizers-JAX
Toy implementations of some popular ML optimizers using Python/JAX
Stars: ✭ 37 (-13.95%)
Mutual labels:  adam-optimizer
mixed-precision-pytorch
Training with FP16 weights in PyTorch
Stars: ✭ 72 (+67.44%)
Mutual labels:  gradient

Adam-optimizer

I have implemented the Adam optimizer from scratch in Python. I have assumed the stochastic objective function to be f(x) = x^2 - 4x + 4, whose gradient is 2x - 4. The algorithm follows "Adam: A Method for Stochastic Optimization" by Diederik P. Kingma and Jimmy Ba.

First, I initialised all the parameters: alpha, beta_1, beta_2, epsilon, theta_0, the 1st moment vector, the 2nd moment vector, and the timestep. Then I looped until the parameter vector (theta_0) converged.

In the while loop, I update the timestep, compute the gradient of the stochastic function, update the exponential moving averages of the gradient (m_t) and of the squared gradient (v_t), and compute the bias-corrected estimates m_cap and v_cap. Finally, I update the parameter (theta_0) and stop the loop once the previous value of theta_0 equals the new value, which means the parameter has converged. A minimal sketch of these steps is shown below.
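The following is a minimal sketch of the initialisation and update loop described above, assuming the objective f(x) = x^2 - 4x + 4 from this README and the hyperparameter defaults suggested in the paper; the exact names and values in the actual script may differ.

```python
import math

# Hyperparameters: the defaults suggested in the Adam paper.
# (The actual script may use slightly different values.)
alpha = 0.01        # step size
beta_1 = 0.9        # exponential decay rate for the 1st moment estimate
beta_2 = 0.999      # exponential decay rate for the 2nd moment estimate
epsilon = 1e-8      # small constant to avoid division by zero

def grad_func(theta):
    """Gradient of the assumed objective f(x) = x^2 - 4x + 4, i.e. f'(x) = 2x - 4."""
    return 2 * theta - 4

theta_0 = 0.0       # initial parameter value
m_t = 0.0           # 1st moment vector (EMA of the gradient)
v_t = 0.0           # 2nd moment vector (EMA of the squared gradient)
t = 0               # timestep

max_iters = 1_000_000   # safety cap added for this sketch; not part of the original description
while t < max_iters:
    t += 1
    g_t = grad_func(theta_0)                          # gradient at the current parameter
    m_t = beta_1 * m_t + (1 - beta_1) * g_t           # update biased 1st moment estimate
    v_t = beta_2 * v_t + (1 - beta_2) * (g_t * g_t)   # update biased 2nd moment estimate
    m_cap = m_t / (1 - beta_1 ** t)                   # bias-corrected 1st moment
    v_cap = v_t / (1 - beta_2 ** t)                   # bias-corrected 2nd moment
    theta_prev = theta_0
    theta_0 = theta_0 - (alpha * m_cap) / (math.sqrt(v_cap) + epsilon)  # parameter update
    if theta_0 == theta_prev:                         # converged: the update no longer changes theta
        break

print(theta_0)  # should approach the minimiser x = 2
```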

Adam uses an adaptive learning rate and is an efficient method for stochastic optimization that only requires first-order gradients and has little memory requirement. It combines the advantage of AdaGrad in handling sparse gradients with the ability of RMSProp to handle non-stationary objectives.
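For reference, these are the update rules from Kingma & Ba that the steps above follow; the effective step size alpha * m_cap / (sqrt(v_cap) + epsilon) adapts per parameter, which is what makes the learning rate adaptive.

```latex
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1 - \beta_1)\, g_t \\
v_t &= \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2 \\
\hat{m}_t &= \frac{m_t}{1 - \beta_1^t}, \qquad
\hat{v}_t = \frac{v_t}{1 - \beta_2^t} \\
\theta_t &= \theta_{t-1} - \frac{\alpha\, \hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
\end{aligned}
```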
