mattjj / Autodidact
License: MIT
A pedagogical implementation of Autograd
Stars: ✭ 585
Projects that are alternatives of or similar to Autodidact
Deeplearning.ai
Coursework from Andrew Ng's Deeplearning.ai course on Coursera
Stars: ✭ 572 (-2.22%)
Mutual labels: jupyter-notebook
Business Machine Learning
A curated list of practical business machine learning (BML) and business data science (BDS) applications for Accounting, Customer, Employee, Legal, Management and Operations (by @firmai)
Stars: ✭ 575 (-1.71%)
Mutual labels: jupyter-notebook
Easy Scraping Tutorial
Simple but useful Python web scraping tutorial code.
Stars: ✭ 583 (-0.34%)
Mutual labels: jupyter-notebook
Cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
Stars: ✭ 5,356 (+815.56%)
Mutual labels: jupyter-notebook
Pandas Cookbook
Recipes for using Python's pandas library
Stars: ✭ 5,520 (+843.59%)
Mutual labels: jupyter-notebook
Quaternet
Neural networks that generate animations of virtual characters for different actions.
Stars: ✭ 580 (-0.85%)
Mutual labels: jupyter-notebook
Gym Trading
Environment for reinforcement-learning algorithmic trading models
Stars: ✭ 574 (-1.88%)
Mutual labels: jupyter-notebook
Coursera Ml Using Matlab Python
Self-written Python versions of the assignments from Andrew Ng's Coursera machine learning course, plus the original Matlab versions
Stars: ✭ 579 (-1.03%)
Mutual labels: jupyter-notebook
Wgan Tensorflow
A TensorFlow implementation of WGAN
Stars: ✭ 572 (-2.22%)
Mutual labels: jupyter-notebook
Functional Zoo
PyTorch and TensorFlow functional model definitions
Stars: ✭ 577 (-1.37%)
Mutual labels: jupyter-notebook
Grokking Deep Learning
This repository accompanies the book "Grokking Deep Learning"
Stars: ✭ 5,380 (+819.66%)
Mutual labels: jupyter-notebook
Trtorch
PyTorch/TorchScript compiler for NVIDIA GPUs using TensorRT
Stars: ✭ 583 (-0.34%)
Mutual labels: jupyter-notebook
Diracnets
Training Very Deep Neural Networks Without Skip-Connections
Stars: ✭ 581 (-0.68%)
Mutual labels: jupyter-notebook
Autograd
Autodidact: a pedagogical implementation of Autograd. This is a tutorial implementation based on the full version of Autograd.
Example use:
>>> import autograd.numpy as np # Thinly-wrapped numpy
>>> from autograd import grad # The only autograd function you may ever need
>>>
>>> def tanh(x): # Define a function
... y = np.exp(-2.0 * x)
... return (1.0 - y) / (1.0 + y)
...
>>> grad_tanh = grad(tanh) # Obtain its gradient function
>>> grad_tanh(1.0) # Evaluate the gradient at x = 1.0
0.41997434161402603
>>> (tanh(1.0001) - tanh(0.9999)) / 0.0002 # Compare to finite differences
0.41997434264973155
We can continue to differentiate as many times as we like, and use numpy's vectorization of scalar-valued functions across many different input values:
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(-7, 7, 200)
>>> plt.plot(x, tanh(x),
... x, grad(tanh)(x), # first derivative
... x, grad(grad(tanh))(x), # second derivative
... x, grad(grad(grad(tanh)))(x), # third derivative
... x, grad(grad(grad(grad(tanh))))(x), # fourth derivative
... x, grad(grad(grad(grad(grad(tanh)))))(x), # fifth derivative
... x, grad(grad(grad(grad(grad(grad(tanh))))))(x)) # sixth derivative
>>> plt.show()
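The core idea Autodidact teaches — recording each operation into a computation graph during the forward pass, then propagating gradients backwards through that graph — can be sketched in a few lines of plain Python. This is a simplified illustration of reverse-mode automatic differentiation, not Autodidact's actual code (names like `Var` are invented for this sketch):

```python
class Var:
    """A scalar value that records how it was computed."""
    def __init__(self, value, parents=()):
        self.value = value        # result of the forward computation
        self.parents = parents    # (parent Var, local partial derivative) pairs
        self.grad = 0.0

    def _wrap(self, other):
        return other if isinstance(other, Var) else Var(other)

    def __add__(self, other):
        other = self._wrap(other)
        # d(a + b)/da = 1, d(a + b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = self._wrap(other)
        # d(a * b)/da = b, d(a * b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self):
        # Topologically order the graph so each node's gradient is
        # fully accumulated before it is propagated to its parents.
        order, seen = [], set()
        def topo(node):
            if id(node) not in seen:
                seen.add(id(node))
                for parent, _ in node.parents:
                    topo(parent)
                order.append(node)
        topo(self)
        self.grad = 1.0
        for node in reversed(order):
            for parent, local in node.parents:
                parent.grad += local * node.grad

x = Var(3.0)
y = x * x + x      # y = x^2 + x, so dy/dx = 2x + 1 = 7 at x = 3
y.backward()
print(x.grad)      # 7.0
```

Autodidact (and the full Autograd) apply the same recipe at the level of numpy arrays, with a table of vector-Jacobian products playing the role of the local derivatives hard-coded above.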
Autograd was written by Dougal Maclaurin, David Duvenaud and Matt Johnson. See the main page for more information.
Note that the project description data, including the texts, logos, images, and/or trademarks,
for each open source project belongs to its rightful owner.
If you wish to add or remove any projects, please contact us at [email protected].