
amanchadha / stanford-cs231n-assignments-2020

Licence: other
This repository contains my solutions to the assignments for Stanford's CS231n "Convolutional Neural Networks for Visual Recognition" (Spring 2020).

Programming Languages

Jupyter Notebook
Python

Projects that are alternatives to or similar to stanford-cs231n-assignments-2020

Keras Attention
Visualizing RNNs using the attention mechanism
Stars: ✭ 697 (+729.76%)
Mutual labels:  recurrent-neural-networks, attention-mechanism
Image Caption Generator
A neural network to generate captions for an image using CNN and RNN with BEAM Search.
Stars: ✭ 126 (+50%)
Mutual labels:  recurrent-neural-networks, attention-mechanism
Textclassifier
Text classifier for Hierarchical Attention Networks for Document Classification
Stars: ✭ 985 (+1072.62%)
Mutual labels:  recurrent-neural-networks, attention-mechanism
datastories-semeval2017-task6
Deep-learning model presented in "DataStories at SemEval-2017 Task 6: Siamese LSTM with Attention for Humorous Text Comparison".
Stars: ✭ 20 (-76.19%)
Mutual labels:  recurrent-neural-networks, attention-mechanism
Neural-Chatbot
A Neural Network based Chatbot
Stars: ✭ 68 (-19.05%)
Mutual labels:  recurrent-neural-networks, attention-mechanism
Da Rnn
📃 **Unofficial** PyTorch Implementation of DA-RNN (arXiv:1704.02971)
Stars: ✭ 256 (+204.76%)
Mutual labels:  recurrent-neural-networks, attention-mechanism
Linear Attention Recurrent Neural Network
A recurrent attention module consisting of an LSTM cell that can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer Network. The LARNN cell with attention can be easily used inside a loop on the cell state, just like any other RNN. (LARNN)
Stars: ✭ 119 (+41.67%)
Mutual labels:  recurrent-neural-networks, attention-mechanism
automatic-personality-prediction
[AAAI 2020] Modeling Personality with Attentive Networks and Contextual Embeddings
Stars: ✭ 43 (-48.81%)
Mutual labels:  recurrent-neural-networks, attention-mechanism
DARNN
A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction
Stars: ✭ 90 (+7.14%)
Mutual labels:  recurrent-neural-networks, attention-mechanism
Attention Mechanisms
Implementations for a family of attention mechanisms, suitable for all kinds of natural language processing tasks and compatible with TensorFlow 2.0 and Keras.
Stars: ✭ 203 (+141.67%)
Mutual labels:  recurrent-neural-networks, attention-mechanism
Simplednn
SimpleDNN is a lightweight, open-source machine learning library written in Kotlin, designed to support relevant neural network architectures in natural language processing tasks
Stars: ✭ 81 (-3.57%)
Mutual labels:  recurrent-neural-networks, attention-mechanism
CS231n
CS231n Assignments Solutions - Spring 2020
Stars: ✭ 48 (-42.86%)
Mutual labels:  stanford, cs231n
Document Classifier Lstm
A bidirectional LSTM with attention for multiclass/multilabel text classification.
Stars: ✭ 136 (+61.9%)
Mutual labels:  recurrent-neural-networks, attention-mechanism
Visual-Attention-Model
Chainer implementation of DeepMind's Visual Attention Model paper
Stars: ✭ 27 (-67.86%)
Mutual labels:  recurrent-neural-networks, attention-mechanism
CS231n
Solutions to Assignments of CS231n: Convolutional Neural Networks for Visual Recognition (http://cs231n.github.io/)
Stars: ✭ 13 (-84.52%)
Mutual labels:  stanford, cs231n-assignment
wikiHow paper list
A paper list of research conducted based on wikiHow
Stars: ✭ 25 (-70.24%)
Mutual labels:  vision-and-language
resolutions-2019
A list of data mining and machine learning papers that I implemented in 2019.
Stars: ✭ 19 (-77.38%)
Mutual labels:  attention-mechanism
CrabNet
Predict materials properties using only the composition information!
Stars: ✭ 57 (-32.14%)
Mutual labels:  attention-mechanism
LSTM-Time-Series-Analysis
Using LSTM network for time series forecasting
Stars: ✭ 41 (-51.19%)
Mutual labels:  recurrent-neural-networks
bitcoin-prediction
Bitcoin prediction algorithms
Stars: ✭ 21 (-75%)
Mutual labels:  recurrent-neural-networks

CS231n: Convolutional Neural Networks for Visual Recognition - Assignment Solutions

This repository contains my solutions to the assignments for Stanford's CS231n "Convolutional Neural Networks for Visual Recognition" course (Spring 2020).

Stanford's CS231n is one of the best ways to dive into Deep Learning in general, and into Computer Vision in particular. Even if you plan to specialize in another subfield of Deep Learning (say, Natural Language Processing or Reinforcement Learning), CS231n is still a great place to start, because it builds intuition, fundamental understanding, and hands-on skills. Be warned: the course is very challenging!

To motivate you to work hard, here are two applications that you'll actually implement in Assignment 3: Style Transfer and Class Visualization.

For the one on the left, you take a base image and a style image and apply the "style" to the base image (reminiscent of Prisma and Artisto, right?). The example on the right starts from a random image that is gradually perturbed so that a neural network classifies it more and more confidently as a gorilla: a DIY Deep Dream, if you will. It's all math under the hood, and it's rewarding to figure out how it all works. CS231n will get you to that understanding; it's a hard but exciting journey from a simple kNN implementation to these fascinating applications. If you find these two applications eye-catching, take another look at the picture above, a Convolutional Neural Network classifying images. That is the basis of how machines can "see" the world. The course teaches you both how to build such an algorithm from scratch and how to use modern tools to run state-of-the-art models on your own tasks.
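To make the second idea concrete: class visualization is plain gradient ascent on the input image. Here is a minimal PyTorch sketch of the loop; the pretrained model, class index, and step size are illustrative assumptions, and the assignment's version additionally uses L2 regularization, jitter, and periodic clamping.

```python
# Gradient ascent on a random image to maximize one class score.
# Sketch only: assumes a recent torchvision; hyperparameters are illustrative.
import torch
import torchvision.models as models

model = models.squeezenet1_1(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)                       # optimize the image, not the weights

target_class = 366                                # "gorilla" in the usual ImageNet ordering
img = torch.randn(1, 3, 224, 224, requires_grad=True)

for _ in range(100):
    score = model(img)[0, target_class]           # unnormalized score for the target class
    score.backward()
    with torch.no_grad():
        img += 1.0 * img.grad / img.grad.norm()   # normalized gradient ascent step
        img.grad.zero_()
```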

Find course notes and assignments here and be sure to check out the video lectures for Winter 2016 and Spring 2017!

Assignments have been completed using both TensorFlow and PyTorch.

Assignment #1: Image Classification, kNN, SVM, Softmax, Neural Network

Q1: k-Nearest Neighbor Classifier

  • Test accuracy on CIFAR-10: 0.282
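For reference, the heart of Q1 is a pairwise distance computation followed by a majority vote. A minimal numpy sketch (function and variable names are mine, not the assignment's):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    # Pairwise squared L2 distances via the expansion (a - b)^2 = a^2 - 2ab + b^2.
    d2 = ((X_test ** 2).sum(1)[:, None]
          - 2 * X_test @ X_train.T
          + (X_train ** 2).sum(1)[None, :])
    nearest = np.argsort(d2, axis=1)[:, :k]       # indices of the k closest training images
    votes = y_train[nearest]                      # their labels, shape (num_test, k)
    # Majority vote; np.bincount(...).argmax() breaks ties toward the smaller label.
    return np.array([np.bincount(v).argmax() for v in votes])
```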

Q2: Training a Support Vector Machine

  • Test accuracy on CIFAR-10: 0.376

Q3: Implement a Softmax classifier

  • Test accuracy on CIFAR-10: 0.355
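The vectorized loss and gradient for Q3 fit in a few lines. A sketch of the idea, using the (N, D) data layout the assignments favor (the regularization convention may differ from the actual starter code):

```python
import numpy as np

def softmax_loss(W, X, y, reg=1e-4):
    # X: (N, D) data, W: (D, C) weights, y: (N,) labels in [0, C).
    N = X.shape[0]
    scores = X @ W
    scores -= scores.max(axis=1, keepdims=True)   # shift for numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(N), y]).mean() + reg * np.sum(W * W)
    dscores = probs
    dscores[np.arange(N), y] -= 1                 # d(loss)/d(scores) = probs - one_hot(y)
    dW = X.T @ dscores / N + 2 * reg * W
    return loss, dW
```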

Q4: Two-Layer Neural Network

  • Test accuracy on CIFAR-10: 0.501

Q5: Higher Level Representations: Image Features

  • Test accuracy on CIFAR-10: 0.576

Assignment #2: Fully-Connected Nets, Batch Normalization, Dropout, Convolutional Nets

Q1: Fully-connected Neural Network

  • Validation / test accuracy on CIFAR-10: 0.547 / 0.539

Q2: Batch Normalization
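The crux of Q2 is that batch statistics are used at training time while running averages are used at test time. A minimal forward-pass sketch (the signature is mine; the assignment threads these values through a bn_param dict):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, running_mean, running_var,
                      momentum=0.9, eps=1e-5, train=True):
    if train:
        mu, var = x.mean(axis=0), x.var(axis=0)   # per-feature batch statistics
        running_mean[:] = momentum * running_mean + (1 - momentum) * mu
        running_var[:] = momentum * running_var + (1 - momentum) * var
    else:
        mu, var = running_mean, running_var       # statistics collected during training
    x_hat = (x - mu) / np.sqrt(var + eps)         # normalize, then scale and shift
    return gamma * x_hat + beta
```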

Q3: Dropout

Q4: Convolutional Networks
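Q4 starts with a deliberately naive convolution before the fast version. A loop-based sketch of the forward pass (shapes follow the common (N, C, H, W) convention; names are illustrative):

```python
import numpy as np

def conv_forward_naive(x, w, b, stride=1, pad=1):
    # x: (N, C, H, W) images, w: (F, C, HH, WW) filters, b: (F,) biases.
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)))
    Ho = (H + 2 * pad - HH) // stride + 1
    Wo = (W + 2 * pad - WW) // stride + 1
    out = np.zeros((N, F, Ho, Wo))
    for i in range(Ho):
        for j in range(Wo):
            window = xp[:, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
            # Dot every filter with the current window, summing over C, HH, WW.
            out[:, :, i, j] = window.reshape(N, -1) @ w.reshape(F, -1).T + b
    return out
```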

Q5: PyTorch / TensorFlow v2 / TensorFlow v1 on CIFAR-10 (tweaked TFv1 model)

  • Training / validation / test accuracy of TF implementation on CIFAR-10: 0.928 / 0.801 / 0.822
  • PyTorch implementation:
    Model          Training Accuracy (%)   Test Accuracy (%)
    Base network   92.86                   88.90
    VGG-16         99.98                   93.16
    VGG-19         99.98                   93.24
    ResNet-18      99.99                   93.73
    ResNet-101     99.99                   93.76

Assignment #3: Image Captioning with Vanilla RNNs, Image Captioning with LSTMs, Network Visualization, Style Transfer, Generative Adversarial Networks

Q1: Image Captioning with Vanilla RNNs

Q2: Image Captioning with LSTMs
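The LSTM part of the assignment boils down to one step function applied across time. A sketch with the usual single affine transform split into four gates (the i, f, o, g ordering follows CS231n's convention):

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, Wx, Wh, b):
    # x: (N, D) input, h_prev/c_prev: (N, H), Wx: (D, 4H), Wh: (H, 4H), b: (4H,).
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    H = h_prev.shape[1]
    a = x @ Wx + h_prev @ Wh + b                  # pre-activations for all four gates
    i = sigmoid(a[:, :H])                         # input gate
    f = sigmoid(a[:, H:2*H])                      # forget gate
    o = sigmoid(a[:, 2*H:3*H])                    # output gate
    g = np.tanh(a[:, 3*H:])                       # candidate cell update
    c = f * c_prev + i * g                        # new cell state
    h = o * np.tanh(c)                            # new hidden state
    return h, c
```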

Q3: Network Visualization: Saliency maps, Class Visualization, and Fooling Images (PyTorch / TensorFlow v2 / TensorFlow v1)

Q4: Style Transfer (PyTorch / TensorFlow v2 / TensorFlow v1)
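Style transfer hinges on matching Gram matrices between the generated image and the style image across several layers. A PyTorch sketch of that core (normalization and weighting follow the assignment's general recipe, but treat the details as an assumption):

```python
import torch

def gram_matrix(features, normalize=True):
    # features: (N, C, H, W) activations from one layer of the feature extractor.
    N, C, H, W = features.shape
    F = features.reshape(N, C, H * W)
    gram = F @ F.transpose(1, 2)                  # (N, C, C) channel correlations
    return gram / (C * H * W) if normalize else gram

def style_loss(current_feats, target_grams, weights):
    # Weighted squared Gram differences, summed over the chosen style layers.
    return sum(w * ((gram_matrix(f) - g) ** 2).sum()
               for f, g, w in zip(current_feats, target_grams, weights))
```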

Q5: Generative Adversarial Networks (PyTorch / TensorFlow v2 / TensorFlow v1)
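The vanilla GAN losses from Q5 can be written with binary cross-entropy on logits. A sketch (the notebook also covers least-squares and deep-convolutional variants):

```python
import torch
import torch.nn.functional as F

def discriminator_loss(logits_real, logits_fake):
    # The discriminator should output 1 on real images and 0 on generated ones.
    return (F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real))
            + F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake)))

def generator_loss(logits_fake):
    # The generator tries to make the discriminator output 1 on its samples.
    return F.binary_cross_entropy_with_logits(logits_fake, torch.ones_like(logits_fake))
```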

Course notes

GPUs

For some parts of the 3rd assignment, you'll need GPUs. Kaggle Kernels or Google Colaboratory will do.
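In PyTorch this amounts to one device check up front; for example (toy model and shapes, just to show the pattern):

```python
import torch
import torch.nn as nn

# Use the GPU when one is available (as on Colab or Kaggle), else fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(3 * 32 * 32, 10).to(device)     # move parameters once
x = torch.randn(64, 3 * 32 * 32, device=device)   # keep each batch on the same device
scores = model(x)
```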

Useful links

Direct links to Spring 2017 lectures

Disclaimer

I recognize how much time people spend building intuition, understanding new concepts, and debugging assignments. The solutions uploaded here are for reference only; they are meant to unblock you if you get stuck somewhere. Please do not copy any part of the solutions as-is (the assignments are fairly easy if you read the instructions carefully).
