hankcs / Cs224n

License: GPL-3.0

Programming Languages

python

Projects that are alternatives of or similar to Cs224n

Gensim
Topic Modelling for Humans
Stars: ✭ 12,763 (+1845.58%)
Mutual labels:  natural-language-processing, word2vec
Deepnlp Models Pytorch
Pytorch implementations of various Deep NLP models in cs-224n(Stanford Univ)
Stars: ✭ 2,760 (+320.73%)
Mutual labels:  natural-language-processing, rnn
Deep Math Machine Learning.ai
A blog about machine learning and deep learning algorithms, the math behind them, and machine learning algorithms written from scratch.
Stars: ✭ 173 (-73.63%)
Mutual labels:  natural-language-processing, word2vec
Awesome Embedding Models
A curated list of awesome embedding models tutorials, projects and communities.
Stars: ✭ 1,486 (+126.52%)
Mutual labels:  natural-language-processing, word2vec
altair
Assessing Source Code Semantic Similarity with Unsupervised Learning
Stars: ✭ 42 (-93.6%)
Mutual labels:  word2vec, rnn
Scattertext
Beautiful visualizations of how language differs among document types.
Stars: ✭ 1,722 (+162.5%)
Mutual labels:  natural-language-processing, word2vec
Practical 1
Oxford Deep NLP 2017 course - Practical 1: word2vec
Stars: ✭ 220 (-66.46%)
Mutual labels:  natural-language-processing, word2vec
Pytorch Pos Tagging
A tutorial on how to implement models for part-of-speech tagging using PyTorch and TorchText.
Stars: ✭ 96 (-85.37%)
Mutual labels:  natural-language-processing, rnn
DeepLearning-Lab
Code lab for deep learning. Including rnn, seq2seq, word2vec, cross entropy, bidirectional rnn, convolution operation, pooling operation, InceptionV3, transfer learning.
Stars: ✭ 83 (-87.35%)
Mutual labels:  word2vec, rnn
chainer-notebooks
Jupyter notebooks for Chainer hands-on
Stars: ✭ 23 (-96.49%)
Mutual labels:  word2vec, rnn
Magnitude
A fast, efficient universal vector embedding utility package.
Stars: ✭ 1,394 (+112.5%)
Mutual labels:  natural-language-processing, word2vec
Text summurization abstractive methods
Multiple implementations of abstractive text summarization, using Google Colab
Stars: ✭ 359 (-45.27%)
Mutual labels:  rnn, word2vec
Repo 2016
R, Python and Mathematica Codes in Machine Learning, Deep Learning, Artificial Intelligence, NLP and Geolocation
Stars: ✭ 103 (-84.3%)
Mutual labels:  natural-language-processing, word2vec
Scattertext Pydata
Notebooks for the Seattle PyData 2017 talk on Scattertext
Stars: ✭ 132 (-79.88%)
Mutual labels:  natural-language-processing, word2vec
Codesearchnet
Datasets, tools, and benchmarks for representation learning of code.
Stars: ✭ 1,378 (+110.06%)
Mutual labels:  natural-language-processing, rnn
Germanwordembeddings
Toolkit to obtain and preprocess German corpora, train models using word2vec (gensim), and evaluate them with generated test sets
Stars: ✭ 189 (-71.19%)
Mutual labels:  natural-language-processing, word2vec
Sense2vec
🦆 Contextually-keyed word vectors
Stars: ✭ 1,184 (+80.49%)
Mutual labels:  natural-language-processing, word2vec
Ja.text8
Japanese text8 corpus for word embedding.
Stars: ✭ 79 (-87.96%)
Mutual labels:  natural-language-processing, word2vec
Pytorch Sentiment Analysis
Tutorials on getting started with PyTorch and TorchText for sentiment analysis.
Stars: ✭ 3,209 (+389.18%)
Mutual labels:  natural-language-processing, rnn
Languagecrunch
LanguageCrunch NLP server docker image
Stars: ✭ 281 (-57.16%)
Mutual labels:  natural-language-processing, word2vec

CS224n

Assignments for CS224n: Natural Language Processing with Deep Learning, Winter 2017.

Requirements

  • Python 2.7
  • TensorFlow r1.2
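
A quick way to sanity-check the environment (assuming TensorFlow is already installed) is to print the interpreter and library versions; this snippet is only an illustration, not part of the assignment code:

import sys
import tensorflow as tf

print("Python %s" % sys.version.split()[0])    # expect 2.7.x
print("TensorFlow %s" % tf.__version__)        # expect 1.2.x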

Assignment #1

  1. Softmax (a numerically stable sketch follows this list)
  2. Neural Network Basics
  3. word2vec (figure: q3_word_vectors)
  4. Sentiment Analysis (figures: q4_reg_v_acc, q4_dev_conf)
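
Part 1 asks for a softmax that stays numerically stable for large inputs. A minimal NumPy sketch (the names here are illustrative, not the assignment's interface) shifts each row by its maximum before exponentiating, which leaves the result unchanged because softmax is shift-invariant:

import numpy as np

def softmax(x):
    # Row-wise softmax; subtracting the row max avoids overflow in exp()
    # without changing the output.
    x = np.atleast_2d(x).astype(float)
    x = x - x.max(axis=1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=1, keepdims=True)

print(softmax(np.array([1001.0, 1002.0])))   # ~[[0.269, 0.731]], no overflow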

Assignment #2

  1. Tensorflow Softmax
  2. Neural Transition-Based Dependency Parsing (a sketch of the transition system follows this list)
924/924 [==============================] - 49s - train loss: 0.0631    
Evaluating on dev set - dev UAS: 88.54
New best dev UAS! Saving model in ./data/weights/parser.weights
================================================================================
TESTING
================================================================================
Restoring the best model weights found on the dev set
Final evaluation on test set - test UAS: 88.92
Writing predictions
Done!
  3. Recurrent Neural Networks: Language Modeling (figure: unrolled_rnn)
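
Part 2 builds a neural classifier on top of the arc-standard transition system (SHIFT, LEFT-ARC, RIGHT-ARC). The sketch below only illustrates those three transitions on a stack/buffer pair; the class and method names are chosen for this example and are not the assignment's starter code:

class PartialParse(object):
    # Toy arc-standard state: a stack, a buffer, and the arcs built so far.
    def __init__(self, sentence):
        self.stack = ["ROOT"]          # parse stack, initialised with ROOT
        self.buffer = list(sentence)   # words still to be processed
        self.dependencies = []         # (head, dependent) arcs

    def parse_step(self, transition):
        if transition == "S":      # SHIFT: move the next buffer word onto the stack
            self.stack.append(self.buffer.pop(0))
        elif transition == "LA":   # LEFT-ARC: second-from-top becomes dependent of top
            dependent = self.stack.pop(-2)
            self.dependencies.append((self.stack[-1], dependent))
        elif transition == "RA":   # RIGHT-ARC: top becomes dependent of second-from-top
            dependent = self.stack.pop()
            self.dependencies.append((self.stack[-1], dependent))

pp = PartialParse(["parse", "this", "sentence"])
for t in ["S", "S", "S", "LA", "RA", "RA"]:
    pp.parse_step(t)
print(pp.dependencies)   # [('sentence', 'this'), ('parse', 'sentence'), ('ROOT', 'parse')]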

Assignment #3

  1. A window into NER
DEBUG:Token-level confusion matrix:
go\gu   PER     ORG     LOC     MISC    O    
PER     2968    26      84      16      55   
ORG     147     1621    131     65      128  
LOC     48      88      1896    26      36   
MISC    37      40      54      1030    107  
O       42      46      18      39      42614
DEBUG:Token-level scores:
label   acc     prec    rec     f1   
PER     0.99    0.92    0.94    0.93 
ORG     0.99    0.89    0.77    0.83 
LOC     0.99    0.87    0.91    0.89 
MISC    0.99    0.88    0.81    0.84 
O       0.99    0.99    1.00    0.99 
micro   0.99    0.98    0.98    0.98 
macro   0.99    0.91    0.89    0.90 
not-O   0.99    0.89    0.87    0.88 
INFO:Entity level P/R/F1: 0.82/0.85/0.84
  2. Recurrent neural nets for NER
DEBUG:Token-level confusion matrix:
go\gu   PER     ORG     LOC     MISC    O    
PER     2987    32      47      12      71   
ORG     136     1684    90      70      112  
LOC     39      83      1907    21      44   
MISC    43      45      47      1031    102  
O       36      56      15      34      42618
DEBUG:Token-level scores:
label   acc     prec    rec     f1   
PER     0.99    0.92    0.95    0.93 
ORG     0.99    0.89    0.80    0.84 
LOC     0.99    0.91    0.91    0.91 
MISC    0.99    0.88    0.81    0.85 
O       0.99    0.99    1.00    0.99 
micro   0.99    0.98    0.98    0.98 
macro   0.99    0.92    0.89    0.91 
not-O   0.99    0.90    0.88    0.89 
INFO:Entity level P/R/F1: 0.85/0.86/0.85
  3. Grooving with GRUs (a GRU cell sketch follows this list)

(figures: q3-noclip-rnn, q3-clip-rnn, q3-noclip-gru, q3-clip-gru; RNN and GRU, with and without gradient clipping)

DEBUG:Token-level confusion matrix:
go\gu	PER  	ORG  	LOC  	MISC 	O    
PER  	2920 	41   	57   	12   	119  
ORG  	101  	1716 	73   	64   	138  
LOC  	22   	95   	1908 	16   	53   
MISC 	37   	45   	53   	1017 	116  
O    	21   	67   	14   	39   	42618

DEBUG:Token-level scores:
label	acc  	prec 	rec  	f1   
PER  	0.99 	0.94 	0.93 	0.93 
ORG  	0.99 	0.87 	0.82 	0.85 
LOC  	0.99 	0.91 	0.91 	0.91 
MISC 	0.99 	0.89 	0.80 	0.84 
O    	0.99 	0.99 	1.00 	0.99 
micro	0.99 	0.98 	0.98 	0.98 
macro	0.99 	0.92 	0.89 	0.90 
not-O	0.99 	0.91 	0.88 	0.89 

INFO:Entity level P/R/F1: 0.86/0.85/0.85
  4. Easter Egg Hunt!
    • Run python q3_gru.py dynamics to unfold your candy eggs
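
Part 3 swaps the vanilla RNN cell for a GRU and compares both with and without gradient clipping (the figures listed above). A minimal NumPy sketch of a single GRU step, with illustrative weight names rather than the assignment's variables, could look like this:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h_prev, x, params):
    # One GRU step: h_t = z * h_prev + (1 - z) * h_tilde, where z is the
    # update gate, r the reset gate, and h_tilde the candidate state.
    W_z, U_z, b_z = params["z"]
    W_r, U_r, b_r = params["r"]
    W_h, U_h, b_h = params["h"]
    z = sigmoid(x.dot(W_z) + h_prev.dot(U_z) + b_z)
    r = sigmoid(x.dot(W_r) + h_prev.dot(U_r) + b_r)
    h_tilde = np.tanh(x.dot(W_h) + (r * h_prev).dot(U_h) + b_h)
    return z * h_prev + (1.0 - z) * h_tilde

# Tiny usage example: input dimension 3, hidden dimension 2, random weights.
rng = np.random.RandomState(0)
params = {k: (rng.randn(3, 2), rng.randn(2, 2), np.zeros(2)) for k in ("z", "r", "h")}
h = np.zeros(2)
for x in rng.randn(5, 3):   # run a length-5 sequence through the cell
    h = gru_step(h, x, params)
print(h)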

References

CS224n official website

Many code snippets come from
