dbamman / anlp19
Course repo for Applied Natural Language Processing (Spring 2019)
Stars: ✭ 402
Projects that are alternatives of or similar to Anlp19
Pytorch Sentiment Analysis
Tutorials on getting started with PyTorch and TorchText for sentiment analysis.
Stars: ✭ 3,209 (+698.26%)
Mutual labels: jupyter-notebook, natural-language-processing
Adaptnlp
An easy-to-use Natural Language Processing library and framework for predicting, training, fine-tuning, and serving state-of-the-art NLP models.
Stars: ✭ 278 (-30.85%)
Mutual labels: jupyter-notebook, natural-language-processing
Bertviz
Tool for visualizing attention in the Transformer model (BERT, GPT-2, Albert, XLNet, RoBERTa, CTRL, etc.)
Stars: ✭ 3,443 (+756.47%)
Mutual labels: jupyter-notebook, natural-language-processing
Deepnlp Models Pytorch
PyTorch implementations of various deep NLP models from Stanford's CS224n
Stars: ✭ 2,760 (+586.57%)
Mutual labels: jupyter-notebook, natural-language-processing
Question generation
Neural question generation using transformers
Stars: ✭ 356 (-11.44%)
Mutual labels: jupyter-notebook, natural-language-processing
Pytorch Bert Crf Ner
A BERT+CRF named-entity recognition model for Korean, built with KoBERT and CRF
Stars: ✭ 236 (-41.29%)
Mutual labels: jupyter-notebook, natural-language-processing
Nlp Tutorial
Tutorial: Natural Language Processing in Python
Stars: ✭ 274 (-31.84%)
Mutual labels: jupyter-notebook, natural-language-processing
Practical 1
Oxford Deep NLP 2017 course - Practical 1: word2vec
Stars: ✭ 220 (-45.27%)
Mutual labels: jupyter-notebook, natural-language-processing
Nlp Papers With Arxiv
Statistics and accepted-paper lists of NLP conferences, with arXiv links
Stars: ✭ 345 (-14.18%)
Mutual labels: jupyter-notebook, natural-language-processing
Biosentvec
BioWordVec & BioSentVec: pre-trained embeddings for biomedical words and sentences
Stars: ✭ 308 (-23.38%)
Mutual labels: jupyter-notebook, natural-language-processing
Pytorch Transformers Classification
Based on the PyTorch-Transformers library by Hugging Face. Intended as a starting point for applying Transformer models to text classification tasks; contains code to easily train BERT, XLNet, RoBERTa, and XLM models.
Stars: ✭ 229 (-43.03%)
Mutual labels: jupyter-notebook, natural-language-processing
Nlp Python Deep Learning
NLP in Python with Deep Learning
Stars: ✭ 374 (-6.97%)
Mutual labels: jupyter-notebook, natural-language-processing
Text summarization with tensorflow
Implementation of a seq2seq model for summarization of textual data, demonstrated on Amazon reviews, GitHub issues, and news articles.
Stars: ✭ 226 (-43.78%)
Mutual labels: jupyter-notebook, natural-language-processing
Malaya
Natural Language Toolkit for Bahasa Malaysia (Malay), https://malaya.readthedocs.io/
Stars: ✭ 239 (-40.55%)
Mutual labels: jupyter-notebook, natural-language-processing
Machine Learning Notebooks
Machine Learning notebooks for refreshing concepts.
Stars: ✭ 222 (-44.78%)
Mutual labels: jupyter-notebook, natural-language-processing
Nlpython
This repository contains Natural Language Processing code written in Python, accompanying my book "Python Natural Language Processing".
Stars: ✭ 265 (-34.08%)
Mutual labels: jupyter-notebook, natural-language-processing
Aind Nlp
Coding exercises for the Natural Language Processing concentration, part of Udacity's AIND program.
Stars: ✭ 202 (-49.75%)
Mutual labels: jupyter-notebook, natural-language-processing
Graph Convolution Nlp
Graph Convolution Network for NLP
Stars: ✭ 208 (-48.26%)
Mutual labels: jupyter-notebook, natural-language-processing
Zhihu
This repo contains the source code for my personal column (https://zhuanlan.zhihu.com/zhaoyeyu), implemented in Python 3.6. It includes hands-on Natural Language Processing and Computer Vision projects, such as text generation, machine translation, and deep convolutional GANs.
Stars: ✭ 3,307 (+722.64%)
Mutual labels: jupyter-notebook, natural-language-processing
Data Science
Collection of useful data science topics along with code and articles
Stars: ✭ 315 (-21.64%)
Mutual labels: jupyter-notebook, natural-language-processing
Course materials for Applied Natural Language Processing (Spring 2019). Syllabus: http://people.ischool.berkeley.edu/~dbamman/info256.html
Notebook | Description |
---|---|
1.words/EvaluateTokenizationForSentiment.ipynb | The impact of tokenization choices on sentiment classification. |
1.words/ExploreTokenization.ipynb | Different methods for tokenizing texts (whitespace, NLTK, spaCy, regex)
1.words/TokenizePrintedBooks.ipynb | Design a better tokenizer for printed books |
2.distinctive_terms/ChiSquare.ipynb | Find distinctive terms using the Chi-square test |
2.distinctive_terms/CompareCorpora.ipynb | Find distinctive terms using the Mann-Whitney rank sums test |
3.dictionaries/DictionaryTimeSeries.ipynb | Plot sentiment over time using human-defined dictionaries |
4.classification/CheckData_TODO.ipynb | Gather data for classification |
4.classification/FeatureExploration_TODO.ipynb | Feature engineering for text classification |
4.classification/FeatureWeights_TODO.ipynb | Analyze feature weights for text classification |
4.classification/Hyperparameters_TODO.ipynb | Explore hyperparameter choices on classification accuracy |
5.text_regression/Regularization.ipynb | Linear regression with L1/L2 regularization for box office prediction |
6.tests/BootstrapConfidenceIntervals.ipynb | Estimate confidence intervals with the bootstrap |
6.tests/ParametricTest.ipynb | Hypothesis testing with parametric (normal) tests |
6.tests/PermutationTest.ipynb | Hypothesis testing with non-parametric (permutation) tests |
7.embeddings/DistributionalSimilarity.ipynb | Explore distributional hypothesis to build high-dimensional, sparse representations for words |
7.embeddings/TFIDF.ipynb | Explore distributional hypothesis to build high-dimensional, sparse representations for words (with TF-IDF scaling)
7.embeddings/TurneyLittman2003.ipynb | Use word embeddings to implement the method of Turney and Littman (2003) for calculating the semantic orientation of a term defined by proximity to other terms in two polar dictionaries. |
7.embeddings/WordEmbeddings.ipynb | Explore word embeddings using Gensim |
8.neural/MLP.ipynb | MLP for text classification (keras) |
8.neural/ExploreMLP.ipynb | Explore MLP for your data (keras) |
8.neural/CNN.ipynb | CNN for text classification (keras) |
8.neural/LSTM.ipynb | LSTM for text classification (keras) |
8.neural/Attention.ipynb | Attention over word embeddings for document classification (keras) |
8.neural/AttentionLSTM.ipynb | Attention over LSTM output for text classification (keras) |
9.annotation/IAAMetrics.ipynb | Calculate inter-annotator agreement (Cohen's kappa, Krippendorff's alpha) |
10.wordnet/ExploreWordNet.ipynb | Explore WordNet synsets with a simple method for finding in a text all mentions of all hyponyms of a given node in the WordNet hierarchy (e.g., finding all buildings in a text). |
10.wordnet/Lesk.ipynb | Implement the Lesk algorithm for WSD using word embeddings |
10.wordnet/Retrofitting.ipynb | Explore retrofitted word vectors
11.pos/KeyphraseExtraction.ipynb | Keyphrase extraction with tf-idf and POS filtering |
11.pos/POS_tagging.ipynb | Understand the Penn Treebank POS tags through tagged texts |
12.ner/ExtractingSocialNetworks.ipynb | Extract social networks from literary texts |
12.ner/SequenceLabelingBiLSTM.ipynb | BiLSTM + sequence labeling for Twitter NER |
12.ner/ToponymResolution.ipynb | Extract place names from text, geolocate them, and visualize them on a map
13.mwe/JustesonKatz95.ipynb | Implement Justeson and Katz (1995) for identifying MWEs using POS tag patterns |
14.syntax/SyntacticRelations.ipynb | Explore dependency parsing by identifying the actions and objects that are characteristically associated with male and female characters. |
15.coref/CorefSetup.ipynb | Install neuralcoref for coreference resolution |
15.coref/ExtractTimeline.ipynb | Use coreference resolution for the task of timeline generation: for a given biography on Wikipedia, can you extract all of the events associated with the people mentioned and create one timeline for each person? |
16.ie/DependencyPatterns.ipynb | Measure common dependency paths between two entities that hold a given relation to each other
16.ie/EntityLinking.ipynb | Explore named entity disambiguation and entity linking to Wikipedia pages. |
17.clustering/TopicModeling_TODO.ipynb | Explore topic modeling to discover broad themes in a collection of movie summaries. |
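The ExploreTokenization notebook compares whitespace, NLTK, spaCy, and regex tokenization. As a rough illustration of the regex approach (a minimal sketch, not code from the notebook), a single pattern can split off punctuation while keeping word-internal characters together:

```python
import re

def regex_tokenize(text):
    """Tokenize by matching either runs of word characters or
    single non-space punctuation marks."""
    return re.findall(r"\w+|[^\w\s]", text)

print(regex_tokenize("Don't stop!"))  # ['Don', "'", 't', 'stop', '!']
```

Whether the contraction `Don't` should yield one, two, or three tokens is exactly the kind of design choice the notebook explores.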
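The ChiSquare notebook finds terms that distinguish one corpus from another. A minimal sketch of the underlying statistic (illustrative only, not the notebook's code) builds a 2x2 contingency table for a term — its count versus all other tokens, in each corpus — and sums the squared deviations from the counts expected under independence:

```python
from collections import Counter

def chi_square(term, corpus_a, corpus_b):
    """Chi-square statistic for one term over the 2x2 table
    (term vs. other tokens) x (corpus A vs. corpus B)."""
    count_a, count_b = Counter(corpus_a), Counter(corpus_b)
    o11 = count_a[term]                   # term occurrences in A
    o12 = count_b[term]                   # term occurrences in B
    o21 = sum(count_a.values()) - o11     # other tokens in A
    o22 = sum(count_b.values()) - o12     # other tokens in B
    n = o11 + o12 + o21 + o22
    chi2 = 0.0
    for obs, row, col in [(o11, o11 + o12, o11 + o21),
                          (o12, o11 + o12, o12 + o22),
                          (o21, o21 + o22, o11 + o21),
                          (o22, o21 + o22, o12 + o22)]:
        expected = row * col / n          # expected count under independence
        chi2 += (obs - expected) ** 2 / expected
    return chi2
```

A term used at the same rate in both corpora scores 0; the more skewed its usage, the higher the statistic.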
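The BootstrapConfidenceIntervals notebook estimates uncertainty around a statistic (such as classifier accuracy) by resampling. A minimal percentile-bootstrap sketch, assuming a simple mean statistic (function names are illustrative, not from the notebook):

```python
import random

def bootstrap_ci(values, stat=lambda xs: sum(xs) / len(xs),
                 n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample with replacement n_boot times,
    compute the statistic each time, and read off the alpha/2 quantiles."""
    rng = random.Random(seed)
    boots = sorted(
        stat([rng.choice(values) for _ in values]) for _ in range(n_boot)
    )
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

For per-document accuracies, `values` would be a list of 0/1 correctness indicators and the interval brackets the true accuracy.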
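The PermutationTest notebook covers non-parametric hypothesis testing. A minimal sketch of a two-sided permutation test for a difference in means (illustrative, not the notebook's code): repeatedly shuffle the pooled observations, re-split them into two groups, and count how often the shuffled difference is at least as extreme as the observed one.

```python
import random

def permutation_test(xs, ys, n_perm=5000, seed=0):
    """Approximate two-sided permutation test for a difference in means."""
    rng = random.Random(seed)
    observed = abs(sum(xs) / len(xs) - sum(ys) / len(ys))
    pooled = xs + ys
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:len(xs)], pooled[len(xs):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            extreme += 1
    return extreme / n_perm  # approximate p-value
```

Unlike the parametric tests in ParametricTest.ipynb, this makes no normality assumption about the data.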
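The TFIDF notebook builds sparse word representations with TF-IDF scaling. A minimal sketch of the weighting itself (illustrative only): raw term frequency multiplied by the log inverse document frequency, which zeroes out terms that appear in every document.

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF weight dictionaries for a list of tokenized documents."""
    n = len(docs)
    df = Counter()                    # document frequency per term
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: count * idf[t] for t, count in Counter(doc).items()}
            for doc in docs]
```

Terms like "the" that occur in all documents get weight 0, while corpus-rare terms are boosted.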
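The TurneyLittman2003 notebook scores a term's semantic orientation by its proximity to two polar seed dictionaries. A minimal embedding-based sketch (the original Turney and Littman method used PMI from web hits; the toy 2-d vectors and names below are illustrative, not from the notebook):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def semantic_orientation(word, vectors, positive_seeds, negative_seeds):
    """SO(w) = mean cosine to positive seeds minus mean cosine to
    negative seeds; positive values suggest positive polarity."""
    pos = sum(cosine(vectors[word], vectors[s])
              for s in positive_seeds) / len(positive_seeds)
    neg = sum(cosine(vectors[word], vectors[s])
              for s in negative_seeds) / len(negative_seeds)
    return pos - neg
```

With real pre-trained embeddings (e.g., via Gensim, as in WordEmbeddings.ipynb), the same function induces a sentiment lexicon from a handful of seed words.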
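The IAAMetrics notebook computes inter-annotator agreement. A minimal sketch of Cohen's kappa (illustrative, not the notebook's code): observed agreement between two annotators, corrected for the agreement expected by chance given each annotator's label distribution.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' parallel label sequences."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # chance agreement: product of each annotator's marginal rates per label
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (observed - expected) / (1 - expected)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance; Krippendorff's alpha (also covered in the notebook) generalizes to more annotators and missing labels.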