
tqtg / Hierarchical Attention Networks

License: MIT
TensorFlow implementation of the paper "Hierarchical Attention Networks for Document Classification"

Programming Languages

Python

Projects that are alternatives to or similar to Hierarchical Attention Networks

Sarcasm Detection
Detecting sarcasm on Twitter using both traditional machine learning and deep learning techniques.
Stars: ✭ 73 (-2.67%)
Mutual labels:  text-classification, sentiment-analysis, attention-mechanism
Chatbot cn
A chatbot for the finance and legal domains (with some chit-chat ability). Its main modules include information extraction, NLU, NLG, and a knowledge graph; the front end is integrated with Django, and RESTful interfaces for the NLP and KG modules are already provided.
Stars: ✭ 791 (+954.67%)
Mutual labels:  text-classification, sentiment-analysis, attention-mechanism
Deep Atrous Cnn Sentiment
Deep-Atrous-CNN-Text-Network: End-to-end word level model for sentiment analysis and other text classifications
Stars: ✭ 64 (-14.67%)
Mutual labels:  text-classification, sentiment-analysis
Omnicat Bayes
Naive Bayes text classification implementation as an OmniCat classifier strategy. (#ruby #naivebayes)
Stars: ✭ 30 (-60%)
Mutual labels:  text-classification, sentiment-analysis
Tensorflow Sentiment Analysis On Amazon Reviews Data
Implementing different RNN models (LSTM,GRU) & Convolution models (Conv1D, Conv2D) on a subset of Amazon Reviews data with TensorFlow on Python 3. A sentiment analysis project.
Stars: ✭ 34 (-54.67%)
Mutual labels:  text-classification, sentiment-analysis
Sentiment analysis fine grain
Multi-label Classification with BERT; Fine Grained Sentiment Analysis from AI challenger
Stars: ✭ 546 (+628%)
Mutual labels:  text-classification, sentiment-analysis
Tf Rnn Attention
TensorFlow implementation of an attention mechanism for text classification tasks.
Stars: ✭ 735 (+880%)
Mutual labels:  text-classification, sentiment-analysis
Text classification
All kinds of text classification models, and more, with deep learning.
Stars: ✭ 7,179 (+9472%)
Mutual labels:  text-classification, attention-mechanism
Ml Classify Text Js
Machine learning based text classification in JavaScript using n-grams and cosine similarity
Stars: ✭ 38 (-49.33%)
Mutual labels:  text-classification, sentiment-analysis
Meta Learning Bert
Meta learning with BERT as a learner
Stars: ✭ 52 (-30.67%)
Mutual labels:  text-classification, sentiment-analysis
Text Classification Keras
📚 Text classification library with Keras
Stars: ✭ 53 (-29.33%)
Mutual labels:  text-classification, sentiment-analysis
Text mining resources
Resources for learning about Text Mining and Natural Language Processing
Stars: ✭ 358 (+377.33%)
Mutual labels:  text-classification, sentiment-analysis
Bertweet
BERTweet: A pre-trained language model for English Tweets (EMNLP-2020)
Stars: ✭ 282 (+276%)
Mutual labels:  text-classification, sentiment-analysis
Kaggle-Twitter-Sentiment-Analysis
Kaggle Twitter Sentiment Analysis Competition
Stars: ✭ 18 (-76%)
Mutual labels:  sentiment-analysis, text-classification
Sentiment analysis albert
Sentiment analysis and text classification with ALBERT, BERT, TextCNN, and CNN in TensorFlow.
Stars: ✭ 61 (-18.67%)
Mutual labels:  text-classification, sentiment-analysis
ML2017FALL
Machine Learning (EE 5184) at NTU
Stars: ✭ 66 (-12%)
Mutual labels:  sentiment-analysis, text-classification
NSP-BERT
The code for our paper "NSP-BERT: A Prompt-based Zero-Shot Learner Through an Original Pre-training Task —— Next Sentence Prediction"
Stars: ✭ 166 (+121.33%)
Mutual labels:  sentiment-analysis, text-classification
vista-net
Code for the paper "VistaNet: Visual Aspect Attention Network for Multimodal Sentiment Analysis", AAAI'19
Stars: ✭ 67 (-10.67%)
Mutual labels:  sentiment-analysis, attention-mechanism
Textclassifier
Text classifier for Hierarchical Attention Networks for Document Classification
Stars: ✭ 985 (+1213.33%)
Mutual labels:  text-classification, attention-mechanism
Textblob Ar
Arabic support for textblob
Stars: ✭ 60 (-20%)
Mutual labels:  text-classification, sentiment-analysis

Hierarchical Attention Networks for Document Classification

This is a TensorFlow implementation of the paper "Hierarchical Attention Networks for Document Classification" (Yang et al., NAACL 2016).
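
At both the word and sentence levels, the model pools the encoder states with the attention mechanism from the paper: u_t = tanh(W h_t + b), alpha_t = softmax(u_t · u_ctx), and the output is the alpha-weighted sum of the states. Below is a minimal NumPy sketch of that pooling step; the names (attention_pool, u_ctx) and the standalone setup are illustrative, not taken from this repo.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(h, W, b, u_ctx):
    # h: (T, hidden_dim) encoder states for one sentence (or document)
    # W: (hidden_dim, att_dim), b: (att_dim,), u_ctx: (att_dim,) learned context vector
    u = np.tanh(h @ W + b)      # (T, att_dim) hidden representation of each state
    alpha = softmax(u @ u_ctx)  # (T,) attention weights, summing to 1
    return alpha @ h            # (hidden_dim,) attention-weighted average

# Shapes follow the repo defaults, assuming bidirectional GRU states
# of size 2 * cell_dim = 100 and an attention space of att_dim = 100.
T, hidden_dim, att_dim = 12, 100, 100
rng = np.random.default_rng(0)
h = rng.standard_normal((T, hidden_dim))
sentence_vec = attention_pool(
    h,
    rng.standard_normal((hidden_dim, att_dim)) * 0.1,
    np.zeros(att_dim),
    rng.standard_normal(att_dim) * 0.1,
)

The same pooling is applied twice: first over word states to build each sentence vector, then over sentence states to build the document vector used for classification.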


Requirements

  • Python
  • TensorFlow

Data

We use the data provided by Tang et al. (2015), comprising four datasets:

  • IMDB
  • Yelp 2013
  • Yelp 2014
  • Yelp 2015

Note: the original data seems to have an unzipping issue, so I have re-uploaded it to Google Drive for faster downloading. Please request access permission.

Usage

First, download the datasets and unzip them into the data folder.
Then, run the script to prepare the data (the Yelp-2015 dataset is used by default):

python data_prepare.py

Train and evaluate the model (make sure the GloVe embeddings are ready before training):

wget http://nlp.stanford.edu/data/glove.6B.zip
unzip glove.6B.zip
python train.py
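
The glove.6B.zip archive contains 50d, 100d, 200d, and 300d vectors; the 200d file matches the --emb_dim default of 200 shown below. As a rough sketch of what loading such a file involves (the repo's own loading code may differ), each line holds a word followed by its vector components:

import numpy as np

embeddings = {}
with open('glove.6B.200d.txt', encoding='utf-8') as f:
    for line in f:
        parts = line.rstrip().split(' ')
        # first token is the word, the rest are 200 float components
        embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)

print(len(embeddings), embeddings['the'].shape)  # 400000 (200,)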

Print training arguments:

python train.py --help
optional arguments:
  -h, --help            show this help message and exit
  --cell_dim            CELL_DIM
                        Hidden dimensions of GRU cells (default: 50)
  --att_dim             ATTENTION_DIM
                        Dimensionality of attention spaces (default: 100)
  --emb_dim             EMBEDDING_DIM
                        Dimensionality of word embedding (default: 200)
  --learning_rate       LEARNING_RATE
                        Learning rate (default: 0.0005)
  --max_grad_norm       MAX_GRAD_NORM
                        Maximum value of the global norm of the gradients for clipping (default: 5.0)
  --dropout_rate        DROPOUT_RATE
                        Probability of dropping neurons (default: 0.5)
  --num_classes         NUM_CLASSES
                        Number of classes (default: 5)
  --num_checkpoints     NUM_CHECKPOINTS
                        Number of checkpoints to store (default: 1)
  --num_epochs          NUM_EPOCHS
                        Number of training epochs (default: 20)
  --batch_size          BATCH_SIZE
                        Batch size (default: 64)
  --display_step        DISPLAY_STEP
                        Number of steps to display log into TensorBoard (default: 20)
  --allow_soft_placement ALLOW_SOFT_PLACEMENT
                        Allow soft device placement
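
Any of these flags can be combined on the command line; for example, to train with wider GRU cells and a higher learning rate for 10 epochs:

python train.py --cell_dim 100 --learning_rate 0.001 --num_epochs 10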

Results

With the Yelp-2015 dataset, after 5 epochs, we achieved:

  • 69.79% accuracy on the dev set
  • 69.62% accuracy on the test set

No systematic hyper-parameter tuning was performed. The accuracy reported in the paper for Yelp-2015 is 71.0%.

