
charlesdong1991 / interpretable-han-for-document-classification-with-keras

Licence: MIT license
Keras implementation of a hierarchical attention network for document classification, with options to predict and present attention weights at both the word and sentence level.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to interpretable-han-for-document-classification-with-keras

datastories-semeval2017-task6
Deep-learning model presented in "DataStories at SemEval-2017 Task 6: Siamese LSTM with Attention for Humorous Text Comparison".
Stars: ✭ 20 (+11.11%)
Mutual labels:  attention
Relation-Extraction-Transformer
NLP: Relation extraction with position-aware self-attention transformer
Stars: ✭ 63 (+250%)
Mutual labels:  attention
ntua-slp-semeval2018
Deep-learning models of NTUA-SLP team submitted in SemEval 2018 tasks 1, 2 and 3.
Stars: ✭ 79 (+338.89%)
Mutual labels:  attention
transformer
A PyTorch Implementation of "Attention Is All You Need"
Stars: ✭ 28 (+55.56%)
Mutual labels:  attention
LFattNet
Attention-based View Selection Networks for Light-field Disparity Estimation
Stars: ✭ 41 (+127.78%)
Mutual labels:  attention
iPerceive
Applying Common-Sense Reasoning to Multi-Modal Dense Video Captioning and Video Question Answering | Python3 | PyTorch | CNNs | Causality | Reasoning | LSTMs | Transformers | Multi-Head Self Attention | Published in IEEE Winter Conference on Applications of Computer Vision (WACV) 2021
Stars: ✭ 52 (+188.89%)
Mutual labels:  attention
torch-multi-head-attention
Multi-head attention in PyTorch
Stars: ✭ 93 (+416.67%)
Mutual labels:  attention
NTUA-slp-nlp
💻Speech and Natural Language Processing (SLP & NLP) Lab Assignments for ECE NTUA
Stars: ✭ 19 (+5.56%)
Mutual labels:  attention
protein-transformer
Predicting protein structure through sequence modeling
Stars: ✭ 77 (+327.78%)
Mutual labels:  attention
automatic-personality-prediction
[AAAI 2020] Modeling Personality with Attentive Networks and Contextual Embeddings
Stars: ✭ 43 (+138.89%)
Mutual labels:  attention
LNSwipeCell
A friendly, easy-to-integrate swipe-to-edit (left-swipe) feature for cells!
Stars: ✭ 16 (-11.11%)
Mutual labels:  attention
image-recognition
Cutting-tool recognition using deep learning.
Stars: ✭ 19 (+5.56%)
Mutual labels:  attention
visualization
a collection of visualization function
Stars: ✭ 189 (+950%)
Mutual labels:  attention
gnn-lspe
Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022
Stars: ✭ 165 (+816.67%)
Mutual labels:  attention
AoA-pytorch
A Pytorch implementation of Attention on Attention module (both self and guided variants), for Visual Question Answering
Stars: ✭ 33 (+83.33%)
Mutual labels:  attention
gqa-node-properties
Recalling node properties from a knowledge graph
Stars: ✭ 19 (+5.56%)
Mutual labels:  attention
lambda.pytorch
PyTorch implementation of Lambda Network and pretrained Lambda-ResNet
Stars: ✭ 54 (+200%)
Mutual labels:  attention
Base-On-Relation-Method-Extract-News-DA-RNN-Model-For-Stock-Prediction--Pytorch
A dual-stage attention mechanism model based on relational news extraction for stock prediction.
Stars: ✭ 33 (+83.33%)
Mutual labels:  attention
RNNSearch
An implementation of attention-based neural machine translation using Pytorch
Stars: ✭ 43 (+138.89%)
Mutual labels:  attention
Image-Captioning
Image Captioning with Keras
Stars: ✭ 60 (+233.33%)
Mutual labels:  attention


Interpretable-han-for-document-classification-with-keras

This repository uses Keras to implement the hierarchical attention network presented in Hierarchical Attention Networks for Document Classification (link).
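For reference, below is a minimal sketch of the hierarchy the paper describes: a word-level encoder with attention pools each sentence of word embeddings into a sentence vector, and a sentence-level encoder with attention pools those into a document vector that feeds a softmax classifier. This is an illustrative simplification, not the package's actual code; the AttentionPool layer, the build_han function, and all layer sizes are assumptions.

import keras.backend as K
from keras.layers import Bidirectional, Dense, Embedding, GRU, Input, Layer, TimeDistributed
from keras.models import Model


class AttentionPool(Layer):
    # Additive attention that pools a sequence of hidden states into one vector.
    def build(self, input_shape):
        dim = int(input_shape[-1])
        self.W = self.add_weight(name="W", shape=(dim, dim), initializer="glorot_uniform")
        self.b = self.add_weight(name="b", shape=(dim,), initializer="zeros")
        self.u = self.add_weight(name="u", shape=(dim, 1), initializer="glorot_uniform")
        super(AttentionPool, self).build(input_shape)

    def call(self, x):
        # u_it = tanh(W h_it + b); attention weights = softmax(u_it . u)
        u_it = K.tanh(K.dot(x, self.W) + self.b)
        weights = K.softmax(K.squeeze(K.dot(u_it, self.u), axis=-1))
        return K.sum(x * K.expand_dims(weights, axis=-1), axis=1)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], input_shape[-1])


def build_han(embedding_matrix, max_sent_length=100, max_sent_num=10, gru_units=50, n_classes=2):
    vocab_size, embedding_dim = embedding_matrix.shape

    # Word-level encoder: one sentence of word indices -> one sentence vector.
    word_input = Input(shape=(max_sent_length,), dtype="int32")
    words = Embedding(vocab_size, embedding_dim, weights=[embedding_matrix])(word_input)
    words = Bidirectional(GRU(gru_units, return_sequences=True))(words)
    word_encoder = Model(word_input, AttentionPool()(words))

    # Sentence-level encoder: a document of sentences -> class probabilities.
    doc_input = Input(shape=(max_sent_num, max_sent_length), dtype="int32")
    sents = TimeDistributed(word_encoder)(doc_input)
    sents = Bidirectional(GRU(gru_units, return_sequences=True))(sents)
    output = Dense(n_classes, activation="softmax")(AttentionPool()(sents))
    return Model(doc_input, output)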

How to use the package

  1. Clone the repository.
  2. In the root of the repo, run python setup.py install to install the required packages.
  3. Import and initialize the class:
from han.model import HAN

han = HAN(embedding_matrix)

       If you would like to change parameter values during initialization, for instance:

han = HAN(embedding_matrix, max_sent_length=150, max_sent_num=15)
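Here embedding_matrix is assumed to be a 2-D numpy array of shape (vocab_size, embedding_dim) whose rows are word vectors indexed by your tokenizer's word indices. A minimal sketch of building one, with random vectors standing in for pretrained embeddings such as GloVe (the toy reviews and tokenizer are hypothetical):

import numpy as np
from keras.preprocessing.text import Tokenizer

reviews = ["i am wrong", "this is ridiculously good"]  # toy corpus

tokenizer = Tokenizer()
tokenizer.fit_on_texts(reviews)

embedding_dim = 100
vocab_size = len(tokenizer.word_index) + 1  # +1 for the padding index 0

# Placeholder vectors; in practice load the GloVe/word2vec row for each word.
embedding_matrix = np.random.normal(size=(vocab_size, embedding_dim))

han = HAN(embedding_matrix, max_sent_length=150, max_sent_num=15)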
  4. When you initialize the HAN, the models are also built, so you can print the summary to check the layers:
han.print_summary()
  5. Train the model with:
han.train_model(checkpoint_path, X_train, y_train, X_test, y_test)

       You can also tune the parameter values.
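       The exact input format is determined by the package, but given the parameters above, X is presumably a 3-D array of word indices of shape (num_reviews, max_sent_num, max_sent_length) and y a one-hot label matrix. A hedged sketch of preparing such inputs, reusing the toy tokenizer and reviews from the earlier sketch (the sentence splitting and the checkpoint filename are assumptions):

import numpy as np
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical

max_sent_num, max_sent_length = 15, 150

def encode_review(review, tokenizer):
    # Split a review into sentences, map words to indices, and pad both levels.
    sentences = [s for s in review.split(".") if s.strip()]   # naive sentence split
    seqs = pad_sequences(tokenizer.texts_to_sequences(sentences), maxlen=max_sent_length)
    doc = np.zeros((max_sent_num, max_sent_length), dtype="int32")
    doc[:min(len(seqs), max_sent_num)] = seqs[:max_sent_num]  # pad/truncate sentences
    return doc

X_train = np.array([encode_review(r, tokenizer) for r in reviews])
y_train = to_categorical([0, 1])                               # toy labels

han.train_model("han_weights.hdf5", X_train, y_train, X_train, y_train)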

  6. Show the word-level attention weights:
han.show_word_attention(X)

       X is the embedded matrix for a single review.

        Show the sentence-level attention weights:

han.show_sent_attention(X)

       X is the embedded matrix for the reviews (it can contain multiple reviews).
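       For example, with the toy data prepared above, the calls look like this (the structure of the returned attention weights is whatever the package produces):

word_att = han.show_word_attention(X_train[0])
sent_att = han.show_sent_attention(X_train)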

  7. Truncate the attention weights based on sentence length and number, and transform them into a dataframe to make the result easier to interpret:

        For the word attention, run:

han.word_att_to_df(sent_tokenized_review, word_att)

       The result will look like:

word_att | review
{'i': 0.3, 'am': 0.1, 'wrong': 0.6} | i am wrong
{'this': 0.1, 'is': 0.1, 'ridiculously': 0.4, 'good': 0.4} | this is ridiculously good
        For the sentence attention, run:

han.sent_att_to_df(sent_tokenized_reviews, sent_att)

       The result will look like:

sent_att | reviews
{'this is good': 0.8, 'i am watching': 0.2} | this is good. i am watching.
{'i like it': 0.6, 'it is about history': 0.4} | i like it. it is about history.
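The per-word (or per-sentence) weights in these dataframes can then be rendered however you like. A small illustrative helper, not part of the package, that crudely highlights high-attention words:

def highlight(word_att_row, threshold=0.4):
    # Wrap words whose attention weight exceeds an arbitrary threshold.
    return " ".join(
        "**{}**".format(word) if weight >= threshold else word
        for word, weight in word_att_row.items()
    )

print(highlight({'i': 0.3, 'am': 0.1, 'wrong': 0.6}))
# i am **wrong**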