
cmasch / cnn-text-classification

Licence: other
Text classification with Convolutional Neural Networks on the Yelp, IMDB & sentence polarity dataset v1.0

Programming Languages

Jupyter Notebook
Python

Projects that are alternatives of or similar to cnn-text-classification

NewsMTSC
Target-dependent sentiment classification in news articles reporting on political events. Includes a high-quality data set of over 11k sentences and a state-of-the-art classification model.
Stars: ✭ 54 (-50%)
Mutual labels:  text-classification, sentiment-classification
Deep Atrous Cnn Sentiment
Deep-Atrous-CNN-Text-Network: End-to-end word level model for sentiment analysis and other text classifications
Stars: ✭ 64 (-40.74%)
Mutual labels:  text-classification, imdb
ML2017FALL
Machine Learning (EE 5184) in NTU
Stars: ✭ 66 (-38.89%)
Mutual labels:  text-classification, sentiment-classification
Text tone analyzer
A system that analyzes the sentiment of texts and statements.
Stars: ✭ 15 (-86.11%)
Mutual labels:  text-classification, sentiment-classification
Dan Jurafsky Chris Manning Nlp
My solution to the Natural Language Processing course made by Dan Jurafsky, Chris Manning in Winter 2012.
Stars: ✭ 124 (+14.81%)
Mutual labels:  text-classification, sentiment-classification
nsmc-zeppelin-notebook
Movie review dataset Word2Vec & sentiment classification Zeppelin notebook
Stars: ✭ 26 (-75.93%)
Mutual labels:  text-classification, sentiment-classification
Tensorflow Sentiment Analysis On Amazon Reviews Data
Implementing different RNN models (LSTM,GRU) & Convolution models (Conv1D, Conv2D) on a subset of Amazon Reviews data with TensorFlow on Python 3. A sentiment analysis project.
Stars: ✭ 34 (-68.52%)
Mutual labels:  text-classification, sentiment-classification
COVID-19-Tweet-Classification-using-Roberta-and-Bert-Simple-Transformers
Rank 1 / 216
Stars: ✭ 24 (-77.78%)
Mutual labels:  text-classification, sentiment-classification
Context
ConText v4: Neural networks for text categorization
Stars: ✭ 120 (+11.11%)
Mutual labels:  text-classification, sentiment-classification
Tia
Your Advanced Twitter stalking tool
Stars: ✭ 98 (-9.26%)
Mutual labels:  text-classification, sentiment-classification
imdb-scraper
🎬 An attempt at the most complete IMDb API
Stars: ✭ 24 (-77.78%)
Mutual labels:  imdb, imdb-dataset
Text-Classification-PyTorch
Implementation of papers for text classification task on SST-1/SST-2
Stars: ✭ 57 (-47.22%)
Mutual labels:  text-classification, sentiment-classification
Text Classification Pytorch
Text classification using deep learning models in Pytorch
Stars: ✭ 683 (+532.41%)
Mutual labels:  text-classification, sentiment-classification
Nlp Tutorial
A list of NLP(Natural Language Processing) tutorials
Stars: ✭ 1,188 (+1000%)
Mutual labels:  text-classification, sentiment-classification
Sentiment-analysis-amazon-Products-Reviews
NLP with NLTK for Sentiment analysis amazon Products Reviews
Stars: ✭ 37 (-65.74%)
Mutual labels:  text-classification, sentiment-classification
20-newsgroups text-classification
"20 newsgroups" dataset - Text Classification using Multinomial Naive Bayes in Python.
Stars: ✭ 41 (-62.04%)
Mutual labels:  text-classification, multiclass-classification
Graph-Based-TC
Graph-based framework for text classification
Stars: ✭ 24 (-77.78%)
Mutual labels:  text-classification
deepnlp
An NLP project I built for practice when I was young
Stars: ✭ 11 (-89.81%)
Mutual labels:  text-classification
cirilla
Multipurpose telegram bot
Stars: ✭ 33 (-69.44%)
Mutual labels:  imdb
movie-app
App using auth0, netlify functions, + Algolia
Stars: ✭ 39 (-63.89%)
Mutual labels:  imdb

Text classification with Convolutional Neural Networks (CNN)

This project demonstrates how to classify text documents / sentences with CNNs. You can find a great introduction to this approach in the blog posts by Denny Britz and the Keras team. My approach is quite similar to Denny's and to the original paper by Yoon Kim [1]. Yoon Kim's own implementation is available on GitHub as well.

Changes

*** UPDATE *** - September 10th, 2021

In this update I fixed some typos and improved the Jupyter notebook. You can execute the notebook without any manual setup; the required data will be downloaded automatically.

  • Add the Yelp Polarity dataset via TensorFlow Datasets (see the loading sketch below)
  • Add utils.py to move code out of the notebook
  • Add a blank char to the ALPHABET variable
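As a rough sketch of how this dataset can be pulled in with TensorFlow Datasets (the actual loading code lives in the notebook and utils.py; the variable names here are illustrative):

```python
import tensorflow_datasets as tfds

# Download (on first run) and load the Yelp Polarity reviews.
# The IMDB data used below loads the same way via 'imdb_reviews'.
ds_train, ds_test = tfds.load(
    'yelp_polarity_reviews',
    split=['train', 'test'],
    as_supervised=True,  # yields (text, label) pairs
)

# Peek at one example: label 0 = negative, 1 = positive.
for text, label in ds_train.take(1):
    print(label.numpy(), text.numpy()[:80])
```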

*** UPDATE *** - December 15th, 2019

I’ve updated the code to TensorFlow 2. In addition, I made some changes to the Jupyter notebook:

  • Remove the Yelp dataset
  • Add the IMDB dataset via TensorFlow Datasets

*** UPDATE *** - May 17th, 2019

Model:

  • Combine word-level with character-based input. The char input is optional and can be used for further research.
  • Change the padding of the conv layers from same to valid.
  • Add average pooling after each conv layer and combine it with the existing max pooling.

Notebook:

  • Add char support
  • Comment out the preprocessing
  • Add a scikit-learn example at the end to compare deep learning with classical machine learning.

Using characters in addition to words yields no improvement but can be a good starting point for further research. I keep the model as simple as possible and reuse the existing methods for the character input. As described in the paper by Yann LeCun et al. [3], stacking several conv layers on top of each other could improve performance. The combined setup is sketched below.
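A minimal sketch of these changes, with illustrative layer sizes rather than the exact cnn_model.CNN configuration: one separable-conv branch with valid padding, reduced by both global max and global average pooling, applied to the word input and the optional char input.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_branch(x, filters=100, kernel_size=3):
    # 'valid' padding instead of 'same', as changed in this update.
    c = layers.SeparableConv1D(filters, kernel_size, padding='valid',
                               activation='relu')(x)
    # Combine the existing global max pooling with average pooling.
    return layers.concatenate([layers.GlobalMaxPooling1D()(c),
                               layers.GlobalAveragePooling1D()(c)])

# Word-level input (token indices) ...
word_in = tf.keras.Input(shape=(500,), name='word_input')
w = layers.Embedding(15000, 300)(word_in)
# ... plus the optional character-level input, reusing the same branch logic.
char_in = tf.keras.Input(shape=(1000,), name='char_input')
c = layers.Embedding(70, 50)(char_in)  # vocab = ALPHABET size, incl. the blank char

features = layers.concatenate([conv_branch(w), conv_branch(c)])
out = layers.Dense(1, activation='sigmoid')(features)
model = tf.keras.Model([word_in, char_in], out)
```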

*** UPDATE *** - December 3rd, 2018

  • Implemented the model as a class (cnn_model.CNN)
  • Replaced max pooling by global max pooling
  • Replaced conv1d by separable convolutions
  • Added dense + dropout after each global max pooling
  • Removed flatten + dropout after concatenation
  • Removed L2 regularizer on convolution layers
  • Added support for multiclass classification

In addition, I made some changes to the evaluation notebook. It seems that cleaning the text by removing stopwords, numerical values and punctuation removes important features as well. Therefore I no longer use these preprocessing steps. As optimizer I switched from Adadelta to Adam because it converges to an optimum even faster.

These are just small changes, but they bring a significant improvement, as you can see below.

Comparing old vs new

For the Yelp dataset I increased the number of training samples from 200000 to 600000 and the number of test samples from 50000 to 200000.

| Dataset | Old (loss / acc) | New (loss / acc) |
|---|---|---|
| Polarity | 0.4688 / 0.7974 | 0.4058 / 0.8135 |
| IMDB | 0.2994 / 0.8896 | 0.2509 / 0.9007 |
| Yelp | 0.1793 / 0.9393 | 0.0997 / 0.9631 |
| Yelp - Multi | 0.9356 / 0.6051 | 0.8076 / 0.6487 |

Next steps:

  • Combine the word-level model with a character-based input. Working on characters has the advantage that misspellings and emoticons may be learnt naturally.
  • Add an attention layer on a recurrent / convolutional layer (sketched below). I already tested it without improvements, but I'm still working on this.
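For the attention experiment, a simple additive attention pooling over the convolutional feature sequence could look like the following (a sketch of the general idea, not the exact variant I tested):

```python
import tensorflow as tf
from tensorflow.keras import layers

def attention_pooling(features):
    # Score each time step, normalize the scores to weights over the
    # sequence, and return the weighted sum of the feature vectors.
    scores = layers.Dense(1, activation='tanh')(features)   # (batch, steps, 1)
    weights = layers.Softmax(axis=1)(scores)                # attention over steps
    return tf.reduce_sum(weights * features, axis=1)        # (batch, filters)

inp = tf.keras.Input(shape=(500,))
x = layers.Embedding(15000, 300)(inp)
x = layers.Conv1D(100, 3, activation='relu')(x)
x = attention_pooling(x)
out = layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(inp, out)
```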

Evaluation

For evaluation I used several datasets that are freely available. They differ in the number of documents and in content length. What they all have in common is that they have two classes to predict (positive / negative). I would like to show how a CNN performs on ~10000 up to ~800000 documents while modifying only a few parameters.

I used the following sets for evaluation:

  • sentence polarity dataset v1.0
    The polarity dataset v1.0 has 10662 sentences. It's quite similar to traditional sentiment analysis of tweets because of the content length. I simply split the data into train / validation (90% / 10%).
  • IMDB movie review
    The IMDB movie review dataset has 25000 training and 25000 test documents. I split the training set into train / validation (80% / 20%) and used the test set for a final test.
  • Yelp dataset 2017
    This dataset contains a JSON file of nearly 5 million entries. For performance reasons I randomly sampled 600000 training and 200000 test documents from it. I labeled ratings with 1-2 stars as negative and 4-5 stars as positive; ratings with 3 stars are not considered because of their neutrality (see the sketch after this list). The selected subset contains only texts with more than 5 words. The texts are written in English, German, Spanish and many other languages. During training I used an 80% / 20% split (train / validation). If you are interested, you can also check out a small demo of the embeddings created from the training data.
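A sketch of that star-to-label mapping (the file name review.json and the stars / text fields are assumptions about the Yelp JSON layout; adjust them to your copy of the dataset):

```python
import json

def label_review(stars):
    """Map star ratings to binary labels; 3-star reviews are dropped."""
    if stars <= 2:
        return 0  # negative
    if stars >= 4:
        return 1  # positive
    return None   # neutral, not considered

samples = []
with open('review.json', encoding='utf-8') as f:  # one JSON object per line
    for line in f:
        review = json.loads(line)
        label = label_review(review['stars'])
        # Keep only labeled texts with more than 5 words.
        if label is not None and len(review['text'].split()) > 5:
            samples.append((review['text'], label))
```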

Model

The implemented model has multiple convolutional layers in parallel to obtain several features of one text. Because each convolutional layer has a different kernel size, the window size varies and the text is read with an n-gram approach. The default is 3 convolutional layers with kernel sizes of 3, 4 and 5.

I also used pre-trained GloVe embeddings (300-dimensional vectors trained on 6B tokens) to show that unsupervised learning of word representations can have a positive effect on neural nets.
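A simplified Keras sketch of this architecture, under a few assumptions: the hyperparameters follow the IMDB run below, the GloVe file is the standard glove.6B.300d.txt, and word_index comes from a fitted tokenizer (the exact implementation is in cnn_model.CNN):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

MAX_WORDS, SEQ_LEN, EMB_DIM = 15000, 500, 300

def glove_matrix(path, word_index):
    """Build an embedding matrix from a GloVe text file (one vector per line)."""
    matrix = np.zeros((MAX_WORDS, EMB_DIM))
    with open(path, encoding='utf-8') as f:
        for line in f:
            word, *vector = line.split()
            i = word_index.get(word)
            if i is not None and i < MAX_WORDS:
                matrix[i] = np.asarray(vector, dtype='float32')
    return matrix

inp = tf.keras.Input(shape=(SEQ_LEN,))
# To use GloVe, initialize the embedding with the matrix above, e.g.
# layers.Embedding(..., embeddings_initializer=tf.keras.initializers.Constant(
#     glove_matrix('glove.6B.300d.txt', word_index)))
emb = layers.Embedding(MAX_WORDS, EMB_DIM)(inp)

# Three convolutional layers in parallel: kernel sizes 3, 4 and 5 read the
# text as 3-, 4- and 5-grams; each branch gets its own pooling + dense head.
branches = []
for k in (3, 4, 5):
    b = layers.SeparableConv1D(200, k, padding='valid', activation='relu')(emb)
    b = layers.GlobalMaxPooling1D()(b)
    b = layers.Dropout(0.4)(layers.Dense(200, activation='relu')(b))
    branches.append(b)

out = layers.Dense(1, activation='sigmoid')(layers.concatenate(branches))
model = tf.keras.Model(inp, out)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```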

Results

For all runs I used filter sizes of [3,4,5], Adam as optimizer, a batch size of 100 and 10 epochs. As already described, I performed 5 runs with different random states and report the final mean of loss / accuracy.

Sentence polarity dataset v1.0

| Feature Maps | Embedding | Max Words / Sequence Length | Hidden Units | Dropout | Training (loss / acc) | Validation (loss / acc) |
|---|---|---|---|---|---|---|
| [100,100,100] | GloVe 300 | 15000 / 35 | 64 | 0.4 | 0.3134 / 0.8642 | 0.4058 / 0.8135 |
| [100,100,100] | 300 | 15000 / 35 | 64 | 0.4 | 0.4741 / 0.7753 | 0.4563 / 0.7807 |

IMDB

| Feature Maps | Embedding | Max Words / Sequence Length | Hidden Units | Dropout | Training (loss / acc) | Validation (loss / acc) | Test (loss / acc) |
|---|---|---|---|---|---|---|---|
| [200,200,200] | GloVe 300 | 15000 / 500 | 200 | 0.4 | 0.1735 / 0.9332 | 0.2417 / 0.9064 | 0.2509 / 0.9007 |
| [200,200,200] | 300 | 15000 / 500 | 200 | 0.4 | 0.2425 / 0.9037 | 0.2554 / 0.8964 | 0.2632 / 0.8920 |

Yelp Polarity Dataset (2015)

| Feature Maps | Embedding | Max Words / Sequence Length | Hidden Units | Dropout | Training (loss / acc) | Validation (loss / acc) | Test (loss / acc) |
|---|---|---|---|---|---|---|---|
| [200,200,200] | GloVe 300 | 15000 / 200 | 250 | 0.5 | 0.1066 / 0.9602 | 0.1146 / 0.9567 | 0.1130 / 0.9574 |
| [200,200,200] | 300 | 15000 / 200 | 250 | 0.5 | 0.1029 / 0.9617 | 0.1243 / 0.9533 | 0.1219 / 0.9547 |
| ML-Model | - | - | - | - | - | - / 0.9398 | - / 0.9398 |

Yelp 2017

| Feature Maps | Embedding | Max Words / Sequence Length | Hidden Units | Dropout | Training (loss / acc) | Validation (loss / acc) | Test (loss / acc) |
|---|---|---|---|---|---|---|---|
| [200,200,200] | GloVe 300 | 15000 / 200 | 250 | 0.5 | 0.0793 / 0.9707 | 0.0958 / 0.9644 | 0.0997 / 0.9631 |
| [200,200,200] | 300 | 15000 / 200 | 250 | 0.5 | 0.0820 / 0.9701 | 0.1012 / 0.9623 | 0.1045 / 0.9615 |

Yelp 2017 - Multiclass classification

All previous evaluations are typical binary classification tasks. The Yelp dataset comes with reviews which can be classified into five classes (one to five stars). For the evaluations above I merged one- and two-star reviews into the negative class and labeled reviews with four and five stars as positive; neutral reviews with three stars were not considered. In this evaluation I trained the model on all five classes. The baseline we have to beat is 20% accuracy, because all classes are balanced to the same number of samples. In a first evaluation I reached 64% accuracy. This sounds a little low, but keep in mind that in the binary classification we have a baseline of 50% accuracy, more than twice the multiclass baseline. Furthermore, there is a lot of subjectivity in the reviews. Take a look at the confusion matrix:

If you look carefully you can see that it's hard to separate a class from its neighboring classes. If you write a negative review, when does it deserve two stars rather than one or three?! Sometimes it's clear, but sometimes it's not!
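Architecturally, only the output layer and the loss change compared to the binary setup. A minimal sketch, assuming integer star labels 0-4 and the same parallel conv features as in the Model section:

```python
import tensorflow as tf
from tensorflow.keras import layers

inp = tf.keras.Input(shape=(200,))
x = layers.Embedding(15000, 300)(inp)
x = layers.concatenate([
    layers.GlobalMaxPooling1D()(layers.Conv1D(200, k, activation='relu')(x))
    for k in (3, 4, 5)
])
# Five softmax units instead of one sigmoid unit for the five star classes ...
out = layers.Dense(5, activation='softmax')(x)
model = tf.keras.Model(inp, out)
# ... and sparse categorical cross-entropy for integer labels 0-4.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```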

| Feature Maps | Embedding | Max Words / Sequence Length | Hidden Units | Dropout | Training (loss / acc) | Validation (loss / acc) | Test (loss / acc) |
|---|---|---|---|---|---|---|---|
| [200,200,200] | GloVe 300 | 15000 / 200 | 250 | 0.5 | 0.7676 / 0.6658 | 0.7983 / 0.6531 | 0.8076 / 0.6487 |
| [200,200,200] | 300 | 15000 / 200 | 250 | 0.5 | 0.7932 / 0.6556 | 0.8103 / 0.6470 | 0.8169 / 0.6443 |

Conclusion and improvements

In conclusion, CNNs are a great approach for text classification. However, a lot of data is needed to train a good model. It would be interesting to compare these results with a classical machine learning approach; I expect that ML would reach similar results on all datasets except Yelp. If you evaluate your own architecture (neural network), I recommend using IMDB or Yelp because of their amount of data.

Using pre-trained embeddings like GloVe improved accuracy by about 1-2%. In addition, pre-trained embeddings have a regularization effect on training. That makes sense because GloVe is trained on data that differs somewhat from Yelp and the other datasets, so during training the weights of the pre-trained embedding are updated. You can see the regularization effect in the following image:

If you are interested in CNNs and text classification, try out the Yelp dataset! Not only did it give the best accuracy, it also has a lot of metadata. Maybe I will use this dataset to get insights for my next trip :)

I'm sure that you can get better results by tuning some parameters:

  • Increase / decrease feature maps
  • Add / remove filter sizes
  • Use other embeddings (e.g. Google word2vec)
  • Increase / decrease the maximum number of words in vocabulary and sequence
  • Modify the method clean_text (see the sketch below)
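Following the observation above that aggressive cleaning removes useful features, a modified clean_text might only normalize case and whitespace instead of stripping punctuation (a hypothetical variant; the repository's actual clean_text differs):

```python
import re

def clean_text(text):
    """Light-touch cleaning: lowercase and collapse whitespace.

    Punctuation and numbers are deliberately kept, since removing
    them was found to remove useful features as well.
    """
    return re.sub(r'\s+', ' ', text.lower()).strip()
```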

If you have any questions or suggestions for improvement, contact me by opening an issue. Thanks!

Requirements

  • Python 3.x
  • TensorFlow 2.x
  • TensorFlow-Datasets
  • scikit-learn

Usage

Feel free to use the model and your own dataset. As an example you can use this evaluation notebook.

References

[1] Convolutional Neural Networks for Sentence Classification
[2] Neural Document Embeddings for Intensive Care Patient Mortality Prediction
[3] Character-level Convolutional Networks for Text Classification

Author

Christopher Masch
