
hmohebbi / SentimentAnalysis

Licence: other

Programming Languages

Jupyter Notebook

Projects that are alternatives to or similar to SentimentAnalysis

text-classification-cn
Chinese text classification practice based on the Sogou news corpus, using traditional machine-learning methods as well as pre-trained models
Stars: ✭ 81 (+102.5%)
Mutual labels:  svm, word2vec, tf-idf
text2text
Text2Text: Cross-lingual natural language processing and generation toolkit
Stars: ✭ 188 (+370%)
Mutual labels:  embeddings, tf-idf, bert
Persian-Sentiment-Analyzer
Persian sentiment analysis (sentiment analysis of Persian-language text)
Stars: ✭ 30 (-25%)
Mutual labels:  word2vec, embeddings, lstm
Machine-Learning-Models
This repository implements machine-learning methods ranging from simple to complex, written as reusable, template-style code.
Stars: ✭ 30 (-25%)
Mutual labels:  random-forest, svm, decision-tree
Ml Projects
ML-based projects in Python, such as spam classification, time series analysis, and text classification using Random Forest, deep learning, Bayesian methods, and XGBoost
Stars: ✭ 127 (+217.5%)
Mutual labels:  random-forest, svm, word2vec
LSTM-Time-Series-Analysis
Using an LSTM network for time series forecasting
Stars: ✭ 41 (+2.5%)
Mutual labels:  random-forest, lstm
NLP-paper
🎨🎨 NLP (natural language processing) tutorials 🎨🎨 https://dataxujing.github.io/NLP-paper/
Stars: ✭ 23 (-42.5%)
Mutual labels:  word2vec, bert
sarcasm-detection-for-sentiment-analysis
Sarcasm Detection for Sentiment Analysis
Stars: ✭ 21 (-47.5%)
Mutual labels:  word2vec, lstm
scoruby
Ruby Scoring API for PMML
Stars: ✭ 69 (+72.5%)
Mutual labels:  random-forest, decision-tree
handson-ml
도서 "핸즈온 머신러닝"의 예제와 연습문제를 담은 주피터 노트북입니다.
Stars: ✭ 285 (+612.5%)
Mutual labels:  random-forest, svm
info-retrieval
Information Retrieval in High Dimensional Data (class deliverables)
Stars: ✭ 33 (-17.5%)
Mutual labels:  svm, embeddings
learningspoons
NLP lecture notes and source code
Stars: ✭ 29 (-27.5%)
Mutual labels:  word2vec, lstm
introduction-to-machine-learning
A document covering machine learning basics. 🤖📊
Stars: ✭ 17 (-57.5%)
Mutual labels:  random-forest, svm
turbofan failure
Aircraft engine failure prediction model
Stars: ✭ 23 (-42.5%)
Mutual labels:  svm, lstm
receiptdID
Receipt.ID is a multi-label, multi-class, hierarchical classification system implemented as a two-layer feed-forward network.
Stars: ✭ 22 (-45%)
Mutual labels:  random-forest, word2vec
navec
Compact, high-quality word embeddings for the Russian language
Stars: ✭ 118 (+195%)
Mutual labels:  word2vec, embeddings
bert-squeeze
🛠️ Tools for Transformers compression using PyTorch Lightning ⚡
Stars: ✭ 56 (+40%)
Mutual labels:  lstm, bert
datastories-semeval2017-task6
Deep-learning model presented in "DataStories at SemEval-2017 Task 6: Siamese LSTM with Attention for Humorous Text Comparison".
Stars: ✭ 20 (-50%)
Mutual labels:  embeddings, lstm
dnn-lstm-word-segment
Chinese word segmentation based on deep learning and an LSTM neural network
Stars: ✭ 24 (-40%)
Mutual labels:  word2vec, lstm
STOCK-RETURN-PREDICTION-USING-KNN-SVM-GUASSIAN-PROCESS-ADABOOST-TREE-REGRESSION-AND-QDA
Forecast stock prices using a machine-learning approach to time series analysis. Uses predictive modeling to forecast stock returns, an approach used by hedge funds to select tradeable stocks.
Stars: ✭ 94 (+135%)
Mutual labels:  random-forest, decision-tree

SentimentAnalysis

Word embeddings (BOW, TF-IDF, Word2Vec, BERT) combined with base classifiers (SVM, Naive Bayes, Decision Tree, Random Forest), plus pre-trained BERT from TensorFlow Hub, a 1-D CNN, and a bidirectional LSTM, evaluated on the IMDB Movie Reviews dataset.
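
As a flavour of the embedding-plus-classifier recipe described above, the sketch below averages Word2Vec vectors per review and feeds them to a Random Forest. It is a minimal illustration with toy data; the tokenization, vector size, and classifier settings are assumptions rather than this repository's exact configuration.

```python
# Hypothetical sketch: mean Word2Vec vectors per review + Random Forest.
# Tokenization, vector_size, and classifier settings are illustrative assumptions.
import numpy as np
from gensim.models import Word2Vec
from sklearn.ensemble import RandomForestClassifier

# Toy stand-ins for tokenized IMDB reviews and their sentiment labels.
reviews = [["a", "wonderful", "heartfelt", "movie"],
           ["dull", "plot", "and", "terrible", "acting"]]
labels = [1, 0]

# Train Word2Vec on the review tokens (min_count=1 only because the toy corpus is tiny).
w2v = Word2Vec(sentences=reviews, vector_size=100, window=5, min_count=1, seed=0)

def review_vector(tokens):
    """Average the vectors of in-vocabulary tokens to get one feature vector per review."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.stack([review_vector(r) for r in reviews])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X))
```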

Results for Base Classifiers

| Rank | Word Embedding | Classifier | Accuracy (%) | F1-Score |
|------|----------------|------------|--------------|----------|
| 1 | BERT Sentence Version (mean BERT features per review) | SVM | 90.35 | 0.90 |
| 2 | BERT Sentence Version (mean BERT features per review) | MLP | 90.32 | 0.90 |
| 3 | TF-IDF with stop words | SVM | 89.59 | 0.90 |
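
For comparison with the third row above, here is a rough sketch of a TF-IDF plus linear SVM baseline. Loading IMDB through the Hugging Face `datasets` package and the default vectorizer/classifier settings are assumptions made for illustration; they are not necessarily what the notebooks in this repository do.

```python
# Hypothetical TF-IDF + linear SVM baseline on IMDB (not this repo's exact setup).
from datasets import load_dataset                      # assumed data source
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score, f1_score

imdb = load_dataset("imdb")
train_texts, train_labels = imdb["train"]["text"], imdb["train"]["label"]
test_texts, test_labels = imdb["test"]["text"], imdb["test"]["label"]

# TF-IDF features; stop words are kept, matching the "TF-IDF with stop words" row.
vectorizer = TfidfVectorizer(lowercase=True)
X_train = vectorizer.fit_transform(train_texts)
X_test = vectorizer.transform(test_texts)

# Linear SVM with default hyperparameters (illustrative only).
clf = LinearSVC().fit(X_train, train_labels)
preds = clf.predict(X_test)

print("Accuracy:", accuracy_score(test_labels, preds))
print("F1-score:", f1_score(test_labels, preds))
```

The two BERT rows would swap the TF-IDF step for per-review averaged BERT sentence features before fitting the same kind of classifier.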

Results for Deep Neural Networks

| Rank | Word Embedding | Model | Accuracy (%) |
|------|----------------|-------|--------------|
| 1 | BERT (TensorFlow Hub) | Bi-directional LSTM | 91.34 |
| 2 | BERT Sentence Version (mean BERT features per review) | 1-D CNN | 85.46 |
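
The bidirectional LSTM in the first row consumes BERT features from TensorFlow Hub; the Keras sketch below shows the general shape of such a classifier, but it substitutes a plain trainable Embedding layer for the BERT features, and the layer sizes are placeholder assumptions.

```python
# Illustrative bidirectional-LSTM sentiment classifier (placeholder architecture,
# not this repository's exact BERT-feature model).
import tensorflow as tf

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 256        # assumed maximum review length in tokens

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),                # stand-in for BERT features
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),   # reads the review in both directions
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),            # probability of a positive review
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.build(input_shape=(None, MAX_LEN))
model.summary()
```

Training would then call `model.fit` on integer-encoded, padded reviews with binary sentiment labels.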