
P7h / Spark Mllib Twitter Sentiment Analysis

License: apache-2.0
🌟 ✨ Analyze and visualize Twitter Sentiment on a world map using Spark MLlib

Programming Languages

scala

Projects that are alternatives to or similar to Spark Mllib Twitter Sentiment Analysis

Coolplayspark
Coolplay Spark: Spark source code analysis, Spark libraries, etc.
Stars: ✭ 3,318 (+2836.28%)
Mutual labels:  spark, spark-streaming
Angel
A Flexible and Powerful Parameter Server for large-scale machine learning
Stars: ✭ 6,458 (+5615.04%)
Mutual labels:  spark, spark-streaming
Learningspark
Scala examples for learning to use Spark
Stars: ✭ 421 (+272.57%)
Mutual labels:  spark, spark-streaming
Example Spark
Spark, Spark Streaming and Spark SQL unit testing strategies
Stars: ✭ 205 (+81.42%)
Mutual labels:  spark, spark-streaming
Waterdrop
Production Ready Data Integration Product
Stars: ✭ 1,856 (+1542.48%)
Mutual labels:  spark, spark-streaming
Gimel
Big Data Processing Framework - Unified Data API or SQL on Any Storage
Stars: ✭ 216 (+91.15%)
Mutual labels:  spark, spark-streaming
Sparta
Real Time Analytics and Data Pipelines based on Spark Streaming
Stars: ✭ 513 (+353.98%)
Mutual labels:  spark, spark-streaming
Spark
.NET for Apache® Spark™ makes Apache Spark™ easily accessible to .NET developers.
Stars: ✭ 1,721 (+1423.01%)
Mutual labels:  spark, spark-streaming
Real Time Stream Processing Engine
This is an example of real time stream processing using Spark Streaming, Kafka & Elasticsearch.
Stars: ✭ 37 (-67.26%)
Mutual labels:  spark, spark-streaming
Learning Spark
Learn Spark from scratch; big data learning
Stars: ✭ 37 (-67.26%)
Mutual labels:  spark, spark-streaming
Spark Streaming With Kafka
Self-contained examples of Apache Spark streaming integrated with Apache Kafka.
Stars: ✭ 180 (+59.29%)
Mutual labels:  spark, spark-streaming
Pyspark Examples
Code examples on Apache Spark using python
Stars: ✭ 58 (-48.67%)
Mutual labels:  spark, spark-streaming
Pyspark Learning
Updated repository
Stars: ✭ 147 (+30.09%)
Mutual labels:  spark, spark-streaming
Data Accelerator
Data Accelerator for Apache Spark simplifies onboarding to Streaming of Big Data. It offers a rich, easy to use experience to help with creation, editing and management of Spark jobs on Azure HDInsights or Databricks while enabling the full power of the Spark engine.
Stars: ✭ 247 (+118.58%)
Mutual labels:  spark, spark-streaming
Azure Event Hubs Spark
Enabling Continuous Data Processing with Apache Spark and Azure Event Hubs
Stars: ✭ 140 (+23.89%)
Mutual labels:  spark, spark-streaming
Cdap
An open source framework for building data analytic applications.
Stars: ✭ 509 (+350.44%)
Mutual labels:  spark, spark-streaming
Kinesis Sql
Kinesis Connector for Structured Streaming
Stars: ✭ 120 (+6.19%)
Mutual labels:  spark, spark-streaming
Example Spark Kafka
Apache Spark and Apache Kafka integration example
Stars: ✭ 120 (+6.19%)
Mutual labels:  spark, spark-streaming
Mobius
C# and F# language binding and extensions to Apache Spark
Stars: ✭ 929 (+722.12%)
Mutual labels:  spark, spark-streaming
Utils4s
A collection of test cases and related materials from working with Scala and Spark
Stars: ✭ 1,070 (+846.9%)
Mutual labels:  spark, spark-streaming

Twitter sentiment analysis with Spark MLlib and visualization

Introduction

A project to analyze and visualize the sentiment of tweets in real time on a world map using the Apache Spark ecosystem [Spark MLlib + Spark Streaming].

At a very high level, this project encapsulates and covers each of the following broad topics:

  • Distributed Stream Processing » Apache Spark
  • Machine Learning » Naive Bayes Classifier [Apache Spark MLlib implementation]
  • Visualization » Sentiment visualization on a World map using Datamaps
  • DevOps » Docker Hub and Docker Image

For more details on this project and the code associated with it, please check this blogpost.
Also, a Docker Image is available on Docker Hub with the complete environment and dependencies installed and preconfigured.

Note:

I had actually written a blog post on my personal website with the code walkthrough, explaining the intricate details; but unfortunately I managed to corrupt my Octopress GitHub repo. 😧 😩 😡 So, until I salvage it, I have published the content as a GitHub wiki for the time being.

Visualization Demo and screenshots

Demo of visualization

Screenshots of visualization

Overview

Positive sentiment

Neutral sentiment

Negative sentiment

Features

  • Apache Spark MLlib's implementation of the Naive Bayes classifier is used for classifying the tweets in real time [see the training sketch after this list].
  • Training is performed on the 1.6 million tweet dataset made available by Sentiment140.
  • The model created with Naive Bayes is applied in real time to tweets retrieved using the Twitter Streaming API to determine the sentiment of each tweet.
  • We also compare this result with Stanford CoreNLP sentiment prediction.
  • Tweets are classified by both these approaches as:
    • Positive
    • Neutral
    • Negative
  • Please note that all non-English tweets are classified as "neutral", as our training data consists of English-language tweets only.
  • Only tweets that carry location info are analyzed and processed; tweets without it are discarded.
    • This is to facilitate the visualization based on the latitude, longitude info of the tweets.
  • Application can also save compressed raw tweets to the disk.
    • Please set SAVE_RAW_TWEETS flag to true in application.conf if you want to save / retain the raw tweets we retrieve from Twitter.
  • The result for each tweet is published to Redis, which the front-end webapp subscribes to for visualization [a publishing sketch appears after the Visualization app section].
  • Datamaps -- based on D3.js -- is used for visualization to display the tweet location on the world map with a pop up for more details on hover.
    • Hover over the bubbles to see the additional info of the tweets.
    • Visualization is fully responsive and scales well for any form factor. Works even on mobile.
    • App adjusts if a window is resized without impacting the UX or losing the data already on the screen.
    • Changes to the orientation [of a phone / tablet] does not have any impact on the app either.
  • This codebase has been updated with comments, where necessary.
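
For concreteness, here is a minimal sketch of how a Naive Bayes model is typically trained on the Sentiment140 CSV and applied to tweet text with Spark MLlib 1.6. This is not the project's exact code: the file paths, feature size and CSV parsing below are assumptions.

import org.apache.spark.mllib.classification.{NaiveBayes, NaiveBayesModel}
import org.apache.spark.mllib.feature.HashingTF
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.{SparkConf, SparkContext}

object SentimentTrainingSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SentimentTraining"))
    val hashingTF = new HashingTF(1 << 16)    // hashed term-frequency features; the size is an assumption

    // Sentiment140 rows have 6 quoted, comma-separated fields: polarity first [0 = negative, 2 = neutral, 4 = positive],
    // tweet text last. The path and parsing details are placeholders.
    val trainingData = sc.textFile("/path/to/sentiment140.csv").map { line =>
      val cols = line.split(",", 6)
      val polarity = cols(0).replaceAll("\"", "").toDouble
      val tokens = cols(5).replaceAll("\"", "").toLowerCase.split("\\s+").toSeq
      LabeledPoint(polarity, hashingTF.transform(tokens))
    }

    val model: NaiveBayesModel = NaiveBayes.train(trainingData, lambda = 1.0)
    model.save(sc, "/path/to/model")          // the streaming job can reload it with NaiveBayesModel.load

    // Predicting the sentiment of a single tweet's text with the same feature transform:
    val prediction = model.predict(hashingTF.transform("loving the spark streaming api".split("\\s+").toSeq))
    println(s"Predicted polarity: $prediction")
    sc.stop()
  }
}

In the streaming job the same HashingTF transform has to be applied to each incoming tweet's text before calling model.predict.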

Docker Image and Dockerfile

  • Docker image hosted on Docker Hub is available with the complete environment and dependencies installed.
  • Dockerfile and other supporting files are also available on GitHub.
  • For detailed info on this project, please check the blogpost.

Dependencies

The following is the complete list of languages and frameworks used in this project and their roles.

  1. OpenJDK 64-Bit v1.8.0_102 » Java for compiling and execution; the VM to be precise
  2. Scala v2.10.6 » basic infrastructure and Spark jobs
  3. SBT v0.13.12 » build script and uber jar creation
  4. Apache Spark v1.6.2
    • Spark Streaming » connecting to Twitter and streaming the tweets
    • Spark MLlib » creating a ML model and predicting the sentiment of tweets based on the text
    • Spark SQL » saving tweets [both raw and classified]
  5. Stanford CoreNLP v3.6.0 » alternative approach to finding the sentiment of tweets based on the text [see the sketch after this list]
  6. Redis » publishing classified tweets; subscribed by the front-end app to render the chart
  7. Datamaps » chart and visualization
  8. Python » run the flask app for rendering the front-end
  9. Flask » render the template for front-end
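
As a companion to dependency 5, here is a minimal sketch of the usual way the Stanford CoreNLP sentiment annotator is called from Scala. The mapping of its 0–4 classes down to positive / neutral / negative is left out, and the project's actual wrapper code may differ:

import java.util.Properties
import edu.stanford.nlp.ling.CoreAnnotations
import edu.stanford.nlp.neural.rnn.RNNCoreAnnotations
import edu.stanford.nlp.pipeline.{Annotation, StanfordCoreNLP}
import edu.stanford.nlp.sentiment.SentimentCoreAnnotations
import scala.collection.JavaConverters._

object CoreNLPSentimentSketch {
  // Building the pipeline is expensive; create it once and reuse it across tweets.
  private val pipeline: StanfordCoreNLP = {
    val props = new Properties()
    props.setProperty("annotators", "tokenize, ssplit, parse, sentiment")
    new StanfordCoreNLP(props)
  }

  // Returns the CoreNLP sentiment class of the longest sentence: 0 = very negative .. 4 = very positive.
  // Assumes the text is non-empty.
  def sentiment(text: String): Int = {
    val annotation = new Annotation(text)
    pipeline.annotate(annotation)
    annotation.get(classOf[CoreAnnotations.SentencesAnnotation]).asScala
      .map { sentence =>
        val tree = sentence.get(classOf[SentimentCoreAnnotations.SentimentAnnotatedTree])
        (sentence.toString.length, RNNCoreAnnotations.getPredictedClass(tree))
      }
      .maxBy(_._1)._2
  }
}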

Also, please check build.sbt for more information on the various other dependencies of the project.
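
A rough build.sbt sketch based on the versions listed above; the real file declares more settings [assembly, resolvers, exclusions], and the Redis client coordinates shown here are an assumption rather than the library the project necessarily uses:

name := "Spark-MLlib-Twitter-Sentiment-Analysis"

scalaVersion := "2.10.6"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"              % "1.6.2",
  "org.apache.spark" %% "spark-streaming"         % "1.6.2",
  "org.apache.spark" %% "spark-mllib"             % "1.6.2",
  "org.apache.spark" %% "spark-sql"               % "1.6.2",
  "org.apache.spark" %% "spark-streaming-twitter" % "1.6.2",
  "edu.stanford.nlp"  % "stanford-corenlp"        % "3.6.0",
  "edu.stanford.nlp"  % "stanford-corenlp"        % "3.6.0" classifier "models",
  "net.debasishg"    %% "redisclient"             % "3.0"    // assumed Redis client; check the actual build.sbt
)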

Prerequisites for successful execution

  • A machine with Docker installed, on which you can allocate at least the following [actually, the more the merrier] to the docker-machine instance:
    • 2 GB RAM
    • 2 CPUs
    • 6 GB free disk space
  • We will need unfettered internet access for executing this project.
  • Twitter App OAuth credentials are mandatory.
    • These credentials are for retrieving tweets using Twitter Streaming API.
  • Executing this project downloads ~1.5 GB of data: the Docker image, SBT dependencies, etc., plus the streamed tweets.

Env Setup

If not already installed, please install Docker on your machine.

We will be using the accompanying Docker image created for this project.

Resources for the Docker machine

  • Stop the Docker machine: docker-machine stop default
  • Launch VirtualBox and open the settings of the default instance, which should be in the Powered Off state.
  • Adjust the settings as highlighted in the screenshots below [a command-line alternative is sketched after this list].
    • Please note this is the minimum required config; you might want to allocate more.
  • Increase the RAM of the VM
    Docker Machine RAM
  • Increase the # of CPUs of the VM
    Docker Machine CPU
  • Start the Docker machine again after modifying the settings: docker-machine start default
  • Any container you run now will have 2 GB RAM and 2 CPUs available.
    • Or whatever resources you allocated earlier.
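
If you prefer the command line to the VirtualBox GUI, roughly the same effect can be achieved by recreating the Docker machine with explicit resources. A hedged sketch, assuming the VirtualBox driver and that you are happy to recreate the default machine from scratch:

docker-machine stop default
docker-machine rm default        # destroys the existing machine and anything stored inside it
docker-machine create --driver virtualbox --virtualbox-memory 2048 --virtualbox-cpu-count 2 default   # create also starts the machine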

Execution

Run the Docker image

  • This step pulls the Docker image from Docker Hub and runs it.
    • If the image doesn't exist locally, the Docker client first fetches it from the registry and then runs it.
    • After the image boots up and completes the setup process, you will land in a bash shell waiting for your input.

docker run -ti -p 4040:4040 -p 8080:8080 -p 8081:8081 -p 9999:9999 -h spark --name=spark p7hb/p7hb-docker-mllib-twitter-sentiment:1.6.2

Please note:

  • root is the user we are logged in as.
  • spark is the container name.
  • spark is the host name of this container.
    • This is important because the Spark slaves are started using this host name to reach the master.
  • The container exposes ports 4040, 8080 and 8081 for the Spark Web UI consoles and 9999 for the Twitter sentiment visualization.

Twitter App OAuth credentials

  • The only manual intervention required in this project is setting up a Twitter App and updating its OAuth credentials to connect to the Twitter Streaming API. Please note that this is a critical step: without it, Spark cannot connect to Twitter or retrieve tweets, and the visualization will essentially be empty.
  • Please check application.conf and fill in your own values from the Twitter Developer Page to complete the Twitter API integration [a hedged wiring sketch follows this list].
    • If you have not created a Twitter App before, please create one on the Twitter Developer Page; that is where you will find all the values application.conf requires.
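
A minimal, hedged sketch of how such credentials are usually wired into a Spark Streaming Twitter receiver via Typesafe Config and twitter4j. The config key names and the overall structure below are illustrative assumptions, not necessarily what application.conf in this repo defines:

import com.typesafe.config.ConfigFactory
import org.apache.spark.SparkConf
import org.apache.spark.streaming.twitter.TwitterUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}
import twitter4j.auth.OAuthAuthorization
import twitter4j.conf.ConfigurationBuilder

object TwitterStreamSketch {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(new SparkConf().setAppName("TwitterSentiment"), Seconds(10))

    // Key names below are placeholders; use the keys actually defined in this project's application.conf.
    val conf = ConfigFactory.load()
    val oauthConf = new ConfigurationBuilder()
      .setOAuthConsumerKey(conf.getString("OAUTH_CONSUMER_KEY"))
      .setOAuthConsumerSecret(conf.getString("OAUTH_CONSUMER_SECRET"))
      .setOAuthAccessToken(conf.getString("OAUTH_ACCESS_TOKEN"))
      .setOAuthAccessTokenSecret(conf.getString("OAUTH_ACCESS_TOKEN_SECRET"))
      .build()

    // Only geo-tagged tweets are useful for the world map, so drop everything else up front.
    val tweets = TwitterUtils.createStream(ssc, Some(new OAuthAuthorization(oauthConf)))
      .filter(_.getGeoLocation != null)

    tweets.map(_.getText).print()   // the real job classifies each tweet here instead of printing it
    ssc.start()
    ssc.awaitTermination()
  }
}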

Execute Spark Streaming job for sentiment prediction

  • Please execute /root/exec_spark_jobs.sh in the console after updating the Twitter App OAuth credentials in application.conf.
    • This script first starts the Spark services [Spark Master and Spark Slave] and then launches the Spark jobs one after the other [a rough outline is sketched after this list].
  • This might take some time, as SBT will download and set up all the required packages from Maven Central and the Typesafe repo.
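
For orientation only, this is a rough outline of the kind of commands such a script runs; it is not the contents of exec_spark_jobs.sh, and the project directory, main class and jar name are placeholders:

$SPARK_HOME/sbin/start-master.sh                                            # Spark Master, reachable as spark://spark:7077
$SPARK_HOME/sbin/start-slave.sh spark://spark:7077                          # one Spark worker, registered against the "spark" host name
cd /root/<project-dir>                                                      # placeholder path inside the container
sbt assembly                                                                # SBT downloads all dependencies and builds the uber jar here
spark-submit --class <main-class> --master spark://spark:7077 <uber-jar>    # launch the streaming / classification jobs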

Visualization app

  • A few minutes after launching the Spark jobs, point your browser on the host machine to http://192.168.99.100:9999/ to view the Twitter sentiment visualized on a world map.
  • When a tweet is classified, a small bubble appears on the world map at the location from which that tweet originated.
  • Hovering over a bubble displays the corresponding tweet's additional info:
  1. tweet handle
  2. tweet profile pic
  3. date tweet created
  4. text of the tweet
  5. sentiment predicted by MLlib
  6. sentiment as per Stanford CoreNLP
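
The data shown in the browser reaches the webapp through the Redis channel mentioned under Features. A minimal sketch of the publishing side inside the Spark job, assuming a Jedis client and a hypothetical channel name [the project's actual Redis client and channel may differ]:

import org.apache.spark.streaming.dstream.DStream
import redis.clients.jedis.Jedis

object RedisPublishSketch {
  // classifiedTweets is assumed to already be JSON strings carrying the fields listed above.
  def publish(classifiedTweets: DStream[String], redisHost: String = "localhost"): Unit =
    classifiedTweets.foreachRDD { rdd =>
      rdd.foreachPartition { partition =>
        val jedis = new Jedis(redisHost)                                 // one connection per partition; pooling omitted for brevity
        partition.foreach(json => jedis.publish("TweetChannel", json))   // "TweetChannel" is a hypothetical channel name
        jedis.close()
      }
    }
}

On the other side, the Flask front end subscribes to the same channel and pushes each message to the browser for Datamaps to render.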

Further work and improvement areas

  • The visualization could be completely scrapped for something better, and the UX needs a lot of uplifting.
  • Use a Spark package / wrapper for Stanford CoreNLP to reduce the boilerplate code further.
  • Current prediction accuracy is ~80%. Prediction accuracy needs to be rethought, and probably a better dataset should be used for creating the model.
  • Update the project to Apache Spark v2.0.
    • Push out RDDs; hello DataFrames and Datasets!
    • And also use the org.apache.spark.ml package [a hedged sketch follows this list].
    • Speed gains too!
  • Processing and predicting non-English tweets could also be taken up in the future.
  • Add or update comments in the code where necessary.
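
A hedged sketch of what the spark.ml route mentioned above could look like on Spark 2.0, with a tiny inline dataset standing in for Sentiment140; this is a possible direction, not code from this repo:

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.NaiveBayes
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.sql.SparkSession

object MlPipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("SentimentPipeline").getOrCreate()
    import spark.implicits._

    // Tiny stand-in for the Sentiment140 data: label [0.0 = negative, 4.0 = positive] and raw text.
    val training = Seq(
      (4.0, "loving this spark streaming demo"),
      (0.0, "this outage is terrible")
    ).toDF("label", "text")

    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")
    val pipeline  = new Pipeline().setStages(Array(tokenizer, hashingTF, new NaiveBayes()))

    // DataFrames in, DataFrames out; no hand-rolled RDD plumbing.
    val model = pipeline.fit(training)
    model.transform(training).select("text", "prediction").show(truncate = false)
    spark.stop()
  }
}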

Expert mode execution steps

This is a very quick recap / summary of the steps required to execute this project.
Please consider these steps only if you are an expert on Docker, Spark and the ecosystem of this project, and clearly understand what is being done here.

  • Install and launch Docker.
  • Stop Docker and in the VirtualBox GUI, increase RAM of Docker machine [instance named default and should be in Powered Off state] to at least 2 GB [or more] and # of CPUs to 2 [or more].
  • Start Docker again.
  • Pull the project Docker image and launch it.
    • Might have to wait for ~10 minutes or so [depending on your internet speed].
      docker run -ti -p 4040:4040 -p 8080:8080 -p 8081:8081 -p 9999:9999 -h spark --name=spark p7hb/p7hb-docker-mllib-twitter-sentiment:1.6.2
  • Update application.conf to include your Twitter App OAuth credentials.
  • Execute: /root/exec_spark_jobs.sh
    • Might have to wait for ~10 minutes or so [depending on your internet speed].
  • Point your browser on the host machine to http://192.168.99.100:9999 for visualization.

Note:

Please do not forget to modify the Twitter App OAuth credentials in the file application.conf.
Please check Twitter Developer page for more info.

Helpful links

  1. I am currently hosting this web app on Amazon EC2: http://54.84.252.184:9999/. I will bring it down sometime next week. Update on 19th September, 2016: After running the live app on EC2 for almost a month, I have shut down this instance today.
  2. Docker Image on Docker Hub Registry: https://hub.docker.com/r/p7hb/p7hb-docker-mllib-twitter-sentiment/.
  3. GitHub URL for source code of the project: https://github.com/P7h/Spark-MLlib-Twitter-Sentiment-Analysis.
  4. GitHub URL for blog post on code walkthru: https://github.com/P7h/Spark-MLlib-Twitter-Sentiment-Analysis/wiki/.
  5. Dockerfile GitHub repo: https://github.com/P7h/p7hb-docker-mllib-twitter-sentiment.

Problems? Questions? Contributions?

If you find any issues or would like to discuss further, please ping me on my Twitter handle @P7h or drop me an email. Appreciate your help. Thanks!

License

Copyright © 2016 Prashanth Babu.
Licensed under the Apache License, Version 2.0.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].