psychothan / Data Scientists Guide Apache Spark
The Data Scientist's Guide to Apache Spark
This repo contains notebook exercises for a workshop teaching practicing data scientists the best practices of using Spark, in the context of a data scientist's standard workflow. By leveraging Spark's Python and R APIs to present practical applications, the workshop lowers the barrier to entry and makes the technology much more accessible.
Materials
For the workshop (and after) we will use a Discord chatroom to keep the conversation going: https://discord.gg/avj79xZ
Please also feel free to reach out to me directly via email at [email protected] or on Twitter @memoryphoneme.
The presentation can be found on Slideshare here.
Prerequisites
Prior experience with Python and the scientific Python stack is beneficial. Knowledge of data science models and applications is also preferred. This is not an introduction to machine learning or data science, but rather a course for people already proficient in these methods on a small scale who want to learn how to apply that knowledge in a distributed setting with Spark.
Setup
SparkR with a Notebook
- Install IRkernel:

```r
install.packages(c('rzmq', 'repr', 'IRkernel', 'IRdisplay'),
                 repos = c('http://irkernel.github.io/', getOption('repos')))
IRkernel::installspec()
```

- Point R at your Spark installation and load SparkR:

```r
# Example: set this to where Spark is installed
Sys.setenv(SPARK_HOME = "/Users/[username]/spark")

# This line loads SparkR from the installed directory
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))

# If these two lines work, you are all set
library(SparkR)
sc <- sparkR.init(master = "local")
```
Data
The notebooks use a few datasets:

- Airline data: http://hopelessoptimism.com/static/data/airline-data
- DonorsChoose data: read the documentation here and download a zip (~0.5 GB) from http://hopelessoptimism.com/static/data/donors_choose.zip
IPython Console Help
Q: How can I find out all the methods that are available on a DataFrame?

- In the IPython console, type `sales.` and press TAB.
- Autocomplete will show you all the methods that are available.
- To find more information about a specific method, say `.cov`, type `help(sales.cov)`.
- This will display the API documentation for that method.
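The same introspection can be done programmatically. A minimal sketch, using a stand-in `Sales` class (hypothetical; in the workshop notebooks `sales` is a Spark DataFrame):

```python
# Programmatic equivalents of the console's TAB completion and help().
class Sales:
    def cov(self, col1, col2):
        """Compute the covariance between two columns."""
        return 0.0

sales = Sales()

# dir() lists the same methods TAB completion would show.
methods = [m for m in dir(sales) if not m.startswith("_")]
print(methods)  # ['cov']

# help(sales.cov) prints the documentation; the raw docstring is on __doc__.
print(sales.cov.__doc__)
```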
Spark Documentation
Q: How can I find out more about Spark's Python API, MLlib, GraphX, Spark Streaming, or deploying Spark to EC2?

A: Navigate the Spark documentation site using its tabs, in particular:

- Programming Guides > Quick Start, Spark Programming Guide, Spark Streaming, DataFrames and SQL, MLlib, GraphX, SparkR
- Deploying > Overview, Submitting Applications, Spark Standalone, YARN, Amazon EC2
- More > Configuration, Monitoring, Tuning Guide
References
Setup
History of Computing
- Why CPUs aren't getting any faster
- Hadoop: A brief History
- The State of Spark: And where we are going next
- https://blogs.apache.org/foundation/entry/the_apache_software_foundation_announces50
Original Papers
Data Science with Spark
Distributed Computing
- Distributed Systems for Fun and Profit
- Resilience Engineering: Learning to Embrace Failure
- Chaos Monkey
Spark Internals
Spark Performance
- Tuning and Debugging in Apache Spark
- reduceByKey vs. groupByKey
- Advanced Spark
- What's the difference between `cache()` and `persist()`?
- Monitoring and Instrumentation
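The reduceByKey-vs-groupByKey distinction above can be sketched in plain Python (a toy model of Spark's shuffle, not actual Spark code): with groupByKey every value crosses the network to the key's reducer, while reduceByKey pre-combines values per key within each partition (a map-side combine), so far less data is shuffled.

```python
from collections import defaultdict

# Toy model: two "partitions" of (key, value) pairs.
partitions = [
    [("a", 1), ("b", 2), ("a", 3)],
    [("a", 4), ("b", 5)],
]

# groupByKey: every value is shuffled to the key's reducer, then combined.
shuffled = defaultdict(list)
for part in partitions:
    for k, v in part:
        shuffled[k].append(v)          # all 5 values cross the "network"
group_by_key = {k: sum(vs) for k, vs in shuffled.items()}

# reduceByKey: values are combined within each partition first, so only
# one partial sum per key per partition is shuffled.
partials = []
for part in partitions:
    local = defaultdict(int)
    for k, v in part:
        local[k] += v
    partials.append(dict(local))       # only 4 partial sums cross

merged = defaultdict(int)
for p in partials:
    for k, v in p.items():
        merged[k] += v
reduce_by_key = dict(merged)

print(group_by_key)   # {'a': 8, 'b': 7}
print(reduce_by_key)  # same result, less data moved
```

Same answer either way; the difference is how much data moves, which is why reduceByKey is usually preferred for aggregations.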
Spark Deployment
Plotly + Spark
- https://plot.ly/ipython-notebooks/apache-spark/
- https://plot.ly/python/ipython-notebooks/
- https://plot.ly/python/matplotlib-to-plotly-tutorial/#6.1-Matplotlib-to-Plotly-conversion-basics
word2vec
The word2vec tool takes a text corpus as input and produces word vectors as output. It first constructs a vocabulary from the training text and then learns vector representations of the words. The resulting word-vector file can be used as features in many natural language processing and machine learning applications.
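A toy illustration of using word vectors as features, with hypothetical 3-dimensional vectors (real word2vec output is typically 100+ dimensions learned from a corpus); cosine similarity is a common way to compare them:

```python
import math

# Hypothetical word vectors, for illustration only.
vectors = {
    "king":  [0.9, 0.1, 0.4],
    "queen": [0.85, 0.15, 0.45],
    "apple": [0.1, 0.9, 0.2],
}

def cosine(u, v):
    """Cosine similarity: dot product over the product of magnitudes."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Semantically similar words should have nearby vectors.
print(cosine(vectors["king"], vectors["queen"]) >
      cosine(vectors["king"], vectors["apple"]))  # True
```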
Theory/Application
- Efficient Estimation of Word Representations in Vector Space
- Distributed Representations of Words and Phrases and their Compositionality
- Distributed Representations of Sentences and Documents
- deeplearning4j tutorial (with applications)
- Modern Methods for Sentiment Analysis
- word2vec: an introduction
Tools
Books on Spark
- Learning Spark: Lightning-Fast Big Data Analytics
  By Holden Karau, Andy Konwinski, Patrick Wendell, Matei Zaharia
  Publisher: O'Reilly Media, June 2014
  http://shop.oreilly.com/product/0636920028512.do
  Introduction to Spark APIs and underlying concepts.

- Spark Knowledge Base
  By Databricks, Vida Ha, Pat McDonough
  Publisher: Databricks
  http://databricks.gitbooks.io/databricks-spark-knowledge-base
  Spark tips, tricks, and recipes.

- Spark Reference Applications
  By Databricks, Vida Ha, Pat McDonough
  Publisher: Databricks
  http://databricks.gitbooks.io/databricks-spark-reference-applications
  Best practices for large-scale Spark application architecture. Topics include import, export, machine learning, streaming.
Learning Scala
- Scala for the Impatient
  By Cay S. Horstmann
  Publisher: Addison-Wesley Professional, March 2012
  http://www.amazon.com/Scala-Impatient-Cay-S-Horstmann/dp/0321774094
  Concise, to the point, and contains good practical tips on using Scala.
Video Tutorials
- Spark Internals
  By Matei Zaharia (Databricks)
  https://www.youtube.com/watch?v=49Hr5xZyTEA

- Spark on YARN
  By Sandy Ryza (Cloudera)
  https://www.youtube.com/watch?v=N6pJhxCPe-Y

- Spark Programming
  By Pat McDonough (Databricks)
  https://www.youtube.com/watch?v=mHF3UPqLOL8
Community
- Community
  https://spark.apache.org/community.html
  Spark's community page lists meetups, mailing lists, and upcoming Spark conferences.

- Meetups
  http://spark.meetup.com/
  Spark has meetups in the Bay Area, NYC, Seattle, and most major cities around the world.

- Mailing Lists
  https://spark.apache.org/community.html
  The user mailing list covers issues and best practices around using Spark; the dev mailing list is for people who want to contribute to Spark.