
spirom / LearningSpark

License: MIT
Scala examples for learning to use Spark

Programming Languages

scala

Projects that are alternatives to or similar to LearningSpark

Waterdrop
Production Ready Data Integration Product, documentation:
Stars: ✭ 1,856 (+340.86%)
Mutual labels:  spark, spark-streaming
Spark
.NET for Apache® Spark™ makes Apache Spark™ easily accessible to .NET developers.
Stars: ✭ 1,721 (+308.79%)
Mutual labels:  spark, spark-streaming
Spark Mllib Twitter Sentiment Analysis
🌟 ✨ Analyze and visualize Twitter Sentiment on a world map using Spark MLlib
Stars: ✭ 113 (-73.16%)
Mutual labels:  spark, spark-streaming
Utils4s
A collection of test cases and related reference material gathered while working with Scala and Spark
Stars: ✭ 1,070 (+154.16%)
Mutual labels:  spark, spark-streaming
Example Spark
Spark, Spark Streaming and Spark SQL unit testing strategies
Stars: ✭ 205 (-51.31%)
Mutual labels:  spark, spark-streaming
Pyspark Examples
Code examples on Apache Spark using python
Stars: ✭ 58 (-86.22%)
Mutual labels:  spark, spark-streaming
Example Spark Kafka
Apache Spark and Apache Kafka integration example
Stars: ✭ 120 (-71.5%)
Mutual labels:  spark, spark-streaming
Angel
A Flexible and Powerful Parameter Server for large-scale machine learning
Stars: ✭ 6,458 (+1433.97%)
Mutual labels:  spark, spark-streaming
Spark Streaming With Kafka
Self-contained examples of Apache Spark streaming integrated with Apache Kafka.
Stars: ✭ 180 (-57.24%)
Mutual labels:  spark, spark-streaming
Pyspark Learning
Updated repository
Stars: ✭ 147 (-65.08%)
Mutual labels:  spark, spark-streaming
Real Time Stream Processing Engine
This is an example of real time stream processing using Spark Streaming, Kafka & Elasticsearch.
Stars: ✭ 37 (-91.21%)
Mutual labels:  spark, spark-streaming
Data Accelerator
Data Accelerator for Apache Spark simplifies onboarding to Streaming of Big Data. It offers a rich, easy to use experience to help with creation, editing and management of Spark jobs on Azure HDInsights or Databricks while enabling the full power of the Spark engine.
Stars: ✭ 247 (-41.33%)
Mutual labels:  spark, spark-streaming
Learning Spark
Learning Spark from scratch, for big data study
Stars: ✭ 37 (-91.21%)
Mutual labels:  spark, spark-streaming
Spark States
Custom state store providers for Apache Spark
Stars: ✭ 83 (-80.29%)
Mutual labels:  spark, spark-streaming
Mobius
C# and F# language binding and extensions to Apache Spark
Stars: ✭ 929 (+120.67%)
Mutual labels:  spark, spark-streaming
Kinesis Sql
Kinesis Connector for Structured Streaming
Stars: ✭ 120 (-71.5%)
Mutual labels:  spark, spark-streaming
Cdap
An open source framework for building data analytic applications.
Stars: ✭ 509 (+20.9%)
Mutual labels:  spark, spark-streaming
Sparta
Real Time Analytics and Data Pipelines based on Spark Streaming
Stars: ✭ 513 (+21.85%)
Mutual labels:  spark, spark-streaming
Azure Event Hubs Spark
Enabling Continuous Data Processing with Apache Spark and Azure Event Hubs
Stars: ✭ 140 (-66.75%)
Mutual labels:  spark, spark-streaming
Gimel
Big Data Processing Framework - Unified Data API or SQL on Any Storage
Stars: ✭ 216 (-48.69%)
Mutual labels:  spark, spark-streaming

The LearningSpark Project

NOTE: This code now uses Spark 2.0.0 and beyond -- if you are still using an earlier version of Spark you may want to work off the before_spark2.0.0 branch.

This project contains snippets of Scala code for illustrating various Apache Spark concepts. It is intended to help you get started with learning Apache Spark (as a Scala programmer) by providing a super-easy on-ramp that doesn't involve Unix, cluster configuration, building from source, or installing Hadoop. Many of these activities will become necessary later in your learning, after you've used these examples to achieve basic familiarity.

It is intended to accompany a number of posts on the blog A River of Bytes.

Dependencies

The project was created with IntelliJ IDEA 14 Community Edition and currently uses JDK 1.8, Scala 2.11.12, and Spark 2.3.0 on Ubuntu Linux.
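The repository ships with its own build configuration, which is authoritative; purely for orientation, an sbt setup matching the versions above would look roughly like the following sketch (the artifact names are the standard Spark ones, not copied from the project's build file):

    // A minimal sketch of an sbt build for these versions -- see the project's
    // own build file for the actual, authoritative dependency list.
    scalaVersion := "2.11.12"

    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core"      % "2.3.0",
      "org.apache.spark" %% "spark-sql"       % "2.3.0",
      "org.apache.spark" %% "spark-streaming" % "2.3.0",
      "org.apache.spark" %% "spark-graphx"    % "2.3.0"
    )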

Versions of these examples for other configurations (older versions of Scala and Spark) can be found in various branches.

Java Examples

These are much less developed than the Scala examples below. Note that they are written to use Java 7 and Spark 2.0.0 only -- if you go back to the before_spark2.0.0 branch you won't find any Java examples at all. I'm adding these partly out of curiosity (because I like Java almost as much as Scala) and partly because I've realized that lots of Spark programmers use Java. To be clear, there are a number of things I'm not promising to do:

  • Rush to catch up with the Scala examples
  • Keep the two sets of examples perfectly matched
  • Keep working on the Java examples
  • Add Python and R as well (this is really unlikely)

Spark 2.2.0 note: Now that support for Java 7 has been dropped, these "old-fashioned" Java examples are of dubious value, and I'll probably delete them soon in favor of the separate Java/Maven project mentioned below. I've completely stopped working on them, so I can focus on the Scala and Java 8 examples.

If you are using Java 8 or later, you may be interested in the new learning-spark-with-java project based completely on Java 8 and Maven.

Package What's Illustrated
rdd The JavaRDD: core Spark data structure -- see the local README.md in that directory for details.
dataset A range of Dataset examples (queryable collection that is statically typed) -- see the local README.md in that directory for details.
dataframe A range of DataFrame/Dataset examples (queryable collection that is dynamically typed) -- see the local README.md in that directory for details.

Scala Examples

The examples can be found under src/main/scala. The best way to use them is to start by reading the code and its comments. Then, since each file contains an object definition with a main method, run it and consider the output. Relevant blog posts and StackOverflow answers are listed in the various package README.md files.
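Each example follows the same basic shape: a plain Scala object whose main method sets up a local Spark environment, runs a small computation, and prints the result. The sketch below is not taken from the repository -- the object name and the tiny RDD are invented -- but it shows roughly that object-with-main pattern, here using a local SparkSession:

    import org.apache.spark.sql.SparkSession

    // A made-up example in the style of the repository's snippets: an object
    // with a main method that runs a small Spark job entirely in local mode.
    object MinimalExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder
          .appName("MinimalExample")
          .master("local[4]")          // no cluster, Hadoop, or special configuration needed
          .getOrCreate()

        // A tiny RDD computation, just enough to produce output worth inspecting.
        val numbers = spark.sparkContext.parallelize(1 to 10, 4)
        val doubled = numbers.map(_ * 2)
        println(doubled.collect().mkString(", "))

        spark.stop()
      }
    }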

Package or File What's Illustrated
Ex1_SimpleRDD How to execute your first, very simple, Spark Job. See also An easy way to start learning Spark.
Ex2_Computations How RDDs work in more complex computations. See also Spark computations.
Ex3_CombiningRDDs Operations on multiple RDDs
Ex4_MoreOperationsOnRDDs More complex operations on individual RDDs
Ex5_Partitions Explicit control of partitioning for performance and scalability.
Ex6_Accumulators How to use Spark accumulators to efficiently gather the results of distributed computations.
hiveql Using HiveQL features in a HiveContext. See the local README.md in that directory for details.
special Special/advanced RDD examples -- see the local README.md in that directory for details.
dataset A range of Dataset examples (queryable collection that is statically typed) -- see the local README.md in that directory for details, and the sketch after this table for the Dataset/DataFrame contrast.
dataframe A range of DataFrame examples (queryable collection that is dynamically -- and weakly -- typed) -- see the local README.md in that directory for details.
sql A range of SQL examples -- see the local README.md in that directory for details.
datasourcev2 New experimental API for developing external data sources, as of Spark 2.3.0 -- removed in favor of the new repository https://github.com/spirom/spark-data-sources, which explores the new API in some detail.
streaming Streaming examples -- see the local README.md in that directory for details.
streaming/structured Structured streaming examples (Spark 2.0) -- see the local README.md in that directory for details.
graphx A range of GraphX examples -- see the local README.md in that directory for details.
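The statically typed versus dynamically typed distinction in the dataset and dataframe rows above is easiest to see side by side. The following sketch is not from the repository (the Person case class and the object name are invented for illustration), but it shows the contrast in a few lines:

    import org.apache.spark.sql.SparkSession

    // Hypothetical record type, used only to illustrate the typed/untyped contrast.
    case class Person(name: String, age: Int)

    object TypedVsUntyped {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder
          .appName("TypedVsUntyped")
          .master("local[4]")
          .getOrCreate()
        import spark.implicits._

        // Dataset[Person]: statically typed -- field access is checked at compile time.
        val people = Seq(Person("Ann", 34), Person("Bob", 29)).toDS()
        val typedAdults = people.filter(p => p.age >= 30)

        // DataFrame (an alias for Dataset[Row]): dynamically typed -- columns are
        // looked up by name, so a misspelled column name only fails at runtime.
        val df = people.toDF()
        val untypedAdults = df.filter($"age" >= 30)

        typedAdults.show()
        untypedAdults.show()
        spark.stop()
      }
    }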

Additional Scala code is "work in progress".
