
faizanahemad / spark-gradle-template

Licence: other
Apache Spark in your IDE with gradle

Programming Languages

scala

Projects that are alternatives to or similar to spark-gradle-template

Spark On K8s Operator
Kubernetes operator for managing the lifecycle of Apache Spark applications on Kubernetes.
Stars: ✭ 1,780 (+4464.1%)
Mutual labels:  spark, apache-spark
Spark With Python
Fundamentals of Spark with Python (using PySpark), code examples
Stars: ✭ 150 (+284.62%)
Mutual labels:  spark, apache-spark
Spark
.NET for Apache® Spark™ makes Apache Spark™ easily accessible to .NET developers.
Stars: ✭ 1,721 (+4312.82%)
Mutual labels:  spark, apache-spark
Spark States
Custom state store providers for Apache Spark
Stars: ✭ 83 (+112.82%)
Mutual labels:  spark, apache-spark
Sparkrdma
RDMA accelerated, high-performance, scalable and efficient ShuffleManager plugin for Apache Spark
Stars: ✭ 215 (+451.28%)
Mutual labels:  spark, apache-spark
Cuesheet
A framework for writing Spark 2.x applications in a pretty way
Stars: ✭ 86 (+120.51%)
Mutual labels:  spark, apache-spark
Azure Event Hubs Spark
Enabling Continuous Data Processing with Apache Spark and Azure Event Hubs
Stars: ✭ 140 (+258.97%)
Mutual labels:  spark, apache-spark
Apache Spark Internals
The Internals of Apache Spark
Stars: ✭ 1,045 (+2579.49%)
Mutual labels:  spark, apache-spark
Mmlspark
Simple and Distributed Machine Learning
Stars: ✭ 2,899 (+7333.33%)
Mutual labels:  spark, apache-spark
Azure Cosmosdb Spark
Apache Spark Connector for Azure Cosmos DB
Stars: ✭ 165 (+323.08%)
Mutual labels:  spark, apache-spark
Awesome Pulsar
A curated list of Pulsar tools, integrations and resources.
Stars: ✭ 57 (+46.15%)
Mutual labels:  spark, apache-spark
Mastering Spark Sql Book
The Internals of Spark SQL
Stars: ✭ 234 (+500%)
Mutual labels:  spark, apache-spark
Pulsar Spark
When Apache Pulsar meets Apache Spark
Stars: ✭ 55 (+41.03%)
Mutual labels:  spark, apache-spark
Splash
Splash, a flexible Spark shuffle manager that supports user-defined storage backends for shuffle data storage and exchange
Stars: ✭ 105 (+169.23%)
Mutual labels:  spark, apache-spark
Spark Nkp
Natural Korean Processor for Apache Spark
Stars: ✭ 50 (+28.21%)
Mutual labels:  spark, apache-spark
Spark On Lambda
Apache Spark on AWS Lambda
Stars: ✭ 137 (+251.28%)
Mutual labels:  spark, apache-spark
Spark Tda
SparkTDA is a package for Apache Spark providing Topological Data Analysis Functionalities.
Stars: ✭ 45 (+15.38%)
Mutual labels:  spark, apache-spark
Spark As Service Using Embedded Server
This application comes as Spark2.1-as-Service-Provider using an embedded, Reactive-Streams-based, fully asynchronous HTTP server
Stars: ✭ 46 (+17.95%)
Mutual labels:  spark, apache-spark
Whylogs Java
Profile and monitor your ML data pipeline end-to-end
Stars: ✭ 164 (+320.51%)
Mutual labels:  spark, apache-spark
Spark Workshop
Apache Spark™ and Scala Workshops
Stars: ✭ 224 (+474.36%)
Mutual labels:  spark, apache-spark

Spark-Gradle-Template

A barebones project with Scala and Apache Spark, built using Gradle. Spark-shell provides the spark and sc variables pre-initialised; here I did the same using a Scala trait (InitSpark, shown below) that you can extend.

Prerequisites

A JDK (Java 8 works with Spark 2.1.0) and Git. The Gradle wrapper (./gradlew) downloads Gradle itself, so no separate Gradle installation is needed.

Build and Demo process

Clone the Repo

git clone https://github.com/faizanahemad/spark-gradle-template.git

Build

./gradlew clean build

Run

./gradlew run

All Together

./gradlew clean run

What does the demo do?

Take a look at the src/main/scala/template/spark directory.

We have two items here:

The trait InitSpark, which is extended by any class that wants to run Spark code. This trait has all of the initialization code. I have also suppressed logging to the ERROR level for less noise.

The file Main.scala has the executable object Main. In it, I do four things:

  • Print the Spark version.
  • Compute the sum of 1 to 100 (inclusive).
  • Read a CSV file into a typed Dataset.
  • Compute the average age of the persons in the CSV.

InitSpark.scala

import org.apache.log4j.{Level, LogManager, Logger}
import org.apache.spark.sql.SparkSession

trait InitSpark {
  // Local SparkSession using all available cores, analogous to spark-shell's `spark`.
  val spark: SparkSession = SparkSession.builder().appName("Spark example").master("local[*]")
                            .config("spark.some.config.option", "some-value") // placeholder config entry
                            .getOrCreate()
  val sc = spark.sparkContext        // analogous to spark-shell's `sc`
  val sqlContext = spark.sqlContext

  // CSV reader for files with a header row; the schema is inferred and malformed rows are dropped.
  def reader = spark.read.option("header", true).option("inferSchema", true).option("mode", "DROPMALFORMED")
  // CSV reader for files without a header row; columns come out as _c0, _c1, ...
  def readerWithoutHeader = spark.read.option("header", false).option("inferSchema", true).option("mode", "DROPMALFORMED")

  // Silence Spark's and Akka's INFO/WARN chatter; only errors are logged.
  private def init = {
    sc.setLogLevel("ERROR")
    Logger.getLogger("org").setLevel(Level.ERROR)
    Logger.getLogger("akka").setLevel(Level.ERROR)
    LogManager.getRootLogger.setLevel(Level.ERROR)
  }
  init

  def close = {
    spark.close()
  }
}
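
As a small usage sketch of the two readers: reader takes column names from the header row, while readerWithoutHeader is meant for files without one, so the columns arrive as _c0, _c1, ... and usually need to be renamed or given an explicit schema. The object name ReaderDemo and the file raw-people.csv are hypothetical; people-example.csv is the sample file used by the demo.

// Hypothetical sketch; place it alongside the other sources so InitSpark is in scope.
object ReaderDemo extends InitSpark {
  def main(args: Array[String]): Unit = {
    // Header row present: column names come from the file itself.
    val withHeader = reader.csv("people-example.csv")
    withHeader.printSchema()

    // No header row: Spark names the columns _c0, _c1, ...; rename them explicitly
    // (assumes raw-people.csv has the same four columns as the sample file).
    val noHeader = readerWithoutHeader.csv("raw-people.csv")
      .toDF("firstName", "lastName", "country", "age")
    noHeader.printSchema()

    close
  }
}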

Main.scala

import org.apache.spark.sql.functions.avg

// Schema of people-example.csv, used to get a typed Dataset[Person].
final case class Person(firstName: String, lastName: String, country: String, age: Int)

object Main extends InitSpark {
  def main(args: Array[String]) = {
    import spark.implicits._

    // 1. Print the Spark version.
    val version = spark.version
    println("VERSION_STRING = " + version)

    // 2. Sum of 1 to 100 inclusive; range(1, 101) excludes the upper bound.
    val sumHundred = spark.range(1, 101).reduce(_ + _)
    println(sumHundred)

    // 3. Read the CSV into a Dataset[Person] and 4. compute the average age.
    val persons = reader.csv("people-example.csv").as[Person]
    val averageAge = persons.agg(avg("age")).first.get(0).asInstanceOf[Double]
    println(f"Average Age: $averageAge%.2f")

    close
  }
}

Using this Repo

Just import it into your favorite IDE as a Gradle project (tested to work with IntelliJ), or use your favorite editor and build from the command line with Gradle.
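
To add your own Spark job, extend InitSpark the same way Main does. The sketch below is a hypothetical example (the object GroupByCountry is not part of the repository); it reuses the Person case class and people-example.csv from the demo and counts persons per country. To launch it with ./gradlew run you would also need to point the Gradle application main class at it in build.gradle (the exact property depends on that file, which is not shown here), or simply call it from Main.

// Hypothetical example, not part of the repository.
object GroupByCountry extends InitSpark {
  def main(args: Array[String]): Unit = {
    import spark.implicits._

    // Reuse the demo's Person case class and sample CSV.
    val persons = reader.csv("people-example.csv").as[Person]

    // Count persons per country and print the result to the console.
    persons.groupBy("country").count().orderBy("country").show()

    close
  }
}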

Libraries Included

  • Spark - 2.1.0

Useful Links

Issues or Suggestions

  • Raise an issue on GitHub.
  • Send me an email -> fahemad3+github @ gmail dot com (remove the spaces; dot = .)