
apache / Spark

License: Apache-2.0
Apache Spark - A unified analytics engine for large-scale data processing

Programming Languages

Python
139,335 projects - #7 most used programming language
Java
68,154 projects - #9 most used programming language
Scala
5,932 projects
R
7,636 projects
Jupyter Notebook
11,667 projects
HiveQL
18 projects

Projects that are alternatives to or similar to Spark

Spark Website
Apache Spark Website
Stars: ✭ 75 (-99.76%)
Mutual labels:  sql, spark, big-data, jdbc
Metorikku
A simplified, lightweight ETL Framework based on Apache Spark
Stars: ✭ 361 (-98.86%)
Mutual labels:  sql, spark, big-data
Spark With Python
Fundamentals of Spark with Python (using PySpark), code examples
Stars: ✭ 150 (-99.53%)
Mutual labels:  sql, spark, big-data
Gimel
Big Data Processing Framework - Unified Data API or SQL on Any Storage
Stars: ✭ 216 (-99.32%)
Mutual labels:  spark, big-data, jdbc
Linkis
Linkis helps applications easily connect to various back-end computation/storage engines (Spark, Python, TiDB, ...) and exposes various interfaces (REST, JDBC, Java, ...), with multi-tenancy, high performance, and resource control.
Stars: ✭ 2,323 (-92.65%)
Mutual labels:  sql, spark, jdbc
Trino
Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)
Stars: ✭ 4,581 (-85.51%)
Mutual labels:  sql, big-data, jdbc
Kyuubi
Kyuubi is a unified multi-tenant JDBC interface for large-scale data processing and analytics, built on top of Apache Spark
Stars: ✭ 363 (-98.85%)
Mutual labels:  sql, spark, jdbc
Jooq
jOOQ is the best way to write SQL in Java
Stars: ✭ 4,695 (-85.15%)
Mutual labels:  sql, jdbc
Magellan
Geo Spatial Data Analytics on Spark
Stars: ✭ 507 (-98.4%)
Mutual labels:  spark, big-data
Sparkjni
A heterogeneous Apache Spark framework.
Stars: ✭ 11 (-99.97%)
Mutual labels:  spark, big-data
Hibernate Springboot
Collection of best practices for Java persistence performance in Spring Boot applications
Stars: ✭ 589 (-98.14%)
Mutual labels:  sql, jdbc
Data Science Ipython Notebooks
Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines.
Stars: ✭ 22,048 (-30.27%)
Mutual labels:  spark, big-data
Listenbrainz Server
Server for the ListenBrainz project
Stars: ✭ 420 (-98.67%)
Mutual labels:  spark, big-data
Beam
Apache Beam is a unified programming model for Batch and Streaming
Stars: ✭ 5,149 (-83.71%)
Mutual labels:  sql, big-data
Ignite
Apache Ignite
Stars: ✭ 4,027 (-87.26%)
Mutual labels:  sql, big-data
Jailer
Database Subsetting and Relational Data Browsing Tool.
Stars: ✭ 576 (-98.18%)
Mutual labels:  sql, jdbc
Bigdl
Building Large-Scale AI Applications for Distributed Big Data
Stars: ✭ 3,813 (-87.94%)
Mutual labels:  spark, big-data
Ragtime
Database-independent migration library
Stars: ✭ 519 (-98.36%)
Mutual labels:  sql, jdbc
Zeppelin
Web-based notebook that enables data-driven, interactive data analytics and collaborative documents with SQL, Scala and more.
Stars: ✭ 5,513 (-82.56%)
Mutual labels:  spark, big-data
Scriptis
Scriptis is for interactive data analysis with script development (SQL, PySpark, HiveQL), task submission (Spark, Hive), UDF and function management, resource management, and intelligent diagnosis.
Stars: ✭ 696 (-97.8%)
Mutual labels:  sql, spark

Apache Spark

Spark is a unified analytics engine for large-scale data processing. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, pandas API on Spark for pandas workloads, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for stream processing.
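
For a quick taste of those APIs, here is a minimal sketch using the DataFrame and Spark SQL interfaces from the interactive Scala shell described below (the spark session object is predefined there; the nums view name is illustrative):

scala> val df = spark.range(5).toDF("id")
scala> df.createOrReplaceTempView("nums")
scala> spark.sql("SELECT sum(id) AS total FROM nums").show()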

https://spark.apache.org/


Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project web page. This README file only contains basic setup instructions.

Building Spark

Spark is built using Apache Maven. To build Spark and its example programs, run:

./build/mvn -DskipTests clean package

(You do not need to do this if you downloaded a pre-built package.)

More detailed documentation is available from the project site, at "Building Spark".

For general development tips, including info on developing Spark using an IDE, see "Useful Developer Tools".

Interactive Scala Shell

The easiest way to start using Spark is through the Scala shell:

./bin/spark-shell

Try the following command, which should return 1,000,000,000:

scala> spark.range(1000 * 1000 * 1000).count()

Interactive Python Shell

Alternatively, if you prefer Python, you can use the Python shell:

./bin/pyspark

And run the following command, which should also return 1,000,000,000:

>>> spark.range(1000 * 1000 * 1000).count()

Example Programs

Spark also comes with several sample programs in the examples directory. To run one of them, use ./bin/run-example <class> [params]. For example:

./bin/run-example SparkPi

will run the Pi example locally.
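
Many examples also accept positional parameters; for instance, SparkPi takes an optional number of partitions (the 100 below is illustrative):

./bin/run-example SparkPi 100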

You can set the MASTER environment variable when running examples to submit examples to a cluster. This can be a mesos:// or spark:// URL, "yarn" to run on YARN, "local" to run locally with one thread, or "local[N]" to run locally with N threads. You can also use an abbreviated class name if the class is in the examples package. For instance:

MASTER=spark://host:7077 ./bin/run-example SparkPi
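
or, to run the same example locally with four threads (the thread count is illustrative):

MASTER=local[4] ./bin/run-example SparkPi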

Many of the example programs print usage help if no params are given.

Running Tests

Testing first requires building Spark. Once Spark is built, tests can be run using:

./dev/run-tests

Please see the guidance on how to run tests for a module, or individual tests.
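
As a rough sketch of what that guidance covers (the flags follow the ScalaTest Maven plugin and may vary by Spark version; the suite name is illustrative, and Spark must already be built):

./build/mvn -pl core test -Dtest=none -DwildcardSuites=org.apache.spark.rdd.RDDSuite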

There is also a Kubernetes integration test; see resource-managers/kubernetes/integration-tests/README.md.

A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs.

Please refer to the build documentation at "Specifying the Hadoop Version and Enabling YARN" for detailed guidance on building for a particular distribution of Hadoop, including building for particular Hive and Hive Thriftserver distributions.
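
As an illustration (profile names and supported Hadoop versions differ across Spark releases), a YARN-enabled build against a specific Hadoop version looks roughly like:

./build/mvn -Pyarn -Dhadoop.version=3.3.4 -DskipTests clean package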

Configuration

Please refer to the Configuration Guide in the online documentation for an overview on how to configure Spark.
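
For example, properties can be set in conf/spark-defaults.conf or passed on the command line with --conf; the values below are illustrative, not recommendations:

# in conf/spark-defaults.conf
spark.master            spark://host:7077
spark.executor.memory   4g

# or equivalently on the command line
./bin/spark-shell --conf spark.executor.memory=4g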

Contributing

Please review the Contribution to Spark guide for information on how to get started contributing to the project.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].