swoop-inc / spark-records

License: Apache-2.0
Bulletproof Apache Spark jobs with fast root cause analysis of failures.

Programming language: Scala

Projects that are alternatives to, or similar to, spark-records

Scala Spark Tutorial
Project for James' Apache Spark with Scala course
Stars: ✭ 121 (+80.6%)
Mutual labels:  big-data, apache-spark
Parquetviewer
Simple Windows desktop application for viewing & querying Apache Parquet files
Stars: ✭ 145 (+116.42%)
Mutual labels:  big-data, apache-spark
Griffon Vm
Griffon Data Science Virtual Machine
Stars: ✭ 128 (+91.04%)
Mutual labels:  big-data, apache-spark
gan deeplearning4j
Automatic feature engineering using Generative Adversarial Networks using Deeplearning4j and Apache Spark.
Stars: ✭ 19 (-71.64%)
Mutual labels:  big-data, apache-spark
Data Accelerator
Data Accelerator for Apache Spark simplifies onboarding to Streaming of Big Data. It offers a rich, easy to use experience to help with creation, editing and management of Spark jobs on Azure HDInsights or Databricks while enabling the full power of the Spark engine.
Stars: ✭ 247 (+268.66%)
Mutual labels:  big-data, apache-spark
Morpheus
Morpheus brings the leading graph query language, Cypher, onto the leading distributed processing platform, Spark.
Stars: ✭ 303 (+352.24%)
Mutual labels:  big-data, apache-spark
Hydrograph
A visual ETL development and debugging tool for big data
Stars: ✭ 144 (+114.93%)
Mutual labels:  big-data, apache-spark
mmtf-workshop-2018
Structural Bioinformatics Training Workshop & Hackathon 2018
Stars: ✭ 50 (-25.37%)
Mutual labels:  big-data, apache-spark
Sparkrdma
RDMA accelerated, high-performance, scalable and efficient ShuffleManager plugin for Apache Spark
Stars: ✭ 215 (+220.9%)
Mutual labels:  big-data, apache-spark
Bigdata Playground
A complete example of a big data application using: Kubernetes (kops/aws), Apache Spark SQL/Streaming/MLlib, Apache Flink, Scala, Python, Apache Kafka, Apache HBase, Apache Parquet, Apache Avro, Apache Storm, Twitter API, MongoDB, NodeJS, Angular, GraphQL
Stars: ✭ 177 (+164.18%)
Mutual labels:  big-data, apache-spark
datalake-etl-pipeline
Simplified ETL process in Hadoop using Apache Spark. Has complete ETL pipeline for datalake. SparkSession extensions, DataFrame validation, Column extensions, SQL functions, and DataFrame transformations
Stars: ✭ 39 (-41.79%)
Mutual labels:  big-data, apache-spark
mmtf-spark
Methods for the parallel and distributed analysis and mining of the Protein Data Bank using MMTF and Apache Spark.
Stars: ✭ 20 (-70.15%)
Mutual labels:  big-data, apache-spark
Parquet Dotnet
🏐 Apache Parquet for modern .NET
Stars: ✭ 276 (+311.94%)
Mutual labels:  big-data, apache-spark
Mist
Serverless proxy for Spark cluster
Stars: ✭ 309 (+361.19%)
Mutual labels:  big-data, apache-spark
Mmlspark
Simple and Distributed Machine Learning
Stars: ✭ 2,899 (+4226.87%)
Mutual labels:  big-data, apache-spark
Spark On Lambda
Apache Spark on AWS Lambda
Stars: ✭ 137 (+104.48%)
Mutual labels:  big-data, apache-spark
leaflet heatmap
A simple visualization of Huzhou call data. Assuming the data volume is too large to render a heatmap directly in the browser, the heatmap rendering step is moved offline for computation and analysis. Apache Spark computes the data in parallel and then renders the heatmap; leafletjs then loads an OpenStreetMap layer plus the heatmap layer for good interactivity. With the current Spark-based rendering, the parallel computation is actually slower than a single machine, perhaps because Spark is ill-suited to this kind of computation or because the algorithm is poorly designed. The Spark heatmap rendering and computation code is at https://github.com/yuanzhaokang/ParallelizeHeatmap.git.
Stars: ✭ 13 (-80.6%)
Mutual labels:  big-data, apache-spark
aut
The Archives Unleashed Toolkit is an open-source toolkit for analyzing web archives.
Stars: ✭ 111 (+65.67%)
Mutual labels:  big-data, apache-spark
Spark With Python
Fundamentals of Spark with Python (using PySpark), code examples
Stars: ✭ 150 (+123.88%)
Mutual labels:  big-data, apache-spark
Detecting-Malicious-URL-Machine-Learning
No description or website provided.
Stars: ✭ 47 (-29.85%)
Mutual labels:  big-data, apache-spark

Spark Records

Spark Records is a data processing pattern with an associated lightweight, dependency-free framework for Apache Spark v2+ that enables:

  1. Bulletproof data processing with Spark
    Your jobs will never unpredictably fail midway due to data transformation bugs. Spark records give you predictable failure control through instant data quality checks performed on metrics automatically collected during job execution, without any additional querying.

  2. Automatic row-level structured logging
    Exceptions generated during job execution are automatically associated with the data that caused the exception, down to nested exception causes and full stack traces. If you need to reprocess data, you can trivially and efficiently choose to only process the failed inputs.

  3. Lightning-fast root cause analysis
    Get answers to any questions related to exceptions or warnings generated during job execution directly using SparkSQL or your favorite Spark DSL. Would you like to see the top 5 issues encountered during job execution with example source data and the line in your code that caused the problem? You can.
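The pattern behind these features can be pictured as an envelope that pairs each output row with the issues collected while producing it, so transformations never throw mid-job. The following plain-Scala sketch is illustrative only; the names Record, Issue, and parsePrice are assumptions for this example, not the actual spark-records API.

```scala
// Hypothetical sketch of the record-envelope idea; names are illustrative,
// not the spark-records API.
final case class Issue(category: String, message: String)

final case class Record[A](
  data: Option[A],   // the transformed row; absent if processing failed
  source: String,    // the raw input that produced this record
  issues: Seq[Issue] // errors/warnings captured during processing
) {
  def isFailure: Boolean = issues.exists(_.category == "error")
}

// A transformation that never throws: failures become failed records instead,
// keeping the offending input alongside the exception details.
def parsePrice(raw: String): Record[Double] =
  try Record(Some(raw.trim.toDouble), raw, Seq.empty)
  catch {
    case e: NumberFormatException =>
      Record(None, raw, Seq(Issue("error", e.getMessage)))
  }

val records = Seq("1.99", "oops", "3.50").map(parsePrice)
val (failed, succeeded) = records.partition(_.isFailure)
```

Because failed records carry their source input, reprocessing only the failed rows is a simple filter rather than a full re-run.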

Spark Records has been tested with petabyte-scale data at Swoop. The library was extracted out of Swoop's production systems to share with the Spark community.
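Because issues land in ordinary queryable columns, a "top issues" report reduces to a plain aggregation. As a rough plain-Scala analogue of what such a query does (with spark-records you would express the same thing over the records table in Spark SQL or the DataFrame DSL; LoggedIssue and its fields are assumptions for this sketch):

```scala
// Plain-Scala analogue of a "top N issues" aggregation; the real query would
// run over the records table in Spark SQL or the DataFrame DSL.
case class LoggedIssue(message: String, sourceRow: String)

val loggedIssues = Seq(
  LoggedIssue("malformed price", "row-17"),
  LoggedIssue("malformed price", "row-42"),
  LoggedIssue("missing id",      "row-08")
)

// Group by message, count occurrences, keep an example source row,
// and take the five most frequent issues.
val topIssues: Seq[(String, Int, String)] =
  loggedIssues
    .groupBy(_.message)
    .map { case (msg, group) => (msg, group.size, group.head.sourceRow) }
    .toSeq
    .sortBy { case (_, count, _) => -count }
    .take(5)
```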

See the documentation for more information or watch the Spark Summit talk (slides).

Installation

Add the following resolver and dependency to your SBT build:

resolvers += Resolver.bintrayRepo("swoop-inc", "maven")

libraryDependencies += "com.swoop" %% "spark-records" % "<version>"

You can find all released versions here.

Community

Contributions and feedback of any kind are welcome.

Spark Records is maintained by Sim Simeonov and the team at Swoop.

Special thanks to Reynold Xin and Michael Armbrust for many interesting conversations about better ways to use Spark.

Development

Build the docs microsite:

sbt "project docs" makeMicrosite

Run the docs microsite locally (from the target/site folder):

jekyll serve -b /spark-records

More details

License

spark-records is Copyright © 2017 Simeon Simeonov and Swoop, Inc. It is free software, and may be redistributed under the terms of the LICENSE.
