
saurfang / Sparksql Protobuf

License: Apache-2.0
Read SparkSQL parquet file as RDD[Protobuf]

Programming Languages

scala
5932 projects

Projects that are alternatives of or similar to Sparksql Protobuf

Gcs Tools
GCS support for avro-tools, parquet-tools and protobuf
Stars: ✭ 57 (-30.49%)
Mutual labels:  protobuf, parquet
Ratatool
A tool for data sampling, data generation, and data diffing
Stars: ✭ 279 (+240.24%)
Mutual labels:  protobuf, parquet
Sol2proto
Ethereum contract ABI to gRPC protobuf IDL transpiler
Stars: ✭ 41 (-50%)
Mutual labels:  protobuf
Grpcdump
A tool to capture and parse gRPC traffic
Stars: ✭ 75 (-8.54%)
Mutual labels:  protobuf
Grpcc
A gRPC cli interface for easy testing against gRPC servers
Stars: ✭ 1,078 (+1214.63%)
Mutual labels:  protobuf
Protocol Buffers Language Server
[WIP] Protocol Buffers Language Server
Stars: ✭ 44 (-46.34%)
Mutual labels:  protobuf
Rumble
⛈️ Rumble 1.11.0 "Banyan Tree"🌳 for Apache Spark | Run queries on your large-scale, messy JSON-like data (JSON, text, CSV, Parquet, ROOT, AVRO, SVM...) | No install required (just a jar to download) | Declarative Machine Learning and more
Stars: ✭ 58 (-29.27%)
Mutual labels:  parquet
Grpc Contract
A tool to generate the grpc server code for a contract
Stars: ✭ 40 (-51.22%)
Mutual labels:  protobuf
Easyrpc
EasyRpc is a simple, high-performance, easy-to-use RPC framework based on Netty, ZooKeeper and ProtoStuff.
Stars: ✭ 79 (-3.66%)
Mutual labels:  protobuf
Protoc Gen Twirp swagger
Swagger generator for twirp
Stars: ✭ 54 (-34.15%)
Mutual labels:  protobuf
Grpc Rust
Rust implementation of gRPC
Stars: ✭ 1,139 (+1289.02%)
Mutual labels:  protobuf
Proto Extractor
Program to extract protobufs compiled for C#
Stars: ✭ 49 (-40.24%)
Mutual labels:  protobuf
Mortgageblockchainfabric
Mortgage Processing App using Hyperledger Fabric Blockchain. Uses channels for privacy and access, and restricts read/write privileges through endorsement policies
Stars: ✭ 45 (-45.12%)
Mutual labels:  protobuf
Node Google Play Cli
command line tools using the node-google-play library
Stars: ✭ 58 (-29.27%)
Mutual labels:  protobuf
Quilt
Quilt is a self-organizing data hub for S3
Stars: ✭ 1,007 (+1128.05%)
Mutual labels:  parquet
Actix Protobuf
Protobuf integration for actix web
Stars: ✭ 75 (-8.54%)
Mutual labels:  protobuf
Lua Protobuf
A Lua module to work with Google protobuf
Stars: ✭ 1,002 (+1121.95%)
Mutual labels:  protobuf
Jabci
Java implementation of the Tendermint ABCI
Stars: ✭ 48 (-41.46%)
Mutual labels:  protobuf
Ue4protobuf
A protobuf source integration for UE4.
Stars: ✭ 80 (-2.44%)
Mutual labels:  protobuf
Curaengine
Powerful, fast and robust engine for converting 3D models into g-code instructions for 3D printers. It is part of the larger open source project Cura.
Stars: ✭ 1,207 (+1371.95%)
Mutual labels:  protobuf

sparksql-protobuf

This library provides utilities to work with Protobuf objects in SparkSQL. It provides a way to read Parquet files written by SparkSQL back as an RDD of compatible protobuf objects, and it can also convert an RDD of protobuf objects into a DataFrame.


For sbt 0.13.6+:

resolvers += Resolver.jcenterRepo

libraryDependencies ++= Seq(
    "com.github.saurfang" %% "sparksql-protobuf" % "0.1.3",
    "org.apache.parquet" % "parquet-protobuf" % "1.8.3"
)

Motivation

SparkSQL is very powerful and easy to use. However, it has a few limitations, and because the schema is only detected at runtime, developers are a lot less confident that they will get things right the first time. Static typing helps a lot! This is where protobuf comes in:

  1. Protobuf defines nested data structures easily
  2. It doesn't constrain you to the 22-field limit of case classes (no longer an issue once we upgrade to Scala 2.11+)
  3. It is language agnostic and generates code that gives you native objects, so you get all the benefits of type checking and code completion instead of operating on Row in Spark/SparkSQL (see the sketch below)
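
For illustration, a minimal sketch of what that buys you, assuming a protoc-generated Person class with a nested Address message (both class and field names here are hypothetical):

// Hypothetical protoc-generated classes: Person with a nested Address.
val person = Person.newBuilder()
  .setName("Jane")
  .setAddress(Address.newBuilder().setCity("Berlin").build())
  .build()

// Typed getters instead of row(0).asInstanceOf[String]: the compiler
// checks every field access, and IDEs can auto-complete them.
val city: String = person.getAddress.getCity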

Features

Read Parquet file as RDD[Protobuf]

val personsPB = new ProtoParquetRDD(sc, "persons.parquet", classOf[Person])

where we need the SparkContext, the path to the Parquet file, and the protobuf class.
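
A slightly fuller sketch of the read path (the import path and the Person class are assumptions; check the sources for the exact package):

import org.apache.spark.{SparkConf, SparkContext}
// assumed package for the ProtoParquetRDD class named in this README
import com.github.saurfang.parquet.proto.spark.ProtoParquetRDD

val sc = new SparkContext(new SparkConf().setAppName("proto-read").setMaster("local[*]"))

// Person is a protoc-generated class; persons.parquet was written by SparkSQL
val personsPB = new ProtoParquetRDD(sc, "persons.parquet", classOf[Person])
personsPB.map(_.getName).take(10).foreach(println)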

This converts the existing workflow:

  1. Ingest raw data as a DataFrame with a nested data structure
  2. Create awkward UDFs that perform runtime type checking
  3. Transform the raw DataFrame into a tabular DataFrame for data analytics using the above UDFs

to

  1. Ingest raw data as a DataFrame with a nested data structure and persist it as a Parquet file
  2. Read the Parquet file back as RDD[Protobuf]
  3. Perform any data transformation and extraction by working with compile-time typesafe Protobuf getters
  4. Create a DataFrame out of the above transformation and perform additional downstream data analytics on the tabular DataFrame (sketched below)
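
Sketched end to end (the Person class, the field names, and rawDF are hypothetical; the library calls are the ones shown in this README):

// 1. Persist the raw nested DataFrame as Parquet (plain SparkSQL)
rawDF.write.parquet("persons.parquet")

// 2. Read it back as typed protobuf objects
val persons = new ProtoParquetRDD(sc, "persons.parquet", classOf[Person])

// 3. Transform with compile-time-checked getters
val adults = persons.filter(_.getAge >= 18)

// 4. Back to a DataFrame for downstream analytics
import com.github.saurfang.parquet.proto.spark.sql._
val adultsDF = sqlContext.createDataFrame(adults)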

Infer SparkSQL Schema from Protobuf Definition

val personSchema = ProtoReflection.schemaFor[Person].dataType.asInstanceOf[StructType]
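
The cast above needs import org.apache.spark.sql.types.StructType. The result is an ordinary StructType, so it can be used anywhere SparkSQL accepts one, for example (a sketch; persons.json is hypothetical) to impose the protobuf-derived schema on semi-structured input instead of letting Spark infer one:

// parse JSON with the protobuf-derived schema rather than schema inference
val jsonPersons = sqlContext.read.schema(personSchema).json("persons.json")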

Convert RDD[Protobuf] to DataFrame

import com.github.saurfang.parquet.proto.spark.sql._
val personsDF = sqlContext.createDataFrame(protoPersons)
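
The result is an ordinary DataFrame, so the usual API applies, e.g. (Spark 1.x style, matching the sqlContext usage above; the column names follow the hypothetical Person fields):

personsDF.registerTempTable("persons")
sqlContext.sql("SELECT name FROM persons WHERE age >= 18").show()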

For more information, please see the test cases.

Under the hood

  1. ProtoMessageConverter has been improved to read from the LIST specification according to the latest Parquet documentation. This implementation should be backward compatible and is able to read repeated fields generated by writers like SparkSQL.
  2. ProtoMessageParquetInputFormat helps the above process by correctly returning the built protobuf object as the value.
  3. ProtoParquetRDD abstracts away the Hadoop input format and returns an RDD of your protobuf objects from Parquet files directly.
  4. ProtoReflection infers the SparkSQL schema from any Protobuf message class.
  5. ProtoRDDConversions converts Protobuf objects into SparkSQL rows.

Related Work

Elephant Bird

SPARK-9999
