
henridf / Apache Spark Node

Licence: apache-2.0
Node.js bindings for Apache Spark DataFrame APIs

Programming Languages

javascript
184084 projects - #8 most used programming language

Projects that are alternatives of or similar to Apache Spark Node

Spark Excel
A Spark plugin for reading Excel files via Apache POI
Stars: ✭ 216 (+58.82%)
Mutual labels:  spark, data-frame
Pointblank
Data validation and organization of metadata for data frames and database tables
Stars: ✭ 480 (+252.94%)
Mutual labels:  spark, data-frame
Spark Bigquery
Google BigQuery support for Spark, Structured Streaming, SQL, and DataFrames with easy Databricks integration.
Stars: ✭ 65 (-52.21%)
Mutual labels:  spark, data-frame
Spark Infotheoretic Feature Selection
This package contains a generic implementation of greedy Information Theoretic Feature Selection (FS) methods. The implementation is based on the common theoretic framework presented by Gavin Brown. Implementations of mRMR, InfoGain, JMI and other commonly used FS filters are provided.
Stars: ✭ 123 (-9.56%)
Mutual labels:  spark
Scala Samples
Pieces of Scala code that explain Scala syntax and related things - like what you can do with all of it
Stars: ✭ 125 (-8.09%)
Mutual labels:  spark
Airflow Pipeline
An Airflow docker image preconfigured to work well with Spark and Hadoop/EMR
Stars: ✭ 128 (-5.88%)
Mutual labels:  spark
Aliyun Emapreduce Datasources
Extended datasource support for Spark/Hadoop on Aliyun E-MapReduce.
Stars: ✭ 132 (-2.94%)
Mutual labels:  spark
Deequ
Deequ is a library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets.
Stars: ✭ 2,020 (+1385.29%)
Mutual labels:  spark
Opaque
An encrypted data analytics platform
Stars: ✭ 129 (-5.15%)
Mutual labels:  spark
Spring Boot Quick
🌿 Quick learning examples based on Spring Boot, integrating open-source frameworks the author has encountered, such as RabbitMQ (delayed queues), Kafka, JPA, Redis, OAuth2, Swagger, JSP, Docker, spring-batch, exception handling, log output, multi-module development, multi-environment packaging, caching, web crawling, JWT, GraphQL, Dubbo, ZooKeeper, Async, and more 📌
Stars: ✭ 1,819 (+1237.5%)
Mutual labels:  spark
Openuba
A robust and flexible open source User & Entity Behavior Analytics (UEBA) framework used for Security Analytics. Developed with luv by Data Scientists & Security Analysts from the Cyber Security Industry. [PRE-ALPHA]
Stars: ✭ 127 (-6.62%)
Mutual labels:  spark
Spark Bigquery Connector
BigQuery data source for Apache Spark: Read data from BigQuery into DataFrames, write DataFrames into BigQuery tables.
Stars: ✭ 126 (-7.35%)
Mutual labels:  spark
Gaffer
A large-scale entity and relation database supporting aggregation of properties
Stars: ✭ 1,642 (+1107.35%)
Mutual labels:  spark
Gdeltpyr
Python-based framework to retrieve Global Database of Events, Language, and Tone (GDELT) version 1.0 and version 2.0 data.
Stars: ✭ 124 (-8.82%)
Mutual labels:  data-frame
Abris
Avro SerDe for Apache Spark structured APIs.
Stars: ✭ 130 (-4.41%)
Mutual labels:  spark
Spark Alchemy
Collection of open-source Spark tools & frameworks that have made the data engineering and data science teams at Swoop highly productive
Stars: ✭ 122 (-10.29%)
Mutual labels:  spark
Spylon Kernel
Jupyter kernel for scala and spark
Stars: ✭ 129 (-5.15%)
Mutual labels:  spark
Lift
The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large scale machine learning workflows.
Stars: ✭ 127 (-6.62%)
Mutual labels:  spark
Cape Python
Collaborate on privacy-preserving policy for data science projects in Pandas and Apache Spark
Stars: ✭ 125 (-8.09%)
Mutual labels:  spark
Feast
Feature Store for Machine Learning
Stars: ✭ 2,576 (+1794.12%)
Mutual labels:  spark

This repository is no longer under development. Everything here should continue to work (with appropriate Spark and node versions), but the repo will not be further developed or maintained. I will still try to review any PRs. If anyone has any interest in taking this over, please contact @henridf.

Apache Spark <=> Node.js

Node.js bindings for Apache Spark DataFrame APIs.

API Docs

API documentation is here.

Status

This project is usable in its present form, but it is still at an early stage, and APIs may change.

Notably not yet implemented are:

  • support for user-defined functions
  • JVM-side helpers for functions/methods that cannot currently be called from Node (for example, because they take parameter types such as Seq)

Getting started

Requirements

  • Linux or OS X (on Windows, there are currently problems building node add-ons)
  • Node.js, version 4+
  • Java 8
  • Spark >= 1.5. You'll need the Spark assembly JAR, which contains all of the Spark classes. If you don't have an existing installation, the easiest option is to get the binaries from the Spark downloads page (choose "pre-built for Hadoop 2.6 and later"). Alternatively, you can download the Spark sources and build them yourself. More information here.

Installing

From NPM

$ npm install apache-spark-node

From source

Clone the git repo, then:

$ npm install
$ npm run compile

Running

Set ASSEMBLY_JAR to the location of your assembly JAR and run spark-node from the directory where you issued npm install apache-spark-node:

ASSEMBLY_JAR=/path/to/spark-assembly-1.6.0-SNAPSHOT-hadoop2.2.0.jar node_modules/apache-spark-node/bin/spark-node
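
Alternatively, you can export the variable once per shell session and then invoke spark-node on its own (the JAR path below is illustrative):

$ export ASSEMBLY_JAR=/path/to/spark-assembly-1.6.0-SNAPSHOT-hadoop2.2.0.jar
$ node_modules/apache-spark-node/bin/spark-node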

Docker

If you want to play with spark-node but don't want to download the dependencies or build from source, you can run it in Docker.

$ docker run -it henridf/spark-node

This will take you to the normal spark-node shell. Optionally, you can map host volumes to use files on your host system with spark-node. For example

$ docker run -v /var/data:/data -it henridf/spark-node

will map the host's /var/data directory to /data within the Docker container. This means that you can use

$ var df = sqlContext.read().jsonSync("/data/people.json")

to load a file at /var/data/people.json on the host system.

Usage

(Note: This section is a quick overview of the available APIs in spark-node; it is not a general introduction to Spark or to DataFrames.)

Start the spark-node shell (this assumes you've set ASSEMBLY_JAR as an environment variable):

$ ./bin/spark-node

A sqlContext global object is available in the shell. Its functions are used to create DataFrames, register DataFrames as tables, execute SQL over tables, cache tables, and read parquet files.
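
For instance, a Parquet file can be loaded through the same reader used for JSON below. This is a sketch: the parquetSync method name is an assumption based on the jsonSync naming convention, and the path is hypothetical:

$ var pdf = sqlContext.read().parquetSync("./data/people.parquet") // parquetSync name assumed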

To see available command-line options, do ./bin/spark-node --help.

Creating a DataFrame

Load a dataframe from a json file:

$ var df = sqlContext.read().jsonSync("./data/people.json")
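
Spark's JSON reader expects one object per line, so a people.json matching the output shown below would contain:

{"name":"Michael"}
{"name":"Andy", "age":30}
{"name":"Justin", "age":19}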

Load a dataframe from a list of javascript objects:

$ var df = sqlContext.createDataFrame([{"name":"Michael"}, {"name":"Andy", "age":30}, {"name":"Justin", "age": 19}])

Pretty-print dataframe contents to stdout:

$ df.show()
+----+-------+
| age|   name|
+----+-------+
|null|Michael|
|  30|   Andy|
|  19| Justin|
+----+-------+

DataFrame Operations

Print the dataframe's schema in a tree format:

$ df.printSchema()
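
For the people dataframe above, this prints something like:

root
 |-- age: long (nullable = true)
 |-- name: string (nullable = true)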

Select only the "name" column:

$ df.select(df.col("name")).show()

or the shorter (equivalent) version:

$ df.select("name").show()

Collect the result (as an array of rows) and assign it to a javascript variable:

$ var res = df.select("name").collectSync()
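
The collected result is an ordinary JavaScript value, so it can be inspected with standard Node facilities (the exact row representation depends on the binding):

$ console.log(res)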

Select everybody and increment age by 1:

$ df.select(df.col("name"), df.col("age").plus(1)).show()

Select people older than 21:

$ df.filter(df.col("age").gt(21)).show()

Count people by age:

$ df.groupBy("age").count().show()

Dataframe functions

A sqlFunctions global object is available in the shell. It contains a variety of built-in functions for operating on dataframes.

For example, to find the minimum and average of "age" across all rows:

$ var F = sqlFunctions;

$ df.agg(F.min(df.col("age")), F.avg(df.col("age"))).show()
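
As with any other column expression, an aggregate can be renamed with as (the "youngest" alias here is just an illustration):

$ df.agg(F.min(df.col("age")).as("youngest")).show()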

Running SQL Queries Programmatically

Register df as a table named people:

$ df.registerTempTable("people")

Run a SQL query:

$ var teens = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
$ teens.show()
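
The result of sql() is an ordinary dataframe, so it can be collected back into JavaScript like any other:

$ var names = teens.collectSync()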

Examples

Word count (aka 'big data hello world')

Create dataframe from text file:

$ var lines = sqlContext.read().textSync("data/words.txt");

(Note: support for the "text" format was added in Spark 1.6).

Split strings into arrays:

$ var F = sqlFunctions;
$ var splits = lines.select(F.split(lines.col("value"), " ").as("words"));

Explode the arrays into individual rows:

$ var occurrences = splits.select(F.explode(splits.col("words")).as("word"));

We now have a dataframe with one row per word occurrence. So we group and count occurrences of the same word and we're done:

$ var counts = occurrences.groupBy("word").count()

$ counts.where("count>10").sort(counts.col("count")).show()
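
To list the most frequent words first, sort in descending order instead; this assumes the underlying Column desc() method is proxied the same way as gt and plus above:

$ counts.where("count>10").sort(counts.col("count").desc()).show() // desc() assumed proxied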

Running spark-node against a standalone cluster

When you run bin/spark-node without passing a --master argument, the spark-node process runs a Spark worker in the same process. To run the spark-node shell against a cluster, use the --master argument. Here's an example.

On the host that will act as the master, do the following:

$ cd path/to/spark/distribution
$ ./sbin/start-master.sh

Navigate to http://hostname:8080 and get the Spark URL (top line), which will be something like spark://master_hostname:7077. Then start any number of slaves on your cluster hosts by running ./sbin/start-slave.sh <spark_url>.

Then on your client machine:

$ cd path/to/apache-spark-node
$ ./bin/spark-node --master <spark_url>

If you return to the master Web UI (http://hostname:8080), you should now see an application named "spark-node shell" under "Running Applications". Following that link takes you to the Web UI of the spark-node shell itself.

Misc notes

This was done under the self-imposed constraint of not modifying the Spark sources. This results in hacks like the NodeSparkSubmit Scala class, which is a workaround for the fact that we can't add explicit awareness of this shell to SparkSubmit.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].