
spoddutur / Spark Streaming Monitoring With Lightning

Plot live-stats as graph from ApacheSpark application using Lightning-viz


Projects that are alternatives of or similar to Spark Streaming Monitoring With Lightning

Azure Event Hubs Spark
Enabling Continuous Data Processing with Apache Spark and Azure Event Hubs
Stars: ✭ 140 (+833.33%)
Mutual labels:  bigdata, apache-spark, spark-streaming
Mobius
C# and F# language binding and extensions to Apache Spark
Stars: ✭ 929 (+6093.33%)
Mutual labels:  bigdata, apache-spark, spark-streaming
Spark
.NET for Apache® Spark™ makes Apache Spark™ easily accessible to .NET developers.
Stars: ✭ 1,721 (+11373.33%)
Mutual labels:  bigdata, apache-spark, spark-streaming
qs-hadoop
Learning the big data ecosystem
Stars: ✭ 18 (+20%)
Mutual labels:  bigdata, spark-streaming
Sparkrdma
RDMA accelerated, high-performance, scalable and efficient ShuffleManager plugin for Apache Spark
Stars: ✭ 215 (+1333.33%)
Mutual labels:  bigdata, apache-spark
bigdatatutorial
bigdatatutorial
Stars: ✭ 34 (+126.67%)
Mutual labels:  bigdata, spark-streaming
Data Accelerator
Data Accelerator for Apache Spark simplifies onboarding to Streaming of Big Data. It offers a rich, easy to use experience to help with creation, editing and management of Spark jobs on Azure HDInsights or Databricks while enabling the full power of the Spark engine.
Stars: ✭ 247 (+1546.67%)
Mutual labels:  apache-spark, spark-streaming
SparkTwitterAnalysis
An Apache Spark standalone application using the Spark API in Scala. The application uses Simple Build Tool(SBT) for building the project.
Stars: ✭ 29 (+93.33%)
Mutual labels:  apache-spark, bigdata
gan deeplearning4j
Automatic feature engineering using Generative Adversarial Networks using Deeplearning4j and Apache Spark.
Stars: ✭ 19 (+26.67%)
Mutual labels:  apache-spark, bigdata
spark-utils
Basic framework utilities to quickly start writing production ready Apache Spark applications
Stars: ✭ 25 (+66.67%)
Mutual labels:  apache-spark, spark-streaming
leaflet heatmap
A simple visualization of Huzhou call data. Assuming the data volume is too large to render a heatmap directly in the browser, the heatmap rendering step is moved offline for computation and analysis. The data is first processed in parallel with Apache Spark, the heatmap is then also rendered with Apache Spark, and leafletjs loads the OpenStreetMap layer and the heatmap layer for good interactivity. With the current Apache Spark implementation of the rendering, perhaps because Spark is not well suited to this kind of computation or because of the algorithm design, the parallel computation is slower than a single machine. The Apache Spark heatmap rendering and computation code is here: https://github.com/yuanzhaokang/ParallelizeHeatmap.git .
Stars: ✭ 13 (-13.33%)
Mutual labels:  apache-spark, bigdata
Spark-and-Kafka IoT-Data-Processing-and-Analytics
Final Project for IoT: Big Data Processing and Analytics class. Analyzing U.S nationwide temperature from IoT sensors in real-time
Stars: ✭ 42 (+180%)
Mutual labels:  bigdata, spark-streaming
Clearly
Clearly see and debug your celery cluster in real time!
Stars: ✭ 287 (+1813.33%)
Mutual labels:  realtime, monitoring-tool
ExDeMon
A general purpose metrics monitor implemented with Apache Spark. Kafka source, Elastic sink, aggregate metrics, different analysis, notifications, actions, live configuration update, missing metrics, ...
Stars: ✭ 19 (+26.67%)
Mutual labels:  spark-streaming, monitoring-tool
Splash
Splash, a flexible Spark shuffle manager that supports user-defined storage backends for shuffle data storage and exchange
Stars: ✭ 105 (+600%)
Mutual labels:  bigdata, apache-spark
coolplayflink
Flink: Stateful Computations over Data Streams
Stars: ✭ 14 (-6.67%)
Mutual labels:  bigdata, realtime
Spark States
Custom state store providers for Apache Spark
Stars: ✭ 83 (+453.33%)
Mutual labels:  apache-spark, spark-streaming
Bigdata Playground
A complete example of a big data application using : Kubernetes (kops/aws), Apache Spark SQL/Streaming/MLib, Apache Flink, Scala, Python, Apache Kafka, Apache Hbase, Apache Parquet, Apache Avro, Apache Storm, Twitter Api, MongoDB, NodeJS, Angular, GraphQL
Stars: ✭ 177 (+1080%)
Mutual labels:  apache-spark, spark-streaming
SparkProgrammingInScala
Apache Spark Course Material
Stars: ✭ 57 (+280%)
Mutual labels:  apache-spark, bigdata
Coolplayspark
CoolplaySpark: Spark source code analysis, Spark libraries, and more
Stars: ✭ 3,318 (+22020%)
Mutual labels:  apache-spark, spark-streaming

spark-streaming-monitoring-with-lightning

1. Background:

Apache Spark 2.x streaming applications that use Datasets no longer get the Streaming tab in the Spark UI. This project shows how to build a real-time graph monitoring system using Lightning-viz, where we can plot and monitor any custom param we need.

1.1 Architecture:

There are 3 main components in this project, shown in the architecture diagram below:

  1. Spark application: receives streaming data from a socket stream and does a simple word count.
  2. Lightning server: plots live-stats of any custom params the user wants to monitor in the Spark application in real time.
  3. StreamingListener: a custom streaming listener registered with the application to post live-stats to the Lightning server (a minimal sketch is shown below).
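
The listener piece can be a standard org.apache.spark.streaming.scheduler.StreamingListener. The sketch below is illustrative only (the class name and the postToLightning helper are hypothetical, not the project's actual code); it shows where the per-batch numbers come from:

import org.apache.spark.streaming.scheduler.{StreamingListener, StreamingListenerBatchCompleted}

// Illustrative listener that pushes per-batch stats to a Lightning server.
class LightningStreamingListener(lightningServerUrl: String) extends StreamingListener {

  override def onBatchCompleted(batchCompleted: StreamingListenerBatchCompleted): Unit = {
    val info = batchCompleted.batchInfo
    val numRecords       = info.numRecords                     // records received in this batch
    val processingTimeMs = info.processingDelay.getOrElse(0L)  // time taken to process this batch

    // Append one data point to each live line chart on the Lightning server.
    // postToLightning is a hypothetical helper standing in for an HTTP POST
    // against the Lightning REST API (default http://localhost:3000).
    postToLightning(lightningServerUrl, "records-per-batch", numRecords)
    postToLightning(lightningServerUrl, "processing-time-ms", processingTimeMs)
  }

  private def postToLightning(url: String, series: String, value: Long): Unit = {
    // e.g. issue an HTTP POST that appends `value` to the named line-chart session
  }
}

The listener is attached to the StreamingContext with ssc.addStreamingListener(new LightningStreamingListener(url)).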

2. Running Example

The following picture depicts a side-by-side view of the Spark metrics page and the corresponding live graphs of processing time per batch and number of records per batch.

3. Building

This project uses Maven, Scala 2.11, Spark 2.x and Java 1.8.

$ mvn clean install

4. Pre-execution

4.1. Lightning Graph Server

First of all, the application depends on a Lightning graph server. The default server URL is http://localhost:3000. You can deploy or install one on your machine; the good part is that installation is very simple (practically a one-click process).
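
If you do not already have a server running, one common way to get one (assuming the standard Lightning-viz Node.js distribution; check the Lightning docs for your environment) is via npm:

$ npm install -g lightning-server
$ lightning-server

By default it listens on port 3000, which matches the default lightningServerUrl used by this application.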

5. Execution

Once the Lightning server is up and running, we can start our Spark application in either of the 2 ways listed below:

  1. standalone jar
$ scala -extdirs "$SPARK_HOME/lib" <path-to-spark-streaming-monitoring-with-lightning.jar> --master <master> <cmd-line-args>
  2. spark-submit
$ spark-submit --master <master> <path-to-spark-streaming-monitoring-with-lightning.jar> <cmd-line-args>

The default value for master is local[2].
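
For example, a local run might look like the following (the jar path is illustrative; the socket source the application reads from must be started first, e.g. with netcat, on the configured host and port):

$ nc -lk 9999
$ spark-submit --master local[2] target/realtime-spark-monitoring-with-lightning-1.0.jar --socketStreamHost localhost --socketStreamPort 9999 --lightningServerUrl http://localhost:3000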

5.1 cmd-line-args

Optionally, you can provide configuration params, such as the Lightning server URL, from the command line. To see the list of configurable params, just type:

$ spark-submit <path-to-spark-streaming-monitoring-with-lightning.jar> --help
OR
$ scala -extdirs "$SPARK_HOME/lib" <path-to-spark-streaming-monitoring-with-lightning.jar> -h

The help content will look something like this:

This is a Spark Streaming application which receives data from a SocketStream and does word count.
You can monitor batch size and batch processing time via a real-time graph rendered using the
Lightning graph server. So, this application needs the lightningServerUrl and the SocketStream
host and port to listen to.
Usage: spark-submit realtime-spark-monitoring-with-lightning*.jar [options]

  Options:
  -h, --help
  -m, --master <master_url>                    spark://host:port, mesos://host:port, yarn, or local.
  -n, --name <name>                            A name of your application.
  -ssh, --socketStreamHost <hostname>          Default: localhost
  -ssp, --socketStreamPort <port>              Default: 9999
  -bi, --batchInterval <batch interval in ms>  Default: 5
  -ls, --lightningServerUrl <hostname>         Default: http://localhost:3000

5.2. File configuration

Default values for all the options available from the command line are also present in a configuration file, so you can tweak the file directly instead of passing the options on every run/submit. You can find the config file at /src/main/resources/dev/application.properties. The following params are listed in the file:

...
sparkMaster=local[2]
socketStreamPort=9999
socketStreamHost=localhost
appName=sparkmonitoring-with-lightning
batchInterval=5
lightningServerUrl=http://localhost:3000
...
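
To show how these properties map onto the Spark Streaming setup, here is a minimal driver sketch (not the project's actual source; the values are the defaults above, and the batch interval is assumed to be in seconds):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkMonitoringApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("local[2]")                          // sparkMaster
      .setAppName("sparkmonitoring-with-lightning")   // appName
    val ssc = new StreamingContext(conf, Seconds(5))  // batchInterval

    // Register the listener (section 1.1) that posts per-batch stats to the Lightning server
    ssc.addStreamingListener(new LightningStreamingListener("http://localhost:3000"))

    // Simple word count over the socket stream (socketStreamHost / socketStreamPort)
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.flatMap(_.split("\\s+"))
         .map(word => (word, 1))
         .reduceByKey(_ + _)
         .print()

    ssc.start()
    ssc.awaitTermination()
  }
}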
