
apssouza22 / Lambda Arch

Licence: apache-2.0
Applying Lambda Architecture with Spark, Kafka, and Cassandra.

Programming Languages

java
68154 projects - #9 most used programming language

Projects that are alternatives to or similar to Lambda Arch

Bigdataguide
Learn big data from scratch: learning videos for every stage of a big data curriculum, plus interview materials
Stars: ✭ 817 (+636.04%)
Mutual labels:  spark, bigdata
Optimus
🚚 Agile Data Preparation Workflows made easy with dask, cudf, dask_cudf and pyspark
Stars: ✭ 986 (+788.29%)
Mutual labels:  spark, bigdata
Mobius
C# and F# language binding and extensions to Apache Spark
Stars: ✭ 929 (+736.94%)
Mutual labels:  spark, bigdata
Bigdataie
A curated collection of big data blogs, written-test questions, tutorials, projects, and interview write-ups
Stars: ✭ 445 (+300.9%)
Mutual labels:  spark, bigdata
Spark Py Notebooks
Apache Spark & Python (pySpark) tutorials for Big Data Analysis and Machine Learning as IPython / Jupyter notebooks
Stars: ✭ 1,338 (+1105.41%)
Mutual labels:  spark, bigdata
Spark Movie Lens
An on-line movie recommender using Spark, Python Flask, and the MovieLens dataset
Stars: ✭ 745 (+571.17%)
Mutual labels:  spark, bigdata
Sparktutorial
Source code for James Lee's Apache Spark with Java course
Stars: ✭ 105 (-5.41%)
Mutual labels:  spark, bigdata
Spline
Data Lineage Tracking And Visualization Solution
Stars: ✭ 306 (+175.68%)
Mutual labels:  spark, bigdata
Cleanframes
type-class based data cleansing library for Apache Spark SQL
Stars: ✭ 75 (-32.43%)
Mutual labels:  spark, bigdata
Apache Spark Hands On
Educational notes and hands-on problems with solutions for the Hadoop ecosystem
Stars: ✭ 74 (-33.33%)
Mutual labels:  spark, bigdata
God Of Bigdata
Focused on big data learning and interviews; the road to big data mastery starts here. Flink/Spark/Hadoop/Hbase/Hive...
Stars: ✭ 6,008 (+5312.61%)
Mutual labels:  spark, bigdata
Bigdata Notebook
Stars: ✭ 100 (-9.91%)
Mutual labels:  spark, bigdata
Big data architect skills
The skills a big data architect should master
Stars: ✭ 400 (+260.36%)
Mutual labels:  spark, bigdata
Coding Now
Study notes, along with eBooks, video resources, and a personal collection of blogs, websites, and tools considered worthwhile. Covers the major big data components, Python machine learning and data analysis, Linux, operating systems, algorithms, networking, and more
Stars: ✭ 750 (+575.68%)
Mutual labels:  spark, bigdata
Sidekick
High Performance HTTP Sidecar Load Balancer
Stars: ✭ 366 (+229.73%)
Mutual labels:  spark, bigdata
Bigdata Interview
🎯 🌟 [Big data interview questions] Big-data interview questions collected from around the web, together with the author's own answer summaries. Currently covers the Hadoop/Hive/Spark/Flink/Hbase/Kafka/Zookeeper frameworks
Stars: ✭ 857 (+672.07%)
Mutual labels:  spark, bigdata
Big Data Rosetta Code
Code snippets for solving common big data problems in various platforms. Inspired by Rosetta Code
Stars: ✭ 254 (+128.83%)
Mutual labels:  spark, bigdata
Docker Spark Cluster
A simple Spark standalone cluster for your testing environment purposes
Stars: ✭ 261 (+135.14%)
Mutual labels:  spark, bigdata
Big Data Engineering Coursera Yandex
Big Data for Data Engineers Coursera Specialization from Yandex
Stars: ✭ 71 (-36.04%)
Mutual labels:  spark, bigdata
Bigdata Notes
A getting-started guide to big data ⭐
Stars: ✭ 10,991 (+9801.8%)
Mutual labels:  spark, bigdata

Lambda architecture

(Lambda architecture diagram)

Read about the project here

Watch the videos demonstrating the project here

Our Lambda project receives real-time IoT data events from connected vehicles, which are ingested into Spark through Kafka. Using the Spark Streaming API, we process and analyse the IoT data events and transform them into vehicle information, while the raw data is simultaneously stored in HDFS for batch processing. We perform a series of stateless and stateful transformations on the streams with the Spark Streaming API and persist the results to Cassandra tables. To obtain accurate views, we also run a batch job that generates a batch view in Cassandra. We developed a responsive web traffic-monitoring dashboard using Spring Boot, SockJS, and Bootstrap, which reads the views from the Cassandra database and pushes them to the UI over WebSocket.
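The speed-layer flow described above can be pictured with a minimal, illustrative sketch (not the project's actual StreamingProcessor). It assumes a plain CSV payload, a hypothetical TrafficEvent bean, and hypothetical column names, and uses the spark-streaming-kafka-0-10 and spark-cassandra-connector Java APIs:

    // Illustrative only: class, bean, and column names below are assumptions, not the project's real ones.
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaDStream;
    import org.apache.spark.streaming.api.java.JavaInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka010.ConsumerStrategies;
    import org.apache.spark.streaming.kafka010.KafkaUtils;
    import org.apache.spark.streaming.kafka010.LocationStrategies;

    import java.io.Serializable;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;
    import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapToRow;

    public class StreamingSketch {

        // Hypothetical event bean; the real project maps events to its own DTO classes
        public static class TrafficEvent implements Serializable {
            private String routeId;
            private String vehicleType;
            public TrafficEvent() { }
            public TrafficEvent(String routeId, String vehicleType) {
                this.routeId = routeId;
                this.vehicleType = vehicleType;
            }
            public String getRouteId() { return routeId; }
            public String getVehicleType() { return vehicleType; }
        }

        public static void main(String[] args) throws InterruptedException {
            SparkConf conf = new SparkConf()
                    .setAppName("iot-streaming-sketch")
                    .set("spark.cassandra.connection.host", "cassandra");
            JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(5));

            Map<String, Object> kafkaParams = new HashMap<>();
            kafkaParams.put("bootstrap.servers", "kafka:9092");
            kafkaParams.put("key.deserializer", StringDeserializer.class);
            kafkaParams.put("value.deserializer", StringDeserializer.class);
            kafkaParams.put("group.id", "iot-sketch");

            // Ingest the IoT events published on the iot-data-event topic
            JavaInputDStream<ConsumerRecord<String, String>> stream = KafkaUtils.createDirectStream(
                    ssc,
                    LocationStrategies.PreferConsistent(),
                    ConsumerStrategies.<String, String>Subscribe(
                            Collections.singletonList("iot-data-event"), kafkaParams));

            // Stateless transformation: parse each raw "routeId,vehicleType" record into a bean
            JavaDStream<TrafficEvent> events = stream.map(record -> {
                String[] parts = record.value().split(",");
                return new TrafficEvent(parts[0], parts[1]);
            });

            // Persist every micro-batch to Cassandra (table and columns are assumed here)
            events.foreachRDD(rdd -> javaFunctions(rdd)
                    .writerBuilder("traffickeyspace", "total_traffic", mapToRow(TrafficEvent.class))
                    .saveToCassandra());

            ssc.start();
            ssc.awaitTermination();
        }
    }

In the project itself, the equivalent logic lives in com.apssouza.iot.processor.StreamingProcessor, packaged as iot-spark-processor-1.0.0.jar and submitted with spark-submit (see "How to use" below).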

All components are dynamically managed with Docker, so you don't need to worry about setting up your local environment; the only thing you need is to have Docker installed.

System stack:

  • Java 8
  • Maven
  • ZooKeeper
  • Kafka
  • Cassandra
  • Spark
  • Docker
  • HDFS

The streaming part of the project is based on the iot-traffic-project from InfoQ.

How to use

  • Set KAFKA_ADVERTISED_LISTENERS to your IP in docker-compose.yml
  • mvn package
  • docker-compose -p lambda up
  • Wait for all services to be up and running, then...
  • ./project-orchestrate.sh
  • Run the real-time job: docker exec spark-master /spark/bin/spark-submit --class com.apssouza.iot.processor.StreamingProcessor --master spark://localhost:7077 /opt/spark-data/iot-spark-processor-1.0.0.jar
  • Run the traffic producer: java -jar iot-kafka-producer/target/iot-kafka-producer-1.0.0.jar
  • Run the service layer (web app): java -jar iot-springboot-dashboard/target/iot-springboot-dashboard-1.0.0.jar
  • Access the dashboard with the data: http://localhost:3000/
  • Run the batch job: docker exec spark-master /spark/bin/spark-submit --class BatchProcessor --master spark://localhost:7077 /opt/spark-data/iot-spark-processor-1.0.0.jar (a rough sketch of what this batch layer does follows this list)
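For reference, here is a rough sketch of what a batch layer like the BatchProcessor step above does: re-read the events archived in HDFS and recompute an accurate view in Cassandra. The HDFS path, table, and column names are assumptions for illustration, not the project's actual ones.

    // Illustrative batch-layer sketch; paths and table names are assumptions.
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class BatchSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("iot-batch-sketch")
                    .config("spark.cassandra.connection.host", "cassandra")
                    .getOrCreate();

            // Full history of raw events previously archived to HDFS by the streaming job
            Dataset<Row> events = spark.read().parquet("hdfs://namenode:8020/iot-data/");

            // Recompute total traffic per route and vehicle type from scratch
            Dataset<Row> batchView = events
                    .groupBy("routeId", "vehicleType")
                    .count()
                    .withColumnRenamed("count", "totalCount");

            // Write the recomputed batch view to Cassandra via the Spark Cassandra connector
            batchView.write()
                    .format("org.apache.spark.sql.cassandra")
                    .option("keyspace", "traffickeyspace")
                    .option("table", "total_traffic_batch")
                    .mode("append")
                    .save();

            spark.stop();
        }
    }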

Miscellaneous

Spark

  • spark-submit --class StreamingProcessor --packages org.apache.kafka:kafka-clients:0.10.2.2 --master spark://spark-master:7077 /opt/spark-data/iot-spark-processor-1.0.0.jar
  • Add spark-master to /etc/hosts pointing to localhost
  • The spark-submit binary inside the container is located at /spark/bin/spark-submit

Submit a job to master

  • mvn package
  • spark-submit --class com.apssouza.iot.processor.StreamingProcessor --master spark://spark-master:7077 iot-spark-processor/target/iot-spark-processor-1.0.0.jar

GUI

  • Master: http://localhost:8080
  • Slave: http://localhost:8081

HDFS

Commands: https://hortonworks.com/tutorial/manage-files-on-hdfs-via-cli-ambari-files-view/section/1/

Open a file - http://localhost:50070/webhdfs/v1/path/to/file/file.csv?op=open

Web file handle - https://hadoop.apache.org/docs/r1.0.4/webhdfs.html
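As a quick illustration of the OPEN operation above, here is a minimal Java sketch that streams the same (placeholder) file over WebHDFS; it simply follows the redirect the NameNode issues to a DataNode:

    // Illustrative only: reuses the placeholder path from the URL above.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class WebHdfsOpen {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://localhost:50070/webhdfs/v1/path/to/file/file.csv?op=OPEN");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setInstanceFollowRedirects(true); // the NameNode redirects OPEN to a DataNode
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }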

Commands:

GUI

  • NameNode UI: http://localhost:50070
  • DataNode UI: http://localhost:50075

Kafka

  • kafka-topics --create --topic iot-data-event --partitions 1 --replication-factor 1 --if-not-exists --zookeeper zookeeper:2181
  • kafka-console-producer --request-required-acks 1 --broker-list kafka:9092 --topic iot-data-event
  • kafka-console-consumer --bootstrap-server kafka:9092 --topic iot-data-event
  • kafka-topics --list --zookeeper zookeeper:2181
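For context, here is a bare-bones Java producer that publishes one event to the iot-data-event topic, roughly what the iot-kafka-producer module automates; the JSON payload is made up for illustration:

    // Illustrative producer; the real iot-kafka-producer serializes the project's own event objects.
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    import java.util.Properties;

    public class ProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            // try-with-resources closes (and flushes) the producer on exit
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                String event = "{\"vehicleId\":\"v-1\",\"vehicleType\":\"Truck\",\"routeId\":\"Route-37\",\"speed\":45}";
                producer.send(new ProducerRecord<>("iot-data-event", event));
            }
        }
    }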

Cassandra

  • Log in: cqlsh --username cassandra --password cassandra
  • Access the keyspace: use TrafficKeySpace;
  • List data: SELECT * FROM TrafficKeySpace.Total_Traffic;
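The same view can also be read programmatically. Here is a short, purely illustrative sketch using the DataStax Java driver 3.x (the Spring Boot dashboard reads the views from Cassandra in its own way):

    // Illustrative read using the DataStax Java driver 3.x; credentials match the cqlsh login above.
    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    public class CassandraReadSketch {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder()
                    .addContactPoint("localhost")
                    .withCredentials("cassandra", "cassandra")
                    .build();
                 Session session = cluster.connect("TrafficKeySpace")) {
                ResultSet rows = session.execute("SELECT * FROM Total_Traffic");
                for (Row row : rows) {
                    System.out.println(row);
                }
            }
        }
    }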

That's all. Leave a star if this project has helped you!
