
Chabane / Bigdata Playground

License: Apache-2.0
A complete example of a big data application using: Kubernetes (kops/AWS), Apache Spark SQL/Streaming/MLlib, Apache Flink, Scala, Python, Apache Kafka, Apache HBase, Apache Parquet, Apache Avro, Apache Storm, the Twitter API, MongoDB, Node.js, Angular, and GraphQL

Programming Languages

Python
139335 projects - #7 most used programming language
TypeScript
32286 projects
Scala
5932 projects

Projects that are alternatives of or similar to Bigdata Playground

Eel Sdk
Big Data Toolkit for the JVM
Stars: ✭ 140 (-20.9%)
Mutual labels:  kafka, big-data, hadoop, parquet
Gimel
Big Data Processing Framework - Unified Data API or SQL on Any Storage
Stars: ✭ 216 (+22.03%)
Mutual labels:  kafka, big-data, spark-streaming, hbase
Bigdata Notes
A getting-started guide to big data ⭐
Stars: ✭ 10,991 (+6109.6%)
Mutual labels:  kafka, big-data, hadoop, hbase
wasp
WASP is a framework for building complex real-time big data applications. It relies on a kind of Kappa/Lambda architecture, mainly leveraging Kafka and Spark. If you need to ingest huge amounts of heterogeneous data and analyze them through complex pipelines, this is the framework for you.
Stars: ✭ 19 (-89.27%)
Mutual labels:  hadoop, hbase, spark-streaming, parquet
DaFlow
Apache Spark-based data flow (ETL) framework that supports multiple read and write destinations of different types, as well as multiple categories of transformation rules.
Stars: ✭ 24 (-86.44%)
Mutual labels:  apache-spark, hadoop, avro, parquet
Gaffer
A large-scale entity and relation database supporting aggregation of properties
Stars: ✭ 1,642 (+827.68%)
Mutual labels:  big-data, hadoop, parquet, hbase
Data Accelerator
Data Accelerator for Apache Spark simplifies onboarding to Streaming of Big Data. It offers a rich, easy to use experience to help with creation, editing and management of Spark jobs on Azure HDInsights or Databricks while enabling the full power of the Spark engine.
Stars: ✭ 247 (+39.55%)
Mutual labels:  kafka, big-data, apache-spark, spark-streaming
Devops Python Tools
80+ DevOps & Data CLI Tools - AWS, GCP, GCF Python Cloud Function, Log Anonymizer, Spark, Hadoop, HBase, Hive, Impala, Linux, Docker, Spark Data Converters & Validators (Avro/Parquet/JSON/CSV/INI/XML/YAML), Travis CI, AWS CloudFormation, Elasticsearch, Solr etc.
Stars: ✭ 406 (+129.38%)
Mutual labels:  hadoop, avro, parquet, hbase
Szt Bigdata
Shenzhen Metro big data passenger-flow analysis system 🚇🚄🌟
Stars: ✭ 826 (+366.67%)
Mutual labels:  kafka, hadoop, mongodb, hbase
Parquetviewer
Simple Windows desktop application for viewing & querying Apache Parquet files
Stars: ✭ 145 (-18.08%)
Mutual labels:  big-data, apache-spark, parquet
Nagios Plugins
450+ AWS, Hadoop, Cloud, Kafka, Docker, Elasticsearch, RabbitMQ, Redis, HBase, Solr, Cassandra, ZooKeeper, HDFS, Yarn, Hive, Presto, Drill, Impala, Consul, Spark, Jenkins, Travis CI, Git, MySQL, Linux, DNS, Whois, SSL Certs, Yum Security Updates, Kubernetes, Cloudera etc...
Stars: ✭ 1,000 (+464.97%)
Mutual labels:  kafka, hadoop, hbase
Dataengineeringproject
Example end-to-end data engineering project.
Stars: ✭ 82 (-53.67%)
Mutual labels:  kafka, big-data, mongodb
Spark With Python
Fundamentals of Spark with Python (using PySpark), code examples
Stars: ✭ 150 (-15.25%)
Mutual labels:  big-data, hadoop, apache-spark
Real Time Stream Processing Engine
This is an example of real time stream processing using Spark Streaming, Kafka & Elasticsearch.
Stars: ✭ 37 (-79.1%)
Mutual labels:  kafka, apache-spark, spark-streaming
Open Bank Mark
A bank simulation application built mainly with Clojure, which can be used for end-to-end testing and for showing some graphs.
Stars: ✭ 81 (-54.24%)
Mutual labels:  graphql, kafka, avro
Learning Spark
Learn Spark from scratch; big data study notes
Stars: ✭ 37 (-79.1%)
Mutual labels:  hadoop, spark-streaming, hbase
Springboot Templates
Spring Boot integrations with Dubbo and Netty; NoSQL templates for Redis and MongoDB; MQ templates for Kafka, RocketMQ, and RabbitMQ; query engines Solr, SolrCloud, and Elasticsearch
Stars: ✭ 100 (-43.5%)
Mutual labels:  kafka, mongodb, hbase
Bigdata Interview
🎯 🌟 [Big data interview questions] A collection of big data interview questions gathered online, with my own answer summaries. Currently covers the Hadoop, Hive, Spark, Flink, HBase, Kafka, and ZooKeeper frameworks.
Stars: ✭ 857 (+384.18%)
Mutual labels:  kafka, hadoop, hbase
Repository
A personal knowledge base covering data warehouse modeling, real-time computing, big data, Java, algorithms, and more.
Stars: ✭ 92 (-48.02%)
Mutual labels:  kafka, hadoop, hbase
Spring Boot 2.x Examples
Spring Boot 2.x code examples
Stars: ✭ 104 (-41.24%)
Mutual labels:  kafka, mongodb, hbase

Bigdata Playground

The aim is to create a Batch/Streaming/ML/WebApp stack where you can test your jobs locally or submit them to the YARN resource manager. We use Docker to build the environment and Docker Compose to provision it with the required components (the next step is Kubernetes). Along with the infrastructure, four sample projects verify that everything works as expected. The boilerplate is based on a sample flight-search web application.

Installation

If you are on macOS, you can use a package manager such as Homebrew to install sbt on your machine:

$ brew install sbt

For other systems, refer to the manual installation instructions on the sbt website: http://www.scala-sbt.org/0.13/tutorial/Manual-Installation.html

If you are on macOS, you can use a package manager such as Homebrew to install Maven on your machine:

$ brew install maven

For other systems, refer to the installation instructions on the Maven website: https://maven.apache.org/install.html

Install Docker by following the instructions for macOS, Linux, or Windows.
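Before running the build steps below, it can help to confirm that every required tool is on the PATH. A minimal sketch — `check_tools` is a hypothetical helper, not part of the project, and the tool list is assumed from the steps in this section:

```shell
# Hypothetical helper: report any required build tool that is missing from PATH.
check_tools() {
  missing=0
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool"
      missing=1
    fi
  done
  return "$missing"
}

# Tool list assumed from the installation steps in this README.
check_tools sbt mvn node docker docker-compose || echo "install the missing tools before continuing"
```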

docker network create vnet
npm install yarn -g
cd webapp && yarn && cd client && yarn && cd ../server && yarn && cd ../ && npm run build:dev && cd ../
cd batch/spark && sbt clean package assembly && cd ../..

cd batch/hadoop && mvn clean package && cd ../..
cd streaming/spark && sbt clean assembly && cd ../..
cd streaming/flink && sbt clean assembly && cd ../..
cd streaming/storm && mvn clean package && cd ../..
cd docker
docker-compose -f mongo.yml -f zookeeper.yml -f kafka.yml -f hadoop-hbase.yml -f flink.yml up -d
docker-compose -f dev/webapp.yml up -d
docker-compose -f dev/batch-spark.yml up -d
docker-compose -f dev/batch-hadoop.yml up -d
docker-compose -f dev/streaming-spark.yml up -d
docker-compose -f dev/streaming-flink.yml up -d
docker-compose -f dev/streaming-storm.yml up -d
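The build commands above chain `cd … && … && cd ../..` calls, which leaves you stranded in a subdirectory if any step fails. One way to keep the working directory stable is to run each step in a subshell — a sketch with a hypothetical `run_in` helper (not part of the project):

```shell
# Hypothetical helper: run a command inside a directory without changing
# the caller's working directory (the cd happens in a subshell).
run_in() {
  dir=$1
  shift
  (cd "$dir" && "$@")
}

# Example usage against the build steps above:
# run_in batch/spark sbt clean package assembly
# run_in streaming/storm mvn clean package
```

Because the `cd` runs in a subshell, a failed step never changes the directory of the shell that invoked it.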

Create your Twitter app on https://apps.twitter.com

export TWITTER_CONSUMER_KEY=<TWITTER_CONSUMER_KEY>
export TWITTER_CONSUMER_SECRET=<TWITTER_CONSUMER_SECRET>
export TWITTER_CONSUMER_ACCESS_TOKEN=<TWITTER_CONSUMER_ACCESS_TOKEN>
export TWITTER_CONSUMER_ACCESS_TOKEN_SECRET=<TWITTER_CONSUMER_ACCESS_TOKEN_SECRET>
docker-compose -f dev/ml-spark.yml up -d
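The four exports above must be re-entered in every new shell. One option is to keep them in a small env file and source it before bringing up the ML container — a sketch (the file name `twitter.env` is an arbitrary choice, and the placeholder values still need to be filled in with your own credentials):

```shell
# Sketch: store the Twitter credentials in an env file (values are placeholders).
cat > twitter.env <<'EOF'
export TWITTER_CONSUMER_KEY='<TWITTER_CONSUMER_KEY>'
export TWITTER_CONSUMER_SECRET='<TWITTER_CONSUMER_SECRET>'
export TWITTER_CONSUMER_ACCESS_TOKEN='<TWITTER_CONSUMER_ACCESS_TOKEN>'
export TWITTER_CONSUMER_ACCESS_TOKEN_SECRET='<TWITTER_CONSUMER_ACCESS_TOKEN_SECRET>'
EOF

# Load the credentials into the current shell before starting the container:
. ./twitter.env
```

Remember to keep `twitter.env` out of version control, since it holds secrets.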

Interactions / Ongoing

Contributing

Pull requests are welcome.

Support

Please raise tickets for issues and improvements at https://github.com/Chabane/bigdata-playground/issues

License

This example is released under version 2.0 of the Apache License.
