
BigData Fun

This is the repository associated with my article, Big Data, Little Cloud.

In summary, I wanted to learn more about big data and some of the key tools in the market. As a result, I decided to create an all-in-one Docker environment where I can test out all the bits.

The key components currently implemented are:

  • HDFS - Distributed file system, including two data nodes
  • HBase - Non-relational, distributed database similar to Google BigTable
  • Hue - Web interface for analyzing data
  • NiFi - A system to process and distribute data
  • HBase Indexer - Indexes HBase rows into Solr quickly and easily
  • Solr - Search platform based on Lucene
  • Banana - A Kibana port for visualisation of Solr data
  • Flume - A headless way to process and distribute data
  • ZkWeb - For viewing your ZooKeeper data in a UI

The following components will be coming soon:

  • Spark - Data processing engine, probably managed by Livy

Getting started

Requirements

As most of these components run inside a JVM, you've probably guessed that this fully distributed setup is rather resource intensive. If you're on Linux with a decent amount of RAM, you'll be fine. However, if you're running Docker through virtualisation such as Hyper-V or QEMU (Mac), you may need to tweak things a little.

IMPORTANT: If you're on a Mac, ensure you have adequate RAM assigned to the Docker daemon (Preferences -> Advanced). I personally have this set to 8GB on a 16GB Mac, and it runs sweet as a nut.

IMPORTANT: Before you do anything, you need to build the base images. Please do docker-compose -f compose.build.yml build

Key URLs

Once you've used one of the startup options below, these are your key URLs:

Startup Options

Starting everything

If you just want to start everything, do docker-compose up -d. I believe I've mapped the dependencies correctly in the base docker-compose.yml, so give it a minute and everything should start up.

Starting individual components

I've tried to break the docker-compose file down into subsections:

  • HDFS: docker-compose up namenode datanode1 datanode2 resourcemanager
  • HBase: docker-compose up zookeeper regionserver master thrift rest, requires HDFS
  • Solr/Lily/Banana: docker-compose up solr banana lily, requires HBase

Then you've got the other tools too, so start them by name: hue, nifi.
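
For example, to bring the core stack up layer by layer in that dependency order (just a sketch using the service names above, with -d so each layer runs in the background; give each one a minute to settle before starting the next):

$ docker-compose up -d namenode datanode1 datanode2 resourcemanager
$ docker-compose up -d zookeeper regionserver master thrift rest
$ docker-compose up -d solr banana lily
$ docker-compose up -d hue nifi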

Starting the user import demo (recommended)

I am, however, working on a complete end-to-end demo, so if you prefer, just run ./demo.sh. The idea of this demo is to read random user data from a sample API, import it into HBase, and have the indexer send it over to Solr so we can query it in Banana.

The components in detail

Hadoop HDFS (2.7.3)

  • namenode
  • datanode1
  • datanode2
  • resourcemanager

This is an HDFS cluster running two data nodes and a YARN resource manager.

HDFS
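
If you want to poke around HDFS once the containers are up, you can exec into the namenode. A quick sketch (it assumes the hdfs CLI is on the PATH inside the namenode image):

$ docker-compose exec namenode hdfs dfs -mkdir -p /tmp/test
$ docker-compose exec namenode hdfs dfs -put /etc/hosts /tmp/test/
$ docker-compose exec namenode hdfs dfs -ls /tmp/test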

HBase (1.3.0)

  • zookeeper
  • master
  • regionserver

This setup is designed to replicate a fully distributed deployment, so we're running in distributed mode with separate instances (containers) of each of the components listed above.

The HBase container can be run in standalone mode too, if you want - which will result in fewer JVMs, but a less production-like environment. To run HBase in standalone mode, run the HBase container with HBASE_MANAGES_ZK=true, HBASE_CONF_DISTRIBUTED=false and HBASE_CONF_QUORUM=hbase-master.
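
As a rough sketch, one way to do that is a one-off run of the master service with those variables overridden (the exact invocation, port publishing and so on depend on your compose file):

# Hypothetical one-off standalone run of the HBase master
$ docker-compose run -d \
    -e HBASE_MANAGES_ZK=true \
    -e HBASE_CONF_DISTRIBUTED=false \
    -e HBASE_CONF_QUORUM=hbase-master \
    master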

You can read more about the modes here.

If you want to visualise the ZooKeeper data, take a look at zk-web. It certainly helped me with debugging.

HBase

Zk-Web

  • zkweb

This is a web interface for managing ZooKeeper; I found it helpful for debugging. To run it, do docker-compose up -d zkweb, and then go to the URL referenced above. You'll need to enter the ZooKeeper cluster address, but from the perspective of the Docker container, so that's hbase-zookeeper:2181.

Rest/Thrift

  • thrift
  • rest

The REST & Thrift interfaces sit on top of the cluster; you can stop them if you don't need them.

Rest/Thrift
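
As a quick smoke test of the REST interface, you can curl it (a sketch; it assumes the rest service publishes HBase's default REST port, 8080, to your host - check the compose file for the actual mapping):

$ curl http://localhost:8080/version/cluster
$ curl -H "Accept: application/json" http://localhost:8080/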

Hue (latest)

When you first use Hue, it does a health check and will tell you that a bunch of stuff isn't configured correctly. That's fine, as I don't plan to build the whole Cloudera stack; just 'next next next' through it and use the components that matter, like the HBase Browser.

NiFi (1.1.1)

  • nifi

Ahhh, NiFi. Think of it as the more feature-complete, graphical version of Flume. It really does make getting data into HBase rather simple. However, it's generally the last to start, so if your machine is resource-constrained then NiFi will just randomly not boot. Give Docker more RAM, buy more RAM, just RAM.

To get started, I recommend using some templates from here, in particular Fun With HBase, which will get you importing some random user data into HBase in a matter of minutes.

The only thing you actually need to configure is the controller service HBase_1_1_2_ClientService. Basically, you need to point it at ZooKeeper so it can discover your HBase nodes.

NiFi
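
For reference, the relevant controller service settings look something like this (the property names come from NiFi's HBase_1_1_2_ClientService; the quorum value matches this compose setup, and /hbase is HBase's default znode parent, which I'm assuming hasn't been changed):

ZooKeeper Quorum        hbase-zookeeper
ZooKeeper Client Port   2181
ZooKeeper ZNode Parent  /hbase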

Oh, and create the table in HBase:

$ docker-compose exec master hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.3.0, re359c76e8d9fd0d67396456f92bcbad9ecd7a710, Tue Jan  3 05:31:38 MSK 2017

hbase(main):001:0> create 'Users', 'cf'
0 row(s) in 5.4940 seconds

=> Hbase::Table - Users

After you've done that - start the process flows in NiFi and you'll see data being imported into HBase. Easy as that!

Users
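
To sanity-check that rows really are landing, a quick scan from the hbase shell works well (LIMIT just keeps the output short):

$ docker-compose exec master hbase shell
hbase(main):001:0> scan 'Users', {LIMIT => 5}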

Banana (1.16.12)

  • banana

Banana is a Kibana port for Solr. It's used to visualise the data we're indexing. I haven't done much with it yet other than semi-get it running.

HBase Indexer (Lily)

  • lily

HBase Indexer allows you to easily and quickly index HBase rows into Solr. It hooks into HBase replication; whenever data is modified in HBase, those events are transformed into Solr documents and sent over to the Solr instance.

Solr (6.4.1)

  • solr

Search/index platform based on Lucene. If you're familiar with the ELK stack, this is the Elasticsearch equivalent.
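
Once the indexer is feeding it, you can query Solr directly with curl (a sketch; the collection name is a placeholder and it assumes Solr's default port, 8983, is published to the host - check the compose file for the actual mapping):

$ curl 'http://localhost:8983/solr/<your-collection>/select?q=*:*&rows=5&wt=json'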

Flume (1.7.0)

  • flume

I wanted a GUI-less way to stream data into HBase or HDFS too; that's what this container does. It includes the Java classes for HBase and HDFS, so those sinks will work. By default, docker-compose will mount ./data/flume, and any files you place in there will be 'flumed' into an HBase table called flume_sink, with the column family cf. That's all config-driven though, so edit ./config/flume/flume-conf.properties to change that behaviour.
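
For orientation, a minimal spooling-directory-to-HBase agent config looks roughly like the sketch below. This is not a copy of the file shipped in ./config/flume/flume-conf.properties; the agent/component names and the in-container spool path are illustrative.

# Hypothetical sketch: spool files from a directory into the flume_sink HBase table
agent.sources = spool
agent.channels = mem
agent.sinks = hbase

agent.sources.spool.type = spooldir
agent.sources.spool.spoolDir = /data/flume
agent.sources.spool.channels = mem

agent.channels.mem.type = memory

agent.sinks.hbase.type = hbase
agent.sinks.hbase.table = flume_sink
agent.sinks.hbase.columnFamily = cf
agent.sinks.hbase.serializer = org.apache.flume.sink.hbase.SimpleHbaseEventSerializer
agent.sinks.hbase.channel = mem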

In order for the HBase aspect to work, you need to create the table first; that's easiest via the hbase shell.

$ docker-compose exec master hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.3.0, re359c76e8d9fd0d67396456f92bcbad9ecd7a710, Tue Jan  3 05:31:38 MSK 2017

hbase(main):001:0> create 'flume_sink', 'cf'
0 row(s) in 2.5370 seconds

=> Hbase::Table - flume_sink

If you've started Flume before creating this table, you'll see errors like this:

org.apache.flume.FlumeException: Error getting column family from HBase.Please verify that the table flume_sink and Column Family, cf exists in HBase, and the current user has permissions to access that table.

Simply do a docker-compose restart flume and it'll sort itself out.

Flume

Credits

The HDFS work has been tackled beautifully by https://github.com/big-data-europe/docker-hadoop, so I'm using a lot of what they did for the Hadoop namenodes and datanodes.

Tested on...

$ docker version
Client:
 Version:      1.13.1
 API version:  1.26
 Go version:   go1.7.5
 Git commit:   092cba3
 Built:        Wed Feb  8 08:47:51 2017
 OS/Arch:      darwin/amd64

Server:
 Version:      1.13.1
 API version:  1.26 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   092cba3
 Built:        Wed Feb  8 08:47:51 2017
 OS/Arch:      linux/amd64
 Experimental: true


$ docker-compose version
docker-compose version 1.11.1, build 7c5d5e4
docker-py version: 2.0.2
CPython version: 2.7.12
OpenSSL version: OpenSSL 1.0.2j  26 Sep 2016
