Docker Hadoop

Apache Hadoop docker image

Changes

Version 2.0.0 introduces the wait_for_it script for cluster startup

Hadoop Docker

Supported Hadoop Versions

See the repository branches for supported Hadoop versions

Quick Start

To deploy an example HDFS cluster, run:

  docker-compose up
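
Once the containers are up, standard docker-compose commands can be used to check on them; for example (the service name namenode matches the one used later in this README):

  # List the cluster's containers and their current state
  docker-compose ps

  # Follow the namenode's logs while the cluster starts
  docker-compose logs -f namenode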

Run the example wordcount job:

  make wordcount
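
The Makefile wraps the job submission; if you prefer to run a wordcount by hand, a rough sketch follows. The container name namenode is an assumption (fall back to docker-compose exec namenode bash if your containers are prefixed), and the examples jar path depends on the Hadoop version baked into the image:

  # Open a shell in the namenode container (name assumed to be "namenode")
  docker exec -it namenode bash

  # Inside the container: stage some text input in HDFS
  hdfs dfs -mkdir -p input
  hdfs dfs -put /etc/hadoop/*.xml input

  # Run the stock wordcount example; adjust the jar path to your Hadoop version
  hadoop jar /opt/hadoop-3.2.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar wordcount input output

  # Print the word counts
  hdfs dfs -cat output/part-r-00000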

Or deploy in Docker Swarm:

  docker stack deploy -c docker-compose-v3.yml hadoop
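
After deploying, verify the stack with the usual swarm commands:

  # List services in the hadoop stack and their replica counts
  docker stack services hadoop

  # Show where the stack's tasks were scheduled
  docker stack ps hadoop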

docker-compose creates a Docker network for the cluster, e.g. dockerhadoop_default, which can be found by running docker network list.

Run docker network inspect on the network (e.g. dockerhadoop_default) to find the IP address the Hadoop interfaces are published on; a templated one-liner is shown after the list. Access these interfaces at the following URLs:

  • Namenode: http://<dockerhadoop_IP_address>:9870/dfshealth.html#tab-overview
  • History server: http://<dockerhadoop_IP_address>:8188/applicationhistory
  • Datanode: http://<dockerhadoop_IP_address>:9864/
  • Nodemanager: http://<dockerhadoop_IP_address>:8042/node
  • Resource manager: http://<dockerhadoop_IP_address>:8088/
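
The templated one-liner mentioned above prints each container's name and IPv4 address on the network (dockerhadoop_default is the example network name from above):

  # Print name and IPv4 address for every container attached to the network
  docker network inspect -f \
      '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{println}}{{end}}' \
      dockerhadoop_default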

Configure Environment Variables

The configuration parameters can be specified in the hadoop.env file or as environment variables for specific services (e.g. namenode, datanode, etc.):

  CORE_CONF_fs_defaultFS=hdfs://namenode:8020

CORE_CONF corresponds to core-site.xml. fs_defaultFS=hdfs://namenode:8020 will be transformed into:

  <property><name>fs.defaultFS</name><value>hdfs://namenode:8020</value></property>
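
The same mechanism works per service in the compose file; a minimal sketch (the service name namenode mirrors the examples above, and hadoop.env is the file shipped with this project):

  services:
    namenode:
      env_file:
        - ./hadoop.env
      environment:
        - CORE_CONF_fs_defaultFS=hdfs://namenode:8020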

To define a dash inside a configuration parameter, use a triple underscore, such as YARN_CONF_yarn_log___aggregation___enable=true (yarn-site.xml):

  <property><name>yarn.log-aggregation-enable</name><value>true</value></property>

The available configuration prefixes and the files they map to are listed below; example values follow the list:

  • /etc/hadoop/core-site.xml CORE_CONF
  • /etc/hadoop/hdfs-site.xml HDFS_CONF
  • /etc/hadoop/yarn-site.xml YARN_CONF
  • /etc/hadoop/httpfs-site.xml HTTPFS_CONF
  • /etc/hadoop/kms-site.xml KMS_CONF
  • /etc/hadoop/mapred-site.xml MAPRED_CONF
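
For illustration, one standard Hadoop property per prefix; the property names are real Hadoop settings, while the values are examples only:

  # core-site.xml: fs.defaultFS
  CORE_CONF_fs_defaultFS=hdfs://namenode:8020
  # hdfs-site.xml: dfs.replication
  HDFS_CONF_dfs_replication=1
  # yarn-site.xml: yarn.nodemanager.resource.memory-mb
  YARN_CONF_yarn_nodemanager_resource_memory___mb=8192
  # mapred-site.xml: mapreduce.framework.name
  MAPRED_CONF_mapreduce_framework_name=yarn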

If you need to extend some other configuration file, refer to the base/entrypoint.sh bash script.
