
hivemq / hivemq4-docker-images

License: Apache-2.0
Official Docker Images for the Enterprise MQTT Broker HiveMQ

Programming Languages

  • Shell
  • Dockerfile

Projects that are alternatives of or similar to hivemq4-docker-images

mqtt
The fully compliant, embeddable high-performance Go MQTT v5 server for IoT, smarthome, and pubsub
Stars: ✭ 356 (+1877.78%)
Mutual labels:  mqtt-broker, mqtt-server, mqtt5
Jaas
Run jobs (tasks/one-shot containers) with Docker
Stars: ✭ 291 (+1516.67%)
Mutual labels:  cluster, docker-swarm
docker-swarm-vagrant
Getting started with Docker swarm
Stars: ✭ 20 (+11.11%)
Mutual labels:  cluster, docker-swarm
Ckss Certified Kubernetes Security Specialist
This repository is a collection of resources to prepare for the Certified Kubernetes Security Specialist (CKSS) exam.
Stars: ✭ 333 (+1750%)
Mutual labels:  cluster, cloud-native
Gnes
GNES is Generic Neural Elastic Search, a cloud-native semantic search system based on deep neural network.
Stars: ✭ 1,178 (+6444.44%)
Mutual labels:  docker-swarm, cloud-native
clusterplex
ClusterPlex is basically an extended version of Plex, which supports distributed Workers across a cluster to handle transcoding requests.
Stars: ✭ 123 (+583.33%)
Mutual labels:  cluster, docker-swarm
aws docker swarm
setup to bootstrap docker swarm cluster and a controller on AWS using terraform
Stars: ✭ 24 (+33.33%)
Mutual labels:  cluster, docker-swarm
docker-volume-hetzner
Docker Volume Plugin for accessing Hetzner Cloud Volumes
Stars: ✭ 81 (+350%)
Mutual labels:  cluster, docker-swarm
Mosquitto Cluster
a built-in, autonomous Mosquitto Cluster implementation (MQTT cluster).
Stars: ✭ 238 (+1222.22%)
Mutual labels:  cluster, mqtt-broker
Miniswarm
Docker Swarm cluster in one command
Stars: ✭ 130 (+622.22%)
Mutual labels:  cluster, docker-swarm
Zenko
Zenko is the open source multi-cloud data controller: own and keep control of your data on any cloud.
Stars: ✭ 353 (+1861.11%)
Mutual labels:  docker-swarm, cloud-native
chip
📦 🐳 🚀 - Smart "dummy" mock for cloud native tests
Stars: ✭ 19 (+5.56%)
Mutual labels:  docker-swarm, cloud-native
Mqttnet
MQTTnet is a high performance .NET library for MQTT based communication. It provides an MQTT client and an MQTT server (broker). The implementation is based on the documentation from http://mqtt.org/.
Stars: ✭ 2,486 (+13711.11%)
Mutual labels:  mqtt-broker, mqtt-server
inspr
Inspr is an agnostic application mesh for simpler, faster, and more secure development of distributed applications (dApps).
Stars: ✭ 49 (+172.22%)
Mutual labels:  cluster, cloud-native
Emqx
An Open-Source, Cloud-Native, Distributed MQTT Message Broker for IoT.
Stars: ✭ 8,951 (+49627.78%)
Mutual labels:  mqtt-broker, mqtt-server
jo-mqtt
Java MQTT server/broker: simple to use, easy to extend, stable in clusters, easily handles 100,000 concurrent connections; already used in production.
Stars: ✭ 71 (+294.44%)
Mutual labels:  mqtt-broker, mqtt-server
nmqtt
Native Nim MQTT client library
Stars: ✭ 39 (+116.67%)
Mutual labels:  mqtt-broker, mqtt-server
mqtt
Kotlin cross-platform, coroutine based, reflectionless MQTT 3.1.1 & 5.0 client & server
Stars: ✭ 31 (+72.22%)
Mutual labels:  mqtt-broker, mqtt-server
Docker Swarm
🐳🐳🐳 This repository is part of a blog series on Docker Swarm example using VirtualBox, OVH Openstack, Azure and Amazon Web Services AWS
Stars: ✭ 43 (+138.89%)
Mutual labels:  cluster, docker-swarm
KMQTT
Embeddable and standalone Kotlin Multiplatform MQTT broker
Stars: ✭ 56 (+211.11%)
Mutual labels:  mqtt-broker, mqtt-server


What is HiveMQ?

HiveMQ is an MQTT-based messaging platform designed for the fast, efficient, and reliable movement of data to and from connected IoT devices. It uses the MQTT protocol for instant, bidirectional push of data between your devices and your enterprise systems. HiveMQ is built to address some of the key technical challenges organizations face when building new IoT applications, including:

  • Building reliable and scalable business critical IoT applications
  • Fast data delivery to meet the expectations of end users for responsive IoT products
  • Lower cost of operation through efficient use of hardware, network and cloud resources
  • Integrating IoT data into existing enterprise systems

At its core, HiveMQ is an MQTT broker compliant with MQTT 3.1, MQTT 3.1.1, and MQTT 5.0, but it excels with additional features designed for enterprise use cases and professional deployments.

See Features for more information.

HiveMQ Docker Images

This repository provides the Dockerfile and context for the images hosted in the HiveMQ Docker Hub repository.

HiveMQ Base Image

The HiveMQ base image installs HiveMQ and optimizes the installation for running as a container.

It is meant to be used to build custom images or to run a dockerized HiveMQ locally for testing purposes.

How to Build

The image can be built with the build.sh script in the hivemq4/base-image folder; an alternative image name can be specified with the TARGETIMAGE environment variable, as shown below.
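
For example:

cd hivemq4/base-image
# Build the default image for HiveMQ 4.7.3
HIVEMQ_VERSION=4.7.3 ./build.sh
# Or build under a custom name (registry, name, and tag are placeholders)
TARGETIMAGE=myregistry/custom-hivemq:1.2.3 HIVEMQ_VERSION=4.7.3 ./build.sh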

HiveMQ DNS Discovery Image

The HiveMQ DNS discovery image is based on the HiveMQ base image and adds the HiveMQ DNS Discovery Extension.

We recommend using the HiveMQ DNS discovery image to run HiveMQ in a cluster.

How to Build

To build the DNS discovery image, first obtain the HiveMQ DNS Discovery Extension, unzip the file, and copy the extracted folder to the hivemq4/dns-image folder.

The image can then be built by running docker build -t hivemq-dns . in the hivemq4/dns-image folder.
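
A possible end-to-end sequence (the extension zip file name is illustrative):

# Unzip the previously downloaded extension into the build context
unzip hivemq-dns-cluster-discovery-extension.zip -d hivemq4/dns-image/
# Build the DNS discovery image
cd hivemq4/dns-image
docker build -t hivemq-dns .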

Tags

The HiveMQ Docker Hub repository provides different versions of the HiveMQ images using tags:

  • latest: always points to the latest version of the HiveMQ base image
  • dns-latest: always points to the latest version of the HiveMQ DNS discovery image
  • <version>: base image providing the given version of the broker (e.g. 4.0.0)
  • dns-<version>: DNS discovery image based on the base image of the given version

Basic Single Instance

To start a single HiveMQ instance and allow access to the MQTT port as well as the Control Center, get Docker and run the following command:

docker run --ulimit nofile=500000:500000 -p 8080:8080 -p 8000:8000 -p 1883:1883 hivemq/hivemq4

You can connect to the broker via MQTT (port 1883) or WebSockets (port 8000), and reach the Control Center on port 8080.
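
To verify that the broker accepts connections, subscribe and publish with any MQTT client; for example, with the Mosquitto command-line clients (assuming they are installed locally):

# Subscribe to a test topic (runs until interrupted)
mosquitto_sub -h localhost -p 1883 -t test/topic
# In a second shell, publish a message to the same topic
mosquitto_pub -h localhost -p 1883 -t test/topic -m "hello hivemq"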

Clustering

For running HiveMQ in a cluster, we recommend using the DNS discovery image. This image has the HiveMQ DNS Discovery Extension built in. It can be used with any container orchestration engine that supports service discovery using a round-robin A record.

A custom solution supplying the A record could be used as well.

Environment Variables

The following environment variables can be used to customize the discovery and broker configuration respectively.

  • HIVEMQ_DNS_DISCOVERY_ADDRESS (default: unset): Address to query for the A record that will be used for cluster discovery
  • HIVEMQ_DNS_DISCOVERY_INTERVAL (default: 31): Interval in seconds between searches for new nodes
  • HIVEMQ_DNS_DISCOVERY_TIMEOUT (default: 30): Time in seconds to wait for DNS resolution to complete
  • HIVEMQ_CLUSTER_PORT (default: 8000): Port used for the cluster transport
  • HIVEMQ_BIND_ADDRESS (default: unset): Cluster transport bind address; only necessary if the default policy (resolving the hostname) fails
  • HIVEMQ_CLUSTER_TRANSPORT_TYPE (default: UDP): Cluster transport type
  • HIVEMQ_LICENSE (default: unset): Base64-encoded license file to use for the broker
  • HIVEMQ_CONTROL_CENTER_USER (default: admin): Username for the HiveMQ Control Center login
  • HIVEMQ_CONTROL_CENTER_PASSWORD (default: SHA256 of adminhivemq): Password hash for HiveMQ Control Center authentication
  • HIVEMQ_NO_ROOT_STEP_DOWN (default: unset): Set to true to disable root privilege step-down at startup. See the HiveMQ base image for more information.
  • HIVEMQ_ALLOW_ALL_CLIENTS (default: true): Whether the packaged allow-all extension (available starting with 4.3.0) should be enabled. If set to false, the extension is deleted before the broker starts. This flag has no effect for versions prior to 4.3.0.
  • HIVEMQ_REST_API_ENABLED (default: false): Whether the REST API (supported starting with 4.4.0) should be enabled. If set to true, the REST API binds to 0.0.0.0 on port 8888 at startup. This flag has no effect for versions prior to 4.4.0.
  • HIVEMQ_VERBOSE_ENTRYPOINT (default: false): Whether the entrypoint scripts should print additional debug information
  • HIVEMQ_USE_NSS_WRAPPER (default: true): Whether nss_wrapper should be used to configure user information correctly

Below are two examples describing how to use this image on Docker Swarm and Kubernetes, respectively.

Other environments are compatible as well (provided they support DNS discovery in some way).

Entrypoint Scripts

The image provides a /docker-entrypoint.d directory to which you can COPY custom entrypoint scripts; these are executed before HiveMQ starts. The scripts must follow the XX_name.sh naming scheme, where XX is an integer that determines the order in which the entrypoint scripts are executed.

Scripts with the executable bit set are executed normally; scripts without it are sourced and run in the parent shell. Sourcing allows an entrypoint script to set custom environment variables before startup, as shown below.
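
A minimal sketch of such a script (the file name and contents are illustrative); shipped without the executable bit, it is sourced, so the exported variable is visible to the broker at startup:

# 10_custom_env.sh, added in a custom Dockerfile with:
#   COPY 10_custom_env.sh /docker-entrypoint.d/
echo "applying custom environment"
# Sourced (no executable bit), so this export reaches the HiveMQ process
export HIVEMQ_VERBOSE_ENTRYPOINT=true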

Local Cluster with Docker Swarm

To start a HiveMQ cluster locally, you can use Docker Swarm.

Note: Using Docker Swarm in production is not recommended.

  • Start a single node Swarm cluster by running:
docker swarm init
  • Create an overlay network for the cluster nodes to communicate on:
docker network create -d overlay --attachable myNetwork
  • Create the HiveMQ service on the network:
docker service create \
  --replicas 3 --network myNetwork \
  --env HIVEMQ_DNS_DISCOVERY_ADDRESS=tasks.hivemq \
  --publish target=1883,published=1883 \
  --publish target=8080,published=8080 \
  -p 8000:8000/udp \
  --name hivemq \
    hivemq/hivemq4:dns-latest

This provides a 3-node cluster with the MQTT (1883) and HiveMQ Control Center (8080) ports forwarded to the host network.

This means you can connect MQTT clients on port 1883. The connection will be forwarded to any of the cluster nodes.

The HiveMQ Control Center can be used in a single node cluster. In clusters with multiple nodes, sticky sessions for HTTP requests cannot be upheld with this configuration, because the internal load balancer forwards requests in an alternating fashion. To use sticky sessions, the Docker Swarm Enterprise version is required.

Managing the Cluster

To scale the cluster up to 5 nodes, run

docker service scale hivemq=5

To remove the cluster, run

docker service rm hivemq

To read the logs for all HiveMQ nodes in real time, use

docker service logs hivemq -f

To get the log for a single node, get the list of service containers using

docker service ps hivemq

And print the log using

docker service logs <id>

where <id> is the container ID listed in the service ps command.

Production Use with Kubernetes

NOTE: Please consider using the Kubernetes Operator instead, as it makes production deployment of HiveMQ much easier.

For production we recommend using the DNS discovery image in combination with Kubernetes.

On Kubernetes, an appropriate deployment configuration is necessary to utilize DNS discovery. A headless service will provide a DNS record for the broker that can be used for discovery.

The following is an example configuration for a 3-node HiveMQ cluster using DNS discovery in a replication controller setup.

Please note that you may have to replace HIVEMQ_DNS_DISCOVERY_ADDRESS according to your Kubernetes namespace and configured domain.

apiVersion: v1
kind: ReplicationController
metadata:
  name: hivemq-replica
spec:
  replicas: 3
  selector:
    app: hivemq-cluster1
  template:
    metadata:
      name: hivemq-cluster1
      labels:
        app: hivemq-cluster1
    spec:
      containers:
      - name: hivemq-pods
        image: hivemq/hivemq4:dns-latest
        ports:
        - containerPort: 8080
          protocol: TCP
          # Kubernetes limits port names to 15 characters
          name: control-center
        - containerPort: 1883
          protocol: TCP
          name: mqtt
        env:
        - name: HIVEMQ_DNS_DISCOVERY_ADDRESS
          value: "hivemq-discovery.default.svc.cluster.local."
        - name: HIVEMQ_DNS_DISCOVERY_TIMEOUT
          value: "20"
        - name: HIVEMQ_DNS_DISCOVERY_INTERVAL
          value: "21"
        - name: HIVEMQ_CLUSTER_TRANSPORT_TYPE
          value: "TCP"
        readinessProbe:
          tcpSocket:
            port: 1883
          initialDelaySeconds: 30
          periodSeconds: 60
          failureThreshold: 60
        livenessProbe:
          tcpSocket:
            port: 1883
          initialDelaySeconds: 30
          periodSeconds: 60
          failureThreshold: 60
---
kind: Service
apiVersion: v1
metadata:
  name: hivemq-discovery
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  selector:
    app: hivemq-cluster1
  ports:
    - protocol: TCP
      port: 1883
      targetPort: 1883
  clusterIP: None
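
Assuming the manifest above is saved as hivemq-cluster.yaml (the file name is illustrative), create both objects and watch the pods come up:

kubectl create -f hivemq-cluster.yaml
# Watch the pods until all replicas are ready
kubectl get pods -l app=hivemq-cluster1 -w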

Accessing the HiveMQ Control Center

To access the HiveMQ Control Center for a cluster running on Kubernetes, follow these steps:

  • Create a service exposing the HiveMQ Control Center of the HiveMQ service. Use the following YAML definition (as web.yaml):
kind: Service
apiVersion: v1
metadata:
  name: hivemq-control-center
spec:
  selector:
    app: hivemq-cluster1
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  sessionAffinity: ClientIP
  type: LoadBalancer
  • Create the service using kubectl create -f web.yaml

Note: Depending on your Kubernetes provider or environment, load balancers might not be available, or additional configuration may be necessary to access the HiveMQ Control Center.
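
If no load balancer is available, kubectl port-forward is a quick alternative for local access (a sketch):

# Forward local port 8080 to the Control Center service
kubectl port-forward service/hivemq-control-center 8080:8080

The Control Center is then reachable at http://localhost:8080.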

Accessing the MQTT Port Using External Clients

To allow access for the MQTT port of a cluster running on Kubernetes, follow these steps:

  • Create a service exposing the MQTT port using a load balancer. You can use the following YAML definition (as mqtt.yaml):
kind: Service
apiVersion: v1
metadata:
  name: hivemq-mqtt
spec:
  # externalTrafficPolicy is a field of the service spec, not an annotation
  externalTrafficPolicy: Local
  selector:
    app: hivemq-cluster1
  ports:
    - protocol: TCP
      port: 1883
      targetPort: 1883
  type: LoadBalancer
  • Create the service using kubectl create -f mqtt.yaml

Note: The externalTrafficPolicy setting is necessary to allow the Kubernetes service to maintain a larger number of concurrent connections.
See Source IP for Services for more information.
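
Once the load balancer has been provisioned, you can look up its address and publish a test message (a sketch, assuming the Mosquitto command-line clients are installed; replace <external-ip> with the EXTERNAL-IP shown by kubectl):

# Find the external address of the MQTT service
kubectl get service hivemq-mqtt
# Publish a test message through the load balancer
mosquitto_pub -h <external-ip> -p 1883 -t test/topic -m "hello"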

Configuration

Setting the HiveMQ Control Center Username and Password

The environment variable HIVEMQ_CONTROL_CENTER_PASSWORD allows you to set the password of the HiveMQ Control Center by defining a SHA256 hash for a custom password.

You can also configure the username with the environment variable HIVEMQ_CONTROL_CENTER_USER.

See Generate a SHA256 Password to read more about how to generate the password hash.
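
Based on the default shown in the table (the SHA256 hash of the username concatenated with the password), a hash for user admin with password example could be generated like this (a sketch; see the linked documentation for the authoritative procedure):

# SHA256 of username + password ("admin" + "example"); values are illustrative
echo -n "adminexample" | sha256sum | cut -d ' ' -f 1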

Adding a License

To use a license with a HiveMQ Docker container, you must first encode it as a Base64 string.

To do so, run cat license.lic | base64 (replace license.lic with the path to your license file).

Set the resulting string as the value for the HIVEMQ_LICENSE environment variable of the container.
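
For example (replace license.lic with the path to your license; tr removes the line breaks that some base64 implementations insert):

# Encode the license file without line wrapping
export HIVEMQ_LICENSE=$(cat license.lic | base64 | tr -d '\n')
# Pass the variable through to the container
docker run --ulimit nofile=500000:500000 -p 1883:1883 -e HIVEMQ_LICENSE hivemq/hivemq4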

Disabling the hivemq-allow-all-extension

By default, the HiveMQ Docker images use the packaged hivemq-allow-all-extension, which permits any client to connect without authentication.

This can be disabled by setting the HIVEMQ_ALLOW_ALL_CLIENTS environment variable to false.

This causes the entrypoint script to delete the extension on startup.
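
For example (note that without the allow-all extension, some other extension must provide authentication or clients will be unable to connect):

docker run --ulimit nofile=500000:500000 -p 1883:1883 \
  -e HIVEMQ_ALLOW_ALL_CLIENTS=false hivemq/hivemq4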

Disabling Privilege Step-Down

By default, the HiveMQ Docker images check for root privileges at startup and, if present, switch to a less privileged user before running the HiveMQ broker.

This enhances the security of the container.

If you wish to skip this step, set the environment variable HIVEMQ_NO_ROOT_STEP_DOWN to true.

Overriding the Cluster Bind Address

By default, the HiveMQ DNS discovery image attempts to set the bind address using the container's ${HOSTNAME} to ensure that HiveMQ binds the cluster connection to the correct interface so a cluster can be formed.

This behavior can be overridden by setting the environment variable HIVEMQ_BIND_ADDRESS; the broker will attempt to use the given value as the bind address instead.

Setting the Cluster Transport Type

By default, the HiveMQ DNS discovery image uses UDP as the transport protocol for the cluster transport.

If you would like to use TCP as transport type instead, you can set the HIVEMQ_CLUSTER_TRANSPORT_TYPE environment variable to TCP.

Note: We generally recommend using TCP for the cluster transport, as it makes HiveMQ less susceptible to network splits under high network load.

Building a custom Docker image

See our documentation for more information on how to build custom HiveMQ images.

Contributing

If you want to contribute to HiveMQ 4 Docker Images, see the contribution guidelines.

License

HiveMQ 4 Docker Images is licensed under the Apache License, Version 2.0. A copy of the license can be found here.
