
function61 / promswarmconnect

License: Apache-2.0
Bridges Docker Swarm services to Prometheus without any changes to Prometheus

Programming Languages

go

Projects that are alternatives of or similar to promswarmconnect

Radar
PPDAI's microservice registry
Stars: ✭ 165 (+560%)
Mutual labels:  service-discovery
sample-kotlin-ktor-microservices
Sample microservices written in Kotlin that demonstrate usage of the Ktor framework with a Consul server
Stars: ✭ 37 (+48%)
Mutual labels:  service-discovery
hydra-router
A service aware router for Hydra Services. Implements an API Gateway and can route web socket messages.
Stars: ✭ 59 (+136%)
Mutual labels:  service-discovery
Discovery
.NET Clients for Service Discovery and Registration
Stars: ✭ 181 (+624%)
Mutual labels:  service-discovery
opsbro
Ops Best friend
Stars: ✭ 37 (+48%)
Mutual labels:  service-discovery
ais-service-discovery-go
Cloud application library for Golang
Stars: ✭ 77 (+208%)
Mutual labels:  service-discovery
Amalgam8
Content and Version-based Routing Fabric for Polyglot Microservices
Stars: ✭ 152 (+508%)
Mutual labels:  service-discovery
etcdenv
Use your etcd keys as environment variables
Stars: ✭ 23 (-8%)
Mutual labels:  service-discovery
go-bmi
Body Mass Index (BMI) application developed with the go-chassis microservice framework
Stars: ✭ 14 (-44%)
Mutual labels:  service-discovery
Uragano
Uragano, A simple, high performance RPC library. Support load balancing, circuit breaker, fallback, caching, intercepting.
Stars: ✭ 28 (+12%)
Mutual labels:  service-discovery
Express Gateway
A microservices API Gateway built on top of Express.js
Stars: ✭ 2,583 (+10232%)
Mutual labels:  service-discovery
kongsul
Kong Api Gateway with Consul Service Discovery (MicroService)
Stars: ✭ 35 (+40%)
Mutual labels:  service-discovery
sample-envoy-proxy
Custom implementation of service discovery with Envoy and inter-service communication for Spring Boot applications
Stars: ✭ 29 (+16%)
Mutual labels:  service-discovery
Wgsd
A CoreDNS plugin that provides WireGuard peer information via DNS-SD semantics
Stars: ✭ 169 (+576%)
Mutual labels:  service-discovery
blogr-pve
Puppet provisioning of HA failover/cluster environment implemented in Proxmox Virtual Environment and Linux boxes.
Stars: ✭ 28 (+12%)
Mutual labels:  service-discovery
Lighthouse
Lighthouse - a simple service discovery platform for Akka.Cluster (Akka.NET)
Stars: ✭ 164 (+556%)
Mutual labels:  service-discovery
juno-agent
juno-agent
Stars: ✭ 46 (+84%)
Mutual labels:  service-discovery
microservices-developer-roadmap
Roadmap for becoming a Microservice Developer in 2017
Stars: ✭ 24 (-4%)
Mutual labels:  service-discovery
dnsdisco
DNS service discovery library
Stars: ✭ 25 (+0%)
Mutual labels:  service-discovery
microservices4vaadin
Sample application to show the secured integration of microservices and vaadin
Stars: ✭ 30 (+20%)
Mutual labels:  service-discovery


IMPORTANT NOTICE

Prometheus recently added native Swarm support

For more details see Issue #14

What?

Syncs services/tasks from Docker Swarm to Prometheus by pretending to be a Triton service discovery endpoint, which is a built-in service discovery module in Prometheus.

Features:

  • Have your container metrics scraped fully automatically to Prometheus.
  • We don't have to make ANY changes to Prometheus (or its container) to support Docker Swarm (except configuration changes).
  • Supports overriding metrics endpoint (default /metrics) and port.
  • Supports clustering, so containers are discovered from all nodes. Neither Prometheus nor promswarmconnect needs to run on the Swarm manager node.
    • promswarmconnect needs to run on Swarm manager if you use the docker.sock mount option
  • Supports scoping Prometheus job label to a) container (default), b) host (think host-level metrics) or c) static string (think cluster-wide metrics). Read more

NOTE: the drawing is for option 2). This is even simpler if you use option 1) with socket mount.

How to deploy

Run the image from Docker Hub (see top of README) with the configuration mentioned below. Both options use "VERSION" as a placeholder for the image tag. You'll find the latest version on Docker Hub. We don't currently publish a "latest" tag, so versions are immutable.

You need to run promswarmconnect and Prometheus on the same network.

Option 1: run on Swarm manager node with mounted docker.sock

This is the easiest option, but requires you to have a placement constraint to guarantee that promswarmconnect always runs on the manager node - its Docker socket is the only API with knowledge of the whole cluster state.

$ docker service create \
	--name promswarmconnect \
	--constraint node.role==manager \
	--mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
	--env "DOCKER_URL=unix:///var/run/docker.sock" \
	--env "NETWORK_NAME=yourNetwork" \
	--network yourNetwork \
	"fn61/promswarmconnect:VERSION"

NOTE: unix:.. contains three forward slashes!

Option 2: run on any node by having Docker's socket exposed over HTTPS

This may be useful if you have other needs that also require exposing Docker's port. For example, I run Portainer on my own computer, and it needs to dial Docker's socket over TLS from the outside world.

Docker's socket needs to be exposed over HTTPS with client cert authentication. We use dockersockproxy for this. You can achieve the same with pure Docker configuration (expose the API over HTTPS), but I found it much easier not to mess with default Docker settings and instead do this by just deploying a container.

In the configuration below, DOCKER_CLIENTCERT (and its key) refer to the client cert that is allowed to connect to the Docker socket over HTTPS. They can be encoded to base64 like this:

  • $ cat cert.pem | base64 -w 0
  • $ cat cert.key | base64 -w 0
$ docker service create \
	--name promswarmconnect \
	--env "DOCKER_URL=https://dockersockproxy:4431" \
	--env "DOCKER_CLIENTCERT=..." \
	--env "DOCKER_CLIENTCERT_KEY=..." \
	--env "NETWORK_NAME=yourNetwork" \
	--network yourNetwork \
	"fn61/promswarmconnect:VERSION"

Obviously, you need to replace URL and port with your Docker socket's details.

Verify that it's working

Before moving on to configure Prometheus, verify that promswarmconnect is working.

Grab an Alpine container (on the same network), and verify that you can $ curl the API:

$ docker run --rm -it --network yourNetwork alpine sh
$ apk add curl
$ curl -k https://promswarmconnect/v1/discover
{
  "containers": [
    {
      "server_uuid": "/metrics",
      "vm_alias": "10.0.1.7:8081",
      "vm_brand": "http",
      "vm_image_uuid": "traefik_traefik",
      "vm_uuid": "rsvltiqm6nbcj72ibi7bess0w"
    },
    {
      "server_uuid": "/metrics",                 <-- __metrics_path__
      "vm_alias": "10.0.1.15:80",                <-- __address__
      "vm_brand": "http",                        <-- __scheme__
      "vm_image_uuid": "hellohttp_hellohttp",    <-- job (Docker service name)
      "vm_uuid": "p44b6yr05ucmhpl0teiadq3jt"     <-- instance (Docker task ID)
    }
  ]
}

More info here on why the JSON keys differ so much from the Prometheus labels they'll be relabeled to (see also our config example).
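As a hedged sketch (not the project's exact shipped config), the Triton-based scrape job could look like the following. The `account` and `dns_suffix` values and the TLS setting here are placeholder assumptions, and the `__meta_triton_*` label names come from Prometheus's documented Triton service discovery; check the project's own config example before relying on this:

```yaml
scrape_configs:
  - job_name: swarm  # overwritten per-target by relabeling below
    triton_sd_configs:
      - account: anything            # required by Prometheus's Triton SD, placeholder here
        dns_suffix: promswarmconnect # placeholder assumption
        endpoint: promswarmconnect   # your promswarmconnect Docker service name
        port: 443
        version: 1
        tls_config:
          insecure_skip_verify: true # assumption: self-signed cert on the endpoint
    relabel_configs:
      # Map the Triton meta labels back to their real Prometheus meanings,
      # mirroring the key mapping shown in the /v1/discover output above.
      - source_labels: [__meta_triton_machine_alias]
        target_label: __address__
      - source_labels: [__meta_triton_server_id]
        target_label: __metrics_path__
      - source_labels: [__meta_triton_machine_brand]
        target_label: __scheme__
      - source_labels: [__meta_triton_machine_image]
        target_label: job
      - source_labels: [__meta_triton_machine_id]
        target_label: instance
```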

Running in a swarm via docker-compose

You can launch promswarmconnect via docker-compose; the entry for the promswarmconnect container would look similar to this:

promswarmconnect:
    image: fn61/promswarmconnect:20190126_1620_7b450c47
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DOCKER_URL=unix:///var/run/docker.sock
      - NETWORK_NAME=<CHANGETOSTACKNETNAME>
    deploy:
      placement:
        constraints: [node.role == manager]

Then for each service you wish to monitor metrics for, add an environment var as noted above in this readme, for example:

nats_monitoring:
    image: ainsey11/nats_prometheus_exporter
    environment:
      - METRICS_ENDPOINT=:7777/metrics
    ports:
      - 7777:7777
    command: ["-varz", "-connz", "-routez", "-subz", "http://nats:8222"]

Exporting per node metrics via multiple containers in a service

The prime use case for this is running something like node_exporter or cAdvisor as a service with a global constraint, so that each Docker host runs, say, one cAdvisor container. The problem is that by default promswarmconnect doesn't return the hostname of the host a container is on, which makes it difficult to differentiate containers once the data surfaces in Prometheus.

As described in Issue #4, you can edit the environment variable of the container you wish to autodiscover, as follows: METRICS_ENDPOINT=/metrics,instance=_HOSTNAME_

This will then return a value similar to:

    {
      "server_uuid": "/metrics",
      "vm_alias": "10.0.3.86:8080",
      "vm_brand": "http",
      "vm_image_uuid": "test_stack1_cadvisor",
      "vm_uuid": "nc-docker-1"
    },
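In compose form, a global cAdvisor service using this override could be sketched as follows (the image tag and exposed port 8080 are assumptions based on cAdvisor's defaults, not taken from this project):

```yaml
cadvisor:
    image: gcr.io/cadvisor/cadvisor:v0.47.0
    environment:
      # instance=_HOSTNAME_ makes each node's container distinguishable in Prometheus
      - METRICS_ENDPOINT=:8080/metrics,instance=_HOSTNAME_
    deploy:
      mode: global  # one container per Swarm node
```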

Configuring Prometheus

Configure your Prometheus: example configuration that works for us.

The relabeling steps are really important.

The endpoint needs to be the name of the Docker service you use to run promswarmconnect.

Pro-tip: you could probably use our Prometheus image (check the Docker Hub link) as-is; if not for production, then at least to check whether this concept works for you!

Considerations for running containers

promswarmconnect only picks up containers whose service-level ENV vars specify METRICS_ENDPOINT=/metrics. To use a port other than 80, specify METRICS_ENDPOINT=:8080/metrics. The metrics path is configurable the same way.
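Opting a service in therefore happens at deploy time. A minimal sketch, where the service name, image, and port are placeholders for your own:

```shell
docker service create \
	--name your-app \
	--env "METRICS_ENDPOINT=:8080/metrics" \
	--network yourNetwork \
	your/image:TAG
```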

For a complete demo with dummy application, deploy:

  • promswarmconnect (instructions are in this document),
  • our Prometheus image (see the pro-tip above) and
  • hellohttp (it has built-in Prometheus metrics)

FAQ

Can I read DOCKER_CLIENTCERT or DOCKER_CLIENTCERT_KEY from file or use Docker secrets?

Yes, see #10

TLS?

See #12

How to build & develop

How to build & develop (with Turbo Bob, our build tool). It's easy and simple!

Alternatives & links
