
pires / Kubernetes Nats Cluster

Licence: apache-2.0
NATS cluster on top of Kubernetes made easy.

Programming Languages

go
31211 projects - #10 most used programming language

Projects that are alternatives of or similar to Kubernetes Nats Cluster

go-nats-examples
Single repository for go-nats example code. This includes all documentation examples and any common message pattern examples.
Stars: ✭ 99 (-41.07%)
Mutual labels:  messaging, nats
Liftbridge
Lightweight, fault-tolerant message streams.
Stars: ✭ 2,175 (+1194.64%)
Mutual labels:  messaging, nats
Nats.net
The official C# Client for NATS
Stars: ✭ 378 (+125%)
Mutual labels:  messaging, nats
Phpnats
A PHP client for the NATS.io cloud messaging system.
Stars: ✭ 209 (+24.4%)
Mutual labels:  messaging, nats
liftbridge-api
Protobuf definitions for the Liftbridge gRPC API. https://github.com/liftbridge-io/liftbridge
Stars: ✭ 15 (-91.07%)
Mutual labels:  messaging, nats
Nats.java
Java client for NATS
Stars: ✭ 325 (+93.45%)
Mutual labels:  messaging, nats
Nats.rb
Ruby client for NATS, the cloud native messaging system.
Stars: ✭ 850 (+405.95%)
Mutual labels:  messaging, nats
Lightbus
RPC & event framework for Python 3
Stars: ✭ 149 (-11.31%)
Mutual labels:  messaging
Laravel Queue
Laravel Enqueue message queue extension. Supports AMQP, Amazon SQS, Kafka, Google PubSub, Redis, STOMP, Gearman, Beanstalk and others
Stars: ✭ 155 (-7.74%)
Mutual labels:  messaging
Go Micro Boilerplate
The boilerplate of the GoLang application with a clear microservices architecture.
Stars: ✭ 147 (-12.5%)
Mutual labels:  nats
Jsqmessagesviewcontroller
An elegant messages UI library for iOS
Stars: ✭ 11,240 (+6590.48%)
Mutual labels:  messaging
Nsqsharp
A .NET library for NSQ, a realtime distributed messaging platform
Stars: ✭ 150 (-10.71%)
Mutual labels:  messaging
Webapp
Tinode web chat using React
Stars: ✭ 156 (-7.14%)
Mutual labels:  messaging
Dontclickshit
How not to become a cyber victim
Stars: ✭ 149 (-11.31%)
Mutual labels:  messaging
Mnm
The legitimate email replacement — n-identity, decentralized, store-and-forward, open protocol, open source. (Server)
Stars: ✭ 162 (-3.57%)
Mutual labels:  messaging
Messager
A convenient way to handle messages between users in a simple way
Stars: ✭ 147 (-12.5%)
Mutual labels:  messaging
Django instagram
Photo sharing social media site built with Python/Django. Based on Instagram's design.
Stars: ✭ 165 (-1.79%)
Mutual labels:  messaging
Qpid Proton
Mirror of Apache Qpid Proton
Stars: ✭ 164 (-2.38%)
Mutual labels:  messaging
Sum
SUM - Secure Ultimate Messenger
Stars: ✭ 154 (-8.33%)
Mutual labels:  messaging
Garagemq
AMQP message broker implemented with golang
Stars: ✭ 153 (-8.93%)
Mutual labels:  messaging

kubernetes-nats-cluster

NATS cluster on top of Kubernetes made easy.

THIS PROJECT HAS BEEN ARCHIVED. SEE https://github.com/nats-io/nats-operator

NOTE: This repository provides a configurable way to deploy secure, available and scalable NATS clusters. However, a smarter solution is on the way (see #5).

Pre-requisites

  • Kubernetes cluster v1.8+ - tested with v1.9.0 on top of Vagrant + CoreOS
  • At least 3 nodes available (see Pod anti-affinity)
  • kubectl configured to access your cluster master API Server
  • openssl for TLS certificate generation

Deploy

We will be deploying a cluster of 3 NATS instances, with the following set-up (a configuration sketch follows this list):

  • TLS enabled for client connections, but not for route/cluster connections, since peer authentication would require real DNS SANs in the certificates
  • NATS client credentials: nats_client_user:nats_client_pwd
  • NATS route/cluster credentials: nats_route_user:nats_route_pwd
  • Logging: debug:false, trace:true, logtime:true
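
For reference, the relevant parts of such a configuration could look roughly like the sketch below. This is illustrative only - check the nats.conf in this repository for the authoritative version - and the TLS file paths assume the tls-nats-server secret (created further down) is mounted at /etc/nats/tls:

port: 4222
http: 8222

tls {
  cert_file: "/etc/nats/tls/nats.pem"
  key_file: "/etc/nats/tls/nats-key.pem"
}

authorization {
  user: nats_client_user
  password: nats_client_pwd
}

cluster {
  port: 6222
  authorization {
    user: nats_route_user
    password: nats_route_pwd
  }
}

debug: false
trace: true
logtime: true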

First, make sure to change nats.conf according to your needs. Then create a Kubernetes configmap to store it:

kubectl create configmap nats-config --from-file nats.conf

Next, we need to generate valid TLS artifacts:

# Create a self-signed CA (key and certificate)
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"

# Create the NATS server key and a CSR, then sign it with the CA above;
# extensions and SANs come from ssl.cnf
openssl genrsa -out nats-key.pem 2048
openssl req -new -key nats-key.pem -out nats.csr -subj "/CN=kube-nats" -config ssl.cnf
openssl x509 -req -in nats.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out nats.pem -days 3650 -extensions v3_req -extfile ssl.cnf
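
The last two commands read the certificate extensions and SANs from ssl.cnf, which ships with this repository. As a point of reference only (the entries below are illustrative, not the repository's actual file), an OpenSSL config for this purpose typically looks like:

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]

[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
# illustrative SANs: the headless service plus per-pod names
DNS.1 = nats
DNS.2 = *.nats.default.svc.cluster.local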

Then, it's time to create a couple of Kubernetes secrets to store the TLS artifacts:

  • tls-nats-server for the NATS server TLS setup
  • tls-nats-client for NATS client apps - they will need it to validate the self-signed certificate that secures the NATS server (a volume mount sketch follows the commands below)
kubectl create secret generic tls-nats-server --from-file nats.pem --from-file nats-key.pem --from-file ca.pem
kubectl create secret generic tls-nats-client --from-file ca.pem
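
A client workload consumes tls-nats-client by mounting it as a volume. The excerpt below is a hypothetical pod spec fragment (container name, image and mount path are placeholders, not part of this repository):

spec:
  containers:
    - name: my-nats-client             # hypothetical client container
      image: example/my-nats-client    # hypothetical image
      volumeMounts:
        - name: tls-nats-client
          mountPath: /etc/nats-tls     # ca.pem becomes /etc/nats-tls/ca.pem
          readOnly: true
  volumes:
    - name: tls-nats-client
      secret:
        secretName: tls-nats-client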

ATTENTION: Using self-signed certificates, and re-using the same certificate to secure both client and cluster connections, is a significant security compromise. But for the sake of showing how it can be done, I'm fine with doing just that. In an ideal scenario, there should be:

  • One centralized PKI/CA
  • One certificate for securing NATS route/cluster connections
  • One certificate for securing NATS client connections
  • TLS route/cluster authentication should be enforced, so one TLS certificate per route/cluster peer
  • TLS client authentication should be enforced, so one TLS certificate per client

And finally, we deploy NATS:

kubectl create -f nats.yml

Logs should be enough to make sure everything is working as expected:

$ kubectl logs -f nats-0
[1] 2017/12/17 12:38:37.801139 [INF] Starting nats-server version 1.0.4
[1] 2017/12/17 12:38:37.801449 [INF] Starting http monitor on 0.0.0.0:8222
[1] 2017/12/17 12:38:37.801580 [INF] Listening for client connections on 0.0.0.0:4242
[1] 2017/12/17 12:38:37.801772 [INF] TLS required for client connections
[1] 2017/12/17 12:38:37.801778 [INF] Server is ready
[1] 2017/12/17 12:38:37.802078 [INF] Listening for route connections on 0.0.0.0:6222
[1] 2017/12/17 12:38:38.874497 [TRC] 10.244.1.3:33494 - rid:1 - ->> [CONNECT {"verbose":false,"pedantic":false,"user":"nats_route_user","pass":"nats_route_pwd","tls_required":true,"name":"KGMPnL89We3gFLEjmp8S5J"}]
[1] 2017/12/17 12:38:38.956806 [TRC] 10.244.74.2:46018 - rid:3 - ->> [CONNECT {"verbose":false,"pedantic":false,"user":"nats_route_user","pass":"nats_route_pwd","tls_required":true,"name":"Skc5mx9enWrGPIQhyE7uzR"}]
[1] 2017/12/17 12:38:39.951160 [TRC] 10.244.1.4:46242 - rid:4 - ->> [CONNECT {"verbose":false,"pedantic":false,"user":"nats_route_user","pass":"nats_route_pwd","tls_required":true,"name":"0kaCfF3BU8g92snOe34251"}]
[1] 2017/12/17 12:40:38.956203 [TRC] 10.244.74.2:46018 - rid:3 - <<- [PING]
[1] 2017/12/17 12:40:38.958279 [TRC] 10.244.74.2:46018 - rid:3 - ->> [PING]
[1] 2017/12/17 12:40:38.958300 [TRC] 10.244.74.2:46018 - rid:3 - <<- [PONG]
[1] 2017/12/17 12:40:38.961791 [TRC] 10.244.74.2:46018 - rid:3 - ->> [PONG]
[1] 2017/12/17 12:40:39.951421 [TRC] 10.244.1.4:46242 - rid:4 - <<- [PING]
[1] 2017/12/17 12:40:39.952578 [TRC] 10.244.1.4:46242 - rid:4 - ->> [PONG]
[1] 2017/12/17 12:40:39.952594 [TRC] 10.244.1.4:46242 - rid:4 - ->> [PING]
[1] 2017/12/17 12:40:39.952598 [TRC] 10.244.1.4:46242 - rid:4 - <<- [PONG]

Scale

WARNING: Due to the Pod anti-affinity rule, scaling up to n NATS instances requires n available Kubernetes nodes.

kubectl scale statefulsets nats --replicas 5

Did it work?
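
A quick check is to list the service and the pods, e.g. with the command below (the svc/ and po/ prefixes in the output match how kubectl of that era printed mixed resource lists):

$ kubectl get svc,po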

NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
svc/kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP                      1h
svc/nats         ClusterIP   None         <none>        4222/TCP,6222/TCP,8222/TCP   4m

NAME        READY     STATUS    RESTARTS   AGE
po/nats-0   1/1       Running   0          4m
po/nats-1   1/1       Running   0          4m
po/nats-2   1/1       Running   0          4m
po/nats-3   1/1       Running   0          7s
po/nats-4   1/1       Running   0          6s

Access the service

Don't forget that services in Kubernetes are only accessible from containers running inside the cluster.

In this case, we're using a headless service.

Just point your client apps to:

nats:4222
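
As an illustration, a minimal Go client wired up against this deployment could look like the sketch below. It uses the github.com/nats-io/go-nats package that was current when this project was maintained (newer code would import github.com/nats-io/nats.go), the client credentials from nats.conf, and the hypothetical /etc/nats-tls/ca.pem mount path from the secret example above; none of this is shipped by the repository itself:

package main

import (
	"log"
	"time"

	nats "github.com/nats-io/go-nats"
)

func main() {
	// Connect to the headless service. The server certificate's SANs must
	// cover the name used here ("nats"), and ca.pem comes from the
	// tls-nats-client secret mounted at a hypothetical path.
	nc, err := nats.Connect("nats://nats:4222",
		nats.UserInfo("nats_client_user", "nats_client_pwd"),
		nats.RootCAs("/etc/nats-tls/ca.pem"),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Smoke test: subscribe, publish and wait for the message to come back.
	sub, err := nc.SubscribeSync("greeting")
	if err != nil {
		log.Fatal(err)
	}
	if err := nc.Publish("greeting", []byte("hello NATS")); err != nil {
		log.Fatal(err)
	}
	if err := nc.Flush(); err != nil {
		log.Fatal(err)
	}
	msg, err := sub.NextMsg(2 * time.Second)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("received: %s", msg.Data)
}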

Pod anti-affinity

One of the main advantages of running NATS on top of Kubernetes is how resilient the cluster becomes, particularly during node restarts. However, if all NATS pods are scheduled onto the same node(s), this advantage decreases significantly and may even result in service downtime.

It is therefore highly recommended to adopt pod anti-affinity in order to increase availability. This is enabled by default (see nats.yml); a sketch of the rule is shown below.
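
The rule itself is a standard Kubernetes podAntiAffinity constraint. Conceptually it looks like the excerpt below; the label key and value are illustrative, so check nats.yml for the exact selector used:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - nats
        topologyKey: kubernetes.io/hostname   # at most one NATS pod per node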
