exoscale / multi-master-kubernetes

Licence: other
Multi-master Kubernetes cluster on Exoscale

Programming Languages

  • Python
  • HTML

Projects that are alternatives of or similar to multi-master-kubernetes

Ansible Role K3s
Ansible role for installing k3s as either a standalone server or HA cluster.
Stars: ✭ 132 (+103.08%)
Mutual labels:  playbook, kubernetes-cluster, k8s
kubernetes the easy way
Automating Kubernetes the hard way with Vagrant and scripts
Stars: ✭ 22 (-66.15%)
Mutual labels:  kubernetes-cluster, k8s, k8s-cluster
GPU-Kubernetes-Guide
How to setup a production-grade Kubernetes GPU cluster on Paperspace in 10 minutes for $10
Stars: ✭ 34 (-47.69%)
Mutual labels:  kubernetes-cluster, k8s-cluster
Metalk8s
An opinionated Kubernetes distribution with a focus on long-term on-prem deployments
Stars: ✭ 217 (+233.85%)
Mutual labels:  kubernetes-cluster, k8s
Openfaas On Digitalocean
Ansible playbook to create a Digital Ocean droplet and deploy OpenFaaS onto it.
Stars: ✭ 57 (-12.31%)
Mutual labels:  playbook, k8s
Owasp Workshop
owasp-workshop: Orchestrating containers with Kubernetes
Stars: ✭ 116 (+78.46%)
Mutual labels:  kubernetes-cluster, k8s
K3s Ansible
Ansible playbook to deploy k3s kubernetes cluster
Stars: ✭ 153 (+135.38%)
Mutual labels:  kubernetes-cluster, k8s
Katlas
A distributed graph-based platform to automatically collect, discover, explore and relate multi-cluster Kubernetes resources and metadata.
Stars: ✭ 179 (+175.38%)
Mutual labels:  kubernetes-cluster, k8s
kube-watch
Simple tool to get webhooks on Kubernetes cluster events
Stars: ✭ 21 (-67.69%)
Mutual labels:  kubernetes-cluster, k8s
deploy
Deploy Development Builds of Open Cluster Management (OCM) on Red Hat OpenShift Container Platform
Stars: ✭ 133 (+104.62%)
Mutual labels:  k8s, k8s-cluster
K8s Digitalocean Terraform
Deploy latest Kubernetes cluster on DigitalOcean using Terraform
Stars: ✭ 33 (-49.23%)
Mutual labels:  kubernetes-cluster, k8s
kubectl-janitor
List Kubernetes objects in a problematic state
Stars: ✭ 48 (-26.15%)
Mutual labels:  kubernetes-cluster, k8s
K8s On Raspbian
Kubernetes on Raspbian (Raspberry Pi)
Stars: ✭ 839 (+1190.77%)
Mutual labels:  kubernetes-cluster, k8s
Kubernetes Lxd
A step-by-step guide to get kubernetes running inside an LXC container
Stars: ✭ 173 (+166.15%)
Mutual labels:  kubernetes-cluster, k8s
Geodesic
🚀 Geodesic is a DevOps Linux Distro. We use it as a cloud automation shell. It's the fastest way to get up and running with a rock solid Open Source toolchain. ★ this repo! https://slack.cloudposse.com/
Stars: ✭ 629 (+867.69%)
Mutual labels:  kubernetes-cluster, k8s
K3sup
bootstrap Kubernetes with k3s over SSH < 1 min 🚀
Stars: ✭ 4,012 (+6072.31%)
Mutual labels:  kubernetes-cluster, k8s
Kubernetes Certified Administrator
Online resources that will help you prepare for taking the CNCF CKA 2020 "Kubernetes Certified Administrator" certification exam. This is unlikely to remain a comprehensive, up-to-date list over time - please make a pull request if there is something that should be added here.
Stars: ✭ 3,438 (+5189.23%)
Mutual labels:  kubernetes-cluster, k8s
Kubekey
Provides a flexible, rapid and convenient way to install Kubernetes alone, or both Kubernetes and KubeSphere, and related cloud-native add-ons. It is also an efficient tool to scale and upgrade your cluster.
Stars: ✭ 288 (+343.08%)
Mutual labels:  kubernetes-cluster, k8s
k0s-ansible
Create a Kubernetes Cluster using Ansible and the vanilla upstream Kubernetes distro k0s.
Stars: ✭ 56 (-13.85%)
Mutual labels:  playbook, kubernetes-cluster
mck8s
mck8s: Orchestration platform for multi-cluster k8s environments
Stars: ✭ 60 (-7.69%)
Mutual labels:  kubernetes-cluster, k8s

DEPRECATED

Please look at other deployment options for production-ready Kubernetes clusters on Exoscale:

  • Cluster API
  • Native managed kubernetes from Exoscale
  • Rancher and other control panels

Multi-master Kubernetes

This Ansible playbook helps you set up a multi-master Kubernetes cluster on Exoscale.

Getting started

To run this playbook, you need a working Docker installation and a basic understanding of containers and volumes. You also need an Exoscale account and the corresponding API key and secret.

You can get your key and secret here: https://portal.exoscale.com/account/profile/api

Let's bootstrap a cluster.

# Run the container and mount a data volume for the cluster specific secrets
docker run -ti -v k8s_secrets:/secret exoscale/multi-master-kubernetes

# Set EXO_API_KEY and EXO_API_SECRET environment variables
export EXO_API_KEY=
export EXO_API_SECRET=

# Then run the cluster-bootstrap playbook
ansible-playbook cluster-bootstrap.yml

Tip: The cluster-bootstrap playbook is safe to re-run at any time to make sure your cluster is configured correctly.

Bootstrapping the cluster takes a few minutes. When the playbook finishes, you can see the cluster nodes come up using:

kubectl get nodes -w

Note: kubectl is set up automatically inside the container. To use it outside the container as well, copy the kubeconfig file from the data volume into ~/.kube/config, as sketched below.
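
One way to get the file out is a throwaway container that reads the volume. This is a minimal sketch: the file name kubeconfig inside the k8s_secrets volume is an assumption, so list the volume contents first.

# See what the playbook stored in the secrets volume
docker run --rm -v k8s_secrets:/secret alpine ls /secret

# Copy the kubeconfig to the host (adjust the file name to what you find)
docker run --rm -v k8s_secrets:/secret alpine cat /secret/kubeconfig > ~/.kube/config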

Add more worker nodes

If you want to add more workers, simply run the worker-add playbook and specify the desired number of worker nodes. The default cluster has 3 worker nodes; the command below adds 2 more for a total of 5.

ansible-playbook -e desired_num_worker_nodes=5 worker-add.yml
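
Once the playbook finishes, a quick way to confirm the new workers have registered (assuming kubectl is configured as described above):

# Count the registered nodes; with 3 masters and 5 workers, expect 8
kubectl get nodes --no-headers | wc -l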

Update Kubernetes

The cluster-upgrade playbook updates Kubernetes on each of the nodes one by one and restarts services as required. The upgrade leads to a short unavailability of the apiserver due to the restart of the etcd members. Member restarts take a couple of retries before they succeed because their ports are still in use.

ansible-playbook cluster-upgrade.yml
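
To follow the rolling upgrade, you can watch the version each node reports change as the playbook works through the cluster. Expect brief connection errors while the etcd members restart; this is just an observation aid, not part of the playbook:

# Watch node status and kubelet versions during the upgrade
kubectl get nodes -o wide -w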

Architecture

The initial cluster consists of 3 master nodes and 3 worker nodes. Master nodes are pets, worker nodes are cattle. All nodes run CoreOS.

Master nodes run:

  • infra-etcd2: Etcd2 cluster used for Flanneld overlay networking and Locksmithd
  • flanneld: for the container overlay network
  • locksmithd: to orchestrate automatic updates
  • dockerd
  • kubelet
  • kubernetes-etcd2: Etcd2 cluster used for Kubernetes
  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
  • kube-proxy

Worker nodes run:

  • flanneld: for the container overlay network
  • locksmithd: to orchestrate automatic updates
  • dockerd
  • kubelet
  • kube-proxy
  • haproxy
  • and your containers of course

Flanneld, locksmithd, Docker, infra-etcd2 and the kubelet are started by systemd. All other components, most notably kubernetes-etcd2 and the kube-* components, are started by the kubelet.
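
You can verify this split on any master node. A minimal sketch; the exact unit names in this setup may differ slightly:

# Components managed by systemd show up as service units
systemctl list-units --type=service | grep -E 'flanneld|locksmithd|docker|etcd|kubelet'

# Components started by the kubelet show up as pods in kube-system
kubectl get pods -n kube-system -o wide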

CoreOS is configured to do automatic updates. Locksmith is configured to make sure only one of the six cluster nodes reboots at a time. It also enforces a daily maintenance window: between 4 and 5 am for master nodes, and between 5 and 6 am for worker nodes. Automatic updates only cover the OS components that are part of CoreOS.
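
Locksmith ships a small CLI that shows whether a reboot lock is currently held. Shown as an illustration; run it on any cluster node:

# Show the current reboot lock status (holders and available slots)
locksmithctl status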

Ingress

Cluster bootstrap includes the nginx-ingress-controller to make services available externally using ingress resources.

Haproxy on each worker node listens on 0.0.0.0:80 and 0.0.0.0:443 and forwards TCP traffic to the ingress controller service.

Simply set up a wildcard DNS entry pointing to the IPs of your worker nodes.
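
With DNS in place, a service can be exposed through an ingress resource. A minimal sketch, where the host and service names are placeholders; note that the Kubernetes versions this playbook targets use the older extensions/v1beta1 ingress API:

# Route traffic for a host covered by the wildcard DNS entry to a service
# (my-ingress, app.example.com and my-service are hypothetical examples)
kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
EOF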

Kube-lego is supported by the nginx-ingress-controller but is not automatically installed.

Security

Master and worker nodes each have their own security group. Only the required ports are opened between nodes within the same group and between nodes of the two groups.

All nodes allow external SSH access. (Required for Ansible unless you use a bastion host.)

On top of the firewall rules enforced by the security groups, all components are configured to communicate via TLS using certificates.

The required certificate authorities and certificates are generated automatically using cfssl.
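
To inspect one of the generated certificates, for example to check its subject alternative names or expiry date, copy it out of the secrets volume and use openssl. The file name ca.pem is an assumption; list the volume to see what cfssl actually produced:

# Find and copy a certificate out of the secrets volume
docker run --rm -v k8s_secrets:/secret alpine ls /secret
docker run --rm -v k8s_secrets:/secret alpine cat /secret/ca.pem > ca.pem

# Print issuer, validity and SANs
openssl x509 -in ca.pem -noout -text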
