
SUSE / Skuba

License: Apache-2.0
CLI tool used to simplify (or orchestrate) kubeadm-based Kubernetes cluster deployment and update

Programming Languages

go
31211 projects - #10 most used programming language

Projects that are alternatives of or similar to Skuba

GPU-Kubernetes-Guide
How to setup a production-grade Kubernetes GPU cluster on Paperspace in 10 minutes for $10
Stars: ✭ 34 (-55.26%)
Mutual labels:  kubeadm, kubernetes-deployment
kubash
Kubash - the K8$ shell for your kube clusters
Stars: ✭ 20 (-73.68%)
Mutual labels:  kubeadm, kubernetes-deployment
kubeadm-bootstrap
Supporting code + documentation for bootstrapping a kubeadm installation on bare-metal-ish machinery
Stars: ✭ 23 (-69.74%)
Mutual labels:  kubeadm, kubernetes-deployment
Kubekey
Provides a flexible, rapid and convenient way to install Kubernetes only, both Kubernetes and KubeSphere, and related cloud-native add-ons. It is also an efficient tool to scale and upgrade your cluster.
Stars: ✭ 288 (+278.95%)
Mutual labels:  kubernetes-deployment, kubeadm
K8s Digitalocean Terraform
Deploy latest Kubernetes cluster on DigitalOcean using Terraform
Stars: ✭ 33 (-56.58%)
Mutual labels:  kubeadm
Rssbox
📰 I consume the world via RSS feeds, and this is my attempt to keep it that way.
Stars: ✭ 492 (+547.37%)
Mutual labels:  kubernetes-deployment
Kubeadm Ansible
Build a Kubernetes cluster using kubeadm via Ansible.
Stars: ✭ 479 (+530.26%)
Mutual labels:  kubeadm
Kube Spawn
A tool for creating multi-node Kubernetes clusters on a Linux machine using kubeadm & systemd-nspawn. Brought to you by the Kinvolk team.
Stars: ✭ 392 (+415.79%)
Mutual labels:  kubeadm
Container Service Extension
Container Service for VMware vCloud Director
Stars: ✭ 66 (-13.16%)
Mutual labels:  kubernetes-deployment
Airflow Toolkit
Any Airflow project day 1, you can spin up a local desktop Kubernetes Airflow environment AND one in Google Cloud Composer with tested data pipelines(DAGs) 🖥 >> [ 🚀, 🚢 ]
Stars: ✭ 51 (-32.89%)
Mutual labels:  kubernetes-deployment
Kubernetes Che
Example deploying Eclipse Che on a Kubernetes cluster
Stars: ✭ 17 (-77.63%)
Mutual labels:  kubernetes-deployment
Kubeadm Playbook
Fully fledged (HA) Kubernetes Cluster using official kubeadm, ansible and helm. Tested on RHEL/CentOS/Ubuntu with support of http_proxy, dashboard installed, ingress controller, heapster - using official helm charts
Stars: ✭ 533 (+601.32%)
Mutual labels:  kubeadm
Kind
Kubernetes IN Docker - local clusters for testing Kubernetes
Stars: ✭ 8,932 (+11652.63%)
Mutual labels:  kubeadm
Carvel Kapp
kapp is a simple deployment tool focused on the concept of "Kubernetes application" — a set of resources with the same label
Stars: ✭ 489 (+543.42%)
Mutual labels:  kubernetes-deployment
Hkube
Kubernetes cluster deployment to Hetzner Cloud
Stars: ✭ 55 (-27.63%)
Mutual labels:  kubernetes-deployment
Cni Genie
CNI-Genie for choosing pod network of your choice during deployment time. Supported pod networks - Calico, Flannel, Romana, Weave
Stars: ✭ 408 (+436.84%)
Mutual labels:  kubeadm
Kubeadm Ha
Kubernetes high availability deploy based on kubeadm, loadbalancer included (English/中文 for v1.15 - v1.20+)
Stars: ✭ 614 (+707.89%)
Mutual labels:  kubeadm
60sk3s
Deploy VMs and 4 node k3s cluster on them in under 60 seconds
Stars: ✭ 51 (-32.89%)
Mutual labels:  kubernetes-deployment
Kubeadm Workshop
Showcasing a bare-metal multi-platform kubeadm setup with persistent storage and monitoring
Stars: ✭ 593 (+680.26%)
Mutual labels:  kubeadm
Kubernetes On Arm
Kubernetes ported to ARM boards like Raspberry Pi.
Stars: ✭ 572 (+652.63%)
Mutual labels:  kubeadm

skuba

Tool to manage the full lifecycle of a cluster.

Prerequisites

The infrastructure required for deploying CaaSP must exist beforehand, and you need SSH access to those machines from the machine you are running skuba on. skuba also requires your SSH keys to be added to the SSH agent on this machine, e.g.:

ssh-add ~/.ssh/id_rsa

The system running skuba must have kubectl available.
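The two prerequisites above can be sanity-checked before running skuba. The following is an illustrative sketch (not part of skuba itself) for a POSIX shell:

```shell
# Check the prerequisites for running skuba (illustrative sketch).

# Does the SSH agent have at least one key loaded?
agent_ok=no
ssh-add -l >/dev/null 2>&1 && agent_ok=yes

# Is kubectl available on PATH?
kubectl_ok=no
command -v kubectl >/dev/null 2>&1 && kubectl_ok=yes

echo "ssh-agent keys loaded: $agent_ok"
echo "kubectl on PATH:       $kubectl_ok"
```

If either check prints `no`, run `ssh-add ~/.ssh/id_rsa` or install kubectl before continuing.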

Installation

go get github.com/SUSE/skuba/cmd/skuba

Development

A development build will:

  • Pull container images from registry.suse.de/devel/caasp/4.5/containers/containers/caasp/v4.5/

To build it, run:

make

Staging

A staging build will:

  • Pull container images from registry.suse.de/suse/sle-15-sp2/update/products/caasp/4.5/containers/caasp/v4.5

To build it, run:

make staging

Release

A release build will:

  • Pull container images from registry.suse.com/caasp/v4.5

To build it, run:

make release

Creating a cluster

cluster init

The init process creates the definition of your cluster. In the general case there is nothing to tweak, but you can go through the generated configuration and check that everything suits your taste.

Go to any directory on your machine, e.g. ~/clusters. From there, execute:

skuba cluster init --control-plane load-balancer.example.com company-cluster

This command generates a basic project scaffold in the company-cluster folder. You need to change into this new folder in order to run the rest of the commands in this README.

node bootstrap

You need to bootstrap the first master node of the cluster. You must run this command from inside the company-cluster folder.

skuba node bootstrap --user opensuse --sudo --target <IP/fqdn> my-master

You can check skuba node bootstrap --help for further options. The previous command means:

  • Bootstrap the node over an SSH connection to target <IP/fqdn>
    • Use the opensuse user when opening the SSH session
    • Use sudo when executing commands on the machine
  • Name the node my-master: this is the name Kubernetes will use to refer to your node

When this command has finished, some secrets will have been copied to your company-cluster folder. Namely:

  • Generated secrets are copied into the pki folder
  • The administrative admin.conf file of the cluster is copied to the root of the company-cluster folder
    • The company-cluster/admin.conf file is the kubeconfig configuration required by kubectl and other command line tools
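Once admin.conf exists, pointing kubectl at it is enough to talk to the new cluster. A minimal sketch, assuming you are inside the company-cluster folder:

```shell
# Use the kubeconfig generated by the bootstrap (sketch; run from the
# company-cluster folder).
export KUBECONFIG="$PWD/admin.conf"
echo "Using kubeconfig: $KUBECONFIG"

# With the cluster reachable you can now run, for example:
#   kubectl get nodes
```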

Growing a cluster

node join

Joining a node allows you to grow your Kubernetes cluster. You can join master nodes as well as worker nodes to your existing cluster. For this purpose you have to be inside the company-cluster folder.

This task will automatically create a new bootstrap token on the existing cluster that will be used for the kubelet TLS bootstrap to happen on the new node. The token will be fed automatically to the configuration used to join the new node.

This task also creates a configuration file named <IP/fqdn>.conf inside the kubeadm-join.conf.d folder, containing the join configuration that was used. If this file already exists it will be honored, with only a small subset of settings overridden automatically:

  • Bootstrap token to the one generated on demand
  • Kubelet extra args
    • node-ip if the --target is an IP address
    • hostname-override to the node-name provided as an argument
    • cni-bin-dir directory location if required
  • Node registration name to node-name provided as an argument
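To make the overrides above concrete, here is a hypothetical <IP/fqdn>.conf. The exact apiVersion and field set depend on the kubeadm version skuba ships, so treat this as an illustration of where each overridden setting lives, not an authoritative file:

```yaml
# Hypothetical kubeadm-join.conf.d/<IP/fqdn>.conf (illustrative only).
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: abcdef.0123456789abcdef    # replaced by the token generated on demand
    apiServerEndpoint: load-balancer.example.com:6443
nodeRegistration:
  name: second-master                 # overridden to the node-name argument
  kubeletExtraArgs:
    node-ip: 10.84.72.20              # set when --target is an IP address
    hostname-override: second-master  # set to the node-name argument
```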

master node join

This command will join a new master node to the cluster. This will also increase the etcd member count by one.

skuba node join --role master --user opensuse --sudo --target <IP/fqdn> second-master

worker node join

This command will join a new worker node to the cluster.

skuba node join --role worker --user opensuse --sudo --target <IP/fqdn> my-worker
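Joining several workers is just the same command repeated with different targets and names. A hedged sketch (the IP addresses are placeholders, and the leading echo keeps this a dry run; drop it to execute for real):

```shell
# Dry-run sketch: join three worker nodes in sequence.
# IP addresses are placeholders for your own machines.
workers="10.84.72.11 10.84.72.12 10.84.72.13"
i=0
for ip in $workers; do
  i=$((i + 1))
  echo skuba node join --role worker --user opensuse --sudo --target "$ip" "worker$i"
done
```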

Shrinking a cluster

node remove

It's possible to remove master and worker nodes from the cluster. All the required tasks to remove the target node will be performed automatically:

  • Drain the node (also cordoning it)
  • Mask and disable the kubelet service
  • If it's a master node:
    • Remove persisted information
      • etcd store
      • PKI secrets
    • Remove etcd member from the etcd cluster
    • Remove the endpoint from the kubeadm-config config map
  • Remove node from the cluster

For removing a node you only need to provide the name of the node known to Kubernetes:

skuba node remove my-worker

Or, if you want to remove a master node:

skuba node remove second-master

kubectl-caasp

This project also comes with a kubectl plugin that has the same command layout as skuba. With the kubectl-caasp binary installed in your PATH, you can invoke the same commands presented in skuba as kubectl caasp.

The purpose of the tool is to provide a quick way to see if nodes have pending upgrades.

$ kubectl caasp cluster status
NAME      STATUS   ROLE     OS-IMAGE                              KERNEL-VERSION           KUBELET-VERSION   CONTAINER-RUNTIME   HAS-UPDATES   HAS-DISRUPTIVE-UPDATES   CAASP-RELEASE-VERSION
master0   Ready    master   SUSE Linux Enterprise Server 15 SP1   4.12.14-197.29-default   v1.16.2           cri-o://1.16.0      no            no                       4.1.0
master1   Ready    master   SUSE Linux Enterprise Server 15 SP1   4.12.14-197.29-default   v1.16.2           cri-o://1.16.0      no            no                       4.1.0
master2   Ready    master   SUSE Linux Enterprise Server 15 SP1   4.12.14-197.29-default   v1.16.2           cri-o://1.16.0      no            no                       4.1.0
worker0   Ready    <none>   SUSE Linux Enterprise Server 15 SP1   4.12.14-197.29-default   v1.16.2           cri-o://1.16.0      no            no                       4.1.0
worker1   Ready    <none>   SUSE Linux Enterprise Server 15 SP1   4.12.14-197.29-default   v1.16.2           cri-o://1.16.0      no            no                       4.1.0
worker2   Ready    <none>   SUSE Linux Enterprise Server 15 SP1   4.12.14-197.29-default   v1.16.2           cri-o://1.16.0      no            no                       4.1.0

Demo

This quick screencast shows how easy it is to deploy a multi-master cluster on top of AWS. The procedure is the same for deployments on OpenStack or on libvirt.

The deployment is done on AWS via the Terraform files shared in the infra repository.

Videos:

The videos are uncut; as you will see, the whole deployment takes around 7 minutes: 4 minutes for the infrastructure and 3 minutes for the actual cluster.

The demo uses a small script to automate the sequential invocations of skuba. Anything can be used to do that, including bash.
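Such a script can be as simple as chaining the commands from this README. A minimal dry-run sketch (all names and addresses are placeholders; the leading echo prints each command instead of executing it, and a real run would also cd into the scaffold after init):

```shell
#!/bin/sh
# Dry-run sketch of the sequential skuba invocations used in the demo.
# Placeholders throughout; remove 'echo' to execute for real.
cluster=company-cluster
lb=load-balancer.example.com

echo skuba cluster init --control-plane "$lb" "$cluster"
# A real run would now: cd "$cluster"
echo skuba node bootstrap --user opensuse --sudo --target 10.84.72.1 master0
echo skuba node join --role master --user opensuse --sudo --target 10.84.72.2 master1
echo skuba node join --role master --user opensuse --sudo --target 10.84.72.3 master2
echo skuba node join --role worker --user opensuse --sudo --target 10.84.72.11 worker0
```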
