
kubealex / Libvirt K8s Provisioner

License: MIT
Automate your k8s installation

Projects that are alternatives of or similar to Libvirt K8s Provisioner

Ansible Role Kubernetes
Ansible Role - Kubernetes
Stars: ✭ 247 (+133.02%)
Mutual labels:  k8s, kubectl, kubeadm
Rak8s
Stand up a Raspberry Pi based Kubernetes cluster with Ansible
Stars: ✭ 354 (+233.96%)
Mutual labels:  kubectl, kubernetes-setup, kubeadm
K8s Digitalocean Terraform
Deploy latest Kubernetes cluster on DigitalOcean using Terraform
Stars: ✭ 33 (-68.87%)
Mutual labels:  hcl, k8s, kubeadm
Aws Minikube
Single node Kubernetes instance implemented using Terraform and kubeadm
Stars: ✭ 101 (-4.72%)
Mutual labels:  hcl, kubernetes-setup, kubeadm
Terraform Aws Kubernetes
Terraform module for Kubernetes setup on AWS
Stars: ✭ 159 (+50%)
Mutual labels:  hcl, kubernetes-setup, kubeadm
GPU-Kubernetes-Guide
How to setup a production-grade Kubernetes GPU cluster on Paperspace in 10 minutes for $10
Stars: ✭ 34 (-67.92%)
Mutual labels:  kubernetes-setup, kubectl, kubeadm
k8s-deployer
Deploy Kubernetes service and store retrieved information in the Consul K/V store
Stars: ✭ 23 (-78.3%)
Mutual labels:  k8s, kubectl, kubeadm
Geodesic
🚀 Geodesic is a DevOps Linux Distro. We use it as a cloud automation shell. It's the fastest way to get up and running with a rock solid Open Source toolchain. ★ this repo! https://slack.cloudposse.com/
Stars: ✭ 629 (+493.4%)
Mutual labels:  k8s, kubectl
Gcr.io mirror
A mirror of all gcr.io Docker images
Stars: ✭ 650 (+513.21%)
Mutual labels:  k8s, kubectl
Rakkess
Review Access - kubectl plugin to show an access matrix for k8s server resources
Stars: ✭ 751 (+608.49%)
Mutual labels:  k8s, kubectl
Kubeadm Ha
Kubernetes high-availability deployment based on kubeadm, load balancer included (English/Chinese docs for v1.15 - v1.20+)
Stars: ✭ 614 (+479.25%)
Mutual labels:  kubeadm, nginx
K8s By Kubeadm
🏗 How to set up a single-master k8s cluster with kubeadm from within China's network environment
Stars: ✭ 46 (-56.6%)
Mutual labels:  k8s, kubeadm
Sealos
Install highly available Kubernetes offline with a single command: done in 3 minutes, 700 MB footprint, 100-year certificates, a full range of versions, rock-solid in production
Stars: ✭ 5,253 (+4855.66%)
Mutual labels:  kubernetes-setup, kubeadm
Kubekey
Provides a flexible, rapid and convenient way to install Kubernetes only, both Kubernetes and KubeSphere, and related cloud-native add-ons. It is also an efficient tool to scale and upgrade your cluster.
Stars: ✭ 288 (+171.7%)
Mutual labels:  k8s, kubeadm
K8s Utils
Kubernetes Utility / Helper Scripts
Stars: ✭ 33 (-68.87%)
Mutual labels:  k8s, kubectl
Kubernetes Starter
A Kubernetes primer covering Kubernetes concepts, architecture design, cluster environment setup, authentication and authorization, and more.
Stars: ✭ 1,077 (+916.04%)
Mutual labels:  k8s, kubernetes-setup
Kube Aliases
Kubernetes Aliases and Bash Functions
Stars: ✭ 40 (-62.26%)
Mutual labels:  k8s, kubectl
Terraform Rancher Ha Example
Terraform files for deploying a Rancher HA cluster in AWS
Stars: ✭ 61 (-42.45%)
Mutual labels:  hcl, rancher
Kubeadm Dind Cluster
[EOL] A Kubernetes multi-node test cluster based on kubeadm
Stars: ✭ 1,112 (+949.06%)
Mutual labels:  k8s, kubeadm


libvirt-k8s-provisioner - Automate your cluster provisioning from 0 to k8s!

Welcome to the home of the project!

With this project, you can spin up a fully working k8s cluster (single master or HA) in minutes, with as many worker nodes as you want.

The Kubernetes version to install can be chosen from:

  • 1.19.6 - Latest 1.19 release
  • 1.20.1 - Latest 1.20 release

Terraform will take care of the provisioning of:

  • Load balancer machine with haproxy installed and configured (for HA clusters)
  • k8s Master(s) VM(s)
  • k8s Worker(s) VM(s)

It also takes care of preparing the host machine with the needed packages and configuration.

You can customize the setup by choosing:

  • container runtime to use (docker, cri-o, and containerd are currently available).
  • schedulable master, if you want to schedule pods on your master nodes or leave the taint in place.
  • service CIDR to be used during installation.
  • pod CIDR to be used during installation.
  • network plugin to be used, set up following its documentation: Project Calico, Flannel, or Project Cilium.
  • NFS server creation for exporting shares to be used as PVs (a PV sketch follows this list).
  • nginx-ingress-controller, haproxy-ingress-controller, or Project Contour, if you want to enable ingress management.
  • Rancher installation to manage your cluster.
  • MetalLB to manage bare-metal LoadBalancer services - WIP - only the L2 configuration can be set up via the playbook.
  • Rook-Ceph - WIP - to be improved; the current rook-ceph cluster size is fixed to 3 nodes.
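
As a quick illustration of how an exported NFS share can back a PersistentVolume, here is a minimal sketch; the server address and capacity are placeholder assumptions, while the path matches the nfs_export default shown in the vars file below:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-example
spec:
  capacity:
    storage: 10Gi              # assumed size, a slice of the exported filesystem
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.200.10     # hypothetical NFS server address
    path: /srv/k8s             # nfs_export default from vars/k8s_cluster.yml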

All VMs are identical and prepared with the same base configuration.

The created user can also log in via SSH.

Quickstart

The playbook is meant to be run against one or more local or remote hosts, defined under the vm_host group, depending on how many clusters you want to configure at once.
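
As a minimal sketch, a YAML inventory defining the vm_host group could look like the following (host names are placeholder assumptions):

all:
  children:
    vm_host:
      hosts:
        localhost:
          ansible_connection: local
        # kvm01.example.com:       # hypothetical remote hypervisor
        #   ansible_user: root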

ansible-playbook main.yml

You can quickly make it work by configuring the needed vars, but you can also go straight with the defaults!

You can also install your cluster using the Makefile:

make create

Recommended sizings are:

| Role   | vCPU | RAM |
|--------|------|-----|
| master | 2    | 2G  |
| worker | 2    | 2G  |

vars/k8s_cluster.yml

General configuration

k8s:
  cluster_name: k8s-test
  cluster_os: Ubuntu
  cluster_version: 1.20
  container_runtime: crio
  master_schedulable: false

# Nodes configuration

  control_plane:
    vcpu: 2
    mem: 2 
    vms: 3
    disk: 30

  worker_nodes:
    vcpu: 1
    mem: 2
    vms: 1
    disk: 30

# Network configuration

  network:
    network_cidr: 192.168.200.0/24
    domain: k8s.test
    pod_cidr: 10.20.0.0/16
    service_cidr: 10.110.0.0/16
    cni_plugin: calico

# Storage configuration
storage:
  nfs:
    nfs_enabled: true
    nfs_fsSize: 50GB
    nfs_export: /srv/k8s


rook_ceph:
  install_rook: false
  volume_size: 50

# Ingress controller configuration [nginx/haproxy]

ingress_controller:
  install_ingress_controller: true
  type: haproxy

# Section for Rancher setup

rancher:
  install_rancher: true

# Section for metalLB setup

metallb:
  install_metallb: false
  manifest_url: https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests
  l2:
    iprange: 192.168.200.210-192.168.200.250

Sizes for disk and mem are expressed in GB. disk lets you provision extra space in the cloud image for pods' ephemeral storage.

cluster_version can be 1.19 or 1.20 to install the latest release of the corresponding minor version.

VMs are created with these names by default (customizing them is a work in progress):

- **cluster_name**-loadbalancer.**domain**
- **cluster_name**-master-N.**domain**
- **cluster_name**-worker-N.**domain**
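
For example, with the sample vars above (cluster_name: k8s-test, domain: k8s.test, 3 control-plane VMs and 1 worker), this yields k8s-test-loadbalancer.k8s.test, k8s-test-master-N.k8s.test for each of the three masters, and k8s-test-worker-N.k8s.test for the worker.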

It is possible to choose CentOS or Ubuntu as the OS for the Kubernetes hosts (via cluster_os in vars/k8s_cluster.yml).

Rook

The Rook setup currently creates a dedicated kind of worker, attaching an additional volume to ALL workers. It will be improved to select only a number of nodes consistent with the number of Ceph replicas. Feel free to suggest modifications/improvements.
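
For context, the fixed three-node footprint matches a typical three-monitor Ceph quorum. A minimal sketch of a CephCluster in Rook's ceph.rook.io/v1 API is shown below; it is illustrative only (the playbook renders its own manifests, and the image tag is an assumption):

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v15.2.8     # assumed Ceph image, for illustration only
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                     # mirrors the fixed 3-node cluster size
  storage:
    useAllNodes: true            # consume the extra volume attached to every worker
    useAllDevices: true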

Rancher

The basic setup follows the Rancher documentation, installing Rancher with its Helm chart.
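
For reference, the documented Helm flow that the playbook automates looks roughly like this (the hostname is an assumption based on the sample domain above; cert-manager is additionally required when using Rancher-generated certificates):

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.k8s.test   # hypothetical hostname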

MetalLB

The basic setup is taken from the MetalLB documentation. At the moment, the l2 parameter defines the range of IPs (defaulting to some IPs in the same subnet as the hosts) that can be handed out as 'external' IPs for accessing applications.
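
For reference, the iprange above corresponds to a layer-2 address pool in the ConfigMap format used by MetalLB v0.9.x (a sketch of what the playbook configures):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.200.210-192.168.200.250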

Suggestions and improvements are highly welcome! Alex
