
kubealex / libvirt-ocp4-provisioner

License: MIT
Automate your OCP4 installation

Programming Languages

HCL, Jinja, Shell, Dockerfile, Makefile

Projects that are alternatives of or similar to libvirt-ocp4-provisioner

grafana-operator
An operator for Grafana that installs and manages Grafana instances, Dashboards and Datasources through Kubernetes/OpenShift CRs
Stars: ✭ 449 (+447.56%)
Mutual labels:  openshift, k8s, openshift-v4
Deploy
Deploy Development Builds of Open Cluster Management (OCM) on RedHat Openshift Container Platform
Stars: ✭ 78 (-4.88%)
Mutual labels:  openshift, k8s
Pega Helm Charts
Orchestrate a Pega Platform™ deployment by using Docker, Kubernetes, and Helm to take advantage of Pega Platform Cloud Choice flexibility.
Stars: ✭ 58 (-29.27%)
Mutual labels:  openshift, k8s
Reloader
Reloader is maintained by Stakater. Like it? Please let us know at [email protected]
Stars: ✭ 2,930 (+3473.17%)
Mutual labels:  openshift, k8s
Openshift Acme
ACME Controller for OpenShift and Kubernetes Cluster. (Supports e.g. Let's Encrypt)
Stars: ✭ 287 (+250%)
Mutual labels:  openshift, k8s
Ingressmonitorcontroller
A Kubernetes controller to watch ingresses and create liveness alerts for your apps/microservices in UptimeRobot, StatusCake, Pingdom, etc. – [✩Star] if you're using it!
Stars: ✭ 306 (+273.17%)
Mutual labels:  openshift, k8s
Gitwebhookproxy
A proxy to let webhooks reach running services behind a firewall – [✩Star] if you're using it!
Stars: ✭ 123 (+50%)
Mutual labels:  openshift, k8s
openshift-quickstart
Developer Workshops related to the Java development on OpenShift
Stars: ✭ 19 (-76.83%)
Mutual labels:  openshift, openshift-v4
openshift4-upi-homelab
OpenShift 4 User Provisioned Infrastructure Homelab
Stars: ✭ 15 (-81.71%)
Mutual labels:  openshift, openshift-v4
ocp4upc
OCP4 Upgrade Paths Checker
Stars: ✭ 30 (-63.41%)
Mutual labels:  openshift, openshift-v4
deploy
Deploy Development Builds of Open Cluster Management (OCM) on RedHat Openshift Container Platform
Stars: ✭ 133 (+62.2%)
Mutual labels:  openshift, k8s
ProxyInjector
A Kubernetes controller to inject an authentication proxy container to relevant pods - [✩Star] if you're using it!
Stars: ✭ 77 (-6.1%)
Mutual labels:  openshift, k8s
openshift4-vmware-upi
Ansible Playbooks and Documentation to Support the Automated Installation of OpenShift 4 on VMware
Stars: ✭ 45 (-45.12%)
Mutual labels:  openshift, upi
Hcloud Okd4
Deploy OKD4 (OpenShift) on Hetzner Cloud
Stars: ✭ 29 (-64.63%)
Mutual labels:  openshift, hashicorp
bobbycar
IoT Transportation demo using Red Hat OpenShift and Middleware technologies
Stars: ✭ 33 (-59.76%)
Mutual labels:  openshift, openshift-v4
Linchpin
ansible based multicloud orchestrator
Stars: ✭ 107 (+30.49%)
Mutual labels:  openshift, libvirt
deploy-vm
Libvirt wrapper to spawn VMs using cloud images
Stars: ✭ 56 (-31.71%)
Mutual labels:  coreos, libvirt
okd4-upi-lab-setup
Building an OKD 4 Home Lab
Stars: ✭ 72 (-12.2%)
Mutual labels:  openshift, libvirt
Kcli
Management tool for libvirt/aws/gcp/kubevirt/openstack/ovirt/vsphere/packet
Stars: ✭ 219 (+167.07%)
Mutual labels:  openshift, libvirt
gotf
Managing multiple environments with Terraform made easy
Stars: ✭ 25 (-69.51%)
Mutual labels:  hashicorp, hashicorp-terraform


libvirt-ocp4-provisioner - Automate your cluster provisioning from 0 to OCP!

Welcome to the home of the project! This project has been inspired by @ValentinoUberti, who did a GREAT job creating the playbooks that provision the infrastructure nodes on oVirt and prepare for the cluster installation.

I wanted to play around with Terraform and port his great work to libvirt, and so here we are! I adapted his playbooks to libvirt's needs, making heavy use of in-memory inventory creation for the provisioned VMs to keep the set of customizable variables to a minimum.

To give a quick overview, this project will allow you to provision a fully working and stable OCP environment, consisting of:

  • Bastion machine provisioned with:
    • dnsmasq (with SELinux module, compiled and activated)
    • DHCP (served by dnsmasq)
    • nginx (serving Ignition files and RHCOS images for PXE boot)
    • PXE boot
  • Load balancer machine provisioned with:
    • haproxy (see the example configuration after this list)
  • OCP Bootstrap VM
  • OCP Master VM(s)
  • OCP Worker VM(s)
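
For reference, an OCP load balancer exposes the API (6443), the machine config server (22623), and the ingress routes (80/443). The haproxy.cfg fragment below is a minimal illustrative sketch, not the configuration generated by the playbooks; backend names and addresses are assumptions based on the example IPs used later in this README:

# Illustrative only - ports 22623, 80 and 443 follow the same pattern
frontend ocp4-api
    bind *:6443
    mode tcp
    default_backend ocp4-api-backend
backend ocp4-api-backend
    mode tcp
    balance roundrobin
    server bootstrap 192.168.100.6:6443 check
    server master-0 192.168.100.7:6443 check
    server master-1 192.168.100.8:6443 check
    server master-2 192.168.100.9:6443 check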

It also takes care of preparing the host machine with the required packages and configuration.

PXE boot is automatic: each MAC address is bound to the corresponding OCP node role, so there is no need to pick anything from a boot menu. Just run the playbook, grab a beer, and come back to a fully running OCP cluster.
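
To illustrate the idea (this is not the playbook's actual template), dnsmasq can pin a DHCP reservation to each VM's MAC address and serve the PXE boot files over TFTP, while the role-specific boot configuration is selected through per-MAC pxelinux entries. The MAC addresses and paths below are hypothetical:

# Hypothetical dnsmasq sketch: fixed leases per MAC plus PXE/TFTP
dhcp-range=192.168.100.2,192.168.100.254,12h
# bootstrap node
dhcp-host=52:54:00:00:00:06,192.168.100.6
# first master node
dhcp-host=52:54:00:00:00:07,192.168.100.7
enable-tftp
tftp-root=/var/lib/tftpboot
dhcp-boot=pxelinux.0
# role-specific kernel/Ignition arguments then live in
# /var/lib/tftpboot/pxelinux.cfg/01-<mac-address> files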

The version can be selected freely by specifying the desired release (e.g. 4.2.33, 4.7.7) or the latest stable release with "stable".

Support for Single Node OpenShift (SNO) has also been added!

Bastion and load balancer VM specs:

The user can also log in via SSH.

Quickstart

First of all, you need to install the required collections:

ansible-galaxy collection install -r requirements.yml

The playbooks are meant to run against local host(s), defined under the vm_host group in your inventory, depending on how many clusters you want to configure at once.
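
A minimal inventory sketch (host names are illustrative; add one entry per hypervisor/cluster you want to provision):

[vm_host]
hypervisor1 ansible_host=localhost ansible_connection=local
# hypervisor2 ansible_host=kvm2.example.com ansible_user=root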

HA Clusters

ansible-playbook main.yml

Single Node OpenShift (SNO)

ansible-playbook main-sno.yml

You can quickly make it work by configuring the needed vars, or go straight with the defaults!

Quickstart with Execution Environment

The playbooks are compatible with the newly introduced Execution Environments (EE). To use them with an execution environment, you need ansible-builder and ansible-navigator installed.
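
For example, both tools can be installed from PyPI (assuming a Python environment; distribution packages work just as well):

pip install ansible-builder ansible-navigator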

Build EE image

To build the EE image, run ansible-builder from the repository root:

ansible-builder build -f execution-environment/execution-environment.yml -t ocp-ee

Run playbooks

To run the playbooks, use ansible-navigator:

ansible-navigator run main.yml -m stdout 

Or, in the case of Single Node OpenShift:

ansible-navigator run main-sno.yml -m stdout
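
To make ansible-navigator use the image built above, you can point it at the tag explicitly (this can also be configured in an ansible-navigator.yml settings file):

ansible-navigator run main.yml -m stdout --execution-environment-image ocp-ee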

Common vars

The network created is a simple NAT configuration without DHCP, since DHCP is provided by the bastion VM. The defaults should be fine as long as they don't overlap with an existing network.
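
As an illustration of what such a network looks like in libvirt terms (a sketch, not necessarily the definition the playbooks generate), note the NAT forward mode and the absence of a <dhcp> block:

<network>
  <name>ocp4</name>
  <forward mode='nat'/>
  <bridge name='virbr-ocp4' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <!-- no <dhcp> section: leases are handed out by dnsmasq on the bastion -->
  </ip>
</network>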

HA Configuration vars

vars/infra_vars.yml

infra_nodes:
  host_list:
    bastion:
      - ip: 192.168.100.4
    loadbalancer:
      - ip: 192.168.100.5
dhcp:
  timezone: "Europe/Rome"
  ntp: 204.11.201.10

vars/cluster_vars.yml

three_node: false
network_cidr: 192.168.100.0/24
domain: hetzner.lab
additional_block_device:
  enabled: false
  size: 100
cluster:
  version: stable
  name: ocp4
  ocp_user: admin
  ocp_pass: openshift
  pullSecret: ''
cluster_nodes:
  host_list:
    bootstrap:
      - ip: 192.168.100.6
    masters:
      - ip: 192.168.100.7
      - ip: 192.168.100.8
      - ip: 192.168.100.9
    workers:
      - ip: 192.168.100.10
        role: infra
      - ip: 192.168.100.11
      - ip: 192.168.100.12
  specs:
    bootstrap:
      vcpu: 4
      mem: 16
      disk: 40
    masters:
      vcpu: 4
      mem: 16
      disk: 40	  
    workers:
      vcpu: 2
      mem: 8
      disk: 40

Where domain is the DNS domain assigned to the nodes and cluster.name is the name chosen for the OCP cluster installation.

mem and disk are expressed in GB.

cluster.version allows you to choose a particular version to be installed (e.g. 4.5.0, stable).

additional_block_device controls whether an additional disk of the given size should be added to the worker nodes, or to the control plane nodes in the case of a compact (three-node) setup.

The role field for workers is used for node labelling. Omitting it leaves the nodes with their default role, worker.
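
For context, assigning the infra role to a worker roughly corresponds to applying a node-role label after installation, along the lines of the command below (hypothetical node name; the actual labelling is handled by the playbooks):

oc label node worker-0.ocp4.hetzner.lab node-role.kubernetes.io/infra=""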

The number of VMs is determined by the number of elements in each list; in this example we get:

  • 3 master nodes with 4 vCPUs and 16 GB of memory each
  • 3 worker nodes with 2 vCPUs and 8 GB of memory each

Recommended values are:

Role       vCPU   RAM   Storage
bootstrap  4      16G   120G
master     4      16G   120G
worker     2      8G    120G

For testing purposes, the minimum storage value is set to 40 GB.

The playbooks now support a three-node setup (3 masters carrying both the master and worker roles), intended for testing purposes only. You can enable it with the three_node boolean var, ONLY for OCP 4.6+.

Single Node OpenShift vars

vars/cluster_vars.yml

domain: hetzner.lab
network_cidr: 192.168.100.0/24
cluster:
  version: stable
  name: ocp4
  ocp_user: admin
  ocp_pass: openshift
  pullSecret: ''
cluster_nodes:
  host_list:
    sno:
      ip: 192.168.100.7
  specs:
    sno:
      vcpu: 8
      mem: 32
      disk: 120            
local_storage:
  enabled: true
  volume_size: 50

The local_storage field can be used to attach an additional disk to the VM, so that volumes can later be provisioned using, for instance, rook-ceph or the Local Storage Operator.

In both cases, the Pull Secret can be retrieved easily at https://cloud.redhat.com/openshift/install/pull-secret

An HTPasswd identity provider is created after the installation; you can use ocp_user and ocp_pass to log in!
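
With the default values shown above, logging in against the cluster API would look roughly like this (the URL follows the usual api.<cluster name>.<domain> convention):

oc login https://api.ocp4.hetzner.lab:6443 -u admin -p openshift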

DISCLAIMER: This project is intended for testing/lab use only; it is neither supported nor endorsed by Red Hat.

Feel free to suggest modifications/improvements.

Alex
