
HSBawa / icp-ce-on-linux-containers

License: other
Multi-node IBM Cloud Private Community Edition 3.2.x with Kubernetes 1.13.5 in a box. A Terraform, Packer and Bash based Infrastructure as Code script set that builds a multi-node LXD cluster and installs ICP-CE and its CLIs on a bare-metal or VM Ubuntu 18.04 host.

Programming Languages

Shell, HCL

Projects that are alternatives to or similar to icp-ce-on-linux-containers

gotf
Managing multiple environments with Terraform made easy
Stars: ✭ 25 (-51.92%)
Mutual labels:  iac, hashicorp, infrastructure-as-code
Infrastructure As Code Tutorial
Infrastructure As Code Tutorial. Covers Packer, Terraform, Ansible, Vagrant, Docker, Docker Compose, Kubernetes
Stars: ✭ 1,954 (+3657.69%)
Mutual labels:  packer, infrastructure-as-code
Ops Cli
Ops - cli wrapper for Terraform, Ansible, Helmfile and SSH for cloud automation
Stars: ✭ 152 (+192.31%)
Mutual labels:  packer, kubernetes-cluster
jenkins kube brains
Example scripts to run Kubernetes on your private VMs, supporting Loren's and my KubeCon 2018 talk "Migrating Jenkins to Kubernetes broke our brains." https://sched.co/GrSh
Stars: ✭ 34 (-34.62%)
Mutual labels:  kubernetes-cluster, kubernetes-setup
Packerlicious
Use Python to make HashiCorp Packer templates
Stars: ✭ 90 (+73.08%)
Mutual labels:  packer, hashicorp
Toc
A Table of Contents of all Gruntwork Code
Stars: ✭ 111 (+113.46%)
Mutual labels:  packer, infrastructure-as-code
cb-spider
CB-Spider provides a unified view and single interface for multi-cloud management.
Stars: ✭ 26 (-50%)
Mutual labels:  iac, ibm
python-packer
A Packer interface for Python
Stars: ✭ 22 (-57.69%)
Mutual labels:  packer, hashicorp
openshift-install-power
UPI Install helper to deploy OpenShift 4 on IBM Power Systems Virtual Server using Terraform IaC
Stars: ✭ 16 (-69.23%)
Mutual labels:  infrastructure-as-code, ibm
Nietzsche
Scrape quotes from Goodreads and schedule random tweets.
Stars: ✭ 44 (-15.38%)
Mutual labels:  iac, infrastructure-as-code
awesome-lxc-lxd
A curated list of awesome LXC and LXD tools, libraries and related projects.
Stars: ✭ 34 (-34.62%)
Mutual labels:  lxd, lxc
Ansible Role Packer rhel
Ansible Role - Packer RHEL/CentOS Configuration for Vagrant VirtualBox
Stars: ✭ 45 (-13.46%)
Mutual labels:  packer, hashicorp
Hcloud Okd4
Deploy OKD4 (OpenShift) on Hetzner Cloud
Stars: ✭ 29 (-44.23%)
Mutual labels:  packer, hashicorp
local-hashicorp-stack
Local Hashicorp Stack for DevOps Development without Hypervisor or Cloud
Stars: ✭ 23 (-55.77%)
Mutual labels:  packer, hashicorp
gitlab-setup
A Packer / Terraform / Ansible configuration to install Gitlab and Gitlab-CI
Stars: ✭ 53 (+1.92%)
Mutual labels:  packer, infrastructure-as-code
ggshield
Find and fix 360+ types of hardcoded secrets and 70+ types of infrastructure-as-code misconfigurations.
Stars: ✭ 1,272 (+2346.15%)
Mutual labels:  iac, infrastructure-as-code
ansible-role-packer-debian
Ansible Role - Packer Debian/Ubuntu Configuration for Vagrant VirtualBox
Stars: ✭ 32 (-38.46%)
Mutual labels:  packer, hashicorp
vim-hcl
Syntax highlighting for HashiCorp Configuration Language (HCL)
Stars: ✭ 83 (+59.62%)
Mutual labels:  packer, hashicorp
copr-lxc4
RPM spec files for building lxc/lxd 4.x releases on Fedora COPR
Stars: ✭ 18 (-65.38%)
Mutual labels:  lxd, lxc
learn-terraform-provisioning
Companion code repository for learning to provision Terraform instances with Packer & cloud-init
Stars: ✭ 56 (+7.69%)
Mutual labels:  packer, hashicorp

IMPORTANT: This IaC is no longer supported and will not be updated in the future.

Welcome to my IBM Cloud Private (Community Edition) on Linux Containers Infrastructure as Code (IaC). With the help of this IaC, developers can easily set up a virtual multi-node ICP cluster on a single Linux bare-metal host or VM!

This IaC not only takes away the pain of manual configuration, but also saves valuable resources (nodes) by using a single host machine to provide a multi-node ICP Kubernetes experience. It installs the required CLIs, sets up LXD, sets up ICP-CE, and adds some utility scripts.

Because ICP is installed inside LXD containers, it can be installed and removed without any impact on the host environment. Only LXD, the CLIs and other desired/required packages are installed on the host.
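Because the nodes are ordinary LXD containers, they can be inspected directly from the host; a small sketch (container names vary, `lxc list` shows the actual ones):

  # List the ICP node containers and their internal IPs
  lxc list
  # Inspect a single node; replace <container-name> with a name from the listing
  lxc info <container-name>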

ICP 3.2.0 - Getting started
High Level Architecture
Supported Platforms
Topologies
View Install Configuration
Usage
Post Install
Screenshots

High Level Architecture

An example 4-node topology

Supported platforms

| Host | Guest VM | ICP-CE | LXD | Min. Compute Power | User Privileges | Shell |
|------|----------|--------|-----|--------------------|-----------------|-------|
| Ubuntu 18.04 | Ubuntu 18.04 | 3.2.x / 3.1.2 | 3.0.3 (apt) | 8-core, 16 GB RAM, 300 GB disk | root | bash |
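A quick sketch for checking a host against these minimums before installing (standard GNU/Linux tools; thresholds taken from the table above):

  lsb_release -ds                      # expect: Ubuntu 18.04.x LTS
  nproc                                # expect: 8 or more cores
  free -g | awk '/^Mem:/ {print $2}'   # expect: 16 or more (GB)
  df -BG --output=avail /              # expect: 300G or more available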

Topologies

| Boot (B) | Master/Etcd (ME) | Management (M) | Proxy (P) | Worker (W) |
|----------|------------------|----------------|-----------|------------|
| 1 (B/ME/M/P) | - | - | - | 1+* |
| 1 (B/ME/M)   | - | - | 1 | 1+* |
| 1 (B/ME/P)   | - | 1 | - | 1+* |
| 1 (B/ME)     | - | 1 | 1 | 1+* |
*Set the desired worker node count in install.properties before setting up the cluster (see the sketch after these notes).
Supported topologies are based on the ICP architecture.
ICP Community Edition does not support HA; the Master, Management and Proxy node counts must always be 1.
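For illustration, the worker count is set through a single property in install.properties; the key name below is hypothetical, so check the file for the exact property:

  ## Hypothetical key name; see install.properties for the actual property
  WORKER_NODE_COUNT=3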

Usage

Git clone:

  sudo su -
  git clone https://github.com/HSBawa/icp-ce-on-linux-containers.git
  cd icp-ce-on-linux-containers

Update install properties:

  For simplified setup, there is a single install.properties file that covers configuration for the CLIs, LXD and ICP.

  Examples:

  # 3.1.2 or 3.2.0 or 3.2.1
  ICP_TAG=3.2.0
  # config.yaml.312.tmpl for 3.1.2 or config.yaml.320.tmpl for 3.2.x
  ICP_CONFIG_YAML_TMPL_FILE=config.yaml.320.tmpl

  ## Use y to create separate Proxy, Management Nodes
  PROXY_NODE=y
  MGMT_NODE=y

  ## If for some reason the public/external IP lookup fails or returns an incorrect address,
  ## set lookup to 'n', manually provide the IP addresses, and then re-create the cluster
  ICP_AUTO_LOOKUP_HOST_IP_ADDRESS_AS_LB_ADDRESS=y
  ICP_MASTER_LB_ADDRESS=none
  ICP_PROXY_LB_ADDRESS=none

  ## Enable/Disable management services ####
  ICP_MGMT_SVC_CUST_METRICS=enabled
  ICP_MGMT_SVC_IMG_SEC_ENFORCE=enabled
  ICP_MGMT_SVC_METERING=enabled
  ...

  ## Used for console/scripted login; provide your choice of username and password
  ## The default namespace will be added to the auto-generated login helper script
  ## For extra security, random username and password auto-generation based on patterns is supported
  ## Auto-generated usernames and/or passwords can be found in config.yaml or the helper login script (keep them secure)
  ICP_DEFAULT_NAMESPACE=default
  ICP_DEFAULT_ADMIN_USER=admin
  ICP_AUTO_GEN_RANDOM_ADMIN_USERNAME=n
  ICP_AUTO_GEN_RANDOM_ADMIN_USERNAME_PATTERN=a-z
  ICP_AUTO_GEN_RANDOM_ADMIN_USERNAME_LENGTH=10

  ICP_DEFAULT_ADMIN_PASSWORD=xxxxxxx
  ICP_AUTO_GEN_RANDOM_PASSWORD=y
  ## ICP default password rule: pattern '^([a-zA-Z0-9\-]{32,})$' (32 chars or more)
  ICP_PASSWORD_RULE_PATTERN=^([a-zA-Z0-9\-]{32,})$
  ICP_AUTO_GEN_RANDOM_PASSWORD_LENGTH=35
  ICP_AUTO_GEN_RANDOM_PASSWORD_PATTERN=a-zA-Z0-9-
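As a side note, a password satisfying the rule pattern above can also be generated by hand; a minimal sketch using standard tools:

  # Generate a 35-char password from [a-zA-Z0-9-] (matches ^([a-zA-Z0-9\-]{32,})$)
  tr -dc 'a-zA-Z0-9-' < /dev/urandom | head -c 35; echo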

Create cluster:

 Usage:    sudo ./create_cluster.sh [options]
              -es or --env-short : Environment name in short. ex: test, dev, demo etc.
              -f  or --force     : [yY]|[yY][eE][sS] or n. Delete cluster LXD components from past install.
              -h  or --host      : Provide host type information: pc (default), vsi, fyre, aws or othervm.
              help               : Print this usage.

  Examples: sudo ./create_cluster.sh --host=fyre
            sudo ./create_cluster.sh --host=fyre -f
            sudo ./create_cluster.sh -es=demo --force --host=pc

  Important Notes:
     - Version 1.1.3 of the Terraform Provider for LXD may not work with the recently released Terraform 0.12.x.
     - It is important to use the right `host` parameter depending on your host machine/VM.
     - The LXD cluster uses an internal, private subnet. To expose this cluster, HAProxy is installed and configured by default to enable remote access.
     - Use of a `static external IP` is recommended.
     - If the external IP changes after the build, remote access to the cluster will fail, and a new build will be required.
     - This IaC is not tested with LXD installed via snap. I had so many issues using it that I switched to the apt-based 3.0.3, which is considered production stable.
     - During install, if you encounter the error "...Failed container creation: Create LXC container: LXD doesn't have a uid/gid allocation...", validate that the files '/etc/subgid' and '/etc/subuid' have content similar to that shown below (see the sketch after these notes):
           lxd:100000:65536
           root:100000:65536
           [username goes here]:165536:65536
     - During install, if your build is stuck at the message "....icp_ce_master: Still creating... " for more than 10 minutes, perform the following steps:
           * Cancel the installation (Ctrl-C). This may need more than one attempt.
           * Destroy the cluster (./destroy_cluster.sh)
           * Create the cluster  (./create_cluster.sh)

           If you still see this issue the next time, open a Git issue with as many details as possible, and I can take a look into it.
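  If the uid/gid allocation error above appears, here is a minimal sketch for adding the missing entries (run as root; ranges copied from the note above, and the '[username goes here]' line must still be added by hand for the account that runs LXD):

     # Append LXD uid/gid allocations only if they are not already present
     grep -q '^lxd:'  /etc/subuid || echo 'lxd:100000:65536'  >> /etc/subuid
     grep -q '^root:' /etc/subuid || echo 'root:100000:65536' >> /etc/subuid
     grep -q '^lxd:'  /etc/subgid || echo 'lxd:100000:65536'  >> /etc/subgid
     grep -q '^root:' /etc/subgid || echo 'root:100000:65536' >> /etc/subgid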

Download cloudctl and helm CLIs:

 sudo ./download_icp_cloudctl_helm.sh
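A quick check that both CLIs landed on the PATH (a sketch; ICP's Helm v2 Tiller is TLS-enabled, so `--tls` is typically required):

 cloudctl version
 helm version --tls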

Login into cluster:

 ./icp-login-3.2.0-ce.sh
 or
 cloudctl login -a https://<internal_master_ip>:8443 -u <default_admin_user> -p <default_admin_password> -c id-devicpcluster-account -n default --skip-ssl-validation
 or
 cloudctl login -a https://<public_ip>:8443 -u <default_admin_user> -p <default_admin_password> -c id-devicpcluster-account -n default --skip-ssl-validation
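After a successful login, cloudctl also configures kubectl for the cluster; a quick sanity check (assuming kubectl is installed on the host):

 kubectl get nodes                # should list the master/management/proxy/worker nodes
 kubectl get pods -n kube-system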

Destroy cluster:

 sudo ./destroy_cluster.sh  (Deletes the LXD cluster with ICP-CE. Use with caution.)
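Afterwards, a quick way to confirm the teardown (the cluster containers should no longer appear):

 lxc list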

Setting up an LXD-based NFS server (optional):

     NFS Server on Linux Container

Post install

