
infraly / K8s On Openstack

Licence: apache-2.0
An opinionated way to deploy a Kubernetes cluster on top of an OpenStack cloud.

Projects that are alternatives to or similar to K8s On Openstack

Linchpin
ansible based multicloud orchestrator
Stars: ✭ 107 (-0.93%)
Mutual labels:  openstack, ansible
Kubeoperator
KubeOperator is an open-source, lightweight Kubernetes distribution focused on helping enterprises plan, deploy, and operate production-grade K8s clusters.
Stars: ✭ 4,147 (+3739.81%)
Mutual labels:  openstack, ansible
Casl Ansible
Ansible automation for Managing OpenShift Container Platform clusters
Stars: ✭ 123 (+13.89%)
Mutual labels:  openstack, ansible
Ansible Role Kubernetes
Ansible Role - Kubernetes
Stars: ✭ 247 (+128.7%)
Mutual labels:  ansible, kubeadm
Sysadmintools
Acorn's Server, Workstation, & VM Cluster Automation & Documentation
Stars: ✭ 7 (-93.52%)
Mutual labels:  openstack, ansible
Chef Bcpc
Bloomberg Clustered Private Cloud distribution
Stars: ✭ 205 (+89.81%)
Mutual labels:  openstack, ansible
Kubenow
Deploy Kubernetes. Now!
Stars: ✭ 285 (+163.89%)
Mutual labels:  openstack, kubeadm
Rak8s
Stand up a Raspberry Pi based Kubernetes cluster with Ansible
Stars: ✭ 354 (+227.78%)
Mutual labels:  ansible, kubeadm
Kubeadm Playbook
Fully fledged (HA) Kubernetes Cluster using official kubeadm, ansible and helm. Tested on RHEL/CentOS/Ubuntu with support of http_proxy, dashboard installed, ingress controller, heapster - using official helm charts
Stars: ✭ 533 (+393.52%)
Mutual labels:  ansible, kubeadm
Kubeadm Ansible
Build a Kubernetes cluster using kubeadm via Ansible.
Stars: ✭ 479 (+343.52%)
Mutual labels:  ansible, kubeadm
Manageiq
ManageIQ Open-Source Management Platform
Stars: ✭ 1,089 (+908.33%)
Mutual labels:  openstack, ansible
Devops Exercises
Linux, Jenkins, AWS, SRE, Prometheus, Docker, Python, Ansible, Git, Kubernetes, Terraform, OpenStack, SQL, NoSQL, Azure, GCP, DNS, Elastic, Network, Virtualization. DevOps Interview Questions
Stars: ✭ 20,905 (+19256.48%)
Mutual labels:  openstack, ansible
Docker Cloud Platform
Building a cloud platform with Docker, a three-part series: Docker basics, advanced Docker, and a Docker-based cloud platform design. Uses OpenStack+Docker+RestAPI+OAuth/HMAC+RabbitMQ/ZMQ+OpenResty/HAProxy/Nginx/APIGateway+Bootstrap/AngularJS+Ansible+K8S/Mesos/Marathon to build and explore microservice best practices.
Stars: ✭ 86 (-20.37%)
Mutual labels:  openstack, ansible
Ansible Proxmox Inventory
Proxmox dynamic inventory for Ansible
Stars: ✭ 100 (-7.41%)
Mutual labels:  ansible
Awx Ha Instancegroup
Build AWX clustering on Docker Standalone Installation
Stars: ✭ 106 (-1.85%)
Mutual labels:  ansible
Awx
AWX Project
Stars: ✭ 10,469 (+9593.52%)
Mutual labels:  ansible
Vps Comparison
A comparison between some VPS providers. It uses Ansible to perform a series of automated benchmark tests over the VPS servers that you specify. It allows the reproducibility of those tests by anyone that wanted to compare these results to their own. All the tests results are available in order to provide independence and transparency.
Stars: ✭ 1,357 (+1156.48%)
Mutual labels:  ansible
Community.vmware
Ansible Collection for VMWare
Stars: ✭ 104 (-3.7%)
Mutual labels:  ansible
Yams
A collection of Ansible roles for automating infosec builds.
Stars: ✭ 98 (-9.26%)
Mutual labels:  ansible
Drupal Vm
A VM for Drupal development
Stars: ✭ 1,348 (+1148.15%)
Mutual labels:  ansible

k8s-on-openstack

An opinionated way to deploy a Kubernetes cluster on top of an OpenStack cloud.

It is based on the following tools:

  • kubeadm
  • ansible

Getting started

The following mandatory environment variables need to be set before calling ansible-playbook (see the example after this list):

  • OS_*: standard OpenStack environment variables such as OS_AUTH_URL, OS_USERNAME, ...
  • KEY: name of an existing SSH keypair
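
A minimal invocation could look like this (the openrc file name and keypair name below are placeholders, not values from this repository):

$ source my-project-openrc.sh    # sets OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, ...
$ export KEY=my-keypair          # an SSH keypair that already exists in the OpenStack project
$ ansible-playbook site.yaml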

The following optional environment variables can also be set (a combined example follows the list):

  • NAME: name of the Kubernetes cluster, used to derive instance names, kubectl configuration and security group name
  • IMAGE: name of an existing Ubuntu 16.04 image
  • EXTERNAL_NETWORK: name of the neutron external network, defaults to 'public'
  • FLOATING_IP_POOL: name of the floating IP pool
  • FLOATING_IP_NETWORK_UUID: uuid of the floating IP network (required for LBaaSv2)
  • USE_OCTAVIA: try to use Octavia instead of Neutron LBaaS, defaults to False
  • USE_LOADBALANCER: assume a loadbalancer is used and allow traffic to nodes (default: false)
  • SUBNET_CIDR: the subnet CIDR for OpenStack's network (default: 10.8.10.0/24)
  • POD_SUBNET_CIDR: CIDR of the pod network (default: 10.96.0.0/16)
  • CLUSTER_DNS_IP: IP address of the cluster DNS service passed to kubelet (default: 10.96.0.10)
  • BLOCK_STORAGE_VERSION: version of the block storage (Cinder) service, defaults to 'v2'
  • IGNORE_VOLUME_AZ: whether to ignore the AZ field of volumes, needed on some clouds where AZs confuse the driver, defaults to False.
  • NODE_MEMORY: how much memory (in MB) each node should have, defaults to 4 GB
  • NODE_FLAVOR: the exact OpenStack flavor name or ID to use for the nodes. When set, the NODE_MEMORY setting is ignored.
  • NODE_COUNT: number of nodes to provision, defaults to 3
  • NODE_AUTO_IP: assign a floating IP to nodes, defaults to False
  • NODE_DELETE_FIP: delete the floating IP when the node is destroyed, defaults to True
  • NODE_BOOT_FROM_VOLUME: boot node instances from a volume, useful on clouds that only support boot from volume
  • NODE_TERMINATE_VOLUME: delete the root volume when each node instance is destroyed, defaults to True
  • NODE_VOLUME_SIZE: size of each node volume, defaults to 64 GB
  • NODE_EXTRA_VOLUME: create an extra unmounted data volume for each node, defaults to False
  • NODE_EXTRA_VOLUME_SIZE: size of the extra data volume for each node, defaults to 80 GB
  • NODE_DELETE_EXTRA_VOLUME: delete the extra data volume for each node when the node is destroyed, defaults to True
  • MASTER_BOOT_FROM_VOLUME: boot the master instance on a volume for data persistence, defaults to True
  • MASTER_TERMINATE_VOLUME: delete the volume when the master instance is destroyed, defaults to True
  • MASTER_VOLUME_SIZE: size of the master volume, defaults to 64 GB
  • MASTER_MEMORY: how much memory (in MB) the master should have, defaults to 4 GB
  • MASTER_FLAVOR: the exact OpenStack flavor name or ID to use for the master. When set, the MASTER_MEMORY setting is ignored.
  • AVAILABILITY_ZONE: the availability zone to use for nodes and the default StorageClass (defaults to nova). This affects PersistentVolumeClaims without an explicit storage class.
  • HELM_REPOS: a list of additional helm repos to add, separated by semicolons. Example: charts https://github.com/helm/charts;mycharts https://github.com/dev/mycharts
  • HELM_INSTALL: a list of helm charts and their parameters to install, separated by semicolons. Example: mycharts/mychart;charts/somechart --name somechart --namespace somenamespace
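
For example, to provision a larger cluster with a specific flavor and a helm chart installed, several of these can be combined before spinning up the cluster as shown below (all values here are illustrative, not defaults taken from this repository):

$ export NAME=k8s-demo
$ export NODE_COUNT=5
$ export NODE_FLAVOR=m1.large        # when a flavor is set, NODE_MEMORY is ignored
$ export USE_LOADBALANCER=true
$ export HELM_INSTALL="charts/somechart --name somechart --namespace somenamespace"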

Spin up a new cluster:

$ ansible-playbook site.yaml

Destroy the cluster:

$ ansible-playbook destroy.yaml

Upgrade the cluster:

The upgrade.yaml playbook implements the upgrade steps described in https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11/. After editing the kubernetes_version and kubernetes_ubuntu_version variables in group_vars/all.yaml, run the following commands:

$ ansible-playbook upgrade.yaml
$ ansible-playbook site.yaml
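
For reference, the version bump in group_vars/all.yaml can also be scripted; this is only a sketch, assuming both variables are plain top-level entries in that file, and the version numbers are placeholders:

$ sed -i 's/^kubernetes_version:.*/kubernetes_version: "1.15.6"/' group_vars/all.yaml
$ sed -i 's/^kubernetes_ubuntu_version:.*/kubernetes_ubuntu_version: "1.15.6-00"/' group_vars/all.yaml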

Open Issues

Find a better way to configure worker nodes' network plugin

Somehow, the network plugin (kubenet) is not correctly set on the worker node. On the master node /var/lib/kubelet/kubeadm-flags.env (created by kubeadm init) contains:

KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --cloud-provider=external --network-plugin=kubenet --pod-infra-container-image=k8s.gcr.io/pause:3.1 --resolv-conf=/run/systemd/resolve/resolv.conf"

It contains the correct --network-plugin=kubenet as configured here. After joining the k8s cluster, the worker node's copy of /var/lib/kubelet/kubeadm-flags.env (created by kubeadm join) looks like this:

KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --resolv-conf=/run/systemd/resolve/resolv.conf"

It contains --network-plugin=cni despite setting network-plugin: kubenet here. But the JoinConfiguration is ignored by kubeadm join when using a join token.

Once I edit /var/lib/kubelet/kubeadm-flags.env to contain --network-plugin=kubenet, the worker node goes online. I've added a hack in roles/kubeadm-nodes/tasks/main.yaml to set the correct value.
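
The manual workaround described above amounts to something like the following on an affected worker (a sketch of the fix, not the exact task from roles/kubeadm-nodes/tasks/main.yaml):

$ sudo sed -i 's/--network-plugin=cni/--network-plugin=kubenet/' /var/lib/kubelet/kubeadm-flags.env
$ sudo systemctl restart kubelet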

Prerequisites

  • Ansible (tested with version 2.9.1)
  • Shade library, required by the Ansible OpenStack modules (the python-shade package on Debian); see the install sketch below
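
A minimal install sketch (pip is shown as one option; only the tested Ansible version and the Debian package name come from this README):

$ pip install 'ansible==2.9.1' shade
# or, on Debian/Ubuntu:
$ sudo apt-get install ansible python-shade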

CI/CD

The following environment variables need to be defined (an example job step follows the list):

  • OS_AUTH_URL
  • OS_PASSWORD
  • OS_USERNAME
  • OS_DOMAIN_NAME
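
In a CI job these values are typically injected by the CI system as protected variables rather than committed to the repository; a hypothetical pipeline step might look like this (the destroy step is an assumption about how a test cluster would be cleaned up):

$ : "${OS_AUTH_URL:?OS_AUTH_URL must be set}"   # fail fast if credentials are missing
$ ansible-playbook site.yaml
$ ansible-playbook destroy.yaml                 # tear the test cluster down again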

Authors

References
