
Ansible Role: k3s (v2.x)

Ansible role for installing K3S ("Lightweight Kubernetes") as either a standalone server or cluster.

CI

Release notes

Please see Releases and CHANGELOG.md.

Requirements

The host you're running Ansible from requires the following Python dependencies:

  • ansible >= 2.9.17 or ansible-base >= 2.10.4

You can install dependencies using the requirements.txt file in this repository: pip3 install -r requirements.txt.

This role has been tested against the following Linux Distributions:

  • Amazon Linux 2
  • Archlinux
  • CentOS 8
  • CentOS 7
  • Debian 9
  • Debian 10
  • Fedora 29
  • Fedora 30
  • Fedora 31
  • Fedora 32
  • openSUSE Leap 15
  • Ubuntu 18.04 LTS
  • Ubuntu 20.04 LTS

⚠️ The v2 releases of this role only support k3s >= v1.19. For k3s < v1.19, please consider updating or use the v1.x releases of this role.

Before upgrading, see CHANGELOG for notifications of breaking changes.

Role Variables

Since K3s v1.19.1+k3s1 you can configure K3s using a configuration file rather than environment variables or command-line arguments. The v2 release of this role has moved to this configuration file method rather than populating a systemd unit file with command-line arguments. There may be exceptions, defined in Global/Cluster Variables, but you will mostly be configuring k3s through the k3s_server and k3s_agent variables.

See "Server (Control Plane) Configuration" and "Agent (Worker) Configuration" below.

Global/Cluster Variables

Below are variables that are set against all of the play hosts for environment consistency. These are generally cluster-level configuration.

Variable Description Default Value
k3s_state State of k3s: installed, started, stopped, downloaded, uninstalled, validated. installed
k3s_release_version Use a specific version of k3s, eg. v0.2.0. Specify false for stable. false
k3s_config_file Location of the k3s configuration file. /etc/rancher/k3s/config.yaml
k3s_build_cluster When multiple play hosts are available, attempt to cluster. Read notes below. true
k3s_registration_address Fixed registration address for nodes. IP or FQDN. NULL
k3s_github_url Set the GitHub URL to install k3s from. https://github.com/k3s-io/k3s
k3s_install_dir Installation directory for k3s. /usr/local/bin
k3s_install_hard_links Install using hard links rather than symbolic links. false
k3s_server_manifests_templates A list of Auto-Deploying Manifests Templates. []
k3s_use_experimental Allow the use of experimental features in k3s. false
k3s_use_unsupported_config Allow the use of unsupported configurations in k3s. false
k3s_etcd_datastore Enable etcd embedded datastore (read notes below). false
k3s_debug Enable debug logging on the k3s service. false
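
Taken together, cluster-level variables are typically set once for all play hosts, for example in a group variables file. A minimal sketch (the file name and hostname below are illustrative, not required by the role):

```yaml
# group_vars/k3s_cluster.yml (hypothetical file name)
k3s_release_version: v1.19.3+k3s1          # pin a release instead of tracking 'stable'
k3s_registration_address: k3s.example.com  # illustrative fixed registration address
k3s_etcd_datastore: true                   # use the embedded etcd datastore
k3s_install_hard_links: true               # hard links rather than symlinks (see notes below)
```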

K3S Service Configuration

The below variables change how and when the systemd service unit file for k3s is run. Use these with caution, and please refer to the systemd documentation for more information.

Variable Description Default Value
k3s_start_on_boot Start k3s on boot. true
k3s_service_requires List of systemd units required by the k3s service unit. []
k3s_service_wants List of systemd units "wanted" by the k3s service (weaker than "requires"). []*
k3s_service_before Start k3s before a defined list of systemd units. []
k3s_service_after Start k3s after a defined list of systemd units. []*

* The systemd unit template always specifies network-online.target for wants and after.
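
For example, to order the k3s service after an additional unit, the "wants" and "after" lists can reference it together (the mount unit name below is illustrative):

```yaml
k3s_service_wants:
  - var-lib-rancher.mount   # illustrative unit name
k3s_service_after:
  - var-lib-rancher.mount
# network-online.target is always present in Wants= and After= via the unit template
```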

Group/Host Variables

Below are variables that are set against individual or groups of play hosts. Typically you'd set these at group level for the control plane or worker nodes.

Variable Description Default Value
k3s_control_node Specify if a host (or host group) is part of the control plane. false (role will automatically delegate a node)
k3s_server Server (control plane) configuration, see notes below. {}
k3s_agent Agent (worker) configuration, see notes below. {}
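
These are typically set in group variables so that control plane and worker hosts receive different configuration; the group file names below are assumptions for illustration:

```yaml
# group_vars/control_plane.yml (illustrative)
k3s_control_node: true
k3s_server:
  disable:
    - traefik

# group_vars/workers.yml (illustrative)
k3s_agent:
  node-label:
    - "node-role=worker"
```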

Server (Control Plane) Configuration

The control plane is configured with the k3s_server dict variable. Please refer to the below documentation for configuration options:

https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/

The k3s_server dictionary variable will contain flags from the above (removing the -- prefix). Below is an example:

k3s_server:
  datastore-endpoint: postgres://postgres:[email protected]:5432/postgres?sslmode=disable
  docker: true
  cluster-cidr: 172.20.0.0/16
  flannel-backend: 'none'  # This needs to be in quotes
  disable:
    - traefik
    - coredns

Alternatively, you can create a .yaml file and read it into the k3s_server variable as per the below example:

k3s_server: "{{ lookup('file', 'path/to/k3s_server.yml') | from_yaml }}"

Check out the Documentation for example configuration.

Agent (Worker) Configuration

Workers are configured with the k3s_agent dict variable. Please refer to the below documentation for configuration options:

https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config

The k3s_agent dictionary variable will contain flags from the above (removing the -- prefix). Below is an example:

k3s_agent:
  with-node-id: true
  node-label:
    - "foo=bar"
    - "hello=world"

Alternatively, you can create a .yaml file and read it into the k3s_agent variable as per the below example:

k3s_agent: "{{ lookup('file', 'path/to/k3s_agent.yml') | from_yaml }}"

Check out the Documentation for example configuration.

Ansible Controller Configuration Variables

The below variables are used to change the way the role executes in Ansible, particularly with regards to privilege escalation.

Variable Description Default Value
k3s_skip_validation Skip all tasks that validate configuration. false
k3s_skip_env_checks Skip all tasks that check environment configuration. false
k3s_become_for_all Escalate user privileges for all tasks. Overrides all of the below. false
k3s_become_for_systemd Escalate user privileges for systemd tasks. NULL
k3s_become_for_install_dir Escalate user privileges for creating installation directories. NULL
k3s_become_for_directory_creation Escalate user privileges for creating application directories. NULL
k3s_become_for_usr_local_bin Escalate user privileges for writing to /usr/local/bin. NULL
k3s_become_for_package_install Escalate user privileges for installing k3s. NULL
k3s_become_for_kubectl Escalate user privileges for running kubectl. NULL
k3s_become_for_uninstall Escalate user privileges for uninstalling k3s. NULL
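
If the connecting user is unprivileged, the simplest approach is to escalate everything; the finer-grained toggles only matter when k3s_become_for_all is left false. A sketch:

```yaml
k3s_become_for_all: true
# -- or selectively, leaving k3s_become_for_all false --
# k3s_become_for_systemd: true
# k3s_become_for_usr_local_bin: true
```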

Important note about k3s_release_version

If you do not set a k3s_release_version the latest version from the stable channel of k3s will be installed. If you are developing against a specific version of k3s you must ensure this is set in your Ansible configuration, eg:

k3s_release_version: v1.19.3+k3s1

It is also possible to install specific K3s "Channels", below are some examples for k3s_release_version:

k3s_release_version: false             # defaults to 'stable' channel
k3s_release_version: stable            # latest 'stable' release
k3s_release_version: testing           # latest 'testing' release
k3s_release_version: v1.19             # latest 'v1.19' release
k3s_release_version: v1.19.3+k3s3      # specific release

# Specific commit
# CAUTION - only used for testing - must be 40 characters
k3s_release_version: 48ed47c4a3e420fa71c18b2ec97f13dc0659778b

Important note about k3s_install_hard_links

If you are using the system-upgrade-controller you will need to use hard links rather than symbolic links, as the controller will not be able to follow symbolic links. This option is not enabled by default, to avoid breaking existing installations.

To enable the use of hard links, ensure k3s_install_hard_links is set to true.

k3s_install_hard_links: true

The result of this can be seen by running the following in k3s_install_dir:

ls -larthi | grep -E 'k3s|ctr|ctl' | grep -vE ".sh$" | sort

Symbolic Links: (example `ls` output omitted)

Hard Links: (example `ls` output omitted)

Important note about k3s_build_cluster

If you set k3s_build_cluster to false, this role will install each play host as a standalone node. An example of when you might use this is building a large number of standalone IoT devices running K3s. Below is a hypothetical situation where we deploy 25 Raspberry Pi devices, each a standalone system rather than a cluster of 25 nodes. To do this we'd use a playbook similar to the below:

- hosts: k3s_nodes  # eg. 25 RPi's defined in our inventory.
  vars:
    k3s_build_cluster: false
  roles:
     - xanmanning.k3s

Important note about k3s_control_node and High Availability (HA)

By default, only one host will be defined as a control node by Ansible. If you do not set a host as a control node, this role will automatically delegate the first play host as a control node. This is not suitable for use within a production workload.

If multiple hosts have k3s_control_node set to true, you must also set datastore-endpoint in k3s_server to the connection string for a MySQL or PostgreSQL database or an external etcd cluster, otherwise the play will fail.

If using TLS, the CA, Certificate and Key need to already be available on the play hosts.

See: High Availability with an External DB
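
When TLS is used, the certificate material is passed through the corresponding k3s server flags inside k3s_server; the endpoint and file paths below are illustrative and must already exist on the play hosts:

```yaml
k3s_server:
  datastore-endpoint: "postgres://user:pass@db.example.com:5432/k3s?sslmode=verify-full"  # illustrative
  datastore-cafile: /etc/rancher/k3s/tls/ca.pem         # illustrative paths
  datastore-certfile: /etc/rancher/k3s/tls/client.pem
  datastore-keyfile: /etc/rancher/k3s/tls/client-key.pem
```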

It is also possible, though not supported, to run a single K3s control node with a datastore-endpoint defined. As this is not a typically supported configuration you will need to set k3s_use_unsupported_config to true.

Since K3s v1.19.1 it is possible to use an embedded Etcd as the backend database, and this is done by setting k3s_etcd_datastore to true. The best practice for Etcd is to define at least 3 members to ensure quorum is established. In addition to this, an odd number of members is recommended to ensure a majority in the event of a network partition. If you want to use 2 members or an even number of members, please set k3s_use_unsupported_config to true.
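
A minimal embedded-etcd cluster therefore runs three control nodes. A sketch, assuming an inventory group of at least three hosts:

```yaml
- hosts: k3s_nodes          # at least 3 hosts to establish etcd quorum
  vars:
    k3s_etcd_datastore: true
    k3s_control_node: true  # every play host joins the control plane
  roles:
    - role: xanmanning.k3s
```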

Dependencies

No dependencies on other roles.

Example Playbooks

Example playbook, single control node running testing channel k3s:

- hosts: k3s_nodes
  roles:
     - { role: xanmanning.k3s, k3s_release_version: testing }

Example playbook, Highly Available with PostgreSQL database running the latest stable release:

- hosts: k3s_nodes
  vars:
    k3s_registration_address: loadbalancer  # Typically a load balancer.
    k3s_server:
      datastore-endpoint: "postgres://postgres:[email protected]:5432/postgres?sslmode=disable"
  pre_tasks:
    - name: Set each node to be a control node
      ansible.builtin.set_fact:
        k3s_control_node: true
      when: inventory_hostname in ['node2', 'node3']
  roles:
    - role: xanmanning.k3s

License

BSD 3-clause

Contributors

Contributions from the community are very welcome, but please read the contribution guidelines before doing so; this will help make things as streamlined as possible.

Also, please check out the awesome list of contributors.

Author Information

Xan Manning
