equinix / terraform-metal-k3s

License: Apache-2.0
Manage K3s (k3s.io) region clusters on Equinix Metal

Programming Languages

HCL
1544 projects
Shell
77523 projects
Go
31211 projects - #10 most used programming language
Python
139335 projects - #7 most used programming language
Smarty
1635 projects
Jinja
831 projects
Dockerfile
14818 projects

Projects that are alternatives of or similar to terraform-metal-k3s

balanced
BalanceD is a Layer-4 Linux Virtual Server (LVS) based load balancing platform for Kubernetes.
Stars: ✭ 34 (-17.07%)
Mutual labels:  bgp, baremetal, anycast
K3d
Little helper to run Rancher Lab's k3s in Docker
Stars: ✭ 3,090 (+7436.59%)
Mutual labels:  rancher, k3s
k3d-demo
Demo of k3d: Tool to run k3s (Kubernetes) in Docker
Stars: ✭ 197 (+380.49%)
Mutual labels:  rancher, k3s
multipass-k3s
Use multipass instances to create your k3s cluster
Stars: ✭ 50 (+21.95%)
Mutual labels:  rancher, k3s
gocast
GoCast is a tool for controlled BGP route announcements from a host
Stars: ✭ 55 (+34.15%)
Mutual labels:  bgp, anycast
k3d-action
A GitHub Action to run lightweight ephemeral Kubernetes clusters during workflows. The fundamental advantage of this action is full customization of the embedded k3s clusters. In addition, it provides a private image registry and multi-cluster support.
Stars: ✭ 137 (+234.15%)
Mutual labels:  rancher, k3s
paas-templates
Bosh, CFAR, CFCR and OSB services templates for use with COA (cf-ops-automation) framework
Stars: ✭ 16 (-60.98%)
Mutual labels:  rancher, k3s
quads
📆 The infrastructure deployment time machine
Stars: ✭ 74 (+80.49%)
Mutual labels:  baremetal
helm-charts
My collection of Helm charts.
Stars: ✭ 62 (+51.22%)
Mutual labels:  k3s
rancher2-ansible
Provision a single node rancher2 k8s cluster using Ansible
Stars: ✭ 18 (-56.1%)
Mutual labels:  rancher
spring-cloud-microservices-on-kubernetes
My Best Practices in development and deployment of Spring Cloud Microservices on Kubernetes.
Stars: ✭ 19 (-53.66%)
Mutual labels:  rancher
kubernetes-basico
Demonstration of Kubernetes components
Stars: ✭ 26 (-36.59%)
Mutual labels:  baremetal
Certified-Rancher-Operator-Thai
Learn on-premise Kubernetes and the Rancher architecture used to manage Kubernetes clusters, on the path to Certified Kubernetes Administrator and Certified Rancher Operator
Stars: ✭ 78 (+90.24%)
Mutual labels:  rancher
pilot
Simple web-based SDN controller for family and friends
Stars: ✭ 33 (-19.51%)
Mutual labels:  bgp
rancher-redis
A containerized redis master/slave configuration with sentinels for use in Rancher
Stars: ✭ 13 (-68.29%)
Mutual labels:  rancher
bovine
Manager for single node Rancher clusters
Stars: ✭ 51 (+24.39%)
Mutual labels:  rancher
rackshift
RackShift is an open-source bare-metal server management platform, covering bare-metal server discovery, out-of-band management, RAID configuration, firmware updates, operating system installation, and more.
Stars: ✭ 467 (+1039.02%)
Mutual labels:  baremetal
pathvector
Declarative routing platform that automates BGP route optimization and control plane configuration with secure and repeatable routing policy.
Stars: ✭ 110 (+168.29%)
Mutual labels:  bgp
rpki-client-portable
Portability shim for OpenBSD's rpki-client
Stars: ✭ 33 (-19.51%)
Mutual labels:  bgp
htk8s
HTPC services running on Kubernetes
Stars: ✭ 69 (+68.29%)
Mutual labels:  k3s

K3s on Equinix Metal

This is a Terraform project for deploying K3s on Equinix Metal.

New projects can build on this Equinix Metal K3s Terraform Registry module with:

terraform init --from-module=equinix/k3s/metal metal-k3s

This project runs on ARM devices and configures your cluster with:

  • MetalLB, using Equinix Metal elastic IPs.

This is intended to allow you to quickly spin up and tear down K3s clusters in edge locations.
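
Once MetalLB is announcing those elastic IPs, any Kubernetes Service of type LoadBalancer is assigned one of them. As a hedged sketch, a Service like the following (written here with the Terraform kubernetes provider; the name and app label are hypothetical, and a configured kubernetes provider is assumed) would receive an elastic IP:

resource "kubernetes_service" "demo" {
  metadata {
    name = "demo" # hypothetical Service name
  }

  spec {
    type = "LoadBalancer" # MetalLB assigns an Equinix Metal elastic IP

    selector = {
      app = "demo" # assumes a workload labeled app=demo
    }

    port {
      port        = 80
      target_port = 8080
    }
  }
}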

This repository is Experimental, meaning that it is based on untested ideas or techniques, is not yet established or finalized, or involves a radically new and innovative style! Support is therefore best effort (at best!), and we strongly encourage you NOT to use this in production.

Requirements

The only required variables are auth_token (your Equinix Metal API key), your Equinix Metal project_id, facility, and count (the number of ARM agent nodes in the cluster, not counting the controller, which is always 1; to run only the controller and its local node, set this value to 0).

In addition to Terraform, your client machine (where Terraform will be run from) will need curl and jq available in order for all of the automation to run as expected.

You will need an SSH key associated with this project or with your account. Set ssh_private_key to the path of that identity; it is used only locally, to help Terraform complete cluster bootstrapping (retrieving the cluster node-token from the controller node).

BGP will need to be enabled for your project.
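
This can be done from the Equinix Metal console, or, if you also manage the project itself with Terraform, through the project resource's BGP configuration. A minimal sketch (the project name is a placeholder, the ASN is a placeholder private ASN, and managing the project is outside the scope of this module):

resource "metal_project" "k3s" {
  name = "k3s-edge" # placeholder project name

  bgp_config {
    deployment_type = "local"
    asn             = 65000 # placeholder private ASN
  }
}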

Clusters

Generating a Cluster Template

To ensure all your regions have standardized deployments, set count (number of nodes per cluster), plan_primary, and plan_node in your Terraform variables (as TF_VAR_varname environment variables or in terraform.tfvars). These values apply to all clusters managed by this project.
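
For example, a terraform.tfvars might pin these values (variable names as given in this README; the plan names below are illustrative, so substitute plans available in your facilities):

# terraform.tfvars -- all values are placeholders
count        = 3               # agent nodes per cluster
plan_primary = "c2.large.arm"  # controller plan
plan_node    = "c2.large.arm"  # agent plan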

To add new clusters to a cluster pool, add the new facility to the facilities map:

variable "facilities" {
  type = "map"

  default = {
    newark  = "ewr1"
    narita  = "nrt1"
    sanjose = "sjc1"
  }
}

by adding a line such as:

...
    chicago = "ord1"
  }
}

Manually Defining a Cluster or Adding a New Cluster Pool

To create a cluster manually, instantiate a new cluster_pool module in cluster-inventory.tf (this file is ignored by git; your initial cluster setup is in clusters.tf, which is tracked):

module "manual_cluster" {
  source = "./modules/cluster_pool"

  cluster_name         = "manual_cluster"
  node_count           = var.node_count
  plan_primary         = var.plan_primary
  plan_node            = var.plan_node
  facilities           = var.facilities
  primary_facility     = var.primary_facility
  auth_token           = var.auth_token
  project_id           = var.project_id
  ssh_private_key_path = var.ssh_private_key_path
  anycast_ip           = metal_reserved_ip_block.anycast_ip.address
}

This creates a single-controller cluster, with node_count agent nodes in each facility in the facilities map.
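
The anycast_ip argument above assumes a global IPv4 reservation exists in your root module. A minimal sketch of such a resource, using the Metal provider's reserved IP block (its placement and naming here are illustrative):

# Reserves a single global anycast IPv4 address that MetalLB can
# announce from every facility in the cluster pool.
resource "metal_reserved_ip_block" "anycast_ip" {
  project_id = var.project_id
  type       = "global_ipv4"
  quantity   = 1
}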

Demo Project

In example/, there are files to configure and deploy a demo project. The demo returns the IP of the cluster that served your request, demonstrating the use of Equinix Metal's Global IPv4 addresses to distribute traffic globally across your edge cluster deployments.

To run the demo, use the deploy_demo Ansible project: first run the create_inventory.sh script to gather your cluster controller IPs into an inventory for Ansible, then run the playbook:

cd example/
sh create_inventory.sh
cd deploy_demo
ansible-playbook -i inventory.yaml main.yml

or copy example/deploy_demo/roles/demo/files/traefik.sh to your kubectl client machine and run it manually to deploy Traefik and the application.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].