
meezaan / linode-k8s-autoscaler

Licence: LGPL-2.1
Autoscaling utility for horizontally scaling Linodes in an LKE Cluster Node Pool based on memory or cpu usage

Programming Languages

PHP
23972 projects - #3 most used programming language
Dockerfile
14818 projects

Projects that are alternatives of or similar to linode-k8s-autoscaler

Spekt8
Visualize your Kubernetes cluster in real time
Stars: ✭ 545 (+1918.52%)
Mutual labels:  docker-container, kubernetes-cluster
Kube Aws Autoscaler
Simple, elastic Kubernetes cluster autoscaler for AWS Auto Scaling Groups
Stars: ✭ 94 (+248.15%)
Mutual labels:  kubernetes-cluster, autoscaling
Refarch Cloudnative Kubernetes
Reference Implementation for Microservices based on Kubernetes and the IBM Container Service.
Stars: ✭ 115 (+325.93%)
Mutual labels:  docker-container, kubernetes-cluster
Cerebral
Kubernetes cluster autoscaler with pluggable metrics backends and scaling engines
Stars: ✭ 138 (+411.11%)
Mutual labels:  kubernetes-cluster, autoscaling
Ksync
Sync files between your local system and a kubernetes cluster.
Stars: ✭ 1,005 (+3622.22%)
Mutual labels:  docker-container, kubernetes-cluster
Owasp Workshop
owasp-workshop: Orchestrating containers with Kubernetes
Stars: ✭ 116 (+329.63%)
Mutual labels:  docker-container, kubernetes-cluster
Spring Boot K8s Hpa
Autoscaling Spring Boot with the Horizontal Pod Autoscaler and custom metrics on Kubernetes
Stars: ✭ 250 (+825.93%)
Mutual labels:  docker-container, autoscaling
vamp2setup
Vamp Lamia Alpha Setup Guide
Stars: ✭ 33 (+22.22%)
Mutual labels:  kubernetes-cluster
docker-aws-s3-sync
Docker container to sync a folder to Amazon S3
Stars: ✭ 21 (-22.22%)
Mutual labels:  docker-container
k8s-istio-demo
Demo showing the capabilities of Istio
Stars: ✭ 22 (-18.52%)
Mutual labels:  kubernetes-cluster
docker-observium
Docker container for Observium Community Edition
Stars: ✭ 37 (+37.04%)
Mutual labels:  docker-container
firework8s
Firework8s is a collection of kubernetes objects (yaml files) for deploying workloads in a home lab.
Stars: ✭ 35 (+29.63%)
Mutual labels:  kubernetes-cluster
box-exec
Box exec is an npm package to compile and run code (C, C++, Python) in a virtualized environment; here, the virtualized environment is a Docker container. The package is built to ease the task of running code against test cases, as done by websites used to practice algorithmic coding.
Stars: ✭ 17 (-37.04%)
Mutual labels:  docker-container
terraform-kvm-kubespray
Set up Kubernetes cluster using KVM, Terraform and Kubespray
Stars: ✭ 55 (+103.7%)
Mutual labels:  kubernetes-cluster
bluechatter
Deploy & Scale a chat app using Cloud Foundry, Docker Container and Kubernetes
Stars: ✭ 64 (+137.04%)
Mutual labels:  docker-container
iris
Watches Kubernetes events, filters them, and sends them as standard webhooks to any system
Stars: ✭ 57 (+111.11%)
Mutual labels:  kubernetes-cluster
metrics-server-on-rancher-2.0.2
Method to Setup Metrics-Server on Kubernetes via Rancher-Deployed Cluster
Stars: ✭ 14 (-48.15%)
Mutual labels:  kubernetes-cluster
xiaomi-r3g-openwrt-builder
OpenWrt builder for any supported router, using Docker. Scheduled to run weekly
Stars: ✭ 25 (-7.41%)
Mutual labels:  docker-container
docker-atlassian
A docker-compose orchestration for JIRA Software and Confluence based on docker containers.
Stars: ✭ 13 (-51.85%)
Mutual labels:  docker-container
docker-hubot
Docker container for running hubot in a container.
Stars: ✭ 17 (-37.04%)
Mutual labels:  docker-container


Linode Kubernetes Engine Autoscaler

Note (October 14, 2021): Linode has now released its own Kubernetes autoscaler to all LKE clusters. While this autoscaler still has its uses (particularly if you want to autoscale aggressively and in advance), it will no longer be actively maintained.


This is a simple autoscaling utility for horizontally scaling Linodes in an LKE cluster node pool based on memory or CPU usage. This effectively means that you can use Kubernetes' horizontal pod autoscaling to scale up your pods (a minimal example follows) and this utility to scale up your Linodes - so you can set both up and let your cluster scale up or down as needed.
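For illustration, here is a minimal Horizontal Pod Autoscaler manifest of the kind you might pair with this utility. The names are placeholders, and depending on your Kubernetes version you may need the autoscaling/v2beta2 API instead of autoscaling/v2:

---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app ####### Placeholder; use your own app's name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app ####### Placeholder deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 65

The HPA adds pods when CPU utilisation rises; once those pods no longer fit on the existing nodes, this utility adds Linodes to the pool.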

Each instance of this utility autoscales based on either memory or CPU. To use both, deploy two instances of this utility (usually one is enough).

It's fully dockerised (but written in PHP) and has a low resource footprint, so you can deploy it locally or on the cluster itself.

Contents

  1. Requirements
  2. Published Docker Image
  3. Environment Variables / Configuration
  4. Usage
  5. Deploying on Kubernetes for Production Use
  6. Sizing the Autoscaler Pod
  7. Credits
  8. Disclaimer

Requirements

  • A Linode Kubernetes Engine (LKE) cluster with the Metrics Server installed (you can verify this with the check below)
  • A kubectl config file (usually stored at ~/.kube/config)
  • A Linode Personal Access Token with access to LKE
  • Docker (recommended) or PHP 7.4 (you'll need to set up the environment variables on your machine / server before using PHP without Docker)
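To verify that the Metrics Server is installed and responding, ask it for node metrics:

kubectl top nodes

If this prints CPU and memory figures for each node, the autoscaler will be able to read usage data.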

Published Docker Image

The image for this utility is published on Docker Hub as meezaan/linode-k8s-autoscaler (https://hub.docker.com/r/meezaan/linode-k8s-autoscaler).

The latest tag always has the latest code, and Docker Hub tags are tied to the tags in this git repository as releases.
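For example, to pull the image (the version tag below is hypothetical; check the Docker Hub tags page for actual releases):

docker pull meezaan/linode-k8s-autoscaler:latest
docker pull meezaan/linode-k8s-autoscaler:v1.2.0   # hypothetical release tag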

Environment Variables / Configuration

The docker container takes all its configuration via environment variables. Here's a list of what each one does:

  • LINODE_PERSONAL_ACCESS_TOKEN: Your Personal Access Token with the LKE scope
  • LINODE_LKE_CLUSTER_ID: The ID of the LKE cluster to autoscale
  • LINODE_LKE_CLUSTER_POOL_ID: The node pool ID within the LKE cluster to autoscale
  • LINODE_LKE_CLUSTER_POOL_MINIMUM_NODES: The minimum number of nodes to keep in the node pool. The pool won't be scaled down below this.
  • AUTOSCALE_TRIGGER: 'cpu' or 'memory'. Defaults to memory.
  • AUTOSCALE_TRIGGER_TYPE: 'requested' or 'used'. Defaults to requested. Tells the autoscaler whether to compare the requested or the currently used memory / CPU against the thresholds when scaling up or down.
  • AUTOSCALE_UP_PERCENTAGE: At what percentage of used 'cpu' or 'memory' to scale up the node pool (applies when AUTOSCALE_TRIGGER_TYPE is 'used'). Example: 65
  • AUTOSCALE_DOWN_PERCENTAGE: At what percentage of used 'cpu' or 'memory' to scale down the node pool (applies when AUTOSCALE_TRIGGER_TYPE is 'used'). Example: 40
  • AUTOSCALE_RESOURCE_REQUEST_UP_PERCENTAGE: At what percentage of requested (versus available) 'cpu' or 'memory' to scale up the cluster (applies when AUTOSCALE_TRIGGER_TYPE is 'requested'). Default: 80
  • AUTOSCALE_RESOURCE_REQUEST_DOWN_PERCENTAGE: At what percentage of requested (versus available) 'cpu' or 'memory' to scale down the cluster (applies when AUTOSCALE_TRIGGER_TYPE is 'requested'). Default: 70
  • AUTOSCALE_QUERY_INTERVAL: How many seconds to wait between calls to the Kubernetes API to check CPU and memory usage. Example: 10
  • AUTOSCALE_THRESHOLD_COUNT: How many consecutive breaches of AUTOSCALE_UP_PERCENTAGE or AUTOSCALE_DOWN_PERCENTAGE are required before the cluster is scaled up or down. Example: 3
  • AUTOSCALE_NUMBER_OF_NODES: How many nodes to add at a time when scaling up the cluster. Example: 1, 2, 3 or N
  • AUTOSCALE_WAIT_TIME_AFTER_SCALING: How many seconds to wait after scaling up or down before checking CPU and memory again. This should give the cluster enough time to adjust itself to the updated number of nodes. Example: 150

To understand the above, assume we have set the following values:

  • AUTOSCALE_TRIGGER=memory
  • AUTOSCALE_TRIGGER_TYPE=requested
  • AUTOSCALE_UP_PERCENTAGE=65
  • AUTOSCALE_DOWN_PERCENTAGE=30
  • AUTOSCALE_RESOURCE_REQUEST_UP_PERCENTAGE=80
  • AUTOSCALE_RESOURCE_REQUEST_DOWN_PERCENTAGE=70
  • AUTOSCALE_QUERY_INTERVAL=10
  • AUTOSCALE_THRESHOLD_COUNT=3
  • AUTOSCALE_NUMBER_OF_NODES=2
  • AUTOSCALE_WAIT_TIME_AFTER_SCALING=180

With this setup, the autoscaler utility will query the Kubernetes API every 10 seconds. If, over 3 consecutive calls to the API (effectively meaning over 30 seconds), the requested memory exceeds 80% of the total memory available on the cluster, 2 more nodes will be added to the specified node pool. The utility will then wait 180 seconds before it resumes querying the API every 10 seconds.

If, over 3 consecutive calls to the API (effectively meaning over 30 seconds), the requested memory is below 70% of the total memory available on the cluster, 1 node will be removed from the specified node pool (nodes are always removed one at a time so you don't suddenly run out of capacity). The utility will then wait 180 seconds before it resumes querying the API every 10 seconds.
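As a concrete illustration (the numbers are hypothetical): suppose the pool has 4 nodes with 8GB of memory each, i.e. 32GB in total. If your pods request 26GB, that's about 81% of capacity, above the 80% threshold, so after 3 consecutive readings 2 nodes are added (48GB total). If requests then drop to 30GB, that's about 63% of the new capacity, below the 70% threshold, so a node is removed; at 5 nodes (40GB) requests sit at 75%, between the two thresholds, and the pool stabilises.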

The same example, with a different trigger type:

  • AUTOSCALE_TRIGGER=memory
  • AUTOSCALE_TRIGGER_TYPE=used
  • AUTOSCALE_UP_PERCENTAGE=65
  • AUTOSCALE_DOWN_PERCENTAGE=30
  • AUTOSCALE_RESOURCE_REQUEST_UP_PERCENTAGE=80
  • AUTOSCALE_RESOURCE_REQUEST_DOWN_PERCENTAGE=70
  • AUTOSCALE_QUERY_INTERVAL=10
  • AUTOSCALE_THRESHOLD_COUNT=3
  • AUTOSCALE_NUMBER_OF_NODES=2
  • AUTOSCALE_WAIT_TIME_AFTER_SCALING=180

With this setup, the autoscaler utility will query the Kubernetes API every 10 seconds. If, over 3 consecutive calls to the API (effectively meaning over 30 seconds), the memory usage is higher than 65% of the total memory available on the cluster, 2 more nodes will be added to the specified node pool. The utility will then wait 180 seconds before it resumes querying the API every 10 seconds.

If, over 3 consecutive calls to the API (effectively meaning over 30 seconds), the memory usage is below 30% of the total memory available on the cluster, 1 node will be removed from the specified node pool (nodes are always removed one at a time so you don't suddenly run out of capacity). The utility will then wait 180 seconds before it resumes querying the API every 10 seconds.
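In rough PHP pseudocode, the decision loop described above looks something like this. This is a sketch only: the helper functions are hypothetical and the utility's actual internals may differ.

<?php
// Sketch of the polling loop. getClusterUsagePercentage() and scalePool()
// are hypothetical helpers, not the utility's real code. In 'requested'
// mode the AUTOSCALE_RESOURCE_REQUEST_* thresholds would be used instead.
$breachesUp = 0;
$breachesDown = 0;
while (true) {
    $usage = getClusterUsagePercentage(); // hypothetical: memory or CPU usage as %
    if ($usage > (int) getenv('AUTOSCALE_UP_PERCENTAGE')) {
        $breachesUp++;
        $breachesDown = 0;
    } elseif ($usage < (int) getenv('AUTOSCALE_DOWN_PERCENTAGE')) {
        $breachesDown++;
        $breachesUp = 0;
    } else {
        $breachesUp = $breachesDown = 0; // breaches must be consecutive
    }
    if ($breachesUp >= (int) getenv('AUTOSCALE_THRESHOLD_COUNT')) {
        scalePool((int) getenv('AUTOSCALE_NUMBER_OF_NODES')); // hypothetical helper
        $breachesUp = 0;
        sleep((int) getenv('AUTOSCALE_WAIT_TIME_AFTER_SCALING'));
    } elseif ($breachesDown >= (int) getenv('AUTOSCALE_THRESHOLD_COUNT')) {
        scalePool(-1); // nodes are always removed one at a time
        $breachesDown = 0;
        sleep((int) getenv('AUTOSCALE_WAIT_TIME_AFTER_SCALING'));
    } else {
        sleep((int) getenv('AUTOSCALE_QUERY_INTERVAL'));
    }
}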

Usage

You'll need to configure the Docker image with environment variables and your kubectl config file.

To run locally:

docker run -v ~/.kube/config:/root/.kube/config \
-e LINODE_PERSONAL_ACCESS_TOKEN='xxxx' \
-e LINODE_LKE_CLUSTER_ID='xxxx' \
-e LINODE_LKE_CLUSTER_POOL_ID='xxxx' \
-e LINODE_LKE_CLUSTER_POOL_MINIMUM_NODES='3' \
-e AUTOSCALE_TRIGGER='cpu' \
-e AUTOSCALE_UP_PERCENTAGE='60' \
-e AUTOSCALE_DOWN_PERCENTAGE='30' \
-e AUTOSCALE_RESOURCE_REQUEST_UP_PERCENTAGE='70' \
-e AUTOSCALE_RESOURCE_REQUEST_DOWN_PERCENTAGE='70' \
-e AUTOSCALE_QUERY_INTERVAL='10' \
-e AUTOSCALE_THRESHOLD_COUNT='3' \
-e AUTOSCALE_NUMBER_OF_NODES='1' \
-e AUTOSCALE_WAIT_TIME_AFTER_SCALING='180' meezaan/linode-k8s-autoscaler

Deploying on Kubernetes for Production Use

For production, you can build a private Docker image and bake a kubectl config file with a service account's credentials into the image. Your Dockerfile might look something like:

FROM meezaan/linode-k8s-autoscaler

COPY configfile /root/.kube/config
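Then build and push it. The image name here is just an example, reused in the manifest below:

docker build -t yourspace/k8s-autoscaler:latest .
docker push yourspace/k8s-autoscaler:latest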

Once you've built and pushed the image, you can deploy it with the following manifest:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-autoscaler
  namespace: name-of-namespace ####### Change this to the actual namespace
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: k8s-autoscale
  template:
    metadata:
      labels:
        app: k8s-autoscale
    spec:
      imagePullSecrets:
        - name: regcred  ####### Docker registry credentials secret
      containers:
        - name: k8s-autoscale
          image: yourspace/k8s-autoscaler:latest ####### CHANGE THIS TO YOUR ACTUAL DOCKER IMAGE
          env:
            - name: LINODE_PERSONAL_ACCESS_TOKEN
              valueFrom:
                secretKeyRef:
                  name: linode-personal-access-token-k8s-autoscaler ####### LINODE PERSONAL ACCESS TOKEN SECRET
                  key: token
            - name: LINODE_LKE_CLUSTER_ID
              value: ""
            - name: LINODE_LKE_CLUSTER_POOL_ID
              value: ""
            - name: LINODE_LKE_CLUSTER_POOL_MINIMUM_NODES
              value: "3" ####### Example value; the pool won't be scaled below this
            - name: AUTOSCALE_TRIGGER
              value: "memory"
            - name: AUTOSCALE_UP_PERCENTAGE
              value: "60"
            - name: AUTOSCALE_DOWN_PERCENTAGE
              value: "30"
            - name: AUTOSCALE_QUERY_INTERVAL
              value: "30"
            - name: AUTOSCALE_THRESHOLD_COUNT
              value: "3"
            - name: AUTOSCALE_NUMBER_OF_NODES
              value: "1"
            - name: AUTOSCALE_WAIT_TIME_AFTER_SCALING
              value: "150"
          resources:
            requests:
              memory: 32Mi
            limits:
              memory: 32Mi

The above manifest reads your Linode Personal Access Token and Docker registry credentials from Kubernetes secrets, which you will need to create.
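A sketch of creating them with kubectl (substitute your own values and namespace):

kubectl create secret generic linode-personal-access-token-k8s-autoscaler \
  --from-literal=token='xxxx' -n name-of-namespace

kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=yourspace \
  --docker-password='xxxx' \
  -n name-of-namespace

Then apply the manifest (assuming you saved it as autoscaler.yaml):

kubectl apply -f autoscaler.yaml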

Sizing the Autoscaler Pod

The above pod takes about 0.01 CPU and 15MB of memory to run. The memory may increase with the size of the API response, but since the response is JSON, even with 100+ servers in your cluster you're still only looking at around 30MB.

Credits

Disclaimer

This utility is not affiliated with Linode.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].