
hjacobs / Kube Aws Autoscaler

Licence: gpl-3.0
Simple, elastic Kubernetes cluster autoscaler for AWS Auto Scaling Groups

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Kube Aws Autoscaler

Kubernetes On Aws
Deploying Kubernetes on AWS with CloudFormation and Ubuntu
Stars: ✭ 517 (+450%)
Mutual labels:  aws, kubernetes-cluster
Aws Scalable Big Blue Button Example
Demonstration of how to deploy a scalable video conference solution based on Big Blue Button
Stars: ✭ 29 (-69.15%)
Mutual labels:  aws, autoscaling
Escalator
Escalator is a batch or job optimized horizontal autoscaler for Kubernetes
Stars: ✭ 539 (+473.4%)
Mutual labels:  aws, autoscaling
Aws Ec2 Assign Elastic Ip
Automatically assign Elastic IPs to AWS EC2 Auto Scaling Group instances
Stars: ✭ 172 (+82.98%)
Mutual labels:  aws, autoscaling
Terraform Ecs Autoscale Alb
ECS cluster with instance and service autoscaling configured and running behind an ALB with path based routing set up
Stars: ✭ 60 (-36.17%)
Mutual labels:  aws, autoscaling
linode-k8s-autoscaler
Autoscaling utility for horizontally scaling Linodes in an LKE Cluster Node Pool based on memory or cpu usage
Stars: ✭ 27 (-71.28%)
Mutual labels:  kubernetes-cluster, autoscaling
Rotate Eks Asg
Rolling Cluster Node Upgrades for AWS EKS
Stars: ✭ 6 (-93.62%)
Mutual labels:  aws, autoscaling
Aws Sdk Perl
A community AWS SDK for Perl Programmers
Stars: ✭ 153 (+62.77%)
Mutual labels:  aws, autoscaling
Terraform Aws Asg
Terraform AWS Auto Scaling Stack
Stars: ✭ 58 (-38.3%)
Mutual labels:  aws, autoscaling
Terraform Aws Dynamodb
Terraform module that implements AWS DynamoDB with support for AutoScaling
Stars: ✭ 49 (-47.87%)
Mutual labels:  aws, autoscaling
Replicator
Automated Cluster and Job Scaling For HashiCorp Nomad
Stars: ✭ 166 (+76.6%)
Mutual labels:  aws, autoscaling
Governor
A collection of cluster reliability tools for Kubernetes
Stars: ✭ 71 (-24.47%)
Mutual labels:  aws, kubernetes-cluster
Terraform Aws Autoscaling
Terraform module which creates Auto Scaling resources on AWS
Stars: ✭ 166 (+76.6%)
Mutual labels:  aws, autoscaling
Kubenow
Deploy Kubernetes. Now!
Stars: ✭ 285 (+203.19%)
Mutual labels:  aws, kubernetes-cluster
Autospotting
Saves up to 90% of AWS EC2 costs by automating the use of spot instances on existing AutoScaling groups. Installs in minutes using CloudFormation or Terraform. Convenient to deploy at scale using StackSets. Uses tagging to avoid launch configuration changes. Automated spot termination handling. Reliable fallback to on-demand instances.
Stars: ✭ 2,014 (+2042.55%)
Mutual labels:  aws, autoscaling
Geodesic
🚀 Geodesic is a DevOps Linux Distro. We use it as a cloud automation shell. It's the fastest way to get up and running with a rock solid Open Source toolchain. ★ this repo! https://slack.cloudposse.com/
Stars: ✭ 629 (+569.15%)
Mutual labels:  aws, kubernetes-cluster
Awesome Kubernetes
A curated list for awesome kubernetes sources 🚢🎉
Stars: ✭ 12,306 (+12991.49%)
Mutual labels:  aws, kubernetes-cluster
Ops Cli
Ops - cli wrapper for Terraform, Ansible, Helmfile and SSH for cloud automation
Stars: ✭ 152 (+61.7%)
Mutual labels:  aws, kubernetes-cluster
Karch
A Terraform module to create and maintain Kubernetes clusters on AWS easily, relying entirely on kops
Stars: ✭ 38 (-59.57%)
Mutual labels:  aws, kubernetes-cluster
Kube Aws
[EOL] A command-line tool to declaratively manage Kubernetes clusters on AWS
Stars: ✭ 1,146 (+1119.15%)
Mutual labels:  aws, kubernetes-cluster

Kubernetes AWS Cluster Autoscaler
=================================

.. image:: https://travis-ci.org/hjacobs/kube-aws-autoscaler.svg?branch=master
   :target: https://travis-ci.org/hjacobs/kube-aws-autoscaler
   :alt: Travis CI Build Status

.. image:: https://coveralls.io/repos/github/hjacobs/kube-aws-autoscaler/badge.svg?branch=master;_=1
   :target: https://coveralls.io/github/hjacobs/kube-aws-autoscaler?branch=master
   :alt: Code Coverage

THIS PROJECT IS NO LONGER MAINTAINED, PLEASE USE THE OFFICIAL `CLUSTER AUTOSCALER <https://github.com/kubernetes/autoscaler>`_ INSTEAD.

Simple cluster autoscaler for AWS Auto Scaling Groups which sets the DesiredCapacity of one or more ASGs to the calculated number of nodes.
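
In practice this amounts to an AWS Auto Scaling API call. The following is a minimal sketch using boto3 (the AWS SDK for Python); the function name is illustrative, not the project's actual code:

.. code-block:: python

    import boto3

    def apply_desired_capacity(asg_name: str, desired_nodes: int) -> None:
        """Illustrative sketch: set an ASG's DesiredCapacity to the calculated
        node count (the value must lie within the ASG's MinSize/MaxSize)."""
        autoscaling = boto3.client("autoscaling")
        autoscaling.set_desired_capacity(
            AutoScalingGroupName=asg_name,
            DesiredCapacity=desired_nodes,
            HonorCooldown=False,
        )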

Goals:

  • support multiple Auto Scaling Groups
  • support resource buffer (overprovision fixed or percentage amount)
  • respect Availability Zones, i.e. make sure that all AZs provide enough capacity
  • be deterministic and predictable, i.e. the DesiredCapacity is only calculated based on the current cluster state
  • scale down slowly to mitigate service disruptions, i.e. at most one node at a time
  • support "elastic" workloads like daily up/down scaling
  • support AWS Spot Fleet (not yet implemented)
  • require a minimum amount of configuration (preferably none)
  • keep it simple

This autoscaler was initially created as a proof of concept and born out of frustration with the `"official" cluster-autoscaler`_:

  • it only scales up when "it's too late" (pods are unschedulable)
  • it does not honor Availability Zones
  • it does not support multiple Auto Scaling Groups
  • it requires unnecessary configuration
  • the code is quite complex

Disclaimer
----------

Use at your own risk! This autoscaler was only tested with Kubernetes versions 1.5.2 to 1.7.7. There is no guarantee that it works in previous Kubernetes versions.

Is it production ready? Yes, the kube-aws-autoscaler has been running in production at Zalando for months; see https://github.com/zalando-incubator/kubernetes-on-aws for more information and deployment configuration.

How it works
------------

The autoscaler consists of a simple main loop which calls the autoscale function every 60 seconds (configurable via the ``--interval`` option). The main loop keeps no state (such as history); all input for the autoscale function comes from either static configuration or the Kubernetes API server. The autoscale function performs the following tasks:

  • retrieve the list of all (worker) nodes from the Kubernetes API and group them by Auto Scaling Group (ASG) and Availability Zone (AZ)

  • retrieve the list of all pods from the Kubernetes API

  • calculate the current resource "usage" for every ASG and AZ by summing up all pod resource requests (CPU, memory and number of pods)

  • calculate the currently required number of nodes per AWS Auto Scaling Group (see the sketch after this list):

    • iterate through every ASG/AZ combination
    • use the calculated resource usage (sum of resource requests) and add the resource requests of any unassigned pods (pods not scheduled on any node yet)
    • apply the configured buffer values (10% extra for CPU and memory by default)
    • find the `allocatable capacity`_ of the weakest node
    • calculate the number of required nodes by adding up the capacity of the weakest node until the sum is greater than or equal to requested+buffer for both CPU and memory
    • sum up the number of required nodes from all AZs for the ASG
  • adjust the number of required nodes if it would scale down more than one node at a time

  • set the DesiredCapacity for each ASG to the calculated number of required nodes
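
The node-count calculation above can be summarized in a short sketch. It assumes the summed pod requests and the weakest node's allocatable capacity for one ASG/AZ are already known, shows only the percentage buffer and spare nodes, and uses illustrative names rather than the project's actual code:

.. code-block:: python

    import math

    def required_nodes(requested, weakest_node_capacity,
                       buffer_percentage=10, spare_nodes=1):
        """Illustrative sketch: how many nodes of the weakest node's allocatable
        capacity are needed to cover the summed requests plus buffer for
        CPU, memory and number of pods."""
        nodes = 0
        for resource, value in requested.items():
            with_buffer = value * (1 + buffer_percentage / 100)
            # Add whole nodes of the weakest node's capacity until the
            # buffered request is covered for this resource.
            nodes = max(nodes, math.ceil(with_buffer / weakest_node_capacity[resource]))
        return nodes + spare_nodes

The autoscaler repeats this for every ASG/AZ combination and then sums the per-AZ results for each ASG.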

The whole process relies on having properly configured resource requests for all pods.
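
The surrounding main loop is equally simple. A minimal sketch, with ``autoscale`` standing in for the steps above and names that are illustrative rather than the project's actual code:

.. code-block:: python

    import time

    def run(autoscale, interval=60, once=False):
        """Illustrative sketch: recompute DesiredCapacity from the current
        cluster state on every run; no history is kept between iterations."""
        while True:
            autoscale()
            if once:  # mirrors the --once debugging option
                break
            time.sleep(interval)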

Usage
-----

Create the necessary IAM role (to be used by kube2iam if you have it deployed):

  • Modify deploy/cloudformation.yaml and change the AWS account ID and the worker node's role name as necessary.
  • Create the CloudFormation stack from deploy/cloudformation.yaml.

Deploy the autoscaler to your running cluster:

.. code-block:: bash

    $ kubectl apply -f deploy/deployment.yaml

See below for optional configuration parameters.

Configuration
-------------

The following command line options are supported:

--buffer-cpu-percentage       Extra CPU requests % to add to calculation, defaults to 10%.
--buffer-memory-percentage    Extra memory requests % to add to calculation, defaults to 10%.
--buffer-pods-percentage      Extra pods requests % to add to calculation, defaults to 10%.
--buffer-cpu-fixed            Extra CPU requests to add to calculation, defaults to 200m.
--buffer-memory-fixed         Extra memory requests to add to calculation, defaults to 200Mi.
--buffer-pods-fixed           Extra number of pods to overprovision for, defaults to 10.
--buffer-spare-nodes          Number of extra "spare" nodes to provision per ASG/AZ, defaults to 1.
--include-master-nodes        Do not ignore auto scaling group with master nodes.
--interval                    Time to sleep between runs in seconds, defaults to 60 seconds.
--once                        Only run once and exit (useful for debugging).
--scale-down-step-fixed       Scale down step in terms of node count, defaults to 1.
--scale-down-step-percentage  Scale down step in terms of node percentage (1.0 is 100%), defaults to 0%.
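
As a worked example of the buffer options, the sketch below assumes the percentage buffer is applied first and the fixed buffer added afterwards; the project's exact formula may differ:

.. code-block:: python

    def with_buffer(requested, buffer_percentage, buffer_fixed):
        # Illustrative only: percentage buffer first, then the fixed buffer.
        return requested * (1 + buffer_percentage / 100) + buffer_fixed

    # With the CPU defaults above (10% and 200m), 4 requested cores become:
    print(with_buffer(4.0, 10, 0.2))  # 4.6 cores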

.. _"official" cluster-autoscaler: https://github.com/kubernetes/autoscaler .. _allocatable capacity: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/node-allocatable.md
