ResultadosDigitais / bigtable-autoscaler-operator

Licence: other
Kubernetes operator to autoscale Google's Cloud Bigtable clusters

Programming Languages

go
31211 projects - #10 most used programming language
Starlark
911 projects
Makefile
30231 projects
shell
77523 projects
Dockerfile
14818 projects

Projects that are alternatives of or similar to bigtable-autoscaler-operator

push-to-gcr-github-action
An action that builds a Docker image and pushes it to Google Cloud Registry and Google Artifact Registry.
Stars: ✭ 43 (+95.45%)
Mutual labels:  gcp
augle
Auth + Google = Augle
Stars: ✭ 22 (+0%)
Mutual labels:  gcp
collie-cli
Collie CLI allows you to manage your AWS, Azure & GCP cloud landscape through a single view.
Stars: ✭ 152 (+590.91%)
Mutual labels:  gcp
moadsd-ng
The MOADSD-NG project provides a simple way to set up a hybrid cloud security demo, playground, and learning environment in the cloud.
Stars: ✭ 13 (-40.91%)
Mutual labels:  gcp
drone-gcloud-helm
Drone 0.5 plugin to create and deploy Helm charts for Kubernetes in Google Cloud.
Stars: ✭ 13 (-40.91%)
Mutual labels:  gcp
Networking-and-Kubernetes
This is the code repo for Networking and Kubernetes: A Layered Approach. https://learning.oreilly.com/library/view/networking-and-kubernetes/9781492081647/
Stars: ✭ 103 (+368.18%)
Mutual labels:  gcp
DeployMachineLearningModels
This repo contains deployments of machine learning models on various cloud services such as Azure, Heroku, AWS, GCP, etc.
Stars: ✭ 14 (-36.36%)
Mutual labels:  gcp
Everything-Tech
A collection of online resources to help you on your Tech journey.
Stars: ✭ 396 (+1700%)
Mutual labels:  gcp
gisjogja
GISJOGJA - a web-based geographic information system (GIS) application for tourism in the city of Jogja - www.firstplato.com
Stars: ✭ 17 (-22.73%)
Mutual labels:  gcp
tfeel
Twitter sentiment analysis
Stars: ✭ 22 (+0%)
Mutual labels:  gcp
infrakit.gcp
Infrakit plugins for Google Cloud Platform.
Stars: ✭ 12 (-45.45%)
Mutual labels:  gcp
paving
Terraform templates for paving infrastructure to deploy the Pivotal Platform.
Stars: ✭ 43 (+95.45%)
Mutual labels:  gcp
cloud-pricing-api
GraphQL API for cloud pricing. Contains over 3M public prices from AWS, Azure and GCP. Self-updates prices via an automated weekly job.
Stars: ✭ 281 (+1177.27%)
Mutual labels:  gcp
cloud-speech-and-vision-demos
A set of demo applications that make use of the Google Speech, NLP and Vision APIs, based on Angular 2.
Stars: ✭ 35 (+59.09%)
Mutual labels:  gcp
cli
The universal GraphQL API and CSPM tool for AWS, Azure, GCP, K8s, and tencent.
Stars: ✭ 811 (+3586.36%)
Mutual labels:  gcp
gcp-dl
Deep Learning on GCP
Stars: ✭ 27 (+22.73%)
Mutual labels:  gcp
grucloud
Generate diagrams and code from cloud infrastructures: AWS, Azure, GCP, Kubernetes
Stars: ✭ 76 (+245.45%)
Mutual labels:  gcp
yildiz
🦄🌟 Graph Database layer on top of Google Bigtable
Stars: ✭ 24 (+9.09%)
Mutual labels:  bigtable
webping.cloud
Test your network latency to the nearest cloud provider in AWS, Azure, GCP, Alibaba Cloud, IBM Cloud, Oracle Cloud and DigitalOcean directly from your browser.
Stars: ✭ 60 (+172.73%)
Mutual labels:  gcp
runiac
Run IaC Anywhere With Ease
Stars: ✭ 18 (-18.18%)
Mutual labels:  gcp

CircleCI GitHub release

Bigtable Autoscaler Operator

Bigtable Autoscaler Operator is a Kubernetes operator that autoscales the number of nodes of a Google Cloud Bigtable instance based on CPU utilization.

Overview

Google Cloud Bigtable is designed to scale horizontally, meaning that the number of nodes of an instance can be increased to spread the load and reduce the average CPU utilization. For Bigtable applications with highly variable workloads, automating the cluster scaling allows handling short load bursts while keeping costs as low as possible. This operator automates the scaling by adjusting the number of nodes to keep the CPU utilization below the target specified in the manifest.

The reconciler's responsibility is to keep the CPU utilization of the instance below the target specification while respecting the minimum and maximum number of nodes. When the CPU utilization is above the target, the reconciler increases the number of nodes in steps linearly proportional to how far above the target it is. For example, with 100% CPU utilization and only one node running, a CPU target of 50% increases the cluster to 2 nodes, while a CPU target of 25% increases it to 4 nodes.
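
As a rough illustration of this proportional rule (a minimal sketch, not the operator's actual code; the function name and signature are hypothetical), the desired node count can be computed like this:

package main

import (
    "fmt"
    "math"
)

// desiredNodes grows the node count linearly with how far the current CPU
// utilization is above the target (hypothetical helper for illustration).
func desiredNodes(currentNodes int32, currentCPU, targetCPU float64) int32 {
    return int32(math.Ceil(float64(currentNodes) * currentCPU / targetCPU))
}

func main() {
    fmt.Println(desiredNodes(1, 100, 50)) // 2 nodes for a 50% target
    fmt.Println(desiredNodes(1, 100, 25)) // 4 nodes for a 25% target
}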

Downscaling also follows a linear rule, but it takes the maxScaleDownNodes specification into account, which defines the maximum downscale step size in order to avoid aggressive downscaling. Furthermore, the downscale step is calculated from the number of nodes currently running and the CPU target. For example, if there are two nodes running and the CPU target is 50%, the CPU utilization must drop below 25% for a downscale to occur. This is important to avoid a downscale that immediately triggers an upscale.
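
Under the same assumptions, a hedged sketch of the capped downscale rule (minNodes and maxScaleDownNodes follow the spec fields mentioned above; everything else is hypothetical) might look like:

package main

import (
    "fmt"
    "math"
)

// scaleDown applies the same proportional rule as the upscale, then caps the
// step at maxScaleDownNodes and clamps the result at minNodes (illustrative only).
func scaleDown(currentNodes, minNodes, maxScaleDownNodes int32, currentCPU, targetCPU float64) int32 {
    desired := int32(math.Ceil(float64(currentNodes) * currentCPU / targetCPU))
    if desired >= currentNodes {
        return currentNodes // utilization is not low enough to drop a node
    }
    if currentNodes-desired > maxScaleDownNodes {
        desired = currentNodes - maxScaleDownNodes // limit the downscale step
    }
    if desired < minNodes {
        desired = minNodes
    }
    return desired
}

func main() {
    // With 2 nodes and a 50% target, 26% utilization still needs 2 nodes;
    // only below 25% does the desired count drop to 1.
    fmt.Println(scaleDown(2, 1, 2, 26, 50)) // 2
    fmt.Println(scaleDown(2, 1, 2, 24, 50)) // 1
}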

All scaling operations respect a reaction time window, which at this time is not part of the manifest specification.
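
Purely for illustration, and assuming the operator tracks the time of its last scaling operation (the names and the window duration below are hypothetical), such a reaction time window amounts to a check like:

package main

import (
    "fmt"
    "time"
)

// canScale allows a new scaling operation only after enough time has passed
// since the previous one (hypothetical illustration of a reaction time window).
func canScale(lastScaleTime time.Time, reactionWindow time.Duration) bool {
    return time.Since(lastScaleTime) >= reactionWindow
}

func main() {
    last := time.Now().Add(-30 * time.Second)
    fmt.Println(canScale(last, 2*time.Minute))  // false: still inside the window
    fmt.Println(canScale(last, 10*time.Second)) // true: window has elapsed
}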

The image below shows how peaks above the CPU target of 50% are shortened by the automatic increase of nodes.

(figure: Bigtable CPU utilization and node count)

Usage

Create a k8s secret with your service account:

$ kubectl create secret generic bigtable-autoscaler-service-account --from-file=service-account=./your_service_account.json

Create an autoscaling manifest:

# my-autoscaler.yml
apiVersion: bigtable.bigtable-autoscaler.com/v1
kind: BigtableAutoscaler
metadata:
  name: my-autoscaler
spec:
  bigtableClusterRef:
    projectId: cool-project
    instanceId: my-instance-id
    clusterId: my-cluster-id
  serviceAccountSecretRef:
    name: example-service-account
    key: service-account
  minNodes: 1
  maxNodes: 10
  targetCPUUtilization: 50

Then you can install it on your k8s cluster:

$ kubectl apply -f my-autoscaler.yml

You can check that your autoscaler is running:

$ kubectl get bigtableautoscalers

(screenshot: sample output of kubectl get bigtableautoscalers)

Prerequisites

  1. Enable the Bigtable and Monitoring APIs on your GCP project.
  2. Create a service account with the Bigtable Administrator role and generate a JSON key for it (see the example commands below).
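
For reference, a typical way to do this with the gcloud CLI is sketched below; the project ID, service account name and key path are placeholders, so adapt them to your environment and double-check the APIs and role you need:

$ gcloud services enable bigtable.googleapis.com bigtableadmin.googleapis.com monitoring.googleapis.com
$ gcloud iam service-accounts create bigtable-autoscaler
$ gcloud projects add-iam-policy-binding cool-project \
    --member="serviceAccount:bigtable-autoscaler@cool-project.iam.gserviceaccount.com" \
    --role="roles/bigtable.admin"
$ gcloud iam service-accounts keys create ./your_service_account.json \
    --iam-account="bigtable-autoscaler@cool-project.iam.gserviceaccount.com"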

Installation

  1. Visit the releases page, download the all-in-one.yml of the version of your choice and apply it
    kubectl apply -f all-in-one.yml

Development environment

These are the steps for setting up the development environment.

This project uses Go version 1.13 and the specific tool versions listed below; we don't guarantee that other versions will produce successful builds.

  1. Install kubebuilder version 2.3.2.

    1. Also make sure that you have its dependencies installed: controller-gen version 0.5.0 and kustomize version 3.10.0
  2. Follow Option 1 or Option 2 section.

Option 1: Run with Tilt (recommended)

Tilt is a tool that automates the development cycle and provides features such as hot deploy.

  1. Install tilt version 0.19.0 (follow the official instructions).

    1. Install its dependencies: ctlptl and kind (or another tool to create local k8s clusters) as instructed.
  2. If it doesn't exist, create your k8s cluster using ctlptl

    ctlptl create cluster kind --registry=ctlptl-registry
  3. Provide the secret with the service account credentials and role as described in section Secret setup.

  4. Run tilt up

Option 2: Manual run

Running manually requires some extra steps!

  1. If it doesn't exist, create your local k8s cluster. Here we will use kind to create it:

    kind create cluster
  2. Provide the secret with the service account credentials and role as described in section Secret setup.

  3. Check that your cluster is running correctly

    kubectl cluster-info
  4. Apply Custom Resource Definition

    make install
  5. Build the Docker image with the manager binary

    make docker-build
  6. Load this image to the cluster

    kind load docker-image controller:latest
  7. Deploy the operator to the local cluster

    make deploy
  8. Apply the autoscaler sample

    kubectl apply -f config/samples/bigtable_v1_bigtableautoscaler.yaml
  9. Check pods and logs

    kubectl -n bigtable-autoscaler-system logs $(kubectl -n bigtable-autoscaler-system get pods | tail -n1 | cut -d ' ' -f1) --all-containers

Secret setup

  1. Use the service account from the Prerequisites section to create the k8s secret

    kubectl create secret generic bigtable-autoscaler-service-account --from-file=service-account=./your_service_account.json
  2. Create role and rolebinding to read secret

    kubectl apply -f config/rbac/secret-role.yml

Running tests

go test ./... -v

or

gotestsum