russmedia / terraform-google-kubernetes-cluster

License: MIT
GKE Kubernetes cluster with node pool submodule

Programming Languages: HCL, Makefile

Projects that are alternatives of or similar to terraform-google-kubernetes-cluster

multitenant-microservices-demo
Full Isolation in Multi-Tenant SaaS with Kubernetes + Istio
Stars: ✭ 57 (+338.46%)
Mutual labels:  gke
inspec-gke-cis-benchmark
GKE CIS 1.1.0 Benchmark InSpec Profile
Stars: ✭ 27 (+107.69%)
Mutual labels:  gke
gke-vault-demo
This demo builds two GKE Clusters and guides you through using secrets in Vault, using Kubernetes authentication from within a pod to login to Vault, and fetching short-lived Google Service Account credentials on-demand from Vault within a pod.
Stars: ✭ 63 (+384.62%)
Mutual labels:  gke
k8-byexamples-ingress-controller
Deploy an ingress with SSL termination out of the box!
Stars: ✭ 28 (+115.38%)
Mutual labels:  gke
terraform-google-kubernetes-istio
Creates a kubernetes cluster with istio enabled on GKE
Stars: ✭ 27 (+107.69%)
Mutual labels:  gke
kubernetes-vault-example
Placeholder for training material related to TA usage of Vault for securing Kubernetes apps.
Stars: ✭ 16 (+23.08%)
Mutual labels:  gke
gke-istio-telemetry-demo
This project demonstrates how to use an Istio service mesh in a single Kubernetes Engine cluster alongside Prometheus, Jaeger, and Grafana, to monitor cluster and workload performance metrics. You will first deploy the Istio control plane, data plane, and additional visibility tools using the provided scripts, then explore the collected metrics …
Stars: ✭ 55 (+323.08%)
Mutual labels:  gke
nominatim-k8s
Nominatim for Kubernetes on Google Container Engine (GKE).
Stars: ✭ 59 (+353.85%)
Mutual labels:  gke
pixie
Instant Kubernetes-Native Application Observability
Stars: ✭ 3,238 (+24807.69%)
Mutual labels:  gke
kuberbs
K8s deployment rollback system based on system observability principles of modern stacks
Stars: ✭ 61 (+369.23%)
Mutual labels:  gke
iskan
Kubernetes Native, Runtime Container Image Scanning
Stars: ✭ 35 (+169.23%)
Mutual labels:  gke
gke-logging-sinks-demo
This project describes the steps required to deploy a sample application to Kubernetes Engine that forwards log events to Stackdriver Logging. As a part of the exercise, you will create a Cloud Storage bucket and a BigQuery dataset for exporting log data.
Stars: ✭ 45 (+246.15%)
Mutual labels:  gke
gke-managed-certificates-demo
GKE ingress with GCP managed certificates
Stars: ✭ 21 (+61.54%)
Mutual labels:  gke
gke-rbac-walkthrough
A walk through of RBAC on a Google GKE Kubernetes 1.6 cluster.
Stars: ✭ 64 (+392.31%)
Mutual labels:  gke
Microservices Demo
Sample cloud-native application with 10 microservices showcasing Kubernetes, Istio, gRPC and OpenCensus.
Stars: ✭ 11,369 (+87353.85%)
Mutual labels:  gke
laravel-php-k8s
Just a simple port of renoki-co/php-k8s for easier access in Laravel
Stars: ✭ 71 (+446.15%)
Mutual labels:  gke
build-your-own-platform-with-knative
A workshop for building your own FaaS platform while learning about Knative's components
Stars: ✭ 43 (+230.77%)
Mutual labels:  gke
migrate-for-anthos-gke
Migrate to Containers samples and best practices
Stars: ✭ 33 (+153.85%)
Mutual labels:  gke
Professional Services
Common solutions and tools developed by Google Cloud's Professional Services team
Stars: ✭ 1,923 (+14692.31%)
Mutual labels:  gke
atlassian-kubernetes
All things Atlassian and Kubernetes
Stars: ✭ 30 (+130.77%)
Mutual labels:  gke

This Repo Is No Longer Maintained

Please consider migrating to the official terraform-google-kubernetes-engine module: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine.

Overview

GKE Kubernetes module with node pools submodule

Kubernetes diagram on GKE

Table of contents

  • Requirements
  • Compatibility
  • Features
  • Usage
  • Migration
  • Authors
  • License
  • Acknowledgments

Requirements

Please use Google provider version = "~> 3.14".

If you need more control over the versioning of your cluster, it is advised to specify "min_master_version" on the cluster and "version" in the node pools. Otherwise GKE will use its default version, which might change in the near future.
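
For example, a minimal provider pin consistent with this constraint might look as follows (the var.project and var.google_region names are taken from the usage examples below and are assumptions about your variable layout):

# Pin the Google provider to the series this module is tested with (Terraform 0.12 syntax).
provider "google" {
  version = "~> 3.14"
  project = var.project
  region  = var.google_region
}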

Compatibility

This module is meant for use with Terraform 0.12. If you haven't upgraded and need a Terraform 0.11.x-compatible version of this module, the last released version intended for Terraform 0.11.x is 3.0.0.

1. Features

  • multiple node pools, with the node count multiplied by the number of defined zones
  • node pools with autoscaling enabled (scaling down to 0 nodes is supported)
  • node pools with preemptible instances
  • ip_allocation_policy for exposing nodes/services/pods in the VPC
  • tested with the NAT module
  • configurable node pool OAuth scopes (global, applied to all node pools)

2. Usage

cluster with a default node pool on preemptible instances

module "primary-cluster" {
  name                   = terraform.workspace
  source                 = "russmedia/kubernetes-cluster/google"
  version                = "4.0.0"
  region                 = var.google_region
  zones                  = var.google_zones
  project                = var.project
  environment            = terraform.workspace 
  min_master_version     = var.master_version
}

cluster with explicit definition of node pools (optional)

module "primary-cluster" {
  name                   = "my-cluster"
  source                 = "russmedia/kubernetes-cluster/google"
  version                = "4.0.0"
  region                 = var.google_region
  zones                  = var.google_zones
  project                = var.project
  environment            = terraform.workspace
  min_master_version     = var.master_version
  node_pools             = var.node_pools
}

and in variables:

node_pools = [
  {
    name                = "default-pool"
    initial_node_count  = 1
    min_node_count      = 1
    max_node_count      = 1
    version             = "1.15.11-gke.3"
    image_type          = "COS"
    machine_type        = "n1-standard-1"
    preemptible         = true
    tags                = "tag1 nat"
  },
]

Note: at least one node pool must have initial_node_count > 0.
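
For illustration, a hypothetical node_pools definition with one regular pool and a second autoscaled pool that can scale down to 0 nodes (pool names, sizes and tags are made up; the structure follows the example above):

node_pools = [
  {
    # at least one pool keeps initial_node_count > 0
    name                = "default-pool"
    initial_node_count  = 1
    min_node_count      = 1
    max_node_count      = 3
    version             = "1.15.11-gke.3"
    image_type          = "COS"
    machine_type        = "n1-standard-1"
    preemptible         = true
    tags                = "tag1 nat"
  },
  {
    # hypothetical second pool that autoscaling can shrink to 0 nodes
    name                = "batch-pool"
    initial_node_count  = 0
    min_node_count      = 0
    max_node_count      = 5
    version             = "1.15.11-gke.3"
    image_type          = "COS"
    machine_type        = "n1-standard-2"
    preemptible         = true
    tags                = "batch"
  },
]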

Since version 5.0.0 the module supports no_schedule_taint and no_execute_taint - they add the taints schedulable=equals:NoSchedule or executable=equals:NoExecute respectively, so that only pods tolerating these taints are scheduled on (or allowed to keep running on) those nodes. Please see the k8s docs on taints and tolerations for more info.

Example usage with "NoSchedule":

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
    - key: "schedulable"
      operator: "Exists"
      effect: "NoSchedule"

Example usage with "NoExecute":

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
    - key: "executable"
      operator: "Exists"
      effect: "NoExecute"

Note - if a node has both the NoExecute and NoSchedule taints, you need to add both tolerations for the pod to be allowed there, as in the sketch below.
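
A minimal sketch combining the two examples above - a pod that tolerates both taints:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
    # tolerates the NoSchedule taint added by no_schedule_taint
    - key: "schedulable"
      operator: "Exists"
      effect: "NoSchedule"
    # tolerates the NoExecute taint added by no_execute_taint
    - key: "executable"
      operator: "Exists"
      effect: "NoExecute"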

multiple clusters

Due to current limitations of the depends_on feature with modules, it is advised to create the VPC network separately and use it when defining the modules, i.e.:

resource "google_compute_network" "default" {
  name                    = terraform.workspace
  auto_create_subnetworks = "false"
  project                 = var.project
}
module "primary-cluster" {
  name        = "primary-cluster"
  source      = "russmedia/kubernetes-cluster/google"
  version     = "4.0.0"
  region      = var.google_region
  zones       = var.google_zones
  project     = var.project
  environment = terraform.workspace
  network     = google_compute_network.default.name
}
module "secondary-cluster" {
  name                                 = "secondary-cluster"
  source                               = "russmedia/kubernetes-cluster/google"
  version                              = "4.0.0"
  region                               = var.google_region
  zones                                = var.google_zones
  project                              = var.project
  environment                          = terraform.workspace
  network                              = google_compute_network.default.name
  nodes_subnet_ip_cidr_range           = "10.101.0.0/24"
  nodes_subnet_container_ip_cidr_range = "172.21.0.0/16"
  nodes_subnet_service_ip_cidr_range   = "10.201.0.0/16"
}

Note: secondary clusters need to have nodes_subnet_ip_cidr_range, nodes_subnet_container_ip_cidr_range and nodes_subnet_service_ip_cidr_range defined, otherwise you will run into an IP conflict. Also, only one cluster can have nat_enabled set to true.

add nat module (optional and deprecated - please use the built-in NAT option, variable "nat_enabled")

Adding NAT module for outgoing Kubernetes IP:

module "nat" {
  source     = "github.com/GoogleCloudPlatform/terraform-google-nat-gateway?ref=1.2.0"
  region     = var.google_region
  project    = var.project
  network    = terraform.workspace
  subnetwork = "${terraform.workspace}-nodes-subnet"
  tags       = ["nat-${terraform.workspace}"]
}

Note: remember to add the tag nat-${terraform.workspace} to the primary cluster tags and node pools so the NAT module can open routing for the nodes (see the sketch below).
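
For illustration, a hypothetical node_pools entry carrying the NAT tag (the structure follows the earlier node_pools example; "staging" stands in for your workspace name):

node_pools = [
  {
    name                = "default-pool"
    initial_node_count  = 1
    min_node_count      = 1
    max_node_count      = 3
    version             = "1.15.11-gke.3"
    image_type          = "COS"
    machine_type        = "n1-standard-1"
    preemptible         = true
    # space-separated tags; "nat-staging" lets the NAT module open routing for these nodes
    tags                = "tag1 nat-staging"
  },
]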

using an existing or creating a new vpc network

Variable "network" is controling network creation.

  • when left empty (by default network="") - terraform will create a vpc network - network name will be equal to ${terraform.workspace}.
  • when we define a name - this network must already exist within the project - terraform will create a subnetwork within defined network and place the cluster in it.

subnetworks

Terraform always creates a subnetwork. The subnetwork name follows the pattern ${terraform.workspace}-${var.name}-nodes-subnet. If you already have a subnetwork and would like to keep its name, please define the "subnetwork_name" variable (see the sketch after the list below).

  • we define the subnetwork nodes CIDR using the nodes_subnet_ip_cidr_range variable - Terraform will fail with a conflict if the range overlaps an existing subnetwork
  • we define the Kubernetes pods CIDR using the nodes_subnet_container_ip_cidr_range variable
  • we define the Kubernetes services CIDR using the nodes_subnet_service_ip_cidr_range variable
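
A hypothetical sketch of placing a cluster in an existing network while keeping an existing subnetwork name (the network name, subnetwork name and CIDR ranges are made-up values; the remaining arguments follow the earlier examples):

module "primary-cluster" {
  name               = "primary-cluster"
  source             = "russmedia/kubernetes-cluster/google"
  version            = "4.0.0"
  region             = var.google_region
  zones              = var.google_zones
  project            = var.project
  environment        = terraform.workspace
  min_master_version = var.master_version

  # pre-existing VPC network and the subnetwork name to keep (hypothetical values)
  network         = "my-existing-network"
  subnetwork_name = "my-existing-nodes-subnet"

  # CIDR ranges for nodes, pods and services (hypothetical, must not overlap existing ranges)
  nodes_subnet_ip_cidr_range           = "10.102.0.0/24"
  nodes_subnet_container_ip_cidr_range = "172.22.0.0/16"
  nodes_subnet_service_ip_cidr_range   = "10.202.0.0/16"
}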

zonal and regional clusters

  • Zonal clusters: A zonal cluster runs in one or more compute zones within a region. A multi-zone cluster runs its nodes across two or more compute zones within a single region. Zonal clusters run a single cluster master. Important: from version 3.0.0, zonal clusters use nodes only in the zone of the master. This was changed due to new node behavior on Google Cloud: nodes in other zones can no longer register with a cluster master in a different zone.
  • Regional clusters: A regional cluster runs three cluster masters across three compute zones, and runs nodes in two or more compute zones.

Regional clusters are still in beta, please use them with caution. You can enable them by setting the variable "regional_cluster" to true, as sketched below. Warning - possible data loss! - changing this setting on a running cluster will force you to recreate it.
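
A minimal sketch of enabling a regional cluster, assuming the same variables as the usage examples above:

module "primary-cluster" {
  name               = terraform.workspace
  source             = "russmedia/kubernetes-cluster/google"
  version            = "4.0.0"
  region             = var.google_region
  zones              = var.google_zones
  project            = var.project
  environment        = terraform.workspace
  min_master_version = var.master_version

  # regional control plane (beta) - changing this on a running cluster forces recreation
  regional_cluster = true
}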

cloud nat

You can configure your cluster to sit behind Cloud NAT, so that the same static external IP is shared by the pods. You can enable it by setting the variable "nat_enabled" to true (see the sketch below).

Warning - possible data loss! - changing this setting on a running cluster will force you to recreate it.
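
A minimal sketch of enabling the built-in NAT, assuming the same variables as the usage examples above:

module "primary-cluster" {
  name               = terraform.workspace
  source             = "russmedia/kubernetes-cluster/google"
  version            = "4.0.0"
  region             = var.google_region
  zones              = var.google_zones
  project            = var.project
  environment        = terraform.workspace
  min_master_version = var.master_version

  # built-in Cloud NAT - only one cluster per network may enable this,
  # and changing it on a running cluster forces recreation
  nat_enabled = true
}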

3. Migration

To migrate from a 1.x.x module version to 2.x.x follow these steps (a hedged sketch of the resulting configuration follows the note below):

  • Remove the tags property -> it is now included in the node_pools map.
  • Remove the node_version property -> it is now included in the node_pools map.
  • Add initial_node_count to all node pools -> changing the previous value will recreate the node pool.
  • Add network with the existing network name.
  • Add subnetwork_name with the existing subnetwork name.
  • Add use_existing_terraform_network set to true if the network was created by this module.

Important note: when upgrading, the default pool will be deleted. Before migrating, please extend the size of the non-default pools so that all applications can be scheduled without the default node pool.
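
For orientation only, a hypothetical configuration after applying these steps (the module version, network name and subnetwork name are assumptions; tags and version now live inside the node_pools entries):

module "primary-cluster" {
  name                           = terraform.workspace
  source                         = "russmedia/kubernetes-cluster/google"
  version                        = "2.0.0"
  region                         = var.google_region
  zones                          = var.google_zones
  project                        = var.project
  environment                    = terraform.workspace
  min_master_version             = var.master_version

  # existing network and subnetwork created by the 1.x module (names are assumptions)
  network                        = terraform.workspace
  subnetwork_name                = "${terraform.workspace}-nodes-subnet"
  use_existing_terraform_network = true

  # tags and version are now defined per pool inside node_pools
  node_pools                     = var.node_pools
}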

4. Authors

5. License

This project is licensed under the MIT License - see the LICENSE.md file for details. Copyright (c) 2018 Russmedia GmbH.

6. Acknowledgments
