K8s in 30 mins

This is not a comprehensive guide to learning Kubernetes from scratch; it is a small guide/cheat sheet for quickly setting up Kubernetes, running applications with it, and deploying a very simple application on a single workload VM. This repo can serve as a quick learning manual for understanding Kubernetes.

Prerequisite

Table of Contents:

  1. Setting up a Kubernetes cluster in a VM (NOT MINIKUBE) : a 1-VM cluster
    • Spinning up a virtual machine with Vagrant : 2GB RAM + 2 CPU cores (at least)
    • Understanding:
      • kubeadm
      • kubelet
      • kubectl
  2. Kubernetes pods: how they differ from Docker containers.
  3. Kubernetes Resources
  4. Kubernetes network manager
    • I will pick the plugin called Flannel.
  5. Stateless Workloads
    • Replicasets & Deployments
  6. Stateful Workloads
  7. Deploying an End-to-End Service in the Kubernetes cluster
  8. Understanding advanced Kubernetes resources
  9. Cheat sheet
  10. Next steps

Setting up Kubernetes cluster in VM

  1. Download the Vagrantfile.
  2. Download and install VirtualBox from here.
  3. Download and install Vagrant.
  4. In the terminal, run these two commands to get the VM up and running, without any configuration 😄
    # In the same directory where you have downloaded Vagrantfile, run
    vagrant up
    vagrant ssh
    
    This will download the Ubuntu box image and do the entire setup for you with the help of VirtualBox. It just needs VirtualBox installed.
  5. The Vagrantfile comes preconfigured with kubeadm, kubelet and kubectl.
  6. Check that Kubernetes is installed correctly.
    kubectl version -o json
    {
      "clientVersion": {
        "major": "1",
        "minor": "19",
        "gitVersion": "v1.19.2",
        "gitCommit": "f5743093fd1c663cb0cbc89748f730662345d44d",
        "gitTreeState": "clean",
        "buildDate": "2020-09-16T13:41:02Z",
        "goVersion": "go1.15",
        "compiler": "gc",
        "platform": "linux/amd64"
      },
      "serverVersion": {
        "major": "1",
        "minor": "19",
        "gitVersion": "v1.19.2",
        "gitCommit": "f5743093fd1c663cb0cbc89748f730662345d44d",
        "gitTreeState": "clean",
        "buildDate": "2020-09-16T13:32:58Z",
        "goVersion": "go1.15",
        "compiler": "gc",
        "platform": "linux/amd64"
      }
    }
    
  7. Start the Kubernetes cluster master node.
    # This will spin up Kubernetes cluster with CIDR: 10.244.0.0/16
    kubeadm init --pod-network-cidr=10.244.0.0/16

    # The init output ends with a join command for worker nodes, e.g.:
    kubeadm join 10.0.2.15:6443 --token 3m5dsc.toup1iv7670ya7wc --discovery-token-ca-cert-hash sha256:73f4983d43f9618522eaccf014205f969e3bacd76c98dd0c
    
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
  8. Connect other VMs to this cluster: not required in the case of a single-VM cluster. For this to work, make sure:
    • VM-to-VM connectivity is there.
    • All three kube-* components are installed in the VM.
    kubeadm join 10.0.2.15:6443 --token 3m5dsc.toup1iv7670ya7wc --discovery-token-ca-cert-hash sha256:73f4983d43f9618522eaccf014205f969e3bacd76c98dd0c
    
  9. At this point Kubernetes is installed and the cluster master is up, but we still need an agent to provision and manage the network for new nodes. This is where Flannel comes to the rescue. Install Flannel to manage the network for Pods (a quick sanity check for this follows at the end of this section).
    kubectl apply -f \
        https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    
  10. This step applies if we wish to use our master node as a worker as well, which we do in our case:
    root@vagrant:/home/vagrant# kubectl taint nodes $(hostname) node-role.kubernetes.io/master:NoSchedule-
    
    # If everything goes well, you will see something like this.
    root@vagrant:/home/vagrant# kubectl get node
    NAME      STATUS   ROLES    AGE     VERSION
    vagrant   Ready    master   3m40s   v1.19.2
    

Run all the commands from a root shell.
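
As a quick sanity check (the one mentioned in step 9 above), confirm that the Flannel and CoreDNS Pods in the kube-system namespace reach the Running state:

    kubectl get pods -n kube-system
    # coredns-*, etcd-vagrant, kube-apiserver-vagrant, kube-proxy-*
    # and kube-flannel-ds-* should all end up in the Running state.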

What are kube*

Kubernetes runs in a client-server model, similar to the way Docker runs. The Kubernetes server exposes the Kubernetes API, and each of kubeadm, kubelet and kubectl connects to this API server to get its task done. In this master-worker model, there are two entities:

  • Control Plane
  • Worker Nodes

Control Plane : connects with worker nodes and handles resource allocation.
Worker nodes : the cluster entities that actually run the allocated tasks and Pods.

  1. kubeadm:
    • Sets up the cluster.
    • Connects the various worker nodes together.
  2. kubectl:
    • It is the client CLI.
    • Connects to the control plane's Kubernetes API server and sends execution requests to it.
  3. kubelet:
    • Receives requests from the control plane.
    • Runs on worker nodes.
    • Runs tasks on worker nodes.
    • Maintains the Pod lifecycle. Not just Pods: it maintains the lifecycle of all Kubernetes resources on the node.
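
A quick way to see each of these on the VM (output will vary with your versions):

    # kubeadm: the cluster bootstrapper (client-side binary)
    kubeadm version
    # kubectl: the CLI client that talks to the API server
    kubectl version --client
    # kubelet: the node agent, running as a systemd service on each node
    systemctl status kubelet --no-pager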

Kubernetes pods

  • A Pod can run multiple containers.
  • Pods abstract multiple containers into a single unit.
  • If two services in a Pod both try to expose the same port, the second one won't spin up and will fail.
  • The unit of Kubernetes workload is called a Pod.

How to create a pod

You can create a simple nginx pod with the following YAML spec. Save it in a file named pod.yml.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
Key name                Key Description
apiVersion              Kubernetes server API version
kind                    Kubernetes resource type: Pod
metadata.name           Name of the Kubernetes Pod
spec.containers.name    Name of the container which will run in the Pod
spec.containers.image   Name of the docker image to run

Run this Pod spec with kubectl apply -f pod.yml.

root@vagrant:/home/vagrant/kubedata# kubectl apply -f pod.yml
pod/nginx created

# If everything goes OK, you will see something like this.

root@vagrant:/home/vagrant/kubedata# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          43s
root@vagrant:/home/vagrant/kubedata#

Use kubectl get pods to get the list of all Pods.

  1. Running a command in a container inside a Pod: kubectl exec -it <pod_name> -c <container_name> -- <command>
    root@vagrant:/home/vagrant/kubedata# kubectl exec -it nginx -c nginx -- whoami
    root
    
    root@vagrant:/home/vagrant/kubedata# kubectl exec -it nginx -c nginx -- /bin/sh
    # cat /etc/*-release
    PRETTY_NAME="Debian GNU/Linux 10 (buster)"
    NAME="Debian GNU/Linux"
    VERSION_ID="10"
    VERSION="10 (buster)"
    VERSION_CODENAME=buster
    ID=debian
    HOME_URL="https://www.debian.org/"
    SUPPORT_URL="https://www.debian.org/support"
    BUG_REPORT_URL="https://bugs.debian.org/"
    
  2. Running multiple containers in one Pod.
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      - name: curl
        image: appropriate/curl
        stdin: true
        tty: true
        command: ["/bin/sh"]
    
    Save this into pod-with-two-containers.yml.
    Run this : kubectl apply -f pod-with-two-containers.yml
  3. Delete a running pod: kubectl delete -f pod-with-two-containers.yml. This will remove the Pod defined in the spec file.
  4. Containers in a Pod share a network namespace, so one container can reach another in the same Pod via localhost; the Pod's own name (nginx here) also resolves to the Pod IP, which is what the example below uses.
    root@vagrant:/home/vagrant/kubedata# kubectl exec -it nginx -c curl -- /bin/sh
    # curl nginx
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
    #
    

Kubernetes Resources

Pods

  • The fundamental unit of a k8s cluster.
  • An abstraction over one or more containers running under a single name.
  • Discussed in detail : here

Deployments

  • A Deployment provides declarative updates for Pods.
  • The desired state declared in the YAML file defines how the Pods will run in the cluster. It can specify:
    • Replicas
    • Resource allocation
    • Connections with Volumes, etc.
    • We will see an example once we get to ReplicaSets.

Replicasets

  1. Run Deployments in replicas: the ReplicaSet keeps the specified number of Pods running.

  2. Create a file with the following spec.

    apiVersion: apps/v1
    
    kind: Deployment
    metadata:
      name: nginx
    
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx-app
      template:
        metadata:
          labels:
            app: nginx-app
        spec:
          containers:
          - name: nginx
            image: nginx
    

    Notice the difference.

    -- kind: Pod
    ++ kind: Deployment
    
    ++ spec:
    ++  replicas: 3
    ++  selector:
    ++    matchLabels:
    ++      app: nginx-app
    
  3. Remove existing pods (if any) with kubectl delete pods --all, and create the deployment.

    root@vagrant:/home/vagrant/kubedata# kubectl apply -f deployment-replica.yml
    deployment.apps/nginx created
    
    root@vagrant:/home/vagrant/kubedata# kubectl get deployments
    NAME    READY   UP-TO-DATE   AVAILABLE   AGE
    nginx   0/3     3            0           7s
    
    root@vagrant:/home/vagrant/kubedata# kubectl get deployments -w
    NAME    READY   UP-TO-DATE   AVAILABLE   AGE
    nginx   1/3     3            1           14s
    nginx   2/3     3            2           20s
    
  4. Get the list of all deployments: kubectl get deployments or kubectl get deploy

  5. Get the list of all replicaset : kubectl get replicaset or kubectl get rs

    root@vagrant:/home/vagrant/kubedata# kubectl get pods
    NAME                    READY   STATUS    RESTARTS   AGE
    nginx-d6ff45774-f84l8   1/1     Running   0          4m59s
    nginx-d6ff45774-gzxfz   1/1     Running   0          4m59s
    nginx-d6ff45774-t69mw   1/1     Running   0          4m59s
    
    root@vagrant:/home/vagrant/kubedata# kubectl get deploy
    NAME    READY   UP-TO-DATE   AVAILABLE   AGE
    nginx   3/3     3            3           162m
    
    root@vagrant:/home/vagrant/kubedata# kubectl get replicaset
    NAME              DESIRED   CURRENT   READY   AGE
    nginx-d6ff45774   3         3         3       162m
    
    root@vagrant:/home/vagrant/kubedata#
    
  6. Print a detailed description of the selected resources, including related resources such as events or controllers: kubectl describe <resource_type> <resource_name>

  7. Get the deployment configuration in YAML format: kubectl get deployment nginx -o yaml (use -o json for JSON).
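
Not part of the steps above, but a handy way to watch the ReplicaSet do its job is to scale the Deployment and then delete a Pod:

    kubectl scale deployment nginx --replicas=5
    kubectl get pods                        # two extra nginx-* Pods appear
    kubectl delete pod <any_nginx_pod_name>
    kubectl get pods                        # the ReplicaSet replaces it immediately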

Services

  • A logical abstraction of Pods and the policies to access them.
  • They enable loose coupling between dependent Pods, e.g.:
    • Open ports.
    • Security policies for Pod interaction, etc.
  • Can be created independently of the Pod declaration, but usually the Services linked to a Pod live in the same spec file.
  • Let's create a simple service to expose the nginx port to the host machine. File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  • The Service declaration starts by augmenting the existing deployment/pod spec after a --- separator.
  • A Service and a Pod can share the same name.
    • Resources of the same type must have names that are unique amongst themselves.
  • The above service exposes port 80 on the Service's cluster IP, specified by spec.ports.port, and forwards to port 80 of the target Pods, specified by spec.ports.targetPort.
root@vagrant:/home/vagrant/kubedata# kubectl apply -f nginx-service.yml
deployment.apps/nginx unchanged
service/nginx created

root@vagrant:/home/vagrant/kubedata#
  • Once the service is created:
    • Run : kubectl get services to get the list of services.
      root@vagrant:/home/vagrant/kubedata# kubectl get services
      NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
      kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   2d5h
      nginx        ClusterIP   10.104.178.240   <none>        80/TCP    49s
      
      Cluster IP is the IP interface of the Pod abstraction on the host. curl-ing the cluster IP connects us to one of the Pods.
      root@vagrant:/home/vagrant/kubedata# curl 10.104.178.240
      <!DOCTYPE html>
      <html>
      <head>
      <title>Welcome to nginx!</title>
      <style>
          body {
              width: 35em;
              margin: 0 auto;
              font-family: Tahoma, Verdana, Arial, sans-serif;
          }
      </style>
      </head>
      <body>
      <h1>Welcome to nginx!</h1>
      <p>If you see this page, the nginx web server is successfully installed and
      working. Further configuration is required.</p>
      
      <p>For online documentation and support please refer to
      <a href="http://nginx.org/">nginx.org</a>.<br/>
      Commercial support is available at
      <a href="http://nginx.com/">nginx.com</a>.</p>
      
      <p><em>Thank you for using nginx.</em></p>
      </body>
      </html>
      
    • Run : kubectl get endpoints or kubectl get ep to get list of exposed endpoints.
      root@vagrant:/home/vagrant/kubedata# kubectl get ep
      NAME         ENDPOINTS                                    AGE
      kubernetes   10.0.2.15:6443                               2d5h
      nginx        10.244.0.10:80,10.244.0.8:80,10.244.0.9:80   2m
      
      Since we are running 3 replicas, we see 3 different Pod IPs.
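    • Besides the cluster IP, kubeadm clusters run CoreDNS, so the Service is also reachable by its name from inside any Pod. A quick check with a throwaway client Pod (the tmp-client name is arbitrary):
      kubectl run tmp-client --rm -it --image=appropriate/curl -- sh
      # then, inside the Pod:
      curl http://nginx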

Loadbalancer Service

  • Notice External IP in:
    root@vagrant:/home/vagrant/kubedata# kubectl get services
    NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
    kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   2d5h
    nginx        ClusterIP   10.104.178.240   <none>        80/TCP    49s
    
  • Since we are running this in a local setup, we don't have a CCM (cloud controller manager), which could provision an external IP for us to connect to the service running inside the Pod.
    • In case of Azure or AWS Cloud providers, the CCM provisions and links external IPs for us.
  • So let's do a hack here.
    • Update the nginx Service type to LoadBalancer. File
      apiVersion: v1
      kind: Service
      metadata:
        name: nginx
      spec:
        type: LoadBalancer
        selector:
          app: nginx-app
        ports:
        - protocol: TCP
          port: 80
          targetPort: 80
      
      Notice:
      spec:
      ++ type: LoadBalancer
      
    • Apply the config: kubectl apply -f nginx-service-lb.yml
      root@vagrant:/home/vagrant/kubedata# kubectl get svc
      NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
      kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        2d5h
      nginx        LoadBalancer   10.104.178.240   <pending>     80:32643/TCP   17m
      
      Now the EXTERNAL-IP state is pending :)
    • Run netstat -nltp, and notice the kube-proxy entries:
      ++ tcp        0      0 0.0.0.0:32643           0.0.0.0:*               LISTEN      13095/kube-proxy
         tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      7024/kubelet
      ++ tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      13095/kube-proxy
      
      See the magic.
      root@vagrant:/home/vagrant/kubedata# curl 0.0.0.0:32643
      <!DOCTYPE html>
      <html>
      <head>
      <title>Welcome to nginx!</title>
      <style>
          body {
              width: 35em;
              margin: 0 auto;
              font-family: Tahoma, Verdana, Arial, sans-serif;
          }
      </style>
      </head>
      <body>
      <h1>Welcome to nginx!</h1>
      <p>If you see this page, the nginx web server is successfully installed and
      working. Further configuration is required.</p>
      
      <p>For online documentation and support please refer to
      <a href="http://nginx.org/">nginx.org</a>.<br/>
      Commercial support is available at
      <a href="http://nginx.com/">nginx.com</a>.</p>
      
      <p><em>Thank you for using nginx.</em></p>
      </body>
      </html>
      
      • The LoadBalancer exposed the service endpoints outside the Kubernetes cluster IP interface, and we can now access it directly on our vagrant host :)
      • The next challenge is to expose this kube-proxy interface to the host machine. Once that hack is done, we can access the service running in the Pods (the replica set deployment) from our host interface directly.
      • This is how the network looks now. The port 32643 is now exposed through kube-proxy over the host/control-plane node.
                                                          Kubernetes Cluster
                                           +---------------------------------------------+
                                           |                               POD           |
                                           |                           +---------+       |
                                           |                    +------>  NGINX  |       |
                                           |                    |      +---------+       |
                                           |           LB       |                        |
                     +--------------+      |    +---------------+          POD           |
        0.0.0.0:32643|  Kube Proxy  |80    |    |               |      +---------+       |
                <------------------>----------->+    SERVICE    +------>  NGINX  |       |
                     |              |      |  80|               |      +---------+       |
                     +--------------+      |    +---------------+                        |
                           HOST            |                    |          POD           |
                                           |                    |      +---------+       |
                                           |                    +------>  NGINX  |       |
                                           |                           +---------+       |
                                           +---------------------------------------------+
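
      • An alternative to this NodePort hack, worth knowing, is kubectl port-forward, which tunnels a local port to the Service without changing its type:
        kubectl port-forward svc/nginx 8080:80
        # in another shell on the VM:
        curl 127.0.0.1:8080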
        

Stateless workloads

  • The Deployments and ReplicaSets we have deployed so far are stateless workloads.
  • There is no state-related information stored at the Pod/Service level, so a request coming from kube-proxy via the Service resource can be routed to any of the Pods in the cluster.
  • This constitutes a stateless workload.
  • The next section creates a stateful workload.

Stateful workloads

  • Preserve the state of data present on Pods.
  • Two situations are possible:
    • Multi-pod stateful workload
      • If multiple pods are connecting to a stateful workload, there should be worker-based synchronization.
      • Else, stateful data may go out of sync.
    • Single-pod stateful workload
      • Create persistent volumes.
      • Create persistent volume claims to access persistent volumes in a synchronized way, to ensure data atomicity.

Persistent Volumes

  • PVs are like volumes in Docker, except that their lifecycle is independent of Pods.
  • A PV is an API object. It captures the details of the storage implementation.
  • Provisioned by the Kubernetes administrator.
  • A way to abstract the storage resource.
  • Create a persistent volume for the MySQL server. File
    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: pv
      labels:
        type: local
    spec:
      storageClassName: manual
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/data"
    
    This spec places the volume at /data on the cluster's node. Apply it: kubectl apply -f pv.yml
    root@vagrant:/home/vagrant/kubedata# kubectl get pv
    NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    pv     5Gi        RWO            Retain           Available           manual                  62s
    

Persistent Volume Claims

  • A storage request by a user.
  • PVCs consume PV resources.
  • A way to access the abstract storage.
  • A PVC can request a specific size and access mode: ReadWriteOnce, ReadOnlyMany, ReadWriteMany
Access Mode     Meaning
ReadWriteOnce   volume can be mounted as read-write by a single node
ReadOnlyMany    volume can be mounted read-only by many nodes
ReadWriteMany   volume can be mounted as read-write by many nodes
  • Create a PVC spec. File

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pv-claim
    spec:
      storageClassName: manual
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    

    Apply it.

    root@vagrant:/home/vagrant/kubedata# kubectl apply -f pv-claim.yml
    persistentvolumeclaim/pv-claim created
    
    root@vagrant:/home/vagrant/kubedata# kubectl get pvc
    NAME       STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    pv-claim   Bound    pv       5Gi        RWO            manual         8s
    
    root@vagrant:/home/vagrant/kubedata# kubectl describe pvc pv-claim
    Name:          pv-claim
    Namespace:     default
    StorageClass:  manual
    Status:        Bound
    Volume:        pv
    Labels:        <none>
    Annotations:   pv.kubernetes.io/bind-completed: yes
                   pv.kubernetes.io/bound-by-controller: yes
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:      5Gi
    Access Modes:  RWO
    VolumeMode:    Filesystem
    Mounted By:    <none>
    Events:        <none>
    root@vagrant:/home/vagrant/kubedata#
    
    • Pods use PersistentVolumeClaims to request physical storage
    • After creating the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. If the control plane finds a suitable PersistentVolume with the same StorageClass, it binds the claim to the volume.
  • Let's create a Pod that will use the PV as a volume through the PVC. File

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-with-pvc
spec:
  volumes:
    - name: nginx-pv-storage
      persistentVolumeClaim:
        claimName: pv-claim
  containers:
    - name: nginx-with-pv
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: nginx-pv-storage
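
Apply the spec and write a test page into the hostPath directory on the VM so nginx has something to serve (the spec file name pod-with-pvc.yml and the index.html test file are assumptions; any file under /data works):

kubectl apply -f pod-with-pvc.yml
# on the node (the vagrant VM), write into the hostPath backing the PV:
echo "Hi PV" | sudo tee /data/index.html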
root@vagrant:/home/vagrant/kubedata# kubectl get pods nginx-pod-with-pvc
NAME                 READY   STATUS    RESTARTS   AGE
nginx-pod-with-pvc   1/1     Running   0          16s

root@vagrant:/home/vagrant/kubedata# kubectl exec -it nginx-pod-with-pvc -c nginx-with-pv -- /bin/bash
root@nginx-pod-with-pvc:/# curl localhost
Hi PV

  • The file we just wrote into the volume is served by the nginx Pod.

Summary

                                +--------------------------------------+
                                |     +------------+                   |
                                |     |    POD     |        +--------------->
                                |     +-----+------+        |          |    |
                                |           |               |          |    |
                                |           |         +-----+------+   |    v
                                |           |         |     PV     |   |   /data
                                |           |         +------+-----+   |
                                |     +-----v------+         ^         |
                                |     |    PVC     +---------+         |
                                |     +------------+                   |
                                |                                      |
                                +--------------------------------------+
  • The PV-to-PVC bind is automatic, based on the storage class.
  • The Pod/Deployment/K8s-resource link to the PVC has to be done manually in the spec file.

Sample Application Example

  This end-to-end setup will include:

  1. MySQL setup through PV and PVC.
  2. Building a custom Dockerfile for the Spring Boot application.
  3. Creating a Deployment for the Spring Boot application:
    1. Setting up the environment for the application to connect to the DB.
    2. Setting up the PVC in the deployment.
    3. Creating a Service for Spring Boot application access outside the Pod.
  4. Service setup through an LB.

Once we create the specs in bits, we will combine them into one big spec to show our Infrastructure as Code, and deploy that 😄.

MySQL Resource

Step 1: Create PV for MYSQL DB

kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/mysql"   

Step 2: Create PVC for PV

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Step 3: Create MySQL deployment Spec

apiVersion: apps/v1 
kind: Deployment
metadata:
  name: dbserver
  labels:
    app: dbserver
spec:
  selector:
    matchLabels:
      app: dbserver
  template:
    metadata:
      labels:
        app: dbserver
    spec:
      containers:
      - image: mysql
        name: mysql
        imagePullPolicy: Never
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: mysecretpassword
        ports:
        - containerPort: 3306
          name: dbserver
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc
  • Once the DB server is up, please go ahead and log in to MySQL and create the peopledb database for the Spring Boot application to access:
    • kubectl exec -it <dbserver_pod_name> -- mysql -u root -pmysecretpassword, then at the MySQL prompt run CREATE DATABASE peopledb;

Step 4: Expose MySQL server via Service

apiVersion: v1
kind: Service
metadata:
  name: dbservice
spec:
  selector:
    app: dbserver
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
  • This will expose this service inside the cluster for other services to access.
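
To verify connectivity (an optional check, not part of the original steps), you can query the DB through the Service name from a throwaway Pod (the tmp-mysql name is arbitrary):

    kubectl run tmp-mysql --rm -it --image=mysql -- \
        mysql -h dbservice -u root -pmysecretpassword -e "SHOW DATABASES;"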

Springboot Application

Step 1: Build and Deploy AppServer

  • Build the Docker image with the name appserver from this File.
    docker build -t appserver .
    
  • Create a Deployment spec for the appserver. Note imagePullPolicy: Never below: the image is never pulled from a registry, so it must already exist in the node's local Docker cache (it does, since we built it on the VM).
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: appserver
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: appserver
      template:
        metadata:
          labels:
            app: appserver
        spec:
          containers:
          - name: appserver
            image: appserver
            imagePullPolicy: Never
            env:
            - name: DB_HOST
              value: dbservice
    

Step 2: Expose AppServer service via Service type LB to host.

apiVersion: v1
kind: Service
metadata:
  name: contacts
spec:
  type: LoadBalancer
  selector:
    app: appserver
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
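
After applying it, find the NodePort the LoadBalancer Service was given (as with the nginx example earlier) and curl it from the VM; the exact URL path depends on the application:

kubectl get svc contacts        # note the 80:3xxxx/TCP mapping
curl 0.0.0.0:<node_port>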

Infrastructure as Code

MySQL Full Spec

  • You can find the full spec file here : File
    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: mysql-pv
      labels:
        type: local
    spec:
      storageClassName: manual
      capacity:
          storage: 5Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/data/mysql"   
    
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: mysql-pvc
    spec:
      storageClassName: manual
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    
    ---
    apiVersion: apps/v1 
    kind: Deployment
    metadata:
      name: dbserver
      labels:
        app: dbserver
    spec:
      selector:
        matchLabels:
          app: dbserver
      template:
        metadata:
          labels:
            app: dbserver
        spec:
          containers:
          - image: mysql
            name: mysql
            imagePullPolicy: Never
            env:
            - name: MYSQL_ROOT_PASSWORD
              value: mysecretpassword
            ports:
            - containerPort: 3306
              name: dbserver
            volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
          volumes:
          - name: mysql-persistent-storage
            persistentVolumeClaim:
              claimName: mysql-pvc
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: dbservice
    spec:
      selector:
        app: dbserver
      ports:
      - protocol: TCP
        port: 3306
        targetPort: 3306
    
  • kubectl apply -f mysql-spec.yml 😄

AppServer Full Spec

  • You can find the full spec file here: File
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: appserver
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: appserver
      template:
        metadata:
          labels:
            app: appserver
        spec:
          containers:
          - name: appserver
            image: appserver
            imagePullPolicy: Never
            env:
            - name: DB_HOST
              value: dbservice
    
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: contacts
    spec:
      type: LoadBalancer
      selector:
        app: appserver
      ports:
      - protocol: TCP
        port: 80
        targetPort: 8080
    
    Quickly apply it with kubectl apply -f appserver-spec.yml

Understanding Advanced Kubernetes Resources

Namespace

Namespaces are software-level cluster virtualization over the same physical k8s cluster.

  root@vagrant:/home/vagrant# kubectl get ns
  NAME              STATUS   AGE
  default           Active   19d
  kube-node-lease   Active   19d
  kube-public       Active   19d
  kube-system       Active   19d

Kubernetes starts with 4 namespaces:

  1. default: The default namespace for objects with no other namespace.
  2. kube-system: The namespace for objects created by the Kubernetes system.
  3. kube-public: This namespace is created automatically and is readable by all users (including those not authenticated). This namespace is mostly reserved for cluster usage, in case that some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.
  4. kube-node-lease: This namespace holds the lease objects associated with each node, which improves the performance of the node heartbeats as the cluster scales.

Get Pods from a specific namespace: kubectl get pods --namespace=default OR kubectl get pods -n default

root@vagrant:/home/vagrant# kubectl get pods --namespace=kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-g9wxg           1/1     Running   5          19d
coredns-f9fd979d6-zrdvs           1/1     Running   5          19d
etcd-vagrant                      1/1     Running   5          19d
kube-apiserver-vagrant            1/1     Running   5          19d
kube-controller-manager-vagrant   1/1     Running   7          19d
kube-flannel-ds-64l2p             1/1     Running   6          19d
kube-proxy-4j4kw                  1/1     Running   5          19d
kube-scheduler-vagrant            1/1     Running   7          19d

Creating Namespace & Adding resource

  • Create namespace : kubectl create namespace qa
  • Once the namespace is created, just add the namespace field to the metadata: namespace: qa. File
    apiVersion: v1
    kind: Pod
    metadata:
       name: nginx
    ++ namespace: qa
    spec:
      containers:
      - name: nginx
        image: nginx
    
  • Most Kubernetes resources (e.g. pods, services, replication controllers, and others) live in a namespace. However, namespace resources are not themselves in a namespace, and low-level resources, such as nodes and persistentVolumes, are not in any namespace.
    • To see the list of resources that are not in a namespace: kubectl api-resources --namespaced=false
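
Once the Pod spec carries namespace: qa, the usual commands just need the namespace flag (a short illustration using the nginx Pod spec above):

    kubectl apply -f pod.yml          # lands in qa because of metadata.namespace
    kubectl get pods -n qa
    kubectl delete pod nginx -n qa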

Context

  • A context is a tuple of (cluster, user, namespace). This is useful when you connect to multiple clusters from one machine.
    • Get the current context: kubectl config get-contexts
    root@vagrant:/home/vagrant/kubedata# kubectl config get-contexts
    CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
    *         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
    
  • You can create a Kubernetes context using a config file or using commands.
    • Create a dev-env context: kubectl config set-context dev-env --cluster=kubernetes --user=new-admin --namespace=dev-env
      root@vagrant:/home/vagrant/kubedata# kubectl config set-context dev-env --cluster=kubernetes --user=new-admin --namespace=dev-env
      Context "dev-env" created.
      
      root@vagrant:/home/vagrant/kubedata# kubectl config get-contexts
      CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
                dev-env                       kubernetes   new-admin          dev-env
      *         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
      
    • Now use the created context using : kubectl config use-context dev-env
    • All your k8s resources will now be created in the dev-env namespace of the kubernetes cluster 😄
      • But to create resources you will need the new-admin user's authentication. This is the user named during context creation.
      • Create a username & password for the new-admin user and give it rights to use resources in the context by creating a role binding. Run this before switching context: kubectl config set-credentials new-admin --username=adm --password=changeme, then apply the ClusterRoleBinding:
      cat << EOF | kubectl apply -f -
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: new-admin
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: User
        name: new-admin
      
      EOF
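
    • When you are done experimenting, switch back to the default context (on a kubeadm cluster it is named kubernetes-admin@kubernetes):
      kubectl config use-context kubernetes-admin@kubernetes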
      

CheatSheet

Next Steps
