aws-samples / Kubernetes For Java Developers

Licence: apache-2.0
A Day in Java Developer’s Life, with a taste of Kubernetes

Programming Languages

java
68154 projects - #9 most used programming language

Projects that are alternatives of or similar to Kubernetes For Java Developers

Jib
🏗 Build container images for your Java applications.
Stars: ✭ 11,370 (+2112.06%)
Mutual labels:  maven, containers
Ambientum
Docker native solution for running Laravel projects. From Development to Production
Stars: ✭ 487 (-5.25%)
Mutual labels:  containers
Draft
A tool for developers to create cloud-native applications on Kubernetes.
Stars: ✭ 3,972 (+672.76%)
Mutual labels:  containers
Lxdui
LXDUI is a web UI for the native Linux container technology LXD/LXC
Stars: ✭ 443 (-13.81%)
Mutual labels:  containers
Cc Oci Runtime
OCI (Open Containers Initiative) compatible runtime for Intel® Architecture
Stars: ✭ 418 (-18.68%)
Mutual labels:  containers
Dokku
A docker-powered PaaS that helps you build and manage the lifecycle of applications
Stars: ✭ 22,155 (+4210.31%)
Mutual labels:  containers
Strongbox
Strongbox is an artifact repository manager.
Stars: ✭ 412 (-19.84%)
Mutual labels:  maven
Spring Boot Microservices On Kubernetes
In this code we demonstrate how a simple Spring Boot application can be deployed on top of Kubernetes. This application, Office Space, mimics the fictitious app idea of Michael Bolton in the movie "Office Space".
Stars: ✭ 504 (-1.95%)
Mutual labels:  containers
Bitnami Docker Wordpress
Bitnami Docker Image for WordPress
Stars: ✭ 476 (-7.39%)
Mutual labels:  containers
Kubernetes Handbook
Kubernetes Handbook (Kubernetes指南) https://kubernetes.feisky.xyz
Stars: ✭ 4,511 (+777.63%)
Mutual labels:  containers
Kelda
Kelda is an approachable way to deploy to the cloud.
Stars: ✭ 433 (-15.76%)
Mutual labels:  containers
Cookbook
🎉🎉🎉 A senior Java architect's technology stack. Any skill can be mastered through "deliberate practice", just like cooking. Here is a handbook of Java development techniques; all you need to add is your own practice. 🏃🏃🏃
Stars: ✭ 428 (-16.73%)
Mutual labels:  maven
Trow
Container Registry and Image Management for Kubernetes Clusters
Stars: ✭ 453 (-11.87%)
Mutual labels:  containers
Shop
A Spring Cloud best-practices sample project: an order-placement example using the full Spring Cloud stack, TCC transaction management, EDA eventual-consistency transactions, and more.
Stars: ✭ 418 (-18.68%)
Mutual labels:  maven
Java Sdk
Java SDK for the Baidu AI Open Platform
Stars: ✭ 495 (-3.7%)
Mutual labels:  maven
Hyscale
HyScale is an Application Centric Abstraction Framework over K8s.
Stars: ✭ 417 (-18.87%)
Mutual labels:  containers
Docker Curriculum
🐬 A comprehensive tutorial on getting started with Docker!
Stars: ✭ 4,523 (+779.96%)
Mutual labels:  containers
Minikube
Run Kubernetes locally
Stars: ✭ 22,673 (+4311.09%)
Mutual labels:  containers
Tern
Tern is a software composition analysis tool and Python library that generates a Software Bill of Materials for container images and Dockerfiles. The SBoM that Tern generates will give you a layer-by-layer view of what's inside your container in a variety of formats including human-readable, JSON, HTML, SPDX and more.
Stars: ✭ 505 (-1.75%)
Mutual labels:  containers
Netplugin
Container networking for various use cases
Stars: ✭ 497 (-3.31%)
Mutual labels:  containers

= A Day in Java Developer's Life, with a taste of Kubernetes
:toc:

Deploying your Java application in a Kubernetes cluster can feel like Alice in Wonderland: you keep going down the rabbit hole and don't know how to make the ride comfortable. This repository explains how a Java application can be deployed, tested, debugged and monitored in Kubernetes. It also covers canary deployments and deployment pipelines.

A comprehensive hands-on course explaining these concepts is available at https://www.linkedin.com/learning/kubernetes-for-java-developers.

== Application

We will use a simple Java application built using Spring Boot. The application publishes a REST endpoint that can be invoked at http://{host}:{port}/hello.

The source code is in the app directory.

== Build and Test using Maven

. Run the application:

cd app
mvn spring-boot:run

. Test the application:

curl http://localhost:8080/hello

== Build and Test using Docker

=== Build Docker Image using multi-stage Dockerfile

. Create m2.tar.gz:

mvn -Dmaven.repo.local=./m2 clean package
tar cvf m2.tar.gz ./m2

. Create Docker image:

docker image build -t arungupta/greeting .

The multi-stage Dockerfile builds the application in a first stage that has Maven and a full JDK available (seeded with the m2.tar.gz dependency archive created above), then copies only the built artifact into a second, smaller runtime stage, so build tools and caches never ship in the final image.

=== Build Docker Image using https://github.com/GoogleContainerTools/jib[Jib]

. Create Docker image:

mvn compile jib:build -Pjib

The benefits of using Jib over a multi-stage Dockerfile build include:

* Don't need to install Docker or run a Docker daemon
* Don't need to write a Dockerfile or build the archive of m2 dependencies
* Much faster
* Builds reproducibly

The above builds and pushes the image directly to a Docker registry. Alternatively, Jib can also build to a local Docker daemon:

mvn compile jib:dockerBuild -Pjib -Ddocker.name=arungupta/greeting

=== Test built container using Docker

. Run container:

docker container run --name greeting -p 8080:8080 -d arungupta/greeting

. Access application:

curl http://localhost:8080/hello

. Remove container:

docker container rm -f greeting

== Minimal Docker Image using Custom JRE

. Download http://download.oracle.com/otn-pub/java/jdk/11.0.1+13/90cf5d8f270a4347a95050320eef3fb7/jdk-11.0.1_linux-x64_bin.rpm[JDK 11] and scp it to an https://aws.amazon.com/marketplace/pp/B00635Y2IW/ref=mkt_ste_ec2_lw_os_win[Amazon Linux] instance.
. Install JDK 11:

sudo yum install jdk-11.0.1_linux-x64_bin.rpm

. Create a custom JRE for the Spring Boot application:

cp target/app.war target/app.jar
jlink \
	--output myjre \
	--add-modules $(jdeps --print-module-deps target/app.jar),\
	java.xml,jdk.unsupported,java.sql,java.naming,java.desktop,\
	java.management,java.security.jgss,java.instrument
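
Before building the image, you can sanity-check the custom runtime by launching the application with it directly. This is a quick sketch, assuming the myjre directory and target/app.jar from the previous commands are in place:

# run the app on the jlink-generated runtime instead of the system JDK
./myjre/bin/java -jar target/app.jar

If the application starts and http://localhost:8080/hello responds, the module list passed to jlink is complete.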

. Build Docker image using this custom JRE:

docker image build --file Dockerfile.jre -t arungupta/greeting:jre-slim .

. List the Docker images and show the difference in sizes:

$ docker image ls | grep greeting
arungupta/greeting   jre-slim            9eed25582f36        6 seconds ago       162MB
arungupta/greeting   latest              1b7c061dad60        10 hours ago        490MB

. Run the container:

docker container run -d -p 8080:8080 arungupta/greeting:jre-slim

. Access the application:

curl http://localhost:8080/hello

== Build and Test using Kubernetes

A single-node Kubernetes cluster can easily be created on a development machine using https://github.com/kubernetes/minikube[Minikube], https://microk8s.io/[MicroK8s], https://github.com/kubernetes-sigs/kind[KIND], or https://docs.docker.com/docker-for-mac/kubernetes/[Docker for Mac]. https://blog.tilt.dev//2019/08/21/why-does-developing-on-kubernetes-suck.html[Read] why these local development environments do not truly represent your production cluster.
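
For reference, each of these tools can stand up a local cluster with a single command. These invocations are illustrative sketches rather than part of this tutorial's flow:

# Minikube: single-node cluster in a local VM or container
minikube start

# KIND: cluster nodes run as Docker containers
kind create cluster

# MicroK8s (Linux): installed as a snap
sudo snap install microk8s --classic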

This tutorial will use Docker for Mac.

. Ensure that Kubernetes is enabled in Docker for Mac.
. Show the list of contexts:

kubectl config get-contexts

. Configure the kubectl CLI for the Kubernetes cluster:

kubectl config use-context docker-for-desktop

. Install the Helm CLI:

brew install kubernetes-helm

If the Helm CLI is already installed, then use brew upgrade kubernetes-helm.

. Check the Helm version:

helm version

. Install Helm in the Kubernetes cluster:

helm init

If Helm has already been initialized on the cluster, then you may have to upgrade Tiller:

helm init --upgrade

. Install the Helm chart:

cd ..
helm install --name myapp manifests/myapp

. Check that the pod is running:

kubectl get pods

. Check that the service is up:

kubectl get svc

. Access the application:

curl http://$(kubectl get svc/myapp-greeting \
	-o jsonpath='{.status.loadBalancer.ingress[0].hostname}'):8080/hello

== Debug Docker and Kubernetes using IntelliJ

You can debug a Docker container and a Kubernetes Pod if they're running locally on your machine.
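
Remote debugging works because the JVM inside the container starts with the JDWP agent listening on port 5005, which the Helm chart and the Docker command below expose. If you build your own image, the flag looks like this sketch (on JDK 9 and later, use address=*:5005 so the agent accepts connections from outside the container):

# start the JVM with the JDWP debug agent listening on port 5005
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -jar app.jar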

=== Debug using Kubernetes

This was tested using Docker for Mac/Kubernetes. Use the previously deployed Helm chart.

. Show the service:

kubectl get svc
NAME               TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
greeting-service   LoadBalancer   10.101.39.100    <pending>     80:30854/TCP                    8m
kubernetes         ClusterIP      10.96.0.1        <none>        443/TCP                         90d
myapp-greeting     LoadBalancer   10.108.104.178   localhost     8080:32189/TCP,5005:31117/TCP   4s

Note that the debug port (5005) is also forwarded.

. In IntelliJ, select Run, Debug, Remote:

image::images/docker-debug1.png[]

. Click on Debug and set up a breakpoint in the class:

image::images/docker-debug2.png[]

. Access the application:

curl http://$(kubectl get svc/myapp-greeting \
	-o jsonpath='{.status.loadBalancer.ingress[0].hostname}'):8080/hello

. Show the breakpoint hit in IntelliJ:

image::images/docker-debug3.png[]

. Delete the Helm chart:

helm delete --purge myapp

=== Debug using Docker

This was tested using Docker for Mac.

. Run container:

docker container run --name greeting -p 8080:8080 -p 5005:5005 -d arungupta/greeting

. Check container:

$ docker container ls -a
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                                            NAMES
724313157e3c        arungupta/greeting   "java -jar app-swarm…"   3 seconds ago       Up 2 seconds        0.0.0.0:5005->5005/tcp, 0.0.0.0:8080->8080/tcp   greeting

. Set up the breakpoint as explained above.
. Access the application using curl http://localhost:8080/resources/greeting.

== Kubernetes Cluster on AWS

This application will be deployed to an https://aws.amazon.com/eks/[Amazon EKS] cluster. If you're looking for a self-paced workshop that provides detailed instructions to get you started with EKS, then https://eksworkshop.com[eksworkshop.com] is your place.

Let's create the cluster first.

. Install http://eksctl.io/[eksctl] CLI:

brew install weaveworks/tap/eksctl

. Create EKS cluster:

eksctl create cluster --name myeks --nodes 4 --region us-west-2
2018-10-25T13:45:38+02:00 [ℹ]  setting availability zones to [us-west-2a us-west-2c us-west-2b]
2018-10-25T13:45:39+02:00 [ℹ]  using "ami-0a54c984b9f908c81" for nodes
2018-10-25T13:45:39+02:00 [ℹ]  creating EKS cluster "myeks" in "us-west-2" region
2018-10-25T13:45:39+02:00 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
2018-10-25T13:45:39+02:00 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --name=myeks'
2018-10-25T13:45:39+02:00 [ℹ]  creating cluster stack "eksctl-myeks-cluster"
2018-10-25T13:57:33+02:00 [ℹ]  creating nodegroup stack "eksctl-myeks-nodegroup-0"
2018-10-25T14:01:18+02:00 [✔]  all EKS cluster resource for "myeks" had been created
2018-10-25T14:01:18+02:00 [✔]  saved kubeconfig as "/Users/argu/.kube/config"
2018-10-25T14:01:19+02:00 [ℹ]  the cluster has 0 nodes
2018-10-25T14:01:19+02:00 [ℹ]  waiting for at least 4 nodes to become ready
2018-10-25T14:01:50+02:00 [ℹ]  the cluster has 4 nodes
2018-10-25T14:01:50+02:00 [ℹ]  node "ip-192-168-161-180.us-west-2.compute.internal" is ready
2018-10-25T14:01:50+02:00 [ℹ]  node "ip-192-168-214-48.us-west-2.compute.internal" is ready
2018-10-25T14:01:50+02:00 [ℹ]  node "ip-192-168-75-44.us-west-2.compute.internal" is ready
2018-10-25T14:01:50+02:00 [ℹ]  node "ip-192-168-82-236.us-west-2.compute.internal" is ready
2018-10-25T14:01:52+02:00 [ℹ]  kubectl command should work with "/Users/argu/.kube/config", try 'kubectl get nodes'
2018-10-25T14:01:52+02:00 [✔]  EKS cluster "myeks" in "us-west-2" region is ready

. Check the nodes:

kubectl get nodes
NAME                                            STATUS   ROLES    AGE   VERSION
ip-192-168-161-180.us-west-2.compute.internal   Ready    <none>   52s   v1.10.3
ip-192-168-214-48.us-west-2.compute.internal    Ready    <none>   57s   v1.10.3
ip-192-168-75-44.us-west-2.compute.internal     Ready    <none>   57s   v1.10.3
ip-192-168-82-236.us-west-2.compute.internal    Ready    <none>   54s   v1.10.3

. Get the list of configs:

kubectl config get-contexts
CURRENT   NAME                 CLUSTER                      AUTHINFO             NAMESPACE
*         [email protected]    myeks.us-west-2.eksctl.io    [email protected]
          docker-for-desktop   docker-for-desktop-cluster   docker-for-desktop

As indicated by the *, the kubectl CLI is now configured for the recently created cluster.

== Migrate from Dev to Prod

. Explicitly set the context:

kubectl config use-context [email protected]

. Install Helm:

kubectl -n kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller

. Check the list of pods:

kubectl get pods -n kube-system
NAME                            READY   STATUS    RESTARTS   AGE
aws-node-774jf                  1/1     Running   1          2m
aws-node-jrf5r                  1/1     Running   0          2m
aws-node-n46tw                  1/1     Running   0          2m
aws-node-slgns                  1/1     Running   0          2m
kube-dns-7cc87d595-5tskv        3/3     Running   0          8m
kube-proxy-2ghg6                1/1     Running   0          2m
kube-proxy-hqxwg                1/1     Running   0          2m
kube-proxy-lrwrr                1/1     Running   0          2m
kube-proxy-x77tq                1/1     Running   0          2m
tiller-deploy-895d57dd9-txqk4   1/1     Running   0          15s

. Redeploy the application:

helm install --name myapp manifests/myapp

. Get the service:

kubectl get svc
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)                         AGE
kubernetes       ClusterIP      10.100.0.1       <none>                                                                   443/TCP                         17m
myapp-greeting   LoadBalancer   10.100.241.250   a8713338abef211e8970816cb629d414-71232674.us-east-1.elb.amazonaws.com   8080:32626/TCP,5005:30739/TCP   2m

It shows that ports 8080 and 5005 are published and an Elastic Load Balancer is provisioned. It takes about three minutes for the load balancer to be ready.

. Access the application:

curl http://$(kubectl get svc/myapp-greeting \
	-o jsonpath='{.status.loadBalancer.ingress[0].hostname}'):8080/hello

. Delete the application:

helm delete --purge myapp

== Service Mesh using AWS App Mesh

https://aws.amazon.com/app-mesh/[AWS App Mesh] is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. App Mesh can be used with Amazon EKS or Kubernetes running on AWS. It also works with other container services offered by AWS, such as AWS Fargate and Amazon ECS, and with microservices deployed on Amazon EC2.

A thorough, detailed example that shows how to use App Mesh with EKS is available at https://eksworkshop.com/servicemesh_with_appmesh/[Service Mesh with App Mesh]. This section provides a simplified setup using the configuration files from there.

All scripts used in this section are in the manifests/appmesh directory.

=== Setup IAM Permissions

. Set the ROLE_NAME variable to the IAM role for the EKS worker nodes:

ROLE_NAME=$(aws iam list-roles \
	--query \
	'Roles[?contains(RoleName,`eksctl-myeks-nodegroup`)].RoleName' --output text)

. Setup permissions for the worker nodes:

aws iam attach-role-policy \
	--role-name $ROLE_NAME \
	--policy-arn arn:aws:iam::aws:policy/AWSAppMeshFullAccess
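
To confirm the policy is attached before moving on, list the role's attached policies (a quick check, not part of the original steps):

aws iam list-attached-role-policies --role-name $ROLE_NAME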

=== Configure App Mesh

. Enable sidecar injection by running the create.sh script from https://github.com/aws/aws-app-mesh-examples/tree/master/examples/apps/djapp/2_create_injector. You need to edit ca-bundle.sh and change MESH_NAME to greeting-app.
. Create the prod namespace:

kubectl create namespace prod

. Label the prod namespace:

kubectl label namespace prod appmesh.k8s.aws/sidecarInjectorWebhook=enabled

. Create CRDs:

kubectl create -f https://raw.githubusercontent.com/aws/aws-app-mesh-examples/master/examples/apps/djapp/3_add_crds/mesh-definition.yaml
kubectl create -f https://raw.githubusercontent.com/aws/aws-app-mesh-examples/master/examples/apps/djapp/3_add_crds/virtual-node-definition.yaml
kubectl create -f https://raw.githubusercontent.com/aws/aws-app-mesh-examples/master/examples/apps/djapp/3_add_crds/virtual-service-definition.yaml
kubectl create -f https://raw.githubusercontent.com/aws/aws-app-mesh-examples/master/examples/apps/djapp/3_add_crds/controller-deployment.yaml

=== Create App Mesh Components

. Create a Mesh:

kubectl create -f mesh.yaml

. Create Virtual Nodes:

kubectl create -f virtualnodes.yaml

. Create Virtual Services:

kubectl create -f virtualservice.yaml

. Create deployments:

kubectl create -f app-hello-howdy.yaml

. Create services:

kubectl create -f services.yaml

=== Traffic Shifting

. Find the name of the talker pod:

TALKER_POD=$(kubectl get pods \
	-nprod -lgreeting=talker \
	-o jsonpath='{.items[0].metadata.name}')

. Exec into the talker pod:

kubectl exec -nprod $TALKER_POD -it bash

. Invoke the mostly-hello service to get back mostly Hello responses:

while true; do curl http://mostly-hello.prod.svc.cluster.local:8080/hello; echo; done

. Press CTRL+C to break the loop.

. Invoke the mostly-howdy service to get back mostly Howdy responses:

while true; do curl http://mostly-howdy.prod.svc.cluster.local:8080/hello; echo; done

. Press CTRL+C to break the loop.

== Service Mesh using Istio

https://istio.io/[Istio] is a layer 4/7 proxy that routes and load balances traffic over HTTP, WebSocket, HTTP/2 and gRPC, and supports application protocols such as MongoDB and Redis. Istio uses the Envoy proxy to manage all inbound/outbound traffic in the service mesh.

Istio has a wide variety of traffic management features that live outside the application code, such as A/B testing, phased/canary rollouts, failure recovery, circuit breaker, layer 7 routing and policy enforcement (all provided by the Envoy proxy). Istio also supports ACLs, rate limits, quotas, authentication, request tracing and telemetry collection using its Mixer component. The goal of the Istio project is to support traffic management and security of microservices without requiring any changes to the application; it does this by injecting a sidecar into your pod that handles all network communications.

More details at https://aws.amazon.com/blogs/opensource/getting-started-istio-eks/[Getting Started with Istio on Amazon EKS].

=== Install and Configure

. Download Istio:

curl -L https://git.io/getLatestIstio | sh -
cd istio-1.*

. Include the istio-1.*/bin directory in your PATH.
. Install Istio on Amazon EKS:

helm install \
	--wait \
	--name istio \
	--namespace istio-system \
	install/kubernetes/helm/istio \
	--set tracing.enabled=true \
	--set grafana.enabled=true

. Verify:

kubectl get pods -n istio-system
NAME                                        READY   STATUS    RESTARTS   AGE
grafana-75485f89b9-4lwg5                    1/1     Running   0          1m
istio-citadel-84fb7985bf-4dkcx              1/1     Running   0          1m
istio-egressgateway-bd9fb967d-bsrhz         1/1     Running   0          1m
istio-galley-655c4f9ccd-qwk42               1/1     Running   0          1m
istio-ingressgateway-688865c5f7-zj9db       1/1     Running   0          1m
istio-pilot-6cd69dc444-9qstf                2/2     Running   0          1m
istio-policy-6b9f4697d-g8hc6                2/2     Running   0          1m
istio-sidecar-injector-8975849b4-cnd6l      1/1     Running   0          1m
istio-statsd-prom-bridge-7f44bb5ddb-8r2zx   1/1     Running   0          1m
istio-telemetry-6b5579595f-nlst8            2/2     Running   0          1m
istio-tracing-ff94688bb-2w4wg               1/1     Running   0          1m
prometheus-84bd4b9796-t9kk5                 1/1     Running   0          1m

Check that both the Tracing and Grafana add-ons are enabled.

. Enable sidecar injection for all pods in the default namespace:

kubectl label namespace default istio-injection=enabled

. From the repo's main directory, deploy the application:

kubectl apply -f manifests/app.yaml

. Check the pods and note that each has two containers (one for the application and one for the sidecar):

kubectl get pods -l app=greeting
NAME                       READY     STATUS    RESTARTS   AGE
greeting-d4f55c7ff-6gz8b   2/2       Running   0          5s

. Get list of containers in the pod:

kubectl get pods -l app=greeting -o jsonpath={.items[*].spec.containers[*].name}
greeting istio-proxy

. Get response:

curl http://$(kubectl get svc/greeting \
	-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/hello

=== Traffic Shifting

. Deploy the application with two versions of the greeting service, one that returns Hello and another that returns Howdy:

kubectl delete -f manifests/app.yaml
kubectl apply -f manifests/app-hello-howdy.yaml

. Check the list of pods:

kubectl get pods -l app=greeting
NAME                              READY     STATUS    RESTARTS   AGE
greeting-hello-69cc7684d-7g4bx    2/2       Running   0          1m
greeting-howdy-788b5d4b44-g7pml   2/2       Running   0          1m

. Access the application multiple times to see the different responses:

for i in {1..10}
do
	curl -q http://$(kubectl get svc/greeting -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/hello
	echo
done

. Set up an Istio rule to split traffic 75% to the Hello and 25% to the Howdy version of the greeting service:

kubectl apply -f manifests/istio/app-rule-75-25.yaml

. Invoke the service again to see the traffic split between the two versions.
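
To make the split visible, count the responses over a larger sample. A quick sketch, assuming the same greeting service as above:

SVC=$(kubectl get svc/greeting -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
# issue 100 requests and tally the distinct responses
for i in {1..100}; do curl -s http://$SVC/hello; echo; done | sort | uniq -c

Roughly 75 of the responses should contain Hello and 25 Howdy.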

=== Canary Deployment

. Set up an Istio rule to divert 10% of traffic to the canary:

kubectl delete -f manifests/istio/app-rule-75-25.yaml
kubectl apply -f manifests/istio/app-canary.yaml

. Access the application multiple times to see that ~10% of the greeting messages are Howdy:

for i in {1..50}
do
	curl -q http://$(kubectl get svc/greeting -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/hello
	echo
done

=== Distributed Tracing

Istio is deployed as a sidecar proxy into each of your pods; this means it can see and monitor all the traffic flows between your microservices and generate a graphical representation of your mesh traffic. We’ll use the application you deployed in the previous step to demonstrate this.

By default, tracing is disabled. --set tracing.enabled=true was used during Istio installation to ensure tracing was enabled.

Setup access to the tracing dashboard URL using port-forwarding:

kubectl port-forward \
	-n istio-system \
	pod/$(kubectl get pod \
		-n istio-system \
		-l app=jaeger \
		-o jsonpath='{.items[0].metadata.name}') 16686:16686 &

Access the dashboard at http://localhost:16686, click on Dependencies, DAG.

image::images/istio-dag.png[]

=== Metrics using Grafana

. By default, Grafana is disabled. --set grafana.enabled=true was used during Istio installation to ensure Grafana was enabled. Alternatively, the Grafana add-on can be installed as:

kubectl apply -f install/kubernetes/addons/grafana.yaml

. Verify:

kubectl get pods -l app=grafana -n istio-system
NAME                       READY     STATUS    RESTARTS   AGE
grafana-75485f89b9-n4skw   1/1       Running   0          10m

. Forward the Grafana port to access the Istio dashboard:

kubectl -n istio-system \
	port-forward $(kubectl -n istio-system \
		get pod -l app=grafana \
		-o jsonpath='{.items[0].metadata.name}') 3000:3000 &

. View the Istio dashboard at http://localhost:3000. Click on Home, Istio Workload Dashboard.

. Invoke the endpoint:

curl http://$(kubectl get svc/greeting \
	-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/hello

image::images/istio-dashboard.png[]

=== Timeouts

Delays and timeouts can be injected into services.

. Deploy the application:

kubectl delete -f manifests/app.yaml
kubectl apply -f manifests/app-ingress.yaml

. Add a 5-second delay to calls to the service:

kubectl apply -f manifests/istio/greeting-delay.yaml

. Invoke the service using a 2-second timeout:

export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http")].port}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
curl --connect-timeout 2 http://$GATEWAY_URL/resources/greeting

The call will time out after 2 seconds.

== Chaos using kube-monkey

https://github.com/asobti/kube-monkey[kube-monkey] is an implementation of Netflix's Chaos Monkey for Kubernetes clusters. It randomly deletes Kubernetes pods in the cluster, encouraging and validating the development of failure-resilient services.

. Create kube-monkey configuration:

kubectl apply -f manifests/kubemonkey/kube-monkey-configmap.yaml

. Run kube-monkey:

kubectl apply -f manifests/kubemonkey/kube-monkey-deployment.yaml

. Deploy an app that opts in to pod deletion:

kubectl apply -f manifests/kubemonkey/app-kube-monkey.yaml

This application allows up to 40% of its pods to be killed. The deletion schedule is defined in the kube-monkey configuration, here weekdays between 10am and 4pm.
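
kube-monkey selects its victims by labels. Assuming this manifest uses the standard kube-monkey opt-in label, the pods eligible for deletion can be listed with:

# list pods that have opted in to kube-monkey chaos
kubectl get pods -l kube-monkey/enabled=enabled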

== Deployment Pipeline using Skaffold

https://github.com/GoogleContainerTools/skaffold[Skaffold] is a command line utility that facilitates continuous development for Kubernetes applications. With Skaffold, you can iterate on your application source code locally, then deploy it to a remote Kubernetes cluster.

. Check context:

kubectl config get-contexts
CURRENT   NAME                               CLUSTER                       AUTHINFO                           NAMESPACE
          [email protected]   eks-gpu.us-west-2.eksctl.io   [email protected]   
*         [email protected]     myeks.us-east-1.eksctl.io     [email protected]     
          docker-for-desktop                 docker-for-desktop-cluster    docker-for-desktop

. Change to use local Kubernetes cluster:

kubectl config use-context docker-for-desktop

. Download Skaffold:

curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-darwin-amd64 \
	&& chmod +x skaffold

. Open http://localhost:8080/resources/greeting in a browser. This will show that the page is not available.
. Run Skaffold in the application directory:

cd app
skaffold dev

. Refresh the page in browser to see the output.

== Deployment Pipeline using CodePipeline

Complete detailed instructions are available at https://eksworkshop.com/codepipeline/.

=== Create IAM role

. Create an IAM role and add an in-line policy that will allow the CodeBuild stage to interact with the EKS cluster:

ACCOUNT_ID=`aws sts get-caller-identity --query Account --output text`
TRUST="{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::${ACCOUNT_ID}:root\" }, \"Action\": \"sts:AssumeRole\" } ] }"
echo '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "eks:Describe*", "Resource": "*" } ] }' > /tmp/iam-role-policy
aws iam create-role --role-name EksWorkshopCodeBuildKubectlRole --assume-role-policy-document "$TRUST" --output text --query 'Role.Arn'
aws iam put-role-policy --role-name EksWorkshopCodeBuildKubectlRole --policy-name eks-describe --policy-document file:///tmp/iam-role-policy

. Add this IAM role to aws-auth ConfigMap for the EKS cluster:

ROLE="    - rolearn: arn:aws:iam::$ACCOUNT_ID:role/EksWorkshopCodeBuildKubectlRole\n      username: build\n      groups:\n        - system:masters"
kubectl get -n kube-system configmap/aws-auth -o yaml | awk "/mapRoles: \|/{print;print \"$ROLE\";next}1" > /tmp/aws-auth-patch.yml
kubectl patch configmap/aws-auth -n kube-system --patch "$(cat /tmp/aws-auth-patch.yml)"
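
To verify that the role was added, print the patched ConfigMap (a quick check, not part of the original steps):

kubectl describe configmap aws-auth -n kube-system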

=== Create CloudFormation template

. Fork the repo https://github.com/aws-samples/kubernetes-for-java-developers.
. Create a new GitHub token at https://github.com/settings/tokens/new, select repo as the scope, and click on Generate Token. Copy the generated token.
. Launch the https://console.aws.amazon.com/cloudformation/home?#/stacks/create/review?stackName=eksws-codepipeline&templateURL=https://s3.amazonaws.com/eksworkshop.com/templates/master/ci-cd-codepipeline.cfn.yml[CodePipeline CloudFormation template].
. Specify the correct values for GitHubUser, GitHubToken, GitSourceRepo and the EKS cluster name. Change the branch if you need to:

image::images/codepipeline-template.png[]

Click on Create stack to create the resources.

=== View CodePipeline

. Once the stack creation is complete, open https://us-west-2.console.aws.amazon.com/codesuite/codepipeline/pipelines?region=us-west-2#[CodePipeline in the AWS Console].
. Select the pipeline and wait for the pipeline status to complete:

image::images/codepipeline-status.png[]

. Access the service:

curl http://$(kubectl get svc/greeting -n default \
	-o jsonpath='{.status.loadBalancer.ingress[0].hostname}'):8080/hello

== Deployment Pipeline using Jenkins X

. Install the jx CLI:

brew tap jenkins-x/jx
brew install jx 

. Create a new https://github.com/settings/tokens[GitHub token] with the following scope:

image::images/jenkinsx-github-token.png[]

. Install Jenkins X on Amazon EKS:

jx install --provider=eks --git-username arun-gupta --git-api-token GITHUB_TOKEN --batch-mode

link:images/jenkinsx-log.txt[Log] shows a complete run of the command.

. Use jx import to import a project. It needs a Dockerfile and a Maven application in the root directory.
