
cloudogu / gitops-playground

License: MIT
Reproducible infrastructure to showcase GitOps workflows and evaluate different GitOps Operators on Kubernetes

Programming Languages

shell, groovy, java, Dockerfile, HCL, HTML

Projects that are alternatives of or similar to gitops-playground

gitops-build-lib
Jenkins pipeline shared library for automating deployments via GitOps
Stars: ✭ 23 (-70.13%)
Mutual labels:  flux, jenkins, helm, argo, k8s, gitops, argocd, fluxcd, gitops-playground
k3s-gitops
GitOps principles to define kubernetes cluster state via code
Stars: ✭ 103 (+33.77%)
Mutual labels:  flux, helm, gitops, k3s
Book k8sInfra
Kubernetes/Docker for Building a Container Infrastructure Environment (Korean book title)
Stars: ✭ 176 (+128.57%)
Mutual labels:  jenkins, helm, k8s, gitops
Quiz
Example real time quiz application with .NET Core, React, DDD, Event Sourcing, Docker and built-in infrastructure for CI/CD with k8s, jenkins and helm
Stars: ✭ 100 (+29.87%)
Mutual labels:  jenkins, helm, k8s
Argo Cd
Declarative continuous deployment for Kubernetes.
Stars: ✭ 7,887 (+10142.86%)
Mutual labels:  helm, argo, gitops
Arkade
Open Source Kubernetes Marketplace
Stars: ✭ 2,343 (+2942.86%)
Mutual labels:  helm, k8s, k3s
okd-lab
Controlled Environment for OKD4 experiments
Stars: ✭ 24 (-68.83%)
Mutual labels:  argo, gitops, argocd
k8s-gitops
Homelab GitOps repository. Cluster definition state via code.
Stars: ✭ 47 (-38.96%)
Mutual labels:  flux, gitops, k3s
croc-hunter-jenkinsx
Croc Hunter demo, deployed with Jenkins X
Stars: ✭ 19 (-75.32%)
Mutual labels:  jenkins, helm, gke
k8s-gitops
No description or website provided.
Stars: ✭ 23 (-70.13%)
Mutual labels:  flux, gitops, k3s
K8s Gitops
GitOps principles to define kubernetes cluster state via code. Community around [email protected] is on discord: https://discord.gg/7PbmHRK
Stars: ✭ 192 (+149.35%)
Mutual labels:  flux, helm, k8s
gitops-helm-workshop
Progressive Delivery for Kubernetes with Flux, Helm, Linkerd and Flagger
Stars: ✭ 59 (-23.38%)
Mutual labels:  helm, gitops, fluxcd
K3sup
bootstrap Kubernetes with k3s over SSH < 1 min 🚀
Stars: ✭ 4,012 (+5110.39%)
Mutual labels:  helm, k8s, k3s
homelab
My self-hosting infrastructure, fully automated from empty disk to operating services
Stars: ✭ 4,451 (+5680.52%)
Mutual labels:  helm, argocd, k3s
vcluster
vcluster - Create fully functional virtual Kubernetes clusters - Each vcluster runs inside a namespace of the underlying k8s cluster. It's cheaper than creating separate full-blown clusters and it offers better multi-tenancy and isolation than regular namespaces.
Stars: ✭ 1,360 (+1666.23%)
Mutual labels:  helm, k8s, k3s
paas-templates
Bosh, CFAR, CFCR and OSB services templates for use with COA (cf-ops-automation) framework
Stars: ✭ 16 (-79.22%)
Mutual labels:  k8s, gitops, k3s
cicd-demo
A demo repository that shows CI/CD integration using DroneCI + ArgoCD + Kubernetes.
Stars: ✭ 36 (-53.25%)
Mutual labels:  k8s, gitops, argocd
firework8s
Firework8s is a collection of kubernetes objects (yaml files) for deploying workloads in a home lab.
Stars: ✭ 35 (-54.55%)
Mutual labels:  k8s, k3s, k3d
k3s-gitops
My home Kubernetes (k3s) cluster managed by GitOps (Flux)
Stars: ✭ 26 (-66.23%)
Mutual labels:  flux, gitops, k3s
charts
Helm charts for creating reproducible and maintainable deployments of Polyaxon with Kubernetes.
Stars: ✭ 32 (-58.44%)
Mutual labels:  helm, k8s, gitops

gitops-playground

Build Status

Reproducible infrastructure to showcase GitOps workflows with Kubernetes.

In fact, this rolls out a complete DevOps stack with different features including

  • GitOps (with different controllers to choose from: Argo CD, Flux v1 and v2),
  • Monitoring (using Prometheus and Grafana),
  • example applications and CI-pipelines (using Jenkins and our GitOps library) and
  • soon Secrets management (using Vault).

The gitops-playground is derived from our experiences in consulting and operating the myCloudogu platform.
For questions or suggestions you are welcome to join us at our myCloudogu community forum.

Discuss it on myCloudogu

TL;DR

You can run a local k8s cluster with the GitOps playground installed with only one command (on Linux):

docker pull ghcr.io/cloudogu/gitops-playground && \
bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/init-cluster.sh) \
  && sleep 2 && docker run --rm -it -u $(id -u) -v ~/.k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
    --net=host \
    ghcr.io/cloudogu/gitops-playground --yes

This command will also print URLs of the applications inside the cluster to get you started.
Note that you can append --argocd, --fluxv1 or --fluxv2 to deploy only specific operators, which also speeds up the process.

We recommend running this command as an unprivileged user that is a member of the docker group.


What is the GitOps Playground?

The GitOps Playground provides a reproducible environment for trying out GitOps. It consists of Infrastructure as Code and scripts for automatically setting up a Kubernetes cluster including a CI server (Jenkins), source code management (SCM-Manager) and several GitOps operators (Flux V1, Flux V2, Argo CD). CI server, SCM and operators are pre-configured with a number of demo applications.

The GitOps Playground lowers the barrier for getting your hands on GitOps. There is no need to read lots of books and operator docs, get familiar with CLIs, or ponder GitOps repository folder structures and staging, etc. The GitOps Playground is a pre-configured environment to see GitOps in motion, including more advanced use cases like notifications and monitoring.

Installation

There are several options for running the GitOps playground:

  • on a local k3d cluster
    NOTE: Currently runs only on Linux!
    Running on Windows or Mac is possible in general, but we would need to bind all needed ports to the k3d container.
    See our POC. Let us know if this feature is of interest to you.
  • on a remote k8s cluster
  • each with the option
    • to use an external Jenkins, SCM-Manager and registry (this can be run in production, e.g. with a Cloudogu Ecosystem) or
    • to run everything inside the cluster (for demo only)

The diagrams below show an overview of the playground's architecture and three scenarios for running the playground.

Note that running Jenkins inside the cluster is meant for demo purposes only. The third graphic shows our production scenario with the Cloudogu EcoSystem (CES). Here, better security and build performance are achieved using ephemeral Jenkins build agents spawned in the cloud.

Diagrams: Playground on a local machine (demo), playground on a remote cluster (demo), and a possible production environment with CES.

Create Cluster

If you don't have a demo cluster at hand we provide scripts to create either

  • a local k3d cluster (see docs or script for more details):
    bash <(curl -s \
      https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/init-cluster.sh)
  • a remote k8s cluster on Google Kubernetes Engine (e.g. via Terraform, see our docs),
  • or almost any k8s cluster.
    Note that if you want to deploy Jenkins inside the cluster, Docker is required as the container runtime (see the quick check below).
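
A quick way to check which container runtime a cluster uses (plain kubectl; assumes your kubeconfig already points at the cluster):

kubectl get nodes -o wide # the CONTAINER-RUNTIME column shows e.g. docker:// or containerd://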

Apply playground

You can apply the playground to your cluster using our container image ghcr.io/cloudogu/gitops-playground.
On success, the container prints a little intro on how to get started with the GitOps playground.

There are several options for running the container:

  • For local k3d cluster, we recommend running the image as a local container via docker
  • For remote clusters (e.g. on GKE) you can run the image inside a pod of the target cluster via kubectl.

All options offer the same parameters, see below.

Apply via Docker (local cluster)

When connecting to k3d it is easiest to apply the playground via a local container in the host network and pass k3d's kubeconfig.

CLUSTER_NAME=gitops-playground
docker pull ghcr.io/cloudogu/gitops-playground
docker run --rm -it -u $(id -u)  -v ~/.k3d/kubeconfig-${CLUSTER_NAME}.yaml:/home/.kube/config \
  --net=host \
  ghcr.io/cloudogu/gitops-playground # additional parameters go here

Note:

  • docker pull in advance makes sure you have the newest image, even if you ran this command before.
    Of course, you could also specify a specific version of the image (see the sketch after these notes).
  • Using the host network makes it possible to resolve localhost and to use k3d's kubeconfig without alteration, as it accesses the API server via a port bound to localhost.
  • We run as the local user in order to avoid file permission issues with the kubeconfig-${CLUSTER_NAME}.yaml.
  • If you experience issues and want to access the full log files, use the following command while the container is running:
docker exec -it \
  $(docker ps -q  --filter ancestor=ghcr.io/cloudogu/gitops-playground) \
  bash -c -- 'tail -f  -n +1 /tmp/playground-log-*'
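
If you prefer a pinned image over the latest one, the same run command works with an explicit tag (a sketch; <tag> is a placeholder for a tag published on ghcr.io):

docker run --rm -it -u $(id -u) -v ~/.k3d/kubeconfig-${CLUSTER_NAME}.yaml:/home/.kube/config \
  --net=host \
  ghcr.io/cloudogu/gitops-playground:<tag> # additional parameters go here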

Apply via kubectl (remote cluster)

For remote clusters it is easiest to apply the playground via kubectl. You can find info on how to install kubectl here.

# Create a temporary ServiceAccount and authorize via RBAC. This is needed to install CRDs, etc.
kubectl create serviceaccount gitops-playground-job-executer -n default
kubectl create clusterrolebinding gitops-playground-job-executer \
  --clusterrole=cluster-admin \
  --serviceaccount=default:gitops-playground-job-executer

# Then apply the playground with the following command.
# The --remote parameter exposes Jenkins, SCMM and Argo CD on well-known ports,
# so you don't have to remember the individual ports
kubectl run gitops-playground -i --tty --restart=Never \
  --overrides='{ "spec": { "serviceAccount": "gitops-playground-job-executer" } }' \
  --image ghcr.io/cloudogu/gitops-playground \
  -- --yes --remote # additional parameters go here

# If everything succeeded, remove the objects
kubectl delete clusterrolebinding/gitops-playground-job-executer \
  sa/gitops-playground-job-executer pods/gitops-playground -n default  

In general, docker run should work here as well. But GKE, for example, uses gcloud and python in its kubeconfig. Running inside the cluster avoids these kinds of issues.

Additional parameters

The following describes more parameters and use cases.

You can get a full list of all options like so:

docker run --rm ghcr.io/cloudogu/gitops-playground --help

Deploy specific GitOps operators only
  • --argocd - deploy only Argo CD GitOps operator
  • --fluxv1 - deploy only Flux v1 GitOps operator
  • --fluxv2 - deploy only Flux v2 GitOps operator
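
For example, to set up only Argo CD on a local k3d cluster, append the flag to the apply command shown above (a sketch that reuses the image and kubeconfig from the Docker section):

docker run --rm -it -u $(id -u) -v ~/.k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
  --net=host \
  ghcr.io/cloudogu/gitops-playground --yes --argocd
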
Deploy with local Cloudogu Ecosystem

See our Quickstart Guide on how to set up the instance.
Then set the following parameters.

# Note: 
# * In this case --password only sets the Argo CD admin password (Jenkins and SCMM are external)
# * --insecure is needed, because the local instance will not have a valid cert
--jenkins-url=https://192.168.56.2/jenkins \
--scmm-url=https://192.168.56.2/scm \
--jenkins-username=admin \
--jenkins-password=yourpassword \
--scmm-username=admin \
--scmm-password=yourpassword \
--password=yourpassword \
--insecure
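
These parameters are appended to the apply command like any others, e.g. when running the image locally via Docker (a sketch; the IP and passwords are the example values from above):

docker run --rm -it -u $(id -u) -v ~/.k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
  --net=host \
  ghcr.io/cloudogu/gitops-playground --yes \
  --jenkins-url=https://192.168.56.2/jenkins --jenkins-username=admin --jenkins-password=yourpassword \
  --scmm-url=https://192.168.56.2/scm --scmm-username=admin --scmm-password=yourpassword \
  --password=yourpassword --insecure
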
Deploy with productive Cloudogu Ecosystem and GCR

Using Google Container Registry (GCR) fits well with our cluster creation example via Terraform on Google Kubernetes Engine (GKE), see our docs.

Note that you can get a free CES demo instance set up with a Kubernetes Cluster as GitOps Playground here.

# Note:
# In this case --password only sets the Argo CD admin password (Jenkins and SCMM are external) 
--jenkins-url=https://your-ecosystem.cloudogu.net/jenkins \
--scmm-url=https://your-ecosystem.cloudogu.net/scm \
--jenkins-username=admin \
--jenkins-password=yourpassword \
--scmm-username=admin \
--scmm-password=yourpassword \
--password=yourpassword \
--registry-url=eu.gcr.io \
--registry-path=yourproject \
--registry-username=_json_key \
--registry-password="$( cat account.json | sed 's/"/\\"/g' )" 
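
To verify the registry credentials before applying the playground, the same JSON key can be tested with a plain docker login (a sketch; assumes account.json is a GCP service account key with access to the registry):

docker login eu.gcr.io -u _json_key --password-stdin < account.json
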
Override default images used in the gitops-build-lib

Images used by the gitops-build-lib are set in the gitopsConfig in each Jenkinsfile of an application like this:

def gitopsConfig = [
    ...
    buildImages          : [
            helm: 'ghcr.io/cloudogu/helm:3.5.4-1',
            kubectl: 'lachlanevenson/k8s-kubectl:v1.19.3',
            kubeval: 'ghcr.io/cloudogu/helm:3.5.4-1',
            helmKubeval: 'ghcr.io/cloudogu/helm:3.5.4-1',
            yamllint: 'cytopia/yamllint:1.25-0.7'
    ],...

To override these images in all the applications, you can use the following parameters:

  • --kubectl-image someRegistry/someImage:1.0.0
  • --helm-image someRegistry/someImage:1.0.0
  • --kubeval-image someRegistry/someImage:1.0.0
  • --helmkubeval-image someRegistry/someImage:1.0.0
  • --yamllint-image someRegistry/someImage:1.0.0
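
For example, to pull all build images from a private mirror, append the flags to the apply command (a sketch; the registry is a placeholder, the tags are the defaults shown above):

docker run --rm -it -u $(id -u) -v ~/.k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
  --net=host \
  ghcr.io/cloudogu/gitops-playground --yes \
  --kubectl-image my-registry.example.com/k8s-kubectl:v1.19.3 \
  --helm-image my-registry.example.com/helm:3.5.4-1 \
  --kubeval-image my-registry.example.com/helm:3.5.4-1 \
  --helmkubeval-image my-registry.example.com/helm:3.5.4-1 \
  --yamllint-image my-registry.example.com/yamllint:1.25-0.7
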
Argo CD-Notifications

If you are using a remote cluster, you can set the --argocd-url parameter so that argocd-notifications messages contain a link to the corresponding application.

Metrics

Set the parameter --metrics to enable deployment of monitoring and alerting tools like Prometheus, Grafana and MailHog.

See Monitoring tools for details.
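
On a remote cluster, both of these options would typically be passed together (a sketch via kubectl run as above, assuming the ServiceAccount from the kubectl section exists; the Argo CD URL is a placeholder for your external address):

kubectl run gitops-playground -i --tty --restart=Never \
  --overrides='{ "spec": { "serviceAccount": "gitops-playground-job-executer" } }' \
  --image ghcr.io/cloudogu/gitops-playground \
  -- --yes --remote --metrics --argocd-url=http://<external-argocd-address>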

Remove playground

For k3d, you can just run k3d cluster delete gitops-playground. This deletes the whole cluster. If you only want to remove the playground from the cluster, use the script ./scripts/destroy.sh.

On remote clusters there is a script inside this repo:

bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/destroy.sh) 

Applications

As described above, the GitOps playground comes with a number of applications. Some of them can be accessed via the web.

  • Jenkins
  • SCM-Manager
  • Argo CD
  • Demo applications for each GitOps operator, each with staging and production instance.

We distilled the logic used in the example application pipelines into a reusable library for Jenkins: cloudogu/gitops-build-lib.

The URLs of the applications depend on the environment the playground is deployed to. The following lists all applications and how to find out their respective URLs for a GitOps playground deployed to a local or remote cluster.

For remote clusters you need the external IP; there is no need to specify a port (everything runs on port 80). Basically, you can get the IP address as follows:

kubectl -n "${namespace}" get svc "${serviceName}" \
  --template="{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}"

There is also a convenience script scripts/get-remote-url. The script waits if the external IP is not present yet. You can use it like so:

bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/get-remote-url) \
  jenkins default

You can open the application in the browser right away, for example:

xdg-open $(bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/get-remote-url) \
   jenkins default)

Credentials

If deployed within the cluster, Jenkins, SCM-Manager, Argo CD and others can be accessed via: admin/admin

Note that you can change the password (and should, for a remote cluster!) with the --password argument; see the example below. There is also a --username parameter, which is ignored for Argo CD. That is, for now Argo CD's username is always admin.
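
A sketch of applying the playground with custom credentials (values are placeholders):

docker run --rm -it -u $(id -u) -v ~/.k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
  --net=host \
  ghcr.io/cloudogu/gitops-playground --yes \
  --username=myuser --password=mysecretpassword # username is ignored for Argo CD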

Jenkins

Jenkins is available at the URLs printed during the installation (for remote clusters, use scripts/get-remote-url jenkins default).

You can enable browser notifications about build results via a button in the lower right corner of the Jenkins web UI.

Note that this only works when using localhost or https://.

Enable Jenkins Notifications

Example of a Jenkins browser notification

External Jenkins

You can configure an external Jenkins server via the following parameters when applying the playground. See Additional parameters for examples.

  • --jenkins-url,
  • --jenkins-username,
  • --jenkins-password

Note that the demo application pipelines will only run on a Jenkins that uses agents providing a Docker host. That is, Jenkins must be able to run e.g. docker ps successfully on the agent.

The user has to have the following privileges:

  • install plugins
  • set credentials
  • create jobs
  • restart Jenkins

SCM-Manager

SCM-Manager is available at the URLs printed during the installation.

External SCM-Manager

You can configure an external SCM-Manager via the following parameters when applying the playground. See Additional parameters for examples.

  • --scmm-url,
  • --scmm-username,
  • --scmm-password

The user on the SCM has to have privileges to:

  • add / edit users
  • add / edit permissions
  • add / edit repositories
  • add / edit proxy
  • install plugins

Monitoring tools

Set the --metrics parameter to deploy the kube-prometheus-stack via its Helm chart, including Argo CD dashboards.

This leads to the following tools being exposed:

Grafana can be used to query and visualize metrics via Prometheus. Prometheus itself is not exposed by default.

In addition, argocd-notifications is set up. Applications deployed with Argo CD will now alert via email to MailHog when the sync status is failed, for example.

Note that this only works with Argo CD so far.

Argo CD UI

Argo CD's web UI is available at the URLs printed during the installation.

Demo applications

Each GitOps operator comes with a couple of demo applications that allow for experimenting with different GitOps features.

All applications are deployed via separate application and GitOps repos:

  • Separation of app repo and GitOps repo
  • Infrastructure as Code is maintained in the app repo,
  • The CI server writes to the GitOps repo and creates pull requests.

The applications implement a simple staging mechanism:

  • After a successful Jenkins build, the staging application will be deployed into the cluster by the GitOps operator.
  • Deployment of production applications can be triggered by accepting pull requests.

Note that we are working on moving the GitOps-related logic into a gitops-build-lib for Jenkins. See the README there for more options like

  • staging,
  • resource creation,
  • validation (fail early / shift left).

Please note that it might take about a minute after the pull request has been accepted for the GitOps operator to start deploying. Alternatively, you can trigger the deployment via the respective GitOps operator's CLI (flux) or UI (Argo CD), as sketched below.
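
A sketch of triggering a sync manually instead of waiting (application and resource names are hypothetical; use the names shown in your playground):

# Argo CD: sync an application via its CLI
argocd app sync spring-petclinic-plain-staging
# Flux v2: force reconciliation of a Git source
flux reconcile source git gitops -n fluxv2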

Flux V1

PetClinic with plain k8s resources

Jenkinsfile for plain k8s deployment

  • Staging:
    • local: localhost:30001
    • remote: scripts/get-remote-url spring-petclinic-plain fluxv1-staging
  • Production:
    • local: localhost:30002
    • remote: scripts/get-remote-url spring-petclinic-plain fluxv1-production
  • QA (example for a 3rd stage)
    • local: localhost:30003
    • remote: scripts/get-remote-url spring-petclinic-plain fluxv1-qa
PetClinic with helm

Jenkinsfile for helm deployment

  • Staging
    • local: localhost:30004
    • remote: scripts/get-remote-url spring-petclinic-helm-springboot fluxv1-staging
  • Production
    • local: localhost:30005
    • remote: scripts/get-remote-url spring-petclinic-helm-springboot fluxv1-production
3rd Party app (NGINX) with helm

Jenkinsfile

  • Staging
    • local: localhost:30006
    • remote: scripts/get-remote-url nginx fluxv1-staging
  • Production
    • local: localhost:30007
    • remote: scripts/get-remote-url nginx fluxv1-production

Flux V2

PetClinic with plain k8s resources

Jenkinsfile

  • Staging
    • local: localhost:30010
    • remote: scripts/get-remote-url spring-petclinic-plain fluxv2-staging
  • Production
    • local: localhost:30011
    • remote: scripts/get-remote-url spring-petclinic-plain fluxv2-production

Argo CD

PetClinic with plain k8s resources

Jenkinsfile for plain deployment

  • Staging
    • local: localhost:30020
    • remote: scripts/get-remote-url spring-petclinic-plain argocd-staging
  • Production
    • local: localhost:30021
    • remote: scripts/get-remote-url spring-petclinic-plain argocd-production
PetClinic with helm

Jenkinsfile for helm deployment

  • Staging
    • local: localhost:30022
    • remote: scripts/get-remote-url spring-petclinic-helm argocd-staging
  • Production
    • local: localhost:30023
    • remote: scripts/get-remote-url spring-petclinic-helm argocd-production
3rd Party app (NGINX) with helm, templated in Jenkins

Jenkinsfile

  • Staging
    • local: localhost:30024
    • remote: scripts/get-remote-url nginx argocd-staging
  • Production
    • local: localhost:30025
    • remote: scripts/get-remote-url nginx argocd-production
3rd Party app (NGINX) with helm, using Helm dependency mechanism
  • local: localhost:30026
  • remote: scripts/get-remote-url nginx argocd-staging
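
For a quick local smoke test, the instances listed above can be probed via their node ports (assuming the corresponding operator was deployed), for example:

curl -s -o /dev/null -w '%{http_code}\n' http://localhost:30001 # Flux v1 PetClinic staging
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:30020 # Argo CD PetClinic staging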

Development

See docs/developers.md
