
joyrex2001 / kubedock

License: MIT
Kubedock is a minimal implementation of the Docker API that orchestrates containers on a Kubernetes cluster rather than running them locally.

Programming Languages

Go (31211 projects; #10 most used programming language)

Projects that are alternatives to or similar to kubedock

k8s-buildkite-plugin
Run any buildkite build step as a Kubernetes Job
Stars: ✭ 37 (-53.16%)
Mutual labels:  ci, k8s
Android-CICD
This repo demonstrates how to work on CI/CD for Mobile Apps 📱 using Github Actions 💊 + Firebase Distribution 🎉
Stars: ✭ 37 (-53.16%)
Mutual labels:  ci, cicd
pipelines-as-code
Pipelines as Code
Stars: ✭ 37 (-53.16%)
Mutual labels:  ci, tekton
github-task-manager
receive github hook, notify agent, receive task results, notify github
Stars: ✭ 13 (-83.54%)
Mutual labels:  ci, k8s
Octopod
πŸ™πŸ› οΈ Open-source self-hosted solution for managing multiple deployments in a Kubernetes cluster with a user-friendly web interface.
Stars: ✭ 47 (-40.51%)
Mutual labels:  ci, k8s
docker-pega-web-ready
Docker project for generating a tomcat docker image for Pega
Stars: ✭ 46 (-41.77%)
Mutual labels:  ci, cicd
awesome
A curated list of delightful developers resources.
Stars: ✭ 13 (-83.54%)
Mutual labels:  ci, k8s
Microk8s
MicroK8s is a small, fast, single-package Kubernetes for developers, IoT and edge.
Stars: ✭ 6,017 (+7516.46%)
Mutual labels:  k8s, cicd
Semaphore
Modern UI for Ansible
Stars: ✭ 4,588 (+5707.59%)
Mutual labels:  ci, cicd
Cml
♾️ CML - Continuous Machine Learning | CI/CD for ML
Stars: ✭ 2,843 (+3498.73%)
Mutual labels:  ci, cicd
devops-book
Operations and development (DevOps)
Stars: ✭ 29 (-63.29%)
Mutual labels:  ci, cicd
Javascript Testing Best Practices
📗🌐 🚒 Comprehensive and exhaustive JavaScript & Node.js testing best practices (August 2021)
Stars: ✭ 13,976 (+17591.14%)
Mutual labels:  ci, cicd
kahoy
Simple Kubernetes raw manifests deployment tool
Stars: ✭ 33 (-58.23%)
Mutual labels:  ci, k8s
github-status-updater
Command line utility for updating GitHub commit statuses and enabling required status checks for pull requests
Stars: ✭ 83 (+5.06%)
Mutual labels:  ci, cicd
ofcourse
A Concourse resource generator
Stars: ✭ 41 (-48.1%)
Mutual labels:  ci, cicd
jt tools
Ruby on Rails Continuous Deployment Ecosystem to maintain Healthy Stable Development
Stars: ✭ 13 (-83.54%)
Mutual labels:  ci, cicd
Jx
Jenkins X provides automated CI+CD for Kubernetes with Preview Environments on Pull Requests using Cloud Native pipelines from Tekton
Stars: ✭ 4,041 (+5015.19%)
Mutual labels:  cicd, tekton
erda-actions
No description or website provided.
Stars: ✭ 17 (-78.48%)
Mutual labels:  k8s, cicd
tichi
TiChi ☯️ contains the tidb community collaboration automation basic framework and tool set.
Stars: ✭ 36 (-54.43%)
Mutual labels:  ci, k8s
K8s Config Projector
Create Kubernetes ConfigMaps from configuration files
Stars: ✭ 61 (-22.78%)
Mutual labels:  ci, k8s

Kubedock

Kubedock is a minimal implementation of the Docker API that orchestrates containers on a Kubernetes cluster rather than running them locally. The main driver for this project is to run tests that require Docker containers inside a container, without requiring Docker-in-Docker inside resource-heavy containers. Containers orchestrated by kubedock are considered short-lived and ephemeral, and are not intended to run production services. An example use case is running testcontainers-java enabled unit tests in a Tekton pipeline; in this use case, running kubedock as a sidecar helps orchestrate containers inside the Kubernetes cluster instead of within the task container itself, as sketched below.
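For illustration, a minimal sketch of such a Tekton Task follows. This is an assumption-laden example rather than official documentation: the image tags and step contents are hypothetical, and kubedock is assumed to be published as joyrex2001/kubedock.

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: unit-test
spec:
  sidecars:
    # kubedock runs next to the test step and serves the docker api on :2475
    - name: kubedock
      image: joyrex2001/kubedock:latest   # assumed image location and tag
      args: ["server"]
  steps:
    - name: test
      image: maven:3-openjdk-17           # hypothetical build image
      env:
        # point testcontainers at the kubedock sidecar instead of a local docker daemon
        - name: DOCKER_HOST
          value: tcp://localhost:2475
        - name: TESTCONTAINERS_RYUK_DISABLED
          value: "true"
        - name: TESTCONTAINERS_CHECKS_DISABLE
          value: "true"
      script: mvn test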

Quick start

Running this locally with a testcontainers-enabled unit test requires running kubedock with port-forwarding enabled (kubedock server --port-forward). Then start the unit tests in another terminal with the environment variables below set, for example:

export TESTCONTAINERS_RYUK_DISABLED=true
export TESTCONTAINERS_CHECKS_DISABLE=true
export DOCKER_HOST=tcp://127.0.0.1:2475
mvn test
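
For clarity, the two-terminal flow looks like this:

# terminal 1: start kubedock with port-forwarding enabled
kubedock server --port-forward
# terminal 2: run the tests with the environment shown above
mvn test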

By default kubedock orchestrates in the namespace that has been set in the current context. This can be overridden with the -n argument (or via the NAMESPACE environment variable). The service requires permissions to create Deployments, Jobs, Services and ConfigMaps in that namespace. If namespace locking is used, the service also requires permissions to create Leases in the namespace.
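
For example, to orchestrate in a specific namespace (the namespace name here is hypothetical), either form below works:

kubedock server --port-forward -n kubedock-tests
# equivalent, via the environment:
NAMESPACE=kubedock-tests kubedock server --port-forward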

To see a complete list of available options: kubedock --help.

Implementation

When kubedock is started with kubedock server, it starts an API server on port :2475, which can be used as a drop-in replacement for the default Docker API server. Additionally, kubedock can listen on a Unix socket (docker.sock).
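
Because the API is Docker-compatible, a regular docker client can be pointed straight at it. A quick smoke test, assuming the container-list endpoint is implemented, might look like:

# list kubedock-managed 'containers' through the regular docker cli
DOCKER_HOST=tcp://127.0.0.1:2475 docker ps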

Containers

Container API calls are translated into Kubernetes Deployment (or Job) resources. When a container is started, kubedock creates a Kubernetes Service within the cluster and maps the requested ports to those of the container (note that only TCP is supported). This makes the container accessible within the cluster (e.g. from a containerized pipeline running in that same cluster). It is also possible to create port-forwards for the ports that should be exposed, with the --port-forward argument. These are, however, neither performant nor stable, and are intended for local debugging. If the ports should be exposed on localhost as well, but port-forwarding is not required, they can be made available via the built-in reverse-proxy. This can be enabled with the --reverse-proxy argument, which is mutually exclusive with --port-forward.
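
For example, to expose container ports on localhost, choose one of the two (mutually exclusive) modes:

# stable local exposure via the built-in reverse-proxy
kubedock server --reverse-proxy
# or kubernetes port-forwards, intended for local debugging only
kubedock server --port-forward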

Starting a container is a blocking call that waits until it results in a running Pod. By default it waits for a maximum of 1 minute; this is configurable with the --timeout argument. The logs API calls always return the complete log history and do not differentiate between stdout and stderr; all log output is sent as stdout. Exec calls into the containers are supported.

By default, all containers are orchestrated using Kubernetes Deployment resources. In some cases, however, it makes more sense to deploy the container as a Job instead. The resource type can be forced by adding a com.joyrex2001.kubedock.deploy-as-job label with the value true to the container that should be orchestrated as a Job. This can also be set globally with the --deploy-as-job argument, which results in all containers being deployed as Jobs. The restart policy for Jobs is fixed to OnFailure. An example of both forms follows below.
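
As a sketch, with a plain docker client the label can be set per container (the image name is hypothetical), or the behaviour can be enabled globally:

# orchestrate this one container as a kubernetes Job
docker run --label com.joyrex2001.kubedock.deploy-as-job=true example/db-migrate
# or deploy every container as a Job
kubedock server --deploy-as-job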

Volumes

Volumes are implemented by copying the source content into the container by means of an init container that runs before the actual container starts. By default the kubedock image with the same version as the running kubedock is used as the init container. However, this can be any image that has tar available, configured with the --initimage argument.
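
For example, to use a custom init image (any image with tar on board; the image reference below is hypothetical):

kubedock server --initimage registry.example.com/tools/busybox-tar:latest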

Volumes are one-way copies and ephemeral. This means that any data written into the volume is not available locally, and that mounts of devices or sockets are not supported (e.g. mounting a Docker socket). Volumes that point to a single file are converted to a ConfigMap (and are always implicitly read-only).

Copying data from a running container back to the client is supported as well, but only works if the running container has tar available. Also be aware that copying data to a container implicitly starts the container. This differs from the real Docker API, where a container can be in an unstarted state. To work around this, use a volume instead. Alternatively, kubedock can be started with --pre-archive, which converts copy statements of single files into ConfigMaps when the container has not yet been started. This implicitly makes the target file read-only and may not work in all use cases (hence it is not the default); see the sketch below.
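
A sketch of this behaviour, with hypothetical container names and paths:

# convert single-file copies into configmaps for containers that have not started yet
kubedock server --pre-archive
# this now becomes a read-only configmap mount instead of an implicit container start
docker cp ./application.properties mycontainer:/config/application.properties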

Networking

Kubedock flattens all networking, which basically means that everything runs in the same namespace. This should be sufficient for most use cases. Network aliases are supported: when a network alias is present, kubedock creates a service exposing all ports that have been exposed by the container. If no ports are configured, kubedock can fetch the ports that are exposed in the container image instead; to do this, start kubedock with the --inspector argument.
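
For example, with the inspector enabled, a container started with only a network alias still gets a working Service (the image, network and alias names are illustrative):

kubedock server --inspector
# the alias results in a service named 'redis', exposing the ports found in the image
docker run -d --network test --network-alias redis redis:6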

Images

Kubedock implements the images API by tracking which images are requested; it is not able to actually build images. If kubedock is started with --inspector, it fetches configuration information about an image by calling external container registries. This configuration includes the ports exposed by the container image itself, which improves network alias support. The registries should be configured by the client (for example with a skopeo login). By default, images are deployed with an IfNotPresent pull policy. This can be configured globally with the --pull-policy argument, and per container by adding a com.joyrex2001.kubedock.pull-policy label to the container. Possible values are 'never', 'always' and 'ifnotpresent'.
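
For example, the pull policy can be set globally or overridden per container (the image name is hypothetical):

# always pull images, for every container
kubedock server --pull-policy always
# never pull the image for this specific container
docker run --label com.joyrex2001.kubedock.pull-policy=never example/app:test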

Namespace locking

If multiple kubedock instances use the same namespace, collisions in network aliases are possible. Since networks are flattened (see Networking), every network alias results in a Service with the name of that alias. To ensure tests don't fail because of these name collisions, kubedock can lock the namespace while it's running. When this is enabled with the --lock argument, kubedock creates a Lease called kubedock-lock in the namespace, in which it tracks the current ownership.
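
For example, to serialize pipelines sharing one namespace, and to inspect the lock while a run is active (a sketch; the namespace name is hypothetical):

kubedock server --lock
# current ownership is tracked in a lease object
kubectl get lease kubedock-lock -n kubedock-tests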

Resource requests and limits

By default containers are started without any resource request configuration, which can impact the performance of the tests that run in them. Setting resource requests (and limits) allows better scheduling and can improve the overall performance of the running containers. Global requests and limits can be set with --request-cpu and --request-memory, which take regular Kubernetes resource quantities as described in the Kubernetes documentation. Limits are optional and can be appended after a comma (request,limit). If the values should be configured for a specific container, add com.joyrex2001.kubedock.request-cpu or com.joyrex2001.kubedock.request-memory labels to the container with the specific requests (and limits). The labels take precedence over the CLI configuration.
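
For example, using the request,limit syntax described above (the values and image name are illustrative):

# global default: request 500m cpu / 512Mi memory, limit 1 cpu / 1Gi memory
kubedock server --request-cpu 500m,1 --request-memory 512Mi,1Gi
# per-container override via labels (takes precedence over the cli values)
docker run --label com.joyrex2001.kubedock.request-cpu=250m \
           --label com.joyrex2001.kubedock.request-memory=256Mi example/app:test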

Resources cleanup

Kubedock dynamically creates Deployments and Services in the configured namespace. When kubedock is requested to delete a container, it removes the deployment and related services. Before exiting, kubedock also deletes all the resources (Services and Deployments) it created in the running instance, identified by the kubedock.id label.

Automatic reaping

If a test fails and didn't clean up its started containers, these resources remain in the namespace. To prevent unused deployments, configmaps and services from lingering around, kubedock automatically deletes them. Resources owned by the current process are removed when they are older than 60 minutes (the default). Resources that have the kubedock=true label but are not owned by the running process are deleted 15 minutes after the initial reap interval (with the defaults: after 75 minutes).

Forced cleaning

The reaping of resources can also be enforced at startup. When kubedock is started with the --prune-start argument, it deletes all resources that have the kubedock=true label before starting the API server. This includes resources created by other kubedock instances.
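
For example, a CI job that wants to start from a clean namespace on every run:

# delete everything labeled kubedock=true first, then start the api server
kubedock server --prune-start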

Service Account RBAC

As a reference, the role below can be used to manage the permissions of the service account that runs kubedock in a cluster. The uncommented rules are the minimal permissions; depending on the use of --deploy-as-job, --pre-archive and --lock, the additional (commented) rules are required as well.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: testcontainers
  namespace: jenkins
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["create", "get", "list", "delete"]
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["create", "get", "list", "delete"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["list"]
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["list"]
## optional permissions (depending on kubedock use)
# - apiGroups: ["batch"]
#   resources: ["jobs"]
#   verbs: ["create", "get", "list", "delete"]
# - apiGroups: [""]
#   resources: ["configmaps"]
#   verbs: ["create", "get", "list", "delete"]
# - apiGroups: ["coordination.k8s.io"]
#   resources: ["leases"]
#   verbs: ["create", "get", "list", "delete"]
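
To make the role effective, it has to be bound to the service account that runs kubedock. A minimal sketch, reusing the names from the example above:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: testcontainers
  namespace: jenkins
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: testcontainers
  namespace: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: testcontainers
subjects:
  - kind: ServiceAccount
    name: testcontainers
    namespace: jenkins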
