matiaslindgren / celery-kubernetes-example

License: MIT
Small Flask app with scalable, asynchronous backend workers deployed on Kubernetes.

Programming Languages

python
139335 projects - #7 most used programming language
javascript
184084 projects - #8 most used programming language
Dockerfile
14818 projects
HTML
75241 projects
CSS
56736 projects
Makefile
30231 projects

Projects that are alternatives to or similar to celery-kubernetes-example

Letsmapyournetwork
Lets Map Your Network enables you to visualise your physical network in the form of a graph, with zero manual errors
Stars: ✭ 305 (+286.08%)
Mutual labels:  rabbitmq, celery
Python Devops
Gathers a Python stack for DevOps; these are the basic templates I usually use for my implementations, so feel free to use and evolve them! Everything is Docker!
Stars: ✭ 61 (-22.78%)
Mutual labels:  rabbitmq, celery
Flower
Real-time monitor and web admin for Celery distributed task queue
Stars: ✭ 5,036 (+6274.68%)
Mutual labels:  rabbitmq, celery
celery-connectors
Want to handle 100,000 messages in 90 seconds? Celery and Kombu are that awesome - Multiple publisher-subscriber demos for processing json or pickled messages from Redis, RabbitMQ or AWS SQS. Includes Kombu message processors using native Producer and Consumer classes as well as ConsumerProducerMixin workers for relay publish-hook or caching
Stars: ✭ 37 (-53.16%)
Mutual labels:  rabbitmq, celery
Django Celery Docker Example
Example Docker setup for a Django app behind an Nginx proxy with Celery workers
Stars: ✭ 149 (+88.61%)
Mutual labels:  rabbitmq, celery
pandora
A small Pandora's box to prototype your app with a ready-to-use backend. This is just my compilation of different solutions occasionally applied in hackathons and challenges
Stars: ✭ 26 (-67.09%)
Mutual labels:  rabbitmq, celery
Django Celery Tutorial
Django Celery Tutorial
Stars: ✭ 48 (-39.24%)
Mutual labels:  rabbitmq, celery
python-asynchronous-tasks
😎Asynchronous tasks in Python with Celery + RabbitMQ + Redis
Stars: ✭ 37 (-53.16%)
Mutual labels:  rabbitmq, celery
Quiz
Example real time quiz application with .NET Core, React, DDD, Event Sourcing, Docker and built-in infrastructure for CI/CD with k8s, jenkins and helm
Stars: ✭ 100 (+26.58%)
Mutual labels:  rabbitmq, minikube
Scaleable Crawler With Docker Cluster
A scalable and efficient crawler with a Docker cluster; crawls a million pages in 2 hours on a single machine
Stars: ✭ 96 (+21.52%)
Mutual labels:  rabbitmq, celery
aioamqp consumer
consumer/producer/rpc library built over aioamqp
Stars: ✭ 36 (-54.43%)
Mutual labels:  rabbitmq, producer-consumer
Kombu
Kombu is a messaging library for Python.
Stars: ✭ 2,263 (+2764.56%)
Mutual labels:  rabbitmq, celery
Online-Judge
Online Judge for hosting coding competitions inside NIT Durgapur made by GNU/Linux Users' Group!
Stars: ✭ 19 (-75.95%)
Mutual labels:  rabbitmq, celery
celery-priority-tasking
This is a prototype to schedule jobs in the backend based on priority, using RabbitMQ and Celery.
Stars: ✭ 28 (-64.56%)
Mutual labels:  rabbitmq, celery
leek
Celery Tasks Monitoring Tool
Stars: ✭ 77 (-2.53%)
Mutual labels:  rabbitmq, celery
Node Celery
Celery client for Node.js
Stars: ✭ 648 (+720.25%)
Mutual labels:  rabbitmq, celery
Docker Cluster With Celery And Rabbitmq
Build Docker clusters with Celery and RabbitMQ in 10 minutes
Stars: ✭ 72 (-8.86%)
Mutual labels:  rabbitmq, celery
Fastapi Celery
Minimal example utilizing fastapi and celery with RabbitMQ for task queue, Redis for celery backend and flower for monitoring the celery tasks.
Stars: ✭ 154 (+94.94%)
Mutual labels:  rabbitmq, celery
Rusty Celery
🦀 Rust implementation of Celery for producing and consuming background tasks
Stars: ✭ 243 (+207.59%)
Mutual labels:  rabbitmq, celery
fastapi
A base project template built on FastAPI, integrating a Celery-Redis distributed task queue, a JWT user system, ElasticSearch, and the encode ORM; feel free to modify the template to suit your own needs
Stars: ✭ 75 (-5.06%)
Mutual labels:  celery

Celery on Kubernetes

Toy example of a Kubernetes application with Celery workers. The system consists of an HTTP service which computes long-running tasks asynchronously, using two different task queues depending on input size. This example is intended for local experimentation with Minikube and is probably not suitable for direct production use.

The web service is a simple Flask application, deployed in its own pod, along with a single Celery worker for small tasks (two containers in one pod). This system uses RabbitMQ as the Celery message broker and it is deployed as a service in another pod. In addition, a third deployment is created that runs independent, stateless Celery workers for consuming large tasks. The third deployment can be easily scaled.
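The split into a small-task and a large-task queue is the kind of thing Celery expresses with routing configuration. A minimal sketch, where the task and queue names are assumptions for illustration (the repository's actual identifiers may differ):

```python
# Hypothetical Celery routing table: each task type goes to its own queue,
# so the in-pod worker consumes small tasks while the separately scaled
# consumer-large workers consume large ones. Task and queue names are
# assumptions, not the repository's actual identifiers.
task_routes = {
    "myproject.tasks.lcs_small": {"queue": "small_tasks"},
    "myproject.tasks.lcs_large": {"queue": "large_tasks"},
}
```

Each worker deployment would then subscribe only to its own queue, e.g. by starting the worker with --queues small_tasks or --queues large_tasks.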

The web service is represented by myproject and long running tasks are simulated with a poorly implemented longest common substring algorithm.
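To make the simulation concrete, a deliberately brute-force longest common substring search might look like the sketch below. This is an illustrative stand-in, not the repository's actual code:

```python
def longest_common_substring(a: str, b: str) -> str:
    # Deliberately naive search: enumerate every substring of `a` and test
    # membership in `b`. Roughly O(len(a)^2 * len(b)) work, which is what
    # makes large inputs take long enough to keep a CPU busy.
    best = ""
    for i in range(len(a)):
        for j in range(i + 1, len(a) + 1):
            candidate = a[i:j]
            if len(candidate) > len(best) and candidate in b:
                best = candidate
    return best
```

A dynamic-programming version would be far faster, which is exactly why this naive form works as a long-running-task simulator.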

Overview sketch

(Architecture sketch, image not shown: illustrates how the components of this application relate to and interact with each other.)

Requirements

All application dependencies will be installed into Docker containers.

Docker Desktop (optional)

You can probably run all examples without minikube if you are using Kubernetes with Docker Desktop.

Running

Assuming dockerd is running and minikube is installed, let's deploy the system inside a Minikube cluster.

Initialization

Create a Minikube cluster that uses the local dockerd environment (skip this step if you are running Kubernetes from Docker Desktop):

minikube start
eval $(minikube -p minikube docker-env)

Build all Docker images:

docker build --tag myproject:1 --file myproject/Dockerfile .
docker build --tag consumer-small:1 --file consumer-small/Dockerfile .
docker build --tag consumer-large:1 --file consumer-large/Dockerfile .

Check that the images were created successfully:

docker images

Output:

REPOSITORY           TAG       IMAGE ID       CREATED         SIZE
consumer-large       1         ddfec2f889ad   3 minutes ago   67.2MB
consumer-small       1         99e589f61f63   3 minutes ago   72.4MB
myproject            1         bbed507879da   3 minutes ago   72.4MB

Deploying applications

Deploy the RabbitMQ message broker as a service inside the cluster:

kubectl create --filename message_queue/rabbitmq-deployment.yaml
kubectl create --filename message_queue/rabbitmq-service.yaml

Deploy the myproject Flask web service and its consumer-small Celery worker:

kubectl create --filename myproject/deployment.yaml

Then deploy the consumer-large Celery worker for large tasks in its own pod:

kubectl create --filename consumer-large/deployment.yaml

Check that we have 3 pods running:

kubectl get pods

Output:

NAME                              READY   STATUS    RESTARTS   AGE
consumer-large-7f44489db9-9btcf   1/1     Running   0          3s
myproject-648fbdff85-kw78t        2/2     Running   0          7s
rabbitmq-68447cbdf5-ktj4v         1/1     Running   0          14s

Note that you might have different names for the pods. I'll be using the above pod names but you should use the ones printed by kubectl get pods.

Inspecting application logs

Check that all applications are running and the Celery workers can connect to the broker.

Flask web server:

kubectl logs myproject-648fbdff85-kw78t --container myproject

Celery worker for small tasks:

kubectl logs myproject-648fbdff85-kw78t --container consumer-small

Celery worker for large tasks:

kubectl logs consumer-large-7f44489db9-9btcf

RabbitMQ message broker:

kubectl logs rabbitmq-68447cbdf5-ktj4v

I prefer to open a new terminal or tmux pane for each application and then use kubectl logs --follow to monitor all logs interactively.

Interacting with the web app

Now everything is running and we can expose the Flask web app port to our local machine:

kubectl port-forward deployment/myproject 5000:5000

Then open http://localhost:5000/ in a browser and you should see a simple web UI.

Try copy-pasting some strings and compute the longest common substrings for them. E.g. first try short strings and check that the tasks show up in the Celery logs of pod consumer-small. Then try long strings (over 1000 chars) and check the Celery logs of pod consumer-large. The consumer-large pods run Celery workers with --concurrency 2, so you should be seeing two CPUs being utilized when submitting two or more large tasks at the same time.
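The size-based routing observed above could be expressed on the web-service side roughly as follows; the 1000-character threshold and the queue names are assumptions based on the behaviour described, not values taken from the repository:

```python
# Assumed cutoff, matching the "over 1000 chars" behaviour observed above.
LARGE_TASK_THRESHOLD = 1000

def pick_queue(text_a: str, text_b: str) -> str:
    """Route a task by input size: small inputs stay on the in-pod worker,
    large inputs go to the scalable consumer-large workers.
    Queue names here are hypothetical."""
    if max(len(text_a), len(text_b)) > LARGE_TASK_THRESHOLD:
        return "large_tasks"
    return "small_tasks"
```

The web app would pass the chosen queue name when sending the task, so scaling the large-task workers never affects latency for small requests.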

Scaling up

The consumer-large deployment creates stateless Celery worker pods, which can be scaled easily to e.g. 4 pods with:

kubectl scale deployment/consumer-large --replicas=4

You should now have 6 pods running:

kubectl get pods

If you submit several large tasks now, you should see much higher CPU usage.

Other useful things

Get a shell to the container that is running the Flask app:

kubectl exec --stdin --tty myproject-648fbdff85-kw78t --container myproject -- /bin/bash

Then, for example, delete all rows from the SQLite database:

python3 -c 'import sqlite3
conn = sqlite3.connect("/data/myproject.sqlite3.db")
conn.execute("delete from tasks")
conn.commit()'

Refresh the task list and all results should now be empty.

Get a shell to the large tasks Celery worker container:

kubectl exec --stdin --tty consumer-large-7f44489db9-9btcf -- /bin/bash

Inspect the Celery worker state:

celery inspect active_queues --broker=$CELERY_BROKER_URL
celery inspect report --broker=$CELERY_BROKER_URL

Cleanup

Terminate all pods by removing the deployments:

kubectl delete deploy myproject consumer-large rabbitmq
kubectl delete service rabbitmq-service