ClusterPlex

GitHub license GitHub release ci

What is it?

ClusterPlex is an extended version of Plex that supports distributed Workers across a cluster to handle transcoding requests.

Plex organizes video, music and photos from personal media libraries and streams them to smart TVs, streaming boxes and mobile devices.

plex

Components

To use multiple nodes for transcoding, ClusterPlex is made up of three components:

  • Plex Media Server

    There are two alternatives here:
    1. RECOMMENDED: Running the Official LinuxServer Plex image (ghcr.io/linuxserver/plex:latest) and applying the ClusterPlex dockermod (ghcr.io/pabloromeo/clusterplex_dockermod:latest)
    2. Running the ClusterPlex PMS docker image (ghcr.io/pabloromeo/clusterplex_pms:latest)
  • Transcoding Orchestrator

    Running a container using ghcr.io/pabloromeo/clusterplex_orchestrator:latest
  • Transcoding Workers

    Just as with PMS, two alternatives:
    1. RECOMMENDED: Official image (ghcr.io/linuxserver/plex:latest) with the Worker dockermod (ghcr.io/pabloromeo/clusterplex_worker_dockermod:latest)
    2. Custom Docker image: ghcr.io/pabloromeo/clusterplex_worker:latest

How does it work?

  • In the customized PMS server, Plex's own transcoder is renamed and a shim is put in its place; the shim calls a small Node.js app that communicates with the Orchestrator container over websockets.

  • The Orchestrator is a Node.js application that receives all transcoding requests from PMS and forwards them to one of the active Workers over websockets.

  • Workers receive requests from the Orchestrator, kick off the transcoding, and report progress back to PMS. Workers can come online or go offline, and the Orchestrator manages their registration and availability. Workers can run as replicated services managed by the cluster.
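The registration-and-dispatch flow above can be sketched as a minimal in-memory model. Note this is an illustration with hypothetical names, not the actual ClusterPlex source (which communicates over websockets rather than direct calls):

```javascript
// Sketch of the Orchestrator's bookkeeping: Workers register when they come
// online, deregister when they go offline, and each incoming transcoding
// job is forwarded to an available Worker.
class OrchestratorRegistry {
  constructor() {
    this.workers = new Map(); // workerId -> { name, tasks }
  }

  register(workerId, name) {
    this.workers.set(workerId, { name, tasks: 0 });
  }

  deregister(workerId) {
    this.workers.delete(workerId);
  }

  // Forward a job to the Worker with the fewest active tasks (a simplified
  // stand-in for the configurable selection strategies described below).
  dispatch(job) {
    const entries = [...this.workers.entries()];
    if (entries.length === 0) return null; // no Workers online
    entries.sort((a, b) => a[1].tasks - b[1].tasks);
    const [workerId, state] = entries[0];
    state.tasks += 1;
    return { workerId, job };
  }

  // Workers report completion so their load count stays accurate.
  complete(workerId) {
    const state = this.workers.get(workerId);
    if (state) state.tasks = Math.max(0, state.tasks - 1);
  }
}

const registry = new OrchestratorRegistry();
registry.register('w1', 'NODE1');
registry.register('w2', 'NODE2');
const assigned = registry.dispatch({ args: ['-i', 'movie.mkv'] });
console.log(assigned.workerId); // the least-loaded registered Worker
```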

Shared content

Plex Application Data

WARNING: PMS's Application Data mount (/config) doesn't need to be shared with the Workers, so you can use your preferred method for persistent storage. However, beware that Plex doesn't play well with network storage for this mount, especially regarding symlinks and file locks (used by its SQLite database).

For this reason, CIFS/SMB should be avoided for this mount. NFS has been shown to work, but it is very sensitive to how the server and the mount are fine-tuned, and even then it may not work reliably.

The recommendation is to use GlusterFS or Ceph.

Media

For Workers to function properly, all media content should be shared using identical paths between PMS and the Workers, typically via network shared storage such as NFS, SMB, Ceph, or GlusterFS.

Temp & Transcoding location

The same applies to the /tmp directory in both PMS and the Workers, and the transcoding path configured in Plex should be a subdirectory of /tmp, such as:

transcode-path

Codecs

Workers require a path to store downloaded codecs for the particular architecture of the Worker. Codecs are downloaded as needed, whenever a transcoding request is received.

These can be shared across Workers, if desired, in order to avoid downloading the same codec for each Worker, but it isn't mandatory.

The path within the container is /codecs, which you can mount to a volume to persist them across container recreations. Subdirectories for each Plex version and architecture are created within it.

Network settings in PMS

In Plex's Network Configuration, add Docker's VLAN (or the range that will be used by Workers) to the "List of IP addresses and networks that are allowed without auth".

For example: network-ips
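As a concrete illustration, the setting's value is a comma-separated list of networks. The ranges below are assumptions in IP/netmask form; use whatever subnets your Docker networks and Workers actually occupy:

```text
10.0.0.0/255.0.0.0,172.16.0.0/255.240.0.0
```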

Example Docker Swarm Deployment

docker-swarm

Docker Swarm stack example using Dockermods:

---
version: '3.4'

services:
  plex:
    image: ghcr.io/linuxserver/plex:latest
    deploy:
      mode: replicated
      replicas: 1
    environment:
      DOCKER_MODS: "ghcr.io/pabloromeo/clusterplex_dockermod:latest"
      VERSION: docker
      PUID: 1000
      PGID: 1000
      TZ: Europe/London
      ORCHESTRATOR_URL: http://plex-orchestrator:3500
      PMS_IP: 192.168.2.1
      TRANSCODE_OPERATING_MODE: both #(local|remote|both)
      TRANSCODER_VERBOSE: "1"   # 1=verbose, 0=silent
    healthcheck:
      test: curl -fsS http://localhost:32400/identity > /dev/null || exit 1
      interval: 15s
      timeout: 15s
      retries: 5
      start_period: 30s
    volumes:
      - /path/to/config:/config
      - /path/to/backups:/backups
      - /path/to/tv:/data/tv
      - /path/to/movies:/data/movies
      - /path/to/tmp:/tmp
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 32469:32469
      - 32400:32400
      - 3005:3005
      - 8324:8324
      - 1900:1900/udp
      - 32410:32410/udp
      - 32412:32412/udp
      - 32413:32413/udp
      - 32414:32414/udp

  plex-orchestrator:
    image: ghcr.io/pabloromeo/clusterplex_orchestrator:latest
    deploy:
      mode: replicated
      replicas: 1
      update_config:
        order: start-first
    healthcheck:
      test: curl -fsS http://localhost:3500/health > /dev/null || exit 1
      interval: 15s
      timeout: 15s
      retries: 5
      start_period: 30s
    environment:
      TZ: Europe/London
      STREAM_SPLITTING: "OFF" # ON | OFF (default)
      LISTENING_PORT: 3500
      WORKER_SELECTION_STRATEGY: "LOAD_RANK" # RR | LOAD_CPU | LOAD_TASKS | LOAD_RANK (default)
    volumes:
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 3500:3500

  plex-worker:
    image: ghcr.io/linuxserver/plex:latest
    hostname: "plex-worker-{{.Node.Hostname}}"
    deploy:
      mode: global
      update_config:
        order: start-first
    environment:
      DOCKER_MODS: "ghcr.io/pabloromeo/clusterplex_worker_dockermod:latest"
      VERSION: docker
      PUID: 1000
      PGID: 1000
      TZ: Europe/London
      LISTENING_PORT: 3501      # used by the healthcheck
      STAT_CPU_INTERVAL: 2000   # interval for reporting worker load metrics
      ORCHESTRATOR_URL: http://plex-orchestrator:3500
    healthcheck:
      test: curl -fsS http://localhost:3501/health > /dev/null || exit 1
      interval: 15s
      timeout: 15s
      retries: 5
      start_period: 240s
    volumes:
      - /path/to/codecs:/codecs # (optional, can be used to share codecs)
      - /path/to/tv:/data/tv
      - /path/to/movies:/data/movies
      - /path/to/tmp:/tmp
      - /etc/localtime:/etc/localtime:ro

Docker Swarm stack example using ClusterPlex docker images:

---
version: '3.4'

services:
  plex:
    image: ghcr.io/pabloromeo/clusterplex_pms:latest
    deploy:
      mode: replicated
      replicas: 1
    environment:
      VERSION: docker
      PUID: 1000
      PGID: 1000
      TZ: Europe/London
      ORCHESTRATOR_URL: http://plex-orchestrator:3500
      PMS_IP: 192.168.2.1
      TRANSCODE_OPERATING_MODE: both #(local|remote|both)
      TRANSCODER_VERBOSE: "1"   # 1=verbose, 0=silent
    healthcheck:
      test: curl -fsS http://localhost:32400/identity > /dev/null || exit 1
      interval: 15s
      timeout: 15s
      retries: 5
      start_period: 30s
    volumes:
      - /path/to/config:/config
      - /path/to/backups:/backups
      - /path/to/tv:/data/tv
      - /path/to/movies:/data/movies
      - /path/to/tmp:/tmp
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 32469:32469
      - 32400:32400
      - 3005:3005
      - 8324:8324
      - 1900:1900/udp
      - 32410:32410/udp
      - 32412:32412/udp
      - 32413:32413/udp
      - 32414:32414/udp

  plex-orchestrator:
    image: ghcr.io/pabloromeo/clusterplex_orchestrator:latest
    deploy:
      mode: replicated
      replicas: 1
      update_config:
        order: start-first
    healthcheck:
      test: curl -fsS http://localhost:3500/health > /dev/null || exit 1
      interval: 15s
      timeout: 15s
      retries: 5
      start_period: 30s
    environment:
      TZ: Europe/London
      STREAM_SPLITTING: "OFF" # ON | OFF (default)
      LISTENING_PORT: 3500
      WORKER_SELECTION_STRATEGY: "LOAD_RANK" # RR | LOAD_CPU | LOAD_TASKS | LOAD_RANK (default)
    volumes:
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 3500:3500

  plex-worker:
    image: ghcr.io/pabloromeo/clusterplex_worker:latest
    hostname: "plex-worker-{{.Node.Hostname}}"
    deploy:
      mode: global
      update_config:
        order: start-first
    environment:
      VERSION: docker
      PUID: 1000
      PGID: 1000
      TZ: Europe/London
      LISTENING_PORT: 3501      # used by the healthcheck
      STAT_CPU_INTERVAL: 2000   # interval for reporting worker load metrics
      ORCHESTRATOR_URL: http://plex-orchestrator:3500
    healthcheck:
      test: curl -fsS http://localhost:3501/health > /dev/null || exit 1
      interval: 15s
      timeout: 15s
      retries: 5
      start_period: 240s
    volumes:
      - /path/to/codecs:/codecs # (optional, can be used to share codecs)
      - /path/to/tv:/data/tv
      - /path/to/movies:/data/movies
      - /path/to/tmp:/tmp
      - /etc/localtime:/etc/localtime:ro

Parameters

Plex

The image extends the LinuxServer Plex image; see its documentation for information on all of its parameters.

| Parameter | Function |
| --- | --- |
| ORCHESTRATOR_URL | The URL where the orchestrator service can be reached (e.g. http://plex-orchestrator:3500) |
| PMS_IP | IP pointing at the Plex instance (can be the cluster IP, a virtual IP, or the actual service name in Docker Swarm) |
| TRANSCODE_EAE_LOCALLY | Force media which requires EasyAudioEncoder to transcode locally |
| TRANSCODE_OPERATING_MODE | "local" => only local transcoding (no Workers); "remote" => only remote Worker transcoding; "both" (default) => remote first, local if it fails |
| TRANSCODER_VERBOSE | "0" (default) => info-level logging; "1" => debug logging |
| FORCE_HTTPS | "0" (default) uses Plex's default HTTP callback; "1" forces HTTPS. IMPORTANT: you must set this to "1" if you have set "Secure Connections" in Plex to "Required". |

Orchestrator

| Parameter | Function |
| --- | --- |
| TZ | Timezone |
| STREAM_SPLITTING | Experimental feature; only "OFF" is allowed |
| LISTENING_PORT | Port on which the orchestrator should listen |
| WORKER_SELECTION_STRATEGY | How a Worker is chosen: "LOAD_CPU" => lowest CPU usage; "LOAD_TASKS" => fewest current tasks; "RR" => round-robin; "LOAD_RANK" (default) => CPU benchmark * free CPU |
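A sketch of how these strategies might rank Workers. The field names and the exact LOAD_RANK formula are illustrative assumptions based on the descriptions above, not ClusterPlex's actual internals:

```javascript
// Each Worker periodically reports its CPU usage (percent), current task
// count, and a one-time CPU benchmark score.
function selectWorker(strategy, workers, rrCounter = 0) {
  if (workers.length === 0) return null;
  switch (strategy) {
    case 'RR': // round-robin over registration order
      return workers[rrCounter % workers.length];
    case 'LOAD_CPU': // lowest reported CPU usage wins
      return workers.reduce((a, b) => (a.cpuUsage <= b.cpuUsage ? a : b));
    case 'LOAD_TASKS': // fewest active transcoding tasks wins
      return workers.reduce((a, b) => (a.tasks <= b.tasks ? a : b));
    case 'LOAD_RANK': // default: benchmark score * free-CPU fraction
    default:
      return workers.reduce((a, b) =>
        a.benchmark * (1 - a.cpuUsage / 100) >=
        b.benchmark * (1 - b.cpuUsage / 100) ? a : b);
  }
}

const workers = [
  { name: 'NODE1', cpuUsage: 28.13, tasks: 1, benchmark: 100 },
  { name: 'NODE2', cpuUsage: 11.97, tasks: 0, benchmark: 60 },
];
// LOAD_RANK favors NODE1: 100 * 0.7187 > 60 * 0.8803, even though NODE2
// is less busy, because NODE1's benchmark score is much higher.
console.log(selectWorker('LOAD_RANK', workers).name); // NODE1
```

This illustrates why LOAD_RANK is the default: a fast node at moderate load can still be a better choice than an idle but slow one.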

Orchestrator metrics

The Orchestrator exposes usage metrics at /metrics, in Prometheus format.

# HELP jobs_posted Jobs Posted
# TYPE jobs_posted counter
jobs_posted 0

# HELP jobs_completed Jobs Completed
# TYPE jobs_completed counter
jobs_completed 0

# HELP jobs_succeeded Jobs Succeeded
# TYPE jobs_succeeded counter
jobs_succeeded 0

# HELP jobs_failed Jobs Failed
# TYPE jobs_failed counter
jobs_failed 0

# HELP jobs_killed Jobs Killed
# TYPE jobs_killed counter
jobs_killed 0

# HELP job_posters_active Active Job Posters
# TYPE job_posters_active gauge
job_posters_active 0

# HELP workers_active Active Workers
# TYPE workers_active gauge
workers_active 2

# HELP worker_load_cpu Worker Load - CPU usage
# TYPE worker_load_cpu gauge
worker_load_cpu{worker_id="869902cf-5f95-49ec-8d4e-c49ff9bee914",worker_name="NODE1"} 28.13
worker_load_cpu{worker_id="61e06076-4b9e-4d83-bcaa-1385f2d8f414",worker_name="NODE2"} 11.97

# HELP worker_load_tasks Worker Load - Tasks Count
# TYPE worker_load_tasks gauge
worker_load_tasks{worker_id="869902cf-5f95-49ec-8d4e-c49ff9bee914",worker_name="NODE1"} 1
worker_load_tasks{worker_id="61e06076-4b9e-4d83-bcaa-1385f2d8f414",worker_name="NODE2"} 0
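To collect these metrics, a Prometheus scrape job can be pointed at the Orchestrator. The job name is arbitrary, and the target assumes the service name and port from the stack examples above:

```yaml
scrape_configs:
  - job_name: clusterplex-orchestrator
    metrics_path: /metrics
    static_configs:
      - targets: ['plex-orchestrator:3500']
```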

Using these metrics you can build dashboards in a tool such as Grafana, for example:

grafana-metrics

Dashboard JSON file: samples/grafana-dashboard.json

Workers

The image extends the LinuxServer Plex image; see its documentation for information on all of its parameters.

| Parameter | Function |
| --- | --- |
| FFMPEG_HWACCEL | Allows a hwaccel decoder to be passed to ffmpeg, such as nvdec or dxva2 |
| LISTENING_PORT | Port where Workers expose the internal healthcheck |
| STAT_CPU_INTERVAL | Frequency at which the Worker sends stats to the orchestrator, in ms (default: 2000) |
| ORCHESTRATOR_URL | The URL where the orchestrator service can be reached (e.g. http://plex-orchestrator:3500) |
| TRANSCODER_PATH | Default: '/usr/lib/plexmediaserver/' |
| TRANSCODER_NAME | Default: 'Plex Transcoder' |
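For example, on Workers with NVIDIA GPUs, hardware-accelerated decoding could be hinted in the worker service's environment like this. This is a sketch; whether nvdec actually works depends on the host drivers and the container runtime having GPU access:

```yaml
  plex-worker:
    environment:
      FFMPEG_HWACCEL: nvdec  # hwaccel decoder passed to ffmpeg
```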