bfirsh / Funker

License: Apache-2.0
Functions as Docker containers

Projects that are alternatives of or similar to Funker

Fn
The container native, cloud agnostic serverless platform.
Stars: ✭ 5,046 (+1968.03%)
Mutual labels:  serverless, swarm
Time2code
Portable Scalable web code editor to integrate into your sites and learning experiences
Stars: ✭ 294 (+20.49%)
Mutual labels:  serverless, swarm
Gofn
Function process via docker provider (serverless minimalist)
Stars: ✭ 134 (-45.08%)
Mutual labels:  serverless, swarm
Embark
Framework for serverless Decentralized Applications using Ethereum, IPFS and other platforms
Stars: ✭ 3,478 (+1325.41%)
Mutual labels:  serverless, swarm
Firecamp
Serverless Platform for the stateful services
Stars: ✭ 194 (-20.49%)
Mutual labels:  serverless, swarm
Mtgatracker
MTGATracker is a deck tracker for MTG Arena, offering an in-game overlay that shows real time info about your deck in MTGA. It can also record & analyze your past matches to show personal aggregated gameplay history information, like lifetime wins/losses by deck, by event, etc.
Stars: ✭ 232 (-4.92%)
Mutual labels:  serverless
Raspi Cluster
Notes and scripts for setting up (yet another) Raspberry Pi computing cluster
Stars: ✭ 235 (-3.69%)
Mutual labels:  swarm
Aws Lambda Typescript
This sample uses the Serverless Application Framework to implement an AWS Lambda function in TypeScript, deploy it via CloudFormation, publish it through API Gateway to a custom domain registered on Route53, and document it with Swagger.
Stars: ✭ 228 (-6.56%)
Mutual labels:  serverless
Serverless Chrome
🌐 Run headless Chrome/Chromium on AWS Lambda
Stars: ✭ 2,625 (+975.82%)
Mutual labels:  serverless
Serverless Boilerplate
Minimal yet super-functional serverless boilerplate
Stars: ✭ 244 (+0%)
Mutual labels:  serverless
Batch Shipyard
Simplify HPC and Batch workloads on Azure
Stars: ✭ 240 (-1.64%)
Mutual labels:  serverless
Komiser
☁️ Cloud Environment Inspector 👮🔒 💰
Stars: ✭ 2,684 (+1000%)
Mutual labels:  serverless
Openwhisk Deploy Kube
The Apache OpenWhisk Kubernetes Deployment repository supports deploying the Apache OpenWhisk system on Kubernetes and OpenShift clusters.
Stars: ✭ 231 (-5.33%)
Mutual labels:  serverless
To Do
A to-do app with no backend; data is synced via LeanCloud.
Stars: ✭ 238 (-2.46%)
Mutual labels:  serverless
Faas Nomad
OpenFaaS plugin for Nomad
Stars: ✭ 230 (-5.74%)
Mutual labels:  serverless
Serverless Google Cloudfunctions
Serverless Google Cloud Functions Plugin – Adds Google Cloud Functions support to the Serverless Framework
Stars: ✭ 241 (-1.23%)
Mutual labels:  serverless
Algnhsa
AWS Lambda Go net/http server adapter
Stars: ✭ 226 (-7.38%)
Mutual labels:  serverless
Baker
Orchestrate microservice-based process flows
Stars: ✭ 233 (-4.51%)
Mutual labels:  serverless
Mercury Parser Api
🚀 A drop-in replacement for the Mercury Parser API.
Stars: ✭ 239 (-2.05%)
Mutual labels:  serverless
Aws Amplify Auth Starters
Starter projects for developers looking to build web & mobile applications that have Authentication & protected routing
Stars: ✭ 233 (-4.51%)
Mutual labels:  serverless

Funker: Functions as Docker containers

Funker allows you to package up pieces of your application as Docker containers and have them run on-demand on a swarm.

You can define functions like this as Docker services:

var funker = require('funker');

funker.handler(function(args, callback) {
  callback(args.x + args.y);
});

Then call them from other Docker services on any node in the swarm:

>>> import funker
>>> funker.call("add", x=1, y=2)
3

These functions run on demand, scale effortlessly, and make your application vastly simpler. It's a bit like serverless, but using just plain Docker.

Getting started

Creating a function

First, you need to package up a piece of your application as a function. Let's start with a trivial example: a function that adds two numbers together.

Save this code as handler.js:

var funker = require('funker');

funker.handler(function(args, callback) {
  callback(args.x + args.y);
});

We also need to define the Node package in package.json:

{
  "name": "app",
  "version": "0.0.1",
  "scripts": {
    "start": "node handler.js"
  },
  "dependencies": {
    "funker": "^0.0.1"
  }
}

Then, we package it up inside a Docker container by creating a Dockerfile:

FROM node:7-onbuild

And building it:

$ docker build -t add .

To run the function, you create a service:

$ docker network create --attachable -d overlay funker
$ docker service create --name add --network funker add

The function is now available under the name add to other services running on the same network. A warm instance of the function has already booted, so calls to it are instant.

Calling a function

Let's try calling the function from a Python shell:

$ docker run -it --net funker funker/python

(The funker/python image is just a Python image with the funker package installed.)

You should now see a Python prompt. Try importing the package and running the function we just created:

>>> import funker
>>> funker.call("add", x=1, y=2)
3

Cool! So, to recap: we've put a function written in Node inside a container, then called it from Python. That function is run on-demand, and this is all being done with plain Docker services and no additional infrastructure.

Implementations

There are implementations of handling and calling Funker functions in various languages:

Example applications

Deploying with Compose

Functions are just services, so they are really easy to deploy using Compose. You simply define them alongside your long-running services.

For example, to deploy a function called process-upload:

version: "2"
services:
  web:
    image: oscorp/web
  db:
    image: postgres
  process-upload:
    image: oscorp/process-upload
    restart: always

From any service in this application, the function is available under the name process-upload. For example, you could call it with a bit of code like this:

funker.call("process-upload", bucket="some-s3-bucket", filename="upload.jpg")

Architecture

The architecture is intentionally very simple. It leans on Docker services as the base infrastructure, and avoids any unnecessary complexity (daemons, queues, storage, consensus systems, and so on).

Functions run as Docker services. When they boot up, they open a TCP socket and sit there waiting for a connection.

To call functions, another Docker service connects to the function at its hostname. This can be done anywhere in a swarm due to Docker's overlay networking. It sends function arguments as JSON, then the function responds with a return value as JSON.

Once it has been called, the function refuses any other connections. Once it has responded, the function closes the socket and quits immediately. Docker's state reconciliation will then boot up a fresh copy of the function ready to receive calls again.

So, each function only processes a single request. To process functions in parallel, we need to have multiple warm functions running in parallel, which is easy to do with Docker's service replication. The idea is to do this automatically, but this is incomplete. See this issue for more background and discussion.
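Until that automation exists, replicas can be scaled by hand. For example, using the add service created earlier:

```shell
# Run three warm copies of the function so up to three
# calls can be handled in parallel.
docker service scale add=3
```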

Alternative architectures

An alternative implementation considered was for the function caller to create the service directly, as has been done in some previous experiments.

The upside of Funker over this implementation is that functions are warm and ready to receive calls, and you don't need the complexity of giving containers access to create Docker services somehow.

The disadvantage is that it doesn't scale easily. We need some additional infrastructure to be able to scale functions up and down to handle demand.
