slouc / finch-demo

Licence: MIT License
Introduction to Finch, a lightweight HTTP server library based on Twitter's Finagle.

Simple REST API server with Finch

What is Finch?

Let's start by quoting the official GitHub page: "Finch is a thin layer of purely functional basic blocks atop of Finagle for building composable HTTP APIs". That's a beautiful definition because it's both concise and complete: Twitter's Finagle is the raw machinery under the hood that deals with RPCs, protocols, concurrency, connection pools, load balancers and things like that, while Finch is a thin layer of composable, type-safe abstractions on top of all that.

Finagle's mantra is: a server is a function. That makes sense; when we abstract it out and forget about the mechanical parts, a server really comes down to a simple Req => Future[Rep] (note that this is Twitter's Future). Of course, we can't simply "forget" about the mechanical parts, but we can at least move them away from the abstract part. And that's exactly what Finagle does - it separates the netty-powered engine from the functional, composable, type-safe abstractions that live on top of it. Finch then takes that abstract part a bit further.
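
To make the Req => Future[Rep] idea concrete, here's a minimal sketch of a raw Finagle HTTP service written as nothing more than a function (helloService is just an illustrative name, not part of this project):

import com.twitter.finagle.Service
import com.twitter.finagle.http.{Request, Response}
import com.twitter.util.Future

// a server really is just a function Request => Future[Response]
val helloService: Service[Request, Response] = new Service[Request, Response] {
  def apply(req: Request): Future[Response] = {
    val rep = Response() // defaults to HTTP/1.1 200 OK
    rep.contentString = "hello, " + req.path
    Future.value(rep)
  }
}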

Working with Endpoints

The basic building block in Finch is the Endpoint. Again, this is a function, this time Input => EndpointResult[A], where A denotes the type of the result (we'll get back to this type soon). You're most likely not going to be constructing the EndpointResult yourself, but more about that later. First, an example.

Here's a basic endpoint:

import io.finch._

val endpoint = get("hello" :: "world" :: string)

This describes an endpoint /hello/world/{URL_PARAM}, which means that our endpoint is a function of one String parameter. Here's how we can also include query params and/or a request body:

import io.finch._
import io.finch.circe._
import io.circe.generic.auto._

case class Book(title: String, author: String)

val endpoint1 = get("books" :: string :: param("minPrice") :: paramOption("maxPrice"))
val endpoint2 = post("books" :: jsonBody[Book])

The first endpoint matches e.g. GET /books/scienceFiction?minPrice=20&maxPrice=150, while the second one matches POST /books with a request body such as { "title": "1984", "author": "G. Orwell" }. Note that the min price is required and the max price is optional, which probably doesn't make a lot of sense in the real world, but I'm just showing the possibilities.

The values provided to get and post are HLists from shapeless. If you are not familiar with shapeless, that's fine (although you should put it on your TODO list because it's really useful). All you need to know for now is that "HList" is short for heterogeneous list, which is basically a list that can contain different types. In functional programming terminology this is known as a product (that would have been the name of the shapeless HList too if it hadn't already been taken in the standard Scala library). The HList we passed to endpoint1 was a product of: "books", one URL path parameter, one required query parameter and one optional query parameter. So the endpoint is all of those things combined.

This is in contrast to the product's dual, the coproduct. A coproduct of those things would mean that the final value is only one of them. It's like AND vs OR in first-order logic (hooray for the Curry-Howard correspondence). In standard Scala terms, a product is like a TupleN, while a coproduct is a bunch of nested Eithers (or just one, in the case of a coproduct of only two types). We can also model a product as a case class and a coproduct as a bunch of case classes which extend a common sealed trait. We're only using products in Finch for now, but we'll also use coproducts later, hence this small digression.
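
As a tiny illustration of that distinction (a shapeless sketch, not Finch-specific), here's a product and a coproduct side by side:

import shapeless._

// product: the value carries ALL of its components at once
val product: String :: Int :: Boolean :: HNil = "books" :: 42 :: true :: HNil

// coproduct: the value is exactly ONE of the listed alternatives
type StringOrInt = String :+: Int :+: CNil
val stringOrInt: StringOrInt = Coproduct[StringOrInt]("books")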

Now let's see how to provide our endpoints-as-functions with their bodies. This is where "not constructing the EndpointResult yourself" comes into play. All you need to do is provide each endpoint with a function from its parameters to some Output[A], and Finch takes care of the rest (for those who are interested: this function is implicitly transformed into an instance of the Mapper typeclass, which is then passed to the Endpoint's apply() method).

Here's how to provide our functions with bodies:

import io.finch._
import io.finch.circe._
import io.circe.generic.auto._

case class Book(title: String, author: String)

val endpoint1 = get("books" :: string :: param("minPrice") :: paramOption("maxPrice")) {
  (s: String, minPrice: String, maxPrice: Option[String]) =>
    // do something and return Output
    Ok(s"Cool request bro! Here are the params: $s, $minPrice" + maxPrice.map(mp => s" and $mp").getOrElse(""))
}

val endpoint2 = post("books" :: jsonBody[Book]) {
  (book: Book) =>
    // do something and return Output
    Ok(s"You posted a book with title: ${book.title} and author: ${book.author}")
}

The type A that I mentioned before is the type we parameterize Output with, and therefore the type of the result. In the previous example it was a simple String. Now let's return JSON:

import io.finch._
import io.finch.circe._
import io.circe.generic.auto._

case class MyResponse(code: Int, msg: String)

val endpoint3 = post("books" :: "json" :: jsonBody[Book]) {
  (book: Book) => 
    // do something and return Output
    Ok(MyResponse(200, "This is a response!"))
}

You might be wondering how exactly we are returning JSON when we never even mentioned the word "json"; we are just returning a MyResponse. The magic is in those two extra imports. They contain implicit conversions (powered by circe) that automatically encode the result into JSON.
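
If you're curious what those imports roughly buy us for MyResponse, a hand-rolled encoder would look something like the sketch below (purely illustrative; the automatic derivation makes this unnecessary):

import io.circe.Encoder
import io.circe.syntax._

// roughly what io.circe.generic.auto._ derives for us behind the scenes
implicit val myResponseEncoder: Encoder[MyResponse] =
  Encoder.forProduct2("code", "msg")(r => (r.code, r.msg))

// MyResponse(200, "This is a response!").asJson.noSpaces
// => {"code":200,"msg":"This is a response!"}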

Let's get even more sophisticated:

import io.finch._
import io.finch.circe._
import io.circe.generic.auto._

case class FancyResponse[T: io.circe.Encoder](code: Int, msg: String, body: T)
case class FancyResponseBody(msg: String)

val endpoint4 = post("books" :: "fancy" :: jsonBody[Book]) {
  (book: Book) =>
    // do something and return Output
    Ok(FancyResponse(200, "This is one fancy response!", FancyResponseBody("response")))
}

Here in FancyResponse we have a generic type T. Having just T as a type parameter would not satisfy the compiler, since there is no information about the type and therefore no guarantee that Finch will know how to encode it into some output format such as JSON. But by declaring the type parameter as [T: io.circe.Encoder] we are saying that an implicit instance of the Encoder typeclass must exist in scope for the given T. When we later use FancyResponseBody in place of T, the compiler is happy because the needed typeclass instance does indeed exist (it's derived via the imports).
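
For reference, the context bound is just shorthand; the compiler desugars it into an extra implicit parameter list, roughly like this (a hypothetical, desugared version for illustration only):

// roughly what [T: io.circe.Encoder] desugars to
case class FancyResponseDesugared[T](code: Int, msg: String, body: T)(
  implicit encoderForT: io.circe.Encoder[T]
)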

Some types (such as Option[T]) are covered by the imports, and some (such as scalaz.Maybe[T], at least at the time of writing) are not. But even for those that are not, it's simple to build your own conversions. They are beyond the scope of this text and you don't need them for now anyway; let's just say that the Finch documentation and gitter channel should help you when you do get there (not to mention that Travis Brown has supplied ridiculous amounts of Finch+Circe information on StackOverflow).

Non-blocking

Instead of returning Ok(foo) from an endpoint, you're also free to return a (Twitter) Future: Future(Ok(foo)).

The asynchronous alternative is clearly visible in the Finch code:

implicit def mapperFromOutputFunction[A, B](f: A => Output[B]): Mapper.Aux[A, B] = ...
...
implicit def mapperFromFutureOutputFunction[A, B](f: A => Future[Output[B]]): Mapper.Aux[A, B] = ...

So basically this would work:

import com.twitter.util.Future

val endpoint4 = post("books" :: "fancy" :: jsonBody[Book]) {
  (book: Book) =>
    // do something and return Output
    Future(Ok(FancyResponse(200, "This is one fancy response!", FancyResponseBody("response"))))
}

Instead of just wrapping our response (or any computation that results in a response) in a Future, which is kind of a "blind" way of doing it, it's better to wrap it in a FuturePool:

import com.twitter.util.FuturePool

val endpoint4 = post("books" :: "fancy" :: jsonBody[Book]) {
  (book: Book) =>
    // do something and return Output
    FuturePool.unboundedPool(Ok(FancyResponse(200, "This is one fancy response!", FancyResponseBody("response"))))
}

An unbounded pool is not that great either, since our application may use too many resources. A FuturePool can be configured to use any number of threads, and some other parameters can be set as well. You can instantiate a FuturePool with Java executors, e.g.:

FuturePool(Executors.newCachedThreadPool())

(executors are out of scope of this project; you'll find a bunch of info online)
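
For example, here's a sketch of the same endpoint running its work on a bounded pool (the pool size and the endpoint5 name are just illustrative):

import java.util.concurrent.Executors
import com.twitter.util.FuturePool

// a bounded pool backed by a fixed-size executor
val boundedPool = FuturePool(Executors.newFixedThreadPool(8))

val endpoint5 = post("books" :: "fancy" :: jsonBody[Book]) {
  (book: Book) =>
    // the heavy work runs on the bounded pool instead of the calling thread
    boundedPool(Ok(FancyResponse(200, "This is one fancy response!", FancyResponseBody("response"))))
}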

There is not much point in wrapping the output this way in our simple example, but in real use cases you may have heavy computations that calculate the end result and put it in the output. In that case, it is highly recommended to use a FuturePool to keep request handling asynchronous. Of course, heavy computations are not the only scenario in which asynchronous handling is needed; for example, your business logic may have to communicate with a database, other APIs etc. If these operations give you a standard Scala Future instead of the Twitter one (or a scalaz Task, a Monix Task or something like that), you will eventually have to turn those into a Twitter Future.
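
One common way to bridge a standard Scala Future into a Twitter Future by hand is via a Twitter Promise. Here's a minimal sketch (the name toTwitterFuture is ours, and it assumes an ExecutionContext in scope):

import com.twitter.util.{Future => TwitterFuture, Promise => TwitterPromise}
import scala.concurrent.{ExecutionContext, Future => ScalaFuture}
import scala.util.{Failure, Success}

// complete a Twitter Promise when the Scala Future finishes
def toTwitterFuture[A](sf: ScalaFuture[A])(implicit ec: ExecutionContext): TwitterFuture[A] = {
  val promise = new TwitterPromise[A]()
  sf.onComplete {
    case Success(value) => promise.setValue(value)
    case Failure(error) => promise.setException(error)
  }
  promise
}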

Error handling

What about errors? Everyone knows how to handle errors in their business logic, wrap things into a bunch of disjunctions (or similar coproducts) and eventually return some error HTTP response, so I'm not going to go into that. But what if e.g. the request body can't even be decoded into our desired case class (which means that things broke somewhere inside Finch/Circe)?

The answer is the rescue method. Let's add it to e.g. endpoint3:

val endpoint3: Endpoint[MyResponse] = post("books" :: "json" :: jsonBody[Book]) {
  (book: Book) =>
    // do something and return Output
    Ok(MyResponse(200, "This is a response!"))
}.rescue {
  case t: Throwable => Future(Output.payload(MyResponse(400, "Not cool dude!"), Status.BadRequest))
}

Now posting some random JSON request body to /books/json results in {"code":400,"msg":"Not cool dude!"}. Not that the default message Finch would have given you is bad or anything; it would be something like {"message":"body cannot be converted to Book: Attempt to decode value on failed cursor: DownField(title)."}. But if you want to handle this error case yourself, rescue is how you can do it.
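
If you don't need the Future in the error branch, Finch also offers handle, the synchronous sibling of rescue; a quick sketch (endpoint3Handled is just an illustrative name):

val endpoint3Handled: Endpoint[MyResponse] = post("books" :: "json" :: jsonBody[Book]) {
  (book: Book) =>
    // do something and return Output
    Ok(MyResponse(200, "This is a response!"))
}.handle {
  case t: Throwable => Output.payload(MyResponse(400, "Not cool dude!"), Status.BadRequest)
}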

To summarize: an endpoint is a function whose input is a product of path/query/body parameters and whose return value is an Endpoint[SomeResult], where SomeResult can be any type (most likely a string, an array/vector/list or a case class, all of which are automatically transformed to their JSON counterparts). A bit of terminology: we can say that Scala's String, Array/Seq and case classes are isomorphic to JSON strings, arrays and objects, because we can go from one to the other and back without losing any information.
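
As a quick illustration of the array case, returning a Seq of case classes comes out as a JSON array of objects (the listBooks endpoint and its data are made up for this example):

// GET /books/all => [{"title":"1984","author":"G. Orwell"}, ...]
val listBooks = get("books" :: "all") {
  Ok(Seq(Book("1984", "G. Orwell"), Book("Brave New World", "A. Huxley")))
}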

We can visualize the type transformations like this:

Request[AE] ------> AD ------> BD ------> Response[BE]

where AE is "A Encoded" (e.g. JSON request body), AD is "A Decoded" (e.g. a corresponding case class), BD is "B Decoded" (result of applying the business logic to AD) and BE is the encoded version of BD (e.g. from case class to JSON).

Each endpoint is constructed in two steps, first by providing an HList that describes the endpoint URL and parameter(s), and then by composing that with a function which describes what happens with the input parameters (if any) and constructs the result (the "body" of the endpoint function).

Let's now see how to define the server.

Implementing the Server

I said we would be working with coproducts later. This is exactly what our server will be - a coproduct of endpoints. It's like Schrödinger's cat: the server is potentially all endpoints at the same time, but once you make a request it materializes as just one of them. Well, kind of, but it's an interesting way of looking at it. When a request is made, each endpoint is probed until a match is found or the end has been reached. If some endpoint matches the request (e.g. the request is GET /foo/bar and there is an endpoint get("foo" :: "bar")), that endpoint is triggered and the search stops. If more than one endpoint matches the request, the first one is chosen. It's just like good old pattern matching.

Here's a simple implementation of a server. Even though it's not necessary for a textbook example, in the real world you will want to extend TwitterServer (this is the official best practice). Other than that, everything should be pretty straightforward. You will notice that the syntax for joining things into a coproduct is :+: (also known as the "space invader" operator).

import com.twitter.finagle.http.{Request, Response}
import com.twitter.server._
import com.twitter.finagle.{Http, Service}
import com.twitter.util.Await
import io.circe.generic.auto._
import io.finch.circe._

object Server extends TwitterServer {

  val api: Service[Request, Response] =
    (endpoint1 :+: endpoint2 :+: endpoint3 :+: endpoint4)
      .toService

  def main(): Unit = {
    val server = Http.server.serve(":8080", api)
    onExit { server.close() }
    Await.ready(adminHttpServer)
  }
  
}

As you can see, once you get the hang of working with endpoints, defining a basic implementation of a server is almost trivial. Later on you will perhaps want to add various filters, stats receivers and so on, but for a simple demonstration this is enough.
