
Folsom

Folsom is an attempt at a small and stable memcache client. It is fully asynchronous, built on Netty, and uses Java 8's CompletionStage throughout the API.


Build dependencies

  • Java 8 or higher
  • Maven
  • Docker - to run integration tests.

Runtime dependencies

  • Netty 4
  • Google Guava
  • Yammer metrics (optional)
  • OpenCensus (optional)

Usage

Folsom is meant to be used as a library embedded in other software.

To import it with maven, use this:

<!-- In dependencyManagement section -->
<dependency>
  <groupId>com.spotify</groupId>
  <artifactId>folsom-bom</artifactId>
  <version>1.7.4</version>
  <type>pom</type>
  <scope>import</scope>
</dependency>

<!-- In dependencies section -->
<dependency>
  <groupId>com.spotify</groupId>
  <artifactId>folsom</artifactId>
</dependency>

<!-- optional if you want to expose folsom metrics with spotify-semantic-metrics -->
<dependency>
  <groupId>com.spotify</groupId>
  <artifactId>folsom-semantic-metrics</artifactId>
</dependency>

<!-- optional if you want to expose folsom metrics with yammer -->
<dependency>
  <groupId>com.spotify</groupId>
  <artifactId>folsom-yammer-metrics</artifactId>
</dependency>

<!-- optional if you want to expose folsom tracing with OpenCensus -->
<dependency>
  <groupId>com.spotify</groupId>
  <artifactId>folsom-opencensus</artifactId>
</dependency>

<!-- optional if you want to use AWS ElastiCache auto-discovery -->
<dependency>
  <groupId>com.spotify</groupId>
  <artifactId>folsom-elasticache</artifactId>
</dependency>

If you want to use one of the metrics or tracing libraries, make sure you use the same version as the main artifact.

We use semantic versioning.

The main entry point to the folsom API is the MemcacheClientBuilder class. It has chainable setter methods to configure various aspects of the client. The methods connectBinary() and connectAscii() construct MemcacheClient instances using the binary protocol and ASCII protocol respectively. For details on their differences, see Protocol below.

All calls to the folsom API that interact with a memcache server are asynchronous, and the result is typically accessible through CompletionStage instances. The exceptions to this rule are the methods that connect clients to their remote endpoints, MemcacheClientBuilder.connectBinary() and MemcacheClientBuilder.connectAscii(), which return a MemcacheClient immediately while asynchronously attempting to connect to the configured remote endpoint(s).

Since code using the folsom API should be written to handle intermittent failures with MemcacheClosedException anyway, waiting for the initial connect to complete is not something folsom concerns itself with. For single-server connections, ConnectFuture provides a way to wait for the initial connection to succeed, as shown in the example below.

final MemcacheClient<String> client = MemcacheClientBuilder.newStringClient()
    .withAddress(hostname)
    .connectAscii();
// wait until the client has connected to the server
ConnectFuture.connectFuture(client).toCompletableFuture().get();

client.set("key", "value", 10000).toCompletableFuture().get();
client.get("key").toCompletableFuture().get();

client.shutdown();

Clients are single-use: once shutdown() has been invoked, the client can no longer be used.
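Blocking with toCompletableFuture().get() keeps the example above short, but the API is designed for non-blocking composition. Here is a minimal sketch of that style using only the JDK, with a hypothetical fetch method standing in for an asynchronous call such as client.get(key):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class AsyncComposition {
    // Hypothetical stand-in for an asynchronous call such as client.get(key)
    static CompletionStage<String> fetch(String key) {
        return CompletableFuture.supplyAsync(() -> "value-for-" + key);
    }

    public static void main(String[] args) {
        // Chain work onto the future instead of blocking the calling thread
        CompletionStage<String> result = fetch("key")
            .thenApply(String::toUpperCase)      // transform the cached value
            .exceptionally(t -> "fallback");     // degrade gracefully on failure

        // Only join at the very edge of the program (e.g. in main or a test)
        System.out.println(result.toCompletableFuture().join());  // prints VALUE-FOR-KEY
    }
}
```

The same thenApply/thenCompose/exceptionally chaining applies directly to the CompletionStage instances returned by a real client.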

Java 7 usage

If you are still on Java 7, you can depend on the older version:

<dependency>
  <groupId>com.spotify</groupId>
  <artifactId>folsom</artifactId>
  <version>0.8.1</version>
</dependency>

Design goals

  • Robustness - If you request something, the future you get back should always complete at some point.
  • Error detection - If something goes wrong (the memcache server is behaving incorrectly or some internal bug occurs), we try to detect it and drop the connection to prevent further problems.
  • Simplicity - The code base is intended to be small and well abstracted. We prefer simple solutions that solve the major usecases and avoid implementing optimizations that would give small returns.
  • Fail-fast - If something happens (the memcached service is slow or gets disconnected) we try to fail as fast as possible. How to handle the error is up to you, and you probably want to know about the error as soon as possible.
  • Modularity - The complex client code is isolated in a single class, and all the extra functionality is in composable modules: ketama, reconnecting, retry, roundrobin.
  • Efficiency - We want to support a high traffic throughput without using too much CPU or memory resources.
  • Asynchronous - We fully support the idea of writing asynchronous code instead of blocking threads, and this is achieved through Java 8 futures.
  • Low amount of synchronization - Code that uses a lot of synchronization primitives is more likely to have race condition bugs and deadlocks. We try to isolate that as much as possible to minimize the risk, and most of the code base doesn't have to care.
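The robustness and fail-fast goals together mean that callers should expect futures to complete exceptionally and decide how to react. One common reaction is to retry a failed call once; the sketch below shows the composition pattern with only the JDK, where the hypothetical flakyGet stands in for an asynchronous client call that may fail:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

public class RetryOnce {
    static final AtomicInteger attempts = new AtomicInteger();

    // Hypothetical async call that fails on the first attempt only
    static CompletableFuture<String> flakyGet(String key) {
        return CompletableFuture.supplyAsync(() -> {
            if (attempts.incrementAndGet() == 1) {
                throw new RuntimeException("connection closed");
            }
            return "value";
        });
    }

    // Retry the call once if the first future completes exceptionally
    static CompletableFuture<String> getWithRetry(String key) {
        return flakyGet(key)
            .thenApply(CompletableFuture::completedFuture)  // wrap success
            .exceptionally(t -> flakyGet(key))              // retry on failure
            .thenCompose(f -> f);                           // flatten back to one future
    }

    public static void main(String[] args) {
        System.out.println(getWithRetry("key").join());  // prints value (second attempt)
    }
}
```

Whether retrying is appropriate depends on the failure mode; for a slow server, retrying can amplify load, so pair it with timeouts as described under Best practices.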

Best practices

Do not use withConnectionTimeoutMillis() or the deprecated withRequestTimeoutMillis() to set per-request timeouts. This timeout is intended to detect broken TCP connections so that the connection can be closed and recreated. Once it fires, all outstanding requests are completed with a failure and Folsom tries to recreate the connection. If the timeout is set too low, this causes connection flapping, which results in more failed requests and/or increased request latencies.

A better way of setting timeouts on individual requests (in Java 9+) is something like this:

CompletableFuture<T> future = client.get(...).toCompletableFuture().orTimeout(...);
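As an illustration of the orTimeout pattern with a plain CompletableFuture (Java 9+): a future that never completes on its own fails with a TimeoutException once the deadline passes, without affecting any underlying connection.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutExample {
    public static void main(String[] args) {
        // Stand-in for a request whose response never arrives
        CompletableFuture<String> pending = new CompletableFuture<>();

        try {
            // Fail this particular call after 100 ms; the connection is untouched
            pending.orTimeout(100, TimeUnit.MILLISECONDS).join();
        } catch (CompletionException e) {
            System.out.println(e.getCause() instanceof TimeoutException
                ? "timed out" : "other failure");  // prints timed out
        }
    }
}
```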

Protocol

Folsom implements both the binary protocol and ascii protocol. They share a common interface but also extend it with their own specializations.

Which protocol to use depends on your use case. With a regular memcached backend, the ascii protocol is much more efficient. The binary protocol is a bit chattier but also makes error detection easier.

interface MemcacheClient<T> {}
interface AsciiMemcacheClient<T> extends MemcacheClient<T> {}
interface BinaryMemcacheClient<T> extends MemcacheClient<T> {}

Changelog

See changelog.

Features

Ketama

Folsom supports Ketama for sharding across a set of memcache servers. Note that the hashing algorithm (currently) doesn't attempt to provide compatibility with other memcache clients, so when switching client implementations you should expect a period of low cache hit ratio.
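To illustrate the idea behind Ketama, here is a toy consistent-hash ring in plain Java. This is not Folsom's actual implementation: the hash function, virtual-node count, and class names are all illustrative only.

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class ToyKetamaRing {
    private final SortedMap<Integer, String> ring = new TreeMap<>();
    private static final int VNODES = 100;  // virtual nodes smooth the distribution

    ToyKetamaRing(String... servers) {
        // Place each server at many points on the ring
        for (String server : servers) {
            for (int i = 0; i < VNODES; i++) {
                ring.put((server + "#" + i).hashCode(), server);
            }
        }
    }

    // Walk clockwise to the first virtual node at or after the key's hash
    String serverFor(String key) {
        SortedMap<Integer, String> tail = ring.tailMap(key.hashCode());
        return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
    }

    public static void main(String[] args) {
        ToyKetamaRing ring =
            new ToyKetamaRing("cache-a:11211", "cache-b:11211", "cache-c:11211");
        // The same key always maps to the same server
        System.out.println(ring.serverFor("user:42").equals(ring.serverFor("user:42")));
    }
}
```

The payoff of this scheme is that adding or removing a server only remaps the keys nearest its ring positions, instead of reshuffling the whole key space as naive modulo hashing would.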

Yammer metrics

You can optionally choose to track performance using Yammer metrics. You will need to include the folsom-yammer-metrics dependency and initialize using MemcacheClientBuilder:

builder.withMetrics(new YammerMetrics(metricsRegistry));

OpenCensus tracing

You can optionally use OpenCensus to trace Folsom operations. You will need to include the folsom-opencensus dependency and initialize tracing using MemcacheClientBuilder:

builder.withTracer(OpenCensus.tracer());

Cluster auto-discovery

Nodes in a memcache cluster can be auto-discovered. Folsom supports discovery through DNS SRV records using com.spotify.folsom.SrvResolver, or through AWS ElastiCache using com.spotify.folsom.elasticache.ElastiCacheResolver.

SrvResolver:

builder.withResolver(SrvResolver.newBuilder("foo._tcp.example.org").build());

ElastiCacheResolver:

builder.withResolver(ElastiCacheResolver.newBuilder("cluster-configuration-endpoint-hostname").build());

Building

mvn package

Code of conduct

This project adheres to the Open Code of Conduct. By participating, you are expected to honor this code.

Authors

Folsom was initially built at Spotify by Kristofer Karlsson, Niklas Gustavsson and Daniel Norberg. Many thanks also go out to Noa Resare.
