
keithrozario / lambda-cache

Licence: MIT License
Python utility for caching in Lambda Functions

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to lambda-cache

Ngx Cache
Cache utility for Angular
Stars: ✭ 144 (+414.29%)
Mutual labels:  caching, cache
Haproxy
HAProxy Load Balancer's development branch (mirror of git.haproxy.org)
Stars: ✭ 2,463 (+8696.43%)
Mutual labels:  caching, cache
Nuster
A high performance HTTP proxy cache server and RESTful NoSQL cache server based on HAProxy
Stars: ✭ 1,825 (+6417.86%)
Mutual labels:  caching, cache
Cash
HTTP response caching for Koa. Supports Redis, in-memory store, and more!
Stars: ✭ 122 (+335.71%)
Mutual labels:  caching, cache
HAProxy-2-RPM-builder
Build latest HAProxy binary with prometheus metrics support
Stars: ✭ 28 (+0%)
Mutual labels:  caching, cache
Sequelize Transparent Cache
Simple to use and universal cache layer for Sequelize
Stars: ✭ 137 (+389.29%)
Mutual labels:  caching, cache
Cachemanager
CacheManager is an open source caching abstraction layer for .NET written in C#. It supports various cache providers and implements many advanced features.
Stars: ✭ 2,049 (+7217.86%)
Mutual labels:  caching, cache
Ansible Role Memcached
Ansible Role - Memcached
Stars: ✭ 54 (+92.86%)
Mutual labels:  caching, cache
punic
Punic is a remote cache CLI built for Carthage and Apple .xcframework
Stars: ✭ 25 (-10.71%)
Mutual labels:  caching, cache
Scaffeine
Thin Scala wrapper for Caffeine (https://github.com/ben-manes/caffeine)
Stars: ✭ 195 (+596.43%)
Mutual labels:  caching, cache
Node Cache
a node internal (in-memory) caching module
Stars: ✭ 1,660 (+5828.57%)
Mutual labels:  caching, cache
salad
Asynchronous Scala Redis Client supporting Sentinel and Redis Cluster
Stars: ✭ 14 (-50%)
Mutual labels:  caching, cache
Minicache
📦 Python memory caching utilities
Stars: ✭ 79 (+182.14%)
Mutual labels:  caching, cache
Olric
Distributed cache and in-memory key/value data store. It can be used both as an embedded Go library and as a language-independent service.
Stars: ✭ 2,067 (+7282.14%)
Mutual labels:  caching, cache
Cashew
A simple and elegant yet powerful HTTP client cache for .NET
Stars: ✭ 70 (+150%)
Mutual labels:  caching, cache
Libcache
A Lightweight in-memory key:value cache library for Go.
Stars: ✭ 152 (+442.86%)
Mutual labels:  caching, cache
Easycaching
💥 EasyCaching is an open source caching library that contains basic usages and some advanced usages of caching which can help us to handle caching more easier!
Stars: ✭ 1,047 (+3639.29%)
Mutual labels:  caching, cache
Fastcache
Fast thread-safe inmemory cache for big number of entries in Go. Minimizes GC overhead
Stars: ✭ 1,051 (+3653.57%)
Mutual labels:  caching, cache
Simple Spring Memcached
A drop-in library to enable memcached caching in Spring beans via annotations
Stars: ✭ 185 (+560.71%)
Mutual labels:  caching, cache
Ccache.cmake
🚅 Compile faster with Ccache! A Ccache integration for CMake with Xcode support.
Stars: ✭ 24 (-14.29%)
Mutual labels:  caching, cache

Lambda Cache

Simple Caching for Lambda Functions


Introduction

lambda-cache helps you cache data in your Lambda function from one invocation to another. It utilizes the internal memory of the lambda function's execution context to store data across multiple invocations, which:

  • Reduces load on back-end systems
  • Reduces the execution time of the lambda
  • Guarantees that functions reference the latest data after the cache expires

lambda-cache is purpose-built for AWS Lambda functions, and prioritizes simplicity as its design goal. It currently supports SSM Parameters, Secrets from Secrets Manager, and S3 Objects.

Installation

Include the package in your function zip-file artifact using:

$ pip install lambda-cache -t /path/of/function

Refer to installation guide for other options.

Usage

The official user guide has more info.

SSM - Parameter Store

Cache single parameter

To cache a parameter from ssm, decorate your handler function like so:

from lambda_cache import ssm

@ssm.cache(parameter='/production/app/var')
def handler(event, context):
    var = getattr(context,'var')
    response = do_something(var)
    return response

All invocations of this function over the next minute will reference the parameter from the function's internal cache, rather than making a network call to ssm. After one minute has lapsed, the next invocation will invoke get_parameter to refresh the cache. The parameter value is injected into the context object of your lambda handler for retrieval.

Change cache expiry

The default max_age_in_seconds setting is 60 seconds (1 minute); it defines the maximum age of a cached parameter that is acceptable to the handler function. Cache entries older than this will be refreshed. To set a longer cache duration (e.g. 5 minutes), change the setting:

from lambda_cache import ssm

@ssm.cache(parameter='/production/app/var', max_age_in_seconds=300)
def handler(event, context):
    var = getattr(context,'var')
    response = do_something(var)
    return response

Note: The caching logic runs only at invocation, regardless of how long the function runs. A 15-minute lambda function will not refresh the parameter unless it is explicitly refreshed using the get_entry method (described later). The library is primarily interested in caching 'across' invocations rather than 'within' an invocation.
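The expiry behaviour described above boils down to a timestamp comparison at invocation time. A minimal sketch of the idea (illustrative only, not lambda-cache's actual internals; the fetch callable is a hypothetical stand-in for the ssm call):

```python
import time

# Illustrative sketch of per-entry expiry, not lambda-cache's actual code.
_cache = {}  # entry_name -> (value, fetched_at)

def get_cached(entry_name, fetch, max_age_in_seconds=60):
    entry = _cache.get(entry_name)
    if entry is None or time.time() - entry[1] > max_age_in_seconds:
        # Cache miss, or the entry is older than max_age_in_seconds: refresh.
        _cache[entry_name] = (fetch(), time.time())
    return _cache[entry_name][0]
```

Within the expiry window the fetch callable is never invoked, which is what keeps repeat invocations off the network.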

Change cache entry settings

The default name of the cache entry is the string after the last slash ('/') character of the parameter's name. This means /production/app/var and /test/app/var both resolve to just var. To override this default, use the entry_name setting like so:

from lambda_cache import ssm

@ssm.cache(parameter='/production/app/var', entry_name='new_var')
def handler(event, context):
    var = getattr(context,'new_var')
    response = do_something(var)
    return response
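The default naming rule (take everything after the last slash) amounts to a one-line split; a hypothetical helper for illustration, not part of lambda-cache:

```python
def default_entry_name(parameter):
    # Illustrates the rule described above: '/production/app/var' -> 'var'
    return parameter.split('/')[-1]
```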

Cache multiple parameters

To cache multiple entries at once, pass a list of parameters to the parameter argument. This method groups all the parameter values into one Python dictionary, stored in the Lambda context under the entry_name.

Note: When using this method, entry_name is a required argument; if it is missing, a NoEntryNameError exception is thrown.

from lambda_cache import ssm

@ssm.cache(parameter=['/app/var1', '/app/var2'], entry_name='parameters')
def handler(event, context):
    var1 = getattr(context,'parameters').get('var1')
    var2 = getattr(context,'parameters').get('var2')
    response = do_something(var1, var2)
    return response

Under the hood, we use the get_parameters API call from boto3, which translates to a single network call for multiple parameters. You can group all parameter types in a single call, including String, StringList and SecureString. StringList parameters return as a list, while all other types return as plain-text strings. The library does not support returning SecureString parameters in encrypted form; it always returns the decrypted plain-text string regardless of parameter type.

Note: for this method to work, ensure your function's permission policy includes both ssm:GetParameter and ssm:GetParameters (with the 's' at the end).
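To make the grouping concrete, here is a hypothetical sketch of how a get_parameters response could be folded into the single dictionary described above (the input mimics the 'Parameters' list boto3 returns; group_by_short_name is not part of lambda-cache):

```python
def group_by_short_name(parameters):
    # 'parameters' mimics the 'Parameters' list from boto3's ssm.get_parameters.
    grouped = {}
    for p in parameters:
        value = p['Value']
        if p.get('Type') == 'StringList':
            # StringList values are stored as comma-delimited strings.
            value = value.split(',')
        grouped[p['Name'].split('/')[-1]] = value
    return grouped
```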

Decorator stacking

If you wish to cache multiple parameters with different expiry times, stack the decorators. In this example, var1 will be refreshed every 30 seconds, and var2 every 60 seconds.

from lambda_cache import ssm

@ssm.cache(parameter='/production/app/var1', max_age_in_seconds=30)
@ssm.cache(parameter='/production/app/var2', max_age_in_seconds=60)
def handler(event, context):
    var1 = getattr(context,'var1')
    var2 = getattr(context,'var2')
    response = do_something(var1, var2)
    return response

Note: Decorator stacking performs one API call per decorator, which might result in slower performance.

Cache invalidation

If you require a fresh value at some point in the code, you can force a refresh using the ssm.get_entry function, setting the max_age_in_seconds argument to 0.

from lambda_cache import ssm

@ssm.cache(parameter='/prod/var')
def handler(event, context):

    if event.get('refresh'):
        # refresh parameter
        var = ssm.get_entry(parameter='/prod/var', max_age_in_seconds=0)
    else:
        var = getattr(context,'var')
    
    response = do_something(var)
    return response

You may also use ssm.get_entry to get a parameter entry from anywhere in your function's code.

To get a parameter only once in the lifetime of the function, set max_age_in_seconds to an arbitrarily large number, e.g. 36000 (10 hours).

Return Values

Caching supports String, SecureString and StringList parameters with no change required (ensure you have kms:Decrypt permission for SecureString parameters). For simplicity, StringList parameters are automatically converted into a list (split on commas), while String and SecureString both return the single string value of the parameter.

Secrets Manager

Cache single secret

Secret support is similar, but uses the secrets_manager.cache decorator.

from lambda_cache import secrets_manager

@secrets_manager.cache(name='/prod/db/conn_string')
def handler(event, context):
    conn_string = getattr(context,'conn_string')
    response = do_something(conn_string)
    return response

Change Cache expiry

The default max_age_in_seconds setting is 60 seconds (1 minute); it defines how long a secret is kept in cache before it is refreshed from Secrets Manager. To configure longer or shorter times, modify this argument like so:

from lambda_cache import secrets_manager

@secrets_manager.cache(name='/prod/db/conn_string', max_age_in_seconds=300)
def handler(event, context):
    var = getattr(context,'conn_string')
    response = do_something(var)
    return response

Note: The caching logic runs only at invocation, regardless of how long the function runs. A 15-minute lambda function will not refresh the secret unless it is explicitly refreshed using secrets_manager.get_entry. The library is primarily interested in caching 'across' invocations rather than 'within' an invocation.

Change Cache entry settings

The entry name of the secret is simply the string after the last slash ('/') character of the secret's name. This means /prod/db/conn_string and /test/db/conn_string both resolve to just conn_string. To override this default, use entry_name:

from lambda_cache import secrets_manager

@secrets_manager.cache(name='/prod/db/conn_string', entry_name='new_var')
def handler(event, context):
    var = getattr(context,'new_var')
    response = do_something(var)
    return response

Decorator stacking

If you wish to cache multiple secrets, you can use decorator stacking.

from lambda_cache import secrets_manager

@secrets_manager.cache(name='/prod/db/conn_string', max_age_in_seconds=30)
@secrets_manager.cache(name='/prod/app/elk_username_password', max_age_in_seconds=60)
def handler(event, context):
    var1 = getattr(context,'conn_string')
    var2 = getattr(context,'elk_username_password')
    response = do_something(var1, var2)
    return response

Note: Decorator stacking performs one API call per decorator, which might result in slower performance.

Cache Invalidation

To invalidate a secret, use secrets_manager.get_entry, setting max_age_in_seconds=0.

from lambda_cache import secrets_manager

@secrets_manager.cache(name='/prod/db/conn_string')
def handler(event, context):

    try:
        response = db_connect()
    except AuthenticationError:
        # refresh the secret, then retry the connection
        conn_string = secrets_manager.get_entry(name='/prod/db/conn_string', max_age_in_seconds=0)
        response = db_connect()

    return response

Return Values

Secrets Manager supports both string and binary secrets. For simplicity, we cache the secret in the format it is stored in. It is up to the calling application to process the returned value as binary or string.

S3

S3 support is considered experimental for now, but within the Python community we see a lot of folks pulling down files from S3 for use in AI/ML models.

Files downloaded from S3 are automatically stored in the /tmp directory of the lambda function. This is the only writable directory within lambda, and has 512 MB of storage space.
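The mapping from S3 URI to local path can be sketched as follows (a hypothetical helper shown for illustration, not lambda-cache's actual code):

```python
from urllib.parse import urlparse

def tmp_path(s3Uri):
    # e.g. 's3://bucket_name/path/to/object.json' -> '/tmp/object.json'
    return "/tmp/" + urlparse(s3Uri).path.rsplit("/", 1)[-1]
```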

Cache a single file

To download a file from S3, use the same decorator pattern:

import json

from lambda_cache import s3

@s3.cache(s3Uri='s3://bucket_name/path/to/object.json')
def s3_download_entry_name(event, context):

    # Object from S3 automatically saved to /tmp directory
    with open("/tmp/object.json") as file_data:
        status = json.loads(file_data.read())['status']

    return status

Change Cache expiry

The default max_age_in_seconds setting is 60 seconds (1 minute); it defines how long a file should be kept in /tmp before it is refreshed from S3. To configure longer or shorter times, modify this argument like so:

import json

from lambda_cache import s3

@s3.cache(s3Uri='s3://bucket_name/path/to/object.json', max_age_in_seconds=300)
def s3_download_entry_name(event, context):
    with open("/tmp/object.json") as file_data:
        status = json.loads(file_data.read())['status']

    return status

Note: The caching logic runs only at invocation, regardless of how long the function runs. A 15-minute lambda function will not refresh the object unless it is explicitly refreshed using s3.get_entry. The library is primarily interested in caching 'across' invocations rather than 'within' an invocation.

Check file before download

By default, lambda_cache downloads the file once the cache has expired. However, to save on network bandwidth (and possibly time), you can set the check_before_download parameter to True. This checks the age of the object in S3 and downloads it only if the object has changed since the last download.

import json

from lambda_cache import s3

@s3.cache(s3Uri='s3://bucket_name/path/to/object.json', max_age_in_seconds=300, check_before_download=True)
def s3_download_entry_name(event, context):
    with open("/tmp/object.json") as file_data:
        status = json.loads(file_data.read())['status']

    return status

Note: we use the head_object API call to verify the object's LastModified date. This simplifies the IAM policy of the function, as it still only requires the s3:GetObject permission. However, this still counts as a GET request and is charged as such; for smaller objects it might be cheaper to just download the object.
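The decision the note describes reduces to a timestamp comparison; a hypothetical sketch (remote_last_modified stands in for the LastModified value a head_object call would return, and needs_download is not part of lambda-cache):

```python
from datetime import datetime, timezone

def needs_download(remote_last_modified, last_download_time):
    # Download when we have never fetched the object, or it changed in S3
    # after our last download.
    return last_download_time is None or remote_last_modified > last_download_time
```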

Credit

Project inspired by:
