tylerreisinger / Cache Macro

License: MIT
A procedural attribute macro to automatically cache the results of a function call for a given set of arguments.

Programming Languages

Rust

Projects that are alternatives to or similar to Cache Macro

Synchrotron
Caching layer load balancer.
Stars: ✭ 42 (-28.81%)
Mutual labels:  cache
Cachelper
Stars: ✭ 48 (-18.64%)
Mutual labels:  cache
Exploit Discord Cache System Poc
Exploit Discord's cache system to remotely upload payloads onto Discord users' machines
Stars: ✭ 51 (-13.56%)
Mutual labels:  cache
Extended image
A powerful official extension library for images, supporting placeholder (loading) and failed states, network caching, zoom/pan, photo view, slide-out page, editing (crop, rotate, flip), custom painting, etc.
Stars: ✭ 1,021 (+1630.51%)
Mutual labels:  cache
Pomodoro
A simple WordPress translation cache
Stars: ✭ 47 (-20.34%)
Mutual labels:  cache
Backfill
A JavaScript caching library for reducing build time
Stars: ✭ 50 (-15.25%)
Mutual labels:  cache
Spring Boot
A summary of hands-on spring-boot project practice
Stars: ✭ 989 (+1576.27%)
Mutual labels:  cache
Oss.common
OSS base class library, covering basic entities, encryption algorithms, XML serialization, and other extension methods
Stars: ✭ 56 (-5.08%)
Mutual labels:  cache
Redis Tag Cache
Cache and invalidate records in Redis with tags
Stars: ✭ 48 (-18.64%)
Mutual labels:  cache
Object Cache
Simple Ruby object caching solution using Redis as a backend
Stars: ✭ 50 (-15.25%)
Mutual labels:  cache
Wheel
Custom implementations of many small utilities covering net, nio, os, cache, db, rpc, json, web, http, udp, tcp, mq, and more
Stars: ✭ 45 (-23.73%)
Mutual labels:  cache
Cache
A promise-aware caching API for Amp.
Stars: ✭ 46 (-22.03%)
Mutual labels:  cache
Guzzle Cache Middleware
A Guzzle Cache middleware
Stars: ✭ 50 (-15.25%)
Mutual labels:  cache
Rxnetwork
A Swift network library based on Moya/RxSwift.
Stars: ✭ 43 (-27.12%)
Mutual labels:  cache
Ansible Role Memcached
Ansible Role - Memcached
Stars: ✭ 54 (-8.47%)
Mutual labels:  cache
Python Diskcache
Python disk-backed cache (Django-compatible). Faster than Redis and Memcached. Pure-Python.
Stars: ✭ 992 (+1581.36%)
Mutual labels:  cache
Easycaching
💥 EasyCaching is an open source caching library that covers basic and some advanced caching usages, helping you handle caching more easily!
Stars: ✭ 1,047 (+1674.58%)
Mutual labels:  cache
Kirby3 Autoid
Automatic unique ID for Pages, Files and Structures including performant helpers to retrieve them. Bonus: Tiny-URL.
Stars: ✭ 58 (-1.69%)
Mutual labels:  cache
Apollo Invalidation Policies
An extension of the Apollo 3 cache with support for type-based invalidation policies.
Stars: ✭ 55 (-6.78%)
Mutual labels:  cache
Fastcache
Fast thread-safe in-memory cache for a large number of entries in Go. Minimizes GC overhead
Stars: ✭ 1,051 (+1681.36%)
Mutual labels:  cache

cache-macro

Badges: Build Status · cache-macro on docs.rs · cache-macro on crates.io

A procedural macro to automatically cache the result of a function given a set of inputs.

Previously named 'lru-cache-macros', but renamed to reflect the broadening of scope.

Example:

use cache_macro::cache;
use lru_cache::LruCache;

#[cache(LruCache : LruCache::new(20))]
fn fib(x: u32) -> u64 {
    println!("{:?}", x);
    if x <= 1 {
        1
    } else {
        fib(x - 1) + fib(x - 2)
    }
}

assert_eq!(fib(19), 6765);

The above example executes the body of fib only twenty times, once for each value from 0 to 19; because of the recursion, every repeated lookup hits the cache.
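
A quick way to see this: call fib again with an argument that is already cached. The body, including its println!, never runs a second time (this check assumes the entry is still resident, which holds here since all twenty values fit in the capacity of 20):

assert_eq!(fib(19), 6765); // prints nothing: the result comes straight from the cache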

Usage:

Simply place #[cache(CacheType : constructor)] above your function. The function must satisfy a few requirements for the macro to work:

  • All arguments and return values must implement Clone.
  • The function may not take self in any form.

The cache type used must accept two generic parameters <Args, Return> and must support the methods get_mut(&Args) -> Option<&mut Return> and insert(Args, Return). The lru-cache crate (for LRU caching) and the expiring_map crate (for time-to-live caching) currently meet these requirements.
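
To make the interface concrete, here is a minimal sketch of a hand-rolled cache type that satisfies these requirements (PlainCache is a hypothetical name, not part of this crate; it never evicts, so it is purely illustrative):

use std::collections::HashMap;
use std::hash::Hash;

struct PlainCache<Args, Return> {
    map: HashMap<Args, Return>,
}

impl<Args: Eq + Hash, Return> PlainCache<Args, Return> {
    fn new() -> Self {
        PlainCache { map: HashMap::new() }
    }

    // Called by the generated wrapper to look up a previously cached result.
    fn get_mut(&mut self, args: &Args) -> Option<&mut Return> {
        self.map.get_mut(args)
    }

    // Called by the generated wrapper to store a freshly computed result.
    fn insert(&mut self, args: Args, ret: Return) -> Option<Return> {
        self.map.insert(args, ret)
    }
}

Assuming the macro accepts any in-scope type with this shape, it would be plugged in as #[cache(PlainCache : PlainCache::new())].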

Currently, this crate only works on nightly Rust. However, once the 2018 edition and the procedural macro diagnostic interface stabilize, it should be able to run on stable.

Configuration:

The cache macro can be configured by adding additional attributes under #[cache(...)].

All configuration attributes take the form #[cache_cfg(...)]. The available attributes are:

  • #[cache_cfg(ignore_args = ...)]

This allows certain arguments to be ignored for caching purposes: they are not made part of the hash-table key, and so they must never influence the output of the function. This is useful for diagnostic settings, counting how many times the function executed, or other introspection purposes.

ignore_args takes a comma-separated list of variable identifiers to ignore.

Example:

use cache_macro::cache;
use lru_cache::LruCache;
#[cache(LruCache : LruCache::new(20))]
#[cache_cfg(ignore_args = call_count)]
fn fib(x: u64, call_count: &mut u32) -> u64 {
    *call_count += 1;
    if x <= 1 {
        1
    } else {
        fib(x - 1, call_count) + fib(x - 2, call_count)
    }
}

let mut call_count = 0;
assert_eq!(fib(39, &mut call_count), 102_334_155);
assert_eq!(call_count, 40);

The call_count argument can vary; caching is done based only on x.
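
To confirm that the ignored argument plays no role in the lookup, a repeated call can be made with a fresh counter; it is served entirely from the cache, so the body never runs (other_count is a hypothetical name introduced for this check, and it assumes (39,) is still resident, which holds because it was the most recently inserted key):

let mut other_count = 0;
assert_eq!(fib(39, &mut other_count), 102_334_155);
assert_eq!(other_count, 0); // cache hit: the function body never executed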

  • #[cache_cfg(thread_local)]

Store the cache in thread-local storage instead of global static storage. This avoids the overhead of Mutex locking, but each thread gets its own cache, and caching in one thread does not affect any other thread.

Expanding on the first example:

use cache_macro::cache;
use lru_cache::LruCache;

#[cache(LruCache : LruCache::new(20))]
#[cache_cfg(thread_local)]
fn fib(x: u32) -> u64 {
    println!("{:?}", x);
    if x <= 1 {
        1
    } else {
        fib(x - 1) + fib(x - 2)
    }
}

assert_eq!(fib(19), 6765);

Details

The created cache is stored as a static variable protected by a mutex unless the #[cache_cfg(thread_local)] configuration is added.

With the default settings, the Fibonacci example will generate the following code:

fn __lru_base_fib(x: u32) -> u64 {
    if x <= 1 { 1 } else { fib(x - 1) + fib(x - 2) }
}
fn fib(x: u32) -> u64 {
    use lazy_static::lazy_static;
    use std::sync::Mutex;

    lazy_static! {
        static ref cache: Mutex<::lru_cache::LruCache<(u32,), u64>> =
            Mutex::new(::lru_cache::LruCache::new(20usize));
    }

    let cloned_args = (x.clone(),);
    let mut cache_unlocked = cache.lock().unwrap();
    let stored_result = cache_unlocked.get_mut(&cloned_args);
    if let Some(stored_result) = stored_result {
        return stored_result.clone();
    };
    drop(cache_unlocked);
    let ret = __lru_base_fib(x);
    let mut cache_unlocked = cache.lock().unwrap();
    cache_unlocked.insert(cloned_args, ret.clone());
    ret
}

Whereas, if you use #[cache_cfg(thread_local)], the generated code will look like:

fn __lru_base_fib(x: u32) -> u64 {
    if x <= 1 { 1 } else { fib(x - 1) + fib(x - 2) }
}
fn fib(x: u32) -> u64 {
    use std::cell::UnsafeCell;
    use std::thread_local;

    thread_local!(
         static cache: UnsafeCell<::lru_cache::LruCache<(u32,), u64>> =
             UnsafeCell::new(::lru_cache::LruCache::new(20usize));
    );

    cache.with(|c|
        {
            let mut cache_ref = unsafe { &mut *c.get() };
            let cloned_args = (x.clone(),);
            let stored_result = cache_ref.get_mut(&cloned_args);
            if let Some(stored_result) = stored_result {
                stored_result.clone()
            } else {
                let ret = __lru_base_fib(x);
                cache_ref.insert(cloned_args, ret.clone());
                ret
            }
        })
}