
sajari / storage

License: MIT
Go package for abstracting local, in-memory, and remote (Google Cloud Storage/S3) filesystems

Programming Languages

go
31211 projects - #10 most used programming language

Projects that are alternatives of or similar to storage

nestjs-storage
Nestjs file system / file storage module wrapping flydrive
Stars: ✭ 92 (+87.76%)
Mutual labels:  filesystem, s3, google-cloud-storage
Drone Cache
A Drone plugin for caching current workspace files between builds to reduce your build times
Stars: ✭ 194 (+295.92%)
Mutual labels:  cache, s3, google-cloud-storage
Flydrive
☁️ Flexible and Fluent framework-agnostic driver based system to manage storage in Node.js
Stars: ✭ 275 (+461.22%)
Mutual labels:  filesystem, s3, google-cloud-storage
Mc
MinIO Client is a replacement for ls, cp, mkdir, diff and rsync commands for filesystems and object storage.
Stars: ✭ 1,962 (+3904.08%)
Mutual labels:  filesystem, s3, google-cloud-storage
Goofys
a high-performance, POSIX-ish Amazon S3 file system written in Go
Stars: ✭ 3,932 (+7924.49%)
Mutual labels:  filesystem, s3, google-cloud-storage
Python Diskcache
Python disk-backed cache (Django-compatible). Faster than Redis and Memcached. Pure-Python.
Stars: ✭ 992 (+1924.49%)
Mutual labels:  filesystem, cache
S3fs
Amazon S3 filesystem for PyFilesystem2
Stars: ✭ 111 (+126.53%)
Mutual labels:  filesystem, s3
Afs
Abstract File Storage
Stars: ✭ 126 (+157.14%)
Mutual labels:  filesystem, s3
Tus Ruby Server
Ruby server for tus resumable upload protocol
Stars: ✭ 172 (+251.02%)
Mutual labels:  filesystem, s3
Chubaofs
ChubaoFS (abbrev. CBFS) is a cloud native distributed file system and object store.
Stars: ✭ 2,482 (+4965.31%)
Mutual labels:  filesystem, s3
kafka-connect-fs
Kafka Connect FileSystem Connector
Stars: ✭ 107 (+118.37%)
Mutual labels:  filesystem, s3
S3fs Fuse
FUSE-based file system backed by Amazon S3
Stars: ✭ 5,733 (+11600%)
Mutual labels:  filesystem, s3
S5cmd
Parallel S3 and local filesystem execution tool.
Stars: ✭ 565 (+1053.06%)
Mutual labels:  filesystem, s3
Catfs
Cache AnyThing filesystem written in Rust
Stars: ✭ 404 (+724.49%)
Mutual labels:  filesystem, cache
go-fsimpl
Go io/fs.FS filesystem implementations for various URL schemes
Stars: ✭ 225 (+359.18%)
Mutual labels:  filesystem, s3
Infinit
The Infinit policy-based software-defined storage platform.
Stars: ✭ 363 (+640.82%)
Mutual labels:  filesystem, s3
ob bulkstash
Bulk Stash is a Docker rclone service to sync or copy files between different storage services. For example, you can copy files between remote storage services such as Amazon S3 and Google Cloud Storage, or from your laptop to a remote storage service.
Stars: ✭ 113 (+130.61%)
Mutual labels:  s3, google-cloud-storage
chicon-rs
A file abstraction system for Rust
Stars: ✭ 55 (+12.24%)
Mutual labels:  filesystem, s3
Juicefs
JuiceFS is a distributed POSIX file system built on top of Redis and S3.
Stars: ✭ 4,262 (+8597.96%)
Mutual labels:  filesystem, s3
acid-store
A library for secure, deduplicated, transactional, and verifiable data storage
Stars: ✭ 48 (-2.04%)
Mutual labels:  filesystem, s3

Storage


storage is a Go package which abstracts file systems (local, in-memory, Google Cloud Storage, S3) into a few interfaces. It includes convenience wrappers for simplifying common file system use cases such as caching, prefix isolation and more!

Requirements

Installation

$ go get code.sajari.com/storage
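
The snippets below omit imports for brevity; they assume the standard library packages they use ("context", "fmt", "io") plus the package itself, imported via the same path used above, e.g.:

import (
	"context"
	"fmt"
	"io"

	"code.sajari.com/storage"
)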

Usage

For full documentation see: http://godoc.org/code.sajari.com/storage/.

All storage implementations in this package follow two simple interfaces designed for working with file systems.

type FS interface {
	Walker

	// Open opens an existing file at path in the filesystem.  Callers must close the
	// File when done to release all underlying resources.
	Open(ctx context.Context, path string) (*File, error)

	// Create makes a new file in the filesystem.  Callers must close the
	// returned WriteCloser and check the error to be sure that the file
	// was successfully written.
	Create(ctx context.Context, path string) (io.WriteCloser, error)

	// Delete removes a file from the filesystem.
	Delete(ctx context.Context, path string) error
}

// WalkFn is a function type which is passed to Walk.
type WalkFn func(path string) error

// Walker is an interface which defines the Walk method.
type Walker interface {
	// Walk traverses a path listing by prefix, calling fn with each object path rewritten
	// to be relative to the underlying filesystem and provided path.
	Walk(ctx context.Context, path string, fn WalkFn) error
}
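
Because every implementation satisfies FS, helpers can be written once against the interface and reused with any backend. Below is a minimal, illustrative sketch; copyFile is not part of the package, and it assumes the *File returned by Open can be read directly (it wraps the underlying reader):

// copyFile copies path from src to dst using only the FS interface.
// Illustrative helper, not part of the storage package.
func copyFile(ctx context.Context, dst, src storage.FS, path string) error {
	f, err := src.Open(ctx, path)
	if err != nil {
		return err
	}
	defer f.Close()

	wc, err := dst.Create(ctx, path)
	if err != nil {
		return err
	}
	if _, err := io.Copy(wc, f); err != nil {
		wc.Close()
		return err
	}
	// Close flushes the write; its error must be checked.
	return wc.Close()
}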

Local

Local is the default implementation of a local file system (i.e. using os.Open etc).

local := storage.Local("/some/root/path")
f, err := local.Open(context.Background(), "file.json") // will open "/some/root/path/file.json"
if err != nil {
	// ...
}
// ...
f.Close()

Memory

Mem is the default in-memory implementation of a file system.

mem := storage.Mem()
wc, err := mem.Create(context.Background(), "file.txt")
if err != nil {
	// ...
}
if _, err := io.WriteString(wc, "Hello World!"); err != nil {
	// ...
}
if err := wc.Close(); err != nil {
	// ...
}

And now:

f, err := mem.Open(context.Background(), "file.txt")
if err != nil {
	// ...
}
// ...
f.Close()
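
Walk can then be used to list what has been written. A short sketch continuing from the example above (passing an empty path to mean "walk everything" is an assumption here):

err := mem.Walk(context.Background(), "", func(path string) error {
	fmt.Println(path) // e.g. "file.txt"
	return nil
})
if err != nil {
	// ...
}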

Google Cloud Storage

CloudStorage is the default implementation backed by Google Cloud Storage. It uses https://godoc.org/golang.org/x/oauth2/google#DefaultTokenSource for authentication.

store := storage.CloudStorage{Bucket: "some-bucket"}
f, err := store.Open(context.Background(), "file.json") // will fetch "gs://some-bucket/file.json"
if err != nil {
	// ...
}
// ...
f.Close()
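
Writing works the same way as the in-memory example: Create returns an io.WriteCloser whose Close error must be checked. A sketch reusing the store value above:

wc, err := store.Create(context.Background(), "file.json") // will write "gs://some-bucket/file.json"
if err != nil {
	// ...
}
if _, err := io.WriteString(wc, `{"hello": "world"}`); err != nil {
	// ...
}
if err := wc.Close(); err != nil {
	// ...
}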

S3

Not yet implemented! Watch this space.

Wrappers and Helpers

Simple Caching

To use Cloud Storage as a source file system, but cache all opened files in a local filesystem:

src := storage.CloudStorage{Bucket: "some-bucket"}
local := storage.Local("/scratch-space")

fs := storage.Cache(src, local)
f, err := fs.Open(context.Background(), "file.json") // will check the local cache, then fetch "gs://some-bucket/file.json" and cache it
if err != nil {
	// ...
}
// ...
f.Close()

f, err := fs.Open(context.Background(), "file.json") // should now be cached ("/scratch-space/file.json")
if err != nil {
	// ...
}
// ...
f.Close()

This is particularly useful when distributing files across multiple regions or between cloud providers. For instance, we could add the following code to the previous example:

mainSrc := storage.CloudStorage{Bucket: "some-bucket-in-another-region"}
fs2 := storage.Cache(mainSrc, fs) // fs is from previous snippet

// Open will:
// 1. Try local (see above)
// 2. Try gs://some-bucket
// 3. Try gs://some-bucket-in-another-region, which will be cached in gs://some-bucket and then local on its
//    way back to the caller.
f, err := fs2.Open(context.Background(), "file.json") // will fetch "gs://some-bucket-in-another-region/file.json"
if err != nil {
	// ...
}
// ...
f.Close()

f, err := fs2.Open(context.Background(), "file.json") // will fetch "/scratch-space/file.json"
if err != nil {
	// ...
}
// ...
f.Close()

Adding prefixes to paths

If you're writing code that relies on a set directory structure, it can be very messy to have to pass path patterns around. You can avoid this by wrapping storage.FS implementations with storage.Prefix, which rewrites all incoming paths.

modelFS := storage.Prefix(rootFS, "models/")
f, err := modelFS.Open(context.Background(), "file.json") // will call rootFS.Open with path "models/file.json"
if err != nil {
	// ...
}
// ...
f.Close()

It's also now simple to write wrapper functions to abstract out more complex directory structures.

func UserFS(fs storage.FS, userID, mediaType string) storage.FS {
	return storage.Prefix(fs, fmt.Sprintf("%v/%v/", userID, mediaType))
}

userFS := UserFS(rootFS, "1111", "pics")
f, err := userFS.Open(context.Background(), "beach.png") // will call rootFS.Open with path "1111/pics/beach.png"
if err != nil {
	// ...
}
// ...
f.Close()
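
Because each wrapper both accepts and returns a storage.FS, they compose cleanly. For example, a prefixed view of a locally cached bucket; a sketch combining the earlier examples:

src := storage.CloudStorage{Bucket: "some-bucket"}
cached := storage.Cache(src, storage.Local("/scratch-space"))
modelFS := storage.Prefix(cached, "models/")

f, err := modelFS.Open(context.Background(), "file.json") // resolves to "models/file.json" in the cached bucket
if err != nil {
	// ...
}
// ...
f.Close()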