flowerinthenight / spindle

License: Apache-2.0
A distributed locking library built on top of Cloud Spanner and TrueTime.

Programming Languages

Go

Projects that are alternatives of or similar to spindle

pontem
Open source tools for Google Cloud Storage and Databases.
Stars: ✭ 62 (+31.91%)
Mutual labels:  gcp, spanner
juice
A Java backend development library covering common utility classes, SPI extensions, distributed locks, rate limiting, distributed tracing, and more.
Stars: ✭ 32 (-31.91%)
Mutual labels:  distributed-lock
awesome-bigquery-views
Useful SQL queries for Blockchain ETL datasets in BigQuery.
Stars: ✭ 325 (+591.49%)
Mutual labels:  gcp
bigflow
A Python framework for data processing on GCP.
Stars: ✭ 96 (+104.26%)
Mutual labels:  gcp
GCPEditorPro
Amazingly fast and simple ground control points interface. ◎
Stars: ✭ 33 (-29.79%)
Mutual labels:  gcp
waihona
Rust crate for performing cloud storage CRUD actions across major cloud providers, e.g. AWS.
Stars: ✭ 46 (-2.13%)
Mutual labels:  gcp
devrel
Common solutions and tools developed for Apigee
Stars: ✭ 121 (+157.45%)
Mutual labels:  gcp
GoogleCloudLogging
Swift (Darwin) library for logging application events in Google Cloud.
Stars: ✭ 24 (-48.94%)
Mutual labels:  gcp
aruco-geobits
geobits: ArUco Ground Control Point Targets and Detection for Aerial Imagery (UAV/MAV).
Stars: ✭ 32 (-31.91%)
Mutual labels:  gcp
k8s-digester
Add digests to container and init container images in Kubernetes pod and pod template specs. Use either as a mutating admission webhook, or as a client-side KRM function with kpt or kustomize.
Stars: ✭ 65 (+38.3%)
Mutual labels:  gcp
sdk
Home of the JupiterOne SDK
Stars: ✭ 21 (-55.32%)
Mutual labels:  gcp
hive-bigquery-storage-handler
Hive Storage Handler for interoperability between BigQuery and Apache Hive
Stars: ✭ 16 (-65.96%)
Mutual labels:  gcp
tfquery
tfquery: Run SQL queries on your Terraform infrastructure. Query resources and analyze their configuration using a SQL-powered framework.
Stars: ✭ 297 (+531.91%)
Mutual labels:  gcp
hush gcp secret manager
A Google Secret Manager Provider for Hush
Stars: ✭ 17 (-63.83%)
Mutual labels:  gcp
polygon-etl
ETL (extract, transform and load) tools for ingesting Polygon blockchain data to Google BigQuery and Pub/Sub
Stars: ✭ 53 (+12.77%)
Mutual labels:  gcp
KuiBaDB
Another OLAP database
Stars: ✭ 297 (+531.91%)
Mutual labels:  spanner
gcp-serviceaccount-controller
A controller that automatically creates GCP service accounts and saves them into Kubernetes secrets.
Stars: ✭ 14 (-70.21%)
Mutual labels:  gcp
deploy-cloudrun
This action deploys your container image to Cloud Run.
Stars: ✭ 238 (+406.38%)
Mutual labels:  gcp
cloud-detect
Module that determines a host's cloud provider.
Stars: ✭ 28 (-40.43%)
Mutual labels:  gcp
course-material
Course Material for in28minutes courses on Java, Spring Boot, DevOps, AWS, Google Cloud, and Azure.
Stars: ✭ 544 (+1057.45%)
Mutual labels:  gcp


spindle

A distributed locking library built on top of Cloud Spanner. It uses Spanner's TrueTime and transaction support to achieve its locking mechanism.

Usage

At the moment, the table needs to be created beforehand using the following DDL (locktable is just an example):

CREATE TABLE locktable (
    name STRING(MAX) NOT NULL,
    heartbeat TIMESTAMP OPTIONS (allow_commit_timestamp=true),
    token TIMESTAMP OPTIONS (allow_commit_timestamp=true),
    writer STRING(MAX),
) PRIMARY KEY (name)
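
One way to create this table programmatically is through the Spanner database admin client, as sketched below. This is a minimal sketch, not part of spindle itself: createLockTable is a hypothetical helper, dbPath is the full database path (projects/<project>/instances/<instance>/databases/<database>), and depending on your client library version the admin request types may live under google.golang.org/genproto rather than the databasepb package used here.

import (
    "context"

    database "cloud.google.com/go/spanner/admin/database/apiv1"
    "cloud.google.com/go/spanner/admin/database/apiv1/databasepb"
)

// createLockTable applies the CREATE TABLE DDL above to the given database.
// Sketch only; adjust the table name and database path to your setup.
func createLockTable(ctx context.Context, dbPath string) error {
    adm, err := database.NewDatabaseAdminClient(ctx)
    if err != nil {
        return err
    }
    defer adm.Close()

    op, err := adm.UpdateDatabaseDdl(ctx, &databasepb.UpdateDatabaseDdlRequest{
        Database: dbPath,
        Statements: []string{`CREATE TABLE locktable (
    name STRING(MAX) NOT NULL,
    heartbeat TIMESTAMP OPTIONS (allow_commit_timestamp=true),
    token TIMESTAMP OPTIONS (allow_commit_timestamp=true),
    writer STRING(MAX),
) PRIMARY KEY (name)`},
    })
    if err != nil {
        return err
    }

    return op.Wait(ctx)
}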

This library doesn't use the usual synchronous "lock", "do protected work", "unlock" sequence; for that, you can check out the included lock package. Instead, after instantiating the lock object, you call the Run(...) function, which attempts to acquire a named lock at a regular interval (the lease duration) until cancelled. A HasLock() function is provided that returns true (along with the lock token) if the lock has been successfully acquired. Something like:

package main

import (
    "context"
    "log"
    "time"

    "cloud.google.com/go/spanner"
    "github.com/flowerinthenight/spindle"
)

func main() {
    db, err := spanner.NewClient(context.Background(), "your/database")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    done := make(chan error, 1)                               // notify me when done (optional)
    quit, cancel := context.WithCancel(context.Background()) // for cancellation

    // Instantiate the lock object with a 5s lease duration, using the locktable above.
    lock := spindle.New(db, "locktable", "mylock", spindle.WithDuration(5000))

    lock.Run(quit, done) // start the main loop, async

    time.Sleep(time.Second * 20)
    locked, token := lock.HasLock()
    log.Println("HasLock:", locked, token)
    time.Sleep(time.Second * 20)

    cancel()
    <-done
}

How it works

The initial lock (the lock record doesn't exist in the table yet) is acquired by a process using an SQL INSERT. Once the record is created (by one process), all other INSERT attempts will fail. In this phase, the commit timestamp of the locking process' transaction will be equal to the timestamp stored in the token column. This will serve as our fencing token in situations where multiple processes are somehow able to acquire a lock. Using this token, the real lock holder will start sending heartbeats by updating the heartbeat column.
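
To make the fencing-token idea concrete, here is a minimal sketch (not spindle's internal code) of how a holder could check that the token it holds is still the latest one before doing protected work. It assumes the locktable schema above, a *spanner.Client named db, a lock named mylock, and that the token this process acquired is available as a time.Time named mytoken; tokenStillValid is a hypothetical helper.

import (
    "context"
    "time"

    "cloud.google.com/go/spanner"
)

// tokenStillValid reports whether mytoken is still the latest fencing token
// stored in the lock table. Sketch only; not part of the spindle API.
func tokenStillValid(ctx context.Context, db *spanner.Client, mytoken time.Time) (bool, error) {
    row, err := db.Single().ReadRow(ctx, "locktable", spanner.Key{"mylock"}, []string{"token"})
    if err != nil {
        return false, err
    }

    var current time.Time
    if err := row.Columns(&current); err != nil {
        return false, err
    }

    // A newer token in the table means another process has taken over.
    return !current.After(mytoken), nil
}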

When a lock is active, all participating processes detect whether the lease has expired by checking the heartbeat against Spanner's current timestamp. If it has (say, the active locker has crashed or been cancelled), another round of SQL INSERT is attempted, this time using the name format <lockname_current-lock-token>. The process that gets the lock this round then attempts to update the token column with its commit timestamp, thus updating the fencing token. In the event that the original locker recovers (after a crash) or continues after a stop-the-world GC pause, the latest token should invalidate its locking claim (its token is already outdated).
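
As a rough illustration of the expiry check described above (again a sketch based on the locktable schema, not the library's actual query), a participant could compare the stored heartbeat against Spanner's current timestamp as follows; leaseExpired is a hypothetical helper, and the 5000 ms lease mirrors the duration used in the usage example.

import (
    "context"

    "cloud.google.com/go/spanner"
    "google.golang.org/api/iterator"
)

// leaseExpired reports whether the current holder's heartbeat is older than
// the lease duration (here 5000 ms). Sketch only; not part of the spindle API.
func leaseExpired(ctx context.Context, db *spanner.Client) (bool, error) {
    stmt := spanner.Statement{
        SQL: `SELECT name FROM locktable
              WHERE name = @name
              AND TIMESTAMP_DIFF(CURRENT_TIMESTAMP(), heartbeat, MILLISECOND) > @leasems`,
        Params: map[string]interface{}{"name": "mylock", "leasems": int64(5000)},
    }

    iter := db.Single().Query(ctx, stmt)
    defer iter.Stop()

    _, err := iter.Next()
    if err == iterator.Done {
        return false, nil // no row returned: the heartbeat is still fresh
    }
    if err != nil {
        return false, err
    }

    return true, nil // a row came back: the heartbeat is older than the lease
}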

A simple example is provided to demonstrate the mechanism through logs. You can try running multiple binaries in multiple terminals, or in a single terminal, like:

$ cd examples/simple/
$ go build -v
$ for num in 1 2 3; do ./simple & done