yildizdb / bigtable

License: MIT
TypeScript Bigtable Client with 🔋🔋 included.



bigtable-client

yarn add bigtable-client

Intro

This is a TypeScript Bigtable client that acts as a wrapper around the official Google package @google-cloud/bigtable. When working with Bigtable, we almost always felt the urge to wrap the API to add a pinch of convenience, to implement TTL on a per-cell basis, and to retrieve metadata (such as a simple row count) more efficiently.

This client automatically manages a metadata table and TTL jobs for every table that you manage through it. Additionally, it mimics the simple CRUD interface offered by many Redis packages, such as ioredis.
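
The kind of CRUD-plus-TTL-plus-count surface described above can be sketched with an in-memory stand-in. This is not the actual client — only the method names (set, get, delete, count) follow this README; the lazy eviction and the Map-based storage are assumptions for illustration:

```javascript
// In-memory stand-in for a CRUD interface with per-cell TTL and a row count.
// Illustrative only; the real client persists cells in Bigtable and keeps
// the count in a separate metadata table.
class FakeTable {
  constructor() {
    this.cells = new Map(); // rowKey -> { value, expiresAt }
  }

  set(rowKey, value, ttlSeconds = null) {
    const expiresAt = ttlSeconds ? Date.now() + ttlSeconds * 1000 : null;
    this.cells.set(rowKey, { value, expiresAt });
  }

  get(rowKey) {
    const cell = this.cells.get(rowKey);
    if (!cell) return null;
    if (cell.expiresAt !== null && Date.now() >= cell.expiresAt) {
      this.cells.delete(rowKey); // lazily evict an expired cell
      return null;
    }
    return cell.value;
  }

  delete(rowKey) {
    this.cells.delete(rowKey);
  }

  count() {
    return this.cells.size;
  }
}
```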

Additionally, the setup and all operations (except for scan) are optimized for sub-millisecond response times (depending on your Google Cloud Bigtable instance configuration), which helps you develop real-time applications on top of Bigtable. This client is not meant for analytical purposes, although that is fairly possible through scan operations.

Before you get started

Make sure to follow the setup described here. You will need a Google Cloud project with billing enabled, as well as a configured authentication flow, for this client to work.

Using

Using it is fairly simple:

First, you have to set up a factory instance, which receives the general configuration to connect to your Bigtable instance. NOTE: If the instance you describe does not exist, it will be created.

const {BigtableFactory} = require("bigtable-client");
const bigtableFactory = new BigtableFactory({

  projectId: "my-project-1", // -> see @google-cloud/bigtable configuration
  instanceName: "my-bigtable-cluster", // -> see @google-cloud/bigtable configuration
  //keyFilename: "keyfile.json", // -> see @google-cloud/bigtable configuration

  // optional:
  ttlScanIntervalMs: 5000,
  minJitterMs: 2000,
  maxJitterMs: 30000,
});
await bigtableFactory.init();
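
The minJitterMs/maxJitterMs options presumably randomize when each TTL scan job starts, so that many processes do not all scan at the same instant. A plausible sketch of such a jitter computation — an assumption about the mechanism, not the client's actual code:

```javascript
// Pick a random delay in [minJitterMs, maxJitterMs) before starting a
// TTL scan job, so concurrent processes spread their scans out.
function getJitter(minJitterMs, maxJitterMs) {
  return minJitterMs + Math.random() * (maxJitterMs - minJitterMs);
}

const delay = getJitter(2000, 30000);
// setTimeout(() => runTtlScan(), delay); // runTtlScan is hypothetical
```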

Then, using the factory, you can create handles for your tables very easily. You can see that we take away the complexity of handling column families and columns in general by assuming default values in the API, which can optionally be set via the config. However, the API always allows you to access cells directly (by passing a column name as a parameter), as well as to access and delete whole rows. Please bear in mind that the TTL is a number in seconds, and expired cells will be deleted on the next job run.

const myTable = await bigtableFactory.get({
  name: "mytable",

  // optional:
  columnFamily: "myfamily",
  defaultColumn: "default",
  maxVersions: 1,
});

const rowKey = "myrowkey";
const value = "myvalue";

await myTable.set(rowKey, value); // set the cell in the default column
await myTable.set(rowKey, value, 10, "newColumn"); // set with a TTL of 10 seconds, in the column "newColumn"

await myTable.ttl(rowKey); // read the remaining TTL of the cell

await myTable.multiSet(rowKey, {testColumn: "hello", anotherColumn: "yes"}, 5); // set multiple columns at once, with a TTL of 5 seconds

await myTable.increase(rowKey); // increment the cell's counter value
await myTable.decrease(rowKey); // decrement the cell's counter value

await myTable.bulkInsert([
  {
    row: "jean-paul",
    column: "sartre",
    data: "france",
  },
  {
    row: "emmanuel",
    column: "kant",
    data: "germany",
  },
  {
    row: "baruch",
    column: "spinoza",
    data: "netherland",
  },
], 5); // insert multiple rows at once, with a TTL of 5 seconds

await myTable.multiAdd(rowKey, {foo: 1, bar: -5}, 7); // add numeric values to multiple columns, with a TTL of 7 seconds
await myTable.get(rowKey); // get the value of the cell in the default column

await myTable.ttl(rowKey);
await myTable.count(); // row count, served from the metadata table

await myTable.getRow(rowKey); // get the whole row
await myTable.deleteRow(rowKey); // delete the whole row

myTable.close(); // or bigtableFactory.close();
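
The per-cell TTL semantics used above (a cell written with a TTL of n seconds is removed by the next job run after it expires) come down to a timestamp comparison. A minimal sketch — the write-timestamp bookkeeping and the millisecond conversion are assumptions:

```javascript
// Decide whether a cell written at `writtenAtMs` with a TTL of `ttlSeconds`
// should be deleted by a TTL job running at `nowMs`.
function isExpired(writtenAtMs, ttlSeconds, nowMs) {
  return nowMs >= writtenAtMs + ttlSeconds * 1000;
}
```

Because cells are only removed on the next job run, a cell can outlive its TTL by up to one ttlScanIntervalMs.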

You can also scan tables (be careful, as these operations are slow).

const filters = [
    {
        // -> check out the official api for bigtable filters: https://cloud.google.com/nodejs/docs/reference/bigtable/0.13.x/Filter#interleave
    }
];

const etl = (row) => {
    return row.id || null;
};

const cells = await myTable.scanCells(filters, etl);
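
How scanCells applies the etl function is not spelled out here; a reasonable reading — and an assumption, not documented behavior — is that every matched row is passed through etl and null results are dropped:

```javascript
// Hypothetical sketch of scanCells' post-processing: map each matched row
// through the etl callback and keep only non-null results.
function applyEtl(rows, etl) {
  return rows.map(etl).filter((result) => result !== null);
}

const rows = [{ id: "a" }, {}, { id: "b" }];
const etl = (row) => row.id || null;
applyEtl(rows, etl); // -> ["a", "b"]
```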

You can activate debug logs via the env variable DEBUG=yildiz:bigtable:*.

You can find additional implementation examples here:

License

License is MIT

Disclaimer

This project is not associated with Google.
