
replikativ / Datahike

License: EPL-1.0
A durable datalog implementation adaptable for distribution.

Programming Languages

clojure

Projects that are alternatives of or similar to Datahike

Nora
Nora is a Firebase abstraction layer for FirebaseDatabase and FirebaseStorage
Stars: ✭ 270 (-74.84%)
Mutual labels:  database, open-source
Deveeldb
DeveelDB is a complete SQL database system, primarly developed for .NET/Mono frameworks
Stars: ✭ 80 (-92.54%)
Mutual labels:  database, open-source
Cocorico
👐 Cocorico is an open source marketplace solution for services and rentals. More information right here: https://www.cocorico.io/en/ 🚀 Cocorico is also available in an off-the-shelf SaaS package, check out https://www.hatch.li to launch your platform today. 😍 We are hiring (telecommute welcome 🏡): https://www.welcometothejungle.com/en/companies/cocorico/jobs/candidatures-spontanees#apply
Stars: ✭ 765 (-28.7%)
Mutual labels:  database, open-source
Quick.db
An easy, open-sourced, Node.js database designed for complete beginners getting into the concept of coding.
Stars: ✭ 177 (-83.5%)
Mutual labels:  database, open-source
Knime Core
KNIME Analytics Platform
Stars: ✭ 302 (-71.85%)
Mutual labels:  database, open-source
Pizzaql
🍕 Modern OSS Order Management System for Pizza Restaurants
Stars: ✭ 631 (-41.19%)
Mutual labels:  database, open-source
Ethereumdb
Stars: ✭ 21 (-98.04%)
Mutual labels:  database, open-source
Php E Invoice It
A PHP package for managing Italian e-invoice and notice XML formats, as required by the SdI.
Stars: ✭ 53 (-95.06%)
Mutual labels:  open-source
Bpmn Elements
Executable workflow elements based on BPMN 2.0
Stars: ✭ 54 (-94.97%)
Mutual labels:  open-source
Sessionstore
Sessionstore is a node.js module for multiple databases. It can be very useful if you work with express or connect.
Stars: ✭ 52 (-95.15%)
Mutual labels:  database
Gormt
database to golang struct
Stars: ✭ 1,063 (-0.93%)
Mutual labels:  database
Dbbench
🏋️ dbbench is a simple database benchmarking tool which supports several databases and own scripts
Stars: ✭ 52 (-95.15%)
Mutual labels:  database
Electrophysiologysoftware
A list of openly available software tools for (mostly human) electrophysiology.
Stars: ✭ 54 (-94.97%)
Mutual labels:  open-source
Coronavirus Countries
COVID-19 interactive dashboard for the whole world
Stars: ✭ 53 (-95.06%)
Mutual labels:  open-source
Ansible Role Memcached
Ansible Role - Memcached
Stars: ✭ 54 (-94.97%)
Mutual labels:  database
Java Client Api
Java client for the MarkLogic enterprise NoSQL database
Stars: ✭ 52 (-95.15%)
Mutual labels:  database
Fifa Fut Data
Web-scraping script that writes the data of all players from FutHead and FutBin to a CSV file or a DB
Stars: ✭ 55 (-94.87%)
Mutual labels:  database
Nodejs Driver
DataStax Node.js Driver for Apache Cassandra
Stars: ✭ 1,074 (+0.09%)
Mutual labels:  database
Hacktoberfest 2020
Learn how to Open your First PR (Pull Request) and contribute towards Open Source
Stars: ✭ 54 (-94.97%)
Mutual labels:  open-source
East
node.js database migration tool
Stars: ✭ 53 (-95.06%)
Mutual labels:  database

Datahike

Datahike is a durable Datalog database powered by an efficient Datalog query engine. The project started as a port of DataScript to the hitchhiker-tree. All DataScript tests pass, but we are still working on the internals. That said, we consider Datahike usable for medium-sized projects: DataScript is very mature and deployed in many applications, and the hitchhiker-tree implementation is heavily tested through generative testing. We build on these two projects and on the storage backends that konserve provides for the hitchhiker-tree. We would like to hear experience reports and are happy if you join us.

You can find API documentation on cljdoc and articles on Datahike on our company's blog page.


We have also presented Datahike at meetups.

Usage

Add to your dependencies:

Clojars Project
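For a tools.deps project the coordinates can be added to deps.edn. A minimal sketch, assuming the io.replikativ/datahike Clojars coordinate; the version string is a placeholder, check Clojars for the current release:

```clojure
;; deps.edn -- the version below is a placeholder,
;; look up the latest release on Clojars
{:deps {io.replikativ/datahike {:mvn/version "<latest-version>"}}}
```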

At the moment we provide a small stable API for the JVM, but the on-disk schema is not fixed yet. Until we reach a stable on-disk schema we will provide migration guides. Take a look at the ChangeLog before upgrading.

(require '[datahike.api :as d])


;; use the filesystem as storage medium
(def cfg {:store {:backend :file :path "/tmp/example"}})

;; create a database at this path; by default the configuration enforces a
;; strict schema and keeps all historical data
(d/create-database cfg)

(def conn (d/connect cfg))

;; the first transaction will be the schema we are using
;; you may also add this during database creation by supplying :initial-tx
;; in the configuration
(d/transact conn [{:db/ident :name
                   :db/valueType :db.type/string
                   :db/cardinality :db.cardinality/one}
                  {:db/ident :age
                   :db/valueType :db.type/long
                   :db/cardinality :db.cardinality/one}])

;; let's add some data and wait for the transaction
(d/transact conn [{:name "Alice" :age 20}
                  {:name "Bob" :age 30}
                  {:name "Charlie" :age 40}
                  {:age 15}])

;; search the data
(d/q '[:find ?e ?n ?a
       :where
       [?e :name ?n]
       [?e :age ?a]]
  @conn)
;; => #{[3 "Alice" 20] [4 "Bob" 30] [5 "Charlie" 40]}

;; add new entity data using a hash map
(d/transact conn {:tx-data [{:db/id 3 :age 25}]})

;; if you want to work with queries like in
;; https://grishaev.me/en/datomic-query/,
;; you may use a hashmap
(d/q {:query '{:find [?e ?n ?a ]
               :where [[?e :name ?n]
                       [?e :age ?a]]}
      :args [@conn]})
;; => #{[5 "Charlie" 40] [4 "Bob" 30] [3 "Alice" 25]}

;; query the history of the data
(d/q '[:find ?a
       :where
       [?e :name "Alice"]
       [?e :age ?a]]
  (d/history @conn))
;; => #{[20] [25]}

;; you might need to release the connection for specific stores like leveldb
(d/release conn)

;; clean up the database if it is no longer needed
(d/delete-database cfg)
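As mentioned in the comments above, the schema transaction can also be supplied at database creation time via :initial-tx. A minimal configuration sketch reusing the schema from the example above (the store path is illustrative):

```clojure
;; equivalent setup: create and initialize the database in one step by
;; passing the schema as :initial-tx in the configuration
(def cfg-with-schema
  {:store {:backend :file :path "/tmp/example-with-schema"}
   :initial-tx [{:db/ident       :name
                 :db/valueType   :db.type/string
                 :db/cardinality :db.cardinality/one}
                {:db/ident       :age
                 :db/valueType   :db.type/long
                 :db/cardinality :db.cardinality/one}]})

(d/create-database cfg-with-schema)
(def conn2 (d/connect cfg-with-schema))
```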

The API namespace provides compatibility with a subset of Datomic functionality and should work as a drop-in replacement on the JVM. The rest of Datahike will be ported to core.async to coordinate IO in a platform-neutral manner.

Refer to the docs for more information.

For simple examples have a look at the projects in the examples folder.


Performance Measurement

A small command-line utility is integrated into this project to measure the performance of the in-memory and file backends.

To run the benchmarks, navigate to the project folder in your terminal and run

clj -A:benchmark

You will receive a list describing what has been tested, together with the mean of the measured times in milliseconds, as follows:

[ ;; ...
 {:context
  {:db
   {:store {:backend :mem, :id "performance-hht"},
    :schema-flexibility :write,
    :keep-history? true,
    :index :datahike.index/hitchhiker-tree},
   :function :transaction,
   :db-size 1000,
   :tx-size 10},
  :mean-time 5.0185512 ;; ms
 }
  ;; ...
]

The functions tested are

  • connect with keyword :connection
    • :db-size describes the number of datoms in the database the connection is being established to
  • transact with keyword :transaction
    • :tx-size describes the number of datoms inserted into the database
    • :db-size describes the number of datoms in the database before the transaction
  • q with keywords :query1 and :query2
    • :db-size describes the number of datoms in the database
    • queries are defined as in the following examples:
(def query1 '[:find ?e :where [?e :s1 "string"]])

(def query2 '[:find ?a :where
              [?e :s1 ?a]
              [?e :i1 42]])

Relationship to Datomic and DataScript

Datahike provides functionality similar to Datomic's and can be used as a drop-in replacement for a subset of it. The goal of Datahike is not to provide an open-source reimplementation of Datomic; rather, it is part of the replikativ toolbox aimed at building distributed data management solutions. We have spoken to many backend engineers and Clojure developers who stayed away from Datomic solely because of its proprietary nature, and we think Datahike should make an approach to Datomic easier in this regard. Conversely, people who only want the goodness of Datalog in small-scale applications should not have to worry about setting up and depending on Datomic.

Some differences are:

  • Datahike runs locally on one peer. A transactor might be provided in the future and can also be realized through any linearizing write mechanism, e.g. Apache Kafka. If you are interested, please contact us.
  • Datahike provides the database as a transparent value, i.e. you can directly access the index data structures (hitchhiker-tree) and leverage their persistent nature for replication. These internals are not guaranteed to stay stable, but they provide useful insight into what is going on and can be optimized.
  • Datomic has a REST interface and a Java API
  • Datomic provides timeouts

Datomic is a full-fledged, scalable database (as a service) built by the authors of Clojure and people with a lot of experience. If you need this kind of professional support, you should definitely stick to Datomic.

Datahike's query engine and most of its codebase come from DataScript. Without the work on DataScript, Datahike would not have been possible. Differences from Datomic with respect to the query engine are documented there.

When should I pick what?

Datahike

Pick Datahike if your app has modest requirements for a typical durable database, e.g. a single machine and at most a few million entities. Likewise, if you want an open-source solution and the ability to study and tinker with the codebase of your database, Datahike provides a comparatively small and well-composed codebase that you can tweak to your needs. You should also always be able to migrate to Datomic easily later.

Datomic

Pick Datomic if you already know that you will need scalability later or if you need a network API for your database. There is also plenty of material about Datomic online already. Most of it applies in some form or another to Datahike, but it might be easier to use Datomic directly when you first learn Datalog.

DataScript

Pick DataScript if you want the fastest possible query performance and do not have a huge amount of data. You can persist the write operations separately and then use DataScript's fast in-memory index data structure. Note that Datahike does not currently support ClojureScript, although we plan to restore this functionality.

ClojureScript support

ClojureScript support is planned and work in progress. Please see the Roadmap.

Migration & Backup

The database can be exported to a flat file with:

(require '[datahike.migrate :refer [export-db import-db]])
(export-db @conn "/tmp/eavt-dump")

You must do so before upgrading to a Datahike version that has changed the on-disk format. This can happen until we arrive at version 1.0.0 and will always be communicated through the Changelog. After you have bumped the Datahike version you can use

;; ... setup new-conn (recreate with correct schema)

(import-db new-conn "/tmp/eavt-dump")

to reimport your data into the new format.

The datoms are stored as strings in a line-based format, so you can easily check whether your dump contains reasonable data, and you can do some string-based editing of the database. The export can also serve as a backup.

If you are upgrading from a version before 0.1.2, which did not yet include the migration code, evaluate the datahike.migrate namespace manually in your project before exporting.

Have a look at the change log for recent updates.

Roadmap

0.4.0

  • identity and access management
  • CRDT type schema support
  • fast redis backend support
  • query planner and optimizer
  • transaction monitoring

0.5.0

  • optionally use core.async to handle storage IO
  • ClojureScript support both in the browser and on node

0.6.0

  • support GC or eager deletion of fragments
  • use hitchhiker-tree synchronization for replication
  • run comprehensive query suite and compare to DataScript and Datomic
  • support anomaly errors (?)

1.0.0

Commercial support

We are happy to provide commercial support through lambdaforge. If you are interested in a particular feature, please let us know.

License

Copyright © 2014–2020 Konrad Kühne, Christian Weilbach, Chrislain Razafimahefa, Timo Kramer, Judith Massa, Nikita Prokopov

Licensed under Eclipse Public License (see LICENSE).

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].