preset-io / elasticsearch-dbapi

Licence: Apache-2.0 license
A DBAPI and SQLAlchemy dialect for Elasticsearch


ElasticSearch DBAPI


elasticsearch-dbapi implements a DBAPI (PEP 249) and a SQLAlchemy dialect that provide query-only SQL access to Elasticsearch clusters.

  • On Elastic Elasticsearch: uses the Elastic X-Pack SQL API
  • On AWS ES / Open Distro Elasticsearch: uses Open Distro SQL

This library supports Elasticsearch 7.X versions.

Installation

$ pip install elasticsearch-dbapi

To install support for AWS Elasticsearch Service / Open Distro:

$ pip install elasticsearch-dbapi[opendistro]

Usage:

Using DBAPI:

from es.elastic.api import connect

conn = connect(host='localhost')
curs = conn.cursor()
curs.execute(
    "select * from flights LIMIT 10"
)
print([row for row in curs])

Using SQLAlchemy execute:

from sqlalchemy.engine import create_engine

engine = create_engine("elasticsearch+http://localhost:9200/")
rows = engine.connect().execute(
    "select * from flights LIMIT 10"
)
print([row for row in rows])

Using SQLAlchemy:

from sqlalchemy import func, select
from sqlalchemy.engine import create_engine
from sqlalchemy.schema import MetaData, Table


engine = create_engine("elasticsearch+http://localhost:9200/")
logs = Table("flights", MetaData(bind=engine), autoload=True)
count = select([func.count("*")], from_obj=logs).scalar()
print(f"COUNT: {count}")

Using SQLAlchemy reflection:

from sqlalchemy.engine import create_engine
from sqlalchemy.schema import Table, MetaData

engine = create_engine("elasticsearch+http://localhost:9200/")
logs = Table("flights", MetaData(bind=engine), autoload=True)
print(engine.table_names())

metadata = MetaData()
metadata.reflect(bind=engine)
print([table for table in metadata.sorted_tables])
print(logs.columns)

Connection Parameters:

elasticsearch-py, the official Elastic Python client, is used to establish connections and handle transport. The Elasticsearch constructor accepts multiple optional parameters that configure the connection for security, performance, and high availability. These optional parameters can be set on the connection string, for example:

   elasticsearch+http://localhost:9200/?http_compress=True&timeout=100

will set the transport to use gzip (http_compress) and the timeout to 100 seconds.

For more information on configuration options, see elasticsearch-py's documentation.

The connection string follows RFC 1738. To support multiple nodes, use the sniff_* parameters.
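These options ride along as query-string parameters on the RFC 1738 URL. As a sketch of how such a URL decomposes (the parameter values are illustrative), SQLAlchemy's own URL parser can be used to inspect it:

```python
from sqlalchemy.engine.url import make_url

# Query-string options on the connection URL are handed through to the
# elasticsearch-py client by the dialect.
url = make_url("elasticsearch+http://localhost:9200/?http_compress=True&timeout=100")
print(url.drivername)   # elasticsearch+http
print(dict(url.query))  # {'http_compress': 'True', 'timeout': '100'}
```

Note that the values arrive as strings; the dialect and client are responsible for coercing them to the proper types.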

Fetch size

By default, the maximum number of rows fetched by a single query is limited to 10000. This can be changed through the fetch_size parameter:

from es.elastic.api import connect

conn = connect(host="localhost", fetch_size=1000)
curs = conn.cursor()

If more than 10000 rows need to be fetched, the index's max_result_window setting has to be raised as well.
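Raising max_result_window is an index-level Elasticsearch setting rather than a parameter of this library. As a sketch (the index name flights and the value are illustrative), the body of a PUT flights/_settings request would look like:

```
{
  "index": {
    "max_result_window": 50000
  }
}
```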

Time zone

By default, the Elasticsearch query time zone is Z (UTC). This can be changed through the time_zone parameter:

from es.elastic.api import connect

conn = connect(host="localhost", time_zone="Asia/Shanghai")
curs = conn.cursor()

Tests

To run the unit tests, launch Elasticsearch and Kibana (Kibana is not strictly required, but it is nice to have):

$ docker-compose up -d
$ nosetests -v

Special case for the SQL Open Distro endpoint (AWS ES)

AWS ES exposes the Open Distro SQL plugin, which follows a different SQL dialect. Use the odelasticsearch driver:

from sqlalchemy.engine import create_engine

engine = create_engine(
    "odelasticsearch+https://search-SOME-CLUSTER.us-west-2.es.amazonaws.com:443/"
)
rows = engine.connect().execute(
    "select count(*), Carrier from flights GROUP BY Carrier"
)
print([row for row in rows])

Or using DBAPI:

from es.opendistro.api import connect

conn = connect(host="localhost", port=9200, path="", scheme="http")

curs = conn.cursor().execute(
    "select * from flights LIMIT 10"
)

print([row for row in curs])

Opendistro (AWS ES) Basic authentication

Basic authentication is configured as expected in the username and password fields of the URI:

from sqlalchemy.engine import create_engine

engine = create_engine(
    "odelasticsearch+https://my_user:[email protected]:443/"
)

IAM AWS authentication keys are passed in the URI's basic auth location, together with the aws_keys query parameter.

Query string keys are:

  • aws_keys
  • aws_region

from sqlalchemy.engine import create_engine

engine = create_engine(
    "odelasticsearch+https://<AWS_ACCESS_KEY>:<AWS_SECRET_KEY>@search-SOME-CLUSTER.us-west-2.es.amazonaws.com:443/?aws_keys=1&aws_region=<AWS_REGION>"
)

An IAM AWS profile is configured as a query parameter named aws_profile on the URI. The value of the key provides the AWS region:

from sqlalchemy.engine import create_engine

engine = create_engine(
    "odelasticsearch+https://search-SOME-CLUSTER.us-west-2.es.amazonaws.com:443/?aws_profile=us-west-2"
)

Using the new SQL engine:

Open Distro 1.13.0 brings a new SQL engine (enabled by default), with lots of improvements and fixes. Take a look at the release notes.

This DBAPI has to behave slightly differently for SQL v1 and SQL v2. By default it complies with v1; to enable v2 support, pass v2=true as a query parameter:

odelasticsearch+https://search-SOME-CLUSTER.us-west-2.es.amazonaws.com:443/?aws_profile=us-west-2&v2=true

To connect to the Open Distro ES provided by the docker-compose setup, use the following URI: odelasticsearch+https://admin:admin@localhost:9400/?verify_certs=False

Known limitations

This library does not yet support the following features:

  • Array type columns are not supported (Elasticsearch SQL does not support them either); SQLAlchemy get_columns will exclude them.

  • object and nested column types are not well supported and are converted to strings

  • Indexes whose names start with a dot (.)

  • GEO points are not currently well-supported and are converted to strings

  • AWS ES (Open Distro Elasticsearch) is supported but still beta; known limitations are:

    • You can only GROUP BY keyword fields (the new experimental Open Distro SQL engine already supports grouping by other types)
    • Indices with dots in their names (such as 'audit_log.2021.01.20') are not supported; in these cases we recommend using aliases
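As a sketch of the alias workaround for dotted index names (the helper below is hypothetical, and the index name is illustrative), an SQL-safe alias name can be derived by replacing the dots, and then registered through Elasticsearch's _aliases API:

```python
# Hypothetical helper: derive an alias name without dots, so that
# Elasticsearch SQL (and therefore this library) can query the index.
def sql_safe_alias(index_name: str) -> str:
    return index_name.replace(".", "_")

print(sql_safe_alias("audit_log.2021.01.20"))  # audit_log_2021_01_20
```

The alias itself is created with a standard _aliases request against the cluster; queries then target the alias name instead of the dotted index.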