
hyperqueryhq / whale

License: GPL-3.0
🐳 The stupidly simple CLI workspace for your data warehouse.

Programming Languages

python, rust, Makefile

Projects that are alternatives to or similar to whale

metamapper
Metamapper is a data discovery and documentation platform for improving how teams understand and interact with their data.
Stars: ✭ 60 (-91.38%)
Mutual labels:  data-catalog, data-discovery
Amundsen
Amundsen is a metadata driven application for improving the productivity of data analysts, data scientists and engineers when interacting with data.
Stars: ✭ 2,901 (+316.81%)
Mutual labels:  data-catalog, data-discovery
Datahub
The Metadata Platform for the Modern Data Stack
Stars: ✭ 4,232 (+508.05%)
Mutual labels:  data-catalog, data-discovery
intake-esm
An intake plugin for parsing an Earth System Model (ESM) catalog and loading assets into xarray datasets.
Stars: ✭ 78 (-88.79%)
Mutual labels:  data-catalog
herd-mdl
Herd-MDL, a turnkey managed data lake in the cloud. See https://finraos.github.io/herd-mdl/ for more information.
Stars: ✭ 11 (-98.42%)
Mutual labels:  data-catalog
sqllineage
SQL Lineage Analysis Tool powered by Python
Stars: ✭ 348 (-50%)
Mutual labels:  data-discovery
mudrod
Mining and Utilizing Dataset Relevancy from Oceanographic Datasets to Improve Data Discovery and Access, online demo: https://mudrod.jpl.nasa.gov/#/
Stars: ✭ 15 (-97.84%)
Mutual labels:  data-discovery
bigquery-data-lineage
Reference implementation for real-time Data Lineage tracking for BigQuery using Audit Logs, ZetaSQL and Dataflow.
Stars: ✭ 112 (-83.91%)
Mutual labels:  data-catalog
Applied ML
📚 Papers & tech blogs by companies sharing their work on data science & machine learning in production.
Stars: ✭ 17,824 (+2460.92%)
Mutual labels:  data-discovery
WG3-MetadataSpecifications
WG3 Metadata Specification
Stars: ✭ 25 (-96.41%)
Mutual labels:  data-discovery

Whale is actively being built and maintained by Dataframe. For our full, collaborative SQL workspace, check out prequel.

The simplest way to find tables, write queries, and take notes

whale is a lightweight, CLI-first SQL workspace for your data warehouse.

  • Execute SQL in .sql files using wh run, or in sql blocks within .md files using the --!wh-run flag and wh run.
  • Automatically index all of the tables in your warehouse as plain markdown files -- so they're easily versionable, searchable, and editable either locally or through a remote git server.
  • Search for tables and documentation.
  • Define and schedule basic metric calculations (in beta).

😁 Join the discussion on Slack.



For a demo of a git-backed workflow, check out dataframehq/whale-bigquery-public-data.

📔 Documentation

Read the docs for a full overview of whale's capabilities.

Installation

macOS

brew install dataframehq/tap/whale

All others

Make sure rust is installed on your local system. Then clone this repository and run the following in the base directory of the repo:

make && make install

If you are running this multiple times, make sure ~/.whale/libexec does not exist, or your virtual environment may not rebuild. We don't explicitly add an alias for the whale binary, so you'll want to add the following alias to your .bash_profile or .zshrc file.

alias wh=~/.whale/bin/whale
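
Putting the steps above together, a from-source install might look like the following sketch (the clone URL is inferred from the hyperqueryhq/whale repository name; adjust the profile file to your shell):

```sh
# Prerequisites: a rust toolchain and make.
# The clone URL is inferred from the repository name; adjust it if your fork differs.
git clone https://github.com/hyperqueryhq/whale.git
cd whale
make && make install

# whale doesn't add the alias for you, so append it to your shell profile.
echo 'alias wh=~/.whale/bin/whale' >> ~/.zshrc   # or ~/.bash_profile
```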

Getting started

Setup

For individual use, run the following command to go through the onboarding process. It will (a) set up all necessary files in ~/.whale, (b) walk you through cron job scheduling to periodically scrape metadata, and (c) set up a warehouse:

wh init

The cron job will run as you schedule it (by default, every 6 hours). If you're feeling impatient, you can also manually run wh etl to pull down the latest data from your warehouse.
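
If you'd rather not wait, the two commands below (both mentioned above) let you inspect the schedule and trigger a scrape immediately; the exact crontab entry depends on what you chose during wh init:

```sh
# Show the cron entry that wh init scheduled (contents depend on your setup choices)
crontab -l

# Pull the latest metadata from your warehouse right now
wh etl
```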

For team use, see the docs for instructions on how to set up and point your whale installation at a remote git server.

Seeding some sample data

If you just want to get a feel for how whale works, remove the ~/.whale directory and follow the instructions at dataframehq/whale-bigquery-public-data.
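
In shell terms, the reset step is just deleting whale's state directory before following that walkthrough:

```sh
# Warning: this removes any existing whale metadata and configuration
rm -rf ~/.whale
```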

Go go go!

Run:

wh

to search over all metadata. Hitting enter will open the editable part of the docs in your default text editor, defined by the environment variable $EDITOR (if no value is specified, whale will use the command open).
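
For example, to have docs open in vim instead of through open, set $EDITOR before launching the search (vim is just an illustrative choice):

```sh
export EDITOR=vim   # persist this in ~/.bash_profile or ~/.zshrc if desired
wh
```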

To execute .sql files, run:

wh run your_query.sql

To execute markdown files, write the query in a ```sql block, then place --!wh-run on its own line inside the block. When the markdown file is executed, any sql block containing this comment will have its query run and its `--!wh-run` line replaced with the result set. To run the markdown file, run:

wh run your_markdown_file.md
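
To make this concrete, here is a minimal sketch of such a markdown file (the heading, table, and column names are purely illustrative and not taken from the whale docs):

````md
## Daily signups

Anything outside the sql block is plain markdown and is left untouched.

```sql
select signup_date, count(*) as signups
from analytics.users
group by signup_date
--!wh-run
```
````

Running wh run on this file executes the query and swaps the `--!wh-run` line for the result set, as described above.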

A common pattern is to set up a shortcut in your IDE to execute wh run % for a smooth editing + execution workflow. For an example of how to do this in vim, see the docs here. This is one of the most powerful features of whale, enabling you to take notes and write executable queries seamlessly side-by-side.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].