Kuwala is the no-code data platform for BI analysts and engineers, enabling you to build powerful analytics workflows. We set out to bring the state-of-the-art data engineering tools you love, such as Airbyte, dbt, and Great Expectations, together in one intuitive interface built with React Flow.
Do you want to discuss your first contribution, learn more in general, or talk through your specific use case for Kuwala? Just book a digital coffee session with the core team here.
Kuwala stands for extensibility, reproducibility, and enablement. Small data teams build data products fast and collaboratively. Analysts and engineers play to their strengths. Kuwala is the tool that makes it possible to keep a data project within scope while having fun again.
- Kuwala Canvas runs directly on a data warehouse, which means maximum flexibility and no lock-in effect
- Engineers enable their analysts by adding transformations and models via dbt or new data sources through Airbyte
- The node-based editor enables analysts to build advanced data workflows with many data sources and transformations through simple drag and drop
- With models-as-a-block, BI analysts can launch advanced Marketing Mix Models and attributions without knowing R or Python
Extract and Load with Airbyte
For connecting and loading all your tooling data into a data warehouse, we are integrating with Airbyte connectors. For everything related to third-party data, such as POI and demographics data, we are building separate data pipelines.
Transform with dbt
To apply transformations on your data, we are integrating dbt, which runs on top of your data warehouse. Engineers can easily create dbt models and make them reusable in the frontend.
Run a Data Science Model
We are going to include open-source data science and AI models (e.g., Meta's Robyn Marketing Mix Modeling).
Report
We make the results exportable to Google Sheets and in the future also available in a Medium-style markdown editor.
How can I use Kuwala?
Canvas
The canvas environment is currently WIP, but you can already get an idea of how it is going to look with our prototype, and check out our roadmap for updates.
Third-party data connectors
We currently have five pipelines for different third-party data sources which can easily be imported into a Postgres database. The following pipelines are integrated:
Jupyter environment & CLI
Until the canvas is built, we provide a Jupyter environment with convenience functions for working with the third-party data pipelines. To easily run the data pipelines, you can use the CLI.
Quickstart & Demo
Demo correlating Uber traversals with Google popularities
We have a notebook with which you can correlate any value associated with a geo-reference with the Google popularity score. For the demo, we provide preprocessed popularity data and a test dataset with Uber rides in Lisbon, Portugal.
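The idea behind the demo can be sketched in plain Python: join two geo-referenced datasets on a shared cell id and compute the Pearson correlation between the paired values. The cell ids and numbers below are purely illustrative, not the notebook's actual schema (the notebook itself works on Pandas dataframes).

```python
from math import sqrt

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equally long series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative data: values keyed by a geo cell id (e.g., an H3 index).
uber_traversals = {"8a1d": 120, "8a1e": 80, "8a1f": 200, "8a20": 50}
popularity = {"8a1d": 0.6, "8a1e": 0.4, "8a1f": 0.9, "8a20": 0.2}

# Inner join on the shared cell ids, then correlate the paired values.
shared = sorted(uber_traversals.keys() & popularity.keys())
r = pearson([uber_traversals[k] for k in shared],
            [popularity[k] for k in shared])
print(f"correlation: {r:.2f}")  # → correlation: 0.99
```

The notebook does the same join against real popularity data instead of hand-written dictionaries.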
Run the demo
You can either use the deployed example on Binder via the badge above or run everything locally. The Binder example simply uses Pandas dataframes and does not connect to a data warehouse.
Setting up and running the CLI
Prerequisites
- Installed version of `Docker` and `docker-compose v2`.
  - We recommend using the latest version of `Docker Desktop`.
- Installed version of `Python3` and latest `pip`, `setuptools`, and `wheel` version.
  - We recommend using version `3.9.5` or higher.
  - To check your current version run `python3 --version`.
- Installed version of `libpq`.
  - For Mac, you can use brew: `brew install libpq`
- Installed version of `postgresql`.
  - For Mac, you can use brew: `brew install postgresql`
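A quick way to verify the prerequisites above is a short Python check. The binary names passed in are just the tools this README mentions; adjust them for your setup:

```python
import shutil
import sys

def missing_tools(tools: list[str]) -> list[str]:
    """Return the command-line tools from `tools` that are not on PATH."""
    return [t for t in tools if shutil.which(t) is None]

def python_ok(minimum: tuple[int, int, int] = (3, 9, 5)) -> bool:
    """Check that the running interpreter meets the recommended version."""
    return sys.version_info[:3] >= minimum

if __name__ == "__main__":
    print("Python version OK:", python_ok())
    print("Missing tools:", missing_tools(["docker", "psql"]))
```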
Setup
- Change your directory to `kuwala/core/cli`.
- Create a virtual environment.
  - For instructions on how to set up a `venv` on different systems, see here.
- Install dependencies by running `pip3 install --no-cache-dir -r requirements.txt`.
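The setup steps above can also be scripted. This is a minimal sketch, assuming a Unix-like system (the `.venv/bin` layout differs on Windows) and the `kuwala/core/cli` directory from this README; it only prints the commands unless you run it yourself:

```python
import sys
from pathlib import Path

def setup_commands(cli_dir: str) -> list[list[str]]:
    """Mirror the manual steps: create a venv, then install requirements."""
    venv = Path(cli_dir) / ".venv"
    venv_python = venv / "bin" / "python"  # .venv\Scripts\python.exe on Windows
    return [
        [sys.executable, "-m", "venv", str(venv)],
        [str(venv_python), "-m", "pip", "install", "--no-cache-dir",
         "-r", str(Path(cli_dir) / "requirements.txt")],
    ]

if __name__ == "__main__":
    for cmd in setup_commands("kuwala/core/cli"):
        print(" ".join(cmd))
```

Pass the commands to `subprocess.run(cmd, check=True)` if you want the script to execute them instead of printing.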
Run
To start the CLI, run `python3 main.py` from inside the `kuwala/core/cli/src` directory and follow the instructions.
Using Kuwala components individually
To use Kuwala's components, such as the data pipelines or the Jupyter environment, individually, please refer to the instructions under `/kuwala`.
Use cases
- How to build an Uber-like analytics system with Kuwala
- Perform location analytics for a grocery store with Kuwala
- Querying the most granular demographics data set with Kuwala
How can I contribute?
Every new issue, question, or comment is a contribution and very welcome! This project thrives on your feedback and involvement!
Be part of our community
The best first step to get involved is to join the Kuwala Community on Slack. There we discuss everything related to our roadmap, development, and support.
Contribute to the project
Please refer to our contribution guidelines for further information on how to get involved.
Get more content about Kuwala
| Link | Description |
| --- | --- |
| Blog | Read all our blog articles about what we are working on. |
| Join Slack | Our Slack channel with over 170 data engineers and many discussions. |
| Jupyter notebook - Popularity correlation | Open a Jupyter notebook on Binder and merge external popularity data with Uber traversals using convenient dbt functions. |
| Podcast | Listen to our community podcast and maybe join us on the next show. |
| Digital coffee break | Are you looking for new inspiring tech talks? Book a digital coffee chat with a member of the core team. |
| Our roadmap | See our upcoming milestones and sprint planning. |
| Contribution guidelines | Further information on how to get involved. |