vmware / versatile-data-kit

License: Apache-2.0
Versatile Data Kit (VDK) is an open source framework that enables anybody with basic SQL or Python knowledge to create their own data pipelines.

Programming Languages

Python
139,335 projects - #7 most used programming language
Java
68,154 projects - #9 most used programming language
Shell
77,523 projects

Projects that are alternatives to or similar to versatile-data-kit

beneath
Beneath is a serverless real-time data platform ⚡️
Stars: ✭ 65 (-54.86%)
Mutual labels:  etl, data-warehouse, data-engineering, dataops, data-pipelines
rivery cli
Rivery CLI
Stars: ✭ 16 (-88.89%)
Mutual labels:  etl, dataops, elt, data-pipelines
astro
Astro allows rapid and clean development of {Extract, Load, Transform} workflows using Python and SQL, powered by Apache Airflow.
Stars: ✭ 79 (-45.14%)
Mutual labels:  etl, snowflake, elt
contessa
Easy way to define, execute and store quality rules for your data.
Stars: ✭ 17 (-88.19%)
Mutual labels:  data-engineering, sqlite3, data-quality
Airbyte
Airbyte is an open-source EL(T) platform that helps you replicate your data in your warehouses, lakes and databases.
Stars: ✭ 4,919 (+3315.97%)
Mutual labels:  etl, data-engineering, elt
arthur-redshift-etl
ELT Code for your Data Warehouse
Stars: ✭ 22 (-84.72%)
Mutual labels:  etl, data-engineering, elt
dbd
dbd is a database prototyping tool that enables data analysts and engineers to quickly load and transform data in SQL databases.
Stars: ✭ 30 (-79.17%)
Mutual labels:  etl, snowflake, elt
AirflowDataPipeline
Example of an ETL Pipeline using Airflow
Stars: ✭ 24 (-83.33%)
Mutual labels:  etl, data-engineering, data-pipelines
hive-metastore-client
A client for connecting to and running DDLs on a Hive metastore.
Stars: ✭ 37 (-74.31%)
Mutual labels:  etl, data-engineering
etl
[READ-ONLY] PHP - ETL (Extract Transform Load) data processing library
Stars: ✭ 279 (+93.75%)
Mutual labels:  etl, data-engineering
soda-spark
Soda Spark is a PySpark library that helps you test your data in Spark DataFrames
Stars: ✭ 58 (-59.72%)
Mutual labels:  data-engineering, data-quality
NBi
NBi is a testing framework (an add-on to NUnit) for Business Intelligence and Data Access. The main goal of this framework is to let users create tests with a declarative approach based on an XML syntax. By means of NBi, you don't need to develop C# or Java code to specify your tests! Moreover, you don't need Visual Studio or Eclipse to compile y…
Stars: ✭ 102 (-29.17%)
Mutual labels:  etl, data-quality
AirflowETL
Blog post on ETL pipelines with Airflow
Stars: ✭ 20 (-86.11%)
Mutual labels:  etl, data-engineering
wikirepo
Python-based Wikidata framework for easy dataframe extraction
Stars: ✭ 33 (-77.08%)
Mutual labels:  etl, elt
Aws Serverless Data Lake Framework
Enterprise-grade, production-hardened, serverless data lake on AWS
Stars: ✭ 179 (+24.31%)
Mutual labels:  etl, data-engineering
google-sheets-etl
Live import all your Google Sheets to your data warehouse
Stars: ✭ 15 (-89.58%)
Mutual labels:  etl, data-warehouse
deordie-meetups
DE or DIE meetup made by data engineers for data engineers. Currently in Russian only.
Stars: ✭ 48 (-66.67%)
Mutual labels:  data-engineering, data-engineer
morph-kgc
Powerful RDF Knowledge Graph Generation with [R2]RML Mappings
Stars: ✭ 77 (-46.53%)
Mutual labels:  etl, data-engineering
starlake
Starlake is a Spark-based, on-premise and cloud ELT/ETL framework for batch & stream processing
Stars: ✭ 16 (-88.89%)
Mutual labels:  etl, snowflake
polygon-etl
ETL (extract, transform and load) tools for ingesting Polygon blockchain data to Google BigQuery and Pub/Sub
Stars: ✭ 53 (-63.19%)
Mutual labels:  etl, data-engineering

Versatile Data Kit

Overview

Versatile Data Kit (VDK) is an open source framework that enables anybody with basic SQL or Python knowledge to create their own data pipelines.

Versatile Data Kit enables Data Engineers to develop, deploy, run, and manage Data Jobs. A Data Job is a data processing workload that can be written in Python, SQL, or a mix of both. Data Jobs let Data Engineers implement automated pull ingestion (the E in ELT) and batch data transformation (the T in ELT) into a database or any other type of data storage.
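
For illustration, here is a minimal sketch of a Python step, assuming the IJobInput object that the SDK passes to each step's run function; the SQL and table names are hypothetical. A pure-SQL step would simply be a .sql file in the job directory.

from vdk.api.job_input import IJobInput

def run(job_input: IJobInput):
    # Batch transformation (the T in ELT): SQL executed from a Python step.
    # Table names are hypothetical.
    job_input.execute_query(
        """
        INSERT INTO user_activity_daily
        SELECT user_id, COUNT(*) AS events
        FROM raw_events
        GROUP BY user_id
        """
    )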

Versatile Data Kit consists of two main components:

  • A Data SDK, which provides all the tools for automating data extraction, transformation, and loading, as well as a plugin framework that lets users extend the framework according to their specific requirements.
  • A Control Service, which allows users to create, deploy, manage, and execute Data Jobs in a Kubernetes runtime environment (see the sketch after this list).
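
As a rough illustration of how the two components are exercised (a sketch based on the quickstart-vdk CLI; the job and team names are placeholders, and exact flags may differ by version):

# Data SDK: develop and run a Data Job locally
vdk run my-job

# Control Service: deploy the job to a Kubernetes runtime environment
vdk deploy -n my-job -t my-team -p ./my-job -r "initial deployment"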

To help solve common data engineering problems, Versatile Data Kit:

  • allows ingestion of data from different sources, including CSV files, JSON objects, data provided by REST API services, etc. (see the ingestion sketch after this list);
  • ensures data applications are packaged, versioned, and deployed correctly while dealing with credentials, retries, reconnects, etc.;
  • provides built-in monitoring and smart notification capabilities;
  • tracks both code and data modifications and the relations between them, enabling engineers to troubleshoot faster and providing an easy revert to a stable version.
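
For the ingestion case, a step might pull records from a REST API and hand them to the SDK's ingestion interface. This is a sketch only: the endpoint URL and destination table are hypothetical.

import json
import urllib.request

from vdk.api.job_input import IJobInput

def run(job_input: IJobInput):
    # Pull records from a (hypothetical) REST endpoint.
    with urllib.request.urlopen("https://api.example.com/users") as response:
        users = json.load(response)
    # Send each record to the configured ingestion destination.
    for user in users:
        job_input.send_object_for_ingestion(
            payload=user,
            destination_table="example_users",
        )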

Data Journey and where VDK fits in

Data Journey (diagram)

Installation and Getting Started

Install Versatile Data Kit SDK

pip install -U pip setuptools wheel
pip install quickstart-vdk

Note that Versatile Data Kit requires Python 3.7+.

See the Installation page for more details.

Use

# print the help to see what you can do
vdk --help
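
To create and run a first Data Job locally, the flow looks roughly like this (a sketch based on the quickstart-vdk CLI; the job and team names are placeholders, and exact flags may differ by version):

# scaffold a new Data Job
vdk create -n hello-world -t my-team
# execute it locally with the SDK
vdk run hello-world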

Check out the Getting Started page to create and run your first Data Job.

Documentation

Official documentation for Versatile Data Kit can be found here.

Contributing

If you are interested in contributing as a developer, visit CONTRIBUTING.md.

Contacts

Feedback is very welcome via the GitHub site as issues or pull requests.

How to use Versatile Data Kit?

For the full list of resources, go to Community and Resources.

Code of Conduct

Everyone involved in working on the project's source code, or engaging in any issue trackers, Slack channels and mailing lists is expected to follow the Code of Conduct.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].