 ____                _        
|  _ \ _ __ ___  ___| |_ ___   _________________________________________________
| |_) | '__/ _ \/ __| __/ _ \ 
|  __/| | | (_) \__ \ || (_) | Functions matter! No map-reduce. No join-groupby.
|_|   |_|  \___/|___/\__\___/  _________________________________________________

License: MIT | Python 3.6

Prosto is a Python data processing toolkit for authoring and executing complex data processing workflows, either programmatically or using Column-SQL. Conceptually, it is an alternative to purely set-oriented approaches to data processing such as map-reduce, relational algebra, SQL, and data-frame-based tools like pandas.

Prosto radically changes the way data is processed by relying on a novel data processing paradigm: the concept-oriented model of data [2]. It treats columns (modelled via mathematical functions) as first-class elements of the data processing pipeline, with the same rights as tables. Where a traditional data processing graph consists only of set operations, a Prosto workflow consists of two types of operations:

  • Table operations produce (populate) new tables from existing tables. A table is an implementation of a mathematical set which is a collection of tuples.

  • Column operations produce (evaluate) new columns from existing columns. A column is an implementation of a mathematical function which maps tuples from one set to another set.

An example of a Prosto workflow consisting of three column operations is shown below. The main difference from traditional approaches is that this Prosto workflow will not modify any table: it changes only columns. Formally, where traditional approaches apply set operations to derive new sets from existing sets, Prosto derives new functions from existing functions. In many cases, using functions (column operations) is much simpler and more natural.

[Figure: Data processing workflow]

Prosto provides two ways to define its operations:

  • Programmatically, by calling functions with parameters specifying an operation
  • Via Column-SQL, by means of syntactic statements containing all operation parameters. Column-SQL is a new way to define a column-oriented data processing workflow and a syntactic alternative to programmatic operations. Read more here: Column-SQL

Prosto operations are demonstrated in notebooks which can be found in the "notebooks" folder in the main repo. Do your own experiments by tweaking them and playing around with the code: https://github.com/asavinov/prosto/tree/master/notebooks

The column-oriented approach was used in the Intelligent Trading Bot for deriving new features: https://github.com/asavinov/intelligent-trading-bot

More detailed information can be found in the documentation: http://prosto.readthedocs.io

Motivation: Why Prosto?

Why functions and column-orientation?

In traditional approaches to data processing, we frequently need to produce a whole new table even though we only need to define a new attribute. For example, in SQL, a new relation has to be produced even if we want only a new calculated attribute. We also need to produce a new relation (using join) if we want to add a column with data from another table. Data aggregation by means of the groupby operation also produces a new relation, although the goal is merely to compute a new attribute.
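
For contrast, here is how this set-oriented pattern looks in plain pandas (the data and names below are made up for this sketch): computing one aggregate attribute forces us through an intermediate groupby table and a merge that produces yet another table.

import pandas as pd

sales = pd.DataFrame({
    "product": ["beer", "chips", "chips", "beer"],
    "amount": [10.0, 5.0, 6.0, 15.0],
})
products = pd.DataFrame({"name": ["beer", "chips"]})

# Goal: a single new attribute on products (total sales per product).
# The set-oriented route materializes two intermediate tables on the way:
totals = sales.groupby("product", as_index=False)["amount"].sum()      # new table 1
products = products.merge(totals, left_on="name", right_on="product")  # new table 2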

In many important cases, processing data using only set operations is counter-intuitive, and this is why map-reduce, join-groupby (including SQL) and similar set-oriented approaches require high expertise and are error-prone.

The main unique novel feature of Prosto is that it relies on a different formal basis:

Prosto adds mathematical functions (implemented as columns) to its model, thereby significantly simplifying data processing and analysis.

Now, if we want to define a new attribute, we can do it directly, without defining unnecessary new tables, collections or relations. New columns are defined in terms of other columns (possibly in different tables), which makes this model similar to how spreadsheets work, except that we use columns instead of cells. For comparison, where in a spreadsheet we might define a new cell as A1=B2+C3, in Prosto we define a new column as Column1=Column2+Column3. The main theoretical challenge is to introduce a set of operations between columns in multiple tables in such a way that these operations effectively replace relational operations (join and groupby) and cover the most important use cases. How this is done is described in [2]. Prosto with Column-SQL is one possible implementation of this model.

More info: Why functions and column-orientation?

Unique features and benefits of Prosto

Prosto provides the following unique features and benefits:

  • Easily process data in multiple tables. New derived columns are added directly to tables without creating multiple intermediate tables

  • Get rid of join and group-by. Column definitions such as link columns and aggregate columns are used instead of the join and groupby set operations

  • Flexibility and modularization via Python user-defined functions (UDFs). A UDF describes, using arbitrary Python code, what needs to be done with the data in one specific operation. If the UDF of an operation changes, it is not necessary to update other operations (see the sketch after this list)

  • Parameterization of operations by a model object. A model can be as simple as one value and as complex as a trained deep neural network. This feature leads to a novel view of how data analysis should be organized, combining feature engineering and machine learning so that both model training and model use (predictions or transformations) are parts of one data processing workflow. Currently, models are supported only as static parameters, but in the future it will be possible to train a model within the same workflow

  • Future directions. Incremental evaluation and data dictionary
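
For instance, using only the CALCULATE statement introduced in the Quick start below, the UDF of an operation can be swapped for another implementation without touching any other definitions (a sketch; the name amount_udf is made up for this example):

def amount_udf(x):
    # All operation-specific logic lives in this one function;
    # changing it does not affect any other operation in the workflow
    return x["quantity"] * x["price"]

prosto.column_sql("CALCULATE  Sales(quantity, price) -> amount", amount_udf)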

More info: Benefits of Prosto

Quick start (using Column-SQL)

Creating a workflow

All data elements (tables and columns) as well as operations for data generation are defined in a workflow object (interpreted as a context):

import prosto as pr
prosto = pr.Prosto("My Prosto Workflow")

More info: Workflow and operations

Populating a source table

Each table has some structure which is defined by its attributes. Table data is defined by the tuples it consists of and each tuple is a combination of some attribute values.

The simplest way to populate a source table is to create or load a pandas data frame and then pass it to a Column-SQL statement:

import pandas as pd

sales_data = {
    "product_name": ["beer", "chips", "chips", "beer", "chips"],
    "quantity": [1, 2, 3, 2, 1],
    "price": [10.0, 5.0, 6.0, 15.0, 4.0]
}
sales_df = pd.DataFrame(sales_data)

prosto.column_sql("TABLE Sales", sales_df)

The Column-SQL statement TABLE Sales will create a definition of a source table with data from the sales_df data frame.

In more complex cases, we could pass a user-defined function (UDF) instead of the data frame. This function is supposed to "know" where to load the data from and returns a pandas data frame. For example, it could load data from a CSV file.
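
A minimal sketch of such a loader, assuming the UDF is simply passed in place of the data frame as described above (the file name sales.csv is made up for this example):

import pandas as pd

def load_sales():
    # The function "knows" where the data lives and returns a pandas data frame
    return pd.read_csv("sales.csv")

prosto.column_sql("TABLE Sales", load_sales)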

More info: Table operations

Defining a calculate column

A column is formally interpreted as a mathematical function which maps tuples (defined by table attributes) of this table to output values. The simplest column operation is a calculate column which computes output values using the values of the specified input columns of the same table:

prosto.column_sql(
    "CALCULATE  Sales(quantity, price) -> amount",
    lambda x: x["quantity"] * x["price"]
)

This new amount column will store, for each record, the amount computed as the product of quantity and price. The CALCULATE statement consists of two parts separated by an arrow:

  • First, we define the source table and its columns that we want to process as input: Sales(quantity, price)
  • Second, we define a column to be created: amount

This use of arrows is an important syntactic convention of Column-SQL which informally represents a flow of data within one table or between tables.

More info: Column operations

Executing a workflow

A workflow object stores only operation definitions. In order to really process data, the workflow has to be executed:

prosto.run()

Prosto translates a workflow into a graph of operations (topology) taking into account their dependencies and then executes each operation.
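
Conceptually, this is dependency-driven execution: every operation runs only after the operations it depends on have run. A minimal illustration of the idea (not Prosto's actual implementation; the operation names are made up):

# A tiny dependency graph executed in topological order
deps = {
    "TABLE Sales": [],
    "CALCULATE amount": ["TABLE Sales"],
    "AGGREGATE totals": ["CALCULATE amount"],
}

done = set()

def execute(op):
    if op in done:
        return
    for prerequisite in deps[op]:  # run dependencies first
        execute(prerequisite)
    print("executing", op)
    done.add(op)

for op in deps:
    execute(op)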

Now we can explore the result by reading data from the table along with the calculate column:

df = prosto.get_table("Sales").get_df()
print(df)
  product_name  quantity  price  amount
0         beer         1   10.0    10.0
1        chips         2    5.0    10.0
2        chips         3    6.0    18.0
3         beer         2   15.0    30.0
4        chips         1    4.0     4.0

The amount column was derived from the data in other columns. If we change input data, then we can again run this workflow and the derived column will contain updated results.

The full power of Prosto comes from the ability to process data in multiple tables by defining derived links (instead of joins) and then aggregating data based on these links (without groupby). Note that neither linking nor aggregation requires or produces new tables: only columns are defined and evaluated. For example, we might use column paths like my_derived_link::my_column in operations in order to access data in other tables, as sketched below.
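
Purely as a hypothetical illustration (the linking and aggregation statements are not shown in this README; take the real Column-SQL keywords and syntax from the Column-SQL documentation, not from this sketch), such a multi-table workflow could look roughly like this:

# Hypothetical syntax: derive a link column from Sales to Products,
# then aggregate sales amounts over that link, with no join or groupby
prosto.column_sql("PROJECT Sales(product_name) -> product -> Products(name)")
prosto.column_sql(
    "AGGREGATE Sales(amount) -> product -> Products(total_amount)",
    lambda x: x.sum()
)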

More info: Column-SQL

How to use

Install from source code

Check out the source code and execute this command in the project directory (where setup.py is located):

$ pip install .

Or alternatively:

$ python setup.py install

Install from PyPI

This command will install the latest release of Prosto from PyPI:

$ pip install prosto

How to test

Run tests from the project root:

$ python -m pytest

or

$ python setup.py test

References

[1]: A.Savinov. On the importance of functions in data modeling, Eprint: arXiv:2012.15570 [cs.DB], 2020. https://www.researchgate.net/publication/348079767_On_the_importance_of_functions_in_data_modeling

[2]: A.Savinov. Concept-oriented model: Modeling and processing data using functions, Eprint: arXiv:1911.07225 [cs.DB], 2019. https://www.researchgate.net/publication/337336089_Concept-oriented_model_Modeling_and_processing_data_using_functions

[3]: A.Savinov. From Group-By to Accumulation: Data Aggregation Revisited, Proc. IoTBDS 2017, 370-379. https://www.researchgate.net/publication/316551218_From_Group-by_to_Accumulation_Data_Aggregation_Revisited

[4]: A.Savinov. Concept-oriented model: the Functional View, Eprint: arXiv:1606.02237 [cs.DB], 2016. https://www.researchgate.net/publication/303840097_Concept-Oriented_Model_the_Functional_View

[5]: A.Savinov. Joins vs. Links or Relational Join Considered Harmful, Proc. IoTBD 2016, 362-368. https://www.researchgate.net/publication/301764816_Joins_vs_Links_or_Relational_Join_Considered_Harmful
