linkedin / Lift

Licence: bsd-2-clause
The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large-scale machine learning workflows.


Projects that are alternatives of or similar to Lift

Isolation Forest
A Spark/Scala implementation of the isolation forest unsupervised outlier detection algorithm.
Stars: ✭ 139 (+9.45%)
Mutual labels:  linkedin, spark
Avro2tf
Avro2TF is designed to fill the gap of making users' training data ready to be consumed by deep learning training frameworks.
Stars: ✭ 125 (-1.57%)
Mutual labels:  linkedin
Spring Shiro Spark
Spring-Shiro-Spark is an attempt at integrating Spring-Boot, Hibernate, Spark, Spark-SQL, Shiro, iView, VueJs, and more.
Stars: ✭ 114 (-10.24%)
Mutual labels:  spark
Example Spark Kafka
Apache Spark and Apache Kafka integration example
Stars: ✭ 120 (-5.51%)
Mutual labels:  spark
Truvisory
This project is meant to provide resources to users who want to access good LinkedIn posts that contain resources to learn any technology, design, self-branding, motivation, etc. You can visit the project by:
Stars: ✭ 116 (-8.66%)
Mutual labels:  linkedin
Zparkio
Boilerplate framework to use Spark and ZIO together.
Stars: ✭ 121 (-4.72%)
Mutual labels:  spark
Xlearning Xdml
extremely distributed machine learning
Stars: ✭ 113 (-11.02%)
Mutual labels:  spark
Hadoopcryptoledger
Hadoop Crypto Ledger - Analyzing CryptoLedgers, such as Bitcoin Blockchain, on Big Data platforms, such as Hadoop/Spark/Flink/Hive
Stars: ✭ 126 (-0.79%)
Mutual labels:  spark
Spark Infotheoretic Feature Selection
This package contains a generic implementation of greedy Information Theoretic Feature Selection (FS) methods. The implementation is based on the common theoretic framework presented by Gavin Brown. Implementations of mRMR, InfoGain, JMI and other commonly used FS filters are provided.
Stars: ✭ 123 (-3.15%)
Mutual labels:  spark
Teddy
A Spark Streaming monitoring platform that supports task deployment, alerting, and automatic restart.
Stars: ✭ 120 (-5.51%)
Mutual labels:  spark
Kinesis Sql
Kinesis Connector for Structured Streaming
Stars: ✭ 120 (-5.51%)
Mutual labels:  spark
Cube.js
📊 Cube — Open-Source Analytics API for Building Data Apps
Stars: ✭ 11,983 (+9335.43%)
Mutual labels:  spark
Deequ
Deequ is a library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets.
Stars: ✭ 2,020 (+1490.55%)
Mutual labels:  spark
Spark Lucenerdd
Spark RDD with Lucene's query and entity linkage capabilities
Stars: ✭ 114 (-10.24%)
Mutual labels:  spark
Scala Samples
There are pieces of scala code that explain Scala syntax and related things - like what you can do with all this
Stars: ✭ 125 (-1.57%)
Mutual labels:  spark
Spark Mllib Twitter Sentiment Analysis
🌟 ✨ Analyze and visualize Twitter Sentiment on a world map using Spark MLlib
Stars: ✭ 113 (-11.02%)
Mutual labels:  spark
Elassandra
Elassandra = Elasticsearch + Apache Cassandra
Stars: ✭ 1,610 (+1167.72%)
Mutual labels:  spark
Eat pyspark in 10 days
pyspark 🍒🥭 is delicious, just eat it! 😋😋
Stars: ✭ 116 (-8.66%)
Mutual labels:  spark
Cape Python
Collaborate on privacy-preserving policy for data science projects in Pandas and Apache Spark
Stars: ✭ 125 (-1.57%)
Mutual labels:  spark
Spark Bigquery Connector
BigQuery data source for Apache Spark: Read data from BigQuery into DataFrames, write DataFrames into BigQuery tables.
Stars: ✭ 126 (-0.79%)
Mutual labels:  spark

The LinkedIn Fairness Toolkit (LiFT)


The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large-scale machine learning workflows. The library can be deployed in training and scoring workflows to measure biases in training data, evaluate fairness metrics for ML models, and detect statistically significant differences in their performance across different subgroups. It can also be used for ad-hoc fairness analysis.

This library was created by Sriram Vasudevan and Krishnaram Kenthapadi (work done while at LinkedIn).

Copyright

Copyright 2020 LinkedIn Corporation All Rights Reserved.

Licensed under the BSD 2-Clause License (the "License"). See License in the project root for license information.

Features

LiFT provides a configuration-driven Spark job for scheduled deployments, with support for custom metrics through User Defined Functions (UDFs). APIs at various levels are also exposed to enable users to build upon the library's capabilities as they see fit. One can thus opt for a plug-and-play approach or deploy a customized job that uses LiFT. As a result, the library can be easily integrated into ML pipelines. It can also be utilized in Jupyter notebooks for more exploratory fairness analyses.

LiFT leverages Apache Spark to load input data into in-memory, fault-tolerant and scalable data structures. It strategically caches datasets and any pre-computation performed. Distributed computation is balanced with single system execution to obtain a good mix of scalability and speed. For example, distance, distribution and divergence related metrics are computed on the entire dataset in a distributed manner, while benefit vectors and permutation tests (for model performance) are computed on scored dataset samples that can be collected to the driver.
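
For example, a rough sketch of this pattern in plain Spark (an illustration only, not LiFT's internal code; the DataFrame and column names are hypothetical):

import org.apache.spark.sql.DataFrame

// scoredDF: rows of (memberId, gender, label, score) -- hypothetical schema
def summarize(scoredDF: DataFrame): Unit = {
  scoredDF.cache()  // cache once, reuse across several metric computations

  // Distributed: label counts per protected-attribute value over the full dataset
  val perGroupCounts = scoredDF.groupBy("gender", "label").count()
  perGroupCounts.show()

  // Driver-side: a bounded sample, collected for permutation tests on model performance
  val fraction = math.min(1.0, 10000.0 / scoredDF.count().toDouble)
  val driverSample = scoredDF.sample(withReplacement = false, fraction = fraction).collect()
  println(s"Collected ${driverSample.length} rows to the driver")
}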

The LinkedIn Fairness Toolkit (LiFT) provides the following capabilities:

  1. Measuring Fairness Metrics on Training Data
  2. Measuring Fairness Metrics for Model Performance

As part of the model performance metrics, it also contains the implementation of a new permutation testing framework that detects statistically significant differences in model performance (as measured by an arbitrary performance metric) across different subgroups.
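
The framework itself is described in the KDD '20 paper cited below; as a rough illustration of the underlying idea (and not LiFT's implementation), a basic two-group permutation test on a difference in mean metric values looks like this:

import scala.util.Random

// Illustration only: does the observed difference in mean metric value between
// two groups exceed what random relabeling of group membership would produce?
def permutationPValue(groupA: Array[Double], groupB: Array[Double],
                      numTrials: Int, seed: Long): Double = {
  def mean(xs: Seq[Double]): Double = xs.sum / xs.length
  val observed = math.abs(mean(groupA) - mean(groupB))
  val pooled = (groupA ++ groupB).toSeq
  val rng = new Random(seed)
  val extremeCount = (1 to numTrials).count { _ =>
    val shuffled = rng.shuffle(pooled)
    val (permA, permB) = shuffled.splitAt(groupA.length)
    math.abs(mean(permA) - mean(permB)) >= observed
  }
  (extremeCount + 1).toDouble / (numTrials + 1)  // smoothed p-value estimate
}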

High-level details about the parameters, metrics supported and usage are described below. More details about the metrics themselves are provided in the links above.

A list of automatically downloaded direct dependencies is provided here.

Usage

Building the Library

It is recommended to use Scala 2.11.8 and Spark 2.3.0. To build, run the following:

./gradlew build

This will produce a JAR file in the ./lift/build/libs/ directory.

If you want to use the library with Spark 2.4 (and the Scala 2.11.8 default), you can specify this when running the build command.

./gradlew build -PsparkVersion=2.4.3

You can also build an artifact with Spark 2.4 and Scala 2.12.

./gradlew build -PsparkVersion=2.4.3 -PscalaVersion=2.12.11

Tests typically run with the test task. If you want to force-run all tests, you can use:

./gradlew cleanTest test --no-build-cache

Add a LiFT Dependency to Your Project

Please check Bintray for the latest artifact versions.

Gradle Example

The artifacts are available in JCenter, so you can specify the JCenter repository in the top-level build.gradle file.

repositories {
    jcenter()
}

Add the LiFT dependency to the module-level build.gradle file. Here are some examples for multiple recent Spark/Scala version combinations:

dependencies {
    compile 'com.linkedin.lift:lift_2.3.0_2.11:0.1.4'
}
dependencies {
    compile 'com.linkedin.lift:lift_2.4.3_2.11:0.1.4'
}
dependencies {
    compile 'com.linkedin.lift:lift_2.4.3_2.12:0.1.4'
}
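
If your build uses sbt rather than Gradle, the same coordinates should work (the Spark and Scala versions are encoded in the artifact name, so use % rather than %%); for example:

resolvers += "jcenter" at "https://jcenter.bintray.com/"

libraryDependencies += "com.linkedin.lift" % "lift_2.3.0_2.11" % "0.1.4"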

Using the JAR File

Depending on the mode of usage, the built JAR can be deployed as part of an offline data pipeline, depended upon to build jobs using its APIs, or added to the classpath of a Spark Jupyter notebook or a Spark Shell instance. For example:

$SPARK_HOME/bin/spark-shell --jars target/lift_2.3.0_2.11_0.1.4.jar

Usage Examples

Measuring Dataset Fairness Metrics using the provided Spark job

LiFT provides a Spark job for measuring fairness metrics for training data, as well as for the validation or test dataset:

com.linkedin.fairness.eval.jobs.MeasureDatasetFairnessMetrics

This job can be configured using various parameters to compute fairness metrics on the dataset of interest:

1. datasetPath: Input data path.
2. protectedDatasetPath: Input path to the protected dataset (optional).
                         If not provided, the library attempts to use
                         the right dataset based on the protected attribute.
3. dataFormat: Format of the input datasets. This is the parameter passed
               to the Spark reader's format method. Defaults to avro.
4. dataOptions: A map of options to be used with Spark's reader (optional).
5. uidField: The unique ID field, such as a memberId field.
6. labelField: The label field.
7. protectedAttributeField: The protected attribute field.
8. uidProtectedAttributeField: The uid field for the protected attribute dataset.
9. outputPath: Output data path.
10. referenceDistribution: A reference distribution to compare against (optional).
                           The only currently accepted value is UNIFORM.
11. distanceMetrics: Distance and divergence metrics such as SKEWS, INF_NORM_DIST,
                     TOTAL_VAR_DIST, JS_DIVERGENCE, KL_DIVERGENCE and
                     DEMOGRAPHIC_PARITY (optional).
12. overallMetrics: Aggregate metrics such as GENERALIZED_ENTROPY_INDEX,
                    ATKINSONS_INDEX, THEIL_L_INDEX, THEIL_T_INDEX and
                    COEFFICIENT_OF_VARIATION, along with their corresponding
                    parameters.
13. benefitMetrics: The distance/divergence metrics to use as the benefit
                    vector when computing the overall metrics. Acceptable
                    values are SKEWS and DEMOGRAPHIC_PARITY.
The most up-to-date information on these parameters can always be found here.
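
As a concrete illustration of what the distance metrics above capture (this is not LiFT's implementation, and the column names are hypothetical), demographic parity compares positive-label rates across values of the protected attribute:

import org.apache.spark.sql.{DataFrame, functions => F}

// Illustration only: positive-label rate per protected-attribute value.
// Demographic parity as a distance metric looks at the gaps between these rates;
// gaps near zero indicate parity between groups.
def positiveRates(df: DataFrame, protectedAttributeField: String, labelField: String): DataFrame =
  df.groupBy(protectedAttributeField)
    .agg(F.avg(F.col(labelField).cast("double")).alias("positiveRate"))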

The Spark job performs no preprocessing of the input data, and it makes certain assumptions, such as that the unique ID field (the join key) is stored in the same format in both the input data and the protected attribute data. If this does not hold for your dataset, you can always create your own Spark job similar to the provided example (described below).
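
For instance, if the unique ID is stored as a long in your training data but as a string in the protected attribute dataset, a custom job could normalize the key before joining; a minimal sketch (paths and column names here are hypothetical, and an existing SparkSession named spark is assumed):

import org.apache.spark.sql.functions.col

// Illustration only: align the join key's type before joining the two datasets,
// which the provided job does not do for you.
val data = spark.read.format("avro").load("/path/to/training/data")            // uid stored as a long
val protectedData = spark.read.format("avro").load("/path/to/protected/data")  // memberId stored as a string

val normalized = data.withColumn("uid", col("uid").cast("string"))
val joined = normalized.join(protectedData, normalized("uid") === protectedData("memberId"))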

Measuring Model Fairness Metrics using the provided Spark job

LiFT provides a Spark job for measuring fairness metrics for model performance, based on the labels and scores of the test or validation data:

com.linkedin.fairness.eval.jobs.MeasureModelFairnessMetrics

This job can be configured using various parameters to compute fairness metrics on the dataset of interest:

1. datasetPath: Input data path.
2. protectedDatasetPath: Input path to the protected dataset (optional).
                         If not provided, the library attempts to use
                         the right dataset based on the protected attribute.
3. dataFormat: Format of the input datasets. This is the parameter passed
               to the Spark reader's format method. Defaults to avro.
4. dataOptions: A map of options to be used with Spark's reader (optional).
5. uidField: The unique ID field, such as a memberId field.
6. labelField: The label field.
7. scoreField: The score field.
8. scoreType: Whether the scores are raw scores or probabilities.
              Accepted values are RAW or PROB.
9. protectedAttributeField: The protected attribute field.
10. uidProtectedAttributeField: The uid field for the protected attribute dataset.
11. groupIdField: An optional field to be used for grouping, in the case of ranking metrics.
12. outputPath: Output data path.
13. referenceDistribution: A reference distribution to compare against (optional).
                           The only currently accepted value is UNIFORM.
14. approxRows: The approximate number of rows to sample from the input data
                when computing model metrics. The final sampled value is
                min(numRowsInDataset, approxRows).
15. labelZeroPercentage: The percentage of the sampled data that must
                         be negatively labeled. This is useful in case
                         the input data is highly skewed and you believe
                         that stratified sampling will not obtain a sufficient
                         number of examples of a certain label.
16. thresholdOpt: An optional value that contains a threshold. It is used
                  in case you want to generate hard binary classifications.
                  If not provided and you request metrics that depend on
                  explicit label predictions (e.g. precision), the scoreType
                  information is used to convert the scores into the
                  probabilities of predicting positives. This is used for
                  computing expected positive prediction counts.
17. numTrials: The number of trials to run the permutation test for. More trials
               yield results with lower variance in the computed p-value,
               but take more time.
18. seed: The random value seed.
19. distanceMetrics: Distance and divergence metrics that are to be computed.
                     These are metrics such as Demographic Parity
                     and Equalized Odds.
20. permutationMetrics: The metrics to use for permutation testing.
21. distanceBenefitMetrics: The model metrics that are to be used for
                            computing benefit vectors, one for each
                            distance metric specified.
22. performanceBenefitMetrics: The model metrics that are to be used for
                               computing benefit vectors, one for each
                               model performance metric specified.
23. overallMetrics: The aggregate metrics that are to be computed on each
                    of the benefit vectors generated.

The most up-to-date information on these parameters can always be found here.

The Spark job performs no preprocessing of the input data, and it makes certain assumptions, such as that the unique ID field (the join key) is stored in the same format in both the input data and the protected attribute data. If this does not hold for your dataset, you can always create your own Spark job similar to the provided example (described below).

Custom Spark jobs built on LiFT

If you are implementing your own driver program to measure dataset metrics, here's how you can make use of LiFT:

object MeasureDatasetFairnessMetrics { 
  def main(progArgs: Array[String]): Unit = { 
    // Get spark session
    val spark = SparkSession 
      .builder() 
      .appName(getClass.getSimpleName) 
      .getOrCreate() 
 
    // Parse args
    val args = MeasureDatasetFairnessMetricsCmdLineArgs.parseArgs(progArgs) 
 
    // Load and preprocess data
    val df = spark.read.format(args.dataFormat)
      .load(args.datasetPath)
      .select(args.uidField, args.labelField)
 
    // Load protected data and join
    val joinedDF = ...
    joinedDF.persist 

    // Obtain reference distribution (optional). This can be used to provide a
    // custom distribution to compare the dataset against.
    val referenceDistrOpt = ...
 
    // Compute the dataset's distribution over the protected attribute
    // from the joined data (elided here; see the complete example linked below)
    val distribution = ...

    // Passing in the appropriate parameters to this API computes and writes
    // out the fairness metrics
    FairnessMetricsUtils.computeAndWriteDatasetMetrics(distribution,
      referenceDistrOpt, args)
  } 
}

A complete example for the above can be found here.

In the case of measuring model metrics, a similar Spark job can be implemented:

object MeasureModelFairnessMetrics { 
  def main(progArgs: Array[String]): Unit = { 
    // Get spark session
    val spark = SparkSession 
      .builder() 
      .appName(getClass.getSimpleName) 
      .getOrCreate() 
 
    // Parse args
    val args = MeasureModelFairnessMetricsCmdLineArgs.parseArgs(progArgs) 
 
    // Load and preprocess data
    val df = spark.read.format(args.dataFormat)
      .load(args.datasetPath)
      .select(args.uidField, args.labelField)
 
    // Load protected data and join
    val joinedDF = ...
    joinedDF.persist 

    // Obtain reference distribution (optional). This can be used to provide a
    // custom distribution to compare the dataset against.
    val referenceDistrOpt = ...
 
    // Passing in the appropriate parameters to this API computes and writes 
    // out the fairness metrics 
    FairnessMetricsUtils.computeAndWriteModelMetrics(
      joinedDF, referenceDistrOpt, args) 
  } 
}

A complete example for the above can be found here.
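
Such a custom driver can then be packaged into your own JAR and submitted like any other Spark application, with the LiFT JAR on the classpath. For example (the class name, JAR names and master shown here are placeholders):

$SPARK_HOME/bin/spark-submit \
  --class com.yourorg.fairness.MeasureModelFairnessMetrics \
  --master yarn \
  --jars lift_2.3.0_2.11_0.1.4.jar \
  your-fairness-job.jar <job arguments>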

Contributions

If you would like to contribute to this project, please review the instructions here.

Acknowledgments

Implementations of some methods in LiFT were inspired by other open-source libraries. LiFT also contains the implementation of a new permutation testing framework. Discussions with several LinkedIn employees influenced aspects of this library. A full list of acknowledgements can be found here.

Citations

If you publish material that references the LinkedIn Fairness Toolkit (LiFT), you can use the following citations:

@inproceedings{vasudevan20lift,
    author       = {Vasudevan, Sriram and Kenthapadi, Krishnaram},
    title        = {{LiFT}: A Scalable Framework for Measuring Fairness in ML Applications},
    booktitle    = {Proceedings of the 29th ACM International Conference on Information and Knowledge Management},
    series       = {CIKM '20},
    year         = {2020},
    pages        = {},
    numpages     = {8}
}

@misc{lift,
    author       = {Vasudevan, Sriram and Kenthapadi, Krishnaram},
    title        = {The LinkedIn Fairness Toolkit ({LiFT})},
    howpublished = {\url{https://github.com/linkedin/lift}},
    month        = aug,
    year         = 2020
}

If you publish material that references the permutation testing methodology that is available as part of LiFT, you can use the following citation:

@inproceedings{diciccio20evaluating,
    author       = {DiCiccio, Cyrus and Vasudevan, Sriram and Basu, Kinjal and Kenthapadi, Krishnaram and Agarwal, Deepak},
    title        = {Evaluating Fairness Using Permutation Tests},
    booktitle    = {Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
    series       = {KDD '20},
    year         = {2020},
    pages        = {},
    numpages     = {11}
}