Fraud Prediction using Auto AI

Automation and artificial intelligence (AI) are transforming businesses and will contribute to economic growth through gains in productivity. They will also help address challenges in areas such as healthcare and technology, while transforming the nature of work and the workplace itself. In this code pattern, we focus on building a system for churning out predictions that can be used in different scenarios. We will try to predict fraudulent transactions, which can reduce monetary loss and mitigate risk. The same approach can be used for predicting customer churn, demand and supply forecasts, and more. Building predictive models requires time, effort, and a good knowledge of algorithms to create systems that predict the outcome accurately. With that in mind, IBM has introduced AutoAI, which automates the tasks involved in building predictive models for different requirements. We will see how AutoAI can churn out great models quickly, saving time and effort and aiding a faster decision-making process.

When the reader has completed this code pattern, they will understand how to:

  • Quickly set up the services on IBM Cloud for model building.
  • Ingest the data and initiate the AutoAI process.
  • Build different models using AutoAI and evaluate the performance.
  • Choose the best model and complete the deployment.
  • Generate predictions using the deployed model by making REST calls.
  • Compare the process of using AutoAI with building the model manually.

Architecture Diagram

  1. User logs into Watson Studio, creates a project, and initiates instances of AutoAI and Object Storage.
  2. User uploads the data file in CSV format to Object Storage.
  3. User initiates the model building process using AutoAI and creates pipelines.
  4. User evaluates the different pipelines from AutoAI and selects the best model for deployment.
  5. User generates predictions by making REST calls to the deployed model.

Included components

  • IBM Watson Studio: Analyze data using RStudio, Jupyter, and Python in a configured, collaborative environment that includes IBM value-adds, such as managed Spark.

  • IBM AutoAI: The AutoAI graphical tool in Watson Studio automatically analyzes your data and generates candidate model pipelines customized for your predictive modeling problem.

  • IBM Cloud Object Storage: An IBM Cloud service that provides an unstructured cloud data store to build and deliver cost effective apps and services with high reliability and fast speed to market. This code pattern uses Cloud Object Storage.

Featured technologies

  • Artificial Intelligence: Any system which can mimic cognitive functions that humans associate with the human mind, such as learning and problem solving.
  • Data Science: Systems and scientific methods to analyze structured and unstructured data in order to extract knowledge and insights.
  • Analytics: Analytics delivers the value of data for the enterprise.
  • Python: Python is a programming language that lets you work more quickly and integrate your systems more effectively.

Watch the Video

TBD

Steps using AutoAI

Follow these steps to set up and run this code pattern using AutoAI.

  1. Create an account with IBM Cloud
  2. Create a new Watson Studio project
  3. Add Data
  4. Add Asset as Auto AI
  5. Create and define experiment
  5. Import the CSV file
  7. Run experiment
  8. Analyze results
  9. Deploy to Cloud
  10. Model testing

1. Create an account with IBM Cloud

Sign up for IBM Cloud. By clicking on Create a free account, you will get a 30-day trial account.

2. Create a new Watson Studio project

Sign up for IBM's Watson Studio.

Click on New Project and select the option shown below.

Define the project by giving it a name and hit Create.

3. Add Data

Clone this repo, navigate to the data folder, and save the file to disk. Review the data glossary in the data folder for more details. Note: A citation is required to use this dataset for any other projects.

Click on Assets, select Browse, and add the CSV file from your file system.

4. Add Asset as Auto AI

Click on Add to project and select AutoAI experiment.

Note: The Lite account for AutoAI comes with 50 capacity units per month, and AutoAI consumes 20 capacity units per hour, so the free allowance works out to roughly 2.5 hours of experiment runtime per month.

5. Create and define experiment

Click on New AutoAI experiment and give a name to the experiment.

Click on Associate a Machine Learning service instance to this project, select the Machine Learning service instance, and hit reload. If you do not have a Machine Learning service instance, follow the steps on your screen to get one.

The Create button at the bottom right gets highlighted; go ahead and hit Create.

6. Import the CSV file

We need to import the CSV file into the experiment. Note that only the CSV file format is supported in AutoAI. Click on Browse or Select from project to choose the fraud_dataset.csv file to import.

7. Run experiment

We have to select the target variable, which in this case is Fraud_Risk. Notice that Prediction Type and Optimized Metric get highlighted, telling us that we are working on a binary classification use case and that the evaluation metric is ROC AUC (area under the receiver operating characteristic curve), which is commonly used for classification use cases.

We can click on experiment settings to adjust the holdout sample and training sample under source settings.

We can click on prediction settings to modify the Prediction type, Positive Class, and Optimized metric if required. In this case, we will leave them as is and hit Save and close.

Click on Run experiment.

8. Analyze results

The AutoAI experiment completed in 97 seconds and generated four pipelines. The duration of the experiment depends entirely on the size of the dataset. AutoAI selects the machine learning algorithm best suited for the dataset (in the fifth stage of the process, under Model Selection).

Each pipeline is run with different parameters: pipeline 3 is run with a sequence of HPO (hyperparameter optimization) and FE (feature engineering), whereas pipeline 4 includes HPO, FE, and a combination of both. All of this happens on the fly; we just have to sit back while AutoAI takes care of things and generates strong machine learning models. Very minimal intervention is required to get things going, and in no time we have the generated pipelines to choose from.

Click on pipeline 3 (which is ranked 1) to see the evaluation metrics on the left side.

Click on model evaluation to review the performance of the model on the holdout sample and the cross-validation score. We can observe that our model has done very well, scoring above 95% on recall, average precision, and area under the curve. These scores mean that the model identifies fraudulent transactions with both high recall and high precision.

Click on feature importance to identify the significant features influencing the outcome. Any variable whose name starts with Newfeature was generated on the fly by AutoAI as part of feature engineering.

Click on feature transforms to understand the transformation of original features into new features. Feature engineering is one of the important factors in the model building process and has a direct impact on the overall accuracy of the model. We can observe that there are 24 features in total, whereas the original dataset had 13 variables; this means AutoAI created 11 new features, which is one of the reasons for the high accuracy of the model.

After analyzing the model performance, it's time to select a model for deployment. We will go ahead and select pipeline 3, which is ranked 1, and hit Save as model. We can save whichever pipeline scores highest on accuracy or on any other evaluation metric we care about.

9. Deploy to Cloud

The saved model can be found under Models in the Watson Studio project. Click the three dots on the right side below Actions and hit Promote. Click Promote to deployment space, then choose an existing deployment space or create a new one. Click Add Deployment.

In the page that opens, fill in the fields: specify a name for the deployment, select Web service as the deployment type, and click Save. Note that the model gets deployed as a web service exposed as a REST API.

After you save the deployment, click on the deployment name in the left navigation pane to view the deployment details page. The deployment will initialize, and the status will show as Ready when it is complete.

We can click on the deployed model to see three tabs: Overview, Implementation, and Test. The Overview tab gives all the details about the deployment, such as name, type, and status. The Implementation tab gives the scoring endpoint and code snippets to invoke the model. The Test tab gives options to test the model.

10. Model testing

Now that we have created and deployed the model as a web service, how do we test it? Click on the Test tab, which offers two options: form and JSON. We can use the form to test one record at a time, entering the values for each attribute manually and hitting Predict to generate a prediction. An output of 0 under values indicates a fraudulent transaction. The output can be either 0 or 1, as per the data glossary provided in the data folder.

For predicting multiple records, we have to update the values in the JSON file, use the option to input JSON data, and then hit Predict to generate real-time predictions.

A sample JSON file has been provided for testing purposes. The format for scoring the model has to be the same as given in the JSON file. Navigate to data-for-testing and save the file to disk. Copy and paste the values into the Test tab as shown above to generate predictions.
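
If you prefer to score the model programmatically instead of through the Test tab, you can call the scoring endpoint directly. Below is a minimal sketch in Python; the API key, scoring URL, and field names are placeholders, so substitute the endpoint from the deployment's Implementation tab and the exact field names from the sample JSON file.

```python
# Minimal sketch of scoring the deployed model over REST.
# The API key, scoring URL, and field names are placeholders, not values
# from this code pattern; take the real endpoint from the Implementation
# tab and the real field names from the sample JSON in data-for-testing.
import requests

API_KEY = "<your IBM Cloud API key>"
SCORING_URL = "<scoring endpoint from the Implementation tab>"

# Exchange the IBM Cloud API key for an IAM access token.
token_response = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    data={
        "apikey": API_KEY,
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
    },
)
mltoken = token_response.json()["access_token"]

# Watson Machine Learning scoring payload: field names plus rows of values.
payload = {
    "input_data": [{
        # Hypothetical subset of fields; use the full list from the sample JSON.
        "fields": ["Married", "ApplicantIncome", "Credit_History_Available"],
        "values": [["Yes", 5000, 1], ["No", 3200, 0]],  # one inner list per record
    }]
}

response = requests.post(
    SCORING_URL,
    json=payload,
    headers={"Authorization": "Bearer " + mltoken},
)
print(response.json())  # predictions (and probabilities) per record
```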

Go ahead and try this on different datasets as per your requirements, and experience the ease of creating and deploying models quickly with IBM's AutoAI offering.

Steps using Jupyter Notebook

Follow the steps below to use a Jupyter notebook for building the model. This lets us compare the manual process of model building with the automated process using AutoAI.

Create an account with IBM Cloud and then create a project in Watson Studio. Add the data as an asset. These three steps are given above in detail.

  4. Create the notebook
  5. Insert the data as a dataframe
  6. Run the notebook
  7. Analyze the results

4. Create the notebook

After the notebook is imported, click on Not Trusted and select Yes to trust the source of the notebook.

This notebook has been created to demonstrate the steps for building the model using the Watson Studio platform. For other use cases, the notebook has to be created from scratch.

5. Insert the data as a dataframe

Click on the 0010 icon at the top right, which will bring up the data assets tab.

Click on the Insert to code dropdown and select the option Insert Pandas DataFrame.
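
In Watson Studio, the generated cell streams the file from Cloud Object Storage into a pandas DataFrame using your project credentials. As a minimal local equivalent (the file path is an assumption based on this repo's data folder):

```python
import pandas as pd

# Local stand-in for the generated "Insert pandas DataFrame" cell;
# in Watson Studio the generated code streams the same CSV from
# Cloud Object Storage instead of reading it from disk.
df = pd.read_csv("data/fraud_dataset.csv")
df.head()  # quick sanity check of the ingested data
```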

6. Run the notebook

When a notebook is executed, what is actually happening is that each code cell in the notebook is executed, in order, from top to bottom.

Each code cell is selectable and is preceded by a tag in the left margin. The tag format is In [x]:. Depending on the state of the notebook, the x can be:

  • A blank, this indicates that the cell has never been executed.
  • A number, this number represents the relative order this code step was executed.
  • A *, this indicates that the cell is currently executing.

There are several ways to execute the code cells in your notebook:

  • One cell at a time.
    • Select the cell, and then press the Play button in the toolbar.
  • Batch mode, in sequential order.
    • From the Cell menu bar, there are several options available. For example, you can Run All cells in your notebook, or you can Run All Below, which will start executing from the first cell under the currently selected cell and then continue executing all cells that follow.

7. Analyze the results

After we run the cells in the notebook, which cover data ingestion, data analysis, splitting the data, building the model, and generating feature importance, it's time to review and analyze the performance. Other activities, such as handling missing values, outlier management, feature engineering, and hyperparameter optimization, have been omitted for demo purposes.

Check the model accuracy and the confusion matrix to identify the precision and recall scores. We can observe that the model scores above 92% accuracy on the test data, and the precision and recall scores are also high.

Feature importance as per the model is shown below. The model highlights some of the attributes that have a high impact on the outcome. Note that raw feature importances may not compare features fairly when assessing their impact on the outcome.
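
As an illustration of this manual evaluation step, here is a minimal scikit-learn sketch. The choice of RandomForestClassifier and the one-hot preprocessing are assumptions for the example, not necessarily what the notebook uses:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix, roc_auc_score)
from sklearn.model_selection import train_test_split

# Load the dataset; Fraud_Risk is the target variable.
df = pd.read_csv("data/fraud_dataset.csv")
X = pd.get_dummies(df.drop(columns=["Fraud_Risk"]))  # one-hot encode categoricals
y = df["Fraud_Risk"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
y_pred = model.predict(X_test)

print("Accuracy:", accuracy_score(y_test, y_pred))
print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))  # per-class precision and recall

# Raw impurity-based importances; these do not always compare features fairly.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```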

We have used Shapley values, which are a very effective model explanation technique. Shapley values calculate the importance of a feature by comparing what a model predicts with and without that feature. However, since the order in which a model sees features can affect its predictions, this comparison is done in every possible order, so that the features are compared fairly.

We can observe that attributes like Married, Applicant Income, and Credit history available have a high impact on the outcome, which is to detect fraud, as per the Shapley values.
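
A minimal sketch of computing Shapley values with the shap library, reusing the model and test split from the sketch above (the library choice is an assumption; the notebook may compute the values differently):

```python
import shap

# TreeExplainer computes (approximate) Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# The summary plot ranks features by mean absolute Shapley value, making it
# easy to spot the attributes with the highest impact on the prediction.
shap.summary_plot(shap_values, X_test)
```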

With this, we have come to the end of this code pattern, where we compared the ease of using AutoAI to build predictive models with creating a new Jupyter notebook to build and evaluate them. There is a considerable reduction in the time needed to build and deploy models using AutoAI, because it handles missing values, outliers, feature engineering, and hyperparameter optimization on the fly and selects the best algorithm for the dataset. The model building process is reduced from days to hours thanks to AutoAI. If you are a developer or data scientist who wants to build a model quickly and deploy it to production, AutoAI will help you make decisions faster and gives a detailed overview of the attribute relationships within the data.

More to come:

The integration of AutoAI and Watson OpenScale is currently in progress and will be updated at a later date.

Related Links:

Fraud Prediction using skewed data

Troubleshooting

See DEBUGGING.md.

Citation for data:

The dataset which is referenced in this code pattern is created and owned by R.K.Sharath Kumar, Data Scientist, IBM India Software Labs.

License

This code pattern is licensed under the Apache Software License, Version 2. Separate third party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 (DCO) and the Apache Software License, Version 2.

Check the ASL FAQ link for more details.
