
IQTLabs / FakeFinder

License: Apache-2.0
FakeFinder builds a modular framework for evaluating various deepfake detection models, offering a web application as well as API access for integration into existing workflows.

Programming Languages

Python

Projects that are alternatives of or similar to FakeFinder

Hub
Dataset format for AI. Build, manage, & visualize datasets for deep learning. Stream data real-time to PyTorch/TensorFlow & version-control it. https://activeloop.ai
Stars: ✭ 4,003 (+13703.45%)
Mutual labels:  cloud-computing, mlops
recommendations-for-engineers
All of my recommendations for aspiring engineers in a single place, coming from various areas of interest.
Stars: ✭ 81 (+179.31%)
Mutual labels:  mlops
lightning-hydra-template
PyTorch Lightning + Hydra. A very user-friendly template for rapid and reproducible ML experimentation with best practices. ⚡🔥⚡
Stars: ✭ 1,905 (+6468.97%)
Mutual labels:  mlops
tencent-cam-policy
Easily create a Tencent CAM Policy with Serverless Components
Stars: ✭ 20 (-31.03%)
Mutual labels:  cloud-computing
vertex-edge
A tool for training models to Vertex on Google Cloud Platform.
Stars: ✭ 24 (-17.24%)
Mutual labels:  mlops
krsh
A declarative KubeFlow Management Tool
Stars: ✭ 127 (+337.93%)
Mutual labels:  mlops
charts
Helm charts for creating reproducible and maintainable deployments of Polyaxon with Kubernetes.
Stars: ✭ 32 (+10.34%)
Mutual labels:  mlops
mlops-platforms
Compare MLOps Platforms. Breakdowns of SageMaker, VertexAI, AzureML, Dataiku, Databricks, h2o, kubeflow, mlflow...
Stars: ✭ 293 (+910.34%)
Mutual labels:  mlops
fastapi-template
Completely Scalable FastAPI based template for Machine Learning, Deep Learning and any other software project which wants to use Fast API as an API framework.
Stars: ✭ 156 (+437.93%)
Mutual labels:  mlops
datajoint-python
Relational data pipelines for the science lab
Stars: ✭ 140 (+382.76%)
Mutual labels:  cloud-computing
dama
a simplified machine learning container platform that helps teams get started with an automated workflow
Stars: ✭ 76 (+162.07%)
Mutual labels:  mlops
aml-compute
GitHub Action that allows you to attach, create and scale Azure Machine Learning compute resources.
Stars: ✭ 19 (-34.48%)
Mutual labels:  mlops
analogsea
Digital Ocean R client
Stars: ✭ 142 (+389.66%)
Mutual labels:  cloud-computing
FaceLivenessDetection-SDK
3D Passive Face Liveness Detection (Anti-Spoofing) & Deepfake detection. A single image is needed to compute liveness score. 99.67% accuracy on our dataset and perfect scores on multiple public datasets (NUAA, CASIA FASD, MSU...).
Stars: ✭ 85 (+193.1%)
Mutual labels:  deepfake-detection
covid-19-prediction
[IoT'20] Predicting the Growth and Trend of COVID-19 Pandemic using Machine Learning and Cloud Computing
Stars: ✭ 28 (-3.45%)
Mutual labels:  cloud-computing
cartpole-rl-remote
CartPole game by Reinforcement Learning, a journey from training to inference
Stars: ✭ 24 (-17.24%)
Mutual labels:  mlops
mlops-workload-orchestrator
The MLOps Workload Orchestrator solution helps you streamline and enforce architecture best practices for machine learning (ML) model productionization. This solution is an extendable framework that provides a standard interface for managing ML pipelines for AWS ML services and third-party services.
Stars: ✭ 114 (+293.1%)
Mutual labels:  mlops
MERlin
MERlin is an extensible analysis pipeline applied to decoding MERFISH data
Stars: ✭ 19 (-34.48%)
Mutual labels:  cloud-computing
Cloud-Service-Providers-Free-Tier-Overview
Comparing the free tier offers of the major cloud providers like AWS, Azure, GCP, Oracle etc.
Stars: ✭ 226 (+679.31%)
Mutual labels:  cloud-computing
platform-services-go-sdk
Go client library for IBM Cloud Platform Services
Stars: ✭ 14 (-51.72%)
Mutual labels:  cloud-computing

FakeFinder: Sifting out deepfakes in the wild

The FakeFinder project builds upon the work done at IQT Labs in competing in the Facebook Deepfake Detection Challenge (DFDC). FakeFinder builds a modular, scalable and extensible framework for evaluating various deepfake detection models. The toolkit provides a web application as well as API access for integration into existing media forensics workflows and applications. To illustrate the functionality in FakeFinder we have included implementations of six existing, open-source deepfake detectors as well as a template exemplifying how new algorithms can be easily added to the system.

Table of contents

  1. Overview
  2. Available Detectors
  3. Reproducing the Tool
  4. Usage Instructions

Overview

We have included instructions to reproduce the system as we have built it, using Docker containers with Compose or Kubernetes.

Available Detectors

Although designed for extensibility, the current toolkit includes implementations of six detectors open sourced from the DeepFake Detection Challenge (DFDC) and the DeeperForensics Challenge 2020 (DFC). The detectors included are:

Name      Input type   Challenge  Description
selimsef  video (mp4)  DFDC       Model Card
wm        video (mp4)  DFDC       Model Card
ntech     video (mp4)  DFDC       Model Card
eighteen  video (mp4)  DFDC       Model Card
medics    video (mp4)  DFDC       Model Card
boken     video (mp4)  DFC        Model Card

Additionally, we have included template code and instructions for adding a new detector to the system in the detector template folder.

As part of the implementation we have evaluated the current models against the test sets provided by both competitions (1, 2) after they closed. The following figure shows the True Positive Rate (TPR), False Positive Rate (FPR) and final accuracy (Acc) for all six models against these data. We have also included the average binary cross-entropy (LogLoss), which was ultimately used to score the competitions.

[Figure: TPR, FPR, Acc and LogLoss for the six detectors on the combined test sets]

We have also measured the correlation between the six detectors across the full evaluation dataset, shown in the following figure (note: a correlation above 0.7 is generally considered strong).

[Figure: pairwise correlation matrix for the six detectors]

Reproducing the Tool

We built FakeFinder using Docker for building and running containers, and Flask for the API server and for serving models for inference. Here we provide instructions for reproducing the FakeFinder architecture. There are a few prerequisites:

GPU Host

The detectors require a GPU. We've tested against an AWS EC2 g4dn.xlarge instance using the Deep Learning AMI (Ubuntu 18.04) Version 52.0.

Clone the Repository

Some of the detectors use submodules, so use the following command to clone the FakeFinder repo:

git clone --recurse-submodules -j8 https://github.com/IQTLabs/FakeFinder
cd FakeFinder

Model Weights

To access the weights for each model run the following commands:

mkdir weights
cd weights
wget -O boken.tar.gz https://github.com/IQTLabs/FakeFinder/releases/download/weights/boken.tar.gz
tar -xvzf boken.tar.gz
wget -O eighteen.tar.gz https://github.com/IQTLabs/FakeFinder/releases/download/weights/eighteen.tar.gz
tar -xvzf eighteen.tar.gz
wget -O medics.tar.gz https://github.com/IQTLabs/FakeFinder/releases/download/weights/medics.tar.gz
tar -xvzf medics.tar.gz
wget -O ntech.tar.gz https://github.com/IQTLabs/FakeFinder/releases/download/weights/ntech.tar.gz
tar -xvzf ntech.tar.gz
wget -O selimsef.tar.gz https://github.com/IQTLabs/FakeFinder/releases/download/weights/selimsef.tar.gz
tar -xvzf selimsef.tar.gz
wget -O wm.tar.gz https://github.com/IQTLabs/FakeFinder/releases/download/weights/wm.tar.gz
tar -xvzf wm.tar.gz
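Since each model's weights follow the same URL pattern, the six download-and-extract steps above can be collapsed into a loop. This is a convenience sketch, functionally identical to the commands above:

```shell
# Download and unpack weights for every detector; each tarball lives at
# .../releases/download/weights/<model>.tar.gz, mirroring the commands above.
mkdir -p weights
cd weights
for model in boken eighteen medics ntech selimsef wm; do
  url="https://github.com/IQTLabs/FakeFinder/releases/download/weights/${model}.tar.gz"
  wget -O "${model}.tar.gz" "$url"
  tar -xvzf "${model}.tar.gz"
done
```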

This will create a top level directory called weights, with sub-directories for each detector.

FakeFinder can now be started with either Compose or Kubernetes.

Start with Compose

docker-compose up -d --build

Start with Kubernetes

This has been tested using Minikube. It should be applicable in other Kubernetes environments but has not been explicitly tested there.

minikube start --driver docker --mount --mount-string $(pwd)/data/:/ff-data/
eval $(minikube docker-env) && \
docker build -t iqtlabs/fakefinder-api ./api && \
docker build -t iqtlabs/fakefinder-dash ./dash && \
docker build -t iqtlabs/fakefinder-detectors ./detectors 
minikube kubectl -- apply -f ff-networks.yaml -f ff-volumes.yaml -f ff-services.yaml -f ff-deployments.yaml

You should then be able to point a browser to the host's IP address to access the web application.
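With Minikube, the relevant address is the cluster IP rather than localhost. A small helper makes the final URL explicit; note that serving on port 80 is an assumption here, so adjust if your manifests map a different port:

```shell
# Build the URL to browse to from an IP address; assumes the web app is
# exposed on port 80 (adjust if your manifests map a different port).
app_url() { echo "http://$1/"; }

# With Minikube, the host address comes from `minikube ip`:
app_url "$(minikube ip)"
```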

Usage Instructions

Using the Dash App

The above example demonstrates the Inference Tool section of the web app. Users can upload a video by clicking on the Upload box in the Input Video section. The dropdown menu auto-populates when the upload completes, and users can play the video via a series of controls that adjust the volume, playback speed and current position within the video file.

In the Inference section of the page, users may select from the deep learning models available through the API. After checking the boxes of the requested models, the Submit button calls the API to run inference for each model. The results are returned in the table, which includes an assignment of Real or Fake based on each model's probability output, and are also presented graphically below the table. The bar graph shows the confidence that the video is real or fake for each model; for submissions with more than one model, an average confidence score is also presented.
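The API can also be called directly, without the Dash front end. The exact route and request schema are defined by the Flask server in the api container; the endpoint path and JSON field names in this sketch are illustrative assumptions, not confirmed by this README:

```shell
# Hypothetical inference request; replace the path and field names with the
# ones actually defined in the api container's Flask routes.
payload='{"modelName": "selimsef", "video": "/uploads/example.mp4"}'
curl -s -X POST "http://localhost/fakefinder/" \
     -H "Content-Type: application/json" \
     -d "$payload"
```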

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].