ssloxford / seeing-red

Licence: other
Using PPG Obtained via Smartphone Cameras for Authentication

Programming Languages

swift: 15916 projects
python: 139335 projects (#7 most used programming language)
Dockerfile: 14818 projects

Projects that are alternatives of or similar to seeing-red

website
Project Free Our Knowledge aims to organise collective action in support of open and reproducible research practices. This repository is used to design new campaigns (using the issues feature) and to build the website (www.freeourknowledge.org).
Stars: ✭ 32 (+10.34%)
Mutual labels:  research
SSBiometricsAuthentication
Biometric factors allow for secure authentication on the Android platform.
Stars: ✭ 87 (+200%)
Mutual labels:  biometrics
ogrants
Open grants list
Stars: ✭ 96 (+231.03%)
Mutual labels:  research
Recommendation-System-Baseline
Some common recommendation system baseline, with description and link.
Stars: ✭ 34 (+17.24%)
Mutual labels:  research
Fingerprint-Enhancement-Python
Using oriented gabor filters to enhance fingerprint images
Stars: ✭ 157 (+441.38%)
Mutual labels:  biometrics
llvm-semantics
Formal semantics of LLVM IR in K
Stars: ✭ 42 (+44.83%)
Mutual labels:  research
events
Materials related to events I might attend, and to talks I am giving
Stars: ✭ 22 (-24.14%)
Mutual labels:  research
linkedresearch.org
🌐 linkedresearch.org
Stars: ✭ 32 (+10.34%)
Mutual labels:  research
jdit
Jdit is a research processing oriented framework based on pytorch. The docs are here!
Stars: ✭ 29 (+0%)
Mutual labels:  research
schemaanalyst
➰ Search-based Test Data Generation for Relational Database Schemas
Stars: ✭ 18 (-37.93%)
Mutual labels:  research
path semantics
A research project in path semantics, a re-interpretation of functions for expressing mathematics
Stars: ✭ 136 (+368.97%)
Mutual labels:  research
parler-py-api
UNOFFICIAL Python API to interface with Parler.com
Stars: ✭ 52 (+79.31%)
Mutual labels:  research
knowledge
Everything I know. My knowledge wiki. My notes (mostly for fast.ai). Document everything. Brain dump.
Stars: ✭ 118 (+306.9%)
Mutual labels:  research
mlst check
Multilocus sequence typing by blast using the schemes from PubMLST
Stars: ✭ 22 (-24.14%)
Mutual labels:  research
plur
PLUR (Programming-Language Understanding and Repair) is a collection of source code datasets suitable for graph-based machine learning. We provide scripts for downloading, processing, and loading the datasets. This is done by offering a unified API and data structures for all datasets.
Stars: ✭ 67 (+131.03%)
Mutual labels:  research
graphicsvg
Graphics library authored by Chris Schankula and Dr. Christopher Anand
Stars: ✭ 42 (+44.83%)
Mutual labels:  research
mozilla-sprint-2018
DEPRECATED & Materials Moved: This sprint was to focus on brainstorming for the Joint Roadmap for Open Science Tools.
Stars: ✭ 24 (-17.24%)
Mutual labels:  research
book-notes
📖Notes on books and other things I'm reading 📖
Stars: ✭ 43 (+48.28%)
Mutual labels:  research
research-grants
Protocol Labs Research Grants
Stars: ✭ 143 (+393.1%)
Mutual labels:  research
cerberus research
Research tools for analysing Cerberus banking trojan.
Stars: ✭ 110 (+279.31%)
Mutual labels:  research

Seeing Red: PPG Biometrics Using Smartphone Cameras

This repository contains the code for the paper "Seeing Red: PPG Biometrics Using Smartphone Cameras", published in the Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) at the 15th IEEE Computer Society Workshop on Biometrics. This work is a collaboration between Giulio Lovisotto, Henry Turner and Simon Eberz from the System Security Lab at the University of Oxford.

Idea

In this work we investigated the use of photoplethysmography (PPG) for authentication. An individual's PPG signal can be extracted by recording a video with a smartphone camera while the user places a finger over the lens. The blood flowing through the finger changes the reflective properties of the skin, and this is captured as subtle changes in the color of the video frames.

We collected PPG signals from 15 participants over several sessions (6-11 per participant); in each session the participant placed their finger on the camera while a 30-second video was recorded. We extract the raw value of the LUMA component of each video frame to obtain the underlying PPG signal. The signal is then preprocessed with a set of filters to remove trends and high-frequency components, and each individual heartbeat is separated out with a custom algorithm.
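The full pipeline is implemented in the repository (it is run end-to-end by signal_run_all.py); the snippet below is only a minimal sketch of the idea, assuming OpenCV and SciPy are available. The video path, band-pass cut-offs and peak-detection parameters are illustrative and are not the values used in the paper.

import cv2
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

# illustrative only: extract a per-frame luma trace from one video
cap = cv2.VideoCapture("data/videos/example.mp4")  # hypothetical path
fps = cap.get(cv2.CAP_PROP_FPS)
luma = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # the Y channel of YCrCb is the luma component of the frame
    luma.append(cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)[:, :, 0].mean())
cap.release()
luma = np.asarray(luma)

# band-pass around plausible heart-rate frequencies to remove slow trends
# and high-frequency noise (cut-offs here are illustrative)
b, a = butter(3, [0.5 / (fps / 2), 4.0 / (fps / 2)], btype="band")
ppg = filtfilt(b, a, luma)

# rough heartbeat segmentation: split the filtered signal at systolic peaks
peaks, _ = find_peaks(ppg, distance=int(fps * 0.4))
heartbeats = [ppg[s:e] for s, e in zip(peaks[:-1], peaks[1:])]
print("extracted %d heartbeats" % len(heartbeats))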

We designed a set of features that capture the distinctiveness of each individual's PPG signal and we evaluated the authentication performance with a set of experiments (see Reproduce Evaluation).
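The actual per-heartbeat features and classifier settings are defined in the repository code (the experiments below evaluate SVM, GBT and RFC classifiers); the sketch that follows only illustrates the general shape of the approach, with made-up features and dummy data standing in for the segmented heartbeats.

import numpy as np
from sklearn.svm import SVC

def beat_features(beat, fps=30.0):
    # illustrative per-heartbeat features, not the paper's exact feature set
    peak = int(np.argmax(beat))
    return [len(beat) / fps,                   # beat duration (s)
            float(beat.max() - beat.min()),    # peak-to-trough amplitude
            peak / fps]                        # time to systolic peak

# dummy stand-ins for segmented heartbeats and their per-beat user labels
rng = np.random.default_rng(0)
heartbeats = []
for _ in range(40):
    n = int(rng.integers(20, 30))
    heartbeats.append(np.sin(np.linspace(0, np.pi, n)) + rng.normal(0, 0.05, n))
labels = rng.integers(0, 4, size=len(heartbeats))

# each heartbeat becomes one sample labelled with its user;
# an SVM is one of the classifiers evaluated in the paper
X = np.array([beat_features(b) for b in heartbeats])
clf = SVC().fit(X, labels)
print(clf.predict(X[:5]))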

See the conference presentation slides

Dataset

The dataset used for this paper has been published online on ORA and can be freely downloaded. It contains a set of videos for the 14 participants who consented to their data being shared (ethics approval number SSD/CUREC1A CS_C1A_19_032). Each video is a 30-second recording taken while the participant kept their index finger on the smartphone camera; see a preview here. The dataset was collected using a custom-built app on an iPhone X, and the iOS application source code is available in this repository.

Reproduce Evaluation

The code runs inside a Docker container and requires docker and docker-compose to be installed on your system.

You might be able to make this work in a generic Python/Anaconda environment with some effort.

To reproduce the evaluation, follow these steps:

  1. Read the paper - this is the only way you will understand what you are doing
  2. Clone this repository
  3. Download the dataset used in the paper, unzip the archive and place the downloaded videos folder in seeing-red/data/
  4. Build and start the container by running docker-compose up -d
  5. Attach to the container with docker attach seeingred_er
  6. In the container, cd to /home/code and run the entire signal analysis pipeline with python signal_run_all.py

Results will be produced in several subfolders in seeing-red/data/.

Read EER Results

The resulting Equal Error Rates (EERs) are produced by three functions defined in classify.py and saved in subfolders under seeing-red/data/results/<expid>:

  • exp1 produces the results used in paper Section 5.1: Multi-class Case, saved in data/results/exp1/all.npy and data/results/exp1/all-fta.npy
  • exp3 produces the results used in paper Section 5.2: One-class Case, saved in data/results/exp3/all.npy and data/results/exp3/all-fta.npy
  • exp4 produces the results used in paper Section 5.3: One-class Cross-Session, saved in data/results/exp4/all.npy and data/results/exp4/all-fta.npy

N.B.: Section 5.4 of the paper (EER User Distribution) re-uses the results from exp3 and exp4.

A results/<expid>/all.npy file is a numpy multidimensional array containing EER measurements; each array dimension is described by the descr.json file contained in the same folder.

For example, if you load the result file for exp1 and its description file, you can read results this way:

import numpy as np
import json
# load the file
eers = np.load("/home/data/results/exp1/all.npy")  
# load the description for the result file
descr = json.load(open("/home/data/results/exp1/descr.json"))  

# "header" in descr decribes the dimensions of the eers array
# the number of dimensions of eers should match the length of the header
assert len(descr["header"]) == len(eers.shape)

# ["fold", "clf", "window_size", "user"]
print(descr["header"])  
# should be (2, 3, 5, 14) for exp1
print(eers.shape)  

# let's print an EER for a specific instance
# select one index across each dimension
fold_index = 0
# one of ["SVM", "GBT", "RFC"]
clf_index = descr["clf"].index("SVM")  
# one of [1, 2, 5, 10, 20]
aws_index = descr["window_size"].index(5)  
usr_index = 3 
print("The EER measured for fold %d, classifier %s, aggregation window size of %d and user %d is %.4f" % (
          fold_index, descr["clf"][clf_index], descr["window_size"][aws_index], usr_index, eers[fold_index, clf_index, aws_index, usr_index]))

In the paper, to get an EER for a (classifier, aggregation window size) pair, we take the average across folds and across users:

## let's take "SVM" and aggregation window size of 5
import numpy as np
import json
# load the result file and its description
eers = np.load("/home/data/results/exp1/all.npy")
descr = json.load(open("/home/data/results/exp1/descr.json"))
# one of ["SVM", "GBT", "RFC"]
chosen_clf = "SVM"
# one of [1, 2, 5, 10, 20]
chosen_aws = 5
clf_index = descr["clf"].index(chosen_clf)
aws_index = descr["window_size"].index(chosen_aws)
# keep only the chosen classifier and window size -> shape (folds, users)
eers = eers[:, clf_index, aws_index, :]
# average across folds first, then report the mean and std across users
per_user_eer = eers.mean(axis=0)
eers_mean = per_user_eer.mean()
eers_std = per_user_eer.std()
print("The average EER measured for exp1 using %s and aggregation window size of %d is %.4f with standard deviation of %.4f" % (
           chosen_clf, chosen_aws, eers_mean, eers_std))

Citation

If you use this repository please cite the paper as follows:

@INPROCEEDINGS{9150630,
  author={G. {Lovisotto} and H. {Turner} and S. {Eberz} and I. {Martinovic}},
  booktitle={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)}, 
  title={Seeing Red: PPG Biometrics Using Smartphone Cameras}, 
  year={2020},
  volume={},
  number={},
  pages={3565-3574},
  doi={10.1109/CVPRW50498.2020.00417}}

Contributors

Acknowledgements

This work was generously supported by a grant from Mastercard and by the Engineering and Physical Sciences Research Council [grant numbers EP/N509711/1, EP/P00881X/1].
