
LaRiffle / collateral-learning

Licence: other
Collateral Learning - Functional Encryption and Adversarial Training on partially encrypted networks

Programming Languages

Jupyter Notebook, Python

Projects that are alternatives of or similar to collateral-learning

Adversarial Explainable Ai
💡 A curated list of adversarial attacks on model explanations
Stars: ✭ 56 (-16.42%)
Mutual labels:  adversarial-learning
Arel
Code for the ACL paper "No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling"
Stars: ✭ 124 (+85.07%)
Mutual labels:  adversarial-learning
Awesome Domain Adaptation
A collection of AWESOME things about domain adaptation
Stars: ✭ 3,357 (+4910.45%)
Mutual labels:  adversarial-learning
Pose Adv Aug
Code for "Jointly Optimize Data Augmentation and Network Training: Adversarial Data Augmentation in Human Pose Estimation" (CVPR 2018)
Stars: ✭ 83 (+23.88%)
Mutual labels:  adversarial-learning
Pytorch Adversarial box
PyTorch library for adversarial attack and training
Stars: ✭ 104 (+55.22%)
Mutual labels:  adversarial-learning
Unsupervised detection
An Unsupervised Learning Framework for Moving Object Detection From Videos
Stars: ✭ 139 (+107.46%)
Mutual labels:  adversarial-learning
Gvb
Code of Gradually Vanishing Bridge for Adversarial Domain Adaptation (CVPR2020)
Stars: ✭ 52 (-22.39%)
Mutual labels:  adversarial-learning
MAN
Multimodal Adversarial Network for Cross-modal Retrieval (PyTorch Code)
Stars: ✭ 26 (-61.19%)
Mutual labels:  adversarial-learning
Gpnd
Generative Probabilistic Novelty Detection with Adversarial Autoencoders
Stars: ✭ 112 (+67.16%)
Mutual labels:  adversarial-learning
Adversarial Learning For Neural Dialogue Generation In Tensorflow
Adversarial-Learning-for-Neural-Dialogue-Generation-in-Tensorflow
Stars: ✭ 181 (+170.15%)
Mutual labels:  adversarial-learning
Virtual Adversarial Training
Pytorch implementation of Virtual Adversarial Training
Stars: ✭ 94 (+40.3%)
Mutual labels:  adversarial-learning
Zerospeech Tts Without T
A Pytorch implementation for the ZeroSpeech 2019 challenge.
Stars: ✭ 100 (+49.25%)
Mutual labels:  adversarial-learning
Segan
SegAN: Semantic Segmentation with Adversarial Learning
Stars: ✭ 143 (+113.43%)
Mutual labels:  adversarial-learning
Ali Pytorch
PyTorch implementation of Adversarially Learned Inference (BiGAN).
Stars: ✭ 61 (-8.96%)
Mutual labels:  adversarial-learning
Awesome Tensorlayer
A curated list of dedicated resources and applications
Stars: ✭ 248 (+270.15%)
Mutual labels:  adversarial-learning
Handwriting recogition using adversarial learning
[CVPR 2019] "Handwriting Recognition in Low-resource Scripts using Adversarial Learning", IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.
Stars: ✭ 52 (-22.39%)
Mutual labels:  adversarial-learning
Free adv train
Official TensorFlow Implementation of Adversarial Training for Free! which trains robust models at no extra cost compared to natural training.
Stars: ✭ 127 (+89.55%)
Mutual labels:  adversarial-learning
RAS
AISTATS 2019: Reference-based Adversarial Sampling & Its applications to Soft Q-learning
Stars: ✭ 15 (-77.61%)
Mutual labels:  adversarial-learning
Clan
(CVPR 2019 Oral) Taking A Closer Look at Domain Shift: Category-level Adversaries for Semantics Consistent Domain Adaptation
Stars: ✭ 248 (+270.15%)
Mutual labels:  adversarial-learning
Reenactgan
[ECCV 2018] ReenactGAN: Learning to Reenact Faces via Boundary Transfer
Stars: ✭ 147 (+119.4%)
Mutual labels:  adversarial-learning

Collateral Learning

TL;DR We use Functional Encryption combined with Adversarial Learning to perform privacy-preserving neural network evaluation. We provide a wide range of tutorials to help you dive into the project.

Motivation

Imagine that you train a neural network to perform a specific task, and you discover that it has also learned information which makes it possible to perform a completely different, and very sensitive, task. Is this possible? What can you do to prevent it?

This shouldn't be confused with the following notions:

Transfer learning: You train a model to perform a specific task, and you reuse this pre-trained model to perform a related task on possibly different data. In collateral learning, the second task is of a different nature while the data used should be closely related to the original one.

Adversarial learning: You corrupt the input data to fool the model and reduce its performance. In collateral learning, you don't modify the input but you try to disclose hidden sensitive information about it using the model output.

Context

Let's assume you have a semi-private trained network performing some prediction task pred1. This means the first layers of the network are encrypted and the last ones are visible in the clear. The structure of the network can be written as: x -> Private -> output(x) -> Public -> pred1(x). For example, pred1(x) could be the age estimated from a face picture input x, or the text transcription of some speech record.
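For concreteness, here is a minimal plain-PyTorch sketch (no encryption involved) of this split architecture. The module names, layer sizes and quadratic activation are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn as nn

class PrivatePart(nn.Module):
    """Layers that would run over encrypted data in the real setting."""
    def __init__(self, in_features=784, hidden=32):
        super().__init__()
        self.proj = nn.Linear(in_features, hidden)

    def forward(self, x):
        h = self.proj(x)
        return h * h  # quadratic activation, compatible with quadratic FE schemes

class PublicPart(nn.Module):
    """Layers evaluated in the clear on output(x)."""
    def __init__(self, hidden=32, n_classes=26):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, out_x):
        return self.head(out_x)

private, public = PrivatePart(), PublicPart()
x = torch.randn(8, 784)   # a batch of flattened inputs
output_x = private(x)     # all an observer ever sees about x
pred1 = public(output_x)  # the legitimate task, e.g. character recognition
```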

Several encryption schemes exist, among which Secure Multi-Party Computation (SMPC) and Homomorphic Encryption (HE), but we will consider here a less popular one: Functional Encryption (FE). FE allows a non-trusted party to learn the output of a specific function over encrypted data without interacting with the data owner.

It can be used to encrypt quadratic networks, as is done in this paper (code is available here), where you are actually given x encrypted (i.e. Enc(x) -> Private -> output(x) ...). One reason for this two-phase setting is that encryption is very expensive or restrictive (current FE schemes only support a single quadratic operation), but you can improve the model accuracy by adding a neural network in the clear which leverages the output of the encrypted part.
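To give an idea of what a quadratic FE scheme lets an evaluator learn, the toy snippet below evaluates a few quadratic forms x^T Q x in plaintext. It involves no cryptography, and the dimensions are assumptions; it only shows the shape of the computation that one functional decryption key per matrix Q would unlock.

```python
import torch

def quadratic_forms(x, Qs):
    """x: (d,) plaintext input; Qs: list of (d, d) matrices, one per decryption key."""
    return torch.stack([x @ Q @ x for Q in Qs])

d, k = 784, 16                       # input size and number of quadratic forms (assumed)
x = torch.randn(d)
Qs = [torch.randn(d, d) * 0.01 for _ in range(k)]
output_x = quadratic_forms(x, Qs)    # what the server sees; x itself stays encrypted
```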

During the testing phase, when a prediction is made on an input, one can only observe the neuron activations starting from the output of the private part, output(x) in our notation. Hence, output(x) is the best anyone can know about the original input.

We investigate here how an adversary could query this trained model x -> Private -> output(x) -> Public -> pred1(x) with an item x which the adversary knows has another classifiable feature pred2(x). For example, if the input x is a face picture then pred2(x) could be the gender, or it could be the ethnic origin of the speaker if x is a speech record. The goal of the adversary is to learn another network based on output(x) which can perform the prediction task output(x) -> NN -> pred2(x) for encrypted items x. In particular, the adversary can't alter the Private network.
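As a rough sketch of this attack, assuming the frozen `private` module from the snippet above and a hypothetical data loader yielding (input, sensitive label) pairs, the adversary simply fits a new classification head on top of output(x):

```python
import torch
import torch.nn as nn

# 32 matches the assumed hidden size of output(x); 2 sensitive classes as an example
adversary = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(adversary.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    for x, y_sensitive in loader:      # `loader` is assumed to exist
        with torch.no_grad():
            out_x = private(x)         # frozen: the adversary can only query it
        loss = criterion(adversary(out_x), y_sensitive)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```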

Use case

So, what's the use case? For example, imagine you provide a secure transcription service and people send you encrypted speech records. All you can read and exploit in the clear is output(x), which can be a relatively short vector compared to x, but which is enough for you to recover the text and deliver your service. The question is: can you use output(x) to detect the ethnic origin of the person speaking?

Our approach

We give concrete examples of this in our repository to answer this question. Even though some existing datasets are suitable for two distinct and "relatively" independent learning tasks, like the face dataset imdb-wiki, we have built a 60,000-item artificial letter character dataset inspired by MNIST, where several fonts were used to draw the characters and extra deformation is added. This way, we can ensure complete independence between the two features to classify and adjust the difficulty of classification to the current capabilities of Functional Encryption.


Our work is detailed in the tutorials section. Any comments are welcome!

Publication

This work has been published at NeurIPS 2019 and the poster can be found here.

Partially Encrypted Machine Learning using Functional Encryption

Machine learning on encrypted data has received a lot of attention thanks to recent breakthroughs in homomorphic encryption and secure multi-party computation. It allows outsourcing computation to untrusted servers without sacrificing privacy of sensitive data. We propose a practical framework to perform partially encrypted and privacy-preserving predictions which combines adversarial training and functional encryption. We first present a new functional encryption scheme to efficiently compute quadratic functions so that the data owner controls what can be computed but is not involved in the calculation: it provides a decryption key which allows one to learn a specific function evaluation of some encrypted data. We then show how to use it in machine learning to partially encrypt neural networks with quadratic activation functions at evaluation time, and we provide a thorough analysis of the information leaks based on indistinguishability of data items of the same label. Last, since most encryption schemes cannot deal with the last thresholding operation used for classification, we propose a training method to prevent selected sensitive features from leaking, which adversarially optimizes the network against an adversary trying to identify these features. This is interesting for several existing works using partially encrypted machine learning as it comes with little reduction on the model's accuracy and significantly improves data privacy.
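The defensive training described in the abstract can be sketched as an alternating optimization. This hedged example reuses the hypothetical `private`, `public` and `adversary` modules from the earlier snippets; the loss weighting `alpha` and the alternation schedule are assumptions, not the paper's exact objective.

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
opt_model = torch.optim.Adam(list(private.parameters()) + list(public.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
alpha = 1.0  # trade-off between main-task accuracy and collateral privacy (assumed)

for x, y_main, y_sensitive in train_loader:   # `train_loader` is assumed to exist
    out_x = private(x)

    # (1) fit the simulated adversary on the current representation
    opt_adv.zero_grad()
    criterion(adversary(out_x.detach()), y_sensitive).backward()
    opt_adv.step()

    # (2) update the model: accurate on pred1, confusing for the adversary on pred2
    opt_model.zero_grad()
    main_loss = criterion(public(out_x), y_main)
    fool_loss = criterion(adversary(out_x), y_sensitive)
    (main_loss - alpha * fool_loss).backward()
    opt_model.step()
```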

It is joint work by Theo Ryffel, Edouard Dufour-Sans, Romain Gay, Francis Bach, and David Pointcheval.

Slides are also available here.

Support the project!

If you're enthusiastic about our project, ⭐️ it to show your support! ❤️
