
spring-epfl / Mia

License: MIT
A library for running membership inference attacks against ML models

Programming Languages

Python
139,335 projects - #7 most used programming language

Projects that are alternatives of or similar to Mia

Silence
PROJECT MOVED: https://git.silence.dev/Silence/Silence-Android/ (GitHub is just a mirror.)
Stars: ✭ 1,019 (+1517.46%)
Mutual labels:  privacy
Sophos Xg Block Lists
Extending & consolidating hosts files from a variety of sources, specifically for Sophos XG.
Stars: ✭ 54 (-14.29%)
Mutual labels:  privacy
Drops
opmsg p2p transport network
Stars: ✭ 58 (-7.94%)
Mutual labels:  privacy
Anonaddy
Anonymous email forwarding
Stars: ✭ 1,022 (+1522.22%)
Mutual labels:  privacy
Nowallet
This project is a secure Bitcoin brainwallet app written in Python.
Stars: ✭ 52 (-17.46%)
Mutual labels:  privacy
Ail Framework
AIL framework - Analysis Information Leak framework
Stars: ✭ 1,091 (+1631.75%)
Mutual labels:  privacy
Paperwork
Paperwork - OpenSource note-taking & archiving alternative to Evernote, Microsoft OneNote & Google Keep
Stars: ✭ 7,838 (+12341.27%)
Mutual labels:  privacy
Megachat
MEGA C++ SDK for chat-enabled apps
Stars: ✭ 61 (-3.17%)
Mutual labels:  privacy
Sign Up For Facebook
A summary of what data Facebook collects and how it can be used
Stars: ✭ 53 (-15.87%)
Mutual labels:  privacy
Pathwar
☠️ The Pathwar Project ☠️
Stars: ✭ 58 (-7.94%)
Mutual labels:  privacy
Nipe
An engine to make Tor network your default gateway
Stars: ✭ 1,032 (+1538.1%)
Mutual labels:  privacy
Privatezilla
👀👮🐢🔥Performs a privacy & security check of Windows 10
Stars: ✭ 1,045 (+1558.73%)
Mutual labels:  privacy
Kindmetrics
Kind metrics analytics for your website
Stars: ✭ 57 (-9.52%)
Mutual labels:  privacy
Adguardbrowserextension
AdGuard browser extension
Stars: ✭ 1,018 (+1515.87%)
Mutual labels:  privacy
Dnscrypt Menu
Manage DNSCrypt from the macOS menu bar (BitBar plugin)
Stars: ✭ 59 (-6.35%)
Mutual labels:  privacy
Embassy Os
A graphical operating system for running self-hosted software.
Stars: ✭ 43 (-31.75%)
Mutual labels:  privacy
Fem
Blokada 5 for Android and iOS (repo moved).
Stars: ✭ 57 (-9.52%)
Mutual labels:  privacy
Owasp Seraphimdroid
OWASP Seraphimdroid is an open source project with aim to create, as a community, an open platform for education and protection of Android users against privacy and security threats.
Stars: ✭ 62 (-1.59%)
Mutual labels:  privacy
Vpn At Home
1-click, self-hosted deployment of OpenVPN with DNS ad blocking sinkhole
Stars: ✭ 1,106 (+1655.56%)
Mutual labels:  privacy
Ethsnarks Miximus
Example project for EthSnarks - Miximus coin mixer
Stars: ✭ 58 (-7.94%)
Mutual labels:  privacy

===
mia
===

|pypi| |license| |build_status| |docs_status| |zenodo|

.. |pypi| image:: https://img.shields.io/pypi/v/mia.svg
   :target: https://pypi.org/project/mia/
   :alt: PyPI version

.. |build_status| image:: https://travis-ci.org/spring-epfl/mia.svg?branch=master
   :target: https://travis-ci.org/spring-epfl/mia
   :alt: Build status

.. |docs_status| image:: https://readthedocs.org/projects/mia-lib/badge/?version=latest
   :target: https://mia-lib.readthedocs.io/?badge=latest
   :alt: Documentation status

.. |license| image:: https://img.shields.io/pypi/l/mia.svg
   :target: https://pypi.org/project/mia/
   :alt: License

.. |zenodo| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.1433744.svg
   :target: https://zenodo.org/record/1433744
   :alt: Citing with Zenodo

A library for running membership inference attacks (MIA) against machine learning models. Check out the `documentation <https://mia-lib.rtfd.io>`_.

.. description-marker-do-not-remove

These are attacks against the privacy of the training data. In a membership inference attack, an adversary tries to determine whether a given example was used to train a target model, only by querying the model. See more in the paper by `Shokri et al. <https://arxiv.org/abs/1610.05820>`_ Currently, you can use the library to evaluate the robustness of your Keras or PyTorch models to MIA.

Features:

- Implements the original shadow model `attack <https://arxiv.org/abs/1610.05820>`_
- Is customizable: can use any scikit-learn Estimator-like object as a shadow or attack model
- Is tested with Keras and PyTorch

.. getting-started-marker-do-not-remove

Getting started
===============

You can install mia from PyPI:

.. code-block:: bash

    pip install mia

.. usage-marker-do-not-remove

Usage
=====

Shokri et al. attack
--------------------

See the full runnable `example <https://github.com/spring-epfl/mia/tree/master/examples/cifar10.py>`_. Read the details of the attack in the `paper <https://arxiv.org/abs/1610.05820>`_.

Let ``target_model_fn()`` return the target model architecture as a scikit-like classifier. The attack is white-box, meaning the attacker is assumed to know the architecture. Let ``NUM_CLASSES`` be the number of classes of the classification problem.
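For illustration, here is a minimal sketch of such a function, assuming a small Keras image classifier; the architecture, input shape, and ``NUM_CLASSES`` value are hypothetical, and the real target can be any scikit-like model:

.. code-block:: python

    import tensorflow as tf

    NUM_CLASSES = 10  # hypothetical, e.g. CIFAR-10

    def target_model_fn():
        # A toy convnet standing in for the target architecture.
        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(
            optimizer="adam",
            loss="categorical_crossentropy",
            metrics=["accuracy"],
        )
        return model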

First, the attacker needs to train several shadow models that mimic the target model, each on a different dataset sampled from the original data distribution. The following code snippet initializes a shadow model bundle and runs the training of the shadows. For each shadow model, ``2 * SHADOW_DATASET_SIZE`` examples are sampled without replacement from the attacker's full dataset. Half of them are used to train the shadow model, and the other half serve as the held-out control set.

.. code-block:: python

    from mia.estimators import ShadowModelBundle

    # Train NUM_MODELS shadow models, each on a fresh sample of the
    # attacker's dataset.
    smb = ShadowModelBundle(
        target_model_fn,
        shadow_dataset_size=SHADOW_DATASET_SIZE,
        num_models=NUM_MODELS,
    )

    # Produce the attack training data: prediction vectors along with
    # "in"/"out" membership labels.
    X_shadow, y_shadow = smb.fit_transform(attacker_X_train, attacker_y_train)

``fit_transform`` returns the attack data ``X_shadow, y_shadow``. Each row in ``X_shadow`` is a concatenated vector consisting of a shadow model's prediction vector for an example from the original dataset, and that example's true class (one-hot encoded); its shape is hence ``(2 * SHADOW_DATASET_SIZE, 2 * NUM_CLASSES)``. Each label in ``y_shadow`` is zero if the corresponding example was out of the shadow model's training dataset (control), and one if it was in the training set.
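To make the row layout concrete, this illustrative slicing restates the format described above:

.. code-block:: python

    # First NUM_CLASSES entries: the shadow model's prediction vector.
    prediction_vector = X_shadow[0, :NUM_CLASSES]
    # Last NUM_CLASSES entries: the example's one-hot encoded true class.
    one_hot_class = X_shadow[0, NUM_CLASSES:]
    # Membership label: 1 = "in" the shadow training set, 0 = "out".
    was_in_training = y_shadow[0]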

mia provides a class to train a bundle of attack models, one model per class. ``attack_model_fn()`` is supposed to return a scikit-like binary classifier that takes a vector of model predictions of shape ``(NUM_CLASSES,)`` and predicts whether the example that produced those predictions was in the training set or out of it.
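For example, a hypothetical ``attack_model_fn`` could return a logistic regression; any scikit-like binary classifier works:

.. code-block:: python

    from sklearn.linear_model import LogisticRegression

    def attack_model_fn():
        # Binary "in"/"out" classifier over prediction vectors.
        return LogisticRegression(solver="lbfgs")

The bundle is then trained on the shadow data: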

.. code-block:: python

    from mia.estimators import AttackModelBundle

    # One attack model per class, trained on the shadow data.
    amb = AttackModelBundle(attack_model_fn, num_classes=NUM_CLASSES)
    amb.fit(X_shadow, y_shadow)

In place of the ``AttackModelBundle`` one can use any binary classifier that takes examples of shape ``(2 * NUM_CLASSES,)``: as explained above, the first half of each input is a model's prediction vector, and the second half is the one-hot encoded true class of the corresponding example.
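As an illustrative sketch of that alternative, assuming scikit-learn is available, a single classifier can be trained directly on the full vectors:

.. code-block:: python

    from sklearn.ensemble import RandomForestClassifier

    # One binary membership classifier over the whole
    # [prediction vector | one-hot class] input, instead of a bundle
    # with one attack model per class.
    single_attack_model = RandomForestClassifier(n_estimators=100)
    single_attack_model.fit(X_shadow, y_shadow)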

To evaluate the attack, one must encode the data in the above-mentioned format. Let ``target_model`` be the target model, ``data_in`` the data (a tuple ``X, y``) that was used to train the target model, and ``data_out`` the data that was not used in training.
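For instance, assuming a hypothetical train/test split where the target model was fit on ``X_train, y_train`` and never saw ``X_test, y_test``, the two tuples could be built as:

.. code-block:: python

    # Members: examples the target model was trained on.
    data_in = (X_train, y_train)

    # Non-members: held-out examples the target model never saw.
    data_out = (X_test, y_test)

With these in hand, the evaluation proceeds as follows: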

.. code-block:: python

    import numpy as np

    from mia.estimators import prepare_attack_data

    # Encode the evaluation data in the same format as the shadow data.
    attack_test_data, real_membership_labels = prepare_attack_data(
        target_model, data_in, data_out
    )

    # Fraction of the attacker's membership guesses that are correct.
    attack_guesses = amb.predict(attack_test_data)
    attack_accuracy = np.mean(attack_guesses == real_membership_labels)

.. misc-marker-do-not-remove

Citing
======

.. code-block::

    @misc{mia,
      author = {Bogdan Kulynych and Mohammad Yaghini},
      title = {{mia: A library for running membership inference attacks against ML models}},
      month = sep,
      year = 2018,
      doi = {10.5281/zenodo.1433744},
      url = {https://doi.org/10.5281/zenodo.1433744}
    }
