
adiyoss / WatermarkNN

License: MIT
Watermarking Deep Neural Networks (USENIX 2018)


Watermarking Deep Neural Networks

This repository provides a PyTorch implementation of the paper Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring, with scripts for watermarking neural networks by backdooring as well as for fine-tuning them. A blog post with an informal description of the proposed method can be found here.

Paper

Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring
Yossi Adi1, Carsten Baum1, Moustapha Cisse2, Benny Pinkas1, Joseph Keshet1
1 Bar-Ilan University, 2 Google, Inc
27th USENIX Security Symposium (USENIX Security 2018).

Content

The repository contains three main scripts: train.py, predict.py, and fine-tune.py, which let you train models (with or without a watermark), run predictions, and fine-tune trained models.

Additionally, this repo contains the trigger set images used to embed the watermarks.

At the moment the code supports training and evaluation on the CIFAR-10 dataset only. More datasets will be supported soon.
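The core idea of watermarking by backdooring is that a few trigger-set images, each paired with a pre-assigned (and otherwise unrelated) label, are mixed into every training batch, so the network learns both its real task and the trigger responses. A minimal PyTorch sketch of one such training step (the toy model, random data, and batch sizes here are illustrative, not the repository's actual code):

```python
import torch
import torch.nn as nn

# Toy stand-ins: a linear classifier over CIFAR-sized inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

data = torch.randn(8, 3, 32, 32)        # regular training batch
labels = torch.randint(0, 10, (8,))
wm_data = torch.randn(2, 3, 32, 32)     # trigger-set batch (cf. --wm_batch_size 2)
wm_labels = torch.randint(0, 10, (2,))  # pre-assigned trigger labels

# One watermark-aware training step: append trigger images to the batch
# so the model fits the task data and the trigger labels jointly.
inputs = torch.cat([data, wm_data])
targets = torch.cat([labels, wm_labels])
opt.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
opt.step()
```

Training without the trigger set simply drops the concatenation and optimizes on `data`/`labels` alone.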

Dependencies

Python 3.6

PyTorch 0.4.1

Usage

1. Cloning the repository

$ git clone https://github.com/adiyoss/WatermarkNN.git
$ cd WatermarkNN

2. Training

The train.py script allows you to train a model with or without a trigger set.

For example:

python train.py --batch_size 100 --max_epochs 60 --runname train --wm_batch_size 2 --wmtrain

To train without the trigger set, omit the --wmtrain flag.
To resume training, use the --resume flag. Lastly, all log files and model checkpoints are prefixed with the value of --runname.

New Trigger Set

For training with your own trigger set and labels, provide the path to the trigger-set images with the --wm_path flag and the path to the label file with the --wm_lbl flag.
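The exact format of the label file is not documented here; assuming it follows the common pattern suggested by labels-cifar.txt of one integer class label per line, a loader might look like the sketch below (the function name and format are hypothetical; check the repository's data-loading code for the real format):

```python
from pathlib import Path

def load_trigger_labels(path):
    """Parse a label file with one integer class label per line
    (hypothetical format -- check the repo's loader for the real one)."""
    return [int(line) for line in Path(path).read_text().split()]

# Write a tiny demo file and read it back.
Path("labels-demo.txt").write_text("5\n0\n9\n")
print(load_trigger_labels("labels-demo.txt"))  # [5, 0, 9]
```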

3. Testing

The predict.py script allows you to test your model on the CIFAR-10 test set or on a provided trigger set.
To test a trained model on the CIFAR-10 test set (without the trigger set), run the following command:

python predict.py --model_path checkpoint/model.t7

To test a trained model on a specified trigger set, run the following command:

python predict.py --model_path checkpoint/model.t7 --wm_path ./data/trigger_set --wm_lbl labels-cifar.txt --testwm
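Watermark verification boils down to classification accuracy on the trigger set: since the trigger labels are arbitrary, a model that never saw them should score near chance (about 10% on ten classes), while a watermarked model should score far higher. A minimal sketch of that check (the prediction values and threshold reasoning here are illustrative):

```python
def trigger_set_accuracy(predictions, trigger_labels):
    """Fraction of trigger images classified with their assigned labels."""
    correct = sum(p == t for p, t in zip(predictions, trigger_labels))
    return correct / len(trigger_labels)

# A watermarked model reproduces most of the (arbitrary) trigger labels:
preds_watermarked = [3, 7, 1, 4, 9, 0]
trigger_labels    = [3, 7, 1, 4, 9, 2]
acc = trigger_set_accuracy(preds_watermarked, trigger_labels)
# Accuracy far above 1/num_classes suggests the watermark is present.
```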

4. Fine-Tuning

We define four ways to fine-tune: Fine-Tune Last Layer (FTLL), Fine-Tune All Layers (FTAL), Retrain Last Layer (RTLL), and Retrain All Layers (RTAL). A graphical description of these methods is shown below:
Fine-tuning techniques

Below we provide example scripts for all four fine-tuning techniques.

Fine-Tune Last Layer (FTLL)

python fine-tune.py --lr 0.01 --load_path checkpoint/model.t7 --save_dir checkpoint/ --save_model ftll.t7 --runname fine.tune.last.layer

Fine-Tune All Layers (FTAL)

python fine-tune.py --lr 0.01 --load_path checkpoint/model.t7 --save_dir checkpoint/ --save_model ftal.t7 --runname fine.tune.all.layers --tunealllayers

Retrain Last Layer (RTLL)

python fine-tune.py --lr 0.01 --load_path checkpoint/model.t7 --save_dir checkpoint/ --save_model rtll.t7 --runname reinit.last.layer --reinitll

Retrain All Layers (RTAL)

python fine-tune.py --lr 0.01 --load_path checkpoint/model.t7 --save_dir checkpoint/ --save_model rtal.t7 --runname reinit_all.layers --reinitll --tunealllayers
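In PyTorch terms, the four regimes amount to two choices: whether the classifier head is reinitialized ("Retrain" vs. "Fine-Tune") and whether the remaining layers stay frozen ("Last Layer" vs. "All Layers"). A minimal sketch, assuming a model whose last child module is the classifier head (this helper is illustrative, not the repository's code):

```python
import torch.nn as nn

def prepare_fine_tuning(model, mode):
    """Configure a model for FTLL / FTAL / RTLL / RTAL (sketch).
    Assumes the classifier head is the model's last child module."""
    layers = list(model.children())
    head = layers[-1]
    if mode in ("RTLL", "RTAL"):        # "Retrain": reinitialize the head
        head.reset_parameters()
    freeze_body = mode in ("FTLL", "RTLL")  # "Last Layer": freeze the rest
    for layer in layers[:-1]:
        for p in layer.parameters():
            p.requires_grad = not freeze_body
    return model

# Example: FTLL updates only the head, leaving earlier layers frozen.
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 10))
prepare_fine_tuning(model, "FTLL")
```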

For more training / testing / fine-tuning options, see the argument definitions inside the scripts.

Citation

If you find our work useful, please cite:

@inproceedings{217591,
author = {Yossi Adi and Carsten Baum and Moustapha Cisse and Benny Pinkas and Joseph Keshet},
title = {Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring},
booktitle = {27th {USENIX} Security Symposium ({USENIX} Security 18)},
year = {2018},
isbn = {978-1-931971-46-1},
address = {Baltimore, MD},
pages = {1615--1631},
url = {https://www.usenix.org/conference/usenixsecurity18/presentation/adi},
publisher = {{USENIX} Association},
}

Acknowledgement

This work was supported by the BIU Center for Research in Applied Cryptography and Cyber Security in conjunction with the Israel National Cyber Directorate in the Prime Minister’s Office.
