AI-secure / T3

Licence: other
[EMNLP 2020] "T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack" by Boxin Wang, Hengzhi Pei, Boyuan Pan, Qian Chen, Shuohang Wang, Bo Li

Programming Languages

Python
Jupyter Notebook

Projects that are alternatives of or similar to T3

hard-label-attack
Natural Language Attacks in a Hard Label Black Box Setting.
Stars: ✭ 26 (+4%)
Mutual labels:  bert, adversarial-attacks
KitanaQA
KitanaQA: Adversarial training and data augmentation for neural question-answering models
Stars: ✭ 58 (+132%)
Mutual labels:  bert, adversarial-attacks
Adversarial Robustness Toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Stars: ✭ 2,638 (+10452%)
Mutual labels:  attack, adversarial-attacks
TIGER
Python toolbox to evaluate graph vulnerability and robustness (CIKM 2021)
Stars: ✭ 103 (+312%)
Mutual labels:  attack, adversarial-attacks
PIE
Fast + Non-Autoregressive Grammatical Error Correction using BERT. Code and Pre-trained models for paper "Parallel Iterative Edit Models for Local Sequence Transduction": www.aclweb.org/anthology/D19-1435.pdf (EMNLP-IJCNLP 2019)
Stars: ✭ 164 (+556%)
Mutual labels:  bert
ph-commons
Java 1.8+ Library with tons of utility classes required in all projects
Stars: ✭ 23 (-8%)
Mutual labels:  tree
iyov
Web proxy for HTTP(S) that lets developers analyze traffic between clients and servers, based on workerman; especially useful for app developers.
Stars: ✭ 27 (+8%)
Mutual labels:  attack
Adversarial-Examples-Paper
Paper list of Adversarial Examples
Stars: ✭ 20 (-20%)
Mutual labels:  adversarial-attacks
laravel-ltree
LTree Extension (PostgreSQL) for Laravel
Stars: ✭ 19 (-24%)
Mutual labels:  tree
domain-shift-robustness
Code for the paper "Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets", ICCV 2019
Stars: ✭ 22 (-12%)
Mutual labels:  adversarial-attacks
iamQA
A Chinese Wikipedia QA reading-comprehension system, using an NER model trained on CCKS2016 data and a reading-comprehension model trained on CMRC2018, plus W2V word-vector search; deployed with TorchServe.
Stars: ✭ 46 (+84%)
Mutual labels:  bert
bert-as-a-service TFX
End-to-end pipeline with TFX to train and deploy a BERT model for sentiment analysis.
Stars: ✭ 32 (+28%)
Mutual labels:  bert
SIGIR2021 Conure
One Person, One Model, One World: Learning Continual User Representation without Forgetting
Stars: ✭ 23 (-8%)
Mutual labels:  bert
video autoencoder
Video lstm auto encoder built with pytorch. https://arxiv.org/pdf/1502.04681.pdf
Stars: ✭ 32 (+28%)
Mutual labels:  autoencoder
square-attack
Square Attack: a query-efficient black-box adversarial attack via random search [ECCV 2020]
Stars: ✭ 89 (+256%)
Mutual labels:  adversarial-attacks
text-generation-transformer
text generation based on transformer
Stars: ✭ 36 (+44%)
Mutual labels:  bert
awesome-hacktoberfest-plant-a-tree
Will you choose the ✨ Hacktoberfest t-shirt ✨ but don't want to stop contributing to the environment and a sustainable future? Find an organization here so you can plant a tree! 🌱
Stars: ✭ 30 (+20%)
Mutual labels:  tree
carbon footprint
An open-source about a Carbon Footprint Calculator made with Reactjs. The objective is to have a nice simple web about the environment and how to preserve our planet.
Stars: ✭ 14 (-44%)
Mutual labels:  tree
TEXTOIR
TEXTOIR is a flexible toolkit for open intent detection and discovery. (ACL 2021)
Stars: ✭ 31 (+24%)
Mutual labels:  bert
rpl-attacks
RPL attacks framework for simulating WSN with a malicious mote based on Contiki
Stars: ✭ 56 (+124%)
Mutual labels:  attack

T3

This is the official code base for the EMNLP 2020 paper, "T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack".

This repo contains the code to attack both classification models (self-attentive models and BERT) and question-answering models (BiDAF and BERT). The attack code for each target model lives in its own folder.

You may also adapt our code to attack other NLP tasks.

Note

Before using T3(Sent), a tree-based autoencoder must first be trained on a large corpus.
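Conceptually, the tree autoencoder's encoder composes a sentence representation bottom-up along the parse tree, and the decoder reconstructs the sentence from that code. The purely illustrative pure-Python sketch below shows only the bottom-up recursion, not the learned model from the paper; `encode_tree` and the dummy `toy_embed` embedding are our own names, not the repo's:

```python
import math

def encode_tree(node, embed, dim=4):
    """Toy bottom-up tree encoding: a node's code is a squashed sum of
    its own word embedding and its children's codes. The real model
    uses learned weights; this only illustrates the recursion."""
    vec = list(embed(node["word"], dim))
    for child in node["children"]:
        child_vec = encode_tree(child, embed, dim)
        vec = [v + c for v, c in zip(vec, child_vec)]
    return [math.tanh(v) for v in vec]  # squash each entry into (-1, 1)

# "the cat sleeps" as a tiny dependency tree rooted at the verb
tree = {"word": "sleeps", "children": [
    {"word": "cat", "children": [{"word": "the", "children": []}]}]}
toy_embed = lambda w, d: [len(w) / 10.0] * d  # deterministic dummy embedding
code = encode_tree(tree, toy_embed)           # a dim-sized sentence code
```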

Train Tree-based Autoencoder

We trained our tree-based autoencoder on the Yelp review training dataset.

The related code can be found in SAM-attack/my_generator/. Before training, each sentence in the training set should be parsed with the Stanford CoreNLP parser to obtain its dependency structure.
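For example, a CoNLL-style dependency parse (token id, word, head id), such as the CoreNLP dependency parser produces, can be converted into the nested tree structure a tree autoencoder consumes. A minimal sketch, assuming this simplified three-field format (the function name is ours, not the repo's):

```python
from collections import defaultdict

def conll_to_tree(conll_lines):
    """Build a nested dependency tree from simplified CoNLL-style rows:
    (token_id, word, head_id), 1-based ids, head 0 = dummy root."""
    children = defaultdict(list)
    words = {}
    for tok_id, word, head_id in conll_lines:
        words[tok_id] = word
        children[head_id].append(tok_id)

    def build(tok_id):
        return {"word": words[tok_id],
                "children": [build(c) for c in children[tok_id]]}

    root_id = children[0][0]  # the single token attached to the dummy root
    return build(root_id)

# "the cat sleeps": "the" -> "cat" -> "sleeps" (the root)
parse = [(1, "the", 2), (2, "cat", 3), (3, "sleeps", 0)]
dep_tree = conll_to_tree(parse)
```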

We also provide our pre-trained tree autoencoder checkpoint here.

Contributions

We welcome contributions of all kinds via pull requests. If you have a question, please open an issue for discussion.
