recentrifuge - Recentrifuge: robust comparative analysis and contamination removal for metagenomics
Stars: ✭ 79 (+51.92%)
EAD Attack - EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
Stars: ✭ 34 (-34.62%)
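Elastic-net attacks add an L1 penalty on the perturbation and optimize it with an ISTA-style update, whose L1 part reduces to an element-wise shrinkage step. Below is a minimal numpy sketch of that shrinkage operator only (the real attack also box-constrains pixels and couples this with a classification loss); `shrink` and its arguments are illustrative names, not the repository's API.

```python
import numpy as np

def shrink(z, x, beta):
    """ISTA soft-thresholding for an elastic-net perturbation: pull the
    candidate z back toward the original input x, zeroing any component of
    the perturbation whose magnitude is below beta. A sketch of the idea,
    not the paper's implementation."""
    diff = z - x  # current perturbation
    return x + np.sign(diff) * np.maximum(np.abs(diff) - beta, 0.0)

x = np.array([0.5, 0.5, 0.5])   # original input
z = np.array([0.9, 0.52, 0.1])  # candidate adversarial point
print(shrink(z, x, beta=0.05))  # → [0.85 0.5  0.15]: the tiny 0.02 change is zeroed
```

The shrinkage is what makes EAD perturbations sparse compared to pure-L2 attacks.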
Comprehensive-Tacotron2 - PyTorch implementation of Google's Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions. This implementation supports both single- and multi-speaker TTS and several techniques to improve the robustness and efficiency of the model.
Stars: ✭ 22 (-57.69%)
square-attack - Square Attack: a query-efficient black-box adversarial attack via random search [ECCV 2020]
Stars: ✭ 89 (+71.15%)
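The core idea named in the entry, query-efficient random search over square-shaped perturbation patches, can be sketched in a few lines. This toy version attacks a fixed linear "black box" (a stand-in for a real network; the model, loss, and parameter names here are illustrative, not the repository's code): propose a random square patch at the perturbation budget, keep it only if the queried loss drops.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 28 * 28))  # toy 10-class linear "black box"

def query_loss(x, label):
    """Margin loss of the toy model; the attack sees only this scalar."""
    logits = W @ x.ravel()
    other = np.max(np.delete(logits, label))
    return logits[label] - other  # below zero means misclassified

def square_attack(x, label, eps=0.1, n_iters=200, p=0.1):
    """Minimal L-inf random-search attack in the spirit of Square Attack.
    Not the official implementation."""
    h, w = x.shape
    # start from a random sign perturbation at the budget eps
    x_adv = np.clip(x + eps * rng.choice([-1.0, 1.0], size=x.shape), 0, 1)
    best = query_loss(x_adv, label)
    for _ in range(n_iters):
        s = max(1, int(round(np.sqrt(p) * h)))  # side length of the square
        r = rng.integers(0, h - s + 1)
        c = rng.integers(0, w - s + 1)
        cand = x_adv.copy()
        cand[r:r + s, c:c + s] = np.clip(
            x[r:r + s, c:c + s] + eps * rng.choice([-1.0, 1.0]), 0, 1)
        loss = query_loss(cand, label)
        if loss < best:  # greedy accept: keep only improving patches
            x_adv, best = cand, loss
    return x_adv

x = rng.random((28, 28))
x_adv = square_attack(x, label=3)
print(np.max(np.abs(x_adv - x)))  # stays within the eps=0.1 budget
```

Because candidates are built from the clean image plus a +/-eps patch and then clipped, every accepted iterate remains inside the L-inf ball, which is the invariant the real attack also maintains.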
GeFs - Generative Forests in Python
Stars: ✭ 23 (-55.77%)
Adversarial Robustness Toolbox - Adversarial Robustness Toolbox (ART): Python library for machine learning security (evasion, poisoning, extraction, inference) for red and blue teams
Stars: ✭ 2,638 (+4973.08%)
Fawkes - Fawkes, a privacy-preserving tool against facial recognition systems. More info at https://sandlab.cs.uchicago.edu/fawkes
Stars: ✭ 4,362 (+8288.46%)
AMR - Official implementation of the paper: Jinhui Tang, Xiaoyu Du, Xiangnan He, Fajie Yuan, Qi Tian, and Tat-Seng Chua, "Adversarial Training Towards Robust Multimedia Recommender System".
Stars: ✭ 30 (-42.31%)
domain-shift-robustness - Code for the paper "Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets", ICCV 2019
Stars: ✭ 22 (-57.69%)
CVPR 2019 PNI - PyTorch implementation of Parametric Noise Injection for adversarial defense
Stars: ✭ 30 (-42.31%)
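Parametric Noise Injection perturbs layer weights with noise scaled by a learned coefficient, trained jointly with the weights. A rough numpy sketch of one noisy forward pass (function and parameter names are illustrative; the repository implements this as PyTorch layers with `alpha` optimized by backprop):

```python
import numpy as np

rng = np.random.default_rng(1)

def pni_forward(x, w, alpha):
    """One linear forward pass with Parametric Noise Injection on weights:
    Gaussian noise scaled by the weight distribution's std and a learned
    coefficient alpha. A sketch of the idea, not the paper's code."""
    sigma = np.std(w)                     # tie noise scale to the weights
    noise = rng.normal(size=w.shape) * sigma
    w_noisy = w + alpha * noise           # alpha is the trainable injection strength
    return x @ w_noisy.T

x = rng.random((4, 16))        # batch of 4 inputs
w = rng.normal(size=(8, 16))   # weights of an 8-unit linear layer
out = pni_forward(x, w, alpha=0.25)
print(out.shape)  # (4, 8)
```

With `alpha = 0` the layer reduces to a plain linear map, which is why the defense can learn per-layer how much noise actually helps robustness.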
Releasing Research Code - Tips for releasing research code in machine learning (with official NeurIPS 2020 recommendations)
Stars: ✭ 1,840 (+3438.46%)
dti-clustering - (NeurIPS 2020 oral) Code for the "Deep Transformation-Invariant Clustering" paper
Stars: ✭ 60 (+15.38%)
DiGCN - Implementation of DiGCN (NeurIPS 2020)
Stars: ✭ 25 (-51.92%)