aileen-core: Sensor data aggregation tool for any numerical sensor data. Robust and privacy-friendly.
Stars: ✭ 15 (-78.87%)
robust-gcn: Implementation of the paper "Certifiable Robustness and Robust Training for Graph Convolutional Networks".
Stars: ✭ 35 (-50.7%)
adversarial-robustness-public: Code for the AAAI 2018 paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients".
Stars: ✭ 49 (-30.99%)
POPQORN: An algorithm to quantify the robustness of recurrent neural networks.
Stars: ✭ 44 (-38.03%)
SimP-GCN: Implementation of the WSDM 2021 paper "Node Similarity Preserving Graph Convolutional Networks".
Stars: ✭ 43 (-39.44%)
Denoised-Smoothing-TF: Minimal implementation of Denoised Smoothing (https://arxiv.org/abs/2003.01908) in TensorFlow.
Stars: ✭ 19 (-73.24%)
ViTs-vs-CNNs: [NeurIPS 2021] Are Transformers More Robust Than CNNs? (PyTorch implementation & checkpoints)
Stars: ✭ 145 (+104.23%)
spatial-smoothing: (ICML 2022) Official PyTorch implementation of "Blurs Behave Like Ensembles: Spatial Smoothings to Improve Accuracy, Uncertainty, and Robustness".
Stars: ✭ 68 (-4.23%)
pre-training: Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019).
Stars: ✭ 90 (+26.76%)
recentrifuge: Robust comparative analysis and contamination removal for metagenomics.
Stars: ✭ 79 (+11.27%)
amcheck: contrib/amcheck from Postgres v11 backported to earlier Postgres versions.
Stars: ✭ 74 (+4.23%)
Comprehensive-Tacotron2: PyTorch implementation of Google's "Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions". Supports both single- and multi-speaker TTS, plus several techniques to improve the robustness and efficiency of the model.
Stars: ✭ 22 (-69.01%)
square-attack: Square Attack, a query-efficient black-box adversarial attack via random search [ECCV 2020].
Stars: ✭ 89 (+25.35%)
GeFs: Generative Forests in Python.
Stars: ✭ 23 (-67.61%)
DiagnoseRE: Source code and dataset for the CCKS 2021 paper "On Robustness and Bias Analysis of BERT-based Relation Extraction".
Stars: ✭ 23 (-67.61%)
belay: Robust error handling for Kotlin and Android.
Stars: ✭ 35 (-50.7%)
Robust-Semantic-Segmentation: Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation (ICCV 2021).
Stars: ✭ 25 (-64.79%)
RayS: A ray searching method for hard-label adversarial attack (KDD 2020).
Stars: ✭ 43 (-39.44%)
perceptual-advex: Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models".
Stars: ✭ 44 (-38.03%)
s-attack: [CVPR 2022] S-attack library. Official implementation of two papers: "Vehicle trajectory prediction works, but not everywhere" and "Are socially-aware trajectory prediction models really socially-aware?".
Stars: ✭ 51 (-28.17%)
shortcut-perspective: Figures & code from the paper "Shortcut Learning in Deep Neural Networks" (Nature Machine Intelligence 2020).
Stars: ✭ 67 (-5.63%)
robustness-vit: Code for the paper "Vision Transformers are Robust Learners" (AAAI 2022).
Stars: ✭ 78 (+9.86%)
cycle-confusion: Code and models for the ICCV 2021 paper "Robust Object Detection via Instance-Level Temporal Cycle Confusion".
Stars: ✭ 67 (-5.63%)
Generalization-Causality: Reading notes on a wide range of research in domain generalization, domain adaptation, causality, robustness, prompting, optimization, and generative models.
Stars: ✭ 482 (+578.87%)
eleanor: Code used during my Chaos Engineering and Resiliency Patterns talk.
Stars: ✭ 14 (-80.28%)
TIGER: Python toolbox to evaluate graph vulnerability and robustness (CIKM 2021).
Stars: ✭ 103 (+45.07%)
ATMC: [NeurIPS 2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, "Model Compression with Adversarial Robustness: A Unified Optimization Framework".
Stars: ✭ 41 (-42.25%)
DUN: Code for "Depth Uncertainty in Neural Networks" (https://arxiv.org/abs/2006.08437).
Stars: ✭ 65 (-8.45%)
safe-control-gym: PyBullet CartPole and Quadrotor environments (with CasADi symbolic a priori dynamics) for learning-based control and RL.
Stars: ✭ 272 (+283.1%)
DDML: A metric learning method for person re-identification.
Stars: ✭ 21 (-70.42%)