diabetes use case - Sample use case for the Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Stars: ✭ 22 (-60.71%)
AdverseBiNet - Improving Document Binarization via Adversarial Noise-Texture Augmentation
Stars: ✭ 34 (-39.29%)
Flashtorch - Visualization toolkit for neural networks in PyTorch!
Stars: ✭ 561 (+901.79%)
Interpret - Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+7671.43%)
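One idea behind toolkits like Interpret is the global surrogate: fit a simple, interpretable model to a black box's own predictions and read the explanation off the surrogate. This is a from-scratch sketch of that idea (not the interpret library's API); the linear stand-in "black box" and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in opaque model (in practice: a boosted ensemble, a neural net, ...)
    return 2.0 * X[:, 0] - 3.0 * X[:, 1] + 0.5

X = rng.normal(size=(200, 2))
y_bb = black_box(X)  # explain the black box's outputs, not ground truth

# Interpretable surrogate: ordinary least squares fit to the black box.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y_bb, rcond=None)
# coef[:2] are per-feature effects, coef[2] the intercept -- the "explanation".

# Fidelity (R^2 of surrogate vs. black box) tells you how far to trust it.
fidelity = 1.0 - np.var(y_bb - A @ coef) / np.var(y_bb)
```

A surrogate is only as trustworthy as its fidelity score; real libraries report this alongside the fitted explanation.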
sage - For calculating global feature importance using Shapley values.
Stars: ✭ 129 (+130.36%)
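SAGE-style global importance builds on per-feature Shapley values. This is a minimal Monte Carlo sketch of that underlying quantity (not the sage library's API): features absent from a coalition are replaced by a baseline value, and each feature is credited its average marginal contribution over random orderings.

```python
import random

def shapley_values(f, x, baseline, n_samples=2000, seed=0):
    """Estimate Shapley values of f at point x against a baseline input."""
    rng = random.Random(seed)
    d = len(x)
    phi = [0.0] * d
    for _ in range(n_samples):
        perm = list(range(d))
        rng.shuffle(perm)
        z = list(baseline)           # start from the baseline ("no features")
        prev = f(z)
        for j in perm:               # reveal features one by one
            z[j] = x[j]
            cur = f(z)
            phi[j] += cur - prev     # marginal contribution of feature j
            prev = cur
    return [p / n_samples for p in phi]

# For a linear model the Shapley value of feature j is exactly
# w_j * (x_j - baseline_j), which makes the estimate easy to check.
f = lambda z: 3.0 * z[0] + 1.0 * z[1] - 2.0 * z[2]
vals = shapley_values(f, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
```

For non-additive models the permutation average genuinely matters; the linear check above is just the degenerate case where every ordering agrees.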
Adaptsegnet - Learning to Adapt Structured Output Space for Semantic Segmentation, CVPR 2018 (spotlight)
Stars: ✭ 654 (+1067.86%)
shapeshop - Towards Understanding Deep Learning Representations via Interactive Experimentation
Stars: ✭ 16 (-71.43%)
Alibi - Algorithms for monitoring and explaining machine learning models
Stars: ✭ 924 (+1550%)
tulip - Scalable input gradient regularization
Stars: ✭ 19 (-66.07%)
Deeplift - Public-facing DeepLIFT repo
Stars: ✭ 512 (+814.29%)
Advsemiseg - Adversarial Learning for Semi-supervised Semantic Segmentation, BMVC 2018
Stars: ✭ 382 (+582.14%)
adVAE - Implementation of 'Self-Adversarial Variational Autoencoder with Gaussian Anomaly Prior Distribution for Anomaly Detection'
Stars: ✭ 17 (-69.64%)
Tf Explain - Interpretability methods for tf.keras models with TensorFlow 2.x
Stars: ✭ 780 (+1292.86%)
Facet - Human-explainable AI.
Stars: ✭ 269 (+380.36%)
Symbolic Metamodeling - Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
Stars: ✭ 29 (-48.21%)
SPINE - Code for SPINE: Sparse Interpretable Neural Embeddings. Jhamtani H.*, Pruthi D.*, Subramanian A.*, Berg-Kirkpatrick T., Hovy E. AAAI 2018
Stars: ✭ 44 (-21.43%)
knowledge-neurons - A library for finding knowledge neurons in pretrained transformer models.
Stars: ✭ 72 (+28.57%)
AKE - Guiding Entity Alignment via Adversarial Knowledge Embedding
Stars: ✭ 15 (-73.21%)
Xai resources - Interesting resources related to XAI (Explainable Artificial Intelligence)
Stars: ✭ 553 (+887.5%)
summit - 🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Stars: ✭ 95 (+69.64%)
Advertorch - A Toolbox for Adversarial Robustness Research
Stars: ✭ 826 (+1375%)
zennit - A high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP.
Stars: ✭ 57 (+1.79%)
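LRP, the attribution method zennit generalizes, propagates a prediction backward layer by layer so that each input receives a share of the output ("relevance"). This is a from-scratch sketch of the epsilon rule on a tiny two-layer ReLU network (not zennit's API); the network weights are made up, and the key property checked is conservation: input relevances sum to the output.

```python
EPS = 1e-9

def forward(W, a):
    """One dense layer (no bias): z_k = sum_j a_j * W[j][k]."""
    return [sum(a[j] * W[j][k] for j in range(len(a))) for k in range(len(W[0]))]

def relu(z):
    return [max(0.0, v) for v in z]

def lrp_eps(W, a, z, R):
    """LRP-epsilon: redistribute relevance R at a layer's output back to its input a."""
    d_in, d_out = len(a), len(R)
    out = [0.0] * d_in
    for k in range(d_out):
        zk = z[k] + (EPS if z[k] >= 0 else -EPS)   # epsilon stabiliser
        for j in range(d_in):
            out[j] += a[j] * W[j][k] / zk * R[k]   # a_j w_jk / z_k share of R_k
    return out

# Tiny 3-2-1 network with fixed, made-up weights.
W1 = [[1.0, -1.0], [0.5, 2.0], [-1.5, 0.5]]
W2 = [[1.0], [0.5]]
x = [1.0, 2.0, 0.5]

z1 = forward(W1, x); a1 = relu(z1)
z2 = forward(W2, a1); y = z2[0]

R2 = [y]                      # start: all relevance sits on the output
R1 = lrp_eps(W2, a1, z2, R2)  # back through layer 2
R0 = lrp_eps(W1, x, z1, R1)   # back through layer 1 -> per-input relevances
```

Frameworks like zennit exist because hand-writing these backward rules per layer type does not scale; they hook the rules into autograd instead.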
Lucid - A collection of infrastructure and tools for research in neural network interpretability.
Stars: ✭ 4,344 (+7657.14%)
linguistic-style-transfer-pytorch - PyTorch implementation of "Disentangled Representation Learning for Non-Parallel Text Style Transfer" (ACL 2019)
Stars: ✭ 55 (-1.79%)
Selectiongan - [CVPR 2019 Oral] Multi-Channel Attention Selection GAN with Cascaded Semantic Guidance for Cross-View Image Translation
Stars: ✭ 366 (+553.57%)
Lab - [CVPR 2018] Look at Boundary: A Boundary-Aware Face Alignment Algorithm
Stars: ✭ 956 (+1607.14%)
Neurec - Next RecSys Library
Stars: ✭ 731 (+1205.36%)
adapt - Awesome Domain Adaptation Python Toolbox
Stars: ✭ 46 (-17.86%)
Gvb - Code for Gradually Vanishing Bridge for Adversarial Domain Adaptation (CVPR 2020)
Stars: ✭ 52 (-7.14%)
removal-explanations - A lightweight implementation of removal-based explanations for ML models.
Stars: ✭ 46 (-17.86%)
Ad examples - A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and descriptions for diversity/explanation/interpretability. Analyzes incorporating label feedback with ensemble and tree-based detectors; includes adversarial attacks with Graph Convolutional Networks.
Stars: ✭ 641 (+1044.64%)
CDFSL-ATA - [IJCAI 2021] Cross-Domain Few-Shot Classification via Adversarial Task Augmentation
Stars: ✭ 21 (-62.5%)
gym-adv - Gym environments modified with adversarial agents
Stars: ✭ 26 (-53.57%)
Xai - XAI: An eXplainability toolbox for machine learning
Stars: ✭ 596 (+964.29%)
nalp - 🗣️ A library that covers Natural Adversarial Language Processing.
Stars: ✭ 17 (-69.64%)
Trelawney - General Interpretability Package
Stars: ✭ 55 (-1.79%)
CADA - Attending to Discriminative Certainty for Domain Adaptation
Stars: ✭ 17 (-69.64%)
Tf Dann - Domain-Adversarial Neural Network in TensorFlow
Stars: ✭ 556 (+892.86%)
neuron-importance-zsl - [ECCV 2018] Code for Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance
Stars: ✭ 56 (+0%)
Grad Cam - [ICCV 2017] Torch code for Grad-CAM
Stars: ✭ 891 (+1491.07%)
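Grad-CAM's core step is small enough to show directly. This NumPy sketch (not the repo's Torch code) assumes you have already extracted a conv layer's activations A (channels x H x W) and the gradient of the target class score with respect to A; the toy values below are made up.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap: ReLU of the gradient-weighted channel sum."""
    # alpha_k: global-average-pool each channel's gradient map
    alphas = gradients.mean(axis=(1, 2))                  # (C,)
    # weighted sum over channels -> one spatial map
    cam = np.tensordot(alphas, activations, axes=1)       # (H, W)
    return np.maximum(cam, 0.0)                           # keep positive evidence only

# Toy example: 2 channels on a 2x2 feature map (made-up values).
A = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 2.0], [2.0, 0.0]]])
G = np.array([[[1.0, 1.0], [1.0, 1.0]],        # channel 0: mean grad = +1
              [[-1.0, -1.0], [-1.0, -1.0]]])   # channel 1: mean grad = -1
heatmap = grad_cam(A, G)   # channel 0 contributes, channel 1 is suppressed
```

In practice the heatmap is then upsampled to the input resolution and overlaid on the image; only the combination step is shown here.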
Interpretable machine learning with python - Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Stars: ✭ 530 (+846.43%)
Contrastiveexplanation - Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
Stars: ✭ 36 (-35.71%)
yggdrasil-decision-forests - A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models.
Stars: ✭ 156 (+178.57%)
Tcav - Code for the TCAV ML interpretability project
Stars: ✭ 442 (+689.29%)
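TCAV tests whether a human concept (e.g. "stripes") influences a class score: a concept activation vector (CAV) separates concept examples from random ones in a layer's activation space, and the TCAV score is the fraction of inputs whose class-score gradient points along that direction. This sketch is not the tcav repo's API, uses made-up 2-D "activations" and "gradients", and simplifies the CAV to a difference of class means rather than a trained linear classifier's normal vector.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed precomputed: layer activations for concept vs. random examples.
concept_acts = rng.normal(loc=[2.0, 0.0], scale=0.3, size=(50, 2))
random_acts  = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2))

# Simplified CAV: difference of class means, normalized.
cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
cav /= np.linalg.norm(cav)

# Assumed precomputed: d(class score)/d(activation) for a batch of inputs.
grads = rng.normal(loc=[1.0, 0.0], scale=0.5, size=(200, 2))

# TCAV score: fraction of inputs with a positive directional derivative.
tcav_score = float((grads @ cav > 0).mean())
```

The real method also runs the test against many random CAVs and keeps only concepts whose scores are statistically distinguishable from chance.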
ProtoTree - ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021
Stars: ✭ 47 (-16.07%)
Taadpapers - Must-read Papers on Textual Adversarial Attack and Defense
Stars: ✭ 800 (+1328.57%)
Mli Resources - H2O.ai Machine Learning Interpretability Resources
Stars: ✭ 428 (+664.29%)
Text nn - Text classification models. Used as a submodule for other projects.
Stars: ✭ 55 (-1.79%)
A2cl Pt - Adversarial Background-Aware Loss for Weakly-supervised Temporal Activity Localization (ECCV 2020)
Stars: ✭ 34 (-39.29%)
Dalex - moDel Agnostic Language for Exploration and eXplanation
Stars: ✭ 795 (+1319.64%)
Neural Backed Decision Trees - Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
Stars: ✭ 411 (+633.93%)