knowledge-neurons: A library for finding knowledge neurons in pretrained transformer models.
Stars: ✭ 72 (-55.83%)
neuron-importance-zsl: [ECCV 2018] Code for "Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance".
Stars: ✭ 56 (-65.64%)
Market-Mix-Modeling: Market mix modelling for an eCommerce firm to estimate the impact of various marketing levers on sales.
Stars: ✭ 31 (-80.98%)
Recommender-System: Implements and compares collaborative filtering and prediction algorithms, including neighborhood methods, matrix factorization-based methods (SVD, PMF, SVD++, NMF), and many others.
Stars: ✭ 30 (-81.6%)
summit: 🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations.
Stars: ✭ 95 (-41.72%)
sage: For calculating global feature importance using Shapley values.
Stars: ✭ 129 (-20.86%)
yggdrasil-decision-forests: A collection of state-of-the-art algorithms for the training, serving, and interpretation of decision forest models.
Stars: ✭ 156 (-4.29%)
zennit: A high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP.
Stars: ✭ 57 (-65.03%)
CAST: Developer version of the R package CAST: Caret Applications for Spatio-Temporal models.
Stars: ✭ 65 (-60.12%)
ProtoTree: ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021.
Stars: ✭ 47 (-71.17%)
learnr: Exploratory, inferential, and predictive data analysis. Feel free to show your ❤️ by giving a star ⭐
Stars: ✭ 64 (-60.74%)
BAS: BAS R package. https://merliseclyde.github.io/BAS/
Stars: ✭ 36 (-77.91%)
solar-forecasting-RNN: Multi-time-horizon solar forecasting using recurrent neural networks.
Stars: ✭ 29 (-82.21%)
deep-explanation-penalization: Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge". https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (-32.52%)
partial dependence: Python package to visualize and cluster partial dependence.
Stars: ✭ 23 (-85.89%)
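The partial-dependence idea this package visualizes can be sketched in plain NumPy: clamp one feature to each value on a grid, average the model's predictions over the data, and plot the result. This is a toy illustration with a hypothetical stand-in model, not the package's API:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))

# Stand-in "model": additive, so the PD curve of feature 0 is exactly linear.
def predict(X):
    return 2 * X[:, 0] + np.sin(2 * np.pi * X[:, 1])

def partial_dependence(predict, X, feature, grid):
    """PD at v = average prediction over the data with `feature` clamped to v."""
    pd = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v
        pd.append(predict(Xv).mean())
    return np.array(pd)

grid = np.linspace(0.0, 1.0, 5)
pd0 = partial_dependence(predict, X, feature=0, grid=grid)
print(pd0)  # increases linearly across the grid, slope 2
```

Because the stand-in model is additive, the PD curve recovers the `2 * x0` term exactly; for real models PD shows the feature's average marginal effect.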
free-lunch-saliency: Code for "Free-Lunch Saliency via Attention in Atari Agents".
Stars: ✭ 15 (-90.8%)
transformers-interpret: Model explainability that works seamlessly with 🤗 transformers. Explain your transformer model in just 2 lines of code.
Stars: ✭ 861 (+428.22%)
Kaggle: Kaggle kernels (Python, R, Jupyter notebooks).
Stars: ✭ 26 (-84.05%)
interpretable-ml: Techniques and resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-89.57%)
mljar-api-R: R wrapper for the MLJAR API.
Stars: ✭ 16 (-90.18%)
mllp: Code for the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-90.8%)
mmn: Moore Machine Networks (MMN): Learning Finite-State Representations of Recurrent Policy Networks.
Stars: ✭ 39 (-76.07%)
hierarchical-dnn-interpretations: Using/reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019).
Stars: ✭ 110 (-32.52%)
meg: Molecular Explanation Generator.
Stars: ✭ 14 (-91.41%)
NDD: Drug-drug interaction prediction by neural network using integrated similarity.
Stars: ✭ 25 (-84.66%)
glcapsnet: Global-Local Capsule Network (GLCapsNet), a capsule-based architecture that provides context-based eye-fixation prediction for several autonomous-driving scenarios while offering both global and local interpretability.
Stars: ✭ 33 (-79.75%)
Transformer-MM-Explainability: [ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
Stars: ✭ 484 (+196.93%)
adversarial-robustness-public: Code for the AAAI 2018 paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients".
Stars: ✭ 49 (-69.94%)
timemachines: Predict time series with one line of code.
Stars: ✭ 342 (+109.82%)
ArenaR: Data generator for Arena, an interactive XAI dashboard.
Stars: ✭ 28 (-82.82%)
EgoCNN: Code for "Distributed, Egocentric Representations of Graphs for Detecting Critical Structures" (ICML 2019).
Stars: ✭ 16 (-90.18%)
concept-based-xai: Library implementing state-of-the-art concept-based and disentanglement learning methods for explainable AI.
Stars: ✭ 41 (-74.85%)
Statistical-Learning-using-R: A statistical learning application covering various machine learning algorithms, their implementation in R, and their in-depth interpretation. Documents and reports on the techniques can be found on my RPubs profile.
Stars: ✭ 27 (-83.44%)
xai-iml-sota: Interesting resources related to explainable artificial intelligence, interpretable machine learning, interactive machine learning, human-in-the-loop, and visual analytics.
Stars: ✭ 51 (-68.71%)
ALPS 2021: XAI tutorial for the Explainable AI track in the ALPS winter school 2021.
Stars: ✭ 55 (-66.26%)
kernel-mod: NeurIPS 2018. Linear-time model comparison tests.
Stars: ✭ 17 (-89.57%)
adaptive-wavelets: Adaptive, interpretable wavelets across domains (NeurIPS 2021).
Stars: ✭ 58 (-64.42%)
thermostat: Collection of NLP model explanations and accompanying analysis tools.
Stars: ✭ 126 (-22.7%)
datafsm: Machine-learning finite state machine models from data with genetic algorithms.
Stars: ✭ 14 (-91.41%)
Torch Cam: Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM).
Stars: ✭ 249 (+52.76%)
Captum: Model interpretability and understanding for PyTorch.
Stars: ✭ 2,830 (+1636.2%)
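One family of attribution methods Captum implements is integrated gradients: average the model's input gradients along a straight path from a baseline to the input, then scale by the input difference. A hand-rolled approximation in plain PyTorch (illustrative only, on a made-up toy model; not Captum's API) looks like this:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Toy regression model standing in for any differentiable network.
model = nn.Sequential(nn.Linear(3, 8), nn.Tanh(), nn.Linear(8, 1))
model.eval()

def integrated_gradients(model, x, baseline=None, steps=256):
    """Riemann-sum approximation of IG along the path baseline -> x."""
    if baseline is None:
        baseline = torch.zeros_like(x)
    alphas = torch.linspace(0.0, 1.0, steps).unsqueeze(1)
    path = baseline + alphas * (x - baseline)   # (steps, n_features)
    path.requires_grad_(True)
    grads = torch.autograd.grad(model(path).sum(), path)[0]
    return (x - baseline) * grads.mean(dim=0)

x = torch.randn(3)
attr = integrated_gradients(model, x)
# Completeness property: attributions roughly sum to f(x) - f(baseline).
delta = (model(x) - model(torch.zeros_like(x))).item()
```

The completeness check at the end is the standard sanity test for IG implementations; the approximation error shrinks as `steps` grows.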
Explainx: Explainable AI framework for data scientists. Explain and debug any black-box machine learning model with a single line of code.
Stars: ✭ 196 (+20.25%)
Imodels: Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
Stars: ✭ 194 (+19.02%)
Pyss3: A Python package implementing a new machine learning model for text classification, with visualization tools for explainable AI.
Stars: ✭ 191 (+17.18%)
Shap: A game-theoretic approach to explain the output of any machine learning model.
Stars: ✭ 14,917 (+9051.53%)
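The game-theoretic quantity behind SHAP is the Shapley value: a player's weighted average marginal contribution over all coalitions. For a tiny cooperative game it can be computed exactly in plain Python; this is a toy illustration of the concept, not the shap library's API:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values via the coalition-weighted marginal-contribution formula."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                # Weight = k! (n - k - 1)! / n!  for a coalition of size k.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(set(coalition) | {p}) - value(set(coalition)))
        phi[p] = total
    return phi

# Additive game: v(S) is a sum of per-player weights, so each Shapley
# value equals that player's own weight.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
v = lambda S: sum(weights[p] for p in S)
phi = shapley_values(list(weights), v)
print(phi)  # {'a': 1.0, 'b': 2.0, 'c': 3.0}
```

SHAP applies this idea to features of an ML model, with the "value" of a coalition being the model's expected output given those features; the library's contribution is making that tractable.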
Smt: Surrogate Modeling Toolbox.
Stars: ✭ 233 (+42.94%)
Data Science Live Book: An open-source book to learn data science, data analysis, and machine learning, suitable for all ages!
Stars: ✭ 193 (+18.4%)
Pytorch Grad Cam: Many class activation map methods implemented in PyTorch for CNNs and vision transformers, including Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM, and XGrad-CAM.
Stars: ✭ 3,814 (+2239.88%)
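The core Grad-CAM computation behind these CAM libraries is short: backpropagate a class score to a convolutional layer's feature maps, weight each map by its spatially averaged gradient, sum, and ReLU. A minimal sketch on a made-up toy model in plain PyTorch (illustrative only, not this package's API):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(1, 4, kernel_size=3, padding=1)   # toy feature extractor
head = nn.Linear(4 * 8 * 8, 3)                     # toy 3-class head

x = torch.randn(1, 1, 8, 8)
feats = conv(x)                 # (1, 4, 8, 8) feature maps to explain
feats.retain_grad()             # keep gradients at this intermediate tensor
logits = head(feats.relu().flatten(1))
logits[0, 1].backward()         # gradient of the class-1 score

# Grad-CAM: weight each map by its spatially averaged gradient, sum, ReLU.
cam_weights = feats.grad.mean(dim=(2, 3), keepdim=True)  # (1, 4, 1, 1)
cam = torch.relu((cam_weights * feats).sum(dim=1))       # (1, 8, 8)
print(cam.shape)
```

The resulting map is non-negative and lives at the feature-map resolution; the libraries above add the upsampling, hooks for arbitrary layers, and the many CAM variants.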