coba: Contextual bandit benchmarking
Stars: ✭ 29 (+26.09%)
Pyss3: A Python package implementing a new machine learning model for text classification, with visualization tools for Explainable AI
Stars: ✭ 191 (+730.43%)
adaptive-wavelets: Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (+152.17%)
Pycebox: ⬛ Python Individual Conditional Expectation Plot Toolbox
Stars: ✭ 101 (+339.13%)
cuba-weather-python: Application programming interface for the Cuba Weather project, implemented in Python
Stars: ✭ 17 (-26.09%)
Captum: Model interpretability and understanding for PyTorch
Stars: ✭ 2,830 (+12204.35%)
teanaps: An open-source Python library for natural language processing and text analysis.
Stars: ✭ 91 (+295.65%)
Lrp for lstm: Layer-wise Relevance Propagation (LRP) for LSTMs
Stars: ✭ 152 (+560.87%)
SiEPIC Photonics Package: A Python (v3.6.5) package that provides a set of basic functions commonly used in integrated photonics.
Stars: ✭ 22 (-4.35%)
Cxplain: Causal Explanation (CXPlain) is a method for explaining the predictions of any machine learning model.
Stars: ✭ 84 (+265.22%)
concept-based-xai: Library implementing state-of-the-art concept-based and disentanglement learning methods for Explainable AI
Stars: ✭ 41 (+78.26%)
hierarchical-dnn-interpretations: Using/reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (+378.26%)
Torch Cam: Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM)
Stars: ✭ 249 (+982.61%)
ALPS 2021: XAI tutorial for the Explainable AI track at the ALPS 2021 winter school
Stars: ✭ 55 (+139.13%)
Shap: A game-theoretic approach to explaining the output of any machine learning model.
Stars: ✭ 14,917 (+64756.52%)
geonamescache: A Python library for quick access to a subset of GeoNames data.
Stars: ✭ 76 (+230.43%)
Transformer-MM-Explainability: [ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network, including examples for DETR and VQA.
Stars: ✭ 484 (+2004.35%)
aotools: A useful set of tools for adaptive optics in Python
Stars: ✭ 60 (+160.87%)
Playlist-Length: A simple command-line tool that recursively computes the total length of all video and/or audio files in a directory and its subdirectories
Stars: ✭ 29 (+26.09%)
EgoCNN: Code for "Distributed, Egocentric Representations of Graphs for Detecting Critical Structures" (ICML 2019)
Stars: ✭ 16 (-30.43%)
cpython: Alternative StdLib for Nim for Python targets; hijacks the Python StdLib for Nim
Stars: ✭ 75 (+226.09%)
mmn: Moore Machine Networks (MMN), learning finite-state representations of recurrent policy networks
Stars: ✭ 39 (+69.57%)
cira: Cira makes algorithmic trading easy. A façade library for simpler interaction with alpaca-trade-api from Alpaca Markets.
Stars: ✭ 21 (-8.7%)
T-Reqs: A multi-language requirements-file generator that also prepares a template Dockerfile for working with Docker applications.
Stars: ✭ 18 (-21.74%)
pyGinit: A simple GitHub automation CLI
Stars: ✭ 15 (-34.78%)
xai-iml-sota: Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Stars: ✭ 51 (+121.74%)
meg: Molecular Explanation Generator
Stars: ✭ 14 (-39.13%)
Explainx: Explainable AI framework for data scientists. Explain and debug any black-box machine learning model with a single line of code.
Stars: ✭ 196 (+752.17%)
kernel-mod: Linear-time model comparison tests (NeurIPS 2018).
Stars: ✭ 17 (-26.09%)
Imodels: Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
Stars: ✭ 194 (+743.48%)
multi-imbalance: Python package for tackling multi-class imbalance problems. http://www.cs.put.poznan.pl/mlango/publications/multiimbalance/
Stars: ✭ 66 (+186.96%)
thermostat: Collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (+447.83%)
Modelstudio: 📍 Interactive Studio for Explanatory Model Analysis
Stars: ✭ 163 (+608.7%)
glcapsnet: Global-Local Capsule Network (GLCapsNet), a capsule-based architecture that provides context-based eye-fixation prediction for several autonomous driving scenarios while offering both global and local interpretability.
Stars: ✭ 33 (+43.48%)
Stellargraph: StellarGraph, machine learning on graphs
Stars: ✭ 2,235 (+9617.39%)
lnurl: LNURL implementation for Python.
Stars: ✭ 51 (+121.74%)
Visual Attribution: PyTorch implementation of recent visual attribution methods for model interpretability
Stars: ✭ 127 (+452.17%)
interpretable-ml: Techniques and resources for training interpretable ML models, explaining ML models, and debugging ML models.
Stars: ✭ 17 (-26.09%)
Breakdown: Model-agnostic breakDown plots
Stars: ✭ 93 (+304.35%)
gitea-auto-update: A script that automatically updates Gitea to a new version via crontab.
Stars: ✭ 25 (+8.7%)
Interpretability By Parts: Code repository for "Interpretable and Accurate Fine-grained Recognition via Region Grouping", CVPR 2020 (Oral)
Stars: ✭ 88 (+282.61%)
adversarial-robustness-public: Code for the AAAI 2018 paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients"
Stars: ✭ 49 (+113.04%)
logya: A static site generator written in Python, designed to be easy to use and flexible.
Stars: ✭ 16 (-30.43%)
free-lunch-saliency: Code for "Free-Lunch Saliency via Attention in Atari Agents"
Stars: ✭ 15 (-34.78%)
transformers-interpret: Model explainability that works seamlessly with 🤗 Transformers. Explain your transformer model in just two lines of code.
Stars: ✭ 861 (+3643.48%)
mllp: Code for the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-34.78%)
ArenaR: Data generator for Arena, an interactive XAI dashboard
Stars: ✭ 28 (+21.74%)
ISS Info: Python wrapper for tracking International Space Station information via http://open-notify.org
Stars: ✭ 12 (-47.83%)