danilonumeroso / meg

License: Apache-2.0
Molecular Explanation Generator

Programming Languages

python
139335 projects - #7 most used programming language
shell
77523 projects

Projects that are alternatives to, or similar to, meg

CARLA
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
Stars: ✭ 166 (+1085.71%)
Mutual labels:  explainable-ai, counterfactual-explanations
mllp
The code of AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (+7.14%)
Mutual labels:  interpretability, explainable-ai
PyDGN
A research library for Deep Graph Networks
Stars: ✭ 158 (+1028.57%)
Mutual labels:  deep-graph-networks, deep-learning-for-graphs
transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Stars: ✭ 861 (+6050%)
Mutual labels:  interpretability, explainable-ai
zennit
Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP.
Stars: ✭ 57 (+307.14%)
Mutual labels:  interpretability, explainable-ai
Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.
Stars: ✭ 484 (+3357.14%)
Mutual labels:  interpretability, explainable-ai
hierarchical-dnn-interpretations
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (+685.71%)
Mutual labels:  interpretability, explainable-ai
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (+685.71%)
Mutual labels:  interpretability, explainable-ai
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR2021
Stars: ✭ 47 (+235.71%)
Mutual labels:  interpretability, explainable-ai
Interpret
Fit interpretable models. Explain blackbox machine learning.
Stars: ✭ 4,352 (+30985.71%)
Mutual labels:  interpretability, explainable-ai
concept-based-xai
Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (+192.86%)
Mutual labels:  interpretability, explainable-ai
bert attn viz
Visualize BERT's self-attention layers on text classification tasks
Stars: ✭ 41 (+192.86%)
Mutual labels:  explainable-ai
kernel-mod
NeurIPS 2018. Linear-time model comparison tests.
Stars: ✭ 17 (+21.43%)
Mutual labels:  interpretability
adaptive-wavelets
Adaptive, interpretable wavelets across domains (NeurIPS 2021)
Stars: ✭ 58 (+314.29%)
Mutual labels:  interpretability
thermostat
Collection of NLP model explanations and accompanying analysis tools
Stars: ✭ 126 (+800%)
Mutual labels:  interpretability
adversarial-robustness-public
Code for AAAI 2018 accepted paper: "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients"
Stars: ✭ 49 (+250%)
Mutual labels:  interpretability
DataScience ArtificialIntelligence Utils
Examples of Data Science projects and Artificial Intelligence use cases
Stars: ✭ 302 (+2057.14%)
Mutual labels:  explainable-ai
XAIatERUM2020
Workshop: Explanation and exploration of machine learning models with R and DALEX at eRum 2020
Stars: ✭ 52 (+271.43%)
Mutual labels:  explainable-ai
datafsm
Machine Learning Finite State Machine Models from Data with Genetic Algorithms
Stars: ✭ 14 (+0%)
Mutual labels:  explainable-ai
pyCeterisParibus
Python library for Ceteris Paribus Plots (What-if plots)
Stars: ✭ 19 (+35.71%)
Mutual labels:  explainable-ai

MEG: Molecular Explanation Generator

This repository contains the implementation of MEG (IJCNN 2021).

Usage

We assume Miniconda (or Anaconda) is installed.

Install dependencies

Run the following commands:

source setup/install.sh [cpu | cu92 | cu101 | cu102]
conda activate meg
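
For example, on a machine without a GPU you would pick the cpu option (the meg environment is created by the install script):

source setup/install.sh cpu
conda activate meg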

Train DGN

Train the DGN to be explained by running:

python train_dgn.py [tox21 | esol] <experiment_name>
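
For example, to train a DGN on Tox21 under a run named tox21_base (the experiment name here is arbitrary, chosen for illustration):

python train_dgn.py tox21 tox21_base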

Generate counterfactuals

To generate counterfactual explanations for a specific sample, run:

python train_meg.py [tox21 | esol] <experiment_name> --sample <INTEGER>

Results will be saved at runs/<dataset_name>/<experiment_name>/meg_output.
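
For example, continuing the hypothetical tox21_base run above, explaining sample 0 would look like this, with outputs written to runs/tox21/tox21_base/meg_output:

python train_meg.py tox21 tox21_base --sample 0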

BibTeX

@inproceedings{numeroso2021,
      author={Numeroso, Danilo and Bacciu, Davide},
      booktitle={2021 International Joint Conference on Neural Networks (IJCNN)}, 
      title={MEG: Generating Molecular Counterfactual Explanations for Deep Graph Networks}, 
      year={2021},
      volume={},
      number={},
      pages={1-8},
      doi={10.1109/IJCNN52387.2021.9534266}
}