
rguo12 / CIKM18-LCVA

Licence: other
Code for CIKM'18 paper, Linked Causal Variational Autoencoder for Inferring Paired Spillover Effects.


Projects that are alternatives of or similar to CIKM18-LCVA

Awesome Vaes
A curated list of awesome work on VAEs, disentanglement, representation learning, and generative models.
Stars: ✭ 418 (+3115.38%)
Mutual labels:  variational-inference, variational-autoencoder
haskell-vae
Learning about Haskell with Variational Autoencoders
Stars: ✭ 18 (+38.46%)
Mutual labels:  variational-inference, variational-autoencoder
Variational Autoencoder
Variational autoencoder implemented in tensorflow and pytorch (including inverse autoregressive flow)
Stars: ✭ 807 (+6107.69%)
Mutual labels:  variational-inference, variational-autoencoder
Dowhy
DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions. DoWhy is based on a unified language for causal inference, combining causal graphical models and potential outcomes frameworks.
Stars: ✭ 3,480 (+26669.23%)
Mutual labels:  causal-inference, treatment-effects
gradient-boosted-normalizing-flows
We got a stew going!
Stars: ✭ 20 (+53.85%)
Mutual labels:  variational-inference, variational-autoencoder
Generative models tutorial with demo
Generative Models Tutorial with Demo: Bayesian Classifier Sampling, Variational Auto Encoder (VAE), Generative Adversarial Networks (GANs), Popular GANs Architectures, Auto-Regressive Models, Important Generative Model Papers, Courses, etc.
Stars: ✭ 276 (+2023.08%)
Mutual labels:  variational-inference, variational-autoencoder
Normalizing Flows
Understanding normalizing flows
Stars: ✭ 126 (+869.23%)
Mutual labels:  variational-inference, variational-autoencoder
lagvae
Lagrangian VAE
Stars: ✭ 27 (+107.69%)
Mutual labels:  variational-inference, variational-autoencoder
SIVI
Using neural network to build expressive hierarchical distribution; A variational method to accurately estimate posterior uncertainty; A fast and general method for Bayesian inference. (ICML 2018)
Stars: ✭ 49 (+276.92%)
Mutual labels:  variational-inference, variational-autoencoder
cfml tools
My collection of causal inference algorithms built on top of accessible, simple, out-of-the-box ML methods, aimed at being explainable and useful in the business context
Stars: ✭ 24 (+84.62%)
Mutual labels:  causal-inference, treatment-effects
Kvae
Kalman Variational Auto-Encoder
Stars: ✭ 115 (+784.62%)
Mutual labels:  variational-inference, variational-autoencoder
causeinfer
Machine learning based causal inference/uplift in Python
Stars: ✭ 45 (+246.15%)
Mutual labels:  causal-inference, treatment-effects
Tensorflow Mnist Cvae
Tensorflow implementation of conditional variational auto-encoder for MNIST
Stars: ✭ 139 (+969.23%)
Mutual labels:  variational-inference, variational-autoencoder
normalizing-flows
PyTorch implementation of normalizing flow models
Stars: ✭ 271 (+1984.62%)
Mutual labels:  variational-inference, variational-autoencoder
causal-ml
Must-read papers and resources related to causal inference and machine (deep) learning
Stars: ✭ 387 (+2876.92%)
Mutual labels:  causal-inference, treatment-effects
prosper
A Python Library for Probabilistic Sparse Coding with Non-Standard Priors and Superpositions
Stars: ✭ 17 (+30.77%)
Mutual labels:  variational-inference
deep-active-inference-mc
Deep active inference agents using Monte-Carlo methods
Stars: ✭ 41 (+215.38%)
Mutual labels:  variational-inference
vaegan
An implementation of VAEGAN (variational autoencoder + generative adversarial network).
Stars: ✭ 88 (+576.92%)
Mutual labels:  variational-autoencoder
Friends-Recommender-In-Social-Network
Friends Recommendation and Link Prediction in Social Network
Stars: ✭ 33 (+153.85%)
Mutual labels:  network-embedding
Generalized-PixelVAE
PixelVAE with or without regularization
Stars: ✭ 64 (+392.31%)
Mutual labels:  variational-inference

CIKM18-Linked Causal Variational Autoencoder

Code for the research paper:

Linked Causal Variational Autoencoder for Inferring Paired Spillover Effects.

Please cite the following BibTeX entry if you use this code for further development or as a baseline method in your work:

@inproceedings{rakesh2018linked,
  title={Linked Causal Variational Autoencoder for Inferring Paired Spillover Effects},
  author={Rakesh, Vineeth and Guo, Ruocheng and Moraffah, Raha and Agarwal, Nitin and Liu, Huan},
  booktitle={Proceedings of the 27th ACM International Conference on Information and Knowledge Management},
  pages={1679--1682},
  year={2018},
  organization={ACM}
}

In this work, we model the spillover effect between paired instances when learning causal effects from observational data.

Amazon dataset

For the Amazon dataset we processed and used for the paper, please check out: Download Amazon Dataset Here

We will use the positive review dataset as an example. You will load data from new_product_graph_pos.npz and AmazonItmFeatures_pos.csv.

If you run the following code to load the item-item co-purchase graph new_product_graph_pos.npz,

```python
import numpy as np
from scipy.sparse import csr_matrix

# Load the stored CSR components of the co-purchase graph.
prod_G = np.load("new_product_graph_pos.npz")
print(prod_G.files)  # ['data', 'indices', 'indptr', 'shape']
data = prod_G['data']
indices = prod_G['indices']
indptr = prod_G['indptr']
shape = prod_G['shape']

# Rebuild the sparse matrix, passing the stored shape explicitly.
csr_mat = csr_matrix((data, indices, indptr), shape=tuple(shape), dtype=int)

arr = csr_mat.toarray()
```

you will find a dimension mismatch: arr.shape is (96132, 42135), while the positive review dataset has only 42135 instances. Only the first 42135 rows are needed; the remaining rows are all zeros, so it is safe to discard them.
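Trimming the padded zero rows can be sketched as follows. This uses a small synthetic matrix in place of the real (96132, 42135) graph; n_items stands in for 42135:

```python
import numpy as np
from scipy.sparse import csr_matrix

n_items = 4  # stands in for the 42135 instances of the positive review dataset
mat = csr_matrix(np.array([[0, 1, 0, 0],
                           [1, 0, 1, 0],
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 0],   # padded all-zero rows,
                           [0, 0, 0, 0]]))  # analogous to rows 42135..96131

# Keep only the first n_items rows; CSR row slicing stays sparse.
adj = mat[:n_items]
print(adj.shape)  # (4, 4)
```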

The file AmazonItmFeatures_pos.csv can be loaded as a pd.DataFrame of shape (42135, 305). The last 300 columns are the doc2vec features. The first, second, and third columns are the treatment, the treated outcome, and the control outcome, respectively. The fourth and fifth columns can be ignored.
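Splitting those columns might look like the sketch below. A small synthetic frame (5 rows, 10 doc2vec dimensions instead of 300) stands in for the real file; for the actual data you would instead load it with pd.read_csv("AmazonItmFeatures_pos.csv", header=None), assuming the file has no header row:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for AmazonItmFeatures_pos.csv with the same column layout.
n, d = 5, 10  # for the real file: n = 42135, d = 300
rng = np.random.default_rng(0)
df = pd.DataFrame(np.column_stack([
    rng.integers(0, 2, n),       # col 0: treatment
    rng.normal(size=n),          # col 1: treated outcome
    rng.normal(size=n),          # col 2: control outcome
    np.zeros((n, 2)),            # cols 3-4: ignored
    rng.normal(size=(n, d)),     # last d cols: doc2vec features
]))

treatment = df.iloc[:, 0].astype(int)
y_treated = df.iloc[:, 1]
y_control = df.iloc[:, 2]
features = df.iloc[:, -d:]
print(features.shape)  # (5, 10)
```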

You can refer to this link for how to handle CSR matrices with indptr.
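In short, indptr marks where each row's entries begin and end inside data and indices: row i occupies the slice data[indptr[i]:indptr[i+1]]. A minimal illustration:

```python
import numpy as np
from scipy.sparse import csr_matrix

data = np.array([5, 7, 3])
indices = np.array([1, 3, 2])
indptr = np.array([0, 2, 2, 3])  # row i's entries: data[indptr[i]:indptr[i+1]]

m = csr_matrix((data, indices, indptr), shape=(3, 4))
# Row 0 has values 5 and 7 at columns 1 and 3; row 1 is empty (indptr[1] ==
# indptr[2]); row 2 has value 3 at column 2.
print(m.toarray())
```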

For the job training dataset, please refer to the paper.

Feel free to email me (ruocheng.guo at cityu dot edu dot hk) with any questions or collaboration inquiries.

Acknowledgement

The code is developed based on the code released by the authors of the NeurIPS 2017 paper:

Louizos, Christos, Uri Shalit, Joris M. Mooij, David Sontag, Richard Zemel, and Max Welling. "Causal effect inference with deep latent-variable models." In Advances in Neural Information Processing Systems, pp. 6446-6456. 2017.