
unsuthee / Variationaldeepsemantichashing

License: MIT
The original implementation of the models and experiments of Variational Deep Semantic Hashing paper (SIGIR 2017)


Projects that are alternatives to or similar to Variationaldeepsemantichashing

class-incremental-learning
PyTorch implementation of a VAE-based generative classifier, as well as other class-incremental learning methods that do not store data (DGR, BI-R, EWC, SI, CWR, CWR+, AR1, the "labels trick", SLDA).
Stars: ✭ 30 (-40%)
Mutual labels:  variational-autoencoder
Pytorch Rl
This repository contains model-free deep reinforcement learning algorithms implemented in Pytorch
Stars: ✭ 394 (+688%)
Mutual labels:  variational-autoencoder
Neurec
Next RecSys Library
Stars: ✭ 731 (+1362%)
Mutual labels:  variational-autoencoder
Textbox
TextBox is an open-source library for building text generation systems.
Stars: ✭ 257 (+414%)
Mutual labels:  variational-autoencoder
Tensorflow Generative Model Collections
Collection of generative models in Tensorflow
Stars: ✭ 3,785 (+7470%)
Mutual labels:  variational-autoencoder
Awesome Vaes
A curated list of awesome work on VAEs, disentanglement, representation learning, and generative models.
Stars: ✭ 418 (+736%)
Mutual labels:  variational-autoencoder
classifying-vae-lstm
music generation with a classifying variational autoencoder (VAE) and LSTM
Stars: ✭ 27 (-46%)
Mutual labels:  variational-autoencoder
Simple Variational Autoencoder
A VAE written entirely in Numpy/Cupy
Stars: ✭ 20 (-60%)
Mutual labels:  variational-autoencoder
Vae cf
Variational autoencoders for collaborative filtering
Stars: ✭ 386 (+672%)
Mutual labels:  variational-autoencoder
Continual Learning
PyTorch implementation of various methods for continual learning (XdG, EWC, online EWC, SI, LwF, GR, GR+distill, RtF, ER, A-GEM, iCaRL).
Stars: ✭ 600 (+1100%)
Mutual labels:  variational-autoencoder
Dalle Mtf
OpenAI's DALL-E for large-scale training in mesh-tensorflow.
Stars: ✭ 250 (+400%)
Mutual labels:  variational-autoencoder
Neuraldialog Cvae
Tensorflow Implementation of Knowledge-Guided CVAE for dialog generation ACL 2017. It is released by Tiancheng Zhao (Tony) from Dialog Research Center, LTI, CMU
Stars: ✭ 279 (+458%)
Mutual labels:  variational-autoencoder
Tensorflow Mnist Vae
Tensorflow implementation of variational auto-encoder for MNIST
Stars: ✭ 422 (+744%)
Mutual labels:  variational-autoencoder
S Vae Pytorch
Pytorch implementation of Hyperspherical Variational Auto-Encoders
Stars: ✭ 255 (+410%)
Mutual labels:  variational-autoencoder
Variational Autoencoder
Variational autoencoder implemented in tensorflow and pytorch (including inverse autoregressive flow)
Stars: ✭ 807 (+1514%)
Mutual labels:  variational-autoencoder
intro dgm
An Introduction to Deep Generative Modeling: Examples
Stars: ✭ 124 (+148%)
Mutual labels:  variational-autoencoder
Disentangling Vae
Experiments for understanding disentanglement in VAE latent representations
Stars: ✭ 398 (+696%)
Mutual labels:  variational-autoencoder
Deep Generative Models
Deep generative models implemented with TensorFlow 2.0: eg. Restricted Boltzmann Machine (RBM), Deep Belief Network (DBN), Deep Boltzmann Machine (DBM), Convolutional Variational Auto-Encoder (CVAE), Convolutional Generative Adversarial Network (CGAN)
Stars: ✭ 34 (-32%)
Mutual labels:  variational-autoencoder
Variational Autoencoder
PyTorch implementation of "Auto-Encoding Variational Bayes"
Stars: ✭ 25 (-50%)
Mutual labels:  variational-autoencoder
Scvi Tools
Deep probabilistic analysis of single-cell omics data
Stars: ✭ 452 (+804%)
Mutual labels:  variational-autoencoder

Variational Deep Semantic Hashing (SIGIR'2017)

The implementation of the models and experiments of Variational Deep Semantic Hashing (SIGIR 2017).

Author: Suthee Chaidaroon

Platform

  • This project uses Python 2.7 and TensorFlow 1.3

Prepare dataset

The model expects input documents in a bag-of-words format. Sample datasets are provided under the dataset directory. To use a new text collection, supply the model with a matrix in which each row represents one document and each column represents one unique word in the corpus.
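As a minimal, dependency-free sketch (not the repository's own preprocessing code), the expected format can be built like this, with one row per document and one column per unique word:

```python
from collections import Counter

docs = [
    "deep semantic hashing for text",
    "variational autoencoders for text documents",
]

# Build the vocabulary (the matrix columns) from the whole corpus.
vocab = sorted({word for doc in docs for word in doc.split()})

# One row of word counts per document.
bow_matrix = [[Counter(doc.split())[word] for word in vocab] for doc in docs]

print(len(bow_matrix), len(vocab))  # (number of documents, vocabulary size)
```

In practice a sparse representation is preferable for large corpora, since most entries in the matrix are zero.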

To get the best performance

According to our empirical results, TF-IDF is the best input representation for our models.
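The exact TF-IDF weighting variant used in the paper is not spelled out here; as an illustrative baseline (not the repository's implementation), the classic tf(w, d) · log(N / df(w)) formulation can be sketched as:

```python
import math
from collections import Counter

def tfidf_matrix(tokenized_docs):
    """Classic TF-IDF: tf(w, d) * log(N / df(w)).

    Illustrative only; the paper's weighting variant may differ.
    """
    n_docs = len(tokenized_docs)
    df = Counter()
    for doc in tokenized_docs:
        df.update(set(doc))  # document frequency per word
    vocab = sorted(df)
    matrix = []
    for doc in tokenized_docs:
        tf = Counter(doc)
        matrix.append([tf[w] * math.log(float(n_docs) / df[w]) for w in vocab])
    return vocab, matrix

vocab, mat = tfidf_matrix([
    ["hashing", "text", "text"],
    ["hashing", "documents"],
])
# A word that appears in every document ("hashing") gets weight 0,
# while corpus-rare words are up-weighted.
```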

Training the model

Component collapsing is common in the variational autoencoder framework: the KL regularizer shuts off some latent dimensions (driving their weights to zero). We use a KL weight annealing technique [1] to mitigate this issue during training.
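A minimal sketch of a linear annealing schedule (the warmup length is an illustrative hyperparameter, not the value used in the paper): the KL term's weight ramps from 0 to 1, so early training is dominated by the reconstruction loss and latent dimensions are less likely to be shut off.

```python
def kl_weight(step, warmup_steps=10000):
    # Linearly anneal the KL weight from 0 to 1 over warmup_steps.
    # warmup_steps is an illustrative hyperparameter, not the paper's value.
    return min(1.0, float(step) / warmup_steps)

# Illustrative use inside a VAE training loop:
#   loss = reconstruction_loss + kl_weight(step) * kl_divergence
```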

References

[1] https://arxiv.org/abs/1602.02282

Bibtex

@inproceedings{Chaidaroon:2017:VDS:3077136.3080816,
 author = {Chaidaroon, Suthee and Fang, Yi},
 title = {Variational Deep Semantic Hashing for Text Documents},
 booktitle = {Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval},
 series = {SIGIR '17},
 year = {2017},
 isbn = {978-1-4503-5022-8},
 location = {Shinjuku, Tokyo, Japan},
 pages = {75--84},
 numpages = {10},
 url = {http://doi.acm.org/10.1145/3077136.3080816},
 doi = {10.1145/3077136.3080816},
 acmid = {3080816},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {deep learning, semantic hashing, variational autoencoder},
}