claude-zhou / Mojitalk

License: MIT
Code for "MojiTalk: Generating Emotional Responses at Scale" https://arxiv.org/abs/1711.04090

Programming Languages

Python

Projects that are alternatives to or similar to Mojitalk

Pytorch Rl
This repository contains model-free deep reinforcement learning algorithms implemented in Pytorch
Stars: ✭ 394 (+268.22%)
Mutual labels:  reinforcement-learning, vae, variational-autoencoder
Tensorflow Mnist Vae
Tensorflow implementation of variational auto-encoder for MNIST
Stars: ✭ 422 (+294.39%)
Mutual labels:  vae, variational-autoencoder
Awesome Vaes
A curated list of awesome work on VAEs, disentanglement, representation learning, and generative models.
Stars: ✭ 418 (+290.65%)
Mutual labels:  vae, variational-autoencoder
Advanced Deep Learning With Keras
Advanced Deep Learning with Keras, published by Packt
Stars: ✭ 917 (+757.01%)
Mutual labels:  reinforcement-learning, vae
Tensorflow Generative Model Collections
Collection of generative models in Tensorflow
Stars: ✭ 3,785 (+3437.38%)
Mutual labels:  vae, variational-autoencoder
Disentangling Vae
Experiments for understanding disentanglement in VAE latent representations
Stars: ✭ 398 (+271.96%)
Mutual labels:  vae, variational-autoencoder
Nlg Eval
Evaluation code for various unsupervised automated metrics for Natural Language Generation.
Stars: ✭ 822 (+668.22%)
Mutual labels:  natural-language-generation, dialog
srVAE
VAE with RealNVP prior and Super-Resolution VAE in PyTorch. Code release for https://arxiv.org/abs/2006.05218.
Stars: ✭ 56 (-47.66%)
Mutual labels:  vae, variational-autoencoder
Vae protein function
Protein function prediction using a variational autoencoder
Stars: ✭ 57 (-46.73%)
Mutual labels:  vae, variational-autoencoder
Nlg Rl
Accelerated Reinforcement Learning for Sentence Generation by Vocabulary Prediction
Stars: ✭ 59 (-44.86%)
Mutual labels:  reinforcement-learning, natural-language-generation
Smrt
Handle class imbalance intelligently by using variational auto-encoders to generate synthetic observations of your minority class.
Stars: ✭ 102 (-4.67%)
Mutual labels:  vae, variational-autoencoder
S Vae Pytorch
Pytorch implementation of Hyperspherical Variational Auto-Encoders
Stars: ✭ 255 (+138.32%)
Mutual labels:  vae, variational-autoencoder
classifying-vae-lstm
music generation with a classifying variational autoencoder (VAE) and LSTM
Stars: ✭ 27 (-74.77%)
Mutual labels:  vae, variational-autoencoder
Copycat-abstractive-opinion-summarizer
ACL 2020 Unsupervised Opinion Summarization as Copycat-Review Generation
Stars: ✭ 76 (-28.97%)
Mutual labels:  vae, natural-language-generation
Seqgan tensorflow
SeqGAN tensorflow implementation
Stars: ✭ 96 (-10.28%)
Mutual labels:  reinforcement-learning, natural-language-generation
Variational Autoencoder
Variational autoencoder implemented in tensorflow and pytorch (including inverse autoregressive flow)
Stars: ✭ 807 (+654.21%)
Mutual labels:  vae, variational-autoencoder
linguistic-style-transfer-pytorch
Implementation of "Disentangled Representation Learning for Non-Parallel Text Style Transfer(ACL 2019)" in Pytorch
Stars: ✭ 55 (-48.6%)
Mutual labels:  natural-language-generation, variational-autoencoder
VAE-Gumbel-Softmax
An implementation of a Variational Autoencoder using the Gumbel-Softmax reparametrization trick (ICLR 2017) in TensorFlow, tested on r1.5 CPU and GPU.
Stars: ✭ 66 (-38.32%)
Mutual labels:  vae, variational-autoencoder
Variational Autoencoder
PyTorch implementation of "Auto-Encoding Variational Bayes"
Stars: ✭ 25 (-76.64%)
Mutual labels:  vae, variational-autoencoder
Vae For Image Generation
Implemented Variational Autoencoder generative model in Keras for image generation and its latent space visualization on MNIST and CIFAR10 datasets
Stars: ✭ 87 (-18.69%)
Mutual labels:  vae, variational-autoencoder

MojiTalk

Xianda Zhou, and William Yang Wang. 2018. Mojitalk: Generating emotional responses at scale. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1128–1137. Association for Computational Linguistics.

Paper: https://arxiv.org/abs/1711.04090

Our lab: http://nlp.cs.ucsb.edu/index.html

Emojis

The file emoji-test.txt (http://unicode.org/Public/emoji/5.0/emoji-test.txt) provides data for loading and testing emojis. The 64 emojis that we used in our work are marked with '64' in our modified emoji-test.txt file.
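For illustration, here is a minimal sketch of how the 64 marked emojis could be read out of the file. It assumes the '64' marker appears as a trailing token on the relevant lines, which may not match the modified file's actual format:

    # Hypothetical sketch: collect the emojis whose lines carry the '64' marker.
    # The trailing-token assumption about the marker is ours, not the repo's.
    def load_marked_emojis(path="emoji-test.txt"):
        emojis = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue  # skip comments and blank lines
                if line.endswith("64"):
                    # Code points precede the ';' field, e.g. "1F600 ; fully-qualified # ..."
                    codepoints = line.split(";")[0].split()
                    emojis.append("".join(chr(int(cp, 16)) for cp in codepoints))
        return emojis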

Unicode and the Unicode Logo are registered trademarks of Unicode, Inc. in the U.S. and other countries.

For terms of use, see http://www.unicode.org/terms_of_use.html

Dependencies

  • Python 3.5.2
  • TensorFlow 1.2.1
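
A quick, optional sanity check (a sketch, not part of the repo) that the environment matches these pinned versions:

    import sys
    import tensorflow as tf

    print("Python:", sys.version.split()[0])  # expected: 3.5.2
    print("TensorFlow:", tf.__version__)      # expected: 1.2.1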

Usage

  1. Preparation:

    Set up an environment according to the dependencies.

    Dataset: https://drive.google.com/file/d/1l0fAfxvoNZRviAMVLecPZvFZ0Qexr7yU/view?usp=sharing

    Unzip mojitalk_data.zip to the current path, creating the mojitalk_data directory where the dataset is stored. Read the readme.txt inside it for the dataset format.

  2. Base model:

    1. Set the is_seq2seq variable in cvae_run.py to True

    2. Train, test and generate: python3 cvae_run.py

      This will save several breakpoints, a log file, and generation output in mojitalk_data/seq2seq/<timestamp>/.

  3. CVAE model:

    1. Set the is_seq2seq variable in cvae_run.py to False

    2. Set the path of the pretrained model: modify line 67 of cvae_run.py to load a previously trained base model, e.g. saver.restore(sess, "seq2seq/07-17_05-49-50/breakpoints/at_step_18000.ckpt") (see the Saver sketch after this list)

    3. Train, test and generate: python3 cvae_run.py

      This will save several breakpoints, a log file and generation output in mojitalk_data/cvae/<timestamp>/.

      Note that the choice of base-model breakpoint used for pretraining influences the result of CVAE training. An overfitted base model may cause the CVAE to diverge.

  4. Reinforced CVAE model:

    1. Train the emoji classifier: CUDA_VISIBLE_DEVICES=0 python3 classifier.py

      The trained model will be saved in mojitalk_data/classifier/<timestamp>/breakpoints as a TensorFlow breakpoint.

    2. Set the paths of the pretrained models: modify lines 63 and 74 of rl_run.py to load a previously trained CVAE model and the classifier.

    3. Train, test and generate: python3 rl_run.py

      This will save several breakpoints, a log file and generation output in mojitalk_data/cvae/<timestamp>/.
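
For reference, the breakpoints written and loaded in steps 3.2 and 4.2 follow the standard TensorFlow 1.x Saver pattern. Below is a minimal, self-contained sketch of that pattern; the directory name and the single stand-in variable are illustrative, not the repo's actual graph or layout:

    import os
    import tensorflow as tf

    # Stand-in for the real model graph (e.g., the CVAE built in cvae_run.py);
    # one variable is enough for the save/restore calls to work.
    dummy = tf.get_variable("dummy", shape=[1])
    saver = tf.train.Saver()

    ckpt_dir = "mojitalk_data/seq2seq/example/breakpoints"  # hypothetical run directory
    os.makedirs(ckpt_dir, exist_ok=True)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Writing a breakpoint, as the run scripts do during training.
        path = saver.save(sess, os.path.join(ckpt_dir, "at_step_0.ckpt"))
        # Restoring it -- the same call you edit at line 67 of cvae_run.py
        # (or lines 63/74 of rl_run.py) to point at your own breakpoint.
        saver.restore(sess, path)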
