
harsh19 / Shakespearizing-Modern-English

Licence: other
Code for "Jhamtani H.*, Gangal V.*, Hovy E. and Nyberg E. Shakespearizing Modern Language Using Copy-Enriched Sequence to Sequence Models" Workshop on Stylistic Variation, EMNLP 2017


Projects that are alternatives of or similar to Shakespearizing-Modern-English

deep-keyphrase
seq2seq-based keyphrase generation models, including CopyRNN, CopyCNN and CopyTransformer
Stars: ✭ 51 (-20.31%)
Mutual labels:  seq2seq, copynet
Domain-Aware-Style-Transfer
Official Implementation of Domain-Aware Universal Style Transfer
Stars: ✭ 84 (+31.25%)
Mutual labels:  style-transfer, neural-style-transfer
pytorch-neural-style-transfer-johnson
Reconstruction of fast neural style transfer (Johnson et al.). Some portions of the paper have been improved by follow-up work, such as instance normalization. Check out transformer_net.py's header for details.
Stars: ✭ 85 (+32.81%)
Mutual labels:  style-transfer, neural-style-transfer
color-aware-style-transfer
Reference code for the paper CAMS: Color-Aware Multi-Style Transfer.
Stars: ✭ 36 (-43.75%)
Mutual labels:  style-transfer, neural-style-transfer
PyTorch-deep-photo-styletransfer
PyTorch implementation of "Deep Photo Style Transfer": https://arxiv.org/abs/1703.07511
Stars: ✭ 23 (-64.06%)
Mutual labels:  style-transfer, neural-style-transfer
Keras-Style-Transfer
An implementation of "A Neural Algorithm of Artistic Style" in Keras
Stars: ✭ 36 (-43.75%)
Mutual labels:  style-transfer, neural-style-transfer
favorite-research-papers
Listing my favorite research papers 📝 from different fields as I read them.
Stars: ✭ 12 (-81.25%)
Mutual labels:  style-transfer
STYLER
Official repository of STYLER: Style Factor Modeling with Rapidity and Robustness via Speech Decomposition for Expressive and Controllable Neural Text to Speech, INTERSPEECH 2021
Stars: ✭ 105 (+64.06%)
Mutual labels:  style-transfer
dynmt-py
Neural machine translation implementation using dynet's python bindings
Stars: ✭ 17 (-73.44%)
Mutual labels:  seq2seq
seq3
Source code for the NAACL 2019 paper "SEQ^3: Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for Unsupervised Abstractive Sentence Compression"
Stars: ✭ 121 (+89.06%)
Mutual labels:  seq2seq
Image recoloring
Image Recoloring Based on Object Color Distributions (Eurographics 2019)
Stars: ✭ 30 (-53.12%)
Mutual labels:  style-transfer
chatbot
kbqa, task-oriented qa, seq2seq, ir, neo4j, jena, tf, chatbot, chat
Stars: ✭ 32 (-50%)
Mutual labels:  seq2seq
lewis
Official code for LEWIS, from: "LEWIS: Levenshtein Editing for Unsupervised Text Style Transfer", ACL-IJCNLP 2021 Findings by Machel Reid and Victor Zhong
Stars: ✭ 22 (-65.62%)
Mutual labels:  style-transfer
CVAE Dial
CVAE_XGate model in paper "Xu, Dusek, Konstas, Rieser. Better Conversations by Modeling, Filtering, and Optimizing for Coherence and Diversity"
Stars: ✭ 16 (-75%)
Mutual labels:  seq2seq
keras-chatbot-web-api
A simple Keras chatbot using a seq2seq model, served on the web with Flask
Stars: ✭ 51 (-20.31%)
Mutual labels:  seq2seq
Adversarial-Learning-for-Generative-Conversational-Agents
This repository contains a new adversarial training method for Generative Conversational Agents
Stars: ✭ 71 (+10.94%)
Mutual labels:  seq2seq
adversarial-code-generation
Source code for the ICLR 2021 work "Generating Adversarial Computer Programs using Optimized Obfuscations"
Stars: ✭ 16 (-75%)
Mutual labels:  seq2seq
classifier multi label seq2seq attention
multi-label, classifier, text classification, multi-label text classification, BERT, ALBERT, multi-label-classification, seq2seq, attention, beam search
Stars: ✭ 26 (-59.37%)
Mutual labels:  seq2seq
sentence2vec
Deep sentence embedding using Sequence to Sequence learning
Stars: ✭ 23 (-64.06%)
Mutual labels:  seq2seq
linguistic-style-transfer-pytorch
Implementation of "Disentangled Representation Learning for Non-Parallel Text Style Transfer(ACL 2019)" in Pytorch
Stars: ✭ 55 (-14.06%)
Mutual labels:  style-transfer

Shakespearizing-Modern-English

Code for "Jhamtani H., Gangal V., Hovy E. and Nyberg E. Shakespearizing Modern Language Using Copy-Enriched Sequence to Sequence Models" Workshop on Stylistic Variation, EMNLP 2017

Link to paper: https://arxiv.org/abs/1707.01161

Requirements

  • Python 2.7
  • TensorFlow 1.1.0

Instructions to run:

Preprocessing:

  • Change working directory to code/main/
  • Create a new directory named 'tmp'
  • Run:
    python mt_main.py preprocessing

Pointer model:

  • First run pre-processing
  • Change working directory to code/main/
  • Run:
    python mt_main.py train 10 pointer_model

For inference:

  • Change working directory to code/main/
  • Run:
    python mt_main.py inference tmp/pointer_model7.ckpt greedy
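
At a high level, the pointer model mixes the decoder's softmax distribution over the output vocabulary with a copy distribution derived from the encoder attention weights. Below is a minimal, framework-agnostic sketch of that mixing step; it is not the repository's TensorFlow implementation, and the function name and array shapes are illustrative assumptions:

    import numpy as np

    def mix_generate_and_copy(p_vocab, attention, source_ids, p_gen):
        """Combine a generation distribution over the output vocabulary
        with a copy distribution over source positions.

        p_vocab    : (vocab_size,) softmax over the output vocabulary
        attention  : (src_len,)    attention weights over source positions
        source_ids : (src_len,)    vocabulary ids of the source tokens
        p_gen      : scalar in [0, 1], probability of generating vs. copying
        """
        p_final = p_gen * p_vocab
        # Route each source position's attention mass to that token's id.
        for pos, token_id in enumerate(source_ids):
            p_final[token_id] += (1.0 - p_gen) * attention[pos]
        return p_final  # still sums to 1

    # Toy example: 3-word source sentence, 10-word output vocabulary.
    p_vocab = np.full(10, 0.1)
    attention = np.array([0.7, 0.2, 0.1])
    source_ids = np.array([4, 2, 7])
    print(mix_generate_and_copy(p_vocab, attention, source_ids, p_gen=0.6))

Intuitively, tokens that appear in the source (here ids 4, 2 and 7) receive extra probability mass in proportion to their attention weights, which is what lets the model copy rare words such as archaic forms.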

Normal seq2seq model:

  • Look at the seq2seq branch of the repo

Post-Processing:

There are two post-processing actions one may be interested in performing:

  1. Visualizing attention matrices
  2. Replacing UNKs in the hypothesis with their highest-aligned (by attention) input tokens

For both of these actions, refer to the running instructions in code/main/post_process.py (the comments at the beginning of the file). The file can be run in two modes, performing action 1 (write) and action 2 (postProcess) respectively; the modes are not elaborated on here for conciseness. Note that the path to the test file is hard-coded in post_process.py, so to try a new file one will have to make the corresponding changes.
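
As a rough illustration of action 2 (this is not the code in post_process.py; all names and shapes here are assumptions), replacing each UNK with the source token that received the highest attention at that decoding step could look like:

    import numpy as np

    def replace_unks(hypothesis, source_tokens, attention_matrix, unk="UNK"):
        """For each UNK in the hypothesis, substitute the source token
        that received the highest attention weight at that output step.

        attention_matrix : (len(hypothesis), len(source_tokens)) weights
        """
        output = []
        for step, token in enumerate(hypothesis):
            if token == unk:
                best = int(np.argmax(attention_matrix[step]))
                output.append(source_tokens[best])
            else:
                output.append(token)
        return output

    hyp = ["thou", "art", "UNK", "fool"]
    src = ["you", "are", "a", "knave"]
    attn = np.array([[0.7, 0.1, 0.1, 0.1],
                     [0.1, 0.7, 0.1, 0.1],
                     [0.1, 0.1, 0.1, 0.7],
                     [0.1, 0.1, 0.7, 0.1]])
    print(replace_unks(hyp, src, attn))  # ['thou', 'art', 'knave', 'fool']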

Baseline (Dictionary):

  • Change working directory to code/baselines/
  • Run:
    python dictionary_baseline.py ../../data/shakespeare.dict ../../data/test.modern.nltktok ../../data/test.dictBaseline
  • The test.dictBaseline file contains the Shakespearean-style output of the dictionary baseline (a minimal sketch of this lookup appears after this list).
  • To evaluate BLEU:
    • Change working directory to code/main/
    • Run:
      perl multi-bleu.perl -lc ../../data/test.original.nltktok < ../../data/test.dictBaseline
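
For intuition, the dictionary baseline amounts to a token-by-token lookup. The sketch below is not dictionary_baseline.py itself; the tab-separated "modern<TAB>shakespearean" format assumed for shakespeare.dict is a guess, and the output path is changed so as not to overwrite the real baseline output. Tokens without a dictionary entry are passed through unchanged:

    def load_dictionary(path):
        """Load a modern -> Shakespearean word mapping.
        Assumes one tab-separated pair per line (format assumption)."""
        mapping = {}
        with open(path) as f:
            for line in f:
                parts = line.rstrip("\n").split("\t")
                if len(parts) == 2:
                    modern, shakespearean = parts
                    mapping[modern] = shakespearean
        return mapping

    def translate_line(line, mapping):
        """Replace each token with its dictionary entry, if any."""
        return " ".join(mapping.get(tok, tok) for tok in line.split())

    mapping = load_dictionary("../../data/shakespeare.dict")
    with open("../../data/test.modern.nltktok") as src, \
         open("../../data/test.dictBaseline.sketch", "w") as out:
        for line in src:
            out.write(translate_line(line.strip(), mapping) + "\n")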

Baseline (Statistical MT):

  • Please follow the instructions in "Wei Xu, Alan Ritter, William B Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for style. In 24th International Conference on Computational Linguistics, COLING 2012."

Citation

If you use this code or the processed data, please consider citing our work:

@article{jhamtani2017shakespearizing,
  title={Shakespearizing Modern Language Using Copy-Enriched Sequence-to-Sequence Models},
  author={Jhamtani, Harsh and Gangal, Varun and Hovy, Eduard and Nyberg, Eric},
  journal={EMNLP 2017},
  volume={6},
  pages={10},
  year={2017}
}

Additionally, if you use the data, please consider citing "Wei Xu, Alan Ritter, William B Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for style. In 24th International Conference on Computational Linguistics, COLING 2012."
