
vlievin / char-VAE

License: MIT
Inspired by the neural style transfer algorithm from the computer vision field, we propose a high-level language model aimed at adapting linguistic style.

Programming Languages

  • Jupyter Notebook
  • Python

Projects that are alternatives of or similar to char-VAE

Cramer Gan
Tensorflow Implementation on "The Cramer Distance as a Solution to Biased Wasserstein Gradients" (https://arxiv.org/pdf/1705.10743.pdf)
Stars: ✭ 123 (+583.33%)
Mutual labels:  generative-model, tensorflow-experiments, tensorflow-models
Attend infer repeat
A TensorFlow implementation of Attend, Infer, Repeat
Stars: ✭ 82 (+355.56%)
Mutual labels:  generative-model, rnn, vae
Wgan
Tensorflow Implementation of Wasserstein GAN (and Improved version in wgan_v2)
Stars: ✭ 228 (+1166.67%)
Mutual labels:  generative-model, tensorflow-experiments, tensorflow-models
TensorFlow-Multiclass-Image-Classification-using-CNN-s
Balanced Multiclass Image Classification with TensorFlow in Python.
Stars: ✭ 57 (+216.67%)
Mutual labels:  tensorflow-experiments, tensorflow-models
Tensorflow-Wide-Deep-Local-Prediction
This project demonstrates how to run and save predictions locally using an exported TensorFlow Estimator model
Stars: ✭ 28 (+55.56%)
Mutual labels:  tensorflow-experiments, tensorflow-models
EEG-Motor-Imagery-Classification-CNNs-TensorFlow
EEG Motor Imagery Tasks Classification (by Channels) via Convolutional Neural Networks (CNNs) based on TensorFlow
Stars: ✭ 125 (+594.44%)
Mutual labels:  tensorflow-experiments, tensorflow-models
Recurrent-Neural-Network-for-BitCoin-price-prediction
Recurrent Neural Network (LSTM) by using TensorFlow and Keras in Python for BitCoin price prediction
Stars: ✭ 53 (+194.44%)
Mutual labels:  rnn, rnn-tensorflow
InpaintNet
Code accompanying ISMIR'19 paper titled "Learning to Traverse Latent Spaces for Musical Score Inpainting"
Stars: ✭ 48 (+166.67%)
Mutual labels:  generative-model, vae
Rep-Counter
AI Exercise Rep Counter based on Google's Human Pose Estimation Library (Posenet)
Stars: ✭ 47 (+161.11%)
Mutual labels:  rnn, rnn-tensorflow
Awesome-Tensorflow2
Excellent extension packages and projects built on TensorFlow 2
Stars: ✭ 45 (+150%)
Mutual labels:  tensorflow-experiments, tensorflow-models
vqvae-2
PyTorch implementation of VQ-VAE-2 from "Generating Diverse High-Fidelity Images with VQ-VAE-2"
Stars: ✭ 65 (+261.11%)
Mutual labels:  generative-model, vae
Solar-Rad-Forecasting
These notebooks capture the entire research and implementation process carried out to build several neural-network-based machine learning models capable of predicting solar radiation levels from historical data collected by meteorological stations.
Stars: ✭ 24 (+33.33%)
Mutual labels:  rnn, rnn-tensorflow
STAR Network
[PAMI 2021] Gating Revisited: Deep Multi-layer RNNs That Can Be Trained
Stars: ✭ 16 (-11.11%)
Mutual labels:  rnn, rnn-tensorflow
Machine-Learning
🌎 I created this repository for educational purposes. It will host a number of projects as part of the process.
Stars: ✭ 38 (+111.11%)
Mutual labels:  tensorflow-experiments, tensorflow-models
Tf Vqvae
Tensorflow Implementation of the paper [Neural Discrete Representation Learning](https://arxiv.org/abs/1711.00937) (VQ-VAE).
Stars: ✭ 226 (+1155.56%)
Mutual labels:  generative-model, vae
Age-Gender Estimation TF-Android
Age + Gender Estimation on Android with TensorFlow Lite
Stars: ✭ 34 (+88.89%)
Mutual labels:  tensorflow-experiments, tensorflow-models
FARED for Anomaly Detection
Official source code of "Fast Adaptive RNN Encoder-Decoder for Anomaly Detection in SMD Assembly Machine"
Stars: ✭ 14 (-22.22%)
Mutual labels:  rnn, rnn-encoder-decoder
Vae For Image Generation
Implemented Variational Autoencoder generative model in Keras for image generation and its latent space visualization on MNIST and CIFAR10 datasets
Stars: ✭ 87 (+383.33%)
Mutual labels:  generative-model, vae
Rnn Handwriting Generation
Handwriting generation by RNN with TensorFlow, based on "Generating Sequences With Recurrent Neural Networks" by Alex Graves
Stars: ✭ 90 (+400%)
Mutual labels:  generative-model, rnn
stylegan-pokemon
Generating Pokemon cards using a mixture of StyleGAN and RNN to create beautiful & vibrant cards ready for battle!
Stars: ✭ 47 (+161.11%)
Mutual labels:  rnn, rnn-tensorflow

Thesis Abstract

This thesis reviews the use of neural networks to build a general natural language model and evaluates its application to the task of linguistic style adaptation. We call style adaptation the process of transforming a sentence into another sentence that conveys the same meaning but uses a different linguistic style. This work has been strongly influenced by the algorithm for artistic style transfer proposed in the computer vision field.

Hopefully, the work presented in this thesis will help to create better generative language models with near-human performance. We believe that this technology could be a strong asset in the translation industry and could significantly improve current conversational interfaces.

The literature review has motivated the use of the Variational Auto-Encoder to build a continuous representation of the language. During this thesis, we first conducted an in-depth study of the theory behind this framework and tested it on the task of modeling the MNIST dataset.
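
As a rough illustration of this framework only (the repository's own notebooks are not reproduced here), the sketch below implements a minimal Variational Auto-Encoder on MNIST with the TensorFlow 2 / Keras API: a Dense encoder producing the mean and log-variance of q(z|x), the reparameterisation trick, and a loss equal to the negative ELBO. All names and hyper-parameters (e.g. latent_dim = 2) are illustrative assumptions, not values taken from the thesis.

    import tensorflow as tf

    latent_dim = 2  # illustrative size, not the value used in the thesis

    # Encoder: maps a flattened MNIST image to the mean and log-variance of q(z|x).
    enc_in = tf.keras.Input(shape=(784,))
    h = tf.keras.layers.Dense(256, activation="relu")(enc_in)
    z_mean = tf.keras.layers.Dense(latent_dim)(h)
    z_log_var = tf.keras.layers.Dense(latent_dim)(h)
    encoder = tf.keras.Model(enc_in, [z_mean, z_log_var])

    # Decoder: maps a latent sample z back to Bernoulli pixel probabilities.
    dec_in = tf.keras.Input(shape=(latent_dim,))
    h = tf.keras.layers.Dense(256, activation="relu")(dec_in)
    dec_out = tf.keras.layers.Dense(784, activation="sigmoid")(h)
    decoder = tf.keras.Model(dec_in, dec_out)

    optimizer = tf.keras.optimizers.Adam(1e-3)

    @tf.function
    def train_step(x):
        with tf.GradientTape() as tape:
            mu, log_var = encoder(x)
            # Reparameterisation trick: z = mu + sigma * eps with eps ~ N(0, I).
            eps = tf.random.normal(tf.shape(mu))
            z = mu + tf.exp(0.5 * log_var) * eps
            x_hat = decoder(z)
            # Negative ELBO = Bernoulli reconstruction error + KL(q(z|x) || N(0, I)).
            rec = -tf.reduce_sum(
                x * tf.math.log(x_hat + 1e-7)
                + (1.0 - x) * tf.math.log(1.0 - x_hat + 1e-7), axis=-1)
            kl = -0.5 * tf.reduce_sum(
                1.0 + log_var - tf.square(mu) - tf.exp(log_var), axis=-1)
            loss = tf.reduce_mean(rec + kl)
        variables = encoder.trainable_variables + decoder.trainable_variables
        grads = tape.gradient(loss, variables)
        optimizer.apply_gradients(zip(grads, variables))
        return loss

    # Usage sketch: normalise MNIST to [0, 1] and take a few gradient steps.
    (x_train, _), _ = tf.keras.datasets.mnist.load_data()
    x_train = (x_train.reshape(-1, 784) / 255.0).astype("float32")
    dataset = tf.data.Dataset.from_tensor_slices(x_train).shuffle(1024).batch(128)
    for batch in dataset.take(200):
        train_step(batch)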

Motivated by the sequential nature of language, we introduced the Recurrent Neural Network architecture and showed that it is compatible with the Variational Auto-Encoder framework by testing it on the modeling of a small set of time series.
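
Purely as a sketch of how a recurrent architecture fits into the same framework (the thesis may have used a different recurrent cell and layout), a GRU can summarise an input sequence into the parameters of q(z|x), while a second GRU decodes a sequence back from a repeated latent sample:

    import tensorflow as tf

    seq_len, n_features, latent_dim = 50, 1, 8  # illustrative sizes

    # Recurrent encoder: a GRU summarises the whole sequence, then two Dense
    # layers produce the mean and log-variance of q(z | x).
    enc_in = tf.keras.Input(shape=(seq_len, n_features))
    h = tf.keras.layers.GRU(64)(enc_in)
    z_mean = tf.keras.layers.Dense(latent_dim)(h)
    z_log_var = tf.keras.layers.Dense(latent_dim)(h)
    encoder = tf.keras.Model(enc_in, [z_mean, z_log_var])

    # Recurrent decoder: the latent sample is repeated at every time step and
    # a second GRU reconstructs the sequence from it.
    dec_in = tf.keras.Input(shape=(latent_dim,))
    repeated = tf.keras.layers.RepeatVector(seq_len)(dec_in)
    d = tf.keras.layers.GRU(64, return_sequences=True)(repeated)
    dec_out = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_features))(d)
    decoder = tf.keras.Model(dec_in, dec_out)

Training then proceeds as in the MNIST sketch above, except that for continuous-valued time series the reconstruction term is typically a squared error rather than a Bernoulli likelihood.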

As a second step, we applied the Variational Auto-Encoder to build a character-level language model. For this purpose, a small set of sentences taken from the Large Movie Review Dataset was used. We reported poor generative performance but good recognition performance, demonstrated notably by the model's ability to rephrase unseen sentences.
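
A character-level model first requires sentences to be mapped to integer sequences. The snippet below is a generic illustration of that step; the character vocabulary and zero-padding scheme are assumptions, not necessarily those used in the thesis.

    # Character-level encoding of a small corpus of sentences (illustrative only).
    corpus = ["this movie was great", "the plot made no sense"]

    # Build a character vocabulary; index 0 is reserved for padding.
    chars = sorted(set("".join(corpus)))
    char_to_id = {c: i + 1 for i, c in enumerate(chars)}
    id_to_char = {i: c for c, i in char_to_id.items()}

    max_len = max(len(s) for s in corpus)

    def encode(sentence):
        """Map a sentence to a fixed-length list of character ids (0-padded)."""
        ids = [char_to_id[c] for c in sentence]
        return ids + [0] * (max_len - len(ids))

    def decode(ids):
        """Map character ids back to text, dropping the padding."""
        return "".join(id_to_char[i] for i in ids if i != 0)

    encoded = [encode(s) for s in corpus]
    assert decode(encoded[0]) == corpus[0]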

Thereafter, we justified the choice of sentiment as a specific case of stylistic feature and proposed four different approaches to the task of style adaptation. We then exploited the good recognition performance of our model to build a simple prototype which can be regarded as a successful case of style adaptation: the prototype proved able to change the sentiment conveyed by simple sentences while still addressing the same object.
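
One common way to perform this kind of manipulation in a VAE latent space (shown here only as an illustration, not as any of the four approaches or the prototype described in the thesis) is attribute-vector arithmetic: estimate a "sentiment direction" from the latent codes of positive and negative sentences, then shift a sentence's code along that direction before decoding. The encode and decode callables below are hypothetical stand-ins for the trained encoder (posterior mean) and decoder.

    import numpy as np

    def sentiment_direction(encode, positive_sentences, negative_sentences):
        """Estimate a 'sentiment axis' as the difference of mean latent codes.

        `encode` is assumed to map a sentence to the mean of its approximate
        posterior q(z | x), e.g. the encoder of a trained character-level VAE.
        """
        z_pos = np.mean([encode(s) for s in positive_sentences], axis=0)
        z_neg = np.mean([encode(s) for s in negative_sentences], axis=0)
        return z_pos - z_neg

    def shift_sentiment(encode, decode, sentence, direction, strength=1.0):
        """Move a sentence's latent code along the sentiment axis and decode it.

        `decode` is assumed to map a latent vector back to a sentence, e.g. by
        greedy character-by-character decoding.
        """
        z = encode(sentence)
        return decode(z + strength * direction)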

We hope that the analysis of the Variational Auto-Encoder and its application to the task of linguistic style adaptation will motivate further research in this domain, and we are convinced that our methods can be applied to the task of natural language understanding in the near future. Lastly, we propose further research to overcome the poor generative performance and apply this model to generative tasks.

What is included

  • report
  • code to download and pre-process the Large Movie Review Dataset (an independent sketch of this step is shown after this list)
  • code for the VAE model used as language model
  • code for the Pessimistic Machine
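
As an independent illustration of the download and pre-processing step (this is not the repository's code), the sketch below fetches the Large Movie Review Dataset from its public Stanford mirror and crudely splits the raw reviews into short, lower-cased sentences; the sentence-splitting rule and length cut-off are assumptions.

    import os
    import re
    import tarfile
    import urllib.request

    # Public mirror of the Large Movie Review Dataset (Maas et al., 2011).
    URL = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
    ARCHIVE = "aclImdb_v1.tar.gz"

    if not os.path.exists("aclImdb"):
        urllib.request.urlretrieve(URL, ARCHIVE)
        with tarfile.open(ARCHIVE, "r:gz") as tar:
            tar.extractall()

    def load_sentences(split="train", label="pos", limit=1000):
        """Read raw reviews and split them into short, lower-cased sentences."""
        sentences = []
        folder = os.path.join("aclImdb", split, label)
        for name in sorted(os.listdir(folder))[:limit]:
            with open(os.path.join(folder, name), encoding="utf-8") as f:
                text = f.read().replace("<br />", " ")
            # Crude sentence splitting on end-of-sentence punctuation.
            for sent in re.split(r"[.!?]", text):
                sent = sent.strip().lower()
                if 0 < len(sent) <= 80:  # keep short sentences only
                    sentences.append(sent)
        return sentences

    positive = load_sentences(label="pos")
    negative = load_sentences(label="neg")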