
chen0040 / keras-chatbot-web-api

License: MIT
Simple keras chat bot using seq2seq model with Flask serving web

Programming Languages

python
139335 projects - #7 most used programming language
HTML
75241 projects
CSS
56736 projects

Projects that are alternatives of or similar to keras-chatbot-web-api

Awesome Chatbot
Awesome Chatbot Projects, Corpus, Papers, Tutorials. Chinese Chatbot =>:
Stars: ✭ 1,785 (+3400%)
Mutual labels:  chatbot, seq2seq, seq2seq-model
Tensorflow seq2seq chatbot
Stars: ✭ 81 (+58.82%)
Mutual labels:  chatbot, seq2seq
Tensorflow Seq2seq Dialogs
Build conversation Seq2Seq models with TensorFlow
Stars: ✭ 43 (-15.69%)
Mutual labels:  chatbot, seq2seq
Chinese Chatbot
A Chinese chatbot trained on 100,000 dialogue pairs using an attention mechanism; it generates a meaningful reply to most ordinary questions. The trained model has been uploaded and can be run directly (if it doesn't run, I'll livestream myself eating the keyboard).
Stars: ✭ 124 (+143.14%)
Mutual labels:  chatbot, seq2seq
Tf seq2seq chatbot
[unmaintained]
Stars: ✭ 420 (+723.53%)
Mutual labels:  chatbot, seq2seq
Seq2seqchatbots
A wrapper around tensor2tensor to flexibly train, interact, and generate data for neural chatbots.
Stars: ✭ 466 (+813.73%)
Mutual labels:  chatbot, seq2seq
Mlds2018spring
Machine Learning and having it Deep and Structured (MLDS) in 2018 spring
Stars: ✭ 124 (+143.14%)
Mutual labels:  chatbot, seq2seq
Dynamic Seq2seq
A seq2seq-based Chinese chatbot
Stars: ✭ 303 (+494.12%)
Mutual labels:  chatbot, seq2seq
Seq2seq chatbot new
A TensorFlow implementation of a simple dialogue system based on the seq2seq model, with embedding, attention, beam_search, and other features; the dataset is Cornell Movie Dialogs.
Stars: ✭ 144 (+182.35%)
Mutual labels:  chatbot, seq2seq
Conversation Tensorflow
TensorFlow implementation of Conversation Models
Stars: ✭ 143 (+180.39%)
Mutual labels:  chatbot, seq2seq
Nlp pytorch project
Embedding, NMT, Text_Classification, Text_Generation, NER etc.
Stars: ✭ 153 (+200%)
Mutual labels:  chatbot, seq2seq
Pytorch Chatbot
Pytorch seq2seq chatbot
Stars: ✭ 336 (+558.82%)
Mutual labels:  chatbot, seq2seq
Seq2seq Chatbot For Keras
This repository contains a new generative model of chatbot based on seq2seq modeling.
Stars: ✭ 322 (+531.37%)
Mutual labels:  chatbot, seq2seq
Practical seq2seq
A simple, minimal wrapper for tensorflow's seq2seq module, for experimenting with datasets rapidly
Stars: ✭ 563 (+1003.92%)
Mutual labels:  chatbot, seq2seq
Seq2seq chatbot
A TensorFlow implementation of a simple dialogue system based on the seq2seq model, with embedding, attention, beam_search, and other features; the dataset is Cornell Movie Dialogs.
Stars: ✭ 308 (+503.92%)
Mutual labels:  chatbot, seq2seq
Multiturndialogzoo
Multi-turn dialogue baselines written in PyTorch
Stars: ✭ 106 (+107.84%)
Mutual labels:  chatbot, seq2seq
Deepqa
My tensorflow implementation of "A neural conversational model", a Deep learning based chatbot
Stars: ✭ 2,811 (+5411.76%)
Mutual labels:  chatbot, seq2seq
Seq2seq chatbot links
Links to the implementations of neural conversational models for different frameworks
Stars: ✭ 270 (+429.41%)
Mutual labels:  chatbot, seq2seq
Tensorflow Tutorials
Provides source code for practicing TensorFlow step by step, from the basics to applications.
Stars: ✭ 2,096 (+4009.8%)
Mutual labels:  chatbot, seq2seq
Tensorflow Ml Nlp
Natural language processing with TensorFlow and machine learning (from logistic regression to a Transformer chatbot).
Stars: ✭ 176 (+245.1%)
Mutual labels:  chatbot, seq2seq

keras-chatbot-web-api

Simple keras chat bot using seq2seq model with Flask serving web

The chatbot is built on seq2seq models and can perform inference at either the character level or the word level.

The seq2seq model is implemented as an LSTM encoder-decoder in Keras.
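
For reference, the gist of such an LSTM encoder-decoder in Keras looks roughly like the sketch below. The vocabulary sizes and hidden dimension are illustrative placeholders; the actual hyperparameters are defined in the training scripts under chatbot_train.

from keras.models import Model
from keras.layers import Input, LSTM, Dense

# Illustrative sizes only; the real values depend on the corpus and on
# whether the model works at the character or the word level.
num_encoder_tokens = 256   # input vocabulary size
num_decoder_tokens = 256   # target vocabulary size
latent_dim = 256           # LSTM hidden-state dimensionality

# Encoder: read the input sequence and keep only its final states.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)
encoder_states = [state_h, state_c]

# Decoder: generate the reply, conditioned on the encoder states.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_outputs = Dense(num_decoder_tokens, activation='softmax')(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')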

Notes

So far, the GloVe word-embedding version of the chatbot seems to give the best performance.

Usage

Run the following command to install Keras, Flask, and the other dependencies:

sudo pip install -r requirements.txt

The chatbot models are trained on the Cornell dialogs and the GuntherCox corpus data sets and are available in the "chatbot_train/models" directory. At runtime, the Flask app loads these trained models to generate the chat replies.

Training (Optional)

As the trained models are already included in the "chatbot_train/models" folder of the project, training the bot is not required. However, if you would like to tune the seq2seq parameters and retrain the models, you can run the training with the following commands:

cd chatbot_train
python cornell_char_seq2seq_train.py

The above commands will train a seq2seq model on the Cornell dialogs at the character level and store the trained model under "chatbot_train/models/cornell/char-**".
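
The exact data preparation lives in cornell_char_seq2seq_train.py; conceptually, character-level training data is one-hot encoded into encoder and decoder tensors roughly as in this toy sketch (the dialog pair below is a placeholder, not taken from the corpus):

import numpy as np

# Toy illustration of character-level encoding for seq2seq training.
input_texts = ['hi']
target_texts = ['\thello\n']          # '\t' as start-of-sequence, '\n' as end-of-sequence
chars = sorted(set(''.join(input_texts + target_texts)))
char2idx = {c: i for i, c in enumerate(chars)}

max_in = max(len(t) for t in input_texts)
max_out = max(len(t) for t in target_texts)

encoder_in = np.zeros((len(input_texts), max_in, len(chars)))
decoder_in = np.zeros((len(target_texts), max_out, len(chars)))
decoder_out = np.zeros((len(target_texts), max_out, len(chars)))   # decoder_in shifted by one step

for i, (src, tgt) in enumerate(zip(input_texts, target_texts)):
    for t, c in enumerate(src):
        encoder_in[i, t, char2idx[c]] = 1.0
    for t, c in enumerate(tgt):
        decoder_in[i, t, char2idx[c]] = 1.0
        if t > 0:
            # teacher forcing: the target at step t-1 is the character fed at step t
            decoder_out[i, t - 1, char2idx[c]] = 1.0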

If you would like to train the other models, run any of the following training scripts with the same command pattern (a sketch of GloVe embedding loading follows the list):

  • cornell_word_seq2seq_train.py: train on the Cornell dialogs at the word level (one-hot encoding)
  • cornell_word_seq2seq_glove_train.py: train on the Cornell dialogs at the word level (GloVe word2vec encoding)
  • gunthercox_char_seq2seq_train.py: train on the GuntherCox corpus at the character level
  • gunthercox_word_seq2seq_train.py: train on the GuntherCox corpus at the word level (one-hot encoding)
  • gunthercox_word_seq2seq_glove_train.py: train on the GuntherCox corpus at the word level (GloVe word2vec encoding)
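
The GloVe variants assume a pretrained embedding file is available locally; the file name below, glove.6B.100d.txt, is simply the common Stanford GloVe download and may differ from what the scripts expect. Loading it into a word-to-vector lookup is roughly:

import numpy as np

GLOVE_PATH = 'glove.6B.100d.txt'   # assumed file name; adjust to the file you downloaded
EMBEDDING_DIM = 100

# Build a word -> vector lookup table from the GloVe text file.
word2vec = {}
with open(GLOVE_PATH, encoding='utf-8') as f:
    for line in f:
        parts = line.split()
        word2vec[parts[0]] = np.asarray(parts[1:], dtype='float32')

def encode_words(words, max_len):
    # Encode a tokenized sentence as a (max_len, EMBEDDING_DIM) matrix of
    # GloVe vectors; unknown words are left as zero vectors.
    mat = np.zeros((max_len, EMBEDDING_DIM))
    for i, w in enumerate(words[:max_len]):
        if w in word2vec:
            mat[i] = word2vec[w]
    return mat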

Running the Web API Server

Go to the chatbot_web directory and run the following command:

python flaskr.py

Now navigate your browser to http://localhost:5000, where you can try out predictors built with the following trained seq2seq models:

  • Character-level seq2seq models
  • Word-level seq2seq models (One Hot Encoding)
  • Word-level seq2seq models (GloVe Encoding)

Invoking the Web API

To make the bot reply via the web API, start the Flask server and then run the following curl POST request in your terminal:

curl -H 'Content-Type: application/json' -X POST -d '{"level":"level_type", "sentence":"your_sentence_here", "dialogs":"chatbox_dataset"}' http://localhost:5000/chatbot_reply

The level_type can be "char", "word", or "word-glove"; the dialogs can be "gunthercox" or "cornell".

(Note that the same result can be obtained by running a curl GET request to http://localhost:5000/chatbot_reply?sentence=your_sentence_here&level=level_type&dialogs=chatbox_dataset)

For example, you can ask the bot to reply to the sentence "How are you?" by running the following command:

curl -H 'Content-Type: application/json' -X POST -d '{"level":"word", "sentence":"How are you?", "dialogs":"gunthercox"}' http://localhost:5000/chatbot_reply

The JSON response will be:

{
    "dialogs": "gunthercox",
    "level": "word",
    "reply": "i am doing well how about you",
    "sentence": "How are you?"
}
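
Internally, flaskr.py maps such a request onto one of the trained models. The sketch below is not the project's actual code; the predictor entries are placeholders standing in for the Keras seq2seq models loaded from chatbot_train/models at startup, but it shows the general shape of the /chatbot_reply route:

from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical registry: in the real app each entry wraps a trained Keras
# seq2seq model loaded from chatbot_train/models when the server starts.
predictors = {
    ('word', 'gunthercox'): None,   # e.g. a word-level GuntherCox bot instance
    ('char', 'cornell'): None,      # e.g. a character-level Cornell bot instance
}

@app.route('/chatbot_reply', methods=['GET', 'POST'])
def chatbot_reply():
    data = request.get_json() if request.method == 'POST' else request.args
    level = data.get('level', 'word')
    dialogs = data.get('dialogs', 'gunthercox')
    sentence = data.get('sentence', '')
    bot = predictors.get((level, dialogs))
    reply = bot.reply(sentence) if bot is not None else ''
    return jsonify({'level': level, 'dialogs': dialogs,
                    'sentence': sentence, 'reply': reply})

if __name__ == '__main__':
    app.run(port=5000)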

Here are some further example chat-reply requests using other configuration options:

curl -H 'Content-Type: application/json' -X POST -d '{"level":"char", "sentence":"How are you?", "dialogs":"gunthercox"}' http://localhost:5000/chatbot_reply
curl -H 'Content-Type: application/json' -X POST -d '{"level":"word", "sentence":"How are you?", "dialogs":"cornell"}' http://localhost:5000/chatbot_reply
curl -H 'Content-Type: application/json' -X POST -d '{"level":"char", "sentence":"How are you?", "dialogs":"cornell"}' http://localhost:5000/chatbot_reply
curl -H 'Content-Type: application/json' -X POST -d '{"level":"word-glove", "sentence":"How are you?", "dialogs":"cornell"}' http://localhost:5000/chatbot_reply
curl -H 'Content-Type: application/json' -X POST -d '{"level":"word-glove", "sentence":"How are you?", "dialogs":"gunthercox"}' http://localhost:5000/chatbot_reply
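
The same calls can of course be made from Python instead of curl, for example with the requests library (not a project dependency):

import requests

# Equivalent to the word-level GloVe curl example above.
resp = requests.post(
    'http://localhost:5000/chatbot_reply',
    json={'level': 'word-glove', 'sentence': 'How are you?', 'dialogs': 'gunthercox'},
)
print(resp.json()['reply'])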

Configure to run on GPU on Windows

  • Step 1: Change tensorflow to tensorflow-gpu in requirements.txt and install tensorflow-gpu
  • Step 2: Download and install the CUDA® Toolkit 9.0 (please note that CUDA® Toolkit 9.1 is not yet supported by TensorFlow, so you should download CUDA® Toolkit 9.0)
  • Step 3: Download and unzip cuDNN 7.0.4 for CUDA® Toolkit 9.0, and add the bin folder of the unzipped directory to the $PATH of your Windows environment (a quick check that TensorFlow can see the GPU follows this list)
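
After these steps, a quick way to confirm that TensorFlow can actually see the GPU is:

from tensorflow.python.client import device_lib

# A GPU entry such as "/device:GPU:0" should appear in this list if the
# CUDA toolkit and cuDNN are installed and on the PATH.
print(device_lib.list_local_devices())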

TODO

  • Parameter tuning: as seq2seq models usually take many hours to train on a large corpus with long sentences, I don't currently have sufficient time to do a thorough job of parameter tuning and longer training runs (the current parameters were chosen so that the bots can be trained within a few hours)
  • Better text preprocessing: one way to improve the bot's behavior is to perform more text preprocessing before training, e.g. stop-word filtering and stemming (a small sketch follows)
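
As a rough illustration of that second item, a preprocessing pass with NLTK (not currently a project dependency) could look like the following; whether it actually helps would need to be verified by retraining:

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download('stopwords', quiet=True)

STOP_WORDS = set(stopwords.words('english'))
stemmer = PorterStemmer()

def preprocess(sentence):
    # Lower-case, keep alphabetic tokens, drop English stop words, stem the rest.
    tokens = [t for t in sentence.lower().split() if t.isalpha()]
    return [stemmer.stem(t) for t in tokens if t not in STOP_WORDS]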