pochih / Video-Cap

Licence: other
🎬 Video Captioning: ICCV '15 paper implementation

Programming Languages

python: 139,335 projects (#7 most used programming language)
shell: 77,523 projects

Projects that are alternatives of or similar to Video-Cap

Seq2seq Summarizer
Pointer-generator reinforced seq2seq summarization in PyTorch
Stars: ✭ 306 (+595.45%)
Mutual labels:  seq2seq, attention-mechanism
Seq2seq chatbot new
A simple TensorFlow implementation of a seq2seq-based dialogue system, with embedding, attention, and beam search; the dataset is Cornell Movie Dialogs
Stars: ✭ 144 (+227.27%)
Mutual labels:  seq2seq, attention-mechanism
Seq2seq chatbot
A simple TensorFlow implementation of a seq2seq-based dialogue system, with embedding, attention, and beam search; the dataset is Cornell Movie Dialogs
Stars: ✭ 308 (+600%)
Mutual labels:  seq2seq, attention-mechanism
Neural sp
End-to-end ASR/LM implementation with PyTorch
Stars: ✭ 408 (+827.27%)
Mutual labels:  seq2seq, attention-mechanism
SequenceToSequence
A seq2seq-with-attention dialogue/MT model implemented in TensorFlow.
Stars: ✭ 11 (-75%)
Mutual labels:  seq2seq, attention-mechanism
MoChA-pytorch
PyTorch Implementation of "Monotonic Chunkwise Attention" (ICLR 2018)
Stars: ✭ 65 (+47.73%)
Mutual labels:  seq2seq, attention-mechanism
Sockeye
Sequence-to-sequence framework with a focus on Neural Machine Translation based on Apache MXNet
Stars: ✭ 990 (+2150%)
Mutual labels:  seq2seq, attention-mechanism
ttslearn
ttslearn: library accompanying the book "Pythonで学ぶ音声合成" (Text-to-Speech with Python)
Stars: ✭ 158 (+259.09%)
Mutual labels:  seq2seq, attention-mechanism
S2VT-seq2seq-video-captioning-attention
S2VT (seq2seq) video captioning with Bahdanau & Luong attention, implemented in TensorFlow
Stars: ✭ 18 (-59.09%)
Mutual labels:  seq2seq, attention-mechanism
Poetry Seq2seq
Chinese Poetry Generation
Stars: ✭ 159 (+261.36%)
Mutual labels:  seq2seq, attention-mechanism
Awesome Speech Recognition Speech Synthesis Papers
Automatic Speech Recognition (ASR), Speaker Verification, Speech Synthesis, Text-to-Speech (TTS), Language Modelling, Singing Voice Synthesis (SVS), Voice Conversion (VC)
Stars: ✭ 2,085 (+4638.64%)
Mutual labels:  seq2seq, attention-mechanism
minimal-nmt
A minimal NMT example to serve as a seq2seq+attention reference.
Stars: ✭ 36 (-18.18%)
Mutual labels:  seq2seq, attention-mechanism
NLP-paper
🎨 NLP (natural language processing) tutorial 🎨 https://dataxujing.github.io/NLP-paper/
Stars: ✭ 23 (-47.73%)
Mutual labels:  seq2seq, attention-mechanism
A-Persona-Based-Neural-Conversation-Model
No description or website provided.
Stars: ✭ 22 (-50%)
Mutual labels:  seq2seq, attention-mechanism
automatic-personality-prediction
[AAAI 2020] Modeling Personality with Attentive Networks and Contextual Embeddings
Stars: ✭ 43 (-2.27%)
Mutual labels:  attention-mechanism
NTUA-slp-nlp
💻Speech and Natural Language Processing (SLP & NLP) Lab Assignments for ECE NTUA
Stars: ✭ 19 (-56.82%)
Mutual labels:  attention-mechanism
DAF3D
Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound
Stars: ✭ 60 (+36.36%)
Mutual labels:  attention-mechanism
NeuralCitationNetwork
Neural Citation Network for Context-Aware Citation Recommendation (SIGIR 2017)
Stars: ✭ 24 (-45.45%)
Mutual labels:  seq2seq
FragmentVC
Any-to-any voice conversion by end-to-end extracting and fusing fine-grained voice fragments with attention
Stars: ✭ 134 (+204.55%)
Mutual labels:  attention-mechanism
RNNSearch
An implementation of attention-based neural machine translation using Pytorch
Stars: ✭ 43 (-2.27%)
Mutual labels:  seq2seq

Video-Captioning

performance

method      BLEU@1 score
seq2seq*    0.28

*seq2seq is the reproduction of the paper's model
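BLEU@1 is unigram BLEU: unigram precision scaled by a brevity penalty. As a rough illustration of how a single caption would be scored, here is a minimal, hypothetical sketch using NLTK's sentence_bleu (this is not the repo's evaluation code, and the captions below are made up, not taken from the dataset):

from nltk.translate.bleu_score import sentence_bleu

# BLEU@1: weight only unigram precision (weights = (1, 0, 0, 0));
# sentence_bleu also applies the brevity penalty.
references = [['a', 'man', 'is', 'playing', 'a', 'guitar']]  # tokenized reference caption(s)
hypothesis = ['a', 'man', 'plays', 'guitar']                 # tokenized generated caption
print(sentence_bleu(references, hypothesis, weights=(1, 0, 0, 0)))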

run the code

pip install -r requirements.txt
./run.sh data/testing_id.txt data/test_features

In detail, run.sh takes two parameters:

./run.sh <video_id_file> <path_to_video_features>
  • video_id_file

a text file listing video IDs, one ID per line

you can use data/testing_id.txt for convenience

  • path_to_video_features

a directory containing the video features; each video's feature should be a *.npy file

take a look at data/test_features

you can use the "data/test_features" directory for convenience (a short sanity-check sketch follows this list)
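As a reference, here is a minimal sketch (not part of the repo) that checks the two inputs line up before running run.sh; it assumes each feature file is named <video_id>.npy inside the features directory:

import os
import sys

import numpy as np

# usage: python check_features.py <video_id_file> <path_to_video_features>
id_file, feature_dir = sys.argv[1], sys.argv[2]

with open(id_file) as f:
    video_ids = [line.strip() for line in f if line.strip()]

for vid in video_ids:
    feat = np.load(os.path.join(feature_dir, vid + '.npy'))  # assumes <video_id>.npy naming
    print(vid, feat.shape)  # per-frame feature matrix; shape depends on the feature extractor

If every ID prints a shape without raising an error, run.sh should be able to locate the same files.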

train the code

pip install -r requirements.txt
./train.sh

test the code

./test.sh <path_to_model>
  • path_to_model

the path to the trained model

pass "models/model-2380" to use the pre-trained model, e.g. ./test.sh models/model-2380

Environment

  • OS: CentOS Linux release 7.3.1611 (Core)
  • CPU: Intel(R) Xeon(R) CPU E3-1230 v3 @ 3.30GHz
  • GPU: GeForce GTX 1070 8GB
  • Memory: 16GB DDR3
  • Python 3 (for data_parser.py) & Python 2.7 (for everything else)

Author

Po-Chih Huang / @pochih
