
Div99 / Image-Captioning

License: MIT
Image Captioning with Keras

Programming Languages

  • Jupyter Notebook
  • Python

Projects that are alternatives of or similar to Image-Captioning

stylenet
A pytorch implemention of "StyleNet: Generating Attractive Visual Captions with Styles"
Stars: ✭ 58 (-3.33%)
Mutual labels:  caption, image-captioning
Image Caption Generator
A neural network to generate captions for an image using CNN and RNN with BEAM Search.
Stars: ✭ 126 (+110%)
Mutual labels:  attention, image-captioning
Sightseq
Computer vision tools for fairseq, containing PyTorch implementation of text recognition and object detection
Stars: ✭ 116 (+93.33%)
Mutual labels:  attention, image-captioning
cvpr18-caption-eval
Learning to Evaluate Image Captioning. CVPR 2018
Stars: ✭ 79 (+31.67%)
Mutual labels:  caption, image-captioning
gramtion
Twitter bot for generating photo descriptions (alt text)
Stars: ✭ 21 (-65%)
Mutual labels:  image-captioning
gnn-lspe
Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022
Stars: ✭ 165 (+175%)
Mutual labels:  attention
gqa-node-properties
Recalling node properties from a knowledge graph
Stars: ✭ 19 (-68.33%)
Mutual labels:  attention
jeelizGlanceTracker
JavaScript/WebGL lib: detect if the user is looking at the screen or not from the webcam video feed. Lightweight and robust to all lighting conditions. Great for play/pause videos if the user is looking or not, or for person detection. Link to live demo.
Stars: ✭ 68 (+13.33%)
Mutual labels:  attention
visualization
a collection of visualization function
Stars: ✭ 189 (+215%)
Mutual labels:  attention
lambda.pytorch
PyTorch implementation of Lambda Network and pretrained Lambda-ResNet
Stars: ✭ 54 (-10%)
Mutual labels:  attention
image-recognition
Tool recognition using deep learning methods.
Stars: ✭ 19 (-68.33%)
Mutual labels:  attention
pix2code-pytorch
PyTorch implementation of pix2code. 🔥
Stars: ✭ 24 (-60%)
Mutual labels:  image-captioning
MIA
Code for "Aligning Visual Regions and Textual Concepts for Semantic-Grounded Image Representations" (NeurIPS 2019)
Stars: ✭ 57 (-5%)
Mutual labels:  image-captioning
datastories-semeval2017-task6
Deep-learning model presented in "DataStories at SemEval-2017 Task 6: Siamese LSTM with Attention for Humorous Text Comparison".
Stars: ✭ 20 (-66.67%)
Mutual labels:  attention
iPerceive
Applying Common-Sense Reasoning to Multi-Modal Dense Video Captioning and Video Question Answering | Python3 | PyTorch | CNNs | Causality | Reasoning | LSTMs | Transformers | Multi-Head Self Attention | Published in IEEE Winter Conference on Applications of Computer Vision (WACV) 2021
Stars: ✭ 52 (-13.33%)
Mutual labels:  attention
torch-multi-head-attention
Multi-head attention in PyTorch
Stars: ✭ 93 (+55%)
Mutual labels:  attention
stagin
STAGIN: Spatio-Temporal Attention Graph Isomorphism Network
Stars: ✭ 34 (-43.33%)
Mutual labels:  attention
Relation-Extraction-Transformer
NLP: Relation extraction with position-aware self-attention transformer
Stars: ✭ 63 (+5%)
Mutual labels:  attention
LNSwipeCell
A friendly, easy-to-integrate left-swipe editing feature for table view cells!
Stars: ✭ 16 (-73.33%)
Mutual labels:  attention
Image-Captioining
Generates a textual description of an image based on the objects and actions it contains, using generative models to create novel sentences. Pipeline-style models use two separate learning processes, one for language modelling and one for image recognition: they first identify objects in the image and prov…
Stars: ✭ 20 (-66.67%)
Mutual labels:  image-captioning

Image Captioning (Keras)

Image Captioning System that generates natural language captions for any image.

The architecture of the model is inspired by "Show and Tell" [1] by Vinyals et al. The model is built with the Keras library.

The project also contains code for an Attention LSTM layer, although it is not integrated into the model.
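A Show-and-Tell style encoder-decoder can be sketched in Keras roughly as below. The layer sizes (4096-d VGG16 image features, 256-d embeddings, a vocabulary of 1000 words, captions padded to 34 tokens) are illustrative assumptions, not the repository's actual configuration.

```python
from tensorflow.keras.layers import Input, Dense, Embedding, LSTM, Dropout, add
from tensorflow.keras.models import Model

vocab_size = 1000   # assumed vocabulary size
max_len = 34        # assumed maximum caption length

# Image branch: pre-extracted CNN features projected to the decoder size.
img_in = Input(shape=(4096,))
img_feat = Dropout(0.5)(img_in)
img_feat = Dense(256, activation='relu')(img_feat)

# Caption branch: embed the partial caption and encode it with an LSTM.
cap_in = Input(shape=(max_len,))
cap_emb = Embedding(vocab_size, 256, mask_zero=True)(cap_in)
cap_emb = Dropout(0.5)(cap_emb)
cap_feat = LSTM(256)(cap_emb)

# Merge both branches and predict a distribution over the next word.
merged = add([img_feat, cap_feat])
hidden = Dense(256, activation='relu')(merged)
out = Dense(vocab_size, activation='softmax')(hidden)

model = Model(inputs=[img_in, cap_in], outputs=out)
model.compile(loss='categorical_crossentropy', optimizer='adam')
```

At training time the model sees (image features, caption prefix) pairs and is supervised with the next word; at inference time the caption is grown one word at a time.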

Dataset

The model is trained on the Flickr8k dataset, although it can also be trained on larger datasets such as Flickr30k or MS COCO.

Model



Performance

The model has been trained for 20 epochs on 6000 training samples from the Flickr8k dataset. It achieves a BLEU-1 score of ~0.59 on 1000 test samples.
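For reference, BLEU-1 is clipped unigram precision scaled by a brevity penalty. The sketch below illustrates the metric itself; it is not the repository's eval_model.py implementation.

```python
from collections import Counter
import math

def bleu1(candidate, references):
    """Sentence-level BLEU-1: clipped unigram precision times a brevity penalty."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    # Clip each candidate unigram count by its maximum count in any reference.
    max_ref = Counter()
    for ref in refs:
        for w, c in Counter(ref).items():
            max_ref[w] = max(max_ref[w], c)
    clipped = sum(min(c, max_ref[w]) for w, c in Counter(cand).items())
    precision = clipped / len(cand)
    # Brevity penalty uses the reference length closest to the candidate's.
    ref_len = min((abs(len(r) - len(cand)), len(r)) for r in refs)[1]
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / len(cand))
    return bp * precision

print(round(bleu1("a dog runs through water",
                  ["a dog is running through the water"]), 3))  # → 0.536
```

Short candidates that merely copy common words are penalised by the brevity penalty, which is why the metric is reported on full generated captions.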


Requirements

  • tensorflow
  • keras
  • numpy
  • h5py
  • progressbar2

These requirements can be installed with: pip install -r requirements.txt

Scripts

  • caption_generator.py: The base script containing functions for model creation, batch data generation, etc.
  • prepare_data.py: Extracts features from images using the VGG16 ImageNet model and prepares the annotations for training. This script must be modified to use a different dataset.
  • train_model.py: Module for training the caption generator.
  • eval_model.py: Module for evaluating and testing the performance of the caption generator; currently it implements the BLEU metric.
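The feature-extraction step in prepare_data.py can be sketched as follows: take VGG16 with its classification head and keep the 4096-d fc2 activations as the image representation. The details (layer name, dummy input) are assumptions for illustration; in practice you would pass weights='imagenet' and real photos.

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model

# Build VGG16 and truncate it at the fc2 layer, whose 4096-d output
# serves as the image feature vector. weights=None here only avoids
# downloading the ImageNet weights in this sketch.
base = VGG16(weights=None)
extractor = Model(inputs=base.input, outputs=base.get_layer('fc2').output)

# A dummy 224x224 RGB image stands in for a real photo.
img = np.random.randint(0, 255, (1, 224, 224, 3)).astype('float32')
features = extractor.predict(preprocess_input(img), verbose=0)
print(features.shape)  # (1, 4096)
```

These per-image feature vectors are typically cached to disk once, so the CNN does not need to be re-run during caption-model training.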

Usage

Pre-trained model

  1. Download the pre-trained weights from the releases page
  2. Move model_weight.h5 to the models directory
  3. Prepare the data by running: python prepare_data.py
  4. For inference on an example image, run: python eval_model.py -i [img-path]

From scratch

After the requirements have been installed, the process from training to testing is straightforward. Run, in order:

  1. python prepare_data.py
  2. python train_model.py
  3. python eval_model.py

After training, evaluation on an example image can be done by running:
python eval_model.py -m [model-checkpoint] -i [img-path]

Results

Generated captions for sample images:

  • A white and black dog is running through the water
  • man is skiing on snowy hill
  • man in red shirt is walking down the street

References

[1] Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan. Show and Tell: A Neural Image Caption Generator

[2] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, Yoshua Bengio. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention


License

MIT License. See LICENSE file for details.
