
tylin / Coco Caption

Licence: other

Projects that are alternatives to or similar to Coco Caption

Stockpriceprediction
Stock Price Prediction using Machine Learning Techniques
Stars: ✭ 700 (-1.69%)
Mutual labels:  jupyter-notebook
Machine Learning
A repository created to help those who are new to machine learning or preparing a machine learning study group.
Stars: ✭ 705 (-0.98%)
Mutual labels:  jupyter-notebook
Mlops
MLOps examples
Stars: ✭ 707 (-0.7%)
Mutual labels:  jupyter-notebook
Network Analysis Made Simple
An introduction to network analysis and applied graph theory using Python and NetworkX
Stars: ✭ 700 (-1.69%)
Mutual labels:  jupyter-notebook
Polyrnn Pp
Inference Code for Polygon-RNN++ (CVPR 2018)
Stars: ✭ 704 (-1.12%)
Mutual labels:  jupyter-notebook
Fewshot Face Translation Gan
Generative adversarial networks integrating modules from FUNIT and SPADE for face-swapping.
Stars: ✭ 705 (-0.98%)
Mutual labels:  jupyter-notebook
Madewithml
Learn how to responsibly deliver value with ML.
Stars: ✭ 29,253 (+4008.57%)
Mutual labels:  jupyter-notebook
Soft Nms
Object Detection
Stars: ✭ 708 (-0.56%)
Mutual labels:  jupyter-notebook
Cookbook 2nd
IPython Cookbook, Second Edition, by Cyrille Rossant, Packt Publishing 2018
Stars: ✭ 704 (-1.12%)
Mutual labels:  jupyter-notebook
Elasticsearch Spark Recommender
Use Jupyter Notebooks to demonstrate how to build a Recommender with Apache Spark & Elasticsearch
Stars: ✭ 707 (-0.7%)
Mutual labels:  jupyter-notebook
Panama Papers Dataset 2016
Structured data about Panama papers collected from official ICIJ website
Stars: ✭ 701 (-1.54%)
Mutual labels:  jupyter-notebook
Intro To Dl
Resources for "Introduction to Deep Learning" course.
Stars: ✭ 703 (-1.26%)
Mutual labels:  jupyter-notebook
Lolviz
A simple Python data-structure visualization tool for lists of lists, lists, dictionaries; primarily for use in Jupyter notebooks / presentations
Stars: ✭ 706 (-0.84%)
Mutual labels:  jupyter-notebook
Caffenet Benchmark
Evaluation of the CNN design choices performance on ImageNet-2012.
Stars: ✭ 700 (-1.69%)
Mutual labels:  jupyter-notebook
Fecon235
Notebooks for financial economics. Keywords: Jupyter notebook pandas Federal Reserve FRED Ferbus GDP CPI PCE inflation unemployment wage income debt Case-Shiller housing asset portfolio equities SPX bonds TIPS rates currency FX euro EUR USD JPY yen XAU gold Brent WTI oil Holt-Winters time-series forecasting statistics econometrics
Stars: ✭ 708 (-0.56%)
Mutual labels:  jupyter-notebook
Analytics Handbook
Getting started with soccer analytics
Stars: ✭ 699 (-1.83%)
Mutual labels:  jupyter-notebook
Data hacking
Data Hacking Project
Stars: ✭ 705 (-0.98%)
Mutual labels:  jupyter-notebook
Kaggle Titanic
A tutorial for Kaggle's Titanic: Machine Learning from Disaster competition. Demonstrates basic data munging, analysis, and visualization techniques. Shows examples of supervised machine learning techniques.
Stars: ✭ 709 (-0.42%)
Mutual labels:  jupyter-notebook
Notes
Notes On Using Data Science & Artificial Intelligence To Fight For Something That Matters.
Stars: ✭ 710 (-0.28%)
Mutual labels:  jupyter-notebook
Pytorch Wavenet
An implementation of WaveNet with fast generation
Stars: ✭ 706 (-0.84%)
Mutual labels:  jupyter-notebook

Microsoft COCO Caption Evaluation

Evaluation code for MS COCO caption generation.

Requirements

  • java 1.8.0
  • python 2.7

Files

./

  • cocoEvalCapDemo.py (demo script)

./annotation

  • captions_val2014.json (MS COCO 2014 caption validation set)
  • Visit MS COCO download page for more details.

./results

  • captions_val2014_fakecap_results.json (an example of fake results for running the demo; see the format sketch after this list)
  • Visit MS COCO format page for more details.
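
For reference, a caption results file is a plain JSON array with one entry per image, each carrying an image_id and a caption. The short sketch below writes such a file; the image IDs, captions, and output filename are made up for illustration.

    import json

    # Each entry pairs a COCO image id with a single generated caption.
    # The ids and captions here are illustrative placeholders.
    results = [
        {"image_id": 404464, "caption": "a black and white photo of a street"},
        {"image_id": 380932, "caption": "a group of people standing on a beach"},
    ]

    # Hypothetical output path; any file in this results format works.
    with open("results/my_captions_val2014_results.json", "w") as f:
        json.dump(results, f)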

./pycocoevalcap: The folder where all evaluation code is stored.

  • evals.py: Contains the COCOEvalCap class used to evaluate results on COCO (see the usage sketch after this list).
  • tokenizer: Python wrapper of the Stanford CoreNLP PTBTokenizer
  • bleu: BLEU evaluation code
  • meteor: METEOR evaluation code
  • rouge: ROUGE-L evaluation code
  • cider: CIDEr evaluation code
  • spice: SPICE evaluation code
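
The snippet below is a minimal sketch of an evaluation run, modeled on cocoEvalCapDemo.py. It assumes the pycocotools COCO API is installed and that COCOEvalCap can be imported from the evaluation module listed above (the exact import path may differ between versions); the file paths follow the layout described in this section.

    from pycocotools.coco import COCO
    from pycocoevalcap.eval import COCOEvalCap  # module name may differ (see evals.py above)

    # Ground-truth captions and the generated captions to score.
    coco = COCO('annotation/captions_val2014.json')
    coco_res = coco.loadRes('results/captions_val2014_fakecap_results.json')

    coco_eval = COCOEvalCap(coco, coco_res)
    # Restrict evaluation to the images that appear in the results file.
    coco_eval.params['image_id'] = coco_res.getImgIds()
    coco_eval.evaluate()

    # Overall scores: BLEU, METEOR, ROUGE-L, CIDEr, SPICE.
    for metric, score in coco_eval.eval.items():
        print('%s: %.3f' % (metric, score))

Per-image scores are also available after evaluation (e.g. coco_eval.evalImgs in the demo), which is useful for inspecting the best- and worst-scored captions.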

Setup

  • You will first need to download the Stanford CoreNLP 3.6.0 code and models for use by SPICE. To do this, run: ./get_stanford_models.sh
  • Note: SPICE will try to create a cache of parsed sentences in ./pycocoevalcap/spice/cache/. This dramatically speeds up repeated evaluations. The cache directory can be moved by setting 'CACHE_DIR' in the SPICE module under ./pycocoevalcap/spice; caching can be turned off in the same module by removing the '-cache' argument to 'spice_cmd'.

References

Developers

  • Xinlei Chen (CMU)
  • Hao Fang (University of Washington)
  • Tsung-Yi Lin (Cornell)
  • Ramakrishna Vedantam (Virginia Tech)

Acknowledgement

  • David Chiang (University of Notre Dame)
  • Michael Denkowski (CMU)
  • Alexander Rush (Harvard University)