stylenet: A PyTorch implementation of "StyleNet: Generating Attractive Visual Captions with Styles"
Stars: ✭ 58 (-26.58%)
ttconv: Subtitle conversion. Converts STL, SRT, TTML and SCC into TTML, WebVTT and SRT.
Stars: ✭ 88 (+11.39%)
glimpse clouds: PyTorch implementation of the paper "Glimpse Clouds: Human Activity Recognition from Unstructured Feature Points", F. Baradel, C. Wolf, J. Mille, G.W. Taylor, CVPR 2018
Stars: ✭ 30 (-62.03%)
MusDr: Evaluation metrics for machine-composed symbolic music. Paper: "The Jazz Transformer on the Front Line: Exploring the Shortcomings of AI-Composed Music through Quantitative Measures", ISMIR 2020
Stars: ✭ 38 (-51.9%)
catr: Image Captioning Using Transformer
Stars: ✭ 206 (+160.76%)
G2LTex: Code for the CVPR 2018 paper "Texture Mapping for 3D Reconstruction with RGB-D Sensor"
Stars: ✭ 104 (+31.65%)
Show Control And Tell: A Framework for Generating Controllable and Grounded Captions (CVPR 2019)
Stars: ✭ 243 (+207.59%)
IDN-pytorch: Implementation of the paper "Fast and Accurate Single Image Super-Resolution via Information Distillation Network"
Stars: ✭ 40 (-49.37%)
Dataturks: ML data annotation made super easy for teams. Just upload data, add your team, and build training/evaluation datasets in hours.
Stars: ✭ 200 (+153.16%)
Show and Tell: A Neural Image Caption Generator
Stars: ✭ 74 (-6.33%)
VoxelMorph-PyTorch: An unofficial PyTorch implementation of VoxelMorph, an unsupervised 3D deformable image registration method
Stars: ✭ 68 (-13.92%)
BUTD model: A PyTorch implementation of "Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering" for image captioning.
Stars: ✭ 28 (-64.56%)
image-captioning-DLCT: Official PyTorch implementation of the paper "Dual-Level Collaborative Transformer for Image Captioning" (AAAI 2021).
Stars: ✭ 134 (+69.62%)
Udacity: This repo includes all the projects I have finished in the Udacity Nanodegree programs.
Stars: ✭ 57 (-27.85%)
f1-communities: A novel approach to evaluating community detection algorithms against ground truth
Stars: ✭ 20 (-74.68%)
CS231n: CS231n Assignment Solutions, Spring 2020
Stars: ✭ 48 (-39.24%)
Ffsubsync: Automagically synchronize subtitles with video.
Stars: ✭ 5,167 (+6440.51%)
Show-Attend-and-Tell: A PyTorch implementation of the paper "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention"
Stars: ✭ 58 (-26.58%)
Caption generator: A modular library built on top of Keras and TensorFlow to generate a natural-language caption for any input image.
Stars: ✭ 243 (+207.59%)
Machine-Learning: Projects I do in machine learning with PyTorch, Keras, TensorFlow, scikit-learn, and Python.
Stars: ✭ 54 (-31.65%)
Sca Cnn.cvpr17: Image Caption Generation with Spatial and Channel-wise Attention
Stars: ✭ 198 (+150.63%)
gramtion: Twitter bot for generating photo descriptions (alt text)
Stars: ✭ 21 (-73.42%)
PySODEvalToolkit: A Python-based Evaluation Toolbox for Salient Object Detection and Camouflaged Object Detection
Stars: ✭ 59 (-25.32%)
gcnet: GCNet (GIF Caption Network) | Neural Network Generated GIF Captions
Stars: ✭ 14 (-82.28%)
LaBERT: A length-controllable and non-autoregressive image captioning model.
Stars: ✭ 50 (-36.71%)
CS231n: My solutions to the assignments of CS231n: Convolutional Neural Networks for Visual Recognition
Stars: ✭ 30 (-62.03%)
easse: Easier Automatic Sentence Simplification Evaluation
Stars: ✭ 109 (+37.97%)
FaceAttr: Face Super-Resolution with Supplementary Attributes (CVPR 2018)
Stars: ✭ 18 (-77.22%)
quica: quica is a tool to run inter-coder agreement pipelines in an easy and effective way. Multiple measures are run and the results are collected in a single table that can be easily exported to LaTeX.
Stars: ✭ 21 (-73.42%)
RSTNet: Captioning with Adaptive Attention on Visual and Non-Visual Words (CVPR 2021)
Stars: ✭ 71 (-10.13%)
udacity-cvnd-projects: My solutions to the projects assigned for the Udacity Computer Vision Nanodegree
Stars: ✭ 36 (-54.43%)
DVQA dataset: A bar chart question answering dataset presented at CVPR 2018
Stars: ✭ 20 (-74.68%)
Awesome-Captioning: A curated list of multimodal captioning research (including image captioning, video captioning, and text captioning)
Stars: ✭ 56 (-29.11%)
im2p: TensorFlow implementation of the paper "A Hierarchical Approach for Generating Descriptive Image Paragraphs"
Stars: ✭ 43 (-45.57%)
Image Captioning: Implementation of "X-Linear Attention Networks for Image Captioning" (CVPR 2020)
Stars: ✭ 171 (+116.46%)
Pytorch Book: PyTorch tutorials and fun projects including neural talk, neural style, poem writing, and anime generation (from the book 《深度学习框架PyTorch:入门与实战》, "Deep Learning Framework PyTorch: Introduction and Practice")
Stars: ✭ 9,546 (+11983.54%)
nervaluate: Full named-entity (i.e., not tag/token) evaluation metrics based on SemEval’13
Stars: ✭ 40 (-49.37%)
nekocap: Browser extension for creating and uploading community captions for YouTube, niconico, and other video sharing sites.
Stars: ✭ 27 (-65.82%)
Aoanet: Code for the paper "Attention on Attention for Image Captioning" (ICCV 2019)
Stars: ✭ 242 (+206.33%)
Adaptive: PyTorch implementation of "Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning"
Stars: ✭ 97 (+22.78%)
NLP-tools: Useful Python NLP tools (evaluation, GUI interface, tokenization)
Stars: ✭ 39 (-50.63%)
Image To Image Search: A reverse image search engine powered by Elasticsearch and TensorFlow
Stars: ✭ 200 (+153.16%)
MIA: Code for "Aligning Visual Regions and Textual Concepts for Semantic-Grounded Image Representations" (NeurIPS 2019)
Stars: ✭ 57 (-27.85%)
Up Down Captioner: Automatic image captioning model based on Caffe, using features from bottom-up attention.
Stars: ✭ 195 (+146.84%)
DisguiseNet: Code for "DisguiseNet: A Contrastive Approach for Disguised Face Verification in the Wild"
Stars: ✭ 20 (-74.68%)
Image-Captioining: The objective is to generate a textual description of an image based on the objects and actions it contains, using generative models so that novel sentences are created. Pipeline-type models use two separate learning processes, one for language modelling and the other for image recognition. It first identifies objects in the image and prov…
Stars: ✭ 20 (-74.68%)
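The two-stage "pipeline" design described in the entry above can be sketched in a few lines. This is a toy illustration with both stages stubbed out in plain Python (not code from the listed repo — a real system would use a CNN detector for stage 1 and a learned language model for stage 2); all function names here are hypothetical.

```python
# Toy sketch of a pipeline-type captioning model: one component
# recognizes objects in the image, a separate component turns the
# recognized objects into a sentence.

def recognize_objects(image):
    """Stage 1 (stub): image recognition. A real model would run a
    detector; here the 'image' is just a dict of annotations."""
    return image["objects"]

def generate_sentence(objects):
    """Stage 2 (stub): language modelling. Composes a sentence from
    the recognized objects via a simple template."""
    if not objects:
        return "An image."
    if len(objects) == 1:
        return f"A {objects[0]} in the picture."
    listed = ", ".join(objects[:-1]) + " and " + objects[-1]
    return f"A picture of {listed}."

def caption(image):
    # The pipeline: recognition output feeds the language stage.
    return generate_sentence(recognize_objects(image))

print(caption({"objects": ["dog", "ball"]}))
# A picture of dog and ball.
```

The point of the sketch is the separation of concerns: each stage can be trained (or swapped out) independently, which is exactly what distinguishes pipeline models from end-to-end captioners.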
ASNet: Salient Object Detection Driven by Fixation Prediction (CVPR 2018)
Stars: ✭ 41 (-48.1%)
captioning chainer: A fast implementation of Neural Image Caption in Chainer
Stars: ✭ 17 (-78.48%)
caption-core: Caption Core acts as an abstraction layer for Caption’s core functionality.
Stars: ✭ 33 (-58.23%)
Image-Caption: Using an LSTM or Transformer to solve image captioning in PyTorch
Stars: ✭ 36 (-54.43%)