
imatge-upc / Sentiment 2017 Imavis

License: MIT
From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Sentiment 2017 Imavis

Gtsrb
Convolutional Neural Network for German Traffic Sign Recognition Benchmark
Stars: ✭ 65 (-23.53%)
Mutual labels:  cnn
Understaing Datasets Estimators Tfrecords
Try to use tf.estimator and tf.data together to train a cnn model.
Stars: ✭ 76 (-10.59%)
Mutual labels:  cnn
Dltk
Deep Learning Toolkit for Medical Image Analysis
Stars: ✭ 1,249 (+1369.41%)
Mutual labels:  cnn
Ensemble Methods For Image Classification
In this project, I implemented several ensemble methods (including bagging, AdaBoost, SAMME, stacking, snapshot ensemble) for a normal CNN model and Residual Neural Network.
Stars: ✭ 67 (-21.18%)
Mutual labels:  cnn
Tools To Design Or Visualize Architecture Of Neural Network
Tools to Design or Visualize Architecture of Neural Network
Stars: ✭ 1,143 (+1244.71%)
Mutual labels:  cnn
Pcn Ncnn
PCN based on ncnn framework.
Stars: ✭ 78 (-8.24%)
Mutual labels:  cnn
Deeplearning Nlp Models
A small, interpretable codebase containing the re-implementation of a few "deep" NLP models in PyTorch. Colab notebooks to run with GPUs. Models: word2vec, CNNs, transformer, gpt.
Stars: ✭ 64 (-24.71%)
Mutual labels:  cnn
Tf Mobilenet V2
Mobilenet V2(Inverted Residual) Implementation & Trained Weights Using Tensorflow
Stars: ✭ 85 (+0%)
Mutual labels:  cnn
Hand Detection.pytorch
FaceBoxes for hand detection in PyTorch
Stars: ✭ 76 (-10.59%)
Mutual labels:  cnn
Segan
A PyTorch implementation of SEGAN based on INTERSPEECH 2017 paper "SEGAN: Speech Enhancement Generative Adversarial Network"
Stars: ✭ 82 (-3.53%)
Mutual labels:  cnn
Deepzip
NN based lossless compression
Stars: ✭ 69 (-18.82%)
Mutual labels:  cnn
Char Cnn Text Classification Tensorflow
Reproduction of the paper "Character-level Convolutional Networks for Text Classification"
Stars: ✭ 72 (-15.29%)
Mutual labels:  cnn
Dispnet Flownet Docker
Dockerfile and runscripts for DispNet and FlowNet1 (estimation of disparity and optical flow)
Stars: ✭ 78 (-8.24%)
Mutual labels:  cnn
Text Analytics With Python
Learn how to process, classify, cluster, summarize, understand syntax, semantics and sentiment of text data with the power of Python! This repository contains code and datasets used in my book, "Text Analytics with Python" published by Apress/Springer.
Stars: ✭ 1,132 (+1231.76%)
Mutual labels:  sentiment
Cfsrcnn
Coarse-to-Fine CNN for Image Super-Resolution (IEEE Transactions on Multimedia,2020)
Stars: ✭ 84 (-1.18%)
Mutual labels:  cnn
Lstm Cnn classification
Stars: ✭ 64 (-24.71%)
Mutual labels:  cnn
Cnn Paper2
🎨 🎨 Deep learning tutorials on convolutional neural networks: image recognition, object detection, semantic segmentation, instance segmentation, face recognition, neural style transfer, GANs, and more 🎨🎨 https://dataxujing.github.io/CNN-paper2/
Stars: ✭ 77 (-9.41%)
Mutual labels:  cnn
Tensorflow Cifar 10
Cifar-10 CNN implementation using TensorFlow library with 20% error.
Stars: ✭ 85 (+0%)
Mutual labels:  cnn
Single Human Parsing Lip
PSPNet implemented in PyTorch for single-person human parsing task, evaluating on Look Into Person (LIP) dataset.
Stars: ✭ 84 (-1.18%)
Mutual labels:  cnn
Recursive Cnns
Implementation of my paper "Real-time Document Localization in Natural Images by Recursive Application of a CNN."
Stars: ✭ 80 (-5.88%)
Mutual labels:  cnn

From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction

Image and Vision Computing

Víctor Campos, Brendan Jou, Xavier Giro-i-Nieto

A joint collaboration between:

Barcelona Supercomputing Center (BSC), Universitat Politecnica de Catalunya (UPC), UPC Image Processing Group, Columbia University Digital Video and Multimedia Lab (DVMM)

Abstract

Visual multimedia have become an inseparable part of our digital social lives, and they often capture moments tied with deep affections. Automated visual sentiment analysis tools can provide a means of extracting the rich feelings and latent dispositions embedded in these media. In this work, we explore how Convolutional Neural Networks (CNNs), a now de facto computational machine learning tool particularly in the area of Computer Vision, can be specifically applied to the task of visual sentiment prediction. We accomplish this through fine-tuning experiments using a state-of-the-art CNN and via rigorous architecture analysis, we present several modifications that lead to accuracy improvements over prior art on a dataset of images from a popular social media platform. We additionally present visualizations of local patterns that the network learned to associate with image sentiment for insight into how visual positivity (or negativity) is perceived by the model.

Publication

Our article can be found on ScienceDirect. A preprint is publicly available on arXiv as well. You can also find it indexed on gitxiv.

Please cite with the following Bibtex code:

@article{campos2017from,
  title={From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction},
  author={Campos, Victor and Jou, Brendan and Giro-i-Nieto, Xavier},
  journal={Image and Vision Computing},
  year={2017}
}

You may also want to refer to our publication with the more human-friendly APA style:

Campos, V., Jou, B., & Giro-i-Nieto, X. (2017, February). From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction. Image and Vision Computing.

Sentiment Maps

(Figure: sentiment maps produced by the fully convolutional network.)

Data

The Twitter dataset used in our experiments can be downloaded from here.

Models

The weights for the best CNN model can be downloaded from here (217 MB). These same weights, modified to fit the fully convolutional architecture used to generate the sentiment maps, can be downloaded from here (217 MB).

The deep network was developed with Caffe, the framework by the Berkeley Vision and Learning Center (BVLC). You will need to follow these instructions to install Caffe.
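Once Caffe is installed, inference with the released weights follows the usual pycaffe pattern. The sketch below is only an orientation, not code from this repo: the file names (deploy.prototxt, twitter_finetuned.caffemodel, example.jpg), the blob names ('data', 'prob'), the mean values, and the (negative, positive) output ordering are all assumptions that must be checked against the released prototxt.

```python
import numpy as np
import caffe

caffe.set_mode_cpu()  # or caffe.set_mode_gpu()

# Placeholder file names: point them to the released deploy prototxt
# and the downloaded weights for the best CNN model.
net = caffe.Net('deploy.prototxt', 'twitter_finetuned.caffemodel', caffe.TEST)

# Standard CaffeNet-style preprocessing; the ImageNet channel means
# below are an assumption, not the values used in the paper.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))          # HWC -> CHW
transformer.set_mean('data', np.array([104., 117., 123.]))
transformer.set_raw_scale('data', 255)                # [0, 1] -> [0, 255]
transformer.set_channel_swap('data', (2, 1, 0))       # RGB -> BGR

image = caffe.io.load_image('example.jpg')
net.blobs['data'].data[...] = transformer.preprocess('data', image)
probs = net.forward()['prob'][0]   # assumed order: (negative, positive)
print('P(positive) = %.3f' % probs[1])
```

With the fully convolutional weights and their matching prototxt, the same forward pass yields a spatial grid of class probabilities instead of a single pair, which can be upsampled and overlaid on the input image to obtain the sentiment maps shown above.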

How to re-train the models?

We do not provide training code because we used Caffe's command line tool to train the models. Please see the framework's website for more details on how to download pre-trained models and fine-tune them on your data. Besides the trained models that can be used for inference, our repo provides text files with (image_id, label) tuples for all cross-validation splits in the paper. These can be used to train the model, but you will need to download the dataset from the project site first.
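If you prefer to drive fine-tuning from Python instead of the command-line tool, the equivalent pycaffe sketch is below. It is an assumption-laden outline: solver.prototxt and bvlc_reference_caffenet.caffemodel are placeholders, and the referenced train_val.prototxt must be adapted to a two-class sentiment output and pointed at the (image_id, label) split files from this repo.

```python
import caffe

caffe.set_mode_gpu()

# Placeholder: solver.prototxt should reference a train_val.prototxt whose
# data layers read the (image_id, label) split files provided in this repo
# and whose final layer has 2 outputs (negative / positive).
solver = caffe.SGDSolver('solver.prototxt')

# Initialize from an ImageNet-pretrained model before fine-tuning;
# layers whose names match keep their weights, renamed layers start fresh.
solver.net.copy_from('bvlc_reference_caffenet.caffemodel')

solver.solve()
```

This mirrors what Caffe's `caffe train` command-line tool does with the `-solver` and `-weights` flags, which is how the models in the paper were trained.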

Acknowledgments

We would especially like to thank Albert Gil and Josep Pujal from the technical support team at the Image Processing Group at UPC, and Carlos Tripiana from the technical support team at the Barcelona Supercomputing Center.

This work has been supported by grant SEV2015-0493 of the Severo Ochoa Program awarded by the Spanish Government, by project TIN2015-65316 of the Spanish Ministry of Science and Innovation, and by contract 2014-SGR-1051 of the Generalitat de Catalunya.
We gratefully acknowledge the support of NVIDIA Corporation through the BSC/UPC NVIDIA GPU Center of Excellence.
The Image Processing Group at UPC is an SGR14 Consolidated Research Group recognized and sponsored by the Catalan Government (Generalitat de Catalunya) through its AGAUR office.
This work has been developed in the framework of the project BigGraph TEC2013-43935-R, funded by the Spanish Ministerio de Economía y Competitividad and the European Regional Development Fund (ERDF).

Contact

If you have any general question about our work or code that may be of interest to other researchers, please use the public issues section of this GitHub repository.
