
baidut / PaQ-2-PiQ

Source code for "From Patches to Pictures (PaQ-2-PiQ): Mapping the Perceptual Space of Picture Quality"


Projects that are alternatives of or similar to PaQ-2-PiQ

Spatially-Varying-Blur-Detection-python
python implementation of the paper "Spatially-Varying Blur Detection Based on Multiscale Fused and Sorted Transform Coefficients of Gradient Magnitudes" - cvpr 2017
Stars: ✭ 43 (-31.75%)
Mutual labels:  image-quality, image-quality-assessment
BVQA Benchmark
A resource list and performance benchmark for blind video quality assessment (BVQA) models on user-generated content (UGC) datasets. [IEEE TIP'2021] "UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content", Zhengzhong Tu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, Alan C. Bovik
Stars: ✭ 93 (+47.62%)
Mutual labels:  image-quality-assessment, picture-quality
CONTRIQUE
Official implementation for "Image Quality Assessment using Contrastive Learning"
Stars: ✭ 33 (-47.62%)
Mutual labels:  image-quality, image-quality-assessment
No-Reference-Image-Quality-Assessment-using-BRISQUE-Model
Implementation of the paper "No Reference Image Quality Assessment in the Spatial Domain" by A Mittal et al. in OpenCV (using both C++ and Python)
Stars: ✭ 137 (+117.46%)
Mutual labels:  image-quality, image-quality-assessment
LinearityIQA
[official] Norm-in-Norm Loss with Faster Convergence and Better Performance for Image Quality Assessment (ACM MM 2020)
Stars: ✭ 73 (+15.87%)
Mutual labels:  image-quality-assessment
RADN
[CVPRW 2021] Codes for Region-Adaptive Deformable Network for Image Quality Assessment
Stars: ✭ 49 (-22.22%)
Mutual labels:  image-quality-assessment
XCloud
Official Code for Paper <XCloud: Design and Implementation of AI Cloud Platform with RESTful API Service> (arXiv1912.10344)
Stars: ✭ 58 (-7.94%)
Mutual labels:  image-quality-assessment
WaDIQaM
[unofficial] Pytorch implementation of WaDIQaM in TIP2018, Bosse S. et al. (Deep neural networks for no-reference and full-reference image quality assessment)
Stars: ✭ 119 (+88.89%)
Mutual labels:  image-quality-assessment
Mobile Image-Video Enhancement
Sensifai image and video enhancement module on mobiles
Stars: ✭ 39 (-38.1%)
Mutual labels:  image-quality
geeSharp.js
Pan-sharpening in the Earth Engine code editor
Stars: ✭ 25 (-60.32%)
Mutual labels:  image-quality
FocusLiteNN
Official PyTorch and MATLAB implementations of our MICCAI 2020 paper "FocusLiteNN: High Efficiency Focus Quality Assessment for Digital Pathology"
Stars: ✭ 28 (-55.56%)
Mutual labels:  image-quality-assessment
image-quality-assessment-toolbox
Toolbox of commonly-used image quality assessment algorithms.
Stars: ✭ 98 (+55.56%)
Mutual labels:  image-quality-assessment
RAPIQUE
[IEEE OJSP'2021] "RAPIQUE: Rapid and Accurate Video Quality Prediction of User Generated Content", Zhengzhong Tu, Xiangxu Yu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, Alan C. Bovik
Stars: ✭ 40 (-36.51%)
Mutual labels:  image-quality-assessment
haarpsi
The Haar wavelet-based perceptual similarity index (HaarPSI) is a similarity measure for images that aims to correctly assess the perceptual similarity between two images with respect to a human viewer.
Stars: ✭ 27 (-57.14%)
Mutual labels:  image-quality-assessment
Crunch
Crunch is a tool for lossy PNG image file optimization. It combines selective bit depth, color type, and color palette reduction with zopfli DEFLATE compression algorithm encoding using the pngquant and zopflipng PNG optimization tools. This approach leads to a significant file size gain relative to lossless approaches at the expense of a relatively modest decrease in image quality (see example images below).
Stars: ✭ 3,074 (+4779.37%)
Mutual labels:  image-quality
pybrisque
A python implementation of BRISQUE Image Quality Assessment
Stars: ✭ 156 (+147.62%)
Mutual labels:  image-quality-assessment
image-quality-assessment-python
Python code to compute features of classic Image Quality Assessment models
Stars: ✭ 35 (-44.44%)
Mutual labels:  image-quality-assessment

Installation | ArXiv | Website | Setup | Document

PaQ-2-PiQ


License: CC BY-NC-SA 4.0

Code for our CVPR2020 paper "From Patches to Pictures (PaQ-2-PiQ): Mapping the Perceptual Space of Picture Quality"

We packaged our source code, built on fastai, into FastIQA. For the pure-Python version, check out this repo!

Follow this notebook to reproduce the results in the paper.

Installation

  • a Linux system is recommended
  • Python 3.6 or higher
  • for CPU-only usage, just install pytorch-cpu [detailed instructions coming soon]

PyPI Install

pip install fastiqa

By default, pip will install the latest PyTorch with the latest CUDA toolkit, as well as fastai. If your hardware doesn't support the latest CUDA toolkit, follow the instructions here to install a PyTorch build that fits your hardware.
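
For example, to install a CPU-only build before fastiqa (the version pins below are illustrative, roughly contemporary with this repo; check pytorch.org for the command matching your setup):

pip install torch==1.4.0+cpu torchvision==0.5.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
pip install fastiqa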

Bug Fix Install

If a bug fix was made in git and you can't wait until a new release, you can install the bleeding-edge version with:

pip install git+https://github.com/baidut/PaQ-2-PiQ.git

Developer Install [Recommended]

git clone https://github.com/baidut/PaQ-2-PiQ
cd PaQ-2-PiQ
pip install -r requirements.txt

Demo

See demo.ipynb.

Get Started with FastIQA 🚀

For brief examples, see the examples folder:

%matplotlib inline
from fastiqa.basics import *

# setup an experiment on gpu 0
e = IqaExp('test_different_models', gpu=0)

# pick a dataset: CLIVE data, Im2MOS format: input image, output MOS
data = Im2MOS(CLIVE, batch_size=16)

# add learners 
for model in [models.resnet18, models.resnet34, models.resnet50]:
    e += iqa_cnn_learner(data, model)

# start training all models
e.fit(10)

# validate on other databases
e.valid(on=[Im2MOS(KonIQ), Im2MOS(FLIVE)])

Sections covering advanced material are marked with *.

Prepare labels

First, we need to prepare images and labels. All label classes extend IqaLabel, which stores the default configuration. By default, we assume all databases are put under the !data folder. To pack your database:

  1. make a short name for your database, e.g. PLIVE

  2. put images under !data/PLIVE/images (or elsewhere)

  3. put the label file at !data/PLIVE/labels.csv; it should contain a column name for filenames and a column mos for scores

  4. define your database label class:

    class PLIVE(IqaLabel):
        path = '!data/PLIVE'
        csv_labels = 'labels.csv'
        
        fn_col = 'name'
        label_cols = 'mos', # don't omit the comma here
        folder = 'images'
        valid_pct = 0.2  # fraction of data held out for validation
  5. define subsets if you have any; extending PLIVE inherits all its default settings:

    class PLIVE_256x256(PLIVE):
        folder = '256x256'

    PLIVE_256x256 will load images from the 256x256 folder instead of images.

  6. You are all set!

    Some IQA databases have already been packed and are ready to use:

graph LR
linkStyle default interpolate basis

%% IqaLabel ----------------------------

IqaLabel --> KonIQ
IqaLabel --> CLIVE
IqaLabel --> Rois0123Label


subgraph labels
    KonIQ --> KonIQ_dist
    CLIVE
    Rois0123Label --> FLIVE
    FLIVE --> FLIVE_8k
    FLIVE --> FLIVE_2k
    FLIVE --> FLIVE640
end

style IqaLabel fill:yellow
style CLIVE fill:lightgreen
style KonIQ fill:lightgreen
style KonIQ_dist fill:lightgreen
style FLIVE fill:lightgreen
style FLIVE640 fill:lightgreen
style FLIVE_2k fill:lightgreen
style FLIVE_8k fill:lightgreen
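
For example, a packed label class can be used directly, mirroring the Get Started example above (pairing FLIVE_2k with Im2MOS here is our extrapolation, not a documented recipe):

# load a packed database with the image-to-MOS bunching strategy
data = Im2MOS(FLIVE_2k, batch_size=16)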

Advanced Label Format*

  • ImageRoI contains the image size information, so that we can apply RoIPool over the whole image area and obtain an image-level score prediction
  • Rois0123 contains the RoI information for patch0 (the whole image), patch1, patch2, and patch3

Analyze labels

Check out csv/vis.py; you will find some useful visualization functions. Simply type

Vis(PLIVE). and press Tab, and a smart IDE will list those functions :)
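
For instance (the method name below is hypothetical; it only illustrates the tab-completion workflow):

vis = Vis(PLIVE)
vis.hist()  # hypothetical method: plot the distribution of MOS labels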


Browse Images

You can use Browser(PLIVE) to browse images and their labels.


Bunch labels to Datasets

How do you load the data for training/testing the model? Several bunching strategies are provided:

  • Im2MOS: input images, output MOSs
  • RandCrop2MOS
  • Rois0123

Or you could define your own way of bunching data:

class MyBunch(IqaDataBunch):
    def get_data(self):
        # write your code here
        # ...
        return data # return a fastai data object

Please follow fastai's tutorial to prepare your database.
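
For orientation, here is a minimal sketch of the kind of object get_data is expected to return, written with the fastai v1 data block API; the attribute names (self.path, self.csv_labels, and so on) are assumptions borrowed from the IqaLabel fields above, not a documented interface:

from fastai.vision import ImageList, FloatList

class CsvBunch(IqaDataBunch):
    def get_data(self):
        # read filenames from the label csv, load images from `folder`,
        # hold out valid_pct of the data, and treat MOS as a float target
        return (ImageList.from_csv(self.path, self.csv_labels, folder=self.folder)
                .split_by_rand_pct(self.valid_pct)
                .label_from_df(cols=list(self.label_cols), label_cls=FloatList)
                .databunch(bs=16))  # a fastai DataBunch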

Finally, you can also reuse the way an existing model bunches the data:

# prepare the CLIVE database the way NIMA processes its data
data = NIMA.bunch(CLIVE)

After bunching it:

db = MyBunch(PLIVE)

# print summary information (e.g. number of images in the train/val split)
print(db.data)

# show a batch
db.show_batch()

Also check out other fastai functions that you can call :)

Prepare your models

You can use a pretrained CNN model: model = models.resnet18

or define your own model (in the usual PyTorch way, but extending IqaModel):

class BodyHeadModel(IqaModel):
    def __init__(self):
        super().__init__()
        self.__name__ = self.__class__.__name__
        self.body = create_body(models.resnet18)
        
        nf = num_features_model(self.body) * 2
        self.head = create_head(nf, 1)
        
    def forward(self, img):
        feat = self.body(img)
        pred = self.head(feat)
        return pred
        
    @staticmethod
    def split_on(m):
        return [[m.body], [m.head]]
    
    @staticmethod
    def bunch(label, **kwargs):
        return Im2MOS(label, **kwargs)

Here, the split_on function splits the model into two parameter groups, so that a smaller learning rate is used for the pretrained CNN backbone and a larger one for the head layers. The bunch function tells how this model bunches the labels.
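
As a rough illustration of what the split enables (passing a slice of learning rates is standard fastai; whether IqaLearner forwards it exactly like this is an assumption):

learn = IqaLearner(data, BodyHeadModel())
# fastai spreads a slice of learning rates across layer groups:
# the pretrained body gets 1e-5, the freshly initialized head gets 1e-3
learn.fit(10, slice(1e-5, 1e-3))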

Learner

Now you can train your model easily, just as in fastai :)

One thing to note: models.resnet18 and models.resnet18() are different. The former is just an architecture (a callable) that should be passed to iqa_cnn_learner, which creates the actual model object.

# iqa_cnn_learner is a subclass of fastai's cnn_learner with extra utilities
data = Im2MOS(CLIVE)
model = models.resnet18

learn = iqa_cnn_learner(data, model)
learn.fit(10)

The latter is a model object that should be passed to IqaLearner:

# IqaLearner is just like fastai's Learner but with extra utilities
data = Im2MOS(KonIQ)
model = models.resnet18()

learn = IqaLearner(data, model)
learn.fit(10)

By default, the best model and training history are stored under the dataset folder; see !data/PLIVE/models/bestmodel.pth and !data/PLIVE/history.csv.

Setup Experiments

IqaExp is a collection of learners (stored as a dictionary).
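
Since it behaves like a dictionary, an individual learner can presumably be retrieved by key; the key scheme below (the model's name) is an assumption:

e = IqaExp('test_different_models', gpu=0)
e += iqa_cnn_learner(Im2MOS(CLIVE), models.resnet18)
learn = e['resnet18']  # hypothetical key: the model's name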

Directory structure would be:

  • data:
    • CLIVE
    • KonIQ
  • experiment_name
    • model_name: one model, one folder, easy to manage
      • train@data_name
        • models
          • bestmodel.pth: parameters of current best model
        • history.csv: model training history
        • valid@data_name.csv: validation outputs
  • fastiqa: the library
  • your_code.py
  • your_notebook.ipynb

Citation

If you use this code for your research, please cite our paper:

@article{ying2019patches,
  title={From Patches to Pictures (PaQ-2-PiQ): Mapping the Perceptual Space of Picture Quality},
  author={Ying, Zhenqiang and Niu, Haoran and Gupta, Praful and Mahajan, Dhruv and Ghadiyaram, Deepti and Bovik, Alan},
  journal={arXiv preprint arXiv:1912.10088},
  year={2019}
}

Acknowledgments

Our code is built on fast.ai.

