
XiSHEN0220 / Watermarkreco

Licence: MIT
PyTorch implementation of the paper "Large-Scale Historical Watermark Recognition: dataset and a new consistency-based approach"

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Watermarkreco

University1652 Baseline
ACM Multimedia2020 University-1652: A Multi-view Multi-source Benchmark for Drone-based Geo-localization 🚁 annotates 1652 buildings in 72 universities around the world.
Stars: ✭ 232 (+415.56%)
Mutual labels:  dataset, image-retrieval
Qri
you're invited to a data party!
Stars: ✭ 1,003 (+2128.89%)
Mutual labels:  dataset
Feversymmetric
Symmetric evaluation set based on the FEVER (fact verification) dataset
Stars: ✭ 29 (-35.56%)
Mutual labels:  dataset
Dataconfs
A list of conferences connected with data worldwide.
Stars: ✭ 36 (-20%)
Mutual labels:  dataset
Day night dataset list
Collecting a list of dataset with day and night annotations
Stars: ✭ 30 (-33.33%)
Mutual labels:  dataset
Pts
Quantized Mesh Terrain Data Generator and Server for CesiumJS Library
Stars: ✭ 36 (-20%)
Mutual labels:  dataset
Jsut Lab
HTS-style full-context labels for JSUT v1.1
Stars: ✭ 28 (-37.78%)
Mutual labels:  dataset
Mirror
Matchable Image Retrieval by Learning from Surface Reconstruction
Stars: ✭ 44 (-2.22%)
Mutual labels:  image-retrieval
Hierarchical Localization
Visual localization made easy with hloc
Stars: ✭ 997 (+2115.56%)
Mutual labels:  image-retrieval
French Sentiment Analysis Dataset
A collection of over 1.5 million tweets translated to French, with their sentiment.
Stars: ✭ 35 (-22.22%)
Mutual labels:  dataset
Multi Plier
An unsupervised transfer learning approach for rare disease transcriptomics
Stars: ✭ 33 (-26.67%)
Mutual labels:  dataset
Elastic data
Elasticsearch datasets ready for bulk loading
Stars: ✭ 30 (-33.33%)
Mutual labels:  dataset
Human3.6m downloader
Human3.6M downloader by Python
Stars: ✭ 37 (-17.78%)
Mutual labels:  dataset
Dns Lots Of Lookups
dnslol is a command line tool for performing lots of DNS lookups.
Stars: ✭ 30 (-33.33%)
Mutual labels:  dataset
Covid Ctset
Large Covid-19 CT scans dataset from paper: https://doi.org/10.1101/2020.06.08.20121541
Stars: ✭ 40 (-11.11%)
Mutual labels:  dataset
Deep learning projects
Stars: ✭ 28 (-37.78%)
Mutual labels:  dataset
Wikisql
A large annotated semantic parsing corpus for developing natural language interfaces.
Stars: ✭ 965 (+2044.44%)
Mutual labels:  dataset
Okutama Action
Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection
Stars: ✭ 36 (-20%)
Mutual labels:  dataset
Deep Fashion
Proposes a new method to retrieve clothing images
Stars: ✭ 44 (-2.22%)
Mutual labels:  image-retrieval
Letsgodataset
This repository makes the integral Let's Go dataset publicly available.
Stars: ✭ 41 (-8.89%)
Mutual labels:  dataset

WatermarkReco

PyTorch implementation of the paper "Large-Scale Historical Watermark Recognition: dataset and a new consistency-based approach"

[arXiv] [Project website] [YouTube Video (5mins)] [Slides]

teaser

This project is an extension of ArtMiner. If our project is helpful for your research, please consider citing:

@inproceedings{shen2020watermark,
  title={Large-Scale Historical Watermark Recognition: dataset and a new consistency-based approach},
  author={Shen, Xi and Pastrolin, Ilaria and Bounou, Oumayma and Gidaris, Spyros and Smith, Marc and Poncet, Olivier and Aubry, Mathieu},
  booktitle={ICPR},
  year={2020}
}

Table of Contents

Installation

Dependencies

The code is tested with PyTorch > 1.0 and Python 3.6. To install all dependencies:

bash requirement.sh
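
To check the environment afterwards, a minimal sanity check (plain PyTorch calls, nothing project-specific):

import torch
print(torch.__version__)          # expected to be > 1.0
print(torch.cuda.is_available())  # True if a CUDA GPU is visible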

Datasets

We release our watermark dataset, composed of four subsets targeting four different tasks: classification, one-shot recognition, one-shot cross-domain recognition, and large-scale one-shot cross-domain recognition.

You can run the following commands to download the dataset directly:

cd data/
bash download_data.sh ## Watermark + Shoes / Chairs datasets

Or click here (~400 MB) to download it.

A full description of the dataset is provided on our project website.
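
As a minimal sketch, one of the subsets can be read with torchvision once downloaded; the folder name below is a placeholder, see the project website for the actual layout:

from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
train_set = datasets.ImageFolder("data/A_train", transform=transform)  # placeholder path
print(len(train_set), "images in", len(train_set.classes), "classes")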

Models

To download pretrained models:

cd model/
bash download_model.sh # classification models + fine-tuned models
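
A hedged sketch of loading one of the downloaded checkpoints into a backbone; the architecture, class count and file name below are assumptions, not necessarily what the scripts ship:

import torch
import torchvision.models as models

net = models.resnet18(num_classes=100)                               # backbone and class count are assumptions
state = torch.load("model/classification.pth", map_location="cpu")  # placeholder file name
net.load_state_dict(state)
net.eval()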

Classification

Dataset: A Train

cd classification/
bash demo_train.sh # Training with Dropout Ratio 0.7
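
The dropout ratio above corresponds to a standard nn.Dropout layer in the classifier head; a minimal sketch, with feature and class dimensions that are assumptions:

import torch.nn as nn

head = nn.Sequential(
    nn.Dropout(p=0.7),    # dropout ratio used by demo_train.sh
    nn.Linear(512, 100),  # 512-d pooled features and 100 classes are assumptions
)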

Local Matching

One-shot Recognition

Dataset: A Test

cd localMatching/
bash demo_A.sh 
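
For intuition, a rough sketch of the local-matching idea: densely compare L2-normalised local features of the query and the reference, and average the best match per query location. This is only an illustration, not the paper's exact consistency-based score:

import torch
import torch.nn.functional as F

def local_similarity(feat_q, feat_r):
    # feat_q: C x Hq x Wq, feat_r: C x Hr x Wr convolutional feature maps
    q = F.normalize(feat_q.flatten(1), dim=0)  # unit-norm local descriptors, C x (Hq*Wq)
    r = F.normalize(feat_r.flatten(1), dim=0)  # C x (Hr*Wr)
    sim = q.t() @ r                            # all-pairs cosine similarities
    return sim.max(dim=1).values.mean()        # best reference match per query cell, averaged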

Feature Similarity Baselines:

cd featComparisonBaseline/
bash bestParam.sh # Run with resolution 256 * 256
bash run.sh # Run with different resolutions
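
These baselines compare whole-image features; a minimal sketch of the average-pooled variant, where the backbone and preprocessing are assumptions rather than the project's exact choices:

import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.resnet18(pretrained=True)
backbone.fc = torch.nn.Identity()  # keep the 512-d average-pooled feature
backbone.eval()

def cosine_score(img_q, img_r):
    # img_q, img_r: 1 x 3 x 256 x 256 tensors, already normalised
    with torch.no_grad():
        fq, fr = backbone(img_q), backbone(img_r)
    return F.cosine_similarity(fq, fr).item()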

One-shot Cross-domain Recognition

Dataset: B Test

cd localMatching/
bash demo_B.sh # Using drawing or synthetic as references with / without finetuned model

Dataset: Shoes / Chairs

cd localMatching/
bash demo_SBIR.sh # Evaluate on Shoes and Chairs dataset with / without finetuned model

Large-Scale One-shot Cross-domain Recognition (16,753 classes)

Dataset: B Test + Briquet

cd localMatching/
bash demo_Briquet_Baseline.sh # AvgPool, Concat and Local Sim. baselines
bash demo_Briquet_Ours.sh # Our approach with / without fine-tuning
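
At this scale every query is ranked against the full Briquet reference set; a generic top-k retrieval accuracy helper, given a precomputed similarity matrix (this is not the project's evaluation code):

import torch

def topk_accuracy(sim, query_labels, ref_labels, k=5):
    # sim: num_queries x num_references similarity matrix
    topk = sim.topk(k, dim=1).indices                              # k most similar references per query
    hits = (ref_labels[topk] == query_labels[:, None]).any(dim=1)  # correct class within top k?
    return hits.float().mean().item()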

Feature Learning

Dataset: B Train

cd featureLearning/
bash demo_B_Finetune.sh # Eta = 3 for both drawing and synthetic references

Dataset: Shoes / Chairs

cd featureLearning/
bash demo_SBIR_Finetune.sh # Eta = 4 for chairs and shoes

Visual Results

More visual results can be found in our project website.

Top-5 retrieval results on the Briquet + B Test dataset, using engravings as references:

teaser

Top-5 retrieval results on the Briquet + B Test dataset, using synthetic images as references:

teaser

Top-5 retrieval results on the Shoes / Chairs test dataset:

teaser

Acknowledgment

This work was partly supported by the ANR project EnHerit (ANR-17-CE23-0008), the PSL Filigrane pour tous project, and gifts from Adobe to Ecole des Ponts.
