DengPingFan / S-measure

License: BSD-3-Clause
Structure-measure: A New Way to Evaluate Foreground Maps, IJCV 2021 (ICCV 2017 Spotlight)

Programming Languages

Matlab

Projects that are alternatives of or similar to S-measure

PySODEvalToolkit
PySODEvalToolkit: A Python-based Evaluation Toolbox for Salient Object Detection and Camouflaged Object Detection
Stars: ✭ 59 (+37.21%)
Mutual labels:  evaluation, saliency
table-evaluator
Evaluate real and synthetic datasets with each other
Stars: ✭ 44 (+2.33%)
Mutual labels:  evaluation
vcf stuff
📊 Evaluating, filtering, comparing, and visualising VCFs
Stars: ✭ 19 (-55.81%)
Mutual labels:  evaluation
audio degrader
Audio degradation toolbox in python, with a command-line tool. It is useful to apply controlled degradations to audio: e.g. data augmentation, evaluation in noisy conditions, etc.
Stars: ✭ 40 (-6.98%)
Mutual labels:  evaluation
verif
Software for verifying weather forecasts
Stars: ✭ 70 (+62.79%)
Mutual labels:  evaluation
GBVS360-BMS360-ProSal
Extending existing saliency prediction models from 2D to omnidirectional images
Stars: ✭ 25 (-41.86%)
Mutual labels:  saliency
PTXQC
A Quality Control (QC) pipeline for Proteomics (PTX) results generated by MaxQuant
Stars: ✭ 34 (-20.93%)
Mutual labels:  metric
pdq evaluation
Evaluation code for using probabilistic detection quality (PDQ) measure for probabilistic object detection tasks. Currently supports COCO and robotic vision challenge (RVC) data.
Stars: ✭ 34 (-20.93%)
Mutual labels:  evaluation
DINet
A dilated inception network for visual saliency prediction (TMM 2019)
Stars: ✭ 25 (-41.86%)
Mutual labels:  saliency
word-benchmarks
Benchmarks for intrinsic word embeddings evaluation.
Stars: ✭ 45 (+4.65%)
Mutual labels:  evaluation
cyberrating
🚥 S&P of Blockchains
Stars: ✭ 13 (-69.77%)
Mutual labels:  evaluation
DiscEval
Discourse Based Evaluation of Language Understanding
Stars: ✭ 18 (-58.14%)
Mutual labels:  evaluation
texpr
Boolean evaluation and digital calculation expression engine for GO
Stars: ✭ 18 (-58.14%)
Mutual labels:  evaluation
evaluator
No description or website provided.
Stars: ✭ 35 (-18.6%)
Mutual labels:  evaluation
thundra-agent-python
Thundra Lambda Python Agent
Stars: ✭ 36 (-16.28%)
Mutual labels:  metric
speech-recognition-evaluation
Evaluate results from ASR/Speech-to-Text quickly
Stars: ✭ 25 (-41.86%)
Mutual labels:  evaluation
ICON
(TPAMI2022) Salient Object Detection via Integrity Learning.
Stars: ✭ 125 (+190.7%)
Mutual labels:  saliency
datasets
🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
Stars: ✭ 13,870 (+32155.81%)
Mutual labels:  evaluation
Saliency-Objectness
No description or website provided.
Stars: ✭ 25 (-41.86%)
Mutual labels:  saliency
go-eek
Blazingly fast and safe Go evaluation library, created on top of Go pkg/plugin package
Stars: ✭ 37 (-13.95%)
Mutual labels:  evaluation

S-measure: A New Way to Evaluate Foreground Maps (IJCV 2021)


Publication

Structure-measure: A New Way to Evaluate Foreground Maps. IJCV 2021

[pdf][Chinese translation]

Usage

Requirement:

1. Matlab

Matlab Example

You can simply run demo1.m or demo2.m to get the evaluation results:
running demo1.m writes each map's score to the result folder, while running demo2.m prints the average S-measure score in the Matlab command window.

Python version:

https://github.com/zzhanghub/eval-co-sod
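The measure itself combines an object-aware term and a region-aware term, S = α·S_object + (1 − α)·S_region with α = 0.5. The sketch below is a simplified NumPy re-implementation of those definitions for illustration only; the helper names and edge-case handling are our own, and it is not guaranteed to match the reference Matlab scores exactly:

```python
import numpy as np

EPS = 1e-8

def _object_score(x):
    # Similarity of a region's predicted values to the ideal value 1
    # (foreground uses pred as-is, background uses 1 - pred).
    mean, std = x.mean(), x.std()
    return 2.0 * mean / (mean ** 2 + 1.0 + std + EPS)

def _s_object(pred, gt):
    # Object-aware term: foreground/background similarity,
    # weighted by the foreground ratio mu.
    fg = _object_score(pred[gt]) if gt.any() else 0.0
    bg = _object_score(1.0 - pred[~gt]) if (~gt).any() else 0.0
    mu = gt.mean()
    return mu * fg + (1.0 - mu) * bg

def _ssim(pred, gt):
    # Structural similarity between one block of pred and gt.
    n = pred.size
    x, y = pred.mean(), gt.mean()
    sx = ((pred - x) ** 2).sum() / (n - 1 + EPS)
    sy = ((gt - y) ** 2).sum() / (n - 1 + EPS)
    sxy = ((pred - x) * (gt - y)).sum() / (n - 1 + EPS)
    a = 4.0 * x * y * sxy
    b = (x ** 2 + y ** 2) * (sx + sy)
    if a != 0.0:
        return a / (b + EPS)
    return 1.0 if b == 0.0 else 0.0

def _s_region(pred, gt):
    # Region-aware term: split both maps into four blocks around the
    # ground-truth centroid and average block-wise SSIM by block area.
    h, w = gt.shape
    if gt.any():
        ys, xs = np.where(gt)
        cy, cx = int(round(ys.mean())), int(round(xs.mean()))
    else:
        cy, cx = h // 2, w // 2
    score = 0.0
    for rows in (slice(0, cy), slice(cy, h)):
        for cols in (slice(0, cx), slice(cx, w)):
            p, g = pred[rows, cols], gt[rows, cols].astype(float)
            if p.size:
                score += (p.size / gt.size) * _ssim(p, g)
    return score

def s_measure(pred, gt, alpha=0.5):
    """S = alpha * S_object + (1 - alpha) * S_region, alpha = 0.5."""
    pred = pred.astype(float)
    gt = gt.astype(bool)
    if not gt.any():   # all-background ground truth
        return 1.0 - float(pred.mean())
    if gt.all():       # all-foreground ground truth
        return float(pred.mean())
    s = alpha * _s_object(pred, gt) + (1.0 - alpha) * _s_region(pred, gt)
    return max(float(s), 0.0)
```

For actual benchmarking, prefer the reference Matlab scripts above or the linked Python toolbox; this sketch only illustrates the structure of the measure.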

If you find our code useful, please cite our paper:

@article{Cheng2021sMeasure,
  title={Structure-measure: A New Way to Evaluate Foreground Maps},
  author={Ming-Ming Cheng and Deng-Ping Fan},
  journal={International Journal of Computer Vision (IJCV)},
  year={2021},
  volume={129},
  number={9},
  pages={2622--2638},
  doi={10.1007/s11263-021-01490-8},
}