
Dense Extreme Inception Network: Towards a Robust CNN Model for Edge Detection (DexiNed)

This work presents a new Convolutional Neural Network (CNN) architecture for edge detection. Unlike state-of-the-art CNN-based edge detectors, this model has a single training stage, yet it still outperforms those models on the edge detection datasets. Moreover, DexiNed does not need pre-trained weights and is trained from scratch with less parameter tuning. To learn more about DexiNed, read the first version of the paper on arXiv; the final version will be uploaded after the camera-ready deadline of WACV 2020.

PyTorch

To test DexiNed in PyTorch, please refer to the DexiNed-Pytorch directory.

TensorFlow

Before starting to use this model, there are some requirements to fulfill.

Requirements
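As a minimal environment sketch, assuming the TensorFlow 1.x era of this implementation plus OpenCV and Matplotlib for image handling and plotting (check the repository for the exact requirement list and pinned versions):

pip install "tensorflow<2" opencv-python matplotlib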

Once the packages are installed, clone this repo as follows:

git clone https://github.com/xavysp/DexiNed.git
cd DexiNed

Project Architecture

├── data                        # sample images for testing
|   ├── lena_std.tif            # sample 1
|   └── stonehengeuk.jpg        # sample 2
├── figs                        # images used in README.md
|   └── DexiNed_banner.png      # DexiNed banner
├── models                      # TensorFlow model file
|   └── dexined.py              # DexiNed class
├── utls                        # a series of tools used in this repo
|   ├── dataset_manager.py      # tools for dataset managing
|   ├── losses.py               # loss function used to train DexiNed
|   └── utls.py                 # miscellaneous tool functions
├── run_model.py                # the main Python file with main functions and parameter settings
├── test.py                     # the script to run the test experiment
└── train.py                    # the script to run the train experiment

As described above, run_model.py holds the parameter settings; whether DexiNed is used for training or testing, the parameters need to be set before running either process. As highlighted, DexiNed is trained just once, on our proposed dataset BIPED, so the default setting of --train_dataset is BIPED; in the testing stage (--test_dataset), however, any dataset can be used, even CLASSIC, which is just an arbitrary image downloaded from the internet. To evaluate single images or CLASSIC, --use_dataset has to be set to False. Whenever a dataset is used to train or test DexiNed, the list arguments have to point to the training or testing file lists (--train_list, --test_list). Pay attention to the parameter settings and change whatever you need, such as --image_width or --image_height. To test the Lena image I set 512x512 (see the "Test" section).

parser.add_argument('--train_dataset', default='BIPED', choices=['BIPED','BSDS'])
parser.add_argument('--test_dataset', default='CLASSIC', choices=['BIPED', 'BSDS','MULTICUE','NYUD','PASCAL','CID'])
parser.add_argument('--dataset_dir',default=None,type=str)
parser.add_argument('--dataset_augmented', default=True,type=bool)
parser.add_argument('--train_list',default='train_rgb.lst', type=str)
parser.add_argument('--test_list', default='test_pair.lst',type=str)  
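For example, to test a single arbitrary image with the CLASSIC setting described above, a hedged invocation might look like the following (flag names follow the argparse definitions; --use_dataset is mentioned above but not shown in the snippet):

python run_model.py --model_state=test --test_dataset=CLASSIC --use_dataset=False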

Test

Before testing the DexiNed model, it is necessary to download the checkpoint (Checkpoint from Drive) and save those files into the DexiNed folder under checkpoints/DXN_BIPED/train/ (put the checkpoints from Drive there), then run as follows:

python run_model.py --image_width=512 --image_height=512
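If the expected layout is unclear, here is a hedged sketch of placing the downloaded files (assuming train_2.zip holds the checkpoint you want, as explained below; depending on how the archive was packed you may need to flatten a nested folder):

mkdir -p checkpoints/DXN_BIPED/train
unzip train_2.zip -d checkpoints/DXN_BIPED/train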

Make sure that in run_model.py the test setting is: parser.add_argument('--model_state', default='test', choices=['train','test','None']). DexiNed downsamples the input image by a factor of up to 16, so make sure the image width and height are multiples of 16, e.g., 512 or 960. In the Checkpoint from Drive you will get data_list.zip, train_1.zip, and train_2.zip. train_2 contains our latest checkpoint, trained with the updated BIPED; train_1 has the checkpoints behind the results presented in WACV'20; and data_list has the list of MDBD dataset images used for testing. If you choose another random list of images you will probably get a better or worse result, but I do not think that would be a fair comparison.
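If your test images do not already satisfy the multiple-of-16 constraint, here is a minimal resizing sketch; the helper below is hypothetical (not part of this repo) and assumes OpenCV is installed:

import cv2

def resize_to_multiple_of_16(image):
    # DexiNed downsamples by a factor of up to 16, so both spatial
    # dimensions must be divisible by 16 (hypothetical helper).
    h, w = image.shape[:2]
    new_h = max(16, (h // 16) * 16)
    new_w = max(16, (w // 16) * 16)
    return cv2.resize(image, (new_w, new_h))  # cv2.resize takes (width, height)

img = cv2.imread('data/lena_std.tif')  # sample image shipped in data/
print(resize_to_multiple_of_16(img).shape)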

Train

python run_model.py 

Make sure that in run_model.py the train setting is: parser.add_argument('--model_state', default='train', choices=['train','test','None'])
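To train on BIPED with a custom dataset location, a hedged invocation could look like the following (the dataset path is a placeholder; flag names follow the argparse definitions shown earlier):

python run_model.py --model_state=train --train_dataset=BIPED --dataset_dir=/path/to/datasets --train_list=train_rgb.lst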

Datasets

Dataset used for Training

BIPED (Barcelona Images for Perceptual Edge Detection): this dataset was collected and annotated at the edge level for this work. See more details and download it at: Option1, Option2 (Kaggle). The BIPED dataset has been updated with more annotations and a few corrected mistakes, so those links hold the renewed version of BIPED; if you want the older version, you may ask us by email. The latest performance (table below) will be updated soon.

Datasets used for Testing

Edge detection datasets

Non-edge detection datasets

Performance

The results below are from the last version of BIPED. After WACV'20 the BIPED images were checked again and annotations were added, and all of the models below were trained again.

Methods           | ODS  | OIS  | AP
SED (before)      | .717 | .731 | .756
SED               | .000 | .000 | .000
HED (before)      | .823 | .847 | .869
HED               | .000 | .000 | .000
RCF (before)      | .843 | .859 | .882
RCF               | .000 | .000 | .000
BDCN (before)     | .839 | .854 | .887
BDCN              | .000 | .000 | .000
DexiNed (WACV'20) | .859 | .867 | .905
DexiNed (ours)    | .000 | .000 | .000

Evaluation performed on the BIPED dataset. We will update the results soon.

Citation

If you like DexiNed, why not star the project on GitHub!

Please cite our paper if you find it helpful for your academic/scientific publications:

@InProceedings{soria2020dexined,
    title={Dense Extreme Inception Network: Towards a Robust CNN Model for Edge Detection},
    author={Xavier Soria and Edgar Riba and Angel Sappa},
    booktitle={The IEEE Winter Conference on Applications of Computer Vision (WACV '20)},
    year={2020}
}