li-plus / DSNet

Licence: MIT License
DSNet: A Flexible Detect-to-Summarize Network for Video Summarization

Programming Languages

python
shell

Projects that are alternatives of or similar to DSNet

attack-navigator-docker
A simple Docker container that serves the MITRE ATT&CK Navigator web app
Stars: ✭ 20 (-82.76%)
Mutual labels:  detection
AIODrive
Official Python/PyTorch Implementation for "All-In-One Drive: A Large-Scale Comprehensive Perception Dataset with High-Density Long-Range Point Clouds"
Stars: ✭ 32 (-72.41%)
Mutual labels:  detection
person-detection
TensorRT person tracking RFBNet300
Stars: ✭ 30 (-74.14%)
Mutual labels:  detection
odam
ODAM - Object detection and Monitoring
Stars: ✭ 16 (-86.21%)
Mutual labels:  detection
KAREN
KAREN: Unifying Hatespeech Detection and Benchmarking
Stars: ✭ 18 (-84.48%)
Mutual labels:  detection
unsupervised llamas
Code for https://unsupervised-llamas.com
Stars: ✭ 70 (-39.66%)
Mutual labels:  detection
jeelizPupillometry
Real-time pupillometry in the web browser using a 4K webcam video feed processed by this WebGL/Javascript library. 2 demo experiments are included.
Stars: ✭ 78 (-32.76%)
Mutual labels:  detection
CornerNet-Lite-Pytorch
🚨🚨🚨 CornerNet: traffic sign recognition for autonomous driving in a virtual simulation environment
Stars: ✭ 34 (-70.69%)
Mutual labels:  detection
Awesome Underwater Datasets
Pointers to large-scale underwater datasets and relevant resources.
Stars: ✭ 233 (+100.86%)
Mutual labels:  detection
ManTraNet-pytorch
Implementation of the famous image manipulation/forgery detector "ManTraNet" in PyTorch
Stars: ✭ 47 (-59.48%)
Mutual labels:  detection
object-tracking
Multiple Object Tracking System in Keras + (Detection Network - YOLO)
Stars: ✭ 89 (-23.28%)
Mutual labels:  detection
VISO
[IEEE TGRS 2021] Detecting and Tracking Small and Dense Moving Objects in Satellite Videos: A Benchmark
Stars: ✭ 61 (-47.41%)
Mutual labels:  detection
micro-code-analyser
A tiny Node.js microservice to detect the language of a code snippet
Stars: ✭ 21 (-81.9%)
Mutual labels:  detection
ObjRecPoseEst
Object Detection and 3D Pose Estimation
Stars: ✭ 71 (-38.79%)
Mutual labels:  detection
sqair
Implementation of Sequential Attend, Infer, Repeat (SQAIR)
Stars: ✭ 96 (-17.24%)
Mutual labels:  detection
Comet.Box
Collection of Object Detection and Segmentation Pipelines🛸🚀
Stars: ✭ 24 (-79.31%)
Mutual labels:  detection
VindicateTool
LLMNR/NBNS/mDNS Spoofing Detection Toolkit
Stars: ✭ 40 (-65.52%)
Mutual labels:  detection
mri-deep-learning-tools
Resources for MRI image processing and deep learning in 3D
Stars: ✭ 56 (-51.72%)
Mutual labels:  detection
keras cv attention models
Keras/Tensorflow attention models including beit,botnet,CMT,CoaT,CoAtNet,convnext,cotnet,davit,efficientdet,efficientnet,fbnet,gmlp,halonet,lcnet,levit,mlp-mixer,mobilevit,nfnets,regnet,resmlp,resnest,resnext,resnetd,swin,tinynet,uniformer,volo,wavemlp,yolor,yolox
Stars: ✭ 159 (+37.07%)
Mutual labels:  detection
multiple-object-tracking
Combines state-of-the-art deep-neural-network detectors with efficient trackers to solve motion-based multiple-object tracking problems
Stars: ✭ 25 (-78.45%)
Mutual labels:  detection

DSNet: A Flexible Detect-to-Summarize Network for Video Summarization [paper]


[Framework overview figure]

A PyTorch implementation of our paper DSNet: A Flexible Detect-to-Summarize Network for Video Summarization by Wencheng Zhu, Jiwen Lu, Jiahao Li, and Jie Zhou. Published in IEEE Transactions on Image Processing.

Getting Started

This project is developed on Ubuntu 16.04 with CUDA 9.0.176.

First, clone this project to your local environment.

git clone https://github.com/li-plus/DSNet.git

Create a virtual environment with python 3.6, preferably using Anaconda.

conda create --name dsnet python=3.6
conda activate dsnet

Install python dependencies.

pip install -r requirements.txt

Datasets Preparation

Download the pre-processed datasets into datasets/ folder, including TVSum, SumMe, OVP, and YouTube datasets.

mkdir -p datasets/ && cd datasets/
wget https://www.dropbox.com/s/tdknvkpz1jp6iuz/dsnet_datasets.zip
unzip dsnet_datasets.zip

If the Dropbox link is unavailable to you, try the alternative links below.

Now the datasets structure should look like

DSNet
└── datasets/
    ├── eccv16_dataset_ovp_google_pool5.h5
    ├── eccv16_dataset_summe_google_pool5.h5
    ├── eccv16_dataset_tvsum_google_pool5.h5
    ├── eccv16_dataset_youtube_google_pool5.h5
    └── readme.txt

Pre-trained Models

Our pre-trained models are available online. You may download them for evaluation, or skip this section and train a new model from scratch.

mkdir -p models && cd models
# anchor-based model
wget https://www.dropbox.com/s/0jwn4c1ccjjysrz/pretrain_ab_basic.zip
unzip pretrain_ab_basic.zip
# anchor-free model
wget https://www.dropbox.com/s/2hjngmb0f97nxj0/pretrain_af_basic.zip
unzip pretrain_af_basic.zip

To evaluate our pre-trained models, type:

# evaluate anchor-based model
python evaluate.py anchor-based --model-dir ../models/pretrain_ab_basic/ --splits ../splits/tvsum.yml ../splits/summe.yml
# evaluate anchor-free model
python evaluate.py anchor-free --model-dir ../models/pretrain_af_basic/ --splits ../splits/tvsum.yml ../splits/summe.yml --nms-thresh 0.4

If everything works fine, you should get F-score results similar to the following.

Model          TVSum    SumMe
Anchor-based   62.05    50.19
Anchor-free    61.86    51.18
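The F-score used here is the standard video-summarization metric: the harmonic mean of precision and recall between the predicted and ground-truth binary frame-level summaries. A minimal sketch of how such a score could be computed (an illustration, not the exact code in evaluate.py):

```python
def f_score(pred, truth):
    """F-score between two binary frame-level summaries (lists of 0/1)."""
    overlap = sum(p and t for p, t in zip(pred, truth))
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred)
    recall = overlap / sum(truth)
    return 2 * precision * recall / (precision + recall)

# Example: 6-frame video, predicted vs. ground-truth keyframes
pred = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0]
print(round(f_score(pred, truth), 4))  # → 0.6667
```

When a video has multiple annotators, the per-user scores are typically aggregated (averaged on TVSum, maximum on SumMe) before reporting.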

Training

Anchor-based

To train the anchor-based attention model on the TVSum and SumMe datasets with canonical settings, run

python train.py anchor-based --model-dir ../models/ab_basic --splits ../splits/tvsum.yml ../splits/summe.yml

To train on augmented and transfer datasets, run

python train.py anchor-based --model-dir ../models/ab_tvsum_aug/ --splits ../splits/tvsum_aug.yml
python train.py anchor-based --model-dir ../models/ab_summe_aug/ --splits ../splits/summe_aug.yml
python train.py anchor-based --model-dir ../models/ab_tvsum_trans/ --splits ../splits/tvsum_trans.yml
python train.py anchor-based --model-dir ../models/ab_summe_trans/ --splits ../splits/summe_trans.yml

To train with LSTM, Bi-LSTM or GCN feature extractor, specify the --base-model argument as lstm, bilstm, or gcn. For example,

python train.py anchor-based --model-dir ../models/ab_basic --splits ../splits/tvsum.yml ../splits/summe.yml --base-model lstm

Anchor-free

Similar to the anchor-based models, to train on canonical TVSum and SumMe, run

python train.py anchor-free --model-dir ../models/af_basic --splits ../splits/tvsum.yml ../splits/summe.yml --nms-thresh 0.4

Note that the NMS threshold is set to 0.4 for anchor-free models.
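The anchor-free head predicts many overlapping temporal segments with confidence scores; non-maximum suppression (NMS) keeps the highest-scoring segments and drops any whose temporal IoU with a kept segment exceeds the threshold. A minimal sketch of this post-processing (illustrative, not the repository's exact implementation):

```python
def temporal_iou(a, b):
    """IoU of two temporal segments given as (start, end) frame indices."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def nms(segments, scores, thresh=0.4):
    """Greedy NMS: visit segments by descending score, keep those that
    overlap every already-kept segment by at most thresh IoU."""
    order = sorted(range(len(segments)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(temporal_iou(segments[i], segments[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

segs = [(0, 10), (2, 12), (20, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(segs, scores))  # → [0, 2]; segment 1 overlaps segment 0 too much
```

A lower threshold suppresses more aggressively; 0.4 is the value used for the anchor-free results above.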

Evaluation

To evaluate your anchor-based models, run

python evaluate.py anchor-based --model-dir ../models/ab_basic/ --splits ../splits/tvsum.yml ../splits/summe.yml

For anchor-free models, remember to specify the NMS threshold of 0.4.

python evaluate.py anchor-free --model-dir ../models/af_basic/ --splits ../splits/tvsum.yml ../splits/summe.yml --nms-thresh 0.4

Generating Shots with KTS

Based on the public datasets provided by DR-DSN, we apply the KTS algorithm to generate video shots for the OVP and YouTube datasets. Note that the pre-processed datasets already contain these video shots. To re-generate the video shots, run

python make_shots.py --dataset ../datasets/eccv16_dataset_ovp_google_pool5.h5
python make_shots.py --dataset ../datasets/eccv16_dataset_youtube_google_pool5.h5
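KTS (kernel temporal segmentation) detects shot boundaries as change points that minimize within-segment variance of the frame features via dynamic programming. The real algorithm works on a kernel Gram matrix of deep features; the sketch below is a simplified scalar-signal analogue of the same DP idea, not the actual KTS code:

```python
def segment_cost(x, i, j):
    """Sum of squared deviations of x[i:j] from its mean (within-segment variance cost)."""
    seg = x[i:j]
    mean = sum(seg) / len(seg)
    return sum((v - mean) ** 2 for v in seg)

def change_points(x, n_segments):
    """DP that splits x into n_segments contiguous pieces minimizing total cost."""
    n = len(x)
    INF = float("inf")
    # cost[k][j]: best cost of splitting x[:j] into k segments
    cost = [[INF] * (n + 1) for _ in range(n_segments + 1)]
    back = [[0] * (n + 1) for _ in range(n_segments + 1)]
    cost[0][0] = 0.0
    for k in range(1, n_segments + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                c = cost[k - 1][i] + segment_cost(x, i, j)
                if c < cost[k][j]:
                    cost[k][j] = c
                    back[k][j] = i
    # Walk the backpointers to recover segment boundaries
    cuts, j = [], n
    for k in range(n_segments, 0, -1):
        j = back[k][j]
        cuts.append(j)
    return sorted(cuts)[1:]  # drop the leading 0

signal = [0, 0, 0, 5, 5, 5, 9, 9]
print(change_points(signal, 3))  # → [3, 6]
```

KTS additionally selects the number of segments automatically with a penalty term, rather than taking it as an argument as above.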

Using Custom Videos

Training & Validation

We provide scripts to pre-process custom video data, such as the raw videos in the custom_data folder.

First, create an h5 dataset. Here --video-dir contains several MP4 videos, and --label-dir contains ground truth user summaries for each video. The user summary of a video is a UxN binary matrix, where U denotes the number of annotators and N denotes the number of frames in the original video.

python make_dataset.py --video-dir ../custom_data/videos --label-dir ../custom_data/labels \
  --save-path ../custom_data/custom_dataset.h5 --sample-rate 15
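To make the label format concrete, here is a sketch of the U x N user-summary matrix described above and of frame subsampling at --sample-rate 15 (the sizes and annotations are made up for illustration):

```python
n_frames, n_users, sample_rate = 150, 3, 15

# U x N binary matrix: row u marks the frames annotator u kept
user_summary = [[0] * n_frames for _ in range(n_users)]
for f in range(30, 60):
    user_summary[0][f] = 1
for f in range(30, 75):
    user_summary[1][f] = 1
for f in range(45, 60):
    user_summary[2][f] = 1

# Keep every 15th frame, as --sample-rate 15 would
picks = list(range(0, n_frames, sample_rate))
sampled = [[row[f] for f in picks] for row in user_summary]
print(picks)                          # → [0, 15, 30, 45, 60, 75, 90, 105, 120, 135]
print(len(sampled), len(sampled[0]))  # → 3 10
```

Subsampling shrinks a 150-frame video to 10 feature frames while the full-resolution user summaries are retained for evaluation.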

Then split the dataset into training and validation sets and generate a split file to index them.

python make_split.py --dataset ../custom_data/custom_dataset.h5 \
  --train-ratio 0.67 --save-path ../custom_data/custom.yml
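The splitting logic amounts to shuffling the dataset's video keys and cutting them at the given ratio; a minimal sketch under that assumption (the key names and function are hypothetical, not the actual make_split.py):

```python
import random

def make_split(video_keys, train_ratio=0.67, seed=0):
    """Shuffle keys and divide them into train/test sets by ratio."""
    keys = list(video_keys)
    random.Random(seed).shuffle(keys)  # fixed seed for a reproducible split
    n_train = round(len(keys) * train_ratio)
    return {"train_keys": keys[:n_train], "test_keys": keys[n_train:]}

keys = [f"custom_dataset.h5/video_{i}" for i in range(6)]
split = make_split(keys)
print(len(split["train_keys"]), len(split["test_keys"]))  # → 4 2
```

The resulting YAML file simply records these two key lists so that train.py and evaluate.py index the same partition.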

Now you may train on your custom videos using the split file.

python train.py anchor-based --model-dir ../models/custom --splits ../custom_data/custom.yml
python evaluate.py anchor-based --model-dir ../models/custom --splits ../custom_data/custom.yml

Inference

To predict the summary of a raw video, use infer.py. For example, run

python infer.py anchor-based --ckpt-path ../models/custom/checkpoint/custom.yml.0.pt \
  --source ../custom_data/videos/EE-bNr36nyA.mp4 --save-path ./output.mp4
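At inference time, per-shot importance scores are commonly converted into a keyshot summary by selecting shots under a length budget (typically 15% of the video) with 0/1 knapsack, a convention in this literature. A sketch of that selection step (illustrative; the repository's exact post-processing may differ):

```python
def knapsack_select(values, lengths, budget):
    """0/1 knapsack: pick shots maximizing total importance within a frame budget."""
    n = len(values)
    # best[i][b]: max value using the first i shots with at most b frames
    best = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]
            if lengths[i - 1] <= b:
                cand = best[i - 1][b - lengths[i - 1]] + values[i - 1]
                if cand > best[i][b]:
                    best[i][b] = cand
    # Backtrack to recover the chosen shot indices
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= lengths[i - 1]
    return sorted(chosen)

scores = [0.9, 0.4, 0.8, 0.1]   # per-shot importance
lengths = [30, 20, 40, 10]      # shot lengths in frames
budget = int(0.15 * 500)        # 15% of a 500-frame video = 75 frames
print(knapsack_select(scores, lengths, budget))  # → [0, 2]
```

The frames of the selected shots are then concatenated to produce the output summary video.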

Acknowledgments

We gratefully thank the open-source repositories below, which greatly boosted our research.

  • Thanks to KTS for the effective shot generation algorithm.
  • Thanks to DR-DSN for the pre-processed public datasets.
  • Thanks to VASNet for the training and evaluation pipeline.

Citation

If you find our code or paper helpful, please consider citing:

@article{zhu2020dsnet,
  title={DSNet: A Flexible Detect-to-Summarize Network for Video Summarization},
  author={Zhu, Wencheng and Lu, Jiwen and Li, Jiahao and Zhou, Jie},
  journal={IEEE Transactions on Image Processing},
  volume={30},
  pages={948--962},
  year={2020}
}