
avinashpaliwal / Super Slomo

Licence: MIT
PyTorch implementation of Super SloMo by Jiang et al.

Programming Languages

Python: 139335 projects (#7 most used programming language)
Jupyter Notebook: 11667 projects
Dockerfile: 14818 projects

Projects that are alternatives of or similar to Super Slomo

meta-interpolation
Source code for CVPR 2020 paper "Scene-Adaptive Video Frame Interpolation via Meta-Learning"
Stars: ✭ 75 (-97.24%)
Mutual labels:  slow-motion, frame-interpolation, video-frame-interpolation
Models Comparison.pytorch
Code for the paper Benchmark Analysis of Representative Deep Neural Network Architectures
Stars: ✭ 148 (-94.55%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Livianet
This repository contains the code of LiviaNET, a 3D fully convolutional neural network that was employed in our work: "3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study"
Stars: ✭ 143 (-94.73%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Tf Adnet Tracking
Deep Object Tracking Implementation in Tensorflow for 'Action-Decision Networks for Visual Tracking with Deep Reinforcement Learning(CVPR 2017)'
Stars: ✭ 162 (-94.03%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Deep Steganography
Hiding Images within other images using Deep Learning
Stars: ✭ 136 (-94.99%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Bender
Easily craft fast Neural Networks on iOS! Use TensorFlow models. Metal under the hood.
Stars: ✭ 1,728 (-36.33%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Sign Language Interpreter Using Deep Learning
A sign language interpreter using live video feed from the camera.
Stars: ✭ 157 (-94.22%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Top Deep Learning
Top 200 deep learning Github repositories sorted by the number of stars.
Stars: ✭ 1,365 (-49.71%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Vidaug
Effective Video Augmentation Techniques for Training Convolutional Neural Networks
Stars: ✭ 178 (-93.44%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Hdltex
HDLTex: Hierarchical Deep Learning for Text Classification
Stars: ✭ 191 (-92.96%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Traffic Sign Detection
Traffic Sign Detection. Code for the paper entitled "Evaluation of deep neural networks for traffic sign detection systems".
Stars: ✭ 200 (-92.63%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Pytorch convlstm
convolutional lstm implementation in pytorch
Stars: ✭ 126 (-95.36%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Hyperdensenet
This repository contains the code of HyperDenseNet, a hyper-densely connected CNN to segment medical images in multi-modal image scenarios.
Stars: ✭ 124 (-95.43%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Shainet
SHAInet - a pure Crystal machine learning library
Stars: ✭ 143 (-94.73%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Lenet 5
PyTorch implementation of LeNet-5 with live visualization
Stars: ✭ 122 (-95.5%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Self Driving Car
Udacity Self-Driving Car Engineer Nanodegree projects.
Stars: ✭ 2,103 (-22.51%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
3dmmasstn
MatConvNet implementation for incorporating a 3D Morphable Model (3DMM) into a Spatial Transformer Network (STN)
Stars: ✭ 218 (-91.97%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Har Keras Cnn
Human Activity Recognition (HAR) with 1D Convolutional Neural Network in Python and Keras
Stars: ✭ 97 (-96.43%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Awslambdaface
Perform deep neural network based face detection and recognition in the cloud (via AWS lambda) with zero model configuration or tuning.
Stars: ✭ 98 (-96.39%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks
Iresnet
Improved Residual Networks (https://arxiv.org/pdf/2004.04989.pdf)
Stars: ✭ 163 (-93.99%)
Mutual labels:  deep-neural-networks, convolutional-neural-networks

Super-SloMo (MIT Licence)

PyTorch implementation of "Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation" by Jiang H., Sun D., Jampani V., Yang M., Learned-Miller E. and Kautz J. [Project] [Paper]

Check out our paper "Deep Slow Motion Video Reconstruction with Hybrid Imaging System" published in TPAMI.

Results

Results on the UCF101 dataset using the evaluation script provided by the paper's authors; the get_results_bug_fixed.sh script was used. It applies motion masks when computing PSNR, SSIM, and IE (interpolation error).

Method                   PSNR   SSIM   IE
DVF                      29.37  0.861  16.37
SepConv - L_1            30.18  0.875  15.54
SepConv - L_F            30.03  0.869  15.78
SuperSloMo_Adobe240fps   29.80  0.870  15.68
Pretrained (this repo)   29.77  0.874  15.58
SuperSloMo               30.22  0.880  15.18
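
The get_results_bug_fixed.sh script handles the metric computation; purely as an illustration of what motion-masked PSNR and IE mean, here is a minimal sketch (the function name and mask convention are assumptions, not the repo's code; SSIM needs a windowed comparison and is omitted):

import numpy as np

def masked_psnr_ie(pred, gt, mask):
    # pred, gt: uint8 H x W x 3 frames; mask: boolean H x W marking "moving" pixels.
    # Only pixels inside the motion mask contribute to the scores.
    diff = pred.astype(np.float64)[mask] - gt.astype(np.float64)[mask]
    mse = np.mean(diff ** 2)
    psnr = 10.0 * np.log10(255.0 ** 2 / mse)   # peak signal-to-noise ratio in dB
    ie = np.sqrt(mse)                          # interpolation error as an RMS difference
    return psnr, ie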

Prerequisites

This codebase was developed and tested with PyTorch 0.4.1, CUDA 9.2, and Python 3.6. Install:

For GPU, run

conda install pytorch=0.4.1 cuda92 torchvision==0.2.0 -c pytorch

For CPU, run

conda install pytorch-cpu=0.4.1 torchvision-cpu==0.2.0 cpuonly -c pytorch
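
After installing, a quick sanity check (a generic PyTorch snippet, not part of this repo) confirms the version and whether the GPU build sees CUDA:

import torch

print(torch.__version__)          # expected: 0.4.1
print(torch.cuda.is_available())  # True for the GPU build, False for the CPU build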

Training

Preparing training data

To train the model with the provided code, the data must first be arranged in the expected format. The create_dataset.py script uses ffmpeg to extract frames from videos.
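
As a rough illustration of what the script does per video, frame extraction with ffmpeg boils down to something like the following (a hypothetical sketch, not the actual create_dataset.py code; the output naming pattern is an assumption):

import os
import subprocess

def extract_frames(ffmpeg_dir, video_path, out_dir):
    # Dump every frame of the video as a numbered JPEG using ffmpeg.
    os.makedirs(out_dir, exist_ok=True)
    subprocess.run([
        os.path.join(ffmpeg_dir, "ffmpeg"),
        "-i", video_path,
        os.path.join(out_dir, "%04d.jpg"),   # hypothetical naming pattern
    ], check=True)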

Adobe240fps

For adobe240fps, download the dataset, unzip it, and then run the following command:

python data\create_dataset.py --ffmpeg_dir path\to\folder\containing\ffmpeg --videos_folder path\to\adobe240fps\videoFolder --dataset_folder path\to\dataset --dataset adobe240fps

Custom

For a custom dataset, run the following command:

python data\create_dataset.py --ffmpeg_dir path\to\folder\containing\ffmpeg --videos_folder path\to\custom\videoFolder --dataset_folder path\to\dataset

The default train-test split is 90-10. You can change this with the --train_test_split command-line argument.

Run the following command for help / more info:

python data\create_dataset.py --h

Training

In train.ipynb, set the parameters (dataset path, checkpoint directory, etc.) and run all the cells.

Or, to train from the terminal, run:

python train.py --dataset_root path\to\dataset --checkpoint_dir path\to\save\checkpoints

Run the following command for help and more options, such as resuming from a checkpoint and setting the progress-report frequency:

python train.py --h

Tensorboard

To visualize training progress, you can run TensorBoard from the project directory using the command:

tensorboard --logdir log --port 6007

and then go to http://localhost:6007.
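
The log directory is written during training; if you want to log additional scalars yourself, a typical pattern with tensorboardX (an assumption about the logging backend, since torch.utils.tensorboard did not exist for PyTorch 0.4.1) looks like:

from tensorboardX import SummaryWriter

writer = SummaryWriter("log")   # the same directory passed to --logdir above
for step in range(100):
    writer.add_scalar("train/loss", 1.0 / (step + 1), step)   # dummy values
writer.close()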

Evaluation

Pretrained model

You can download the pretrained model, trained on the adobe240fps dataset, here.
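
Assuming the .ckpt file is an ordinary torch.save checkpoint (the usual PyTorch convention; the exact keys stored inside are repo-specific), you can inspect it before use:

import torch

ckpt = torch.load("SuperSloMo.ckpt", map_location="cpu")   # example path
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))   # e.g. model weights, training epoch, optimizer state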

Video Converter

You can convert any video to a slow-motion or high-fps video (or both) using video_to_slomo.py. Use the command:

# Windows
python video_to_slomo.py --ffmpeg path\to\folder\containing\ffmpeg --video path\to\video.mp4 --sf N --checkpoint path\to\checkpoint.ckpt --fps M --output path\to\output.mkv

# Linux
python video_to_slomo.py --video path\to\video.mp4 --sf N --checkpoint path\to\checkpoint.ckpt --fps M --output path\to\output.mkv

If you want to convert a video from 30fps to 90fps, set --fps to 90 and --sf to 3 (to generate 3x as many frames as the original video).
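
In other words, --sf is the frame-multiplication factor and --fps is the playback rate of the output file. A tiny helper (purely illustrative, not part of the repo) makes the arithmetic explicit:

def slomo_factor(input_fps, target_fps):
    # 30 fps -> 90 fps means 2 new frames between every original pair,
    # i.e. 3x as many frames overall.
    assert target_fps % input_fps == 0, "target fps should be a multiple of the input fps"
    return target_fps // input_fps

print(slomo_factor(30, 90))   # -> 3, so pass --sf 3 --fps 90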

Run the following command for help / more info:

python video_to_slomo.py --h

You can also use eval.py if you do not want to use ffmpeg. You will instead need to install opencv-python via pip for video I/O. A sample usage would be:

python eval.py data/input.mp4 --checkpoint=data/SuperSloMo.ckpt --output=data/output.mp4 --scale=4

Use python eval.py --help for more details.
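
For reference, the OpenCV video I/O that eval.py relies on looks roughly like this (a generic opencv-python snippet, not the script itself):

import cv2

cap = cv2.VideoCapture("data/input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)   # source frame rate
ok, frame = cap.read()            # frame is an H x W x 3 BGR uint8 array
print(fps, frame.shape if ok else None)
cap.release()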

More info TBA

References:

Parts of the code are based on TheFairBear/Super-SlowMo.
