
myungsub / meta-interpolation

License: MIT
Source code for CVPR 2020 paper "Scene-Adaptive Video Frame Interpolation via Meta-Learning"

Programming Languages

  • Python
  • Cuda
  • C++
  • Shell

Projects that are alternatives of or similar to meta-interpolation

Super Slomo
PyTorch implementation of Super SloMo by Jiang et al.
Stars: ✭ 2,714 (+3518.67%)
Mutual labels:  slow-motion, frame-interpolation, video-frame-interpolation
Waifu2x Extension Gui
Video, Image and GIF upscale/enlarge (Super-Resolution) and Video frame interpolation. Achieved with Waifu2x, Real-ESRGAN, SRMD, RealSR, Anime4K, RIFE, CAIN, DAIN, and ACNet.
Stars: ✭ 5,463 (+7184%)
Mutual labels:  frame-interpolation, video-frame-interpolation
FISR
Official repository of FISR (AAAI 2020).
Stars: ✭ 72 (-4%)
Mutual labels:  frame-interpolation, video-frame-interpolation
MetaBIN
[CVPR2021] Meta Batch-Instance Normalization for Generalizable Person Re-Identification
Stars: ✭ 58 (-22.67%)
Mutual labels:  meta-learning
MotionNet
CVPR 2020, "MotionNet: Joint Perception and Motion Prediction for Autonomous Driving Based on Bird's Eye View Maps"
Stars: ✭ 141 (+88%)
Mutual labels:  cvpr2020
Meta-DETR
Meta-DETR: Official PyTorch Implementation
Stars: ✭ 205 (+173.33%)
Mutual labels:  meta-learning
MetaLifelongLanguage
Repository containing code for the paper "Meta-Learning with Sparse Experience Replay for Lifelong Language Learning".
Stars: ✭ 21 (-72%)
Mutual labels:  meta-learning
FastMVSNet
[CVPR'20] Fast-MVSNet: Sparse-to-Dense Multi-View Stereo With Learned Propagation and Gauss-Newton Refinement
Stars: ✭ 193 (+157.33%)
Mutual labels:  cvpr2020
pytorch-psetae
PyTorch implementation of the model presented in "Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention"
Stars: ✭ 117 (+56%)
Mutual labels:  cvpr2020
MetaHeac
This is an official implementation for "Learning to Expand Audience via Meta Hybrid Experts and Critics for Recommendation and Advertising" (KDD 2021).
Stars: ✭ 36 (-52%)
Mutual labels:  meta-learning
meta-SR
Pytorch implementation of Meta-Learning for Short Utterance Speaker Recognition with Imbalance Length Pairs (Interspeech, 2020)
Stars: ✭ 58 (-22.67%)
Mutual labels:  meta-learning
mindware
An efficient open-source AutoML system for automating machine learning lifecycle, including feature engineering, neural architecture search, and hyper-parameter tuning.
Stars: ✭ 34 (-54.67%)
Mutual labels:  meta-learning
mliis
Code for meta-learning initializations for image segmentation
Stars: ✭ 21 (-72%)
Mutual labels:  meta-learning
HebbianMetaLearning
Meta-Learning through Hebbian Plasticity in Random Networks: https://arxiv.org/abs/2007.02686
Stars: ✭ 77 (+2.67%)
Mutual labels:  meta-learning
metagenrl
MetaGenRL, a novel meta reinforcement learning algorithm. Unlike prior work, MetaGenRL can generalize to new environments that are entirely different from those used for meta-training.
Stars: ✭ 50 (-33.33%)
Mutual labels:  meta-learning
pymfe
Python Meta-Feature Extractor package.
Stars: ✭ 89 (+18.67%)
Mutual labels:  meta-learning
hawp
Holistically-Attracted Wireframe Parsing
Stars: ✭ 146 (+94.67%)
Mutual labels:  cvpr2020
nemar
[CVPR2020] Unsupervised Multi-Modal Image Registration via Geometry Preserving Image-to-Image Translation
Stars: ✭ 120 (+60%)
Mutual labels:  cvpr2020
FSL-Mate
FSL-Mate: A collection of resources for few-shot learning (FSL).
Stars: ✭ 1,346 (+1694.67%)
Mutual labels:  meta-learning
DSGN
DSGN: Deep Stereo Geometry Network for 3D Object Detection (CVPR 2020)
Stars: ✭ 276 (+268%)
Mutual labels:  cvpr2020

SAVFI - Meta-Learning for Video Frame Interpolation

Myungsub Choi, Janghoon Choi, Sungyong Baik, Tae Hyun Kim, Kyoung Mu Lee

Source code for CVPR 2020 paper "Scene-Adaptive Video Frame Interpolation via Meta-Learning"

Project | Paper-CVF | Paper-ArXiv | Supp


Requirements

  • Ubuntu 18.04
  • Python==3.7
  • numpy==1.18.1
  • PyTorch==1.4.0, cudatoolkit==10.1
  • opencv==3.4.2
  • cupy==7.3 (recommended: conda install cupy -c conda-forge)
  • tqdm==4.44.1

For [DAIN], the environment is different; please check dain/dain_env.yml for the requirements.
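
For convenience, a conda environment matching the pins above can be created roughly as follows (the environment name and channel choices are assumptions, not prescribed by the repo):

    # create and activate an environment with the pinned Python version
    conda create -n savfi python=3.7
    conda activate savfi
    # PyTorch 1.4.0 built against CUDA 10.1
    conda install pytorch=1.4.0 cudatoolkit=10.1 -c pytorch
    # CuPy, installed as recommended above
    conda install cupy -c conda-forge
    conda install opencv=3.4.2
    pip install numpy==1.18.1 tqdm==4.44.1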

Usage

Disclaimer: This code has been reorganized so that multiple different models run in a single codebase. Due to many version and environment changes, the numbers obtained from this code may differ (usually for the better) from those reported in the paper. The original code modified the main training scripts of each frame interpolation GitHub repo ([DVF (voxelflow)], [SuperSloMo], [SepConv], [DAIN]); the modified scripts are kept in ./legacy/*.py. If you want to reproduce the numbers reported in our paper exactly, please contact @myungsub for the legacy experimental settings.

Dataset Preparation

  • We use the [Vimeo90K Septuplet dataset] for training and testing
    • After downloading the full dataset, make a symbolic link in the data/ folder:
      • ln -s /path/to/vimeo_septuplet_data/ ./data/vimeo_septuplet
  • For further evaluation, use the Middlebury-OTHERS and HD datasets (see Results below)
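
After the symlink, data/ is expected to follow the public Vimeo90K septuplet layout, roughly as sketched below (verify the file names against your download):

    data/vimeo_septuplet/
        sequences/00001/0001/im1.png ... im7.png
        sep_trainlist.txt
        sep_testlist.txt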

Frame Interpolation Model Preparation

  • Download pretrained models from [Here], and save them to ./pretrained_models/*.pth

Training / Testing with Vimeo90K-Septuplet dataset

  • For training, simply run: ./scripts/run_{VFI_MODEL_NAME}.sh
    • Currently supports: sepconv, voxelflow, superslomo, cain, and rrin
    • Other models are coming soon!
  • For testing, just uncomment the two lines containing --mode val and --pretrained_model {MODEL_NAME} (see the example below)
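
For instance, to train CAIN and then evaluate it (the script name follows the run_{VFI_MODEL_NAME}.sh pattern above; the exact lines inside the script may differ):

    # training
    ./scripts/run_cain.sh
    # testing: edit the same script and uncomment the two lines, e.g.
    #   --mode val \
    #   --pretrained_model {MODEL_NAME} \
    ./scripts/run_cain.sh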

Testing with custom data

  • See scripts/run_test.sh for details; a full command sketch is given after this list
  • Things to change:
    • Modify the folder containing the video frames by changing --data_root to your desired dir/
    • Make sure to match the image format --img_fmt (defaults to png)
    • Change --model, --loss, and --pretrained_model to what you want
      • For SepConv, --model should be sepconv, and --loss should be 1*L1
      • For VoxelFlow, --model should be voxelflow, and --loss should be 1*MSE
      • For SuperSloMo, --model should be superslomo, and --loss should be 1*Super
      • For DAIN, --model should be dain, and --loss should be 1*L1
      • For CAIN, --model should be cain, and --loss should be 1*L1
      • For RRIN, --model should be rrin, and --loss should be 1*L1
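
Putting these options together, a run on custom frames might look like the sketch below (main.py as the entry point and the checkpoint name sepconv are assumptions; check scripts/run_test.sh for the actual invocation):

    python main.py \
        --mode val \
        --data_root /path/to/your/frames/ \
        --img_fmt png \
        --model sepconv \
        --loss '1*L1' \
        --pretrained_model sepconv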

Using Other Meta-Learning Algorithms

  • The current code supports meta-learning algorithms more advanced than vanilla MAML, e.g. MAML++, L2F, or Meta-SGD; see the sketch after this list
    • For MAML++, you can explore many different hyperparameters by adding additional options (see config.py)
    • For L2F, just uncomment --attenuate in scripts/run_{VFI_MODEL_NAME}.sh
    • For Meta-SGD, just uncomment --metasgd (this usually results in the best performance!)
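
Concretely, switching algorithms only means toggling flags in the run script (a sketch of the relevant lines; run_cain.sh stands in for any of the supported scripts):

    # vanilla MAML: run the script as-is
    ./scripts/run_cain.sh
    # L2F: edit the script and uncomment
    #   --attenuate \
    # Meta-SGD: edit the script and uncomment instead
    #   --metasgd \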

Framework Overview

Results

  • Qualitative results for the Vimeo90K-Septuplet dataset

  • Qualitative results for the Middlebury-OTHERS dataset

  • Qualitative results for the HD dataset

Additional Results Video


Citation

If you find this code useful for your research, please consider citing the following paper:

@inproceedings{choi2020meta,
    author = {Choi, Myungsub and Choi, Janghoon and Baik, Sungyong and Kim, Tae Hyun and Lee, Kyoung Mu},
    title = {Scene-Adaptive Video Frame Interpolation via Meta-Learning},
    booktitle = {CVPR},
    year = {2020}
}

Acknowledgement

The main structure of this code is based on MAML++. Training scripts for each of the frame interpolation methods are adapted from: [DVF], [SuperSloMo], [SepConv], [DAIN], [CAIN], [RRIN]. We thank the authors for sharing the code for their great work.
