RANSAC-Flow

PyTorch implementation of the paper "RANSAC-Flow: generic two-stage image alignment" (ECCV 2020)

[PDF] [Project page] [Demo] [YouTube demo]

[Teaser figure]

If our project is helpful for your research, please consider citing:

@inproceedings{shen2020ransac,
  title={RANSAC-Flow: generic two-stage image alignment},
  author={Shen, Xi and Darmon, Fran{\c{c}}ois and Efros, Alexei A and Aubry, Mathieu},
  booktitle={16th European Conference on Computer Vision},
  year={2020}
}

Since some functions behave differently across PyTorch versions, we recommend installing the EXACT versions indicated in the Dependencies if you want to reproduce the results in the paper. For more details, please refer to this issue.

Table of Contents

1. Visual Results

1.1. Aligning Artworks (more results can be found on our project page)

Input vs. Our Fine Alignment, each shown as an animation and an average image: [animated GIFs]

1.2. 3D Reconstruction (more results can be found on our project page)

Source, Target, and 3D Reconstruction: [animated GIFs]

1.3. Texture transfer

Source, Target, and Texture Transfer: [animated GIFs]

Other results (e.g., aligning duplicated artworks, optical flow, localization) can be found in our paper.

2. Installation

2.1. Dependencies

Our model can be trained on a single GeForce GTX 1080 Ti GPU (11 GB).

Install the PyTorch build matching your CUDA version:
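For instance, a pinned pip install might look like the following (a sketch only; substitute the exact versions listed in the Dependencies and pick the wheel matching your CUDA toolkit):

pip install torch==<exact-version> torchvision==<exact-version>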

Other dependencies (tqdm, visdom, pandas, kornia, opencv-python):

bash requirement.sh
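Alternatively, installing those packages manually with pip should be equivalent (assuming requirement.sh does nothing beyond installing them):

pip install tqdm visdom pandas kornia opencv-python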

2.2. Pre-trained models

Quick download:

cd model/pretrained
bash download_model.sh

For more details on the pre-trained models, see here.

2.3. Datasets

Download the results of ArtMiner:

cd data/
bash Brueghel_detail.sh # Brueghel detail dataset (208M): visual results, aligning groups of details

Download our training data here (~9 GB). It includes the validation and test data as well.

3. Quick Start

A quick-start guide showing how to use our code is available in demo.ipynb.

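Assuming Jupyter is installed, the notebook can be opened locally with:

jupyter notebook demo.ipynb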

4. Train

4.1. Generating training pairs

To run the training, we need pairs of images that are coarsely aligned. We provide a notebook showing how to generate these training pairs. Note that we also provide our training pairs here.

4.2. Reproducing the training on MegaDepth

The training data needs to be downloaded from here and saved into ./data. The file structure is:

./RANSAC-Flow/data/MegaDepth
├── MegaDepth_Train/
├── MegaDepth_Train_Org/
├── Val/
└── Test/

As mentioned in the paper, training the model on MegaDepth consists of the following three stages:

  • Stage 1: we train with the reconstruction loss only. You can find the hyper-parameters in train/stage1.sh. Run this stage with:
cd train/ 
bash stage1.sh
  • Stage 2: we jointly train the reconstruction loss and the cycle consistency of the flow, starting from the model trained in stage 1. The hyper-parameters are in train/stage2.sh; change the argument --resumePth to your model path, then run:
cd train/ 
bash stage2.sh
  • Stage 3: finally, we train all three losses together (reconstruction loss, cycle consistency of the flow, and matchability loss), starting from the model trained in stage 2. The hyper-parameters are in train/stage3.sh; change the argument --resumePth to your model path, then run:
cd train/ 
bash stage3.sh

4.3. Fine-tuning on your own dataset

If you want to fine-tune on your own dataset, it is recommended to start from our MegaDepth-trained model. You can see all the training arguments with:

cd train/ 
python train.py --help

If you don't need to predict the matchability, you can set the weight of the matchability loss to 0 (--eta 0 in train.py) and set the path to your images (--trainImgDir). Please refer to train/stage2.sh for the other arguments.

If you do predict matchability, you need to tune the weight of the matchability loss (the --eta argument of train.py) for your dataset; an example invocation is sketched below.
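As an illustration, a fine-tuning run without matchability prediction might be launched like this (the image directory and checkpoint path below are placeholders; take the remaining hyper-parameters from train/stage2.sh):

cd train/
python train.py --trainImgDir /path/to/your/images --resumePth /path/to/megadepth_model.pth --eta 0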

5. Evaluation

The evaluation of the different tasks is described in the following files:

6. Acknowledgement

We appreciate help from:

7. Changelog

2020.07.20

  • Remove unused parts and rename some functions/parameters for consistency with the paper; add more comments.

  • Fix a bug in the YFCC evaluation, see here. The results in the paper have been updated as well.

  • Add a comparison to the recent work GLU-Net; results are updated in the paper.

  • Add a CSV file containing annotated correspondences for RobotCar, see here for more details.

2020.11.03

  • Update results on the Aachen day-night dataset, see here.