
zju3dv / LoFTR

License: Apache-2.0
Code for "LoFTR: Detector-Free Local Feature Matching with Transformers", CVPR 2021

Programming Languages

Jupyter Notebook
11667 projects
Python
139335 projects - #7 most used programming language
Shell
77523 projects

Projects that are alternatives to or similar to LoFTR

LEMO
Official Pytorch implementation for 2021 ICCV paper "Learning Motion Priors for 4D Human Body Capture in 3D Scenes" and trained models / data
Stars: ✭ 149 (-85.76%)
Mutual labels:  pose-estimation, 3d-vision
EgoNet
Official project website for the CVPR 2021 paper "Exploring intermediate representation for monocular vehicle pose estimation"
Stars: ✭ 111 (-89.39%)
Mutual labels:  pose-estimation, 3d-vision
PCLoc
Pose Correction for Highly Accurate Visual Localization in Large-scale Indoor Spaces (ICCV 2021)
Stars: ✭ 37 (-96.46%)
Mutual labels:  pose-estimation, feature-matching
mediapipe-osc
MediaPipe examples which stream their detections over OSC.
Stars: ✭ 26 (-97.51%)
Mutual labels:  pose-estimation
eval-mpii-pose
Evaluation code for the MPII human pose dataset
Stars: ✭ 58 (-94.46%)
Mutual labels:  pose-estimation
DeepLabCut-core
Headless DeepLabCut (no GUI support)
Stars: ✭ 29 (-97.23%)
Mutual labels:  pose-estimation
Multi-Person-Pose-using-Body-Parts
No description or website provided.
Stars: ✭ 41 (-96.08%)
Mutual labels:  pose-estimation
slamkit
SLAM Kit
Stars: ✭ 28 (-97.32%)
Mutual labels:  pose-estimation
articulated-pose
[CVPR 2020, Oral] Category-Level Articulated Object Pose Estimation
Stars: ✭ 85 (-91.87%)
Mutual labels:  pose-estimation
sleap
A deep learning framework for multi-animal pose tracking.
Stars: ✭ 200 (-80.88%)
Mutual labels:  pose-estimation
Semi-Supervised-Learning-GAN
Semi-supervised Learning GAN
Stars: ✭ 72 (-93.12%)
Mutual labels:  feature-matching
cvxpnpl
A Perspective-n-Points-and-Lines method.
Stars: ✭ 56 (-94.65%)
Mutual labels:  pose-estimation
dynamic plane convolutional onet
[WACV 2021] Dynamic Plane Convolutional Occupancy Networks
Stars: ✭ 25 (-97.61%)
Mutual labels:  3d-vision
Feature-Detection-and-Matching
Feature Detection and Matching with SIFT, SURF, KAZE, BRIEF, ORB, BRISK, AKAZE and FREAK through the Brute Force and FLANN algorithms using Python and OpenCV
Stars: ✭ 95 (-90.92%)
Mutual labels:  feature-matching
label-studio-frontend
Data labeling react app that is backend agnostic and can be embedded into your applications — distributed as an NPM package
Stars: ✭ 230 (-78.01%)
Mutual labels:  pose-estimation
Normal-Assisted-Stereo
[CVPR 2020] Normal Assisted Stereo Depth Estimation
Stars: ✭ 95 (-90.92%)
Mutual labels:  3d-vision
openpose-docker
A docker build file for CMU openpose with Python API support
Stars: ✭ 68 (-93.5%)
Mutual labels:  pose-estimation
Robotics-Object-Pose-Estimation
A complete end-to-end demonstration in which we collect training data in Unity and use that data to train a deep neural network to predict the pose of a cube. This model is then deployed in a simulated robotic pick-and-place task.
Stars: ✭ 153 (-85.37%)
Mutual labels:  pose-estimation
HybrIK
Official code of "HybrIK: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation", CVPR 2021
Stars: ✭ 395 (-62.24%)
Mutual labels:  pose-estimation
MinkLoc3D
MinkLoc3D: Point Cloud Based Large-Scale Place Recognition
Stars: ✭ 83 (-92.07%)
Mutual labels:  3d-vision

LoFTR: Detector-Free Local Feature Matching with Transformers

Project Page | Paper


LoFTR: Detector-Free Local Feature Matching with Transformers
Jiaming Sun*, Zehong Shen*, Yu'ang Wang*, Hujun Bao, Xiaowei Zhou
CVPR 2021

[demo video GIF]

TODO List and ETA

  • Inference code and pretrained models (DS and OT) (2021-4-7)
  • Code for reproducing the test-set results (2021-4-7)
  • Webcam demo to reproduce the result shown in the GIF above (2021-4-13)
  • Training code and training data preparation (expected 2021-6-10)

Discussions about the paper are welcome in the discussion panel.

🚩 Updates

Colab demo

Want to run LoFTR with custom image pairs without configuring your own GPU environment? Try the Colab demo: Open In Colab

Installation

# For full pytorch-lightning trainer features (recommended)
conda env create -f environment.yaml
conda activate loftr

# For the LoFTR matcher only
pip install torch einops yacs kornia
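Either way, a quick import check (a minimal sketch, not part of this repo) confirms that the core dependencies are importable and whether a CUDA device is visible:

import torch
import einops
import kornia
import yacs.config

# Print the torch version and CUDA availability to confirm the environment is usable.
print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())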

We provide download links to

  • the scannet-1500-testset (~1GB).
  • the megadepth-1500-testset (~600MB).
  • 4 pretrained models: indoor-ds, indoor-ot, outdoor-ds, and outdoor-ot (each ~45MB).
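After downloading, a small path check helps confirm everything landed where the snippets below expect it. This is a hedged sketch: the OT checkpoint file names and the weights/ layout are assumptions inferred from the naming used elsewhere in this README.

from pathlib import Path

# Assumed layout: checkpoints under weights/, testsets under data/{scannet,megadepth}/test.
# indoor_ot.ckpt / outdoor_ot.ckpt are guessed names, following the DS checkpoints.
expected = [
    Path("weights/indoor_ds.ckpt"),
    Path("weights/outdoor_ds.ckpt"),
    Path("weights/indoor_ot.ckpt"),
    Path("weights/outdoor_ot.ckpt"),
    Path("data/scannet/test"),
    Path("data/megadepth/test"),
]
for p in expected:
    print(f"{p}: {'found' if p.exists() else 'missing'}")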

At this point, the environment is all set and the LoFTR-DS model is ready to go! If you want to run LoFTR-OT, a few extra steps are needed:

[Requirements for LoFTR-OT]

We use the code from SuperGluePretrainedNetwork for optimal transport. However, we can't provide that code directly due to its strict license requirements, so we recommend downloading it with the following command instead.

cd src/loftr/utils  
wget https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/master/models/superglue.py 

Run LoFTR demos

Match image pairs with LoFTR

[code snippets]
import torch
from src.loftr import LoFTR, default_cfg

# Initialize LoFTR
matcher = LoFTR(config=default_cfg)
matcher.load_state_dict(torch.load("weights/indoor_ds.ckpt")['state_dict'])
matcher = matcher.eval().cuda()

# Inference
with torch.no_grad():
    batch = {'image0': img0, 'image1': img1}  # grayscale tensors, shape (1, 1, H, W)
    matcher(batch)  # LoFTR writes the matches back into `batch`
    mkpts0 = batch['mkpts0_f'].cpu().numpy()
    mkpts1 = batch['mkpts1_f'].cpu().numpy()

An example is given in notebooks/demo_single_pair.ipynb.
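The snippet above assumes img0 and img1 are already prepared. As a minimal sketch (the file paths and target size are placeholders, not files from this repo), LoFTR takes grayscale images as float tensors in [0, 1] with shape (1, 1, H, W); dimensions divisible by 8 are assumed here since the coarse matching stage operates at 1/8 resolution:

import cv2
import torch

def load_gray(path, size=(640, 480)):
    # Read as grayscale, resize, and convert to a (1, 1, H, W) float tensor in [0, 1].
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, size)  # size is (width, height); both divisible by 8
    return torch.from_numpy(img)[None, None].float().cuda() / 255.0

# Hypothetical paths; substitute your own image pair.
batch = {'image0': load_gray("img0.png"), 'image1': load_gray("img1.png")}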

Online demo

Run the online demo with a webcam or video to reproduce the result shown in the GIF above.

cd demo
./run_demo.sh
[run_demo.sh]
#!/bin/bash
set -e
# set -x

if [ ! -f utils.py ]; then
    echo "Downloading utils.py from the SuperGlue repo."
    echo "We cannot provide this file directly due to its strict licence."
    wget https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/master/models/utils.py
fi

# Use webcam 0 as input source. 
input=0
# or use a pre-recorded video given the path.
# input=/home/sunjiaming/Downloads/scannet_test/$scene_name.mp4

# Toggle indoor/outdoor model here.
model_ckpt=../weights/indoor_ds.ckpt
# model_ckpt=../weights/outdoor_ds.ckpt

# Optionally assign the GPU ID.
# export CUDA_VISIBLE_DEVICES=0

echo "Running LoFTR demo.."
eval "$(conda shell.bash hook)"
conda activate loftr
python demo_loftr.py --weight $model_ckpt --input $input
# To save the input video and output match visualizations.
# python demo_loftr.py --weight $model_ckpt --input $input --save_video --save_input

# Running on remote GPU servers with no GUI.
# Save images first.
# python demo_loftr.py --weight $model_ckpt --input $input --no_display --output_dir="./demo_images/"
# Then convert them to a video.
# ffmpeg -framerate 15 -pattern_type glob -i '*.png' -c:v libx264 -r 30 -pix_fmt yuv420p out.mp4

Reproduce the testing results with pytorch-lightning

You need to set up the testing subsets of ScanNet and MegaDepth first. We create symlinks from the previously downloaded test sets to data/{scannet,megadepth}/test.

# set up symlinks
ln -s /path/to/scannet-1500-testset/* /path/to/LoFTR/data/scannet/test
ln -s /path/to/megadepth-1500-testset/* /path/to/LoFTR/data/megadepth/test
conda activate loftr
# with shell script
bash ./scripts/reproduce_test/indoor_ds.sh

# or
python test.py configs/data/scannet_test_1500.py configs/loftr/loftr_ds.py --ckpt_path weights/indoor_ds.ckpt --profiler_name inference --gpus=1 --accelerator="ddp"

For visualizing the results, please refer to notebooks/visualize_dump_results.ipynb.
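If you only want a quick side-by-side view of the matches from the snippet in "Match image pairs with LoFTR" (rather than the dumped test results), a hedged matplotlib sketch along these lines works; img0 and img1 are the grayscale numpy images, and mkpts0/mkpts1 are the matched keypoints from the inference snippet:

import matplotlib.pyplot as plt
import numpy as np

def plot_matches(img0, img1, mkpts0, mkpts1, max_lines=200):
    # Place both images on one canvas and draw a line per match.
    h = max(img0.shape[0], img1.shape[0])
    canvas = np.zeros((h, img0.shape[1] + img1.shape[1]), dtype=np.uint8)
    canvas[:img0.shape[0], :img0.shape[1]] = img0
    canvas[:img1.shape[0], img0.shape[1]:] = img1
    plt.figure(figsize=(12, 6))
    plt.imshow(canvas, cmap='gray')
    offset = img0.shape[1]  # x-offset of the second image on the canvas
    for (x0, y0), (x1, y1) in zip(mkpts0[:max_lines], mkpts1[:max_lines]):
        plt.plot([x0, x1 + offset], [y0, y1], linewidth=0.5)
    plt.axis('off')
    plt.show()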


Training

See Training LoFTR for more details.

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@article{sun2021loftr,
  title={{LoFTR}: Detector-Free Local Feature Matching with Transformers},
  author={Sun, Jiaming and Shen, Zehong and Wang, Yuang and Bao, Hujun and Zhou, Xiaowei},
  journal={{CVPR}},
  year={2021}
}

Copyright

This work is affiliated with ZJU-SenseTime Joint Lab of 3D Vision, and its intellectual property belongs to SenseTime Group Ltd.

Copyright SenseTime. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.