
SunghwanHong / Deep-Matching-Prior

Licence: other
Official implementation of Deep Matching Prior


Projects that are alternatives to or similar to Deep-Matching-Prior

ilvr adm
ILVR: Conditioning Method for Denoising Diffusion Probabilistic Models (ICCV 2021 Oral)
Stars: ✭ 133 (+533.33%)
Mutual labels:  iccv2021
CurveNet
Official implementation of "Walk in the Cloud: Learning Curves for Point Clouds Shape Analysis", ICCV 2021
Stars: ✭ 94 (+347.62%)
Mutual labels:  iccv2021
Awesome-ICCV2021-Low-Level-Vision
A Collection of Papers and Codes for ICCV2021 Low Level Vision and Image Generation
Stars: ✭ 163 (+676.19%)
Mutual labels:  iccv2021
Vision-Language-Transformer
Vision-Language Transformer and Query Generation for Referring Segmentation (ICCV 2021)
Stars: ✭ 127 (+504.76%)
Mutual labels:  iccv2021
QmapCompression
Official implementation of "Variable-Rate Deep Image Compression through Spatially-Adaptive Feature Transform", ICCV 2021
Stars: ✭ 27 (+28.57%)
Mutual labels:  iccv2021
neat
[ICCV'21] NEAT: Neural Attention Fields for End-to-End Autonomous Driving
Stars: ✭ 194 (+823.81%)
Mutual labels:  iccv2021
ICCV2021-Paper-Code-Interpretation
A collection of ICCV 2021/2019/2017 papers, code, interpretations, and livestreams, curated by the 极市 team
Stars: ✭ 2,022 (+9528.57%)
Mutual labels:  iccv2021
LLVIP
LLVIP: A Visible-infrared Paired Dataset for Low-light Vision
Stars: ✭ 438 (+1985.71%)
Mutual labels:  iccv2021
HCFlow
Official PyTorch code for Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling (HCFlow, ICCV2021)
Stars: ✭ 140 (+566.67%)
Mutual labels:  iccv2021
PlaneTR3D
[ICCV'21] PlaneTR: Structure-Guided Transformers for 3D Plane Recovery
Stars: ✭ 58 (+176.19%)
Mutual labels:  iccv2021
cycle-confusion
Code and models for ICCV2021 paper "Robust Object Detection via Instance-Level Temporal Cycle Confusion".
Stars: ✭ 67 (+219.05%)
Mutual labels:  iccv2021
snarf
Official code release for ICCV 2021 paper SNARF: Differentiable Forward Skinning for Animating Non-rigid Neural Implicit Shapes.
Stars: ✭ 184 (+776.19%)
Mutual labels:  iccv2021
Meta-SelfLearning
Meta Self-learning for Multi-Source Domain Adaptation: A Benchmark
Stars: ✭ 157 (+647.62%)
Mutual labels:  iccv2021
gnerf
[ICCV 2021 Oral] Our method can estimate camera poses and neural radiance fields jointly when the cameras are initialized at random poses in complex scenarios (outside-in scenes, even with little texture or intense noise)
Stars: ✭ 152 (+623.81%)
Mutual labels:  iccv2021
G-SFDA
code for our ICCV 2021 paper 'Generalized Source-free Domain Adaptation'
Stars: ✭ 88 (+319.05%)
Mutual labels:  iccv2021
renet
[ICCV'21] Official PyTorch implementation of Relational Embedding for Few-Shot Classification
Stars: ✭ 72 (+242.86%)
Mutual labels:  iccv2021
dti-sprites
(ICCV 2021) Code for "Unsupervised Layered Image Decomposition into Object Prototypes" paper
Stars: ✭ 33 (+57.14%)
Mutual labels:  iccv2021
MSRGCN
Official implementation of MSR-GCN (ICCV2021 paper)
Stars: ✭ 42 (+100%)
Mutual labels:  iccv2021
SnowflakeNet
(TPAMI 2022) Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer
Stars: ✭ 74 (+252.38%)
Mutual labels:  iccv2021
MVP Benchmark
MVP Benchmark for Multi-View Partial Point Cloud Completion and Registration
Stars: ✭ 74 (+252.38%)
Mutual labels:  iccv2021

Deep-Matching-Prior (ICCV2021)

Official implementation of Deep Matching Prior. The code will be available soon.

Meanwhile, check out our paper on [arXiv]!
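
For readers unfamiliar with the idea, the sketch below illustrates, in broad strokes, what test-time optimization for dense correspondence can look like: a network's parameters are optimized on a single image pair at inference time, with no training set involved. This is only a minimal illustration under generic assumptions; the function and variable names are hypothetical, and it is not the authors' released implementation.

import torch
import torch.nn.functional as F

def test_time_optimize(net, src, tgt, steps=300, lr=1e-3):
    # src, tgt: (1, 3, H, W) image tensors on the same scale.
    # net: any randomly initialized module (hypothetical here) mapping the
    #      concatenated pair to a dense flow field of shape (1, 2, H, W)
    #      expressed in normalized [-1, 1] coordinates.
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    _, _, h, w = src.shape
    # Identity sampling grid in [-1, 1], shape (1, H, W, 2), as used by grid_sample.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)

    for _ in range(steps):
        flow = net(torch.cat((src, tgt), dim=1))      # predict per-pixel offsets
        grid = base_grid + flow.permute(0, 2, 3, 1)   # displace the identity grid
        warped = F.grid_sample(src, grid, align_corners=True)
        loss = F.l1_loss(warped, tgt)                 # photometric reconstruction loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return flow.detach()                              # dense correspondence estimate

Refer to the paper linked above for the actual objective and network design used in Deep Matching Prior.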


@InProceedings{Hong_2021_ICCV,
    author    = {Hong, Sunghwan and Kim, Seungryong},
    title     = {Deep Matching Prior: Test-Time Optimization for Dense Correspondence},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {9907-9917}
}