nbarba / Py3drec

License: MIT
3D modeling from uncalibrated images

Programming Languages

python

Projects that are alternatives of or similar to Py3drec

simple-sfm
A readable implementation of structure-from-motion
Stars: ✭ 19 (-70.77%)
Mutual labels:  structure-from-motion, 3d-reconstruction
Bundler sfm
Bundler Structure from Motion Toolkit
Stars: ✭ 1,296 (+1893.85%)
Mutual labels:  3d-reconstruction, structure-from-motion
Alicevision
Photogrammetric Computer Vision Framework
Stars: ✭ 2,029 (+3021.54%)
Mutual labels:  3d-reconstruction, structure-from-motion
sfm-disambiguation-colmap
Making Structure-from-Motion (COLMAP) more robust to symmetries and duplicated structures
Stars: ✭ 189 (+190.77%)
Mutual labels:  structure-from-motion, 3d-reconstruction
Meshroom
3D Reconstruction Software
Stars: ✭ 7,254 (+11060%)
Mutual labels:  3d-reconstruction, structure-from-motion
Dagsfm
Distributed and Graph-based Structure from Motion
Stars: ✭ 269 (+313.85%)
Mutual labels:  3d-reconstruction, structure-from-motion
cv-arxiv-daily
🎓 Automatically update CV papers daily using GitHub Actions (updates every 12 hours)
Stars: ✭ 216 (+232.31%)
Mutual labels:  structure-from-motion, 3d-reconstruction
Openmvg
Open Multiple View Geometry library. Basis for 3D computer vision and structure from motion.
Stars: ✭ 3,902 (+5903.08%)
Mutual labels:  3d-reconstruction, structure-from-motion
Theiasfm
An open source library for multiview geometry and structure from motion
Stars: ✭ 647 (+895.38%)
Mutual labels:  structure-from-motion
Awesome Learning Mvs
A list of awesome learning-based multi-view stereo papers
Stars: ✭ 27 (-58.46%)
Mutual labels:  structure-from-motion
Matterport
Matterport3D is a pretty awesome dataset for RGB-D machine learning tasks :)
Stars: ✭ 583 (+796.92%)
Mutual labels:  3d-reconstruction
Boofcv
Fast computer vision library for SFM, calibration, fiducials, tracking, image processing, and more.
Stars: ✭ 706 (+986.15%)
Mutual labels:  structure-from-motion
Scannet
Stars: ✭ 860 (+1223.08%)
Mutual labels:  3d-reconstruction
Teaser Plusplus
A fast and robust point cloud registration library
Stars: ✭ 607 (+833.85%)
Mutual labels:  3d-reconstruction
Mirror
Matchable Image Retrieval by Learning from Surface Reconstruction
Stars: ✭ 44 (-32.31%)
Mutual labels:  3d-reconstruction
3dv tutorial
An Invitation to 3D Vision: A Tutorial for Everyone
Stars: ✭ 571 (+778.46%)
Mutual labels:  3d-reconstruction
Mvs Texturing
Algorithm to texture 3D reconstructions from multi-view stereo images
Stars: ✭ 532 (+718.46%)
Mutual labels:  3d-reconstruction
Floor Sp
Floor-SP: Inverse CAD for Floorplans by Sequential Room-wise Shortest Path, ICCV 2019
Stars: ✭ 54 (-16.92%)
Mutual labels:  3d-reconstruction
Hierarchical Localization
Visual localization made easy with hloc
Stars: ✭ 997 (+1433.85%)
Mutual labels:  structure-from-motion
Kimera
Index repo for Kimera code
Stars: ✭ 802 (+1133.85%)
Mutual labels:  3d-reconstruction

py3DRec

This repository contains a revised implementation of the structure from motion algorithm used in Barbalios et al., "3D Human Face Modelling From Uncalibrated Images Using Spline Based Deformation", VISAPP 2008. Given at least three views of an object and identified feature points across these views, the algorithm generates a 3D reconstruction of the object.

Overview

The algorithm takes a series of features (i.e. 2D coordinates) tracked through a sequence of images (e.g. taken with a common handheld camera), and returns the 3D coordinates of these features in metric space. To make this happen, the algorithm consists of the following steps:

  • Selecting two views (images), e.g. the i-th and j-th views, and computing the fundamental matrix and the epipoles from the corresponding 2D features of these two views. For best results, the two farthest-apart views in the sequence are selected.
  • Estimating the projection matrices for the i-th and j-th view. To do so, the i-th view is assumed to be aligned with the world frame, and the projection matrix for the j-th view can be deduced using the fundamental matrix, the epipole and the reference frame of the reconstruction.
  • Triangulation of the 2D features of the i-th and j-th views, to get an initial estimate of the 3D coordinates of the features.
  • Estimating the projection matrices for all the remaining views, using the 3D points we got from triangulation.
  • Bundle adjustment, i.e. an overall optimization to refine the 3D points and the projection matrices by minimizing the reprojection error of the 3D points back to each view.
  • Self-calibration to estimate the camera intrinsic parameters for each view, and transform the 3D point coordinates from projective to metric space.
  • Delaunay triangulation of the 3D points, to get a 3D structure.

Note: The algorithm used in the above referenced paper included an additional step, where the 3D points were used to deform a generic face model.
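The projective backbone of the first three steps can be sketched in a few lines of NumPy. This is a minimal illustration, not the repository's implementation, and the helper names are made up for the example; it builds the canonical camera pair P1 = [I | 0], P2 = [[e']_x F | e'] from a fundamental matrix and triangulates a correspondence with the standard linear (DLT) method:

```python
import numpy as np

def skew(v):
    """Return the 3x3 skew-symmetric matrix [v]_x, so that skew(v) @ u = np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def projections_from_fundamental(F):
    """Canonical projective cameras for a fundamental matrix F.
    The first view is aligned with the world frame, P1 = [I | 0]; the second
    follows from F and its left epipole e': P2 = [[e']_x F | e']."""
    _, _, vt = np.linalg.svd(F.T)
    e2 = vt[-1]                      # left epipole: e2 @ F = 0
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([skew(e2) @ F, e2.reshape(3, 1)])
    return P1, P2

def triangulate(P1, P2, x1, x2):
    """Standard linear (DLT) triangulation of one 2D correspondence (x1, x2).
    Returns the homogeneous 4-vector of the 3D point."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    return vt[-1]
```

Because the cameras are only defined up to a projective transformation at this stage, the triangulated points live in projective space; the later self-calibration step is what upgrades them to metric coordinates.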

Prerequisites

The following Python packages are required to run the project:

Usage

Code Structure

The code consists of four classes, each one designed for a specific task:

  • ImageSequence: holds the image sequence data, such as the sequence length, the width and height of each image, and the 2D feature coordinates across all images. It also provides a show() method to visualize the image sequence with the 2D features highlighted.
  • EpipolarGeometry: implements epipolar geometry related operations, such as computing the fundamental matrix, finding the epipoles, triangulation, computing homographies, and the reprojection error for one or multiple views.
  • UncalibratedReconstruction: the main class that implements the 3D modeling algorithm. Constructor arguments include:
    • sequence length: the number of images in the sequence
    • width: the width of the images in the sequence
    • height: the height of the images in the sequence
    • triang_method: triangulation method (0: standard triangulation, 1: polynomial triangulation)
    • opt_triang: optimize triangulation result (i.e. 3D points)
    • opt_f: optimize fundamental matrix estimation
    • self_foc: for self-calibration, defines the type of focal length expected across views (0: fixed, 1: varying)
  • RecModel: holds the result of the reconstruction, i.e. the projection matrices, rotation matrices, translation vectors, camera intrinsic parameter vectors and the 3D structure point coordinates. It also contains a method, named export_stl_file, that performs Delaunay triangulation and saves the result in STL format.
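An ASCII STL file is just a list of triangular facets with normals, so the export step boils down to iterating over triangles. The following is a minimal sketch of such a writer, not the actual export_stl_file implementation (which, for instance, may obtain the facets via scipy.spatial.Delaunay); the function name and signature are illustrative:

```python
import numpy as np

def write_ascii_stl(path, vertices, faces, name="reconstructed_model"):
    """Write triangles to an ASCII STL file.
    vertices: (N, 3) array of 3D points; faces: iterable of (i, j, k) vertex indices."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for i, j, k in faces:
            a, b, c = vertices[i], vertices[j], vertices[k]
            n = np.cross(b - a, c - a)                 # facet normal
            n = n / max(np.linalg.norm(n), 1e-12)      # normalize, guard degenerate facets
            f.write(f"  facet normal {n[0]:.6e} {n[1]:.6e} {n[2]:.6e}\n")
            f.write("    outer loop\n")
            for v in (a, b, c):
                f.write(f"      vertex {v[0]:.6e} {v[1]:.6e} {v[2]:.6e}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")
```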

Example

A toy example is provided in the examples folder. It is a sequence of three pictures of a poster on a wall. The 2D features were automatically selected using the SIFT algorithm, and are given in a txt file with the following format: | x coordinate | y coordinate | feature id | image id |.
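A file in that format can be parsed into per-view point arrays with a few lines of NumPy. This is an illustrative sketch, not code from the repository, and it assumes the columns are whitespace-delimited:

```python
import numpy as np

def load_features(path):
    """Parse a feature track file with rows: x  y  feature_id  image_id.
    Returns a dict mapping image_id -> (n_features, 2) array of 2D points,
    with rows sorted by feature_id so the k-th row of every view refers to
    the same physical feature."""
    data = np.loadtxt(path)
    views = {}
    for image_id in np.unique(data[:, 3]):
        rows = data[data[:, 3] == image_id]
        rows = rows[np.argsort(rows[:, 2])]   # align views by feature id
        views[int(image_id)] = rows[:, :2]
    return views
```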

To run the example, use the following command:

python uncalibrated_rec.py --input_file=./example/features_poster.txt --show

The first argument (--input_file) specifies the txt file with the features; the second (--show) displays the image sequence together with the 2D features (omit this flag to skip the visualization).

After the algorithm is executed, two files should be generated:

  • rec_model_cloud.txt: contains the 3D homogeneous coordinates of the reconstructed 3D model
  • reconstructed_model.stl: an STL file of the reconstructed 3D model
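Since the point cloud file stores homogeneous 4-vectors, converting it back to ordinary 3D points is a single division by the last component. A small helper (illustrative; the file is assumed whitespace-delimited with rows [X Y Z W]):

```python
import numpy as np

def homogeneous_to_euclidean(pts_h):
    """Convert an (N, 4) array of homogeneous coordinates to (N, 3) Euclidean
    points by dividing each row by its last component."""
    return pts_h[:, :3] / pts_h[:, 3:4]

# e.g. after a run:
# pts = homogeneous_to_euclidean(np.loadtxt("rec_model_cloud.txt"))
```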

License

This project is licensed under the MIT License - see the LICENSE.md file for details.
