This project provides an iterative method for colouring geometric models reconstructed from RGB-D video. Consumer depth cameras are now widely available, and developing robust tools for working with their RGB-D recordings is a relevant and interesting task.
Click here to see the project presentation.
This project was developed as a final assignment for the course Fundamentals and Trends in Vision and Image Processing at IMPA. It is based on joint work by Qian-Yi Zhou and Vladlen Koltun presented at SIGGRAPH 2014 (pdf).
In the past few years, geometric reconstruction from RGB-D data has become an important tool in computer graphics and vision; the related problem of using the RGB images to colour the reconstructed geometry is the main task of this project.
Given a reconstructed geometric model (mesh or point cloud), its source RGB-D video, and an estimated camera trajectory, the implemented method generates an optimized camera trajectory and, from it, a consistent colouring of the reconstructed model.
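To give a rough idea of the colouring step, the sketch below (Python, not the project's actual MATLAB code) projects each model vertex into every selected frame using the estimated pose and averages the observed colours. The pinhole intrinsics `fx, fy, cx, cy`, the world-to-camera pose convention, and all function names are illustrative assumptions; the real method also optimizes the poses and handles visibility, which this sketch omits.

```python
import numpy as np

def project_point(p_world, T_cam, fx, fy, cx, cy):
    """Project a 3D world point into pixel coordinates.
    T_cam is assumed to be the 4x4 world-to-camera transform."""
    p = T_cam @ np.append(p_world, 1.0)
    if p[2] <= 0:  # point behind the camera: no valid projection
        return None
    u = fx * p[0] / p[2] + cx
    v = fy * p[1] / p[2] + cy
    return u, v

def average_colours(vertices, frames, poses, fx, fy, cx, cy):
    """Naive colour integration: average the colour each frame observes
    at each vertex's projection. A real implementation would also test
    occlusion before accepting an observation."""
    h, w, _ = frames[0].shape
    colours = np.zeros((len(vertices), 3))
    counts = np.zeros(len(vertices))
    for img, T in zip(frames, poses):
        for i, p in enumerate(vertices):
            uv = project_point(p, T, fx, fy, cx, cy)
            if uv is None:
                continue
            u, v = int(round(uv[0])), int(round(uv[1]))
            if 0 <= u < w and 0 <= v < h:
                colours[i] += img[v, u]
                counts[i] += 1
    valid = counts > 0
    colours[valid] /= counts[valid, None]
    return colours
```

The quality of the result depends directly on how well the poses align the frames, which is why the pose optimization step comes first.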
For tests and results I used the data available at Qianyi's web page. For a typical colouring, the necessary data are an RGB-D video, the model as a PLY file, and the camera trajectory.
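The trajectory files in those datasets use a simple plain-text .log layout: for each camera, one metadata line followed by four lines holding a 4x4 pose matrix. The sketch below parses that layout under this assumption; the function name and the exact format details are illustrative, not part of this project's code.

```python
import numpy as np

def read_trajectory(text):
    """Parse a camera trajectory given as plain text in the assumed
    .log layout: per camera, one metadata line, then four lines
    containing the rows of a 4x4 pose matrix."""
    lines = [ln for ln in text.strip().splitlines() if ln.strip()]
    poses = []
    for i in range(0, len(lines), 5):  # 1 metadata line + 4 matrix rows
        rows = [list(map(float, lines[i + k].split())) for k in range(1, 5)]
        poses.append(np.array(rows))
    return poses
```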
All of the implementation, with the exception of the frame selection, was done using MATLAB 2015. The developed tools and other useful files can be found in the shared folder at the link.
The main file for camera pose optimization is cameraPose.m, and for color integration it is colorIntegration.m. The extFrames_ONI folder contains a C++/Qt script for frame selection, which can operate on a .ONI file or on a set of images. To read or write PLY files, use the scripts in the ply folder.
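Independently of the C++/Qt tool, the simplest form of frame selection is uniform subsampling of the video; the sketch below shows that idea only (the step size and function name are illustrative choices, not the tool's actual parameters):

```python
def select_frames(n_frames, step):
    """Pick every `step`-th frame index from a video with n_frames frames,
    always including frame 0. Using fewer frames keeps the later pose
    optimization and colour integration tractable."""
    return list(range(0, n_frames, step))
```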
The second method proposed by Qian-Yi Zhou and Vladlen Koltun in their paper is also available in the shared folder. This method proved too slow for this project's purposes, but it can still be improved or used as a base for other projects.