Efficiently Combining Positions and Normals for Precise 3D Geometry
ACM Transactions on Graphics (SIGGRAPH 2005), August 2005

Diego Nehab, Szymon Rusinkiewicz, James Davis, and Ravi Ramamoorthi

Rendering comparisons. Left: range image obtained from triangulation.
Right: our hybrid surface reconstruction, which incorporates both position and normal information.

Abstract

Range scanning, manual 3D editing, and other modeling approaches can provide information about the geometry of surfaces in the form of either 3D positions (e.g., triangle meshes or range images) or orientations (normal maps or bump maps). We present an algorithm that combines these two kinds of estimates to produce a new surface that approximates both. Our formulation is linear, allowing it to operate efficiently on complex meshes commonly used in graphics. It also treats high- and low-frequency components separately, allowing it to optimally combine outputs from data sources such as stereo triangulation and photometric stereo, which have different error-vs.-frequency characteristics. We demonstrate the ability of our technique to both recover high-frequency details and avoid low-frequency bias, producing surfaces that are more widely applicable than position or orientation data alone.
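To make the combination concrete, the sketch below restates the idea in one dimension: noisy height samples play the role of measured positions, measured slopes play the role of normals, and both enter a single linear least-squares system that a standard solver handles directly. This is only an illustrative sketch under simplifying assumptions (a 1-D heightfield, hypothetical weights w_pos and w_norm, and an illustrative function name), not the paper's actual surface formulation.

# Minimal 1-D illustration of combining positions and normals in one
# linear least-squares solve.  Assumption-level sketch, not the paper's
# surface formulation; weights and names are hypothetical.
import numpy as np

def combine_positions_and_normals(z_meas, slope_meas, h=1.0,
                                  w_pos=1.0, w_norm=10.0):
    """Solve  argmin_z  w_pos * ||z - z_meas||^2
                      + w_norm * ||D z - slope_meas||^2,
    where D is the forward-difference operator standing in for the
    normal (slope) constraint.  Both terms are linear in z, so a single
    least-squares call suffices."""
    n = len(z_meas)
    # Position (data) term: identity rows pull z toward the measured heights.
    A_pos = np.sqrt(w_pos) * np.eye(n)
    b_pos = np.sqrt(w_pos) * np.asarray(z_meas, dtype=float)
    # Normal (slope) term: (z[i+1] - z[i]) / h should match the measured slope.
    D = (np.eye(n, k=1) - np.eye(n))[:-1] / h
    A_norm = np.sqrt(w_norm) * D
    b_norm = np.sqrt(w_norm) * np.asarray(slope_meas, dtype=float)[:-1]
    # Stack both terms and solve one linear system.
    A = np.vstack([A_pos, A_norm])
    b = np.concatenate([b_pos, b_norm])
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z

# Toy usage: positions carry the coarse shape but noisy detail; slopes
# (the "normals") carry fine detail but a low-frequency bias.
x = np.linspace(0.0, 2.0 * np.pi, 200)
true_z = np.sin(x) + 0.05 * np.sin(15 * x)
z_meas = true_z + np.random.normal(scale=0.05, size=x.size)  # noisy positions
slope_meas = np.gradient(true_z, x) + 0.2                    # biased slopes
z = combine_positions_and_normals(z_meas, slope_meas, h=x[1] - x[0])

With a larger w_norm the solution keeps the fine detail carried by the slope measurements while the position term anchors the overall shape, mirroring the paper's observation that the two data sources have complementary error-vs.-frequency characteristics.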

Citation

Diego Nehab, Szymon Rusinkiewicz, James Davis, and Ravi Ramamoorthi. Efficiently Combining Positions and Normals for Precise 3D Geometry. ACM Transactions on Graphics (SIGGRAPH 2005), 24(3), August 2005.

Paper
  PDF file

Talk
  SIGGRAPH 2005 talk