Global Registration of Dynamic Range Scans for Articulated Model Reconstruction

Will Chang
University of California, San Diego
Matthias Zwicker
University of Bern

In ACM Transactions on Graphics 30(3), May 2011

The articulated global registration algorithm can automatically reconstruct articulated, poseable models from a sequence of single-view dynamic range scans.
Reconstruction results for the Pink Panther dataset with faster input motion. The top row shows some of the input frames in the sequence. Notice that there is a significant amount of occlusion in some of the frames. The middle row shows the reconstructed mesh using the algorithm, with weights obtained by interpolating the weights on the sample set. The bottom row shows the estimated joint locations, where hinge joints are represented by a short bar and ball joints by a sphere. Both the reconstruction results and the weight estimation are faithful to the input data.


We present the articulated global registration algorithm to reconstruct articulated 3D models from dynamic range scan sequences. The algorithm aligns multiple range scans simultaneously to reconstruct a full 3D model from the geometry of these scans. Unlike previous methods, we express the surface motion in terms of a reduced deformable model and solve for joints and skinning weights, which allows a user to interactively manipulate the reconstructed 3D model to create new animations. We formulate the global registration as an optimization of both the alignment of the range scans and the articulated structure of the model. A graph-based representation for the skinning weights handles difficult topological cases well. Joints between parts are estimated automatically and are used in the optimization to preserve the connectivity between parts. The algorithm also robustly handles difficult cases where parts suddenly disappear or reappear in the range scans. The global registration produces a more accurate registration than a sequential approach, because it estimates the articulated structure from the motion observed in all input frames. We show that we can automatically reconstruct a variety of articulated models without markers, user-placed correspondences, segmentation, or a template model.
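To make the reduced deformable model concrete, the sketch below shows the standard linear blend skinning formulation that such a model evaluates: each surface point is transformed by every part's rigid motion, and the results are blended by per-point skinning weights. This is only an illustrative forward evaluation, not the paper's optimization; the function name and array layout are our own choices.

```python
import numpy as np

def blend_skinning(points, weights, rotations, translations):
    """Deform points with linear blend skinning (hypothetical helper).

    points:       (N, 3) rest-pose positions
    weights:      (N, P) skinning weights, each row summing to 1
    rotations:    (P, 3, 3) per-part rotation matrices
    translations: (P, 3) per-part translations
    Returns the (N, 3) deformed positions.
    """
    # Apply every part's rigid transform to every point: shape (P, N, 3)
    transformed = (np.einsum('pij,nj->pni', rotations, points)
                   + translations[:, None, :])
    # Blend the per-part results using the skinning weights: shape (N, 3)
    return np.einsum('np,pni->ni', weights, transformed)
```

For example, a point weighted 0.5/0.5 between a stationary part and a part translated by (2, 0, 0) moves halfway, to (1, 0, 0). In the paper's setting, the weights, rotations, and translations are the unknowns estimated jointly during the global registration.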



[ Paper ] (25.4 MB Adobe PDF)
[ Supplementary Material ] (6.0 MB Adobe PDF)
[ BibTeX ]


[ Video ] (40.4 MB Xvid/AVI)
[ Supplementary Video ] (2.5 MB Xvid/AVI)