Through the Looking Glass: Neural 3D Reconstruction of Transparent Shapes

CVPR 2020 (Oral Presentation)

Zhengqin Li*, Yu-Ying Yeh*, Manmohan Chandraker
University of California, San Diego
* indicates equal contribution


Abstract:

Recovering the 3D shape of transparent objects from a small number of unconstrained natural images is an ill-posed problem. The complex light paths induced by refraction and reflection have prevented both traditional and deep multiview stereo from solving this challenge. We propose a physically-based network that recovers the 3D shape of transparent objects from a few images acquired with a mobile phone camera, under a known but arbitrary environment map. Our novel contributions include a normal representation that enables the network to model complex light transport through local computation, a rendering layer that models refractions and reflections, a cost volume specifically designed for normal refinement of transparent shapes, and a feature mapping based on predicted normals for 3D point cloud reconstruction. We render a synthetic dataset to encourage the model to learn refractive light transport across different views. Our experiments show successful recovery of high-quality 3D geometry for complex transparent shapes using as few as 5-12 natural images.
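
To make the rendering layer concrete, the sketch below shows the kind of per-pixel physics such a layer encodes: a single reflection/refraction bounce computed from predicted normals via Snell's law, with a Fresnel blend of the two radiance contributions queried from the known environment map. This is a minimal PyTorch illustration, not the authors' released code; the relative index of refraction eta and the env_lookup environment-map query are assumed placeholders, and a complete model would also trace the ray through the object's back surface.

import torch
import torch.nn.functional as F

def reflect(d, n):
    # Mirror direction d about unit normal n; d points toward the surface.
    return d - 2.0 * (d * n).sum(-1, keepdim=True) * n

def refract(d, n, eta):
    # Snell's law with eta = n_incident / n_transmitted. Returns the
    # refracted direction and a mask marking total internal reflection.
    cos_i = -(d * n).sum(-1, keepdim=True)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    tir = (sin2_t > 1.0).squeeze(-1)
    cos_t = torch.sqrt((1.0 - sin2_t).clamp(min=0.0))
    t = eta * d + (eta * cos_i - cos_t) * n
    return F.normalize(t, dim=-1), tir

def fresnel_schlick(cos_i, eta):
    # Schlick's approximation to the Fresnel reflectance.
    r0 = ((1.0 - eta) / (1.0 + eta)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_i).clamp(min=0.0) ** 5

def shade(view_dir, normal, env_lookup, eta=1.0 / 1.5):
    # One reflection/refraction bounce against a known environment map.
    # env_lookup(dirs) is a placeholder returning RGB radiance per unit
    # direction, e.g. a bilinear lookup into a latitude-longitude map.
    refl = reflect(view_dir, normal)
    refr, tir = refract(view_dir, normal, eta)
    cos_i = (-(view_dir * normal).sum(-1, keepdim=True)).clamp(0.0, 1.0)
    fr = fresnel_schlick(cos_i, eta)
    color = fr * env_lookup(refl) + (1.0 - fr) * env_lookup(refr)
    # Under total internal reflection, all radiance follows the reflected ray.
    return torch.where(tir.unsqueeze(-1), env_lookup(refl), color)

Because every operation above is differentiable, the rendering error can be back-propagated to the predicted normals, which is what lets a rendering layer of this kind supervise shape estimation.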

Download

Paper [arXiv]
Dataset · Real Meshes (191 MB)
Models: Network, Dataset Creation, Renderer, Real Data

Bibtex

@inproceedings{li2020through,
  title={Through the Looking Glass: Neural 3D Reconstruction of Transparent Shapes},
  author={Li, Zhengqin and Yeh, Yu-Ying and Chandraker, Manmohan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={1262--1271},
  year={2020}
}