Inverse Rendering for Complex Indoor Scenes:
Shape, Spatially-Varying Lighting and SVBRDF from a Single Image
CVPR 2020 (Oral presentation)
University of California, San Diego | Adobe
Abstract
We propose a deep inverse rendering framework for indoor scenes. From a single RGB image of an arbitrary indoor scene, we obtain a complete scene reconstruction, estimating shape, spatially-varying lighting, and spatially-varying, non-Lambertian surface reflectance. Our novel inverse rendering network incorporates physical insights, including a spatially-varying spherical Gaussian lighting representation, a differentiable rendering layer to model scene appearance, a cascade structure to iteratively refine the predictions, and a bilateral solver for refinement, allowing us to jointly reason about shape, lighting, and reflectance. Since no existing dataset provides ground-truth, high-quality spatially-varying materials and spatially-varying lighting, we propose novel methods to map complex materials onto existing indoor scene datasets, along with a new physically-based GPU renderer, to create a large-scale, photorealistic indoor dataset. Experiments show that our framework outperforms previous methods and enables novel applications such as photorealistic object insertion and material editing.
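The spatially-varying lighting described above represents the incident illumination at each pixel as a mixture of spherical Gaussian (SG) lobes, each defined by a lobe axis, a sharpness, and an RGB amplitude. A minimal NumPy sketch of evaluating such a mixture along a query direction (the function name, argument layout, and lobe count are illustrative, not taken from the paper's code):

```python
import numpy as np

def eval_sg_mixture(direction, lobe_axes, sharpness, amplitudes):
    """Evaluate a sum of spherical Gaussian lobes along a unit direction.

    Lobe k contributes amplitudes[k] * exp(sharpness[k] * (dot(direction, lobe_axes[k]) - 1)),
    which peaks when the query direction aligns with the lobe axis.

    direction:  (3,)   unit query direction
    lobe_axes:  (K, 3) unit lobe axes
    sharpness:  (K,)   lobe sharpness (larger = narrower lobe)
    amplitudes: (K, 3) per-lobe RGB amplitude
    returns:    (3,)   RGB radiance along `direction`
    """
    cosines = lobe_axes @ direction               # (K,) cos(angle to each lobe axis)
    weights = np.exp(sharpness * (cosines - 1.0)) # (K,) lobe falloff, 1 at the axis
    return weights @ amplitudes                   # (3,) weighted sum of RGB amplitudes

# Example: two lobes with zero sharpness reduce to constant (ambient) light,
# so the result is simply the sum of the amplitudes.
axes = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
sharp = np.array([0.0, 0.0])
amps = np.array([[1.0, 2.0, 3.0], [0.5, 0.5, 0.5]])
radiance = eval_sg_mixture(np.array([0.0, 0.0, 1.0]), axes, sharp, amps)
```

Because each lobe is a smooth exponential of a dot product, this representation is differentiable with respect to all of its parameters, which is what allows a rendering layer built on it to be trained end-to-end.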
Augmenting a synthetic indoor dataset with complex SVBRDFs
Photorealistic real-scene editing, application 1: object insertion
Photorealistic real-scene editing, application 2: material editing
Download
Bibtex
@inproceedings{li2020inverse,
title={Inverse rendering for complex indoor scenes: Shape, spatially-varying lighting and {SVBRDF} from a single image},
author={Li, Zhengqin and Shafiei, Mohammad and Ramamoorthi, Ravi and Sunkavalli, Kalyan and Chandraker, Manmohan},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={2475--2484},
year={2020}
}