Abstract:
We present an image-based relighting method that can synthesize scene appearance under novel, distant illumination from the visible hemisphere, using only five images captured under predefined directional lights. Our method uses a deep convolutional neural network to regress the relit image from these five images; this relighting network is trained on a large synthetic dataset composed of procedurally generated shapes with real-world reflectances. We show that by combining a custom-designed sampling network with the relighting network, we can jointly learn both the optimal input light directions and the relighting function. We present an extensive evaluation of our network, including an empirical analysis of reconstruction quality, optimal lighting configurations for different scenarios, and alternative network architectures. We demonstrate, on both synthetic and real scenes, that our method is able to reproduce complex, high-frequency lighting effects like specularities and cast shadows, and outperforms other image-based relighting methods that require an order of magnitude more images.
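The method rests on the linearity of light transport: under distant illumination, the image of a scene is a weighted combination of images captured under basis (directional) lights. The sketch below illustrates this principle with NumPy; all names and shapes are illustrative, and the paper itself replaces the fixed linear combination with a learned CNN that also selects which five light directions to sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny illustrative setup: a 4x4 image under five basis light directions
# (five is the number of input lights used in the paper).
H, W, N = 4, 4, 5
basis_images = rng.random((N, H, W))  # I_i: image under light direction i

def relight(basis_images, weights):
    """Synthesize appearance under a novel light as sum_i w_i * I_i.

    This is the classical image-based relighting baseline; the paper's
    network learns a nonlinear mapping instead, which is why it can
    reproduce high-frequency effects from only five samples.
    """
    weights = np.asarray(weights, dtype=float)
    return np.tensordot(weights, basis_images, axes=1)

# A one-hot weight vector reproduces the corresponding basis image exactly.
out = relight(basis_images, [0.0, 1.0, 0.0, 0.0, 0.0])
assert np.allclose(out, basis_images[1])
```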
Download
Data and code
Data and code are released for academic and non-commercial use only. Copyright held by Owner/Author 2018.
Code | Data (external)
BibTeX
@article{xu2018deep,
title={Deep image-based relighting from optimal sparse samples},
author={Xu, Zexiang and Sunkavalli, Kalyan and Hadap, Sunil and Ramamoorthi, Ravi},
journal={ACM Transactions on Graphics (TOG)},
volume={37},
number={4},
pages={126},
year={2018},
publisher={ACM}
}