Deep Multi Depth Panoramas for View Synthesis

ECCV 2020

Kai-En Lin1 Zexiang Xu1,3 Ben Mildenhall2 Pratul P. Srinivasan2
Yannick Hold-Geoffroy3 Stephen DiVerdi3 Qi Sun3 Kalyan Sunkavalli3 Ravi Ramamoorthi1
1University of California, San Diego 2University of California, Berkeley 3Adobe Research


Abstract

We propose a learning-based approach for novel view synthesis from multi-camera 360° panorama capture rigs. Previous work constructs RGBD panoramas from such data, allowing for view synthesis with small amounts of translation, but cannot handle the disocclusions and view-dependent effects that arise under large translations. To address this issue, we present a novel scene representation, the Multi Depth Panorama (MDP), which consists of multiple RGBDα panoramas that capture both scene geometry and appearance. We present a deep neural network-based method to reconstruct MDPs from multi-camera 360° images. MDPs are more compact than previous 3D scene representations and enable high-quality, efficient novel view rendering. We demonstrate this via experiments on both synthetic and real data, and via comparisons with previous state-of-the-art methods spanning both learning-based approaches and classical RGBD-based techniques.
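To make the representation concrete, below is a minimal NumPy sketch of rendering a novel view from an MDP: each layer is an RGBDα equirectangular panorama, each layer is forward-warped to the translated viewpoint using its per-pixel depth, and the warped layers are alpha-composited back-to-front. This is an illustrative geometric sketch only, not the paper's pipeline; the paper reconstructs MDPs with a deep network and uses its own renderer, and the function names here (sphere_dirs, warp_layer, render_mdp) are hypothetical.

import numpy as np

def sphere_dirs(h, w):
    """Unit ray directions for each pixel of an (h, w) equirectangular panorama."""
    theta = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi        # azimuth in [-pi, pi)
    phi   = (np.arange(h) + 0.5) / h * np.pi - 0.5 * np.pi        # elevation in [-pi/2, pi/2)
    phi, theta = np.meshgrid(phi, theta, indexing="ij")
    return np.stack([np.cos(phi) * np.sin(theta),
                     np.sin(phi),
                     np.cos(phi) * np.cos(theta)], axis=-1)        # (h, w, 3)

def warp_layer(rgb, depth, alpha, t):
    """Forward-splat one RGBD-alpha panorama layer to a viewpoint translated by t."""
    h, w = depth.shape
    pts = sphere_dirs(h, w) * depth[..., None] - t                 # layer points in target frame
    r = np.linalg.norm(pts, axis=-1)
    theta = np.arctan2(pts[..., 0], pts[..., 2])
    phi = np.arcsin(np.clip(pts[..., 1] / np.maximum(r, 1e-8), -1.0, 1.0))
    u = np.clip(((theta + np.pi) / (2.0 * np.pi) * w).astype(int), 0, w - 1)
    v = np.clip(((phi + 0.5 * np.pi) / np.pi * h).astype(int), 0, h - 1)
    tgt = (v * w + u).ravel()                                      # flattened target pixel ids
    zbuf = np.full(h * w, np.inf)
    np.minimum.at(zbuf, tgt, r.ravel())                            # nearest splat wins per pixel
    keep = r.ravel() <= zbuf[tgt] + 1e-6
    out_rgb, out_a = np.zeros((h * w, 3)), np.zeros(h * w)
    out_rgb[tgt[keep]] = rgb.reshape(-1, 3)[keep]
    out_a[tgt[keep]] = alpha.ravel()[keep]
    return out_rgb.reshape(h, w, 3), out_a.reshape(h, w)

def render_mdp(layers, t):
    """Alpha-composite the warped layers back-to-front (the 'over' operator)."""
    out = np.zeros(layers[0][0].shape)
    for rgb, depth, alpha in layers:                               # ordered far to near
        c, a = warp_layer(rgb, depth, alpha, np.asarray(t, float))
        out = c * a[..., None] + out * (1.0 - a[..., None])
    return out

# Two-layer toy MDP: a gray background at 10 m behind a translucent surface at 2 m.
h, w = 256, 512
far  = (np.full((h, w, 3), 0.5), np.full((h, w), 10.0), np.ones((h, w)))
near = (np.random.rand(h, w, 3), np.full((h, w), 2.0), np.full((h, w), 0.7))
novel = render_mdp([far, near], t=[0.1, 0.0, 0.0])                 # move 10 cm along x

The back-to-front over-compositing is what makes the per-layer α channel useful for soft layer boundaries and semi-transparent content; the nearest-neighbor splatting above leaves holes under large translations, which is the regime the learned MDP reconstruction is designed to handle.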


Download

Paper
Supplementary Materials
Code

BibTeX

@inproceedings{lin2020mdp,
  title={Deep Multi Depth Panoramas for View Synthesis},
  author={Lin, Kai-En and Xu, Zexiang and Mildenhall, Ben and Srinivasan, Pratul P and
    Hold-Geoffroy, Yannick and DiVerdi, Stephen and Sun, Qi and Sunkavalli, Kalyan and Ramamoorthi, Ravi},
  year={2020},
  booktitle={ECCV},
}