Deep 3D Mask Volume for View Synthesis of Dynamic Scenes

ICCV 2021

Kai-En Lin1 Lei Xiao2 Feng Liu2 Guowei Yang1 Ravi Ramamoorthi1
1 University of California, San Diego 2 Facebook Reality Labs


Image view synthesis has seen great success in reconstructing photorealistic visuals, thanks to deep learning and various novel representations. The next key step in immersive virtual experiences is view synthesis of dynamic scenes. However, several challenges exist due to the lack of high-quality training datasets and the additional time dimension in videos of dynamic scenes. To address these issues, we introduce a multi-view video dataset, captured with a custom 10-camera rig at 120 FPS. The dataset contains 96 high-quality scenes showing various visual effects and human interactions in outdoor settings. We develop a new algorithm, Deep 3D Mask Volume, which enables temporally stable view extrapolation from binocular videos of dynamic scenes captured by static cameras. Our algorithm addresses the temporal inconsistency of disocclusions by identifying the error-prone areas with a 3D mask volume and replacing them with static background observed throughout the video. Our method enables manipulation in 3D space, as opposed to simple 2D masks. We demonstrate better temporal stability than frame-by-frame static view synthesis methods or those that use 2D masks. The resulting view synthesis videos show minimal flickering artifacts and allow for larger translational movements.


Supplementary Materials
Algorithm code
Preprocessing code


Our dataset can be found in the link (same as above). There are two options: (a) raw video files in the folder raw_video; (b) processed hdf5 files in the folder compressed. Files are organized according to the scene they are from. In (b), there are two files for each scene: an h5 file containing the resized frames along with the background and foreground frames, and an npy file storing the camera poses in the LLFF convention.
Please check this link for more details.
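As a rough illustration of how the per-scene npy pose file might be parsed, here is a minimal Python sketch. It assumes the file follows the standard LLFF poses_bounds.npy layout — an (N, 17) array per scene, where each row is a flattened 3x5 matrix (3x4 camera-to-world extrinsics plus a [height, width, focal] column) followed by near/far depth bounds; the function name and the synthetic example array are ours, not part of the released code.

```python
import numpy as np

def load_llff_poses(path_or_array):
    """Parse camera poses stored in the LLFF convention.

    Assumes an (N, 17) array: each row is a flattened 3x5 matrix
    ([R|t] extrinsics plus an [H, W, focal] column) followed by the
    near/far depth bounds -- the layout of LLFF's poses_bounds.npy.
    """
    arr = (path_or_array if isinstance(path_or_array, np.ndarray)
           else np.load(path_or_array))
    poses = arr[:, :15].reshape(-1, 3, 5)  # per-camera 3x5 matrices
    c2w = poses[:, :, :4]                  # 3x4 camera-to-world extrinsics
    hwf = poses[:, :, 4]                   # image height, width, focal length
    bounds = arr[:, 15:17]                 # near/far scene depths
    return c2w, hwf, bounds

# Example with a synthetic 2-camera array standing in for the real file:
dummy = np.concatenate([np.tile(np.eye(3, 5).ravel(), (2, 1)),
                        np.array([[1.0, 10.0], [1.0, 10.0]])], axis=1)
c2w, hwf, bounds = load_llff_poses(dummy)
print(c2w.shape, hwf.shape, bounds.shape)  # (2, 3, 4) (2, 3) (2, 2)
```

The h5 file can be read analogously with h5py once the dataset keys are known; consult the preprocessing code linked above for the exact key names.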

Additional Results

Figure 1. A challenging example with two people moving past each other. Here we show the inputs and comparisons for frame 6. Mildenhall et al. 2019 produces noticeable artifacts, such as the blurred-out bush and the ghosting of the tablet held by the closer person.

Figure 2. Novel view results with a dynamic background. We visualize the movements by replacing RGB with the image intensity of 3 temporal frames. The inset highlights the movements of the vegetation on the left. The background vegetation moves randomly due to strong winds, yet our proposed method still produces high-quality visual results. Please zoom in closely to see the effects.


@inproceedings{lin2021deep,
    title     = {Deep 3D Mask Volume for View Synthesis of Dynamic Scenes},
    author    = {Kai-En Lin and Lei Xiao and Feng Liu and Guowei Yang and Ravi Ramamoorthi},
    booktitle = {ICCV},
    year      = {2021},
}