Space-time Hole Filling with Random Walks in View Extrapolation for 3-D Video

Sunghwan Choi, Bumsub Ham, and Kwanghoon Sohn
Dept. of Electrical and Electronic Eng., Yonsei University, Seoul, Korea

In this paper, a space-time hole filling approach is presented to deal with disocclusions that arise when a view is synthesized for 3D video. The problem becomes even more challenging when the view is extrapolated from a single view, since the holes are large and there are no stereo depth cues. Although many techniques have been developed to address this problem, most of them focus only on view interpolation. We propose a space-time joint filling method for color and depth videos in view extrapolation. So that proper texture and depth can be sampled in the subsequent hole filling process, the background of the scene is automatically segmented by random walker segmentation in conjunction with the hole formation process. The patch candidate selection is then formulated as a labeling problem, which can be solved with random walks. The patch candidates that best describe the hole region are dynamically selected in the space-time domain, and the hole is filled with the optimal patch so that both spatial and temporal coherence are ensured. Experimental results show that the proposed method is superior to state-of-the-art methods and provides spatially and temporally consistent results with significantly reduced flicker artifacts.
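
To make the pipeline concrete, the sketch below illustrates the two random-walk stages with our own simplifying assumptions: the random walker separates background from foreground on the warped depth map (here seeded with simple percentile thresholds, whereas the paper ties the seeding to the hole formation process), and the hole is then filled from the background patch candidate, gathered across neighbouring frames, that best matches the known pixels. Function names, thresholds, and the patch size are illustrative, not the values used in the paper.

```python
# Minimal sketch of the two random-walk stages, under our own assumptions
# (percentile seeding and SSD matching); the paper ties the seeding to the
# hole formation process and uses its own candidate-selection energy.
import numpy as np
from skimage.segmentation import random_walker

def segment_background(depth, hole_mask, far_pct=20, near_pct=80):
    """Seed far pixels as background and near pixels as foreground, then let
    the random walker label the remaining pixels. Assumes the MSR convention
    that larger depth values are nearer to the camera."""
    d = depth.astype(np.float64) / 255.0
    valid = d[~hole_mask]
    seeds = np.zeros(d.shape, dtype=np.int32)
    seeds[(d <= np.percentile(valid, far_pct)) & ~hole_mask] = 1   # background
    seeds[(d >= np.percentile(valid, near_pct)) & ~hole_mask] = 2  # foreground
    labels = random_walker(d, seeds, beta=130, mode='bf')
    return labels == 1   # True where the pixel is background

def best_spacetime_patch(target, candidates, psize=9):
    """target: psize x psize (x 3) patch with NaN at the unknown hole pixels.
    candidates: list of (frame, bg_mask, y, x) centres gathered from the
    background of neighbouring frames (the space-time search window).
    Returns the candidate patch with the smallest SSD on the known pixels."""
    half = psize // 2
    known = ~np.isnan(target)
    best, best_cost = None, np.inf
    for frame, bg_mask, y, x in candidates:
        if not bg_mask[y, x]:
            continue                              # sample background only
        cand = frame[y - half:y + half + 1, x - half:x + half + 1]
        if cand.shape != target.shape:
            continue                              # skip truncated border patches
        cost = np.sum((cand[known] - target[known]) ** 2)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best
```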


Experimental Results: "Ballet"

"Ballet" sequence provided by MSR. Virtual view 3 is rendered by using the reference view 4. Parameters are provided in our paper.

Experimental Results: "Break Dancer"

"Break Dancer" sequence provided by MSR. Virtual view 3 is rendered by using the reference view 4. Parameters are provided in our paper.
