1 Yonsei University, 2 Chungnam University, 3 Netcom Solution
This dataset has been used to train convolutional neural networks in our project [1], "High Quality 2D-to-Multiview Contents Generation from Large-scale RGB-D Database," and for our papers [2], [3]:
[1] "High quality 2D-to-multiview contents generation from large-scale RGB-D database," supported by an Institute for Information and Communications Technology Promotion (IITP) grant funded by the Korean government (MSIP) (R0115-16-1007).
[2] Y. Kim, B. Ham, C. Oh, and K. Sohn, "Structure selective depth super-resolution for RGB-D cameras," IEEE Trans. on Image Processing, vol. 25, no. 11, pp. 5527-5538, Nov. 2016.
[3] Y. Kim, H. Jung, D. Min, and K. Sohn, "A deep variational approach for single image depth estimation," IEEE Trans. on Image Processing, 2017 (submitted).
We introduce an RGB-D scene dataset consisting of more than 200 indoor/outdoor scenes. The dataset contains synchronized RGB-D frames from both a Kinect v2 and a ZED stereo camera. For outdoor scenes, we first generate disparity maps using an accurate stereo matching method and convert them to depth maps using the calibration parameters. A per-pixel confidence map for the disparity is also provided. The scenes were captured at various places, e.g., offices, rooms, a dormitory, an exhibition center, streets, and roads, at Yonsei University and Chungnam National University.
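The disparity-to-depth conversion mentioned above can be sketched as follows. This is a minimal illustration of the standard stereo relation depth = focal_length * baseline / disparity; the function name, the focal-length and baseline parameters, and the invalid-disparity threshold are assumptions for illustration, not the actual calibration pipeline used to build the dataset.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, min_disp=0.1):
    """Convert a disparity map (in pixels) to a depth map (in meters).

    Uses the rectified-stereo relation depth = focal * baseline / disparity.
    Pixels with disparity at or below `min_disp` are treated as invalid
    and set to 0 (hypothetical convention for this sketch).
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.zeros_like(disparity)
    valid = disparity > min_disp
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Example: focal length 700 px, baseline 0.12 m, disparity 70 px -> depth 1.2 m
depth_map = disparity_to_depth(np.array([[70.0, 0.0]]), 700.0, 0.12)
```

In practice the conversion would use the rectified calibration parameters of the stereo rig, and the per-pixel confidence map could be used to mask out low-confidence disparities before conversion.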