Self-Taught Recovery Of Depth Data
2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)
Abstract
Depth data captured by Kinect provides inexpensive geometric information for higher-level computer vision tasks such as object detection and recognition. However, the depth map contains missing values at object boundaries and in regions beyond the working distance of the Kinect, due to limitations of the hardware. In this paper, we propose a self-taught regression method to recover the missing depth data. First, a rough estimate of the scene depth is made from the Kinect color image. We then train a random forest on the estimated depth and the intensity values in the neighborhood of each pixel whose depth was captured by Kinect. The random forest predicts the missing depth data in a self-taught manner: the pixels with the largest number of valid neighbors are predicted first and then added to the training set for the next round of prediction. This repeats until all missing data are recovered. Experimental results show that our method outperforms existing approaches to depth recovery.
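The iterative self-taught scheme described in the abstract can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the window radius, forest size, feature layout, and function names are all assumptions, and the rough color-based depth initialization is omitted (missing pixels are simply marked NaN and filled from valid ones).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def neighborhood_features(intensity, depth, y, x, r=1):
    # Stack intensity and depth (NaN -> 0) from the (2r+1)^2 window
    # around pixel (y, x), clamping coordinates at image borders.
    h, w = depth.shape
    feats = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            yy = min(max(y + dy, 0), h - 1)
            xx = min(max(x + dx, 0), w - 1)
            d = depth[yy, xx]
            feats.append(float(intensity[yy, xx]))
            feats.append(0.0 if np.isnan(d) else float(d))
    return feats

def self_taught_recover(depth, intensity, r=1, max_rounds=100):
    # depth: 2D array with np.nan at missing pixels; intensity: 2D array.
    depth = depth.astype(float).copy()
    h, w = depth.shape
    for _ in range(max_rounds):
        valid = ~np.isnan(depth)
        missing = np.argwhere(~valid)
        if len(missing) == 0:
            break
        # Train on every pixel with currently known depth.
        train_px = np.argwhere(valid)
        X = [neighborhood_features(intensity, depth, y, x, r)
             for y, x in train_px]
        forest = RandomForestRegressor(n_estimators=20, random_state=0)
        forest.fit(X, depth[valid])
        # Self-taught step: predict only the missing pixels with the
        # largest number of valid neighbors, then fold them into the
        # training data for the next round.
        counts = np.array([
            sum(valid[min(max(y + dy, 0), h - 1), min(max(x + dx, 0), w - 1)]
                for dy in range(-r, r + 1) for dx in range(-r, r + 1))
            for y, x in missing])
        best = missing[counts == counts.max()]
        Xq = [neighborhood_features(intensity, depth, y, x, r)
              for y, x in best]
        depth[best[:, 0], best[:, 1]] = forest.predict(Xq)
    return depth
```

Processing the best-supported pixels first means each prediction leans mostly on genuinely observed depth rather than on earlier predictions, which is the point of the self-taught ordering.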
Keywords
self-taught depth data recovery, Kinect data, geometric information, computer vision, object detection, object recognition, depth map, object boundary, self-taught regression method, rough scene depth estimation, color image, random forest training