AMDCNet: An attentional multi-directional convolutional network for stereo matching

Displays (2022)

Abstract
Stereo matching refers to finding the correspondence of a point in the real world between two different storage media (e.g., intensity images, depth images, three-dimensional points). Existing stereo matching methods in the literature exhibit two shortcomings. Firstly, during the feature region extraction of stereo matching, these methods require measuring the distance of regions, but measuring the texture distribution of a region is difficult and might lead to matching failure. Secondly, the templates used in these methods are rectangles of a fixed size, while most natural images exhibit rich information and are better served by flexible templates. In this paper, we propose an attentional multi-directional convolutional network (AMDCNet) to circumvent these issues. Our AMDCNet approach consists of three stages: extracting the visual sensitivity factor, constructing the multi-directional aggregation template, and optimizing via left–right consistency detection. We evaluate our approach using standard images from the Middlebury test dataset, Scene Flow, and KITTI 2015. Experimental results show that AMDCNet reduces the mismatch rate and yields a significant improvement in accuracy compared with some classical methods. In some scenarios, it surpasses advanced deep-learning-based methods. The model code, dataset, and experimental results of this paper are available at: https://github.com/WangHewei16/Attentional-Multi-Directional-Convolution-Network.
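The third stage mentioned in the abstract, left–right consistency detection, is a standard post-processing step in stereo matching: a disparity estimate at a left-image pixel is kept only if the matched right-image pixel maps back (within a tolerance) to the same disparity. The sketch below is a generic NumPy illustration of that check, not the paper's actual implementation; the function name and the `threshold` parameter are illustrative assumptions.

```python
import numpy as np

def left_right_consistency(disp_left, disp_right, threshold=1.0):
    """Generic left-right consistency check (illustrative, not the paper's code).

    disp_left[y, x] is the disparity of pixel (y, x) in the left image; its
    correspondence in the right image lies at column x - disp_left[y, x].
    A pixel passes when |d_L(x) - d_R(x - d_L(x))| <= threshold.
    Returns a boolean mask of consistent pixels.
    """
    h, w = disp_left.shape
    xs = np.arange(w)
    valid = np.zeros((h, w), dtype=bool)
    for y in range(h):
        # Column of each left pixel's match in the right image.
        x_r = np.round(xs - disp_left[y]).astype(int)
        in_bounds = (x_r >= 0) & (x_r < w)
        diff = np.abs(disp_left[y, in_bounds] - disp_right[y, x_r[in_bounds]])
        valid[y, in_bounds] = diff <= threshold
    return valid
```

Pixels failing the check (typically occlusions or mismatches) would then be invalidated or filled from neighboring consistent disparities.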
Keywords
Stereo matching, Convolution aggregation network, Visual sensitivity, Cost aggregation