An Adaptive Feature-fusion Method for Object Matching over Non-overlapped Scenes

Journal of Signal Processing Systems (2013)

Abstract
Object matching across non-overlapping scenes of multiple cameras is a challenging task, due to a large number of factors, e.g. complex backgrounds, illumination variance, the pose of the observed object, differing viewpoints and image resolutions across cameras, shadows, and occlusions. In such a context, matching an object's observations with varying appearances usually reduces to evaluating their similarity over some carefully chosen image features. We observe that each feature is typically robust to a particular kind of variance, e.g. SIFT is robust to variance in viewpoint and scale. We therefore argue that combining the strengths of a bag of such features can yield better performance. Based on these observations and insights, we propose an adaptive feature-fusion algorithm. The algorithm first evaluates the matching accuracy of four carefully chosen and well-validated features: color histogram, UV chromaticity, major color spectrum, and SIFT, using exponential models of entropy as the similarity measure. Second, an adaptive fusion algorithm is presented to fuse the bag of features into a collaborative similarity measure. Our approach is shown to adaptively and dynamically reduce the variance of object appearances caused by multiple factors. Experimental results show that our approach, applied to human matching, achieves high robustness and matching accuracy in comparison with previous fusion methods.
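The fusion step described above can be illustrated with a minimal sketch: each feature produces a similarity score between two observations, and the per-feature matching accuracies serve as adaptive fusion weights. The function name, the weighting rule, and the numeric values are illustrative assumptions; the paper's exact exponential-entropy similarity measure and fusion formula are not reproduced here.

```python
# Hypothetical sketch of accuracy-weighted feature fusion.
# The weighting scheme (normalized accuracy weights) is an assumption,
# not the paper's exact fusion rule.

def fuse_similarities(similarities, accuracies):
    """Combine per-feature similarity scores into one collaborative score.

    similarities: dict mapping feature name -> similarity in [0, 1]
    accuracies:   dict mapping feature name -> estimated matching accuracy
                  in [0, 1], used here as adaptive fusion weights.
    """
    total = sum(accuracies[f] for f in similarities)
    if total == 0:
        raise ValueError("all fusion weights are zero")
    # Weighted average: features with higher estimated accuracy
    # contribute more to the fused similarity.
    return sum(similarities[f] * accuracies[f] for f in similarities) / total

# Example with the four features named in the abstract (scores are made up):
sims = {"color_hist": 0.7, "uv_chroma": 0.6, "major_color": 0.8, "sift": 0.9}
accs = {"color_hist": 0.6, "uv_chroma": 0.5, "major_color": 0.7, "sift": 0.9}
score = fuse_similarities(sims, accs)
```

Because the weights are normalized, the fused score stays in [0, 1] whenever the individual similarities do, which makes it directly usable as a matching score.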
Keywords
Appearance features, Feature fusion, Object matching, Non-overlapping scenes, Performance evaluation