Physical querying with multi-modal sensing

WACV (2014)

Abstract
We present Marvin, a system that can search physical objects using a mobile or wearable device. It integrates HOG-based object recognition, SURF-based localization information, automatic speech recognition, and user feedback with a probabilistic model to recognize the "object of interest" with high accuracy and at interactive speeds. Once the object of interest is recognized, the information the user is querying, e.g., reviews or options, is displayed on the user's mobile or wearable device. We tested this prototype in a real-world retail store during business hours, with varying degrees of background noise and clutter. We show that this multi-modal approach achieves superior recognition accuracy compared to using a vision system alone, especially in cluttered scenes where a vision system would be unable to distinguish which object is of interest to the user without additional input. The system scales computationally to large numbers of objects by focusing compute-intensive resources on the objects most likely to be of interest, as inferred from user speech and implicit localization information. We present the system architecture, the probabilistic model that integrates the multi-modal information, and empirical results showing the benefits of multi-modal integration.
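The abstract does not give the form of Marvin's probabilistic model, but its description suggests combining per-modality evidence and spending expensive vision compute only on likely candidates. Below is a minimal, hypothetical Python sketch of that idea: cheap modality scores (speech, localization) prune the candidate set, then a naive-Bayes-style log-likelihood fusion ranks the shortlist after vision runs on it. All names (cheap_posterior, recognize, run_vision) and the independence assumption are illustrative, not taken from the paper.

```python
import math

FLOOR = 1e-6  # likelihood floor so a missing modality score cannot zero out a candidate


def cheap_posterior(candidates, speech_scores, location_scores):
    """Log-score candidates using only the cheap modalities.

    speech_scores / location_scores map object_id -> likelihood in (0, 1].
    """
    return {
        obj: math.log(speech_scores.get(obj, FLOOR))
             + math.log(location_scores.get(obj, FLOOR))
        for obj in candidates
    }


def recognize(candidates, speech_scores, location_scores, run_vision, k=10):
    """Prune with cheap modalities, then fuse expensive vision scores.

    run_vision(object_id) -> likelihood in (0, 1]; it stands in for the
    HOG-based matcher and is invoked only for the k best candidates,
    mirroring the scaling strategy the abstract describes.
    """
    scored = cheap_posterior(candidates, speech_scores, location_scores)
    shortlist = sorted(scored, key=scored.get, reverse=True)[:k]
    fused = {obj: scored[obj] + math.log(max(run_vision(obj), FLOOR))
             for obj in shortlist}
    return max(fused, key=fused.get)
```

Under this factorization, user feedback could be folded in the same way, as one more log-likelihood term per candidate.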
Keywords
mobile device, speech recognition, multimodal sensing, multimodal integration, user interfaces, compute-intensive resources, multi-modal approach, Marvin, user feedback, physical querying, multimodal information, cluttered scenes, system architecture, vision system, business hours, object recognition, interactive speeds, superior recognition accuracy, background noise, HOG-based object recognition, wearable device, SURF-based localization information, automatic speech recognition, probabilistic model, feature extraction, speech, visualization, computer architecture