An Attentive Machine Interface Using Geo-Contextual Awareness for Mobile Vision Tasks

ECAI (2008)

Abstract
The presented work situates attention within the architecture of ambient intelligence, in particular for mobile vision tasks in multimodal interfaces. A major issue for the performance of these services is uncertainty in the visual information, which is rooted in the requirement to index into a huge number of reference images. We propose a system implementation -- the Attentive Machine Interface (AMI) -- that enables contextual processing of multi-sensor information in a probabilistic framework, for example, exploiting contextual information from geo-services to narrow the visual search space down to a subset of relevant object hypotheses. We present a proof of concept with results from bottom-up information processing on experimental tracks and image captures in an urban scenario, extracting object hypotheses in the local context from both (i) mobile image-based appearance and (ii) GPS-based positioning, and verify performance in recognition accuracy (10%) using Bayesian decision fusion. Finally, we demonstrate that top-down information processing -- geo-information priming the recognition method in feature space -- can yield even better results (13%) and more economical computing, verifying the advantage of multi-sensor attentive processing in multimodal interfaces.
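The Bayesian decision fusion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the object hypotheses, prior values, and likelihood values are hypothetical, standing in for a GPS-derived geo-contextual prior over nearby objects and an appearance-based recognition likelihood, combined via Bayes' rule into a posterior over object hypotheses.

```python
def bayes_fuse(prior, likelihood):
    """Fuse a geo-contextual prior P(o) with an appearance likelihood
    P(image | o) into a posterior P(o | image) via Bayes' rule.

    Both arguments map object hypotheses to non-negative scores;
    the result is normalized so the posterior sums to 1.
    """
    unnormalized = {o: prior[o] * likelihood[o] for o in prior}
    z = sum(unnormalized.values())  # evidence P(image), normalization constant
    return {o: p / z for o, p in unnormalized.items()}


# Hypothetical example: three object hypotheses near a GPS fix.
prior = {"opera_house": 0.5, "museum": 0.3, "cafe": 0.2}        # from geo-service
likelihood = {"opera_house": 0.2, "museum": 0.7, "cafe": 0.1}   # from appearance model

posterior = bayes_fuse(prior, likelihood)
best = max(posterior, key=posterior.get)  # decision: most probable hypothesis
```

The geo-prior acts as the attentive cut on the search space: hypotheses with negligible prior mass need not be matched against the reference image database at all, which is what makes the top-down priming variant computationally cheaper.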
Keywords
image capture,bottom-up information processing,multimodal interface,top-down information processing,multi-sensor information,mobile vision tasks,geo-contextual awareness,visual information,feature space,multi-sensor attentive processing,contextual information,attentive machine interface,contextual processing,ambient intelligence,search space,top down,indexation,proof of concept,bottom up,information processing