A novel fusion method for integrating multiple modalities and knowledge for multimodal location estimation.

MM '13: ACM Multimedia Conference, Barcelona, Spain, October 2013

Abstract
This article describes a novel fusion approach using multiple modalities and knowledge sources that improves the accuracy of multimodal location estimation algorithms. The problem of "multimodal location estimation," or "placing," involves associating geo-locations with consumer-produced multimedia data, such as videos or photos, that have not been tagged using GPS. Our algorithm effectively integrates data from the visual and textual modalities with external geographical knowledge bases by building a hierarchical model that combines data-driven and semantic methods to group visual and textual features together within geographical regions. We evaluate our algorithm on the MediaEval 2010 Placing Task dataset and show that our system significantly outperforms other state-of-the-art approaches, successfully locating about 40% of the videos to within a radius of 100 m.
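The abstract does not specify the paper's fusion mechanics, but the general idea of combining per-region evidence from a textual and a visual modality can be illustrated with a simple late-fusion sketch. This is an assumption-laden toy, not the paper's algorithm: the region granularity, the weighted log-linear combination, and the example scores are all illustrative.

```python
import math

# Minimal sketch (NOT the paper's actual method): weighted log-linear
# late fusion of per-region scores from two modalities.
def fuse_region_scores(text_scores, visual_scores, w_text=0.7, w_visual=0.3):
    """Combine per-region probabilities from two modalities and
    return candidate regions ranked best-first."""
    fused = {}
    for region in set(text_scores) | set(visual_scores):
        # A small floor avoids log(0) when one modality has no evidence.
        t = max(text_scores.get(region, 0.0), 1e-9)
        v = max(visual_scores.get(region, 0.0), 1e-9)
        fused[region] = w_text * math.log(t) + w_visual * math.log(v)
    return sorted(fused, key=fused.get, reverse=True)

# Toy example: text evidence strongly favors one city while the
# visual evidence is ambiguous between two.
text = {"Barcelona": 0.8, "Madrid": 0.15, "Paris": 0.05}
visual = {"Barcelona": 0.4, "Madrid": 0.4, "Paris": 0.2}
print(fuse_region_scores(text, visual)[0])  # Barcelona
```

In this toy setup the textual modality is weighted more heavily, reflecting the common finding in placing tasks that user-supplied tags carry stronger location signal than visual features alone; the actual paper builds a hierarchical model over geographical regions rather than a flat weighted sum.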