Incorporating Geo-Tagged Mobile Videos into Context-Aware Augmented Reality Applications

2016 IEEE Second International Conference on Multimedia Big Data (BigMM)(2016)

Abstract
In recent years, augmented reality (AR) has attracted extensive attention from both the research community and industry as a new form of media that mixes virtual content into the physical world. However, the scarcity of AR content and the lack of user context are major impediments to providing rich, dynamic multimedia content in AR applications. In this study, we propose an approach to search and filter big multimedia data, specifically geo-tagged mobile videos, for context-aware AR applications. The challenge is to automatically identify interesting video segments within the huge volume of user-generated mobile video, one of the largest sources of multimedia big data, so that they can be efficiently incorporated into AR applications. We model the significance of video segments as AR content by adopting camera shooting patterns defined in filming, such as panning, zooming, tracking, and arcing. We then propose several efficient algorithms that search for such patterns using fine-grained geospatial properties of the videos, such as camera locations and viewing directions over time. Experiments with a real-world geo-tagged video dataset show that the proposed algorithms effectively search a large collection of user-generated mobile videos to identify the top-K significant video segments.
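To make the pattern-search idea concrete, the sketch below shows one plausible way to detect a single pattern, panning, from a video's geospatial metadata: the camera stays roughly stationary while its viewing direction sweeps monotonically. This is an illustrative reconstruction under assumed thresholds (`max_move_m`, `min_sweep_deg` are hypothetical), not the paper's actual algorithm or scoring model.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_panning(samples, max_move_m=2.0, min_sweep_deg=30.0):
    """Classify a segment as 'panning': camera roughly stationary while the
    viewing direction sweeps monotonically.
    samples: list of (lat, lon, heading_deg) sampled over time.
    Thresholds are illustrative assumptions, not values from the paper."""
    if len(samples) < 2:
        return False
    # Camera must stay within max_move_m of its starting position.
    lat0, lon0, _ = samples[0]
    if any(haversine_m(lat0, lon0, lat, lon) > max_move_m
           for lat, lon, _ in samples[1:]):
        return False
    # Heading deltas between consecutive samples, wrapped to (-180, 180].
    deltas = [(h2 - h1 + 180.0) % 360.0 - 180.0
              for (_, _, h1), (_, _, h2) in zip(samples, samples[1:])]
    # Monotonic sweep: all deltas share one sign, total sweep large enough.
    if not (all(d >= 0 for d in deltas) or all(d <= 0 for d in deltas)):
        return False
    return abs(sum(deltas)) >= min_sweep_deg
```

Other patterns mentioned in the abstract would use analogous tests on the same metadata, e.g. tracking as sustained camera motion with a near-constant heading, and zooming as a narrowing field of view at a fixed position.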
Keywords
Multimedia Content, Geo-tagging, Mobile Video, Augmented Reality