Selective Keyframe Summarisation for Egocentric Videos Based on Semantic Concept Search

2018 IEEE International Conference on Image Processing, Applications and Systems (IPAS), 2018

Cited by 4
Abstract
Large volumes of egocentric video data are being collected every day. While the standard video summarisation approach offers all-purpose summaries, here we propose a method for selective video summarisation. The user can query the video with an unlimited vocabulary of terms. The result is a time-tagged summary of keyframes related to the query concept. Our method uses a pre-trained Convolutional Neural Network (CNN) for the semantic search, and visualises the generated summary as a compass. Two commonly used datasets were chosen for the evaluation: the UTEgo egocentric video dataset and the EDUB lifelog dataset.
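The abstract does not give implementation details, so the snippet below is only a hedged sketch of the general idea, not the authors' method: it scores extracted keyframe candidates against a text query using a pre-trained torchvision ResNet-50, taking the summed softmax probability of ImageNet classes whose names contain the query term as a relevance score. The `query_class_indices` and `select_keyframes` helpers, the substring-based query-to-class mapping, and the assumption that frames and timestamps are already extracted are all illustrative simplifications; the paper's unlimited-vocabulary semantic search and compass visualisation are not reproduced here.

```python
# Illustrative sketch (not the paper's implementation): rank video frames by
# their relevance to a free-text query using a pre-trained CNN classifier.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()            # standard ImageNet preprocessing
categories = weights.meta["categories"]      # 1000 ImageNet class names


def query_class_indices(query: str) -> list[int]:
    """Indices of ImageNet classes whose name contains the query term.
    (A crude stand-in for the paper's unlimited-vocabulary concept search.)"""
    q = query.lower()
    return [i for i, name in enumerate(categories) if q in name.lower()]


@torch.no_grad()
def score_frames(frames: list[Image.Image], query: str) -> list[float]:
    """Per-frame relevance: summed softmax probability over matching classes."""
    idx = query_class_indices(query)
    if not idx:
        return [0.0] * len(frames)
    batch = torch.stack([preprocess(f) for f in frames])
    probs = model(batch).softmax(dim=1)
    return probs[:, idx].sum(dim=1).tolist()


def select_keyframes(frames, timestamps, query, top_k=5):
    """Return (timestamp, score) pairs for the top_k most query-relevant frames,
    i.e. a time-tagged summary for the query concept."""
    scores = score_frames(frames, query)
    ranked = sorted(zip(timestamps, scores), key=lambda t: t[1], reverse=True)
    return ranked[:top_k]
```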
Keywords
egocentric video,video summarisation,keyframe selection,first person vision,semantic search