A deep audio-visual model for efficient dynamic video summarization

Journal of Visual Communication and Image Representation (2024)

Abstract
The adage "a picture is worth a thousand words" resonates in the digital video domain: a video, composed of countless frames, can be viewed as a composition of millions of such words. Video summarization condenses lengthy videos while retaining their crucial content, grouping shots from segments into cohesive visual units (scenes), which is why it has gained prominence. Although existing techniques based on keyframes or keyshots are effective, integrating the audio component remains essential. This paper applies deep learning to generate dynamic summaries enriched with audio. To address this gap, an efficient model employs audio-visual features, producing more robust and informative video summaries. The model selects keyshots according to their significance scores, thereby safeguarding essential content; assigning these scores to individual video shots is a pivotal yet demanding step in video summarization. The model is evaluated on the benchmark datasets TVSum and SumMe. Experimental results demonstrate its efficacy, with considerable performance gains: F-Scores of 79.33% on TVSum and 66.78% on SumMe, surpassing previous state-of-the-art techniques.
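The abstract does not give implementation details, so the sketch below is only a rough illustration of how an audio-visual keyshot pipeline of this kind is commonly assembled: per-shot audio and visual features are fused, mapped to significance scores, and keyshots are selected under a summary-length budget (the 15% budget typically used in the TVSum/SumMe evaluation protocol). The feature dimensions, the linear scoring stand-in, and the greedy budget selection are assumptions for illustration, not the authors' method.

```python
# Hypothetical sketch of an audio-visual keyshot-selection pipeline.
# NOT the paper's implementation: dimensions, the scoring function, and
# the greedy budget selection are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Assume per-shot features are already extracted, e.g. 128-D audio
# embeddings (VGGish-style) and 1024-D visual embeddings pooled per shot.
num_shots = 20
audio_feats = rng.normal(size=(num_shots, 128))
visual_feats = rng.normal(size=(num_shots, 1024))
shot_lengths = rng.integers(30, 300, size=num_shots)   # frames per shot

def score_shots(audio, visual):
    """Fuse audio-visual features and map them to one significance score
    per shot. A random linear projection stands in for the learned model."""
    fused = np.concatenate([audio, visual], axis=1)
    w = rng.normal(size=fused.shape[1])
    logits = fused @ w
    return 1.0 / (1.0 + np.exp(-logits))                # scores in (0, 1)

def select_keyshots(scores, lengths, budget_ratio=0.15):
    """Pick keyshots by descending score until the summary reaches the
    budget (15% of total video length). A knapsack solver is often used
    in place of this simple greedy pass."""
    budget = budget_ratio * lengths.sum()
    selected, used = [], 0
    for idx in np.argsort(-scores):
        if used + lengths[idx] <= budget:
            selected.append(int(idx))
            used += lengths[idx]
    return sorted(selected)

scores = score_shots(audio_feats, visual_feats)
keyshots = select_keyshots(scores, shot_lengths)
print("selected keyshots:", keyshots)
```

The selected keyshots would then be concatenated in temporal order to form the dynamic summary, and the resulting frame-level selection compared against user annotations to compute the F-Score reported on TVSum and SumMe.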
Keywords
Video Skimming, VGGish, SumMe, TVSum, Visualization of score curves