Self-Supervised Adversarial Video Summarizer With Context Latent Sequence Learning

IEEE Transactions on Circuits and Systems for Video Technology (2023)

Abstract
Video summarization aims to create a concise and complete synopsis of a video by identifying its most informative and explanatory parts while removing redundant frames, which facilitates efficient video retrieval, management, and browsing. Most existing video summarization approaches either rely heavily on large quantities of high-quality human-annotated labels or fail to produce semantically meaningful summaries under the guidance of prior information. Without any supervised labels, we propose the Self-supervised Adversarial Video Summarizer (2SAVS), which exploits context latent sequence learning to generate a satisfactory video summary. To implement it, our model formulates a novel pretext task of distinguishing latent sequences from normal frames by training a self-supervised generative adversarial network (GAN) with several well-designed losses. As the core components of 2SAVS, Clip Consistency Representation (CCR) and Hybrid Feature Refinement (HFR) are developed to ensure the semantic consistency and continuity of clips. Furthermore, a novel separation loss is designed to explicitly enlarge the distance between predicted frame scores, effectively enhancing the model's discriminative ability. Notably, latent sequences, additional fine-tuning operations, and generators are not required when inferring the video summary. Experiments on two challenging and diverse datasets demonstrate that our approach outperforms other state-of-the-art unsupervised and weakly-supervised methods, and even produces results comparable to those of several strong supervised methods.
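The abstract mentions a separation loss that enlarges the distance between predicted frame scores, but does not give its formula. A plausible margin-based sketch of such a loss is shown below; the function name, the `margin` parameter, and the pairwise hinge formulation are all assumptions for illustration, not the paper's actual definition.

```python
import numpy as np

def separation_loss(scores, margin=0.5):
    """Hypothetical separation loss: penalize pairs of predicted frame
    scores that lie closer together than `margin`, pushing likely-summary
    and non-summary scores apart. The paper's exact formulation is not
    given in the abstract; this is only an illustrative sketch."""
    s = np.asarray(scores, dtype=float)
    diffs = np.abs(s[:, None] - s[None, :])   # pairwise |s_i - s_j|
    mask = ~np.eye(len(s), dtype=bool)        # exclude self-pairs (i == j)
    # Hinge on the margin: zero once a pair is separated by at least `margin`.
    return float(np.mean(np.maximum(0.0, margin - diffs[mask])))
```

With this formulation, well-separated scores (e.g. near 0 and near 1) incur little or no penalty, while scores clustered around the same value are pushed apart.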
Keywords
Self-supervised GAN, video summarization, context latent sequence learning