Stable Viewport-Based Unsupervised Compressed 360 Video Quality Enhancement

IEEE TRANSACTIONS ON BROADCASTING (2024)

Abstract
With the popularity of panoramic cameras and head-mounted displays, a large number of 360-degree videos have been recorded. Because the 2D projection of 360-degree video suffers from geometric distortion and boundary discontinuity, traditional 2D lossy video compression tends to produce more artifacts, so enhancing the quality of compressed 360-degree video is necessary. However, these characteristics of 360-degree video prevent traditional 2D enhancement models from working properly. Previous work therefore tries to obtain a viewport sequence with smaller geometric distortion for enhancement, but such a sequence is difficult to obtain, and the trained enhancement model does not adapt well to a new dataset. To address these issues, we propose a Stable viewport-based Unsupervised compressed 360-degree video Quality Enhancement (SUQE) method. Our method consists of two stages. In the first stage, a new data preparation module adopts saliency-based data augmentation and viewport cropping to generate a training dataset, on which a standard 2D enhancement model is trained. To transfer the trained enhancement model to the target dataset, a shift prediction module is designed that crops a shifted viewport clip as the supervision signal for model adaptation. In the second stage, by comparing the differences between the enhanced original and shifted frames, the Mean Teacher framework is employed to further fine-tune the enhancement model. Experimental results confirm that our method achieves satisfactory performance on the public dataset. The relevant models and code will be released.
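The Mean Teacher framework mentioned above maintains a teacher model whose weights are an exponential moving average (EMA) of the student's, so the pseudo-supervision it provides changes slowly and stably. A minimal sketch of that EMA update (parameter names and the scalar toy weights are illustrative assumptions, not the authors' released code):

```python
# Minimal sketch of the Mean Teacher EMA weight update.
# Assumption: models are represented as {name: value} parameter dicts;
# alpha is the EMA decay (typically close to 1, e.g. 0.999).

def ema_update(teacher_params, student_params, alpha=0.999):
    """Move each teacher parameter slightly toward the student's value."""
    return {
        name: alpha * teacher_params[name] + (1.0 - alpha) * student_params[name]
        for name in teacher_params
    }

# Toy usage with a single scalar "parameter":
teacher = {"w": 0.0}
student = {"w": 1.0}
teacher = ema_update(teacher, student, alpha=0.9)
print(teacher["w"])  # 0.1 after one update
```

In the fine-tuning stage, the student is trained on the enhancement loss while the teacher, updated only through this EMA, supplies the stable targets for the unsupervised adaptation.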
Keywords
360 degrees video,quality enhancement,unsupervised domain adaptation