Deep Reference Frame Generation Method for VVC Inter Prediction Enhancement

IEEE Trans. Circuits Syst. Video Technol. (2024)

Abstract
In video coding, inter prediction reduces temporal redundancy by using previously encoded frames as references, so the quality of the reference frames is crucial to its performance. This paper presents a deep reference frame generation method to enhance inter prediction in Versatile Video Coding (VVC). Specifically, reconstructed frames are fed to a purpose-built frame generation network that synthesizes a picture resembling the frame currently being encoded. The synthesized picture is inserted into the reference picture list (RPL) as an additional reference frame, providing a more reliable reference for subsequent motion estimation (ME) and motion compensation (MC). The frame generation network employs optical flow to predict motion precisely. Moreover, an optical flow reorganization strategy is proposed that supports both bi-directional and uni-directional prediction with a single network architecture. To integrate the method into VVC cleanly, we further introduce a normative modification of temporal motion vector prediction (TMVP). Integrated into the VVC reference software VTM-15.0, the deep reference frame generation method achieves coding efficiency improvements of 5.22%, 3.61%, and 3.83% for the Y component under the random access (RA), low delay B (LDB), and low delay P (LDP) configurations, respectively. The proposed method has been discussed at Joint Video Experts Team (JVET) meetings and is currently part of the Exploration Experiments (EE) for further study.
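The core idea — warping a reconstructed frame toward the current frame with a dense optical flow field, then offering the result as an extra reference — can be sketched in a few lines. The snippet below is an illustrative stand-in, not the paper's network: `warp_frame` performs plain bilinear backward warping with a given flow field (the paper learns both the flow and a synthesis refinement), and the reference-list insert mimics how the synthesized picture would join the RPL for ME/MC.

```python
import numpy as np

def warp_frame(ref, flow):
    """Backward-warp a grayscale reference frame with a dense flow field.

    ref:  (H, W) array, the reconstructed reference frame.
    flow: (H, W, 2) array of per-pixel (dx, dy) motion vectors mapping
          each target pixel to its source location in `ref`.
    Returns the warped frame via bilinear sampling. This is a toy
    approximation of the paper's learned frame generation network.
    """
    H, W = ref.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    # Source sampling coordinates, clamped to the frame borders.
    src_x = np.clip(xs + flow[..., 0], 0, W - 1)
    src_y = np.clip(ys + flow[..., 1], 0, H - 1)
    x0 = np.floor(src_x).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    y0 = np.floor(src_y).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    wx, wy = src_x - x0, src_y - y0
    # Bilinear interpolation of the four neighbouring samples.
    top = ref[y0, x0] * (1 - wx) + ref[y0, x1] * wx
    bot = ref[y1, x0] * (1 - wx) + ref[y1, x1] * wx
    return top * (1 - wy) + bot * wy

if __name__ == "__main__":
    # A horizontal ramp shifted right by one pixel: warped[y, x] = x + 1
    # away from the border (flow values here are illustrative).
    ref = np.tile(np.arange(8.0), (4, 1))
    flow = np.dstack([np.ones((4, 8)), np.zeros((4, 8))])
    synthesized = warp_frame(ref, flow)
    # Mimic the RPL insertion: the synthesized picture becomes an
    # additional reference candidate for subsequent ME/MC.
    rpl = [ref]
    rpl.insert(0, synthesized)
    print(synthesized[0, 0], synthesized[0, 7])
```

In the actual codec integration the synthesized picture is managed alongside normal decoded pictures in the RPL, which is why the paper also needs the normative TMVP modification: the extra reference has no real temporal motion field of its own.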
Key words
neural-network-based video coding, Versatile Video Coding (VVC), inter prediction, deep learning