VideoSet

Periodicals (2017)

Cited by 88
Abstract
Highlights
- A large-scale JND-based coded video quality dataset is presented.
- The VideoSet contains 220 5-s sequences in four resolutions coded by H.264/AVC.
- The subjective test procedure, JND data cleaning, and data properties are described.
- The significance and implications of the VideoSet are discussed.
- This work points out a clear path to data-driven perceptual coding.

A new methodology to measure coded image/video quality using the just-noticeable-difference (JND) idea was proposed in Lin et al. (2015). Several small JND-based image/video quality datasets were released by the Media Communications Lab at the University of Southern California in Jin et al. (2016) and Wang et al. (2016) [3]. In this work, we present an effort to build a large-scale JND-based coded video quality dataset. The dataset consists of 220 5-s sequences in four resolutions (i.e., 1920×1080, 1280×720, 960×540 and 640×360). For each of the 880 video clips, we encode it using the H.264/AVC codec with QP = 1, ..., 51 and measure the first three JND points with 30+ subjects. The dataset is called the VideoSet, which is an acronym for Video Subject Evaluation Test (SET). This work describes the subjective test procedure, the detection and removal of outlying measured data, and the properties of the collected JND data. Finally, the significance and implications of the VideoSet to future video coding research and standardization efforts are pointed out. All source/coded video clips as well as measured JND data included in the VideoSet are available to the public on IEEE DataPort (Wang et al., 2016 [4]).
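As a rough illustration of the kind of measurement the abstract describes, the sketch below shows how a single JND point could be read off binary subjective responses: for one source clip, each subject reports whether the clip coded at a given QP looks different from an anchor, and the JND is taken as the smallest QP at which a chosen fraction of subjects notices a difference. The 75% threshold, the synthetic responses, and the helper estimate_jnd are illustrative assumptions only; the actual VideoSet test procedure and outlier cleaning are described in the paper itself.

import numpy as np

def estimate_jnd(responses, qps, threshold=0.75):
    """Estimate one JND point from binary subjective responses.

    responses: array of shape (num_subjects, num_qps); entry is 1 if that
               subject saw a difference between the clip coded at that QP
               and the anchor, 0 otherwise.
    qps:       QP values tested, in increasing order.
    threshold: fraction of subjects that must notice a difference
               (0.75 is an illustrative choice, not the VideoSet rule).
    Returns the smallest QP whose noticing fraction reaches the threshold,
    or None if the threshold is never reached.
    """
    frac_noticed = responses.mean(axis=0)            # fraction of subjects per QP
    above = np.flatnonzero(frac_noticed >= threshold)
    return int(qps[above[0]]) if above.size else None

# Toy example: 30 hypothetical subjects, QPs 20..40 (purely synthetic data).
rng = np.random.default_rng(0)
qps = np.arange(20, 41)
p = np.clip((qps - 25) / 10.0, 0.0, 1.0)             # noticing probability grows with QP
responses = rng.random((30, qps.size)) < p
print("Estimated first JND (QP):", estimate_jnd(responses, qps))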
Keywords
Human visual system (HVS), Just noticeable difference (JND), Video coding, Video quality