Deep Spatial and Temporal Information based QoE Evaluation Model for HTTP Adaptive Streaming

CSAI (2021)

Abstract
The content characteristics of a video are among the key factors influencing the user's Quality of Experience (QoE). In this paper, deep spatial and temporal information are extracted to characterize video content and are then used to build a QoE evaluation model for HTTP adaptive streaming. First, a Gabor convolutional layer and Channel Attention (CA) are incorporated into ResNet18 to construct the Gabor-CA-ResNet18 network, which captures the Deep Spatial Information (DSI) of the video. To avoid the "curse of dimensionality", LargeVis is applied to reduce the dimensionality of the DSI features and improve their representativeness and discriminative ability, yielding a compact feature vector. Second, a 3D Convolutional Neural Network (3D CNN) and a Gated Recurrent Unit (GRU) are combined, forming a 3D CNN-GRU, to capture the Deep Temporal Information (DTI) of the video. Finally, the DSI and DTI features are concatenated with statistical features of other influencing factors, including video quality level, re-buffering duration, and re-buffering frequency, to form the feature parameter vector. Gradient Boosting is adopted to learn the mapping from this feature vector to the Mean Opinion Score (MOS), which is then used to predict the user's QoE. Experimental results on the SQoE-III and SQoE-IV datasets demonstrate that the proposed model achieves state-of-the-art performance compared with existing QoE evaluation models.
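The final stage of the pipeline, fusing the DSI, DTI, and streaming-statistic features and regressing MOS with Gradient Boosting, can be illustrated with a minimal sketch. This is not the authors' implementation: the feature dimensions, the specific statistics, and the synthetic data below are assumptions for illustration only; it uses scikit-learn's GradientBoostingRegressor as a stand-in for the paper's Gradient Boosting step.

```python
# Minimal sketch of the feature-fusion + Gradient Boosting MOS regression stage.
# Feature dimensions, statistic choices, and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_sessions = 200                               # number of streaming sessions (assumed)

# Assumed per-session feature blocks:
dsi = rng.normal(size=(n_sessions, 32))        # LargeVis-reduced deep spatial features
dti = rng.normal(size=(n_sessions, 16))        # 3D CNN-GRU deep temporal features
stats = np.column_stack([                      # statistical influencing factors
    rng.integers(0, 5, n_sessions),            # mean video quality level
    rng.exponential(2.0, n_sessions),          # re-buffering duration (s)
    rng.integers(0, 4, n_sessions),            # re-buffering frequency
])

X = np.hstack([dsi, dti, stats])               # fused feature parameter vector
mos = rng.uniform(1.0, 5.0, n_sessions)        # placeholder MOS labels

X_train, X_test, y_train, y_test = train_test_split(X, mos, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
```

In practice the placeholder feature blocks would be replaced by the Gabor-CA-ResNet18 and 3D CNN-GRU outputs for each session, and the model would be evaluated against subjective MOS labels from datasets such as SQoE-III and SQoE-IV.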