Attention-based automatic editing of virtual lectures for reduced production labor and effective learning experience

Eugene Hwang, Jeongmi Lee

International Journal of Human-Computer Studies (2024)

Abstract
Recently, there has been a surge in demand for online video-based learning, and the importance of high-quality educational videos is ever-growing. However, a uniform video format that neglects individual differences, together with the labor-intensive editing process, is a major obstacle to producing effective educational videos. This study aims to resolve these issues by proposing an automatic lecture video editing pipeline based on each individual's attention pattern. In this pipeline, eye-tracking data are obtained while each individual watches virtual lectures; the data then pass through multiple filters to determine the viewer's locus of attention and to select the appropriate shot at each time point, creating personalized videos. To assess the effectiveness of the proposed method, video characteristics, subjective evaluations of the learning experience, and objective eye-movement features were compared across differently edited videos (attention-based, randomly edited, professionally edited). The results showed that our method dramatically reduced the editing time while yielding video characteristics similar to those of professionally edited versions. Attention-based versions were also evaluated as significantly better than randomly edited ones, and as effective as professionally edited ones. Eye-tracking results indicated that attention-based videos have the potential to decrease learners' cognitive load. These results suggest that attention-based automatic editing can be a viable, or even better, alternative to the human expert-dependent approach, and that individually tailored videos have the potential to enhance the learning experience and outcomes.
Keywords
Multimedia learning,Automatic video editing,Eye-tracking,Customized learning,Virtual lecture
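The abstract describes a pipeline that maps a viewer's gaze to a locus of attention and then selects a shot per time window. As an illustration of that idea, here is a minimal sketch in Python; the region names, shot labels, and the majority-vote plus minimum-shot-length smoothing are hypothetical assumptions, not the paper's actual filters.

```python
from collections import Counter

# Hypothetical screen regions (normalized x0, y0, x1, y1); not from the paper.
REGIONS = {
    "slides": (0.0, 0.0, 0.6, 1.0),
    "lecturer": (0.6, 0.0, 1.0, 1.0),
}

# Illustrative mapping from attended region to a camera shot.
SHOT_FOR_REGION = {"slides": "slide_closeup", "lecturer": "speaker_closeup"}

def locus_of_attention(gaze_points):
    """Majority-vote the region containing the gaze samples of one window."""
    votes = Counter()
    for x, y in gaze_points:
        for name, (x0, y0, x1, y1) in REGIONS.items():
            if x0 <= x < x1 and y0 <= y < y1:
                votes[name] += 1
                break
    return votes.most_common(1)[0][0] if votes else "slides"

def select_shots(gaze_windows, min_shot_len=2):
    """Pick one shot per time window, then merge runs shorter than
    min_shot_len into the preceding shot so the cut sequence
    does not flicker between shots."""
    shots = [SHOT_FOR_REGION[locus_of_attention(w)] for w in gaze_windows]
    smoothed = shots[:]
    i = 0
    while i < len(smoothed):
        j = i
        while j < len(smoothed) and smoothed[j] == smoothed[i]:
            j += 1
        if j - i < min_shot_len and i > 0:
            for k in range(i, j):
                smoothed[k] = smoothed[i - 1]
        i = j
    return smoothed
```

For example, a single window of gaze on the lecturer between two slide-focused windows would be smoothed away, keeping the edit on the slide shot throughout.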