Pose Estimation and Video Annotation Approaches for Understanding Individual and Team Interaction During Augmented Reality-Enabled Mission Planning.

HCI (9)(2021)

Abstract
Two video analysis approaches, pose estimation and manual annotation, were applied to video recordings of two-person teams performing a mission planning task in a shared augmented reality (AR) environment. The approaches calculated distance relations between team members and annotated observed behaviors during the collaborative task. Because the 2D pose estimation algorithm lacked scene depth processing, we found some inconsistencies with the manual annotation. Although the two approaches could not be integrated, each by itself produced several insights into team behavior. The manual annotation analysis identified four common team behaviors as well as behavior variations unique to particular teams and temporal situations. Comparing behavior-based time-on-task percentages indicated connections between behavior types and some possible mutual exclusions. The pose estimation analysis found that most teams moved around the 3D scene at a similar average distance apart, with similar fluctuation around a common inter-member distance range. Outlying team behavior was detected by both approaches and included periods of very low distance relations, infrequent but very large distance-relation spikes, substantial task time spent adjusting the HoloLens device while wearing it, and exceptionally long task time with gaps in the pose estimation data.
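
The paper's analysis pipeline is not included here; as an illustrative sketch only, the snippet below shows one way per-frame distance relations between two tracked people could be computed from 2D pose keypoints. It assumes an OpenPose-style BODY_25 keypoint array in pixel coordinates; the function name, the neck anchor joint, and the synthetic data are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def interperson_distance(kps_a, kps_b, anchor_idx=1):
    """Per-frame 2D distance between two people's anchor keypoints.

    kps_a, kps_b: arrays of shape (frames, joints, 2) with pixel
    coordinates from a 2D pose estimator (OpenPose-style layout assumed).
    anchor_idx: joint used as the body reference point (neck, index 1
    in the BODY_25 layout).
    Returns an array of shape (frames,) of Euclidean distances in pixels.
    Note: with 2D keypoints only, no scene depth is recovered, which is
    the limitation noted in the abstract.
    """
    delta = kps_a[:, anchor_idx, :] - kps_b[:, anchor_idx, :]
    return np.linalg.norm(delta, axis=-1)

# Toy example with synthetic keypoints for two people over 100 frames.
rng = np.random.default_rng(0)
person_a = rng.uniform(0, 1920, size=(100, 25, 2))
person_b = rng.uniform(0, 1920, size=(100, 25, 2))
dist = interperson_distance(person_a, person_b)
print(dist.mean(), dist.std())  # average separation and its fluctuation
```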
Keywords
Augmented reality, Mission planning, Pose estimation