Understanding and predicting the action decisions of individuals in team and multiagent task contexts

Journal of Science and Medicine in Sport(2022)

Abstract
Introduction: Effective team behaviour requires that co-actors reciprocally coordinate their actions with respect to each other and to changing task demands. Pivotal to the organization of such behaviour is the ability of actors to decide how and when to act, with robust decision-making differentiating expert from novice performance. Indeed, task expertise reflects the trained attunement of actors to the information that best specifies which action possibilities will ensure task success1. The current study explored whether state-of-the-art supervised machine learning (SML), Long Short-Term Memory artificial neural networks (LSTMNNs), and explainable AI could be employed to model, predict, and explicate the action decisions of expert and novice individuals during team performance.

Methods: We modelled the target selection decisions of pairs of expert and novice individuals playing a simulated, fast-paced herding game2. In this task, pairs of players controlled virtual herder agents to corral a herd of four virtual cows (targets) dispersed around a game field. We then used the recorded performance data to train LSTMNNs, via SML, to predict the future target selection decisions of players. Following model development and validation, we analysed the resultant LSTMNN models using the explainable-AI technique SHapley Additive exPlanations (SHAP)3 to identify the different sources of task information that defined the action decisions of expert and novice players.

Results: Using 1-second task-state input sequences, we trained LSTMNN models that predicted the target selection decisions of both expert and novice players at an average accuracy above 95%. Moreover, accurate target predictions could be made between 640 ms and 2.4 s before player decisions were enacted or observable within the state input sequence. Another key finding was that the LSTMNN models were expertise specific: when the expertise level of the training and test data was mismatched, prediction performance dropped to near-chance levels. The SHAP analysis revealed that this specificity arose because experts were more influenced than novices by information about the state of their co-herders and the direction of target motion.

Discussion: The findings demonstrate how SML-trained LSTMNNs and the explainable-AI technique SHAP can provide powerful tools for understanding the decision-making processes of human actors during team behaviour, including what information best supports optimal task performance. The implications for both basic scientific research and the applied development of task-training and decision-making assessment tools are significant.

References
1. Araujo D, Davids K, Hristovski R. The ecological dynamics of decision making in sport. Psychol Sport Exerc. 2006;7(6):653–676.
2. Rigoli LM, et al. Employing models of human social motor behavior for artificial agent trainers. In: Proceedings of the 19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2020); 2020. p. 9.
3. Lundberg SM, et al. From local explanations to global understanding with explainable AI for trees. Nat Mach Intell. 2020;2(1):56–67.
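The prediction pipeline described in the abstract (a window of task-state features fed to an LSTM that outputs a choice among the four targets) can be sketched as follows. This is a minimal illustrative sketch, not the study's actual model: the feature dimensionality, window length, hidden size, and the synthetic training data are all assumptions introduced here.

```python
import torch
import torch.nn as nn

class TargetSelectionLSTM(nn.Module):
    """Maps a 1 s window of task-state features to a choice among
    the four targets (cows). All sizes here are illustrative."""
    def __init__(self, n_features=12, hidden_size=64, n_targets=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_targets)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # logits from the final time step

# One supervised training step on synthetic stand-in data
# (the real study trained on recorded gameplay, not random tensors).
model = TargetSelectionLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 50, 12)          # 32 windows, 50 frames each
y = torch.randint(0, 4, (32,))       # chosen target per window
logits = model(x)
loss = loss_fn(logits, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In a real pipeline the labels would be the targets players actually selected, and the windows would end 640 ms to 2.4 s before that selection became observable, matching the prediction horizons reported above.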
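SHAP attributes a model's output to its input features via Shapley values: each feature's contribution is its average marginal effect across all coalitions of the other features, with absent features replaced by baseline values. The brute-force sketch below illustrates that masking idea on a toy model; it is a from-scratch illustration, not the optimized estimators of the `shap` library used in the study, and the model, feature values, and baseline are made up.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley attribution for one prediction (brute force,
    feasible only for a handful of features)."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Coalition weight |S|! (n-|S|-1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (predict(with_i) - predict(without))
    return phi

# Toy linear "decision model": for linear models the Shapley value of
# feature j reduces to weight_j * (x_j - baseline_j).
weights = [0.5, -1.0, 2.0]
predict = lambda v: sum(w * f for w, f in zip(weights, v))
phi = shapley_values(predict, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
```

Applied to the trained LSTMNNs, attributions of this kind are what allowed the study to compare which task-information sources (e.g. co-herder state, target motion direction) drove expert versus novice decisions.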
Keywords
action decisions, team, task, individuals