Deep learning, transparency, and trust in human robot teamwork

(2021)

Abstract
For autonomous AI systems to be accepted and trusted, users should be able to understand the system's reasoning process; that is, the system should be transparent. Robotics presents unique programming difficulties: systems must map complicated sensor inputs, such as camera feeds and laser scans, to outputs such as joint angles and velocities. Advances in deep neural networks now make it possible to replace laborious handcrafted features and control code by learning control policies directly from high-dimensional sensor input. Because Atari games, where these capabilities were first demonstrated, replicate this robotics problem of mapping raw sensor input to actions, they are ideal for investigating how humans might come to understand and interact with agents that have not been explicitly programmed. We present computational and human results for making deep reinforcement learning networks (DRLN) more transparent using object saliency visualizations of internal states, and we test the effectiveness of expressing saliency through teleological verbal explanations.
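The object saliency idea in the abstract lends itself to a compact illustration. Below is a minimal, hypothetical Python sketch of perturbation-based object saliency for a trained deep RL agent: each detected object is masked out of the input frame, and the resulting drop in the Q-value of the chosen action measures how salient that object is to the policy. The callable `q_network`, the median-background masking, and the scoring formula are assumptions for illustration, not the authors' exact method.

```python
# Hypothetical sketch of perturbation-based object saliency for a deep RL
# agent. `q_network`, the masking strategy, and the score are illustrative
# assumptions, not the paper's exact procedure.
import numpy as np


def object_saliency(q_network, frame, object_masks):
    """Score each detected object by how much masking it changes the Q-values.

    q_network    -- callable mapping an (H, W, C) frame to a vector of Q-values
    frame        -- current observation, shape (H, W, C), floats in [0, 1]
    object_masks -- list of boolean (H, W) arrays, one per detected object
    """
    baseline_q = q_network(frame)                # Q-values on the unperturbed frame
    action = int(np.argmax(baseline_q))          # action the agent would take
    background = np.median(frame, axis=(0, 1))   # crude stand-in for "object removed"

    scores = []
    for mask in object_masks:
        perturbed = frame.copy()
        perturbed[mask] = background             # blank out one object
        perturbed_q = q_network(perturbed)
        # Saliency: drop in the chosen action's value when the object is
        # removed (a larger drop means a more salient object).
        scores.append(float(baseline_q[action] - perturbed_q[action]))
    return scores
```

Scores like these can then be rendered as an overlay on the game frame, and the top-scoring object can serve as the subject of a verbal explanation, which is broadly the visualization-plus-explanation pipeline the abstract describes.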
Keywords
Deep learning, Human–robot interaction, Robotics, Salience (neuroscience), Human–computer interaction, Teamwork, Computer science, Transparency (graphic), Replicate, AI systems, Artificial intelligence