Human-centric and semantics-based explainable event detection: a survey

Artificial Intelligence Review (2023)

Abstract
In recent years, there has been a surge of interest in Artificial Intelligence (AI) systems that can provide human-centric explanations for their decisions or predictions. No matter how accurate and efficient an AI model is, users and practitioners find it difficult to trust it if they cannot understand the model or its behaviour. Incorporating human-centric explainability into event detection systems is therefore essential for building decision-making processes that are more trustworthy and sustainable. Human-centric and semantics-based explainable event detection can deliver the trustworthiness, explainability, and reliability that current AI systems lack. This paper provides a survey on human-centric explainable AI, explainable event detection, and semantics-based explainable event detection by answering research questions concerning the characteristics of human-centric explanations, the state of explainable AI, methods for producing human-centric explanations, the role of human-centricity in explainable event detection, research efforts on explainable event detection solutions, and the benefits of integrating semantics into explainable event detection. The findings from the survey show the current state of human-centric explainability, the potential of integrating semantics into explainable AI, the open problems, and the future directions that can guide researchers in the explainable AI domain.
Keywords
Event detection, Explainable AI, Human-centric explanations, Explainable event detection, Semantics-based explainable event detection