A Quantitative Comparison of Causality and Feature Relevance via Explainable AI (XAI) for Robust and Trustworthy Artificial Reasoning Systems

HCI (40) (2023)

Abstract
Challenges related to causal learning remain a major issue for artificial reasoning systems. As with other ML approaches, robust and trustworthy explainability is needed to support the underlying tasks. This paper offers a novel perspective on causal explainability: a model that extracts quantitative causal knowledge and relationships from observational data via average treatment effect (ATE) estimation, then generates robust explanations by comparing and validating the ranked causally relevant features against correlation-based feature relevance explanations. The ATE estimates provide a quantitative comparison between the causal features and the relevant features identified by Explainable AI (XAI). This approach yields a comprehensive method for generating explanations validated by both causality and XAI, supporting trustworthiness, fairness, and bias detection both within the data and within the AI/ML models of artificial reasoning systems.
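The core idea described above, ranking features by an estimated ATE and comparing that ranking against a correlation-based relevance ranking, can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation: it uses a naive difference-in-means ATE estimator and absolute Pearson correlation as a stand-in for an XAI relevance score, and all variable names (`x0`, `x1`, `naive_ate`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observational data: x0 is a binary "treatment"-style feature
# that drives the outcome; x1 is a noisy copy of x0 (correlated with y
# mostly through x0).
n = 5000
x0 = rng.integers(0, 2, n)
x1 = (x0 ^ (rng.random(n) < 0.1)).astype(int)  # flips x0 with prob. 0.1
y = 2.0 * x0 + 0.5 * x1 + rng.normal(0.0, 1.0, n)

def naive_ate(treat, y):
    """Difference in mean outcome between treated and untreated units."""
    return y[treat == 1].mean() - y[treat == 0].mean()

features = {"x0": x0, "x1": x1}

# Causal ranking: features ordered by |estimated ATE|.
ate_rank = sorted(features,
                  key=lambda f: abs(naive_ate(features[f], y)),
                  reverse=True)

# Correlation-based relevance ranking (proxy for an XAI importance score).
corr_rank = sorted(features,
                   key=lambda f: abs(np.corrcoef(features[f], y)[0, 1]),
                   reverse=True)

# Comparing the two rankings is the validation step the abstract describes:
# agreement lends support to an explanation; disagreement flags features
# whose relevance may be spurious correlation rather than causal effect.
print("ATE ranking: ", ate_rank)
print("Corr ranking:", corr_rank)
```

In a realistic setting the naive estimator would be replaced by a confounder-adjusted ATE estimate, and the correlation ranking by a model-based attribution such as SHAP values, but the comparison-and-validation structure stays the same.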
Keywords
trustworthy artificial reasoning systems, explainable AI (XAI), feature relevance