Rule-guided Counterfactual Explainable Recommendation

IEEE Transactions on Knowledge and Data Engineering (2023)

Abstract
To strengthen trust in current recommender systems, counterfactual explanation (CE) methods generate a counterfactual instance for each input and take the changes that lead to a different outcome as the explanation. Although existing CE-based methods have achieved promising results, we instead propose to generate attribute-oriented counterfactual explanations: we produce the counterfactual instance by intervening on the attributes, and then build an attribute-oriented counterfactual explainable recommender system. Given the correlations among attributes and their categorical values, efficiently generating reliable counterfactual instances over the attributes is challenging. To alleviate this problem, we propose to extract decision rules over the attributes to guide the attribute-oriented counterfactual generation. Specifically, we adopt the gradient boosting decision tree (GBDT) to pre-build the decision rules over the attributes, and develop a Rule-guided Counterfactual Explainable Recommendation model (RCER) to predict user-item interactions and generate counterfactual instances for user-item pairs. We conduct extensive experiments on four public datasets: NYC, LON, Amazon, and MovieLens. Experimental results qualitatively and quantitatively justify the superiority of our model over cutting-edge baselines. We release the code at https://github.com/quxiaoyang0zero/RCER .
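To make the rule-extraction step concrete, the following is a minimal illustrative sketch (not the authors' implementation; see the linked repository for that): a GBDT is fit on synthetic categorical item attributes, and each tree's root-to-leaf paths are read off as decision rules of the kind that could guide an attribute-oriented counterfactual search. The attribute names and data here are entirely hypothetical.

```python
# Illustrative sketch of GBDT-based decision-rule extraction over attributes.
# Synthetic data and hypothetical attribute names; not the RCER implementation.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# 200 user-item pairs, 3 binary attributes; label = interacted (1) or not (0).
X = rng.integers(0, 2, size=(200, 3)).astype(float)
y = (X[:, 0].astype(int) & X[:, 1].astype(int))  # interaction needs attrs 0 and 1

gbdt = GradientBoostingClassifier(n_estimators=10, max_depth=2, random_state=0)
gbdt.fit(X, y)

def extract_rules(estimator, feature_names):
    """Walk one regression tree of the ensemble; return (conditions, leaf value) rules."""
    tree = estimator.tree_
    rules = []
    def recurse(node, conds):
        if tree.children_left[node] == -1:            # leaf node
            rules.append((conds, float(tree.value[node][0][0])))
            return
        name = feature_names[tree.feature[node]]
        thr = tree.threshold[node]
        recurse(tree.children_left[node], conds + [f"{name} <= {thr:.2f}"])
        recurse(tree.children_right[node], conds + [f"{name} > {thr:.2f}"])
    recurse(0, [])
    return rules

feature_names = ["attr_0", "attr_1", "attr_2"]        # hypothetical attribute names
rules = extract_rules(gbdt.estimators_[0][0], feature_names)
for conds, value in rules:
    print(" AND ".join(conds), "->", round(value, 3))
```

Each printed rule is a conjunction of attribute conditions with the leaf's contribution to the prediction; a counterfactual search constrained to cross only such rule boundaries would, in spirit, restrict attribute interventions to those the pre-built rules deem decision-relevant.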
Keywords
Counterfactual Explanation, Recommender System, Explainable Model, Interpretable Model