An Interpretable Client Decision Tree Aggregation process for Federated Learning
arXiv (2024)
Abstract
Trustworthy Artificial Intelligence solutions are essential in today's
data-driven applications, prioritizing principles such as robustness, safety,
transparency, explainability, and privacy among others. This has led to the
emergence of Federated Learning as a solution for privacy and distributed
machine learning. Decision trees, as self-explanatory models, are well suited
to this setting: they enable collaborative model training across multiple
devices in resource-constrained environments, such as federated learning
environments, while injecting interpretability into the resulting models.
However, the structure of decision trees makes their aggregation in a
federated learning environment non-trivial. It requires techniques that can
merge their decision paths without introducing bias or overfitting, while
keeping the aggregated decision trees robust and generalizable. In this
paper, we propose an Interpretable Client Decision Tree
Aggregation process for Federated Learning scenarios that keeps the
interpretability and the precision of the base decision trees used for the
aggregation. This model is based on aggregating multiple decision paths of the
decision trees and can be used on different decision tree types, such as ID3
and CART. We carry out experiments on four datasets, and the analysis shows
that the tree built with the model improves on the local models and
outperforms the state of the art.
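As a rough illustration of the path-based aggregation idea described in the abstract (a minimal sketch, not the paper's actual algorithm), the snippet below merges decision paths collected from several hypothetical client trees. Each path is represented as a set of split conditions plus a predicted label; identical condition sets contributed by different clients are resolved by majority vote. All names and the path representation are illustrative assumptions.

```python
# Hypothetical sketch of decision-path aggregation across federated clients.
# A "path" is (conditions, label), where conditions is a tuple of
# (feature, operator, threshold) triples extracted from a client's tree.

from collections import Counter


def aggregate_paths(client_paths):
    """Merge decision paths from several client trees.

    Identical condition sets appearing on multiple clients are collapsed
    into a single rule whose label is chosen by majority vote.
    """
    merged = {}
    for paths in client_paths:
        for conditions, label in paths:
            key = frozenset(conditions)
            merged.setdefault(key, Counter())[label] += 1
    # Resolve each condition set to its majority label
    return {key: votes.most_common(1)[0][0] for key, votes in merged.items()}


# Two clients, each contributing rule paths (condition set, label)
client_a = [((("x1", "<=", 0.5),), "A"), ((("x1", ">", 0.5),), "B")]
client_b = [((("x1", "<=", 0.5),), "A"), ((("x1", ">", 0.5),), "B")]
rules = aggregate_paths([client_a, client_b])
```

A real aggregation process would additionally have to reconcile overlapping but non-identical paths and rebuild a single tree from the merged rule set, which is where the bias and overfitting concerns raised in the abstract arise.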