Interpretability Diversity for Decision-Tree-Initialized Dendritic Neuron Model Ensemble.

IEEE Transactions on Neural Networks and Learning Systems (2023)

Abstract
To construct a strong classifier ensemble, base classifiers should be accurate and diverse. However, there is no uniform standard for defining and measuring diversity. This work proposes learners' interpretability diversity (LID) to measure the diversity of interpretable machine learners, and then builds an LID-based classifier ensemble. The ensemble concept is novel because: 1) interpretability is used as an important basis for diversity measurement and 2) the difference between two interpretable base learners can be measured before training. To verify the proposed method's effectiveness, we choose a decision-tree-initialized dendritic neuron model (DDNM) as the base learner for ensemble design and apply it to seven benchmark datasets. The results show that the DDNM ensemble combined with LID outperforms several popular classifier ensembles in terms of accuracy and computational efficiency. A random-forest-initialized dendritic neuron model (RDNM) combined with LID is an outstanding representative of the DDNM ensemble.
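The sketch below illustrates the general idea of selecting base learners by a diversity score computed from their interpretable structure before the ensemble is assembled. It is not the paper's method: the actual LID formula is not reproduced here, plain scikit-learn decision trees stand in for decision-tree-initialized dendritic neuron models, and the functions `feature_usage`, `structural_diversity`, and `select_diverse` (using a split-feature-usage cosine dissimilarity as a stand-in for LID) are hypothetical illustrations.

```python
# A minimal sketch of LID-style diverse-learner selection, NOT the paper's exact method.
# Assumptions: feature-usage dissimilarity between decision trees stands in for LID,
# and plain decision trees stand in for decision-tree-initialized DNMs.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier


def feature_usage(tree, n_features):
    """Normalized count of how often each feature appears as a split."""
    used = tree.tree_.feature          # negative values mark leaf nodes in sklearn
    counts = np.bincount(used[used >= 0], minlength=n_features).astype(float)
    total = counts.sum()
    return counts / total if total > 0 else counts


def structural_diversity(tree_a, tree_b, n_features):
    """Stand-in for LID: 1 - cosine similarity of split-feature usage vectors."""
    a = feature_usage(tree_a, n_features)
    b = feature_usage(tree_b, n_features)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 1.0 if denom == 0 else 1.0 - float(a @ b) / denom


def select_diverse(trees, n_features, k):
    """Greedily pick k trees that maximize the minimum pairwise diversity."""
    chosen = [0]
    while len(chosen) < k:
        best, best_score = None, -1.0
        for i in range(len(trees)):
            if i in chosen:
                continue
            score = min(structural_diversity(trees[i], trees[j], n_features)
                        for j in chosen)
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return [trees[i] for i in chosen]


X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train a pool of candidate base learners on bootstrap samples.
rng = np.random.default_rng(0)
pool = []
for seed in range(20):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    pool.append(DecisionTreeClassifier(max_depth=4, random_state=seed)
                .fit(X_tr[idx], y_tr[idx]))

# Select a structurally diverse subset before any further (ensemble) training.
ensemble = select_diverse(pool, X.shape[1], k=7)

# Combine the selected base learners by majority vote.
votes = np.stack([t.predict(X_te) for t in ensemble])
pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble accuracy:", (pred == y_te).mean())
```

The key design point mirrored here is that diversity is scored from the learners' interpretable structure (their splits) rather than from their predictions, so the selection can happen before the downstream models are trained.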
Keywords
Classification, dendritic neuron model (DNM), ensemble learning, interpretability diversity, random forest (RF)