Rethinking Logic Minimization for Tabular Machine Learning

IEEE Transactions on Artificial Intelligence (2023)

Abstract
Tabular datasets can be viewed as logic functions that can be simplified using two-level logic minimization to produce minimal logic formulas in disjunctive normal form, which in turn can be readily viewed as an explainable decision rule set for binary classification. However, there are two problems with using logic minimization for tabular machine learning. First, tabular datasets often contain overlapping examples with different class labels, which must be resolved before logic minimization can be applied, since logic minimization assumes consistent logic functions. Second, even without inconsistencies, logic minimization alone generally produces complex models with poor generalization because it exactly fits all data points, leading to detrimental overfitting. How best to remove training instances to eliminate inconsistencies and overfitting is highly nontrivial. In this article, we propose a novel statistical framework for removing these training samples so that logic minimization can become an effective approach to tabular machine learning. Using the proposed approach, we obtain performance comparable to that of gradient boosted and ensemble decision trees, which have been the winning hypothesis classes in tabular learning competitions, but with human-understandable explanations in the form of decision rules. To the best of the authors' knowledge, neither logic minimization nor explainable decision rule methods have previously achieved state-of-the-art performance on tabular learning problems.
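The two-step recipe described in the abstract (resolve label conflicts, then apply two-level logic minimization to obtain a DNF rule set) can be sketched on a toy binary dataset. The majority-vote conflict resolution below is an illustrative stand-in for the paper's statistical framework, and sympy's `SOPform`, a Quine-McCluskey-style minimizer, stands in for a production logic minimizer such as ESPRESSO.

```python
# Illustrative sketch, not the paper's method: resolve inconsistent labels,
# then minimize the resulting consistent logic function to a DNF rule set.
from sympy import symbols
from sympy.logic import SOPform

# Toy dataset: binary feature vectors with class labels. The vector (1, 0)
# appears with conflicting labels, so the raw logic function is inconsistent.
data = [
    ((0, 0), 0),
    ((0, 1), 1),
    ((1, 0), 1),
    ((1, 0), 0),  # conflicts with the other (1, 0) rows
    ((1, 0), 1),
    ((1, 1), 1),
]

# Step 1: resolve overlapping examples (here by simple majority vote; the
# paper instead uses a statistical framework to decide which rows to drop).
votes = {}
for x, y in data:
    votes.setdefault(x, []).append(y)
labels = {x: int(sum(ys) > len(ys) / 2) for x, ys in votes.items()}

# Step 2: two-level logic minimization over the now-consistent minterms.
x1, x2 = symbols("x1 x2")
minterms = [list(x) for x, y in labels.items() if y == 1]
rule = SOPform([x1, x2], minterms)
print(rule)  # the minimized DNF doubles as the decision rule set
```

Each conjunctive term of the minimized DNF reads directly as an IF-THEN rule, which is the sense in which the abstract calls the result an explainable decision rule set.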
Keywords
logic minimization, machine learning