More Than Accuracy: A Composite Learning Framework for Interval Type-2 Fuzzy Logic Systems

IEEE Transactions on Fuzzy Systems (2023)

Abstract
In this article, we propose a novel composite learning framework for interval type-2 (IT2) fuzzy logic systems (FLSs) to train regression models that achieve high accuracy while also representing uncertainty. In this context, we identify three challenges: first, the uncertainty handling capability; second, the construction of the composite loss; and third, a learning algorithm that overcomes the training complexity while respecting the definitions of IT2-FLSs. This article presents a systematic solution to these problems by exploiting the type-reduced set of the IT2-FLS, fusing quantile regression and deep learning (DL) with IT2-FLSs. The uncertainty processing capability of an IT2-FLS depends on the employed center-of-sets calculation method, while its representation capability is defined by the structure of its antecedent and consequent membership functions. Thus, we present various parametric IT2-FLSs and define the learnable parameters of all IT2-FLSs alongside the constraints they must satisfy during training. To construct the loss function, we define a multiobjective loss and then convert it into a constrained composite loss composed of the log-cosh loss for accuracy and a tilted loss for uncertainty representation, which explicitly uses the type-reduced set. We also present a DL approach to train IT2-FLSs via unconstrained optimizers. In this context, we present parameterization tricks for converting the constrained optimization problem of IT2-FLSs into an unconstrained one without violating the definitions of fuzzy sets. Finally, we provide comprehensive comparative results for hyperparameter sensitivity analysis and an inter/intramodel comparison on various benchmark datasets.
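The composite loss described above can be sketched in Python. This is a minimal illustration, not the paper's implementation: the weighting `alpha`, the quantile level `tau`, and the use of the type-reduced set's endpoints as the lower/upper predictions are all assumptions for illustration; the log-cosh term targets accuracy of the point output, and the tilted (pinball) terms push the type-reduced bounds toward the chosen quantiles.

```python
import numpy as np

def log_cosh_loss(y_true, y_pred):
    # Accuracy term: smooth everywhere, roughly quadratic for small
    # errors and linear for large ones (robust to outliers).
    return np.mean(np.log(np.cosh(y_pred - y_true)))

def tilted_loss(y_true, y_pred, tau):
    # Quantile (pinball) loss at level tau: penalizes under- and
    # over-prediction asymmetrically, so minimizing it drives y_pred
    # toward the tau-quantile of y_true.
    e = y_true - y_pred
    return np.mean(np.maximum(tau * e, (tau - 1.0) * e))

def composite_loss(y_true, y_center, y_lower, y_upper, alpha=0.5, tau=0.05):
    # Hypothetical composite: y_center is the defuzzified output,
    # (y_lower, y_upper) the type-reduced set's endpoints. The
    # weighting alpha and quantile level tau are illustrative choices.
    accuracy = log_cosh_loss(y_true, y_center)
    uncertainty = (tilted_loss(y_true, y_lower, tau)
                   + tilted_loss(y_true, y_upper, 1.0 - tau))
    return alpha * accuracy + (1.0 - alpha) * uncertainty
```

With `tau = 0.05` the two tilted terms encourage the type-reduced interval to cover roughly 90% of the targets, which is how the loss makes the interval output an explicit uncertainty representation rather than a by-product of type reduction.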
Keywords
Deep learning (DL), interval type-2 fuzzy logic systems (IT2-FLS), parameterization tricks, quantile regression (QR), uncertainty
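The parameterization tricks mentioned in the abstract and keywords can be illustrated with a common pattern: train unconstrained raw parameters and map them through smooth bijections so the fuzzy-set constraints hold by construction. The specific mappings below (softplus for positive spreads, sigmoid for a bounded lower-membership height) are generic examples, not necessarily the paper's exact choices.

```python
import numpy as np

def softplus(x):
    # Smooth map from R to (0, inf): any real raw value yields a
    # strictly positive spread, so sigma > 0 needs no explicit constraint.
    return np.log1p(np.exp(x))

def sigmoid(x):
    # Smooth map from R to (0, 1): keeps the lower membership
    # function's height strictly between 0 and 1.
    return 1.0 / (1.0 + np.exp(-x))

# Raw parameters are what an unconstrained optimizer (e.g., Adam)
# actually updates; the mapped values parameterize the IT2 fuzzy set.
raw_sigma, raw_h = -0.3, 1.2
sigma = softplus(raw_sigma)   # spread of the membership function, > 0
h = sigmoid(raw_h)            # lower-MF height, in (0, 1)
```

Because the mappings are smooth, gradients flow through them during backpropagation, which is what lets standard DL optimizers train the IT2-FLS without projection steps or constraint violations.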