Dual-level Deep Evidential Fusion: Integrating multimodal information for enhanced reliable decision-making in deep learning

Information Fusion (2024)

Abstract
Multimodal learning has gained significant attention in recent years for combining information from different modalities using Deep Neural Networks (DNNs). However, existing approaches often overlook the varying importance of modalities and neglect uncertainty estimation, leading to limited generalization and unreliable predictions. In this paper, we propose a novel algorithm, Dual-level Deep Evidential Fusion (DDEF), to address these challenges by integrating multimodal information at both the Basic Belief Assignment (BBA) level and the multimodal level, enhancing accuracy, robustness, and reliability. The proposed DDEF approach uses the Dirichlet framework and BBA methods to connect neural network outputs with Dirichlet distribution parameters, enabling effective uncertainty estimation, and applies Dempster-Shafer Theory (DST) for dual-level fusion, combining evidence from two BBA methods and from multiple modalities. The approach is validated in two experiments, on synthetic digit classification and on real-world medical prognosis after brain-computer interface (BCI) treatment, and demonstrates superior performance compared to existing methods. Our findings emphasize the importance of considering multimodal integration and uncertainty estimation for reliable decision-making in deep learning.
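The two building blocks named in the abstract, mapping evidential network outputs to Dirichlet parameters and fusing the resulting belief masses with Dempster's rule, can be sketched as follows. This is a minimal illustration of the standard evidential-deep-learning formulation, not the authors' implementation; the function names and example evidence vectors are hypothetical.

```python
import numpy as np

def dirichlet_opinion(evidence):
    """Map non-negative evidence (e.g. a softplus/ReLU network head)
    to singleton belief masses and a vacuous uncertainty mass."""
    K = evidence.shape[0]
    alpha = evidence + 1.0        # Dirichlet parameters alpha_k = e_k + 1
    S = alpha.sum()               # Dirichlet strength
    belief = evidence / S         # per-class belief masses b_k
    uncertainty = K / S           # uncertainty mass u (sums with b to 1)
    return belief, uncertainty

def dempster_combine(b1, u1, b2, u2):
    """Reduced Dempster's rule for two opinions whose focal elements
    are the singleton classes plus the full frame (uncertainty)."""
    # conflict: mass assigned to disagreeing singleton pairs
    conflict = np.sum(np.outer(b1, b2)) - np.sum(b1 * b2)
    scale = 1.0 / (1.0 - conflict)
    b = scale * (b1 * b2 + b1 * u2 + b2 * u1)
    u = scale * (u1 * u2)
    return b, u

# Hypothetical evidence from two sources (two BBA methods or two modalities)
b1, u1 = dirichlet_opinion(np.array([10.0, 1.0, 1.0]))
b2, u2 = dirichlet_opinion(np.array([8.0, 2.0, 1.0]))
b, u = dempster_combine(b1, u1, b2, u2)
```

When the two sources agree, the fused uncertainty `u` is smaller than either input uncertainty, which is the mechanism the abstract relies on for more reliable fused decisions.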
Keywords
Evidential deep learning, Basic belief assignment, Multimodal fusion, Uncertainty estimation, Dempster-Shafer theory