Guest Editorial Multimedia Computing With Interpretable Machine Learning

IEEE TRANSACTIONS ON MULTIMEDIA (2020)

Abstract
The papers in this special section broadly engage the machine learning and multimedia communities on the emerging yet challenging topic of interpretable machine learning. Multimedia is increasingly becoming the "biggest big data," among the most important and valuable sources of insight and information. Many powerful machine learning algorithms, especially deep learning models such as convolutional neural networks (CNNs), have recently achieved outstanding predictive performance in a wide range of multimedia applications, including visual object classification, scene understanding, speech recognition, and activity prediction. Nevertheless, most deep learning algorithms are generally conceived as black-box methods, and it is difficult to intuitively and quantitatively understand the results of their prediction and inference. Since this lack of interpretability is a major bottleneck in designing more successful predictive models and in exploring a wider range of useful applications, there has been an explosion of interest in interpreting the representations learned by these models, with profound implications for research into interpretable machine learning in the multimedia community.
Keywords
Special issues and sections, Machine learning, Feature extraction, Visualization, Multimedia communication, Deep learning, Big Data