Does Tail Label Help for Large-Scale Multi-Label Learning?

IEEE Transactions on Neural Networks and Learning Systems (2020)

Abstract
Large-scale multi-label learning (LMLL) annotates relevant labels for unseen data from a huge number of candidate labels. Labels are widely perceived to follow a long-tail distribution, in which a significant fraction of labels are tail labels. Most previous studies assume that performance benefits from incorporating tail labels; nonetheless, how tail labels impact performance has not been quantified. In this article, we show that, whether labels are randomly missing or misclassified, a label's impact on commonly used LMLL evaluation metrics (Propensity Score Precision, PSP@$k$, and Propensity Score nDCG, PSnDCG@$k$) is directly related to the product of the label's weight and its frequency. In particular, when labels share equal weights, tail labels affect these metrics much less than common labels do, owing to the scarcity of relevant examples. Based on this observation, we propose low-complexity LMLL methods that achieve fast prediction and compact model size by restraining less performance-influential labels. Since discarding labels entirely may sacrifice predictive capability, we further propose to preserve only the dominant model parameters for the less performance-influential labels. Experiments show that both prediction time and model size are significantly reduced without sacrificing much predictive performance.
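The metrics named in the abstract weight each label's contribution by the inverse of its propensity, which is why a label's influence scales with the product of its weight and its frequency. As a hedged illustration (not the authors' code), the sketch below computes per-example PSP@$k$ using the empirical propensity model commonly used in extreme multi-label evaluation; the constants `a` and `b` are dataset-specific assumptions, and the function names are hypothetical:

```python
import numpy as np

def propensities(label_freqs, n_samples, a=0.55, b=1.5):
    """Empirical propensity p_l for each label, given its training
    frequency N_l and the number of training samples N.
    a and b are dataset-dependent constants (values here are a guess)."""
    c = (np.log(n_samples) - 1.0) * (b + 1.0) ** a
    return 1.0 / (1.0 + c * np.exp(-a * np.log(label_freqs + b)))

def psp_at_k(scores, y_true, p, k=5):
    """PSP@k for a single example: mean inverse-propensity-weighted
    relevance over the k highest-scoring labels."""
    topk = np.argsort(scores)[::-1][:k]
    return float(np.mean(y_true[topk] / p[topk]))
```

Because `p_l` grows with label frequency, a relevant tail label (small `N_l`, small `p_l`) earns a larger per-hit reward than a head label; yet, with equal label weights, tail labels appear as relevant so rarely that their aggregate effect on PSP@$k$ remains small, which is the imbalance the article quantifies.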
Keywords
Predictive models,Measurement,Training,Prediction algorithms,Correlation,Sparse matrices,Learning systems