A theoretical understanding of self-paced learning.

Information Sciences (2017)

Cited by 114
Abstract
Self-paced learning (SPL) is a recently proposed methodology designed by mimicking the learning principle of humans and animals. A variety of SPL realization schemes have been designed for different computer vision and pattern recognition tasks, and empirically demonstrated to be effective in these applications. However, the literature lacks a theoretical understanding of SPL. To address this research gap, this study provides some new theoretical understanding of the SPL scheme. Specifically, we prove that the SPL solution strategy accords with a majorization-minimization algorithm implemented on an implicit objective function. Furthermore, we find that the loss function contained in this implicit objective has a configuration similar to the non-convex regularized penalties (NCRP) known in statistics and machine learning. This connection inspires us to discover more intrinsic relationships between the SPL regimes and the NCRP forms, such as the smoothly clipped absolute deviation (SCAD), logarithmic penalty (LOG), and non-convex exponential penalty (EXP). The robustness of SPL can then be explained precisely. We also analyze the capability of SPL regarding its easy loss-prior-embedding property, and provide an insightful interpretation of the effectiveness mechanism underlying current SPL variations. Moreover, we design a group-partial-order loss prior, which is especially useful for weakly labeled large-scale data processing tasks. By applying SPL with this loss prior to the FCVID dataset, currently one of the largest manually annotated video datasets, our method outperforms existing methods and achieves state-of-the-art performance, which further supports the proposed theoretical arguments.
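The alternating strategy the abstract refers to can be sketched with the classical hard-weight self-paced regularizer: a closed-form v-step selects samples whose current loss falls below the age parameter λ, a w-step refits the model on those samples, and λ is grown so harder samples are gradually admitted. The function names, the least-squares model, and the λ schedule below are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def spl_weights(losses, lam):
    """Hard self-paced regularizer: v_i = 1 if loss_i < lam, else 0 (closed-form v-step)."""
    return (losses < lam).astype(float)

def spl_train(X, y, lam=0.5, growth=1.3, n_outer=5, ridge=1e-8):
    """Alternate between sample weights v and a least-squares model w,
    growing lam so that harder samples are gradually admitted (illustrative sketch)."""
    n, d = X.shape
    w = np.zeros(d)
    v = np.zeros(n)
    for _ in range(n_outer):
        losses = (X @ w - y) ** 2          # per-sample squared loss
        v = spl_weights(losses, lam)       # v-step: select currently "easy" samples
        if v.sum() > 0:
            # w-step: weighted least squares on the selected samples
            A = X.T @ (v[:, None] * X) + ridge * np.eye(d)
            b = X.T @ (v * y)
            w = np.linalg.solve(A, b)
        lam *= growth                      # age-parameter ("pace") schedule, an assumption here
    return w, v
```

Each full sweep of this v-step/w-step pair is one iteration of the majorization-minimization procedure the paper identifies: the v-step builds the surrogate and the w-step minimizes it.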
Keywords
Self-paced learning, Curriculum learning, Multimedia event detection, Non-convex regularized penalty