Stopping Criterion for Active Learning with Model Stability

ACM TIST (2018)

Abstract
Active learning selectively labels the most informative instances, aiming to reduce the cost of data annotation. While much effort has been devoted to active sampling functions, relatively little attention has been paid to when the learning process should stop. In this article, we focus on the stopping criterion of active learning and propose a model-stability-based criterion: stop when the model no longer changes with the inclusion of additional training instances. The challenge lies in how to measure the model change without labeling additional instances and training new models. Inspired by the stochastic gradient update rule, we use the gradient of the loss function at each candidate example to measure its effect on model change. We propose to stop active learning when the model change brought by any of the remaining unlabeled examples is lower than a given threshold. We apply the proposed stopping criterion to two popular classifiers: logistic regression (LR) and support vector machines (SVMs). In addition, we theoretically analyze the stability and generalization ability of the model obtained by our stopping criterion. Extensive experiments on various UCI benchmark datasets and ImageNet datasets have demonstrated that the proposed approach is highly effective.
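
To make the gradient-based criterion concrete, below is a minimal sketch for the logistic regression case. It assumes the unknown label of each candidate is marginalized under the model's own predictive distribution, giving an expected gradient norm of 2p(1-p)·||x|| per example; the function names, learning rate `eta`, and `threshold` are illustrative placeholders, not the paper's exact estimator.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def expected_model_change(w, X_unlabeled, eta=0.1):
    # For logistic regression, the per-example gradient of the log loss
    # at (x, y) is (sigmoid(w @ x) - y) * x.  Marginalizing the unknown
    # label y under the model's own prediction p = sigmoid(w @ x) gives
    # an expected gradient norm of 2 * p * (1 - p) * ||x||.
    p = sigmoid(X_unlabeled @ w)                        # P(y = 1 | x) per example
    grad_norms = 2.0 * p * (1.0 - p) * np.linalg.norm(X_unlabeled, axis=1)
    return eta * grad_norms                             # approximate SGD step magnitude

def should_stop(w, X_unlabeled, threshold=1e-3, eta=0.1):
    # Stop active learning once no remaining unlabeled example would
    # move the model by more than `threshold` in one gradient step.
    return bool(expected_model_change(w, X_unlabeled, eta).max() < threshold)
```

In this sketch, the per-example update magnitude plays the role of the model-change measure: it can be computed from the current model and the unlabeled pool alone, without querying labels or retraining, which is the property the criterion relies on.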
Keywords
Active learning, logistic regression, model stability, stopping criterion, support vector machines