ASPEST: Bridging the Gap Between Active Learning and Selective Prediction
arXiv (2023)
Abstract
Selective prediction aims to learn a reliable model that abstains from making
predictions when uncertain. These predictions can then be deferred to humans
for further evaluation. As a persistent challenge for machine learning, in
many real-world scenarios the distribution of the test data differs from that
of the training data. This results in less accurate predictions and often
increased dependence on humans, which can be difficult and expensive. Active learning
aims to lower the overall labeling effort, and hence human dependence, by
querying the most informative examples. Selective prediction and active
learning have previously been approached from different angles, with the
connection between them left unexplored. In this work, we introduce a new learning paradigm,
active selective prediction, which aims to query more informative samples from
the shifted target domain while increasing accuracy and coverage. For this new
paradigm, we propose a simple yet effective approach, ASPEST, which utilizes
ensembles of model snapshots, with self-training using their aggregated outputs
as pseudo labels. Extensive experiments on numerous image, text and structured
datasets, which suffer from domain shifts, demonstrate that ASPEST can
significantly outperform prior work on selective prediction and active learning
(e.g., on the MNIST→SVHN benchmark with a labeling budget of 100, ASPEST
improves the AUACC metric from a baseline of 79.36%), and achieves more optimal
utilization of humans in the loop.
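The abstract's core ideas — aggregating an ensemble of model snapshots, abstaining on low-confidence inputs, and treating confident predictions as pseudo-labels for self-training — can be sketched as below. This is a minimal illustration under assumed details (mean-softmax aggregation, a hand-picked confidence threshold, toy data), not the paper's actual ASPEST algorithm.

```python
import numpy as np

def selective_predict(probs_list, threshold):
    """Average the softmax outputs of an ensemble of model snapshots and
    abstain (pred = -1, defer to a human) when confidence is below `threshold`."""
    mean_probs = np.mean(probs_list, axis=0)   # aggregate the snapshot ensemble
    conf = mean_probs.max(axis=1)              # confidence score per example
    preds = mean_probs.argmax(axis=1)
    preds[conf < threshold] = -1               # abstain on uncertain examples
    return preds, conf

def coverage_and_accuracy(preds, labels):
    """Coverage = fraction of examples answered; selective accuracy is
    measured only on the answered examples."""
    answered = preds != -1
    coverage = answered.mean()
    acc = (preds[answered] == labels[answered]).mean() if answered.any() else 0.0
    return coverage, acc

# Toy example: two snapshots, a 3-class problem, 4 target-domain examples.
p1 = np.array([[0.90, 0.05, 0.05],
               [0.40, 0.35, 0.25],
               [0.10, 0.80, 0.10],
               [0.34, 0.33, 0.33]])
p2 = np.array([[0.80, 0.10, 0.10],
               [0.50, 0.30, 0.20],
               [0.20, 0.70, 0.10],
               [0.30, 0.40, 0.30]])
labels = np.array([0, 0, 1, 2])

preds, conf = selective_predict([p1, p2], threshold=0.6)
cov, acc = coverage_and_accuracy(preds, labels)

# Confident predictions could then serve as pseudo-labels for self-training,
# while the abstained examples are natural candidates for active-label queries.
pseudo_label_idx = np.where(conf >= 0.6)[0]
```

On the toy data, examples 0 and 2 are answered (coverage 0.5) and both are correct, while the two low-confidence examples are deferred — illustrating the accuracy/coverage trade-off that the AUACC metric summarizes.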
Keywords: active learning, selective