Making Better Use of Unlabelled Data in Bayesian Active Learning
International Conference on Artificial Intelligence and Statistics (2024)
Abstract
Fully supervised models are predominant in Bayesian active learning. We argue
that their neglect of the information present in unlabelled data harms not just
predictive performance but also decisions about what data to acquire. Our
proposed solution is a simple framework for semi-supervised Bayesian active
learning. We find it produces better-performing models than either conventional
Bayesian active learning or semi-supervised learning with randomly acquired
data. It is also easier to scale up than the conventional approach. As well as
supporting a shift towards semi-supervised models, our findings highlight the
importance of studying models and acquisition methods in conjunction.