A Bayesian shrinkage estimator for transfer learning
arXiv (2024)
Abstract
Transfer learning (TL) has emerged as a powerful tool to supplement data
collected for a target task with data collected for a related source task. The
Bayesian framework is natural for TL because information from the source data
can be incorporated in the prior distribution for the target data analysis. In
this paper, we propose and study Bayesian TL methods for the normal-means
problem and multiple linear regression. We propose two classes of prior
distributions. The first class assumes the difference in the parameters for the
source and target tasks is sparse, i.e., many parameters are shared across
tasks. The second assumes that none of the parameters are shared across tasks,
but the differences are bounded in ℓ_2-norm. For the sparse case, we
propose a Bayes shrinkage estimator with theoretical guarantees under mild
assumptions. The proposed methodology is tested on synthetic data and
outperforms state-of-the-art TL methods. We then use this method to fine-tune
the last layer of a neural network model to predict the molecular gap property
in a material science application. We report improved performance compared to
classical fine tuning and methods using only the target data.
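To make the sparse-difference setting concrete, the sketch below simulates a normal-means transfer-learning problem and applies a simple coordinate-wise shrinkage rule: coordinates whose observed source-target difference is small borrow strength from the (more precise) source data, while coordinates with a large difference fall back on the target data alone. This is only an illustrative stand-in for the paper's Bayes shrinkage estimator; the function name `shrinkage_estimate`, the universal threshold, the noise levels, and the simulated dimensions are all assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): p-dimensional normal means,
# where the source and target mean vectors differ in only a few coordinates.
p = 200
theta_source = rng.normal(0.0, 1.0, p)       # source task means
delta = np.zeros(p)
delta[:10] = 3.0                             # sparse task difference
theta_target = theta_source + delta

sigma_s, sigma_t = 0.3, 1.0                  # source data are more precise
x = theta_source + sigma_s * rng.normal(size=p)   # source observations
y = theta_target + sigma_t * rng.normal(size=p)   # target observations

def shrinkage_estimate(y, x, sigma_t, sigma_s, tau=None):
    """Shrink the target estimate toward the source estimate coordinate-wise.

    Where |y - x| falls below the threshold tau, the two tasks are treated
    as sharing that coordinate and the observations are pooled with
    precision weights; otherwise the target observation is used alone.
    This mimics the effect of a sparse prior on theta_target - theta_source,
    but is an illustrative sketch, not the paper's estimator.
    """
    if tau is None:
        # universal threshold on the raw difference, scaled by its noise level
        tau = np.sqrt(2 * np.log(len(y))) * np.sqrt(sigma_t**2 + sigma_s**2)
    d = y - x
    # precision-weighted pooling: weight on x is sigma_t^2 / (sigma_t^2 + sigma_s^2)
    w = sigma_t**2 / (sigma_t**2 + sigma_s**2)
    pooled = w * x + (1 - w) * y
    return np.where(np.abs(d) <= tau, pooled, y)

theta_hat = shrinkage_estimate(y, x, sigma_t, sigma_s)
mse_tl = np.mean((theta_hat - theta_target) ** 2)
mse_target_only = np.mean((y - theta_target) ** 2)
print(f"target-only MSE: {mse_target_only:.3f}, shrinkage MSE: {mse_tl:.3f}")
```

Under this simulation the shrinkage rule typically beats the target-only estimator, since most coordinates are shared and can safely borrow the source data's precision; the few truly different coordinates are protected by the threshold.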