Bayesian Learning for Regression Using Dirichlet Prior Distributions of Varying Localization

2021 IEEE Statistical Signal Processing Workshop (SSP)

Abstract
When taking a Bayesian approach to machine learning applications, the performance of the learned function strongly depends on how well the prior distribution selected by the designer matches the true data-generating model. Dirichlet priors have a number of desirable properties: they result in closed-form posterior distributions given independent training data, have full support over the space of data probability distributions, and can be maximally informative or non-informative depending on their localization parameter. This paper assumes a Dirichlet prior and details the predictive distributions that characterize unobservable random quantities given observed data. The results are then applied to the most common loss function for regression, the squared-error loss. The optimal Bayes estimator and the resulting risk trends are presented for different prior localizations, demonstrating a bias/variance trade-off.
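The conjugacy and localization behavior described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; the names (`s` for the localization/concentration parameter, `base` for the prior base measure) and the discretized outcome space are illustrative assumptions. It shows the closed-form Dirichlet posterior update and the Bayes estimator under squared-error loss (the posterior mean), with the estimate shrinking toward the prior as localization grows:

```python
import numpy as np

def posterior_mean(values, counts, s, base):
    """Bayes estimate under squared-error loss over a discretized outcome.

    values: support points of the discretized outcome variable
    counts: observed counts at each support point
    s:      prior localization (total Dirichlet concentration) -- illustrative name
    base:   prior base measure (probabilities summing to 1) -- illustrative name
    """
    alpha_post = s * base + counts          # closed-form conjugate Dirichlet update
    probs = alpha_post / alpha_post.sum()   # posterior mean of the outcome pmf
    return float(values @ probs)            # E[Y | data]: optimal under squared error

values = np.array([0.0, 1.0, 2.0])
base = np.array([1 / 3, 1 / 3, 1 / 3])      # uniform base measure, prior mean 1.0
counts = np.array([0, 0, 10])               # all ten observations at y = 2

# Small s (weakly informative prior): the estimate tracks the data, near 2.
print(posterior_mean(values, counts, s=1.0, base=base))
# Large s (highly localized prior): the estimate shrinks toward the prior mean 1.0,
# trading variance for bias, as in the abstract's bias/variance discussion.
print(posterior_mean(values, counts, s=100.0, base=base))
```

Sweeping `s` from small to large traces out the trade-off: low `s` yields a low-bias, high-variance estimator; high `s` yields the reverse.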
Keywords
Bayesian learning, machine learning, regression, estimation, Dirichlet distribution, bias, variance, predictive distribution