Generative Quantile Regression with Variability Penalty

Shijie Wang, Minsuk Shin, Ray Bai

JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS (2024)

Abstract
Quantile regression and conditional density estimation can reveal structure that is missed by mean regression, such as multimodality and skewness. In this article, we introduce a deep learning generative model for joint quantile estimation called Penalized Generative Quantile Regression (PGQR). Our approach simultaneously generates samples from many random quantile levels, allowing us to infer the conditional distribution of a response variable given a set of covariates. Our method employs a novel variability penalty to avoid the problem of vanishing variability, or memorization, in deep generative models. Further, we introduce a new family of partial monotonic neural networks (PMNN) to circumvent the problem of crossing quantile curves. A major benefit of PGQR is that it can be fit using a single optimization, thus, bypassing the need to repeatedly train the model at multiple quantile levels or use computationally expensive cross-validation to tune the penalty parameter. We illustrate the efficacy of PGQR through extensive simulation studies and analysis of real datasets. Code to implement our method is available at https://github.com/shijiew97/PGQR.
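The joint quantile estimation described above builds on the standard quantile (pinball) loss, which is minimized at the conditional tau-th quantile; training over many randomly drawn quantile levels tau is what lets a single model cover the whole conditional distribution. The following is a minimal illustrative sketch of that loss and the random-tau training idea, not the authors' PGQR implementation (which adds the variability penalty and PMNN architecture; see the linked repository for the actual code). The function names here are hypothetical.

```python
import random

def pinball_loss(y, y_hat, tau):
    """Quantile (pinball) loss: rho_tau(u) = u * (tau - 1{u < 0}).

    Minimizing its expectation over y pushes y_hat toward the
    tau-th conditional quantile of y.
    """
    u = y - y_hat
    return u * tau if u >= 0 else u * (tau - 1.0)

def random_tau_batch_loss(ys, y_hats_at, n_taus=8, rng=random):
    """Average pinball loss over randomly sampled quantile levels.

    `y_hats_at(tau)` is a hypothetical stand-in for a generative
    model that, given a quantile level tau, returns predictions for
    every observation; sampling tau ~ Uniform(0, 1) each step is what
    allows one optimization to fit all quantile levels jointly.
    """
    total, count = 0.0, 0
    for _ in range(n_taus):
        tau = rng.uniform(0.0, 1.0)
        preds = y_hats_at(tau)
        for y, y_hat in zip(ys, preds):
            total += pinball_loss(y, y_hat, tau)
            count += 1
    return total / count
```

For example, `pinball_loss(1.0, 0.0, 0.9)` returns 0.9 while `pinball_loss(0.0, 1.0, 0.9)` returns 0.1: over-prediction is penalized lightly at high tau, so the minimizer sits near the upper tail.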
Keywords
Conditional quantile, Deep generative model, Generative learning, Joint quantile model, Neural networks, Nonparametric quantile regression