Two-timescale Derivative Free Optimization for Performative Prediction with Markovian Data

Haitong Liu, Qiang Li, Hoi-To Wai

arXiv (Cornell University), 2023

Abstract
This paper studies the performative prediction problem, where a learner aims to minimize the expected loss under a decision-dependent data distribution. Such a setting arises when outcomes are affected by the prediction model, e.g., in strategic classification. We consider a state-dependent setting where the data distribution evolves according to an underlying controlled Markov chain. We focus on stochastic derivative-free optimization (DFO), where the learner has access to a loss function evaluation oracle with the above Markovian data. We propose a two-timescale DFO($\lambda$) algorithm that features (i) a sample accumulation mechanism that utilizes every observed sample to estimate the overall gradient of the performative risk, and (ii) a two-timescale diminishing step size that balances the rates of DFO updates and bias reduction. Under a general non-convex optimization setting, we show that DFO($\lambda$) requires ${\cal O}(1/\epsilon^3)$ samples (up to a log factor) to attain a near-stationary solution with expected squared gradient norm less than $\epsilon > 0$. Numerical experiments verify our analysis.
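To make the two-timescale mechanism concrete, below is a minimal NumPy sketch of a one-point spherical-smoothing DFO update with an exponentially weighted sample-accumulation buffer and diminishing step size / smoothing radius on separate timescales. This is an illustrative reading of the abstract, not the authors' algorithm: the oracle loss_oracle, the Markov transition sample_data, the weight lam, and the step-size exponents are all hypothetical choices.

import numpy as np

rng = np.random.default_rng(0)
d = 5                      # decision dimension
theta = np.zeros(d)        # current decision variable
g_bar = np.zeros(d)        # accumulated gradient estimate (lambda-weighted)
lam = 0.9                  # accumulation weight; hypothetical value

def loss_oracle(theta, z):
    # Hypothetical loss evaluation oracle ell(theta; z).
    return 0.5 * np.sum((theta - z) ** 2)

def sample_data(theta, z_prev):
    # Hypothetical controlled Markov chain: the data sample drifts
    # toward a point determined by the deployed decision theta.
    return 0.5 * z_prev + 0.3 * theta + 0.1 * rng.standard_normal(d)

z = rng.standard_normal(d)
for t in range(1, 10_001):
    alpha = 0.5 * t ** (-2 / 3)   # fast timescale: DFO step size (exponent illustrative)
    delta = 0.5 * t ** (-1 / 6)   # slow timescale: smoothing radius for bias reduction
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)        # uniform direction on the unit sphere
    z = sample_data(theta + delta * u, z)               # Markovian sample at perturbed decision
    g_hat = (d / delta) * loss_oracle(theta + delta * u, z) * u  # one-point gradient estimate
    g_bar = lam * g_bar + (1 - lam) * g_hat             # sample accumulation: reuse every sample
    theta -= alpha * g_bar                              # decision update

The key design point mirrored here is that alpha and delta shrink at different rates, so the iterate moves slowly relative to the decaying smoothing bias, while the lam-weighted average pools information from every observed sample rather than discarding past queries.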
Keywords
performative prediction