A CONTINUOUS-TIME APPROACH TO ONLINE OPTIMIZATION
JOURNAL OF DYNAMICS AND GAMES (2017)
Abstract
We consider a family of mirror descent strategies for online optimization in continuous time and show that they lead to no regret. From a more traditional, discrete-time viewpoint, this continuous-time approach allows us to derive the no-regret properties of a large class of discrete-time algorithms, including as special cases the exponential weights algorithm, online mirror descent, smooth fictitious play, and vanishingly smooth fictitious play. In so doing, we obtain a unified view of many classical regret bounds and show that they decompose into a term stemming from continuous-time considerations and a term that measures the disparity between discrete and continuous time. This generalizes the continuous-time-based analysis of the exponential weights algorithm from [29]. As a result, we obtain a general class of infinite-horizon learning strategies that guarantee an O(n^{-1/2}) regret bound without having to resort to a doubling trick.
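To make the setting concrete, the following is a minimal sketch of the exponential weights algorithm, one of the special cases the abstract mentions. It is an illustrative fixed-horizon implementation (the step size `eta = sqrt(log(d)/n)` is a standard choice yielding O(n^{-1/2}) average regret for payoffs in [0, 1]), not the paper's own anytime construction; the function name and interface are hypothetical.

```python
import numpy as np

def exponential_weights(payoffs, eta=None):
    """Exponential weights over a sequence of payoff vectors.

    payoffs: array of shape (n_rounds, n_actions), entries in [0, 1];
    payoffs[t] is the payoff of each pure action at round t.
    Returns the played mixed strategies and the cumulative regret
    (versus the best fixed action in hindsight) after each round.
    """
    n, d = payoffs.shape
    if eta is None:
        # horizon-tuned step size; gives O(sqrt(n log d)) cumulative regret
        eta = np.sqrt(np.log(d) / n)
    scores = np.zeros(d)              # cumulative payoff of each action
    strategies = np.zeros((n, d))
    realized = 0.0                    # cumulative payoff actually obtained
    regrets = np.zeros(n)
    for t in range(n):
        # softmax of cumulative scores, shifted by the max for stability
        w = np.exp(eta * (scores - scores.max()))
        x = w / w.sum()
        strategies[t] = x
        realized += x @ payoffs[t]
        scores += payoffs[t]
        regrets[t] = scores.max() - realized
    return strategies, regrets
```

The decomposition described in the abstract would attribute part of the resulting regret bound to the continuous-time dynamics and part to the discretization gap; here both are folded into the single horizon-tuned step size.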
Keywords
Online optimization, regret minimization, mirror descent, gradient descent, continuous time, convex optimization