An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback

JOURNAL OF MACHINE LEARNING RESEARCH (2017)

Abstract
We consider the closely related problems of bandit convex optimization with two-point feedback, and zero-order stochastic convex optimization with two function evaluations per round. We provide a simple algorithm and analysis which is optimal for convex Lipschitz functions. This improves on Duchi et al. (2015), which only provides an optimal result for smooth functions; moreover, the algorithm and analysis are simpler, and readily extend to non-Euclidean problems. The algorithm is based on a small but surprisingly powerful modification of the gradient estimator.
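The abstract does not reproduce the estimator itself, so the following is only a minimal sketch of the standard two-point (symmetric-difference) gradient estimator plugged into projected gradient descent over a Euclidean ball, assuming a Lipschitz objective f. The function names (two_point_gradient_estimate, zero_order_pgd), the step size eta, the perturbation radius delta, and the ball radius are illustrative assumptions, not the paper's exact construction or its particular modification of the estimator.

```python
import numpy as np

def two_point_gradient_estimate(f, x, delta, rng):
    # Draw a direction u uniformly from the unit sphere, then estimate the
    # gradient from two function values:
    #   (d / (2*delta)) * (f(x + delta*u) - f(x - delta*u)) * u
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

def zero_order_pgd(f, x0, T, eta, delta, radius, seed=0):
    # Projected gradient descent on the Euclidean ball of the given radius,
    # using only two function evaluations per round; returns the averaged iterate.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for _ in range(T):
        g = two_point_gradient_estimate(f, x, delta, rng)
        x = x - eta * g
        norm = np.linalg.norm(x)
        if norm > radius:  # project back onto the feasible ball
            x = x * (radius / norm)
        avg += x
    return avg / T

# Example on a non-smooth Lipschitz objective, f(x) = ||x - x*||_1.
if __name__ == "__main__":
    x_star = np.ones(10)
    f = lambda x: np.sum(np.abs(x - x_star))
    x_hat = zero_order_pgd(f, np.zeros(10), T=50000, eta=0.002, delta=1e-3, radius=5.0)
    print(f(x_hat))
```

The hyperparameters here (eta, delta, T) are placeholder choices for illustration; in the Lipschitz setting the step size would typically be tuned on the order of radius divided by the product of the gradient-norm bound and the square root of the horizon.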
Keywords
zero-order optimization, bandit optimization, stochastic optimization, gradient estimator