A Stochastic Derivative-Free Optimization Method with Importance Sampling

arXiv: Optimization and Control (2019)

Abstract
We consider the problem of unconstrained minimization of a smooth objective function in $\mathbb{R}^n$ in a setting where only function evaluations are possible. While importance sampling is one of the most popular techniques used by machine learning practitioners to accelerate the convergence of their models when applicable, there is not much existing theory for this acceleration in the derivative-free setting. In this paper, we propose an importance sampling version of the stochastic three points ($\texttt{STP}$) method proposed by Bergou et al. and derive new improved complexity results on non-convex, convex and $\lambda$-strongly convex functions. We conduct extensive experiments on various synthetic and real LIBSVM datasets confirming our theoretical results. We further test our method on a collection of continuous control tasks on several MuJoCo environments with varying difficulty. Our results suggest that $\texttt{STP}$ is practical for high dimensional continuous control problems. Moreover, the proposed importance sampling version results in a significant sample complexity improvement.
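To make the abstract's core idea concrete, here is a minimal sketch of one possible STP-style step with importance sampling over coordinate directions. The function name, the decaying step-size schedule, and the choice of sampling probabilities proportional to per-coordinate Lipschitz estimates are illustrative assumptions, not the paper's exact algorithm or parameters; the three-point comparison (keep the best of $x$, $x + \alpha s$, $x - \alpha s$) follows the STP description.

```python
import numpy as np

def stp_importance_sampling(f, x0, lipschitz, alpha0=0.5, iters=1000):
    """Sketch of a stochastic-three-points step with importance sampling.

    Samples coordinate direction e_i with probability proportional to the
    (assumed known) per-coordinate Lipschitz estimate L_i, then keeps the
    best of the three points x, x + alpha*e_i, x - alpha*e_i.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    p = np.asarray(lipschitz, dtype=float)
    p = p / p.sum()                    # importance sampling distribution (assumed L_i-proportional)
    fx = f(x)
    for k in range(1, iters + 1):
        alpha = alpha0 / np.sqrt(k)    # decaying step size (assumed schedule)
        i = np.random.choice(n, p=p)   # sample a coordinate index with probability p_i
        s = np.zeros(n)
        s[i] = 1.0                     # coordinate direction e_i
        f_plus = f(x + alpha * s)
        f_minus = f(x - alpha * s)
        # Keep whichever of the three candidate points has the lowest value.
        if f_plus <= fx and f_plus <= f_minus:
            x, fx = x + alpha * s, f_plus
        elif f_minus <= fx:
            x, fx = x - alpha * s, f_minus
    return x, fx

# Example usage on an ill-conditioned quadratic, where the coordinates
# have very different curvature and uniform sampling would be wasteful.
if __name__ == "__main__":
    L = np.array([100.0, 1.0, 1.0, 1.0])
    f = lambda x: 0.5 * np.sum(L * x**2)
    x_star, f_star = stp_importance_sampling(f, x0=np.ones(4), lipschitz=L)
    print(x_star, f_star)
```

Note that each iteration uses only two new function evaluations and no gradients, which is what makes the method viable in the derivative-free setting the abstract describes.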