Derivative-Free Optimization via Adaptive Sampling Strategies
arXiv (2024)
Abstract
In this paper, we present a novel derivative-free optimization framework for
solving unconstrained stochastic optimization problems. Many problems in fields
ranging from simulation optimization to reinforcement learning involve settings
where only stochastic function values are obtained via an oracle with no
available gradient information, necessitating the use of derivative-free
optimization methods. Our approach estimates gradients from stochastic
function evaluations and integrates adaptive sampling techniques to control
the accuracy of these stochastic approximations. We consider several
gradient estimation techniques, including standard finite difference, Gaussian
smoothing, sphere smoothing, randomized coordinate finite difference, and
randomized subspace finite difference methods. We provide theoretical
convergence guarantees for our framework and analyze the worst-case iteration
and sample complexities associated with each gradient estimation method.
Finally, we demonstrate the empirical performance of the methods on logistic
regression and nonlinear least squares problems.
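As a concrete illustration of one of the listed estimators (a sketch, not the paper's implementation), the following Python snippet shows a forward-difference Gaussian smoothing gradient estimate: it averages directional finite differences along random Gaussian directions queried from a noisy zeroth-order oracle. The oracle `noisy_quadratic`, the smoothing radius `mu`, and the sample count `num_samples` are hypothetical choices made for this example.

```python
import numpy as np

def gaussian_smoothing_gradient(oracle, x, mu=1e-2, num_samples=16, rng=None):
    """Forward-difference Gaussian smoothing gradient estimate.

    oracle: returns a noisy scalar evaluation of f at a point (no gradients).
    mu: smoothing radius; num_samples: number of random directions averaged.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    f0 = oracle(x)
    g = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)                 # direction u ~ N(0, I_d)
        g += (oracle(x + mu * u) - f0) / mu * u    # directional difference
    return g / num_samples

# Hypothetical noisy oracle for f(x) = 0.5 * ||x||^2, whose true gradient is x.
rng = np.random.default_rng(0)
noisy_quadratic = lambda x: 0.5 * x @ x + 1e-3 * rng.standard_normal()

x = np.ones(5)
g_hat = gaussian_smoothing_gradient(noisy_quadratic, x, rng=rng)
print(g_hat)  # should be close to the true gradient (1, ..., 1)
```

Under an adaptive sampling strategy of the kind the abstract describes, `num_samples` would not stay fixed: it would be increased as the iterates progress so that the estimator's accuracy keeps pace with the optimization.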