Black-Box Acceleration of Monotone Convex Program Solvers

Operations Research (2024)

Abstract
When and where was the study conducted: This work was done in 2018, 2019, and 2020, when Palma London was a PhD student at Caltech and Shai Vardi was a postdoc at Caltech. This work was also done in part while Palma London was visiting Purdue University, and while Reza Eghbali was a postdoctoral fellow at the Simons Institute for the Theory of Computing. Adam Wierman is a professor at Caltech.

Article Summary and Talking Points

Please describe the primary purpose/findings of your article in 3 sentences or less:
This paper presents a framework for accelerating (speeding up) existing convex program solvers. Across engineering disciplines, a fundamental bottleneck is the availability of fast, efficient, accurate solvers. We present an acceleration method that speeds up linear programming solvers such as Gurobi and convex program solvers such as the Splitting Conic Solver by two orders of magnitude.

Please include 3-5 short bullet points of “Need to Know” items regarding this research and your findings:
- Optimization problems arise in many engineering and science disciplines, and developing efficient optimization solvers is key to future innovation.
- We speed up the linear programming solver Gurobi by two orders of magnitude.
- This work applies to optimization problems with monotone objective functions and packing constraints, which is a common problem formulation across many disciplines.

Please identify 2 pull quotes from your article that best capture the novelty and impact of your research:

“We propose a framework for accelerating exact and approximate convex programming solvers for packing linear programming problems and a family of convex programming problems with linear constraints. Analytically, we provide worst-case guarantees on the run time and the quality of the solution produced. Numerically, we demonstrate that our framework speeds up Gurobi and the Splitting Conic Solver by two orders of magnitude, while maintaining a near-optimal solution.”

“Our focus in this paper is on a class of packing problems for which data is either very costly or hard to obtain. In these situations, the number of data points available is much smaller than the number of variables. In a machine-learning setting, this regime is increasingly prevalent because it is often advantageous to consider larger and larger feature spaces, while not necessarily obtaining proportionally more data.”

Article Implications

Please describe in 5 sentences or less the innovative takeaway(s) of your research:
This framework applies to optimization problems with monotone objective functions and packing constraints, which is a common problem formulation across many disciplines, including machine learning, inference, and resource allocation. Providing fast solvers for these problems is crucial. We exploit characteristics of the problem structure and leverage statistical properties of the problem constraints to speed up optimization solvers. We present worst-case guarantees on run time, and empirically demonstrate speedups of two orders of magnitude.

Please describe in 5 sentences or less why your findings would be of interest to the general public:
Many problems in engineering, science, math, and machine learning involve solving an optimization problem. Fast, efficient optimization solvers are key to future innovation in science and engineering. This work presents a tool to accelerate existing convex solvers, and thus can also be applied to future solvers. As the size of datasets grows, it is even more crucial to have fast solvers.

Who would be the most impacted by your research (i.e., by industry, job title, or consumer category):
Our work impacts machine-learning researchers and optimization researchers, in industry and academia.

This paper presents a black-box framework for accelerating packing optimization solvers. Our method applies to packing linear programming problems and a family of convex programming problems with linear constraints. The framework is designed for high-dimensional problems, for which the number of variables n is much larger than the number of measurements m. Given an (m×n) problem, we construct a smaller (m×ϵn) problem, whose solution we use to find an approximation to the optimal solution of the original problem. Our framework can accelerate both exact and approximate solvers. If the solver being accelerated produces an α-approximation, then we produce a ((1−ϵ)/α²)-approximation of the optimal solution to the original problem. We present worst-case guarantees on run time and empirically demonstrate speedups of two orders of magnitude.

Funding: Financial support from the National Science Foundation [Grants AitF-1637598, CNS-151894, and CPS-154471] and the Linde Institute is gratefully acknowledged.
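To make the dimension-reduction idea in the abstract concrete, here is a minimal Python sketch of the kind of pipeline described: solve a column-sampled (m×ϵn) packing LP with an off-the-shelf solver, read off its dual prices, and use those prices to select and set variables in the full problem. Everything beyond that outline — the uniform column sampling, the (k/n) budget scaling, the price threshold, and the greedy feasibility pass — is an assumption made for illustration; the paper's actual reconstruction and its guarantees differ in detail. SciPy's HiGHS backend stands in for the black-box solver.

```python
# Illustrative sketch only: the sampling rule, budget scaling, dual-price
# threshold, and greedy fill are assumptions for exposition, not the
# paper's exact algorithm.
import numpy as np
from scipy.optimize import linprog

def accelerated_packing_lp(c, A, b, eps=0.1, seed=0):
    """Approximately solve the packing LP  max c@x  s.t.  A@x <= b, x >= 0,
    with nonnegative data and A of shape (m, n), n >> m."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = max(m + 1, int(eps * n))              # size of the reduced problem
    S = rng.choice(n, size=k, replace=False)  # sample a fraction eps of columns

    # Black-box call on the (m x k) problem; any LP solver could sit here.
    small = linprog(-c[S], A_ub=A[:, S], b_ub=(k / n) * b,
                    bounds=(0, None), method="highs")
    if small.status != 0:
        raise RuntimeError("reduced LP failed: " + small.message)
    y = -small.ineqlin.marginals              # dual prices on the m constraints

    # Keep columns of the full problem still profitable at prices y,
    # i.e. those with c_j > A_j^T y.
    keep = np.flatnonzero(c > A.T @ y + 1e-9)

    # Greedy feasibility pass: raise kept variables, most valuable first,
    # until a budget binds; the result is feasible by construction.
    x, remaining = np.zeros(n), b.astype(float)
    for j in keep[np.argsort(-c[keep])]:
        col = A[:, j]
        pos = col > 0
        if not pos.any():
            continue                          # all-zero column: skip
        x[j] = np.min(remaining[pos] / col[pos])
        remaining = remaining - x[j] * col
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    m, n = 20, 5000                           # n >> m, the regime of interest
    A, c = rng.random((m, n)), rng.random(n)
    b = np.full(m, n / 10.0)
    x = accelerated_packing_lp(c, A, b, eps=0.05)
    print("feasible:", bool(np.all(A @ x <= b + 1e-6)), "objective:", c @ x)
```

The point the sketch tries to convey is that the reduced problem's dual variables act as prices that generalize to unseen columns, which is where the statistical properties of the constraints enter; only the small (m×ϵn) problem is ever handed to the expensive solver.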
Keywords
Optimization, linear programming, convex optimization, dimension reduction