Compiler-Generated Subgradient Code for McCormick Relaxations

Callum J. Corbett, Michael Maier, Markus Beckers, Uwe Naumann, Amin Ghobeity, Alexander Mitsos

Semantic Scholar (2011)

Abstract
McCormick relaxations are special convex and concave under- and overestimators used in the field of nonconvex global optimization. As they are possibly nonsmooth at some points, subgradients are used as derivative information. Subgradients are natural extensions of the usual derivatives and may be used to construct linear or piecewise-linear relaxations. [Mitsos et al. 2009] developed a corresponding method and the library libMC (written in C++) for propagating relaxations and related subgradients using the tangent-linear mode of Algorithmic Differentiation (also known as Automatic Differentiation) by operator overloading. This paper extends [Mitsos et al. 2009] by providing the libMC functionality via source transformation in Fortran. The corresponding Fortran module modMC is described. A research prototype of the NAG Fortran compiler has been extended with domain-specific inlining techniques to enable the generation of tangent-linear McCormick code. Speedups by factors of up to four with respect to the runtime of the respective libMC-based implementation are observed. These results are supported by a number of relevant applications. To perform the numerical experiments, an interface has been established between tangent-linear McCormick code written in Fortran and the existing C++ implementation of the branch-and-bound algorithm.

1 Motivation and Context

Nonlinear programs (NLPs) with a nonconvex objective function or constraints typically have multiple local extrema, some of which are suboptimal. There are various necessary and sufficient criteria for establishing a local extremum, see e.g. [6], which are relatively easy to test; local, gradient-based methods employ these criteria for termination. In other words, at termination, gradient-based methods can identify whether a local extremum has been obtained. In contrast, no direct criteria exist for establishing a global optimum. Several heuristics exist to attempt global optimization with local solvers, such as multistart, i.e., repeated application of a local solver from different initial guesses. An alternative is gradient-free methods, such as evolutionary algorithms. These methods do not rely on local termination criteria and as such have the potential to avoid suboptimal local solutions. However, neither gradient-based methods with heuristics nor gradient-free methods can guarantee that a global optimum has been obtained, at least not at finite termination. This article focuses on compiler support for deterministic global optimization, i.e., methods which can deterministically guarantee a global solution. Throughout the article, round-off error is not considered. Deterministic global optimization algorithms [28, 15] rely on lower bounds on the optimal objective value, obtained via relaxations. A relaxation is an auxiliary problem whose optimal objective value is guaranteed to be at least as good as that of the original problem (lower for minimization problems). For a relaxation to be useful for global optimization, it must be easier to solve than the original optimization problem. Popular relaxations of a nonconvex NLP are convex NLPs, linear programs (LPs), and interval extensions.
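To make the construction concrete, below is a minimal sketch of the McCormick envelopes of the bilinear term w = x*y on a box, together with a subgradient of the convex underestimator. It is written in C++ in the spirit of libMC, but the names (McBilinear, mccormick_bilinear) are illustrative assumptions and not the libMC API.

```cpp
#include <algorithm>
#include <array>
#include <cstdio>

// Illustrative only (not the libMC API): McCormick envelopes of the
// bilinear term w = x*y on the box [xL,xU] x [yL,yU], plus a subgradient
// of the convex underestimator at the point (x, y).
struct McBilinear {
    double cv;                  // value of the convex underestimator at (x, y)
    double cc;                  // value of the concave overestimator at (x, y)
    std::array<double, 2> sub;  // a subgradient of cv w.r.t. (x, y)
};

McBilinear mccormick_bilinear(double x, double y,
                              double xL, double xU,
                              double yL, double yU) {
    // Two affine underestimators of x*y; their pointwise max is the convex envelope.
    const double u1 = xL * y + x * yL - xL * yL;
    const double u2 = xU * y + x * yU - xU * yU;
    // Two affine overestimators of x*y; their pointwise min is the concave envelope.
    const double o1 = xU * y + x * yL - xU * yL;
    const double o2 = xL * y + x * yU - xL * yU;

    McBilinear r;
    r.cv = std::max(u1, u2);
    r.cc = std::min(o1, o2);
    // The max of affine functions is convex but nonsmooth where the pieces tie;
    // the gradient of any active piece is a valid subgradient.
    r.sub = (u1 >= u2) ? std::array<double, 2>{yL, xL}
                       : std::array<double, 2>{yU, xU};
    return r;
}

int main() {
    const McBilinear r = mccormick_bilinear(0.5, -0.5, -1.0, 1.0, -1.0, 1.0);
    std::printf("cv = %g, cc = %g, subgradient = (%g, %g)\n",
                r.cv, r.cc, r.sub[0], r.sub[1]);
    return 0;
}
```

At points where the two affine underestimators tie, any convex combination of their gradients is a valid subgradient; the sketch simply returns the gradient of one active piece, which suffices for propagation.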
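Such lower bounds are what a branch-and-bound algorithm uses to fathom nodes. The following toy sketch is not the paper's C++ branch-and-bound implementation; the function names, the test problem, and the tolerances are assumptions made up for illustration. It minimizes the nonconvex function f(x) = sin(x) + x^2/10, bounding each node with the convex underestimator -1 + x^2/10 (valid because sin(x) >= -1).

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <queue>
#include <utility>

// Toy branch-and-bound sketch (not the paper's implementation):
// minimize f(x) = sin(x) + x*x/10 on [-4, 4].
double f(double x) { return std::sin(x) + x * x / 10.0; }

// Exact minimum of the convex underestimator -1 + x*x/10 over [a, b]
// gives a valid lower bound on f over that subinterval.
double lowerBound(double a, double b) {
    const double sqmin = (a <= 0.0 && 0.0 <= b) ? 0.0 : std::min(a * a, b * b);
    return -1.0 + sqmin / 10.0;
}

int main() {
    double ub = f(0.0);  // incumbent upper bound from an arbitrary point
    std::queue<std::pair<double, double>> nodes;
    nodes.push({-4.0, 4.0});
    while (!nodes.empty()) {
        const auto [a, b] = nodes.front();
        nodes.pop();
        if (lowerBound(a, b) > ub - 1e-6) continue;  // fathom: cannot improve
        const double m = 0.5 * (a + b);
        ub = std::min(ub, f(m));     // try to improve the incumbent
        if (b - a < 1e-4) continue;  // minimum node width reached
        nodes.push({a, m});          // branch by bisection
        nodes.push({m, b});
    }
    std::printf("approximate global minimum value: %g\n", ub);
    return 0;
}
```

In the setting of the paper, the lower bounds come from McCormick relaxations linearized via the propagated subgradients rather than from a hand-derived underestimator, but the fathoming logic is the same.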