Controlled gradient descent: A control theoretical perspective for optimization

Revati Gunjal, Syed Shadab Nayyer, S.R. Wagh, N.M. Singh

Results in Control and Optimization (2024)

Abstract
The Gradient Descent (GD) paradigm is a foundational principle of modern optimization algorithms. The GD algorithm and its variants, including accelerated optimization algorithms, geodesic optimization, natural gradient, and contraction-based optimization, to name a few, are used in machine learning and in the systems and control domain. Here, we propose a new algorithm based on a control theoretical perspective, termed Controlled Gradient Descent (CGD). This approach overcomes a key challenge of the abovementioned algorithms, namely their reliance on the choice of a suitable geometric structure, which is particularly difficult in machine learning. The proposed CGD approach views optimization as a Manifold Stabilization Problem (MSP) through the notion of an invariant manifold and its attractivity. As an additional outcome, the CGD approach yields exponential contraction of trajectories under a pseudo-Riemannian metric generated through the control procedure. The efficacy of CGD is demonstrated on several test objective functions, such as the benchmark Rosenbrock function, an objective function lacking flatness, and semi-contracting objective functions often encountered in machine learning applications.
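For context, the sketch below shows the plain (Euclidean) gradient descent baseline on the benchmark Rosenbrock function mentioned in the abstract. It does not implement the proposed CGD update, whose control-theoretic construction is given in the paper itself; the step size, starting point, and iteration count are illustrative assumptions.

```python
import numpy as np

def rosenbrock(x, a=1.0, b=100.0):
    """Benchmark Rosenbrock function f(x, y) = (a - x)^2 + b (y - x^2)^2."""
    return (a - x[0])**2 + b * (x[1] - x[0]**2)**2

def rosenbrock_grad(x, a=1.0, b=100.0):
    """Analytic gradient of the Rosenbrock function."""
    dfdx = -2.0 * (a - x[0]) - 4.0 * b * x[0] * (x[1] - x[0]**2)
    dfdy = 2.0 * b * (x[1] - x[0]**2)
    return np.array([dfdx, dfdy])

def gradient_descent(grad, x0, step=2e-4, n_iter=50_000):
    """Plain GD baseline: x_{k+1} = x_k - step * grad f(x_k).

    Step size and iteration count are illustrative; GD converges slowly
    in the curved Rosenbrock valley, which motivates improved schemes.
    """
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        x = x - step * grad(x)
    return x

if __name__ == "__main__":
    x_approx = gradient_descent(rosenbrock_grad, x0=[-1.5, 2.0])
    print("approximate minimizer:", x_approx)  # true minimizer is (1, 1)
    print("objective value:", rosenbrock(x_approx))
```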
Keywords
Gradient descent (GD), Manifold Stabilization, Optimization, Overparameterized Networks, Passivity and Immersion (P&I)