Exact representation and efficient approximations of linear model predictive control laws via HardTanh type deep neural networks

Systems & Control Letters (2024)

Abstract
Deep neural networks have revolutionized many fields, including image processing, inverse problems, and text mining, and have recently shown very promising results in systems and control. Neural networks with hidden layers have strong potential as an approximation framework for predictive control laws, as they usually yield better approximation quality and smaller memory requirements than existing explicit (multi-parametric) approaches. In this paper, we first show that neural networks with HardTanh activation functions can exactly represent predictive control laws of linear time-invariant systems. We derive theoretical bounds on the minimum number of hidden layers and neurons that a HardTanh neural network must have to exactly represent a given predictive control law. HardTanh deep neural networks are particularly suited for linear predictive control laws, as they usually require fewer hidden layers and neurons than deep neural networks with ReLU units to exactly represent continuous piecewise affine (or, equivalently, min–max) maps. In the second part of the paper, we bring the physics of the model and standard optimization techniques into the architecture design in order to eliminate the disadvantages of black-box HardTanh learning. More specifically, we design trainable unfolded HardTanh deep architectures for learning linear predictive control laws based on two standard iterative optimization algorithms, i.e., projected gradient descent and accelerated projected gradient descent. We also study the performance of the proposed HardTanh type deep neural networks on a linear model predictive control application.
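The link between HardTanh units and linear MPC can be made concrete: for input-constrained linear MPC, the control law is the solution of a box-constrained quadratic program, and the Euclidean projection onto a symmetric box is exactly an elementwise HardTanh. The following PyTorch sketch illustrates the unfolding idea under stated assumptions; the names (`UnfoldedPGD`, `H`, `F`, `alpha`, `u_max`) and the symmetric scalar input bound are illustrative, not the paper's exact notation or architecture.

```python
import torch
import torch.nn as nn

class UnfoldedPGD(nn.Module):
    """Minimal sketch of an unfolded projected-gradient network for a
    box-constrained MPC quadratic program
        min_u 0.5 u'Hu + x'F u   s.t.  |u_i| <= u_max.
    Each layer mirrors one projected-gradient iterate
        u <- HardTanh(u - alpha * (H u + F' x)),
    with the linear maps made trainable and initialized from the model data
    (this physics-informed initialization is an assumption of the sketch).
    """
    def __init__(self, H, F, alpha, u_max, n_layers):
        super().__init__()
        n_u, n_x = H.shape[0], F.shape[0]
        self.layers = nn.ModuleList()
        for _ in range(n_layers):
            Wu = nn.Linear(n_u, n_u, bias=False)  # acts on the current iterate
            Wx = nn.Linear(n_x, n_u, bias=False)  # injects the measured state
            with torch.no_grad():
                # one PGD step: (I - alpha*H) u  and  (-alpha*F') x
                Wu.weight.copy_(torch.eye(n_u) - alpha * H)
                Wx.weight.copy_(-alpha * F.T)
            self.layers.append(nn.ModuleDict({"Wu": Wu, "Wx": Wx}))
        # projection onto the symmetric box is exactly a HardTanh activation
        self.proj = nn.Hardtanh(min_val=-u_max, max_val=u_max)

    def forward(self, x):
        u = torch.zeros(x.shape[0], self.layers[0]["Wu"].in_features,
                        device=x.device)
        for layer in self.layers:
            u = self.proj(layer["Wu"](u) + layer["Wx"](x))
        return u

# Toy usage on random problem data (dimensions chosen arbitrarily):
H = 2.0 * torch.eye(2)
F = 0.1 * torch.randn(4, 2)
net = UnfoldedPGD(H, F, alpha=0.4, u_max=1.0, n_layers=8)
u = net(torch.randn(5, 4))  # batch of 5 states -> 5 control inputs
```

In this view, the network depth plays the role of the iteration count of the underlying optimizer, which is consistent with the paper's claim that the model physics and the optimization algorithm can be baked into the architecture rather than learned from scratch.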
Keywords
Model predictive control, Piecewise affine/min–max functions, HardTanh activation function, Deep neural networks