Finite Time Stability and Optimal Finite Time Stabilization for Discrete-Time Stochastic Dynamical Systems

IEEE Transactions on Automatic Control (2023)

Abstract
In this article, we address finite time stability in probability of discrete-time stochastic dynamical systems. Specifically, a stochastic comparison lemma is constructed along with a scalar system involving a generalized deadzone function to establish almost sure convergence and finite time stability in probability. This result is then used to provide Lyapunov theorems for finite time stability in probability for Itô-type stationary nonlinear stochastic difference equations involving Lyapunov difference conditions on the minimum of the Lyapunov function itself and a fractional power of the Lyapunov function. In addition, we establish sufficient conditions for almost sure lower semicontinuity of the stochastic settling time, which captures the average settling-time behavior of the discrete-time nonlinear stochastic dynamical system. Furthermore, a stochastic finite-time optimal control framework is developed by exploiting connections between Lyapunov theory for finite time stability in probability and stochastic Bellman theory. In particular, we show that finite time stability in probability of the closed-loop nonlinear system is guaranteed by a Lyapunov function that is the solution to the steady-state form of the stochastic Bellman equation, thereby guaranteeing both stochastic finite time stability and optimality.
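
As a rough illustration of the type of Lyapunov difference condition the abstract refers to (a schematic sketch only, not the paper's exact statement; the constants c1, c2 and the exponent theta are illustrative), one may think of a condition of the form

\[
\mathbb{E}\big[V(f(x,w))\big] - V(x) \;\le\; -\min\!\big(c_{1}\,V(x),\; c_{2}\,V(x)^{\theta}\big), \qquad c_{1}, c_{2} > 0,\ \ \theta \in (0,1),
\]

where V is a positive-definite Lyapunov candidate and f(x, w) denotes the stochastic update of the state. The fractional power theta < 1 is what drives trajectories to the origin in finite time (in probability) rather than merely asymptotically.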
Keywords
Finite time stability in probability, finite-time stabilization, optimal control, stochastic systems