Decentralized Control of Multi-Agent Systems using Local Density Feedback

IEEE Transactions on Automatic Control (2021)

Abstract
In this paper, we stabilize a discrete-time Markov process evolving on a compact subset of $\mathbb{R}^d$ to an arbitrary target distribution that has an $L^\infty$ density and does not necessarily have a connected support on the state space. We address this problem by stabilizing the corresponding Kolmogorov forward equation, the \textit{mean-field model} of the system, using a density-dependent transition kernel as the control parameter. Our main application of interest is controlling the distribution of a multi-agent system in which each agent evolves according to this discrete-time Markov process. To prevent agent state transitions at the equilibrium distribution, which would potentially waste energy, we show that the Markov process can be constructed in such a way that the operator that pushes forward measures is the identity at the target distribution. In order to achieve this, the transition kernel is defined as a function of the current agent distribution, resulting in a nonlinear Markov process. Moreover, we design the transition kernel to be \textit{decentralized} in the sense that it depends only on the local density measured by each agent. We prove the existence of such …
Keywords
Decentralized control, discrete-time Markov processes, multi-agent systems, probability density function
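The density-feedback idea described in the abstract can be illustrated with a minimal simulation: each agent observes only the empirical density of its own bin and leaves over-populated bins with a probability that vanishes when the local density matches the target, so no transitions occur at equilibrium. This is a hedged sketch, not the paper's construction; the grid discretization, the `step` function, and the specific kernel form `gain * max(0, (rho - pi) / rho)` are all assumptions introduced for illustration.

```python
import random

def step(positions, target, n_bins, gain=0.5):
    """One step of a density-dependent (nonlinear) Markov chain on a 1-D grid.

    Illustrative sketch only: each agent uses the local density of its own
    bin, and the leave probability is zero wherever the empirical density
    already matches the target, so the push-forward map is the identity at
    the target distribution.
    """
    n = len(positions)
    # empirical density: fraction of agents in each bin (the "local" measurement)
    density = [0.0] * n_bins
    for x in positions:
        density[x] += 1.0 / n
    new_positions = []
    for x in positions:
        rho, pi = density[x], target[x]
        # positive leave probability only where local density exceeds the target
        p_leave = gain * max(0.0, (rho - pi) / rho) if rho > 0 else 0.0
        if random.random() < p_leave:
            # move to a random neighboring bin (reflecting boundaries)
            x = min(n_bins - 1, max(0, x + random.choice((-1, 1))))
        new_positions.append(x)
    return new_positions

if __name__ == "__main__":
    random.seed(0)
    n_bins = 4
    # target with mass concentrated at the ends (near-disconnected support)
    target = [0.4, 0.1, 0.1, 0.4]
    positions = [random.randrange(n_bins) for _ in range(2000)]
    for _ in range(200):
        positions = step(positions, target, n_bins)
    counts = [positions.count(b) / len(positions) for b in range(n_bins)]
    print([round(c, 2) for c in counts])
```

Because the kernel depends on the current agent distribution, the chain is nonlinear in the sense used in the abstract; when the empirical density equals the target, every leave probability is zero and the configuration is a fixed point.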