Large Deviations for Products of Non-Identically Distributed Network Matrices With Applications to Communication-Efficient Distributed Learning and Inference

IEEE Transactions on Signal Processing (2023)

Abstract
This paper studies products of independent but non-identically distributed random network matrices that arise as weight matrices in distributed consensus-type computation and inference procedures in peer-to-peer multi-agent networks. The non-identically distributed matrices studied here model application scenarios in which the agent communication network is time-varying, either naturally or by design, to achieve communication efficiency in computational procedures. First, under broad conditions on the statistics of the network matrix sequence, the product of the sequence is shown to converge almost surely to the consensus matrix, and an explicit large deviations rate of convergence is obtained. Specifically, given the admissible graph of interconnections modeling the base network topology, it is shown that the large deviations rate of consensus equals the minimum limiting value over the fluctuating graph cuts, where the edge costs are assigned through the current probabilities of the inter-agent communications. Second, the above large deviations principle is applied to distributed detection in time-varying networks with sequential observations. Adopting a consensus+innovations type distributed detection algorithm, error exponents for the detection performance are obtained as a by-product of this result. It is shown that slow starts (slow increases) of the inter-agent communication probabilities yield the same asymptotic error rate, and hence the same distributed detection performance, as if the communications were at their nominal levels from the beginning. As an important special case, it is shown that when every intermittent graph cut contains a link whose probability increases to one, the performance of distributed detection is asymptotically optimal, i.e., equivalent to a centralized setup with access to all network data at all times.
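To make the first result concrete, here is a minimal numerical sketch (not the authors' code): it draws independent doubly stochastic weight matrices on a small hypothetical 5-node topology, checks that their product approaches the consensus matrix J = (1/N)11^T, and evaluates a cut-based rate by brute force. The edge cost -log(1 - p_e) and the fixed activation probabilities are illustrative assumptions consistent with the abstract's description, not the paper's exact formula.

```python
# Minimal sketch: products of random consensus weight matrices on a
# time-varying graph, plus a brute-force cut-based rate (assumed form).
import itertools
import numpy as np

rng = np.random.default_rng(0)

N = 5
# Hypothetical admissible base topology: a ring plus one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
# Per-edge activation probabilities (held fixed here for simplicity;
# the paper allows non-identically distributed, time-varying activations).
p = {e: q for e, q in zip(edges, [0.9, 0.8, 0.7, 0.9, 0.6, 0.5])}

def random_weight_matrix():
    """Draw one symmetric doubly stochastic weight matrix: each edge is
    active independently with probability p[e]; active edges get weight 1/N."""
    W = np.eye(N)
    for (i, j), q in p.items():
        if rng.random() < q:
            W[i, i] -= 1.0 / N; W[j, j] -= 1.0 / N
            W[i, j] += 1.0 / N; W[j, i] += 1.0 / N
    return W

# The product W(t)...W(1) converges a.s. to the consensus matrix J.
P = np.eye(N)
for _ in range(200):
    P = random_weight_matrix() @ P
J = np.full((N, N), 1.0 / N)
print("||P - J|| =", np.linalg.norm(P - J))  # should be tiny

# Cut-based rate: minimize sum of -log(1 - p_e) over edges crossing a cut.
def cut_cost(S):
    return sum(-np.log(1 - q) for (i, j), q in p.items() if (i in S) != (j in S))

rate = min(cut_cost(set(S))
           for r in range(1, N)
           for S in itertools.combinations(range(N), r))
print("min-cut large deviations rate ~", rate)
```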
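The distributed detection application can be sketched similarly. The recursion below is an assumed consensus+innovations form, not the paper's exact algorithm: each agent mixes its neighbors' running statistics through a random weight matrix and injects the log-likelihood ratio (LLR) of its newest observation; the Gaussian shift-in-mean hypotheses and the ring weight model are hypothetical choices for illustration.

```python
# Minimal sketch of a consensus+innovations distributed detector
# (assumed recursion, self-contained).
import numpy as np

rng = np.random.default_rng(1)
N, T, mu = 5, 400, 0.5           # agents, time steps, H1 mean shift

def random_weight_matrix():
    """Ring topology; each edge active w.p. 0.8, weight 1/N (assumed model)."""
    W = np.eye(N)
    for i in range(N):
        j = (i + 1) % N
        if rng.random() < 0.8:
            W[i, i] -= 1.0 / N; W[j, j] -= 1.0 / N
            W[i, j] += 1.0 / N; W[j, i] += 1.0 / N
    return W

def llr(y):
    # LLR for H1: y ~ N(mu, 1) versus H0: y ~ N(0, 1)
    return mu * y - mu**2 / 2

x = np.zeros(N)                  # running decision statistics
for k in range(1, T + 1):
    W = random_weight_matrix()
    y = mu + rng.standard_normal(N)   # observations generated under H1
    # Consensus step on past statistics + innovation step with new LLRs,
    # weighted so x remains a running average of the accumulated LLRs.
    x = ((k - 1) / k) * (W @ x) + (1.0 / k) * llr(y)

print("agent statistics:", np.round(x, 3))   # near E[LLR | H1] = mu^2/2
print("decisions (x > 0 -> H1):", x > 0)
```

With more frequent inter-agent communication, the agents' statistics concentrate faster around the centralized running-average LLR, which is the mechanism behind the error exponents discussed in the abstract.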
Keywords
Distributed inference, stochastic matrices, error exponents, inaccuracy rates, large deviations, consensus