Fast Convergence Rates for Distributed Non-Bayesian Learning

IEEE Trans. Automat. Contr. (2017)

Abstract
We consider the problem of distributed learning, where a network of agents collectively aim to agree on a hypothesis that best explains a set of distributed observations of conditionally independent random processes. We propose a distributed algorithm and establish consistency, as well as a nonasymptotic, explicit, and geometric convergence rate for the concentration of the beliefs around the set of optimal hypotheses. Additionally, if the agents interact over static networks, we provide an improved learning protocol with better scalability with respect to the number of nodes in the network.
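The sketch below illustrates the type of belief dynamics the abstract describes, assuming the commonly studied non-Bayesian social learning rule in which each agent geometrically averages its neighbors' beliefs (with weights from a doubly stochastic mixing matrix) and then performs a Bayesian update with the likelihood of its own new observation. The Gaussian observation model, the ring network, and all variable names are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a distributed non-Bayesian learning update (assumed form):
# geometric averaging of neighbors' beliefs followed by a local Bayesian update.
# The observation model, hypothesis set, and network below are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

n_agents, n_hyp, T = 4, 3, 200
theta = np.array([-1.0, 0.0, 1.0])   # candidate hypotheses (Gaussian means)
true_theta = 1                        # index of the hypothesis generating the data

# Doubly stochastic mixing matrix for a 4-agent ring (lazy Metropolis-style weights).
A = np.array([
    [0.5,  0.25, 0.0,  0.25],
    [0.25, 0.5,  0.25, 0.0 ],
    [0.0,  0.25, 0.5,  0.25],
    [0.25, 0.0,  0.25, 0.5 ],
])

beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)  # uniform priors

def log_likelihood(obs, means, sigma=1.0):
    """Gaussian log-likelihood of each agent's scalar observation under each hypothesis."""
    return -0.5 * ((obs - means) / sigma) ** 2

for t in range(T):
    obs = theta[true_theta] + rng.normal(size=n_agents)       # local observations
    log_mix = A @ np.log(beliefs)                              # geometric averaging over neighbors
    log_post = log_mix + log_likelihood(obs[:, None], theta)   # Bayesian update with local data
    log_post -= log_post.max(axis=1, keepdims=True)            # numerically stable normalization
    beliefs = np.exp(log_post)
    beliefs /= beliefs.sum(axis=1, keepdims=True)

print("final beliefs per agent:\n", np.round(beliefs, 3))
```

Run over enough iterations, each agent's belief mass concentrates on the hypothesis generating the data, which is the kind of geometric concentration rate the paper analyzes.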
Keywords
Convergence, Silicon, Protocols, Probability distribution, Random variables, Estimation, Bayes methods