Consensus-based distributed online prediction and optimization

IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2013

Abstract
This paper considers the problems of distributed online prediction and optimization. Each node in a network of processors handles a stream of data in an online manner: before the next data point arrives, the node must make a prediction; after receiving the point, it accrues some loss or regret. The goal of the processors is to minimize the total aggregate regret. We propose a consensus-based distributed optimization method for fitting the model used to make the predictions online. After observing each data point, nodes individually make gradient descent-like adjustments to their model parameters, and consensus iterations are then performed to synchronize the models across the nodes. We prove that the proposed method achieves the optimal regret bound when the loss function has Lipschitz continuous gradients, and that the amount of communication required depends on the network structure.
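The algorithm described above alternates a local online gradient step with consensus averaging across neighboring nodes. Below is a minimal sketch of this pattern, assuming a least-squares prediction model, a fixed doubly stochastic mixing matrix W on a four-node ring, and a 1/sqrt(t) step size; these choices, and names such as k_consensus, are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Minimal sketch: distributed online prediction with local gradient steps
# followed by consensus averaging. The least-squares model, the ring
# topology, and the step-size schedule are illustrative assumptions.

rng = np.random.default_rng(0)
n_nodes, dim, rounds, k_consensus = 4, 5, 200, 2

# Doubly stochastic mixing matrix for a 4-node ring (self + two neighbors).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = np.zeros((n_nodes, dim))       # row i: model parameters at node i
x_star = rng.normal(size=dim)      # hidden model generating the data streams
total_loss = 0.0                   # aggregate loss accrued across all nodes

for t in range(1, rounds + 1):
    eta = 1.0 / np.sqrt(t)         # diminishing step size
    for i in range(n_nodes):
        a = rng.normal(size=dim)               # next data point at node i
        y = a @ x_star + 0.01 * rng.normal()   # its label, revealed after
        pred = a @ x[i]                        # predict before seeing y
        total_loss += 0.5 * (pred - y) ** 2    # squared loss this round
        grad = (pred - y) * a                  # gradient of the loss at x[i]
        x[i] -= eta * grad                     # local gradient-descent step
    for _ in range(k_consensus):
        x = W @ x                  # consensus iterations mix neighbor models

# Regret would subtract the loss of the best fixed model in hindsight;
# here we just report the average per-node, per-round loss.
print(f"average loss: {total_loss / (n_nodes * rounds):.4f}")
```

Raising k_consensus keeps the node models closer together at the cost of more communication per round, which reflects the dependence of the required communication on the network structure mentioned in the abstract.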
Keywords
distributed processing, gradient methods, learning (artificial intelligence), network theory (graphs), consensus-based distributed online optimization, Lipschitz continuous gradients, communication amount, consensus iterations, consensus-based distributed online prediction, data point, gradient descent-like adjustments, loss function, machine learning problems, model parameters, network structure, processors network, total aggregate regret minimization