A performance comparison between two consensus-based distributed optimization algorithms
Abstract
In this paper we address the problem of multi-agent optimization of a convex function
expressible as a sum of convex functions. Each agent has access to only one function in the sum and
can use only local information to update its current estimate of the optimal solution. We consider
two consensus-based iterative algorithms, each combining a consensus step with a
subgradient descent update. The main difference between the two algorithms is the order in which the
consensus step and the subgradient descent update are performed. We show that first updating the
current estimate in the direction of a subgradient and then executing the consensus step ensures better
performance than executing the steps in reverse order. We support our analytical results with
numerical simulations of both algorithms.
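The two update orders described above can be sketched as follows. This is an illustrative toy example, not the paper's actual simulation setup: the agent objectives f_i(x) = (x - a_i)^2, the ring mixing matrix W, and the step size alpha are all assumptions chosen for demonstration.

```python
# Hypothetical sketch of the two consensus/subgradient update orders.
# Setup (agents, objectives, mixing weights, step size) is illustrative,
# not taken from the paper.
import numpy as np

n = 4                                # number of agents
a = np.array([1.0, 2.0, 3.0, 6.0])   # f_i(x) = (x - a_i)^2; global optimum = mean(a)
grads = lambda x: 2.0 * (x - a)      # gradient of each agent's local function

# Doubly stochastic mixing matrix for a 4-agent ring network.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

alpha = 0.05
x_dc = np.zeros(n)   # variant 1: subgradient step first, then consensus
x_cd = np.zeros(n)   # variant 2: consensus first, then subgradient step

for _ in range(500):
    x_dc = W @ (x_dc - alpha * grads(x_dc))   # descent, then averaging
    x_cd = W @ x_cd - alpha * grads(x_cd)     # averaging, then descent

# With a constant step size, both variants settle near (not exactly at)
# the optimizer mean(a) = 3.0, with the residual error shrinking as alpha does.
print(x_dc)
print(x_cd)
```

With quadratic local objectives the iterates can be tracked in closed form, which makes this a convenient toy problem for comparing how close each variant's steady state sits to the true minimizer.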