A performance comparison between two consensus-based distributed optimization algorithms

dc.contributor.author: Matei, Ion
dc.contributor.author: Baras, John
dc.date.accessioned: 2012-05-04T19:36:31Z
dc.date.available: 2012-05-04T19:36:31Z
dc.date.issued: 2012-05-04
dc.description.abstract: In this paper we address the problem of multi-agent optimization for convex functions expressible as sums of convex functions. Each agent has access to only one function in the sum and can use only local information to update its current estimate of the optimal solution. We consider two consensus-based iterative algorithms, each combining a consensus step with a subgradient descent update. The main difference between the two algorithms is the order in which the consensus step and the subgradient descent update are performed (see the illustrative sketch after this metadata listing). We show that first updating the current estimate in the direction of a subgradient and then executing the consensus step ensures better performance than executing the steps in reverse order. In support of our analytical results, we also provide numerical simulations of the algorithms.
dc.identifier.uri: http://hdl.handle.net/1903/12480
dc.language.iso: en_US
dc.relation.isAvailableAt: Institute for Systems Research
dc.relation.isAvailableAt: Digital Repository at the University of Maryland
dc.relation.isAvailableAt: University of Maryland (College Park, MD)
dc.relation.ispartofseries: TR_2012-05
dc.subject: distributed optimization
dc.subject: consensus
dc.subject: performance
dc.subject: analysis
dc.title: A performance comparison between two consensus-based distributed optimization algorithms
dc.type: Article
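
Illustrative sketch of the two update orders described in the abstract. This is not the authors' implementation: the quadratic local costs f_i(x) = (x - c_i)^2, the fixed doubly stochastic weight matrix A, the constant step size alpha, and the agent count are all illustrative assumptions. The snippet only shows where the consensus averaging sits relative to the subgradient step in each variant.

import numpy as np

# Toy instance (illustrative, not from the paper): agent i holds
# f_i(x) = (x - c[i])^2, so the sum is minimized at mean(c).
# A is a doubly stochastic consensus weight matrix.
def run(order, A, c, alpha=0.05, iters=500):
    x = np.zeros(len(c))                 # each agent's estimate of the optimum
    for _ in range(iters):
        grad = 2.0 * (x - c)             # gradient of each local f_i at x_i
        if order == "descent_then_consensus":
            x = A @ (x - alpha * grad)   # local subgradient step, then averaging
        else:                            # "consensus_then_descent"
            x = A @ x - alpha * grad     # averaging, then local subgradient step
    return x

A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
c = np.array([1.0, 2.0, 6.0])            # optimum of the sum: mean(c) = 3.0
for order in ("descent_then_consensus", "consensus_then_descent"):
    print(order, run(order, A, c))

The abstract's claim is that the descent-then-consensus ordering gives the better performance; this toy run only fixes the notation for the two orderings and is not a substitute for the paper's analysis or simulations.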

Files

Original bundle

Name: necsys_042812_long_edited.pdf
Size: 498.84 KB
Format: Adobe Portable Document Format