Convergence results for the linear consensus problem under Markovian random graphs

dc.contributor.advisor: Baras, John
dc.contributor.author: Matei, Ion
dc.contributor.author: Baras, John
dc.date.accessioned: 2009-10-23T17:50:54Z
dc.date.available: 2009-10-23T17:50:54Z
dc.date.issued: 2009
dc.description.abstract: This note discusses the linear discrete-time and continuous-time consensus problem for a network of dynamic agents with directed information flows and randomly switching topologies. The switching is determined by a Markov chain, with each topology corresponding to a state of the chain. We show that, under the assumption that the matrices involved in the linear consensus scheme are doubly stochastic, average consensus is achieved in the mean square sense and almost surely if and only if the graph resulting from the union of the graphs corresponding to the states of the Markov chain is strongly connected. The aim of this note is to show how techniques from the theory of Markovian jump linear systems, in conjunction with results inspired by matrix and graph theory, can be used to prove convergence results for stochastic consensus problems.
dc.format.extent: 349841 bytes
dc.format.mimetype: application/pdf
dc.identifier.uri: http://hdl.handle.net/1903/9693
dc.language.iso: en_US
dc.relation.isAvailableAt: Institute for Systems Research
dc.relation.isAvailableAt: Digital Repository at the University of Maryland
dc.relation.isAvailableAt: University of Maryland (College Park, MD)
dc.relation.ispartofseries: TR_2009-18
dc.subject: consensus
dc.subject: random graphs
dc.title: Convergence results for the linear consensus problem under Markovian random graphs
dc.type: Article

Files

Original bundle

Name: IMateiJBaras.pdf
Size: 349.38 KB
Format: Adobe Portable Document Format
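
For readers who want to experiment with the scheme described in the abstract, a minimal simulation sketch follows. The discrete-time update is x(k+1) = W_{θ(k)} x(k), where θ(k) is a Markov chain selecting among doubly stochastic weight matrices W_i. Everything concrete below (the two weight matrices, the transition matrix P, the number of steps) is an illustrative assumption, not taken from the report; only the structure of the iteration and the union-graph condition mirror the abstract: each matrix alone does not give a strongly connected graph, but the union of the two graphs does, so the states should converge to the initial average.

```python
import numpy as np

# Sketch of discrete-time consensus under Markovian switching topologies.
# The weight matrices W[0], W[1] and transition matrix P are illustrative
# assumptions, not taken from the Matei-Baras report.

rng = np.random.default_rng(0)

# Two doubly stochastic weight matrices on 3 agents. Neither graph alone is
# strongly connected (W[0] links agents 0-1, W[1] links agents 1-2), but the
# union of the two graphs is, matching the paper's necessary and sufficient
# condition for average consensus.
W = [
    np.array([[0.5, 0.5, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0]]),
    np.array([[1.0, 0.0, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.5, 0.5]]),
]

# Irreducible Markov chain over the two topologies.
P = np.array([[0.6, 0.4],
              [0.3, 0.7]])

x = np.array([1.0, 5.0, 9.0])  # initial agent states; average is 5.0
theta = 0                      # initial Markov chain state

for k in range(200):
    x = W[theta] @ x                   # consensus update x(k+1) = W_{theta(k)} x(k)
    theta = rng.choice(2, p=P[theta])  # Markov transition to the next topology

print(x)  # all entries close to 5.0: average consensus along this sample path
```

Because each W_i is doubly stochastic, every update preserves the average of x, so the only possible consensus value is the initial average; the Markov chain visiting both topologies infinitely often is what drives the states together.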