Convergence results for the linear consensus problem under Markovian random graphs

Date
2009
Authors
Matei, Ion
Baras, John
Advisor
Baras, John
Abstract
This note discusses the linear discrete-time and continuous-time consensus problem for a network of dynamic agents with directed information flows and randomly switching topologies. The switching is governed by a Markov chain, with each topology corresponding to a state of the chain. We show that, under the assumption that the matrices involved in the linear consensus scheme are doubly stochastic, average consensus is achieved in the mean-square sense and almost surely if and only if the graph resulting from the union of the graphs corresponding to the states of the Markov chain is strongly connected. The aim of this note is to show how techniques from the theory of Markovian jump linear systems, together with results inspired by matrix and graph theory, can be used to prove convergence results for stochastic consensus problems.
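The setting in the abstract can be illustrated with a small simulation. The sketch below is not from the paper: the two doubly stochastic matrices `W0`, `W1` and the transition matrix `P` are illustrative assumptions. Each matrix alone leaves the network disconnected, but the union of their graphs is strongly connected, so by the stated result the switched iteration should reach average consensus.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Two doubly stochastic update matrices (illustrative, not from the paper).
# W0 averages agent pairs (0,1) and (2,3); W1 averages pairs (0,3) and (1,2).
# Neither graph alone connects all agents, but their union is strongly connected.
W0 = np.array([[0.5, 0.5, 0.0, 0.0],
               [0.5, 0.5, 0.0, 0.0],
               [0.0, 0.0, 0.5, 0.5],
               [0.0, 0.0, 0.5, 0.5]])
W1 = np.array([[0.5, 0.0, 0.0, 0.5],
               [0.0, 0.5, 0.5, 0.0],
               [0.0, 0.5, 0.5, 0.0],
               [0.5, 0.0, 0.0, 0.5]])
W = [W0, W1]

# Hypothetical two-state Markov chain driving the topology switching.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

x = rng.standard_normal(n)   # initial agent states
avg = x.mean()               # average consensus target
state = 0
for _ in range(200):
    x = W[state] @ x                   # consensus update under current topology
    state = rng.choice(2, p=P[state])  # Markov switch to the next topology

# All agents end up (numerically) at the initial average.
print(np.allclose(x, avg))
```

Running the loop long enough that both states of the chain occur drives every agent to the initial average, matching the mean-square and almost-sure convergence claimed in the abstract.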