Error exponents for Distributed Detection of Markov Sources
dc.contributor.author | Shalaby, H.M.H. | en_US |
dc.contributor.author | Papamarcou, A. | en_US |
dc.contributor.department | ISR | en_US |
dc.date.accessioned | 2007-05-23T09:48:35Z | |
dc.date.available | 2007-05-23T09:48:35Z | |
dc.date.issued | 1991 | en_US |
dc.description.abstract | We consider a decentralized detection problem in which two sensors collect data from a discrete-time, finite-valued, stationary ergodic Markov source and transmit M-ary messages to a Neyman-Pearson central detector. We assume that the codebook sizes M are fixed for both sensors and do not vary with the data sample size. We investigate the asymptotic behavior of the type II error rate as the sample size increases to infinity and obtain (under mild assumptions on the source distributions) the associated error exponent. The derived exponent is independent of the test level ε and the codebook sizes M, is achieved by a universally optimal sequence of acceptance regions, and is characterized by an infimum of informational divergence over a class of infinite-dimensional distributions. | en_US |
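The abstract characterizes the error exponent as an infimum of informational divergence. As a minimal illustration of that flavor of result, the sketch below computes a Kullback-Leibler divergence for a simple binary i.i.d. pair of hypotheses, where (by Stein's lemma) D(P||Q) is exactly the type II error exponent at any fixed test level. This is an assumed, simplified stand-in: the report's actual exponent involves Markov sources and infinite-dimensional distributions, and the distributions P and Q here are hypothetical.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P||Q) in nats for finite distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical source distributions (i.i.d. case for simplicity).
P = [0.5, 0.5]   # null hypothesis
Q = [0.8, 0.2]   # alternative hypothesis

# By Stein's lemma, for any fixed type I error level the type II error
# probability decays as exp(-n * D(P||Q)); D(P||Q) is the error exponent.
exponent = kl_divergence(P, Q)
print(round(exponent, 4))  # → 0.2231
```

In the report's setting the single divergence is replaced by an infimum of divergences over a class of infinite-dimensional distributions, reflecting the Markov memory and the quantization imposed by the M-ary sensor messages.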
dc.format.extent | 615587 bytes | |
dc.format.mimetype | application/pdf | |
dc.identifier.uri | http://hdl.handle.net/1903/5128 | |
dc.language.iso | en_US | en_US |
dc.relation.ispartofseries | ISR; TR 1991-80 | en_US |
dc.subject | data compression | en_US |
dc.subject | detection | en_US |
dc.subject | distributed information processing | en_US |
dc.subject | information theory | en_US |
dc.subject | multi-user systems | en_US |
dc.subject | Communication | en_US |
dc.subject | Signal Processing Systems | en_US |
dc.title | Error exponents for Distributed Detection of Markov Sources | en_US |
dc.type | Technical Report | en_US |