Error Exponents for Distributed Detection of Markov Sources
We consider a decentralized detection problem in which two sensors collect data from a discrete-time, finite-valued, stationary ergodic Markov source and transmit M-ary messages to a Neyman-Pearson central detector. We assume that the codebook sizes M are fixed for both sensors and do not vary with the data sample size. We investigate the asymptotic behavior of the type II error rate as the sample size increases to infinity and obtain, under mild assumptions on the source distributions, the associated error exponent. The derived exponent is independent of the test level ε and the codebook sizes M; it is achieved by a universally optimal sequence of acceptance regions and is characterized by an infimum of informational divergence over a class of infinite-dimensional distributions.
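As an illustrative aside (not part of the paper's decentralized result): in the classical centralized setting, the Neyman-Pearson type II error exponent for distinguishing two ergodic Markov chains is the informational divergence rate between them. The sketch below computes that rate for two hypothetical two-state transition kernels `P` and `Q`, which are assumptions chosen only for illustration.

```python
import numpy as np

# Hypothetical transition matrices under the null (P) and alternative (Q).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
Q = np.array([[0.6, 0.4],
              [0.5, 0.5]])

def stationary(T):
    """Stationary distribution of transition matrix T (left eigenvector for eigenvalue 1)."""
    vals, vecs = np.linalg.eig(T.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

def kl_rate(P, Q):
    """Informational divergence rate D(P || Q) between stationary Markov chains:
    sum_x pi(x) sum_y P(x, y) log(P(x, y) / Q(x, y))."""
    pi = stationary(P)
    return float(np.sum(pi[:, None] * P * np.log(P / Q)))

# The rate is zero iff the chains coincide, and positive otherwise.
print(kl_rate(P, P))
print(kl_rate(P, Q))
```

Under Stein's lemma for the centralized problem, the type II error probability decays as exp(-n * kl_rate(P, Q)) for any fixed test level; the paper's contribution concerns the harder decentralized variant, where the exponent is instead an infimum of such divergences over an infinite-dimensional class.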