Institute for Systems Research

Permanent URI for this community: http://hdl.handle.net/1903/4375

Search Results

Now showing 1 - 3 of 3
  • Item
    Stability of Wireless Networks for Mode S Radar
    (2000) Chawla, Jay P.; Marcus, Steven I.; Shayman, Mark A.; ISR
    Stability issues in a connectionless, one-hop queueing system featuring servers with overlapping service regions (e.g., a Mode Select (Mode S) radar communications network or part of an Aeronautical Telecommunications Network (ATN)) are considered, and a stabilizing policy is determined in closed-loop form. The cases of queues at the sources (aircraft) and queues at the servers (base stations) are considered separately. Stabilizability of the system with exponential service times and Poisson arrivals is equivalent to the solvability of a linear program, and if the system is stabilizable, a stabilizing open-loop routing policy can be expressed in terms of the coefficients of the solution to the linear program. We solve the linear program for the case of a single class of packets.

    The research and scientific content in this material has been published under the same title in the Proceedings of the 32nd Conference on Information Sciences and Systems; Princeton, NJ; March 1998.
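The stabilizability condition in this abstract can be made concrete for a toy single-class instance: routing fractions must exist that keep every server's load strictly below its service rate. The sketch below brute-forces that feasibility check for two servers rather than solving the paper's linear program; all arrival rates, service rates, and coverage sets are hypothetical, chosen only for illustration.

```python
import itertools

def stabilizable_two_servers(lam, mu, coverage, grid=100):
    """Brute-force feasibility check for a two-server, single-class system.

    lam[i]      : Poisson arrival rate of source (aircraft) i
    mu[j]       : exponential service rate of server (base station) j, j in {0, 1}
    coverage[i] : set of servers whose service region covers source i

    Returns a routing split [p_i] (fraction of source i's traffic sent to
    server 0) under which each server's load is strictly below its service
    rate, or None if no such open-loop routing exists on the search grid.
    """
    n = len(lam)
    choices = []
    for i in range(n):
        if coverage[i] == {0}:          # only server 0 in range: forced routing
            choices.append([1.0])
        elif coverage[i] == {1}:        # only server 1 in range
            choices.append([0.0])
        else:                           # overlapping regions: grid of splits
            choices.append([k / grid for k in range(grid + 1)])
    for split in itertools.product(*choices):
        load0 = sum(lam[i] * split[i] for i in range(n))
        load1 = sum(lam[i] * (1.0 - split[i]) for i in range(n))
        if load0 < mu[0] and load1 < mu[1]:
            return list(split)
    return None

# Two aircraft, both inside the overlap of two base-station regions
# (hypothetical rates): total arrival rate 1.5 < total service rate 2.0.
routing = stabilizable_two_servers(
    lam=[0.8, 0.7], mu=[1.0, 1.0], coverage=[{0, 1}, {0, 1}])
```

For larger systems the grid search is replaced by the linear program described in the abstract; the feasibility condition being checked is the same.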
  • Item
    Risk Sensitive Control of Markov Processes in Countable State Space
    (1996) Hernandez-Hernandez, Daniel; Marcus, Steven I.; ISR
    In this paper we consider infinite horizon risk-sensitive control of Markov processes with discrete time and denumerable state space. This problem is solved by proving, under suitable conditions, that there exists a bounded solution to the dynamic programming equation. The dynamic programming equation is transformed into an Isaacs equation for a stochastic game, and the vanishing discount method is used to study its solution. In addition, we prove that the existence conditions are also necessary.
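The risk-sensitive dynamic programming recursion mentioned in this abstract replaces the expected cost-to-go with an exponential (log-sum-exp) certainty equivalent. A minimal finite-horizon sketch on a finite truncation of the state space is shown below; the transition matrices, costs, and risk factor are hypothetical, and as the risk factor tends to zero the recursion recovers the risk-neutral one.

```python
import math

def risk_sensitive_vi(P, c, theta, horizon):
    """Finite-horizon risk-sensitive value iteration on a finite state space:

        V_{k+1}(x) = min_a [ c(x,a) + (1/theta) * log sum_y P(y|x,a) exp(theta * V_k(y)) ]

    P[a][x][y] : transition probability to y from x under action a
    c[a][x]    : one-stage cost of action a in state x
    theta > 0  : risk-sensitivity factor (theta -> 0 gives the risk-neutral
                 recursion V_{k+1}(x) = min_a [ c(x,a) + sum_y P(y|x,a) V_k(y) ])
    """
    S = len(next(iter(c.values())))
    V = [0.0] * S
    for _ in range(horizon):
        V = [min(c[a][x] + (1.0 / theta) * math.log(
                     sum(P[a][x][y] * math.exp(theta * V[y]) for y in range(S)))
                 for a in P)
             for x in range(S)]
    return V

# Hypothetical two-state, two-action model:
P = {0: [[0.9, 0.1], [0.2, 0.8]],
     1: [[0.5, 0.5], [0.5, 0.5]]}
c = {0: [1.0, 2.0], 1: [1.5, 1.5]}
V_risk = risk_sensitive_vi(P, c, theta=0.5, horizon=20)
V_neutral = risk_sensitive_vi(P, c, theta=1e-6, horizon=20)
```

By Jensen's inequality the risk-sensitive value dominates the risk-neutral one state by state, reflecting the criterion's penalty on cost fluctuations.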
  • Item
    Non-Standard Optimality Criteria for Stochastic Control Problems
    (1995) Fernandez-Gaucherand, Emmanuel; Marcus, Steven I.; ISR
    In this paper, we survey several recent developments on non-standard optimality criteria for controlled Markov process models of stochastic control problems. Commonly, the criteria employed for optimal decision and control are either the discounted cost (DC) or the long-run average cost (AC). We present results on several other criteria that, as opposed to the AC or DC, take into account, e.g., (a) the variance of costs; (b) multiple objectives; (c) robustness with respect to sample path realizations; and (d) sensitivity to long but finite horizon performance as well as long-run average performance.
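For concreteness, the two standard criteria this survey contrasts can be written down for a finite truncation of a cost stream. The stream below is hypothetical, and the long-run average is approximated over a finite horizon.

```python
def discounted_cost(costs, beta):
    """Total beta-discounted cost of a (finite, truncated) cost stream:
    sum over t of beta**t * cost_t, with discount factor 0 < beta < 1."""
    return sum(beta ** t * ct for t, ct in enumerate(costs))

def average_cost(costs):
    """Long-run average cost, approximated over a finite horizon."""
    return sum(costs) / len(costs)

stream = [1.0] * 50                       # hypothetical constant one-stage cost
dc = discounted_cost(stream, beta=0.5)    # close to 1 / (1 - beta) = 2
ac = average_cost(stream)                 # 1.0
```

The non-standard criteria surveyed above refine these scalar summaries, e.g. by penalizing the variance of the stream rather than only its mean.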