Institute for Systems Research Technical Reports

Permanent URI for this collection: http://hdl.handle.net/1903/4376

This archive contains a collection of reports generated by the faculty and students of the Institute for Systems Research (ISR), a permanent, interdisciplinary research unit in the A. James Clark School of Engineering at the University of Maryland. ISR-based projects are conducted through partnerships with industry and government, bringing together faculty and students from multiple academic departments and colleges across the university.

Search Results

Now showing 1 - 8 of 8
  • Harnack's Inequality for Cooperative Weakly Coupled Elliptic Systems
    (1997) Arapostathis, Aristotle; Ghosh, Mrinal K.; Marcus, Steven I.; ISR
    We consider cooperative, uniformly elliptic systems with bounded coefficients and coupling in the zeroth-order terms. We establish two analogues of Harnack's inequality for this class of systems. A weak version is obtained under fairly general conditions, while a stronger version is obtained under an irreducibility condition on the coupling coefficients. This irreducibility condition is also necessary for the existence of a Harnack constant for this class of systems. A Harnack inequality is also obtained for a class of superharmonic functions.
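    For orientation, the classical scalar Harnack inequality, of which the report establishes system analogues, can be stated as follows (a standard formulation; the constants and the precise system versions are those of the report, not reproduced here): for a nonnegative solution $u$ of a uniformly elliptic equation $Lu = 0$ in a ball $B_{2r}(x_0)$,
    \[
    \sup_{B_r(x_0)} u \;\le\; C \inf_{B_r(x_0)} u,
    \]
    with $C$ depending only on the dimension, the ellipticity constants, and the coefficient bounds.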
  • Stochastic Differential Games with Multiple Modes
    (1995) Ghosh, Mrinal K.; Marcus, Steven I.; ISR
    We have studied two-person stochastic differential games with multiple modes. For the zero-sum game we have established the existence of optimal strategies for both players. For the non-zero-sum case we have proved the existence of a Nash equilibrium.
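    As a rough sketch of the setting, in generic notation assumed here rather than quoted from the report, the state has a continuous component $X_t$ and a discrete mode $\theta_t$,
    \[
    dX_t = b(X_t, \theta_t, u^1_t, u^2_t)\,dt + \sigma(X_t, \theta_t)\,dW_t,
    \]
    where $\theta_t$ switches among finitely many modes at rates that may depend on the state and on the controls $u^1, u^2$ of the two players. In the zero-sum case one player minimizes and the other maximizes a common cost functional, and existence of optimal strategies corresponds to the upper and lower values of the game coinciding; in the non-zero-sum case each player minimizes their own cost, and a Nash equilibrium is a pair of strategies from which neither player can improve by deviating unilaterally.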
  • A Note on an LQG Regulator with Markovian Switching and Pathwise Average Cost
    (1994) Ghosh, Mrinal K.; Arapostathis, Aristotle; Marcus, Steven I.; ISR
    We study a linear system with a Markovian switching parameter perturbed by white noise. The cost function is quadratic. Under certain conditions, we find a linear feedback control which is almost surely optimal for the pathwise average cost over the infinite planning horizon.
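    A minimal sketch of the jump-linear-quadratic structure behind such a result, in standard notation assumed here rather than taken from the report: with mode-dependent dynamics
    \[
    dx_t = A(\theta_t)\,x_t\,dt + B(\theta_t)\,u_t\,dt + \sigma\,dW_t,
    \]
    where $\theta_t$ is a finite-state Markov chain with transition rates $\lambda_{ij}$, and quadratic running cost $x^\top Q(\theta)\,x + u^\top R(\theta)\,u$, the optimal feedback is typically mode-dependent linear, $u_t = -R(i)^{-1} B(i)^\top P_i\, x_t$ on $\{\theta_t = i\}$, with the matrices $P_i$ solving coupled algebraic Riccati equations of the form
    \[
    A(i)^\top P_i + P_i A(i) - P_i B(i) R(i)^{-1} B(i)^\top P_i + Q(i) + \sum_{j} \lambda_{ij} P_j = 0 .
    \]
    The report's contribution concerns the almost sure optimality of such a control for the pathwise average cost.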
  • Controlled Markov Processes on the Infinite Planning Horizon: Weighted and Overtaking Cost Criteria
    (1993) Fernandez-Gaucherand, Emmanuel; Ghosh, Mrinal K.; Marcus, Steven I.; ISR
    Stochastic control problems for controlled Markov process models with an infinite planning horizon are considered, under some non-standard cost criteria. The classical discounted and average cost criteria can be viewed as complementary, in the sense that the former captures the short-time and the latter the long-time performance of the system. Thus, we study a cost criterion obtained as a weighted combination of these criteria, extending to a general state and control space framework several recent results by Feinberg and Shwartz, and by Krass et al. In addition, a functional characterization is given for overtaking optimal policies, for problems with countable state spaces and compact control spaces; our approach is based on qualitative properties of the optimality equation for problems with an average cost criterion.
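    In standard notation (assumed here), the two classical criteria are the discounted cost and the long-run average cost,
    \[
    J_\alpha(x,\pi) = \mathbb{E}^\pi_x \sum_{t=0}^{\infty} \alpha^t c(X_t, A_t),
    \qquad
    J(x,\pi) = \limsup_{N\to\infty} \frac{1}{N}\, \mathbb{E}^\pi_x \sum_{t=0}^{N-1} c(X_t, A_t),
    \]
    and a weighted criterion is a combination such as $\lambda\, J_\alpha(x,\pi) + (1-\lambda)\, J(x,\pi)$ with $\lambda \in (0,1)$. In one common formulation, a policy $\pi^*$ is overtaking optimal if for every policy $\pi$ and every initial state $x$,
    \[
    \limsup_{N\to\infty} \Big( \mathbb{E}^{\pi^*}_x \sum_{t=0}^{N-1} c(X_t, A_t) \;-\; \mathbb{E}^{\pi}_x \sum_{t=0}^{N-1} c(X_t, A_t) \Big) \;\le\; 0 .
    \]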
  • A Note on an LQG Regulator with Markovian Switching and Pathwise Average Cost
    (1992) Ghosh, Mrinal K.; Arapostathis, Aristotle; Marcus, Steven I.; ISR
    We study a linear system with a Markovian switching parameter perturbed by white noise. The cost function is quadratic. Under certain conditions, we find a linear feedback control which is almost surely optimal for the pathwise average cost over the infinite planning horizon.
  • Ergodic Control of Switching Diffusions
    (1992) Ghosh, Mrinal K.; Arapostathis, Aristotle; Marcus, Steven I.; ISR
    We study the ergodic control problem for switching diffusions, which represent a typical hybrid system arising in numerous applications such as fault-tolerant control systems and flexible manufacturing systems. Under certain conditions, we establish the existence of a stable Markov nonrandomized policy which is almost surely optimal for a pathwise long-run average cost criterion. We then study the corresponding Hamilton-Jacobi-Bellman (HJB) equation and establish the existence of a unique solution in a certain class. Using this, we characterize the optimal policy as a minimizing selector of the Hamiltonian associated with the HJB equation. We apply these results to a failure-prone manufacturing system and show that the optimal production rate is of the hedging-point type.
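    A rough sketch of the model and the HJB characterization, in generic notation assumed here: the continuous component and the discrete mode evolve as
    \[
    dX_t = b(X_t, \theta_t, u_t)\,dt + \sigma(X_t, \theta_t)\,dW_t,
    \qquad
    P\big(\theta_{t+\delta} = j \mid \theta_t = i, X_t = x, u_t = u\big) = \lambda_{ij}(x,u)\,\delta + o(\delta), \quad j \ne i,
    \]
    with the criterion being the pathwise long-run average of a running cost $c(X_t, \theta_t, u_t)$. The HJB equation then takes the form
    \[
    \rho = \min_{u}\Big[ \mathcal{L}^{u} V(x,i) + \sum_{j \ne i} \lambda_{ij}(x,u)\big(V(x,j) - V(x,i)\big) + c(x,i,u) \Big],
    \]
    where $\mathcal{L}^{u}$ is the diffusion generator in mode $i$ and $\rho$ is the optimal average cost; an optimal policy is a measurable minimizing selector. In the failure-prone manufacturing example, a hedging-point policy produces at full capacity when inventory is below a threshold, at the demand rate at the threshold, and not at all above it.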
  • Discrete-Time Controlled Markov Processes with Average Cost Criterion: A Survey
    (1991) Arapostathis, Aristotle; Borkar, Vivek S.; Fernandez-Gaucherand, Emmanuel; Ghosh, Mrinal K.; Marcus, Steven I.; ISR
    This work is a survey of the average cost control problem for discrete-time Markov processes. We have attempted to put together a comprehensive account of the considerable research on this problem over the past three decades. Our exposition ranges from finite to Borel state and action spaces and includes a variety of methodologies to find and characterize optimal policies. We have included a brief historical perspective of the research efforts in this area and have compiled a substantial, though not exhaustive, bibliography. We have also identified several important questions that remain open to investigation.
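    The central object in much of this literature is the average cost optimality equation, which in standard notation (assumed here) reads
    \[
    \rho^* + h(x) = \min_{a \in A(x)} \Big[ c(x,a) + \int_{\mathcal{X}} h(y)\, P(dy \mid x, a) \Big],
    \]
    where $\rho^*$ is the optimal average cost and $h$ is a relative value (bias) function; when the equation holds in a suitable sense, any measurable minimizing selector defines a stationary optimal policy.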
  • Optimal Control of Switching Diffusions with Application to Flexible Manufacturing Systems
    (1991) Ghosh, Mrinal K.; Arapostathis, Aristotle; Marcus, Steven I.; ISR
    A controlled switching diffusion model is developed to study the hierarchical control of flexible manufacturing systems. The existence of a homogeneous Markov nonrandomized optimal policy is established by a convex analytic method. Using the existence of such a policy, the existence of a unique solution in a certain class to the associated Hamilton-Jacobi-Bellman equations is established, and the optimal policy is characterized as a minimizing selector of an appropriate Hamiltonian.
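    A brief sketch of the convex analytic viewpoint, in a standard formulation assumed here rather than quoted from the report: the ergodic control problem is recast as a linear program over ergodic occupation measures $\mu$ on the product of the state, mode, and control spaces,
    \[
    \text{minimize } \int c\, d\mu
    \quad \text{over measures } \mu \text{ satisfying} \quad
    \int \mathcal{A}^{u} f\, d\mu = 0 \ \text{ for all test functions } f,
    \]
    where $\mathcal{A}^{u}$ is the extended generator of the controlled switching diffusion. The constraint characterizes $\mu$ as an ergodic occupation measure; the feasible set is convex, and under suitable conditions an optimal measure can be taken to correspond to a homogeneous Markov nonrandomized policy, which is the existence result that the HJB characterization then builds on.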