Existence of Risk Sensitive Optimal Stationary Policies for Controlled Markov Processes

Date
1997

Author
Hernandez-Hernandez, Daniel
Marcus, Steven I.

Abstract
In this paper we are concerned with the existence of optimal stationary policies for infinite-horizon risk-sensitive Markov control processes with denumerable state space, unbounded cost function, and long-run average cost. Introducing a discounted-cost dynamic game, we prove that its value function satisfies an Isaacs equation, and we study its relationship with the risk-sensitive control problem. Using the vanishing discount approach, we prove that the risk-sensitive dynamic programming inequality holds and derive an optimal stationary policy.
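For context, a standard formulation of the risk-sensitive long-run average cost criterion and of the associated multiplicative dynamic programming relation is sketched below in LaTeX. The symbols (risk factor \gamma > 0, one-stage cost c, transition kernel P, relative value function W, optimal average cost \lambda) follow common usage in this literature and are not taken verbatim from the paper; its precise assumptions, and the inequality it actually establishes, may differ.

% Risk-sensitive long-run average cost of a policy \pi started at state x,
% with risk factor \gamma > 0 and one-stage cost c(x_t, a_t):
\[
  J(x,\pi) \;=\; \limsup_{n\to\infty} \frac{1}{\gamma n}\,
  \log E_x^{\pi}\!\Big[\exp\Big(\gamma \sum_{t=0}^{n-1} c(x_t,a_t)\Big)\Big].
\]

% An optimal stationary policy is typically obtained from a pair (\lambda, W)
% satisfying the multiplicative dynamic programming relation, written here in
% its equality form; the paper derives an inequality version of such a relation
% via the vanishing discount approach:
\[
  e^{\gamma(\lambda + W(x))}
  \;=\; \min_{a\in A(x)} \Big\{ e^{\gamma c(x,a)}
  \sum_{y\in S} P(y\mid x,a)\, e^{\gamma W(y)} \Big\},
  \qquad x\in S .
\]

Taking logarithms and applying the variational formula for the logarithmic moment-generating function (duality with relative entropy) turns the right-hand side into a sup-inf expression; this is, roughly, how a stochastic dynamic game and its Isaacs equation arise in this kind of analysis.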