Existence of Risk Sensitive Optimal Stationary Policies for Controlled Markov Processes
Date
1997
Abstract
In this paper we are concerned with the existence of optimal stationary policies for infinite-horizon risk-sensitive Markov control processes with a denumerable state space, an unbounded cost function, and the long-run average cost criterion. Introducing a discounted-cost dynamic game, we prove that its value function satisfies an Isaacs equation, and we study its relationship with the risk-sensitive control problem. Using the vanishing discount approach, we prove that the risk-sensitive dynamic programming inequality holds, and we derive an optimal stationary policy.
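For orientation, the following is a minimal sketch of a standard risk-sensitive long-run average cost criterion and of a dynamic programming inequality of the kind referred to above; the notation (state space S, admissible actions A(x), transition law P(y | x, a), one-stage cost c, risk-sensitivity parameter \gamma > 0) is an assumption of this summary and need not match the paper's.

% Risk-sensitive long-run average cost of a policy \pi from initial state x
% (standard formulation; notation assumed, not taken from the paper):
\[
  J_\gamma(x,\pi) \;=\; \limsup_{n\to\infty} \frac{1}{n\gamma}\,
  \log \mathbb{E}_x^{\pi}\!\left[\exp\!\Big(\gamma \sum_{t=0}^{n-1} c(x_t,a_t)\Big)\right].
\]
% A typical risk-sensitive dynamic programming inequality: a constant \rho
% and a function h on S such that, for every x \in S,
\[
  \rho + h(x) \;\ge\; \inf_{a \in A(x)}
  \Big\{ c(x,a) + \tfrac{1}{\gamma}\log \sum_{y \in S} P(y \mid x,a)\, e^{\gamma h(y)} \Big\}.
\]

In such formulations, a stationary policy that selects a minimizing action at each state is the natural candidate for an optimal policy, with \rho playing the role of the optimal risk-sensitive average cost.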