Existence of Risk Sensitive Optimal Stationary Policies for Controlled Markov Processes

dc.contributor.author: Hernandez-Hernandez, Daniel
dc.contributor.author: Marcus, Steven I.
dc.contributor.department: ISR
dc.date.accessioned: 2007-05-23T10:03:40Z
dc.date.available: 2007-05-23T10:03:40Z
dc.date.issued: 1997
dc.description.abstract: In this paper we are concerned with the existence of optimal stationary policies for infinite-horizon risk-sensitive Markov control processes with denumerable state space, unbounded cost function, and long-run average cost. Introducing a discounted-cost dynamic game, we prove that its value function satisfies an Isaacs equation, and we study its relationship with the risk-sensitive control problem. Using the vanishing discount approach, we prove that the risk-sensitive dynamic programming inequality holds and derive an optimal stationary policy.
dc.format.extent: 220421 bytes
dc.format.mimetype: application/pdf
dc.identifier.uri: http://hdl.handle.net/1903/5847
dc.language.iso: en_US
dc.relation.ispartofseries: ISR; TR 1997-9
dc.subject: optimal control
dc.subject: stochastic systems
dc.subject: intelligent servomech: risk sensitive control
dc.subject: Systems Integration Methodology
dc.title: Existence of Risk Sensitive Optimal Stationary Policies for Controlled Markov Processes
dc.type: Technical Report
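
For context on the criterion named in the abstract: the long-run risk-sensitive average cost of a policy is commonly written in the exponential-of-sum form sketched below. The notation (risk factor gamma > 0, one-stage cost c, state-action pair (x_t, a_t)) is a standard convention assumed here, not taken from this record, and the expression is a sketch of the usual criterion rather than the paper's exact formulation.

% Risk-sensitive long-run average cost (standard exponential-utility form;
% the symbols \gamma, c, x_t, a_t are assumed conventions, not from this record).
J_\gamma(x,\pi) \;=\; \limsup_{n\to\infty} \frac{1}{n\gamma}
    \log \mathbb{E}^{\pi}_{x}\!\left[ \exp\!\left( \gamma \sum_{t=0}^{n-1} c(x_t, a_t) \right) \right]

The dynamic programming inequality mentioned in the abstract is the multiplicative analogue, for this criterion, of the familiar average-cost optimality inequality.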

Files

Original bundle
Name: TR_97-9.pdf
Size: 215.25 KB
Format: Adobe Portable Document Format