A Short Note on Combining Multiple Policies in Risk-Sensitive Exponential Average Reward Markov Decision Processes

dc.contributor.author: Chang, Hyeong Soo
dc.date.accessioned: 2009-09-30T13:36:39Z
dc.date.available: 2009-09-30T13:36:39Z
dc.date.issued: 2009
dc.description: This work was done while the author was a visiting associate professor at ISR, University of Maryland, College Park.
dc.description.abstract: This short note presents a method of combining multiple policies in a given policy set such that the resulting policy improves upon every policy in the set for risk-sensitive exponential average reward Markov decision processes (MDPs), extending the work of Howard and Matheson for the singleton policy set case. Some applications of the method to solving risk-sensitive MDPs are also discussed.
dc.description.sponsorship: Steven I. Marcus and Michael C. Fu
dc.format.extent: 260067 bytes
dc.format.mimetype: application/pdf
dc.identifier.uri: http://hdl.handle.net/1903/9433
dc.language.iso: en
dc.relation.isAvailableAt: Institute for Systems Research
dc.relation.isAvailableAt: Digital Repository at the University of Maryland
dc.relation.isAvailableAt: University of Maryland (College Park, MD)
dc.relation.ispartofseries: TR_2009-14
dc.subject: Risk-sensitive Markov decision process
dc.subject: policy improvement
dc.title: A Short Note on Combining Multiple Policies in Risk-Sensitive Exponential Average Reward Markov Decision Processes
dc.type: Technical Report

Files

Original bundle

Name: senserev_edited.pdf
Size: 253.97 KB
Format: Adobe Portable Document Format