Finding the Value of Information About a State Variable in a Markov Decision Process

dc.contributor.author: Souza, Gilvan
dc.date.accessioned: 2005-01-13T15:04:40Z
dc.date.available: 2005-01-13T15:04:40Z
dc.date.issued: 2005-01-13T15:04:40Z
dc.description.abstract: In this paper we present a mixed-integer programming formulation that computes the optimal solution for a certain class of Markov decision processes with finite state and action spaces, where a state is composed of multiple state variables and one of the state variables is unobservable to the decision maker. Our approach is a much simpler modeling alternative to the theory of partially observable Markov decision processes (POMDP), in which an information and updating structure for the unobservable state variable must be defined. We illustrate the approach with an example of a duopoly in which one firm's actions are not immediately observable by the other firm, and present computational results. We believe that this approach can be used in a variety of applications where the decision maker wants to assess the value of information about an additional state variable.
dc.format.extent: 684476 bytes
dc.format.mimetype: application/pdf
dc.identifier.uri: http://hdl.handle.net/1903/1930
dc.language.iso: en_US
dc.relation.isAvailableAt: Robert H. Smith School of Business
dc.relation.isAvailableAt: Decision & Information Technologies
dc.relation.isAvailableAt: Digital Repository at the University of Maryland
dc.relation.isAvailableAt: University of Maryland (College Park, Md.)
dc.subject: Markov decision processes
dc.subject: POMDP
dc.title: Finding the Value of Information About a State Variable in a Markov Decision Process
dc.type: Working Paper
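
The abstract describes a mathematical-programming formulation for a finite MDP in which one state variable is unobservable. As a point of reference only, the sketch below solves the classic linear-programming formulation of a finite discounted MDP, the kind of model that mixed-integer formulations of this sort typically extend with additional constraints encoding the observability restriction; it is not the paper's formulation, and the states, actions, rewards, transition probabilities, and discount factor are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

gamma = 0.9  # discount factor (illustrative assumption)

# Two states, two actions: rewards r[s, a] and transition matrices P[a][s, s'].
r = np.array([[1.0, 0.0],
              [0.0, 2.0]])
P = np.array([[[0.8, 0.2],   # action 0
               [0.3, 0.7]],
              [[0.5, 0.5],   # action 1
               [0.1, 0.9]]])

n_s, n_a = r.shape

# Classic LP for a discounted MDP:
#   minimize   sum_s v(s)
#   subject to v(s) >= r(s, a) + gamma * sum_s' P(s'|s, a) * v(s')   for all s, a
# Rewritten below in linprog's A_ub @ v <= b_ub convention.
A_ub, b_ub = [], []
for s in range(n_s):
    for a in range(n_a):
        row = gamma * P[a, s, :]
        row[s] -= 1.0            # (gamma * P - I) v <= -r
        A_ub.append(row)
        b_ub.append(-r[s, a])

res = linprog(c=np.ones(n_s),
              A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n_s)
print("Optimal value function v*:", res.x)
```

In this reading, a formulation like the paper's would add constraints forcing the induced policy to choose the same action in states that differ only in the unobservable variable, and the value of information about that variable could then be taken as the gap between the two optimal objectives; the paper itself should be consulted for the exact model.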

Files

Original bundle

Name: MDP_paper_v5.pdf
Size: 668.43 KB
Format: Adobe Portable Document Format