Evolutionary Policy Iteration for Solving Markov Decision Processes

dc.contributor.author: Chang, Hyeong Soo
dc.contributor.author: Lee, Hong-Gi
dc.contributor.author: Fu, Michael C.
dc.contributor.author: Marcus, Steven I.
dc.contributor.department: ISR
dc.contributor.department: CSHCN
dc.date.accessioned: 2007-05-23T10:12:50Z
dc.date.available: 2007-05-23T10:12:50Z
dc.date.issued: 2002
dc.description.abstract: We propose a novel algorithm called Evolutionary Policy Iteration (EPI) for solving infinite-horizon discounted reward Markov Decision Process (MDP) problems. EPI inherits the spirit of the well-known PI algorithm but eliminates the need to maximize over the entire action space in the policy improvement step, so it should be most effective for problems with very large action spaces. EPI iteratively generates a "population," or set of policies, such that the performance of the "elite policy" of each population is monotonically improved with respect to a defined fitness function. EPI converges with probability one to a population whose elite policy is an optimal policy for the given MDP. EPI is naturally parallelizable, and in that context a distributed variant of PI is also studied.
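The abstract's core idea (maintain a population of policies, keep the elite, and improve it monotonically without maximizing over the whole action space) can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact algorithm: the MDP encoding, the total-value fitness function, and the elite-mutation operator are all assumptions introduced here.

```python
import random

def policy_value(P, R, policy, gamma=0.95, tol=1e-8):
    """Evaluate a deterministic policy by iterative policy evaluation.
    P[s][a] is a list of (prob, next_state) pairs; R[s][a] is the reward."""
    n = len(P)
    V = [0.0] * n
    while True:
        delta = 0.0
        for s in range(n):
            a = policy[s]
            v = R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < tol:
            return V

def epi(P, R, n_actions, pop_size=10, generations=50, mut_prob=0.2, seed=0):
    """Sketch of Evolutionary Policy Iteration: track the best ("elite")
    policy seen so far and resample the rest of the population around it.
    Keeping the elite guarantees the fitness never decreases; the mutation
    scheme here is a hypothetical stand-in for the paper's operators."""
    rng = random.Random(seed)
    n = len(P)
    population = [[rng.randrange(n_actions) for _ in range(n)]
                  for _ in range(pop_size)]
    elite, elite_val = None, float("-inf")
    for _ in range(generations):
        for pol in population:
            val = sum(policy_value(P, R, pol))  # fitness: total value over states
            if val > elite_val:
                elite, elite_val = list(pol), val
        # Next generation: the elite survives; the rest mutate the elite,
        # resampling each state's action with probability mut_prob.
        population = [list(elite)]
        for _ in range(pop_size - 1):
            child = [a if rng.random() > mut_prob else rng.randrange(n_actions)
                     for a in elite]
            population.append(child)
    return elite, elite_val
```

Because the elite is always carried over, the elite's fitness is monotonically non-decreasing across generations, mirroring the monotone-improvement property stated in the abstract; note that no step enumerates the full action space in a max over all actions.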
dc.format.extent: 276001 bytes
dc.format.mimetype: application/pdf
dc.identifier.uri: http://hdl.handle.net/1903/6311
dc.language.iso: en_US
dc.relation.ispartofseries: ISR; TR 2002-31
dc.relation.ispartofseries: CSHCN; TR 2002-17
dc.subject: Next-Generation Product Realization Systems
dc.title: Evolutionary Policy Iteration for Solving Markov Decision Processes
dc.type: Technical Report

Files

Original bundle

Name: TR_2002-31.pdf
Size: 269.53 KB
Format: Adobe Portable Document Format