Evolutionary Policy Iteration for Solving Markov Decision Processes
| DC Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Chang, Hyeong Soo | en_US |
| dc.contributor.author | Lee, Hong-Gi | en_US |
| dc.contributor.author | Fu, Michael C. | en_US |
| dc.contributor.author | Marcus, Steven I. | en_US |
| dc.contributor.department | ISR | en_US |
| dc.contributor.department | CSHCN | en_US |
| dc.date.accessioned | 2007-05-23T10:12:50Z | |
| dc.date.available | 2007-05-23T10:12:50Z | |
| dc.date.issued | 2002 | en_US |
| dc.description.abstract | We propose a novel algorithm called Evolutionary Policy Iteration (EPI) for solving infinite-horizon discounted-reward Markov Decision Process (MDP) problems. EPI inherits the spirit of the well-known policy iteration (PI) algorithm but eliminates the need to maximize over the entire action space in the policy improvement step, so it should be most effective for problems with very large action spaces. EPI iteratively generates a "population," a set of policies, such that the performance of the "elite policy" of each population improves monotonically with respect to a defined fitness function. EPI converges with probability one to a population whose elite policy is an optimal policy for the given MDP. EPI is naturally parallelizable, and in this connection a distributed variant of PI is also studied. | en_US |
| dc.format.extent | 276001 bytes | |
| dc.format.mimetype | application/pdf | |
| dc.identifier.uri | http://hdl.handle.net/1903/6311 | |
| dc.language.iso | en_US | en_US |
| dc.relation.ispartofseries | ISR; TR 2002-31 | en_US |
| dc.relation.ispartofseries | CSHCN; TR 2002-17 | en_US |
| dc.subject | Next-Generation Product Realization Systems | en_US |
| dc.title | Evolutionary Policy Iteration for Solving Markov Decision Processes | en_US |
| dc.type | Technical Report | en_US |
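
The abstract outlines EPI's core loop: maintain a population of policies, extract an elite policy whose fitness improves monotonically, and refill the population without ever maximizing over the full action space. The sketch below is a minimal, simplified reading of that loop for a small tabular MDP, not the report's exact algorithm; the elite here is formed by per-state "policy switching" over the population's value functions, and the mutation scheme, function names (`evaluate_policy`, `epi`), and all parameters are illustrative assumptions.

```python
# Hypothetical EPI-style sketch on a small tabular MDP (simplified reading
# of the abstract, not the report's exact method).
# P has shape (n_states, n_actions, n_states); R has shape (n_states, n_actions).
import numpy as np

def evaluate_policy(P, R, policy, gamma):
    """Exact policy evaluation: solve (I - gamma * P_pi) V = R_pi."""
    n = len(policy)
    P_pi = P[np.arange(n), policy]   # (n_states, n_states) under the policy
    R_pi = R[np.arange(n), policy]   # (n_states,) one-step rewards
    return np.linalg.solve(np.eye(n) - gamma * P_pi, R_pi)

def epi(P, R, gamma, pop_size=10, mutate_prob=0.1, iters=50, seed=0):
    """EPI-style loop: keep a population of policies, form an elite policy
    by per-state policy switching, then mutate the elite to refill."""
    rng = np.random.default_rng(seed)
    n_states, n_actions = R.shape
    pop = [rng.integers(n_actions, size=n_states) for _ in range(pop_size)]
    for _ in range(iters):
        values = np.stack([evaluate_policy(P, R, pi, gamma) for pi in pop])
        # Policy switching: at each state, act like the population member
        # whose value function is largest there. No maximization over the
        # action space is needed, only over the pop_size value functions.
        best = values.argmax(axis=0)
        elite = np.array([pop[best[s]][s] for s in range(n_states)])
        # Refill: carry the elite forward unchanged (this preserves the
        # monotone improvement of its fitness) and add mutated copies.
        pop = [elite]
        for _ in range(pop_size - 1):
            child = elite.copy()
            mask = rng.random(n_states) < mutate_prob
            child[mask] = rng.integers(n_actions, size=mask.sum())
            pop.append(child)
    return elite, evaluate_policy(P, R, elite, gamma)
```

The policy-switching step is what lets the elite improve without an argmax over actions: the switching policy's value is at least as large, state by state, as that of every policy currently in the population, so keeping the elite in the next population yields the monotone improvement the abstract describes.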