Title: Evolutionary Policy Iteration for Solving Markov Decision Processes
Authors: Chang, Hyeong Soo; Fu, Michael C.; Marcus, Steven I.
Type: Technical Report
Keywords: Next-Generation Product Realization Systems
Issue Date: 2002
Series/Report no.: ISR; TR 2002-31
CSHCN; TR 2002-17
Abstract: We propose a novel algorithm called Evolutionary Policy Iteration (EPI) for solving infinite-horizon discounted-reward Markov Decision Process (MDP) problems. EPI inherits the spirit of the well-known policy iteration (PI) algorithm but eliminates the need to maximize over the entire action space in the policy improvement step, so it should be most effective for problems with very large action spaces. EPI iteratively generates a "population," or set of policies, such that the performance of the "elite policy" of each population improves monotonically with respect to a defined fitness function. EPI converges with probability one to a population whose elite policy is an optimal policy for the given MDP. EPI is naturally parallelizable, and in the course of this discussion a distributed variant of PI is also studied.
Appears in Collections: Institute for Systems Research Technical Reports
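The abstract's population-based idea can be illustrated with a short sketch. This is not the authors' actual EPI algorithm (the report's precise population-generation and policy-improvement operators are not reproduced here); it is a hypothetical mutation-based variant on a toy randomly generated MDP, showing how keeping the elite policy in each generation makes the best fitness in the population monotonically non-decreasing, without ever maximizing over the full action space at once.

```python
import numpy as np

# Toy MDP: a hypothetical 4-state, 3-action model (illustrative only).
rng = np.random.default_rng(0)
nS, nA, gamma = 4, 3, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a] = next-state distribution
R = rng.uniform(0.0, 1.0, size=(nS, nA))       # R[s, a] = expected one-step reward

def policy_value(pi):
    """Exact evaluation of a deterministic policy pi (array of actions):
    solve (I - gamma * P_pi) v = r_pi."""
    P_pi = P[np.arange(nS), pi]
    r_pi = R[np.arange(nS), pi]
    return np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)

def fitness(pi):
    # Scalar fitness: total discounted value under a uniform start distribution.
    return policy_value(pi).sum()

def epi_sketch(pop_size=8, mutate_prob=0.2, iters=50):
    # Initial population of random deterministic policies.
    population = [rng.integers(nA, size=nS) for _ in range(pop_size)]
    elite = max(population, key=fitness)
    for _ in range(iters):
        # Next generation: keep the elite and add mutated copies of it
        # (random action changes at a few states). Retaining the elite
        # guarantees the best fitness never decreases between generations.
        population = [elite] + [
            np.where(rng.random(nS) < mutate_prob,
                     rng.integers(nA, size=nS), elite)
            for _ in range(pop_size - 1)
        ]
        elite = max(population, key=fitness)
    return elite

pi_star = epi_sketch()
print("elite policy:", pi_star, "fitness:", fitness(pi_star))
```

Note that each generation only evaluates `pop_size` policies, independent of the size of the action space, which is the abstract's motivation for avoiding the full maximization of standard PI.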