
Title: Evolutionary Policy Iteration for Solving Markov Decision Processes
Authors: Chang, Hyeong Soo
Lee, Hong-Gi
Fu, Michael C.
Marcus, Steven I.
Department/Program: ISR
Type: Technical Report
Keywords: Next-Generation Product Realization Systems
Issue Date: 2002
Series/Report no.: ISR; TR 2002-31
CSHCN; TR 2002-17
Abstract: We propose a novel algorithm called Evolutionary Policy Iteration (EPI) for solving infinite-horizon discounted-reward Markov Decision Process (MDP) problems. EPI inherits the spirit of the well-known policy iteration (PI) algorithm but eliminates the need to maximize over the entire action space in the policy improvement step, so it should be most effective for problems with very large action spaces. EPI iteratively generates a "population," or set of policies, such that the performance of the "elite policy" of each population monotonically improves with respect to a defined fitness function. EPI converges with probability one to a population whose elite policy is an optimal policy for the given MDP. EPI is naturally parallelizable, and in the course of this discussion a distributed variant of PI is also studied.
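The population-based loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact algorithm: the toy MDP, the mutation rate, the sum-of-values fitness, and the simple elitist refill are all assumptions made for illustration (the report's EPI uses its own policy-generation and fitness machinery). The sketch does show the two key ideas from the abstract: no maximization over the full action space (actions are sampled by mutation instead), and monotone improvement of the elite policy (the elite is always carried into the next population).

```python
import random

# Hypothetical toy MDP: 3 states, 4 actions. Dynamics and rewards are
# random but fixed by the seed; they are illustrative, not from the report.
random.seed(0)
N_STATES, N_ACTIONS, GAMMA = 3, 4, 0.9

P = [[[random.random() for _ in range(N_STATES)] for _ in range(N_ACTIONS)]
     for _ in range(N_STATES)]
for s in range(N_STATES):
    for a in range(N_ACTIONS):
        z = sum(P[s][a])
        P[s][a] = [p / z for p in P[s][a]]  # normalize transition rows
R = [[random.random() for _ in range(N_ACTIONS)] for _ in range(N_STATES)]

def evaluate(policy, iters=200):
    """Iterative policy evaluation: returns the value function of `policy`."""
    V = [0.0] * N_STATES
    for _ in range(iters):
        V = [R[s][policy[s]]
             + GAMMA * sum(P[s][policy[s]][t] * V[t] for t in range(N_STATES))
             for s in range(N_STATES)]
    return V

def fitness(policy):
    # One simple choice of fitness: total value over all states.
    return sum(evaluate(policy))

def mutate(policy, rate=0.3):
    """Resample each state's action with probability `rate` -- the action
    space is sampled, never maximized over in full."""
    return [random.randrange(N_ACTIONS) if random.random() < rate else a
            for a in policy]

def epi(pop_size=8, generations=30):
    """Elitist evolutionary loop: the elite policy's fitness never decreases,
    because the elite is always kept in the next population."""
    population = [[random.randrange(N_ACTIONS) for _ in range(N_STATES)]
                  for _ in range(pop_size)]
    elite = max(population, key=fitness)
    for _ in range(generations):
        population = [elite] + [mutate(elite) for _ in range(pop_size - 1)]
        elite = max(population, key=fitness)
    return elite

best = epi()
```

Because the elite is copied unchanged into every new population, the elite's fitness is non-decreasing across generations, which mirrors the monotone-improvement property stated in the abstract.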
Appears in Collections:Institute for Systems Research Technical Reports

Files in This Item:

File            Size       Format     No. of Downloads
TR_2002-31.pdf  269.53 kB  Adobe PDF  268

All items in DRUM are protected by copyright, with all rights reserved.


DRUM is brought to you by the University of Maryland Libraries
University of Maryland, College Park, MD 20742-7011 (301)314-1328.