Evolutionary Policy Iteration for Solving Markov Decision Processes


Files

TR_2002-31.pdf (269.53 KB)

Date

2002

Abstract

We propose a novel algorithm called Evolutionary Policy Iteration (EPI) for solving infinite-horizon discounted-reward Markov decision process (MDP) problems. EPI inherits the spirit of the well-known policy iteration (PI) algorithm but eliminates the need to maximize over the entire action space in the policy improvement step, so it should be most effective for problems with very large action spaces. EPI iteratively generates a "population," or set, of policies such that the performance of the "elite policy" of each population monotonically improves with respect to a defined fitness function. EPI converges with probability one to a population whose elite policy is an optimal policy for the given MDP. EPI is naturally parallelizable, and along these lines a distributed variant of PI is also studied.
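The population-based improvement idea described above can be illustrated with a minimal sketch, assuming a small finite MDP with known transition array `P` and reward array `R`. The specific mutation scheme, population size, and the per-state "policy switching" combination step below are illustrative assumptions, not details taken from the report itself:

```python
import numpy as np

def policy_value(P, R, policy, gamma):
    """Exact value of a deterministic policy via (I - gamma * P_pi) v = r_pi.

    P: (nA, nS, nS) transition probabilities; R: (nA, nS) rewards.
    """
    n_states = P.shape[1]
    idx = np.arange(n_states)
    P_pi = P[policy, idx]   # (nS, nS): transition rows under the chosen actions
    r_pi = R[policy, idx]   # (nS,): rewards under the chosen actions
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)

def evolutionary_policy_iteration(P, R, gamma, pop_size=10, n_gens=200,
                                  mutate_p=0.3, seed=0):
    """Hypothetical EPI-style sketch: mutate the elite policy to form a
    population, then combine it by per-state policy switching."""
    rng = np.random.default_rng(seed)
    n_actions, n_states = R.shape
    elite = rng.integers(n_actions, size=n_states)
    elite_v = policy_value(P, R, elite, gamma)
    for _ in range(n_gens):
        # Generate a population by randomly mutating the current elite policy;
        # no maximization over the full action space is ever performed.
        population, values = [elite], [elite_v]
        for _ in range(pop_size):
            child = elite.copy()
            mask = rng.random(n_states) < mutate_p
            child[mask] = rng.integers(n_actions, size=mask.sum())
            population.append(child)
            values.append(policy_value(P, R, child, gamma))
        # Policy switching: at each state, follow the population member whose
        # value is highest there; ties go to the elite (index 0), so the
        # elite's value never decreases across generations.
        V = np.stack(values)          # (pop_size + 1, nS)
        best = V.argmax(axis=0)
        elite = np.array([population[best[s]][s] for s in range(n_states)])
        elite_v = policy_value(P, R, elite, gamma)
    return elite, elite_v
```

Because the elite policy is always kept in the population, the elite's value is component-wise non-decreasing across generations, mirroring the monotone-improvement property claimed in the abstract.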
