
Title: Convergence of Sample Path Optimal Policies for Stochastic Dynamic Programming
Authors: Fu, Michael C.
Jin, Xing
Advisors: Fu, Michael C.
Department/Program: ISR
Type: Technical Report
Issue Date: 2005
Series/Report no.: ISR; TR 2005-84
Abstract: We consider the solution of stochastic dynamic programs using sample path estimates. Applying the theory of large deviations, we derive probability error bounds associated with the convergence of the estimated optimal policy to the true optimal policy, for finite horizon problems. These bounds decay at an exponential rate, in contrast with the usual canonical (inverse) square root rate associated with estimation of the value (cost-to-go) function itself. These results have practical implications for Monte Carlo simulation-based solution approaches to stochastic dynamic programming problems where it is impractical to extract the explicit transition probabilities of the underlying system model.
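The approach described in the abstract can be illustrated on a toy problem. The sketch below is an assumption-laden example, not the model from the report: a finite-horizon DP with two states and two actions, where the true optimal policy is computed by backward induction from the known transition probabilities, and a "sample path" policy is computed by the same backward induction applied to Monte Carlo estimates of those probabilities. All numbers (`P`, `C`, the horizon `T`) are invented for illustration.

```python
import random

# Toy finite-horizon stochastic DP: 2 states, 2 actions, horizon T.
# Illustrative model only -- transition probabilities and costs are
# made up; the report's results are about general finite-horizon DPs.
STATES = [0, 1]
ACTIONS = [0, 1]
T = 3

# True transition probabilities: P[s][a] = Prob(next state = 1 | s, a).
P = {0: {0: 0.3, 1: 0.7}, 1: {0: 0.6, 1: 0.2}}
# One-stage costs C[s][a].
C = {0: {0: 1.0, 1: 2.0}, 1: {0: 0.5, 1: 1.5}}

def backward_induction(prob):
    """Solve the finite-horizon DP for given transition probabilities
    prob[s][a]; return the optimal policy as policy[t][s] -> action."""
    V = {s: 0.0 for s in STATES}  # zero terminal cost-to-go
    policy = {}
    for t in reversed(range(T)):
        V_new, pol_t = {}, {}
        for s in STATES:
            # Q-value of each action: stage cost + expected cost-to-go.
            q = {a: C[s][a] + prob[s][a] * V[1] + (1 - prob[s][a]) * V[0]
                 for a in ACTIONS}
            pol_t[s] = min(q, key=q.get)
            V_new[s] = q[pol_t[s]]
        V, policy[t] = V_new, pol_t
    return policy

def estimate_probs(n, rng):
    """Sample-path estimate of the transition probabilities from n
    simulated transitions per (state, action) pair."""
    return {s: {a: sum(rng.random() < P[s][a] for _ in range(n)) / n
                for a in ACTIONS}
            for s in STATES}

rng = random.Random(42)
true_policy = backward_induction(P)
est_policy = backward_induction(estimate_probs(10_000, rng))
print(true_policy == est_policy)  # prints True
```

The point the abstract makes is visible here: the estimated *policy* coincides with the true one long before the estimated *value function* is accurate, because the policy only needs each estimated Q-value gap to keep its sign, and the probability of a sign flip vanishes exponentially in the number of simulated transitions.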
Appears in Collections:Institute for Systems Research Technical Reports

Files in This Item:

File: TR_2005-84.pdf (Adobe PDF, 181.79 kB, 529 downloads)

All items in DRUM are protected by copyright, with all rights reserved.


DRUM is brought to you by the University of Maryland Libraries
University of Maryland, College Park, MD 20742-7011 (301)314-1328.