Convergence of Sample Path Optimal Policies for Stochastic Dynamic Programming

Advisor: Fu, Michael C.
Authors: Fu, Michael C.; Jin, Xing
Department: ISR
Date accessioned: 2007-05-23
Date available: 2007-05-23
Date issued: 2005
Abstract: We consider the solution of stochastic dynamic programs using sample path estimates. Applying the theory of large deviations, we derive probability error bounds on the convergence of the estimated optimal policy to the true optimal policy for finite-horizon problems. These bounds decay at an exponential rate, in contrast with the canonical (inverse) square-root rate associated with estimating the value (cost-to-go) function itself. The results have practical implications for Monte Carlo simulation-based approaches to stochastic dynamic programming problems in which it is impractical to extract the explicit transition probabilities of the underlying system model.
Format: application/pdf (186156 bytes)
URI: http://hdl.handle.net/1903/6548
Language: en_US
Series/Report no.: ISR; TR 2005-84
Type: Technical Report
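The contrast the abstract draws (exponential decay of the policy-selection error versus the square-root rate for the value estimate) can be illustrated on a toy problem. The sketch below is not the report's algorithm; it is a minimal one-stage example of sample-path (sample-average) estimation, with all names and parameters invented for illustration. Action 1 is truly optimal by a margin of 0.1, and by a Chernoff/Hoeffding large-deviations bound the probability of selecting the wrong action decays exponentially in the number of sample paths n, while the value estimate's error shrinks only at the O(1/sqrt(n)) rate.

```python
import random

def sample_reward(a, rng):
    """One sample-path reward for action a: Uniform(0,1) + 0.1*a.
    Expected reward is 0.5 + 0.1*a, so action 1 is truly optimal."""
    return rng.random() + 0.1 * a

def estimate_policy_and_value(n, rng):
    """Estimate each action's expected reward from n sample paths and
    return the estimated optimal action and its estimated value."""
    means = [sum(sample_reward(a, rng) for _ in range(n)) / n for a in (0, 1)]
    best = max((0, 1), key=lambda a: means[a])
    return best, means[best]

def error_rates(n, reps=1000, seed=0):
    """Over many replications, measure how often the estimated optimal
    policy is wrong, and the mean absolute error of the value estimate."""
    rng = random.Random(seed)
    wrong = 0
    abs_err = 0.0
    for _ in range(reps):
        a_hat, v_hat = estimate_policy_and_value(n, rng)
        wrong += (a_hat != 1)        # true optimal action is 1
        abs_err += abs(v_hat - 0.6)  # true optimal value is 0.6
    return wrong / reps, abs_err / reps

if __name__ == "__main__":
    for n in (10, 100, 1000):
        p_wrong, mean_err = error_rates(n)
        print(f"n={n:5d}  P(wrong policy)={p_wrong:.4f}  "
              f"mean |value error|={mean_err:.4f}")
```

Running this, the wrong-policy frequency collapses toward zero much faster than the value error shrinks, which is the qualitative phenomenon the report's large-deviations bounds make precise for finite-horizon dynamic programs.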

Files

TR_2005-84.pdf (181.79 KB, Adobe Portable Document Format)