STOCHASTIC OPTIMIZATION: ALGORITHMS AND CONVERGENCE

Files

umi-umd-2219.pdf (417.36 KB)

Date

2005-03-23

Abstract

Stochastic approximation is one of the oldest approaches to solving stochastic optimization problems. In the first part of the dissertation, we study the convergence and asymptotic normality of a generalized form of the stochastic approximation algorithm driven by deterministic perturbation sequences. Both one-simulation and two-simulation methods are considered. Assuming a special structure on the deterministic sequence, we establish sufficient conditions on the noise sequence for almost sure (a.s.) convergence and asymptotic normality of the algorithm. Finally, we propose directions for further research on the analysis and design of deterministic perturbation sequences.
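
As a rough illustration of the two-simulation idea (not the dissertation's exact algorithm), the sketch below performs an SPSA-style update in which the perturbation vectors come from a deterministic sequence, here cycled rows of a small Hadamard matrix; the gain sequences a(k) and c(k), the loss function, and all names are illustrative assumptions.

```python
import numpy as np

def two_simulation_sa(loss, theta0, delta_seq, n_iter=1000,
                      a=lambda k: 1.0 / (k + 1),
                      c=lambda k: 1.0 / (k + 1) ** 0.25):
    """SPSA-style two-simulation update with a deterministic perturbation sequence.

    loss(theta)  : returns a noisy measurement of the objective.
    delta_seq(k) : returns the k-th deterministic perturbation vector
                   (e.g., rows of a Hadamard matrix, cycled).
    a, c         : gain sequences (illustrative choices only).
    """
    theta = np.asarray(theta0, dtype=float)
    for k in range(n_iter):
        dk = np.asarray(delta_seq(k), dtype=float)
        ck = c(k)
        # Gradient estimate from two perturbed measurements, divided
        # componentwise by the perturbation as in simultaneous-perturbation schemes.
        g_hat = (loss(theta + ck * dk) - loss(theta - ck * dk)) / (2.0 * ck * dk)
        theta = theta - a(k) * g_hat
    return theta

# Example use: noisy quadratic loss, perturbations cycled from a 2x2 Hadamard matrix.
H = np.array([[1.0, 1.0], [1.0, -1.0]])
theta_hat = two_simulation_sa(
    loss=lambda th: float(th @ th) + np.random.normal(scale=0.1),
    theta0=[2.0, -1.5],
    delta_seq=lambda k: H[k % 2],
)
```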

In the second part of the dissertation, we apply stochastic optimization techniques to American option pricing, a challenging task particularly for high-dimensional underlying securities. For options with a finite number of exercise dates, we present a weighted stochastic mesh method that requires only some easy-to-verify assumptions and a method to simulate the behavior of the underlying securities. The algorithm provides point estimates and confidence intervals for both the price and the value-at-risk, and the estimators converge to the true values as the computational effort increases.
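
The sketch below shows only the general backward-induction structure of a stochastic mesh estimator under assumed inputs; it is not the dissertation's weighted method, and the mesh-weight function, payoff function, and path-array layout are placeholders. The confidence-interval and value-at-risk estimators mentioned above are not reproduced.

```python
import numpy as np

def stochastic_mesh_price(paths, payoff, mesh_weights, discount=1.0):
    """Backward-induction stochastic-mesh estimator for an option with
    finitely many exercise dates.

    paths        : array of shape (b, T + 1, d), b simulated paths of the
                   underlying securities at dates 0, 1, ..., T.
    payoff       : payoff(x) -> immediate exercise values at the b nodes x.
    mesh_weights : mesh_weights(x_t, x_next) -> (b, b) weight matrix linking
                   mesh nodes at one date to the next (a modeling choice).
    discount     : one-period discount factor.
    """
    b, num_dates, _ = paths.shape
    T = num_dates - 1
    value = payoff(paths[:, T])                        # exercise value at maturity
    for t in range(T - 1, 0, -1):
        w = mesh_weights(paths[:, t], paths[:, t + 1])
        continuation = discount * (w @ value) / b      # weighted mesh average
        value = np.maximum(payoff(paths[:, t]), continuation)
    return discount * value.mean()                     # time-0 point estimate
```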

In the third part, we address an optimization problem in the field of ranking and selection. We generalize the settings considered in the literature to non-Gaussian, correlated distributions. We propose a procedure for locating an approximate solution and show that it converges to the true solution asymptotically; a convergence rate is also provided for the Gaussian setting.
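
For context, a minimal baseline version of the ranking-and-selection idea is sketched below: equal allocation to each system and selection of the largest sample mean, which converges to the true best system asymptotically under mild conditions. The dissertation's actual procedure, its treatment of non-Gaussian correlated observations, and its convergence-rate analysis are not reproduced; all names in the code are assumptions.

```python
import numpy as np

def sample_mean_selection(simulate, k, n):
    """Naive ranking-and-selection baseline: give each of k systems n
    replications and pick the one with the largest sample mean.

    simulate(i) returns one performance observation of system i.  Under mild
    conditions (a law of large numbers for each system's observations), the
    selected index converges to the true best system as n grows.
    """
    means = np.array([np.mean([simulate(i) for _ in range(n)]) for i in range(k)])
    return int(np.argmax(means)), means

# Example: three systems with unknown (non-Gaussian) output distributions.
rng = np.random.default_rng(0)
true_means = [1.0, 1.3, 0.8]
best, est = sample_mean_selection(lambda i: rng.exponential(true_means[i]), k=3, n=500)
```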
