STOCHASTIC OPTIMIZATION: APPROXIMATE BAYESIAN INFERENCE AND COMPLETE EXPECTED IMPROVEMENT


Date

2018

Abstract

Stochastic optimization encompasses modeling, computation, and decision making. In practice, because of the limitations of available mathematical tools or of real-world budgets, many solution methods are designed using approximation techniques, or take forms that are efficient to compute and update. Such models have demonstrated practical benefits in diverse settings, but many of them lack rigorous theoretical support. By interfacing with statistical tools, we analyze the asymptotic properties of two important Bayesian models and establish their validity by proving consistency and other limiting results, which may be useful to algorithmic scientists seeking to leverage these computational techniques for their practical performance.

The first part of the thesis is a consistency analysis of sequential learning algorithms under approximate Bayesian inference. Approximate Bayesian inference is a powerful methodology for constructing computationally efficient statistical mechanisms for sequential learning from incomplete or censored information. Approximate Bayesian learning models have proven successful in a variety of operations research and business problems; however, prior work in this area has been primarily computational, and the consistency of approximate Bayesian estimators has remained a largely open problem. We develop a new consistency theory by interpreting approximate Bayesian inference as a form of stochastic approximation (SA) with an additional "bias" term. We prove the convergence of a general SA algorithm of this form, and leverage this analysis to derive the first consistency proofs for a suite of approximate Bayesian models from the recent literature.
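To make the SA-with-bias viewpoint concrete, the following is a minimal sketch of a stochastic approximation recursion perturbed by a vanishing bias term. The specific recursion, step sizes, and bias schedule here are illustrative assumptions for a toy root-finding problem, not the thesis's actual estimators or conditions.

```python
import numpy as np

def biased_sa(grad, bias, x0, n_iters=50000, seed=0):
    """Stochastic approximation with an additional bias term:
        x_{n+1} = x_n - a_n * (grad(x_n) + noise_n + bias_n),
    using step sizes a_n = 1/(n+1). If the bias vanishes fast enough,
    the iterates still converge to the root of grad.
    """
    rng = np.random.default_rng(seed)
    x = x0
    for n in range(n_iters):
        a_n = 1.0 / (n + 1)
        noise = rng.normal()          # zero-mean sampling noise
        x = x - a_n * (grad(x) + noise + bias(n))
    return x

# Toy example: find the minimizer of f(x) = (x - 2)^2, i.e. the root of
# grad(x) = 2(x - 2), under a decaying bias of order 1/sqrt(n).
x_star = biased_sa(grad=lambda x: 2.0 * (x - 2.0),
                   bias=lambda n: 1.0 / np.sqrt(n + 1),
                   x0=0.0)
```

Under the classical Robbins–Monro conditions, the noise averages out across iterations; the consistency question studied in the thesis is, roughly, when the extra bias term is similarly harmless.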

The second part of the thesis proposes a budget allocation algorithm for the ranking and selection problem, a well-known mathematical framework for the formal study of optimal information collection. Expected improvement (EI) is a leading algorithmic approach to this problem; the practical benefits of EI have repeatedly been demonstrated in the literature, especially in the widely studied setting of Gaussian sampling distributions. However, it was recently proved that some of the most well-known EI-type methods achieve suboptimal convergence rates. We investigate a recently proposed variant of EI (known as "complete EI") and prove that, with some minor modifications, it can be made to converge to the rate-optimal static budget allocation without requiring any tuning.
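As background for the EI-type allocation rules discussed above, the following is a minimal sketch of the standard EI scoring rule for ranking and selection with Gaussian sampling distributions: each alternative is scored by its expected improvement over the best competing mean, and the next sample goes to the highest-scoring alternative. The means and standard errors below are made-up illustrative numbers; complete EI modifies the uncertainty term in this score, which is not shown here.

```python
import math

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ei_scores(means, stderrs):
    """EI scores for ranking and selection (maximization) with Gaussian
    sampling distributions. For alternative i with posterior mean m_i and
    standard error s_i, the score is
        EI_i = s_i * f(-|m_i - m*| / s_i),  f(z) = z * Phi(z) + phi(z),
    where m* is the best mean among the OTHER alternatives.
    """
    scores = []
    for i, (m, s) in enumerate(zip(means, stderrs)):
        best_other = max(m2 for j, m2 in enumerate(means) if j != i)
        z = -abs(m - best_other) / s
        scores.append(s * (z * normal_cdf(z) + normal_pdf(z)))
    return scores

# Illustrative posterior summaries for three alternatives.
means = [1.0, 1.2, 0.8]
stderrs = [0.3, 0.2, 0.4]
scores = ei_scores(means, stderrs)
next_alt = max(range(len(scores)), key=scores.__getitem__)  # sample this one next
```

Note that the rule balances exploitation (small gap to the best mean) against exploration (large standard error); the rate results cited above concern how the long-run sampling frequencies induced by such rules compare with the rate-optimal static allocation.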
