ESSAYS IN STATISTICAL ANALYSIS: ISOTONIC REGRESSION AND FILTERING

Date

2018

Abstract

In many real-world applications in optimal information collection and stochastic approximation, statistical estimators are constructed to learn the true parameter value of some utility function or underlying signal. Many of these estimators exhibit excellent empirical performance, but full analyses of their consistency have not previously been available, leaving decision-makers in somewhat of a predicament regarding implementation. The goal of this dissertation is to fill this gap by supplying the missing consistency proofs.

The first part of this thesis considers the consistency of estimating a monotonic cost function that appears in an optimal learning algorithm combining isotonic regression with a Bayesian policy known as Knowledge Gradient with Discrete Priors (KGDP). Isotonic regression deals with regression problems under order constraints. Previous literature proposed estimating the cost function by a weighted sum of a pool of candidate curves, each generated by the isotonic regression estimator from all previously collected observations, with the weights calculated by KGDP. Our primary objective is to establish the consistency of this estimator. Some minor results, regarding the knowledge gradient algorithm and the isotonic regression estimator under insufficient observations, are also discussed.
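
For readers unfamiliar with the two ingredients, the sketch below illustrates them on synthetic data: a pool-adjacent-violators (PAVA) pass produces a monotone candidate curve, and a discrete-prior Bayesian update, which is the role the KGDP weights play, reweights a small pool of such candidates by their likelihood under Gaussian noise. The data, noise level, and candidate pool here are hypothetical; this is a minimal illustration of the two components, not the dissertation's algorithm.

```python
import numpy as np

def pava(y):
    """Least-squares monotone (nondecreasing) fit via pool-adjacent-violators."""
    means, weights = [], []
    for v in y:
        means.append(float(v))
        weights.append(1.0)
        # Pool adjacent blocks while their means violate monotonicity.
        while len(means) > 1 and means[-2] > means[-1]:
            w = weights[-2] + weights[-1]
            m = (weights[-2] * means[-2] + weights[-1] * means[-1]) / w
            means[-2:] = [m]
            weights[-2:] = [w]
    # Expand the block means back to a fitted value for every observation.
    fit = []
    for m, w in zip(means, weights):
        fit.extend([m] * int(w))
    return np.array(fit)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
truth = np.sqrt(x)                        # hypothetical monotone cost curve
noise_sd = 0.1
y = truth + rng.normal(0.0, noise_sd, size=x.size)

# One monotone candidate fitted to all observations collected so far.
candidate = pava(y)

# A small hypothetical pool of monotone candidate curves.
pool = np.stack([candidate, x, x ** 2, np.sqrt(x)])

# Discrete-prior Bayesian reweighting: with a uniform prior, each candidate's
# posterior weight is proportional to its Gaussian likelihood given the data.
log_lik = -0.5 * np.sum((y - pool) ** 2, axis=1) / noise_sd ** 2
log_lik -= log_lik.max()                  # stabilize the exponentials
weights = np.exp(log_lik) / np.exp(log_lik).sum()

estimate = weights @ pool                 # weighted sum of candidate curves
print(estimate.round(3))
```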

The second part of this thesis focuses on the convergence of the bias-adjusted Kalman filter (BAKF). The BAKF algorithm is designed to optimize the statistical estimation of a non-stationary signal that can only be observed with stochastic noise, and it has numerous applications in dynamic programming and signal processing. However, a consistency analysis of the process that approximates the underlying signal has heretofore not been available. We resolve this open issue by showing that the BAKF stepsize satisfies the well-known conditions for almost sure convergence of a stochastic approximation sequence, requiring only one additional assumption on the convergence rate of the signal beyond those used in the original derivation of the algorithm.
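
As context, the stepsize recursion commonly associated with BAKF (in the form given by George and Powell, 2006; the exact formula is an assumption here, since the abstract does not state it) and the classical stochastic approximation conditions it must satisfy, namely that the stepsizes sum to infinity while their squares sum to a finite value, can be sketched numerically with hypothetical variance and bias inputs:

```python
import numpy as np

def bakf_stepsizes(noise_var, bias, n_steps):
    """Bias-adjusted stepsize recursion in the style of George & Powell (2006);
    this specific form is an assumption, not quoted from the dissertation:
        alpha_n = 1 - noise_var / ((1 + lam) * noise_var + bias[n] ** 2)
        lam     = (1 - alpha_n) ** 2 * lam + alpha_n ** 2
    """
    lam = 0.0
    alphas = np.empty(n_steps)
    for n in range(n_steps):
        alpha = 1.0 - noise_var / ((1.0 + lam) * noise_var + bias[n] ** 2)
        lam = (1.0 - alpha) ** 2 * lam + alpha ** 2
        alphas[n] = alpha
    return alphas

# Hypothetical inputs: unit noise variance and a bias that decays as the
# non-stationary signal settles down.
n = 100_000
bias = 1.0 / np.sqrt(np.arange(1, n + 1))
alphas = bakf_stepsizes(noise_var=1.0, bias=bias, n_steps=n)

# Stochastic approximation conditions: sum(alpha) should grow without bound
# while sum(alpha ** 2) levels off; the partial sums illustrate the trend.
print(alphas.sum(), (alphas ** 2).sum())
```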
