Show simple item record

dc.contributor.advisor: Ryzhov, Ilya O.
dc.contributor.author: Ding, Zi
dc.date.accessioned: 2014-10-11T05:49:57Z
dc.date.available: 2014-10-11T05:49:57Z
dc.date.issued: 2014
dc.identifier: doi:10.13016/M2PS3X
dc.identifier.uri: http://hdl.handle.net/1903/15774
dc.description.abstract: In this dissertation, we study sequential Bayesian learning problems modeled under non-Gaussian distributions. We focus on a class of problems known as multi-armed bandit problems and study their optimal learning strategy, the Gittins index policy. The Gittins index is computationally intractable, and approximation methods have been developed for Gaussian reward problems. We construct a novel theoretical and computational framework for the Gittins index under non-Gaussian rewards. By interpolating the rewards using continuous-time conditional Lévy processes, we recast the optimal stopping problems that characterize Gittins indices as free-boundary partial integro-differential equations (PIDEs). We also provide additional structural properties and numerical illustrations of how our approach can be used to approximate the Gittins index.
dc.language.iso: en
dc.title: Optimal Learning with Non-Gaussian Rewards
dc.type: Dissertation
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.contributor.department: Applied Mathematics and Scientific Computation
dc.subject.pqcontrolled: Mathematics
dc.subject.pqcontrolled: Operations research
dc.subject.pquncontrolled: Bayesian learning
dc.subject.pquncontrolled: Gittins Index
dc.subject.pquncontrolled: non-Gaussian rewards
dc.subject.pquncontrolled: Optimal learning
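The abstract describes the computational intractability of the Gittins index, which the dissertation addresses through a Lévy-process/PIDE framework for general non-Gaussian rewards. As a minimal illustration of what "computing a Gittins index" means, the sketch below approximates the index of a single Bernoulli arm with a Beta posterior by truncated dynamic programming plus bisection on a retirement rate. The Bernoulli model, truncation depth, and function names are assumptions for illustration only; this is not the dissertation's method.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def play_value(a, b, lam, gamma, depth):
    """Value of the optimal stopping problem for one Bernoulli arm with a
    Beta(a, b) posterior: at each step, either retire for a reward stream
    worth lam / (1 - gamma), or pull the arm and update the posterior."""
    retire = lam / (1.0 - gamma)
    if depth == 0:
        return retire  # truncation: value the tail as immediate retirement
    p = a / (a + b)  # posterior mean success probability
    pull = (p * (1.0 + gamma * play_value(a + 1, b, lam, gamma, depth - 1))
            + (1.0 - p) * gamma * play_value(a, b + 1, lam, gamma, depth - 1))
    return max(retire, pull)

def gittins_index(a, b, gamma=0.9, depth=30, tol=1e-4):
    """Bisect on the retirement rate lam: the Gittins index is the rate at
    which the decision maker is indifferent between pulling and retiring."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if play_value(a, b, lam, gamma, depth) > lam / (1.0 - gamma) + 1e-12:
            lo = lam  # pulling still strictly beats retiring: index is higher
        else:
            hi = lam
    return 0.5 * (lo + hi)
```

For a uniform Beta(1, 1) prior the index exceeds the posterior mean 0.5, reflecting the exploration value of pulling an uncertain arm. The exponential growth of such state spaces, and the absence of conjugate discrete structure for general non-Gaussian rewards, is what motivates the continuous-time free-boundary approach in the abstract.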

