Reinforcement Learning Methods for Conic Finance

dc.contributor.advisor: Madan, Dilip
dc.contributor.author: Chopra, Sahil
dc.contributor.department: Applied Mathematics and Scientific Computation
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2020-09-29T05:31:33Z
dc.date.available: 2020-09-29T05:31:33Z
dc.date.issued: 2020
dc.description.abstract: Conic Finance is a world of two prices, a more grounded reality than the classical theory of one price. This world, however, is constructed from nonadditive expectations of risks, or value functions, which makes some optimization algorithms incompatible with it, if not infeasible. This is most evident in Reinforcement Learning, where the underlying principles of TD learning and the Bellman equation rest on the additivity of value functions. The task undertaken here is therefore to mold recent advances in Distributional Reinforcement Learning to be conducive to learning under nonadditive dynamics. Algorithms for discrete and continuous actions are described and illustrated on sample problems in finance.
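The nonadditive expectations the abstract refers to are distorted expectations: a concave distortion of the distribution function reweights outcomes toward losses, producing a bid price below the ordinary mean. A minimal sketch follows, using the minmaxvar distortion of Cherny and Madan; the function names `minmaxvar` and `bid_price` and the parameter name `gamma` are illustrative, not taken from the dissertation.

```python
import numpy as np

def minmaxvar(u, gamma):
    """Cherny-Madan minmaxvar distortion: concave on [0, 1], identity at gamma = 0."""
    return 1.0 - (1.0 - u ** (1.0 / (1.0 + gamma))) ** (1.0 + gamma)

def bid_price(outcomes, gamma):
    """Distorted (nonadditive) expectation of sampled outcomes.

    Integrates x against the distorted empirical CDF Psi(F(x)). Because Psi
    is concave, low outcomes receive extra weight, so the bid lies at or
    below the ordinary mean; gamma = 0 recovers the mean exactly.
    """
    x = np.sort(np.asarray(outcomes, dtype=float))
    n = len(x)
    u = np.arange(1, n + 1) / n                      # empirical CDF at sorted points
    grid = np.concatenate(([0.0], u))
    w = np.diff(minmaxvar(grid, gamma))              # distorted probability weights
    return float(np.dot(w, x))

samples = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print(bid_price(samples, 0.0))   # gamma = 0: ordinary mean, 0.0
print(bid_price(samples, 0.5))   # gamma > 0: conservative bid below the mean
```

Note the nonadditivity: the distorted expectation of a sum of positions is in general not the sum of their distorted expectations, which is precisely what breaks the standard Bellman recursion and motivates the distributional approach.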
dc.identifier: https://doi.org/10.13016/6zp9-sefo
dc.identifier.uri: http://hdl.handle.net/1903/26471
dc.language.iso: en
dc.subject.pqcontrolled: Mathematics
dc.subject.pqcontrolled: Finance
dc.title: Reinforcement Learning Methods for Conic Finance
dc.type: Dissertation

Files

Original bundle
Name: Chopra_umd_0117E_20941.pdf
Size: 1.68 MB
Format: Adobe Portable Document Format