Reinforcement Learning Methods for Conic Finance
dc.contributor.advisor | Madan, Dilip | en_US |
dc.contributor.author | Chopra, Sahil | en_US |
dc.contributor.department | Applied Mathematics and Scientific Computation | en_US |
dc.contributor.publisher | Digital Repository at the University of Maryland | en_US |
dc.contributor.publisher | University of Maryland (College Park, Md.) | en_US |
dc.date.accessioned | 2020-09-29T05:31:33Z | |
dc.date.available | 2020-09-29T05:31:33Z | |
dc.date.issued | 2020 | en_US |
dc.description.abstract | Conic Finance is a world of two prices, a more grounded reality than the classical one-price theory. This world, however, is constructed from nonadditive expectations of risks, or value functions. That nonadditivity makes some optimization algorithms incompatible with this universe, if not infeasible. The problem is most evident in the application of reinforcement learning algorithms, where the underlying principles of TD learning and the Bellman equations rest on the additivity of value functions. Hence, the task undertaken here is to mold recent advances in distributional reinforcement learning into algorithms conducive to learning under nonadditive dynamics. Algorithms for discrete and continuous actions are described and illustrated on sample problems in finance. | en_US |
dc.identifier | https://doi.org/10.13016/6zp9-sefo | |
dc.identifier.uri | http://hdl.handle.net/1903/26471 | |
dc.language.iso | en | en_US |
dc.subject.pqcontrolled | Mathematics | en_US |
dc.subject.pqcontrolled | Finance | en_US |
dc.title | Reinforcement Learning Methods for Conic Finance | en_US |
dc.type | Dissertation | en_US |
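The additivity obstacle described in the abstract can be made concrete with a short worked equation. The sketch below uses standard RL and conic-finance notation (V for the value function, gamma for the discount factor, Psi for a concave distortion, b for the bid price); the symbols are illustrative assumptions for exposition, not taken from the dissertation itself.

% Standard Bellman equation: linearity of the expectation lets the
% one-step reward separate from the discounted continuation value.
\[
V^{\pi}(s)
  = \mathbb{E}_{\pi}\!\left[ R_{t+1} + \gamma V^{\pi}(S_{t+1}) \,\middle|\, S_t = s \right]
  = \mathbb{E}_{\pi}\!\left[ R_{t+1} \,\middle|\, S_t = s \right]
  + \gamma\, \mathbb{E}_{\pi}\!\left[ V^{\pi}(S_{t+1}) \,\middle|\, S_t = s \right].
\]
% Conic (bid) prices replace the expectation with a distorted
% expectation, built from a concave distortion \Psi applied to the
% distribution function F_X of the risk X:
\[
b(X) = \int_{-\infty}^{\infty} x \,\mathrm{d}\Psi\!\left( F_X(x) \right),
\qquad
b(X + Y) \neq b(X) + b(Y) \ \text{in general},
\]
% so the additive reward-plus-continuation decomposition above, and
% with it the standard TD backup, no longer applies.

Distorted expectations of this Choquet type are additive only for comonotone risks, which is why the usual split of reward from continuation value fails under general dynamics and motivates the distributional reformulation the abstract describes.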