Robust Reinforcement Learning via Risk-Sensitivity

dc.contributor.advisor: Baras, John S.
dc.contributor.author: Noorani, Erfaun
dc.contributor.department: Electrical Engineering
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2023-10-12T05:34:20Z
dc.date.available: 2023-10-12T05:34:20Z
dc.date.issued: 2023
dc.description.abstract: The objective of this research is to develop robust, resilient, and adaptive Reinforcement Learning (RL) systems that are generic, provide performance guarantees, and can generalize, reason, and improve in complex and unknown task environments. To achieve this objective, we explore the concept of risk-sensitivity in RL systems and its extensions to Multi-Agent (MA) systems. Robust reinforcement learning algorithms are needed to address challenges such as model misspecification, parameter uncertainty, and disturbances. Risk-sensitive methods offer one route to robust RL algorithms by hedging against undesirable outcomes in a probabilistic manner, and the robustness properties of risk-sensitive controllers have long been established. We investigate risk-sensitive RL, as a generalization of risk-sensitive stochastic control, by theoretically analyzing the risk-sensitive exponential criterion (the exponential of the total reward) and the benefits and improvements that introducing risk-sensitivity brings to conventional RL. By treating exponential criteria as risk measures, we aim to enhance the reliability of the decision-making process. We study the exponential criterion to better understand its representation, the implications of its optimization, and the behavioral characteristics exhibited by an agent optimizing it, and we demonstrate the advantages of utilizing exponential criteria in the development of RL algorithms. We then shift our focus to developing algorithms that effectively leverage these criteria: first, risk-sensitive RL algorithms within the framework of Markov Decision Processes (MDPs); then, risk-sensitive algorithms developed within the Probabilistic Graphical Models (PGM) framework, whose connection with the MDP framework we examine. We proceed by exploring the effects of risk sensitivity on trust, collaboration, and cooperation in multi-agent systems. Finally, we investigate risk sensitivity and the robustness properties of risk-sensitive algorithms in decision-making and optimization domains beyond RL, focusing in particular on safe RL using risk-sensitive filters. Through this exploration, we seek to enhance the understanding and applicability of risk-sensitive approaches across domains.
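For reference, the risk-sensitive exponential criterion described in the abstract is commonly written as the following objective; the notation (risk parameter \beta, horizon T, per-step reward r_t) is a standard formulation given here for illustration and is not drawn from the dissertation text itself:

J_\beta(\pi) = \frac{1}{\beta} \log \mathbb{E}_\pi\!\left[ \exp\!\left( \beta \sum_{t=0}^{T} r_t \right) \right]

A Taylor expansion for small \beta gives J_\beta(\pi) \approx \mathbb{E}_\pi\!\left[\sum_t r_t\right] + \frac{\beta}{2}\,\mathrm{Var}_\pi\!\left[\sum_t r_t\right], so a negative \beta penalizes variability (risk-averse behavior), a positive \beta rewards it (risk-seeking behavior), and the limit \beta \to 0 recovers the conventional expected-total-reward objective.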
dc.identifier: https://doi.org/10.13016/dspace/thuh-y00n
dc.identifier.uri: http://hdl.handle.net/1903/30952
dc.language.iso: en
dc.subject.pqcontrolled: Electrical engineering
dc.subject.pqcontrolled: Artificial intelligence
dc.subject.pquncontrolled: Multi-Agent Systems
dc.subject.pquncontrolled: Risk-Sensitive Filters
dc.subject.pquncontrolled: Risk-Sensitive Reinforcement Learning (RL)
dc.subject.pquncontrolled: Robust Reinforcement Learning (RL)
dc.title: Robust Reinforcement Learning via Risk-Sensitivity
dc.type: Dissertation

Files

Original bundle

Name: Noorani_umd_0117E_23652.pdf
Size: 2.1 MB
Format: Adobe Portable Document Format
(RESTRICTED ACCESS)