Towards Robust and Adaptable Real-World Reinforcement Learning
Abstract
The past decade has witnessed rapid development of reinforcement learning (RL) techniques. However, a gap remains between training RL agents in simulators and deploying RL models on challenging and diverse real-world systems. On the one hand, existing RL approaches have been shown to be fragile under perturbations in the environment, making it risky to deploy RL models in real-world applications where unexpected noise and interference exist. On the other hand, most RL methods focus on learning a policy in a fixed environment, and the policy must be re-trained whenever the environment changes. For real-world environments whose agent specifications and dynamics are ever-changing, these methods become less practical, as they require large amounts of data and computation to adapt to each changed environment.
We focus on the above two challenges and introduce multiple solutions to improve the robustness and adaptability of RL methods. For robustness, we propose a series of approaches that define, explore, and mitigate the vulnerability of RL agents from different perspectives, achieving state-of-the-art performance in robustifying RL policies. For adaptability, we present transfer learning and pretraining frameworks that address challenging multi-task learning problems which are important yet rarely studied, contributing to the application of RL techniques in a broader range of real-life scenarios.