From Demonstration to Dynamic Interaction: Enabling Long-Term Robotic Planning

dc.contributor.advisor: Shrivastava, Abhinav (en_US)
dc.contributor.author: Levy, Mara (en_US)
dc.contributor.department: Computer Science (en_US)
dc.contributor.publisher: Digital Repository at the University of Maryland (en_US)
dc.contributor.publisher: University of Maryland (College Park, Md.) (en_US)
dc.date.accessioned: 2025-09-15T05:41:10Z
dc.date.issued: 2025 (en_US)
dc.description.abstract: Robotic learning has seen rapid growth over the past decade, driven by advances in machine learning that have brought real-world deployment of robots closer to reality. Research in this area primarily falls into two categories: reinforcement learning and imitation learning. Despite their promise, both approaches face significant challenges, including limited data availability and the difficulty of obtaining accurate state representations. This thesis explores how we can advance these methods to enable robust performance in real-world, unstructured environments. We begin by exploring how to redefine state representation, presenting two complementary approaches. The first focuses on human state representation but is easily extendable to robots. It significantly outperforms existing methods in generalizing to unseen states and varying camera viewpoints. The second approach introduces a more concise, keypoint-based representation. We show that this method enables training of robot policies with minimal demonstrations and generalizes effectively to new environments and objects of varying shapes and sizes. Next, we turn to the problem of learning policies from a single demonstration, without relying on handcrafted reward functions. Remarkably, our method achieves comparable final performance to existing approaches while using 100× less data. Finally, we demonstrate how these methods can be deployed in dynamic environments, even when trained under static conditions. By layering a lightweight planner on top of a pretrained policy, we achieve substantial improvements over naïve replanning strategies, approaching oracle-level success rates. (en_US)
dc.identifier: https://doi.org/10.13016/o2a5-87i2
dc.identifier.uri: http://hdl.handle.net/1903/34675
dc.language.iso: en (en_US)
dc.subject.pqcontrolled: Computer science (en_US)
dc.subject.pqcontrolled: Robotics (en_US)
dc.subject.pquncontrolled: Computer Vision (en_US)
dc.subject.pquncontrolled: Imitation Learning (en_US)
dc.subject.pquncontrolled: Reinforcement Learning (en_US)
dc.subject.pquncontrolled: Robotic Planning (en_US)
dc.title: From Demonstration to Dynamic Interaction: Enabling Long-Term Robotic Planning (en_US)
dc.type: Dissertation (en_US)

Files

Original bundle

Name: Levy_umd_0117E_25510.pdf
Size: 21.02 MB
Format: Adobe Portable Document Format