Acting, Planning, and Learning Using Hierarchical Operational Models
dc.contributor.advisor | Nau, Dana | en_US |
dc.contributor.author | Patra, Sunandita | en_US |
dc.contributor.department | Computer Science | en_US |
dc.contributor.publisher | Digital Repository at the University of Maryland | en_US |
dc.contributor.publisher | University of Maryland (College Park, Md.) | en_US |
dc.date.accessioned | 2020-09-25T05:36:24Z |
dc.date.available | 2020-09-25T05:36:24Z |
dc.date.issued | 2020 | en_US |
dc.description.abstract | The most common representation formalisms for planning are descriptive models, which abstractly describe what actions do and are tailored for efficiently computing the next state(s) in a state-transition system. Real-world acting, however, requires operational models that describe how to do things, with rich control structures for closed-loop online decision-making in a dynamic environment. Using one action model for planning and a different one for acting makes it difficult to combine the two, in particular when developing the models and verifying their mutual consistency. As an alternative, this dissertation defines and implements an integrated acting-and-planning system in which both planning and acting use the same operational models, written in a general-purpose hierarchical task-oriented language that offers rich control structures. The acting component, called Refinement Acting Engine (RAE), is inspired by the well-known PRS system, except that instead of being purely reactive, it can get advice from a planner. The dissertation also describes three planning algorithms that plan by performing Monte Carlo rollouts in the space of operational models. The best of the three, Plan-with-UPOM, uses a UCT-like Monte Carlo Tree Search procedure called UPOM (UCT Procedure for Operational Models), whose rollouts are simulated executions of the actor's operational models (an illustrative sketch of such a rollout loop follows this record). The dissertation also presents learning strategies for RAE and UPOM that use online acting experiences and/or simulated planning results to learn both a mapping from decision contexts to method instances and a heuristic function to guide UPOM. Experimental results show that Plan-with-UPOM and the learning strategies significantly improve RAE's acting efficiency and robustness. UPOM's asymptotic convergence can be proved by mapping its search space to an MDP. Finally, the dissertation describes a real-world prototype of RAE and Plan-with-UPOM for defending software-defined networks, a relatively new network-management architecture, against incoming attacks. | en_US |
dc.identifier | https://doi.org/10.13016/dhn4-gxki |
dc.identifier.uri | http://hdl.handle.net/1903/26448 | |
dc.language.iso | en | en_US |
dc.subject.pqcontrolled | Artificial intelligence | en_US |
dc.subject.pquncontrolled | Acting and planning | en_US |
dc.subject.pquncontrolled | dynamic environments | en_US |
dc.subject.pquncontrolled | hierarchical operational models | en_US |
dc.subject.pquncontrolled | online planning | en_US |
dc.subject.pquncontrolled | planning and learning | en_US |
dc.subject.pquncontrolled | supervised learning | en_US |
dc.title | Acting, Planning, and Learning Using Hierarchical Operational Models | en_US |
dc.type | Dissertation | en_US |
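To make the planner's core idea concrete, below is a minimal sketch of a UCT-style selection loop of the kind the abstract attributes to UPOM: Monte Carlo rollouts that simulate executing candidate refinement methods for a task, with UCB1 balancing exploration against exploitation. The method callables, reward convention, and parameters here are illustrative assumptions, not the dissertation's actual implementation.

```python
import math
import random

def upom_choose(state, task, methods, n_rollouts=200, c=math.sqrt(2), depth=5):
    """UCT-like choice among candidate refinement methods for `task`.

    Each rollout calls one method to obtain a simulated-execution reward
    (an efficiency estimate); UCB1 trades off exploiting high-reward
    methods against exploring under-sampled ones.
    """
    visits = {m: 0 for m in methods}
    total_reward = {m: 0.0 for m in methods}

    for n in range(1, n_rollouts + 1):
        untried = [m for m in methods if visits[m] == 0]
        if untried:
            chosen = random.choice(untried)
        else:
            # UCB1: mean observed reward plus an exploration bonus.
            chosen = max(methods, key=lambda m: total_reward[m] / visits[m]
                         + c * math.sqrt(math.log(n) / visits[m]))
        reward = chosen(state, task, depth)  # one simulated-execution rollout
        visits[chosen] += 1
        total_reward[chosen] += reward

    # Return the method with the best mean reward over all rollouts.
    return max(methods, key=lambda m: total_reward[m] / max(visits[m], 1))

# Toy usage with two hypothetical method instances for a "move" task,
# each returning a noisy efficiency estimate (higher is better).
if __name__ == "__main__":
    fast_but_risky = lambda s, t, d: random.choice([0.0, 1.0])  # succeeds half the time
    slow_but_safe = lambda s, t, d: 0.6                         # always moderate
    best = upom_choose(None, "move", [fast_but_risky, slow_but_safe])
    print("chosen:", "fast_but_risky" if best is fast_but_risky else "slow_but_safe")
```

In the full system described by the abstract, each rollout would recursively refine subtasks using the same operational models the actor executes, which is what allows planning and acting to share a single model.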