Assured Autonomy in Multiagent Systems with Safe Learning

dc.contributor.advisor: Baras, John S (en_US)
dc.contributor.author: Fiaz, Usman Amin (en_US)
dc.contributor.department: Electrical Engineering (en_US)
dc.contributor.publisher: Digital Repository at the University of Maryland (en_US)
dc.contributor.publisher: University of Maryland (College Park, Md.) (en_US)
dc.date.accessioned: 2022-09-27T05:48:03Z
dc.date.available: 2022-09-27T05:48:03Z
dc.date.issued: 2022 (en_US)
dc.description.abstract (en_US): Autonomous multiagent systems are an area currently receiving increasing attention in the robotics, control systems, machine learning (ML), and artificial intelligence (AI) communities. It is evident today how autonomous robots and vehicles can help shape our future. Teams of robots are already being used to locate and rescue survivors of natural disasters, for instance, where minutes and seconds can decide whether a person's life is saved. This example illustrates not only the value of safety but also the significance of time in planning complex missions with autonomous agents. This thesis develops a generic, composable framework for a multiagent system (of robots or vehicles) that can safely carry out time-critical missions in a distributed and autonomous fashion. The goal is to provide formal guarantees on both safety and finite-time mission completion in real time, and thus to answer the question: "how trustworthy is the autonomy of a multi-robot system in a complex mission?" We refer to this notion of autonomy in multiagent systems as assured or trusted autonomy, which is currently a highly sought-after area of research thanks to its numerous applications, in autonomous driving for instance. The thesis has two interconnected components. In the first part, using tools from control theory (optimal control), formal methods (temporal logic and hybrid automata), and optimization (mixed-integer programming), we propose multiple variants of (almost) real-time planning algorithms that provide formal guarantees on safety and finite-time mission completion for a multiagent system in a complex mission. Our proposed framework is hybrid, distributed, and inherently composable, as it uses a divide-and-conquer approach that breaks a complex mission down into several sub-tasks.
This approach enables us to implement the resulting algorithms on robots with limited computational power while still achieving close to real-time performance. We validate the efficacy of our methods on multiple use cases, such as autonomous search and rescue with a team of unmanned aerial vehicles (UAVs) and ground robots, autonomous aerial grasping and navigation, UAV-based surveillance, and UAV-based inspection tasks in industrial environments. In the second part, our goal is to translate and adapt these algorithms to safely learn actions and policies for robots in dynamic environments, so that they can accomplish their missions even in the presence of uncertainty. To this end, we introduce the ideas of self-monitoring and self-correction for agents, using hybrid automata theory and model predictive control (MPC). Self-monitoring and self-correction refer to the problems in autonomy where autonomous agents monitor their performance, detect deviations from normal or expected behavior, and learn to adjust both the description of their mission/task and their performance online, in order to maintain the expected behavior and performance. In this setting, we propose a formal and composable notion of safety and adaptation for autonomous multiagent systems, which we refer to as safe learning. We revisit one of the earlier use cases to demonstrate the capabilities of our approach for a team of autonomous UAVs in a combined surveillance and search-and-rescue mission scenario. Although the results in this thesis are presented mainly for UAVs, we argue that the proposed planning framework transfers to any team of autonomous agents under some realistic assumptions.
We hope that this research will serve several modern applications of public interest, such as autopilots and flight controllers, autonomous driving systems (ADS), and autonomous UAV missions such as aerial grasping and package delivery with drones, by improving upon the existing safety of their autonomous operation.
dc.identifier: https://doi.org/10.13016/rd8r-ik1n
dc.identifier.uri: http://hdl.handle.net/1903/29403
dc.language.iso: en (en_US)
dc.subject.pqcontrolled: Electrical engineering (en_US)
dc.subject.pqcontrolled: Robotics (en_US)
dc.subject.pquncontrolled: Autonomy (en_US)
dc.subject.pquncontrolled: Metric Temporal Logic (en_US)
dc.subject.pquncontrolled: Mission Planning (en_US)
dc.subject.pquncontrolled: Multi-Robot Systems (en_US)
dc.subject.pquncontrolled: Multiagent Systems (en_US)
dc.subject.pquncontrolled: Safety (en_US)
dc.title: Assured Autonomy in Multiagent Systems with Safe Learning (en_US)
dc.type: Dissertation (en_US)

Files

Original bundle

Name: Fiaz_umd_0117E_22860.pdf
Size: 18.39 MB
Format: Adobe Portable Document Format