Runtime Adaptation in Embedded Computing Systems using Markov Decision Processes

dc.contributor.advisor: Bhattacharyya, Shuvra S.
dc.contributor.author: Sapio, Adrian
dc.contributor.department: Electrical Engineering
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2020-02-01T06:39:55Z
dc.date.available: 2020-02-01T06:39:55Z
dc.date.issued: 2019
dc.description.abstract: During the design and implementation of embedded computing systems (ECSs), engineers must make assumptions about how the system will be used after it is built and deployed. Traditionally, these important decisions were made at design time for a fleet of ECSs prior to deployment. In contrast to this approach, this research explores and develops techniques that enable ECSs to adapt at runtime to the environments and applications in which they operate. Adaptation is enabled so that usage assumptions and performance optimization decisions can be made autonomously at runtime in the deployed system. This thesis uses Markov Decision Processes (MDPs), a powerful and well-established mathematical framework for decision making under uncertainty, to control computing systems at runtime. The resulting control is more dynamic, robust and adaptable than alternatives in many scenarios. The techniques developed in this thesis are first applied to a reconfigurable embedded digital signal processing system. In this effort, several challenges are encountered and resolved using novel approaches. Through extensive simulations and a prototype implementation, the robustness of the adaptation is demonstrated in comparison with the prior state of the art. The thesis continues by developing an efficient algorithm for converting MDP models into actionable control policies, a required step known as solving the MDP. The solver algorithm is developed in the context of ECSs that contain general-purpose embedded GPUs (graphics processing units). The novel solver algorithm, Sparse Parallel Value Iteration (SPVI), makes use of the parallel processing capabilities provided by such GPUs and exploits the sparsity that typically exists in MDPs when they are used to model and control ECSs. To extend the applicability of the runtime adaptation techniques to smaller and more strictly resource-constrained ECSs, another solver, Sparse Value Iteration (SVI), is developed for use on microcontrollers. The method is explored in a detailed case study involving a cellular (LTE-M) connected sensor that adapts to varying communications profiles. The case study reveals that the proposed adaptation framework outperforms a competing approach based on Reinforcement Learning (RL) in terms of robustness and adaptation, while consuming comparable resources. Finally, the thesis concludes by analyzing the various logistical challenges that arise when deploying MDPs on ECSs. In response to these challenges, the thesis contributes an open-source software package to the engineering community. The package contains libraries of MDP solvers, parsers, datasets and reference solutions, which together provide a comprehensive infrastructure for exploring the trade-offs among existing embedded MDP techniques and for experimenting with novel approaches.
dc.identifier: https://doi.org/10.13016/6bqg-pdxr
dc.identifier.uri: http://hdl.handle.net/1903/25442
dc.language.iso: en
dc.subject.pqcontrolled: Electrical engineering
dc.subject.pquncontrolled: adaptation
dc.subject.pquncontrolled: embedded
dc.subject.pquncontrolled: iot
dc.subject.pquncontrolled: learning
dc.subject.pquncontrolled: Markov
dc.subject.pquncontrolled: MDP
dc.title: Runtime Adaptation in Embedded Computing Systems using Markov Decision Processes
dc.type: Dissertation
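
Both SPVI and SVI, as described in the abstract above, are built around value iteration over sparsely populated MDP transition models. As a rough illustration only, the following Python sketch applies value iteration to a small, hypothetical sparse MDP using SciPy CSR matrices; it is not code from the dissertation or its accompanying software package, and the function name, model, and numbers are made-up assumptions.

    # Illustrative sketch: value iteration over a sparse MDP (not the
    # dissertation's SPVI/SVI code; the toy model below is hypothetical).
    import numpy as np
    from scipy.sparse import csr_matrix

    def sparse_value_iteration(P, R, gamma=0.95, tol=1e-6, max_iter=10000):
        """P: list of (S x S) sparse transition matrices, one per action.
        R: (A x S) array of expected immediate rewards R(s, a).
        Returns the converged value function V and a greedy policy."""
        num_actions, num_states = R.shape
        V = np.zeros(num_states)
        for _ in range(max_iter):
            # Bellman backup: Q[a, s] = R(s, a) + gamma * sum_s' P_a(s, s') * V(s')
            Q = np.vstack([R[a] + gamma * P[a].dot(V) for a in range(num_actions)])
            V_new = Q.max(axis=0)
            if np.max(np.abs(V_new - V)) < tol:
                V = V_new
                break
            V = V_new
        return V, Q.argmax(axis=0)

    # Toy 2-state, 2-action MDP with sparse transitions (made-up numbers).
    P = [csr_matrix([[0.9, 0.1], [0.0, 1.0]]),
         csr_matrix([[0.5, 0.5], [0.2, 0.8]])]
    R = np.array([[1.0, 0.0],
                  [0.0, 2.0]])
    V, policy = sparse_value_iteration(P, R)
    print("V =", V, "policy =", policy)

Storing each per-action transition matrix in CSR form means the backup P[a].dot(V) only visits nonzero transitions, which is the kind of sparsity the abstract refers to exploiting.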

Files

Original bundle:
Sapio_umd_0117E_20440.pdf (915.36 KB, Adobe Portable Document Format)