Artificial Intelligence on the Edge: From Learning to Deployment

dc.contributor.advisor: Huang, Furong
dc.contributor.author: Bornstein, Marco
dc.contributor.department: Applied Mathematics and Scientific Computation
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2025-08-08T11:47:05Z
dc.date.issued: 2025
dc.description.abstract: As Artificial Intelligence (AI) has grown in use, so too have the computational, memory, and data resources required to train state-of-the-art AI models. AI models, and the data they are trained on, are traditionally stored centrally, in-house. However, data is increasingly collected by decentralized devices, such as smart sensors and mobile phones, at the edge of a network. Primarily due to privacy constraints imposed by impending laws and regulations, it is increasingly difficult to transfer user data from such edge devices to a central, in-house server. This emerging impediment has spurred interest and research into edge learning, a distributed and collaborative paradigm in which AI models are trained closer to, or on, the devices where data originates. While promising, edge learning faces obstacles that must be overcome before it attains widespread adoption. Part I of this dissertation addresses the first obstacle: the ability of participating devices, known as edge devices, to learn AI models. Edge devices lack access to the large memory and computational resources that are common in traditional, centralized settings. To solve these issues, Part I proposes: (i) an asynchronous decentralized edge-learning algorithm that removes the training-time dependence on the slowest device, improving training speed; (ii) a locality-sensitive hashing framework that allows devices, with the help of a server, to train a large-scale AI model without storing the entire model; and (iii) a compressed-sensing algorithm that shrinks the size of both training data and AI models, with minimal performance degradation, while improving training and inference runtime. Part II of this dissertation addresses the second obstacle: the inability to successfully deploy both fair collaborative edge-learning frameworks and safely trained, compliant AI models. Current edge-learning frameworks lack incentives. Thus, edge-learning participants may not always be incentivized to act as the frameworks' designers intended, resulting in unfair and unintended outcomes. To solve these issues, Part II proposes: (i) two separate mechanisms to eliminate device free riding, one to incentivize device truthfulness when reporting information to a central server and a second to incentivize greater training contribution than training alone would yield, and (ii) a rigorous framework for AI regulation that incentivizes devices both to deploy compliant models and to participate in the regulation process.
dc.identifier: https://doi.org/10.13016/jmpu-1nmm
dc.identifier.uri: http://hdl.handle.net/1903/34111
dc.language.iso: en
dc.subject.pqcontrolled: Artificial intelligence
dc.subject.pqcontrolled: Applied mathematics
dc.subject.pqcontrolled: Computer science
dc.subject.pquncontrolled: Asynchrony
dc.subject.pquncontrolled: Compression
dc.subject.pquncontrolled: Edge Learning
dc.subject.pquncontrolled: Free Riding
dc.subject.pquncontrolled: Mechanism Design
dc.subject.pquncontrolled: Regulation
dc.title: Artificial Intelligence on the Edge: From Learning to Deployment
dc.type: Dissertation

Files

Name: Bornstein_umd_0117E_24964.pdf
Size: 16.5 MB
Format: Adobe Portable Document Format