Artificial Intelligence on the Edge: From Learning to Deployment
Abstract
As Artificial Intelligence (AI) has grown in use, so too have the computational, memory, and data resources required to train state-of-the-art AI models. AI models, and the data they are trained on, have traditionally been stored centrally, in-house. However, data is increasingly collected by decentralized devices, such as smart sensors and mobile phones, that sit at the edge of a network. Primarily due to privacy constraints, driven by impending laws and regulations, it is increasingly difficult to transfer user data from such edge devices to a central, in-house server.
This emerging impediment has spurred interest and research into edge learning, a distributed and collaborative paradigm in which AI models are trained closer to, or directly on, the devices where data originates. While promising, edge learning faces obstacles that must be overcome before it can attain widespread adoption.
Part I of this dissertation addresses the first obstacle: the limited ability of participating devices, known as edge devices, to train AI models. Edge devices lack the large memory and computational resources that are common in traditional, centralized settings. To address these limitations, Part I proposes: (i) an asynchronous decentralized edge-learning algorithm that removes the training-time dependence on the slowest device, improving training speed; (ii) a locality-sensitive hashing framework that allows devices, with the help of a server, to train a large-scale AI model without storing the entire model; and (iii) a compressed-sensing algorithm that shrinks the size of both training data and AI models, with minimal performance degradation, while improving training and inference runtime.
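To make the compressed-sensing idea in (iii) concrete, the following is a minimal, illustrative Python sketch, not the dissertation's algorithm: a sparse weight vector is compressed with a random Gaussian measurement matrix and later recovered with orthogonal matching pursuit. All names and dimensions (d, m, k, omp) are assumptions made for this example.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a k-sparse "weight vector" of ambient dimension d,
# compressed down to m random linear measurements (m << d).
d, m, k = 1024, 256, 20
w = np.zeros(d)
w[rng.choice(d, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, d)) / np.sqrt(m)   # random measurement matrix
y = A @ w                                      # compressed representation to store or transmit

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy sparse recovery of w from y = A @ w."""
    residual, support = y.copy(), []
    x_s = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Re-fit the coefficients on the selected support by least squares.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    w_hat = np.zeros(A.shape[1])
    w_hat[support] = x_s
    return w_hat

w_hat = omp(A, y, k)
print("relative recovery error:", np.linalg.norm(w - w_hat) / np.linalg.norm(w))

Because only the m-dimensional measurement y needs to be stored or transmitted, memory and communication costs shrink; this is the general property that motivates compressing data and models in edge settings.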
Part II of this dissertation addresses the second obstacle: the difficulty of deploying both fair collaborative edge-learning frameworks and safely trained, compliant AI models. Current edge-learning frameworks lack incentives, so participants may not act in accordance with how designers intended the frameworks to function, resulting in unfair and unintended outcomes. To address these issues, Part II proposes: (i) two separate mechanisms to eliminate device free-riding, one that incentivizes devices to report information truthfully to a central server and a second that incentivizes a greater training contribution than devices would make by training alone; and (ii) a rigorous framework for AI regulation that incentivizes devices to both deploy compliant models and participate in the regulation process.