Expanding Robustness in Responsible AI for Novel Bias Mitigation

dc.contributor.advisor: Dickerson, John P
dc.contributor.author: Dooley, Samuel
dc.contributor.department: Computer Science
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2024-02-09T06:30:44Z
dc.date.available: 2024-02-09T06:30:44Z
dc.date.issued: 2023
dc.description.abstract: Conventional wisdom in the fairness community holds that one should first find the highest-performing model for a given problem and then apply a bias mitigation strategy: start from an existing architecture and hyperparameter setting, and adjust the model weights, learning procedure, or input data using a pre-, in-, or post-processing technique. While existing methods for de-biasing machine learning systems fix the neural architecture and hyperparameters, I instead ask a fundamental question that has received little attention: how much model bias arises from the architecture and hyperparameters themselves, and how can we exploit the extensive research in neural architecture search (NAS) and hyperparameter optimization (HPO) to search for inherently fairer models? Thinking of bias mitigation in this way expands our conceptualization of robustness in responsible AI. Robustness is an emerging aspect of responsible AI that focuses on maintaining model performance, for all subgroups of a data population, in the face of uncertainties and variations. It typically concerns protecting models from intentional or unintentional manipulations of data, handling noisy or corrupted inputs, and preserving accuracy in real-world scenarios; in other words, robustness as commonly defined examines a system's output under changes to its input data. I broaden this notion of robustness in responsible AI in a way that defines new fairness metrics, yields insights into the robustness of deployed AI systems, and proposes an entirely new bias mitigation strategy. This thesis explores the connection between robust machine learning and responsible AI. It introduces a fairness metric that quantifies disparities in susceptibility to adversarial attacks. It also audits face detection systems for robustness to common natural noise, revealing biases in these systems. Finally, it proposes using neural architecture search to find fairer architectures, challenging the conventional approach of starting with accurate architectures and applying bias mitigation afterwards. (An illustrative sketch of this search-based approach follows the record fields below.)
dc.identifier: https://doi.org/10.13016/dspace/rabv-1nym
dc.identifier.uri: http://hdl.handle.net/1903/31647
dc.language.iso: en
dc.subject.pqcontrolled: Computer science
dc.title: Expanding Robustness in Responsible AI for Novel Bias Mitigation
dc.type: Dissertation
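
To make the search-based idea in the abstract concrete, the following is a minimal illustrative sketch, not taken from the dissertation itself: a random hyperparameter search that scores each candidate model on a scalarized combination of test accuracy and a subgroup accuracy gap, rather than tuning for accuracy alone and de-biasing afterwards. The synthetic data, the MLP model family, the search space, and the trade-off weight are all assumptions made for this example.

# Illustrative sketch only: fairness-aware hyperparameter search
# (random search over an assumed space, scoring accuracy minus a
# subgroup accuracy gap instead of accuracy alone).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

rng = np.random.default_rng(0)

# Synthetic data with a binary "group" attribute (assumption for illustration).
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
group = rng.integers(0, 2, size=len(y))
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.25, random_state=0)

def accuracy_gap(model, X, y, g):
    """Absolute accuracy difference between the two subgroups."""
    acc = [model.score(X[g == k], y[g == k]) for k in (0, 1)]
    return abs(acc[0] - acc[1])

best = None
for _ in range(20):  # random search over width and learning rate (assumed space)
    hp = {"hidden_layer_sizes": (int(rng.integers(16, 256)),),
          "learning_rate_init": float(10 ** rng.uniform(-4, -2))}
    model = MLPClassifier(max_iter=200, random_state=0, **hp).fit(X_tr, y_tr)
    acc = model.score(X_te, y_te)
    gap = accuracy_gap(model, X_te, y_te, g_te)
    score = acc - 2.0 * gap  # scalarized accuracy/fairness trade-off (assumed weight)
    if best is None or score > best[0]:
        best = (score, hp, acc, gap)

print(f"best hyperparameters: {best[1]}, accuracy={best[2]:.3f}, gap={best[3]:.3f}")

In a full treatment this objective would typically be handled with multi-objective NAS/HPO rather than a fixed scalar weight; the sketch only shows how a fairness signal can enter the model-selection loop directly.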

Files

Original bundle

Name: Dooley_umd_0117E_23714.pdf
Size: 6.48 MB
Format: Adobe Portable Document Format