Trust through Transparency: Towards Reliable AI for All
| dc.contributor.advisor | Feizi, Soheil | en_US |
| dc.contributor.author | Moayeri, Mazda | en_US |
| dc.contributor.department | Computer Science | en_US |
| dc.contributor.publisher | Digital Repository at the University of Maryland | en_US |
| dc.contributor.publisher | University of Maryland (College Park, Md.) | en_US |
| dc.date.accessioned | 2025-08-08T11:43:57Z | |
| dc.date.issued | 2025 | en_US |
| dc.description.abstract | Seemingly performant models can break down in unexpected and uneven ways, from image classifiers failing to recognize an otter out of water, to LLMs being nearly 3 times worse at recalling facts about Somalia than Sweden. In this dissertation, I’ll detail interpretability techniques to scalably illuminate and efficiently intervene on discovered model deficiencies. First, I’ll present evidence for pervasive reliance on spurious correlations by vision models, by way of carefully constructed benchmarks. Then, I’ll automate these approaches, demonstrating the power of leveraging auxiliary models to more efficiently organize data, towards uncovering and articulating subsets where models struggle. Finally, I’ll show how these same techniques can be applied to mitigate instances of real-world geographic disparities and even tackle sociotechnical challenges like artistic copyright infringement. In general, it can be difficult to trust what we do not fully understand, especially when unexpected failures arise. By scalably identifying failure modes before they cause harm, we enhance transparency around model abilities and limitations, thus better informing when models can be trusted to work reliably for all. | en_US |
| dc.identifier | https://doi.org/10.13016/awz7-oxld | |
| dc.identifier.uri | http://hdl.handle.net/1903/34092 | |
| dc.language.iso | en | en_US |
| dc.subject.pqcontrolled | Artificial intelligence | en_US |
| dc.subject.pqcontrolled | Computer science | en_US |
| dc.subject.pquncontrolled | computer vision | en_US |
| dc.subject.pquncontrolled | deep learning | en_US |
| dc.subject.pquncontrolled | fairness | en_US |
| dc.subject.pquncontrolled | robustness | en_US |
| dc.subject.pquncontrolled | spurious correlations | en_US |
| dc.subject.pquncontrolled | vision language models | en_US |
| dc.title | Trust through Transparency: Towards Reliable AI for All | en_US |
| dc.type | Dissertation | en_US |
Files
Original bundle
- Name: Moayeri_umd_0117E_24939.pdf
- Size: 49.77 MB
- Format: Adobe Portable Document Format