Towards Effective and Inclusive AI: Aligning AI Systems with User Needs and Stakeholder Values Across Diverse Contexts
dc.contributor.advisor | Daumé III, Hal | en_US |
dc.contributor.author | Cao, Yang | en_US |
dc.contributor.department | Computer Science | en_US |
dc.contributor.publisher | Digital Repository at the University of Maryland | en_US |
dc.contributor.publisher | University of Maryland (College Park, Md.) | en_US |
dc.date.accessioned | 2024-06-29T05:57:52Z | |
dc.date.available | 2024-06-29T05:57:52Z | |
dc.date.issued | 2024 | en_US |
dc.description.abstract | Inspired by the Turing test, a long line of AI research has focused on technical improvements on tasks thought to require human-like comprehension. However, this focus has often produced models with impressive technical capabilities but uncertain real-world applicability. Despite advances in large pre-trained models, we still see failures that harm marginalized groups and that surface when these models are applied to specific applications. A major problem is the detached model development process: these models are designed, developed, and evaluated with limited consideration of their users and stakeholders. My dissertation addresses this detachment by examining how artificial intelligence (AI) systems can be more effectively aligned with the needs of users and the values of stakeholders across diverse contexts. This work aims to close the gap between the current state of AI technology and its meaningful application in the lives of real-life stakeholders. My thesis explores three key aspects of aligning AI systems with human needs and values: identifying sources of misalignment, addressing the needs of specific user groups, and ensuring value alignment across diverse stakeholders. First, I examine potential causes of misalignment in AI system development, focusing on gender biases in natural language processing (NLP) systems. I demonstrate that without careful consideration of real-life stakeholders, AI systems are prone to biases entering at each development stage. Second, I explore the alignment of AI systems for specific user groups by analyzing two real-life application contexts: a content moderation assistance system for volunteer moderators and a visual question answering (VQA) system for blind and visually impaired (BVI) individuals. In both contexts, I identify significant gaps in AI systems and provide directions for better alignment with users’ needs. Finally, I assess the alignment of AI systems with human values, focusing on stereotype issues within general large language models (LLMs). I propose a theory-grounded method for systematically evaluating stereotypical associations and exploring their impact on diverse user identities, including intersectional identity stereotypes and the leakage of stereotypes across cultures. Through these investigations, this dissertation contributes to the growing field of human-centered AI by providing insights, methodologies, and recommendations for aligning AI systems with the needs and values of diverse stakeholders. By addressing the challenges of misalignment, user-specific needs, and value alignment, this work aims to foster the development of AI technologies that effectively collaborate with and empower users while promoting fairness, inclusivity, and positive social impact. | en_US |
dc.identifier | https://doi.org/10.13016/avlx-lloi | |
dc.identifier.uri | http://hdl.handle.net/1903/32924 | |
dc.language.iso | en | en_US |
dc.subject.pqcontrolled | Computer science | en_US |
dc.subject.pquncontrolled | AI for Accessibility | en_US |
dc.subject.pquncontrolled | Alignment | en_US |
dc.subject.pquncontrolled | Content Moderation | en_US |
dc.subject.pquncontrolled | Fairness | en_US |
dc.subject.pquncontrolled | Human-Centered AI | en_US |
dc.title | Towards Effective and Inclusive AI: Aligning AI Systems with User Needs and Stakeholder Values Across Diverse Contexts | en_US |
dc.type | Dissertation | en_US |