Computer Science Theses and Dissertations
Permanent URI for this collection: http://hdl.handle.net/1903/2756
4 results

Item: Towards Effective and Inclusive AI: Aligning AI Systems with User Needs and Stakeholder Values Across Diverse Contexts (2024)
Cao, Yang; Daumé III, Hal; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

Inspired by the Turing test, a long line of research in AI has focused on technical improvements on tasks thought to require human-like comprehension. However, this focus has often produced models with impressive technical capabilities but uncertain real-world applicability. Despite the advances of large pre-trained models, we still see failures that disproportionately harm marginalized groups and failures that emerge when models are applied to specific tasks. A major cause is the detached model development process: these models are designed, developed, and evaluated with limited consideration of their users and stakeholders. My dissertation addresses this detachment by examining how artificial intelligence (AI) systems can be more effectively aligned with the needs of users and the values of stakeholders across diverse contexts. This work aims to close the gap between the current state of AI technology and its meaningful application in the lives of real-life stakeholders.

My thesis explores three key aspects of aligning AI systems with human needs and values: identifying sources of misalignment, addressing the needs of specific user groups, and ensuring value alignment across diverse stakeholders. First, I examine potential causes of misalignment in AI system development, focusing on gender biases in natural language processing (NLP) systems. I demonstrate that without careful consideration of real-life stakeholders, AI systems are prone to biases entering at each development stage. Second, I explore the alignment of AI systems for specific user groups by analyzing two real-life application contexts: a content moderation assistance system for volunteer moderators and a visual question answering (VQA) system for blind and visually impaired (BVI) individuals. In both contexts, I identify significant gaps in AI systems and provide directions for better alignment with users’ needs. Finally, I assess the alignment of AI systems with human values, focusing on stereotype issues within general large language models (LLMs). I propose a theory-grounded method for systematically evaluating stereotypical associations and exploring their impact on diverse user identities, including intersectional identity stereotypes and the leakage of stereotypes across cultures.

Through these investigations, this dissertation contributes to the growing field of human-centered AI by providing insights, methodologies, and recommendations for aligning AI systems with the needs and values of diverse stakeholders. By addressing the challenges of misalignment, user-specific needs, and value alignment, this work aims to foster the development of AI technologies that effectively collaborate with and empower users while promoting fairness, inclusivity, and positive social impact.

Item: Adversarial Robustness and Fairness in Deep Learning (2023)
Cherepanova, Valeriia; Goldstein, Tom; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

While deep learning has led to remarkable advancements across various domains, the widespread adoption of neural network models has brought forth significant challenges such as vulnerability to adversarial attacks and model unfairness. These challenges have profound implications for privacy, security, and societal impact, and they require thorough investigation and the development of effective mitigation strategies. In this work we address both of these challenges. We study the adversarial robustness of deep learning models and explore defense mechanisms against poisoning attacks. We also examine the sources of algorithmic bias and evaluate existing bias mitigation strategies in neural networks. Through this work, we aim to contribute to the understanding and enhancement of both the adversarial robustness and the fairness of deep learning systems.
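
To make concrete what an adversarial attack of the kind discussed in the abstract above looks like, here is a minimal fast-gradient-sign (FGSM-style) perturbation sketch in PyTorch. It is an illustration only, not code or a method from the dissertation; `model`, `x`, `y`, and the `epsilon` budget are placeholder assumptions.

```python
# Illustrative FGSM-style adversarial perturbation (not from the dissertation).
# Assumes `model` is a differentiable classifier returning logits and that
# inputs are scaled to [0, 1].
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return x shifted by epsilon in the gradient-sign direction that raises the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()   # one L-infinity-bounded step
    return x_adv.clamp(0.0, 1.0).detach()         # keep pixels in the valid range
```

An undefended model will often misclassify the perturbed input even though the change is visually negligible, which is the kind of vulnerability such dissertations study defenses against.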

Item: Social Aspects of Algorithms: Fairness, Diversity, and Resilience to Strategic Behavior (2021)
Ahmadi, Saba; Khuller, Samir; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

With algorithms becoming ubiquitous in our society, it is important to ensure that they are compatible with our social values. In this thesis, we study some of the social aspects of algorithms, including fairness, diversity, and resilience to the strategic behavior of individuals.

A lack of diversity can contribute to discrimination against marginalized groups. Motivated by this issue, in the first part of this thesis we study a notion of diversity in bipartite matching problems. Bipartite matching, where agents on one side of a market are matched to one or more agents or items on the other side, is a classical model used in myriad application areas such as healthcare, advertising, education, and general resource allocation. In particular, we consider an application of matching in which a firm wants to hire, i.e., match, workers for a number of teams. Each team has a demand that needs to be satisfied, and each worker has multiple features (e.g., country of origin, gender). We ask how to assign workers to teams efficiently, i.e., with a low-cost matching, while forming teams that are diverse with respect to all the features. Inspired by previous work, we balance whole-match diversity and economic efficiency by optimizing a supermodular function over the matching. In particular, we show that when the number of features is given as part of the input, this problem is NP-hard, and we design a pseudo-polynomial-time algorithm to solve it.

Next, we study fairness in optimization problems, focusing on two notions of fairness in correlation clustering. In correlation clustering, we are given an edge-weighted graph in which each edge, in addition to its weight, carries a positive or negative label. The goal is to cluster the vertices into an arbitrary number of clusters while minimizing disagreements, defined as the total weight of negative edges trapped inside clusters plus the total weight of positive edges between different clusters. In the first fairness notion, assuming each node has a color, i.e., a feature, our aim is to generate clusters with minimum disagreements in which the distribution of colors in each cluster matches the global distribution. We then turn to a min-max notion of fairness in correlation clustering: a cluster-wise objective that minimizes the maximum number of disagreements of any single cluster, so that the quality of every cluster is respected. We design approximation algorithms for both of these notions.

In the last part of this thesis, we consider the vulnerability of algorithms to manipulation and gaming. We study how to learn a linear classifier in the presence of strategic agents who desire to be classified as positive and can modify their positions by a limited amount, so that the classifier observes not an agent's true position but the position the agent pretends to occupy. We design algorithms with a bounded number of mistakes for several variations of this problem.
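
The disagreement objective described in the correlation-clustering part of the abstract above can be stated compactly in code. The sketch below is purely illustrative, showing only the objective being minimized, not the thesis's algorithms; the edge representation and cluster mapping are assumptions made for the example.

```python
# Correlation-clustering disagreements, as defined in the abstract above:
# negative edges trapped inside a cluster plus positive edges cut between clusters.
# Edges are (u, v, weight, label) tuples with label +1 or -1; `cluster` maps
# each vertex to a cluster id. Illustrative only, not code from the thesis.

def disagreements(edges, cluster):
    total = 0.0
    for u, v, weight, label in edges:
        same_cluster = cluster[u] == cluster[v]
        if label < 0 and same_cluster:        # negative edge inside a cluster
            total += weight
        elif label > 0 and not same_cluster:  # positive edge between clusters
            total += weight
    return total

# A single positive edge whose endpoints land in different clusters
# contributes its full weight to the objective.
print(disagreements([(0, 1, 2.0, +1)], {0: 0, 1: 1}))  # 2.0
```

The fairness variants described in the abstract reshape this same objective: the first requires each cluster's color distribution to match the global one, and the min-max variant replaces the global sum with the largest per-cluster disagreement.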

Item: Exploring Diversity and Fairness in Machine Learning (2020)
Schumann, Candice; Dickerson, John P; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)

With algorithms, artificial intelligence, and machine learning becoming ubiquitous in our society, we need to start thinking about the implications and ethical concerns of new machine learning models. Two types of bias that impact machine learning models are social injustice bias (bias created by society) and measurement bias (bias created by unbalanced sampling). Biases against groups of individuals found in machine learning models can be mitigated through the use of diversity and fairness constraints, and this dissertation introduces models that help humans make decisions by enforcing such constraints.

This work starts with a call to action. Bias is rife in hiring, and since algorithms are being used by multiple companies to filter applicants, we need to pay special attention to this application. Inspired by this hiring application, I introduce new multi-armed bandit frameworks that help allocate human resources in the hiring process while enforcing diversity through a submodular utility function. These frameworks increase diversity while using fewer resources than the original admission decisions of the Computer Science graduate program at the University of Maryland. Moving beyond hiring, I present a contextual multi-armed bandit algorithm that enforces group fairness by learning a societal bias term and correcting for it. This algorithm is tested on two real-world datasets and shows marked improvement over other in-use algorithms. Additionally, I examine fairness in traditional machine learning domain adaptation, provide the first theoretical analysis of this setting, and test the resulting model on two real-world datasets. Finally, I explore extensions to my core work, delving into suicidality, comprehension of fairness definitions, and student evaluations.
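
As a small illustration of the kind of submodular diversity utility mentioned in the abstract above, the sketch below scores a set of selected candidates by how many distinct features they cover and picks candidates greedily. The feature sets, coverage function, and greedy selection are toy assumptions for illustration; the dissertation's bandit frameworks and utility function may differ.

```python
# Toy submodular diversity utility and greedy selection (illustrative only;
# not the dissertation's method). Each candidate is described by a set of
# features, e.g. {"nlp", "fairness"}.

def coverage_utility(selected, candidate_features):
    """Number of distinct features covered by the selected candidates (submodular)."""
    covered = set()
    for c in selected:
        covered |= candidate_features[c]
    return len(covered)

def greedy_select(candidate_features, k):
    """Greedily pick k candidates with the largest marginal gain in coverage."""
    selected = []
    for _ in range(k):
        best = max(
            (c for c in candidate_features if c not in selected),
            key=lambda c: coverage_utility(selected + [c], candidate_features),
        )
        selected.append(best)
    return selected

candidates = {"a": {"nlp", "fairness"}, "b": {"nlp"}, "c": {"vision", "theory"}}
print(greedy_select(candidates, 2))  # ['a', 'c'], covering 4 distinct features
```

Coverage-style utilities like this have diminishing marginal returns, which is what makes greedy selection a natural heuristic: for monotone submodular objectives under a cardinality constraint it is provably near-optimal.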