Theses and Dissertations from UMD
Permanent URI for this community: http://hdl.handle.net/1903/2
New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date, so there may be up to a four-month delay before a given thesis/dissertation appears in DRUM.
More information is available at Theses and Dissertations at University of Maryland Libraries.
Search Results (10 results)
Item Towards Effective and Inclusive AI: Aligning AI Systems with User Needs and Stakeholder Values Across Diverse Contexts (2024) Cao, Yang; Daumé III, Hal; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Inspired by the Turing test, a long line of research in AI has focused on technical improvements on tasks thought to require human-like comprehension. However, this focus has often resulted in models with impressive technical capabilities but uncertain real-world applicability. Despite the advances of large pre-trained models, we still see various failures affecting groups that face discrimination and arising when models are applied to specific applications. A major problem is the detached model development process: these models are designed, developed, and evaluated with limited consideration of their users and stakeholders. My dissertation addresses this detachment by examining how artificial intelligence (AI) systems can be more effectively aligned with the needs of users and the values of stakeholders across diverse contexts. This work aims to close the gap between the current state of AI technology and its meaningful application in the lives of real-life stakeholders. My thesis explores three key aspects of aligning AI systems with human needs and values: identifying sources of misalignment, addressing the needs of specific user groups, and ensuring value alignment across diverse stakeholders. First, I examine potential causes of misalignment in AI system development, focusing on gender biases in natural language processing (NLP) systems. I demonstrate that without careful consideration of real-life stakeholders, AI systems are prone to biases entering at each development stage.
Second, I explore the alignment of AI systems for specific user groups by analyzing two real-life application contexts: a content moderation assistance system for volunteer moderators and a visual question answering (VQA) system for blind and visually impaired (BVI) individuals. In both contexts, I identify significant gaps in AI systems and provide directions for better alignment with users’ needs. Finally, I assess the alignment of AI systems with human values, focusing on stereotype issues within general large language models (LLMs). I propose a theory-grounded method for systematically evaluating stereotypical associations and exploring their impact on diverse user identities, including intersectional identity stereotypes and the leakage of stereotypes across cultures. Through these investigations, this dissertation contributes to the growing field of human-centered AI by providing insights, methodologies, and recommendations for aligning AI systems with the needs and values of diverse stakeholders. By addressing the challenges of misalignment, user-specific needs, and value alignment, this work aims to foster the development of AI technologies that effectively collaborate with and empower users while promoting fairness, inclusivity, and positive social impact.

Item Adversarial Robustness and Fairness in Deep Learning (2023) Cherepanova, Valeriia; Goldstein, Tom; Applied Mathematics and Scientific Computation; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
While deep learning has led to remarkable advancements across various domains, the widespread adoption of neural network models has brought forth significant challenges such as vulnerability to adversarial attacks and model unfairness. These challenges have profound implications for privacy, security, and societal impact, requiring thorough investigation and the development of effective mitigation strategies. In this work, we address both of these challenges.
We study the adversarial robustness of deep learning models and explore defense mechanisms against poisoning attacks. We also explore the sources of algorithmic bias and evaluate existing bias mitigation strategies in neural networks. Through this work, we aim to contribute to the understanding and enhancement of both the adversarial robustness and the fairness of deep learning systems.

Item CHILDREN’S DISTRIBUTIVE JUSTICE AND FRIENDSHIP PREFERENCES IN GENDER STEREOTYPED CONTEXTS (2023) Sims, Riley N.; Killen, Melanie; Human Development; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
From an early age, children prefer fair and just treatment of others. Young children often reason about the importance of establishing equality between individuals and groups, with concerns for equity emerging by middle childhood. At the same time, children expect that individuals who counter gender stereotypic norms will face exclusion from the peer group, and give preferential treatment to gender ingroup members over gender outgroup members in resource allocation tasks. Denying individuals friendships, resources, or opportunities based on gender stereotypic expectations constitutes unfair treatment. Intergroup contact has been shown to reduce children’s prejudicial attitudes, but less research has investigated how intergroup contact with counter-stereotypic peers shapes children’s friendship preferences. Furthermore, research indicates that children rectify inequalities for historically marginalized racial/ethnic groups. Women have historically been marginalized and excluded within science, technology, engineering, and mathematics (STEM) fields. Though some research has investigated the extent to which children rectify inequalities between racial groups, less research has focused on how children rectify inequalities between gender groups in stereotypic contexts, such as those involving science inequalities.
The present dissertation contains three empirical papers that explore how gender stereotypic expectations shape children’s friendship preferences and distributive justice beliefs. Empirical Paper 1 explored how children’s own reported gender stereotypes and playmate experiences relate to their desires to play with peers who hold counter-stereotypic toy preferences. Empirical Paper 2 assessed children’s evaluations, resource allocation decisions, and reasoning in contexts involving inequalities of science supplies between groups of boys and girls. Empirical Paper 3 extended work from Empirical Paper 2 to investigate how children and young adults consider merit and gender group membership in science inequality contexts. Together, this body of work suggests that intergroup contact with counter-stereotypic peers can dampen the influence of gender stereotypes in shaping children’s friendship preferences, and that children and young adults maintain subtle pro-boy biases in their evaluations and decision-making regarding access to science resources between gender groups. Documenting the contextual factors that encourage children to resist gender stereotypic expectations and promote more equitable attitudes regarding rights to resources and opportunities can inform future research aimed at promoting inclusive orientations in childhood.

Item Comparing the Validity & Fairness of Machine Learning to Regression in Personnel Selection (2022) Epistola, Jordan J; Hanges, Paul J; Psychology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
In the realm of personnel selection, several researchers have claimed that machine learning (ML) can out-predict more conventional methods such as regression. However, high-profile misuses of ML in selection contexts have demonstrated that ML can also result in illegal discrimination and/or bias against minority groups when developed improperly.
This dissertation examined the utility of ML in personnel selection by comparing the validity and fairness of ML methods relative to regression. Studies One and Two predicted counterproductive work behavior in Hanges et al.’s (2021) sample of military cadets/midshipmen, and Study Three predicted job performance ratings of employees in Patalano & Huebner’s (2021) human resources dataset. Results revealed equivalent validity of ML to regression across all three studies. However, fairness was enhanced when ML was developed in accordance with employment law. Implications for the use of ML in personnel selection, as well as relevant legal considerations, are presented in my dissertation. Methods for further enhancing the legal defensibility of ML in selection are also discussed.

Item CHILDREN’S CONCEPTIONS OF FAIRNESS: THE ROLE OF MENTAL STATE UNDERSTANDING AND GROUP IDENTITY (2021) D'Esterre, Alexander; Killen, Melanie; Human Development; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Children’s everyday experiences occur against a backdrop that is rich in social information and which requires decisions involving considerations about fairness, intentionality, and social groups. With age, children improve in their ability to utilize intentional information in their judgments and have been shown to demonstrate preferences for fairness over group benefit. What has not been fully investigated is how children coordinate and weight these considerations at different ages. Moreover, mistaken intentions and a tendency to benefit the in-group over others can be seen even in adulthood, suggesting that these issues are not so easily overcome and have the potential to affect the evaluations and behaviors of individuals more than has previously been considered.
Research designed to carefully investigate the impact of these social and cognitive factors on children’s fairness concepts can provide insight into the ways in which biases may begin to form and potentially inform our understanding of the underlying mechanisms present in prejudicial attitudes. The present dissertation contains a series of three empirical papers designed to investigate children’s responses to unintentional and intentional transgressions based on their cognitive ability to infer the beliefs of others and their relationship to the group identity of the target. Empirical Study 1 demonstrated the value of using a morally relevant theory of mind measure embedded directly into the context when predicting children’s responses to unintentional and intentional transgressions. Empirical Study 2 investigated the ways in which children’s assessments of fair and unfair advantages were influenced by the group identity of the character who created the advantage. Empirical Study 3 explored the types of retributive justice that children would endorse in light of various types of intentional and unintentional transgressions, revealing differences based on group identity and on the impact that the retributive response would have on the functioning of the group. Together, the results of these studies suggest that children’s fairness concepts are heavily influenced by the context in which children find themselves and are far from static.
Better understanding the relationship between these factors will provide increased insight into the ways in which prejudice and bias may develop in childhood and suggest potential areas for intervention.

Item Social Aspects of Algorithms: Fairness, Diversity, and Resilience to Strategic Behavior (2021) Ahmadi, Saba; Khuller, Samir; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
With algorithms becoming ubiquitous in our society, it is important to ensure that they are compatible with our social values. In this thesis, we study some of the social aspects of algorithms, including fairness, diversity, and resilience to the strategic behavior of individuals. A lack of diversity can contribute to discrimination against marginalized groups. Inspired by this issue, in the first part of this thesis, we study a notion of diversity in bipartite matching problems. Bipartite matching, where agents on one side of a market are matched to one or more agents or items on the other side, is a classical model used in myriad application areas such as healthcare, advertising, education, and general resource allocation. In particular, we consider an application of matchings where a firm wants to hire, i.e., match, some workers for a number of teams. Each team has a demand that needs to be satisfied, and each worker has multiple features (e.g., country of origin, gender). We ask how to assign workers to teams in an efficient way, i.e., a low-cost matching, while forming diverse teams with respect to all the features. Inspired by previous work, we balance whole-match diversity and economic efficiency by optimizing a supermodular function over the matching. In particular, we show that when the number of features is given as part of the input, this problem is NP-hard, and we design a pseudo-polynomial-time algorithm to solve it. Next, we focus on studying fairness in optimization problems.
In particular, in this thesis, we study two notions of fairness in an optimization problem called correlation clustering. In correlation clustering, each edge of an edge-weighted graph carries, in addition to a weight, a positive or negative label. The goal is to obtain a clustering of the vertices into an arbitrary number of clusters that minimizes disagreements, defined as the total weight of negative edges trapped inside a cluster plus the sum of weights of positive edges between different clusters. In the first fairness notion, assuming each node has a color, i.e., a feature, our aim is to generate clusters with minimum disagreements in which the distribution of colors in each cluster matches the global distribution. Next, we switch our attention to a min-max notion of fairness in correlation clustering. This notion considers a cluster-wise objective function that asks to minimize the maximum number of disagreements of any single cluster; the goal is to respect the quality of each cluster. We focus on designing approximation algorithms for both of these notions. In the last part of this thesis, we take into consideration the vulnerability of algorithms to manipulation and gaming. We study the problem of learning a linear classifier in the presence of strategic agents that desire to be classified as positive and that are able to modify their position by a limited amount, so that the classifier observes not an agent's true position but the position the agent pretends to occupy.
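The correlation-clustering disagreement objective described above can be computed directly from its definition. The sketch below is illustrative only (the edge encoding as `(u, v, weight, sign)` tuples and the function name are assumptions, not the dissertation's implementation):

```python
def disagreements(edges, clustering):
    """Total correlation-clustering disagreements: the weight of negative
    edges trapped inside a cluster plus the weight of positive edges cut
    between different clusters. Edge format is an illustrative assumption."""
    total = 0.0
    for u, v, weight, sign in edges:
        same_cluster = clustering[u] == clustering[v]
        if sign < 0 and same_cluster:        # negative edge inside a cluster
            total += weight
        elif sign > 0 and not same_cluster:  # positive edge between clusters
            total += weight
    return total

# A small triangle: a-b and b-c are positive, a-c is negative.
edges = [("a", "b", 1.0, +1), ("b", "c", 1.0, +1), ("a", "c", 1.0, -1)]
print(disagreements(edges, {"a": 0, "b": 0, "c": 0}))  # 1.0: the negative edge a-c is inside
print(disagreements(edges, {"a": 0, "b": 1, "c": 1}))  # 1.0: the positive edge a-b is cut
```

The triangle illustrates why the problem is non-trivial: no clustering of these three vertices achieves zero disagreements, so the objective must trade off which edges to violate.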
We focus on designing algorithms with a bounded number of mistakes for a few different variations of this problem.

Item Exploring Diversity and Fairness in Machine Learning (2020) Schumann, Candice; Dickerson, John P; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
With algorithms, artificial intelligence, and machine learning becoming ubiquitous in our society, we need to start thinking about the implications and ethical concerns of new machine learning models. Two types of bias that impact machine learning models are social injustice bias (bias created by society) and measurement bias (bias created by unbalanced sampling). Biases against groups of individuals found in machine learning models can be mitigated through the use of diversity and fairness constraints. This dissertation introduces models that help humans make decisions by enforcing diversity and fairness constraints. This work starts with a call to action. Bias is rife in hiring, and since algorithms are being used by multiple companies to filter applicants, we need to pay special attention to this application. Inspired by this hiring application, I introduce new multi-armed bandit frameworks to help assign human resources in the hiring process while enforcing diversity through a submodular utility function. These frameworks increase diversity while using fewer resources than the original admission decisions of the Computer Science graduate program at the University of Maryland. Moving outside of hiring, I present a contextual multi-armed bandit algorithm that enforces group fairness by learning a societal bias term and correcting for it. This algorithm is tested on two real-world datasets and shows marked improvement over other in-use algorithms. Additionally, I examine fairness in traditional machine learning domain adaptation.
I provide the first theoretical analysis of this setting and test the resulting model on two real-world datasets. Finally, I explore extensions to my core work, delving into suicidality, comprehension of fairness definitions, and student evaluations.

Item Too Busy to Be Fair? The Effect of Managers’ Perceived Workload on Their Core Technical Performance and Justice Rule Adherence (2016) Sherf, Elad Netanel; Venkataramani, Vijaya; Business and Management: Management & Organization; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Despite the organizational benefits of treating employees fairly, both anecdotal and empirical evidence suggest that managers do not behave fairly towards their employees in a consistent manner. As treating employees fairly takes up personal resources such as time, effort, and attention, I argue that when managers face high workloads (i.e., high amounts of work and time pressure), they are unable to devote such personal resources to effectively meet both core technical task requirements and treat employees fairly. I propose that, in general, managers tend to view their core technical task performance as more important than being fair in their dealings with employees; as a result, when faced with high workloads, they tend to prioritize the former at the expense of the latter. I also propose that managerial fairness will suffer more as a result of heightened workloads than will core technical task performance, unless managers perceive their organization to explicitly reward fair treatment of employees. I find support for my hypotheses across three studies: two experimental studies (with online participants and students, respectively) and one field study of managers from a variety of organizations.
I discuss the implications of studying fairness in the wider context of managers’ complex role in organizations for the fairness and managerial work demands literatures.

Item Resource Allocation in Relay-based Satellite and Wireless Communication Networks (2008-11-24) Zeng, Hui; Baras, John S; Electrical Engineering; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
A two-level bandwidth allocation scheme is proposed for a slotted Time-Division Multiple Access high-data-rate relay satellite communication link to provide efficient and fair channel utilization. The long-term allocation is implemented to provide per-flow/per-user Quality-of-Service guarantees and shape the average behavior. The time-varying short-term allocation is determined by solving an optimal timeslot scheduling problem based on the requests and other parameters. Through extensive simulations, the performance of a suitable MAC protocol with two-level bandwidth allocation is analyzed and compared with that of the existing static fixed-assignment scheme in terms of end-to-end delay and successful throughput. It is also shown that pseudo-proportional fairness is achieved by our hybrid protocol. We study rate control systems with heterogeneous time-varying propagation delays, based on analytic fluid flow models composed of first-order delay-differential equations. Both single-flow and multi-flow system models are analyzed, with special attention paid to the Mitra-Seery algorithm. The stationary solutions are investigated. For the fluctuating solutions, their dynamic behavior is analyzed in detail, analytically and numerically, in terms of amplitude, transient behavior, fairness, and adaptability. The effects of heterogeneous time-varying delays are investigated in particular. It is shown that with proper parameter design the system can achieve stable behavior with close to pointwise proportional fairness among flows.
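For readers unfamiliar with the fairness criterion mentioned above: in its standard single-link form (not the dissertation's specific hybrid protocol), a proportionally fair allocation maximizes the weighted sum of log rates, sum_i w_i * log(x_i), subject to the capacity constraint sum_i x_i = C, which has the closed-form solution x_i = C * w_i / sum_j w_j. A minimal sketch, with illustrative function names and weights:

```python
import math

def proportionally_fair_split(capacity, weights):
    """Closed-form proportionally fair allocation on one shared link:
    maximizing sum_i w_i * log(x_i) subject to sum_i x_i = capacity
    yields x_i = capacity * w_i / sum(weights). Illustrative sketch only."""
    total = sum(weights)
    return [capacity * w / total for w in weights]

weights = [1.0, 1.0, 2.0]
alloc = proportionally_fair_split(10.0, weights)
print(alloc)  # [2.5, 2.5, 5.0]

# Sanity check: any other feasible allocation gives lower weighted log utility.
def utility(rates, weights):
    return sum(w * math.log(x) for x, w in zip(rates, weights))

assert utility(alloc, weights) >= utility([3.0, 2.0, 5.0], weights)
```

The log objective is what makes the criterion "proportional": at the optimum, no flow's rate can be increased without decreasing other flows' rates by a larger total fraction.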
Finally, we investigate resource allocation in 802.16j multi-hop relay systems with rate fairness constraints for two mutually exclusive options: transparent and non-transparent relay systems (T-RS and NT-RS). Single-Input Single-Output and Multi-Input Multi-Output antenna systems are considered for the links between the Base Station (BS) and Relay Stations (RS). Configurations with one and three RSs per sector are considered. The Mobile Station (MS) association rule, which determines the access station (BS or RS) for each MS, is also studied. Two rules, the Highest MCS scheme (selecting the highest modulation and coding rate) and the Highest (Mod) ESE scheme (selecting the highest (modified) effective spectrum efficiency), are studied along with the optimal rule that maximizes system capacity under rate fairness constraints. Our simulation results show that the highest capacity is always achieved by NT-RS with three RSs per sector in distributed scheduling mode, and that the Highest (Mod) ESE scheme performs close to the optimal rule in terms of system capacity.

Item ARE YOU IN OR OUT? A GROUP-LEVEL EXAMINATION OF THE EFFECTS OF LMX ON JUSTICE AND CUSTOMER SATISFACTION (2005-01-03) Mayer, David M.; Schneider, Benjamin; Psychology; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
Early work on leader-member exchange (LMX) theory suggested that leaders differentiating followers into in-groups and out-groups leads to superior group performance. However, research on LMX has almost exclusively studied individual outcomes as opposed to group outcomes. In addition, the notion of differentiation suggests that not all group members have high-quality relationships with their leaders, thereby violating rules surrounding experienced organizational justice.
Thus, the purpose of this dissertation is to conceptualize and study LMX at the level of analysis at which it was initially conceptualized (i.e., the work group level), and to examine the effects of LMX level (i.e., the mean of group members' LMX scores) and LMX strength (i.e., the variance in group members' LMX scores, or differentiation) on group performance (i.e., unit-level customer satisfaction) and group-level fairness perceptions (i.e., justice climates). Drawing on LMX, organizational justice, social comparison theory, and multilevel theory and research, I derived a number of testable hypotheses involving the relationships of LMX level and LMX strength to justice climates and group performance. There were three major sets of findings regarding (1) the effects of LMX level, (2) the effects of LMX differentiation (later called LMX strength), and (3) the moderating roles of task interdependence and group size in the LMX strength to justice climates relationships. First, LMX level was positively related to justice climates; however, the relationship between LMX level and customer satisfaction was not significant. Second, as predicted, LMX strength was negatively related to justice climates, but, incongruent with the differentiation (strength) hypothesis of LMX theory, there was not a significant relationship between LMX strength and customer satisfaction. Third, consistent with the hypothesis, task interdependence moderated the relationship between LMX strength and justice climates such that justice climates were more favorable when strength was high and task interdependence was high. Collectively, these results suggest that having variability (i.e., differentiation) in the quality of relationships in a work group may have negative effects on justice climates, particularly when individuals must work interdependently, but a negligible direct effect on group performance. Theoretical and practical implications are discussed.