Algorithmic Decision-making and Model Evaluation in Socially Consequential Domains

dc.contributor.advisor: Dickerson, John P.
dc.contributor.author: Herlihy, Christine Robie
dc.contributor.department: Computer Science
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2025-01-25T06:33:54Z
dc.date.available: 2025-01-25T06:33:54Z
dc.date.issued: 2024
dc.description.abstract: Algorithms are increasingly used to create markets, discover and disseminate information, incentivize behaviors, and inform real-world decision-making in a variety of socially consequential domains. In such settings, algorithms have the potential to improve aggregate utility by leveraging previously acquired knowledge, reducing transaction costs, and facilitating the efficient allocation of resources, broadly construed. However, ensuring that the distribution over outcomes induced by algorithmic decision-making renders the broader system sustainable---i.e., by preserving rationality of participation for a diverse set of stakeholders, and identifying and mitigating the costs associated with unevenly distributed harms---remains challenging. One set of challenges arises during algorithm or model development: here, we must decide how to operationalize sociotechnical constructs of interest, induce prosocial behavior, balance uncertainty-reducing exploration and reward-maximizing exploitation, and incorporate domain-specific preferences and constraints. Common desiderata such as individual or subgroup fairness, cooperation, or risk mitigation often resist uncontested analytic expression, induce combinatorial relations, or are at odds with unconstrained optimization objectives, and must be carefully incorporated or approximated so as to preserve utility and tractability. Another set of challenges arises during model evaluation: here, we must contend with small sample sizes and high variance when estimating performance for intersectional subgroups of interest, and determine whether observed performance on domain-specific reasoning tasks may be upwardly biased due to annotation artifacts or data contamination.

In this thesis, we propose algorithms and evaluation methods to address these challenges, and show how our methods can be applied to improve algorithmic acceptability and decision-making in the face of uncertainty in public health and conversational recommendation systems. Our core contributions include: (1) novel resource allocation algorithms that incorporate prosocial constraints while preserving utility in the restless bandit setting; (2) model evaluation techniques to inform harms identification and mitigation efforts; and (3) prompt-based interventions and meta-policy learning strategies to improve expected utility by encouraging context-aware uncertainty reduction in large language model (LLM)-based recommendation systems.
dc.identifier: https://doi.org/10.13016/rzkq-aplt
dc.identifier.uri: http://hdl.handle.net/1903/33579
dc.language.iso: en
dc.subject.pqcontrolled: Computer science
dc.subject.pqcontrolled: Artificial intelligence
dc.subject.pquncontrolled: Algorithmic fairness
dc.subject.pquncontrolled: Model evaluation
dc.subject.pquncontrolled: Multi-objective optimization
dc.subject.pquncontrolled: Sequential decision-making
dc.title: Algorithmic Decision-making and Model Evaluation in Socially Consequential Domains
dc.type: Dissertation

Files

Original bundle

Name: Herlihy_umd_0117E_24665.pdf
Size: 2.15 MB
Format: Adobe Portable Document Format