When News Headlines Go Wrong: An In-depth Analysis and AI-driven Intervention of Misleading News Headlines

Advisor

Hassan, Naeemul

Abstract

Misleading news headlines that distort, exaggerate, or omit information without presenting outright falsehoods pose a persistent challenge in the digital news ecosystem. Such headlines often arise from commercial and algorithmic pressures and exploit readers' limited attention and heuristic processing. Despite their widespread impact, misleading headlines have received limited in-depth investigation in both misinformation research and HCI. This dissertation investigates the issue through a multi-method, three-part inquiry: examining human perceptions and correction practices (Project 1), testing the behavioral effects of headline correction strategies (Project 2), and evaluating large language models' (LLMs) capacity to support editorial reasoning (Project 3).

Project 1 explores how two key stakeholder groups, journalists and news readers, perceive and respond to misleading headlines. Through semi-structured interviews with 12 journalists and 12 readers, the study identifies competing notions of responsibility, with journalists emphasizing audience literacy and readers expecting inherent trustworthiness. The analysis surfaces three key correction strategies that stakeholders independently employ: adding uncertainty cues, restoring critical context, and removing emotional framing. These findings reveal editorial tensions and motivate the need to assess how such strategies function when deployed at scale.

Project 2 builds on these qualitative insights through a between-subjects experiment with 399 participants, testing the effects of the three correction strategies on reader outcomes. The study evaluates six headline versions across engagement, credibility, and interpretive accuracy. Results show that corrections, particularly the removal of emotional language, can significantly enhance perceived credibility and interpretive accuracy without diminishing engagement. The findings challenge the presumed trade-off between truthfulness and reader interest and offer empirical grounding for ethical headline design in journalism and platform interventions.

Project 3 investigates how LLMs such as GPT and Gemini explain misleadingness in headlines under varying levels of annotator agreement. Using a stratified dataset of 60 headlines and explanations generated by three LLMs, the study engages six professional journalists to evaluate explanation quality along editorial dimensions, including correctness, ambiguity awareness, and risk sensitivity. While LLMs align well with human reasoning in high-consensus cases, they often falter in ambiguous ones, failing to surface interpretive complexity or journalistic reasoning. The analysis informs design directions for editorially aligned, expert-in-the-loop AI systems.

Together, these three projects advance a situated understanding of misleading headlines as a socio-technical problem and offer design-relevant implications for computational journalism, explainable AI, and platform governance. This dissertation highlights the need for editorial transparency, role-aware collaboration, and systems that support nuanced, context-sensitive decision-making.
