REDUCING CONSPIRATORIAL BELIEF IN 2020 ELECTION FRAUD USING CHATGPT
Abstract
This study examined whether a single, three-round ChatGPT conversation could weaken belief in a highly politicized conspiracy theory. Twenty-five adults were randomly assigned, in a 60/40 split, to one of two active conditions delivered through the TruthTalk web platform. In the Conspiracy condition (n = 15), the dialogue respectfully challenged claims of widespread fraud in the 2020 U.S. presidential election; in the Comparison condition (n = 10), the same interaction structure invited participants to reconsider their opinion about the best musical genre. Pre- and post-surveys assessed confidence (certainty) in the target belief, rated belief strength, openness to counterevidence, and trust in AI. Descriptive change scores (post minus pre) had medians of zero and narrow interquartile ranges for every outcome in both conditions, with only slightly greater dispersion in confidence among Conspiracy-condition participants. In short, most participants finished the study holding views indistinguishable from those they began with, regardless of topic. These findings are consistent with Pierre's socio-epistemic model and Petty et al.'s work on attitude strength, indicating that brief factual rebuttals, even when personalized and civil, rarely dislodge beliefs rooted in epistemic mistrust or anchored by high certainty, moral conviction, or partisan identity. The study also exposed methodological hurdles specific to large-language-model interventions: prompt drift, unsupported claims, and opaque system behavior made it difficult to deliver a uniform treatment and to earn participants' trust. Future research should test multi-session, transparently sourced dialogues that directly address the moral, identity, and certainty foundations of strong attitudes before expecting meaningful belief change.
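To make the descriptive analysis concrete, the following is a minimal sketch of how change scores and their per-condition medians and interquartile ranges might be computed. It assumes Python with pandas; the column names (condition, pre_confidence, post_confidence) and the example values are hypothetical, not the study's actual data or schema.

    import pandas as pd

    # Hypothetical example data; the real study's variables and values differ.
    df = pd.DataFrame({
        "condition":       ["conspiracy", "conspiracy", "conspiracy", "comparison", "comparison"],
        "pre_confidence":  [6, 7, 5, 4, 6],
        "post_confidence": [6, 6, 5, 4, 6],
    })

    # Change score: post minus pre, as described in the abstract.
    df["confidence_change"] = df["post_confidence"] - df["pre_confidence"]

    # Median and interquartile range of the change scores, per condition.
    summary = df.groupby("condition")["confidence_change"].agg(
        median="median",
        iqr=lambda s: s.quantile(0.75) - s.quantile(0.25),
    )
    print(summary)

With data like the abstract describes, this summary would show medians of zero and narrow IQRs in both conditions.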