AI Chatbots Can Actually Weaken Belief in Conspiracy Theories
'An eight-minute conversation with a GPT-4-based chatbot reduced belief in conspiracy theories for many participants, with effects lasting at least two months'
How a chatbot changed minds
Many assume facts alone cannot shift deeply held conspiracy beliefs. Recent research shows that tailored, conversational delivery of accurate information can produce measurable declines in conspiratorial conviction. In a study published in Science, researchers tested a chatbot named DebunkBot, built on GPT-4 Turbo, to see whether an eight-minute dialogue could reduce belief in conspiracy theories.
The experiment and results
Over 2,000 participants who endorsed various conspiracy theories described the theory they believed and the evidence that convinced them. The chatbot then engaged each person in a three-round back-and-forth conversation, averaging 8.4 minutes. After the interaction, participants showed an average 20% drop in confidence in their conspiracy beliefs, and about one in four went from believing a conspiracy to not believing it at all. The effect held across classic conspiracy theories, such as those about the JFK assassination or the moon landing, as well as contemporary politically charged claims tied to events like the 2020 election or COVID-19.
What the chatbot did differently
DebunkBot provided timely, clear, and evidence-based explanations tailored to each person's specific belief. For example, when users advancing 9/11 conspiracy arguments claimed that jet fuel cannot melt steel, the chatbot replied that while jet fuel may not melt steel outright, it can reduce steel's strength by over 50%, according to the American Institute of Steel Construction, which is enough to cause structural collapse. These focused factual corrections spared users from needing esoteric technical knowledge of their own.
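To make the setup concrete, here is a minimal sketch of how a tailored, multi-round debunking dialogue could be orchestrated with a general-purpose chat API. This is not the study's actual code: the system prompt, model name, and three-round loop are illustrative assumptions based on the design described above.

```python
# Minimal sketch of a tailored debunking dialogue (illustrative, not the study's implementation).
# Assumes the OpenAI Python client (>= 1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a respectful debunking assistant. The user believes the conspiracy "
    "theory described below and has explained the evidence that convinced them. "
    "Respond with accurate, specific, well-sourced counter-evidence that addresses "
    "their exact arguments."
)

def debunking_dialogue(belief_summary: str, get_user_reply, rounds: int = 3) -> list[dict]:
    """Run a short back-and-forth in which the model rebuts the user's stated belief."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": belief_summary},
    ]
    for _ in range(rounds):
        response = client.chat.completions.create(
            model="gpt-4-turbo",  # hypothetical choice; any capable chat model would do
            messages=messages,
        )
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        print(reply)
        # get_user_reply supplies the participant's next turn, e.g. input() in a console demo
        messages.append({"role": "user", "content": get_user_reply(reply)})
    return messages
```

The key design point mirrored here is that the model sees the participant's own statement of the belief and supporting evidence, so each rebuttal is targeted rather than generic.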
Durability and mechanisms
Remarkably, the reductions in belief persisted. A two-month follow-up showed about the same reduction in conspiracy belief as immediately after the conversations. Follow-up experiments showed that the effect depended on facts and evidence: telling the model to persuade without using facts eliminated the effect, whereas transparency about the chatbot's purpose did not reduce efficacy. The study suggests many conspiracy believers are misinformed rather than irrational, and that clear factual explanations can shift their views.
AI versus human debunkers
When participants were told they were talking to an expert rather than an AI, the debunking effect was equally strong. This implies the effect is not specific to AI per se, but reflects the accessibility and efficiency that generative AI provides. Humans could achieve similar results but would need substantial time and expertise. Generative models can do the cognitive labor of sourcing facts and constructing rebuttals at scale and in real time.
Accuracy and limits
Although language models can hallucinate, the study found high factual accuracy in this context: a professional fact-checker rated over 99% of the GPT-4 claims as true and found no evidence of political bias. When participants cited conspiracies that turned out to be real, such as MKUltra, the chatbot confirmed those beliefs rather than incorrectly debunking them.
Potential applications
Debunking bots could be deployed on social platforms to engage with users who share conspiratorial content, linked to search engines to answer conspiracy-related queries, or used in personal settings to supplement difficult conversations. These interventions could complement preventive measures by actively pulling people back from conspiratorial thinking.
Broader implications for public discourse
The findings contribute to a growing body of research showing that facts and evidence retain persuasive power. Earlier worries about a widespread backfire effect appear overstated: corrections and evidence often reduce belief and sharing of falsehoods. If accurate information can be disseminated widely enough, potentially with AI assistance, it may help rebuild shared factual ground necessary for democratic debate.
You can try the debunking bot at debunkbot.com.
Thomas Costello, Gordon Pennycook, and David Rand are researchers whose combined work explores how analytic reasoning, human-AI dialogue, and evidence-based interventions can correct inaccurate beliefs and reduce polarization.