AI IN THE AGE OF POST-TRUTH: A Chatbot That Will Make You Question Your Beliefs

Author: Huxley
© Huxley – an almanac about philosophy, art and science

 

AI can shift people’s worldviews and make them question their beliefs. That is the conclusion of a new study published in the journal Science: interaction with large language models can produce a «shift in thinking» by confronting people with facts and well-reasoned evidence. In the age of post-truth, this makes AI a valuable tool for debunking false information and conspiracy theories of all kinds.

 

EVERY SECOND PERSON IS A «CONSPIRACY THEORIST»!

 

The modern world is an unsettled and unsafe place, and it is hard to argue with that. Conspiracy theories have traditionally owed their popularity to people’s desire to find psychological and ideological stability amid total uncertainty.

Surveys show that about 50% of Americans believe in various conspiracy theories. Among the most popular are the U.S. government faking the 1969 moon landing and microchips being implanted under the guise of COVID-19 vaccines to facilitate mass surveillance.

The rapid growth of social media has only exacerbated the problem, since the free exchange of information between users involves no check for scientific or factual accuracy.

 

THE DESTRUCTIVE EFFECT OF CONSPIRACY THEORIES

 

One of the study’s authors, Catherine Fitzgerald from the Queensland University of Technology in Brisbane (Australia), believes that most conspiracy theories do not have a severe impact on society. However, some can cause actual harm.

For instance, the claim that the 2020 U.S. presidential election was rigged led to the attack on the Capitol. Similarly, anti-vaccination rhetoric influenced COVID-19 vaccination rates. There are vast numbers of conspiracy theory supporters, and convincing them otherwise is exceptionally challenging.

Doing so would take specialists a significant amount of time and effort. Large language models (LLMs), however, can handle vast amounts of data and tackle large-scale problems that humans cannot manage as efficiently.

 

AI WITH THE POWER OF PERSUASION

 

A new chatbot from OpenAI, the creators of ChatGPT, has been specifically trained to convincingly debunk false worldviews. The model quickly processes vast amounts of information, learns from the Internet, and can navigate the full range of arguments and counterarguments made by both proponents and opponents of conspiracy theories.

Its reasoning is human-like and persuasive. To test this, the researchers enlisted over 1,000 participants whose demographic data matched U.S. Census figures for characteristics such as gender and ethnicity.

 


 

Each participant was asked to describe a conspiracy theory, drawing on their own life experience and personal perspective, and to explain why they believed it to be true, rating the strength of that belief as a percentage. Once this information had been passed to the chatbot, a dialogue between the participant and the model began, lasting an average of 8 minutes.

In its responses to participants, the chatbot proved quite effective at debunking false information.

 

DOUBTING THEIR CERTAINTY

 

The chatbot’s responses were so detailed and comprehensive that they shifted participants’ opinions. After the dialogue, confidence in the validity of the chosen conspiracy theory fell by an average of 21%. Before speaking with the chatbot, 25% of participants had been more than 50% certain they were right; after interacting with the AI, their self-assessments moved toward doubt.

A follow-up study conducted two months later revealed that this shift in perspective persisted for many participants. However, there’s a caveat: all the subjects in the study were paid respondents, which may mean they aren’t entirely representative of people deeply entrenched in conspiracy theories.

Nevertheless, the experiment’s results are impressive. Now the goal is to understand what exactly ensured its success, and under what conditions such persuasion becomes «ineffective.»

 

HOW TO AVOID HALLUCINATIONS

 

Further research is needed to ensure that LLMs don’t reinforce conspiracy thinking, but rather neutralize it. To achieve this, scientists plan to use additional metrics to evaluate the chatbot’s effectiveness and to replicate the experiment with less advanced safety measures in place.

It’s well known that AI chatbots are sometimes prone to «hallucinating» and providing false information. To avoid this, the team of researchers asked a fact-checking expert to assess the accuracy of the information provided by the anti-conspiracy chatbot.

The expert confirmed that, in this case, none of the chatbot’s statements were false or politically biased. As the experiment continues, the scientists plan to explore various chatbot strategies. For instance, they want to understand what happens when the chatbot’s responses are impolite.

 

Original research:

 
