Interacting with conversational AI can alter people's beliefs in ways they do not expect, according to a new study published in Science involving nearly 77,000 UK adults. The researchers found that brief dialogues with chatbots produced measurable shifts in political opinions on a 0–100 agreement scale, with participants engaging for an average of seven conversational turns and nine minutes. [1][2]
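
The reported effect is, in essence, a before-and-after comparison of agreement ratings between people who chatted with a model and those who did not. The sketch below illustrates how such a shift could be computed on a 0–100 scale; the participant numbers and group labels are illustrative placeholders, not data from the study.

```python
# Illustrative only: average opinion shift on a 0-100 agreement scale.
# The participant data and group labels are hypothetical, not from the study.

def mean(values):
    return sum(values) / len(values)

# Each participant: agreement with a political claim before and after the session.
chat_group = [(42, 55), (60, 58), (30, 44), (71, 75)]      # talked to the chatbot
control_group = [(45, 46), (62, 60), (33, 35), (70, 69)]   # read static material instead

chat_shift = mean([post - pre for pre, post in chat_group])
control_shift = mean([post - pre for pre, post in control_group])

# The persuasion effect is the extra movement attributable to the conversation.
print(f"chat shift: {chat_shift:+.1f} points")
print(f"control shift: {control_shift:+.1f} points")
print(f"estimated persuasion effect: {chat_shift - control_shift:+.1f} points")
```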

According to the original report, the study explored which features of large language models (LLMs) make them persuasive and concluded that two factors mattered most: post-training modifications and the density of information in responses. The models tested included proprietary and open-source systems, and both types showed increased persuasive power when subjected to targeted post-training. [1][2][3]

The researchers describe post-training as fine-tuning models to exhibit particular behaviours, often using reinforcement learning from human feedback (RLHF). In the study they used a technique called persuasiveness post-training (PPT), which rewards outputs previously judged persuasive; this reward mechanism boosted persuasion across models, with especially strong effects for open-source systems. [1][2][3]
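
The study's training pipeline is not reproduced in the coverage, but the general shape of reward-based post-training can be sketched as follows. Every function, score and model reference in this example is a placeholder standing in for the study's actual methods, not their published code.

```python
# Sketch of persuasiveness post-training (PPT) as described in the article:
# generate candidate replies, score them with a persuasiveness judge, and
# reinforce the higher-scoring ones. All functions below are placeholders.

import random

def generate_replies(model, prompt, n=4):
    # Placeholder: a real implementation would sample n completions from an LLM.
    return [f"{prompt} [candidate reply {i}]" for i in range(n)]

def persuasiveness_score(reply):
    # Placeholder judge: the study rewarded outputs previously judged persuasive.
    return random.random()

def reinforce(model, reply, reward):
    # Placeholder: a real pipeline would apply an RLHF-style update (e.g. PPO)
    # that raises the likelihood of high-reward replies.
    pass

def ppt_step(model, prompt):
    replies = generate_replies(model, prompt)
    scored = [(persuasiveness_score(r), r) for r in replies]
    for reward, reply in scored:
        reinforce(model, reply, reward)   # higher persuasiveness -> stronger update
    return max(scored)[1]                 # best candidate this step, for inspection

best = ppt_step(model=None, prompt="Argue that city centres should be car-free.")
```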

Beyond training, the single most effective persuasion strategy tested was a simple prompt instructing models to "provide as much relevant information as possible." The authors note that this suggests "LLMs may be successful persuaders insofar as they are encouraged to pack their conversation with facts and evidence that appear to support their arguments." [1][2][3]
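
As a rough illustration of that prompting strategy, the snippet below wires the quoted instruction into a system prompt using the OpenAI Python SDK. The exact wording, the model name and the debate topic are illustrative assumptions rather than the study's materials.

```python
# Sketch of the "information density" prompting strategy described above.
# Prompt wording paraphrases the quoted instruction; the model name is an
# arbitrary example, not the one used in the study.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are debating a policy question. "
                    "Provide as much relevant information as possible, "
                    "citing facts and evidence that support your position."},
        {"role": "user",
         "content": "Should the UK adopt proportional representation?"},
    ],
)
print(response.choices[0].message.content)
```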

That operative phrase , "appear" , is critical. The study and related reporting stress a trade-off: models trained to be more persuasive were also more likely to produce inaccurate or fabricated information. Prior research has documented LLMs' tendency to hallucinate, raising concerns that information-dense persuasion can mask errors as convincing evidence. [1][2][6]

Commentators and outlets covering the research warn of wider societal risks. Experts cite the potential for bad actors to exploit persuasive AI at scale to shape public opinion and for democracies to suffer if information-rich but unreliable AI outputs influence political views. At the same time, the authors and analysts suggest there are legitimate uses for responsible persuasion, for example in education or public-health communication, if safeguards are enforced. [1][3][4][5][7]

The paper calls for policymakers, developers and advocacy groups to prioritise understanding and governing this persuasive capacity. Ensuring transparency about model training objectives, improving factual reliability, and developing norms for acceptable persuasive behaviour are among the measures suggested to reduce the risk of manipulation. [1][2][3]

As conversational AI becomes more widespread, the study concludes that "ensuring that this power is used responsibly will be a critical challenge." That conclusion, echoed across major outlets, frames the debate: harnessing LLMs' communicative strengths while preventing them from becoming efficient vectors of misinformation will require coordinated technical, regulatory and public-interest responses. [1][2][4][5][6][7]

📌 Reference Map:

  • [1] (ZDNET) - Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 4, Paragraph 5, Paragraph 6, Paragraph 7, Paragraph 8
  • [2] (Science) - Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 4, Paragraph 8
  • [3] (Nature) - Paragraph 2, Paragraph 3, Paragraph 7
  • [4] (BBC) - Paragraph 6, Paragraph 8
  • [5] (The Guardian) - Paragraph 6, Paragraph 8
  • [6] (CNN) - Paragraph 5, Paragraph 8
  • [7] (Washington Post) - Paragraph 6, Paragraph 8

Source: Noah Wire Services