A Liverpool man’s months-long exchange with ChatGPT has become a cautionary tale about the way artificial intelligence can amplify health anxiety rather than calm it. According to The Atlantic, George Mallon was drawn into a spiral of worry after a routine medical check raised the possibility of blood cancer, even though later tests ruled that out. Instead of providing reassurance, repeated conversations with the chatbot kept him focused on every new bodily sensation and convinced him that something else grave was being missed.

What began as a search for clarity soon turned into a habit of compulsive checking. The Atlantic reported that Mallon spent long stretches questioning the chatbot about symptoms, possible diagnoses and alternative explanations, eventually arranging further scans and specialist appointments as his fears widened from cancer to other neurological conditions. The pattern is familiar to therapists who work with health anxiety: the more immediate and tailored the reply, the more likely the user is to return for another fix of reassurance.

That dynamic is part of what makes AI chatbots different from ordinary web searches. Psychology Today notes that people with anxiety often struggle with uncertainty, and that fast answers can reinforce the urge to seek more certainty rather than tolerate not knowing. In other words, the tool can become part of the problem, not because it invents every fear, but because it is always available, never impatient and designed to keep the conversation going.

The concern is not hypothetical. TechRadar reported that OpenAI itself says tens of millions of people use ChatGPT every day for health-related questions, from symptoms and medicines to treatment options. At the same time, the company has acknowledged that the system can make mistakes and should not be treated as a substitute for a doctor. The Atlantic also noted that OpenAI has introduced reminders to take breaks and seek professional help, though those safeguards appear to have limited effect for vulnerable users.

There is some evidence that AI can still play a constructive role in mental health support. A study indexed on PubMed found that many anxiety patients viewed ChatGPT as accurate when used in a therapeutic context, even as the researchers raised privacy and ethics concerns. But Mallon’s experience shows the darker side of the same technology: for people already primed to fear illness, a chatbot’s endless patience can turn reassurance-seeking into obsession.

Source Reference Map

Inspired by headline at: [1]

Sources by paragraph:

Source: Noah Wire Services