Artificial intelligence (AI) is increasingly integrated into the National Health Service (NHS) in England, particularly for diagnostic purposes such as interpreting X-rays and CT scans. However, recent research points to risks in using AI to predict medical outcomes, and in particular to ways these models might harm patient care.

A team of academics in the Netherlands focused their research on outcome prediction models (OPMs), which analyse a patient's individual characteristics, including health history and lifestyle factors, to assist healthcare professionals in making informed treatment decisions. Despite the initial promise of these AI-driven tools in personalising healthcare, the study highlights significant shortcomings that may lead to harmful outcomes for certain patient demographics.

The findings suggest that when OPMs are developed from historical data, they often fail to account adequately for critical demographic factors and may reflect a history of under-treatment of particular medical conditions. As a consequence, these models can create what the researchers describe as “harmful self-fulfilling prophecies”. Deploying such tools may worsen outcomes for groups already facing disparities in medical treatment, because healthcare providers could be deterred from offering necessary interventions when the AI predicts a poor outcome.

“Many expect that by predicting patient-specific outcomes, these models have the potential to inform treatment decisions and they are frequently lauded as instruments for personalised, data-driven healthcare,” the researchers stated in their report. They argue, however, for a paradigm shift in AI model development, suggesting that the focus should transition from merely enhancing predictive accuracy to improving treatment policies and patient outcomes.

Dr Catherine Menon, a principal lecturer at the University of Hertfordshire, discussed the implications of the study's findings. In her comments, she noted that AI models often rely on historical data that may not consider various social determinants of health, leading to a cycle of under-treatment: “This creates a ‘self-fulfilling prophecy’ if doctors decide not to treat these patients due to the associated treatment risks and the fact that the AI predicts a poor outcome for them,” she explained.

Concerns about patient outcomes extend beyond this study. Ewen Harrison, a professor of surgery and data science at the University of Edinburgh, described the real-world implications of relying on AI-based predictions, offering a hypothetical example of a tool predicting recovery after knee replacement surgery. If the healthcare system prioritises intensive rehabilitation for those predicted to recover best, patients predicted to recover poorly may receive inadequate care, resulting in slower recovery and increased suffering.

Professor Ian Simpson, an expert in biomedical informatics at the University of Edinburgh, noted that while OPMs are not yet extensively deployed across the NHS, they are often used alongside existing clinical management strategies to enhance diagnostic processes.

This evolving landscape of AI within the NHS has drawn significant attention, especially since Prime Minister Sir Keir Starmer announced ambitions for the UK to become an "AI superpower" capable of using technology to address pressing issues such as NHS waiting lists. Nonetheless, the findings of this research underscore the need for careful evaluation of AI decision-making and for the continued integration of human expertise in medical assessments, to minimise unintended consequences for patients, particularly those from historically disadvantaged backgrounds.

Source: Noah Wire Services