Recent research from the Netherlands raises serious concerns about the use of artificial intelligence (AI) in healthcare, particularly in outcome prediction models (OPMs). The study, published in the data-science journal Patterns, highlights the dangers of prioritising predictive accuracy over treatment efficacy, which could ultimately harm patients.
OPMs leverage patient-specific information—including health history and lifestyle factors—to assist healthcare professionals in evaluating treatment options. The real-time data processing capabilities of AI offer significant advantages in clinical decision-making. However, researchers warn that if these models are trained on historical data that reflect existing disparities in treatment and demographics, they risk creating "self-fulfilling prophecies" that exacerbate these inequalities.
The team's findings, based on mathematical scenarios, suggest that deploying AI models in this manner can lead to a deterioration in patient health. "Many expect that by predicting patient-specific outcomes, these models have the potential to inform treatment decisions and they are frequently lauded as instruments for personalised, data-driven healthcare," the researchers stated. However, they cautioned, "using prediction models for decision-making can lead to harm, even when the predictions exhibit good discrimination after deployment."
Dr Catherine Menon, a principal lecturer at the University of Hertfordshire’s department of computer science, commented on the implications of the research. She explained that when AI models are trained on biased historical data, they can accurately predict poorer outcomes for certain demographics. This may lead healthcare professionals to be more cautious about treating those patients, reinforcing a historical pattern of under-treatment based on factors such as race, gender, or educational background.
"This demonstrates the inherent importance of evaluating AI decisions in context," Dr Menon noted, underscoring the necessity of applying human reasoning and assessment when interpreting algorithmic predictions. The researchers of the study called for a shift in AI model development to prioritise changes in treatment policy and patient outcome rather than simply focusing on predictive performance.
AI technology has already been integrated into the NHS in England, supporting clinicians in reading X-rays and CT scans and expediting stroke diagnoses, with the ultimate aim of reducing waiting lists. In January, Prime Minister Sir Keir Starmer underscored this priority, declaring his ambition for the UK to become an "AI superpower."
Despite the promising direction of AI in healthcare, experts caution against its unchecked application. Ian Simpson, a professor of biomedical informatics at the University of Edinburgh, noted that OPMs are not yet widely used within the NHS, where AI mainly assists diagnostic processes rather than replacing existing clinical management policies.
Ewen Harrison, a professor and co-director of the centre for medical informatics at the University of Edinburgh, elaborated on the unintended consequences that can arise from AI predictions. For example, if a hospital uses an AI tool to identify patients likely to have a poor recovery after knee replacement surgery, rehabilitation resources may be skewed disproportionately towards those predicted to have better outcomes. Patients categorised as having poorer prospects would then receive inadequate support, leading to slower recovery and increased pain. This feedback loop is a central issue in the current approach to AI integration in medical settings.
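The feedback loop Prof Harrison describes can be sketched with a toy simulation (purely illustrative, not the study's actual model; the patient numbers, recovery probabilities, and support effect are invented assumptions). A hypothetical score flags low-baseline patients as likely poor recoverers; when rehabilitation support is withheld from flagged patients, overall recovery worsens even as the model's post-deployment accuracy improves:

```python
import random

random.seed(0)

def simulate(n=10_000, allocate_by_prediction=True):
    """Toy simulation of a self-fulfilling outcome prediction model.

    Each patient has a random baseline recovery propensity in [0, 1].
    A hypothetical model flags patients with baseline < 0.5 as likely
    to recover poorly. Rehab support adds 0.2 to the recovery chance;
    if support follows the prediction, flagged patients get none.
    Returns (population recovery rate, model accuracy after deployment).
    """
    recovered = 0
    correct = 0
    for _ in range(n):
        baseline = random.random()           # underlying recovery propensity
        predicted_poor = baseline < 0.5      # model's flag
        # Withhold support from flagged patients if allocation follows the model
        support = 0.0 if (allocate_by_prediction and predicted_poor) else 0.2
        p_recover = min(1.0, baseline + support)
        outcome_good = random.random() < p_recover
        recovered += outcome_good
        # Prediction counts as "correct" if flagged patients fail to recover
        correct += (predicted_poor == (not outcome_good))
    return recovered / n, correct / n

equal_recovery, equal_acc = simulate(allocate_by_prediction=False)
skewed_recovery, skewed_acc = simulate(allocate_by_prediction=True)
```

Under these assumptions, letting the prediction steer resources lowers the population recovery rate while the model's measured accuracy rises, because the deprived patients do recover worse, just as the model "foresaw". This mirrors the researchers' warning that harm can occur "even when the predictions exhibit good discrimination after deployment."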
Overall, the study indicates that while AI offers significant promise in personalising patient care, there are critical challenges that must be addressed to ensure that these tools improve outcomes rather than inadvertently perpetuating historical biases and inequities in healthcare treatment.
Source: Noah Wire Services