NHS England has paused a major AI project designed to assess individual health risks, following emerging concerns over the use of patient data. The initiative, known as Foresight, was built on Meta's open-source AI model, Llama 2, with the aim of refining healthcare delivery by tailoring treatment plans to patients' comprehensive medical histories. Researchers from University College London and King's College London led the project as part of a national pilot exploring the potential of AI to personalise healthcare.
However, the project came under scrutiny after it emerged that data from 57 million patients was being processed without appropriate permissions. Experts warned that even anonymised data could allow individuals to be identified, a point underscored in reporting by The Observer. These concerns prompted the British Medical Association (BMA) and the Royal College of General Practitioners (RCGP) to call for a halt to the initiative, raising the alarm that data collected for COVID-19 research had been used to train the AI model without this being transparently communicated to them beforehand.
Professor Kamila Hawthorne, chair of the RCGP, said that fostering patient trust is crucial in this context, asserting that medical data must not be used beyond the scope originally agreed by patients. She stated: “If we can’t foster this patient trust, then any advancements made in AI – which has potential to benefit patient care and alleviate GP workload – will be undermined.” The BMA echoed these sentiments through Katie Bramall, chair of its England GP committee, who expressed surprise at the lack of awareness that patient data was being used for AI training.
The repeated emphasis on patient trust reflects growing concern across the UK about the use of AI in healthcare data analytics. A survey conducted in July 2023 found that 56% of UK citizens did not trust the NHS's handling of AI applications, citing security and privacy issues, and that 25% explicitly opposed the use of AI to process their medical data, indicating clear public apprehension about data privacy amid advancing technological integration.
While the Foresight initiative reflects broader moves to harness AI in the NHS, it also underscores the delicate balance between innovation and the safeguarding of patient information. Similar tensions have emerged in the political sphere, where UK ministers have explored allowing private companies to profit from NHS data for AI development. That proposal faced immediate backlash over privacy concerns, highlighting the need for stringent controls and transparent governance in all AI-related endeavours in healthcare.
As a promising yet contentious field, the integration of AI in the NHS must be navigated carefully. It must not only meet ethical standards but also engage in transparent dialogue with patients to cultivate trust and maintain confidence in healthcare systems. The recent developments around the Foresight project serve as a pertinent reminder of these challenges, emphasising that technological advancement must go hand in hand with patient rights and expectations.
Source: Noah Wire Services