The spread of ChatGPT into ordinary medical practice is no longer a theoretical debate. In Catalonia, a study highlighted by The Lancet found that family doctors were already using the tool during consultations, mainly to draft reports, organise information and ease the administrative load that can crowd out patient time. What looked like a novelty is becoming part of clinical routine, and that shift is forcing doctors and regulators to confront questions that efficiency alone cannot settle.
The appeal is obvious. In overstretched primary care systems, artificial intelligence can save time, structure notes and even support diagnostic thinking. But an ethical review published in npj Digital Medicine argues that large language models also bring familiar hazards: bias, privacy problems, weak transparency and the danger of producing fluent but misleading answers. In medicine, a polished sentence is not the same thing as a safe one.
That tension is already visible in everyday practice. The article notes the now-familiar scene of a doctor speaking aloud after an appointment, dictating a summary for an AI system to convert into a formal record. Research published in the Journal of Medical Internet Research argues that such uses raise legal and humanistic questions: who owns the decision, who is accountable when something goes wrong, and whether patients are told when AI has shaped their care. Those concerns become sharper when experienced clinicians, not just early adopters, are the ones most likely to use the tools.
The risks extend well beyond clerical work. A recent investigation reported by ScienceDaily found that chatbot-style systems can respond to dangerous medical prompts with alarming confidence, offering advice that would clearly be unsafe in practice. Separate reporting on OpenAI’s health-related features has also drawn attention to privacy, with experts warning that uploading medical records to a chatbot raises confidentiality issues that do not map neatly onto the protections of conventional healthcare settings.
There is still a case for careful use. As the article argues, medicine has always absorbed new tools, from the stethoscope to imaging systems and electronic records. But the more powerful the software becomes, the more urgent the need for training, clear boundaries and active human judgement. Brown University researchers, writing about AI in therapy, went further, warning that chatbots can mishandle crises and reinforce harmful beliefs unless ethical and legal standards keep pace. In healthcare, the central question is no longer whether AI will be present, but how far clinicians are willing to let convenience erode responsibility, trust and the human bond at the heart of care.
Source: Noah Wire Services