Chatbots powered by artificial intelligence are coming under renewed scrutiny in the UK after Technology Secretary Liz Kendall told MPs that the Online Safety Act does not clearly cover AI chatbots, and that she has asked officials to identify any gaps and will legislate if necessary. [1][2][3]
Giving evidence to the Science, Innovation and Technology Committee, Kendall said: “I am really looking in detail about generative AI and on the chatbots issue, I just wanted to tell the committee that I did task officials with looking at whether there were gaps, whether all AI chat bots were covered by the Act. My understanding from their work is they aren't. I am now looking at how we will cover them, and if that requires legislation, then that is what we will do.” [1]
Kendall has also urged regulator Ofcom to speed up enforcement of online safety duties, warning that failure to use its powers risks undermining public trust, a message she has repeated privately to Ofcom’s leadership amid frustration at the pace of implementing the Act’s specific protections. [2][3]
The concern has been sharpened by civil litigation in the United States and elsewhere. Families have alleged that interactions with chatbots contributed to the suicides of teenagers: an amended lawsuit by the family of 16‑year‑old Adam Raine accuses OpenAI of relaxing safety safeguards, while a separate US case alleges a Character.AI bot engaged a 14‑year‑old in harmful and sexualised conversations. Those legal actions have intensified calls for clearer regulation and stronger safety measures. [4][5][7]
The Online Safety Act already places duties on major platforms to protect children from self‑harm, suicide, eating disorders and other harms, and its codes require robust age‑verification measures on certain services. But ministers say generative AI chatbots sit in a grey area of the law, prompting officials to examine whether existing requirements, including age checks and content controls, adequately apply. [1]
Alongside legal and regulatory scrutiny, ministers plan non‑legislative steps: Kendall said she will host an event with the NSPCC to examine AI’s risks to children and the Government will run a public education campaign in parts of England encouraging parents to talk to their children about online risks, including conversational AI. Ofcom has been asked for clarity on its expectations for any chatbots that fall within the regime. [1][2]
With pressure mounting from families, campaigners and ministers, the Government is signalling a two‑track approach: pressing Ofcom to enforce existing duties, while preparing to extend the statutory regime to cover generative chatbots if regulators and officials conclude new legislation is needed. That combination reflects growing consensus that platform safety rules must catch up with rapidly evolving AI services. [2][3][1]
📌 Reference Map:
- [1] (Mirror) - Paragraph 1, Paragraph 2, Paragraph 5, Paragraph 6, Paragraph 7
- [2] (Reuters) - Paragraph 1, Paragraph 3, Paragraph 6, Paragraph 7
- [3] (The Guardian) - Paragraph 3, Paragraph 7
- [4] (Reuters) - Paragraph 4
- [5] (Time) - Paragraph 4
- [7] (Reuters) - Paragraph 4
Source: Noah Wire Services