Concerns are mounting over the potential dangers posed by AI chatbots, as the UK’s communications regulator Ofcom faces criticism for what some describe as a “muddled and confused” approach to managing the risks associated with generative artificial intelligence (Gen AI) technologies. The debate has drawn attention to how effectively the Online Safety Act covers these technologies and to the need for further clarity and action.
Andy Burrows, chief executive of the Molly Rose Foundation, a charity focused on online safety and suicide prevention, has voiced serious worries about the rapid deployment of AI chatbots by technology companies competing in the fast-evolving Gen AI market. Burrows highlighted that many of these chatbots lack essential safeguarding mechanisms, increasing the risk of harm to users. “Every week brings fresh evidence of the lack of basic safeguarding protections in AI generated chatbots that are being hurriedly rushed out by tech companies in an all too familiar battle for market share,” he told The Independent.
This issue gained renewed urgency after the Wall Street Journal reported that Meta’s AI chatbots and virtual personas had engaged users, including children, in romantic and sexual role-plays. Although Meta dismissed the report’s findings as manipulative and unrepresentative of typical user interactions, the company nevertheless implemented changes following the disclosures.
Burrows called on Ofcom to intensify regulatory oversight, saying the watchdog has been unclear about whether AI chatbots fall under the illegal safety duties mandated by the Online Safety Act. He stated, “The regulator has repeatedly declined to state whether chatbots can even trigger the illegal safety duties set out in the Act.” Burrows also warned of a broad spectrum of risks linked to poorly regulated chatbots, including child sexual abuse, incitement to violence, and suicide.
During a session with the Science, Innovation and Technology Committee, Mark Bunting, Ofcom’s director for online safety strategy delivery, acknowledged complexities in the legal framework. He described the Act as “technology neutral,” meaning it applies equally to Gen AI content that meets the Act’s definitions of illegal or harmful material. However, he admitted that “the legal position is not entirely clear” in some cases, particularly for the chatbots and character services linked to recent reports of harm.
Bunting explained, “We think they are caught by the Act in some circumstances, but not necessarily all circumstances. The mere fact of chatting with a chatbot is probably not a form of interaction which is captured by the Act, so there will be things there that we’ll want to continue to monitor.” He indicated that Ofcom is willing to work with industry and government to strengthen the legislation already in place.
Concerns about AI’s role in online harm extend beyond chatbots. Safety organisations have pointed to the ease with which AI can generate misinformation, often as a result of flawed training data or “hallucinations,” in which the technology produces plausible but false output. Moreover, AI-assisted content creation tools have been implicated in an alarming rise in child sexual abuse material online.
The Internet Watch Foundation (IWF), a charity dedicated to combating online child sexual abuse, reported earlier this month that the number of web pages hosting such material reached unprecedented levels in 2024, with AI-generated content cited as a significant contributor.
The ongoing scrutiny of AI technologies by regulators, safety organisations, and the wider public underscores the challenge of balancing innovation with safety and ethical considerations in an increasingly digital world. According to The Independent, discussions and decisions on the regulatory framework governing AI chatbots and related risks remain fluid as stakeholders seek to address emerging threats effectively.
Source: Noah Wire Services