Internet safety campaigners are increasingly vocal about the use of artificial intelligence in social media companies’ risk assessments, particularly following reports of Meta’s plans to automate up to 90% of them. In a formal appeal to Ofcom, the UK’s communications regulator, groups including the Molly Rose Foundation, NSPCC, and Internet Watch Foundation have warned that AI-driven assessments could represent a “retrograde and highly alarming step” for online safety.
The Online Safety Act requires social media platforms to assess and mitigate potential harms, particularly risks to child users and risks arising from illegal content. Risk assessment is therefore a critical element in ensuring these platforms meet safety standards. Campaigners fear that reliance on AI could dilute the quality of these assessments, warning that “risk assessments will not normally be considered as ‘suitable and sufficient’” if they are significantly automated.
Ofcom has said it is reviewing the concerns raised and will respond in due course. The watchdog has consistently highlighted the importance of transparency, with a spokesperson confirming that companies should disclose who is responsible for conducting, reviewing, and approving their risk assessments.
Meta, by contrast, has rejected the campaigners’ representations, saying its risk management processes have been mischaracterised. “We are not using AI to make decisions about risk,” a company spokesperson said, emphasising that its tools, developed by experts, are designed to help identify applicable legal and policy requirements rather than to replace human oversight. The statement comes amid claims from a former Meta executive that the shift towards AI-driven assessments would speed up app updates but could expose users to greater risk before problems are flagged.
Meta has also recently unveiled its Frontier AI Framework, which categorises AI systems by their risk potential and under which systems deemed too hazardous are to be suspended indefinitely, a step that reflects a more cautious approach to AI development. Yet, set against this initiative, a study by SaferAI has rated Meta’s AI risk management efforts as “very weak”, raising questions about the robustness of its safety measures.
The evolving landscape of AI regulation is also a key factor in this debate. With the EU’s AI Act coming into effect in August 2025, companies including Meta, Apple, and Microsoft are enhancing the transparency of their AI risk management to address emerging ethical, legal, and regulatory challenges. The legislation is poised to impose stricter requirements on AI, prompting a shift towards more transparent and responsible practices across the sector.
As these discussions unfold, the need for clear, effective oversight of AI's role in risk assessments remains crucial to safeguarding users, particularly the most vulnerable among them. The coming months are likely to be pivotal as Ofcom, Meta, and various advocacy groups navigate these complex concerns at the intersection of technology and safety.
Source: Noah Wire Services