Enacted in 2023, the UK's Online Safety Act was designed to protect children from harmful online content by mandating that platforms implement age verification systems and restrict access to material such as pornography, self-harm content, and other age-inappropriate media. While the Act introduced important new criminal offences targeting cyberflashing, intimate image abuse, and epilepsy trolling, its early implementation has exposed significant flaws. Numerous innocuous online forums have shut down entirely to avoid regulatory liability, while legitimate support groups, such as those for sexual assault survivors or people seeking to quit smoking, now require government ID verification, eroding user anonymity and trust. This has provoked domestic backlash and drawn international criticism that UK regulators are over-censoring foreign companies.
A core issue stems from the Act’s broad and vague definitions of “harmful content,” including poorly defined categories like “bullying content” and “content encouraging behaviours associated with eating disorders.” These sweeping terms compel platforms to adopt overly cautious moderation, leading to unnecessary restrictions that stray from the law’s intended protections. The Electronic Frontier Foundation, among others, has highlighted the risk of such loose terminology fostering overzealous compliance that threatens free expression and privacy without clear benefits to children’s safety. To redress this, experts urge Parliament to sharply narrow and clarify these content categories, ideally through immediate regulatory guidance from Ofcom or legislative amendment, thereby providing platforms with unambiguous criteria aligned to the law’s child-protection goals.
Compounding these challenges is the current enforcement approach. Ofcom, the UK regulator, has the discretion to impose penalties immediately upon identifying breaches, prompting platforms to implement broad content restrictions rather than risk fines. Although Ofcom has developed informal remediation options that allow services to address compliance issues before formal investigations, platforms face uncertainty about when these leniencies apply. Observers advocate a statutory obligation on Ofcom to provide a clear remediation period of 30 to 60 days before sanctions are imposed, in line with precedents in other UK regulatory frameworks such as the Communications Act and FCA enforcement. This change would incentivise targeted fixes over sweeping censorship, fostering a more proportionate regulatory environment.
Moreover, there is concern about the absence of judicial oversight of Ofcom's content restrictions. While regulators face political pressure to act visibly on child protection, they bear limited accountability when enforcement results in excessive restrictions. Legal experts recommend amending the Act to require that significant content restrictions undergo independent judicial review. This would mirror existing safeguards in related areas, such as ISP blocking orders under copyright law and injunctions in defamation cases. Courts could provide expedited reviews in urgent cases or post-action assessments to ensure that restrictions satisfy legal standards and proportionality, protecting free speech while still enabling rapid action against genuinely harmful content.
The implementation of the Act has already led to high-profile regulatory actions. In early 2025, Ofcom fined Fenix International Limited, operator of OnlyFans, £1.05 million for inaccuracies in its age verification disclosures: the regulator found that the platform had misstated the threshold used in its live selfie age estimation checks, underscoring Ofcom's insistence on accurate information as the basis for effective enforcement. Ofcom has since launched investigations into 34 pornography websites operated by multiple companies to ensure they meet the "highly effective" age verification standards mandated by the Act. The law empowers Ofcom to impose fines of up to £18 million or 10% of global revenue, whichever is greater, or even to block access to sites that fail to comply.
The Online Safety Act also requires tech giants such as Facebook, TikTok, and YouTube to adopt enhanced moderation and reporting mechanisms to tackle illegal content, including child sexual abuse imagery. Failure to comply exposes these platforms to substantial fines, reinforcing the government's commitment to tightening online safety. At the same time, the repeal of earlier regulatory schemes such as the Video Sharing Platform regime has brought legal and procedural shifts, requiring Ofcom to adjust its enforcement priorities and approaches accordingly.
Ultimately, the UK government's efforts to safeguard children online under the Online Safety Act involve a complex balancing act between stringent protections and preserving open, supportive online spaces. Without clearer guidance, reasonable remediation periods, and judicial checks, the risk remains that platforms will prioritise excessive content removal over nuanced moderation, to the detriment of user rights and community trust. Implementing the three fixes outlined above, namely focused guidance on harmful content, structured enforcement grace periods, and judicial oversight, could help build a more effective and proportionate online safety framework that genuinely protects children while respecting the broader digital ecosystem.
📌 Reference Map:
- Paragraph 1 – [1], [7]
- Paragraph 2 – [1], [7]
- Paragraph 3 – [1]
- Paragraph 4 – [2], [3], [6]
- Paragraph 5 – [4], [5], [6]
- Paragraph 6 – [1], [7]
Source: Noah Wire Services