TikTok faces growing scrutiny from UK MPs as it plans to cut over 400 jobs in its London trust and safety teams, which are responsible for moderating content on the platform. The proposed layoffs are part of a broader restructuring effort to consolidate operations into fewer global sites while expanding the use of artificial intelligence (AI) to automate content moderation. This shift has raised concerns about the platform's commitment to user safety and the effectiveness of AI compared to human moderators.

The Commons Science, Innovation and Technology Committee, chaired by Dame Chi Onwurah, has demanded that TikTok clarify how it intends to protect users from harmful content amid the proposed job reductions. In a letter to the company, the committee highlighted a troubling contradiction between TikTok’s increased reliance on AI and its earlier public statements emphasizing the crucial role of human moderators in removing content. Dame Chi said TikTok must justify how it can maintain robust moderation standards while cutting staff it previously described as critical, and set a November 10 deadline for the company to answer questions about which roles are affected, what risk assessments have been carried out on user safety, and whether the responsibilities of laid-off employees will be outsourced to third-party providers.

In response, TikTok's Northern Europe public policy director, Ali Law, said the restructuring is intended to improve the speed and efficiency of moderation processes and insisted that most of the affected roles are not frontline moderation jobs. TikTok also stressed its ongoing investment in trust and safety, noting a $2 billion global commitment this year to enhance content moderation technology, with automated tools now responsible for removing about 80% of violative content.

This move toward AI-driven moderation is part of a wider trend within the company, with layoffs not only in the UK but also in other regions, including Germany, Malaysia, and elsewhere in Asia. Recent reports confirm that since early 2025 TikTok has been concentrating its trust and safety operations in fewer locations globally, affecting teams in Europe, the Middle East, Africa, and Asia. Internal communications cited in those reports suggest the company is leveraging advances in large language models and other AI technologies to reshape its moderation strategy.

However, the timing of these changes coincides with mounting regulatory pressure, particularly in the UK, where the Online Safety Act has imposed stricter obligations on social media firms to protect users, with potential fines of up to 10% of global turnover for non-compliance. Critics warn that cutting human moderator numbers just as these rules take effect undercuts TikTok’s stated commitment to user safety. The committee’s demand for urgent clarity reflects broader concerns about the accountability of platforms that increasingly rely on automated systems, which may not yet fully handle the complexities of harmful content.

Despite TikTok’s assurances that the changes are designed to make moderation more effective, the episode underscores the ongoing tension between technological innovation and the need for responsible human oversight. How TikTok balances these priorities in the coming months will be closely watched by legislators, users, and industry observers alike.

📌 Reference Map:

  • Paragraph 1 – [1] (The Irish News), [7] (The Independent)
  • Paragraph 2 – [1] (The Irish News)
  • Paragraph 3 – [1] (The Irish News), [2] (ABC News)
  • Paragraph 4 – [2] (ABC News), [4] (PYMNTS), [5] (Cybernews)
  • Paragraph 5 – [6] (Anadolu Agency), [7] (The Independent)
  • Paragraph 6 – [7] (The Independent)

Source: Noah Wire Services