TikTok is set to cut hundreds of jobs in the United Kingdom, predominantly affecting employees in its Trust and Safety team responsible for moderating content on the platform. This move forms part of a broader global reorganisation aimed at concentrating operations in fewer locations across Europe, alongside a significant shift towards integrating artificial intelligence (AI) more extensively in content moderation. According to a TikTok spokesperson, the company is focusing on "maximising effectiveness and speed" as it evolves this critical function by leveraging technological advancements.

The affected staff, primarily based in London, will face job losses as TikTok migrates much of the work to other European offices; hundreds more employees in similar roles across parts of Asia are also affected. The company said affected employees will be able to apply for other internal roles and will be prioritised if they meet the minimum job requirements. TikTok emphasised that its automated moderation systems, including AI, currently remove about 85% of posts that violate its rules and play a key role in reducing human reviewers' exposure to distressing material.

However, the decision has attracted sharp criticism, particularly from the Communication Workers Union (CWU). John Chadfield, the CWU National Officer for Tech, condemned the move as prioritising "corporate greed over the safety of workers and the public." He voiced concerns that TikTok is replacing human moderation teams with what he described as "hastily developed, immature AI alternatives." The announcement coincides with a ballot among TikTok's UK workers on union recognition, adding a layer of tension to the workforce changes.

These changes come against a backdrop of growing regulatory scrutiny in the UK, which has introduced tougher requirements for online platforms to monitor harmful content and protect younger users. The Online Safety Act, which became law in 2023 and whose strictest child-safety duties came into force in July 2025, imposes stringent obligations on companies like TikTok to enforce safety measures and age verification controls, with fines of up to 10% of a business's global turnover for non-compliance. In step with these requirements, TikTok has introduced new parental controls, including features that restrict specific accounts from interacting with children and greater transparency around privacy settings for older teenagers.

Despite these efforts, TikTok has faced ongoing criticism from regulators and organisations worried about its handling of content safety. The UK’s Information Commissioner’s Office launched a "major investigation" into TikTok earlier this year, focusing on whether the platform's recommendation algorithms and moderation systems appropriately safeguard the privacy and safety of younger users. TikTok has maintained that its recommender system operates under "strict and comprehensive measures" to protect teens.

These developments illustrate the balancing act TikTok faces: scaling AI-driven moderation efficiently while meeting safety and regulatory expectations and addressing employee concerns. Industry experts and union representatives warn that overreliance on AI could undermine the effectiveness of content moderation, especially for nuanced, sensitive material that human moderators are better equipped to evaluate. The shift also reflects a wider sector trend of social media firms deploying automated tools amid rising demands for user protection, fuelling ongoing debate about how best to maintain online safety.

Source: Noah Wire Services