TikTok moderators in the UK have raised serious concerns about user safety following the announcement of hundreds of job redundancies at the company’s London office. One anonymous moderator told Sky News, “If you speak to most moderators, we wouldn't let our children on the app,” highlighting fears that cutting experienced human moderators could increase users’ exposure to harmful content, particularly among the young. The unease comes amid accusations that TikTok timed the redundancies to coincide with an impending vote on union recognition.

The Communication Workers Union (CWU), which represents the moderators, has labelled the job cuts potential union-busting. John Chadfield, the CWU’s national officer for tech workers, criticised TikTok for suspending the union ballot just before announcing the redundancies, warning that frontline trust and safety roles crucial to protecting families could be lost. The CWU and several moderators contend that TikTok’s reorganisation is less about efficiency than about offshoring these roles to lower-cost regions. They argue that moderation quality will suffer, since the replacement workers reportedly lack the extensive experience of the current teams.

TikTok, however, has strongly rejected allegations of union-busting or compromised safety, stating that the redundancies are part of an ongoing global restructuring aimed at strengthening its Trust and Safety operations. According to a company spokesperson, this involves concentrating moderation efforts in fewer locations to maximise effectiveness and leverage technological advancements. TikTok emphasised that it has been voluntarily engaging with the union and expressed willingness to resume discussions following the current consultation period.

These developments reflect a broader pattern at TikTok’s moderation centres globally. Similar protests and strikes have recently erupted in Berlin, where hundreds of moderators reportedly face redundancy amid the company’s push towards artificial intelligence (AI) for content moderation. Trade unions there have voiced strong opposition, arguing that AI cannot replace the nuanced judgement and expertise effective moderation requires. Workers have demanded severance packages and fair recognition of their skills, warning that a shift to AI-driven systems may increase risks on the platform.

Further afield, former TikTok moderators in Turkey have spoken out about the mental health toll of content moderation, describing exposure to traumatic material alongside low pay and poor work conditions. Some have alleged dismissal linked to unionisation efforts. TikTok maintains it has arrangements with outsourcing firms to provide worker support and promote a caring working environment, though critics remain sceptical about the company’s commitment to its moderation teams.

The moderation cuts also coincide with significant political attention in the United States, where a deal over the app’s ownership is being finalised amid national security concerns. This intersection of labour disputes, user safety worries, and geopolitical tension underscores the complex challenges facing TikTok as it navigates regulatory pressure and operational change worldwide.

In the UK, the debate continues as the CWU and moderators push back against what they see as a move that jeopardises millions of users’ safety by undermining the trusted human oversight of the platform. Despite TikTok’s assurances about improving operations, the real test will be whether automated systems and a leaner moderation workforce can adequately protect vulnerable users from harmful content.

Source: Noah Wire Services