Arooj Shah, the leader of Oldham Council, has condemned a series of deeply offensive, “racist and malicious” AI-generated deepfake videos targeting her. The manipulated footage, which circulated within a public social media group, depicts Coun Shah speaking about council finances in a fabricated and exaggerated East Asian accent, alongside other grossly inappropriate videos involving political figures shown in lewd or sexualised scenarios. Shah described the videos as “bigoted” and designed to dehumanise her, emphasising that such hateful and false portrayals are “completely unacceptable” and have no place in public or community spaces.

Expressing her shock and horror at the incident, Shah stressed that no one, whether in public life or not, should be subjected to such “pathetic tactics.” She vowed that the intimidation would not deter her from serving her community, and highlighted the personal and wider societal damage these videos inflict. The initial posts were traced to a local Facebook group associated with far-right sympathies, though the political group Advance UK later disavowed any official connection to the page or its content, strongly condemning the videos as contrary to its values.

This incident is part of a wider and troubling trend in which AI-generated media is misused to harass and manipulate public figures. Several politicians across the UK have been targeted by deepfake content recently. Conservative MP George Freeman reported to police an AI video that falsely suggested he was defecting to another party. Female politicians have been disproportionately victimised: an investigation revealed that several prominent figures, including Labour’s Angela Rayner and Conservative MPs Penny Mordaunt and Priti Patel, have been subjected to non-consensual deepfake pornography. These fabricated intimate images have circulated online for years, prompting significant concern and police involvement. While provisions of the UK’s Online Safety Act that came into force in January criminalise the sharing of such imagery without consent, the law does not yet ban the creation of deepfake pornography, fuelling ongoing debate about further legislative measures to combat this form of abuse.

The scale and sophistication of deepfake technology have drawn growing attention, with experts warning that manipulated videos and images are becoming increasingly difficult for the public to detect. According to recent expert guidance, common signs of a deepfake include unnatural mouth movements, irregular lighting, and voice synchronisation errors. Beyond individual harassment, there is also a wider political dimension, as seen in deepfake advertisements targeting UK Prime Minister Rishi Sunak, which falsely portrayed his financial dealings and were disseminated widely, raising fears about AI’s role in election interference.

Social media platforms have responded with varying policies; Facebook, for instance, banned deepfake videos designed to mislead ahead of the US elections, though it sometimes allows content deemed newsworthy. The ongoing difficulty of regulating AI-manipulated media underscores the persistent risks these technologies pose to public discourse, individual dignity, and democratic processes alike.

In condemning the attacks against her, Arooj Shah joins a growing number of public figures calling for stronger protections and accountability around the use of AI-generated content, particularly where it involves misinformation, racism, harassment, and sexual exploitation. As AI tools become more accessible and their misuse more harmful, the tension between technological innovation and ethical governance grows ever more urgent.

📌 Reference Map:

  • [1] Manchester Evening News – Paragraphs 1, 2, 3, 4, 5
  • [2] The Guardian – Paragraph 6, 7
  • [3] The Guardian – Paragraph 8
  • [4] The Guardian – Paragraph 9
  • [5] The Guardian – Paragraph 10
  • [6] The Guardian – Paragraph 11

Source: Noah Wire Services