ISLAMABAD: When Shukria Ismail left Khyber Pakhtunkhwa’s conservative Kurram district to pursue journalism, she accepted the strict cultural constraints her family expected of a Pathan woman. She kept her dupatta on, moderated her speech, and built a public profile through field reporting and on‑screen work. “At first, I brushed it off as something common in our field,” she said, speaking to Asian News after a fake Facebook profile bearing her image was used to circulate sexually explicit material and obscene messages to relatives and to people with long‑running property disputes against her family. The assault on her reputation, she added, made it “clear they wanted to push me towards suicide or provoke an ‘honour killing.’” [1]

Shukria’s experience is by no means isolated. The Digital Rights Foundation’s (DRF) 2024 Cyber Harassment Helpline Annual Report recorded 3,171 new complaints of technology‑facilitated gender‑based violence (TFGBV) across Pakistan, with the majority involving cyber harassment targeting women. According to the report, incidents surged at the start of the year as gendered disinformation and AI‑generated images were deployed to humiliate and discredit women politicians and journalists during the February elections. DRF found that for many victims the harm spilled from online spaces into threats, reputational damage and loss of livelihood. [1][2]

Deepfakes and AI‑generated intimate images have become especially damaging tools. DRF founder Nighat Dad told Dawn the organisation was witnessing “a disturbing new frontier of gender‑based violence,” and warned that “AI has lowered the barrier for abuse. With just a photo and a free app, anyone can manufacture a scandal.” She said the scale and invisibility of such manufactured content make redress harder and urged stronger platform safeguards, quicker takedown mechanisms and resourced, survivor‑centric helplines. [1][2]

High‑profile women in politics and media illustrate how weaponised content can be deployed across social strata. Islamabad anchor Mona Alam said a decades‑old sex‑worker video was repurposed and relabelled as hers in December 2024, then amplified across WhatsApp and social media channels; she told Dawn that the initial campaigners included individuals affiliated with political parties and some media professionals. After the Federal Investigation Agency (FIA) director general who initially pursued arrests was removed, she said, the investigation stalled and the campaign moved to overseas amplification. She has repeatedly sought intervention from senior officials, including Interior Minister Mohsin Naqvi, but described receiving limited help. [1]

Punjab information minister Azma Bokhari also reported a sexualised deepfake circulated in 2024 that left her deeply distressed. She argued such cases must be treated as test cases in courts equipped to show “zero tolerance” and warned that politicisation of incidents undermines accountability. Her view underscores growing calls for legal remedies that produce visible consequences for perpetrators rather than prolonged investigative limbo. [1]

For non‑public figures the consequences are often personal and long‑lasting. A Karachi marketing professional using the pseudonym Maria said a former boyfriend doctored photos and sent them to her fiancé, ending her engagement and leaving her depressed for months. Clinical psychologists cited in reporting say such manipulations can trigger anxiety, hypervigilance, shame and social withdrawal; they call for gender‑sensitive reporting systems, public awareness campaigns and digital‑literacy initiatives to protect psychological well‑being and social participation. [1]

Researchers and activists say the technology amplifies long‑standing patriarchal controls. Annam Lodhi’s research shows women who voice opinions online are disproportionately targeted; she told Asian News that AI doesn’t create misogyny but magnifies it by making harassment faster, more personalised and harder to trace. Tech journalist Sindhu Abbasi highlighted cases ranging from “nudifier” apps to workplace misuse of AI that altered a professional headshot to add cleavage; she noted Meta has taken legal action against some developers of such apps. The combination of algorithmic amplification and easy‑to‑use generative tools is widening the gap between harm and legal‑institutional capacity to respond. [1]

Official and civil‑society responses are uneven. Government statistics and civil‑society reporting show broadband penetration has increased, extending risk into digital spaces, but a persistent gender gap in internet use leaves many women without the skills or autonomy to protect themselves: industry data cited by DRF put regular female internet use at roughly a third of the population and noted that many women access the web via another person’s device. The National Commission on the Status of Women’s 2023 survey found nearly 40% of women had experienced cyber‑bullying or harassment, while conviction rates remain low (92 cybercrime convictions from 1,375 cases in 2023), reinforcing calls for better capacity building, sensitisation of law enforcement, and more accessible complaint mechanisms. [1][2][3]

International and multilateral actors are moving to fill policy and capacity gaps. The United Nations Development Programme in Pakistan has launched a project focused on legal and policy reform, institutional strengthening and digital‑literacy promotion to combat TFGBV, while DRF and other organisations press for gender‑sensitive training for law‑enforcement personnel and platform accountability. Activists say these measures are necessary but not sufficient: shifts in social norms and political will to pursue high‑profile prosecutions are essential to deter abuse and protect women’s civic participation. [2][7]

Despite pockets of successful intervention (cases where coordinated complaints, helpline support and police action have led to takedowns and temporary relief), many survivors remain sceptical about long‑term remedies. High court advocate Syed Miqdad Mehdi and seasoned gender experts argue that laws such as the Prevention of Electronic Crimes Act 2016 are evolving but implementation lags behind technological advances, and that specialised courts and victim‑friendly mechanisms need greater sensitisation and resources. Until courts, investigators and platforms produce consistent redress, the threat of AI‑enabled harassment will continue to curtail women’s public lives and livelihoods. [1]

Shukria’s story is emblematic of this broader pattern: forced out of journalism by the fallout of a targeted online smear, she remains unable to practise her profession nearly two years later. “Being on air, reporting, writing: journalism became a crime in my own home. My parents forbade me from going to work. That was the day I died mentally,” she told Asian News. Her desire to revive her career after marriage underscores the personal costs of TFGBV and the urgent need for coordinated legal, technological and societal responses that allow women to work, speak and participate without fear. [1]

📌 Reference Map:

  • [1] (Asian News Network) - Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 4, Paragraph 5, Paragraph 6, Paragraph 7, Paragraph 9, Paragraph 10
  • [2] (Dawn) - Paragraph 2, Paragraph 3, Paragraph 8, Paragraph 9
  • [3] (Dawn) - Paragraph 8
  • [4] (Hum) - Paragraph 2
  • [5] (The News) - Paragraph 8
  • [6] (The Reporters) - Paragraph 2
  • [7] (UNDP) - Paragraph 9

Source: Noah Wire Services