The Alarming Surge of Deepfakes: A Call for Urgent Action
Researchers from the Oxford Internet Institute (OII) at the University of Oxford have uncovered a staggering increase in the accessibility of artificial intelligence (AI) tools designed to create deepfake images of identifiable individuals. The study, titled "Deepfakes on Demand," identifies nearly 35,000 such tools available for public download on platforms like Civitai. Since late 2022, these generators have been downloaded around 15 million times, with women overwhelmingly the primary targets.
Will Hawkins, a doctoral student at OII and lead author of the study, expressed grave concerns over the findings, noting, “There is an urgent need for more robust technical safeguards, clearer and more proactively enforced platform policies, and new regulatory approaches to address the creation and distribution of these harmful AI models." The researchers caution that their insights may only represent the "tip of the iceberg," as their analysis focuses solely on publicly available models, potentially omitting more insidious practices such as the creation of child sexual abuse material.
Women like Emma Watson, the star of the "Harry Potter" franchise and a current student at Oxford, frequently fall victim to such intimate deepfakes. The data shows that about 96% of these deepfake models target identifiable women, from globally renowned celebrities to ordinary social media users. This not only harms individual women but also reflects the broader misogyny prevalent in digital spaces.
The issue of deepfakes is not confined to individual victimisation. It resonates at a societal level and has made headlines around the globe. High-profile individuals such as Martin Lewis have recently spoken out against the risks associated with deepfakes. Lewis revealed that a viewer lost £140,000 to a scam advert employing an AI-generated video of him, stating, "I’ve had enough of this; it’s time something was done." His remarks underscore the urgent need for both legal and technological safeguards.
In the UK, legislative steps are being taken to combat this growing menace. Sharing sexually explicit deepfake images was made a criminal offence under the Online Safety Act 2023. Moreover, the UK Government aims to make it easier to prosecute those who create such images through its ongoing Crime and Policing Bill, which is currently at the committee stage. Data from the Revenge Porn Helpline indicates a significant rise in intimate image abuse reports, with a 20.9% increase in 2024 alone.
Yet the urgency to address deepfakes is not restricted to the UK. In the United States, the Biden administration has also pressed for action against the rise of AI-generated sexually explicit imagery. With generative AI tools making realistic fake images alarmingly easy to create, officials have sought cooperation from tech companies and financial institutions alike to prevent the creation and dissemination of such harmful content. Advocates argue that voluntary compliance is not enough and that comprehensive legislative reform is needed to provide oversight and enforcement.
Globally, countries like South Korea face their own deepfake crises. The nation saw over 800 investigations into deepfake sex crimes in just the first nine months of 2024, a stark increase from 156 cases reported in 2021. Against a backdrop of deep-seated sexism, women continue to use social media defiantly while grappling with the risk of being targeted with explicit content. Although South Korea has begun to legislate against the creation and distribution of such images, many victims say they feel unprotected, highlighting the challenges that remain.
Looking ahead, it is clear that while advancements in AI provide remarkable technological capabilities, they also carry significant risks, particularly for women. Legislation such as the U.S. Congress's recent "Take It Down Act" aims to criminalise non-consensual deepfake pornography and to require social media platforms to remove harmful content expeditiously. Success in combating these online dangers requires not only legal frameworks but a cultural shift to dismantle the oppressive structures that enable such harassment.
The alarming rise of deepfakes should galvanise public discourse and prompt immediate action across the tech industry and legislative bodies. As the call for regulation intensifies, the focus must remain on protecting victims and preventing the exploitation that deepfakes facilitate, ensuring that technology serves humanity rather than undermining it.
Reference Map:
- Paragraph 1 – [[1]](https://www.oxfordmail.co.uk/news/25144023.oxford-university-uncovers-dramatic-rise-deepfakes/?ref=rss), [[2]](https://apnews.com/article/c76c46b48e872cf79ded5430e098e65b)
- Paragraph 2 – [[1]](https://www.oxfordmail.co.uk/news/25144023.oxford-university-uncovers-dramatic-rise-deepfakes/?ref=rss), [[3]](https://www.huffingtonpost.es/politica/deepfakes-porno-digital-falso-ia-atacar-mujeres-politicas-activistas-periodistas.html)
- Paragraph 3 – [[1]](https://www.oxfordmail.co.uk/news/25144023.oxford-university-uncovers-dramatic-rise-deepfakes/?ref=rss), [[4]](https://www.ft.com/content/9eba22b9-a113-47e5-9a8c-2306abf6ec36)
- Paragraph 4 – [[1]](https://www.oxfordmail.co.uk/news/25144023.oxford-university-uncovers-dramatic-rise-deepfakes/?ref=rss), [[5]](https://time.com/6589263/taylor-swift-deepfakes-legal-protections/)
- Paragraph 5 – [[1]](https://www.oxfordmail.co.uk/news/25144023.oxford-university-uncovers-dramatic-rise-deepfakes/?ref=rss), [[6]](https://qa.time.com/6308786/nina-jankowicz/)
- Paragraph 6 – [[1]](https://www.oxfordmail.co.uk/news/25144023.oxford-university-uncovers-dramatic-rise-deepfakes/?ref=rss), [[7]](https://time.com/7277746/ai-deepfakes-take-it-down-act-2025/)
Source: Noah Wire Services