The Alarming Rise of Deepfake Technology: An Oxford Study Sheds Light on the Growing Threat

Researchers at the Oxford Internet Institute (OII) have revealed a disquieting trend in the proliferation of deepfake technology, identifying nearly 35,000 AI tools available for public download that can generate deepfake images of identifiable individuals. Their study, titled 'Deepfakes on Demand', found that these deepfake generators have been downloaded almost 15 million times since late 2022. It underscores a troubling reality: the vast majority of these impersonations (96%) target identifiable women, from well-known celebrities to ordinary social media users.

Lead author Will Hawkins, a doctoral student at the OII, emphasised the urgent need for enhanced technical safeguards and more robust regulatory measures to combat the creation and distribution of harmful AI models. Hawkins stated, “There is an urgent need for more robust technical safeguards, clearer and more proactively enforced platform policies, and new regulatory approaches to address the creation and distribution of these harmful AI models.” Given the low cost of producing these deepfakes, the researchers warn that even more egregious content—such as child sexual abuse material—could be on the rise, potentially lurking beyond public platforms.

The troubling implications of this study connect to a wider narrative involving the misuse of deepfake technology. Prominent figures like Martin Lewis, the founder of MoneySavingExpert.com, have become unwitting victims of deepfake scams. Lewis recently took to social media after discovering a fake video featuring him promoting a fraudulent investment scheme, an experience he characterised as “weird and pretty frightening.” He echoed calls for regulatory action, highlighting the urgent need for measures to protect consumers from these increasingly sophisticated scams.

Indeed, deepfake scams have become a growing concern. The Financial Times reports on how generative AI enables scammers to create convincing fake videos, often impersonating public figures to promote fraudulent investment schemes on platforms like Instagram and WhatsApp. Experts have urged users to remain vigilant, noting that many social media users are unaware of the capabilities of AI, making them more susceptible to these deceptive practices. Social media companies, while claiming to combat such content through AI moderation, have faced criticism for their inadequate responses.

Recent statistics bolster the urgency for regulatory action. Reports from the Revenge Porn Helpline indicate a dramatic increase in intimate image abuse, with instances rising by 20.9% in 2024, culminating in a total of 22,275 reports—the highest figure in the helpline's history. The UK government has taken steps to address this issue, making the sharing of sexually explicit deepfake images a criminal offence under the Online Safety Act 2023. Additional proposals aim to criminalise the creation of such content as part of the ongoing Crime and Policing Bill.

Efforts are being made at various levels to hold technology companies accountable. Rafał Brzoska, the head of InPost, is seeking to enlist around 150 notable Polish figures to sue Meta for the proliferation of deepfake scams using AI-generated impersonations on its platforms. Brzoska's initiative reflects a broader push for accountability in the tech industry, a sentiment echoed by Lewis, who warned against the “wild west” nature of current tech regulations, calling for immediate legislative action to protect vulnerable individuals from scams.

As the study from the Oxford Internet Institute suggests, society is witnessing only the tip of the iceberg when it comes to deepfake technology. The potential for more significant and harmful content looms large, underscoring the imperative for a unified and effective response. The technology’s rapid advancement poses a fundamental question: how can we balance the benefits of AI innovation with the pressing need for safeguarding users against its darker applications? This dialogue is becoming ever more critical as deepfakes increasingly infiltrate our digital lives.

Source: Noah Wire Services