In 2025, the cybersecurity landscape has been profoundly reshaped by the rapid adoption and exploitation of artificial intelligence (AI) technologies. According to a detailed report by IT Brew, cybercriminals and state-backed actors have leveraged generative AI and other advanced tools to conduct sophisticated attacks on IT infrastructure, causing widespread disruption and raising critical questions about the effectiveness of current defences. The arms race between attackers and defenders increasingly turns on who can deploy the latest AI-driven capabilities faster and more efficiently.
The origins of this challenge trace back to the mainstream adoption of AI technology, accelerated notably by OpenAI's release of ChatGPT in 2022. This surge in AI development has enabled hackers to automate and scale attacks such as phishing and deepfake impersonation. Experts such as Andre Piazza of BforeAI have highlighted how AI is used to extract intelligence from digital profiles and websites, allowing attackers to clone legitimate websites, including their full IT infrastructure, and to hijack user credentials with AI-generated phishing emails. Deepfake technology has also emerged as a critical threat vector, as demonstrated by Jumio's Reinhard Hochrieser. With AI tools, criminals can create convincing fake videos or voice calls to impersonate individuals, facilitating fraud with minimal technical effort. Hochrieser recounted producing a deepfake video from a single Instagram photo in just two minutes, underscoring how accessible the technology is for malicious use.
These AI-driven tactics are not limited to isolated incidents but permeate multiple sectors. Retailers, for example, have faced a surge in AI-generated fraud attempts, particularly during peak shopping seasons, when attackers inundate companies with deepfake calls and fake deals on social media. According to Axios, nearly a third of fraud attempts against large retail firms now involve AI-generated content, with some retailers receiving over a thousand such calls daily. These sophisticated social engineering attacks often cause significant financial losses per incident, requiring employees and consumers alike to verify online offers directly through official channels.
Moreover, cybersecurity researchers have raised alarms about malicious large language models (LLMs) purpose-built for illicit activities. A TechRadar investigation revealed that criminal groups are deploying unrestricted AI tools such as WormGPT 4 and KawaiiGPT to automate malware production, craft sophisticated phishing campaigns, and generate ransom notes at speed. These tools significantly lower the skill barrier, enabling even less technically skilled actors to launch damaging attacks autonomously. The proliferation of these malicious LLMs on platforms such as Telegram underscores the accelerating threat landscape and the urgent need for improved regulation and response strategies.
Adding a further layer of complexity, Microsoft researchers have uncovered an alarming encryption flaw in popular AI chatbots, dubbed "Whisper Leak." This vulnerability allows eavesdroppers to infer the content of encrypted conversations by analysing metadata patterns, such as data packet sizes and timing, without ever decrypting the messages. Although developers including Microsoft and OpenAI have implemented some mitigations, not all providers have responded adequately. This architectural issue risks exposing sensitive discussions, particularly over unsecured networks; security experts therefore recommend caution when discussing confidential information with AI chatbots and advocate VPNs and other protections to minimise exposure.
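To make the side channel concrete, the sketch below illustrates the general idea in Python: an observer who sees only packet sizes and inter-arrival times, never the plaintext, trains a classifier to guess a conversation's topic. The data is entirely synthetic and the feature set is an assumption chosen for illustration; the reporting does not detail the researchers' actual methodology.

```python
# Hypothetical sketch of a traffic-analysis side channel in the spirit of
# "Whisper Leak": nothing is decrypted, yet the topic of a streamed chatbot
# reply is guessed from packet sizes and timing alone. Data is synthetic;
# the per-topic statistics below are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synth_trace(topic: int, n_packets: int = 120) -> np.ndarray:
    """Simulate one encrypted streaming session as (size, gap) pairs.
    Each topic gets slightly different chunk-length and pacing statistics,
    standing in for real differences in model output."""
    mean_size = 90 + 25 * topic          # bytes per streamed chunk
    mean_gap = 0.04 + 0.015 * topic      # seconds between chunks
    sizes = rng.normal(mean_size, 30, n_packets).clip(40, 400)
    gaps = rng.exponential(mean_gap, n_packets)
    return np.column_stack([sizes, gaps])

def features(trace: np.ndarray) -> np.ndarray:
    """Summary statistics an eavesdropper can compute without decryption."""
    sizes, gaps = trace[:, 0], trace[:, 1]
    return np.array([sizes.mean(), sizes.std(), np.median(sizes),
                     gaps.mean(), gaps.std(), sizes.sum()])

# Labelled dataset: 3 conversation topics, 300 simulated sessions each.
X = np.array([features(synth_trace(t)) for t in range(3) for _ in range(300)])
y = np.repeat(np.arange(3), 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"topic inferred from metadata alone: "
      f"{accuracy_score(y_te, clf.predict(X_te)):.0%} accuracy")
```

This framing also suggests why mitigations of the kind the vendors are said to have deployed can work: padding streamed responses toward uniform sizes or jittering their timing decorrelates the observable metadata from the underlying content, starving such a classifier of signal.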
From a broader law enforcement perspective, Europol’s 2025 European Serious Organised Crime Threat Assessment paints a concerning picture of how AI is transforming organised crime. The agency warns that AI enables criminal enterprises to operate globally with unprecedented efficiency, generating multilingual scam messages, impersonating victims, and even producing AI-generated child abuse material. The report also anticipates the emergence of autonomous, AI-controlled criminal networks that could execute complex crimes without direct human involvement. Current major threats include cyberattacks, drug and arms trafficking, migrant smuggling, and environmental crime, all increasingly facilitated by AI.
State-backed actors are also investing heavily in AI for cyber warfare and disinformation campaigns. A Microsoft report indicates a dramatic increase in AI-driven cyberattacks by countries such as Russia, China, Iran, and North Korea, with the United States as the primary target. In July 2025 alone, over 200 incidents of AI-generated fake content were detected, a sharp rise from previous years. Tactics include crafting convincing phishing emails, cloning government officials via deepfakes, and employing automated hacking techniques. Despite denials from some states, such as Iran, security experts warn of the pressing need to modernise cybersecurity infrastructure to counter these threats effectively.
On the defensive front, AI also offers clear upsides for security operations. Elyse Gunn, tCISO at Nasuni, notes that generative AI can offload lower-level help desk tasks, freeing human experts to concentrate on higher-value, proactive threat analysis and mitigation. Similarly, Andre Piazza points to the adoption of agentic AI in security operations centres, which not only makes existing tasks more efficient but also introduces capabilities such as predictive AI to anticipate attacks before they materialise.
Even so, defenders should not assume parity. HackerOne CEO Kara Sprague cautions that attackers currently hold an advantage because they can deploy AI tools free of the constraints that bind legitimate organisations, such as legal oversight and maintenance responsibilities. This agility lets cybercriminals adopt and operationalise AI-driven attack methods faster, complicating defence efforts.
In summary, while AI unquestionably escalates the scale and sophistication of cyber threats, it also equips defenders with advanced tools to counter these evolving risks. The balance between these forces will likely define cybersecurity in the coming years, making it imperative for organisations, governments, and security professionals to invest in AI-based defences and maintain vigilance against increasingly automated and AI-enhanced criminal tactics.
📌 Reference Map:
- [1] IT Brew - Paragraphs 1, 2, 3, 6, 7, 9, 10, 11
- [2] Axios - Paragraph 4
- [3] TechRadar - Paragraph 5
- [4] LiveScience - Paragraph 6
- [5] Reuters/Europol - Paragraph 7
- [6] AP News/Microsoft - Paragraph 8
Source: Noah Wire Services