Agentic AI browsers, web browsers enhanced with artificial intelligence capable of reasoning and performing complex tasks, are rapidly gaining traction, promising powerful new ways to interact with the internet. However, these innovations have simultaneously opened novel avenues for cyberattacks, particularly prompt injection attacks, which exploit how AI processes input data to manipulate its behaviour or extract sensitive information.

Prompt injection attacks involve inserting malicious instructions into the textual prompts that AI systems receive, causing unexpected, biased, or even harmful outputs. These can range from generating incorrect or offensive responses to more serious consequences like phishing, data theft, or misinformation. Google’s Red Team has identified prompt injection among the prevalent abuse methods targeting AI systems, along with data poisoning and backdoors. A particularly alarming form of exploitation is indirect prompt injection, where legitimate websites are weaponised to deliver hidden instructions to AI assistants embedded in browsers. Known as "HashJack," this method exploits URL fragments, portions of a website address often ignored by users, that do not leave the browser and evade traditional security scrutiny. When a victim interacts with such a site via an AI browser assistant, the embedded malicious prompts can trigger phishing attacks, data exfiltration, or the dissemination of dangerous false information, including incorrect medical advice.

One high-profile example of this vulnerability surfaced in Perplexity AI’s Comet browser, which provides AI-powered summarisation and assistance features. Security audits by firms like Brave and Guardio revealed that Comet was susceptible to indirect prompt injections that let attackers execute harmful commands, potentially compromising user accounts and sensitive data. The issue was underscored by LayerX cybersecurity, which dubbed the exploit "CometJacking", hidden URL prompts manipulating AI to leak personal content such as emails and calendar information. Though Perplexity initially minimised the severity of the flaw, they have since patched it, asserting that no exploitation occurred so far. These incidents underline wider concerns that integrating AI capabilities into browsers without mature security frameworks can expose users to new risks, especially when the AI’s decision-making is trusted blindly.

Beyond browsers, prompt injection risks span other AI applications. Meta AI’s chatbot system was found to have a flaw last year allowing users to access other users’ private prompts and generated responses by tweaking network traffic identifiers. This breach, patched promptly by Meta after disclosure by security researcher Sandeep Hodkasia, highlights that even established AI providers have struggled to secure user data fully, reinforcing the necessity for vigilant security practices in AI deployments.

Similarly, generative AI used in complex enterprise environments can be manipulated through second-order prompt injection. ServiceNow’s Now Assist platform, for example, enables task delegation between AI agents. Researchers discovered that a low-privilege agent could be tricked into issuing malicious requests carried out by higher-privileged agents, effectively turning the AI into a malicious insider capable of exposing sensitive corporate data. In this case, the vulnerability stemmed from default configuration settings rather than a technical bug, prompting experts to recommend enhanced oversight, disabling autonomous overrides, and strict role segmentation to mitigate these threats.

Meanwhile, security researchers have uncovered even more worrying AI-enabled cybercrime innovations. ESET recently identified "PromptLock," possibly the first AI-powered ransomware strain. By leveraging OpenAI's gpt-oss:20b model, PromptLock autonomously generates malicious code that can scan, steal, encrypt, or destroy data across platforms, signalling a new era where AI automation heightens the complexity and scale of cyberattacks. Although currently a proof-of-concept without direct destructive impact, PromptLock underscores the urgency of proactive cybersecurity in the AI age.

Given these layered threats, from browser-based prompt injections and phishing to advanced insider attacks and AI-automated ransomware, users and developers alike must exercise caution. Experts advise users to avoid sharing sensitive information through AI browsers, maintain up-to-date software, remain sceptical of AI-generated content, verify links and contact details rigorously, and employ security measures like multi-factor authentication and VPNs to bolster protection. At the development level, continuous security audits, prompt patching, configuration hardening, and supervising AI behaviour are essential steps to limit exploitation.

While AI-powered browsers and assistants hold great promise for enhancing productivity and user experience, their rapid deployment has outpaced the establishment of robust security practices, resulting in significant vulnerabilities. The evolving landscape demands heightened awareness and responsibility both from providers and users to navigate the balance between innovation and safety in an increasingly AI-integrated digital world.

📌 Reference Map:

  • [1] (ZDNET) - Paragraphs 1, 2, 3, 6, 7, 8, 9
  • [5] (The Register) - Paragraph 3
  • [2] (Tom's Hardware) - Paragraph 4
  • [6] (TIME) - Paragraph 4
  • [3] (Tom's Guide) - Paragraph 5
  • [4] (TechRadar) - Paragraph 6
  • [7] (IT Pro) - Paragraph 7

Source: Noah Wire Services