New AI Search Tools Pose Multiple Prompt Injection Risks

Indirect prompt injection is not only possible, but it’s a growing problem affecting all AI-powered browsers.

Published on Oct 24, 2025
New AI Search Tools Pose Multiple Prompt Injection Risks

AI-powered browsers are set to change the way we search the internet. Rather than having to pick key words to find the information they’re looking for, users can ask their questions in natural language, then receive fast, accurate responses. In recent days, OpenAI launched ChatGPT Atlas, its own foray into the AI browsers space, hoping to capitalize on the 800 million weekly active ChatGPT users.

But as the number of individuals using AI-browsers increases, we are seeing more attackers developing new ways to use this traffic to their advantage.

Perplexity’s Problem

One of the great features of Perplexity’s AI assistant, Comet, is that you can upload a screenshot, then ask questions about it. However, there are reports that screenshots have been used to maliciously inject prompts into AI engines. 

Traditional, text-based prompts undergo analysis and sanitization to ensure they are clear from malicious content. However, when using screenshots, malicious actors can bypass this sanitization by embedding almost-invisible text within the image, which is then read by the AI engine.

In their testing, security researchers at Brave were able to inject instructions into the engine by using “faint light blue text on a yellow background.” The text was read by Comet’s Optical Character Recognition (OCR) technology, which added the “hidden” information to the user’s query, without distinction. In Brave’s example, Comet acted on commands that instructed the browser to act maliciously.

Brave attempted similar experiments on Fellou, described as the world’s first AI browser. They say that Fellou showed more resistance than Perplexity did, but that “it still treats visible webpage content as trusted input to its LLM.” 

The Risk To AI Sidebars

Browser security firm SquareX has been conducting their own experiments on AI sidebars. These helpful assistants sit at the edge of your browser window and boost productivity by allowing you to ask questions or generate content. They can also proofread text for errors.

SquareX has revealed a flaw in Perplexity’s Comet and ChatGPT Atlas, saying that the systematic flaw may also be possible in Edge, Brave, and Firefox. These flaws enable attackers to spoof AI sidebars and manipulate the targeted user into installing a malicious AI tool. 

SquareX first pointed out the flaw in Perplexity, identifying that Comet could be tricked into “exfiltrating data, downloading malicious files and providing unauthorized access to enterprise apps, all without the victim’s knowledge.”

In their blog post, SquareX identifies several instances where users have installed spoofed sidebars, that then tricked users into performing malicious tasks or siphoned key information to the attackers.

The malicious extension requires host and storage permissions, but it has been pointed out that these permissions are commonly requested by browser extensions. 

Rethinking AI Security

These recent findings highlight yet again how innovation must be balanced with caution. Finding a way of achieving this balance is a challenge that IT leaders across organizations will be grappling with as we see AI capabilities rolled out more widely.