Skip to main content
Tech team huddles around a laptop displaying Perplexity and Brave logos, with code patch notes highlighting the Comet fix.

Editorial illustration for Perplexity Patches BrowseSafe After Brave Reveals Comet Security Flaw

Perplexity BrowseSafe Patches Critical AI Security Flaw

Perplexity's BrowseSafe patches agent gaps after Brave finds Comet flaw

2 min read

AI search startup Perplexity has rushed to patch security vulnerabilities in its BrowseSafe product after a critical flaw was uncovered by Brave. The incident highlights growing concerns about potential manipulation of AI assistants through sophisticated attack techniques.

Cybersecurity researchers have long warned about emerging risks in generative AI systems. But Brave's discovery represents a tangible demonstration of how seemingly innocuous web content could be weaponized to hijack AI behavior.

The vulnerability centers on an increasingly common AI interaction method called indirect prompt injection. Such techniques allow malicious actors to embed hidden commands that can potentially redirect an AI assistant's responses or actions.

For tech companies racing to deploy intelligent agents, these security challenges represent more than just technical glitches. They signal fundamental questions about AI system integrity and the complex ways AI might be tricked into performing unintended tasks.

Perplexity's swift response suggests the company understands the potential reputational and technical risks at stake. But the broader implications for AI safety remain an ongoing concern in the rapidly evolving landscape of intelligent technologies.

The severity of the issue became clear in August 2025, when Brave discovered a security vulnerability in Comet. Using a technique known as indirect prompt injection, attackers hid commands in web pages or comments. The AI assistant then misinterpreted these hidden commands as user instructions while summarizing content.

Brave showed that this method could be used to steal sensitive information, including email addresses and one-time passwords. Perplexity argues that existing benchmarks like AgentDojo are insufficient for these threats. They typically rely on simple prompts like "Ignore previous instructions," whereas real-world websites contain complex, chaotic content where attacks can be easily concealed.

Defining the scope of real-world attacks To address this, Perplexity built the BrowseSafe Bench around three specific dimensions.

Related Topics: #AI security #Perplexity #Brave #Indirect prompt injection #Cybersecurity #AI assistants #Web vulnerability #Generative AI #Comet #AgentDojo

Perplexity's swift response to the Comet security vulnerability highlights the ongoing challenges in AI safety. Brave's discovery revealed a critical flaw where hidden commands could manipulate AI assistants into revealing sensitive information.

The indirect prompt injection technique exposes a fundamental weakness in how AI systems interpret and process web content. Attackers could potentially exploit this vulnerability to trick AI assistants into divulging personal data like email addresses and one-time passwords.

While Perplexity has patched the identified gaps, the incident raises serious questions about AI security. The ability to embed malicious instructions within seemingly innocent web content represents a sophisticated attack vector that developers must continuously monitor.

This vulnerability underscores the complex challenge of creating truly secure AI systems. As AI assistants become more integrated into daily digital interactions, protecting against such subtle manipulation techniques will be important for maintaining user trust and data privacy.

The Brave research serves as a critical reminder that AI security is an evolving landscape requiring constant vigilance and proactive defense strategies.

Further Reading

Common Questions Answered

How did Brave uncover the security vulnerability in Perplexity's BrowseSafe product?

Brave discovered a critical security flaw using an indirect prompt injection technique where attackers could hide commands in web pages or comments. The method allowed potential manipulation of AI assistants to misinterpret hidden commands as user instructions, potentially enabling sensitive information theft.

What specific risks does the indirect prompt injection technique pose for AI assistants?

The indirect prompt injection technique can trick AI assistants into revealing sensitive personal information like email addresses and one-time passwords. By embedding hidden commands in web content, attackers could potentially manipulate AI systems into performing unintended actions or disclosing confidential data.

How quickly did Perplexity respond to the security vulnerability in Comet?

Perplexity quickly rushed to patch the security vulnerabilities after Brave's discovery of the critical flaw in their BrowseSafe product. The swift response underscores the growing awareness and importance of addressing potential security risks in AI systems.