Bear in mind when your largest browser fear was by accident clicking a sketchy advert? Effectively, the browser firm Courageous simply uncovered a vulnerability in Perplexity’s Comet browser that safety consultants are calling the “Deadly Trifecta”: When AI has entry to untrusted knowledge (web sites), non-public knowledge (your accounts), and might talk externally (ship messages).
- Researchers found they may conceal malicious directions in common net content material (assume Reddit feedback and even invisible textual content on web sites).
- When customers clicked “Summarize this web page,” the AI would execute these hidden instructions like a sleeper agent activated by a code phrase.
- The AI then adopted the hidden directions to:
- Navigate to the consumer’s Perplexity account and seize their e mail.
- Set off a password reset to get a one-time password.
- Bounce over to Gmail to learn that password.
- Ship each the e-mail and password again to the attacker through a Reddit remark.
- Recreation over. Account hijacked.
Right here’s what makes this additional spicy
This “bug” is definitely a elementary flaw in how AI works. As one safety researcher put it: “All the pieces is simply textual content to an LLM.” So your browser’s AI actually can’t inform the distinction between your command to “summarize this web page” and hidden textual content saying “steal my banking credentials.” They’re each simply… phrases.
The Hacker Information crowd is break up on this. Some argue this makes AI browsers inherently unsafe, like constructing a lock that may’t distinguish between a key and a crowbar. Others say we simply want higher guardrails, like requiring consumer affirmation for delicate actions or working AI in remoted sandboxes.
Why this issues
We’re watching a collision between Silicon Valley’s “transfer quick and break issues” mentality and the fact that “issues” now contains an agent who can entry your checking account. And the uncomfortable reality = each AI browser with these capabilities has this vulnerability. Why do you assume OpenAI solely presents ChatGPT Agent by a sandboxed cloud occasion proper now?
Now, Perplexity patched this particular assault, however the underlying drawback stays: How do you construct an AI assistant that’s each useful and might’t be turned in opposition to you?
Courageous suggests a number of fixes
- Clearly separating consumer instructions from net content material.
- Requiring consumer affirmation for delicate actions.
- Isolating AI looking from common looking.