

Ravie Lakshmanan · Mar 11, 2026 · Artificial Intelligence / Browser Security

Researchers Trick Perplexity's Comet AI Browser Into Phishing Scam in Under 4 Minutes

Agentic web browsers that leverage artificial intelligence (AI) capabilities to autonomously execute actions across multiple websites on behalf of a user can be trained and tricked into falling prey to phishing and scam traps.

The attack, at its core, takes advantage of AI browsers' tendency to reason about their actions and uses that reasoning against the model itself to lower its security guardrails, Guardio said in a report shared with The Hacker News ahead of publication.

"The AI now operates in real time, inside messy and dynamic pages, while continuously requesting data, making decisions, and narrating its actions along the way. Well, 'narrating' is quite an understatement – it blabbers, and way too much!," security researcher Shaked Chen said.

"This is what we call Agentic Blabbering: the AI Browser exposing what it sees, what it believes is happening, what it plans to do next, and what signals it considers suspicious or safe."

By intercepting this traffic between the browser and the AI services running on the vendor's servers and feeding it as input to a Generative Adversarial Network (GAN), Guardio said it was able to make Perplexity's Comet AI browser fall victim to a phishing scam in under four minutes.

The research builds on prior techniques like VibeScamming and Scamlexity, which found that vibe-coding platforms and AI browsers can be coaxed into generating scam pages or carrying out malicious actions via hidden prompt injections. In other words, with the AI agent handling tasks without constant human supervision, the attack surface shifts: a scam no longer has to deceive a user. Rather, it aims to trick the AI model itself.

"If you can observe what the agent flags as suspicious, hesitates on, and more importantly, what it thinks and blabbers about the page, you can use that as a training signal," Chen explained. "The scam evolves until the AI Browser reliably walks into the trap another AI set for it."

The idea, in a nutshell, is to build a "scamming machine" that iteratively optimizes and regenerates a phishing page until the agentic browser stops complaining and proceeds to carry out the threat actor's bidding, such as entering a victim's credentials on a bogus web page designed to pull off a refund scam.

What makes this attack interesting and dangerous is that once the fraudster iterates on a web page until it works against a specific AI browser, it works on all users who rely on the same agent. Put differently, the target has shifted from the human user to the AI browser.

"This reveals the unfortunate near future we face: scams will not just be launched and adjusted in the wild, they will be trained offline, against the exact model millions rely on, until they work flawlessly on first contact," Guardio said. "Because when your AI Browser explains why it stopped, it teaches attackers how to bypass it."

The disclosure comes as Trail of Bits demonstrated four prompt injection techniques against the Comet browser to extract users' private information from services like Gmail by exploiting the browser's AI assistant and exfiltrating the data to an attacker's server when the user asks to summarize a web page under the attacker's control.

Last week, Zenity Labs also detailed two zero-click attacks affecting Perplexity's Comet that use indirect prompt injection seeded inside meeting invitations to exfiltrate local files to an external server (aka PerplexedComet) or hijack a user's 1Password account if the password manager extension is installed and unlocked. The issues, collectively codenamed PerplexedBrowser, have since been addressed by the AI company.

This is achieved through a prompt injection technique called intent collision, which occurs "when the agent merges a benign user request with attacker-controlled instructions from untrusted web data into a single execution plan, without a reliable way to distinguish between the two," security researcher Stav Cohen said.

Prompt injection attacks remain a fundamental security challenge for large language models (LLMs) and for integrating them into organizational workflows, largely because completely eliminating these vulnerabilities may not be feasible. In December 2025, OpenAI noted that such weaknesses are "unlikely to ever" be fully resolved in agentic browsers, although the associated risks can be reduced through automated attack discovery, adversarial training, and new system-level safeguards.
