
A group of safety researchers at SafeBreach has revealed a brand new exploitation approach that may co-opt Google’s Gemini AI assistant into performing each digital intrusions and physical-world actions.
The approach, which they name Focused Promptware, works via a specifically crafted Google Calendar invitation containing hidden malicious directions. As soon as the invite is accepted, it could set off a series response, giving hackers the power to learn delicate emails, steal knowledge, and even manipulate sensible house units akin to lights, home windows, and boilers.
Exploiting Gemini’s AI ‘context’
The assault hinges on an “oblique immediate injection” strategy, the place dangerous directions are hid inside the textual content of a calendar occasion’s title or description. When the sufferer later asks Gemini about upcoming occasions, the AI pulls the occasion particulars into its dialog “context” and unknowingly executes the hidden directions.
In managed demonstrations, the researchers stated they have been in a position to:
- Decide a goal’s location.
- Provoke a Zoom name with video streaming.
- Delete calendar entries.
- Entry and disclose e mail content material.
- Activate and management sensible house home equipment linked to the sufferer’s account.
The researchers defined that Gemini’s deep integration with Google Workspace purposes, Android machine capabilities, and related house units allows these malicious prompts to “escape” from one app and acquire management over others.
In contrast to many AI safety incidents that focus solely on knowledge theft or content material manipulation, this system extends into direct, real-world penalties. In a single demonstration, the researchers used Gemini to concern instructions to a wise house hub, opening shutters and powering on family gear with out the resident’s approval.
In one other demonstration, the AI was manipulated to open a web site designed to show the sufferer’s IP tackle and approximate geographic location. In response to the researchers, about 73% of the examined Promptware situations represented high- or critical-risk ranges, requiring pressing mitigations.
Google’s response
SafeBreach stated it privately disclosed the vulnerabilities to Google in February 2025. In an announcement acknowledging the analysis titled “Invitation Is All You Want,” Google stated it has since deployed a “multi-layer mitigation strategy” to dam such immediate injection makes an attempt. That technique consists of expanded consumer confirmations for delicate actions, URL sanitization and trust-level insurance policies, and AI content material classifiers designed to detect suspicious prompts.
“Working carefully with trade companions is essential to constructing stronger protections for all of our customers. To that finish, we’re lucky to have sturdy collaborative partnerships with quite a few researchers, akin to Ben Nassi (Confidentiality), Stav Cohen (Technion), and Or Yair (SafeBreach), in addition to different AI Safety researchers collaborating in our BugSWAT occasions and AI VRP program,” the Google GenAI Safety Group wrote in a June 2025 weblog publish.
“We respect the work of those researchers and others in the neighborhood to assist us crimson group and refine our defenses,” the corporate added.
A warning for all AI-powered apps
The researchers burdened that their findings have implications past Gemini, warning that any AI assistant related to exterior providers could possibly be weak to comparable assaults. In addition they cautioned that “0-click” variants requiring no consumer interplay might quickly emerge.
Their findings have been introduced at Black Hat USA and DEF CON 33 to assist organizations perceive, detect, and mitigate these rising threats.
Study what the ShinyHunters’ assault on Salesforce reveals about evolving cybercrime ways, and the best way to defend towards them.