HomeSample Page

Sample Page Title


Ravie LakshmananFeb 03, 2026Synthetic Intelligence / Vulnerability

Docker Fixes Essential Ask Gordon AI Flaw Permitting Code Execution through Picture Metadata

Cybersecurity researchers have disclosed particulars of a now-patched safety flaw impacting Ask Gordon, a synthetic intelligence (AI) assistant constructed into Docker Desktop and the Docker Command-Line Interface (CLI), that could possibly be exploited to execute code and exfiltrate delicate information.

The vital vulnerability has been codenamed DockerDash by cybersecurity firm Noma Labs. It was addressed by Docker with the discharge of model 4.50.0 in November 2025.

“In DockerDash, a single malicious metadata label in a Docker picture can be utilized to compromise your Docker atmosphere by means of a easy three-stage assault: Gordon AI reads and interprets the malicious instruction, forwards it to the MCP [Model Context Protocol] Gateway, which then executes it by means of MCP instruments,” Sasi Levi, safety analysis lead at Noma, mentioned in a report shared with The Hacker Information.

“Each stage occurs with zero validation, benefiting from present brokers and MCP Gateway structure.”

Profitable exploitation of the vulnerability might end in critical-impact distant code execution for cloud and CLI programs, or high-impact information exfiltration for desktop purposes.

The issue, Noma Safety mentioned, stems from the truth that the AI assistant treats unverified metadata as executable instructions, permitting it to propagate by means of completely different layers sans any validation, permitting an attacker to sidestep safety boundaries. The result’s {that a} easy AI question opens the door for device execution.

With MCP performing as a connective tissue between a big language mannequin (LLM) and the native atmosphere, the problem is a failure of contextual belief. The issue has been characterised as a case of Meta-Context Injection.

“MCP Gateway can not distinguish between informational metadata (like a typical Docker LABEL) and a pre-authorized, runnable inside instruction,” Levi mentioned. “By embedding malicious directions in these metadata fields, an attacker can hijack the AI’s reasoning course of.”

In a hypothetical assault situation, a menace actor can exploit a vital belief boundary violation in how Ask Gordon parses container metadata. To perform this, the attacker crafts a malicious Docker picture with embedded directions in Dockerfile LABEL fields. 

Whereas the metadata fields could appear innocuous, they grow to be vectors for injection when processed by Ask Gordon AI. The code execution assault chain is as follows –

The info exfiltration vulnerability weaponizes the identical immediate injection flaw however takes purpose at Ask Gordon’s Docker Desktop implementation to seize delicate inside information concerning the sufferer’s atmosphere utilizing MCP instruments by benefiting from the assistant’s read-only permissions.

The gathered data can embrace particulars about put in instruments, container particulars, Docker configuration, mounted directories, and community topology.

It is price noting that Ask Gordon model 4.50.0 additionally resolves a immediate injection vulnerability found by Pillar Safety that might have allowed attackers to hijack the assistant and exfiltrate delicate information by tampering with the Docker Hub repository metadata with malicious directions.

“The DockerDash vulnerability underscores your have to deal with AI Provide Chain Danger as a present core menace,” Levi mentioned. “It proves that your trusted enter sources can be utilized to cover malicious payloads that simply manipulate AI’s execution path. Mitigating this new class of assaults requires implementing zero-trust validation on all contextual information supplied to the AI mannequin.”

Related Articles

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Latest Articles