HomeVulnerabilityDocker Fixes Important Ask Gordon AI Flaw Permitting Code Execution by way...

Docker Fixes Important Ask Gordon AI Flaw Permitting Code Execution by way of Picture Metadata

Cybersecurity researchers have disclosed particulars of a now-patched security flaw impacting Ask Gordon, a man-made intelligence (AI) assistant constructed into Docker Desktop and the Docker Command-Line Interface (CLI), that may very well be exploited to execute code and exfiltrate delicate information.

The essential vulnerability has been codenamed DockerDash by cybersecurity firm Noma Labs. It was addressed by Docker with the discharge of model 4.50.0 in November 2025.

“In DockerDash, a single malicious metadata label in a Docker picture can be utilized to compromise your Docker surroundings by means of a easy three-stage assault: Gordon AI reads and interprets the malicious instruction, forwards it to the MCP [Model Context Protocol] Gateway, which then executes it by means of MCP instruments,” Sasi Levi, security analysis lead at Noma, stated in a report shared with The Hacker Information.

“Each stage occurs with zero validation, benefiting from present brokers and MCP Gateway structure.”

Profitable exploitation of the vulnerability may end in critical-impact distant code execution for cloud and CLI methods, or high-impact information exfiltration for desktop purposes.

See also  Microsoft Patches Zero-Day Flaw Exploited by North Korea's Lazarus Group

The issue, Noma Safety stated, stems from the truth that the AI assistant treats unverified metadata as executable instructions, permitting it to propagate by means of completely different layers sans any validation, permitting an attacker to sidestep security boundaries. The result’s {that a} easy AI question opens the door for instrument execution.

With MCP performing as a connective tissue between a big language mannequin (LLM) and the native surroundings, the problem is a failure of contextual belief. The issue has been characterised as a case of Meta-Context Injection.

“MCP Gateway can not distinguish between informational metadata (like a regular Docker LABEL) and a pre-authorized, runnable inside instruction,” Levi stated. “By embedding malicious directions in these metadata fields, an attacker can hijack the AI’s reasoning course of.”

In a hypothetical assault state of affairs, a risk actor can exploit a essential belief boundary violation in how Ask Gordon parses container metadata. To perform this, the attacker crafts a malicious Docker picture with embedded directions in Dockerfile LABEL fields. 

See also  Alert to Kali Linux admins: Get the brand new signing key or no distro updates for you

Whereas the metadata fields could seem innocuous, they grow to be vectors for injection when processed by Ask Gordon AI. The code execution assault chain is as follows –

  • The attacker publishes a Docker picture containing weaponized LABEL directions within the Dockerfile
  • When a sufferer queries Ask Gordon AI in regards to the picture, Gordon reads the picture metadata, together with all LABEL fields, benefiting from Ask Gordon’s incapability to distinguish between legit metadata descriptions and embedded malicious directions
  • Ask Gordon to ahead the parsed directions to the MCP gateway, a middleware layer that sits between AI brokers and MCP servers.
  • MCP Gateway interprets it as a regular request from a trusted supply and invokes the required MCP instruments with none further validation
  • MCP instrument executes the command with the sufferer’s Docker privileges, reaching code execution

The information exfiltration vulnerability weaponizes the identical immediate injection flaw however takes purpose at Ask Gordon’s Docker Desktop implementation to seize delicate inside information in regards to the sufferer’s surroundings utilizing MCP instruments by benefiting from the assistant’s read-only permissions.

See also  QNAP Releases Patch for two Crucial Flaws Threatening Your NAS Gadgets

The gathered info can embrace particulars about put in instruments, container particulars, Docker configuration, mounted directories, and community topology.

It is price noting that Ask Gordon model 4.50.0 additionally resolves a immediate injection vulnerability found by Pillar Safety that might have allowed attackers to hijack the assistant and exfiltrate delicate information by tampering with the Docker Hub repository metadata with malicious directions.

“The DockerDash vulnerability underscores your have to deal with AI Provide Chain Threat as a present core risk,” Levi stated. “It proves that your trusted enter sources can be utilized to cover malicious payloads that simply manipulate AI’s execution path. Mitigating this new class of assaults requires implementing zero-trust validation on all contextual information offered to the AI mannequin.”

- Advertisment -spot_img
RELATED ARTICLES

LEAVE A REPLY

Please enter your comment!
Please enter your name here

- Advertisment -

Most Popular