2 min read
|
Saved February 14, 2026
|
Copied!
Do you care about this?
Researchers revealed a serious security flaw in Docker's Ask Gordon AI that allowed attackers to execute code and steal sensitive data. The vulnerability, called DockerDash, exploited unverified metadata in Docker images, which the AI treated as executable commands. Docker has fixed the issue in version 4.50.0.
If you do, here's more
Researchers have identified a serious vulnerability in Ask Gordon, the AI assistant integrated into Docker Desktop and the CLI. Codenamed DockerDash by Noma Labs, this flaw allows attackers to execute code and steal sensitive data through a three-stage attack involving unverified metadata in Docker images. The vulnerability was patched in version 4.50.0 of Docker, released in November 2025.
The attack exploits the AI's failure to validate metadata, treating it as executable commands. An attacker can create a malicious Docker image with harmful instructions embedded in the Dockerfile's LABEL fields. When a victim queries Ask Gordon about this image, the AI misinterprets the malicious instructions as legitimate commands, passing them to the MCP (Model Context Protocol) Gateway for execution without proper checks. This breach can lead to remote code execution or significant data extraction from Docker Desktop environments.
The vulnerability highlights a critical trust issue in how Ask Gordon processes metadata. By failing to distinguish between harmless metadata and potentially harmful commands, it allows an attacker to manipulate the AI's responses and actions. Noma Labs emphasizes the need for zero-trust validation on contextual data to prevent such attacks. DockerDash serves as a stark reminder of the risks posed by relying on AI systems without rigorous security measures.
Questions about this article
No questions yet.