Saved February 14, 2026
Do you care about this?
Aikido Security has identified a vulnerability in GitHub Actions and GitLab CI/CD workflows that allows AI agents to execute malicious instructions, potentially leaking sensitive information. The flaw affects multiple companies and demonstrates how AI prompt injection can compromise software supply chains.
If you do, here's more
Aikido Security recently identified a new vulnerability, dubbed PromptPwnd, affecting GitHub Actions and GitLab CI/CD pipelines when used with AI agents like Gemini CLI and OpenAI Codex. The issue arises from untrusted user input being injected into AI prompts, allowing these agents to execute privileged commands. This flaw affects at least five Fortune 500 companies, with indications that many others may also be vulnerable.
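To make the pattern concrete, here is a hypothetical GitHub Actions workflow illustrating the class of flaw described (a sketch for illustration only, not taken from any affected company's pipeline; the workflow name, step names, and `gemini` invocation are assumptions):

```yaml
# Hypothetical workflow showing the vulnerable pattern: attacker-controlled
# issue text is expanded directly into the prompt of an AI agent step that
# runs with a repository-scoped token.
name: ai-triage
on:
  issues:
    types: [opened]

jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Summarize issue with AI agent
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # VULNERABLE: the expression below is expanded by the workflow
          # runner before the shell executes, so a crafted issue body can
          # inject both shell text and instructions for the AI agent.
          gemini -p "Summarize this issue: ${{ github.event.issue.body }}"
```

Routing the untrusted text through an environment variable instead of inline expansion would prevent the shell-injection half of this, but the prompt-injection risk remains as long as attacker-controlled content reaches a privileged agent's prompt.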
Aikido's research highlights how AI models can be misled into interpreting malicious input as legitimate instructions, leading to leaked secrets or manipulated workflows. The vulnerability was demonstrated in a controlled environment, where Aikido was able to exploit the issue without using real tokens. Google's Gemini CLI was specifically impacted, and the company patched the vulnerability just four days after being notified.
To mitigate the risk, Aikido recommends restricting AI agents' access to sensitive actions, keeping untrusted user content out of prompts, treating AI output as untrusted, and limiting the exposure of high-privilege tokens. They also provide tools for organizations to check whether they are affected, including Aikido's scanning service and an Opengrep playground for analyzing .yml workflow files. As AI tools become more common in CI/CD workflows, the attack surface grows, making it essential for teams to understand and address these vulnerabilities.
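As a rough illustration of the kind of check such tooling can perform (a simplified sketch, not Aikido's actual scanner; the regexes cover only a small, assumed subset of untrusted context fields and AI CLIs), a script can flag workflow lines where attacker-controlled `github.event` context is interpolated directly into an AI agent invocation:

```python
import re

# Assumption: a small illustrative subset of event fields that are
# attacker-controlled, not an exhaustive list.
UNTRUSTED_CONTEXT = re.compile(
    r"\$\{\{\s*github\.event\.(issue|pull_request|comment|review)\.[\w.]+\s*\}\}"
)
# Assumption: only these two agent CLIs are checked for.
AI_CLI = re.compile(r"\b(gemini|codex)\b")

def flag_suspicious_lines(workflow_text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs where an AI CLI invocation
    interpolates untrusted event context directly into its arguments."""
    hits = []
    for lineno, line in enumerate(workflow_text.splitlines(), start=1):
        if AI_CLI.search(line) and UNTRUSTED_CONTEXT.search(line):
            hits.append((lineno, line.strip()))
    return hits

sample = '''
      - name: Summarize issue
        run: |
          gemini -p "Summarize: ${{ github.event.issue.body }}"
'''
print(flag_suspicious_lines(sample))
```

A real scanner would parse the YAML properly and track multi-line `run` blocks; a line-oriented regex pass like this only demonstrates the signal being looked for.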