Click any tag below to further narrow down your results
Links
Google fixed a serious vulnerability in its Gemini Enterprise AI that allowed attackers to embed malicious instructions in shared documents, leading to unauthorized access to sensitive corporate information. This flaw, discovered by Noma Labs, exploited the AI's retrieval system to execute commands without employee interaction.
Google Gemini's Command-Line Interface (CLI) has been found to be vulnerable to prompt injection attacks, allowing for potential arbitrary code execution. This security flaw raises concerns about the safety and reliability of utilizing AI models in various applications.
Google Gemini for Workspace can be exploited through prompt-injection attacks that generate misleading email summaries, potentially leading users to phishing sites without attachments or direct links. Researcher Marco Figueroa revealed this vulnerability, highlighting how hidden instructions in emails can manipulate Gemini's output, prompting users to trust false security alerts. Google is aware of the issue and is implementing defenses against such attacks.