6 min read
|
Saved February 14, 2026
|
Copied!
Do you care about this?
Google fixed a serious vulnerability in its Gemini Enterprise AI that allowed attackers to embed malicious instructions in shared documents, leading to unauthorized access to sensitive corporate information. This flaw, discovered by Noma Labs, exploited the AI's retrieval system to execute commands without employee interaction.
If you do, here's more
Google recently patched a serious vulnerability in its Gemini Enterprise AI assistant, identified as GeminiJack. This flaw allowed attackers to infiltrate sensitive corporate data by embedding malicious instructions in commonly used documents like Google Docs, calendar events, and emails. Researchers at Noma Labs highlighted that these prompt injection attacks could occur without any interaction from employees, meaning the targeted individuals would remain completely unaware while their data was being exfiltrated.
The malicious process begins with an attacker creating an innocent-looking document that contains hidden instructions. When an employee searches for routine information, like budget plans, Gemini Enterprise inadvertently pulls this poisoned document into its context. The AI interprets the malicious instructions as legitimate queries, which can lead to sensitive data being sent to the attackerβs server through an external image request. This method is particularly concerning because it masquerades as normal assistant behavior, making it harder to detect compared to previous vulnerabilities in AI systems.
In response, Google collaborated with Noma Labs to redesign how Gemini Enterprise interacts with its data retrieval and indexing systems. The update effectively separated Gemini Enterprise from its Vertex AI Search functionalities to mitigate similar risks in the future. Despite these improvements, experts warn that organizations must reconsider their security strategies. Traditional security measures like perimeter defenses and data loss prevention tools are inadequate against AI systems that can act as data exfiltration engines. Security experts suggest organizations treat AI assistants as critical infrastructure, limiting their access to sensitive data and closely monitoring their activities to prevent potential breaches.
Questions about this article
No questions yet.