3 links
tagged with all of: ai + gemini + security + google
Click any tag below to further narrow down your results
Links
Google Gemini's Command-Line Interface (CLI) has been found to be vulnerable to prompt injection attacks, allowing for potential arbitrary code execution. This security flaw raises concerns about the safety and reliability of utilizing AI models in various applications.
Significant vulnerabilities in Google's Gemini AI models have been identified, exposing users to various injection attacks and data exfiltration. Researchers emphasize the need for enhanced security measures as these AI tools become integral to user interactions and sensitive information handling.
Security researchers at Trail of Bits have discovered that Google's Gemini tools are vulnerable to image-scaling prompt injection attacks, allowing malicious prompts to be embedded in images that can manipulate the AI's behavior. Google does not classify this as a security vulnerability due to its reliance on non-default configurations, but researchers warn that such attacks could exploit AI systems if not properly mitigated. They recommend avoiding image downscaling in agentic AI systems and implementing systematic defenses against prompt injection.