Click any tag below to further narrow down your results
Links
This article discusses a security vulnerability in the Netty library related to SMTP command injection, allowing attackers to manipulate email sending. The flaw bypasses established email security protocols like SPF, DKIM, and DMARC. The author highlights the role of AI in discovering the vulnerability and generating a patch.
Cato Networks revealed HashJack, a vulnerability that uses the URL fragment to hide malicious commands for AI browser assistants. This allows attackers to manipulate AI behavior without compromising the actual website, leading to risks like credential theft and unauthorized data access.
The author reports a security vulnerability in Okta's nextjs-auth0 project and submits a patch, but the contribution is misattributed to another developer. Despite raising concerns, the maintainer acknowledges using AI for the commit, resulting in confusion and unresolved issues around proper credit. The author questions the reliability of AI tools and raises concerns about Okta's response to security vulnerabilities.
Researchers revealed a serious security flaw in Docker's Ask Gordon AI that allowed attackers to execute code and steal sensitive data. The vulnerability, called DockerDash, exploited unverified metadata in Docker images, which the AI treated as executable commands. Docker has fixed the issue in version 4.50.0.
This article discusses Invicti's AI-driven approach to application security. It highlights how AI can help developers manage vulnerabilities more effectively, automate tasks, and provide targeted remediation guidance. The service aims to bridge gaps in traditional security testing by improving coverage and reducing noise from findings.
Researchers discovered a vulnerability in ChatGPT that allows the exfiltration of user data, with the attack sending data directly from ChatGPT servers. This exploit, called ZombieAgent, builds on a previous attack known as ShadowLeak and demonstrates the ongoing security challenges in AI chatbots.
This article outlines five key security features expected to dominate in 2026, including supply chain malware detection and AI-based vulnerability management. It also highlights three important capabilities that should be prioritized, such as advanced application detection and real-time AI threat modeling.
The article details a vulnerability found in Google Calendar that allows attackers to bypass privacy controls using natural language prompts embedded in calendar invites. This exploit demonstrates the challenges of securing AI-integrated applications, where malicious intent can be hidden in seemingly benign language.
Google Gemini's Command-Line Interface (CLI) has been found to be vulnerable to prompt injection attacks, allowing for potential arbitrary code execution. This security flaw raises concerns about the safety and reliability of utilizing AI models in various applications.
The Comet AI browser from Perplexity has raised significant security concerns after it was revealed that it could be manipulated by malicious websites. Unlike traditional browsers, AI browsers like Comet can execute commands and remember user interactions, making them vulnerable to exploitation if not designed with robust security measures. The article outlines the fundamental flaws in AI browser design and suggests necessary improvements to enhance user safety.
The article discusses a critical vulnerability identified in NVIDIA's software, designated CVE-2025-23266, which poses significant risks to AI systems using NVIDIA hardware. It highlights the implications of this vulnerability, potential exploits, and the necessity for immediate patching by users to safeguard their systems.
Tonic Security offers a context-driven Exposure Management platform designed to enhance visibility and streamline the remediation of vulnerabilities across diverse environments. By leveraging AI and a Security Data Fabric, Tonic transforms unstructured data into actionable insights, allowing organizations to prioritize risks and automate data management tasks effectively.
Google is leveraging AI to enhance cybersecurity defenses, focusing on key areas such as agentic capabilities, new security models, and public-private collaborations. Notable advancements include the AI agent Big Sleep, which identifies vulnerabilities, and new tools like Timesketch and FACADE that streamline forensic investigations and insider threat detection. The company emphasizes safe and responsible AI deployment to reshape the future of cybersecurity.
A critical vulnerability has been discovered in Red Hat OpenShift AI, potentially allowing unauthorized access to sensitive data. The flaw affects multiple versions and requires immediate attention from users to mitigate any risks associated with exploitation. Users are urged to apply the latest security updates to protect their systems.