10 links tagged with all of: ai + security + vulnerability
Click any tag below to further narrow down your results
Links
This article discusses a security vulnerability in the Netty library related to SMTP command injection, allowing attackers to manipulate email sending. The flaw bypasses established email security protocols like SPF, DKIM, and DMARC. The author highlights the role of AI in discovering the vulnerability and generating a patch.
The author reports a security vulnerability in Okta's nextjs-auth0 project and submits a patch, but the contribution is misattributed to another developer. Despite raising concerns, the maintainer acknowledges using AI for the commit, resulting in confusion and unresolved issues around proper credit. The author questions the reliability of AI tools and raises concerns about Okta's response to security vulnerabilities.
Researchers revealed a serious security flaw in Docker's Ask Gordon AI that allowed attackers to execute code and steal sensitive data. The vulnerability, called DockerDash, exploited unverified metadata in Docker images, which the AI treated as executable commands. Docker has fixed the issue in version 4.50.0.
Researchers discovered a vulnerability in ChatGPT that allows the exfiltration of user data, with the attack sending data directly from ChatGPT servers. This exploit, called ZombieAgent, builds on a previous attack known as ShadowLeak and demonstrates the ongoing security challenges in AI chatbots.
This article outlines five key security features expected to dominate in 2026, including supply chain malware detection and AI-based vulnerability management. It also highlights three important capabilities that should be prioritized, such as advanced application detection and real-time AI threat modeling.
The article details a vulnerability found in Google Calendar that allows attackers to bypass privacy controls using natural language prompts embedded in calendar invites. This exploit demonstrates the challenges of securing AI-integrated applications, where malicious intent can be hidden in seemingly benign language.
Google Gemini's Command-Line Interface (CLI) has been found to be vulnerable to prompt injection attacks, allowing for potential arbitrary code execution. This security flaw raises concerns about the safety and reliability of utilizing AI models in various applications.
The Comet AI browser from Perplexity has raised significant security concerns after it was revealed that it could be manipulated by malicious websites. Unlike traditional browsers, AI browsers like Comet can execute commands and remember user interactions, making them vulnerable to exploitation if not designed with robust security measures. The article outlines the fundamental flaws in AI browser design and suggests necessary improvements to enhance user safety.
Tonic Security offers a context-driven Exposure Management platform designed to enhance visibility and streamline the remediation of vulnerabilities across diverse environments. By leveraging AI and a Security Data Fabric, Tonic transforms unstructured data into actionable insights, allowing organizations to prioritize risks and automate data management tasks effectively.
A critical vulnerability has been discovered in Red Hat OpenShift AI, potentially allowing unauthorized access to sensitive data. The flaw affects multiple versions and requires immediate attention from users to mitigate any risks associated with exploitation. Users are urged to apply the latest security updates to protect their systems.