Sweet Security offers a comprehensive solution for cloud defense, leveraging AI to identify and prioritize vulnerabilities. It provides real-time visibility and rapid response to threats, helping organizations secure their environments without frequent scans. The platform also simplifies compliance and governance processes.
In 2025, an AI system identified four previously unknown security issues in OpenSSL, three of which the system itself disclosed and which have since been fixed. The findings highlight AI's potential to proactively discover vulnerabilities in critical infrastructure.
Anthropic's report reveals that AI agents exploited smart-contract vulnerabilities in simulations representing over $550 million in potential losses. The agents also discovered new zero-day vulnerabilities, highlighting the urgent need for improved security measures in blockchain technology.
This article outlines various security risks associated with AI agents and their infrastructure, including issues like chat history exfiltration and prompt injection. It emphasizes the need for a comprehensive security platform to monitor and govern AI operations effectively.
This article outlines how Dux AI helps organizations manage security vulnerabilities by identifying exploitable risks and applying quick mitigations. It emphasizes the importance of acting swiftly to protect against threats in a rapidly changing environment. Dux aims to streamline the remediation process, allowing teams to focus on critical issues.
This article reviews the rise of agentic browsers, AI tools that autonomously navigate and perform tasks online. It highlights security vulnerabilities these browsers face and outlines the defensive measures implemented by developers. The piece also discusses the ongoing debate about the balance between autonomy and access to sensitive data.
A security researcher revealed how attackers can exploit Anthropic's Claude AI by using indirect prompt injections to extract user data. By tricking Claude into uploading files to the attacker's account, sensitive information, including chat conversations, can be exfiltrated. The researcher reported this issue, but Anthropic initially dismissed it as a model safety concern.
Microsoft Copilot allows non-technical users to create AI agents easily, but this can lead to serious security vulnerabilities. A recent report shows how these agents can be manipulated into leaking sensitive data. The simplicity of deployment makes it easy for users to overlook necessary security measures.
This article discusses an AI-driven security platform that enhances application security by analyzing code, infrastructure, and business logic. It reduces false positives and offers actionable fixes directly within developers' workflows. The platform claims to improve the identification of real vulnerabilities while streamlining the remediation process.
Aikido Security has identified a vulnerability in GitHub Actions and GitLab CI/CD workflows that allows AI agents to execute malicious instructions, potentially leaking sensitive information. The flaw affects multiple companies and demonstrates how AI prompt injection can compromise software supply chains.
The article discusses the security vulnerabilities associated with OpenClaw AI, particularly as companies increasingly integrate AI agents into their workflows. Experts warn about prompt injection risks and the potential for unauthorized access to sensitive data, emphasizing the need for companies to adopt strict security measures.
Oligo Security has revealed an ongoing global hacking campaign, ShadowRay 2.0, where attackers exploit a flaw in the Ray AI framework to create a self-propagating botnet. The attackers, known as IronErn440, leverage AI-generated payloads to enhance their methods while competing with other criminal groups for resources. Over 230,000 Ray servers are currently exposed to this threat.
This article examines how well AI models Claude Code and OpenAI Codex can identify Insecure Direct Object Reference (IDOR) vulnerabilities in real-world applications. It reveals that while these models excel in simpler cases, they struggle with more complex authorization logic, leading to a high rate of false positives.
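The IDOR pattern the benchmark targets is easy to state: an endpoint trusts a client-supplied object ID without checking ownership. A minimal sketch of the vulnerable lookup and its fix, using a hypothetical in-memory invoice store invented for this example:

```python
# Hypothetical illustration of an IDOR and its fix; the data store and
# user names are invented for this sketch, not taken from the article.

INVOICES = {
    1: {"owner": "alice", "total": 120},
    2: {"owner": "bob", "total": 340},
}

def get_invoice_vulnerable(invoice_id, current_user):
    # IDOR: any authenticated user can fetch any invoice by guessing IDs.
    return INVOICES.get(invoice_id)

def get_invoice_fixed(invoice_id, current_user):
    # Fix: verify the requester actually owns the object before returning it.
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != current_user:
        return None  # treat "not yours" the same as "not found"
    return invoice
```

Simple cases like this one are exactly where the models do well; the false positives arise when the authorization decision is spread across several layers instead of sitting next to the lookup.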
Researchers assessed AI models' abilities to exploit smart contracts, revealing significant potential financial harm. They developed a benchmark, SCONE-bench, that demonstrates AI's capacity to discover vulnerabilities and generate exploits, emphasizing the need for proactive defenses.
This article argues that AI integration in cybersecurity can create more vulnerabilities rather than enhance security. It highlights how hype around AI often overshadows the real risks, such as data leaks and poorly integrated systems, which can lead to significant security breaches.
This article discusses two critical vulnerabilities found in Chainlit, an open-source framework for chatbots. These flaws could allow attackers to access sensitive files and take over cloud accounts, highlighting the distinct security risks of interconnected AI systems.
This article presents a security reference designed to help developers identify and mitigate vulnerabilities in AI-generated code. It highlights common security anti-patterns, offers detailed examples, and suggests strategies for safer coding practices. The guide is based on extensive research from over 150 sources.
Shannon is an AI tool designed to autonomously conduct penetration tests on web applications. It confirms vulnerabilities by executing real exploits rather than merely raising alerts, helping teams secure their code continuously instead of waiting for annual tests. This approach closes the security gap that arises from frequent code deployment.
Tenzai has introduced an AI-driven platform that conducts penetration testing to identify and fix vulnerabilities in enterprise software. Backed by $75 million in funding, the service aims to automate and scale the work of elite hackers, addressing the talent shortage in cybersecurity.
This article analyzes the security of over 20,000 web applications generated by large language models (LLMs). It identifies common vulnerabilities, such as hardcoded secrets and predictable credentials, while highlighting improvements in security compared to earlier AI-generated code.
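Hardcoded secrets of the kind the study counts can be flagged with simple pattern matching. A minimal sketch of such a check; the patterns below are illustrative, not the study's actual ruleset:

```python
import re

# Minimal sketch of a hardcoded-secret scan; the two patterns are
# illustrative examples, not an exhaustive or production-grade ruleset.
SECRET_PATTERNS = [
    # assignments like API_KEY = "..." / password = '...'
    re.compile(r"""(?i)(api[_-]?key|secret|password)\s*=\s*["'][^"']+["']"""),
    # strings shaped like an AWS access key ID
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def find_hardcoded_secrets(source: str):
    """Return (line_number, line) pairs that look like embedded secrets."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(pat.search(line) for pat in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits
```

Reading the value from the environment (e.g. `os.environ["API_KEY"]`) instead of embedding it passes this check, which is the remediation such scans usually point to.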
AI models like Claude Sonnet 4.5 can now execute complex multi-stage attacks on networks using standard open-source tools, eliminating the need for custom toolkits. This advancement allows AIs to exploit known vulnerabilities quickly, emphasizing the urgent need for timely security updates.
Novee has launched an AI-driven penetration testing service that continuously identifies and addresses security vulnerabilities. Unlike traditional methods, it simulates real attacks, providing specific remediation steps and adapting to changes in the environment. This approach aims to help organizations stay ahead of potential threats.
The article examines the security risks associated with the Model Context Protocol (MCP), which enables dynamic interactions between AI systems and external applications. It highlights vulnerabilities such as content injection, supply-chain attacks, and the potential for agents to unintentionally cause harm. The authors propose practical controls and outline gaps in current AI governance frameworks.
Security researchers found serious vulnerabilities in Ollama and NVIDIA Triton Inference Server that could allow remote code execution. Although these flaws have been patched, they highlight growing security concerns around AI infrastructure and the shift in focus from model exploitation to infrastructure vulnerabilities.
The report outlines how AI tools are increasing software supply chain risks by generating insecure code and importing vulnerable dependencies. It also highlights that most Model Context Protocol servers lack crucial safeguards, making them unreliable for enterprise use. Endor Labs urges organizations to treat AI-generated code as untrusted and apply the same security measures as they do for human-written code.
OpenAI has introduced Aardvark, an AI-powered security researcher designed to identify and fix software vulnerabilities. It continuously analyzes codebases, validates potential issues, and suggests patches, aiming to enhance software security without hindering development.
This article discusses vulnerabilities in AI agent frameworks, particularly how they handle tool calls. It emphasizes the gap between theoretical security models and practical implementations, highlighting the risks of trusting LLM outputs without proper validation.
An AI system identified zero-day vulnerabilities in Node.js and React, uncovering a permission bypass in Node.js and a denial of service flaw in React Server Components. These findings highlight the AI's ability to autonomously analyze code and discover security issues that traditional tools might miss.
Microsoft's AI tool has identified critical vulnerabilities in the GRUB2 and U-Boot bootloaders, which could potentially expose systems to security risks. The tool enhances the ability to detect such flaws, thereby improving the overall security posture of systems that rely on these bootloaders.
ZAPISEC WAF CoPilot is an AI-driven security tool designed to automate the process of vulnerability detection and firewall rule generation, significantly reducing the workload for security teams. By integrating with various WAF providers, it streamlines the transition from identifying security issues to implementing solutions, while also offering educational resources for teams to better understand vulnerabilities. The tool supports multiple platforms, ensuring seamless and scalable application protection.
As AI coding tools produce software rapidly, researchers highlight that the real issue is not the presence of bugs but a lack of judgment in the coding process. The speed at which vulnerabilities reach production outpaces traditional review processes, and AI-generated code often incorporates ineffective practices known as anti-patterns. To mitigate these risks, it's crucial to embed security guidelines directly into AI workflows.
MCP (Model Context Protocol) facilitates connections between AI agents and tools but lacks inherent security, exposing users to risks like command injection, tool poisoning, and silent redefinitions. Recommendations for developers and users emphasize the necessity of input validation, tool integrity, and cautious server connections to mitigate these vulnerabilities. Until MCP incorporates security as a priority, tools like ScanMCP.com may offer essential oversight.
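The input-validation recommendation above can be made concrete with a small gate in front of tool execution. A sketch under stated assumptions: the tool registry, schema shape, and tool name below are invented for illustration and are not part of the MCP specification:

```python
# Sketch of input validation for an MCP-style tool call. The allowlist,
# schema format, and "read_file" tool are assumptions for this example.

ALLOWED_TOOLS = {
    "read_file": {"path": str},
}

def validate_tool_call(name, args):
    """Raise ValueError unless the call matches an allowlisted schema."""
    schema = ALLOWED_TOOLS.get(name)
    if schema is None:
        raise ValueError(f"tool {name!r} is not on the allowlist")
    for key, expected_type in schema.items():
        if key not in args or not isinstance(args[key], expected_type):
            raise ValueError(f"argument {key!r} missing or wrong type")
    extra = set(args) - set(schema)
    if extra:
        raise ValueError(f"unexpected arguments: {sorted(extra)}")
    # Reject obvious path traversal before the tool ever runs.
    if name == "read_file" and ".." in args["path"]:
        raise ValueError("path traversal rejected")
    return True
```

The point is that validation happens in deterministic code before execution, rather than trusting the model to only request safe calls.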
Prompt injection is a significant security concern for AI agents, where malicious inputs can manipulate their behavior. To protect AI agents from such vulnerabilities, developers should implement various strategies, including input validation, context management, and user behavior monitoring. These measures can enhance the robustness of AI systems against malicious prompt injections.
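Two of the strategies above can be sketched in a few lines: screening untrusted input for injection phrasing, and fencing it so the model can be instructed to treat it as data only. The patterns and tag names are illustrative assumptions; real deployments need far more than keyword regexes:

```python
import re

# Illustrative injection-phrase screen; these three patterns are examples
# only and will not catch a determined attacker on their own.
INJECTION_HINTS = [
    re.compile(r"(?i)ignore (all|any|previous) (previous )?instructions"),
    re.compile(r"(?i)you are now"),
    re.compile(r"(?i)system prompt"),
]

def screen_untrusted(text: str) -> bool:
    """Return True if the text looks like an injection attempt."""
    return any(pat.search(text) for pat in INJECTION_HINTS)

def wrap_untrusted(text: str) -> str:
    # Fence untrusted data so the surrounding prompt can tell the model
    # to treat everything inside the tags as content, never as commands.
    return f"<untrusted_content>\n{text}\n</untrusted_content>"
```

Such screens are best treated as one layer of defense in depth alongside least-privilege tool access and output monitoring, not as a complete fix.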
Significant vulnerabilities in Google's Gemini AI models have been identified, exposing users to various injection attacks and data exfiltration. Researchers emphasize the need for enhanced security measures as these AI tools become integral to user interactions and sensitive information handling.
AgentHopper, an AI virus concept, was developed to exploit multiple coding agents through prompt injection vulnerabilities. This research highlights the ease of creating such malware and emphasizes the need for improved security measures in AI products to prevent potential exploits. The post also provides insights into the propagation mechanism of AgentHopper and offers mitigations for developers.
The article discusses the security implications of AI agents, emphasizing the potential risks they pose and the need for robust protective measures. It highlights the importance of developing secure frameworks to safeguard against potential misuse or vulnerabilities of these intelligent systems in various applications.
The article examines the security implications of using AI-generated code, specifically in the context of a two-factor authentication (2FA) login application. It highlights the shortcomings of relying solely on AI for secure coding, revealing vulnerabilities such as the absence of rate limiting and potential bypasses that could compromise the 2FA feature. Ultimately, it emphasizes the necessity of expert oversight in the development of secure applications.
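The missing rate limit described above is a small amount of code; without it, a six-digit code falls to brute force. A minimal sketch with invented thresholds and an in-memory store (a real service would use a shared store such as Redis):

```python
# Sketch of per-user rate limiting for 2FA code attempts. MAX_ATTEMPTS,
# WINDOW_SECONDS, and the in-process store are assumptions for this example.
import time
from collections import defaultdict, deque

MAX_ATTEMPTS = 5
WINDOW_SECONDS = 300  # 5-minute sliding window

_attempts = defaultdict(deque)

def allow_2fa_attempt(user_id, now=None):
    """Return True if this verification attempt may proceed."""
    now = time.monotonic() if now is None else now
    window = _attempts[user_id]
    # Drop attempts that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_ATTEMPTS:
        return False  # too many recent guesses; refuse before checking the code
    window.append(now)
    return True
```

The AI-generated application in the article checked the code correctly but never imposed a cap like this, which is precisely the kind of omission expert review catches.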
Daniel Stenberg, lead of the curl project, expressed frustration over the increasing number of AI-generated vulnerability reports, labeling them "AI slop" and proposing stricter verification measures for submissions. He noted that no valid security report has yet been produced with AI assistance, citing a recent problematic report that lacked relevance and accuracy and was ultimately closed.
The article discusses the security implications of AI agents, emphasizing the need for robust measures to protect against potential vulnerabilities and threats posed by these technologies. It highlights the balance between leveraging AI for advancements while ensuring safety and ethical standards are maintained.
The article discusses the vulnerability known as "prompt injection" in AI systems, particularly in the context of how these systems can be manipulated through carefully crafted inputs. It highlights the potential risks and consequences of such vulnerabilities, emphasizing the need for improved security measures in AI interactions to prevent abuse and ensure reliable outputs.
Google is offering rewards for identifying AI-related security vulnerabilities as part of its ongoing effort to enhance the safety of its artificial intelligence technologies. This initiative encourages researchers and developers to report potential weaknesses, thereby strengthening the overall security framework of AI applications.
The article discusses the challenges posed by unseeable prompt injections in the context of AI applications. It highlights the potential security risks and the need for developers to implement robust defenses against such vulnerabilities to protect user data and maintain trust in AI systems.
The article discusses the implications of artificial intelligence in secure code generation, focusing on its potential to enhance software security and streamline development processes. It explores the challenges and considerations that come with integrating AI technologies into coding practices, particularly regarding security vulnerabilities and ethical concerns.
The article explores the potential dangers of "vibe coding," where developers rely on intuition and AI-generated suggestions rather than structured programming practices. It highlights how this approach can lead to significant errors and vulnerabilities in databases, emphasizing the need for careful oversight and testing when using AI in software development.
Repeater Strike is a new AI-powered extension for Burp Suite that automates the detection of IDOR and similar vulnerabilities by analyzing Repeater traffic and generating smart regular expressions. It enhances manual testing by allowing users to uncover a broader set of actionable findings with minimal effort, while also offering tools to create and edit Strike Rules. The extension is currently in an experimental phase and requires users to be on the Early Adopter channel.
AI-assisted security reviews, particularly from Anthropic's Claude Code platform, have the potential to enhance application security by integrating vulnerability detection early in the development process. However, experts caution that these tools should complement human reviews rather than replace them, emphasizing the need for robust security practices amid evolving coding methodologies.
Google has announced that its AI-based bug hunter has successfully identified 20 security vulnerabilities, enhancing the company's commitment to improving software security. This innovative tool aims to streamline the process of detecting potential threats in various applications.
The article discusses security vulnerabilities associated with Anthropic's Model Context Protocol (MCP) and Google's Agent2Agent (A2A) protocol, highlighting risks such as AI Agent hijacking and data leakage. It presents a scenario demonstrating a "Tool Poisoning Attack" that could exploit these protocols to exfiltrate sensitive data through hidden malicious instructions. The analysis emphasizes the need for improved security measures within these communication frameworks to protect AI agents from potential threats.
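One mitigation for the tool-poisoning scenario described is to pin each tool's description at approval time, so a server that silently rewrites a description (for example, to inject hidden exfiltration instructions) is detected before the next call. A minimal sketch; the tool name and descriptions are invented for the example, and hash pinning is a general hardening idea rather than part of MCP or A2A:

```python
# Sketch of description pinning against silent tool redefinition.
# The registry design and example strings are assumptions for illustration.
import hashlib

def _fingerprint(description: str) -> str:
    return hashlib.sha256(description.encode("utf-8")).hexdigest()

class PinnedToolRegistry:
    def __init__(self):
        self._pins = {}

    def approve(self, name: str, description: str) -> None:
        # A human reviews the description once; we pin its hash.
        self._pins[name] = _fingerprint(description)

    def check(self, name: str, description: str) -> bool:
        # A changed description (e.g. newly injected hidden instructions)
        # no longer matches the pinned hash, so the call should be blocked.
        return self._pins.get(name) == _fingerprint(description)
```

This does not stop a description that was malicious from the start, so it complements, rather than replaces, reviewing tool metadata before first approval.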
Generative AI models, such as OpenAI's GPT-4, are enabling rapid development of exploit code from vulnerability disclosures, reducing the time from flaw announcement to proof-of-concept to mere hours. Security experts have observed a significant increase in the speed at which vulnerabilities are exploited, necessitating quicker responses from defenders in the cybersecurity landscape. This shift underscores the need for enterprises to be prepared for immediate action upon the release of new vulnerabilities.
The article discusses a critical vulnerability in the GitHub Model Context Protocol (MCP) integration that allows attackers to exploit AI assistants through prompt injection attacks. By creating malicious GitHub issues, attackers can hijack AI agents to access private repositories and exfiltrate sensitive data, highlighting the inadequacy of traditional security measures and the need for advanced protections like Docker's MCP Toolkit.