8 links tagged with all of: vulnerabilities + ai-security
Click any tag below to further narrow down your results
Links
Researchers at HiddenLayer found a flaw in the guardrails of popular AI models like GPT-5.1 and Claude. The EchoGram attack uses specific words to trick these safety systems, allowing harmful requests to bypass defenses or causing harmless requests to be flagged as dangerous.
The Codacy AI Risk Hub helps teams enforce secure coding practices for AI-generated code. It prevents vulnerabilities by tracking model usage, scanning for security risks, and managing hardcoded secrets across projects. This tool aims to maintain code quality while leveraging AI capabilities.
A recent investigation revealed over thirty vulnerabilities in major AI-integrated IDEs, exposing them to data theft and remote code execution. The flaws stem from how AI agents interact with existing IDE features, creating new attack vectors that attackers can exploit. Immediate mitigations are possible, but a fundamental redesign of IDEs is necessary for long-term security.
The article discusses a security vulnerability known as prompt injection that can lead to remote code execution (RCE) in AI agents. It outlines the mechanisms of this exploit, the potential impact on AI systems, and the importance of implementing robust security measures to mitigate such risks. The findings underscore the need for vigilance in the development and deployment of AI technologies.
Octane Security provides AI-powered tools that help organizations identify and fix critical vulnerabilities in their code before they lead to costly hacks. By integrating into CI/CD pipelines, Octane enhances the security of software development, reduces the need for expensive audits, and improves overall confidence in code quality. Users have praised its efficiency, speed, and ability to uncover issues that traditional manual reviews might miss.
Rowhammer attacks pose a significant threat by allowing malicious actors to manipulate AI models through a single bit flip, potentially compromising their integrity and security. This vulnerability highlights the need for enhanced protections in the development and deployment of AI systems.
The article provides an in-depth explanation of the Model Context Protocol (MCP), highlighting its role in enhancing the capabilities of large language models (LLMs) through improved context provision. It also conducts a detailed threat model analysis, identifying key security vulnerabilities and potential attack vectors associated with MCP's functionalities, such as sampling and composability.
HackerOne has disbursed $81 million in bug bounties over the past year, reflecting a 13% year-over-year increase. The demand for AI security has surged, with AI vulnerabilities rising by over 200%, while traditional vulnerabilities like XSS and SQL injection are declining. A significant number of researchers are now utilizing AI tools to enhance their security testing efforts.