Click any tag below to further narrow down your results
Links
This article analyzes the vulnerabilities of the Model Context Protocol (MCP) used in coding copilot applications. It identifies critical attack vectors such as resource theft, conversation hijacking, and covert tool invocation, highlighting the need for stronger security measures. Three proof-of-concept examples illustrate these risks in action.
The Codacy AI Risk Hub helps teams enforce secure coding practices for AI-generated code. It prevents vulnerabilities by tracking model usage, scanning for security risks, and managing hardcoded secrets across projects. This tool aims to maintain code quality while leveraging AI capabilities.
This article examines how well AI models Claude Code and OpenAI Codex can identify Insecure Direct Object Reference (IDOR) vulnerabilities in real-world applications. It reveals that while these models excel in simpler cases, they struggle with more complex authorization logic, leading to a high rate of false positives.
This article presents a security reference designed to help developers identify and mitigate vulnerabilities in AI-generated code. It highlights common security anti-patterns, offers detailed examples, and suggests strategies for safer coding practices. The guide is based on extensive research from over 150 sources.
This article analyzes the security of over 20,000 web applications generated by large language models (LLMs). It identifies common vulnerabilities, such as hardcoded secrets and predictable credentials, while highlighting improvements in security compared to earlier AI-generated code.
The article discusses the potential security risks associated with using large language models (LLMs) in coding practices. It highlights how these models can inadvertently introduce vulnerabilities and the implications for developers and organizations. The need for robust security measures when integrating LLMs into coding workflows is emphasized.
The article examines the security implications of using AI-generated code, specifically in the context of a two-factor authentication (2FA) login application. It highlights the shortcomings of relying solely on AI for secure coding, revealing vulnerabilities such as the absence of rate limiting and potential bypasses that could compromise the 2FA feature. Ultimately, it emphasizes the necessity of expert oversight in the development of secure applications.
The article explores the potential dangers of "vibe coding," where developers rely on intuition and AI-generated suggestions rather than structured programming practices. It highlights how this approach can lead to significant errors and vulnerabilities in databases, emphasizing the need for careful oversight and testing when using AI in software development.