Links
This article analyzes the vulnerabilities of the Model Context Protocol (MCP) used in coding copilot applications. It identifies critical attack vectors such as resource theft, conversation hijacking, and covert tool invocation, highlighting the need for stronger security measures. Three proof-of-concept examples illustrate these risks in action.
This article explores how certain developer behaviors lead to insecure software. It examines these behaviors through the lens of behavioral economics and proposes strategies to encourage better coding practices.
Microsoft aims to replace its C and C++ codebase with Rust by 2030, leveraging AI to automate the translation process. The company is hiring engineers to build tooling for this extensive project, part of a broader effort to improve software security and reduce technical debt. A later update, however, clarifies that the initiative is a research project, not a direct rewrite of Windows.
A critical vulnerability in React, tracked as CVE-2025-55182, allows unauthenticated attackers to achieve remote code execution. It affects multiple versions of React and related frameworks such as Next.js; patches have been released, and security firms warn that exploitation is imminent.
This article examines how well the AI coding agents Claude Code and OpenAI Codex identify Insecure Direct Object Reference (IDOR) vulnerabilities in real-world applications. While both handle simpler cases well, they struggle with more complex authorization logic, producing a high rate of false positives.
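For context on the vulnerability class the study probes, here is a minimal sketch of an IDOR flaw and its fix. The `get_invoice_*` handlers and in-memory "database" are hypothetical, purely for illustration:

```python
# Minimal IDOR illustration (hypothetical in-memory "database").
# An Insecure Direct Object Reference occurs when a handler fetches a
# record by a client-supplied ID without checking ownership.

INVOICES = {
    1: {"owner": "alice", "total": 120},
    2: {"owner": "bob", "total": 75},
}

def get_invoice_insecure(user: str, invoice_id: int) -> dict:
    # VULNERABLE: any authenticated user can read any invoice.
    return INVOICES[invoice_id]

def get_invoice_secure(user: str, invoice_id: int) -> dict:
    invoice = INVOICES.get(invoice_id)
    # FIX: enforce that the requester actually owns the object.
    if invoice is None or invoice["owner"] != user:
        raise PermissionError("not authorized")
    return invoice

cross_user = get_invoice_insecure("alice", 2)  # the flaw: succeeds
try:
    get_invoice_secure("alice", 2)
    denied = False
except PermissionError:
    denied = True  # the fix: cross-user read is rejected
```

The harder cases the article describes involve authorization logic spread across roles, tenants, and indirect references, where a simple ownership check like this is not enough.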
This article presents a security reference designed to help developers identify and mitigate vulnerabilities in AI-generated code. It highlights common security anti-patterns, offers detailed examples, and suggests strategies for safer coding practices. The guide is based on extensive research from over 150 sources.
Docker is introducing a new way to run coding agents in isolated environments using container-based sandboxes. This approach allows agents to access necessary resources without compromising the local system's safety, addressing security concerns as agents become more autonomous. The current experimental version supports Claude Code and Gemini CLI, with plans for broader agent compatibility.
RestrictedPython executes untrusted Python code restricted to a defined subset of the language. It is a useful safety layer but not a full sandbox, and it works only with CPython, not with other Python implementations.
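RestrictedPython works by compiling code against a restricted AST with a whitelisted set of builtins. As a rough illustration of the whitelisting idea only (this is not RestrictedPython's API, and plain `exec` with stripped builtins is not a real sandbox, which is precisely why RestrictedPython exists):

```python
# Illustration of the builtins-whitelisting idea behind restricted
# execution. NOTE: plain exec() with a stripped namespace is NOT a
# real sandbox; RestrictedPython additionally rewrites the AST to
# block attribute tricks, dunder access, and other escape hatches.

SAFE_BUILTINS = {"abs": abs, "min": min, "max": max, "len": len}

def run_limited(source: str) -> dict:
    """Execute untrusted source with a tiny builtins whitelist."""
    scope = {"__builtins__": SAFE_BUILTINS}
    exec(compile(source, "<untrusted>", "exec"), scope)
    scope.pop("__builtins__", None)
    return scope  # names the snippet defined

result = run_limited("x = max(1, 2) + len('abc')")

# Disallowed builtins such as open() are simply absent and raise
# NameError inside the restricted scope:
try:
    run_limited("open('/etc/passwd')")
    blocked = False
except NameError:
    blocked = True
```

The gap between this sketch and a real sandbox (e.g. escaping via object introspection) is exactly why the project itself warns it is not a complete isolation mechanism.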
This article outlines seven key habits for development teams using AI coding tools. It emphasizes the importance of managing both human and AI-generated code to avoid maintenance problems and technical debt. Following these guidelines helps ensure code quality and security.
This article analyzes the security of over 20,000 web applications generated by large language models (LLMs). It identifies common vulnerabilities, such as hardcoded secrets and predictable credentials, while highlighting improvements in security compared to earlier AI-generated code.
This article investigates the data sent by seven popular AI coding agents during standard programming tasks. By intercepting their network traffic, the research reveals how these tools handle user data and where telemetry may leak, raising privacy and security concerns.
Lovable, an AI coding platform, is approaching 8 million users and has seen significant daily product creation since its launch a year ago. Despite a recent dip in traffic, CEO Anton Osika emphasizes strong user retention and plans to enhance security as the company scales.
A developer almost fell victim to a sophisticated scam disguised as a job interview with a legitimate-looking blockchain company. By using AI to analyze the code before running it, he discovered embedded malware designed to steal sensitive information, highlighting the need for caution in tech interviews.
The article discusses the security risks of using large language models (LLMs) in coding practice: these models can inadvertently introduce vulnerabilities, with real implications for developers and organizations. It emphasizes the need for robust security measures when integrating LLMs into coding workflows.
The article examines the security of AI-generated code through the example of a two-factor authentication (2FA) login application. Relying solely on AI for secure coding fell short: the generated app lacked rate limiting and contained bypasses that could compromise the 2FA feature. It concludes that expert oversight remains necessary when building secure applications.
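The missing control called out here, rate limiting of 2FA attempts, can be sketched minimally. This is a hypothetical in-memory limiter for illustration; a production system would persist counters in a shared store (e.g. Redis) and add account lockout and alerting:

```python
import time
from collections import defaultdict, deque
from typing import Deque, Dict, Optional

class AttemptLimiter:
    """Sliding-window limiter for 2FA verification attempts (sketch only)."""

    def __init__(self, max_attempts: int = 5, window_s: float = 300.0):
        self.max_attempts = max_attempts
        self.window_s = window_s
        self._attempts: Dict[str, Deque[float]] = defaultdict(deque)

    def allow(self, user_id: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self._attempts[user_id]
        # Evict attempts that fell out of the sliding window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        if len(q) >= self.max_attempts:
            return False  # throttle: too many recent attempts
        q.append(now)
        return True

# Three attempts per minute allowed; the fourth and fifth are throttled.
limiter = AttemptLimiter(max_attempts=3, window_s=60.0)
results = [limiter.allow("alice", now=float(t)) for t in range(5)]
```

Without such a check, a short 2FA code (six digits) is brute-forceable, which is one of the bypasses the article's review surfaced.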
Cline explains its decision not to index users' codebases, citing developer privacy and security. By not indexing code, Cline aims to provide an environment where developers can work without fear of exposing sensitive information or suffering data breaches.
The article covers a talk by Simon Willison at a Claude Code Anonymous meetup on the benefits and risks of coding agents, particularly "YOLO mode", which lets an agent execute tasks without per-action approval. The mode delivers significant productivity gains but also opens the door to prompt injection attacks that can compromise security. Willison shares examples of projects he completed in this mode while urging caution.