3 min read | Saved February 14, 2026
Do you care about this?
This article examines how well the AI coding agents Claude Code and OpenAI Codex can identify Insecure Direct Object Reference (IDOR) vulnerabilities in real-world applications. While the agents excel at simpler cases, they struggle with more complex authorization logic, producing a high rate of false positives.
If you do, here's more
The article evaluates how well AI coding agents, specifically Claude Code and OpenAI Codex, identify Insecure Direct Object Reference (IDOR) vulnerabilities in open-source applications. Building on previous findings, the research tested the agents with straightforward prompts: they found 15 previously unknown vulnerabilities but also generated 93 false positives, with Claude Code running Sonnet 4 outperforming the other configurations.
The analysis categorizes the IDOR cases into four levels of complexity: no authorization checks, limited-scope protections, custom role-based access control (RBAC), and implicit authorization through middleware. The agents performed best in the first two categories and struggled with the more complex scenarios, where authorization checks were scattered across multiple files or handled implicitly. The authors suggest that refining the prompts and adding scaffolding could improve the agents' effectiveness at detecting IDORs.
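The fourth category is the hardest for file-local review, because the authorization logic lives somewhere other than the handler. A minimal sketch of that pattern (hypothetical names and an in-memory store, not code from the audited projects):

```python
from functools import wraps

# Hypothetical session and data store, standing in for a real web framework.
CURRENT_USER = {"name": "alice"}
REPORTS = {7: {"owner": "alice", "body": "q3 figures"}}

def require_owner(view):
    """Implicit authorization through middleware: the ownership check
    lives in this decorator, not in the handler it wraps, so reviewing
    the handler alone makes it look unprotected."""
    @wraps(view)
    def wrapper(report_id):
        report = REPORTS.get(report_id)
        if report is None or report["owner"] != CURRENT_USER["name"]:
            raise PermissionError("not authorized")
        return view(report_id)
    return wrapper

@require_owner
def get_report(report_id):
    # No visible check here -- a scan that only reads this function
    # would flag it as an IDOR, the false-positive pattern the
    # article describes for the implicit-authorization category.
    return REPORTS[report_id]
```

A scanner that follows the decorator (possibly defined in another file) sees the check; one that reasons about the handler in isolation does not, which is one plausible source of the false positives reported above.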
IDOR vulnerabilities occur when an application exposes internal identifiers without proper access controls, letting attackers manipulate those identifiers to gain unauthorized access to other users' data. Despite their prevalence and potential for serious consequences, IDORs are difficult to detect because doing so requires a nuanced understanding of the application's authorization logic. The article highlights the importance of addressing these vulnerabilities, citing real-world incidents that have led to significant breaches and financial losses.
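In its simplest form, the identifier manipulation amounts to changing one number in a request. A minimal sketch, with a hypothetical endpoint and in-memory store (not from the article):

```python
# Hypothetical store: sequential order ids exposed to clients.
ORDERS = {
    1001: {"owner": "alice", "item": "laptop"},
    1002: {"owner": "bob", "item": "camera"},
}

def handle_get_order(order_id):
    # The IDOR: the record is fetched by id alone, with no check
    # that the order belongs to the requesting user.
    return ORDERS.get(order_id)

# An attacker logged in as alice, who owns order 1001, simply
# increments the identifier to read bob's order.
leaked = handle_get_order(1001 + 1)
```

Because the handler is syntactically valid and the flaw is purely in the missing ownership check, detecting it depends on knowing which records the caller should be allowed to see, which is why the article stresses application-logic understanding.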