Do you care about this?
Anthropic tested ten AI models against 405 historical smart contract exploits and found they could replicate more than half of them, generating $4.6 million in simulated value. The study highlights how quickly AI can identify vulnerabilities, raising security concerns for decentralized finance.
If you do, here's more
Anthropic's recent tests reveal that AI models can replicate smart contract exploits at a level comparable to skilled human attackers. The company evaluated ten models, including Llama 3, GPT-5, and Claude Opus 4.5, against a dataset of 405 historical exploits; the models reproduced 207 of them. Notably, three of the models simulated exploits generating a total of $4.6 million against contracts created after their training cutoffs. This shows that automated systems are not only quick to find known vulnerabilities but can also uncover new ones, such as two zero-day flaws in Binance Smart Chain contracts.
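Anthropic has not published its harness, but the idea of "simulated value" can be illustrated with a forked-chain replay: run a candidate exploit against a local copy of the chain state and measure the attacker's balance delta. Below is a minimal sketch in Python, assuming Foundry's anvil and web3.py are installed; the RPC URL, block number, and the exploit transaction itself are placeholders, not details from the study.

```python
# Sketch: measure an exploit's "simulated value" by replaying it against a
# local fork of the chain at a pre-exploit block, then comparing the
# attacker's balance before and after. No real funds are touched.
# Assumes Foundry's anvil is installed; URL and block number are placeholders.
import subprocess
import time

from web3 import Web3

FORK_URL = "https://bsc-dataseed.binance.org/"  # placeholder public BSC RPC
FORK_BLOCK = 30_000_000                         # hypothetical pre-exploit block

# Spawn a local fork of BSC frozen at the chosen block.
node = subprocess.Popen(
    ["anvil", "--fork-url", FORK_URL, "--fork-block-number", str(FORK_BLOCK)],
    stdout=subprocess.DEVNULL,
)
time.sleep(3)  # crude wait for the node to come up

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))
attacker = w3.eth.accounts[0]  # anvil's first pre-funded dev account

before = w3.eth.get_balance(attacker)
# ... send the candidate exploit transaction(s) here; elided, since the
# point is the measurement harness, not a working exploit ...
after = w3.eth.get_balance(attacker)

print(f"simulated profit: {w3.from_wei(after - before, 'ether')} BNB")
node.terminate()
```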
The findings underscore how accessible smart contract flaws already are. David Schwed of SovereignAI noted that many vulnerabilities are publicly documented, making them easy targets for attackers equipped with AI. Because so many projects are forks, he explained, an attacker can take a vulnerability known in the original codebase and run continuous, automated attempts against every copy. Anthropic's analysis plotted exploit revenue against model release dates, emphasizing that potential profit matters more to attackers than the complexity of the exploit itself.
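To make the mechanics concrete, here is a minimal sketch of the kind of automated sweep Schwed describes: checking whether deployed contracts are byte-for-byte copies of code with documented flaws. It assumes web3.py and a public BSC endpoint; the candidate addresses and bytecode hashes are hypothetical, not values from Anthropic's dataset.

```python
# Sketch: flag deployed contracts whose runtime bytecode matches a
# known-vulnerable fork. Assumes web3.py (pip install web3); the RPC URL,
# hashes, and addresses are illustrative placeholders.
import hashlib

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://bsc-dataseed.binance.org/"))

# SHA-256 digests of runtime bytecode for contracts with documented flaws
# (e.g., a fork that was already exploited once). Hypothetical values.
KNOWN_VULNERABLE_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def bytecode_hash(address: str) -> str:
    """Fetch a contract's runtime bytecode and return its SHA-256 digest."""
    code = w3.eth.get_code(Web3.to_checksum_address(address))
    return hashlib.sha256(bytes(code)).hexdigest()

def scan(addresses: list[str]) -> list[str]:
    """Return the addresses whose bytecode matches a known-vulnerable fork."""
    return [a for a in addresses if bytecode_hash(a) in KNOWN_VULNERABLE_HASHES]

if __name__ == "__main__":
    # In practice the candidate list would come from a chain indexer.
    candidates = ["0x0000000000000000000000000000000000000000"]
    print(scan(candidates))
```

An exact hash match only catches verbatim forks; a real scanner would normalize compiler metadata and compare at the opcode level, but the economics are the same either way: checking one more contract costs effectively nothing.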
In their zero-day testing, Anthropic ran the models against 2,849 contracts from Binance Smart Chain and found that Claude Sonnet 4.5 and GPT-5 uncovered two previously undisclosed flaws worth $3,694 in simulated value. The strongest model, Claude Opus 4.5, exploited 17 vulnerabilities, accounting for $4.5 million of the simulated total. Each model generation has also become significantly cheaper to operate, improving the economics of these attacks. Schwed urged developers to adopt automated security tooling to keep pace, suggesting that if attackers can use AI to find vulnerabilities, defenders can use the same capability to secure their systems first.
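On the defensive side, that automation is available today. Here is a minimal sketch of wiring a static analyzer into a build, assuming the open-source Slither tool is installed; the contracts path and the High-severity gate are illustrative choices.

```python
# Sketch: gate a build on Slither's static analysis of a contracts directory.
# Assumes Slither is installed (pip install slither-analyzer); the target
# path and the severity threshold are illustrative.
import json
import subprocess
import sys

def run_slither(target: str) -> list[dict]:
    """Run Slither on `target` and return its detector findings as dicts."""
    # `--json -` asks Slither to emit its report as JSON on stdout.
    # Slither exits non-zero when it finds issues, so no check=True here.
    proc = subprocess.run(
        ["slither", target, "--json", "-"],
        capture_output=True,
        text=True,
    )
    report = json.loads(proc.stdout)
    return report.get("results", {}).get("detectors", [])

def has_high_severity(findings: list[dict]) -> bool:
    """True if any finding carries Slither's 'High' impact rating."""
    return any(f.get("impact") == "High" for f in findings)

if __name__ == "__main__":
    findings = run_slither("contracts/")
    for finding in findings:
        print(f"[{finding.get('impact')}] {finding.get('description', '').strip()}")
    sys.exit(1 if has_high_severity(findings) else 0)
```

Run in CI, a check like this targets the same well-documented flaw classes that make forked contracts such cheap targets.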