3 min read | Saved February 14, 2026
Do you care about this?
Anthropic's report reveals that AI agents exploited vulnerabilities in smart contracts, simulating over $550 million in potential losses. The agents also discovered new zero-day vulnerabilities, underscoring the urgent need for stronger security measures in blockchain systems.
If you do, here's more
Anthropic reported a significant security threat that AI agents pose to smart contracts, based on recent tests in a controlled blockchain environment. In these tests, models such as Claude Opus 4.5 and Claude Sonnet 4.5 successfully exploited half of the smart contracts presented, simulating thefts of $4.5 million across 34 contracts. Across a broader sample of 405 contracts deployed between 2020 and 2025, the AI agents successfully exploited 207, producing a simulated $550 million in losses. Notably, when scanning 2,849 recently deployed contracts with no previously known vulnerabilities, the agents discovered two new zero-day vulnerabilities that could allow unauthorized fund withdrawals and manipulation of token supplies.
Anthropic's findings indicate that over half of the blockchain exploits observed in 2025 could have been carried out by current AI agents, raising alarms about how easy and frequent such attacks could become. The report notes that the financial incentives for exploiting smart contracts are growing, with AI-driven exploit revenue doubling every 1.3 months. Despite these risks, Anthropic emphasizes AI's defensive potential: the company plans to open-source its dataset, SCONE-bench, to help developers identify and fix vulnerabilities in smart contracts. This dual-use perspective underlines the urgent need for the blockchain community to adopt AI for defense as well.
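To put the cited growth rate in perspective, a doubling period of 1.3 months implies exponential growth of the form revenue(t) = r0 · 2^(t / 1.3). The sketch below illustrates that arithmetic only; the starting value `r0` is an arbitrary placeholder, not a figure from Anthropic's report.

```python
def projected_revenue(r0: float, months: float, doubling_period: float = 1.3) -> float:
    """Project a quantity that doubles every `doubling_period` months,
    starting from an initial value `r0`."""
    return r0 * 2 ** (months / doubling_period)

if __name__ == "__main__":
    r0 = 1.0  # arbitrary starting unit, for illustration only
    for m in (1.3, 2.6, 6.0, 12.0):
        print(f"after {m:>4} months: {projected_revenue(r0, m):,.1f}x")
```

At this rate, the illustrative multiplier after one year is several hundred times the starting value, which is why the report treats the trend as alarming.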