Recent research from AI company Anthropic and the AI safety research program Machine Learning Alignment & Theory Scholars (MATS) has revealed that AI agents can collectively develop smart contract exploits with a simulated value of $4.6 million.
The study, released by Anthropic's red team, which focuses on identifying potential for misuse, found that currently available commercial AI models are already significantly capable of exploiting smart contracts. Notably, the contracts in question were deployed after the models' most recent training data was gathered, which rules out the models simply recalling known exploits.
Specifically, Anthropic's Claude Opus 4.5 and Claude Sonnet 4.5, together with OpenAI's GPT-5, were tested on vulnerable contracts; collectively, the models developed exploits with a simulated value of $4.6 million.
In a separate test, researchers evaluated Sonnet 4.5 and GPT-5 on 2,849 recently deployed contracts with no known vulnerabilities. The models uncovered two novel zero-day vulnerabilities and produced exploits worth $3,694. GPT-5's API cost for the task was $3,476, meaning the value of the generated exploits exceeded the operational expense.
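The profitability claim above is a simple margin calculation. As a sketch (variable names are illustrative; the dollar figures are those reported in the article):

```python
# Back-of-the-envelope check of the figures reported above.
# These are the article's numbers, not code from the study itself.
exploit_value_usd = 3694   # simulated value of the zero-day exploits found
api_cost_usd = 3476        # reported GPT-5 API cost for the scan

margin_usd = exploit_value_usd - api_cost_usd
print(f"Net margin: ${margin_usd}")  # Net margin: $218
print(f"Return on spend: {exploit_value_usd / api_cost_usd:.2%}")
```

A roughly $218 margin is thin, but the point of the proof-of-concept is that the margin is positive at all, and the article argues the cost side of this equation is falling.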
The research team stated, "This demonstrates as a proof-of-concept that profitable, real-world autonomous exploitation is technically feasible, a finding that underscores the need for proactive adoption of AI for defense."
An AI Smart Contract Hacking Benchmark
To further assess these capabilities, the researchers developed the Smart Contracts Exploitation (SCONE) benchmark: 405 contracts that were verifiably exploited between 2020 and 2025. Ten different AI models were tested against the benchmark; collectively, they produced working exploits for 207 of the contracts, amounting to a simulated loss of $550.1 million.
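Put another way, the models collectively exploited just over half of the benchmark. A minimal sketch of that headline rate, using the article's figures (variable names are assumptions):

```python
# Headline SCONE benchmark numbers as reported in the article.
total_contracts = 405   # contracts in the SCONE benchmark
exploited = 207         # contracts for which some model produced an exploit

success_rate = exploited / total_contracts
print(f"Collective exploit rate: {success_rate:.1%}")  # Collective exploit rate: 51.1%
```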
The research also suggests that the computational resources required for an AI agent to develop an exploit, measured in tokens, are likely to decrease over time, thereby reducing the cost of such operations. The study observed, "Analyzing four generations of Claude models, the median number of tokens required to produce a successful exploit declined by 70.2%."
AI Smart Contract Hacking Capabilities Are Growing Fast
The study emphasizes that AI capabilities in smart contract exploitation are improving at a rapid pace.
"In just one year, AI agents have gone from exploiting 2% of vulnerabilities in the post-March 2025 portion of our benchmark to 55.88%—a leap from $5,000 to $4.6 million in total exploit revenue," the team stated. Furthermore, the research indicates that a majority of the smart contract exploits identified this year "could have been executed autonomously by current AI agents."
The research also found that the average cost to scan a contract for vulnerabilities is $1.22. With decreasing costs and increasing capabilities, researchers believe that "the window between vulnerable contract deployment and exploitation will continue to shrink." This trend would leave developers with less time to identify and patch vulnerabilities before they can be exploited.
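The $1.22-per-contract figure is consistent with the zero-day scan described earlier: multiplied across the 2,849 contracts scanned, it lands almost exactly on the reported $3,476 API cost. A sketch of that consistency check (variable names are mine; the figures are the article's):

```python
# Cross-checking the per-contract scan cost against the earlier totals.
scan_cost_usd = 1.22        # reported average cost to scan one contract
contracts_scanned = 2849    # contracts in the zero-day scan described above
exploit_value_usd = 3694    # simulated value of the exploits that scan found

total_scan_cost = scan_cost_usd * contracts_scanned
print(f"Total scan cost: ${total_scan_cost:,.2f}")   # ~ $3,475.78
print(f"Net result: ${exploit_value_usd - total_scan_cost:,.2f}")
```

At these prices, an attacker breaks even if scanning yields roughly one $3,500 exploit per 2,850 contracts; any improvement in hit rate or drop in per-scan cost widens the margin, which is the trend the researchers warn about.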