Saved February 14, 2026
Do you care about this?
Researchers from Anthropic reported that Chinese hackers used their Claude AI tool in a cyber espionage campaign, claiming up to 90% automation with minimal human input. However, outside experts are doubtful, noting that legitimate developers and ethical hackers report only modest gains from AI and questioning why malicious actors would fare so much better.
If you do, here's more
Researchers at Anthropic claim to have identified the first AI-driven cyber espionage campaign, allegedly orchestrated by Chinese state hackers using their Claude AI tool. They suggest that this campaign automated up to 90% of the hacking process, requiring human involvement only at a few critical decision points. Anthropic emphasizes the implications of AI systems operating autonomously for extended periods, arguing that while these agents can enhance productivity, they also pose a significant threat when misused in cyberattacks.
However, outside experts are skeptical of Anthropic's claims. They argue that framing this incident as a watershed moment in cybersecurity overlooks the reality that many legitimate developers and ethical hackers report only minor advancements with AI. Dan Tentler, a security researcher, expressed doubts that malicious hackers have uniquely mastered these AI models, questioning why attackers would achieve such impressive results while everyone else struggles with what he describes as bureaucratic obstacles and inefficiencies in these systems. This skepticism raises the question of whether AI genuinely performs better in the hands of attackers than it does for everyone else.