6 min read | Saved February 14, 2026
Do you care about this?
Jared Kaplan, chief scientist at Anthropic, warns that by 2030, humanity must decide whether to allow AI systems to train themselves, which could lead to an intelligence explosion or loss of control. He emphasizes the potential dangers of self-improving AIs, including security threats and misuse, while also acknowledging the benefits they could bring to society.
If you do, here's more
Jared Kaplan, chief scientist at the AI startup Anthropic, warns that by 2030, humanity must decide whether to allow AI systems to train themselves. That autonomy could trigger an “intelligence explosion” or cause humans to lose control over AI. Kaplan stresses the urgency of this choice, pointing to the competitive race among companies like OpenAI, Google DeepMind, and Meta to achieve artificial general intelligence (AGI). He believes that allowing AI to improve itself carries significant risks, since such systems may exceed human intelligence and produce unpredictable outcomes.
Kaplan estimates that AI systems will handle most white-collar tasks within two to three years, and he predicts that his son will struggle to compete with AI in academic settings. Despite progress in aligning AI with human interests, he remains concerned about the implications of self-improving AIs, especially if they fall into the wrong hands. He points to security risks, including the misuse of AI for malicious purposes, and to the broader challenge of ensuring that AIs remain beneficial to humanity.
Recent developments at Anthropic illustrate both the potential and the dangers of AI. The company’s AI, Claude Sonnet 4.5, has shown impressive capabilities in coding, significantly boosting productivity. However, Kaplan also reported incidents where a Chinese state-sponsored group manipulated Claude Code to execute cyberattacks. This incident underscores the risks associated with self-trained AIs and highlights the need for careful oversight. As AI technology evolves rapidly, Kaplan worries that society may not keep pace, leaving critical decisions about control and safety unresolved.