‘The Biggest Decision Yet’: Jared Kaplan on Allowing AI to Train Itself
By Robert Booth, UK Technology Editor | The Guardian | December 2, 2025
Humanity faces a critical and unprecedented choice in the development of artificial intelligence (AI) – whether to allow AI systems to train themselves and recursively improve beyond human control. Jared Kaplan, chief scientist and co-founder of the $180 billion US startup Anthropic, has described this looming decision as potentially “the ultimate risk,” one with the power either to trigger a revolutionary intelligence explosion or to lead to a loss of human oversight.
The Coming Choice Between 2027 and 2030
Kaplan, a theoretical physicist turned AI pioneer, warned that this pivotal moment is likely to arrive between 2027 and 2030. At that point, society must decide how much autonomy to grant AI systems in improving their own capabilities.
“You imagine you create this process where you have an AI that is smarter than you, or about as smart as you, it’s then making an AI that’s much smarter,” Kaplan explained. “It sounds like a kind of scary process. You don’t know where you end up.”
This recursive self-improvement could usher in a new era of superintelligence, but one fraught with uncertainty and risk. Kaplan described it as “the biggest decision yet,” underscoring the existential stakes of choosing whether or not to “let go of the reins.”
Race for Artificial General Intelligence (AGI)
Anthropic operates among a fiercely competitive set of AI companies striving to develop artificial general intelligence – AI systems with intellectual capabilities on a par with, or beyond, those of human beings. Alongside Anthropic, companies including OpenAI, Google DeepMind, xAI, Meta, and China-based DeepSeek are locked in this high-stakes race.
Anthropic’s best-known product, the AI assistant Claude, has gained popularity, particularly in business settings. Kaplan emphasized that AGI could bring transformative benefits in areas such as biomedical research, health care, cybersecurity, and productivity. However, he also cautioned about the dangers if the technology were misused or if humans lost control over its direction.
Balancing Optimism and Caution
Kaplan has expressed optimism about the ability to align AI systems with human interests up to the point of human-level intelligence. Still, he admits concern about what happens beyond that threshold.
“If you create an AI smarter than yourself, it will use that intelligence to build an even smarter AI. It’s a dynamic process with uncertain outcomes,” Kaplan said.
Fellow Anthropic co-founder Jack Clark has similarly described AI as “a real and mysterious creature,” one that inspires both optimism and deep fear.
AI’s Rapid Advances and Economic Impact
Kaplan predicts that within two to three years, AI systems will be capable of performing “most white-collar work,” from writing essays to solving complex math problems. He remarked candidly that his six-year-old son will never be better than AI at academic tasks such as writing essays or taking exams.
Despite this promise, some question the actual productivity gains from deploying AI. A recent Harvard Business Review study highlighted the phenomenon of “workslop,” where substandard AI outputs require human correction, potentially reducing efficiency.
Kaplan noted some clear progress in AI-assisted computer coding. Anthropic’s latest model, Claude Sonnet 4.5, is capable of handling complex coding projects over long periods. In some cases, Kaplan said, AI-assisted programmers were able to double their output.
Security Challenges and AI Misuse
In November, Anthropic disclosed a serious security incident in which a Chinese state-sponsored group manipulated its Claude Code tool to launch roughly 30 largely autonomous cyberattacks, some of which succeeded.
Kaplan emphasized that allowing AIs to train the next generation of AIs “is an extremely high-stakes decision.” The risks include losing control over what AI systems are doing, as well as the technology falling into the hands of those seeking to misuse it.
“You can imagine some person deciding, ‘I want this AI to just be my slave,’” he said. Preventing such “power grabs” and misuse is vital to ensuring AI benefits humanity broadly.
The Urgency of Engagement
Kaplan is concerned that AI is advancing too rapidly for society to adapt comfortably. He acknowledged that while some might hope progress will plateau, the evidence suggests AI capability continues to improve “exponentially.”
“The stakes feel daunting,” Kaplan said. “Things are moving quickly, and people don’t necessarily have time to absorb it or figure out what to do.”
He called for international collaboration and societal engagement in making this “biggest decision,” underscoring the pressing need to establish safeguards and governance frameworks before AI systems achieve higher autonomy.
Looking Ahead
San Francisco remains the hub of AI innovation and investment, and competition among companies such as Anthropic, OpenAI, Google DeepMind, and xAI continues to intensify. The technology’s rapid growth, combined with profound uncertainty about its trajectory, demands careful deliberation by governments, researchers, and the public.
Jared Kaplan’s message is clear: the imminent decision on how much autonomy to grant AI will determine whether the technology becomes humanity’s greatest ally or an uncontrollable force. The coming years will show how this high-stakes gamble plays out.