
AI Apocalypse Warning: Expert Predicts Human Extinction in Just 100 Years

Friday, 12 December 2025 – Irish Independent

A prominent figure in the field of artificial intelligence (AI) has issued a stark warning, suggesting that the development of AI technology could lead to the extinction of humanity within the next century.

Roman Yampolskiy, a respected AI researcher specialising in AI safety and cybersecurity, who lectures at the University of Louisville in the United States, expressed deep concerns about the trajectory of AI during a recent interview on Lex Fridman’s podcast, which aired last Sunday. He estimated there is a 99.9 percent chance that AI-powered systems could wipe out human civilization within 100 years.

Yampolskiy highlighted the inherent risks of developing artificial general intelligence, cautioning that no current AI system is foolproof and that future systems will almost certainly continue to exhibit harmful glitches or unexpected behaviours. He remarked, “If we create general super-intelligences, I don’t see a good outcome long term for humanity. The only way to win this game is not to play it.”

He also pointed to previous AI mishaps, including programming errors and security breaches, noting that every large language model to date has been tricked by users into performing unintended actions. This unpredictability, he warned, could have catastrophic consequences.

“Super-intelligence will come up with something completely new, completely super,” Yampolskiy explained. “We may not even recognize that as a possible path to achieve the goal of ending everyone. If a system makes a billion decisions a second and you use it for 100 years, you’re still going to deal with a problem.”

Despite these alarming projections, not everyone in the AI community shares Yampolskiy’s dire outlook. AI pioneers such as Google Brain co-founder Andrew Ng and Yann LeCun have downplayed extreme fears about AI’s risks. LeCun has accused some industry leaders, including OpenAI’s CEO Sam Altman, of promoting fear to further hidden agendas within the technology sector.

Altman himself has made several unsettling statements. In one, he conceded that “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies,” a remark that reflects an uneasy balance between optimism about commercial success and acknowledgment of potential existential threats.

As AI technology continues to advance rapidly and become ubiquitous in industry and daily life, experts across the spectrum urge caution, increased research in AI safety, and public discourse on how best to manage the risks. The debate highlights the urgent need for regulatory frameworks and ethical considerations governing AI development worldwide.

The conversation around AI’s impact remains highly contested, underscoring the challenge humanity faces in harnessing one of its most powerful inventions without succumbing to unintended, possibly irreversible consequences.

For more in-depth technology news and analysis, stay tuned to the Irish Independent.
