Inside the AI Bubble: An Immersive Look at the Culture and Concerns at NeurIPS 2025
By Alex Reisner | The Atlantic | December 14, 2025
Last week in San Diego, amid the grandeur of one of the world's largest AI research gatherings, an intense debate about the future of artificial intelligence both captivated and divided experts. At the heart of the discussion was artificial general intelligence, or AGI: a concept hovering between science fiction and scientific pursuit, promising potentially world-altering advances while provoking fears of existential risk.
A Glimpse into NeurIPS: The AI Conference That’s Bigger Than Ever
NeurIPS, shorthand for Neural Information Processing Systems, took place at the sprawling San Diego Convention Center, which hosted thousands of attendees across multiple enormous rooms. Over the past decade, NeurIPS attendance has skyrocketed, climbing from fewer than 4,000 in 2015 to more than 24,000 this year. The event has evolved from a technical symposium into a glittering showcase of ambitious research, corporate power plays, and the AI industry's lavish recruiting efforts.
Corporate giants such as Google, Meta, Apple, Amazon, Microsoft, ByteDance, and Tesla set up impressive booths to flaunt their AI breakthroughs and visions of the future. Lesser-known companies like Runpod and Ollama also made their presence felt. Notably, some key players, including OpenAI, Anthropic, and xAI, were absent from the exhibitor hall; their prominence in the industry likely makes a recruiting booth unnecessary.
The conference doubles as a battleground for talent, with invitation-only social events held at venues as prestigious as the Hard Rock Hotel rooftop and the USS Midway aircraft-carrier museum. Party guests indulged in gourmet seafood buffets of oysters, king prawns, and ceviche while discussing AI's transformative potential. Questions linger, however, about how sustainable this opulence is while some leading companies face prolonged financial losses.
The AGI Debate: Between Fear, Fantasy, and Focus
Max Tegmark, a prominent advocate for AI safety, gathered a small group of journalists to preview an AI-safety index he developed, which rated no company higher than a C+. Tegmark warned that AGI could lead to human extinction and urged caution. Yet many researchers present found such apocalyptic speculation disconnected from day-to-day AI work.
Despite widespread discussion of AGI in public and in the media, the conference program suggests that research focused explicitly on AGI is rare. Of the more than 5,600 papers, only two mentioned AGI directly in their titles. An informal survey found that more than a quarter of attendees didn't even know what the abbreviation stands for. The idea of superintelligence nevertheless continues to fuel cultural narratives and confer prestige, even as it remains an elusive goal.
Zeynep Tufekci, a sociologist and keynote speaker at NeurIPS, challenged the prevailing focus on superintelligence, arguing that it distracts from pressing AI concerns such as misinformation, chatbot addiction, and threats to truth. Her call to acknowledge AI’s real, immediate impacts sparked spirited debate, highlighting a disconnect between speculative fears and near-term realities.
Promise and Peril Through the Eyes of AI Luminaries
On the rooftop of the Hard Rock Hotel, AI pioneer Yoshua Bengio discussed his new nonprofit, LawZero, which aims to develop AI systems “safe by design,” inspired by Isaac Asimov’s famed robot laws. Bengio conveyed worries about political misuse of powerful AI to manipulate public opinion and the risk of deception by advanced AI. Still, he estimated that catastrophic risks lay decades in the future, offering a measured optimism that technology and policy could eventually address these challenges.
Yet some issues, like the mental-health effects linked to chatbot use and the exploitation of artistic and cultural works, were notably absent from such discussions. This technical, future-focused lens points to a broader phenomenon: the AI industry's preoccupation with speculative superintelligence narratives often overshadows the current, tangible consequences of AI deployment.
The AI Economy and the Question of Sustainability
The lavish parties and multimillion-dollar salary offers at NeurIPS paint a picture of a booming sector attracting exceptional talent and investment. But with major companies such as OpenAI forecasting continued losses for years to come, it is reasonable to ask how long the current investment frenzy can last. Given AI's growing influence on the global economy and culture, a potential cooling of the industry raises significant questions about the future.
Amid the buzz, it remains unclear who truly benefits from the dominant AI narratives. While many technologists espouse visionary goals of "saving humanity" from superintelligence, observers note that these narratives conveniently sustain an ultracompetitive industry that concentrates wealth and influence in the hands of a privileged few.
Conclusion: Between Caution and Celebration, a Complex Reality
NeurIPS 2025 revealed an AI ecosystem characterized by dazzling ambitions, deep divides on risk perception, and a culture that oscillates between urgent calls for caution and celebratory technological optimism. The gulf between speculative fears of AGI and the immediate challenges posed by current AI capabilities underscores the importance of nuanced, grounded discussions as society navigates the promises and perils of artificial intelligence.
Alex Reisner is a staff writer at The Atlantic.