Is AI a “Normal Technology”? Insights from O’Reilly and Arvind Narayanan
Artificial intelligence (AI) has often been portrayed as an unprecedented technological revolution, with some envisioning it as the harbinger of an imminent singularity that will radically transform every aspect of life. This narrative shapes much of the current discourse, influencing technology investment, government policy, and economic expectations. However, Tim O’Reilly brings a fresh perspective in a recent article discussing the concept of AI as a “normal technology,” based on an illuminating essay by Princeton’s Arvind Narayanan and Sayash Kapoor.
The Myth of AI Exceptionalism
O’Reilly cautions against the popular belief that AI represents a wholly novel and unique technological leap. Instead, he suggests that while AI is indeed transformative, it is likely to follow the same broad patterns historically seen with technologies like electrification, the automobile, and the internet. The pace of technological change is not dictated merely by faster innovation but rather by the rate of adoption, which depends heavily on economic conditions, social factors, infrastructure development, and human adaptation.
This view challenges the more sensational narratives surrounding artificial general intelligence (AGI) and the idea of a technological singularity, suggesting that these framings are faulty maps that can mislead stakeholders about AI’s real trajectory.
What Is “Normal Technology”?
Arvind Narayanan frames the notion of “normal technology” within a well-established theoretical framework for how technologies diffuse through society. A key insight is the distinction between two logics: one governing the pace of technological advancement itself, the other determining how, and how fast, people adopt technology.
Narayanan emphasizes that widespread adoption depends significantly on the ability of human behavior and organizational structures to evolve — not just the technology companies directly involved with AI innovations but all organizations deploying AI solutions.
He outlines a four-stage framework for AI adoption:
- Invention: Advancement in AI model capabilities, such as improvements in large language models.
- Product Development: Translating these capabilities into reliable, user-friendly products. This stage is challenging because AI technologies often lack the deterministic behavior users expect from traditional software, leading to product launch difficulties.
- Diffusion: Early users experiment with AI, discovering effective use cases, addressing risks, and integrating new workflows.
- Adaptation: The most prolonged phase where not only individuals but entire industries—and sometimes legal frameworks—adjust to the new reality that AI creates.
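The determinism gap named in the product-development stage above can be made concrete with a common engineering pattern: wrapping a stochastic model call in deterministic validation and retry logic, so users see the predictable behavior they expect from traditional software. This is a minimal sketch; `flaky_model` is a hypothetical stub standing in for a real model API, not any actual library.

```python
import json

# Hypothetical stand-in for a stochastic AI model: the same prompt can
# yield differently formatted output on each call. We simulate that
# nondeterminism with a call counter rather than a real model API.
_calls = 0

def flaky_model(prompt: str) -> str:
    global _calls
    _calls += 1
    if _calls % 2 == 0:
        return '{"status": "ok"}'          # well-formed JSON
    return "Sure! Here is your JSON: ..."  # chatty, unparseable reply

def is_json(text: str) -> bool:
    # Deterministic acceptance test applied to nondeterministic output.
    try:
        json.loads(text)
        return True
    except ValueError:
        return False

def call_with_validation(prompt: str, retries: int = 3) -> str:
    # Product-layer wrapper: retry until the model's output passes a
    # deterministic check, restoring predictable behavior for the caller.
    for _ in range(retries):
        out = flaky_model(prompt)
        if is_json(out):
            return out
    raise RuntimeError("model output failed validation after retries")
```

Much of the product-development work Narayanan describes amounts to layers like this: deterministic scaffolding around a probabilistic core.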
Lessons from History: Electrification as a Parallel
Narayanan compares AI adoption to the historical process of factory electrification. Early attempts simply swapped steam engines for large electric motors while keeping the old centralized line-shaft layout, and yielded little benefit; productivity surged only once small motors were attached to individual machines and factory floors were redesigned around that distributed model. By analogy, the full potential of AI won’t be realized merely through incremental technological improvements but through broader systemic changes in how AI is integrated into existing workflows and business models.
AI’s Impact on Software Development
A particularly compelling example Narayanan gives is how AI may transform software development. Rather than replacing programmers, AI will extend the scope of software customization. Imagine a future in which, much as people now prompt AI to build small apps on the fly, AI could generate complex enterprise software tailored to specific teams or even individual clients.
This suggests a fundamental reconceptualization of enterprise software—from one-size-fits-all products to bespoke, on-demand solutions. Such a transformation, Narayanan argues, will take decades and is aligned more with organizational and behavioral adaptation than with the mere progression of AI capabilities.
Physical and Behavioral Constraints on AI Adoption
While Narayanan focuses largely on behavioral challenges to adoption, O’Reilly highlights important physical constraints, drawing a parallel to the automobile economy of the 20th century. For cars to become ubiquitous, extensive infrastructure—roads, tires, signage, fuel distribution, city planning—had to be developed.
Currently, similar bottlenecks exist for AI, such as shortages of GPUs, the complexity of data center expansion, and energy demands. These physical infrastructure issues, coupled with the necessity for human and organizational adaptation, indicate a slower adoption curve than might be anticipated in more hype-driven narratives.
The Value of the “Normal Technology” Perspective
The normal technology framework helps cut through the hype surrounding AI and provides a more realistic roadmap for investors, entrepreneurs, policymakers, and users. It suggests that the truly lasting innovations and businesses will be those that focus on incremental adoption, adaptation, and integration of AI across industries, rather than chasing speculative visions of an AI singularity.
O’Reilly and Narayanan’s insights set the stage for deeper discussions about AI’s evolving role, particularly as explored in upcoming industry-focused events like the AI Codecon on September 9, titled "Coding for the Agentic World."
By reframing AI within the context of historical technological revolutions and emphasizing the pragmatic steps of adoption and adaptation, this perspective grounds the future of AI in the realities of human behavior, organizations, and infrastructure—encouraging a cautious but optimistic outlook on AI’s transformative potential.