Nvidia Licenses Groq’s AI Technology Amid Growing Demand for Advanced Chips
By Kate Clark, The Wall Street Journal | December 24, 2025
Nvidia has taken a significant step to enhance its artificial intelligence (AI) chip capabilities by licensing AI-inference technology from chip startup Groq, the companies announced Wednesday. This nonexclusive agreement underscores the rising demand for cutting-edge AI hardware designed to handle the increasingly critical task of AI inference.
Strategic Partnership and Leadership Transition
As part of the licensing deal, Groq’s CEO and founder, Jonathan Ross, along with the company’s president and select staff members, will join Nvidia. Ross, who previously studied under AI pioneer Yann LeCun and played a key role at Google developing the tensor processing unit (TPU), will be instrumental in integrating Groq’s technology into Nvidia’s suite of AI products. Despite this transition, GroqCloud – a platform that offers AI processing power to software developers without requiring them to own physical chips or servers – will continue operating as an independent entity. Groq’s finance chief, Simon Edwards, is set to become the company’s new CEO.
Groq’s Innovation in AI Inference Chips
Founded in 2016, Groq has positioned itself as an innovator in the AI hardware space, specializing in chips and software tailored for AI model inference. Unlike general-purpose GPUs, which consume substantial power and are used chiefly for training AI models, Groq designs “language processing unit” chips optimized for inference — the routine operation in which a trained AI model interprets new data to generate answers, predictions, or conclusions in real time.
A key feature of Groq’s chips is embedded memory, which allows them to be produced and deployed faster, and to consume less energy, than traditional graphics-processing units. This architectural difference addresses a critical industry challenge: the need for powerful yet energy-efficient inference hardware.
Context Within the Broader AI Chip Ecosystem
The deal signals Nvidia’s strategic move to secure advanced technologies amid a competitive and rapidly evolving AI chip market. Demand for AI inference hardware is accelerating as AI applications become more prevalent across industries, prompting startups and established tech giants to innovate aggressively in this domain.
Groq raised $750 million in a September funding round that valued the company at $6.9 billion. The round attracted notable investors including BlackRock, Neuberger Berman, Cisco Systems, and Samsung. Groq’s chips are designed, fabricated, and assembled predominantly in North America, in partnership with firms including Samsung.
Nvidia’s licensing of Groq’s technology complements similar recent industry moves, reflecting broader consolidation and collaboration trends. For example:
- Meta Platforms invested $14 billion in Scale AI, bringing Scale’s CEO onboard to help lead its AI efforts.
- Alphabet’s Google not only licensed technology from Character.AI but also recruited top executives from the startup.
- Microsoft established a deal with Inflection AI to strengthen its AI offerings.
Nvidia’s Position and Market Dynamics
Nvidia remains the dominant supplier of advanced AI chips, having become the world’s most valuable company through sustained leadership in the sector. The company has increased the frequency of its advanced AI chip releases to meet growing market demands.
However, Nvidia faces mounting competition from major technology companies such as Google and Amazon, alongside an emerging wave of AI startups. Furthermore, some of Nvidia’s key customers, including OpenAI and Meta, have begun designing their own custom AI chips, signaling evolving dynamics in the AI hardware ecosystem.
After the announcement, Nvidia shares were little changed in after-hours trading; the stock has gained more than 35% so far this year.
For further information, contact Kate Clark at kate.clark@wsj.com.
© 2025 Dow Jones & Company, Inc. All rights reserved.