At its annual Nvidia GTC event in San Jose, Jensen Huang made one thing very clear: Nvidia believes the AI computing boom is far from over.
During a 2.5-hour keynote, Huang unveiled new chips, new partnerships, and even talked about future data centers in space. But the headline moment was Nvidia’s projection that its flagship AI processors could help generate $1 trillion in cumulative data-center revenue by 2027.
Here’s what stood out from the event and why it matters.
The Big Bet: $1 Trillion in AI Infrastructure
- Nvidia believes the demand for computing power has exploded, with Huang saying it has increased 1 million times in the last two years.
- The company expects its AI data-center products, including the upcoming Blackwell and Rubin architectures, to drive $1 trillion in sales through 2027.
- Previously, Nvidia had forecast $500 billion in AI infrastructure revenue through 2026, so the new outlook extends the window by a year and doubles the cumulative figure.
For investors worried that AI spending might slow down, the message from Nvidia was simple: the demand pipeline still looks massive.
That said, the forecast doesn’t necessarily imply a dramatic acceleration in growth; it mostly reflects continued strong demand over a longer window.
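A rough back-of-envelope check illustrates the point. The even-spending assumption and the 2025 start year below are ours for illustration, not anything Nvidia stated; spreading each cumulative total evenly over its window shows why the new target reads more like an extension than a sharp acceleration:

```python
# Back-of-envelope comparison of the two forecasts.
# Assumption (ours, not Nvidia's): each cumulative total is spread
# evenly over a window starting in 2025.

def annual_run_rate(total_billions: float, start_year: int, end_year: int) -> float:
    """Average revenue per year if the total is spread evenly over the window."""
    years = end_year - start_year + 1
    return total_billions / years

old = annual_run_rate(500, 2025, 2026)    # prior forecast: $500B through 2026
new = annual_run_rate(1000, 2025, 2027)   # new forecast: $1T through 2027

print(f"Old forecast: ~${old:.0f}B per year")  # ~$250B per year
print(f"New forecast: ~${new:.0f}B per year")  # ~$333B per year
```

Under these hypothetical assumptions the implied run-rate rises roughly a third, not double, which is consistent with "continued strong demand over a longer window."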
New Chips, New Roadmap
Nvidia also laid out the next phases of its chip roadmap.
- Rubin architecture (expected in the second half of 2026) will succeed current AI chips.
- After Rubin, the next generation will be named after physicist Richard Feynman.
- These future chips will feature custom high-bandwidth memory designed for even more demanding AI workloads.
The naming theme continues Nvidia’s tradition of honoring scientists: Rubin is named after astronomer Vera Rubin, whose work provided key evidence for the existence of dark matter.
Nvidia Moves Into CPUs
For years, Nvidia’s dominance came from GPUs used to train AI models. Now the company is pushing deeper into CPUs, an area historically dominated by Intel and Advanced Micro Devices.
The company announced:
- A new CPU architecture called Vera
- Systems built entirely around Nvidia CPUs
- CPUs designed to manage complex AI data centers more efficiently
This shift reflects a bigger change in AI infrastructure. Training large models often happens on GPUs, but running those models in production can sometimes be cheaper on CPUs.
If Nvidia succeeds here, it could open a multibillion-dollar new business line while putting additional pressure on Intel’s data-center market.
Groq Technology Enters Nvidia’s Portfolio
Another highlight was Nvidia integrating technology from AI startup Groq.
- Nvidia now offers the Groq 3 LPU (Language Processing Unit).
- LPUs are designed specifically for AI inference, i.e., generating responses from large language models.
- These chips are optimized for extremely fast text generation thanks to high-speed on-chip memory.
The Groq chips will work alongside Nvidia GPUs as coprocessors, helping AI systems respond faster while GPUs handle more complex workloads.
Manufacturing will be handled by Samsung Electronics, using its 4-nanometer process.
Major Partnerships Across Industries
Nvidia also used GTC to showcase new and expanded partnerships, reinforcing its strategy of building an AI ecosystem.
Key collaborations include:
- International Business Machines expanding AI infrastructure partnerships
- Adobe working on AI tools and creative software integration
- Hewlett Packard Enterprise building enterprise AI systems
One of the more futuristic announcements involved Uber Technologies: Nvidia said it plans to support a fleet of autonomous vehicles powered by Nvidia software by 2028.
Yes — Nvidia Even Talked About Space
In one of the more unusual announcements, Nvidia said it is exploring chips designed for data centers in outer space.
The concept is still early, but the idea reflects how far ahead the company is thinking about the future of computing infrastructure.
As AI models become larger and energy demand grows, companies are increasingly exploring new locations and architectures for data centers.
Competition Is Heating Up
Even with Nvidia’s dominance, the competitive pressure is rising.
Challenges are coming from multiple directions:
- Advanced Micro Devices building competing AI accelerators
- Large customers like Amazon designing their own chips (Graviton and AI accelerators)
- Cloud giants increasingly investing in in-house silicon
At the same time, Nvidia itself is expanding into full-stack AI infrastructure, offering:
- Chips
- Networking
- Complete AI systems
- Software and AI models
This vertical integration helps keep customers inside Nvidia’s ecosystem.
Market Reaction
Despite the ambitious outlook, Nvidia’s stock response was muted.
- Shares initially jumped about 4.8% during the event.
- Gains later faded, and the stock closed up around 1.6% at $183.19.
Some investors were hoping for an even more aggressive growth outlook.
Still, Nvidia remains the world’s most valuable company, with a market value around $4.4 trillion.
The Bigger Picture
The key takeaway from Nvidia’s GTC event wasn’t just the new chips — it was the scale of the AI infrastructure buildout the company expects.
If Huang’s forecast holds true:
- AI computing could become a trillion-dollar market within a few years.
- Nvidia aims to capture a large share of that through chips, systems, and software.
In other words, Nvidia is positioning itself not just as a chipmaker, but as the backbone of the global AI economy.