The AI chip market has been dominated by one name for years: Nvidia.
Its GPUs power everything from ChatGPT training to AI data centers. But a newly public company, Cerebras Systems, is trying something radically different.
And investors are paying attention.
Cerebras debuted on Nasdaq this week and the stock surged 68% on day one. That instantly pushed the company’s valuation close to $67 billion, making it one of the biggest AI IPOs in recent years.
But the real story is not the IPO pop.
It is the technology.
Cerebras Is Not Building Chips Like Nvidia
Most AI chips today are relatively small.
Companies like Nvidia, Advanced Micro Devices, and Intel cut many small chips, known as dies, from each silicon wafer. Those dies are then packaged and connected together inside servers.
Cerebras looked at that entire process and asked:
“What if we just used the whole wafer as one giant chip?”
That became the Wafer Scale Engine (WSE).
Instead of cutting the wafer into dozens or hundreds of processors, Cerebras turns the entire wafer into a single massive AI processor.
To understand the scale:
- A traditional GPU is roughly postage-stamp sized
- Cerebras’ chip is closer to the size of an iPad
- It is considered the largest commercial chip ever built
Think of traditional chipmaking like slicing a pizza into pieces.
Cerebras decided to use the entire pizza as one processor.
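To put rough numbers on the pizza analogy, here is a back-of-the-envelope sketch in Python. The 300 mm wafer diameter is the industry standard; the ~800 mm² die area is an illustrative assumption in the ballpark of a large data-center GPU, not an official spec, and the count ignores edge loss and scribe lines.

```python
import math

# Rough comparison: conventional dies per wafer vs. one wafer-scale chip.
# The die area is an illustrative assumption, not a vendor spec.

WAFER_DIAMETER_MM = 300   # standard silicon wafer
GPU_DIE_AREA_MM2 = 800    # ballpark for a large data-center GPU die

wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2

# Gross die count, ignoring edge loss and scribe lines
dies_per_wafer = wafer_area // GPU_DIE_AREA_MM2

print(f"Wafer area: {wafer_area:,.0f} mm^2")              # ~70,686 mm^2
print(f"~{dies_per_wafer:.0f} GPU-sized dies per wafer")  # ~88
print("Wafer-scale approach: 1 chip per wafer")
```

Either way, the silicon is the same. What changes is whether you slice it up or keep it whole.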
Why This Matters for AI
AI workloads are extremely data-intensive.
The biggest bottleneck in AI systems is often not raw computing power. It is moving data between chips fast enough.
Traditional AI systems solve this by connecting thousands of GPUs together. But that creates communication overhead because data constantly travels between separate processors.
Cerebras tries to eliminate that problem.
By using one giant processor:
- More memory sits closer to the compute cores
- Data movement becomes faster
- AI models can train and run with lower latency
- Large AI inference tasks become more efficient
That last part is especially important.
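A toy latency model makes that data-movement tax concrete. Every number below (bandwidths, link latencies, hop counts, payload size) is invented purely for illustration; none of it is a measured figure for Cerebras, Nvidia, or any real system.

```python
# Toy model of one AI step: compute time plus time spent moving data
# between processors. All parameters are hypothetical illustrations.

def step_time_ms(compute_ms: float, bytes_moved: float,
                 link_gbps: float, link_latency_us: float, hops: int) -> float:
    """Total step time: compute + transfer over links + per-hop latency."""
    transfer_ms = bytes_moved * 8 / (link_gbps * 1e9) * 1e3
    latency_ms = hops * link_latency_us / 1e3
    return compute_ms + transfer_ms + latency_ms

# Cluster of separate chips: activations cross external links between GPUs.
multi_chip = step_time_ms(compute_ms=2.0, bytes_moved=500e6,
                          link_gbps=400, link_latency_us=2.0, hops=8)

# Single wafer-scale chip: data stays on-die, so links are faster and nearer.
wafer_scale = step_time_ms(compute_ms=2.0, bytes_moved=500e6,
                           link_gbps=10_000, link_latency_us=0.1, hops=1)

print(f"Multi-chip step:  {multi_chip:.2f} ms")   # ~12.02 ms
print(f"Wafer-scale step: {wafer_scale:.2f} ms")  # ~2.40 ms
```

The point is not the specific numbers. It is that when transfer time dwarfs compute time, keeping data on one piece of silicon changes the equation.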
Cerebras Is Betting Big on AI Inference
The AI market has two major segments:
1. Training
This is where models like ChatGPT are initially built using massive datasets.
2. Inference
This is when trained AI models actually respond to users in real time.
Inference is expected to become an even bigger market long term because every AI query, chatbot response, image generation request, or enterprise AI workflow depends on inference.
Cerebras claims its architecture performs extremely well for inference workloads, potentially even outperforming many traditional GPU-based setups.
That is a major reason investors are excited.
The SRAM Advantage
Another major difference is memory architecture.
Most AI systems rely heavily on DRAM, typically in the form of off-chip high-bandwidth memory (HBM).
Cerebras instead packs large amounts of SRAM directly onto the wafer.
Here’s the simple version:
DRAM
- Cheaper per bit
- Denser
- Slower to access
SRAM
- Much faster
- More expensive
- Takes up more silicon area
Cerebras chose speed over cost and density.
That decision makes its systems incredibly powerful for certain AI tasks, but also significantly more expensive and harder to manufacture.
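To see why that trade-off can pay off, consider token-by-token LLM inference, which is often limited by how fast model weights can stream out of memory rather than by raw compute. The sketch below is a pure bandwidth ceiling under stated assumptions: the 8-billion-parameter model, the ~3 TB/s DRAM/HBM figure, and the ~20 PB/s aggregate on-chip SRAM figure are illustrative ballparks, not vendor specs, and real systems hit other limits well before these ceilings.

```python
# Memory-bound decoding: generating one token requires re-reading the model's
# weights, so tokens/sec is capped at bandwidth / bytes-read-per-token.
# All figures below are illustrative assumptions, not vendor specs.

MODEL_BYTES = 8e9 * 2  # hypothetical 8B-parameter model at 2 bytes per weight

def max_tokens_per_second(bandwidth_bytes_per_s: float) -> float:
    """Upper bound on decode speed imposed by memory bandwidth alone."""
    return bandwidth_bytes_per_s / MODEL_BYTES

dram_bound = max_tokens_per_second(3e12)   # ~3 TB/s, ballpark for HBM
sram_bound = max_tokens_per_second(20e15)  # ~20 PB/s, assumed on-wafer SRAM

print(f"DRAM/HBM ceiling: ~{dram_bound:,.0f} tokens/s")  # ~188
print(f"SRAM ceiling:     ~{sram_bound:,.0f} tokens/s")  # ~1,250,000
```

Hold that gap in mind. It is the structural reason SRAM-heavy designs look attractive for inference.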
The Biggest Risk: Manufacturing Complexity
This is where things get tricky.
Building a wafer-scale processor is extraordinarily difficult.
With traditional chips, a defect only ruins the one small die it lands on. Manufacturers simply discard that die and keep the rest of the wafer's chips.
Cerebras cannot do that.
If something goes wrong on a wafer-sized chip, the entire processor could become unusable.
That creates:
- Higher production risk
- Lower manufacturing yields
- Higher costs
- Supply chain challenges
To solve this, Cerebras says it developed a fault-tolerant architecture that can route around defective areas on the wafer.
In simple terms:
Even if tiny sections fail, the chip can still function as one giant processor.
That is a massive engineering achievement if it scales reliably.
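The yield math behind this is worth seeing. The classic Poisson model says the chance a die is completely defect-free falls off exponentially with its area. The defect density below is an assumption (real fab numbers are proprietary), and the usable wafer-scale area is a rough figure, but the shape of the result holds regardless.

```python
import math

# Poisson yield model: P(zero defects) = exp(-area * defect_density).
# Defect density is an assumed figure; real fab data is proprietary.

DEFECT_DENSITY = 0.001   # assumed: 1 defect per 1,000 mm^2
GPU_DIE_MM2 = 800        # ballpark large GPU die
WAFER_CHIP_MM2 = 46_000  # rough usable square on a 300 mm wafer

def defect_free_yield(area_mm2: float) -> float:
    return math.exp(-area_mm2 * DEFECT_DENSITY)

print(f"GPU-sized die, defect-free:   {defect_free_yield(GPU_DIE_MM2):.1%}")     # ~44.9%
print(f"Wafer-scale die, defect-free: {defect_free_yield(WAFER_CHIP_MM2):.2%}")  # ~0.00%

# Fault tolerance reframes the problem: defects are guaranteed, so budget
# spare cores and routing to absorb the expected count instead.
expected_defects = WAFER_CHIP_MM2 * DEFECT_DENSITY
print(f"Expected defects per wafer-scale chip: ~{expected_defects:.0f}")  # ~46
```

In other words, a wafer-scale chip that demanded a perfect wafer would essentially never yield. Redundancy is not an optimization here; it is the only way the product can exist.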
The Numbers Behind the Hype
Cerebras is still tiny compared to Nvidia.
But growth has been explosive.
Revenue Growth
- 2022: $24.6 million
- 2023: $78.7 million
- 2024: $290.3 million
- 2025: $510 million
That is more than 20x growth in just three years.
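For readers who want to check that claim, here is the arithmetic on the figures above, taken at face value:

```python
# Growth math from the revenue figures listed above ($ millions).
revenue = {2022: 24.6, 2023: 78.7, 2024: 290.3, 2025: 510.0}

multiple = revenue[2025] / revenue[2022]
cagr = multiple ** (1 / (2025 - 2022)) - 1

print(f"Growth multiple, 2022 -> 2025: {multiple:.1f}x")  # ~20.7x
print(f"Implied annual growth rate:    {cagr:.0%}")       # ~175%
```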
The company’s customers reportedly include:
- OpenAI
- Amazon
- Meta Platforms
That customer list alone explains why Wall Street is paying attention.
But Nvidia Is Still in a Different Universe
Even after the IPO surge, Cerebras is nowhere near Nvidia’s scale.
Market Cap Comparison
- Nvidia: ~$5.7 trillion
- Taiwan Semiconductor Manufacturing Company: ~$2.2 trillion
- Broadcom: ~$2.1 trillion
- Advanced Micro Devices: ~$733 billion
- Cerebras Systems: ~$67 billion
More importantly, Nvidia has:
- A massive software ecosystem
- CUDA dominance
- Deep enterprise relationships
- Supply chain scale
- Huge developer adoption
That ecosystem advantage is incredibly hard to break.
This is why most AI chip startups fail to seriously challenge Nvidia.
So What Makes Cerebras Different?
Most AI challengers try to build “better GPUs.”
Cerebras is taking a completely different architectural approach.
That matters because paradigm shifts in computing often come from companies willing to rethink the entire system rather than just improve existing designs incrementally.
The company is essentially betting that:
- Bigger chips
- Faster memory
- Reduced inter-chip communication
- AI-specific architecture
…can outperform traditional GPU clusters for key workloads.
If that thesis works at scale, Cerebras could carve out a meaningful niche in AI infrastructure.
The Bull Case
Investors bullish on Cerebras believe:
- AI inference demand will explode
- Existing GPU architectures may hit scaling limits
- Wafer-scale computing offers structural advantages
- Cerebras could become a premium AI infrastructure provider
- Cloud AI services could become a recurring revenue engine
There is also another important tailwind.
The stock is already large enough to potentially qualify for indexes like:
- S&P 500
- Nasdaq-100
If included, ETFs and index funds would be forced buyers of the stock.
The Bear Case
There are still major concerns.
1. Manufacturing risk
Wafer-scale chips remain incredibly difficult and expensive to produce.
2. Competition
Nvidia is not standing still. Neither are Advanced Micro Devices, Google, or custom AI chip makers.
3. Profitability
Cerebras is still losing money operationally because it spends heavily on R&D.
4. Adoption risk
Even if performance is strong, enterprises may hesitate to move away from Nvidia’s mature ecosystem.
That switching cost is real.
Final Thoughts
Cerebras is probably the boldest hardware architecture bet in AI today.
Instead of competing directly with Nvidia on the same battlefield, it changed the battlefield itself.
That does not guarantee success.
But it does make Cerebras one of the few AI chip companies that genuinely looks differentiated rather than “another GPU startup.”
The company still has enormous execution risk ahead.
Yet if AI inference demand grows the way many expect over the next decade, Cerebras could become a serious infrastructure player in a market that may ultimately support multiple winners.
For now, Nvidia remains the king of AI chips.
But Cerebras just made the AI hardware race far more interesting.