Cloud Computing Wars: How AI Spending Is Reshaping the $600B Market

The AI-Fueled Capex Boom
The three hyperscale cloud providers — AWS, Microsoft Azure, and Google Cloud — collectively spent over $150 billion on capital expenditure in 2024, with 2025 projections exceeding $200 billion. The vast majority of this spend is directed at AI infrastructure: GPU clusters, custom silicon, and the data centers to house them.
This represents a fundamental shift in cloud economics. For the first time, infrastructure investment is growing faster than cloud revenue — a pattern that would normally alarm investors but is being tolerated because of AI's perceived transformative potential.
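The squeeze described above is easy to see with numbers. The figures below are purely hypothetical, chosen only to illustrate the mechanic: when capex grows faster than revenue, the capex-to-revenue ratio rises and free cash flow shrinks even while the top line expands.

```python
# Illustrative only: hypothetical figures for a single provider, not
# actual reported financials of AWS, Azure, or Google Cloud.
def free_cash_flow(revenue, capex):
    """Simplified FCF proxy: revenue minus capital expenditure."""
    return revenue - capex

revenue = [100, 120]  # hypothetical, $B: +20% year over year
capex = [40, 60]      # hypothetical, $B: +50% year over year

for year, (r, c) in enumerate(zip(revenue, capex)):
    print(f"Year {year}: revenue=${r}B  capex=${c}B  "
          f"capex/revenue={c / r:.0%}  FCF=${free_cash_flow(r, c)}B")
```

In this toy example the capex-to-revenue ratio climbs from 40% to 50% in a single year, which is the pattern that would normally alarm investors.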
Market Share: The Current Standings
Azure's growth story is the most dramatic. Microsoft's multi-billion-dollar OpenAI partnership has given it a unique wedge: enterprises that want GPT-4 integration get it through Azure, driving cloud migration as a side effect.
The Custom Silicon Race
All three hyperscalers are investing heavily in custom AI chips to reduce their dependence on NVIDIA:
| Company | Chip | Generation | Status | Target Workload |
|---|---|---|---|---|
| Google | TPU v5p | 5th Gen | Production | Training + Inference |
| Google | TPU v6 | 6th Gen | Early Access | Frontier Models |
| AWS | Trainium2 | 2nd Gen | Production | Training |
| AWS | Inferentia2 | 2nd Gen | Production | Inference |
| Microsoft | Maia 100 | 1st Gen | Production | Training + Inference |
| Meta | MTIA v2 | 2nd Gen | Internal | Inference at Scale |
Source: Reuters, The Information
"The hyperscalers don't want to defeat NVIDIA — they want leverage. Custom silicon gives them negotiating power and margin protection."
None of these custom chips will displace NVIDIA in the near term, but they represent a strategic hedge that could reshape the GPU market's pricing power over time.
Cloud Revenue Growth Rates
While market share tells part of the story, the growth rates reveal who has momentum.
What Enterprise Buyers Should Watch
For CIOs and CTOs evaluating cloud strategy, several trends matter:
- Multi-cloud is becoming multi-AI: enterprises are choosing cloud providers based on which AI models and tools they offer, not just compute pricing
- Egress fees are falling: competitive pressure is finally reducing the lock-in cost of moving data between providers
- Sovereign cloud is expanding: regulatory requirements in the EU, India, and Southeast Asia are creating regional cloud markets with distinct dynamics
- Spot and reserved AI compute markets are emerging, similar to how AWS transformed general compute pricing a decade ago
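The pricing-model point in the last bullet can be sketched with a simple cost comparison. All rates below are hypothetical assumptions for illustration, not actual provider pricing; the point is the relative shape of the trade-off: committed capacity discounts steady workloads, while interruptible spot capacity is cheapest for fault-tolerant jobs.

```python
# Hypothetical GPU-hour rates, illustrative only.
ON_DEMAND_RATE = 40.0  # $/GPU-hour, pay as you go
RESERVED_RATE = 25.0   # $/GPU-hour with a long-term commitment
SPOT_RATE = 15.0       # $/GPU-hour, interruptible capacity

def monthly_cost(rate_per_gpu_hour, gpu_hours):
    """Total monthly spend for a given rate and GPU-hour volume."""
    return rate_per_gpu_hour * gpu_hours

# An 8-GPU cluster running around the clock (~730 hours/month).
hours = 8 * 730
for name, rate in [("on-demand", ON_DEMAND_RATE),
                   ("reserved", RESERVED_RATE),
                   ("spot", SPOT_RATE)]:
    print(f"{name:>9}: ${monthly_cost(rate, hours):,.0f}/month")
```

The same three-tier structure (on-demand, committed, interruptible) is how AWS reshaped general compute pricing, and it is the template the emerging AI compute markets appear to be following.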
The Outlook
The cloud wars are entering their most capital-intensive phase. The winners will be determined not just by who builds the most GPU clusters, but by who translates that infrastructure into the most compelling AI-native services.
Expect continued consolidation among smaller cloud providers, increasing pressure on margins as AI workloads demand specialized (and expensive) infrastructure, and a growing bifurcation between cloud providers that serve as AI platforms and those that remain commodity compute utilities.