Key Takeaways
- Meta signed a $21 billion expanded deal with CoreWeave on April 9, 2026, extending a prior $14.2 billion contract — total committed spend now reaches $35.2 billion through December 2032.
- The contract focuses on AI inference capacity, not training — serving live model outputs to users across WhatsApp, Instagram, and Facebook at scale.
- Deployments include NVIDIA’s next-gen Vera Rubin platform: 5x the FP4 inference performance of Blackwell and approximately 35x better inference per megawatt.
- Meta plans $115–$135 billion in total capex for 2026, nearly double its $72 billion spend in 2025.
- CoreWeave’s contracted revenue backlog now stands at $87.8 billion, with Meta and OpenAI accounting for roughly 65% of that figure.
Meta just committed another $21 billion of AI compute to a single vendor — and the deal reveals exactly where the AI infrastructure race is heading. On April 9, 2026, Meta Platforms and CoreWeave announced an expanded agreement worth $21 billion, running through December 2032. Combined with a prior $14.2 billion contract signed in late 2025, Meta’s total committed spend with CoreWeave now stands at $35.2 billion.

The contract is focused on AI inference — the real-time process of serving trained model outputs to users — not on training new models. As Meta embeds AI features across apps serving over 3 billion daily users, the demand for low-latency inference has grown faster than any single company can build data centers to serve it.
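To put that demand in rough numbers, here is a minimal back-of-envelope sketch in Python. The 3 billion daily-user figure comes from the article; the per-user interaction rate and token counts are illustrative assumptions, not reported values.

```python
# Illustrative back-of-envelope estimate of daily inference volume.
# The daily-user figure comes from the article; the interactions-per-user
# and tokens-per-interaction values are hypothetical, chosen only to
# show the scale of continuous inference demand.

daily_active_users = 3_000_000_000        # from the article
interactions_per_user_per_day = 5         # assumption for illustration
tokens_per_interaction = 300              # assumption for illustration

daily_requests = daily_active_users * interactions_per_user_per_day
daily_tokens = daily_requests * tokens_per_interaction

print(f"Daily inference requests: {daily_requests:,}")   # 15,000,000,000
print(f"Daily tokens generated:   {daily_tokens:,}")     # 4,500,000,000,000
```

Even under conservative assumptions, the workload runs continuously rather than as a one-time job, which is what separates inference economics from training.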
For more breaking stories on hyperscaler spending and AI infrastructure, see our Tech News coverage.
What Does the $21 Billion CoreWeave-Meta Agreement Cover?
The expanded deal, announced via BusinessWire on April 9, covers new GPU cloud capacity running from 2027 through December 20, 2032. It also exercises an existing option for additional capacity through April 10, 2032.
CoreWeave provides Meta with dedicated GPU clusters across multiple data center locations, reserved for Meta rather than shared with other tenants. This matters for inference workloads: Meta needs predictable, high-throughput compute for AI assistants and content recommendations operating inside apps used by billions of people every day.
Why Is Meta Using CoreWeave Instead of Its Own Data Centers?
Meta is aggressively building its own infrastructure. The company has budgeted $115–$135 billion in capital expenditures for 2026 — nearly double the $72 billion it spent in 2025. But physical data center construction takes 18–36 months. CoreWeave can deploy GPU clusters significantly faster.
When Meta needs capacity online by a specific launch date — such as rolling out a new AI-powered feature on Instagram — CoreWeave delivers ahead of any in-house build schedule. The outsourced model also reduces exposure to stranded compute assets during lower-demand periods. Meta gets flexibility while CoreWeave manages the hardware density and power infrastructure challenges that come with next-generation GPUs.
How Does NVIDIA Vera Rubin Raise the Stakes for This Deal?
The new contract includes some of the first commercial deployments of NVIDIA’s Vera Rubin platform — the GPU architecture following Blackwell. The performance improvements are substantial:
| Metric | NVIDIA Blackwell | NVIDIA Vera Rubin |
|---|---|---|
| FP4 Inference (per GPU) | ~10 PFLOPS | ~50 PFLOPS (5x) |
| Memory Bandwidth | ~8 TB/s | 22 TB/s (2.75x) |
| Inference per Megawatt | Baseline | ~35x improvement |
| Rack Power Requirement | Standard air/liquid | 190–230 kW liquid-cooled |
Vera Rubin’s 190–230 kW rack requirement demands purpose-built liquid cooling infrastructure. CoreWeave has engineered its facilities specifically for this density — a differentiator that makes it one of the few providers capable of deploying Vera Rubin at scale. By securing early access through CoreWeave, Meta gains a significant performance advantage in inference speed and per-query cost efficiency.
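The table's headline ratios can be reproduced with a few lines of arithmetic. The sketch below uses only the figures quoted above; the final line assumes per-query energy cost scales inversely with inference-per-megawatt, which is a simplification rather than a vendor claim.

```python
# Sanity-check of the Blackwell -> Vera Rubin ratios quoted in the table.
blackwell_fp4_pflops = 10     # ~10 PFLOPS FP4 per GPU (from table)
rubin_fp4_pflops = 50         # ~50 PFLOPS FP4 per GPU (from table)

blackwell_bandwidth_tbs = 8   # ~8 TB/s (from table)
rubin_bandwidth_tbs = 22      # 22 TB/s (from table)

print(f"FP4 inference gain:    {rubin_fp4_pflops / blackwell_fp4_pflops:.2f}x")        # 5.00x
print(f"Memory bandwidth gain: {rubin_bandwidth_tbs / blackwell_bandwidth_tbs:.2f}x")  # 2.75x

# Simplifying assumption: if per-query energy cost scales inversely with
# inference-per-megawatt, a ~35x efficiency gain implies roughly 1/35th
# of the energy cost per query at equal utilization.
inference_per_mw_gain = 35
print(f"Relative energy cost per query: {1 / inference_per_mw_gain:.3f}")  # ~0.029
```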
For broader context on how next-generation AI architectures are reshaping compute demand, explore our AI section.
What Does CoreWeave’s $87.8 Billion Backlog Signal About AI Demand?
CoreWeave’s contracted revenue backlog stands at $87.8 billion. Meta and OpenAI together account for roughly 65% of that figure. CoreWeave projects $12–$13 billion in revenue for 2026, representing year-over-year growth of 134–153%. The company plans $30 billion in capital expenditures in 2026 — double its 2025 level.
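Those growth figures are internally consistent with the revenue guidance, as the quick check below shows. The implied 2025 base (roughly $5.1 billion) is derived from the article's numbers, not a figure stated in the announcement.

```python
# Consistency check: does the 2026 revenue guidance line up with the
# quoted 134-153% year-over-year growth range?
revenue_2026_low, revenue_2026_high = 12.0, 13.0   # $B projected (from article)
growth_low, growth_high = 1.34, 1.53               # 134% and 153% YoY growth

implied_2025_low = revenue_2026_low / (1 + growth_low)
implied_2025_high = revenue_2026_high / (1 + growth_high)

# Both ends of the range point to roughly the same 2025 base (~$5.1B),
# a derived estimate rather than a figure stated in the article.
print(f"Implied 2025 revenue: ${implied_2025_low:.1f}B - ${implied_2025_high:.1f}B")
```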
Notably, CoreWeave also announced a multi-year agreement with Anthropic in April 2026 to provide NVIDIA GPU capacity for Claude inference workloads. Within 48 hours of the Meta announcement, CoreWeave priced a $3.5 billion convertible notes offering — reflecting the capital intensity required to build AI infrastructure at this scale.
- $35.2 billion — Meta’s total committed spend with CoreWeave through 2032
- $87.8 billion — CoreWeave’s total contracted revenue backlog, April 2026
- $115–$135 billion — Meta’s planned total capex for 2026 (vs. $72 billion in 2025)
- 134–153% — CoreWeave’s projected year-over-year revenue growth for 2026
- $30 billion — CoreWeave’s own capex plan for 2026, double its 2025 outlay
These numbers confirm that demand for specialized AI infrastructure is still accelerating. The largest AI companies are signing longer contracts at higher values, for capacity they cannot build fast enough in-house. The neocloud model — GPU-specialized providers like CoreWeave — is becoming critical to how frontier AI is deployed globally.
Common Questions — Meta CoreWeave Deal
Q: Why is Meta spending $21 billion with CoreWeave when it builds its own data centers?
A: CoreWeave can deploy GPU clusters significantly faster than Meta can construct new facilities. For time-sensitive capacity needs — such as supporting a major AI product launch — CoreWeave provides speed and flexibility that in-house construction cannot match. Meta uses CoreWeave to bridge the gap while its own data center pipeline catches up.
Q: What is AI inference, and why does it require so much compute?
A: Inference is the process of running a trained AI model to generate real-time outputs for users — answering questions, generating content, or personalizing recommendations. Unlike training, which happens once, inference runs continuously at massive scale. Meta’s AI features serve over 3 billion daily active users, creating enormous, ongoing compute demands.
Q: What is NVIDIA Vera Rubin, and how does it differ from Blackwell?
A: Vera Rubin is NVIDIA’s next-generation GPU architecture succeeding Blackwell. It delivers approximately 50 PFLOPS of FP4 inference per chip — roughly 5x Blackwell’s performance — with 2.75x higher memory bandwidth and about 35x better inference efficiency per megawatt. Vera Rubin requires liquid-cooled 190–230 kW racks, limiting it to purpose-built data center facilities.
Q: Is CoreWeave a competitor to AWS, Azure, or Google Cloud?
A: CoreWeave operates as a neocloud: a GPU-specialized cloud provider rather than a full-service hyperscaler. It doesn't compete on general-purpose compute, databases, or the broad catalog of managed cloud services. Its advantage is high-density GPU infrastructure for AI workloads, which it can deploy faster and at higher density than most hyperscalers currently offer for specialized inference tasks.
Conclusion
Meta’s $21 billion CoreWeave expansion confirms three trends: AI inference demand is growing faster than companies can self-build, NVIDIA’s Vera Rubin represents a meaningful generational GPU leap, and specialized infrastructure providers are now foundational to deploying frontier AI at scale. With $87.8 billion in contracted backlog, CoreWeave’s position at the center of the AI infrastructure market is structural — not speculative.
Explore more in our Tech News section.
Last Updated: April 16, 2026