Meta has signed a multibillion-dollar, multi-year agreement with Amazon Web Services to deploy tens of millions of Graviton5 processor cores for artificial intelligence, the companies confirmed on Friday. The deal, announced just as Google Cloud Next 2026 wrapped, instantly makes Meta one of the largest customers of AWS's homegrown ARM-based silicon and is the clearest sign yet that the AI chip race has expanded well beyond Nvidia GPUs.
Santosh Janardhan, Meta's head of infrastructure, framed the move as a fit for the company's growing fleet of autonomous AI systems. "AWS has been a trusted cloud partner for years, and expanding to Graviton allows us to run the CPU-intensive workloads behind agentic AI with the performance and efficiency we need at our scale," he said.
Why CPUs are suddenly hot again
For most of the generative AI boom, the conversation has revolved around GPUs and specialized accelerators for model training. Agentic systems are changing the math. When trained models orchestrate tools, search the web, write code, and chain multi-step actions, the bottleneck shifts from matrix multiplication to general-purpose compute, memory bandwidth, and tight inter-core communication.
That is the niche AWS has been engineering Graviton5 for. The chip carries 192 cores and a cache reported to be roughly five times larger than the prior generation, cutting inter-core delays by up to 33% and delivering as much as 25% better overall performance, according to AWS. For agent workloads that fan out across many lightweight requests, those gains compound quickly.
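As a back-of-the-envelope illustration of that compounding (using only the 25% and 33% figures AWS quotes; the step counts and baseline latencies below are arbitrary assumptions, not published numbers), a chain of sequential agent actions multiplies per-step savings into a visible end-to-end gap:

```python
# Illustrative sketch only. The 1.25x compute speedup and 33% inter-core
# latency cut are the vendor-quoted Graviton5 figures; every other number
# here is an invented placeholder for demonstration.

def chain_latency_ms(steps, compute_ms, comm_ms, speedup=1.0, comm_cut=0.0):
    """End-to-end latency for `steps` sequential agent actions, each paying
    a general-purpose compute cost plus an inter-core communication cost."""
    per_step = compute_ms / speedup + comm_ms * (1.0 - comm_cut)
    return steps * per_step

# Hypothetical 10-step agent chain: 40 ms compute + 10 ms communication per step.
baseline = chain_latency_ms(steps=10, compute_ms=40, comm_ms=10)
graviton5 = chain_latency_ms(steps=10, compute_ms=40, comm_ms=10,
                             speedup=1.25, comm_cut=0.33)
print(baseline, graviton5)
```

Under these assumed inputs the chain drops from roughly 500 ms to under 390 ms, and the gap widens linearly with every additional step — which is the sense in which per-step gains "compound" for fan-out agent traffic.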
A diversification campaign, not a swap
The AWS announcement lands inside a broader compute spree. Meta has been stitching together capacity from multiple vendors as it absorbs the costs of training Llama-class models and running consumer-facing AI features across its apps. Reports tie that procurement campaign to commitments involving Nvidia, AMD, CoreWeave, Nebius, and Broadcom-built MTIA custom silicon, alongside earlier deals with Google Cloud.
For AWS, the Meta win follows Anthropic's $100 billion, 10-year commitment to the platform, announced earlier this month alongside Amazon's expanded $25 billion investment in the lab. Together, the two deals position AWS as a preferred destination for both frontier model developers and hyperscale agentic deployments.
A strategic message to Nvidia
The timing is hard to miss. Amazon CEO Andy Jassy has been explicit that AWS intends to compete on price-performance against Nvidia and Intel for AI workloads, and Graviton5 is the company's headline argument. Coming on the heels of Google's TPU 8t and TPU 8i announcements, the Meta deal turns the agentic-era chip market into a three-way silicon contest between custom cloud accelerators, traditional GPUs, and Arm-based server CPUs, which Meta has separately committed to adopting.
Implications
For enterprises planning agentic deployments, the deal is a useful signal that inference economics — not just training capacity — are now driving infrastructure decisions at the largest scale. Expect more multi-vendor stacks where GPUs handle training and large-context reasoning while ARM CPUs and custom accelerators absorb high-volume agent traffic. For Nvidia, the message is that even its biggest customers are aggressively building optionality. And for Meta, locking in tens of millions of CPU cores is the kind of move that only makes sense if you expect agents to become a daily, billion-user behavior — fast.