Meta raised its 2026 capital expenditure guidance to between $125 billion and $145 billion on Wednesday, lifting the prior $115–$135 billion range and intensifying what is already the biggest infrastructure build-out in the company's history. The disclosure, made alongside a Q1 earnings beat, sent Meta shares down roughly 6–7% in after-hours trading even as revenue grew 33% year over year to $56.3 billion.
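For readers who want to sanity-check the figures above, here is a quick back-of-the-envelope sketch (all numbers are taken from the reported guidance and revenue; the implied year-ago revenue is a derived estimate, not a disclosed figure):

```python
# Guidance math from Meta's reported ranges (USD billions).
prior_low, prior_high = 115, 135
new_low, new_high = 125, 145

prior_mid = (prior_low + prior_high) / 2   # 125.0
new_mid = (new_low + new_high) / 2         # 135.0
midpoint_raise = new_mid - prior_mid       # +10.0 at the midpoint, roughly an 8% lift

# Revenue grew 33% year over year to $56.3B, which implies a year-ago base of:
q1_revenue = 56.3
implied_prior_year = q1_revenue / 1.33     # ~42.3

print(f"Capex guidance midpoint: ${prior_mid:.0f}B -> ${new_mid:.0f}B (+${midpoint_raise:.0f}B)")
print(f"Implied year-ago quarterly revenue: ~${implied_prior_year:.1f}B")
```

The midpoint raise, about $10 billion, is the cleaner way to read the new range, since the low and high ends both moved up by the same amount.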
The earnings release lands in the middle of a frenetic stretch of hyperscaler capex announcements and signals that Meta sees the cost curve for frontier AI infrastructure steepening rather than flattening. Mark Zuckerberg framed the quarter as a milestone for Meta Superintelligence Labs, citing a "significantly upgraded" Meta AI and the recent debut of Muse Spark — the lab's first proprietary foundation model — as proof points that the spend is translating into product.
Why the capex line moved
Meta attributed the higher range to two pressures: higher component pricing this year and additional data center costs to support future-year capacity. Both factors are downstream of an industry-wide squeeze on advanced packaging, HBM memory, and power-ready data center sites, and both echo language used by other hyperscalers reporting this quarter.
The practical effect is that Meta is locking in a multi-year ramp rather than a one-off bump. Management has previously said the company is securing land, power, and long-lead infrastructure components well ahead of model demand, an approach that pushes cash outlays forward of revenue contribution.
MTIA Gen 2 moves from roadmap to fab
The most consequential technical disclosure was that MTIA Gen 2, Meta's custom AI accelerator co-developed with Broadcom on a 2-nanometer process, has entered production. The chip is designed to handle recommendation-model inference at scale across Instagram Reels, Facebook Feed, and Threads — workloads that today consume an enormous share of Meta's GPU capacity.
Meaningful MTIA deployment at scale is widely viewed as a 2027 story, but bringing Gen 2 into production this year gives Meta a concrete path to lower per-query inference costs and to reduce Nvidia exposure on its largest internal workloads. Optical networking and Meta-designed data center architecture are absorbing additional capex alongside the silicon.
Implications
For Meta, the bet is that personal superintelligence and AI-driven recommendations justify capex on a trajectory that now rivals national infrastructure programs. For the broader market, the guidance raise reinforces a pattern visible across this earnings cycle: AI compute is getting more expensive even as model providers compete on price, with the cost being absorbed by hyperscaler balance sheets rather than end users. Investors flinched on Wednesday, but the signal to suppliers — Broadcom, TSMC, memory vendors, and data center developers — is that Meta intends to keep buying.