Elon Musk has laid out the most detailed public blueprint yet for Terafab, the semiconductor venture co-owned by Tesla, SpaceX, and xAI, confirming that the project will rely on Intel's forthcoming 14A manufacturing process to build AI chips at an unprecedented scale. The plan, surfaced in reports published April 24, 2026, cements Intel's return as a serious foundry contender and positions Terafab as one of the most ambitious industrial bets in the AI era.
A Two-Fab Complex in Austin
According to Musk, Terafab will consist of two advanced chip factories built on and around Tesla's Giga Texas campus in the Austin area. One facility is slated to produce silicon for Tesla vehicles and Optimus humanoid robots, while the other is designed to produce chips for AI data centers, including installations Musk has floated for eventual deployment in space.
The first stage is a research fab at Giga Texas expected to cost roughly $3 billion. Musk described it as "capable of maybe a few thousand wafers per month," framing the initial site as a test bed rather than a volume producer. SpaceX is expected to take the lead on any subsequent large-scale Terafab construction, while Tesla handles the research pilot.
Intel 14A as the Core Process
The most consequential disclosure is Terafab's reliance on Intel's 14A node, the successor to Intel's 18A process and the first node to use High-NA EUV lithography for its most critical layers. Musk said Intel's 14A process "will be probably fairly mature or ready for prime time" by the time Terafab scales, signaling that Terafab's production ramp is being timed to Intel's roadmap.
For Intel, the deal represents its first publicly named major customer for 14A. On Intel's earnings call, CEO Lip-Bu Tan said he could "think of no better partner than Elon Musk" for the partnership, and the expanded disclosures around Terafab give Intel Foundry a flagship reference customer at a moment when TSMC continues to dominate leading-edge AI chip production.
A Terawatt of Compute — Eventually
Terafab's stated long-term goal is to produce one terawatt of computing capacity per year, a figure Musk has previously cited as roughly double the total electrical power currently generated across the United States. Independent analysts at Bernstein have estimated that scaling chip capacity to that level could require between $5 trillion and $13 trillion in cumulative capital expenditure, a range that dwarfs even the most aggressive AI infrastructure commitments announced by hyperscalers so far.
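Musk's "roughly double" framing holds up on a back-of-envelope basis. A minimal sanity check, assuming recent US annual electricity generation of about 4,200 TWh (an EIA-scale figure assumed here, not a number from the reports):

```python
# Back-of-envelope check of the "1 TW is roughly double US generation" claim.
# Assumption (not from the article): US annual electricity generation ~4,200 TWh.
US_ANNUAL_GENERATION_TWH = 4200
HOURS_PER_YEAR = 8760

# Average US generation expressed as continuous power, in terawatts.
avg_us_power_tw = US_ANNUAL_GENERATION_TWH / HOURS_PER_YEAR  # ~0.48 TW

# How Terafab's 1 TW target compares to that average.
ratio = 1.0 / avg_us_power_tw  # ~2.1x

print(f"Average US generation ~{avg_us_power_tw:.2f} TW; 1 TW target is ~{ratio:.1f}x that")
```

Under that assumption, average US generation works out to just under half a terawatt, putting the 1 TW target at roughly twice the US grid's continuous output, consistent with Musk's comparison.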
SpaceX is also reportedly developing its own GPUs inside the Terafab program, an effort that would eventually place the company in direct competition with Nvidia for AI data center silicon.
Implications
Many structural questions remain unanswered: who funds each phase, who operates the fabs, and when high-volume production begins. But the April 24 disclosures move Terafab from slide-deck ambition toward a named process node, a named foundry partner, and a concrete initial site. For AI builders, the signal is that compute supply may eventually be shaped as much by Musk's industrial footprint in Austin as by Taiwan's foundries — though the terawatt target is a decade-class goal, not a near-term deliverable.



