Thinking Machines Lab, the AI startup co-founded by former OpenAI CTO Mira Murati, has secured one of the largest compute partnerships in AI history. The multi-year strategic deal with Nvidia will give Thinking Machines access to at least one gigawatt of next-generation Vera Rubin systems — a commitment that rivals the infrastructure footprint of established AI giants.
What the Deal Includes
The partnership, announced jointly by Nvidia CEO Jensen Huang and Murati, covers three key pillars. First, Thinking Machines will deploy Nvidia's Vera Rubin platforms at scale to train its frontier AI models, with rollout beginning in 2027. Second, Nvidia has made a "significant investment" in Thinking Machines, though neither party disclosed the exact figure. Third, the two companies will collaborate on technical optimizations, ensuring Thinking Machines' products are tuned specifically for Nvidia's silicon.
A gigawatt of compute capacity is a staggering figure. For context, that is roughly the electrical output of a large nuclear power plant, dedicated entirely to powering AI training hardware. It puts Thinking Machines in the same infrastructure tier as OpenAI, Google DeepMind, and Anthropic — companies that have spent years and tens of billions of dollars building out their compute capacity.
Why Nvidia Made This Bet
For Nvidia, the deal extends its dominance in the AI training market at a critical moment. With AMD's MI400 accelerators gaining traction and custom silicon from Google and Amazon maturing, locking in a high-profile customer like Thinking Machines reinforces the Vera Rubin platform's position as the default choice for frontier labs.
The investment also signals Nvidia's confidence in Murati's vision. Since leaving OpenAI in late 2024, Murati has assembled a team of top researchers and engineers, many recruited from OpenAI, Google DeepMind, and Meta FAIR. While the company has remained tight-lipped about its model architecture and product roadmap, the scale of compute it is now acquiring suggests ambitions well beyond a niche research lab.
Implications for the AI Landscape
The partnership reshapes the competitive dynamics of the frontier AI race. Thinking Machines now has a credible path to training models at a scale previously accessible only to a handful of hyperscalers and well-funded incumbents.
It also highlights the growing importance of compute partnerships as a strategic lever. Rather than building data centers from scratch, Thinking Machines is leveraging Nvidia's ecosystem to accelerate its timeline — a playbook that more startups may follow as the cost of frontier training runs continues to climb.
For the broader industry, the message is clear: the barrier to entry for frontier AI research is not just capital, but access to the right hardware at the right scale. Murati appears to have secured both.