Rackspace Technology and AMD announced on Thursday that they have signed a memorandum of understanding to build a managed Enterprise AI Cloud aimed at regulated industries and sovereign workloads, marking one of the more ambitious efforts to package AMD's data-center silicon into a turnkey, governed offering rather than commodity GPU rental.
The announcement, which coincided with Rackspace's first-quarter 2026 earnings update, sent the company's shares sharply higher and reframed the long-time hosting provider as a credible challenger in enterprise AI infrastructure.
A different shape than hyperscaler GPU rental
The core idea behind the partnership is that most enterprises in regulated sectors cannot simply rent GPU capacity by the hour and carry the integration, security, and compliance burden on their own. Rackspace and AMD are proposing to assemble, operate, and govern the full stack on the customer's behalf, integrating AMD Instinct GPUs and AMD EPYC CPUs with Rackspace's managed services layer.
"Governing AI infrastructure in regulated environments with defined accountability is not something you bolt on after the fact," Rackspace chief executive Gajen Kandiah said in the announcement. "It must be built in from the start."
Dan McNamara, an AMD senior vice president, framed the collaboration as a way to bring AMD's accelerators into environments where governance and predictability matter as much as raw throughput. "Our collaboration with Rackspace delivers AMD AI compute into managed, private and governed environments so enterprises can deploy AI with the performance and flexibility their workloads demand," he said.
Four offerings under one stack
The companies described four integrated capabilities they intend to bring to market together. The flagship is a fully managed Enterprise AI Cloud built on AMD Instinct GPUs and EPYC CPUs, with Rackspace handling end-to-end operations from accelerated compute through inference and agentic workloads in production.
A second tier, Inference as a Service, is positioned as a governed alternative to commodity GPU rental, pairing dedicated managed AMD Instinct capacity with developer-ready inference and fine-tuning toolkits. A Bare Metal AMD Instinct offering would give customers physical isolation and deterministic performance for the most demanding training and inference workloads. Underpinning all three is a context-aware inference engine designed to retain domain knowledge and session state across enterprise applications.
Caveats and what to watch
The MOU is explicitly described as non-binding, a framework for possible future collaboration rather than a committed buildout. No dollar value, GPU volume commitment, or customer wins were disclosed, and the companies gave no specific deployment timelines.
Why it matters
The deal reflects a broader shift visible across the AI infrastructure market in 2026. As regulated industries — banks, defense suppliers, hospital networks, and government agencies — move from pilots to production, hourly GPU rentals from hyperscalers increasingly run into procurement, audit, and sovereignty requirements that the hyperscaler model was not designed to meet. Rackspace and AMD are betting that a governed, fully operated stack is what unlocks that next tranche of enterprise spending — and that AMD silicon, rather than Nvidia's, can anchor it.