OpenAI on May 6 unveiled Multipath Reliable Connection (MRC), a new networking protocol co-developed with Nvidia, Microsoft, AMD, Intel and Broadcom that the lab says is already running on its largest GB200 supercomputers. The release lands as the industry's biggest competitors line up behind a single open standard for moving training traffic between hundreds of thousands of GPUs.
MRC is designed to attack one of the hardest problems in gigascale training: keeping a job alive when individual links fail. Instead of pinning an RDMA connection to a single path, MRC lets one connection spread its packets across hundreds of paths simultaneously, and reroutes around a downed link in microseconds rather than waiting for higher-level retries.
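MRC's wire format has not been published, but the core idea — one connection whose packets are sprayed across many paths, with a failed path masked out instead of the connection being torn down — can be sketched in a few lines. All class and field names below are invented for illustration; this is a toy model, not the real protocol.

```python
class MultipathConnection:
    """Toy model of one RDMA connection sprayed across many paths.

    Illustrative only: the real MRC design is not public, and these
    names are invented for this sketch.
    """

    def __init__(self, num_paths: int):
        self.healthy = set(range(num_paths))
        self.next_idx = 0

    def mark_failed(self, path: int) -> None:
        # Analogous to rerouting around a downed link: the connection
        # survives, and only the failed path is masked out.
        self.healthy.discard(path)
        if not self.healthy:
            raise RuntimeError("all paths down: connection lost")

    def send(self, packet: bytes) -> int:
        # Spray packets round-robin over the surviving paths instead of
        # pinning the whole connection to a single route.
        alive = sorted(self.healthy)
        path = alive[self.next_idx % len(alive)]
        self.next_idx += 1
        return path  # path chosen for this packet


conn = MultipathConnection(num_paths=8)
before = [conn.send(b"x") for _ in range(8)]   # traffic uses all 8 paths
conn.mark_failed(3)                             # one link goes down
after = [conn.send(b"x") for _ in range(7)]     # traffic continues on 7 paths
```

The contrast with classic single-path RoCE is the point: there, the failure of the pinned path stalls the whole connection until a higher layer notices and retries.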
How MRC works
MRC builds on RDMA over Converged Ethernet (RoCE), the IBTA standard for hardware-accelerated remote memory access between GPUs and CPUs. It extends RoCE with SRv6-based source routing and incorporates techniques from the Ultra Ethernet Consortium, the cross-industry body that has been pushing Ethernet toward AI-class workloads. Nvidia is shipping MRC as part of its Spectrum-X Ethernet platform, where it pairs with the company's multiplanar network designs and hardware-accelerated load balancing.
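The SRv6 piece is what lets the sender, rather than the fabric, pick each packet's path: the source encodes an ordered list of IPv6 segment IDs in the packet header (the segment list of RFC 8754), and each hop simply advances to the next segment, with no per-flow state in the switches. A minimal sketch of that mechanism, with invented addresses:

```python
from dataclasses import dataclass, field

@dataclass
class Srv6Packet:
    # Per RFC 8754, the segment list is stored in reverse order:
    # last hop first, with Segments Left pointing at the active one.
    segments: list
    segments_left: int = field(init=False)

    def __post_init__(self):
        self.segments_left = len(self.segments) - 1

    @property
    def active_segment(self) -> str:
        return self.segments[self.segments_left]

    def advance(self) -> str:
        # Each SRv6-capable hop decrements Segments Left and forwards
        # the packet toward the new active segment.
        if self.segments_left == 0:
            raise StopIteration("reached final segment")
        self.segments_left -= 1
        return self.active_segment


# The source chooses the path: spine1 -> spine2 -> destination.
pkt = Srv6Packet(segments=["fd00::dst", "fd00::spine2", "fd00::spine1"])
hops = [pkt.active_segment]
while pkt.segments_left > 0:
    hops.append(pkt.advance())
# hops now lists the path in forward order: spine1, spine2, dst
```

Because the path lives in the packet rather than in switch routing tables, a sender can steer each packet of one connection down a different segment list — which is exactly the property a multipath transport needs.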
The protocol is built into the latest generation of 800 Gb/s network interfaces, the bandwidth tier most of the largest 2026 training clusters are now standardising on. By moving multipathing into the transport layer, MRC also simplifies the network control plane, removing some of the routing gymnastics operators have had to perform in software.
Already in production
The announcement is unusually concrete for a new standard. Nvidia and OpenAI say MRC is already deployed across all of OpenAI's largest Nvidia GB200 supercomputers, including the Oracle Cloud Infrastructure site in Abilene, Texas, that anchors the Stargate buildout. According to the companies, the protocol has been used in the training of frontier models, including GPT-5.5, and is also running on Microsoft's Fairwater supercomputers.
OpenAI has contributed the MRC specification to the Open Compute Project, the Meta-founded hardware standards body, opening the door for other NIC and switch vendors to implement it. That positions MRC as a credible counterweight to Nvidia's proprietary InfiniBand fabric while still letting it benefit from Nvidia's own Spectrum-X silicon.
Why it matters
The story is as much about politics as packets. AMD, Intel and Broadcom have spent the last two years arguing that AI fabrics should not be locked to one vendor's switches. Getting all of them, plus Nvidia and Microsoft, to publicly back a single OpenAI-led specification is a notable de-escalation in a market where every gigawatt of new capacity is contested.
For cloud operators standing up the next round of GB200 and Rubin-class clusters, MRC promises a cleaner path to training jobs that survive routine link failures — the kind of resiliency feature that, in 2026, is the difference between a productive training run and a multi-day reset.