NVIDIA’s Spectrum-X Ethernet With MRC Redefines AI Networking: OpenAI, Microsoft, Oracle Already Deploying
NVIDIA today announced that its Spectrum-X Ethernet platform, now equipped with the Multipath Reliable Connection (MRC) protocol, is rapidly becoming the backbone of the world’s largest AI factories. The open specification, contributed to the Open Compute Project, has already been deployed by OpenAI, Microsoft, and Oracle to power gigascale AI training runs—setting a new industry benchmark for performance and reliability.
“Deploying MRC in the Blackwell generation was very successful and made possible by a strong collaboration with NVIDIA,” said Sachin Katti, head of industrial compute at OpenAI. “MRC’s end-to-end approach enabled us to avoid much of the typical network-related slowdowns and interruptions and maintain the efficiency of frontier training runs at scale.”
MRC is an RDMA transport protocol that allows a single connection to spread traffic across multiple network paths, improving throughput, load balancing, and availability. Think of replacing a single-lane road with an intelligent grid system that reroutes cars around traffic jams in real time—that is the leap MRC delivers for AI data centers.
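To make the multipath idea concrete, here is a minimal Python sketch of a sender that sprays packets from a single logical connection across several paths, always preferring the least-loaded one. This is an illustration under stated assumptions, not NVIDIA's implementation: the path IDs, the in-flight byte counter, and the selection policy are hypothetical stand-ins for the congestion telemetry a real Spectrum-X fabric would use.

```python
# Minimal sketch of the multipath idea behind MRC: packets from ONE logical
# connection are sprayed across several fabric paths, preferring the least
# loaded one. Illustration only -- path IDs, load metrics, and the selection
# policy are hypothetical, not NVIDIA's actual algorithm.

import heapq

class MultipathConnection:
    def __init__(self, num_paths):
        # (outstanding_bytes, path_id) pairs in a min-heap: the path with
        # the least in-flight data is always at the top.
        self.paths = [(0, p) for p in range(num_paths)]
        heapq.heapify(self.paths)

    def send(self, packet_bytes):
        # Pick the least-loaded path, account for the new in-flight bytes,
        # and push it back. Real hardware would also react to congestion
        # telemetry from the fabric, not just local byte counts.
        load, path = heapq.heappop(self.paths)
        heapq.heappush(self.paths, (load + packet_bytes, path))
        return path

    def ack(self, path, packet_bytes):
        # On acknowledgment, release the in-flight bytes for that path.
        self.paths = [(l - packet_bytes if p == path else l, p)
                      for l, p in self.paths]
        heapq.heapify(self.paths)

conn = MultipathConnection(num_paths=4)
for size in [1500] * 8:
    print("packet ->", conn.send(size))
```

With all paths initially idle, the eight packets fan out evenly across the four paths; a single slow or congested path would simply accumulate in-flight bytes and stop being selected.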
Background
Building large-scale AI models requires networking that can move unprecedented data volumes without bottlenecks. Traditional Ethernet fabrics often suffer from packet loss and congestion, which leave GPUs idle and slow training. NVIDIA's Spectrum-X was purpose-built to solve these challenges, combining hardware designed for AI workloads with advanced telemetry and fabric control.

Microsoft’s Fairwater and Oracle’s Abilene data centers—two of the largest AI factories ever built—now rely on MRC over Spectrum-X Ethernet to meet the extreme performance and efficiency demands of frontier large language models. These deployments prove MRC works at massive scale, delivering high GPU utilization by balancing traffic across all available paths and dynamically avoiding overloaded routes.

What This Means
The open release of MRC means any organization can build AI networks that match the performance of the hyperscalers. By enabling intelligent retransmission and real-time congestion avoidance, MRC minimizes the impact of data loss on long-running jobs—dramatically reducing GPU idle time. Administrators also gain granular visibility into traffic flows, simplifying troubleshooting and operational management.
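As a rough illustration of the selective-retransmission idea, the sketch below tracks unacknowledged sequence numbers and resends only the missing ones, optionally on a different path. The class name, ACK format, and rerouting hook are assumptions for illustration, not the MRC wire protocol.

```python
# Hedged sketch of selective retransmission over multiple paths: only the
# missing sequence numbers are resent (possibly on a different path), so a
# single drop does not stall the whole flow the way go-back-N would. Names
# and structure are illustrative assumptions, not the MRC wire format.

class SelectiveRetransmitter:
    def __init__(self):
        self.unacked = {}   # seq -> (payload, path_used)

    def on_send(self, seq, payload, path):
        self.unacked[seq] = (payload, path)

    def on_ack(self, acked_seqs):
        # Selective ACKs: drop everything confirmed received.
        for seq in acked_seqs:
            self.unacked.pop(seq, None)

    def retransmit_missing(self, reroute):
        # Resend only what is still outstanding; `reroute` picks a
        # (possibly different) path, modeling the ability to steer a
        # retransmission away from a bad link.
        resent = []
        for seq, (_payload, old_path) in sorted(self.unacked.items()):
            resent.append((seq, reroute(old_path)))
        return resent

rtx = SelectiveRetransmitter()
rtx.on_send(1, b"a", path=0)
rtx.on_send(2, b"b", path=1)
rtx.on_send(3, b"c", path=0)
rtx.on_ack([1, 3])                                    # seq 2 was lost
print(rtx.retransmit_missing(lambda p: (p + 1) % 4))  # -> [(2, 2)]
```

Because only sequence number 2 is resent, and on a fresh path, the rest of the flow keeps moving, which is the behavior that keeps long-running training jobs from stalling on a single drop.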
This development signals a shift from proprietary, closed networking solutions to a standardized, open approach that accelerates AI innovation. With MRC, NVIDIA has effectively raised the bar for what Ethernet can achieve in the AI era, making gigascale training more accessible and efficient. Industry leaders are already voting with their deployments, confirming that this protocol is not just theoretical but a proven, production-ready technology.