“When we launched our AI back-end networking report in late 2023, the market was dominated by InfiniBand, with a share exceeding 80%,” says Sameh Boujelbene, vice president of Dell’Oro Group. “Despite its dominance, we always predicted that Ethernet would eventually prevail at scale. However, what’s most remarkable is how quickly Ethernet has gained traction in AI back-end networking. As the industry moves toward 800 Gbps and beyond, we believe Ethernet is now firmly positioned to overtake InfiniBand in these high-performance deployments.”
“While Ethernet is gaining traction in AI back-end networks, a key question remains: what share of the Ethernet opportunity will NVIDIA capture versus other switch vendors? In 2024, Celestica, NVIDIA, and Huawei led the Ethernet segment of the AI back-end switch market. However, we anticipate significant shifts in market share by 2025 as large-scale Ethernet deployments accelerate at Meta, Microsoft, Oracle, and other GPU-as-a-service providers, creating opportunities for other switch vendors, such as Accton, Arista, Cisco, Juniper/HPE, Nokia, and others, to gain ground,” Boujelbene continued.
Other highlights from the July 2025 advanced research report, “AI Networks for AI Workloads,” include:
- GPU-as-a-Service providers, such as CoreWeave, Lambda Labs, Vultr, and others, are projected to outpace Tier 1 cloud service providers over the next five years.
- Most switch ports deployed in AI back-end networks are expected to reach 800 Gbps in 2025, 1,600 Gbps in 2027, and 3,200 Gbps in 2030.
- Adoption of co-packaged optics in AI clusters will begin to materialize in the coming years, led by NVIDIA.
