Upscale AI, a Santa Clara-based networking startup that emerged from stealth only seven months ago, is in talks to raise a new round at a valuation of roughly $2 billion, according to a Bloomberg report picked up today by TechCrunch. The round is said to be in the range of $180 million to $200 million and would roughly double the company's valuation in just three months.
The deal, if it closes on the reported terms, is one of the clearest signals yet that investors are no longer content to back only chipmakers and foundation-model labs. The plumbing that ties those chips together has become its own billion-dollar fight.
From stealth to unicorn in four months
Co-founded by CEO Barun Kar and executive chairman Rajiv Khemani, Upscale AI launched publicly in September 2025 with a $100 million seed round, then followed with a $200 million Series A in January 2026. Existing backers include Tiger Global Management, Xora Innovation and Premji Invest. Bloomberg previously reported that the January round alone pushed the company past $1 billion, framing it explicitly as a challenger to Cisco and Broadcom.
Remarkably, Upscale AI has still not shipped a product. The reported round is essentially a bet on roadmap, team, and the premise that AI data centers need a new networking layer built from the ground up.
The pitch: open standards, full stack
Upscale AI is building what it describes as a full-stack AI networking platform that spans silicon, systems and software. Its flagship effort, SkyHammer, targets scale-up networking at the rack level: the dense, latency-sensitive fabric that connects GPUs, AI accelerators, memory, and storage within a single system.
Rather than lock customers into a proprietary stack, the company is leaning hard on open standards. It is active in the Ultra Accelerator Link (UALink) Consortium, the Ultra Ethernet Consortium (UEC), the Open Compute Project and the SONiC Foundation, and has referenced ESUN and the Switch Abstraction Interface as part of its architecture. That positioning is a direct contrast to the vertically integrated approaches pushed by the incumbents.
Why the networking layer matters now
As frontier-model training runs stretch across tens of thousands of accelerators, the bottleneck is increasingly not per-chip FLOPs but the speed and predictability of the links between them. A rack that can push more tokens per watt — or keep more GPUs usefully busy — is worth a premium to hyperscalers, sovereign AI clouds and neoclouds alike.
Upscale AI is not alone in chasing that prize. Nvidia's NVLink and Spectrum-X, AMD's UALink push, and Broadcom's Tomahawk Ethernet roadmap all aim at the same problem. A $2 billion valuation for a pre-product startup suggests at least some large investors believe the incumbents are vulnerable at exactly the layer where AI infrastructure is most constrained.
Implications
For buyers, a credible third option in AI rack-scale networking could slow price increases from incumbents and accelerate adoption of open fabrics like Ultra Ethernet and UALink. For rivals, the message is sharper: capital is now actively funding a greenfield challenger to their most profitable franchises.