Industry

NVIDIA Blackwell Ultra Ships to Major Cloud Providers

Michael Ouroumis · 2 min read

NVIDIA has begun shipping its Blackwell Ultra GPUs to major cloud providers, marking the start of the next cycle of AI infrastructure upgrades. AWS, Microsoft Azure, and Google Cloud Platform are among the first recipients, with instances expected to be available to customers within weeks.

The Numbers

Blackwell Ultra delivers substantial improvements over the previous Hopper generation, including roughly 4x inference throughput.

These gains translate directly into cost savings for companies running AI at scale: a workload that previously required a cluster of 100 H100s could potentially run on around 25 Blackwell Ultra units.
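The 100-to-25 comparison is simple back-of-envelope sizing. A minimal sketch, assuming throughput scales linearly with GPU count (an idealization; real clusters lose some efficiency to communication overhead), and using a hypothetical helper name:

```python
import math

def equivalent_cluster_size(current_gpus: int, speedup: float) -> int:
    """Number of new-generation GPUs needed to match `current_gpus`
    of the old generation, assuming linear scaling with GPU count
    (an idealization; ignores interconnect and scheduling overhead)."""
    return math.ceil(current_gpus / speedup)

# 100 H100s at an assumed 4x per-GPU speedup -> 25 Blackwell Ultra units
print(equivalent_cluster_size(100, 4.0))  # -> 25
```

In practice the break-even point also depends on memory capacity per GPU and interconnect bandwidth, so the linear estimate is a lower bound on the new cluster size.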

Cloud Provider Plans

AWS

Amazon is deploying Blackwell Ultra in new P6 instances, available initially in us-east-1 and eu-west-1. The instances will support up to 8 GPUs per node with 400Gbps networking.

Microsoft Azure

Azure is integrating the GPUs into its ND-series virtual machines, with tight integration into Azure AI Studio for model training and deployment.

Google Cloud

GCP is offering Blackwell Ultra through its A4 accelerator-optimized instances, with integration into Vertex AI for managed model serving.

Supply Constraints

Despite the shipments, supply remains tight. NVIDIA CEO Jensen Huang acknowledged on a recent earnings call that demand continues to outstrip supply, with lead times extending to several months for large orders. The company has ramped production at TSMC's facilities in Taiwan, but the AI infrastructure buildout shows no signs of slowing.

What This Means for AI Development

The performance improvements in Blackwell Ultra lower the cost floor for training and serving large models. Startups that previously couldn't afford to train competitive models may find the economics more favorable, potentially leading to more competition in the foundation model space.

For inference-heavy applications — chatbots, code assistants, real-time translation — the 4x throughput improvement means significantly lower per-query costs, which could accelerate deployment of AI features in consumer products. NVIDIA's hardware dominance has been a key factor in the company becoming the first to reach a $5 trillion valuation, with Meta's 1.3 million GPU deal illustrating the staggering demand.
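The per-query cost claim follows directly from the throughput figure. An illustrative calculation with made-up numbers (the GPU-hour rate and query volumes below are assumptions, not NVIDIA or cloud-provider pricing): if a GPU-hour costs the same but serves 4x the queries, per-query cost falls to a quarter.

```python
def cost_per_query(gpu_hour_cost: float, queries_per_hour: float) -> float:
    """Per-query cost of serving from a single GPU at a fixed hourly rate."""
    return gpu_hour_cost / queries_per_hour

# Hypothetical numbers: same $4/GPU-hour rate, 4x throughput on the new part.
old = cost_per_query(4.0, 10_000)
new = cost_per_query(4.0, 40_000)
print(f"{old:.6f} {new:.6f} {old / new:.1f}x")  # -> 0.000400 0.000100 4.0x
```

The same ratio holds at any hourly rate, since it cancels out: per-query cost scales inversely with throughput when hardware pricing is held constant.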
