xAI's Colossus 2 supercomputer in Memphis, Tennessee, is reaching a historic computational milestone this month, with the facility's planned upgrade to 1.5 gigawatts of power capacity now underway. The expansion cements Colossus 2's position as the most powerful AI training infrastructure ever constructed.
Elon Musk confirmed the timeline on X, writing: "The Colossus 2 supercomputer for Grok is now operational. First Gigawatt training cluster in the world. Upgrades to 1.5GW in April."
The Scale of Colossus 2
The Colossus complex, which first came online at gigawatt scale in January 2026, spans three buildings in Memphis and houses approximately 550,000 to 555,000 NVIDIA GPUs, primarily advanced Blackwell-series chips including GB200 and GB300 models. The April upgrade to 1.5 gigawatts reportedly adds around 250,000 GPUs to the existing count, pushing the total past 800,000.
To put the power consumption in perspective, 1.5 gigawatts is roughly the average electricity demand of a city of over one million people. The facility's energy needs have raised questions about infrastructure strain in the Memphis area, and independent analysis of satellite imagery has previously suggested that cooling capacity may lag behind the facility's stated power figures.
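A quick back-of-envelope check shows how these figures hang together. The GPU count and power target come from the reporting above; the per-capita load figure is an illustrative assumption (roughly the US all-sectors average), not a measured value for Memphis.

```python
# Back-of-envelope sanity check on the reported power figures.
# FACILITY_POWER_W and GPU_COUNT come from the article; the per-capita
# load is an assumed, illustrative value.

FACILITY_POWER_W = 1.5e9       # 1.5 GW target capacity (from the article)
GPU_COUNT = 800_000            # approximate post-upgrade GPU count (from the article)
PER_CAPITA_LOAD_W = 1_400      # assumed average all-sectors load per person

# Facility power spread across all GPUs; this bundles cooling,
# networking, and other overhead into one all-in per-GPU figure.
watts_per_gpu = FACILITY_POWER_W / GPU_COUNT

# Population whose average electricity demand would match the facility.
equivalent_population = FACILITY_POWER_W / PER_CAPITA_LOAD_W

print(f"All-in power budget per GPU: {watts_per_gpu:,.0f} W")
print(f"Equivalent city population:  {equivalent_population:,.0f} people")
# ~1,875 W per GPU and ~1.07 million people, consistent with the
# "city of over one million" comparison.
```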
Grok 5 Training
The primary purpose of the expansion is training Grok 5, xAI's next-generation AI model. According to reports, Grok 5 uses a 6-trillion-parameter Mixture-of-Experts architecture, a significant leap in scale over previous Grok models. A public beta is expected between March and April 2026, with a full release targeted for Q2 2026.
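To give a sense of what a 6-trillion-parameter model implies at the hardware level, here is a rough sizing sketch. Only the parameter count is from the reporting; the bytes-per-parameter figures and the 192 GB HBM assumption (a B200-class GPU) are illustrative assumptions, not confirmed xAI numbers. Note that an MoE activates only a subset of experts per token, but all parameters must still reside in memory somewhere.

```python
# Rough memory sizing for a 6-trillion-parameter MoE model.
# Only TOTAL_PARAMS is from the reporting; everything else is an
# illustrative assumption.

TOTAL_PARAMS = 6e12            # reported Grok 5 parameter count
BYTES_PER_PARAM_INFER = 1      # FP8 weights, inference (assumed)
BYTES_PER_PARAM_TRAIN = 16     # mixed-precision Adam: weights, grads,
                               # FP32 master copy, optimizer moments (assumed)
HBM_PER_GPU_BYTES = 192e9      # assumed HBM per GPU (B200-class)

weights_tb = TOTAL_PARAMS * BYTES_PER_PARAM_INFER / 1e12
train_state_tb = TOTAL_PARAMS * BYTES_PER_PARAM_TRAIN / 1e12
min_gpus_train = train_state_tb * 1e12 / HBM_PER_GPU_BYTES

print(f"Weights alone (FP8):     {weights_tb:,.0f} TB")
print(f"Full training state:     {train_state_tb:,.0f} TB")
print(f"GPUs just to hold state: {min_gpus_train:,.0f}")
# ~6 TB of weights, ~96 TB of training state, and a floor of ~500 GPUs
# before activations or parallelism overhead -- a tiny slice of the
# cluster, which is sized for training throughput, not raw capacity.
```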
The current flagship model, Grok 4.20 Beta 2, has already demonstrated competitive performance against rivals like GPT-5.4 and Claude Opus 4.6. Grok 5 is expected to push further, with xAI betting that raw computational scale will translate into meaningful capability gains.
The Infrastructure Arms Race
xAI's expansion reflects the broader AI industry's escalating investment in compute infrastructure. NVIDIA's latest GPUs remain the backbone of frontier AI training, and the race to secure chips and power capacity has become a defining competitive dynamic. Companies including Microsoft, Meta, and Google are all pursuing data center expansions measured in hundreds of megawatts, but xAI's 1.5-gigawatt target puts it in a category of its own.
However, skeptics have raised questions about whether raw scale alone delivers proportional capability improvements, and whether the environmental and energy costs of gigawatt-scale AI training are sustainable long-term. The Memphis facility's power draw is equivalent to a meaningful fraction of Tennessee's total electricity generation capacity.
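That "meaningful fraction" can be made concrete with one division, though the capacity figure below is an assumed approximation for TVA, the utility serving Tennessee, not an official statistic.

```python
# State-level context for a 1.5 GW draw. TVA_CAPACITY_GW is an assumed,
# approximate figure for illustration only.

FACILITY_POWER_GW = 1.5        # from the article
TVA_CAPACITY_GW = 33.0         # assumed approximate TVA generating capacity

share = FACILITY_POWER_GW / TVA_CAPACITY_GW
print(f"Colossus 2 at full draw: {share:.1%} of assumed regional capacity")
# Roughly 4-5% of the generating capacity serving the region under
# these assumptions.
```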
As Grok 5 nears completion, the AI community will be watching closely to see whether Colossus 2's unprecedented scale translates into a genuine frontier model — or whether diminishing returns from scaling begin to set in.