
Tencent Drops Hy3 Preview: 295B Open-Source MoE Model Kicks DeepSeek Out of Yuanbao

Michael Ouroumis · 2 min read

Tencent on Thursday open-sourced Hy3 Preview, a new flagship large language model and the first major release from the Hunyuan team since a leadership overhaul earlier this year. The model is a mixture-of-experts system with 295 billion total parameters, 21 billion of which are activated per token, and supports context windows of up to 256,000 tokens. Weights are available on Hugging Face.

The release is Tencent's most serious attempt yet to close the gap with domestic rivals such as DeepSeek, Alibaba's Qwen and Moonshot's Kimi, and to stop depending on outside labs for the models inside its own products.

A three-month rebuild

According to reporting from the South China Morning Post and Caixin, Hy3 Preview began training in late January 2026 and shipped in under three months. Tencent credits a February infrastructure overhaul led by chief AI scientist Yao Shunyu — a former OpenAI researcher the company brought on to lead its AI push — who drove a full rebuild of the pretraining and reinforcement-learning stack.

The 21-billion-activated footprint is the headline economic claim. With only a fraction of its weights lit up per token, Hy3 Preview is meant to be noticeably cheaper to serve than dense models of comparable capability, which matters for a company that wants to bake the model into consumer-scale chat and search products.
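The economics here are simple arithmetic. A minimal sketch, using only the parameter counts reported in the article (the 2-FLOPs-per-parameter figure is a standard back-of-the-envelope estimate for a forward pass, not a Tencent number):

```python
# Rough per-token compute comparison: Hy3 Preview's MoE vs. a
# hypothetical dense model with the same total parameter count.

total_params = 295e9    # Hy3 Preview total parameters (reported)
active_params = 21e9    # parameters activated per token (reported)

# Fraction of the model's weights touched on each token.
active_fraction = active_params / total_params
print(f"active fraction: {active_fraction:.1%}")  # ~7.1%

# Common rule of thumb: ~2 FLOPs per active parameter per token
# for a forward pass. A dense model touches all of its weights.
flops_moe = 2 * active_params
flops_dense = 2 * total_params
print(f"per-token compute vs. dense: {flops_moe / flops_dense:.2f}x")  # ~0.07x
```

In other words, each token costs roughly 7% of what an equally large dense model would, which is the serving-cost argument behind baking the model into consumer-scale products.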

Yuanbao swaps out DeepSeek

The most concrete product signal is Yuanbao, Tencent's flagship AI chatbot. Yuanbao had been using DeepSeek as its primary underlying model while the in-house stack was being rebuilt. With Hy3 Preview, Tencent is switching Yuanbao's default engine to its own model and rolling Hy3 across its broader ecosystem.

On benchmarks, Tencent highlights a jump on SWE-bench Verified — a coding test built around real GitHub bug fixes — from 53% on the prior Hy2 generation to 74.4% on Hy3 Preview, a roughly 40% relative gain. That puts Hy3 in the same neighborhood as other top Chinese open-weight coders, though Tencent concedes the model still trails the best closed frontier systems from OpenAI and Google DeepMind.

Why it matters

Three things stand out. First, turnaround: a 295B MoE trained and shipped in under three months is a signal that Tencent's new AI leadership can move at Chinese-startup speed rather than at big-company speed. Second, independence: pulling DeepSeek out of Yuanbao removes a dependency on a model the company does not control and that is itself under geopolitical pressure. Third, distribution: Tencent has the rare advantage of shipping Hy3 into Yuanbao, WeChat-adjacent surfaces and enterprise tools from day one, which turns the open-source release into a real-world product, not just a leaderboard entry.

Hy3 Preview will not dislodge GPT-5.5 or Claude Opus 4.7 at the frontier, but inside China it meaningfully reshuffles the open-weight hierarchy — and it pulls one of the country's largest consumer AI surfaces back in-house.

