Meta Delays Its Next Major AI Model 'Avocado' to at Least May

Michael Ouroumis · 2 min read
Meta has delayed the release of its next-generation artificial intelligence model, code-named Avocado, pushing its launch from March to at least May 2026. The delay, first reported by The New York Times on March 12, signals that even the best-resourced AI labs struggle to ship frontier models on schedule.

What We Know About Avocado

While Meta has not publicly detailed Avocado's full capabilities, the model is widely expected to represent a significant step forward from the Llama 4 family released in April 2025. Industry observers believe it will target the performance tier currently occupied by models like OpenAI's GPT-5 series and Anthropic's Claude Opus 4.6.

Meta has invested heavily in AI infrastructure over the past year, announcing partnerships with AMD and NVIDIA to build out massive training clusters. The company's AI research division, FAIR, has been a prolific contributor to open-source AI, and Avocado was expected to continue that tradition with an open-weights release.

Why the Delay Matters

The postponement comes at a critical moment in the frontier model race. OpenAI recently released GPT-5.4 with improved reasoning capabilities. Google shipped Gemini 3.1 Pro with enhanced multi-step reasoning. Anthropic launched Claude Opus 4.6 with industry-leading agentic coding performance.

Each month of delay gives these competitors additional time to build developer ecosystems and enterprise relationships around their models. For Meta, whose AI strategy relies heavily on open-source adoption to drive its broader business goals, timing matters.

Internal Quality Concerns

Sources familiar with the matter suggest the delay stems from internal quality benchmarks that Avocado has not yet met. In the current competitive environment, releasing a model that underperforms expectations carries significant reputational risk — particularly for a company that has positioned itself as the leader in open-source AI.

Meta CEO Mark Zuckerberg has repeatedly stated that AI is the company's top priority, with billions allocated to training infrastructure and research talent. The decision to delay rather than ship a model that isn't ready reflects a maturing approach to AI releases, where the cost of a disappointing launch now outweighs the cost of missing a deadline.

What Happens Next

The May timeline is described as a minimum, suggesting further delays are possible if the model does not meet Meta's standards. When Avocado does ship, it will face intense scrutiny from the research community and enterprise buyers who have increasingly sophisticated benchmarking frameworks.

For the broader AI ecosystem, the delay is a reminder that building frontier models remains extraordinarily difficult even with virtually unlimited resources. The gap between announcing ambitious AI plans and delivering production-ready models continues to challenge every major lab in the industry.
