Meta's AI division is in a more complicated position than the company's public silence suggests. While Meta pushed its flagship Avocado model back to May or later after internal testing found it underperforming against Google's Gemini 3, OpenAI's GPT-5.4, and Anthropic's Claude, it has also been doing something that would have seemed unthinkable a year ago: routing some of its Meta AI users through Google's own models.
The details emerged from analysis of Meta's internal model selection infrastructure, published this week by TestingCatalog.
What the Testing Revealed
Meta's internal model selector — accessible through parts of the Meta AI interface — reveals several Avocado configurations currently in parallel evaluation:
- Avocado 9B: A smaller 9-billion-parameter version, likely a candidate for on-device or cost-efficient deployments
- Avocado Mango: Carries "agent" and "sub-agent" labels and appears capable of image generation — a multimodal agentic variant that could go head-to-head with GPT-5.4's workflow capabilities
- Avocado TOMM: Described as "Tool of Many Models," a composite routing system built on top of Avocado
- Avocado Thinking 5.6: The latest iteration of Meta's reasoning-focused Avocado variant
- Paricado: A text-only conversational model, apparently a separate product line
The sheer number of release candidates in flight suggests Meta hasn't determined which configuration will ship — or in what order. It's the kind of parallelization you do when you're not sure your primary bet is going to hit the benchmark bar you need.
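TestingCatalog's labels don't reveal how the "Tool of Many Models" layer actually picks a variant, but a composite router like the one described typically maps each request's capability needs to the cheapest model that can serve it. A minimal sketch of that idea, using the variant names from the selector; the routing rules themselves are illustrative assumptions, not Meta's actual logic:

```python
import re

def route(prompt: str, wants_image: bool = False, needs_reasoning: bool = False) -> str:
    """Pick a model the way a 'Tool of Many Models' layer might:
    the cheapest variant that covers the request's capabilities.
    Variant names come from the selector; the rules are guesses."""
    if wants_image:
        # Avocado Mango is the only variant described as capable of image generation
        return "avocado-mango"
    if needs_reasoning or re.search(r"\b(prove|derive|step[- ]by[- ]step)\b", prompt, re.I):
        # reasoning-focused variant
        return "avocado-thinking-5.6"
    if len(prompt) < 200:
        # short chat turns can go to the small, cost-efficient model
        return "avocado-9b"
    # default text-only conversational model
    return "paricado"
```

The appeal of a routing layer like this is that the product can ship while the individual variants are still being swapped in and out underneath it.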
The Gemini A/B Test
The most striking finding is the Gemini routing. System prompt analysis and traffic patterns show that some requests within Meta AI are already being processed by Google's Gemini models rather than any version of Avocado or Llama. According to sources cited by TestingCatalog, Meta's AI leadership has held serious discussions about temporarily licensing Gemini technology to fill capability gaps while Avocado matures.
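Traffic splits of this kind are usually implemented with deterministic, salted hash bucketing, so the same user consistently lands in the same arm of the experiment. A minimal sketch of that pattern; the 5% split and the backend names are illustrative assumptions, since the article doesn't report the actual proportions:

```python
import hashlib

def bucket(user_id: str, salt: str = "meta-ai-gemini-ab") -> int:
    """Map a user to a stable bucket in [0, 100) via a salted hash,
    so assignment is consistent across sessions without storing state."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def backend_for(user_id: str, gemini_pct: int = 5) -> str:
    """Route a fixed percentage of users to the fallback provider.
    The 5% default is hypothetical, not a reported figure."""
    return "gemini" if bucket(user_id) < gemini_pct else "avocado"
```

Changing the salt reshuffles every user into a fresh assignment, which is how a platform can run successive experiments without carrying over the same cohort.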
This is not a small thing. Meta has built one of the largest AI user bases in the world across Facebook, Instagram, and WhatsApp — hundreds of millions of active users who interact with Meta AI every day. If those users are getting Gemini responses without knowing it, Meta has effectively become a reseller of its competitor's product in its own ecosystem.
The arrangement makes sense from a short-term product perspective. Meta can't afford to have its AI products fall dramatically behind competitors while Avocado is being rebuilt. But it also carries reputational and strategic risks: it reveals just how far behind Avocado has fallen, and it creates a dependency on a company (Google) that is also a direct competitor.
The Delay That Triggered This
The backstory matters here. In early March, the New York Times reported that Meta had delayed Avocado's release to at least May after internal evaluations showed it couldn't match GPT-5.4, Gemini 3, or Claude on key benchmarks. Multiple sources described the internal testing as a significant disappointment, particularly given the resources Meta has poured into the project.
The benchmarks where Avocado struggled were notably not exotic edge cases. According to TestingCatalog's analysis of system prompts and capability probes, Avocado fell short on complex math reasoning problems that Gemini 3 and GPT-5.4 had already solved months earlier. That's a meaningful gap for a model that's supposed to power everything from Instagram DMs to Meta's enterprise tools.
The Open Source Question
Perhaps the most consequential aspect of Avocado's development is what it signals about Meta's future relationship with open source. For the past several years, Meta has been the most prominent open-source champion in frontier AI — releasing the Llama family under permissive licenses, providing researchers and smaller companies access to state-of-the-art weights. That stance had strategic logic: if strong models are freely available, the competitive advantage of closed-source providers diminishes.
Avocado is expected to be proprietary.
Under CEO Mark Zuckerberg's mandate to pursue superintelligence, Meta has shifted toward treating its model research as a strategic asset rather than a community resource. Sources say the company views the open-source approach as inconsistent with the level of capability and resource concentration that frontier AI development now requires.
For the broader AI ecosystem, that shift matters. The open-source AI community has relied heavily on Meta's Llama releases as a foundation for research and startups. If Avocado ships closed-source, that pipeline dries up at exactly the moment when the gap between open and closed models was beginning to close.
What Users Experience
For Meta AI's hundreds of millions of users, Avocado will eventually represent a meaningful step up from the current Llama-based experience — even if it doesn't match the very frontier of GPT-5.4 or Gemini 3. Better reasoning, more capable agents, and multimodal improvements are all visible in the variants currently under testing.
The question is whether Meta can get there before users and developers choose alternative AI products that are already available. In the attention economy, a delay of a few months matters.
Whether Meta quietly ships these improvements through a soft rollout or waits for a high-profile launch moment remains unclear. What's certain is that the company's AI timeline is messier, and more interesting, than its public silence suggests.