
Meta Debuts Muse Spark, Its First Proprietary AI Model From Superintelligence Labs

Michael Ouroumis · 2 min read

Meta has officially unveiled Muse Spark, the first AI model to emerge from its Superintelligence Labs division, marking a pivotal moment in the company's AI strategy. The model, originally code-named Avocado, was announced on April 8 and represents a ground-up rebuild of Meta's AI stack under the leadership of chief AI officer Alexandr Wang.

A New Architecture, A New Approach

Muse Spark is a natively multimodal reasoning model with tool use, visual chain-of-thought, and multi-agent orchestration built in from the start. According to Meta, the team spent nine months rebuilding everything — model architecture, optimization pipelines, and data curation — to produce a system that prioritizes being lightweight, fast, and consumer-centric.

The efficiency gains are notable. Meta claims Muse Spark matches the capability level of Llama 4 Maverick using less than a tenth of the compute, a result the company attributes to its deliberate, scientific approach to model scaling.

Benchmark Performance

According to Artificial Analysis, Muse Spark scores 52 on its Intelligence Index, placing it among the top five models benchmarked. That puts it ahead of Claude Sonnet 4.6 but behind Gemini 3.1 Pro Preview, GPT-5.4, and Claude Opus 4.6. Notably, Muse Spark used just 58 million output tokens to complete the Intelligence Index evaluation, significantly fewer than Claude Opus 4.6 (157 million) or GPT-5.4 (120 million), suggesting strong per-inference efficiency.
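To put those token counts in perspective, here is a quick sketch in Python that expresses each rival's usage as a multiple of Muse Spark's (the figures are only those quoted above; the ratios are our own back-of-envelope framing, not an Artificial Analysis metric):

```python
# Output tokens consumed on the Intelligence Index run, in millions
# (figures as reported above).
tokens_used = {"Muse Spark": 58, "GPT-5.4": 120, "Claude Opus 4.6": 157}

baseline = tokens_used["Muse Spark"]
for model, tokens in tokens_used.items():
    # Express each model's usage as a multiple of Muse Spark's.
    print(f"{model}: {tokens}M tokens ({tokens / baseline:.1f}x Muse Spark)")
```

By this framing, Claude Opus 4.6 used roughly 2.7x and GPT-5.4 roughly 2.1x as many output tokens as Muse Spark to complete the same evaluation.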

The model shows competitive performance across multimodal perception, reasoning, health applications, and agentic tasks, though it does not surpass frontier models across the board.

The Open-Source Question

Perhaps the biggest strategic shift is that Muse Spark is proprietary. Meta built its AI reputation on the open-source Llama family, so the decision to keep Muse Spark closed has drawn attention. The company has stated it hopes to open-source future versions of the model, but for now, Muse Spark remains behind closed doors.

This move aligns with a broader industry pattern where companies increasingly view their most capable models as competitive assets rather than community resources.

Rollout and Market Reaction

Muse Spark currently powers the Meta AI assistant on the standalone Meta AI app and meta.ai. It will roll out to WhatsApp, Instagram, Facebook, Messenger, and Meta's Ray-Ban AI glasses in the coming weeks.

Wall Street responded enthusiastically — Meta's stock jumped nearly 9% on the announcement, heading for its sharpest single-day rally since January. The company has committed between $115 billion and $135 billion in AI-related capital expenditures for 2026, up roughly 73% from last year's $72.2 billion.
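The roughly 73% growth figure checks out against the numbers above if you take the midpoint of the committed range; a minimal sanity check (using only the figures quoted in this article):

```python
# Capex figures in billions of USD, as reported above.
capex_2025 = 72.2
capex_2026_low, capex_2026_high = 115.0, 135.0

# Midpoint of the committed 2026 range, and growth over last year.
midpoint = (capex_2026_low + capex_2026_high) / 2
growth_pct = (midpoint / capex_2025 - 1) * 100
print(f"2026 midpoint: ${midpoint:.0f}B, growth: {growth_pct:.0f}%")
```

The midpoint works out to $125 billion, about 73% above 2025's $72.2 billion, matching the reported figure.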

What It Means

Muse Spark signals that Meta is serious about competing at the frontier of AI, not just in open-source tooling. With Alexandr Wang's Superintelligence Labs now delivering production models, the AI race has another well-funded contender pushing the boundaries of what consumer-facing AI can do.

