Google Launches Gemini 3.1 Flash Live and Search Live Globally — Point Your Camera and Talk

Michael Ouroumis · 3 min read

Google has launched Gemini 3.1 Flash Live, its highest-quality conversational voice model yet, and used it to power a global rollout of Search Live across more than 200 countries.

The capability is striking in its simplicity: point your phone camera at anything — a plant, a dish, a street sign, a broken appliance — and have a real-time spoken conversation about it. Not a text query. Not a search result page. A live dialogue.

What's New in Gemini 3.1 Flash Live

Gemini 3.1 Flash Live is optimized for real-time voice interaction — low latency, natural-sounding speech, and the ability to maintain conversational context across a back-and-forth exchange. It powers the voice layer of Search Live, and it is a significant step up from previous Gemini voice capabilities in both response speed and conversational coherence.

The "Flash" designation indicates it's tuned for speed and efficiency rather than maximum reasoning depth — the right trade-off for a voice-first, real-time product where sub-second response times matter more than extended reasoning chains.

Search Live Goes Global

Search Live — Google's camera-plus-voice search feature — has been in limited testing for months. Today's global rollout to 200+ countries is the moment it becomes a mainstream product rather than an experiment.

The user experience is designed to be frictionless. Open the camera, point it at something, ask a question out loud. Follow up conversationally. Ask for more detail, a different angle, a recommendation. The AI maintains context across the exchange so you don't have to restart each question from scratch.
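The key mechanism in that flow is context carried across turns. As a rough illustration (this is a hypothetical sketch, not Google's implementation — the class, method names, and toy model below are invented for demonstration), an assistant can resolve follow-up questions by sending the accumulated history along with each new question:

```python
# Hypothetical sketch of turn-by-turn context accumulation in a live
# voice session. Each exchange is appended to a running history that is
# passed to the model with the next question, so a follow-up like
# "is it safe for cats?" resolves against the earlier "what plant is this?".

class LiveSession:
    def __init__(self):
        self.history = []  # list of (role, text) tuples

    def ask(self, question, answer_fn):
        # The model sees the full history plus the new question.
        self.history.append(("user", question))
        answer = answer_fn(self.history)
        self.history.append(("assistant", answer))
        return answer

# Toy stand-in for a model: reports how much context it received.
def toy_model(history):
    user_turns = [text for role, text in history if role == "user"]
    return f"Answer #{len(user_turns)} (context: {len(history) - 1} prior messages)"

session = LiveSession()
first = session.ask("What plant is this?", toy_model)
second = session.ask("Is it safe for cats?", toy_model)
```

The second call carries both earlier turns as context, which is what lets the user skip restating the subject — the same property the article describes, achieved in production systems by the model rather than a simple list.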

This is a direct evolution of Google Lens, which was already the most-used image-based search product globally. Adding real-time voice conversation to visual search creates something qualitatively different from any search experience that came before.

The Gemini Drop: Memory Portability

Alongside the Flash Live launch, Google's March Gemini Drop included a feature that's strategically significant: the ability to import AI memories and full chat history from rival apps — including ChatGPT — directly into Gemini.

The move is a direct play for switchers. One of the stickiest parts of any AI assistant is accumulated context: the model knowing your preferences, your past projects, your communication style. Google is removing that as a barrier to switching, betting that users who try Gemini will stay once they see its capabilities rather than being held back by migration friction.

Combined with Apple's announcement that iOS 27 will open Siri to rival AIs including Gemini, Google is executing a coordinated distribution push: be everywhere users are, make it easy to start using Gemini, and demonstrate quality once they arrive.

What This Means

The voice AI layer is becoming the primary battleground. Mistral shipped Voxtral TTS earlier today. Cohere shipped Transcribe. Apple is opening Siri to Gemini. And Google is deploying its best voice model into the world's most-used search product.

The era of text-primary AI interaction is not ending — but it's sharing the stage with voice in a way it hasn't before. The companies winning the voice interface layer will have distribution advantages that compound: users who start asking questions by speaking rather than typing will default to whichever voice experience feels most natural. Google just made a strong claim on being that default.
