
Netflix Open-Sources VOID — An AI That Erases Objects From Video and Rewrites the Physics They Left Behind

Michael Ouroumis · 2 min read

Netflix has released its first public AI model — and it solves one of the hardest remaining problems in video editing: removing objects while preserving physically coherent scene behavior.

More Than Pixel Erasing

VOID (Video Object and Interaction Deletion) does something fundamentally different from today's inpainting tools. When you remove an object from a video, conventional systems fill the gap with plausible-looking pixels. VOID goes further: it simulates how the remaining objects in the scene would physically behave without the removed item's influence.

Remove a ball from a scene where it's pushing a box? VOID doesn't just erase the ball — it repaints the box as stationary, because without the ball, nothing is pushing it. Remove a hand holding a cup? The cup falls.

The system uses what Netflix calls "interaction-aware quadmask conditioning" — a technique that identifies not just the object to be removed but the causal chain of physical interactions it participates in, then regenerates the affected portions of the video accordingly.
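Netflix's post does not spell out what the four mask channels are, but one plausible reading of "quadmask" is four per-pixel region maps: the object to delete, the objects it directly touches, the downstream regions whose motion must be regenerated, and the untouched background. Here is a toy sketch of such a conditioning tensor; every channel name and shape below is an assumption for illustration, not VOID's actual interface:

```python
import numpy as np

H, W = 8, 8  # toy frame resolution

# Hypothetical quadmask channels (names are illustrative, not VOID's API)
removed  = np.zeros((H, W), dtype=np.uint8)  # object to erase
contact  = np.zeros((H, W), dtype=np.uint8)  # objects it physically touches
affected = np.zeros((H, W), dtype=np.uint8)  # regions whose motion must be re-simulated

removed[2:4, 2:4] = 1    # e.g. the ball
contact[4:6, 2:4] = 1    # e.g. the box the ball was pushing
affected[4:6, 2:6] = 1   # where the box's motion would have carried it

# Everything not covered by the other channels counts as background
background = 1 - np.clip(removed + contact + affected, 0, 1)

# A (4, H, W) tensor a video model could take as extra conditioning input
quadmask = np.stack([removed, contact, affected, background])
print(quadmask.shape)  # (4, 8, 8)
```

The point of separating the channels is that "fill these pixels" and "re-simulate the motion in these pixels" are different instructions to the generator, which matches the article's description of tracing the causal chain of interactions rather than only the deleted object.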

Beating Runway by a Wide Margin

In controlled human evaluations, participants preferred VOID's outputs 64.8% of the time, versus 18.4% for Runway, the current commercial benchmark for AI video editing. The remaining 16.8% rated the two outputs as equivalent.
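As a quick sanity check on the reported split, the share of evaluators who rated the two systems as equivalent is simply what remains after the two stated preference rates:

```python
void_pref = 64.8    # % preferring VOID
runway_pref = 18.4  # % preferring Runway

# Remaining share rated the outputs as equivalent
equivalent = round(100 - void_pref - runway_pref, 1)
print(equivalent)  # 16.8
```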

The gap was especially pronounced in scenes involving complex physical interactions: objects in contact, items casting shadows, or elements that influence fluid dynamics. These are precisely the cases where naive pixel-filling produces uncanny results.

Open Source Under Apache 2.0

VOID is built on Alibaba's CogVideoX-Fun-V1.5-5b-InP foundation model and fine-tuned with Netflix's proprietary interaction-aware training pipeline. The model weights are now available on Hugging Face, with code, paper, and interactive demos on GitHub — all under the Apache 2.0 license.

This is Netflix's first public release on Hugging Face, marking the streaming giant's entrance into the open-source AI model ecosystem. The research team includes contributors from both Netflix and Sofia University.

Implications for Post-Production

For film and television production, VOID addresses a workflow that currently requires expensive manual rotoscoping and VFX compositing. Removing boom mics, safety wires, crew reflections, or unwanted background elements from footage is a routine but time-consuming part of post-production.

A tool that handles physics-aware removal automatically could compress days of VFX work into minutes — and Netflix, which produces more original content than any other studio, has an obvious incentive to make that workflow faster and cheaper.

The model is available now on Hugging Face at netflix/void-model.

