Policy

Man Pleads Guilty to $8M AI Music Streaming Fraud — Created Hundreds of Thousands of Fake Songs

Michael Ouroumis · 2 min read

A North Carolina man has pleaded guilty to one of the most brazen AI-assisted fraud cases in the music industry's history — using artificial intelligence to generate hundreds of thousands of songs and bots to stream them billions of times, fraudulently collecting more than $8 million in royalties.

Michael Smith's scheme, confirmed by the Department of Justice's Southern District of New York, represents a new frontier in AI-enabled fraud: systematically exploiting the economics of streaming royalty systems at a scale that was previously impossible without AI.

How the Scheme Worked

The mechanics were straightforward and scalable:

  1. Generate content at scale — Smith used AI tools to produce hundreds of thousands of songs. These weren't high-quality productions — quantity was the point, not artistry.

  2. Upload everywhere — the AI-generated tracks were distributed across major streaming platforms including Spotify, Apple Music, and Amazon Music.

  3. Bot the streams — automated bots were deployed to stream the songs "billions" of times, according to the DOJ. Streaming platforms calculate royalty payments based on play counts, so artificially inflated streams translate directly into real money.

  4. Collect the royalties — the fraudulent stream counts generated over $8 million in royalty payments that Smith received.
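The economics behind steps 3 and 4 come down to simple arithmetic: royalties scale linearly with play counts. As a rough illustration only — the per-stream rate below is an assumed industry-average figure, not one stated by the DOJ, and actual payout rates vary by platform and contract — here is the back-of-envelope math:

```python
# Illustrative royalty arithmetic (assumed figures, not from the case filings).
# Streaming payouts are commonly estimated at fractions of a cent per play;
# $0.003 per stream is an assumption used here purely for illustration.
PER_STREAM_RATE_USD = 0.003


def streams_needed(target_royalties_usd: float,
                   rate: float = PER_STREAM_RATE_USD) -> int:
    """Roughly how many plays it would take to earn a given royalty total."""
    return round(target_royalties_usd / rate)


# At the assumed rate, how many plays would $8M in royalties imply?
plays = streams_needed(8_000_000)
print(f"{plays:,} streams")  # on the order of billions of plays
```

At that assumed rate, $8 million implies roughly 2.7 billion plays — consistent in scale with the DOJ's description of "billions" of bot-driven streams, and far beyond what any organic audience for hundreds of thousands of unknown tracks could produce.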

Why This Case Matters

Smith's case isn't just about one person's fraud — it's a preview of a systemic vulnerability in how the music industry monetizes streaming.

Streaming royalty systems were designed for a world where creating and uploading music had meaningful friction. AI has eliminated that friction entirely. Anyone with access to a music generation AI and basic technical knowledge can now produce thousands of "songs" in hours. Combine that with bot infrastructure for artificial streaming, and the fraud economics are compelling.

The scale Smith achieved — billions of streams, $8 million in royalties — required AI. A human couldn't manually create and upload hundreds of thousands of tracks. That's what makes this case a landmark: it demonstrates that AI doesn't just assist fraud, it enables fraud at a scale that wasn't previously feasible.

The Industry Response

Streaming platforms have been fighting bot-driven streaming fraud for years, but the AI content generation layer adds a new dimension. Previously, fraudsters needed some amount of real music. Now they need none.

Spotify, Apple Music, and Amazon have not commented specifically on this case. The broader industry body representing performance rights organizations has acknowledged that AI-generated content is complicating existing royalty frameworks — though the focus has been on compensation questions for human artists, not fraud prevention.

What Comes Next

Smith's guilty plea is significant but unlikely to deter others. The technical barrier to replicating his scheme is low, and the potential upside — millions in fraudulent royalties — is high. Until streaming platforms develop more robust detection systems capable of identifying AI-generated content and distinguishing genuine listeners from bots at scale, the vulnerability remains.

The music industry spent years dealing with stream manipulation fraud. AI just made the problem orders of magnitude worse.

