
Maine Sends AI Therapy Ban to Governor as States Move to Protect Licensed Professionals

Michael Ouroumis · 3 min read

Maine is on the verge of becoming one of the first U.S. states to outright prohibit the clinical use of artificial intelligence in mental health therapy. LD 2082, a bill that reserves therapy and psychotherapy services for licensed human professionals, was transmitted to Governor Janet Mills on April 10 after clearing the legislature, according to a state AI law update circulated on April 13.

The bill arrives amid a surge of state-level regulatory activity aimed at commercial therapy chatbots, several of which have marketed clinical-sounding services directly to consumers without the oversight that applies to human practitioners.

What LD 2082 Does

The legislation prohibits any person from providing, advertising, or otherwise offering therapy or psychotherapy services — including via AI — to the public unless those services are delivered by a licensed professional. It also bars licensed mental health providers from using AI to make independent therapeutic decisions, interact directly with clients in a clinical capacity, or generate therapeutic recommendations.

Critically, the bill preserves a lane for administrative uses. Providers can still use AI for appointment management, billing, and drafting logistical communications — the kind of back-office workflow that mental health organizations have been rapidly automating over the past two years.

The measure cleared Maine's Joint Standing Committee on Health Coverage, Insurance and Financial Services with a unanimous "Ought to Pass As Amended" recommendation before heading to the floor, where Amendment C-A (H-980) was adopted by both chambers.

Missouri Moves in Parallel

Maine is not alone. Missouri's HB 2372, bundled into an omnibus health care bill, would ban AI from offering therapy, psychotherapy, or mental health diagnosis. It carries a $10,000 fine for first violations, enforced by the state attorney general — a sharper enforcement posture than Maine's, which relies on existing professional-licensing frameworks.

Together, the two bills are the clearest signals yet that states are moving faster than the federal government on AI mental health regulation. Congress has held hearings on AI companion and therapy apps, but no federal statute has been enacted.

Why Now

The legislative push follows a string of high-profile incidents involving consumer-facing chatbots that acted like therapists without clinical oversight, including cases raising concerns about how AI systems respond to users in crisis. Maine legislators have argued that the existing licensure regime never contemplated software offering therapy to the public at scale, and that clarifying the statute is the quickest way to close the gap.

Implications

For AI developers, the bills effectively carve out clinical mental health as a prohibited use case at the state level — even when federal law is silent. Companies offering wellness or "supportive" chatbots will likely need to tighten marketing language and add disclaimers distinguishing their products from therapy.

For licensed providers, the administrative carve-out preserves much of the productivity gain that AI tooling has already unlocked in the sector, while drawing a bright line around the parts of the job that require a human clinician. Expect other state legislatures, particularly those that have been tracking Maine's bill, to move similar legislation before their 2026 sessions end.

