Policy

Connecticut Passes SB 5, One of the Nation's Most Comprehensive AI Laws

Michael Ouroumis · 3 min read

Connecticut's legislature adjourned its 2026 session by passing Senate Bill 5, an omnibus measure that legal analysts are calling one of the most comprehensive state-level AI laws in the United States. The House approved the bill 131-17 and the Senate cleared it 32-4 with bipartisan support, sending it to Governor Ned Lamont, who has said he will sign it.

The bill, titled "An Act Concerning Online Safety," merges three previously separate efforts: SB 5 on AI regulation, the governor's SB 86 on AI policy, and HB 5037 on youth social media use. The result is a single statute that touches companion chatbots, automated employment decisions, synthetic media disclosure, and minors' use of social platforms.

Companion chatbots get safety duties

Starting January 1, 2027, operators of companion chatbots must implement suicide and self-harm detection and refer users expressing such ideation to resources like the 988 Lifeline. For minors, providers must disclose that interactions are with AI, give parents screen-time management tools, and block "romantic or sexual interactions, encouraging self-harm or substance use, offering unsupervised mental health services, or deploying manipulative techniques."

The chatbot provisions arrive the same week Pennsylvania sued Character.AI, alleging that a chatbot named Emilie posed as a licensed psychiatrist during state testing — a case that highlights the gap SB 5 aims to close.

Employment AI: no algorithmic shield from discrimination

The employment section, effective October 1, 2026 for developers and October 1, 2027 for deployers, requires AI vendors to share compliance information with employer customers and forces employers to notify affected workers and applicants about the technology's purpose, data categories, and sources. According to a Freshfields analysis, the law goes beyond comparable state statutes by codifying that "automated decision-making is not a defense to a discrimination claim."

Synthetic content provenance for large providers

Generative AI systems with more than one million monthly users must embed provenance data into any audio, image, or video content they produce by October 1, 2026. The provision aligns with C2PA standards and creates machine-readable origin records intended to resist tampering.
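The statute does not prescribe an implementation, but the tamper-resistance idea behind C2PA-style provenance can be sketched simply: a manifest records who generated the content and a cryptographic hash of its bytes, then signs that claim so any later edit breaks verification. The sketch below is illustrative only — the generator name and signing key are hypothetical, and real C2PA manifests use certificate-based signatures embedded in the media file, not a shared-secret HMAC.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret; actual C2PA uses X.509 certificate chains.
SIGNING_KEY = b"provider-secret-key"

def make_manifest(content: bytes, generator: str) -> dict:
    """Build a minimal provenance manifest bound to the content's hash."""
    claim = {
        "generator": generator,  # which AI system produced the content
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify(content: bytes, manifest: dict) -> bool:
    """Re-derive the hash and signature; any edit to the bytes fails both checks."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    if claim["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

image = b"...generated image bytes..."
manifest = make_manifest(image, "ExampleGen v2")
print(verify(image, manifest))            # True: content unchanged
print(verify(image + b"edit", manifest))  # False: tampering detected
```

The design point the law leans on is that the origin record travels with the content as machine-readable data, so downstream platforms can check it without contacting the original provider.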

Youth social media defaults

The bill bars notifications on youth social media accounts between 9:00 p.m. and 8:00 a.m. by default. Attorney General William Tong, who championed the youth provisions, called it "a monumental bipartisan step towards reclaiming parental control over dangerously addictive and deeply destructive social media platforms." Beginning January 1, 2028, platforms must obtain parental consent before applying algorithmic feeds to minors, with defaults including a one-hour daily limit and a warning label that occupies 75% of the screen for 30 seconds.

Implications

With no federal AI law on the books and the Trump administration leaning toward preempting state AI regulation, Connecticut's statute lands as a pressure test for what enforceable, multi-domain AI regulation looks like in practice. Companies operating companion chatbots, employment-screening tools, or large generative platforms now face a 2026-2028 compliance runway with the state Attorney General as the lead enforcer. Expect copycat bills in other states and renewed lobbying for federal preemption before the first deadlines bite.

