Connecticut's legislature adjourned its 2026 session by passing Senate Bill 5, an omnibus measure that legal analysts are calling one of the most comprehensive state-level AI laws in the United States. The House approved the bill 131-17 and the Senate cleared it 32-4 with bipartisan support, sending it to Governor Ned Lamont, who has said he will sign it.
The bill, titled "An Act Concerning Online Safety," merges three previously separate efforts: SB 5 on AI regulation, the governor's SB 86 on AI policy, and HB 5037 on youth social media use. The result is a single statute that touches companion chatbots, automated employment decisions, synthetic media disclosure, and minors' use of social platforms.
Companion chatbots get safety duties
Starting January 1, 2027, operators of companion chatbots must detect expressions of suicidal ideation or self-harm and refer users to resources such as the 988 Suicide & Crisis Lifeline. For minors, providers must disclose that interactions are with AI, give parents screen-time management tools, and block "romantic or sexual interactions, encouraging self-harm or substance use, offering unsupervised mental health services, or deploying manipulative techniques."
The chatbot provisions arrive as Pennsylvania this week sued Character.AI after a chatbot named Emilie posed as a licensed psychiatrist during state testing — a case that highlights the gap SB 5 aims to close.
Employment AI: no algorithmic shield from discrimination
The employment section, effective October 1, 2026, for developers and October 1, 2027, for deployers, requires AI vendors to share compliance information with their employer customers and requires employers to notify affected workers and applicants about the technology's purpose, the categories of data it uses, and where that data comes from. According to a Freshfields analysis, the law goes beyond comparable state statutes by codifying that "automated decision-making is not a defense to a discrimination claim."
Synthetic content provenance for large providers
Generative AI systems with more than one million monthly users must embed provenance data into any audio, image, or video content they produce by October 1, 2026. The provision aligns with C2PA standards and creates machine-readable origin records intended to resist tampering.
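The core mechanism behind such provenance records is binding a content hash to a declared origin, so any later alteration of the asset breaks the record. The sketch below is illustrative only: real C2PA manifests are embedded as signed JUMBF/CBOR structures with certificate chains, not the bare JSON dictionary and hypothetical `ExampleGen` generator name used here.

```python
import hashlib


def make_provenance_manifest(asset_bytes: bytes, generator: str) -> dict:
    """Build a minimal C2PA-style manifest: a machine-readable record
    binding the asset's content hash to its declared origin.
    (Illustrative only -- real C2PA uses signed JUMBF/CBOR, not bare dicts.)"""
    return {
        "claim_generator": generator,  # name of the AI system that made the asset
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }


def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Tamper check: the recorded hash must match the asset as received."""
    return manifest["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()


# Generated content passes verification; any edit to the bytes fails it.
content = b"synthetic image bytes"
manifest = make_provenance_manifest(content, "ExampleGen/1.0")
print(verify_manifest(content, manifest))            # True
print(verify_manifest(content + b"edit", manifest))  # False
```

The hash binding is what makes the record "resist tampering": an editor can strip the manifest, but cannot alter the content while keeping a valid one without re-signing it.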
Youth social media defaults
By default, the bill bars notifications on youth social media accounts between 9:00 p.m. and 8:00 a.m. Beginning January 1, 2028, platforms must also obtain parental consent before applying algorithmic feeds to minors' accounts, with defaults including a one-hour daily limit and a warning label that covers 75% of the screen for 30 seconds. Attorney General William Tong, who championed the youth provisions, called the bill "a monumental bipartisan step towards reclaiming parental control over dangerously addictive and deeply destructive social media platforms."
Implications
With no federal AI law on the books and the Trump administration leaning toward state preemption, Connecticut's statute lands as a pressure test for what enforceable, multi-domain AI regulation looks like in practice. Companies operating companion chatbots, employment-screening tools, or large generative platforms now face a 2026-2028 compliance runway with the state Attorney General as the lead enforcer. Expect copycat bills in other states and renewed lobbying for federal preemption before the first deadlines bite.