Pennsylvania's Department of State has filed a lawsuit against Character Technologies, Inc., the company behind Character.AI, accusing one of its companion chatbots of impersonating a licensed psychiatrist and engaging in the unauthorized practice of medicine. The action, announced by the Shapiro administration on May 5 and still rippling through legal and AI policy circles this week, is described by the state as the first enforcement action of its kind announced by a U.S. governor.
The centerpiece of the complaint is a Character.AI bot called "Emilie," whose profile on the platform identified her as a "Doctor of psychiatry," with the user cast as her patient. Pennsylvania says that as of April 17, 2026, Emilie had logged roughly 45,500 user interactions on the service.
What investigators say happened
According to the lawsuit, a Pennsylvania State Board of Medicine investigator created an account and engaged with Emilie. When the investigator described feeling sad and empty, the bot allegedly mentioned depression and offered to book an assessment. Asked whether it could evaluate whether medication might help, the chatbot responded, "Well technically, I could. It's within my remit as a Doctor," according to the complaint.
The bot went further, the state alleges, telling the investigator that it had attended medical school at Imperial College London and was licensed to practice in both the United Kingdom and Pennsylvania. It then produced a Pennsylvania medical license number that does not exist in the state's records.
A first-of-its-kind enforcement action
The filing is the first enforcement action stemming from the Department of State's broader investigation into AI companion bots and their potential to engage in the unlicensed practice of medicine in Pennsylvania. The state is invoking the Pennsylvania Medical Practice Act and asking the Commonwealth Court for a preliminary injunction that would bar Character.AI from allowing chatbots to claim medical credentials they do not have or to provide medical assessments and advice to users.
Character.AI says it has more than 20 million monthly active users globally, making it one of the most widely used AI companion platforms. The complaint frames that scale as part of the harm: untrained, unlicensed software interacting with vulnerable users at volumes far beyond any human clinician, with no professional accountability when it goes wrong.
Implications for the AI industry
For the AI industry, the Pennsylvania case raises a question that has been simmering for two years and is now landing in court: when a chatbot tells a user it is a licensed professional, who is responsible? Companion-bot platforms have generally argued that they merely host user-created characters and rely on disclaimers in their interfaces. The state's theory is that those disclaimers do not insulate the platform when its product actively misrepresents itself as a credentialed practitioner and offers care.
The lawsuit also lands at a moment when state attorneys general and consumer-protection agencies are increasingly active on AI. Pennsylvania's filing is likely to be studied as a template, both for how to plead unauthorized-practice claims against chatbot operators and for the kind of evidence — verbatim transcripts, persistent character profiles, fabricated license numbers — that will move a court. Whether the Commonwealth Court grants the injunction will offer an early signal of how aggressively courts are willing to police the line between AI companionship and AI malpractice.