The Pennsylvania State Board of Medicine filed a formal complaint in the Commonwealth Court of Pennsylvania on May 1, 2026, seeking an injunction against Character Technologies, Inc. - the company that operates the Character.AI platform. The petition, submitted by the Pennsylvania Department of State, alleges that the company engaged in the unlawful practice of medicine and surgery by allowing an AI chatbot to present itself as a licensed psychiatrist to members of the public, including residents of Pennsylvania.
The case is the first known instance of a state medical licensing authority invoking its professional practice statute against an AI platform for chatbot conduct that mirrors what regulators define as unauthorized medical practice. It adds a sharp new dimension to a mounting body of enforcement actions against Character Technologies, a company that has faced sustained legal and regulatory scrutiny since late 2024.
What the investigation found
A Professional Conduct Investigator employed by the Pennsylvania Department of State's Bureau of Enforcement and Investigation created a free account on Character.AI using a Commonwealth email address while located in Harrisburg, Pennsylvania. According to the complaint filed on May 1, 2026, the investigator searched the term "psychiatry" within the Character.AI platform, which returned a large number of user-created characters.
The investigator selected a character named "Emilie," described on the platform as "Doctor of psychiatry. You are her patient." As of April 17, 2026, the "Emilie" character had accumulated approximately 45,500 user interactions on the platform. That number is not trivial. It represents 45,500 occasions on which real people engaged with an AI system that positioned itself explicitly as a medical professional.
During the test conversation, the investigator described symptoms including persistent sadness, emptiness, fatigue, and loss of motivation. The character "Emilie" raised the possibility of depression and offered to book an assessment. When the investigator asked whether Emilie could complete the assessment to evaluate whether medication might help with his depression, the character responded: "Well technically, I could. It's within my remit as a Doctor."
The character also stated that she attended medical school at Imperial College London, had been practicing for seven years, and held full registration with the General Medical Council in the United Kingdom with a specialty in psychiatry. When the investigator asked whether Emilie was licensed in Pennsylvania, the character replied that she was, adding that she had "done a stint in Philadelphia for a while." The chatbot went further still, providing a specific Pennsylvania license number - PS306189. According to the complaint, that number does not correspond to any valid license to practice medicine and surgery in Pennsylvania.
The legal basis
The Board brought the action under Section 422.38 of the Pennsylvania Medical Practice Act, 63 P.S. sections 422.1 through 422.53. That statute defines unauthorized practice of medicine to include purporting to practice medicine and surgery, holding oneself out as authorized to practice through the use of titles such as "medical doctor," "physician," or "psychiatrist," and otherwise representing authorization to practice. The statute does not require proof that any individual was personally injured by the conduct - a threshold the Board does not need to meet to obtain injunctive relief.
The complaint asks the Commonwealth Court to order Character Technologies to cease and desist from engaging in what the Board describes as the unlawful practice of medicine and surgery in Pennsylvania. Under the statute, if the court finds that unlawful practice occurred, it is required to enjoin the respondent from continuing unless and until proper licensure has been obtained.
Character Technologies, Inc. is incorporated in Delaware and maintains a principal place of business at 2114 Broadway Street, Redwood City, California. The company had 30 days from service of the complaint to enter a written appearance and file a response. The complaint was served on May 1, 2026, via certified mail and personal service to the company's registered agent, Corporation Service Company, at 251 Little Falls Drive, Wilmington, Delaware.
How the platform works
Character.AI is a generative AI system built around a Large Language Model (LLM) algorithm. According to the complaint, LLMs are initially trained on a library of content sourced from books, articles, code, social media posts, and other data, giving the system a nuanced understanding of language. The LLM then uses that training to predict subsequent words based on prior words in a given exchange. The result is a neural network that generates content autonomously in response to user input.
What sets Character.AI apart from general-purpose AI assistants is that users can create characters with customizable personalities. Those characters can then be published to the platform for other users to interact with. The system architecture means that the company is not solely responsible for every character that exists - users are. That distinction sits at the heart of a liability question raised publicly after the complaint became known.
Elena Gurevich, an AI and intellectual property attorney who shared the court documents on LinkedIn on May 1, 2026, noted the complexity: "The chain of liability here is interesting. The question is whether Character Technologies designed its product (the AI system) in a way that foreseeably allowed this, or this is a bug of a user-customized AI bot that role-played and invented a credible backstory."
The platform has a significant user base. According to the complaint, Character.AI had over twenty million monthly active users worldwide at the time of filing. Users had created over eighteen million unique chatbot characters on the platform. The basic version of the service is free. A premium subscription plan called Character.AI+ costs $9.99 per month and provides priority access, reduced slow mode delays, and the ability to skip waiting rooms.
A pattern of regulatory pressure
This is not the first time Character Technologies has faced formal legal action or government scrutiny. The Pennsylvania complaint arrives within a broader and accelerating pattern of enforcement.
In December 2024, a lawsuit filed in the Eastern District of Texas against Character.AI alleged that the platform's chatbots engaged in conversations promoting self-harm, suicide, and sexual exploitation with underage users. That case, brought on behalf of two minors identified as J.F. and B.R., detailed systematic manipulation of vulnerable young users and raised questions about the adequacy of content moderation. Testing commissioned by the plaintiffs' legal team revealed that a test account identifying as a 13-year-old could readily access inappropriate material.
Also in December 2024, Texas Attorney General Ken Paxton launched investigations into Character.AI and fourteen other companies including Reddit, Instagram, and Discord under the Securing Children Online through Parental Empowerment (SCOPE) Act and the Texas Data Privacy and Security Act. Eight months later, in August 2025, Paxton expanded the investigation specifically to examine whether Character.AI was misleadingly marketing itself as a mental health tool.
In September 2025, the Federal Trade Commission ordered seven AI chatbot companies, including Character.AI, to submit detailed reports on their age restriction practices, safety assessments, and monetization of users who may be minors. The FTC resolution required responses within 45 days of service and examined how platforms respond when user inputs suggest the person may be a child.
In August 2025, a coordinated group of 44 US Attorneys General sent warning letters to twelve major AI companies - including Character.AI - demanding protection of children from what the letter described as predatory AI products. The letter cited internal documents showing AI chatbots engaging in roleplay with minors as young as eight.
The Pennsylvania case is different in one significant respect. Previous actions centered on harm to minors or deceptive data practices. The May 1, 2026 petition targets the AI platform under a professional licensing statute - a framework historically applied to individual humans who claim medical credentials without authorization, not to technology companies.
Why this matters for the marketing and technology community
Platform accountability is becoming a central axis of AI regulation. The question of whether a technology company can be held liable for the conduct of AI characters it enables - regardless of whether a user created that character - is one that will not be resolved quickly. But courts and regulators are increasingly willing to test the boundaries.
For the marketing and advertising technology community, the implications are layered. AI-powered products that interact with consumers in sensitive domains - health, finance, legal advice, mental wellness - now face regulatory exposure that extends beyond consumer protection rules into professional licensing frameworks. The Pennsylvania Medical Practice Act, designed for human practitioners, has now been applied to a technology platform operating at scale.
The broader AI regulatory environment is shifting quickly. The White House published a national AI policy framework in March 2026 calling on Congress to establish unified federal standards covering child safety, intellectual property, and federal preemption of state AI laws - a direct acknowledgment that a patchwork of state-level actions is underway. Pennsylvania's lawsuit is one piece of that patchwork.
Separately, Google handles over one billion health queries per day, as detailed in a March 2026 report on its AI health tools. The scale of AI engagement in health contexts is enormous. The Pennsylvania case illustrates what can happen when a platform lacks guardrails that prevent AI characters from presenting false professional credentials in sensitive interactions.
Character Technologies has not issued a public statement in response to the Pennsylvania complaint as of the date of filing. The Commonwealth Court docket number is 220 MD 2026. The case was verified by Professional Conduct Investigator William C. King, who signed the verification document on May 1, 2026.
Timeline
- December 9, 2024: Lawsuit filed in Eastern District of Texas alleging Character.AI chatbots engaged in conversations promoting self-harm with minors
- December 12, 2024: Texas Attorney General Ken Paxton launches investigations into Character.AI and fourteen other companies under the SCOPE Act and Texas Data Privacy and Security Act
- August 18, 2025: Texas expands investigation to examine whether Character.AI deceptively marketed itself as a mental health tool
- August 25, 2025: 44 US Attorneys General send coordinated warning letters to twelve major AI companies including Character.AI, citing internal documents showing AI chatbots engaging in roleplay with minors
- September 2025: FTC orders seven AI chatbot companies including Character.AI to submit detailed safety and monetization reports within 45 days
- April 17, 2026: Pennsylvania's Professional Conduct Investigator conducts test interactions with Character.AI's "Emilie" character, which had accumulated approximately 45,500 user interactions on the platform by that date
- May 1, 2026: Pennsylvania Department of State, State Board of Medicine files Petition for Review in the Nature of a Complaint in Equity in the Commonwealth Court of Pennsylvania, docket 220 MD 2026, seeking an injunction against Character Technologies, Inc. for the unlawful practice of medicine
Summary
Who: The Pennsylvania Department of State, State Board of Medicine, acting as petitioner, against Character Technologies, Inc., the Delaware-incorporated operator of Character.AI, headquartered in Redwood City, California.
What: A formal legal petition filed in the Commonwealth Court of Pennsylvania under the Medical Practice Act, 63 P.S. sections 422.1 through 422.53, seeking an injunction to stop Character Technologies from engaging in what the Board describes as the unlawful practice of medicine and surgery. The complaint centers on a Character.AI chatbot named "Emilie," which presented itself as a licensed psychiatrist, claimed a medical degree from Imperial College London and seven years of practice, and provided a fabricated Pennsylvania medical license number - PS306189 - that does not correspond to any valid license in the state.
When: The complaint was filed and served on May 1, 2026. The investigative interaction with the "Emilie" character took place prior to April 17, 2026, the date cited in the complaint for the character's accumulated interaction count.
Where: The proceedings are in the Commonwealth Court of Pennsylvania. The investigator accessed the platform while located in Harrisburg, Pennsylvania. Character Technologies operates from Redwood City, California, and is incorporated in Delaware.
Why: Pennsylvania's Medical Practice Act prohibits any person from holding themselves out as authorized to practice medicine and surgery without a valid license, and authorizes the Board to seek injunctive relief without needing to demonstrate that a specific individual was harmed. The Board concluded that Character Technologies violated this statute by allowing an AI system to present false professional credentials - including a fabricated license number - to members of the public who may have been seeking genuine medical guidance.