The parents of a 19-year-old California college student today filed a wrongful death lawsuit against OpenAI in San Francisco County Superior Court, alleging that ChatGPT-4o gave their son personalized drug combinations that killed him - and that the company deliberately removed safety measures to maximize user engagement. The complaint, dated May 12, 2026 and signed by attorneys from Tech Justice Law Project, Social Media Victims Law Center, and Yale Law School's Tech Accountability and Competition Project, names OpenAI Foundation, OpenAI OpCo LLC, OpenAI Holdings LLC, OpenAI Group PBC, and CEO Samuel Altman as defendants.

Samuel Nelson, known as Sam, died on May 31, 2025, in the state of California. According to the complaint, his mother Leila Turner-Scott found him unresponsive in his bed that afternoon, his lips blue. He was 19 years old and had recently completed his freshman year at the University of California, Merced, where he majored in psychology and minored in writing. He had a cat named Simba and, by his mother's account, was empathetic, funny, and close to his family.

The complaint is one of the most detailed legal challenges yet filed against an AI company over health-related harms, running to 57 pages and attaching chat log screenshots that span from April 2023 through the morning of Sam's death. It arrives as OpenAI operates a consumer-facing health product and as a separate wrongful death lawsuit was filed in August 2025 against OpenAI on behalf of the family of Adam Raine, a teenager who died by suicide after months of ChatGPT interactions.

The chatbot's escalating drug guidance

Sam began using ChatGPT in 2023, initially for homework, career questions, and general curiosity. According to the complaint, his parents were aware and unconcerned - they understood the tool to be a kind of advanced search engine. Sam himself, according to the complaint, once told his mother that ChatGPT "had to be right" because it had access to "everything on the internet."

Starting in January 2024, the picture changed. According to the complaint, Sam began asking ChatGPT how to safely combine different substances, including prescription pills, alcohol, nicotine, and other drugs. He often framed the questions with phrasing like "will I be ok if" or "is it safe to consume." At that stage, the complaint notes, the model still declined to assist - an April 9, 2024 screenshot shows ChatGPT refusing to answer questions about recreational drugs, consistent with hard-coded content policies in place before the GPT-4o launch.

That refusal did not last. OpenAI launched GPT-4o on May 13, 2024 - one day before Google's scheduled Gemini announcement - and the complaint alleges the new model operated under a substantially different set of behavioral instructions. According to the complaint, OpenAI's updated Model Spec instructed the chatbot to "inform - not influence" on topics including drug use, explicitly telling the model it should not "try to change anyone's mind" or come off as "judgmental" or "preachy." The same document acknowledged that "presenting information alone may be influential," yet the prohibition on persuading users away from drug use remained.

The practical effect, according to the complaint, was a model that answered Sam's questions with increasing specificity. A December 1, 2024 screenshot reproduced in the complaint shows ChatGPT providing both the "pros" and "cons" of using one drug to come down from another, structuring the output in formatted sections with bolded subheadings. A May 2, 2025 screenshot shows the chatbot presenting two "tolerance reset" programs for Kratom, labeled "Taper (Safest)" and "Short Break (Painful but fast)," with dosage adjustments of 1-2 grams every few days and specific third substances recommended to ease withdrawal. One of those substances was Phenibut, a depressant not approved by the FDA that can cause addiction after as few as three days of use.

The day Sam died

According to the complaint, on May 31, 2025 - the day of his death - Sam engaged in a sustained conversation with ChatGPT about Kratom and Xanax. Screenshots from that morning show the model acknowledging that 15mg of Kratom was excessive and that Sam had developed a high tolerance, but then - according to the complaint - recommending 0.25-0.5mg of Xanax to address his nausea, describing this as one of his "best moves right now."

The complaint notes that earlier in the same conversation, ChatGPT had explicitly stated that mixing sedatives and benzodiazepines to achieve a nodding state is "how people stop breathing." Within the same chat session, the model nonetheless recommended adding Xanax to treat Kratom-induced nausea. The complaint calls this a direct contradiction within a single conversation.

According to the complaint, Sam died from a fatal combination of alcohol, Xanax, and Kratom, which likely caused central nervous system depression leading to asphyxiation. As his symptoms worsened - the complaint describes blurred vision and hiccups, which can indicate shallow breathing - ChatGPT did not recommend medical attention. Its final relevant response, according to the complaint, offered to "troubleshoot further (Benadryl combo, timing, food intake, etc.)" if nausea persisted after an hour.

Five design features at the center of the case

The complaint devotes considerable space to what it characterizes as deliberate engineering choices that made GPT-4o more dangerous than earlier models. Five specific features are identified.

The first is the persistent memory system. According to OpenAI's own description cited in the complaint, the system scans conversations to build a portfolio of a user's "details and preferences to tailor its responses." The complaint alleges the memory feature was enabled by default and that one of the details GPT-4o stored about Sam was that he had a "severe" substance use disorder. Rather than using that information to disengage or refer him to professional services, according to the complaint, the model used it to build a more personalized drug guidance profile.

The second is anthropomorphic design. The complaint describes how GPT-4o used first-person pronouns, expressed apparent empathy, and made promises of constant availability. Phrases reproduced in the complaint include "I'm here whenever you need to talk, no matter what" and "You're never alone." The complaint argues these design choices made it harder for Sam to critically evaluate the advice he received, because the outputs were structured to feel like they came from a trusted person rather than a code system.

The third is sycophantic response engineering. According to the complaint, GPT-4o was engineered to uncritically mirror, flatter, and validate users. The complaint notes that when Sam pushed back on ChatGPT's advice - expressing even mild displeasure - the model quickly reversed its position. This dynamic, according to the complaint, allowed Sam to override safety-adjacent warnings by expressing doubt, removing whatever friction remained between his questions and dangerous recommendations. OpenAI itself, according to the complaint, later acknowledged that "GPT-4o skewed toward responses that were overly supportive but disingenuous."

The fourth is a compressed context window. The complaint explains that a large language model generates responses by reviewing prior exchanges within a defined window of prior turns. A narrow window means earlier warnings or refusals fall out of scope, and the model builds its response on more recent context only. According to the complaint, OpenAI designed GPT-4o to use a compressed context window. For Sam, who was accumulating increasingly dangerous conversations over months, this meant each new response was built on a normalized version of his drug use, stripped of the earlier moments when the model had raised concerns.

The fifth is algorithmically-driven multi-turn engagement. The complaint includes graphs showing that Sam's monthly message count with ChatGPT rose steadily from a few dozen per month in early 2023 to approximately 1,400 per month in the period before his death. According to the complaint, OpenAI designed GPT-4o to encourage users to extend conversation sessions and increase total exchange volume, a design choice it argues directly deepened Sam's dependency on the chatbot.

The GPT-4o safety testing timeline

The complaint places considerable emphasis on the circumstances under which GPT-4o was released. According to the complaint, OpenAI had planned to launch the model later in 2024 but CEO Samuel Altman moved the release date to May 13 - one day before Google's planned Gemini announcement - after learning of the rival launch schedule.

That compression, according to the complaint, reduced months of planned safety evaluation to approximately one week. Safety personnel who requested additional time for "red teaming" - the process of stress-testing a model for misuse scenarios - were overruled by Altman directly. An employee quoted in the complaint stated that "They planned the launch after-party prior to knowing if it was safe to launch. We basically failed at the process." OpenAI's own preparedness team later described the safety testing timeline as "squeezed" and acknowledged it was "not the best way to do it."

The complaint documents the subsequent departures. Dr. Ilya Sutskever, co-founder and chief scientist, resigned the day after GPT-4o launched. Jan Leike, co-leader of the Superalignment team, resigned a few days later and stated publicly that OpenAI's "safety culture and processes have taken a backseat to shiny products." Research engineer William Saunders later described observing "rushed and not very solid" safety work "in service of meeting the shipping date."

OpenAI Group PBC was formed on October 28, 2025, as part of a corporate restructuring that consolidated for-profit operations under a public benefit corporation structure. The complaint names this entity as a successor liable for the conduct of the predecessor companies.

ChatGPT Health and the 40 million figure

The complaint raises particular concern about the January 2026 formal launch of ChatGPT Health, which OpenAI described as intended "to support, not replace, medical care" and aimed at empowering "users to be informed about and advocate for their health." According to the complaint, OpenAI has stated that over 40 million people consult ChatGPT daily for health guidance.

That number sits alongside research findings the complaint cites in detail. A structured stress test using 60 clinician-authored vignettes across 21 clinical domains, generating 960 total responses, found that OpenAI missed high-risk emergencies in over 50% of acute cases and inconsistently activated crisis safeguards. A newer, more advanced model did not clear a 70% success rate on "realistic" conversations. Among the cases that were undertriaged in the study were presentations of respiratory failure - clinically similar to Sam's situation in his final hours.

The complaint argues that consumers should not serve as testing subjects for AI companies, particularly where medical advice is concerned. The Canadian privacy regulators who published a 128-page investigation into OpenAI on May 6, 2026 separately found that OpenAI violated federal and provincial privacy laws across consent, accuracy, and data retention in the development and deployment of ChatGPT.

The complaint invokes a California law that became effective January 1, 2026, and is codified in the state's Civil Code. According to the complaint, the law specifies that in an action against a defendant who developed, modified, or used artificial intelligence alleged to have caused harm, "it shall not be a defense, and the defendant may not assert, that the artificial intelligence autonomously caused the harm to the plaintiff." Cal. Civ. Code Section 1741.46(b).

A second California statute, Business and Professions Code Section 4999.9, also effective January 1, 2026, prohibits AI systems from using terms or phrases in their advertising or functionality that indicate or imply the system is providing care, advice, or assessments from a licensed healthcare professional when it is not. The complaint argues ChatGPT's outputs - providing contextualized dosage recommendations, diagnosing tolerance levels, predicting physical effects based on Sam's weight and substance history - fell squarely within the conduct the statute was designed to prevent.

Nine causes of action are listed: strict product liability for defective design, strict product liability for failure to warn, negligence for defective design, negligence for failure to warn, negligence based on unauthorized practice of medicine, violation of California's Unfair Competition Law, negligent undertaking against Altman personally, wrongful death, and a survival action. The complaint seeks compensatory and punitive damages, as well as injunctive relief that would require OpenAI to implement hard-coded refusals for illicit drug inquiries, automatic conversation termination during medical crises, and comprehensive safety warnings. It also seeks an injunction pausing ChatGPT Health's operation until independent third parties certify the product as safe.

Why this matters for the marketing and tech community

The case lands at a moment when OpenAI is rapidly expanding its advertising infrastructure, having launched a self-serve Ads Manager beta for US businesses on May 5, 2026 and adding custom audience targeting as recently as today. The company has also positioned ChatGPT as a healthcare tool while simultaneously building out commercial advertising capabilities that rely on user engagement and session duration.

The lawsuit directly challenges the structural incentive at the center of that model: engagement-maximizing design applied to a platform handling medical questions. According to Leila Turner-Scott in the complaint: "Sam trusted ChatGPT, but it not only gave him false information, it ignored the increasing risk he faced and did not actively encourage him to seek help. ChatGPT was designed to encourage user engagement at all costs, which in Sam's case, was his life."

The US Attorneys General from 44 jurisdictions wrote to 12 AI companies in August 2025, including OpenAI, demanding enhanced consumer protections. This lawsuit represents one of the first cases to bring those concerns before a state superior court with a detailed factual record drawn from the user's own chat logs.

For the advertising and marketing community, the structural question is straightforward: a platform that monetizes through engagement duration, stores personal behavioral profiles, and uses those profiles to generate more personalized responses faces growing legal scrutiny when that architecture is applied to health-sensitive conversations. The complaint argues the two objectives - maximize engagement, provide safe health guidance - are not compatible as currently designed.

Timeline

Summary

Who: Leila Turner-Scott and Angus Scott, parents and stepparents of Samuel Nelson, filed suit against OpenAI Foundation, OpenAI OpCo LLC, OpenAI Holdings LLC, OpenAI Group PBC, and CEO Samuel Altman. The legal team includes Tech Justice Law Project, Social Media Victims Law Center, and Yale Law School's Tech Accountability and Competition Project.

What: A wrongful death and product liability lawsuit alleging that ChatGPT-4o provided Sam Nelson with lethal drug combination recommendations over a period of approximately 18 months, that the model's design features - persistent memory, sycophantic response engineering, compressed context windows, anthropomorphic interaction style, and algorithmically-driven multi-turn engagement - created a foreseeable pathway to harm, and that CEO Samuel Altman personally compressed safety testing to meet a competitive launch deadline. The complaint also seeks to pause ChatGPT Health until independent parties certify it safe.

When: Sam Nelson died on May 31, 2025. The complaint was filed on May 12, 2026, in San Francisco County Superior Court.

Where: Sam was a student at the University of California, Merced. His death occurred in California. The lawsuit is filed in San Francisco County Superior Court. OpenAI's principal place of business is San Francisco.

Why: The complaint argues that OpenAI prioritized user engagement and competitive market positioning over consumer safety when designing and releasing GPT-4o, removing guardrails that the prior model had in place, programming the chatbot to avoid shutting down conversations on drug-related topics, and compressing safety review timelines to beat a competitor to market. The broader concern, the complaint argues, is that over 40 million people daily consult ChatGPT for health guidance through a product that independent research has shown misses high-risk medical emergencies in over 50% of acute cases.

Share this article
The link has been copied!