New York moved one significant step closer today to holding chatbot operators legally accountable for dispensing professional advice: Senate Bill S7263 appeared on the floor calendar of the New York State Senate on February 26, 2026 - the Thursday before this article's publication - signalling that a full chamber vote is imminent.
The bill, introduced on April 7, 2025, by Senator Kristen Gonzalez of the 59th Senate District, a Democrat and Working Families Party member, would amend the state's General Business Law by adding a new section, § 390-f, specifically targeting AI systems that simulate the advice or conduct of licensed professionals. Its co-sponsors include Senators Michelle Hinchey, John C. Liu, and Julia Salazar, all Democrats, and the legislation has a parallel version in the Assembly, numbered A6545.
The timing matters. As AI-powered chat tools become embedded in everything from consumer websites to enterprise software stacks, the question of who bears responsibility when those tools give harmful advice has remained largely unresolved in U.S. law. S7263 attempts to answer it - at least within New York's borders - with statutory liability backed by a private right of action.
What the bill actually says
The legislative text is precise in its definitions. The bill defines an "artificial intelligence system" as "a machine-based system or combination of systems, that for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." The definition deliberately excludes basic software utilities. Anti-malware tools, calculators, spreadsheets, spam filters, web hosting services, and similar applications that do not materially affect individual rights or welfare are all carved out.
A "chatbot," according to the bill's text, means "an artificial intelligence system, software program, or technological application that simulates human-like conversation and interaction through text messages, voice commands, or a combination thereof to provide information and services to users." This is a broad definition that would capture most modern conversational AI products currently on the market.
The key operative term is "proprietor." According to S7263, a proprietor is "any person, business, company, organization, institution or government entity that owns, operates or deploys a chatbot system used to interact with users." Critically, the definition explicitly excludes "third-party developers that license their chatbot technology to a proprietor." In practical terms, this means the company that deploys a chatbot faces the legal exposure - not necessarily the company that built the underlying model.
The prohibited conduct
The bill's prohibitions are structured around a counterfactual test. A chatbot proprietor may not permit their system to provide "any substantive response, information, or advice, or take any action which, if taken by a natural person," would constitute a crime under New York's Education Law or Judiciary Law. The Education Law provisions referenced govern a wide range of licensed professions. These include medicine, dentistry, architecture, psychology, social work, and psychoanalysis, spanning articles 131, 133, 135, 136, 137, 139, 141, 143, 145, 147, 153, 154, and 163 of the Education Law. The Judiciary Law provision covers the unauthorized practice of law.
That last point deserves emphasis. Legal advice from a chatbot - not a disclaimer-wrapped summary, but substantive guidance a reasonable person might act upon - would fall squarely within the bill's prohibition if it mimics what a licensed attorney would provide. The same principle applies to a chatbot that diagnoses a medical condition, recommends a dental procedure, or provides the kind of structured psychological guidance that a licensed therapist would offer.
Disclaimers are not a shield
One of the bill's most commercially significant provisions is its treatment of disclosure notices. According to the legislation, "a proprietor may not waive or disclaim this liability merely by notifying consumers that they are interacting with a non-human chatbot system." This directly addresses a common industry practice: deploying AI tools that offer substantive professional guidance while posting terms-of-service disclaimers or interface labels stating the output is not professional advice.
Under S7263, that approach would not protect the operator from liability. Businesses cannot, in other words, continue offering what amounts to legal or medical guidance through a chatbot and then escape accountability with a small-print disclaimer.
The bill does separately require disclosure. According to section 4 of the text, "proprietors utilizing chatbots shall provide clear, conspicuous and explicit notice to users that they are interacting with an artificial intelligence chatbot program." The notice must appear in the same language the chatbot is using and must be rendered at a font size no smaller than the largest text appearing elsewhere on the website. The dual-track structure - mandatory disclosure that is simultaneously insufficient to defeat liability - places the compliance burden squarely on the operator's conduct, not their labelling.
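The two rendering rules - notice in the chatbot's language, font size no smaller than the largest text elsewhere on the page - are concrete enough to check programmatically. The sketch below is a minimal illustration of how a deployer might audit a rendered page against those rules; the `PageElement` structure and field names are hypothetical stand-ins for whatever representation a real front-end audit tool would extract, not anything the bill specifies.

```python
from dataclasses import dataclass

@dataclass
class PageElement:
    text: str
    font_px: float      # rendered font size in pixels
    lang: str           # language tag, e.g. "en", "es"
    is_ai_notice: bool = False

def notice_is_compliant(elements: list[PageElement], chatbot_lang: str) -> bool:
    """Check the two rendering rules S7263 would impose on the AI notice:
    it must be in the same language the chatbot uses, and its font size
    must be no smaller than the largest text elsewhere on the page."""
    notices = [e for e in elements if e.is_ai_notice]
    if not notices:
        return False  # no notice at all fails the disclosure requirement
    largest_other = max(
        (e.font_px for e in elements if not e.is_ai_notice), default=0.0
    )
    return all(
        n.lang == chatbot_lang and n.font_px >= largest_other for n in notices
    )
```

A check like this covers only the mechanical disclosure duty; as the bill makes clear, passing it does nothing to limit liability for the substance of the chatbot's outputs.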
Civil enforcement mechanism
Enforcement under S7263 does not depend on state action. According to the bill, "a person may bring a civil action to recover actual damages." Where a court finds that the proprietor "willfully violated this section," the statute escalates: the violator becomes liable for "actual damages together with costs and reasonable attorneys' fees and disbursements incurred by the person bringing such action." Attorney fee shifting is a significant deterrent in U.S. civil litigation, making it more economically viable for individuals to pursue claims that might otherwise be too costly to litigate.
The private right of action - rather than enforcement by the state Attorney General - distinguishes S7263 from some competing regulatory approaches. It distributes enforcement capacity across the population of users who might be harmed, rather than concentrating it within a single public agency with limited resources.
If enacted, the law would take effect on "the ninetieth day after it shall have become a law." That 90-day implementation window would give deployers a short runway to audit and modify their AI systems before legal exposure begins.
Legislative status and trajectory
The bill's placement on the Senate floor calendar on February 26, 2026, represents a meaningful procedural advance. The New York State Senate's floor calendar is the scheduling mechanism through which bills move from committee to a full chamber vote. According to the NYSenate.gov bill tracking page for S7263, the bill has completed eight recorded actions and is now categorized as "Active." Its current status sits at the "On Floor Calendar - Senate" stage, with the full path to enactment still requiring Senate passage, Assembly passage in a corresponding version, and the Governor's signature.
Elena Gurevich, described on LinkedIn as an AI and intellectual property attorney for startups and small and medium-sized enterprises, flagged the bill's floor calendar placement in a post on February 26. "The bill makes providers liable for damages caused by their chatbot responses that emulate those of licensed professionals," she wrote, adding that the bill's scope covers "doctors, dentists, architects, psychologists, social workers, and psychoanalysts" in addition to lawyers.
Gurevich also noted the significance of the disclaimer provision: "You can't get away with it by simply slapping a disclaimer on your chatbot that it's AI."
Context: a wave of AI-specific state legislation
S7263 does not exist in isolation. It sits inside a broader pattern of state-level AI regulation that has accelerated since 2024, as federal legislative action on AI has remained slow and fragmented.
California's SB-243, signed on October 13, 2025, established requirements for companion chatbots to disclose their artificial nature to users. That law, focused on emotional engagement platforms rather than professional advice contexts, took a transparency-first approach: if a reasonable person would be misled into thinking they are talking to a human, the AI must say otherwise. New York's S7263 goes considerably further, attaching liability not merely to identity deception but to the substance of the chatbot's outputs.
California had already moved on the healthcare dimension of the problem. The state's Attorney General issued guidance on January 13, 2025, making clear that California law "does not allow delegation of the practice of medicine to AI" and requiring licensed physicians to supervise AI systems that impact medical decisions.
The Law Commission of England and Wales published a discussion paper on July 31, 2025, identifying what it called liability gaps - scenarios where autonomous AI systems cause harm but no natural or legal person can be held responsible. S7263 directly addresses that gap, at least for professional advice contexts, by assigning liability to the deployer.
The European Commission opened a consultation on AI transparency guidelines in September 2025, addressing obligations under Article 50 of the EU AI Act. Those rules, applying from August 2, 2026, require chatbots and virtual assistants to notify users that they are interacting with an AI. The New York approach, while structurally similar on disclosure, adds the liability dimension that European rules have not yet codified.
Tennessee introduced a bill in early 2026 that could make certain AI training practices a felony, illustrating how divergent state approaches have become. Unlike Tennessee's criminal framework or California's disclosure mandate, New York's bill is primarily a civil liability instrument targeting the deployment of AI in regulated professional contexts.
Federal regulators have also been active. The Federal Trade Commission issued orders on September 10, 2025, requiring seven AI chatbot companies to submit detailed reports on safety practices and data handling. That inquiry focused on harm to children rather than professional impersonation, but it reflects the same underlying regulatory concern: that chatbot operators have not adequately accounted for the harms their products can cause.
What this means for companies deploying AI
The bill's implications are broad for any company using AI in customer-facing contexts touching regulated professional domains. Legal technology companies offering AI-assisted contract analysis or legal research tools, health technology platforms that allow users to describe symptoms and receive structured guidance, mental health applications using conversational AI, and financial advisory services using chatbot interfaces all operate in territory that S7263 would directly govern - at least for New York users.
The definition of proprietor is expansive enough to capture a wide range of deployment models. A retailer that integrates a third-party AI assistant on its website is a proprietor under the bill's terms. An insurance company offering a chatbot that helps users understand their coverage options - where that guidance touches on legal rights - would likely qualify. A healthcare system deploying a patient-facing triage chatbot faces the most direct exposure.
The exclusion of "third-party developers that license their chatbot technology to a proprietor" from the definition of proprietor is significant. It means major model providers are not the bill's primary targets. The companies and institutions that take those models and deploy them to end users in professional advice contexts carry the legal risk.
Whether that risk calculus reshapes how companies structure their chatbot deployments - separating the deploying entity from the model provider with contractual indemnities, restricting AI outputs in professional domains, or building more robust routing to licensed professionals - remains to be seen. What is clear is that operators can no longer rely on a disclaimer to absorb the liability.
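The routing approach mentioned above - detecting when a conversation drifts into a regulated domain and handing it off rather than answering - can be sketched in a few lines. This is purely illustrative: the keyword patterns and domain names below are hypothetical placeholders, and a production deployment would use a proper intent classifier reviewed by counsel, not a regex list.

```python
import re

# Hypothetical keyword patterns standing in for a real intent classifier.
REGULATED_DOMAINS = {
    "legal": re.compile(r"\b(sue|lawsuit|contract dispute|custody)\b", re.I),
    "medical": re.compile(r"\b(diagnos\w*|prescri\w*|symptom)\b", re.I),
    "mental_health": re.compile(r"\b(depress\w*|anxiety|therap\w*)\b", re.I),
}

def route_request(user_message: str) -> str:
    """Return 'chatbot' for general queries, or the name of a regulated
    domain so the deployer can hand off to a licensed professional
    instead of letting the chatbot answer substantively."""
    for domain, pattern in REGULATED_DOMAINS.items():
        if pattern.search(user_message):
            return domain
    return "chatbot"
```

The design point is that the restriction happens before the model answers: under a liability regime like S7263's, suppressing or redirecting the output is the control that matters, since labelling the output after the fact would not.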
The marketing technology dimension
For the marketing technology community, S7263 raises compliance questions that extend beyond legal and healthcare verticals. Advertising technology platforms and marketing agencies increasingly deploy AI chat interfaces for client service, campaign consultation, and strategic guidance. Where those conversations involve advice that touches on regulated domains - tax implications of advertising spend, privacy law compliance for data-driven campaigns, employment questions for marketing teams - the boundary between AI-powered customer service and professional advice may become contested.
PPC Land has documented the expanding regulatory complexity facing AI systems across multiple jurisdictions, from state-level enforcement actions to federal investigations. The pattern suggests that AI deployers in marketing and advertising - particularly those offering AI-driven strategic consultation - should evaluate whether their chatbot outputs could be characterized as substantive professional advice under the terms of bills like S7263.
The 90-day implementation window, should the bill pass and be signed, provides a defined but limited period to conduct that evaluation and make necessary changes to system design, output restrictions, or routing architecture.
Timeline
- January 13, 2025 - California Attorney General issues guidance requiring licensed physicians to supervise all AI systems impacting medical decisions.
- April 7, 2025 - Senator Kristen Gonzalez introduces Senate Bill S7263 in the New York State Senate; bill is committed to the Committee on Internet and Technology.
- July 31, 2025 - Law Commission of England and Wales publishes discussion paper on AI legal challenges, identifying liability gaps in autonomous AI systems.
- August 25, 2025 - 44 U.S. Attorneys General send formal letter to 12 AI companies demanding enhanced protection of children from AI platforms.
- September 5, 2025 - European Commission opens consultation on AI transparency guidelines under Article 50 of the EU AI Act, applicable from August 2026.
- September 10, 2025 - FTC issues orders to seven AI chatbot companies for detailed safety practice reports under Section 6(b) of the FTC Act.
- October 13, 2025 - California Governor signs SB-243, mandating companion chatbots disclose their artificial nature to users.
- January 2, 2026 - Tennessee senator introduces bill that could make certain AI companion training practices a felony, effective July 2026.
- February 26, 2026 - New York Senate Bill S7263 is placed on the Senate floor calendar, moving toward a full chamber vote.
Summary
Who: Senator Kristen Gonzalez (D-WF, 59th District), co-sponsored by Senators Michelle Hinchey, John C. Liu, and Julia Salazar. The bill targets chatbot proprietors - any entity that owns, operates, or deploys a chatbot - while exempting third-party developers that license model technology to deployers.
What: Senate Bill S7263 proposes to amend New York's General Business Law by adding § 390-f, which prohibits chatbot operators from permitting their systems to provide substantive responses that would constitute unauthorized practice of licensed professions. Those professions include medicine, dentistry, architecture, psychology, social work, psychoanalysis, and law. The bill mandates explicit AI disclosure notices and grants individuals a private right of action to recover actual damages, plus attorney fees in willful violation cases. Disclaimer notices alone cannot defeat liability.
When: Introduced on April 7, 2025, the bill reached the New York Senate floor calendar on February 26, 2026. If enacted, it takes effect 90 days after becoming law.
Where: New York State. The bill amends the General Business Law and applies to any entity deploying chatbots to interact with New York users, regardless of where the deploying company is headquartered.
Why: The bill responds to the proliferation of AI chatbots that provide guidance in regulated professional domains - legal, medical, psychological, architectural - without the licensure, accountability, or standards of care that human professionals must meet. Existing disclaimer practices have allowed operators to disclaim liability while still offering substantive professional-grade guidance. S7263 closes that gap by attaching liability to the conduct of the AI output rather than the presence or absence of a notice.