NY Bill: AI Chatbot Operators Liable for Professional Advice

New York Senate Bill S7263 has reached the Senate floor calendar, signaling an imminent full chamber vote. The bill would prohibit AI chatbot proprietors from permitting their systems to provide substantive professional advice — including medical diagnosis, legal counsel, and psychological guidance — that would constitute unauthorized practice if performed by a human. Sponsored by Democratic Senator Kristen Gonzalez, it covers all licensed professions under New York Education Law plus legal practice under Judiciary Law.

The bill's most consequential provisions: it creates a private right of action allowing consumers to sue for actual damages (plus attorney fees for willful violations); disclaimers cannot shield operators from liability, so merely disclosing that the chatbot is non-human does not defeat a claim; and liability attaches to deployers rather than model developers, meaning a company deploying a ChatGPT-based service bears responsibility, not OpenAI itself.

If enacted, S7263 would be the first US state law regulating AI professional advice through civil litigation mechanisms. Legal analysts suggest it may expand litigation exposure for deployers more than it curbs unauthorized practice. Key disputes center on defining 'substantive' responses, what constitutes professional 'practice,' and identifying liable parties in AI's increasingly complex ownership ecosystem.

New York AI Chatbot Liability Bill Deep Analysis: When AI Starts 'Practicing Medicine' and 'Practicing Law'

I. S7263: Legal Red Lines for AI Professional Advice

New York Senate Bill S7263 appeared on the Senate floor calendar on February 26, 2026, signaling an imminent full chamber vote. Sponsored by Democratic Senator Kristen Gonzalez, the bill attempts to answer a question largely unresolved in US law: who bears responsibility when AI chatbots give harmful professional advice?

The bill's core logic rests on a counterfactual test: if a response would constitute unauthorized practice if provided by a human, the chatbot proprietor may not permit their system to provide it. This covers all licensed professions under New York Education Law — medicine, dentistry, architecture, psychology, social work, psychoanalysis — plus legal practice under Judiciary Law.
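The counterfactual test can be expressed as toy decision logic. The sketch below is purely illustrative: the field names, function names, and the boolean inputs are assumptions for demonstration, not language or definitions from the bill itself.

```python
# Illustrative sketch of S7263's counterfactual test (not bill text).
# LICENSED_FIELDS approximates the professions named in the article;
# "is_substantive" stands in for the bill's contested 'substantive' standard.
LICENSED_FIELDS = {
    "medicine", "dentistry", "architecture",
    "psychology", "social_work", "psychoanalysis", "law",
}

def would_be_unauthorized_practice(field: str, is_substantive: bool) -> bool:
    """Would this response constitute unauthorized practice
    if an unlicensed human had provided it?"""
    return is_substantive and field in LICENSED_FIELDS

def proprietor_may_permit(field: str, is_substantive: bool) -> bool:
    """Under the bill's logic, the proprietor may not permit a response
    that fails the counterfactual test."""
    return not would_be_unauthorized_practice(field, is_substantive)
```

In practice, of course, the hard part is exactly what this toy model assumes away: deciding whether a given response is "substantive" and whether it amounts to "practice" at all.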

II. Three Killer Provisions

Private Right of Action — Consumers can directly sue chatbot operators for actual damages. Willful violations allow recovery of attorney fees. No need to wait for government enforcement — individuals can initiate proceedings.

Disclaimers Are Not a Shield — Operators cannot escape liability by notifying consumers they're interacting with AI. This directly challenges the industry's prevailing compliance strategy of bottom-of-page disclaimers.

Deployers Bear Responsibility, Not Model Developers — The bill excludes "third-party developers that license their chatbot technology." A law firm deploying a GPT-4-based legal chatbot bears liability, not OpenAI.

```mermaid
graph TD
    A["S7263 Liability Framework"] --- B["Private Right of Action<br/>Consumers can sue directly"]
    A --- C["Disclaimers Invalid<br/>Cannot disclaim as 'this is AI'"]
    A --- D["Deployer Liable<br/>Not model developer"]
```

III. Scope: Broader Than Expected

The bill covers a surprisingly wide range of professions: not just medicine and law but dentistry, architecture, psychology, social work, psychoanalysis, and even podiatry. Holland & Knight notes the bill may expand litigation exposure more than it curbs unauthorized practice.

IV. Industry Impact

Enterprise AI deployers face the most direct risk — financial institutions, healthcare providers, and law firms using AI chatbots must reassess response boundaries. AI platform companies are temporarily safe but may face similar legislative pressure if the model spreads. LegalTech may be hit hardest — the line between "legal information" and "legal advice" becomes existential.
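One way deployers reassess response boundaries is a pre-delivery output filter that screens chatbot text before it reaches the user. The sketch below is a hypothetical, keyword-based example; the marker phrases and refusal message are invented for illustration, and a production system would need far more robust classification than substring matching.

```python
# Hypothetical deployer-side guardrail (illustrative only).
# ADVICE_MARKERS and REFUSAL are placeholder examples, not a real
# compliance rule set derived from S7263.
ADVICE_MARKERS = {
    "medical": ["diagnosis", "you should take", "prescribe"],
    "legal": ["you should sue", "file a motion", "your contract is void"],
}

REFUSAL = ("I can share general information, but I can't provide "
           "professional advice. Please consult a licensed professional.")

def screen_response(text: str) -> str:
    """Return the chatbot's text unchanged, or a refusal if it
    appears to cross into substantive professional advice."""
    lowered = text.lower()
    for field, markers in ADVICE_MARKERS.items():
        if any(marker in lowered for marker in markers):
            return REFUSAL  # block the flagged response
    return text
```

Note that under the bill, running such a filter would not by itself be a defense: what matters is whether substantive advice actually reaches the consumer, not whether a disclaimer or screening step was in place.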

V. National Context

S7263 emerges within a fast-evolving state-level AI regulatory patchwork. Without federal AI legislation, states are experimenting with different approaches. New York chose civil litigation mechanisms, consistent with its strong consumer protection tradition.

Conclusion

S7263 represents a significant shift from administrative regulation to civil litigation in AI governance. Its core innovation is creating powerful economic deterrence through private right of action. If enacted, the "deployer bears responsibility" principle could fundamentally reshape AI product business models and risk frameworks.

Sources

  • [Holland & Knight: NY bill creates chatbot proprietor liability](https://www.hklaw.com/en/insights/publications/2026/03/new-york-bill-would-create-liability-for-chatbot-proprietors)
  • [PPC Land: NY chatbot liability bill reaches Senate floor](https://ppc.land/new-yorks-chatbot-liability-bill-reaches-senate-floor-threatening-ai-providers/)
  • [Fast Company: NY lawmakers want AI chatbots to stop pretending to be doctors](https://www.fastcompany.com/91503990/new-york-lawmakers-want-ai-chatbots-to-stop-pretending-to-be-doctors-or-lawyers)
  • [NY State Senate: S7263 bill text](https://www.nysenate.gov/legislation/bills/2025/S7263)