California requires licensed physicians to supervise AI use in healthcare

Legal advisory details new requirements and restrictions for artificial intelligence systems in medical settings.

California AG issues legal framework for AI use in healthcare, focusing on patient protections.

The California Attorney General's Office has issued guidance detailing new legal requirements for the use of artificial intelligence (AI) in healthcare settings. According to the legal advisory released on January 13, 2025, healthcare organizations must ensure that licensed physicians supervise all AI systems that affect medical decisions.

According to the Food and Drug Administration (FDA), 981 AI and machine learning software devices had received medical use authorization as of May 2024. These systems are being deployed across diagnosis, treatment planning, scheduling, risk assessment and billing.

The advisory outlines specific prohibitions, including using AI systems to deny insurance claims by overriding doctors' medical necessity determinations. Healthcare providers must comply with California's Unfair Competition Law and other consumer protection statutes when implementing AI tools.

"Only human physicians and other medical professionals are licensed to practice medicine in California," the advisory states. "California law does not allow delegation of the practice of medicine to AI." Licensed physicians face potential conflict of interest violations if they have financial stakes in AI services without proper disclosure.

Recent amendments to the Knox-Keene Act and California Insurance Code place strict limitations on health plans' use of automated decision systems. Plans cannot employ AI to "deny, delay, or modify health care services based on medical necessity." Instead, the advisory requires that AI systems:

  • Not replace licensed provider decision-making
  • Base determinations on individual patient circumstances
  • Undergo regular accuracy and reliability reviews
  • Face inspection and auditing by state agencies
  • Avoid discriminatory impacts
  • Restrict data usage to stated purposes
  • Prevent direct or indirect patient harm

The guidance emphasizes AI's dual potential: while it may improve health outcomes and reduce administrative burdens, it also risks perpetuating discrimination and interfering with patient autonomy if not properly regulated. Healthcare entities must validate that AI systems "reduce rather than replicate human error and biases."

California's civil rights laws prohibit discrimination by any entity receiving state support, including healthcare organizations. Protected classifications include race, color, religion, national origin, age, disability status, and other categories. The advisory notes that AI systems making less accurate predictions about historically marginalized groups could constitute illegal disparate impact discrimination.
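For illustration, one way a healthcare organization might look for the accuracy gaps described above is to compare a model's error rates across patient groups. The sketch below is not part of the advisory; it assumes a hypothetical pandas DataFrame whose column names (demographic group, true outcome, model prediction) are invented for this example and a binary prediction task.

```python
# Illustrative sketch only (not drawn from the advisory). Assumes a
# hypothetical DataFrame with a demographic group column, a true outcome
# column (1 = patient needed the service), and the model's binary prediction.
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, group_col: str,
                         outcome_col: str, prediction_col: str) -> pd.DataFrame:
    """Return per-group accuracy and false-negative rate for a binary model."""
    rows = []
    for group, subset in df.groupby(group_col):
        accuracy = (subset[prediction_col] == subset[outcome_col]).mean()
        positives = subset[subset[outcome_col] == 1]
        # False-negative rate: share of patients who truly needed care
        # but whom the model failed to flag.
        fnr = (positives[prediction_col] == 0).mean() if len(positives) else float("nan")
        rows.append({"group": group, "n": len(subset),
                     "accuracy": accuracy, "false_negative_rate": fnr})
    return pd.DataFrame(rows)

# Hypothetical usage:
# audit = error_rates_by_group(patients, "demographic_group", "needed_care", "model_flag")
# print(audit.sort_values("false_negative_rate", ascending=False))
```

A gap between groups would not by itself establish a legal violation, but it is the kind of signal the advisory's required accuracy and reliability reviews would be expected to surface.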

Patient privacy also receives heightened focus. The Confidentiality of Medical Information Act governs use of health data, with special protections for mental health and reproductive care information. Recent amendments require providers and digital health companies to enable patients to keep sensitive health data separate and confidential.

Looking ahead, the advisory indicates that California will continue developing AI regulations while enforcing existing consumer protection, civil rights, and privacy laws. Healthcare entities are advised not to wait for new rules before ensuring AI compliance.

The guidance arrives as healthcare AI adoption accelerates nationwide. Healthcare providers, insurers, vendors and investors in California must now carefully evaluate their AI implementations against these detailed legal requirements or risk running afoul of state law.

Attorney General Rob Bonta stated that while AI technology evolves rapidly, "existing California laws apply to both the development and use of AI. Companies, including healthcare entities, are responsible for complying with new and existing California laws and must take full accountability for their actions, decisions, and products."

The advisory concludes by noting that beyond the discussed regulations, other California laws covering issues like tort liability and public health will apply equally to AI systems. Using AI will not serve as a defense against violations of any applicable laws.