Artificial intelligence (AI) and machine learning (ML) hold great promise to improve health outcomes, but they also pose risks of inaccuracy and bias that can lead to patient injury. And with injury comes liability. In our recent article in the Milbank Quarterly, we explore the liability implications of AI/ML and propose a toolbox of liability reforms to smooth the implementation of this promising yet disruptive technology.

We describe four ways to reform liability: 1) changing the standard of care, 2) insurance and indemnity, 3) special adjudication, and 4) regulation. These tools are neither static nor mutually exclusive: they will ebb and flow as AI/ML evolves in different parts of health care. They are also largely available to both federal and state policymakers—and in some instances, private parties.

Changing the standard of care is the narrowest reform, but one of the easiest to implement. Physicians are liable for medical malpractice when they deviate from the standard of care: the accepted practice of the profession in a given clinical situation. Accordingly, professional societies and groups of physicians acting together can change the standard of care by changing their practice around AI/ML. For instance, the National Institutes of Health (NIH) and radiology societies have come together to develop research agendas and a roadmap for implementing AI/ML in medical imaging. However, this reform tool is largely confined to physicians.

Insurance and indemnity provide more robust tools for liability reform across the health care ecosystem. Insurance is a familiar way to transfer liability risk in exchange for premiums. Indemnity is similar but allows two parties to divide liability risk by contract. Both allow parties to limit risk up front and should prove useful to health systems, large physician practices, and AI/ML algorithm designers, which have the scale and legal resources to develop such agreements. State insurance regulators or health care interest groups could assist by drafting model policies or terms for common AI/ML situations or algorithms, as some states already do for certain types of insurance.

Legislators, courts, and regulators could go further and encourage the safe adoption of AI/ML through more thorough liability reforms. These more drastic changes may be required for some of the most complex and groundbreaking applications of AI/ML: so-called “black box” algorithms. These algorithms continuously update and learn from data inputs, but the exact identity and weighting of their variables cannot be determined because they constantly evolve.

Special adjudication systems remove particular activities from the ordinary liability system in favor of specialized tribunals that focus on those activities. At their best, such systems can streamline proceedings, focus on critical safety questions (rather than technical legal issues), and protect actors from direct liability. For instance, the National Vaccine Injury Compensation Program largely exempts vaccine manufacturers from liability in favor of a federally administered program funded by a tax on vaccine doses. Florida and Virginia have enacted special programs that exempt practitioners and hospitals from certain liability arising from neonatal neurologic injuries, also funded by levies on practitioners and hospitals. Certain types of AI/ML (for example, radiology or pathology software) or certain actors (for example, developers) could be protected from direct liability through such systems.

Regulation can partially or completely replace liability. Legislators can do this directly by providing for a regulatory scheme that eliminates traditional liability. Federal regulators can also displace liability by enacting certain types of regulatory schemes. For example, courts have recognized that FDA regulations on drug or device labeling can preclude some forms of liability. Current FDA regulatory activity involving health care software and clinical AI/ML has not yet developed to the level of formal regulations that can preempt liability, but FDA’s proposed and current guidance may play a role in defining good industry practice.

Liability reform is essential for AI/ML to realize its full potential in health care because excessive or misplaced liability, often borne by physicians through malpractice claims, can discourage individuals from researching, developing, and implementing this technology. By reforming the liability system, we can promote the use of this disruptive, potentially life-saving innovation while addressing its ethical, regulatory, and technical challenges. AI/ML has begun to transform health care, and it’s time for the liability system to respond.

The article, “Artificial Intelligence and Liability in Medicine: Balancing Safety and Innovation,” was published on April 6, 2021, in the Milbank Quarterly. The authors are George Maliha, Sara Gerke, I. Glenn Cohen, and Ravi B. Parikh.