FDA Authorization of AI/ML Clinical Decision Support Devices Offers Little Evidence of Safety, Effectiveness, or Equity
LDI Study Examines the Evidence Behind FDA-Authorized AI/ML Devices for Critical Care
The growth of artificial intelligence (AI) and machine learning (ML) technologies has led the Food and Drug Administration (FDA) to authorize hundreds of AI/ML-based devices. While the FDA has experimented with new regulatory frameworks, it has used traditional regulatory approaches when deciding whether to authorize most of these clinical decision support (CDS) tools. To help the public understand this emerging technology, the agency maintains a public database of authorized AI/ML devices (Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices | FDA), last updated in October 2022.
In our recent study published in JAMA Internal Medicine, we analyzed this public database to describe the current state of FDA authorizations for AI/ML CDS devices and to characterize the evidence supporting them. Many of these devices show promise for helping bedside clinicians make better decisions in the critical care setting, but their benefits remain unproven.
We found that the FDA’s regulatory review process, developed for more traditional devices, provides little data about the safety, effectiveness, or equity of newer AI/ML systems. Of the 10 authorized devices suitable for CDS in critical care, nine received clearance through the traditional 510(k) pathway. Notably, the 510(k) premarket submission pathway does not require clinical evidence of safety, effectiveness, or equity; rather, it requires demonstrating similarity to previously authorized devices, known as predicates. Furthermore, our team’s expert review found that many of these predicates were not equivalent to the new devices in clinical or methodologic characteristics. For example, many predicates received clearance through generations of equivalence determinations reaching back several decades, long before AI/ML technologies were widely used in clinical medicine.
Only three of the 10 device authorizations were accompanied by any clinical evaluation. For these three, regulators compared the device’s predictive performance against a clinical outcome. None of the devices was evaluated for its effect on care processes or patient outcomes, and none of the authorizations included any information about safety, potential for harm, or equitable performance across race, sex, age, or other demographic groups.
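For readers unfamiliar with this type of validation, comparing predictive performance against a clinical outcome typically means checking how well a device’s risk scores separate patients who experienced the outcome from those who did not, often using the area under the receiver operating characteristic curve (AUROC). The Python sketch below is purely illustrative: the dataset, column names, and values are hypothetical and are not drawn from the study or from any authorized device.

import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical validation data: one row per patient, with the
# device's risk score and the observed clinical outcome (0/1).
df = pd.DataFrame({
    "risk_score": [0.12, 0.85, 0.40, 0.77, 0.05, 0.91],
    "outcome":    [0,    1,    0,    1,    0,    1],
})

# Discrimination: how well the score ranks patients with the
# outcome above patients without it (1.0 = perfect, 0.5 = chance).
auroc = roc_auc_score(df["outcome"], df["risk_score"])
print(f"Overall AUROC: {auroc:.2f}")

A check like this says nothing about whether using the device actually changes care processes or patient outcomes, which is exactly the gap the study identifies.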
Importantly, our study does not suggest that the FDA failed to adhere to its own requirements for medical device regulation. Rather, our findings highlight the need for new regulatory frameworks suited to modern CDS devices that rely on AI/ML technologies.
What should these new frameworks look like? First, prospective clinical studies of AI/ML devices are needed, especially in settings like critical care, to ensure that the devices improve treatment processes and outcomes. Second, devices should be evaluated for equity, given growing evidence that biased algorithms can reinforce or worsen health disparities; a minimal sketch of what such an evaluation might involve follows below. By incorporating these approaches into its authorizations, the FDA can better ensure the safety, effectiveness, and equity of AI/ML CDS devices.
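As one concrete illustration, an equity evaluation can stratify predictive performance by demographic group to check whether a model serves some groups worse than others. This is a minimal sketch, assuming a hypothetical dataset with an invented "group" column standing in for race, sex, or age group; it is not the study’s method or any regulator’s procedure.

import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical validation data with a demographic attribute
# ("group" is a placeholder for race, sex, age group, etc.).
df = pd.DataFrame({
    "risk_score": [0.12, 0.85, 0.40, 0.77, 0.05, 0.91, 0.33, 0.68],
    "outcome":    [0,    1,    0,    1,    0,    1,    0,    1],
    "group":      ["A",  "A",  "A",  "A",  "B",  "B",  "B",  "B"],
})

# Compute AUROC separately within each demographic group and
# compare; large gaps would flag a potential equity problem
# warranting further review before authorization.
for name, subgroup in df.groupby("group"):
    auroc = roc_auc_score(subgroup["outcome"], subgroup["risk_score"])
    print(f"Group {name}: AUROC = {auroc:.2f}")

In practice such an analysis would also need adequate sample sizes within each subgroup and calibration checks, but even this simple stratified comparison goes beyond what the authorizations in the study reported.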
The study, “Analysis of Devices Authorized by the FDA for Clinical Decision Support in Critical Care,” was published October 9, 2023, in JAMA Internal Medicine. Authors include Jessica Lee, Alexander Moffett, George Maliha, Zahra Faraji, Genevieve Kanter, and Gary Weissman.