
As artificial intelligence rapidly embeds itself into every corner of health care, from reading X-rays to drafting medical notes, a new National Academy of Medicine report warns that the technology could easily deepen the very problems it promises to solve. While AI tools are being hailed as a fix for clinician burnout, rising costs, and inequities in access to care, they also risk amplifying bias, eroding trust, and widening digital divides.
The Academy’s proposed solution is laid out in the report, An AI Code of Conduct for Health and Medicine: Essential Guidance for Aligned Action (AICC). It provides a set of six simple but sweeping commitments (advancing humanity, ensuring equity, engaging affected individuals, improving workforce well-being, monitoring performance, and fostering innovation) to help the nation harness AI’s benefits without sacrificing ethics, safety, or fairness.
LDI Senior Fellow and University of Pennsylvania Professor Kevin B. Johnson, MD, MS, was one of the 21 authors of the 206-page national report.
“The same thing that happened with electronic health records is happening again with AI,” said Johnson. “Everyone’s building tools, but there isn’t a shared playbook to make sure they’re safe, fair, and actually useful. This report was needed to bring some order to the chaos. It gives us a national framework so AI in health care can be developed and used responsibly, with transparency and trust at the center.”
Johnson, a professor with joint appointments in Biostatistics, Epidemiology and Informatics at the Perelman School of Medicine; Computer and Information Science and Bioengineering at the Penn School of Engineering and Applied Science; and Science Communication at the Annenberg School for Communication, is known for his extensive research on electronic prescribing and computer-based clinical documentation.
He noted that the field needs to be more concerned that AI in health care is becoming a free-for-all. “Hospitals, companies, and agencies are all moving quickly, but not necessarily in the same direction. That lack of coordination leads to duplicated work, unclear accountability, and uneven protections for patients. This report provides a way to fix that. It sets up a model that’s clear about goals and accountability, open enough to allow innovation, and focused on tracking results once systems are in place.”
The AI Code of Conduct (AICC) framework centers on two components:
• Code Principles aligned with the NAM’s Learning Health System (LHS) core commitments on equity, safety, and transparency.
• Code Commitments viewed as “simple rules” based on Complex Adaptive Systems Theory (CAST) to guide organizations and individuals. (CAST is a framework for understanding how large groups of interconnected elements—such as people, organisms, or organizations—interact, adapt, and evolve over time. It emphasizes that the overall behavior of a system emerges from the interactions of its parts rather than being directed by any single controlling force.)
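To make the “simple rules” idea concrete, the toy sketch below (a hypothetical illustration, not anything specified in the report) shows how a complex adaptive system behaves: every agent follows one simple local rule about its neighbors, and a system-wide pattern of adoption emerges with no central controller directing it.

```python
# Toy complex-adaptive-systems sketch: one simple local rule per agent,
# emergent global behavior. All parameters here are illustrative.
import random

random.seed(7)
N_AGENTS, N_ROUNDS = 30, 10

# Each agent starts with a 20% chance of having adopted a practice.
adopted = [random.random() < 0.2 for _ in range(N_AGENTS)]

for round_num in range(N_ROUNDS):
    nxt = adopted[:]
    for i in range(N_AGENTS):
        # Simple local rule: adopt if either neighbor (ring layout) has.
        if adopted[(i - 1) % N_AGENTS] or adopted[(i + 1) % N_AGENTS]:
            nxt[i] = True
    adopted = nxt
    print(f"round {round_num + 1}: {sum(adopted)} of {N_AGENTS} agents have adopted")
```

No agent sees the whole system, yet adoption spreads across it round by round; that bottom-up dynamic is the kind of behavior CAST describes.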
The report’s six recommended Code Commitments are:
• Advancing humanity
• Ensuring equity
• Engaging affected individuals
• Improving workforce well-being
• Monitoring performance
• Fostering innovation
Johnson emphasized that managing and enforcing the code is not about creating a big “AI police.”
“Instead,” he continued, “it’ll be a shared system that mixes national oversight with local responsibility. Federal agencies like the Food and Drug Administration (FDA), National Institutes of Health (NIH), and the Office of the National Coordinator for Health Information Technology (ONC) will set standards, but hospitals, universities, and professional groups will handle the day-to-day work. It’s kind of like how quality improvement or patient safety programs run now—federally guided but locally carried out. That approach gives us flexibility while keeping everyone accountable.”
The hardest part of this isn’t likely to be the technology, but rather the people and systems.
“Everyone has different priorities, incentives, and comfort levels with risk,” Johnson said. “Until developers, clinicians, regulators, and patients share a common understanding of what ‘trustworthy AI’ means, we’ll keep bumping into the same issues. We need to agree on shared values, safety checks, and how we measure success. Once we do that, the technology part will fall into place.”
He emphasized that health care change doesn’t happen overnight and estimated that progress will become evident in the next two or three years with pilot programs, certification models, and shared metrics.
The report cites the key risks and ethical concerns as:
• Bias in data and algorithms
• Privacy and security breaches
• Model drift and lack of transparency
• Anthropomorphizing AI (attributing human traits, emotions, or intentions to it) and overreliance on its outputs
• Disparate access to AI benefits
On the disparities issue, Johnson said there is not yet a system in place to identify and measure bias in AI used in health care.
“Some tools can check data and algorithms for fairness, but they’re inconsistent and not widely used,” said Johnson. “What’s missing is a standard way to measure bias all the way from how data are collected to how decisions get applied in care. The report calls for national metrics and independent certification, so bias detection becomes routine, just like quality or safety checks.”
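As a rough illustration of what a routine bias check could look like, here is a minimal sketch (in Python, with entirely hypothetical data and names) of a subgroup performance audit: it compares a model’s true positive rate across demographic groups, one common signal of disparate performance. Real audits would use validated fairness toolkits and a much richer set of metrics.

```python
# Minimal subgroup fairness audit sketch. The model outputs, labels,
# and group codes are hypothetical; real audits use validated toolkits.
from collections import defaultdict

def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives the model correctly flags."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return None  # no positives in this subgroup
    return sum(p for _, p in positives) / len(positives)

def tpr_gap_by_group(y_true, y_pred, groups):
    """Per-group TPR plus the largest pairwise gap between groups."""
    by_group = defaultdict(lambda: ([], []))
    for t, p, g in zip(y_true, y_pred, groups):
        by_group[g][0].append(t)
        by_group[g][1].append(p)
    rates = {g: true_positive_rate(ts, ps) for g, (ts, ps) in by_group.items()}
    observed = [r for r in rates.values() if r is not None]
    gap = max(observed) - min(observed) if observed else None
    return rates, gap

# Hypothetical held-out predictions from, say, a sepsis-alert model.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]
rates, gap = tpr_gap_by_group(y_true, y_pred, groups)
print(rates, gap)  # {'A': 0.5, 'B': 0.666...} with a gap of about 0.17
```

Standardization, in the report’s vision, would mean a gap like this is computed the same way everywhere and tracked as routinely as other quality and safety measures.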
In a separate piece in JAMA Health Forum, Johnson argues against delaying AI use until systems are entirely bias-free, a condition he describes as unrealistic given the “nascent and challenging” nature of fairness assessment. Instead, he emphasizes:
• Embedding equity work throughout development, integration, and deployment rather than waiting for perfect fairness.
• Implementing AI carefully in specific, well-characterized contexts where bias and performance can be actively monitored.
• Iteratively refining systems based on real-world feedback and continuously revisiting key questions on equity.
The Forum piece closes by invoking Mark Twain (“The secret of getting ahead is getting started”) to reinforce its pragmatic stance: proceed with implementation in measured, evidence-gathering ways and monitor continuously for equity, rather than postponing deployment until all bias is eliminated.
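To make the “implement in well-characterized contexts and monitor” idea concrete, here is a minimal post-deployment monitoring sketch, again with a hypothetical log format and an illustrative alert threshold: it computes a model’s accuracy per time period and flags periods where performance falls below the initial baseline, one simple way to surface model drift. The same pattern extends naturally to per-subgroup metrics for equity monitoring.

```python
# Minimal model-drift monitoring sketch. Log format, baseline rule,
# and threshold are illustrative assumptions, not from the report.
from collections import defaultdict

ALERT_DROP = 0.05  # flag a period if accuracy falls this far below baseline

def accuracy_by_period(records):
    """records: iterable of (period, y_true, y_pred) tuples."""
    tallies = defaultdict(lambda: [0, 0])  # period -> [correct, total]
    for period, y_true, y_pred in records:
        tallies[period][0] += int(y_true == y_pred)
        tallies[period][1] += 1
    return {p: correct / total for p, (correct, total) in sorted(tallies.items())}

def drift_alerts(acc_by_period):
    """Flag periods whose accuracy drops well below the first (baseline) period."""
    periods = list(acc_by_period)
    baseline = acc_by_period[periods[0]]
    return [p for p in periods[1:] if baseline - acc_by_period[p] > ALERT_DROP]

# Hypothetical prediction logs from a deployed model.
logs = [("2025-01", 1, 1), ("2025-01", 0, 0), ("2025-01", 1, 1),
        ("2025-02", 1, 0), ("2025-02", 0, 0), ("2025-02", 1, 0)]
acc = accuracy_by_period(logs)
print(acc)                # {'2025-01': 1.0, '2025-02': 0.333...}
print(drift_alerts(acc))  # ['2025-02']
```

In practice, a flagged period would trigger the kind of iterative review and refinement the Forum piece describes.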