In a 1907 analysis that’s famous for demonstrating “the wisdom of the crowd,” about 800 people guessed the weight of a butchered ox. Most were wrong by up to 133 pounds. The average of the estimates, though, was within 1% of the true weight: 1,207 pounds. 
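The averaging effect behind that result can be sketched in a few lines. This is an illustrative simulation only, not data from the 1907 contest: the number of guesses matches the account above, but the error size and random seed are assumptions.

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible
TRUE_WEIGHT = 1207  # pounds, per the account above

# Simulated crowd: 800 guesses, each the true weight plus a large
# individual error (a 60-lb standard deviation is an assumption).
guesses = [TRUE_WEIGHT + random.gauss(0, 60) for _ in range(800)]

average = sum(guesses) / len(guesses)

# Individual guesses stray widely, yet the mean lands within 1%.
assert abs(average - TRUE_WEIGHT) / TRUE_WEIGHT < 0.01
```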

LDI Senior Fellow Damon Centola studies how the wisdom of the crowd—harnessed through structured information-sharing networks—improves clinical care, such as by reducing bias. His new study, with coauthor LDI Senior Fellow Jaya Aysola, tested how being in a network affected physicians’ decisions in seven common situations, including cardiac events, low back pain, and diabetes care. Network participation improved performance, but not through averaging, as in the ox example. Instead, clinicians who initially made incorrect diagnoses became more accurate by being part of the group. This suggests a way for physicians to have positive interactions with their peers and to do better for their patients.

Centola and LDI Senior Fellow Anish Agarwal, Deputy Director of the Center for Digital Health at Penn Medicine’s Center for Health Care Innovation (who was not involved in the research), answered questions about the study and its application in real-world practice.

Dr. Centola, what sparked your interest in information-sharing networks among physicians?

Diagnostic errors aren’t uncommon: Rates are estimated at 10-15%. The idea of the wisdom of the crowd has been around for over a century as a mathematical curiosity, but there was no way to translate the intelligence of the collective into direct improvements in individual reasoning. 

I wanted to see if information-sharing networks could harness the “wisdom of the clinical crowd” in real time, enabling individual clinicians to deliver better care to their patients. I hypothesized that if networks among clinicians were structured to be egalitarian, so that each member had the same number of connections, such an infrastructure could predictably improve providers’ diagnostic reasoning.

Nearly 3,000 clinicians from across the country used our proprietary networking app for studying medical decision-making to view clinical vignettes and enter their treatment recommendations for the patients in them. Some clinicians were randomized into information-sharing networks and others into isolated control groups. Over the course of several rounds, networked clinicians saw responses from their peers, while control clinicians reflected on their decisions in isolation. Clinicians could revise their recommendations after each round.

How do doctors experience the information-sharing app and network?

The app is designed to be intuitive, engaging, and even fun for physicians. To help with implementation, the experience resembles continuing medical education (CME) activities that they’re familiar with. Each exercise takes 5-8 minutes and relies on the clinicians’ motivation to choose the right diagnosis and provide good care. 

The network is an egalitarian environment for collective learning. Medicine is usually hierarchical: Physicians with seniority have more influence. But in the information-sharing networks we used, everyone had four connections to equalize the power dynamics. An egalitarian structure is special because it retains the wisdom and experience of senior participants and adds the knowledge from younger members with more recent training, who might have experience with newer technologies, for example.
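The equal-connection structure Centola describes can be illustrated with a short sketch. This is hypothetical code, not the study’s app: it builds a ring lattice in which every clinician is linked to exactly four peers, so no member’s responses carry more structural influence than another’s.

```python
def egalitarian_network(n, degree=4):
    """Return adjacency sets for a ring lattice in which each of the
    n nodes is linked to degree//2 neighbors on each side."""
    half = degree // 2
    neighbors = {i: set() for i in range(n)}
    for i in range(n):
        for step in range(1, half + 1):
            j = (i + step) % n  # wrap around the ring
            neighbors[i].add(j)
            neighbors[j].add(i)
    return neighbors

net = egalitarian_network(40)

# Egalitarian structure: every node has exactly four connections.
assert all(len(peers) == 4 for peers in net.values())
```

The degree-regular layout is what equalizes the power dynamics: because each participant sees the same number of peer responses, seniority confers no extra reach in the network.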

How did the app improve doctors’ decision-making?

Structured information-sharing networks significantly reduced clinical errors. Clinicians with the best initial decisions held fast, so improvements in diagnostic accuracy weren’t simply a “regression to the mean,” with outlying decisions moving toward the average. Quality improved because clinicians with the best decisions didn’t change, but clinicians with originally poor decisions got better. The networks created a ratcheting-up of performance.

The independent controls also improved, but less dramatically. Overall, average starting accuracies were 76-77%. After independent reflection, average accuracy was 79% (2.3 percentage points better) and after network participation, average accuracy was over 81% (5 percentage points better). Among the worst-performing clinicians, networks produced a 15% increase over controls in the fraction switching from an initially wrong recommendation to the correct treatment.

We’re now working with the Penn Medical Communication Research Institute to pilot the app and our structured collaborative network in clinical practice. Our current focus is testing it within existing e-consult infrastructures to make participation seamless for clinicians.

Dr. Agarwal, how might this innovation be applied in clinical practice?

In emergency medicine, we make many real-time decisions, often consulting and conferring informally with colleagues. More formal, in-depth case reviews, such as the long-standing morbidity and mortality conferences, happen much later, after the fact. We don’t have many formal processes for real-time peer decision support. Providing it virtually is really smart. I could see this evolving into a system that connects clinicians at scale.

One concern is privacy, because the network shares patient information. We can make digital processes HIPAA-compliant and private, though. I’d want to know that network participants are validated, trustworthy, and reliable. I wonder about balancing the supply and demand for support, and on the supply side, about compensating network members or giving them protected time. 

I imagine there are benefits for participants, too. I study workforce wellness and burnout and I know that tackling a challenging case alone can be isolating and anxiety-provoking. Having support from an expert network could be rewarding, both by getting input from trusted peers and by helping out colleagues.


The study, “Experimental Evidence for Structured Information-Sharing Networks Reducing Medical Errors,” was published in the Proceedings of the National Academy of Sciences (PNAS) on July 24, 2023. Authors include Damon Centola, Joshua Becker, Jingwen Zhang, Jaya Aysola, Douglas Guilbeault, and Elaine Khoong.

