Penn Health Care Innovators Test ‘Fake Front Ends’ and ‘Fake Back Ends’
Adapting Software Marketing Techniques to Assess Potential Clinical Breakthrough Concepts
Writing in an August New England Journal of Medicine Perspective, LDI Senior Fellows David Asch and Roy Rosin describe how “fake front ends,” “fake back ends,” “mini pilots,” and vaporware-like marketing techniques being explored at the University of Pennsylvania’s Penn Medicine Center for Health Care Innovation may prove useful to the broader health care innovation field. Adapted from the software industry, these concepts refer to rapid testing methods for assessing whether a clinical innovation can lower the cost or increase the quality of health care delivery.
Authors Asch and Rosin note “the mistaken view that ‘innovation’ is just about generating new ideas” and report that “the innovation field has shifted its focus from the generation of ideas to rapid methods of running experiments to test them.”
Asch, MD, MBA, is Executive Director of the Penn Medicine Center for Health Care Innovation and a Professor of Medicine and Medical Ethics at Penn’s Perelman School of Medicine and of Health Care Management at the Wharton School. Rosin, an MBA and former Vice President of Innovation at the software giant Intuit, is Chief Innovation Officer at Penn Medicine.
Non-existent products
A vapor test (a play on the software industry term “vaporware”) is the practice of advertising for sale a product or service that does not yet exist in order to get market feedback about potential consumer demand.
“The vapor test,” said Asch in a companion online audio interview with NEJM Managing Editor Stephen Morrissey, “is a move from the wishful thinking of ‘If you build it, they will come,’ to ‘If they come, you should build it.’ It reverses the system of thinking about product development.”
“The problem with vapor tests,” Asch continued in his interview, “is that they are a little bit deceptive in that you’re working with a product that doesn’t really exist, and that’s somewhat more challenging to think about in a health care context.”
Potential new ER service
As an example of a real, recent vapor test, Asch described how a medical student pointed out that some patients who come to the emergency department for one concern also ask whether they can have a contraceptive intrauterine device inserted while they are there.
“We had never put in those devices in the ED,” he said, “but if we decided to test the demand we might ask something fairly tentative, like, ‘I’m not sure we can do that right now for you but if I could arrange it, is it something you would like us to do?’ It puts that question of demand within context and allows us to get a much better sense of whether that’s a service that patients would genuinely want.”
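In the software world where the technique originated, a vapor test is often little more than an advertisement or button for a feature that has not been built, with expressed interest logged rather than fulfilled. The sketch below illustrates that pattern under purely hypothetical assumptions; none of these names or functions come from the Penn team’s actual tools.

```typescript
// Vapor-test sketch: advertise a feature that does not yet exist and
// record expressed demand instead of fulfilling it.
// All identifiers here are hypothetical illustrations.

interface InterestEvent {
  feature: string;   // the advertised-but-unbuilt feature
  userId: string;
  timestamp: Date;
}

const interestLog: InterestEvent[] = [];

// Called when a user clicks on the advertised (nonexistent) feature.
// Mirrors the ED question: "If I could arrange it, would you want it?"
function recordInterest(feature: string, userId: string): string {
  interestLog.push({ feature, userId, timestamp: new Date() });
  return "This service isn't available yet; we're gauging interest.";
}

// Demand signal: how many distinct users asked for the feature.
function measuredDemand(feature: string): number {
  const users = new Set(
    interestLog.filter(e => e.feature === feature).map(e => e.userId)
  );
  return users.size;
}
```

The point of the pattern, as Asch notes, is that the measurement comes before the build: demand is observed cheaply, and only services people actually ask for get constructed.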
Another testing method used by the Penn Medicine Center for Health Care Innovation is the “fake front end,” a non-intrusive, information-gathering technique that helps clarify whether “intended users would behave as expected when a new element was embedded in their workflow.”
CHOP ‘fake front end’ test
As an example of this in their Perspective piece, Asch and Rosin point to the Children’s Hospital of Philadelphia (CHOP). “CHOP recently used a fake front end to test whether they could safely reduce admission among patients with sickle cell disease presenting at their ED with fever but low risk for bacterial infection. As part of their routine workflow, physicians were asked to identify which children could safely be sent home. What was fake was that to prove the safety of the approach, all patients were still admitted. The data gathered resolved the debate over feasibility, and now 27% of these children are no longer admitted.”
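In software terms, a fake front end like CHOP’s captures the user’s decision as data while the system’s actual behavior stays unchanged. A rough sketch of that pattern follows, again with hypothetical names and types rather than anything from CHOP’s real system.

```typescript
// Fake-front-end sketch: record the clinician's proposed disposition,
// but leave the real workflow (admit every patient) untouched.
// Names and types are hypothetical illustrations of the pattern.

type Disposition = "admit" | "send-home";

interface DecisionRecord {
  patientId: string;
  proposed: Disposition;   // what the physician said was safe
  actual: Disposition;     // what the system actually did
  recordedAt: Date;
}

const decisions: DecisionRecord[] = [];

// Embedded in the routine workflow: ask which patients could go home.
function recordProposedDisposition(
  patientId: string,
  proposed: Disposition
): void {
  // The "fake" part: regardless of the proposal, every patient is
  // admitted, so safety can be verified before the pathway goes live.
  decisions.push({
    patientId,
    proposed,
    actual: "admit",
    recordedAt: new Date(),
  });
}

// After the test: what share of patients could have safely gone home?
function shareSafelyDischargeable(): number {
  if (decisions.length === 0) return 0;
  const sendHome = decisions.filter(d => d.proposed === "send-home").length;
  return sendHome / decisions.length;
}
```

Because the new element is embedded in the existing workflow without changing outcomes, the test answers the behavioral question (will clinicians use it, and how?) before any patient’s care depends on it.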
NEJM’s Morrissey asked Asch about the biggest obstacles keeping more hospitals from experimenting with these sorts of innovation techniques.
“Probably the biggest,” said Asch, “are the internal sense that ‘we already know what’s right’ and not recognizing the need to test, along with an unwillingness or blindness to the value that such experimentation can provide. It’s such a funny thing to have that be true in health care because academic health systems are populated by faculty physicians and nurses who are very used to experimenting; they write out specific aims and have testable hypotheses. Yet, often when they go into the clinical context it’s as if they completely forget that sometimes those hypotheses turn out to be unsupported.”
Dissemination issues
Morrissey also asked how the new knowledge gained from these small, short-term experiments is being disseminated, given that the limited scale of the tests and their results data doesn’t rise to the level required for academic journal publication.
Asch acknowledged that such test results are currently “at risk of being new knowledge that doesn’t get disseminated… This is a fundamental problem with implementation science more generally in that we need to find new outlets so that knowledge like this is generalizable or can be generalized so that we can all learn from each other. We have established mechanisms for that with large clinical trials, but we don’t have good mechanisms for that with these smaller pilots that can be very, very informative.”