Health Policy$ense

What do Edward Snowden and Daniel Ellsberg tell us about the appropriate use of personal health information?

Since Edward Snowden revealed that US agencies have been monitoring the social media activity, telephone records, and other seemingly private communications of US citizens, public reaction to his actions has been mixed. Roughly as many call him a traitor as call him a hero, and some simultaneously criticize his methods and praise him for what he revealed.

The situation recalls Daniel Ellsberg’s 1971 leak to The New York Times of the Pentagon Papers, classified documents reflecting poorly on US strategy during the Vietnam War. Ellsberg was later charged with violations of the Espionage Act, but the case was dismissed in part because of illegal evidence gathering against Ellsberg by some of the same individuals who, less than a year later, broke into the Democratic National Committee headquarters at the Watergate complex. And so, although Ellsberg and Snowden are typically linked as government whistleblowers, they are also linked because both raised questions about the collection and protection of personal information. Both cases are complex and generate divided opinions, but each reveals that Americans care deeply about the privacy of their information.

With that backdrop, one might think that Americans would feel similarly protective of their health information: information that is increasingly electronic and searchable in the way social media and cell phone data are, and often more sensitive. But the issues surrounding health information are also complex. What makes electronically accessible health information so easily searchable also makes it so useful for understanding associations and mechanisms of health and for developing ways to improve health care, goals of potentially immense social value.

While comparisons between national security and personal health should be taken only so far, both contexts highlight privacy’s subtler distinctions. Indeed, personal health information can be characterized along many dimensions: what its content is, how it was collected, who is using it, and for what purpose. If there is a single lesson common to the Snowden case and our approach to personal health information, it may be that these distinctions matter.

We recently conducted a population-based survey that presented 3,000 Americans with hypothetical scenarios involving electronic health information. The scenarios varied the user of the information (a university hospital, a public health department, a pharmaceutical company), the purpose of the information (research, quality improvement, marketing), and the content of the information (with or without genetic test results). The purpose of the information made the most difference to public acceptability, and the purpose these Americans privileged was research. The user was also important (participants preferred university hospitals), but the content of the information was not important at all: despite widespread beliefs that genetic information needs special protections, participants seemed not to care whether a scenario included genetic test results.
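
To make the design concrete, here is a minimal sketch of how such a fully crossed factorial scenario set could be generated. The three factors and their levels come from the description above; the exact labels, and the assumption that all combinations were crossed into 18 variants, are illustrative rather than a reconstruction of the actual survey instrument.

```python
from itertools import product

# Hypothetical factor levels drawn from the study description; the wording
# shown to actual survey participants may have differed.
users = ["university hospital", "public health department", "pharmaceutical company"]
purposes = ["research", "quality improvement", "marketing"]
contents = ["with genetic test results", "without genetic test results"]

# Fully crossing the three factors yields every scenario variant.
scenarios = [
    {"user": u, "purpose": p, "content": c}
    for u, p, c in product(users, purposes, contents)
]

print(len(scenarios))  # 18 variants (3 users x 3 purposes x 2 content levels)
```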

These findings contrast with the predominant approaches to health information privacy, which focus most on whether personal health information is or is not identifiable. Such definitions implicitly link privacy to information content rather than information use. Indeed, the federal HIPAA Privacy Rule details 18 identifiers (everything from names to dates of service) that must be removed from health records to ensure that individual identities cannot be reconnected to the data their care produced. While de-identifying data serves one definition of privacy, it greatly limits what the data can be used for, and it is the purpose of these data that Americans consider most important. Specific elements removed from de-identified files, like dates of service, are sometimes central to questions about health care delivery, and de-identification can prevent valuable linkages across files from different sources, like linking insurance claims with clinical outcomes recorded in medical records.
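
As a rough illustration of why de-identification trades privacy against usefulness, consider the following sketch. The record fields and the abbreviated identifier list are hypothetical stand-ins; the regulation enumerates 18 identifiers, not the handful shown here.

```python
# Hypothetical subset of identifier fields to strip from each record.
IDENTIFIER_FIELDS = {
    "name", "address", "phone", "email",
    "medical_record_number", "date_of_service",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with identifier fields removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

record = {
    "name": "Jane Doe",
    "medical_record_number": "MRN-0042",
    "date_of_service": "2013-06-05",
    "diagnosis": "type 2 diabetes",
    "hba1c": 7.9,
}

print(deidentify(record))  # {'diagnosis': 'type 2 diabetes', 'hba1c': 7.9}

# The privacy gain has an analytic cost: stripping date_of_service forecloses
# questions about the timing of care, and without a stable identifier this
# record can no longer be linked to the same patient's insurance claims.
```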

At one extreme is the view that privacy is absolute and is at risk so long as information can be linked to an individual. At the other extreme is the view that all that matters is whether the information is used well. The continuum between these poles reflects perennial tensions between means and ends, the same concepts that underlie the contemporary discussion about Snowden and past discussions about Ellsberg. Still, it seems likely that many Americans would prefer slightly more open access to such data rather than less, particularly if more vivid cases could be made for their social value.

So how should we approach health information privacy in the future? Of course, individual permission for data use gives patients control over their health information and would sidestep many concerns. But in most cases involving the large data sets that offer the greatest social promise, asking permission isn’t feasible, and the bias introduced by differential response would substantially undermine the value of the data in the first place. In these cases we need a different standard. Rather than focusing so heavily on the identifiability and content of information, as the application of our policies often does, we believe our policies should focus instead on strong evidence of social purpose and information security.

Some of this flexibility is already built into current data use agreements: identifiable personal health information can often be used without individual permission, given purposes deemed socially important and protections that keep those data securely focused on those purposes. But many Americans might prefer a state in which sharing health data, with appropriate protections, is simply part of the social contract. Indeed, a strong argument can be made that patients have a social obligation to participate in clinical research, an obligation that isn’t absolute but is stronger when individual risks and burdens are very low, as is typically the case with research based on previously collected personal health data. This reasoning ought to shift the current standard from an affirmative declaration by patients to an assumption of participation; a 2011 Advance Notice of Proposed Rulemaking embraced this approach. We accept the reporting of communicable diseases, typically of very sensitive content, in part because communicable diseases carry negative externalities capable of harming non-consenting others. Why not facilitate public use of health data to take advantage of the positive externalities they can provide? Recognizing such asymmetries is often how burdens of proof shift.

There was no advance oversight when Edward Snowden leaked information about NSA monitoring of electronic communications or when Daniel Ellsberg leaked the Pentagon Papers. Both released information surreptitiously and possibly illegally, and even so, in each case the public was divided about whether those acts were right. In managing personal health information, we have the great advantage of systems of oversight that can help adjudicate in advance whether social purpose justifies the selective re-use of information that would otherwise be considered private. Our current system for research uses of personal health information focuses largely on whether information is identifiable and whether it is sensitive enough that its release might cause harm. Given the priority patients place on the uses of information, we believe we ought to redirect our emphasis from the identifiability of personal health information toward the social purpose for which it is re-used.