Fear, loathing and profits – the battle over data access and data privacy in healthcare
The proponents of data access declare it to be a fundamental right for consumers. Those who stand to profit – or lose – from open access to patient data have different points of view.
Nothing has roiled the healthcare technology markets since the beginning of the year as much as data access and data privacy. One of the most significant issues debated today is whether patients should be given free and open access to the health records that currently sit in EHR systems. A proposed rule by the Department of Health and Human Services (HHS), awaiting finalization, would hand data access over to patients, who, in turn, could share it freely with anyone they choose. Further, the rule warns of penalties for information blocking and other restrictive practices.
The proponents of granting access declare it to be a fundamental right for consumers. It enables, among other things, the portability of medical information, the lack of which has long been a barrier to improved care, especially in life-threatening emergencies. Look deeper, and there are significant stakes in the battle for access to patient data. For one, the monetization of healthcare data is part of every health system and digital health startup’s business model today, as I discussed in an earlier blog post titled The New Innovation Model – Monetizing Healthcare Data.
Big tech firms have been mainly supportive of the proposed HHS rule, having struggled for years with interoperability problems and information blocking, which have added unnecessary costs and acted as barriers to the acceleration of digital health innovation. Access to the data enables technology firms such as Google, Amazon, and Microsoft, as well as digital health startups, to mine the data for insights using advanced AI and machine learning algorithms. In turn, the insights are expected to drive improvements in care quality and better healthcare outcomes.
Can consumers be trusted with their data?
Opponents of granting patients access to their data as a fundamental right contend that patients may harm themselves, since they will have no control over where the data ends up. EHR provider Epic and its supporters among hospitals and health systems have argued on this basis for the HHS to delay finalizing its proposed data-sharing rules. They argue that consumers, used to routinely signing away data rights when downloading smartphone apps, may find their data ending up in the wrong places. The data may also be used in undesirable ways and may be vulnerable to breaches.
Part of Epic’s reluctance to share data is purely commercial: as long-time stewards of patient data, they do not want to see tech firms and startups profiting off the data through free and open access. However, there is also a valid concern about the data falling into the wrong hands, especially without proper vetting of the entities gaining access to the records. Consider the case of Clearview, a little-known startup that has stealthily accumulated billions of images of individuals by scraping them off the internet. The company, a tiny operation that has built sophisticated facial recognition algorithms that law-enforcement agencies across the country have been piloting to solve cold cases, recently had its entire client list stolen in a data breach.
How safe is patient medical information with tech firms?
When it comes to healthcare data, it’s a wild west out there right now. After nearly two decades of harvesting consumers’ digital exhaust, big technology companies (often referred to as surveillance capitalists) have earned themselves billions in profits – and a growing trust deficit among consumers. Healthcare data privacy is held to a higher standard through HIPAA privacy rules; however, in the absence of standardized rules around the use of consumer data in the app economy, “digital health” companies are monetizing the data in ways consumers may not fully appreciate. Indeed, for some digital health startups, it is the only way to turn a profit. Google’s $2.1 billion acquisition of Fitbit (still awaiting regulatory approval) is in large part about the data; in the words of Fitbit CEO James Park, “ultimately Fitbit is going to be about the data.”
The scrutiny around data-sharing agreements signed by several leading health systems with Google points to the growing unease with health systems sharing patient medical records, even though such practices have existed for a long time in technology partnerships. A report by Stat News, which studied a 2016 data-sharing agreement between Google and the University of California at San Francisco (UCSF) obtained under a public information request, found adequate privacy protections to prevent unauthorized access, including restrictions on access to clinical notes and images.
Indeed, in a manner evoking imagery from classic Greek literature, Google asked to be “lashed to a mast and blindfolded” lest it be tempted by the sight of protected health information (PHI), much as Ulysses wished to be protected from the temptation of the sirens in the Odyssey. Despite the contractual safeguards, the notion of complete anonymization is in doubt: Google’s own research has shown that the de-identification of data, especially unstructured data, cannot be assumed to be 100% effective. The nuances of encryption and anonymization can be all but lost when making the case to a skeptical public about the privacy protections on their health records.
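The limits of anonymization can be made concrete with the classic “linkage attack,” in which de-identified records are matched back to named individuals via quasi-identifiers (ZIP code, birth date, sex) that survive de-identification. The following minimal Python sketch is purely illustrative – every name, ZIP code, and diagnosis is fabricated:

```python
# Illustrative linkage attack: "anonymized" medical records are re-identified
# by joining them with a public dataset on shared quasi-identifiers.
# All records below are fabricated for illustration.

# De-identified clinical dataset: names removed, quasi-identifiers retained.
deidentified_records = [
    {"zip": "02138", "birthdate": "1954-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "60601", "birthdate": "1980-01-15", "sex": "M", "diagnosis": "asthma"},
]

# A public dataset (e.g., a voter roll) with names and the same attributes.
public_records = [
    {"name": "Jane Doe", "zip": "02138", "birthdate": "1954-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "60601", "birthdate": "1980-01-15", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birthdate", "sex")

def reidentify(deidentified, public):
    """Join the two datasets on quasi-identifiers to recover identities."""
    index = {tuple(p[k] for k in QUASI_IDENTIFIERS): p["name"] for p in public}
    return [
        {"name": index[key], "diagnosis": r["diagnosis"]}
        for r in deidentified
        if (key := tuple(r[k] for k in QUASI_IDENTIFIERS)) in index
    ]

matches = reidentify(deidentified_records, public_records)
print(matches)
```

In this toy example, every “anonymized” record is matched back to a name, which is why removing direct identifiers alone is not considered sufficient de-identification when quasi-identifiers remain in the released data.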
The challenges of regulating AI in healthcare
The battle over access to patient data is as much about patient rights as it is about the untold riches that can accrue from analyzing the data for insights that can deliver competitive advantages to tech firms and health systems alike. UCSF’s data-sharing agreement provides the data to Google free of charge, implying that the raw data, by itself, has little value. The value lies ultimately in the insights generated through artificial intelligence (AI) and machine learning (ML) algorithms that can detect patterns, predict healthcare events, and enable targeted interventions for health systems looking to manage the health of their populations.
Tech firms with large consumer-oriented businesses can, for their part, enhance their understanding of consumers by cross-referencing their medical histories with other demographic data. The ability to cross-reference consumers further strengthens the algorithms for targeted advertising (though health systems and tech firms alike have taken pains to point out that such use is explicitly prohibited in data-sharing agreements).
Health systems that have taken the lead on using AI to improve healthcare outcomes are seeing significant benefits in clinical as well as administrative functions. AI-led approaches are prevalent in patient engagement, population health risk management, and revenue cycle operations. However, for more advanced applications, there remain questions around AI algorithms: how they work, what insights they reveal, and how those insights can support healthcare decisions.
AI algorithms are considered proprietary IP and a source of competitive advantage by most firms; hence, no two algorithms work the same way. There have been ongoing concerns about algorithmic bias arising from underlying datasets that are not representative of broader populations, prompting calls for more transparency in the use of AI tools.
The Food and Drug Administration (FDA) has been grappling with the issue of AI, especially in the context of medical devices, where the risk of patient harm can be high. The ethics of using AI – especially facial recognition algorithms, which have routinely delivered incorrect matches – in employment or even healthcare decisions has been yet another growing worry. Facebook’s recent $550 million settlement of a class-action lawsuit, brought under the Illinois biometric privacy law, is a sign of how sensitive the use of facial recognition technology can be.
The battle over data access rights and transparency in the use of AI algorithms is intensifying, with big tech firms squarely in the crosshairs. As always, the European Union has taken the lead in this regard and is set to announce strict rules requiring tech firms to “explain” their AI models, with provisions to make firms “reboot” the models and retrain them from scratch on new datasets before rolling them out.
Nowhere is this more pertinent than in healthcare, where lives are at stake. Healthcare apps that apply AI algorithms to patient data must become more transparent about their terms of service, with provisions for notifying consumers and regulators whenever there are material changes to the terms or to the AI algorithms themselves.
It is perhaps also time to implement model agreements for patient data use that hold all parties accessing the data to the same standards of disclosure and compliance. Finally, it may be time to compensate consumers for their healthcare data.
The market for selling personal data is nascent. However, some firms, such as IKEA, are exploring offering consumers greater control over their data as a competitive differentiator. Regulators, tech firms, and healthcare providers must embrace the concept as passionately as they embrace the idea of granting patients unfettered access to their data.
Originally published on CIO