Last week I wrote about the ways that privacy’s been changed by digitalisation. The biggest of these, I suggested, is that we’ve moved from a time when personal information was mostly governed by consent to one in which data are gathered by default. This week, amid the corona crisis, some thoughts about privacy and health.
Privacy and health in the time of COVID-19
There’s a huge amount being written now about the need to gather personal data to manage the corona crisis, on the one hand, and, on the other, the need to manage data gathering in ways that don’t entrench intrusive levels of surveillance for the future.
Much of that debate’s to do with contact tracing; much with how far data can be anonymised; much with how far governments (and data corporations) are trusted to limit use to what’s essential and proportionate.
There are no simple answers here, and I will not attempt one. This blog’s about some of the issues behind the conundrum that COVID’s brought to the fore, and some parameters we need to heed in seeking answers.
Four starting points
First, health and privacy are both important. This shouldn’t be about prioritising one over the other, but finding a balance that achieves the best outcome for both (which may differ in different places as the crisis does). This is consistent with a rights framework that includes both (the right to health, the right to privacy) and emphasises tools of balance such as necessity and proportionality.
Second, it’s urgent. Pandemics don’t give time for lengthy cogitation. We have to rely on current expertise and experience to respond in time to make a difference. Decisions will be taken now because they have to be; and people will judge harshly those that turn out to be wrong. So: difficult, complex decisions are needed within time constraints.
Third, this issue’s not unique within the crisis. Governments worldwide are struggling with other questions of balance between public and private interest – how far to constrain freedoms of assembly and association in order to minimise infection rates; when and how quickly to open up economies; what levels of risk are acceptable to public health and private welfare.
Fourth, it’s not unique to this crisis either. The core dilemma’s one that’s central to the coming digital society, not just to health but to big data, to the Internet of Things, to AI, and to the gains that many hope digitalisation brings.
That big data conundrum
Here’s that dilemma.
On the one hand, enthusiasts for the digital society laud the benefits they hope will come from leveraging big data – the ability to analyse massive data sets, in combination, in order to understand what’s really happening in our societies, to target resources more efficiently (not least in health), to make transport and cities run more smoothly, to monitor environment and conflict, to achieve the Sustainable Development Goals.
And on the other, the data gathering and analysis required for this are inherently invasive. Big data works because corporations (and governments) can gather data by default on anything we do. Instead of disclosure by consent we have disclosure by default. The data that are gathered can be used (by commercial business) for commercial gain and (by some governments already, by others potentially) for surveillance and social control, as well as in the public interest.
A personal story
I’ve always thought this dilemma central to the emergence of a digital society, since WSIS days if not before. It came home forcefully, though, about ten years ago at a briefing by one of the early gurus of social media analytics (using Twitter and Facebook posts to analyse human behaviour).
Our guru on that day was keen to emphasise the benefits for humankind his algorithms promised, rather than those for client businesses. His pitch focused especially on epidemics. Already then, he said, his algorithms could trace the spread of flu from posts on social media more quickly than public health officials could from what their doctors and their clinics told them. Think of the benefits of that, he urged.
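In its crudest form, that kind of signal-mining amounts to counting symptom mentions over time. The sketch below is my own toy illustration in Python, not his algorithm: the keyword list, the invented posts and the `daily_signal` function are all assumptions for the sake of the example, and real pipelines layered on natural language processing, geolocation and baseline modelling.

```python
import re
from collections import Counter
from datetime import date

# Toy flu-signal detector. Everything here is invented for illustration.
FLU_TERMS = {"flu", "fever", "cough", "chills", "aching"}

# Hypothetical stand-ins for public social media posts.
posts = [
    (date(2010, 1, 4), "Terrible fever and chills, staying home today"),
    (date(2010, 1, 4), "Great gig last night, still buzzing"),
    (date(2010, 1, 5), "Half the office is down with the flu"),
    (date(2010, 1, 5), "This cough just will not quit"),
]

def daily_signal(posts):
    """Count, per day, how many posts mention any flu-related term."""
    counts = Counter()
    for day, text in posts:
        words = set(re.findall(r"[a-z]+", text.lower()))
        if words & FLU_TERMS:
            counts[day] += 1
    return counts

for day, n in sorted(daily_signal(posts).items()):
    print(day, n)  # a rising daily count is the crude 'epidemic signal'
```

Even a toy like this makes the dual use plain: swap the keyword list for political terms and the same loop becomes an instrument of surveillance.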
It was a common pitch at the time from those promoting prospects for big data. As it happened, though, I shared this briefing with a delegation from one of the real world’s more authoritarian states, and it was clear that the implications for social and political surveillance struck them far more forcefully (pleasing some, alarming others).
What didn’t happen
We’ve known of this dilemma for a long time.
It’s regrettable that it was not addressed in the early days of digital expansion. Internet protocols and applications were designed to support the sharing of information within trusted communities of fellow scientists, not as the tools of mass engagement they’ve become. Privacy wasn’t at their core; it wasn’t built in by default.
We’ve been struggling to catch up ever since. A good few countries still lack data privacy laws, while data corporations – global in scale and difficult to oversee at national level – have exploited data to the max. Even the toughest privacy protections, such as Europe’s GDPR, are proving difficult to enforce, and already look inadequate for the next wave of data gathering and analysis, powered by AI and the Internet of Things.
A crisis like the present shows the importance of thinking issues like this through beforehand rather than trying to fix them after the event. Decisions in a crisis must be taken urgently, and that is when guiding principles matter most. Principles that were adequate before data were gathered by default aren’t sufficient for making decisions now in which we can be confident.
So what to do?
We are, however, where we are, so what to do? First, obviously, address the current dilemma as best we can. But also, learn from it so that next time we’re better prepared. And in the meantime learn what we can to build a better balance between big data’s opportunity and big data’s risk more generally. I’ll make six suggestions for things that should be part of that process.
First, both health and privacy matter. This is about finding a balance that works for now, in crisis, minimising harm today without leaving a legacy of harm tomorrow.
Second, context matters. The International Covenant on Civil and Political Rights allows some rights to be restricted for ‘the protection of public health’, and to be derogated from ‘in time of public emergency’. But different governments respond in different ways. Democratic governments can be constrained by legal and regulatory frameworks; authoritarian governments won’t be. One size of regulation won’t fit every country.
Third, public attitudes and opinions are significant. In a crisis, people value survival first and economic welfare next, and they’re willing to forgo things they would otherwise regard as fundamental in order to secure them. Look, for example, at how readily the vast majority have complied with lockdowns. International rights agreements stress measures that are ‘essential’ and ‘proportionate’ in times of crisis. Those tests matter, but public views of what meets them will shift in a crisis.
Fourth, dialogue’s essential. I’ve been in several online discussions recently about contact tracing and data management in the time of COVID. They’ve included representatives of governments and data companies, and experts on privacy. Conspicuously absent have been health experts (those who know about the virus) and medical ethicists. We won’t reach sensible conclusions if any of these groups is missing. Privacy specialists need to listen to health experts and vice versa.
Fifth, we have experience. Medical ethicists have been dealing with the relationship between personal information, identity and health for decades. The kind of data that concern us now are not unique. Human tissue is very similar to data where identity’s concerned: unique, personal, revelatory about the individual. At one time, it was gathered, analysed and exploited (both medically and commercially) with no rights for the individual concerned. The long road from that to modern practice is one of many lessons data ethicists can learn from their medical peers.
And sixth, we need experience. This is not the last time we’ll be dealing with a crisis of this kind. We need to experiment to find what works – in terms of health, in terms of privacy – and how to manage it. Some things will happen that won’t work out well; others will turn out better than expected. There will be risks in finding out which is which, but there are bigger risks in not doing so, not least because data gathering and analysis will happen anyway. The lessons that we learn should resonate more widely than in health alone.
And generally?
As I suggested last week, I think we must accept that the crucial change, from disclosure by consent to disclosure by default, has taken place and will not be reversed. This requires rethinking: past assumptions aren’t enough to deal with it. The focus for the future has to be on managing data exploitation, regulating who does what, when, how and with what impact, and building agreement on appropriate arrangements.
It won’t work everywhere, and it will be resisted by those who stand to gain from leveraging data. Those concerned with privacy will constantly be chasing new technology and new types of data exploitation. But the effort’s needed.
Image: By Dayne Topkin on Unsplash.