The book “Human Rights in the Age of Platforms”, published by the MIT Press in November 2019, examines the human rights implications of today's platform society: “Today such companies as Apple, Facebook, Google, Microsoft, and Twitter play an increasingly important role in how users form and express opinions, encounter information, debate, disagree, mobilize, and maintain their privacy. What are the human rights implications of an online domain managed by privately owned platforms?”
The volume opens with a foreword by David Kaye, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, and brings together scholars from law, internet studies and media studies. The contributors consider the “datafication” of society, including the economic model of data extraction and the conceptualisation of privacy; examine online advertising, content moderation, corporate storytelling around human rights, and other platform practices; and discuss the relationship between human rights law and private actors, addressing such issues as private companies' human rights responsibilities and content regulation.
APCNews interviewed Rikke Frank Jørgensen, senior researcher at the Danish Institute for Human Rights, editor of “Human Rights in the Global Information Society” (MIT Press) and author of “Framing the Net: The Internet and Human Rights”. As the editor of this volume, Jørgensen provided more insight into the analysis captured in the book, stating: “Laws and policies that draw on automated decision making should undergo human rights impact assessment to ensure that individual rights are protected.”
APCNews: What is the main contribution this book makes to the current studies and thinking around human rights in the age of platforms?
Rikke Frank Jørgensen: The book addresses current challenges of datafication, platforms and surveillance capitalism within a framework of human rights. It uses human rights as a lens to analyse and understand these challenges and to propose possible solutions. While many scholars write about issues such as platform power, far fewer combine such studies with human rights law and practice. In short: what do these developments mean for our human rights, and how do we ensure human rights protection on these private platforms going forward?
APCNews: You describe the defining characteristic of social web platforms as not creating objects of consumption but rather creating the world within which such objects can exist; in short, the platforms give us our horizons, our sense of the possible. Can users really lead change if, in principle, the options for change seem to be trapped within those same boundaries? Where is the “out-of-the-box”, or in this case “out-of-the-platform”, thinking happening here?
RFJ: It is a challenge that our daily life is so embedded in the dominant platforms and their models of data extraction, yet there are also alternative spaces and discourses, and my sense is that the call for alternatives is growing. Movements and scholarship around data ethics, data justice and data ownership, for example, have gained momentum over the past years, so I do see critical “out-of-the-box” thinking. Danish DataEthics is one of many examples of such voices calling for alternatives.
APCNews: Your introduction mentions that the human rights implications of the social web are still under-researched and that the majority of works available are oriented toward the right to freedom of expression and privacy. What in your opinion are other urgent areas of research, given the fast-changing technology developments?
RFJ: It’s true that much scholarship has dealt with civil and political rights such as freedom of expression and privacy, with less focus on economic, social and cultural rights. Some of the (many) urgent areas are: protection from discrimination in relation to artificial intelligence (AI) and automated decision making; protection of workers' rights in the digital economy; human rights impact assessment as part of technology development; governments' use of data, for example in relation to health and social services; and ensuring that data is used to empower rather than disempower the individual. I often mention APC as an example of an organisation that has been good at pointing to the full range of human rights affected by technology.
APCNews: You mention that some organisations, such as APC, have broadened the discourse on human rights in the information society to include social, economic and cultural rights, going beyond freedom of expression and privacy online. Where do you think our energy should be directed, today? What are the crucial advocacy spaces where we should intervene to broaden discourse on human rights in the information society?
RFJ: I think one crucial space is “GovTech”: the increasing use of technology and data by the public sector, and the potential negative impacts this may have for the individual, as Philip Alston pointed out in his recent report on the digital welfare state. We all interact with the public sector in relation to basic rights, so it's crucial that the public sector's use of technology protects rather than undermines those rights.
APCNews: The book addresses the “increasing concerns about the shift in decision-making power from humans to algorithms,” involving “a code that is largely self-executing and implies minimal scope for interpretation.” How should laws and policies respond to this in a human rights framework?
RFJ: Laws and policies that draw on automated decision making should undergo human rights impact assessment to ensure that individual rights are protected. At the Danish Institute for Human Rights, we are currently examining several cases of profiling and automated decision making (or decision support) in the Danish public sector from the perspective of human rights law. The resulting report will provide recommendations to the authorities on strengthening the protection of human rights when public authorities use such new tools.
APCNews: Do you believe it is still politically relevant, or even valid, to speak of an “information society”, the term formally used during the World Summit on the Information Society (WSIS) back in 2003 and 2005?
RFJ: I think it’s still very relevant to talk about the role of technology in society, and what kind of society we envision for the future. The notion of the “information society” is dated, but the discourse itself is as relevant as ever. The challenges that the book raises are challenges related to power and the way technology and data are used; yet there is nothing in the technology itself that dictates the current form of the surveillance economy. It's within our ability as a society to decide on another way.
APCNews: While the book recognises the “more optimistic accounts of the networked public sphere and its potential for public participation,” it mostly addresses the challenges that the social web poses to human rights. Are you personally hopeful in terms of the direction we are heading?
RFJ: The aim of the book was to point to the challenges; however, I do recognise the many ways in which technology benefits individuals and groups around the globe. And yes, I am still hopeful, not least because so many good people are engaged in moving this development in the right direction.
The book “Human Rights in the Age of Platforms” can be obtained here.
Read the 2019 Global Information Society Watch edition on Artificial intelligence: Human rights, social justice and development here.