The United Nations has now adopted the Global Digital Compact about which I wrote two weeks ago. In the end, there was less argument about it at the UN itself than might have been anticipated. An attempt to head it off, led by Russia, made little headway. 

The Compact now stands as a core UN statement on the digital society, supplementing the agreements made at WSIS 20 years ago, and integrated with the broad Pact for the Future which seeks to revitalise the scope for international cooperation at a time of growing global challenge.

Whatever doubts some may have had about these documents, they are now foundations to which future international agreements will refer.

Meanwhile, about AI

But another document emerged out of the UN system as the Pact and Compact were agreed: the final report of its High Level Advisory Body on Artificial Intelligence. It’s that on which I’m going to concentrate today.

It’s linked, of course, to both the Pact and Compact. The proposals for new institutions to address AI emerge from its discussions. They’re not liked by some, who argue that the UN should keep out of things they think it doesn’t know enough about. A little on that later, but first, what does it have to say?

The starting point

The starting point is clear enough. AI’s perceived by most (not all) observers as the next big thing in tech, and the next big thing in tech’s capacity to transform society. There’s no consensus yet, and probably won’t ever be, on what that means. 

“In our consultations around the world,” say the report’s team of authors (a multinational group), “we engaged with those who see a future of boundless goods provided by ever-cheaper, ever-more-helpful AI systems. We also spoke with those wary of darker futures, of division and unemployment, and even extinction.”

Something so big – potentially, indeed, existential – requires more than a shrug of the shoulders and the hope things will turn out for the best. Not least because, the report continues, “no-one currently understands all of AI’s inner workings enough to fully control its outputs or predict its evolution.”

The goal of the report

The report’s entitled Governing AI for Humanity, and spells out what that means.

“There is, today, a global governance deficit with respect to AI,” it says. “Despite much discussion of ethics and principles, the patchwork of norms and institutions is still nascent and full of gaps.” AI’s decision-makers aren’t accountable for their decisions or the consequences those decisions will entail. 

“The development, deployment and use of such a technology cannot be left to the whims of markets alone,” it argues. Nor, for that matter, to decision-makers in a small number of businesses and countries. Not when they will impact – indeed, already impact – everyone and everywhere.

And so?

So what is to be done? The advisory body has come up with both broad principles and a number of specific actions.

The principles emerged in its earlier report and reflect those in many statements made by other actors: AI should be governed “for the benefit of all”, “in the public interest”, “rooted in … multistakeholder collaboration”, in accordance with the UN Charter, human rights and sustainable development. 

Ethics, principles and etcetera

There’s obviously wide scope here for interpretation. Hundreds, it seems, of initiatives have explored AI ethics and principles, and how they might be governed, reflecting different points of view. 

AI business interests, predictably, have been concerned to minimise the scope of regulation. There’s a commercial arms race underway to rush systems to market. First movers have advantages, and that means less attention’s paid to systems’ weaknesses and to potential hazards.

More academic interventions, and those from governments and governance institutions, are more concerned with impacts on society, both positive and negative. 

Governments in countries with advanced digital business sectors typically want to do four things: to benefit their own digital businesses; to enable productivity improvements in their economies; to bank the savings to be made from lowering the costs of public services; and to reassure the public that things won’t go wrong.

Civil society approaches have focused more on individual rights (expression, privacy, surveillance) than social impacts (education, health, social dynamics).

But what of practice?

Thinking about the bigger picture like this is important but takes time. The pace of AI development – some of it hype, some of it real – is faster than the pace of working groups, advisory bodies, legislative processes and other forms of governance.

The nature of what’s happening and what may happen’s far from clear, but it’s also much more complicated than many media reports imply. 

Long, medium and short-term issues

Some emphasise the existential risks which are perceived by pessimistic AI pioneers (such as Geoffrey Hinton) and scoffed at by others (such as Yann LeCun). But impacts in the medium and shorter terms are already being felt, for good or ill. The way that these are dealt with now will affect the way they change society in future.

The data sets that train today’s AI have inbuilt biases, derived from the way past data sets have been collected and the imbalance in data volumes between developed and developing countries, and amongst communities within all countries. Those biases affect the way AI’s going to be used in health and education, in assessing job applicants, or, alarmingly, in predicting crime.

The scale of data management required by AI systems is energy-intensive, with substantial impacts on climate change. The implications of AI-enabled targeting in war are with us now and of deep concern to potential military and civilian targets. These might be called medium-term governance issues.

As for the short term, AI’s begun to challenge further the integrity of an information ecosystem that was already challenged. London’s Standard newspaper is now publishing “reviews” purportedly written by its deceased art critic Brian Sewell. A trivial example, maybe, but how much do you now trust photographs and videos you find on Twitter/X? And how concerned are you about politicians faking memes of influential figures such as Taylor Swift? Or, much more serious, videos of faked atrocities?

Governance and regulation

All of which raises issues of governance and regulation. 

The new UN report makes two key points here: 

  • that the development of such powerful technologies should not be left to the market alone, or to the interests of a small group of dominant businesses, technologists and their sponsors in the world’s most powerful governments;
  • that the global nature of the changes wrought by new technologies means that their governance has to be international/universal – or at the least be based on global norms.

Both of these arguments have growing resonance, within and outside ‘the digital community’, but both are still resisted by substantial groups in business and elsewhere.

I’ll say something briefly on each point.

Regulating markets

Different political traditions have different approaches to the role of regulation. 

Digital libertarians and market fundamentalists have many differences, but both have sought to minimise the role of government: the former because they fear it stifles individualism, the latter because they fear it stifles innovation and commercial value.

Liberal and social democratic politics, especially in Europe, are much more rooted in the evolution of societies as ecosystems, seeking to balance social and economic interests, and to promote individual welfare through collective action for which the role of government is crucial.

Regulation is the means by which governments seek to achieve that balance. Democratic and authoritarian governments obviously differ in approaches here, as do different governments of either style, depending on their national contexts, economic circumstances and political priorities. 

Outcomes depend on the nature of the balance that is being sought, the capacity of governments to achieve their goals, and the commitment of wider society to securing governance in the interest of the whole community (‘good governance’, perhaps) rather than personal interest.

From the perspective of the UN’s advisory board, AI’s development should reflect this common good. The market alone won’t do this; AI’s governance should also be democratic and inclusive – led by governments but, in today’s digital environment, also multistakeholder.

Universality

There’s a tension too between universal norms and national identity, which is affected by the nature of AI and other digital technologies. 

The UN’s role is to provide a space within which international norms can be debated and international tensions mitigated. 

Hence fora such as the Brundtland Commission and Earth Summit in the 1980s/90s and the World Summit on the Information Society in the first years of this century, which identified the frameworks for international cooperation in sustainable and digital development respectively. Hence international discussion fora now in many different areas of policy, from arms control to climate change, from human rights to cybercrime.

Multistakeholder participation adds value to these discourses, but doesn’t obviate the critical importance of intergovernmental agreements or of universal norms and targets – from the Universal Declaration of Human Rights (and subsequent Covenants) to the fragile targets set for sustainable development and climate change. It is governments that have to implement them, and governments’ consent’s required to make them stick.

The UN provides a space in which globally critical issues can be addressed. Its engagement with AI (and digitalisation in general) results from their overarching significance across global society, economy and culture. It also provides a forum that enables developing countries’ voices to be heard alongside those of countries that are economically and digitally dominant.

Digital universality

The digital world has also been committed to universal norms and principles, applauding the UN’s commitment to equivalence of human rights online and off, insisting on internet universality and defending the internet against potential fragmentation.

The borderless nature of digital technology creates a tension, however, between digital development and the international governance arrangements that emerged from conflicts in the nineteenth and twentieth centuries, not least in the way that it affects and potentially accentuates the power imbalances between different nation states and their societies. 

It also creates new tensions between international/universal norms and national circumstances. Different countries have different historical contexts, different economies, different internal power structures, different common perceptions (think, for instance, of US and European attitudes to gun control), different legal frameworks, different political and social structures, all of which affect the ways in which universality translates into national practice.

The UN role

The UN’s AI report and Digital Compact propose new institutions within the UN system to address these challenges; those proposals have proved controversial. In practice, these institutions will form only part of ongoing debates around the future of AI. 

Powerful businesses and governments will continue to make their own interventions, as they do in other areas of international policy where the UN system has adopted frameworks (such as climate change). Other international organisations, from the OECD to the African Union, will continue to develop policies and frameworks relevant to their member-countries. And AI technologies will continue to develop and have impacts that we can’t readily predict and won’t readily understand.

The question for all stakeholders, including civil society, shouldn’t be whether the institutional frameworks for handling this development are perfect in their eyes, for they won’t ever be. It should be how to use them to achieve outcomes that are, in the report’s words, “for the benefit of all” and “governed in the public interest.”

 

Image credit: UN Photo/Eskinder Debebe. Secretary-General António Guterres (second from front at table) and Deputy Secretary-General Amina Mohammed attend a virtual meeting of the High-level Advisory Body on Artificial Intelligence. 

David Souter writes a fortnightly column for APC, looking at different aspects of the information society, development and rights. David’s pieces take a fresh look at many of the issues that concern APC and its members, with the aim of provoking discussion and debate. Issues covered include internet governance and sustainable development, human rights and the environment, policy, practice and the use of ICTs by individuals and communities.