
Last week I wrote about ethical frameworks for artificial intelligence. There’s a proliferation of these being devised just now. We need them, I argued, because AI’s becoming so important and because so much about it is new.

“AI4People”

I’d draw this week, I said, on one initiative among these, which is called AI4People, led by Luciano Floridi from Oxford University’s Digital Ethics Lab. This aims to lay foundations for what it calls a “Good AI Society”. (I’m wary of defining things as “Good” per se, but it’s a useful starting point.) I’ll make my observations as I go along.

Three premises

AI4People starts from these three premises:

  • that AI systems are already widespread and will become more so, and fast;

  • that their use offers opportunities for human agency, achievement and interaction; and

  • that the risks lie in underuse as well as overuse and misuse – we may use them too little because we fear them or too much because we don’t fear them enough, while some will use them in ways that are intended to exploit or to abuse.

Ethical frameworks are nothing new

Of course, it’s not so difficult to fit potential ‘good’ and ‘bad’, ‘benign’ and ‘malign’, ‘positive’ and ‘negative’ uses into a framework of ethics, or indeed a broader policy approach. We’ve done that with many things for many years: the Universal Declaration of Human Rights and the Sustainable Development Goals are both, in many ways, such frameworks. 

It’s not surprising, then, that so many organisations have been looking to define the ethics of AI. Or that there’s been criticism that their efforts aren't co-ordinated.  

These frameworks have different starting points depending on who makes them. Different societies and different actors have different understandings of what’s ‘good’ and ‘bad’.  Businesses, social groups, individuals and, yes, civil society organisations start from what most suits their interests or aspirations. The UDHR and SDGs are both negotiated compromises between such different perspectives.

One thing that’s critical is “who participates?” It would be easy for the ethics of AI to be defined by those with the most powerful interests and resources – big businesses in the US and China; big power blocs in international relations. But the frameworks that suit them may not be suited to the very different contexts of, say, Least Developed Countries. Inclusion is important here.

Some common principles

Floridi and his colleagues looked at six of the most prominent frameworks produced so far and came to an encouraging conclusion: the frameworks have more in common than they had feared. Building on that commonality, they suggest four core principles drawn from bio-ethics (medicine, gene-editing etc.) that could apply where AI's concerned, and then add a fifth.

First, beneficence – AI should be developed for the common good; should empower; should be sustainable.

Second, the converse, non-maleficence – it should ‘do no harm’, intentional or unintentional (which suggests that it should work within regulatory norms, as bio-innovation does).

Third, autonomy – decisions that matter should remain within human control. Whatever powers we delegate, as societies or individuals, should be delegated with consent, and we should retain the power to override them.

Fourth, justice – including righting of past wrongs (such as discrimination), ensuring new benefits are equitably shared, and preventing harm to valued social structures.

Explicability

The fifth principle that’s added in this framework is ‘explicability’.  That brings together norms with which we are familiar in governance – transparency and accountability.  We should be able to understand how decisions that affect us are being reached, and to confirm that they are rational and in line with what we want.

It addresses one particular fear that many people have about AI: that the algorithms it uses and the outcomes it derives from machine learning are so complex that they can’t be understood or even followed – not just by ordinary citizens but even by the experts who designed them.

A starting point but not an end point

I think these principles make a sound starting point for looking at digital innovation. You could use them as a frame within which to consider what an ethical approach might be in different contexts. Try it, for example, with data management, or the use of drones, or ‘deep fakes’ (videos that fake real people, perhaps for political ends or for sexual abuse), or the use of bots to write news stories.

As I wrote last week, I think we need ethical frameworks. However, three issues arise that suggest they aren't an end point.

The world’s not all that ethical

The first is rooted in our history. Democracy’s a rare anomaly in human experience. Human rights as we know them are only recently defined. There’ve been many ethical frameworks in the past, but most have been more honoured in the breach than by adherence. 

Which is the case today as well. People (individuals, social groupings, businesses and governments) seek advantage, and often find it more easily in holding power over others than in solidarity with them. Authoritarian governments look for ways around the norms that they sign up to. Businesses focus on profit maximisation, not on social good (this is not a criticism; it’s how they stay in business). We praise self-sacrifice in others because we admire it rather than expect it.

Ethical and policy frameworks are means we use to pursue the common good and to counter this experience. They’re at the root of rule of law, of human rights, of international agreements on sustainable development, on climate change and so forth.

They’re also at the root of regulation. As another Oxford luminary, economist Will Hutton, put it in last weekend’s Observer newspaper: ‘The role of government is not to subsume markets. Rather, it is to set vigorous regulatory frameworks that attack monopoly, promote competition and outlaw noxious practices.’

Time for a rethink

My second issue’s that any ethical or rules-based framework for the digital society requires rethinking on the part of companies and governments.  

The libertarian idea that technologists should freely innovate and let society sort out the consequences later sounded fine when they were tweaking at the margins of economy, society and culture; it sounds less grand when what they do can pose an existential threat – especially if, as with AI, the consequences may be outside their control.

AI can do wonders for “good”, but overuse, misuse or abuse could do wonders that we wouldn’t want to see. AI’s an obvious resource for those that want to discriminate as well as those that don’t. The power to surveil, in ways that reach far beyond what was previously possible, is clear. The risk of automated warfare’s nothing new – young readers could look out for Stanley Kubrick’s 1960s film Dr Strangelove – but it’s obviously potent.

We’re used to policy (and ethical) frameworks in other areas. In governance, not least, international human rights agreements and data privacy rules. On the environment, prescriptive rules on pollution, resource use and climate. In medicine, rules on the design and introduction of new drugs and new procedures. In all of these, as with AI, the key question is: do we really want to wait to sort out problems until they’re irreversible?

Four types of action

But if all we do is put together intellectual frameworks, we’ll get nowhere. Floridi and his colleagues suggest four areas of detailed action that are needed:

  • assessing existing institutions’ capacity for handling AI’s innovations – including legal frameworks and those for dialogue between government, businesses and citizens;

  • developing ways of explaining AI systems to those affected, auditing their consequences and facilitating justice – for example, enabling redress when things go wrong;

  • incentivising AI innovations that are ‘socially preferable… and environmentally friendly,’ with diverse cross-sectoral (and, I’d add, cross-cultural) debate about their implications for future society;

  • and supporting ways of activating ethics – not just through governmental regulation but also codes of conduct and corporate ethical reviews.

To summarise

If we want “AI for Good”, we have to work out what we think ‘good’ is, and find ways of making the many stakeholders pursue it rather than just their own advantage. This won’t be easy, and we may not succeed, but it’s worth trying. That means focusing on the real implications of AI, and on the real behaviour of real power players, not hoping for the best or letting those power players set the rules in ways that suit them.

Next week: Is the World Summit on the Information Society still relevant after fifteen years?

Image source: https://www.eismd.eu/ai4people-ethical-framework/

Read also: 2019 Global Information Society Watch on Artificial Intelligence, human rights, social justice and development 
David Souter writes a weekly column for APC, looking at different aspects of the information society, development and rights. David’s pieces take a fresh look at many of the issues that concern APC and its members, with the aim of provoking discussion and debate. Issues covered include internet governance and sustainable development, human rights and the environment, policy, practice and the use of ICTs by individuals and communities. More about David Souter.