This article was republished from EngageMedia.
As technologies based on artificial intelligence (AI) gain traction, the need to govern them becomes increasingly urgent. In recent years, ethical AI has surfaced as the de facto pathway towards safer and better AI, often manifested in lists of guidelines and principles or codes of conduct. At least 84 such documents exist, put forth by private companies, government agencies, academic and research institutions, non-profit organisations, professional associations, and others. They are not legally binding, but aim to steer decision-making in the tech industry towards certain principles when designing and building AI technologies.
In this three-part series, we will scrutinise current AI ethical principles and guidelines and their shortfalls, and also discuss alternative ethical frameworks that are available. For some background, you may want to read an earlier series on AI and human rights in the context of Southeast Asia.
Some scholars describe ethics as “arguably the hottest product in Silicon Valley’s hype cycle today, even as headlines decrying a lack of ethics in technology companies accumulate” (Metcalf et al., 2019). The more I read about the topic, the less inclined I am to believe that these guidelines can limit AI harms unless significantly supplemented with other governance approaches such as governmental regulation or technical standards.
Why? In this blog post and the next, I will discuss some points that I’ve gathered from academic literature from two angles: first, from analysing the substance of existing ethical guidelines and principles out there, and, second, from the difficulties in putting the principles into practice.
Maybe I can start by stating my expectations on what ethical AI should and should not do. In the broadest sense, it should support sustainable development and uphold human rights. Ethical AI should not uplift some communities at the expense of others, or be weaponised against marginalised communities or democratic institutions. These are reasonable asks for any technology powerful enough to have a significant impact on society.
Does the content of existing ethical principles and guidelines live up to these expectations? At least four different studies have analysed the substance of the documents, and we will look at some of their conclusions on 1) what the principles contain, 2) what they leave out, and 3) the underlying assumptions that limit what ethical guidelines can achieve. While going through the principles, we should also keep in mind that most of the ethical guidelines analysed were built in the West. In Part Three of this series, we will broaden the conversation by exploring ethical frameworks from other cultures as well.
What do the principles contain?
Anna Jobin, Marcello Ienca and Effy Vayena, who collected and studied the 84 documents of ethical guidelines and statements, noted that, even though no single principle is common across all 84, five main principles are most commonly shared:

- transparency;
- justice and fairness;
- non-maleficence (i.e. causing no harm);
- responsibility; and
- privacy.
Other than these five, there are six that occur less frequently: beneficence (i.e. promoting good), freedom and autonomy, trust, dignity, sustainability, and solidarity. In a separate analysis of 36 documents (Fjeld et al., 2020), researchers arrived at similar categories with slightly different groupings. Both papers unpack the concepts and are worth a read if you want an overall view of what AI ethics covers.
Jobin, Ienca, and Vayena point out that the many ethical guidance documents diverge in four main ways:

- how ethical principles are interpreted;
- why they are deemed important;
- what issue, domain, or actors they pertain to; and
- how they should be implemented.
Conceptual and procedural divergences mean that different actors often prioritise differently, and implementation may vary widely. This can lead to a practice called “ethics shopping”, whereby actors mix and match the ethical principles that fit their purposes instead of actually changing unethical behaviour (Floridi, 2019). (We’ll discuss this more in Part 2.)
What do the principles leave out?
AI ethical guidelines prioritise certain principles over others and, as mentioned, there is a cluster of five main principles that most guidelines agree upon, at least at a high level. But what is left out or underrepresented?
An interesting point argued by Thilo Hagendorff is that most of the recurring principles are those most easily operationalised mathematically, and they tend to be implemented as technical solutions. Such principles (accountability, explainability, privacy, justice, robustness, and safety) reflect a “male-dominated justice ethics”, mirroring the fact that the discourse on AI ethics is shaped primarily by men.
Within the 22 guidelines that he analysed, Hagendorff points out that almost none talk about AI in the contexts of care, nurture, help, welfare, social responsibility, or ecological networks. He goes on to note that very few address democratic control, governance, and political deliberation over AI systems, or the political abuse of AI systems. The guidelines rarely discuss the lack of diversity within the AI community, where decisions are predominantly taken by white men. There is also little discussion of trolley problems (ethical dilemmas where there is no clear right or wrong), of how the efficacy of algorithmic decision-making compares with that of human decision-making, or of the hidden social and ecological costs of AI systems.
Jobin, Ienca, and Vayena emphasise that mainstream AI ethics debates significantly underrepresent sustainability, dignity, and solidarity as principles. In other words, the environmental impacts of AI are rarely discussed, and neither are its impacts on human rights and dignity or its implications for labour markets. They also observe that, geographically speaking, global regions do not participate equally in the AI ethics debate, noting the underrepresentation of areas such as Africa, South and Central America, and Central Asia. In the case of Southeast Asia, Jobin et al.’s data set includes a discussion paper on AI and personal data by Singapore’s Personal Data Protection Commission. No other Southeast Asian country is represented in the study, even though such discussions are starting to happen elsewhere in the region, such as in Thailand. There is also a move towards building national AI strategies in this region.
What are the underlying assumptions?
Having some understanding of what the ethical documents contain and leave out, it is useful to zoom out a little and ask why. Daniel Greene, Anna Lauren Hoffman and Luke Stark analyse the “moral background” of values statements, or the grounding assumptions that frame discussions of AI ethics. These are the ideas that are taken for granted and seldom questioned in the framing of AI ethics.
From examining seven public statements of ethical principles, Greene, Hoffman, and Stark find seven interrelated key assumptions:
- The ethical guidelines assume that concerns about the positive and negative impacts of AI are universally the same across all cultures and species, and that these concerns can be addressed by objectively measuring and fixing the impacts.
- Ethical design is the realm of experts (e.g. AI corporations, leading academics, and legal minds); other people who are concerned are merely stakeholders (such as product users or buyers). “Experts make AI happen. Other Stakeholders have AI happen to them” (p. 2127).
- AI and all its associated technologies (e.g. machine learning) will happen inevitably, and humans can only react to their consequences (such as dealing with mass job displacement). The ethical debate therefore focuses on how to design appropriately, never on whether the systems should be built in the first place.
- Whether implementing an AI system is good or bad is rarely scrutinised at the business level, only at the design level.
- The only ethical path forward is to “build better”: maximising the benefits of AI, minimising its negative impacts, and educating the public about the role of AI in their lives. “Not building” is not an option.
- How to build better? By vetting the building process, largely through experts (see Point 2). The main legitimising move is transparency, but there is no commitment to substantive change.
- The “experts” mentioned in Point 2 also include AI and machine learning technologies themselves, hence the constant talk of “explicable” and “transparent” systems.
These assumptions give us a better understanding of why AI ethics guidelines are framed as they are and, more importantly, of the limits on what such guidelines can do to safeguard society against AI harms.
In the digital era, tech companies generate huge profits amidst negative impacts on the environment and society, and AI ethical guidelines do not challenge this. Technology is assumed to solve all problems, while the fact that it creates problems of its own goes unacknowledged. The playing field for ethical debates about AI is, by design, unequal: it assumes that AI experts will take the driving seat while the rest of us tag along at the back of the wagon. These underlying assumptions are consistent with what we have seen in the substance of the ethical guidelines and in what they leave out.
Conclusion so far
In this blog post, we have discussed the substance of the ethical guidelines that have mushroomed in recent years, taking a closer look at what they cover, what they omit, and the assumptions underlying them. We have found that these guidelines mostly focus on narrow fixes and carry problematic blind spots that stand in the way of systemic solutions.
But surely, even if the guidelines are not perfect, they can do some good in practice? The next blog post will examine this question and suggest that they do not, and can even be harmful in some cases.