24 March 2022
Source: Feminist Internet Research Network

Artificial intelligence has harmful implications for gender equality and its intersectionalities, new research focusing on algorithmic decision-making by Latin American governments reveals.

We are witnessing major hype around artificial intelligence (AI), with algorithmic decision-making systems being adopted as a magic wand for social, economic, environmental and political problems. However, machines don’t erase biases and inequalities, including gendered ones. This is the main finding of “Not MY AI: A feminist framework to challenge algorithmic decision-making systems deployed by the public sector”, a research study by the Brazil-based think tank Coding Rights covering Chile, Brazil, Argentina, Colombia, Mexico and Uruguay. In 24 of the projects mapped, the research found likely harmful implications for gender equality and all its intersectionalities.

The research compiles insights from the notmy.ai platform, a project focused on developing a feminist framework to question algorithmic decision-making systems, shared as part of the Feminist Internet Research Network, led by APC and funded by the International Development Research Centre (IDRC). It provides data-driven evidence that can inform writing, reporting and coverage of this issue from a feminist, human rights perspective.

Key findings
“AI systems tend to punish the poor” 

The research shows that it is increasingly common for wealthy people to benefit from personalised human interactions, while the data of the poor is processed by machines that make decisions about their rights.

This divide is even more relevant when we consider that social class has a powerful gender component.

Colonialism is present in algorithmic systems 

Today’s extraction of personal data naturalises the colonial appropriation of life in general.

Coloniality presents itself in algorithmic systems through institutionalised algorithmic oppression, exploitation and dispossession.

AI is driven by precarious labour

Digital technologies are powered by “ghost work” or invisible labour.

These jobs typically entail precarious working conditions: workers are overworked and underpaid, with no social security benefits or job stability, in stark contrast to the working conditions of the creators of these systems.

Neoliberal policies become automatised

Discourses around big data carry an overwhelmingly positive connotation thanks to the neoliberal idea that exploiting the data of the poor for profit can only benefit the population.

Racism is embedded in AI design

For UN Special Rapporteur Tendayi Achiume, emerging digital technologies are “capable of creating and maintaining racial and ethnic exclusion in systemic and structural terms.”

Designers of AI technologies build a digital caste system structured on existing racial discrimination.

AI is patriarchal by design

The discussion about algorithmic fairness has omitted sexual orientation and gender identity. This omission has concrete impacts on censorship, language, online safety, health and employment, leading to the discrimination and exclusion of LGBTIQ+ people.

AI lacks transparency

Transparency is key to fostering trust in these tools.

When government agencies adopt algorithmic tools without transparency, accountability and external oversight, their use can threaten civil liberties and exacerbate existing problems.

Would you like to know more?
  • Access the full research

  • Visit the Feminist Internet Research Network website for more internet research with a feminist approach

  • For interviews, coverage and other press inquiries, contact Leila Nachawati, APC’s media outreach lead: leila@apc.org

  • Check our press section for press releases, publications, multimedia materials and a list of contacts from the APC community.