
The Women of Uganda Network (WOUGNET) recently participated in the Digital Rights and Inclusion Forum in Accra, Ghana, with the support of a grant from the APC Member Engagement and Travel Fund (METF). The WOUGNET representatives joined a panel to discuss the profound impact of artificial intelligence and other emerging technologies on civic space in Africa, balancing the opportunities they present against the significant challenges and threats they pose. This piece, originally published by WOUGNET, offers a summary of this important discussion, as well as the recommendations and priorities highlighted by panellists and participants looking to the future.

Women of Uganda Network (WOUGNET) recently participated in the Digital Rights and Inclusion Forum hosted by Paradigm Initiative in Accra, Ghana, from 23 to 25 April 2024. At this influential event, the International Center for Not-for-Profit Law (ICNL) organized a compelling panel discussion titled “Artificial Intelligence and Other Emerging Technologies and Their Impact on Civil Society.” Moderated by ICNL’s Senior Legal Advisor for Africa, Florence Nakazibwe, the panel featured four distinguished speakers: Li Fung, Senior Human Rights Adviser, UN Human Rights Office – Kenya; Zwelithini Eugene Xaba, Legal Officer, African Commission on Human and Peoples’ Rights; Sandra Aceng, Executive Director, Women of Uganda Network (WOUGNET); and Adeboye Adegoke, Senior Manager, Paradigm Initiative.

Rising AI surveillance in Africa

According to the Carnegie AI Global Surveillance Index, there is a growing trend among African states to deploy advanced Artificial Intelligence (AI) surveillance tools, including smart policing and facial recognition systems, to monitor and track citizens under the guise of addressing security risks. The panel discussed the profound impact of these technologies on civic space in Africa, balancing the opportunities they present against the significant challenges and threats they pose to civil society.

Participants listening to the discussion

Leveraging AI for civil society work

Sandra Aceng from WOUGNET highlighted innovative ways that civil society organisations (CSOs) are using AI tools to enhance their effectiveness in online spaces. For example, WOUGNET is developing an AI chatbot on its website to provide information about online gender-based violence for survivors. Viamo, a global leader in mobile engagement dedicated to connecting businesses and governments to people in low-connectivity environments, is also partnering with CSOs to bridge the digital divide and reach vulnerable groups. In Zambia, the “Ask Viamo Anything” feature, powered by generative AI, allows easy access to timely information via basic phones. These tools are also being used to create innovative educational and advocacy materials for digital literacy programmes and online campaigns.

The role of women and civil actors in AI governance

Women and civil actors have a critical role to play in the governance of AI. AI can serve as an enabler for change by aiding the creation of educational materials and providing information about gender and ICT. However, AI systems trained on biased data can perpetuate gender stereotypes, underscoring the need to document and map threats targeting marginalised groups through AI tools. It is crucial to ensure that structurally silenced groups, such as persons with disabilities, LGBTIQ+ people, and women and girls, are involved and considered in the design of AI systems.

Risks of AI in surveillance and human rights violations

Speakers also raised concerns about the use of AI tools by state security agencies to conduct unregulated surveillance of CSOs, human rights defenders (HRDs) and political opponents, which violates their fundamental human rights. In Uganda, for example, the government has signed a contract with the Chinese technology firm Huawei to set up an AI-powered facial recognition surveillance system, purportedly for crime prevention. Similarly, a report published by the Institute of Development Studies shows that by 2023, Ghana had spent US$184 million on surveillance technology by implementing a Safe City project with a CCTV component powered by Huawei’s facial recognition AI. Such tools are susceptible to misuse, including data leaks and the illegitimate targeting of HRDs and CSOs, which can result in the victimisation of marginalised groups and self-censorship.

Human rights risks and AI

Li Fung from OHCHR discussed the potential risks of AI for human rights, highlighting that generative AI is being developed rapidly without sufficient consideration of its consequences. Key risks include violations of the rights to non-discrimination, privacy, access to information, and expression. Generative AI models are also fuelling hate speech, misinformation and violence, disrupting democratic processes, and exacerbating surveillance and profiling. OHCHR therefore advocates for human rights frameworks to be central to the creation and implementation of AI models, to harness their benefits while avoiding negative impacts.

Governance gaps in AI regulation in Africa

Eugene from ACHPR highlighted the lack of AI regulatory frameworks in Africa, noting that fewer than ten countries have dedicated national AI policies. At the regional level, no specific framework addresses the use of AI, though existing general frameworks cover related issues. The African Union (AU) is developing a draft AI policy framework and a continental AI strategy. The lack of regulatory frameworks poses significant risks to CSOs, as digital tools such as smart cities increasingly permit broad state surveillance, curtailing CSOs’ ability to organise freely and violating privacy rights.

Engaging CSOs in AI policy spaces

Adeboye from PIN noted the need for more robust CSO engagement with the AU and governments on AI regulation and governance. Despite their relevant human rights experience, African CSOs are not adequately involved in policy discussions. The AU has been conducting consultations on the proposed AI frameworks, but these have not been properly structured. The private sector’s role in AI misuse, particularly regarding data privacy, is also a concern, as developers often withhold information citing intellectual property and trade secrets, hindering risk assessments and institutional accountability.

Three speakers talking during the event

Plenary discussions and recommendations

Participants in the plenary session flagged several pressing issues:

  • The challenge of content moderation by AI systems and the need for relevant safeguards.
  • Policy incoherence in AI frameworks due to the absence of a regional standard.
  • The need to decolonize AI systems and secure data sources contextualized in Africa.
  • The role of AI in decision-making and ensuring government accountability for negative consequences.
  • The importance of diplomatic engagement in promoting responsible AI regulations.
  • Risks of growing unemployment due to AI solutions and labour exploitation in the AI field.
  • Ensuring the integrity and ethical use of AI research in academic institutions.

A representative from Meta’s Oversight Board emphasised that AI systems are prone to human biases and need to be guided by human rights frameworks. Content moderation remains a significant challenge, and CSOs should hold corporations accountable for human rights violations on their platforms. Strengthening African stakeholder engagement and collaboration between governments, the private sector and civil society is therefore crucial for effective AI accountability.

Panelist responses and future priorities

Panelists suggested several priorities for addressing AI’s impact on civic space in Africa:

  • Developing clear AI strategies that avoid replicating Western frameworks and promote decolonization of AI models.
  • Supporting youth to develop AI skills and build systems tailored to the African context.
  • Embracing AI as a research tool in academic institutions, ensuring ethical AI principles are followed.
  • Addressing the skills gap in the labour market due to AI disruption through digital literacy initiatives.
  • Ensuring accountability from companies operating in Africa by setting precedents in courts globally.
  • Creating robust data systems for AI development that are uniquely tailored to the African context.
  • Integrating data protection frameworks into AI governance regimes.
  • Pushing for transparency and accountability in the private sector’s development of AI systems.

Conclusion

The panel highlighted the need for a multi-stakeholder approach in creating AI tools and governance safeguards. African CSOs and other stakeholders must play an active role in AI policy spaces, leveraging opportunities such as the UN Global Digital Compact to influence policy actions. It was further echoed that governments and CSOs should prioritize identifying and mitigating AI risks, through conducting risk assessments, and ensuring human rights are prioritised in AI governance. By fostering transparency, accountability, and inclusive participation, Africa can harness the potential of AI while mitigating its risks in the future.

Written by Sandra Aceng, executive director at WOUGNET, and Florence Nakazibwe, ICNL’s senior legal advisor for Africa.

This article was originally published by WOUGNET. Photos: Courtesy of WOUGNET.