‘It’s not only how we regulate, but which harms we choose to centre’
AI safety is a theme rapidly gaining traction across the continent and globally. But rather than echoing familiar concerns about rogue algorithms, killer robots, or existential threats to humanity, a webinar held by the University of Cape Town’s Faculty of Health Sciences EthicsLab sought to ask a more situated question: what does it mean to talk about safety in Africa, and for whom is safety at stake?
Our guest speaker, Dr Samuel Segun, Senior Researcher at the Global Centre on AI Governance, offered a panoptic and conceptually sharp overview of current risks, regulatory blind spots and opportunities for African leadership in shaping AI safety.
His framing was especially valuable because it resisted attempts to limit safety to a narrow technical question about models misfiring. Instead, it brought into view the social, political, infrastructural and epistemic conditions that shape how AI is built, used and governed on the continent.
Framing the issue: AI Safety ≠ AI Ethics?
Segun opened by clarifying that AI safety and AI ethics, though often used interchangeably, are not quite the same. Safety tends to ask whether a system will behave reliably and avoid unintended harms. Ethics, by contrast, probes whether systems and the societies that shape them are structured in ways that are just, inclusive and normatively defensible.
Yet in practice, the line blurs.
In Africa, as elsewhere, the harms are not abstract but lived, including manipulated elections, surveillance of activists, online gender-based violence and misinformation campaigns that worsen public health crises.
These are not merely questions of safety but equally questions of power. They ask what happens when frameworks of ‘safety’ and ‘risk’ assume a universal subject or an abstract ‘humanity’ and overlook the uneven geographies of exposure, harm and harm prevention. Specifically, they overlook the possibility that what is safe for some may not be safe for others.
Although Segun drew a distinction between safety and ethics, his broad framing of safety ended up encompassing many classic ethical concerns like justice, labour exploitation, human rights and environmental harm.
Jantina de Vries’ intervention pointed to the risks of this move, asking whether ethics loses its critical edge when absorbed into safety. We may come to see political problems as technical ones, or assume that preventing harm is the same as enabling justice.
While AI safety tends to focus on preventing harm or managing risk, ethics asks deeper questions about how technology shapes the way we live, who is included or excluded, and what values guide these choices.
De Vries sought to direct attention not just to how systems work, but to how people use them, and to the social conditions that make some groups more vulnerable than others.
Three risk zones
Segun drew on the three broad categories of risk identified in the International AI Safety Report (2025): malicious use, malfunction and systemic risk. Each is worth unpacking.
Malicious use
Malicious use includes the deliberate weaponisation of AI technologies: surveillance tools used to monitor dissent, deepfakes deployed during elections and voice-cloning scams targeting vulnerable users.
African cases abound, from Zimbabwe’s use of facial recognition cameras to Uganda’s police profiling of activists to cybercriminals in Ghana impersonating relatives for mobile money scams. In these cases, AI becomes less a tool for liberation and more a tool for control and deception.
Malfunction
Malfunction refers to unintended but no less harmful failures: biased algorithms trained on non-African data, healthcare chatbots that provide dangerous advice, systems that ‘hallucinate’ but are treated as infallible.
The scarcity of data in African languages and contexts makes such errors more likely, and the consequences more severe, especially when users are structurally positioned to trust or rely on the system.
Systemic risk
Systemic risk looks at the bigger picture. What happens when AI accelerates job displacement, undermines already fragile labour markets or amplifies environmental harm? What futures are being made impossible or foreclosed?
As Segun noted, Africa’s developmental trajectories, especially around tech-enabled outsourcing, may be prematurely cut off by automation. And as data centres expand, their energy and water demands threaten communities already grappling with scarcity.
These risks are neither hypothetical nor evenly distributed. As I noted in our discussion, critical questions in assessing these AI safety risks include:
- who bears the brunt of these harms?
- whose access to water is diverted to cool a data centre?
- whose job disappears when chatbots are integrated into call centres?
- who is profiled, monitored or manipulated?
- who is shielded from those effects?
AI safety and risk are never just about technical systems; they are always about people, positioned differently in power and precarity.
Regulating in a global ecosystem
One recurring question in the discussion was whether Africa can meaningfully regulate AI in a world where much of the technology is developed elsewhere. Segun argued that AI regulation cannot be siloed from broader data governance.
Foundational protections like privacy, data ownership, and consumer rights are often cited as prerequisites for meaningful AI regulation. But perhaps the more urgent question is: who decides what counts as foundational?
In African contexts where precarious labour conditions and environmental vulnerability are already widespread, is it clear that privacy should always be the primary or starting concern? Shouldn’t protections for workers, energy security, or environmental commons be just as foundational given the ways AI technologies intersect with these spheres?
What’s at stake, in other words, is not only how we regulate, but which harms we centre in our regulatory imagination.
Some participants pointed to the European Union’s Artificial Intelligence Act as a possible model, but Segun cautioned against transplanting frameworks without adaptation. Legislation without enforcement, he reminded us, can offer the illusion of protection while leaving underlying harms intact. What is needed is not just policy, but capacity to audit, to adapt and to govern.
Encouragingly, there are efforts underway. Several African countries are drafting AI strategies. Kenya, Segun tells us, is part of the International Network of AI Safety Institutes (though it does not yet have a domestic institute). But continent-wide coordination remains limited and uneven.
Toward African-led responses
Segun proposed five directions for action, each of which invites further engagement:
- a human rights-based approach to AI governance grounded in principles like non-discrimination, privacy and participation
- an African AI Safety Institute as a dedicated space for research, risk mitigation and knowledge exchange across the continent
- early detection systems and tools that can flag AI-generated fakes and scaled manipulation, with support for African languages and local contexts
- public literacy and capacity-building that includes not only technical training, but critical media and civic education, and
- enforcement, not just legislation, to ensure that policy frameworks are effectively implemented
Each of these ideas deserves deeper conversation. Who defines what counts as ‘safe’? How do we build detection systems without reinforcing surveillance logics? Can public literacy campaigns avoid becoming top-down digital paternalism?
Closing thoughts: On safety, ethics and imagination
It is clear that the term ‘safety’ does useful work, alerting us to danger and demanding precaution. But it also has limits. It can drift into technocratic neutrality, obscuring the political choices embedded in design and deployment.
Choices about what gets built, how and where; whose data is used; and which harms are prioritised are always shaped by competing values and interests. Safety, and its proxy of risk, can also flatten difference, treating harms as universal rather than situated.
Is it worth building on and pushing beyond safety, toward something like justice or even care?
BY DR ANYE NYAMNJOH
Senior Research Officer
ETHICS LAB
UNIVERSITY OF CAPE TOWN
PICTURE: Courtesy CIPESA