Meta is under renewed scrutiny for failing to moderate AI-generated misinformation targeting African users, as local fact-checkers and digital rights advocates warn of a widening enforcement gap that leaves millions vulnerable to scams and disinformation.
From manipulated videos of Nigerian TV anchors endorsing fake medical cures to deepfake ads featuring business moguls like Tony Elumelu promoting Ponzi schemes, Facebook and Instagram in Africa are increasingly flooded with deceptive AI-generated content.
Yet Meta’s moderation responses remain slow, inconsistent, or absent altogether.
‘It often feels like Meta follows stricter laws in Europe, while in Africa, there’s little accountability,’ said Olayinka*, a Nigerian fact-checker who reported identical scam ads – only to see one removed quickly in Europe, while the other remained live for days.
The FactCheckHub reviewed multiple AI-generated scam ads targeting Nigerian users. While similar content aimed at European audiences was swiftly taken down, African-targeted material often remained online, even after repeated reports.
In one instance, a Facebook ad using a deepfake version of broadcaster Olamide Odekunle promoted a fraudulent trading platform. Despite being flagged by users and media, the ad remained live for weeks. Odekunle, who is also one of Nigeria’s first AI news anchors, confirmed she did not consent to her likeness being used.
Another video falsely showed TV presenter Kayode Okikiolu endorsing miracle drugs and scams. ‘It was scary at first … then anger. I was being used to mislead people,’ Okikiolu told the FactCheckHub.
Experts say Meta’s phasing out of its third-party fact-checking programme in Africa and reliance on crowdsourced Community Notes is weakening safeguards, especially in countries like Nigeria and Kenya.
‘Meta has started withdrawing fact-checking partnerships where they’re most needed,’ said Kehinde Adegboyega of the Human Rights Journalists Network of Nigeria. ‘In South Africa, they launched an Election Operations Centre and multilingual moderation. In Nigeria, we’re left to fight these battles ourselves.’
Meta’s stated policies prohibit AI-generated misinformation and manipulated media. Yet the FactCheckHub found dozens of ads and videos – violating these rules – that remained online. Critics argue that Meta enforces its guidelines unevenly, applying stricter standards in the Global North than in African countries.
The issue goes beyond financial scams. Health misinformation – such as fake AI-generated videos promoting hypertension cures – has also circulated widely. One viral video even used a fabricated X post to falsely claim that former President Muhammadu Buhari had died in 2017.
‘These deceptions are emotionally and financially damaging,’ said Olayinka. ‘And Meta’s slow action only worsens the problem.’
Fact-checkers, journalists, and civil society groups are calling for Meta to:
restore regional fact-checking partnerships
invest in language-aware AI moderation
launch election response centres beyond South Africa, and
improve platform transparency and community reporting tools
‘AI is rapidly changing the information landscape, but without enforcement and local accountability, it becomes a weapon,’ said Adegboyega. ‘Africa should not be treated as an afterthought in global tech governance.’
The Media Institute of Southern Africa (MISA) has submitted its insights to the African Commission on Human and Peoples’ Rights (ACHPR) Public Consultation on Freedom of Expression and Artificial Intelligence (AI).
In its submissions, MISA emphasised the need for governments to establish AI legal frameworks rooted in international human rights law, incorporating transparency, accountability, data security, and clear redress mechanisms.
As a key media freedom and digital rights advocate, the organisation recognises that the transformative power of AI will directly shape the future of journalism, alter the information landscape, and impact the right to freedom of expression.
To safeguard fundamental human rights, essential safeguards and measures, including mandated human oversight, must be incorporated in the entire life cycle of all AI systems that impact these rights.
In its submissions, MISA highlighted key concerns, including, among others:
that Generative Artificial Intelligence (GenAI) has amplified misinformation, blurring the lines between truth and fiction, and that there is a risk that AI may influence editorial independence or journalistic decisions
that deepfakes can be used for political manipulation, character assassination or to incite violence
that the digital divide and resource gaps remain significant challenges in most African countries, where a lack of affordable and reliable internet connectivity hinders citizens from realising the full potential of emerging technologies
that most AI systems currently in use are predominantly trained on Western datasets, which inherently carry biases that often lead to discrimination against specific segments of African populations and misrepresent African contexts
the potential for State control and censorship, which can lead to increased surveillance (e.g., facial recognition) including social media monitoring, to track journalists and ordinary citizens, which often results in self-censorship
the dominance of big tech companies, which control most AI models, leading to the decline of smaller media outlets; the monetisation and exploitation of data by these companies often reflects biases or commercial interests embedded in their AI models, distorting market dynamics and creating economic dependencies, as media organisations become economically and structurally reliant on these platforms for traffic and advertising revenue, restricting their ability to maintain editorial independence, and
that most international AI instruments are non-binding in nature and fail to incorporate the perspectives of the Global South, which results in challenges in translating AI principles into practical policies
Way forward
Moving forward, governments must establish AI legal frameworks grounded in international human rights law, incorporating transparency, accountability, data security and clear mechanisms for redress.
AI systems must be designed inclusively, with input from various stakeholders, including marginalised groups, people with disabilities and other underrepresented voices. The media industry should also develop its own AI Policy or Code of Ethics, incorporating key best practices and clearly labelling all content that has been generated, augmented, or significantly altered by AI.
AI-generated content, particularly for news and informational purposes, must go through rigorous human review and editorial approval before publication.
AI systems should not influence editorial independence or journalistic decisions by making critical choices about content publication or editorial direction.
Strong policies should be implemented to close the digital gap, ensuring affordable and accessible internet, and enhancing digital literacy for marginalised communities.
There is a need for greater accountability in the utilisation of Universal Service Funds to stimulate infrastructure development. This will help close the digital divide between urban and rural communities, serving as the backbone for localised AI development and deployment.
Finally, regional and global coordination is vital to harmonise AI development and translate AI principles into practical, enforceable policies through multi-stakeholder partnerships.
At the 5th Ordinary Session of the 6th Pan-African Parliament in Midrand, South Africa, legislators and experts placed Africa’s data sovereignty, AI governance and responsible digital innovation at the forefront of the continent’s transformation agenda, emphasising the need for urgent, African-led action to avoid becoming a ‘digital colony’ while harnessing the Fourth Industrial Revolution for inclusive development.
Hon. Behdja Lammali, (Algerian) Chairperson of the Committee on Transport, Industry, Communications, Energy, Science and Technology, opened the discussions, saying, ‘Africa continues to lag behind in digitalisation, innovation and AI adoption, risking long-term negative impacts on our continent and our people’.
‘We must align our strategies with Agenda 2063 to advance digital health, smart industrialisation, and responsible AI use while protecting privacy and personal data.’
Reflecting on the outcomes of the First Parliamentary Digital Summit, Hon. Lammali noted, ‘We covered critical areas including AI training, data protection and digital health, discussing the role of parliamentarians in advancing AI and policy harmonisation’.
She called on Member States to ‘develop model laws on AI, data protection and privacy aligned with Agenda 2063, and to ratify and domesticate the Malabo Convention to address emerging technologies, AI, cross-border data flows and cyber threats’.
She added that ‘Africa must build a secure, inclusive, sovereign digital and AI future that aligns with the Africa We Want under Agenda 2063, ensuring data protection, AI for development, local innovation and equitable benefits for all Africans, from North to South, East to West.’
Prof. Mirjam van Reisen of Leiden and Tilburg Universities, who presented on ‘Building a continental framework for AI, Data Sovereignty and Responsible Digital Innovation’, highlighted the urgency for Africa to take ownership of its data.
‘Artificial Intelligence is now embedded in everyday tools and platforms and is essential for economic growth in Africa, with the potential to add $3-trillion to Africa’s economy by 2030,’ she said.
Van Reisen warned, ‘Africa risks losing control over its digital data, with it being exported for economic gain in the United States, China and Europe without African oversight’.
She continued, ‘Controlling data is essential for controlling AI tools and protecting African interests – current centralised models of data storage and AI development reinforce inequality’.
She drew on Africa’s traditions, saying, ‘Just as traditional communities gathered under trees to find solutions, Africa now needs decentralised data systems through decentralised web and edge computing to build sovereignty over AI’.
Van Reisen underlined, ‘Africa should become the first continent fully data sovereign, using African data and legacy to shape “African Intelligence” for AI, avoiding digital colonialism while leveraging AI for African-led growth and problem-solving’.
‘AI is transforming healthcare, education, agriculture and public policy in Africa, but African data is often stored and processed outside the continent, risking misuse and loss of control,’ he said.
Mveyange explained, ‘AI models often rely on non-African datasets, leading to biases and poor applicability to African contexts’.
He urged African governments to ‘build legal, technical and governance frameworks to protect data and ensure it benefits African citizens’, emphasising ‘data is an economic resource, and African countries must prevent digital extractivism by global technology companies’.
He proposed adopting FAIR Data principles, making data ‘Findable, Accessible, Interoperable and Reusable within African contexts’, while investing in African data scientists, AI engineers and ethical AI governance structures.
Mveyange called on the Pan-African Parliament and the African Union to ‘facilitate dialogue across governments, civil society, academia and the private sector to develop harmonised policies’, and to position Africa as ‘a leader in ethical, responsible and people-centred AI’.
He concluded, ‘The time is now to build an Africa-led, responsible AI ecosystem to drive economic growth, improve health outcomes and foster inclusive development across the continent’.
Gregory Isaacson, AI expert from AgridroneAfrica, showcased practical pathways for AI implementation, focusing on food security and agricultural modernisation through African data sovereignty. He described a pilot model using drones and AI to boost yields and market efficiency via apps in farmers’ own languages.
Addressing the sovereignty aspect, Isaacson warned, ‘Current global AI models collect user data, raising privacy concerns … We propose local, solar-powered AI systems on farms that operate offline, store data locally, and prevent data leakage’.
The session at the Pan-African Parliament reaffirmed that while AI can transform healthcare, agriculture, education and governance, it must be rooted in African realities, be people-centred and respect local cultures, languages, and community needs.
From calls to strengthen AI legal frameworks and expand local cloud infrastructure to proposals for cross-border research projects addressing healthcare and supply chain resilience, the speakers underscored the need for a unified, Africa-led approach that ensures AI benefits all Africans.
By aligning these initiatives with the African Union’s Agenda 2063, Africa can turn Artificial Intelligence into ‘African Intelligence’, ensuring it is ethical, inclusive and a driver of prosperity and resilience on the continent.
PAN-AFRICAN PARLIAMENT
PICTURE: Hon. Behdja Lammali, Chairperson of the Pan-African Parliament Committee on Transport, Industry, Communications, Energy, Science and Technology, opened the discussions (PAP)
In mid-July, the United States revoked the visas of several Brazilian judicial officials, including Supreme Federal Court Justice Alexandre de Moraes, accusing them of leading a ‘persecution and censorship complex’ that not only ‘violates basic rights of Brazilians, but also extends beyond Brazil’s shores to target Americans’.
Brazil’s President, Luiz Inácio Lula da Silva, slammed the decision as ‘arbitrary’ and ‘baseless’, calling it a violation of his country’s sovereignty.
The move, announced by US Secretary of State Marco Rubio on 18 July, marks the first use of a new policy aimed at foreign officials involved in what the Trump administration says are efforts to censor protected expression in the US, including ‘pressuring American tech platforms to adopt global content moderation policies’.
Pushback from the US comes as online safety laws are moving ahead in several major jurisdictions, with enforcement mechanisms already in motion.
In the UK, companies have completed their first round of illegal harms risk assessments under the Online Safety Act (OSA) and are expected to finalise children’s risk assessments next. The UK’s communications regulator, Ofcom, just launched nine new investigations under the law in June.
While the Trump administration claims the visa restrictions defend free speech and national sovereignty, they are more than just a diplomatic warning: they are forcing regulators worldwide to take stock.
The State Department, in an email to Tech Policy Press, described the visa restriction policy as a ‘global policy’, but singled out the EU’s Digital Services Act (DSA), saying the US is ‘very concerned about the DSA’s spill-over effects that impact free speech in America’.
Reiterating Rubio’s announcement, a spokesperson said, ‘We see troubling instances of foreign governments and foreign officials picking up the slack’, adding that the DSA’s impact on protected expression in the US is ‘an issue we’re monitoring’.
By targeting foreign officials they accuse of censoring speech on US soil, Washington is raising questions about how far enforcement of tech rules should extend, and what diplomatic fallout might follow.
How are regulators navigating these tensions, and what does it mean for international cooperation on platform regulation?
The EU has so far made no move (at least publicly) to adjust course. While the US continues to express concern that parts of the DSA could chill protected expression in the US, Brussels continues to position the law as a model for global digital governance.
As part of its international digital strategy, published last month, the EU committed to promoting its regulatory approach in bilateral and multilateral forums, as well as sharing its experience in implementing it. It said it will organise regional events ‘with international organisations, third-country legislators, regulators and civil society to promote freedom of expression and safety’.
Various public statements by EU officials suggest the stance is unlikely to soften. But concerns around sustained enforcement are beginning to surface internally. This concern has gained traction amid reports that the European Commission is delaying its DSA probe into X ahead of a 1 August deadline linked to trade talks with the US.
The US visa policy may not mention trade, but it’s pushing regulators to rethink how their tech rules play on the global stage.
In South Korea, the US has raised significant objections to the government’s proposed Online Platform Fairness Act, which aims to rein in the dominance of major tech platforms and protect smaller market players. The legislation has emerged as a central issue in the two countries’ ongoing trade negotiations, with officials reportedly viewing it as a greater hurdle than traditional market access topics like agricultural imports.
South Korea’s President, Lee Jae Myung, has committed to advancing these reforms as part of a broader push to strengthen oversight of both domestic and foreign tech giants. Yet, US lawmakers argue that the bill closely mirrors the EU’s Digital Markets Act (DMA) and disproportionately impacts American companies.
South Korea’s ruling party is said to be reconsidering the pace of its antitrust efforts on US tech companies such as Google, Apple and Meta, amid concerns about the potential fallout on trade talks and diplomatic relations.
Similarly, Canada postponed plans to implement a digital services tax following sustained bilateral trade talks with the US, highlighting how US concerns are increasingly influencing the enforcement of tech laws worldwide.
Are US policies prompting regulators to rethink the global impact of their tech rules?
As global regulators move forward with enforcement under new online safety laws, some are taking extra care to clarify the limits of their authority. Owen Bennett, Head of International Online Safety at Ofcom, emphasised to Tech Policy Press that freedom of expression remains ‘core to what we do’.
He highlighted built-in protections within the OSA and systematic assessments of unintended impacts on speech and privacy. ‘The OSA requires only that services take action to protect users based in the UK – it does not require them to take action in relation to users based anywhere else in the world.’
As platforms navigate overlapping compliance regimes, the US is amplifying concerns that some enforcement efforts risk appearing politically motivated or extraterritorial in scope. Ofcom said it actively monitors how companies balance UK rules with obligations elsewhere.
Australia’s eSafety Commissioner voiced similar concerns, calling for proportionate and rights-respecting regulation while underscoring the need for global alignment.
‘It’s reassuring to see governments around the world taking steps to protect their citizens from online harms, including the US through the Take It Down Act,’ an eSafety spokesperson told Tech Policy Press, ‘but we’d welcome even more governments considering the role of proportionate, human rights-respecting regulation to address the more egregious online harms’.
UNESCO, which developed the Guidelines for the Governance of Digital Platforms, flagged growing concern that major technology platforms, particularly US-based firms, may be walking back earlier commitments to user safety and governance standards, amid waning ‘political pressure and a discernible shift towards a less regulated environment’.
Last year, the UN agency launched the GFR, bringing together 87 national and regional bodies to coordinate an international approach to platform governance. In response to the deregulation trend, a UNESCO spokesperson said regulators involved in the initiatives it leads are rethinking their engagement strategies, emphasising direct communication with platforms and alignment on co-regulatory goals.
A recent US trade report flagged a range of legislation — including digital taxes and data protection laws — in more than a dozen countries, from Canada to Kenya, as potential obstacles to digital trade.
Trump’s efforts to frame foreign regulations covering US tech companies as censorship or trade barriers are challenging countries to rethink how their digital rules are perceived abroad.
Brazil’s sharp pushback against the US visa policy suggests some governments may simply reject the Trump administration’s interpretation of digital regulation as censorship or protectionism.
Kenya is advancing legislation that would require all social media users in the country to verify their identities using National ID cards before accessing social media platforms.
The proposed measure aims to reduce online anonymity and combat issues like misinformation, hate speech and cyberbullying on social media platforms.
The initiative follows Kenya’s broader efforts to strengthen digital identity verification, including the country’s recent nationwide digital ID registration programme for secondary school students.
The development builds upon Kenya’s partnership with the United Nations Development Programme (UNDP) to advance its digital identity initiative.
The Communications Authority of Kenya (CA) has demonstrated a consistent regulatory approach toward digital services, as evidenced by recent enforcement actions against unlicensed tracking services. This regulatory oversight extends to the proposed social media verification requirements.
The CA has recently been active in implementing new security measures to combat digital fraud, particularly in the mobile sector.
The legislation emerges amid ongoing discussions about digital rights and state control. During recent protests against the Finance Bill, authorities implemented Internet speed restrictions and detained online critics, highlighting the complex relationship between digital governance and civil liberties.
The developments follow a pattern of increased digital oversight, including the CA’s controversial directive requiring mobile phone International Mobile Equipment Identity (IMEI) registration.
Implementation challenges may arise from existing barriers to National ID acquisition. Youth and marginalised populations have reported difficulties obtaining National IDs, as documented in public forums like the #SiasaYaID events, potentially affecting equal access to social media platforms under the proposed system.
However, recent initiatives have shown progress in expanding digital access, such as new regulation enabling refugees to access mobile services.
The Bill represents part of Kenya’s broader digital governance strategy, which includes blockchain-based digital tokens and enhanced oversight of digital services. The initiatives reflect the government’s efforts to balance online accountability with digital rights and access, while positioning Kenya as a leader in digital transformation across Africa.
Major new research will help experts to counter the spread of misinformation in Africa and understand the causes and consequences of the continent’s growing digital divides.
The project, by researchers from the University of Exeter, will provide crucial information for the UK Government about the role of social media in galvanising offline protest movements across Africa, and the logic behind foreign-origin disinformation and influence campaigns in the region.
Lead researcher Elena Gadjanova will investigate how the public’s growing access to digital technologies and social media in Africa is influencing politics, parties’ organisational capacity and campaign strategies, electoral integrity, socio-economic inequalities and the nature and spread of misinformation in Ghana, Nigeria, Kenya and Zambia.
She has also studied which social media campaigns become viral and influence offline protest movements.
Gadjanova said: ‘I’m thankful to be awarded this fellowship, a result of my work over several years on the role and impact of digital technologies in Africa while here at Exeter, the research networks I have created across several countries, and experience with engaging with policymakers.’
‘This fellowship will support the [Foreign, Commonwealth and Development Office] FCDO capacity to carry out prompt and in-depth analysis of the various impacts that digital technologies are having on the socio-economic transformation and changing power dynamics in Africa.
‘This will ensure decisions reflect the latest research and evidence, and improve the FCDO’s capacity to respond to a fast-moving policy environment.
‘In particular, my research can inform the FCDO’s ongoing work on democracy support, electoral integrity, media freedom and countering the spread of social media disinformation in Africa. It is crucial everyone works together to battle the offline spread of misinformation originating online.
‘Improved digital literacy and institutional monitoring can help to counter the worst online harms. There is also a need to improve party institutionalisation to harness the potential of digital technologies to empower new political actors, increase political trust, and improve government accountability.’
Gadjanova has previously briefed the Foreign and Commonwealth Office on African elections. Findings from her earlier research were cited by the Kenya Human Rights Commission in its evidence of electoral irregularities submitted to the Kenyan Supreme Court in August 2017.
The Innovation Fellowships scheme provides funding and support for established early-career and mid-career researchers to partner with organisations and business in the creative, cultural, public, private and policy sectors, to address challenges that require innovative approaches and solutions. The aim is to create new and deeper links beyond academia.
UNIVERSITY OF EXETER
PICTURE: Social media in sub-Saharan Africa is under the academic microscope (Aeqglobal.com)
Children’s rights should not be sidelined in the digital environment
Overview
Children across the globe are increasingly coming to terms with and engaging in a digital world marked by both extraordinary promise and deep inequality. While the digital environment offers unprecedented opportunities for learning, expression, and civic engagement, many children remain disconnected, misrepresented, unprotected and at risk of being misinformed.
The urgency of centering young voices in media integrity discussions is underscored by the 2024 Children in G20 findings, which reveal that 2.2-billion children and youth globally lack home Internet access, while those who are connected face significant rights violations including commercial exploitation, relentless data harvesting, behavioural profiling for advertising and inadequate protection standards.
The African child’s experience in the digital environment is uniquely shaped by a complex interplay of opportunity and adversity. Africa is home to the world’s youngest population, with children and youth making up a significant proportion of its demographic landscape.
The G20, as a global leader in digital governance, has a critical role to play in setting standards and fostering international cooperation that puts children’s rights at the centre of the digital future. This policy brief builds on G20 commitments to strengthen child and youth protection and participation in digital media.
Proposal to the G20
The G20 must recognise that building sustainable, inclusive and rights-respecting digital communities means ensuring children are protected and empowered online.
As digital platforms continue to evolve rapidly, children in Africa, and globally, face urgent threats ranging from unsafe online spaces, AI-driven surveillance and profiling of children, and deepfake technologies targeting minors, to the commercial exploitation of data, the digital divide and the explosion of mis- and disinformation, all while being excluded from shaping digital policies.
The G20 cannot afford to sideline children’s experiences, rights and best interests; these must be central to the global digital transformation agenda, upholding the core protection principles of non-discrimination, protection, survival and development.
Defining the critical issue and role of the G20
The G20’s commitment to child online protection is not new. The 2021 High Level Principles for Children Protection and Empowerment in the Digital Environment established a framework that promotes governments’ adoption of measures providing for age-appropriate child safety by design, which G20 members have been working to implement.
Previous G20 efforts, documented through comprehensive toolkits, together with Member States’ efforts (in the form of consultations with children in the 2024 edition of Children in the G20, by the Brazilian articulation group), have identified critical success factors in the discussion on children’s digital rights, including:
holding tech companies accountable
age-appropriate design
risk response assessment and mitigation
effective support systems, and
the essential role of states, civil society and business in safeguarding children online
Within the African context, the African Union’s Child Online Safety and Empowerment Policy provides a continental framework recognising that children face exposure to hate speech, inappropriate content and online predators while acknowledging digital technologies’ transformative potential for education and development.
Together, these frameworks establish that protecting children online requires both regulatory oversight and corporate responsibility.
Five interconnected themes that the G20 must address
1. Children in the media: Regarding children’s access to and consumption of news, and information integrity, media policy must adapt by supporting digital participation, inclusive content and youth-driven storytelling that reflect children’s lived realities and strengthen their media literacy.
This is especially true for African children, who increasingly access news and information through social media, search platforms and streaming services. As young minds increasingly turn to smartphones and tablets, they encounter a terrain riddled with algorithmic sinkholes, colonial data traps and disinformation mirages.
This is not merely an access gap; it is an integrity emergency, threatening an entire generation’s right to truth. At the click of a button, they are exposed to an overwhelming volume of content, ranging from political developments and community news to the latest celebrity trends.
Online platforms prioritise content based on algorithms, likes, shares, and trending topics, rather than principles of fair representation and inclusivity. Algorithms and AI systems merge news and entertainment and blur the lines between fact and opinion, making it difficult for children to discern the importance of hard news.
Emotionally charged and sensational content dominates their feeds. Less sensationalised but equally important content that affects children’s lives and needs is often ignored in favour of clickbait.
Furthermore, these algorithms quickly create echo chambers, reinforcing children’s preferences by serving them more of the same type of content.
2. Media and information literacy (MIL): As digital technologies reshape communication, education, and public engagement, MIL has become a vital 21st-Century skill. MIL empowers individuals, including children, to critically assess content, navigate media systems, identify disinformation, and participate meaningfully in public discourse.
Children’s exposure to harmful digital experiences – including violent content, mis- and disinformation, cyberbullying, online grooming and child sexual abuse material (CSAM) – has detrimental effects on their mental health and development.
Climate disinformation, as discussed in MMA’s discussion document, deserves special mention because it undermines children’s ability to understand environmental issues and make informed decisions about climate action, potentially limiting their participation in climate activism and their capacity to address one of the most pressing challenges of their generation.
Media literacy must be framed as a rights-based issue: Article 17 of the UNCRC recognises children’s right to access information from a diversity of sources and the obligation of States to guide children’s use of media in ways that protect them from harm and promote their well-being.
Another important dimension is the empowerment of children as content creators. Media literacy is not only about being informed consumers of media but also about becoming thoughtful producers of content.
In a world where anyone with a smartphone can post a video, write a blog, or share an opinion, media literacy gives children the confidence and competence to share their own stories, advocate for issues they care about, and participate meaningfully in public discourse. Empowering children with MIL is thus crucial.
3 The impact of Artificial Intelligence on African children and their right to privacy: AI is increasingly embedded in the digital environments that African children use – from the games they play and the content they consume to the education tools they rely on.
While AI offers substantial benefits for learning, innovation, and service delivery, it poses serious risks through a lack of transparency in decision-making, extensive data harvesting, and limited contextual understanding.
African children, many of whom face compounded vulnerabilities due to structural inequalities, are especially at risk when AI systems are not designed with their rights, best interests, and participation in mind.
Current AI systems often harvest and process children’s personal data without meaningful consent or child-centred oversight, amplifying risks of commercial exploitation, surveillance, profiling, manipulation and discrimination. Surveillance and data commodification reduce children to profit-generating datasets.
From recommender systems that promote harmful content to biased algorithms that reproduce racial, linguistic or socio-economic inequalities, these systems can undermine children’s best interests.
Urgent safeguards are needed to ensure AI technologies are accountable, transparent, and developed with African children’s voices, contexts, and rights at the centre.
4 The long-standing issue of the Digital Divide: The lack of investment in meaningful Internet access in schools is another challenge, one that translates into a lack of digital learning and outdated or narrow curricula focused on risks rather than on building critical thinking.
The Digital Divide reveals underlying barriers, including high data costs and insufficient digital literacy training for both adults and children, factors that limit the effective integration of technology in educational settings.
G20 countries must ensure that online platforms adopt an intersectional lens to address how digital exclusion and online violence disproportionately affect girls, children with disabilities and those in conflict zones, in line with the principle of non-discrimination as enunciated by the CRC and the ACRWC.
5 Children’s right to protection and participation: The digital environment presents both significant opportunities and complex risks for children, making both their protection and meaningful participation important.
Although formal bodies like the Children’s Parliament are valuable, their integration into policymaking remains largely symbolic. Equipping children with the skills and knowledge to engage meaningfully is essential, but efforts must also address structural inequalities in legislation, education, digital access and language diversity, as well as adult duty-bearers’ capacity to listen to and act on children’s views.
1 Supporting child-safe digital environments: What measures beyond content moderation could ensure platforms prioritise child protection over engagement metrics?
2 Information integrity for children: How can we ensure that children have access to diverse content while also promoting and protecting their access to credible, accurate information?
3 Media literacy as a fundamental right: How can media and information literacy be integrated into education systems as a foundational skill rather than an optional add-on?
4 Cultural sensitivity and pluralism: Should G20 countries enforce mechanisms ensuring AI is trained on data that reflects African cultures, especially for low-resourced languages and indigenous knowledge systems, to make sure these identities are not erased by technology?
5 The participation gap: Children are digital natives but remain excluded from governance decisions. How can meaningful child participation be institutionalised in digital policy-making beyond symbolic consultations?
6 Addressing the digital divide: How can universal, affordable and child-safe Internet access be achieved, particularly in low- and middle-income countries?
7 Transnational enforcement: What legal mechanisms can hold G20-based platforms accountable for cultural and other harms relating to African children?
Proposed text for inclusion in G20 outputs
For the Heads of States (‘Leaders Declaration’):
‘We acknowledge the vulnerabilities faced by children in the digital environment and commit to promoting formal mechanisms for their participation in digital policy-making processes, such as youth parliaments and inclusive consultations.
‘We will prioritise the integration of child-centred protection and participation frameworks into G20 commitments to ensure alignment with international human rights standards, as well as legal and regulatory measures to hold G20-based technology companies accountable for digital harms to children.’
For the Digital Ministers 2025 Declaration:
‘In recognition of the evolving digital landscape and the unequal risks faced by children online, there is an urgent need for G20 countries and beyond to promote and support digital platforms’ adoption of child-centred safety standards, which include age-appropriate safety by design, transparent moderation, algorithmic accountability, accessible reporting tools and clear measures to prohibit predictive profiling of minors.
Media and information literacy must be integrated into educational systems, especially for low-income countries and communities as a foundation for promoting digital citizenship and building resilience against digital harms.’
Recommendations and opportunities for G20 media
As real-time reporting through social media becomes more widespread, media organisations, guided by human rights institutions, can strengthen their role in promoting ethical standards – amplifying accurate, credible information that advances children’s rights and debunking inaccurate information that reinforces harmful stereotypes.
Media can lead efforts to develop and enforce guidelines on the ethical use of children’s images and stories in online content, ensuring their privacy, dignity and best interests, as well as amplifying the marginalised voices of children when it is in their best interest.
This includes providing practical guidance for journalists, citizen reporters and media platforms on consent, anonymisation, and child-sensitive storytelling, particularly in crisis or high-visibility situations.
For media to remain competitive, they can adopt innovative formats and storytelling approaches that are diverse, inclusive and that promote children’s rights and reflect their lived experiences, while upholding ethical standards.
News media can collaborate with civil society institutions and campaigns that prioritise digital and media literacy for children, to educate parents and children on how to identify and report disinformation online.
Additional documents and further reading
The 2021 High-Level Principles for Children Protection and Empowerment in the Digital Environment (as above)
The M20 initiative is a ‘shadow’ parallel process set up to intersect with the G20 processes. The M20 seeks to persuade the G20 network of the most powerful global economies to recognise the news media’s relevance to their concerns.
As a collaborative M20 document, this paper is a working, live document. Share your suggestions or comments for consideration at [email protected]
For more information about the G20 process, which is hosted by South Africa in 2025, visit the official G20 website.
The inaugural Summit took place at a time when Africa lags behind the rest of the world in terms of digitisation. Our innovation, capacitation and ability to harness the dividends of a digitised society are not on par with global trends and standards. In the long run, this could reduce Africa’s and Africans’ competitiveness internationally.
Moreover, trends associated with Industry 4.0 (the Fourth Industrial Revolution) – especially in digital transformation, AI, data protection and privacy – highlight the need for African policymakers to acquire the necessary, relevant skills to address policy issues, particularly in terms of global health security, in alignment with ‘The Africa We Want’ as outlined in the African Union’s Agenda 2063.
The inaugural Summit aimed at building the capacity of Members of Parliament in terms of AI, data protection and privacy with a specific focus on governance and policy considerations and challenges. It also laid a foundation to foster dialogue among policymakers, researchers and private sector stakeholders to align legislative action with Africa’s digital transformation agenda.
Each reaffirmed their commitment to the success of the Summit and underscored its timeliness and relevance, highlighting the critical importance of equipping MPs with the knowledge required to make informed legislative decisions on these complex and evolving issues.
They further indicated that the Summit was crucial for ensuring that technological innovation translates into tangible benefits for African citizens while safeguarding national interests, rights and sovereignty.
Members of Parliament
EXPRESSED appreciation to the APHRC and GSMA for convening the Summit as an opportunity to establish the foundation for long-term partnerships that enhance the role of research, data science, and innovation in policy processes across the continent
ACKNOWLEDGED the need for strengthening the connection between research and policy action by fostering dialogue among legislators, researchers and private sector stakeholders, ensuring that Africa’s policy responses are grounded in robust and locally generated evidence
WELCOMED the development of targeted data and digital literacy programmes for MPs and parliamentary staff, enhancing their ability to navigate complex issues such as AI governance, data protection, privacy and governance, and cross-border data flows
ACKNOWLEDGED that while AI offers significant opportunities for predictive healthcare, optimised resource allocation and enhanced production processes, it also presents risks related to inequality, exclusion and privacy violations if not governed appropriately.
RECOGNISED the need for robust Africa-led governance frameworks to ensure responsible AI development and application, given the potential impact on democracy, elections and governance which is key to the AU under the African Charter on Democracy, Elections and Good Governance (ACDEG)
NOTED that digital health offers significant economic potential in Africa, estimated at $4.6-billion in 2024, with projections reaching approximately $5.7-billion under a baseline scenario and $6.5-billion under an optimistic scenario by 2030. This growth is expected to substantially contribute to the region’s health sector Gross Domestic Product (GDP), highlighting the critical importance of ongoing investment and strategic development
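The growth rates implied by these projections can be checked with simple compound-growth arithmetic. The dollar figures below come from the text; the CAGR calculation itself is an illustrative sketch, not part of the communiqué:

```python
# Implied compound annual growth rate (CAGR) behind the digital-health
# projections cited above: $4.6bn (2024) rising to $5.7bn (baseline)
# or $6.5bn (optimistic) by 2030, i.e. over six years.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over `years` periods."""
    return (end / start) ** (1 / years) - 1

baseline = cagr(4.6, 5.7, 6)
optimistic = cagr(4.6, 6.5, 6)

print(f"baseline:   {baseline:.1%}")    # ≈ 3.6% per year
print(f"optimistic: {optimistic:.1%}")  # ≈ 5.9% per year
```

In other words, even the optimistic scenario assumes steady single-digit annual growth rather than a sudden leap, which is consistent with the communiqué's emphasis on sustained, long-term investment.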
EMPHASISED the transformative potential of digital health, which is revolutionising healthcare delivery through enhanced data utilisation, AI-driven decision-making and cross-border health information exchange – critical components for advancing universal health coverage across the continent
Further EMPHASISED the importance of establishing a normative standards framework for health to ensure that Digital Health Initiatives (DHIs) adhere to international standards. This includes adopting national Health Information Exchange (HIE) protocols to promote data sharing and policy harmonisation
RECOGNISED the necessity of facilitating cross-sector collaboration among the Commerce, Trade and ICT sectors to jointly explore opportunities for the digitalisation of manufacturing processes, thereby enhancing domestic productivity and supply chain resilience
Further RECOGNISED the necessity of creating an enabling environment for sustained investments in advancing connectivity networks, given their critical role in facilitating smart manufacturing
NOTED the need to promote Science, Technology, Engineering and Mathematics education initiatives to cultivate the necessary labour skills to meet the workforce demands for smart manufacturing in Africa
Further NOTED the need to integrate Industry 4.0 into national industrial policies by advocating for the explicit inclusion of digital transformation goals and metrics in national industrial development strategies and sectoral master plans
EMPHASISED the need to incentivise digital adoption among SMEs and mid-sized manufacturers by enacting laws and policies that champion targeted financial mechanisms, including tax incentives for technology upgrades, direct grants and subsidies for digital adoption, and access to affordable industrial financing tools that reduce risk
Further EMPHASISED the need to establish and enforce data governance and cybersecurity protocols by encouraging African Member States to adopt and implement robust frameworks for data ownership, sharing and cybersecurity in line with the African Union’s Malabo Convention, encompassing industrial data protection laws, operational technology (OT) cybersecurity standards and national cyber-resilience strategies for manufacturing systems
REAFFIRMED parliamentarians’ commitment, if capacitated, to facilitate the adoption of coherent and harmonised legislation that promotes responsible digital innovation, safeguards privacy and human rights, and ensures that Africa’s digital transformation remains inclusive, secure and aligned with the continent’s priorities
Further REAFFIRMED the need to drive regulatory reform to foster innovation by supporting the review and modernisation of outdated industrial regulations, and leverage the African Continental Free Trade Area (AfCFTA) agreement to create regional supply chains for smart manufacturing inputs
ACKNOWLEDGED the opportunities presented by smart manufacturing, leveraging AI and automation to enhance industrial productivity, create jobs and strengthen Africa’s global competitiveness
RECOGNISED that these developments are pivotal for achieving the Aspirations of Agenda 2063, the AU’s blueprint for inclusive growth, sustainable development and the continent’s integration
EXPRESSED the need to implement enabling regulations for investment in advanced connectivity infrastructure, e.g. 5G, especially around Special Economic Zones (SEZs). Effective spectrum policies can facilitate the rollout of 5G, including private 5G networks around SEZs where manufacturers are based, to improve production processes
Further EXPRESSED the need for governments to build awareness of the benefits of smart manufacturing among manufacturers and SMEs, and launch campaigns to educate manufacturers on smart manufacturing benefits, such as cost savings and productivity gains
EMPHASISED the need to support local research institutions and start-ups developing affordable, context-specific solutions, given the unique issues that local manufacturers face in the region. Encourage reverse engineering and the adaptation of global technologies
NOTED the need to facilitate investments in renewable energy and off-grid solutions to address energy challenges, including exploring emerging models for off-grid solutions for renewable energy generation and distribution
At the end of the fruitful engagements and deliberations, the following recommendations were made for:
Members of Parliament to:
Support legislative and regulatory frameworks on AI for health and industry to ensure safe, transparent and ethical use at national, regional and continental levels
Develop Model Laws and Policy Guidelines on Artificial Intelligence, Data Protection and Privacy with support from the APRM, AUDA-NEPAD, GSMA and the APHRC, aligned with AU Agenda 2063
Advocate for the ratification and domestication of the Malabo Convention to address emerging technologies, including AI, cross-border data flows and evolving cyber threats
Support allocation of adequate funding for digital infrastructure, research, innovation and development in AI
Support vocational and higher education in AI by increasing the budget of the relevant academic institutions
Enhance collaboration between the PAP, AU organs and civil society organisations such as the APHRC and GSMA to develop and implement an Africa-led governance framework on Artificial Intelligence, ensuring it benefits all Africans
Facilitate cross-border research projects on AI solutions that address shared healthcare burdens and supply chain resilience, and
Institutionalise the convening of an annual Africa Digital Parliamentary Summit in collaboration with the APRM, AUDA-NEPAD, GSMA and the APHRC as a formal multi-stakeholder platform to review progress on recommendations made and monitor policy harmonisation
The APHRC, GSMA and other stakeholders to:
Support Members of Parliament and parliamentary technical personnel with continuous technical training, up-to-date research and knowledge exchange on AI ethics, data governance, digital health and smart manufacturing
Support the development of national AI capability frameworks to assess readiness at sectoral levels in collaboration with Members of Parliament
Facilitate the development of tailored leadership programmes for policymakers to understand AI’s strategic value
Support AU Member States in integrating smart manufacturing, digital health and AI into national development plans, mobilising resources, technical expertise and multi-sectoral partnerships
Encourage GSMA to deepen partnerships with the African private sector, mobile network operators and innovators to expand infrastructure and services that enable AI to ensure no community is left behind
Encourage the APHRC to enhance its collaboration with the PAP and national parliaments to translate research into legislative action and to continue generating policy-relevant research on the socio-economic impacts of digital transformation
Facilitate an immersive learning experience in digital health and smart manufacturing for parliamentarians at the MWC GSMA Shanghai in June 2026, as part of supporting their capacity building in the adoption of coherent and harmonised legislation that promotes responsible digital innovation, safeguards privacy and human rights, and ensures that Africa’s digital transformation remains inclusive, secure and aligned with the continent’s priorities
Their session, marking two decades of the World Summit on the Information Society (WSIS), emphasised that affordability, poor infrastructure and a lack of digital literacy continue to block access, especially for marginalised communities.
The speakers proposed a structured three-pillar framework – inclusion, ethics, and sustainability – to ensure that no one is left behind in the digital age.
The inclusion pillar advocated for universal connectivity through affordable broadband, multilingual content, and skills-building programmes, citing India’s Digital India and Kenya’s Community Networks as examples of success.
On ethics, they called for policies grounded in human rights, data privacy and transparent AI governance, pointing to the EU’s AI Act and UNESCO guidelines as benchmarks.
The sustainability pillar highlighted the importance of energy-efficient infrastructure, proper e-waste management, and fair public-private collaboration, showcasing Rwanda’s green ICT strategy and Estonia’s e-residency program.
Dutta presented detailed data from Bangladesh, showing stark urban-rural and gender-based gaps in Internet access and digital literacy. While urban broadband penetration has soared, rural and female participation lags behind.
Encouraging trends, such as rising female enrolment in ICT education and the doubling of ICT sector employment since 2022, were tempered by low data protection awareness and a dire e-waste recycling rate of only 3%.
The session concluded with a call for coordinated global and regional action, embedding ethics and inclusion in every digital policy. The speakers urged stakeholders to bridge divides in connectivity, opportunity, access, and environmental responsibility, ensuring digital progress uplifts all communities.