International IDEA, in partnership with the Gambia Press Union (GPU) and with support from the European Union (EU), has concluded a two-day review and validation of the draft Cherno Jallow Charter of Ethical Journalism. The event, held from 6 to 7 August 2025, brought together media stakeholders to review a document aimed at strengthening ethical journalism in The Gambia.
Jainaba Faye, Head of the International IDEA Gambia Office, praised the GPU for incorporating progressive provisions in the revised Charter, particularly those related to hate speech, AI, disability inclusion and fake news, which reflect the current realities. She described the initiative as a timely response that is relevant to the Gambian context.
Faye stressed that hate speech not only undermines media integrity but also threatens national unity, fuelling misinformation and creating division and instability. She urged journalists to remain committed to fair, balanced and responsible reporting, resisting inflammatory rhetoric and prioritising dialogue over division.
EU Ambassador to The Gambia, Immaculada Roca i Cortés, emphasised the importance of incorporating principles that promote respectful and inclusive reporting, particularly to amplify the voices of marginalised communities.
She highlighted the growing influence of AI in shaping the global media landscape, noting both its opportunities and ethical challenges.
‘While AI can enhance journalistic efficiency and outreach, it also raises critical questions about bias, transparency, and accountability. It is commendable that this draft Code includes guidance on the ethical use of AI, tailored to the local context, ensuring technology serves humanity, not the other way around,’ Roca said.
Sheriff Saidykhan, the Vice President of the Gambia Press Union, explained that the Cherno Jallow Charter was originally developed to enhance public trust in the media.
He stressed that the revised Charter would empower journalists to navigate restrictive provisions in the Criminal Code that continue to hinder press freedom, while also promoting higher professional standards in Gambian journalism.
The review was conducted through a section-by-section analysis, with participants working in groups to evaluate, present, and refine the draft provisions. Feedback gathered during the sessions was scrutinised and will be incorporated into the final document.
PICTURE: Vice President of the Gambia Press Union Sheriff Saidykhan (centre left, on seats), International IDEA Country Office head Jainaba Faye (centre) and EU Ambassador Immaculada Roca i Cortés (centre right on seats) were among those at the official opening of the validation exercise of the draft Charter
This Framework will provide essential support to AU Member States to develop and implement comprehensive information literacy and digital competencies policies and strategies, and further empower African citizens to critically assess information, participate effectively in digital spaces and contribute to inclusive development.
‘This collaboration with the African Union on the Continental Framework for Information Integrity and Media and Information Literacy is a crucial step forward,’ said Rita Bissoonauth, Director of the UNESCO Liaison Office to the AU, ECA and Representative to Ethiopia.
‘UNESCO firmly believes that empowering African citizens with these essential skills is key to promoting informed civic engagement, countering disinformation and misinformation, and driving an inclusive and equitable digital transformation across the continent.’
The development of this inaugural draft Framework is guided by key continental and international recommendations and decisions, including:
relevant decisions adopted by the Peace and Security Council (PSC) of the AU at its 1214th meeting (open session) held on 13 June 2024, and its 1230th meeting (open session) held on 2 September 2024, underlining the nexus between information integrity, digital literacy, peace and security
Following the completion of this initial draft, a six-month regional consultation process with stakeholders will commence. This extensive consultation phase will ensure comprehensive feedback and broad buy-in before the Framework is presented for approval to the Communication and ICT Ministers and the Sixth Ordinary Session of the Specialised Technical Committee on Communication and ICT (CCICT-6), to be held from 3 to 6 November 2025 in Addis Ababa.
‘In an era of evolving information landscapes, safeguarding information integrity is paramount to fostering trust, informed citizenship and robust democracies across Africa,’ said the Director of the AU’s Information and Communication Directorate, Leslie Richer.
‘This framework marks an important step in equipping our citizens with the critical skills needed to navigate the digital age.’
The African Commission on Human and Peoples’ Rights (the African Commission), meeting at its 84th Ordinary Session, held virtually from 21 to 30 July 2025, recalled its mandate to promote and protect human and peoples’ rights enshrined in the African Charter on Human and Peoples’ Rights.
To this end, it reaffirmed Article 1 of the Charter which provides that States Parties shall recognise the rights, duties and freedoms enshrined therein and shall undertake to adopt legislative or other measures to give effect to them, in addition to Article 45(1) which mandates the Commission to develop norms and standards to guide States Parties in fulfilling their obligations under the African Charter.
We are cognisant of Resolution ACHPR/Res.167 (XLVIII)10 on Securing the Effective Realisation of Access to Information in Africa initiating the process of developing the Model Law, which was subsequently adopted by the Commission during its 13th Extra-Ordinary Session in 2013.
We note that since the adoption of the Model Law, 29 States in Africa have adopted access to information laws.
We further consider Executive Council Decision EX.CL/Dec.1234(XLIV), adopted during Council’s 44th Ordinary Session in February 2024, during which it requested the AU Commission to ‘work with [it] to undertake a 10-year review and update of the [Model Law], to ensure compliance with the [Declaration of Principles], and particularly to make it fit for purpose in the digital age’.
We take into account developments in the areas of freedom of expression and access to information in the digital era since the adoption of the Model Law, including as reflected in Part IV of the Declaration of Principles.
The Commission thus:
Decides to conduct a one-year continental study on developments in the areas of freedom of expression and access to information in the digital era, with a view to reviewing and updating the Model Law.
Meta is under renewed scrutiny for failing to moderate AI-generated misinformation targeting African users, as local fact-checkers and digital rights advocates warn of a widening enforcement gap that leaves millions vulnerable to scams and disinformation.
From manipulated videos of Nigerian TV anchors endorsing fake medical cures to deepfake ads featuring business moguls like Tony Elumelu promoting Ponzi schemes, Facebook and Instagram are increasingly being flooded in Africa with deceptive AI-generated content.
Yet Meta’s moderation responses remain slow, inconsistent, or absent altogether.
‘It often feels like Meta follows stricter laws in Europe, while in Africa, there’s little accountability,’ said Olayinka*, a Nigerian fact-checker who reported identical scam ads – only to see one removed quickly in Europe, while the other remained live for days.
The FactCheckHub reviewed multiple AI-generated scam ads targeting Nigerian users. While similar content aimed at European audiences was swiftly taken down, African-targeted material often remained online, even after repeated reports.
In one instance, a Facebook ad using a deepfake version of broadcaster Olamide Odekunle promoted a fraudulent trading platform. Despite being flagged by users and media, the ad remained live for weeks. Odekunle, who is also one of Nigeria’s first AI news anchors, confirmed she did not consent to the likeness being used.
Another video falsely showed TV presenter Kayode Okikiolu endorsing miracle drugs and scams. ‘It was scary at first … then anger. I was being used to mislead people,’ Okikiolu told the FactCheckHub.
Experts say Meta’s phasing out of its third-party fact-checking programme in Africa and reliance on crowdsourced Community Notes is weakening safeguards, especially in countries like Nigeria and Kenya.
‘Meta has started withdrawing fact-checking partnerships where they’re most needed,’ said Kehinde Adegboyega of the Human Rights Journalists Network of Nigeria. ‘In South Africa, they launched an Election Operations Centre and multilingual moderation. In Nigeria, we’re left to fight these battles ourselves.’
Meta’s stated policies prohibit AI-generated misinformation and manipulated media. Yet the FactCheckHub found dozens of ads and videos – violating these rules – that remained online. Critics argue that Meta enforces its guidelines unevenly, applying stricter standards in the Global North than in African countries.
The issue goes beyond financial scams. Health misinformation – such as fake AI-generated videos promoting hypertension cures – has also circulated widely. One viral video even used a fabricated X post to falsely claim that former President Muhammadu Buhari had died in 2017.
‘These deceptions are emotionally and financially damaging,’ said Olayinka. ‘And Meta’s slow action only worsens the problem.’
Fact-checkers, journalists, and civil society groups are calling for Meta to:
restore regional fact-checking partnerships
invest in language-aware AI moderation
launch election response centres beyond South Africa, and
improve platform transparency and community reporting tools
‘AI is rapidly changing the information landscape, but without enforcement and local accountability, it becomes a weapon,’ said Adegboyega. ‘Africa should not be treated as an afterthought in global tech governance.’
The Media Council of Tanzania (MCT) is marking 30 years since its establishment on 30 June 1995. The Council was founded by independent media stakeholders during a conference of journalists and media players as an independent, voluntary and non-judicial body to resolve conflicts arising from news and information published by media outlets, and to uphold media ethics.
Media stakeholders decided to establish this body with the aim of resolving their own disputes without relying on legal bodies such as courts. MCT officially began operations in 1997 after being registered under the Societies Ordinance of 1954.
Besides conflict resolution and training in various media fields, the MCT has heavily invested in efforts to improve media-related laws – a task carried out under the voluntary coalition known as CoRI (Coalition on the Right to Information). Members include the MCT itself, the Tanganyika Law Society (TLS), MISA-Tanzania (the Tanzania chapter of the Media Institute of Southern Africa), TAMWA (the Tanzania Media Women’s Association), TEF (the Tanzania Editors’ Forum), the Legal and Human Rights Centre and other civil society organisations.
The MCT and CoRI have been engaged in the media law reform process since 2006. Their first major action was opposing the 1993 Media Profession Regulation Bill, which was drafted in English and seen as restrictive to press freedom.
A nationwide stakeholder consultation, coordinated by the MCT – the secretariat to the CoRI – was held to collect opinions on access to information and media services law, essentially to debate whether to have one law for all citizens, or two, with one aimed specifically at media professionals.
In 2007 and 2008, MCT and its partners published alternative draft bills based on collected public opinions, one for each proposed law. These alternative bills were drafted by journalist, activist, politician and advocate Dr Sengodo Mvungi, lawyer Mohammed Tibanyendera, Dr Damas Ndumbaro (currently the Minister for Constitution and Legal Affairs) and their teams. The drafts were officially submitted to the government.
To help lawmakers better understand the legislative terrain, CoRI and the MCT organised a study tour to India in 2012 for eight Members of Parliament – Juma Mkamia, Hussein Mzee, Assumpter Mshama, Rebecca Mngodo, Jussa Ismail Ladhu, Ali Mzee, Ramadhani Saleh and Moza Abeid Saidy – as India was a developing nation with progressive media laws at the time.
It took almost 10 years of struggle to see media laws drafted: in 2016 the Media Services Act (MSA) was at last enacted, but only 12 of the 62 stakeholder recommendations were adopted, many of them minor.
Key concerns in the stakeholders’ recommendations – such as ministerial powers to ban media houses without a fair hearing – were ignored. Others included the power of the police to confiscate media equipment on the basis of what is merely described as ‘disclosed security information’, the criminalisation of defamation-related stories, and sedition-related provisions.
Media stakeholders were unhappy with these provisions, seeing them as an encroachment on freedom of expression, and the media decided to file cases challenging the MSA. One case, filed in the Mwanza High Court registry, challenged the Minister’s power to ban newspapers.
Another case, filed in the East African Court of Justice (EACJ) by the MCT, the LHRC and the THRDC, saw the court order the government to review 16 of the 18 MSA provisions challenged by the applicants.
After much pressure from stakeholders, in 2023 the government tabled the Written Laws (Miscellaneous Amendments) Act, proposing changes to various laws, including the MSA. Most of the proposed amendments, however, related only to reductions in prison terms.
MCT and CoRI submitted around 25 new proposals covering: the minister’s powers to ban newspapers without due process; annual registration of newspapers; criminal defamation and sedition clauses; police powers to seize media equipment, and other harsh, unlimited penalties.
The June 2023 amendments addressed only nine provisions, including: removing the director of information services’ power to control advertisements; decriminalising defamation (turning it into a civil matter); reducing the length of sentences, and limiting judicial powers to seize printing equipment. CoRI’s recommendations were again ignored, especially those touching on the sensitive areas of sedition and police powers.
Thus, the 30-year journey continues, especially with unresolved issues in the MSA – the core of media professionalism.
According to stakeholders, about 12 sections still raise serious concerns over press freedom. Chief among these is Section 7(2)(b)(iv), which forces private media to publish public interest stories under instruction from the government. To media stakeholders, these instructions undermine editorial independence and should be rejected under law.
CoRI, through the MCT, is still pushing for change, urging the National Assembly and the government to address the outstanding issues, especially after the 2025 General Election, in the hope of achieving better, fairer media laws in Tanzania.
In 2022, the European Union approved the Digital Services Act (DSA), legislation promising to protect user rights and placing a regulatory requirement on platforms to identify and mitigate risks resulting from their online services.
Crucially, the DSA stipulates that online platforms, including social media companies, must ‘give particular consideration’ to freedom of expression when deciding how to address the serious harms to society identified under this framework.
Since these platforms published their first assessments in late 2024, several challenges to this aim are becoming apparent, some derived from the ambiguity of the DSA’s key terms, others from missed opportunities to integrate global human rights standards into these assessments.
Building on the work of many organisations active in this field, the Oversight Board believes it is crucial that human rights, particularly freedom of expression, are placed at the core of systemic risk assessments.
In that spirit, this paper sets out four focus areas that could help to enhance platform accountability and improve how content is governed, as part of a consistent and effective rights-based approach:
Clarify the meaning of systemic risks. Ambiguity over this DSA term could leave the door open for overbroad interpretations, potentially incentivising restrictions on speech
Draw on global human rights standards. Fully integrate such standards across all categories of risk assessment for more consistent reporting. Mainstreaming global human rights is more effective than treating them as a standalone category
Embed stakeholder engagement into identification of risks and design of mitigations. By following the practices set out in the United Nations Guiding Principles on Business and Human Rights (UNGPs), platforms can more meaningfully show how stakeholder engagement shapes their responses to risk
Deepen analysis with data. Quantitative and qualitative data are equally valuable to reporting. Companies should more openly use appeals data supported by insights from external oversight mechanisms to show whether mitigations are effective in respecting freedom of expression and other human rights
Introduction
Recent EU regulation of online platforms introduces a new, risk-based approach to online services, focusing on how platforms may create or amplify certain types of harm. The DSA seeks to regulate social media to establish ‘harmonised rules’ for a ‘trusted online environment’ in which human rights are respected.
It requires Very Large Online Platforms (VLOPs) to disclose the steps they are taking to prevent their services from harming people and society.
The early ‘systemic risk assessments’ published by VLOPs provide insights into how platforms identify, evaluate and mitigate risks – including to human rights – arising from the design and use of their systems, as required by DSA Articles 34 and 35.
Although the DSA has the potential to enhance transparency and support human rights, the incentives it creates could also lead to excessive restrictions on freedom of expression globally.
Reconciling risk mitigation and respect for freedom of expression
Many of the risks the DSA addresses reflect the issues the Board has prioritised in its cases. For example, the DSA Recital 86 requires platforms to ‘give particular consideration to the impact on freedom of expression’ when choosing how to mitigate systemic risks.
This consideration is closely linked to the Board’s mandate, which centers on ensuring respect for freedom of expression and identifying when speech restrictions may be justified to protect other rights or interests.
Our decisions, which are binding on Meta, tackle the most challenging content moderation issues, and examine how Meta’s policies, design choices and use of automation impact people’s rights. These decisions provide insights into how to reconcile the identification and mitigation of risks on Meta’s platforms with respect for freedom of expression and other human rights
The Board emphasises that systemic risk assessments must include greater focus on respect for human rights, including freedom of expression, if they are to enhance meaningful platform accountability to users and improve content governance in line with the DSA’s objectives.
Drawing on this work and its close analysis of the first systemic risk assessments, the Board offers the following reflections.
Clarify the meaning of systemic risks
The first reports are limited by the lack of a shared understanding of what the term ‘systemic risks’ means. It is not defined in the DSA and is not rooted in global human rights law.
While the Board acknowledges the DSA’s deliberately flexible approach of allowing the meaning to develop over time, this shifts the responsibility over to platforms to thoughtfully interpret the concept.
Given this, it is understandable that platforms often default to a narrow, compliance-focused approach, which can prevent a meaningful understanding of systemic risks from developing.
The result is the reduction of systemic risk analysis to a checklist exercise, as largely seen in the initial publication of platforms’ risk assessments in 2024.
Most platform reports refer only to the DSA’s listed systemic risk categories (‘illegal content’, ‘negative effects’ on ‘fundamental rights’, democratic processes, public security, ‘gender-based violence’ and the protection of minors) and its 11 mitigation measures (e.g., ‘adapting’ and ‘adjusting’ design choices and ‘recommender systems’).
Platforms are largely silent on whether their assessments identified new risks or led to the rollout of new mitigations, and do not challenge presumed connections between their platforms and specific risks. This ambiguity, in turn, may facilitate platforms missing or obfuscating new threats and emerging trends.
Incentivising speech restrictions
From a freedom of expression perspective, ambiguity over the term’s meaning may lead to overbroad interpretations and arbitrary enforcement, incentivising excessive restrictions on speech. This could stifle diverse opinions and potentially chill platforms’ commitments to providing spaces for open discourse on challenging and sensitive topics.
Consequently, this could deter users from expressing themselves on these platforms. It also has the potential to undermine some of the benefits the DSA may bring in terms of greater access to user remedy and increased transparency.
The DSA treats human rights as a standalone category rather than integrating them across risk areas, leading to fragmented approaches to how platforms identify, assess and mitigate risks. This is especially problematic given the DSA’s novel standard that mitigations must be ‘reasonable, proportionate and effective’, which lacks clear implementation guidance.
By placing human rights in a standalone category, the DSA misses the opportunity to integrate human rights considerations comprehensively into systemic risk governance.
This prompts platforms to prioritise certain rights over others and discourages them from assessing how each risk area or ‘influencing factor’ may affect human rights as a whole.
Recent research from the CELE, the Argentina-based NGO, argues that the risk-based approach ‘pushes rights out [from] the centrestage of Internet governance, and may create a logic of “symbolic compliance” where [the] governance role of rights is further diminished’.
Drawing on global human rights standards could support a more consistent and rights-based approach to systemic risk reporting, helping align methodologies while ensuring a common framework for assessing impacts on rights.
This fragmented treatment becomes particularly evident in the context of freedom of expression. While standalone reporting may cover concerns about content moderation practices, account suspensions or misinformation, it often overlooks more nuanced issues.
For example, it may fail to consider how other risk areas like ‘illegal content’ or ‘influencing factors’ like automated detection, recommendation algorithms or search functionalities can have systemic impacts on freedom of expression, even when these effects initially seem limited. Or, in another instance, when platforms cooperate with governments on content takedowns, it is often unclear how such requests are made, recorded or acted upon.
This lack of transparency has been a recurring issue identified in the Board’s case work, which has examined the opaque and inconsistent nature of state requests (see the Shared Al Jazeera Post, UK Drill Music and Öcalan’s Isolation decisions), and their potential to suppress freedom of expression.
Platforms also rely heavily on automated systems to detect and remove content, which can, on the one hand, lead to the overenforcement of political and counter speech. On the other, reducing reliance on automation can also carry risks, with uneven consequences for different users.
The Board recently recommended that Meta examine the global implications of its decision, announced on 7 January 2025, to reduce reliance on automation for some policy areas.
Mainstream human rights
To mainstream human rights as a cross-cutting issue, platforms could benefit from greater clarity and implementation guidance on how to identify and assess risks through a rights-based framework with clear and consistent criteria.
While many platforms have developed their own approaches, they often reference a variety of frameworks in their reports, from the UNGPs to risk models from unrelated fields like finance and climate change. This leads to inconsistent evaluation of factors such as scope, scale, irremediability and likelihood of potential adverse impacts.
All this hinders the ability of stakeholders to compare risks across services, and assess industry-wide harms and limitations on users’ abilities to speak freely.
Drawing upon guidance from international treaties and the UNGPs could help ensure that efforts to identify and assess systemic risks do not unduly infringe on human rights.
The UNGPs offer a structured approach for assessing human rights impacts, emphasising stakeholder engagement, context and attention to vulnerable groups. They involve well-established guidance on evaluating the scope, scale, irremediability and likelihood of potential adverse impacts on human rights.
Using the UNGPs would enhance cross-platform comparability and ensure that risk assessments go beyond what is immediately visible or quantifiable, capturing broader and longer-term impacts embedded in platform design and operation.
Distinguish between risks and mitigation measures
To navigate these challenges, platforms also need a structured way to distinguish between prioritising risks and determining mitigation measures. A rights-based approach could help platforms apply carefully calibrated measures, rather than oversimplifying assessments based on risk prioritisation.
This approach should include an evaluation of the impacts of mitigation strategies themselves, using clear, rights-specific criteria. For example, measuring the effectiveness of content moderation would require assessing content prevalence, volume of decisions, enforcement error rates and appeal outcomes.
This would ensure that responses to risks do not generate new or disproportionate impacts, while resulting in more granular transparency and access to data to support third-party research into moderation trends.
While the DSA aims to establish a framework for evaluating mitigation measures by requiring them to be ‘reasonable, proportionate and effective’, it lacks clear implementation guidelines. As with risk identification and assessment, this leaves much to the discretion of platforms and results in divergent methodologies, which can affect the quality, effectiveness and timeliness of these mitigations.
Clearer guidance on how to evaluate and implement mitigation measures could be achieved by drawing on existing global frameworks for evaluating restrictions on speech: namely, the three-part test for legitimate restrictions on freedom of expression, based on Article 19 (3) of the International Covenant on Civil and Political Rights (ICCPR), and its relevance to companies under the UNGPs.
This would allow platforms to better evaluate mitigation strategies by integrating speech concerns and other legitimate aims. Another benefit would be ensuring that freedom of expression and civic discourse are not treated as a standalone ‘risk’ area, but mainstreamed as a cross-cutting issue.
Organisations that bridge the gap
Embracing existing frameworks would challenge assumptions that freedom of expression is always in tension with respect for other human rights and societal interests, and encourage innovative approaches to risk mitigation.
The Board applies this three-part test in all its cases to assess whether Meta’s speech interventions meet the requirements of legality, legitimate aim, and necessity and proportionality. This provides a transparent and replicable model for rights-based analysis that platforms can adopt in their own mitigation efforts.
A consistent global response
Systemic risk frameworks designed under regional regulatory regimes, such as the DSA, could end up shaping regulatory approaches in other regions. Therefore, it is crucial for the regulator to clarify the cross-cutting role of human rights across all risk areas and for platforms to adopt frameworks rooted in global human rights standards to ensure their systems effectively mitigate risks in regional jurisdictions, while maintaining global consistency.
As the Board’s extensive work demonstrates, relying on global standards requires consideration of local and regional contexts, both when identifying risks and designing mitigations. While harms to individual rights may manifest differently in different regions, applying a global framework can ensure that a company’s response is consistent and grounded in respect for freedom of expression.
Embed stakeholder engagement into assessments and mitigation design
Although all platforms refer to stakeholder engagement (such as civil society, academia and marginalised communities) in their reports, there is limited insight into how this input informs systemic risk assessments.
While platforms set out their consultation processes in detail, they do not clearly draw connections between the outputs of those consultations and their analysis of risk or evaluation of mitigations.
This reporting on stakeholder engagement also fails to align with the good industry practices outlined in the UNGPs. Specifically, without clarity on how engagements are structured, which stakeholders are involved and what concerns are raised, it is difficult to understand how stakeholder insights influence platforms’ responses to individual risks, before and after mitigations are applied.
Meaningful stakeholder engagement should prioritise the input of individuals and groups most affected by platform decisions by actively seeking expertise and diverse perspectives.
Moreover, this type of engagement is essential for considering regional and global factors when assessing systemic risks and mitigations.
While the DSA emphasises localised risk assessment, current methodologies often fail to account for local diversity (e.g., the EU’s different languages and cultures), since platforms mainly focus on structural issues affecting their systems.
This is exacerbated by a lack of targeted stakeholder engagement, leading to risk assessments that fail to capture the complexity of local contexts.
The Board’s prioritisation of stakeholder engagement in cases and policy advisory opinions highlights how such efforts can increase transparency and participation, and amplify the voices of people and communities most impacted by platform decisions (see the ‘Shaheed’ policy advisory opinion).
Additionally, the work of expert organisations, such as the GNI and DTSP forum, underline how multi-stakeholder consultations with diverse experts can enrich both risk assessments and mitigation strategies, and help platforms align these processes with a rights-based approach.
Deepen analysis with appeals data
Since the first reports by platforms are primarily qualitative, they provide limited insight into the quantitative data used to assess risks and mitigation measures. When cited, metrics are often high level and duplicate pre-existing transparency report disclosures.
Building on the Board’s experience, one way to evaluate the effectiveness of mitigation measures, particularly on freedom of expression and other human rights, is to draw on both qualitative and quantitative assessments of user appeals data, such as on decisions to remove or restore content.
Appeals are not only a mechanism for error correction; they are also a vital safeguard for protecting free speech, revealing which enforcement practices may be suppressing lawful expression.
User reports and appeals against decisions to leave content online can also highlight where enforcement practices may be failing to properly curb harmful content.
Appeals can also offer valuable insights into enforcement accuracy and residual risks. For example, data on appeals volume, geographic location, relevant policies, associated risk areas and outcomes can help determine which mitigation measures are effective over time – and which require improvement.
The Board receives hundreds of thousands of appeals annually from around the world, and its data could help highlight enforcement trends as potential indicators of risk – such as censorship of journalistic content, or over- or underenforcement of policies during a crisis – as well as help to evaluate the effectiveness of mitigations.
This, in turn, could supplement platforms’ own processes, contributing to independent oversight.
By systematically analysing, openly reporting and meaningfully integrating data into risk assessments, platforms will not only enhance the effectiveness of mitigation but also strengthen trust in their commitment to safeguard human rights.
Conclusion
Now that the initial rounds of assessments have been published, and as platforms develop their next reports, the time is right to refine methodologies to ensure that products, platform features and content moderation systems are evaluated with greater precision, depth and robustness.
A transparent and multi-stakeholder approach, bringing together diverse expertise and perspectives, is essential to support this endeavour.
It is crucial that human rights, particularly freedom of expression, are placed at the centre of systemic risk assessments to safeguard speech, rather than to serve as a mechanism for its restriction.
By drawing on its expertise, the Board is committed to helping develop rights-based approaches that place freedom of expression at their centre. Given the iterative nature of assessments, the Board encourages platforms to incorporate feedback and regulators to take these insights into account when designing guidance for platforms and auditors.
The Board looks forward to working with interested organisations and experts on systemic risk assessments and mitigation.
The new Code, published in a Special Issue of the Kenya Gazette Supplement No. 70 on 14 May 2025 (Legislative Supplement No. 40) by the Cabinet Secretary for Information, Communications and the Digital Economy, William Kabogo, updates the Second Schedule to the Media Council Act, 2013, effectively replacing the Code of Conduct for the Practice of Journalism in Kenya.
This revision addresses the shortcomings of the former Code and follows a High Court ruling that declared the Broadcasting Code unconstitutional, ordering the Media Council of Kenya (MCK) to establish age-appropriate standards to protect children and vulnerable groups.
The new Code tackles the challenges of our changing media landscape, setting firm guidelines for ethical AI use, safeguarding children and vulnerable individuals, promoting responsible user-generated content and ensuring principled editorial conduct.
The Council lauds the National Assembly’s approval of and accession to this crucial document, and its subsequent confirmation by the Clerk of the National Assembly. This is a defining moment for media regulation, professionalism and the unyielding defence of press freedom in the country.
The ratification of this Code is a testament to, and a clarion call for, progress. It demands accountability from the media and welcomes critique from the government, fostering trust and mutual respect.
Furthermore, it will streamline dispute resolution, ensuring the swift and fair handling of complaints while upholding professional integrity.
We also applaud the media community for their unwavering commitment to this transformative Code. This Code, shaped through extensive consultation across the media, legal, academic and civil society sectors, is a pact to uphold the highest journalistic standards.
The MCK reaffirms its unwavering commitment to fostering a media landscape that upholds the highest standards of integrity and serves the public. This is the dawn of a new era for ethical, fearless and impactful journalism in Kenya.
The Media Institute of Southern Africa has submitted its insights to the African Commission on Human and Peoples’ Rights (ACHPR) Public Consultation on Freedom of Expression and Artificial Intelligence (AI).
In its submissions, MISA emphasised the need for governments to establish AI legal frameworks rooted in international human rights law, incorporating transparency, accountability, data security, and clear redress mechanisms.
As a key media freedom and digital rights advocate, the organisation recognises that the transformative power of AI will directly shape the future of journalism, alter the information landscape, and impact the right to freedom of expression.
To safeguard fundamental human rights, essential safeguards and measures, including mandated human oversight, must be incorporated throughout the entire life cycle of all AI systems that impact these rights.
In its submissions, MISA highlighted key concerns, including, among others:
that Generative Artificial Intelligence (GenAI) has amplified misinformation, blurring the lines between truth and fiction, and that there is a risk that AI may influence editorial independence or journalistic decisions
that deepfakes can be used for political manipulation, character assassination or to incite violence
that the digital divide and resource gaps remain significant challenges in most African countries, where a lack of affordable and reliable internet connectivity hinders citizens from realising the full potential of emerging technologies
that most AI systems currently in use are predominantly trained on Western datasets, which inherently carry biases that often lead to discrimination against specific segments of African populations and misrepresent African contexts
the potential for State control and censorship, which can lead to increased surveillance (e.g., facial recognition and social media monitoring) to track journalists and ordinary citizens, often resulting in self-censorship
the dominance of big tech companies, which control most AI models, contributing to the decline of smaller media outlets; the monetisation and exploitation of data by these companies often reflects biases or commercial interests embedded in their AI models, which in turn distorts market dynamics by creating economic dependencies – media organisations become economically and structurally reliant on these platforms for traffic and advertising revenue, restricting their ability to maintain editorial independence, and
that most international AI instruments are non-binding in nature and fail to incorporate the perspectives of the Global South, which results in challenges in translating AI principles into practical policies
Way forward
Moving forward, governments must establish AI legal frameworks grounded in international human rights law, incorporating transparency, accountability, data security and clear mechanisms for redress.
AI systems must be designed inclusively, with input from various stakeholders, including marginalised groups, people with disabilities and other underrepresented voices. The media industry should also develop its own AI Policy or Code of Ethics, incorporating key best practices and clearly labelling all content that has been generated, augmented, or significantly altered by AI.
AI-generated content, particularly for news and informational purposes, must go through rigorous human review and editorial approval before publication.
AI systems should not influence editorial independence or journalistic decisions by making critical choices about content publication or editorial direction.
Strong policies should be implemented to close the digital gap, ensuring affordable and accessible internet, and enhancing digital literacy for marginalised communities.
There is a need to increase accountability in the utilisation of Universal Service Funds to stimulate infrastructure development. This will help close the digital divide between urban and rural communities, serving as the backbone for localised AI development and deployment.
Finally, regional and global coordination is vital to harmonise AI development and translate AI principles into practical, enforceable policies through multi-stakeholder partnerships.
In mid-July, the United States revoked the visas of several Brazilian judicial officials, including Supreme Federal Court Justice Alexandre de Moraes, accusing them of leading a ‘persecution and censorship complex’ that not only ‘violates basic rights of Brazilians, but also extends beyond Brazil’s shores to target Americans’.
Brazil’s President, Luiz Inácio Lula da Silva, slammed the decision as ‘arbitrary’ and ‘baseless’, calling it a violation of his country’s sovereignty.
The move, announced by US Secretary of State Marco Rubio on 18 July, marks the first use of a new policy aimed at foreign officials involved in what the Trump administration says are efforts to censor protected expression in the US, including ‘pressuring American tech platforms to adopt global content moderation policies’.
Pushback from the US comes as online safety laws are moving ahead in several major jurisdictions, with enforcement mechanisms already in motion.
In the UK, companies have completed their first round of illegal harms risk assessments under the Online Safety Act (OSA) and are expected to finalise children’s risk assessments next. The UK’s communications regulator, Ofcom, just launched nine new investigations under the law in June.
While the Trump administration claims the visa restrictions defend free speech and national sovereignty, they are more than just a diplomatic warning – they are forcing regulators worldwide to take stock.
The State Department, in an email to Tech Policy Press, described the visa restriction policy as a ‘global policy’, but singled out the DSA, saying the US is ‘very concerned about the DSA’s spill-over effects that impact free speech in America’.
Reiterating Rubio’s announcement, a spokesperson said, ‘We see troubling instances of foreign governments and foreign officials picking up the slack’, adding that the DSA’s impact on protected expression in the US is ‘an issue we’re monitoring’.
By targeting foreign officials it accuses of censoring speech on US soil, Washington is raising questions about how far enforcement of tech rules should extend, and what diplomatic fallout might follow.
How are regulators navigating these tensions, and what does it mean for international cooperation on platform regulation?
The EU has so far made no move (at least publicly) to adjust course. While the US continues to express concern that parts of the DSA could chill protected expression in the US, Brussels continues to position the law as a model for global digital governance.
As part of its international digital strategy, published last month, the EU committed to promoting its regulatory approach in bilateral and multilateral forums, as well as sharing its experience in implementing it. It said it will organise regional events ‘with international organisations, third-country legislators, regulators and civil society to promote freedom of expression and safety’.
Various public statements by EU officials suggest the stance is unlikely to soften. But concerns around sustained enforcement are beginning to surface internally. This concern has gained traction amid reports that the European Commission is delaying its DSA probe into X ahead of a 1 August deadline linked to trade talks with the US.
The US visa policy may not mention trade, but it’s pushing regulators to rethink how their tech rules play on the global stage.
In South Korea, the US has raised significant objections to the government’s proposed Online Platform Fairness Act, which aims to rein in the dominance of major tech platforms and protect smaller market players. The legislation has emerged as a central issue in the two countries’ ongoing trade negotiations, with officials reportedly viewing it as a greater hurdle than traditional market access topics like agricultural imports.
South Korea’s President, Lee Jae Myung, has committed to advancing these reforms as part of a broader push to strengthen oversight of both domestic and foreign tech giants. Yet US lawmakers argue that the bill closely mirrors the EU’s Digital Markets Act (DMA) and disproportionately impacts American companies.
South Korea’s ruling party is said to be reconsidering the pace of its antitrust efforts on US tech companies such as Google, Apple and Meta, amid concerns about the potential fallout on trade talks and diplomatic relations.
Similarly, Canada postponed plans to implement a digital services tax following sustained bilateral trade talks with the US, highlighting how US concerns are increasingly influencing the enforcement of tech laws worldwide.
Are US policies prompting regulators to rethink the global impact of their tech rules?
As global regulators move forward with enforcement under new online safety laws, some are taking extra care to clarify the limits of their authority. Owen Bennett, Head of International Online Safety at Ofcom, emphasised to Tech Policy Press that freedom of expression remains ‘core to what we do’.
He highlighted built-in protections within the OSA and systematic assessments of unintended impacts on speech and privacy. ‘The OSA requires only that services take action to protect users based in the UK – it does not require them to take action in relation to users based anywhere else in the world.’
As platforms navigate overlapping compliance regimes, the US is amplifying concerns that some enforcement efforts risk appearing politically motivated or extraterritorial in scope. Ofcom said it actively monitors how companies balance UK rules with obligations elsewhere.
Australia’s eSafety Commissioner voiced similar concerns, calling for proportionate and rights-respecting regulation while underscoring the need for global alignment.
‘It’s reassuring to see governments around the world taking steps to protect their citizens from online harms, including the US through the Take It Down Act,’ an eSafety spokesperson told Tech Policy Press, ‘but we’d welcome even more governments considering the role of proportionate, human rights-respecting regulation to address the more egregious online harms’.
UNESCO, which developed the Guidelines for the Governance of Digital Platforms, flagged growing concern that major technology platforms, particularly US-based firms, may be walking back earlier commitments to user safety and governance standards, amid waning ‘political pressure and a discernible shift towards a less regulated environment’.
Last year, the UN agency launched the GFR, bringing together 87 national and regional bodies to coordinate an international approach to platform governance. In response to the deregulation trend, a UNESCO spokesperson said regulators involved in the initiatives it leads are rethinking their engagement strategies, emphasising direct communication with platforms and alignment on co-regulatory goals.
A recent US trade report flagged a range of legislation – including digital taxes and data protection laws – in more than a dozen countries, from Canada to Kenya, as potential obstacles to digital trade.
Trump’s efforts to frame foreign regulations covering US tech companies as censorship or trade barriers are challenging countries to rethink how their digital rules are perceived abroad.
Brazil’s sharp pushback against the US visa policy suggests some governments may simply reject the Trump administration’s interpretation of digital regulation as censorship or protectionism.
Media capture happens when media outlets lose their independence and fall under the influence of political or financial interests. This often leads to news content that favours power instead of public accountability.
What is media capture and how has it reshaped itself in recent times?
Media capture describes how media outlets are influenced, manipulated or controlled by powerful actors – often governments or large corporations – to serve their interests. It’s an idea that helps us understand how powerful groups in society can have a negative influence on news media.
While this idea isn’t new, what has changed is how subtly and pervasively it now operates.
These groups include big technology organisations that own digital media platforms – such as X, owned by xAI (Elon Musk), and Instagram and Facebook, owned by Meta. But it’s also important to consider Google as a large search engine that shapes the news content and audience of many other platforms.
This matters because the media are important for the functioning of democratic societies. Ideally, they provide information, represent different groups and issues in society, and hold powerful actors to account.
For example, one of the key roles of the media is to provide accurate information so that citizens can decide how to vote in elections, or what they think about important issues. One big concern, then, is the effect of inaccurate or biased information on democracy.
It might also be that accurate information is harder to access because algorithms and platforms make it easier to find inaccurate or biased information. These can be intended or unintended consequences of the technology itself; either way, algorithms can amplify misinformation and fake news – especially if this content has the potential to go viral.
So, what’s particular about media capture in the global south?
This is a really interesting question that is still being investigated, but we have some ideas.
First of all, it’s useful to know that media capture scholarship from the global north emerged around the time of the 2008 financial crisis. The influence of financial institutions on business journalists was one of the first areas of study.
Since then, research in the United States has focused on the capture of government-funded media organisations like Voice of America. And on how digital platforms like Google and Facebook can lead to capture.
In the Global South, scholars have drawn attention to the importance of large media corporations in understanding media capture. For example, in Latin America, there’s a high level of what’s called ‘media concentration’. This is when many media outlets are owned by a few companies. These companies often own companies in other sectors, which means that critical reporting on business interests presents a conflict of interest.
But to focus on Africa, scholars have drawn attention to governments as a source of pressure on journalists and editors. This can be through direct pressure or what we might call ‘covert’ pressure. Withholding advertising that helps to fund media outlets is an example, or offering financial incentives to stop investigating certain topics.
Researchers are also concerned about the influence of big tech in Africa. Digital platforms like Google and Facebook can shape the news and information that citizens have access to.
Can you share some of the studies from the book?
Our book includes many interesting studies – from Colombia, Brazil and Mexico in Latin America to Ethiopia and Morocco in Africa. We’ll share a few African cases here to give an overview of the issues.
The book’s contribution on Ghana warns us that although more overt ‘old’ types of media capture may have subsided, transitional democracies can feature messier, more nuanced forms of media control. This can be evident in government pressures and through capture of regulators.
In the Morocco chapter, we see the threat to media freedom presented by digital platforms owned by global tech giants. This is known as ‘infrastructural capture’. It means news organisations become dependent on tech giants to set the rules of the game for democratic communication.
Another compelling case is Nigeria, where researchers explore ties between media ownership and political patronage. The authors argue that the Nigerian press is failing in its democratic duty because of its reliance on advertising and sponsorship income from the state.
Added to this are ineffective regulatory mechanisms and close relationships with some big businesses that own newspapers and printing presses.
How can media capture be resisted in the global south?
The studies in the book show some ways forward and we do think it’s important to be optimistic! Resistance takes many forms. Sometimes it comes through legal and policy reform aimed at increasing transparency and media diversity. In other cases, it’s driven by social movements, investigative journalists and independent media who continue to operate under pressure.
The chapter on Uganda shows that journalist groups working with media advocacy organisations can strategically act to resist government media capture and harmful regulations.
For example, to push back against one legislative change, several groups formed a temporary network called Article 29 (named after the article in the Ugandan Constitution protecting free speech) and the African Centre for Media Excellence produced a report criticising the proposed changes.
One of the chapters on Ghana also shows how networks of journalists, media associations, human rights groups and legal organisations can mobilise to push back against government influence.
These findings are echoed in Latin America, where research on Mexico and Colombia also found professional journalism to be a strong source of resistance.
The conversation must also include rethinking how we define capture itself. If we frame it only as total control, we risk missing the everyday ways influence operates – and the spaces where it can be resisted.
We would also say it’s really important that citizens are aware and alert to the issues when they think about how they access news media and what platforms they use. This is sometimes called ‘media literacy’ and is about people being more knowledgeable about where trustworthy news comes from.