Nigeria’s Minister of Information and National Orientation Alhaji Mohammed Idris has been commended for defending media freedom and the rule of law following his intervention in a recent shutdown directive issued against Badegi FM, a licensed radio station operating in Minna, Niger State.
The Broadcasting Organisations of Nigeria (BON) applauded the Minister’s timely response, which advised the Niger State Governor to route any complaints about the station’s conduct through the National Broadcasting Commission (NBC), the sole body legally empowered to regulate and sanction broadcasters in Nigeria.
In a press statement, the Executive Secretary of BON, Dr Yemisi Bamgbose, described the Governor’s order as unconstitutional and a violation of due legal process. The organisation reaffirmed that only the NBC has the statutory authority to suspend or revoke a broadcast licence after due process.
The Governor had reportedly directed the shutdown of Badegi FM, citing allegations that the station was hostile to his administration. However, the move sparked nationwide concern among media stakeholders, who warned against undermining press freedom.
Bamgbose warned that unilateral actions by political leaders against the media threaten the democratic space.
‘While we encourage all media organisations to operate responsibly and within legal frameworks, no government – federal, state or local – has the right to close down a duly licensed broadcaster without going through the proper regulatory channels,’ he stated.
He added that suppressing media voices through executive orders erodes public trust and weakens democratic institutions.
‘The media serves both the government and the governed. Undermining its role without due process is not only unconstitutional but dangerous for national development,’ he said.
Bamgbose urged all levels of government to work in partnership with the media and respect the oversight role of regulatory bodies like the NBC.
He also called on journalists to maintain high ethical standards and professionalism, even in politically charged environments.
He concluded that BON reaffirmed its commitment to press freedom and responsible broadcasting, and warned against any actions that may stifle the right of Nigerians to access diverse and credible information.
The IPI global network calls for urgent and renewed international attention to the case of American-British journalist Christopher Allen. We urge the international community to pressure South Sudanese authorities to conduct an independent and transparent investigation into Allen’s killing.
Allen, a freelance photojournalist, was killed by government forces on 26 August 2017 while documenting the civil war in South Sudan. He was the first foreign journalist to lose his life reporting on the conflict.
As fighting escalated on 26 August, Allen was struck by multiple rounds, including one to his head, in what rebel forces believe was a deliberately targeted attack by government troops. Rebel forces claimed Allen was easily identifiable as a journalist at the time of the attack.
According to a rebel spokesperson, Allen wore a vest with the word ‘PRESS’ displayed large and visible, carried two cameras and bore no weapons. Photographic evidence following Allen’s death suggests his body was subjected to inhumane treatment constituting war crimes on the part of government forces.
South Sudanese authorities disputed the rebels’ account of Allen’s death, arguing that there was no indication he was a journalist. Authorities denied Allen the status of civilian in the wake of his death, labelling him a ‘white rebel’ who entered the country illegally, and stating that anyone accompanying rebels, including journalists, would be treated as combatants.
A report on the killing published by the South Sudan Investigation Committee in March 2024 failed to achieve any measure of accountability for his death, concluding that Allen was killed accidentally amid cross-fire.
The investigation, which was conducted without any involvement from Allen’s family, failed to address the horrific mistreatment of his body in the aftermath of his murder and blatantly ignored basic international legal frameworks.
United States officials publicly denounced the investigation, alongside Allen’s family and human rights and press freedom organisations.
‘The killing of journalist Christopher Allen and the subsequent failure of the South Sudanese government to adequately investigate his death and bring his killers to justice demonstrates an alarming display of impunity on the part of authorities,’ Amy Brouillette, IPI Director of Advocacy, said.
‘Journalists covering conflict must be free to do their jobs without fear of being targeted. We urgently call for a renewed investigation into Allen’s death that ensures accountability and compliance with international law.’
The Media Council of Tanzania (MCT) is marking 30 years since its establishment on 30 June 1995. The Council was founded by independent media stakeholders during a conference of journalists and media players as an independent, voluntary and non-judicial body to resolve conflicts arising from news and information published by media outlets, and to uphold media ethics.
Media stakeholders decided to establish this body with the aim of resolving their own disputes without relying on legal bodies such as courts. MCT officially began operations in 1997 after being registered under the Societies Ordinance of 1954.
Besides conflict resolution and training in various media fields, the MCT has heavily invested in efforts to improve media-related laws – a task carried out under the voluntary coalition known as CoRI (Coalition on the Right to Information). Members include the MCT itself, the Tanganyika Law Society (TLS), MISA-Tanzania (the Tanzania chapter of the Media Institute of Southern Africa), TAMWA (the Tanzania Media Women’s Association), TEF (the Tanzania Editors’ Forum), the Legal and Human Rights Centre and other civil society organisations.
The MCT and CoRI have been engaged in the media law reform process since 2006. Their first major action was opposing the 1993 Media Profession Regulation Bill, which was drafted in English and seen as restrictive to press freedom.
A nationwide stakeholder consultation, coordinated by the MCT – the secretariat to the CoRI – was held to collect opinions on access to information and media services law, essentially to debate whether to have one law for all citizens, or two, with one aimed specifically at media professionals.
In 2007 and 2008, MCT and its partners published alternative draft bills based on collected public opinions, one for each proposed law. These alternative bills were drafted by journalist, activist, politician and advocate Dr Sengodo Mvungi, lawyer Mohammed Tibanyendera, Dr Damas Ndumbaro (currently the Minister for Constitution and Legal Affairs) and their teams. The drafts were officially submitted to the government.
To help lawmakers better understand the legislative terrain, CoRI and the MCT organised a study tour to India in 2012 for eight Members of Parliament – Juma Mkamia, Husesin Mzee, Assumpter Mshama, Rebecca Mngodo, Jussa Ismail Ladhu, Ali Mzee, Ramadhani Saleh and Moza Abeid Saidy – as India was a developing nation with progressive media laws at the time.
It took almost 10 years of struggle before media laws were drafted in 2016, when the Media Services Act (MSA) was at last enacted – but only 12 of the 62 stakeholder recommendations were adopted, many of them minor.
Key concerns in the stakeholders’ recommendations – such as ministerial powers to ban media houses without a fair hearing – were ignored. Others included: the power of the police to confiscate media equipment on the basis of what is merely termed ‘disclosed security information’; the criminalisation of defamation-related stories; and sedition-related provisions.
Media stakeholders were unhappy with these provisions, seeing them as impacting upon freedom of expression, and the media in general decided to file cases challenging the MSA. One of the cases, filed in the Mwanza High Court registry, challenged the Minister’s powers to ban newspapers.
In another case, filed in the East African Court of Justice (EACJ) by MCT, LHRC and THRDC, 16 of the 18 MSA provisions tabled by the applicants were ordered to be reviewed by the government.
After much pressure from stakeholders, in 2023 the government tabled the Written Laws (Miscellaneous Amendments) Act, proposing changes to various laws, including the MSA. The government came out with a proposal for amendments, but most of these related to a reduction in prison time.
MCT and CoRI submitted around 25 new proposals covering: the minister’s powers to ban newspapers without due process; annual registration of newspapers; criminal defamation and sedition clauses; police powers to seize media equipment, and other harsh, unlimited penalties.
The June 2023 amendments addressed only nine provisions, including: removing the director of information services’ power to control advertisements; decriminalising defamation (turning it into a civil matter); reducing the length of sentences, and limiting judicial powers to seize printing equipment. CoRI’s recommendations were ignored again, especially those relating to the sensitive areas of sedition and the powers of the police.
Thus, the 30-year journey continues, especially with unresolved issues in the MSA – the core of media professionalism.
According to stakeholders, about 12 sections still raise serious concerns over press freedom. Chief among these is Section 7(2)(b)(iv), which forces private media to publish public interest stories under instruction from the government. To media stakeholders, these instructions undermine editorial independence and should be rejected under law.
CoRI through MCT is still pushing for change, urging the National Assembly and the government to address its outstanding issues, especially after the 2025 General Election, in the hopes of achieving better, fairer media laws in Tanzania.
Escalating Assaults on Journalists’ Safety are a Threat to Democracy
Overview
Attacks on journalists worldwide are an affront to democracy’s existential commitment to information integrity. Solidarity is needed with these frontline fighters for information integrity. Sustainability is required so they can do their work without fear, and equality is necessary so that journalists have the right to justice and there is an end to impunity for those who violate human rights.
The M20 is an opportunity to ‘showcase’ to the G20 the increase in killings, murders, kidnappings and detentions of journalists around the world – especially in war zones such as Gaza, Sudan, Ukraine, Syria, DRC, among others. These increases have been noted this year by organisations that include the Committee to Protect Journalists (CPJ), Reporters Sans Frontières (RSF) and UNESCO.
Powering the surge in attacks is a 20-year trajectory (2006–2024) in which 1 700 journalists were killed. The majority of these crimes – 85% – go unpunished. The second monstrous layer of this anti-democratic trajectory is the online bullying of women journalists (especially Black and LGBTQI+ journalists), consisting of intimidation, harassment, doxxing and trolling, and threats of rape and murder in the cyber sphere – such as on social media – which is an emotionally violent zone.
Therefore, this year’s G20 themes of solidarity, equality, and sustainability must apply directly to stop attacks on journalists. These G20 ideals cannot be realised unless journalism as a public good, which values information integrity, is fought for and protected as a treasure to democracy.
Alliances with civil society (and governments whose values align for a more peaceful and just world), and international collaboration are needed. While signing multilateral agreements on occasions like World Press Freedom Day is a valuable start, there is a pressing need to take concrete action beyond symbolic gestures.
Proposal to the G20
The proposal to the G20 is to hear, discuss, acknowledge and act against the ever-increasing killing of journalists, as well as online sexual violence against women journalists.
The Rio G20 leadership declaration says: ‘Acknowledging that gender-based violence, including sexual violence against women and girls, is alarmingly high across public and private spheres, we condemn every form of discrimination against women and girls and recall our commitment to end gender-based violence, including sexual violence and combat misogyny online and offline.’
Women journalists are adversely affected by bots, trolls and politicians on platforms that are adversarial by algorithmic design and by an absence of content moderation, and which leave little room for discussion or nuance.
These actors spread hatred of a sexualised nature, often spilling over into real-life spaces, as in the cases of journalists Maria Ressa and Ferial Haffajee.
In the G20 interpretation of equality, solidarity and sustainability, neither equality (for all genders), solidarity (with all those suffering from war mongers) nor sustainability (healing the planet and ending poverty) can be reached without freedom to do journalism as a public good. Therefore, journalists’ safety, protection and acknowledgment of their role in democracy should be an urgent M20/G20 goal.
This Policy Brief argues that signed agreements must be followed through with action against perpetrators involving new levels of co-operation between civil society, governments, international and continental agencies such as the United Nations, African Union and European Union, as well as between media freedom and journalist safety networks and advocacy organisations such as the International Centre For Journalists (ICFJ), the Journalist Safety Network and SANEF.
Defining the critical issue and role of the G20 and key issues
The global state of press freedom is now classified as a ‘difficult situation’, according to the RSF 2025 report. This is the first time this has been the case in the index’s history. While one of the main reasons was the ‘economic factor’ – the sustainability of journalism – RSF noted that physical attacks continue.
The United States is leading the economic depression, even as it is recognised, through Silicon Valley, as a global leader in AI and social media apps. Similarly, online attacks – enabled by billion-dollar profit-making Big Tech companies such as Alphabet (Google’s parent company), Meta (Facebook’s parent company) and platforms like X and TikTok – operate with little to no accountability or regulation concerning journalist safety.
The situation in Palestine (163rd on the RSF index) is disastrous; in Gaza, the Israeli army has destroyed newsrooms and killed nearly 200 journalists. As of 16 June 2025, CPJ’s preliminary investigations showed at least 185 journalists and media workers were among the tens of thousands killed in Gaza, the West Bank, Israel and Lebanon since the war began, making it the deadliest period for journalists since CPJ began gathering data in 1992.
As BBC News, Agence France-Presse (AFP), Associated Press (AP) and Reuters have noted, those reporting the conflict from Gaza now face ‘the same dire circumstances as those they are covering’. That is, engineered starvation.
This unprecedented extreme violence against journalists in conflict zones takes place against a broader assault on journalists globally. For over a decade, research has shown that women and journalists of colour are particularly targeted. Seventy percent of women journalists have experienced online and offline threats, harassment or attacks, and a third have considered leaving the profession as a result, according to a 2019 report by the International Women’s Media Foundation.
Africa: Online bullying and cybermisogyny
Women journalists in certain African countries have encountered extreme online harassment due to their journalism and/or for having a public profile, according to a study by Alana Barton (Reader in Criminology in the Department of Law and Criminology, Edge Hill University in Lancashire, UK) and Hannah Storm (award-winning journalist, producer and director).
This has not abated, with 73% of women journalists saying they experienced harassment and bullying on platforms such as X and Facebook, according to 2022 research by Julie Posetti and Nabeelah Shabbir in their study, The Chilling: Global Study of Online Violence Against Women Journalists.
The ICFJ/UNESCO study reveals that deep-dive research into attacks on journalists in African countries includes online harassment, disinformation and smear campaigns, sexist and hateful speech, as well as trolling with threats of rape and death.
In some African and other countries, this occurs against the backdrop of authoritarian regimes that place the free press under attack.
Studies reveal patterns vis-à-vis the digital harassment of women journalists on the continent, including self-censoring and exiting the journalistic field. The research found that 75% of women journalists surveyed in Kenya had experienced online harassment, particularly when covering politics and sport.
Harassment not only leads women to stop using digital tools but also to withdraw from the profession, wrote Moraa Obiria (senior gender writer at Nation Media Group in Kenya). Those who resist face being silenced further.
Globally, cyberspace reflects and amplifies harassment, sexism and other forms of discrimination against journalists, including homophobia, racism, and religious hate speech.
South Africa
In South Africa, women journalists of all races who work in the political reporting and investigative spaces have been targeted with threats of rape and murder, and trolling and doxxing. Journalists include Ferial Haffajee (Associate Editor), Tshidi Madia (political broadcast journalist) and Karyn Maughan (legal journalist).
‘Much like your casual school bully, online trolls will do everything in their power to get under your skin. They will persist despite you ignoring them […]
‘In 2018, when it became manifest that I will not succumb to social media bullying, the efforts to intimidate me became more direct and sinister. In August 2018, I was sent a picture of a gun by an ANC Women’s League leader for sending her probing questions about a meeting she attended.
‘There was an attempt to dox me — an effort to intimidate me by sharing my address — but, thankfully, the post was taken down … There were full-blown threats to rape and kill me by Zuma supporters.
‘While my employer and the South African National Editors’ Forum came to my defence, I never felt more alone in that ordeal. I knew I was not the only one facing this, and I also knew that my seniors did not know how to navigate this terrain.
‘What do you do in this instance? Do you send legal letters to thousands of bots? By this time, attacks on female political journalists in South Africa had become far, far worse.’
Journalists should not simply ‘suck it up’, says Hunter, who links mental health to safety and media freedom in the book. Her vocal advocacy for mental health awareness in journalism earned her the prestigious Nat Nakasa Award for Courageous Journalism in 2019.
Collaborations needed to enforce platform accountability
Platforms such as X and Facebook have permitted sexism in a vile fashion, and have failed to prioritise dealing with threats against women journalists. Reports of cybermisogyny on social media across the continent indicate that harassment, such as threats of rape and murder, often leads women journalists to leave social media or the industry altogether.
According to one report on women journalists and safety, there is a complete lack of accountability. South African women describe it as a free-for-all, saying they are advised to report incidents to the police – but when they do, the officers appear unfamiliar with terms like ‘emotional violence’ or ‘cybermisogyny’.
It is the responsibility of traditional and Big Tech, as well as governments and civil society, to take action and effect change. Early warning systems need to be developed to monitor, predict and prevent online violence escalation.
Research on cyberbullying in South Africa, as referenced in this Policy Brief, also indicates that currently, only NGOs in the civil society space and some news organisations fully recognise the importance and nature of physical violence against journalists, and are attempting to effect change.
But they cannot act alone. Governments on both the continent and globally must hold Big Tech accountable to curb unregulated online bullying. Pressure needs to be applied for companies to take coordinated action in stopping harassment, identifying offenders and ensuring they face criminal consequences.
Urgent continental, intercontinental and global collaborations are needed to tackle Big Tech companies for regulation, naming, shaming and sanctions.
Other recommendations, outlined in The Chilling, include the adoption of a more inclusive approach to recognise and call out the intersectional nature of online violence, and for law enforcement agencies to develop gender-sensitive skills to be equipped to tackle these cases.
Proposed text for inclusion in G20 output
For the Heads of State (‘Leaders’ declaration’):
‘We recognise with deep concern the unprecedented rise in physical and online assaults on journalists, and we unequivocally condemn such acts as grave violations of international law and fundamental human rights.
‘We call on governments to demand immediate protection for targeted journalists and unimpeded humanitarian access.
‘We call upon all governments to strengthen and enhance efforts to ensure the safety and protection of journalists, uphold freedom of the press, and foster an environment where media professionals can carry out their vital work without fear or intimidation, let alone being targeted in war and subjected to generalised starvation.
‘We recognise the sizeable role played by large technology and social media companies in the proliferation of online harassment, particularly targeting women journalists.
‘We call on governments to develop and implement robust regulatory frameworks that ensure accountability of digital platforms for protecting safety and human rights online, including of journalists, and empower state organs to effectively respond to online criminal acts.’
Recommendations and opportunities for G20 media
A joint campaign opportunity awaits: civil society, progressive democratic governments, journalist organisations and international agencies can collaborate to stop physical violence against journalists, as well as emotional violence online.
The media need co-operation and alliances (with international agencies and national governments) to hold Big Tech accountable.
Safety measures and equipment need to be provided to journalists in conflict areas and war zones, and there can be no impunity for perpetrators who fail to respect reporters as civilians.
News organisations need to develop gender-awareness protocols to respond to online violence, to stop victim-blaming, and not to feel restricted or silenced in their response.
Acknowledgements and call for comments
This policy brief was commissioned within the framework of the M20 ahead of the G20 Summit.
The M20 initiative is a ‘shadow’ parallel process set up to intersect with the G20 processes. The M20 seeks to persuade the G20 network of the most powerful global economies to recognise the news media’s relevance to their concerns.
As a collaborative M20 document, this paper is a working, live document. Share your suggestions or comments for consideration to [email protected]
This Policy Brief can be republished under a Creative Commons licence – i.e. provided that you credit the source, indicate any changes to the text, and link back to the original article on the M20 site.
In 2022, the European Union approved the Digital Services Act (DSA), legislation promising to protect user rights and placing a regulatory requirement on platforms to identify and mitigate risks resulting from their online services.
Crucially, the DSA stipulates that online platforms, including social media companies, must ‘give particular consideration’ to freedom of expression when deciding how to address the serious harms to society identified under this framework.
Since these platforms published their first assessments in late 2024, several challenges to this aim are becoming apparent, some derived from the ambiguity of the DSA’s key terms, others from missed opportunities to integrate global human rights standards into these assessments.
Building on the work of many organisations active in this field, the Oversight Board believes it is crucial that human rights, particularly freedom of expression, are placed at the core of systemic risk assessments.
In that spirit, this paper sets out four focus areas that could help to enhance platform accountability and improve how content is governed, as part of a consistent and effective rights-based approach:
Clarify the meaning of systemic risks. Ambiguity over this DSA term could leave the door open for overbroad interpretations, potentially incentivising restrictions on speech.
Draw on global human rights standards. Fully integrate such standards across all categories of risk assessment for more consistent reporting. Mainstreaming global human rights is more effective than treating them as a standalone category.
Embed stakeholder engagement into identification of risks and design of mitigations. By following the practices set out in the United Nations Guiding Principles on Business and Human Rights (UNGPs), platforms can more meaningfully show how stakeholder engagement shapes their responses to risk.
Deepen analysis with data. Quantitative and qualitative data are equally valuable to reporting. Companies should more openly use appeals data, supported by insights from external oversight mechanisms, to show whether mitigations are effective in respecting freedom of expression and other human rights.
Introduction
Recent EU regulation of online platforms introduces a new, risk-based approach to online services, focusing on how platforms may create or amplify certain types of harm. The DSA seeks to regulate social media to establish ‘harmonised rules’ for a ‘trusted online environment’ in which human rights are respected.
It requires Very Large Online Platforms (VLOPs) to disclose the steps they are taking to prevent their services from harming people and society.
The early ‘systemic risk assessments’ published by VLOPs provide insights into how platforms identify, evaluate and mitigate risks – including to human rights – arising from the design and use of their systems, as required by DSA Articles 34 and 35.
Although the DSA has the potential to enhance transparency and support human rights, the incentives it creates could also lead to excessive restrictions on freedom of expression globally.
Reconciling risk mitigation and respect for freedom of expression
Many of the risks the DSA addresses reflect the issues the Board has prioritised in its cases. For example, the DSA Recital 86 requires platforms to ‘give particular consideration to the impact on freedom of expression’ when choosing how to mitigate systemic risks.
This consideration is closely linked to the Board’s mandate, which centers on ensuring respect for freedom of expression and identifying when speech restrictions may be justified to protect other rights or interests.
Our decisions, which are binding on Meta, tackle the most challenging content moderation issues, and examine how Meta’s policies, design choices and use of automation impact people’s rights. These decisions provide insights into how to reconcile the identification and mitigation of risks on Meta’s platforms with respect for freedom of expression and other human rights.
The Board emphasises that systemic risk assessments must include greater focus on respect for human rights, including freedom of expression, if they are to enhance meaningful platform accountability to users and improve content governance in line with the DSA’s objectives.
Drawing on this work and its close analysis of the first systemic risk assessments, the Board offers the following reflections.
Clarify the meaning of systemic risks
The first reports are limited by the lack of a shared understanding of what the term ‘systemic risks’ means. It is not defined in the DSA and is not rooted in global human rights law.
While the Board acknowledges the DSA’s deliberately flexible approach of allowing the meaning to develop over time, this shifts responsibility onto platforms to interpret the concept thoughtfully.
Given this, it is understandable that platforms often default to a narrow, compliance-focused approach, which can hinder a meaningful understanding of systemic risks developing.
The result is the reduction of systemic risks analysis to a checklist exercise, as largely seen in the initial publication of platforms’ risk assessments in 2024.
Most platform reports refer only to the DSA’s listed systemic risk categories (‘illegal content’, ‘negative effects’ on ‘fundamental rights’, democratic processes, public security, ‘gender-based violence’ and the protection of minors) and its 11 mitigation measures (e.g., ‘adapting’ and ‘adjusting’ design choices and ‘recommender systems’).
Platforms are largely silent on whether their assessments identified new risks or led to the rollout of new mitigations, and do not challenge presumed connections between their platforms and specific risks. This ambiguity, in turn, may facilitate platforms missing or obfuscating new threats and emerging trends.
Incentivising speech restrictions
From a freedom of expression perspective, ambiguity over the term’s meaning may lead to overbroad interpretations and arbitrary enforcement, incentivising excessive restrictions on speech. This could stifle diverse opinions and potentially chill platforms’ commitments to providing spaces for open discourse on challenging and sensitive topics.
Consequently, this could deter users from expressing themselves on these platforms. It also has the potential to undermine some of the benefits the DSA may bring in terms of greater access to user remedy and increased transparency.
The DSA treats human rights as a standalone category rather than integrating it across risk areas, leading to fragmented approaches on how platforms identify, assess and mitigate risks. This is especially problematic given the DSA’s novel standard that mitigations must be ‘reasonable, proportionate and effective’, which lacks clear implementation guidance.
By placing human rights in a standalone category, the DSA misses the opportunity to integrate human rights considerations comprehensively into systemic risk governance.
This prompts platforms to prioritise certain rights over others and discourages them from assessing how each risk area or ‘influencing factor’ may affect human rights as a whole.
Recent research from the CELE, the Argentina-based NGO, argues that the risk-based approach ‘pushes rights out [from] the centrestage of Internet governance, and may create a logic of “symbolic compliance” where [the] governance role of rights is further diminished’.
Drawing on global human rights standards could support a more consistent and rights-based approach to systemic risk reporting, helping align methodologies while ensuring a common framework for assessing impacts on rights.
This fragmented treatment becomes particularly evident in the context of freedom of expression. While standalone reporting may cover concerns about content moderation practices, account suspensions or misinformation, it often overlooks more nuanced issues.
For example, it may fail to consider how other risk areas like ‘illegal content’ or ‘influencing factors’ like automated detection, recommendation algorithms or search functionalities can have systemic impacts on freedom of expression, even when these effects initially seem limited. Or, in another instance, when platforms cooperate with governments on content takedowns, it is often unclear how such requests are made, recorded or acted upon.
This lack of transparency has been a recurring issue identified in the Board’s case work, which has examined the opaque and inconsistent nature of state requests (see Shared Al Jazeera Post, UK Drill Music and Öcalan’s Isolation decisions), and their potential to suppress freedom of expression.
Platforms also rely heavily on automated systems to detect and remove content, which can, on the one hand, lead to the overenforcement of political and counter speech. On the other, reducing reliance on automation can also carry risks, with uneven consequences for different users.
The Board recently recommended that Meta examine the global implications of its decision, announced on 7 January 2025, to reduce reliance on automation for some policy areas.
Mainstream human rights
To mainstream human rights as a cross-cutting issue, platforms could benefit from greater clarity and implementation guidance on how to identify and assess risks through a rights-based framework with clear and consistent criteria.
While many platforms have developed their own approaches, they often reference a variety of frameworks in their reports, from the UNGPs to risk models from unrelated fields like finance and climate change. This leads to inconsistent evaluation of factors such as scope, scale, irremediability and likelihood of potential adverse impacts.
All this hinders the ability of stakeholders to compare risks across services, and assess industry-wide harms and limitations on users’ abilities to speak freely.
Drawing upon guidance from international treaties and the UNGPs could help ensure that efforts to identify and assess systemic risks do not unduly infringe on human rights.
The UNGPs offer a structured approach for assessing human rights impacts, emphasising stakeholder engagement, context and attention to vulnerable groups. They involve well-established guidance on evaluating the scope, scale, irremediability and likelihood of potential adverse impacts on human rights.
Using the UNGPs would enhance cross-platform comparability and ensure that risk assessments go beyond what is immediately visible or quantifiable, capturing broader and longer-term impacts embedded in platform design and operation.
Distinguish between risks and mitigation measures
To navigate these challenges, platforms also need a structured way to distinguish between prioritising risks and determining mitigation measures. A rights-based approach could help platforms apply carefully calibrated measures, rather than oversimplifying assessments based on risk prioritisation.
This approach should include an evaluation of the impacts of mitigation strategies themselves, using clear, rights-specific criteria. For example, measuring the effectiveness of content moderation would require assessing content prevalence, volume of decisions, enforcement error rates and appeal outcomes.
This would ensure that responses to risks do not generate new or disproportionate impacts, while resulting in more granular transparency and access to data to support third-party research into moderation trends.
While the DSA aims to establish a framework for evaluating mitigation measures by requiring them to be ‘reasonable, proportionate and effective’, it lacks clear implementation guidelines. As with risk identification and assessment, this leaves much to the discretion of platforms and results in the use of divergent methodologies, which can affect the quality, effectiveness and timeliness of these mitigations.
Clearer guidance on how to evaluate and implement mitigation measures could be achieved by drawing on existing global frameworks for evaluating restrictions on speech: namely, the three-part test for legitimate restrictions on freedom of expression, based on Article 19 (3) of the International Covenant on Civil and Political Rights (ICCPR), and its relevance to companies under the UNGPs.
This would allow platforms to better evaluate mitigation strategies by integrating speech concerns and other legitimate aims. Another benefit would be ensuring that freedom of expression and civic discourse are not treated as a standalone ‘risk’ area, but mainstreamed as a cross-cutting issue.
Organisations that bridge the gap
Embracing existing frameworks would challenge assumptions that freedom of expression is always in tension with respect for other human rights and societal interests, and encourage innovative approaches to risk mitigation.
The Board applies this three-part test in all our cases to assess whether Meta’s speech interventions meet the requirements for legality, legitimate aim, and necessity and proportionality. This provides a transparent and replicable model for rights-based analysis that platforms can adopt in their own mitigation efforts.
A consistent global response
Systemic risk frameworks designed under regional regulatory regimes, such as the DSA, could end up shaping regulatory approaches in other regions. Therefore, it is crucial for the regulator to clarify the cross-cutting role of human rights across all risk areas and for platforms to adopt frameworks rooted in global human rights standards to ensure their systems effectively mitigate risks in regional jurisdictions, while maintaining global consistency.
As the Board’s extensive work demonstrates, relying on global standards requires consideration of local and regional contexts, both when identifying risks and designing mitigations. While harms to individual rights may manifest differently in different regions, applying a global framework can ensure that a company’s response is consistent and grounded in respect for freedom of expression.
Embed stakeholder engagement into assessments and mitigation design
Although all platforms refer to stakeholder engagement (such as civil society, academia and marginalised communities) in their reports, there is limited insight into how this input informs systemic risk assessments.
While platforms set out their consultation processes in detail, they do not clearly draw connections between the outputs of those consultations and their analysis of risk or evaluation of mitigations.
This reporting on stakeholder engagement also fails to align with the good industry practices outlined in the UNGPs. Specifically, given the lack of clarity on how engagements are structured, which stakeholders are involved and what concerns are raised, it is difficult to understand how stakeholder insights influence platforms’ responses to individual risks, before and after mitigations are applied.
Meaningful stakeholder engagement should prioritise the input of individuals and groups most affected by platform decisions by actively seeking expertise and diverse perspectives.
Moreover, this type of engagement is essential for considering regional and global factors when assessing systemic risks and mitigations.
While the DSA emphasises localised risk assessment, current methodologies often fail to account for local diversity (e.g., the EU’s different languages and cultures), since platforms mainly focus on structural issues affecting their systems.
This is exacerbated by a lack of targeted stakeholder engagement, leading to risk assessments that fail to capture the complexity of local contexts.
The Board’s prioritisation of stakeholder engagement in cases and policy advisory opinions highlights how such efforts can increase transparency and participation, and amplify the voices of people and communities most impacted by platform decisions (see the ‘Shaheed’ policy advisory opinion).
Additionally, the work of expert organisations, such as the GNI and DTSP forum, underlines how multi-stakeholder consultations with diverse experts can enrich both risk assessments and mitigation strategies, and help platforms align these processes with a rights-based approach.
Deepen analysis with appeals data
Since the first reports by platforms are primarily qualitative, they provide limited insight into the quantitative data used to assess risks and mitigation measures. When cited, metrics are often high level and duplicate pre-existing transparency report disclosures.
Building on the Board’s experience, one way to evaluate the effectiveness of mitigation measures, particularly on freedom of expression and other human rights, is to draw on both qualitative and quantitative assessments of user appeals data, such as on decisions to remove or restore content.
Appeals are not only a mechanism for error correction; they are also a vital safeguard for protecting free speech by revealing which enforcement practices may be suppressing lawful expression.
User reports and appeals against decisions to leave content online can also highlight where enforcement practices may be failing to properly curb harmful content.
Appeals can also offer valuable insights into enforcement accuracy and residual risks. For example, data on appeals volume, geographic location, relevant policies, associated risk areas and outcomes can help determine which mitigation measures are effective over time – and which require improvement.
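The kind of appeals-based metric described above can be sketched very simply. The records, field names and figures below are hypothetical illustrations, not any platform’s actual schema or data: the idea is that the share of appealed removals reversed on appeal, broken down by policy, serves as a rough proxy for over-enforcement that an effective mitigation should drive down over time.

```python
from collections import Counter

# Hypothetical appeal records: (policy, region, outcome).
# An "overturned" outcome means the original removal was reversed on appeal.
appeals = [
    ("hate_speech", "EU", "overturned"),
    ("hate_speech", "EU", "upheld"),
    ("misinformation", "EU", "overturned"),
    ("misinformation", "MENA", "overturned"),
    ("hate_speech", "MENA", "upheld"),
]

def overturn_rate_by_policy(records):
    """Share of appealed removals reversed per policy -- a rough
    proxy for over-enforcement under that policy."""
    totals, overturned = Counter(), Counter()
    for policy, _region, outcome in records:
        totals[policy] += 1
        if outcome == "overturned":
            overturned[policy] += 1
    return {p: overturned[p] / totals[p] for p in totals}

print(overturn_rate_by_policy(appeals))
# hate_speech ~ 0.33, misinformation = 1.0 on this toy sample
```

The same grouping could be run by region, policy area or time window to see where mitigations are working and where they need improvement, as the paragraph above suggests.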
Receiving hundreds of thousands of appeals annually from around the world, the Board’s data could help highlight enforcement trends as potential indicators of risks, such as censorship of journalistic content, and over- or underenforcement of policies during a crisis, as well as help to evaluate the effectiveness of mitigations.
This, in turn, could supplement platforms’ own processes, contributing to independent oversight.
By systematically analysing, openly reporting and meaningfully integrating data into risk assessments, platforms will not only enhance the effectiveness of mitigation but also strengthen trust in their commitment to safeguard human rights.
Conclusion
Now that the initial rounds of assessments have been published, and as platforms develop their next reports, the time is right to refine methodologies to ensure that products, platform features and content moderation systems are evaluated with greater precision, depth and robustness.
A transparent and multi-stakeholder approach, bringing together diverse expertise and perspectives, is essential to support this endeavour.
It is crucial that human rights, particularly freedom of expression, are placed at the centre of systemic risk assessments to safeguard speech, rather than to serve as a mechanism for its restriction.
By drawing on its expertise, the Board is committed to helping develop rights-based approaches that centrally position freedom of expression. Given the iterative nature of assessments, the Board encourages platforms to incorporate feedback and regulators to take these insights into account when designing guidance for platforms and auditors.
The Board looks forward to working with interested organisations and experts on systemic risk assessments and mitigation.
When Kenya last hosted a major continental sporting event – the All-Africa Games in 1987 – many of the country’s current sports journalists hadn’t even been born. Some were toddlers, others only dreams in their parents’ minds.
Now, nearly four decades later, a new generation of reporters will live their own moment of history as Kenya co-hosts the TotalEnergies CAF African Nations Championship (CHAN) 2024, alongside neighbors Uganda and Tanzania.
For young and seasoned journalists alike, the month-long tournament is more than just another assignment – it’s a powerful symbol of pride, opportunity, and long-awaited fulfillment.
‘This is a big opportunity. It’s been a long time coming and is a great build-up to AFCON 2027,’ says Jeff Kinyanjui, veteran journalist and Head of Communications at the Football Kenya Federation (FKF).
‘This tournament offers local media personnel a platform to learn CAF operations and tournament coverage. It’s also a chance to interact with some of the continent’s brightest football talents – many of whom could go on to become world-class stars.’
Kenya came close to hosting major football events twice before – AFCON 1996 and CHAN 2018 – but both were lost due to unpreparedness. CHAN 2024 now marks a redemptive moment not only for Kenyan football but also for its growing media ecosystem.
‘It’s not just about journalism,’ Kinyanjui adds. ‘It’s about capacity building for the whole East African region and fostering continental connections. It has been a tough learning curve for most of us, beginning with the accreditation processes like accessing the media channel and all.
‘For most of our colleagues, this is their first time experiencing this. We have learnt a lot even before the first ball is kicked on the field.’
Veteran broadcaster Daniel Wahome, now a Senior Editor at the Kenya Broadcasting Corporation (KBC), recalls listening to the 1987 All-Africa Games on a radio at his grandfather’s homestead. CHAN 2024 completes a full-circle moment for him.
‘It’s an incredible honour. Not just to witness this moment but to help shape it. This is also our chance to tell the East African story – our culture, our people – and share it with the world,’ Wahome reflects.
Veteran editor Chris Mbaisi, former president of the Sports Journalists Association of Kenya (SJAK), has reported from the Olympics and Commonwealth Games. But even with over 25 years in the profession, he feels CHAN 2024 strikes a special chord.
‘There’s nothing quite like covering a tournament your own country is hosting. It’s personal. It’s a moment of pride – and a chance to tell Kenya’s story our way,’ Mbaisi says.
For 25-year-old digital media reporter Tabitha Makumi, CHAN 2024 is more than just a tournament – it’s a career milestone.
‘It feels surreal to walk into a stadium and cover these games as a journalist,’ she says. ‘This is a chance to showcase young Kenyan talent and tell Africa’s football story to the world. It also allows us to exchange ideas with colleagues from across the continent. For example, I get to learn how a journalist from Nigeria connects with their audience, and share how we do it here and so on.
‘It’s a beautiful blend of storytelling cultures. Over and beyond, it will be a proud moment for me to put it in my CV that I was part of this CHAN.’
CAF has accredited over 250 Kenyan journalists, with nearly 800 media professionals expected to cover the tournament across the three host nations.
For Kenya’s media, CHAN 2024 represents more than football – it’s a generational leap, a cultural statement, and a shared moment of African unity.
The new Code, published in a Special Issue of the Kenya Gazette Supplement No. 70 on 14 May 2025 (Legislative Supplement No. 40) by the Cabinet Secretary for Information, Communications and the Digital Economy, William Kabogo, updates the Second Schedule to the Media Council Act, 2013, effectively replacing the Code of Conduct for the Practice of Journalism in Kenya.
This revision addresses the shortcomings of the former Code and follows a High Court ruling that declared the Broadcasting Code unconstitutional, ordering the Media Council of Kenya (MCK) to establish age-appropriate standards to protect children and vulnerable groups.
The new Code tackles the challenges of our changing media landscape, setting firm guidelines for ethical AI use, safeguarding children and vulnerable individuals, promoting responsible user-generated content and ensuring principled editorial conduct.
The Council lauds the National Assembly’s approval and accession to this crucial document, and its subsequent confirmation by the Clerk of the National Assembly. This is a defining moment for media regulation, professionalism and the unyielding defence of press freedom in the country.
The ratification of this Code is a testimony and clarion call for progress. It demands accountability from the media and welcomes critique from the government, fostering trust and mutual respect.
Furthermore, it will streamline dispute resolution, ensure swift and fair handling of complaints while upholding professional integrity.
We also applaud the media community for their unwavering commitment to this transformative Code. This Code, shaped through extensive consultation across the media, legal, academic and civil society sectors, is a pact to uphold the highest journalistic standards.
The MCK reaffirms its unwavering commitment to fostering a media landscape that upholds the highest standards of integrity and serves the public. This is the dawn of a new era for ethical, fearless and impactful journalism in Kenya.
The Media Institute of Southern Africa has submitted its insights to the African Commission on Human and Peoples’ Rights (ACHPR) Public Consultation on Freedom of Expression and Artificial Intelligence (AI).
In its submissions, MISA emphasised the need for governments to establish AI legal frameworks rooted in international human rights law, incorporating transparency, accountability, data security, and clear redress mechanisms.
As a key media freedom and digital rights advocate, the organisation recognises that the transformative power of AI will directly shape the future of journalism, alter the information landscape, and impact the right to freedom of expression.
To safeguard fundamental human rights, essential safeguards and measures, including mandated human oversight, must be incorporated in the entire life cycle of all AI systems that impact these rights.
In its submissions, MISA highlighted key concerns, including, among others:
that Generative Artificial Intelligence (GenAI) has amplified misinformation, blurring the lines between truth and fiction, and that there is a risk that AI may influence editorial independence or journalistic decisions
that deepfakes can be used for political manipulation, character assassination or to incite violence
that the digital divide and resource gaps remain significant challenges in most African countries, where a lack of affordable and reliable internet connectivity hinders citizens from realising the full potential of emerging technologies
that most AI systems currently in use are predominantly trained on Western datasets, which inherently carry biases that often lead to discrimination against specific segments of African populations and misrepresent African contexts
the potential for State control and censorship, which can lead to increased surveillance (e.g., facial recognition) including social media monitoring, to track journalists and ordinary citizens, which often results in self-censorship
the dominance of big tech companies, which control most AI models, which leads to the decline of smaller media outlets, with the monetisation and exploitation of data by these companies often reflecting biases or commercial interests embedded in the AI models which, in turn, affects market dynamics by creating economic dependencies; media organisations become economically and structurally reliant on these platforms for traffic and advertising revenue, restricting their ability to maintain editorial independence, and
that most international AI instruments are non-binding in nature and fail to incorporate the perspectives of the Global South, which results in challenges in translating AI principles into practical policies
Way forward
Moving forward, governments must establish AI legal frameworks grounded in international human rights law, incorporating transparency, accountability, data security and clear mechanisms for redress.
AI systems must be designed inclusively, with input from various stakeholders, including marginalised groups, people with disabilities and other underrepresented voices. The media industry should also develop its own AI Policy or Code of Ethics, incorporating key best practices and clearly labelling all content that has been generated, augmented, or significantly altered by AI.
AI-generated content, particularly for news and informational purposes, must go through rigorous human review and editorial approval before publication.
AI systems should not influence editorial independence or journalistic decisions by making critical choices about content publication or editorial direction.
Strong policies should be implemented to close the digital gap, ensuring affordable and accessible internet, and enhancing digital literacy for marginalised communities.
There is a need to increase accountability on the utilisation of Universal Service Funds to stimulate infrastructure development. This will bridge and close the digital divide between urban and rural communities, serving as the backbone for localised AI development and deployment.
Finally, regional and global coordination is vital to harmonise AI development and translate AI principles into practical, enforceable policies through multi-stakeholder partnerships.
At the 5th Ordinary Session of the 6th Pan-African Parliament in Midrand, South Africa, legislators and experts placed Africa’s data sovereignty, AI governance and responsible digital innovation at the forefront of the continent’s transformation agenda, emphasising the need for urgent, African-led action to avoid becoming a ‘digital colony’ while harnessing the Fourth Industrial Revolution for inclusive development.
Hon. Behdja Lammali, (Algerian) Chairperson of the Committee on Transport, Industry, Communications, Energy, Science and Technology, opened the discussions, saying, ‘Africa continues to lag behind in digitalisation, innovation and AI adoption, risking long-term negative impacts on our continent and our people’.
‘We must align our strategies with Agenda 2063 to advance digital health, smart industrialisation, and responsible AI use while protecting privacy and personal data.’
Reflecting on the outcomes of the First Parliamentary Digital Summit, Hon. Lammali noted, ‘We covered critical areas including AI training, data protection and digital health, discussing the role of parliamentarians in advancing AI and policy harmonisation’.
She called on Member States to ‘develop model laws on AI, data protection and privacy aligned with Agenda 2063, and to ratify and domesticate the Malabo Convention to address emerging technologies, AI, cross-border data flows and cyber threats’.
She added that ‘Africa must build a secure, inclusive, sovereign digital and AI future that aligns with the Africa We Want under Agenda 2063, ensuring data protection, AI for development, local innovation and equitable benefits for all Africans, from North to South, East to West’
Prof. Mirjam van Reisen of Leiden and Tilburg Universities, who presented on ‘Building a continental framework for AI, Data Sovereignty and Responsible Digital Innovation’, highlighted the urgency for Africa to take ownership of its data.
‘Artificial Intelligence is now embedded in everyday tools and platforms and is essential for economic growth in Africa, with the potential to add $3 trillion to Africa’s economy by 2030,’ she said.
Van Reisen warned, ‘Africa risks losing control over its digital data, with it being exported for economic gain in the United States, China and Europe without African oversight’.
She continued, ‘Controlling data is essential for controlling AI tools and protecting African interests – current centralised models of data storage and AI development reinforce inequality’.
She drew on Africa’s traditions, saying, ‘Just as traditional communities gathered under trees to find solutions, Africa now needs decentralised data systems through decentralised web and edge computing to build sovereignty over AI’.
Van Reisen underlined, ‘Africa should become the first continent fully data sovereign, using African data and legacy to shape “African Intelligence” for AI, avoiding digital colonialism while leveraging AI for African-led growth and problem-solving’.
‘AI is transforming healthcare, education, agriculture and public policy in Africa, but African data is often stored and processed outside the continent, risking misuse and loss of control,’ he said.
Mveyange explained, ‘AI models often rely on non-African datasets, leading to biases and poor applicability to African contexts’.
He urged African governments to ‘build legal, technical and governance frameworks to protect data and ensure it benefits African citizens’, emphasising ‘data is an economic resource, and African countries must prevent digital extractivism by global technology companies’.
He proposed adopting FAIR Data principles, making data ‘Findable, Accessible, Interoperable and Reusable within African contexts’, while investing in African data scientists, AI engineers and ethical AI governance structures.
Mveyange called on the Pan-African Parliament and the African Union to ‘facilitate dialogue across governments, civil society, academia and the private sector to develop harmonised policies’, and to position Africa as ‘a leader in ethical, responsible and people-centred AI’.
He concluded, ‘The time is now to build an Africa-led, responsible AI ecosystem to drive economic growth, improve health outcomes and foster inclusive development across the continent’.
Gregory Isaacson, AI expert from AgridroneAfrica, showcased practical pathways for AI implementation, focusing on food security and agricultural modernisation through African data sovereignty. He described a pilot model using drones and AI to boost yields and market efficiency via apps in farmers’ own languages.
Addressing the sovereignty aspect, Isaacson warned, ‘Current global AI models collect user data, raising privacy concerns … We propose local, solar-powered AI systems on farms that operate offline, store data locally, and prevent data leakage’.
The session at the Pan-African Parliament reaffirmed that while AI can transform healthcare, agriculture, education and governance, it must be rooted in African realities, be people-centred and respect local cultures, languages and community needs.
From calls to strengthen AI legal frameworks and expand local cloud infrastructure to proposals for cross-border research projects addressing healthcare and supply chain resilience, the speakers underscored the need for a unified, Africa-led approach that ensures AI benefits all Africans.
By aligning these initiatives with the African Union’s Agenda 2063, Africa can turn Artificial Intelligence into ‘African Intelligence’, ensuring it is ethical, inclusive and a driver of prosperity and resilience on the continent.
PAN-AFRICAN PARLIAMENT
PICTURE: Hon. Behdja Lammali, Chairperson of the Pan-African Parliament Committee on Transport, Industry, Communications, Energy, Science and Technology, opened the discussions (PAP)
Authorities in the Republic of the Congo and the Democratic Republic of Congo (DRC) must ensure the safety, respectively, of journalists Rosie Pioth and Sadam Kapanda wa Kapanda.
In Pioth’s case, this is following death threats for her reporting on the anniversary of the 1982 bombing of the Maya-Maya International Airport in the Republic of the Congo’s capital, Brazzaville. In Kapanda’s case, this is following death threats related to his coverage of the National Fund for the Repair of Victims of Sexual Violence and Crimes against Peace and Security of Humanity (FONAREV) in the DRC.
Angela Quintal, the Committee to Protect Journalists’ Africa Regional Director, said from New York that ‘the authorities of the Republic of the Congo must urgently investigate the threats against journalist Rosie Pioth and ensure she can continue her work without the looming possibility of being killed’.
‘Many journalists working in the Republic of the Congo self-censor out of fear of reprisal, and the possibility that these threats will go without adequate response may only entrench those fears.’
Pioth, correspondent for the French government-owned outlet France 24 and director of the news site Fact Check Congo, published an article on 17 July, the anniversary of the bombing, which detailed how, after 43 years, victims’ families continue to demand justice and compensation.
Pioth emphasised how the story of the bombing had been ‘erased’ with ‘No monuments. No textbooks. No national day. No public mention of this tragedy’. At the end of the report, she also announced intentions to publish further investigations on the bombing, which killed nine, and its aftermath.
The day after the article was published, unidentified individuals called and messaged death threats to Pioth, urging her to stop reporting about the bombing, according to Pioth and CPJ’s review of the messages. Pioth said her husband also received threatening messages directed at her.
‘[A]re you the one encouraging your wife towards media provocations? You have 72 hours to decide to stop your publications. I am watching all your movements, and the unpredictable is not far away, dear infiltrator,’ read one of the messages sent to her husband.
Pioth told CPJ that she went into hiding after the threats and intended to file a complaint with the prosecutor’s office in Brazzaville. The local professional association Journalism and Ethics Congo (JEC) also called for her protection.
CPJ’s calls and questions sent via messaging app to a Republic of the Congo government spokesperson and Minister of Communication and Media Thierry Moungalla did not receive a reply.
Kapanda received death threats from at least two local officials and two unidentified callers for his reporting on FONAREV for the privately owned broadcaster, Notre Chaîne de Radio and the Identitenews site.
Established by the government in 2022, FONAREV has worked in response to the Kamuina Nsapu rebellion that erupted in August 2016 in Kasaï province, which killed thousands and displaced millions. Kapanda’s reporting has alleged fraud, manipulation and nepotism by FONAREV Regional Coordinator Myrhant Mulumba, as Kapanda uncovered the identities of victims of the Kamuina Nsapu militias.
‘Journalists in the DRC too regularly face threats and intimidation from public officials. Authorities must investigate the death threats against journalist Sadam Kapanda wa Kapanda and ensure his safety,’ said Quintal.
‘Reporting on matters of public interest, especially amid conflict, is essential for those with power to be held accountable and for the public to be informed about issues and actors that affect their lives.’
In separate calls and messages on 2 July 2025, Mulumba and Kasaï provincial Minister of the Interior Peter Tshisuaka threatened to kill Kapanda if he did not halt his critical coverage of the fund, according to the journalist and messages reviewed by CPJ. Kapanda said that Mulumba also offered him a job with the fund if he agreed to stop criticising their operations, which Kapanda refused.
Tshisuaka responded to CPJ’s request for comment by messaging app saying that, ‘The journalist does his job, and I do my job too, Kapanda should look for work elsewhere’.
A third, unknown caller on 2 July threatened to have Kapanda killed, Kapanda told CPJ. On 9 July, Kapanda said he received an additional death threat from an unidentified caller.
Around 2am on 15 July, two unidentified armed men arrived at Kapanda’s home and sought to enter, but fled when his neighbours began shouting, the journalist told CPJ. On 16 and 17 July, Kapanda received further death threats via phone calls and messages, copies of which CPJ reviewed.
Kapanda told CPJ that he was unaware of police having opened an investigation into the threats.
CPJ’s calls and messages to Mulumba went unanswered.