Posted: 6 May 2025
Author: Hlengiwe Dube
Centre for Human Rights, University of Pretoria
On 3 May 2025, the world observed World Press Freedom Day. This annual commemoration is a reminder of the important role that free, independent media plays in protecting democracy, transparency, and human rights. It is a day for governments to reaffirm their obligation to safeguard press freedom, for journalists and media professionals to reflect on ethical responsibilities, and for the public to honour the many courageous media practitioners who have risked or lost their lives in the pursuit of truth. In 2025, the theme of World Press Freedom Day is as urgent as it is visionary: Reporting in the Brave New World – The Impact of Artificial Intelligence on Press Freedom and the Media. The theme acknowledges the profound and accelerating impact of Artificial Intelligence (AI) on the field of journalism. As AI tools become more deeply integrated into the production, distribution, and consumption of news, this transformation brings with it both groundbreaking opportunities and critical challenges that demand global attention.
AI’s Impact on Press Freedom
AI is already reshaping journalism in ways previously unimaginable. From generative text to automated translation, from algorithmic curation to real-time content moderation, AI technologies are changing how news is created, shared, and experienced. AI can generate quick news summaries and personalised newsfeeds, and assist in multilingual content distribution. Tools like OpenAI’s ChatGPT and Google’s Gemini are being experimented with in newsrooms globally. While these tools offer efficiency and scale, they also risk reinforcing biases and reducing editorial control over content prioritisation. Automated fact-checking systems, such as Full Fact’s tools and services built on the ClaimReview markup standard, can help detect misinformation more rapidly. However, the same technology is used to create synthetic content, such as deepfakes, which complicates the public’s ability to distinguish truth from falsehood.
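For context, ClaimReview is a schema.org vocabulary that lets fact-checkers publish their verdicts in machine-readable form so that search engines and automated tools can surface them alongside the claims they address. A minimal sketch of such markup, built here as a Python dictionary with entirely invented values, might look like this:

```python
import json

# A minimal, hypothetical ClaimReview record (all values invented).
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "datePublished": "2025-05-03",
    "url": "https://example-factchecker.org/checks/water-claim",
    "claimReviewed": "The city cut its water budget by 50% in 2024.",
    "author": {"@type": "Organization", "name": "Example Fact-Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 2,      # verdict on the scale defined below
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Mostly false",
    },
}

# Serialised as JSON-LD, this can be embedded in a fact-check article's HTML.
print(json.dumps(claim_review, indent=2))
```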
Thus, as AI tools become increasingly affordable and widely available, their integration into newsrooms and content platforms is transforming the way information is curated and consumed. Algorithms now play a critical role in determining which news stories are promoted, sidelined, or personalised for specific audiences. While this has improved efficiency and user engagement, it also raises significant ethical and societal concerns. A major issue stems from the datasets used to train these AI systems. When these datasets contain historical or systemic biases, such as the underrepresentation of marginalised groups or skewed narratives, those same biases can be learned and perpetuated by the algorithm. This can result in biased news coverage, reinforcement of stereotypes, and a narrowing of the viewpoints available to the public.
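To make the feedback loop concrete, here is a deliberately simplified sketch (all outlet names and click counts are invented) of a popularity-based recommender that inherits, and then amplifies, the skew in its training log:

```python
from collections import Counter

# Hypothetical historical click log in which a minority-language
# outlet is heavily underrepresented.
click_log = ["OutletA"] * 80 + ["OutletB"] * 15 + ["MinorityOutlet"] * 5
popularity = Counter(click_log)

def recommend(top_n: int = 2) -> list[str]:
    """A naive 'most popular first' ranker simply replays historical skew."""
    return [outlet for outlet, _ in popularity.most_common(top_n)]

# Feedback loop: recommended outlets gain exposure, exposure begets clicks,
# and the gap between dominant and marginal voices widens each round.
for round_number in range(1, 4):
    for outlet in recommend():
        popularity[outlet] += 10
    print(round_number, popularity.most_common())
# MinorityOutlet never reaches the top slots, so it never gains clicks.
```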
Compounding the challenge is the opacity surrounding how AI models make editorial decisions. Most algorithms used by tech platforms and news aggregators operate as black boxes (systems whose internal logic is hidden or difficult to interpret, even by their creators), meaning there is little visibility into how they prioritise, suppress, or recommend content. This transparency deficit poses a threat to editorial independence, as decisions traditionally made by human editors are increasingly shaped by commercial or opaque algorithmic logic. It also undermines media pluralism by potentially amplifying dominant voices while marginalising dissenting or minority perspectives.
In recognition of these risks, policymakers in Africa should consider legislative frameworks aimed at enhancing accountability and transparency in AI-driven systems. They could, for instance, draw inspiration from the proposed United States Algorithmic Accountability Act, which would require companies to assess the impacts of their automated decision-making systems, especially those that significantly affect individuals’ rights or access to information. Similarly, the European Union’s AI Act includes provisions to regulate high-risk AI systems, including those used in media and information services. The AI Act classifies AI systems into four risk categories: unacceptable, high, limited, and minimal. High-risk AI systems are those that pose significant risks to health, safety, or fundamental rights and freedoms, and they are subject to strict compliance requirements before being placed on the market or put into service.
In this regard, it is essential for news organisations, technology platforms, and regulators to work collaboratively to ensure transparency, fairness, and accountability in AI-driven content delivery. Without meaningful oversight, there is a real danger that these tools could entrench inequality and erode the foundations of democratic discourse. With careful design and public oversight, however, AI can support rather than suppress free journalism.
Surveillance and Data Privacy
AI-powered surveillance systems are playing an increasingly prominent role in the monitoring of individuals, particularly journalists, activists, and dissidents, especially in authoritarian regimes. These systems leverage facial recognition, predictive analytics, and behavioural pattern analysis to track individuals online and offline, often without their knowledge or consent. The use of such technologies by governments raises significant concerns regarding human rights, freedom of expression, and the right to privacy. According to reports by Privacy International and Access Now, AI-driven surveillance tools have been deployed in various regions to suppress dissent and stifle independent journalism. These tools are often embedded in broader digital repression strategies, including targeted malware attacks, mobile phone surveillance, and the use of spyware like Pegasus to intercept calls, messages, and location data. Privacy International has highlighted the increasing use of surveillance tech that exploits AI to identify, categorise, and monitor individuals in real time. Their report, “Track, Capture, Kill: Inside Communications Surveillance and Counterterrorism in Kenya”, illustrates how such systems are being misused under the guise of national security. Access Now has documented numerous cases where journalists and human rights defenders have been targeted through advanced surveillance operations. Their #KeepItOn campaign and digital rights reports show how surveillance technology enables censorship and physical threats against reporters.
This trend is not limited to authoritarian governments. There are increasing concerns in democratic societies about the use of AI in policing and public surveillance, often without sufficient oversight or legal safeguards. For example, the European Digital Rights (EDRi) network has criticised proposals for mass biometric surveillance across the EU, warning that such practices violate the EU Charter of Fundamental Rights. The deployment of AI surveillance technologies without adequate regulation or transparency threatens to erode press freedom globally. It can also discourage whistleblowers and sources from engaging with journalists, undermining the role of the media as a watchdog.
Global advocacy groups and civil society organisations are increasingly calling for international frameworks to protect journalists’ data and communications. In response to the growing misuse of AI surveillance technologies, particularly facial recognition and predictive analytics, human rights organisations, legal scholars, and civil society networks have put forward several urgent policy proposals aimed at protecting journalists, activists, and the public at large. A key demand from digital rights advocates is the imposition of a global moratorium on the use of facial recognition technologies (FRT) in public spaces, especially for surveillance purposes. These systems often lack transparency, are prone to racial and gender bias, and have been deployed without public consent or oversight.
The United Nations High Commissioner for Human Rights has recommended that governments halt the use of AI systems that pose a serious risk to human rights, including facial recognition in public areas, until adequate safeguards are in place. Similarly, Amnesty International has called for a total ban on facial recognition for mass surveillance, arguing that its use violates the rights to privacy, freedom of assembly, and expression. Their global campaign, “Ban the Scan”, has documented its impact in several cities. Various organisations are also advocating for a new binding international treaty that guarantees privacy, freedom of expression, and protection against unlawful surveillance in the digital environment. These proposals reflect a growing consensus that unchecked AI surveillance not only threatens individual rights but also weakens democratic institutions. Effective governance therefore requires multilateral action, technological transparency, and strong protections embedded in national and international legal frameworks.
Job Automation and Labour Displacement
Newsrooms are increasingly relying on automation for routine tasks like financial reporting, sports updates, and transcription. While this can enhance productivity, it also raises ethical concerns about newsroom downsizing and the long-term sustainability of human journalism. The JournalismAI project at the London School of Economics is actively exploring how AI can be integrated without displacing core journalistic functions. In Africa, however, job automation and labour displacement in newsrooms is emerging but not yet as widespread or deeply entrenched as in more technologically advanced regions. African newsrooms are beginning to experiment with AI for automated transcription (Otter.ai, Google tools), content scheduling and social media automation, and fact-checking assistance, while also confronting questions of algorithmic bias and editorial control. Concerns about displacement in Africa are therefore prospective but real: many journalists and other media practitioners fear that, as AI tools become cheaper and more accessible, editorial and administrative jobs may be lost.
Opportunities
When developed and applied responsibly, AI has the potential to be a powerful ally for journalism and freedom of expression. It can support investigative reporting and enhance accessibility and newsroom efficiency, thereby helping newsrooms meet the demands of a fast-changing information ecosystem, particularly in under-resourced contexts such as many African countries. AI can accelerate and enhance investigative journalism by enabling reporters to analyse large volumes of data with precision. Using AI, journalists can comb through leaked documents, analyse satellite imagery, and detect anomalies in public records. Projects like The AIJO Project and The Markup exemplify this intersection of AI and journalism. They use data science and algorithmic auditing to uncover corporate malfeasance, algorithmic discrimination, and other critical issues that might otherwise remain invisible.
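As a toy illustration of anomaly detection in public records, the sketch below uses invented vendor names and amounts and a deliberately crude rule of thumb, not the method of any particular project, to flag procurement payments that deviate sharply from an agency’s typical spend:

```python
import statistics

# Hypothetical procurement ledger: (vendor, payment amount in local currency).
payments = [
    ("Acme Supplies", 12_000), ("Acme Supplies", 11_500),
    ("RoadWorks Ltd", 13_200), ("PrintCo", 12_700),
    ("RoadWorks Ltd", 12_900), ("Shell Vendor X", 95_000),
]

amounts = [amount for _, amount in payments]
median = statistics.median(amounts)

# Crude rule of thumb: flag any payment more than three times the median
# as a lead worth a reporter's attention (a prompt, not proof of wrongdoing).
for vendor, amount in payments:
    if amount > 3 * median:
        print(f"Lead: {vendor} received {amount:,}, vs. median {median:,.0f}")
```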
AI can also facilitate inclusion by breaking language and accessibility barriers. For instance, real-time translation and text-to-speech technologies enable media outlets to reach multilingual communities and visually impaired audiences, expanding the reach of public-interest journalism. Initiatives such as Google’s Project Relate and open-source voice tech like Mozilla’s Common Voice offer promising accessibility solutions. Project Relate is an innovative application designed to assist individuals with non-standard speech, such as those affected by conditions like dysarthria, cerebral palsy, or muscular dystrophy, in communicating more effectively. By leveraging machine learning, the app personalises speech recognition to better understand unique speech patterns, facilitating clearer interactions in daily life. Tools of this kind could be especially valuable to newsrooms in the African context, where AI uptake is still limited.
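A minimal sketch of the translation-plus-speech pipeline described above, assuming the open-source Hugging Face transformers and gTTS packages are installed; the model name is a publicly available English-to-Swahili checkpoint, the headline is invented, and language support should be verified for any production use:

```python
from transformers import pipeline
from gtts import gTTS

# Translate an English headline into Swahili with an open-source model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-sw")
headline = "Parliament passes the new access-to-information bill"
swahili = translator(headline)[0]["translation_text"]

# Convert the translated text to speech for visually impaired audiences.
gTTS(text=swahili, lang="sw").save("headline_sw.mp3")
print("Saved audio for:", swahili)
```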
AI tools are already streamlining newsroom operations by automating time-consuming tasks, surfacing real-time insights, and enabling faster editorial decisions. This efficiency frees journalists to focus on high-value work such as investigative reporting, in-depth storytelling, verification, and audience engagement, which are the core functions of a free press. Tools like Otter.ai, NewsWhip, and Dataminr are already widely used for transcription, real-time insights, and early warning.
Otter.ai is a widely used AI-powered transcription tool that enables journalists to convert interviews, press conferences, and meetings into searchable, editable text in real time. This is particularly valuable for fast-paced environments and long-form journalism, where precise recall of dialogue is essential. With Otter’s ability to differentiate speakers, generate summaries, and integrate with Zoom, reporters can quickly process information and focus more on analysis than note-taking.

NewsWhip provides real-time data on how stories are being engaged with across social media platforms. Newsrooms use it to anticipate trends, discover emerging topics, and measure the potential impact of a news story before committing resources. By surfacing the content that resonates with audiences, and where it is gaining traction, NewsWhip helps editorial teams prioritise their coverage and align with public interest without sacrificing journalistic judgment.

Dataminr is a real-time information discovery platform used by major news organisations to detect breaking news events before they reach mainstream awareness. It uses AI to analyse public data from social media, blogs, and other open-source platforms to alert journalists about unfolding events, from natural disasters to political unrest. This capability helps newsrooms respond quickly to time-sensitive stories and allocate reporters accordingly.
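Otter.ai itself is proprietary, but newsrooms wanting a self-hosted equivalent can get a comparable workflow from the open-source Whisper model. A minimal sketch, assuming the openai-whisper package is installed and an interview recording exists at the hypothetical path shown:

```python
import whisper

# Load a small, CPU-friendly checkpoint and transcribe a local recording.
model = whisper.load_model("base")
result = model.transcribe("interview.mp3")  # hypothetical file path

# Full transcript as plain text.
print(result["text"])

# Whisper also returns timestamped segments, useful for quoting accurately.
for segment in result["segments"][:3]:
    print(f'[{segment["start"]:.1f}s to {segment["end"]:.1f}s] {segment["text"]}')
```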
While these tools offer major productivity gains, their use also raises important questions about editorial independence, data privacy, and verification. For example, relying too heavily on AI-generated signals of newsworthiness could inadvertently prioritise virality over impact. To address this, media organisations should develop AI ethics frameworks and editorial oversight protocols to ensure these tools augment, and do not replace, human judgment. This also creates opportunities for collaborative efforts between news organisations, academics, and tech companies in shaping ethical frameworks for responsible AI use. Cases in point are the Partnership on AI and AI4Media, which explore standards for transparency, bias mitigation, and algorithmic accountability in media contexts. AI offers real opportunities to amplify the impact, reach, and integrity of journalism, but only if guided by strong ethical foundations and inclusive design. The tools and projects described above represent a blueprint for what is possible when technology and journalism evolve together in service of the public interest.
Call to Action
As we mark World Press Freedom Day 2025, key stakeholders should respond with urgency and foresight, particularly in the following ways:
- Governments must protect press freedom in digital environments by enacting laws that ensure the ethical use of AI and prohibit its deployment for censorship or journalist surveillance.
- Technology companies must prioritise transparency in their AI systems, particularly those affecting content moderation and recommendation. They should include journalists in conversations around AI development and commit to principles of algorithmic accountability.
- Media organisations must invest in AI literacy and equip their newsrooms with the skills to evaluate and ethically use AI tools. Editorial oversight must remain central to decision-making, even in AI-assisted environments.
- Civil society and academia must amplify voices from underserved communities, advocate for equitable access to AI tools, and monitor the social impacts of AI on media freedoms.
- Journalists must continue to lead with courage and integrity, using new technologies to enhance, rather than replace, their craft, while holding power to account in the age of AI.
Conclusion
Although the path ahead is complex, it is also filled with promise. Just as journalism has adapted to every previous technological revolution, it can rise to meet the challenges and opportunities presented by artificial intelligence. What matters most is that we navigate this transformation grounded in the enduring principles of truth, fairness, accountability, and freedom. We must never forget the journalists who have lost their lives in pursuit of facts, or the many more who continue to report under threat and restriction. We owe it to them, and to the societies they serve, to ensure that technology becomes a tool for liberation, not for control. In this new world shaped by artificial intelligence, the truth is more urgent than ever. As algorithms increasingly shape how information is produced and consumed, we must ensure they do not silence the truth-tellers. As journalists stand up for truth, the world must stand with them, to stop the threats, the lies, and the attacks on those who dare to tell it. As we commemorate World Press Freedom Day 2025, let us recommit to these core values and shape a future where journalism not only survives, but thrives, in the age of AI.
About the Author:
Hlengiwe Dube is an expert on information rights, including freedom of expression, access to information, and data protection, complemented by strong expertise in the intersection of technology and human rights. She is finalising her doctoral studies focusing on the complex dynamics of state surveillance in the context of human rights and public security. She is based at the Centre for Human Rights, University of Pretoria, as Project Manager of the Expression, Information and Digital Rights Unit, overseeing initiatives that span freedom of expression, access to information, data protection, elections, digital rights, and related themes at the nexus of democracy, technology, and human rights. Hlengiwe also provides technical assistance to the African Commission on Human and Peoples’ Rights (ACHPR) special mechanisms on digital and information rights issues. She holds a Master’s Degree in Human Rights and Democratisation in Africa, further underpinning her depth of knowledge and commitment to advancing human rights discourse, particularly in the digital age.