AI can be harnessed to help journalists do their jobs more efficiently, but it is also poised to further pollute the information landscape, panelists at the opening Town Hall of the 2023 IPI World Congress and Media Innovation Festival said. They assessed the risks AI poses to independent journalism, and to the information landscape more broadly, and underscored the need for governments to put strong human rights safeguards in place.

The opening session of this year’s IPI World Congress gathered top experts and journalists to tackle one of the most pressing questions facing the media industry today: how AI and new technologies will affect journalism practices and our news and information environments. The discussion was co-organized with the Office of the OSCE Representative on Freedom of the Media and moderated by David Kaye, a professor of law at the University of California, Irvine, and former U.N. Special Rapporteur on freedom of expression.

Teresa Ribeiro, the OSCE Representative on Freedom of the Media, kicked off the Town Hall by underscoring that challenges to media freedom are not detached from political, social, and technological realities. “Digital technologies and artificial intelligence have drastically transformed the way that information is produced, disseminated, accessed, and consumed, and all of this has a direct and significant impact on societies, social cohesion, and democratic processes”, she said. She highlighted the need to center policies on human rights and the public interest to ensure healthier information spaces and promote independent media in the age of AI.


Panelists agreed that AI has the potential to support newsgathering – for instance, by processing large data sets in a way that would not have been possible before. Andrian Kreye, editor-at-large at Germany’s Süddeutsche Zeitung, recounted how journalists relied on machine learning to sort through the 2.6 terabytes of files in the Panama Papers investigation.

But the wider risks are also clear. Julia Angwin, contributing writer at The New York Times, noted that while journalists employ machine learning to support their ultimate goal of accurate reporting, the goal of AI bots is “to create the plausible feeling of a sentence”. She continued, “There is a lot of work that journalists need to do to understand how to deal with this, because I think this is unfortunately going to flood the information landscape with a lot of plausible sounding words that [will] probably pollute the landscape even more than it has been right now.”

These risks have naturally prompted debates about AI regulation. Dunja Mijatović, the Council of Europe Commissioner for Human Rights, spoke about the CoE’s work with member states to strengthen AI oversight and regulation in order to safeguard human rights, including freedom of expression and access to information. Protecting these latter rights, she said, “is the ultimate goal for all of us, journalists or not, because if access to independent, truthful information is jeopardized by AI systems, then we are going to embrace an era that is definitely not something we want to see”. She added that greater media literacy around the world is critical, and that national human rights institutions must be equipped with literacy in AI and new digital technologies to help regulate their impact on people.

In practice, Angwin said that although the U.S. is far behind on regulation, Europe’s regulatory efforts, particularly the Digital Services Act, may serve as a model by placing the burden on tech companies to conduct human rights risk assessments and mitigate the risks they identify.



Siddharth Varadarajan, editor of India’s The Wire news site, was more skeptical about the role of governments in regulating AI, pointing to how the Indian government has abused online regulation to tighten control over free speech. “We are in a world where governments exert extraordinary influence on big tech”, he said, and those companies too often comply with government demands to censor content. He stressed the importance of training journalists and the public to spot fake content and to develop defenses against it. Varadarajan also warned against the malevolent use of technology, including by powerful political actors, who have already weaponized social media in India.

Amid a rise in disinformation and declining public trust in institutions, there is a risk that AI technology could be leveraged to undermine the credibility and integrity of journalists. Angwin underscored the importance of trust, describing it as “the most important thing that we need to solve as journalists, that’s what we sell to the audiences, they need to trust us”. She also stressed that journalists must be transparent about their methodology to secure trust. “You have to show your work, these days I don’t think the brand name is enough to engender trust. You have to show your work, you have to prove what you did.”
