Commentary

AI Statecraft: Top News Policy Concerns For Regulators


Transparency and privacy are the leading issues related to AI and news media worldwide, judging by Journalism’s New Frontier, a study from the Center for News, Technology & Innovation (CNTI).

But freedom of expression ranks at the bottom of the issues addressed by regulators, which cannot be reassuring to publishers and journalists. 

CNTI examined 188 AI policy proposals across the globe and found they addressed: 

  • Transparency and accountability of AI—124
  • Data protection and privacy—107
  • Algorithmic discrimination and bias—92
  • Public information and awareness about AI—76
  • Manipulated or synthetic content—64
  • Intellectual property and copyright—49
  • Freedom of speech and expression—19

The high ranking of transparency and accountability is “a sign of the challenge of accountability in the opaqueness of many AI systems and the weight that privacy issues have held in recent technology governance,” the study notes. 

In addition, only 20 of the documents mentioned such terms as “journalist,” “journalism,” “news,” “media” or “news media.” Five of these mentions were in North America. 

Should regulators use these words more often? Not necessarily: those terms can be weaponized by governments against news media. 

In any case, policies don’t have to mention journalism by name to affect it. CNTI determined that when policies recognize freedom of speech and expression, the implications for journalism are generally positive.

But it depends on the country. There were “no mentions of freedom of speech in Middle East or North African countries.” This, the study says, is “consistent with their low press freedom rankings.”

Even when countries address these issues, a certain vagueness persists. Argentina has a bill prohibiting AI use that violates “fundamental human rights such as privacy, freedom of expression, equality or human dignity.” But that language is rather broad.

And the Dominican Republic has a bill that would ban the use of deepfakes to alter videos, with prison terms for violators. Journalism is not exempted, and reporters could be dissuaded from using AI for legitimate purposes, like creating an avatar to protect a source.

In the U.S., states, but not the federal government, are focused on the labeling and disclosure of AI-generated content.

The study offers three areas in need of policy attention (and we quote):

  1. In AI proposals that address manipulated content, it is important that policymakers work towards methods that protect certain journalistic uses in ways that do not enable government censorship or determination of who is or is not a journalist.
  2. Bias audits and transparency measures are best implemented before a tool is deployed.
  3. Policymakers should ensure that AI working groups include journalism producers, product teams and engineers, alongside AI technologists, researchers, civil society and other relevant stakeholders. 

The full study can be read here. 