Many journalists are using artificial intelligence, but best practices have not been widely codified, and those that do exist are limited in what they address, judging by a briefing from the Center for News Technology & Innovation (CNTI).
“Newsroom policies on new technologies tend to emphasize principles and values but do not often offer practical guidance,” CNTI states. “It would be valuable for policies to include more detail on algorithms and systems in addition to outputs, and to lay out considerations for working with third-party tools.”
The study, based on a review of 30 research articles, notes that subtle biases may be built into third-party tools, and that newsrooms are not well equipped to address them.
“In particular, guidelines for procurement are rarely addressed, even though the tools underlying algorithms may subtly influence media
organizations’ editorial decisions,” the study continues.
So what do they advise?
“The
newsrooms that do have AI policies share a similar approach, prioritizing transparency about the use of AI, human supervision of AI tools and human verification of outputs,” the study says.
“However, few of these guidelines operationalize these priorities concretely or include clear oversight mechanisms.”
The study advises newsrooms
to “include people with different personal and professional backgrounds to ensure guidelines address a broad range of use cases and impacts.”
For those who are new to this discussion, CNTI quotes the OECD’s definition of AI: an “AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, or virtual environments.”