Readers are comfortable with AI use in journalism, but only up to a point.
Their comfort levels depend on factors like “where in the production process AI is used, the level of human oversight, if any, that is involved, the extent to which AI is being used to represent ‘real life’ through photorealistic imagery and video,” according to a new global study from RMIT University, Generative AI and Journalism: Content, Journalistic Perceptions, and Audience Experiences.
Readers are more accepting of AI use when it is transparent.
For instance, only 28.3% feel good about journalists using AI to create b-roll (text-to-video). But this jumps to 60.3% if the use is disclosed, if the b-roll depicts generic content, or if the b-roll doesn’t show people. Another 39.6% were uncomfortable, citing possible job losses or inaccuracy.
And a mere 6.45% feel comfortable with news being delivered by a virtual presenter. That rises to 9.6% if the AI use is disclosed.
Meanwhile, 70.3% are comfortable with the use of AI to generate 3D models, rising to 90.7% if the models are accurate. But 9.2% are wary due to concerns about labor or output accuracy.
In general, readers are most comfortable with AI tasks such as resizing images, generating color palettes, and editing video using an AI-generated transcript, and least comfortable with generating a virtual presenter and using generative expand on images that include a person.
Here's one caveat: only 25% were sure they had encountered generative AI in journalism, according to a report on the study in The Conversation. Another 50% were unsure, or suspected they had.
The report was authored by T. J. Thomson, R. J. Thomas, M. Riedlinger, and P. Matich. It brings together six research activities conducted from 2022 through 2024 in Australia, Germany, the USA, the UK, Norway, Switzerland, and France.