Publishers may wonder what is driving the high level of staff paranoia about generative AI.
Guardrails about AI use are increasingly being written into labor contracts, but what are
journalists and unions really looking for?
There are some answers in a study conducted by two academic researchers: Mike Ananny, an associate professor of communication and journalism at the University of Southern California Annenberg School, and Jake Karr, the acting director of New York University’s Technology Law and Policy Clinic.
Ananny and Karr studied 50 union sources from 2022 to 2024, and determined that these dynamics are at play in organized labor’s approach to generative AI:
- Unions agree that publishers have the power to initiate generative AI experiments and to control AI’s use and adoption.
- Employees do not trust publishers’ GenAI plans because of what they see as a widespread lack of transparency.
- Unions seek to restore trust by demanding more transparency in procurement, licensing deals and other issues.
- Unions insist that the humanity of workers is key to quality journalism. They say publishers should trust their staffs’ opinions about when and how to use generative AI.
- While unions are understandably worried about the impact of generative AI on jobs, they also say that generative AI is “inherently unaccountable and unreliable in ways that are antithetical to the values of journalism.”
- Unions believe contractual guardrails are needed to stabilize generative AI, but they also think that worker action alone cannot change how publishers use it.
Publishers don’t have to guess about these issues: they will be loud and clear during contract negotiations.
Ananny and Karr’s wrap-up of the study appears in NiemanLab.