
Citing a variety of factors related to artificial intelligence -- misinformation, disinformation, media literacy, media credibility, and increasing media polarization and journalistic partisanship -- the Society of Professional Journalists (SPJ) this morning announced its ethics committee has begun reviewing the issues and will make recommendations to update its code of ethics, which will mark its 100th anniversary next year.
"It's becoming increasingly difficult to distinguish between images, videos and words that have been created with AI and those that haven't. And there's a critical need for more guidance on the ethics of using AI to do journalism," Committee Chairman Dan Axelrod tells "Media 3.0."
The current code, which was last revised in 2014, can be reviewed here.
For what it's worth, we at MediaPost have been continually revising our own policies on the use of artificial intelligence (AI), including how we use it for research, analytics, image generation, and suggestions for writing copy, as well as for publishing AI-generated copy. Our current standard: explicit disclosure.
That said, I asked an AI agent -- Gemini 2.5 Pro -- to suggest revisions to the SPJ code and to generate an image for this column (above) based on the society's press release. Here's what it recommends:
The most effective way to update the code is to build upon its existing, time-tested principles rather than creating a separate, siloed "AI policy."
1. Seek Truth and Report It
The core principle of verification must be explicitly extended to AI-generated content.
Current Guideline: "Take responsibility for the accuracy of their work. Verify information before releasing it. Use original sources whenever possible."
Recommended Update: Add language specifying that AI-generated content is not an original source. Journalists must apply the same (or greater) level of rigorous fact-checking to information produced by an AI as they would to any unverified human source.
Recommended Update: Add a clause to guard against "hallucinations" and synthetic media. Journalists should be required to use verification tools to detect deepfakes and to be skeptical of any AI-generated information that cannot be independently corroborated by a verifiable source.
2. Minimize Harm
This pillar must be updated to include the specific harms that biased algorithms can inflict.
Current Guideline: "Balance the public’s need for information against potential harm or discomfort... Avoid pandering to lurid curiosity... Be cautious about identifying juvenile suspects or victims of sex crimes."
Recommended Update: Add language requiring journalists to interrogate their AI tools for potential bias. This includes considering the training data and whether the tool's output could perpetuate stereotypes or unfairly target vulnerable communities.
Recommended Update: Broaden the concept of "harm" to include the privacy implications of AI-driven data analysis. Journalists should be transparent about using AI to analyze large datasets and ensure such methods don't unjustly compromise individual privacy.
3. Act Independently
This principle must now cover independence from the tools themselves, not just sources or advertisers.
Current Guideline: "Avoid conflicts of interest, real or perceived. Disclose unavoidable conflicts."
Recommended Update: Add a clause on maintaining human editorial judgment. The code should state that while AI can be used for assistance, final editorial decisions must rest with human journalists. Journalists must avoid ceding editorial control to algorithms.
Recommended Update: Clarify that an over-reliance on a single AI platform or provider can constitute a perceived conflict of interest. Journalists should disclose the significant use of proprietary AI tools that may influence the framing or creation of a story.
4. Be Accountable and Transparent
This is arguably the most critical area for an AI-related update. The public must be able to trust how the news is made.
Current Guideline: "Explain ethical choices and processes to audiences. Encourage a civil dialogue with the public about journalistic practices..."
Recommended Update: Mandate clear and consistent labeling of AI-generated content. The code should require publications to prominently disclose when an article, image, or video was created or significantly assisted by AI. This is not just a byline issue; it's a fundamental matter of transparency.
Recommended Update: Reaffirm that accountability remains 100% human. The code must state that the journalist and the publication are fully responsible for all content, regardless of whether it was drafted by a human or an AI. An "AI error" is a publishing error.
By embedding these AI-specific principles within the code's existing framework, the SPJ can provide clear, actionable guidance that addresses new technology while reinforcing the timeless values of ethical journalism.